
Inference provider for vLLM #2886

@jeffmaury

Description

Is your feature request related to a problem? Please describe

Allow running models with vLLM.

Describe the solution you'd like

Add a new inference server for vLLM.
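
For context, vLLM serves an OpenAI-compatible HTTP API (e.g. via `vllm serve <model>`), so a new provider could reuse the same request shape that existing OpenAI-style providers use. Below is a minimal client sketch, assuming a vLLM server is already running locally on port 8000; the base URL and model name are illustrative placeholders, not part of this request.

```ts
// Minimal sketch: query a locally running vLLM OpenAI-compatible server.
// Assumptions (not from the issue): the server was started with something
// like `vllm serve <model>` and listens on http://localhost:8000; the
// model name below is a placeholder for whatever model was loaded.
const BASE_URL = "http://localhost:8000/v1";

async function main(): Promise<void> {
  const response = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "placeholder-model", // must match the model the server loaded
      messages: [{ role: "user", content: "Hello from a vLLM client" }],
    }),
  });
  if (!response.ok) {
    throw new Error(`vLLM server returned ${response.status}`);
  }
  const completion = await response.json();
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```

Because vLLM's API surface matches OpenAI's, most of the provider work would likely be lifecycle management (pulling the image, starting and stopping the server) rather than implementing a new client protocol.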

Describe alternatives you've considered

No response

Additional context

No response
