With Symfony AI also comes the AI Platform Component, whose goal is to provide an abstraction layer over inference providers and different models - for example, switching from Llama running on Azure to Llama running with Ollama.
Ollama is a popular and well-adopted tool that enables developers to run models locally with a streamlined API, and the AI Platform aims to support it - even if that support needs to be extended.
The idea is to provide a wrapper in the Symfony CLI that eases the installation and handling of Ollama while working on a Symfony project using AI, similar to the existing Composer and PHP wrappers.
This could look like:
symfony ollama run llama3.2
See https://github.com/ollama/ollama/blob/main/README.md#cli-reference
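To illustrate the idea, here is a sketch of how such a wrapper could behave, assuming it manages the Ollama binary and forwards subcommands the same way the Composer wrapper does. Note that the `symfony ollama` command does not exist yet; every invocation below is hypothetical, while `pull`, `run`, and `serve` are real Ollama subcommands from the CLI reference linked above:

```shell
# Hypothetical behavior: the Symfony CLI would download an Ollama release
# if none is installed, then forward the remaining arguments to it.
symfony ollama pull llama3.2    # would forward to: ollama pull llama3.2
symfony ollama run llama3.2     # would forward to: ollama run llama3.2
symfony ollama serve            # would forward to: ollama serve
```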