JARVIS
This project is under construction and we will have all the code ready soon.
Update
[2023.04.06] We added the Gradio demo and built the web API for /tasks and /results in server mode.
The Gradio demo is now hosted on Hugging Face Space. (Built with inference_mode=hybrid and local_deployment=standard.)
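The two settings above are configuration keys. A config excerpt matching them might look like the following (the key names come from the text; the surrounding file layout is an assumption, not the project's exact config file):

```yaml
# Hypothetical excerpt of the server config used for the hosted demo.
inference_mode: hybrid        # mix local endpoints and remote inference
local_deployment: standard    # scale of locally deployed models
```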
The web API endpoints /tasks and /results access intermediate results: Stage #1 (task planning) and Stages #1-3 (model selection with execution results), respectively. See here
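As a sketch of how a client might call these endpoints: the helper below builds a chat-style JSON body for POST /tasks. The field names and the server address are assumptions for illustration, not the confirmed request schema.

```python
import json

def build_tasks_request(user_message):
    """Build a JSON body for POST /tasks in server mode.

    The {"messages": [{"role": ..., "content": ...}]} shape is an assumed
    chat-style payload, not the project's documented schema.
    """
    return json.dumps({"messages": [{"role": "user", "content": user_message}]})

# To send it (host/port are placeholders for wherever the server runs):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8004/tasks",
#     data=build_tasks_request("show me a cat image").encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read())
```

The same pattern would apply to /results, which additionally returns the selected models and their execution outputs.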
[2023.04.03] We added the CLI mode and provided parameters for configuring the scale of local endpoints.
You can enjoy a lightweight experience with JARVIS without deploying the models locally. See here
Just run python awesome_chat.py --config lite.yaml to try it.
[2023.04.01] We released an updated version of the code.
Overview
Language serves as an interface for LLMs to connect numerous AI models for solving
complicated AI tasks!
See our paper: HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face, by Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang.
We introduce a collaborative system that consists of an LLM as the controller and numerous expert models as collaborative executors (from the Hugging Face Hub). The workflow of our system consists of four stages:
Task Planning: ChatGPT analyzes the user's request to understand their intention and disassembles it into possible solvable tasks.
Model Selection: To solve the planned tasks, ChatGPT selects expert models hosted on Hugging Face based on their descriptions.
Task Execution: Each selected model is invoked and executed, and its results are returned to ChatGPT.
Response Generation: Finally, ChatGPT integrates the predictions of all models and generates the response.
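The four stages above can be sketched as a short pipeline. All function and method names here are illustrative placeholders, not the project's actual API:

```python
def hugginggpt(user_request, llm, expert_models):
    """Sketch of the four-stage workflow; llm and expert_models are
    hypothetical objects standing in for ChatGPT and Hub models."""
    # Stage 1: Task Planning - the LLM decomposes the request into tasks.
    tasks = llm.plan(user_request)
    # Stage 2: Model Selection - pick an expert model per task by description.
    selected = {task: llm.select(task, expert_models) for task in tasks}
    # Stage 3: Task Execution - run each chosen model and collect its results.
    results = {task: model.run(task) for task, model in selected.items()}
    # Stage 4: Response Generation - the LLM integrates all predictions.
    return llm.generate_response(user_request, results)
```

The key design point is that the LLM never runs the expert models itself; it only plans, routes, and summarizes, while execution is delegated to the selected models.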
System Requirements
Default
Ubuntu 16.04 LTS
VRAM >= 12GB
RAM > 12GB (minimal), 16GB (standard), 42GB (full)
Disk > 78GB (with 42GB for damo-vilab/text-to-video-ms-1.7b)
Minimum
Ubuntu 16.04 LTS