https://github.com/microsoft/JARVIS

This project, JARVIS, is under construction. It uses an LLM such as ChatGPT as a controller to connect numerous AI models for solving complex tasks. The system works in four stages: task planning, model selection, task execution, and response generation. Recent updates include a Gradio demo and web API, as well as a CLI mode for a lightweight experience without local deployment. System requirements include Ubuntu 16.04 LTS, VRAM of at least 12GB, RAM over 12GB, and disk space over 78GB.


JARVIS

This project is under construction and we will have all the code ready soon.

Update
[2023.04.06] We added the Gradio demo and built the web API for /tasks and /results in server mode.
The Gradio demo is now hosted on Hugging Face Space. (Built with inference_mode=hybrid and local_deployment=standard.)
The web API /tasks and /results access intermediate results for Stage #1 (task planning) and Stages #1-3 (model selection with execution results). See here.
[2023.04.03] We added the CLI mode and provided parameters for configuring the scale of local endpoints.
You can enjoy a lightweight experience with JARVIS without deploying the models locally. See here.
Just run python awesome_chat.py --config lite.yaml to experience it.
[2023.04.01] We released an updated version of the code.
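The /tasks and /results endpoint names above come straight from the README, but the host, port, request payload, and response schema are not documented here. A hypothetical client sketch, with the response mocked so no server is needed:

```python
import json

# Hypothetical client sketch for the server-mode web API.
# Only the endpoint paths /tasks and /results appear in the README;
# the payload fields and response schema below are assumptions.

def build_tasks_request(messages):
    """Build the JSON body a /tasks request might carry (assumed shape)."""
    return json.dumps({"messages": messages})

def extract_planned_tasks(response_body):
    """Pull the task list out of an assumed /tasks response shape."""
    data = json.loads(response_body)
    return [t["task"] for t in data.get("tasks", [])]

# Mocked /tasks response (assumed schema), so the sketch runs offline:
sample = json.dumps({
    "tasks": [
        {"task": "image-classification", "id": 0, "dep": [-1]},
        {"task": "object-detection", "id": 1, "dep": [-1]},
    ]
})
print(extract_planned_tasks(sample))  # → ['image-classification', 'object-detection']
```

A real client would POST build_tasks_request(...) to the /tasks endpoint and poll /results for Stage #1-3 intermediate outputs.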

Overview

Language serves as an interface for LLMs to connect numerous AI models for solving
complicated AI tasks!

See our paper: HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face, by Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu and Yueting Zhuang.

We introduce a collaborative system that consists of an LLM as the controller and numerous expert models as collaborative executors (from the Hugging Face Hub). The workflow of our system consists of four stages:
Task Planning: ChatGPT analyzes the user's request to understand their intention and disassembles it into solvable tasks.
Model Selection: To solve the planned tasks, ChatGPT selects expert models hosted on Hugging Face based on their descriptions.
Task Execution: The system invokes and executes each selected model and returns the results to ChatGPT.
Response Generation: Finally, ChatGPT integrates the predictions of all models and generates a response.
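The four stages above can be sketched as a pipeline of stand-in functions. Every function body here is an illustrative stub (the task names and model IDs are examples, not the actual HuggingGPT prompts or registry):

```python
# Minimal sketch of the four-stage workflow. All bodies are stubs standing
# in for LLM calls and model inference -- not the real HuggingGPT code.

def plan_tasks(request: str) -> list[str]:
    # Stage 1: the LLM disassembles the request into solvable tasks.
    if "describe aloud" in request:
        return ["image-to-text", "text-to-speech"]
    return ["text-generation"]

def select_model(task: str) -> str:
    # Stage 2: the LLM picks an expert model by its Hub description.
    registry = {  # hypothetical task -> model mapping
        "image-to-text": "nlpconnect/vit-gpt2-image-captioning",
        "text-to-speech": "espnet/kan-bayashi_ljspeech_vits",
        "text-generation": "gpt2",
    }
    return registry[task]

def execute(task: str, model: str) -> str:
    # Stage 3: invoke the selected model and collect its result.
    return f"<output of {model} for {task}>"

def generate_response(request: str, results: list[str]) -> str:
    # Stage 4: the LLM integrates all predictions into one answer.
    return f"For '{request}': " + "; ".join(results)

def hugginggpt(request: str) -> str:
    tasks = plan_tasks(request)
    results = [execute(t, select_model(t)) for t in tasks]
    return generate_response(request, results)
```

Running hugginggpt("describe aloud this image") plans two tasks, routes each to a model, and merges both results into a single response.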

System Requirements

Default
Ubuntu 16.04 LTS
VRAM >= 12GB
RAM > 12GB (minimal), 16GB (standard), 42GB (full)
Disk > 78GB (with 42GB for damo-vilab/text-to-video-ms-1.7b)

Minimum
Ubuntu 16.04 LTS
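The config files referenced in the updates (e.g. lite.yaml) might look roughly like this. Only the parameter names inference_mode and local_deployment appear in this README; the values and any other keys are guesses:

```yaml
# Hypothetical sketch of a lite.yaml -- field values are assumptions.
inference_mode: huggingface   # rely on hosted inference instead of local models
local_deployment: minimal     # scale of locally deployed endpoints
```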