

santiatpml (Contributor)

No description provided.

@santiatpml requested a review from montanalow on August 11, 2023 at 22:38.
@levkk (Contributor) commented on Aug 13, 2023:

Really cool post! Kind of an eye opener for me for this use case.


With its knowledge base in place, now the chatbot links to models that allow natural conversations:

- Based on users' questions, querying the indexed chunks to rapidly pull the most relevant passages.
Review comment:
Should this be a numeric list?
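The excerpt above, together with the remaining steps quoted further down, outlines the chatbot's query, retrieval and generation flow. As a rough illustration only, here is a minimal sketch of that loop in Python; the `embed`, `search`, and `generate` callables are hypothetical placeholders, not the actual `pgml-chat` or PostgresML SDK API.

```python
from typing import Callable

def answer(
    question: str,
    embed: Callable[[str], list[float]],               # hypothetical: text -> embedding
    search: Callable[[list[float], int], list[str]],   # hypothetical: embedding -> top passages
    generate: Callable[[str], str],                     # hypothetical: prompt -> completion
    top_k: int = 3,
) -> str:
    # 1. Embed the user's question and query the indexed chunks
    #    for the most relevant passages.
    query_embedding = embed(question)
    passages = search(query_embedding, top_k)

    # 2. Pass those passages to a model like GPT-3 to generate
    #    a conversational response.
    prompt = (
        "Answer the question using only the context below.\n\n"
        + "\n\n".join(passages)
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)

# 3. Orchestrating this per message, fast enough for real-time chat,
#    is the part the post focuses on.
```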


!!!

3. Copy the template file to `.env`
Review comment:
Could use `cp .env.template .env`.


!!! code_block

```bash
Review comment:
@chillenberger I'm struggling to remember why we need three levels of nesting in markdown, to represent every single code block. Seems like the default style for triple backticks should "handle it".

@santiatpml requested a review from montanalow on August 15, 2023 at 23:59.
@montanalow (Contributor) left a comment:
🚀



# Introduction
Language models like GPT-3 seem really intelligent at first, but they have a huge blindspot - no external knowledge or memory. Ask them about current events or niche topics and they just can't keep up. To be truly useful in real applications, these large language models (LLMs) need knowledge added to them somehow. The trick is getting them that knowledge fast enough to have natural conversations. Open source tools like LangChain try to help by giving language models more context and knowledge. But they end up glueing together different services into a complex patchwork. This leads to a lot of infrastructure overhead, maintenance needs, and slow response times that hurt chatbot performance. We need a better solution tailored specifically for chatbots to inject knowledge in a way that's fast, relevant and integrated.
Review comment:
Suggested change: use "GPT-4" instead of "GPT-3", and "LangChain and LlamaIndex" instead of "LangChain".


In the first part of this blog series, we will talk about deploying a chatbot using `pgml-chat` command line tool. In the second part, we will show how `pgml-chat` works under the hood and focus on achieving low-latencies.
Review comment:
Suggested change: "deploying a chatbot using `pgml-chat` command line tool" should read "deploying a chatbot using the `pgml-chat` command line tool".

2. Passing those passages to a model like GPT-3 to generate conversational responses.
3. Orchestrating the query, retrieval and generation flow to enable real-time chat.

## 3. Evaluating and Fine-tuning chatbot
Review comment:
Suggested change: "## 3. Evaluating and Fine-tuning chatbot" should read "## 3. Evaluating and Fine-tuning the chatbot".



Chatbot needs to be evaluated and fine-tuned before it can be deployed to the real world. This involves:
Review comment:
Suggested change: "Chatbot needs to be evaluated and fine-tuned" should read "The chatbot needs to be evaluated and fine-tuned".

@santiatpml merged commit 22e9b4e into master on Aug 16, 2023.
@santiatpml deleted the santi-pgml-chat-blog branch on August 16, 2023 at 19:43.
kczimm pushed a commit that referenced this pull request Aug 21, 2023
SilasMarvin pushed a commit that referenced this pull request Oct 5, 2023