Commit 2521a43

Added new examples for JavaScript (#953)
1 parent eafc5b8

File tree

9 files changed: +193 -18 lines changed

Lines changed: 10 additions & 4 deletions

@@ -1,7 +1,13 @@
-## Javascript Examples
+## Examples
 
-Here we have a set of examples of different use cases of the pgml javascript SDK.
+### [Semantic Search](./semantic_search.js)
+This is a basic example to perform semantic search on a collection of documents. Embeddings are created using the `intfloat/e5-small` model. The results are semantically similar documents to the query. Finally, the collection is archived.
 
-## Examples:
+### [Question Answering](./question_answering.js)
+This is an example to find documents relevant to a question from the collection of documents. The query is passed to vector search to retrieve documents that match closely in the embedding space. A score is returned with each search result.
 
-1. [Getting Started](./getting-started/) - Simple project that uses the pgml SDK to create a collection, add a pipeline, upsert documents, and run a vector search on the collection.
+### [Question Answering using the Instructor Model](./question_answering_instructor.js)
+In this example, we will use the `hkunlp/instructor-base` model to build text embeddings instead of the default `intfloat/e5-small` model.
+
+### [Extractive Question Answering](./extractive_question_answering.js)
+In this example, we will show how to use the `vector_recall` result as a `context` for a Hugging Face question answering model. We will use `Builtins.transform()` to run the model on the database.
Lines changed: 62 additions & 0 deletions

@@ -0,0 +1,62 @@
const pgml = require("pgml");
require("dotenv").config();

pgml.js_init_logger();

const main = async () => {
  // Initialize the collection
  const collection = pgml.newCollection("my_javascript_eqa_collection_2");

  // Add a pipeline
  const model = pgml.newModel();
  const splitter = pgml.newSplitter();
  const pipeline = pgml.newPipeline(
    "my_javascript_eqa_pipeline_1",
    model,
    splitter,
  );
  await collection.add_pipeline(pipeline);

  // Upsert documents; these documents are automatically split into chunks and embedded by our pipeline
  const documents = [
    {
      id: "Document One",
      text: "PostgresML is the best tool for machine learning applications!",
    },
    {
      id: "Document Two",
      text: "PostgresML is open source and available to everyone!",
    },
  ];
  await collection.upsert_documents(documents);

  const query = "What is the best tool for machine learning?";

  // Perform vector search
  const queryResults = await collection
    .query()
    .vector_recall(query, pipeline)
    .limit(1)
    .fetch_all();

  // Construct context from results
  const context = queryResults
    .map((result) => {
      return result[1];
    })
    .join("\n");

  // Query for answer
  const builtins = pgml.newBuiltins();
  const answer = await builtins.transform("question-answering", [
    JSON.stringify({ question: query, context: context }),
  ]);

  // Archive the collection
  await collection.archive();
  return answer;
};

main().then((results) => {
  console.log("Question answer: \n", results);
});
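The context-building and payload steps above can be sketched in isolation with mock data. The similarity scores and tuples below are invented for illustration; the real values come from `fetch_all()`:

```javascript
// Mock of what fetch_all() returns: [similarity, text, metadata] tuples.
// These values are made up for illustration.
const queryResults = [
  [0.91, "PostgresML is the best tool for machine learning applications!", {}],
  [0.85, "PostgresML is open source and available to everyone!", {}],
];

// Join the retrieved chunk texts into a single context string,
// exactly as the example above does
const context = queryResults.map((result) => result[1]).join("\n");

// The payload handed to builtins.transform() is a JSON string
// carrying the question and the retrieved context
const query = "What is the best tool for machine learning?";
const payload = JSON.stringify({ question: query, context: context });

console.log(payload);
```

This shows the shape of the `question-answering` input without needing a database connection.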

pgml-sdks/rust/pgml/javascript/examples/getting-started/README.md

Lines changed: 0 additions & 12 deletions
This file was deleted.
Lines changed: 55 additions & 0 deletions

@@ -0,0 +1,55 @@
const pgml = require("pgml");
require("dotenv").config();

const main = async () => {
  // Initialize the collection
  const collection = pgml.newCollection("my_javascript_qa_collection");

  // Add a pipeline
  const model = pgml.newModel();
  const splitter = pgml.newSplitter();
  const pipeline = pgml.newPipeline(
    "my_javascript_qa_pipeline",
    model,
    splitter,
  );
  await collection.add_pipeline(pipeline);

  // Upsert documents; these documents are automatically split into chunks and embedded by our pipeline
  const documents = [
    {
      id: "Document One",
      text: "PostgresML is the best tool for machine learning applications!",
    },
    {
      id: "Document Two",
      text: "PostgresML is open source and available to everyone!",
    },
  ];
  await collection.upsert_documents(documents);

  // Perform vector search
  const queryResults = await collection
    .query()
    .vector_recall("What is the best tool for machine learning?", pipeline)
    .limit(1)
    .fetch_all();

  // Convert the results to an array of objects
  const results = queryResults.map((result) => {
    const [similarity, text, metadata] = result;
    return {
      similarity,
      text,
      metadata,
    };
  });

  // Archive the collection
  await collection.archive();
  return results;
};

main().then((results) => {
  console.log("Vector search Results: \n", results);
});
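The mapping step above is plain JavaScript; with mock tuples in place of a live `fetch_all()` call (the score and metadata are invented for illustration), it behaves like this:

```javascript
// Mock of fetch_all() output: [similarity, text, metadata] tuples.
// The number and metadata here are invented for illustration.
const queryResults = [
  [
    0.92,
    "PostgresML is the best tool for machine learning applications!",
    { id: "Document One" },
  ],
];

// Destructure each tuple into an object with named fields
const results = queryResults.map((result) => {
  const [similarity, text, metadata] = result;
  return { similarity, text, metadata };
});

console.log(results);
```

Returning named fields instead of positional tuples makes downstream code (logging, ranking, display) easier to read.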
Lines changed: 60 additions & 0 deletions

@@ -0,0 +1,60 @@
const pgml = require("pgml");
require("dotenv").config();

const main = async () => {
  // Initialize the collection
  const collection = pgml.newCollection("my_javascript_qai_collection");

  // Add a pipeline
  const model = pgml.newModel("hkunlp/instructor-base", "pgml", {
    instruction: "Represent the Wikipedia document for retrieval: ",
  });
  const splitter = pgml.newSplitter();
  const pipeline = pgml.newPipeline(
    "my_javascript_qai_pipeline",
    model,
    splitter,
  );
  await collection.add_pipeline(pipeline);

  // Upsert documents; these documents are automatically split into chunks and embedded by our pipeline
  const documents = [
    {
      id: "Document One",
      text: "PostgresML is the best tool for machine learning applications!",
    },
    {
      id: "Document Two",
      text: "PostgresML is open source and available to everyone!",
    },
  ];
  await collection.upsert_documents(documents);

  // Perform vector search
  const queryResults = await collection
    .query()
    .vector_recall("What is the best tool for machine learning?", pipeline, {
      instruction:
        "Represent the Wikipedia question for retrieving supporting documents: ",
    })
    .limit(1)
    .fetch_all();

  // Convert the results to an array of objects
  const results = queryResults.map((result) => {
    const [similarity, text, metadata] = result;
    return {
      similarity,
      text,
      metadata,
    };
  });

  // Archive the collection
  await collection.archive();
  return results;
};

main().then((results) => {
  console.log("Vector search Results: \n", results);
});

pgml-sdks/rust/pgml/javascript/examples/getting-started/index.js renamed to pgml-sdks/rust/pgml/javascript/examples/semantic_search.js

Lines changed: 5 additions & 1 deletion

@@ -27,7 +27,10 @@ const main = async () => {
   // Perform vector search
   const queryResults = await collection
     .query()
-    .vector_recall("Some user query that will match document one first", pipeline)
+    .vector_recall(
+      "Some user query that will match document one first",
+      pipeline,
+    )
     .limit(2)
     .fetch_all();

@@ -41,6 +44,7 @@ const main = async () => {
     };
   });

+  // Archive the collection
   await collection.archive();
   return results;
 };

pgml-sdks/rust/pgml/python/examples/README.md

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 ## Examples
 
 ### [Semantic Search](./semantic_search.py)
-This is a basic example to perform semantic search on a collection of documents. It loads the Quora dataset, creates a collection in a PostgreSQL database, upserts documents, generates chunks and embeddings, and then performs a vector search on a query. Embeddings are created using `intfloat/e5-small` model. The results are are semantically similar documemts to the query. Finally, the collection is archived.
+This is a basic example to perform semantic search on a collection of documents. It loads the Quora dataset, creates a collection in a PostgreSQL database, upserts documents, generates chunks and embeddings, and then performs a vector search on a query. Embeddings are created using the `intfloat/e5-small` model. The results are semantically similar documents to the query. Finally, the collection is archived.
 
 ### [Question Answering](./question_answering.py)
 This is an example to find documents relevant to a question from the collection of documents. It loads the Stanford Question Answering Dataset (SQuAD) into the database and generates chunks and embeddings. The query is passed to vector search to retrieve documents that match closely in the embedding space. A score is returned with each search result.

0 commit comments