diff --git a/samples/llama_index_quick_start.ipynb b/samples/llama_index_quick_start.ipynb new file mode 100644 index 0000000..4f3cebd --- /dev/null +++ b/samples/llama_index_quick_start.ipynb @@ -0,0 +1,1087 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "upi2EY4L9ei3" + }, + "outputs": [], + "source": [ + "# Copyright 2025 Google LLC\n", + "#\n", + "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "# you may not use this file except in compliance with the License.\n", + "# You may obtain a copy of the License at\n", + "#\n", + "# https://www.apache.org/licenses/LICENSE-2.0\n", + "#\n", + "# Unless required by applicable law or agreed to in writing, software\n", + "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "# See the License for the specific language governing permissions and\n", + "# limitations under the License." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "mbF2F2miAT4a" + }, + "source": [ + "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/googleapis/llama-index-cloud-sql-pg-python/blob/main/samples/llama_index_quick_start.ipynb)\n", + "\n", + "---\n", + "# Introduction\n", + "\n", + "In this codelab, you'll learn how to create a powerful interactive generative AI application using Retrieval Augmented Generation powered by [Cloud SQL for PostgreSQL](https://cloud.google.com/sql/docs/postgres) and [LlamaIndex](https://www.llamaindex.ai/). We will be creating an application grounded in a [Netflix Movie dataset](https://www.kaggle.com/datasets/shivamb/netflix-shows), allowing you to interact with movie data in exciting new ways." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Ma6pEng3ypbA" + }, + "source": [ + "## Prerequisites\n", + "\n", + "* A basic understanding of the Google Cloud Console\n", + "* Basic skills with the command-line interface and Google Cloud Shell\n", + "* Basic Python knowledge" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "DzDgqJHgysy1" + }, + "source": [ + "## What you'll learn\n", + "\n", + "* How to deploy a Cloud SQL for PostgreSQL instance\n", + "* How to use Cloud SQL for PostgreSQL as a Document Reader\n", + "* How to use Cloud SQL for PostgreSQL as a Vector Store\n", + "* How to use Cloud SQL for PostgreSQL as a Document Store with an Index Store\n", + "* How to use Cloud SQL for PostgreSQL as a Chat Store" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "FbcZUjT1yvTq" + }, + "source": [ + "## What you'll need\n", + "\n", + "* A Google Cloud Account and Google Cloud Project\n", + "* A web browser such as [Chrome](https://www.google.com/chrome/)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "vHdR4fF3vLWA" + }, + "source": [ + "# Setup and Requirements\n", + "\n", + "In the following instructions you will learn to:\n", + "\n", + "1. Install required dependencies for our application\n", + "2. Set up authentication for our project\n", + "3. Set up a Cloud SQL for PostgreSQL Instance\n", + "4. Import the data used by our application" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "uy9KqgPQ4GBi" + }, + "source": [ + "## Install dependencies\n", + "First, you will need to install the dependencies needed to run this demo app."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "M_ppDxYf4Gqs" + }, + "outputs": [], + "source": [ + "%pip install llama-index-cloud-sql-pg llama-index llama-index-embeddings-vertex llama-index-llms-vertex cloud-sql-python-connector[pg8000]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Colab only:** Run the following cell to restart the kernel, or use the button to restart the kernel. For Vertex AI Workbench you can restart the terminal using the button on top." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Automatically restart kernel after installs so that your environment can access the new packages\n", + "import IPython\n", + "\n", + "app = IPython.Application.instance()\n", + "app.kernel.do_shutdown(True)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "DeUbHclxw7_l" + }, + "source": [ + "## Authenticate to Google Cloud within Colab\n", + "In order to access your Google Cloud Project from this notebook, you will need to authenticate as an IAM user." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "_Q9hyqdyEx6l" + }, + "outputs": [], + "source": [ + "from google.colab import auth\n", + "\n", + "auth.authenticate_user()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "UCiNGP1Qxd6x" + }, + "source": [ + "## Connect Your Google Cloud Project" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "SLUGlG6UE2CK" + }, + "outputs": [], + "source": [ + "# @markdown Please fill in the value below with your GCP project ID and then run the cell.\n", + "\n", + "# Please fill in these values.\n", + "project_id = \"\" # @param {type:\"string\"}\n", + "\n", + "# Quick input validations.\n", + "assert project_id, \"⚠️ Please provide a Google Cloud project ID\"\n", + "\n", + "# Configure gcloud.\n", + "!gcloud config set project {project_id}" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "O-oqMC5Ox-ZM" + }, + "source": [ + "## Configure Your Google Cloud Project\n", + "\n", + "Configure the following in your Google Cloud Project." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "X-bzfFb4A-xK" + }, + "source": [ + "You will need to enable these APIs in order to use `VertexTextEmbedding` as an embedding service!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. IAM principal (user, service account, etc.) with the [Cloud SQL Client][client-role] role. The user logged into this notebook will be used as the IAM principal and will be granted the Cloud SQL Client role.\n", + "\n", + "[client-role]: https://cloud.google.com/sql/docs/mysql/roles-and-permissions" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "current_user = !gcloud auth list --filter=status:ACTIVE --format=\"value(account)\"\n", + "!gcloud projects add-iam-policy-binding {project_id} \\\n", + " --member=user:{current_user[0]} \\\n", + " --role=\"roles/cloudsql.client\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "2. Enable the APIs for Cloud SQL and Vertex AI within your project."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "CKWrwyfzyTwH" + }, + "outputs": [], + "source": [ + "# Enable GCP services\n", + "!gcloud services enable sqladmin.googleapis.com aiplatform.googleapis.com" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Gn8g7-wCyZU6" + }, + "source": [ + "## Set up Cloud SQL\n", + "You will need a **Postgres** Cloud SQL instance for the following stages of this notebook. Please set the following variables." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# @markdown Please fill in both the Google Cloud region and the name of your Cloud SQL instance. Once filled in, run the cell.\n", + "\n", + "# Please fill in these values.\n", + "region = \"us-central1\" # @param {type:\"string\"}\n", + "instance_name = \"llamaindex-quickstart-instance\" # @param {type:\"string\"}\n", + "database_name = \"llamaindex-quickstart-db\" # @param {type:\"string\"}\n", + "user = \"postgres\" # @param {type:\"string\"}\n", + "password = input(\"Please provide a password to be used for 'postgres' database user: \")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "T616pEOUygYQ" + }, + "source": [ + "### Create a Postgres Instance\n", + "Running the cell below will verify the existence of the Cloud SQL instance, or create a new instance if one does not exist.\n", + "\n", + "> ⏳ - Creating a Cloud SQL instance may take a few minutes." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "XXI1uUu3y8gc" + }, + "outputs": [], + "source": [ + "# check if Cloud SQL instance exists in the provided region\n", + "database_version = !gcloud sql instances describe {instance_name} --format=\"value(databaseVersion)\"\n", + "if database_version and database_version[0].startswith(\"POSTGRES\"):\n", + " print(\"Found existing Postgres Cloud SQL Instance!\")\n", + "else:\n", + " print(\"Creating new Cloud SQL instance...\")\n", + " !gcloud sql instances create {instance_name} --database-version=POSTGRES_15 \\\n", + " --region={region} --cpu=1 --memory=4GB --root-password={password} \\\n", + " --database-flags=cloudsql.iam_authentication=On" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create a Database\n", + "\n", + "Next, you will create a database to store the data for this application." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "databases = !gcloud sql databases list --instance={instance_name} --format=\"value(name)\"\n", + "if database_name not in databases:\n", + " !gcloud sql databases create {database_name} --instance={instance_name}\n", + "else:\n", + " print(\"Found existing database!\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Connect to our New Database\n", + "\n", + "Now you will connect to your new database using the Cloud SQL Python Connector and a SQLAlchemy connection pool!"
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from google.cloud.sql.connector import Connector\n", + "import sqlalchemy\n", + "\n", + "# initialize Connector object\n", + "connector = Connector()\n", + "\n", + "\n", + "# function to return the database connection\n", + "def getconn():\n", + " conn = connector.connect(\n", + " f\"{project_id}:{region}:{instance_name}\",\n", + " \"pg8000\",\n", + " user=user,\n", + " password=password,\n", + " db=database_name,\n", + " )\n", + " return conn\n", + "\n", + "\n", + "# create connection pool\n", + "pool = sqlalchemy.create_engine(\n", + " \"postgresql+pg8000://\",\n", + " creator=getconn,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "HdolCWyatZmG" + }, + "source": [ + "## Import data to your database\n", + "\n", + "Now that you have your database, you will need to import data! We will be using a [Netflix Dataset from Kaggle](https://www.kaggle.com/datasets/shivamb/netflix-shows). Here is what the data looks like:" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "36-FBKzJ-tLa" + }, + "source": [ + "| show_id | type | title | director | cast | country | date_added | release_year | rating | duration | listed_in | description |\n", + "|---------|---------|----------------------|------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------|---------------|-------------------|--------------|--------|-----------|----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n", + "| s1 | Movie | Dick Johnson Is Dead | Kirsten Johnson | | United States | September 25, 2021 | 2020 | PG-13 | 90 min | Documentaries | As her father nears the end of his life, filmmaker Kirsten Johnson stages his death in inventive and comical ways to help them both face the inevitable. |\n", + "| s2 | TV Show | Blood & Water | | Ama Qamata, Khosi Ngema, Gail Mabalane, Thabang Molaba, Dillon Windvogel, Natasha Thahane, Arno Greeff, Xolile Tshabalala, Getmore Sithole, Cindy Mahlangu, Ryle De Morny, Greteli Fincham, Sello Maake Ka-Ncube, Odwa Gwanya, Mekaila Mathys, Sandi Schultz, Duane Williams, Shamilla Miller, Patrick Mofokeng | South Africa | September 24, 2021 | 2021 | TV-MA | 2 Seasons | International TV Shows, TV Dramas, TV Mysteries | After crossing paths at a party, a Cape Town teen sets out to prove whether a private-school swimming star is her sister who was abducted at birth. |\n", + "| s3 | TV Show | Ganglands | Julien Leclercq | Sami Bouajila, Tracy Gotoas, Samuel Jouy, Nabiha Akkari, Sofia Lesaffre, Salim Kechiouche, Noureddine Farihi, Geert Van Rampelberg, Bakary Diombera | | September 24, 2021 | 2021 | TV-MA | 1 Season | Crime TV Shows, International TV Shows, TV Action & Adventure | To protect his family from a powerful drug lord, skilled thief Mehdi and his expert team of robbers are pulled into a violent and deadly turf war. |\n", + "| s4 | TV Show | Jailbirds New Orleans | | | | September 24, 2021 | 2021 | TV-MA | 1 Season | Docuseries, Reality TV | Feuds, flirtations and toilet talk go down among the incarcerated women at the Orleans Justice Center in New Orleans on this gritty reality series. 
|\n", + "| s5 | TV Show | Kota Factory | | Mayur More, Jitendra Kumar, Ranjan Raj, Alam Khan, Ahsaas Channa, Revathi Pillai, Urvi Singh, Arun Kumar | India | September 24, 2021 | 2021 | TV-MA | 2 Seasons | International TV Shows, Romantic TV Shows, TV Comedies | In a city of coaching centers known to train India’s finest collegiate minds, an earnest but unexceptional student and his friends navigate campus life. |\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "kQ2KWsYI_Msa" + }, + "source": [ + "The following code has been prepared to help you insert the CSV data into your Cloud SQL for PostgreSQL database." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Dzr-2VZIkvtY" + }, + "source": [ + "Download the CSV file:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "5KkIQ2zSvQkN" + }, + "outputs": [], + "source": [ + "!gsutil cp gs://cloud-samples-data/llamaindex/netflix_titles.csv ." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "oFU13dCBlYHh" + }, + "source": [ + "You can verify the download with the following command or by using the \"Files\" tab." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "nQBs10I8vShh" + }, + "outputs": [], + "source": [ + "!ls" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "2H7rorG9Ivur" + }, + "source": [ + "In this next step you will:\n", + "\n", + "1. Create the table to store the data\n", + "2. Insert the data from the CSV file into the database table\n", + "\n", + "> To avoid costs, the following code uses only 100 rows as an example" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "qCsM2KXbdYiv" + }, + "outputs": [], + "source": [ + "import pandas as pd\n", + "import sqlalchemy\n", + "\n", + "create_table_cmd = sqlalchemy.text(\n", + " 'CREATE TABLE netflix_titles ( \\\n", + " show_id VARCHAR, \\\n", + " type VARCHAR, \\\n", + " title VARCHAR, \\\n", + " director VARCHAR, \\\n", + " \"cast\" VARCHAR, \\\n", + " country VARCHAR, \\\n", + " date_added VARCHAR, \\\n", + " release_year INTEGER, \\\n", + " rating VARCHAR, \\\n", + " duration VARCHAR, \\\n", + " listed_in VARCHAR, \\\n", + " description TEXT \\\n", + " )',\n", + ")\n", + "\n", + "netflix_data = \"/content/netflix_titles.csv\"\n", + "\n", + "df = pd.read_csv(netflix_data)\n", + "insert_data_cmd = sqlalchemy.text(\n", + " \"\"\"\n", + " INSERT INTO netflix_titles VALUES (:show_id, :type, :title, :director,\n", + " :cast, :country, :date_added, :release_year, :rating,\n", + " :duration, :listed_in, :description)\n", + " \"\"\"\n", + ")\n", + "\n", + "parameter_map = [\n", + " {\n", + " \"show_id\": row[\"show_id\"],\n", + " \"type\": row[\"type\"],\n", + " \"title\": row[\"title\"],\n", + " \"director\": row[\"director\"],\n", + " \"cast\": row[\"cast\"],\n", + " \"country\": row[\"country\"],\n", + " \"date_added\": row[\"date_added\"],\n", + " \"release_year\": row[\"release_year\"],\n", + " \"rating\": row[\"rating\"],\n", + " \"duration\": row[\"duration\"],\n", + " \"listed_in\": row[\"listed_in\"],\n", + " \"description\": row[\"description\"],\n", + " }\n", + " for index, row in df.head(100).iterrows() # limit to 100 rows\n", + "]\n", + "\n", + "with pool.connect() as db_conn:\n", + " db_conn.execute(create_table_cmd)\n", + " db_conn.execute(\n", + " insert_data_cmd,\n", + " parameter_map,\n", + " )\n", + " db_conn.commit()\n",
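+ " # Added sanity check (not in the original sample): confirm the import\n", + " # by counting the rows that were just inserted (should print 100)\n", + " row_count = db_conn.execute(\n", + " sqlalchemy.text(\"SELECT COUNT(*) FROM netflix_titles\")\n", + " ).scalar()\n", + " print(f\"Inserted {row_count} rows into netflix_titles\")\n",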
"source": [ + "#### Set LLM and embedding models globally for Llama Index components" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from llama_index.core import Settings\n", + "from llama_index.embeddings.vertex import VertexTextEmbedding\n", + "from llama_index.llms.vertex import Vertex\n", + "import google.auth\n", + "\n", + "credentials, _ = google.auth.default()\n", + "\n", + "Settings.embed_model = VertexTextEmbedding(\n", + " model_name=\"text-embedding-005\", project=project_id, credentials=credentials\n", + ")\n", + "Settings.llm = Vertex(model=\"gemini-2.0-flash-001\", project=project_id)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "SsGS80H04bDN" + }, + "source": [ + "## Use case 1: Cloud SQL for Postgres as a Document Reader\n", + "\n", + "---\n", + "\n", + "\n", + "\n", + "Now that you have data in your database, you are ready to use Cloud SQL for PostgreSQL as a [Document Reader](https://docs.llamaindex.ai/en/stable/module_guides/loading/connector/). This means you will pull data from the database and load it into memory as documents. These documents can be used to create a Vector Store." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-CQgPON8dwSK" + }, + "source": [ + "First, create a connection to your Cloud SQL for PostgreSQL instance using the `PostgresEngine` class." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "zrwTsWHMkQ_v" + }, + "outputs": [], + "source": [ + "from llama_index_cloud_sql_pg import PostgresEngine\n", + "\n", + "engine = PostgresEngine.from_instance(\n", + " project_id=project_id,\n", + " instance=instance_name,\n", + " region=region,\n", + " database=database_name,\n", + " user=user,\n", + " password=password,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "8s-C0P-Oee69" + }, + "source": [ + "The `PostgresReader` requires an `PostgresEngine` object to define the database connection and a `table_name` to define which data is to be retrieved. The `content_columns` argument can be used to define the columns that will be used as \"content\" in the document object we will later construct. The rest of the columns in that table will become the \"metadata\" associated with the documents." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "2SdFJT6Vece1" + }, + "outputs": [], + "source": [ + "from llama_index_cloud_sql_pg import PostgresReader\n", + "\n", + "\n", + "table_name = \"netflix_titles\"\n", + "content_columns = [\"title\", \"director\", \"cast\", \"description\"]\n", + "metadata_columns = [\n", + " \"show_id\",\n", + " \"type\",\n", + " \"country\",\n", + " \"date_added\",\n", + " \"release_year\",\n", + " \"rating\",\n", + " \"duration\",\n", + " \"listed_in\",\n", + "]\n", + "reader = PostgresReader.create_sync(\n", + " engine=engine,\n", + " table_name=table_name,\n", + " content_columns=content_columns,\n", + " metadata_columns=metadata_columns,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "dsL-KFrmfuS1" + }, + "source": [ + "Use method `load_data()` to pull documents from out database. You can see the documents from the database here." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "t4zTx-HLfwmW" + }, + "outputs": [], + "source": [ + "documents = reader.load_data()\n", + "print(f\"Loaded {len(documents)} from the database. 
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Nice, you just used Cloud SQL for Postgres as a Document Reader!" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "z9uLV3bs4noo" + }, + "source": [ + "## Use case 2: Cloud SQL for PostgreSQL as a Vector Store\n", + "\n", + "---\n", + "\n", + "\n", + "Now, you will learn how to put all of the documents into a [Vector Store](https://docs.llamaindex.ai/en/stable/module_guides/storing/vector_stores/) so that you can perform a vector search." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "jfH8oQJ945Ko" + }, + "source": [ + "### Create Your Vector Store table\n", + "\n", + "Create a Vector Store table that preserves the documents' metadata by using the `init_vector_store_table` method and defining specific metadata columns. The vector size is required. This example uses a vector size of `768`, which matches the length of the vectors computed by our embedding model, Vertex AI's `text-embedding-005`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "e_rmjywG47pv" + }, + "outputs": [], + "source": [ + "from llama_index_cloud_sql_pg import Column\n", + "\n", + "sample_vector_table_name = \"movie_vector_table_samples\"\n", + "engine.init_vector_store_table(\n", + " sample_vector_table_name,\n", + " vector_size=768,\n", + " metadata_columns=[\n", + " Column(\"show_id\", \"VARCHAR\", nullable=True),\n", + " Column(\"type\", \"VARCHAR\", nullable=True),\n", + " Column(\"country\", \"VARCHAR\", nullable=True),\n", + " Column(\"date_added\", \"VARCHAR\", nullable=True),\n", + " Column(\"release_year\", \"INTEGER\", nullable=True),\n", + " Column(\"rating\", \"VARCHAR\", nullable=True),\n", + " Column(\"duration\", \"VARCHAR\", nullable=True),\n", + " Column(\"listed_in\", \"VARCHAR\", nullable=True),\n", + " ],\n", + " overwrite_existing=True, # Enabling this will delete and recreate the table if it exists.\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "KG6rwEuJLNIo" + }, + "source": [ + "### Create the Vector Store instance\n", + "\n", + "Next, you will create a `PostgresVectorStore` object that connects to the new Cloud SQL database table to store the data from the documents." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Wo4-7EYCIFF9" + }, + "outputs": [], + "source": [ + "from llama_index_cloud_sql_pg import PostgresVectorStore\n", + "\n", + "vector_store = PostgresVectorStore.create_sync(\n", + " engine=engine,\n", + " table_name=sample_vector_table_name,\n", + " metadata_columns=[\n", + " \"show_id\",\n", + " \"type\",\n", + " \"country\",\n", + " \"date_added\",\n", + " \"release_year\",\n", + " \"rating\",\n", + " \"duration\",\n", + " \"listed_in\",\n", + " ],\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "fr1rP6KQ-8ag" + }, + "source": [ + "#### Create a LlamaIndex `Index`\n", + "\n", + "An `Index` allows us to quickly retrieve relevant context for a user query. Indexes are used to build `QueryEngines` and `ChatEngines`, through which a user can get answers to their queries.\n", + "For a list of indexes that can be built in LlamaIndex, see [Index Guide](https://docs.llamaindex.ai/en/stable/module_guides/indexing/index_guide/).\n", + "\n", + "A `VectorStoreIndex` can be built using the `PostgresVectorStore`. 
You can also use the `PostgresDocumentStore` and `PostgresIndexStore` to persist documents and index metadata. These modules can be used to build other `Indexes`.\n", + "For a detailed Python notebook on this, see [LlamaIndex Document Store Guide](https://github.com/googleapis/llama-index-cloud-sql-pg-python/blob/main/samples/llama_index_doc_store.ipynb).\n", + "\n", + "Now, add the documents to the vector table. The following code creates an embedding for each document and loads the documents into the vector store table." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "CTks8Cy--93B" + }, + "outputs": [], + "source": [ + "from llama_index.core import StorageContext, VectorStoreIndex\n", + "\n", + "storage_context = StorageContext.from_defaults(vector_store=vector_store)\n", + "index = VectorStoreIndex.from_documents(\n", + " documents, storage_context=storage_context, show_progress=True\n", + ")\n", + "\n", + "# If you have more documents later, you can insert them through the index,\n", + "# which uses the embedding service to embed each record:\n", + "# index.insert(another_document)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Query over indexed data\n", + "\n", + "A query engine takes in a natural language query and returns a rich response. It is built on top of an index; see the [Query Engine Guide](https://docs.llamaindex.ai/en/stable/module_guides/deploying/query_engine/).\n", + "\n", + "You can compose multiple query engines to achieve more advanced querying; see [Query Engine usage patterns](https://docs.llamaindex.ai/en/stable/module_guides/deploying/query_engine/usage_pattern/). A small configuration example follows the basic query below." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "query_engine = index.as_query_engine()\n", + "response = query_engine.query(\"List shows that are about teenagers\")\n", + "response.response" + ] + },
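+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The query engine above uses default settings. As an added sketch of the usage patterns linked above (not part of the original sample), you can tune retrieval, for example by raising `similarity_top_k`, and inspect the source nodes that grounded the answer:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Added sketch: configure retrieval and inspect the retrieved sources\n", + "query_engine = index.as_query_engine(similarity_top_k=5)\n", + "response = query_engine.query(\"List shows that are about teenagers\")\n", + "print(response.response)\n", + "for src in response.source_nodes:\n", + " # each source node carries a similarity score and the row's metadata\n", + " print(src.score, src.node.metadata.get(\"show_id\"), src.node.metadata.get(\"type\"))" + ] + },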
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Use case 3: Cloud SQL for PostgreSQL as a Document Store with an Index Store\n", + "\n", + "---\n", + "\n", + "\n", + "LlamaIndex breaks down documents into smaller units called nodes, storing them in a [Document Store](https://docs.llamaindex.ai/en/stable/module_guides/storing/docstores/). The Document Store can be used with multiple indexes, where each index is stored in its own [Index Store](https://docs.llamaindex.ai/en/stable/module_guides/storing/index_stores/) and uses the same underlying nodes but provides a different search capability." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set up a Document Store" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from llama_index_cloud_sql_pg import PostgresDocumentStore\n", + "\n", + "document_store_table_name = \"document_store\"\n", + "engine.init_doc_store_table(table_name=document_store_table_name)\n", + "doc_store = PostgresDocumentStore.create_sync(\n", + " engine=engine,\n", + " table_name=document_store_table_name,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Parse documents into nodes\n", + "\n", + "Using a `TokenTextSplitter`, you can split the documents on whitespace characters to build a keyword-based search index." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from llama_index.core.node_parser import TokenTextSplitter\n", + "\n", + "splitter = TokenTextSplitter(\n", + " chunk_size=1024,\n", + " chunk_overlap=20,\n", + " separator=\" \",\n", + ")\n", + "nodes = splitter.get_nodes_from_documents(documents)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Add nodes to the Document Store" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from llama_index.core import StorageContext\n", + "\n", + "storage_context = StorageContext.from_defaults(docstore=doc_store)\n", + "storage_context.docstore.add_documents(nodes)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set up an Index Store" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from llama_index_cloud_sql_pg import PostgresIndexStore\n", + "\n", + "index_store_table_name = \"index_store\"\n", + "engine.init_index_store_table(\n", + " table_name=index_store_table_name,\n", + ")\n", + "\n", + "index_store = PostgresIndexStore.create_sync(\n", + " engine=engine,\n", + " table_name=index_store_table_name,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Create a Storage Context" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from llama_index.core import StorageContext\n", + "\n", + "storage_context = StorageContext.from_defaults(\n", + " docstore=doc_store, index_store=index_store\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create Indexes\n", + "\n", + "The Document Store can be used with multiple indexes. Each index uses the same underlying nodes. For example, the keyword table index extracts keywords from each node and builds a mapping to all nodes containing that keyword. Let's use this to build a keyword search." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from llama_index.core import SimpleKeywordTableIndex\n", + "\n", + "keyword_table_index = SimpleKeywordTableIndex(nodes, storage_context=storage_context)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Query the index" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "query_engine = keyword_table_index.as_query_engine()\n", + "response = query_engine.query(\"What TV shows resonate with crime?\")\n", + "print(response)" + ] + },
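+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Under the hood, the query engine first retrieves matching nodes and then synthesizes an answer. As an added sketch (not part of the original sample), you can call the index's retriever directly to see which nodes a keyword lookup returns:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Added sketch: retrieve nodes for a keyword directly, without answer synthesis\n", + "retriever = keyword_table_index.as_retriever()\n", + "nodes_with_scores = retriever.retrieve(\"crime\")\n", + "print(f\"Retrieved {len(nodes_with_scores)} nodes\")\n", + "for nws in nodes_with_scores[:3]:\n", + " print(nws.node.metadata.get(\"show_id\"), nws.node.get_content()[:80])" + ] + },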
+ { + "cell_type": "markdown", + "metadata": { + "id": "ZM_OFzZrQEPs" + }, + "source": [ + "## Use case 4: Cloud SQL for PostgreSQL as a Chat Store\n", + "\n", + "---\n", + "\n", + "\n", + "Next, create a [Chat Store](https://docs.llamaindex.ai/en/stable/module_guides/storing/chat_stores/) so the LLM can retain context and information across multiple interactions, leading to more coherent and sophisticated conversations or text generation." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "vyYQILyoEAqg" + }, + "outputs": [], + "source": [ + "from llama_index_cloud_sql_pg import PostgresChatStore\n", + "\n", + "chat_store_table_name = \"chat_store\"\n", + "engine.init_chat_store_table(table_name=chat_store_table_name)\n", + "chat_store = PostgresChatStore.create_sync(\n", + " engine,\n", + " table_name=chat_store_table_name,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create a ChatMemoryBuffer\n", + "The `ChatMemoryBuffer` stores a history of recent chat messages, enabling the LLM to access relevant context from prior interactions.\n", + "\n", + "By passing our Chat Store to the `ChatMemoryBuffer`, it can automatically retrieve and update the messages associated with a specific session ID, or `chat_store_key`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from llama_index.core.memory import ChatMemoryBuffer\n", + "\n", + "memory = ChatMemoryBuffer.from_defaults(\n", + " token_limit=3000,\n", + " chat_store=chat_store,\n", + " chat_store_key=\"user1\",\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create a LlamaIndex `ChatEngine`\n", + "\n", + "You can re-use the `VectorStoreIndex` created above to build a [ChatEngine](https://docs.llamaindex.ai/en/stable/module_guides/deploying/chat_engines/), which uses the Chat Store to save the chats between the user and the AI assistant." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "2yuXYLTCl2K1" + }, + "source": [ + "Here is an example of how you would send chat messages and then fetch all stored messages from the Chat Store." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "qDVoTWZal0ZF" + }, + "outputs": [], + "source": [ + "chat_engine = index.as_chat_engine(chat_mode=\"context\", memory=memory)\n", + "\n", + "response = chat_engine.chat(\"What was the cast in Blood and Water?\")\n", + "response = chat_engine.chat(\"How many seasons are there for Kota Factory?\")\n", + "# Retrieve the response to the last chat message\n", + "print(response.response)\n", + "\n", + "# Retrieve all messages for a user / session\n", + "retrieved_messages = chat_store.get_messages(\"user1\")\n", + "for msg in retrieved_messages:\n", + " print(f\"{msg.role} -> {msg.content}\")" + ] + } + ], + "metadata": { + "colab": { + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3", + "name": "python3" + }, + "language_info": { + "name": "python" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +}