Use Llama 3 to chat about structured data

This demo shows how to use LangChain with Llama 3 to query structured data (the 2023-24 NBA roster info, stored in a SQLite DB), and how to ask Llama 3 follow-up questions about the DB.

We start by installing the necessary packages:

  • Replicate to host the Llama 3 model
  • langchain, which provides the RAG tools needed for this demo

Note: We will be using Replicate to run the examples here. You will need to first sign in to Replicate with your GitHub account, then create a free API token here that you can use for a while. You can also use other Llama 3 cloud providers such as Groq, Together, or Anyscale; see Section 2 of the Getting to Know Llama notebook for more information.

If you'd like to run Llama 3 locally for the benefits of privacy, zero cost, and no rate limits (some Llama 3 hosting providers cap the number of queries or tokens per second or minute on free plans), see Running Llama Locally.

!pip install langchain langchain-community replicate
from getpass import getpass
import os

# Paste your Replicate API token when prompted; getpass keeps it out of the notebook output
REPLICATE_API_TOKEN = getpass()
os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN

Next we call the Llama 3 8B instruct model from Replicate. You can also use the Llama 3 70B model by replacing the model name with "meta/meta-llama-3-70b-instruct".

from langchain_community.llms import Replicate

# temperature 0.0 keeps the generated SQL deterministic across runs
llm = Replicate(
    model="meta/meta-llama-3-8b-instruct",
    model_kwargs={"temperature": 0.0, "top_p": 1, "max_new_tokens": 500},
)
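
As a quick sanity check that the token and endpoint work (the prompt here is just an illustration), you can invoke the model directly:

# Simple smoke test of the Replicate-hosted model
print(llm.invoke("In one sentence, what is SQLite?"))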

To recreate the nba_roster.db file, run the two commands below (a minimal sketch of the CSV-to-DB step is shown after the list):

  • python txt2csv.py to convert the nba.txt file to nba_roster.csv. The nba.txt file was created by scraping the NBA roster info from the web.
  • python csv2db.py to convert nba_roster.csv to nba_roster.db.
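
For reference, the conversion in csv2db.py could look roughly like the following standard-library sketch; the table name nba_roster and the all-TEXT columns are assumptions, and the actual script in the repo may differ:

import csv
import sqlite3

# Sketch only: the real csv2db.py may choose different table/column definitions
con = sqlite3.connect("nba_roster.db")
with open("nba_roster.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)  # first row holds the column names
    columns = ", ".join(f'"{col}" TEXT' for col in header)
    con.execute(f"CREATE TABLE IF NOT EXISTS nba_roster ({columns})")
    placeholders = ", ".join("?" for _ in header)
    con.executemany(f"INSERT INTO nba_roster VALUES ({placeholders})", reader)
con.commit()
con.close()
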
from langchain_community.utilities import SQLDatabase

# Note: to run in Colab, you need to upload the nba_roster.db file in the repo to the Colab folder first.
db = SQLDatabase.from_uri("sqlite:///nba_roster.db", sample_rows_in_table_info=0)
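
You can inspect the schema LangChain extracts from the DB before prompting the model; the column names in the comment below are illustrative, and the exact output depends on how the DB was built:

# Prints the CREATE TABLE statement(s) LangChain will pass to the model,
# e.g. CREATE TABLE nba_roster ("NAME" TEXT, "TEAM" TEXT, ...)
print(db.get_table_info())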

def get_schema():
    # Return the CREATE TABLE statements LangChain reads from the DB
    return db.get_table_info()

question = "What team is Klay Thompson on?"
prompt = f"""Based on the table schema below, write a SQL query that would answer the user's question; just return the SQL query and nothing else.

Schema:
{get_schema()}

Question: {question}

SQL Query:"""

print(prompt)
answer = llm.invoke(prompt)
print(answer)

If you don't include "just return the SQL query and nothing else" in the prompt above, or even with it but using Llama 2, which doesn't follow instructions as well as Llama 3, you'll likely get extra text back along with the SQL query in the answer. In that case you'd need to parse the query out of the response.
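
A minimal best-effort parser for that case might look like this; the regex and the assumption that the query begins with SELECT are illustrative:

import re

def extract_sql(text: str) -> str:
    """Best-effort extraction of the first SELECT statement from model output."""
    match = re.search(r"(SELECT\b.*?)(;|$)", text, re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else text.strip()

# e.g. extract_sql("Sure! Here is the query: SELECT Team FROM nba_roster WHERE ...")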

# Note: running model-generated SQL is a dangerous operation and for demo purposes only;
# in a production app you'll need to safeguard any DB operation (see the sketch at the end)
result = db.run(answer)

# How about a follow-up question?
follow_up = "What's his salary?"
print(llm.invoke(follow_up))

Since we did not pass any context along with the follow-up to the model, it did not know who "his" refers to and just picked LeBron James.

Let's try to fix it by adding context to the follow-up prompt.

prompt = f"""Based on the table schema, question, SQL query, and SQL response below, write a new SQL response; be concise, just output the SQL response.

Scheme:
{get_schema()}

Question: {follow_up}
SQL Query: {question}
SQL Result: {result}

New SQL Response:
"""
print(prompt)
new_answer = llm.invoke(prompt)
print(new_answer)

Because the prompt says "be concise, just output the SQL query", Llama 3 generates just the SQL statement; otherwise output parsing (like the extract_sql sketch above) would be needed.

db.run(new_answer)
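
Finally, as flagged above, running model-generated SQL directly against a database is risky. Below is a minimal sketch of one safeguard that refuses anything other than a single SELECT statement; note that a prefix check like this is not bulletproof (valid queries can also start with WITH, for example), and a real app would also want a read-only DB user, parameterization, and timeouts:

def run_readonly(db, query: str):
    """Run a model-generated query only if it looks like a single SELECT statement."""
    q = query.strip().rstrip(";")
    if not q.lower().startswith("select") or ";" in q:
        raise ValueError(f"Refusing to run non-SELECT or multi-statement SQL: {query!r}")
    return db.run(q)

run_readonly(db, new_answer)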