Agents
LLM agents use planning, memory, and tools to accomplish tasks. Here, we show how to build agents capable of tool-calling using LangGraph with Llama 3.
Agents can empower Llama 3 with important new capabilities. In particular, we will show how to give Llama 3 the ability to perform web search, call a custom user-defined function, and use multi-modal tools: image generation (text-to-image), image analysis (image-to-text), and voice (text-to-speech)!
Tool-calling agents with LangGraph use two nodes: (1) an LLM node decides which tool to invoke based on the user input, outputting the tool name and the arguments to call it with; (2) a tool node receives that tool name and those arguments, calls the tool, and returns the result to the LLM.
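As a rough illustration of this two-node loop, here is a minimal LangGraph sketch. The custom tool, the choice of ChatOllama as the Llama 3 chat model, and the sample prompt are assumptions for illustration only; the notebook wires in web search and the multi-modal tools described above.

```python
# Minimal sketch of the two-node tool-calling loop (assumptions: a
# tool-calling-capable Llama 3 served via Ollama, and a toy custom tool).
from langchain_core.tools import tool
from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

@tool
def add_magic_number(x: int) -> int:
    """Add 42 to x (stand-in for a user-defined tool)."""
    return x + 42

tools = [add_magic_number]
llm = ChatOllama(model="llama3", temperature=0).bind_tools(tools)

def call_llm(state: MessagesState):
    # LLM node: decide whether to answer directly or emit a tool call.
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("llm", call_llm)
builder.add_node("tools", ToolNode(tools))              # tool node executes the call
builder.add_edge(START, "llm")
builder.add_conditional_edges("llm", tools_condition)   # route to "tools" or finish
builder.add_edge("tools", "llm")                        # feed the result back to the LLM
agent = builder.compile()

print(agent.invoke({"messages": [("user", "What is the magic number for 8?")]}))
```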
Our first notebook, langgraph-tool-calling-agent, shows how to build the agent described above using LangGraph.
See this video overview for more detail on the design of this agent.
RAG Agent
Our second notebook, langgraph-rag-agent, shows how to apply LangGraph to build a custom Llama 3 powered RAG agent that combines ideas from three papers: Corrective RAG (CRAG), Self-RAG, and Adaptive RAG. We implement each approach as a control flow in LangGraph, building up from CRAG to Self-RAG and finally to Adaptive RAG.
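To give a feel for what such a control flow looks like, the sketch below implements only the CRAG-style branch: grade the retrieved documents, fall back to web search if none are relevant, then generate. The state fields and stubbed node bodies are assumptions for illustration; the notebook implements grading and generation with Llama 3 prompts and layers the Self-RAG and Adaptive RAG checks on top as further conditional edges.

```python
# Sketch of a CRAG-style control flow in LangGraph. Node bodies are stubs;
# in the notebook, grading and generation are done with Llama 3.
from typing import List, TypedDict
from langgraph.graph import StateGraph, START, END

class RAGState(TypedDict):
    question: str
    documents: List[str]
    generation: str
    web_search_needed: bool

def retrieve(state: RAGState):
    # Fetch candidate documents from a vector store (stubbed here).
    return {"documents": [f"stub document mentioning: {state['question']}"]}

def grade_documents(state: RAGState):
    # CRAG step: keep only documents judged relevant (an LLM grader in the
    # notebook); if none survive, request a web search fallback.
    relevant = [d for d in state["documents"] if state["question"] in d]
    return {"documents": relevant, "web_search_needed": not relevant}

def web_search(state: RAGState):
    # Fallback retrieval when the graded documents are insufficient.
    return {"documents": state["documents"] + ["web search result"]}

def generate(state: RAGState):
    # Generate an answer grounded in the retained documents.
    return {"generation": f"answer based on {len(state['documents'])} documents"}

def decide_next(state: RAGState) -> str:
    return "web_search" if state["web_search_needed"] else "generate"

builder = StateGraph(RAGState)
builder.add_node("retrieve", retrieve)
builder.add_node("grade_documents", grade_documents)
builder.add_node("web_search", web_search)
builder.add_node("generate", generate)
builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "grade_documents")
builder.add_conditional_edges(
    "grade_documents", decide_next,
    {"web_search": "web_search", "generate": "generate"},
)
builder.add_edge("web_search", "generate")
builder.add_edge("generate", END)
rag_agent = builder.compile()

print(rag_agent.invoke({"question": "How do I build an agent with LangGraph?"}))
```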
Local LangGraph RAG Agent
Our third notebook, langgraph-rag-agent-local, shows how to apply LangGraph to build advanced RAG agents using Llama 3 that run locally and reliably.
See this video overview for more detail on the design of this agent.
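For the local setup, a common pattern (an assumption here, not necessarily the notebook's exact configuration) is to serve Llama 3 through Ollama and use JSON mode for the grading steps, so the control-flow decisions stay reliable without a hosted API:

```python
# Assumes Llama 3 has been pulled locally with `ollama pull llama3`; the
# package (langchain_ollama) and the grading prompt are illustrative.
from langchain_ollama import ChatOllama

local_llm = ChatOllama(model="llama3", format="json", temperature=0)

grade = local_llm.invoke(
    "You are grading retrieval relevance. Respond with JSON containing a "
    "single key 'relevant' with value 'yes' or 'no'.\n\n"
    "Document: LangGraph lets you build stateful, multi-actor agents.\n"
    "Question: How do I build an agent with LangGraph?"
)
print(grade.content)  # e.g. {"relevant": "yes"}
```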