# Agents
LLM agents use planning, memory, and tools to accomplish tasks. Here, we show how to build agents capable of tool-calling using LangGraph with Llama 3.
Agents can empower Llama 3 with important new capabilities. In particular, we will show how to give Llama 3 the ability to perform web search, as well as multi-modal tools: image generation (text-to-image), image analysis (image-to-text), and voice (text-to-speech)!
Tool-calling agents with LangGraph use two nodes: (1) an LLM node decides which tool to invoke based on the user question, and outputs the tool name and arguments to use; (2) the tool name and arguments are passed to a tool node, which calls the tool with the specified arguments and returns the result to the LLM.
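The two-node loop above can be sketched in plain Python. This is an illustrative stand-in, not the notebook's actual code: `decide_tool` plays the role of the Llama 3 LLM node (here a hard-coded stub), and `tool_node` plays the role of LangGraph's tool-executing node. All names are hypothetical.

```python
# Sketch of the two-node tool-calling pattern described above.
# decide_tool() stands in for the LLM node; in the notebooks this decision
# is made by Llama 3 inside a LangGraph graph.

def web_search(query: str) -> str:
    """Stub tool: a real agent would call a search API here."""
    return f"search results for: {query}"

TOOLS = {"web_search": web_search}

def decide_tool(question: str) -> dict:
    """LLM-node stand-in: pick a tool name and arguments for the question."""
    return {"tool": "web_search", "args": {"query": question}}

def tool_node(call: dict) -> str:
    """Tool node: invoke the named tool with the specified arguments."""
    return TOOLS[call["tool"]](**call["args"])

def run_agent(question: str) -> str:
    call = decide_tool(question)         # node 1: choose tool + arguments
    result = tool_node(call)             # node 2: execute the tool
    return f"Answer based on: {result}"  # result goes back to the LLM
```

In the real graph the result is appended to the message state and the LLM node runs again, so the model can either answer or request another tool call.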
Our first notebook, `langgraph-tool-calling-agent`, shows how to build the agent described above using LangGraph.
See this video overview for more detail on the design of this agent.
## RAG Agent
Our second notebook, `langgraph-rag-agent`, shows how to apply LangGraph to build a custom Llama 3 powered RAG agent that uses ideas from three papers: CRAG, Self-RAG, and Adaptive RAG.
We implement each approach as a control flow in LangGraph, building up from CRAG (blue, in the diagram below) to Self-RAG (green) and finally to Adaptive RAG (red):
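As a minimal sketch of one of these control flows, the CRAG idea is to grade the retrieved documents and fall back to web search when none are relevant. Every function below is an illustrative stub (the notebook uses an LLM grader and real retrievers); the names are assumptions, not the notebook's API.

```python
# CRAG-style control flow: retrieve -> grade -> (generate | web-search fallback).
# All functions are stubs that illustrate the branching, not real retrieval.

def retrieve(question: str) -> list[str]:
    """Vector-store stand-in: only 'knows' about agents."""
    return ["doc about agents"] if "agent" in question else []

def grade(question: str, docs: list[str]) -> list[str]:
    """Relevance grader stand-in (the notebook uses an LLM grader)."""
    return [d for d in docs if any(w in d for w in question.lower().split())]

def web_search(question: str) -> list[str]:
    """Fallback tool when no retrieved document is relevant."""
    return [f"web result for: {question}"]

def rag_agent(question: str) -> str:
    docs = grade(question, retrieve(question))
    if not docs:                     # corrective step: nothing relevant,
        docs = web_search(question)  # so supplement with web search
    return f"answer from {len(docs)} source(s)"
```

Self-RAG and Adaptive RAG extend this flow with additional conditional edges (e.g., grading the generation itself, or routing the question up front), which is why LangGraph's explicit control-flow graphs fit these papers well.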
## Local LangGraph RAG Agent
Our third notebook, `langgraph-rag-agent-local`, shows how to apply LangGraph to build advanced RAG agents using Llama 3 that run locally and reliably.
See this video overview for more detail on the design of this agent.