LangGraph Tutorial: The Future of AI Agents 🚀

What Is LangGraph and Why Use It?

LangGraph is an extension of LangChain that helps you build stateful, multi-agent applications. Compared to the older agent executors, LangGraph gives you much more control: you can build agents with complex cycles, human-in-the-loop workflows, and persistent memory.

🔧 Prerequisites

Let's install the required libraries:

pip install langgraph langchain_openai langchain_community tavily-python

You will also need API keys for OpenAI and Tavily.
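Both libraries read their keys from the standard `OPENAI_API_KEY` and `TAVILY_API_KEY` environment variables, so one simple way to configure them (the key strings below are placeholders — use your own):

```python
import os

# Replace the placeholders with your actual keys.
# In production, prefer loading these from a .env file or a secrets manager
# rather than hard-coding them in your script.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["TAVILY_API_KEY"] = "tvly-..."
```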

Step 1: Set Up the State, Tools, and LLM 🛠️

Every LangGraph application runs on a "state" object, which carries information between the nodes of the graph. We will also set up a search tool.

from typing import TypedDict, Annotated
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
import operator

# 1. Agent State
class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]

# 2. Tools
tool = TavilySearchResults(max_results=2)
tools = [tool]

# 3. LLM
llm = ChatOpenAI(model="gpt-4-turbo-preview")

# Make the LLM aware of the available tools
llm_with_tools = llm.bind_tools(tools)
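The `Annotated[list[AnyMessage], operator.add]` annotation tells LangGraph how to merge each node's return value into the state: new messages are appended to the history rather than overwriting it. A stdlib-only illustration of that reducer behaviour:

```python
import operator

# LangGraph applies the reducer as reducer(existing_value, node_update).
# For our messages key the reducer is operator.add, and on lists
# operator.add means concatenation:
history = ["question from user"]
node_update = ["answer from llm"]

merged = operator.add(history, node_update)
print(merged)  # the update is appended, not replaced
```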

Explanation: `AgentState` defines the schema of the graph's state. The `operator.add` reducer means that messages returned by a node are appended to the conversation history instead of replacing it, and `bind_tools` makes the LLM aware of the available tools so it can respond with structured tool calls.

Step 2: Create the Graph Nodes 🧠

Nodes are the building blocks of our graph. They are plain Python functions that operate on the state. We will create two nodes: one to call the LLM, and one to execute tools.

from langgraph.prebuilt import ToolNode

# Node 1: Agent (calls the LLM)
def call_model(state: AgentState):
    messages = state['messages']
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}

# Node 2: Tool executor (runs the requested tools)
tool_node = ToolNode(tools)
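Under the hood, `ToolNode` does roughly the following (a simplified, stdlib-only sketch — the real class works with LangChain message objects and handles parallel calls and errors; here plain dicts stand in for `AIMessage` and `ToolMessage`):

```python
# Simplified sketch of a tool-executor node, using dicts as messages.

def run_tools(state, tools_by_name):
    last = state["messages"][-1]          # the AI message carrying tool_calls
    results = []
    for call in last.get("tool_calls", []):
        tool_fn = tools_by_name[call["name"]]
        output = tool_fn(**call["args"])  # execute the requested tool
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],   # lets the LLM match result to request
            "content": str(output),
        })
    return {"messages": results}          # appended to state by the reducer

# Example with a fake search tool:
fake_tools = {"search": lambda query: f"results for {query}"}
state = {"messages": [
    {"role": "ai", "tool_calls": [
        {"id": "1", "name": "search", "args": {"query": "weather in London"}}
    ]}
]}
print(run_tools(state, fake_tools))
```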

Step 3: Define the Graph's Edges (the Logic) ⛓️

Now we connect the nodes. This is where LangGraph's real power shows. Using a conditional edge, we decide what happens after each LLM response: call a tool, or end the conversation.

from langgraph.graph import StateGraph, END

# Conditional Edge Logic
def should_continue(state: AgentState) -> str:
    last_message = state['messages'][-1]
    if last_message.tool_calls:
        return "continue" # A tool call was requested
    else:
        return "end" # The conversation is finished

# Define the graph
workflow = StateGraph(AgentState)

# Add the nodes
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)

# Set the entry point
workflow.set_entry_point("agent")

# Add the conditional edge
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "action",
        "end": END,
    },
)

# From the tool node, route back to the agent node
workflow.add_edge("action", "agent")

# Compile the graph
app = workflow.compile()
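Conceptually, the compiled graph runs a loop like the following (a stdlib-only sketch with dicts standing in for messages and hand-rolled fakes standing in for the LLM and tool nodes):

```python
def fake_llm_node(state):
    # First turn: request a tool; once a tool result exists, answer.
    has_tool_result = any(m.get("role") == "tool" for m in state["messages"])
    if has_tool_result:
        return {"messages": [{"role": "ai", "content": "final answer"}]}
    return {"messages": [{"role": "ai", "content": "",
                          "tool_calls": [{"name": "search"}]}]}

def fake_tool_node(state):
    return {"messages": [{"role": "tool", "content": "search results"}]}

def should_continue(state):
    return "continue" if state["messages"][-1].get("tool_calls") else "end"

# Minimal executor: agent -> (conditional) -> action -> agent -> ... -> END
state = {"messages": [{"role": "human", "content": "hi"}]}
node = fake_llm_node
while True:
    update = node(state)
    state["messages"] += update["messages"]   # the operator.add reducer
    if node is fake_llm_node:
        if should_continue(state) == "end":   # conditional edge
            break
        node = fake_tool_node                 # edge: agent -> action
    else:
        node = fake_llm_node                  # edge: action -> agent

print(state["messages"][-1]["content"])  # → final answer
```

The real `app` handles all of this for you; the sketch only makes the agent–tool cycle visible.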

Explanation: `should_continue` inspects the last message. If the LLM requested a tool call, the graph routes to the `action` node; otherwise it routes to `END`. The `workflow.add_edge("action", "agent")` line closes the loop, so tool results always flow back to the LLM for the next decision — exactly the kind of cycle the older linear chains could not express.

Step 4: Interact with the Agent! 💬

Our agent is now ready. Let's ask it a question.

# Ask the agent a question
inputs = {"messages": [HumanMessage(content="What is the weather in London and what are the main attractions there?")]}

# Run the graph, streaming each node's output
for output in app.stream(inputs):
    for key, value in output.items():
        print(f"Output from node '{key}':")
        print(value["messages"][-1].content)
    print("---")

When you run this code, you will see the agent first use the Tavily search tool to gather information, and then use that information to produce a final answer. All of this appears live via `stream`. 🎯

💡 Pro Tips

To get the persistent memory mentioned earlier, pass a checkpointer when compiling, e.g. `workflow.compile(checkpointer=MemorySaver())` with `MemorySaver` from `langgraph.checkpoint.memory` (exact import paths can vary across langgraph versions). For human-in-the-loop workflows, you can pause the graph before tools run with `workflow.compile(interrupt_before=["action"])`, inspect the pending tool call, and then resume.
