Build a Team of AI Agents with LangChain!
Imagine you could build an AI team where every agent is an expert at one specific job: one agent does the research, another writes the content, and a third reviews it. Today we'll use LangGraph (part of the LangChain ecosystem) to build exactly this kind of "Multi-Agent System".
🔧 Prerequisites
We need to install a few libraries first. This project also needs a search tool; we'll use Tavily.
pip install langchain langgraph langchain_openai tavily-python
You'll also need to create a free API key on Tavily.
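If you prefer setting the keys from Python rather than your shell, here is a quick sketch. The environment variable names TAVILY_API_KEY and OPENAI_API_KEY are the ones these clients conventionally read; the placeholder values below are not real keys, so substitute your own.

```python
import os

# Assumed env var names: TAVILY_API_KEY is read by the tavily-python
# client, OPENAI_API_KEY by langchain_openai. Replace the placeholders
# with your actual keys (placeholders here, not real credentials).
os.environ.setdefault("TAVILY_API_KEY", "your-tavily-key")
os.environ.setdefault("OPENAI_API_KEY", "your-openai-key")
```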
Step 1: Define the Tool and Agent State 🛠️
First, we'll create a search tool and then define our graph's "state". The state is a dictionary that shares information between agents.
from langchain_community.tools.tavily_search import TavilySearchResults
from typing import TypedDict, Annotated, List
from langchain_core.messages import BaseMessage
import operator
# 1. Tool
tool = TavilySearchResults(max_results=2)
# 2. Agent State
class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]
Explanation:
- TavilySearchResults: this is our web search tool.
- AgentState: this is the graph's memory bank. The messages field stores the entire conversation history, and the operator.add annotation tells LangGraph to append new messages to the list instead of overwriting it.
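To build intuition for what that operator.add reducer does, here is a small plain-Python stand-in (not LangGraph's real machinery) that merges state updates the same way: each node's returned messages get appended to, rather than replace, the existing list.

```python
import operator
from typing import Annotated, List, TypedDict

# Simplified stand-in: the reducer declared in
# Annotated[..., operator.add] tells the graph to APPEND each
# node's returned messages to the state instead of replacing them.
class DemoState(TypedDict):
    messages: Annotated[List[str], operator.add]

def apply_update(state: DemoState, update: dict) -> DemoState:
    # Combine old and new values using the field's reducer
    return {"messages": operator.add(state["messages"], update["messages"])}

state: DemoState = {"messages": ["user: hi"]}
state = apply_update(state, {"messages": ["assistant: hello!"]})
state = apply_update(state, {"messages": ["user: what is the weather?"]})
print(state["messages"])
# ['user: hi', 'assistant: hello!', 'user: what is the weather?']
```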
Step 2: Build the Agent Node and the Graph 🧠
Now we'll create an "agent node" — a function that calls the LLM — along with the routing logic that decides whether a tool needs to run. Then we'll wire these nodes together into a graph.
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import ToolNode
from langgraph.graph import StateGraph, END

# LLM with the tool bound to it, so it can emit tool calls
llm = ChatOpenAI(model="gpt-4-turbo-preview").bind_tools([tool])
tool_node = ToolNode([tool])

# Agent node: let the LLM read the conversation and respond
def agent_node(state):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# Router: if the LLM requested a tool, run it; otherwise finish
def should_continue(state):
    last_message = state["messages"][-1]
    return "tools" if last_message.tool_calls else END

# Define the graph
workflow = StateGraph(AgentState)
workflow.add_node("agent", agent_node)
workflow.add_node("tools", tool_node)
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent")  # feed tool results back to the agent

# Compile the graph
app = workflow.compile()
Explanation:
- agent_node: this is our agent's brain. It takes the state, lets the LLM think over it, and a tool gets run whenever one is needed.
- StateGraph: this is the blueprint of our multi-agent system.
- add_node and add_edge: with these we create the graph's nodes (agents) and the connections (edges) between them.
- compile(): this turns the graph into an executable application.
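The routing decision at the heart of such a graph can be sketched without any LangChain installed. The FakeAIMessage class below is a hypothetical stand-in for the real AIMessage, just to show the check on the tool_calls attribute:

```python
# Hypothetical sketch of the agent graph's routing decision:
# inspect the last message; if the LLM asked for a tool, route to
# the "tools" node, otherwise end. The message class is faked here
# so the snippet runs without langchain installed.
from dataclasses import dataclass, field

@dataclass
class FakeAIMessage:
    content: str
    tool_calls: list = field(default_factory=list)

END = "__end__"  # stand-in for langgraph.graph.END

def route(messages):
    last = messages[-1]
    return "tools" if getattr(last, "tool_calls", []) else END

print(route([FakeAIMessage("", tool_calls=[{"name": "tavily"}])]))  # tools
print(route([FakeAIMessage("All done!")]))                          # __end__
```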
Step 3: Put Your AI Team to Work! 🚀
Our agent graph is ready. Let's give it a research task.
from langchain_core.messages import HumanMessage
# Ask the agent a question
inputs = {"messages": [HumanMessage(content="What is the weather in Dubai? Also, what is LangGraph and how does it work?")]}
# Run the graph as a stream so we can watch live updates
for output in app.stream(inputs):
    for key, value in output.items():
        print(f"Output from node '{key}':")
        print("---")
        print(value)
        print("\n---\n")
When you run this code, you'll see the agent first use the Tavily search tool to look up the weather and the LangGraph information, and then use those results to generate a final, comprehensive answer. 🎯
💡 Pro Tips
- Conditional Edges: with add_conditional_edges you can build more complex routing logic, e.g. "if a tool was called, go to the tool node; otherwise end."
- Multiple Agents: create separate nodes for different roles (e.g. "Researcher", "Writer"). Each node can have its own prompt and tools.
- Human-in-the-loop: you can add a "wait" point in the graph where the system pauses for human approval.
- Persistence: adding memory (checkpointers) in LangGraph is easy, so your team can remember long conversations.
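To see what a checkpointer buys you, here is a toy, dictionary-backed version of the idea. LangGraph ships real checkpointers; this hypothetical ToyCheckpointer only illustrates the concept of persisting state per conversation thread:

```python
# Toy illustration of the checkpointer idea: persist graph state per
# conversation thread so a later turn resumes where it left off.
# (LangGraph's real checkpointers do this durably; this dict-backed
# class is just the concept.)
class ToyCheckpointer:
    def __init__(self):
        self._store = {}

    def save(self, thread_id: str, state: dict) -> None:
        self._store[thread_id] = dict(state)

    def load(self, thread_id: str) -> dict:
        # Unknown threads start with empty history
        return self._store.get(thread_id, {"messages": []})

cp = ToyCheckpointer()
state = cp.load("thread-1")
state["messages"].append("user: remember my name is Asha")
cp.save("thread-1", state)

# A later turn on the same thread resumes with the history intact
resumed = cp.load("thread-1")
print(len(resumed["messages"]))  # 1
```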