A Journey to Building a Simple Chatbot with LangChain 🤖✨
Friends, today we will learn how to build a simple yet powerful chatbot with the help of LangChain, Hugging Face, and Ollama. This tutorial goes step by step, from installation to model selection and chain creation.
Preparation: Essential Tools 🛠️
First, a few packages need to be installed:
pip install langchain-huggingface langchain-ollama
These packages let you use Hugging Face embeddings and Ollama chat models inside LangChain.
Code Explanation in Brief 🧐
- langchain-huggingface: the package for integrating Hugging Face models into LangChain.
- langchain-ollama: support for using Ollama's open-source models in LangChain.
Model Selection: Open-Source Stars 🌟
We will use two models:
- An embedding model from Hugging Face (to convert text into vectors)
- A chat model from Ollama (for the actual conversational responses)
Hugging Face Embedding Model 📊
Example code, initializing the embedding model:
from langchain_huggingface import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(
    model_name="BAAI/bge-base-en-v1.5"
)
What the code means: HuggingFaceEmbeddings converts text into vectors.
model_name="BAAI/bge-base-en-v1.5" selects a strong semantic embedding model that captures the meaning of text well.
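To see what these vectors are for, here is a minimal sketch in plain Python of how cosine similarity compares two embeddings. The vectors below are made up for illustration; with the real model you would obtain them from embeddings.embed_query(...).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings (illustrative only)
v_cat = [0.9, 0.1, 0.0]
v_kitten = [0.8, 0.2, 0.1]
v_car = [0.0, 0.1, 0.9]

print(cosine_similarity(v_cat, v_kitten))  # high: similar meaning
print(cosine_similarity(v_cat, v_car))     # low: unrelated meaning
```

Texts with similar meaning get vectors that point in similar directions, which is exactly what makes embeddings useful for search and retrieval.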
Ollama Chat Model 💬
Example code, initializing the chat model:
from langchain_ollama import ChatOllama
model = ChatOllama(
    model="llama3",
    temperature=0.7
)
Code explanation:
ChatOllama integrates Ollama's open-source language models into LangChain.
temperature controls creativity (typically 0 to 1). A value of 0.7 gives the model some creativity while keeping the answers reasonably focused.
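Under the hood, temperature rescales the model's token scores before sampling: a low value sharpens the probability distribution, a high value flattens it. A minimal sketch in plain Python, using made-up logits, of temperature-scaled softmax:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities, scaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores

low = softmax_with_temperature(logits, 0.2)   # sharp: top token dominates
high = softmax_with_temperature(logits, 1.5)  # flat: more varied choices

print(low)
print(high)
```

At low temperature the top token takes almost all the probability mass (precise, repeatable answers); at high temperature the mass spreads out (more creative, more random answers).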
Chatbot Logic: The Magic Begins! 🪄
The code below shows how to build a prompt template and a chain, then call the model:
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
# Build the prompt template
prompt = ChatPromptTemplate.from_template(
    "You are a friendly chatbot. Please answer this question: {question}"
)

# Create the chain
chain = (
    {"question": RunnablePassthrough()}
    | prompt
    | model
)

# Talk to the bot
response = chain.invoke("Who are you?")
print(response.content)  # the reply text of the returned AIMessage
A deeper explanation of the code:
- ChatPromptTemplate: a tool for building custom prompt templates; the actual user question is substituted wherever {question} appears.
- RunnablePassthrough(): passes the input straight through the chain unchanged, which adds flexibility.
- Chain creation (the | operator): this operator connects the components: Question → Prompt → Model.
- chain.invoke(): starts the actual conversation and gets a response from the model.
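To demystify the | operator, here is a tiny sketch in plain Python (not LangChain's actual implementation) of how pipe-style chaining can be built with the __or__ method; the prompt and model stages below are hypothetical stand-ins:

```python
class Step:
    """A minimal pipeline stage; `|` chains stages left to right."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: run self first, then feed its output to `other`
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Hypothetical stages standing in for the prompt template and the model
prompt = Step(lambda q: f"You are a friendly chatbot. Answer: {q}")
model = Step(lambda p: f"[model reply to: {p!r}]")

chain = prompt | model
print(chain.invoke("Who are you?"))
```

Each | merges two stages into one, so the whole chain is itself a single callable unit, which is the same design idea behind LangChain's expression language.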
Pro Tips 💡
- Use diverse, high-quality data when training or fine-tuning a model, so that the model becomes robust.
- Adjust the temperature to balance creative versus precise responses.
- Choose your embedding and chat models carefully based on the use case; sometimes smaller models also give very good results.