PyTorch in Just One Hour

Part 3 - Models as Computation Graphs

In the previous section, we looked at PyTorch's tensor library. Now we turn to its second core component: autograd, the automatic differentiation engine.

Before we can understand autograd, we need to understand the concept of a computation graph.

What Is a Computation Graph?

A computation graph is a directed graph that gives us a way to visualize mathematical expressions. In deep learning, this graph shows the sequence of all the calculations needed to compute a neural network's output.

PyTorch automatically builds such a graph in the background. It is this graph that lets us calculate the gradient of the loss with respect to the model's parameters (such as the weights and the bias). This process is called backpropagation and is the most important part of model training.
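As a small preview of what this graph enables, here is a minimal sketch (the tensor x and the function y = x ** 2 are purely illustrative choices, not part of the example below): setting requires_grad=True tells PyTorch to record every operation applied to a tensor, and calling .backward() walks the recorded graph to compute gradients.

import torch

x = torch.tensor([3.0], requires_grad=True)  # ask PyTorch to track operations on x
y = x ** 2                                   # this operation is recorded in the graph
y.backward()                                 # backpropagate through the recorded graph
print(x.grad)                                # dy/dx = 2x, so this prints tensor([6.])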

Let's Understand This with an Example

Let's look at the forward pass (the prediction step) of a simple logistic regression model. You can think of it as a single-layer neural network.

import torch
import torch.nn.functional as F

y = torch.tensor([1.0])  # True label
x1 = torch.tensor([1.1]) # Input feature
w1 = torch.tensor([2.2]) # Weight parameter
b = torch.tensor([0.0])  # Bias unit

z = x1 * w1 + b          # Net input
a = torch.sigmoid(z)     # Activation and output

loss = F.binary_cross_entropy(a, y) # Loss calculation

print(loss)

The output is:

tensor(0.0852)
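To see where this number comes from, we can redo the calculation by hand. This is just a sanity check, not part of the original example; it relies on the fact that binary cross-entropy with a true label of 1 reduces to -log(a).

import torch

z = torch.tensor([1.1]) * torch.tensor([2.2]) + 0.0  # net input: 2.42
a = 1 / (1 + torch.exp(-z))                          # sigmoid(2.42), roughly 0.9184
loss = -torch.log(a)                                 # BCE with y = 1 is -log(a)
print(loss)                                          # tensor([0.0852])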

If you don't follow every component of the code above, don't worry. The main point here is that we can think of this sequence of calculations as a graph.

In this graph, each node represents either an operation (such as multiplication or addition) or a variable (such as x1, w1, or b). The arrows show the direction in which data flows. PyTorch builds this graph in the background and uses it to compute gradients, which we will discuss in detail in the next section.
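One way to convince yourself that the graph really exists is to mark the parameters with requires_grad=True (a small modification of the example above) and inspect each tensor's grad_fn attribute, which points to the graph node that produced it. The exact node names can vary slightly between PyTorch versions.

import torch
import torch.nn.functional as F

y = torch.tensor([1.0])
x1 = torch.tensor([1.1])
w1 = torch.tensor([2.2], requires_grad=True)  # now tracked by autograd
b = torch.tensor([0.0], requires_grad=True)

z = x1 * w1 + b
a = torch.sigmoid(z)
loss = F.binary_cross_entropy(a, y)

print(loss.grad_fn)  # the last node of the graph, e.g. <BinaryCrossEntropyBackward0 ...>
print(a.grad_fn)     # e.g. <SigmoidBackward0 ...>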