PyTorch in One Hour

Part 4 - Automatic Differentiation Made Easy

In the previous section, we looked at computation graphs. PyTorch automatically builds such a graph whenever the `requires_grad` attribute of one of the tensors involved is set to `True`.

This graph is needed to compute gradients, which the 'backpropagation' algorithm uses to train neural networks.

Gradients and the Autograd Engine

If you don't remember calculus concepts such as partial derivatives or gradients, don't worry. Roughly speaking, gradients tell us in which direction to update the model's parameters (the weights and the bias) so that the loss decreases.

PyTorch's `autograd` engine builds a computation graph in the background by tracking every operation applied to the tensors. This is what lets us compute gradients with very little effort.
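To see this tracking in action, here is a minimal sketch (not part of the running example) that shows how autograd attaches a backward function to any tensor produced from a `requires_grad=True` input:

import torch

w = torch.tensor([2.2], requires_grad=True)  # tracked tensor
x = torch.tensor([1.1])                      # plain input, not tracked

z = x * w               # this multiplication is recorded as a node in the graph
print(z.requires_grad)  # True: the result is part of the graph
print(z.grad_fn)        # e.g. <MulBackward0 object at ...>, the node's backward function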

Let's return to the previous example, but this time we will compute the gradients for `w1` and `b`. To do that, we set `requires_grad=True` on them.

import torch
import torch.nn.functional as F
from torch.autograd import grad

y = torch.tensor([1.0])
x1 = torch.tensor([1.1])
w1 = torch.tensor([2.2], requires_grad=True) # enable gradient tracking
b = torch.tensor([0.0], requires_grad=True)  # enable gradient tracking

z = x1 * w1 + b
a = torch.sigmoid(z)

loss = F.binary_cross_entropy(a, y)

# Compute the gradients manually
grad_L_w1 = grad(loss, w1, retain_graph=True)
grad_L_b = grad(loss, b, retain_graph=True)

print(grad_L_w1)
print(grad_L_b)

Here `retain_graph=True` is necessary because we call the `grad` function twice on the same graph. By default, PyTorch frees the graph after the first call to save memory. (An alternative that avoids the repeated traversal is shown after the output below.)

Output:

(tensor([-0.0898]),)
(tensor([-0.0817]),)
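As a side note, both gradients can also be requested in a single `grad` call by passing a tuple of inputs, so the graph only needs to be traversed once. Continuing from the snippet above (the graph is still alive because of `retain_graph=True`), a small sketch:

# Both gradients in one call; returned as a tuple in the same order as the inputs
grad_L_w1, grad_L_b = grad(loss, (w1, b))
print(grad_L_w1)  # tensor([-0.0898])
print(grad_L_b)   # tensor([-0.0817])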

The Easiest Way: `.backward()`

In practice, we don't use the `grad` function manually. PyTorch offers an even simpler way: we can call the `.backward()` method directly on the `loss` tensor.

This method automatically computes the gradients of all tensors whose `requires_grad` is set to `True`. The gradients are stored in each tensor's `.grad` attribute. Note that `.backward()` adds new gradients to whatever is already in `.grad` rather than overwriting it, which is why the code below resets the gradients to zero first.

# Reset the previous gradients (if you are re-running this)
if w1.grad is not None:
    w1.grad.zero_()
if b.grad is not None:
    b.grad.zero_()

# Compute the gradients
loss.backward()

print(w1.grad)
print(b.grad)

The output is exactly the same:

tensor([-0.0898])
tensor([-0.0817])
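
As mentioned at the start of this section, these gradients tell us in which direction to nudge each parameter to reduce the loss. Here is a minimal sketch of a single manual update step using them (with an assumed learning rate of 0.1; in practice you would let an optimizer such as `torch.optim.SGD` do this):

lr = 0.1
with torch.no_grad():   # the update itself should not be tracked by autograd
    w1 -= lr * w1.grad  # step against the gradient to reduce the loss
    b -= lr * b.grad

w1.grad.zero_()         # clear the accumulated gradients before the next backward pass
b.grad.zero_()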

The most important takeaway from this section is that you don't need to worry about the calculus yourself. PyTorch's `.backward()` method does all of that work for us.