PyTorch in Just One Hour

Part 7 - A Typical Training Loop

So far, we have covered everything needed to train neural networks: PyTorch's tensor library, autograd, the Module API, and efficient data loaders. Let's now put all of this together and train a neural network on the toy dataset from the previous section.

Training Code

The code below shows a complete training loop.

import torch
import torch.nn.functional as F

# Assuming NeuralNetwork, train_loader are defined as in previous parts
# For completeness, let's redefine them quickly

class NeuralNetwork(torch.nn.Module):
    def __init__(self, num_inputs, num_outputs):
        super().__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(num_inputs, 30), torch.nn.ReLU(),
            torch.nn.Linear(30, 20), torch.nn.ReLU(),
            torch.nn.Linear(20, num_outputs),
        )
    def forward(self, x): return self.layers(x)

# Dummy train_loader for demonstration (5 examples, matching the
# 5 predictions shown below)
X_train = torch.rand(5, 2)
y_train = torch.randint(0, 2, (5,))
train_ds = torch.utils.data.TensorDataset(X_train, y_train)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=2, shuffle=True)


torch.manual_seed(123)
model = NeuralNetwork(num_inputs=2, num_outputs=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

num_epochs = 3

for epoch in range(num_epochs):

    model.train() # Set the model to training mode
    for batch_idx, (features, labels) in enumerate(train_loader):

        # 1. Forward pass
        logits = model(features)

        # 2. Compute the loss
        loss = F.cross_entropy(logits, labels)

        # 3. Backpropagation
        optimizer.zero_grad() # Reset gradients from the previous step
        loss.backward()       # Compute new gradients via backpropagation
        optimizer.step()      # Update the model weights

        ### LOGGING
        print(f"Epoch: {epoch+1:03d}/{num_epochs:03d}"
              f" | Batch {batch_idx:03d}/{len(train_loader):03d}"
              f" | Loss: {loss:.2f}")

    model.eval() # Set the model to evaluation mode
    # Optional model evaluation code could go here
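The comment above leaves the per-epoch evaluation step open. As a minimal sketch of what it could look like, the snippet below computes an average validation loss; the `val_loader` and the small stand-in `model` here are assumptions for illustration, not part of the original code:

```python
import torch
import torch.nn.functional as F

# Minimal stand-ins so this sketch is self-contained; in the tutorial,
# `model` is the trained NeuralNetwork from above.
torch.manual_seed(123)
model = torch.nn.Sequential(torch.nn.Linear(2, 2))

# Hypothetical validation loader, built the same way as train_loader
X_val = torch.rand(4, 2)
y_val = torch.randint(0, 2, (4,))
val_ds = torch.utils.data.TensorDataset(X_val, y_val)
val_loader = torch.utils.data.DataLoader(val_ds, batch_size=2)

model.eval()  # switch to evaluation mode
val_loss, num_batches = 0.0, 0
with torch.no_grad():  # no gradients needed during evaluation
    for features, labels in val_loader:
        logits = model(features)
        val_loss += F.cross_entropy(logits, labels).item()
        num_batches += 1

avg_val_loss = val_loss / num_batches
print(f"Validation loss: {avg_val_loss:.2f}")
```

Wrapping the evaluation in `torch.no_grad()` avoids building the computation graph, which saves memory and time when no backward pass is needed.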

Understanding the Training Loop

Let's walk through the main parts of the code above. The outer loop iterates over the epochs, and the inner loop iterates over the batches produced by the data loader. Before training, model.train() puts the model into training mode; this matters for layers such as dropout and batch normalization (our model has neither, but calling it is good practice). For each batch, the forward pass produces the logits, and F.cross_entropy computes the loss directly from the logits and the class labels (it applies softmax internally). optimizer.zero_grad() resets the gradients from the previous iteration so they do not accumulate, loss.backward() computes the new gradients via backpropagation, and optimizer.step() uses them to update the model weights. After each epoch, model.eval() switches the model back to evaluation mode.

As you will see in the output, the loss decreases with each batch, which means our model is learning!
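One detail worth stressing is optimizer.zero_grad(): PyTorch accumulates gradients into the .grad attribute by default, so skipping this call would mix gradients from different batches. A small self-contained check with a standalone toy tensor (not the tutorial's model):

```python
import torch

w = torch.tensor([1.0], requires_grad=True)

loss = (w * 2).sum()
loss.backward()
print(w.grad)  # tensor([2.])

# Calling backward again WITHOUT zeroing: gradients accumulate
loss = (w * 2).sum()
loss.backward()
print(w.grad)  # tensor([4.]) -- 2 + 2, not a fresh gradient

# Zeroing first gives the fresh gradient we actually want
w.grad.zero_()
loss = (w * 2).sum()
loss.backward()
print(w.grad)  # tensor([2.])
```

This accumulation behavior is intentional (it enables techniques like gradient accumulation across batches), which is why the reset must be explicit in a standard training loop.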

Making Predictions with the Model

After training, we can use the model to make predictions:

model.eval()

with torch.no_grad():
    outputs = model(X_train)

print(outputs)

Output:

tensor([[ 2.8569, -4.1618],
        [ 2.5382, -3.7548],
        [ 2.0944, -3.1820],
        [-1.4814,  1.4816],
        [-1.7176,  1.7342]])

To obtain class-membership probabilities, we can use the softmax function:

torch.set_printoptions(sci_mode=False)
probas = torch.softmax(outputs, dim=1)
print(probas)

Output:

tensor([[    0.9991,     0.0009],
        [    0.9982,     0.0018],
        [    0.9949,     0.0051],
        [    0.0491,     0.9509],
        [    0.0307,     0.9693]])
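As a quick sanity check, each row of the softmax output is a proper probability distribution: its entries are non-negative and sum to 1. This can be verified with standalone logits (illustrative values, not the model outputs above):

```python
import torch

logits = torch.tensor([[2.8569, -4.1618],
                       [-1.4814,  1.4816]])
probas = torch.softmax(logits, dim=1)

# Every row sums to 1 (up to floating-point precision)
print(torch.allclose(probas.sum(dim=1), torch.ones(2)))  # True
```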

To convert these values into class labels, we can use the argmax function:

predictions = torch.argmax(probas, dim=1)
print(predictions)

Output:

tensor([0, 0, 0, 1, 1])

Computing the softmax probabilities is not strictly necessary. We can also apply argmax directly to the logits:

predictions = torch.argmax(outputs, dim=1)
print(predictions)

Output:

tensor([0, 0, 0, 1, 1])
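This works because softmax is a strictly monotonic transformation of each row: it rescales the logits into probabilities but never changes their ordering, so the largest logit always maps to the largest probability. A quick check on standalone values:

```python
import torch

logits = torch.tensor([[ 2.0, -1.0, 0.5],
                       [-0.3,  0.1, 1.2]])
probas = torch.softmax(logits, dim=1)

# argmax over the logits and over the softmax probabilities always agree
print(torch.equal(torch.argmax(logits, dim=1),
                  torch.argmax(probas, dim=1)))  # True
```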

Computing the Accuracy

To compute the prediction accuracy, we can implement a small helper function:

def compute_accuracy(model, dataloader):

    model = model.eval()
    correct = 0.0
    total_examples = 0

    for idx, (features, labels) in enumerate(dataloader):

        with torch.no_grad():
            logits = model(features)

        predictions = torch.argmax(logits, dim=1)
        compare = labels == predictions
        correct += torch.sum(compare)
        total_examples += len(compare)

    return (correct / total_examples).item()

This function iterates over the data loader and counts the correct predictions batch by batch. Because it never loads the whole dataset at once, it also scales to datasets that are too large to fit into memory.
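For a small dataset that fits in memory, the same accuracy can also be computed in one shot, without a data loader. A minimal sketch with standalone toy data and an untrained stand-in model (assumptions for illustration, not the tutorial's trained network):

```python
import torch

torch.manual_seed(123)
model = torch.nn.Linear(2, 2)  # stand-in for the trained NeuralNetwork
X = torch.rand(5, 2)
y = torch.randint(0, 2, (5,))

model.eval()
with torch.no_grad():
    logits = model(X)

predictions = torch.argmax(logits, dim=1)
accuracy = (predictions == y).float().mean().item()
print(accuracy)
```

The batched version above is the more general tool; this one-shot form is just a convenient shortcut when everything fits in memory.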

Accuracy on the training set:

compute_accuracy(model, train_loader)

Output: 1.0

Accuracy on the test set (using the test_loader from the previous section):

compute_accuracy(model, test_loader)

Output: 1.0

In this section, we learned how to train a neural network with PyTorch. In the next section, we will look at how to save and restore trained models.