PyTorch in One Hour

Part 6 - Setting Up Efficient Data Loaders

In the previous section, we built a custom neural network model. Now, before we train the model, we need to learn how to set up efficient data loaders in PyTorch.

Creating a Toy Dataset

To get started, we create a small toy dataset with 5 training examples, each with 2 features.

import torch

X_train = torch.tensor([
    [-1.2, 3.1], [-0.9, 2.9], [-0.5, 2.6],
    [2.3, -1.1], [2.7, -1.5]
])
y_train = torch.tensor([0, 0, 0, 1, 1])

X_test = torch.tensor([
    [-0.8, 2.8], [2.6, -1.6]
])
y_test = torch.tensor([0, 1])

Note: In PyTorch, class labels must start at 0.
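For instance (a small illustrative snippet, not part of the original tutorial), if your raw labels happened to start at 1, you could shift them to be zero-based before training:

import torch

# Hypothetical labels that start at 1 instead of 0
raw_labels = torch.tensor([1, 1, 2, 2])

# Shift so that the smallest label becomes 0
zero_based = raw_labels - raw_labels.min()
print(zero_based)  # tensor([0, 0, 1, 1])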

Custom `Dataset` Class

To handle data in PyTorch, we create a custom dataset by subclassing the `torch.utils.data.Dataset` class.

from torch.utils.data import Dataset

class ToyDataset(Dataset):
    def __init__(self, X, y):
        self.features = X
        self.labels = y

    def __getitem__(self, index):
        one_x = self.features[index]
        one_y = self.labels[index]
        return one_x, one_y

    def __len__(self):
        return self.labels.shape[0]

train_ds = ToyDataset(X_train, y_train)
test_ds = ToyDataset(X_test, y_test)

print(f"Training dataset ki length: {len(train_ds)}")

This custom `Dataset` class has three essential parts:

- The `__init__` constructor, which stores the features and class labels (here as `self.features` and `self.labels`).
- The `__getitem__` method, which returns exactly one example and its label for a given `index`.
- The `__len__` method, which returns the total number of examples in the dataset.
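As a quick illustration (this indexing example is an addition, not part of the original listing), retrieving a single item calls `__getitem__` under the hood:

# Indexing the dataset returns one (features, label) pair via __getitem__
first_x, first_y = train_ds[0]
print(first_x)  # tensor([-1.2000,  3.1000])
print(first_y)  # tensor(0)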

Using the `DataLoader`

After creating the `Dataset`, we use a `DataLoader` to load the data in batches. The `DataLoader` takes care of tasks such as shuffling and parallel data loading.

from torch.utils.data import DataLoader

torch.manual_seed(123)

train_loader = DataLoader(
    dataset=train_ds,
    batch_size=2,
    shuffle=True, # Shuffle the training data every epoch
    num_workers=0 # Keep at 0 for now
)

test_loader = DataLoader(
    dataset=test_ds,
    batch_size=2,
    shuffle=False, # Do not shuffle the test data
    num_workers=0
)

We can now iterate over the `train_loader` to inspect the batches:

for idx, (x, y) in enumerate(train_loader):
    print(f"Batch {idx+1}:", x.shape, y.shape)
    print(f"  Features: {x}")
    print(f"  Labels: {y}\n")

Output:

Batch 1: torch.Size([2, 2]) torch.Size([2])
  Features: tensor([[ 2.3000, -1.1000],
        [-0.9000,  2.9000]])
  Labels: tensor([1, 0])

Batch 2: torch.Size([2, 2]) torch.Size([2])
  Features: tensor([[-1.2000,  3.1000],
        [-0.5000,  2.6000]])
  Labels: tensor([0, 0])

Batch 3: torch.Size([1, 2]) torch.Size([1])
  Features: tensor([[ 2.7000, -1.5000]])
  Labels: tensor([1])

You can see that the last batch contains only a single example, because we have 5 examples and a batch size of 2. To avoid this, we can set `drop_last=True`.

train_loader = DataLoader(
    dataset=train_ds,
    batch_size=2,
    shuffle=True,
    num_workers=0,
    drop_last=True # Drop the last incomplete batch
)

Iterating with `drop_last=True`

Now, when we iterate over the training loader, we can see that the last (incomplete) batch has been dropped:

for idx, (x, y) in enumerate(train_loader):
    print(f"Batch {idx+1}:", x, y)

Output:

Batch 1: tensor([[-1.2000,  3.1000],
        [-0.5000,  2.6000]]) tensor([0, 0])
Batch 2: tensor([[ 2.3000, -1.1000],
        [-0.9000,  2.9000]]) tensor([1, 0])

Understanding the `num_workers` Parameter

Finally, let's discuss the `num_workers=0` setting in the `DataLoader`. This parameter is crucial for parallelizing data loading and preprocessing.

When `num_workers` is set to 0, data loading happens in the main process rather than in separate worker processes. This can cause a significant slowdown during model training, especially when training larger networks on a GPU, because the CPU has to handle data loading on top of model processing, which can leave the GPU waiting for data.

In contrast, when `num_workers` is set to a value greater than 0 (such as 1, 2, or 4), separate worker processes are launched to load the data in parallel. This lets the main process focus on model training and makes better use of the system's resources.

However, if we are working with very small datasets or interactive environments such as Jupyter notebooks, increasing `num_workers` may not provide any noticeable speedup. In fact, the overhead of spinning up the worker processes can exceed the time spent on data loading itself, and it can occasionally cause errors.

In my experience, setting `num_workers=4` usually gives optimal performance on many real-world datasets, but the best setting depends on your hardware and dataset.
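As a sketch of that recommendation (the specific values here are assumptions, and it reuses the `train_ds` defined earlier), a standalone training script might configure the loader like this; the `if __name__ == "__main__":` guard matters because, on platforms that spawn worker processes as fresh Python processes, the loader setup must not run again on import:

from torch.utils.data import DataLoader

if __name__ == "__main__":
    train_loader = DataLoader(
        dataset=train_ds,  # dataset defined earlier in this section
        batch_size=2,
        shuffle=True,
        drop_last=True,
        num_workers=4      # assumed starting point; tune for your hardware
    )
    for x, y in train_loader:
        pass  # the training step for each batch would go here

With only 5 toy examples, the extra workers add more overhead than they save, which is exactly the small-dataset caveat mentioned above.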