Part 9 - Optimizing Training Performance with GPUs
In this final section of the tutorial, we will see how we can use GPUs to accelerate deep neural network training compared to regular CPUs.
9.1 PyTorch Computations on GPU Devices
In PyTorch, a 'device' is where computations take place and where data resides. The CPU and the GPU are examples of devices. A PyTorch tensor lives on a device, and its operations are executed on that same device.
Let's see how this works. First, check whether your PyTorch installation supports GPU acceleration:
import torch
print(torch.cuda.is_available())
If the output is `True` (as shown below), your system is ready to use the GPU.
True
By default, all computations are carried out on the CPU:
tensor_1 = torch.tensor([1., 2., 3.])
tensor_2 = torch.tensor([4., 5., 6.])
print(tensor_1 + tensor_2)
Output:
tensor([5., 7., 9.])
Now, we can transfer these tensors onto a GPU using the `.to()` method:
tensor_1 = tensor_1.to("cuda")
tensor_2 = tensor_2.to("cuda")
print(tensor_1 + tensor_2)
The new output shows that the computation was carried out on the GPU:
tensor([5., 7., 9.], device='cuda:0')
Here, `device='cuda:0'` means that the tensor resides on the first GPU. If you have multiple GPUs, you can select a specific one via `.to("cuda:1")`, `.to("cuda:2")`, and so on.
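For example, a quick way to check how many GPUs PyTorch can see and to place a tensor on a specific one is sketched below (a minimal example; it assumes at least one CUDA GPU is available, and the index 0 is only an illustration):

import torch

print(torch.cuda.device_count())  # number of visible GPUs, e.g., 2
tensor_1 = torch.tensor([1., 2., 3.]).to("cuda:0")  # explicitly pick the first GPU
print(tensor_1.device)  # prints: cuda:0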
An Important Rule
It is essential that all tensors involved in an operation are on the same device. If one tensor is on the CPU and another on the GPU, the computation will fail.
tensor_1 = tensor_1.to("cpu")  # Move it back to the CPU
print(tensor_1 + tensor_2)
This raises a `RuntimeError`:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
In this section, we saw how easy it is to perform GPU computations in PyTorch. All we have to do is transfer the tensors onto the same GPU device, and PyTorch handles the rest. With this knowledge, we can now train the neural network from the previous section on a GPU.
Single-GPU Training
Now that we know how to transfer tensors to the GPU, we can modify the training loop from Part 7 so that it runs on a GPU. Only three lines of code need to change, as shown below.
import torch.nn.functional as F
# Assuming NeuralNetwork and train_loader are defined
torch.manual_seed(123)
model = NeuralNetwork(num_inputs=2, num_outputs=2)
# New: Define a device variable that defaults to the GPU.
device = torch.device("cuda")
# New: Transfer the model onto the GPU.
model.to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
num_epochs = 3
for epoch in range(num_epochs):
model.train()
for batch_idx, (features, labels) in enumerate(train_loader):
        # New: Transfer the data onto the GPU.
features, labels = features.to(device), labels.to(device)
logits = model(features)
loss = F.cross_entropy(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
### LOGGING
print(f"Epoch: {epoch+1:03d}/{num_epochs:03d}"
f" | Batch {batch_idx:03d}/{len(train_loader):03d}"
f" | Loss: {loss:.2f}")
model.eval()
Running the code above produces the same output as before, but all calculations now run on the GPU.
Epoch: 001/003 | Batch 000/002 | Loss: 0.75
Epoch: 001/003 | Batch 001/002 | Loss: 0.65
Epoch: 002/003 | Batch 000/002 | Loss: 0.44
Epoch: 002/003 | Batch 001/002 | Loss: 0.13
Epoch: 003/003 | Batch 000/002 | Loss: 0.03
Epoch: 003/003 | Batch 001/002 | Loss: 0.00
To make the code more portable (so that it also runs on a CPU if no GPU is available), the following is considered best practice:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
For this small dataset, you will likely not notice a speed-up, because transferring data from the CPU to the GPU also takes time. However, you can expect a significant speed-up when training larger models, such as LLMs.
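To get a feel for when the GPU starts to pay off, you can time a large matrix multiplication on both devices (a rough sketch, assuming a CUDA GPU is available; the matrix size is arbitrary and the exact timings depend on your hardware):

import time
import torch

a = torch.rand(4000, 4000)
b = torch.rand(4000, 4000)

# Time the multiplication on the CPU
start = time.time()
_ = a @ b
print(f"CPU: {time.time() - start:.3f} s")

# Time the same multiplication on the GPU
# (synchronize before and after so asynchronous execution does not distort the measurement)
a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
torch.cuda.synchronize()
start = time.time()
_ = a_gpu @ b_gpu
torch.cuda.synchronize()
print(f"GPU: {time.time() - start:.3f} s")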
PyTorch on macOS: If you are using an Apple Mac with Apple Silicon (M1, M2, etc.), you can use `mps` instead of `cuda`: `device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")`.
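If you want a single device selection that covers all three cases, one option (a small sketch combining the two checks above) is:

device = torch.device(
    "cuda" if torch.cuda.is_available()
    else "mps" if torch.backends.mps.is_available()
    else "cpu"
)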
Training with Multiple GPUs
In this section, we will briefly cover the concept of distributed training. Distributed training means dividing model training across multiple GPUs and machines.
Why do we need this? Even when it is possible to train a model on a single GPU or machine, the process can be very time-consuming. Training time can be reduced significantly by distributing the training process across multiple machines, each of which may contain multiple GPUs.
In this section, we will look at the most basic case of distributed training: PyTorch's DistributedDataParallel (DDP) strategy. DDP enables parallelism by splitting the input data across the available devices and processing these data subsets simultaneously.
How Does It Work?
PyTorch launches a separate process on each GPU, and each process receives and keeps a copy of the model; these copies are synchronized during training.
In each training iteration, every model receives a minibatch from the data loader. We can use a `DistributedSampler` to ensure that each GPU receives a different, non-overlapping batch when using DDP.
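To see how `DistributedSampler` splits a dataset into non-overlapping parts, you can instantiate it outside of a training run by passing `num_replicas` and `rank` explicitly (a standalone sketch; the eight-element dataset and the two simulated ranks are only for illustration):

from torch.utils.data.distributed import DistributedSampler

dataset = list(range(8))  # stand-in for a Dataset with 8 samples
for rank in range(2):     # simulate 2 GPUs / processes
    sampler = DistributedSampler(dataset, num_replicas=2, rank=rank, shuffle=False)
    print(f"rank {rank}:", list(sampler))
# rank 0: [0, 2, 4, 6]
# rank 1: [1, 3, 5, 7]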
Because each model copy sees a different sample of the training data, the model copies return different logits as output and compute different gradients during the backward pass. These gradients are then averaged and synchronized during training to update the weights of all the model copies.
Benefits of DDP
The advantage of using DDP is the improved speed at which the dataset can be processed compared to a single GPU. Apart from a small communication overhead between devices, DDP can theoretically process a training epoch in half the time with two GPUs compared to just one. The time savings scale with the number of GPUs, so with eight GPUs we can process an epoch roughly eight times faster.
Multi-GPU in Interactive Environments
Important note: DDP does not work properly in interactive Python environments such as Jupyter notebooks. Therefore, the code below should be executed as a standalone Python script (a `.py` file), not from a notebook interface. This is because DDP needs to spawn multiple processes, and each process must have its own Python interpreter instance.
First, we import a few additional modules for distributed training.
import platform
from torch.utils.data.distributed import DistributedSampler
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed import init_process_group, destroy_process_group
Let's go over the rationale behind these new utilities:
- `DistributedSampler`: When we spawn multiple processes for training, we need a way to divide the dataset among these different processes. For this, we use the `DistributedSampler`.
- `init_process_group` and `destroy_process_group`: These are used to initialize and shut down the distributed training mode.
The Complete Multi-GPU Training Script
The complete script is shown below. Save it to a file (for example, `DDP-script-torchrun.py`) and run it from the command line with `torchrun`.
import torch
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
# New imports:
import os
import platform
from torch.utils.data.distributed import DistributedSampler
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed import init_process_group, destroy_process_group
# New: function to initialize the distributed process group (1 process per GPU)
def ddp_setup(rank, world_size):
"""
Arguments:
        rank: a unique process ID
        world_size: total number of processes in the group
"""
if "MASTER_ADDR" not in os.environ:
os.environ["MASTER_ADDR"] = "localhost"
if "MASTER_PORT" not in os.environ:
os.environ["MASTER_PORT"] = "12345"
if platform.system() == "Windows":
        # Windows users may have to use the "gloo" backend
init_process_group(backend="gloo", rank=rank, world_size=world_size)
else:
        # Linux/macOS users can use "nccl"
init_process_group(backend="nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(rank)
class ToyDataset(Dataset):
def __init__(self, X, y):
self.features = X
self.labels = y
def __getitem__(self, index):
return self.features[index], self.labels[index]
def __len__(self):
return self.labels.shape[0]
class NeuralNetwork(torch.nn.Module):
def __init__(self, num_inputs, num_outputs):
super().__init__()
self.layers = torch.nn.Sequential(
torch.nn.Linear(num_inputs, 30), torch.nn.ReLU(),
torch.nn.Linear(30, 20), torch.nn.ReLU(),
torch.nn.Linear(20, num_outputs),
)
def forward(self, x):
return self.layers(x)
def prepare_dataset():
X_train = torch.tensor([[-1.2, 3.1], [-0.9, 2.9], [-0.5, 2.6], [2.3, -1.1], [2.7, -1.5]])
y_train = torch.tensor([0, 0, 0, 1, 1])
X_test = torch.tensor([[-0.8, 2.8], [2.6, -1.6]])
y_test = torch.tensor([0, 1])
train_ds = ToyDataset(X_train, y_train)
test_ds = ToyDataset(X_test, y_test)
train_loader = DataLoader(
dataset=train_ds,
batch_size=2,
        shuffle=False,  # New: False because a DistributedSampler is used below
pin_memory=True,
drop_last=True,
        sampler=DistributedSampler(train_ds)  # New: distribute the batches across GPUs without overlapping samples
)
test_loader = DataLoader(dataset=test_ds, batch_size=2, shuffle=False)
return train_loader, test_loader
# New: wrap the training logic in a main() function
def main(rank, world_size, num_epochs):
    ddp_setup(rank, world_size)  # New: initialize the process group
train_loader, test_loader = prepare_dataset()
model = NeuralNetwork(num_inputs=2, num_outputs=2).to(rank)
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
    model = DDP(model, device_ids=[rank])  # New: wrap the model with DDP
for epoch in range(num_epochs):
        train_loader.sampler.set_epoch(epoch)  # New: ensure a different shuffle order in each epoch
model.train()
for features, labels in train_loader:
features, labels = features.to(rank), labels.to(rank)
logits = model(features)
loss = F.cross_entropy(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
            # Logging: every process prints its own loss (compare the sample outputs below)
            print(f"[GPU{rank}] Epoch: {epoch+1:03d}/{num_epochs:03d}"
                  f" | Batchsize {labels.shape[0]:03d}"
                  f" | Train/Val Loss: {loss:.2f}")
    destroy_process_group()  # New: cleanly exit distributed mode
if __name__ == "__main__":
world_size = torch.cuda.device_count()
    # New: use the environment variables set by torchrun
if "WORLD_SIZE" in os.environ:
world_size = int(os.environ["WORLD_SIZE"])
if "LOCAL_RANK" in os.environ:
rank = int(os.environ["LOCAL_RANK"])
else:
rank = 0
if rank == 0:
print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print(f"Number of GPUs available: {world_size}")
torch.manual_seed(123)
num_epochs = 3
main(rank, world_size, num_epochs)
To run this script, use the following command in your terminal (assuming you have 2 GPUs):
torchrun --nproc_per_node=2 your_script_name.py
How the Multi-GPU Script Works
At the bottom of the script is the `if __name__ == "__main__"` block, which executes when we run the file as a standalone Python script. We will not launch this script like a regular Python script (`python your_script.py`); instead, we will use PyTorch's modern `torchrun` utility.
When the script is launched with `torchrun`, it automatically starts one process per GPU and assigns each process a unique rank (ID). It also sets environment variables such as `WORLD_SIZE` (the total number of processes) and `LOCAL_RANK`. Our script reads these variables via `os.environ` and passes them to the `main()` function.
The `main()` function initializes the distributed environment via the `ddp_setup` helper function. It then loads the data, sets up the model, and runs the training loop. We transfer both the model and the data to the correct GPU using `.to(rank)`. The model is wrapped with `DistributedDataParallel` (DDP), which enables synchronized gradient updates across all GPUs during training. Once training is complete, we call `destroy_process_group()` to release the resources.
To ensure that each GPU receives a different subset of the training data, we use `sampler=DistributedSampler(train_ds)` in the training data loader.
Running the Script
After saving the code above to a file named `DDP-script-torchrun.py`, you can run it from the command line using the `torchrun` utility as follows:
torchrun --nproc_per_node=2 DDP-script-torchrun.py
If you want to run it on all available GPUs, you can use this command:
# Linux/macOS
torchrun --nproc_per_node=$(nvidia-smi -L | wc -l) DDP-script-torchrun.py
Note: Because this code uses a very small toy dataset, you would first need to enlarge the dataset in `prepare_dataset()` before running it on more GPUs; otherwise, some processes would be left without a full batch of data.
If you run this script on a machine with a single GPU, you should see the following output:
PyTorch version: 2.0.1+cu117
CUDA available: True
Number of GPUs available: 1
[GPU0] Epoch: 001/003 | Batchsize 002 | Train/Val Loss: 0.62
[GPU0] Epoch: 001/003 | Batchsize 002 | Train/Val Loss: 0.32
[GPU0] Epoch: 002/003 | Batchsize 002 | Train/Val Loss: 0.11
[GPU0] Epoch: 002/003 | Batchsize 002 | Train/Val Loss: 0.07
[GPU0] Epoch: 003/003 | Batchsize 002 | Train/Val Loss: 0.02
[GPU0] Epoch: 003/003 | Batchsize 002 | Train/Val Loss: 0.03
[GPU0] Training accuracy 1.0
[GPU0] Test accuracy 1.0
This output is similar to the single-GPU training output from before, which serves as a good sanity check.
Output on a 2-GPU Machine
Now, if we run the same command and code on a machine with two GPUs, we should see the following output:
PyTorch version: 2.0.1+cu117
CUDA available: True
Number of GPUs available: 2
[GPU1] Epoch: 001/003 | Batchsize 002 | Train/Val Loss: 0.60
[GPU0] Epoch: 001/003 | Batchsize 002 | Train/Val Loss: 0.59
[GPU0] Epoch: 002/003 | Batchsize 002 | Train/Val Loss: 0.16
[GPU1] Epoch: 002/003 | Batchsize 002 | Train/Val Loss: 0.17
[GPU0] Epoch: 003/003 | Batchsize 002 | Train/Val Loss: 0.05
[GPU1] Epoch: 003/003 | Batchsize 002 | Train/Val Loss: 0.05
[GPU1] Training accuracy 1.0
[GPU0] Training accuracy 1.0
[GPU1] Test accuracy 1.0
[GPU0] Test accuracy 1.0
As expected, we can see that some batches are processed on the first GPU (GPU0) and others on the second (GPU1). However, when printing the training and test accuracies, we see duplicated output lines. This is because each process (in other words, each GPU) prints the test accuracy independently.
If this bothers you, you can use each process's rank to control your print statements, so that only one process (for example, the one with rank 0) prints:
if rank == 0:  # Only print in the first process
print("Test accuracy: ", accuracy)
In short, this is how distributed training works via DDP. If you are interested in further details, I recommend checking out the official DistributedDataParallel API documentation.
Summary
Here is a summary of everything covered in this tutorial:
- PyTorch is an open-source library with three core components: a tensor library, automatic differentiation (autograd) functions, and deep learning utilities.
- PyTorch's tensor library is similar to array libraries like NumPy, with the major advantage of GPU support.
- Tensors in PyTorch are array-like data structures for representing multi-dimensional data such as scalars, vectors, and matrices.
- Autograd lets us conveniently train neural networks using backpropagation without manually deriving the gradients.
- PyTorch's deep learning utilities (`torch.nn`) provide building blocks for creating custom neural networks.
- The `Dataset` and `DataLoader` classes help set up efficient data loading pipelines.
- Training models on a CPU or a single GPU is the simplest option.
- If multiple GPUs are available, `DistributedDataParallel` (DDP) is the simplest way to accelerate training.