To make PyTorch training faster, you need to master these 17 methods

Author: LORENZ KUHN
Compiled by 机器之心
Editor: 陈萍
Master these 17 methods and accelerate your PyTorch deep learning training with the least possible effort.


Use mixed-precision (16-bit) training with torch.cuda.amp: run the forward pass under autocast() and scale the loss with a GradScaler before calling backward():

import torch

# Creates once at the beginning of training
scaler = torch.cuda.amp.GradScaler()

for data, label in data_iter:
    optimizer.zero_grad()
    # Casts operations to mixed precision
    with torch.cuda.amp.autocast():
        loss = model(data)
    # Scales the loss, and calls backward()
    # to create scaled gradients
    scaler.scale(loss).backward()
    # Unscales gradients and calls
    # or skips optimizer.step()
    scaler.step(optimizer)
    # Updates the scale for next iteration
    scaler.update()
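The loop above assumes that model, optimizer and data_iter are already defined and that model(data) returns a scalar loss. A minimal sketch of such a setup (the toy model, synthetic data and hyperparameters below are assumptions made for illustration, not part of the original article) could look like this:

# Illustrative setup assumed by the AMP snippet above. The model, data and
# hyperparameters are made up for this sketch and require a CUDA device.
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    # forward() returns a scalar loss directly, matching `loss = model(data)` above
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(128, 32)
        self.decoder = nn.Linear(32, 128)

    def forward(self, data):
        recon = self.decoder(torch.relu(self.encoder(data)))
        return nn.functional.mse_loss(recon, data)

device = "cuda"                      # torch.cuda.amp operates on CUDA tensors
model = ToyModel().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Synthetic (data, label) pairs standing in for a real DataLoader
data_iter = [(torch.randn(64, 128, device=device),
              torch.zeros(64, device=device))
             for _ in range(10)]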
Use gradient accumulation: sum gradients over several backward passes and call optimizer.step() only every accumulation_steps iterations, which simulates training with a larger batch size:

model.zero_grad()                                   # Reset gradients tensors
for i, (inputs, labels) in enumerate(training_set):
    predictions = model(inputs)                     # Forward pass
    loss = loss_function(predictions, labels)       # Compute loss function
    loss = loss / accumulation_steps                # Normalize our loss (if averaged)
    loss.backward()                                 # Backward pass
    if (i + 1) % accumulation_steps == 0:           # Wait for several backward steps
        optimizer.step()                            # Now we can do an optimizer step
        model.zero_grad()                           # Reset gradients tensors
        if (i + 1) % evaluation_steps == 0:         # Evaluate the model when we...
            evaluate_model()                        # ...have no gradients accumulated
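Dividing the loss by accumulation_steps keeps the accumulated gradient equal to the gradient of the loss averaged over the whole effective batch. As an illustrative example (numbers assumed, not from the article), with a DataLoader batch size of 8 and accumulation_steps = 4, each optimizer.step() uses gradients from 32 samples, i.e. an effective batch size of 32. Note also that the evaluation check is nested inside the update check, so evaluate_model() only fires when (i + 1) is a multiple of both accumulation_steps and evaluation_steps, guaranteeing that evaluation never runs with partially accumulated gradients.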

程序員GitHub,現(xiàn)已正式上線!
接下來我們將會在該公眾號上,專注為大家分享GitHub上有趣的開源庫包括Python,Java,Go,前端開發(fā)等優(yōu)質(zhì)的學(xué)習(xí)資源和技術(shù),分享一些程序員圈的新鮮趣事。
年度爆款文案
6).30個Python奇淫技巧集?
點這里,獲取騰訊課堂暢學(xué)卡
評論
圖片
表情

