[Essentials] Major PyTorch update: automatic mixed precision training is coming!
AI Editor: 我是小將
Mixed precision training speeds up model training and reduces GPU memory usage with little to no loss in model performance, and most mainstream deep learning frameworks now support it. In PyTorch, mixed precision training has so far relied mainly on NVIDIA's open-source apex library. That is about to change: PyTorch will ship built-in support for mixed precision training, and automatic mixed precision at that:

torch.cuda.amp.autocast: automatically chooses the precision for each GPU op, improving training speed without hurting model accuracy.
torch.cuda.amp.GradScaler: scales the gradients (loss scaling) to help training converge, since float16 gradients are prone to underflow (becoming too small).
Used together, they give automatic mixed precision training:
from torch.cuda.amp import autocast, GradScaler

# Creates model and optimizer in default precision
model = Net().cuda()
optimizer = optim.SGD(model.parameters(), ...)

# Creates a GradScaler once at the beginning of training.
scaler = GradScaler()

for epoch in epochs:
    for input, target in data:
        optimizer.zero_grad()

        # Runs the forward pass with autocasting.
        with autocast():
            output = model(input)
            loss = loss_fn(output, target)

        # Scales loss.  Calls backward() on scaled loss to create scaled gradients.
        # Backward passes under autocast are not recommended.
        # Backward ops run in the same precision that autocast used for corresponding forward ops.
        scaler.scale(loss).backward()

        # scaler.step() first unscales the gradients of the optimizer's assigned params.
        # If these gradients do not contain infs or NaNs, optimizer.step() is then called,
        # otherwise, optimizer.step() is skipped.
        scaler.step(optimizer)

        # Updates the scale for next iteration.
        scaler.update()
As the code shows, to prevent gradient underflow, scaler.scale(loss).backward() multiplies the loss by a scale factor, so every gradient produced during backward is multiplied by that same factor; this keeps the gradients at a large enough magnitude that they don't flush to zero. Since we don't want the scale factor to affect the learning rate, scaler.step(optimizer) first unscales the gradients it is about to apply and then performs the update; if the gradients contain infs or NaNs, the optimizer skips this iteration.
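To make this concrete, here is a rough conceptual sketch of what the scale / unscale / skip logic amounts to. This is not GradScaler's actual implementation: it assumes a fixed scale factor for illustration and reuses the model, optimizer and loss from the example above.
import torch

scale = 2.0 ** 16                        # illustrative fixed loss scale
                                         # (the real GradScaler adjusts it dynamically)

(loss * scale).backward()                # every gradient is now multiplied by `scale`

found_inf = False
for p in model.parameters():
    if p.grad is not None:
        p.grad.div_(scale)               # unscale so the learning rate is unaffected
        if not torch.isfinite(p.grad).all():
            found_inf = True             # inf/NaN gradient detected this iteration

if not found_inf:
    optimizer.step()                     # only update when all gradients are finite
# GradScaler.update() would then lower or raise the scale depending on whether infs/NaNs appeared.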
You can also clip the gradients before the update:
scaler = GradScaler()

for epoch in epochs:
    for input, target in data:
        optimizer.zero_grad()
        with autocast():
            output = model(input)
            loss = loss_fn(output, target)
        scaler.scale(loss).backward()

        # Unscales the gradients of optimizer's assigned params in-place
        scaler.unscale_(optimizer)

        # Since the gradients of optimizer's assigned params are unscaled, clips as usual:
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)

        # optimizer's gradients are already unscaled, so scaler.step does not unscale them,
        # although it still skips optimizer.step() if the gradients contain infs or NaNs.
        scaler.step(optimizer)

        # Updates the scale for next iteration.
        scaler.update()
Mixed precision training of course also has to work with distributed training. Because autocast state is thread-local, the following cases need attention:
If you use torch.nn.DataParallel:
Here there is still only one process, but each GPU runs its forward pass in a separate thread, so the following does not work:
model = MyModel()
dp_model = nn.DataParallel(model)

# Sets autocast in the main thread
with autocast():
    # dp_model's internal threads won't autocast.  The main thread's autocast state has no effect.
    output = dp_model(input)
    # loss_fn still autocasts, but it's too late...
    loss = loss_fn(output)
Instead, you need to decorate the model's forward method with autocast:
class MyModel(nn.Module):
    ...
    @autocast()
    def forward(self, input):
        ...

# Alternatively
class MyModel(nn.Module):
    ...
    def forward(self, input):
        with autocast():
            ...

model = MyModel()
dp_model = nn.DataParallel(model)

with autocast():
    output = dp_model(input)
    loss = loss_fn(output)
If you use torch.nn.parallel.DistributedDataParallel:
This is usually run with one GPU per process, in which case the usage shown above works unchanged. But if a single process drives multiple GPUs, the same problem as with DataParallel appears, and you again need to decorate the model's forward with autocast.
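For reference, here is a minimal sketch of the common one-process-per-GPU DistributedDataParallel setup with AMP. It assumes the process group has already been initialized and that Net, loss_fn, data and local_rank (provided by the launcher) are defined as in the examples above.
import torch
import torch.nn as nn
from torch.cuda.amp import autocast, GradScaler

# Assumes torch.distributed.init_process_group(...) was already called in this process.
model = Net().to(local_rank)
ddp_model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
scaler = GradScaler()

for input, target in data:
    optimizer.zero_grad()
    with autocast():                     # one GPU per process: same usage as the single-GPU case
        output = ddp_model(input)
        loss = loss_fn(output, target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()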
For more details, see: https://pytorch.org/docs/master/notes/amp_examples.html#amp-examples