
A Roundup of Commonly Used PyTorch Code Snippets


          2021-06-18 11:21


Author: cvhuber
Source: CVHub
Editor: 极市平台 (Jishi Platform)

Editor's Note

To do good work, one must first sharpen one's tools. This article compiles commonly used code snippets for PyTorch, one of the most popular deep learning frameworks.

The code covered in this article can be downloaded from the GitHub repository below.

GitHub:
https://github.com/CVHuber/Pytorch_common_code

Tensor Operations

Basic tensor information

tensor = torch.randn(3, 4, 5)
print(tensor.type())  # data type
print(tensor.size())  # tensor shape
print(tensor.dim())   # number of dimensions

Named tensors

# Named tensors (available since PyTorch 1.3)
NCHW = ['N', 'C', 'H', 'W']
images = torch.randn(32, 3, 56, 56, names=NCHW)
images.sum('C')
images.select('C', index=0)

Converting between torch.Tensor and np.ndarray

ndarray = tensor.cpu().numpy()              # torch.Tensor -> np.ndarray
tensor = torch.from_numpy(ndarray).float()  # np.ndarray -> torch.Tensor

Converting between torch.Tensor and PIL.Image

# torch.Tensor -> PIL.Image
image = torchvision.transforms.functional.to_pil_image(tensor)
# PIL.Image -> torch.Tensor
path = r'./figure.jpg'
tensor = torchvision.transforms.functional.to_tensor(PIL.Image.open(path))

Converting between np.ndarray and PIL.Image

image = PIL.Image.fromarray(ndarray.astype(np.uint8))  # np.ndarray -> PIL.Image
ndarray = np.asarray(PIL.Image.open(path))             # PIL.Image -> np.ndarray

Tensor concatenation

torch.cat(): concatenates along a given existing dimension.

torch.stack(): stacks along a newly inserted dimension.

          tensor = torch.cat(list_of_tensors, dim=0) 
          tensor = torch.stack(list_of_tensors, dim=0)
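
For intuition, a minimal sketch of the resulting shapes (the 3x4 tensors here are illustrative):

a, b, c = torch.randn(3, 4), torch.randn(3, 4), torch.randn(3, 4)
print(torch.cat([a, b, c], dim=0).shape)    # torch.Size([9, 4]): the existing dim grows
print(torch.stack([a, b, c], dim=0).shape)  # torch.Size([3, 3, 4]): a new dim is inserted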

Converting integer labels to one-hot encoding

# PyTorch labels start from 0 by default
tensor = torch.tensor([0, 2, 1, 3])
N = tensor.size(0)
num_classes = 4
one_hot = torch.zeros(N, num_classes).long()
one_hot.scatter_(dim=1, index=torch.unsqueeze(tensor, dim=1), src=torch.ones(N, num_classes).long())
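
As a side note, PyTorch 1.1+ ships torch.nn.functional.one_hot, which does the same in one call; a sketch reusing tensor and num_classes from above:

import torch.nn.functional as F
one_hot = F.one_hot(tensor, num_classes=num_classes)  # shape (N, num_classes), dtype int64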

Matrix multiplication

# Matrix multiplication: (m*n) * (n*p) -> (m*p).
          result = torch.mm(tensor1, tensor2)
          # Batch matrix multiplication: (b*m*n) * (b*n*p) -> (b*m*p)
          result = torch.bmm(tensor1, tensor2)
          # Element-wise multiplication.
          result = tensor1 * tensor2

Model Definition

Example: a two-layer convolutional network

class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(7*7*32, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out

model = ConvNet(num_classes).to(device)

Counting a model's total parameters

          num_parameters = sum(torch.numel(parameter) for parameter in model.parameters())
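
A common variant counts only the trainable parameters, e.g. after freezing part of a network; a one-line sketch:

num_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)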

Model weight initialization

model.modules(): recursively iterates over all submodules of the model.

model.children(): iterates only over the model's immediate children (a small check illustrating the difference follows the snippet below).

for layer in model.modules():
    if isinstance(layer, torch.nn.Conv2d):
        torch.nn.init.kaiming_normal_(layer.weight, mode='fan_out', nonlinearity='relu')
        if layer.bias is not None:
            torch.nn.init.constant_(layer.bias, val=0.0)
    elif isinstance(layer, torch.nn.BatchNorm2d):
        torch.nn.init.constant_(layer.weight, val=1.0)
        torch.nn.init.constant_(layer.bias, val=0.0)
    elif isinstance(layer, torch.nn.Linear):
        torch.nn.init.xavier_normal_(layer.weight)
        if layer.bias is not None:
            torch.nn.init.constant_(layer.bias, val=0.0)

# A specific layer can also be initialized directly from an existing tensor:
layer.weight = torch.nn.Parameter(tensor)
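
To make the modules()/children() distinction concrete, a small check against the two-layer ConvNet defined earlier (the counts assume that exact architecture):

print(len(list(model.children())))  # 3: layer1, layer2, fc
print(len(list(model.modules())))   # 12: the ConvNet itself plus every nested submodule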

Loading a GPU-saved model onto the CPU

model.load_state_dict(torch.load('model.pth', map_location='cpu'))
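
map_location also accepts a torch.device or a device string, so the same call can remap a checkpoint onto any device; a sketch (the file name is illustrative):

model.load_state_dict(torch.load('model.pth', map_location=torch.device('cpu')))
model.load_state_dict(torch.load('model.pth', map_location='cuda:0'))  # load straight onto GPU 0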

Data Processing

Computing a dataset's mean and standard deviation

import numpy as np

def compute_mean_and_std(dataset):
    # Input: a PyTorch dataset yielding PIL Images; output: per-channel mean and std.
    # Note: np.asarray on a PIL Image yields RGB channel order.
    mean_r = 0
    mean_g = 0
    mean_b = 0
    for img, _ in dataset:
        img = np.asarray(img)  # PIL Image -> numpy array of shape (H, W, 3)
        mean_r += np.mean(img[:, :, 0])
        mean_g += np.mean(img[:, :, 1])
        mean_b += np.mean(img[:, :, 2])

    mean_r /= len(dataset)
    mean_g /= len(dataset)
    mean_b /= len(dataset)

    diff_r = 0
    diff_g = 0
    diff_b = 0
    N = 0
    for img, _ in dataset:
        img = np.asarray(img)
        diff_r += np.sum(np.power(img[:, :, 0] - mean_r, 2))
        diff_g += np.sum(np.power(img[:, :, 1] - mean_g, 2))
        diff_b += np.sum(np.power(img[:, :, 2] - mean_b, 2))
        N += np.prod(img[:, :, 0].shape)

    std_r = np.sqrt(diff_r / N)
    std_g = np.sqrt(diff_g / N)
    std_b = np.sqrt(diff_b / N)

    mean = (mean_r.item() / 255.0, mean_g.item() / 255.0, mean_b.item() / 255.0)
    std = (std_r.item() / 255.0, std_g.item() / 255.0, std_b.item() / 255.0)
    return mean, std

Common training and validation data preprocessing

Here, the ToTensor operation converts a PIL.Image, or an np.ndarray of shape H×W×C with values in [0, 255], into a torch.Tensor of shape C×H×W with values in [0.0, 1.0].

train_transform = torchvision.transforms.Compose([
    torchvision.transforms.RandomResizedCrop(size=224, scale=(0.08, 1.0)),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
                                     std=(0.229, 0.224, 0.225)),
])
val_transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize(256),
    torchvision.transforms.CenterCrop(224),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize(mean=(0.485, 0.456, 0.406),
                                     std=(0.229, 0.224, 0.225)),
])
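
These transforms take effect once attached to a dataset; a minimal sketch of wiring them into a loader (the ./data/train path, batch size, and worker count are illustrative):

train_dataset = torchvision.datasets.ImageFolder('./data/train', transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True, num_workers=4)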

Model Training and Testing

Classification model training code

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.to(device)
        labels = labels.to(device)

        # Forward pass and loss
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print('Epoch: [{}/{}], Step: [{}/{}], Loss: {}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))

Classification model testing code

# Test the model
model.eval()  # eval mode (batch norm uses running mean/variance
              # instead of mini-batch mean/variance)
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    print('Test accuracy of the model on the 10000 test images: {} %'.format(100 * correct / total))

Custom loss function

class MyLoss(torch.nn.Module):
    def __init__(self):
        super(MyLoss, self).__init__()

    def forward(self, x, y):
        loss = torch.mean((x - y) ** 2)
        return loss

Modifying a pretrained model

class Net(nn.Module):
    def __init__(self, model):
        super(Net, self).__init__()
        # Drop the model's last two layers
        self.resnet_layer = nn.Sequential(*list(model.children())[:-2])
        # Custom layers
        self.transition_layer = nn.ConvTranspose2d(2048, 2048, kernel_size=14, stride=3)
        self.pool_layer = nn.MaxPool2d(32)
        self.Linear_layer = nn.Linear(2048, 8)

    def forward(self, x):
        x = self.resnet_layer(x)
        x = self.transition_layer(x)
        x = self.pool_layer(x)
        x = x.view(x.size(0), -1)
        x = self.Linear_layer(x)
        return x

resnet = models.resnet50(pretrained=True)
model = Net(resnet)

Learning rate decay strategies

# Define the optimizer
optimizer_ExpLR = torch.optim.SGD(net.parameters(), lr=0.1)
# Exponential decay
ExpLR = torch.optim.lr_scheduler.ExponentialLR(optimizer_ExpLR, gamma=0.98)
# Fixed-step decay
optimizer_StepLR = torch.optim.SGD(net.parameters(), lr=0.1)
StepLR = torch.optim.lr_scheduler.StepLR(optimizer_StepLR, step_size=30, gamma=0.65)  # e.g. decay every 30 epochs
# Multi-step decay (milestones must be strictly increasing)
optimizer_MultiStepLR = torch.optim.SGD(net.parameters(), lr=0.1)
MultiStepLR = torch.optim.lr_scheduler.MultiStepLR(optimizer_MultiStepLR,
                                                   milestones=[200, 300, 320, 340], gamma=0.8)
# Cosine annealing decay
optimizer_CosineLR = torch.optim.SGD(net.parameters(), lr=0.1)
CosineLR = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer_CosineLR, T_max=150, eta_min=0)
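
Note that a scheduler only updates the learning rate when stepped; a minimal sketch of the usual per-epoch pattern, reusing ExpLR and optimizer_ExpLR from above:

for epoch in range(num_epochs):
    # ... run one epoch of training with optimizer_ExpLR ...
    ExpLR.step()  # update the learning rate once per epoch
    print(optimizer_ExpLR.param_groups[0]['lr'])  # inspect the current learning rate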

Saving and loading checkpoints

import os
import shutil

# Load a checkpoint
if resume:
    model_path = os.path.join('model', 'best_checkpoint.pth.tar')
    assert os.path.isfile(model_path)
    checkpoint = torch.load(model_path)
    best_acc = checkpoint['best_acc']
    start_epoch = checkpoint['epoch']
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    print('Load checkpoint at epoch {}.'.format(start_epoch))
    print('Best accuracy so far {}.'.format(best_acc))

# Train the model
for epoch in range(start_epoch, num_epochs):
    ...
    # Test the model
    ...
    # Save a checkpoint
    is_best = current_acc > best_acc
    best_acc = max(current_acc, best_acc)
    checkpoint = {
        'best_acc': best_acc,
        'epoch': epoch + 1,
        'model': model.state_dict(),
        'optimizer': optimizer.state_dict(),
    }
    model_path = os.path.join('model', 'checkpoint.pth.tar')
    best_model_path = os.path.join('model', 'best_checkpoint.pth.tar')
    torch.save(checkpoint, model_path)
    if is_best:
        shutil.copy(model_path, best_model_path)

Notes

• Once model(x) is defined, switch the model's state with model.train() and model.eval().

• Wrap code that does not need gradient computation in a with torch.no_grad() block.

• The difference between model.eval() and torch.no_grad(): the former switches the model into evaluation mode, where layers such as BatchNorm and Dropout compute differently than during training; the latter disables autograd on tensors, saving memory and speeding up computation.

• torch.nn.CrossEntropyLoss is equivalent to torch.nn.functional.log_softmax followed by torch.nn.NLLLoss (see the quick check after this list).

• ReLU can use its inplace option to reduce GPU memory consumption.

• Using half-precision floats via half() saves compute resources and speeds up the model, but beware of stability issues caused by the reduced numerical precision.
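
A quick numerical sanity check of the CrossEntropyLoss equivalence noted above (the logits and targets are illustrative):

logits = torch.randn(4, 10)
targets = torch.tensor([1, 0, 3, 9])
ce = torch.nn.CrossEntropyLoss()(logits, targets)
nll = torch.nn.NLLLoss()(torch.nn.functional.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, nll))  # True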
