
Sharing Our Approach to Converting PyTorch Models to Msnhnet


          2020-09-21 02:01



GiantPandaCV introduction: "This article presents MsnhNet's model conversion approach. Most people working on CV know the saying: cherish life, stay away from model conversion. Yet when you want to deploy a trained model (for example, one trained with PyTorch), conversion is an unavoidable step, and that is where the headaches begin. Why does the model converted by some script throw errors? Why are the converted model's inference results wrong? A clear analytical approach is therefore indispensable when converting models. This article shares how the recently open-sourced inference framework MsnhNet converts an original PyTorch model fairly elegantly; we hope the ideas presented here can inspire readers who need to do model conversion."


1. Converting the Network Structure

Converting the network structure is relatively complex, because it involves many different ops and the primitive operations behind them.

• "Approach 1": build the network from the output of print
  • Pros: simple and easy to use
  • Cons: for most networks, print does not fully reveal the structure; it is only viable for simple networks.
• Implementation:
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.bn1   = nn.BatchNorm2d(6, eps=1e-5, momentum=0.1)
        self.relu1 = nn.ReLU()
        self.pool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.bn2   = nn.BatchNorm2d(16, eps=1e-5, momentum=0.1)
        self.relu2 = nn.ReLU()
        self.pool2 = nn.MaxPool2d(2)

    def forward(self, x):
        y = self.conv1(x)
        y = self.bn1(y)
        y = self.relu1(y)
        y = self.pool1(y)
        y = self.conv2(y)
        y = self.bn2(y)
        y = self.relu2(y)
        y = self.pool2(y)
        return y

model = Model()  # renamed from `nn` to avoid shadowing the torch.nn import
print(model)
• Result: clearly this works for a network assembled purely from nn.Module members.
Model(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (bn1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu1): ReLU()
  (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu2): ReLU()
  (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
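As a side note, the same module-level information can be pulled programmatically instead of parsing the printed text; a minimal sketch (our addition, continuing the example above):

# Walk the registered submodules of `model` from the example above.
for name, module in model.named_modules():
    if name:  # skip the unnamed root module
        print(name, "->", module)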
• However, if operations are added inside forward, this approach fails.
• Implementation:
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.bn1   = nn.BatchNorm2d(6, eps=1e-5, momentum=0.1)
        self.relu1 = nn.ReLU()
        self.pool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.bn2   = nn.BatchNorm2d(16, eps=1e-5, momentum=0.1)
        self.relu2 = nn.ReLU()
        self.pool2 = nn.MaxPool2d(2)

    def forward(self, x):
        y = self.conv1(x)
        y = self.bn1(y)
        y = self.relu1(y)
        y = self.pool1(y)
        y = self.conv2(y)
        y = self.bn2(y)
        y = self.relu2(y)
        y = self.pool2(y)
        y = torch.flatten(y)
        return y

model = Model()  # renamed from `nn` to avoid shadowing the torch.nn import
print(model)
• Result: clearly the flatten operation inside forward was not exported.
Model(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (bn1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu1): ReLU()
  (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (bn2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu2): ReLU()
  (pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
• "Approach 2": a technique similar to Windows hooking (idea borrowed from pytorch_to_caffe). Each PyTorch op is intercepted just before it executes so its arguments can be captured, and the network is built from that information.
  • Pros: almost every PyTorch op can be exported.
  • Cons: complex to implement and easy to get wrong; a mistake can corrupt PyTorch's own results.
• Implementation: define a Hook class, rewrite each op, and swap the rewrite in for the original so the op's arguments can be captured. Layer-to-layer connectivity is tracked through each tensor's _cdata field, which serves as a unique ID.
import torch
import torch.nn as nn
from torchsummary import summary
import torch.nn.functional as F

logMsg = True
ccc = []
# The Hook (interception) class
class Hook(object):
    hookInited = False
    def __init__(self, raw, replace, **kwargs):
        self.obj = replace  # the replacement op
        self.raw = raw      # the original op

    def __call__(self, *args, **kwargs):
        if not Hook.hookInited:  # until hooking is enabled, pass straight through
            return self.raw(*args, **kwargs)
        else:                    # otherwise run the intercepting implementation
            out = self.obj(self.raw, *args, **kwargs)
            return out

def log(*args):
    if logMsg:
        print(*args)

# Replacement implementation of the original conv2d function
def _conv2d(raw, inData, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
    # Layer connectivity is tracked via the tensor's _cdata, which acts like a unique ID
    # Unique ID of the input tensor
    log("conv2d-i", inData._cdata)
    x = raw(inData, weight, bias, stride, padding, dilation, groups)
    ccc.append(x)  # keep the output alive; otherwise memory reuse would make _cdata non-unique
    # At this point the network could be built from the conv2d arguments
    # msnhnet.buildConv2d(...)
    # Unique ID of the output tensor
    log("conv2d-o", x._cdata)
    return x

# hooked op            original op  custom op
F.conv2d        =   Hook(F.conv2d, _conv2d)
• Complete demo:
import torch
import torch.nn as nn
from torchsummary import summary
import torch.nn.functional as F

logMsg = True
ccc = []
# The Hook (interception) class
class Hook(object):
    hookInited = False
    def __init__(self, raw, replace, **kwargs):
        self.obj = replace  # the replacement op
        self.raw = raw      # the original op

    def __call__(self, *args, **kwargs):
        if not Hook.hookInited:  # until hooking is enabled, pass straight through
            return self.raw(*args, **kwargs)
        else:                    # otherwise run the intercepting implementation
            out = self.obj(self.raw, *args, **kwargs)
            return out

def log(*args):
    if logMsg:
        print(*args)

# Replacement implementation of the original conv2d function
def _conv2d(raw, inData, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
    # Layer connectivity is tracked via the tensor's _cdata, which acts like a unique ID
    # Unique ID of the input tensor
    log("conv2d-i", inData._cdata)
    x = raw(inData, weight, bias, stride, padding, dilation, groups)
    ccc.append(x)  # keep the output alive; otherwise memory reuse would make _cdata non-unique
    # At this point the network could be built from the conv2d arguments
    # msnhnet.buildConv2d(...)
    # Unique ID of the output tensor
    log("conv2d-o", x._cdata)
    return x

def _relu(raw, inData, inplace=False):
    log("relu-i", inData._cdata)
    x = raw(inData, False)  # force inplace=False so the output is a fresh tensor
    ccc.append(x)
    log("relu-o", x._cdata)
    return x

def _batch_norm(raw, inData, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-5):
    log("bn-i", inData._cdata)
    x = raw(inData, running_mean, running_var, weight, bias, training, momentum, eps)
    ccc.append(x)
    log("bn-o", x._cdata)
    return x

def _flatten(raw, *args):
    log("flatten-i", args[0]._cdata)
    x = raw(*args)
    ccc.append(x)
    log("flatten-o", x._cdata)
    return x

# hooked op                original op       custom op
F.conv2d        =   Hook(F.conv2d, _conv2d)

F.batch_norm    =   Hook(F.batch_norm, _batch_norm)
F.relu          =   Hook(F.relu, _relu)
torch.flatten   =   Hook(torch.flatten, _flatten)

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.bn1   = nn.BatchNorm2d(6, eps=1e-5, momentum=0.1)
        self.relu1 = nn.ReLU()

    def forward(self, x):
        y = self.conv1(x)
        y = self.bn1(y)
        y = self.relu1(y)
        y = torch.flatten(y)
        return y

input_var = torch.rand(1, 1, 28, 28)  # Variable is deprecated; a plain tensor works
model = Model()  # renamed from `nn` to avoid shadowing the torch.nn import
model.eval()
Hook.hookInited = True
res = model(input_var)
• Result: the flatten op is exported as well, and every op's input ID can be matched to the output ID of an earlier op. This reveals the connectivity between layers, and from it the msnhnet can be built.
conv2d-i 2748363239504
conv2d-o 2748363238224
bn-i 2748363238224
bn-o 2748363242832
relu-i 2748363242832
relu-o 2748363235152
flatten-i 2748363235152
flatten-o 2748363242064
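As an aside, here is a minimal sketch (a hypothetical helper, not MsnhNet code) of how these recorded (op, input ID, output ID) triples can be stitched into parent/child layer relations:

# IDs taken from the log output above.
records = [
    ("conv2d",  2748363239504, 2748363238224),
    ("bn",      2748363238224, 2748363242832),
    ("relu",    2748363242832, 2748363235152),
    ("flatten", 2748363235152, 2748363242064),
]
producer = {}  # output ID -> index of the layer that produced it
for idx, (op, in_id, out_id) in enumerate(records):
    parent = producer.get(in_id)  # None means the network input
    print("#%d %s: parent layer = %s" % (idx, op, parent))
    producer[out_id] = idx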

2. Converting the Parameters

• "Approach 1": export directly from PyTorch's state_dict. MsnhNet and PyTorch share the same memory layout (NCHW) and the same parameter order for BN layers (scale, bias, mean, var), so the parameters just need to be extracted one by one and written out in binary.
  • Pros: parameters can be exported without knowing the network's execution structure; simple and easy to use.
  • Cons: fails when the order in which the network consumes parameters differs from the order in which they are stored.
import torchvision.models as models
import torch
from struct import pack

md = models.resnet18(pretrained=True)
md.to("cpu")
md.eval()
val = []
dd = 0

for name in md.state_dict():
    if "num_batches_tracked" not in name:
        c = md.state_dict()[name].data.flatten().numpy().tolist()
        dd = dd + len(c)
        print(name, ":", len(c))
        val.extend(c)

with open("resnet18.msnhbin", "wb") as f:  # fixed: the model is resnet18, not alexnet
    for i in val:
        f.write(pack('f', i))
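A quick sanity check can be appended to the script above (our addition, not part of the original): the number of floats written should equal the total element count of the exported tensors.

# Verify that every exported element was counted.
expected = sum(v.numel() for k, v in md.state_dict().items()
               if "num_batches_tracked" not in k)
assert dd == expected, (dd, expected)
print("exported", dd, "floats")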

Note the line if "num_batches_tracked" not in name: above. This is a PyTorch gotcha: since version 0.4.1, BatchNorm layers carry an extra num_batches_tracked buffer that counts the number of forward batches seen during training. The relevant source (PyTorch 0.4.1):

if self.training and self.track_running_stats:
    self.num_batches_tracked += 1
    if self.momentum is None:  # use cumulative moving average
        exponential_average_factor = 1.0 / self.num_batches_tracked.item()
    else:  # use exponential moving average
        exponential_average_factor = self.momentum

The officially released pretrained models were produced before PyTorch 0.4, so when loading pretrained parameters, "num_batches_tracked" has to be filtered out.
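A minimal loading sketch (our illustration; "checkpoint.pth" is a placeholder path): the dict comprehension drops the buffer when the checkpoint has it but the model does not, and strict=False tolerates the opposite case of an old checkpoint missing the buffer.

import torch
import torchvision.models as models

model = models.resnet18()
state = torch.load("checkpoint.pth", map_location="cpu")  # placeholder path
state = {k: v for k, v in state.items() if "num_batches_tracked" not in k}
model.load_state_dict(state, strict=False)  # tolerate missing BN buffers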

• "Approach 2": reuse the Hook from before to extract and stage each op's parameters as the op runs, then save everything at the end.
  • Pros: parameters and structure are exported together, which guarantees the parameters are consistent with the network's execution order.
  • Cons: the network's execution order must be obtained (by running it) before the conversion can complete.
• Implementation:
...

m_weights = []

def _conv2d(raw, inData, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):

    x = raw(inData, weight, bias, stride, padding, dilation, groups)

    if Hook.hookInited:
        log("conv2d-i", inData._cdata)
        ccc.append(x)
        log("conv2d-o", x._cdata)

        useBias = True
        if bias is None:
            useBias = False

        m_weights.extend(weight.numpy().flatten().tolist())  # stage the weights

        if useBias:
            m_weights.extend(bias.numpy().flatten().tolist())  # stage the bias

        msnhnet.checkInput(inData, sys._getframe().f_code.co_name)
        msnhnet.buildConv2d(str(x._cdata), x.size()[1], weight.size()[2], weight.size()[3],
                            padding[0], padding[1], stride[0], stride[1], dilation[0], dilation[1], groups, useBias)
    return x

...

def trans(net, inputVar, msnhnet_path, msnhbin_path):
    Hook.hookInited = True
    msnhnet.buildConfig(str(id(inputVar)), inputVar.size())
    net.forward(inputVar)

    with open(msnhnet_path, "w") as f1:
        f1.write(msnhnet.net)

    with open(msnhbin_path, "wb") as f:  # save the parameters
        for i in m_weights:
            f.write(pack('f', i))
    Hook.hookInited = False


3. Writing the Conversion Code in Detail

Here is an excerpt of the code that builds the MsnhNet description; the full code lives at https://github.com/msnh2012/Msnhnet/blob/master/tools/pytorch2Msnhnet/PytorchToMsnhnet.py:

from collections import OrderedDict
import sys

class Msnhnet:
    def __init__(self):
        self.inAddr = ""
        self.net = ""
        self.index = 0
        self.names = []
        self.indexes = []

    def setNameAndIdx(self, name, ids):
        self.names.append(name)
        self.indexes.append(ids)

    def getIndexFromName(self, name):
        ids = self.indexes[self.names.index(name)]
        return ids

    def getLastVal(self):
        return self.indexes[-1]

    def getLastKey(self):
        return self.names[-1]

    def checkInput(self, inAddr, fun):

        if self.index == 0:
            return

        if str(inAddr._cdata) != self.getLastKey():
            try:
                ID = self.getIndexFromName(str(inAddr._cdata))
                self.buildRoute(str(inAddr._cdata), str(ID), False)
            except:
                raise NotImplementedError("last op is not supported " + fun + str(inAddr._cdata))

    def buildConfig(self, inAddr, shape):
        self.inAddr = inAddr
        self.net = self.net + "config:\n"
        self.net = self.net + "  batch: " + str(int(shape[0])) + "\n"
        self.net = self.net + "  channels: " + str(int(shape[1])) + "\n"
        self.net = self.net + "  height: " + str(int(shape[2])) + "\n"
        self.net = self.net + "  width: " + str(int(shape[3])) + "\n"

    def buildConv2d(self, name, filters, kSizeX, kSizeY, paddingX, paddingY, strideX, strideY, dilationX, dilationY, groups, useBias):
        self.setNameAndIdx(name, self.index)
        self.net = self.net + "#" + str(self.index) + "\n"
        self.index = self.index + 1
        self.net = self.net + "conv:\n"
        self.net = self.net + "  filters: " + str(int(filters)) + "\n"
        self.net = self.net + "  kSizeX: " + str(int(kSizeX)) + "\n"
        self.net = self.net + "  kSizeY: " + str(int(kSizeY)) + "\n"
        self.net = self.net + "  paddingX: " + str(int(paddingX)) + "\n"
        self.net = self.net + "  paddingY: " + str(int(paddingY)) + "\n"
        self.net = self.net + "  strideX: " + str(int(strideX)) + "\n"
        self.net = self.net + "  strideY: " + str(int(strideY)) + "\n"
        self.net = self.net + "  dilationX: " + str(int(dilationX)) + "\n"
        self.net = self.net + "  dilationY: " + str(int(dilationY)) + "\n"
        self.net = self.net + "  groups: " + str(int(groups)) + "\n"
        self.net = self.net + "  useBias: " + str(int(useBias)) + "\n"
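To make the emitted text format concrete, here is a small usage sketch of the class above (the addresses and conv values are made up for illustration):

msnhnet = Msnhnet()
msnhnet.buildConfig("140234", (1, 3, 224, 224))
msnhnet.buildConv2d("140235", 64, 7, 7, 3, 3, 2, 2, 1, 1, 1, False)
print(msnhnet.net)
# config:
#   batch: 1
#   channels: 3
#   height: 224
#   width: 224
# #0
# conv:
#   filters: 64
#   kSizeX: 7
#   ...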

Pytorch2MsnhNet then uses the Hook technique described above to build, during the forward pass, the MsnhNet model structure corresponding to the PyTorch model.
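For reference, a hook for one more op follows exactly the same pattern; below is a hypothetical _linear hook (our sketch, which may differ from the converter's actual code; the commented build call is an assumption):

def _linear(raw, inData, weight, bias=None):
    x = raw(inData, weight, bias)
    if Hook.hookInited:
        log("linear-i", inData._cdata)
        ccc.append(x)
        m_weights.extend(weight.numpy().flatten().tolist())    # stage weights
        if bias is not None:
            m_weights.extend(bias.numpy().flatten().tolist())  # stage bias
        msnhnet.checkInput(inData, sys._getframe().f_code.co_name)
        # a fully-connected build call would go here, e.g.:
        # msnhnet.buildConnect(str(x._cdata), x.size()[1], bias is not None)
        log("linear-o", x._cdata)
    return x

F.linear = Hook(F.linear, _linear)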

At this point we have both the MsnhNet model structure file and the weight file, and MsnhNet can load the model for inference.

4. Supported OPs and Conversion Examples

The OPs that Pytorch2MsnhNet currently supports are listed below; a sketch of how the arithmetic operators might be intercepted follows the list.

- conv2d
- max_pool2d
- avg_pool2d
- adaptive_avg_pool2d
- linear
- flatten
- dropout
- batch_norm
- interpolate (nearest, bilinear)
- cat
- elu
- selu
- relu
- relu6
- leaky_relu
- tanh
- softmax
- sigmoid
- softplus
- abs
- acos
- asin
- atan
- cos
- cosh
- sin
- sinh
- tan
- exp
- log
- log10
- mean
- permute
- view
- contiguous
- sqrt
- pow
- sum
- pad
- + | - | x | / | += | -= | x= | /=
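The arithmetic operators at the end of the list cannot be hooked through torch.nn.functional; the usual trick in this family of converters (pytorch_to_caffe does this, and we assume something similar here rather than quoting the actual implementation) is to swap the tensor dunder methods:

raw_add = torch.Tensor.__add__

def _add(self, other):
    x = raw_add(self, other)
    if Hook.hookInited:
        log("add-i", self._cdata)
        ccc.append(x)  # keep the result alive so _cdata stays unique
        log("add-o", x._cdata)
    return x

torch.Tensor.__add__ = _add  # -, *, / and the in-place forms are handled alike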

• ResNet18 conversion example:
import torch
import torch.nn as nn
from torchvision.models import resnet18
from PytorchToMsnhnet import *

resnet18 = resnet18(pretrained=True)
resnet18.eval()
input = torch.ones([1, 3, 224, 224])
trans(resnet18, input, "resnet18.msnhnet", "resnet18.msnhbin")

• DeepLabV3 conversion example:
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet101
from PytorchToMsnhnet import *

deeplabv3 = deeplabv3_resnet101(pretrained=False)
ccc = torch.load("C:/Users/msnh/.cache/torch/checkpoints/deeplabv3_resnet101_coco-586e9e4e.pth")
del ccc["aux_classifier.0.weight"]
del ccc["aux_classifier.1.weight"]
del ccc["aux_classifier.1.bias"]
del ccc["aux_classifier.1.running_mean"]
del ccc["aux_classifier.1.running_var"]
del ccc["aux_classifier.1.num_batches_tracked"]
del ccc["aux_classifier.4.weight"]
del ccc["aux_classifier.4.bias"]
deeplabv3.load_state_dict(ccc)
deeplabv3.requires_grad_(False)
deeplabv3.eval()


input = torch.ones([1, 3, 224, 224])

# generate the msnhnet and msnhbin files
trans(deeplabv3, input, "deeplabv3.msnhnet", "deeplabv3.msnhbin")
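The eight del statements above can be collapsed into a single filter (an equivalent rewrite, our suggestion):

ccc = {k: v for k, v in ccc.items() if not k.startswith("aux_classifier")}
deeplabv3.load_state_dict(ccc)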


5. About MsnhNet

MsnhNet is a lightweight, pure-C++ inference framework inspired by darknet. It is led by 穆士凝魂 and developed with spare-time help from this public account's author team.

Project address: https://github.com/msnh2012/Msnhnet . A star would be appreciated.

The framework currently supports inference on x86, CUDA, and ARM targets (the supported OP set is still limited and under active development), and PyTorch models can be converted directly into the framework's format for deployment (more source frameworks will be tried later). Anyone interested in inference frameworks is welcome to try it out or join us in maintaining this wheel.

Finally, you are welcome to join the Msnhnet QQ group; suggestions for the project or personal feature requests can be raised in the group or via GitHub issues.


Follow GiantPandaCV for exclusive deep learning content; we stick to original writing and share what we learn every day.

For questions about this article, or to join the discussion group, add BBuf on WeChat.


To make it easier for readers to get materials and follow updates to the GitHub projects our authors publish, we have also set up a QQ group; anyone interested is welcome to join.


