The Open-Source timm Library | 28 Vision Transformer Implementations Built on PyTorch
Overview
This article introduces timm, an excellent open-source PyTorch library, and walks through its vision_transformer.py code in detail.
What is timm?
PyTorch Image Models, abbreviated timm, is a large collection of PyTorch code that includes:
- image models
- layers
- utilities
- optimizers
- schedulers
- data-loaders / augmentations
- training / validation scripts
It aims to gather a wide range of SOTA models in one place, with the ability to reproduce ImageNet training results.
timm is written by Ross Wightman from Vancouver, Canada.

Author's GitHub:
https://github.com/rwightman
timm repository:
https://github.com/rwightman/pytorch-image-models
Features of timm
All models share a common default API:
- accessing/changing the classifier: get_classifier and reset_classifier
- forward pass over the features only: forward_features
All models support multi-scale feature extraction (feature pyramids) via the create_model function:
create_model(name, features_only=True, out_indices=..., output_stride=...)
out_indices specifies which feature maps to return; indexing starts at 0, and out_indices[i] corresponds to the C(i + 1) feature level.
output_stride controls the network's output stride via dilated convolutions. Most networks default to stride 32.
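As a quick illustration of this feature-pyramid API (a minimal sketch; resnet50 is just an example model name):

import timm
import torch

# Build a feature-pyramid extractor from a timm backbone.
model = timm.create_model('resnet50', features_only=True, out_indices=(1, 2, 3, 4))
x = torch.randn(1, 3, 224, 224)
features = model(x)  # a list of 4 feature maps, at strides 4, 8, 16, and 32
for f in features:
    print(f.shape)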
All models have a consistent pretrained weight loader that adapts the last linear layer when necessary.
Supported training setups:
- NVIDIA DDP with a single GPU per process, multiple processes with APEX present (AMP mixed-precision optional)
- PyTorch DistributedDataParallel with multiple GPUs in a single process (AMP disabled, as it crashes when enabled)
- PyTorch with a single GPU in a single process (AMP optional)
The dynamic global pooling layer can be selected from: average pooling, max pooling, average + max, or concat([average, max]); the default is adaptive average pooling.
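A hedged sketch of selecting the pooling type at model-creation time (it assumes global_pool accepts the names 'avg', 'max', 'avgmax', and 'catavgmax' for the four options above):

import timm

# Assumption: 'catavgmax' concatenates average- and max-pooled features,
# doubling the feature dimension; timm sizes the classifier head accordingly.
model = timm.create_model('resnet50', global_pool='catavgmax', num_classes=10)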
Schedulers:
Available schedulers include step, cosine w/ restarts, tanh w/ restarts, and plateau.
Optimizers:
- rmsprop_tf, adapted by the timm author from PyTorch RMSProp; reproduces the much-improved TensorFlow RMSProp behaviour
- radam by Liyuan Liu (https://arxiv.org/abs/1908.03265)
- novograd by Masashi Kimura (https://arxiv.org/abs/1905.11286)
- lookahead, adapted from the implementation by Liam (https://arxiv.org/abs/1907.08610)
- fused optimizers by name, when NVIDIA Apex is installed
- adamp and sgdp by Naver ClovAI (https://arxiv.org/abs/2006.08217)
- adafactor, adapted from the FAIRSeq implementation (https://arxiv.org/abs/1804.04235)
- adahessian by David Samuel (https://arxiv.org/abs/2006.00719)
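A hedged usage sketch, assuming a timm version recent enough to expose create_optimizer_v2 (the 'adamp' name matches the list above):

import timm
from timm.optim import create_optimizer_v2

model = timm.create_model('vit_base_patch16_224')
# Build one of the optimizers above by name.
optimizer = create_optimizer_v2(model, opt='adamp', lr=1e-3, weight_decay=0.05)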
A Walkthrough of timm's vision_transformer.py
The code lives at:
https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
The corresponding paper is ViT; alongside the officially released code, this is another excellent PyTorch implementation.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
https://arxiv.org/abs/2010.11929
Another work, DeiT, also borrows heavily from this timm implementation:
Training data-efficient image transformers & distillation through attention
https://arxiv.org/abs/2012.12877
vision_transformer.py:
The variables defined in the code have the following meanings:
img_size: tuple of ints, the input image size; default 224.
patch_size: tuple of ints, the patch size; default 16.
in_chans: int, the number of input image channels; default 3.
num_classes: int, the number of classes of the classification head (e.g. 100 for CIFAR-100); default 1000.
embed_dim: int, the Transformer embedding dimension; default 768.
depth: int, the number of Transformer Blocks; default 12.
num_heads: int, the number of attention heads; default 12.
mlp_ratio: the ratio of the MLP hidden dim to the embedding dim; default 4.
qkv_bias: bool, whether the attention module uses a bias when computing q, k, v; default True.
qk_scale: overrides the default attention scale head_dim ** -0.5 if set; usually left as None.
drop_rate: float, the dropout rate; default 0.
attn_drop_rate: float, the dropout rate of the attention module; default 0.
drop_path_rate: float, the stochastic depth (DropPath) rate; default 0.
hybrid_backbone: nn.Module, an optional backbone the image is passed through before being turned into patches; default None.
If None, the image is converted into patches directly.
If not None, the image first goes through this backbone and its output is then converted into patches.
norm_layer: nn.Module, the normalization layer type; default None.
1. Import the necessary libraries and modules:
import math
import logging
from functools import partial
from collections import OrderedDict

import torch
import torch.nn as nn
import torch.nn.functional as F

from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
from .helpers import load_pretrained
from .layers import StdConv2dSame, DropPath, to_2tuple, trunc_normal_
from .resnet import resnet26d, resnet50d
from .resnetv2 import ResNetV2
from .registry import register_model

_logger = logging.getLogger(__name__)  # used by resize_pos_embed and _create_vision_transformer below
2. Define a dictionary that holds the default model configuration; to change a model's hyperparameters, just change the arguments passed to _cfg:
def _cfg(url='', **kwargs):
    return {
        'url': url,
        'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': None,
        'crop_pct': .9, 'interpolation': 'bicubic',
        'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
        'first_conv': 'patch_embed.proj', 'classifier': 'head',
        **kwargs
    }
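To see what _cfg produces, a quick standalone check (the keys and defaults follow directly from the function above):

# Override only the fields that differ from the defaults:
cfg = _cfg(url='', input_size=(3, 384, 384), crop_pct=1.0)
print(cfg['input_size'])   # (3, 384, 384)
print(cfg['num_classes'])  # 1000, inherited from the defaults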
3. default_cfgs lists every supported model, again as a dictionary:
The 'small' in vit_small_patch16_224 denotes the small model variant.
The first step of ViT is to cut the image into patches and assemble them into a sequence. For example, a 224 × 224 image cut into 16 × 16 patches yields 196 patches in total, so the image is serialized into a (196, 768) tensor (each patch flattens to 16 × 16 × 3 = 768 values). So in the model name:
16: the patch size.
224: the input image size.
Following this naming scheme, the supported models include vit_base_patch16_224, vit_base_patch16_384, and so on. The vit_deit_base_patch16_224 models that appear later correspond to the DeiT paper (the patch arithmetic is spelled out in the snippet below).
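To make this concrete (a standalone check; plain Python, no timm needed):

img_size, patch_size, in_chans = 224, 16, 3
num_patches = (img_size // patch_size) ** 2     # 14 * 14 = 196
patch_dim = patch_size * patch_size * in_chans  # 16 * 16 * 3 = 768 raw values per patch
print(num_patches, patch_dim)  # 196 768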
default_cfgs = {
    # patch models (my experiments)
    'vit_small_patch16_224': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/vit_small_p16_224-15ec54c9.pth',
    ),

    # patch models (weights ported from official Google JAX impl)
    'vit_base_patch16_224': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth',
        mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5),
    ),
    'vit_base_patch32_224': _cfg(
        url='',  # no official model weights for this combo, only for in21k
        mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    'vit_base_patch16_384': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_384-83fb41ba.pth',
        input_size=(3, 384, 384), mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=1.0),
    'vit_base_patch32_384': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p32_384-830016f5.pth',
        input_size=(3, 384, 384), mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=1.0),
    'vit_large_patch16_224': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_224-4ee7a4dc.pth',
        mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    'vit_large_patch32_224': _cfg(
        url='',  # no official model weights for this combo, only for in21k
        mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    'vit_large_patch16_384': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_384-b3be5167.pth',
        input_size=(3, 384, 384), mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=1.0),
    'vit_large_patch32_384': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p32_384-9b920ba8.pth',
        input_size=(3, 384, 384), mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=1.0),

    # patch models, imagenet21k (weights ported from official Google JAX impl)
    'vit_base_patch16_224_in21k': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch16_224_in21k-e5005f0a.pth',
        num_classes=21843, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    'vit_base_patch32_224_in21k': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch32_224_in21k-8db57226.pth',
        num_classes=21843, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    'vit_large_patch16_224_in21k': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch16_224_in21k-606da67d.pth',
        num_classes=21843, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    'vit_large_patch32_224_in21k': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch32_224_in21k-9046d2e7.pth',
        num_classes=21843, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
    'vit_huge_patch14_224_in21k': _cfg(
        url='',  # FIXME I have weights for this but > 2GB limit for github release binaries
        num_classes=21843, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),

    # hybrid models (weights ported from official Google JAX impl)
    'vit_base_resnet50_224_in21k': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_resnet50_224_in21k-6f7c7740.pth',
        num_classes=21843, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=0.9,
        first_conv='patch_embed.backbone.stem.conv'),
    'vit_base_resnet50_384': _cfg(
        url='https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_resnet50_384-9fd3c705.pth',
        input_size=(3, 384, 384), mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5), crop_pct=1.0,
        first_conv='patch_embed.backbone.stem.conv'),

    # hybrid models (my experiments)
    'vit_small_resnet26d_224': _cfg(),
    'vit_small_resnet50d_s3_224': _cfg(),
    'vit_base_resnet26d_224': _cfg(),
    'vit_base_resnet50d_224': _cfg(),

    # deit models (FB weights)
    'vit_deit_tiny_patch16_224': _cfg(
        url='https://dl.fbaipublicfiles.com/deit/deit_tiny_patch16_224-a1311bcf.pth'),
    'vit_deit_small_patch16_224': _cfg(
        url='https://dl.fbaipublicfiles.com/deit/deit_small_patch16_224-cd65a155.pth'),
    'vit_deit_base_patch16_224': _cfg(
        url='https://dl.fbaipublicfiles.com/deit/deit_base_patch16_224-b5f2ef4d.pth',),
    'vit_deit_base_patch16_384': _cfg(
        url='https://dl.fbaipublicfiles.com/deit/deit_base_patch16_384-8de9b5d1.pth',
        input_size=(3, 384, 384), crop_pct=1.0),
    'vit_deit_tiny_distilled_patch16_224': _cfg(
        url='https://dl.fbaipublicfiles.com/deit/deit_tiny_distilled_patch16_224-b40b3cf7.pth'),
    'vit_deit_small_distilled_patch16_224': _cfg(
        url='https://dl.fbaipublicfiles.com/deit/deit_small_distilled_patch16_224-649709d9.pth'),
    'vit_deit_base_distilled_patch16_224': _cfg(
        url='https://dl.fbaipublicfiles.com/deit/deit_base_distilled_patch16_224-df68dfff.pth',),
    'vit_deit_base_distilled_patch16_384': _cfg(
        url='https://dl.fbaipublicfiles.com/deit/deit_base_distilled_patch16_384-d0272ac0.pth',
        input_size=(3, 384, 384), crop_pct=1.0),
}
4. The FFN (Mlp) implementation:
class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x
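A quick shape check of Mlp (assuming the class and the torch import above are in scope):

mlp = Mlp(in_features=768, hidden_features=3072, drop=0.1)
out = mlp(torch.randn(2, 197, 768))
print(out.shape)  # torch.Size([2, 197, 768]); the FFN preserves the token shape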
5. The Attention implementation:
Since Python 3.5, @ is the matrix multiplication operator: A @ x computes the matrix product, equivalent to np.dot(A, x) for NumPy arrays (torch.matmul for PyTorch tensors).
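A tiny standalone demonstration, using NumPy as in the np.dot comparison above:

import numpy as np

A = np.arange(6).reshape(2, 3)
x = np.arange(3.)
print(A @ x)                              # matrix-vector product
print(np.allclose(A @ x, np.dot(A, x)))  # True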
class Attention(nn.Module):
    def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
        super().__init__()
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

    def forward(self, x):
        B, N, C = x.shape
        # (B, N, 3C) -> (3, B, num_heads, N, head_dim)
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]

        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, num_heads, N, N)
        attn = attn.softmax(dim=-1)
        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(B, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x
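A shape check with ViT-Base settings (assuming the class above is in scope):

attn = Attention(dim=768, num_heads=12, qkv_bias=True)
out = attn(torch.randn(2, 197, 768))  # 196 patch tokens + 1 class token
print(out.shape)  # torch.Size([2, 197, 768])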
6. The Block implementation, combining Attention with Add & Norm:

Figure 1: the structure implemented by the Block class
Compared with the original Transformer, the difference is: Norm comes first, then Attention; Norm first, then the FFN (MLP). In other words, the Block is pre-norm.
class Block(nn.Module):
    def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
                 drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm):
        super().__init__()
        self.norm1 = norm_layer(dim)
        self.attn = Attention(
            dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
        # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)

    def forward(self, x):
        x = x + self.drop_path(self.attn(self.norm1(x)))
        x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x
7. Next, the image has to be converted into patches. One approach turns the image into patches directly; the other turns the feature map produced by a backbone into patches.
1) Converting the image into patches directly:
The input x has shape (B, C, H, W).
The output patch embedding has shape (B, 14*14, 768), where 768 is embed_dim and 14*14 = 196 is the number of patches.
class PatchEmbed(nn.Module):
    """ Image to Patch Embedding
    """
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
        self.img_size = img_size
        self.patch_size = patch_size
        self.num_patches = num_patches

        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        B, C, H, W = x.shape
        # FIXME look at relaxing size constraints
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
        x = self.proj(x).flatten(2).transpose(1, 2)
        # x: (B, 14*14, 768)
        return x
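A shape check of PatchEmbed (assuming the class above is in scope):

patch_embed = PatchEmbed(img_size=224, patch_size=16, in_chans=3, embed_dim=768)
tokens = patch_embed(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 196, 768])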
2) Converting the backbone's output features into patches:
The input x has shape (B, C, H, W).
The backbone output has shape (B, feature_dim, feature_size, feature_size).
The output patch embedding has shape (B, feature_size * feature_size, embed_dim), i.e. there are feature_size * feature_size patches in total.
class HybridEmbed(nn.Module):
    """ CNN Feature Map Embedding
    Extract feature map from CNN, flatten, project to embedding dim.
    """
    def __init__(self, backbone, img_size=224, feature_size=None, in_chans=3, embed_dim=768):
        super().__init__()
        assert isinstance(backbone, nn.Module)
        img_size = to_2tuple(img_size)
        self.img_size = img_size
        self.backbone = backbone
        if feature_size is None:
            with torch.no_grad():
                # FIXME this is hacky, but most reliable way of determining the exact dim of the output feature
                # map for all networks, the feature metadata has reliable channel and stride info, but using
                # stride to calc feature dim requires info about padding of each stage that isn't captured.
                training = backbone.training
                if training:
                    backbone.eval()
                o = self.backbone(torch.zeros(1, in_chans, img_size[0], img_size[1]))
                if isinstance(o, (list, tuple)):
                    o = o[-1]  # last feature if backbone outputs list/tuple of features
                feature_size = o.shape[-2:]
                feature_dim = o.shape[1]
                backbone.train(training)
        else:
            feature_size = to_2tuple(feature_size)
            if hasattr(self.backbone, 'feature_info'):
                feature_dim = self.backbone.feature_info.channels()[-1]
            else:
                feature_dim = self.backbone.num_features
        self.num_patches = feature_size[0] * feature_size[1]
        self.proj = nn.Conv2d(feature_dim, embed_dim, 1)

    def forward(self, x):
        x = self.backbone(x)
        if isinstance(x, (list, tuple)):
            x = x[-1]  # last feature if backbone outputs list/tuple of features
        x = self.proj(x).flatten(2).transpose(1, 2)
        return x
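A hedged sketch of wiring up a hybrid embedding, mirroring how the vit_*_resnet26d model builders in this file construct their backbones (features_only and out_indices work as in the feature-extraction API described earlier):

backbone = resnet26d(pretrained=False, features_only=True, out_indices=[4])
hybrid = HybridEmbed(backbone, img_size=224, embed_dim=768)
tokens = hybrid(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 49, 768]); a 7x7 feature grid at stride 32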
8. Those are all the modules ViT needs. What follows is the implementation of the VisionTransformer class itself:
8.1 The constructor arguments; their meanings were introduced at the start of this section.
class VisionTransformer(nn.Module):
    """ Vision Transformer

    A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` -
        https://arxiv.org/abs/2010.11929
    """
    def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
                 num_heads=12, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None,
                 drop_rate=0., attn_drop_rate=0., drop_path_rate=0., hybrid_backbone=None, norm_layer=None):
8.2 Compute the number of patches after splitting:
super().__init__()
self.num_classes = num_classes
self.num_features = self.embed_dim = embed_dim  # num_features for consistency with other models
norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)

if hybrid_backbone is not None:
    self.patch_embed = HybridEmbed(
        hybrid_backbone, img_size=img_size, in_chans=in_chans, embed_dim=embed_dim)
else:
    self.patch_embed = PatchEmbed(
        img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
num_patches = self.patch_embed.num_patches
8.3 The class token:
It is created with shape (1, 1, 768) and later expanded to (B, 1, 768).
self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
8.4 Define the position embedding:

self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))

8.5 Chain the Blocks (12 by default) together:
self.pos_drop = nn.Dropout(p=drop_rate)

dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)]  # stochastic depth decay rule
self.blocks = nn.ModuleList([
    Block(
        dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
        drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer)
    for i in range(depth)])
self.norm = norm_layer(embed_dim)
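The dpr line implements the stochastic depth decay rule; a standalone check of what it produces:

import torch

# With drop_path_rate=0.1 and depth=12, the per-block DropPath rates grow linearly:
dpr = [x.item() for x in torch.linspace(0, 0.1, 12)]
print([round(p, 3) for p in dpr])  # [0.0, 0.009, 0.018, ..., 0.091, 0.1]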
8.6 The representation layer and the classification head:
The representation layer outputs representation_size dimensions; the classification head outputs num_classes dimensions.
# Representation layer
if representation_size:
    self.num_features = representation_size
    self.pre_logits = nn.Sequential(OrderedDict([
        ('fc', nn.Linear(embed_dim, representation_size)),
        ('act', nn.Tanh())
    ]))
else:
    self.pre_logits = nn.Identity()

# Classifier head
self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
8.7 Initialize the modules:
The function trunc_normal_(tensor, mean=0., std=1., a=-2., b=2.) fills the input tensor with values drawn from a truncated normal distribution; we only need to supply the mean, the standard deviation std, the lower bound a, and the upper bound b.
self.apply(self._init_weights) initializes the weights of every module. The code of nn.Module.apply is:

def apply(self, fn):
    for module in self.children():
        module.apply(fn)
    fn(self)
    return self

It applies fn recursively to every submodule, i.e. _init_weights is called on each of them in turn.
In other words, all of the model's nn.Linear and nn.LayerNorm layers get initialized.
trunc_normal_(self.pos_embed, std=.02)
trunc_normal_(self.cls_token, std=.02)
self.apply(self._init_weights)

def _init_weights(self, m):
    if isinstance(m, nn.Linear):
        trunc_normal_(m.weight, std=.02)
        if isinstance(m, nn.Linear) and m.bias is not None:
            nn.init.constant_(m.bias, 0)
    elif isinstance(m, nn.LayerNorm):
        nn.init.constant_(m.bias, 0)
        nn.init.constant_(m.weight, 1.0)
8.8 Finally, the forward pass of the whole ViT model:
def forward_features(self, x):
    B = x.shape[0]
    x = self.patch_embed(x)

    cls_tokens = self.cls_token.expand(B, -1, -1)  # stole cls_tokens impl from Phil Wang, thanks
    x = torch.cat((cls_tokens, x), dim=1)
    x = x + self.pos_embed
    x = self.pos_drop(x)

    for blk in self.blocks:
        x = blk(x)

    x = self.norm(x)[:, 0]
    x = self.pre_logits(x)
    return x

def forward(self, x):
    x = self.forward_features(x)
    x = self.head(x)
    return x
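A quick end-to-end sanity check through timm's public API (a minimal sketch; vit_base_patch16_224 is one of the registered names listed later in this article):

import timm
import torch

model = timm.create_model('vit_base_patch16_224', pretrained=False, num_classes=10)
model.eval()
x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([2, 10])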
9. Below is the DeiT class from the paper Training data-efficient image transformers & distillation through attention:
Its overall structure is similar to ViT's; it inherits from the VisionTransformer class above.
class DistilledVisionTransformer(VisionTransformer):
It additionally defines three attributes:
- the distillation token: dist_token
- a new position embedding: pos_embed
- the distillation classification head: head_dist
For more on DeiT, see: Vision Transformer 超詳細(xì)解讀 (原理分析+代碼解讀) (三).
self.dist_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim))
num_patches = self.patch_embed.num_patches
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 2, self.embed_dim))
self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if self.num_classes > 0 else nn.Identity()
Initialize the newly defined attributes:
trunc_normal_(self.dist_token, std=.02)
trunc_normal_(self.pos_embed, std=.02)
self.head_dist.apply(self._init_weights)
The forward functions:
def forward_features(self, x):
    B = x.shape[0]
    x = self.patch_embed(x)

    cls_tokens = self.cls_token.expand(B, -1, -1)  # stole cls_tokens impl from Phil Wang, thanks
    dist_token = self.dist_token.expand(B, -1, -1)
    x = torch.cat((cls_tokens, dist_token, x), dim=1)

    x = x + self.pos_embed
    x = self.pos_drop(x)

    for blk in self.blocks:
        x = blk(x)

    x = self.norm(x)
    return x[:, 0], x[:, 1]

def forward(self, x):
    x, x_dist = self.forward_features(x)
    x = self.head(x)
    x_dist = self.head_dist(x_dist)
    if self.training:
        return x, x_dist
    else:
        # during inference, return the average of both classifier predictions
        return (x + x_dist) / 2
10. Interpolating the position embedding:
posemb is the position embedding weight before interpolation; posemb_tok is its token part and posemb_grid the grid part to be interpolated.
The grid part posemb_grid is first reshaped to (1, gs_old, gs_old, -1), then interpolated to (1, gs_new, gs_new, -1), and finally concatenated with the token part along dimension 1, giving the interpolated position embedding posemb.
def resize_pos_embed(posemb, posemb_new):
    # Rescale the grid of position embeddings when loading from state_dict. Adapted from
    # https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224
    _logger.info('Resized position embedding: %s to %s', posemb.shape, posemb_new.shape)
    ntok_new = posemb_new.shape[1]
    if True:
        posemb_tok, posemb_grid = posemb[:, :1], posemb[0, 1:]
        ntok_new -= 1
    else:
        posemb_tok, posemb_grid = posemb[:, :0], posemb[0]
    gs_old = int(math.sqrt(len(posemb_grid)))
    gs_new = int(math.sqrt(ntok_new))
    _logger.info('Position embedding grid-size from %s to %s', gs_old, gs_new)
    posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
    posemb_grid = F.interpolate(posemb_grid, size=(gs_new, gs_new), mode='bilinear')
    posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_new * gs_new, -1)
    posemb = torch.cat([posemb_tok, posemb_grid], dim=1)
    return posemb
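A worked example of the shapes involved (standalone, assuming the function above is in scope): going from 224 to 384 input with patch size 16 means 196 + 1 tokens become 576 + 1, so gs_old = 14 and gs_new = 24:

posemb = torch.randn(1, 197, 768)      # pretrained at 224: 14*14 patches + 1 class token
posemb_new = torch.zeros(1, 577, 768)  # target at 384: 24*24 patches + 1 class token
print(resize_pos_embed(posemb, posemb_new).shape)  # torch.Size([1, 577, 768])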
11. The _create_vision_transformer function builds a vision transformer:
checkpoint_filter_fn adapts pretrained weights so they can be loaded: it converts old patch-embedding weights to the conv layout and resizes the position embedding when the shapes differ.
def checkpoint_filter_fn(state_dict, model):
    """ convert patch embedding weight from manual patchify + linear proj to conv"""
    out_dict = {}
    if 'model' in state_dict:
        # For deit models
        state_dict = state_dict['model']
    for k, v in state_dict.items():
        if 'patch_embed.proj.weight' in k and len(v.shape) < 4:
            # For old models that I trained prior to conv based patchification
            O, I, H, W = model.patch_embed.proj.weight.shape
            v = v.reshape(O, -1, H, W)
        elif k == 'pos_embed' and v.shape != model.pos_embed.shape:
            # To resize pos embedding when using model at different size from pretrained weights
            v = resize_pos_embed(v, model.pos_embed)
        out_dict[k] = v
    return out_dict


def _create_vision_transformer(variant, pretrained=False, distilled=False, **kwargs):
    default_cfg = default_cfgs[variant]
    default_num_classes = default_cfg['num_classes']
    default_img_size = default_cfg['input_size'][-1]

    num_classes = kwargs.pop('num_classes', default_num_classes)
    img_size = kwargs.pop('img_size', default_img_size)
    repr_size = kwargs.pop('representation_size', None)
    if repr_size is not None and num_classes != default_num_classes:
        # Remove representation layer if fine-tuning. This may not always be the desired action,
        # but I feel better than doing nothing by default for fine-tuning. Perhaps a better interface?
        _logger.warning("Removing representation layer for fine-tuning.")
        repr_size = None

    model_cls = DistilledVisionTransformer if distilled else VisionTransformer
    model = model_cls(img_size=img_size, num_classes=num_classes, representation_size=repr_size, **kwargs)
    model.default_cfg = default_cfg

    if pretrained:
        load_pretrained(
            model, num_classes=num_classes, in_chans=kwargs.get('in_chans', 3),
            filter_fn=partial(checkpoint_filter_fn, model=model))
    return model
12. Define and register the vision transformer models:
@ here denotes a decorator.
@register_model registers the newly defined model with timm's model registry.
model_kwargs is a dictionary holding all of the model's hyperparameters.
Finally, the _create_vision_transformer function defined above creates the model.
@register_model
def vit_base_patch16_224(pretrained=False, **kwargs):
    """ ViT-Base (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-1k weights fine-tuned from in21k @ 224x224, source https://github.com/google-research/vision_transformer.
    """
    model_kwargs = dict(patch_size=16, embed_dim=768, depth=12, num_heads=12, **kwargs)
    model = _create_vision_transformer('vit_base_patch16_224', pretrained=pretrained, **model_kwargs)
    return model
The full set of selectable models is:
ViT family:
vit_small_patch16_224
vit_base_patch16_224
vit_base_patch32_224
vit_base_patch16_384
vit_base_patch32_384
vit_large_patch16_224
vit_large_patch32_224
vit_large_patch16_384
vit_large_patch32_384
vit_base_patch16_224_in21k
vit_base_patch32_224_in21k
vit_large_patch16_224_in21k
vit_large_patch32_224_in21k
vit_huge_patch14_224_in21k
vit_base_resnet50_224_in21k
vit_base_resnet50_384
vit_small_resnet26d_224
vit_small_resnet50d_s3_224
vit_base_resnet26d_224
vit_base_resnet50d_224
DeiT family:
vit_deit_tiny_patch16_224
vit_deit_small_patch16_224
vit_deit_base_patch16_224
vit_deit_base_patch16_384
vit_deit_tiny_distilled_patch16_224
vit_deit_small_distilled_patch16_224
vit_deit_base_distilled_patch16_224
vit_deit_base_distilled_patch16_384
That concludes the analysis of timm's vision_transformer.py code.
How do you build your own model with timm and vision_transformer.py?
To build our own vision Transformer model, we can follow these steps:
1. Inherit from timm's VisionTransformer class.
2. Add the attributes unique to our own model.
3. Override the forward function.
4. Register the new model through timm's registry.
We take DeiT, an improved variant of ViT, as the example.
First, the full list of DeiT models:
__all__ = [
    'deit_tiny_patch16_224', 'deit_small_patch16_224', 'deit_base_patch16_224',
    'deit_tiny_distilled_patch16_224', 'deit_small_distilled_patch16_224',
    'deit_base_distilled_patch16_224', 'deit_base_patch16_384',
    'deit_base_distilled_patch16_384',
]
Import the VisionTransformer class, the register_model decorator, and the trunc_normal_ initialization function:
from timm.models.vision_transformer import VisionTransformer, _cfg
from timm.models.registry import register_model
from timm.models.layers import trunc_normal_
DeiT's class is called DistilledVisionTransformer, and it directly inherits from VisionTransformer:
class DistilledVisionTransformer(VisionTransformer):
Add the attributes unique to this model:
def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self.dist_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim))
    num_patches = self.patch_embed.num_patches
    # Unlike ViT's (1, N + 1, embed_dim) position embedding, this one is
    # (1, N + 2, embed_dim): there is a distillation token as well as the class token.
    self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 2, self.embed_dim))
    self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if self.num_classes > 0 else nn.Identity()

    trunc_normal_(self.dist_token, std=.02)
    trunc_normal_(self.pos_embed, std=.02)
    self.head_dist.apply(self._init_weights)
Override the forward function:
def forward_features(self, x):
    # taken from https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
    # with slight modifications to add the dist_token
    B = x.shape[0]
    x = self.patch_embed(x)

    cls_tokens = self.cls_token.expand(B, -1, -1)  # stole cls_tokens impl from Phil Wang, thanks
    dist_token = self.dist_token.expand(B, -1, -1)
    x = torch.cat((cls_tokens, dist_token, x), dim=1)

    x = x + self.pos_embed
    x = self.pos_drop(x)

    for blk in self.blocks:
        x = blk(x)

    x = self.norm(x)
    return x[:, 0], x[:, 1]

def forward(self, x):
    x, x_dist = self.forward_features(x)
    x = self.head(x)
    x_dist = self.head_dist(x_dist)
    if self.training:
        return x, x_dist
    else:
        # during inference, return the average of both classifier predictions
        return (x + x_dist) / 2
Register the new model through timm's registry:
@register_model
def deit_base_patch16_224(pretrained=False, **kwargs):
    model = VisionTransformer(
        patch_size=16, embed_dim=768, depth=12, num_heads=12, mlp_ratio=4, qkv_bias=True,
        norm_layer=partial(nn.LayerNorm, eps=1e-6), **kwargs)
    model.default_cfg = _cfg()
    if pretrained:
        checkpoint = torch.hub.load_state_dict_from_url(
            url="https://dl.fbaipublicfiles.com/deit/deit_base_patch16_224-b5f2ef4d.pth",
            map_location="cpu", check_hash=True
        )
        model.load_state_dict(checkpoint["model"])
    return model
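Once registered, the new model can be created by name just like any built-in one (a hedged usage sketch; it assumes the module defining deit_base_patch16_224 has been imported, so that the registration above has actually run):

import timm

model = timm.create_model('deit_base_patch16_224', pretrained=False)
print(sum(p.numel() for p in model.parameters()) / 1e6, 'M parameters')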
