MobileNetV2 structure
Posted on 2018-04

The full network structure as printed by PyTorch (the stem here takes 5 input channels and the classifier has 32 output classes, so this is a modified variant of the standard ImageNet model):

MobileNetV2(
  (features): Sequential(
    (0): Sequential(
      (0): Conv2d(5, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
      (2): ReLU(inplace)
    )
    (1): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
        (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (2): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
        (4): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (3): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=144, bias=False)
        (4): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (4): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(144, 144, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=144, bias=False)
        (4): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(144, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (5): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
        (4): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (6): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
        (4): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (7): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=192, bias=False)
        (4): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (8): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
        (4): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (9): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
        (4): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (10): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
        (4): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (11): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
        (4): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (12): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
        (4): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (13): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(576, 576, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=576, bias=False)
        (4): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(576, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (14): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(96, 576, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(576, 576, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=576, bias=False)
        (4): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(576, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (15): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
        (4): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (16): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
        (4): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(960, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (17): InvertedResidual(
      (conv): Sequential(
        (0): Conv2d(160, 960, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True)
        (2): ReLU6(inplace)
        (3): Conv2d(960, 960, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=960, bias=False)
        (4): BatchNorm2d(960, eps=1e-05, momentum=0.1, affine=True)
        (5): ReLU6(inplace)
        (6): Conv2d(960, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (7): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True)
      )
    )
    (18): Sequential(
      (0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True)
      (2): ReLU(inplace)
    )
    (19): AvgPool2d(kernel_size=7, stride=7, padding=0, ceil_mode=False, count_include_pad=True)
  )
  (classifier): Sequential(
    (0): Dropout(p=0.5)
    (1): Linear(in_features=1280, out_features=32, bias=True)
  )
)
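Every InvertedResidual block above follows the same pattern: a 1×1 pointwise expansion, a 3×3 depthwise convolution (groups equal to the channel count), and a 1×1 linear projection, each followed by BatchNorm, with ReLU6 only after the first two. A minimal sketch of such a block, assuming the common reference layout (the class name, `expand_ratio` parameter, and residual-shortcut rule are from the standard MobileNetV2 implementation, not necessarily this author's exact code):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Sketch of the repeated block: 1x1 expand -> 3x3 depthwise -> 1x1 project."""

    def __init__(self, inp, oup, stride, expand_ratio):
        super().__init__()
        hidden = inp * expand_ratio
        # The residual shortcut is only used when the block preserves both
        # spatial size (stride 1) and channel count.
        self.use_res_connect = stride == 1 and inp == oup
        self.conv = nn.Sequential(
            nn.Conv2d(inp, hidden, 1, 1, 0, bias=False),   # (0) pointwise expand
            nn.BatchNorm2d(hidden),                        # (1)
            nn.ReLU6(inplace=True),                        # (2)
            nn.Conv2d(hidden, hidden, 3, stride, 1,        # (3) depthwise
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),                        # (4)
            nn.ReLU6(inplace=True),                        # (5)
            nn.Conv2d(hidden, oup, 1, 1, 0, bias=False),   # (6) linear projection
            nn.BatchNorm2d(oup),                           # (7) no activation after
        )

    def forward(self, x):
        out = self.conv(x)
        return x + out if self.use_res_connect else out

# For example, block (2) in the printout: 16 -> 24 channels, stride 2,
# expand ratio 6 (16 * 6 = 96 hidden channels), which halves the feature map.
block = InvertedResidual(16, 24, stride=2, expand_ratio=6)
y = block(torch.randn(1, 16, 56, 56))
```

Printing `block` reproduces the same `(0)`–`(7)` layer listing seen in the dump; the stride-2 blocks ((2), (4), (7), (14) above) are exactly the ones without a shortcut.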