Category: PyTorch
When using torch.max and F.softmax, I am always a little confused about which dim to set, so here is a summary. First, look at an example on a 2-D tensor:

import torch
import torch.nn.functional as F

input = torch.randn(3, 4)
print(input)
tensor([[-0.5526, -0.0194,  2.1469, -0.2567],
        [-0.3337, -0.9229,  0.0376, -0.0801],
        [ 1.4721,  0.1181, -2.6214,  1.7721]])

b = F.softmax(input, dim=0)  # softmax along each column; every column sums to 1
print(b)
tensor([[0.1018, 0.3918, ...
Read more >
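A minimal, self-contained sketch of how the dim argument behaves for both F.softmax and torch.max (the tensor here is freshly random, not the one printed in the post):

```python
import torch
import torch.nn.functional as F

x = torch.randn(3, 4)

# dim=0: softmax runs down each column, so every column sums to 1
col = F.softmax(x, dim=0)
print(col.sum(dim=0))  # four values, each ~1.0

# dim=1: softmax runs across each row, so every row sums to 1
row = F.softmax(x, dim=1)
print(row.sum(dim=1))  # three values, each ~1.0

# torch.max follows the same convention: dim=1 reduces across columns,
# returning the max value and its column index for every row
values, indices = torch.max(x, dim=1)
print(values.shape, indices.shape)  # torch.Size([3]) torch.Size([3])
```

In short, dim names the axis that gets consumed by the reduction (or normalized over, for softmax), not the axis that survives.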
Without further ado, here is the code:

import torch.nn as nn
import torch.nn.functional as F

class AlexNet_1(nn.Module):
    def __init__(self, num_classes):
        super(AlexNet_1, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self...
Read more >
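Since the excerpt cuts off mid-forward, here is a runnable sketch of the same nn.Module pattern with the forward pass completed; the class name, classifier head, and layer sizes are illustrative assumptions, not taken from the post:

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, num_classes=10):
        super(SmallNet, self).__init__()
        # same feature block shape as the excerpt: conv -> BN -> ReLU
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)       # (N, 64, H/2, W/2)
        x = x.mean(dim=(2, 3))     # global average pool -> (N, 64)
        return self.classifier(x)  # (N, num_classes)

net = SmallNet()
out = net(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```

The key convention: layers with parameters are declared in __init__, and forward only wires them together, so nn.Module can track the parameters for optimizers and .to(device).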
Without further ado, here is the code:

from torch import nn

class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        # returns a 1x1 feature map per channel; the channel count is unchanged
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            ...
Read more >
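The excerpt is truncated before the gate and the forward pass; here is a complete version following the standard squeeze-and-excitation formulation (the Sigmoid and the forward body are my assumption of how the post continues):

```python
import torch
from torch import nn

class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        # squeeze: global average pool to one scalar per channel
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # excitation: bottleneck MLP producing per-channel weights in (0, 1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)  # (N, C)
        y = self.fc(y).view(b, c, 1, 1)  # per-channel scale factors
        return x * y.expand_as(x)        # reweight the input channels

se = SELayer(32)
out = se(torch.randn(2, 32, 8, 8))
print(out.shape)  # torch.Size([2, 32, 8, 8])
```

The reduction=16 bottleneck keeps the extra parameter cost small: the two Linear layers together add roughly 2*C*C/16 weights per SE block.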
Without further ado, here is the code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class VGG16(nn.Module):
    def __init__(self):
        super(VGG16, self).__init__()
        # input: 3 * 224 * 224
        self.conv1_1 = nn.Conv2d(3, 64, 3)                    # 64 * 222 * 222
        self.conv1_2 = nn.Conv2d(64, 64, 3, padding=(1, 1))   # 64 * 222 * 222
        self.maxpool1 = nn.MaxPool2d((2, 2), padding=(1, 1))  # pooling, 64 * 112 * 112
        ...
Read more >
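A quick check of the spatial-size arithmetic in those comments: a 3x3 conv with no padding shrinks H and W by 2, padding=(1, 1) preserves them, and MaxPool2d(2) with padding 1 halves (H + 2). Running dummy data through the three layers confirms the shapes:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)
conv1_1 = nn.Conv2d(3, 64, 3)                     # 224 -> 222 (no padding)
conv1_2 = nn.Conv2d(64, 64, 3, padding=(1, 1))    # 222 -> 222 (padding keeps size)
maxpool1 = nn.MaxPool2d((2, 2), padding=(1, 1))   # (222 + 2) / 2 = 112

x = conv1_1(x)
print(x.shape)  # torch.Size([1, 64, 222, 222])
x = conv1_2(x)
print(x.shape)  # torch.Size([1, 64, 222, 222])
x = maxpool1(x)
print(x.shape)  # torch.Size([1, 64, 112, 112])
```

The general formula is out = floor((H + 2*padding - kernel_size) / stride) + 1; tracing it layer by layer like this is the easiest way to debug shape-mismatch errors in a hand-written VGG.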
2020-10-08