Fully Connected Neural Network (FC)
The fully connected neural network is one of the most basic neural network architectures; in English it is called a fully connected network, so it is usually abbreviated as FC.
The rule behind FC is simple: every node outside the input layer is connected to all nodes in the previous layer.
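As a minimal illustration (my own sketch, not from the original post), a single fully connected layer that maps a flattened 28x28 MNIST image to 200 hidden units computes y = x @ W.t() + b, so each of the 200 outputs depends on all 784 inputs:

import torch

x = torch.randn(1, 784)      # one flattened 28x28 image
w = torch.randn(200, 784)    # each of the 200 rows connects to all 784 inputs
b = torch.zeros(200)
y = x @ w.t() + b            # shape: (1, 200)
print(y.shape)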
Let's take the MNIST example from last time as the starting point.
import torch
import torch.utils.data
from torch import optim
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

# Manually defined weights and biases for the three fully connected layers
w1, b1 = torch.randn(200, 784, requires_grad=True), torch.zeros(200, requires_grad=True)
w2, b2 = torch.randn(200, 200, requires_grad=True), torch.zeros(200, requires_grad=True)
w3, b3 = torch.randn(10, 200, requires_grad=True), torch.zeros(10, requires_grad=True)

torch.nn.init.kaiming_normal_(w1)
torch.nn.init.kaiming_normal_(w2)
torch.nn.init.kaiming_normal_(w3)

def forward(x):
    x = x @ w1.t() + b1
    x = F.relu(x)
    x = x @ w2.t() + b2
    x = F.relu(x)
    x = x @ w3.t() + b3
    x = F.relu(x)
    return x

optimizer = optim.Adam([w1, b1, w2, b2, w3, b3], lr=learning_rate)
criteon = torch.nn.CrossEntropyLoss()

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28 * 28)      # flatten 28x28 images into 784-dim vectors
        logits = forward(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        logits = forward(data)
        test_loss += criteon(logits, target).item()
        pred = logits.data.max(1)[1]       # index of the max logit = predicted class
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)
    print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
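One detail worth noting (a hedged aside, not from the original post): forward() returns raw scores, and CrossEntropyLoss applies log-softmax and negative log-likelihood internally, which is why no softmax appears in forward(). A small sketch of that equivalence:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)                 # fake scores for 4 samples, 10 classes
target = torch.tensor([1, 0, 4, 9])

loss_a = torch.nn.CrossEntropyLoss()(logits, target)
loss_b = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(torch.allclose(loss_a, loss_b))       # True: CrossEntropyLoss = log_softmax + NLL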
In the listing above we defined every w and b ourselves and wrote our own forward function. If we use PyTorch's built-in fully connected layers instead, the whole program becomes much more concise and readable.
First, we define a class for our own network structure:
class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x
It inherits from nn.Module and defines the entire network structure by itself.
The inplace flag makes the activation reuse its input's memory directly instead of allocating new memory for the output.
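A quick sketch of what inplace=True means in practice (my own illustration, not from the original post); note that in-place activations should be avoided when the input tensor is still needed elsewhere for the backward pass:

import torch
import torch.nn.functional as F

x = torch.randn(5)
out = F.leaky_relu(x, inplace=True)          # the result is written into x's storage
print(out.data_ptr() == x.data_ptr())        # True: no new memory was allocated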
Beyond that, the model can be called to run the computation directly; there is no need to define parameters by hand or write out the matrix operations, which is much simpler.
We can also see that the layers initialize their parameters automatically, so we no longer have to write the initialization manually as before.
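As a sanity check (a minimal sketch under my own assumptions, not part of the original post), we can instantiate the network, list the automatically created parameters, and, if we still want Kaiming initialization as in the manual version, apply it explicitly:

# Assumes `from torch import nn` and the MLP class defined above
net = MLP()
for name, p in net.named_parameters():
    print(name, tuple(p.shape))               # e.g. model.0.weight (200, 784)

def init_weights(m):                          # hypothetical helper, only if custom init is wanted
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight)
        nn.init.zeros_(m.bias)

net.apply(init_weights)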
Distinguishing nn.ReLU from F.relu()
The former is a class (module) interface, while the latter is a functional interface.
Class interfaces are capitalized and must be instantiated before they can be called, whereas the functional versions are lowercase and can be used directly.
Most importantly, the functional interface offers more freedom and is better suited to custom operations.
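A small side-by-side sketch of the two interfaces (my own example, not from the original post):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(4, 10)

relu_layer = nn.ReLU()        # class interface: instantiate first, then call
y1 = relu_layer(x)

y2 = F.relu(x)                # functional interface: call directly

print(torch.equal(y1, y2))    # True: both compute the same element-wise ReLU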
Complete code
import torch
import torch.utils.data
from torch import optim, nn
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x

device = torch.device('cuda:0')
net = MLP().to(device)
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss().to(device)

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28 * 28)          # flatten 28x28 images into 784-dim vectors
        data, target = data.to(device), target.to(device)

        logits = net(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.to(device)

        logits = net(data)
        test_loss += criteon(logits, target).item()
        pred = logits.data.max(1)[1]            # index of the max logit = predicted class
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)
    print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
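Note that the listing above assumes a GPU: on a CPU-only machine, moving the model and data to cuda:0 will raise a runtime error. A common fallback pattern (my addition, not from the original post) is:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')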
Supplement: a fully connected neural network with one hidden layer in PyTorch
torch.nn provides the building blocks for defining the model, the network layers, and the loss function.
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
Above, we updated the parameters by hand with param -= learning_rate * param.grad.
We can use torch.optim to update the parameters automatically instead. The optim package provides a variety of optimization methods, including SGD with momentum, RMSProp, Adam, and so on.
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
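Swapping in one of the other optimizers mentioned above only changes the first line; for example (a sketch, not from the original post):

optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)   # SGD + momentum
# optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate)           # RMSProp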
The above is my personal experience; I hope it gives everyone a useful reference, and I hope you will keep supporting 自学编程网. If there are mistakes or things I have not fully considered, corrections are welcome.