{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import drive\n",
"\n",
"drive.mount('/content/drive', force_remount=True)\n",
"\n",
"# 输入daseCV所在的路径\n",
"# 'daseCV' 文件夹包括 '.py', 'classifiers' 和'datasets'文件夹\n",
"# 例如 'CV/assignments/assignment1/daseCV/'\n",
"FOLDERNAME = None\n",
"\n",
"assert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n",
"\n",
"%cd drive/My\\ Drive\n",
"%cp -r $FOLDERNAME ../../\n",
"%cd ../../\n",
"%cd daseCV/datasets/\n",
"!bash get_datasets.sh\n",
"%cd ../../"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-title"
]
},
"source": [
"# What's this PyTorch business?\n",
"\n",
"You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.\n",
"\n",
"For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you choose to use that notebook)."
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-ignore"
]
},
"source": [
"### What is PyTorch?\n",
"\n",
"PyTorch is a system for executing dynamic computational graphs over Tensor objects that behave similarly as numpy ndarray. It comes with a powerful automatic differentiation engine that removes the need for manual back-propagation. \n",
"\n",
"### Why?\n",
"\n",
"* Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).\n",
"* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand. \n",
"* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :) \n",
"* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.\n",
"\n",
"### PyTorch versions\n",
"This notebook assumes that you are using **PyTorch version 1.0**. In some of the previous versions (e.g. before 0.4), Tensors had to be wrapped in Variable objects to be used in autograd; however Variables have now been deprecated. In addition 1.0 also separates a Tensor's datatype from its device, and uses numpy-style factories for constructing Tensors rather than directly invoking Tensor constructors."
]
},
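{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Optional illustration (not part of the assignment):* a minimal sketch of the numpy-style Tensor factories and the independent `dtype`/`device` arguments mentioned above. The tensor names `a` and `b` are made up for this demo."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# Optional illustration; not part of the assignment.\n",
"# In PyTorch >= 1.0 there is no Variable wrapper: Tensors are built with\n",
"# numpy-style factory functions, and dtype and device are chosen independently.\n",
"a = torch.zeros((2, 3), dtype=torch.float32)   # CPU float32 tensor\n",
"b = torch.ones((2, 3), dtype=torch.float64)    # same shape, different dtype\n",
"print(a.dtype, a.device)\n",
"print(b.dtype, b.device)\n",
"if torch.cuda.is_available():\n",
"    print(a.to('cuda').device)                 # move a copy to the GPU"
]
},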
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-ignore"
]
},
"source": [
"## How will I learn PyTorch?\n",
"\n",
"Justin Johnson has made an excellent [tutorial](https://github.com/jcjohnson/pytorch-examples) for PyTorch. \n",
"\n",
"You can also find the detailed [API doc](http://pytorch.org/docs/stable/index.html) here. If you have other questions that are not addressed by the API docs, the [PyTorch forum](https://discuss.pytorch.org/) is a much better place to ask than StackOverflow.\n",
"\n",
"\n",
"# Table of Contents\n",
"\n",
"This assignment has 5 parts. You will learn PyTorch on **three different levels of abstraction**, which will help you understand it better and prepare you for the final project. \n",
"\n",
"1. Part I, Preparation: we will use CIFAR-10 dataset.\n",
"2. Part II, Barebones PyTorch: **Abstraction level 1**, we will work directly with the lowest-level PyTorch Tensors. \n",
"3. Part III, PyTorch Module API: **Abstraction level 2**, we will use `nn.Module` to define arbitrary neural network architecture. \n",
"4. Part IV, PyTorch Sequential API: **Abstraction level 3**, we will use `nn.Sequential` to define a linear feed-forward network very conveniently. \n",
"5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features. \n",
"\n",
"Here is a table of comparison:\n",
"\n",
"| API | Flexibility | Convenience |\n",
"|---------------|-------------|-------------|\n",
"| Barebone | High | Low |\n",
"| `nn.Module` | High | Medium |\n",
"| `nn.Sequential` | Low | High |"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Part I. Preparation\n",
"\n",
"首先,我们加载CIFAR-10数据集。第一次执行可能会花费几分钟,但是之后文件应该存储在缓存中,不需要再次花费时间。\n",
"\n",
"在之前的作业中,我们必须编写自己的代码来下载CIFAR-10数据集并对其进行预处理,然后以小批量的方式对其进行遍历。PyTorch为我们提供了方便的工具来自动执行此过程。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore"
]
},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"import torch.optim as optim\n",
"from torch.utils.data import DataLoader\n",
"from torch.utils.data import sampler\n",
"\n",
"import torchvision.datasets as dset\n",
"import torchvision.transforms as T\n",
"\n",
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore"
]
},
"outputs": [],
"source": [
"NUM_TRAIN = 49000\n",
"\n",
"# The torchvision.transforms package provides tools for preprocessing data\n",
"# and for performing data augmentation; here we set up a transform to\n",
"# preprocess the data by subtracting the mean RGB value and dividing by the\n",
"# standard deviation of each RGB value; we've hardcoded the mean and std.\n",
"transform = T.Compose([\n",
" T.ToTensor(),\n",
" T.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))\n",
" ])\n",
"\n",
"# We set up a Dataset object for each split (train / val / test); Datasets load\n",
"# training examples one at a time, so we wrap each Dataset in a DataLoader which\n",
"# iterates through the Dataset and forms minibatches. We divide the CIFAR-10\n",
"# training set into train and val sets by passing a Sampler object to the\n",
"# DataLoader telling how it should sample from the underlying Dataset.\n",
"cifar10_train = dset.CIFAR10('./daseCV/datasets', train=True, download=True,\n",
" transform=transform)\n",
"loader_train = DataLoader(cifar10_train, batch_size=64, \n",
" sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN)))\n",
"\n",
"cifar10_val = dset.CIFAR10('./daseCV/datasets', train=True, download=True,\n",
" transform=transform)\n",
"loader_val = DataLoader(cifar10_val, batch_size=64, \n",
" sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN, 50000)))\n",
"\n",
"cifar10_test = dset.CIFAR10('./daseCV/datasets', train=False, download=True, \n",
" transform=transform)\n",
"loader_test = DataLoader(cifar10_test, batch_size=64)"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-ignore"
]
},
"source": [
"你可以他通过**设置下面的flag来使用GPU**。本次作业并非一定使用GPU。请注意,如果您的计算机并没有安装CUDA,则`torch.cuda.is_available()`将返回False,并且本notebook将回退至CPU模式。\n",
"\n",
"全局变量`dtype`和 `device`将在整个作业中控制数据类型。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"USE_GPU = True\n",
"\n",
"dtype = torch.float32 # we will be using float throughout this tutorial\n",
"\n",
"if USE_GPU and torch.cuda.is_available():\n",
" device = torch.device('cuda')\n",
"else:\n",
" device = torch.device('cpu')\n",
"\n",
"# Constant to control how frequently we print train loss\n",
"print_every = 100\n",
"\n",
"print('using device:', device)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Part II. Barebones PyTorch\n",
"\n",
"PyTorch附带了高级API,可帮助我们方便地定义模型架构,我们将在本教程的第二部分中介绍。在本节中,我们将从barebone PyTorch元素开始,以更好地了解autograd引擎。在完成本练习之后,您将更加喜欢高级模型API。\n",
"\n",
"我们将从一个简单的全连接的ReLU网络开始,该网络具有两个隐藏层并且没有biases用以对CIFAR分类。此实现使用PyTorch Tensors上的运算来计算正向传播,并使用PyTorch autograd来计算梯度。理解每一行代码很重要,因为在示例之后您将编写一个更难的版本。\n",
"\n",
"当我们使用`requires_grad = True`创建一个PyTorch Tensor时,涉及该Tensor的操作将不仅仅计算值。他们还建立一个计算图,使我们能够轻松地在该图中反向传播,以计算某些张量相对于下游loss的梯度。具体来说,如果x是张量同时设置`x.requires_grad == True`,那么在反向传播之后,`x.grad`将会是另一个张量,其保存了x对于最终loss的梯度。"
]
},
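{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Optional illustration (not part of the assignment):* a tiny autograd example of the `requires_grad` / `.grad` behaviour described above. The names `x_demo` and `loss_demo` are made up for this demo."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional illustration; not part of the assignment.\n",
"# Operations on Tensors with requires_grad=True record a computational graph,\n",
"# and .backward() fills in the .grad attribute of every leaf Tensor.\n",
"x_demo = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)\n",
"loss_demo = (x_demo ** 2).sum()   # forward pass: loss = x1^2 + x2^2 + x3^2\n",
"loss_demo.backward()              # backward pass through the recorded graph\n",
"print(x_demo.grad)                # d(loss)/dx = 2 * x -> tensor([2., 4., 6.])"
]
},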
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-ignore"
]
},
"source": [
"### PyTorch Tensors: Flatten Function\n",
"PyTorch Tensor在概念上类似于numpy数组:它是一个n维数字网格,并且像numpy一样,PyTorch提供了许多功能来方便地在Tensor上进行操作。举一个简单的例子,我们提供一个`flatten`功能,该函数可以改变图像数据的形状以用于全连接神经网络。\n",
"\n",
"回想一下,图像数据通常存储在形状为N x C x H x W的张量中,其中:\n",
"\n",
"* N 是数据的数量\n",
"* C 是通道的数量\n",
"* H 是中间特征图的高度(以像素为单位)\n",
"* W 是中间特征图的宽度(以像素为单位)\n",
"\n",
"当我们进行类似2D卷积的操作时,这是表示数据的正确方法,该操作需要对中间特征之间有所了解。但是,当我们使用全连接的仿射层来处理图像时,我们希望每个数据都由单个向量表示,不需要分离数据的不同通道以及行和列。因此,我们使用\"flatten\"操作将每个表示形式为`C x H x W`的值转换为单个长向量。下面的flatten函数首先从给定的一批数据中读取N,C,H和W值,然后返回该数据的\"view\"。“\"view\"类似于numpy的\"reshape\"方法:将x的尺寸转换为N x ??,其中??允许为任何值(在这种情况下,它将为C x H x W,但我们无需明确指定)。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"def flatten(x):\n",
" N = x.shape[0] # read in N, C, H, W\n",
" return x.view(N, -1) # \"flatten\" the C * H * W values into a single vector per image\n",
"\n",
"def test_flatten():\n",
" x = torch.arange(12).view(2, 1, 3, 2)\n",
" print('Before flattening: ', x)\n",
" print('After flattening: ', flatten(x))\n",
"\n",
"test_flatten()"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-ignore"
]
},
"source": [
"### Barebones PyTorch: Two-Layer Network\n",
"\n",
"在这里,我们定义一个函数`two_layer_fc`,该函数对一批图像数据执行两层全连接的ReLU网络的正向传播。定义正向传播后,我们通过将网络的值设置为0来检查其输出的形状来判断网络是否正确。\n",
"\n",
"您无需在此处编写任何代码,但需要阅读并理解。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"import torch.nn.functional as F # useful stateless functions\n",
"\n",
"def two_layer_fc(x, params):\n",
" \"\"\"\n",
" A fully-connected neural networks; the architecture is:\n",
" NN is fully connected -> ReLU -> fully connected layer.\n",
" Note that this function only defines the forward pass; \n",
" PyTorch will take care of the backward pass for us.\n",
" \n",
" The input to the network will be a minibatch of data, of shape\n",
" (N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,\n",
" and the output layer will produce scores for C classes.\n",
" \n",
" Inputs:\n",
" - x: A PyTorch Tensor of shape (N, d1, ..., dM) giving a minibatch of\n",
" input data.\n",
" - params: A list [w1, w2] of PyTorch Tensors giving weights for the network;\n",
" w1 has shape (D, H) and w2 has shape (H, C).\n",
" \n",
" Returns:\n",
" - scores: A PyTorch Tensor of shape (N, C) giving classification scores for\n",
" the input data x.\n",
" \"\"\"\n",
" # first we flatten the image\n",
" x = flatten(x) # shape: [batch_size, C x H x W]\n",
" \n",
" w1, w2 = params\n",
" \n",
" # Forward pass: compute predicted y using operations on Tensors. Since w1 and\n",
" # w2 have requires_grad=True, operations involving these Tensors will cause\n",
" # PyTorch to build a computational graph, allowing automatic computation of\n",
" # gradients. Since we are no longer implementing the backward pass by hand we\n",
" # don't need to keep references to intermediate values.\n",
" # you can also use `.clamp(min=0)`, equivalent to F.relu()\n",
" x = F.relu(x.mm(w1))\n",
" x = x.mm(w2)\n",
" return x\n",
" \n",
"\n",
"def two_layer_fc_test():\n",
" hidden_layer_size = 42\n",
" x = torch.zeros((64, 50), dtype=dtype) # minibatch size 64, feature dimension 50\n",
" w1 = torch.zeros((50, hidden_layer_size), dtype=dtype)\n",
" w2 = torch.zeros((hidden_layer_size, 10), dtype=dtype)\n",
" scores = two_layer_fc(x, [w1, w2])\n",
" print(scores.size()) # you should see [64, 10]\n",
"\n",
"two_layer_fc_test()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Barebones PyTorch: Three-Layer ConvNet\n",
"\n",
"在这里,您将完成`three_layer_convnet`函数,该函数将执行三层卷积网络的正向传播。像上面一样,我们通过将网络的值设置为0来检查其输出的形状来判断网络是否正确。网络应具有以下架构:\n",
"\n",
"1. 具有`channel_1`滤波器的卷积层(带偏置),每个滤波器的形状均为`KW1 x KH1`,zero-padding为2\n",
"2. 非线性ReLU\n",
"3. 具有`channel_2`滤波器的卷积层(带偏置),每个滤波器的形状均为`KW2 x KH2`,zero-padding为1\n",
"4. 非线性ReLU\n",
"5. 具有偏差的全连接层,输出C类的分数。\n",
"\n",
"请注意,在我们全连接层之后**没有softmax**:这是因为PyTorch的交叉熵损失会为您执行softmax,并通过捆绑该步骤可以使计算效率更高。\n",
"\n",
"**提示**: 关于卷积: http://pytorch.org/docs/stable/nn.html#torch.nn.functional.conv2d; 注意卷积滤波器的形状!"
]
},
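{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Optional illustration (not part of the assignment, and not the solution):* how `F.conv2d` is called and how padding affects the output shape. The names `x_demo`, `w_demo`, `b_demo`, `out_demo` are made up for this demo."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional illustration of F.conv2d (see the hint above); not the solution.\n",
"# Convolution weights have shape (out_channels, in_channels, KH, KW); with\n",
"# stride 1, padding = (kernel_size - 1) // 2 preserves H and W for odd kernels.\n",
"x_demo = torch.zeros((2, 3, 32, 32), dtype=dtype)   # dummy minibatch: N x C x H x W\n",
"w_demo = torch.randn((7, 3, 5, 5), dtype=dtype)     # 7 filters, each of shape 3 x 5 x 5\n",
"b_demo = torch.zeros(7, dtype=dtype)                # one bias per output channel\n",
"out_demo = F.conv2d(x_demo, w_demo, bias=b_demo, padding=2)\n",
"print(out_demo.shape)                               # torch.Size([2, 7, 32, 32])"
]
},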
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def three_layer_convnet(x, params):\n",
" \"\"\"\n",
" Performs the forward pass of a three-layer convolutional network with the\n",
" architecture defined above.\n",
"\n",
" Inputs:\n",
" - x: A PyTorch Tensor of shape (N, 3, H, W) giving a minibatch of images\n",
" - params: A list of PyTorch Tensors giving the weights and biases for the\n",
" network; should contain the following:\n",
" - conv_w1: PyTorch Tensor of shape (channel_1, 3, KH1, KW1) giving weights\n",
" for the first convolutional layer\n",
" - conv_b1: PyTorch Tensor of shape (channel_1,) giving biases for the first\n",
" convolutional layer\n",
" - conv_w2: PyTorch Tensor of shape (channel_2, channel_1, KH2, KW2) giving\n",
" weights for the second convolutional layer\n",
" - conv_b2: PyTorch Tensor of shape (channel_2,) giving biases for the second\n",
" convolutional layer\n",
" - fc_w: PyTorch Tensor giving weights for the fully-connected layer. Can you\n",
" figure out what the shape should be?\n",
" - fc_b: PyTorch Tensor giving biases for the fully-connected layer. Can you\n",
" figure out what the shape should be?\n",
" \n",
" Returns:\n",
" - scores: PyTorch Tensor of shape (N, C) giving classification scores for x\n",
" \"\"\"\n",
" conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params\n",
" scores = None\n",
" ################################################################################\n",
" # TODO: Implement the forward pass for the three-layer ConvNet. #\n",
" ################################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ################################################################################\n",
" # END OF YOUR CODE #\n",
" ################################################################################\n",
" return scores"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"在定义完上述ConvNet的正向传播之后,运行以下cell以测试您的代码。\n",
"\n",
"运行此函数时,scores的形状为(64, 10)。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"def three_layer_convnet_test():\n",
" x = torch.zeros((64, 3, 32, 32), dtype=dtype) # minibatch size 64, image size [3, 32, 32]\n",
"\n",
" conv_w1 = torch.zeros((6, 3, 5, 5), dtype=dtype) # [out_channel, in_channel, kernel_H, kernel_W]\n",
" conv_b1 = torch.zeros((6,)) # out_channel\n",
" conv_w2 = torch.zeros((9, 6, 3, 3), dtype=dtype) # [out_channel, in_channel, kernel_H, kernel_W]\n",
" conv_b2 = torch.zeros((9,)) # out_channel\n",
"\n",
" # you must calculate the shape of the tensor after two conv layers, before the fully-connected layer\n",
" fc_w = torch.zeros((9 * 32 * 32, 10))\n",
" fc_b = torch.zeros(10)\n",
"\n",
" scores = three_layer_convnet(x, [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b])\n",
" print(scores.size()) # you should see [64, 10]\n",
"three_layer_convnet_test()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Barebones PyTorch: Initialization\n",
"让我们编写一些实用的方法来初始化模型的权重矩阵。\n",
"\n",
"- `random_weight(shape)` 使用Kaiming归一化方法初始化权重tensor。\n",
"- `zero_weight(shape)` 用全零初始化权重tensor。主要用于实例化偏差。\n",
"\n",
"`random_weight`函数使用Kaiming归一化,具体描述如下:\n",
"\n",
"He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"def random_weight(shape):\n",
" \"\"\"\n",
" Create random Tensors for weights; setting requires_grad=True means that we\n",
" want to compute gradients for these Tensors during the backward pass.\n",
" We use Kaiming normalization: sqrt(2 / fan_in)\n",
" \"\"\"\n",
" if len(shape) == 2: # FC weight\n",
" fan_in = shape[0]\n",
" else:\n",
" fan_in = np.prod(shape[1:]) # conv weight [out_channel, in_channel, kH, kW]\n",
" # randn is standard normal distribution generator. \n",
" w = torch.randn(shape, device=device, dtype=dtype) * np.sqrt(2. / fan_in)\n",
" w.requires_grad = True\n",
" return w\n",
"\n",
"def zero_weight(shape):\n",
" return torch.zeros(shape, device=device, dtype=dtype, requires_grad=True)\n",
"\n",
"# create a weight of shape [3 x 5]\n",
"# you should see the type `torch.cuda.FloatTensor` if you use GPU. \n",
"# Otherwise it should be `torch.FloatTensor`\n",
"random_weight((3, 5))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Barebones PyTorch: Check Accuracy\n",
"在训练模型时,我们将使用以下函数在训练或验证集上检查模型的准确性。\n",
"\n",
"在检查准确性时,我们不需要计算任何梯度。当我们计算 scores 时,我们不需要PyTorch为我们构建计算图。为了防止构建图,我们将使用`torch.no_grad()`。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"def check_accuracy_part2(loader, model_fn, params):\n",
" \"\"\"\n",
" Check the accuracy of a classification model.\n",
" \n",
" Inputs:\n",
" - loader: A DataLoader for the data split we want to check\n",
" - model_fn: A function that performs the forward pass of the model,\n",
" with the signature scores = model_fn(x, params)\n",
" - params: List of PyTorch Tensors giving parameters of the model\n",
" \n",
" Returns: Nothing, but prints the accuracy of the model\n",
" \"\"\"\n",
" split = 'val' if loader.dataset.train else 'test'\n",
" print('Checking accuracy on the %s set' % split)\n",
" num_correct, num_samples = 0, 0\n",
" with torch.no_grad():\n",
" for x, y in loader:\n",
" x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU\n",
" y = y.to(device=device, dtype=torch.int64)\n",
" scores = model_fn(x, params)\n",
" _, preds = scores.max(1)\n",
" num_correct += (preds == y).sum()\n",
" num_samples += preds.size(0)\n",
" acc = float(num_correct) / num_samples\n",
" print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### BareBones PyTorch: Training Loop\n",
"现在,我们可以使用一个基本的循环来训练我们的网络。我们将使用没有momentum的随机梯度下降训练模型,并使用 `torch.functional.cross_entropy`来计算损失;您可以[在此处阅读有关内容](http://pytorch.org/docs/stable/nn.html#cross-entropy)。\n",
"\n",
"将初始化参数列表(在我们的示例中为`[w1, w2]`)和学习率作为神经网络函数训练的输入。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"def train_part2(model_fn, params, learning_rate):\n",
" \"\"\"\n",
" Train a model on CIFAR-10.\n",
" \n",
" Inputs:\n",
" - model_fn: A Python function that performs the forward pass of the model.\n",
" It should have the signature scores = model_fn(x, params) where x is a\n",
" PyTorch Tensor of image data, params is a list of PyTorch Tensors giving\n",
" model weights, and scores is a PyTorch Tensor of shape (N, C) giving\n",
" scores for the elements in x.\n",
" - params: List of PyTorch Tensors giving weights for the model\n",
" - learning_rate: Python scalar giving the learning rate to use for SGD\n",
" \n",
" Returns: Nothing\n",
" \"\"\"\n",
" for t, (x, y) in enumerate(loader_train):\n",
" # Move the data to the proper device (GPU or CPU)\n",
" x = x.to(device=device, dtype=dtype)\n",
" y = y.to(device=device, dtype=torch.long)\n",
"\n",
" # Forward pass: compute scores and loss\n",
" scores = model_fn(x, params)\n",
" loss = F.cross_entropy(scores, y)\n",
"\n",
" # Backward pass: PyTorch figures out which Tensors in the computational\n",
" # graph has requires_grad=True and uses backpropagation to compute the\n",
" # gradient of the loss with respect to these Tensors, and stores the\n",
" # gradients in the .grad attribute of each Tensor.\n",
" loss.backward()\n",
"\n",
" # Update parameters. We don't want to backpropagate through the\n",
" # parameter updates, so we scope the updates under a torch.no_grad()\n",
" # context manager to prevent a computational graph from being built.\n",
" with torch.no_grad():\n",
" for w in params:\n",
" w -= learning_rate * w.grad\n",
"\n",
" # Manually zero the gradients after running the backward pass\n",
" w.grad.zero_()\n",
"\n",
" if t % print_every == 0:\n",
" print('Iteration %d, loss = %.4f' % (t, loss.item()))\n",
" check_accuracy_part2(loader_val, model_fn, params)\n",
" print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### BareBones PyTorch: Train a Two-Layer Network\n",
"现在我们准备好运行训练循环。我们需要为全连接的权重`w1`和`w2`显式的分配tensors。\n",
"\n",
"CIFAR的每个小批都有64个数据,因此tensor形状为`[64, 3, 32, 32]`。\n",
"\n",
"展平后,`x` 形状应为`[64, 3 * 32 * 32]`。这将是`w1`的第一维尺寸。`w1` 的第二维是隐藏层的大小,这同时也是`w2`的第一维。\n",
"\n",
"最后,网络的输出是一个10维向量,代表10类的概率分布。\n",
"\n",
"您无需调整任何超参数,但经过一个epoch的训练后,您应该会看到40%以上的准确度。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hidden_layer_size = 4000\n",
"learning_rate = 1e-2\n",
"\n",
"w1 = random_weight((3 * 32 * 32, hidden_layer_size))\n",
"w2 = random_weight((hidden_layer_size, 10))\n",
"\n",
"train_part2(two_layer_fc, [w1, w2], learning_rate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### BareBones PyTorch: Training a ConvNet\n",
"\n",
"在下面,您应该使用上面定义的功能在CIFAR上训练三层卷积网络。网络应具有以下架构:\n",
"\n",
"1. 带32 5x5滤波器的卷积层(带偏置),zero-padding为2\n",
"2. ReLU\n",
"3. 带16 3x3滤波器的卷积层(带偏置),zero-padding为1\n",
"4. ReLU\n",
"5. 全连接层(带偏置),可计算10个类别的scores\n",
"\n",
"您应该使用上面定义的`random_weight`函数来初始化权重矩阵,并且使用上面的`zero_weight`函数来初始化偏差向量。\n",
"\n",
"您无需调整任何超参数,但经过一个epoch的训练后,您应该会看到42%以上的准确度。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learning_rate = 3e-3\n",
"\n",
"channel_1 = 32\n",
"channel_2 = 16\n",
"\n",
"conv_w1 = None\n",
"conv_b1 = None\n",
"conv_w2 = None\n",
"conv_b2 = None\n",
"fc_w = None\n",
"fc_b = None\n",
"\n",
"################################################################################\n",
"# TODO: Initialize the parameters of a three-layer ConvNet. #\n",
"################################################################################\n",
"# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
"pass\n",
"\n",
"# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"################################################################################\n",
"# END OF YOUR CODE #\n",
"################################################################################\n",
"\n",
"params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]\n",
"train_part2(three_layer_convnet, params, learning_rate)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Part III. PyTorch Module API\n",
"\n",
"Barebone PyTorch要求我们手动跟踪所有参数的tensors。这对于具有几个tensors的小型网络倒是没什么问题,但是在较大的网络中跟踪数十个或数百个tensors将非常不方便且容易出错。\n",
"\n",
"PyTorch为您提供`nn.Module` API,以定义任意网络架构,同时为您跟踪每个可学习的参数。在Part II中,我们自己实现了SGD。PyTorch还提供了`torch.optim`软件包,该软件包实现了所有常见的优化器,例如RMSProp,Adagrad和Adam。它甚至支持近似二阶方法,例如L-BFGS!您可以参考[doc](http://pytorch.org/docs/master/optim.html) 了解每个优化器的详细信息。\n",
"\n",
"要使用Module API,请按照以下步骤操作:\n",
"\n",
"1. 定义`nn.Module`的子类,并给您的类起一个直观的名称,例如`TwoLayerFC`。\n",
"\n",
"2. 在构造函数`__init__()`中,将所有的层定义为类属性。像 `nn.Linear`和`nn.Conv2d`这样的层对象本身就是`nn.Module` 子类,并且包含可学习的参数,因此您不必自己实例化原始tensors。`nn.Module`将为您追踪这些内部参数。请参阅[doc](http://pytorch.org/docs/master/nn.html),以了解有关内置层的更多信息。**警告**:别忘了先调用`super().__ init __()`!\n",
"\n",
"3. 在`forward()`方法中,定义网络的*connectivity*。你应该使用 `__init__`中定义的属性作为函数调用,把tensor作为输入,把“变换后的”tensor作为输出。。*不要*在`forward()`中创建任何带有可学习参数的新层!所有这些都必须在`__init__`中预先声明。\n",
"\n",
"定义Module子类后,可以将其实例化为对象,然后像part II中的NN forward函数一样调用它。\n",
"\n",
"### Module API: Two-Layer Network\n",
"这是两层全连接网络的具体示例:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class TwoLayerFC(nn.Module):\n",
" def __init__(self, input_size, hidden_size, num_classes):\n",
" super().__init__()\n",
" # assign layer objects to class attributes\n",
" self.fc1 = nn.Linear(input_size, hidden_size)\n",
" # nn.init package contains convenient initialization methods\n",
" # http://pytorch.org/docs/master/nn.html#torch-nn-init \n",
" nn.init.kaiming_normal_(self.fc1.weight)\n",
" self.fc2 = nn.Linear(hidden_size, num_classes)\n",
" nn.init.kaiming_normal_(self.fc2.weight)\n",
" \n",
" def forward(self, x):\n",
" # forward always defines connectivity\n",
" x = flatten(x)\n",
" scores = self.fc2(F.relu(self.fc1(x)))\n",
" return scores\n",
"\n",
"def test_TwoLayerFC():\n",
" input_size = 50\n",
" x = torch.zeros((64, input_size), dtype=dtype) # minibatch size 64, feature dimension 50\n",
" model = TwoLayerFC(input_size, 42, 10)\n",
" scores = model(x)\n",
" print(scores.size()) # you should see [64, 10]\n",
"test_TwoLayerFC()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Module API: Three-Layer ConvNet\n",
"在完成全连接层之后接着完成你的三层的ConvNet。网络架构应与 Part II 相同:\n",
"\n",
"1. 具有`channel_1`滤波器的卷积层(带偏置),每个滤波器的形状均为5x5,zero-padding为2\n",
"2. ReLU\n",
"3. 具有`channel_2`滤波器的卷积层(带偏置),每个滤波器的形状均为3x3,zero-padding为1\n",
"4. ReLU\n",
"5. 全连接层,输出`num_classes`类。\n",
"\n",
"您应该使用Kaiming初始化方法初始化模型的权重矩阵。\n",
"\n",
"**提示**: http://pytorch.org/docs/stable/nn.html#conv2d\n",
"\n",
"在实现三层ConvNet之后,`test_ThreeLayerConvNet`函数将运行您的代码;它应该输出形状为`(64,10)`的scores。"
]
},
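{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Optional illustration (not part of the assignment, and not the solution):* what an `nn.Conv2d` layer object looks like and how Kaiming initialization is applied to it. The name `demo_conv` is made up for this demo."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional illustration of nn.Conv2d (see the hint above); not the solution.\n",
"# The layer object owns its own weight and bias Tensors and tracks them for you.\n",
"demo_conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5, padding=2, bias=True)\n",
"nn.init.kaiming_normal_(demo_conv.weight)            # Kaiming initialization\n",
"print(demo_conv.weight.shape)                        # torch.Size([8, 3, 5, 5])\n",
"print(demo_conv(torch.zeros(2, 3, 32, 32)).shape)    # torch.Size([2, 8, 32, 32])"
]
},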
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class ThreeLayerConvNet(nn.Module):\n",
" def __init__(self, in_channel, channel_1, channel_2, num_classes):\n",
" super().__init__()\n",
" ########################################################################\n",
" # TODO: Set up the layers you need for a three-layer ConvNet with the #\n",
" # architecture defined above. #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE # \n",
" ########################################################################\n",
"\n",
" def forward(self, x):\n",
" scores = None\n",
" ########################################################################\n",
" # TODO: Implement the forward function for a 3-layer ConvNet. you #\n",
" # should use the layers you defined in __init__ and specify the #\n",
" # connectivity of those layers in forward() #\n",
" ########################################################################\n",
" # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
" pass\n",
"\n",
" # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
" ########################################################################\n",
" # END OF YOUR CODE #\n",
" ########################################################################\n",
" return scores\n",
"\n",
"\n",
"def test_ThreeLayerConvNet():\n",
" x = torch.zeros((64, 3, 32, 32), dtype=dtype) # minibatch size 64, image size [3, 32, 32]\n",
" model = ThreeLayerConvNet(in_channel=3, channel_1=12, channel_2=8, num_classes=10)\n",
" scores = model(x)\n",
" print(scores.size()) # you should see [64, 10]\n",
"test_ThreeLayerConvNet()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Module API: Check Accuracy\n",
"给定验证或测试集,我们可以检查神经网络的分类准确性。\n",
"\n",
"此版本与part II中的版本略有不同。您不再需要手动传递参数。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def check_accuracy_part34(loader, model):\n",
" if loader.dataset.train:\n",
" print('Checking accuracy on validation set')\n",
" else:\n",
" print('Checking accuracy on test set') \n",
" num_correct = 0\n",
" num_samples = 0\n",
" model.eval() # set model to evaluation mode\n",
" with torch.no_grad():\n",
" for x, y in loader:\n",
" x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU\n",
" y = y.to(device=device, dtype=torch.long)\n",
" scores = model(x)\n",
" _, preds = scores.max(1)\n",
" num_correct += (preds == y).sum()\n",
" num_samples += preds.size(0)\n",
" acc = float(num_correct) / num_samples\n",
" print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Module API: Training Loop\n",
"我们还使用了稍微不同的训练循环。我们不用自己更新权重的值,而是使用来自`torch.optim`包的Optimizer对象,该对象抽象了优化算法的概念,并实现了通常用于优化神经网络的大多数算法。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def train_part34(model, optimizer, epochs=1):\n",
" \"\"\"\n",
" Train a model on CIFAR-10 using the PyTorch Module API.\n",
" \n",
" Inputs:\n",
" - model: A PyTorch Module giving the model to train.\n",
" - optimizer: An Optimizer object we will use to train the model\n",
" - epochs: (Optional) A Python integer giving the number of epochs to train for\n",
" \n",
" Returns: Nothing, but prints model accuracies during training.\n",
" \"\"\"\n",
" model = model.to(device=device) # move the model parameters to CPU/GPU\n",
" for e in range(epochs):\n",
" for t, (x, y) in enumerate(loader_train):\n",
" model.train() # put model to training mode\n",
" x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU\n",
" y = y.to(device=device, dtype=torch.long)\n",
"\n",
" scores = model(x)\n",
" loss = F.cross_entropy(scores, y)\n",
"\n",
" # Zero out all of the gradients for the variables which the optimizer\n",
" # will update.\n",
" optimizer.zero_grad()\n",
"\n",
" # This is the backwards pass: compute the gradient of the loss with\n",
" # respect to each parameter of the model.\n",
" loss.backward()\n",
"\n",
" # Actually update the parameters of the model using the gradients\n",
" # computed by the backwards pass.\n",
" optimizer.step()\n",
"\n",
" if t % print_every == 0:\n",
" print('Iteration %d, loss = %.4f' % (t, loss.item()))\n",
" check_accuracy_part34(loader_val, model)\n",
" print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Module API: Train a Two-Layer Network\n",
"现在我们准备好运行训练循环。与 part II相比,我们不再显式分配参数tensors。\n",
"\n",
"只需将输入大小,隐藏层大小和类数(即输出大小)传递给`TwoLayerFC`的构造函数即可。\n",
"\n",
"您还需要定义一个优化器来追踪`TwoLayerFC`内部的所有可学习参数。\n",
"\n",
"您无需调整任何超参数,经过一个epoch的训练后,您应该会看到模型精度超过40%。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hidden_layer_size = 4000\n",
"learning_rate = 1e-2\n",
"model = TwoLayerFC(3 * 32 * 32, hidden_layer_size, 10)\n",
"optimizer = optim.SGD(model.parameters(), lr=learning_rate)\n",
"\n",
"train_part34(model, optimizer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Module API: Train a Three-Layer ConvNet\n",
"现在,您应该使用Module API在CIFAR上训练三层ConvNet。这看起来与训练两层网络非常相似!您无需调整任何超参数,但经过一个epoch的训练后,您应该达到45%以上水平的精度。\n",
"\n",
"您应该使用没有动量的随机梯度下降法训练模型。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learning_rate = 3e-3\n",
"channel_1 = 32\n",
"channel_2 = 16\n",
"\n",
"model = None\n",
"optimizer = None\n",
"################################################################################\n",
"# TODO: Instantiate your ThreeLayerConvNet model and a corresponding optimizer #\n",
"################################################################################\n",
"# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
"pass\n",
"\n",
"# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"################################################################################\n",
"# END OF YOUR CODE \n",
"################################################################################\n",
"\n",
"train_part34(model, optimizer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Part IV. PyTorch Sequential API\n",
"\n",
"Part III介绍了PyTorch Module API,该API允许您定义任意可学习的层及其连接。\n",
"\n",
"对于简单的模型,你需要经历3个步骤:子类`nn.Module`,在`__init__`中定义各层,并在`forward()`中逐个调用每一层。。那有没有更方便的方法?\n",
"\n",
"幸运的是,PyTorch提供了一个名为`nn.Sequential`的容器模块,该模块将上述步骤合并为一个。它不如`nn.Module`灵活,因为您不能指定更复杂的拓扑结构,但是对于许多用例来说已经足够了。\n",
"\n",
"### Sequential API: Two-Layer Network\n",
"让我们看看如何用`nn.Sequential`重写之前的两层全连接网络示例,并使用上面定义的训练循环对其进行训练。\n",
"\n",
"同样,您无需在此处调整任何超参数,但是经过一个epoch的训练后,您应该达到40%以上的准确性。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We need to wrap `flatten` function in a module in order to stack it\n",
"# in nn.Sequential\n",
"class Flatten(nn.Module):\n",
" def forward(self, x):\n",
" return flatten(x)\n",
"\n",
"hidden_layer_size = 4000\n",
"learning_rate = 1e-2\n",
"\n",
"model = nn.Sequential(\n",
" Flatten(),\n",
" nn.Linear(3 * 32 * 32, hidden_layer_size),\n",
" nn.ReLU(),\n",
" nn.Linear(hidden_layer_size, 10),\n",
")\n",
"\n",
"# you can use Nesterov momentum in optim.SGD\n",
"optimizer = optim.SGD(model.parameters(), lr=learning_rate,\n",
" momentum=0.9, nesterov=True)\n",
"\n",
"train_part34(model, optimizer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sequential API: Three-Layer ConvNet\n",
"在这里,您应该使用`nn.Sequential` 来定义和训练三层ConvNet,其结构与我们在第三部分中使用的结构相同:\n",
"\n",
"1. 带32 5x5滤波器的卷积层(带偏置),zero-padding为2\n",
"2. ReLU\n",
"3. 带16 3x3滤波器的卷积层(带偏置),zero-padding为1\n",
"4. ReLU\n",
"5. 全连接层(带偏置),可计算10个类别的分数\n",
"\n",
"您应该使用上面定义的`random_weight`函数来初始化权重矩阵,并应该使用`zero_weight`函数来初始化偏差向量。\n",
"\n",
"您应该使用Nesterov动量0.9的随机梯度下降来优化模型。\n",
"\n",
"同样,您不需要调整任何超参数,但是经过一个epoch的训练,您应该会看到55%以上的准确性。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"channel_1 = 32\n",
"channel_2 = 16\n",
"learning_rate = 1e-2\n",
"\n",
"model = None\n",
"optimizer = None\n",
"\n",
"################################################################################\n",
"# TODO: Rewrite the 2-layer ConvNet with bias from Part III with the #\n",
"# Sequential API. #\n",
"################################################################################\n",
"# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
"pass\n",
"\n",
"# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"################################################################################\n",
"# END OF YOUR CODE \n",
"################################################################################\n",
"\n",
"train_part34(model, optimizer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Part V. CIFAR-10 open-ended challenge\n",
"\n",
"在本节中,您可以尝试在CIFAR-10上使用任何ConvNet架构。\n",
"\n",
"现在,您的工作就是尝试使用不同的架构、超参数、损失函数和优化器,以训练出在CIFAR-10上运行10个epoch内的使得 **验证集** 上 **至少达到70%** 精度的模型。你可以使用上面的check_accuracy和train函数。也可以使用`nn.Module`或`nn.Sequential` API。\n",
"\n",
"描述您在本notebook末尾所做的事情。\n",
"\n",
"这是每个组件的官方API文档。需要注意的是:在PyTorch中\"spatial batch norm\"称为\"BatchNorm2D\"。\n",
"\n",
"* Layers in torch.nn package: http://pytorch.org/docs/stable/nn.html\n",
"* Activations: http://pytorch.org/docs/stable/nn.html#non-linear-activations\n",
"* Loss functions: http://pytorch.org/docs/stable/nn.html#loss-functions\n",
"* Optimizers: http://pytorch.org/docs/stable/optim.html\n",
"\n",
"\n",
"### Things you might try:\n",
"- **过滤器大小**:上面我们使用了5x5的大小;较小的过滤器会更有效吗?\n",
"- **过滤器数量**:上面我们使用了32个过滤器。多点更好还是少一点更好?\n",
"- **Pooling vs Strided Convolution**: 您使用 max pooling还是stride convolutions?\n",
"- **Batch normalization**: 尝试在卷积层之后添加空间批处理归一化,并在affine layers之后添加批归一化。您的网络训练速度会更快吗?\n",
"- **网络架构**: 上面的网络具有两层可训练的参数。深度网络可以做得更好吗?可以尝试的良好架构包括:\n",
" - [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n",
" - [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n",
" - [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]\n",
"- **Global Average Pooling**: 不将图片转变为向量而是有多个仿射层并执行卷积直到图像变小(大约7x7),然后执行平均池化操作以获取1x1图像图片(1, 1 , Filter#),然后将其变换为为(Filter#)向量。在[Google's Inception Network](https://arxiv.org/abs/1512.00567)中使用了它(其结构请参见表1)。\n",
"- **正则化**:添加l2权重正则化,或者使用Dropout。\n",
"\n",
"### Tips for training\n",
"对于尝试的每种网络结构,您都应该调整学习速率和其他超参数。进行此操作时,需要牢记一些重要事项:\n",
"\n",
"- 如果参数运行良好,则应在几百次迭代中看到改进\n",
"- 请记住,从粗略到精细的超参数调整方法:首先测试大范围的超参数,只需要几个训练迭代就可以找到有效的参数组合。\n",
"- 找到一些似乎有效的参数后,请在这些参数周围进行更精细的搜索。您可能需要训练更多的epochs。\n",
"- 您应该使用验证集进行超参数搜索,并保存测试集,以便根据验证集选择的最佳参数评估网络结构。\n",
"\n",
"### Going above and beyond\n",
"If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!\n",
"如果您喜欢冒险,可以使用许多其他功能来尝试并提高性能。下面**不是不须**完成的,但如果有时间,请不要错过!\n",
"\n",
"- 替代的优化器:您可以尝试Adam,Adagrad,RMSprop等。\n",
"- 替代激活函数,例如leaky ReLU,parametric ReLU,ELU或MaxOut。\n",
"- 集成学习\n",
"- 数据增强\n",
"- 新架构\n",
" - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.\n",
" - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.\n",
" - [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)\n",
"\n",
"### Have fun and happy training! "
]
},
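{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Optional sketch (you are not required to use it):* one way to put some of the pieces above together -- a `[conv - batchnorm - relu - pool]` block followed by an affine layer, written with `nn.Sequential`. The names `example_model` and `example_optimizer` are made up; treat this only as a starting point for your own experiments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch (not required): a minimal conv-batchnorm-relu-pool baseline.\n",
"example_model = nn.Sequential(\n",
"    nn.Conv2d(3, 32, kernel_size=3, padding=1, bias=False),\n",
"    nn.BatchNorm2d(32),                      # \"spatial batch norm\" is BatchNorm2d\n",
"    nn.ReLU(),\n",
"    nn.MaxPool2d(kernel_size=2, stride=2),   # 32x32 -> 16x16\n",
"    Flatten(),                               # the Flatten module defined in Part IV\n",
"    nn.Linear(32 * 16 * 16, 10),\n",
")\n",
"example_optimizer = optim.SGD(example_model.parameters(), lr=1e-2,\n",
"                              momentum=0.9, nesterov=True)\n",
"# To try it out: train_part34(example_model, example_optimizer, epochs=1)"
]
},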
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"################################################################################\n",
"# TODO: # \n",
"# Experiment with any architectures, optimizers, and hyperparameters. #\n",
"# Achieve AT LEAST 70% accuracy on the *validation set* within 10 epochs. #\n",
"# #\n",
"# Note that you can use the check_accuracy function to evaluate on either #\n",
"# the test set or the validation set, by passing either loader_test or #\n",
"# loader_val as the second argument to check_accuracy. You should not touch #\n",
"# the test set until you have finished your architecture and hyperparameter #\n",
"# tuning, and only run the test set once at the end to report a final value. #\n",
"################################################################################\n",
"model = None\n",
"optimizer = None\n",
"\n",
"# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
"pass\n",
"\n",
"# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"################################################################################\n",
"# END OF YOUR CODE \n",
"################################################################################\n",
"\n",
"# You should get at least 70% accuracy\n",
"train_part34(model, optimizer, epochs=10)"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-inline"
]
},
"source": [
"## 描述下你做了什么\n",
"\n",
"在下面的单元格中,你应该解释你做了什么,你实现了什么额外的功能,和/或你在训练和评估你的网络的过程中做了什么。。"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-inline"
]
},
"source": [
"TODO: 描述下 你做了什么"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test set -- run this only once\n",
"\n",
"Now that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). Think about how this compares to your validation set accuracy.\n",
"现在我们已经获得了满意的结果,我们在测试集上测试最终模型(您应该将其存储在best_model中)。考虑一下这与你在验证集上的准确性相比如何。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"best_model = model\n",
"check_accuracy_part34(loader_test, best_model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"# 重要\n",
"\n",
"这里是作业的结尾处,请执行以下步骤:\n",
"\n",
"1. 点击`File -> Save`或者用`control+s`组合键,确保你最新的的notebook的作业已经保存到谷歌云。\n",
"2. 执行以下代码确保 `.py` 文件保存回你的谷歌云。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"FOLDER_TO_SAVE = os.path.join('drive/My Drive/', FOLDERNAME)\n",
"FILES_TO_SAVE = ['daseCV/classifiers/cnn.py', 'daseCV/classifiers/fc_net.py']\n",
"\n",
"for files in FILES_TO_SAVE:\n",
" with open(os.path.join(FOLDER_TO_SAVE, '/'.join(files.split('/')[1:])), 'w') as f:\n",
" f.write(''.join(open(files).readlines()))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
},
"toc": {
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"toc_cell": false,
"toc_position": {},
"toc_section_display": "block",
"toc_window_display": false
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 1
}