{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.colab import drive\n",
"\n",
"drive.mount('/content/drive', force_remount=True)\n",
"\n",
"# Enter the path where daseCV is located:\n",
"# the 'daseCV' folder contains the '.py' files and the 'classifiers' and 'datasets' folders,\n",
"# e.g. 'CV/assignments/assignment1/daseCV/'\n",
"FOLDERNAME = None\n",
"\n",
"assert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n",
"\n",
"%cd drive/My\\ Drive\n",
"%cp -r $FOLDERNAME ../../\n",
"%cd ../../\n",
"%cd daseCV/datasets/\n",
"!bash get_datasets.sh\n",
"%cd ../../"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-title"
]
},
"source": [
"# K-Nearest Neighbor (kNN) Exercise\n",
"\n",
"*Complete and hand in this exercise.*\n",
"\n",
"The kNN classifier consists of two stages:\n",
"\n",
"- During training, the classifier takes the training data and simply remembers it.\n",
"- During testing, kNN classifies every test image by comparing it to all the training images and having the labels of the k most similar training examples vote.\n",
"- The value of k is cross-validated.\n",
"\n",
"In this exercise you will implement these steps and gain practice with basic image classification, cross-validation, and writing efficient vectorized code."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore"
]
},
"outputs": [],
"source": [
"# Run some setup code for this notebook.\n",
"\n",
"import random\n",
"import numpy as np\n",
"from daseCV.data_utils import load_CIFAR10\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# Make matplotlib figures appear inline in the notebook rather than in a new window.\n",
"%matplotlib inline\n",
"plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots\n",
"plt.rcParams['image.interpolation'] = 'nearest'\n",
"plt.rcParams['image.cmap'] = 'gray'\n",
"\n",
"# Some more magic so that the notebook reloads external python modules;\n",
"# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n",
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore"
]
},
"outputs": [],
"source": [
"# Load the raw CIFAR-10 data.\n",
"cifar10_dir = 'daseCV/datasets/cifar-10-batches-py'\n",
"\n",
"# Clean up variables to prevent loading the data multiple times (which may cause memory issues)\n",
"try:\n",
"    del X_train, y_train\n",
"    del X_test, y_test\n",
"    print('Clear previously loaded data.')\n",
"except NameError:\n",
"    pass\n",
"\n",
"X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n",
"\n",
"# As a sanity check, we print out the shapes of the training and test data.\n",
"print('Training data shape: ', X_train.shape)\n",
"print('Training labels shape: ', y_train.shape)\n",
"print('Test data shape: ', X_test.shape)\n",
"print('Test labels shape: ', y_test.shape)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore"
]
},
"outputs": [],
"source": [
"# Visualize some examples from the dataset.\n",
"# We show a few examples of training images from each class.\n",
"classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\n",
"num_classes = len(classes)\n",
"samples_per_class = 7\n",
"for y, cls in enumerate(classes):\n",
"    idxs = np.flatnonzero(y_train == y)  # flatnonzero returns the indices of the nonzero entries, here the indices of all examples of class y\n",
"    idxs = np.random.choice(idxs, samples_per_class, replace=False)  # replace=False: sample without repetition\n",
"    for i, idx in enumerate(idxs):\n",
"        plt_idx = i * num_classes + y + 1\n",
"        plt.subplot(samples_per_class, num_classes, plt_idx)\n",
"        plt.imshow(X_train[idx].astype('uint8'))\n",
"        plt.axis('off')\n",
"        if i == 0:\n",
"            plt.title(cls)\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore"
]
},
"outputs": [],
"source": [
"# Subsample the data for more efficient code execution in this exercise\n",
"num_training = 5000\n",
"mask = list(range(num_training))\n",
"X_train = X_train[mask]\n",
"y_train = y_train[mask]\n",
"\n",
"num_test = 500\n",
"mask = list(range(num_test))\n",
"X_test = X_test[mask]\n",
"y_test = y_test[mask]\n",
"\n",
"# Reshape the image data into rows\n",
"X_train = np.reshape(X_train, (X_train.shape[0], -1))\n",
"X_test = np.reshape(X_test, (X_test.shape[0], -1))\n",
"print(X_train.shape, X_test.shape)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore"
]
},
"outputs": [],
"source": [
"from daseCV.classifiers import KNearestNeighbor\n",
"\n",
"# Create a kNN classifier instance.\n",
"# Remember that training a kNN classifier is a no-op:\n",
"# the classifier simply remembers the data and does no further processing\n",
"classifier = KNearestNeighbor()\n",
"classifier.train(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: \n",
"\n",
"1. First we must compute the distances between all test examples and all training examples. \n",
"2. Given these distances, for each test example we find the k nearest examples and have them vote for the label.\n",
"\n",
"Let's begin by computing the distance matrix between all training and test examples. Suppose there are **Ntr** training examples and **Nte** test examples; the result of this step is stored in an **Nte x Ntr** matrix, where each element (i,j) is the distance between the i-th test example and the j-th training example.\n",
"\n",
"**Note: do not use the np.linalg.norm() function provided by numpy for the three distance computations in this notebook.**\n",
"\n",
"First, open `daseCV/classifiers/k_nearest_neighbor.py` and complete the function `compute_distances_two_loops`, which uses a (very inefficient) double loop to compute the distance matrix."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Open daseCV/classifiers/k_nearest_neighbor.py and implement\n",
"# compute_distances_two_loops.\n",
"\n",
"# Test your implementation:\n",
"dists = classifier.compute_distances_two_loops(X_test)\n",
"print(dists.shape)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# We can visualize the distance matrix: each row shows one test example's distances to all training examples\n",
"plt.imshow(dists, interpolation='none')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-inline"
]
},
"source": [
"**Question 1** \n",
"\n",
"Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n",
"\n",
"- What in the data is the cause behind the distinctly bright rows?\n",
"- What causes the columns?\n",
"\n",
"$\\color{blue}{\\textit Your Answer:}$ *fill this in*\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Now implement the function predict_labels and run the code below:\n",
"# We use k = 1 (which is the nearest neighbor).\n",
"y_test_pred = classifier.predict_labels(dists, k=1)\n",
"\n",
"# Compute and print the fraction of correctly predicted examples\n",
"num_correct = np.sum(y_test_pred == y_test)\n",
"accuracy = float(num_correct) / num_test\n",
"print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You should expect to see approximately `27%` accuracy. Now let's try a larger `k`, say `k = 5`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"y_test_pred = classifier.predict_labels(dists, k=5)\n",
"num_correct = np.sum(y_test_pred == y_test)\n",
"accuracy = float(num_correct) / num_test\n",
"print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You should expect to see a slightly better performance than with `k = 1`."
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-inline"
]
},
"source": [
"**Question 2**\n",
"\n",
"We can also use other distance metrics such as the L1 distance.\n",
"\n",
"Write $p_{ij}^{(k)}$ for the pixel value at location $(i,j)$ of image $I_k$.\n",
"\n",
"The mean $\\mu$ across all pixels over all images is \n",
"\n",
"$$\\mu=\\frac{1}{nhw}\\sum_{k=1}^n\\sum_{i=1}^{h}\\sum_{j=1}^{w}p_{ij}^{(k)}$$\n",
"\n",
"and the per-pixel mean $\\mu_{ij}$ across all images is\n",
"\n",
"$$\\mu_{ij}=\\frac{1}{n}\\sum_{k=1}^np_{ij}^{(k)}.$$\n",
"\n",
"The standard deviation $\\sigma$ and the per-pixel standard deviation $\\sigma_{ij}$ are defined similarly.\n",
"\n",
"Which of the following preprocessing steps will not change the performance of a nearest neighbor classifier that uses L1 distance? Select all that apply.\n",
"1. Subtracting the mean $\\mu$ ($\\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\\mu$.)\n",
"2. Subtracting the per-pixel mean $\\mu_{ij}$ ($\\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\\mu_{ij}$.)\n",
"3. Subtracting the mean $\\mu$ and dividing by the standard deviation $\\sigma$.\n",
"4. Subtracting the per-pixel mean $\\mu_{ij}$ and dividing by the per-pixel standard deviation $\\sigma_{ij}$.\n",
"5. Rotating the coordinate axes of the data.\n",
"\n",
"$\\color{blue}{\\textit Your Answer:}$\n",
"\n",
"\n",
"$\\color{blue}{\\textit Your Explanation:}$\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"# Now let's speed up distance matrix computation with partial vectorization\n",
"# and a single loop. Implement the function compute_distances_one_loop and run the code below:\n",
"\n",
"dists_one = classifier.compute_distances_one_loop(X_test)\n",
"\n",
"# To ensure that our vectorized implementation is correct, we make sure that it\n",
"# agrees with the naive implementation. There are many ways to decide whether\n",
"# two matrices are similar; one of the simplest is the Frobenius norm.\n",
"# In case you haven't seen it before, the Frobenius norm of the difference of two matrices\n",
"# is the square root of the sum of the squared differences of all their elements;\n",
"# in other words, reshape the matrices into vectors and compute the Euclidean distance between them.\n",
"\n",
"difference = np.linalg.norm(dists - dists_one, ord='fro')\n",
"print('One loop difference was: %f' % (difference, ))\n",
"if difference < 0.001:\n",
"    print('Good! The distance matrices are the same')\n",
"else:\n",
"    print('Uh-oh! The distance matrices are different')"
]
},
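{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A small illustrative sketch on toy data (not part of the assignment): the\n",
"# Frobenius norm of a matrix equals the Euclidean norm of that same matrix\n",
"# flattened into a vector, which is exactly the equivalence described above.\n",
"A = np.arange(6, dtype=float).reshape(2, 3)\n",
"B = np.ones((2, 3))\n",
"print(np.linalg.norm(A - B, ord='fro'))\n",
"print(np.linalg.norm((A - B).ravel()))  # same value\n"
]
},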
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true,
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"# Now implement the fully vectorized version in compute_distances_no_loops and run the code below\n",
"dists_two = classifier.compute_distances_no_loops(X_test)\n",
"\n",
"# Check that the distance matrix agrees with the one we computed before:\n",
"difference = np.linalg.norm(dists - dists_two, ord='fro')\n",
"print('No loop difference was: %f' % (difference, ))\n",
"if difference < 0.001:\n",
"    print('Good! The distance matrices are the same')\n",
"else:\n",
"    print('Uh-oh! The distance matrices are different')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"# Let's compare how fast the three implementations are\n",
"def time_function(f, *args):\n",
"    \"\"\"\n",
"    Call a function f with args and return the time (in seconds) that it took to execute.\n",
"    \"\"\"\n",
"    import time\n",
"    tic = time.time()\n",
"    f(*args)\n",
"    toc = time.time()\n",
"    return toc - tic\n",
"\n",
"two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\n",
"print('Two loop version took %f seconds' % two_loop_time)\n",
"\n",
"one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\n",
"print('One loop version took %f seconds' % one_loop_time)\n",
"\n",
"no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\n",
"print('No loop version took %f seconds' % no_loop_time)\n",
"\n",
"# You should see significantly faster performance with the fully vectorized implementation!\n",
"\n",
"# NOTE: on some machines you might not see a speedup when going from two loops\n",
"# to one loop, and might even see a slow-down."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Cross-validation\n",
"\n",
"We have implemented the kNN classifier and we can set k = 5. We will now determine the best value of this hyperparameter with cross-validation."
]
},
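{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A toy illustration (not part of the assignment) of the fold bookkeeping used\n",
"# below: np.array_split divides an array into nearly equal folds, and\n",
"# concatenating every fold except one yields that round's training split.\n",
"demo = np.arange(10)\n",
"demo_folds = np.array_split(demo, 5)\n",
"print(demo_folds)  # five arrays of two elements each\n",
"print(np.concatenate([f for j, f in enumerate(demo_folds) if j != 0]))  # all but fold 0\n"
]
},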
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"code"
]
},
"outputs": [],
"source": [
"num_folds = 5\n",
"k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n",
"\n",
"X_train_folds = []\n",
"y_train_folds = []\n",
"################################################################################\n",
"# TODO:\n",
"# Split up the training data into folds. After splitting, X_train_folds and\n",
"# y_train_folds should each be lists of length num_folds, where\n",
"# y_train_folds[i] is the label vector for the points in X_train_folds[i].\n",
"# Hint: Look up the numpy array_split function.\n",
"################################################################################\n",
"# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
"X_train_folds = np.array_split(X_train, num_folds)\n",
"y_train_folds = np.array_split(y_train, num_folds)\n",
"\n",
"# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
"# A dictionary holding the accuracies for different values of k that we find when running cross-validation.\n",
"# After running cross-validation, k_to_accuracies[k] should be a list of length num_folds\n",
"# giving the accuracy values found for that value of k.\n",
"k_to_accuracies = {}\n",
"\n",
"\n",
"################################################################################\n",
"# TODO:\n",
"# Perform k-fold cross-validation to find the best value of k.\n",
"# For each possible value of k, run the k-nearest-neighbor algorithm num_folds\n",
"# times, where in each round you use all of the folds except one as training\n",
"# data and the remaining fold as a validation set.\n",
"# Store the accuracies for all folds and all values of k in k_to_accuracies.\n",
"################################################################################\n",
"# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
"# Cross-validation: instead of holding out a single fixed validation set, we\n",
"# split the training set into num_folds equal parts, train on all but one fold,\n",
"# validate on the remaining fold, and rotate through the folds; the average of\n",
"# the per-fold validation accuracies serves as the validation result.\n",
"\n",
"for k in k_choices:\n",
"    k_to_accuracies[k] = []\n",
"    for i in range(num_folds):\n",
"        # prepare training data for the current fold\n",
"        X_train_fold = np.concatenate([fold for j, fold in enumerate(X_train_folds) if i != j])\n",
"        y_train_fold = np.concatenate([fold for j, fold in enumerate(y_train_folds) if i != j])\n",
"\n",
"        # run the k-nearest-neighbor algorithm\n",
"        classifier.train(X_train_fold, y_train_fold)\n",
"        y_pred_fold = classifier.predict(X_train_folds[i], k=k, num_loops=0)\n",
"\n",
"        # compute the fraction of correctly predicted examples\n",
"        num_correct = np.sum(y_pred_fold == y_train_folds[i])\n",
"        accuracy = float(num_correct) / X_train_folds[i].shape[0]\n",
"        k_to_accuracies[k].append(accuracy)\n",
"\n",
"# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
"\n",
"# Print out the computed accuracies\n",
"for k in sorted(k_to_accuracies):\n",
"    for accuracy in k_to_accuracies[k]:\n",
"        print('k = %d, accuracy = %f' % (k, accuracy))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"pdf-ignore-input"
]
},
"outputs": [],
"source": [
"# plot the raw observations\n",
"for k in k_choices:\n",
"    accuracies = k_to_accuracies[k]\n",
"    plt.scatter([k] * len(accuracies), accuracies)\n",
"\n",
"# plot the trend line with error bars that correspond to standard deviation\n",
"accuracies_mean = np.array([np.mean(v) for k, v in sorted(k_to_accuracies.items())])\n",
"accuracies_std = np.array([np.std(v) for k, v in sorted(k_to_accuracies.items())])\n",
"plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\n",
"plt.title('Cross-validation on k')\n",
"plt.xlabel('k')\n",
"plt.ylabel('Cross-validation accuracy')\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Based on the cross-validation results above, choose the best value for k,\n",
"# retrain the classifier using all the training data, and test it on the test\n",
"# data. You should be able to get above 28% accuracy on the test data.\n",
"\n",
"best_k = k_choices[accuracies_mean.argmax()]\n",
"\n",
"classifier = KNearestNeighbor()\n",
"classifier.train(X_train, y_train)\n",
"y_test_pred = classifier.predict(X_test, k=best_k)\n",
"\n",
"# Compute and display the accuracy\n",
"num_correct = np.sum(y_test_pred == y_test)\n",
"accuracy = float(num_correct) / num_test\n",
"print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": [
"pdf-inline"
]
},
"source": [
"**Question 3**\n",
"\n",
"Which of the following statements about $k$-Nearest Neighbor ($k$-NN) are true in a classification setting, and for all $k$? Select all that apply.\n",
"\n",
"1. The decision boundary of the k-NN classifier is linear.\n",
"2. The training error of a 1-NN will always be lower than that of a 5-NN.\n",
"3. The test error of a 1-NN will always be lower than that of a 5-NN.\n",
"4. The time needed to classify a test example with the k-NN classifier grows with the size of the training set.\n",
"5. None of the above.\n",
"\n",
"$\\color{blue}{\\textit Your Answer:}$\n",
"\n",
"\n",
"$\\color{blue}{\\textit Your Explanation:}$\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"# IMPORTANT\n",
"\n",
"This is the end of the assignment. Please do the following:\n",
"\n",
"1. Click `File -> Save` or press `control+s` to make sure the latest version of your notebook is saved to Google Drive.\n",
"2. Execute the cell below to save the `.py` files back to your Google Drive."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"FOLDER_TO_SAVE = os.path.join('drive/My Drive/', FOLDERNAME)\n",
"FILES_TO_SAVE = ['daseCV/classifiers/k_nearest_neighbor.py']\n",
"\n",
"for filename in FILES_TO_SAVE:\n",
"    # strip the leading 'daseCV/' so the path is relative to FOLDER_TO_SAVE\n",
"    dst = os.path.join(FOLDER_TO_SAVE, '/'.join(filename.split('/')[1:]))\n",
"    with open(filename) as src, open(dst, 'w') as f:\n",
"        f.write(src.read())"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.0"
}
},
"nbformat": 4,
"nbformat_minor": 1
}