Remove UD730 Udacity course material, replace with pointer.

PiperOrigin-RevId: 243126614
This commit is contained in:
A. Unique TensorFlower 2019-04-11 13:07:42 -07:00 committed by TensorFlower Gardener
parent c43ab94a05
commit 8be9158c7a
8 changed files with 2 additions and 4255 deletions


@@ -1,800 +0,0 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"version": "0.3.2",
"views": {},
"default_view": {},
"name": "1_notmnist.ipynb",
"provenance": []
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "5hIbr52I7Z7U",
"colab_type": "text"
},
"source": [
"Deep Learning\n",
"=============\n",
"\n",
"Assignment 1\n",
"------------\n",
"\n",
"The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.\n",
"\n",
"This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST."
]
},
{
"cell_type": "code",
"metadata": {
"id": "apJbCsBHl-2A",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"# These are all the modules we'll be using later. Make sure you can import them\n",
"# before proceeding further.\n",
"from __future__ import print_function\n",
"import imageio\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import os\n",
"import sys\n",
"import tarfile\n",
"from IPython.display import display, Image\n",
"from sklearn.linear_model import LogisticRegression\n",
"from six.moves.urllib.request import urlretrieve\n",
"from six.moves import cPickle as pickle\n",
"\n",
"# Config the matplotlib backend as plotting inline in IPython\n",
"%matplotlib inline"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "jNWGtZaXn-5j",
"colab_type": "text"
},
"source": [
"First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine."
]
},
{
"cell_type": "code",
"metadata": {
"id": "EYRJ4ICW6-da",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 186058,
"status": "ok",
"timestamp": 1444485672507,
"user": {
"color": "#1FA15D",
"displayName": "Vincent Vanhoucke",
"isAnonymous": false,
"isMe": true,
"permissionId": "05076109866853157986",
"photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
"sessionId": "2a0a5e044bb03b66",
"userId": "102167687554210253930"
},
"user_tz": 420
},
"outputId": "0d0f85df-155f-4a89-8e7e-ee32df36ec8d"
},
"source": [
"url = 'https://commondatastorage.googleapis.com/books1000/'\n",
"last_percent_reported = None\n",
"data_root = '.' # Change me to store data elsewhere\n",
"\n",
"def download_progress_hook(count, blockSize, totalSize):\n",
" \"\"\"A hook to report the progress of a download. This is mostly intended for users with\n",
" slow internet connections. Reports every 5% change in download progress.\n",
" \"\"\"\n",
" global last_percent_reported\n",
" percent = int(count * blockSize * 100 / totalSize)\n",
"\n",
" if last_percent_reported != percent:\n",
" if percent % 5 == 0:\n",
" sys.stdout.write(\"%s%%\" % percent)\n",
" sys.stdout.flush()\n",
" else:\n",
" sys.stdout.write(\".\")\n",
" sys.stdout.flush()\n",
" \n",
" last_percent_reported = percent\n",
" \n",
"def maybe_download(filename, expected_bytes, force=False):\n",
" \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n",
" dest_filename = os.path.join(data_root, filename)\n",
" if force or not os.path.exists(dest_filename):\n",
" print('Attempting to download:', filename) \n",
" filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook)\n",
" print('\\nDownload Complete!')\n",
" statinfo = os.stat(dest_filename)\n",
" if statinfo.st_size == expected_bytes:\n",
" print('Found and verified', dest_filename)\n",
" else:\n",
" raise Exception(\n",
" 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?')\n",
" return dest_filename\n",
"\n",
"train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)\n",
"test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Found and verified notMNIST_large.tar.gz\n",
"Found and verified notMNIST_small.tar.gz\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "cC3p0oEyF8QT",
"colab_type": "text"
},
"source": [
"Extract the dataset from the compressed .tar.gz file.\n",
"This should give you a set of directories, labeled A through J."
]
},
{
"cell_type": "code",
"metadata": {
"id": "H8CBE-WZ8nmj",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 186055,
"status": "ok",
"timestamp": 1444485672525,
"user": {
"color": "#1FA15D",
"displayName": "Vincent Vanhoucke",
"isAnonymous": false,
"isMe": true,
"permissionId": "05076109866853157986",
"photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
"sessionId": "2a0a5e044bb03b66",
"userId": "102167687554210253930"
},
"user_tz": 420
},
"outputId": "ef6c790c-2513-4b09-962e-27c79390c762"
},
"source": [
"num_classes = 10\n",
"np.random.seed(133)\n",
"\n",
"def maybe_extract(filename, force=False):\n",
" root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz\n",
" if os.path.isdir(root) and not force:\n",
" # You may override by setting force=True.\n",
" print('%s already present - Skipping extraction of %s.' % (root, filename))\n",
" else:\n",
" print('Extracting data for %s. This may take a while. Please wait.' % root)\n",
" tar = tarfile.open(filename)\n",
" sys.stdout.flush()\n",
" tar.extractall(data_root)\n",
" tar.close()\n",
" data_folders = [\n",
" os.path.join(root, d) for d in sorted(os.listdir(root))\n",
" if os.path.isdir(os.path.join(root, d))]\n",
" if len(data_folders) != num_classes:\n",
" raise Exception(\n",
" 'Expected %d folders, one per class. Found %d instead.' % (\n",
" num_classes, len(data_folders)))\n",
" print(data_folders)\n",
" return data_folders\n",
" \n",
"train_folders = maybe_extract(train_filename)\n",
"test_folders = maybe_extract(test_filename)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J']\n",
"['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J']\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "4riXK3IoHgx6",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 1\n",
"---------\n",
"\n",
"Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.\n",
"\n",
"---"
]
},
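One possible way to peek at the raw data, sketched here under the assumption that `train_folders` comes from the `maybe_extract` cell above; the loop itself is my own illustration, not the course solution:

    import os
    import random
    from IPython.display import Image, display

    # Show one randomly chosen exemplar per class folder.
    for folder in train_folders:
        sample = random.choice(os.listdir(folder))
        display(Image(filename=os.path.join(folder, sample)))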
{
"cell_type": "markdown",
"metadata": {
"id": "PBdkjESPK8tw",
"colab_type": "text"
},
"source": [
"Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.\n",
"\n",
"We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. \n",
"\n",
"A few images might not be readable, we'll just skip them."
]
},
{
"cell_type": "code",
"metadata": {
"id": "h7q0XhG3MJdf",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 30
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 399874,
"status": "ok",
"timestamp": 1444485886378,
"user": {
"color": "#1FA15D",
"displayName": "Vincent Vanhoucke",
"isAnonymous": false,
"isMe": true,
"permissionId": "05076109866853157986",
"photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
"sessionId": "2a0a5e044bb03b66",
"userId": "102167687554210253930"
},
"user_tz": 420
},
"outputId": "92c391bb-86ff-431d-9ada-315568a19e59"
},
"source": [
"image_size = 28 # Pixel width and height.\n",
"pixel_depth = 255.0 # Number of levels per pixel.\n",
"\n",
"def load_letter(folder, min_num_images):\n",
" \"\"\"Load the data for a single letter label.\"\"\"\n",
" image_files = os.listdir(folder)\n",
" dataset = np.ndarray(shape=(len(image_files), image_size, image_size),\n",
" dtype=np.float32)\n",
" print(folder)\n",
" num_images = 0\n",
" for image in image_files:\n",
" image_file = os.path.join(folder, image)\n",
" try:\n",
" image_data = (imageio.imread(image_file).astype(float) - \n",
" pixel_depth / 2) / pixel_depth\n",
" if image_data.shape != (image_size, image_size):\n",
" raise Exception('Unexpected image shape: %s' % str(image_data.shape))\n",
" dataset[num_images, :, :] = image_data\n",
" num_images = num_images + 1\n",
" except (IOError, ValueError) as e:\n",
" print('Could not read:', image_file, ':', e, '- it\\'s ok, skipping.')\n",
" \n",
" dataset = dataset[0:num_images, :, :]\n",
" if num_images < min_num_images:\n",
" raise Exception('Many fewer images than expected: %d < %d' %\n",
" (num_images, min_num_images))\n",
" \n",
" print('Full dataset tensor:', dataset.shape)\n",
" print('Mean:', np.mean(dataset))\n",
" print('Standard deviation:', np.std(dataset))\n",
" return dataset\n",
" \n",
"def maybe_pickle(data_folders, min_num_images_per_class, force=False):\n",
" dataset_names = []\n",
" for folder in data_folders:\n",
" set_filename = folder + '.pickle'\n",
" dataset_names.append(set_filename)\n",
" if os.path.exists(set_filename) and not force:\n",
" # You may override by setting force=True.\n",
" print('%s already present - Skipping pickling.' % set_filename)\n",
" else:\n",
" print('Pickling %s.' % set_filename)\n",
" dataset = load_letter(folder, min_num_images_per_class)\n",
" try:\n",
" with open(set_filename, 'wb') as f:\n",
" pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)\n",
" except Exception as e:\n",
" print('Unable to save data to', set_filename, ':', e)\n",
" \n",
" return dataset_names\n",
"\n",
"train_datasets = maybe_pickle(train_folders, 45000)\n",
"test_datasets = maybe_pickle(test_folders, 1800)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"notMNIST_large/A\n",
"Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file - it's ok, skipping.\n",
"Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file - it's ok, skipping.\n",
"Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file - it's ok, skipping.\n",
"Full dataset tensor: (52909, 28, 28)\n",
"Mean: -0.12848\n",
"Standard deviation: 0.425576\n",
"notMNIST_large/B\n",
"Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file - it's ok, skipping.\n",
"Full dataset tensor: (52911, 28, 28)\n",
"Mean: -0.00755947\n",
"Standard deviation: 0.417272\n",
"notMNIST_large/C\n",
"Full dataset tensor: (52912, 28, 28)\n",
"Mean: -0.142321\n",
"Standard deviation: 0.421305\n",
"notMNIST_large/D\n",
"Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file - it's ok, skipping.\n",
"Full dataset tensor: (52911, 28, 28)\n",
"Mean: -0.0574553\n",
"Standard deviation: 0.434072\n",
"notMNIST_large/E\n",
"Full dataset tensor: (52912, 28, 28)\n",
"Mean: -0.0701406\n",
"Standard deviation: 0.42882\n",
"notMNIST_large/F\n",
"Full dataset tensor: (52912, 28, 28)\n",
"Mean: -0.125914\n",
"Standard deviation: 0.429645\n",
"notMNIST_large/G\n",
"Full dataset tensor: (52912, 28, 28)\n",
"Mean: -0.0947771\n",
"Standard deviation: 0.421674\n",
"notMNIST_large/H\n",
"Full dataset tensor: (52912, 28, 28)\n",
"Mean: -0.0687667\n",
"Standard deviation: 0.430344\n",
"notMNIST_large/I\n",
"Full dataset tensor: (52912, 28, 28)\n",
"Mean: 0.0307405\n",
"Standard deviation: 0.449686\n",
"notMNIST_large/J\n",
"Full dataset tensor: (52911, 28, 28)\n",
"Mean: -0.153479\n",
"Standard deviation: 0.397169\n",
"notMNIST_small/A\n",
"Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file - it's ok, skipping.\n",
"Full dataset tensor: (1872, 28, 28)\n",
"Mean: -0.132588\n",
"Standard deviation: 0.445923\n",
"notMNIST_small/B\n",
"Full dataset tensor: (1873, 28, 28)\n",
"Mean: 0.00535619\n",
"Standard deviation: 0.457054\n",
"notMNIST_small/C\n",
"Full dataset tensor: (1873, 28, 28)\n",
"Mean: -0.141489\n",
"Standard deviation: 0.441056\n",
"notMNIST_small/D\n",
"Full dataset tensor: (1873, 28, 28)\n",
"Mean: -0.0492094\n",
"Standard deviation: 0.460477\n",
"notMNIST_small/E\n",
"Full dataset tensor: (1873, 28, 28)\n",
"Mean: -0.0598952\n",
"Standard deviation: 0.456146\n",
"notMNIST_small/F\n",
"Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file - it's ok, skipping.\n",
"Full dataset tensor: (1872, 28, 28)\n",
"Mean: -0.118148\n",
"Standard deviation: 0.451134\n",
"notMNIST_small/G\n",
"Full dataset tensor: (1872, 28, 28)\n",
"Mean: -0.092519\n",
"Standard deviation: 0.448468\n",
"notMNIST_small/H\n",
"Full dataset tensor: (1872, 28, 28)\n",
"Mean: -0.0586729\n",
"Standard deviation: 0.457387\n",
"notMNIST_small/I\n",
"Full dataset tensor: (1872, 28, 28)\n",
"Mean: 0.0526481\n",
"Standard deviation: 0.472657\n",
"notMNIST_small/J\n",
"Full dataset tensor: (1872, 28, 28)\n",
"Mean: -0.15167\n",
"Standard deviation: 0.449521\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "vUdbskYE2d87",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 2\n",
"---------\n",
"\n",
"Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.\n",
"\n",
"---"
]
},
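A minimal sketch of such a check, assuming `train_datasets` is the list of pickle filenames produced by `maybe_pickle` above; the number of images shown is arbitrary:

    import matplotlib.pyplot as plt
    from six.moves import cPickle as pickle

    # Load one per-class pickle (e.g. notMNIST_large/A.pickle) and plot a few
    # of its images; the values are already normalized floats.
    with open(train_datasets[0], 'rb') as f:
        letter_set = pickle.load(f)
    for i in range(3):
        plt.figure()
        plt.imshow(letter_set[i], cmap='gray')
        plt.show()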
{
"cell_type": "markdown",
"metadata": {
"id": "cYznx5jUwzoO",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 3\n",
"---------\n",
"Another check: we expect the data to be balanced across classes. Verify that.\n",
"\n",
"---"
]
},
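One way to check the balance, offered only as a sketch and assuming `train_datasets` and `test_datasets` hold the per-class pickle filenames from above: print how many samples each pickle contains and confirm the counts are close.

    from six.moves import cPickle as pickle

    def print_class_counts(dataset_names):
        # Each pickle holds the 3D array for one letter class.
        for name in dataset_names:
            with open(name, 'rb') as f:
                print(name, len(pickle.load(f)))

    print_class_counts(train_datasets)
    print_class_counts(test_datasets)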
{
"cell_type": "markdown",
"metadata": {
"id": "LA7M7K22ynCt",
"colab_type": "text"
},
"source": [
"Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.\n",
"\n",
"Also create a validation dataset for hyperparameter tuning."
]
},
{
"cell_type": "code",
"metadata": {
"id": "s3mWgZLpyuzq",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 411281,
"status": "ok",
"timestamp": 1444485897869,
"user": {
"color": "#1FA15D",
"displayName": "Vincent Vanhoucke",
"isAnonymous": false,
"isMe": true,
"permissionId": "05076109866853157986",
"photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
"sessionId": "2a0a5e044bb03b66",
"userId": "102167687554210253930"
},
"user_tz": 420
},
"outputId": "8af66da6-902d-4719-bedc-7c9fb7ae7948"
},
"source": [
"def make_arrays(nb_rows, img_size):\n",
" if nb_rows:\n",
" dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)\n",
" labels = np.ndarray(nb_rows, dtype=np.int32)\n",
" else:\n",
" dataset, labels = None, None\n",
" return dataset, labels\n",
"\n",
"def merge_datasets(pickle_files, train_size, valid_size=0):\n",
" num_classes = len(pickle_files)\n",
" valid_dataset, valid_labels = make_arrays(valid_size, image_size)\n",
" train_dataset, train_labels = make_arrays(train_size, image_size)\n",
" vsize_per_class = valid_size // num_classes\n",
" tsize_per_class = train_size // num_classes\n",
" \n",
" start_v, start_t = 0, 0\n",
" end_v, end_t = vsize_per_class, tsize_per_class\n",
" end_l = vsize_per_class+tsize_per_class\n",
" for label, pickle_file in enumerate(pickle_files): \n",
" try:\n",
" with open(pickle_file, 'rb') as f:\n",
" letter_set = pickle.load(f)\n",
" # let's shuffle the letters to have random validation and training set\n",
" np.random.shuffle(letter_set)\n",
" if valid_dataset is not None:\n",
" valid_letter = letter_set[:vsize_per_class, :, :]\n",
" valid_dataset[start_v:end_v, :, :] = valid_letter\n",
" valid_labels[start_v:end_v] = label\n",
" start_v += vsize_per_class\n",
" end_v += vsize_per_class\n",
" \n",
" train_letter = letter_set[vsize_per_class:end_l, :, :]\n",
" train_dataset[start_t:end_t, :, :] = train_letter\n",
" train_labels[start_t:end_t] = label\n",
" start_t += tsize_per_class\n",
" end_t += tsize_per_class\n",
" except Exception as e:\n",
" print('Unable to process data from', pickle_file, ':', e)\n",
" raise\n",
" \n",
" return valid_dataset, valid_labels, train_dataset, train_labels\n",
" \n",
" \n",
"train_size = 200000\n",
"valid_size = 10000\n",
"test_size = 10000\n",
"\n",
"valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(\n",
" train_datasets, train_size, valid_size)\n",
"_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)\n",
"\n",
"print('Training:', train_dataset.shape, train_labels.shape)\n",
"print('Validation:', valid_dataset.shape, valid_labels.shape)\n",
"print('Testing:', test_dataset.shape, test_labels.shape)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Training (200000, 28, 28) (200000,)\n",
"Validation (10000, 28, 28) (10000,)\n",
"Testing (10000, 28, 28) (10000,)\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "GPTCnjIcyuKN",
"colab_type": "text"
},
"source": [
"Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match."
]
},
{
"cell_type": "code",
"metadata": {
"id": "6WZ2l2tN2zOL",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"def randomize(dataset, labels):\n",
" permutation = np.random.permutation(labels.shape[0])\n",
" shuffled_dataset = dataset[permutation,:,:]\n",
" shuffled_labels = labels[permutation]\n",
" return shuffled_dataset, shuffled_labels\n",
"train_dataset, train_labels = randomize(train_dataset, train_labels)\n",
"test_dataset, test_labels = randomize(test_dataset, test_labels)\n",
"valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "puDUTe6t6USl",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 4\n",
"---------\n",
"Convince yourself that the data is still good after shuffling!\n",
"\n",
"---"
]
},
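A quick sanity check, again just a sketch using the shuffled arrays from the cells above: plot one image and print the letter its label maps to (labels 0 through 9 correspond to 'A' through 'J').

    import matplotlib.pyplot as plt

    index = 0  # any index will do
    plt.imshow(train_dataset[index], cmap='gray')
    plt.title('label %d = letter %s' % (train_labels[index],
                                        chr(ord('A') + train_labels[index])))
    plt.show()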
{
"cell_type": "markdown",
"metadata": {
"id": "tIQJaJuwg5Hw",
"colab_type": "text"
},
"source": [
"Finally, let's save the data for later reuse:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "QiR_rETzem6C",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"pickle_file = os.path.join(data_root, 'notMNIST.pickle')\n",
"\n",
"try:\n",
" f = open(pickle_file, 'wb')\n",
" save = {\n",
" 'train_dataset': train_dataset,\n",
" 'train_labels': train_labels,\n",
" 'valid_dataset': valid_dataset,\n",
" 'valid_labels': valid_labels,\n",
" 'test_dataset': test_dataset,\n",
" 'test_labels': test_labels,\n",
" }\n",
" pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)\n",
" f.close()\n",
"except Exception as e:\n",
" print('Unable to save data to', pickle_file, ':', e)\n",
" raise"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "code",
"metadata": {
"id": "hQbLjrW_iT39",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 413065,
"status": "ok",
"timestamp": 1444485899688,
"user": {
"color": "#1FA15D",
"displayName": "Vincent Vanhoucke",
"isAnonymous": false,
"isMe": true,
"permissionId": "05076109866853157986",
"photoUrl": "//lh6.googleusercontent.com/-cCJa7dTDcgQ/AAAAAAAAAAI/AAAAAAAACgw/r2EZ_8oYer4/s50-c-k-no/photo.jpg",
"sessionId": "2a0a5e044bb03b66",
"userId": "102167687554210253930"
},
"user_tz": 420
},
"outputId": "b440efc6-5ee1-4cbc-d02d-93db44ebd956"
},
"source": [
"statinfo = os.stat(pickle_file)\n",
"print('Compressed pickle size:', statinfo.st_size)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Compressed pickle size: 718193801\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "gE_cRAQB33lk",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 5\n",
"---------\n",
"\n",
"By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.\n",
"Measure how much overlap there is between training, validation and test samples.\n",
"\n",
"Optional questions:\n",
"- What about near duplicates between datasets? (images that are almost identical)\n",
"- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.\n",
"---"
]
},
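A rough way to count exact duplicates between the splits, given as an assumption-laden sketch: hash the raw pixel bytes of every image and intersect the resulting sets. Because it only catches byte-identical images, it says nothing about the near-duplicates in the optional question.

    import hashlib

    def image_hashes(dataset):
        # Hash the raw bytes of each 28x28 image for a fast set comparison.
        return set(hashlib.sha1(image.tobytes()).hexdigest() for image in dataset)

    train_hashes = image_hashes(train_dataset)
    valid_hashes = image_hashes(valid_dataset)
    test_hashes = image_hashes(test_dataset)
    print('train/valid exact overlap:', len(train_hashes & valid_hashes))
    print('train/test exact overlap:', len(train_hashes & test_hashes))
    print('valid/test exact overlap:', len(valid_hashes & test_hashes))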
{
"cell_type": "markdown",
"metadata": {
"id": "L8oww1s4JMQx",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 6\n",
"---------\n",
"\n",
"Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.\n",
"\n",
"Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.\n",
"\n",
"Optional question: train an off-the-shelf model on all the data!\n",
"\n",
"---"
]
}
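A minimal baseline sketch with scikit-learn; flattening the images and keeping the default `LogisticRegression` settings are my own choices rather than part of the assignment:

    from sklearn.linear_model import LogisticRegression

    num_test = test_dataset.shape[0]
    flat_test = test_dataset.reshape(num_test, -1)  # flatten 28x28 to 784 features
    for n in (50, 100, 1000, 5000):
        clf = LogisticRegression()
        clf.fit(train_dataset[:n].reshape(n, -1), train_labels[:n])
        print('%d samples -> test accuracy %.3f' % (n, clf.score(flat_test, test_labels)))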
]
}


@@ -1,586 +0,0 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"version": "0.3.2",
"views": {},
"default_view": {},
"name": "2_fullyconnected.ipynb",
"provenance": []
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "kR-4eNdK6lYS",
"colab_type": "text"
},
"source": [
"Deep Learning\n",
"=============\n",
"\n",
"Assignment 2\n",
"------------\n",
"\n",
"Previously in `1_notmnist.ipynb`, we created a pickle with formatted datasets for training, development and testing on the [notMNIST dataset](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html).\n",
"\n",
"The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow."
]
},
{
"cell_type": "code",
"metadata": {
"id": "JLpLa8Jt7Vu4",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"# These are all the modules we'll be using later. Make sure you can import them\n",
"# before proceeding further.\n",
"from __future__ import print_function\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from six.moves import cPickle as pickle\n",
"from six.moves import range"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "1HrCK6e17WzV",
"colab_type": "text"
},
"source": [
"First reload the data we generated in `1_notmnist.ipynb`."
]
},
{
"cell_type": "code",
"metadata": {
"id": "y3-cj1bpmuxc",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 19456,
"status": "ok",
"timestamp": 1449847956073,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"outputId": "0ddb1607-1fc4-4ddb-de28-6c7ab7fb0c33"
},
"source": [
"pickle_file = 'notMNIST.pickle'\n",
"\n",
"with open(pickle_file, 'rb') as f:\n",
" save = pickle.load(f)\n",
" train_dataset = save['train_dataset']\n",
" train_labels = save['train_labels']\n",
" valid_dataset = save['valid_dataset']\n",
" valid_labels = save['valid_labels']\n",
" test_dataset = save['test_dataset']\n",
" test_labels = save['test_labels']\n",
" del save # hint to help gc free up memory\n",
" print('Training set', train_dataset.shape, train_labels.shape)\n",
" print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
" print('Test set', test_dataset.shape, test_labels.shape)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Training set (200000, 28, 28) (200000,)\n",
"Validation set (10000, 28, 28) (10000,)\n",
"Test set (18724, 28, 28) (18724,)\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "L7aHrm6nGDMB",
"colab_type": "text"
},
"source": [
"Reformat into a shape that's more adapted to the models we're going to train:\n",
"- data as a flat matrix,\n",
"- labels as float 1-hot encodings."
]
},
{
"cell_type": "code",
"metadata": {
"id": "IRSyYiIIGIzS",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 19723,
"status": "ok",
"timestamp": 1449847956364,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"outputId": "2ba0fc75-1487-4ace-a562-cf81cae82793"
},
"source": [
"image_size = 28\n",
"num_labels = 10\n",
"\n",
"def reformat(dataset, labels):\n",
" dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n",
" # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]\n",
" labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n",
" return dataset, labels\n",
"train_dataset, train_labels = reformat(train_dataset, train_labels)\n",
"valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n",
"test_dataset, test_labels = reformat(test_dataset, test_labels)\n",
"print('Training set', train_dataset.shape, train_labels.shape)\n",
"print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
"print('Test set', test_dataset.shape, test_labels.shape)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Training set (200000, 784) (200000, 10)\n",
"Validation set (10000, 784) (10000, 10)\n",
"Test set (18724, 784) (18724, 10)\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "nCLVqyQ5vPPH",
"colab_type": "text"
},
"source": [
"We're first going to train a multinomial logistic regression using simple gradient descent.\n",
"\n",
"TensorFlow works like this:\n",
"* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:\n",
"\n",
" with graph.as_default():\n",
" ...\n",
"\n",
"* Then you can run the operations on this graph as many times as you want by calling `session.run()`, providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:\n",
"\n",
" with tf.Session(graph=graph) as session:\n",
" ...\n",
"\n",
"Let's load all the data into TensorFlow and build the computation graph corresponding to our training:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "Nfv39qvtvOl_",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"# With gradient descent training, even this much data is prohibitive.\n",
"# Subset the training data for faster turnaround.\n",
"train_subset = 10000\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
"\n",
" # Input data.\n",
" # Load the training, validation and test data into constants that are\n",
" # attached to the graph.\n",
" tf_train_dataset = tf.constant(train_dataset[:train_subset, :])\n",
" tf_train_labels = tf.constant(train_labels[:train_subset])\n",
" tf_valid_dataset = tf.constant(valid_dataset)\n",
" tf_test_dataset = tf.constant(test_dataset)\n",
" \n",
" # Variables.\n",
" # These are the parameters that we are going to be training. The weight\n",
" # matrix will be initialized using random values following a (truncated)\n",
" # normal distribution. The biases get initialized to zero.\n",
" weights = tf.Variable(\n",
" tf.truncated_normal([image_size * image_size, num_labels]))\n",
" biases = tf.Variable(tf.zeros([num_labels]))\n",
" \n",
" # Training computation.\n",
" # We multiply the inputs with the weight matrix, and add biases. We compute\n",
" # the softmax and cross-entropy (it's one operation in TensorFlow, because\n",
" # it's very common, and it can be optimized). We take the average of this\n",
" # cross-entropy across all training examples: that's our loss.\n",
" logits = tf.matmul(tf_train_dataset, weights) + biases\n",
" loss = tf.reduce_mean(\n",
" tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n",
" \n",
" # Optimizer.\n",
" # We are going to find the minimum of this loss using gradient descent.\n",
" optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
" \n",
" # Predictions for the training, validation, and test data.\n",
" # These are not part of training, but merely here so that we can report\n",
" # accuracy figures as we train.\n",
" train_prediction = tf.nn.softmax(logits)\n",
" valid_prediction = tf.nn.softmax(\n",
" tf.matmul(tf_valid_dataset, weights) + biases)\n",
" test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "KQcL4uqISHjP",
"colab_type": "text"
},
"source": [
"Let's run this computation and iterate:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "z2cjdenH869W",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 9
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 57454,
"status": "ok",
"timestamp": 1449847994134,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"outputId": "4c037ba1-b526-4d8e-e632-91e2a0333267"
},
"source": [
"num_steps = 801\n",
"\n",
"def accuracy(predictions, labels):\n",
" return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n",
" / predictions.shape[0])\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
" # This is a one-time operation which ensures the parameters get initialized as\n",
" # we described in the graph: random weights for the matrix, zeros for the\n",
" # biases. \n",
" tf.global_variables_initializer().run()\n",
" print('Initialized')\n",
" for step in range(num_steps):\n",
" # Run the computations. We tell .run() that we want to run the optimizer,\n",
" # and get the loss value and the training predictions returned as numpy\n",
" # arrays.\n",
" _, l, predictions = session.run([optimizer, loss, train_prediction])\n",
" if (step % 100 == 0):\n",
" print('Loss at step %d: %f' % (step, l))\n",
" print('Training accuracy: %.1f%%' % accuracy(\n",
" predictions, train_labels[:train_subset, :]))\n",
" # Calling .eval() on valid_prediction is basically like calling run(), but\n",
" # just to get that one numpy array. Note that it recomputes all its graph\n",
" # dependencies.\n",
" print('Validation accuracy: %.1f%%' % accuracy(\n",
" valid_prediction.eval(), valid_labels))\n",
" print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Initialized\n",
"Loss at step 0 : 17.2939\n",
"Training accuracy: 10.8%\n",
"Validation accuracy: 13.8%\n",
"Loss at step 100 : 2.26903\n",
"Training accuracy: 72.3%\n",
"Validation accuracy: 71.6%\n",
"Loss at step 200 : 1.84895\n",
"Training accuracy: 74.9%\n",
"Validation accuracy: 73.9%\n",
"Loss at step 300 : 1.60701\n",
"Training accuracy: 76.0%\n",
"Validation accuracy: 74.5%\n",
"Loss at step 400 : 1.43912\n",
"Training accuracy: 76.8%\n",
"Validation accuracy: 74.8%\n",
"Loss at step 500 : 1.31349\n",
"Training accuracy: 77.5%\n",
"Validation accuracy: 75.0%\n",
"Loss at step 600 : 1.21501\n",
"Training accuracy: 78.1%\n",
"Validation accuracy: 75.4%\n",
"Loss at step 700 : 1.13515\n",
"Training accuracy: 78.6%\n",
"Validation accuracy: 75.4%\n",
"Loss at step 800 : 1.0687\n",
"Training accuracy: 79.2%\n",
"Validation accuracy: 75.6%\n",
"Test accuracy: 82.9%\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "x68f-hxRGm3H",
"colab_type": "text"
},
"source": [
"Let's now switch to stochastic gradient descent training instead, which is much faster.\n",
"\n",
"The graph will be similar, except that instead of holding all the training data into a constant node, we create a `Placeholder` node which will be fed actual data at every call of `session.run()`."
]
},
{
"cell_type": "code",
"metadata": {
"id": "qhPMzWYRGrzM",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"batch_size = 128\n",
"\n",
"graph = tf.Graph()\n",
"with graph.as_default():\n",
"\n",
" # Input data. For the training data, we use a placeholder that will be fed\n",
" # at run time with a training minibatch.\n",
" tf_train_dataset = tf.placeholder(tf.float32,\n",
" shape=(batch_size, image_size * image_size))\n",
" tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
" tf_valid_dataset = tf.constant(valid_dataset)\n",
" tf_test_dataset = tf.constant(test_dataset)\n",
" \n",
" # Variables.\n",
" weights = tf.Variable(\n",
" tf.truncated_normal([image_size * image_size, num_labels]))\n",
" biases = tf.Variable(tf.zeros([num_labels]))\n",
" \n",
" # Training computation.\n",
" logits = tf.matmul(tf_train_dataset, weights) + biases\n",
" loss = tf.reduce_mean(\n",
" tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n",
" \n",
" # Optimizer.\n",
" optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
" \n",
" # Predictions for the training, validation, and test data.\n",
" train_prediction = tf.nn.softmax(logits)\n",
" valid_prediction = tf.nn.softmax(\n",
" tf.matmul(tf_valid_dataset, weights) + biases)\n",
" test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "XmVZESmtG4JH",
"colab_type": "text"
},
"source": [
"Let's run it:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "FoF91pknG_YW",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 6
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 66292,
"status": "ok",
"timestamp": 1449848003013,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"outputId": "d255c80e-954d-4183-ca1c-c7333ce91d0a"
},
"source": [
"num_steps = 3001\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
" tf.global_variables_initializer().run()\n",
" print(\"Initialized\")\n",
" for step in range(num_steps):\n",
" # Pick an offset within the training data, which has been randomized.\n",
" # Note: we could use better randomization across epochs.\n",
" offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n",
" # Generate a minibatch.\n",
" batch_data = train_dataset[offset:(offset + batch_size), :]\n",
" batch_labels = train_labels[offset:(offset + batch_size), :]\n",
" # Prepare a dictionary telling the session where to feed the minibatch.\n",
" # The key of the dictionary is the placeholder node of the graph to be fed,\n",
" # and the value is the numpy array to feed to it.\n",
" feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n",
" _, l, predictions = session.run(\n",
" [optimizer, loss, train_prediction], feed_dict=feed_dict)\n",
" if (step % 500 == 0):\n",
" print(\"Minibatch loss at step %d: %f\" % (step, l))\n",
" print(\"Minibatch accuracy: %.1f%%\" % accuracy(predictions, batch_labels))\n",
" print(\"Validation accuracy: %.1f%%\" % accuracy(\n",
" valid_prediction.eval(), valid_labels))\n",
" print(\"Test accuracy: %.1f%%\" % accuracy(test_prediction.eval(), test_labels))"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Initialized\n",
"Minibatch loss at step 0 : 16.8091\n",
"Minibatch accuracy: 12.5%\n",
"Validation accuracy: 14.0%\n",
"Minibatch loss at step 500 : 1.75256\n",
"Minibatch accuracy: 77.3%\n",
"Validation accuracy: 75.0%\n",
"Minibatch loss at step 1000 : 1.32283\n",
"Minibatch accuracy: 77.3%\n",
"Validation accuracy: 76.6%\n",
"Minibatch loss at step 1500 : 0.944533\n",
"Minibatch accuracy: 83.6%\n",
"Validation accuracy: 76.5%\n",
"Minibatch loss at step 2000 : 1.03795\n",
"Minibatch accuracy: 78.9%\n",
"Validation accuracy: 77.8%\n",
"Minibatch loss at step 2500 : 1.10219\n",
"Minibatch accuracy: 80.5%\n",
"Validation accuracy: 78.0%\n",
"Minibatch loss at step 3000 : 0.758874\n",
"Minibatch accuracy: 82.8%\n",
"Validation accuracy: 78.8%\n",
"Test accuracy: 86.1%\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "7omWxtvLLxik",
"colab_type": "text"
},
"source": [
"---\n",
"Problem\n",
"-------\n",
"\n",
"Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units [nn.relu()](https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#relu) and 1024 hidden nodes. This model should improve your validation / test accuracy.\n",
"\n",
"---"
]
}
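A hedged sketch of one such graph, reusing `batch_size`, `image_size`, `num_labels` and the datasets defined in the cells above; only the 1024-unit hidden layer is specified by the problem, the rest mirrors the SGD example, and the minibatch training loop above can be reused unchanged.

    num_hidden = 1024

    graph = tf.Graph()
    with graph.as_default():
      tf_train_dataset = tf.placeholder(
          tf.float32, shape=(batch_size, image_size * image_size))
      tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
      tf_valid_dataset = tf.constant(valid_dataset)
      tf_test_dataset = tf.constant(test_dataset)

      # Two weight matrices: input -> hidden and hidden -> logits.
      w1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden]))
      b1 = tf.Variable(tf.zeros([num_hidden]))
      w2 = tf.Variable(tf.truncated_normal([num_hidden, num_labels]))
      b2 = tf.Variable(tf.zeros([num_labels]))

      def model(data):
        hidden = tf.nn.relu(tf.matmul(data, w1) + b1)
        return tf.matmul(hidden, w2) + b2

      logits = model(tf_train_dataset)
      loss = tf.reduce_mean(
          tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
      optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

      train_prediction = tf.nn.softmax(logits)
      valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
      test_prediction = tf.nn.softmax(model(tf_test_dataset))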
]
}


@@ -1,300 +0,0 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"version": "0.3.2",
"views": {},
"default_view": {},
"name": "3_regularization.ipynb",
"provenance": []
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "kR-4eNdK6lYS",
"colab_type": "text"
},
"source": [
"Deep Learning\n",
"=============\n",
"\n",
"Assignment 3\n",
"------------\n",
"\n",
"Previously in `2_fullyconnected.ipynb`, you trained a logistic regression and a neural network model.\n",
"\n",
"The goal of this assignment is to explore regularization techniques."
]
},
{
"cell_type": "code",
"metadata": {
"id": "JLpLa8Jt7Vu4",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"# These are all the modules we'll be using later. Make sure you can import them\n",
"# before proceeding further.\n",
"from __future__ import print_function\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from six.moves import cPickle as pickle"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "1HrCK6e17WzV",
"colab_type": "text"
},
"source": [
"First reload the data we generated in `1_notmnist.ipynb`."
]
},
{
"cell_type": "code",
"metadata": {
"id": "y3-cj1bpmuxc",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 11777,
"status": "ok",
"timestamp": 1449849322348,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"outputId": "e03576f1-ebbe-4838-c388-f1777bcc9873"
},
"source": [
"pickle_file = 'notMNIST.pickle'\n",
"\n",
"with open(pickle_file, 'rb') as f:\n",
" save = pickle.load(f)\n",
" train_dataset = save['train_dataset']\n",
" train_labels = save['train_labels']\n",
" valid_dataset = save['valid_dataset']\n",
" valid_labels = save['valid_labels']\n",
" test_dataset = save['test_dataset']\n",
" test_labels = save['test_labels']\n",
" del save # hint to help gc free up memory\n",
" print('Training set', train_dataset.shape, train_labels.shape)\n",
" print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
" print('Test set', test_dataset.shape, test_labels.shape)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Training set (200000, 28, 28) (200000,)\n",
"Validation set (10000, 28, 28) (10000,)\n",
"Test set (18724, 28, 28) (18724,)\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "L7aHrm6nGDMB",
"colab_type": "text"
},
"source": [
"Reformat into a shape that's more adapted to the models we're going to train:\n",
"- data as a flat matrix,\n",
"- labels as float 1-hot encodings."
]
},
{
"cell_type": "code",
"metadata": {
"id": "IRSyYiIIGIzS",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 11728,
"status": "ok",
"timestamp": 1449849322356,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"outputId": "3f8996ee-3574-4f44-c953-5c8a04636582"
},
"source": [
"image_size = 28\n",
"num_labels = 10\n",
"\n",
"def reformat(dataset, labels):\n",
" dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n",
" # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]\n",
" labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n",
" return dataset, labels\n",
"train_dataset, train_labels = reformat(train_dataset, train_labels)\n",
"valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n",
"test_dataset, test_labels = reformat(test_dataset, test_labels)\n",
"print('Training set', train_dataset.shape, train_labels.shape)\n",
"print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
"print('Test set', test_dataset.shape, test_labels.shape)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Training set (200000, 784) (200000, 10)\n",
"Validation set (10000, 784) (10000, 10)\n",
"Test set (18724, 784) (18724, 10)\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "code",
"metadata": {
"id": "RajPLaL_ZW6w",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"def accuracy(predictions, labels):\n",
" return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n",
" / predictions.shape[0])"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "sgLbUAQ1CW-1",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 1\n",
"---------\n",
"\n",
"Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor `t` using `nn.l2_loss(t)`. The right amount of regularization should improve your validation / test accuracy.\n",
"\n",
"---"
]
},
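A sketch of the logistic-regression graph with an L2 term added to the loss, assuming the reformatted datasets from above; `beta` and `batch_size` are my own names and values here and should be tuned on the validation set. The minibatch training loop from `2_fullyconnected.ipynb` applies unchanged.

    batch_size = 128
    beta = 1e-3  # regularization strength

    graph = tf.Graph()
    with graph.as_default():
      tf_train_dataset = tf.placeholder(
          tf.float32, shape=(batch_size, image_size * image_size))
      tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
      weights = tf.Variable(
          tf.truncated_normal([image_size * image_size, num_labels]))
      biases = tf.Variable(tf.zeros([num_labels]))

      logits = tf.matmul(tf_train_dataset, weights) + biases
      loss = tf.reduce_mean(
          tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
      loss += beta * tf.nn.l2_loss(weights)  # the L2 penalty on the weights
      optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
      train_prediction = tf.nn.softmax(logits)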
{
"cell_type": "markdown",
"metadata": {
"id": "na8xX2yHZzNF",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 2\n",
"---------\n",
"Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?\n",
"\n",
"---"
]
},
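One way to provoke the overfit, sketched as a variation of the minibatch loop from `2_fullyconnected.ipynb` (it assumes the graph, placeholders and `accuracy` helper defined there and above): cycle over the same few minibatches so the model keeps seeing only a few hundred images.

    num_steps = 3001
    num_batches = 3  # only a few hundred distinct training images

    with tf.Session(graph=graph) as session:
      tf.global_variables_initializer().run()
      for step in range(num_steps):
        # Reuse the same handful of minibatches instead of the whole training set.
        offset = (step % num_batches) * batch_size
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if step % 500 == 0:
          print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
          print('Validation accuracy: %.1f%%' % accuracy(
              valid_prediction.eval(), valid_labels))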
{
"cell_type": "markdown",
"metadata": {
"id": "ww3SCBUdlkRc",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 3\n",
"---------\n",
"Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides `nn.dropout()` for that, but you have to make sure it's only inserted during training.\n",
"\n",
"What happens to our extreme overfitting case?\n",
"\n",
"---"
]
},
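A sketch of how the hidden layer could gate dropout onto the training path only; the `w1`/`b1`/`w2`/`b2` names follow the one-hidden-layer sketch from the previous assignment, and the keep probability is an arbitrary choice.

    def model(data, train=False):
      hidden = tf.nn.relu(tf.matmul(data, w1) + b1)
      if train:
        # Dropout only on the training graph; evaluation stays deterministic.
        hidden = tf.nn.dropout(hidden, keep_prob=0.5)
      return tf.matmul(hidden, w2) + b2

    logits = model(tf_train_dataset, train=True)
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(model(tf_valid_dataset))  # no dropout here
    test_prediction = tf.nn.softmax(model(tf_test_dataset))    # no dropout here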
{
"cell_type": "markdown",
"metadata": {
"id": "-b1hTz3VWZjw",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 4\n",
"---------\n",
"\n",
"Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is [97.1%](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html?showComment=1391023266211#c8758720086795711595).\n",
"\n",
"One avenue you can explore is to add multiple layers.\n",
"\n",
"Another one is to use learning rate decay:\n",
"\n",
" global_step = tf.Variable(0) # count the number of steps taken.\n",
" learning_rate = tf.train.exponential_decay(0.5, global_step, ...)\n",
" optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)\n",
" \n",
" ---\n"
]
}
]
}


@@ -1,465 +0,0 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"version": "0.3.2",
"views": {},
"default_view": {},
"name": "4_convolutions.ipynb",
"provenance": []
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "4embtkV0pNxM",
"colab_type": "text"
},
"source": [
"Deep Learning\n",
"=============\n",
"\n",
"Assignment 4\n",
"------------\n",
"\n",
"Previously in `2_fullyconnected.ipynb` and `3_regularization.ipynb`, we trained fully connected networks to classify [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) characters.\n",
"\n",
"The goal of this assignment is make the neural network convolutional."
]
},
{
"cell_type": "code",
"metadata": {
"id": "tm2CQN_Cpwj0",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"# These are all the modules we'll be using later. Make sure you can import them\n",
"# before proceeding further.\n",
"from __future__ import print_function\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from six.moves import cPickle as pickle\n",
"from six.moves import range"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "code",
"metadata": {
"id": "y3-cj1bpmuxc",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 11948,
"status": "ok",
"timestamp": 1446658914837,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"outputId": "016b1a51-0290-4b08-efdb-8c95ffc3cd01"
},
"source": [
"pickle_file = 'notMNIST.pickle'\n",
"\n",
"with open(pickle_file, 'rb') as f:\n",
" save = pickle.load(f)\n",
" train_dataset = save['train_dataset']\n",
" train_labels = save['train_labels']\n",
" valid_dataset = save['valid_dataset']\n",
" valid_labels = save['valid_labels']\n",
" test_dataset = save['test_dataset']\n",
" test_labels = save['test_labels']\n",
" del save # hint to help gc free up memory\n",
" print('Training set', train_dataset.shape, train_labels.shape)\n",
" print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
" print('Test set', test_dataset.shape, test_labels.shape)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Training set (200000, 28, 28) (200000,)\n",
"Validation set (10000, 28, 28) (10000,)\n",
"Test set (18724, 28, 28) (18724,)\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "L7aHrm6nGDMB",
"colab_type": "text"
},
"source": [
"Reformat into a TensorFlow-friendly shape:\n",
"- convolutions need the image data formatted as a cube (width by height by #channels)\n",
"- labels as float 1-hot encodings."
]
},
{
"cell_type": "code",
"metadata": {
"id": "IRSyYiIIGIzS",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 1
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 11952,
"status": "ok",
"timestamp": 1446658914857,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"outputId": "650a208c-8359-4852-f4f5-8bf10e80ef6c"
},
"source": [
"image_size = 28\n",
"num_labels = 10\n",
"num_channels = 1 # grayscale\n",
"\n",
"import numpy as np\n",
"\n",
"def reformat(dataset, labels):\n",
" dataset = dataset.reshape(\n",
" (-1, image_size, image_size, num_channels)).astype(np.float32)\n",
" labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n",
" return dataset, labels\n",
"train_dataset, train_labels = reformat(train_dataset, train_labels)\n",
"valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n",
"test_dataset, test_labels = reformat(test_dataset, test_labels)\n",
"print('Training set', train_dataset.shape, train_labels.shape)\n",
"print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
"print('Test set', test_dataset.shape, test_labels.shape)"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Training set (200000, 28, 28, 1) (200000, 10)\n",
"Validation set (10000, 28, 28, 1) (10000, 10)\n",
"Test set (18724, 28, 28, 1) (18724, 10)\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "code",
"metadata": {
"id": "AgQDIREv02p1",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"def accuracy(predictions, labels):\n",
" return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n",
" / predictions.shape[0])"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "5rhgjmROXu2O",
"colab_type": "text"
},
"source": [
"Let's build a small network with two convolutional layers, followed by one fully connected layer. Convolutional networks are more expensive computationally, so we'll limit its depth and number of fully connected nodes."
]
},
{
"cell_type": "code",
"metadata": {
"id": "IZYv70SvvOan",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
}
},
"cellView": "both"
},
"source": [
"batch_size = 16\n",
"patch_size = 5\n",
"depth = 16\n",
"num_hidden = 64\n",
"\n",
"graph = tf.Graph()\n",
"\n",
"with graph.as_default():\n",
"\n",
" # Input data.\n",
" tf_train_dataset = tf.placeholder(\n",
" tf.float32, shape=(batch_size, image_size, image_size, num_channels))\n",
" tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
" tf_valid_dataset = tf.constant(valid_dataset)\n",
" tf_test_dataset = tf.constant(test_dataset)\n",
" \n",
" # Variables.\n",
" layer1_weights = tf.Variable(tf.truncated_normal(\n",
" [patch_size, patch_size, num_channels, depth], stddev=0.1))\n",
" layer1_biases = tf.Variable(tf.zeros([depth]))\n",
" layer2_weights = tf.Variable(tf.truncated_normal(\n",
" [patch_size, patch_size, depth, depth], stddev=0.1))\n",
" layer2_biases = tf.Variable(tf.constant(1.0, shape=[depth]))\n",
" layer3_weights = tf.Variable(tf.truncated_normal(\n",
" [image_size // 4 * image_size // 4 * depth, num_hidden], stddev=0.1))\n",
" layer3_biases = tf.Variable(tf.constant(1.0, shape=[num_hidden]))\n",
" layer4_weights = tf.Variable(tf.truncated_normal(\n",
" [num_hidden, num_labels], stddev=0.1))\n",
" layer4_biases = tf.Variable(tf.constant(1.0, shape=[num_labels]))\n",
" \n",
" # Model.\n",
" def model(data):\n",
" conv = tf.nn.conv2d(data, layer1_weights, [1, 2, 2, 1], padding='SAME')\n",
" hidden = tf.nn.relu(conv + layer1_biases)\n",
" conv = tf.nn.conv2d(hidden, layer2_weights, [1, 2, 2, 1], padding='SAME')\n",
" hidden = tf.nn.relu(conv + layer2_biases)\n",
" shape = hidden.get_shape().as_list()\n",
" reshape = tf.reshape(hidden, [shape[0], shape[1] * shape[2] * shape[3]])\n",
" hidden = tf.nn.relu(tf.matmul(reshape, layer3_weights) + layer3_biases)\n",
" return tf.matmul(hidden, layer4_weights) + layer4_biases\n",
" \n",
" # Training computation.\n",
" logits = model(tf_train_dataset)\n",
" loss = tf.reduce_mean(\n",
" tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))\n",
" \n",
" # Optimizer.\n",
" optimizer = tf.train.GradientDescentOptimizer(0.05).minimize(loss)\n",
" \n",
" # Predictions for the training, validation, and test data.\n",
" train_prediction = tf.nn.softmax(logits)\n",
" valid_prediction = tf.nn.softmax(model(tf_valid_dataset))\n",
" test_prediction = tf.nn.softmax(model(tf_test_dataset))"
],
"outputs": [],
"execution_count": 0
},
{
"cell_type": "code",
"metadata": {
"id": "noKFb2UovVFR",
"colab_type": "code",
"colab": {
"autoexec": {
"startup": false,
"wait_interval": 0
},
"output_extras": [
{
"item_id": 37
}
]
},
"cellView": "both",
"executionInfo": {
"elapsed": 63292,
"status": "ok",
"timestamp": 1446658966251,
"user": {
"color": "",
"displayName": "",
"isAnonymous": false,
"isMe": true,
"permissionId": "",
"photoUrl": "",
"sessionId": "0",
"userId": ""
},
"user_tz": 480
},
"outputId": "28941338-2ef9-4088-8bd1-44295661e628"
},
"source": [
"num_steps = 1001\n",
"\n",
"with tf.Session(graph=graph) as session:\n",
" tf.global_variables_initializer().run()\n",
" print('Initialized')\n",
" for step in range(num_steps):\n",
" offset = (step * batch_size) % (train_labels.shape[0] - batch_size)\n",
" batch_data = train_dataset[offset:(offset + batch_size), :, :, :]\n",
" batch_labels = train_labels[offset:(offset + batch_size), :]\n",
" feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}\n",
" _, l, predictions = session.run(\n",
" [optimizer, loss, train_prediction], feed_dict=feed_dict)\n",
" if (step % 50 == 0):\n",
" print('Minibatch loss at step %d: %f' % (step, l))\n",
" print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))\n",
" print('Validation accuracy: %.1f%%' % accuracy(\n",
" valid_prediction.eval(), valid_labels))\n",
" print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))"
],
"outputs": [
{
"output_type": "stream",
"text": [
"Initialized\n",
"Minibatch loss at step 0 : 3.51275\n",
"Minibatch accuracy: 6.2%\n",
"Validation accuracy: 12.8%\n",
"Minibatch loss at step 50 : 1.48703\n",
"Minibatch accuracy: 43.8%\n",
"Validation accuracy: 50.4%\n",
"Minibatch loss at step 100 : 1.04377\n",
"Minibatch accuracy: 68.8%\n",
"Validation accuracy: 67.4%\n",
"Minibatch loss at step 150 : 0.601682\n",
"Minibatch accuracy: 68.8%\n",
"Validation accuracy: 73.0%\n",
"Minibatch loss at step 200 : 0.898649\n",
"Minibatch accuracy: 75.0%\n",
"Validation accuracy: 77.8%\n",
"Minibatch loss at step 250 : 1.3637\n",
"Minibatch accuracy: 56.2%\n",
"Validation accuracy: 75.4%\n",
"Minibatch loss at step 300 : 1.41968\n",
"Minibatch accuracy: 62.5%\n",
"Validation accuracy: 76.0%\n",
"Minibatch loss at step 350 : 0.300648\n",
"Minibatch accuracy: 81.2%\n",
"Validation accuracy: 80.2%\n",
"Minibatch loss at step 400 : 1.32092\n",
"Minibatch accuracy: 56.2%\n",
"Validation accuracy: 80.4%\n",
"Minibatch loss at step 450 : 0.556701\n",
"Minibatch accuracy: 81.2%\n",
"Validation accuracy: 79.4%\n",
"Minibatch loss at step 500 : 1.65595\n",
"Minibatch accuracy: 43.8%\n",
"Validation accuracy: 79.6%\n",
"Minibatch loss at step 550 : 1.06995\n",
"Minibatch accuracy: 75.0%\n",
"Validation accuracy: 81.2%\n",
"Minibatch loss at step 600 : 0.223684\n",
"Minibatch accuracy: 100.0%\n",
"Validation accuracy: 82.3%\n",
"Minibatch loss at step 650 : 0.619602\n",
"Minibatch accuracy: 87.5%\n",
"Validation accuracy: 81.8%\n",
"Minibatch loss at step 700 : 0.812091\n",
"Minibatch accuracy: 75.0%\n",
"Validation accuracy: 82.4%\n",
"Minibatch loss at step 750 : 0.276302\n",
"Minibatch accuracy: 87.5%\n",
"Validation accuracy: 82.3%\n",
"Minibatch loss at step 800 : 0.450241\n",
"Minibatch accuracy: 81.2%\n",
"Validation accuracy: 82.3%\n",
"Minibatch loss at step 850 : 0.137139\n",
"Minibatch accuracy: 93.8%\n",
"Validation accuracy: 82.3%\n",
"Minibatch loss at step 900 : 0.52664\n",
"Minibatch accuracy: 75.0%\n",
"Validation accuracy: 82.2%\n",
"Minibatch loss at step 950 : 0.623835\n",
"Minibatch accuracy: 87.5%\n",
"Validation accuracy: 82.1%\n",
"Minibatch loss at step 1000 : 0.243114\n",
"Minibatch accuracy: 93.8%\n",
"Validation accuracy: 82.9%\n",
"Test accuracy: 90.0%\n"
],
"name": "stdout"
}
],
"execution_count": 0
},
{
"cell_type": "markdown",
"metadata": {
"id": "KedKkn4EutIK",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 1\n",
"---------\n",
"\n",
"The convolutional model above uses convolutions with stride 2 to reduce the dimensionality. Replace the strides by a max pooling operation (`nn.max_pool()`) of stride 2 and kernel size 2.\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "klf21gpbAgb-",
"colab_type": "text"
},
"source": [
"---\n",
"Problem 2\n",
"---------\n",
"\n",
"Try to get the best performance you can using a convolutional net. Look for example at the classic [LeNet5](http://yann.lecun.com/exdb/lenet/) architecture, adding Dropout, and/or adding learning rate decay.\n",
"\n",
"---"
]
}
]
}
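Problems 1 and 2 above modify the convolutional model defined earlier in this notebook. Below is a minimal, illustrative sketch of both changes using the TF 1.x graph API; it is not the course's reference solution. The shapes, hyperparameters, and names (`image_size`, `layer1_weights`, `keep_prob`, the stand-in `loss`, and so on) are assumptions chosen to keep the snippet self-contained and would be replaced by the variables already defined in your own graph.

import tensorflow as tf  # TF 1.x, as used throughout these notebooks

# Assumed shapes mirroring the notebook's conventions: 28x28 grayscale images,
# 5x5 patches, depth 16.
image_size, num_channels, patch_size, depth = 28, 1, 5, 16

graph = tf.Graph()
with graph.as_default():
  data = tf.placeholder(tf.float32, shape=(None, image_size, image_size, num_channels))
  layer1_weights = tf.Variable(
      tf.truncated_normal([patch_size, patch_size, num_channels, depth], stddev=0.1))
  layer1_biases = tf.Variable(tf.zeros([depth]))

  # Problem 1: convolve with stride 1, then downsample with a 2x2 max pool of
  # stride 2, instead of letting the convolution itself use stride 2.
  conv = tf.nn.conv2d(data, layer1_weights, strides=[1, 1, 1, 1], padding='SAME')
  hidden = tf.nn.relu(conv + layer1_biases)
  pooled = tf.nn.max_pool(hidden, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

  # Problem 2 ingredients: dropout on a hidden layer plus learning-rate decay.
  keep_prob = tf.placeholder(tf.float32)  # feed e.g. 0.5 while training, 1.0 for evaluation
  dropped = tf.nn.dropout(pooled, keep_prob)

  # Stand-in scalar loss so the optimizer wiring runs on its own; in the
  # assignment this would be the softmax cross-entropy loss defined earlier.
  loss = tf.reduce_mean(dropped)

  global_step = tf.Variable(0, trainable=False)
  learning_rate = tf.train.exponential_decay(
      0.05, global_step, decay_steps=1000, decay_rate=0.9, staircase=True)
  optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
      loss, global_step=global_step)

Feeding `keep_prob: 1.0` when computing the validation and test predictions keeps dropout from distorting the reported accuracies.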

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large


@@ -1,15 +0,0 @@
FROM gcr.io/tensorflow/tensorflow:latest
LABEL maintainer="Vincent Vanhoucke <vanhoucke@google.com>"
# Pillow requires libjpeg by default as of version 3.0.
RUN apt-get update && apt-get install -y --no-install-recommends \
libjpeg8-dev \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN pip install scikit-learn pyreadline Pillow imageio
RUN rm -rf /notebooks/*
ADD *.ipynb /notebooks/
WORKDIR /notebooks
CMD ["/run_jupyter.sh", "--allow-root"]


@@ -1,127 +1,5 @@
Assignments for Udacity Deep Learning class with TensorFlow
===========================================================
The contents of this folder have been moved to:
[https://github.com/tensorflow/examples/tree/master/courses/udacity_deep_learning](https://github.com/tensorflow/examples/tree/master/courses/udacity_deep_learning)
Course information can be found at https://www.udacity.com/course/deep-learning--ud730
## Getting Started with Docker
If you are new to Docker, follow the
[Docker documentation](https://docs.docker.com/machine/get-started/) to start a
Docker instance. Read the requirements for Windows and Mac carefully.
Running the Docker container from the Google Cloud repository
-------------------------------------------------------------
docker run -p 8888:8888 --name tensorflow-udacity -it gcr.io/tensorflow/udacity-assignments:1.0.0
Note that if you ever exit the container, you can return to it using:
docker start -ai tensorflow-udacity
Accessing the Notebooks
-----------------------
On Linux, go to: http://127.0.0.1:8888
On Mac, open a terminal and find the virtual machine's IP using:
docker-machine ip default
Then go to: http://(ip address received from the above command):8888 (likely
http://192.168.99.100:8888)
On Windows, use PowerShell to find the virtual machine's IP using:
docker-machine ip default
Then go to: http://(ip address received from the above command):8888 (likely
http://192.168.99.100:8888)
FAQ
---
* **I'm getting a MemoryError when loading data in the first notebook.**
If you're using a Mac, Docker works by running a VM locally (which
is controlled by `docker-machine`). It's quite likely that you'll
need to bump up the amount of RAM allocated to the VM beyond the
default (which is 1G).
[This Stack Overflow question](http://stackoverflow.com/questions/32834082/how-to-increase-docker-machine-memory-mac)
has two good suggestions; we recommend using 8G.
In addition, you may need to pass `--memory=8g` as an extra argument to
`docker run`.
* **I want to create a new virtual machine instead of the default one.**
`docker-machine` is a tool to provision and manage Docker hosts; it supports multiple platforms (e.g. AWS, GCE, Azure, VirtualBox, ...). To create a new virtual machine locally with a built-in Docker engine, you can use
docker-machine create -d virtualbox --virtualbox-memory 8196 tensorflow
`-d` specifies the driver for the platform; supported drivers are listed [here](https://docs.docker.com/machine/drivers/). Here we use VirtualBox to create a new virtual machine locally. `tensorflow` is the name of the virtual machine; feel free to use any name you like. You can use
docker-machine ip tensorflow
to get the IP of the new virtual machine. To switch from the default virtual machine to the new one (here, `tensorflow`), type
eval $(docker-machine env tensorflow)
Note that `docker-machine env tensorflow` outputs environment variables such as `DOCKER_HOST`; your Docker client is then connected to the Docker host in the virtual machine `tensorflow`.
* **I'm getting a TLS connection error.**
If you get an error about your Docker TLS connection, run the command below to confirm the problem.
docker-machine ip tensorflow
If that is the case, use the instructions on [this page](https://docs.docker.com/toolbox/faqs/troubleshoot/) to resolve the issue.
* **I'm getting the error - docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host? - when I run 'docker run'.**
This is a permissions issue; a popular answer for Linux and Mac OS X is provided [here](http://stackoverflow.com/questions/21871479/docker-cant-connect-to-docker-daemon) on Stack Overflow.
Notes for anyone needing to build their own containers (mostly instructors)
===========================================================================
Building a local Docker container
---------------------------------
cd tensorflow/examples/udacity
docker build --pull -t $USER/assignments .
Running the local container
---------------------------
To run a disposable container:
docker run -p 8888:8888 -it --rm $USER/assignments
Note that the above command creates an ephemeral container; all data stored in it will be lost when the container stops.
To avoid losing work between sessions in the container, it is recommended that you mount the `tensorflow/examples/udacity` directory into the container:
docker run -p 8888:8888 -v </path/to/tensorflow/examples/udacity>:/notebooks -it --rm $USER/assignments
This will allow you to save work and have access to generated files on the host filesystem.
Pushing a Google Cloud release
------------------------------
V=1.0.0
docker tag $USER/assignments gcr.io/tensorflow/udacity-assignments:$V
gcloud docker push gcr.io/tensorflow/udacity-assignments
docker tag $USER/assignments gcr.io/tensorflow/udacity-assignments:latest
gcloud docker push gcr.io/tensorflow/udacity-assignments
History
-------
* 0.1.0: Initial release.
* 0.2.0: Many fixes, including lower memory footprint and support for Python 3.
* 0.3.0: Use 0.7.1 release.
* 0.4.0: Move notMNIST data for Google Cloud.
* 0.5.0: Actually use 0.7.1 release.
* 0.6.0: Update to TF 0.10.0, add libjpeg (for Pillow).
* 1.0.0: Update to TF 1.0.0 release.