CSCI 315 Assignment #5

Due 11:59PM Monday 28 March, via Sakai

Goals

The goals of this assignment are
  1. To learn about the Receiver Operating Characteristic (ROC) concept that Prof. John David presented during his guest lecture.

  2. To begin our study of convolutional networks by doing simple image convolution in Theano.

  3. To investigate Theano's partial-derivatives capability.

  4. To compare CPU and GPU performance for deep convolutional network learning, and move on to a new dataset.

Part 1: ROC

Grab your two / not-two classifier code from Part 2 of Assignment 3, and modify it to pickle the back-prop network after training. Then write a script roc.py that takes these pickled weights, loads the digits_test.txt file, and tries out various thresholds for distinguishing 2 from the other digits. By looping over thresholds between 0 and 1, you should be able to generate a false-positive count and a miss count for each threshold. Turn these into fractions (misses / 250, false positives / 2250), and have your script use matplotlib.pyplot to display the resulting values in an ROC plot like the one on the Wikipedia page (you'll have just one curve, instead of three). Note that the plotting convention for ROC puts the True Positive Rate (hits, not misses) on the y axis!
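Here's a minimal sketch of roc.py, under some assumptions: the pickle file name is a placeholder, feed_forward is a hypothetical helper standing in for however your Assignment 3 network computes its output in [0, 1], and the parsing assumes each row of digits_test.txt ends with the digit's class label (adapt all of these to your actual code and file format):

    import pickle
    import numpy as np
    import matplotlib.pyplot as plt

    # Load the pickled back-prop network from your modified Assignment 3 script
    # (file name is a placeholder).
    with open('two_net.pkl', 'rb') as f:
        net = pickle.load(f)

    # Load the test set; adjust the parsing to the actual format of digits_test.txt.
    data = np.loadtxt('digits_test.txt')
    images, labels = data[:, :-1], data[:, -1]
    is_two = (labels == 2)                    # True for the 250 twos

    # feed_forward is a hypothetical helper: it should return the network's
    # scalar output in [0, 1] for each test image.
    outputs = feed_forward(net, images)

    fpr, tpr = [], []
    for thresh in np.linspace(0, 1, 101):
        positive = outputs > thresh
        hits = np.sum(positive & is_two)
        false_pos = np.sum(positive & ~is_two)
        tpr.append(hits / 250.)               # true-positive rate = hits / (# of twos)
        fpr.append(false_pos / 2250.)         # false-positive rate = false alarms / (# of non-twos)

    plt.plot(fpr, tpr)
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.show()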

Part 2: Image convolution

The deeplearning.net tutorial for convolutional networks has a nice illustration of how convolution works. If you copy/paste the code from that section, along with the code in the Let's have a little bit of fun with this ... section that follows it, you'll have a little script (call it wolves.py) to reproduce the edge-detection results for their three-wolf-moon image.* As with the pickling from previous assignments, you'll likely run into some Python2/Python3 hassles with opening binary files; in this case, you'll have to modify the open command for the image. Once I did that, I was able to see the filtered images.
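For example, if your copy of the tutorial code opens the image with a plain open() call, the usual Python 3 fix is just to open the file in binary mode (the file name below is the tutorial's; adjust the path to wherever you saved the image):

    from PIL import Image

    # Python 3 needs the file opened in binary mode before handing it to PIL
    img = Image.open(open('3wolfmoon.jpg', 'rb'))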

To understand their remark about random weights implementing edge detection, take a look at the Wikipedia entry on convolution kernel matrices. Based on those examples, you should be able to (1) see how the random weights might implement edge detection; (2) come up with your own weights and biases to implement blur (there's a sketch after the list below). So, to finish this exercise, do the following:

  1. Add a print statement at the end of your wolves.py script to report your explanation of why these random weights end up implementing edge detection.

  2. Change the weights W and biases b to produce a blurring, rather than edge-detection, result. You will find numpy.ones() useful for this.
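A box blur is just an average over each filter's neighborhood: every weight equal, summing to 1, with zero bias. A sketch, assuming the tutorial's filter shape of two 9x9 filters over three input channels (adjust the shape and dtype to match your wolves.py, and wrap the arrays in theano.shared the same way the tutorial wraps its random W and b):

    import numpy as np

    # Box blur: every weight equal and summing to 1 over each 3-channel 9x9
    # neighborhood, with zero bias, so each output pixel becomes the average
    # of its neighborhood.
    w_shp = (2, 3, 9, 9)
    W_blur = np.ones(w_shp) / (3 * 9 * 9)
    b_blur = np.zeros((2,))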

Part 3: Symbolic derivatives with Theano

In a script gradient.py modify the Python example in the Computing Derivatives Symbolically section of the lecture slides, so that it computes the gradient for a function other than $f(x) = x^2$. (You could try logistic-sigmoid for example.) Instead of just three values, have it use a whole NumPy array, via np.linspace. Then, using Wolfram's Widget or your mad calculus skillz, compute the derivative by hand, and add the code for computing the derivative this way. Using matplotlib.pyplot, plot the resulting two arrays of derivatives (Theano's and yours) in the same plot, using a different color for each. Finally, report the RMS error between Theano's values and yours (which should of course be very close to zero!)
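One possible sketch of gradient.py, using the logistic sigmoid (whose derivative by hand is $\sigma(x)(1 - \sigma(x))$) and a vector input so the whole linspace array goes through Theano in one call; adapt it to whatever function you pick:

    import numpy as np
    import matplotlib.pyplot as plt
    import theano
    import theano.tensor as T

    # Symbolic gradient of the logistic sigmoid: sum to a scalar so T.grad
    # is happy, then the gradient comes back element-wise over the vector.
    x = T.dvector('x')
    y = T.sum(T.nnet.sigmoid(x))
    dy = T.grad(y, x)
    grad_fn = theano.function([x], dy)

    xvals = np.linspace(-5, 5, 100)
    theano_deriv = grad_fn(xvals)

    # Derivative by hand: sigma(x) * (1 - sigma(x))
    sig = 1 / (1 + np.exp(-xvals))
    hand_deriv = sig * (1 - sig)

    # Plot both derivative arrays in the same figure, different colors
    plt.plot(xvals, theano_deriv, 'b')
    plt.plot(xvals, hand_deriv, 'r')
    plt.show()

    rms = np.sqrt(np.mean((theano_deriv - hand_deriv) ** 2))
    print('RMS error: %e' % rms)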

Part 4: Deep Learning with LeNet on the GPU

Okay, enough messing around – it's time for some Deep Learning! Because of the CUDA interaction, this part will only work in Python2. So to run your code, you should launch IDLE (not IDLE3), or run your script with python (not python3) from the command line.

Returning to the Convolutional Networks tutorial on deeplearning.net, download their convolutional_mlp.py code, and run it using python (remember: not python3). Now, unless you have several hours to sit around while it converges to an insanely low error rate under one percent, you'll want to change the function call in main() to reduce the number of epochs to a value like 10 – which should give you good results (error around 2%) in about 11 minutes.
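If your copy of convolutional_mlp.py ends the way mine did, with a bare call to evaluate_lenet5(), the change is just one keyword argument:

    if __name__ == '__main__':
        evaluate_lenet5(n_epochs=10)    # cut the epoch count down from the default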

So now it's time to try out that GPU thing you've been hearing so much about. The following trick will work on our NVIDIA-equipped computers in P413; I can't guarantee it'll work on your laptop. You need to edit the hidden .bashrc file in your home directory, copy-pasting in the following two lines:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-7.5/targets/x86_64-linux/lib
export THEANO_FLAGS='cuda.root=/usr/local/cuda,device=gpu,floatX=float32'
(The first line tells the operating system where to look for CUDA, and the second line tells Theano to use CUDA.)

Launch a new terminal and re-run convolutional_mlp.py from there. You should immediately see a message telling you it's running on the GPU. If you don't see the message, make sure your .bashrc file contains the THEANO_FLAGS line exactly as above, and if you still can't get it to run, let me know. Amazingly, it should run 10 epochs in under a minute, with the same excellent results.
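If you want to double-check what Theano thinks before launching a long run, a quick sanity check from the new terminal (assuming the flags above took effect) is:

    import theano
    print(theano.config.device)    # should report 'gpu' if the flags took effect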

At this point you're probably pretty sick of the MNIST digits! So let's wind things up by trying our convolutional network on some other kinds of images. Take a few minutes to read up on the CIFAR-10 dataset of tiny (32x32) color images. Because this dataset is so huge (around 160MB), Steve Goryl and I have put it in a shared directory for you. Although the images are in color, I have written a little module containing a function that will load an entire batch of 10,000 images, convert them all to grayscale at once (the magic of NumPy!) and return their classification labels and file names. To show how to call the function, I added a little code at the bottom to load the first batch and display its first image. (You won't need to display the images in your own code, but it's fun to see what they look like!)
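For reference, here's a hedged sketch of calling the loader yourself; the module name, the shared-directory path, and the order of the returned values are all placeholders, so check my module (and the example code at its bottom) for the real ones:

    import matplotlib.pyplot as plt
    from cifar10_gray import load_cifar10_gray      # placeholder module name

    # Placeholder path: point this at the shared CIFAR-10 directory
    images, labels, names = load_cifar10_gray('/path/to/cifar-10-batches-py/data_batch_1')

    # Show the first image of the batch (assumes one flattened 32x32 image per row)
    plt.imshow(images[0].reshape(32, 32), cmap='gray')
    plt.title(names[0])
    plt.show()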

So here's what you're gonna do for this part: copy your modified convolutional_mlp.py to a new file cifar10_mlp.py. Look for the call to load_data(); the actual function is implemented in logistic_sgd.py from Assignment 4. Now, instead of importing load_data from logistic_sgd, add your own load_data function to cifar10_mlp.py that works with the CIFAR-10 dataset:

  1. Load a training set, validation set, and test set, using calls to my load_cifar10_gray function. For example, you could use data_batch_1 for your training set, data_batch_2 for your validation set, and test_batch for your testing set.

  2. As with the original load_data function, return the training, testing, and validation sets as tuples of shared Theano variables (a sketch of one possible version follows this list).
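Here's a minimal sketch of what that load_data might look like. The shared-directory path, the assumed return order of load_cifar10_gray (images, labels, file names), and the shared_dataset helper (borrowed from logistic_sgd.py) are all assumptions for you to adapt:

    import numpy as np
    import theano
    import theano.tensor as T

    def load_data(dataset=None):
        '''Return [(train_x, train_y), (valid_x, valid_y), (test_x, test_y)]
        as shared Theano variables, mirroring load_data in logistic_sgd.py.'''

        cifar_dir = '/path/to/cifar-10-batches-py/'   # placeholder: the shared directory

        def shared_dataset(images, labels):
            # Same trick as logistic_sgd.py: keep the data in shared variables
            # so Theano can move it to the GPU in one shot.
            shared_x = theano.shared(np.asarray(images, dtype=theano.config.floatX), borrow=True)
            shared_y = theano.shared(np.asarray(labels, dtype=theano.config.floatX), borrow=True)
            return shared_x, T.cast(shared_y, 'int32')

        # load_cifar10_gray comes from my module; assumed return order:
        # images, labels, file names
        train_x, train_y, _ = load_cifar10_gray(cifar_dir + 'data_batch_1')
        valid_x, valid_y, _ = load_cifar10_gray(cifar_dir + 'data_batch_2')
        test_x,  test_y,  _ = load_cifar10_gray(cifar_dir + 'test_batch')

        return [shared_dataset(train_x, train_y),
                shared_dataset(valid_x, valid_y),
                shared_dataset(test_x, test_y)]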
Once you've got load_data written, you should only need to modify the layer0 and layer1 code to switch from 28x28 to 32x32 pixels. Since there are 10 image classes, the softmax layer will be the same as for the ten-digit classifier.

Now run your cifar10_mlp.py on the GPU and see what kind of results you get!

Extra-Credit Possibilities

What to submit to Sakai

Zip up everything (scripts and pickles) I'll need to run your code. Be smart: after submitting, download, unzip, and test everything in a fresh directory.
* This lovely image always makes me think of Three Dog Night, an awesome band from the 1970s. The T-shirt from which the image derives also has mystical powers.