The good news is, the code is mostly already written for you and accessible online from the
author's GitHub repository.
The bad news is, TensorFlow is evolving so rapidly that the code may already be out of date or undocumented by the time
you're ready to start the assignment. Hence, **a large part of these TensorFlow assignments will be
learning the very practical skill of getting almost-working code to work**.

Start by cloning the repository from a terminal: `git clone https://github.com/darksigma/Fundamentals-of-Deep-Learning-Book`. Now you've got the whole repository in a convenient folder called **Fundamentals-of-Deep-Learning-Book**.

Before moving on from logistic regression, change the number of epochs (iterations) at the top from 60 to 100, and make a note of your result, which you'll need in the final part below.
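The epoch count is just the length of the outer training loop. The book's script does this in TensorFlow, but the effect of that one number can be sketched with a plain-NumPy stand-in on invented toy data (nothing below is the repository's code; all names and data here are made up for illustration):

```python
import numpy as np

def train_logreg(epochs, lr=0.5, seed=0):
    """Full-batch gradient descent on a toy two-class problem."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):                     # <-- the knob you are changing
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    p = np.clip(1.0 / (1.0 + np.exp(-(X @ w + b))), 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # cross-entropy loss

print(train_logreg(60), train_logreg(100))  # more epochs -> lower training loss here
```

On this convex toy problem, more epochs always lowers the training loss; part of the point of the questions at the end is to see whether that intuition carries over to the real scripts.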

Depending on your hardware setup, this multilayer perceptron model can take a *looooooong* time to
complete its 1,000 training epochs (iterations) – and the results aren't even that good! So, after
noting your final test success for 1,000 epochs, reduce the number of epochs to 100 so you can get your result more quickly.

Look over the textbook page to see how you can get TensorBoard to visualize your logistic-regression or
multilayer-perceptron results in a web browser (either one is okay). Once
you've got the TensorBoard display in your browser, try clicking the different
tabs to see the kinds of visualizations it provides. As evidence of your
success, create two screenshots, one using the **SCALARS** tab to
show a plot of the cost and validation error, and another using the
**GRAPHS** tab to show the graph of your neural network. (If you don't know how to do a screenshot on
your computer, google it, and if you still have trouble, ask me!) Later you will include these screenshots in
a brief write-up.
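For reference, once your chosen script has written its event files to a log directory, TensorBoard is typically launched from the command line like this (the `./logs` path is hypothetical; substitute whatever directory your script actually writes to):

```shell
# point TensorBoard at the directory containing the event files
tensorboard --logdir=./logs
# then open http://localhost:6006 in your browser
```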

If you haven't already changed the number of epochs in **multilayer_perceptron_updated.py** to 100, do that now.
Then copy it to a new script **multilayer_perceptron_modified.py**. Run that script once just to make sure
it's doing what you expect. Then modify it to use only hidden layer 1 (the one with 256 units). When you're
done with these small modifications, there should be no more code referring to hidden layer 2. Then run
your modified network a few times to see how it does.
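As a sketch of what the modification amounts to, here is the shape of a one-hidden-layer forward pass in NumPy (hypothetical names and MNIST-style sizes, with ReLU as a stand-in activation; the real script of course also contains the TensorFlow training loop):

```python
import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden_1, n_classes = 784, 256, 10   # MNIST-style sizes

# After the modification: input -> hidden layer 1 (256 units) -> output logits.
# Every weight and bias that referred to hidden layer 2 is simply gone.
W1 = rng.normal(scale=0.05, size=(n_input, n_hidden_1))
b1 = np.zeros(n_hidden_1)
W_out = rng.normal(scale=0.05, size=(n_hidden_1, n_classes))
b_out = np.zeros(n_classes)

def forward(x):
    h1 = np.maximum(0.0, x @ W1 + b1)   # hidden layer 1, ReLU
    return h1 @ W_out + b_out           # straight to the output layer

batch = rng.normal(size=(32, n_input))  # a fake batch of 32 flattened images
print(forward(batch).shape)             # (32, 10)
```

If any weight matrix or bias for hidden layer 2 survives in your modified script, you haven't finished the modification.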

By now you should have noted your test results for these four runs:

- Logistic regression with 100 epochs
- Original (two hidden layers) multilayer perceptron with 1000 epochs
- Original (two hidden layers) multilayer perceptron with 100 epochs
- Modified (one hidden layer) multilayer perceptron with 100 epochs

In your write-up, answer the following questions:

- Did running many more (1,000 vs. 100) epochs yield better or worse results for the original multilayer perceptron?
- Did the multilayer perceptron do better or worse than logistic regression when you ran them both for 100 epochs?
- Did decreasing the number of hidden layers reduce the success of the multilayer perceptron?
- What general lesson might you deduce from your answers to these three questions?