Level 1 Certification Exercises
Pick any two exercises to complete.
Your solution notebooks are due one week from the date you request the exercises.
Please name the file NN_Level1_FirstnameLastname.nb.
A grade of 70% or higher is required for certification.
1. Regularization and Network Training
1. Generate noisy data by adding random noise, with an underlying Gaussian distribution, and visualize it.
2. Use a "low-capacity" perceptron network for training.
3. Train the network with a "large-capacity" perceptron model.
4. Visualize the training to see how the model overfits to the noise.
6. Stop the training immediately (via the GUI) and visualize the results.
8. One alternate approach is to use DropoutLayer. It represents a net layer that sets its input elements to zero with probability 0.5 during training and is a commonly used form of regularization. Implement this.
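The steps above can be sketched in Wolfram Language as follows. The target function, net sizes, and training options here are illustrative assumptions, not the graded solution:

```wolfram
(* Step 1: noisy samples of an assumed target function Sin[2 Pi x] *)
data = Table[
   x -> Sin[2 Pi x] + RandomVariate[NormalDistribution[0, 0.15]],
   {x, Subdivide[0., 1., 60]}];
ListPlot[List @@@ data]

(* Steps 3 and 8: a "large-capacity" net regularized with DropoutLayer,
   which zeros inputs with probability 0.5 during training only *)
net = NetChain[
   {LinearLayer[100], Ramp, DropoutLayer[0.5], LinearLayer[1]},
   "Input" -> "Real", "Output" -> "Real"];
trained = NetTrain[net, data, MaxTrainingRounds -> 500];

(* Step 4: compare the fit against the underlying function *)
Plot[{Sin[2 Pi x], trained[x]}, {x, 0, 1}]
```

Running NetTrain interactively shows the training panel, where the Stop button performs the early stopping asked for in step 6.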
2. Forest Fire Area Regression
Background
In this exploration, you will see how to use a self-normalizing net to perform numerical data regression. This kind of network assumes that the input data has a mean of 0 and variance of 1, so you will need to standardize the training and testing data. The problem will mostly follow the steps of doing regression on nominal data on the Wolfram Repository page.
2. The last column of the data corresponds to the area of the forest fire, which you must predict given the inputs from the first 12 columns. The descriptions of the independent variables can be inferred from the first row of the dataset. With this in mind, clean up the data and make it suitable for training with either automated machine learning functions or a neural network.
3. Find the dimensions of the data. Keep 400 data points for training and use the rest for testing.
4. To standardize this data, convert all the nominal classes to indicator vectors.
5. Next, standardize the numeric vector.
6. Create the final extractor and use it to create the standardized test and training datasets.
7. Train the net for 1000 rounds, leaving 7% of the data as a validation set, and return both the trained net and the lowest validation loss.
Hint: The batch size can be important when training this model.
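Steps 5 and 7 can be sketched as below, assuming trainData and testData are lists of rules featureVector -> area whose nominal columns have already been one-hot encoded; the net shape and batch size are illustrative assumptions:

```wolfram
(* Step 5: standardize features to mean 0 and variance 1,
   using statistics from the training set only *)
means = Mean[Keys[trainData]];
sds = StandardDeviation[Keys[trainData]];
standardize[v_] := (v - means)/sds;
trainStd = standardize[#[[1]]] -> #[[2]] & /@ trainData;
testStd = standardize[#[[1]]] -> #[[2]] & /@ testData;

(* A small self-normalizing net: SELU activations suit standardized inputs *)
net = NetChain[
   {LinearLayer[64], ElementwiseLayer["SELU"], LinearLayer[1]},
   "Input" -> Length[means], "Output" -> "Real"];

(* Step 7: 1000 rounds, 7% held out, return the net and its best validation loss *)
{trained, lowestLoss} = NetTrain[net, trainStd,
   {"TrainedNet", "LowestValidationLoss"},
   ValidationSet -> Scaled[0.07],
   MaxTrainingRounds -> 1000, BatchSize -> 16];
```

With only 400 training points, a small BatchSize matters because larger batches leave very few gradient updates per round.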
3. Transfer Learning
1. Obtain the "Audio Cats and Dogs" dataset and divide it into training and test sets of approximately equal cumulative duration for each animal (e.g. 5 minutes of dog audio should be matched with 5 minutes of cat audio).
2. Hint: Pay attention to the format of the data going into Classify.
3. Create the AudioIdentify feature extractor net, which will be used as the starting point for training.
Hint: Extract the core AudioIdentify net, chop off the classification layers and reconstruct the full net using NetMapOperator to compute features on each chunk, then use AggregationLayer to average these sequences over the time dimension. The output should be a single flat vector of length 1664.
4. Create the net that will be in charge of classifying cat and dog sounds.
5. Train the classification layers for the new task.
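Steps 3-5 can be sketched as follows, assuming coreNet is the AudioIdentify net with its classification layers already chopped off (per the hint) and trainSet is a list of rules audio -> "cat" | "dog"; the layer sizes and option values are assumptions to verify against the actual net:

```wolfram
(* Step 3: map the core net over each audio chunk, then average the
   resulting feature sequence over the time dimension *)
extractor = NetChain[{NetMapOperator[coreNet], AggregationLayer[Mean, 1]}];

(* Step 4: a small head for the new two-class task *)
classifier = NetChain[{LinearLayer[2], SoftmaxLayer[]},
   "Output" -> NetDecoder[{"Class", {"cat", "dog"}}]];
full = NetChain[{extractor, classifier}];

(* Step 5: freeze the pretrained extractor (multiplier 0) and train
   only the new classification layers *)
trained = NetTrain[full, trainSet,
   LearningRateMultipliers -> {1 -> 0, 2 -> 1}];
```

Freezing the extractor keeps the pretrained AudioIdentify features intact, which is the essence of the transfer-learning approach this exercise asks for.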
4. Fool a Network
Take an image of yourself and convince the Inception V1 network that you are a tiger (or whatever else you might prefer).
Hint: You may want to add a ConstantArrayLayer and use the LearningRateMultipliers option to retrain the network.
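One common way to follow this hint is sketched below. The NetModel name is the standard repository name, but the graph wiring is an illustrative assumption, and img and targetClass are placeholders you supply; targetClass must be one of the net's own output class labels:

```wolfram
inception = NetModel["Inception V1 Trained on ImageNet Competition Data"];

(* Add a learnable perturbation to the (fixed) input image before
   it enters the frozen classifier *)
adv = NetGraph[<|
     "noise" -> ConstantArrayLayer[{3, 224, 224}],
     "add" -> TotalLayer[],
     "net" -> NetReplacePart[inception, "Input" -> {3, 224, 224}]|>,
   {{NetPort["Input"], "noise"} -> "add" -> "net"},
   "Input" -> NetEncoder[{"Image", {224, 224}}]];

(* Only the additive perturbation is learnable; everything else is frozen *)
trained = NetTrain[adv, {img -> targetClass},
   LearningRateMultipliers -> {"noise" -> 1, _ -> 0}];

trained[img]  (* the perturbed image should now classify as targetClass *)
```

Because gradient descent only updates the ConstantArrayLayer, the network itself is unchanged; you are optimizing an adversarial perturbation of your photo rather than retraining the classifier.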