## Description

What you’ll need to do

- Implement and train a basic neural network with backpropagation
- We’ll give you part of the code; you’ll need to fill in the details

- We’ll give you a ZIP archive containing two files:
  - studNet.py: the class and method outlines
  - studenttest.py: code to train and test your neural network
- studNet.py contains some, but not all, of the necessary methods
- You’ll need to write the others yourself

activation(z):

- This is the activation function used in the feed-forward computation of the network.
- The included function is the sigmoid, but you can change this as you see fit.

(The original slides show a graph of the sigmoid function here.)

sigderiv(z):

- This is the derivative of the sigmoid function

- You’ll need to use it to update the values when training your neural network
- If you use an activation function other than the sigmoid, then you’ll need to use a different derivative than the one returned by sigderiv
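As a concrete reference, the pair of functions described above can be sketched as follows; this mirrors the standard sigmoid definitions rather than the exact code shipped in studNet.py:

```python
import numpy as np

def activation(z):
    # Sigmoid: maps any real z into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def sigderiv(z):
    # Derivative of the sigmoid: sigma(z) * (1 - sigma(z)),
    # used when computing deltas during backpropagation
    s = activation(z)
    return s * (1.0 - s)
```

Note that the derivative reuses the activation itself, which is why swapping in a different activation function also forces you to swap in a different derivative.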

Class: neuralnetwork:

**__init__(self, size, seed):**

- This function will initialize the weights and biases for a neural network of the size specified by the ‘size’ parameter
- The ‘size’ parameter is a list of the form
  [input size, hidden layer 1 size, … , hidden layer n size, output size]


Example of __init__

Suppose size = [3, 4, 4, 1]

- then __init__ creates the corresponding network: 3 input units, two hidden layers of 4 units each, and 1 output unit
- It initializes a variable

      weights = [ [v_11 v_12 v_13]
                  [v_21 v_22 v_23]
                  [v_31 v_32 v_33]
                  [v_41 v_42 v_43] … ]

- each v_ij is the weight on the connection to unit *i* in the first hidden layer from unit *j* in the input layer
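A minimal sketch of what such an __init__ could do, assuming one NumPy weight matrix and one bias vector per pair of adjacent layers; the attribute names and the random initialization are assumptions here, not the provided code:

```python
import numpy as np

class neuralnetwork:
    def __init__(self, size, seed=42):
        # size = [input size, hidden layer sizes ..., output size]
        np.random.seed(seed)
        self.size = size
        # weights[l] connects layer l to layer l+1, so its shape is
        # (units in layer l+1) x (units in layer l)
        self.weights = [np.random.randn(size[i + 1], size[i])
                        for i in range(len(size) - 1)]
        self.biases = [np.random.randn(size[i + 1], 1)
                       for i in range(len(size) - 1)]
```

With size = [3, 4, 4, 1] this produces three weight matrices of shapes (4, 3), (4, 4), and (1, 4), matching the example above.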

Methods/Classes provided in studNet.py:

forward(self, input):

Given a vector of input parameters of the form

    [ [p_11, p_12, …, p_1n], [p_21, p_22, …, p_2n], … ]

this method will return a 3-tuple:

- the output value(s) for each example (the variable ‘a’ in the source code)
- the weighted-input values before activation was applied (the variable ‘pre_activations’)
- the values after activation for all layers (the variable ‘activations’)

- It returns ‘pre_activations’ and ‘activations’ because you’ll need them when updating the network
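A standalone sketch of the computation forward performs, assuming column-vector inputs, per-layer weight matrices, and the sigmoid activation (the provided studNet.py may organize this differently):

```python
import numpy as np

def activation(z):
    # sigmoid activation, as in the provided code
    return 1.0 / (1.0 + np.exp(-z))

def forward(weights, biases, x):
    # Returns the 3-tuple described above: (a, pre_activations, activations)
    pre_activations = []   # z values, kept for backpropagation later
    activations = [x]      # a values for all layers, input included
    a = x
    for W, b in zip(weights, biases):
        z = W @ a + b       # weighted input, before activation
        a = activation(z)   # layer output, after activation
        pre_activations.append(z)
        activations.append(a)
    return a, pre_activations, activations
```

With all-zero weights and biases, every pre-activation is 0 and every post-activation is sigmoid(0) = 0.5, which makes the bookkeeping easy to check by hand.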

What you have to do:

- Implement a method called train(…)
  - test_train calls it to train the neural network on the data set
  - To do this training, you will need to perform backpropagation and calculate deltas
- We’ve provided method headers for train(), backpropagate(), and calcDeltas()
  - If you want to use them, you’ll need to write the method bodies
  - But you don’t have to use them if you want to implement your network differently
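To make the moving parts concrete, here is one possible shape for this training step: plain gradient descent on a mean-squared error, with deltas computed layer by layer. The name calcDeltas follows the provided header, but the signatures and bodies below are assumptions, not the assignment’s required interface:

```python
import numpy as np

def activation(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigderiv(z):
    s = activation(z)
    return s * (1.0 - s)

def calcDeltas(weights, pre_activations, error):
    # Output-layer delta first, then propagate it backwards through
    # each earlier layer via the transposed weights and sigderiv
    deltas = [error * sigderiv(pre_activations[-1])]
    for l in range(len(pre_activations) - 2, -1, -1):
        deltas.append((weights[l + 1].T @ deltas[-1]) * sigderiv(pre_activations[l]))
    return deltas[::-1]

def train(weights, biases, x, y, lr=0.5, epochs=1000):
    for _ in range(epochs):
        # Forward pass, keeping z and a for every layer
        activations, pre_activations = [x], []
        a = x
        for W, b in zip(weights, biases):
            z = W @ a + b
            a = activation(z)
            pre_activations.append(z)
            activations.append(a)
        # Backward pass: deltas, then a gradient-descent update
        deltas = calcDeltas(weights, pre_activations, a - y)
        for l in range(len(weights)):
            weights[l] -= lr * (deltas[l] @ activations[l].T)
            biases[l] -= lr * deltas[l].sum(axis=1, keepdims=True)
    return weights, biases
```

As a sanity check, training a tiny randomly initialized network on a single example should drive its output toward the target value.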

- The only things that must stay the same for testing:
  - You **MUST** keep the **test_train** method and **predict** as provided
  - test_train must return an instance of your neural network that can be used to call predict(a)