neural_network.two_hidden_layers_neural_network¶
- References:
http://neuralnetworksanddeeplearning.com/chap2.html (Backpropagation)
https://en.wikipedia.org/wiki/Sigmoid_function (Sigmoid activation function)
https://en.wikipedia.org/wiki/Feedforward_neural_network (Feedforward)
Classes¶
- TwoHiddenLayerNeuralNetwork
Functions¶
- example(): Example for "how to use the neural network class and use the respective methods for the desired output".
- sigmoid(value): Applies sigmoid activation function.
- sigmoid_derivative(value): Provides the derivative value of the sigmoid function.
Module Contents¶
- class neural_network.two_hidden_layers_neural_network.TwoHiddenLayerNeuralNetwork(input_array: numpy.ndarray, output_array: numpy.ndarray)¶
- back_propagation() → None¶
Function for fine-tuning the weights of the neural net based on the error rate obtained in the previous epoch (i.e. iteration). The weights are updated using the derivative of the sigmoid activation function.
>>> input_val = np.array(([0, 0, 0], [0, 0, 0], [0, 0, 0]), dtype=float)
>>> output_val = np.array(([0], [0], [0]), dtype=float)
>>> nn = TwoHiddenLayerNeuralNetwork(input_val, output_val)
>>> res = nn.feedforward()
>>> nn.back_propagation()
>>> updated_weights = nn.second_hidden_layer_and_output_layer_weights
>>> bool((res == updated_weights).all())
False
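The update rule itself is not spelled out on this page; the sketch below shows one standard way to backpropagate the error through two hidden layers using sigmoid_derivative, with stand-in weight matrices rather than the class attributes listed further down:

    import numpy as np

    def sigmoid(value: np.ndarray) -> np.ndarray:
        return 1 / (1 + np.exp(-value))

    def sigmoid_derivative(value: np.ndarray) -> np.ndarray:
        # Assumes `value` already holds sigmoid activations.
        return value * (1 - value)

    # Stand-ins for the class attributes (shapes chosen for illustration).
    rng = np.random.default_rng(0)
    inputs = rng.random((3, 3))
    targets = rng.random((3, 1))
    w1 = rng.random((3, 4))  # input layer -> first hidden layer
    w2 = rng.random((4, 4))  # first hidden layer -> second hidden layer
    w3 = rng.random((4, 1))  # second hidden layer -> output layer

    # Forward pass: activations are needed before the error can be propagated back.
    h1 = sigmoid(inputs @ w1)
    h2 = sigmoid(h1 @ w2)
    out = sigmoid(h2 @ w3)

    # Backward pass: propagate the output error layer by layer and
    # nudge each weight matrix by the corresponding gradient term.
    error = targets - out
    delta3 = error * sigmoid_derivative(out)
    delta2 = (delta3 @ w3.T) * sigmoid_derivative(h2)
    delta1 = (delta2 @ w2.T) * sigmoid_derivative(h1)
    w3 += h2.T @ delta3
    w2 += h1.T @ delta2
    w1 += inputs.T @ delta1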
- feedforward() → numpy.ndarray¶
The information moves in only one direction, i.e. forward, from the input nodes, through the two hidden layers, and to the output nodes. There are no cycles or loops in the network.
- Returns layer_between_second_hidden_layer_and_output (i.e. the last layer of the neural network).
>>> input_val = np.array(([0, 0, 0], [0, 0, 0], [0, 0, 0]), dtype=float)
>>> output_val = np.array(([0], [0], [0]), dtype=float)
>>> nn = TwoHiddenLayerNeuralNetwork(input_val, output_val)
>>> res = nn.feedforward()
>>> array_sum = np.sum(res)
>>> bool(np.isnan(array_sum))
False
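As a rough sketch of such a forward pass (stand-in weight matrices, not the class's actual attributes), each layer is simply the sigmoid of the previous layer's output multiplied by the connecting weights:

    import numpy as np

    def sigmoid(value: np.ndarray) -> np.ndarray:
        return 1 / (1 + np.exp(-value))

    rng = np.random.default_rng(0)
    inputs = rng.random((3, 3))  # one row per sample
    w1 = rng.random((3, 4))      # input layer -> first hidden layer
    w2 = rng.random((4, 4))      # first hidden layer -> second hidden layer
    w3 = rng.random((4, 1))      # second hidden layer -> output layer

    first_hidden = sigmoid(inputs @ w1)
    second_hidden = sigmoid(first_hidden @ w2)
    output = sigmoid(second_hidden @ w3)  # the value feedforward() returns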
- predict(input_arr: numpy.ndarray) → int¶
Predicts the output for the given input values using the trained neural network.
The output value given by the model ranges between 0 and 1. The predict function returns 1 if the model output is greater than the threshold value and 0 otherwise, since the real output values are binary.
>>> input_val = np.array(([0, 0, 0], [0, 1, 0], [0, 0, 1]), dtype=float)
>>> output_val = np.array(([0], [1], [1]), dtype=float)
>>> nn = TwoHiddenLayerNeuralNetwork(input_val, output_val)
>>> nn.train(output_val, 1000, False)
>>> nn.predict([0, 1, 0]) in (0, 1)
True
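The thresholding step described above amounts to a simple comparison; a minimal sketch (the exact cutoff used by predict() is an implementation detail of the module and is assumed here):

    def to_class_label(model_output: float, threshold: float = 0.5) -> int:
        # The network's raw output lies between 0 and 1; compare it against
        # a cutoff to obtain a binary class label.
        return 1 if model_output > threshold else 0

    print(to_class_label(0.83))  # 1
    print(to_class_label(0.12))  # 0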
- train(output: numpy.ndarray, iterations: int, give_loss: bool) → None¶
Performs the feedforward and backpropagation process for the given number of iterations. Every iteration updates the weights of the neural network.
- output : real output values, required for calculating loss.
- iterations : number of times the weights are to be updated.
- give_loss : boolean value; if True, prints the loss for each iteration, if False, nothing is printed.
>>> input_val = np.array(([0, 0, 0], [0, 1, 0], [0, 0, 1]), dtype=float)
>>> output_val = np.array(([0], [1], [1]), dtype=float)
>>> nn = TwoHiddenLayerNeuralNetwork(input_val, output_val)
>>> first_iteration_weights = nn.feedforward()
>>> nn.back_propagation()
>>> updated_weights = nn.second_hidden_layer_and_output_layer_weights
>>> bool((first_iteration_weights == updated_weights).all())
False
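In terms of the methods documented above, one training iteration is roughly a feedforward pass followed by back_propagation, with an optional loss printout. A sketch of an equivalent loop (the loss formula and printing frequency are assumptions, and the import assumes the module is on the path):

    import numpy as np
    from neural_network.two_hidden_layers_neural_network import TwoHiddenLayerNeuralNetwork

    input_val = np.array(([0, 0, 0], [0, 1, 0], [0, 0, 1]), dtype=float)
    output_val = np.array(([0], [1], [1]), dtype=float)
    nn = TwoHiddenLayerNeuralNetwork(input_val, output_val)

    iterations, give_loss = 1000, True
    for iteration in range(1, iterations + 1):
        predicted = nn.feedforward()   # forward pass
        nn.back_propagation()          # weight update
        if give_loss and iteration % 200 == 0:
            loss = np.mean(np.square(output_val - predicted))
            print(f"Iteration {iteration} Loss: {loss:.4f}")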
- first_hidden_layer_and_second_hidden_layer_weights¶
- input_array¶
- input_layer_and_first_hidden_layer_weights¶
- output_array¶
- predicted_output¶
- second_hidden_layer_and_output_layer_weights¶
- neural_network.two_hidden_layers_neural_network.example() → int¶
Example for "how to use the neural network class and use the respective methods for the desired output". Calls the TwoHiddenLayerNeuralNetwork class and provides fixed input and output values to the model. The model is trained for a fixed number of iterations and then the predict method is called. In this example the output is divided into two classes, i.e. binary classification; the two classes are represented by '0' and '1'.
>>> example() in (0, 1)
True
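A sketch of equivalent usage, with illustrative data and iteration count (the actual values inside example() may differ):

    import numpy as np
    from neural_network.two_hidden_layers_neural_network import TwoHiddenLayerNeuralNetwork

    def example_sketch() -> int:
        # Fixed binary-classification data; the exact values are illustrative.
        test_input = np.array(([0, 0, 0], [0, 1, 0], [0, 0, 1]), dtype=np.float64)
        test_output = np.array(([0], [1], [1]), dtype=np.float64)

        neural_network = TwoHiddenLayerNeuralNetwork(test_input, test_output)
        neural_network.train(output=test_output, iterations=1000, give_loss=False)
        return neural_network.predict(np.array([0, 1, 0], dtype=np.float64))

    print(example_sketch())  # 0 or 1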
- neural_network.two_hidden_layers_neural_network.sigmoid(value: numpy.ndarray) → numpy.ndarray¶
Applies sigmoid activation function.
Returns the normalized values.
>>> sigmoid(np.array(([1, 0, 2], [1, 0, 0]), dtype=np.float64))
array([[0.73105858, 0.5       , 0.88079708],
       [0.73105858, 0.5       , 0.5       ]])
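The values in the doctest are consistent with the standard logistic function 1 / (1 + e^(-x)); a minimal sketch:

    import numpy as np

    def sigmoid(value: np.ndarray) -> np.ndarray:
        # Squashes any real input into the open interval (0, 1).
        return 1 / (1 + np.exp(-value))

    print(sigmoid(np.array([1.0, 0.0, 2.0])))  # approx. [0.7311 0.5 0.8808]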
- neural_network.two_hidden_layers_neural_network.sigmoid_derivative(value: numpy.ndarray) → numpy.ndarray¶
Provides the derivative value of the sigmoid function.
Returns the derivative of the sigmoid value.
>>> sigmoid_derivative(np.array(([1, 0, 2], [1, 0, 0]), dtype=np.float64))
array([[ 0.,  0., -2.],
       [ 0.,  0.,  0.]])
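The expected output above implies the derivative is computed as value * (1 - value), i.e. the argument is assumed to already be a sigmoid activation rather than a raw input; a minimal sketch:

    import numpy as np

    def sigmoid_derivative(value: np.ndarray) -> np.ndarray:
        # For s = sigmoid(x), ds/dx = s * (1 - s); here `value` plays the role of s.
        return value * (1 - value)

    print(sigmoid_derivative(np.array([1.0, 0.0, 2.0])))  # [ 0.  0. -2.]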