Use Python:
Problem Set 01:
The network for this task has two input nodes, one hidden layer consisting of two nodes, and one output node. It uses a ReLU activation
function. For the hidden layer, the weights for the first hidden node (from the input nodes) are (2.3, -0.64, 2). The last number is the
weight for the bias term. The weights for the second hidden node are (-3, -2, -1).
For the output layer, the weights are (5, 3, -34).
Calculate the output for each of the following inputs, where the bias term is the last value of each vector. Show your work; a worked pass for the first input follows the list.
1. (-4, -2, 1)
2. (0, -2, 1)
3. (4, -2, 1)
4. (0, -3, 1)
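As a worked example, the forward pass for input 1, (-4, -2, 1), goes as follows (matching the program below, ReLU is applied at the output node as well):

Hidden node 1: 2.3*(-4) + (-0.64)*(-2) + 2*1 = -9.2 + 1.28 + 2 = -5.92 → ReLU → 0
Hidden node 2: (-3)*(-4) + (-2)*(-2) + (-1)*1 = 12 + 4 - 1 = 15 → ReLU → 15
Output node:   5*0 + 3*15 + (-34)*1 = 45 - 34 = 11 → ReLU → 11

So the network output for input 1 is 11. The program below reproduces this calculation for all four inputs.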
Python Program:
import numpy as np
#ReLU activation: element-wise max(0, x)
def ReLu(layer):
  return np.maximum(layer, 0)
def model_output(input_layer):
  #hidden layer weights: one row per hidden node, last entry is the bias weight
  hidden_layer=np.array([[2.3,-0.64,2],[-3,-2,-1]])
  #output layer weights
  output_layer=np.array([[5,3,-34]])
  #hidden layer: weighted sums, then ReLU
  z1=np.dot(input_layer,np.transpose(hidden_layer))
  a1=ReLu(z1)
  #append a column of ones so the output layer's bias weight applies
  bias=np.ones(a1.shape[0]).reshape(a1.shape[0],1)
  a1=np.append(a1,bias,axis=1)
  #output layer: weighted sum, with ReLU applied at the output node as well
  z2=np.dot(a1,np.transpose(output_layer))
  a2=ReLu(z2)
  return a2
#inputs as a numpy array, one row per case (bias term last)
input_layer=np.array([[-4,-2,1],[0,-2,1],[4,-2,1],[0,-3,1]])
print(model_output(input_layer))
Output:
[[11. ]
 [ 0. ]
 [28.4]
 [ 0.6]]
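So the outputs for inputs 1 through 4 are 11, 0, 28.4, and 0.6. For reference, the pre-ReLU output for input 2 is 5*3.28 + 3*3 - 34 = -8.6, which the output-node ReLU clips to 0; input 3 gives 5*12.48 + 3*0 - 34 = 28.4, and input 4 gives 5*3.92 + 3*5 - 34 = 0.6.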