In: Operations Management
Explain how the ANN model works. (use the steps listed below)
Step 1: Determine the Topology and Activation Function
Step 2: Initiation
Step 3: Calculating Error
Step 4: Weight Adjustment
ANN stands for Artificial Neural Network. It is a computational model based on the structure and functions of biological neural networks. The idea behind an ANN is to mimic the working of the human brain by making the right connections.
The topology diagram shows this in detail: each arrow represents a connection between two neurons and indicates the pathway along which information flows. Each connection has a weight, a number that controls the signal between the two neurons.
If the output generated by the network is good, we do not need to adjust the weights. But if the output is poor, the system alters the weights to improve subsequent results.
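The four steps above can be sketched as a minimal training loop. This is an illustrative single-neuron (perceptron) example, not a full ANN: the topology, the AND training data, and the learning rate are all assumptions chosen for clarity.

```python
# Step 1: topology and activation - 2 inputs -> 1 output, step activation.
def step(z):
    return 1 if z >= 0 else 0

# Step 2: initiation - starting weights and bias (zeros here for simplicity).
weights = [0, 0]
bias = 0
lr = 1  # learning rate; integer-valued here so the arithmetic stays exact

# Illustrative training data: logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for epoch in range(20):
    for x, target in data:
        # Forward pass: weighted sum of inputs, then activation.
        z = sum(w * xi for w, xi in zip(weights, x)) + bias
        output = step(z)
        # Step 3: calculating error between target and output.
        error = target - output
        # Step 4: weight adjustment - only when the output was wrong.
        if error != 0:
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error

predictions = [step(sum(w * xi for w, xi in zip(weights, x)) + bias)
               for x, _ in data]
print(predictions)  # prints [0, 0, 0, 1] once the neuron has learned AND
```

Note how the weights are left alone when the error is zero and nudged in proportion to the error otherwise, which is exactly the "adjust only on poor output" idea described above.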
The total error is just information for you (or for some heuristic algorithms) where you need to compare the current iteration's error with the error from the previous epoch, so you can compute the error however you wish. But before doing so, you need to make sure you calculate it the right way. For example, suppose you use the error function
E=Target−output
and, for example,
target = [1, 0, 1]
output = [0, 1, 1]
Then the error you get is:
E = [1, 0, 1] − [0, 1, 1] = [1, −1, 0]
And if you try to calculate the mean, you get this result:
mean = (1 + (−1) + 0) / 3 = 0
So your error is 0, which is wrong (as a solution, you can take the absolute value of each error and then the mean). But in a real algorithm you will probably use cross-entropy or squared error, which do not have this problem.
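The cancellation pitfall can be reproduced directly. A small sketch using the target and output values from the example above:

```python
target = [1, 0, 1]
output = [0, 1, 1]

# Signed errors cancel each other out, so the plain mean is misleading.
errors = [t - o for t, o in zip(target, output)]  # [1, -1, 0]
mean_error = sum(errors) / len(errors)            # 0.0 - looks perfect, but isn't

# Mean absolute error and mean squared error avoid the cancellation.
mae = sum(abs(e) for e in errors) / len(errors)
mse = sum(e * e for e in errors) / len(errors)

print(mean_error, mae, mse)  # 0.0, then roughly 0.667 for both MAE and MSE
```

Here MAE and MSE happen to coincide because every individual error is −1, 0, or 1; in general they differ, with MSE penalizing large errors more heavily.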
Weights in an ANN are the most important factor in converting an input into an effect on the output. This is similar to the slope in linear regression, where each input is multiplied by a weight and the results are added up to form the output. Weights are numerical parameters which determine how strongly each neuron affects the others.
For a typical neuron, if the inputs are x1, x2, and x3, then the synaptic weights to be applied to them are denoted as w1, w2, and w3.
The output is
output = Σi wi xi = w1 x1 + w2 x2 + w3 x3
where i runs from 1 to the number of inputs.
Simply, this is a matrix multiplication to arrive at the weighted sum.
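This weighted sum can be written as a dot product. A small sketch with illustrative input and weight values (the numbers are assumptions, not from the text):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])    # inputs x1, x2, x3 (illustrative values)
w = np.array([0.5, -1.0, 0.25])  # synaptic weights w1, w2, w3

# output = sum over i of w_i * x_i, i.e. a vector (matrix) multiplication.
weighted_sum = np.dot(w, x)
print(weighted_sum)  # 1.0*0.5 + 2.0*(-1.0) + 3.0*0.25 = -0.75
```

For a whole layer of neurons, `w` becomes a matrix with one row of weights per neuron, and the same `np.dot` call computes every neuron's weighted sum at once.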