In: Computer Science
Machine learning / neural networks question:
Which one statement is true about neural networks?
(Select the single best answer, and explain why each option is
true or false.)
(A) We always train neural networks by optimising a convex cost
function.
(B) Neural networks are more robust to outliers than support vector
machines.
(C) Neural networks always output values between 0 and 1.
(D) A neural network with a large number of parameters often can
better use big training data than support vector
machines.
(NO GUESS WORK. EXPLAIN TOO)
(A) We always train neural networks by optimising a convex cost function.
False
Explanation: The cost function of a neural network with hidden layers is in general neither convex nor concave: the Hessian (the matrix of all second partial derivatives) is neither positive semidefinite nor negative semidefinite over the whole weight space. Hidden-unit symmetry alone guarantees this: permuting the hidden units of a trained network leaves the loss unchanged, so every minimum exists in many symmetric copies, which a convex function cannot have. Whether the cost happens to be convex depends on the details of the network (a network with no hidden layer and a convex loss is convex), but "always" makes the statement false.
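The permutation-symmetry argument can be checked numerically. The sketch below (my own illustration, not from the original answer) builds a tiny 1-2-1 tanh network, evaluates the squared-error loss at two weight settings that differ only by swapping the hidden units, and then at their midpoint; a convex function would have to satisfy f(midpoint) ≤ (f(a) + f(b)) / 2, and here it does not:

```python
import numpy as np

def loss(w, v, x=1.0, y=2.0):
    """Squared error of a 1-2-1 tanh network on a single point (x, y)."""
    hidden = np.tanh(w * x)          # two hidden activations
    return (y - v @ hidden) ** 2     # linear output unit

# Two weight settings that differ only by swapping the two hidden units:
w_a, v_a = np.array([1.0, -1.0]), np.array([1.0, -1.0])
w_b, v_b = np.array([-1.0, 1.0]), np.array([-1.0, 1.0])

l_a = loss(w_a, v_a)
l_b = loss(w_b, v_b)
l_mid = loss((w_a + w_b) / 2, (v_a + v_b) / 2)   # midpoint in weight space

# The two symmetric points have identical loss, but the midpoint
# (all-zero weights, output 0) is strictly worse -- convexity would
# require f(mid) <= (f(a) + f(b)) / 2, so the loss is not convex.
assert np.isclose(l_a, l_b)
assert l_mid > (l_a + l_b) / 2
```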
(B) Neural networks are more robust to outliers than support vector machines.
False
Explanation: SVMs are relatively robust to outliers: the hinge loss grows only linearly with how badly a point is misclassified, and the decision boundary is determined by the support vectors alone. A neural network trained with a smooth, unbounded loss (for example squared error) penalizes an extreme point quadratically, so a single outlier can pull the fit much further. Note that "outlier" covers both incorrectly labeled examples and, more commonly, points whose features are unexpected or poorly sampled; neither case favors the neural network here.
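To make the linear-vs-quadratic growth concrete, here is a small sketch (my own illustration, using the standard hinge loss and a squared-error-style loss as stand-ins) comparing the penalty each loss assigns as a point becomes an increasingly extreme outlier (more negative margin):

```python
import numpy as np

def hinge(margin):
    # SVM hinge loss: grows only linearly with the margin violation
    return np.maximum(0.0, 1.0 - margin)

def squared(margin):
    # squared-error-style loss: grows quadratically with the violation
    return (1.0 - margin) ** 2

for m in (-1.0, -5.0, -20.0):    # increasingly extreme outliers
    print(f"margin {m:>6}: hinge {hinge(m):>6.1f}  squared {squared(m):>6.1f}")
```

At margin −20 the hinge penalty is 21 while the squared penalty is 441, so a single extreme point dominates a squared-error fit far more than an SVM's objective.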
(C) Neural networks always output values between 0 and 1.
False
Explanation: The output range is set entirely by the activation function of the output layer. A sigmoid output lies in (0, 1) and a tanh output in (−1, 1), but a linear output unit (the standard choice for regression) can produce any real number. So neural networks do not always output values between 0 and 1.
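A quick check of the ranges produced by common output activations (my own illustration, assuming NumPy) shows that only some of them confine the output to (0, 1):

```python
import numpy as np

z = np.array([-3.0, 0.0, 4.0])    # example pre-activations at the output layer

sigmoid = 1 / (1 + np.exp(-z))    # squashed into (0, 1)
tanh_out = np.tanh(z)             # squashed into (-1, 1)
linear = z                        # identity/linear unit: unbounded

print("sigmoid:", sigmoid)        # every entry strictly between 0 and 1
print("tanh:   ", tanh_out)       # entries can be negative
print("linear: ", linear)         # entries well outside [0, 1]
```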
(D) A neural network with a large number of parameters often can better use big training data than support vector machines.
True
Explanation: This is the single true statement. A large neural network can keep exploiting more training data: its memory footprint is fixed by its parameter count, it is trained with minibatch stochastic gradient descent (so the data can be streamed), and its high capacity lets it benefit from very large datasets. A kernel SVM, by contrast, works with an n × n kernel matrix, and its number of support vectors tends to grow with the dataset size, so training becomes impractical on very big data.
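The scaling difference can be sketched with a back-of-the-envelope calculation (my own illustration): the dense Gram (kernel) matrix a kernel SVM works with grows quadratically with the number of samples, while a network's memory is fixed by its parameter count regardless of dataset size:

```python
def gram_matrix_gib(n_samples, bytes_per_entry=8):
    """Memory for a dense float64 n x n kernel (Gram) matrix, in GiB."""
    return n_samples ** 2 * bytes_per_entry / 2 ** 30

# Quadratic growth: 10x the samples -> 100x the kernel-matrix memory.
for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9} samples -> {gram_matrix_gib(n):>10.1f} GiB Gram matrix")
```

At a million samples the dense kernel matrix alone would need thousands of GiB, whereas a network with, say, 10 million float32 parameters occupies about 40 MB no matter how large the training set is.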