# Deep Learning MCQ, Part_04

The answers given with the questions are not verified.

Q31. "Momentum-based gradient descent and Nesterov accelerated gradient descent are faster than the stochastic gradient descent algorithm."

(A)True
(B)False
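As a toy illustration of the update rules Q31 compares, the sketch below runs plain gradient descent, momentum, and Nesterov accelerated gradient (NAG) on the deterministic objective f(w) = w². All hyperparameter values here are hypothetical, chosen only to make the three update rules visible side by side.

```python
# Minimal sketch: three optimizers on f(w) = w^2, whose gradient is 2w.
# Hyperparameters (eta, gamma, steps) are illustrative, not from the quiz.

def grad(w):
    return 2.0 * w

eta, gamma, steps = 0.1, 0.9, 50
w_gd = w_mom = w_nag = 5.0
v_mom = v_nag = 0.0

for _ in range(steps):
    # Vanilla gradient descent: step directly down the gradient
    w_gd -= eta * grad(w_gd)

    # Momentum: accumulate an exponentially decaying velocity
    v_mom = gamma * v_mom + eta * grad(w_mom)
    w_mom -= v_mom

    # NAG: evaluate the gradient at the "look-ahead" point w - gamma*v,
    # which lets it correct its course before overshooting
    v_nag = gamma * v_nag + eta * grad(w_nag - gamma * v_nag)
    w_nag -= v_nag
```

The look-ahead gradient is the only difference between the momentum and NAG updates, yet it is what damps NAG's oscillations around the minimum.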

Q32. Consider the following statement: "It takes less time to navigate regions having a gentle slope." The above statement is true in the case of

(A)I
(B)II
(C)I & II

Q33. Identify the technique that is used to achieve a relatively better learning rate by updating w using a bunch of different values of η.

(A)Bias Correction
(B)Line Search
(C)Stochastic
(D)All the above
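One of the options above, line search, can be sketched as trying a small grid of candidate η values at each step and keeping whichever update most reduces the loss. The toy objective and the η grid below are hypothetical.

```python
# Sketch of a per-step line search on f(w) = (w - 3)^2.
# The candidate learning rates (etas) are illustrative values.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

def line_search_step(w, etas=(0.01, 0.05, 0.1, 0.5, 1.0)):
    # Compute the update for every candidate eta, keep the best one
    candidates = [w - eta * grad(w) for eta in etas]
    return min(candidates, key=loss)

w = 0.0
for _ in range(10):
    w = line_search_step(w)
```

The extra gradient-free loss evaluations per step are the price paid for not committing to a single fixed η.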

Q34. "There is no guarantee that the loss decreases at each step in stochastic gradient descent."
(A)True
(B)False
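The behavior Q34 describes can be seen in a toy run: fitting a one-parameter least-squares model with per-sample (stochastic) updates, the full-data loss fluctuates upward on individual steps even though the long-run trend is downward. Data and hyperparameters below are hypothetical.

```python
import numpy as np

# Toy stochastic gradient descent on a 1-parameter linear model y = w*x.
# After each single-sample update, record the loss over the FULL dataset.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + rng.normal(0, 0.5, 100)   # true slope 3, noisy targets

def full_loss(w):
    return np.mean((w * x - y) ** 2)

w, eta = 0.0, 0.1
losses = [full_loss(w)]
for i in rng.permutation(100):
    # Gradient of the squared error on ONE sample only
    w -= eta * 2 * (w * x[i] - y[i]) * x[i]
    losses.append(full_loss(w))

# Count steps where the full-data loss went UP
increases = sum(b > a for a, b in zip(losses, losses[1:]))
```

Because each step follows a noisy single-sample gradient, some steps move against the full-data gradient, so `increases` comes out positive even though the final loss is far below the initial one.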

Q35. Consider the following statements about Nesterov accelerated gradient descent:

I. Corrects its course quicker than momentum-based gradient descent
II. Oscillations are smaller
III. Chances of escaping minima valley are also smaller

(A)I
(B)II only
(C)II and III
(D)I, II, and III

Q36. Pick the method for annealing the learning rate that has only the number of epochs as its hyperparameter.

(A)Step decay
(B)Exponential Decay
(C)1/t Decay
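The three annealing schedules named above can be sketched as follows. The base rate η₀, the epoch interval, and the decay constants are hypothetical values; note that exponential and 1/t decay each need a decay-rate constant k in addition to the epoch count.

```python
import math

# Three common learning-rate annealing schedules (illustrative constants).

eta0 = 0.1  # initial learning rate

def step_decay(epoch, drop_every=10, factor=0.5):
    # Cut the rate by a fixed factor every `drop_every` epochs
    return eta0 * factor ** (epoch // drop_every)

def exponential_decay(epoch, k=0.05):
    # Smoothly shrink the rate as eta0 * exp(-k * epoch)
    return eta0 * math.exp(-k * epoch)

def one_over_t_decay(epoch, k=0.05):
    # Shrink the rate as eta0 / (1 + k * epoch)
    return eta0 / (1.0 + k * epoch)
```

Step decay is the only one whose tuning is naturally phrased in whole epochs (e.g. "halve every 10 epochs"); the other two are tuned through the continuous constant k.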

Q37. Adagrad gets stuck when it is close to convergence. How does RMSProp overcome this problem?

(A)More Aggressive on decay
(B)Less Aggressive on decay
(C)No decay
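The difference Q37 points at is in how the two methods accumulate squared gradients: Adagrad keeps an ever-growing sum, so its effective step size keeps shrinking and can stall before convergence, while RMSProp replaces the sum with an exponentially decaying average. A toy comparison on f(w) = w², with hypothetical hyperparameters:

```python
import numpy as np

# Adagrad vs. RMSProp on f(w) = w^2 (gradient 2w). Values are illustrative.

def grad(w):
    return 2.0 * w

eta, eps, beta = 0.1, 1e-8, 0.9
w_ada = w_rms = 5.0
s_ada = s_rms = 0.0

for _ in range(200):
    g = grad(w_ada)
    s_ada += g * g                             # Adagrad: ever-growing sum
    w_ada -= eta * g / (np.sqrt(s_ada) + eps)  # step size only shrinks

    g = grad(w_rms)
    s_rms = beta * s_rms + (1 - beta) * g * g  # RMSProp: decaying average
    w_rms -= eta * g / (np.sqrt(s_rms) + eps)  # step size can recover
```

After the same number of steps, RMSProp ends up much closer to the minimum because its denominator forgets old gradients instead of accumulating them forever.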

Q38. Which of the following gradient descent algorithms suffers from more oscillations?

(D)None of the above

Q39. In a neural network, knowing the weight and bias of each neuron is the most important step. If you can somehow get the correct value of weight and bias for each neuron, you can approximate any function. What would be the best way to approach this?

(A)Assign random values and pray to God they are correct
(C)Iteratively check, after assigning values, how far you are from the best values, and slightly change the assigned values to make them better
(D)None of these

Q40. What are the steps for using a gradient descent algorithm?

1. Calculate the error between the actual value and the predicted value
2. Reiterate until you find the best weights for the network
3. Pass an input through the network and get values from the output layer
4. Initialize random weights and biases
5. Go to each neuron that contributes to the error and change its respective values to reduce the error

(A)1, 2, 3, 4, 5
(B)5, 4, 3, 2, 1
(C)3, 2, 1, 5, 4
(D)4, 3, 1, 5, 2
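The steps listed in Q40 map onto a minimal training loop; the sketch below applies them to a single linear neuron y = w·x + b, with the step numbers marked in comments. Toy data and hyperparameters are hypothetical.

```python
import numpy as np

# Gradient descent for one linear neuron, labeled with Q40's step numbers.

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 50)
y = 2.0 * x + 1.0                  # target function to approximate

w, b = rng.normal(), rng.normal()  # step 4: initialize random weight and bias
eta = 0.1

for _ in range(500):               # step 2: reiterate until weights are good
    pred = w * x + b               # step 3: pass inputs through the network
    err = pred - y                 # step 1: error of predicted vs. actual
    w -= eta * np.mean(err * x)    # step 5: adjust each parameter in
    b -= eta * np.mean(err)        #         proportion to its contribution

mse = np.mean((w * x + b - y) ** 2)
```

The loop executes the steps in the order 4, 3, 1, 5, with 2 being the loop itself, and recovers the target slope and intercept.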