Deep Learning MCQ, Part_04

Q31. "The momentum-based gradient descent algorithm and Nesterov accelerated gradient descent are faster than the stochastic gradient descent algorithm."

(A)True
(B)False

Correct Answer: B


Q32. Consider the following statement: "It takes less time to navigate regions having a gentle slope." The above statement is true in the case of

I.Gradient descent algorithm
II.Momentum-based gradient descent algorithm

(A)I
(B)II
(C)II & I

Correct Answer: B
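
For reference, a minimal sketch (not from the original quiz) of the momentum update on an assumed 1-D quadratic loss; the accumulated velocity is what lets it cross gentle-slope regions faster than plain gradient descent. All names and hyperparameter values here are hypothetical.

```python
# Minimal sketch: momentum-based gradient descent on a 1-D quadratic loss
# (w - 3)^2. Learning rate eta and momentum gamma are assumed values.
def grad(w):
    return 2 * (w - 3.0)            # gradient of the loss (w - 3)^2

w, v = 0.0, 0.0                     # parameter and accumulated velocity
eta, gamma = 0.1, 0.9               # hypothetical hyperparameters

for step in range(20):
    v = gamma * v + eta * grad(w)   # velocity accumulates past gradients
    w = w - v                       # on gentle slopes the accumulated velocity
                                    # keeps the step size large, so progress stays fast
print(round(w, 3))
```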


Q33. Identify the technique that is used to achieve a relatively better learning rate by updating w using a bunch of different values of η.

(A)Bias Correction
(B)Line Search
(C)Stochastic
(D)All the above

Correct Answer: B
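
A minimal sketch of line search, assuming a toy 1-D loss and a hypothetical candidate set of η values: at every update, the η that gives the lowest loss is kept.

```python
# Minimal sketch: line search over a candidate set of learning rates.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2 * (w - 3.0)

w = 0.0
etas = [0.01, 0.1, 0.5, 1.0]        # hypothetical candidate learning rates

for step in range(10):
    g = grad(w)
    # evaluate the update for every candidate eta and keep the best one
    w = min((w - eta * g for eta in etas), key=loss)
print(round(w, 3))
```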


Q34. "There is no guarantee that the loss decreases at each step in stochastic gradient descent."

(A)True
(B)False

Correct Answer: A
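
A minimal sketch illustrating why the answer is True, using assumed toy data: SGD updates on one randomly chosen example at a time, so an individual step can move against the full loss.

```python
# Minimal sketch: stochastic gradient descent on a toy 1-parameter model y = w*x.
# Each update uses a single random example, so the full loss may rise on some steps.
import random

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]     # hypothetical (x, y) pairs, y ~ 2x

def full_loss(w):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, eta = 0.0, 0.05
for step in range(30):
    x, y = random.choice(data)                  # single-sample (stochastic) gradient
    w -= eta * 2 * (w * x - y) * x              # may move against the full loss
print(round(w, 2), round(full_loss(w), 3))
```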


Q35. Identify the advantages of Nesterov accelerated gradient descent.

I.Corrects its course more quickly than momentum-based gradient descent
II.Oscillations are smaller
III.Chances of escaping minima valley are also smaller

(A)I
(B)II only
(C)II and III
(D)I, II, and III

Correct Answer: D
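
A minimal sketch of Nesterov accelerated gradient on the same assumed 1-D quadratic: the gradient is evaluated at the look-ahead point w - gamma*v, which is why it corrects its course sooner and oscillates less than plain momentum. Hyperparameter values are hypothetical.

```python
# Minimal sketch: Nesterov accelerated gradient descent on the loss (w - 3)^2.
def grad(w):
    return 2 * (w - 3.0)

w, v = 0.0, 0.0
eta, gamma = 0.1, 0.9               # hypothetical hyperparameters

for step in range(20):
    w_lookahead = w - gamma * v     # peek ahead along the accumulated velocity
    v = gamma * v + eta * grad(w_lookahead)   # gradient taken at the look-ahead point
    w = w - v
print(round(w, 3))
```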


Q36. Pick out the method for annealing the learning rate that has only the number of epochs as its hyperparameter.

(A)Step decay
(B)Exponential Decay
(C)1/t Decay

Correct Answer: A
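
For comparison, a minimal sketch of the three annealing schedules with assumed (hypothetical) constants: step decay is driven by an epoch-count setting, while exponential and 1/t decay introduce an extra decay constant k.

```python
# Minimal sketch: common learning-rate annealing schedules (forms assumed).
import math

eta0 = 0.1                                       # hypothetical initial learning rate

def step_decay(epoch, drop=0.5, epochs_per_drop=10):
    # halve the rate every `epochs_per_drop` epochs (epoch count is the knob)
    return eta0 * drop ** (epoch // epochs_per_drop)

def exponential_decay(epoch, k=0.05):
    return eta0 * math.exp(-k * epoch)           # extra decay constant k

def one_over_t_decay(epoch, k=0.05):
    return eta0 / (1 + k * epoch)                # extra decay constant k

for epoch in (0, 10, 20):
    print(epoch, round(step_decay(epoch), 4),
          round(exponential_decay(epoch), 4),
          round(one_over_t_decay(epoch), 4))
```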


Q37. Adagrad gets stuck when it is close to convergence. How does RMSProp overcome this problem?

(A)More Aggressive on decay
(B)Less Aggressive on decay
(C)No decay

Correct Answer: B
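
A minimal sketch contrasting the two update rules on an assumed 1-D quadratic (hyperparameters hypothetical): Adagrad divides by the square root of a growing sum of squared gradients, while RMSProp divides by an exponentially decaying average, so its effective learning rate shrinks less aggressively.

```python
# Minimal sketch: Adagrad vs RMSProp on the loss (w - 3)^2.
def grad(w):
    return 2 * (w - 3.0)

eta, eps, beta = 0.5, 1e-8, 0.9                   # hypothetical hyperparameters

w_ada, v_ada = 0.0, 0.0
w_rms, v_rms = 0.0, 0.0
for step in range(100):
    g_ada, g_rms = grad(w_ada), grad(w_rms)
    v_ada += g_ada ** 2                           # Adagrad: sum grows without bound
    v_rms = beta * v_rms + (1 - beta) * g_rms ** 2  # RMSProp: decaying average
    w_ada -= eta * g_ada / (v_ada ** 0.5 + eps)   # effective rate keeps shrinking
    w_rms -= eta * g_rms / (v_rms ** 0.5 + eps)   # effective rate stays usable
print(round(w_ada, 3), round(w_rms, 3))
```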


Q38. Which of the following gradient descent algorithms suffers from more oscillations?

(A)Momentum based gradient descent
(B)Nesterov accelerated gradient descent
(C)Vanilla gradient descent
(D)None of the above

Correct Answer: A


Q39. In a neural network, knowing the weight and bias of each neuron is the most important step. If you can somehow get the correct value of weight and bias for each neuron, you can approximate any function. What would be the best way to approach this?

(A)Assign random values and pray to God they are correct
(B)Search every possible combination of weights and biases till you get the best value
(C)Iteratively check that after assigning a value how far you are from the best values, and slightly change the assigned values to make them better
(D)None of these

Correct Answer: C


Q40. What are the steps for using a gradient descent algorithm?

1. Calculate error between the actual value and the predicted value
2. Reiterate until you find the best weights of the network
3. Pass an input through the network and get values from the output layer
4. Initialize random weight and bias
5. Go to each neuron which contributes to the error and change its respective values to reduce the error

(A)1, 2, 3, 4, 5
(B)5, 4, 3, 2, 1
(C)3, 2, 1, 5, 4
(D)4, 3, 1, 5, 2

Correct Answer: D
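
A minimal sketch of the loop in the order given by answer D, using an assumed one-parameter toy model: initialize (4), forward pass (3), compute the error (1), adjust the weights to reduce it (5), and repeat (2).

```python
# Minimal sketch: gradient descent training loop in the order 4, 3, 1, 5, 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # hypothetical (x, y) pairs, y = 2x

w, b, eta = 0.5, 0.0, 0.01                    # step 4: initialize weight and bias

for epoch in range(200):                      # step 2: reiterate
    for x, y in data:
        y_pred = w * x + b                    # step 3: pass input through the network
        error = y_pred - y                    # step 1: error vs. the actual value
        w -= eta * 2 * error * x              # step 5: change each parameter
        b -= eta * 2 * error                  #          to reduce the error
print(round(w, 2), round(b, 2))
```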
