Neural Networks – MCQ with answer


Neural Networks – 1

1. A 3-input neuron is trained to output a zero when the input is 110 and a one when the input is 111. After generalization, the output will be zero when and only when the input is?
a) 000 or 110 or 011 or 101
b) 010 or 100 or 110 or 101
c) 000 or 010 or 110 or 100
d) 100 or 111 or 101 or 001
View Answer
Answer: c
Explanation: The truth table before generalization is:
Inputs  Output
000     $
001     $
010     $
011     $
100     $
101     $
110     0
111     1
where $ represents a don't-know case: the training data says nothing about the output for those inputs. The simplest generalization consistent with the two known rows is that the output equals the rightmost input bit, so after generalization the truth table becomes:
Inputs  Output
000     0
001     1
010     0
011     1
100     0
101     1
110     0
111     1
The output is therefore zero exactly when the input is 000, 010, 100 or 110.
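The same generalization can be checked with a short sketch (not part of the original question): the rule "output equals the rightmost input bit" reproduces the table above and gives zero only for the inputs in option (c).

```python
# Enumerate all 3-bit inputs and apply the generalized rule
# "output equals the rightmost input bit".
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            output = c
            print(f"{a}{b}{c} -> {output}")
# Inputs that give 0: 000, 010, 100, 110 (option c).
```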

2. What is perceptron?
a) a single layer feed-forward neural network with pre-processing
b) an auto-associative neural network
c) a double layer auto-associative neural network
d) a neural network that contains feedback
View Answer
Answer: a
Explanation: The perceptron is a single-layer feed-forward neural network. It is not an auto-associative network because it has no feedback, and it is not a multi-layer network because the pre-processing stage is not made of neurons.

3. What is an auto-associative network?
a) a neural network that contains no loops
b) a neural network that contains feedback
c) a neural network that has only one loop
d) a single layer feed-forward neural network with pre-processing
View Answer
Answer: b
Explanation: An auto-associative network is a neural network that contains feedback. The number of feedback paths (loops) does not have to be one.
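One classic example of an auto-associative network built entirely from feedback connections (not named in the question, used here purely for illustration) is a Hopfield-style network: it stores a pattern and recalls it by repeatedly feeding its own outputs back in as inputs. A minimal sketch with a made-up pattern:

```python
import numpy as np

# Store one bipolar (+1/-1) pattern with a Hebbian outer product,
# then recall it from a corrupted copy by iterating the feedback loop.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)          # no self-feedback

state = pattern.copy()
state[0] = -state[0]            # flip one bit to simulate a noisy input

for _ in range(5):              # feedback iterations
    state = np.sign(W @ state)  # each neuron sees the others' outputs

print(np.array_equal(state, pattern))  # True: the stored pattern is recalled
```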

4. A 4-input neuron has weights 1, 2, 3 and 4. The transfer function is linear with the constant of proportionality equal to 2. The inputs are 4, 10, 5 and 20 respectively. What will be the output?
a) 238
b) 76
c) 119
d) 123
View Answer
Answer: a
Explanation: The output is found by multiplying each weight by its respective input, summing the results, and applying the linear transfer function, i.e. multiplying the sum by the constant of proportionality 2. Therefore: Output = 2 * (1*4 + 2*10 + 3*5 + 4*20) = 2 * 119 = 238.
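The arithmetic in the explanation, written out as a small sketch:

```python
weights = [1, 2, 3, 4]
inputs = [4, 10, 5, 20]
k = 2  # constant of proportionality of the linear transfer function

weighted_sum = sum(w * x for w, x in zip(weights, inputs))  # 4 + 20 + 15 + 80 = 119
output = k * weighted_sum                                   # 2 * 119 = 238
print(output)  # 238
```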

5. Which of the following is true?
(i) On average, neural networks have higher computational rates than conventional computers.
(ii) Neural networks learn by example.
(iii) Neural networks mimic the way the human brain works.
a) All of the mentioned are true
b) (ii) and (iii) are true
c) (i), (ii) and (iii) are true
d) None of the mentioned
View Answer
Answer: a
Explanation: Neural networks achieve higher computational rates than conventional computers because much of the work is done in parallel; this advantage is lost when the network is simulated on a serial computer. The idea behind neural nets is based on the way the human brain works. Neural nets are not explicitly programmed; they learn from examples.

6. Which of the following is true for neural networks?
(i) The training time depends on the size of the network.
(ii) Neural networks can be simulated on a conventional computer.
(iii) Artificial neurons are identical in operation to biological ones.
a) All of the mentioned
b) (ii) is true
c) (i) and (ii) are true
d) None of the mentioned
View Answer
Answer: c
Explanation: The training time depends on the size of the network: the larger the network, the more neurons it contains and therefore the more possible ‘states’ there are to explore, so training takes longer. Neural networks can be simulated on a conventional computer, although the main advantage of neural networks – parallel execution – is then lost. Artificial neurons are not identical in operation to biological ones.

7. What are the advantages of neural networks over conventional computers?
(i) They have the ability to learn by example
(ii) They are more fault tolerant
(iii) They are more suited for real-time operation due to their high ‘computational’ rates
a) (i) and (ii) are true
b) (i) and (iii) are true
c) Only (i)
d) All of the mentioned
View Answer
Answer: d
Explanation: Neural networks learn by example. They are more fault tolerant because they are always able to respond and small changes in input do not normally cause a change in output. Because of their parallel architecture, high computational rates are achieved.

8. Which of the following is true? Single layer associative neural networks do not have the ability to:
(i) perform pattern recognition
(ii) find the parity of a picture
(iii) determine whether two or more shapes in a picture are connected or not
a) (ii) and (iii) are true
b) (ii) is true
c) All of the mentioned
d) None of the mentioned
View Answer
Answer: a
Explanation: Pattern recognition is what single-layer neural networks are best at, but they do not have the ability to find the parity of a picture or to determine whether two shapes are connected or not.

9. Which is true for neural networks?
a) It has a set of nodes and connections
b) Each node computes its weighted input
c) A node can be in an excited or non-excited state
d) All of the mentioned
View Answer
Answer: d
Explanation: All of the mentioned are characteristics of a neural network.

10. What is Neuro software?
a) Software used to analyze neurons
b) A powerful and easy-to-use neural network package
c) Software designed to aid experts in the real world
d) Software used by neurosurgeons
View Answer
Answer: b
Explanation: None.

 

Neural Networks – 2

1. Why is the XOR problem exceptionally interesting to neural network researchers?
a) Because it can be expressed in a way that allows you to use a neural network
b) Because it is a complex binary operation that cannot be solved using neural networks
c) Because it can be solved by a single layer perceptron
d) Because it is the simplest linearly inseparable problem that exists.
View Answer
Answer: d
Explanation: XOR is the simplest function that is not linearly separable, so it cannot be computed by a single-layer perceptron and requires at least one hidden layer.

2. What is back propagation?
a) It is another name given to the curvy function in the perceptron
b) It is the transmission of error back through the network to adjust the inputs
c) It is the transmission of error back through the network to allow weights to be adjusted so that the network can learn
d) None of the mentioned
View Answer
Answer: c
Explanation: Back propagation is the transmission of error back through the network to allow weights to be adjusted so that the network can learn.
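A minimal sketch of that idea for a single sigmoid neuron trained on a squared-error loss (the input values, target and learning rate are made up for illustration): the output error is passed back through the derivative of the activation and used to adjust the weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 0.25])   # example inputs (illustrative values)
w = np.array([0.1, 0.4, -0.2])    # current weights
target = 1.0
lr = 0.5                          # learning rate (assumed)

y = sigmoid(w @ x)                # forward pass
error = y - target                # output error
grad = error * y * (1 - y) * x    # error sent back through the sigmoid's derivative
w -= lr * grad                    # weight adjustment: this is the learning step
```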

3. Why are linearly separable problems of interest to neural network researchers?
a) Because they are the only class of problem that a neural network can solve successfully
b) Because they are the only class of problem that a perceptron can solve successfully
c) Because they are the only mathematical functions that are continuous
d) Because they are the only mathematical functions you can draw
View Answer
Answer: b
Explanation: Linearly separable problems are of interest to neural network researchers because they are the only class of problem that a perceptron can solve successfully.
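For concreteness, here is a sketch of the perceptron learning rule converging on a linearly separable problem, logical AND (the learning rate and number of passes are arbitrary choices). The same rule never settles on XOR, which is not linearly separable.

```python
# Perceptron learning rule on AND, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.1

for _ in range(20):  # a few passes are enough for this toy problem
    for (x1, x2), target in data:
        y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        w[0] += lr * (target - y) * x1
        w[1] += lr * (target - y) * x2
        b += lr * (target - y)

print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
# All four points end up classified correctly.
```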

4. Which of the following is not a promise of an artificial neural network?
a) It can explain its results
b) It can survive the failure of some nodes
c) It has inherent parallelism
d) It can handle noise
View Answer
Answer: a
Explanation: An artificial neural network (ANN) cannot explain its results.

5. Neural Networks are complex ______________ with many parameters.
a) Linear Functions
b) Nonlinear Functions
c) Discrete Functions
d) Exponential Functions
View Answer
Answer: b
Explanation: Neural networks are complex nonlinear functions with many parameters (the weights). It is the nonlinear activation of each neuron that lets a network model more than straight-line relationships.
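A quick numerical illustration (with made-up random matrices) of why the nonlinearity matters: stacking purely linear layers collapses to a single linear map, while inserting a nonlinear activation such as tanh does not.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
x = rng.normal(size=(2,))

linear_stack = W2 @ (W1 @ x)                 # two linear layers ...
collapsed = (W2 @ W1) @ x                    # ... equal one linear layer
print(np.allclose(linear_stack, collapsed))  # True

nonlinear = W2 @ np.tanh(W1 @ x)             # adding tanh breaks the collapse
print(np.allclose(nonlinear, collapsed))     # False (in general)
```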

6. A perceptron adds up all the weighted inputs it receives, and if it exceeds a certain value, it outputs a 1, otherwise it just outputs a 0.
a) True
b) False
c) Sometimes – it can also output intermediate values as well
d) Can’t say
View Answer
Answer: a
Explanation: Yes, that is exactly how a perceptron works.

7. What is the name of the function in the following statement “A perceptron adds up all the weighted inputs it receives, and if it exceeds a certain value, it outputs a 1, otherwise it just outputs a 0”?
a) Step function
b) Heaviside function
c) Logistic function
d) Perceptron function
View Answer
Answer: b
Explanation: This function is also known as the step function, so option (a) is also acceptable. It is a hard-thresholding function: the output is either on or off, with no in-between.
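A minimal sketch of that hard-thresholding behaviour (the weights, inputs and threshold are illustrative):

```python
def heaviside(z, threshold=0.0):
    """Hard threshold: 1 if the weighted sum exceeds the threshold, else 0."""
    return 1 if z > threshold else 0

def perceptron_output(weights, inputs, threshold=0.0):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return heaviside(weighted_sum, threshold)

print(perceptron_output([0.5, -0.5], [1, 0]))  # 1: weighted sum 0.5 exceeds the threshold
print(perceptron_output([0.5, -0.5], [0, 1]))  # 0: weighted sum -0.5 does not
```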

8. Having multiple perceptrons can actually solve the XOR problem satisfactorily: this is because each perceptron can partition off a linear part of the space itself, and they can then combine their results.
a) True – this works always, and these multiple perceptrons learn to classify even complex problems
b) False – perceptrons are mathematically incapable of solving linearly inseparable functions, no matter what you do
c) True – perceptrons can do this but are unable to learn to do it – they have to be explicitly handcoded
d) False – just having a single perceptron is enough
View Answer
Answer: c
Explanation: A multi-layer arrangement of perceptrons can represent XOR, but the simple perceptron learning rule cannot propagate error through the hard threshold to train the hidden units, so the weights have to be set by hand.
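A minimal hand-coded sketch of that idea (the weights are fixed by hand, exactly as the answer describes, not learned): two hidden perceptrons compute OR and NAND of the inputs, and an output perceptron ANDs their results, which is XOR.

```python
def step(z):
    return 1 if z > 0 else 0

def xor(x1, x2):
    # Hidden layer: each perceptron carves off one linear region.
    h_or = step(x1 + x2 - 0.5)        # OR
    h_nand = step(-x1 - x2 + 1.5)     # NAND
    # Output layer combines the two half-spaces.
    return step(h_or + h_nand - 1.5)  # AND of the hidden outputs

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```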

9. The network that involves backward links from output to the input and hidden layers is called _________
a) Self organizing maps
b) Perceptrons
c) Recurrent neural network
d) Multi layered perceptron
View Answer
Answer: c
Explanation: RNN (Recurrent neural network) topology involves backward links from output to the input and hidden layers.
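A minimal sketch of such a backward link, in the common form where the hidden state is fed back into the next time step (the weight matrices and the input sequence are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
W_x = rng.normal(size=(4, 3))   # input-to-hidden weights
W_h = rng.normal(size=(4, 4))   # hidden-to-hidden feedback weights

h = np.zeros(4)                 # hidden state carried between steps
for x_t in rng.normal(size=(5, 3)):      # a sequence of 5 inputs
    h = np.tanh(W_x @ x_t + W_h @ h)     # the backward link: h feeds back in
print(h.shape)  # (4,)
```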

10. Which of the following is an application of NN (Neural Network)?
a) Sales forecasting
b) Data validation
c) Risk management
d) All of the mentioned
View Answer
Answer: d
Explanation: All of the mentioned options are applications of neural networks.