Parallel Computing MCQs with Answers


PARALLEL COMPUTING

1. It is the simultaneous use of multiple compute resources to solve a computational problem
(A) Parallel computing
(B) Single processing
(C) Sequential computing
(D) None of these

2. Parallel Execution
(A) A sequential execution of a program, one statement at a time
(B) Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
(C) A program or set of instructions that is executed by a processor.
(D) None of these

3. Scalability refers to a parallel system’s (hardware and/or software) ability
(A) To demonstrate a proportionate increase in parallel speedup with the removal of some processors
(B) To demonstrate a proportionate increase in parallel speedup with the addition of more processors
(C) To demonstrate a proportionate decrease in parallel speedup with the addition of more processors
(D) None of these

4. Parallel computing can include
(A) Single computer with multiple processors
(B) An arbitrary number of computers connected by a network
(C) Combination of both A and B
(D) None of these

5. Serial Execution
(A) A sequential execution of a program, one statement at a time
(B) Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
(C) A program or set of instructions that is executed by a processor.
(D) None of these

6. Shared Memory is
(A) A computer architecture where all processors have direct access to common physical memory
(B) It refers to network-based memory access for physical memory that is not common.
(C) Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; the actual event of data exchange is commonly referred to as communication, regardless of the method employed.
(D) None of these
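
As an illustration of option (A), here is a minimal shared-memory sketch in C using POSIX threads (the thread and iteration counts are arbitrary choices for the example): all threads directly read and update the same physical memory location, with a mutex serializing the updates.

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4
    #define ITERS 100000

    static long counter = 0;  /* one physical memory location, visible to every thread */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERS; i++) {
            pthread_mutex_lock(&lock);    /* serialize updates to the shared location */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NUM_THREADS];
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);  /* 400000: all threads shared one memory */
        return 0;
    }

Compile with a pthreads-capable compiler, e.g. cc -pthread shared.c.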

7. Distributed Memory
(A) A computer architecture where all processors have direct access to common physical memory
(B) It refers to network-based memory access for physical memory that is not common
(C) Parallel tasks typically need to exchange data. There are several ways this can be accomplished, such as through a shared memory bus or over a network; the actual event of data exchange is commonly referred to as communication, regardless of the method employed.
(D) None of these
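
By contrast, here is a minimal distributed-memory sketch using MPI (assuming an MPI toolchain; compile with mpicc and launch with mpirun -np 2). Each process owns its own private memory, and rank 1 can only see rank 0's value after an explicit message crosses the network or interconnect.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = 0;  /* each process has its own private copy */
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);  /* visible only after the message */
        }
        MPI_Finalize();
        return 0;
    }

Incidentally, this send/receive pair is also the canonical form of the point-to-point communication asked about in question 20.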

8. Parallel Overhead is
(A) Observed speedup of a code which has been parallelized, defined as the ratio of the wall-clock time of serial execution to the wall-clock time of parallel execution
(B) The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
(C) Refers to the hardware that comprises a given parallel system – having many processors
(D) None of these
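
As a worked example of the speedup defined in option (A): if a code takes 120 seconds of wall-clock time serially and 30 seconds on 8 processors, the observed speedup is 120 / 30 = 4; the gap between 4 and the ideal speedup of 8 is largely accounted for by parallel overhead such as task start-up, synchronization, and data communication.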

9. Massively Parallel
(A) Observed speedup of a code which has been parallelized, defined as the ratio of the wall-clock time of serial execution to the wall-clock time of parallel execution
(B) The amount of time required to coordinate parallel tasks. It includes factors such as: Task start-up time, Synchronizations, Data communications.
(C) Refers to the hardware that comprises a given parallel system – having many processors
(D) None of these

10. Fine-grain Parallelism is
(A) In parallel computing, it is a qualitative measure of the ratio of computation to communication
(B) Here relatively small amounts of computational work are done between communication events
(C) Relatively large amounts of computational work are done between communication / synchronization events
(D) None of these

11. In shared Memory
(A) Changes in a memory location made by one processor are not visible to the other processors.
(B) Changes in a memory location made by one processor are visible to all other processors
(C) Changes in a memory location made by one processor are randomly visible to the other processors.
(D) None of these

12. In shared Memory:
(A) Here all processors access all memory as a global address space
(B) Here all processors have individual memory
(C) Here some processors access all memory as a global address space and some do not
(D) None of these

13. In shared Memory
(A) Multiple processors can operate independently but share the same memory resources
(B) Multiple processors can operate independently but do not share the same memory resources
(C) Multiple processors can operate independently but some do not share the same memory resources
(D) None of these

14. In designing a parallel program, one has to break the problem into discrete chunks of work that can be distributed to multiple tasks. This is known as
(A) Decomposition
(B) Partitioning
(C) Compounding
(D) Both A and B
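
A minimal C sketch of such partitioning, assuming a simple block decomposition of N array elements among a number of tasks (the function name and sizes are illustrative):

    #include <stdio.h>

    /* Compute the contiguous block [lo, hi) owned by task `rank` when N
       elements are split as evenly as possible among `ntasks` tasks. */
    static void block_range(int N, int ntasks, int rank, int *lo, int *hi)
    {
        int base = N / ntasks;          /* minimum chunk size                */
        int rem  = N % ntasks;          /* the first `rem` tasks get one extra */
        *lo = rank * base + (rank < rem ? rank : rem);
        *hi = *lo + base + (rank < rem ? 1 : 0);
    }

    int main(void)
    {
        int N = 10, ntasks = 4;
        for (int r = 0; r < ntasks; r++) {
            int lo, hi;
            block_range(N, ntasks, r, &lo, &hi);
            printf("task %d owns elements [%d, %d)\n", r, lo, hi);
        }
        return 0;
    }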

15. Latency is
(A) Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
(B) Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
(C) It is the time it takes to send a minimal (0 byte) message from one point to another
(D) None of these
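
The latency described in option (C) is typically measured with a ping-pong test. A minimal MPI sketch, where the repetition count is an arbitrary choice (run with two processes):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int reps = 1000;
        char dummy = 0;  /* 0-byte payloads: only the message envelope travels */
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI_Send(&dummy, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&dummy, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(&dummy, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&dummy, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        if (rank == 0)  /* one-way latency is half the average round-trip time */
            printf("latency ~ %g s\n", (MPI_Wtime() - t0) / (2.0 * reps));
        MPI_Finalize();
        return 0;
    }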

16. Domain Decomposition
(A) Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
(B) Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
(C) It is the time it takes to send a minimal (0 byte) message from point A to point B
(D) None of these

17. Functional Decomposition:
(A) Partitioning in which the data associated with a problem is decomposed. Each parallel task then works on a portion of the data.
(B) Partitioning in which the focus is on the computation that is to be performed rather than on the data manipulated by the computation. The problem is decomposed according to the work that must be done. Each task then performs a portion of the overall work.
(C) It is the time it takes to send a minimal (0 byte) message from point A to point B
(D) None of these

18. Synchronous communications
(A) It requires some type of “handshaking” between tasks that are sharing data. This can be explicitly structured in code by the programmer, or it may happen at a lower level, unknown to the programmer.
(B) It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
(C) It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
(D) It allows tasks to transfer data independently from one another.
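
In MPI, for example, the handshake in option (A) is realized by a synchronous send: MPI_Ssend does not complete until the matching receive has started. A minimal sketch (run with two processes):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, value = 7;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            /* MPI_Ssend blocks until the matching receive has been posted:
               the "handshake" is built into the operation. */
            MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 got %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }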

19. Collective communication
(A) It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
(B) It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
(C) It allows tasks to transfer data independently from one another.
(D) None of these
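
A minimal MPI sketch of a collective operation: every rank in the group contributes a value and MPI_Reduce combines them on rank 0 (using each rank's index as its contribution is an arbitrary choice for the example).

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mine = rank + 1, total = 0;
        /* Every rank in MPI_COMM_WORLD participates: a collective call. */
        MPI_Reduce(&mine, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum over %d ranks = %d\n", size, total);
        MPI_Finalize();
        return 0;
    }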

20. Point-to-point communication refers to
(A) It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
(B) It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
(C) It allows tasks to transfer data independently from one another.
(D) None of these

21. Uniform Memory Access (UMA) refers to
(A) Here all processors have equal access and access times to memory
(B) Here if one processor updates a location in shared memory, all the other processors know about the update.
(C) Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
(D) None of these

22. Asynchronous communications
(A) It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
(B) It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
(C) It allows tasks to transfer data independently from one another.
(D) None of these
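
A minimal MPI sketch of option (C): MPI_Isend returns immediately, so the sender can do unrelated work while the message is in flight, and MPI_Wait later completes the transfer (run with two processes).

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, value = 99;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Request req;
            MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            /* ... rank 0 is free to compute here while the message is in flight ... */
            MPI_Wait(&req, MPI_STATUS_IGNORE);   /* complete the transfer */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 got %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }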

23. Granularity is
(A) In parallel computing, it is a qualitative measure of the ratio of computation to communication
(B) Here relatively small amounts of computational work are done between communication events
(C) Relatively large amounts of computational work are done between communication / synchronization events
(D) None of these

24. Coarse-grain Parallelism
(A) In parallel computing, it is a qualitative measure of the ratio of computation to communication
(B) Here relatively small amounts of computational work are done between communication events
(C) Relatively large amounts of computational work are done between communication / synchronization events
(D) None of these

25. Cache Coherent UMA (CC-UMA) is
(A) Here all processors have equal access and access times to memory
(B) Here if one processor updates a location in shared memory, all the other processors know about the update.
(C) Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
(D) None of these

26. Non-Uniform Memory Access (NUMA) is
(A) Here all processors have equal access and access times to memory
(B) Here if one processor updates a location in shared memory, all the other processors know about the update.
(C) Here one SMP can directly access memory of another SMP and not all processors have equal access time to all memories
(D) None of these

27. It distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states: Single or Multiple.
(A) Single Program Multiple Data (SPMD)
(B) Flynn’s taxonomy
(C) Von Neumann Architecture
(D) None of these

28. In the threads model of parallel programming
(A) A single process can have multiple, concurrent execution paths
(B) A single process can have only a single execution path.
(C) Multiple processes can have a single concurrent execution path.
(D) None of these
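
A minimal POSIX-threads sketch of option (A): one process spawns two concurrent execution paths that run different routines (the routine names are illustrative).

    #include <pthread.h>
    #include <stdio.h>

    static void *path_a(void *arg) { (void)arg; printf("path A running\n"); return NULL; }
    static void *path_b(void *arg) { (void)arg; printf("path B running\n"); return NULL; }

    int main(void)
    {
        pthread_t a, b;
        /* One process, two concurrent execution paths. */
        pthread_create(&a, NULL, path_a, NULL);
        pthread_create(&b, NULL, path_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }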

29. These applications typically have multiple executable object files (programs). While the application is being run in parallel, each task can be executing the same program as the other tasks or a different one. All tasks may use different data.
(A) Single Program Multiple Data (SPMD)
(B) Multiple Program Multiple Data (MPMD)
(C) Von Neumann Architecture
(D) None of these

30. Here a single program is executed by all tasks simultaneously. At any moment in time, tasks can be executing the same or different instructions within the same program. These programs usually have the necessary logic programmed into them to allow different tasks to branch or conditionally execute only those parts of the program they are designed to execute.
(A) Single Program Multiple Data (SPMD)
(B) Multiple Program Multiple Data (MPMD)
(C) Von Neumann Architecture
(D) None of these
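
A minimal SPMD sketch in MPI: every task runs the same executable, and a branch on the task's rank selects which part of the program it executes.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* Same program everywhere; the branch selects per-task behavior. */
        if (rank == 0)
            printf("rank 0: coordinating\n");
        else
            printf("rank %d: computing\n", rank);
        MPI_Finalize();
        return 0;
    }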

31. These computers use the stored-program concept. Memory is used to store both program instructions and data, and the central processing unit (CPU) fetches instructions and/or data from memory. The CPU decodes the instructions and then performs them sequentially.
(A) Single Program Multiple Data (SPMD)
(B) Flynn’s taxonomy
(C) Von Neumann Architecture
(D) None of these

32. Load balancing is
(A) Involves only those tasks executing a communication operation
(B) It exists between program statements when the order of statement execution affects the results of the program.
(C) It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered the minimization of task idle time.
(D) None of these

33. Synchronous communication operations refer to
(A) Involves only those tasks executing a communication operation
(B) It exists between program statements when the order of statement execution affects the results of the program.
(C) It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered the minimization of task idle time.
(D) None of these

34. Data dependence is
(A) Involves only those tasks executing a communication operation
(B) It exists between program statements when the order of statement execution affects the results of the program.
(C) It refers to the practice of distributing work among tasks so that all tasks are kept busy all of the time. It can be considered the minimization of task idle time.
(D) None of these
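
A short C illustration of option (B): in the first loop, iteration i needs the result of iteration i-1 (a loop-carried data dependence), so its iterations cannot safely execute in parallel; the second loop has no dependence between iterations and could be parallelized.

    #include <stdio.h>
    #define N 8

    int main(void)
    {
        int a[N] = {1, 1, 1, 1, 1, 1, 1, 1};
        int b[N];

        /* Loop-carried dependence: the order of execution affects the result. */
        for (int i = 1; i < N; i++)
            a[i] = a[i - 1] + 1;

        /* No dependence between iterations: safe to run in parallel. */
        for (int i = 0; i < N; i++)
            b[i] = a[i] * 2;

        printf("a[%d] = %d, b[%d] = %d\n", N - 1, a[N - 1], N - 1, b[N - 1]);
        return 0;
    }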


Answer Sheet:
1. (A) 2. (B) 3. (B) 4. (C) 5. (A)
6. (A) 7. (B) 8. (B) 9. (C) 10. (B)
11. (B) 12. (A) 13. (A) 14. (D) 15. (C)
16. (A) 17. (B) 18. (A) 19. (A) 20. (B)
21. (A) 22. (C) 23. (A) 24. (C) 25. (B)
26. (C) 27. (B) 28. (A) 29. (B) 30. (A)
31. (C) 32. (C) 33. (A) 34. (B)