CUDA

1. What does CUDA stand for?

A. Central Unit for Data Access
B. Compute Unified Device Architecture
C. Core Unit Data Application
D. Computer Use Direct Access
Answer: B

2. CUDA is mainly used for:

A. Web development
B. Graphics designing
C. Parallel computing on GPU
D. Database management
Answer: C

3. CUDA C++ is based on which language?

A. Java
B. Python
C. C++
D. Ruby
Answer: C

4. The __global__ keyword is used to define:

A. CPU function
B. File structure
C. Device kernel
D. Variable scope
Answer: C

5. CUDA kernels are launched using:

A. Curly braces {}
B. Square brackets []
C. Triple angle brackets <<< >>>
D. Round brackets ()
Answer: C
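Questions 4 and 5 together describe how a kernel is defined and launched. A minimal sketch, assuming a CUDA-capable device and the nvcc toolchain (the kernel name hello is illustrative, not taken from the questions):

```cuda
#include <cstdio>

// __global__ marks a device kernel: it runs on the GPU (device)
// but is called from the CPU (host).
__global__ void hello() {
    printf("hello from thread %d\n", threadIdx.x);
}

int main() {
    // Triple angle brackets configure the launch: 1 block of 4 threads.
    hello<<<1, 4>>>();
    cudaDeviceSynchronize();  // wait for the kernel to finish
    return 0;
}
```

Kernel launches are asynchronous, which is why the host calls cudaDeviceSynchronize() before exiting.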

6. In CUDA, the host is:

A. GPU
B. RAM
C. CPU
D. SSD
Answer: C

7. In CUDA, what is a 'block'?

A. A storage unit
B. A group of threads
C. A GPU memory part
D. A GPU chip
Answer: B

8. A 'grid' in CUDA is made up of:

A. Registers
B. Threads
C. Blocks
D. Kernels
Answer: C

9. What is the index of the first element in CUDA arrays?

A. 1
B. 0
C. -1
D. Depends on size
Answer: B

10. Which variable is used to access the current block index?

A. block.x
B. blockDim.x
C. blockIdx.x
D. blockIndex
Answer: C

11. Which variable is used to access the current thread index?

A. thread.x
B. threadIndex
C. blockDim
D. threadIdx.x
Answer: D

12. Which CUDA function allocates managed memory?

A. cudaAlloc()
B. malloc()
C. cudaMallocManaged()
D. new()
Answer: C

13. Which CUDA function frees allocated memory?

A. delete()
B. cudaFree()
C. free()
D. cudaRemove()
Answer: B
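Questions 12 and 13 pair up in practice: every cudaMallocManaged() is matched by a cudaFree(). A minimal sketch, assuming a CUDA-capable device (the inc kernel and the array size are illustrative):

```cuda
#include <cstdio>

__global__ void inc(int *a, int n) {
    int i = threadIdx.x;
    if (i < n) a[i] += 1;
}

int main() {
    const int N = 8;
    int *a;
    // cudaMallocManaged allocates memory that both the host
    // and the device can read and write.
    cudaMallocManaged(&a, N * sizeof(int));
    for (int i = 0; i < N; ++i) a[i] = i;

    inc<<<1, N>>>(a, N);        // 1 block of N threads
    cudaDeviceSynchronize();

    printf("a[7] = %d\n", a[7]);  // 7 + 1 = 8
    cudaFree(a);                  // release the managed allocation
    return 0;
}
```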

14. What does the following line do? add<<<1, N>>>();

A. Launches 1 thread
B. Launches N blocks
C. Launches 1 block with N threads
D. Compiles the code
Answer: C

15. What does blockDim.x represent?

A. Block index
B. Thread count per block
C. Grid size
D. Kernel size
Answer: B

16. What is the formula to calculate the global thread index across blocks?

A. blockIdx.x * threadIdx.x
B. blockIdx.x + threadIdx.x
C. threadIdx.x * blockDim.x
D. threadIdx.x + blockIdx.x * blockDim.x
Answer: D

17. What is used to avoid accessing beyond the end of an array?

A. A break statement
B. An if condition
C. A continue statement
D. A while loop
Answer: B

18. In CUDA, threads can:

A. Only run independently
B. Not share data
C. Communicate and synchronize
D. Only read memory
Answer: C

19. What is the advantage of combining blocks and threads?

A. Better image quality
B. Simpler code
C. More parallelism
D. Uses less memory
Answer: C

20. CUDA threads are grouped under:

A. Loops
B. Blocks
C. Registers
D. Functions
Answer: B

21. Which of the following is a built-in CUDA variable?

A. mainIndex
B. threadCount
C. threadIdx.x
D. blockCount
Answer: C

22. What is the purpose of random_ints() in CUDA code?

A. Allocate memory
B. Free memory
C. Fill arrays with random values
D. Compile the kernel
Answer: C

23. Why do we use (N + M - 1) / M in a kernel launch?

A. To waste GPU time
B. To make kernel launch simpler
C. To handle any array size
D. To reduce memory usage
Answer: C
