AI Unit-4 QB

The document consists of a quiz with 42 questions focused on Natural Language Processing (NLP) concepts, techniques, and terminology. It covers topics such as syntactic analysis, machine translation, sentiment analysis, and various algorithms used in NLP. Each question provides multiple-choice answers related to the field of NLP.


AI Unit-4 (42 questions)

1. Subfield dealing with human language:


(a) Machine Learning
(b) Natural Language Processing
(c) Speech Recognition
(d) Image Processing

2. What does NLP stand for?


(a) Natural Logic Programming
(b) Neural Language Processing
(c) Natural Language Processing
(d) None of the above

3. Primary goal of NLP:


(a) Understanding text and speech in a way that mimics human understanding
(b) Translating human language to machine code
(c) Recognizing images
(d) Processing data streams

4. Task involved in syntactic analysis:


(a) Language generation
(b) Sentiment analysis
(c) Part-of-speech tagging
(d) Translation

5. What is a corpus in NLP?


(a) A single sentence
(b) A large collection of text used for training
(c) A grammatical model
(d) A linguistic rulebook

6. Technique to convert text to numerical format:


(a) Parsing
(b) Word embeddings

(c) Tokenization
(d) Chunking

7. Task converting text into vector space:


(a) Parsing
(b) Lemmatization
(c) Vectorization
(d) POS tagging
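
For illustration, a minimal vectorization sketch in Python using scikit-learn's TfidfVectorizer (the two example documents are placeholders; any small corpus works):

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["the cat sat on the mat", "the dog chased the cat"]
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(docs)          # sparse matrix: one row per document
    print(vectorizer.get_feature_names_out())   # vocabulary learned from the corpus
    print(X.toarray())                          # TF-IDF weight per word per document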

8. POS tagging refers to:


(a) Part-of-Speech tagging
(b) Pattern over sequence
(c) Parsing object systems
(d) Programming operating systems

9. What is a "token"?
(a) A word or a meaningful sequence of characters
(b) A language model
(c) A syntax error
(d) A feature in ML
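
A quick sketch of both ideas with NLTK (resource names such as "punkt" vary slightly across NLTK versions):

    import nltk
    nltk.download("punkt", quiet=True)                        # word tokenizer model
    nltk.download("averaged_perceptron_tagger", quiet=True)   # POS tagger model

    tokens = nltk.word_tokenize("NLP turns raw text into tokens.")
    print(tokens)                # ['NLP', 'turns', 'raw', 'text', 'into', 'tokens', '.']
    print(nltk.pos_tag(tokens))  # pairs each token with a tag, e.g. ('NLP', 'NNP')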

10. Stopword removal purpose:


(a) To eliminate common words that do not carry significant meaning
(b) To find rare words
(c) To tokenize sentences
(d) To create vectors
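
A minimal stopword-removal sketch using NLTK's built-in English stopword list:

    import nltk
    from nltk.corpus import stopwords
    nltk.download("stopwords", quiet=True)

    words = "this is a simple example of stopword removal".split()
    stops = set(stopwords.words("english"))
    print([w for w in words if w not in stops])   # ['simple', 'example', 'stopword', 'removal']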

11. Model for machine translation:


(a) Decision Trees
(b) Random Forest
(c) Neural Networks
(d) k-NN

12. Example of a language model:


(a) GPT

(b) CNN
(c) SVM
(d) RNN

13. What is lemmatization?


(a) Removing punctuation
(b) Reducing words to their base form
(c) Finding synonyms
(d) Parsing sentences
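
A small lemmatization sketch with NLTK's WordNet lemmatizer (note the POS hint: without it, "running" is treated as a noun and left unchanged):

    import nltk
    from nltk.stem import WordNetLemmatizer
    nltk.download("wordnet", quiet=True)

    lemmatizer = WordNetLemmatizer()
    print(lemmatizer.lemmatize("running", pos="v"))   # 'run'
    print(lemmatizer.lemmatize("mice"))               # 'mouse'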

14. Sentiment analysis aims to determine:


(a) Text complexity
(b) POS tagging
(c) The emotion or opinion expressed in the text
(d) Translation accuracy

15. Named Entity Recognition (NER):


(a) Summarizing documents
(b) Translating languages
(c) Identifying entities like names, locations, and organizations
(d) Predicting next word
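
A minimal NER sketch with spaCy (assumes the small English model en_core_web_sm has been installed separately via python -m spacy download en_core_web_sm):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Apple opened a new office in Mumbai last March.")
    for ent in doc.ents:
        print(ent.text, ent.label_)   # e.g. Apple ORG, Mumbai GPE, last March DATE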

16. Common evaluation metric in NLP:


(a) RMSE
(b) Precision, Recall, and F1-score
(c) Log Loss
(d) MAE
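
For concreteness, the three metrics computed with scikit-learn on a toy binary example (the labels are made up):

    from sklearn.metrics import precision_score, recall_score, f1_score

    y_true = [1, 0, 1, 1, 0, 1]
    y_pred = [1, 0, 0, 1, 0, 1]
    print(precision_score(y_true, y_pred))  # 1.0  -- no false positives
    print(recall_score(y_true, y_pred))     # 0.75 -- one true positive was missed
    print(f1_score(y_true, y_pred))         # ~0.857, the harmonic mean of the two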

17. Technique to extract meaning in context:


(a) Tokenization
(b) Stopword removal
(c) Word Embeddings
(d) Parsing

18. Algorithm for text classification:


(a) Naive Bayes
(b) K-Means
(c) PCA
(d) Apriori
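
A minimal Naive Bayes text-classification sketch with scikit-learn (the four labelled sentences are placeholder training data):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["great movie loved it", "terrible plot waste of time",
             "wonderful acting", "boring and dull"]
    labels = ["pos", "neg", "pos", "neg"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)
    print(model.predict(["loved the acting"]))   # likely ['pos'] on this toy data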

19. Not part of NLP pipeline:


(a) Tokenization
(b) Clustering
(c) POS tagging
(d) Parsing

20. Deep learning model used in NLP tasks:


(a) CNN
(b) Recurrent Neural Network (RNN)
(c) SVM
(d) Decision Tree

21. Unsupervised learning in NLP:


(a) POS tagging
(b) Word2Vec
(c) Naive Bayes
(d) CRF
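
Word2Vec learns embeddings from raw, unlabelled text, which is what makes it unsupervised. A toy sketch with gensim (a real model needs far more text than these three sentences):

    from gensim.models import Word2Vec

    sentences = [["the", "cat", "sat"], ["the", "dog", "sat"], ["cats", "and", "dogs"]]
    model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)  # sg=1: skip-gram
    print(model.wv["cat"].shape)   # (50,) -- one dense vector per vocabulary word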

22. Model used in transfer learning:


(a) BERT
(b) CNN
(c) RNN
(d) GloVe

23. Purpose of Bag of Words:


(a) To analyze the frequency of words in a document
(b) To parse sentences
(c) To predict words
(d) To translate texts
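
A Bag of Words sketch with scikit-learn's CountVectorizer, which simply counts word frequencies and discards word order:

    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["the cat sat on the mat", "the dog sat"]
    bow = CountVectorizer()
    X = bow.fit_transform(docs)
    print(bow.get_feature_names_out())  # ['cat' 'dog' 'mat' 'on' 'sat' 'the']
    print(X.toarray())                  # [[1 0 1 1 1 2]
                                        #  [0 1 0 0 1 1]]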

24. What is "embedding"?


(a) Encoding textual data into a fixed-size vector
(b) Removing stopwords
(c) Predicting next word
(d) Training dataset

25. Main difference between Word2Vec and GloVe:


(a) Word2Vec uses a co-occurrence matrix
(b) Word2Vec uses a skip-gram model, while GloVe is based on co-occurrence matrix factorization
(c) Both are identical
(d) GloVe uses skip-gram

26. Language generation model:


(a) GPT
(b) CNN
(c) RNN
(d) BERT

27. Purpose of confusion matrix:


(a) To evaluate the performance of classification models
(b) To generate tokens
(c) To vectorize text
(d) To translate texts
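
A toy confusion-matrix sketch with scikit-learn (rows are true labels, columns are predicted labels):

    from sklearn.metrics import confusion_matrix

    y_true = ["spam", "ham", "spam", "ham", "spam"]
    y_pred = ["spam", "ham", "ham", "ham", "spam"]
    print(confusion_matrix(y_true, y_pred, labels=["spam", "ham"]))
    # [[2 1]   2 spam classified correctly, 1 spam misread as ham
    #  [0 2]]  both ham classified correctly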

28. Purpose of neural network in NLP:


(a) Parsing
(b) Tokenization
(c) To learn patterns from data for tasks like classification and translation
(d) Word embedding creation

29. Algorithm for sequence labeling:


(a) Naive Bayes
(b) Conditional Random Fields (CRF)
(c) Decision Tree
(d) Apriori

30. Advantage of pre-trained models like BERT:


(a) Requires more data to train
(b) Requires less data to train
(c) Slow convergence
(d) No transfer learning

31. Goal of syntactic parsing:


(a) Translate to another language
(b) To identify the grammatical structure of sentences
(c) Tokenize data
(d) Generate embeddings

32. What is tokenization?


(a) Parsing syntax trees
(b) Splitting text into individual words or tokens
(c) Generating stopwords
(d) Translation

33. Objective of machine translation:


(a) Find grammatical errors
(b) Sentiment analysis
(c) To translate text from one language to another
(d) Tokenization

34. Improve NLP model accuracy:


(a) Data cleaning
(b) Model tuning
(c) Feature engineering
(d) All of the above

35. Sequence-to-sequence model task:


(a) Named Entity Recognition
(b) Text summarization
(c) Tokenization
(d) Stopword removal

36. Example of an NLP library:


(a) Scikit-learn
(b) TensorFlow
(c) NLTK
(d) Keras

37. Feature of BERT:


(a) Unidirectional context
(b) No context
(c) It uses a bidirectional context for training
(d) One-way attention

38. Method of language generation:


(a) Clustering models
(b) Decision Trees
(c) Sequence-to-sequence models
(d) Naive Bayes

39. What RNN excels at in NLP:


(a) Handling sequential data
(b) Image classification
(c) Regression analysis
(d) Feature reduction

40. Purpose of Levenshtein distance:


(a) Translate sentences
(b) To calculate the difference between two strings
(c) POS tagging
(d) Language modeling
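
The standard dynamic-programming formulation, sketched in plain Python with a rolling one-row table:

    def levenshtein(a: str, b: str) -> int:
        """Minimum number of single-character inserts, deletes, or substitutions."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution (free on a match)
            prev = curr
        return prev[-1]

    print(levenshtein("kitten", "sitting"))   # 3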

41. Feature of a transformer model:


(a) Uses sequential memory
(b) It uses attention mechanisms
(c) No training required
(d) Focuses only on word frequency

42. Role of attention mechanism:


(a) Reduces model complexity
(b) Predicts grammar errors
(c) To assign weights to different parts of the input sequence
(d) Parses sentences
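
A bare numpy sketch of scaled dot-product attention: each query scores every key, the scores are softmaxed into weights, and the values are averaged under those weights (the shapes here are arbitrary toy sizes):

    import numpy as np

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])            # query-key similarity, scaled
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)                 # softmax: weights sum to 1 per query
        return w @ V                                       # weighted sum of the values

    rng = np.random.default_rng(0)
    Q, K, V = rng.normal(size=(3, 4)), rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
    print(attention(Q, K, V).shape)   # (3, 4): one attended summary per query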
