Automatic Answer Script Evaluation
Abstract: Evaluating answer scripts manually has become increasingly difficult and places a heavy burden on faculty; this is a major problem in educational institutions today. A human evaluator may overlook or override mistakes, and further problems arise when the handwriting cannot be understood or when the evaluator's mood influences the marking. We therefore propose a system in which the machine corrects the papers. The project has two parts: converting the handwritten answers into a text document, and then evaluating that text. For the conversion from handwriting to text we use computer vision and artificial intelligence methods, and for the evaluation we use natural language processing (NLP), which interprets the natural language of the student, tokenizes the required words, and checks them against the key already uploaded by the lecturers. In this way the evaluation is carried out reliably and the burden on the faculty decreases. The evaluation depends mainly on the keywords used in the answer script and how well they match the key paper given by the faculty.
reducing the performance of the algorithm. As shown in Figure 6, the improvement in CER between the greedy algorithm and the lexicon search was marginal. As expected, the beam search algorithm outperformed both the greedy and the lexicon search results, with a CER of 18.840.
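CER here refers to the character error rate. As a hedged aside (not taken from the paper's implementation), CER is commonly computed as the character-level edit distance between the recognized text and the ground truth, divided by the length of the ground truth; the sketch below only illustrates that common definition, and the function names and sample strings are ours.

# Minimal character error rate (CER) sketch: edit distance / reference length.
def edit_distance(ref, hyp):
    # Classic single-row dynamic-programming Levenshtein distance over characters.
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + cost)    # substitution (or match)
            prev = cur
    return dp[len(hyp)]

def cer(reference, hypothesis):
    # Expressed as a percentage, matching the scale of the CER values quoted above.
    return 100.0 * edit_distance(reference, hypothesis) / max(len(reference), 1)

print(cer("programming language", "programing langauge"))  # small CER for a near-correct transcription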
Figure 2: Evaluation of the paper by using the key sheet

Tokenization: The answer sheet can contain multiple paragraphs, and each paragraph contains multiple sentences in succession. Sentences have to be separated from paragraphs, and words from sentences, before analysis. This splitting of paragraphs into sentences and of sentences into words is referred to as tokenization.

Parts of speech tagging: The NLTK module includes a powerful feature for parts of speech tagging, which deals with the assignment of a label such as noun, adjective, etc. to each of the tokenized words.

Stop words removal: Stop words are words that carry no vital meaning for search queries. They are usually filtered out of search queries because they contribute a large amount of surplus information.

Stemming: Stemming is the method used in linguistic morphology and information retrieval to reduce inflected (or generally derived) words to their stem, base, or root form.

Computing sentence semantic similarity: Semantic similarity is a measure defined over a set of documents or terms, where the distance between documents or terms is based on the similarity of their meaning or semantic content rather than on the representation of their sentence structure (such as their string form).
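The paper does not specify the exact similarity measure, so the following is only a minimal sketch of one common choice, cosine similarity over token counts, to show how a student sentence might be compared with a key sentence; the function names and example sentences are ours, not the authors'.

# Hypothetical sketch: cosine similarity between two sentences over lowercase token counts.
import math
import re
from collections import Counter

def tokens(sentence):
    # Crude tokenizer used only for this illustration (the paper uses nltk.word_tokenize).
    return re.findall(r"[a-z]+", sentence.lower())

def cosine_similarity(a, b):
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

key_sentence = "It is a procedure oriented programming language."
student_sentence = "It is a programming language that is procedure oriented."
print(round(cosine_similarity(key_sentence, student_sentence), 2))  # close to 1.0 for similar sentences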
Report generation: This module is used to send a report to the students about their performance in each question. The answer sheet is assessed for each student, and the result is sent to all the students by mail. A report covering the overall performance of the students, the individual performance of each student, and the topics in which the students are weak and need further attention is sent to the faculty members for further action.
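The paper only states that the reports are mailed out; as a hedged illustration (not the authors' implementation), the sketch below uses Python's standard smtplib and email modules, with placeholder server, credentials, and addresses.

# Hypothetical sketch of mailing an evaluation report to a student (placeholder SMTP details).
import smtplib
from email.message import EmailMessage

def send_report(student_email, report_text):
    msg = EmailMessage()
    msg["Subject"] = "Answer script evaluation report"
    msg["From"] = "evaluator@example.edu"                      # placeholder sender address
    msg["To"] = student_email
    msg.set_content(report_text)
    with smtplib.SMTP("smtp.example.edu", 587) as server:      # placeholder SMTP server
        server.starttls()
        server.login("evaluator@example.edu", "app-password")  # placeholder credentials
        server.send_message(msg)

# send_report("student@example.edu", "Q1: 9/10 - keywords matched: programming, language, procedure")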
Tokenization of the key sentences: The key sentences for the corresponding questions are to be entered into the database by the faculty at the beginning. The student's answer sheet is moved into the record location, and each corresponding answer in the database is tokenized and divided into words. The key sentences are then retrieved from the database and are likewise tokenized and split into words. The purpose of this token separation is to split each sentence word by word so that the answer can easily be matched against the semantics of the key sentences. The tokenization is carried out with the NLTK (Natural Language Toolkit) built-in function "nltk.word_tokenize(sentence)", where sentence is the parameter to be tokenized.
Input: "It is a programming language. It is a procedure oriented language."
Output: ['It', 'is', 'a', 'programming', 'language', '.', 'It', 'is', 'a', 'procedure', 'oriented', 'language', '.']
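A minimal sketch of this step, assuming NLTK is installed and its tokenizer data has been downloaded; the sample text is the example above.

# Sentence and word tokenization with NLTK (requires: pip install nltk).
import nltk
for pkg in ("punkt", "punkt_tab"):        # tokenizer data; the name needed depends on the NLTK version
    nltk.download(pkg, quiet=True)

answer_text = "It is a programming language. It is a procedure oriented language."
print(nltk.sent_tokenize(answer_text))    # split the answer into sentences
print(nltk.word_tokenize(answer_text))    # split the answer into word tokens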
Queries. These words space unit sometimes filtered out of utilizing the English word lexicon and the nltk (Natural
search queries as a result of Brobdingnag Ian quantity of Language Tool Kit) worked in work called "stop_words =
surplus information.Stemming words Stemming is the method set(stopwords.words('english'))" Input: [('It', 'PRP'), ('is', 'VBZ'),
used to reduce inflected (or generally derived) words to their ('a', 'DT'), ('programming', 'NN'), ('language‟,‟NN‟),(„.‟,‟.‟),(„It',
stem, base, or root type in linguistic morphology and data 'NN'), ('is', 'VBZ'), ('a', 'DT'), ('strategy', 'NN'), ('situated',
retrieval. Computing the similarity of the similarity of the 'VBN'),('language', 'NN'), ('.', '.')] Output: [('programming', 'NN'),
sentence linguistics can be a measure illustrated over a ('language‟,‟NN‟),(„.‟,‟.‟), ('strategy', 'NN'), ('situated',
number of records or words. The hole between the 'VBN'),('language', 'NN'), ('.', '.')] Stemming words This module
Stop words removal: The response to a given question varies from one student to another. The English lexicon contains a significant number of filler words such as "a", "an", "of", etc. This module removes such filler words from the student's text document. The key sentences for the specific question are also retrieved from the database and the stop words are removed from them. The removal is carried out using the English stop word list provided by NLTK, via "stop_words = set(stopwords.words('english'))".
Input: [('It', 'PRP'), ('is', 'VBZ'), ('a', 'DT'), ('programming', 'NN'), ('language', 'NN'), ('.', '.'), ('It', 'PRP'), ('is', 'VBZ'), ('a', 'DT'), ('procedure', 'NN'), ('oriented', 'VBN'), ('language', 'NN'), ('.', '.')]
Output: [('programming', 'NN'), ('language', 'NN'), ('.', '.'), ('procedure', 'NN'), ('oriented', 'VBN'), ('language', 'NN'), ('.', '.')]
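A minimal sketch of the stop word filtering, assuming the NLTK stopwords corpus has been downloaded; the sample tagged list is abbreviated from the example above.

# Removing English stop words from the tagged tokens.
import nltk
from nltk.corpus import stopwords
nltk.download("stopwords", quiet=True)

stop_words = set(stopwords.words("english"))
tagged = [("It", "PRP"), ("is", "VBZ"), ("a", "DT"), ("programming", "NN"),
          ("language", "NN"), (".", "."), ("procedure", "NN"), ("oriented", "VBN")]
filtered = [(word, tag) for word, tag in tagged if word.lower() not in stop_words]
print(filtered)   # the content words ('programming', ...) survive; the fillers 'It', 'is', 'a' are dropped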
Stemming: This module works with the root forms of tensed words, since the root form of a word plays a more essential role in matching than its tense. The key sentences are tokenized and each tokenized word is stemmed so that words match correctly regardless of tense. The stemming is implemented with the NLTK module known as the Porter Stemmer and its built-in function "stem(word)", where word is the token to be stemmed.
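A minimal sketch of this step with NLTK's PorterStemmer; the sample words come from the running example.

# Stemming the remaining content words with NLTK's PorterStemmer.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ["programming", "language", "procedure", "oriented"]
print([stemmer.stem(w) for w in words])   # e.g. ['program', 'languag', 'procedur', 'orient']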
E. Projection of final scores: The final computed scores allotted to the student responses are specified in the report. The procedure currently followed by the framework for evaluating a descriptive response is as follows:
Step 1: Start.
Step 2: Type and store the correct answer in the table.
Step 3: Using the POS tagger, set up the keywords, label them, and assign weights according to their presence in the sentence.
Step 4: Store synonyms and antonyms in another table.
Step 5: Set the student score to zero. Take the student response as input and store it in another table, i.e. the SR table.
Step 6: Check whether a keyword is present in the SR table; if present, assign score = score + the weight already allotted.
Step 7: If the keyword is not found, look it up in the synonyms table, and on finding it assign score = score + the weight already allotted.
Step 8: Check whether antonyms are present in the SR table; if present, then score = score * (-1).
Step 9: Check the position vector of the noun and verb combination in the input answer and compare it with that of the correct answer to verify the noun-verb dependencies in the answer.
Step 10: If grammatical mistakes are present, deduct 2 marks from the net score.
Step 11: Now compute the summation of the scores allotted across the student's responses.
Step 12: If the calculated score is negative, the answer entered by the student is incorrect; else, if the score is in the same range as the already allotted scores, the student response is marked as CORRECT.
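The following is only a simplified sketch of Steps 5-10 in Python, not the authors' code: the database tables are replaced by Python dictionaries and sets, Steps 9, 11, and 12 are omitted or reduced to comments, and the keyword weights, synonyms, and antonyms are illustrative placeholders.

# Hypothetical, simplified sketch of the keyword-based scoring in Steps 5-10.
def score_response(response_words, keywords, synonyms, antonyms, grammar_mistakes=False):
    # response_words: stemmed words from the student's answer (the SR table is a plain set here).
    # keywords: keyword -> allotted weight; synonyms: synonym -> keyword it stands for.
    score = 0.0                                         # Step 5: start the student score at zero
    response = set(response_words)
    for keyword, weight in keywords.items():
        if keyword in response:                         # Step 6: keyword present in the response
            score += weight
        elif any(kw == keyword and syn in response      # Step 7: otherwise look for a synonym
                 for syn, kw in synonyms.items()):
            score += weight
    if response & antonyms:                             # Step 8: an antonym contradicts the key
        score *= -1
    if grammar_mistakes:                                # Step 10: flat 2-mark deduction assumed
        score -= 2
    return score                                        # Step 12: a negative score marks the answer incorrect

marks = score_response(
    ["program", "languag", "procedur", "orient"],       # stemmed student response
    keywords={"program": 4, "languag": 3, "procedur": 3},
    synonyms={"code": "program"},
    antonyms={"natur"},
)
print(marks)   # 10.0 when every keyword is matched and nothing contradicts the key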
4 Results:
Answer sheet:
Figure 3: Answer script
Marks: 9/10
Key:
1) Identify the security challenges existing in an IoT system.
As more enterprises embrace the Internet of Things, a host of new security vulnerabilities will emerge. The increased risk can be attributed to device limitations and to missed opportunities to enhance security. Here are 12 leading IoT security challenges that enterprises must address:
• Secure constrained devices
• Authorize and authenticate devices
• Manage device updates
• Secure communication
• Ensure data privacy and integrity
• Secure web, mobile, and cloud applications
• Ensure high availability
• Detect vulnerabilities and incidents
• Manage vulnerabilities
• Predict and pre-empt security issues
Attacking IoT:
• Default, weak, and hardcoded credentials
• Difficult to update firmware and OS
• Lack of vendor support for repairing vulnerabilities
• Vulnerable web interfaces (SQL injection, XSS)
• Coding errors (buffer overflow)
• Clear text protocols and unnecessary open ports
• DoS / DDoS
• Physical theft and tampering
5 CONCLUSION:
We conclude that this automatic answer script evaluator decreases the correction workload for the faculty and also helps to reduce the miscorrections that occur in manual evaluation. The proposed evaluator performs better than comparable approaches because of its use of a CNN for handwriting recognition, and it can be implemented easily.