Automated Health Document Classification
INTRODUCTION
This chapter considers the background of the study, the statement of the problem, the aim and objectives, the methodology used to design the system, and the scope of the study. Health facilities make use of some kind of information system; these could be either manual or computerized. Among the functions that these systems provide, they are mainly used in collecting patient data.
Numerous patient data are being recorded on a daily basis, which forms a large body of data. Every day, physicians and other health workers are required to work with this "Big Data" in order to provide solutions. Some of the everyday tasks include information retrieval and data mining. Retrieving information from big data can be very laborious and time consuming. This has given rise to the study of text or document classification as a means of organizing and retrieving information from big data. Today, text classification is a necessity due to the very large amount of text documents that we have to deal with daily (Hull, 1996).
Text classification is a problem at the core of many information management and retrieval tasks, and a problem in information retrieval which has been well studied (Russell, 2018). With the growth of the World Wide Web, it is no longer feasible for a human observer to understand all the data coming in or even classify it into categories. In the health sector as well, numerous patient records are being collected every day.
1.3 Aim and Objectives of the Study
The software delivered from this project work will greatly reduce the time spent by doctors, physicians and other health workers in searching for and retrieving documents.
1. It helps students and other interested individuals who want to develop a similar application.
2. It introduces interested readers to machine learning.
3. It will serve as a source of materials for students who are interested in studying machine learning.
1.5 Definition of Terms
Machine Learning: the study and construction of algorithms that can learn from and make predictions on data.
JSP: Java Server Pages, a Java technology for creating dynamic web pages.
MySQL: an open-source relational database management system for creating and manipulating databases.
Servlet: a Java class used to extend a server's functionality.
Bootstrap: a front-end framework for faster and easier web development. It uses HTML, CSS and JavaScript.
CHAPTER TWO
LITERATURE REVIEW
The more relevant the representation is, the more relevant the classification will be. The second phase includes learning from the training corpus, building a model for the classes, and classifying new documents according to that model.
A related field is the extraction of information from texts and text mining (Pazienza, 1997). "Text mining" is mostly used to represent all the tasks that, by analyzing large quantities of text and identifying usage patterns, try to extract probably helpful (although only probably correct) categories.
The task of building a classifier for documents does not differ fundamentally from other classification tasks (Leopold, 2020).
One special characteristic of the text categorization problem is that the number of features tends to be very large. This motivates dimensionality reduction, which can be used either in choosing a subset of the original features (Brank, 2002), or transforming the features into new ones, that is, adding new features.
2.2.1 Tokenization
Tokenization is the process of breaking a stream of text into meaningful elements called tokens; the list of tokens is input to the next stage of text classification. Generally, tokenization occurs at the word level. Nevertheless, it is not easy to define what counts as a word; for instance, punctuation marks may or may not be added to the resulting list of tokens. Some languages, like Chinese, have no word boundaries at all. Simple whitespace-delimited tokenization also shows weakness with word collocations like New York, which must be considered a single token. Some ways to address this problem involve developing more sophisticated tokenization heuristics.
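A minimal sketch of word-level tokenization may make this concrete. This is an illustration only, not the OpenNLP tokenizer used later in the project; the collocation handling shown here is a deliberately naive assumption.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SimpleTokenizer {

    // Split a text on whitespace; punctuation stays attached to words,
    // which is one of the limitations of whitespace-delimited tokenization.
    public static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>(Arrays.asList(text.trim().split("\\s+")));
        // Naive collocation handling: merge the known multi-word token "New York".
        for (int i = 0; i < tokens.size() - 1; i++) {
            if (tokens.get(i).equals("New") && tokens.get(i + 1).equals("York")) {
                tokens.set(i, "New York");
                tokens.remove(i + 1);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("The patient moved to New York last year"));
    }
}
```

A real tokenizer would handle punctuation, abbreviations and an open-ended set of collocations rather than a single hard-coded pair.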
2.2.2 Stemming
Stemming is the process of reducing inflected (or sometimes derived) words to their stem, base or root form. The stem need not be identical to the morphological root of the word; it is usually sufficient that related words map to the same stem, even if this stem is not a valid root. In computer science, algorithms for stemming have been studied since 1968. Many search engines treat words with the same stem as synonyms.
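A toy suffix-stripping stemmer illustrates the idea. This is a sketch only; production systems use algorithms such as Porter's stemmer, and the suffix list below is an assumption chosen for illustration.

```java
public class SimpleStemmer {

    // Strip a few common English suffixes; the resulting stem need not
    // be a valid word, as noted above.
    public static String stem(String word) {
        String w = word.toLowerCase();
        String[] suffixes = {"ingly", "edly", "ing", "ed", "ly", "es", "s"};
        for (String s : suffixes) {
            // Keep at least three characters of stem to avoid over-stripping.
            if (w.endsWith(s) && w.length() - s.length() >= 3) {
                return w.substring(0, w.length() - s.length());
            }
        }
        return w;
    }

    public static void main(String[] args) {
        System.out.println(stem("infections")); // prints "infection"
    }
}
```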
2.2.3 Stop Words
Typically in computing, stop words are filtered out prior to the processing of natural language data (text). There is no single prepared list of stop words that is used by every tool; each tool selects its own stop word list in order to support phrase search. Any group of words can be selected as the stop words for a particular purpose. For some search engines, this is a list of common, short function words, like "the", "is", "at", "which" and "on", which create problems when performing text mining on phrases that contain them. It is therefore sometimes necessary to eliminate even lexical words, like "want", from phrases to raise performance.
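Stop word removal can be sketched as a simple filter. The word list below is illustrative only; as noted above, no universal stop word list exists.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class StopWordFilter {

    // A small illustrative stop word list (an assumption for this sketch).
    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("the", "is", "at", "which", "on", "a", "an", "and"));

    // Keep only the tokens that are not stop words.
    public static List<String> removeStopWords(List<String> tokens) {
        return tokens.stream()
                .filter(t -> !STOP_WORDS.contains(t.toLowerCase()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(removeStopWords(Arrays.asList("the", "patient", "is", "at", "risk")));
    }
}
```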
2.2.4 Vector Representation of the Documents
In the vector model, a document is represented by a group of identifiers, like, for example, index terms, which will be utilized in information retrieval. A document is generally denoted by an array of words; the group of all the words in the training corpus forms the vocabulary. A document can then be produced as a binary vector, assigning the value 1 if the document includes a given vocabulary word and 0 otherwise. Feature selection then reduces the dimensionality of the dataset by eliminating features that are not relevant to the classification. This produces lower computational needs for the text categorization algorithms (especially those that do not scale well with the feature set size) and a comfortable shrinking of the search space.
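The binary document vector just described can be sketched as follows; the vocabulary and document tokens are illustrative assumptions.

```java
import java.util.Arrays;
import java.util.List;

public class BinaryVectorizer {

    // Produce a binary vector over the vocabulary: 1 if the document
    // contains the word, 0 otherwise.
    public static int[] vectorize(List<String> vocabulary, List<String> documentTokens) {
        int[] vector = new int[vocabulary.size()];
        for (int i = 0; i < vocabulary.size(); i++) {
            vector[i] = documentTokens.contains(vocabulary.get(i)) ? 1 : 0;
        }
        return vector;
    }

    public static void main(String[] args) {
        List<String> vocab = Arrays.asList("malaria", "fever", "pressure");
        List<String> doc = Arrays.asList("malaria", "causes", "fever");
        System.out.println(Arrays.toString(vectorize(vocab, doc))); // [1, 1, 0]
    }
}
```

Feature selection, in this picture, simply shrinks the vocabulary before the vectors are built.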
Relying too closely on the training data risks fitting the classifier to the contingent characteristics of the training data rather than the constitutive characteristics of the categories. Feature extraction differs from feature selection approaches, but like them its aim is to decrease the feature set volume; the approach does not weight terms in order to neglect the lower-weighted ones, but derives new features instead.
Learning approaches applied to text categorization regularly vary in the model adopted: decision trees, naïve Bayes, rule induction, neural networks, nearest neighbors, probabilistic models, etc. Automated text classification is, however, still a major area of research, first because the effectiveness of present automated text classifiers is not errorless. The naïve Bayes classifier is often used in experiments due to its simplicity and effectiveness (Kim, 2002). Nevertheless, its performance can degrade when its assumptions are violated.
Schneider addressed these problems and showed that they can be resolved. Klopotek and Woch demonstrated the feasibility of learning very large tree-like Bayesian networks (Klopotek, 2003). The study suggests that tree-like Bayesian networks are able to handle a text classification task with one hundred thousand variables with sufficient speed and accuracy.
Other work has targeted the SVM: Shanahan and Roma explained an automatic process for adjusting the decision threshold of a generic SVM (Shanahan, 2003). Other authors explained a fast decision tree construction algorithm that takes advantage of the sparsity of text data, and a rule simplification method that translates the decision tree into a set of rules.
The kNN method with various decision functions, k values, and feature sets has also been studied. In practice, some classes are a bit harder than others to classify. Reasons for this are: very few positive training examples for the class, and a lack of good predictive features for it. The usual strategy is to use all the documents in the training corpus that have the category as positive training data and all the documents in the training corpus that belong to the other categories as negative training data; when too few positive examples are available, more careful learning is required.
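A minimal kNN classifier over binary term vectors can be sketched as follows. This is an illustration under simplifying assumptions: similarity is measured by the number of shared terms, and the vectors and labels are invented for the example.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KnnClassifier {

    // Similarity between two binary term vectors: number of shared terms.
    static int overlap(int[] a, int[] b) {
        int s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] & b[i];
        return s;
    }

    // Classify by majority label among the k most similar training vectors.
    public static String classify(int[][] vectors, String[] labels, int[] query, int k) {
        // Sort training indices by decreasing similarity to the query.
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < vectors.length; i++) order.add(i);
        order.sort((i, j) -> overlap(vectors[j], query) - overlap(vectors[i], query));
        // Tally votes among the k nearest neighbors.
        Map<String, Integer> votes = new HashMap<>();
        for (int n = 0; n < Math.min(k, order.size()); n++) {
            votes.merge(labels[order.get(n)], 1, Integer::sum);
        }
        String best = null;
        for (Map.Entry<String, Integer> e : votes.entrySet()) {
            if (best == null || e.getValue() > votes.get(best)) best = e.getKey();
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] train = {{1, 1, 0}, {1, 0, 0}, {0, 0, 1}};
        String[] labels = {"malaria", "malaria", "hypertension"};
        System.out.println(classify(train, labels, new int[]{1, 1, 0}, 1)); // prints "malaria"
    }
}
```

The scarcity problem described above shows up directly here: a class with few positive vectors rarely wins the vote.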
2.4 Review of Related Work
Li and Jain compared several classifiers: the naive Bayes classifier, the nearest neighbor classifier, decision trees and a subspace method, as well as their combination. Their experimental results indicate that the naive Bayes classifier and the subspace method outperform the other two classifiers on their data sets. Combining classifiers further improved accuracy compared to the best individual classifier. The classification task they considered is difficult because of the pattern classes used in their experiments (Li, 1998).
Another study evaluated a document classification task for German text, comparing different feature construction and selection methods and various classifiers. Their main result is that feature selection is necessary not only to reduce learning and classification time, but also to avoid overfitting (even for Support Vector Machines).
Ankit et al. discuss the different types of feature vectors through which a document can be represented and later classified. They compare the Binary, Count and TfIdf feature vectors and their impact on document classification. To test how well each of the three feature vectors performs, they used the 20-newsgroup dataset and converted the documents to all three representations. For each feature vector representation, they trained the Naïve Bayes classifier and then tested the generated classifier on test documents. In their results, they found that TfIdf performed 4% better than the Count vectorizer and 6% better than the Binary vectorizer when stop words are removed. When stop words are not removed, TfIdf performed 6% better than the Binary vectorizer and 11% better than the Count vectorizer. Also, the Count vectorizer performs better than the Binary vectorizer when stop words are removed, but worse when stop words are not removed. Thus, they conclude that TfIdf should be the preferred feature vector representation (Ankit, 2017).
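The TfIdf weighting they compare can be sketched in its simplest form. This is a minimal formulation for illustration; real vectorizers differ in smoothing and normalization details, and the corpus below is invented.

```java
import java.util.Arrays;
import java.util.List;

public class TfIdf {

    // Term frequency: fraction of the document's tokens equal to the term.
    public static double tf(List<String> doc, String term) {
        double count = 0;
        for (String t : doc) {
            if (t.equals(term)) count++;
        }
        return count / doc.size();
    }

    // Inverse document frequency: log(documents / documents containing the term).
    // Assumes the term occurs in at least one document.
    public static double idf(List<List<String>> corpus, String term) {
        double containing = 0;
        for (List<String> doc : corpus) {
            if (doc.contains(term)) containing++;
        }
        return Math.log(corpus.size() / containing);
    }

    public static double tfIdf(List<String> doc, List<List<String>> corpus, String term) {
        return tf(doc, term) * idf(corpus, term);
    }

    public static void main(String[] args) {
        List<List<String>> corpus = Arrays.asList(
                Arrays.asList("malaria", "fever"),
                Arrays.asList("hypertension", "pressure"),
                Arrays.asList("diarrhea", "fever"));
        System.out.println(tfIdf(corpus.get(0), corpus, "malaria"));
    }
}
```

The intuition behind their result is visible here: a term appearing in every document gets idf = 0 and so contributes nothing, which is roughly what stop word removal achieves by hand.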
CHAPTER THREE
3.0 Introduction
This chapter shows all the modules and components used to design the system, and how they work together. It also shows how the users of the system interact with it.

The existing system keeps health documents through stacking of physical files in file cabinets. This makes record management cumbersome:
1. Since files are kept in the office, documents could litter the office, which leads to dirtiness.
2. Retrieving information involves looking for the right file and even the right document to reveal such information.
3. There could be loss of data during document transactions, since records are kept manually.
3.2.1 Requirements of the System
For the system to serve its intended purpose properly, the system has to be trained: the algorithm learns from the training data to the point that it will produce a similar result when similar data is presented to it. In this project work we make use of the OpenNLP API for document classification. The OpenNLP API is a set of Java tools from the Apache Software Foundation for processing natural language text.

In order to carry out the classification, we first train a model. Our model covers three diseases: malaria, hypertension and diarrhea. We chose to start with these three diseases as a little Google search shows them to be among the most common. To train a model with OpenNLP, you need to create a file of training data. The training file format consists of a series of lines; the first word of each line is the category. Our training file holds lines of text containing the words malaria, hypertension and diarrhea, which we sourced from the web. Training is carried out with the DocumentCategorizerME class: its train method processes the file and outputs a model file.

After training, the model file produced will be used to classify the health documents uploaded by users.
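As described above, each line of the training file begins with the category followed by sample text. Drawing on the training data reproduced in the appendix, the file looks like:

```
Malaria is a mosquito-borne disease caused by a parasite
Hypertension also known as high blood pressure
Diarrhea can be prevented by improved sanitation
```

Here the first word of each line (Malaria, Hypertension, Diarrhea) serves as the category label, and the rest of the line is the sample text for that category.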
The use case diagram is used to show the interaction between the system's use cases and its clients without much detail. A use case diagram displays an actor and its use cases; the actors are also the users of the system.
[Use case diagram: actor Health Worker; use cases: Create Train File, Train Model, Upload Document, View Classification]
Sequence diagrams are used to show how objects interact as time passes from top to bottom: the interaction starts near the top of the diagram and ends at the bottom.
[Sequence diagram: TrainFile, ModelFile, UploadHealthDocument and ViewClassification messages exchanged between the actor and the system]
3.7 Class Diagrams
Classes are the building blocks of the system. We describe these classes using class diagrams and implement them in Java. In a class diagram, each class is modeled as a rectangle with three compartments: the top one contains the name of the class, the middle one contains the class attributes, while the bottom compartment contains the class operations. The following figure is the system flow chart for our system.
Figure 3.4 System Flow Chart
CHAPTER FOUR
DOCUMENTATION
4.0 Introduction
Testing involves running the system and observing the results to see if the system has been properly designed or if it contains bugs. This is usually done with data which has known results. To run the system, the client has to meet some hardware and software requirements. Also, since it has been designed as a web-enabled application, the server on which the system will be deployed also has to meet certain hardware and software requirements. The following are the requirements.
3. Apache Tomcat Server version 7
4. MySQL version 5
1. 1GB of RAM
2. 80 GB Hard Disk
5. Internet modem
2. Web browser
This section displays the sample interfaces and describes the functions of each page.

This is the first page that displays to the users of the system. It contains a brief introduction to the application as well as the login link for the users of the system.
Figure 4.1 Home Page
This page contains a login form for the administrator. The form includes two text input fields which capture the user name and password, a switch so the browser can remember the user details, and a sign up button.
Figure 4.2 Administrator Page
This is the dashboard for the administrator; it is the first page the
administrator sees after login. It contains links to upload the training file.
This page contains a login form for the user. The form includes two text input fields which capture the user name and password, a switch so the browser can remember the user details, and a sign up button.
This is the dashboard for the user; it is the first page the user sees after login.
4.2.6 Upload Document
The upload document page is used by the user to upload the health
document.
The program is installed on a server that meets the above requirements. Below are a few steps to take when installing the program on the server.
1. Ensure that the server meets the above software and hardware requirements.
2. The software will be built into a .war file; copy the .war file into the server's deployment directory.
3. Create a database called webscrap, and import the .sql file to create the tables.

On the client side:
1. Ensure that the client system meets the above software and hardware requirements.
JSP was chosen because, among other advantages, strong Java programming is not required, so it is suitable for non-Java programmers, and it gives built-in JSP tags and allows developers to create custom JSP tags and to use them.
The Apache OpenNLP library is a machine learning based toolkit for the processing of natural language text. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution. These tasks are usually required to build more advanced text processing services.
CHAPTER FIVE
5.1 Summary
This project made use of the OpenNLP Application Programming Interface, which is a Java API for training a model and classifying documents, together with a JavaScript framework for building the user interface. The software is also built as a web-enabled application.
5.2 Conclusion
Machine classification of text and text-based documents is most effective where manual classification would be regarded as overkill. Natural language processing has a lot of potential outside document classification; its relevance has been seen in the area of sentiment analysis.
5.3 Recommendation
1. The system can be hosted online on a Tomcat server, so that all users can access it (see the installation steps in chapter four).
Due to the limited time involved in developing this project work, some key features are left as future work:
2. When there is new data added to the model from the internet, a listener should be notified.
REFERENCES
Brank J., Grobelnik M., Milic-Frayling N., Mladenic D. (2002), "Interaction of Feature Selection Methods and Linear Classification Models", Proc. of the 19th International Conference on Machine Learning, Australia.
Fox C. (1992), "Lexical analysis and stoplists", in Information Retrieval: Data Structures and Algorithms, W. Frakes and R. Baeza-Yates, Eds. Prentice Hall, pp. 102–130.
Hull D., Pedersen J., and Schutze H. (1996), "Document routing as statistical classification", in AAAI Spring Symposium on Machine Learning in Information Access Technical Papers, Palo Alto.
Klopotek M. and Woch M. (2003), "Very Large Bayesian Networks in Text Classification", ICCS 2003, LNCS 2657, pp. 397–406.
Liu H. and Motoda (1998), Feature Extraction, construction and selection: A Data Mining
Perspective. Boston, Massachusetts: Springer.
Shanahan J. and Roma N. (2003), Improving SVM Text Classification Performance through Threshold Adjustment, LNAI 2837, pp. 361–372.
Wang Y. and X. Wang (2005), “A new approach to feature selection in text classification,” in
Proceedings of 4th International Conference on Machine Learning and Cybernetics,
vol. 6, pp. 3814–3819.
Wang Z.-Q., Sun X., Zhang D.-X., and Li X. (2006), "An optimal SVM based text classification algorithm", in Fifth International Conference on Machine Learning and Cybernetics, pp. 13–16.
APPENDIX A
APPENDIX B
UserController.java
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package controller;
import dao.DbConnection;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.io.PrintWriter;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Random;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import opennlp.tools.doccat.DoccatModel;
import opennlp.tools.doccat.DocumentCategorizerME;
import opennlp.tools.tokenize.Tokenizer;
import opennlp.tools.tokenize.WhitespaceTokenizer;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileUploadException;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
import org.mindrot.jbcrypt.BCrypt;
/**
*
* @author harmony
*/
public class UserController extends HttpServlet {
ServletContext servletContext = getServletContext();
String relativePath = servletContext.getInitParameter("fileUploads1.dir");
File file = new File(rootPath + File.separator + relativePath);
if (!file.exists()) {
file.mkdirs();
}
// Verify the content type
String contentType = request.getContentType();
if ((contentType.indexOf("multipart/form-data") >= 0)) {
// Create a factory for disk-based file items
DiskFileItemFactory fileFactory = new DiskFileItemFactory();
File filesDir = (File) (file);
fileFactory.setRepository(filesDir);
// Create a new file upload handler
ServletFileUpload upload = new ServletFileUpload(fileFactory);
// Parse the request to get file items.
List<FileItem> fileItemsList = upload.parseRequest(request);
// Process the uploaded items
Iterator<FileItem> fileItemsIterator = fileItemsList.iterator();
while (fileItemsIterator.hasNext()) {
FileItem fileItem = fileItemsIterator.next();
if (fileItem.isFormField()) {
String name = fileItem.getFieldName();
String value = fileItem.getString();
if (name.equals("first_name")) {
first_name = value;
}
if (name.equals("last_name")) {
last_name = value;
}
if (name.equals("phone")) {
phone = value;
}
if (name.equals("email")) {
email = value;
}
if (name.equals("password")) {
password = value;
}
if (name.equals("cpassword")) {
cpassword = value;
}
if (name.equals("email")) {
email = value;
}
} else {
profile_picture = rootPath + File.separator + relativePath + File.separator + fileItem.getName();
System.out.println("This is what's in profile_picture: " + profile_picture);
File file1 = new File(profile_picture);
System.out.println("This is what's in rootPath: " + rootPath);
System.out.println("This is what's in relativePath: " + relativePath);
System.out.println(fileItem.getName());
try {
fileItem.write(file1);
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
if (!cpassword.equals(password)) {
RequestDispatcher rd = request.getRequestDispatcher("/unmatch_password.jsp");
rd.forward(request, response);
} else {
DbConnection createUserAccount = new DbConnection();
// Hash User Data
//String hPassword = BCrypt.hashpw(password.trim(), BCrypt.gensalt(15));
//System.out.println("password.trim() is: " + password.trim());
//System.out.println("hPassword is: " + hPassword);
createUserAccount.createUserAccount(first_name, last_name, phone, email, password, profile_picture);
createUserAccount.logUserRegistration();
RequestDispatcher rd = getServletContext().getRequestDispatcher("/user_registration_successful.jsp");
rd.forward(request, response);
}
} catch (ClassNotFoundException | FileNotFoundException | FileUploadException error) {
System.out.print(error);
}
}
protected void userLogin(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
try {
String username = request.getParameter("username");
String password = request.getParameter("password");
DbConnection user_login = new DbConnection();
String[] user_details = user_login.userLogin(username, password);
String user_password = user_details[0];
String firstName = user_details[1];
String lastName = user_details[2];
String username1 = user_details[3];
String user_phone = user_details[4];
//String generatedOtp = Arrays.toString(generateOTP(request, response));
//String generatedOtpRemoveComma = generatedOtp.replace(",","");
//String generatedOtpTrim = generatedOtpRemoveComma.replace(" ","");
//String generatedOtpRemoveOpenBrace = generatedOtpTrim.replace("[","");
//String generatedOtpRemoveCloseBrace = generatedOtpRemoveOpenBrace.replace("]","");
String[] sessionData = {username1, firstName, lastName};
if (username != null || password != null) {
if (!"".equals(username) || !"".equals(password)) {
if (password.equals(user_password)) {
System.out.println("It matches");
HttpSession session = request.getSession(true);
String sessionId = session.getId();
System.out.println("sessionId is " + sessionId);
sessionMap.put(sessionId, sessionData);
String[] sessionMapValues = sessionMap.get(sessionId);
String sessionFirstName = sessionMapValues[1];
String sessionLastName = sessionMapValues[2];
String sessionUserName = sessionMapValues[0];
request.setAttribute("sessionId", sessionId);
request.setAttribute("sessionFirstName", sessionFirstName);
request.setAttribute("sessionLastName", sessionLastName);
request.setAttribute("sessionUserName", sessionUserName);
RequestDispatcher rd = request.getRequestDispatcher("/user/user_dashboard.jsp");
rd.forward(request, response);
} else {
System.out.println("It does not match");
}
}
}
try {
fileItem.write(file1);
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
}
if (document_title != null || health_document != null) {
classifyDocuments(request, response);
String[] sessionMapValues = sessionMap.get(sessionId);
String sessionFirstName = sessionMapValues[2];
String sessionLastName = sessionMapValues[1];
String sessionUserName = sessionMapValues[0];
request.setAttribute("sessionId", sessionId);
request.setAttribute("sessionFirstName", sessionFirstName);
request.setAttribute("sessionLastName", sessionLastName);
request.setAttribute("sessionUserName", sessionUserName);
}
else {
RequestDispatcher rd = request.getRequestDispatcher("/error_page.jsp");
rd.forward(request, response);
}
}
catch(Exception e){
e.printStackTrace();
}
}
public void classifyDocuments(HttpServletRequest request, HttpServletResponse response)
throws IOException, FileNotFoundException {
String modelFileName = "en-diseases.bin";
String rootPath = System.getProperty("catalina.home");
ServletContext servletContext = getServletContext();
String relativePath = servletContext.getInitParameter("fileUploads1.dir");
String modelFile = rootPath + File.separator + relativePath + File.separator + modelFileName;
// Set up a byte array to hold the file's content
byte[] content = new byte[0];
Tokenizer tokenizer = WhitespaceTokenizer.INSTANCE;
try{
// Create an input stream for the file
FileInputStream hamletInputStream = new FileInputStream(health_document);
// Figure out how much content the file has
int bytesAvailable = hamletInputStream.available();
// Set the content array to the length of the content
content = new byte[bytesAvailable];
// Load the file's content into our byte array
hamletInputStream.read(content);
String[] inputText = tokenizer.tokenize(new String(content));
InputStream modelIn = new FileInputStream(modelFile);
System.out.println("modelFile value is: " + modelFile);
System.out.println("model FIle assigned to modelIn variable");
System.out.println("modelIn variable value is: " + modelIn);
DoccatModel model = new DoccatModel(modelIn);
DocumentCategorizerME categorizer = new DocumentCategorizerME(model);
double[] outcomes = categorizer.categorize(inputText);
for (int i = 0; i < categorizer.getNumberOfCategories(); i++) {
String category = categorizer.getCategory(i);
System.out.println(category + " - " + outcomes[i]);
}
}catch(Exception e){
e.printStackTrace();
}
}
sessionMap.remove(sessionId);
RequestDispatcher rd = getServletContext().getRequestDispatcher("/user/userLogin.jsp");
rd.forward(request, response);
}
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
doPost(request, response);
}
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
try {
switch (user_action) {
case "register_user":
createProfile(request, response);
break;
case "user_login":
userLogin(request, response);
break;
case "go_to_upload_document":
goToUploadDocument(request, response);
break;
case "upload_document":
uploadDocument(request, response);
break;
case "logout":
logout(request, response);
break;
}
} catch (ServletException | IOException | ClassNotFoundException | FileUploadException | SQLException
error) {
error.printStackTrace();
}
}
/**
* Returns a short description of the servlet.
*
* @return a String containing servlet description
*/
@Override
public String getServletInfo() {
return "Short description";
}// </editor-fold>
AdministratorController.java
/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package controller;
import dao.DbConnection;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import opennlp.tools.doccat.DoccatFactory;
import opennlp.tools.doccat.DoccatModel;
import opennlp.tools.doccat.DocumentCategorizerME;
import opennlp.tools.doccat.DocumentSample;
import opennlp.tools.doccat.DocumentSampleStream;
import opennlp.tools.util.InputStreamFactory;
import opennlp.tools.util.MarkableFileInputStreamFactory;
import opennlp.tools.util.ObjectStream;
import opennlp.tools.util.PlainTextByLineStream;
import opennlp.tools.util.TrainingParameters;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.FileUploadException;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
/**
*
* @author harmony
*/
public class AdministratorController extends HttpServlet {
try {
long longValueOfLastLogon = Long.parseLong(lastlogon);
if (!"".equals(username) || !"".equals(password)) {
if (administrator_password.equals(password)) {
long longValueOfLastLogonForm = Long.parseLong(lastLogonForm);
if (longValueOfLastLogonForm > longValueOfLastLogon) {
sessionMap.put(sessionId, sessionData);
String stringValueOfLastLogonForm = String.valueOf(longValueOfLastLogonForm);
admin_login.updateAdministratorLastLogon(stringValueOfLastLogonForm, username);
request.setAttribute("sessionId", sessionId);
request.setAttribute("sessionFirstName", sessionFirstName);
request.setAttribute("sessionLastName", sessionLastName);
request.setAttribute("sessionUserName", sessionUserName);
RequestDispatcher rd = request.getRequestDispatcher("/admin/administrator_dashboard.jsp");
rd.forward(request, response);
}
}
}
}
error.printStackTrace();
}
}
request.setAttribute("sessionId", sessionId);
request.setAttribute("sessionFirstName", sessionFirstName);
request.setAttribute("sessionLastName", sessionLastName);
request.setAttribute("sessionUserName", sessionUserName);
RequestDispatcher rd = getServletContext().getRequestDispatcher("/admin/upload_training_file.jsp");
rd.forward(request, response);
}
}
fileFactory.setRepository(filesDir);
FileItem fileItem = fileItemsIterator.next();
if (fileItem.isFormField()) {
if (name.equals("sessionId")) {
sessionId = value;
}
} else {
try {
fileItem.write(file1);
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
}
String sessionFirstName = sessionMapValues[2];
String sessionLastName = sessionMapValues[1];
String sessionUserName = sessionMapValues[0];
trainModel(request, response);
request.setAttribute("sessionId", sessionId);
request.setAttribute("sessionFirstName", sessionFirstName);
request.setAttribute("sessionLastName", sessionLastName);
request.setAttribute("sessionUserName", sessionUserName);
RequestDispatcher rd = getServletContext().getRequestDispatcher("/admin/training_successful.jsp");
rd.forward(request, response);
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
doPost(request, response);
}
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
try {
String administrator_action = request.getParameter("administrator_action");
switch (administrator_action) {
case "administrator_login":
administratorLogin(request, response);
break;
case "go_to_upload_training_file":
goToUploadTrainingFile(request, response);
break;
case "upload_train_file":
uploadTrainFile(request, response);
break;
/** case "go_to_add_room":
goToAddRoom(request, response);
break;
case "add_room":
addRoom(request, response);
break;
case "logout":
logout(request, response);
break;**/
}
error.printStackTrace();
}
}
@Override
public String getServletInfo() {
return "Short description";
}// </editor-fold>
Malaria is a mosquito-borne infectious disease affecting humans and other animals caused by parasitic protozoans
Malaria is a mosquito-borne disease caused by a parasite
Malaria occurred worldwide and 445,000 people died
Malaria is caused by parasites from the genus Plasmodium
Malaria parasite in most countries
Malaria is an acute febrile illness
Malaria If not treated within 24 hours
Malaria can progress to severe illness
Malaria frequently develop one or more of the following symptoms
Malaria cases and deaths
Malaria transmission
Malaria control programmes
Malaria infection
Diarrhea can be prevented by improved sanitation
Diarrhea it is recommended that they continue to eat healthy food and babies continue to be breastfed
Diarrhea and a high fever
Diarrhea on average three times a year
Diarrhea are also a common cause of malnutrition and the most common cause in those younger than five years of age
Diarrhea is defined by the World Health Organization as having three or more loose or liquid stools per day
Diarrhea is defined as an abnormally frequent discharge of semisolid or fluid fecal matter from the bowel
Diarrhea means that there is an increase in the active secretion
Diarrhea is a cholera toxin that stimulates the secretion of anions
Diarrhea intestinal fluid secretion is isotonic with plasma even during fasting
Diarrhea occurs when too much water is drawn into the bowels
Diarrhea can also be the result of maldigestion
Diarrhea and distention of the bowel
Hypertension also known as high blood pressure
Hypertension was believed to have been a factor in
Hypertension is rarely accompanied by symptoms, and its identification is usually through screening
Hypertension may be associated with the presence of changes in the optic fundus seen by ophthalmoscopy
Hypertension with certain specific additional signs and symptoms may suggest secondary hypertension
Hypertension due to an identifiable cause
Hypertension accompanied by headache
Hypertension occurs in approximately
Hypertension in pregnancy
Hypertension during pregnancy without protein in the urine
Hypertension in newborns and young infants. In older infants and children
Hypertension results from a complex interaction of genes and environmental factors
Hypertension results from an identifiable cause
Hypertension can also be caused by endocrine conditions