A PROJECT REPORT ON
NyayaMitra-AI Driven app
Submitted
in partial fulfillment of the requirements for the award of
Bachelor of Technology
in
Information Technology
by
Shobhit Singh (2200270130164)
Trisha Kumari (2200270130183)
Samarth Gupta (2200270130148)
Soumya Shukla (2200270130170)
GHAZIABAD
Certificate
This is to certify that the report entitled NYAYA MITRA: AI DRIVEN
LEGAL APP submitted by Shobhit Singh (2200270130164), Samarth
Gupta (2200270130148), Trisha Kumari (2200270130183), and Soumya
Shukla (2200270130174) to the Dr. A. P. J. Abdul Kalam Technical Uni-
versity, Lucknow (U.P.) in partial fulfillment of the requirements for the
award of the Degree of Bachelor of Technology in Information Technol-
ogy is a bonafide record of the project work carried out by them under
my/our guidance and supervision. This report in any form has not been
submitted to any other university or institute for any purpose, to the best
of my knowledge.
Place: Ghaziabad
September 18, 2025
Acknowledgements
We would like to express our thanks to all the people who have helped
bring this project to the stage of completion. We wish to put on record
very special thanks to our major project mentor, Dr. Sunil Kumar, for
the support, guidance, encouragement, and valuable insight he provided
throughout the entire process. His mentorship has been pivotal in shaping
our project and leading us toward excellence.
We would like to thank our Head of the Department, Dr. Rahul Sharma,
who provided us with the resources and an environment that encourages
innovation and learning. We would also like to thank our teachers and
faculty members for all that they share at this crucial juncture in our
academic careers. Finally, we thank the many others who helped out or,
with their presence, indirectly contributed to this project.
Contents
Declaration i
Certificate ii
Acknowledgements iii
List of Figures vi
1 Introduction 1
1.1 Problem Statement of Project . . . . . . . . . . . . . . . . . 1
1.2 Scope of Project . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Detail of Problem Domain . . . . . . . . . . . . . . . . . . . 3
1.4 Gantt Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 System Requirements . . . . . . . . . . . . . . . . . . . . . . 5
1.6 Project Report Outline . . . . . . . . . . . . . . . . . . . . . 5
2 Literature Review 6
2.1 Related Study . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Research Gaps . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Objective of Project . . . . . . . . . . . . . . . . . . . . . . 16
3 Methodology Used 17
4 Designing of Project 23
4.1 0-Level DFD . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2 1-Level DFD . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.3 2-Level DFD . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.4 Use Case Diagram . . . . . . . . . . . . . . . . . . . . . . . 28
Bibliography 37
Appendix A 38
Appendix B 39
Appendix C 40
Appendix D 42
List of Figures
Chapter 1
Introduction
support for complex documentation and research. Officers are inclined
to prefer Nyaya Mitra rather than manual work since AI creates doc-
uments with much better quality and speed. Analysis can include the
initial complaint or subsequent evidence or both, depending on the stage
of the investigation. Our legal framework is a complex part of our gov-
ernance, since it houses our laws, and incorrect application of these laws
can have serious consequences. Errors in an FIR are of two kinds, proce-
dural and factual: procedural errors are mistakes of legal process, while
factual errors are inaccuracies in the recorded information. When either
kind of error enters a legal document, it can damage the case and be
detrimental to justice.
Legal complaints come in all shapes and sizes. Legal issues broadly fall
into two types: criminal and civil. The legal sections that may apply
differ depending on the nature and details of the complaint. Some cases
involve direct violations of the law, while others create legal complexities
through related statutes. Nyaya Mitra can readily process the intricate
details of a case; precise legal codes and minor details are difficult for
humans to analyze under pressure. Nyaya Mitra supports information
gathering, analysis, and document generation, and uses these to predict
the correct course of action. This study uses deep learning algorithms,
in the form of an intelligent assistant, to create accurate legal documents
with much-improved efficiency.
Transformer models from HuggingFace, to automatically suggest ap-
propriate legal sections for an FIR and classify legal documents into
different categories (e.g., judgments, petitions, applications).
• Performance Evaluation of Models: Compare different architec-
tures (e.g., BERT, DistilBERT) and assess their strengths and weak-
nesses using metrics such as accuracy, precision, and recall. This helps
in determining the best approach for legal text analysis.
• Multi-Modal Data Integration: Combine user-provided text with
other data types like uploaded case files, historical judgments, or rel-
evant legal statutes to build a more comprehensive model that can
ensure FIR accuracy, improve investigation outcomes, and aid in the
delivery of justice.
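Before the Transformer models above are trained, a toy keyword baseline is useful for sanity-checking the section-suggestion task. The sketch below is purely illustrative: the section labels and keywords are hypothetical placeholders, not verified legal mappings.

```python
import re

# Hypothetical keyword-to-section map; real mappings must come from legal experts.
SECTION_KEYWORDS = {
    "theft":    "Section 303 (theft)",
    "assault":  "Section 131 (assault)",
    "cheating": "Section 318 (cheating)",
}

def suggest_sections(complaint: str) -> list[str]:
    """Return candidate sections whose keyword appears in the complaint."""
    text = complaint.lower()
    return [sec for kw, sec in SECTION_KEYWORDS.items()
            if re.search(rf"\b{kw}\b", text)]

print(suggest_sections("The accused committed theft and assault on me"))
```

Such a baseline gives a floor against which the learned models' accuracy can be compared.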
1.4 Gantt Chart
A Gantt chart is a visual project management tool that displays tasks, their
duration, and dependencies on a timeline. It helps in tracking progress,
scheduling tasks, and improving team coordination.
1.5 System Requirements
1. Hardware Requirements:
Processor: Intel i7 or higher / AMD Ryzen 7 (or equivalent)
RAM: Minimum 16 GB (32 GB recommended for faster processing)
GPU: NVIDIA GTX 1080 or higher (for deep learning models)
Storage: SSD with at least 512 GB (to store datasets and models)
Display: High-resolution monitor (for visualization)
Additional Devices: High-speed internet, external storage, and cooling
systems for GPU-intensive tasks
2. Software Requirements:
Operating System: Windows 10/11, Ubuntu (Linux), or macOS
Programming Language: Python (preferred)
Libraries/Frameworks: TensorFlow / PyTorch (model training), OpenCV
(image processing), NumPy, Pandas, Matplotlib (data analysis and
visualization)
Development Environment: Jupyter Notebook, PyCharm, or VS Code
Chapter 2
Literature Review
Vikram Singh et al., 2022 [3]: This work classifies legal complaints into
four classes of crime using a hybrid framework called L-HTC. The frame-
work takes an FIR narrative as input and applies text normalization to
standardize the language and reduce noise in the input text. An entity
recognition scheme extracts the core elements of the crime, and feature
extraction techniques then capture the contextual characteristics of the
incident. A hybrid optimization method is applied to the feature vector,
producing a fully optimized dataset. Finally, several algorithms were
tested, among which the MLP algorithm performed best, with a precision
of 97.8% on the four crime types. This framework has strong potential to
help police officers draft FIRs accurately and to minimize human error in
such highly critical legal tasks.
Ananya Joshi et al., 2022 [4]: The researchers used transfer learning
across several deep learning models to identify which should be used for
suggesting legal sections from complaint narratives. Seven NLP feature
extraction methods were compared on five metrics, including accuracy
and F1-score. The analysis found that a fine-tuned RoBERTa pre-trained
model combined with an SVM classifier achieved an accuracy of 99.5%.
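The metrics cited in these studies (accuracy, precision, recall, F1-score) follow directly from confusion-matrix counts. A minimal, library-free computation for one class, on made-up labels:

```python
def prf(y_true, y_pred, positive):
    """Precision, recall and F1 for one class, from paired label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = ["theft", "theft", "assault", "theft"]
y_pred = ["theft", "assault", "assault", "theft"]
print(prf(y_true, y_pred, "theft"))   # precision 1.0, recall 2/3, F1 0.8
```

Per-class scores like these are what a macro- or micro-average over legal sections is built from.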
Priya Das et al., 2022 [5]: A deep learning network was trained from
scratch on a custom Indian legal document dataset, using data augmen-
tation and a combined loss function. The approach yielded promising
results, identifying key entities such as 'victim', 'accused', and 'offense'
with high accuracy. Compared with existing legal text analysis ap-
proaches, the proposed method provides a promising alternative, and its
modifications deliver significant improvements in results. The architec-
ture may be applied to a wide range of legal document analysis tasks
beyond the application studied.
approach aims to provide better foundational results than simple
keyword-based searches. The approach was trained and tested on a
dataset of 2,500 First Information Reports (FIRs) and court filings. Per-
formance was evaluated with metrics such as precision and recall, show-
ing significant improvement in accurately categorizing documents, with
an overall accuracy of 94%. The paper demonstrates NLP's ability to
structure and understand legal narratives reliably. The proposed NLP
pipeline architecture is made explicit, and its application to a prepared
collection of legal texts demonstrates its effectiveness. Future work may
focus on refining the model to suggest specific legal sections, not just
broad categories.
A. Verma et al., 2022 [2]: The researchers designed a new legal text
analysis method called Legal-BERT for FIR narratives, focusing on ac-
curacy and efficiency. It first cleans the user's input text with prepro-
cessing filters, then tokenizes the text into sections for better analysis.
Key legal entities and actions are extracted from each section, and finally
a specialized AI model analyzes this information to establish which legal
sections are applicable to the complaint.
Sanjay Kumar et al., 2023 [6]: This paper concentrates on automati-
cally suggesting appropriate legal sections from complaint narratives. The
authors compare classical machine learning methods (such as TF-IDF-
based classifiers) with the newer, more powerful deep learning approach,
which relies on a Transformer network to analyze text. The Transformer
proved significantly better than the classical methods and, in many set-
tings, achieved nearly 97% accuracy. However, in a few instances the
models were unable to differentiate between civil and criminal matters
with overlapping language. The authors suggest advancing this technol-
ogy so that the two domains can be distinguished reliably, for which
legal-domain-specific datasets should be designed. The findings show
that deep learning is promising for legal text analysis, though there is
still much scope for improvement.
existing programs too. However, one limitation encountered was that
training time was too long on a less powerful computer, and the training
time would roughly double with even larger datasets.
Table 2.1: Literature Review
2.2 Research Gaps
1. Limited Availability of Annotated Legal Datasets
Problem: There are very few high-quality, annotated, and publicly acces-
sible datasets of Indian legal complaints (FIRs) due to significant privacy
concerns and the difficulty in getting accurate, section-level annotations
from legal experts.
Impact: This scarcity limits the model’s ability to generalize across the
highly diverse population of criminal cases, which have wide variations in
narrative style and complexity.
Future Direction: Investigate text-based data augmentation techniques
or build synthetic legal datasets with the help of Generative Adversarial
Networks (GANs) and Large Language Models (LLMs).
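Short of GANs or LLMs, even simple rule-based augmentation can expand a small annotated set. The sketch below performs synonym replacement using a tiny hypothetical synonym table; a real system would use a curated legal thesaurus.

```python
import random

# Tiny hypothetical synonym table; illustrative only.
SYNONYMS = {
    "stole": ["took unlawfully", "pilfered"],
    "attacked": ["assaulted", "struck"],
}

def augment(sentence: str, rng: random.Random) -> str:
    """Replace known words with a randomly chosen synonym, keeping labels intact."""
    out = []
    for word in sentence.split():
        choices = SYNONYMS.get(word.lower())
        out.append(rng.choice(choices) if choices else word)
    return " ".join(out)

rng = random.Random(0)
print(augment("He stole the purse and attacked the victim", rng))
```

Because the legal section label of the complaint is unchanged by such rewording, each augmented sentence is a new training example with a known label.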
gle language (e.g., English), whereas in actual practice in India, com-
plaints are filed in numerous regional languages and dialects with unique
legal colloquialisms.
Impact: The usage of the model is thus restricted only to the specific
linguistic and jurisdictional scenarios that the training data was exposed
to.
Future Direction: Train and test models on multilingual, code-switched
datasets and explore cross-lingual transfer learning to improve versatility
and national-level applicability.
Future Direction: Employ more advanced text preprocessing techniques,
including robust spell-checkers, slang normalization, or domain adaptation
methods that can make the model more resilient to noisy data.
2.3 Objective of Project
1. Develop a Legal Section Suggestion System :
To design and implement a model using Transformer networks, specifically
leveraging a fine-tuned BERT architecture, for automated analysis of legal
complaint texts and suggestion of appropriate penal sections.
Chapter 3
Methodology Used
This section describes the proposed methodology, which can classify legal
complaints and suggest appropriate penal sections using NLP-based deep
learning techniques. This study aims to objectively identify applicable
legal statutes from complaint narratives through the use of modern deep
learning techniques based on natural language understanding and text
classification. The designed system takes as input raw legal complaint
texts and then performs a number of text processing steps towards legal
section suggestion. In this first step, the input text is preprocessed using
a Transformer-based model that aids in improving the quality of the text
by correcting spelling, normalizing slang, and rejecting noise.
3.1 Text Acquisition
This is the process in which text data is collected for further pro-
cessing. In this project’s case, these texts are legal complaints or
First Information Reports (FIRs) that will be collected from the
end-users through the user interface created on the web application.
3.2 Text Pre-processing
In this process, the text collected from the end-users is pre-processed
to remove noise (e.g., irrelevant characters, spelling errors) and to
make it useful for further processing. Raw text data is often unstruc-
tured and difficult to analyze directly. Pre-processed data is easier for
the model to use and analyze.
3.3 Entity & Intent Recognition
This is a technique used to simplify a large body of text by splitting it
into various parts, such as identifying key entities (names, locations,
dates) and the primary intent of the complaint. This makes it easier
for the model to analyze the text in subsequent stages.
3.4 Feature Extraction
In this process, useful semantic features are extracted from the large
text dataset by converting words and sentences into numerical repre-
sentations (embeddings). This eliminates the need for manual feature
engineering and helps the model understand the context and meaning
of the complaint.
3.5 Model Comparison
This process involves comparing the performance metrics (such as ac-
curacy, precision, and F1-score) obtained from various NLP models or
algorithms, in order to find the best-performing model for the task.
3.6 Classification / Section Suggestion
The final step is the classification of the complaint into predefined
crime categories and the suggestion of appropriate legal sections, dis-
playing the results with the highest possible accuracy.
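As a minimal sketch of the pre-processing and entity-recognition steps described above (not the actual system code), basic cleaning and regex-based entity spotting might look like this; the patterns shown are simple illustrative assumptions.

```python
import re

def preprocess(text: str) -> str:
    """Lowercase, strip non-alphanumeric noise, collapse whitespace (cf. 3.2)."""
    text = re.sub(r"[^a-z0-9\s:/-]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def extract_entities(text: str) -> dict:
    """Very rough entity spotting with regexes (cf. 3.3); illustrative only."""
    return {
        "dates": re.findall(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", text),
        "times": re.findall(r"\b\d{1,2}:\d{2}\b", text),
    }

raw = "On 12/05/2024 at 21:30, the complainant's shop was burgled!!!"
clean = preprocess(raw)
print(clean)
print(extract_entities(clean))
```

A production system would replace the regexes with a trained NER model, but the input/output contract of the step is the same.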
3.1.1 Dataset:
For an investigation on deep learning analysis in the suggestion of
legal sections from complaint texts, we need a well-curated dataset,
which comprises legal complaints along with annotations that indicate
the correct corresponding legal sections.
Here are some recommendations to acquire or design an appropriate
dataset:
– Public Datasets: One should begin to look for public datasets
of legal documents, such as court judgments or anonymized FIRs
that are sometimes made available for research.
– Kaggle: Go through relevant text datasets available on Kaggle.
There are a number of datasets related to legal text classification
and analysis.
– Collaborate with Law Firms or Police Departments: You
can contact legal professionals or institutions to get de-identified
datasets of legal complaints. One needs to follow all the ethi-
cal and legal provisions applicable in the case of such sensitive
information.
In training, it learns patterns and features from the input texts
and their corresponding legal section labels.
– Validation Set: The validation set is used for hyperparame-
ter tuning and to monitor the performance of the model during
training. It indicates how well the model generalizes to new,
unseen data. As a rule of thumb, allocate about 10-15% of the
dataset to the validation set.
– Testing Set: The testing set is kept completely isolated from
the training and validation sets and is used to judge the final
performance of the learned model. It provides an unbiased esti-
mate of how well the model will likely perform on new, real-world
data. Reserve the remaining 10-20% of the dataset for the testing set.
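The split described above can be sketched as follows; the 70/15/15 ratio is one choice within the ranges mentioned.

```python
import random

def split_dataset(items, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle and split items into train/validation/test partitions."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)               # shuffle so each split is representative
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))   # 70 15 15
```

Fixing the random seed makes the split reproducible, which matters when comparing models on the same partitions.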
text into high-dimensional numerical vectors (embeddings) that cap-
ture the context and meaning. This is helpful for further analysis, as
the model can work with these meaningful numerical representations.
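Neural embeddings require a trained model, but the idea of mapping text to numerical vectors can be illustrated with a small, self-contained TF-IDF computation; this is a classical stand-in, not the Transformer embeddings used in this project.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors for a list of tokenized documents."""
    vocab = sorted({w for d in docs for w in d})
    n = len(docs)
    df = {w: sum(w in d for d in docs) for w in vocab}   # document frequency
    idf = {w: math.log(n / df[w]) for w in vocab}
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append([tf[w] / len(d) * idf[w] for w in vocab])
    return vocab, vectors

docs = [["theft", "at", "night"], ["assault", "at", "market"]]
vocab, vecs = tfidf_vectors(docs)
print(vocab)
```

Note how "at", which occurs in every document, gets weight zero: TF-IDF, like learned embeddings, down-weights uninformative tokens.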
Chapter 4
Designing of Project
4.1 0-Level DFD
Figure 4.1 illustrates the general flow of a brain tumor detection system,
showing the interaction between patients, doctors, medical equipment,
and the system itself.
images become vital data in identifying and even classifying brain
tumors.
2. Brain Tumor Detection System: This is the core of the entire
process. It takes in MRI images, processes them, and applies machine
learning or deep learning algorithms, such as CNNs, to discern whether
an individual has a brain tumor. If a tumor exists, it further categorizes
the type as malignant or benign.
- The system relies on medical equipment for quality MRI images; clear,
accurate images enable a correct diagnosis. From the analysis, the sys-
tem produces output showing the existence of a tumor, its classification,
size, and position, among other relevant facts.
3. Doctors: The output of the brain tumor detection system is commu-
nicated to doctors, providing health care professionals with sufficient
information to make appropriate decisions about diagnosis and treat-
ment. The doctors can use the results to diagnose the condition, develop
treatment strategies, and provide a proper prognosis to the patients.
24
Figure 4.2: 1-level DFD
eas, key features are extracted from those areas. Feature extraction is
the process of quantifying important characteristics such as tumor size,
shape, texture, and intensity. These features serve as input to the clas-
sification model; feature extraction transforms image data into numerical
values describing the tumor.
4. Training the LR Model (1.4): This stage uses the extracted fea-
tures to train the Logistic Regression (LR) model. The logistic regression
model learns, from a labeled dataset, to map input features to output
classes. For brain tumor classification, it learns to distinguish the two
major tumor types, benign and malignant, from their extracted features.
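As an illustration of what the LR training stage learns, a from-scratch logistic regression on toy, hypothetical feature values (not real extracted features) can be sketched as follows.

```python
import math

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Fit a 2-feature logistic regression with plain stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] * xi[0] + w[1] * xi[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
            err = p - yi                     # gradient of log-loss w.r.t. z
            w[0] -= lr * err * xi[0]
            w[1] -= lr * err * xi[1]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Class 1 if the linear score is positive, else class 0."""
    return 1 if w[0] * xi[0] + w[1] * xi[1] + b > 0 else 0

# Toy "tumour features" (size, intensity); label 1 = malignant. Hypothetical data.
X = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8)]
y = [0, 0, 1, 1]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])
```

In practice a library implementation (e.g., with regularization) would be used, but the mapping from features to a benign/malignant decision is exactly this.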
Figure 4.3: 2-level DFD
3. Preprocessing
All images are resized to a uniform size, and pixel intensity values are
standardized to enhance image quality by removing variations captured
during acquisition.
Thresholding and Denoising: Removes unwanted noise and artifacts
from the images, giving prominence to the tumor regions.
Contour Detection: Traces the contours of any tumor present in the
brain image.
Extreme Points Detection: Detects the extreme points of the tumor
so that proper image segmentation can occur.
4. Image Augmentation
Artificially increases the dataset by applying data augmentation tech-
niques. Introducing variations such as rotation, scaling, or flipping helps
the model generalize.
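The flip and rotation augmentations mentioned above reduce, on a 2-D array, to simple index manipulations; a stdlib-only sketch (real pipelines would use an image library):

```python
def hflip(img):
    """Horizontal flip: reverse each row of a 2-D list."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2-D list 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
print(hflip(img))   # [[2, 1], [4, 3]]
print(rot90(img))   # [[3, 1], [4, 2]]
```

Each transform yields a new labeled example from an existing one, since flipping or rotating a scan does not change its diagnosis.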
5. ResNet50 Model: The preprocessed and augmented data is passed
to a modified version of the ResNet50 model, a very deep network well
suited to multi-class problems such as image classification. The model
is trained on the task of tumour classification and detection.
6. Classification Output: Finally, the system outputs the classifica-
tion result as benign or malignant using the features obtained from the
images.
Workflow: This workflow achieves precise classification of a given brain
tumor through a process that combines preprocessing, segmentation, and
deep learning.
1. Role of Developer
The developer handles the back-end of the system. The major tasks are:
– Input (MRI images): Configures the system to accept MRI
images.
– Data Preprocessing: Cleaning, normalization, and enhancement
of images for good-quality data.
– Feature Extraction: Relevant features from the MRI images,
such as size, texture, or intensity, are among the chief factors
for classification.
– Training: The model is trained on the extracted features using
machine learning algorithms, including deep models.
– Feature Matching: After training, the system uses the learned
features to classify and detect tumours in newly input images.
2. User's Role
The user interacts with a front-end interface, which makes the system
simpler to use. The most important activities are:
– Select Image: The patient selects an MRI image for analysis.
– Upload MRI Image: The system accepts the uploaded MRI
image from the patient.
– Get Prediction: The user receives the prediction the system
computes from its model as to the kind of tumor, benign or
malignant.
Chapter 5
Figure 5.1: Class Diagram
chine learning techniques such as SVM combined with graph-based clas-
sification have been applied for robust performance.
The patient's medical history and scan history are recorded, and the
patient is diagnosed accordingly.
2. Doctor: Attributes: Doctor ID, Name, Speciality, Experience,
Contact Details. The doctor reviews the scan and makes a diagnosis,
drawing on multiple reports prepared by various doctors.
3. Scan: Attributes: Scan ID, Scan Resolution, Scan Date, Image
File. Every scan belongs to a patient and is associated with tumor
detection and classification results.
4. Detection of Tumor: Attributes: Detection ID, Tumor Present,
Tumor Size, Tumor Location, Confidence Score. This records whether
a tumor exists, together with other information such as its size, loca-
tion, and the system's confidence in the detection.
5. Tumor Classification: Attributes: Classification ID, Tumor
Type, Risk Level, Classification Accuracy. Once a tumor is found, the
system determines whether it is benign or malignant, its likely risk
level, and the accuracy of the classification.
6. Report: Attributes: Report ID, Generation Date, Diagnosis,
Recommended Treatment. The report is produced from the scan re-
sults and the tumor classification, giving a diagnosis along with ap-
propriate treatment recommendations where needed.
Figure 5.3: Activity Diagram
reduces the total number of features to only those most significant for
ensuring accurate classification.
6. Training and Classification, Phase 1: The obtained features
are fed into a pre-trained machine learning model that classifies the
image as "Normal" or "Abnormal". If the image is classified as "Nor-
mal", no further processing is needed. If it is classified as "Abnormal",
it is passed on to the next phase.
7. Classification, Phase 2: Images classified as abnormal are sub-
classified into benign (non-cancerous) and malignant (cancerous). These
classifications are then used to guide treatment plans.
1. Original Image: The process starts with an MRI image of the
brain. That image forms the raw data source fed into further process-
ing.
2. Thresholding: In the first pre-processing step, median filtering is
applied to the image to remove noise. Noise removal enhances image
quality, smoothing it without losing essential features such as edges.
- Extraction: After filtering, features or regions of interest are ex-
tracted from the image. Such regions might be potential tumors and
are therefore the focus of extraction.
3. Segmentation: This step uses K-Means clustering to cluster the
image, grouping pixels into clusters based on similarity so that the
tumor area is separated from normal brain tissue. Segmentation helps
isolate the tumor region for further analysis.
4. SVM Classifier: The segmented image is passed to the SVM clas-
sifier. SVM is a binary classification algorithm that applies data-driven
decision rules to classify the tumor as benign or malignant.
5. Result: The output is generated based on the SVM's prediction,
reporting the tumor as benign or malignant. This output is then deliv-
ered to the user for diagnosis and treatment planning in the medical
field.
Bibliography
Appendix A
https://www.kaggle.com/datasets/adarshsingh0903/legal-dataset-sc-judgments-india-19502024/data
Appendix B
Appendix C
Appendix D
Figure 5.8: Plagiarism Report
Figure D.2 shows the AI report of our project, which is below 20
percent.