The purpose of this module is to provide a framework in which selected/approved users can write their own scripts to test, compute, and process data using the data already available in the system. It will act as a plug-and-play feature for users who want to write custom scripts to test, compute, and process data on the fly.
Users will be able to do the following in the system:
• Run their own scripts (AI/ML models, algorithms, etc.)
• Upload their own data/rules (custom data collected from other sources)
• Use system-generated rules (select pre-trained algorithms/methods such as Random Forest or neural-network-based models) and data (repository data related to crop and other modules such as weather and soil)
• Save their scripts and run them again in the future if they wish
About the Framework:
This framework will enable users, with little extra coding, to access pre-existing tool libraries that are organised into several categories in order to carry out data-processing activities.
To write scripts for spatial-temporal data processing, spatial analytics, machine learning, aggregations, combining datasets, and downloading results, users will have access to a Python-based scripting environment on the platform, where they can install and import the necessary packages and libraries for data processing.
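As an illustration only, a user script in this environment might combine two repository datasets with pandas; the file and column names below are assumptions, not part of the specification.

```python
# Minimal sketch of a user script that combines datasets; file and
# column names are hypothetical placeholders.
import pandas as pd

# Assumed repository exports: per-district yields and soil properties.
yields = pd.read_csv("district_yields.csv")  # columns: district, yield_t_ha
soil = pd.read_csv("district_soil.csv")      # columns: district, soil_ph

# Combine the datasets on the shared district key.
combined = yields.merge(soil, on="district", how="inner")

# Simple aggregation: mean yield per soil-pH band.
combined["ph_band"] = pd.cut(combined["soil_ph"], bins=[4, 5.5, 7, 8.5])
summary = combined.groupby("ph_band", observed=True)["yield_t_ha"].mean()

# Save the result so it can be downloaded from the user workspace.
summary.to_csv("yield_by_soil_ph.csv")
```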
The system will be able to create processing user accounts, and each account must have a maximum storage capacity for user-specific files such as user scripts and the vector geometry of administrative boundaries or area-of-interest boundaries. For offline usage, the user will be able to download processed data.
For both raster and vector operations, the environment will provide pre-existing data-processing methods, including mosaicking, clipping, layer stacking, zonal statistics, intersection, union, merge, and dissolve. These can be combined with either bespoke algorithms or existing open-source packages.
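For instance, several of the vector operations listed above are already available in the open-source GeoPandas package; the sketch below uses hypothetical layer and attribute names.

```python
# Illustrative vector operations with GeoPandas; layer names, the
# district filter, and attribute names are assumptions.
import geopandas as gpd

districts = gpd.read_file("districts.shp")   # administrative boundaries
cropland = gpd.read_file("cropland.shp")     # LULC-derived crop polygons

# Clip the cropland layer to a single district of interest.
aoi = districts[districts["name"] == "ExampleDistrict"]
cropland_aoi = gpd.clip(cropland, aoi)

# Overlay operations: intersection and union of the two layers.
intersected = gpd.overlay(cropland, districts, how="intersection")
unioned = gpd.overlay(cropland, districts, how="union")

# Dissolve district polygons into state-level polygons.
states = districts.dissolve(by="state_name")
```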
The system will provide many functions for processing and analysing data, such as time-series analysis, spatial analysis, and image processing, and should offer a broad array of functions and methods for manipulating data.
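As a sketch of the kind of time-series analysis this enables, the example below smooths an NDVI record and computes monthly anomalies with pandas; the input file and column names are assumptions.

```python
# Time-series sketch: smoothing and monthly anomalies for an NDVI
# record; the CSV layout is a hypothetical repository export.
import pandas as pd

ndvi = pd.read_csv("ndvi_timeseries.csv",
                   parse_dates=["date"], index_col="date")

# Smooth with a centred three-observation rolling mean.
smoothed = ndvi["ndvi"].rolling(window=3, center=True).mean()

# Monthly means, and anomalies against the long-term monthly average.
monthly = ndvi["ndvi"].resample("MS").mean()
climatology = monthly.groupby(monthly.index.month).transform("mean")
anomaly = monthly - climatology
```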
The system will support advanced remote-sensing image-processing techniques, including land-cover classification, change detection, and the computation of spectral indices. It should also offer mosaicking, image fusion, and image-enhancement techniques.
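As an example of a spectral-index computation, the sketch below derives NDVI from single-band red and near-infrared rasters using the open-source rasterio package; the file names are assumptions about how the repository exposes satellite bands.

```python
# NDVI sketch with rasterio and NumPy; input band file names are
# hypothetical.
import numpy as np
import rasterio

with rasterio.open("red_band.tif") as red_src, \
     rasterio.open("nir_band.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")
    profile = red_src.profile

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
denom = nir + red
ndvi = np.where(denom == 0, 0.0,
                (nir - red) / np.where(denom == 0, 1, denom))

# Write the index out with the same georeferencing as the input.
profile.update(dtype="float32", count=1)
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi.astype("float32"), 1)
```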
The system will make it possible for users to perform geospatial analysis and modelling, including spatial interpolation, suitability analysis, and the simulation of environmental processes. It should make spatial-analysis functions and modelling tools accessible.
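Spatial interpolation, for example, could be scripted directly against point observations; the sketch below implements simple inverse-distance weighting (IDW) with NumPy and SciPy on synthetic station data.

```python
# Inverse-distance-weighting (IDW) interpolation sketch; the station
# coordinates and rainfall values are synthetic stand-ins.
import numpy as np
from scipy.spatial import cKDTree

def idw(points, values, targets, k=6, power=2.0):
    """Interpolate scattered `values` at `targets` via IDW."""
    tree = cKDTree(points)
    dist, idx = tree.query(targets, k=k)
    dist = np.maximum(dist, 1e-12)            # avoid division by zero
    weights = 1.0 / dist**power
    return (weights * values[idx]).sum(axis=1) / weights.sum(axis=1)

# Synthetic rainfall stations interpolated onto a regular grid.
rng = np.random.default_rng(0)
stations = rng.uniform(0, 100, size=(25, 2))
rainfall = rng.uniform(10, 80, size=25)
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = idw(stations, rainfall, grid).reshape(gx.shape)
```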
User actions will be encrypted and recorded in the system so that users can save their scripts and run them again at a later time.
Other Salient Features:
The platform will provide a Python-based scripting environment where users can write
scripts for machine learning, aggregations, spatial-temporal data processing, spatial
analytics, combining datasets, and downloading results within the modules developed
under Krishi DSS. Users can also install and import the necessary packages and libraries for data processing.
Thanks to pre-existing tool libraries listed under several categories, users will need to write minimal additional code to handle data.
The system will also utilise cloud-computing infrastructure to guarantee scalability and efficient handling of massive geospatial data. For best performance, it should support distributed computing and parallel processing.
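As a minimal illustration of parallel processing, the sketch below fans a per-district computation out across CPU cores using only the Python standard library; the platform's actual distributed backend is not specified here.

```python
# Parallel-processing sketch with the standard library; the per-district
# workload is a placeholder for a real analysis.
from concurrent.futures import ProcessPoolExecutor

def process_district(district_id):
    # Stand-in for an expensive per-district computation.
    total = sum(i * i for i in range(100_000))
    return district_id, total

if __name__ == "__main__":
    district_ids = range(1, 33)
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(process_district, district_ids))
    print(f"processed {len(results)} districts in parallel")
```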
The administrator will be able to update or modify the processing algorithms employed by the system in the various dashboards. Users should be able to test an updated algorithm to confirm that it produces the intended results correctly.
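Such a verification step could be as simple as checking the updated algorithm against known inputs and expected outputs; the sketch below assumes a hypothetical NDVI-style function as the algorithm under test.

```python
# Sketch of verifying an updated algorithm against expected results;
# the function under test is a hypothetical stand-in.
import numpy as np

def updated_algorithm(nir, red):
    """Placeholder for the administrator's updated processing step."""
    return (nir - red) / (nir + red)

def test_updated_algorithm():
    nir = np.array([0.6, 0.8], dtype="float32")
    red = np.array([0.2, 0.4], dtype="float32")
    expected = np.array([0.5, 1.0 / 3.0], dtype="float32")
    np.testing.assert_allclose(updated_algorithm(nir, red),
                               expected, rtol=1e-5)

test_updated_algorithm()
print("updated algorithm produced the expected results")
```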
Facilities for browsing and viewing pre-existing datasets under several categories, including DEM, satellite data, LULC, and thematic layers, will be available within the environment. Users should be able to use or import these datasets when creating custom scripts or processing flows.
Each user account will be allotted a set amount of storage for user-specific files such as area-of-interest boundaries, user scripts, and the vector geometry of administrative boundaries.
Student communities will have access to basic functionalities, such as simple editing tools. Scientists and researchers will have access to a limited set of functionalities, such as editing tools, and to some of the rules and data available in the module.
Use cases in Agriculture:
In this module, predictive analytics technology will be used in agriculture to help farmers better
predict crop yields, forecast demand for specific crops, and optimize irrigation and fertilizer
usage. By analyzing past data patterns, predictive analytics can provide insights that can help
farmers make more informed decisions about when to plant, how to care for their crops, and
what prices to charge for their produce. In addition, predictive analytics can identify early
warning signs of crop pests and diseases, allowing farmers to take preventive measures and
avoid or mitigate potential damage. Some of the applications are as follows:
Precision agriculture
One way to use algorithms in agriculture research is to implement precision agriculture: the practice of applying the right inputs, such as water, fertilizer, and pesticides, to the right place at the right time, based on the specific needs of each crop and field. Precision agriculture can help farmers reduce costs, increase productivity, and improve environmental sustainability.
Algorithms can enable precision agriculture by analyzing data from sensors, drones, cameras,
and GPS devices, and providing recommendations or automated actions for optimal crop
management. For example, an algorithm can detect weed growth and apply herbicides only to
the affected areas, or adjust irrigation levels based on soil moisture and weather conditions.
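A rule of this kind might look like the sketch below, which turns a soil-moisture reading and a rain forecast into an irrigation recommendation; the thresholds and parameter names are illustrative assumptions, not agronomic guidance.

```python
# Illustrative moisture-driven irrigation rule; thresholds are
# hypothetical and would come from crop- and soil-specific calibration.
def irrigation_advice(soil_moisture_pct, rain_forecast_mm,
                      wilting_point=12.0, field_capacity=32.0):
    """Return an irrigation recommendation for one field."""
    if rain_forecast_mm > 10.0:
        return "skip: sufficient rain forecast"
    if soil_moisture_pct <= wilting_point:
        return "irrigate now: soil at or below wilting point"
    refill_point = 0.5 * (wilting_point + field_capacity)
    if soil_moisture_pct < refill_point:
        return "irrigate soon: soil below refill point"
    return "no irrigation needed"

print(irrigation_advice(soil_moisture_pct=10.5, rain_forecast_mm=2.0))
```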
Crop yield prediction
One of the most common and important uses of algorithms in agriculture research is to predict
crop yield, or how much harvest a farmer can expect from a given plot of land. Crop yield
prediction can help farmers plan their planting, irrigation, fertilization, and harvesting strategies,
as well as their market prices and supply chains. Algorithms can use various types of data, such
as weather, soil, satellite imagery, and historical records, to model and forecast crop yield with
high accuracy and efficiency. For example, a machine learning algorithm can learn from past
data and adjust its predictions based on new information, such as rainfall or pest infestation.
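As a sketch of such a model, the example below trains scikit-learn's RandomForestRegressor on synthetic weather/NDVI features standing in for historical records; the feature choices and coefficients are assumptions for illustration.

```python
# Yield-prediction sketch with a random forest; the synthetic data
# stands in for repository weather, soil, and satellite records.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(300, 1200, n),   # seasonal rainfall (mm)
    rng.uniform(20, 35, n),      # mean temperature (deg C)
    rng.uniform(0.2, 0.9, n),    # peak NDVI
])
# Hypothetical yield relationship (t/ha) plus noise.
y = (0.004 * X[:, 0] - 0.05 * X[:, 1] + 3.0 * X[:, 2]
     + rng.normal(0, 0.3, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```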
Crop loss detection
A third way to use algorithms in agriculture research is to detect crop loss due to harmful effects
of pests, insects, or environmental factors on plant health and quality. Crop diseases can cause
significant losses in yield and income for farmers, as well as pose risks to food security and
safety. Algorithms can help farmers and researchers identify and diagnose crop diseases early
and accurately, and prevent or treat them effectively. Algorithms can use image processing,
computer vision, and deep learning techniques to analyze photos or videos of crops, and classify
them into healthy or diseased categories, or even into specific types of diseases. For example,
an algorithm can recognize the symptoms of damage on a leaf, and suggest the appropriate
treatment or prevention measures.
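A deep-learning classifier for this task could follow the compact convolutional sketch below (PyTorch); the layer sizes are illustrative, and the random tensor stands in for a real batch of leaf photos.

```python
# Compact CNN sketch for healthy-vs-diseased leaf classification;
# architecture and input size are illustrative choices.
import torch
import torch.nn as nn

class LeafClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = LeafClassifier()
batch = torch.randn(4, 3, 64, 64)      # four 64x64 RGB "leaf photos"
logits = model(batch)                  # shape: (4, 2)
predicted = logits.argmax(dim=1)       # 0 = healthy, 1 = diseased
```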
Validations:
Processing Time: The task manager should display the estimated processing time, or a note indicating it.
Performance: The platform should enable fast execution of bespoke scripts so that data processing is done quickly and efficiently on the fly.
Usability: The user interface of the platform and scripting environment should be intuitive and user-friendly, and should provide clear instructions and guidance for writing scripts, importing datasets, and accessing pre-existing tools.
Compatibility: The platform should offer a broad variety of packages and libraries for data processing to ensure compatibility with popular, widely used tools in the Python ecosystem.
Data Source Integration: The platform should facilitate the smooth integration of new data sources into the data library so that users can add and use a variety of datasets in their own scripts and processing flows.
Approval Workflow Efficiency: The account approval workflow should be optimized and streamlined to minimize delays and ensure timely access to the custom scripting environment for authenticated users.
Save and Reuse Custom Scripts: Users should be able to save custom scripts to their user workspace and reuse previously stored scripts under their account.