🌿 AgroVision AI: Multi-Agent Crop Disease Detection

An enterprise-grade AgTech proof-of-concept that bridges the gap between Computer Vision and Multi-Agent LLM Orchestration to provide real-time, expert-level agronomic diagnostics.

🎯 Project Overview

In precision agriculture, simply detecting a disease is not enough; farmers need immediate, actionable, and accurate treatment plans. AgroVision AI solves this by using a two-tier AI architecture:

  1. The Eyes (Computer Vision): A custom-trained YOLOv8 model detects crop diseases from leaf images.
  2. The Brain (CrewAI Multi-Agent System): Instead of relying on a single LLM prompt (which is prone to hallucinations), the system triggers a specialized crew of AI agents (a Chief Agronomist and a Treatment Specialist) to debate and generate a factual, step-by-step action plan.
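The key to the second tier is that both agents' tasks are anchored to the single label the vision model produced. A minimal pure-Python sketch of that anchoring (the real project uses CrewAI `Agent`/`Task` objects; the role names and prompt wording below are illustrative assumptions):

```python
def build_tasks(detected_disease: str) -> dict[str, str]:
    """Build each agent's task prompt from the YOLO label only, so the
    LLMs reason about the confirmed detection rather than free-associating."""
    return {
        "Chief Agronomist": (
            f"Analyze the biological impact and contagion risk of {detected_disease}. "
            "Base your analysis only on this confirmed detection."
        ),
        "Treatment Specialist": (
            f"Using the agronomist's analysis, formulate chemical and organic "
            f"treatments plus preventive measures for {detected_disease}."
        ),
    }
```

In the actual CrewAI flow these strings would become the `description` of each sequential task, with the specialist receiving the agronomist's output as context.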

πŸ—οΈ Architecture & Workflow

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 1. Image Upload β”‚ ───> β”‚ 2. YOLOv8 Model    β”‚ ───> β”‚ Detected: e.g.,     β”‚
β”‚  (Streamlit UI) β”‚      β”‚ (Object Detection) β”‚      β”‚ "Soybean Rust"      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                                               β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 3. CrewAI Orchestration (Multi-Agent System)                             β”‚
β”‚                                                                          β”‚
β”‚  πŸ§‘β€πŸ”¬ Agent 1: Chief Agronomist        πŸ‘¨β€πŸŒΎ Agent 2: Treatment Specialist  β”‚
β”‚  Analyzes biological impact and   ──>  Formulates chemical/organic       β”‚
β”‚  contagion risks.                      treatment & preventive measures.  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                               β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ 4. Final Output                                                          β”‚
β”‚ Bounding box visuals + Comprehensive, step-by-step agronomic report      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ“ Project Structure

agrovision-ai/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ __init__.py
β”‚   └── streamlit_app.py        # Main Streamlit UI (Frontend)
β”œβ”€β”€ core/
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ vision.py               # YOLOv8 inference and image processing logic
β”‚   └── crew_logic.py           # CrewAI multi-agent orchestration and LLM config
β”œβ”€β”€ data/
β”‚   β”œβ”€β”€ sample_images/          # Test images (healthy and diseased leaves)
β”‚   └── models/                 # Trained YOLO weights (e.g., yolov8n.pt)
β”œβ”€β”€ tests/
β”‚   └── __init__.py
β”œβ”€β”€ requirements.txt            # Project dependencies
β”œβ”€β”€ .env.example                # Template for environment variables
β”œβ”€β”€ .gitignore                  # Git ignore file (excludes weights, secrets, etc.)
└── README.md                   # Project documentation

πŸ› οΈ Tech Stack

  • Frontend: Streamlit (Interactive, state-managed UI)
  • Computer Vision: Ultralytics YOLOv8 (Real-time object detection)
  • Agent Orchestration: CrewAI & LangChain
  • LLM Provider: HuggingFace Inference API (meta-llama/Llama-3.1-8B-Instruct)
  • Image Processing: OpenCV & Pillow

πŸ’‘ Why This Architecture? (The Engineering Choice)

  • Hallucination Mitigation: By anchoring the LLM's context strictly to the YOLOv8 output, and dividing tasks among specialized agents via CrewAI, the system prevents generic or hallucinated agricultural advice.
  • Separation of Concerns: The vision model handles pixels; the LLM handles text logic. This allows independent scaling and fine-tuning of each component.
  • Cost-Effective: Utilizes the HuggingFace Router for inference, keeping API costs to a minimum while maintaining high reasoning capabilities.

πŸš€ Getting Started

1. Clone the Repository

git clone https://github.com/jeorgesilva/agrovision-ai.git
cd agrovision-ai

2. Install Dependencies

pip install -r requirements.txt

(Requires Python 3.9+)
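One plausible shape for `requirements.txt`, listing only the packages named in the Tech Stack above (exact version pins are up to the repository's own file):

```text
streamlit
ultralytics
crewai
langchain
opencv-python
pillow
```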

3. Setup Environment Variables

Create a .streamlit/secrets.toml file in the root directory and add your HuggingFace token:

HUGGINGFACEHUB_API_TOKEN = "your_hf_token_here"

Note: Ensure .streamlit/ is added to your .gitignore to prevent leaking API keys.

4. Run the Application

streamlit run app/streamlit_app.py

πŸ—ΊοΈ Roadmap & Future Enhancements

  • Custom Dataset Fine-Tuning: Train YOLOv8 on specific regional datasets (e.g., Brazilian Soybean or German Wheat diseases).
  • Offline Edge Deployment: Optimize the YOLO model using TensorRT for offline inference on farm equipment.
  • Drone Integration: Process batch images captured by agricultural drones (DJI/XAG) for field-level mapping.
  • Weather API Integration: Pass real-time weather data to the CrewAI agents to adjust treatment recommendations (e.g., "Do not spray today due to high winds").

πŸ‘¨β€πŸ’» Author

Jeorge Silva Junior
AI Engineer | Bridging Data Science and AgTech
LinkedIn | GitHub


Disclaimer: This is a portfolio proof-of-concept. Real-world agricultural application of chemicals should always be verified by a certified human agronomist.

