Vision system for automation of tea harvesting – as part of
1. Objective
The overall objective of the vision system is to facilitate automatic harvesting of tea shoots by
identifying the right tea shoots and localizing them. The vision system is to be developed in
progressive phases to achieve high accuracy in identifying the pod-bearing shoots in a tea bush.
Further, the system should be capable of classifying the pods into different classes such as
non-mature, mature and ideal to cut, and over-grown.
The system should also be able to locate each detected pod in the two-dimensional plane and
identify the right height of the shoot along the vertical axis. This information is then
provided to the mechanical cutter/grabber system.
2. Essential features
1. Detect Pod Bearing Shoots (PBS) in a selected area of the tea bush using static
images, video images, or both.
2. Estimate the position of each detected PBS within a reference frame. This would be a
common reference frame for both the vision system and the mechanical
cutter/grabber system.
3. Classify each detected PBS as Mature-Ready to be Plucked, Over-Grown, or Non-
Mature. This has to be achieved automatically through appropriate image analysis of
each detected PBS.
4. Estimate the cut position on the stem of the shoot based on the image analysis and a
set of predefined rules.
5. The vision system should be self-sufficient in terms of lighting, camera, power
backup and data backup systems.
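As a sketch of how features 2–4 might come together in software, the record below holds one detected PBS and applies a placeholder cut-position rule (cut a fixed offset below the pod tip). The class names, field names and the 40 mm offset are illustrative assumptions, not part of this specification:

```python
from dataclasses import dataclass
from enum import Enum

class PBSClass(Enum):
    NON_MATURE = "non-mature"
    READY = "mature-ready-to-pluck"
    OVER_GROWN = "over-grown"

@dataclass
class DetectedPBS:
    x_mm: float            # position in the common vision/cutter reference frame
    y_mm: float
    shoot_height_mm: float # shoot tip height along the vertical axis
    pbs_class: PBSClass

def cut_height(pbs: DetectedPBS, offset_mm: float = 40.0) -> float:
    """Hypothetical rule: cut a fixed offset below the pod tip.
    The real rule set will be defined with domain experts."""
    return max(pbs.shoot_height_mm - offset_mm, 0.0)
```

The actual cut rules may depend on the PBS class and leaf positions; this only illustrates the shape of the data handed to the cutter logic.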
3. High level approach
We propose to use a two-camera imaging system: one to capture the top view of the bush, and
a second to capture a longitudinal view of individual PBS.
While the top view is used to detect PBS in the selected area of the bush, the longitudinal
view is used to classify the shoot. The semantic characteristic features of each class
will be predefined by domain experts, for example in terms of the number of
leaves, colour of the leaves, size of the pod, and colour of the pod.
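For illustration, the expert-defined class characteristics could later be encoded as simple decision rules over measured features. The thresholds below (leaf count, pod length) are invented placeholders; the real values would come from the domain experts:

```python
def classify_pbs(leaf_count: int, pod_length_mm: float) -> str:
    """Rule-based PBS classification sketch.
    All thresholds are placeholder values for illustration only."""
    if leaf_count < 2 or pod_length_mm < 15.0:
        return "non-mature"
    if leaf_count <= 3 and pod_length_mm <= 30.0:
        return "mature-ready-to-pluck"
    return "over-grown"
```

In practice the features themselves (leaf count, pod size, colours) would first have to be extracted from the longitudinal-view image.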
[Figure: Top View imaging and Side View imaging]
Appropriate image processing techniques shall be developed to detect and classify the PBS.
Once the ready-to-cut shoots are identified along with their three-dimensional coordinates,
these coordinates are communicated to the cutter-grabber system.
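The handoff could, for instance, be a simple serialized message. The JSON layout and field names below are assumptions for illustration only; the actual interface will be defined jointly with the cutter-grabber design:

```python
import json

def cutter_message(shoots):
    """Pack ready-to-cut shoot coordinates into a JSON message for
    the cutter-grabber system. Each shoot is an (x, y, z) triple in
    millimetres in the common reference frame, where z is the cut
    height. Field names are placeholders."""
    return json.dumps({
        "count": len(shoots),
        "targets": [{"x": x, "y": y, "z": z} for (x, y, z) in shoots],
    })
```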
At a high level, the vision system would consist of three essential sub-systems, namely the
imaging devices, the image processing module, and the vision-cutter interface module.
[Figure: Vision System block diagram – Imaging Devices, Image Processing Module, Vision-Cutter Interface Module]
4. Development Methodology
Software & Hardware
We envisage the majority of the vision system to be built using open-source software. All vision
and image processing programming will be based on the OpenCV framework. Software development
will be carried out on a Linux platform on a personal computer, where all the required image
processing programs will be developed. However, the choice of hardware for field deployment of
the vision system will be made as part of the final design, considering both the software
processing requirements and the field conditions.
Image data Collection
Vision system development will require substantial image data to be collected from the
plantation. We envisage collecting this data using commercially available digital SLR cameras. A
simple structure will be designed for mounting the camera to collect image data for development
purposes. However, the choice of imaging system for field deployment of the vision system will
be made as part of the final design, based on the imaging requirements that evolve during
development. The imaging devices will be an integrated part of the harvester system along with
the cutter-grabber, and will be covered by the overall design of the harvester.
Vision System development
Essentially, the development process will be iterative, involving cycles of algorithm
development, software programming, and testing with image data. This will involve several rounds
of data collection under varying field conditions, followed by testing and refining the
algorithms to achieve optimum results.
Road Map for implementing the first Preliminary Vision concept.
Our approach to vision is based on a two-step process:
1. Identification and localization of Pod Bearing Shoots (PBS). This is based on image
analysis of the top-view images of the tea bush to identify each PBS and its position with
respect to a fixed reference frame.
2. Analysis of the longitudinal view of each PBS identified in the previous step to classify
it into an appropriate class.
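A toy sketch of step 1, assuming the top-view camera looks straight down and has been calibrated so that one pixel spans a known number of millimetres. A real detector would operate on OpenCV images; here a tiny greyscale grid stands in for a "greenness" map of the bush, and any pixel above a threshold is treated as part of a PBS:

```python
def locate_pbs(grid, threshold, mm_per_px, origin_px):
    """Return the centroid of above-threshold pixels, mapped into
    the common vision/cutter reference frame (millimetres).
    mm_per_px and origin_px come from the camera calibration;
    both are assumptions in this sketch."""
    hits = [(x, y) for y, row in enumerate(grid)
            for x, v in enumerate(row) if v > threshold]
    if not hits:
        return None  # nothing detected in this area of the bush
    cx = sum(x for x, _ in hits) / len(hits)
    cy = sum(y for _, y in hits) / len(hits)
    ox, oy = origin_px
    return ((cx - ox) * mm_per_px, (cy - oy) * mm_per_px)
```

A production version would segment individual shoots (e.g. connected components in OpenCV) rather than take one global centroid; this only illustrates the pixel-to-reference-frame mapping.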
We propose to develop a preliminary algorithm for identification and localization and implement
it using OpenCV libraries on a Linux system. The high-level steps for this activity are:
1. Image data collection from plantation
2. Survey of available open source libraries.
3. Design of algorithm
4. Software development and Testing.
5. Training of the vision system with image data
6. Testing of vision system for identification and localization on the collected image data.
7. Review and identify the further development requirements.
Approximate timeline for the above steps:

Step  Activity                                                      Duration
1     Image data collection from plantation                         3 weeks
2     Survey of available open source libraries                     2 weeks
3     Design of algorithm                                           4 weeks
4     Software development and testing                              6 weeks
5     Training of vision system with image data                     1 week
6     Testing of vision system for identification and
      localization on the collected image data                      1 week
7     Review and identify further development requirements          1 week
Resource requirements:
The above timelines are based on one engineer working full time. However, the skills and
experience required for algorithm design are different from those required for the rest of the
activities. Therefore, the resource requirement would be as follows:
1 Software engineer (Full time)
◦ Skills required: C, C++, Linux, OpenCV
1 Image processing /vision engineer (Part Time)
◦ Skills required: Signal processing, image processing in particular, well versed with usage
of OpenCV libraries.
◦ Some assistance will be required from plantation staff in facilitating data collection in
the field.
Tools and materials
1 DSLR Camera (Nikon - DSLR D3300, http://www.nikon.co.in/en_IN/product/digital-slr-
cameras/d3300)
Structure for mounting the camera in the field along with lighting source.
1 Personal Computer (Laptop, Specifications will be provided)
Proprietary software for the DSLR camera to operate the camera through PC. (Nikon
Camera Control Pro, http://www.nikon.co.in/en_IN/product/software/camera-control-pro)
Related Research literature
1. X.F. Wang, D.S. Huang, J.X. Du, H. Xu, and L. Heutte, “Classification of plant leaf images
with complicated background,” Applied Mathematics and Computation, vol. 205, pp. 916–
926, 2008.
2. H. Zhiyi, C. Quansheng, and C. Jianrong, "Identification of green tea (Camellia sinensis)
quality level using computer vision and pattern recognition," in Proceedings of the 2012
International Conference on Biological and Biomedical Sciences, Advances in
Biomedical Engineering, 2012, pp. 20–28.
3. Hai Vu, Thi-Lan Le, Thanh-Hai Tran, and Thuy Thi Nguyen, "A vision-based method for
automatizing tea shoots detection," in Proceedings of ICIP 2013, pp. 3775–3779.
Related Technology
1. Computer Science professor develops mobile app to identify plant species,
http://techventures.columbia.edu/news/computer-science-professor-develops-mobile-app-
identify-plant-species