PointNet and geometric reasoning for detection of grape vines from single frame RGB-D data in outdoor conditions

DOI:

https://doi.org/10.7557/18.5155

Keywords:

RGB-D, deep learning, agricultural robotics, outdoor vision, grape

Abstract

In this paper we present the use of PointNet, a deep neural network that consumes raw unordered point clouds, for the detection of grape vine clusters in outdoor conditions. We investigate the added value of feeding the detection network with both RGB and depth, contrary to the common practice in agricultural robotics of relying on RGB alone. A total of 5057 point clouds (1033 manually annotated and 4024 annotated using geometric reasoning) were collected in a field experiment conducted in outdoor conditions on 9 grape vines and 5 plants. The detection results show an overall accuracy of 91% (average class accuracy of 74%, precision 53%, recall 48%) for RGBXYZ data, and a significant drop in recall when only RGB or only XYZ data is used. These results suggest that the use of depth cameras is crucial for vision in agricultural robotics for crops where the color contrast between the crop and the background is complex. The results also suggest that geometric reasoning can be used to increase training set size, addressing a major bottleneck in the development of agricultural vision systems.
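For readers unfamiliar with the input format, the following minimal Python sketch illustrates how a single RGB-D frame can be converted into the fixed-size, unordered XYZRGB point set a PointNet-style network consumes. This is an illustration only, not the authors' published pipeline; the pinhole intrinsics (fx, fy, cx, cy), the 1024-point budget, and the function name rgbd_to_pointcloud are assumptions introduced here.

import numpy as np

def rgbd_to_pointcloud(rgb, depth, fx, fy, cx, cy, n_points=1024):
    """Back-project a depth map through assumed pinhole intrinsics and
    attach per-pixel color, yielding an (n_points, 6) XYZRGB array."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0  # discard pixels with no depth reading
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    colors = rgb[valid].astype(np.float32) / 255.0  # RGB scaled to [0, 1]
    cloud = np.column_stack([x, y, z,
                             colors[:, 0], colors[:, 1], colors[:, 2]])
    # PointNet consumes a fixed-size unordered set; sample with replacement
    # when the frame yields fewer valid points than the budget.
    idx = np.random.choice(len(cloud), n_points, replace=len(cloud) < n_points)
    return cloud[idx].astype(np.float32)

# Usage with a synthetic frame (hypothetical RealSense-like intrinsics):
rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(0.3, 3.0, (480, 640)).astype(np.float32)
cloud = rgbd_to_pointcloud(rgb, depth, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (1024, 6)

Sampling to a fixed point count reflects PointNet's constant-size input requirement; the row order is irrelevant because the architecture is permutation invariant.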

References

T. Adão, J. Hruška, L. Pádua, J. Bessa, E. Peres, R. Morais, and J. Sousa. Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote Sensing, 9(11):1110, 2017.

B. Arad, P. Kurtser, E. Barnea, B. Harel, Y. Edan, and O. Ben-Shahar. Controlled lighting and illumination-independent target detection for real-time cost-efficient applications. The case study of sweet pepper robotic harvesting. Sensors, 19(6):1390, 2019.

C. W. Bac, E. J. van Henten, J. Hemming, and Y. Edan. Harvesting robots for high-value crops: State-of-the-art review and challenges ahead. JFR, 31(6):888–911, 2014.

C. W. Bac, J. Hemming, B. van Tuijl, R. Barth, E. Wais, and E. J. van Henten. Performance evaluation of a harvesting robot for sweet pepper. JFR, 34(6):1123–1139, 2017.

R. Barth, J. Hemming, and E. J. van Henten. Design of an eye-in-hand sensing and servo control framework for harvesting robotics in dense vegetation. Biosystems Engineering, 146:71–84, 2016.

R. Berenstein, O. B. Shahar, A. Shapiro, and Y. Edan. Grape clusters and foliage detection algorithms for autonomous selective vineyard sprayer. Intelligent Service Robotics, 3(4):233–243, 2010.

R. Q. Charles, H. Su, M. Kaichun, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In CVPR, pages 652–660, 2017.

R. Cong, H. Chen, H. Zhu, and H. Fu. Foreground detection and segmentation in RGB-D images. In RGB-D Image Analysis and Processing, pages 221–241. Springer, 2019.

Y. Edan, S. Han, and N. Kondo. Automation in agriculture. In Springer handbook of automation, pages 1095–1128. Springer, 2009.

D. A. Eroshenkova and V. I. Terekhov. Automated determination of forest-vegetation characteristics with the use of a neural network of deep learning. In Advances in Neural Computation, Machine Learning, and Cognitive Research III, volume 2, page 295. Springer Nature, 2019.

A. Grunnet-Jepsen and D. Tong. Depth post-processing for Intel RealSense D400 depth cameras. New Technologies Group, Intel Corporation, 2018.

A. Kamilaris and F. X. Prenafeta-Boldú. Deep learning in agriculture: A survey. COMPAG, 147:70–90, 2018.

M. Kragh, R. N. Jørgensen, and H. Pedersen. Object detection and terrain classification in agricultural fields using 3D LiDAR data. In International Conference on Computer Vision Systems, pages 188–197. Springer, 2015.

P. Kurtser and Y. Edan. Statistical models for fruit detectability: spatial and temporal analyses of sweet peppers. Biosystems Engineering, 171:272–289, 2018.

P. Kurtser and Y. Edan. The use of dynamic sensing strategies to improve detection for a pepper harvesting robot. In IROS, pages 8286–8293, 2018.

S. Nuske, S. Achar, T. Bates, S. Narasimhan, and S. Singh. Yield estimation in vineyards by visual grape detection. In IROS, pages 2352–2358. IEEE, 2011.

O. Ringdahl, P. Kurtser, and Y. Edan. Evaluation of approach strategies for harvesting robots: Case study of sweet pepper harvesting. Journal of Intelligent and Robotic Systems, pages 1–16, 2018.

O. Ringdahl, P. Kurtser, and Y. Edan. Performance of RGB-D camera for different object types in greenhouse conditions. In ECMR, pages 1–6. IEEE, 2019.

B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: A database and web-based tool for image annotation. IJCV, 77(1-3):157–173, 2008.

I. Sa, Z. Ge, F. Dayoub, B. Upcroft, T. Perez, and C. McCool. DeepFruits: A fruit detection system using deep neural networks. Sensors, 16(8):1222, 2016.

A. Vit and G. Shani. Comparing RGB-D sensors for close range outdoor agricultural phenotyping. Sensors, 18(12):4413, 2018.

E. Zemmour, P. Kurtser, and Y. Edan. Automatic parameter tuning for adaptive thresholding in fruit detection. Sensors, 19(9):2130, 2019.

C. Zhao, L. Sun, P. Purkait, T. Duckett, and R. Stolkin. Dense RGB-D semantic mapping with Pixel-Voxel neural network. Sensors, 18(9):3099, 2018.

Published

2020-02-06