Deep learning interpretation of field crop images acquired with the Airphen multispectral phenotyping imaging system
Published: 2019-02-25 16:15:27
Hiphen, a spin-off of INRA (the French National Institute for Agricultural Research), is one of the more innovative companies in agriculture and among the first to combine remote sensing with phenotyping. Its Airphen multispectral phenotyping imaging system represents the state of the art in this field, and as field phenotyping research advances rapidly, the system continues to lead progress in phenotyping.
北京欧亚国际科技有限公司 is the China distributor for Hiphen, fully responsible for marketing, sales and after-sales service of the Hiphen product line in China.
Deep learning for interpreting images of crops acquired under field conditions
BACKGROUND
Genetic progress is one of the major levers for increasing food production. Increasing the yearly genetic gain is therefore essential to feed a growing human population under global change. Selecting or creating the optimal cultivar for a given location is very challenging given the large spatial and temporal variability of environmental conditions. Over time, the public and private sectors have developed breeding programs based on comprehensive observations of crops to better describe their functioning and the associated genetic control. The combination of proximal sensing with IoT technologies, rover robots and unmanned aerial vehicles (UAVs) allows gathering large numbers of crop images. Multiple attributes of the crops can then be retrieved using deep learning approaches. These attributes describe important physiological traits associated with biotic (diseases) and abiotic (climate, soil) stresses. We illustrate here the use of convolutional neural networks (CNNs) to estimate these various traits from images taken in the field.
Some applications of deep learning in field crops
Nadir high-resolution RGB images were acquired under various conditions for different wheat genotypes using robots and UAVs. A two-stage object detection algorithm (Faster R-CNN) is then used to identify ears and thereby derive the ear density. The same deep learning approach can also provide additional traits characterizing the ears.
Approaches based on CNNs provide a solution to increase throughput as well as spatial representativeness, which was found to be critical in the context of field phenotyping platforms and limited human resources.
Imagery from UAVs at low altitude allows identification of plants at emergence and estimation of plant density. Centimetric localization of the plants also allows precise monitoring.
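Once individual plants have been localized at emergence, density and sowing regularity follow directly from the coordinates. A minimal sketch, with hypothetical plant positions and plot footprint:

```python
import numpy as np

# Hypothetical plant positions (metres) detected at emergence in a georeferenced mosaic.
plants = np.array([[0.05, 0.10], [0.05, 0.35], [0.30, 0.12],
                   [0.30, 0.38], [0.55, 0.11], [0.55, 0.36]])
plot_area_m2 = 0.6 * 0.5              # hypothetical plot footprint
density = len(plants) / plot_area_m2  # plants per square metre

# Centimetric localization also allows checking sowing regularity
# via nearest-neighbour distances between plants.
diffs = plants[:, None, :] - plants[None, :, :]
dist = np.sqrt((diffs ** 2).sum(axis=-1))
np.fill_diagonal(dist, np.inf)
nn_spacing = dist.min(axis=1)  # nearest-neighbour distance per plant
```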
Semantic segmentation identifies the presence of pathogens, and the cover fraction is also estimated. High spatial resolution (IoT images) is therefore required to achieve optimal performance.
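Given a per-pixel label map from a segmentation model, cover fraction and disease severity reduce to pixel ratios. A sketch with a hypothetical label convention (0 = soil, 1 = healthy leaf, 2 = diseased leaf):

```python
import numpy as np

# Hypothetical per-pixel label map from a semantic segmentation model:
# 0 = soil, 1 = healthy leaf, 2 = diseased leaf
mask = np.zeros((4, 4), dtype=int)
mask[1:4, 0:3] = 1  # vegetation
mask[2:4, 1:3] = 2  # diseased subset of the vegetation

cover_fraction = float(np.mean(mask > 0))    # vegetation pixels / all pixels
vegetation = mask[mask > 0]
disease_severity = float(np.mean(vegetation == 2))  # diseased / vegetation pixels
```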
Main results and limitations
CNN models trained with transfer learning from large-scale object datasets (COCO, Open Images) were found to be more robust when applied to images taken under changing illumination conditions and camera configurations.
Deep models often require a spatial resolution of a fraction of a millimetre. This imposes a trade-off between spatial representativeness and throughput. Work on photogrammetric solutions (videos, multi-focal features …) and vectors (UAVs, robots) also has to be conducted.
Future challenges and prospects
CNNs with regression outputs can be used to estimate various traits. Multispectral images and depth measurements from LiDAR and photogrammetry may also add significant information.
The Digital Plant Phenotyping Platform (D3P), which simulates images taken over crops under field conditions [4], may fulfil the need for the large-scale labeled images required for semantic segmentation.
Computing resources are often critical. Data augmentation and domain adaptation are expected to improve the robustness of the models. Open-source algorithms and datasets have to be encouraged in the agricultural field.
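Data augmentation of the kind mentioned above can be sketched as random transforms that mimic field variability; the specific transforms and their parameters here are illustrative choices, not the authors' recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, rng):
    """Apply simple augmentations mimicking field variability (hypothetical choices)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                    # random horizontal flip
    gain = rng.uniform(0.8, 1.2)              # global illumination jitter
    img = np.clip(img * gain, 0.0, 1.0)
    noise = rng.normal(0.0, 0.01, img.shape)  # mild sensor noise
    return np.clip(img + noise, 0.0, 1.0)

img = rng.random((64, 64, 3))  # stand-in for an RGB crop image in [0, 1]
aug = augment(img, rng)
```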