We recently presented our preliminary results on large-scale mapping of grass-clover leys at “Græsland 2018”, the largest grass-clover event in northern Europe, hosted by DLF.
Using an ATV, we collected numerous georeferenced images a couple of days before the first harvest of the 2018 season. Each image was automatically segmented into grass (blue), clover (red), and soil (green), and the spatial distribution of the three classes is qualitatively visualized across the field. The three sample images exemplify the corresponding distributions at three locations in the field.
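The colour overlay described above can be sketched as a simple mapping from per-pixel class indices to RGB colours. This is a minimal illustration, not the actual pipeline; the class indices and the `colourize` helper are assumptions chosen to match the blue/red/green scheme mentioned in the text.

```python
import numpy as np

# Hypothetical class indices paired with the colour scheme described above
CLASS_COLOURS = {
    0: (0, 0, 255),    # grass  -> blue
    1: (255, 0, 0),    # clover -> red
    2: (0, 255, 0),    # soil   -> green
}

def colourize(label_map):
    """Turn an HxW array of per-pixel class indices into an RGB overlay."""
    h, w = label_map.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    for cls, colour in CLASS_COLOURS.items():
        rgb[label_map == cls] = colour  # boolean mask selects this class's pixels
    return rgb
```

Aggregating such overlays across the georeferenced images then yields the field-scale distribution map.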
In the future, by mounting the camera directly on the front of the harvester, the farmer can monitor entire fields while harvesting, without needing to drive an ATV. This allows the farmer to optimize the fertilization strategy based on the condition and spatial clover/grass distribution of the fields.
Our previous work on pixel-wise classification of high-resolution RGB images of grass-clover leys into clover, grass, and weeds demonstrated state-of-the-art accuracy.
Extending this work into a two-step classification scheme with corresponding hierarchical labels, we have demonstrated a finer segmentation of clover into the two species present in the dataset: red clover (Trifolium pratense) and white clover (Trifolium repens).
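The two-step scheme can be sketched as follows: a coarse classifier first assigns one of the top-level labels, and only pixels labelled as clover are passed to a second classifier that decides the species. The function and the plain-callable classifier interface below are illustrative assumptions, not the actual implementation.

```python
def two_step_label(x, coarse_classifier, species_classifier):
    """Hierarchical labelling: coarse class first, clover species second.

    coarse_classifier(x) returns e.g. 'grass', 'clover', or 'weed';
    species_classifier(x) returns 'red' or 'white' (hypothetical interface).
    """
    label = coarse_classifier(x)
    if label == 'clover':
        # Only clover samples receive the finer, species-level label
        return 'clover/' + species_classifier(x)
    return label
```

One advantage of the hierarchical labels is that the coarse step can be trained on all data, while the species step only needs annotations on the (smaller) clover subset.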
The following samples show some promising results of an automated weed instance detector that works in cereal fields and is able to distinguish monocot (red) from dicot (blue) weeds. It utilizes the focal loss introduced by Facebook AI Research (FAIR), which balances each sample's error contribution based on how easily it is detected.
Even though the final aim is to classify each weed instance, this is sometimes not possible due to the small size of the weeds. In such cases, a monocot/dicot discrimination is still valuable for determining a suitable weed control strategy.
Yara Rizk from the Department of Electrical and Computer Engineering at the American University of Beirut (AUB) visits Aarhus University to present some of the work done in the Human Machine Interaction and Machine Learning lab at AUB.
Anders’ PhD defence presentation can be seen in the video below.
PhD defence of Anders Krogh Mortensen for the thesis “Estimation of Above-Ground Biomass and Nitrogen-Content of Agricultural Field Crops using Computer Vision”.
The thesis investigates yield estimation derived from RGB images and coloured 3D point clouds. Image segmentation methods based on image processing, handcrafted feature extraction, and deep learning are investigated. Furthermore, a novel method for segmenting lettuce in coloured 3D point clouds is proposed. Several yield models based on the segmented crops are investigated.
The research findings have shown that recent advances in deep learning can be transferred to segmentation of (mixed) crops. It was further shown that (simple) growth models can be improved by using crop coverage to explain local variations in the crop.
“You start by getting an overview of where the weeds are in your cereal field, and then you spray afterwards, but only in the spots where there are weeds. That is the beguilingly simple recipe that a team of researchers from Aarhus University is currently working on in collaboration with several technology companies.”
The plant seedlings dataset, which was re-published last week, is now the subject of a Kaggle image-classification competition. We encourage everyone to take a look at the dataset and submit their solutions to the competition.
Inspired by yesterday’s visit to Agritechnica and the booths by Bayer/Xarvio/Bosch demonstrating precision weed localization and spraying, I decided to make a small video of our current weed-discrimination capabilities: