Detecting diseases in sugarbeet using an RGB camera and Deep Learning

In the one-year project “RGB sensor based measurements of blight in sugarbeets” (in Danish: “Sensor RGB baseret måling af bladsvampeangreb i sukkerroer”), funded by Sukkerroeafgiftsfonden (“The Sugarbeet Tax Fund”) and carried out in collaboration with the Department of Agroecology, Aarhus University, a deep neural network was trained to segment images of sugarbeet. The network was trained on images collected from four strip field experiments during the 2022 growing season. Images were collected over 9 weeks using a field robot with a nadir-mounted, high-quality RGB camera. A subset of the images was subsequently manually annotated and used as a training set for fine-tuning the pre-trained network.
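The report does not specify the network architecture, so as a rough illustration of the fine-tuning step, the sketch below assumes a COCO-pretrained torchvision DeepLabV3 model and a hypothetical four-class label set; both are placeholders, not the project's actual setup.

```python
# Minimal fine-tuning sketch: replace the classifier head of a pretrained
# segmentation network so its output matches the annotated classes.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 4  # hypothetical: soil, green leaf, rust, mildew

model = deeplabv3_resnet50(weights="DEFAULT")  # start from pretrained weights
model.classifier[4] = torch.nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, masks):
    """One fine-tuning step.

    images: float tensor (B, 3, H, W); masks: long tensor (B, H, W)
    holding one class index per pixel from the manual annotations.
    """
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]  # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```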

Results

The preliminary results show good segmentation of green sugarbeet leaves (89% accuracy) and of the sugarbeet disease rust (80%), with a slight tendency to overestimate the area of the latter. The network has some trouble distinguishing mildew from reflections from the built-in flash; however, reflections were labelled as green leaf in the training set rather than being given their own class.
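The report's exact metric definition is not given here; one common reading of per-class accuracy figures like 89% and 80% is per-class recall over pixels, as in this sketch:

```python
# Per-class pixel recall: correctly labelled pixels / annotated pixels
# of that class, computed from predicted and ground-truth masks.
import numpy as np

def per_class_accuracy(pred, target, num_classes):
    """pred, target: integer arrays of the same shape."""
    acc = []
    for c in range(num_classes):
        annotated = (target == c)
        if annotated.sum() == 0:
            acc.append(float("nan"))  # class absent from this test set
            continue
        acc.append((pred[annotated] == c).mean())
    return acc
```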

Applying the network to all the collected images shows a clear pattern: the rust initially develops at a few hotspots, which continue to grow and spread throughout the season. The infection seems to move along the strips and, to a lesser extent, across them.
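How the segmentations were aggregated into this seasonal picture is not described; a simple way to do it, sketched below with hypothetical class indices, is to track the rust share of the leaf pixels in each georeferenced image over time.

```python
# Seasonal rust signal per image: fraction of leaf pixels predicted as rust.
import numpy as np

GREEN_LEAF, RUST = 1, 2  # hypothetical class indices

def rust_fraction(mask):
    """Share of leaf pixels (green leaf + rust) classified as rust."""
    leaf = np.isin(mask, [GREEN_LEAF, RUST]).sum()
    return (mask == RUST).sum() / leaf if leaf else 0.0

# Grouping these fractions by GPS position and capture date yields
# hotspot maps of the infection spreading along the strips.
```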

These preliminary results were published (non-peer reviewed) in “NBR Faglig beretning 2022”; for more details, see the article “Kamera og kunstig intelligens til vurdering af sygdomstryk i sukkerroer” (“Camera and artificial intelligence for assessing disease pressure in sugarbeets”) in that report.

For more information, contact Anders Krogh Mortensen or René Gislum.

Data availability

The collected images are available here.

ESHRE 2019 Presentation – Automatic morphological grading of human blastocysts with time-lapse imaging and artificial intelligence

Mikkel Fly Kragh presented his work on automating human embryo grading at this year's European Society of Human Reproduction and Embryology (ESHRE) conference, in front of an audience of around 2,000. The abstract, submitted in February, was selected for oral presentation on 25 June in Vienna, Austria. The full abstract is available here.

Large-scale Mapping of Mixed Crop Fields of Clover and Grass

We recently presented our preliminary results on large-scale mapping of grass-clover leys at “Græsland 2018”, the largest grass-clover event in northern Europe, hosted by DLF.

Using an ATV, we collected numerous georeferenced images a couple of days before the first harvest of the 2018 season. Each image was automatically segmented into grass (blue), clover (red), and soil (green), letting us visualize the spatial distribution of the three classes qualitatively across the field. The three sample images exemplify the corresponding distributions at three points in the field.
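The colour-coded visualization can be produced as in the minimal sketch below, assuming the segmentation outputs one class index per pixel; the indices and the rest of the mapping pipeline are placeholders.

```python
# Turn a (H, W) class-index mask into an (H, W, 3) RGB overlay
# using the colour scheme from the text.
import numpy as np

COLORS = {0: (0, 255, 0),   # soil   -> green
          1: (0, 0, 255),   # grass  -> blue
          2: (255, 0, 0)}   # clover -> red

def colorize(mask):
    rgb = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for cls, color in COLORS.items():
        rgb[mask == cls] = color
    return rgb
```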

In the future, the camera could be mounted directly on the front of the harvester, letting the farmer monitor entire fields while harvesting, without the need to drive an ATV. This would allow the farmer to optimize the fertilization strategy based on the condition and spatial clover/grass distribution of each field.

For more information, contact Søren Skovsen.

Hierarchical Classification in Mixed Crops of Clover and Grass

Our previous work on pixel-wise classification of high-resolution RGB images of grass-clover leys into clover, grass, and weeds demonstrated state-of-the-art accuracy.

Extending this work into a two-step classification scheme with corresponding hierarchical labels, we have demonstrated an extended segmentation of clover into the two species present in the dataset: red clover (Trifolium pratense) and white clover (Trifolium repens).
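One plausible reading of the two-step scheme, sketched below with hypothetical class indices and placeholder models, is that a second classifier refines only the pixels the first stage already labelled as clover:

```python
# Two-step hierarchical segmentation sketch: stage 1 assigns coarse
# classes; stage 2 re-labels only the clover pixels by species, so
# species errors cannot leak into the grass/weed/soil classes.
import numpy as np

SOIL, GRASS, CLOVER, WEED = 0, 1, 2, 3  # stage-1 classes (hypothetical)
RED_CLOVER, WHITE_CLOVER = 4, 5         # stage-2 refinement

def hierarchical_segment(image, stage1, stage2):
    mask = stage1(image)                 # (H, W) coarse class indices
    clover = (mask == CLOVER)
    if clover.any():
        species = stage2(image)          # (H, W) in {RED_, WHITE_CLOVER}
        mask = np.where(clover, species, mask)
    return mask
```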

The quantitative results of the segmentations on the hand-annotated test set are reported in the paper in the ICPA proceedings.

Qualitative results of the hierarchical segmentation on two images are shown below.

For more information, contact Søren Skovsen.

Improved Weed Instance Detector in Cereal Fields

The following samples show some promising results from an automated weed instance detector that works in cereal fields and can distinguish monocot (red) from dicot (blue) weeds. It utilizes the focal loss introduced by Facebook AI Research (FAIR), which balances each detection's error contribution according to how easily it is detected.
Even though the final aim is to classify each weed instance by species, this is sometimes not possible due to the small size of the weeds. In such cases, a monocot/dicot discrimination is still valuable for determining a suitable weed control strategy.
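For reference, the focal loss (Lin et al., FAIR) down-weights easy, well-classified samples via a (1 - p_t)^gamma factor so training focuses on hard ones. A minimal binary sketch, using the paper's default hyperparameters rather than this detector's actual configuration:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss; logits and targets share the same shape."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # Easy samples (p_t near 1) contribute almost nothing to the loss.
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```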

[Three sample images: weed detection in cereal]
For more information, contact Mads Dyrmann.