The ‘Vegebot’ platform, developed by an engineering team at the University of Cambridge, was initially trained to recognise and harvest lettuce in a lab setting. It comprises a vision system, a custom end effector and control software, and has now been successfully tested in a variety of field conditions in cooperation with local fruit and vegetable cooperative G’s Growers.
“To address the harvesting challenges posed by iceberg lettuce, a bespoke vision and learning system has been developed which uses two integrated convolutional neural networks to achieve classification and localization. A custom end effector has been developed to allow damage-free harvesting. To allow this end effector to achieve repeatable and consistent harvesting, a control method using force feedback allows detection of the ground,” the researchers explained in a paper published in the Journal of Field Robotics.
Currently, the prototype is ‘nowhere near’ as fast or efficient as a human worker. However, the team of engineers said the development demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are ‘particularly challenging’ to harvest mechanically.
‘We can make it work for many other crops’
The harvesting of iceberg lettuce, like that of soft fruits, has to date been difficult to automate. Iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.
"Every field is different, every lettuce is different," explained co-author Simon Birrell from Cambridge's Department of Engineering. "But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops."
"At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it's very physically demanding," added co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr Fumiya Iida.
The Vegebot first identifies the 'target' crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested, and finally cuts the lettuce from the rest of the plant without crushing it so that it is 'supermarket ready'. "For a human, the entire process takes a couple of seconds, but it's a really challenging problem for a robot," said co-author Josie Hughes.
How does it work?
The Vegebot has two main components: a computer vision system and a cutting system. The vision system uses an overhead camera to take an image of the field. Machine learning is then used to identify all the lettuces in the image and to classify whether each one is ready for harvest.
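The paper describes this as two integrated convolutional neural networks, one to localise lettuces in the image and one to classify them. Below is a minimal sketch of such a detect-then-classify pipeline; the specific models (Faster R-CNN, ResNet-18), the `READY_CLASS` index and the input handling are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a two-stage "detect, then classify" pipeline, loosely modelled on
# the paper's description of two integrated CNNs (localisation + classification).
# Model choices and the READY_CLASS index are illustrative assumptions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Stage 1: a generic object detector stands in for the localisation network.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

# Stage 2: a small CNN stands in for the "ready / not ready" classifier.
# In practice this would be fine-tuned on labelled crops of lettuce heads.
classifier = torchvision.models.resnet18(weights="DEFAULT")
classifier.eval()
READY_CLASS = 0  # hypothetical class index for "ready to harvest"

def harvest_candidates(image, score_threshold=0.8):
    """Return bounding boxes of detected lettuces judged ready for harvest."""
    img_t = to_tensor(image)
    with torch.no_grad():
        detections = detector([img_t])[0]
    candidates = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_threshold:
            continue
        x0, y0, x1, y1 = [int(v) for v in box]
        crop = img_t[:, y0:y1, x0:x1]
        # Resize each detected crop to the classifier's expected input size.
        crop = torch.nn.functional.interpolate(
            crop.unsqueeze(0), size=(224, 224), mode="bilinear", align_corners=False
        )
        with torch.no_grad():
            label = classifier(crop).argmax(dim=1).item()
        if label == READY_CLASS:
            candidates.append((x0, y0, x1, y1))
    return candidates
```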
Once the Vegebot could recognise healthy lettuces in the lab, it was then trained in the field, in a variety of weather conditions, on thousands of real lettuces, according to a press release from Cambridge University.
Meanwhile, a second camera on the Vegebot is positioned near the cutting blade to ensure a ‘smooth cut’. The pressure in the robot's gripping arm can be adjusted for different crops.
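The paper also notes that consistent cutting relies on force feedback to detect the ground before the cut is made. The short sketch below shows how such a control loop might look; the thresholds and the hardware-interface functions are hypothetical placeholders, not part of the published system.

```python
# Sketch of force-feedback ground detection for the end effector, following
# the paper's description of a control method that detects the ground before
# cutting. The interface functions (read_force_newtons, lower_end_effector,
# trigger_cut) are hypothetical placeholders, not the Vegebot's real API.
import time

FORCE_THRESHOLD_N = 5.0   # assumed contact force indicating the ground
STEP_MM = 2.0             # assumed descent increment per control step
MAX_TRAVEL_MM = 300.0     # assumed safety limit on total descent

def lower_until_ground(read_force_newtons, lower_end_effector, trigger_cut):
    """Lower the end effector in small steps until force feedback indicates
    ground contact, then trigger the cut at a consistent height."""
    travelled = 0.0
    while travelled < MAX_TRAVEL_MM:
        if read_force_newtons() >= FORCE_THRESHOLD_N:
            trigger_cut()          # cut is made relative to the detected ground
            return True
        lower_end_effector(STEP_MM)
        travelled += STEP_MM
        time.sleep(0.01)           # simple fixed-rate control loop
    return False                   # ground not detected within the travel limit
```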
"We wanted to develop approaches that weren't necessarily specific to iceberg lettuce, so that they can be used for other types of above-ground crops," said Iida, who leads the team behind the research.
In future, robotic harvesters could help address problems with labour shortages in agriculture, and could also help reduce food waste, Cambridge suggested. Because the machine learning can recognise when a particular lettuce is ready to harvest, it can target only ripe vegetables. Currently each field is typically harvested once, and any unripe vegetables or fruits are discarded.
"We're also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields," said Hughes. "We've still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech."
Source
Journal of Field Robotics
A field-tested robotic harvesting system for iceberg lettuce
Simon Birrell, Josie Hughes, Julia Y. Cai, Fumiya Iida
First published: 7 July 2019, https://doi.org/10.1002/rob.21888