I spent much of my early life terrified of public speaking. The thought of being the center of attention, acting on stage, or talking to a group of people filled me with primal dread. But at the MIT AI Lab, my place of employment in the 1980s, we did research. We reported on research by writing papers, and sometimes research papers were presented at conferences. One year it fell to me to present a paper at a robot conference.
Because I was terrified I sought instruction about how to give a good talk. And, as always, my boss, Tomas Lozano-Perez, gave me some great advice. He said, “People can either read your slide or listen to what you’re saying—not both. But they can look at a picture and listen at the same time.” I resolved to make my talk almost entirely pictures.
I have a friend who is very comfortable with public speaking. I once attended a talk of his where he spent his first few seconds at the lectern finishing drawing the figure he was about to present. Not me. My terror of public speaking made me prepare extensively beforehand. I completed my figures and notes far in advance of the conference, and I rehearsed repeatedly. Still, I was thoroughly stressed in the hours before my talk.
Then it began. The session chair introduced me and I began speaking. I was nervous but things were going pretty smoothly; all my rehearsing was actually paying off. Until my mouth started to get dry. In my inexperience I hadn’t prepared for this commonplace of speaking. A bold (should I say, normal?) person might simply have asked someone to bring a glass of water. That solution didn’t even occur to me. But I noticed that a previous speaker had left about half an inch of water in the bottom of a cup sitting on the lectern. I thought, “If I want to go on speaking, I’ll have to risk disease.” I drank the water.
I was getting almost comfortable. The pictures were working, they supported the concepts nicely, and the audience seemed attentive. Then suddenly, at almost exactly the halfway point in my talk, the conference center’s fire alarm went off. The PA system instructed conference-goers to leave the building. Somewhat stunned, I watched as everyone started to leave, then I followed. But fate was only toying with me. After only a few minutes the all clear was given.
Having an audience made me ill at ease; a big audience, even more so. Still, as I walked back to the lectern, I found myself hoping that my listeners would return. And despite having a perfect opportunity to skip out, almost everyone did. I finished the talk without further interruption and got a great round of questions.
My experience is not one I would have wished for myself—or anyone else for that matter. But it had a very positive result. Since my trial by fire (alarm) I’ve had no problem speaking about robots in public. I’m ready for any size audience any time.
Worldwide there are about 400,000 species of plants. Seeds from those plants can be scattered by the wind, deposited by birds, and carried by animals. Given the exuberance with which many plants produce seeds and the ease of dispersion, it’s inevitable that when you sow a field with just one type of plant, many different types will grow. And wherever these uninvited guests emerge they’ll do everything they can to snatch sunlight, nutrients, and water away from the crops you want.
Farmers wage a perpetual battle against weeds, and for decades chemistry has supplied them with a potent weapon. Modern crop protection chemicals largely enable farmers to prevail over their tenacious foe. But not without cost. Herbicides can cause collateral damage—poisoning beneficial organisms, running off into rivers and streams, and persisting in the environment long after the job is done. Nor have weeds surrendered. They counterattack by developing immunity to our best chemical weapons.
The worrisome aftereffects of herbicide use partly explain the growing interest in organic farming. Organic farming eschews external inputs in favor of managing the farm ecosystem using local tools. Rather than input high-tech herbicides, many organic growers rely instead on an ancient and venerable tool, the hoe.
In many ways the humble hoe beats the pants off herbicides. Removing weeds mechanically leads to few unintended consequences. Nothing is poisoned, nothing noxious escapes, and the decomposing weeds can even return nutrients to the food crop. Furthermore, no weed yet has developed immunity to being plucked from the ground. With these clear advantages why don’t we see armies of diligent hoers saving our crops from weeds? Two reasons: it costs too much and no one wants to enlist—the hoeing army recruitment office is a lonely post.
Hoeing weeds: a hot, dusty, low-paying yet important job that no one wants to do—sounds ideal for a robot, no? Indeed yes, and many scientists, engineers, and robot enthusiasts have pondered this tantalizing challenge. But how realistic is it? My purpose here is to address this question. Can we, today, design weeding robots that work as well as, and cost less than, chemical alternatives?
It’s easy to envision the ideal solution. We see teams of robot weeders swarming through fields. Day and night they work uprooting weeds of every size and description but never disturbing any food plant or causing mischief beyond the boundaries of the farm. Plus the robots cost far less than the billions of dollars farmers now spend annually on herbicides.
This notion seems very attractive—why aren’t such robots already in widespread use? It turns out that the bar for robots is set very high. In spite of their drawbacks, herbicides mostly work. To conventional farmers they are familiar and predictable, each use costs relatively little (typically ones to tens of dollars per acre), and as weeds become resistant farmers use different chemicals and larger quantities. Society’s reasons for wanting to limit herbicide use appear a little less compelling from the farmer’s point of view. And it’s farmers, not society, who must be convinced to trade sprayers for robots.
Technologists seek a solution that satisfies both sides: minimize herbicide use, eliminate weeds, and reduce cost. But to do so we must ask the right question. It is not, “Can we build robots that remove weeds?” The answer to that question is an easy but academic “yes.” Rather, the important question is, “Can we build weeding robots that make economic sense to farmers?”
As with most proposed jobs for robots, the really tricky part is neither technology nor economics but the confluence of the two. First, the macroeconomic case is convincing—there is clearly a billion-dollar-plus potential market for weeding robots. Second, relevant technology appears to be available. (See here for example.) In the video a commercial vision system developed by Aris of Eindhoven, The Netherlands, grades and sorts plants in a greenhouse. Why can’t we just mount this or a similar device on a mobile base and achieve a decisive victory over weeds?
Existing solutions like Aris’ work indoors, but indoor systems have a big advantage over outdoor systems—they give engineers control. In the grading/sorting example the lighting is controlled, the backdrop behind the plants is carefully chosen, the plants can be observed from various angles, electrical power is abundant, and heat, cold, rain, dust, condensation, and bugs crawling on lenses need not be considered. Another subtle but significant benefit is that indoor operations can be conducted year round while outdoor equipment is used only during a limited growing season. Return on investment calculations make it more difficult to justify purchasing equipment that must sit idle in the barn part of the year rather than earn its keep every day. This means that minimizing cost is more important for outdoor robots than for indoor systems even though ruggedness and other outdoor requirements make low cost harder to achieve.
As is the case here, it’s common in mobile robotics to find that a reliable solution available in a structured environment does not function or is impractical in an unstructured setting. Instead, we must reimagine problems from the ground up. We look for alternative approaches that benefit from the unique strengths of robots and suffer only minimally from their weaknesses.
Weeds aren’t giving up and neither should we. So let’s look at the essence of what the robot needs to do. Our desired weeding robot must satisfy these functional requirements:
Remain confined to the assigned field
Visit every accessible point in the field
Classify each point as weed, food crop, or neither
Apply an eradicating mechanism to every point where a weed is present
Refrain from applying an eradicating mechanism to points where a food crop is present
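The five requirements above amount to a top-level control loop. Here is a minimal sketch of that loop in Python; the helpers `inside_field`, `classify`, and `eradicate` are hypothetical placeholders for the navigation, perception, and actuation a real machine would need (and, as discussed below, `classify` is where the real difficulty lives):

```python
from typing import Callable, Iterable, List, Tuple

Point = Tuple[float, float]

def weed_field(
    points: Iterable[Point],
    classify: Callable[[Point], str],
    eradicate: Callable[[Point], None],
    inside_field: Callable[[Point], bool],
) -> List[Point]:
    """Visit every accessible point; eradicate only confirmed weeds.

    Illustrative skeleton only -- the helper functions stand in for
    hardware and perception that this sketch does not implement.
    """
    treated = []
    for p in points:
        if not inside_field(p):   # requirement 1: stay confined to the field
            continue              # requirement 2: the caller supplies all points
        label = classify(p)       # requirement 3: the make-or-break step
        if label == "weed":       # requirement 4: attack weeds...
            eradicate(p)
            treated.append(p)     # ...requirement 5: and nothing else
    return treated
```

Everything interesting is hidden inside `classify`; the loop itself is trivial, which is exactly why requirement three dominates the design.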
Various engineering solutions at a range of costs are available to address most of these items. But requirement number three turns out to be the make or break challenge for our weeding robot.
A standard suggestion for deciding between weed and not-weed is to give the robot a vision system. That system would follow one of two strategies. The first solution involves providing the robot access to a database of relevant weed types. The vision system points toward the ground and, moment by moment, decides whether any observed visual feature matches a known weed type. The robot then attacks the features it identifies as weeds. The second approach is to train the vision system to recognize the type of plant we want to grow in the field. The robot then applies its eradication mechanism to all ground surfaces that do not contain the desired plant.
At the current state of the art both these strategies are problematic in uncontrolled environments. A wavelength analysis of the spectrum of light reflected from the ground can usually discriminate between plant and non-plant. (For example see the WeedSeeker system.) But to date, I am aware of no vision-based outdoor system that reliably and economically discriminates between weed and desired plant. No such system is in widespread commercial use in outdoor fields.
The critical element of the weeding robot we wish to design can’t be found in any catalog. That is, the vision part of our ideal robot remains a research project. Research projects often merit support but putting one on the critical path to a commercial robot has scuttled many a promising product.
Manufacturers of agricultural equipment use a couple of strategies to deal with weeds without actually having to recognize them. Crops are usually planted in rows while weeds just sprout anywhere. Thus it is possible to build purely mechanical weeding implements. Dragged behind tractors, these devices use passive mechanical fingers to disrupt weeds that grow in the space between crop rows. Such devices are available in the marketplace.
A second approach relies on giving plants a head start over weeds. If we begin with a freshly plowed field then transplant seedlings rather than plant seeds, we ensure that the desired plants start out taller than competing weeds. There are systems (see the Robovator) that rely on this strategy to discriminate between big crop plants and small weeds. This enables attacking weeds anywhere, even those that emerge between desired plants in the same row.
I often think that designing a successful robot is a lot like constructing a magic trick. A magician performing a trick could never actually do what the audience thinks he or she is doing. Rather, the trick only works because the magician wears a blue shirt rather than a brown one and because the apparatus is back lit rather than front lit and because the magician moves the hand that does not conceal the ball rather than the one that does, and so on. A bunch of factors exquisitely specific to the trick being performed must be arranged just so to make the trick work.
The situation is similar in the robotics domain. The robots I have built are never as general-purpose as most observers suppose them to be, and each robot exploits every possible advantage in its application space just to do one useful task. Lay observer: “If your robot can do A, then it must also be able to do B and C and D!” Me: “No, it’s really only cost effective doing A. But I could design a robot that does B or C or D.”
There’s one more similarity between robots and magic—like the magician, the roboticist must cheat at every opportunity. In magic the only thing that counts is mystifying and delighting the audience. To make it seem that your lovely assistant has magically transported from one location to another, use twin lovely assistants. Rank cheating! But it accomplishes the goal. In commercial robotics the only thing that counts is performing a desirable task at a competitive price. It’s not necessary that the solution be elegant, or be done the way conventional wisdom expects, or be something a researcher could write a paper about.
I see a couple of opportunities for cheating in the design of the weeding robot. First, although it seems natural and obvious, we’re not required to use a vision system at all. And second, although we started with the problem of weeds in farmers’ fields, we may achieve initial success by focusing on a related but different target—executing a sneak attack rather than a frontal assault.
The function we would like a vision system to perform is to decide whether a point within its field of view is or is not a weed. Rather than insist that the robot figure this out, why don’t we cheat and just tell it? One way we might do this is to employ centimeter-resolution Real Time Kinematic (RTK) GPS. Record the position of each seed when planted and then later avoid applying the weed eradication mechanism to those spots. Unfortunately this calls for high precision and high precision’s favorite companion is high cost. Too much cost will render our robot as desirable as a weed.
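The “just tell it” idea reduces classification to a simple lookup: anything green that isn’t near a recorded seed position is a weed. A minimal sketch, assuming centimeter-grade position fixes; the 2 cm tolerance and the function names are illustrative assumptions, not parameters of any real RTK system:

```python
import math

def is_protected(point, seed_positions, tolerance_m=0.02):
    """Return True if `point` lies within `tolerance_m` (meters) of any
    recorded seed position. Positions are (x, y) in a local field frame."""
    return any(math.dist(point, seed) <= tolerance_m for seed in seed_positions)

def should_eradicate(point, seed_positions, tolerance_m=0.02):
    """A green feature far from every recorded seed is treated as a weed."""
    return not is_protected(point, seed_positions, tolerance_m)
```

Note that the robot never recognizes anything; it only measures distance. That is the cheat, and it is also why the approach stands or falls on the cost of obtaining those centimeter-accurate positions.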
An alternative strategy is to indicate the location of each seed or seedling with a marker the robot can identify reliably and inexpensively. For example, we could place an RFID tag or biodegradable barcode near every seed or surround each seed with a short physical barrier. Tagging every seed with a this-is-not-a-weed marker allows us to dodge the vision research project and to keep robot cost low. This means that we can build our weeding robot today with inexpensive, off-the-shelf technology. But admittedly, this solution poses another problem.
An acre of corn typically contains tens of thousands of plants. If we assign our robots to this weeding task, we must commit to buying, installing, and later maybe removing tens of thousands of tags per acre—and corn is grown on a very large number of acres. Commercial growers care about total cost, not whether the cost is due to the robots or tags. They also care about worker availability, not whether the workers are assigned to hoe or tag plants. Our strategy doesn’t seem appealing to these growers.
Robot-friendly markers change the problem but have not yet solved it, so let’s cheat again. Farmers’ fields are not the only reluctant venue for weeds. Every home gardener battles the same scourge, only on a smaller scale. Marking tens of thousands of plants per acre on the massive scale of commercial agriculture is currently impractical; however, performing a similar task for tens of plants in a typical home garden is entirely reasonable.
Home gardeners plant their crops by hand, are often reluctant to use herbicides, and (if they’re like me) intend to weed regularly but are frequently diverted by other priorities. But for a small increment of work at the beginning—tagging plants as they are put in the ground—gardeners can have a better-looking, weed-free garden all season long. Furthermore, time freed up from weeding can be spent tending plants, pruning plants, and generally realizing healthier, higher yielding crops. Priced and executed correctly, the home garden weeding robot should have a strong appeal.
Let’s take stock. We set out to see if robots could change the weeding paradigm—eliminate hand hoeing for organic farmers and maybe give conventional farmers a compelling alternative to herbicides. So far we’ve discovered a strategy that enables cost-effective robots on a home gardening scale. However, our approach accomplishes something more significant than may be apparent at first blush. We now have a well founded path toward greater functionality and larger scale.
The strategy that enables the weeding robot to be simple and inexpensive makes planting more complicated and costly. But in situations where crops are planted and weeded by hand, this tradeoff makes sense. Thus our robot is appropriate for home gardeners and likely also applies to small-scale agricultural production, e.g. market gardens and maybe market farms. In these instances the cost of robots and tags is more than offset by a reduction in the hassle and cost of hand weeding.
Looking forward, the cost and inconvenience of tagging plants is not fixed. Like any technology, once a market is established, competitive forces will work to reduce the tag cost and automated methods will be found to simplify tagging. As cost decreases, scope increases. That is, given further development the weeding robot becomes attractive to commercial scale growers.
Our exercise has proven fruitful. Rather than waiting for an unpredictable breakthrough in computer vision we have found a “shovel-ready” strategy: one that offers a limited but useful robot today and, through predictable, incremental development, promises to fulfill our vision of a generally applicable, cost effective weeding robot for large growers in due time.
If I worked on non-robotic technology, say high-speed fiber optic communication systems, I expect I would rarely get advice from lay people. It’s hard to imagine meeting a poet or a lawyer at a party, describing my work, and then having that person wax eloquent on why I should use a transimpedance as opposed to a high-impedance amplifier in my front-end receivers.
But many people seem perfectly comfortable advising roboticists on how to design robots. “Why don’t you just…” they begin. Robots, no doubt, appear much more approachable and understandable than other high-tech devices. Robots are engaging, they seem to have personalities, and they behave in ways analogous to people. Perhaps the thinking goes, “If robots behave like people maybe they can benefit from the same advice that would help a person.”
This presumed prowess is unfortunate because robots are every bit as subtle and intricate as other high-tech devices. The inner workings of robots are not intuitive; their design requires experience and expertise. Furthermore, the necessary balance between functionality and cost always proves a perplexingly difficult and nuanced tradeoff. Off-the-cuff analysis has little chance of hitting the mark. Unfamiliarity with these matters produces unrealistic expectations among novice robotics enthusiasts and many polite nods from more grizzled robotics practitioners.
It’s well known that the world has a critical need to increase food production. A 2009 UN report estimates that just to keep up with population, production must grow 70% by 2050. The catch is that this must happen with no more water or land than is being used by agriculture today. That’s a daunting challenge but one I’m sure robots can help with. Unfortunately (for a technologist), sometimes the biggest challenges are not technical ones.
In robotics I believe in solving the easiest problems first then using those solutions to attack harder problems. So, a few years ago I conducted an exercise to try to figure out where to begin. Which crops, I wondered, were the most amenable to robotic harvesting? I made a list of hand-harvested crops in the US and developed a robotic-appropriateness filter. The filter had these components:
Is it important to preserve the plant? If it’s OK to destroy the plant during harvest then you may be able to design a mechanical harvester for the task and a robot won’t be necessary.
Does foliage obscure the fruit to be picked? If there are leaves in the way then developing a robot to pick the fruit will be more challenging.
Does all the fruit ripen at the same time? Mechanical harvesters naturally treat ripe and unripe fruit the same way. If fruit ripens at different times it may be more appropriate to use a robot.
Is it easy to identify the fruit? A really distinctive feature might mean that the robot can detect the fruit with a simple, low-cost sensor.
Is the fruit located near the ground? Arms that can reach into trees or even high above the ground are expensive and may have reliability problems. It’s better to exclude such features from the robot if possible.
Is the fruit easily damaged? It takes more effort and cost to make a robot really gentle; the robot can be simplified if the fruit is tough.
I ranked each category from 0 to 2 with the higher number being more favorable to robots. (The categories are not entirely independent but I was interested in a quick first pass.) And what crop came out on top? It surprised me but with a score of 11 out of a possible 12 the answer is asparagus.
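The filter is easy to express as a small scoring function. In the sketch below, the category names and the individual per-category values for asparagus are my illustrative guesses; only the 0–2 scale and the asparagus total of 11 out of 12 come from the exercise described above:

```python
# The six robotic-appropriateness categories from the filter, each scored
# 0-2 with higher values more favorable to a robotic harvester.
CATEGORIES = [
    "preserve_plant",      # plant must survive harvest
    "unobscured_fruit",    # fruit not hidden by foliage
    "staggered_ripening",  # fruit ripens at different times
    "easy_to_identify",    # distinctive feature a cheap sensor can detect
    "near_ground",         # no expensive long-reach arm required
    "damage_tolerant",     # fruit is tough enough for simple handling
]

def robot_score(scores: dict) -> int:
    """Sum per-category 0-2 scores into a 0-12 appropriateness score."""
    assert set(scores) == set(CATEGORIES)
    assert all(0 <= v <= 2 for v in scores.values())
    return sum(scores.values())

# Hypothetical breakdown for asparagus, chosen to match the 11/12 total;
# the author's actual per-category numbers are not given in the text.
asparagus = {
    "preserve_plant": 2, "unobscured_fruit": 2, "staggered_ripening": 2,
    "easy_to_identify": 2, "near_ground": 2, "damage_tolerant": 1,
}
```

A quick first-pass tool like this is deliberately crude; as noted above, the categories are not independent, but it is enough to rank candidate crops.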
From a technical point of view, asparagus (green, not white) is a great crop for robots. The happy properties of asparagus from the robot’s perspective are: asparagus plants must be preserved from year to year; the stalks are not surrounded by leaves; they ripen at different times and must be harvested every few days; asparagus maturity equals height, a relatively easy parameter to identify; the stalks are near the ground; and asparagus is sturdy compared to many other food crops.
From a need point of view asparagus is also a great target for robots. Harvesting asparagus is usually done manually. Workers stoop over and use a Y-shaped knife to cut the asparagus stalks one at a time just below ground level. The work is harsh and farmers have trouble finding willing laborers. There are stories of asparagus allowed to go to seed or being plowed under because so few harvesters can be found.
Another point of need for US growers is foreign competition. Consumers in the United States eat more asparagus each year but US growers produce less and less. One reason for this mismatch is the high cost and uncertain availability of workers to harvest asparagus. Robots could reduce the cost of harvesting and ensure that harvests happen on time.
So why have you seen no stories about companies racing to develop asparagus-harvesting robots? Because the economics don’t work out. Domestic production of asparagus amounted to only about $60M in 2014 (we consumed $575M worth). Developing a robot to do this task will be expensive (several million dollars), take maybe two or so years, and entail risk. Even if the resulting robots were used to harvest all the asparagus in the US, the potential reward does not justify the risk for most venture capitalists.
I’d like to find a way to change this equation but as it stands asparagus production in the US continues to fall.
The first version of Roomba to hit the market was more popular than we developers had dared hope. An initial factory order for 15,000 turned quickly into an order for 25,000 followed by orders for many, many more.
The company decided to fill in the product line, so we developed several new models with features designed to appeal to different market segments. Three of the new robots had gray bumpers; the fourth was all red, including the bumper.
When early production units arrived from the factory we checked to see that they worked properly. The robots with gray bumpers passed every test, but the red robot was a different story. It began strong—cleaning well, turning away from cliffs, and escaping from collisions. When cleaning was done the robot detected the charging station beacon from the proper distance, turned toward it, and advanced confidently. But then, only a couple of feet from its goal, the robot became confused, hesitated, and wandered off in the wrong direction. Every attempt ended the same way: no matter where or how we started it, the robot could never reach its source of electrical nourishment.
We were all baffled. The red robot had exactly the same homing electronics as the other robots; it ran exactly the same code; the beacon sent exactly the same signal to all the robots. The only difference we knew of was the color of the bumper. How could that possibly affect the way the robot behaved?
The sensor Roomba uses to detect the beacon is mounted on the bumper. Above the sensor is a little plastic dome that directs IR energy toward the detector. As the robot turns left or right the dome moves into different parts of the beam and the signal changes. For the robot to operate correctly the IR energy reaching the detector must come from nowhere but the plastic dome.
After many theories and tests we finally discovered that the gray bumpers were opaque to IR while the red bumper was translucent. The gray bumpers prevented IR energy from reaching the detector except after being directed through the dome, while the red bumper let IR energy reach the detector from any direction. The robot saw the beacon everywhere; no wonder it became confused. We confirmed our diagnosis by putting black tape on the back side of the red bumper—with the tape in place, everything worked fine.
The red bumper mystery was a fun if perplexing debugging exercise. But I think it teaches a lesson in a completely different area: Be wary of drawing conclusions from simulations. A simulation includes only those features one believes in advance to be significant. But the real world always seems to find ways to make obscure things suddenly take on vital importance. At least it does in relatively unexplored areas like robotics.
In the early ’90s I worked briefly at a company called Denning Mobile Robotics. At the time I arrived, about eight years after the company’s founding, Denning had developed an ambitious robot called Sentry. Sentry was a security robot designed to patrol the corridors of a warehouse, office, or other facility after hours.
Sentry was a marvel of engineering—especially considering the technology available at the time. Sentry used a ring of sonar sensors to detect obstacles and follow walls; it used infrared and microwave motion sensors to detect intruders and a video camera to transmit a picture back to the security station. Sentry could follow a programmed path (relying on previously installed, active beacons) and would automatically return to its charging station to recharge its batteries. The robot gave impressive demonstrations.
Over the course of several years many talented, capable people designed, built, and programmed Sentry. These robot pioneers were justifiably proud of their achievement. But a commercial robot must please its customers, not its builders, and this is where the trouble started.
Sentry was placed at several customer sites. Denning management was confident that Sentry would be well received and gave the trial companies favorable terms. But after a few months all the robots were sent back to Denning. No one wanted to buy or lease Sentry.
Engineers sitting around the lab might imagine that a security robot would frequently encounter intruders. Maybe the voice of the guard relayed through the robot would instruct the would-be burglar to surrender or flee. Maybe the robot would even give chase. Unfortunately, Denning discovered that’s not what security staff spend most of their time doing. Instead guards do things like check the doors to make sure that they are locked, turn off the lights and the coffee pot, maybe turn down the thermostats to save energy. Sentry couldn’t do any of those things.
Sentry could roll along a corridor and report unexpected movement. For that customers had to outfit their office or warehouse with beacons and pay Denning $75,000. And someone was still needed to check that the doors were locked and the coffee pot turned off. Companies concluded that Sentry’s service wasn’t worth the price.
Denning’s example—involving people I knew—of fruitless effort and dashed hopes, dramatically illustrated to me that great technology isn’t enough. Building a really cool robot nobody wants is just an exercise in disappointment. If your work is to count for something you have to solve a problem people want solved at a price they’re willing to pay. Fulfilling a customer need at a competitive price makes a robot practical. But achieving practicality is deceptively hard.