In the summer of 2018, a small Berkeley-based robotics startup received a challenge. KNAPP, a major provider of warehouse logistics technologies, was on the hunt for a new AI-powered robotic arm that could pick as many types of items as possible. So every week, for eight weeks, it would send the startup a list of increasingly difficult items—opaque boxes, transparent boxes, pill packages, socks—that covered a range of products from its customers. The startup team would buy the items locally and then, within the week, send back a video of their robotic arm transferring the items from one gray bin to another.
By the end of the challenge, executives at KNAPP were floored. They had challenged many startups over six or seven years with no success and expected the latest one to have the same outcome. Instead, in every video, the startup’s robotic arm transferred every item with perfect accuracy and production-ready speeds.
“Every time, we expected that they would fail with the next product because it became more and more tricky,” says Peter Puchwein, vice president of innovation at KNAPP, which is headquartered in Austria. “But the point was they succeeded and everything really worked. We’ve never seen this quality of AI before.”
Covariant has now come out of stealth mode and is formally announcing its partnership with KNAPP today. Its algorithms have already been deployed on KNAPP’s robots in two of KNAPP’s customers’ warehouses. One, operated by the German electrical supplier Obeta, has been fully in production since September. The co-founders say Covariant is also close to striking another deal with an industrial robotics giant.
The news signifies a change in the state of AI-driven robotics. Such systems used to be limited to highly constrained academic environments. But now Covariant says its system can generalize to the complexity of the real world and is ready to take warehouse floors by storm.
There are two categories of tasks that exist in warehouses: things that require legs, like moving boxes from the front to the back of the space, and things that require hands, like picking up and placing items in the right place. Robots have been in warehouses for a long time, but their success has primarily been limited to automating the former. “If you look at a modern warehouse, people actually rarely move,” says Peter Chen, co-founder and CEO of Covariant. “Moving stuff between the fixed points—that’s a problem that mechatronics is really great for.”
But moving things with hands requires more than just the right hardware. It demands nimble adaptation to a wide variety of product shapes and sizes, as well as their ever-changing orientations. A traditional robotic arm can be programmed to execute the same precise movements again and again, but it will fail the moment it encounters any deviation. It needs AI to “see” and adjust to have any hope of keeping up with its evolving surroundings. “It’s really the dexterity part that requires intelligence,” Chen says.
But in the last few years, while research labs have made incredible advances in combining AI and robotics to achieve such dexterity, bringing those advances into the real world has been a completely different story. Labs can get away with 60 or 70% accuracy; robots in production cannot. Even with 90% reliability, a robotic arm would be a “value-losing proposition,” says Pieter Abbeel, Covariant’s co-founder and chief scientist.
To truly pay back the investment, Abbeel and Chen estimate that a robot needs to be at least 99% accurate, and maybe even 99.5%. Only then can it operate without much human intervention or risk of slowing down a production line. But it wasn’t until the recent progress in deep learning, and in particular reinforcement learning, that this level of accuracy became possible.
That research happens in Covariant’s office, situated not far from the San Francisco Bay waterfront, off a dilapidated parking lot between a row of unmarked buildings. Inside, several industrial robots and “co-bots,” collaborative robots designed to operate safely around humans, train for every product possibility.
On a regular basis, members of Covariant’s team go on convenience store runs to buy whatever odds and ends they can find. The items range from different medicines to packaged clothes to eraser caps encased in clear boxes. The team especially looks for things that might trip the robot up: highly reflective metallic surfaces, transparent plastic surfaces, and easily deformable surfaces like cloth and chip bags that will look different to a camera every time.
Hanging above every robot is a series of cameras that act as its set of eyes. That visual data, along with sensor data from the robot’s body, feeds into the algorithm that controls its movements. The robots learn primarily through a combination of imitation and reinforcement techniques. The former involves a person manually guiding the robot to pick up different objects; the robot then logs and analyzes the motion sequences to learn how to generalize the behavior. The latter involves the robot conducting millions of rounds of trial and error. Every time the robot reaches for an item, it tries it in a slightly different way. It then logs which attempts result in faster and more precise picks versus failures, so it can continually improve its performance.
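The trial-and-error half of that process can be sketched in miniature. The toy loop below is a drastically simplified illustration of the idea, not Covariant’s actual system: every name, the single “grasp angle” parameter, and the reward function are assumptions made for the example. Each attempt perturbs the best-known grasp slightly and keeps whatever scored better:

```python
import random

random.seed(0)

# Toy sketch of reinforcement-style pick learning: perturb the grasp each
# attempt and keep statistics on what worked. All names and the reward
# model here are illustrative simplifications.

class PickLearner:
    def __init__(self, base_angle: float = 0.0):
        self.best_angle = base_angle
        self.best_reward = float("-inf")

    def attempt(self, reward_fn, exploration: float = 5.0) -> float:
        """Try a slightly perturbed grasp angle; keep it if it did better."""
        angle = self.best_angle + random.uniform(-exploration, exploration)
        reward = reward_fn(angle)  # e.g. speed/precision of the resulting pick
        if reward > self.best_reward:
            self.best_reward, self.best_angle = reward, angle
        return reward

# Toy environment: the pick succeeds best near a grasp angle of 30 degrees.
learner = PickLearner()
for _ in range(2000):
    learner.attempt(lambda a: -abs(a - 30.0))

print(round(learner.best_angle, 1))  # a value close to 30.0
```

A real system would learn over images and full motion sequences with a deep network rather than a single scalar, but the core loop of trying a variation, scoring it, and keeping what improves is the same shape.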
Because it is ultimately the algorithm that learns, Covariant’s software platform, called Covariant Brain, is hardware agnostic. Indeed, the office has over a dozen robots of various models, and its live deployment with Obeta uses KNAPP’s hardware.
Over the course of an hour, I watch three different robots masterfully pick up all manner of store-bought items. In seconds, the algorithm analyzes their positions, calculates the attack angle and correct sequence of motions, then articulates the arm to grab on with a suction cup. It moves with certainty and precision, and changes its speed depending on the delicateness of the item. Pills wrapped in foil, for example, receive gentler treatment—to avoid deforming the packaging or crushing the medication. In one particularly impressive demonstration, the robot also reversed its air flow to blow a pesky bag pressed against the bin’s wall into the center for easier access.
KNAPP’s Puchwein says that since adopting Covariant’s platform, its robots have gone from being able to pick between 10% and 15% of Obeta’s product range to around 95%. The last 5% consists of particularly fragile items like glasses, which are still reserved for careful handling by humans. “That’s not a problem,” Puchwein adds. “In the future, a typical setup should be maybe you have 10 robots and one manual picking station. That’s exactly the plan.” Through the partnership, KNAPP will scale its Covariant-enabled robots to all of its customers’ warehouses in the next few years.
In addition to product picking, Covariant wants to eventually encompass all aspects of warehouse fulfillment, from unloading trucks to packing boxes to sorting shelves. It also envisions expanding beyond warehouses into other areas and industries.
Even longer term, Abbeel also harbors a moonshot: “The long-term vision of the company is to solve all of AI robotics.”