Why Amazon Built a Spatula-Wielding Robot

AI transcript
0:00:01 This is an iHeart podcast.
0:00:06 Run a business and not thinking about podcasting?
0:00:07 Think again.
0:00:11 More Americans listen to podcasts than ad-supported streaming music from Spotify and Pandora.
0:00:15 And as the number one podcaster, iHeart’s twice as large as the next two combined.
0:00:17 Learn how podcasting can help your business.
0:00:19 Call 844-844-iHeart.
0:00:22 Why are TSA rules so confusing?
0:00:24 You got a hoodie on, take it off!
0:00:25 I’m Manny.
0:00:25 I’m Noah.
0:00:26 This is Devin.
0:00:31 And we’re best friends and journalists with a new podcast called No Such Thing,
0:00:33 where we get to the bottom of questions like that.
0:00:34 Why are you screaming at me?
0:00:36 I can’t expect what to do.
0:00:39 Now, if the rule was the same, go off on me.
0:00:39 I deserve it.
0:00:40 You know, lock him up.
0:00:46 Listen to No Such Thing on the iHeart Radio app, Apple Podcasts, or wherever you get your podcasts.
0:00:48 No such thing.
0:00:51 Barbie’s gone blockbuster.
0:00:53 OnlyFans is rewriting the rules for creators.
0:00:56 And ESPN is leading the game in sports media.
0:00:59 I’m Chris Grimes, the FT’s LA Bureau Chief.
0:01:05 And on September 17th and 18th, I’ll be hosting the Financial Times Business of Entertainment Summit in West Hollywood,
0:01:11 where CEOs from Mattel, OnlyFans, ESPN, and many more will tackle the future of entertainment.
0:01:16 Head to ft.com slash entertainment to unlock your exclusive FT discount.
0:01:19 Use code FTPODCAST to save 20%.
0:01:37 One of the things I look at in robotics as a big field, there are so many amazing demonstrations of mobility.
0:01:41 Robots doing backflips, robots running down hills.
0:01:48 And that’s really impressive to me because I can’t do a backflip or I, you know, might trip if I run down the hill.
0:01:54 But where the really valuable parts of robotics are going to be are in manipulation.
0:01:59 So my kid can take a blueberry out of her cereal bowl because she doesn’t want to eat it.
0:02:02 And that is an incredibly hard task for a robot.
0:02:05 And you don’t see any of those demos.
0:02:12 And I think we’re like kind of inherently programmed as people to like be biased towards the backflip being more impressive.
0:02:20 And in reality, like the business value and the harder thing for the robot is to like take the blueberry out of the cereal bowl.
0:02:28 I’m Jacob Goldstein, and this is What’s Your Problem?
0:02:32 The show where I talk to people who are trying to make technological progress.
0:02:34 My guest today is Aaron Parness.
0:02:41 Aaron spent the earlier part of his career building space robots at NASA’s Jet Propulsion Laboratory, JPL.
0:02:44 Six years ago, he went to work at Amazon.
0:02:49 Now Aaron is a director of applied science at Amazon Robotics.
0:02:53 I wanted to talk to Aaron about a robot arm called Vulcan.
0:02:59 He and his team developed Vulcan to do a job that is surprisingly hard for robots to do.
0:03:04 Taking stuff that gets delivered to Amazon warehouses and putting it onto shelves.
0:03:11 In order to solve this problem, Aaron and his team had to build a robot that had a sense of touch,
0:03:14 that could deal with complicated, unpredictable situations,
0:03:19 and that could look at a shelf and plan out a course of action.
0:03:26 As you’ll hear in the interview, all of those traits may someday be helpful not just in stocking shelves in a warehouse,
0:03:31 but in doing lots of boring-sounding but complicated real-world tasks.
0:03:35 Like, for example, taking a blueberry out of a bowl of cereal.
0:03:41 To start, I asked Aaron to tell me the problem that Vulcan was designed to solve at Amazon’s warehouses.
0:03:47 So, new inventory comes into the building, you know, trucks pull up and they unload new stuff.
0:03:52 We need to store that stuff while it’s waiting for someone to click the buy button.
0:03:56 We store it in these large fabric bookcases.
0:03:58 It’s about eight feet tall.
0:04:01 It has about 40 different shelves on it.
0:04:01 Okay.
0:04:07 It’s four-sided, so you can store stuff from any of the different faces of the case.
0:04:08 Okay.
0:04:11 What’s really interesting is the stuff is randomly stowed.
0:04:14 So, it’s not like all the iPhones are in one shelf.
0:04:19 It’ll be all different stuff, all mixed together.
0:04:24 When you say random, do you mean random, or do you mean it would look random to the untrained eye?
0:04:26 I mean literally random.
0:04:27 Really?
0:04:30 Wherever there is space, you can put the item.
0:04:32 Because that’s what’s optimal?
0:04:34 It turns out the optimal way to store stuff is random?
0:04:35 That’s right.
0:04:36 Why?
0:04:43 This stems actually from Jeff Bezos’ original vision, I think, and it’s incredible.
0:04:50 So, you want to have the most selection, and you want to have speed of delivery, and you want to have low cost.
0:04:52 And that’s what the customer wants, right?
0:04:59 The customer is using Amazon.com because we have selection, we have speed, and we have low cost.
0:05:06 In order to achieve that, you have to have these massive warehouses located really close to your customers.
0:05:13 And you have a lot of customers in Tokyo, in New York City, in San Francisco, where real estate’s really expensive.
0:05:27 So, you have to figure out a way to put all of this different stuff in, like, the densest packing area you can, and have access to it immediately, so that you can deliver in, you know, hours instead of days.
0:05:32 And what that means is that random is better than structured.
0:05:40 So, anywhere there’s a space, you can add that item into the inventory, and that means it comes up for sale immediately on the website.
0:05:47 And then when someone places an order, you don’t have to wait for that iPhone bookcase to make its way all the way across the warehouse.
0:05:54 You probably have a thousand iPhones in the warehouse, and whichever one is closest can go to whichever pick station is eligible.
0:05:58 And it ends up being actually substantially faster.
0:06:02 So, that last sentence seems to be the key.
0:06:10 The idea is like, yes, given you have whatever, a thousand iPhones in the warehouse, in a universe where a human had to know where they all were, you’d put them all on one shelf.
0:06:15 But you’re saying, at any given time, that means that shelf is probably going to be pretty far away.
0:06:25 Whereas, if you randomly distribute them throughout the shelves in the warehouse, at any given time, one of those thousand iPhones is probably going to be pretty close to where it needs to be.
0:06:33 And because you have a, whatever, a computerized system that can keep track of everything all the time, it makes sense to randomly distribute all the things.
0:06:34 Yep, that’s exactly right.
0:06:35 Okay.
0:06:36 And it works on the flip side as well.
0:06:48 So, when you have a new item that’s come in, rather than waiting for the shelf that has the right size thing to put the new dog toy in, you just put the dog toy anywhere you can find space for it.
0:06:49 Uh-huh.
0:06:50 It’s like my house.
0:06:50 Yes.
0:06:54 We have a lot of dog toys in my house also.
0:06:54 Yeah.
0:06:56 That’s really interesting.
0:06:57 It’s great for the customer.
0:06:58 And that’s optimal.
0:06:59 It’s optimal.
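The distance logic behind random stow can be checked with a toy simulation. This is a minimal sketch assuming a 1-D warehouse; the shelf count, copy count, and trial count are made-up stand-ins, not Amazon's numbers:

```python
import random

random.seed(0)
N_SHELVES = 1000   # hypothetical 1-D warehouse of shelf positions
N_COPIES = 100     # copies of one item stored in the building
TRIALS = 2000

def mean_distance_to_nearest(scatter):
    """Average distance from a random pick station to the nearest copy."""
    total = 0
    for _ in range(TRIALS):
        station = random.randrange(N_SHELVES)
        if scatter:  # random stow: copies spread across the whole building
            copies = [random.randrange(N_SHELVES) for _ in range(N_COPIES)]
        else:        # structured stow: every copy on one dedicated shelf
            copies = [random.randrange(N_SHELVES)]
        total += min(abs(c - station) for c in copies)
    return total / TRIALS

scattered = mean_distance_to_nearest(True)
clustered = mean_distance_to_nearest(False)
print(f"random stow: {scattered:.1f}  structured stow: {clustered:.1f}")
```

With these numbers, the scattered copies average only a few shelves from any station, while the single dedicated shelf averages hundreds of positions away, which is the "whichever one is closest" advantage.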
0:07:04 And it creates an incredibly difficult environment for robotics.
0:07:05 Uh-huh.
0:07:07 Because now you have to deal with all this clutter.
0:07:12 We can have more than a million unique items in one warehouse.
0:07:12 Yeah.
0:07:15 So, it’s not like you have a model of each of those items.
0:07:21 And we sell more third-party items than, you know, Amazon owns themselves, right?
0:07:23 We are a platform for third-party fulfillment.
0:07:28 And so, you don’t have all the data about all those items.
0:07:33 And so, you have to handle all this uncertainty, all this clutter, and everything’s tightly packed.
0:07:46 And so, still in most places, as a result, when stuff comes into the warehouse every day off a truck, do people take the things out of the truck and stick them randomly on shelves wherever they can find space?
0:07:47 Is that the system?
0:07:47 Yeah.
0:07:52 So, that is exactly the system, and it’s in, you know, hundreds of buildings around the world.
0:08:01 And just to be clear, I mean, it’s pretty clear, but just to really put a point on it, why is this a hard environment for robots?
0:08:14 Traditional industrial robots do not handle contact well, so like touching their environments, and they don’t handle clutter or, you know, uncertainty.
0:08:28 And so, it’s hard because to put that last book onto the bookshelf, or to squeeze that teddy bear into the just small enough space that it’ll fit, you have to push the other stuff around that’s already on that bookshelf.
0:08:32 And a traditional robot doesn’t have sensors.
0:08:33 It doesn’t even know how to do that.
0:08:42 So, if you think of like a car manufacturing line, you’re like 1990s, 2000s, you know, welding robot or loading sheet metal into a press.
0:08:45 It’s doing all of that only knowing its position in space.
0:08:47 So, it has no force sensing.
0:08:55 If it runs into something, it either is like an emergency stop because it’s like broken, or it just smashes that thing and keeps going.
0:08:57 And it doesn’t even know it smashed anything.
0:08:59 It literally has no sensing.
0:09:02 That is an incredibly homogenous environment, right?
0:09:07 It’s doing like the exact same thing at a very high level of precision forever.
0:09:08 One thing.
0:09:09 That’s exactly right.
0:09:30 And so, this extension, the like fundamental breakthrough for science, for robotics manipulation that my team is trying to make is, one, giving the robot a sense of touch and using that along with sight and along with like knowing where your robot is to do meaningful tasks in like very high contact, high clutter environments.
0:09:32 And then there’s a brain part.
0:09:41 It’s also much more difficult to kind of predict how this random assortment of items is going to move or change as you push on it.
0:09:42 And so, there’s an AI piece.
0:09:46 There’s a brain piece that’s saying, this item will fit in that bin.
0:09:49 This is actually one of the most frustrating things when you try and do the job yourself.
0:09:51 I’m like an optimist.
0:09:52 I’m always, oh, yeah, this will fit.
0:09:58 And I go up there and I try and play Tetris and I try and rearrange the shelf and like it clearly isn’t going to fit.
0:10:03 And then I’ve wasted 30 seconds or 40 seconds and I have to try something else.
0:10:05 That’s a good statement of the problem.
0:10:08 Well, like when did you come onto the scene?
0:10:16 So, I was working on some other stuff and there was a recent PhD that had joined our team.
0:10:19 He was, you know, one year out of school, something like this.
0:10:24 And he says, I’m going to go try and solve stowing items into these bookshelves.
0:10:27 And my thought was, oh, how naive.
0:10:33 Like the real world is going to teach this new grad that’s just way too hard a problem for robotics to solve.
0:10:36 But I was helping him because it’s fun, right?
0:10:41 Like you like to work on hard problems when you’re a researcher and he was a very nice guy.
0:10:44 And so I was, you know, helping him but never thought it was going to work.
0:11:03 And there were a couple of kind of moments where we made these simplifications that turn the problem from, I have to try and do every possible game of Tetris that a person can do into a problem where you’re like, oh, it’s not that this is never going to work.
0:11:04 It’s that this is the future.
0:11:06 Like this is robotics 2.0.
0:11:08 Like this is, I have to work on this.
0:11:09 I can’t do anything else anymore.
0:11:12 I’m like, I’m all in on this problem.
0:11:16 Tell me about one of those simplifications, one of those moments.
0:11:18 It was the gripper is one.
0:11:23 The design, the mechanical design of the robotic hand was actually a big breakthrough.
0:11:29 And when we started, we were trying to push items with the item we were gripping.
0:11:36 So imagine you’re pinching a book and you’re trying to use that book to like push this dog toy over to the side.
0:11:37 I see.
0:11:38 So you want to put the book in a bin.
0:11:39 Yeah.
0:11:40 The dog toy is in the way.
0:11:45 So you’re like, okay, pick up the book and use the book kind of like a brush to sweep the dog toy out of the way.
0:11:46 Okay.
0:11:49 And we say, okay, like, I understand, but it’s never going to work.
0:11:50 What if you don’t have a book?
0:11:51 What if you have a T-shirt?
0:11:52 Yeah.
0:11:55 What if you have an iPhone and it’s very expensive?
0:11:58 Are you going to actually want to start pushing on stuff with the phone?
0:12:08 And so we came up with this strategy to have like a spatula that would extend into the bin and you’d push everything with this spatula that was part of your hand.
0:12:09 Huh.
0:12:16 So imagine like you’re like Wolverine and you can shoot out, you know, but instead of like the adamantium claws, you’re shooting out a spatula.
0:12:23 So it’s like a pincher grip, but a little spatula shoots forward out of the pincher grip is the thing.
0:12:24 That’s right.
0:12:27 It’s so simple when you put it that way.
0:12:33 I mean, I’m sure making it was not low tech, but it sounds very like it’s not like some crazy AI thing.
0:12:37 It’s like just what if there was another little thing that came out and could push stuff out of the way.
0:12:46 But those ideas are like the really powerful ones when you have a simple, elegant solution and you’re like, okay, that could work.
0:12:53 That’s different than like a five fingered hand that has 25 motors embedded in it.
0:12:55 You’re like, oh, it’s just a spatula.
0:12:56 Fingers are famously difficult.
0:12:59 Why didn’t anybody think of it before?
0:13:05 So we had been working on it as a company back to the Amazon picking challenge, which was, you know, 2015.
0:13:13 But I think a lot of robotics researchers like myself were scared that this problem was just too hard.
0:13:15 There were easier things to go try and work on.
0:13:18 And there were a couple of simplifications.
0:13:20 So using this spatula was one.
0:13:27 And then you watch people do the task and you realize they’re kind of doing the same strategies over and over again.
0:13:32 It’s like insert the spatula on the edge and sweep to one side.
0:13:33 Okay.
0:13:35 Or this kind of page turn mechanism.
0:13:40 Something’s fallen over and you need to sort of flip it back up to make space.
0:13:45 So you put the spatula underneath it and flip the thing up 90 degrees, basically.
0:13:52 And you realize that accounts for like 90% of the actions you do when you try and stow into these bins.
0:13:56 And did you figure that out by watching people stow?
0:13:57 We did.
0:13:58 And doing it yourself.
0:14:00 How much stowing did you do?
0:14:02 A couple of days.
0:14:03 Okay.
0:14:04 It’s a hard job.
0:14:05 Thousands of items probably.
0:14:06 I imagine so.
0:14:13 And we tried to wear GoPro cameras on our heads so we could look at the videos later, which, it turns out, is a recipe for motion sickness.
0:14:16 It’s very difficult to watch those videos.
0:14:18 But you go and you do it and you build up this intuition.
0:14:27 And I think the other piece of the problem that made it tractable and made me this like huge believer was recognizing we didn’t have to get to 100%.
0:14:33 So in some automation scenarios, you have to solve the whole problem.
0:14:34 And if you don’t, you have nothing.
0:14:36 So like landing on the moon.
0:14:45 And what we realized was there was a way to make the business logic work where the robot could handle 75% of the stows.
0:14:53 And it just had to not make a mess and work alongside people to do the other 25%.
0:15:00 And the sum of the parts is actually much better than either all robots or all employees would be on their own.
0:15:08 And making that realization all of a sudden meant that it could be a two or three-year project instead of a 20-year project.
0:15:17 Because chasing this long tail, you know, we have a million unique items in the building, but we also process a million items per day.
0:15:23 So I have a phrase like if something goes wrong once in a million, it happens every day in every Amazon building.
0:15:27 And to try and solve all of those is a 20-year problem.
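The "once in a million happens every day" line is just Poisson arithmetic on the volumes Aaron quotes — roughly a million items processed per day, with a hypothetical one-in-a-million edge case:

```python
import math

items_per_day = 1_000_000   # from the interview: roughly a million items per day
failure_rate = 1e-6         # a hypothetical "one in a million" edge case

expected_failures = items_per_day * failure_rate     # = 1 per building per day
p_at_least_one = 1 - math.exp(-expected_failures)    # Poisson: P(at least one today)
print(expected_failures, round(p_at_least_one, 3))   # 1.0 0.632
```

So any given building sees that edge case on most days, and across hundreds of buildings it happens many times daily.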
0:15:33 I feel like that part of the solution generalizes in a really nice way, right?
0:15:37 Like, I mean, I guess the 80-20 problem is a sort of cliche.
0:15:44 But the idea that like, oh, if you think of the problem the right way, it’s like, no, we don’t have to build a robot that does it every time.
0:15:49 If we build a robot that does it 75% of the time, that is a huge efficiency gain.
0:15:52 And maybe the optimal point on the curve, right?
0:15:53 Yep.
0:15:58 If the robot is doing everything, you’re working too hard to make the robot work, probably.
0:15:59 Exactly that.
0:16:01 So, okay.
0:16:03 So you have these two big ideas.
0:16:06 Do you want to tell me the sort of story of making it work?
0:16:07 You want to tell me how it works?
0:16:16 We’ve been running six of these robots at a warehouse in Spokane, Washington, since November of last year.
0:16:20 And so we’ve done over half a million stows at this point.
0:16:24 We also have another product that’s picking those items out of the bins.
0:16:27 And so that’s my team in Germany.
0:16:31 And so we have a warehouse in Hamburg where we’ve been picking items.
0:16:36 And picking is a slightly harder problem in some ways because you have to identify the item.
0:16:38 So for stow, you have to identify free space.
0:16:43 It’s either occupied or you can make space to put the next item in.
0:16:47 For pick, you want to make sure I get you the red T-shirt, not the red sweatpants.
0:16:52 Or I get you the Harry Potter Vol. 2 and not Sapiens or some other book.
0:16:53 Tell me how it works.
0:16:56 Let’s do the stow first since that’s what we’ve been talking about.
0:17:01 So there’s this warehouse in Spokane where this robot that you built is in use.
0:17:02 Like, what happens there?
0:17:04 A truck pulls in and then what happens?
0:17:11 The way the system works is one of these pods, one of these bookcases, pulls up to the station.
0:17:13 So it pulls in front of the robot.
0:17:15 We have stereo camera towers.
0:17:18 And so we’re looking with the eyes first.
0:17:23 And we are creating a, you know, 3D representation of the scene.
0:17:29 So we’re modeling, you know, all the items that are in the pod already.
0:17:38 But the really interesting part is we’re actually predicting on top of that, how we can move those items around to make more empty space.
0:17:40 How can we squeeze more stuff in?
0:17:44 So it’s not just identifying vacant space.
0:17:50 You have to predict where you can make that vacant space by pushing stuff with this spatula.
0:17:52 Then we do this matching algorithm.
0:17:57 So we have about 40 or 50 items waiting for us to stow.
0:17:59 And so we have a variety of stuff.
0:18:05 And we’re matching those 40 or 50 items to the 30-ish shelves that are in front of the robot.
0:18:07 Which items should go where?
0:18:09 And then how do we make that space?
0:18:14 And so that’s where a lot of the AI in the system is active and operating.
0:18:15 It’s predicting success.
0:18:17 It’s minimizing risk.
0:18:20 It’s trying to optimize for a bunch of different parameters.
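A minimal sketch of that matching step, assuming a greedy assignment over predicted success scores. The scores here are random placeholders standing in for the learned model's fit/risk prediction, and the real system optimizes more parameters than this:

```python
import random

random.seed(1)
N_ITEMS, N_SHELVES = 8, 5   # toy instance; the real buffer is ~50 items, ~30 shelves

# Hypothetical predicted-success score for stowing item i into shelf s,
# standing in for the learned model's prediction.
score = {(i, s): random.random() for i in range(N_ITEMS) for s in range(N_SHELVES)}

def greedy_match(score):
    """Repeatedly commit the highest-scoring remaining (item, shelf) pair."""
    plan, used_items, used_shelves = [], set(), set()
    for (i, s), v in sorted(score.items(), key=lambda kv: kv[1], reverse=True):
        if i not in used_items and s not in used_shelves:
            plan.append((i, s, v))
            used_items.add(i)
            used_shelves.add(s)
    return plan

plan = greedy_match(score)
for i, s, v in plan:
    print(f"stow item {i} -> shelf {s} (predicted success {v:.2f})")
```

Greedy is just the simplest choice for illustration; an optimal assignment solver would do the same job with better guarantees.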
0:18:25 Once we’ve made that selection, we grasp the item.
0:18:31 So that item we’ve selected for putting into the given shelf passes into our hand.
0:18:34 And our hand is two conveyor belt paddles.
0:18:36 So you can think of it kind of like a panini press.
0:18:37 Like a George Foreman grill?
0:18:43 It is a George Foreman grill where each side has a conveyor built into it.
0:18:44 Like a little belt?
0:18:46 Like just a little belt going around?
0:18:47 That’s right.
0:18:51 So each face of the grill, the top face and the bottom face have a conveyor belt.
0:18:54 And that’s important because you can control the pose of the item.
0:19:00 And you can feed it into the bin rather than like throwing it into the bin.
0:19:05 One of the early versions we had kind of dropped it and tried to punch it to put the item into the bin.
0:19:09 And that predictably failed in a lot of ways.
0:19:13 Well, you say predictably now, but if you tried it, it wasn’t predictable, right?
0:19:14 Yeah.
0:19:16 I’m a huge believer in iterative design.
0:19:21 And so we try and build early, build often, build cheap and learn from those builds.
0:19:28 So it’s actually really important to keep six-DOF (six degrees of freedom) pose control of the item.
0:19:32 So you want to make sure the item isn’t rotating as you shoot it out.
0:19:36 You want to make sure that you keep the orientation of the item because it’s fitting tightly.
0:19:42 So you don’t want it to run into the bookshelf above it or below it or the items that are already in there.
0:19:42 Yeah.
0:19:43 Yeah.
0:19:44 Yeah.
0:19:45 We started by trying to shoot it out.
0:19:49 And then we had all kinds of problems when it would like collide with stuff and fall on the floor.
0:19:53 The worst case is, you know, you would shoot it out.
0:19:58 It would bounce off the back of the bookcase and then come back and hit you in the face or hit you in your…
0:19:58 Did that happen?
0:19:59 Yeah.
0:19:59 That’s good.
0:20:01 That’s robot comedy.
0:20:01 Yes.
0:20:03 Robot physical comedy.
0:20:04 Yeah.
0:20:05 So, and that’s it.
0:20:07 That’s the stow process.
0:20:09 And we want to do that a few hundred times an hour.
0:20:23 And we want to do it on the top shelves of those bookcases. That’s one of the ways we are really complementary to the employees: if the robots can do the top shelves, it takes over a lot of the ergonomically difficult tasks.
0:20:28 It allows the employees to work in their power zone, like, you know, shoulder level.
0:20:30 That makes them faster, too.
0:20:33 So if you put robots in, people get faster at the job.
0:20:39 I mean, presumably, as the robot gets better, it’ll also be better at putting things on the middle shelf, right?
0:20:40 Well, there’s this, like, sweet spot.
0:20:43 The robot’s going to get better faster than people will get better.
0:20:44 Yeah.
0:20:47 We want the robot to be as good as it can and not chase 100%.
0:20:50 We don’t really believe in 100% automation.
0:20:54 We want to find that sweet spot where we’re maximizing productivity.
0:20:56 But, I mean, the sweet spot’s going to keep moving, right?
0:21:00 The robot’s going to get better and better and be able to do more and more faster and faster, presumably.
0:21:03 And my science team’s actually really excited about that.
0:21:08 As you get more and more data, so we have 500,000 stows that we’ve done so far.
0:21:20 But when we get to 500,000,000 stows, you can leverage some of these techniques to start learning the motions and learning some of these strategies and refining them to be specific to the item that you’re holding in your hand.
0:21:24 There’s a lot of, like, opportunity as you get more and more data.
0:21:25 Well, right.
0:21:31 So we haven’t really, I mean, you mentioned the software side, the AI side, but we haven’t really talked about it.
0:21:41 And it is, I mean, in talking to other people working on robotics, it’s plainly a data game because there’s no internet of the physical world, right?
0:21:48 Because large language models work so well because there’s this huge data set and everybody is trying to get data from the physical world.
0:21:53 And you seem very well positioned to get a lot of data from the physical world.
0:21:55 I think that’s true.
0:22:01 So one of the joys of being a roboticist at Amazon is all the data that we have access to.
0:22:04 But I will push back a little bit that it’s just a data problem.
0:22:06 It’s a hotly debated topic.
0:22:20 Some people in the world believe that you can apply the same sort of transformer architectures that work so well for search and so well for natural language processing and apply those to robotics if we only had the data.
0:22:23 I would not put myself in that camp.
0:22:29 I am not a believer that all we need is more torque data from robotic grippers and we’ll solve it.
0:22:35 Natural language is already tokenized in a way that’s very amenable to those methods.
0:22:41 And language and search are also very tolerant of sloppiness.
0:22:44 So you and I can have a conversation.
0:22:46 I don’t have to get every single word correct.
0:22:57 But if you mess up a torque on a gripper, you can crush your iPhone or you can sort of smash something else that’s there or drop something or just fail the task.
0:23:07 And that’s because you have physics and this, you know, nonlinear, very sort of difficult to model real world that these robots have to interact with.
0:23:14 And so I think those techniques certainly accelerate us in a lot of places, but they don’t just solve the problem.
0:23:22 I think we need all of the rest of robotics, like hardware design and classical control theory to solve those problems.
0:23:24 Compelling.
0:23:26 Although you did start this part of the conversation.
0:23:35 You brought it up by saying the science team is really excited for what the model is going to learn once you have hundreds of millions of stows.
0:23:35 That’s right.
0:23:37 And that both things are true.
0:23:37 I know.
0:23:38 Yes.
0:23:40 Plainly, we’re just talking about sort of the margins, right?
0:23:42 What is true at what margins?
0:23:54 I mean, I did wonder as I was reading about this, you know, I thought of AWS, of Amazon Web Services, which, of course, like was an internal Amazon thing that at some point Amazon was like, oh, maybe other people would find this service useful.
0:23:56 And now it’s a giant part of Amazon’s business.
0:24:01 And so I wondered, like, are you building Amazon robotics services?
0:24:03 Yeah, not today.
0:24:13 There’s so much value that we can provide to our fulfillment business that we are 100% focused on that.
0:24:20 Certainly as a roboticist, though, I take great joy that the work we’re doing is advancing the field of robotics.
0:24:25 And so it definitely makes my job better that we’re advancing the state of the art.
0:24:32 But from a business perspective, it’s all hands on making the fulfillment process better for Amazon.com.
0:24:37 We’ll be back in just a minute.
0:26:54 I think I read you say that you’re building a foundation model of items.
0:26:55 Is that right?
0:26:59 And I sort of know what that means, but tell me what that means when you say that.
0:27:07 So when a robot handles an item, it would do better if it takes into account the properties of that item.
0:27:16 So if you’re trying to hand a bowling ball to someone, you should do that in a different way than if you’re handing them a bouncy ball or a light bulb.
0:27:29 At its core, a foundation model for items is simply a model that encodes all of those attributes of an item and makes them available to the robotic systems that are going to use it.
0:27:38 And one of the things that makes it a foundation model instead of just, you know, some custom bespoke thing is that you can transfer it across lots of different applications.
0:27:40 So if it’s, you know, stowing, you can use it.
0:27:44 If you’re packing it into a delivery box, you can use it.
0:27:49 If you’re putting it onto a shelf in a physical store, like for grocery or Whole Foods or something, you can use it.
0:27:54 And so that, like, commonality across applications is one of the things that’s important.
0:28:06 Is part of the notion there that, like, the model would allow a robot to sort of look at some novel item and make a reasonable inference about the properties of that item?
0:28:08 Yeah, absolutely that.
0:28:23 And the other thing that’s a little non-intuitive is that by understanding how to handle that item in all those different applications, a grocery, a packing, you know, stowing, picking, you get better at the individual application.
0:28:33 So by training on all of this data across these different domains, you actually get better at the individual task that your specific robot is trying to do.
0:28:37 It doesn’t, like, it takes a while to, like, understand that.
0:28:38 It is non-intuitive.
0:28:39 Say more.
0:28:40 What do you mean?
0:28:42 Like, I don’t know that I fully get it.
0:28:45 Understanding how an item behaves when you gift wrap it.
0:28:46 Uh-huh.
0:28:51 Shouldn’t really inform how it’s going to behave when you’re picking it off of a bookshelf.
0:28:54 Oh, I mean, yes, it should, right?
0:28:58 Like, if you think of, like, a, whatever, a stuffed animal versus a book.
0:28:59 Yeah.
0:29:01 Maybe that’s too easy of a case.
0:29:11 But, like, if a thing is squishy or rigid, that seems like, as a human being, I feel like we sort of port that knowledge from one use case to another, right?
0:29:12 Yeah, it’s a good point.
0:29:21 And maybe that’s because we are inherently sort of, we think and manipulate items in the world more similarly to how these foundation models do.
0:29:24 But 10 years ago, it was totally not the case.
0:29:27 You would train your model in a very narrow domain.
0:29:33 And if you gave it data from some other domain, it would kind of corrupt the results that you had.
0:29:40 And so you were very careful to, like, curate all the data that you were using to be very specific to the task that you wanted it to do.
0:29:47 And that made the performance better, but it also meant that the model you had was only good at that one very narrow thing.
0:29:51 It was why we were always so far from the general purpose robot.
0:29:51 Yeah.
0:29:57 Because, as you’re describing it, trying to make a robot do more than one thing just meant it couldn’t even do one thing.
0:29:58 We couldn’t even do one thing.
0:30:02 And so you’re putting all your effort into, like, making it do that one thing just a little better.
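The structural idea here — one shared item representation reused by stow, pick, pack and so on, versus the old one-model-per-narrow-task approach — can be sketched as a shared encoder with per-task heads. Everything below (the dimensions, the weights, the two task names) is a random placeholder for illustration, not anything from the real system:

```python
import random

random.seed(0)

def matmul(A, B):
    """Plain-Python matrix multiply (rows of A times columns of B)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

items = rand_matrix(5, 12)      # 5 items, 12 raw attributes each
W_shared = rand_matrix(12, 8)   # shared encoder, trained on data from all tasks
heads = {"stow": rand_matrix(8, 1), "pick": rand_matrix(8, 1)}  # per-task heads

embedding = matmul(items, W_shared)   # one shared representation per item
scores = {task: matmul(embedding, W) for task, W in heads.items()}
print({task: (len(s), len(s[0])) for task, s in scores.items()})
```

Because every task trains the same `W_shared`, data from gift wrap or grocery improves the representation that stow and pick read from — which is the non-intuitive cross-task gain Aaron describes.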
0:30:15 I think there’s another really interesting piece here, which is our team, the Vulcan team at Amazon, is trying to use touch and vision together.
0:30:19 And that is how people interact with the world.
0:30:21 That’s how people manipulate the world.
0:30:26 And so the example I like to give is picking a coin up off a table.
0:30:31 Ten years ago, when a robot would try and do that, I mean, it’s impossible.
0:30:33 Like, a robot can’t pick a coin up off a table.
0:30:34 It’s too hard a task.
0:30:39 My five-year-old can pick a coin up off the table in half a second without you noticing.
0:30:42 Well, the reason is your strategy.
0:30:47 So when you pick a coin up off the table, you actually don’t grasp the coin.
0:30:49 You go and you touch the table.
0:30:55 And then you slide your fingers along the surface of the table until you feel the coin.
0:31:00 And when you feel the coin, that’s your trigger to, like, rotate it up into a grasp.
0:31:08 You’re not going to some millimeter precision the way your grandfather’s robot on the, you know, welding line would do.
0:31:10 And you’re not just watching with your eyes.
0:31:13 You’re using your eyes and your fingertips both.
0:31:14 Your sense of touch.
0:31:15 Yes.
0:31:18 Your sense of touch is central to picking a coin up off the table.
0:31:27 And we’re trying to do the same kind of behaviors that are not only reacting to touch, but planning for touch.
0:31:42 So the same way you plan to touch the table first, we plan to, like, put our spatula against the side of the bookcase before we try to extend it in between this, you know, small gap between the T-shirt and the bag and the side of the bookcase.
0:31:49 So we are building our plans and our controllers around having sight and touch.
0:31:58 I mean, when you say touch in the context of the robot, does that mean that it is getting feedback from the stuff it is coming into contact with?
0:31:59 And is that novel?
0:32:00 And how does that work?
0:32:03 So the sensor is a force torque sensor.
0:32:05 It looks like a hockey puck.
0:32:13 And a thousand times a second, it’s telling you what it feels in the six degrees of freedom.
0:32:17 So up and down is one, left and right is two, in and out is three.
0:32:21 And then you’ve got roll, pitch, yaw as the three torques.
0:32:28 So a thousand times per second, you’re sensing, you’re feeling what the world is pushing on you with.
0:32:32 And we use that to control the motion, but also to plan the motion.
0:32:40 When you say plan the motion, it’s like, given the sense of touch that is happening right now, what should I do next?
0:32:54 Yep. So in the like high level view, it’s like touch the table first, slide along the table while keeping, you know, sort of one pound of force pushing into the table until you touch the coin and then, you know, rotate.
0:32:56 That’s at a high level.
0:33:06 But then even at a low level, the thousand times per second is so that as you slide your fingers along the table, you’re sort of maintaining that accurate force.
0:33:13 Yeah. Or like if you’re putting a thing on the shelf, you can sort of tell if you’ve pushed it too far because the shelf is pushing back at you.
0:33:18 Exactly. Or you can tell it’s slipping and you’re about to like push over the top of it.
0:33:22 And so you can like, oh, it’s about to fall over so I can react.
0:33:26 And those dynamics are happening at tens or hundreds of hertz.
0:33:29 And so you need to sense them at a thousand hertz.
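The strategy described here — touch the table, slide while regulating about a pound of force, and switch behavior when the fingertip feels the coin — is essentially a guarded-move state machine driven by the force-torque stream. A minimal sketch, not Amazon's actual controller: the `Wrench` fields, thresholds, and phase names are illustrative assumptions.

```python
# Hedged sketch of the coin-pick strategy from the conversation:
# APPROACH (free space) -> SLIDE (regulate ~1 lb into the table)
# -> GRASP (lateral force spike means the fingertip hit the coin).
# This is illustrative, not Amazon's code; thresholds are invented.

from dataclasses import dataclass

@dataclass
class Wrench:
    """One force-torque sample: three forces and three torques (6 DoF)."""
    fx: float  # in/out force (N) -- the sliding direction here
    fy: float  # left/right force (N)
    fz: float  # up/down force (N) -- into the table
    tx: float  # roll torque (N*m)
    ty: float  # pitch torque (N*m)
    tz: float  # yaw torque (N*m)

def coin_pick(samples, target_fz=4.4, coin_fx_threshold=1.0):
    """Walk a stream of (nominally 1 kHz) wrench samples through the phases.

    While sliding, compute a proportional correction toward ~1 lb (4.4 N)
    of downward force; a lateral force above the threshold is the trigger
    to rotate the coin up into a grasp.
    """
    state = "APPROACH"
    corrections = []
    for w in samples:
        if state == "APPROACH" and w.fz > 0.5:       # felt the table
            state = "SLIDE"
        elif state == "SLIDE":
            corrections.append(0.1 * (target_fz - w.fz))
            if w.fx > coin_fx_threshold:             # edge of the coin
                state = "GRASP"
    return state, corrections
```

A real controller would run this inner loop at the sensor rate (the thousand hertz mentioned above) so the force regulation and the contact trigger both react before the coin scoots away.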
0:33:32 What’s the frontier right now for stowing?
0:33:34 What are you trying to figure out?
0:33:44 One of the things is getting the fullness of those bins all the way up to where people get them today.
0:33:47 So as a person, you can pack those bins really, really densely.
0:33:57 And so the robot’s close, but not quite as good as a person is today at getting as much stuff into the bookcase as it can.
0:33:58 That’s one frontier.
0:34:06 And that is because one, we’re conservative, like our brain is telling us there’s no space when really there is space.
0:34:10 And two, it’s because those motions are not sophisticated enough yet.
0:34:13 So we’re trying to improve our vision system.
0:34:21 We’re trying to get the eyes better to help as well as those low-level touch sensors to those behaviors to be better.
0:34:25 So that’s one of the major frontiers.
0:34:26 The other one is the negative.
0:34:28 The robot makes too many mistakes.
0:34:35 So defects and exception handling are so important in robotic systems.
0:34:39 And this is another thing I think the world on the internet doesn’t appreciate enough.
0:34:42 Like you can do a demo and a happy path.
0:34:43 Hey, it worked once.
0:34:46 I can submit a paper to a conference or I can put a cool video on YouTube.
0:34:48 That’s great.
0:34:48 You have a demo.
0:34:58 To have a product, you have to make sure it’s working, you know, 99% of the time or 99.5% or, you know, in some cases, four nines or five nines.
0:35:07 And a lot of the work you have to do is to recover and handle those rare exceptions or prevent or recover from those defects.
0:35:11 And so the robot still drops too much stuff on the floor.
0:35:14 One of our frontiers is not dropping crap on the floor.
0:35:17 Like we need to get about three times better at that.
0:35:18 Uh-huh.
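The reliability targets mentioned above — 99%, 99.5%, "four nines," "five nines" — translate directly into a budget of allowed failures. A bit of illustrative arithmetic (the function and the per-million framing are mine, not from the conversation):

```python
# Converting "N nines" of reliability into an allowed-failure budget.
# "N nines" of success means a failure rate of at most 10**-N per attempt.
# Illustrative only; the pick volumes are made up.

def allowed_failures(nines: float, attempts: int) -> float:
    """Maximum failures permitted in `attempts` tries at a given nines level."""
    failure_rate = 10 ** (-nines)
    return attempts * failure_rate

# Per million picks:
#   2 nines (99%)     -> ~10,000 allowed failures
#   3 nines (99.9%)   ->  ~1,000
#   4 nines (99.99%)  ->    ~100
#   5 nines (99.999%) ->     ~10
```

The gap between a demo and a product is the gap between those rows: "getting three times better" at not dropping items moves you roughly half a nine, and each further nine cuts the allowed mistakes by another factor of ten.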
0:35:23 And presumably the robot is already skipping some universe of items that the robot can’t handle.
0:35:24 Yeah.
0:35:28 And so we need to get smarter about which items we skip and which items we take.
0:35:34 We also need to get better at inserting those items in such a way that they’re not going to fall back out.
0:35:37 What items are particularly hard for the robot?
0:35:40 So tight-fitting items are the hardest.
0:35:41 Uh-huh.
0:35:47 And so that’s not the nature of the item, but the nature of the particular relationship between the item and the shelf.
0:35:48 Exactly.
0:35:48 Yeah.
0:35:55 Like, is there a kind of thing that the robot just can’t do because of its shape or something?
0:36:01 There is a particular rubber fish that we really hate.
0:36:03 It’s a dog toy.
0:36:03 It’s floppy?
0:36:04 Is that why?
0:36:05 It’s sticky.
0:36:06 Oh, sticky.
0:36:07 Interesting.
0:36:08 Yeah.
0:36:10 And they don’t put it in a box?
0:36:10 Nope.
0:36:13 They just send you the sticky fish?
0:36:13 Yep.
0:36:18 And it sort of gets hung up; whenever it makes contact, it doesn’t slide.
0:36:21 It, like, it wants to rotate about whatever it’s made contact with.
0:36:24 And so there’s this particular dog toy.
0:36:25 And so we use it.
0:36:26 We’ve bought, like, 50 of them.
0:36:28 And now we have them in the lab.
0:36:31 And this is, like, our diabolical item set.
0:36:33 Is that a term of art, the diabolical?
0:36:34 I don’t know.
0:36:35 Yeah.
0:36:36 It’s our term of art, yeah.
0:36:40 Also, bagged items where the bag is really loose.
0:36:46 So imagine having, like, a T-shirt in a bag, but the bag is, like, twice as big as the T-shirt.
0:36:48 Floppy?
0:36:49 Is that the floppy problem?
0:36:51 Floppy, but also transparent.
0:36:54 So sometimes you can see through the bag or…
0:36:59 So the robot gets confused about, is the bag the item or not?
0:36:59 Uh-huh.
0:37:01 Sometimes you want one, and sometimes you want the other.
0:37:06 So, like, if it’s just floppy plastic bag, it probably will fit.
0:37:10 Like, if I just push it into the bin, the bag is going to conform and slide in.
0:37:16 But you can’t be sure about that, you know, you get into a bunch of those edge cases that are in that long tail of being robust.
0:37:22 I mean, it’s interesting, right, because the robot is dealing with this sort of human-optimized world.
0:37:29 Like, it reminds me of the way, I think, is it IKEA designs its furniture to fit optimally on a pallet?
0:37:29 Yeah.
0:37:30 So you can fit the most of them.
0:37:33 Like, not just the flat pack, but, like, in more subtle ways.
0:37:43 And can you imagine that there is some shift in the world where, I mean, obviously you’re trying to make the robot better, but also people are trying to make things work better for the robot?
0:37:54 And there is a different team within Amazon that’s imagining a future world and future bookcases that are friendly for robots.
0:38:07 However, there are currently 5 million of those bookshelves in warehouses holding inventory that’s for sale on Amazon.com.
0:38:14 And so it’s a really, really big lift to go replace all of those bookshelves.
0:38:14 Interesting.
0:38:22 So it’s a whole other team that’s just like, let’s imagine the, you know, a much more robot-centric warehouse.
0:38:23 Yeah.
0:38:24 Those guys, like, you don’t even talk to them.
0:38:26 They’re just off in their own whatever.
0:38:31 I mean, we’re friends, but yeah, we are facing very different problems.
0:38:33 And so we took a tenet very early on.
0:38:35 It’s like, the world exists.
0:38:40 The robot needs to perform in the world as it exists.
0:38:43 And this team, they get their green field.
0:38:45 So they get to think of, like, a new field.
0:38:49 We are a brown field, meaning we have to retrofit into these existing buildings.
0:38:52 You know, we have, like, 10-year leases on some of these buildings.
0:38:54 They’re going to be there for a long, long time.
0:38:56 And then somebody else is out there.
0:38:58 So they’re building a whole other kind of robot.
0:39:03 Your robot is optimized for the world today, and somebody else is building a robot for the robot world.
0:39:04 That’s right.
0:39:05 I love that.
0:39:07 They have a building that they’ve built in Louisiana.
0:39:09 It’s in Shreveport, Louisiana.
0:39:14 It has 10 times the number of robots that a traditional building has.
0:39:18 It’s a completely reimagined way of fulfilling your order.
0:39:21 It also has a lot of people still working in those buildings,
0:34:26 but they’re working in maintenance and robotics quarterback jobs,
0:39:28 and so they’re higher skilled.
0:39:33 And so we have a bunch of programs that are trying to transition our very talented workforce
0:39:35 into the jobs of the future.
0:39:40 One of the things I really like to say is you don’t need a college degree to work in robotics at Amazon.
0:39:46 So about 20 to 25% of my team don’t have a college degree but are enormously valuable.
0:39:50 Like, some of our top 10 people on our team are those people.
0:39:53 That facility in Shreveport, is it live?
0:39:56 Like, is real stuff going in and real orders going out?
0:39:57 Yeah, it’s live.
0:40:01 We could follow up with exactly the date, but it’s been up for about a year, I think.
0:40:02 Oh, interesting.
0:40:02 Something like that.
0:40:06 Well, I would be interested in talking to your counterpart there as well.
0:40:08 That show would pair interestingly with this show.
0:40:13 So, okay, let’s talk about the rest of the process, you know,
0:40:17 the rest of what’s going on in the warehouse and where else you’re working on robots.
0:40:23 So the piece we’ve been talking about this whole time is getting stuff as it comes in from the truck
0:40:28 onto the shelf, which naively, I wouldn’t even think of that part,
0:40:30 but it turns out to be this great big problem.
0:40:31 What are the other pieces?
0:40:37 What’s interesting is the science we’re building, giving robots a sense of touch,
0:40:42 has applicability in lots and lots of places across that whole chain.
0:40:48 Anytime the robots need to be physically interacting, like contacting, touching items,
0:40:51 is a good place for our core technology.
0:40:57 So if we’re packing four items into a box, because we want to send you the four things you bought
0:41:01 in one shipment, not in four separate packages, you need to touch the box.
0:41:03 You need to touch the other items that are already in the box.
0:41:05 You need to play that game of Tetris.
0:41:06 Yes.
0:41:08 I mean, it’s a stowing problem again, right?
0:41:11 I know it’s called packing, but it’s a version of that same problem.
0:41:11 That’s right.
0:41:13 And those problems recur over and over again.
0:41:17 So getting all of the packages, all of the cardboard boxes and paper mailers
0:41:24 into a cart that can go onto the back of the truck, that is a stowing problem in the cart.
0:41:29 Putting things in a thing is a great big problem in many ways.
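The recurring "putting things in a thing" problem — stow, pack, cart loading — is, in its simplest form, classic bin packing. A deliberately tiny, hedged sketch: real stow planning is 3-D and contact-aware, which this 1-D first-fit-decreasing heuristic entirely ignores.

```python
# First-fit decreasing: a standard bin-packing heuristic, shown here in 1-D
# as a toy model of the stow/pack problems discussed above. Illustrative
# only -- nothing here is Amazon's planner.

def first_fit_decreasing(item_sizes, bin_capacity):
    """Pack sizes into bins: sort large-to-small, put each item into the
    first bin with room, and open a new bin when none has space."""
    bins = []  # each bin tracks remaining capacity and its contents
    for size in sorted(item_sizes, reverse=True):
        for b in bins:
            if b["free"] >= size:
                b["free"] -= size
                b["items"].append(size)
                break
        else:  # no existing bin fits: open a new one
            bins.append({"free": bin_capacity - size, "items": [size]})
    return bins
```

Placing the big, awkward items first and squeezing small ones into leftover gaps is also roughly what a human stower does — the hard part the robot faces is that "does it fit" depends on squishiness and friction, not just nominal size.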
0:41:31 But you can also expand to think about groceries.
0:41:39 So if you order produce, you don’t want your grandfather’s welding robot handling your peaches.
0:41:40 It’s going to smash them.
0:41:42 Like you need a robot with a sense of touch.
0:41:48 If you think about household tasks, if you want a robot, you know, picking up your kids’ toys or
0:41:53 dealing with laundry, like those robots need to have a sense of touch.
0:41:56 They’re physically interacting in a dexterous way with the world.
0:42:01 And so one of the things that we’re so excited about, not only these big applications for stowing
0:42:08 and picking off of, you know, these bookcases, but everything that gets unlocked once the robot has that sense of touch.
0:42:17 When you talk that way, it feels like a beyond what is typically considered Amazon kind of thing.
0:42:29 It seems like a thing that either Amazon’s going to get into lots of other sort of non-retail businesses or license the technology or sell, you know, robotic touch as a service or whatever.
0:42:40 Yeah, I think there are probably five or 10 applications in how we process orders today that are all within the warehouses and delivery stations.
0:42:44 And those are my first hill to climb.
0:42:46 Then we do have a consumer robotics team.
0:42:50 So there was a cool robot we released called Astro.
0:42:53 It didn’t have any manipulation capabilities, right?
0:42:54 It would drive around your house.
0:42:57 It had a camera on a mast that would extend up and down.
0:43:00 You could talk to it the way you can talk to an Alexa device.
0:43:04 The future versions of those robots are going to want to do more useful things.
0:43:07 And so they’re going to need this kind of underlying technology.
0:43:10 And so that’s a business opportunity in the long term.
0:43:17 You know, that’s not a thing my team is focused on now, but I get excited about it when I think about what we unlock.
0:43:24 We’ll be back in a minute with the lightning round.
0:43:58 Imagine that you’re on an airplane and all of a sudden you hear this.
0:44:15 Attention passengers, the pilot is having an emergency and we need someone, anyone to land this plane.
0:44:16 Think you could do it?
0:44:23 It turns out that nearly 50% of men think that they could land the plane with the help of air traffic control.
0:44:27 And they’re saying like, okay, pull this, pull this, pull that, turn this.
0:44:28 It’s just like doing it with my eyes closed.
0:44:29 I’m Manny.
0:44:30 I’m Noah.
0:44:31 This is Devin.
0:44:36 And on our new show, No Such Thing, we get to the bottom of questions like these.
0:44:39 Join us as we talk to the leading expert on overconfidence.
0:44:46 Those who lack expertise lack the expertise they need to recognize that they lack expertise.
0:44:51 And then as we try the whole thing out for real, wait, what?
0:44:52 Oh, that’s the runway, right?
0:44:53 I’m looking at this thing.
0:44:54 See?
0:45:00 Listen to No Such Thing on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
0:45:34 Let’s do a lightning round.
0:45:37 If you listen to the show, you have a sense of what this is.
0:45:37 Yeah.
0:45:40 Tell me about the last time you were in zero gravity.
0:45:49 I flew an experiment to try and drill into rocks, which was going to be applied to asteroids.
0:45:56 And of course, if you’re drilling into an asteroid, any amount you’re pushing into the rock is pushing you back off into space,
0:45:59 because asteroids have almost zero gravity.
0:45:59 Right.
0:46:03 So you’re going to have somebody push it on the other side?
0:46:04 How do you solve that?
0:46:05 What do you do?
0:46:06 You grab it?
0:46:07 How do you even do that?
0:46:08 This is my passion for robot hands.
0:46:13 We built a robot hand that would grab the rock with a bunch of claws.
0:46:17 I think it had a thousand claws and the claws were actually fish hooks.
0:46:23 So imagine a bunch of fish hooks grabbing onto a rock to react to the force of pushing a drill bit down the center.
0:46:24 Did it work?
0:46:32 It did work, but it only worked on rocks that were pretty rough, that had a lot of spots for the fish hooks to grab.
0:46:34 But it turns out asteroids are really rough.
0:46:39 Most of the smooth rocks you find on Earth have been processed by liquid water or ice.
0:46:42 And that’s not happening on asteroids.
0:46:43 No liquid water.
0:46:51 And so this was on the plane, on that NASA plane that flies, what is it, parabolas, flies a sine curve, basically?
0:46:51 Yeah.
0:46:52 What was it like?
0:46:55 The Vomit Comet’s actually very zen.
0:47:00 So when you’re in zero gravity, when you’re floating, it’s like very peaceful.
0:47:06 It’s when you’re in double gravity, where you’re at the bottom of the parabola and you’re like being glued and pushed against the floor.
0:47:10 If you like turn your head very quickly, that’s where you get like into serious trouble.
0:47:15 And so the trick is just to like go into your zone for the bottom of the parabola.
0:47:20 And then you become like very free and zen-like in the zero-G portion.
0:47:22 Do you think you’ll ever go to space?
0:47:24 No.
0:47:27 I think now that I have three kids, I think I’m landlocked.
0:47:30 You seem a little bit sad about that.
0:47:34 Does everybody who works at JPL kind of want to go to space?
0:47:39 Yes, everybody that works at JPL, I think, does think about going to space.
0:47:51 I think what makes me sad is we could be doing so much more at building civilization out into space, at the scientific exploration of all of the interesting places in space.
0:47:57 And I think we’re kind of tripping ourselves up in a couple of places as a species.
0:48:00 I wish we would get unblocked and get some of that eagerness.
0:48:03 You see some of the private investment, like we’re doing well in rockets,
0:48:11 but we’re not yet doing well in the spacecraft and the scientific instruments and the pieces that have to fly on top of the rockets.
0:48:16 When you say we’re tripping ourselves up in a couple of places, in what places?
0:48:17 Like, what do you mean?
0:48:23 I think we became very conservative, like our risk posture about going to space.
0:48:29 We stopped treating it as this like very dangerous activity and tried to make it extremely safe.
0:48:30 And that slowed us down.
0:48:32 Got to bring back the cowboys.
0:48:32 A little bit.
0:48:33 Yeah.
0:48:34 Interesting.
0:48:34 Yeah.
0:48:39 And then there’s a lot of bureaucracy, of course, that built up over, you know, 50 years.
0:48:41 I still am very optimistic.
0:48:46 There’s a lot of smart people working in that area and a lot of exciting things happening.
0:48:48 So we’re going to get through it.
0:48:59 Aaron Parness is a director of applied science at Amazon Robotics.
0:49:03 Please email us at problem@pushkin.fm.
0:49:06 We are always looking for new guests for the show.
0:49:11 Today’s show was produced by Trina Menino and Gabriel Hunter-Chang.
0:49:15 It was edited by Alexander Gerriton and engineered by Sarah Bruguer.
0:49:19 I’m Jacob Goldstein and we’ll be back next week with another episode of What’s Your Problem.

Aaron Parness is a director of applied science at Amazon Robotics.

His problem is this: How do you build a robot that can put stuff on shelves?

Today on the show, Aaron explains why this is a surprisingly hard problem – and why the solution Aaron’s team came up with may ultimately have uses beyond the warehouse.

See omnystudio.com/listener for privacy information.
