dsharlet

[MOC] EV3 Catching a Ball in Mid-Air

Hi all, this is a project I worked on ~1.5 years ago, but I'm just now finally getting around to writing up some blurbs about how some of it works. I had been thinking for *ages* (ever since RCX 1.0) about how interesting and cool it would be to build a LEGO robot to catch a ball in mid-air. Obviously, the parts we got from LEGO at that time were not remotely good enough to attempt this (and I did not know some of the key things necessary to make this work). But by the time of the EV3, I thought it might be possible, so I gave it a try, with some success.

[Image: delta1.jpg]

Here are some more pictures: https://goo.gl/photo...khrsioVxZbGw84A and a video of it in action:

Some quick details:

The robot uses two NXTcam cameras (http://www.mindsenso...page&PAGE_id=78) to track the ball in stereo, which gives a 3D estimate of the trajectory of the ball. Here is a video from when it was WIP, testing just the stereo tracking:

The robot is programmed using ev3dev (www.ev3dev.org) and written in C++; the code is available on GitHub: https://github.com/dsharlet/DeltaCatch

The robot catches ~1/5 of (reachable) throws; it can reach an area with a radius of ~25 studs, and up to ~40 studs vertically.

A bit more information on the design and how it works:

In order to catch a ball, a robot needs to be able to move very quickly and accurately. This is why I chose a delta robot design, and (over-)built it to be very stiff. This is also the motivation for using pneumatics to control the hand of the robot: the pneumatics have very low mass and move quickly. All of the heavier parts controlling the hand are located on the base of the robot where mass is less of a concern. The robot has a range of about 50 LEGO studs (about 40 cm) horizontally, and it can reach about 40 studs above the top of the platform.

Most of the hard problems in getting this to work are in the software for tracking the ball and planning motions for executing a catch.

The biggest challenge in getting this to work was the camera calibration. Calibration is the process of determining the camera parameters that define how positions in the world are mapped to pixel coordinates on the sensor of a camera. Calibrating the NXTcam (or similar object-detecting cameras) is very difficult because this type of camera does not provide images, only object tracking data. This means the standard calibration tools and packages that use known relative feature positions cannot be easily used. In addition to this, the NXTcam is very low resolution. Ideally, camera calibration would be done with subpixel-accurate feature extraction, especially for a low-resolution device like the NXTcam.

My solution for the calibration problem is to use a different constraint. I use object tracking data from a ball tied to one end of a string, with the other end of the string fixed to a known position relative to the robot. Moving the ball in the stereo field of view while keeping the string taut gives tracking observations of an object known to lie on a sphere. The calibration is then formulated as an optimization problem to find the camera parameters that minimize the distance from the surface of the sphere to the estimated 3D position of the ball. This calibration procedure is finicky and took me several attempts to get a good calibration. It helps to use observations from several different spheres; the more, the better. It's also important to use inelastic string, such as kite string.
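To make this concrete, here's a rough sketch of the kind of objective I mean. The camera model, triangulation, and all of the names here are simplified illustrations rather than the actual DeltaCatch code, but the structure of the problem is the same: triangulate the ball under the current camera guess, and penalize its distance from the sphere swept out by the taut string.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Hypothetical pinhole camera parameterization: focal lengths, principal
// point, world-space position, and a world->camera rotation (row-major).
struct Camera {
    double fx, fy, cx, cy;
    Vec3 position;
    double R[9];
};

// Convert a pixel observation into a viewing ray direction in world coordinates.
Vec3 pixel_to_ray(const Camera& cam, double u, double v) {
    Vec3 d = { (u - cam.cx) / cam.fx, (v - cam.cy) / cam.fy, 1.0 };
    // Rotate the camera-space direction back to world space (multiply by R^T).
    return { cam.R[0]*d.x + cam.R[3]*d.y + cam.R[6]*d.z,
             cam.R[1]*d.x + cam.R[4]*d.y + cam.R[7]*d.z,
             cam.R[2]*d.x + cam.R[5]*d.y + cam.R[8]*d.z };
}

// Midpoint triangulation: the point halfway between the closest points of the
// two viewing rays (near-parallel rays ignored for brevity).
Vec3 triangulate(const Camera& cl, const Camera& cr,
                 double ul, double vl, double ur, double vr) {
    Vec3 d1 = pixel_to_ray(cl, ul, vl), d2 = pixel_to_ray(cr, ur, vr);
    Vec3 w0 = sub(cl.position, cr.position);
    double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    double d = dot(d1, w0), e = dot(d2, w0);
    double denom = a*c - b*b;
    double t1 = (b*e - c*d) / denom, t2 = (a*e - b*d) / denom;
    return { 0.5*(cl.position.x + t1*d1.x + cr.position.x + t2*d2.x),
             0.5*(cl.position.y + t1*d1.y + cr.position.y + t2*d2.y),
             0.5*(cl.position.z + t1*d1.z + cr.position.z + t2*d2.z) };
}

// One stereo observation of the tethered ball (pixel coordinates in each camera).
struct Observation { double ul, vl, ur, vr; };

// Calibration objective: triangulate each observation and measure how far it
// lies from the sphere defined by the string anchor and string length. A
// nonlinear least-squares solver minimizes this over the camera parameters.
double calibration_objective(const Camera& cl, const Camera& cr,
                             const std::vector<Observation>& obs,
                             Vec3 anchor, double string_length) {
    double sum = 0.0;
    for (const Observation& o : obs) {
        Vec3 x = triangulate(cl, cr, o.ul, o.vl, o.ur, o.vr);
        Vec3 dx = sub(x, anchor);
        double r = std::sqrt(dot(dx, dx)) - string_length;
        sum += r * r;
    }
    return sum;
}
```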

Once the cameras are calibrated, the next problem is estimating the trajectory of a ball flying through the air. This is difficult because the NXTcam frames are not synchronized, so we cannot assume that the object detected by two NXTcams is at the same 3D position. This means that the most obvious approach of computing the 3D position of the ball at each pair of camera observations and fitting a trajectory to these positions is not viable.

To work around this, I set up trajectory estimation as an optimization problem where the objective function is the reprojection error (https://en.wikipedia...rojection_error) between the current estimate of the trajectory sampled at the observation time and the observations from the cameras. This formulation allows for a new variable representing the unknown time shift between the two sets of observations. Despite lacking floating point hardware, the EV3 processor can solve this nonlinear optimization problem in 40-80 ms (depending on initialization quality), which is fast enough to give the robot enough time to move to where the trajectory meets the robot.
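Here's a similarly simplified sketch of what that objective looks like; again the camera model and names are illustrative, not the actual DeltaCatch code. The key point is that the unknown clock offset dt between the two unsynchronized cameras is just one more variable of the optimization, alongside the initial position and velocity of the ball.

```cpp
#include <vector>

struct Vec3 { double x, y, z; };
struct Vec2 { double u, v; };

// Same hypothetical pinhole camera as in the calibration sketch above.
struct Camera {
    double fx, fy, cx, cy;
    Vec3 position;
    double R[9];  // world -> camera rotation, row-major
};

// Project a world point into pixel coordinates.
Vec2 project(const Camera& cam, Vec3 p) {
    Vec3 d = { p.x - cam.position.x, p.y - cam.position.y, p.z - cam.position.z };
    double xc = cam.R[0]*d.x + cam.R[1]*d.y + cam.R[2]*d.z;
    double yc = cam.R[3]*d.x + cam.R[4]*d.y + cam.R[5]*d.z;
    double zc = cam.R[6]*d.x + cam.R[7]*d.y + cam.R[8]*d.z;
    return { cam.fx * xc / zc + cam.cx, cam.fy * yc / zc + cam.cy };
}

// Ballistic trajectory: position and velocity at t = 0, constant gravity g.
struct Trajectory { Vec3 x0, v0; };

Vec3 position_at(const Trajectory& tr, double t, Vec3 g) {
    return { tr.x0.x + tr.v0.x*t + 0.5*g.x*t*t,
             tr.x0.y + tr.v0.y*t + 0.5*g.y*t*t,
             tr.x0.z + tr.v0.z*t + 0.5*g.z*t*t };
}

// One tracked detection: pixel coordinates plus the camera-local timestamp.
struct Detection { Vec2 px; double t; };

// Reprojection objective: sample the current trajectory estimate at each
// observation time, project it into the camera, and compare with the
// observed pixel position. The offset dt shifts the second camera's clock.
double reprojection_objective(const Trajectory& tr, double dt, Vec3 g,
                              const Camera& cam0, const std::vector<Detection>& obs0,
                              const Camera& cam1, const std::vector<Detection>& obs1) {
    double sum = 0.0;
    for (const Detection& o : obs0) {
        Vec2 p = project(cam0, position_at(tr, o.t, g));
        double du = p.u - o.px.u, dv = p.v - o.px.v;
        sum += du*du + dv*dv;
    }
    for (const Detection& o : obs1) {
        Vec2 p = project(cam1, position_at(tr, o.t + dt, g));
        double du = p.u - o.px.u, dv = p.v - o.px.v;
        sum += du*du + dv*dv;
    }
    return sum;
}
```

A nonlinear solver (e.g. Levenberg-Marquardt) then minimizes this over the trajectory parameters and dt as new observations arrive.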

The robot is programmed to begin moving to where the trajectory is expected to intersect the region reachable by the robot as soon as it has the first estimate of the trajectory. As more camera observations take place, the estimated trajectory can be improved, so the robot continues moving to the expected intersection of the refined trajectory and the reachable area. To increase the window of time for catching the ball, the robot attempts to match the trajectory of the ball while closing the "hand" as the ball arrives. This reduces the risk of the ball bouncing out of the hand while attempting to catch.
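As a toy illustration of that intersection test, here's one way to find the earliest catchable moment, assuming the reachable volume is approximated as a simple sphere (the true reachable region of a delta robot is more complicated, and this is not the actual DeltaCatch code). It can be re-run every time the trajectory estimate is refined:

```cpp
#include <cmath>
#include <functional>

struct Point { double x, y, z; };

// Given a function that evaluates the estimated ball position at time t,
// step forward in time to find the earliest moment the ball comes within
// `reach` of the robot's center. Returns a negative time if it never does.
double first_reachable_time(const std::function<Point(double)>& ball_at,
                            Point center, double reach,
                            double t_max, double step) {
    for (double t = 0.0; t <= t_max; t += step) {
        Point p = ball_at(t);
        double dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
        if (std::sqrt(dx*dx + dy*dy + dz*dz) <= reach) return t;
    }
    return -1.0;  // never enters the reachable volume
}
```

The estimated ball position at that time becomes the target for the arms, and the hand is timed to close as the ball arrives.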

Anyways, that's probably too much detail to start with; let me know if you want to know more.

absolutely amazing!! how long have you been working on this? do you think if another robot threw the ball, it would catch it more often? like a little thrower bot with a little randomization to it?

I am speechless... this is amazing on so many levels. Respect! :thumbup:

This is quite incredible. Congrats on your quest! It's quite a machine you have engineered. But I also liked your description, which gives us a glimpse of the problems you ran into and the solutions to them. I would not mind reading a full article about this.

Fascinating project. Thanks for the detailed description of the development. It really shows how much effort went into this. The camera calibration workaround (ball on string) is really clever.

Now a few questions:

Is the processing done completely by the EV3 unit? (I think the answer is yes since you mentioned processing times).

How far away can the cameras track the ball?

What is the success rate for catching the ball? It looks like the accuracy of the throw and the incoming speed play a big role in the ability to catch the ball, so maybe it varies from person to person.

Last question is more of a request - would you be able to post a few pictures showing the details of this awesome robot?

What is the success rate for catching the ball? It looks like the accuracy of the throw and the incoming speed play a big role in the ability to catch the ball, so maybe it varies from person to person.

He mentioned the success rate is around 1 in 5:

The robot catches ~1/5 of (reachable) throws; it can reach an area with a radius of ~25 studs, and up to ~40 studs vertically.

Great project!

I still remember, some months ago, how it seemed so "organic" when it was "just" tracking an object... now it seems like something from a Terminator movie.

And yes, I would appreciate more details.

He mentioned the success rate is around 1 in 5

Thank you, this is what happens when I post before having my morning coffee.

Incredible! If anything, this highlights how amazing it is that humans are so accurate at catching, and we don't have to do advanced maths like the robot does.

It is a really impressive creation. I didn't know that the new Mindstorms was that advanced and accurate, and able to do such complex actions. It is really amazing.

Thanks for the kind words! Here are the answers to some of the questions:

how long have you been working on this?

I worked on this for a few months. I probably spent more time on the software than the robot itself (lots of time waiting for parts from bricklink...).

Is the processing done completely by the EV3 unit? (I think the answer is yes since you mentioned processing times).

Yes, all of the processing is done on the EV3! Even the camera calibration, which in hindsight was probably a mistake (it takes ~10-60 seconds to solve that problem, depending on how much calibration data you give it). The EV3 doesn't have hardware floating point, and floating point is strictly necessary for the numerical techniques I'm using, so I wasn't sure the EV3 would be fast enough. But I've found the EV3 hardware to be quite powerful. It can probably do anything you might want a LEGO robot to do, with the exception of processing large amounts of data (like images, but that's done on the NXTcams by a dedicated processor).

do you think if another robot threw the ball, it would catch it more often? like a little thrower bot with a little randomization to it?

How far away can the cameras track the ball?

What is the success rate for catching the ball? It looks like the accuracy of the throw and the incoming speed play a big role in the ability to catch the ball, so maybe it varies from person to person.

Regarding the quality of throws/camera tracking, there are a few issues to deal with:

- The cameras have a really narrow field of view, so the region in which the ball is visible to both cameras is relatively small.

- The cameras can also only see the ball out to ~8 feet (2-3 m). The more light there is, the further away they can see it (hence the lamps attached to the table, and the funky lighting in my video).

- The robot can only reach a relatively small area (~50 studs horizontally x ~40 studs vertically).

The success rate (1 in 5) already excludes throws that the robot can't be expected to catch due to some of these issues. I think the remaining issues are:

- The NXTcams only run at 30 fps, and the tracking is quite crude. You only get a bounding box for a particular color, and the NXTcam's color matching is done poorly too. I was prepared to attempt to reprogram the NXTcam firmware to do better object tracking: use YUV instead of RGB (which should make the tracking less sensitive to lighting), and compute a centroid instead of a bounding box (which should give sub-pixel precision for the ball position). But it proved to be just *barely* able to track the ball well enough :)

- There's quite a lot of slop in the EV3 servo motors. I attached the arms directly to the motors to try to avoid this slop, and didn't gear them down to reduce the error because I needed the arms to move as quickly as possible.

- Despite trying pretty hard to make everything rigid, the arm is still pretty flexible, which leads to errors in positioning.

Last question is more of a request - would you be able to post a few pictures showing the details of this awesome robot?

Any particular parts you want to see? The inside of the robot is actually quite boring, because there are very few moving parts aside from the arms. There's not a single gear train in this entire robot! The whole structure is mostly just massive overbuilding to make the base as stiff as possible. BTW, the link could be quite easy to miss next to the embedded youtube video, so here is a link to some still photos again: https://goo.gl/photos/EYkhrsioVxZbGw84A

Stunning. Just stunning. If you're looking for another challenge, here's my dream project: EV3 riding a unicycle. I've no idea if it's possible, but it should be fun to try :)

BTW, the link could be quite easy to miss next to the embedded youtube video, so here is a link to some still photos again

Yup, saw it now. I was interested in the grippers and the base; the pictures you have are more than sufficient. Apologies for asking for info you had already provided; I was very excited by this project and obviously missed some stuff.

Would you ever consider using non-LEGO cameras or moving the program to a computer?

This is really awesome. :thumbup: As a robotics engineer, I can really appreciate the effort and the result. Thanks for sharing, I wouldn't mind a more in-depth explanation of the algorithms if you're willing to write it up! :classic:

Would you ever consider using non-LEGO cameras or moving the program to a computer?

It would be nice to use much higher quality cameras; the challenge is latency. The NXTcam is pretty nice in that there should be very little latency between the world and its observations. Using a separate camera and/or doing the processing on a computer would enable more powerful techniques, but I think the added latency would be really problematic. Aside from the latency itself, it would probably also be unpredictable/random latency, which would be hard to deal with.

This is really awesome. :thumbup: As a robotics engineer, I can really appreciate the effort and the result. Thanks for sharing, I wouldn't mind a more in-depth explanation of the algorithms if you're willing to write it up! :classic:

Thanks! Any particular piece you want more info on? I'm happy to go into more detail if you have some specific questions/curiosity about a particular part of the project :)

One thing I didn't mention before that might be interesting - this was the first LEGO robot I've worked on that required precisely behaving motor controllers. The stock controllers weren't cutting it, because when you change the target position (setpoint), they reset the internal PID controller state, which leads to very jerky motion. So I built a custom PID controller that allows varying the setpoint without resetting the internal state. This means that you can continuously change the setpoint (e.g. to track a moving target) without ugly transients; there's a rough sketch of the idea after the videos below. Here is a video comparing the controllers:

Here are two more videos that compare the delta robot with and without the custom controllers.

Stock controller:

Custom controller:
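For the curious, here's a minimal sketch of the idea behind the custom controller. The gains, output clamping, and differentiating the measurement instead of the error are illustrative choices rather than the exact DeltaCatch implementation; the point is just that changing the setpoint leaves the accumulated state alone.

```cpp
#include <algorithm>

// PID controller whose setpoint can be moved continuously without clearing
// the integral/derivative state, so tracking a moving target stays smooth.
class PID {
public:
    PID(double kp, double ki, double kd, double out_limit)
        : kp_(kp), ki_(ki), kd_(kd), out_limit_(out_limit) {}

    // Change the target without touching the accumulated state.
    void set_setpoint(double sp) { setpoint_ = sp; }

    // Call once per control period with the measured position; returns the
    // motor command (e.g. duty cycle), clamped to the output limit.
    double update(double measured, double dt) {
        double error = setpoint_ - measured;
        integral_ += error * dt;
        // Differentiate the measurement rather than the error, so a sudden
        // setpoint change does not cause a derivative spike ("kick").
        double derivative = (measured - prev_measured_) / dt;
        prev_measured_ = measured;
        double out = kp_ * error + ki_ * integral_ - kd_ * derivative;
        return std::max(-out_limit_, std::min(out_limit_, out));
    }

private:
    double kp_, ki_, kd_, out_limit_;
    double setpoint_ = 0.0;
    double integral_ = 0.0;
    double prev_measured_ = 0.0;
};
```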

Geez, I see what you mean! That robot is having a seizure! Does it work at all without the custom controller?

I never attempted to have it catch the ball with the stock controller, so I don't know. Even if it did work, I didn't like running it like that very much, because I'm pretty sure it would destroy the servo motor gears quickly. I also had to strap the whole robot down to keep it from shaking itself off the table. It would probably ruin the camera calibration quickly, too.
