ninoguba

Reverse Kinematics and Path Planning


Hello,

First topic post here. =)  So I've built a 6DoF robot arm with a motorized end effector and have so far only done the basic programming of the joint movements using Python (ev3dev). I have looked at implementations of reverse kinematics and RRT path planning on other robot platforms and would like to implement them for my arm. Has anyone here done this before and would like to share how they did it? I'm looking for collaborators on the programming side if there is any interest.

Here's the link to my very basic code so far:

https://github.com/ninoguba/ev3-robotic-arm

 

Thanks,

Nino

 

 



Nice robot arm! I'm working on one myself so will be following this thread.

I haven't done it before but was considering using the Robotics toolbox for Matlab to simulate the arm and get joint coordinates to plug into python. 

Any idea how you might approach it? 

3 hours ago, ord said:

Nice robot arm! I'm working on one myself so will be following this thread.

I haven't done it before but was considering using the Robotics toolbox for Matlab to simulate the arm and get joint coordinates to plug into python. 

Any idea how you might approach it? 

I found this for using Matlab with python.  Haven't tried it yet though.

https://www.mathworks.com/products/matlab/matlab-and-python.html

19 minutes ago, GroundskeeperWillie said:

I can highly recommend watching this video from Akiyuki. Especially the second half has many details about its motion. Unfortunately it doesn't get much into programming details.

Thanks. I've seen that, and it actually helped me with the wrist mechanism of my arm, build-wise. Unfortunately, only sparse details were shared about how he programmed his arm, other than this statement in his 2015 blog post.

Quote

The program to be executed on EV3 is written in C using TOPPERS / EV3RT.

It is actually trivial to hardcode pre-determined movements into your program, for example when moving the arm from a fixed starting point to a fixed destination to do something. The real challenge on the programming side is implementing RK and RRT so it becomes a breeze to make the arm do more concerted movements like complex pick-and-place routines. This is where I'm hoping to get a collaboration started and open-source the result to share.


Just made a quick drawing to calculate 2 Dimensions X/Z

[image: drawing of the 2D X/Z calculation]

You should start with measuring all center-to-center distances for all possible joints. Then you can make some lists/arrays with possible angles alpha, beta, etc. Then try to write code that takes an X/Y/Z coordinate and runs through the lists to find matching angles. Probably not the easiest way, but it's fun to find the math behind it.
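For the 2D case in the drawing, the two angles can also be solved in closed form with the law of cosines instead of searching lists. A minimal sketch, assuming a simple 2-link chain; the lengths l1/l2 and function names are placeholders, not taken from the drawing:

```python
import math

def planar_2link_ik(x, z, l1, l2):
    """Closed-form IK for a 2-link arm in the X/Z plane.
    Returns (shoulder, elbow) in degrees for one of the two mirror
    solutions, or None if the point is out of reach."""
    d2 = x * x + z * z
    if d2 > (l1 + l2) ** 2 or d2 < (l1 - l2) ** 2:
        return None  # outside the annulus the arm can reach
    # Law of cosines gives the elbow angle directly.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder: direction to the target minus the triangle's interior angle.
    shoulder = math.atan2(z, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return math.degrees(shoulder), math.degrees(elbow)
```

Running the returned angles forward through the arm geometry is a quick way to check the result against the drawing.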

1 hour ago, Mr Jos said:

Just made a quick drawing to calculate 2 Dimensions X/Z

You should start with measuring all center-to-center distances for all possible joints. Then you can make some lists/arrays with possible angles alpha, beta, etc. Then try to write code that takes an X/Y/Z coordinate and runs through the lists to find matching angles. Probably not the easiest way, but it's fun to find the math behind it.

Thanks for this!  I could definitely go down and do the math. But I was leaning more toward making use of existing libraries so I won't have to reinvent the wheel.

I actually already found this - https://github.com/nnadeau/pybotics - that I may be able to use with my program running on ev3dev. Wonder if anyone here has used it before...

3 hours ago, ninoguba said:

It is actually trivial to hardcode pre-determined movements into your program, for example when moving the arm from a fixed starting point to a fixed destination to do something. The real challenge on the programming side is implementing RK and RRT so it becomes a breeze to make the arm do more concerted movements like complex pick-and-place routines. This is where I'm hoping to get a collaboration started and open-source the result to share.

Yes, but from watching the video it's pretty clear to me that he is not using pre-determined movements, but rather set coordinates with calculated movements, probably using a model that knows the arm's geometry and limitations. But since he doesn't discuss how it's modelled and implemented, it doesn't help much. I've used the Java Mindstorms OS, leJOS, quite a lot, and it has great abstractions for many robotics tasks. But again, unfortunately, they are mostly aimed at control and pathfinding in 2D, not 3D. This is just to say that your suggestion to create an open-source Python project is probably the best way to go about it. Good luck!

2 minutes ago, ninoguba said:

I actually already found this - https://github.com/nnadeau/pybotics - that I may be able to use with my program running on ev3dev. Wonder if anyone here has used it before...

From a quick look, this looks promising. And you're right, doing this math from scratch is a monumental task. I've been a developer for two decades, but since I've mostly been doing high-level work, I still find robotics really difficult and I'm always struggling to find the right abstractions. But they already exist; it's just a question of finding those that match the problem at hand.

17 hours ago, ninoguba said:

Thanks for this!  I could definitely go down and do the math. But I was leaning more toward making use of existing libraries so I won't have to reinvent the wheel.

I actually already found this - https://github.com/nnadeau/pybotics - that I may be able to use with my program running on ev3dev. Wonder if anyone here has used it before...

For me, trying to make it work from zero to final product is the most fun. Once it works, most of the fun is over.

I'd imagine that code where you input the center-to-center joint lengths and maximum joint angles could drive an arm with any number of DoF, once a complete library is made. Personally, I'd have fun spending the time working out all the math behind it, especially with a higher number of axes like this one. I would give this one a try, but I have no model to test code with (yet!).

On 11/13/2020 at 6:40 PM, GroundskeeperWillie said:

much into programming details

There is little to no information about the programming code used. Akiyuki posted that he programmed it in C using TOPPERS/EV3RT. I don't have much experience in this field myself, but maybe it helps.


Delighted to have found this thread.

I, too, am building a 6-axis robot plus pneumatic end effector, with plans to employ a reverse kinematics algorithm to calculate the joint angles (thetas) when supplied with the coordinates and orientation of the end effector. Eventually I plan to add machine vision so the robot can locate and pick up an object and place it in a pile based on the object's characteristics.

Currently, I've built version x of the robot. Here's a photograph:

[image: photograph of the robot]

I was also inspired by Akiyuki's excellent wrist design, but have made the joint mechanisms stronger than his design to avoid gear slippage. The robot automatically calibrates each joint by jamming the joint. I sacrificed a bit of smooth motion for robustness, and may return to that issue another time.

I performed a kinematic analysis (using PowerPoint as a diagramming tool):

[image: 6-axis kinematic analysis diagram]

If you can spot problems, please let me know. I'm new to this; thinking in terms of multiple axes makes my head spin.

The main program is quite simple. The robot is specified as follows in the main program:

actuators_parameters = [motor_objects, actuator_names, calibrate_to_origin, gear_ratios, \
    calibration_speed_sps, calibration_stall_duty_cycle_sps, calibrate_holds, \
    normal_speed_ps, normal_duty_cycle_sps, ramp_sps, \
    theta_positive_motor_directions, theta_holds, theta_positive_directions, theta_range_of_motions, \
    theta_end_margins, initial_thetas, theta_adjustments]

Each parameter is a list of 7 items, one for each joint plus the end effector. With the exception of motor_objects, they could be input from a JSON file. Here are a couple of examples:

actuator_names = ['waist', 'shoulder', 'upper_arm', 'wrist_rotation', 'wrist_bend', 'flange', 'end_effector']
gear_ratios = [1/9, 8/60, 8/60, 12/60, 20/60 * 12/24, 1/3, 1]

The implementation works with any length of list.
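As a sketch of the JSON idea: the key names below simply mirror the variable names above and are my own invention, not actual code. JSON has no fraction literals, so the gear ratios are stored as [numerator, denominator] pairs (the compound wrist_bend ratio 20/60 * 12/24 reduces to 1/6).

```python
import json

# Hypothetical sketch: the same per-joint parameters loaded from JSON.
config = json.loads("""
{
  "actuator_names": ["waist", "shoulder", "upper_arm", "wrist_rotation",
                     "wrist_bend", "flange", "end_effector"],
  "gear_ratios": [[1, 9], [8, 60], [8, 60], [12, 60], [1, 6], [1, 3], [1, 1]]
}
""")

actuator_names = config["actuator_names"]
# Rebuild the fractional ratios from the [num, den] pairs.
gear_ratios = [n / d for n, d in config["gear_ratios"]]
```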

I then instantiate the robot as follows:

my_robot = robot(actuators_parameters)

All the smarts are buried in the robot class (which I wrote), including the creation of actuator objects (a separate class). I reckon I could specify a wide range of robots this way.

Next, I ask the robot to calibrate itself:

my_robot.calibrate([1, 2, 0, 4, 3, 4, 5, 6])  # note I calibrate joint 4 twice to avoid a possibility of jamming.

Each number refers to one of the actuators.

Next, I can move the robot arm to another position as follows:

my_robot.go_to( [35, 50, 160, 100, 110, 20, 45]) # The robot class deals with side-effects of concentric drives

Each number is a joint angle (theta). 

My plan now is to incorporate a reverse kinematics algorithm (libraries are referenced elsewhere in this thread). My application would supply the coordinates and orientation of the end effector; the inverse kinematics algorithm would then supply the thetas for the robot's go_to method shown in the example above.

First, though, I want to understand what I'm doing. I've done much of a Udemy course (https://www.udemy.com/course/robotics-1/), where I learned to do a kinematics analysis, and am ready to watch the lectures that explain the numerical methods used to perform reverse kinematics. Doubtless, along the way, I'll keep modifying the structure of the robot and application as I come across issues. I have to keep reminding myself the model is just a model; it will always be imperfect.

Of course I welcome feedback, and look forward to ninoguba's updates. 

 

 



Glad you found the thread, and I'm looking forward to your updates as well. Would you be interested in collaborating on the programming? I envision something that can be adapted to any model of LEGO EV3 robot arm, perhaps by parameterizing the number of joints, the dimensions, etc., and the type of end effector used. If you don't mind sharing your code on GitHub, I'm interested in comparing it with what I already have.

Something I haven't shared publicly before is what got me started with making my robot arm. A year or so ago, I had an idea for an entry to the LEGO Mindstorms + Alexa Contest that would combine my fondness for LEGO, Nerf and Alexa. That idea was a robotic arm with a target for Nerf blasters on its end effector. The movement of the target would be randomized so that it's quite a challenge for target practice, kinda like skeet shooting. Alexa would be used for the voice commands and sound effects. That idea proved hard to complete in the limited time before the contest deadline, so I pivoted to a simpler entry here - https://www.hackster.io/nino-guba/lego-target-practice-powered-by-mindstorms-ev3-and-alexa-fd1af8.

Then at the beginning of the year, due to the pandemic quarantine, I had the chance to get back to working on my robotic arm. I initially planned to attach the ultrasonic sensor to the end effector to detect when a Nerf Rival round passed through the target. I also thought of having a touch sensor activated when a physical target on the end effector is hit. I went through several iterations of the arm, starting from 3 degrees of freedom, to 4, to 5, then eventually to 6, thanks to Akiyuki's wrist design for inspiration. At each iteration I was also able to improve the range of motion and strength of the most critical joints, the shoulder and elbow. Lastly, I was able to motorize the end effector using another EV3 medium motor. Most arms I've seen, like yours, use pneumatics at the end effector, which I don't have. My robot arm in its current form models industrial robot arms such as the KUKA Agilus. I think it's cool that I was able to build such a MOC!

At some point I would still like to complete my original application for the project, but this really requires finishing the implementation of an inverse kinematics algorithm, which is also making my head spin. =)

Good luck to us in completing our projects! Anybody else working on their 6DoF robot arm is welcome to post their progress here.

 



I managed to calculate the current X/Y/Z position of the end effector for my 6DoF arm, with lengths a1, a2, etc. as variables so it fits any robot arm. And it works well.

Problem is, I gave up on reverse kinematics. I got to the point where I know how to do it: I get theta1/2/3 with triangle math, but for theta4/5/6 there is no other way than inverting a 3x3 matrix in which theta1/2/3 appear as variables, and then multiplying it by the 3x3 matrix of the desired end-effector pitch/yaw/roll and position. MicroPython has no built-in way to calculate this inversion, and it's not a simple calculation.

I tried for a long time to find a workaround with some if-statements for a given theta2/3, using all kinds of cos/sin math, but there is no linearity in the math for theta4/5/6.

So if you find a way to calculate 3x3 matrix inversion and multiplication, I'll be happy to share my calculations showing how to do it.


Ninoguba:

Thank you for your detailed message. I appreciate not being alone in this. I've just posted the code to Github.

https://github.com/tomwilsonmn/LEGO-EV3-n-DOF-Manipulator

Please feel free to critique it, freely borrow from it, and see if there is a way we can collaborate.

My mind is working overtime combining robotics and Nerf balls. I'm not sure I'd be happy having my robot "shot," but all sorts of alternative scenarios are "shooting" into my head: you would shoot at a regular target, and the robot would keep score using machine vision. It would then pick up the Nerf balls (nice and grippable), dump them in a bucket, and summon a vehicle to convey the bucket back to you.

Like you, I'm taking my time with this, moving forward when the mood takes me. Right now, I'm focusing on getting the robot to calibrate itself in a repeatable manner. It calibrates by jamming each limb at one end of its range of motion. Sometimes, a limb sticks and doesn't reach the end of its range of motion, or a gear slips. My goal is for the robot to successively calibrate itself accurately ten times in a row. Generally I can get it to calibrate 5 or 6 times in a row, but sometimes it acts up on the first or second calibration. I need to keep working on the limbs (I'm certainly learning how to make strong joints) and the settings (speed, power, ...) in the program to reach calibrate x10 Nirvana. I'm also thinking of adding some kind of automated check that the calibration proceeded correctly; e.g., the robot would press a button at the end of the calibration, or hold up a colored brick to the machine vision camera (neither of which is a 100% guarantee, but good enough for now). Only then will I feel confident to move forward with (gasp) inverse kinematics.

My first impression of inverse kinematics is that it's nothing to be too afraid of. Unlike forward kinematics, the solution is not strictly geometry; it demands a numerical-methods solution. Back in the day, I loved numerical methods classes in college, so I have some hope. Ultimately, I believe we have to ensure the kinematic analysis is correct, then plug numbers into functions in a math library; the functions will return the thetas we need. OK, it's not that simple, which is why I'm going to continue taking that Udemy course. I don't want to proceed with this without understanding what I'm doing. I imagine, along the way, I'll do a "paper prototype."

This certainly is a great activity to keep us sane during these strange times.

Tom



Hopping on the train, because these arms seem to be a common thing to do. I've found that for my purposes I can probably work in one section of cylindrical coordinates: the base turntable and everything up to the wrist provides a position in a cylinder (a 2D plane rotated around the turntable), and the wrist is a quaternion in Cartesian coordinates. I haven't actually tried it though.


Amicus1: Indeed, these arms are common, but it's apparent most implementations hard-wire the thetas into the code. It is really easy to manually move an end effector to a given set of coordinates, read the thetas (joint angles), then store those values in a file or in the program code. As I recall, every video I've seen has the robot going through a repetitive routine.

There are plenty of performing EV3 robots on the Web, but *nowhere* have I found a post or article describing the theoretical foundation of one of those robots. My suspicion is that there is none. But I would love this suspicion to be proved wrong.

An end effector being moved to an arbitrary, not predetermined position is a different beast.

Some industrial robots have a teaching mode where the robot records a human operator moving the robot arm to different positions. This is not the type of robot I'm trying to build. I want to be able to roll a ball to an arbitrary position, which the software determines via machine vision. The software then plugs the coordinates of the ball into an algorithm that returns one set of thetas (there are many) to position the end effector around the ball.

I suspect the geometry approach you’re researching works for *forward* kinematics where you supply thetas and lengths, etc., to a set of equations to determine the end effector coordinates.

Everything I read about *inverse* kinematics points to an iterative numerical-methods solution where, given the desired coordinates of the end effector, the algorithm supplies the theta values. There's plenty of material out on the Web. Wikipedia gives an overview:

https://en.m.wikipedia.org/wiki/Inverse_kinematics

There are many theses out there, just Google:

thesis n dof manipulator

On the first search results page “inverse kinematics” repeats over and over.

The necessary algorithms to perform inverse kinematics are well published, and there are readily available code libraries to perform inverse kinematics for n-DOF robotic manipulators. My main challenge is I want to understand what I’m doing, and not blindly plug numbers into a black box. (Painful, I know.)

That there are multiple possible sets of theta values for a given end effector location suggests a strictly geometry approach won’t deliver the goods. Perhaps if you introduce restrictions to the arm geometry, a geometry-only approach will work. Please be sure to keep us posted.



The "geometrical approach" of splitting up the chain is actually well suited for my use case. I'm using this arm as a fancy forklift for loading train cars in a circle. This means what I'm essentially dealing with is a three-chain system.

1. The base turntable at the center of the track circle rotates the rest of the arm to point at the target (train car, a stack of shelves, etc).

2. The first two joints are a two-bone IK chain in two dimensions, bringing the base of the wrist near the target.

3. The wrist (Akiyuki wrist) then has the job of "canceling out" the angle at which it attaches to the arm, in order to continuously keep the forklift end level.

Analyzing this: the first, 1-DoF chain is very simple and does not need any kinematic algorithm beyond a P loop. The second is slightly tricky, but 2-dimensional 2-DoF solving basically comes down to constructing a triangle given three sides, which basic trigonometry can easily handle. The third chain, however, will present more difficulties, at least if the wrist rolls. If it stays in two dimensions, it's simply an arithmetic calculation based on the angle of the "forearm".
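The three chains can be sketched end to end; the link lengths l1/l2 and names below are placeholders for illustration, not the actual model's dimensions:

```python
import math

def solve_arm(x, y, z, l1, l2):
    """Three-chain IK as described above (l1, l2 are placeholder lengths):
    1. turntable yaw points the arm's plane at the target,
    2. a two-bone chain reaches the (r, z) point within that plane,
    3. the wrist bend cancels the arm angles to keep the forks level.
    Angles in radians; returns None if the target is out of reach."""
    yaw = math.atan2(y, x)               # chain 1: just point at it
    r = math.hypot(x, y)                 # radial distance in the plane
    d2 = r * r + z * z
    if d2 > (l1 + l2) ** 2 or d2 < (l1 - l2) ** 2:
        return None
    # Chain 2: triangle from three sides, via the law of cosines.
    elbow = math.acos((d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2))
    shoulder = math.atan2(z, r) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    wrist = -(shoulder + elbow)          # chain 3: stay level
    return yaw, shoulder, elbow, wrist
```

The wrist-leveling step really is just arithmetic, as noted: the bend is the negative of the summed arm angles.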


That does make sense for your use case, as you're not trying to address a generalized n-DoF manipulator. It's certainly more practical to try to stay in two dimensions (plus the turntable), as that gives you what you need. Akiyuki's example of the end effector maintaining its orientation is a beautiful sight.

I'm [suffering from delusions of grandeur] trying to solve the general case by developing code (posted to GitHub) with general robot and actuator classes I can reuse in future models... so far, so good. This morning I've been wading through a kinematics library, but then realized there was no z; it applied to a robot restricted to operating in a plane. There are other libraries, of course, but I'm trying to find the simplest one that meets the requirement for generality. I'm hopeful.



I have also built a robot inspired by Akiyuki's famous one. I built it as a 'toy' on which I want to learn EV3 programming in MicroPython (see my thread here on Eurobricks).

I decided to proceed in smaller steps, so my current task is simplified: I am going to control only 5 motors (Body, Arm, Forearm, WristBend and Gripper), as shown in the picture.

[image: kinematic scheme]

In this simpler version, the inverse kinematics task can be decomposed into simpler subtasks (body rotation + planar 3DoF scheme + gripper) that can be linearly combined. I have written the equations for the forward and inverse directions and hope to implement and test them soon. I am, though, struggling with some other challenges that should be solved first, such as reliable and repeatable homing (without additional sensors), position variability due to gear backlash, 100% reliable communication between two EV3 bricks (in Pybricks it is a little tricky; one needs to add waits here and there), etc. I hope to be able to show some progress soon.



Glad to know of fellow 6DoF arm builders :wink: !

I've actually stumbled on the concept of "gradient descent" for implementing inverse kinematics. But same as before, it is still making my head spin trying to understand it and implement it in my code. Here's the link in case it helps you with your projects too.

https://www.marginallyclever.com/2020/04/gradient-descent-inverse-kinematics-for-6dof-robot-arms/
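As far as I understand it, the idea boils down to: measure the end-effector error, numerically estimate how much each joint angle changes it, and step each joint downhill. A toy 2-joint planar sketch of that idea; the FK function, link lengths, and step sizes are my own illustrative assumptions, not the article's code:

```python
import math

def fk(thetas, lengths):
    """Toy planar forward kinematics: each angle is relative to the
    previous link, as on a real arm."""
    x = y = a = 0.0
    for t, l in zip(thetas, lengths):
        a += t
        x += l * math.cos(a)
        y += l * math.sin(a)
    return x, y

def sq_error(thetas, lengths, target):
    """Squared distance from the end effector to the target."""
    x, y = fk(thetas, lengths)
    return (target[0] - x) ** 2 + (target[1] - y) ** 2

def gradient_descent_ik(lengths, target, steps=2000, lr=0.05, h=1e-5):
    """Estimate d(error)/d(theta_i) by finite differences, step downhill."""
    thetas = [0.1] * len(lengths)      # small bend to break symmetry
    for _ in range(steps):
        for i in range(len(thetas)):
            base = sq_error(thetas, lengths, target)
            thetas[i] += h             # probe joint i
            grad = (sq_error(thetas, lengths, target) - base) / h
            thetas[i] -= h + lr * grad # undo probe, descend
    return thetas
```

The same loop extends to 6 joints with a 3D FK function; it just gets slower and more prone to local minima, which is where the article's refinements come in.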

On 1/4/2021 at 1:45 PM, Tomphase3 said:

Ninogubu:

Thank you for your detailed message. I appreciate not being alone in this. I've just posted the code to Github.

https://github.com/tomwilsonmn/LEGO-EV3-n-DOF-Manipulator

Please feel free to critique it, freely borrow from it, and see if there is a way we can collaborate.

...

Tom

Thanks for sharing your code!  I'll take a look one of these days...

On 11/10/2020 at 8:47 PM, ord said:

Nice robot arm! I'm working on one myself so will be following this thread.

I haven't done it before but was considering using the Robotics toolbox for Matlab to simulate the arm and get joint coordinates to plug into python. 

Any idea how you might approach it? 

 

On 11/20/2020 at 11:44 PM, m3eu said:

There is little to no information about the programming code used. Akiyuki posted that he programmed it in C using TOPPERS/EV3RT. I don't have much experience in this field myself, but maybe it helps.

 

On 1/4/2021 at 2:44 AM, Tomphase3 said:

Delighted to have found this thread.

I, too, am building a 6-axis robot plus pneumatic end-effector, with plans to employ a reverse kinematics algorithm to calculate the angle joints (thetas) when supplied with coordinates and orientation of the end effector. Eventually I plan to add machine vision so the robot can locate and pick up an object and place it in a pile, based on the object's characteristics.

...

I was also inspired by Akiyuki's excellent wrist design, but have made the joint mechanisms stronger than his design to avoid gear slippage. The robot automatically calibrates each joint by jamming the joint. I sacrificed a bit of smooth motion for robustness, and may return to that issue another time.

Of course I welcome feedback, and look forward to ninoguba's updates. 

 

On 1/6/2021 at 3:08 AM, Amicus1 said:

Hopping on the train because these arms seem to be a common thing to do. I’ve found that for my purposes I can probably work in one section of cylindrical coordinates. The  base turntable and everything up to the wrist provides a position in a cylinder (a 2d plane rotated around the turntable) and the wrist is a quaternion in Cartesian. I haven’t actually tried it though.

 

On 1/8/2021 at 3:59 AM, ninoguba said:

Glad to know of fellow 6DoF arm builders :wink: !

I've actually stumbled on the concept of "gradient descent" for implementing inverse kinematics. But same as before, it is still making my head spin trying to understand it and implement it in my code. Here's the link in case it helps you with your projects too.

https://www.marginallyclever.com/2020/04/gradient-descent-inverse-kinematics-for-6dof-robot-arms/

Thanks for sharing your code!  I'll take a look one of these days...

@all of you: I wanted to let you know that it is possible with MicroPython/Pybricks to perform inverse kinematics on the 6DoF!!

I've just finished getting all kinds of positioning to work, but it's not 100% done yet; I need to clean up my code a lot and build in some limits (for now only theta2/3/5 are limited, and it can still crash into itself if yaw > 120° and XYZ gets too close to the robot).

This is what I used for this video.

[image: screenshot of the hardcoded positions used for the video]

Now I want to find out if and how I can use a Bluetooth controller together with the 6DoF. Something like a PlayStation controller, maybe? I don't have any consoles. What I want to do is move X/Y with one joystick while maintaining the Z altitude, which gets controlled by other buttons, then use the second joystick for yaw/pitch and buttons for roll. Then use a button to save positions (with the forward kinematics) and let the program run them back, not by tracking exactly what was done with the remote, but by following a linear path with IK.

That way it will be easy to make animations, and fun I guess? Without having to hardcode the positions like in the screenshot.
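The missing piece earlier was the 3x3 inversion; it turns out that is doable in plain MicroPython with the adjugate/determinant formula, no numpy needed. A sketch with hypothetical helper names, not my exact code:

```python
# 3x3 matrix helpers for MicroPython/Pybricks, which lack numpy.
# Matrices are nested lists: m[row][col].

def mat3_mul(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat3_inv(m):
    """Invert a 3x3 matrix via the adjugate divided by the determinant."""
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    if abs(det) < 1e-12:
        raise ValueError("singular matrix")
    # Adjugate entries (cofactors, already transposed), each over det.
    return [[(m[1][1]*m[2][2] - m[1][2]*m[2][1]) / det,
             (m[0][2]*m[2][1] - m[0][1]*m[2][2]) / det,
             (m[0][1]*m[1][2] - m[0][2]*m[1][1]) / det],
            [(m[1][2]*m[2][0] - m[1][0]*m[2][2]) / det,
             (m[0][0]*m[2][2] - m[0][2]*m[2][0]) / det,
             (m[0][2]*m[1][0] - m[0][0]*m[1][2]) / det],
            [(m[1][0]*m[2][1] - m[1][1]*m[2][0]) / det,
             (m[0][1]*m[2][0] - m[0][0]*m[2][1]) / det,
             (m[0][0]*m[1][1] - m[0][1]*m[1][0]) / det]]
```

One shortcut worth knowing: if the 3x3 matrix in question is a pure rotation matrix, its inverse is simply its transpose, which sidesteps the division entirely.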



My 6 DOF manipulator has been sitting on a shelf for a few weeks, waiting for more programming. I've decided to pause programming until I have a better handle on inverse kinematics. The other day I came across this:

https://github.com/lanius/tinyik 

"tinyik is a simple and naive inverse kinematics solver."

At this point, this library and I are a good match: "simple" and "naive." I put together a short program using tinyik, plugging in parameters from my manipulator, and produced these simulations:

[image: tinyik simulation screenshots]

These simulations are a pretty good representation of my manipulator. For each simulation, all I changed in the code were the coordinates of the end effector.

For example, here's the code for the top-left simulation.

import tinyik
import numpy as np
import open3d as o3d  # used by tinyik's visualizer

# Alternating joint axes and link offsets; 1.0 = 10 LEGO studs.
body = tinyik.Actuator(['y', [0., 2.1, 0.],
                        'z', [1.5, 0., 0.],
                        'z', [1.3, 0., 0.6],
                        'x', [0.9, 0., 0.],
                        'z', [1.1, 0., 0.],
                        'x', [1.5, 0., 0.]])
body.ee = [2., 0., 0.]  # target position of the end effector
tinyik.visualize(body)

I need to look at this more closely, but it does look pretty reasonable. Going through the library code, I can see the inverse kinematics solver iteratively converges on a solution, as you would expect. Beyond that, I can't yet vouch for the efficacy of the code. However, it's promising, and most appealing to my laziness: all I have to do is plug in the coordinates of the end effector, and I get the thetas (angles) for all the joints. For now, I'm not concerned about the orientation of the end effector.

At a minimum, this is raising my comfort level.

