Tomphase3

Everything posted by Tomphase3

  1. Ninoguba: Thanks for the PS4 controller recommendation. I'm now the proud owner of the device, and have been having fun figuring out events: this is a pleasant diversion. Did you ever manage to intercept accelerometer events?
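     In case it helps anyone else poking at the same thing, here's a minimal sketch of how events can be read, assuming a Linux-based setup (e.g., ev3dev) with the python-evdev library; the device path here is just an assumption, and on some systems the controller's motion sensors enumerate as a separate /dev/input/event* device:

         from evdev import InputDevice, ecodes

         # Path varies by system; check /dev/input/ after pairing the controller.
         gamepad = InputDevice('/dev/input/event4')

         for event in gamepad.read_loop():
             if event.type == ecodes.EV_KEY:      # buttons
                 print('button', event.code, event.value)
             elif event.type == ecodes.EV_ABS:    # sticks, triggers, possibly accelerometer axes
                 print('axis', event.code, event.value)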
  2. My 6 DOF manipulator has been sitting on a shelf for a few weeks, waiting for more programming. I've decided to pause programming until I have a better handle on inverse kinematics. The other day I came across this: https://github.com/lanius/tinyik "tinyik is a simple and naive inverse kinematics solver." At this point this library and I are a good match: "simple" and "naive." I put together a short program using tinyik, plugging in parameters from my manipulator, and produced these simulations, which are a pretty good representation of my manipulator. For each simulation, all I changed in the code were the coordinates of the end effector. For example, here's the code for the top-left simulation:

         import tinyik
         import numpy as np
         import open3d as o3d

         # joint axes alternate with link offsets; 1.0 = 10 LEGO studs
         body = tinyik.Actuator(['y', [0., 2.1, 0.],
                                 'z', [1.5, 0., 0.],
                                 'z', [1.3, 0., 0.6],
                                 'x', [0.9, 0., 0.],
                                 'z', [1.1, 0., 0.],
                                 'x', [1.5, 0., 0.]])
         body.ee = [2., 0., 0.]  # desired position of the end effector
         tinyik.visualize(body)

     I need to look at this more closely, but it does look pretty reasonable. Going through the library code, I can see the inverse kinematics solver is iterative, numerically converging on a solution, as you would expect. Beyond that, I can't yet vouch for the efficacy of the code. However it's promising, and most appealing to my laziness. All I have to do is plug in the coordinates of the end effector, and I get the thetas (angles) for all the joints (a round-trip example follows this post). For now, I'm not concerned about the orientation of the end effector. At a minimum, this is raising my comfort level.
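     The round trip is pleasingly symmetrical. This is essentially the example from the tinyik README, with a toy two-joint arm (tinyik works in radians, hence the conversions):

         import numpy as np
         import tinyik

         arm = tinyik.Actuator(['z', [1., 0., 0.], 'z', [1., 0., 0.]])

         arm.angles = np.deg2rad([30, 60])   # forward: supply thetas...
         print(arm.ee)                       # ...read end-effector coordinates

         arm.ee = [2 / np.sqrt(2), 2 / np.sqrt(2), 0.]  # inverse: supply coordinates...
         print(np.rad2deg(arm.angles))                  # ...read back the solved thetas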
  3. That does make sense for your use case, as you’re not trying to address a generalized n-DOF manipulator. It’s certainly more practical to try and stay in two dimensions (plus turntable), as that gives you what you need (a quick sketch of that decomposition follows this post). Akiyuki’s example of the end effector maintaining its orientation is a beautiful sight. I’m [suffering from delusions of grandeur] trying to solve the general case by developing code (posted to GitHub) with general robot and actuator classes I can reuse in future models... so far, so good. This morning I’ve been wading through a kinematics library but then realized there was no z axis; it applied to a robot restricted to operating in a plane. There are other libraries, of course, but I’m trying to find the simplest one that meets the requirement for generality. I’m hopeful.
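     To spell out what I mean by two dimensions plus turntable: the turntable angle comes straight from atan2, which reduces a 3D target to a planar problem in the arm's vertical plane. A rough sketch (the function and variable names are just illustrative):

         import math

         def decompose(x, y, z):
             """Reduce a 3D target to a turntable angle plus a 2D (r, z) target."""
             base_theta = math.atan2(y, x)  # turntable rotation toward the target
             r = math.hypot(x, y)           # horizontal reach within the arm's plane
             return base_theta, r, z        # solve the remaining joints in (r, z)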
  4. Amicus1: Indeed, these arms are common, but it’s apparent most implementations hard-wire the thetas into the code. It is really easy to manually move an end effector to a given set of coordinates, read the thetas (joint angles), then store those values in a file or in the program code. As I recall, every video I’ve seen has the robot going about a repetitive routine. There are plenty of performing EV3 robots on the Web, but *nowhere* have I found a post or article describing the theoretical foundation of one of those robots. My suspicion is that there is none. But, I would love this suspicion to be proved wrong.

     An end effector being moved to an arbitrary, not predetermined, position is a different beast. Some industrial robots have a teaching mode where the robot records a human operator moving the robot arm to different positions. This is not the type of robot I’m trying to build. I want to be able to roll a ball to an arbitrary position, which the software determines via machine vision. The software then plugs the coordinates of the ball into an algorithm that returns one set of thetas (there are many) to position the end effector around the ball.

     I suspect the geometry approach you’re researching works for *forward* kinematics, where you supply thetas and lengths, etc., to a set of equations to determine the end effector coordinates. Everything I read about *inverse* kinematics points to an iterative numerical methods solution where, given the desired coordinates of the end effector, the algorithm supplies the theta values. There’s plenty of material out on the Web. Wikipedia gives an overview: https://en.m.wikipedia.org/wiki/Inverse_kinematics There are many theses out there; just Google: thesis n dof manipulator. On the first search results page, “inverse kinematics” repeats over and over.

     The necessary algorithms to perform inverse kinematics are well published, and there are readily available code libraries to perform inverse kinematics for n-DOF robotic manipulators. My main challenge is I want to understand what I’m doing, and not blindly plug numbers into a black box. (Painful, I know.) That there are multiple possible sets of theta values for a given end effector location suggests a strictly geometry approach won’t deliver the goods. Perhaps if you introduce restrictions to the arm geometry, a geometry-only approach will work; the two-link example below shows both the closed form and the ambiguity. Please be sure to keep us posted.
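     To illustrate both points, the geometry-only route under restrictions and the multiple solutions, here is the textbook closed-form solution for a two-link planar arm with link lengths l1 and l2 and target (x, y). The sign flip on the elbow angle is exactly the elbow-up/elbow-down ambiguity:

         import math

         def two_link_ik(x, y, l1, l2, elbow_up=True):
             """Closed-form IK for a 2-link planar arm; raises if the target is unreachable."""
             d = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)  # cos(theta2), law of cosines
             if abs(d) > 1:
                 raise ValueError('target out of reach')
             theta2 = math.acos(d) * (1 if elbow_up else -1)     # two mirror solutions
             theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                                    l1 + l2 * math.cos(theta2))
             return theta1, theta2

         print(two_link_ik(1.0, 2.0, 1.5, 1.3))                  # elbow-up thetas (radians)
         print(two_link_ik(1.0, 2.0, 1.5, 1.3, elbow_up=False))  # the mirror solution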
  5. Ninoguba: Thank you for your detailed message. I appreciate not being alone in this. I've just posted the code to GitHub: https://github.com/tomwilsonmn/LEGO-EV3-n-DOF-Manipulator Please feel free to critique it, freely borrow from it, and see if there is a way we can collaborate.

     My mind is working overtime combining robotics and Nerf balls. I'm not sure I'd be happy having my robot "shot," but all sorts of alternative scenarios are "shooting" into my head. You would shoot at a regular target, and the robot would keep score using machine vision. It would then pick up the Nerf balls (nice and grippable), dump them in a bucket, and summon a vehicle to convey the bucket back to you.

     Like you, I'm taking my time with this, moving forward when the mood takes me. Right now, I'm focusing on getting the robot to calibrate itself in a repeatable manner. It calibrates by jamming each limb at one end of its range of motion. Sometimes a limb sticks and doesn't reach the end of its range of motion, or a gear slips. My goal is for the robot to calibrate itself accurately ten times in a row. Generally I can get it to calibrate 5 or 6 times in a row, but sometimes it acts up on the first or second calibration. I need to keep working on the limbs (I'm certainly learning how to make strong joints) and the settings (speed, power, ...) in the program to reach calibrate-x10 Nirvana. I'm also thinking of adding some kind of automated check that the calibration proceeded correctly; e.g., the robot would press a button at the end of the calibration, or hold up a colored brick to the machine vision camera (neither of which is a 100% guarantee, but good enough for now).

     Only then will I feel confident to move forward with (gasp) inverse kinematics. My first impression of inverse kinematics is, it's nothing to be too afraid of. Unlike forward kinematics, the solution is not strictly geometry; it demands a numerical methods solution (a toy example follows this post). Back in the day, I loved numerical methods classes in college, so I have some hope. Ultimately, I believe we have to ensure the kinematic analysis is correct, then plug numbers into functions in a math library. The functions will return the thetas we need. OK, it's not that simple; that's why I'm going to continue taking that Udemy course. I don't want to proceed with this without understanding what I'm doing. I imagine, along the way, I'll do a "paper prototype."

     This certainly is a great activity to keep us sane during these strange times. Tom
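     To give a flavor of what "numerical methods solution" means here, this is a toy Jacobian-transpose iteration for a two-link planar arm (the link lengths are arbitrary). Real libraries use more sophisticated solvers and this one can stall near singularities, but the converge-by-iteration idea is the same:

         import numpy as np

         L1, L2 = 1.5, 1.3  # link lengths

         def fk(t):
             """Forward kinematics: joint angles -> end-effector (x, y)."""
             return np.array([L1 * np.cos(t[0]) + L2 * np.cos(t[0] + t[1]),
                              L1 * np.sin(t[0]) + L2 * np.sin(t[0] + t[1])])

         def jacobian(t):
             """Partial derivatives of the end-effector position w.r.t. the thetas."""
             return np.array([[-L1 * np.sin(t[0]) - L2 * np.sin(t[0] + t[1]), -L2 * np.sin(t[0] + t[1])],
                              [ L1 * np.cos(t[0]) + L2 * np.cos(t[0] + t[1]),  L2 * np.cos(t[0] + t[1])]])

         def ik(target, t=None, alpha=0.05, tol=1e-4, max_iter=20000):
             """Nudge the thetas toward the target until the position error is tiny."""
             t = np.zeros(2) if t is None else t
             for _ in range(max_iter):
                 error = target - fk(t)
                 if np.linalg.norm(error) < tol:
                     break
                 t = t + alpha * jacobian(t).T @ error
             return t

         print(np.rad2deg(ik(np.array([1.0, 2.0]))))  # thetas (degrees) for a reachable target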
  6. Delighted to have found this thread. I, too, am building a 6-axis robot plus pneumatic end effector, with plans to employ an inverse kinematics algorithm to calculate the joint angles (thetas) when supplied with the coordinates and orientation of the end effector. Eventually I plan to add machine vision so the robot can locate and pick up an object and place it in a pile, based on the object's characteristics. Currently, I've built version x of the robot. Here's a photograph:

     I was also inspired by Akiyuki's excellent wrist design, but have made the joint mechanisms stronger than his design to avoid gear slippage. The robot automatically calibrates each joint by jamming the joint. I sacrificed a bit of smooth motion for robustness, and may return to that issue another time. I performed a kinematic analysis (using PowerPoint as a diagramming tool). If you can spot problems, please let me know. I'm new to this; thinking in terms of multiple axes makes my head spin.

     The main program is quite simple. The robot is specified as follows in the main program:

         actuators_parameters = [motor_objects, actuator_names, calibrate_to_origin, gear_ratios,
                                 calibration_speed_sps, calibration_stall_duty_cycle_sps, calibrate_holds,
                                 normal_speed_ps, normal_duty_cycle_sps, ramp_sps,
                                 theta_positive_motor_directions, theta_holds, theta_positive_directions,
                                 theta_range_of_motions, theta_end_margins, initial_thetas, theta_adjustments]

     Each parameter is a list of 7 items, one for each joint plus the end effector. With the exception of motor_objects, they could be input from a JSON file. Here are a couple of examples:

         actuator_names = ['waist', 'shoulder', 'upper_arm', 'wrist_rotation', 'wrist_bend', 'flange', 'end_effector']
         gear_ratios = [1/9, 8/60, 8/60, 12/60, 20/60 * 12/24, 1/3, 1]

     The implementation works with any length of list. I then instantiate the robot as follows:

         my_robot = robot(actuators_parameters)

     All the smarts are buried in the robot class (which I wrote), including the creation of actuator objects (a separate class); a simplified sketch of that structure follows this post. I reckon I could specify a wide range of robots this way. Next, I ask the robot to calibrate itself:

         my_robot.calibrate([1, 2, 0, 4, 3, 4, 5, 6])  # note I calibrate joint 4 twice to avoid a possibility of jamming

     Each number refers to one of the actuators. Next, I can move the robot arm to another position as follows:

         my_robot.go_to([35, 50, 160, 100, 110, 20, 45])  # the robot class deals with side effects of concentric drives

     Each number is a joint angle (theta). My plan now is to incorporate an inverse kinematics algorithm (libraries are referenced elsewhere in this thread). My application would supply the coordinates and orientation of the end effector; the inverse kinematics algorithm would then supply the thetas for the go_to method shown above. First, though, I want to understand what I'm doing. I've done much of a Udemy course https://www.udemy.com/course/robotics-1/ (where I learned to do a kinematics analysis) and am ready to watch the lectures that explain the numerical methods used to perform inverse kinematics.

     Doubtless, along the way, I'll keep modifying the structure of the robot and application as I come across issues. I have to keep reminding myself the model is just a model; it will always be imperfect. Of course I welcome feedback, and look forward to ninoguba's updates.
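     For anyone curious how parallel parameter lists can turn into per-joint objects, here is the general shape of the idea. This is a simplified, hypothetical sketch with made-up names, not my actual classes (which carry all seventeen lists shown above):

         class Actuator:
             """One joint: holds everything needed to calibrate and drive it."""
             def __init__(self, motor, name, gear_ratio):
                 self.motor = motor
                 self.name = name
                 self.gear_ratio = gear_ratio

         class Robot:
             def __init__(self, actuators_parameters):
                 # Each element of actuators_parameters is a per-joint list;
                 # zip(*...) regroups them into one tuple of values per joint.
                 self.actuators = [Actuator(*per_joint)
                                   for per_joint in zip(*actuators_parameters)]

             def calibrate(self, order):
                 # 'order' indexes into self.actuators; repeating an index
                 # calibrates that joint twice.
                 for i in order:
                     print(f'calibrating {self.actuators[i].name}')

         # toy usage with just three parameter lists
         params = [['motorA', 'motorB'], ['waist', 'shoulder'], [1/9, 8/60]]
         Robot(params).calibrate([0, 1, 0])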