Let’s Talk…Task Control (PART 2)


Orientation Kinematics

In PART 1, I discussed how to control the position of a robot tool, and also how the orientation of a rigid body can be represented by a [3 x 3] rotation matrix.  However, deriving an appropriate control method for a matrix-valued state is not always possible.  The standard process for determining a control method is to look at the derivative, but for a rotation matrix that gives

$$\dot{R}_{OT} = [\omega_{OT}]_{\times}\,R_{OT}$$

where $[\omega_{OT}]_{\times}$ is the skew-symmetric cross-product matrix of the angular velocity.

Since the matrix $R_{OT}$ is an operator (i.e. a mapping from one space to another), its derivative is not an appropriate way to represent the orientation state.  The angular velocity at the tool, however, is just as easy to compute as the linear velocity from PART 1:

$$\omega_{OT} = J_{\omega}(q)\,\dot{q}$$

where the $j$th column of $J_{\omega}$ is the $j$th joint axis expressed in the base frame.

Unfortunately, determining a control law directly on angular velocities and rotation matrices is difficult; the relationship between the two is not straightforward.  State representations that have been shown to work include the classic yaw-pitch-roll angles, the Euler-Rodrigues axis-angle parameters, and my personal favorite: unit quaternions.

Quaternions

“I came later to see that, as far as the vector analysis I required was concerned, the quaternion was not only not required, but was a positive evil of no inconsiderable magnitude…”   Oliver Heaviside (famous physicist and electrical engineer, who did not like quaternions)

The unit quaternion $r_{OT}$ is a 4-dimensional hyper-complex representation of a rotation; the unit quaternions form a sphere in 4-space that covers the SO(3) space.  This is the terrifying product of a William Hamilton fever dream that very appropriately represents SO(3) in a single vector.  One thing to note is that, because it’s a unit vector, it only has 3 degrees of freedom (if we know 3 terms, we automatically know the 4th up to a sign ambiguity).

If we consider a rotation matrix as a rotation about the unit axis $\hat{k}$ by angle $\theta$, the unit quaternion is defined as

$$r = \begin{bmatrix} r_0 \\ \vec{r} \end{bmatrix} = \begin{bmatrix} \cos(\theta/2) \\ \hat{k}\,\sin(\theta/2) \end{bmatrix}$$
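The examples for this post are MATLAB scripts on Gist; as a stand-in, here is a minimal Python sketch of the axis-angle-to-quaternion definition above (the function name is my own, not from the Gist code):

```python
import math

def quat_from_axis_angle(k, theta):
    """Unit quaternion [r0, rx, ry, rz] for a rotation of theta about axis k."""
    n = math.sqrt(sum(c * c for c in k))   # normalize the axis, just in case
    s = math.sin(theta / 2.0)
    return [math.cos(theta / 2.0)] + [s * c / n for c in k]

# 90-degree rotation about z: r0 = cos(45 deg), vector part = [0, 0, sin(45 deg)]
r = quat_from_axis_angle([0.0, 0.0, 1.0], math.pi / 2.0)
```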

The compositional property of rotation matrices, where a single rotation can be decomposed into two successive rotations, i.e. $R_{OT} = R_{OA}\,R_{AT}$, carries over to unit quaternions, except that here we use the quaternion product.  Since the hyper-complex structure disallows normal vector products, the popular notation is

$$r_{OT} = r_{OA} \otimes r_{AT} = \begin{bmatrix} r_{OA,0}\,r_{AT,0} - \vec{r}_{OA} \cdot \vec{r}_{AT} \\ r_{OA,0}\,\vec{r}_{AT} + r_{AT,0}\,\vec{r}_{OA} + \vec{r}_{OA} \times \vec{r}_{AT} \end{bmatrix}$$
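Written out term by term, the quaternion product can be sketched in a few lines of Python (a stand-in for the MATLAB code; `quat_mult` is my own name):

```python
import math

def quat_mult(a, b):
    """Hamilton product a ⊗ b for quaternions stored as [scalar, x, y, z]."""
    a0, av = a[0], a[1:]
    b0, bv = b[0], b[1:]
    dot = av[0]*bv[0] + av[1]*bv[1] + av[2]*bv[2]
    cross = [av[1]*bv[2] - av[2]*bv[1],
             av[2]*bv[0] - av[0]*bv[2],
             av[0]*bv[1] - av[1]*bv[0]]
    return [a0*b0 - dot] + [a0*bv[i] + b0*av[i] + cross[i] for i in range(3)]

# Two successive 45-degree rotations about z compose into one 90-degree rotation
q45 = [math.cos(math.pi / 8), 0.0, 0.0, math.sin(math.pi / 8)]
q90 = quat_mult(q45, q45)
```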

Here comes the useful part!  We can relate angular velocity to quaternion velocity as

$$\dot{r}_{OT} = \frac{1}{2}\begin{bmatrix} -\vec{r}_{OT}^{\,T} \\ r_{OT,0}\,I_3 - [\vec{r}_{OT}]_{\times} \end{bmatrix}\omega_{OT} = E(r_{OT})\,\omega_{OT}$$
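This mapping is easy to sanity-check numerically.  A Python sketch of $\dot{r} = E(r)\,\omega$ under the spatial angular-velocity convention written above (illustrative name, not the Gist code):

```python
def quat_rate(r, omega):
    """dr/dt = (1/2) [ -r_vec . omega ; r0*omega - r_vec x omega ]."""
    r0, rv = r[0], r[1:]
    dot = rv[0]*omega[0] + rv[1]*omega[1] + rv[2]*omega[2]
    cross = [rv[1]*omega[2] - rv[2]*omega[1],
             rv[2]*omega[0] - rv[0]*omega[2],
             rv[0]*omega[1] - rv[1]*omega[0]]
    return [-0.5 * dot] + [0.5 * (r0 * omega[i] - cross[i]) for i in range(3)]

# At the identity orientation, spinning about z at 2 rad/s gives dr/dt = [0, 0, 0, 1]
rdot = quat_rate([1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 2.0])
```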

Then we can modify the Jacobian to map joint velocity to quaternion velocity:

$$\dot{r}_{OT} = E(r_{OT})\,J_{\omega}(q)\,\dot{q} = J_{r}(q)\,\dot{q}$$

The inverse Jacobian is identical to the form described in Part 1, i.e.

$$\min_{\dot{q}}\ \left\| \dot{r}_{OT,d} - J_{r}\,\dot{q} \right\|^{2}$$

$$\dot{q} = \left(J_{r}^{T} J_{r}\right)^{-1} J_{r}^{T}\,\dot{r}_{OT,d}$$

Again, using standard PID control methods, we can use a control law such as

$$\dot{r}_{OT,d} = K_P \left( r_{OT,d} \otimes r_{OT}^{*} - \begin{bmatrix} 1 \\ 0_{3 \times 1} \end{bmatrix} \right)$$

where

$$r^{*} = \begin{bmatrix} r_0 \\ -\vec{r} \end{bmatrix}$$

is the quaternion complement (conjugate).  The error term equals zero exactly when $r_{OT} = r_{OT,d}$, so the feedback vanishes at the desired orientation.
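Putting the complement and the product together, here is a sketch of the proportional error term under the conventions above (`quat_feedback` and the gain handling are my own illustrative choices, not the Gist implementation):

```python
import math

def quat_conj(r):
    """Quaternion complement (conjugate): negate the vector part."""
    return [r[0], -r[1], -r[2], -r[3]]

def quat_mult(a, b):
    """Hamilton product a ⊗ b, quaternions stored as [scalar, x, y, z]."""
    a0, av = a[0], a[1:]
    b0, bv = b[0], b[1:]
    dot = av[0]*bv[0] + av[1]*bv[1] + av[2]*bv[2]
    cross = [av[1]*bv[2] - av[2]*bv[1],
             av[2]*bv[0] - av[0]*bv[2],
             av[0]*bv[1] - av[1]*bv[0]]
    return [a0*b0 - dot] + [a0*bv[i] + b0*av[i] + cross[i] for i in range(3)]

def quat_feedback(r_des, r_cur, Kp=1.0):
    """P-term on the quaternion error: Kp * (r_des ⊗ r_cur* - identity)."""
    e = quat_mult(r_des, quat_conj(r_cur))
    return [Kp * (e[0] - 1.0)] + [Kp * e[i] for i in range(1, 4)]

# When the current orientation matches the desired one, the feedback is zero
q = [math.cos(math.pi / 8), 0.0, 0.0, math.sin(math.pi / 8)]
err = quat_feedback(q, q)
```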

Example MATLAB code is set up on Gist.

Advanced Task Space Control Concepts

For the full task space control, we can simply concatenate both Jacobians into a single matrix

$$J = \begin{bmatrix} J_{p} \\ J_{r} \end{bmatrix}$$

to optimize

$$\min_{\dot{q}}\ \left\| \begin{bmatrix} \dot{p}_{OT,d} \\ \dot{r}_{OT,d} \end{bmatrix} - J\,\dot{q} \right\|^{2}$$

As an additional consideration, to improve singularity robustness, we can introduce a regularization term to minimize

$$\min_{\dot{q}}\ \left\| \begin{bmatrix} \dot{p}_{OT,d} \\ \dot{r}_{OT,d} \end{bmatrix} - J\,\dot{q} \right\|^{2} + \lambda \left\| \dot{q} \right\|^{2}$$

and get the control law

$$\dot{q} = \left(J^{T} J + \lambda I\right)^{-1} J^{T} \begin{bmatrix} \dot{p}_{OT,d} \\ \dot{r}_{OT,d} \end{bmatrix}$$

Even further, if the number of joints is greater than 6, the arm is redundant, and the internal degrees of freedom can be controlled using

$$\dot{q} = \left(J^{T} J + \lambda I\right)^{-1} J^{T} \begin{bmatrix} \dot{p}_{OT,d} \\ \dot{r}_{OT,d} \end{bmatrix} + J^{\perp}\,\xi$$

where $\xi$ is a vector of terms weighting the columns of $J^{\perp}$, a basis for the nullspace of the Jacobian satisfying $J\,J^{\perp} = 0$.
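To see the damped plus nullspace law in action without a full 6-DOF model, here is a Python sketch for a redundant planar 3-link arm (a 2D position task with 3 joints).  It uses the equivalent damped form $J^{T}(JJ^{T}+\lambda I)^{-1}$ so only a 2 x 2 inverse is needed; all names and the toy arm are illustrative, not the Gist code:

```python
import math

def planar_jacobian(q, lengths):
    """2 x N position Jacobian of a planar revolute arm."""
    n = len(q)
    c = [sum(q[:i + 1]) for i in range(n)]          # cumulative joint angles
    Jx = [-sum(lengths[i] * math.sin(c[i]) for i in range(j, n)) for j in range(n)]
    Jy = [ sum(lengths[i] * math.cos(c[i]) for i in range(j, n)) for j in range(n)]
    return [Jx, Jy]

def damped_nullspace_step(J, xdot_d, xi, lam=1e-8):
    """qdot = J^T (J J^T + lam*I)^-1 xdot_d + (I - J^+ J) xi."""
    n = len(J[0])
    # A = J J^T + lam*I (2 x 2), inverted in closed form
    A = [[sum(J[r][k] * J[s][k] for k in range(n)) + (lam if r == s else 0.0)
          for s in range(2)] for r in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ai = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
    # damped pseudoinverse J^+ = J^T A^-1 (n x 2)
    Jp = [[sum(J[r][j] * Ai[r][s] for r in range(2)) for s in range(2)] for j in range(n)]
    Jxi = [sum(J[s][k] * xi[k] for k in range(n)) for s in range(2)]
    return [sum(Jp[j][s] * xdot_d[s] for s in range(2))          # task motion
            + xi[j] - sum(Jp[j][s] * Jxi[s] for s in range(2))   # nullspace motion
            for j in range(n)]

q = [0.3, 0.4, 0.5]
J = planar_jacobian(q, [1.0, 1.0, 1.0])
qdot = damped_nullspace_step(J, [0.1, -0.2], [0.5, -0.3, 0.2])
```

The nullspace term reshapes the arm internally: it contributes (almost) nothing to the tool velocity, so the task-space motion stays at the commanded value.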


Let’s talk…Task Control (PART 1)


An object’s state is expressed by its position and orientation.  It is located at a particular point and faces a particular direction.  The easiest way to represent these is with a [3 x 1] position vector p and a [3 x 3] rotation matrix R.

HOW TO DEFINE A ROBOT

A robot is generally treated as just a bunch of rigid objects (generally referred to as links) connected by actuators (motors, wheels etc.).  Therefore, a robot can be validly represented by the position p_Oi  and orientation R_Oi of each link i  in the arm (with respect to a defined origin O).

[Figure: coordinate transforms between robot links]

 

Each actuator $j \in \{1, \dots, N_q\}$ has a particular displacement $q_j$ from its zero position.  If we know the relationships defining the actuator directions $h_j$ and inter-actuator displacements $p_{j-1,j}$, then the pose of link $j$ can be expressed as a function of the joint angles in the arm.  This is called forward kinematics.

$$R_{Oj} = R_{01}(q_1)\,R_{12}(q_2)\cdots R_{j-1,j}(q_j), \qquad R_{j-1,j}(q_j) = e^{[h_j]_{\times} q_j}$$

$$p_{Oj} = p_{01} + R_{O1}\,p_{12} + \cdots + R_{O,j-1}\,p_{j-1,j}$$
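For a concrete (if simplified) instance of these recursions, here is a planar-arm version in Python; the original examples are MATLAB, and the names here are mine:

```python
import math

def forward_kinematics(q, lengths):
    """Planar-arm forward kinematics: accumulate each joint rotation (R_Oj)
    and sum the rotated link offsets (p_Oj)."""
    x = y = heading = 0.0
    for qj, lj in zip(q, lengths):
        heading += qj                  # orientation of link j w.r.t. the origin
        x += lj * math.cos(heading)    # link offset rotated into the origin frame
        y += lj * math.sin(heading)
    return (x, y), heading

# Two links, both joints at zero: the arm lies stretched along +x
p, theta = forward_kinematics([0.0, 0.0], [1.0, 0.5])
```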

At the end of the arm is the final link, called the end effector or the tool, and we can define the entire state of the robot by the joint vector $q = [q_1\ \cdots\ q_{N_q}]^T$:

$$R_{OT} = R_{OT}(q)$$

$$p_{OT} = p_{OT}(q)$$

[Animation: forward kinematics]

HOW TO CONTROL THE ROBOT POSITION

When planning the robot motion, we generally have to decide how to move the tool.  A classic way to determine this is by looking at the derivative

$$\dot{p}_{OT} = J_{p}(q)\,\dot{q}$$

This is the linearized Jacobian for the position forward kinematics, mapping the instantaneous actuator velocities to the tool velocity.  The Jacobian itself is a [3 x Nq] matrix whose jth column is the instantaneous effect of the jth actuator on the tool’s velocity.  Note that $J_p$ depends on the current state of the robot.
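One way to convince yourself of the column interpretation is to build the Jacobian by finite differences.  A Python sketch for a planar arm (illustrative names, not the Gist code):

```python
import math

def fk(q, lengths):
    """Tip position of a planar revolute arm."""
    x = y = a = 0.0
    for qj, lj in zip(q, lengths):
        a += qj
        x += lj * math.cos(a)
        y += lj * math.sin(a)
    return [x, y]

def numeric_jacobian(q, lengths, h=1e-6):
    """[2 x Nq] Jacobian by finite differences: column j is the tool velocity
    produced by unit velocity of joint j, at the current configuration q."""
    p0 = fk(q, lengths)
    cols = []
    for j in range(len(q)):
        qp = list(q)
        qp[j] += h
        p1 = fk(qp, lengths)
        cols.append([(p1[i] - p0[i]) / h for i in range(2)])
    return [[cols[j][i] for j in range(len(q))] for i in range(2)]

# Arm pointing straight up (q1 = 90 deg): both joints can only move the tip in -x
J = numeric_jacobian([math.pi / 2, 0.0], [1.0, 1.0])
```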

Assuming that we have a desired tool velocity $\dot{p}_{OT,d}$, we need to find the appropriate $\dot{q}$ by optimizing

$$\min_{\dot{q}}\ \left\| \dot{p}_{OT,d} - J_{p}\,\dot{q} \right\|^{2}$$

The optimal solution is the least-squares solution (valid only if $N_q \leq 3$)

$$\dot{q} = \left(J_{p}^{T} J_{p}\right)^{-1} J_{p}^{T}\,\dot{p}_{OT,d}$$

If $N_q > 3$, then $J_{p}^{T} J_{p}$ is not full rank and therefore not invertible.  In this case, we must use the equivalent

$$\dot{q} = J_{p}^{T} \left(J_{p} J_{p}^{T}\right)^{-1} \dot{p}_{OT,d}$$

If the Jacobian is full rank (avoiding singularity), then we can define a standard PID control law for $\dot{p}_{OT,d}$, e.g.

$$\dot{p}_{OT,d} = K_{P}\left( p_{OT,d} - p_{OT} \right)$$
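As a toy end-to-end demonstration, here is a Python sketch of the loop for a 2-link planar arm, combining the proportional law with the (square, so directly invertible) Jacobian.  This is an illustrative stand-in, not the MATLAB Gist script:

```python
import math

def fk(q, L):
    a1, a2 = q[0], q[0] + q[1]
    return [L[0] * math.cos(a1) + L[1] * math.cos(a2),
            L[0] * math.sin(a1) + L[1] * math.sin(a2)]

def jac(q, L):
    a1, a2 = q[0], q[0] + q[1]
    return [[-L[0] * math.sin(a1) - L[1] * math.sin(a2), -L[1] * math.sin(a2)],
            [ L[0] * math.cos(a1) + L[1] * math.cos(a2),  L[1] * math.cos(a2)]]

def control_step(q, p_des, L, Kp=2.0, dt=0.01):
    """dot p_d = Kp (p_des - p); qdot = J^-1 dot p_d (square 2 x 2 case)."""
    p = fk(q, L)
    v = [Kp * (p_des[i] - p[i]) for i in range(2)]
    J = jac(q, L)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    qdot = [( J[1][1] * v[0] - J[0][1] * v[1]) / det,
            (-J[1][0] * v[0] + J[0][0] * v[1]) / det]
    return [q[i] + dt * qdot[i] for i in range(2)]

q = [0.5, 0.8]
target = [1.2, 0.6]
for _ in range(2000):
    q = control_step(q, target, [1.0, 1.0])
```

The tool converges exponentially to the target; away from singularities the task-space error simply decays at rate $K_P$.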

[Animation: position control]

An example MATLAB script is on Gist, using libraries found at Github repositories for robot control and visualization.

ORIENTATION CONTROL COMING IN PART 2

Let’s Talk…Developing Software

So I’ve made a gigantic, huge, breaking change to my MATLAB robot visualization toolbox, and I’m so incredibly glad there are so few people who use it because it is so vastly different in terms of use, and if anyone depended on consistency they would be FURIOUS with me.  I know I’d be irritated, but I am somewhat confident that this will be the last major restructuring.

The fundamental usage is predominantly the same, but I’ve added some functionality that I’d wanted from the beginning.  These functions started out as a simple way for me to visualize the robot controllers I was working with, and it was easy to work on because I genuinely like working with graphics.  It has forced me to really understand coordinate frames and transformations between them.  But enough nostalgia and history, let’s look at what we can do!

I’ve added affine transformations!

[Animation: a spring drawn with affine transformations]

And a lot of support for multi-armed robots, callable with only the updateRobot function (as opposed to before, where all chains needed separate function calls).  This should eliminate a lot of redundant kinematic computation for robots with shared kinematics (for example, a two-armed robot with a shared waist).

I think I’m mostly excited to finally implement some of these aspects that I’ve been thinking about for a long time.  All of the source is still on Github, and formal documentation is coming soon!

Let’s Talk…Robot Raconteur and Baxter


So I have another set of code I’d like to share. I made a set of scripts that bridge the ROS topics for the Baxter robot into Robot Raconteur services! The link can be found on my code page.

Some quick background:
Robot Operating System (or ROS) is a Linux-based middleware supported by an expansive community who are doing everything from SLAM to path planning. It is a huge part of the current robotics world and a de facto standard in any robotics laboratory.

So you might ask: why, if ROS is so powerful, would you ever need to convert it to anything else? Well, there are a few reasons, but the major ones are that ROS does not communicate well with MATLAB, which I use often (although I’m now seeing ROS support from MATLAB; does anybody have any experience with this?), and it certainly does not work on Windows computers.

Robot Raconteur, developed by an alumnus of RPI and former lab-mate – Dr. John Wason, addresses all those concerns. It is a cross-platform middleware that supports many languages, namely C#, MATLAB, C++, Python, and Java. Robot Raconteur does not require a master node, but also maintains the distributed nature that makes ROS so attractive.

Functionality:
The full documentation is on GitHub, but a quick runthrough:

  1. Hosts joint positions, velocities, torques
  2. Hosts end effector poses, twists, wrenches
  3. Set joint position / velocity / torque commands
  4. Stream image data from cameras
  5. Control over electric grippers
  6. Access to data from any of the many sensors, including range finders, accelerometers, sonar ring, etc.

This was the basis for my previous post on having Baxter mimic my pose. The entire controller was built around a MATLAB script, but passed through these Python services to relay the commands to the Baxter robot.

If you think this is a useful tool for your setup, please feel free to use this service (and credit me :P). Let me know of any implementations you’d like to see in the future, as well.

Let’s Talk…Inverse Kinematics


So recently I posted a new video of our Baxter robot:

Let’s look under the hood of how this worked. Off screen is a Microsoft Kinect (version 2), which is discretizing my body pose into a set of 3D points. This naturally seems like a simple-to-implement control method, since we have the exact locations of where we want the shoulder, elbow, and wrist. However, this isn’t exactly an easy problem, for a couple reasons.

1. My arms don’t have the same dimensions as Baxter’s arms. This immediately causes problems: if my upper arm doesn’t have the exact same length as Baxter’s upper arm, then I am going to propagate error into my estimate of where the wrist should go.
2. Robots exist within Joint Space. Those who have worked with serial robot arms will already have picked up on this issue – you can’t just tell a robot to go to a location in space without solving for the joint angles that put it there. This is the essence of this post, and what is known as Inverse Kinematics.

Forward Kinematics is the mapping from the set of joint angles to the 3D position and orientation of the end effector, using the known geometry of the links and joints. Naturally, the Inverse Kinematics problem is how to determine the set of joint angles necessary to put the robot in a desired 3D position and orientation. I’m not going to get into the very intricate details of Inverse Kinematics, since it’s already been done by some amazing experts in the field. Instead I will direct curious minds to look at Murray, Li, and Sastry’s amazing book.

For our purposes, what is important to know is that an Inverse Kinematics problem will usually have several solutions, but may also have infinite solutions (in the case of redundancy, for example: put your finger on a nearby surface and move your arm around without moving your fingertip), a single solution (in the case of singularity – for example: extending the leg fully out), or no solution (if the desired position / orientation is unreachable). More often than not, the average roboticist will not have to compute the inverse kinematics themselves, as there are many packages out there to do that for them (e.g. ROS).

In this particular case, the secret sauce was that I did not directly map my 3D elbow location to the robot’s 3D elbow location. I mapped my upper arm’s direction to the direction of the robot’s upper arm, and likewise with the forearm. This meant that the algorithm was general enough to work for any person’s arm geometry.
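The direction-mapping trick can be sketched in a few lines; `map_limb_direction` and the example coordinates are hypothetical, not the actual Kinect pipeline:

```python
import math

def map_limb_direction(human_a, human_b, robot_a, robot_len):
    """Retarget a human limb segment to a robot limb of a different length:
    keep the unit direction from joint a to joint b, rescale by the robot's
    own segment length, and offset from the robot's joint a."""
    d = [hb - ha for ha, hb in zip(human_a, human_b)]
    n = math.sqrt(sum(c * c for c in d)) or 1.0    # guard a degenerate segment
    return [ra + robot_len * c / n for ra, c in zip(robot_a, d)]

# Human elbow is 0.3 m from the shoulder along +x; robot upper arm is 0.37 m,
# so the robot elbow target lands 0.37 m along the same direction.
elbow_target = map_limb_direction([0.0, 0.0, 0.0], [0.3, 0.0, 0.0],
                                  [0.1, 0.2, 0.0], 0.37)
# elbow_target is approximately [0.47, 0.2, 0.0]
```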

So now that you know how the sausage was made, I’d love to see any of your own tele-operation examples. Also, something to think about going forward: I have 7 degrees of freedom in my arm, and the robot had 7 degrees of freedom in its arm, but what if my robot only had 5 degrees of freedom? What if it had 10? These redundancy and ambiguity problems are not easily solved and are still an interesting aspect of human-robot interaction. Please post in the comments your thoughts on the matter.

Let’s Talk…Visualization


I tend to spend a lot of my time modelling and simulating robots.  I’ll design some sort of control scheme and try to simulate its interactions with this world, but I’m never quite happy unless I can see a robot, real or virtual, actually doing something.  I perform a lot of my simulations in MATLAB and therefore I required a means for seeing my robots come to life in the MATLAB environment.

Fortunately, MATLAB is already built with a powerful graphics toolbox, and, over the course of my studies, I’ve evolved my visualization functions into a relatively efficient drawing library.  So, I’ve decided to do the scary thing and let others look at my code.  I’ve hosted it on GitHub, please feel free to submit pull requests and absolutely tell me of any bugs, issues, or unclear code.

The library itself is based around ‘create’ functions which will add an object to the MATLAB figure, and return a handle to that object for future manipulation.  These handles can then be easily manipulated using the ‘update’ functions.  I’ve included some sample scripts to demonstrate some simple uses for this library.

One script details how to create and animate a simple rigid body – in this case, a cylinder.

 

Another discusses how to generate a custom planar robot, and then do a pick-and-place task.

 

I’ve also pre-defined a few robots, one of which is Rethink Robotics’ Baxter.  This particular robot loves to dance.

This is a bit of a show-off post and less of a detailed discussion.  Future posts will actually talk a bit more on nuts and bolts, but for now I hope someone out there can make use of this code.  Or at least enjoys silly animations.