
ROS Tutorial: Pick and Place task with the Moveit C++ interface

This tutorial shows how to implement a simple pick and place task by using the Moveit C++ interface. In the last tutorial I showed you how to create a Moveit configuration for the UR5 robot arm and a Robotiq gripper and how to control it using the Moveit Plugin in Rviz. Now we can use this configuration and command our robot using C++ code.

After a brief introduction to the interface I will present the Gazebo world we are using for the pick and place task. Afterwards we will go through the code step by step. At the end we will also learn how to do collision avoidance by adding collision objects to the planning scene.

The Moveit C++ Interface

Let’s briefly talk about the C++ interface and how it connects to the rest of Moveit. Moveit has different interfaces that we can use to implement our application as this picture from the official website shows:

System architecture overview from the official Moveit Website

The GUI interface in Rviz we already know from the previous tutorial. For coding, Moveit provides the moveit_commander interface for Python and the move_group_interface for C++. The picture also shows the other connections to and from the central move_group node. When creating the Moveit configuration in the previous tutorial, we used the URDF file to create an SRDF and the configuration files with the Moveit! Setup Assistant. Move_group gets all of that information from the ROS Param Server.

The motion commands are sent as trajectories to the controllers of the robot (e.g. controllers set up with ros_control). The Moveit C++ interface provides an easy way to access the most important functionality of the move_group node so let’s go ahead and get started!

The Gazebo Pick and Place World

You might already have the Moveit Configuration package for the UR5 with a Robotiq gripper from the previous tutorial. If not, please follow the Quickstart section of the previous article.

For implementing the pick and place task we create a new ROS package. You can either clone the finished package …

~/catkin_ws/src$ git clone https://github.com/dairal/ur5_simple_pick_and_place

… or you can go ahead and create your own.
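If you create the package yourself, a command along the following lines should do it. The dependency list is my assumption based on what the code in this tutorial needs (roscpp and the MoveIt planning interface):

~/catkin_ws/src$ catkin_create_pkg ur5_simple_pick_and_place roscpp moveit_ros_planning_interface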

Inside the package I created a new Gazebo world that looks as follows:

In order to load this world, we need to modify the gazebo.launch file inside the Moveit Configuration package. So if you already started the gazebo demo then close it and go ahead and change line 9 to load the pick and place world as a default:

    <arg name="world_name" default="$(find ur5_simple_pick_and_place)/world/simple_pick_and_place"/>

In this new world the robot is placed on a table so we need to modify the position where the robot arm is spawned. Therefore, we need to change the offset for the z-axis in line 18 to the height of the table (roughly 1.21 m):

    <node name="spawn_gazebo_model" pkg="gazebo_ros" type="spawn_model" args="-urdf -param robot_description -model robot -x 0 -y 0 -z 1.21"

Now you can start the gazebo demo with:

$ roslaunch ur5_gripper_moveit_config demo_gazebo.launch

Initializing the interface

Our mission is to place the blue box inside the plate on the right hand side. Our C++ code is in a file called simple_pick_and_place.cpp inside the src folder of our pick and place repository (you can view the complete file under this link). Before we can command our motions, we need to do some initializations.

 #include <moveit/move_group_interface/move_group_interface.h>
 #include <moveit/planning_scene_interface/planning_scene_interface.h>
 
 
 int main(int argc, char** argv)
 {
   ros::init(argc, argv, "move_group_interface_tutorial");
   ros::NodeHandle n;
 
   // ROS spinning must be running for the MoveGroupInterface to get information
   // about the robot's state. One way to do this is to start an AsyncSpinner
   // beforehand.
   ros::AsyncSpinner spinner(1);
   spinner.start();
 
   // MoveIt operates on sets of joints called "planning groups" and stores them in an object called
   // the `JointModelGroup`. Throughout MoveIt the terms "planning group" and "joint model group"
   // are used interchangably.
   static const std::string PLANNING_GROUP_ARM = "ur5_arm";
   static const std::string PLANNING_GROUP_GRIPPER = "gripper";
 
   // The :planning_interface:`MoveGroupInterface` class can be easily
   // setup using just the name of the planning group you would like to control and plan for.
   moveit::planning_interface::MoveGroupInterface move_group_interface_arm(PLANNING_GROUP_ARM);
   moveit::planning_interface::MoveGroupInterface move_group_interface_gripper(PLANNING_GROUP_GRIPPER);

As usual when creating a ROS node, we have a main function. In line 7 we initialize ROS and in line 8 we create the NodeHandle. Next, we instantiate and start an async spinner in lines 13 and 14. This is important because the spinner keeps processing ROS messages in the background, which the MoveGroupInterface needs in order to stay updated about the robot state.

We need two move_group interfaces, one for the arm and one for the gripper, because they are separate planning groups. In lines 19 and 20 we define the names of the planning groups as "ur5_arm" and "gripper". In lines 24 and 25 we use these names to instantiate the MoveGroupInterfaces.

Commanding the motions

We separate the pick and place task into several motion steps:

  1. Move to home position
  2. Place the TCP (Tool Center Point, the tip of the robot) above the blue box
  3. Open the gripper
  4. Move the TCP close to the object
  5. Close the gripper
  6. Move the TCP above the plate
  7. Lower the TCP above the plate
  8. Open the gripper

The resulting motion looks something like this:

Before starting the movements we can print the available planning groups (line 29/30). That way we know if the groups were loaded correctly.

     // We can get a list of all the groups in the robot:
     ROS_INFO_NAMED("tutorial", "Available Planning Groups:");
     std::copy(move_group_interface_arm.getJointModelGroupNames().begin(),
             move_group_interface_arm.getJointModelGroupNames().end(), std::ostream_iterator<std::string>(std::cout, ", "));

1. Move to home position

In line 32 we instantiate a moveit::planning_interface::MoveGroupInterface::Plan that we call my_plan_arm. We use it later to store the motion plan created by the motion planner. There are different options to set a goal position for the UR5 robot arm. One way is to define the goal positions of the robot joints, another one is to define the goal pose of the TCP.

     moveit::planning_interface::MoveGroupInterface::Plan my_plan_arm;
 
     // 1. Move to home position
     move_group_interface_arm.setJointValueTarget(move_group_interface_arm.getNamedTargetValues("home"));
 
     bool success = (move_group_interface_arm.plan(my_plan_arm) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
 
     ROS_INFO_NAMED("tutorial", "Visualizing plan 1 (pose goal) %s", success ? "" : "FAILED");
  
     move_group_interface_arm.move();

In a first step we want to move to the “home” position which was defined while creating the Moveit configuration package. This home position is defined as a set of joint position values inside our srdf file. We can get these values by calling move_group_interface_arm.getNamedTargetValues("home"). Then we hand these values over to the move_group by calling setJointValueTarget which happens in line 35.

With move_group_interface_arm.plan(my_plan_arm) in line 37 we call the motion planner, which stores its result in the my_plan_arm object we created before. We also check if the plan was created successfully and print the result in line 39. Until now we only have a plan computed by the motion planner but the robot does not move yet. To actually execute the movement, we need to call move_group_interface_arm.move() (line 41). Now step one of our pick and place sequence is completed.

2. Place the TCP above the blue box

In step two we move to a position above the blue box. I know the position because I created the gazebo world but you can check by clicking on the object and checking the position displayed on the right. In this step, instead of the joint values we want to define a goal pose, so we instantiate a geometry_msgs::Pose called target_pose1 in line 47. The pose consists of the x, y and z position as well as the orientation of the TCP. We want to keep the orientation, so we get the current pose of our end effector link in lines 44 and 45 and set the current orientation as the target orientation in line 49.

     // 2. Place the TCP (Tool Center Point, the tip of the robot) above the blue box 
     geometry_msgs::PoseStamped current_pose;
     current_pose = move_group_interface_arm.getCurrentPose("ee_link");
 
     geometry_msgs::Pose target_pose1;
 
     target_pose1.orientation = current_pose.pose.orientation;
     target_pose1.position.x = 0.3;
     target_pose1.position.y = 0.5;
     target_pose1.position.z = 0.2;
     move_group_interface_arm.setPoseTarget(target_pose1);
 
     success = (move_group_interface_arm.plan(my_plan_arm) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
 
     ROS_INFO_NAMED("tutorial", "Visualizing plan 1 (pose goal) %s", success ? "" : "FAILED");
 
     move_group_interface_arm.move();

For the coordinate axes I chose the same coordinates as displayed in Gazebo except for the z-position (lines 50 to 52). This one is a bit more tricky because the reference frame for the goal pose is the base frame of our robot, which already has an offset from the world frame since the robot is spawned at a z-position of 1.21. The position of the box displayed in Gazebo, in turn, is given in the world frame. Nonetheless, because the two tables are rather similar in height, the box and the robot base sit at approximately the same z-position, so a target z of 0.2 places the TCP comfortably above the box.

When our pose is defined, we can set the goal for our move_group_interface_arm. Note that this time we call the method setPoseTarget(). The following procedure is the same as in the last step: we compute the plan, print whether it was successful and execute the movement. I will not show these lines for the following steps anymore as they stay the same for every movement.
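Since these three lines repeat after every new target, one option is to wrap them in a small helper function. This is not part of the tutorial repository, just a sketch of how the repeated plan/log/move pattern could be factored out:

    // Sketch of a helper mirroring the plan / log / move pattern used in this
    // tutorial (not part of the original code, just to avoid repetition).
    bool planAndMove(moveit::planning_interface::MoveGroupInterface& group,
                     const std::string& step_name)
    {
      moveit::planning_interface::MoveGroupInterface::Plan plan;
      bool success = (group.plan(plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
      ROS_INFO_NAMED("tutorial", "Planning for %s %s", step_name.c_str(), success ? "succeeded" : "FAILED");
      if (success)
        group.move();
      return success;
    }

With such a helper, each step reduces to setting a target and then calling, for example, planAndMove(move_group_interface_arm, "step 2").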

3. Open the gripper

Now we want to command the gripper to open. As for the robot arm we create a plan my_plan_gripper for the gripper and command the pre-configured state “open” by calling setJointValueTarget. This time of course we use the move_group_interface_gripper.

     moveit::planning_interface::MoveGroupInterface::Plan my_plan_gripper;
 
     // 3. Open the gripper
     move_group_interface_gripper.setJointValueTarget(move_group_interface_gripper.getNamedTargetValues("open"));
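As announced above, the plan and execute calls are omitted from the remaining snippets. For the gripper they would mirror the arm calls from before and look roughly like this:

     success = (move_group_interface_gripper.plan(my_plan_gripper) == moveit::planning_interface::MoveItErrorCode::SUCCESS);

     ROS_INFO_NAMED("tutorial", "Visualizing plan (open gripper) %s", success ? "" : "FAILED");

     move_group_interface_gripper.move();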

4. Move the TCP close to the object

For the next step we take the previous target for our TCP position and apply an offset of -0.2 m to the z-position to move the robot towards the blue box.

     // 4. Move the TCP close to the object
     target_pose1.position.z = target_pose1.position.z - 0.2;
     move_group_interface_arm.setPoseTarget(target_pose1);

5. Close the gripper

     // 5. Close the  gripper
     move_group_interface_gripper.setJointValueTarget(move_group_interface_gripper.getNamedTargetValues("closed"));

6. Move the TCP above the plate

From checking the position of the plate by clicking on it, we know that it is 0.6 meters to the right from the initial position of the blue box where our TCP is located after the previous step. So we just need to apply this as an offset to the initial target position on the x axis. To move the arm to a position above the plate we also need to move up along the z axis (+0.2 offset).

     // 6. Move the TCP above the plate 
     target_pose1.position.z = target_pose1.position.z + 0.2;
     target_pose1.position.x = target_pose1.position.x - 0.6;
     move_group_interface_arm.setPoseTarget(target_pose1);

7./8. Lower TCP and open the gripper

When the robot is above the plate, we lower the TCP (by 0.14 m along the z-axis) and open the gripper.

     // 7. Lower the TCP above the plate
     target_pose1.position.z = target_pose1.position.z - 0.14;
     move_group_interface_arm.setPoseTarget(target_pose1);
     // 8. Open the gripper
     move_group_interface_gripper.setJointValueTarget(move_group_interface_gripper.getNamedTargetValues("open"));

Done! This is all we need for a simple pick and place task. Build the project by executing catkin_make in your catkin workspace:

~/catkin_ws$ catkin_make
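If you created your own package instead of cloning the finished one, the build also needs a CMakeLists.txt that declares the executable and links against MoveIt. A minimal sketch could look like this (target and file names are assumptions matching this tutorial):

    cmake_minimum_required(VERSION 3.0.2)
    project(ur5_simple_pick_and_place)

    # The MoveIt C++ interface is provided by moveit_ros_planning_interface
    find_package(catkin REQUIRED COMPONENTS
      roscpp
      moveit_ros_planning_interface
    )

    catkin_package()

    include_directories(${catkin_INCLUDE_DIRS})

    # Build the pick and place node from the source file discussed above
    add_executable(pick_and_place src/simple_pick_and_place.cpp)
    target_link_libraries(pick_and_place ${catkin_LIBRARIES})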

Now you can execute the task with the following command:

$  rosrun ur5_simple_pick_and_place pick_and_place

Collision avoidance with Moveit

Even though it seems like a very easy task, things can go wrong even in such a simple pick and place scenario. The gripper needs to be able to grasp the object and hold on to it until the target is reached. If the object surface is very smooth and has very little friction, it might slip out of the closed gripper. You might already see this happening in our demo.

Sometimes the motion planner produces a different plan for the same scenario: when you run it twice with the same start and goal position, it might take a different path. This can for example result in the robot crashing into one of the tables.

That is because our motion planner currently doesn’t know about any of the objects in our environment. Let’s illustrate that by adding an object between the blue box and the plate into the world. For that I prepared a second world, simple_pick_and_place_collision, which contains a green box on the table. So go ahead and change the name of the default world in line 9 of the gazebo.launch file inside the ur5_gripper_moveit_config package to load the simple_pick_and_place_collision world. The line should then look as follows:

    <arg name="world_name" default="$(find ur5_simple_pick_and_place)/world/simple_pick_and_place_collision"/>

When we restart the demo_gazebo.launch as before with

$ roslaunch ur5_gripper_moveit_config demo_gazebo.launch

we see an obstacle in the form of a big green box.

Now when we execute the pick_and_place ROS executable again, we see the robot crashing full speed into the green box.

Adding collision objects to the Moveit Planning Scene

To avoid a collision with the obstacle we need to tell Moveit about the object. The so-called planning scene represents the state of the robot as well as the environment around it. It is maintained by the Planning Scene Monitor inside the move_group node.

It listens to the joint state topic that contains the robot state and can receive sensor data e.g. from a depth camera. Through the C++ move_group_interface we can modify the world geometry information using the planning_scene topic. This topic is the one we will use to let the planner know about the obstacle.

We add the code for adding the collision object right after the move_group initializations. I added the code to the pick_and_place_collision.cpp file that you can view under this link.

First, we instantiate a planning_scene_interface (line 30) which will be used to add the collision object to the Planning Scene Monitor. Then we create an instance collision_object of the CollisionObject message type. Now we need to fill the message with details about the collision object. In line 34 we set the Planning Frame of the move group interface for the robot arm as frame_id. We also assign a unique name to the object. Let’s call it “box1”.

    moveit::planning_interface::PlanningSceneInterface planning_scene_interface;

    // Collision object
    moveit_msgs::CollisionObject collision_object;
    collision_object.header.frame_id = move_group_interface_arm.getPlanningFrame();

    collision_object.id = "box1";

    shape_msgs::SolidPrimitive primitive;
    primitive.type = primitive.BOX;
    primitive.dimensions.resize(3);
    primitive.dimensions[0] = 0.2;
    primitive.dimensions[1] = 0.4;
    primitive.dimensions[2] = 0.4;

    geometry_msgs::Pose box_pose;
    box_pose.orientation.w = 1.0;
    box_pose.position.x = 0.0;
    box_pose.position.y = 0.5;
    box_pose.position.z = 1.23 - 1.21;

    collision_object.primitives.push_back(primitive);
    collision_object.primitive_poses.push_back(box_pose);
    collision_object.operation = collision_object.ADD;

    std::vector<moveit_msgs::CollisionObject> collision_objects;
    collision_objects.push_back(collision_object);

    planning_scene_interface.applyCollisionObjects(collision_objects);

    ROS_INFO_NAMED("tutorial", "Add an object into the world");

    ros::Duration(2.0).sleep();

We need to define the shape of the object. Therefore, a SolidPrimitive shape message is created and gets assigned the type BOX in lines 38 and 39. The box needs to have the same dimensions as (or greater than) the one in our collision world (lines 40 to 43). It also needs to be at the same position (lines 45 to 49). After the shape and pose of the box are defined, we add them to the collision_object message in lines 51 and 52.

We want to add the collision object to the environment so we choose collision_object.ADD as the operation to apply. Because the method of our planning_scene_interface class takes a vector of collision objects we define one in line 55 and add the collision_object we created in line 56.

Now we can add the object to the planning scene by calling the applyCollisionObjects method in line 58. Afterwards we print some information and sleep for two seconds to make sure the changes are applied before we continue.

At the end of the execution we need to remove the created collision object from the planning scene, otherwise it will stay there until we restart the demo.

    ROS_INFO_NAMED("tutorial", "Remove the object from the world");
    std::vector<std::string> object_ids;
    object_ids.push_back(collision_object.id);
    planning_scene_interface.removeCollisionObjects(object_ids);

Let’s run the executable with the above changes and see what happens:

$  rosrun ur5_simple_pick_and_place pick_and_place_collision

We can see that the motion planner tries to find a path around the collision object. There might still be collisions between the blue and the green box because the planner doesn’t take the grasped object into account. This can be solved by increasing the dimensions of the collision object we defined and added to the planning scene.
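One simple way of doing that is to pad the dimensions of the primitive before applying the collision object. The values below add a few centimeters of margin along each axis; the exact numbers are an assumption and need to be tuned for your setup:

    // Padded version of the obstacle defined above (original: 0.2 x 0.4 x 0.4).
    // The extra margin is an assumption, tune it as needed.
    primitive.dimensions[0] = 0.3;
    primitive.dimensions[1] = 0.5;
    primitive.dimensions[2] = 0.5;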

Conclusion

Great! Now we know how to use Moveit’s C++ interface to program a simple pick and place task and how to add information about the environment so that the motion planner can avoid collisions. However, there is a lot more to learn about motion planning. Avoiding collisions is fairly easy when we know exactly which objects are in the environment.

However, we can also use sensor data, e.g. from a depth camera, to get information about the robot’s environment. This sensor data can then also be used by the motion planner for obstacle avoidance. In the next tutorial I show you how to modify the pick and place task to use a Kinect depth camera and detect collision objects automatically. I also created a tutorial where I show you how to use the OpenCV library with ROS to add Computer Vision to the pick and place task. On another note, there are many different motion planning algorithms that can be used. Each of them has its pros and cons. We will also talk about those in a future tutorial. Until then you can get an overview of all available tutorials here and I am happy to receive feedback in the comments.