As part of MIT's 6.141 robotics course, we were challenged in teams to create autonomous robots that could navigate a space while collecting blocks and ultimately deploy those blocks to form some sort of structure (see: background and details).
The approach that my team took for the grand challenge was an ambitious one: build a fleet of diverse but simple robots that cooperate to gather and stack blocks. These “worker” robots are extremely simple remote-control vehicles commanded by a sensor-equipped “mothership” robot. The primary motivation was to develop a system that could parallelize tasks and capitalize on the agility of smaller robots (for example, improved maneuverability in tight spaces).
Our original design consisted of three “worker” robots: an agile gatherer that could grasp and carry a block, a dump-truck that could carry multiple blocks, and a slow-but-precise stacker that could build block towers up to six blocks tall. The worker robots have no sensors of their own other than a gyroscope to track their heading - this lets the mothership command translational velocities and headings. Although we built all three worker robots, we only got the gatherer and dump-truck cooperating in time for the challenge.
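As a rough illustration of how a commanded heading can be held using only a gyroscope, here is a minimal heading-hold sketch in Python. The real workers run C firmware on the HappyBoard; read_gyro_heading and set_motor are hypothetical placeholders for that interface, not real APIs.

```python
# Minimal sketch of a gyro-based heading-hold loop for a worker robot.
# read_gyro_heading() and set_motor() are hypothetical placeholders for the
# worker's firmware interface (the real robots ran C on the HappyBoard).

def drive_at_heading(target_heading_deg, forward_speed,
                     read_gyro_heading, set_motor, kp=2.0):
    """One control step: drive forward while holding target_heading_deg."""
    heading = read_gyro_heading()                              # degrees, integrated from the gyro
    error = (target_heading_deg - heading + 180) % 360 - 180   # wrap error into [-180, 180)
    correction = kp * error                                    # proportional steering term
    set_motor("left",  forward_speed - correction)
    set_motor("right", forward_speed + correction)
```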
We built the worker robots using LEGO and the HappyBoard microcontroller platform from the 6.270 robotics course/competition, and used a wireless link to allow the mothership to remotely control them.
The next step was tracking the worker robots from the mothership - we used a Microsoft Kinect, which provides an RGB video feed along with a corresponding depth map (it's a pretty popular robotics tool these days). To identify the workers, I modified the robot-tracking system I co-authored for 6.270 (github repo), which looks for a type of 2D barcode on top of the robot (I previously blogged about this system). When one of these patterns is located in the RGB video feed, the software looks up the corresponding depth-map coordinates of the four corners of the pattern. The depth at those coordinates can be transformed into real <x,y,z> space coordinates to figure out where the worker is in relation to the mothership.
The robot tracker has identified the pattern and labelled the robot #1.
The colorized depth map.
The depth map (uncolored) - note the four white circles that indicate the depth probe points for tracking the robot’s true <x,y,z> world coordinates - these correspond with the corners of the fiducial as seen in the RGB image above.
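For the curious, converting those depth-map probe points into world coordinates is a standard pinhole back-projection. The sketch below shows the idea, using typical published Kinect depth-camera intrinsics rather than our actual calibration, and averaging the four corner points to place the worker.

```python
import numpy as np

# Approximate Kinect depth-camera intrinsics (typical published values,
# not the calibration we actually used).
FX, FY = 594.2, 591.0      # focal lengths, in pixels
CX, CY = 339.5, 242.7      # principal point, in pixels

def pixel_to_world(u, v, depth_m):
    """Back-project depth-map pixel (u, v) with depth in meters to <x, y, z>."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def worker_position(corner_pixels, depth_map):
    """Average the 3D points at the fiducial's four corners to locate the worker."""
    points = [pixel_to_world(u, v, depth_map[v, u]) for (u, v) in corner_pixels]
    return np.mean(points, axis=0)
```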
We also use the RGB video feed to identify blocks by filtering the hue, saturation, and brightness values and identifying connected components. Once a block is found, we probe the depth-map to determine the block’s true <x,y,z> coordinates.
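A minimal version of that block detector might look like the sketch below; the HSV range (roughly "red" here) and minimum blob area are illustrative values, not the thresholds we actually tuned, and pixel_to_world is the back-projection helper sketched above.

```python
import cv2
import numpy as np

def find_blocks(rgb_frame, depth_map,
                hsv_lo=(0, 120, 80), hsv_hi=(10, 255, 255), min_area=200):
    """Return approximate <x, y, z> positions of blocks matching an HSV range."""
    hsv = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

    blocks = []
    for i in range(1, n):                          # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:  # skip small noise blobs
            continue
        u, v = int(centroids[i][0]), int(centroids[i][1])
        blocks.append(pixel_to_world(u, v, depth_map[v, u]))  # probe the depth map
    return blocks
```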
To move the mothership together with the gatherer and dump-truck, the path-planning software assigns the same path to both the workers and the mothership. The workers’ paths are offset slightly so that the robots drive side by side, rather than attempting to reach the exact same endpoint and crashing into each other. As long as the workers are in view, the mothership commands them to move toward their next waypoint; otherwise it commands them to stay in place - this prevents the workers from wandering aimlessly if they get too far ahead of the mothership.
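A rough sketch of that offset-and-gate logic is below; the 0.5 m lateral spacing and the stop()/drive_toward() worker interface are assumptions for illustration, not our actual values or API.

```python
import math

def offset_waypoint(waypoint, path_heading, lateral_offset_m):
    """Shift a shared waypoint sideways (perpendicular to the path heading, in radians)."""
    x, y = waypoint
    return (x - math.sin(path_heading) * lateral_offset_m,
            y + math.cos(path_heading) * lateral_offset_m)

def command_worker(worker, waypoint, path_heading, in_view, side):
    """One control cycle for a single worker (spacing and API are illustrative)."""
    if not in_view:
        worker.stop()          # hold position until the tracker re-acquires the fiducial
        return
    offset = 0.5 if side == "left" else -0.5   # assumed 0.5 m side-by-side spacing
    worker.drive_toward(offset_waypoint(waypoint, path_heading, offset))
```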
To aid with localization, the robot detects walls using the Kinect: it takes a slice of the point cloud between roughly 20 cm and 30 cm above the ground and finds walls from the <x,y> coordinates of all points within that slice. One of our team members implemented a particle filter that corrects the odometry by comparing the wall-detection data against a known map.
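The wall-detection step boils down to filtering the point cloud by height and keeping the <x,y> projection, roughly as in the sketch below (it assumes z is height above the ground in the cloud's frame); each particle in the filter is then scored by how well those points line up with the known wall map.

```python
import numpy as np

def wall_points(point_cloud, z_min=0.20, z_max=0.30):
    """Return the <x, y> projection of points in the ~20-30 cm wall slice.

    point_cloud: N x 3 array of <x, y, z> points from the Kinect, with z
    assumed to be height above the ground for this sketch.
    """
    in_slice = (point_cloud[:, 2] >= z_min) & (point_cloud[:, 2] <= z_max)
    return point_cloud[in_slice, :2]
```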
Since the worker robots don't have sensors, the gatherer can't tell whether it has successfully grasped a block. To deal with this, the gatherer will turn toward the mothership - the mothership can then visually verify whether the gatherer is holding a block before telling it to place the block on the dump-truck.
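One simple way to make that visual check (shown here as an illustrative sketch, not necessarily what our code did) is to count block-colored pixels in a region of the RGB frame just in front of the gatherer's gripper, located from its tracked fiducial.

```python
import cv2
import numpy as np

def gatherer_has_block(rgb_frame, gripper_roi,
                       hsv_lo=(0, 120, 80), hsv_hi=(10, 255, 255), min_pixels=150):
    """Rough check: does the gripper region contain enough block-colored pixels?

    gripper_roi is an (x, y, w, h) box derived from the tracked fiducial pose;
    the HSV range and pixel threshold are illustrative, not tuned values.
    """
    x, y, w, h = gripper_roi
    patch = cv2.cvtColor(rgb_frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(patch, np.array(hsv_lo), np.array(hsv_hi))
    return cv2.countNonZero(mask) >= min_pixels
```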
In the end, our robot swarm was able to drive along a series of waypoints, collecting blocks along the path and placing them onto the dump-truck. The final system can be seen in action below:
Videos of other teams' robots can be seen here: http://www.csail.mit.edu/node/1529