
Robots and a Kinect

As part of MIT's 6.141 robotics course, we were challenged in teams to create autonomous robots that could navigate a space while collecting blocks and ultimately deploy those blocks to form some sort of structure (see: background and details).

The approach that my team took for the grand challenge was an ambitious one: create a fleet of diversified but simple robots that cooperate to gather and stack blocks. These “worker” robots are meant to be extremely simple remote-control vehicles that are commanded by a sensory “mothership” robot. The primary motivation was to develop a system that could parallelize tasks and capitalize on the agility of using smaller robots (for example, improved maneuverability in tight spaces).

Our original design consisted of three “worker” robots: an agile gatherer that could grasp and carry a block, a dump truck that could carry multiple blocks, and a slow-but-precise stacker robot that could create block towers up to six blocks tall. The worker robots have no sensors of their own other than a gyroscope to track their heading - this allows us to command translational velocities and headings. Although we built all three worker robots, we only managed to get the gatherer and dump truck cooperating in time for the challenge.







The gatherer worker.


The dump-truck worker.


The stacker worker.

We built the worker robots using LEGO and the HappyBoard microcontroller platform from the 6.270 robotics course/competition, and used a wireless link to allow the mothership to remotely control them.
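To give a sense of how a gyro-only worker can follow heading commands, here is a minimal sketch of the worker-side control loop, written in Python for readability. The proportional gain, the state layout, and the differential-drive mixing are assumptions for illustration, not the actual firmware.

```python
import math

KP = 2.0  # proportional gain on heading error (assumed value)

def worker_drive_step(state, gyro_rate, dt, cmd_heading, cmd_speed):
    """One control-loop iteration: integrate the gyro, then steer toward the commanded heading."""
    state["heading"] += gyro_rate * dt                    # dead-reckon heading from the gyro (radians)
    error = cmd_heading - state["heading"]
    error = (error + math.pi) % (2 * math.pi) - math.pi   # wrap the error into [-pi, pi)
    turn = KP * error
    return cmd_speed - turn, cmd_speed + turn             # left and right drive commands
```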

The next step was tracking the worker robots from the mothership. We used a Microsoft Kinect, which provides an RGB video feed along with a corresponding depth map (it's a pretty popular robotics tool these days). To identify the workers, I modified the robot-tracking system I co-authored for 6.270 (github repo), which looks for a type of 2D barcode on top of each robot (I previously blogged about this system). When one of these patterns is located in the RGB video feed, the software looks up the corresponding depth-map coordinates of the four corners of the pattern. The depth at those coordinates can be transformed into real <x,y,z> space coordinates to figure out where the worker is in relation to the mothership.
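Turning a depth-map pixel into camera-frame coordinates is a standard pinhole back-projection; a minimal sketch of that step is below. The intrinsic values are rough published estimates for the Kinect rather than calibrated numbers from our rig, and the corner-averaging helper is purely illustrative.

```python
import numpy as np

# Approximate Kinect depth-camera intrinsics (assumed values for a 640x480 image).
FX, FY = 594.0, 591.0   # focal lengths, in pixels
CX, CY = 320.0, 240.0   # principal point

def depth_to_xyz(u, v, depth_m):
    """Back-project one depth pixel (u, v), with depth in meters, into camera-frame <x, y, z>."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def worker_position(corner_pixels, depth_image_m):
    """Average the back-projected fiducial corners to estimate the worker's position."""
    points = [depth_to_xyz(u, v, depth_image_m[v, u]) for (u, v) in corner_pixels]
    return np.mean(points, axis=0)
```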



The robot tracker has identified the pattern and labelled the robot #1.



The colorized depth map.



The depth map (uncolored) - note the four white circles that indicate the depth probe points for tracking the robot’s true <x,y,z> world coordinates - these correspond with the corners of the fiducial as seen in the RGB image above.



We also use the RGB video feed to identify blocks by filtering the hue, saturation, and brightness values and identifying connected components. Once a block is found, we probe the depth-map to determine the block’s true <x,y,z> coordinates.
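A rough sketch of that kind of color-blob detector using OpenCV is shown below; the HSV bounds and the minimum-area threshold are assumed tuning parameters, not the values we actually used.

```python
import cv2
import numpy as np

def find_block_pixels(bgr_frame, hsv_lo, hsv_hi, min_area=200):
    """Return pixel centroids of color blobs whose HSV values fall within [hsv_lo, hsv_hi]."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blocks = []
    for i in range(1, count):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:   # reject small noise blobs
            blocks.append(tuple(centroids[i]))
    return blocks
```

Each centroid can then be fed through the same depth probe shown earlier (e.g. depth_to_xyz) to recover the block's <x,y,z> position relative to the mothership.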

To move the mothership together with both the gatherer and dump truck, the path-planning software assigns the same path to the workers and the mothership. The workers’ paths are modified slightly so that the two robots drive side by side, rather than attempting to reach the exact same endpoint and crashing into each other. As long as the workers are in view, the mothership will command them to move toward their next waypoint; otherwise it will command them to stay in place - this prevents the workers from wandering aimlessly if they get too far ahead of the mothership.
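Sketched in Python, the waypoint-offset and hold-in-place logic might look roughly like the following; the 0.5 m lateral spacing and the fixed speed are assumptions for illustration, not our actual tuning.

```python
import numpy as np

LATERAL_OFFSET_M = 0.5   # assumed side-by-side spacing between the two workers

def offset_waypoint(waypoint_xy, path_heading, side):
    """Shift a shared waypoint to the left (+1) or right (-1) of the mothership's path."""
    lateral = np.array([-np.sin(path_heading), np.cos(path_heading)])
    return np.asarray(waypoint_xy, dtype=float) + side * LATERAL_OFFSET_M * lateral

def command_worker(visible, worker_xy, waypoint_xy, path_heading, side, speed=0.2):
    """Drive the worker toward its offset waypoint only while the Kinect can see it."""
    if not visible:
        return 0.0, None                              # hold position until reacquired
    dx, dy = offset_waypoint(waypoint_xy, path_heading, side) - np.asarray(worker_xy, dtype=float)
    return speed, float(np.arctan2(dy, dx))           # (forward speed, commanded heading)
```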

To aid with localization, the mothership detects walls using the Kinect: it takes a slice of the Z-space between ~20cm and 30cm above the ground and finds walls by looking at the <x,y> coordinates of all points in the point cloud within that slice. One of our team members implemented a particle filter that corrects the odometry estimate by comparing the wall-detection data against a known map.
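The wall slice itself is just a filter on the point cloud's z values; a minimal sketch, assuming the cloud is an N×3 array in meters with z measured up from the floor:

```python
import numpy as np

def wall_scan(point_cloud_xyz, z_min=0.20, z_max=0.30):
    """Keep only points in a horizontal slice above the floor; their <x, y> values trace the walls."""
    z = point_cloud_xyz[:, 2]
    in_slice = (z >= z_min) & (z <= z_max)
    return point_cloud_xyz[in_slice, :2]   # project the slice onto the ground plane
```

The particle filter then scores each pose hypothesis by how well these <x,y> points line up with the walls of the known map.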

Since the worker robots don't have sensors, the gatherer can't tell whether it has successfully grasped a block. To deal with this, the gatherer turns toward the mothership, which can then visually verify whether the gatherer is holding a block before telling it to place the block on the dump truck.
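One way that visual check could be sketched, reusing the hypothetical find_block_pixels helper from above (the gripper_bbox region, derived from the tracked fiducial pose, is also an assumption):

```python
def block_in_gripper(rgb_frame, gripper_bbox, hsv_lo, hsv_hi):
    """Check whether the color-based block detector fires inside the gatherer's gripper region."""
    x0, y0, x1, y1 = gripper_bbox                     # hypothetical image-space box just ahead of
                                                      # the gripper, derived from the fiducial pose
    roi = rgb_frame[y0:y1, x0:x1]
    return len(find_block_pixels(roi, hsv_lo, hsv_hi)) > 0
```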

In the end, our robot swarm was able to drive along a series of waypoints, collecting blocks along the path and placing them onto the dump truck. The final system can be seen in action below:




Videos of other teams' robots can be seen here: http://www.csail.mit.edu/node/1529

