Tuesday, May 17, 2011

Robots and a Kinect

As part of MIT's 6.141 robotics course, we were challenged in teams to create autonomous robots that could navigate a space while collecting blocks and ultimately deploy those blocks to form some sort of structure (see: background and details).

The approach that my team took for the grand challenge was an ambitious one: create a fleet of diversified but simple robots that cooperate to gather and stack blocks. These “worker” robots are meant to be extremely simple remote-control vehicles that are commanded by a sensory “mothership” robot. The primary motivation was to develop a system that could parallelize tasks and capitalize on the agility of using smaller robots (for example, improved maneuverability in tight spaces).

Our original design consisted of three “worker” robots: an agile gatherer that could grasp and carry a block, a dump truck that could carry multiple blocks, and a slow-but-precise stacker that could build block towers up to six blocks tall. The workers have no sensors of their own other than a gyroscope to track their heading - this lets the mothership command translational velocities and headings. Although we built all three workers, we only got the gatherer and dump-truck cooperating in time for the challenge.







The gatherer worker.


The dump-truck worker.


The stacker worker.

We built the worker robots using LEGO and the HappyBoard microcontroller platform from the 6.270 robotics course/competition, and used a wireless link to let the mothership control them remotely.
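To give a concrete sense of how the mothership drives a gyro-only worker, here's a minimal sketch of a heading-hold control loop. The function name, gain, and power range are hypothetical illustrations, not the actual HappyBoard firmware:

```python
# Hypothetical sketch of a gyro-only worker drive loop. The mothership sends a
# translational speed and a target heading over the wireless link; the worker
# holds that heading using only its gyroscope.
def drive_step(gyro_heading_deg, cmd_speed, cmd_heading_deg, kp=2.0):
    """Return (left, right) motor powers for one control-loop iteration."""
    error = cmd_heading_deg - gyro_heading_deg
    error = (error + 180.0) % 360.0 - 180.0  # wrap into [-180, 180) so we turn the short way
    turn = kp * error                        # simple proportional steering correction
    clamp = lambda v: max(-100.0, min(100.0, v))
    return clamp(cmd_speed + turn), clamp(cmd_speed - turn)
```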

The next step was tracking the worker robots from the mothership. We used a Microsoft Kinect, which provides an RGB video feed along with a corresponding depth map (it's a pretty popular robotics tool these days). To identify the workers, I modified the robot-tracking system I co-authored for 6.270 (github repo), which looks for a type of 2D barcode on top of each robot (I previously blogged about this system). When one of these patterns is located in the RGB video feed, the software looks up the corresponding depth-map coordinates of the four corners of the pattern. The depth at those coordinates can be transformed into real <x,y,z> space coordinates to figure out where the worker is in relation to the mothership.
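For reference, going from the four pattern corners to a 3D position is a standard pinhole-camera projection. The sketch below assumes nominal Kinect intrinsics and a depth map already converted to meters, which may differ from the exact calibration we used:

```python
# Minimal sketch: convert the fiducial's depth-map corners into a camera-frame
# <x, y, z> position. The intrinsics below are nominal Kinect values, not
# necessarily the calibration we actually used.
FX = FY = 580.0          # approximate depth-camera focal length, in pixels
CX, CY = 320.0, 240.0    # principal point for a 640x480 depth image

def depth_to_xyz(u, v, depth_m):
    """Project a depth-map pixel (u, v) with depth in meters into camera-frame coordinates."""
    return ((u - CX) * depth_m / FX,
            (v - CY) * depth_m / FY,
            depth_m)

def fiducial_position(corner_pixels, depth_map):
    """Average the 3D positions of the four pattern corners."""
    points = [depth_to_xyz(u, v, depth_map[v][u]) for (u, v) in corner_pixels]
    return tuple(sum(axis) / len(points) for axis in zip(*points))
```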



The robot tracker has identified the pattern and labelled the robot #1.



The colorized depth map.



The depth map (uncolored). Note the four white circles that mark the depth-probe points used to find the robot’s true <x,y,z> world coordinates; they correspond to the corners of the fiducial seen in the RGB image above.



We also use the RGB video feed to identify blocks by filtering the hue, saturation, and brightness values and identifying connected components. Once a block is found, we probe the depth-map to determine the block’s true <x,y,z> coordinates.
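Here is a rough sketch of that block-finding step using OpenCV. The HSV range shown is an illustrative guess for one block color, not our actual tuned thresholds:

```python
# Threshold the frame in HSV and keep sizeable connected components as block
# candidates. The color range below is illustrative, not our tuned values.
import cv2

def find_blocks(bgr_frame, min_area=200):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))  # example red-ish range
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blocks = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cx, cy = centroids[i]
            blocks.append((int(cx), int(cy)))  # pixel to probe in the depth map
    return blocks
```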

To move the mothership together with the gatherer and dump-truck, the path-planning software assigns the same path to the workers and the mothership. The workers’ paths are offset slightly so that the two robots drive side by side rather than converging on the exact same endpoint and crashing into each other. As long as the workers are in view, the mothership commands them to move toward their next waypoint; otherwise it commands them to stay in place - this prevents the workers from wandering aimlessly if they get too far ahead of the mothership.
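One way to picture the side-by-side offset is to shift each worker's waypoint perpendicular to the direction of travel. This is just a geometric illustration, not our actual planner code:

```python
# Offset a worker's waypoint sideways so the gatherer and dump-truck don't
# converge on the same point. Coordinates and offsets are in meters.
import math

def offset_waypoint(prev_wp, next_wp, lateral_offset_m):
    """Shift next_wp perpendicular to the segment prev_wp -> next_wp."""
    dx, dy = next_wp[0] - prev_wp[0], next_wp[1] - prev_wp[1]
    length = math.hypot(dx, dy) or 1.0
    px, py = -dy / length, dx / length   # unit vector perpendicular to the direction of travel
    return (next_wp[0] + px * lateral_offset_m,
            next_wp[1] + py * lateral_offset_m)

# e.g. gatherer drives 0.4 m to one side of the path, dump-truck 0.4 m to the other
gatherer_wp = offset_waypoint((0.0, 0.0), (2.0, 0.0), +0.4)
dump_truck_wp = offset_waypoint((0.0, 0.0), (2.0, 0.0), -0.4)
```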

To aid with localization, the robot detects walls using the Kinect: it takes a slice of the point cloud between roughly 20 cm and 30 cm above the ground and finds walls by looking at the <x,y> coordinates of all points within that slice. One of our team members implemented a particle filter that corrects the odometry by comparing the wall-detection data against a known map.
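Conceptually, the wall slice is just a height filter on the point cloud followed by dropping the height coordinate. A minimal sketch, assuming the cloud has already been transformed into the robot frame in meters:

```python
# Keep only points roughly 20-30 cm above the ground and project them to <x, y>;
# these wall points are what gets compared against the known map.
def wall_points(point_cloud, z_min=0.20, z_max=0.30):
    """point_cloud is an iterable of (x, y, z) tuples in the robot frame, in meters."""
    return [(x, y) for (x, y, z) in point_cloud if z_min <= z <= z_max]

def wall_grid(points_xy, cell_m=0.05):
    """Bin the wall points into coarse grid cells for map matching."""
    return {(round(x / cell_m), round(y / cell_m)) for (x, y) in points_xy}
```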

Since the worker robots don't have sensors, the gatherer can't tell whether it has successfully grasped a block. To deal with this, the gatherer turns toward the mothership, which can then visually verify whether it is holding a block before telling it to place the block on the dump-truck.

In the end, our robot swarm was able to drive along a series of waypoints, collecting blocks along the path and placing them onto the dump-truck. The final system can be seen in action below:




Videos of other teams' robots can be seen here: http://www.csail.mit.edu/node/1529

Tuesday, March 8, 2011

Building a party lighting system

One of the coolest things about MIT is the wide range of opportunities to work on awesome projects outside of class. I'm on the executive board of a fairly new student group called Next Make. Next Make is a collection of motivated engineers in my dorm - Next House - with a mission of furthering Mens et Manus at MIT. We want to practice and teach hands-on engineering skills that build upon our collective past experience in order to learn and build really cool stuff!

Over the past semester, culminating at the beginning of February, I helped organize the design and construction of an amazing LED party lighting system for our dorm with Next Make.



At the beginning, our goal was exactly that statement: "we want to build a really awesome LED party lighting system for our dorm." This was an admittedly broad goal, so during the semester we refined the idea into a detailed design that could actually be implemented. (On a sidenote, another cool thing about MIT is that there are lots of ways to find funding for crazy ideas like this - in our case our dorm graciously funded most of the project!)

We selected LEDs to use, driver ICs to power them, and microcontrollers to act as the brains, and developed a vision of what we wanted the final product to look like. After all that planning we got together as a group and worked tirelessly for about four weeks (in parallel with our other classwork) to get the system built and fully functional.



We had soldering parties, construction parties, and generally involved a bunch of people in the process. I'll be posting later with more details about the design process and construction, but for now I mostly want to show off how the project turned out!

Here's a quick demo of the system:



(Note: this was filmed before the system was completed, so you may notice a few minor glitches)


There are 8 front panels with 4 full color (RGB mixed) pixels each - these form a linear array of color, and are probably the most visible and distinguishable part of the system. (video of 3 of these panels stacked on top of one another).

Here's a video of the front panels along with a peek at the control software:


There's also a set of 32 RGB lights that sit in a recessed lighting fixture, casting a bright, colored glow across the rear of the room (video of the first partial "rear glow" trial). The rear-glow lights are all individually addressable, allowing us to design chase patterns and color fades for them as well.
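To give a sense of what individually addressable lights make possible, a chase pattern is just a per-frame computation over the 32 lights. This is an illustrative sketch, not our actual effect code:

```python
# Generate one frame of a simple chase effect across 32 addressable RGB lights:
# a bright head with a fading tail that wraps around the strip.
NUM_LIGHTS = 32

def chase_frame(step, color=(255, 0, 0), tail=4):
    """Return a list of (r, g, b) tuples, one per light, for this time step."""
    frame = []
    for i in range(NUM_LIGHTS):
        distance = (step - i) % NUM_LIGHTS
        if distance < tail:
            fade = 1.0 - distance / float(tail)   # brightest at the head, fading behind it
            frame.append(tuple(int(c * fade) for c in color))
        else:
            frame.append((0, 0, 0))
    return frame
```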

We also wanted dimmable blacklights that could pulse to the beat, so we built 8 sets of high-powered UV LED panels (carefully selected to have a safe spectrum). Next to the UV LEDs we also placed ridiculously bright white LEDs for strobe effects. This is what one of those panels looks like (at about 1% brightness so the camera could focus):




All of these lights are connected on a network and controlled by a computer running custom Light DJ software. The software runs real-time music analysis to create impressive visual effects synced to the beat, while giving a live Light DJ high-level control to select lighting effects that match the mood of the music.
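The beat-sync idea can be sketched with a simple audio-energy onset detector: flag a beat when the short-term energy jumps well above its recent average, and trigger an effect on it. The real Light DJ software does far more analysis than this; the snippet is only meant to convey the flavor:

```python
# Toy beat detector: compare the current audio frame's energy against a rolling
# average of recent frames and report an onset when it spikes.
from collections import deque

class OnsetDetector:
    def __init__(self, history=43, threshold=1.5):
        self.energies = deque(maxlen=history)   # ~1 second of history at ~43 frames/sec
        self.threshold = threshold

    def is_beat(self, samples):
        """samples: one frame of audio samples (floats); returns True on an onset."""
        energy = sum(s * s for s in samples) / len(samples)
        average = sum(self.energies) / len(self.energies) if self.energies else energy
        self.energies.append(energy)
        return energy > self.threshold * average
```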


So what did we do once we finished it? Naturally, we threw a huge party!

In fact, it was Next House's first party in 7 years!

Over 500 MIT students showed up over the course of the night!

The basement of Next House was packed from wall to wall with people dancing!

(Poster by Anton Nguyen; Photos by Scott Bezek and RJ Ryan)