Monday, June 11, 2012

Simple USB LED Controller - Part 2

After I fixed the pinout mixup from the previous version, my Simple USB LED Controller (SULC) v0.2 works!

Check out Part 1 and Part 1.5 for a bit more background on SULC.  In short, it's a ridiculously simple way to control high-power RGB LEDs from a computer.  You can send commands like "red, blue" or "all green" to control the LEDs, rather than implementing some complex protocol.

The build process for this version was the same as my first prototype - using a laser-cut solder paste stencil and "frying pan" reflow soldering - so I don't have any new pictures to show of that.  However, I do have pictures and video of the new version in action:

(I ran out of TLC5940s, so I decided to make this board with just 2 of them rather than waiting for a shipment to arrive - notice the missing IC in the top right corner)


The video gives a brief overview and shows just how easy it is to control high-power LEDs with SULC:



The full design files (schematic, PCB, firmware, and software) are on GitHub: https://github.com/scottbez1/sulc


Monday, April 2, 2012

Next Make CPW USB Gadget

I just got some PCBs in the mail!  These are the PCBs I designed for Next Make's Campus Preview Weekend (CPW) event later this April.  CPW is when all the MIT admitted students are invited to come check out the campus and see what life at MIT is like.  Generally all the student groups on campus throw fun events for the prefrosh - and Next Make is no exception!

This year, prospective students of the class of 2016 will be able to solder up and take home a cute USB gadget at the Next Make event:





The board plugs into a USB port and pretends to be a USB keyboard - it can then "type" a message into the computer it's plugged into, without having to install any drivers (inspired by an Instructable USB PCB business card that types out the author's resume).  You can program any message you want into it (up to about 1000 characters).  Here's a video of it in action:




The board is based on the ATtiny45 running V-USB (a software-only USB library), which lets the device show up as a low-speed USB device.  If I have some free time, I may write alternate firmware that emulates a USB mouse and sends random mouse movements at random intervals as a prank device, like ThinkGeek's Phantom Keystroker.
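
For the curious, here's a rough sketch of what a V-USB keyboard-emulation main loop looks like.  This is a simplified illustration, not the actual CPW firmware - the message, pin assignments, and the 2-byte report format (which assumes a matching HID report descriptor) are all placeholders:

#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>
#include <util/delay.h>
#include "usbdrv.h"  // V-USB

// The message lives in flash (PROGMEM) - the ATtiny45 only has 256 bytes of RAM.
static const char message[] PROGMEM = "hello from next make";

static uchar report[2];     // {modifier, keycode}
static uint16_t pos = 0;    // index of the next character to type
static uchar key_down = 0;  // alternate key-down/key-up reports

// Map a limited character set to HID keycodes ('a'..'z' are usage IDs 4..29).
static uchar keycode_for(char c) {
    if (c >= 'a' && c <= 'z') return 4 + (c - 'a');
    if (c == ' ') return 44;  // spacebar
    return 0;
}

// Real firmware must answer HID class requests here; a stub suffices to link.
usbMsgLen_t usbFunctionSetup(uchar data[8]) { return 0; }

int main(void) {
    usbInit();
    usbDeviceDisconnect();  // force the host to re-enumerate us
    _delay_ms(250);
    usbDeviceConnect();
    sei();

    for (;;) {
        usbPoll();  // V-USB housekeeping - must run at least every 50ms
        if (usbInterruptIsReady() && pos < strlen_P(message)) {
            if (!key_down) {
                report[1] = keycode_for(pgm_read_byte(&message[pos]));
            } else {
                report[1] = 0;  // key-up, so repeated letters register
                pos++;
            }
            key_down = !key_down;
            usbSetInterrupt(report, sizeof(report));
        }
    }
    return 0;
}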

The PCB designs are on GitHub: https://github.com/scottbez1/nextmake-cpw2012

Looking forward to CPW!

Thursday, March 29, 2012

Simple USB LED Controller - Part 1.5

I've been working on a Simple USB LED Controller (read Part 1), but unfortunately I ran into a bit of a snag - it turns out the surface-mount package of the TLC5940 has different pin assignments than the through-hole version I've used before.  Even though it has the same number of pins in the same physical arrangement, the assignments are shifted over by 7 pins, which means my original PCB design doesn't work.  Lesson learned: double-check the datasheet!  I've updated the PCB design and sent off v0.2 to have new PCBs made, so now I just have to wait a few weeks for them to arrive.

In the meantime, though, I was able to get the LUFA USB library up and running, port the Arduino TLC5940 library to the ATMega32U2, and write a good portion of the LED-control firmware.  To test it out, I programmed the controller board I built, but had to use LED drivers on a separate breadboard.  It's ugly, but it works:


The goal of SULC is to make controlling high-power RGB LEDs really simple, so the firmware I'm writing can parse several different formats to set the colors of the LEDs.  It shows up as a virtual serial device, and you can send simple messages to set all LED colors.  Here are some examples:
  •  "all purple" - set all 5 RGB LEDs to purple
  • "red, green, blue, yellow, teal" - sets the LEDs to different colors
  • ",,red,,yellow" - sets the 3rd LED to red and the 5th to yellow (leaving the others unchanged)
  • "all 50 50 0" - sets all to a dim yellow (using decimal RGB values)
  • "red; 20,0,80; blue; green; 50,50,200" - sets all 5 LEDs using a mix of names and rgb values
I'm planning to add support for hex colors (e.g. "#FF0000" or "#ff0") along with a more efficient binary protocol for programmatically setting the colors quickly (see protocol.txt for details).
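
Since SULC shows up as a plain virtual serial port, no special libraries are needed to talk to it.  Here's a hypothetical host-side example in C - the device path and the newline terminator are assumptions (on Linux the board typically enumerates as /dev/ttyACM0):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    // Open the virtual serial port (path will vary by OS and device number)
    int fd = open("/dev/ttyACM0", O_WRONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    // Set all 5 RGB LEDs with one human-readable command
    const char *cmd = "red, green, blue, yellow, teal\n";
    write(fd, cmd, strlen(cmd));

    close(fd);
    return 0;
}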

Getting the microcontroller to parse all ~147 standard web color names was a bit tricky.  The ATMega32U2 only has 1KB of RAM, and a good chunk of that is used for actually running the program, so there's no room to store a table of color names as a global variable in RAM.  Instead, I used avr-gcc's PROGMEM macro to specify that particular data structures should live in flash program memory instead (Dean Camera wrote a nice tutorial on PROGMEM).  I defined two main data structures: one giant string with every color name concatenated together, along with an array of structs that holds each color's name length and its RGB values:

typedef struct {
    const uint8_t name_len;
    const uint8_t r;
    const uint8_t g;
    const uint8_t b;
} Color;

const Color colors[] PROGMEM = {
    {3,0,0,0},          //off
    {9,240,248,255},    //aliceblue
    {12,250,235,215},   //antiquewhite
    {4,0,255,255},      //aqua

    ...
};

const char COLOR_NAMES[] PROGMEM = "offaliceblueantiquewhiteaquaaquamarineazurebeigebisqueblackblanchedalmondbluebluevioletbrownburlywoodcadetbluechartreusechocolatecoralcornflowerbluecornsilkcrimsoncyan ..." ;


Reading from PROGMEM structures is a little different from reading normal variables - instead of getting a value with syntax like:

uint8_t len = colors[5].name_len;

you need to use a macro to read a byte from program memory:

uint8_t len = pgm_read_byte(&colors[5].name_len);

The reason for the difference is that program memory and RAM are separate address spaces - the address of the colors array points into program memory.  Indexing into the array the normal way (e.g. colors[5]) would read from that address in RAM, which obviously won't work because the data isn't in RAM!  There are also macros for reading a word, dword, or float, defined in avr/pgmspace.h.

To interpret a color name, the parser scans through the colors array looking for an entry whose name length matches the input, and whenever it finds one, it compares the input buffer against the corresponding substring of COLOR_NAMES to see if they match.  Of course there are plenty of possible optimizations - better data structures to make lookups faster, or compression to make the color names take up less space - but it's currently "fast enough", and with 32K of program memory available, size isn't a huge concern right now either.
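
Here's a sketch of that lookup based on the data structures above (the function name and signature are mine; the real firmware may differ):

#define NUM_COLORS (sizeof(colors) / sizeof(colors[0]))

// Returns the index of the color matching the input name, or -1 if none does.
static int16_t find_color(const char *input, uint8_t input_len) {
    uint16_t offset = 0;  // running offset into COLOR_NAMES
    for (uint16_t i = 0; i < NUM_COLORS; i++) {
        uint8_t len = pgm_read_byte(&colors[i].name_len);
        // Only bother comparing strings when the lengths match
        if (len == input_len &&
                memcmp_P(input, &COLOR_NAMES[offset], len) == 0) {
            return i;
        }
        offset += len;  // skip over this name in the concatenated string
    }
    return -1;
}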

I'll post another entry once the new PCBs get here (assuming they work this time!).

Sunday, March 25, 2012

Simple USB LED Controller - Part 1

Back when Next Make built the Next House Party Lighting System, we designed the LED controllers to connect on a shared RS-485 network over CAT5 cable.  This was a great solution for that system since the controllers were far apart (RS-485 uses differential signaling so it's pretty robust over longer distances), and we had 24 separate controllers to connect so we wanted to be able to chain them together on a single network.

But if you wanted to set up a smaller-scale LED system with just 1 or 2 sets of LEDs, those controllers were a bit overkill - you needed a separate USB->RS-485 converter and then had to string them together with CAT5.  So I set out to design a simpler high-power LED controller with a USB port directly on it (I'm calling it SULC - the Simple USB LED Controller).

Instead of using an FTDI chip (a USB->serial converter) along with a microcontroller, I wanted to try out the ATMega8/16/32U2 family of AVRs, which have USB support built in.  Unfortunately there's no through-hole version of those chips, so I had to design a PCB to try them out - my first experience laying out a PCB from scratch.  I used the open-source KiCad EDA suite for the schematic design and PCB layout.  After a weekend of work, I had a PCB ready to send off to production:


I ordered the PCBs from SeeedStudio which offers an amazing deal: Fusion PCB Service - $10 for 10 boards that are 5cm x 5cm, with 2 layers of copper, soldermask and silkscreen on both sides.  The boards arrived about 2 weeks after I placed the order (mostly shipping time from China), and looked great:


The next step was soldering the components to the board.  Since most of the components were surface mount, I decided to try out "frying pan reflow" - you first spread solder paste on each pad on the PCB, then line up all of the components on top of that, and finally stick it in a frying pan to melt and reflow the solder.  SparkFun has a great article about low-cost reflow soldering.  

But how do you get the solder paste cleanly onto the pads when they're only ~0.01" wide? You can buy solder paste syringes to squirt the paste onto each pad individually, but that seemed like a lot of work with ~150 pads, and tricky to get the right amount onto each pad.  Instead, I used a solder-paste stencil to apply the paste - SparkFun also has a great tutorial on solder paste stencils.  You can order solder paste stencils online from places like Pololu, but to go the full DIY approach, I made my own.  I bought 3 mil mylar on McMaster and had my friend laser cut holes in it for the pads.  Here's what the stencil looks like:


A few of the smaller holes didn't get completely cut, so I had to use a pin to clean them up:

(notice the little bits of mylar stuck on the upper side of those holes)

After cleaning up the stencil, it was time to apply the solder paste.  I used the technique described by SparkFun - use other PCBs to hold the one you're working with in place, and spread the paste with a putty knife:

Spreading the paste across the stencil


Unfortunately the solder paste didn't apply very cleanly - probably because I didn't hold the stencil down tightly enough, and because the paste was warm when I applied it, making it runnier than I would have liked.  I went ahead and placed each component on top of the solder paste anyway:

Solder paste and components placed

In order to reflow the solder paste, I stuck the PCB inside a rectangular aluminum extrusion and placed that on a small electric stove/hot plate:

PCB placed inside aluminum extrusion to help spread the heat


When reflowing solder, there's a specific temperature profile you're supposed to follow to get it to melt and make good connections.  A number of people have modified toaster ovens with PID control loops to follow that profile precisely.  I just used a thermocouple with my multimeter to measure the temperature and adjusted the stove's knob by hand - pretty simple, and it worked fine.


Fresh out of the oven!


The reflow process was mostly successful - all of the small discrete components like resistors, capacitors, and LEDs aligned themselves and were soldered in place perfectly.  There were a couple of solder bridges, though, between pins on the TLC5940s and on the ATMega32U2:


A nasty solder bridge on a TLC5940 (top) and a minor one on the ATMega32U2 (bottom)

After a bit of cleanup with solder wick and flux, everything looked good to go.  I soldered up the through-hole components and then came the moment of truth - plugging the board in.  To my surprise, it actually lit up the first time!


And even better than seeing that beautiful blue light was the output of lsusb:

Bus 004 Device 126: ID 03eb:2ff0 Atmel Corp.

Yes!  The board shows up as a USB device (running Atmel's DFU bootloader)!  

I wrote a quick test program and loaded it onto the device over USB using dfu-programmer.  It works!  It flashes each of the 4 debug LEDs on the board:
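
For reference, the dfu-programmer flow looks roughly like this ("test.hex" stands in for whatever hex file your build produces):

dfu-programmer atmega32u2 erase
dfu-programmer atmega32u2 flash test.hex
dfu-programmer atmega32u2 reset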


That's as far as I've gotten, but I think it's pretty awesome progress for my first-ever custom PCB and first time working with surface-mount components.  Next I need to reprogram the fuses on the microcontroller to get the full 16MHz clock speed; then I can try using LUFA to make the board show up as a USB virtual serial device, and finally I can see whether the TLC5940 LED drivers are connected correctly to drive high-power LEDs.

The board designs are on GitHub - https://github.com/scottbez1/sulc - although beware that I haven't finished testing the board, so there may still be errors.

Tuesday, February 28, 2012

Remapping Zoom on the Microsoft Natural Keyboard 4000 - Mac OS X

The Microsoft Natural Keyboard 4000 is pretty great, but the "Zoom" slider in the middle never seemed very useful to me - how often do I need to zoom in or out? I scroll much more often than I zoom, so it would be nice to remap the slider to scroll instead.
 
Although Microsoft's control panel lets you remap the special function keys, it doesn't let you change the Zoom slider function. Luckily for Windows users there's a fairly simple xml file that you can edit to change the mapping (and there are plenty of explanations: SuperUser, Josh Highland, Joel Bennett, etc).

Unfortunately, the "commands.xml" config file doesn't exist on Mac OS X. Instead, there's a binary file for the configuration, which makes it tough to modify:
/Users/YOUR_NAME/Library/Preferences/com.microsoft.keyboard.pref

After a bit of reverse-engineering, I was able to remap the Zoom slider to the UP and DOWN keys (sadly, using the SCROLL mapping doesn't auto-repeat, so UP/DOWN was the best I could do).

Scrolling Instead of Zooming - The Easy Way:
The easiest way to get scrolling instead of zooming is to replace your com.microsoft.keyboard.pref file with a modified version:
  1. Make sure System Preferences is closed
  2. Download the modified pref file: com.microsoft.keyboard.pref
  3. Navigate to /Users/YOUR_NAME/Library/Preferences/
  4. Back up the com.microsoft.keyboard.pref file (e.g. rename it to com.microsoft.keyboard.pref.old)
  5. Move the modified pref file into that folder
  6. Open System Preferences, and open the Microsoft Keyboard preference panel (this causes the pref file to be reloaded)
If you're interested to see how the file was modified (or want to map the Zoom slider to something other than UP/DOWN), keep reading...

How to reverse-engineer the preferences: 
Since com.microsoft.keyboard.pref is a binary file, opening it with TextEdit or vim isn't going to be very useful. Instead, take a hex dump of the original configuration:

cd /Users/YOUR_NAME/Library/Preferences
xxd com.microsoft.keyboard.pref > prefOrig.hex

Now we'll make some key mapping changes and see what parts of the pref file change. I changed the Open and Close buttons to do nothing.

Now we'll take another hex dump and compare the two:

xxd com.microsoft.keyboard.pref > prefMod.hex
diff prefOrig.hex prefMod.hex

which outputs:

182,183c182,183
< 0000b50: 0000 0000 0000 0000 0000 0000 0000 5400
< 0000b60: 0000 0000 0000 0000 0000 0000 0000 0000
---
> 0000b50: 0000 0000 0000 0000 0000 0000 0000 0000
> 0000b60: 0000 ff00 0000 0000 0000 0000 0000 0000
185c185
< 0000b80: 0000 0000 0000 5500 0000 0000 0000 0000
---
> 0000b80: 0000 0000 0000 0000 0000 ff00 0000 0000

Notice the changes:
  1. At 0x0000b5e, value 0x5400000000 becomes 0x00000000ff
  2. At 0x0000b86, value 0x5500000000 becomes 0x00000000ff
It looks like the 0xff signifies "None" mode, whereas the 0x54 and 0x55 probably specified the Open and Close functions that used to be there.

You can play around with this technique to figure out how the byte values change with different mappings. For example, let's make Open and Close map to zooming in and zooming out. The pref file diff now looks like:

182,183c182,183
< 0000b50: 0000 0000 0000 0000 0000 0000 0000 5400
< 0000b60: 0000 0000 0000 0000 0000 0000 0000 0000
---
> 0000b50: 0000 0000 0000 0000 0000 0000 0000 0800
> 0000b60: 0000 ff00 0000 0000 0000 0000 0000 0000
185c185
< 0000b80: 0000 0000 0000 5500 0000 0000 0000 0000
---
> 0000b80: 0000 0000 0000 0900 0000 ff00 0000 0000

The changes:
  1. At 0x0000b5e, value 0x5400000000 changes to 0x08000000ff
  2. At 0x0000b86, value 0x5500000000 changes to 0x09000000ff

The next issue is figuring out which bytes correspond to the Zoom slider's key mappings. The only controls the GUI provides are enable/disable, zoom speed, and zoom acceleration, so we can mess with those.



If you toggle "Enable zooming" you get a hex diff that looks like:

82,83c82,83
< 0000510: 0000 0000 0000 0000 0000 0000 0000 0800
< 0000520: 0000 0000 0000 0000 0000 0000 0000 0000
---
> 0000510: 0000 0000 0000 0000 0000 0000 0000 0000
> 0000520: 0000 ff00 0000 0000 0000 0000 0000 0000
85c85
< 0000540: 0000 0000 0000 0900 0000 0000 0000 0000
---
> 0000540: 0000 0000 0000 0000 0000 ff00 0000 0000

Notice the similarities to the diff when we changed Open/Close to None:
  1. At 0x000051e, the value 0x0800000000 changes to 0x00000000ff
  2. At 0x0000546, the value 0x0900000000 changes to 0x00000000ff
So it looks like disabling zooming is actually just switching 2 key mappings to None (0x00000000ff) - and now we know where the key mappings for the Zoom buttons are located (0x51e and 0x546)!
Again, we can play around with some other key mappings to figure out the byte values that map keys to the UP and DOWN actions (0x030000007e and 0x030000007d). Now, just apply these values to the prefOrig.hex file at the addresses we found for the Zoom slider's mappings (0x51e and 0x546):

0000510: 0000 0000 0000 0000 0000 0000 0000 0300
0000520: 0000 7e00 0000 0000 0000 0000 0000 0000
0000530: 0000 0000 0000 0000 0000 0000 0000 0000
0000540: 0000 0000 0000 0300 0000 7d00 0000 0000

(If you want to make your Zoom slider do something other than UP/DOWN, you can replace 0x030000007e and 0x030000007d with different key mappings)

Finally, convert the modified hex dump back into a binary preference file:
xxd -r prefOrig.hex > com.microsoft.keyboard.pref

Now open the Microsoft Keyboard settings panel within System Preferences to get the driver to reload the pref file, and you're all done!

Wednesday, February 15, 2012

Running 6.270 Robotics Competition

This past January I organized and ran MIT's 26th annual 6.270 Autonomous LEGO Robotics Competition. Basically, groups of 2 or 3 students are given a box with LEGO, a microcontroller, motors, and sensors, and they have just 3 weeks to put together and program a fully autonomous robot to compete in a game.

Here's the first part of the final competition video:


Part 2 is on MIT TechTV


This year's game was about capturing territories and gathering resources, on a hexagonal playing field:
The organizing team with the playing field


Robots had to spin a gearbox to capture a territory:



Then they could collect ping pong balls by pulling a lever:



And then dump them on their half of the center:



One of the robots actually shot the ping pong balls into the center:



But the really cool part is that 6.270 is entirely student-run - the organizing team that I led had about 8 core members who took care of everything from ordering LEGO, motors, and electronics to developing lectures and labs that teach the basics of autonomous systems to the ~60 students enrolled in the class. We also designed the game itself (playing field, scoring, etc.) and interacted with companies to get sponsorship for the competition.

We were really ambitious this year, making a hexagonal playing field rather than the usual rectangular one. One of the organizers was able to CNC-cut the plywood:

I did all the electronics in the playing field - programming a microcontroller to read the quadrature encoders on all 6 of the gearboxes, sense the breakbeams attached to each lever, and control the 6 servos that dispensed the ping pong balls. This interfaced with the "vision positioning system" computer, which wirelessly transmitted each robot's location along with the score, who owned each territory, and how many balls remained in each territory - information the robots could use to make decisions.
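
To illustrate the encoder-reading part: the usual trick is to interrupt on one quadrature channel and sample the other to get direction. This is a generic AVR-style sketch, not the actual table firmware - the pin and register assignments are made up:

#include <avr/io.h>
#include <avr/interrupt.h>

volatile int16_t encoder_count = 0;  // updated in the ISR, read by the main loop

// Pin-change interrupt fires on every edge of channel A (PB0, assumed wiring).
ISR(PCINT0_vect) {
    uint8_t a = (PINB >> 0) & 1;  // channel A
    uint8_t b = (PINB >> 1) & 1;  // channel B (PB1, assumed wiring)
    // On an A edge: A != B means one direction, A == B means the other.
    if (a != b) encoder_count++;
    else        encoder_count--;
}

void encoder_init(void) {
    PCICR  |= (1 << PCIE0);   // enable pin-change interrupts for PORTB
    PCMSK0 |= (1 << PCINT0);  // watch channel A only (half resolution)
    sei();
}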

We also integrated LED lighting in the table to indicate which team owned each territory (using my friend Joe's ACRIS LED controllers):


When we weren't giving lectures or helping contestants with their robots, the organizers had time to build a robot of our own. One of the cooler robot drive systems is omni-drive - each wheel is actually a compound wheel, made out of a bunch of smaller wheels around its circumference. The smaller wheels are placed perpendicular to the drive direction of the large "wheel" so that it can roll sideways. This allows the robot to strafe in any direction and rotate in place (or both simultaneously). Here's one of the LEGO omni-wheels I designed:

And here's a video of the omnibot in action:




Running 6.270 was really fun and rewarding - I got experience lecturing to a large group of students, learned about the HappyBoard's embedded software and hardware, did some basic PCB layout work on the HappyBoard, taught myself OpenCV to create the vision positioning system, used my 6.111 knowledge to do some FPGA programming, and interacted with reps from Apple, Oracle, Dropbox, and more!


(My girlfriend made me a LEGO cake after the final competition was over!)



That's all for now. You should watch the final competition video - there were some really amazing robots this year!

Tuesday, May 17, 2011

Robots and a Kinect

As part of MIT's 6.141 robotics course, we were challenged in teams to create autonomous robots that could navigate a space while collecting blocks and ultimately deploy those blocks to form some sort of structure (see: background and details).

The approach that my team took for the grand challenge was an ambitious one: create a fleet of diversified but simple robots that cooperate to gather and stack blocks. These “worker” robots are meant to be extremely simple remote-control vehicles that are commanded by a sensory “mothership” robot. The primary motivation was to develop a system that could parallelize tasks and capitalize on the agility of using smaller robots (for example, improved maneuverability in tight spaces).

Our original design consisted of three “worker” robots: an agile gatherer that could grasp and carry a block, a dump truck that could carry multiple blocks, and a slow-but-precise stacker robot that could create block towers up to six blocks tall. The worker robots have no sensors of their own other than a gyroscope to track their heading - this allows us to command translational velocities and headings. Although we built all three worker robots, we only had time to get the gatherer and dump-truck cooperating in time for the challenge.







The gatherer worker.


The dump-truck worker.


The stacker worker.

We built the worker robots using LEGO and the HappyBoard microcontroller platform from the 6.270 robotics course/competition, and used a wireless link to allow the mothership to remotely control them.

The next step was tracking the worker robots from the mothership - we used a Microsoft Kinect, which provides an RGB video feed along with a corresponding depth map (it's a pretty popular robotics tool these days). To identify the workers, I modified the robot-tracking system I co-authored for 6.270 (github repo), which looks for a type of 2D barcode on top of each robot (I previously blogged about this system). When one of these patterns is located in the RGB video feed, the software looks up the corresponding depth-map coordinates of the 4 corners of the pattern. The depth at those coordinates can be transformed into real <x,y,z> space coordinates to figure out where the worker is in relation to the mothership.
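
The depth-to-world transform is just the pinhole camera model run backwards. Here's a minimal sketch - the intrinsics below are commonly cited ballpark values for the Kinect depth camera, not our calibrated ones:

typedef struct { float x, y, z; } Point3f;

// Back-project depth pixel (u, v) with depth z (meters) into camera space.
static Point3f depth_to_world(int u, int v, float z) {
    const float fx = 594.2f, fy = 591.0f;  // focal lengths in pixels (approx.)
    const float cx = 339.5f, cy = 242.7f;  // principal point (approx.)
    Point3f p;
    p.x = (u - cx) * z / fx;
    p.y = (v - cy) * z / fy;
    p.z = z;
    return p;
}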



The robot tracker has identified the pattern and labelled the robot #1.



The colorized depth map.



The depth map (uncolored) - note the four white circles that indicate the depth probe points for tracking the robot’s true <x,y,z> world coordinates - these correspond with the corners of the fiducial as seen in the RGB image above.



We also use the RGB video feed to identify blocks by filtering the hue, saturation, and brightness values and identifying connected components. Once a block is found, we probe the depth-map to determine the block’s true <x,y,z> coordinates.
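
A rough sketch of that block detector using OpenCV's C API (the HSV thresholds are placeholders - the real values depend on block color and lighting):

#include <opencv/cv.h>

// Threshold a BGR frame in HSV space and return candidate block contours.
CvSeq *find_blocks(IplImage *bgr, CvMemStorage *storage) {
    IplImage *hsv  = cvCreateImage(cvGetSize(bgr), IPL_DEPTH_8U, 3);
    IplImage *mask = cvCreateImage(cvGetSize(bgr), IPL_DEPTH_8U, 1);
    cvCvtColor(bgr, hsv, CV_BGR2HSV);
    // Keep pixels whose hue/saturation/value fall within the target range
    cvInRangeS(hsv, cvScalar(100, 150, 80, 0), cvScalar(130, 255, 255, 0), mask);

    CvSeq *contours = NULL;  // each contour is a connected component
    cvFindContours(mask, storage, &contours, sizeof(CvContour),
                   CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    cvReleaseImage(&hsv);
    cvReleaseImage(&mask);
    return contours;
}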

In order to move the mothership with both the gatherer and the dump-truck, the path-planning software assigns the path to both the workers and the mothership. The workers' paths are offset slightly so that the two robots drive side-by-side, rather than attempting to reach the exact same endpoint and crashing into each other. As long as the workers are in view, the mothership will command them to move toward their next waypoint; otherwise it will command them to stay in place - this prevents the workers from wandering aimlessly if they get too far ahead of the mothership.

To aid with localization, the robot detects walls using the Kinect: it takes a slice of the Z-space between ~20cm and 30cm above the ground and finds walls by looking at the <x,y> coordinates of all points in the point cloud within that slice. One of our team members implemented a particle filter that updates the odometry based on the wall-detection data compared to a known map.
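
The slice filter itself is simple - this sketch reuses the Point3f struct from above, and assumes the points have already been transformed into a frame where z is height above the floor (types and names are illustrative):

typedef struct { float x, y; } Point2f;

// Collect the (x, y) positions of all points within the wall-detection slice.
static int extract_wall_points(const Point3f *cloud, int n,
                               Point2f *out, int max_out) {
    int count = 0;
    for (int i = 0; i < n && count < max_out; i++) {
        if (cloud[i].z > 0.20f && cloud[i].z < 0.30f) {  // ~20-30cm up
            out[count].x = cloud[i].x;
            out[count].y = cloud[i].y;
            count++;
        }
    }
    return count;
}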

Since the worker robots don't have sensors, the gatherer can't tell if it has successfully grasped a block. To deal with this, the gatherer will turn toward the mothership - the mothership can then visually verify whether it is holding a block or not before telling it to place the block on the dump-truck.

In the end, our robot swarm was able to drive along a series of waypoints, collecting blocks along the path and placing them onto the dump-truck. The final system can be seen in action below:




Videos of other teams' robots can be seen here: http://www.csail.mit.edu/node/1529