Friday, April 22, 2016

Using UI automation to export KiCad schematics

This is my third post in a series about the open source split-flap display I’ve been designing in my free time. I’ll hopefully write a bit more about the overall design process in the future, but for now I wanted to start with some fairly technical posts about build automation on that project.

Since I’ve been designing the split-flap display as an open source project, I wanted to make sure that all of the different components were easily accessible and visible for someone new or just browsing the project. Today’s post continues the series on automatically rendering images to include in the project’s README, but this time we go beyond simple programmatic bindings to get what we want: the schematic!

"Wow, I bet someone had to manually click through the GUI to 
export such a beautiful schematic!" Nope.

Unfortunately, KiCad’s schematic editor, Eeschema, doesn’t have nice Python bindings like its pcb-editing cousin Pcbnew (and probably won’t for quite some time). And there aren’t really any command line arguments to do this either. So we turn to the last resort: UI automation. That is, simulating interaction with the graphical user interface.

There are two main issues with automating the graphical user interface: the build system (Travis CI) is running on a headless machine with no display, and the script needs to somehow know where to click on screen.

As I mentioned in my last post, we can use X Virtual Framebuffer (Xvfb), which acts as a virtual display server, to solve the first problem. As long as Xvfb is running, we can launch Eeschema even when there’s no physical screen. This time, instead of using `xvfb-run` from a Bash script, I decided to use the xvfbwrapper Python library for additional flexibility. xvfbwrapper provides a Python context manager so you can easily run an Xvfb server while some other code executes.

from xvfbwrapper import Xvfb
with Xvfb(width=800, height=600, colordepth=24):
    # Everything within this block now has access
    # to an 800x600 24-bit virtual display
    do_something_that_needs_a_display()


So how do we actually script and automate interactions with the GUI, such as opening menus, typing text, and clicking buttons? I looked into a number of different approaches, such as Sikuli, which lets you write high-level “visual code” using screenshots and image matching, or Java’s Robot class, which lets you control the mouse and keyboard from Java, but the easiest option I found by far was the command-line program xdotool.

With xdotool, you can easily probe and interact with the window system from the command line. For instance, you can output a list of all named windows by running:
xdotool search --name '.+' getwindowname %@

(This is an example of a chained command: the first part (search --name '.+') finds all windows whose name matches the regular expression ‘.+’ (any non-empty string) and places those window ids onto a stack. The second part runs the command getwindowname, with the argument %@ meaning “all window ids currently on the stack.”)

Going back to Eeschema, the option we want to automate (exporting the schematic) lives under the File → Plot → Plot menu. The trick to automating this is not to use the mouse to click (since then we’d need to know the coordinates on screen) but instead use keyboard shortcuts. Opening that menu from the keyboard just requires pressing “Alt+F” then “P” then “P”, which we can automate like this:

# First find and then focus the Eeschema window
xdotool search --onlyvisible --class eeschema windowfocus
# Send keystrokes to navigate the menus
xdotool key alt+f p p



We can similarly write commands to fill out the correct information in the “Plot Schematic” dialog once it opens. To change radio button selections, we can press “Tab” numerous times to move focus through the various options. This is a bit fragile, since it relies on the options staying in the same order (it might break if KiCad were to add a new Page Size option, for instance), but it’s about the best we can do without using more complex UI automation tools.
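As a rough sketch in Python (the key sequence here is a placeholder, since the number of Tab presses depends on the dialog layout in your KiCad version, and the xdotool helper is just a thin subprocess wrapper like the one in export_util.py linked below):

import subprocess

def xdotool(args):
    # Run a single xdotool command
    subprocess.check_call(['xdotool'] + args)

# Hypothetical sequence: walk focus through the dialog and pick options
xdotool(['key', 'Tab', 'Tab', 'Tab'])  # move focus; the count is dialog-dependent
xdotool(['key', 'space'])              # select the focused radio button
xdotool(['key', 'Return'])             # activate the default (Plot) button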


To make it easier to debug what’s happening in the X virtual display, we can use a screen-recording tool like recordmydesktop to save a screencast of the graphical automation. This is particularly helpful when running on Travis where you can’t actually see what’s going on as the script runs.

Since we’re writing in Python, we can use some syntactic sugar with Python context managers to make it really easy to wrap a section of code with Xvfb and video recording. As a first step, we’ll need a context manager for running a subprocess:

import subprocess

class PopenContext(subprocess.Popen):
    # Wraps Popen as a context manager: close the pipes and reap the
    # child on exit (Popen only gained this natively in Python 3.2)
    def __enter__(self):
        return self
    def __exit__(self, type, value, traceback):
        if self.stdout:
            self.stdout.close()
        if self.stderr:
            self.stderr.close()
        if self.stdin:
            self.stdin.close()
        if type:
            self.terminate()
        self.wait()


and then we can create a helper that combines an Xvfb context and a recordmydesktop subprocess into a single context manager:

from contextlib import contextmanager

@contextmanager
def recorded_xvfb(video_filename, **xvfb_args):
    with Xvfb(**xvfb_args):
        with PopenContext([
                'recordmydesktop',
                '--no-sound',
                '--no-frame',
                '--on-the-fly-encoding',
                '-o', video_filename], close_fds=True) as screencast_proc:
            yield
            # Stop recordmydesktop cleanly so it finishes writing the video
            screencast_proc.terminate()



You can use that helper like so:
with recorded_xvfb('output_video.ogv', width=800, height=600, colordepth=24):
    # This code runs with an Xvfb display available
    # and is recorded to output_video.ogv
    do_something_that_needs_a_display()

# Once the 'with' block exits, the X virtual display is
# no longer available, and the recording has stopped
run_non_recorded_things()



So, putting all of those elements together, we can use Xvfb to host the Eeschema GUI (even on a headless build machine), run recordmydesktop to save a video screencast to help understand and debug the visual interactions, and use xdotool to simulate key presses in order to click through Eeschema’s menus and dialogs. The code looks roughly like this:

with recorded_xvfb('output.ogv', width=800, height=600, colordepth=24):
    with PopenContext(['eeschema', 'splitflap.sch']) as eeschema_proc:
        wait_for_window('eeschema', ['--onlyvisible', '--class', 'eeschema'])
        # Focus main eeschema window
        xdotool(['search', '--onlyvisible', '--class', 'eeschema', 'windowfocus'])
        # Open File->Plot->Plot
        xdotool(['key', 'alt+f', 'p', 'p'])
        wait_for_window('plot', ['--name', 'Plot'])
        xdotool(['search', '--name', 'Plot', 'windowfocus'])

        [...]

        eeschema_proc.terminate()



This is what one of those recordings looks like:


You can find the full scripts in the github repo, particularly in these two files:
/electronics/scripts/export_util.py
/electronics/scripts/export_schematic.py

I also used a similar technique to export the component list .xml file (Tools → Generate Bill of Materials), which is then transformed into a .csv bill of materials:
/electronics/scripts/export_bom.py
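The .xml → .csv step itself doesn’t need any UI automation, since the netlist is plain XML. Here’s a minimal sketch of that kind of transformation (file names hypothetical, and the real export_bom.py is more involved), assuming the standard eeschema netlist structure of <comp> elements under <components>:

import csv
import xml.etree.ElementTree as ET

tree = ET.parse('splitflap.xml')
with open('bom.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['Reference', 'Value', 'Footprint'])
    for comp in tree.getroot().iter('comp'):
        # Each <comp ref="..."> holds <value> and <footprint> children
        writer.writerow([
            comp.get('ref'),
            comp.findtext('value', ''),
            comp.findtext('footprint', ''),
        ])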

Hopefully this was a useful overview of how I used UI automation to export schematics from KiCad. If you have questions, leave a comment here or open an issue on github and I’ll try to respond. In my next post in this series I’ll switch gears a bit and talk about how I programmatically generate the OpenSCAD 3d animation you see at the top of the project’s README!

Sunday, April 17, 2016

Automated KiCad, OpenSCAD rendering using Travis CI

This is my second post in a series about the open source split-flap display I’ve been designing in my free time. I’ll hopefully write a bit more about the overall design process in the future, but for now I wanted to start with some fairly technical posts about build automation on that project.

In my last post, I discussed how I scripted the export of 2d renderings of the custom PCB. In this post, I’ll cover how I hooked up that script and others to run automatically on every commit using Travis CI, with automated deployments to S3 to keep all the renderings in the README updated, like this one:
I'll talk about this particular animated OpenSCAD rendering in a future blog post

Why Travis?

Travis CI is a continuous build and test system, with Github integration and a free tier for open source projects. If you’ve ever seen one of these badges in a Github README, it’s probably using Travis:

That's the current build status; hopefully it's green!
The best thing about Travis though is that unlike many build systems (like Jenkins or Buildbot), nearly the entire build system configuration for Travis lives directly inside the repo itself (in a .travis.yml file). This has a few major advantages:

Reproducible (or at least reasonably well defined) build environment
Each Travis build starts off as a clean slate, and you’re responsible for defining and installing any extra dependencies on the machine yourself through code. This way you always end up with clearly documented dependencies, and that documentation can never go stale!

Enables different build/test configurations on each branch
One big problem with keeping your code separate from the build configuration (as is often the case with tools like Jenkins/Buildbot) is that the two need to stay in sync. Typically this is not a huge problem for slow, linear development, since occasional lock-step updates across repo and build system aren’t too painful.

The issues start when you have faster development with frequently changing build configurations or parallel development across branches. Now not only do you have to keep your build configuration in sync with changes in the source repo, but you also have to make it branch-aware and keep each branch’s build config in sync with the branches in the source repo! Travis avoids all of this because the .travis.yml file is naturally versioned alongside the source it’s building, and therefore just works in branches with no extra effort!

Build configuration changes can be tested!
Related to the previous point — since the .travis.yml file is checked in and versioned with the source code, changes to the source code that e.g. require new packages to be installed in the build environment can actually be fully tested as part of a feature branch or pull request before landing in `master`.

Travis with KiCad and OpenSCAD

The first step to automating my build was to install the right packages. The basic .travis.yml config looks like this:

    dist: trusty
    sudo: true
    language: generic
    install:
    - ./3d/scripts/dependencies.sh
    - ./electronics/scripts/dependencies.sh


Both KiCad (schematic/pcb software) and OpenSCAD (3d cad software) are under fairly active development, and their packages in the Ubuntu 14.04 repositories are woefully out of date, so I use snapshot PPAs to install more modern versions of each (this necessitates `sudo: true` above, which allows running `add-apt-repository` under sudo).

Each of the install scripts referenced above is pretty straightforward and looks roughly like this:

    #!/bin/bash
    set -ev
 
    sudo add-apt-repository --yes ppa:js-reynaud/kicad-4
    sudo apt-get update -qq
    sudo DEBIAN_FRONTEND=noninteractive apt-get install -y kicad inkscape imagemagick


The .travis.yml configuration for actually running the PCB export script and OpenSCAD rendering scripts as the main build steps is likewise pretty simple:

    # [... other stuff above ...]
    script:
    - (cd electronics && python -u generate_svg.py)
    - (cd 3d && xvfb-run --auto-servernum --server-args "-screen 0 1024x768x24" python -u generate_2d.py)
    - (cd 3d && xvfb-run --auto-servernum --server-args "-screen 0 1024x768x24" python -u generate_gif.py)


The only interesting part of that is the use of `xvfb-run`. Getting OpenSCAD exports working is slightly trickier than KiCad, since even OpenSCAD’s command-line interface requires a graphical environment to render images. The trick to make this work on a headless build machine is to use X virtual framebuffer (Xvfb), which lets you run a standalone X server detached from an actual display. So in the config above, I use the `xvfb-run` utility, which starts an Xvfb server, sets up the DISPLAY environment, runs the specified command, and then shuts everything down when the command completes; easy! (I’ll discuss the actual `generate_2d.py` and `generate_gif.py` script implementations in a future post)
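As a teaser, the heart of such a script is little more than a single OpenSCAD invocation. A minimal sketch (file names hypothetical):

    import subprocess

    # Render a .scad model to a PNG. OpenSCAD needs a working OpenGL
    # context even from the command line, hence xvfb-run on Travis.
    subprocess.check_call([
        'openscad',
        '-o', 'render.png',       # output image
        '--imgsize=1024,768',     # output resolution in pixels
        'splitflap.scad',
    ])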

From Travis to the README

Now that we’ve got Travis set up installing KiCad and OpenSCAD and exporting images from each on every commit, the next step is to actually get those renderings off the build machine and somewhere useful. To do that, I use Travis’s deploy tool to upload those build artifacts to S3.

The configuration is again pretty simple. Here’s what it takes to upload the entire “deploy” directory on the build machine to a publicly-readable directory named “latest” in my “splitflap-travis” S3 bucket:

    # [... other stuff above ...]
    deploy:
      provider: s3
      access_key_id: AKIAJY6VAINVQICEC47Q
      secret_access_key:
        secure: SYHsDA3WZfV6YlZ... [truncated for your viewing pleasure]
      bucket: splitflap-travis
      local-dir: deploy
      upload-dir: latest
      skip_cleanup: true
      acl: public_read
      cache_control: no-cache
      on:
        repo: scottbez1/splitflap
        branch: master


Since the .travis.yml file is checked into the repo and public, putting your actual S3 credentials inside would be silly! But Travis allows you to encrypt your credentials using a secret that only their build machines know, so everything’s nice and secure despite being public.
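For example, with the Travis command line client installed, you can encrypt the S3 secret and append it to .travis.yml in one step (this is the standard Travis CLI workflow; it reads the secret from stdin):

    $ travis encrypt --add deploy.secret_access_key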

This lets me embed the latest 2d laser-cut rendering in the README file by referencing https://s3.amazonaws.com/splitflap-travis/latest/3d_laser_raster.png.
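In the README that’s just an ordinary Markdown image reference pointing at the S3 URL, something like:

    ![Latest laser-cut rendering](https://s3.amazonaws.com/splitflap-travis/latest/3d_laser_raster.png)

Here’s what the current rendering looks like, by the way: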



One thing you may notice is the black bar at the bottom with the date and commit hash. I added that because Github’s image proxy caches extremely aggressively and I originally didn’t include the `cache_control: no-cache` line in my deployment config, so I needed some way to debug. It was pretty easy to add using ImageMagick, and now I can easily tell that the images in my README are showing the latest designs correctly:


    #!/bin/bash
    set -e
    LABEL="`date --rfc-3339=seconds`\n`git rev-parse --short HEAD`"
    convert -background black -fill white -pointsize 12 label:"$LABEL" -bordercolor black -border 3 input_image.png +swap -append output_image.png

(slight adaptation from the full script: annotate_image.sh)

If you do find yourself stuck with cached images on Github, you can manually evict them from the cache using an http PURGE request to the image url:
`$ curl -X PURGE https://camo.githubusercontent.com/xxxxxxxxxxxxx`

If you want to poke around the actual Travis configuration I’ve discussed above, here are some links to the real files:
/.travis.yml
/3d/scripts/dependencies.sh
/electronics/scripts/dependencies.sh
/scripts/annotate_image.sh

In my next post I’ll cover how I used `Xvfb`, `xdotool`, and `recordmydesktop` to automatically export the KiCad schematic and bill of materials, which are only exposed through the GUI!

Saturday, April 16, 2016

Scripting KiCad Pcbnew exports

For the past few months I’ve been designing an open source split-flap display in my free time — the kind of retro electromechanical display that used to be in airports and train stations before LEDs and LCDs took over, and that makes that distinctive “tick tick tick tick” sound as the letters and numbers flip into place.

I designed the electronics in KiCad, and one of the things I wanted to do was include a nice picture of the current state of the custom PCB design in the project’s README file. Of course, I could generate a snapshot of the PCB manually whenever I made a change by using the “File→Export SVG file” menu option and then check that image into my git repo…


…but that gets tedious, is prone to human error, pollutes the git history with a bunch of old binary files, and isn’t very customizable.

For instance, the manual SVG export uses opaque colors which make it hard to see features that overlap, as well as using two different colors for items on the same layer (yellow and teal are both part of the front silkscreen layer below):

Functional rendering, but not exactly what I wanted.
Luckily, Pcbnew has built-in Python bindings which make it pretty straightforward to invoke certain features from standalone Python scripts. As a simple example, here’s how to plot a single layer to an SVG:

import pcbnew

# Load board and initialize plot controller
board = pcbnew.LoadBoard("splitflap.kicad_pcb")
pc = pcbnew.PLOT_CONTROLLER(board)
po = pc.GetPlotOptions()
po.SetPlotFrameRef(False)

# Set current layer
pc.SetLayer(pcbnew.F_Cu)

# Plot single layer to file
pc.OpenPlotfile("front_copper", pcbnew.PLOT_FORMAT_SVG, "front_copper")
print("Plotting to " + pc.GetPlotFileName())
pc.PlotLayer()
pc.ClosePlot()


As a minor note, there's not much documentation of the Python bindings, but if you search through the KiCad source code you can find the C++ interfaces that are exposed to Python. E.g. above, pcbnew.F_Cu is one of many possible layer constants and pcbnew.PLOT_FORMAT_SVG is one of several different plot formats.

While it’s in theory possible to specify the colors to use when plotting, I ran into issues where certain items were always plotted in their default color. For instance, when I plot the front silkscreen layer with the following options, the footprints are plotted in teal rather than the specified color, red:

pc.SetLayer(pcbnew.F_SilkS)
pc.SetColorMode(True)
po.SetColor(pcbnew.RED)  # <-- NOTE THIS LINE
po.SetReferenceColor(pcbnew.GREEN)
po.SetValueColor(pcbnew.BLUE)


A lot of the silkscreen ended up teal instead of red.

So instead of trying to get Pcbnew to output the exact SVG I wanted, I decided to export each layer as a separate monochrome SVG image and then post-process them to apply colors and merge them into a single output file. Since SVG images are just XML, it was easy to write a script, svg_processor.py, which allowed me to override the “fill” and “stroke” style attributes of the shapes, and then wrap all of the shapes in a <g> group tag to set the desired opacity.

(Note: the reason for wrapping in a group before applying opacity is that things like traces are rendered as a combination of multiple shapes, like a line + circle, so if you applied alpha=0.5 to each shape individually, a single trace would have varying degrees of opacity depending on how its subcomponents overlapped)
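Here’s a stripped-down sketch of that idea using Python’s standard ElementTree (file names and colors hypothetical; the real svg_processor.py handles styles and document structure more carefully):

import xml.etree.ElementTree as ET

SVG_NS = 'http://www.w3.org/2000/svg'
ET.register_namespace('', SVG_NS)

tree = ET.parse('front_copper.svg')
root = tree.getroot()

# Recolor each shape, then move them all into a single <g> so the
# opacity applies to the layer as a whole rather than per shape
group = ET.SubElement(root, '{%s}g' % SVG_NS, {'style': 'opacity:0.5'})
for shape in [el for el in root if el is not group]:
    shape.set('style', 'fill:#CC0000; stroke:#CC0000')
    root.remove(shape)
    group.append(shape)

tree.write('front_copper_red.svg')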

This allowed me to write a simple definition of the PCB layers to export and turn that into a nice, customizable rendering:

layers = [
  {'layer': pcbnew.B_SilkS, 'color': '#CC00CC', 'alpha': 0.8 },
  {'layer': pcbnew.B_Cu, 'color': '#33EE33', 'alpha': 0.5 },
  {'layer': pcbnew.F_Cu, 'color': '#CC0000', 'alpha': 0.5 },
  {'layer': pcbnew.F_SilkS, 'color': '#00CCCC', 'alpha': 0.8},
]

Ooooh, so beautiful!

As a final step after processing and merging, I use Inkscape's command line interface to shrink the .svg canvas to fit the image and convert the vector .svg file into a raster .png image like you see above:

inkscape --export-area-drawing --export-width=320 --export-png output.png --export-background '#FFFFFF' input.svg

The complete script to export .svg and .png renderings of the PCB can be found at https://github.com/scottbez1/splitflap/blob/580a11538d801041cedf59a3c5d1c91b5f56825d/electronics/generate_svg.py

In an upcoming post, I’ll cover how I automated this rendering process on every commit using Travis CI with S3 deployments to keep the image and gerbers referenced in the README always up to date!

Monday, June 11, 2012

Simple USB LED Controller - Part 2

After fixing my pinout mixup from the previous version, my Simple USB LED Controller (SULC) v0.2 works!

Check out Part 1 and Part 1.5 for a bit more background on SULC.  In short, it's a ridiculously simple way to control high-power RGB LEDs from a computer.  You can send commands like "red, blue" or "all green" to control the LEDs, rather than implementing some complex protocol.

The build process for this version was the same as my first prototype - using a laser-cut solder paste stencil and "frying pan" reflow soldering - so I don't have any new pictures to show of that.  However, I do have pictures and video of the new version in action:

(I ran out of TLC5940s, so I decided to make this board with just 2 of them rather than waiting for a shipment to arrive - notice the missing IC in the top right corner)


The video gives a brief overview and shows just how easy it is to control high-power LEDs with SULC:



The full design files (schematic, pcb, firmware, and software) are on github: https://github.com/scottbez1/sulc


Monday, April 2, 2012

Next Make CPW USB Gadget

I just got some PCBs in the mail!  These are the PCBs I designed for Next Make's Campus Preview Weekend (CPW) event later this April.  CPW is when all the MIT admitted students are invited to come check out the campus and see what life at MIT is like.  Generally all the student groups on campus throw fun events for the prefrosh - and Next Make is no exception!

This year, prospective students of the class of 2016 will be able to solder up and take home a cute USB gadget at the Next Make event:





The board plugs into a usb port and pretends to be a usb keyboard - it can then "type" a message into the computer it's plugged into, without having to install any drivers (inspired by an Instructable USB PCB business card that types out a guy's resume).  You can program any message you want into it (up to about 1000 characters).  Here's a video of it in action:




The board is based on the ATTiny45 with V-USB (software USB library) which lets the device show up as a low speed USB device.  If I have some free time, I may program alternate firmware that emulates a USB mouse and sends random mouse movements at random intervals as a prank device like ThinkGeek's Phantom Keystroker.

The PCB designs are on github: https://github.com/scottbez1/nextmake-cpw2012

Looking forward to CPW!

Thursday, March 29, 2012

Simple USB LED Controller - Part 1.5

I've been working on a simple usb led controller (read Part 1), but unfortunately ran into a bit of a snag - it turns out that the surface-mount package of the TLC5940 has different pin assignments than the through-hole version I've used before - even though it has the same number of pins in the same physical arrangement, the pin assignments are shifted over by 7 pins, which means my original PCB designs don't work.  Lesson learned: double check the datasheet!  I've updated the PCB design and sent off v0.2 to have new PCBs made, so now I just have to wait a few weeks for them to arrive.

In the meantime though, I was able to get the LUFA usb library up and running, port the Arduino TLC5940 library to work on the ATMega32U2, and get a good portion of the led controlling firmware written.  In order to test this out, I programmed the controller board I built, but had to use led drivers on a separate breadboard.  It's ugly, but it works:


The goal of SULC is to make controlling high power RGB LEDs really simple, so the firmware I'm writing can parse several different formats to set the colors of the LEDs.  It shows up as a virtual serial device, and you can send simple messages to set all LED colors.  Here are some examples:
  •  "all purple" - set all 5 RGB LEDs to purple
  • "red, green, blue, yellow, teal" - sets the LEDs to different colors
  • ",,red,,yellow" - sets the 3rd LED to red and the 5th to yellow (leaving the others unchanged)
  • "all 50 50 0" - sets all to a dim yellow (using decimal RGB values)
  • "red; 20,0,80; blue; green; 50,50,200" - sets all 5 LEDs using a mix of names and rgb values
I'm planning to add support for hex colors (e.g. "#FF0000" or "#ff0") along with a more efficient binary protocol for programmatically setting the colors quickly (see protocol.txt for details).
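Since SULC shows up as a virtual serial device, sending these commands from a computer is trivial. Here's a hypothetical usage sketch with pyserial (the device path varies by machine, and I'm assuming newline-terminated commands; see protocol.txt for the actual framing):

import serial

# Open SULC's virtual serial port (path and baud are placeholders; a
# CDC serial device typically ignores the baud rate anyway)
sulc = serial.Serial('/dev/ttyACM0', 9600)
sulc.write(b'all purple\n')                      # everything purple
sulc.write(b'red, green, blue, yellow, teal\n')  # one color per LED
sulc.close()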

Getting the microcontroller to be able to parse all ~147 standard web color names was a bit tricky.  The ATMega32U2 only has 1KB of RAM, and a good chunk of that is being used for actually running the program, so there's no room to store a table of color names as a global variable in RAM.  Instead, I used avr-gcc's PROGMEM macro to specify that particular data structures should live in flash program memory instead (Dean Camera wrote a nice tutorial on PROGMEM).  I defined two main data structures: one giant string with every color name concatenated together, along with an array of structs that holds a color's name-length and its rgb values:

typedef struct {
    const uint8_t name_len;
    const uint8_t r;
    const uint8_t g;
    const uint8_t b;
} Color;

const Color colors[] PROGMEM = {
    {3,0,0,0},          //off
    {9,240,248,255},    //aliceblue
    {12,250,235,215},   //antiquewhite
    {4,0,255,255},      //aqua

    ...
};

const char COLOR_NAMES[] PROGMEM = "offaliceblueantiquewhiteaquaaquamarineazurebeigebisqueblackblanchedalmondbluebluevioletbrownburlywoodcadetbluechartreusechocolatecoralcornflowerbluecornsilkcrimsoncyan ..." ;


Reading from PROGMEM structures is a little different than reading normal variables - instead of getting a value with syntax like:

uint8_t len = colors[5].name_len;

you need to use a macro to read a byte from program memory:

uint8_t len = pgm_read_byte(&colors[5].name_len);

The reason for the difference is that program memory and RAM are distinct - so the array colors is a pointer in program memory address space. Indexing into an array the normal way (e.g. colors[5]) would be looking up that address in RAM, which obviously won't work because the data isn't in RAM!  There are also functions for reading a float, word, or dword defined in avr/pgmspace.h.

To interpret a color name, the parser first scans through the colors array looking for a color with the same length, and whenever it finds a color of the right length, it compares the input string buffer to COLOR_NAMES to see if they match.  Of course there are plenty of possible optimizations - using better data structures to make lookups faster, or compression techniques to make the color names take up less space - but it's currently "fast enough" and with 32K of program memory available, size isn't a huge concern right now either.
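For clarity, the scan translates to something like this Python model (the real version is C, reading each byte with pgm_read_byte):

def find_color(name, colors, color_names):
    # colors: list of (name_len, r, g, b); color_names: all names concatenated
    offset = 0
    for (name_len, r, g, b) in colors:
        # Length check first: only string-compare names of matching length
        if name_len == len(name) and color_names[offset:offset + name_len] == name:
            return (r, g, b)
        offset += name_len
    return None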

I'll post another entry once the new PCBs get here (assuming they work this time!).

Sunday, March 25, 2012

Simple USB LED Controller - Part 1

Back when Next Make built the Next House Party Lighting System, we designed the LED controllers to connect on a shared RS-485 network over CAT5 cable.  This was a great solution for that system since the controllers were far apart (RS-485 uses differential signaling so it's pretty robust over longer distances), and we had 24 separate controllers to connect so we wanted to be able to chain them together on a single network.

But if you wanted to set up a smaller scale LED system with just 1 or 2 sets of LEDs, those controllers were a bit overkill - you needed a separate USB->RS-485 converter and then had to string them together with CAT5.  So I set out to design a simpler high power LED controller that had a USB port directly on it (I'm calling it SULC - the Simple USB LED Controller).

Instead of using an FTDI (USB->serial converter IC) along with a microcontroller, I wanted to try out the ATMega8/16/32U2 family of AVRs which has USB support built-in.  Unfortunately there's no through-hole version of those chips, so I had to design a PCB to try them out - my first experience laying out a PCB from scratch.  I used the open source KiCad EDA for the schematic design and pcb layout.  After a weekend of work, I had a PCB ready to send off to production:


I ordered the PCBs from SeeedStudio, which offers an amazing deal: Fusion PCB Service - $10 for 10 boards that are 5cm x 5cm, with 2 layers of copper, soldermask and silkscreen on both sides.  The boards arrived about 2 weeks after I placed the order (mostly shipping time from China), and looked great:


The next step was soldering the components to the board.  Since most of the components were surface mount, I decided to try out "frying pan reflow" - you first spread solder paste on each pad on the PCB, then line up all of the components on top of that, and finally stick it in a frying pan to melt and reflow the solder.  SparkFun has a great article about low-cost reflow soldering.  

But how do you get the solder paste cleanly onto the pads when they're only ~0.01" wide? You can buy solder paste syringes to squirt the paste onto each pad individually, but that seemed like a lot of work with ~150 pads, and tricky to get the right amount onto each pad.  Instead, I used a solder-paste stencil to apply the paste - SparkFun also has a great tutorial on solder paste stencils.  You can order solder paste stencils online from places like Pololu, but to go the full DIY route, I made my own.  I bought 3 mil mylar from McMaster and had my friend laser cut holes in it for the pads.  Here's what the stencil looks like:


A few of the smaller holes didn't get completely cut, so I had to use a pin to clean them up:

(notice the little bits of mylar stuck on the upper side of those holes)

After cleaning up the stencil, it was time to apply the solder paste.  I used the technique described by SparkFun - use other PCBs to hold the one you're working with in place, and spread the paste with a putty knife:

Spreading the paste across the stencil


Unfortunately the solder paste didn't apply very cleanly - probably because I didn't hold the stencil tight enough and because it was warm when I applied it, so the paste was more liquid than I would have liked.  I went ahead and placed each component on top of the solder paste:

Solder paste and components placed

In order to reflow the solder paste, I stuck the PCB inside a rectangular aluminum extrusion and placed that on a small electric stove/hot plate:

PCB placed inside aluminum extrusion to help spread the heat


When reflowing solder, there's a specific heat curve that you're supposed to follow to get it to melt and make good connections.  A number of people have modified toaster ovens with PID control loops to get the temperature to follow a specific curve precisely.  I just used a thermocouple with my multimeter to measure the temperature and used the stove's knob to make adjustments - pretty simple and it worked fine.


Fresh out of the oven!


The reflow process was mostly successful - all of the small discrete components like resistors, capacitors, and LEDs aligned themselves and were soldered in place perfectly.  There were a couple solder bridges though between pins on the TLC5940s and on the ATMega32u2:


A nasty solder bridge on a TLC5940 (top) and a minor one on the ATMega32u2 (bottom)

After a bit of cleanup with solder wick and flux, everything looked good to go.  I soldered up the through-hole components and then came the moment of truth - plugging the board in.  To my surprise, it actually lit up the first time!


And even better than seeing that beautiful blue light was the output of lsusb:

Bus 004 Device 126: ID 03eb:2ff0 Atmel Corp.

Yes!  The board shows up as a USB device (running Atmel's DFU bootloader)!  

I wrote a quick test program and loaded it onto the device over USB using dfu-programmer.  It works!  It flashes each of the 4 debug LEDs on the board:


That's as far as I've gotten so far, but I think it's pretty awesome progress for my first ever custom PCB and first time working with surface mount components.  Next I need to reprogram the fuses on the microcontroller to get the full 16MHz clock speed, and then I can try using LUFA to make the board show up as a USB virtual serial device, and finally I can see if the TLC5940 LED drivers are connected correctly to drive high power LEDs.

The board designs are on github - https://github.com/scottbez1/sulc - although beware that I haven't finished testing the board, so there may be errors still.