Robot tracking experiments

As an organizer of MIT's 6.270 Autonomous Robot Design Competition, I've been working on an improved vision-based system for tracking the contestants' robots on the playing field.

(Sidenote: I competed in 6.270 last January, and at some point I'll write a whole post or two about my experience. To sum it up, I had an awesome time competing, which is why I'm now an organizer of the competition.)

The basic concept is that we wirelessly feed each robot its coordinates throughout the round, acting like a local GPS to help the robots navigate. However, this isn't as easy as it sounds.

Our approach is to mount an overhead camera facing down at the playing field and then analyze the video to find special "fiducial" patterns on the robots. This isn't too hard in a controlled environment, but it gets tricky when the system has to ignore other objects on the field or when pieces of a robot go flying after it slams into a wall (which happens quite often during testing!).

The fiducial patterns I came up with look like this:

[Image: an example fiducial marker]

Essentially it's a black square with one white corner, plus a few bits of information encoded in the center.

To track these, each frame of video is run through a square-detection algorithm. Then, for each candidate square, each of the four corners is checked to find the reference corner - at that point the software knows both where the markers are and which way they're facing.
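
To make those two steps concrete, here's a rough sketch of how they might be implemented with OpenCV's C API. The function names, thresholds, and sampling offsets below are my own illustration rather than the exact values from the competition tracker; the contour-based detection is modeled on OpenCV's well-known squares.c sample.

    /* Sketch: detect 4-sided convex contours ("squares") in a grayscale
     * frame, then find the white reference corner of a detected square.
     * Thresholds and offsets are illustrative, not the tracker's own. */
    #include <opencv/cv.h>
    #include <math.h>

    /* Returns a sequence of CvPoints, 4 per detected square. */
    CvSeq* find_squares(IplImage* gray, CvMemStorage* storage)
    {
        CvSeq* squares = cvCreateSeq(0, sizeof(CvSeq), sizeof(CvPoint), storage);
        CvSeq* contours = NULL;
        IplImage* binary = cvCloneImage(gray);

        /* Invert while thresholding so the dark squares become blobs. */
        cvThreshold(gray, binary, 128, 255, CV_THRESH_BINARY_INV);
        cvFindContours(binary, storage, &contours, sizeof(CvContour),
                       CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));

        for (; contours != NULL; contours = contours->h_next) {
            /* A square's outline should simplify to exactly 4 vertices. */
            CvSeq* poly = cvApproxPoly(contours, sizeof(CvContour), storage,
                                       CV_POLY_APPROX_DP,
                                       cvContourPerimeter(contours) * 0.02, 0);
            CvRect r = cvBoundingRect(poly, 0);
            if (poly->total == 4 && r.width > 20 && r.height > 20 &&
                cvCheckContourConvexity(poly)) {
                int i;
                for (i = 0; i < 4; i++)
                    cvSeqPush(squares, (CvPoint*)cvGetSeqElem(poly, i));
            }
        }
        cvReleaseImage(&binary);
        return squares;
    }

    /* Given a square's 4 corners, return the index of the corner whose
     * neighborhood is brightest - that's the white reference corner. */
    int find_reference_corner(IplImage* gray, CvPoint c[4])
    {
        float cx = (c[0].x + c[1].x + c[2].x + c[3].x) / 4.0f;
        float cy = (c[0].y + c[1].y + c[2].y + c[3].y) / 4.0f;
        int i, best = 0;
        double best_val = -1.0;
        for (i = 0; i < 4; i++) {
            /* Sample 20% of the way from each corner toward the center. */
            int x = (int)(c[i].x + 0.2f * (cx - c[i].x));
            int y = (int)(c[i].y + 0.2f * (cy - c[i].y));
            CvScalar s = cvGet2D(gray, y, x);
            if (s.val[0] > best_val) { best_val = s.val[0]; best = i; }
        }
        return best;
    }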

Finally, it checks the inside of each square to see which sections are white and which are black - these sections act as bits that encode the team number, letting us distinguish multiple robots.
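
Continuing the sketch: once the corners are ordered with the reference corner first, each data cell's center can be located by interpolating between the corners and then sampled for brightness. The 2x2 grid and the helper names here are assumptions for illustration - the real markers may lay out their bits differently.

    /* Map (u,v) in [0,1]^2 to a pixel inside the quad by bilinear
     * interpolation between its 4 corners (ordered around the square). */
    static CvPoint quad_point(CvPoint c[4], float u, float v)
    {
        CvPoint p;
        p.x = (int)((1-u)*(1-v)*c[0].x + u*(1-v)*c[1].x +
                    u*v*c[2].x + (1-u)*v*c[3].x);
        p.y = (int)((1-u)*(1-v)*c[0].y + u*(1-v)*c[1].y +
                    u*v*c[2].y + (1-u)*v*c[3].y);
        return p;
    }

    /* Sample the center of each cell in an assumed 2x2 grid and pack
     * the results into an integer: white cell = 1, black cell = 0. */
    int decode_bits(IplImage* gray, CvPoint corners[4])
    {
        int bits = 0, row, col;
        for (row = 0; row < 2; row++) {
            for (col = 0; col < 2; col++) {
                /* Cell centers sit 1/4 and 3/4 of the way across. */
                CvPoint p = quad_point(corners,
                                       0.25f + 0.5f * col,
                                       0.25f + 0.5f * row);
                CvScalar s = cvGet2D(gray, p.y, p.x);
                bits = (bits << 1) | (s.val[0] > 128);
            }
        }
        return bits; /* e.g. interpreted as the team number */
    }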

So far I've put together a simple proof of concept that can track and identify a bunch of these fiducial markers in real time:

[Image: real-time tracking demo with labeled markers among distractor objects]

You can see how it has ignored all of the distractions placed in the view and labeled each of the markers it finds. It places a large red dot in the corner that has the reference mark so you can see the perceived orientation.

The software uses the OpenCV library for video capture and processing. The C code is available on GitHub under the MIT license: http://github.com/scottbez1/6.270/tree/master//vision/ It should be updated fairly often as I flesh the system out in time for this January's competition.
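
For the curious, the skeleton of such a program is a simple capture loop. Here's a minimal version in the same C API; the camera index, window name, and the hooks into the earlier sketches are illustrative:

    #include <opencv/cv.h>
    #include <opencv/highgui.h>

    int main(void)
    {
        CvCapture* capture = cvCaptureFromCAM(0);    /* first attached camera */
        CvMemStorage* storage = cvCreateMemStorage(0);
        if (!capture) return 1;
        cvNamedWindow("tracker", CV_WINDOW_AUTOSIZE);

        while (1) {
            IplImage* frame = cvQueryFrame(capture); /* owned by capture */
            if (!frame) break;

            /* Grayscale copy for the detection/decoding steps above. */
            IplImage* gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
            cvCvtColor(frame, gray, CV_BGR2GRAY);
            /* ... find_squares(gray, storage), decode bits, draw labels ... */

            cvShowImage("tracker", frame);
            cvReleaseImage(&gray);
            cvClearMemStorage(storage);
            if (cvWaitKey(10) == 27) break;          /* Esc quits */
        }
        cvReleaseCapture(&capture);
        return 0;
    }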

Hopefully this was an interesting peek at what's going on behind the scenes to get the competition ready for January. If you go to MIT you should definitely consider competing, and if you live in the area, come check out the final competition at the end of January!
