
The Smartest R/C Car

Piotr Sokólski

Here’s a short introduction to a hardware project I’ve been recently busy with.

With the imminent dawn of self-driving vehicles, the art and craft of developing the software and hardware remains elusive. It’s no surprise: after all, the domain is still very much in its infancy. Engineering a robust system involves tight cooperation between the most cutting-edge software and (potentially) dangerous and expensive hardware. Therefore, the bar for entry is set very high, and the field is only accessible to companies with vast resources, such as Alphabet’s Waymo.

As a programmer, I would often take for granted the democratization and openness of the field I work in. I’d often overlook the wonder of state-of-the-art technology simply being there in the open on GitHub, a few terminal commands away from running on my desktop.

Fortunately, the field of self-driving vehicles seems to be taking steps in that direction. There are MOOCs and free online materials offered to anyone interested in the topic. However, they still lack a hardware platform where learners can safely and quickly evaluate their ideas; the best we can hope for right now is a simulator. I believe that making an approximation of the technology available to anyone who wishes to try it is possible, just as it was with VR and Google Cardboard.

The platform I’m proposing is not unlike the Cardboard. The principles remain the same: low cost (or made of things you likely already own) and no need for expert knowledge to get started. I’ve been working on a toy car based on a smartphone that is able to reuse the sensors (GPS, compass, accelerometer, light), connectivity and high-res camera available on all modern devices.

Prototype 2

And it takes your phone for a ride!

First Prototype

The first prototype was a proof of concept of phone-to-computer and phone-to-chip communication. I put an iPhone 5S (you can see it falling off at the end of the video…) on a slightly modified Buggy Car kit. I used the mini-jack interface (the audio port) to communicate between the phone and an Arduino Uno that controlled a servo and the DC motor driving the car. A protocol similar to a dial-up modem’s was used to encode and decode messages (and with the speaker enabled it even sounded similar). Then, using the HTML5 audio API, I was able to transmit commands from a computer to the phone using just a regular web server (in Node.js) and the phone’s browser. Not too bad for a quick prototype. Sticking to the iPhone browser’s audio APIs allowed me to avoid the time- (and money-) consuming process of writing a native app.
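To give a flavour of the modem idea, here is a minimal sketch of audio frequency-shift keying, where each bit becomes a short burst of one of two tones. This is not the project’s actual code; the tone frequencies, bit duration, and zero-crossing decoder are all illustrative assumptions.

```javascript
// Toy audio-FSK codec sketch (illustrative; frequencies and timings are assumptions).
// A 0 bit is a burst of 1200 Hz, a 1 bit a burst of 2200 Hz, like a dial-up modem.
const SAMPLE_RATE = 44100;  // samples per second
const BIT_DURATION = 0.01;  // 10 ms per bit
const FREQ = [1200, 2200];  // tone for bit 0, tone for bit 1

// Encode an array of bytes into raw PCM samples in [-1, 1], MSB first.
function encode(bytes) {
  const n = Math.round(SAMPLE_RATE * BIT_DURATION);
  const samples = [];
  for (const byte of bytes) {
    for (let bit = 7; bit >= 0; bit--) {
      const f = FREQ[(byte >> bit) & 1];
      for (let i = 0; i < n; i++) {
        samples.push(Math.sin((2 * Math.PI * f * i) / SAMPLE_RATE));
      }
    }
  }
  return samples;
}

// Decode by counting zero crossings in each bit-length window — a crude
// frequency estimate, but enough to tell the two tones apart.
function decode(samples) {
  const n = Math.round(SAMPLE_RATE * BIT_DURATION);
  const bits = [];
  for (let start = 0; start + n <= samples.length; start += n) {
    let crossings = 0;
    for (let i = start + 1; i < start + n; i++) {
      if ((samples[i - 1] < 0) !== (samples[i] < 0)) crossings++;
    }
    const freq = (crossings / 2) * (SAMPLE_RATE / n); // crossings per window -> Hz
    bits.push(Math.abs(freq - FREQ[1]) < Math.abs(freq - FREQ[0]) ? 1 : 0);
  }
  // Pack bits back into bytes.
  const bytes = [];
  for (let i = 0; i + 8 <= bits.length; i += 8) {
    bytes.push(bits.slice(i, i + 8).reduce((b, bit) => (b << 1) | bit, 0));
  }
  return bytes;
}
```

In the browser, the encoded samples would be written into an `AudioBuffer` and played out of the headphone jack; on the Arduino side the same zero-crossing idea can run over readings from an analog pin.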

Second Prototype

While the first prototype was a proof of concept of the communication model, the second has the shape of what I originally had in mind.

Prototype 2

This time the phone itself becomes a part of the car’s chassis. A periscope (which originally is a pretty creepy iPhone camera case) allows the phone’s camera to face forward.

Autodesk Fusion360 Project

I reused some parts from the buggy kit and 3D-printed the outer parts of the chassis to allow for mounting the extra elements on top of it. Interestingly, I’ve found that a laser cutter is a far more robust and reliable tool for rapid prototyping than a 3D printer (as long as you can live with its limitations). While lacking the 3D-ness, it takes just minutes to go from an idea to a tangible object, even for bigger parts. And once you have the speed and power settings right for your material, it’s unlikely to fail mid-build.

The recorded video is not ideal for self-driving purposes: it produces a portrait image rather than the more useful landscape view, and a lot of the frame is taken up by a close-up of the floor. I’m hoping it will be sufficient for computer-vision applications. I imagine that at some later stage it would be possible to replace the regular lens in the periscope with a fish-eye.

The Road Ahead

With the hardware more or less stable (although there’s a lot of room for improvement and even more ideas to implement), I’m able to focus on the even more interesting part: the software! The toy car simulates a life-sized autonomous vehicle only to a limited extent: no LIDAR, no point in lane or road-sign detection, limited usefulness of GPS over short distances. However, equally interesting topics, such as object recognition, learned steering or visual odometry, should be perfectly explorable using the platform! With communication to a computer re-established (this time there’s no avoiding native apps), I plan to work on employing the phone’s camera and sensors to implement an odometer. After all, a phone that can find its way back home is an appliance many would appreciate.
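The odometry work is still ahead of me, but the simplest baseline is plain dead reckoning: integrate a speed estimate along the compass heading to track position. The sketch below is purely hypothetical — the field names, units, and the idea of trusting the commanded wheel speed are all assumptions, and a real implementation would fuse this with camera and accelerometer data.

```javascript
// Hypothetical dead-reckoning baseline for the toy car's odometer.
// Each reading is { speed: m/s, headingDeg: compass heading, dt: seconds };
// the structure is an assumption, not the project's actual API.
function deadReckon(readings) {
  let x = 0; // metres east of the start
  let y = 0; // metres north of the start
  for (const r of readings) {
    const theta = (r.headingDeg * Math.PI) / 180; // compass: 0° = north, 90° = east
    x += r.speed * Math.sin(theta) * r.dt;
    y += r.speed * Math.cos(theta) * r.dt;
  }
  return { x, y };
}
```

For example, two seconds at 1 m/s heading due north should land the car two metres north of where it started; errors accumulate over time, which is exactly why the phone’s camera would be needed to correct the drift.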
