Telemus: A remotely operated vehicle based on Raspberry Pi and Arduino

NASA’s Curiosity rover is just awesome. I wanted one in my life, and so set about thinking and building… To what extent could I construct a similar robot using bits and pieces available to mortal men? I’m not interested in sampling rocks or seeking out evidence of extraterrestrial activity just yet, and so I boiled my desires down to simple remote control with a video feed – It’d be awesome fun to trundle around my house with a telepresence droid, from anywhere in the world, much like NASA are able to host a remote human presence on a planet 225 million kilometers away.


Acknowledging that such a construction was not exactly an insignificant undertaking, I spent some time in project manager mode considering my approach. I settled on assembling my droid as a series of independent engineering work ‘streams’, that when brought together, would result in the main goal: an internet controlled Remote Operated Vehicle (ROV).

Core: The project involves a computer that is connected to the internet, which can receive commands and tell motors to move on demand. The computer has a camera attached, and it is able to present the video stream via the internet.

I planned to use a Raspberry Pi as my onboard computer. This little hackable Linux machine certainly has low power requirements, and I’ve seen some impressive things done with it, including numerous projects involving cameras and motors. The Raspberry Pi is not such a great candidate for driving motors directly from its GPIO pins, though – motors need to draw a lot of current, and the tiny computer is built for very low-current logic output.

In a biological system, it’s typical for a motor intention (e.g. ‘move my arm in this way’) to pass from a relatively complex ‘high level’, down through successively simpler layers of control, and finally into mechanical effectors: the brain controls muscles via the motor cortex, for instance. I thought that this methodology would be a great way to get my Raspberry Pi communicating with physical reality – by pushing commands to a ‘low level’ motor controller, for my purposes an Arduino microcontroller. But what’s the best way to get a message from the Pi into the Arduino? I established (‘forked’) a project stream: Bridge, which was intended to solve this.

Setting up a web camera on the Raspberry Pi was a critically important task, so I gave it a stream title: Eye. Similarly, internet connectivity is an essential work module (connectivity is not provided ‘out of the box’ with the RPi), and so another stream was forked: Uplink, which was to establish WiFi internet connectivity.

Vessel: I’d need to build a scaffold to hold everything together, with wheels to form a mobile platform. It would need a drive system with sufficient torque to deal with basic obstacles (carpet/hard floor transitions), and to stay put when not driving.

Juice: There are energy requirements that must be met – a battery or batteries sufficient for both motor control and core computer power.

Soul: In order to animate the whole thing, I’d need to do some software development in a couple of sub work-streams. Firstly I’d have to present a control interface to the user (me at a remote machine), so that I could send commands to some process running on the robot ‘host’ computer. This should then push commands onto its motor control layer, to ultimately translate my control intentions into motor actions.


It’s cool to think that if I were ‘in industry’, I could farm out each stream to a person or team to develop – that’d allow the production of something really big and cool in good time; just like at NASA! However, I was tackling my project alone (kind of, as you’ll see), and so worked through each stream one after the other, switching to a different task whenever I got stuck or bored. What follows is a debrief of each work stream – hopefully some of the things I learned will assist others looking to achieve similar goals.

Uplink – Raspberry Pi WiFi Connectivity
After a bit of internet research targeting ‘cheap wifi dongle for Raspberry Pi’, I hoped to follow the success of others with the Edimax 7811un WiFi dongle. Unfortunately for me and my year-out-of-date Raspbian Linux distro, this did not work at all with the prebuilt driver modules helpfully posted by others on the web. Not fully confident about building from source for my distro (as a Linux noob), I tried to get the prebuilt drivers working with an updated Raspbian distro. Failed again – now the distro was too new for the drivers! I then tried an entirely different OS, Arch Linux, hoping the device would work immediately with included drivers, but was totally put off by the different command environment (I really am a Linux noob). I finally tried Occidentalis 0.2 (based on familiar Raspbian, and with a lot of hobbyist-centric modules preloaded); after some very simple configuration, the Edimax now worked ‘out of the box’!

This was a difficult project stream for me, since it involved a lot of learning about Linux hardware compatibility and setup – the entire Linux hardware configuration process seems so arbitrary; options are littered about various obscure places on the core file system, and little sensible help aimed at newcomers (who want to understand the process and not just copy some commands) can be found on the web. Occidentalis came ‘pre-baked’ and ready to go, a perfectly simple solution to wireless internetting on the RPi, with a dirt cheap WiFi dongle.

Eye – webcamera streaming with Raspberry Pi
Similar problems to those experienced with Uplink were encountered whilst setting up the webcam (the lesson: Linux hardware setup is a pain). The main issue was trying to get a Linux distro that supported both the Edimax dongle for WiFi and my webcamera (a Microsoft Lifecam 3000). Occidentalis 0.2 totally saved the day with support for my camera already built in, and my camera hardware was ready to use. I also tried a really cheap camera, the Microsoft Xbox Live Vision (around a fiver including postage via Amazon), and this worked beautifully too, with no setup required! I found this hardware compatibility resource particularly useful whilst shopping for suitable hardware: http://elinux.org/RPi_VerifiedPeripherals

Actually streaming the video from my functional camera turned out to be a mammoth research and development endeavour – I tried various systems designed to stream a camera feed to the internet with no success; the popular programs ffserver, avserver and motion all failed at various points during investigation, each with obscure errors. I was running out of patience after days getting nowhere, until I happily stumbled upon mjpg-streamer (installed from the command line like other Linux programs using apt-get) – it worked immediately, and it worked so very well. Furthermore, the video stream format hosted by this program was ready to drop straight into a webpage, which is perfect for my requirements – rather than writing a ‘native’ desktop program to control the robot remotely, I’d simply be able to use a web page to build my control interface: much simpler.

Bridge – communication from Raspberry Pi to Arduino
There turn out to be numerous methods of getting a computer to talk to another bit of hardware: serial, I2C, etc. I wanted a quick and easy solution, so opted for basic digital logic output directly through the Raspberry Pi’s general purpose input/output (GPIO) pins. I can define my motor control intentions in terms of binary states, and by setting the logic voltage of the pins high for 1 and low for 0, I can push the control intention to my Arduino:

Each motor has three states, which can be described by two bits; a ‘drive’ bit and a ‘reverse’ bit.

idle: 00 (both ‘drive’ and ‘reverse’ bits zero)
driving forwards: 01 (reverse bit 0, drive bit 1)
driving backwards: 11 (reverse bit 1, drive bit 1)

…which means that I need just four bits to control two motors according to the following:

idle – both motors stationary: 0000
driving forwards – both motors turning forwards: 0101
driving backwards – both motors turning backwards: 1111
turning left – left motor turning backwards, right motor forwards: 1101
turning right – right motor turning backwards, left motor forwards: 0111

Further consideration reveals that three bits can actually encode up to eight distinct states (2³ = 8) – more than enough for the five commands above, freeing up one pin from the scheme – but I eventually stuck with keeping things simple and obvious, using four bits according to the scheme above.
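The four-bit scheme above is easy to capture in a few lines of code. Here’s a minimal sketch (not the project’s actual code) of the command-to-bits mapping in Python; the GPIO pin numbers in the comment are placeholders:

```python
# A sketch of the four-bit drive scheme described above.
# Assumed bit order: left reverse, left drive, right reverse, right drive.

COMMANDS = {
    "idle":     "0000",
    "forward":  "0101",
    "backward": "1111",
    "left":     "1101",
    "right":    "0111",
}

def command_to_levels(command):
    """Translate a drive command into a tuple of four logic levels."""
    return tuple(int(bit) for bit in COMMANDS[command])

# On the Pi itself, each level would then be written out to a GPIO pin,
# e.g. with the RPi.GPIO library (pin numbers here are placeholders):
#   for pin, level in zip((17, 18, 22, 23), command_to_levels("forward")):
#       GPIO.output(pin, level)

print(command_to_levels("forward"))   # (0, 1, 0, 1)
```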

The voltage output from the Raspberry Pi is 3.3V, whilst the voltage expected by the Arduino inputs is 5V. Whilst it’s possible to connect directly from Pi to Arduino (3.3V reads perfectly well as a logic 1 – the ‘high’ threshold on the Arduino is closer to 2.8V in my experience), I wanted to ‘do things properly’ with a logic buffer. I tried pushing my bits from the RPi to a 74HCT541 octal buffer IC, with 5V on the Arduino side. Using the chip to ‘drive’ inputs to the Arduino worked okay until a fourth ‘high’ output was demanded from it – there seemed to be a voltage drop associated with each high output, and with all four bits set on (1111), the informational state was very unstable, with ‘high’ represented by slightly less than 2.8V on each pin on the Arduino side.

A conversation with a seasoned industrial engineer (thanks dad!) introduced me to the difference between ‘source’ and ‘sink’ drive, as it pertains to IC circuits. I had assumed that ‘source’ mode was how every IC worked! A quick bit of prototyping using a ‘sink’ setup to drive four LEDs worked much better than the source drive arrangement; the logic voltage remained consistent with each additional high output from the octal buffer. But I just couldn’t figure out how to drive my Arduino inputs in this way – ‘pulling’ current from the Arduino to represent a logic 1 (I’m also an electronics newbie – though in retrospect I’m sure this could be done with a PNP transistor arrangement).

I ditched the IC and tried an entirely different approach using plain old transistors to translate my Raspberry Pi output voltage from 3.3V to 5V, and experienced a voltage drop across the transistor leading to my 5V dropping below 2.8V (regardless of swapping out resistors, changing transistor types, etc.) – again no good!

Given all the electronics grief thus far, I ended up just plugging the RPi into the Arduino (sometimes the simplest solutions are the best), acknowledging that this is perfectly safe as the connected Arduino pins are in input mode – it won’t be dropping 5V onto the RPi anytime soon. I also noted that I seriously need to spend more time working with electronics to understand what caused my numerous issues during this work stream.

Vessel – a simple wheeled robot body
Possibly the easiest task of all: have some fun with Meccano (a great modular construction system with enough robustness for light robot application). I looked into the possibility of adding tank tracks (a belt sander reel cut in half lengthways) but this didn’t work out how I intended – I preferred a smaller frame than dictated by the size of the belt, so wheels it was.

By modifying a couple of servo motors so that they rotated continuously in either direction (rather than the fixed angle default), I had some high torque drive units, with easy control using the pulse width modulation output from the Arduino.

Juice – Power supply for Pi and motors
Whilst browsing the discounted goods aisle in Maplin, I found a 9000mAh rechargeable power pack with USB output, intended as a supplementary charging source for mobile devices. It’s pretty heavy, but this battery had a massive capacity! A first trial with my WiFi-connected Raspberry Pi powered by this little brick proved that it could idle the Pi happily for fourteen hours.

My drive servos require a good bit of power, however (servos typically needing 6V and a fair bit of current) – I didn’t want to put too great a demand on my new power source, and so thought it’d do well as the ‘brain’ power source for both the Arduino and Raspberry Pi. I’d use a separate source of 4 AA batteries (~6V) to drive my motors.

I built a small ‘power station’ circuit to allow me to source power from each supply, with a common ground rail connected to the rest of the robot circuitry by a master on/off switch. I learned a core lesson in electronics at this point: different battery voltages can happily be combined so long as their grounds are connected. I once couldn’t comprehend how this was possible without one battery ‘leaking’ charge into the other, leaving the batteries ‘unbalanced’. Such a thing just doesn’t happen – for charge to enter a battery at one end, an equal charge must leave at the other, so connecting multiple batteries to a common ground simply completes a pathway for the charge from each to complete its own circuit. There’s no adverse effect from ‘excess’ charge crossing from one battery to the other; a simple fact, but fundamental to my growing practical understanding.

Soul – Programming a telepresence ‘droid
As a software developer by trade, I’m on familiar ground here. A bit of research into how the Raspberry Pi’s GPIO pins could be controlled via the internet led me to discover a set of interconnected software elements called WebIOPi – an *amazing* bit of work by a single person. Not only did this software fulfill my requirements almost immediately, all the way from front-end control interface to GPIO action on the Pi, but the author (blog linked here) was in the process of writing an article for a hobbyist magazine about making a ‘cambot’ – refer to issues 9 and 10 of The MagPi – basically the same as my own project! Furthermore, he’d kindly committed his control interface source to the repository for use by anyone (with files supporting the written article also available here)… In this respect I cannot claim credit for this portion of the ‘Soul’ work stream – it turned out to have been fulfilled already by another skilled engineer. With a very few small tweaks (and a donation where it’s due to the author of WebIOPi), my project requirements were met.

To summarise the architecture briefly: WebIOPi hosts a webserver on the Raspberry Pi, upon which HTML content can be accessed remotely using a web browser. On this webserver I can place a simple HTML document that uses WebIOPi’s JavaScript API to communicate with a Python script running on the Raspberry Pi; this sets the state of the GPIO pins based on commands originating from button presses on the web page. ‘Macro’ commands can be defined, and so it’s easy to define a ‘forward’ button as meaning ‘0101’ on any four GPIO pins (the intention to drive both my motors accordingly), for example.

The final portion of programming was a very simple bit of code on the Arduino. A loop monitors input on four pins connected to the GPIO from the Raspberry Pi, and sets the ‘angle’ desired by my modified servo motors accordingly, via PWM output. The angle will never be reached by my modified servos, so they’ll spin continually in one or other direction, with a speed proportional to how far the angle is defined from zero.
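To illustrate the idea, here’s the bit-to-servo mapping sketched in Python (the real thing is an Arduino sketch in C++ using the Servo library; the specific angle values here are illustrative – 90 is the neutral command that stops a modified continuous-rotation servo, while 0 and 180 spin it at full speed in opposite directions):

```python
# Sketch of the Arduino-side logic: map each motor's two control bits
# (drive, reverse) to a servo 'angle' command. Angle values are
# illustrative, not taken from the project's actual code.

def servo_angle(drive_bit, reverse_bit):
    """Map one motor's two control bits to a servo angle command."""
    if drive_bit == 0:
        return 90                    # idle: neutral pulse width, no rotation
    return 0 if reverse_bit else 180 # full speed in one or other direction

def both_motors(bits):
    """bits = (left_reverse, left_drive, right_reverse, right_drive)."""
    lr, ld, rr, rd = bits
    return servo_angle(ld, lr), servo_angle(rd, rr)

print(both_motors((0, 1, 0, 1)))  # forwards: (180, 180)
print(both_motors((1, 1, 0, 1)))  # turn left: (0, 180)
```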

Final assembly
A bit of insulating tape to hold everything in place and it’s good to go! My robot’s name: Telemus – a cyclops from legend, and a nod towards the word ‘telemetry’! Project success: a lot of learning had during construction, and a lot of fun to be had trundling around my own local planet Mars. A video of my rover in action follows…


Project complete

The future

I’d really like to add a set of high power LEDs to ‘light up the dark places’ – the on/off state of this will easily be controlled with one GPIO pin connected to a transistor switch.

I’d also really like my robot to be able to tackle stairs! This will require both a larger body scale, and some better locomotion system than wheels – tracks might do it, but I’m highly intrigued by Boston Dynamics’ RHex robot, with its leg/wheel crossover type locomotion system…

A larger body could support a beefier power system for drive. It might be interesting to add a manipulator or arm of some kind, for basic interaction with the environment – possibly even aid with raising the front of the (tracked) robot’s body for mounting stairs.


AppHack-e :: ‘Appcopter’ toy helicopter, hacked. Part I: The control signal


A few weeks ago, I went to a local hobby and model shop to look for inspiration (and maybe something to dismantle). Of particular interest was this:


A small infra-red controlled toy helicopter, with a transmitter that plugs into an iPhone – simply download the virtual control panel from the app store to fly it. I was immediately struck by the potential for tinkering; it’s rare to see anything ‘unofficial’ plugged into an iPhone because the platform is so closed (Apple don’t let just anyone mess with their hardware), but here was a cheap toy making innovative use of the audio output socket, to fly a vehicle no less – that’s pretty clever! As a programmer I could see that in order to make a custom controller for this toy, all that’d be needed would be some code to ‘make the right noise down the wire’ to the transmitter, so to speak, so with this simple view in mind I went ahead and bought it with some idea of what I wanted to achieve…

I remember once playing with a programmable robot at school called ‘Logo’ (nicknamed ‘Turtle’): a white plastic dome-shaped vehicle with wheels that you could command to perform a series of navigations in sequence, leading the robot to traverse a maze or draw a picture. Program instructions were really simple – you could draw a square by commanding something like the following:

; draw a square: go forwards 10 units and turn right 90 degrees, four times over
FORWARD 10
RIGHT 90
FORWARD 10
RIGHT 90
FORWARD 10
RIGHT 90
FORWARD 10
RIGHT 90

I thought it’d be an interesting challenge to hack this toy helicopter into behaving like a ‘flying turtle’, so I could send it a series of instructions and watch as it performed them in sequence. That sounds like a pretty simple goal, but to get this far I’d have to achieve a number of tasks, most definitely learning a lot in the process. This modest goal also represents a necessary step on the path to bigger things, such as building a fully autonomous flight platform with sensory feedback systems – now that’d be awesome…

Control mechanics

So after playing with the helicopter for a bit to get a feel for the flight characteristics (cats now run from the room when the toy starts up), I looked in more detail at the available control mechanics.

The virtual controller app in d-pad mode

The virtual control panel gives you a number of controls, the most obvious being throttle – how fast the rotors spin, which translates into how quickly the helicopter ascends or descends. Get things balanced just right, and it hovers nicely.

A significant feature of this helicopter is that it has two pairs of ‘coaxially mounted’ rotor blades – each pair spins in the opposite direction to the other, with the effect of mutually cancelling any bodily rotation induced during spin. The control circuitry on-board the helicopter manipulates the Y-axis rotation (yaw) of the helicopter in response to your command by slowing down or speeding up just one or the other pair of blades, deliberately allowing a little induced torque in either direction. There’s also ‘trim‘ – it’s possible that the unique physiology of the helicopter (factory differences in motor power, for instance) means that some rotational force needs constant compensation; trim allows you to apply some permanent offsetting yaw for this.

Next is pitch, managed by a tiny motor and upwards-facing propeller on the tail boom – spin the rotor one way to dip the tail and make the helicopter fly backwards, or spin it the other to raise the tail and fly the helicopter forwards. Leaving the control untouched allows the helicopter to straighten out and hover.

The app offers two control modes. First is a ‘d-pad’ (‘digital pad’) for simple ‘zero-or-maximum’ forwards, backwards, yaw left, yaw right and combinations of these. Secondly there’s a pretty cool mode that allows you to fly the helicopter with the accelerometer of your iPhone by tilting it left, right, forwards or backwards to affect yaw and pitch respectively.

Finally, hidden away in another screen of the app, is a channel selection option: A, B or C – this setting lets you and some friends fly up to three helicopters on different channels, in the same room, each with their own transmitters and controller apps.

Control signal

When you fire up the throttle on the app, and the helicopter’s motors whirr into action, you can hear the plug-in transmitter box coughing out a staccato series of sounds through a tiny built-in speaker (which on first appraisal could passably be described as ‘helicopter rotor noises’, made for effect). The same sound is made through the speakers on the iPhone when the transmitter is unplugged, so there must be more to these sounds than mere aural aesthetic; it must be the control signal! It’s a pretty straightforward exercise to plug the iPhone’s audio output into the audio line-in port on a computer, and record a bit of the sound. It’s even easier to then look at it visually portrayed in a sound editing program – here’s what the audio signal looks like at two levels of zoom:

1) A little over 3.5 seconds of audio signal from the app, throttle set at 10%

2) Zoomed in upon one of the many individual ‘blips’ from the signal in image (1). The portion containing dips and peaks is about 30 milliseconds long.

Zoomed right into one of the many pulses visible in image (1), you can see that the signal looks like a series of valleys carved out from a line centered around zero (image 2). Though the signal looks like it tends upwards a bit across its entire length, the ‘valley floors’ along it look to be of pretty consistent widths and depths, with some looking like exactly double the width of the shorter ones.

This signal looks most definitely digital (an analogue signal would present a more chaotic and varied-looking series of smooth valleys and peaks of different heights and depths, not the regular ‘rectangular wave’ that we see). A little more research reveals that your average TV remote sends a signal that doesn’t look too different to the one sent by the helicopter transmitter… The important point being that the typical TV remote control signal has a definite structure: it opens with a ‘low signal’ or valley of a fixed length (like shouting someone’s name to get their attention); the signal data then follows like a spoken sentence, flowing in a series of words: control values spelled out in binary (with the length of each low signal denoting either a ‘1’ or a ‘0’), before closing with another fixed pattern of lows. Just in case the TV failed to get the message the first time around, the signal is then repeated. It looks like my helicopter similarly has a consistent ‘hello’ valley at the start (longest trough, far left of image 2); some binary data then follows (with what could be a longer valley denoting a ‘1’ and a shorter valley denoting a ‘0’) before coming to a definitive close. The entire signal is repeated over and over at regular intervals so long as the throttle is greater than zero – setting the throttle to zero stops the signal audio.
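The ‘long valley means 1, short valley means 0’ hypothesis is easy to express in code. Here’s a rough sketch, assuming we’ve already measured the width of each data valley (and excluded the opening ‘hello’ trough) – the 1.5× threshold is my own assumption, based on the long valleys looking roughly double the width of the short ones:

```python
# Classify each measured valley width as a 0 or a 1 by comparing it
# against the shortest width seen (widths in milliseconds, say).

def widths_to_bits(widths):
    shortest = min(widths)
    return "".join("1" if w > 1.5 * shortest else "0" for w in widths)

# Hypothetical measurements from a zoomed-in recording:
print(widths_to_bits([0.4, 0.8, 0.4, 0.4, 0.8]))  # "01001"
```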

A brief moment to explain binary…

When we count from 0 to 9, we visit a series of unique numerical symbols in one ‘column’, which we call the ‘units‘: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. When we get to the number 10, we wrap over into two columns: the ‘tens‘ (of which we have one in this instance) in addition to the ‘units‘ (of which we have none). Count up to 100 and we have one ‘hundred‘, no ‘tens‘ and no ‘units‘. Following so far? We’ve just been counting in ‘base-10’ – our numerical system is in fact built around base-10. We don’t have to constrain ourselves to ‘units‘, ‘tens‘, ‘hundreds‘, etc. though – you can effectively work in a numerical system of any base – binary is simply counting in ‘base-2’! The table below shows exactly what that looks like – on the left is the familiar number in base-10, on the right is the same number, in binary. See that instead of ‘tens‘, ‘hundreds‘ and ‘thousands‘ (powers of ten), we instead have ‘twos‘, ‘fours‘, ‘eights‘, ‘sixteens‘, etc. (powers of two), with each column being shifted along to the left when the smallest gets filled.

1 (one ‘unit’) → 1 (one ‘one’)

2 (two ‘units’) → 10 (no ‘ones’ plus one ‘two’)

3 (three ‘units’) → 11 (one ‘one’ plus one ‘two’)

4 (four ‘units’) → 100 (one ‘four’ plus no ‘two’ and no ‘one’)

5 (five ‘units’) → 101 (one ‘four’ plus no ‘two’ and one ‘one’)

6 (six ‘units’) → 110 (one ‘four’ plus one ‘two’ and no ‘one’)

7 (seven ‘units’) → 111 (one ‘four’ plus one ‘two’ and one ‘one’)

8 (eight ‘units’) → 1000 (one ‘eight’ plus no ‘fours’ plus no ‘two’ and no ‘one’)

9 (nine ‘units’) → 1001 (one ‘eight’ plus no ‘fours’ plus no ‘two’ and one ‘one’)

10 (one ‘ten’, no ‘units’)→ 1010 (one ‘eight’ plus no ‘fours’ plus one ‘two’ and no ‘one’)

When using binary in computing, it’s typical to specify a number with a certain length of symbols – like putting in a placeholder ‘0’ for each column we might possibly use – this is the number of ‘bits’ (‘binary digits’) used to specify the number:

The number 1 in ‘four bit binary’: 0001

The number 15 in ‘four bit binary’: 1111

So what happens if we try to specify the number 16 in four bits? We just can’t do it! There’s a limit to the maximum number we can specify with a certain number of bits. To show the number 16 in binary we need at least five bits:

The number 16 in ‘five bit binary’: 10000
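If you’d like to experiment, Python’s built-in number formatting reproduces the examples above directly:

```python
# Converting numbers to fixed-width binary strings:
print(format(1, "04b"))    # '0001' - the number 1 in four-bit binary
print(format(15, "04b"))   # '1111' - the largest value four bits can hold
print(format(16, "05b"))   # '10000' - 16 needs a fifth bit

# In general, n bits can represent the values 0 through 2**n - 1:
print(2 ** 4 - 1)          # 15
```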

Decoding the binary signal

So we’ve seen what controls the helicopter responds to: throttle, yaw, pitch, trim, and channel. The signal data must contain information about each of these – there’s no magic happening when you push up the throttle control! We’ve established that the signal is binary in format; it’s sending a series of numbers represented by zeroes and ones – these numbers must mean something about each control, but in what order do the values get sent, and by how many bits is each represented? With applied scientific method, we can easily figure out the signal encoding! The process is really simple:

– Choose just one variable (be it throttle, yaw, pitch, trim or channel) and modify it to a known value, for example, set throttle at 10%. Record the signal audio – this is our ‘control’ signal, against which all further modifications will be compared.

– Change the selected variable by a small amount, for example, set throttle to 20%. Record the signal audio.

– Open up the audio file for the control signal (in this case throttle at 10%), and place alongside it the audio for the latest recording (throttle at 20%) – see how the signal changes between the two wave forms (image 3) – there’s clearly a connection between a particular variable (throttle in this case) and the part of the second recording that changes in response.

– Repeat many times, for a range of control settings for a particular variable, before moving onto a different variable and starting over!

3) Signal comparison: Throttle at 10% (top) and 100% (bottom). There’s definitely something going on with those first few bits of signal (highlighted, bottom)

I found it possible to get a pretty big hint about the location of a particular control value in the signal with a little trick: the controls have maximum and minimum values (for example, throttle has both a 0% and a 100% setting), so assuming these aren’t mapped to weird arbitrary values, they should correspond to the extreme values in the signal – the signal should show the highest value for throttle, say, when the app has it set at 100%. So set the throttle to 100%, and the signal shows a series of long valleys (obviously, then, our ‘ones’) at a location that must contain the throttle value!

I initially started writing a program to automate the full testing procedure, exporting the audio files to a text file format (easier to crunch the numbers that way). I eventually ran out of patience with this though after encountering a number of issues, and resorted to pencil and paper – it turned out to be much quicker and easier – about an hour to figure out the encoding, and like any puzzle, really good fun.

Pencil and paper: Change one aspect of the control (e.g. set throttle @ 20%) and see how the signal changes relative to some base setting (I used throttle @ 10% as my reference signal). Repeat many times, identifying the characteristics of the control components of the signal

The signal exposed

So here’s a summary of my findings, and everything you need to start writing a program to take control of the helicopter:

– The audio signal is a rectangular waveform representing binary data, with a normalised peak amplitude ‘pulse’ of about 0.6 units, when represented in uncompressed .wav file format.

– The signal begins with an ‘on’ pulse of ~1600 microseconds

– A binary ‘0’ is indicated by a pulse of ~400 microseconds following a gap of ~400 microseconds

– A binary ‘1’ is indicated by a pulse of ~800 microseconds following a gap of ~400 microseconds

– The entire signal carries 28 bits

– Following the on pulse the controls are sent as follows:

Throttle: 7-bits, defaulting to 0, full at 127
Pitch: 6-bits, defaulting to 32, full forwards at 0, full reverse at 63
Yaw: 5-bits, defaulting to 16, full right at 0, full left at 31
Trim: 6-bits, defaulting to 31, full right at 0, full left at 63
Channel: 2-bits, A: 1, B: 2, C: 0
???: 2-bits, no idea – let’s find out in the next post!
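Pulling those findings together, here’s a sketch of a frame encoder in Python. The field widths, default values and pulse timings come straight from the measurements above; the most-significant-bit-first ordering within each field is my assumption, so verify against a recorded signal before trusting it:

```python
# Build the 28-bit control frame and its pulse sequence from the
# findings above. Bit order within each field (MSB-first) is assumed.

FIELDS = [("throttle", 7), ("pitch", 6), ("yaw", 5),
          ("trim", 6), ("channel", 2), ("unknown", 2)]

def encode_frame(throttle=0, pitch=32, yaw=16, trim=31, channel=1, unknown=0):
    """Pack the control values into a 28-character bit string."""
    values = dict(throttle=throttle, pitch=pitch, yaw=yaw,
                  trim=trim, channel=channel, unknown=unknown)
    bits = ""
    for name, width in FIELDS:
        assert 0 <= values[name] < 2 ** width, name
        bits += format(values[name], "0{}b".format(width))
    return bits

def frame_to_pulses(bits):
    """Turn a bit string into (state, microseconds) pairs: a ~1600us
    header pulse, then a 400us gap before each data pulse
    (400us for a 0, 800us for a 1)."""
    pulses = [("on", 1600)]
    for bit in bits:
        pulses.append(("off", 400))
        pulses.append(("on", 800 if bit == "1" else 400))
    return pulses

frame = encode_frame(throttle=127)
print(len(frame))   # 28
print(frame[:7])    # '1111111' - the throttle field at full
```

From here, generating the actual audio is ‘just’ a matter of rendering those on/off durations as samples – which is exactly what the next post’s Processing program does.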

That’s it for now – there’s a lot of content here, so I’ll deliver the story in two parts. In the next post I’ll reveal the program written in Processing to generate the audio signal for the transmitter, and talk about automating the helicopter’s flight!
