Category Archives: c++

UAV Editor

The signal processing view of a quadcopter has one obvious result – a node editor.

Still WIP but here’s the first screenshot:

Screenshot from 2015-03-02 00:16:43

The top-left part has some nodes that compute a Reference Frame, a Location and a Battery State from an MPU9250 IMU, an MS5611 baro, a Ublox GPS and an RC5T619 ADC.

There are also some processors that do resampling or convert streams (like an ADC_Voltmeter that converts an ADC stream into a voltage stream).

The nodes are read from the brain running on the Raspberry Pi. Each node has:

– a unique name
– a type (class_id)
– a list of inputs
– a list of outputs
– an init_params json which represents fixed configuration that is applied when the node is created. Stuff like IMU sample rate or gyro range.
– a config json that has things that can be changed at any moment – like where the inputs/outputs are connected or calibration data, or cutoff frequency for LPFs.

Each input/output has a name and a stream type that determines where it can be connected. There’s also a sample rate that has to be matched.
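
As a rough illustration, the editor’s view of a node could be described with something like this (a minimal sketch – the struct and field names are mine, not the actual silkopter classes):

#include <cstdint>
#include <string>
#include <vector>

//hypothetical GS-side description of a node, mirroring what the brain reports
struct Stream_Port
{
    std::string name;        //e.g. "Angular Velocity"
    std::string stream_type; //determines what it can be connected to
    uint32_t rate = 0;       //sample rate in Hz, has to match on both ends
};

struct Node_Descriptor
{
    std::string name;                 //unique name
    std::string class_id;             //node type
    std::vector<Stream_Port> inputs;
    std::vector<Stream_Port> outputs;
    std::string init_params;          //json applied once, when the node is created
    std::string config;               //json that can be changed at any moment
};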

Next up is allowing the json configs to be uploaded to the brain, plus more editing features.

The node editing is done with this library.


Signal processing

While thinking about ways to implement multiple sensor support – for several GPS units or gyroscopes – I realized that a UAV can be modeled as a signal processing device.

It can be split like this:

  • sources – like the MPU9250, which generates 4 signal streams (acceleration, angular velocity, magnetic field and temperature), or the SRF02 sonar, which generates a stream of distances
  • streams – like the acceleration from the IMU, or the location stream from the GPS. Each stream has a sample rate
  • sinks – like the pigpio PWM. Sinks only accept streams of a certain sample rate
  • and processors – like low pass filters (LPF), the PIDs, or the AHRS. Processors don’t have an inherent sample rate; they usually adopt the rate of their input.

All of these are implemented as nodes with several inputs and outputs. All nodes are added to the HAL layer and both their properties and connections are configured from a json file.
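
To make the idea concrete, the node abstraction boils down to something like this (a simplified sketch of the concept, not the actual HAL interfaces):

#include <cstddef>
#include <cstdint>
#include <vector>

//a stream carries samples of one type at a fixed rate
struct IStream
{
    virtual ~IStream() = default;
    virtual uint32_t get_rate() const = 0; //samples per second
};

//sources, sinks and processors are all nodes; the HAL owns them, calls process()
//on each and wires their inputs/outputs according to the json config
struct INode
{
    virtual ~INode() = default;
    virtual void process() = 0;
    virtual std::vector<IStream*> get_outputs() = 0;          //empty for sinks
    virtual void set_input(size_t idx, IStream* stream) = 0;  //unused by sources
};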

The important aspect is that – as in all signal processing – the sample rate is very important and has to be properly matched between nodes to satisfy the Nyquist criterion. For example the PWM has a max sample rate of 490Hz, so any signal component above ~245Hz will show up as noise. Higher rate signals have to be low-pass filtered and resampled before being fed into the PWM node.

I started implementation last week and – after many rewrites – I have a model that ‘feels right’.

Here are some drawings:

Ahrs_0

Here the MPU9250 node exposes 4 sub-nodes (not shown) – an accelerometer, a gyroscope, a magnetometer and a thermometer. Each of these outputs a stream: acceleration, angular velocity, magnetic field and temperature respectively.

The frequency of these streams is determined by the sensor. Acceleration and angular velocity are at 1KHz while the magnetic field is at 100Hz. Temperature (not shown in the pic) is at 10Hz.

The Magnetic Field stream is fed into a Resampler node that takes it from 100Hz to 1000Hz, ready to be fed into the AHRS processor node. The reason for this is that upsampling the stream needs a low-pass filter to avoid parasitic frequencies. A sample rate of 100Hz can carry at most a 50Hz signal, so when going to a 1000Hz sample rate this maximum frequency has to be maintained. Simply cloning each sample 10 times is not enough, as this adds a lot of high frequency harmonics.
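
To illustrate the point, here is a toy upsampler – repeat each sample and then run a crude one-pole low-pass over the result to kill the staircase harmonics (just a sketch; the real resampler node would use a properly designed filter):

#include <vector>

//toy 100Hz -> 1000Hz upsampler: sample-and-hold followed by a one-pole low-pass
std::vector<float> upsample_10x(std::vector<float> const& in, float alpha = 0.1f)
{
    std::vector<float> out;
    out.reserve(in.size() * 10);
    float y = in.empty() ? 0.f : in.front();
    for (float x : in)
    {
        for (int i = 0; i < 10; i++)
        {
            y += alpha * (x - y); //the low-pass smooths out the cloned samples
            out.push_back(y);
        }
    }
    return out;
}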

So now all signals are at a 1KHz sample rate and are fed into the AHRS, which outputs a Reference Frame stream, also at 1KHz. This reference frame represents the rotation of the UAV relative to the earth’s frame of reference.

Right now I have a complementary filter for the AHRS node but in the near future I will write an extended Kalman filter (EKF) for it – as soon as this tutorial is complete.

Note200215_0

This diagram uses the AHRS output – the reference frame stream – plus the acceleration stream again to output 2 new streams – gravity vector and linear acceleration. These 2 streams will be further used for dead reckoning and stability pids.
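
Conceptually, the two new streams are computed per sample roughly like this (a sketch with a bare-bones matrix type; the real node uses the project’s math library):

#include <array>

using vec3 = std::array<float, 3>;
using mat3 = std::array<vec3, 3>; //rows of the body-to-world rotation from the AHRS

//rotate the world-space gravity into the body frame (multiply by the transpose,
//which is the inverse of a rotation matrix), then subtract it from the raw sample
void compute_gravity_and_linear_acc(mat3 const& body_to_world, vec3 const& raw_acc,
                                    vec3& gravity, vec3& linear_acc)
{
    vec3 const g_world = {0.f, 0.f, -9.80665f};
    for (int i = 0; i < 3; i++)
    {
        gravity[i] = body_to_world[0][i] * g_world[0] +
                     body_to_world[1][i] * g_world[1] +
                     body_to_world[2][i] * g_world[2];
        linear_acc[i] = raw_acc[i] - gravity[i];
    }
}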

Note200215_1_0

Here I have the basic rate PIDs diagram. The angular velocity stream is first resampled from 1KHz down to 50Hz (the pic says LPF but it’s actually a resampler node). The reason for this is that the usable bandwidth of the quadcopter is ~20-30Hz; anything higher than this is noise.

The stream is then passed to the Rate PID processor node which outputs a torque stream (also at 50Hz). Torque is converted to thrust using a model of the motors + propellers (configurable of course), which is fed to the motor mixer and then into the PWM sink.
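
For a single axis, one 50Hz step of that chain looks roughly like this (an illustrative sketch – the real nodes pass whole streams around, and the motor model is configurable rather than a single factor):

//minimal PID, just enough to show the shape of the rate loop
struct Simple_PID
{
    float kp, ki, kd;
    float integral = 0.f;
    float last_error = 0.f;

    float process(float error, float dt)
    {
        integral += error * dt;
        float derivative = (error - last_error) / dt;
        last_error = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

//one 50Hz step: angular velocity error -> torque -> thrust request for the mixer
float rate_axis_step(Simple_PID& pid, float target_rate, float measured_rate,
                     float dt, float torque_to_thrust)
{
    float torque = pid.process(target_rate - measured_rate, dt);
    return torque * torque_to_thrust; //crude stand-in for the motor+propeller model
}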

The advantage of this system is that new processors can be easily added to accommodate new hardware. For example if I want to use 2 GPS sensors I will add a redundancy processor that takes 2 location streams and outputs the healthiest one. Or I can merge them together to get better accuracy – while the rest of the nodes don’t know or care about any of this.

Basically I have minimal coupling between the elements of the system, which talk through very well-defined (and type-safe) interfaces.

The ground station is also simplified. It can enumerate all the nodes, read their configuration as a json value, change it and push it back to the nodes. Changing a filter cutoff will become trivial.

So far I’m still working to match the old functionality, but in 1-2 weeks I believe this system will be ready for a test.

Robust sensors

Thanks to HobbyKing’s horrible shipping times, I’m forced to use my broken Crius UBLOX GPS to test the GPS code. I’m not sure what is broken but the symptoms vary from garbage data, wrong crc and losing the fix every 2 minutes to random periods of complete silence.

To be able to continue working I had to add a watchdog that reinitializes the GPS if it’s silent for more than 5 seconds. However – my GPS::init function is blocking and takes a few seconds to do its work, as it has to wait for the ACK messages and retry a few times if needed. So I moved it onto the secondary thread using silk::async.

void GPS_UBLOX::process()
{
    QLOG_TOPIC("gps_ublox::process");
    if (!m_is_setup)
    {
        if (!m_setup_future.valid() || //first time we're starting this
             m_setup_future.get_state() == boost::future_state::ready) //the previous async setup failed
        {
            //start the async setup
            m_setup_future = silk::async([this]() { setup(); });
        }
        return;
    }

And the watchdog:

//check if we need to reset
if (q::Clock::now() - m_sample.last_complete_time_point >= REINIT_WATCHGOD_TIMEOUT)
{
    m_is_setup = false;
}

So if at any moment the GPS stops responding I fire the setup on the secondary thread – to avoid blocking the main control thread – and keep doing it until the GPS responds again.

I’m thinking of adding this kind of hot restart to all sensors and, in case something happens, triggering it from the GS.

silk::async

No intro for this one, it’s too late.

Here’s the code:

extern boost::asio::io_service s_io_service;

namespace silk
{
    template<typename F> auto async(F f) -> boost::unique_future<decltype(f())>
    {
        typedef decltype(f()) result_type;
        typedef boost::packaged_task<result_type> packaged_task;
        auto task = std::make_shared<packaged_task>(std::move(f));
        boost::unique_future<result_type> future = task->get_future();
        s_io_service.post(boost::bind(&packaged_task::operator(), task));
        return future;
    }
}

It’s a std::async replacement that uses a boost::asio::io_service as a dispatcher.

I use it for any operation that might take too long and sits in the middle of my control loop – like saving new PIDs or the battery state to disk.

Usage is pretty simple – pass a lambda and that lambda will be executed some time later from the thread that is calling io_service::run().

silk::async([=]()
{
    save_settings();
});

or

silk::async([=]()
{
    autojsoncxx::to_pretty_json_file("battery_state.cfg", m_saved_state);
    QLOGI("Capacity used: {}mAh", m_capacity_used_mah);
});

The function returns a boost::future so the result of the lambda can be polled for.

I like the async way of doing things. The future/promise mechanism is a very elegant way to express computations. It reminds me a bit of Haskell’s laziness.

RUDP API & implementation details

I committed the latest version of RUDP. After many changes in design and implementation I’ve got something that I’m satisfied with.

The API is very simple – packets can be sent and received on 32 individual channels:

bool send(uint8_t channel_idx, uint8_t const* data, size_t size);
bool try_sending(uint8_t channel_idx, uint8_t const* data, size_t size);

try_sending is like send but fails if the previous send on the same channel hasn’t finished yet. This can happen with the video frames, which take some time to compress when a big keyframe arrives and there’s a context switch in the middle of the send. Since the camera data callback is async, it might try to send again too soon – so it uses try_sending to avoid getting blocked in the send call.
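
The camera callback ends up looking roughly like this (a sketch assuming an RUDP instance called s_rudp and a made-up channel index):

#include <cstddef>
#include <cstdint>

//called asynchronously whenever a new compressed frame is ready;
//if the previous frame is still being sent, drop this one instead of blocking
void on_video_frame(uint8_t const* data, size_t size)
{
    constexpr uint8_t VIDEO_CHANNEL = 4; //hypothetical channel index
    if (!s_rudp.try_sending(VIDEO_CHANNEL, data, size))
    {
        //previous send still in flight - skip this frame, a new one arrives in ~33ms
    }
}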

bool receive(uint8_t channel_idx, std::vector<uint8_t>& data);

This call fills the data vector with a packet from channel channel_idx and returns true if it succeeds.

Each of these channels has a few parameters that control the trade-offs between bandwidth, latency, reliability and speed.

The most important parameter is the one that controls delivery confirmations: Send_Params::is_reliable. Reliable packets keep getting resent until the other end confirms or cancels them – or until the Send_Params::cancel_after time runs out. Silkopter uses reliable packets for the remote control data, calibration data and pid params (the silk::Comm_Message enum).

Unreliable packets are sent only once even if they get lost on the way. They are used for telemetry and the video stream since this data gets old very fast – 33ms for both video and telemetry.
A useful parameter for unreliable packets is Send_Params::cancel_on_new_data. When true, new data cancels all existing unsent data on the same channel. This is very useful at low bandwidth, when video frames can take longer than 33ms to send.

Another parameter – at the receiving end this time – is Receive_Params::max_receive_time, which indicates how long to wait for a packet. This is useful for the video stream in case frame X is not ready but frame X+1 is already available. When a packet is skipped because of this parameter, a cancel request is sent to the other end to indicate that the receiver is not interested in that data anymore. This saves quite a bit of bandwidth.
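
Putting these together, the two kinds of channels end up configured roughly like this (the parameter names are the ones above, but the struct layout, field types and values are guessed for illustration):

#include <chrono>

//video: unreliable, drop stale frames, and don't wait too long on the receiving end
Send_Params video_send;
video_send.is_reliable = false;
video_send.cancel_on_new_data = true; //a new frame cancels older unsent ones

Receive_Params video_receive;
video_receive.max_receive_time = std::chrono::milliseconds(100);

//remote control: reliable, but stop resending once the data is too old to matter
Send_Params control_send;
control_send.is_reliable = true;
control_send.cancel_after = std::chrono::milliseconds(500);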

Zlib compression can be set per channel – and it’s on for all channels in silkopter, including the video stream, where it saves between 3 and 10% of the frame size at a ~10% CPU cost.


Internally, packets are split into fragments of MTU size (currently 1KB). Each fragment is identified by an ID + fragment IDX – so fragments from the same packet share the ID.

The first fragment has a different header than the rest.

Fragments are sent as datagrams, same as pings, confirmations and cancel requests.
A datagram has a small header (5 bytes) containing the crc of all the data and the type of the datagram. Based on the type, the header can be cast to a specialized header.
The crc is actually a murmur hash of the datagram data and I’m not sure it’s really needed since UDP has its own crc, but better safe than sorry. It’s very fast anyway and doesn’t even show up in the profiler.
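
Based on that description, the common header looks something like this (a reconstruction for illustration, not the actual declarations – the specialized fields in particular are guessed):

#include <cstdint>

#pragma pack(push, 1) //the header is 5 bytes on the wire, so no padding

//shared by all datagrams: a murmur hash of the payload plus the datagram type,
//which tells the receiver which specialized header to cast to
struct Datagram_Header
{
    uint32_t crc;
    uint8_t type; //fragment, confirmation, cancel request, ping...
};

//example of a specialized header: fragments carry the packet id,
//their index within the packet and the channel they belong to
struct Fragment_Header : Datagram_Header
{
    uint32_t packet_id;
    uint16_t fragment_idx;
    uint8_t channel_idx;
};

#pragma pack(pop)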

The datagrams are managed by a pool using intrusive pointers, to avoid allocating the datagram data (a std::vector) or a separate ref count (as would happen with std::shared_ptr).

My test so far was this:
With both the silkopter and the GS far from the access point, I’m sending a video stream with enough bandwidth to choke the wifi. This is ~400KB/s in my current test scenario. Then I’m pushing data through a reliable channel at ~10-20 packets per second, amounting to 6KB/s. So far all the data got through in less than 200-300ms, which is 2-3x my max RTT. Pretty happy with the result, considering that with my previous setup – TCP for reliable data + UDP for video – I was getting 2-3 seconds of lag even next to the access point in some worst cases.


The only thing missing is handling the case when one end restarts. This is problematic right now because RUDP keeps track of the last valid received packet ID and ignores packets with IDs smaller than that. So when one of the ends restarts, all its packets are ignored until it reaches the ID of the last packet sent before the restart… Not good.


Reliable UDP

Silkopter uses 2 sockets to communicate with the ground station: a reliable TCP socket for the remote control, diagnostics, telemetry etc. and an unreliable UDP socket for the video stream. Both go through an Alfa AWUS036H wifi card.

The UAV brain splits video frames into 1Kbyte packets in order to minimize their chances of corruption – and marks each of them with the frame index and timestamp. The ground station receives these packets and tries to put them in order. It waits for up to 100ms for all the packets of a frame to be received before either presenting the frame to the user or discarding it. So it allows some lag to improve video quality, but above 100ms it favors real-time-ness over quality.
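
Each of those 1Kbyte packets only needs a tiny header for the GS to be able to reassemble frames; something along these lines (a sketch, not the actual struct):

#include <cstdint>

#pragma pack(push, 1)
//hypothetical per-packet header for the raw UDP video stream
struct Video_Packet_Header
{
    uint32_t frame_idx;    //which frame this packet belongs to
    uint32_t timestamp_ms; //when the frame was captured
    uint16_t packet_idx;   //position of this packet within the frame
    uint16_t packet_count; //how many packets make up the whole frame
};
#pragma pack(pop)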

All the remote control, telemetry and sensor data is sent through TCP and in theory, given a good enough wifi link, this should be ideal. But it’s not, not even close. I consistently get 300-2000ms (yes, up to 2 full seconds) of lag over this channel.
All of this is due to how TCP treats network congestion. When lots of packets are lost, TCP assumes the network is under a lot of pressure and throttles down its data rate in an attempt to prevent even more packet loss. This is precisely what is causing my lags – the TCP traffic hits the heavy, raw UDP video traffic and thinks the network is congested, so it slows down a lot. It doesn’t realize that I care more about the TCP traffic than the UDP one, so I end up with smooth video but zero control.

My solution is to create a new reliable protocol over UDP and send control, telemetry and sensor data over this channel, in parallel to the video traffic. In low bandwidth situations I can favor the critical control data over the video one.

There are lots of reliable udp libraries but I always preferred writing my own for simple enough problems when there is the potential of getting something better suited to my needs (not to mention the learning experience).

So my design is this:

  1. I will have a transport layer that can send messages – binary blobs – and presents some channels where received messages can be accessed.
  2. Data is split in messages, and each message has the following properties:
    1. MTU. Big messages can be split into smaller packets. If a packet is lost, only that packet is resent if needed. This should limit the amount of data to resend.
    2. Priority. Messages with higher priority are sent before those with lower priority.
    3. Delivery control. If enabled, these messages will send delivery confirmations. If a confirmation is not received within X seconds, the message is resent. The confirmations themselves also have a priority – just like any other messages.
    4. Waiting time. If a low priority message has waited for too long, it will have its priority bumped. For video, I’ll use a 500ms waiting time to make sure I get some video even in critical situations.
  3. Channels. Each message will have a channel number so that the receiving end can configure some options per channel. Video will have one channel index, telemetry another and so on. The GS will be able to configure a 100ms waiting period for out-of-order messages on the video channel for example.


My current implementation uses boost::asio and I intend to keep using it.

As soon as I finish the stability pids I’ll move to the new protocol.


Remote debugging on the Raspberry PI from QtCreator

For a few weeks now I’ve been getting an error in QtCreator every time I started the debugger: “The selected build of GDB does not support Python scripting. It cannot be used in Qt Creator.”

I imagined it was due to my recent raspbian cleanup, as I had deleted everything I could (don’t remember why…). But yesterday I reinstalled raspbian from scratch due to a kernel panic, reinstalled ubuntu due to a new laptop, and the error was still there.

Long story short – it seems that the latest versions of QtCreator require python support in the _host_ gdb to be able to format the debug data. The crosscompile toolchain for the raspi (https://github.com/raspberrypi/tools) lacks python support, and I’m too green in the linux ways to compile my own toolchain, so I got used to the idea of working without a debugger…

Today, desperate after 2 days of trying to nail down a crash, I found out that you can use gdb-multiarch and voila – debugging is back.

So:

sudo apt-get install gdb-multiarch

Point QtCreator to it:

Screenshot from 2014-09-28 22:54:44

and profit!

…And the code is up

You can find the WIP silkopter code here:

https://code.google.com/p/silkopter/


It’s quite a lot of code as it includes a 3d engine used in the ground station.

It depends on boost, freetype2 and bullet physics for the simulator.

I use QtCreator on Ubuntu 14.04 for development but the code should work just fine on windows, as up to 6 months ago I was developing in Visual Studio 2013.


The code is published under GPL.


Enjoy

Mixing motors

I flew the quad in the simulator this weekend trying to tune the rate PIDs.

The first problem is the D term, which is very sensitive to sensor noise. For this I added a butterworth filter with a 5Hz cutoff. I’ll probably need to tune this later.

The next problem is that my motor mixing doesn’t preserve total thrust, but instead preserves angular velocity. A rookie mistake, I presume.

First a stolen picture of a quad in ‘plus’ configuration:

quad

Motor 1 is towards the front of the quad, 2 to the right, 3 is in the back and 4 to the left. As you can tell from the picture, diagonally opposite motors spin in the same direction – so 1 & 3 spin clockwise while 2 & 4 spin counter-clockwise.

If all motors were spinning in the same direction, the whole quad would start to spin in the opposite direction to counter the rotation of the motors. For this reason the motors spin the way they do – to counter this effect.

Now if the quad wants to go forward, it runs motor 3 faster and motor 1 slower causing it to tilt forward. This will get it to accelerate forward as some of the downward thrust goes backwards now.

Same for going left, right and backwards.

If the quad wants to rotate clockwise, it spins motors 2 & 4 faster than motors 1 & 3. This causes the quad frame to spin in the opposite direction to motors 2 & 4 – so clockwise.

Back to the mixing:

The inputs to the mixer are 4 quantities:

1. collective throttle

2. yaw rotation factor – rotation along the Z axis obtained with differential thrust of diagonal motors

3. pitch factor – rotation along the X axis

4. roll factor – along the Y axis

The mixer then proceeds to calculate the throttle of each motor based on its position in the X-Y plane. The hard constraints are: 1 – never go below a minimum throttle, to prevent the motors from stopping, and 2 – never go above full throttle (because physics…).
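
The mixing itself is just a signed sum per motor; for the plus configuration above it looks like this (a sketch with arbitrary sign conventions – the min/max handling discussed below comes on top of it):

#include <array>

//plus configuration, motors numbered as in the picture:
//1 = front, 2 = right, 3 = back, 4 = left; 1 & 3 spin CW, 2 & 4 spin CCW.
//positive yaw here means clockwise, so the CCW motors (2 & 4) get more throttle
std::array<float, 4> mix_plus(float throttle, float yaw, float pitch, float roll)
{
    std::array<float, 4> out;
    out[0] = throttle - pitch - yaw; //front
    out[1] = throttle - roll  + yaw; //right
    out[2] = throttle + pitch - yaw; //back
    out[3] = throttle + roll  + yaw; //left
    return out;
}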

I had code in place to satisfy these hard constraints in a clever way but it turns out I wasn’t so clever. Take this for example:

//check if under minimum
if (min < m_min_output)
{
    for (auto& motor: m_motors)
    {
        motor.output += m_min_output - min;
        max = math::max(max, motor.output);
    }
    min = m_min_output;
}

//check if over maximum
if (max > m_max_output)
{
    for (auto& motor: m_motors)
    {
        motor.output -= max - m_max_output;
        min = math::min(min, motor.output);
    }
    max = m_max_output;
}

If a motor goes above max (100%), the overflow is subtracted from all the motors. The same happens in reverse when one goes below min (5%).

Turns out this doesn’t preserve total throttle, but it does preserve angular velocity – e.g. outputs of 0.2 / 0.6 / 1.1 / 0.6 get shifted down to 0.1 / 0.5 / 1.0 / 0.5, so the differences between motors (and thus the torques) stay the same while the total thrust changes.

As a consequence sometimes my quad would fly away even at min throttle.

Time to fix this.

Simulator

I’ve managed to get some of the simulator UI and rendering done.

For now the sensors will be simulated by taking the uav rigid body (cylinder) state and applying some random white noise to it. I’ll probably have to add some very low frequency noise to simulate temperature drift.
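
The white noise part is a one-liner per reading (a sketch; the temperature drift would just be a slowly changing bias added on top):

#include <random>

//add zero-mean gaussian noise to a simulated sensor reading
float noisy(float true_value, float stddev)
{
    static std::mt19937 rng{std::random_device{}()};
    std::normal_distribution<float> noise(0.f, stddev);
    return true_value + noise(rng);
}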

This data is then sent to the brain board through a tcp channel with the same interface as the serial channel the brain uses to talk to the io_board (the avr). The simulator basically replaces the io_board with a bullet physics simulation.

sim