Tag Archives: silkopter

Comms, revisited

After the latest incident and the loss of my quad, it became clear that I need reliability. So far I’ve focused on physical reliability – whether the quad is sturdy enough to take a light crash, whether it’s well balanced, etc. But I kind of ignored the other two aspects – software and comms. Well – not ignored, more like I hoped for the best.

So the next issues to fix are:

  1. Solid comms for the commands (RC) and telemetry. The RFM22B is the perfect candidate for this:
    1. It’s lightweight and small.
    2. Very sensitive receiver (-104dBm @ 125kbps)
    3. Powerful TX amp @ 20dBm
    4. 64-byte FIFO, so I can drive it directly from the Raspberry Pi
    5. SPI interface, of which the RPI has plenty
    6. Good penetration due to the low frequency (433MHz)

With this setup I should have 5-10km range, way more than the WiFi video link allows.
The issue will be the comms protocol, as the device is half-duplex and the bandwidth is not that much. A quick calculation shows it takes around 5ms to send an entire 64-byte message (about 4.1ms on air at 125kbps, plus overhead). With a simple protocol where the GS is the master and the quad the slave, I can allocate 5ms slots to each in turn. This gives a symmetric raw bandwidth of about 6.4Kbytes/s; minus protocol losses, let’s say 5Kbytes/s per direction. Should be enough for control and telemetry data.

2. Solid software, especially the GS. I intend to get an RPI3 and a display and build a dedicated GS to replace my laptop. For now I will keep my PS4 controller, but in the future I aim to build an entire RC system with an integrated touchscreen around the RPI.

I want to run the RPI without an X server, and for this I will need a Qt version that supports it. I don’t know if that’s possible though.
If this fails, I’ll stop using Qt and do all the rendering and UI with something else, but that would be quite painful… I’m used to Qt and despite all its quirks it’s very powerful for quick UI work.

 

What I do know that works in the current setup is:

  1. The RPI2 is powerful enough to handle a quad with WiFi video streaming. It was using ~16-20% CPU at any moment with around 5 threads, so no issues here.
  2. The case I designed is very sturdy. It’s a combination of 10mm carbon fiber tubes for the arms and ABS plastic for everything else.
  3. Flight time with the current weight of 650 grams is around 20-25 min with a 3000mAh Multistar LiHV battery. Enough time to enjoy the landscape.
  4. The motors + ESC pair are a nice match. RCTimer 1806 1450KV motors and the RTFQ 4-in-1 12A ESC seem to handle the load nicely. A bit underpowered due to the props – 7×2.4 inches – but I plan to move to 8×4.5 for the next quad.

 

So back to the drawing board for a while.

Back

After a long break, I’m back with a video showing some of the progress I’ve made lately:

 

Silkopter actually flew this summer and soon it will fly again. This time with a Raspberry Pi 2 instead of the OdroidW. 4 cores, yaay!

 

Excuse the lack of music. YouTube doesn’t provide many enjoyable options.

Enjoy

Mixing Motors #3

I’ve written about this before (here & here). In the previous code I was making a few assumptions:
1. Overall motor rotation is zero – so there is the same number of clockwise and counter-clockwise motors
2. Thrust vector is always pointing downwards
3. No physical units. The old mixer worked in ratios – full rotation along X axis, 0.3 rotation along Z Axis and so on

Now I want to get rid of all these assumptions and build a mixer that given a desired torque and the motor+quad configuration – it can figure out how much throttle to apply to each motor.

This is proving more complicated than I thought and the fact that I don’t have any formal math background doesn’t help at all.

So my requirements:
– Work in physical units. I want to send the mixer two quantities: a torque in Nm and a collective force in N.
– Any (valid) motor configuration. X and + frames, quad, hexa, octo, spider, Y6 etc. Including motors with arbitrary thrust vectors – like a tricopter. Non-downward thrust vectors open the door to a nice simplification, as we’ll see below.
– Unbalanced setups. In reality it often happens that motors differ, and I want the mixer to account for this and compensate without forcing the PIDs to do it.

So first step is to have a model for the motor. I have the following struct to represent the relevant motor data:

struct Motor
{
    math::vec3f position;
    math::vec3f thrust_vector;
    bool clockwise = true;
    float max_thrust = 0; //N
    float max_rpm = 0;
    float acceleration = 0; //rpm/s
    float deceleration = 0; //rpm/s
};

Position is in local space, relative to the center of mass of the UAV.
Thrust vector is normalized.
Max_thrust is in N and is the thrust of the motor at max_rpm.
Acceleration and deceleration are used to model motor+prop inertia.

Using the position and thrust_vector I can calculate the max torque the motor will produce (torque = r × F, so position cross force):

max_torque = math::cross(position, thrust_vector) * max_thrust;

Using max_rpm and max_thrust I can figure out how much throttle to apply to get a certain thrust using the following approximation:

//thrust increases approximately with throttle^2
auto throttle = math::sqrt<float, math::safe>(target_thrust / max_thrust);

Now the problem is – how to figure out what throttle to send to each motor to get a certain torque and total force?

Let’s consider a simple case: a perfectly balanced quadcopter with a + frame. Each motor points downwards, so it generates a torque along one of the axes (and arms), but there is the yaw torque as well that needs to be taken into account. This can be modeled by simply tilting the thrust vector a bit, which in turn creates a small torque on the Z axis. The tilt factor depends on the motor and prop, and it’s usually a function of RPM or RPM^2. For now let’s consider it constant.

So each motor produces a max torque: A, B, C, D.
A and C are opposite, just like B and D:

A = -C
B = -D

What I want to find out is the factors a, b, c and d in the equation below:
1. aA + bB + cC + dD = T    where T is the target torque of the quad

As you can imagine there are an infinite number of factors that give a good answer so we need more constraints. These are:
2. I also want a total thrust of Th.
3. I want a, b, c, d to have the minimum values that result in the desired torque. So I want the optimal solution.

For a quad, equations 1 and 2 are enough to calculate the optimal factors, but for a hexa, octo or any other setup they are not. I don’t know how to express eq. 3 mathematically, but together with the others it should result in a unique set of factors.
So in the past days I tried to figure out how to calculate the coefficients for some very weird (but valid) motor configurations and in the end I gave up on the math approach.

So what’s left when the analytic approach is out of reach? Brute force, of course.
I wrote a small iterative algorithm that tries to find the factors with a form of binary search. It’s very short and seems to be fast enough. It takes a motor config, a target torque and a step size and computes the factors with a precision of 0.0001.

A step size of 0.01 requires ~3,000-7,000 iterations, 0.1 requires 300-700 and 0.5 can solve it in 50-100 iterations. Pretty good.
However, I cannot prove that the algorithm will always converge (if I could prove this, then I could probably also solve the initial problem analytically). I’ve been running it all day with random valid motor configs and random target torques, and so far I’ve had no bad cases.

Here it is:

void test_mm(math::vec3f const& target, float step)
{
    struct Motor
    {
        math::vec3f torque; //precomputed maximum torque
        float factor;
    };

    //some test configuration with random data
    std::vector<Motor> motors = {{
                                     {{2, 3, 0.1}, 0},
                                     {{5, -1, -0.1}, 0},
                                     {{-1, -7, 0.1}, 0},
                                     {{-4, 2, -0.1}, 0},
                                     {{-0.1, .2, -1}, 0},
                                }};

    size_t iteration = 0;
    while (true)
    {
        //first calculate the crt torque
        math::vec3f crt;
        for (auto& m: motors)
        {
            crt += m.torque * m.factor;
        }

        //check if we're done
        if (math::equals(crt, target, 0.0001f))
        {
            QLOGI("{}: Done in {} iterations", step, iteration);
            break;
        }

        //how far are we from the target?
        //divide by motor count because I want to distribute the difference to all motors
        auto diff = (target - crt) / float(motors.size());

        //distribute the diff to all motors
        for (auto& m: motors)
        {
            //figure out how much each motor can influence the target torque
            //  by doing a dot product with the normalized torque vector
            auto t = math::normalized(m.torque);
            auto f = math::dot(t, diff) * step; //go towards the difference in small steps so we can converge
            m.factor += f;
        }

        iteration++;
    }
}

And the test code:

q::util::Rand rnd;
while (true)
{
    math::vec3f target(rnd.get_float() * 40.f, rnd.get_float() * 40.f, rnd.get_float() * 10.f);
    test_mm(target, 0.01);
    test_mm(target, 0.1);
    test_mm(target, 0.5);
}

Now I have to test it in the simulator.

Multirotor diagram

The new GS node editor is progressing nicely and everything seems to fit. I have data visualization for most types of streams with scopes and FFTs and yesterday I added a map widget to visualize the GPS ECEF location.

On the silkopter brain side I kept having a weird feeling that something was not right – that I couldn’t model everything with nodes and streams. Since I tend to obsess about finding the correct paradigm for modeling a process, I started drawing multirotor diagrams to see if/where I’d hit a roadblock. After 6 tries I got this:

(excuse the quality, I got tired of drawing on my galaxy note 4 after the 3rd diagram and went back to paper)

(diagram: the multirotor control pipeline, from pilot input down to the PWM sink)

I’ll start from the end – the PIGPIO sink. This takes a PWM stream from the Throttle To PWM node which does exactly what the name says – converts a throttle stream into a PWM stream.

The Throttle stream comes from the motor mixer which uses data from the Multirotor Model Data node (top cloud) to convert a Torque stream and a Force stream into throttle values for each motor. This conversion can be done easily for a certain combination of motors + propellers. It doesn’t have to be very accurate as there are some PIDs before this to compensate.

The Torque stream is calculated by the AV Model node (now renamed to Rate Model), which basically combines a rate PID with a feed-forward model, again using the cloudy model data at the top. It takes as inputs an angular velocity stream from the gyro and a target angular velocity stream from the Stability Model and tries to match them. The corrections represent the Torque stream.

The Stability Model node is responsible for matching a given Reference Frame (from the AHRS node) with a target ref frame from the Pilot or Assisted Model Node. The output of the Stability Model Node goes into the Rate Model Node.

The Assisted Model Node is responsible for matching a stream of velocities from the GPS with a target velocity from the Pilot. The velocities include both horizontal and vertical (altitude) components. It’s used to model a very intuitive control scheme – move left at 1m/s, or climb at 2m/s. Its outputs are a ref frame stream for the Stability Model (which will try to match it) and an up force stream fed into the Motor Mixer Node.

The Pilot Node interprets RC inputs (not shown in the diagram) and outputs either a ref frame + up force stream when using rate/stab controls, or a velocity stream when using an assisted control scheme.

So, going top-down now:

– The Pilot reads RC commands and outputs either low-level rotation + vertical force data or high-level velocity data

– The Assisted Model reads velocity data and tries to match those inputs by changing the orientation and altitude of the quad

– The Stability Model reads orientation data and tries to match it by rotating the quad. It uses a stab PID to do this (until I have a good FF model).

– The Rate Model reads rotation speed and tries to match it by outputting torques. It uses the quad geometry and weight + rotational inertia of the motors + props to figure out how much torque to apply to rotate with a certain speed. Small corrections are done with a rate PID

– The Motor Mixer reads torques and figures out what throttle each motor should have based on the geometry of the quad and power train.

So far this looks like it should work.

UAV Editor

The signal processing view of a quadcopter has one obvious result – a node editor.

Still WIP but here’s the first screenshot:

(screenshot: the first version of the node editor)

The top-left part has some nodes that compute a Reference Frame, a Location and a Battery State from an MPU9250 IMU, an MS5611 baro, a Ublox GPS and an RC5T619 ADC.

There are some processors there that do some resampling or convert streams (like an ADC_Voltmeter that converts from ADC to Voltage).

The nodes are read from the brain running on the Raspberry Pi. Each node has:

– a unique name
– a type (class_id)
– a list of inputs
– a list of outputs
– an init_params json which represents fixed configuration that is applied when the node is created. Stuff like IMU sample rate or gyro range.
– a config json that has things that can be changed at any moment – like where the inputs/outputs are connected or calibration data, or cutoff frequency for LPFs.

Each input/output has a name and a stream type that determines where it can be connected. There’s also a sample rate that has to be matched.

Next is to allow uploading of the json configs to the brain and more editing features.

The node editing is done with this library.

The Pilot

So I have my nodes almost done. There are acceleration streams, angular velocity, compass, gravity, reference frames and sonar distances and all these have to translate somehow to PWM pulses for the motors.

My initial diagram (here) showed a rate PID  – with angular velocity as input and torque as output – but it’s missing something: the target angular velocity. I’m missing the input.

I kept thinking about how to model the input that controls all these nodes, and then it hit me – I’m missing the pilot. The central node that reads all the input sources and gives commands to the actuators – the sink nodes. The pilot knows whether it’s piloting an airplane, a multirotor, a rover or a boat, so I need one for each vehicle type.

The pilot receives the inputs from the Comms class and outputs a torque stream that will be fed into the motor mixer and then into the thrust node.

So I created the processing node Multirotor_Pilot that takes these streams:

– an angular velocity for the Rate Pids

– a reference frame stream for the stability (horizontal) pids

– a location stream for the assisted pids and for the RTH and other AI behaviors

– a vertical distance stream (from a sonar usually but maybe from a laser as well) for the ground avoidance

It will use these streams based on the current input mode and state (landing, take off, crash etc) and process them into a torque stream. This basically means how the pilot wants the quad to rotate.

The streams are generated from sensors, passed through low-pass filters and resampled to around 20-50Hz. No need for more, as anything higher would just add noise.

If I want to use silkopter with an airplane I will have to add a new Plane_Pilot class to model the plane. Same for any other vehicle.

Things are clearer now.

Signal processing

While thinking about ways to implement multiple sensor support – for several GPSes or gyroscopes – I realized that a UAV can be modeled as a signal processing device.

It can be split like this:

  • sources – like the MPU9250, which generates 4 signal streams (acceleration, angular velocity, magnetic field and temperature), or the SRF02 sonar, which generates a stream of distances
  • streams – like the acceleration from the IMU, or the location stream from the GPS. Each stream has a sample rate
  • sinks – like the pigpio PWM. Sinks accept streams of a certain sample rate only
  • processors – like low-pass filters (LPFs), the PIDs or the AHRS. Processors don’t have an inherent sample rate; they usually adopt the rate of their input

All of these are implemented as nodes with several inputs and outputs. All nodes are added to the HAL layer and both their properties and connections are configured from a json file.

The important aspect is that – as in all signal processing – the sample rate has to be properly matched between nodes to satisfy the Nyquist frequency. For example, the PWM has a max sample rate of 490Hz, so any signal component above ~245Hz will be seen as noise. Higher-frequency signals have to be resampled and low-pass filtered before being fed into the PWM node.

I started implementation last week and – after many rewrites – I have a model that ‘feels right’.

Here are some drawings:

(diagram: MPU9250 streams feeding the AHRS node)

Here the MPU9250 node exposes 4 sub-nodes (not shown) – an accelerometer, a gyroscope, a magnetometer and a thermometer. Each of these outputs a stream: acceleration stream, angular velocity, magnetic field and temperature.

The frequency of these streams is determined by the sensor. Acceleration and Angular Velocity are at 1Khz while magnetic field is at 100Hz. Temperature (not shown in the pic) is at 10Hz.

The Magnetic Field stream is fed into a Resampler node that takes it from 100Hz to 1000Hz, ready to be fed into the AHRS processor node. The reason for this is that upsampling the stream needs a low-pass filter to avoid parasitic frequencies. A sample rate of 100Hz can carry at most a 50Hz signal, so when going to a 1000Hz sample rate this maximum frequency has to be maintained. Simply cloning each sample 10 times is not enough, as this adds a lot of high-frequency harmonics.

So now all signals, at a 1KHz sample rate, are fed into the AHRS, which outputs a Reference Frame stream, also at 1KHz. This reference frame represents the rotation of the UAV relative to the earth’s frame of reference.

Right now I have a complementary filter for the AHRS node, but in the near future I will write an extended Kalman filter (EKF) for it – as soon as this tutorial is complete.

(diagram: computing gravity and linear acceleration from the reference frame and acceleration streams)

This diagram uses the AHRS output – the reference frame stream – plus the acceleration stream again to output 2 new streams – gravity vector and linear acceleration. These 2 streams will be further used for dead reckoning and stability pids.

(diagram: the basic rate PID pipeline)

Here I have the basic rate PIDs diagram. The angular velocity stream is first resampled from 1KHz to 50Hz (the pic says LPF but it’s actually a resampler node). The reason for this is that the usable bandwidth of the quadcopter is ~20-30Hz. Anything higher than this is noise.

The stream is further passed to the Rate PID processor node which outputs a torque stream (also at 50Hz). Torque is converted to thrust using a model of the motors+propellers (configurable of course), which is fed to the motor mixer and then in the PWM sink.

The advantage of this system is that new processors can be easily added to accommodate new hardware. For example if I want to use 2 GPS sensors I will add a redundancy processor that takes 2 location streams and outputs the healthiest one. Or I can merge them together to get better accuracy – while the rest of the nodes don’t even know nor care about this.

Basically I have minimal coupling between the elements of the system which talk through very well defined (and type safe) interfaces.

The ground station is also simplified. It can enumerate all the nodes, read their configuration as a JSON value, change it and push it back to the nodes. Changing a filter cutoff will become trivial.

So far I’m still working to match the old functionality, but in 1-2 weeks I believe this system will be ready for a test.

Git

The repo is now on github:

https://github.com/jeanleflambeur/silkopter

Git is still a bit scary compared to the familiar naivety of SVN, but it’s too powerful to ignore. The first thing I did after importing the SVN repo was to recover the Crius version of silkopter from last year and make a branch from it. This was doable in SVN as well, by checking out that revision and committing it to a branch… but in GitHub it was 2 clicks.