
Crash

I took the quad for a spin yesterday and figured out 4 things:

  1. It flew, and quite well. Very stable, the video quality was good and the signal was strong even 400-500 meters away
  2. I crashed a few times. See video here: https://www.youtube.com/watch?v=AjwiUQltHaE&feature=youtu.be
  3. Flight time was around 20-25 min with a 3000mAh Multistar LiHV battery. Average current at hover was around 6-7A, which is great. AUW was around 630g
  4. Eventually it flew away into a nearby forest and I couldn’t find it. The cause was a sudden loss of signal, which triggered the failsafe routine: the quad climbed to 50m, went home and then dropped to 5m (sketched below). The problem was that ‘home’ was a few hundred meters into the nearby forest due to GPS interference from the camera.
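
The routine boils down to something like this (a rough sketch of the sequence described above; all names are made up, this is not the actual silkopter code):

```cpp
// Hypothetical sketch of the failsafe sequence - not the actual silkopter code.
#include <cmath>

struct Vec3 { float x, y, z; };

static float distance_2d(Vec3 a, Vec3 b) { return std::hypot(a.x - b.x, a.y - b.y); }

enum class Failsafe_State { CLIMB, RETURN_HOME, DESCEND };

struct Failsafe
{
    Failsafe_State state = Failsafe_State::CLIMB;
    Vec3 home{}; // recorded at arming - with GPS interference this can be hundreds of meters off

    void process(Vec3 const& position)
    {
        switch (state)
        {
        case Failsafe_State::CLIMB: // 1. climb to 50m to clear obstacles
            if (position.z >= 50.f) { state = Failsafe_State::RETURN_HOME; }
            break;
        case Failsafe_State::RETURN_HOME: // 2. fly back to the recorded home point
            if (distance_2d(position, home) < 2.f) { state = Failsafe_State::DESCEND; }
            break;
        case Failsafe_State::DESCEND: // 3. drop to 5m and wait for the pilot
            break;
        }
    }
};
```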

So it’s gone; I have to build another one.

The lesson here is that I have some software issues to iron out and that I really need a second, independent signal path for the control link. I plan to use an RFM22B for this.

The material loss is not that huge – a Raspberry Pi, 4 RCTimer 1806 motors, a brand new battery and a Navio board. A total of 250 euros, plus about 2 weeks of work to print all the pieces, etc.

 

So the next one will need to fix these issues:

  1. Rock solid RC link for control
  2. Solid software without glitches. My GS crashed once in mid-flight (see video) due to the h264 decoder
  3. More portable GS. A laptop with 2 WiFi cards, a PS4 controller and many wires is not a fun way to go to the field and fly
  4. The raspicam interferes with the GPS something fierce. Even after copper shielding, the horizontal accuracy (H Acc) went from 2m to 50m after starting the camera…

 

 

Back

After a long break, I’m back with a video showing some of the progress I’ve made lately:

 

Silkopter actually flew this summer and soon it will fly again. This time with a Raspberry Pi 2 instead of the Odroid-W. 4 cores, yaay!

 

Excuse the lack of music. YouTube doesn’t provide many enjoyable options.

Enjoy

Multirotor diagram

The new GS node editor is progressing nicely and everything seems to fit. I have data visualization for most types of streams with scopes and FFTs and yesterday I added a map widget to visualize the GPS ECEF location.
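
Plotting an ECEF position on a map means converting it to latitude/longitude first. A minimal sketch of the standard WGS84 conversion (Bowring’s method) – not necessarily what the widget ends up using:

```cpp
// Sketch of an ECEF -> WGS84 latitude/longitude conversion (Bowring's method),
// the kind of transform a map widget needs before plotting a GPS ECEF sample.
#include <cmath>

struct LLA { double latitude, longitude, altitude; }; // radians, radians, meters

LLA ecef_to_lla(double x, double y, double z)
{
    constexpr double a = 6378137.0;                   // WGS84 semi-major axis
    constexpr double f = 1.0 / 298.257223563;         // WGS84 flattening
    constexpr double b = a * (1.0 - f);               // semi-minor axis
    constexpr double e2 = f * (2.0 - f);              // first eccentricity squared
    constexpr double ep2 = (a * a - b * b) / (b * b); // second eccentricity squared

    double p = std::hypot(x, y);
    double theta = std::atan2(z * a, p * b);
    double st = std::sin(theta), ct = std::cos(theta);

    double lat = std::atan2(z + ep2 * b * st * st * st,
                            p - e2 * a * ct * ct * ct);
    double n = a / std::sqrt(1.0 - e2 * std::sin(lat) * std::sin(lat)); // prime vertical radius
    return { lat, std::atan2(y, x), p / std::cos(lat) - n };
}
```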

On the silkopter brain side, I kept having a weird feeling that something was not right – that I couldn’t model everything with nodes and streams. Since I tend to obsess about finding the correct paradigm for modeling a process, I started drawing multirotor diagrams to see if/where I’d hit a roadblock. After 6 tries I got this:

(excuse the quality, I got tired of drawing on my Galaxy Note 4 after the 3rd diagram and went back to paper)

(photo: the hand-drawn multirotor diagram)

I’ll start from the end – the PIGPIO sink. This takes a PWM stream from the Throttle To PWM node which does exactly what the name says – converts a throttle stream into a PWM stream.
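
For illustration, the per-sample work of such a node could be as simple as this (a sketch; the 1000-2000µs range is the usual ESC convention, not a confirmed silkopter setting):

```cpp
// Sketch of a Throttle To PWM conversion: a normalized throttle in [0, 1] becomes
// an ESC pulse width in microseconds, ready for the PIGPIO sink.
#include <algorithm>
#include <cstdint>

struct PWM_Config
{
    uint32_t min_us = 1000; // pulse width at zero throttle
    uint32_t max_us = 2000; // pulse width at full throttle
};

uint32_t throttle_to_pwm(float throttle, PWM_Config const& cfg)
{
    throttle = std::clamp(throttle, 0.f, 1.f);
    return cfg.min_us + static_cast<uint32_t>(throttle * float(cfg.max_us - cfg.min_us));
}
```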

The Throttle stream comes from the Motor Mixer, which uses data from the Multirotor Model Data node (the top cloud) to convert a Torque stream and a Force stream into throttle values for each motor. This conversion can be done easily for a given combination of motors + propellers. It doesn’t have to be very accurate, as there are some PIDs before this to compensate.
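
A sketch of the idea for a quad-X frame (the mixing factors and the thrust-to-throttle mapping are stand-ins for the real model data):

```cpp
// Sketch of the motor mixing step: a torque stream (roll/pitch/yaw) plus a collective
// force become one throttle per motor, using the frame geometry.
#include <algorithm>

struct Torque { float roll, pitch, yaw; };

// Per-motor mixing signs for a quad-X layout (front-left, front-right, rear-right, rear-left)
struct Motor_Mix { float roll, pitch, yaw; };
constexpr Motor_Mix k_quad_x[4] = {
    { +1.f, +1.f, -1.f },
    { -1.f, +1.f, +1.f },
    { -1.f, -1.f, -1.f },
    { +1.f, -1.f, +1.f },
};

void mix(Torque const& torque, float collective_force, float max_motor_thrust, float out_throttle[4])
{
    for (int i = 0; i < 4; i++)
    {
        // each motor carries its share of the collective thrust plus the torque corrections
        float thrust = collective_force / 4.f
                     + k_quad_x[i].roll  * torque.roll
                     + k_quad_x[i].pitch * torque.pitch
                     + k_quad_x[i].yaw   * torque.yaw;
        // the Multirotor Model Data would map thrust to throttle for the motor+prop combo;
        // a crude linear approximation stands in for it here
        out_throttle[i] = std::clamp(thrust / max_motor_thrust, 0.f, 1.f);
    }
}
```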

The Torque stream is calculated by the AV Model node (since renamed to Rate Model), which basically combines a rate PID + a feed-forward model, again using the top cloudy model data. It takes as inputs an angular velocity stream from the gyro and an angular velocity target stream from the Stability Model and tries to match them. The corrections represent the Torque stream.
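
One axis of that, sketched (the gains and the inertia term are placeholders for what the model data would provide):

```cpp
// Sketch of the Rate Model idea: a feed-forward torque predicted by the model
// plus a small PID correction on the angular velocity error.
struct Rate_PID
{
    float kp = 0.05f, ki = 0.01f, kd = 0.001f;
    float integral = 0.f, last_error = 0.f;

    float process(float error, float dt)
    {
        integral += error * dt;
        float derivative = (error - last_error) / dt;
        last_error = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// one axis shown; the real node runs this for roll, pitch and yaw
float compute_torque(float target_rate, float gyro_rate, float target_accel,
                     float inertia, Rate_PID& pid, float dt)
{
    float feed_forward = inertia * target_accel;                  // torque the model predicts is needed
    float correction = pid.process(target_rate - gyro_rate, dt);  // PID covers model inaccuracies
    return feed_forward + correction;
}
```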

The Stability Model node is responsible for matching a given Reference Frame (from the AHRS node) with a target ref frame from the Pilot or Assisted Model Node. The output of the Stability Model Node goes into the Rate Model Node.
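
Sketched, the matching step could look like this (hand-rolled quaternion math for illustration; the real code presumably uses its own math library):

```cpp
// Sketch of the Stability Model step: the rotation taking the current reference frame
// onto the target frame becomes an angular velocity target for the Rate Model.
struct Quat { float w, x, y, z; };

Quat conjugate(Quat q) { return { q.w, -q.x, -q.y, -q.z }; }

Quat mul(Quat a, Quat b)
{
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// outputs the angular velocity (rad/s) the quad should rotate with, one stab P gain for all axes
void stability_step(Quat current, Quat target, float stab_p, float out_rate[3])
{
    Quat error = mul(target, conjugate(current)); // rotation taking 'current' onto 'target'
    if (error.w < 0.f) { error = { -error.w, -error.x, -error.y, -error.z }; } // shortest path
    // small-angle approximation: the vector part is ~half the rotation angle per axis
    out_rate[0] = 2.f * error.x * stab_p;
    out_rate[1] = 2.f * error.y * stab_p;
    out_rate[2] = 2.f * error.z * stab_p;
}
```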

The Assisted Model Node is responsible for matching a stream of velocities from the GPS with a target velocity from the Pilot. The velocities include both horizontal and vertical (altitude) components. It’s used to model a very intuitive control scheme – move left at 1m/s, or climb at 2m/s. Its outputs are a ref frame stream for the Stability Model (which will try to match it) and an up force stream fed into the Motor Mixer Node.
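
A sketch of that idea (the gains and limits are made up):

```cpp
// Sketch of the Assisted Model: horizontal velocity error becomes a tilt target
// (which the Stability Model will chase), vertical velocity error adjusts the up force.
#include <algorithm>

struct Assisted_Output
{
    float tilt_x, tilt_y; // desired tilt (rad) - defines the target ref frame
    float up_force;       // fed into the Motor Mixer
};

Assisted_Output assisted_step(const float target_vel[3], const float gps_vel[3], float hover_force)
{
    constexpr float k_tilt = 0.2f;   // rad of tilt per m/s of horizontal velocity error
    constexpr float k_climb = 1.5f;  // extra thrust per m/s of climb rate error
    constexpr float max_tilt = 0.5f; // ~30 degrees

    Assisted_Output out;
    // 'move left at 1 m/s' really just means holding a small constant tilt to the left
    out.tilt_x = std::clamp(k_tilt * (target_vel[0] - gps_vel[0]), -max_tilt, max_tilt);
    out.tilt_y = std::clamp(k_tilt * (target_vel[1] - gps_vel[1]), -max_tilt, max_tilt);
    // altitude: start from the force needed to hover, correct for the climb rate error
    out.up_force = hover_force + k_climb * (target_vel[2] - gps_vel[2]);
    return out;
}
```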

The Pilot Node interprets RC inputs (not shown in the diagram) and outputs either a ref frame + up force stream when using rate/stab controls, or a velocity stream when using an assisted control scheme.

So, going top-down now:

– The Pilot reads RC commands and outputs either low-level rotation + vertical force data or high-level velocity data

– The Assisted Model reads velocity data and tries to match those inputs by changing the orientation and altitude of the quad

– The Stability Model reads orientation data and tries to match it by rotating the quad. It uses a stab PID to do this (until I have a good FF model).

– The Rate Model reads rotation speed and tries to match it by outputting torques. It uses the quad geometry and weight + rotational inertia of the motors + props to figure out how much torque to apply to rotate with a certain speed. Small corrections are done with a rate PID.

– The Motor Mixer reads torques and figures out what throttle each motor should have based on the geometry of the quad and power train.

So far this looks like it should work.

UAV Editor

The signal processing view of a quadcopter has one obvious result – a node editor.

Still WIP but here’s the first screenshot:

(screenshot: the first WIP version of the node editor)

The top-left part has some nodes that compute a Reference Frame, a Location and a Battery State from an MPU9250 IMU, an MS5611 baro, a u-blox GPS and an RC5T619 ADC.

There are some processors there that do some resampling or convert streams (like an ADC_Voltmeter that converts from ADC to Voltage).
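
The ADC_Voltmeter, for example, is conceptually just a scale + bias applied per sample (the values below are made up):

```cpp
// Sketch of the ADC_Voltmeter: a unitless ADC sample stream becomes a Voltage stream
// via scale + bias taken from the node's config.
struct ADC_Voltmeter_Config
{
    float scale = 3.3f * 11.f; // ADC reference voltage times the resistor divider ratio
    float bias = 0.f;
};

float process_sample(float adc, ADC_Voltmeter_Config const& cfg) // adc normalized to [0, 1]
{
    return adc * cfg.scale + cfg.bias; // output sample, in volts
}
```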

The nodes are read from the brain running on the Raspberry Pi. Each node has:

– a unique name
– a type (class_id)
– a list of inputs
– a list of outputs
– an init_params json which represents fixed configuration applied when the node is created – stuff like the IMU sample rate or gyro range
– a config json with things that can be changed at any moment – like where the inputs/outputs are connected, calibration data, or the cutoff frequency for LPFs (a hypothetical example of both follows below)
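
As an example, a node description could look something like this (a purely hypothetical shape for an MPU9250 node, not the actual schema):

```json
{
  "name": "imu",
  "class_id": "MPU9250",
  "init_params": { "sample_rate": 1000, "gyro_range": 500 },
  "config": {
    "outputs": { "angular_velocity": "imu/angular_velocity" },
    "calibration": { "gyro_bias": [0.01, -0.02, 0.005] }
  }
}
```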

Each input/output has a name and a stream type that determines where it can be connected. There’s also a sample rate that has to be matched.

Next up is allowing the json configs to be uploaded to the brain, plus more editing features.

The node editing is done with this library.