
New GS

In the past week I started working on the new Ground Station. The problems I’m trying to fix are:

  1. I want a dedicated GS, not a laptop. For this purpose I’m trying a Raspberry Pi with a 7 inch display (the official RPI touchscreen display), together with a custom case, 2 RC sticks and various buttons. The reason is that a laptop is unwieldy, isn’t bright enough in direct sunlight, tends to get busy with other tasks at the worst moments, and doesn’t have a touchscreen (at least my current one doesn’t).
  2. I want a nicer interface that focuses on the main purpose of the GS – flying. The current one focuses more on editing the nodes of the quad.

So I started a new project – the GS2 – using Qt’s QML and a touch-driven UI. So far so good; here are some screenshots:

[screenshots of the GS2 touch UI]

The resolution is 800×480, a bit small TBH but enough for now. The display size is 7″.

So far the experience with QML has been great – it takes very little time to build a nice, functional UI. It does require building proxies for anything that has to be exposed from the C++ side to QML, but that’s a good thing, as it separates the UI from the logic.
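For reference, this is roughly what such a proxy looks like – a minimal sketch with hypothetical names (TelemetryProxy, batteryVoltage), not the actual GS2 code:

```cpp
// Minimal sketch of a C++ proxy exposed to QML (hypothetical names).
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQmlContext>
#include <QObject>

class TelemetryProxy : public QObject
{
    Q_OBJECT
    // QML sees this as a plain property and re-reads it on batteryChanged()
    Q_PROPERTY(float batteryVoltage READ batteryVoltage NOTIFY batteryChanged)
public:
    explicit TelemetryProxy(QObject* parent = nullptr) : QObject(parent) {}
    float batteryVoltage() const { return m_batteryVoltage; }
    void setBatteryVoltage(float v) { m_batteryVoltage = v; emit batteryChanged(); }
signals:
    void batteryChanged();
private:
    float m_batteryVoltage = 0.f;
};

int main(int argc, char* argv[])
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;
    TelemetryProxy telemetry;
    // QML can now bind to `telemetry.batteryVoltage`
    engine.rootContext()->setContextProperty("telemetry", &telemetry);
    engine.load(QUrl("qrc:/main.qml"));
    return app.exec();
}

#include "main.moc" // needed because Q_OBJECT is used in a .cpp file
```

On the QML side a label can then bind directly to `telemetry.batteryVoltage`, and it updates whenever batteryChanged() is emitted – the UI never touches the underlying logic.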

The main things that I still have to figure out are:

  1. How to create a property sheet in QML. This is used to show the init params and configuration for nodes. In the current GS this UI is generated, so there is no need to write a custom UI for every node – just a description of the parameters in a JSON file.
  2. How to write the node editor in QML. Screen real-estate is limited and the node structure can get complex. One possible solution is to allow node groups – basically take the IMU and the low-pass filters and build a meta-node, a Filtered IMU, and treat that as a new node in the UI. Or group all the Navio sensors together in a new Navio Node and avoid all the current clutter. This is an interesting feature that deserves a few weekends.

 

The physical part of the GS – the case, sticks and buttons – is still to be designed, but I’m sure I can come up with something in a few weekends. The main parts will be:

  • A Raspberry Pi to drive the display and run the actual GS
  • An AVR to read the sticks, battery voltage and buttons. This could be replaced with another multichannel ADC and the RPI GPIO
  • A few analog sticks. I’m thinking of salvaging the ones in my TH9x remote or buying a few of these or these
  • An RFM22B and 2 TL-WN722 wifi cards for the comms. This will be in a detachable module in the GS, connected by a USB cable so I can mount it higher or on top of my car
  • LEDs, a piezo speaker etc.

I already have the RPI + display and the first version of the GS working – check out the video here:


All in all – a few months of work. I hope I don’t get demotivated.

 

 


Reliable UDP

Silkopter uses 2 sockets to communicate with the ground station: a reliable TCP socket for remote control, diagnostics, telemetry etc., and an unreliable UDP socket for the video stream. Both go through an Alfa AWUS036H wifi card.

The UAV brain splits video frames into 1Kbyte packets in order to minimize their chances of corruption, and marks each of them with the frame index and timestamp. The ground station receives these packets and tries to put them in order. It waits for up to 100ms for all the packets of a frame to be received before either presenting the frame to the user or discarding it. So it allows some lag to improve video quality, but above 100ms it favors real-time-ness over quality.
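To illustrate, here’s a rough sketch of such a packet header and the per-frame reassembly buffer. All names and field layouts are hypothetical, not the actual silkopter code:

```cpp
// Hypothetical sketch: header prepended to each 1Kbyte video packet,
// plus the buffer the GS uses to reassemble a frame until it is
// complete or the 100ms window expires.
#include <cstdint>
#include <map>
#include <vector>
#include <chrono>

#pragma pack(push, 1)
struct Video_Packet_Header
{
    uint32_t frame_index;     // which video frame this packet belongs to
    uint64_t frame_timestamp; // capture time of the frame, microseconds
    uint16_t packet_index;    // position of this packet within the frame
    uint16_t packet_count;    // total packets making up the frame
};
#pragma pack(pop)

struct Frame_Assembly
{
    std::chrono::steady_clock::time_point first_packet_tp;
    std::map<uint16_t, std::vector<uint8_t>> packets; // keyed by packet_index
    uint16_t packet_count = 0;

    bool is_complete() const { return packets.size() == packet_count; }

    // true once the 100ms window has passed and the frame should be
    // presented as-is or discarded
    bool is_expired() const
    {
        return std::chrono::steady_clock::now() - first_packet_tp >
               std::chrono::milliseconds(100);
    }
};
```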

All the remote control, telemetry and sensor data is sent through TCP and in theory, given a good enough wifi link, this should be ideal. But it’s not, not even close: I consistently get 300-2000ms (yes, up to 2 full seconds) of lag over this channel.
All of this is due to how TCP treats network congestion. When lots of packets are lost, TCP assumes the network is under a lot of pressure and throttles down its data rate in an attempt to prevent even more packet loss. This is precisely what is causing my lags – the TCP traffic hits the heavy, raw UDP video traffic, concludes the network is congested, and slows down a lot. It doesn’t realize that I care more about the TCP traffic than the UDP one, so I end up with smooth video but zero control.

My solution is to create a new reliable protocol over UDP and send control, telemetry and sensor data over this channel, in parallel with the video traffic. In low-bandwidth situations I can then favor the critical control data over the video.

There are lots of reliable UDP libraries, but for simple enough problems I’ve always preferred writing my own when there’s the potential of getting something better suited to my needs (not to mention the learning experience).

So my design is this:

  1. I will have a transport layer that can send messages – binary blobs – and that presents channels where received messages can be accessed.
  2. Data is split into messages, and each message has the following properties (see the sketch after this list):
    1. MTU. Big messages can be split into smaller packets. If a packet is lost, only that packet is resent if needed. This limits the amount of data to resend.
    2. Priority. Messages with higher priority are sent before those with lower priority.
    3. Delivery control. If enabled, these messages will trigger a delivery confirmation. If confirmation is not received within X seconds, the message is resent. The confirmations themselves have priority – just like any other messages.
    4. Waiting time. If a low priority message has waited for too long, it will have its priority bumped. For video, I’ll use a 500ms waiting time to make sure I get some video even in critical situations.
  3. Channels. Each message will have a channel number so that the receiving end can configure some options per channel. Video will have one channel index, telemetry another and so on. The GS will be able to configure a 100ms waiting period for out-of-order messages on the video channel, for example.
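To make this concrete, here is a rough sketch of the per-packet header and the priority-with-aging send queue described above. All names, field layouts and the bump value are hypothetical, not the final protocol:

```cpp
// Hypothetical sketch: per-packet header plus a send queue that bumps
// the priority of messages that have waited past their limit.
#include <cstdint>
#include <vector>
#include <chrono>

#pragma pack(push, 1)
struct Packet_Header
{
    uint32_t msg_index;     // identifies the message, used for confirmations
    uint8_t  channel_index; // video, telemetry, control, ...
    uint8_t  flags;         // e.g. bit 0 = delivery confirmation requested
    uint16_t packet_index;  // position of this packet within the message
    uint16_t packet_count;  // how many MTU-sized packets form the message
};
#pragma pack(pop)

struct Queued_Msg
{
    Packet_Header header;
    std::vector<uint8_t> payload;
    int priority = 0;
    std::chrono::milliseconds max_wait{500}; // e.g. 500ms for video
    std::chrono::steady_clock::time_point enqueue_tp =
        std::chrono::steady_clock::now();

    // effective priority: bumped once the message has waited past its limit
    int effective_priority() const
    {
        auto waited = std::chrono::steady_clock::now() - enqueue_tp;
        return waited > max_wait ? priority + 1000 : priority;
    }
};

// pick the next message to send: highest effective priority wins
Queued_Msg* select_next(std::vector<Queued_Msg>& queue)
{
    Queued_Msg* best = nullptr;
    for (auto& msg : queue)
    {
        if (!best || msg.effective_priority() > best->effective_priority())
        {
            best = &msg;
        }
    }
    return best;
}
```

Since each packet carries its packet_index and packet_count, the receiver can confirm or re-request individual pieces of a large message instead of the whole blob – which is what keeps the resend cost bounded by the MTU.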

 

My current implementation uses boost::asio and I intend to keep using it.
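For completeness, here’s a minimal sketch of the kind of async receive loop the transport layer can be built around with boost::asio. The class name, port and buffer size are made up for the example:

```cpp
// Hypothetical sketch of an async UDP receive loop with boost::asio.
#include <boost/asio.hpp>
#include <array>
#include <cstdint>
#include <iostream>

namespace ip = boost::asio::ip;

class UDP_Transport
{
public:
    UDP_Transport(boost::asio::io_service& io, uint16_t port)
        : m_socket(io, ip::udp::endpoint(ip::udp::v4(), port))
    {
        start_receive();
    }

private:
    void start_receive()
    {
        m_socket.async_receive_from(
            boost::asio::buffer(m_buffer), m_sender,
            [this](const boost::system::error_code& error, size_t bytes)
            {
                if (!error)
                {
                    // parse the Packet_Header, route the payload to its
                    // channel, send a delivery confirmation if requested
                    std::cout << "received " << bytes << " bytes\n";
                }
                start_receive(); // keep the receive loop going
            });
    }

    ip::udp::socket m_socket;
    ip::udp::endpoint m_sender;
    std::array<uint8_t, 1024> m_buffer; // one MTU-sized packet
};

int main()
{
    boost::asio::io_service io;
    UDP_Transport transport(io, 52000); // hypothetical port
    io.run();
}
```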

As soon as I finish the stability PIDs, I’ll move to the new protocol.