Category Archives: Comms

RC Almost Finished

Here’s the case after my best attempt:

 

It looks… bad. The paint coat is horrible and full of scratches, and the screen is too big.

But worst of all, the screen is not bright enough in direct sunlight. Not even close. I don’t have a photo but after brief testing I’d say it’s unusable.

So I’m pretty disappointed with the result – I ended up with a big, heavy RC system that is too dim to be usable for FPV.

I searched for a week for alternative capacitive touch screens, preferably in the 5-7 inch range, but found nothing bright enough under 100 euros.

So after a mild DIY depression I got an idea that will solve at least 3 of the issues – cost, screen brightness and the RC size: use my Galaxy Note4 phone as the screen.

The setup will look like this:

  • The quad will send video through 2.4GHz packet injection (a.k.a. the wifibroadcast method) and the RC stream through 433MHz
  • The RC will receive both video and RC data and relay them to the phone over another 5.8GHz wifi UDP connection. The phone will decompress the H264 video using OMX (or whatever is available) and display it with the telemetry on top.
  • The phone will also act as a touchscreen interface to control the RC/quad

 

Basically this is what most commercial quads (like the Mavic) are doing. I’m fairly sure their video link is 2.4GHz due to its longer range and better penetration compared to 5.8GHz, while the connection with the phone is done over a low power 5.8GHz link.

 

So the next steps are:

  • Redesign a smaller case that will accommodate a Raspberry Pi 3, the RC stick and fader + buttons, and the wifi cards
  • Write a quick Android application that can connect to the RC and decompress the video stream
  • Profit!

Classic TX

I thought to give my Flysky TH9x TX a chance to be used with silkopter, so I made a new node called “CPPM Receiver”. It samples a CPPM stream on a GPIO and outputs the PWM streams that can be fed into the main node.

 

It uses the PIGPIO setAlertFunc, which calls a user provided function for all level transitions on a certain GPIO pin. The resolution is 5 microseconds, which is way more precision than needed for a standard CPPM stream – the gap between channels is 400 microseconds, way bigger than the minimum resolution of the library.
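For reference, a minimal sketch of such a decoder, assuming pigpio’s C API (gpioSetAlertFunc) and a hypothetical input pin – the real thing is a silkopter node, not a standalone program:

#include <pigpio.h>
#include <cstdint>
#include <cstdio>

constexpr unsigned CPPM_GPIO = 4;        //hypothetical input pin
constexpr uint32_t SYNC_GAP_US = 2700;   //anything longer marks a new frame
constexpr int MAX_CHANNELS = 8;

static uint32_t s_last_tick = 0;
static int s_channel = -1;               //-1 until the first sync gap
static uint32_t s_channels[MAX_CHANNELS] = {};

//called by pigpio on every level transition; tick is in microseconds
static void cppm_alert(int /*gpio*/, int level, uint32_t tick)
{
    if (level != 1) return;              //measure rising edge to rising edge
    uint32_t gap = tick - s_last_tick;   //unsigned math handles tick wrap
    s_last_tick = tick;

    if (gap > SYNC_GAP_US)               //sync gap -> start of a new frame
    {
        s_channel = 0;
        return;
    }
    if (s_channel >= 0 && s_channel < MAX_CHANNELS)
    {
        s_channels[s_channel++] = gap;   //~1000-2000us, like a servo pulse
    }
}

int main()
{
    if (gpioInitialise() < 0) return 1;
    gpioSetMode(CPPM_GPIO, PI_INPUT);
    gpioSetAlertFunc(CPPM_GPIO, cppm_alert);
    while (true)
    {
        time_sleep(1.0);                 //decoding happens in the callback
        std::printf("ch0 = %uus\n", s_channels[0]);
    }
}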

 

It started more as a proof of concept, but in the end I think it’s perfectly usable.

Here’s a video with it in action, connected to a D8R-II receiver:

 

 

RF Modules

I found and bought the RF modules I needed for the GS-brain comms: the RF4463F30.

They have a 1W PA and the receiver has -122dBm sensitivity at low data rates, dropping to around -90 to -96dBm at the 1Mbps max rate.

They are pretty small and light – under 2-3 grams I think.


I managed to find a lot of documentation from Silicon Labs, and the RF chip works quite differently from the RFM22B. It’s not programmed using registers but with SPI commands: you basically build API calls in a uint8_t buffer and send them through SPI.

The FIFO is only 64 bytes, but it does have a CRC and a custom 4 byte header to pack various things – like a request id.
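To illustrate the command pattern, here’s a minimal sketch – spi_transfer is a hypothetical helper and I’m assuming a 30MHz crystal; the POWER_UP bytes follow the Si446x API docs:

#include <cstdint>
#include <cstddef>

//hypothetical SPI helper - the real one would wrap the spidev ioctls;
//rx may be null when we don't care about the response
void spi_transfer(uint8_t const* tx, uint8_t* rx, size_t size);

constexpr uint8_t CMD_POWER_UP = 0x02;
constexpr uint8_t CMD_READ_CMD_BUFF = 0x44;

//send one API command: pack the command ID and its arguments in a byte
//buffer, push it through SPI, then poll for CTS (0xFF) before continuing
void si446x_command(uint8_t const* buffer, size_t size)
{
    spi_transfer(buffer, nullptr, size);
    uint8_t rx[2] = {};
    while (rx[1] != 0xFF)                //chip answers 0xFF when it's ready
    {
        uint8_t tx[2] = { CMD_READ_CMD_BUFF, 0x00 };
        spi_transfer(tx, rx, sizeof(rx));
    }
}

int main()
{
    //POWER_UP: boot the application image with a 30MHz xtal (0x01C9C380)
    uint8_t cmd[] = { CMD_POWER_UP, 0x01, 0x00, 0x01, 0xC9, 0xC3, 0x80 };
    si446x_command(cmd, sizeof(cmd));
}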

So far it seems very nice, I will try to make it work during the weekend.

 

The plan is to use the wifi for video data only – TX only – and the RF4463 for bidirectional commands (GS->Brain and back) and video packet confirmations.

This should let me make better use of the limited wifi bandwidth by keeping it uni-directional – so the return (ACK) packets don’t keep the channel busy – and also give me a rock solid link for control.

I did some tests with the RFM22B at 0dBm and it can easily go through 4-5 walls and 20 meters inside my house. With 20 or even 30dBm I should be able to get a solid 10Km link line of sight.

 

Comms, revisited

After the latest incident and the loss of my quad it became clear that I need reliability. So far I focused on physical reliability – whether the quad is sturdy enough to take a light crash, whether it’s well balanced, etc. But I kind of ignored the other 2 aspects – software and comms. Well – not ignored, more like I hoped for the best.

So the next issues to fix are:

  1. Solid comms for the commands (RC) and telemetry. The RFM22B is the perfect candidate for this:
    • It’s lightweight and small.
    • Very sensitive receiver (-104dBm @ 125Kbps)
    • Powerful TX amp @ 20dBm
    • 64 byte FIFO, so I can drive it directly from the Raspberry Pi
    • SPI interface, of which the RPI has plenty
    • Good penetration due to the low frequency (433MHz)

With this setup I should have 5-10Km of range, way more than the WIFI video link allows.
The issue will be the comms protocol, as the device is half-duplex and the bandwidth is limited. I made a quick calculation and it should take around 5ms to send an entire 64 byte message. With a simple protocol where the GS is the master and the quad the slave, I can allocate 5ms slots to each in turn. That gives each side one 64 byte message every 10ms – a symmetric ~6.4Kbyte/s of bandwidth; minus protocol losses, let’s say 5Kbyte/s. Should be enough for control and telemetry data (see the sketch after this list).

2. Solid software, especially the GS. I intend to get an RPI3 and a display and build a dedicated GS to replace my laptop. For now I will keep my PS4 controller, but in the future I aim to build an entire RC system with an integrated touchscreen around the RPI.

I want to run the RPI without an X server, and for this I will need a Qt version that supports it. Don’t know if that’s possible though.
If this fails, I’ll stop using Qt and do all the rendering and UI with something else, but that will be quite painful… I’m used to Qt and despite all its quirks it’s very powerful for quick UI work.
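As promised above, here’s a rough sketch of the master-side slot scheduler. The Radio interface is a hypothetical blocking wrapper over the RFM22B driver, not existing code:

#include <chrono>
#include <cstddef>
#include <cstdint>
#include <thread>

//hypothetical interface wrapping the RFM22B driver
struct Radio
{
    void send(uint8_t const* data, size_t size);   //blocks ~5ms for 64 bytes
    bool receive(uint8_t* data, size_t size);      //reads a pending message, if any
};

//the GS (master) transmits in even 5ms slots and listens in the odd ones,
//so each end gets one 64 byte message every 10ms - the ~6.4Kbyte/s above
void master_loop(Radio& radio)
{
    constexpr auto SLOT = std::chrono::milliseconds(5);
    auto slot_start = std::chrono::steady_clock::now();
    uint8_t tx[64] = {};
    uint8_t rx[64] = {};
    while (true)
    {
        radio.send(tx, sizeof(tx));                 //our 5ms slot: transmit
        slot_start += SLOT;
        std::this_thread::sleep_until(slot_start);  //end of our slot
        slot_start += SLOT;
        std::this_thread::sleep_until(slot_start);  //wait out the slave's slot...
        radio.receive(rx, sizeof(rx));              //...then read its message
    }
}

The quad side would do the opposite – listen first, then answer in its own slot.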

 

What I do know works in the current setup is:

  1. The RPI2 is powerful enough to handle a quad with WIFI video streaming. It was using ~16-20% CPU at any moment with around 5 threads, so no issues here.
  2. The case I designed is very sturdy. It’s a combination of 10mm carbon fiber tubes for the arms and ABS plastic for everything else.
  3. Flight time at the current weight of 650 grams is around 20-25 min with a 3000mAh Multistar LiHV battery. Enough time to enjoy the landscape.
  4. The motors + ESC are a nice match. The RcTimer 1806 1450Kv motors and the RTFQ 4-in-1 12A ESC seem to handle the load nicely. A bit underpowered due to the props – 7×2.4 inches – but I plan to move to 8×4.5 for the next quad.

 

So back to the drawing board for a while.


RCP (the new RUDP)

Some months ago I found this awesome blog with research and a usable implementation of rfmon for uni-directional wireless communication. It uses libpcap and a modified firmware for the TP 721N dongle to send encoded video packets and telemetry to the GS. It’s unidirectional, and this has some advantages – the biggest one for me being that it requires neither pairing nor a connection handshake. Perfect for a quad!

So I included rfmon in RUDP and renamed the result to RCP (Reliable Comms Protocol…). It’s bidirectional in my case, as I need to send control data to the quad as well, but the advantages still apply.

Performance is great – I can send 1024×768 video @ 30 FPS, 2Mbps, using around 4-4.5Mbps of air bandwidth with very little CPU usage.

The problem I’ve hit is this: the more data I send, the more packets get lost to interference etc. A fixed retransmit rate – say, sending every packet 3 times – requires a fixed amount of bandwidth but doesn’t scale at all to low-bandwidth situations. So I need a way to use the least amount of bandwidth possible while still ensuring that frames arrive at the other end.
One solution is to add ACK packets to reduce retransmission – which is what RCP does – but this has its own issue: every confirmation packet keeps the channel busy for a little while, in turn dropping the total bandwidth of the system.
The current solution is to gather many confirmations and send them together at a low frequency – around MAX_RETRANSMIT_TIME / 2 (currently every 10ms). Fast enough to avoid retransmission, but not so fast as to keep the channel busy unnecessarily. So far this works beautifully.
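Conceptually, the batching looks something like the sketch below – the names are illustrative, not the actual RCP internals:

#include <chrono>
#include <cstdint>
#include <vector>

struct Confirmation { uint32_t packet_id; uint16_t fragment_idx; };

class Ack_Batcher
{
public:
    explicit Ack_Batcher(std::chrono::milliseconds max_retransmit_time)
        : m_period(max_retransmit_time / 2) {}

    void add(Confirmation c) { m_pending.push_back(c); }

    //called from the comms loop; flushes at most once per period so the
    //channel isn't kept busy with one datagram per confirmation
    template<typename Send_Fn>
    void process(Send_Fn&& send_datagram)
    {
        auto now = std::chrono::steady_clock::now();
        if (m_pending.empty() || now - m_last_flush < m_period) return;
        send_datagram(m_pending);   //one datagram carrying many confirmations
        m_pending.clear();
        m_last_flush = now;
    }

private:
    std::chrono::milliseconds m_period;
    std::chrono::steady_clock::time_point m_last_flush;
    std::vector<Confirmation> m_pending;
};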
Code time!

The brain on the UAV uses RCP setup with a RFMON socket:

 

auto s = new util::RCP_RFMON_Socket("mon0", 5);//5 is the end-point ID
m_socket.reset(s);
m_rcp.reset(new util::RCP);

util::RCP::Socket_Handle handle = m_rcp->add_socket(s);
if (handle >= 0)
{
    m_rcp->set_internal_socket_handle(handle);
    m_rcp->set_socket_handle(SETUP_CHANNEL, handle);
    m_rcp->set_socket_handle(PILOT_CHANNEL, handle);
    m_rcp->set_socket_handle(VIDEO_CHANNEL, handle);
    m_rcp->set_socket_handle(TELEMETRY_CHANNEL, handle);
}

 

So – this code adds a socket to the RCP instance and then instructs all channels to go through it. The same goes for the internal data – the ACKs, pings (for RTT estimation) and connection requests.

The reason for this indirection is to allow different sockets for different channels – for example, RFMON for unidirectional video streaming and another socket over a 433MHz radio (like this one) for all other comms and ACKs.

In the near future I’ll try exactly that – sending all channels except video through the RFM22B socket, as this should give me better range & penetration than 2.4GHz.
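A sketch of what that split might look like – RCP_RFM22B_Socket and its spidev path are assumptions, only the RFMON socket exists today:

auto* rfmon = new util::RCP_RFMON_Socket("mon0", 5);
auto* rfm22b = new util::RCP_RFM22B_Socket("/dev/spidev0.0"); //hypothetical

util::RCP::Socket_Handle video = m_rcp->add_socket(rfmon);
util::RCP::Socket_Handle control = m_rcp->add_socket(rfm22b);
if (video >= 0 && control >= 0)
{
    m_rcp->set_internal_socket_handle(control); //ACKs, pings, requests
    m_rcp->set_socket_handle(SETUP_CHANNEL, control);
    m_rcp->set_socket_handle(PILOT_CHANNEL, control);
    m_rcp->set_socket_handle(TELEMETRY_CHANNEL, control);
    m_rcp->set_socket_handle(VIDEO_CHANNEL, video); //video stays on 2.4GHz
}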

 

Odroid W ADC Fail!

2 days and 5 forum posts later, the picture is clearer. Let’s start from the beginning. To simplify, I’ll refer to the Raspberry Pi rev. 2 board and ignore rev. 1 and the A+/B+ (** check the notes for details).

The Raspberry Pi has the i2c-0 and i2c-1 buses. The former** is used by the GPU to talk to the camera, while the latter can be used by the user however she/he wants. Each of the 2 buses can be redirected to several physical pins by changing the mode of those pins:

i2c-0:
GPIO 0/1  – ALT0 mode
GPIO 28/29  – ALT0 mode
GPIO 44/45 – ALT1 mode

i2c-1:
GPIO 2/3 – ALT0 mode
GPIO 44/45 – ALT2 mode

Only one of these pairs can be activated at any moment per bus.

The camera is physically connected to GPIO 0/1 **. These pins are set up as inputs by default and the GPU changes them to ALT0 whenever it needs to talk to the camera, but _switches them back_ to INPUT immediately after. If you monitor the mode of GPIO 0/1 you’ll see that most of the time they are INPUT, with random switches to ALT0 0-30 times per second. The more movement the camera sees, the more it talks to the GPU – it seems to be related to AWB and shutter speed.
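A quick way to observe this, assuming pigpio’s C API (gpioGetMode reports the current pin mode):

#include <pigpio.h>
#include <cstdint>
#include <cstdio>

//count how often the GPU flips GPIO 0 between INPUT and ALT0 in one second
int main()
{
    if (gpioInitialise() < 0) return 1;
    int transitions = 0;
    int last = gpioGetMode(0);
    uint32_t start = gpioTick();               //microseconds since boot
    while (gpioTick() - start < 1000000)
    {
        int mode = gpioGetMode(0);
        if (mode != last) { ++transitions; last = mode; }
    }
    std::printf("INPUT<->ALT0 switches per second: %d\n", transitions / 2);
    gpioTerminate();
    return 0;
}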

So far – no problem. It’s pretty clear from the design of the Raspberry Pi that i2c-0 is off limits, and there is no way to synchronize the GPU’s access with the CPU’s – so i2c-0 cannot be shared between the camera and any other device. If one attempted to use i2c-0 and started ‘raspivid -t 0’, one would see weird things ranging from i2c errors, the image freezing for seconds and random noise on the screen, to the board freezing completely.

 

The OdroidW has a nice RC5T619 PMIC that provides 2 free ADC pins, which I intended to use to monitor the voltage and current of silkopter. The PMIC uses i2c-0 to talk to the CPU, so care must be taken to somehow synchronize with the GPU. This is what I’ve been trying to achieve the whole weekend…

– 1st try: After every use of mmal I switched the GPIO 0/1 mode from IN back to ALT0 so I could talk to the PMIC. Didn’t work, as it seems the GPU uses the bus even between calls to mmal (in hindsight this makes sense, as the camera has to inform the GPU about starting and finishing transfers, and the GPU has to set AWB, gains and shutter speeds).

– 2nd try: Use a semaphore to trigger the ADC measurement in the mmal callbacks – hoping that after the callback I’d get a period of silence from the GPU. No such luck.

– 3rd try: Give up on the hardware i2c-0 bus and try bitbanging on GPIO 0/1. This seemed such a nice idea – let the GPU use the hardware i2c-0 bus and I’ll use bitbanging… Didn’t work, as both the GPU and the PMIC are connected to the GPIO 0/1 pins – so they share the same physical pins…

So basically the OdroidW is not capable of using both the camera and the PMIC at the same time, because they share not only the i2c-0 bus but _also the pins_! This could have been easily avoided by putting the PMIC on some other GPIOs and bitbanging, or at the very least by putting it on i2c-1.

So now I’m back to the drawing board and am considering an Arduino Mini board as an ADC + PWM generator. Connected through i2c-1, probably…

 

 

Notes:
** Rev. 1 boards have i2c-0 and i2c-1 reversed, so i2c-0 is free while i2c-1 is the camera one.
** A+ and B+ boards have the camera using the GPIO 28/29 pins to access i2c-0.

Video Latency

Just did some latency tests using RUDP through wifi, 640×480@30fps, 2Mbps.

Both the laptop and the quadcopter are in the same room, but they go through a router 3 walls away. Signal strengths (as reported by iwconfig) are:

Quad:
Link Quality=70/70  Signal level=-37 dBm

Laptop:
Link Quality=58/70  Signal level=-52 dBm

Ping reports these RTTs:
64 bytes from 192.168.1.110: icmp_seq=1 ttl=64 time=137 ms
64 bytes from 192.168.1.110: icmp_seq=2 ttl=64 time=160 ms
64 bytes from 192.168.1.110: icmp_seq=3 ttl=64 time=85.4 ms
64 bytes from 192.168.1.110: icmp_seq=4 ttl=64 time=108 ms
64 bytes from 192.168.1.110: icmp_seq=5 ttl=64 time=125 ms
64 bytes from 192.168.1.110: icmp_seq=6 ttl=64 time=149 ms
64 bytes from 192.168.1.110: icmp_seq=7 ttl=64 time=73.6 ms
64 bytes from 192.168.1.110: icmp_seq=9 ttl=64 time=119 ms

The quadcopter uses the more sensitive Alfa card while the laptop has its own crappy RTL8723be card, which has many issues under linux…

I happen to live in a building with a very noisy wifi situation so SNR is not good at all.

Average latency is around 100-160ms, with random spikes of 300-400ms every 10-20 or so seconds.

[Edit – just realized that both the brain and the GS are debug versions…]

To measure, I pointed the raspicam at my phone’s stopwatch app and then took photos of the phone and the screen at the same time.


RUDP API & implementation details

I committed the latest version of RUDP. After many changes in design and implementation I’ve got something that I’m satisfied with.

The API is very simple – packets can be sent and received on 32 individual channels:

bool send(uint8_t channel_idx, uint8_t const* data, size_t size);
bool try_sending(uint8_t channel_idx, uint8_t const* data, size_t size);

try_sending is like send, but fails if the previous send on the same channel hasn’t finished yet. This can happen with video frames, which take some time to compress when a big keyframe arrives and there’s a context switch in the middle of the send. Since the camera data callback is async, it might try to send again too soon – so it uses try_sending to avoid blocking in the send call.

bool receive(uint8_t channel_idx, std::vector<uint8_t>& data);

This call fills the data vector with a packet from channel channel_idx and returns true if it succeeds.
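For example, this is how a video producer and consumer might use the two calls – the channel index and the decoder are placeholders, and I’m assuming the class is util::RUDP:

constexpr uint8_t VIDEO_CHANNEL = 4; //placeholder channel index

//camera callback: drop the frame rather than block if the previous send
//on the video channel is still in flight
void on_camera_frame(util::RUDP& rudp, uint8_t const* h264, size_t size)
{
    rudp.try_sending(VIDEO_CHANNEL, h264, size);
}

//GS side: poll the channel and hand complete packets to a decoder
void poll_video(util::RUDP& rudp, Video_Decoder& decoder) //hypothetical decoder
{
    std::vector<uint8_t> packet;
    while (rudp.receive(VIDEO_CHANNEL, packet))
    {
        decoder.decode(packet.data(), packet.size());
    }
}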

Each of these channels has a few parameters that control the trade offs between bandwidth, latency, reliability and speed.

The most important parameter is the one that controls delivery confirmations: Send_Params::is_reliable. Reliable packets keep getting resent until the other end confirms or cancels them – or until the Send_Params::cancel_after time runs out. Silkopter uses reliable packets for the remote control data, calibration data and PID params (the silk::Comm_Message enum).

Unreliable packets are sent only once, even if they get lost on the way. They are used for telemetry and the video stream, since this data gets old very fast – 33ms for both video and telemetry.
A useful parameter for unreliable packets is Send_Params::cancel_on_new_data. When true, new data cancels all existing unsent data on the same channel. This is very useful at low bandwidth, when video frames can take longer than 33ms to send. Another parameter – at the receiving end this time – is Receive_Params::max_receive_time, which indicates how long to wait for a packet. Useful for the video stream in case frame X is not ready but frame X+1 is already available. When a packet is skipped due to this parameter, a cancel request is sent to the other end to indicate that the receiver is no longer interested in this data. This saves quite some bandwidth.
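Put together, the channel configurations described above might look like this – the field names are from this post, but the exact values and structure layout are assumptions:

//video: unreliable, and a new frame obsoletes any unsent older one
util::RUDP::Send_Params video_tx;
video_tx.is_reliable = false;
video_tx.cancel_on_new_data = true;

//remote control: reliable, but don't keep resending stale commands forever
util::RUDP::Send_Params rc_tx;
rc_tx.is_reliable = true;
rc_tx.cancel_after = std::chrono::milliseconds(500); //assumed value

//receiving video: don't wait for frame X if frame X+1 is already available
util::RUDP::Receive_Params video_rx;
video_rx.max_receive_time = std::chrono::milliseconds(50); //assumed value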

Zlib compression can be set per channel – and it’s on for all channels in silkopter, including the video stream, where it saves between 3 and 10% of the frame size at a ~10% CPU cost.

 

Internally, packets are split into fragments of MTU size (currently 1KB). Each fragment is identified by an ID + fragment IDX – so fragments of the same packet share the ID.

The first fragment has a different header than the rest.

Fragments are sent as datagrams, same as pings, confirmations and cancel requests.
A datagram has a small header (5 bytes) containing the CRC of all the data and the type of the datagram. Based on the type, the header can be cast to a specialized header.
The CRC is actually a murmur hash of the datagram data and I’m not sure it’s really needed – UDP has its own checksum – but better safe than sorry. It’s very fast anyway and doesn’t even show up in the profiler.
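The header layout, as I understand it from the description above – the field widths beyond the 5 byte common header are guesses:

#include <cstdint>

#pragma pack(push, 1)
struct Datagram_Header                //the common 5 byte header
{
    uint32_t crc;                     //murmur hash of the datagram data
    uint8_t type;                     //fragment, ping, confirmation, cancel
};
struct Fragment_Header : Datagram_Header
{
    uint32_t id;                      //shared by all fragments of a packet
    uint8_t fragment_idx;
};
struct Fragment_Main_Header : Fragment_Header //first fragment only
{
    uint8_t channel_idx;
    uint32_t packet_size;             //lets the receiver preallocate
};
#pragma pack(pop)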

The datagrams are managed by a pool using intrusive pointers, to avoid allocating the datagram data (a std::vector) or the ref count (in the case of std::shared_ptr).

My test so far was this:
With both silkopter and the GS far from the access point, I’m sending a video stream with enough bandwidth to choke the wifi – ~400KB/s in my current test scenario. Then I’m pushing data through a reliable channel at ~10-20 packets per second, amounting to 6KB/s. So far all the data got through in less than 200-300ms, which is 2-3x my max RTT. Pretty happy with the result, considering that in my previous setup – TCP for reliable data + UDP for video – I was getting 2-3 seconds of lag even next to the access point in some worst cases.

 

The only thing missing is handling the case when one end restarts. This is problematic because RUDP keeps track of the last valid received packet ID and ignores packets with smaller IDs. So when one of the ends restarts, all its packets are ignored until it reaches the ID of the last packet sent before the restart… Not good.

 

RUDP first benchmark

I ran the first RUDP tests and here’s what I got:

Throughput test:
Using zlib compression, localhost, a release build and 16MB messages in a tight loop – ~80MB/s, limited by the compression speed.
Same test but without compression – ~1GB/s.

Message spam test:
Localhost, release build, 200 byte messages in a tight loop – 70K messages/s.

The main purpose of the test was to check for obvious bottlenecks and other issues. I found a couple that were crippling performance – like my pool not working correctly and losing some datagrams, or the allocation of the shared_ptr ref count (which I just replaced with a boost::intrusive_ptr). So far it seems to work ok.

I need to redo the tests on the Raspberry Pi, but there I’ll be limited by the wifi, for sure.

 

Next on my list are:

– Handle connection loss. Right now, if one end drops, the other end will keep queuing up messages until it’s out of memory.

– Limit the number of pending messages allowed. If bandwidth is low, I need to cut the data rate to avoid messages stacking up in the RUDP queues.

– Tune it on the actual hardware. I need to figure out the optimal MTU and the minimum resend period. The MTU will be a compromise between too much protocol overhead and very expensive resends, and the minimum resend period between latency and bandwidth.

 

 

UDP broadcasting

For the past 2 days I’ve been investigating why my RUDP protocol has a way worse ping than iperf. I found numerous bugs and optimizations during this time, but I was never able to get the ping below 100ms, while iperf in UDP mode was getting 3-7ms…

It turns out that using broadcast increases the RTT a lot and causes some packet loss. Maybe it’s just my network that behaves like this with broadcasting, but after I removed it my ping was a solid 4ms. Not too bad for wifi going through 3 walls…

So the culprit was this line:
m_send_socket.set_option(socket_base::broadcast(true));

I did this a few days back when I got too lazy to implement proper discovery, so I made all comms broadcast.
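The unicast equivalent looks something like this – the address and port are placeholders, and data/size/send_handler stand in for the existing send path:

m_send_socket.set_option(socket_base::broadcast(false));
boost::asio::ip::udp::endpoint peer(boost::asio::ip::address::from_string("192.168.1.110"), 52520);
m_send_socket.async_send_to(boost::asio::buffer(data, size), peer, send_handler);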