Monthly Archives: November 2014

Video Latency

Just did some latency tests using RUDP through wifi, 640×480@30fps, 2Mbps.

Both the laptop and the quadcopter are in the same room but they go through a router 3 walls away. Signal strengths are (as reported by iwconfig):

Quad:
Link Quality=70/70  Signal level=-37 dBm

Laptop:
Link Quality=58/70  Signal level=-52 dBm

Ping reports these RTTs:
64 bytes from 192.168.1.110: icmp_seq=1 ttl=64 time=137 ms
64 bytes from 192.168.1.110: icmp_seq=2 ttl=64 time=160 ms
64 bytes from 192.168.1.110: icmp_seq=3 ttl=64 time=85.4 ms
64 bytes from 192.168.1.110: icmp_seq=4 ttl=64 time=108 ms
64 bytes from 192.168.1.110: icmp_seq=5 ttl=64 time=125 ms
64 bytes from 192.168.1.110: icmp_seq=6 ttl=64 time=149 ms
64 bytes from 192.168.1.110: icmp_seq=7 ttl=64 time=73.6 ms
64 bytes from 192.168.1.110: icmp_seq=9 ttl=64 time=119 ms

The quadcopter uses the more sensitive Alfa card while the laptop has its own crappy RTL8723be card, which has many issues under Linux…

I happen to live in a building with a very noisy wifi situation so SNR is not good at all.

Average latency is around 100-160ms, with random spikes of 300-400ms every 10-20 seconds or so.

[Edit – just realized that both the brain and the GS are debug versions…]

To measure, I pointed the raspicam at my phone’s stopwatch app and then took photos of the phone and the laptop screen at the same time. Here are some of them:

[Photos of the stopwatch next to the streamed image: 20141128_212006, 20141128_212010, 20141128_212011, 20141128_212007]

RUDP API & implementation details

I committed the latest version of RUDP. After many changes in design and implementation I’ve got something that I’m satisfied with.

The API is very simple – packets can be sent and received on 32 individual channels:

bool send(uint8_t channel_idx, uint8_t const* data, size_t size);
bool try_sending(uint8_t channel_idx, uint8_t const* data, size_t size);

try_sending is like send but fails if the previous send on the same channel hasn’t finished yet. This can happen with video frames, which take some time to compress when a big keyframe arrives and there’s a context switch in the middle of the send. Since the camera data callback is async, it might try to send again too soon – so it uses try_sending to avoid blocking in the send call.
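To illustrate, here’s a minimal sketch of how try_sending could be used from an async camera callback. The Rudp template parameter stands in for the actual RUDP class and VIDEO_CHANNEL is a made-up channel index – this is not the real silkopter code.

#include <cstdint>
#include <cstddef>

template <typename Rudp>
void on_camera_frame(Rudp& rudp, uint8_t const* encoded_data, size_t size)
{
    constexpr uint8_t VIDEO_CHANNEL = 4; // assumption, not the real channel index

    // If the previous frame is still being sent, drop this one instead of blocking
    // the async camera callback - a new frame will arrive in ~33ms anyway.
    if (!rudp.try_sending(VIDEO_CHANNEL, encoded_data, size))
    {
        // frame skipped
    }
}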

bool receive(uint8_t channel_idx, std::vector<uint8_t>& data);

This call fills the data vector with a packet from channel channel_idx and returns true if it succeeds.
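As a usage sketch (again with a stand-in Rudp type and a hypothetical TELEMETRY_CHANNEL index), the receiving side can simply poll a channel and drain whatever packets are ready:

#include <cstdint>
#include <vector>

template <typename Rudp>
void poll_telemetry(Rudp& rudp)
{
    constexpr uint8_t TELEMETRY_CHANNEL = 2; // assumption
    std::vector<uint8_t> packet;

    // Drain every packet that is already available on this channel.
    while (rudp.receive(TELEMETRY_CHANNEL, packet))
    {
        // decode and process the telemetry packet here
    }
}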

Each of these channels has a few parameters that control the trade-offs between bandwidth, latency, reliability and speed.

The most important parameter is the one that controls delivery confirmations: Send_Params::is_reliable. Reliable packets keep getting resent until the other end confirms or cancels them – or until the Send_Params::cancel_after time runs out. Silkopter uses reliable packets for the remote control data, calibration data and PID params (the silk::Comm_Message enum).

Unreliable packets are sent only once, even if they get lost on the way. They are used for telemetry and the video stream, since this data gets stale very fast – about 33ms for both.
A useful parameter for unreliable packets is Send_Params::cancel_on_new_data. When true, new data cancels all existing unsent data on the same channel. This helps a lot on low bandwidth, when video frames can take longer than 33ms to send. Another parameter – this time at the receiving end – is Receive_Params::max_receive_time, which indicates how long to wait for a packet. It’s useful for the video stream in case frame X is not ready but frame X+1 is already available. When a packet is skipped because of this parameter, a cancel request is sent to the other end to indicate that the receiver is no longer interested in that data. This saves quite a bit of bandwidth.
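As a rough sketch, the per-channel parameters could look something like this – only the field names mentioned above come from RUDP, the types, defaults and helper functions are guesses:

#include <chrono>

struct Send_Params
{
    bool is_reliable = false;                      // resend until confirmed or cancelled
    std::chrono::milliseconds cancel_after{0};     // give up on the packet after this long (0 = never)
    bool cancel_on_new_data = false;               // new data cancels unsent data on the same channel
};

struct Receive_Params
{
    std::chrono::milliseconds max_receive_time{0}; // how long to wait for a packet before skipping it
};

// Hypothetical configurations matching the usage described above.
Send_Params reliable_params()   // RC data, calibration, PID params
{
    Send_Params p;
    p.is_reliable = true;
    return p;
}

Send_Params video_params()      // unreliable, stale frames get cancelled
{
    Send_Params p;
    p.is_reliable = false;
    p.cancel_on_new_data = true;
    return p;
}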

Zlib compression can be set for each channel – and it’s on for all channels in silkopter – including the video stream where it saves between 3 and 10% of the frame size at a ~10% CPU cost.
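For reference, per-packet zlib compression before handing the buffer to send looks roughly like the sketch below. This is just the general shape of the idea, not the actual RUDP code path, and the compression level is a guess:

#include <zlib.h>
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> compress_packet(uint8_t const* data, size_t size)
{
    uLongf compressed_size = compressBound(static_cast<uLong>(size));
    std::vector<uint8_t> compressed(compressed_size);

    // A fast compression level keeps the CPU cost low - the savings quoted above
    // are 3-10% of the frame size at a ~10% CPU cost.
    int result = compress2(reinterpret_cast<Bytef*>(compressed.data()), &compressed_size,
                           reinterpret_cast<Bytef const*>(data), static_cast<uLong>(size),
                           Z_BEST_SPEED);
    if (result != Z_OK)
    {
        return {}; // caller can fall back to sending the data uncompressed
    }
    compressed.resize(compressed_size);
    return compressed;
}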

 

Internally, packets are split into fragments of MTU size (currently 1KB). Each fragment is identified by an ID + fragment index – so fragments from the same packet share the ID.

The first fragment has a different header than the rest.

Fragments are sent as datagrams, same as pings, confirmations and cancel requests.
A datagram has a small header (5 bytes) containing the CRC of all the data and the type of the datagram. Based on the type, the header can be cast to a specialized header.
The CRC is actually a murmur hash of the datagram data and I’m not sure it’s really needed since UDP has its own checksum, but better safe than sorry. It’s very fast anyway and doesn’t even show up in the profiler.
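Sketched out, the headers could look like this. The 5-byte common header and the ID / fragment index scheme come from the description above; the exact field sizes, their order and the contents of the first-fragment header are guesses:

#include <cstdint>

#pragma pack(push, 1)
struct Header                       // common 5-byte datagram header
{
    uint32_t crc;                   // murmur hash of the datagram data
    uint8_t  type;                  // fragment, ping, confirmation or cancel request
};

struct Fragment_Header : Header     // regular fragments
{
    uint32_t id;                    // shared by all fragments of one packet
    uint16_t fragment_idx;          // position of this fragment inside the packet
};

struct Main_Fragment_Header : Fragment_Header // the first fragment carries extra info
{
    uint8_t  channel_idx;           // hypothetical extra fields
    uint32_t packet_size;
};
#pragma pack(pop)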

The datagrams are managed by a pool using intrusive pointers, to avoid allocating the datagram data (a std::vector) or the ref_count (which would be needed with std::shared_ptr).
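A simplified sketch of the idea (not the actual implementation): the ref count lives inside the datagram, and when it drops to zero the datagram goes back to a free list instead of being deleted, so its std::vector keeps its capacity for the next use.

#include <boost/intrusive_ptr.hpp>
#include <atomic>
#include <cstdint>
#include <mutex>
#include <vector>

struct Datagram
{
    std::vector<uint8_t> data;      // payload buffer - reused, so its capacity is kept between uses
    std::atomic<int> ref_count{0};  // embedded ref count - no separate allocation
};

void intrusive_ptr_add_ref(Datagram* d);
void intrusive_ptr_release(Datagram* d);

class Datagram_Pool
{
public:
    boost::intrusive_ptr<Datagram> acquire()
    {
        std::lock_guard<std::mutex> lg(m_mutex);
        Datagram* d;
        if (m_free.empty())
        {
            d = new Datagram();     // only allocates when the pool is empty
        }
        else
        {
            d = m_free.back();
            m_free.pop_back();
            d->data.clear();        // keeps the capacity of the old buffer
        }
        return boost::intrusive_ptr<Datagram>(d);
    }

    void release(Datagram* d)
    {
        std::lock_guard<std::mutex> lg(m_mutex);
        m_free.push_back(d);        // never deleted - returned to the free list
    }

private:
    std::mutex m_mutex;
    std::vector<Datagram*> m_free;
};

Datagram_Pool g_pool;               // assumption: a single global pool

void intrusive_ptr_add_ref(Datagram* d) { d->ref_count.fetch_add(1); }
void intrusive_ptr_release(Datagram* d)
{
    if (d->ref_count.fetch_sub(1) == 1)
    {
        g_pool.release(d);          // last reference gone - recycle instead of delete
    }
}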

My test so far was this:
With both silkopter and the GS far from the access point, I’m sending a video stream with enough bandwidth to choke the wifi – ~400KB/s in my current test scenario. Then I’m pushing data through a reliable channel at ~10-20 packets per second, amounting to ~6KB/s. So far all the data got through in less than 200-300ms, which is 2-3x my max RTT. Pretty happy with the result, considering that in my previous setup – TCP for reliable data + UDP for video – I was getting 2-3 seconds of lag in some worst cases, even right next to the access point.

 

The only thing missing is handling the case when one end restarts. This is problematic because RUDP keeps track of the last valid received packet ID and ignores packets with smaller IDs. So when one of the ends restarts, all its packets are ignored until it reaches the ID of the last packet sent before the restart… Not good.
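The core of the problem is a filter like the one sketched below on the receiving side (very simplified – the real bookkeeping is more involved):

#include <cstdint>

struct Receive_Filter
{
    uint32_t last_valid_id = 0;     // last packet ID accepted from the other end

    // Packets with an ID smaller than the last accepted one are treated as stale
    // and dropped. After a restart the sender's IDs start over, so everything it
    // sends gets dropped until its counter catches up with last_valid_id.
    bool accept(uint32_t packet_id)
    {
        if (packet_id < last_valid_id)
        {
            return false;
        }
        last_valid_id = packet_id;
        return true;
    }
};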

 

RUDP first benchmark

I ran the first RUDP tests and here’s what I got:

Throughput test:
Using zlib compression, localhost, release build and 16MB messages in a tight loop – ~80MB/s. It’s limited by the compression speed.
Same test without compression – ~1GB/s.

Message spam test:
Localhost, release build, 200-byte messages in a tight loop – ~70K messages/s.
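Both tests boil down to a loop like the sketch below. Rudp again stands in for the real class, and the channel index, test duration and message fill are arbitrary:

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

template <typename Rudp>
void throughput_test(Rudp& rudp, size_t message_size, std::chrono::seconds duration)
{
    constexpr uint8_t TEST_CHANNEL = 0; // assumption
    std::vector<uint8_t> message(message_size, 0xAB);

    // 16MB messages stress raw throughput, 200-byte messages stress per-message overhead.
    size_t sent = 0;
    auto start = std::chrono::steady_clock::now();
    while (std::chrono::steady_clock::now() - start < duration)
    {
        if (rudp.send(TEST_CHANNEL, message.data(), message.size()))
        {
            sent++;
        }
    }

    double seconds = std::chrono::duration<double>(std::chrono::steady_clock::now() - start).count();
    std::printf("%zu messages of %zu bytes in %.2fs -> %.0f messages/s, %.1f MB/s\n",
                sent, message_size, seconds, sent / seconds,
                sent * message_size / seconds / (1024.0 * 1024.0));
}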

The main purpose of the tests was to check for obvious bottlenecks and other issues. I found a couple that were crippling performance – like my pool not working correctly and losing some datagrams, or the allocation of the shared_ptr ref_count (which I just replaced with a boost::intrusive_ptr). So far it seems to work ok.

I need to redo the tests on the Raspberry Pi, but there I’ll be limited by the wifi, for sure.

 

Next on my list are:

– Handle connection loss. Right now, if one end drops, the other end will keep accumulating messages until it runs out of memory.

– Limit the number of pending messages. If bandwidth is low, I need to cut down the data rate to avoid messages stacking up in the RUDP queues.

– Tune it on the actual hardware. I need to figure out the optimal MTU and minimum resend period. The MTU is a compromise between too much protocol overhead and very expensive resends, and the minimum resend period is a trade-off between latency and bandwidth.

 

 

UDP broadcasting

For the past 2 days I’ve been investigating why my RUDP protocol has a way worse ping than iperf. I found numerous bugs and optimization opportunities along the way, but I was never able to get the ping below 100ms, while iperf in UDP mode was getting 3-7ms…

It turns out that broadcasting increases the RTT a lot and causes some packet loss. Maybe it’s just my network that behaves like this with broadcasting, but after I removed it my ping was a solid 4ms. Not too bad for wifi going through 3 walls…

So the culprit was this line:
m_send_socket.set_option(socket_base::broadcast(true));

I did this a few days back when I got too lazy to implement proper discovery, so I made all comms broadcast.
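One way to keep the broadcast option off the data path – a sketch with boost::asio, not what silkopter does yet – is to reserve a separate socket with the broadcast flag for discovery only, and send all RUDP traffic over a plain unicast socket once the peer’s endpoint is known. The port number is made up, and in real code the sockets would be kept around instead of created per send; this just shows which socket gets the broadcast option.

#include <boost/asio.hpp>
#include <cstdint>
#include <vector>

using boost::asio::ip::udp;

// All RUDP traffic goes over a plain unicast socket - no broadcast option set.
void send_data(boost::asio::io_service& io, std::vector<uint8_t> const& data,
               udp::endpoint const& peer)
{
    udp::socket socket(io, udp::endpoint(udp::v4(), 0));
    socket.send_to(boost::asio::buffer(data), peer);
}

// Only the discovery socket broadcasts, and only occasionally.
void send_discovery(boost::asio::io_service& io, std::vector<uint8_t> const& data)
{
    udp::socket socket(io, udp::endpoint(udp::v4(), 0));
    socket.set_option(boost::asio::socket_base::broadcast(true));
    socket.send_to(boost::asio::buffer(data),
                   udp::endpoint(boost::asio::ip::address_v4::broadcast(), 52525)); // made-up port
}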