UDP receive latency on NANO54415
Posted: Fri Nov 08, 2013 10:58 am
Hello,
We are running a latency-sensitive application on a NANO54415 module and are experiencing very strange periodic spikes in UDP receive latency. The problem can be demonstrated with the following test setup:
1. The NANO54415 is running a trivial loop that receives UDP packets and measures the time spent waiting for the next packet. The code looks like this (where the RunningStat object keeps a running average of the wait time using Welford's method):
Code:

    // Initialization code is here …
    DEBUGTimer = HiResTimer::getHiResTimer();
    DEBUGTimer->init();
    DEBUGTimer->start();

    while (1) {
        RunningStat stats;
        int i = 0;
        while (i < 10000) {
            const double udpReadBlock = 1.0e6 * DEBUGTimer->readTime();
            UDPPacket upkt(&fifo, 0);
            const double udpReadUnblock = 1.0e6 * DEBUGTimer->readTime();
            if (upkt.Validate()) {
                ++i;
                const double delta = udpReadUnblock - udpReadBlock;
                stats.push(delta);
            }
        }
        // stats are printed to the terminal here…
    }
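For reference, the RunningStat class used above is nothing exotic. A minimal sketch of such an accumulator using Welford's online algorithm (the method names here are assumed, not necessarily identical to our class) would be:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// Minimal running-statistics accumulator using Welford's online
// algorithm; tracks mean, sigma (sample std. dev.), min, and max
// in a single pass without storing the samples.
class RunningStat {
public:
    void push(double x) {
        ++n_;
        const double delta = x - mean_;
        mean_ += delta / n_;
        m2_   += delta * (x - mean_);   // note: uses the *updated* mean
        min_ = std::min(min_, x);
        max_ = std::max(max_, x);
    }
    double mean()  const { return mean_; }
    double sigma() const { return n_ > 1 ? std::sqrt(m2_ / (n_ - 1)) : 0.0; }
    double min()   const { return min_; }
    double max()   const { return max_; }

private:
    long   n_    = 0;
    double mean_ = 0.0;
    double m2_   = 0.0;  // sum of squared deviations from the running mean
    double min_  = std::numeric_limits<double>::infinity();
    double max_  = -std::numeric_limits<double>::infinity();
};
```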
2. The Netburner is connected to another embedded device via a crossover Ethernet cable, and this device is sending a single 15-byte UDP packet at 100 Hz. We are keeping track of the elapsed time between successive sends using the same RunningStat object.
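The sender runs its own firmware, but its behavior is equivalent to this POSIX sketch: one 15-byte UDP datagram every 10 ms. The function name, address, and port below are placeholders, not our actual setup:

```cpp
#include <arpa/inet.h>
#include <cstdint>
#include <ctime>
#include <sys/socket.h>
#include <unistd.h>

// Sketch of the sender side: a 15-byte UDP datagram every 10 ms (100 Hz).
// dest_ip/port are whatever the receiver is listening on.
int run_sender(const char *dest_ip, uint16_t port, int packets)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return -1;

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(port);
    if (inet_pton(AF_INET, dest_ip, &dst.sin_addr) != 1) {
        close(sock);
        return -1;
    }

    const char payload[15] = {};        // 15 zero bytes as a stand-in
    timespec period{0, 10'000'000};     // 10 ms -> 100 Hz

    for (int i = 0; i < packets; ++i) {
        sendto(sock, payload, sizeof(payload), 0,
               reinterpret_cast<sockaddr *>(&dst), sizeof(dst));
        nanosleep(&period, nullptr);    // crude pacing; real code should
                                        // sleep to an absolute deadline
                                        // to avoid cumulative drift
    }
    close(sock);
    return 0;
}
```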
When running with the default TICKS_PER_SECOND value of 20, a 10000-sample test run produces the following output (all times are in microseconds):

On the sender device:
mean: 10077.3
sigma: 3.7036
min: 10050.8
max: 10106.8

On the Netburner:
mean: 10077.068004
sigma: 541.726130
min: 48.128000
max: 59706.088000

If we increase the TICKS_PER_SECOND value to 100, the latency improves as follows:

On the sender device:
mean: 10077.3
sigma: 3.8715
min: 10058.7
max: 10107.8

On the Netburner:
mean: 10077.189543
sigma: 120.278000
min: 1990.520000
max: 18856.000000
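For what it's worth, the worst-case numbers seem to scale with the RTOS tick period, which makes us suspect tick-granularity wakeups (this is speculation on our part, not a confirmed diagnosis). One tick is 1e6 / TICKS_PER_SECOND microseconds, and in both runs the observed maximum is close to the 10 ms send interval plus one tick:

```cpp
// Back-of-the-envelope check: if a task blocked on an OS_FIFO can only
// be resumed on an RTOS tick boundary, the added receive latency is up
// to one tick period. (A hypothesis, not a confirmed explanation.)
constexpr double tick_period_us(int ticks_per_second)
{
    return 1.0e6 / ticks_per_second;
}

// TICKS_PER_SECOND = 20:  tick = 50000 us; 10000 + 50000 = 60000 us,
//   vs. our observed max of 59706 us.
// TICKS_PER_SECOND = 100: tick = 10000 us; 10000 + 10000 = 20000 us,
//   vs. our observed max of 18856 us.
```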
These results are completely repeatable. We have tried switching to non-blocking UDP receive by modifying the UDPPacket::UDPPacket( OS_FIFO *pfifo, DWORD wait ) constructor in udp.cpp to use:
m_p = ( pool_buffer * ) OSFifoPendNoWait( pfifo);
However, this did not alter the timing results.
Any ideas as to what may be causing these latency spikes would be greatly appreciated.
Thanks,
-Roman