Download speeds

No it’s not old.

But the reason I can’t remember is, I’m old. :blush:

2 Likes

Yes, the Klimax only uses 100Mb/s Ethernet, which is Amber.

The feed going in is 1Gb/s, which is Green.

At least it shows all is working OK.

DG…

1 Like

How does latency cause a buffer to empty?

The amount of time it takes for data to travel between server and streamer can simply be too long for it to keep a small buffer full.
The latency on a local stream from a server on your home network will be tiny, and this is what the earlier Naim streamers were designed for. A stream from a Tidal server involves multiple hops around the globe, which takes much longer regardless of the bandwidth your ISP provides. Bear in mind that this is a two-way comms process: your streamer examines a packet of data, and if it determines that it is correct, it sends a message to the server to say that it’s ready for the next packet (hence the term ‘round-trip delay’). Only then is the next packet sent.
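
To put the same point more concretely: if the streamer only asks for one small chunk at a time and waits for it to arrive before asking for the next, the achievable throughput is capped by how much data fits in one round trip. A rough sketch of that relationship (purely illustrative, not figures from any Naim unit):

```python
def max_throughput_bps(chunk_bytes: int, rtt_seconds: float) -> float:
    """Ceiling on throughput when the streamer requests one chunk,
    waits for it to arrive, acknowledges it, and only then asks for
    the next (a stop-and-wait style exchange)."""
    # One chunk arrives per round trip, so bits per second is the
    # chunk size in bits divided by the round-trip time.
    return (chunk_bytes * 8) / rtt_seconds
```

Shrink the chunk or stretch the round trip and that ceiling drops; once it falls below the bitrate of the stream, the buffer drains faster than it refills.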

2 Likes

Sure, but wouldn’t the ping need to be unusually large (or line speed very low) for that to happen? Can you give a numeric example so I can understand this in more detail?

Hi, it was the Bridgeco streaming boards in Naim’s 1st gen streamers that were limited in their ability to handle remote servers, because of the added latency involved. I think this quote from Naim’s Software Director, Steve Harris, is a good example (hopefully @Stevesky won’t mind me quoting him here).

“……the network stack in the bridgeco really can’t handle it. The main limitation is that the network peripheral in the chip only has 8K of fast DMA memory (which equates to 6K of real data). This means when streaming high bandwidth data (eg. JB Radio 2 - 4Mbits/sec) you can’t do enough 6K’s in a sec to reach 4Mbits/sec.
aka:
4000 / (6 x 8 ) = 83 transactions a second. or network needs a ping time to server of <12ms to achieve this 83 figure. Not going to happen as physical speed of light says we’re not going from Europe to Canada and back again in <12ms.

The only solution is to proxy it via a UPnP server that is running on hardware that can expose a nice big TCP window and hence nice big chunks of data can flow from the radio server, then the 83 transactions a second are on a LAN link of <1ms ping times.

On the newer products the network stack exposes (and can handle) a huge TCP window + we have a massive input buffer, so streaming from the other side of the world + reasonably high bandwidths can be handled.”
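
For anyone who wants to sanity-check the arithmetic in that quote, here is the same sum spelled out (the 6K payload and 4Mbit/s stream rate come straight from the quote; the rest is just unit conversion):

```python
# Figures taken from the quote above: 6K of usable DMA buffer per
# transaction, and a 4 Mbit/s internet radio stream (JB Radio 2).
payload_kbits = 6 * 8              # 6 KB per transaction = 48 kbit
stream_kbits_per_sec = 4000        # 4 Mbit/s

# Transactions per second needed just to keep pace with the stream.
transactions_per_sec = stream_kbits_per_sec / payload_kbits   # ~83.3

# Each transaction is a full request/response round trip, so the
# ping time to the server has to be shorter than one transaction.
max_rtt_ms = 1000 / transactions_per_sec                       # ~12 ms

print(f"{transactions_per_sec:.0f} transactions/s, ping must be under {max_rtt_ms:.0f} ms")
```

Which is exactly why the quote says the only fix was to proxy the stream on the LAN, where sub-millisecond pings make 83 transactions a second trivial.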

Interesting. Thanks for posting.

Indeed, and remember the buffer mentioned there is nothing to do with the increased sample buffer… this is the TCP window reconstruction buffer.
In TCP design there is a relationship between latency and TCP segment window reconstruction memory for a given throughput.
In short, the longer the latency, the more memory is required for a given throughput… these days we don’t tend to need to worry about this, but the limitations of the first gen streamers made it apparent, which is why Naim spent much effort over repeated firmware releases making their TCP stack as efficient as possible, to reduce the latency they had control over.
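
That relationship is the bandwidth-delay product: the TCP window, and the memory behind it, has to hold everything that can be ‘in flight’ during one round trip. A small illustrative calculation (the link figures are made up for the example):

```python
def required_window_bytes(throughput_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product: the amount of data that must fit in
    the TCP window (and the receiver's reassembly memory) to keep a
    given throughput flowing over a link with a given round-trip time."""
    return throughput_bps * rtt_seconds / 8

# Illustrative figures only: the same 4 Mbit/s stream over a ~1 ms
# LAN hop versus a ~100 ms transatlantic path.
print(required_window_bytes(4_000_000, 0.001))  # 500.0 bytes on the LAN
print(required_window_bytes(4_000_000, 0.100))  # 50000.0 bytes across the Atlantic
```

So a LAN hop needs only a few hundred bytes of window to sustain the same stream that needs tens of kilobytes across the Atlantic, which is exactly where a board with only 8K of fast DMA memory runs out of headroom.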

1 Like

My head hurts. I’m going back to “System pics”. :stuck_out_tongue_winking_eye:

2 Likes

I didn’t realise early kit had such small buffer depth. No wonder latency could have an impact.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.