If it’s got a transformer, there’s an inductive load, which will draw more current under certain conditions, at power-up for example.
The fuse is there for protection if things go obviously wrong and it needs to make things safe.
So, no, then…
You came to the wrong place if you were hoping for a short answer! But yes, the short answer is that 20 meg is fine.
I have connected my NDX2 wirelessly. Is there any chance of improvement by going to a wired connection? Below are my internet speed and other details; I don’t have any technical knowledge.
Those results say nothing about how strong the WiFi signal is at the NDX2. But assuming it is pretty strong, then I don’t think you would get any SQ gain from using Ethernet instead of WiFi. It would be different if you were talking about an NDX though.
As David says, it’s all about how good your WiFi is. If the NDX2 is within reach of the router you could always run a temporary Ethernet cable and see whether you get improvements in sound, connectivity or both. The received wisdom seems to be that if you can use Ethernet, then use it.
With the Linn Klimax DS3 / Katalyst, it can only be hardwired.
Presently on a 200Mb/s full-fibre package, but speeds are usually higher than this. Even the WiFi is good, using a mesh network.
These are the Wi-Fi results:
These are the hardwired results:
Interestingly, the Wi-Fi results are better for download than the hardwired results.
DG…
Even though I have Virgin fibre 250, the streamer I use is limited to 100Mb/s. I think Naim limit their streamers to 100.
If your streamer’s Ethernet port is blinking orange, it’s probably working at 100Mb/s. Green would be 1000Mb/s.
For some reason (I can’t remember the reason) I think my router is 56Mb/s max when using WiFi.
Because it’s old?
No it’s not old.
But the reason I can’t remember is, I’m old.
Yes, the Klimax only uses 100Mb/s Ethernet, which shows amber.
The feed going in is 1Gb/s, which shows green.
At least it shows all is working OK.
DG…
How does latency cause a buffer to empty?
The amount of time it takes for data to travel between server and streamer can simply be too long for it to keep a small buffer full.
The latency on a local stream from a server on your home network will be tiny, and this is what the earlier Naim streamers were designed for. A stream from a Tidal server involves multiple hops around the globe, which takes much longer regardless of the bandwidth your ISP provides. Bear in mind that this is a two-way comms process. Your streamer examines a packet of data, and if it determines that it is correct, it sends a message to the server to say that it’s ready for the next packet (thus the term ‘round-trip delay’). Only then is the next packet sent.
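In rough code terms, the stall condition looks something like this (a minimal sketch of the idea only, not Naim’s actual implementation; the names are illustrative):

```python
def stream_keeps_up(buffer_bytes: int, bitrate_bps: float, rtt_s: float) -> bool:
    """Stop-and-wait view: a refill must arrive before the buffer drains.

    buffer_bytes -- audio data the streamer can hold ahead of playback
    bitrate_bps  -- rate at which playback drains the buffer
    rtt_s        -- round-trip time to the server (request + next packet)
    """
    seconds_buffered = (buffer_bytes * 8) / bitrate_bps
    return rtt_s < seconds_buffered  # otherwise the buffer empties and playback stalls
```

With a small enough buffer, even an ordinary internet round trip can fail that test.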
Sure, but wouldn’t the ping need to be unusually large (or line speed very low) for that to happen? Can you give a numeric example so I can understand this in more detail?
Hi, it was the Bridgeco streaming boards in Naim’s 1st-gen streamers that were limited in their ability to handle remote servers, due to the added latency. This quote from Naim’s Software Director, Steve Harris, is I think a good example. (Hopefully @Stevesky won’t mind me quoting him here.)
“……the network stack in the bridgeco really can’t handle it. The main limitation is that the network peripheral in the chip only has 8K of fast DMA memory (which equates to 6K of real data). This means when streaming high bandwidth data (eg. JB Radio 2 - 4Mbits/sec) you can’t do enough 6K’s in a sec to reach 4Mbits/sec.
aka:
4000 / (6 x 8 ) = 83 transactions a second. or network needs a ping time to server of <12ms to achieve this 83 figure. Not going to happen as physical speed of light says we’re not going from Europe to Canada and back again in <12ms.
The only solution is to proxy it via a UPnP server that is running on hardware that can expose a nice big TCP window and hence nice big chunks of data can flow from the radio server, then the 83 transactions a second are on a LAN link of <1ms ping times.
On the newer products the network stack exposes (and can handle) a huge TCP window + we have a massive input buffer, so streaming from the other side of the world + reasonably high bandwidths can be handled.”
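To sanity-check the sums in that quote, here’s the same arithmetic in Python (the figures are taken straight from the quote, nothing else assumed):

```python
# Bridgeco figures from the quote: 6K of real data per transaction,
# against JB Radio 2's ~4 Mbit/s stream.
chunk_kbits = 6 * 8                    # 6 KB of real data = 48 kbit per transaction
stream_kbits_per_s = 4000              # 4 Mbit/s

transactions_per_s = stream_kbits_per_s / chunk_kbits  # ~83 per second
max_ping_ms = 1000 / transactions_per_s                # ~12 ms round trip

print(f"~{transactions_per_s:.0f} transactions/s, needing ping < {max_ping_ms:.0f} ms")
```

Which is also why the local UPnP proxy works: at <1ms LAN ping times, 83 transactions a second is easy.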
Interesting. Thanks for posting.
Indeed, and remember the buffer mentioned there has nothing to do with the increased sample buffer… this is the TCP window reconstruction buffer.
In TCP design there is a relationship between latency and TCP segment window reconstruction memory for a given throughput.
In short, the longer the latency, the more memory is required for a given throughput… these days we don’t tend to need to worry about this, but the limitations of the first-gen streamers made it apparent, which is why Naim spent much effort over repeated firmware releases making their TCP stack as efficient as possible, to reduce the latency they had control over.
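That relationship is the classic bandwidth-delay product: window memory ≈ throughput × round-trip time. A quick illustration (the 120ms transatlantic round trip is just an assumed example; only the ~6K figure comes from the quote above):

```python
def window_bytes(throughput_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: TCP window memory needed to keep a link full."""
    return throughput_bps * rtt_s / 8

# A 4 Mbit/s internet radio stream over an assumed 120 ms round trip:
print(window_bytes(4_000_000, 0.120))  # 60000 bytes, i.e. ~60 KB of window,
                                       # versus the ~6 KB the first-gen chip could offer
```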
My head hurts. I’m going back to “System pics”.
I didn’t realise early kit had such small buffer depth. No wonder latency could have an impact.