To be honest, I always assumed that the undulations the stylus tracks in the walls of the groove are on either side of the roughly 90°-angled groove. So the movement which generates the signal for each channel would be at 45° to the surface, rather than vertical and horizontal as Simon explains.
I am pretty sure that this is the case, and my problem is visualizing this movement through this rotated 3D space. It obviously works without the stylus randomly scratching along or jumping out of the groove, so clearly the issue is with my imagination / internal visualization.
Clearly you are trying to visualise internet streaming when you should be thinking about local streaming
Back to the topic if I may
Considering that none of the audio codecs / file formats / protocols / services under discussion auto-degrade to a lower bitrate if the data transmission is too slow, in the way modern lossy video codecs and services like Netflix do, would we not expect that in the above cases of severed TCP connection et al. the buffer would eventually run empty and there would be dropouts / complete stops until the connection recovers? Maybe there could be some cases where the dropouts are so short that the brain glosses over them and fills the gaps, but this should be highly unlikely to be reproducible.
If this makes sense, how would we explain, based on your hypothesis, the supposed perception of a constantly and reproducibly degraded SQ with internet streaming vs local streaming, which the affected people describe rather as if they were listening to a low-bitrate MP3 instead of the expected CD quality or hi-res FLAC, and not as intermittent dropouts?
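To make the buffer-drain argument above concrete, here is a back-of-envelope sketch. All numbers are illustrative (a hypothetical 16 MB buffer and rough FLAC bitrates, not taken from any specific streamer): once the feed stops entirely, the buffer only buys a fixed number of seconds before a hard dropout, since none of these formats downswitch to a lower bitrate.

```python
# Back-of-envelope sketch: how long a playback buffer lasts once the
# network feed stops completely. Buffer size and bitrates are made up
# for illustration, not taken from any real streamer.

def buffer_seconds(buffer_bytes: int, bitrate_bps: int) -> float:
    """Seconds of audio held in a buffer at a given stream bitrate."""
    return buffer_bytes * 8 / bitrate_bps

BUFFER = 16 * 1024 * 1024  # hypothetical 16 MB buffer

# Rough average bitrates: CD-quality FLAC ~1 Mbit/s, 24/192 hi-res ~5 Mbit/s
cd_quality = buffer_seconds(BUFFER, 1_000_000)
hi_res = buffer_seconds(BUFFER, 5_000_000)

print(f"CD-quality FLAC: ~{cd_quality:.0f} s")  # ~134 s
print(f"hi-res FLAC:     ~{hi_res:.0f} s")      # ~27 s
```

The point being: with a stalled connection you get silence after the buffer runs out, not a continuously degraded signal, because nothing in the chain switches to a lower bitrate.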
The effect is just like watching YouTube or Netflix through a low-bandwidth router or a slow internet connection, especially when you need to download a hi-res audio file.
But it isn’t the same because YT (if set to auto-quality) and Netflix codecs auto-degrade to lower resolution. Qobuz/Tidal and FLAC don’t, at least not e.g. via Roon or as far as I know the Naim app (Tidal’s/Qobuz’s own apps might autoswitch from FLAC to MP3, but I don’t think they do)
Even if they do not drop to a lower resolution, constantly copying large hi-res audio files from your Roon server to your streamer, or from a Qobuz server to your Roon server, can cause some impact due to data-flow handshaking.
I really don’t think you know what you are talking about
“Some impacts”, OK, but this would be the ubiquitous “higher processing load impacting analogue stages” and shouldn’t affect what goes into the DAC (ignoring obvious dropouts), so I don’t think that this would have such a great effect as described.
But OK, I am satisfied that you don’t have a technical explanation for the reported effects either, so I know what to ascribe it to, as I thought. I was just curious whether I was missing something obvious, thanks
Maybe I am talking nonsense, but if you constantly copy large files (over 1 MB) from one computer to another in a slow network environment, you can immediately see the effects, even if the files eventually get there.
But what effects would that be, apart from some processing load and further contribution to network congestion due to excessive retransmissions? (Again assuming that it is at least fast enough to avoid an empty buffer and hence dropouts)
Does it cause any audio SQ impact? No?
I wouldn’t know how, apart from the “higher processing load noise affecting the analog parts in the streamer” catch-all argument, which should normally still not have the large effects that are sometimes described. And with Roon this would not be an issue anyway, because all this load sits in the Roon Core (usually a NUC with at least an 8th-gen i3, which is not at all taxed by audio transmission if Roon DSP is not enabled)
I believe the large experienced differences have completely different causes
The consistency of the inter-frame time gaps and excessive retransmissions are, I think, some of the main issues. That is why Roon and other streamer makers recommend wired Ethernet. Some of us may not hear the impact; some do, especially on a bad Wi-Fi network, a slow LAN, or a bad router.
I can hear a difference b/w my server and the cloud on my lowly QNAP. Based on comments here I attribute the difference to my legacy streamer buffer or network latency vs cloud latency. But that only makes sense to me b/c I don’t know or understand the details behind the claims.
PS - I have no reason to believe it’s related to problems with my ISP. I continually get 130 Mbps with no dropouts.
OK, I guess this is the best there is. Thanks! Certainly Roon do attribute SQ differences to this in writing, so I’ll go with this, too
Rule 5: Use Ethernet between Core and Output
Roon has comprehensive and robust support for WiFi, but the sound quality often isn’t the same. For your highest quality rooms, plan on using wired gigabit ethernet connections between the Core and the Outputs.
Remember that a large buffer in the Roon Server or a streamer only helps mitigate the impact up to a point, because the buffer size is finite, not to mention occasional unsolicited disconnections between point A and B.
Absolutely… the internet is inherently reliable… that is why it was invented… to reliably send data across multiple interconnected networks. (Hence its name)
However, at points of congestion or bottlenecks, and on less reliable network links such as radio, the odd packet can be dropped or lost and needs to be resent.
It’s not so unusual to see with internet-connected applications these days (unless there is very low bandwidth or congested access) the unreliability coming from the host… ie data received is not processed quickly enough and is lost, so has to be resent. This is what you saw quite a lot with internet access on first-generation Naim streamers.
Reading the thread now there seems to be some confusion about speed and reliability… from a network perspective they are not specifically related.
However, with connection-oriented transmission flows… which with TCP/IP means using the TCP transport protocol… there is a relationship between throughput, latency and TCP window sizing. That is, the higher the latency, the larger the host TCP window memory needs to be for a given throughput. From memory I seem to remember Naim streamers new and old have fixed TCP window memory.
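The throughput/latency/window relationship above is the bandwidth-delay product. A quick sketch with my own illustrative numbers (not Naim’s actual window sizes or any real RTT measurements) shows why a fixed window that is ample on a LAN can become the throughput ceiling on an internet path:

```python
# Bandwidth-delay product: the minimum TCP window (bytes "in flight")
# needed to sustain a given throughput at a given round-trip time.
# All numbers are illustrative, not measurements from any real streamer.

def required_window_bytes(throughput_bps: float, rtt_seconds: float) -> float:
    """Bytes that must be unacknowledged in flight to keep the pipe full."""
    return throughput_bps * rtt_seconds / 8

# The same 10 Mbit/s stream over a LAN (~1 ms RTT) vs an internet path (~80 ms)
lan = required_window_bytes(10_000_000, 0.001)
wan = required_window_bytes(10_000_000, 0.080)

print(f"LAN window needed: {lan / 1024:.1f} KiB")  # ~1.2 KiB
print(f"WAN window needed: {wan / 1024:.1f} KiB")  # ~97.7 KiB
```

Read the other way around: with a fixed window, maximum throughput is window × 8 / RTT, so the same device pulls data far more slowly from a distant cloud server than from a local NAS.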
Note one shouldn’t confuse TCP window memory with sample buffer memory… they are quite different things.
Also one shouldn’t confuse TCP with assured or guaranteed data transfer… however it is a transport protocol that reliably informs the sender and receiver that the data has been sent or received (or otherwise) and allows the selective retransmission of data. The traditional alternative is UDP. This is a connectionless transport, meaning the sender and receiver have no reliable way of knowing who has received the data or the status of data sent. In practice those checks are undertaken in the application. In home audio streaming, UDP is typically only used for mDNS and SSDP (discovery) type functions. Media transfer nearly always uses TCP.
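The connection-oriented vs connectionless distinction can be seen in a few lines of socket code. A small sketch (my own illustration, not from any streamer firmware) sends to a localhost port with no listener: UDP’s sendto() succeeds silently, while TCP’s handshake surfaces the failure immediately:

```python
import socket

# Contrast TCP's connection setup with UDP's fire-and-forget behaviour
# by targeting a localhost port that has no listener.

# Find a free local port by binding an ephemeral socket and releasing it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
dead_port = probe.getsockname()[1]
probe.close()

# UDP: connectionless. sendto() returns without error even though nobody
# is listening; the sender gets no indication the datagram went nowhere.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", dead_port))  # no error raised
udp.close()

# TCP: connection-oriented. connect() performs a handshake first, so the
# missing listener is detected straight away.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", dead_port))
    tcp_result = "connected"
except ConnectionRefusedError:
    tcp_result = "refused"
finally:
    tcp.close()

print(tcp_result)
```

This is why media transfer uses TCP: the transport itself reports delivery status and retransmits, whereas with UDP the application would have to do all of that itself.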
I should also point out, without getting technical, that devices, and this includes Naim streamers, often switch between two styles of TCP flow control depending on the rate at which the data is being received by the host (or streamer). This results in staccato, burst-like flows for high-speed, low-latency transmission, such as home networks, and a slower, more consistent flow of data for slower end-to-end throughput… and on a home broadband link shared with other users of the internet access you may see the streamer switch between the two modes when communicating with a cloud content media server, such as with Qobuz or Tidal.
This ability and effectiveness of operation was significantly improved with the new Naim streamers. The first-gen streamers could lose data quite easily here… and required quite a lot of network retransmission.
It’s going a bit in circles. None of this is disputed, obviously, and I believe there is much agreement here anyway