Yes, I agree more could be done; in fact, I would say more needs to be done.
The problem is that whilst CDP to amp, including via a DAC, is a simple path, the LAN layouts in some homes can be far more intricate. But Linn do a pretty good job with their LinnDoc schematics.
It’s getting late, and it’s a day tomorrow too, so I’ll have to come back with an answer.
But in short, given what you write above, you may not have read @Darkebear’s detailed description of his “journey” in this thread.
He has an ND555 with dual-PS.
And others with streamers similar to Darkebear’s have exactly the same experience.
This has been written about many times now in the thread.
The rest I’ll answer tomorrow.
/Peder🙂
There is no need, my opinion is my opinion & does not need an answer.
You do not need to “answer” each & every post.
I’ve read DB’s posts along with other people’s; the conclusions are interesting, some more so than others.
As I’ve said in previous posts, a number of people on this forum have noticed that different cables with different endpoints (equipment) produce different perceived changes. Naim have shown this with the latest players: the cable changes are less pronounced with the new players compared to the older ones.
I have noted, together with others, during experiments with our own home systems that a cable’s ‘characteristic’ is not constant across different players & brands - Naim, Linn, Cyrus, Audiolab - & we’ve run the same tests a number of times & added switch variables. This started a long while ago, maybe 3 or 4 years back.
My only key point from yesterday’s post is that unless the client endpoints & any relevant software details are stated in the test & remain constant, these cable ‘comparisons’ are not proving much. They might be interesting if the reader is so inclined, but they prove nothing.
Well that’s not my experience … the cable changes I have made are very apparent - the ND555 in the 500 system is very revealing … and it’s not subtle … Don’t get me wrong … I wish bits were bits, but they are most certainly not… I cannot pretend to know what’s going on … my findings mirror DB’s … My original view was that a good gigabit switch (aka a Netgear GS105 or similar) and some Cat 6 or 7 cable and the job’s a good one. It was not until I purchased some new cable, plugged it in and thought … WTF… what’s broken… that this led to some experimentation … and ultimately a very satisfying result.
Yes, Rich, but if you take your Ethernet solution and test it on several other brands of hifi gear it won’t give the same result on all of them.
And if you take your Ethernet cables and switches and use them in a few ND555 systems that have different speakers, amps, rooms, PSUs, electrical environments, supports and ears, with different software versions and different music playing, you won’t get the same results there either.
I think that’s what Mike’s saying in his post above.
Peder, if your cableholic doesn’t know how to use his money, here is a more expensive Ethernet cable than the Chord Music.
I quoted a recent post from a member.
Crikey!
£6,250.00
Excl. VAT: £4,947.92 / Incl. VAT: £5,937.50
FREE UK DELIVERY
Item Code: Platinum Starlight Ethernet
Bits most definitely are bits! If anything is happening to the bits in the network then, firstly, the Ethernet protocol is not working properly - extremely unlikely if Ethernet-specification cables, switches and streamer inputs are used - and, secondly, it would be very disturbing if other cables or switches were adding or taking away data, as it would mean the digitised music in the recording is being corrupted/modified before reaching the decoder. And I cannot see how what must be musically random changes from a cable or switch effectively adding or removing bits can in any way be considered hi-fi, or even musical!
One distinct possibility is that it is the effect of non-digital electrical noise (e.g. RF) being picked up or suppressed by network components, so that the modulation of the analog signal, or some other interference as it is being reconstructed, can vary, with a more or less pleasing effect to the listener - though of course only the unmodulated signal represents the recording itself.
@Simon-in-Suffolk has referred to timeframe effects in different switches (if I am remembering correctly). That - and SiS will correct me if I am wrong - I would not expect to be of any significance when streaming music from one’s own store on the network to a streamer that loads the entire song into memory before playing, as for example the ND555 does; whereas RF still reaching the streamer whilst playing can still reach the DAC and wreak its effect (unless the streamer blocks it effectively - some will block better than others).
So bits are bits, but the network may be a source or collector of other interference that changes the sound. My conclusion is that network components may variously be adding, reducing or passing whatever interference is having an audible effect, and hence what is reported in this thread. The differences heard by different people in their systems will then be due to differences in interference arising from different electrical environments, different networks, different susceptibilities of different streamers, different degrees of resolution and background noise in the analog stages of different systems, and of course different brain sensitivity to whatever the ear picks up.
Exactly.
Bits can only be bits - they cannot be half-bits or bits and a half. Or rather, the voltage representing the bits can be above or below its ideal value, but it will be interpreted either as on or off (1 or 0). There is no other value it can take. None whatsoever. So if we are using a system where, say, 5 volts is on and 0 volts is off (you can use any voltage you like in principle), then you would accept anything above 2.5 volts as on (or some such threshold) and anything less as off. So it doesn’t matter if your voltages at the clock ticks are 0, 0.1, 1, 6, 3.8, 5.8, for example - this will be interpreted as 0, 0, 0, 1, 1, 1 - nothing else.
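A minimal sketch of that thresholding in Python (the 5 V scheme and 2.5 V threshold are just the figures from the example above, not any particular Ethernet standard):

```python
# Threshold decoding: any sampled voltage at or above the threshold reads as 1,
# anything below reads as 0 - the exact analog value is thrown away.
THRESHOLD_V = 2.5  # midpoint of a notional 0 V / 5 V signalling scheme

def decode_bits(voltages):
    """Map sampled line voltages to the bits a receiver would see."""
    return [1 if v >= THRESHOLD_V else 0 for v in voltages]

samples = [0, 0.1, 1, 6, 3.8, 5.8]   # the noisy samples from the post above
print(decode_bits(samples))          # -> [0, 0, 0, 1, 1, 1]
```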
If there is sufficient noise on the system, and the noise is not rejected, then it would be possible, in theory, for some of those voltages to be modified, and the 3.8 to become 2, for instance. In this case that string of bits would be 0, 0, 0, 1, 0, 1 - but then another mechanism comes into force: error checking and error correcting. This detects that the received information in that particular packet is wrong, and the receiver requests a re-send, in which case the data are re-sent and re-checked, and if necessary another re-send is requested. If this happened a lot then several things could happen. It could be that the connection is, effectively, lost - and you get silence. More likely you will get dropped packets, which would result in very audible degradation - and certainly not in the form of loss of bass, or soundstage, or anything like that. It would be very obvious, not subtle at all, and (this is important) random. No particular frequency range would be selectively affected.
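A toy illustration of that detect-and-discard idea - this mimics the role of a frame check sequence (CRC) with retransmission handled at a higher layer; it is not the actual Ethernet or TCP implementation:

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    # Append a CRC-32 checksum, playing the role of Ethernet's frame check sequence (FCS).
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def frame_ok(frame: bytes) -> bool:
    # Recompute the checksum; any flipped bit makes the comparison fail.
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

frame = make_frame(b"PCM samples")
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # noise flips a single bit

print(frame_ok(frame))      # True  - payload delivered bit-perfect
print(frame_ok(corrupted))  # False - frame rejected and re-sent, never played wrong
```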
So the bit-stream arriving at the DAC will be correct - bit-perfect.
Any differences between Ethernet-compliant cables will not be through bad bits - and certainly not through bits that are not bits.
All that is left - if there are differences - is electromagnetic interference (EMI) effects on the DAC, presumably introduced in some way by the cable. These will not be affecting the bitstream, but I suppose they could affect the output of the DAC in some way. My knowledge here is insufficient to say much.
Ok, point taken, bits are bits - but not in the context of being immune from other effects … there is clearly something happening. I do not pretend to know … something is influencing … the bits or the mechanism of transfer. How can you account for a linear power supply on a NAS drive - which was working perfectly as standard (with the standard PSU) - altering the resolution, in other words extra detail being present sonically? My point is that the ecosystem in which the bits are transferred has an influence somehow. My understanding is that the Ethernet protocols have some degree of tolerance. Perhaps noise or RFI has some impact, I don’t know … I do know that LPSs on both my NUC and NAS made a big difference.
I imagine the power supply is delivering a lower noise floor, and therefore more detail is resolved. In other words, bits are bits, but a network will have a relative noise floor below which details will not be revealed. Lower the noise floor and less detail will be masked.
I would love to watch you install someone’s system, or improve it. I expect that you keep them fully informed on what you are doing - after all, it is their equipment you are modifying. Presumably you tell them what to expect from the changes, otherwise I don’t suppose that they would be particularly happy with changes being made. Do you try to teach them the TuneDem method?
That applies in the analog domain, but not digital, for the reasons expounded by Beachcomer. The voltage representing every ‘1’ is the same and way above even the worst-case noise level, and if it weren’t then no data would get through. But a noisy power supply to a switch (N.B. while some SMPSs are worse than LPSs, I understand that is not inevitably the case) may inject RF noise into the network, so with a streamer/DAC that is susceptible to that particular frequency range, changing to a less noisy power supply may mean less RF gets through.
Probably the reason why Cisco switches with external PSUs, or adding linear PSUs to switches, improve the sound. Noise is injected into the network and we all prefer noisy sound.
There is no such thing as audio “details” transported through Ethernet cables.
Ethernet cables (expensive or cheap) transport numeric data in a very robust and reliable way.
That data is encapsulated into Ethernet frames.
This means, in terms of data, there is no difference between cheap or outrageously expensive cables.
The difference is more in the noise-shaping domain. This explains why some can hear a difference between cables, and also why those cables are so “system dependent”.
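As an illustration only, a rough sketch of the encapsulation just mentioned (the field sizes are the standard Ethernet II ones; the addresses and payload here are made-up placeholders):

```python
# Rough shape of an Ethernet II frame carrying a chunk of audio data.
frame = (
    bytes.fromhex("ffffffffffff")        # destination MAC (placeholder)
    + bytes.fromhex("001122334455")      # source MAC (placeholder)
    + (0x0800).to_bytes(2, "big")        # EtherType: IPv4
    + b"...encapsulated audio data..."   # payload (46-1500 bytes in a real frame)
    + bytes(4)                           # FCS: CRC-32 over the fields above
)
# The receiver either gets exactly these bytes, or the FCS check fails and
# the frame is dropped - there is no "slightly degraded" payload in between.
```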
Hi, yes that kinda makes sense … but ask yourself this: those bits that are lost to noise - those that have effectively not been picked up by the DAC because they are swamped by the noise floor - have effectively been lost. Therefore the original digital signal has not got through … so you no longer have a perfect signal. The bits are effectively not there at the end where it counts… So here we have a potentially lossless system subject to a lossy environment… take from that what you will… it’s reality. Now it stands to reason that anything you can do to reduce issues like noise - say by using low-noise PSUs, possibly noise-rejecting Ethernet transformers, cables that possibly reject RFI etc., switches with better internal regulation etc. - will help. What I was getting at originally is that if you are marketing a 150k amp … it makes sense to demo that equipment … taking reasonable care of the source…
The expression “bits are not only bits” means that data transfer is not everything. And even isolation from noise is not enough. Sophisticated switches are improved nowadays by better PSs but also by better clocks. Some switches can be improved by external clocks, like the Sotm or Ether Regen.
Yup … agree … I think there is still a lot to learn.
Yes, I feel so too. Streaming is, after all, relatively new - maybe 15 years old. And the first high-end streamer was the Linn KDSM from 2004 (or 2005). With Naim it began in 2009 (?) with the HDX.
I remember that at the beginning there were essentially DACs with USB connections to laptops. At hi-fi shows they were showing big systems with very expensive DACs and ridiculous mini laptops. The DAC was considered the most important part.