I’d leave it where it is. Keep the networking and server bits away from the Hi-Fi. In my system, I preferred it with the Cisco at the end of 25 m or so of cable, so it sits in the office/study alongside my Roon Nucleus and router, and the rack in the lounge stays nice and neat.
My switch is 1.5 m from the router and I have two 0.75 m AudioQuest Diamond cables, one from the switch to the Melco NAS and one from the NDS to the Melco.
These cables are too expensive to buy in long lengths.
Sound is sublime.
I agree with James, there is no need for the network gear to be anywhere near your hi-fi.
I’ll stick with Plan A for now, but I’ll try the Cisco at the hi-fi end just to test it out too.
Why? Ethernet isn’t sensitive to cable length (apart from the normal 100 m limit).
I wouldn’t want all of that noisy digital electronics near my system, nor the mechanical noise of fans, disks and PSUs. My music server and LAN switch are well out of the way (upstairs).
And why Cat 6a? Cat 5e is good enough for gigabit Ethernet. Unless you’re next to some 4G or 5G tower I’m not sure what else could interfere. You’d be better off checking the interface error counts and respecting the minimum bend radius IMHO.
The sound degrading effects in an Ethernet link are nothing to do with data loss or data corruption. These effects arise from low level (sub bit transition level) electrical noise and clock jitter (i.e. time based ‘noise’) that is conveyed along with the electrical carrier for the digital data. These extraneous noise signals then couple into the analogue electronics causing interference.
It’s for this reason that the ‘best’ Ethernet cable is so random between different installations and why there is so much acrimonious debate about Ethernet cables. Unfortunately some people still work on the principle that ‘bits is bits’ (and ignore noise on the electrical carrier for those bits) and others still cling to the notion that digital cables affect sound in the same consistent way that analogue cables do.
Each time a listener claims that a particular digital cable is ‘the best’, they are simply providing anecdotal evidence that it caused the least bad interference in their system. When a manufacturer makes that claim, it’s an intentional attempt at deception by their marketing department.
@MeToo, I’m keeping an open mind about the effect of switches and cables and have a foot in the “bits are bits” camp, but I’m prepared to give it a go because a Cisco 2960 (I need a switch anyway) and 20 metres of cable are not too expensive. They are both on their way to me.
Why Cat 6a? No other reason than that the specific cable I’ve gone for is generally well liked (it’s in a floating configuration so only meets the Cat 5e standard anyway) and is relatively cheap for something with “audio” in its description. I’ve got my Maplin (or was it Tandy?) bog-standard Cat 5e with my own terminations in place now, so can swap between them anytime.
Placement: I’ve been flip-flopping about this. Some (possibly the majority) have their servers and switches on their hi-fi rack connected by a short and expensive S/PDIF or “streaming” cable and claim this is the way to go, hence my comment. However, in my arrangement they will be upstairs for domestic reasons and will have to stay there.
As I say I’m a little bit sceptical but am open to trying.
Hmm. IMHO I agree data corruption would be caught (mostly) by the CRC bits at Ethernet level or checksums at higher layers.
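As a toy illustration (Python, using zlib’s CRC-32, which is the same polynomial as the Ethernet FCS): flip one bit and the checksum no longer matches, so the receiver drops the frame instead of handing bad data upwards.

```python
import zlib

payload = bytes(range(64))                  # stand-in for a frame's payload
fcs = zlib.crc32(payload)                   # CRC-32 as appended by the sender

corrupted = bytearray(payload)
corrupted[10] ^= 0x01                       # flip a single bit "in transit"

print(zlib.crc32(bytes(corrupted)) == fcs)  # False -> frame is dropped
print(zlib.crc32(payload) == fcs)           # True  -> intact frame is passed up
```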
Sub bit-level noise has no effect on a digital receiver, as the PHY layer in 1000BASE-T employs a scrambler/descrambler and trellis (convolutional) coding to counteract and correct for it. So either a symbol is detected or not, and the entire frame either gets passed or dropped.
Also bit-level jitter within a frame will have zero impact because the Ethernet frame is delivered at the MAC layer as a complete frame in memory. The main CPU or real-time downstream digital processor will typically only see the entire frame (once triggered by an interrupt that a frame is ready to be read from the Ethernet chip).
If you’re worried about intra-frame or inter-bit jitter, you should be looking at the MAC/PHY chips employed in your equipment rather than the interconnecting cables.
So the downstream frame & packet handler won’t experience any intra-frame jitter. It will see inter-frame jitter and gaps, but it has to cope with those anyway (to cope with packet drops and retransmissions). So any re-timing and re-clocking / buffering of an audio stream at frame level has to be much more capable than accommodating sub-bit jitter: it has to retime multiple frames to a very high degree of clock accuracy before passing this stream to the DAC (independent of the intermediate non-real-time digital transmission over Ethernet).
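To make that concrete, here’s a minimal sketch (hypothetical names, nobody’s actual firmware) of the decoupling I mean: frames arrive with whatever inter-frame jitter the network produces, but samples only ever leave on the local DAC clock.

```python
from collections import deque

class StreamBuffer:
    """Toy model: irregular frame arrivals in, regularly clocked samples out."""

    def __init__(self):
        self.samples = deque()

    def on_frame(self, frame_samples):
        # Called whenever the MAC layer has delivered a complete frame;
        # the arrival timing is irregular and simply doesn't matter here.
        self.samples.extend(frame_samples)

    def on_dac_tick(self):
        # Called once per sample period by the local, re-clocked DAC clock.
        # Only this clock's jitter can reach the analogue output.
        return self.samples.popleft() if self.samples else 0  # 0 = underrun
```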
I do really believe in the criticality of re-clocking at this point: just as the bit stream hits the DAC (like in my Meridian 808 CD player, which re-creates a 192 kHz up-sampled stream internally). Simple maths will show you that any jitter in the real-time stream at this stage is directly equivalent to losing bits of resolution, and also leads to mis-focused stereo imagery. But IMHO that has nothing to do with the non-real-time portion.
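The ‘simple maths’ is usually expressed as the jitter-limited SNR of a full-scale sine; a back-of-envelope check using the standard rule-of-thumb formula (nothing specific to the 808):

```python
import math

def jitter_limited_snr_db(freq_hz, rms_jitter_s):
    # SNR ceiling for a full-scale sine sampled with a jittery clock.
    return -20 * math.log10(2 * math.pi * freq_hz * rms_jitter_s)

snr = jitter_limited_snr_db(20_000, 1e-9)  # 20 kHz tone, 1 ns RMS jitter
print(round(snr, 1))                       # ~78 dB
print(round((snr - 1.76) / 6.02, 1))       # ~12.7 effective bits
```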
But others can hold on to their beliefs. I ain’t trying to convert anyone.
The effects aren’t primarily in the digital domain (for the reasons you pointed out), but are caused by non-ideal behaviour of the electrical signal in the digital domain interacting with (i.e. coupling to) the analogue electronics.
How is this low level digital noise on the Ethernet cable meant to be coupled into the analogue path in any meaningful way?
[Especially compared to the very noisy and very close digital noise originating from the streaming module and DSP itself.]
See page 3 of the following:
There are two sets of isolation between the Ethernet, the streaming module, the DSP, and the analogue audio. The optical coupling and retiming should ensure that any external sub bit-level noise is completely filtered and isolated. You will also observe the relevant buffers and re-clocking.
Again: if anyone wants to spend money on “better” Ethernet cables, that’s their decision. I’d be spending my money elsewhere e.g. on higher resolution streams or more music.
Because you can’t actually stop it - all you can ever do is reduce the coupling.
The coupling occurs capacitively and also via conduction through power supplies. Jitter causes sidebands superimposed on the other digital clock frequencies and these can be very troublesome. Sensitivity to disturbance by out-of-band frequencies is a major factor in the design of good sounding analogue audio electronics (and in this I speak from real personal experience).
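If it helps, the sideband mechanism is easy to simulate. The sketch below (illustrative numbers only, not any real clock) phase-modulates a carrier with a small sinusoidal timing error, and the FFT shows discrete lines either side of it:

```python
import numpy as np

fs = 1_000_000                      # analysis sample rate (Hz)
f_clk = 100_000                     # 'clock' fundamental (Hz)
f_jit = 1_000                       # jitter modulation frequency (Hz)
t_jit = 5e-9                        # peak timing error (s)

t = np.arange(fs) / fs              # one second of signal, 1 Hz per FFT bin
# A sinusoidal timing error appears as phase modulation of the clock:
phase_error = 2 * np.pi * f_clk * t_jit * np.sin(2 * np.pi * f_jit * t)
clk = np.sin(2 * np.pi * f_clk * t + phase_error)

spectrum = 20 * np.log10(np.abs(np.fft.rfft(clk)) + 1e-12)
# Sidebands appear at f_clk +/- f_jit, i.e. the 99,000 and 101,000 Hz bins:
print(spectrum[100_000] - spectrum[101_000])   # carrier-to-sideband gap in dB
```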
So according to this, if I stream an audio file containing a computer generated blank test signal over Ethernet and turn up the volume, I should be able to hear this digital noise compared to a quiet optical SPDIF input (because of course zeroes in the digital domain are never transmitted on the wire as pure zeroes, because of the need for a clock signal recovery and the introduction of data scrambling)?
Or if I stream pure sine waves in an uncompressed computer-generated test file, I should either be able to hear this effect or be able to see the resulting modulation sidebands and distortion on an audio spectrum analyser? And I’d be able to observe a difference between two Ethernet cables, or between an Ethernet cable and an optical SPDIF input for the same source, or between a streaming server 1 m and 20 m away? [With the “colocated server” supposedly being better, according to the general perceived experiences expressed in the posts above?]
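For anyone who wants to actually run that test, a rough sketch of generating the two computer-generated files (digital silence and a 1 kHz sine), assuming numpy and scipy are to hand:

```python
import numpy as np
from scipy.io import wavfile

rate = 44_100                              # CD-resolution test files
t = np.arange(rate * 30) / rate            # 30 seconds

silence = np.zeros(len(t), dtype=np.int16)               # digital silence
sine = (0.5 * np.iinfo(np.int16).max *
        np.sin(2 * np.pi * 1000 * t)).astype(np.int16)   # 1 kHz at -6 dBFS

wavfile.write("silence_test.wav", rate, silence)
wavfile.write("sine_1khz_test.wav", rate, sine)
```

Stream each file over the cables/placements under test and compare the analogue output on a spectrum analyser (or a captured FFT) for added noise or modulation sidebands around 1 kHz.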
Again, I don’t doubt that external coupling of digital noise into analogue is a factor, just that the choice of Ethernet cable and choice of server placement (beyond a few centimetres) can influence that in any significant way.
Some prefer to believe their theories; others test in their own system and see whether the theory matches the reality of their experience.
I agree with this. I don’t know anything about Ethernet transmission, and it seems those that do can’t agree anyway, so how is one supposed to reach a conclusion?
It’s going to cost me £200 for the switch and cable, and I need the former regardless of whether it makes a difference to SQ or not.
In some ways I hope it makes no difference then I can forget about it, having satisfied my curiosity.
Yes, you can return it or sell it afterwards. The risk is minor. 90% of us were very satisfied with the Cisco. You have nothing to lose, and all these theories come from the pro domain and don’t really apply to audio.
Why do you believe you’d be able to audibly distinguish the effect of the extraneous signal coupled from the digital transmission from the noise of the analogue electronics?
Why do you believe that an audio spectrum analyser would be able to show a distinction between the effect of the uncorrelated extraneous signal coupled from the digital transmission and the noise of the analogue electronics?
Just taking small-signal transistor latch-up as an example (it’s often a transitory event in the forward gain path before the negative feedback corrects it, and I’ve seen it occurring in amplifiers, triggered by digital source components): in neither test you describe would it show up in the result, and yet its effect on the sound quality of a music signal is obvious to hear. (There are, however, more specific tests for this particular defect, should it occur.)
It’s not a case of “believing” a particular theory or not. Theories and also scientific “fact” can be very useful both in the design of equipment and in short listing what is worth trying out.
So can taking advice from trusted sources - not that there are many in the various streaming threads at the moment, mainly because switches and cables are so dependent on the other equipment in the system and also on the perceived benefits according to individual tastes.
I suggested in one of these threads that we are basically in the “suck it and see” stage at the moment. There isn’t much consensus!
I don’t expect to be able to detect that difference, although it might be visible.
What I want to hear or see is the delta that replacing an Ethernet cable makes, or moving a server makes.
Right. But that’s blindingly obvious if you have test points within the circuit. Like an op amp latching. Or an amplifier going into parasitic oscillation.
What’s being claimed is incremental SQ improvement by changing an Ethernet cable, or relocating a server, and that it’s significant.
The problem is that the trusted sources you are referring to come from the pro industry world.
There is no scientist here who works in audio, designing streamers, switches, or Ethernet cables. I would much prefer comments from those who do.
The others, without disrespecting them, are only speculating.
The trusted sources that I am referring to include a few people on this forum, my dealer and a few friends.
I don’t know how you have come to the incorrect conclusion that they are all from the professional industry world.
As I said before, these threads do provide pleasant reading and gradually, some of the suck it and see reports might provide useful indicators.