Thanks for your input.
The Innuos, Aurender and Melco music servers that I’ve been looking into don’t have SFP connections to use with my P1; they only have USB or RJ45.
Lumin’s L2 Music Server has SFP connections, but unfortunately I can’t rip CDs directly to it.
The P1 does not have an I2S connection either.
You might use a switch with both Ethernet and SFP ports. In any case, it’s a good idea to separate the server from the streamer.
I stream just about all the time, as my streamer sounds better than the CD rips.
All my streaming is upsampled to a much higher rate, and this just crushes standard CD 16-bit/44.1kHz. I have to upsample the CD rip as well for it to get close, but even then I still mostly find the streamer sounds best.
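To give a flavour of what that upsampling step does, here’s a minimal sketch in Python (the filenames and the 4x rate are just examples; the upsampler in a real streamer uses far more sophisticated filtering than this):

```
# Minimal upsampling sketch: 44.1 kHz CD audio -> 176.4 kHz (4x).
# Filenames and the rate are illustrative only.
import soundfile as sf
from scipy.signal import resample_poly

data, rate = sf.read("rip_44k.wav")   # hypothetical 16-bit/44.1 kHz rip
assert rate == 44100

# Polyphase resampling: interpolates new samples; it creates no new
# musical information, only a different reconstruction.
upsampled = resample_poly(data, up=4, down=1, axis=0)

sf.write("rip_176k.wav", upsampled, rate * 4)
```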
But this certainly wasn’t the case years ago when I had the NDS; back then the Melco with ripped CDs sounded better. I have come a long way since those days, though.
It’s been interesting to read the various replies, and yes, I am picking up some new ideas/theories to go and investigate. Who would have thought that black and white cows exist; if they are not all brown, can you get grey ones? Maybe they are not cows but DSD zebras?
I’m still thinking that most of the differences folks are hearing are due to replay-channel artefacts being introduced by something (hardware, software, or a combination of both). Anyway, I’m off to get my GCSE engineering results, so you can guess (or not) my level of understanding. Never too old to learn something new. After that it’s a bit of listening to music, because that is what it is all about.
Agreed, I use my P1 with a Cisco 2960, which has an SFP output. This converts to fibre and plugs directly into the P1. Any server on the network can then feed the P1.
How were they determined to be different?
Here’s a post from the thread linked by our venerable @Innocent_Bystander:
“Thanks Chris. The Core in question was a pre-production unit that I was reviewing. The unit came without a hard disk, but recognized my UnitiServe’s downloads folder as a store, so rips went there. That made it easy to compare the Core rips with those of the UnitiServe (WAV rips in all cases). I was not expecting a difference, but increasingly found myself less engaged with the Core rips. I set up a playlist of Core and US rips, then, using random unsighted playback, noted which version I thought it was, followed by a visual check. On repeated trials, I fared better than chance, correctly identifying the rips about 75% of the time.
An example: Francis Cabrel’s track “Elle m’appartient, C’est une artiste” begins with an accordion and a brief guitar intro before Cabrel sings. With the Core rip, the accordion was a little more finely resolved and cleaner; when Cabrel’s voice enters, it was in a narrow band in the soundstage, just left of centre, lacking body. The guitar solo later in the track was clean and clear, but again lacked body (or the human touch). The guitar was heard more than felt.
Moving to the same track on the Serve rip, the accordion was less finely resolved, but seemed to have better flow. The upper-chest component of Cabrel’s voice was more present, with a consequently wider spread of his voice in the soundstage. It just sounded more human. By the guitar solo, my analytical brain had switched off and I was just transported by the music.
This pattern repeated itself many times over three weeks of going back and forth between the Core and the Serve.
When listening on the Core, I found myself picking up my iPhone more often than when listening via the Serve, where I was less easily distracted, the music commanding my attention more fully.
I also tested the rips using the ‘chills’ (goosebumps) reaction, precisely because it is involuntary. It is very difficult to ‘will’ oneself into it. I have a few tracks that reliably produce the reaction. On the Core rips, the reaction would often start at the appropriate point in the music, but would not be as intense as with the Serve rip.
Overall, the Serve rips did a better job of connecting me with the human performing the music and the human who composed it.
I’m fully aware of how our biases can influence what we hear and how we react to music, which is why I raised this on a forum thread in 2017. Five members volunteered to listen to both rips, blind to the source of the rip. The first reply came from a forum member who is also a bass player. He consistently picked out and preferred the ‘B’ rips (UnitiServe).
Results from the other four listeners were not as categorical, ranging from a slight preference for the Serve rips to no difference. Overall, in six listeners (including myself), three noted a preference (from slight to strong) for the Serve rips. While the result wouldn’t pass a test for statistical significance, it is the consistent direction of the preference (when noted) that is interesting. One listener remarked that he couldn’t tell the rips apart on casual listening (on random playing), yet when he stopped thinking about the sound and just tried to relax into the music, his score was 5 to 1 in favour of the Serve rips.
Analysis of the rips showed they were bit-perfect copies of each other except for the leading data bits (zeros): a difference in offset.
The forum thread caught the attention of Naim’s former MD, Trevor Wilson, who reached out to me to investigate further. He compared Serve and Core rips and also found a difference, preferring the Serve rips on some tracks, and the Core rips on others. Different offset adjustments were evaluated at Naim, the firmware was updated, and the delta disappeared.
With the update, I could no longer reliably tell the Core rips from the Serve rips. On 30 random plays of Brush with the Blues, I guessed correctly 11 times. On another of my test albums, I guessed correctly 15 times out of 30. Overall, 26 out of 60, so no better than chance.
Still, I was at a loss to understand how a minor offset difference could alter sound quality. Trevor’s theory was that “The music is left/right 16 bit data packet. Maybe the alignment of the non audible data helps align to the internal bus architecture thus it takes one CPU cycle not 2 to get data out.”
Stranger things have happened.
Systems used to compare the rips were as follows:
- UnitiServe / S/PDIF into a Resonessence Labs Mirus Pro DAC / 252+SC / Olive 250 / NACA5 / self-built two-way 8 inch floor standers.
- UnitiServe (from sys 1) / UnitiLite (or Naim DAC + Supernait2 ) / Bis Audio Vivat speaker cable / Graham Audio LS5/9
Jan”
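For anyone wanting to try Jan’s offset analysis on their own rips, here’s a minimal sketch (placeholder filenames; it assumes two WAV rips of the same track at the same rate and bit depth) that checks whether the rips are bit-identical apart from leading zero samples:

```
# Check whether two rips are bit-perfect copies apart from a
# leading-sample offset, as in the Core vs UnitiServe analysis.
# Filenames are placeholders.
import numpy as np
import soundfile as sf

a, _ = sf.read("core_rip.wav", dtype="int16")
b, _ = sf.read("serve_rip.wav", dtype="int16")

def strip_leading_zeros(x):
    """Drop all-zero samples from the start of the audio."""
    nonzero = np.flatnonzero(x.any(axis=1) if x.ndim > 1 else x)
    return x[nonzero[0]:] if nonzero.size else x

a_trim, b_trim = strip_leading_zeros(a), strip_leading_zeros(b)
n = min(len(a_trim), len(b_trim))

print("leading zero samples:", len(a) - len(a_trim), "vs", len(b) - len(b_trim))
print("audio identical after offset:", np.array_equal(a_trim[:n], b_trim[:n]))
```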
I never understand why people get upset and/or heated just because others have a different view, and it is a great shame (and shame is the right word) that sometimes people get abusive and start attacking others.
Exploring things one doesn’t understand is, to me, the height of intelligent conversation: otherwise how is one to learn? And wanting to know surely is the starting point of learning. It matters not whether some have relevant knowledge, as that does not invalidate the analysis they may bring, or the suggestions they may offer for consideration. In this particular case, the thread I referenced contained some very straightforward and logical discussion, sensibly recognising that bit perfect is just that, so for the D100 rip to sound different some other factor(s) must be at play (just as with ethernet cables: it cannot be the data itself being changed by a cable, but other things that later affect the conversion to analogue, such as RF noise).
It’s not exactly the answer to your question, but it’s all I found. I can’t remember where the comparison was done, or in which thread, but it was Melco rips vs PC rips, and the two were written differently.
Hi, yes, agreed, though more typically at the synchronous stage of the chain, between the buffer in the streamer and the DAC. The data there is moved by the CPU in the streamer with the help of a clock. Up to that point the process has been asynchronous, so timing earlier in the chain doesn’t really matter: the data gets to the buffer from its original storage location using asynchronous transfer techniques. If there is a lot of packet loss on that journey, the sound will eventually cut out, rather than deteriorate, as the buffer empties. You have probably experienced the phenomenon when watching a streaming TV service on poor bandwidth: it stops playing and may even automatically renegotiate a lower-quality, smaller data transfer so that the buffer can fill up again.
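A toy model of that behaviour (all names, timings and loss rates invented for the illustration): the network side fills the buffer asynchronously with jittery arrivals, the playback side drains it at a fixed clocked rate, and the sound only stops when the buffer actually runs dry:

```
# Toy streamer buffer: asynchronous fill, clocked drain.
# Timings and loss rate are invented for the illustration.
import queue, random, threading, time

buffer = queue.Queue(maxsize=100)          # the streamer's RAM buffer

def network_fill():
    for packet in range(500):
        time.sleep(random.uniform(0.0, 0.02))   # jittery arrival times
        if random.random() < 0.05:
            continue                             # simulated packet loss
        buffer.put(packet)

threading.Thread(target=network_fill, daemon=True).start()

while True:
    try:
        buffer.get(timeout=0.5)   # synchronous, clock-driven drain
        time.sleep(0.01)          # fixed playback rate
    except queue.Empty:
        print("buffer empty: sound cuts out")   # dropout, not degradation
        break
```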
That’s a classic example of software engineering processes in action: a testing process identifies an issue, and problem determination pinpoints a software defect that is then fixed.
When I get a few moments later I will post an explanation of how software engineers see the world, which might explain why so many of these threads in other places go off the rails, and why understanding how software engineers think will significantly advance our collective understanding. The short of it is that they live in a world of logic, trust and causation (sometimes called problem determination). That means they are balancing evidence and probabilities all the time.
Just back from the demo - the Klimax DSM is a bit wow
A few years back, a friend and I used to experiment a lot with this. We had the D100, the Buffalo BDXL and a full-height Pioneer Blu-ray drive, and we ripped either via XLD on a Mac or via the Melco. I then wrote a little obfuscator that randomized the ripped filenames and saved a text file with the correct answers. It could also create a set of master files, to try to find the randomized file that matched a master file.
I wrote another utility that copied the metadata from one file to another while keeping the original audio data.
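For anyone curious, a minimal sketch of the sort of obfuscator described above might look like this (the directory, naming scheme and answer-key format are invented for illustration): it gives each rip a random name and keeps the answer key in a separate text file:

```
# Blind-test obfuscator sketch: rename rips to random identifiers
# so the ripper/drive can't be guessed, keeping an answer key.
# The "rips" directory and naming are placeholders.
import random, uuid
from pathlib import Path

rips = sorted(Path("rips").glob("*.wav"))   # e.g. d100_*.wav, xld_*.wav
random.shuffle(rips)

with open("answers.txt", "w") as key:
    for rip in rips:
        blind_name = uuid.uuid4().hex + ".wav"
        key.write(blind_name + "\t" + rip.name + "\n")   # the answer key
        rip.rename(rip.parent / blind_name)
```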
The audio bits were always correct, and with everything randomized we could not hear any difference. But the three drives used were all very good drives; it would have been more interesting to add some cheaper drives with a few bugs in the firmware.
However, when I added lots of metadata and compared it with a minimum of metadata, we both managed to hear the difference and repeat it. But that was an extreme test.
Interesting
The post above shows that it’s possible to have two identical rips, identical in the sense of 100% accurate, that still sound different.
Then, with the software update, the two rips finally sounded identical.
By extension, it’s then very possible for another software process, like the one in the Melco ripper, together with hardware more immune to noise, to create a rip that is also 100% accurate but sounds better still than the preceding rips (UnitiServe and Melco).
Hi, yes, your latter point is correct if those ripping software packages are not using a technique like dBpoweramp’s AccurateRip concept, which is basically crowdsourced checksums. Without that part of the software engineering chain it is perfectly possible for a rip of the data from the optical disc to be incorrect and not match what was originally encoded onto the disc, with no real way of knowing whether it is correct or not. I used to use EAC, and I switched to the AccurateRip software because it solved a real conundrum in the ripping process.
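To make the crowdsourced-checksum idea concrete, here’s a toy sketch (not the real AccurateRip algorithm or database, whose checksum format is specific to dBpoweramp/EAC; filenames and checksum values are placeholders): hash only the audio samples of a rip and compare against checksums others have submitted for the same pressing:

```
# Toy version of the AccurateRip idea (NOT the real algorithm):
# hash the raw audio and compare against checksums from other
# people's rips of the same disc. Agreement suggests an accurate rip.
import hashlib
import soundfile as sf

def rip_checksum(path):
    """Hash only the audio samples, ignoring metadata/tags."""
    audio, _ = sf.read(path, dtype="int16")
    return hashlib.sha256(audio.tobytes()).hexdigest()

# Hypothetical checksums submitted by other rippers of this track.
crowd = {"9f86d081...", "a3c5e127..."}      # placeholder values

mine = rip_checksum("track01.wav")
print("rip verified" if mine in crowd else "rip may contain errors")
```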
Frenchrooster, if you will indulge me, later this evening I will briefly explain the software engineering culture, which might explain why your point about identical rips sounding different keeps triggering some people. I think I know what you mean, but the words you use keep triggering software engineers to react because of how they see the world. I do hope that will help us get to a common understanding that improves our collective knowledge. Will that help?
Yes, of course the quality of the source also plays a role. However, I would argue that sometimes you need to overinvest in order to be ready for the next step in the journey…
If we can’t understand each other because of different languages, we should try sign language.
I’m no digital engineer but I know what my ears tell me and they tell me that this is a real phenomenon. Not fanciful imagination or wishful thinking or some sort of crackpot trick-cyclist bias. And I’m not alone. Plenty of others hear it too, forum members, dealers and their customers and magazine reviewers.
The only thing I would say differently is that my ‘working hypothesis’, if you like, is that the rips are 100% bit perfect but not 100% accurate, as there are factors other than bit perfection that impact the sound. There may be alternative explanations involving digital ins and outs that I’m unaware of. But whatever the cause, not all bit-perfect rips sound identical.
Any further input from those with much more knowledge of digital engineering than myself such as @badger1 would be most welcome and interesting.
I don’t know if it’s technically possible, but I generally agree with you, given what you have discovered.