Dear Mr. Beachcomber,
Of course I would not insult anybody because of his/her religion.
But you made me very curious, are you a believer or a non-believer?
Futureshop offers a burn-in service for Tellurium cables (and maybe others).
24, 48, 72 or 96 hours free of charge.
It is amusing they use a Nordost Vidar cable burner.
Thank you in any case, I will try to be even more careful.
It’s still raging…
…I get the conflict…I can hear the change, but cannot be bothered to engage in double-blinded trials, so I cannot clearly demonstrate my claim to be true in objective terms…
…on the other hand,
…It might be that some people want objective evidence.
My viewpoint is quite simple; I can hear it.
…but I’ve seen plenty of reasoning here (some of it pseudo-reasoning or misinformed) that casts doubt on this.
I am a little surprised, because the changes in the sound seem so obvious to me.
In any case, it seems to me that this is all a dead end.
I’m out of this thread…but still enjoying my music!
To be fair to Nordost, their FAQ states that they can’t offer a burn-in service in the factory, but the authorized dealers would, using the Vidar. FWIW
Personally I can understand all this and your post is quite similar to my thinking. On the other hand, there are forum reports about hearing all kinds of differences, I read one stating categorically that they can hear an SQ difference between app versions, although all the app does is to tell the streamer “play the album with ID xyz” and nothing else. And when you inquire, the answer is “but I can hear it”.
So it is difficult to take all these reports at face value, even though some may well be valid.
In my own experience, I find it completely impossible to perform proper tests, because changing a cable and messing around behind the rack is sufficiently exhausting and annoying to prevent any meaningful before-and-after comparison. This may be different for people who have better access to the rack.
I’m bad…I was opting out, but…
I think that strictly objective evidence will be difficult to find as nothing both electrical and measurable has been defined.
…and statistical evidence: the double-blinded Randomised Controlled Study is the gold standard (in both of my professions, anyway), and I am certainly not going to allow a blinded researcher (who has two sets of stereo pairs) access to the back of my rack! I don’t suppose the vast majority of us would.
Exactly yes, fully agree and have written similar yesterday (and many times before on the forum). A test that achieves the confidence level we expect in other fields is all but impossible in audio, even with best intentions, so the debate continues forever, at least as long as no material evidence (like actual demonstrable changes in the materials) and/or a plausible mechanism is found.
I don’t find it difficult at all to set up blind tests of many things, though of course some items of gear or ancillaries are more fiddly to do than others so not as slick. Whilst my way is not true double blind so not as robust as required for some purposes, it is still infinitely more valid than relying on my own comparison simply trusting that I can be clear of bias.
All it takes is a willing volunteer who likes a challenge and can be trusted to connect things correctly, in my case one of my sons. He doesn’t know the value or claimed performance of the things I’m comparing, though in some cases it may be fairly obvious, but he enjoys the challenge of trying to catch me out, i.e. of making my assessments inconsistent. (For example, as well as varying the order played, and though he is well able to set the volume levels to match, he’ll sometimes deliberately make one slightly louder than the other, and sometimes vice versa.)
Every bit may help, and I applaud everyone who makes an attempt, but we should not confuse this with the confidence level we expect elsewhere. (And there’s a reason we want to see those levels, namely that we consider lower ones unreliable. Of course, the consequences in audio are not as important as in, say, medicine.)
Indeed, and I for one haven’t suggested full double-blind testing: to the best of my recollection, most if not all references to double-blind testing on this forum in relation to audio gear have come from people arguing against blind testing. Single-blind testing, as long as it is not managed by an interested party (such as a salesperson), would still remove much of the risk of psychological influence in comparisons, and given that hi-fi is a hobby, I suggest the residual risk is acceptable.
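To put a number on the “confidence level” point above: even a simple single-blind comparison can be scored statistically. A minimal sketch in Python (the trial counts are hypothetical, purely for illustration) computes the probability of getting at least k identifications right out of n by pure guessing, which is the usual way an ABX-style result is judged:

```python
from math import comb

def binomial_p_value(correct: int, trials: int) -> float:
    """One-sided probability of getting at least `correct` answers
    right out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# Hypothetical example: 9 correct identifications out of 10 blind trials.
print(f"9/10 correct: p = {binomial_p_value(9, 10):.4f}")
# 5/10 correct is exactly what guessing predicts, so p is large.
print(f"5/10 correct: p = {binomial_p_value(5, 10):.4f}")
```

With 9 of 10 correct, p is about 0.011, below the conventional 0.05 threshold; 5 of 10 is indistinguishable from guessing. This is why a handful of casual trials rarely settles anything.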
Someone published a test on the AudioCircle site. He used a cable cooker.
Here are his comments:
“ Cable used was ZenWave Audio SL17 speaker cable which uses an aggregate 17g of UPOCC copper litz wire.
IMO, frequency response is one of the more subtle aspects of burn-in, but the test shows unmistakable and relatively large changes in frequency response. It’s fair to say a lot more is going on besides frequency response changes during burn-in.
1/6 octave smoothing”
Here is the same graph, but with better visibility.
Thanks, interesting. Unfortunately I cannot search on AudioCircle without an account. So I am not sure what he is measuring there. For an amp, the curve seems to be too wobbly, for speaker output it seems remarkably smooth.
I can’t quite make out the labeling in your second screenshot, but it seems to be a linear y-axis with 1 dB between gridlines, which is consistent with the first screenshot. So I’m looking mostly at the first one.
If we assume that other influences were properly ruled out, I find it interesting that there is any change at all, even though it is very small (at most 1 dB more after break-in, with a consistently higher response across the band). Of course, since the later curve is consistently a bit louder, one would have to adjust for volume before making quality comparisons. However, if there really is such a change at all, then I would agree that changes beyond frequency response are a possibility, maybe even likely.
Google: ZenWave Audio SL17 cable cooker
Thank you. OK, so the info in the post about how he measured is “Pioneer S-1EX speakers used with 5” magnesium midrange and coaxial beryllium dome tweeter. Omnimic software, mic placed about 2" from dome tweeter, centered on tweeter. Mic was not moved while cable was being burned in for 5 days on an AudioDharma Cable Cooker."
Can’t argue with the principle, but with the change being quite small, I would wish to see more. I know, I could do it myself
Once again, a commendable effort, but not something one would be anywhere close to accepting as valid, yet, in other fields
Edit: With the mic 2" from the speaker, the smallest difference in distance might cause a noticeable difference, I suppose. And interestingly, the effect is strongest around 10 kHz, and the mic is in front of the tweeter.
Another edit: In any serious field, the natural response would be “odd. need to measure more”, and with good reason
Another one: At least as an additional measurement, it would be good to take the speakers out of the equation and compare the cable output directly.
There is also the bit about his customer feedback. Another interesting data point, but also one with a large number of possible influences.
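The mic-placement worry above is easy to quantify. Assuming free-field inverse-square behaviour (a simplification for a mic this close to a tweeter), the level shift from a small placement error is just 20·log10 of the distance ratio, as this small Python sketch shows:

```python
from math import log10

def level_shift_db(d1_in: float, d2_in: float) -> float:
    """SPL change in dB when a mic moves from distance d1 to d2
    from a point source, assuming free-field inverse-square decay."""
    return 20 * log10(d1_in / d2_in)

# Mic nominally 2" from the tweeter; a 0.1" placement error:
print(f"{level_shift_db(2.0, 2.1):+.2f} dB")
```

A mere 0.1" shift at a 2" mic distance already gives a level change of about 0.4 dB, i.e. a substantial fraction of the roughly 1 dB difference being attributed to burn-in.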
It’s an interesting test, but of course it’s really difficult to make sure that the environment is exactly the same 5 days later:
What I am most sceptical about is the difference around 10 kHz, where the burned-in trace is almost 2 dB louder than the non-burned-in one.
Consider that:
A 1 dB change in a sound equates to about a 26% difference in sound energy (remember that a 3 dB difference is a doubling of energy levels). In terms of subjective loudness, a 1 dB change yields just over a 7% change. A 3 dB change yields a 100% increase in sound energy and just over a 23% increase in loudness.
This is obviously an enormous difference if it is only due to cable burn in.
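The arithmetic quoted above checks out, and can be verified with a few lines of Python. Energy ratio follows 10^(dB/10); for subjective loudness, the common rule of thumb that perceived loudness doubles per 10 dB gives 2^(dB/10):

```python
def energy_ratio(db: float) -> float:
    """Power/energy ratio corresponding to a level change in dB."""
    return 10 ** (db / 10)

def loudness_ratio(db: float) -> float:
    """Rule-of-thumb subjective loudness ratio: perceived loudness
    roughly doubles for every 10 dB increase."""
    return 2 ** (db / 10)

for db in (1, 2, 3):
    print(f"{db} dB: {(energy_ratio(db) - 1) * 100:.0f}% more energy, "
          f"{(loudness_ratio(db) - 1) * 100:.1f}% louder")
```

This confirms the figures cited: 1 dB is about 26% more energy and roughly 7% louder, 3 dB about 100% more energy and roughly 23% louder, so the near-2 dB bump around 10 kHz would indeed be a large effect to pin on burn-in alone.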
Indeed, I think this cannot be reliably measured with a room microphone. The cable would have to be measured directly in order to calibrate the results…
I cannot see that a claim made about the outcome of a putative test is any more valid than a claim about the efficacy of the product. The tester isn’t impartial…
I can attest that all my internal cables in my body are burning. 32 C in Paris. Now in train, maybe more.
Interesting… if this is valid research, then it would explain why it is so easy (in my opinion) to hear the effect. Now, let’s reach for the meta-analysis of all the available valid research.