Is Naim missing something in the long run?

That’s directly because…

rather than any sound-quality ‘gain’ from a lossy reprocessing of the existing master (which, by definition, is a loss of information from the data!).

Give us MQA and a Naim power conditioner and we will be in audio heaven.


Blasphemer!! 😜


I guess I am one of the lucky ones, because MQA always sounds better to me and more like live music, a goal I have had for many years in this pursuit. Luckily I am not affected by the artifacts that some hear. I do a full unfold, so perhaps some of the artifacts are heard by those doing just the first unfold. However, while technically MQA is lossy, in that the digital information is changed to fix the A/D and D/A problems with hi-res audio, the resulting music almost always sounds better; sometimes for the 44.1kHz material it’s not a big improvement. However, when I listen blind, I can almost always detect when it’s MQA at 352/384. That is where the biggest bang for the buck is for me.

Well, it’s interesting; there are a few problems.

At www.mqa.co.uk, MQA claims…

1 “MQA will play back on any device to deliver higher than CD-quality.”

2 “When paired with an MQA decoder, the MQA file reveals the original master recording.”

3 “Products with a full MQA Decoder unfold the file to deliver the highest possible sound quality. At this level of playback you are hearing what the artists created in the studio”

4 “Smartphones from brands including LG and Essential can also deliver a fully decoded MQA experience.”

Problems

Claim 1:
Undecoded MQA is a 13-bit signal + 3 bits of partially correlated noise (otherwise known as distortion).
How are these degraded data better than a 15-bit signal + 1 bit of dither (i.e. uncorrelated noise)? (There is a back-of-envelope sketch of the arithmetic after Claim 4.)

Claim 2:
As the MQA encoding is lossy, information from the master is lost, so this cannot be true.

Claim 3:
Interestingly, this raises two possibilities:
Either all full-unfold MQA DAC systems sound absolutely identical (“you are hearing what the artists created in the studio”) and there is no point in paying more than the cheapest costs; or this is just marketing deception and there are benefits from higher-quality DACs. (There are no logical possibilities other than these two!)

Claim 4:
Taking Claim 3 into consideration, if that is true, you may as well use a smartphone as a HiFi DAC to play back music even on the best of all HiFi systems, as nothing can do any better than an MQA smartphone.
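
For anyone who wants to sanity-check the arithmetic behind Claim 1, here is a back-of-envelope sketch (my own, using the textbook ideal-quantiser formula; it says nothing about how MQA actually shapes its noise):

```python
# Theoretical SNR of an ideal N-bit quantiser with a full-scale sine input:
# SNR ~= 6.02 * N + 1.76 dB (textbook formula; real-world noise shaping differs).

def quantisation_snr_db(bits: int) -> float:
    """Headline dynamic range for an ideal N-bit channel."""
    return 6.02 * bits + 1.76

print(f"13-bit signal: {quantisation_snr_db(13):.1f} dB")  # ~80.0 dB
print(f"15-bit signal: {quantisation_snr_db(15):.1f} dB")  # ~92.1 dB
```

So the undecoded stream gives up roughly 12 dB of headline dynamic range before any argument about the character of the remaining 3 bits of noise even starts.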

Stevesky’s response in the MQA on Atom thread may be of interest


Indeed, I have learnt that ‘full master recording’ is these days increasingly becoming a marketing moniker meaning 24 bits at various sample rates, which can often sound preferable to the 16-bit equivalent. … Luckily one has a choice of obtaining such recordings via lossless PCM (which I think Naim’s Steve Harris refers to as ‘native’) or in MQA. I am glad I have the choice, as one sounds significantly superior to the other on a quality Naim system.

But to be fair to MQA, when I had my old ADSL broadband supply I could only really stream MQA if I wanted any sort of hi-def; I didn’t have enough bandwidth. Now I have a VDSL service with plenty of bandwidth, I can happily stream 192/24 PCM FLAC via Qobuz, which has made MQA redundant for me.


Hi Bailyhill, there aren’t any systemic issues with A-to-D or D-to-A converters that MQA specifically addresses… put simply, MQA allows audio-frequency lossy compression whilst retaining timing information.

Further, the reconstruction recovery/correction filter for MQA DACs is specifically defined to compensate for the filter-response errors caused by the MQA compression process. Therefore, when the audio is processed for MQA compression into an MQA distribution master, there is a greater degree of determinism in how it will sound on playback through an MQA DAC. This contrasts with PCM, where users can choose filters on some DACs, or even choose DACs to suit their preferences, or simply choose a neutral DAC to match the PCM master, as no specific manipulation of the data and no MQA error-correction reconstruction filtering are required.

Hello Simon

I probably used the term “A to D” a little loosely. As defined in the AES “Hierarchical…” paper, I was referring to the phase shifts and response roll-offs caused by cascading elements in the analogue chain, such as the recording microphones, mic preamps, mixers, and pre- and post-converters, which are introduced by the record chain. They of course add to the replay preamp, power amp, and transducer. The article shows the cascade effect of eight of these, modelled as second-order Butterworth filters, and shows the attenuation (which means phase shift) and the effect on the impulse response, which is quite slow compared with the quoted research suggesting that temporal shifts as small as 5 microseconds are audible and should be avoided. To reproduce the signal, the transducer output needs to be as close to the microphone output as possible; that is as close as we can get to the music.
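
Out of interest, that cascade effect is easy to model numerically. A minimal sketch (my own toy numbers, using a 30kHz corner frequency, not taken from the paper) of eight identical second-order Butterworth low-pass stages, showing how the in-band attenuation, and hence phase shift, accumulates:

```python
# Eight cascaded 2nd-order analogue Butterworth low-pass stages vs one stage.
# Toy corner frequency of 30 kHz; the AES paper's exact values may differ.
import numpy as np
from scipy import signal

fc = 30_000  # assumed corner frequency of each stage, Hz
b, a = signal.butter(2, 2 * np.pi * fc, btype="low", analog=True)

# Cascade eight stages by multiplying the transfer-function polynomials.
b8, a8 = b, a
for _ in range(7):
    b8, a8 = np.polymul(b8, b), np.polymul(a8, a)

w = 2 * np.pi * np.array([10_000.0, 20_000.0])  # evaluate at 10 and 20 kHz
_, h1 = signal.freqs(b, a, worN=w)
_, h8 = signal.freqs(b8, a8, worN=w)
for f_khz, g1, g8 in zip((10, 20), np.abs(h1), np.abs(h8)):
    print(f"{f_khz} kHz: 1 stage {20*np.log10(g1):+.2f} dB, "
          f"8 stages {20*np.log10(g8):+.2f} dB")
```

A single stage is essentially flat at 20kHz (about -0.8dB), but eight of them stack to roughly -6dB, and the group delay grows in step.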

This, along with the reconstruction artifacts in standard hi-res audio DACs, is what MQA seeks to fix. Of course, they cannot know our preamp, power amp and transducer effects, so they can only control things prior to that point in the chain.

In the end, that is the theory, and the only thing that matters is the sound in our rooms. I am sure that your new DAVE will make a big improvement in the reconstruction area.

Hi Bailyhill, the only artefacts or errors that MQA attempts to fix, as far as I can see, are the errors introduced into the frequency response by the MQA compression process. (The patent discusses the appropriate compensation filters.) Follow the blue curve below and the resultant black curve up to 20kHz.
With regular PCM one doesn’t have to deal with these errors, other than the regular sinc(x) filter response that is a consequence of the sampling theorem.

http://www.audiomisc.co.uk/MQA/origami/Fig6.png

Hello Simon

The AES “Hierarchical…” paper devotes a lot to the limitations of reconstructing the analogue signal with most of the standard techniques of the day, and shows the advantages of a B-spline reconstruction as it applies to the timing of transient details in music. Regardless of what the patent says, regardless of what the “Hierarchical” paper says, and regardless of what the marketing information says, I doubt very much that many people really know what the secret sauce is, unless they have an MQA licence and have incorporated it into their equipment. I am sure MQA are pretty careful, as they would not want the copycats out there to get a free lunch.

As I recall, a patent must divulge a reproducible recipe, but not necessarily all the recipes. In addition, I imagine there may be additional patents working their way through the process that we have not seen as yet.

Yes you can tune and optimise, but the principles are in the Patent… clearly different manufacturers can implement MQA differently within the licence constraints…
However, bottom line: the MQA compression technique adjusts the frequency response at compression, and this needs to be corrected at reconstruction… not a problem, but an important correction in MQA decompression if one wants to reproduce the original digital encoding as closely as possible…
I don’t think there is any ‘secret sauce’… such things are more for the snake-oil merchants and Coca-Cola, not reputable engineering companies… I have no doubt that MQA engineering is exactly that. From an engineering perspective they are quite open about the constraints, the considerations, and the benefits they are delivering with their compression techniques. It’s the marketeers and sales collateral that get carried away with speculative hyperbole (in my opinion, of course…)


Warner signed with MQA in 2016 and Universal in 2017. So this is nothing new.

That they haven’t done anything more visible yet is easy to understand. It is a very small market, and they don’t want to be dependent on some small entity like Meridian.

Umm, Meridian have a very solid résumé of delivering technology to media and telecommunications providers: MLP is the carrier for Dolby TrueHD. They are not thought of as “some little company” by Warner Music, I can assure you.

Hello Simon

I had been thinking about your rejection of the suggestion in a prior post that MQA corrects for the phase build-up caused by the beginning of the analogue chain. I had a tour of Meridian and the good fortune to spend an hour and a half with Bob Stuart, so I asked him. He confirmed that MQA has a goal of maintaining the temporal integrity of the music from sampling, and a goal for reconstruction. Depending upon the original master at hand (sometimes analogue, sometimes digital, sometimes with well-documented processing history and sometimes not), the MQA coding may take slightly different forms. So it sounds like it does address time smearing from both the A/D and the D/A process.

Cheers

Yes, MQA (can) retain the temporal information after the frequency elements have been compressed… and in reconstruction it (can) maintain this temporal information, BUT this is only relevant for MQA, as the information would otherwise have been lost through its lossy compression.
If you are using PCM, all other things being equal, this issue is moot.
Smearing is somewhat different but related, and typically refers to phase distortion with respect to frequency within the audio pass band. These days most quality DACs deal with this by oversampling and applying a pass-band low-pass filter before the phase distortion becomes prominent… also, FIR filters traditionally tend to be better behaved with respect to phase distortion than IIR filters.
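
If it helps to make the FIR/IIR point concrete, here is a quick sketch with two toy filters (my own, not modelled on any particular DAC): a linear-phase FIR has constant group delay across the band, while an IIR’s group delay varies with frequency:

```python
# Compare pass-band group-delay flatness of a linear-phase FIR vs an IIR.
# Both are toy 20 kHz low-pass filters at 44.1 kHz; no real DAC is modelled.
import numpy as np
from scipy import signal

fs = 44_100
fir = signal.firwin(101, 20_000, fs=fs)         # 101-tap linear-phase FIR
b_iir, a_iir = signal.butter(4, 20_000, fs=fs)  # 4th-order Butterworth IIR

for name, (b, a) in (("FIR", (fir, [1.0])), ("IIR", (b_iir, a_iir))):
    w, gd = signal.group_delay((b, a), fs=fs)   # group delay in samples
    band = w < 18_000                           # look well inside the pass band
    spread_us = (gd[band].max() - gd[band].min()) / fs * 1e6
    print(f"{name}: group-delay spread in pass band = {spread_us:.2f} µs")
```

The FIR’s spread is essentially zero (a pure delay); the IIR’s is not, which is the phase distortion being discussed.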
ATB

Hello Simon

Resampling may be a great option for material that has an analogue master. A higher sample rate is always better. That does reduce the time smearing, but it sounds like, unless the resampling uses something beyond the classic sampling and reconstruction techniques, it only partially addresses the problem. I would love to read about what Chord does there.

For material where the master is in digital form, this means that some interpolation is being done, and someone, or more exactly some algorithm, is choosing how the interpolation is done; and that’s not to an absolute standard, but to someone’s liking or choice. I would love to see Chord tell us how they do that and why theirs is the best. Talk about secret sauce: that’s one if I’ve ever heard one, and from what folks are saying, it’s pretty good secret sauce. It’s fine, but it’s not “open source”; our only option is to listen and trust our ears.

I can’t see how it makes any difference to the upsampling whether the original master was analogue or digital. Once digitised, any information between the samples is lost, so upsampling in both cases is guessing at what should be between the samples; the interpolation, or whatever mathematical process is used, can only do exactly the same in each case. In what way can an analogue original master cause the interpolation of a digitised version to be different from that of a digital original master?
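
To put the same point another way, a band-limited upsampler is a pure function of the samples it is handed. A trivial sketch (resample_poly is one standard polyphase method, not anything MQA- or Chord-specific):

```python
# Upsample a digitised signal 4x. The interpolator sees only the samples in x;
# whether x originated from an analogue tape transfer or a native digital
# session is invisible to it.
import numpy as np
from scipy import signal

fs = 44_100
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 1_000 * t)           # stand-in for any digitised master

y = signal.resample_poly(x, up=4, down=1)   # band-limited 4x interpolation
print(len(x), "samples in ->", len(y), "samples out")  # 1024 -> 4096
```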

The only difference I can see between the two types of original master, in the resultant sound through a digital medium, is the effect of the different A-D conversions and of the analogue recording process: the analogue master’s ADC comes after first capturing on tape, while the digital master’s ADC comes before anything is recorded. In an analogue master, any sound from the analogue taping process will be added, and with multiple inputs that often happens twice: once laying down the individual channel tracks, and again when taping the mastered combination. After that it goes through a single ADC, with whatever effect on the sound that adds. In a digital master there is no effect of analogue taping, only the effect of the ADC, though except with single-mic recording the individual channels will have their own ADCs, which, if of different types, may have different effects on the sound of different instruments.

Bailyhill, there is no secret sauce here, really there isn’t; it’s all rather well-established methods, applications and choices, other than the specific MQA lossy compression technique and, for example, proprietary filtering algorithms such as Chord’s WTA windowing technique… more on that in a minute.

In any digital-to-analogue reconstruction there is time-domain value interpolation. In other words, a sample value in a discrete sample series theoretically represents an analogue value for an infinitely small amount of time. Between these ‘pulses’ representing values at infinitely narrow points in time, the reconstruction filter (a sinc-response filter) allows the values to be representatively converted to a continuous signal (what we call an analogue signal). The analogue signal is reconstructed and interpolated between these infinitely narrow sample points… and interestingly, because this signal is low-pass filtered, the reconstructed, interpolated analogue signal may exceed the sample values between the sample points, so care is required to avoid clipping.
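
That inter-sample overshoot is easy to demonstrate numerically. A standard textbook case (my own sketch): a tone at fs/4 with a 45-degree phase offset, whose samples all sit exactly at 0dBFS while the reconstructed waveform between them peaks about 3dB higher:

```python
# Inter-sample peaks: every stored sample is at +/-1.0 (0 dBFS), yet the
# band-limited reconstruction between the samples reaches ~1.414 (+3 dB).
import numpy as np
from scipy import signal

n = np.arange(4096)
x = np.sin(2 * np.pi * n / 4 + np.pi / 4)   # fs/4 tone, 45-degree phase
x /= np.abs(x).max()                        # normalise samples to 0 dBFS

y = signal.resample_poly(x, up=8, down=1)   # approximate the analogue signal
core = y[1024:-1024]                        # ignore filter edge transients
print(f"largest sample value:      {np.abs(x).max():.3f}")     # 1.000
print(f"largest inter-sample peak: {np.abs(core).max():.3f}")  # ~1.414
```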

All this is standard stuff… and not specifically MQA related. Any discrete bit stream DAC deals with these concepts.

The fun and variability of approach (the closest thing to the ingredients of your sauce) usually comes in the implementation of the sinc response and the low-pass filter. There are different approaches to digital filtering, with varying pros and cons regarding the distortion and aberrations introduced… there are side effects to filtering, so different algorithms and approaches focus on different aspects. It becomes a design engineer’s choice in the design and implementation of their DAC, but it appears that in the case of MQA some of that choice is taken away.
For example, many DAC designs today oversample the signal so that the low-pass digital filter can be introduced more gradually above the audio band, causing less distortion in the resultant audio. There are different choices for the implementation of that low-pass filter, again with different pros and cons: Naim go one way, using recursive IIR filtering, whereas Chord use FIR sample-window-based filtering (convolving the discrete sample signal with a defined filter-response series of samples, or ‘taps’). Chord use a bespoke windowing algorithm, the WTA, as opposed to one of the standard established windowing algorithms.
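
Chord’s WTA coefficients are proprietary, so I can only illustrate the standard alternative it is contrasted with: a windowed-sinc FIR, where the ‘taps’ are an ideal sinc impulse response truncated by (here) a Kaiser window:

```python
# A textbook windowed-sinc FIR for 8x oversampling: an ideal low-pass sinc,
# truncated and tapered by a Kaiser window. This is NOT Chord's WTA; it is
# the standard windowing approach the post contrasts it with.
import numpy as np

def windowed_sinc_taps(n_taps: int, oversample: int, beta: float = 8.6) -> np.ndarray:
    """FIR taps: cutoff at the original Nyquist, at an oversampled rate."""
    n = np.arange(n_taps) - (n_taps - 1) / 2   # centre the impulse response
    h = np.sinc(n / oversample)                # ideal brick-wall response
    h *= np.kaiser(n_taps, beta)               # window tames the truncation
    return h / h.sum()                         # normalise to unity DC gain

taps = windowed_sinc_taps(n_taps=1025, oversample=8)
```

Longer tap counts approximate the ideal sinc better, and the choice of window (Kaiser, Blackman, or a bespoke one like the WTA) trades stop-band rejection against time-domain ringing.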


I subscribed to a free one-month Tidal trial today. I could see around 1,200 Tidal Masters MQA albums. There are perhaps 7 albums that interest me.
