I’m rather late to this thread, but my advice would be to ring REL and ask them for advice. I did just that when wanting a sub to match my speakers.
I will continue reading to see how the thread develops.
I admire your dedication and detailed explanation.
I did not have the patience to use REW; I used HouseCurve to provide the files for Roon DSP. However, this was before I added a sub. I’ve just bought a miniDSP and a UMIK-1, but I have not turned off the DSP filters yet, and I’m reluctant to test my system with the sub because it takes a full signal from the Supercaps (IIRC) rather than a high-level feed from one of my speakers.
Once the cricket finishes I will dip a toe into the water, and may play with REW as well.
Hi DG, thanks for your input. Although I think your posting is specific to Teskey’s posting above, it has reminded me that I have made no statement about the drive levels to use for REW measurements.
This is an interesting topic. If I were doing this as a day job, I would definitely be using (disposable) plug-in ear protection. Not because the drive levels are so high that it is painful without protection, but repeated sweeps - in acoustically UNTREATED rooms - might hit high levels often enough to promote hearing loss.
Actually, you might have guessed I have ‘done’ a few rooms now for friends and colleagues, and depending on what speakers and amplifiers are in place, the drive level in REW is usually below -12dB full scale.
Sometimes even that moderate level is too high for the speakers to accommodate without significant distortion, at which point I drop it to 15dB or even 18dB below ‘full scale’.
For my own system, when I used the ATI amplifier, it was starting to run ‘out of steam’ at circa -9dB FS, and definitely by -6dB FS.
Now, with my Linn Solo 800s in place, -6dB FS is no problem.
It becomes obvious, if using REW, what the limitations of your system are, as the noise and distortion rise quickly once the drive level gets too close to the current delivery limit of the amplifier for the connected loudspeakers (or the excursion limit of the loudspeaker drivers). Plus you don’t really want to be doing this, as you can damage the loudspeakers, never mind your hearing!
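For readers less familiar with dB FS figures, here is a minimal sketch of what those drive levels mean as a fraction of digital full scale (my own helper for illustration, not part of REW):

```python
def dbfs_to_fraction(dbfs):
    """Signal amplitude as a fraction of digital full scale (0 dB FS = 1.0)."""
    return 10 ** (dbfs / 20)

# The drive levels discussed above, as fractions of full scale:
# -6 dB FS  -> ~0.50
# -9 dB FS  -> ~0.35
# -12 dB FS -> ~0.25
```

So even a "moderate" -12 dB FS sweep still drives the system at a quarter of full-scale amplitude, which is why weaker amplifiers start to show distortion well before 0 dB FS.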
Hi Camphuw, perhaps you will have a bit more time soon to try REW again?
Thank you for your compliments regarding my dedication to the cause! However I might add that others (who have more experience in using REW) might consider me a complete novice!
FYI I have just under 12 months’ experience in using REW - and I gave myself an accelerated start by paying a professional room acoustics engineer to come and measure my room and loudspeaker setup in order to ‘optimise’ it. This allowed me to watch what was done and also get the files, so I could easily see what settings this professional used during the process.
But really, learning the way I did (above) is not necessary, as there are sufficient resources on the web: specific videos by REW, for example, but also other information sources such as the videos posted on the GIK Acoustics website. Using and gaining benefit from REW is like any ‘new’ tool: to really learn, one just has to immerse oneself in doing and using and trying.
All the best, and thank you for posting on this thread.
PS: You might have noticed through the thread that the HiFi dealer did arrive with a big REL for a trial and this was successful (see posting no 132 - that was with a single REL No 31), but in the end I selected MAGICO Titan15 to add to the system - hopefully more news on this soon.
Your original posting to which I am replying raised an interesting point. That of ‘Blind Testing’.
I can imagine that there are many people in the world who do not appreciate the significance of using appropriate test procedures when evaluating human subjective opinion.
Full Disclosure: I am not one of those people. However I had never personally had a need to implement such procedures in my home… until LAST YEAR (AUGUST 2024).
Without going into too much detail, it came about that I needed to gather data on users’ experience of (different settings of) Linn Space Optimisation being applied to music experienced in my media room (which already had a lot of room treatment fitted).
Here are the Key Elements of Test Procedure:
a). Tests conducted using six participants (2 females, 4 males).
b). Each participant had a dedicated and independent (of other participants) listening session.
c). Each participant selected their own short piece of familiar music (4 – 5 minutes long).
d). The selected music genres included blues, classic orchestral, jazz/folk, Latin, opera and rock.
e). Participants were not informed of the (three) different Optimisation settings used.
f). Each session had a 20 minute ‘settling in’ period where the Optimisation setting was ‘random’.
g). The subsequent active evaluation period (~15 mins total) exposed the listener to their selected short piece of music played three times sequentially (once for each Optimisation setting). (see special note below)
h). The listener ranked the ‘playing/sounds’ (c.f. Space Optimisation settings) in order of preference.
Note: The order in which the three optimisation settings were played was randomised amongst the six participant sessions with the intention of eliminating any potential bias introduced through the order of listening.
To introduce the participant to their session, there was a verbal introduction (that I read from a script) and then the participant was handed the following form to note their experiences. Note: I was not in the room as the evaluation was taking place.
Perhaps you (or any other reader of this thread) can comment on any weaknesses in the test method I used?
FYI the trial was successful, as the results were incorporated into a technical report for the recipient company, and useful knowledge came out for both parties (i.e. me and the company).
PPS: Perhaps, depending on the level of interest in this posting, I will share the results of the trial and stimulate a discussion on the subject of ‘statistical significance’?
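For anyone curious what a ‘statistical significance’ check could look like for preference rankings of this kind, here is a minimal sketch (my own illustration, not the analysis actually used in the report): the exact null distribution of one setting’s rank sum across six independent rankings of three settings.

```python
from itertools import product
from fractions import Fraction

def p_rank_sum_at_most(s_obs, n=6, k=3):
    """Exact P(rank sum <= s_obs) for one item ranked by n independent
    judges, each rank uniform on 1..k under the null of no preference."""
    outcomes = list(product(range(1, k + 1), repeat=n))  # k**n equally likely
    hits = sum(1 for o in outcomes if sum(o) <= s_obs)
    return Fraction(hits, len(outcomes))

# e.g. all six participants ranking one setting first (rank sum = 6)
# has null probability (1/3)**6 = 1/729, well below the usual 5% level.
```

With only six participants the possible p-values are coarse, which is one reason small listening panels make ‘significance’ a genuinely interesting discussion.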
Hi EoE, just seen this but travelling - will respond when I have a chance, whether later today or some time over the weekend (though being retired makes weekends no different from weekdays!). Some steam, water, ice and fire to see over the next 24 hours!
Hi Middle, Thank you for your continued interest and encouragement!
I think I will share the results of the ‘blind testing’ referenced above, as it is relatively uncontested information - but also informative.
PS: I have also not forgotten about the loudspeaker cable acoustic measurement results which I have, and in which you showed previous interest about two weeks ago.
On that topic, I am still waiting for appropriate information from the cable manufacturers. In any case, there is a bit more work for me to do to present the acoustic testing results of the various loudspeaker cables clearly and concisely.
Furthermore, depending on the level and nature of feedback on this thread re ‘blind testing’ of audio equipment - I am keen to take genuine constructive feedback on any weaknesses in the test method described previously - I could post various other testing results soon.
New posting linking information in a number of other threads and some PURE SPECULATION.
Firstly, thanks to @frenchrooster for the helpful suggestion that not many people (on the Naim forum) are interested in measurements……
I waited one week after that posting by FR before looking at some Naim Forum statistics - and although there may be only a few new active postings, there were over 1,000 additional views of the topic in that one week alone! So I feel there is quite a bit of interest in the measurements-based approach on the Naim Forum.
Actually - and this is more of a general observation - the markets for high end HiFi and high end Home Cinema can be seen as quite separate or quite similar. Which view most closely represents the reality?
In the high end home cinema market, the dedicated customers revel in tweaking their systems using microphone measurements that use variously either Anthem ARC Genesis, Audyssey Multi EQ-X, DiracLive or Trinnov room acoustics correction systems.
In high end stereo HiFi there is perhaps not so much interest (yet) in room acoustics correction. However, both markets are changing, and DiracLive in particular has gained a lot of traction in both, with its software licensed and embedded in many stereo HiFi products (refer to the DiracLive website for information).
Nevertheless, there are other key players with dedicated stereo HiFi solutions. For example, Linn (Space Optimisation) takes a theoretical modelling approach, while Lyngdorf (RoomPerfect) takes a microphone measurement approach to the problem of room acoustics.
Where is NAIM in all this?
There are clues dotted around the web and on the forum for those wishing to join the dots…
Here is the first one from 2022…
Now a more recent one from October 2024…
What else happened in October 2024?
Was there a firmware update on the Naim streaming platforms? Did that update upset some users? Why might that be?
NOW TO the PURE SPECULATION…
Perhaps the firmware update in October 2024, as well as correcting some interfaces for the StreamUnlimited board, was readying the firmware on the Analog Devices SHARC DSP 21489 so that it could continue to interact with the Focal Naim App, which provides control of speaker boundary interaction and of room correction nulling filters on the new Focal Diva range of loudspeakers as well as on the CI 102 streaming product.
FYI these new Focal/Naim products use the Analog Devices SHARC DSP 21563 device, which coincidentally has licensed DiracLive software incorporated into it.
It is pure speculation on my part, but I think it is possible that some (lack of) convergent rounding issue cropped up in the IIR filtering implemented in the SHARC DSP 21489 code during this transition, one that only certain users would experience depending on their level of room treatment and their systems.
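To illustrate the kind of artefact I am speculating about, here is a toy example only - not Naim’s code, and the real SHARC implementation details are unknown to me. In a fixed-point IIR filter, the rounding mode applied after each multiply can leave a zero-input ‘limit cycle’, where the output never decays to silence after the signal stops:

```python
import math

def round_half_away(x):
    """Round to nearest integer, halves away from zero (a naive rounding mode)."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

def truncate(x):
    """Magnitude truncation toward zero."""
    return math.trunc(x)

def zero_input_response(a, y0, quantize, steps=40):
    """First-order recursion y[n] = Q(a * y[n-1]) with zero input,
    i.e. the filter 'ringing down' after the input signal stops."""
    y, out = y0, []
    for _ in range(steps):
        y = quantize(a * y)
        out.append(y)
    return out

# With round-half-away, the output locks into a +/-5 oscillation (limit cycle);
# with magnitude truncation, it decays to digital silence.
lc = zero_input_response(-0.9, 10, round_half_away)
tr = zero_input_response(-0.9, 10, truncate)
```

In a real filter such a residual oscillation would sit at a very low level, but whether it is audible could plausibly depend on the system and the room treatment - which is the flavour of user-dependent complaint I am speculating about.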
But what if there were also other plans for the users of legacy streaming kit? Remember the folded arms of Jason to the question what new products were coming at the Bristol HiFi show 2025? Also, there has been some staff sickness during these recent periods at Naim which may have impacted new product development and release. Add in the general state of the HiFi industry with mergers etc and you can see why a “WAIT and SEE” (Jason’s words, not mine) might be the best business strategy.
But… WHAT IF THE NEXT THING was DSP card upgrades for streamers? Substitute the Analog Devices SHARC DSP 21489 based cards with ones using the newer Analog Devices SHARC DSP 21563 devices, and room acoustic correction on the ND555, NDX2 and ND5XS2 etc. becomes a possibility. (I think Naim have offered upgrades like this before, for example when they increased the sample rate from 96kHz to 192kHz on the streaming products.)
All that is needed is sufficient confidence from NAIM that the installed user base is ready for this sort of thing and that enough customers will dip their hand in their pocket and upgrade.
PS: Perhaps why the information I have posted has never yet been moderated - perhaps this thread could be considered as raising awareness in the market?
I’m not sure that anything related to Naim product development is normally moderated, no matter how close to or far from reality it may be, unless it contains leaked factual confidential information. Whether Naim enjoys, or even learns from, speculation on the forum I have no idea. I like to think that they may follow some discussions, because that would help them understand what some customers or potential customers think, though it would be a full-time job to monitor every thread, and one ostensibly about football hardly suggests anything of developmental interest!
Your blind testing approach seems reasonably thorough despite the small sample size. However, one significant question: was care taken to make the sound level of each variant identical? Even that does not necessarily rule out sound level affecting preference as, for example, average sound levels might be the same but levels in a frequency band to which people are more responsive might differ between optimisation settings. An alternative could be to play each variant at several different sound levels, though that would make the process a lot longer.
When I refer to blind testing I normally mean first person, and for myself I adopt a simple process: I get one of my sons to do the swapping over, having first found volume control settings for equal sound level, but deliberately sometimes tweaking one to be louder, another time another. It helps that he enjoys trying to catch me out, which means that when I am consistent it is really convincing!
Hi IB, thanks again for taking the time to respond here. BTW, I do hope you had a good weekend with steam, water, ice and fire (Iceland perhaps?).
Like you, I also like to think that the NAIM employees spend their free time during the day scrolling through all the threads of different Ethernet cable testing! - NOT!
Nevertheless, if NAIM do launch (Q4, 2025) an upgraded DSP module along the lines I am speculatively suggesting, I could then claim “you heard it here first!”
Hi IB. Your question about sound level is very important. The simple answer is that I was not able to run the tests fully compensated for ‘average SPL’, either within each participant’s session or even between participants’ sessions. However, a consistent volume level was not required throughout the test, as I was simply trying to see if a specific marketing claim by Linn Products Ltd stood up to independent scrutiny.
For the tests, each participant chose some music familiar to them to listen to and also chose the volume at which to listen. The particular volume for the test was established during the ‘settling in’ period of 20 mins (reference step ‘f)’ in the Summary of the Test Procedure described above) and was set so that the participant was most comfortable and engaged with the music that they had chosen.
The trial was evaluating three (3) different settings of Linn Space Optimisation, and one of those settings was ‘OFF’, i.e. ‘NO Space Optimisation’. The way Linn SO works means that the magnitude of the SPL below 100Hz will very likely vary between settings; however, the decay of resonance in the room will also vary - so the question then becomes: what is the ‘perceived loudness’ of such resonance?
Which brings up an interesting point for debate. When sound engineers (or anybody, for that matter) do ‘level’ checks, what is the nature of the processing applied to give the reading? And if the measurement device is applying an ‘A’ weighting curve, how valid is that?
Three different settings of Linn SO are shown in the chart and, to help understand the straightforward gain change at the main low frequency (axial) resonance, note that each setting differed by 2dB from the next. In other words, ‘NO SO’ gave the highest SPL result at the lowest (axial) resonant frequency, the ‘20-80’ Linn SO Bias setting gave the next highest (i.e. 2dB lower), and finally ‘35-65’ gave the most reduced SPL (i.e. 4dB below ‘NO SO’ and 2dB below the ‘20-80’ setting). I hope all that makes sense!
Read on, as I think the results below will generate interesting thoughts (if not discussion) amongst interested parties on the forum.
PS: The information contained within the ‘appendix’ referred to in the chart has already been posted above.
PPS: The SPL (above 100Hz) was consistent for each session, i.e. the volume was not changed in a given participant’s session once set for comfort during the settling-in period.
The different weightings indeed have their different uses. A is supposed to approximate the response of human hearing at ordinary sound levels, and is the one most commonly used for noise measurement and, I think, the normal weighting for recording level meters etc. IIRC, C is mainly for peak sound levels or other high-level sources such as PA systems averaging over 100dB.
There is also B, which is for intermediate sound levels but which I don’t recall ever having seen in use. And there is also Z, which is effectively no weighting, but limited to the human frequency range.
Hi IB, thank you for continuing to provide your input. I do enjoy our conversations - I hope you do too. I also hope others are intrigued enough to follow the various topics going on in this thread.
I posed the ‘A’ weighting question because I undertook a couple of experiments in other situations (interconnect cable assessments) where I was able to ‘observe’ the choices expressed by the participants through careful control of volume changes between cables and settings. When I say careful, I mean circa 0.5dB variations! So, for example, I might have only two cables being assessed, but actually four (4) situations: two cables, each assessed at two (very closely spaced) volume settings. The participants believe that there are four (4) cables in total, and are asked to rank each in a series of A/B comparisons.
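As a sketch of how that four-condition design can be enumerated (the labels here are hypothetical - the actual cables were not named in the post):

```python
from itertools import product, permutations

# Hypothetical labels for the two cables under assessment.
cables = ("cable_A", "cable_B")
level_trim_db = (0.0, 0.5)   # a 0.5 dB trim is roughly a x1.06 amplitude change

# Each (cable, trim) pair is presented to the listener as a distinct "cable".
conditions = list(product(cables, level_trim_db))   # 4 presented conditions

# All ordered pairings of distinct conditions for a series of A/B comparisons.
ab_pairs = list(permutations(conditions, 2))        # 12 ordered pairs
```

The point of the design is that if listeners consistently prefer the slightly louder trim of the *same* cable, the ranking is being driven by level rather than by the cable itself.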
For the ‘masked randomised’ experiment reported on Linn SO, I was not so concerned about the impact of volume setting, as I simply wanted to check that I did not need to alert the Advertising Standards Authority to any potentially false claims expressed in marketing materials.
Furthermore, it is becoming very well understood that ‘A’ weighted SPL measurements hardly express the human perceived annoyance of low frequencies (below 200Hz). There are some good research papers on this - the Finnish Institute of Occupational Health report titled “Measurement of low frequency noise in rooms” (2011) is one such example. Its objective: “The aim of the study was to develop a simple and reliable method for the measurement of low frequency noise in all kinds of rooms”.
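To make the low-frequency point concrete, here is a small sketch of the standard A-weighting curve (my own implementation of the published IEC 61672 formula, shown only to illustrate how steeply it discounts bass):

```python
import math

def a_weight_db(f):
    """A-weighting gain in dB at frequency f (Hz), per the IEC 61672 curve."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00   # offset so the gain is 0 dB at 1 kHz

# The curve discounts exactly the region under discussion:
# roughly -19 dB at 100 Hz and -30 dB at 50 Hz.
```

So an A-weighted meter reading can report two settings as near-identical in ‘level’ while their energy at a 40-50Hz room resonance differs audibly.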
I don’t know if you (or others) are sufficiently interested to read this kind of information?
It is all of interest to me, though I often have limited time to respond! Being retired has made me busier than when working, or it seems that way (sometimes holidays, sometimes work at home, etc.)!
I partially agree with you, as I would expect a next-gen Uniti series to appear in the near future, with the latest streaming platform and room correction functionality included.
Retrofitting the new StreamUnlimited/DSP platform to previous-generation streamers seems highly unlikely to happen, in the same way that there was no provision for the 2nd gen streaming platform to be installed in the 1st gen streamers. Such a strategy doesn’t help with selling more black boxes.
The only exception that I could possibly envisage is the potential upgrade of the NSS333/NSC222 to the new platform, as it would be a bit strange to have different functionalities among different New Classic streaming products. Not sure if such retrofitting will be technically feasible in the first place, though…
This is all fascinating, @Edmund-of-Essex. Thank you for your work on these topics, and your willingness to share.
I am curious about how you adjusted for psychoacoustic effects, such as sequential contrast effects in which the sequence in which sounds are heard influences the perception of those sounds.
For example, let’s say I were auditioning two speakers. One speaker has a suckout in the upper mids; the other is more neutral. If I hear the speaker with the suckout first, then my ear-brain may index on that sound as “normal.” As a result, I may hear the second speaker sound like it has an upper mids “peak” when actually it is flat.
I am by no means an expert in any of this! It seems to present a challenge in subjective assessment, though. I am sure you have thought these issues through … and may even have discussed them already, if so apologies for missing those posts!
Hi @Middle , Thanks for a great question on the method! By the way, thanks also for the many likes you give on the various postings - it does provide encouragement to me (to keep posting).
Now to your question. Indeed, I was very concerned about the point you have raised and, no, I haven’t previously fully explained it, other than to declare that it was a ‘Masked Randomised’ test.
Because I knew I wanted to explore three (3) settings of Linn SO (including the OFF setting), I quickly realised that I would need a minimum of 6 participants to try to deal with any potential biases arising from the order of listening, or from the psychoacoustic perception issue that your posting describes. So with 6 participants, each participant receives a different (and unique to the trial) play order of settings.
Without having to mark it out as if we are all still in school, I hope the forum readers will realise that 6 participants is the minimum required to achieve this?
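For anyone who wants to see it spelled out, the counting argument is simply that three settings have 3! = 6 possible play orders:

```python
from itertools import permutations

# The three Space Optimisation settings under test (from the posts above).
settings = ("NO SO", "20-80", "35-65")

# Every possible play order: 3! = 6, so six participants is exactly enough
# for each to receive a unique order (full counterbalancing).
orders = list(permutations(settings))
```

With one session per order, any effect of play order is balanced out across the panel rather than confounded with a particular setting.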
Having decided on 6 participants, I thought I should probably have an even mix of ‘HiFi interested people’ and ‘non HiFi interested people’, as well as an even mix of gender.
I mostly achieved these goals with a 60%-40% split on (birth) gender and 50%-50% for the general level of interest in HiFi.
Nevertheless, despite all of the above and the previously declared precautions I was still left with the following concerns:
The duration of the sessions on a given setting (i.e. too long, perhaps),
Not permitting the participants to switch back and forth between the settings to form a clearer view on preference,
and finally…
The initial ‘settling-in’ period was also random in its Linn SO setting, which could have produced selection biases via the very observation you have raised!
Luckily, when I checked the results for this final bias risk, it was OK - no bias showed up.
PS: There are more subtleties (in the results) that can be discovered if one studies the chart(s) very closely.