Found this article. It's been said anecdotally on this forum, but I have not previously seen any science suggesting this is so.
Neurons in the brain’s hearing center reacted differently to the same sounds in different test subjects, so different people may hear the same sound differently. Cynthia Graber reports.
Full transcript, from Scientific American:
Our ears are highly attuned to sounds in the world around us. It’s not just the frequency of the sound itself. There are also subtle differences and shifts in loudness and pitch. That’s what tells us, for instance, whether that baby crying belongs to us and just where it’s located. But according to a recent study, what you and I hear may not sound the same.
Scientists at the University of Oxford are trying to understand how the ears and the brain work together. They fit ferrets with auditory implants, trained them to respond to sound, and then looked at the way their neurons reacted. It turns out that the neurons in each ferret’s auditory cortex responded to gradual differences in sound, but each ferret responded differently.
The researchers say this is applicable to humans. They say this means that our brains are wired to process sounds depending on how our ears deliver that sound. So if you suddenly heard the world through my ears, it might sound quite different. The scientists say this research could help in the quest to design better hearing aids and speech recognition systems.
Might explain why I don’t always understand my kids; obviously we’re hearing different things.
I think that’s called selective hearing Pete.
Of course we all know that; don’t these researchers have better things to do?
It’s a bit off topic (well parallel), but I was reading not so long ago in the paper that the trait of being moved by music to a point where it becomes an integral part of your life vs. being a casual consumer of whatever is currently on the radio/TV had been traced back to specific genes.
Most people simply lack this gene, and if you took music away tomorrow, they’d be completely fine. Without the gene, music triggers a pleasurable but non-emotional response driven by melody. With the gene, there is a highly emotional response based on a complex set of factors, of which melody is just one; amplitude, tempo, and lyrics all come into play. And in the sample set, these people exclusively represented the collectors and enthusiasts.
It made no mention of hifi, though I bet I can guess where that falls.
Indeed - I read on another forum from someone that they appreciated the bracketed descriptions of film/TV music that you get on subtitles, such as:
(Stirring Instrumental Music)
(Intense Music Builds)
…because they had little emotional reaction to music so it was helpful to have it described to them so they knew what the music was supposed to be doing. Quite an alien concept to me, but it takes all sorts.
I suspect that if you monitored my brain activity when listening to music at 17, 27, 37 etc there would be substantial differences; I believe I am far better at differentiating today. I believe there is an apprenticeship here and experience counts, as with wine tasting.
Another slight aside, audiologists when fitting hearing aids know that they sound pretty strange early on, particularly one’s own voice, but with use the brain changes the sound to what we “expect” to hear and so they sound okay.
Of course I corrected my audiologist and informed him that it was nothing to do with perception of the sound, that it was a well researched scientific phenomenon called “burn-in”.
Differences in hearing may be genetic and/or a consequence of learning (and I suspect the latter includes the formation of neural pathways, though that is an area about which I know very little). But the fact of difference was brought home to me some years ago in a non-music context: human communication. A Chinese person was trying to improve her English, and I was trying to learn Cantonese. She genuinely could not tell that her attempts to sound some consonants in English resulted in very different sounds to me, but apparently identical to her, e.g. the sound of ‘L’ as in ‘slap’ and ‘N’ as in ‘snap’. And my attempts to pronounce some Cantonese words brought nothing but laughter, as I imitated exactly what she said, but to her it made a completely different-sounding word.
Next year I can give you a point on the curve at age 77. My, how time flies. It’s been an enjoyable apprenticeship.
Yes, and my earworms sound better than yours.
Of course people react differently to sound. For instance, I find the constant beat music playing behind traffic announcements on the radio intensely distracting and annoying. Also, I hate loud music playing in restaurants. Some people say they aren’t even aware of the noise! But now, of course, the simple pleasure of eating out in a restaurant is something to look forward to, despite the “music”.
Funny, isn’t it, that when the music is turned down behind the announcement it is the drums that stand out, whatever the music!
It was my Mum who drew my attention to that some 25 years or so ago, asking why they played drums while giving the weather forecast on local radio. I hadn’t noticed it, but I focused and heard what she meant - but I could ignore it and so was oblivious. Roll forward 20+ years and it now obscures the announcement for me. I think reduced ability to discriminate is an age-related hearing phenomenon, the same as struggling to follow a conversation in a noisy place.
But why indeed do radio stations do it? The background music whilst talking serves no useful or artistic purpose, and in the interests of accessibility should be avoided - but I guess no-one has conveyed the significance of the issue to the broadcasters.
I have a terrible time with selective hearing. I’ve never ever been able to hear someone on the phone with low-level background noise at my end, or pick out one word of someone’s conversation in a bar. And yet I had hearing that went beyond 20 kHz well into my late 30s.
My brain just doesn’t filter audio that way. It’s great for music because I hear everything together without switching off instruments mentally like many do. In daily life, it proves quite annoying.
Yes, sadly it may well be an age-related problem! Broadcasters think it appeals to the younger audience, who they think matter the most, so even if they have received many requests to stop the background beat, they will ignore them!
I am sure by now you are a Master, and I hope I am a Journeyman
When I’m out hunting and gathering - with a mask and observing 2 metres.
I find my right ear is very susceptible to very high frequencies.
My left ear very susceptible to low frequencies.
And kind of the opposite with sensitivity.
Meaning I’m pretty spot on with my big stick.
And it’s all based on your HRTF (head-related transfer function) - the shape of your ear and head - which Sony has been researching for the new Tempest 3D audio engine in the PS5. There are basically hundreds, if not thousands, of different profiles depending on the shape of your ear and head. Getting the right one for the 3D audio will be the hard part, so roll on the PS5.
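For anyone curious how HRTF-based 3D audio works under the hood, here is a minimal sketch. The idea is that a mono source is convolved with a separate impulse response for each ear, and the per-ear delay and attenuation differences are what the brain reads as direction. The impulse responses below are made up purely for illustration; a real system like Sony’s would use measured responses matched to the listener’s actual head and ear shape.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear impulse responses
    to produce a 2-channel (binaural) signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy example: a 1 kHz tone, plus made-up impulse responses that
# delay and attenuate the right ear, as if the source sat to the left.
fs = 44100
t = np.arange(fs // 10) / fs          # 100 ms of signal
tone = np.sin(2 * np.pi * 1000 * t)

hrir_l = np.zeros(31); hrir_l[0] = 1.0    # direct path, full level
hrir_r = np.zeros(31); hrir_r[30] = 0.6   # ~0.7 ms later, quieter

binaural = spatialize(tone, hrir_l, hrir_r)
print(binaural.shape)                 # 2 channels, tone length + filter tail
```

The hard part Sony describes is not this convolution - it is choosing the right pair of impulse responses for your particular ears out of the thousands of possible head/ear profiles.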
This fella seems to be fitted with a 30 Hz resonant dissipation model and looks happy.