Apple changes our sound perception

Just as it did with our perception of reality

Ofer Shmueli
Mac O’Clock

--

The first time that I heard of Apple’s Spatial Audio, one thing came to my mind:

Apple is playing with our sound perception the same way it has been playing with our vision through augmented reality, introduced a few years ago.

Don’t get me wrong: I don’t think it is a game-changing feature on its own. But its inclusion, alongside augmented reality and the ultra-wideband (UWB) radio capabilities coming to newer Apple devices, will virtualize our reality in vision, sound, and location alike.

AR Started It

Apple wasn’t a pioneer in augmented reality, but the launch of the ARKit framework gave developers around the world a simple and powerful way to build visual augmented-reality layers that appear out of nowhere and blend seamlessly into reality as we see it. Apple’s Spatial Audio does the same, but instead of light photons, it now filters and plays with sound waves.

Apple’s AirPods Pro are the company’s first audio hardware to support Spatial Audio. That means that with a simple stereo setup (left and right earbuds, just as we hear real life with our two ears) you can enjoy an immersive theater experience with multi-channel surround sound that hits you from the right, the left, the front, the back left, and the back right.

Analyzing Real Sound Sources and Manipulating Them

The algorithms behind Spatial Audio manipulate the audio frequencies coming from your audio source, the device your AirPods are connected to (provided it runs iOS 14 or iPadOS 14), the same way ARKit manipulates real-life scenes and layers new life on top of them. The Spatial Audio algorithm takes plain sound waves and reshapes them into a wall of sound coming at you from every corner.

Spatial Audio is not supported by all content, so you will have to make sure that what you watch and listen to actually supports it.

The algorithm that creates this surround sound (whether 5.1, with five main channels plus an LFE bass channel, 7.1, or Dolby Atmos) knows how surround sound, as perceived by the brain, spreads through the air, hitting and reflecting off our body parts (head, shoulders, hair, neck, ears). This is quite a complex task, since people have different head and shoulder sizes.

These aural cues (the reflected sound waves) are one of the mechanisms that help our brain determine where a sound comes from.

When we hear something or someone in front of us and a sound suddenly arrives from behind, our brain uses the reflected sound to understand that it came from behind us.
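One of the simplest of these localization cues is the interaural time difference: a sound off to one side reaches the near ear slightly before the far ear. As a rough sketch (using Woodworth's textbook approximation and an average head radius, not Apple's actual algorithm), the gap can be estimated in a few lines of Python:

```python
import math

HEAD_RADIUS_M = 0.0875   # average human head radius (an assumption)
SPEED_OF_SOUND = 343.0   # meters per second in room-temperature air

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's approximation of the arrival-time gap (seconds)
    between the two ears for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to the right)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source straight ahead reaches both ears simultaneously:
print(interaural_time_difference(0))       # 0.0
# A source at 90 degrees arrives roughly 650 microseconds earlier
# at the near ear, a gap the brain uses to place it to the side.
print(interaural_time_difference(90))
```

A few hundred microseconds is all the brain needs; a renderer that delays one channel against the other by this amount can shift where you perceive the source to be.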

Spatial Audio uses aural cues (among other techniques) to trick our brain into thinking that the sound waves come from multiple sources instead of just two.

In the end, we get a surround-sound experience that projects a sound field from every corner, as if we were listening to five or more speakers placed in a surround topology in our home.
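The idea of folding virtual speakers into two ear signals can be illustrated with a toy constant-power panner. This is a deliberately simplified stand-in for Apple's real HRTF-based filtering; the speaker angles follow a conventional 5-channel layout:

```python
import math

# Nominal azimuths (degrees) of a 5-channel surround layout; the LFE
# channel is omitted since it carries almost no directional cue.
SPEAKER_AZIMUTHS = {
    "front_left": -30, "center": 0, "front_right": 30,
    "back_left": -110, "back_right": 110,
}

def pan_gains(azimuth_deg: float) -> tuple[float, float]:
    """Constant-power left/right gains for one virtual speaker."""
    # Map the -110..110 degree range onto a 0..90 degree pan angle.
    pan = (azimuth_deg / 110 + 1) * math.pi / 4
    return math.cos(pan), math.sin(pan)

def downmix(frame: dict[str, float]) -> tuple[float, float]:
    """Fold one multi-channel sample into a stereo (left, right) pair."""
    left = right = 0.0
    for name, sample in frame.items():
        gl, gr = pan_gains(SPEAKER_AZIMUTHS[name])
        left += gl * sample
        right += gr * sample
    return left, right

# A signal present only in the back-left channel ends up
# almost entirely in the left ear:
l, r = downmix({"back_left": 1.0})
print(l > r)  # True
```

A real spatializer replaces these simple gains with per-ear filters and delays, but the structure (every virtual speaker contributing to both ear signals) is the same.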

Relative Movement

The beauty of it, beyond the fact that it is all done virtually with sophisticated algorithms, is that by using the AirPods’ built-in sensors, such as the motion-detecting accelerometer, the sound wall stays in place as you turn your head.

The sound wall (left, right, front) is anchored to your device, and you, wearing your AirPods, move within it. The sensors in the AirPods and in your device (iPhone, iPad) keep tracking your head movement relative to the device, and the device’s movement relative to your head.
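The geometry of this compensation is simple even if the sensor fusion behind it is not. On real hardware the head orientation would come from the AirPods' motion sensors; this sketch only shows the angle arithmetic that keeps a device-anchored source fixed as the head turns:

```python
def perceived_azimuth(source_azimuth_deg: float, head_yaw_deg: float) -> float:
    """Azimuth at which the ears should hear a device-anchored source
    after compensating for the listener's head rotation. The result is
    wrapped into the -180..180 degree range."""
    return (source_azimuth_deg - head_yaw_deg + 180) % 360 - 180

# The center channel sits straight ahead at the device (0 degrees):
print(perceived_azimuth(0, 0))    # 0
# Turn your head 30 degrees to the right, and the renderer must now
# place that same channel 30 degrees to your left, so the sound wall
# stays anchored to the screen rather than to your head.
print(perceived_azimuth(0, 30))   # -30
```

Run at the sensors' update rate, this re-aiming is what makes the virtual speakers feel nailed to the room instead of glued to your ears.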

Spatial Audio Content

Not every content provider supports Spatial Audio (Apple TV and Disney+ do for sure); hopefully it will come to other major players such as Netflix and Amazon Prime later on.
