With some new circuit or software innovation, we hear deeper into the music: an obscuring layer is removed, revealing what lies just under the surface.
Which raises two questions: are there actually layers to be removed, and if so, how many are there?
The assumed layers blocking a closer look inside the music have gone by many names: veils, clouds, and haze, a stack of barriers separating music from listener. Their one-by-one removal brings us closer to the original recording, or so we imagine.
I question this whole notion of layers and wonder if what we hear is instead more akin to increasing the contrast or adding saturation to a video image.
Could it be that instead of removing layers, we're actually supercharging the musical signal?
This is more than a semantic twist: there's a fundamental difference between removing haze and amping up what's already there.
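For those who like the analogy made concrete, here's a minimal sketch of the two image operations in Python. The pixel values, veil level, and gain are made-up illustrative numbers, nothing measured: dehazing subtracts an estimated veil and rescales, while a contrast boost only amplifies differences already in the signal.

```python
import numpy as np

# A toy "image": four pixel brightness values, 0.0 (black) to 1.0 (white).
image = np.array([0.30, 0.45, 0.55, 0.70])

# Dehazing: subtract an estimated uniform veil, then rescale to full range.
# The veil level here is an illustrative assumption, not a measured value.
veil = 0.20
dehazed = np.clip((image - veil) / (1.0 - veil), 0.0, 1.0)

# Contrast boost: stretch values away from mid-gray (0.5).
# Nothing is removed; existing differences are simply amplified.
gain = 1.5
contrasted = np.clip(0.5 + gain * (image - 0.5), 0.0, 1.0)

print("original:  ", image)
print("dehazed:   ", dehazed)      # an obscuring layer subtracted
print("contrasted:", contrasted)   # existing content amplified
```

Both outputs look "clearer," yet only one of them actually removed anything.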
It's an interesting question, one I'll spend more time exploring over the next few months.