High-resolution recording for immersive audio demands the utmost attention to detail in order to capture pristine tones and reproduce the hyper-realistic sound of music performed live in a particular space and time. Ulrike Schwarz and Jim Anderson, who have won Grammy Awards and received numerous nominations for immersive sound engineering, are accustomed to using only the finest recording studios and equipment to make their projects successful.
However, the pandemic upended the rules of recording and forced engineers to get creative in overcoming quarantine and other barriers in order to continue their work. With soprano saxophonist/composer Jane Ira Bloom’s latest release, Picturing the Invisible: Focus 1, Ulrike Schwarz and Jim Anderson not only garnered yet another Best Immersive Album Grammy nomination; they created a new paradigm for recording live, improvised jazz in real time over the internet, with the performers in remote locations. Jane Ira Bloom (pictured in the header image above), Ulrike, and Jim spoke with Copper’s John Seetoo a few weeks after a private listening session for the album at Dolby’s Dolby 24 Screening Room surround sound theater in New York. They explained the challenges and workarounds they devised to transcend the physical, logistical, technical, and even familial obstacles to creating this release in various high-resolution formats, including 384 kHz/32-bit stereo, Dolby Atmos and 5.1 surround sound, AURO-3D, streaming MQA at up to 192 kHz, and other formats.
John Seetoo: Hey, congratulations on what I believe is your fourth Grammy nomination working with Jane?
Ulrike Schwarz: It's the third; it was Sixteen Sunsets, Early Americans, and now it's Picturing the Invisible: Focus 1, right.
Jim Anderson: But I think between [Ulrike and myself], this is our third or fourth.
JS: OK. Congratulations! As a result of the pandemic, Picturing the Invisible: Focus 1 had an unusual genesis that exemplifies the maxim, "necessity is the mother of invention." What was the original concept for the record, and what led to the changes that resulted in the final product?
Ulrike Schwarz. Courtesy of John Abbott.
US: The way that I understood it was that originally we had gotten [a] grant from the New York Foundation for the Arts, in association with the NYC Women's Fund for Media, Music and Theatre by the City of New York’s Mayor’s Office of Media and Entertainment. We had suggested doing a large three-dimensional recording of, basically, the space in between the notes.
And that's why Jane composed this cycle of songs for instruments like Japanese koto, Chinese pipa, vibraphone, and all kinds of unusual instruments. It would have [been] an aural representation of the science photography that Berenice Abbott had taken in the 1920s, 1930s and 1940s. That was supposed to happen all in a spacious, very nice acoustical room, like a church or a big studio. And the other component to make this really very audible was to go into [a] super-high-resolution audio format to really be able to establish that kind of fine, fine line of recording. We got the grant in, I think, December 2019. And we were supposed to go into production early 2020.
And then the world shut down for a while. So, we had to rethink that concept. And by rethinking that concept, I mean, the composer had to rethink the concept. And Jane kind of reduced it to mostly duos. The only trio on the recording is a pre-recorded ground bass line, which was then overdubbed live by the koto and soprano saxophone.
The technical reason we [decided to do what we did] was that establishing recordings at two sites was possible [for us]. First of all, I didn't have three recording systems to set up in different spaces. And second, the strain on the internet connections would have been too great. I think with the possibilities we had, we couldn't have done a three-way live [session] over the internet. Communication delays (latency) would have made it impossible to actually make music together.
[Jane Ira Bloom joins the call.]
JS: We were just getting into some of the technical issues that had come up for recording Picturing the Invisible: Focus 1 as a result of the pandemic. The question was, what was the original concept for the record and what had to change? Ulrike was going through some of the technical engineering considerations, and she wanted to bring you in on the aesthetic aspects.
Jane Ira Bloom: Oh, sure; I'll tell you how the music came about. It all started with this photographer, Berenice Abbott. [She was] a legendary New York City photographer, who did this gorgeous black and white science photography. She was one of the very first photographers ever to make images of light waves and principles of motion, using this very stark, beautiful, abstract black and white imagery. And she made things in physics feel visible through her lens.
So, this stuff just knocked me out. And I started to imagine how I might try to reimagine these ideas as music. I began composing these pieces for small improvising groups, inspired by her work. It’s hard to describe what goes on in the composer's mind, but somehow I was making the translation. Sometime later, Ulrike, Jim and I were sitting in a coffee shop. We were brainstorming this idea about how we could record these Abbott-inspired pieces in high resolution, immersive sound, and in that way, try to make the technology of recording the sound match the place [where] the music was inspired by these concepts of science, you know, physics. So in that way, art and science would collaborate and with the support of the New York City Women's Fund for Media, Music and Theatre, we could do this – and then the pandemic happened. All bets were off.
And this is when we all got creative. We couldn't rehearse and record in person with the ensembles that I had imagined, like quartets and sextets. So I honed the material down to pieces that would work very simply as duets, with a very stark contrast of sound and silence. We couldn't record in the studio. So, Ulrike basically came up with the idea of elevating the recording technology, so she could record me and three of my bandmates – Allison Miller (drums), Mark Helias (bass) and Miya Masaoka (koto) – remotely, in real time, from our homes in New York City.
Ulrike and Jim did the most amazing, amazing work coming to our homes to record in surround sound under circumstances that would make most engineers faint. So that's where all the audio magic happened. And from my end, basically, we were the creators of the sound. The musicians I chose to write for and perform with all have very distinctive sounds and a very unique improvisational identity, [and] we were used to playing with each other. So we were able to give Ulrike a sound, and a kind of intimate dialogue in the way we were improvising. But from there, she took the sound and ran with it. And what happened after that is…to me, it's absolutely audio magic. So that's where my story ends and hers begins.
JS: Thank you so much. If you want to stay on to comment or weigh in on any of the other questions, by all means...
JIB: I imagine you guys have to talk tech. I don't know if I should step out [to] make it easier.
US: You can stay! If you get bored, of course you can step out. But I think it's always good to have your perspective.
JS: Jim or Ulrike: can you elaborate on the logistical and technical challenges and how you decided to resolve them?
US: From a studio engineer's point of view, our approach is usually [that] we try to get our musicians in a controlled environment, so that we know there won't be any interference from the outside, which [there] would be in personal homes. [If] somebody takes a shower, if there's a construction site outside, or something [else happens] that would make the environmental noise uncontrollable – that is the first, the biggest problem. You can record all the nice music in the world, but if somebody [is using] a sledgehammer, then this [recording] is not usable.
So, in that case, we were very, very lucky, especially in Jane's building. Everybody seemed to be incredibly disciplined, because we used her office three times as a recording studio and nothing ever happened.
In Allison's case, she lives in Brooklyn; we used her basement practice room. Drums are a little bit louder, so any outside noise would not have been that significant. But her percussion – I mean, certainly those bell parts – is very, very open to [being affected by] distractions. And again, nothing happened. That was great.
At Mark's place…that was the East Village, which is a little bit Party Central. That could have been…interesting, but even there, it was very, very controlled. It was [on a] Sunday. It was still [during the] pandemic, so there was limited party action going on.
And Miya on the Upper West Side also had very, very good neighbors. So in terms of outside noise, we were very lucky.
We also tried to find good times. And by good times, I mean we tried to find times when people might not use the internet that much. So, for example, Sunday afternoon is a bad time because everybody uploads their YouTube videos, and the internet gets slow.
So, the first concern was outside noise. And [related] to that is, of course, how does it sound? How is the sound in the rooms [we had] to record [in]? Usually in a studio, you can change the acoustics if [it's] a little bit too dry or not dry enough. But in a little office, you may have walls [that are] too straight, too close, too something, so that there are interferences or unwanted effects. We were able to mitigate that a little bit with very basic measures like [putting] winter jackets on sofas and towels on tables and things like that. And [for] the rest, we took a very, very Anderson Audio approach, which is, “you know it, ignore it!” (laughs)
JA: John, we have a saying around here: “when there is no solution, there is no problem.” Therefore, we had no problems, because there were some solutions that we had no resolution for! (laughs)
US: Of course, the most important thing [was facilitating] the communication between the musicians. In order to establish that, we had to overcome the internet, mostly the internet speeds. Ideally, you want [the musicians] to have [a] line of sight [with each other], and you want them to be in a position to have immediate communication. Or if there is a delay, you want a stable delay, because people will be able to get used to something [consistent]…I think you can work with anything from, let's say, under 10 milliseconds up to 20 milliseconds if you really have to play together. And so, my job was to try to get the delay between the musicians down to a “playable” latency and also to keep it stable.
Since Jane, Allison and Mark and Miya had rehearsed on free software, it kind of made sense to use the software that everybody was used to, Sonobus. Given what I've learned now about the software, I would now go to a more advanced one that uses the capabilities of the internet a little differently. This was really a one-to-one line, and it depended very much on the speed that everybody had at their home networks.
So, my idea was to use times when not many people were on the internet, and then to bring in mostly gaming audio components, because [they] are the fastest – it's either gaming audio or banking that has the fastest access. I had a gaming audio hub for the communications computer. My recording computer is also a modified gaming laptop, because it can record 64 channels at 384 kHz and 32-bit. That's a very high processing rate that a normal laptop usually can't handle. We wanted to be on the internet at a very, very high data rate (at 192 kHz) because latency is calculated in samples. For example, if I am working at 192 kHz compared to 48 kHz, I can reduce the latency to a quarter of what it would have been at 48 kHz.
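(To make the arithmetic concrete: audio gear passes sound in fixed-size buffers of samples, so the time each buffer takes shrinks as the sample rate rises. The short Python sketch below illustrates the relationship; the 64-sample buffer size is an assumed figure for illustration, not a detail of Ulrike's actual setup. – Ed.)

```python
# A minimal sketch of the sample-rate/latency arithmetic described above.
# Assumption: a fixed buffer of 64 samples per block, a typical low-latency
# setting; the buffer sizes actually used on this project aren't given here.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Return the time one audio buffer represents, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

for rate_hz in (48_000, 96_000, 192_000):
    ms = buffer_latency_ms(64, rate_hz)
    print(f"{rate_hz // 1000} kHz: {ms:.3f} ms per 64-sample buffer")

# Prints:
#   48 kHz: 1.333 ms per 64-sample buffer
#   96 kHz: 0.667 ms per 64-sample buffer
#   192 kHz: 0.333 ms per 64-sample buffer
# A buffer of a given size passes four times faster at 192 kHz than at
# 48 kHz, which is the "quarter of the latency" Ulrike refers to.
```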
JIB: Ulrike, I have one thought about latency, because I have to leave you guys. It's fine as [long as] the technical and engineering [people] can get the latency down in how and where musicians hear each other. But in fact, there is [still] some kind of odd delay. And the interesting thing about jazz musicians, who are used to adapting to all kinds of situations, is that something goes on in our minds so that we actually anticipate one another. We get used to it; we compensate for it. And completely unconsciously, we're making some kind of adaptation. That's completely non-technical. But it has to do with our minds, which I think is fascinating: that we do this, and we don't even know that we're doing it.
US: Yeah, but you can also do this if it's stable, like if it were really super-low, right? [And] if you weren't who you are, this project wouldn't have happened anyway.
JIB: For sure [the delay] has to be stable, because somehow, our minds must make some kind of adaptation or anticipation so that we're able to calculate what we're going to play before we play it. So it's...
JA: You usually play behind the beat anyway, right? (laughs)
JIB: Behind the beat, behind me, who knows where it is? (laughs)
JS: Were you [all] on camera at the same time so you could see Mark or Allison?
JIB: We could see [each other on] Zoom. We like to feel the presence of the other when we're improvising.
JS: If you didn't have the visual aspect through Zoom, would that have altered how the recording or performance came out?
JIB: I think so. Because we're human and we like to feel like we know the other is there. If they're not, we have to work harder with our ears. And that would be off! (laughs)
JS: I'm just curious – aesthetically, if you were only relying just on hearing each other play in a dialogue form as opposed to...?
JIB: It's a very good question. I think you're right, that there's something that's comforting about the presence of the visual, together with the odd audio sensing that we do. Yeah, I think it does help.
JS: Ulrike, so in addition to the modified gaming computer, did you have to rely on different kinds of mics or preamps, versus what you would normally use in recording in a studio?
US: Yes, actually. So, just to finish that other [thought]: we had one set of computers that was only for audio communication. That was for [the] Sonobus [software]. Then we had a second set of computers for the Zoom [video], which was very much out of time. In order to not slow down the communications computers, we put that on a different set of computers. And the third line was the two laptops that were doing the actual high-definition recording.
For Jane, I used only two microphones, a Sanken CU-41, and a Neumann TLM 170. I think in a traditional studio recording, she has up to nine microphones. But first of all, we wouldn't have been able to fit that many microphones into the space. And second, that would have recorded too much of the space. So that's why I reduced this to two. In a usual studio recording, she would have [John] Hardy M-1 mic preamps. But in this case, I used the Merging Technologies MERGING+HAPI preamp and Premium A/D cards because I had to split my equipment. So, it was two microphones into the Hapi and then analog out into the Acousta LE03 Interface, so that there wouldn't be any latency. [See the equipment sidebar at the end of the article.] The audio also went to the recording machine independently.
The Acousta LE03 audio interface has very little or no latency [at] 192 kHz into Sonobus. I had two Acoustas, one for each side [one for Jane's Sonobus feed and one at the other musician's remote location]. Acousta is an Austrian company that [makes equipment for] broadcast, and they built these latency-free units that you can use either for broadcast or, in this case, for internet recording. These gave us the mix-minus (n-1) feeds for each direction.
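(A mix-minus, or n-1, feed sends each player everything except their own signal, so nobody hears a delayed echo of themselves coming back over the internet. The Python sketch below illustrates the idea with made-up signal names; the Acousta hardware performs this internally, and the code is not a representation of those units. – Ed.)

```python
# An illustrative sketch of the mix-minus (n-1) idea, assuming simple
# block-based float signals. The names below are invented for the example.
import numpy as np

def mix_minus(sources: dict) -> dict:
    """For each player, return the sum of everyone else's signal (n-1).

    Feeding players their own (delayed) signal back over the internet
    would be a distracting echo, so it is subtracted from their return.
    """
    full_mix = sum(sources.values())
    return {name: full_mix - signal for name, signal in sources.items()}

# Two-player example: Jane's return carries only the koto, and vice versa.
block = 64  # samples per processing block (illustrative)
sources = {
    "jane_sax": np.random.randn(block),
    "miya_koto": np.random.randn(block),
}
returns = mix_minus(sources)
assert np.allclose(returns["jane_sax"], sources["miya_koto"])
assert np.allclose(returns["miya_koto"], sources["jane_sax"])
```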
Allison’s [drum] microphones were recorded on my big laptop in Allison's basement. But I also recorded the Sonobus feeds of both Allison and Jane, so that I always had a recording of what everybody was hearing and reacting to. I could eventually make adjustments in case they weren't together. But they, as Jane said, were fantastically together [despite] being apart.
JS: Interesting. Were the separate laptops using the same clock for sync?
US: I synced them by hand; because I had [the] Sonobus recordings, I kind of knew where everybody was supposed to be.
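(Ulrike aligned the takes manually, using the recorded Sonobus feeds as her reference. For readers curious how such an offset could also be found programmatically, here is an illustrative cross-correlation sketch in Python; it shows a general technique and is not a description of her actual workflow. – Ed.)

```python
# Purely illustrative: estimating the offset between two recordings of the
# same performance via cross-correlation.
import numpy as np

def estimate_offset_samples(reference: np.ndarray, delayed: np.ndarray) -> int:
    """Return how many samples `delayed` lags behind `reference`."""
    corr = np.correlate(delayed, reference, mode="full")
    # Recenter the peak index so that 0 means "already aligned".
    return int(np.argmax(corr)) - (len(reference) - 1)

# Toy check: delay a noise signal by 480 samples (10 ms at 48 kHz)
# and recover the offset.
rng = np.random.default_rng(0)
signal = rng.standard_normal(8_000)
lagged = np.concatenate([np.zeros(480), signal])[: len(signal)]
print(estimate_offset_samples(signal, lagged))  # 480
```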
Another thing that was interesting was that at Miya Masaoka's space, she insisted that she had a 1-gigabit line. I was checking the speed before we went in to record, and I thought, “this is too slow for a gigabit line – what is happening?” She had told me that her son was at home, but that wasn't a problem. But…what was the son doing?
[It turned out] he played [online] games with his friends, unfortunately, and that was using a lot of the bandwidth. I had to get him off playing games with his friends during the holidays, which was a difficult task. I didn't make friends that day. But we had to do it, and the faster we could possibly do it, the earlier he could get back to the game. So yeah, that was interesting.
We have power conditioners at home, and at each location, I actually took power conditioners and tried to clean up the power as much as I could so that we would have a really noise-free or very, very low-noise recording. I think that also helped a lot to keep the recording quality on a very, very high level.
JS: I think Jim mentioned a humidifier…
US: Oh, well. Yeah. Usually, the engineers are the big control freaks and they try to get everybody [under] their conditions. This time, what was great was that the musicians actually worked at home and could choose the best instruments for the recording. Usually, you don't have the ability to do that. So with Mark, I went there a day early and he picked the bass that we thought would make this kind of recording sound best. He has 10 basses, and I don't know how many more in storage. So we picked a very good one. The thing is, acoustic basses in winter need a lot of humidity. They need to be humidified at all times, otherwise they'll break [the wood can get more brittle and crack if it gets too dry – Ed.]. We had a little bit of a discussion as to whether the humidifier should be on or off during the recording. I would have said it should have been off the whole time, but on one or another track on the recording, I think it was still on. That's the only time you hear any extraneous noise!
(The next part of the interview will go into how the microphone setups created the huge, larger-than-life 3D sounds of the immersive audio release; why they flew to California to mix the album at Skywalker Sound; the role that mastering engineer Morten Lindberg played in the final product; and comparisons with past methods of recording Jane Ira Bloom’s saxophone.)
******
Here’s the equipment used in the production of Picturing the Invisible: Focus 1:
At Jane’s office:
For recording:
- 1 Neumann TLM170 microphone
- 1 Sanken CU-41 microphone
- 1 Merging Technologies HAPI with Premium A/D cards
- 1 Apple MacBook with Merging Technologies Pyramix digital audio workstation v12 (running at 384 kHz/32-bit)
- 1 ESP MusicCord Pro ES Power Accelerator, ESP MusicCord Pro ES AC cables
- AccuSound MX-4 microphone cables
- AccuSound IX-3 interconnect cables
For internet communication:
- 1 Acousta LE03 interface (running at 192 kHz)
- 1 iMac running Sonobus
The microphone signals were split in the analog domain in the HAPI and went to the Acousta to be fed to the internet. The Acousta unit has n-1 (mix-minus) capability, so Jane's sax signals [in her return feed] were latency-free, and the D/A latency at 192 kHz for the Acousta LE03 was under 1 ms.
For all the other venues (Allison Miller’s basement, Mark Helias’ bass studio, Miya Masaoka’s living room) the setup for recording was:
- 1 Merging Technologies Horus with Premium A/D cards
- 1 Merging Technologies Pyramix digital audio workstation v14
- 1 PC AudioLabs laptop (the modified gaming laptop that can record 64 channels in 384/DXD)
- 1 ESP Eloquence Power Accelerator
- All ESP Eloquence or ESP MusicCord ProES AC cords
- All interconnect cables are AccuSound IX-3 cables
- All Mogami microphone cables
Microphones (for all three locations):
- Neumann USM 69, DPA 4007s, RadioShack PZM, AMB Tube DI, Neumann TLM 102, ElectroVoice 654, AKG D-112, AKG P-120, Shure SM57, Sanken CMS-2
For communication:
- iPad Pro for Sonobus
- WD D-50 Game Dock (internet hub)
- 1 Acousta LE03 interface (running at 192 kHz)
- 1 Focusrite interface (for Allison Miller)

The microphone signals were split in the analog domain in the Horus and went to the Acousta to be fed to the internet. The Acousta unit has n-1 (mix-minus) capability, so the drums/bass/koto signals for Allison/Mark/Miya [in their return feeds] were latency-free, and the D/A latency at 192 kHz for the Acousta LE03 was under 1 ms.
The Apple MacBook was controlled remotely via TeamViewer.
The accumulated latency between the systems ranged from a fixed 8 ms (minimum) to 64 ms (with Mark, due to a slower internet speed).
Header image of Jane Ira Bloom courtesy of Lucy Gram.