From The Audiophile’s Guide: Sample Rates and Their Impact on Digital Audio

Written by Paul McGowan

PS Audio CEO Paul McGowan has launched The Audiophile’s Guide, a 10-book set of the knowledge he’s garnered over the years. It’s a comprehensive collection of practical information, from understanding room acoustics and speaker setup to getting the best from analog and digital audio.

Copper will be featuring excerpts from The Audiophile’s Guide in this and future issues. We continue the series with Part Three and a look at the particulars of different types of digital-to-analog converters. (Part One and Part Two, discussing the history and some fundamentals of digital technology, appeared in Issue 215 and Issue 216.)

Sample Rates and Why They Matter

The sample rate (as discussed in Part One of this series in Issue 215) is a fundamental concept that plays a crucial role in determining the quality and characteristics of recorded sound. At its core, the sample rate is the number of times per second that an analog audio signal is measured and converted into digital data. This process, known as sampling, is essential for capturing and reproducing sound in the digital domain.

The importance of sample rate lies in its direct relationship to the frequency range that can be accurately captured and reproduced in a digital audio system. According to the Nyquist-Shannon sampling theorem, to faithfully reproduce a signal, the sampling rate must be at least twice the highest frequency present in the signal. Human hearing typically ranges from about 20 Hz to 20 kHz, though many adults can’t hear much above 15 kHz or so. To capture the full range of human hearing, we need a sample rate of at least 40 kHz. This is why the two most common sample rates in digital audio are 44.1 kHz and 48 kHz – both comfortably exceed this minimum requirement.

The 44.1 kHz sample rate became standard with the introduction of the CD. But why such an odd number? The story goes back to the early days of digital audio and involves a fascinating blend of innovation, compromise, and a touch of serendipity.

In the late 1970s, Sony and Philips were locked in a race to develop the CD format. The project was spearheaded by two audio pioneers: Kees Schouhamer Immink from Philips and Toshitada Doi from Sony. One of the most crucial decisions was choosing a sample rate that could capture the full range of human hearing but also fit on a reasonably sized disc. The team initially considered 44,056 Hz, a number derived from video technology. The NTSC video standard used in North America and Japan has a frame rate of 29.97 frames per second, and each frame can store 3 samples per line for 490 active lines. Multiplying these numbers (29.97 × 3 × 490) gives us 44,056 samples per second.
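
For readers who like to check the math, here’s a tiny Python sketch of both calculations – the Nyquist minimum and the NTSC-derived rate. The variable names are ours, purely for illustration:

    # The Nyquist minimum for 20 kHz hearing, and the NTSC-derived
    # candidate rate: frame rate × samples per line × active lines.
    f_max = 20_000
    print(2 * f_max)                  # 40000 Hz: the Nyquist-Shannon minimum

    ntsc_fps = 30_000 / 1_001         # the exact NTSC frame rate, ~29.97 fps
    print(round(ntsc_fps * 3 * 490))  # 44056 samples per second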

However, this number presented some technical challenges. Enter Nobuo Ohtsuka, another Sony engineer, who suggested rounding up to 44,100 Hz. This slight increase solved several technical issues and, coincidentally, allowed for exactly 74 minutes and 33 seconds of audio on a single CD. Legend has it that Norio Ohga, Sony’s president at the time and a former opera conductor, insisted that the CD be able to contain Beethoven’s Ninth Symphony in its entirety. The longest known recording of the Ninth is Wilhelm Furtwängler’s 1951 Bayreuth Festival performance, which clocks in at about 74 minutes. Whether this story is entirely true or a bit of clever marketing is still debated, but it adds a touch of classical elegance to the birth of digital audio.

The 48 kHz sample rate, on the other hand, has its roots in the professional audio and video production world. Its adoption is a tale of practicality, foresight, and a dash of international diplomacy. In the early 1980s, as digital audio was gaining traction, the Audio Engineering Society (AES) formed a working group to standardize digital audio practices. This group, led by Bart Locanthi of Pioneer North America, recognized the need for a sample rate that would play nice with various video formats used around the world. The 48 kHz rate was chosen because it’s a multiple of many common video frame rates (24, 25, 30 fps), making it easier to synchronize audio with video. This was crucial for the film and television industry, where precise audio-video sync is paramount.

But there’s more to the story. The choice of 48 kHz was also a diplomatic one. It bridged the gap between the American 44.1 kHz standard and the European preference for 50 kHz (based on their 50 Hz power systems). By choosing 48 kHz, the AES working group found a middle ground that could be accepted by both sides of the Atlantic. Moreover, the 48 kHz standard provides a slightly higher frequency range and a bit more headroom above the range of human hearing. This extra headroom became invaluable in professional applications where audio might be processed extensively. It allowed for pitch-shifting and other manipulations without introducing unwanted artifacts.

(An interesting tidbit: despite the professional audio world’s embrace of 48 kHz, the film industry initially used a 44.1 kHz sample rate for digital soundtrack work. This changed in the early 1990s when Dolby Laboratories introduced their Dolby Digital format, which used 48 kHz. This move helped cement 48 kHz as the go-to sample rate for professional audio-visual work.)

 

An example of sampling and quantization of a signal (red) for 4-bit linear PCM in the time domain, at a fixed sampling frequency. Courtesy of Wikimedia Commons/unlisted author.

 

The coexistence of these two standards – 44.1 kHz for consumer audio and 48 kHz for professional use – has shaped the digital audio landscape for decades, each with its own rich history and practical justifications. Having two main sample rate bases has led to some complications in the audio world. Converting between these rates isn’t always perfect and can introduce subtle artifacts. However, both standards have persisted due to their respective strengths and the industries that have grown around them. What’s particularly interesting is how these two base rates have influenced the entire landscape of digital audio, from consumer to professional applications. Every consumer-facing sample rate we encounter is derived from one of these two foundational rates.

Let’s start with the 48 kHz family. In the world of high-resolution audio, we often hear about 96 kHz and 192 kHz sample rates. These are direct multiples of the base 48 kHz rate: 96 is exactly double 48, while 192 is exactly quadruple. These higher sample rates are often used in professional recording and mastering, with the idea that the increased temporal resolution might capture subtle nuances in the sound, particularly in the higher frequencies. While the audible benefits of these ultra-high sample rates are debated among audio engineers and enthusiasts, they’ve become standard options in many high-end audio systems.

On the other hand, the 44.1 kHz family has spawned its own set of derivatives: 88.2 kHz (exactly double 44.1) and 176.4 kHz (exactly quadruple). These rates are less common but still used, particularly in scenarios where the final output will be in CD format (44.1 kHz). Using these rates can simplify the down-sampling process, potentially preserving more of the original signal’s characteristics.

But perhaps the most interesting offshoot of the 44.1 kHz family is the DSD format, used in Super Audio CDs and some high-end audio equipment. The standard DSD rate, also known as DSD64, has a sample rate of 2.8224 MHz. This might seem like an arbitrary number, but it’s actually exactly 64 times 44.1 kHz! And it doesn’t stop there. We also have:

  • DSD128 at 5.6448 MHz (128 times 44.1 kHz)
  • DSD256 at 11.2896 MHz (256 times 44.1 kHz)
  • DSD512 at 22.5792 MHz (512 times 44.1 kHz)

These ultra-high DSD rates are sometimes playfully referred to as “Double DSD,” “Quad DSD,” and “Octa DSD” respectively. The problem is, no current physical consumer audio media support 48 kHz; CDs and SACDs are all based on the consumer format of 44.1 kHz. Heresy! (The short sketch below tallies how every one of these rates falls out of the two base rates.)
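
Since every one of these rates is an integer multiple of one of the two base rates, the whole family tree fits in a few lines of Python. This is just a sketch of our own to make the relationships concrete:

    # Both PCM families and the DSD rates, as multiples of the two bases.
    BASE_441, BASE_48 = 44_100, 48_000

    print([BASE_441 * m for m in (1, 2, 4)])   # 44100, 88200, 176400
    print([BASE_48 * m for m in (1, 2, 4)])    # 48000, 96000, 192000
    for m in (64, 128, 256, 512):
        print(f"DSD{m}: {BASE_441 * m:,} Hz")  # 2,822,400 Hz up to 22,579,200 Hz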

If I had a magic wand and could narrow it down to just one standard, I’d choose 48 kHz. Here’s why:

  • It’s a rounder number, making calculations and conversions simpler.
  • It provides a bit more headroom above the range of human hearing, which can be beneficial in audio processing.
  • It aligns better with video frame rates, simplifying audio/video synchronization.
  • The difference between 44.1 kHz and 48 kHz is practically inaudible to most people, so we wouldn’t be losing perceptible audio quality.

However, it’s important to note that both 44.1 kHz and 48 kHz are more than capable of capturing the full range of human hearing. The differences between them are subtle, and both can produce excellent audio quality.

In the end, the most important thing for us audiophiles is not the specific sample rate, but the quality of the entire audio chain, from recording and mastering to playback equipment and listening environment. Great music can shine through at either sample rate when handled with care and expertise.

 

Digital to Analog Converters

Now that we have a glimpse into how an analog-to-digital converter works, what about the opposite? The digital-to-analog converter, or DAC, is the best-known and most interacted-with piece of high-end audio equipment in our digital audio world, transforming those ones and zeros back into the smooth, continuous waves our ears can appreciate. A DAC’s job is essentially to reverse the process of an ADC, taking the discrete digital data and reconstructing it into a continuous analog signal. This is crucial because our ears and speakers operate in the analog domain. Without DACs, all that carefully captured and stored digital audio would remain silent.

The concept of DACs dates back to the 1930s, with early work by Vladimir Kotelnikov and Harry Nyquist laying the theoretical groundwork. However, it wasn’t until the 1960s that practical DACs began to emerge, primarily for use in industrial and military applications. The first consumer DACs appeared in the late 1970s with the advent of digital audio. One of the earliest consumer devices to incorporate a DAC was the Sony PCM-1, introduced in 1977. This device could convert digital audio to analog for recording on videotape, a precursor to fully digital audio systems.

 

Vladimir Kotelnikov, one of the pioneers of digital audio technology. Courtesy of Wikimedia Commons/Leo Medevev.

 

But the real revolution came with the introduction of the Compact Disc in 1982. The CD player brought DACs into homes around the world. These early DACs were relatively simple by today’s standards. Many used a technique called “oversampling” to ease the burden on the analog output filters. One of the first widely used DAC chips in CD players was the Philips TDA1540. This 14-bit DAC used a technique called “dynamic element matching” to achieve 16-bit performance. Sony’s CX20017, another early DAC chip, was also 14-bit but used a 4-times oversampling digital filter to improve performance.

These early consumer DACs faced significant challenges. They had to contend with high levels of jitter (timing errors), limited resolution, and the need for very steep analog filters to remove high-frequency noise. As a result, many early CD players were criticized by us audiophiles for sounding harsh or artificial compared to analog sources – and it wasn’t just us being picky. Those early digital devices sounded pretty danged bad.

Over time, DAC technology improved dramatically. Higher sample rates and bit depths became possible, allowing for gentler filtering and for capturing more of the subtleties in the original audio. Today’s high-resolution audio formats, with sample rates of 192 kHz or even higher and bit depths of 24 or 32 bits, give DACs much more information to work with.

The result? Modern high-end DACs can produce sound that’s breathtakingly good. They can recreate the sense of space in a recording, render the tiniest details of a performance, and deliver music with a naturalness that can make you forget you’re listening to a reproduction at all. It’s not uncommon for listeners to experience their favorite recordings in a whole new light when heard through a top-quality modern DAC.

While DACs come in various types, they all share a common basic architecture. Understanding this structure helps us appreciate how digital data becomes the music we hear. At its core, every DAC follows a similar path:

  • Digital input: The process begins with digital audio data entering the DAC. This could be from a CD player, a computer, or a network streamer.
  • Digital processing: Before conversion, the incoming signal often undergoes some form of digital processing. This might involve upsampling (increasing the sample rate), noise shaping, or digital filtering. Some DACs allow users to choose between different digital filter options, each with its own sonic character.
  • Digital-to-analog conversion: This is where the magic happens. The digital data is transformed into an analog electrical signal using one of several conversion techniques (which we’ll explore in detail later).
  • Analog output stage: Finally, the newly created analog signal passes through an analog output stage before leaving the DAC.
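
To make that four-stage path concrete, here’s a deliberately crude Python sketch. Real DACs do all of this in hardware with far better filters; the zero-order hold and moving average below are stand-ins of our own choosing:

    import numpy as np

    def toy_dac(pcm: np.ndarray, oversample: int = 8) -> np.ndarray:
        # 1. Digital input: 16-bit signed integer samples, scaled to ±1.
        x = pcm.astype(np.float64) / 32768.0
        # 2. Digital processing: crude zero-order-hold upsampling.
        x = np.repeat(x, oversample)
        # 3. Conversion: in hardware this step produces voltages; here the
        #    float values stand in for the analog levels.
        # 4. Analog output stage: a moving average plays the role of the
        #    reconstruction (low-pass) filter.
        kernel = np.ones(oversample) / oversample
        return np.convolve(x, kernel, mode="same")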

While much attention is given to the digital-to-analog conversion process itself, the analog output stage plays a critical role in shaping the final sound. In fact, it’s not an exaggeration to say that this stage often determines much about how a DAC ultimately sounds and performs. The analog output stage has several important jobs:

  • Buffering: It provides a low-impedance source to drive the next component in the chain, typically a preamplifier or power amplifier.
  • Filtering: It helps remove any remaining high-frequency noise from the conversion process.
  • Amplification: In some cases, it boosts the signal to a standardized line level.
  • Current-to-voltage conversion: For DACs that output a current signal, the analog stage converts this to a voltage signal.

The design of this stage can significantly impact the DAC’s sound. For instance, a tube-based output stage might impart a different character than a solid-state design. The choice of components, from capacitors to op-amps, can influence the final sound. Moreover, the analog stage’s ability to maintain signal integrity is crucial. Any distortion or noise introduced here can mask the benefits of even the most sophisticated digital conversion.

This is why many high-end DAC designers pay as much (or more) attention to the analog output stage as they do to the digital conversion itself – think of the exotic vacuum tube output stages and discrete wonders of all kinds. The truth is that the vast majority of sound quality differences between DACs (given good digital performance in the first place) arise in the analog output stage. In PS Audio DACs, for example, the majority of the expense in building our high-performance products is in the analog stage and the power supplies that feed them. Of course, this all assumes great digital electronics and techniques to start with.

It’s a reminder that while we often focus on the digital aspects of modern audio, the analog domain remains critically important. After all, it’s analog signals that eventually move our speakers and our eardrums. Understanding this can help explain why two DACs with similar specs on paper might sound noticeably different. It’s not just about the numbers; it’s about how those final analog waveforms are crafted and delivered.

 

Types of DAC Architecture

Over the years, DAC designers have explored various architectures, each with its own strengths. While there are several variations and hybrid designs, two main types of DACs have dominated the landscape: R2R ladder DACs and delta-sigma (ΔΣ) DACs.

R2R ladder DACs, also known as multibit DACs, represent one of the earliest approaches to digital-to-analog conversion. They use a network of precisely matched resistors to directly convert digital signals into analog voltages. These DACs are known for their straightforward approach and potentially more natural handling of low-level signals.

Delta-sigma DACs, on the other hand, use oversampling and noise-shaping techniques to achieve high resolution. They’ve become the dominant type in modern audio devices due to their ability to deliver high performance at relatively low cost.

Between these two main types, we find various hybrid designs that attempt to combine the strengths of both approaches. These hybrids might use elements of both R2R and delta-sigma architectures, aiming to capture the directness of R2R designs and the high resolution of delta-sigma.

Each of these DAC types has its proponents in the audio world, and the ongoing debate about their relative merits continues to drive innovation in DAC design. As we delve deeper into each type, we’ll explore their unique characteristics, strengths, and reasons why a designer might choose one approach over another. As we do so, remember that the goal of all this technology is simple: to recreate the original sound as faithfully as possible. Whether it’s the delicate brush of a jazz drummer’s cymbal or the thunderous climax of a symphony orchestra, a great DAC aims to put you right there in the moment of the performance.

 

R2R Ladder DACs

The R2R ladder DAC, also known as the resistor ladder DAC, holds a special place in the history of digital audio. This architecture was among the first practical methods for converting digital signals to analog, and in many ways it started the digital audio revolution.

The “R2R” in the name refers to the network of resistors that forms the heart of this DAC type. Imagine a ladder where each “rung” consists of two resistors: one with a resistance value of R, and another with 2R (twice the resistance of R). This network of resistors, when combined with electronic switches controlled by the digital input, creates a simple yet elegant method of converting digital data to an analog voltage.

To help understand how this works in simpler language, let’s return to the water analogy we’ve used previously. Imagine you have a large water tank at the top of a hill, representing our reference voltage. At the bottom of the hill, you have a series of cups, each half the size of the previous one. Now, picture a series of pipes leading from the big tank to each cup, with a valve on each pipe. These valves are our “switches,” controlled by the digital input. When a bit is “1,” its valve opens, allowing water to flow. When it’s “0,” the valve stays closed. The cups are connected in a special way, so that each cup can either pour its contents into the next smaller cup or into a final collection bucket (our analog output). The largest cup represents the most significant bit, the next cup the second most significant bit, and so on.

When we input a digital number, we open and close valves accordingly. Water flows into the corresponding cups, and they pour into each other based on their connections. The amount of water that ends up in our final bucket represents our analog output. This system is fast because once we open or close the valves, water flows quickly. It’s also very direct – the amount of water in the final bucket is directly related to which valves we opened.

The beauty of this system is its simplicity. We don’t need complex mechanisms; we just need accurate cup sizes and reliable valves. However, as you might imagine, ensuring each cup is exactly half the size of the previous one becomes challenging as we add more cups, especially for the smallest ones. This is analogous to the precision required in the resistors of a real R2R DAC.
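
In circuit terms, all those cups and valves reduce to simple binary weighting: each bit contributes half as much as the one above it. Here’s a minimal Python sketch of the ideal transfer function, assuming a 16-bit ladder and a 2-volt reference of our own choosing:

    def r2r_output(code: int, bits: int = 16, v_ref: float = 2.0) -> float:
        # Each bit contributes a binary-weighted fraction of the reference.
        return v_ref * code / (2 ** bits)

    print(r2r_output(0))      # 0.0 V: every valve closed
    print(r2r_output(32768))  # 1.0 V: only the MSB "valve" open
    print(r2r_output(65535))  # ~2.0 V: every valve open (full scale)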

An illustration of an R2R ladder DAC. Courtesy of Wikimedia Commons/Damido123.

 

However, early R2R DACs faced significant limitations, particularly when it came to resolution. They were typically limited to about 18 bits of resolution. Why? The answer lies in the precision of those resistors. For an R2R ladder to work accurately, the resistors need to be extremely precise in their values, especially for the most significant bits. For example, in a 16-bit DAC, the most significant bit’s resistors need to be accurate to better than 1 part in 65,536. Achieving this level of precision with physical components is challenging, and becomes increasingly difficult as you add more bits.
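
To put that precision requirement in perspective, here’s the same arithmetic for a few bit depths (ideal-case figures, as a quick sketch):

    # The MSB must be accurate to better than one part in 2**bits –
    # one least-significant-bit step of the full scale.
    for bits in (16, 18, 24):
        parts = 2 ** bits
        print(f"{bits}-bit: 1 part in {parts:,} ({100 / parts:.6f}%)")
    # 16-bit: 1 part in 65,536 (0.001526%)
    # 24-bit: 1 part in 16,777,216 (0.000006%)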

This limitation meant that while R2R DACs could offer excellent performance for CD-quality audio (16 bits), they struggled to keep up as digital audio moved to higher resolutions. As a result, for many years R2R DACs fell out of favor in consumer audio, replaced by delta-sigma designs that could more easily achieve higher bit depths. However, in recent years, we’ve seen a resurgence of interest in R2R designs among us audiophiles and high-end DAC manufacturers.

This revival has been driven by several factors. First, advancements in manufacturing techniques have made it possible to create more precise resistor networks, allowing for higher-resolution R2R DACs. Some modern designs claim 24-bit or even 32-bit resolution, though it’s worth noting that achieving true 24-bit precision remains extremely challenging (and requires the use of tricks and averaging techniques). Second, some listeners prefer the sound of R2R DACs, describing it as more “analog-like” or natural. This is subjective, of course, but it’s driven interest in these designs.

Modern R2R DACs often use clever techniques to overcome the limitations of the basic design. For example, some use resistor trimming or calibration to achieve better matching. Others combine multiple lower-resolution ladders to create a higher-resolution output. Some designs use a hybrid approach, combining R2R techniques with other DAC architectures to try to get the best of both worlds. For instance, a DAC might use an R2R ladder for the most significant bits and another technique for the least significant bits. It’s also worth noting that modern R2R DACs often incorporate sophisticated digital filtering and oversampling techniques, much like their delta-sigma counterparts. This allows them to achieve better performance than a “pure” R2R design might.
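
As a rough sketch of that segmented idea (the 8/8 split and the names below are ours, purely illustrative), the top bits might go to a precision R2R ladder while a second stage fills in the bottom bits:

    def segmented_dac(code: int, msb_bits: int = 8, lsb_bits: int = 8,
                      v_ref: float = 2.0) -> float:
        msb = code >> lsb_bits              # handled by the R2R ladder
        lsb = code & ((1 << lsb_bits) - 1)  # handled by a second technique
        total = 1 << (msb_bits + lsb_bits)
        return v_ref * ((msb << lsb_bits) + lsb) / total

    print(segmented_dac(32769))  # ~1.00003 V: half scale plus one LSB step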

The resurgence of R2R DACs in high-end audio is a fascinating example of how old technologies can find new life. It’s a reminder that in audio, as in many fields, there’s rarely a single “best” solution. Different approaches have different strengths, and there’s always room for innovation and refinement.

 

Delta-Sigma (ΔΣ) DACs

Delta-sigma (ΔΣ) DACs have become the dominant architecture in digital audio, powering everything from smartphones to high-end audio systems. Their rise to prominence is a story of overcoming the limitations of earlier DAC designs through clever engineering and the relentless march of digital technology.

The core idea behind a ΔΣ DAC is quite different from the R2R ladder approach. Instead of trying to precisely match the analog output to each possible digital input value, ΔΣ DACs use a technique called noise shaping combined with oversampling to achieve high resolution and accuracy. Here’s a simplified explanation of how they work:

  • Oversampling: The incoming digital audio signal is upsampled to a much higher rate than the original – often 64 times or more.
  • Noise shaping: A feedback loop compares the desired output to the actual output, and “shapes” any errors (noise) so that they occur at frequencies above the audible range.
  • Quantization: The signal is reduced to a simple stream of 1s and 0s (hence “delta” for change and “sigma” for sum).
  • Filtering: Finally, a low-pass filter smooths out this stream into a continuous analog signal.
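
A toy first-order modulator makes that loop concrete. This bare-bones Python sketch is ours (real modulators use higher-order loops and much higher rates), but it shows the delta and the sigma at work:

    import numpy as np

    def delta_sigma_1bit(x: np.ndarray) -> np.ndarray:
        # "Delta": the error between input and the previous output;
        # "sigma": the running sum held in the integrator.
        integrator, prev = 0.0, 0.0
        out = np.empty(len(x))
        for i, sample in enumerate(x):
            integrator += sample - prev
            prev = out[i] = 1.0 if integrator >= 0.0 else -1.0  # 1-bit quantizer
        return out

    stream = delta_sigma_1bit(np.full(10_000, 0.25))  # a steady input of 0.25
    print(stream[:8])     # a churning pattern of +1/-1 values...
    print(stream.mean())  # ...whose low-pass average is ~0.25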

To use an analogy unrelated to electronics, imagine you’re trying to draw a smooth curve using only dots. With a delta-sigma approach, you’d start by plotting many, many dots, far more than you actually need. Then, you’d step back and look at the overall shape, constantly adjusting where you place the next dots to make the curve smoother. You’d focus on getting the average position of the dots right, knowing that when viewed from a distance, the individual dots blend together into a smooth line.

The history of ΔΣ DACs in consumer audio begins in the late 1980s. As digital audio pushed beyond 16-bit resolution, the limitations of R2R ladder DACs became more apparent. Delta-sigma offered a way to achieve higher resolution without requiring ultra-precise components. One of the first widely used ΔΣ DAC chips was the Philips SAA7320, introduced in 1987. It used 256-times oversampling and 1-bit quantization to achieve 16-bit performance. This chip and its successors helped usher in a new era of improved digital audio quality.

An illustration of delta-sigma modulation. Courtesy of Wikimedia Commons.

 

Over the years, ΔΣ DAC technology has continually improved. Modern designs can handle sample rates up to 384 kHz or higher and bit depths of 32 bits. They’ve also become incredibly power-efficient, making them ideal for portable devices. However, ΔΣ DACs aren’t without their critics in the audiophile world. Some listeners feel they can sound less “analog-like” than R2R designs, particularly in how they handle very quiet signals. There’s also been debate about how the high-frequency noise inherent in delta-sigma modulation might affect sound quality, even if it’s theoretically above the audible range.

The ongoing refinement of ΔΣ DAC designs reflects the audio industry’s constant pursuit of perfect sound reproduction. Multi-bit delta-sigma modulators represent a significant evolution in DAC technology. Traditional ΔΣ DACs use a 1-bit modulator, which is simple and inherently linear but can introduce high-frequency noise. Multi-bit designs, typically using 4 or 5 bits, aim to reduce this noise while maintaining the benefits of delta-sigma architecture.

Multi-bit delta-sigma modulators work by using a small number of bits instead of just one in the modulator stage. This approach reduces quantization noise, allowing for lower oversampling rates. The result is potentially lower distortion and better performance, especially for high-frequency signals. The advantage of this approach is that it can offer some of the precision of multi-bit conversion (like R2R DACs) while retaining the noise-shaping benefits of delta-sigma designs. This can lead to improved dynamic range and potentially more natural sound reproduction, especially in the higher frequencies.

Imagine you’re a chef trying to recreate a complex sauce recipe. In a traditional 1-bit delta-sigma approach, you’d be limited to adding either a pinch of seasoning or nothing at all, many times per second. You’d constantly taste and adjust, adding a pinch when the flavor is too weak, or skipping when it’s too strong. Over time, you’d get close to the right flavor, but it might take a lot of tiny adjustments. With a multi-bit approach, now you have a set of different-sized spoons. You can add a pinch, a dash, or a smidgen, depending on how far off the flavor is. Each adjustment can be more precise, getting you closer to the desired taste more quickly and accurately.

In DAC terms, this means each sample of audio can be represented more precisely. The “quantization noise” – the difference between the ideal flavor and what you can actually create – is reduced. This allows you to work more efficiently (lower oversampling rates) while still achieving great accuracy. The result is a more faithful reproduction of the original audio “recipe.” It’s like being able to capture subtle flavor notes that might have been lost with the cruder 1-bit method.

This multi-bit approach combines the precision of having multiple spoon sizes (like multi-bit conversion) with the constant tasting and adjusting of the delta-sigma method. For the listener, this can mean hearing more detail and nuance in the music, especially in complex, high-frequency sounds like the shimmering decay of a cymbal or the airy breathiness of a flute.
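
Tying the analogy back to code: swapping the one-bit quantizer in our earlier sketch for a small set of levels – the different-sized spoons – is all it takes. Again, this is purely illustrative:

    import numpy as np

    def delta_sigma_multibit(x: np.ndarray, n_bits: int = 4) -> np.ndarray:
        levels = np.linspace(-1.0, 1.0, 2 ** n_bits)  # the "spoon sizes"
        integrator, prev = 0.0, 0.0
        out = np.empty(len(x))
        for i, sample in enumerate(x):
            integrator += sample - prev
            # Pick the nearest level: each correction is finer than ±1.
            prev = out[i] = levels[np.abs(levels - integrator).argmin()]
        return out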

On the digital filter front, sophisticated algorithms are being employed to address one of the criticisms of delta-sigma DACs: their time-domain response. Traditional ΔΣ DACs can introduce pre-ringing in the time domain, which some argue is unnatural and could affect sound quality.

Modern digital filters in high-end DACs often focus on minimum-phase filters, which eliminate pre-ringing, potentially resulting in a more natural sound. Apodizing filters are another approach, reducing both pre- and post-ringing while maintaining good frequency response. Some DACs now offer customizable filter options, allowing users to choose between different filter characteristics, tailoring the sound to their preference.
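
For the curious, the pre-ringing difference is easy to see with SciPy. A symmetric (linear-phase) FIR filter has half its taps before the peak – that’s the pre-ringing – while its minimum-phase counterpart front-loads the energy. The tap count and cutoff here are arbitrary illustration values:

    import numpy as np
    from scipy.signal import firwin, minimum_phase

    linear = firwin(127, 0.45)       # symmetric low-pass: peak in the middle
    minimal = minimum_phase(linear)  # same magnitude response, no pre-ringing

    print(np.argmax(np.abs(linear)))   # 63: taps (and ringing) before the peak
    print(np.argmax(np.abs(minimal)))  # near 0: energy starts at the impulse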

Let’s continue with our cooking analogy to explain digital filters and their effects on sound quality. Imagine you’re following a recipe that calls for adding ingredients in a specific order and timing. Traditional ΔΣ DACs sometimes struggle with this timing, like a chef who starts preparing an ingredient before the recipe calls for it. This is what we call “pre-ringing” in audio terms. It’s as if you can taste a hint of garlic before you’ve even bitten into the garlic bread.

Modern digital filters in high-end DACs are like skilled sous chefs who ensure every ingredient is added at precisely the right moment. Minimum-phase filters, for instance, are like chefs who never prepare an ingredient early. They eliminate that pre-taste (pre-ringing), resulting in a more natural flavor progression as you eat the dish (or hear the sound).

Apodizing filters take this a step further. They’re like master chefs who not only ensure ingredients aren’t added too early, but also smooth out the transitions between flavors. In audio terms, they reduce both pre-ringing and post-ringing (the lingering of a sound), while maintaining the overall flavor profile (frequency response) of the audio.

Some high-end DACs now offer customizable filter options. This is like a restaurant where you can tell the chef your preferences. Maybe you like your flavors to blend smoothly, or perhaps you prefer each ingredient to stand out distinctly. These customizable filters allow audiophiles to “season to taste,” adjusting the sound to their personal preferences.

All these sophisticated filtering techniques aim to present the audio (or in our analogy, the dish) as naturally and accurately as possible. They strive to let you experience the music as if you were there in the recording studio or concert hall, with each instrument and voice presented in perfect time and harmony, just as the artist intended.

It’s worth noting that these advancements often come with increased computational demands. This is one reason why we’re seeing more DACs using powerful FPGAs (field-programmable gate arrays) or custom ASICs (application-specific integrated circuits), which can handle these complex algorithms in real time. These ongoing refinements in ΔΣ DAC technology demonstrate that digital audio conversion is far from a solved problem. Engineers and designers continue to push the boundaries, seeking that elusive goal of perfect sound reproduction. For audiophiles, this means an ever-expanding array of options, each with its own approach to translating digital data into music.

Today, ΔΣ DACs remain at the forefront of digital audio technology. They’re capable of delivering exceptional performance, often at relatively low cost. While other DAC architectures continue to have their proponents, the ubiquity and continued development of delta-sigma designs ensure they’ll remain a crucial part of the audio landscape for years to come.
