What is hi-res audio, and does it make any difference?

You’ve probably been told that bigger is better, more is more, and that 24-bit audio is better than 16-bit audio. But what is hi-res audio, and is it actually better than other kinds of audio? We’ll go over what, if anything, changes as sample rates and bit depths climb.

What our ears can hear

Almost any discussion of audio has to begin with the human ear. The range of frequencies we can hear spans roughly 20Hz to 20kHz, and the upper limit drops as you age. Anything higher or lower than that is inaudible, but other factors are at play too. There’s also the concept of auditory masking: loud sounds mask quiet ones. You can’t hear someone whispering at a noisy party, for example, even if they’re standing right next to you. Even the fanciest pair of headphones won’t change these basic facts.

As a result, there’s a limit on what we can hear in a given recording. Theoretically, we could capture a broader range of frequencies and save them, but it wouldn’t do us much good. Sound recording technologies, then, use these facts to create high-quality reproductions of practically anything.

Some basics of digital audio

When saving audio on a computer, cell phone, or any other device, the signal must be converted from analog to digital. This is done using an analog-to-digital converter (ADC), which takes sound waves and represents them as a series of numbers. That’s an oversimplification, but it’s the basic idea we need here.

For digital audio to work, we need a method to actually do that conversion. If we take the value of a sound wave at a particular instant and write it down, a process called “sampling,” we can then use those values to recreate the wave later. But how many times should we do this? What time intervals should we use to ensure we capture all the detail? That’s where some math known as the Nyquist-Shannon sampling theorem comes in.
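
If “writing down the value of a wave” sounds abstract, here’s a minimal Python sketch of the idea. The tone and duration are arbitrary choices for illustration; the 44,100 samples-per-second figure is the CD standard we’ll get to shortly:

```python
import numpy as np

SAMPLE_RATE = 44_100   # how many times per second we record a value (Hz)
DURATION = 0.01        # seconds of sound to capture
TONE = 1_000           # a 1kHz test tone (an arbitrary choice)

# The instants at which we "write down" the wave's value.
num_samples = int(SAMPLE_RATE * DURATION)
t = np.arange(num_samples) / SAMPLE_RATE

# The wave's value at each instant: this array of numbers is the
# digital representation an ADC hands back.
samples = np.sin(2 * np.pi * TONE * t)

print(f"{num_samples} numbers describe {DURATION}s of a {TONE}Hz tone")
# -> 441 numbers describe 0.01s of a 1000Hz tone
```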

The mathematics behind Nyquist-Shannon gets complex, but the key result is simple: sampling at a rate of at least twice the highest frequency in a signal is enough to reproduce that signal perfectly. Let’s set our ceiling slightly above the range of human hearing, at 22,000Hz, just to be safe. Some basic math shows us that 2 x 22,000Hz = 44,000Hz. Keep that number in mind; it comes up again later.
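
A quick way to see why “twice the highest frequency” is the threshold is aliasing: sampled below that rate, a high tone produces exactly the same numbers as a lower one, and the two become indistinguishable. Here’s a small sketch with arbitrary frequencies:

```python
import numpy as np

FS = 44_100           # sample rate (Hz); half of it, 22,050Hz, is the ceiling
n = np.arange(16)     # sample indices

# A 30kHz tone sits above that ceiling...
too_high = np.sin(2 * np.pi * 30_000 * n / FS)

# ...and produces exactly the same samples as a phase-flipped tone at
# 44,100 - 30,000 = 14,100Hz. After sampling, the two can't be told apart.
alias = -np.sin(2 * np.pi * 14_100 * n / FS)

print(np.allclose(too_high, alias))  # True
```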

But remember, all the digital wizardry in the world can’t change what the human ear is capable of.

The other piece of this puzzle is bit depth. It’s the number you see floating around, with 16-bit or 24-bit being the most common. Most of the trumpeting around hi-res audio tends to emphasize this number. But what is a bit? At its most basic, a bit (short for binary digit) is what computers use to represent two possible values: 0 or 1. The more bits per sample, the more distinct values you can record.
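
In concrete terms, the bit depth sets how many distinct amplitude values each sample can take, and every extra bit doubles the count:

```python
# Every extra bit doubles the number of values a sample can take.
for bits in (8, 16, 24):
    print(f"{bits}-bit audio: {2**bits:,} possible values per sample")

# 8-bit audio: 256 possible values per sample
# 16-bit audio: 65,536 possible values per sample
# 24-bit audio: 16,777,216 possible values per sample
```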

Regarding audio, 16-bit is more than enough to capture the detail needed to reconstruct sound waves later. Thanks to some historical technologies, including early digital audio tape and compact discs, we get a standard of 16-bit, 44.1kHz. You’ll notice that it’s slightly higher than the 44kHz we need, so we’ve got all the pieces required to recreate high-quality audio. If you don’t believe us, consider how much the recording industry itself panicked about digital audio of this quality. If consumers could record audio to such high standards, they worried, then piracy would be rampant.

That didn’t really happen, but it is indeed true that 16-bit, 44.1kHz audio is of outstanding quality. So why did hi-res happen, is it better, and how does it work?

Hi-res audio says hello

When the CD took off in the late 1980s and through the 1990s, it represented cutting-edge technology. It seems trivial today, but the components and computing power required to perform digital-to-analog conversion (via a DAC) in home audio products were once expensive and complex. It didn’t matter much for long, though, as prices rapidly dropped, and the listening experience was indeed outstanding.

The 1990s also saw the rise of the MP3 audio file format, and into the 2000s, we got iPods and iTunes, and even later, audio streaming services like Spotify made a splash. But as computing power became cheaper and more accessible, hi-res audio also arose from the likes of Deezer, Qobuz, Tidal, and more. That’s the short version of how we ended up with the media landscape we’re in today.

Hi-res audio is also called “Ultra-HD” audio, with 16-bit, 44.1kHz audio called “HD” or “CD-quality.” Hi-res files can reach 24 bits and sample rates of 192kHz or more. What, if anything, does that mean for your listening experience?

Higher sample rates don’t matter much

Let’s start with sample rates. We know that human hearing maxes out at 20kHz, so per Nyquist-Shannon, a sample rate just above 40kHz captures everything we can hear. That makes 16-bit, 44.1kHz more than enough to cover the entire audible range.

At a 192kHz sample rate, for instance, we could capture frequencies up to 96kHz! But our ears cannot hear frequencies that high, so it doesn’t mean much at the end of the day. Even if you reproduce those frequencies perfectly, you’ll never come close to hearing them.

But hi-res audio proponents will claim that taking more samples means more data points, resulting in less “stair-stepping” and “smoother” curves. The short answer is that Nyquist-Shannon already produces perfectly smooth audio curves, but let’s look at this claim a little more closely.

Nyquist-Shannon already lets us recreate smooth soundwaves.

Yes, an ADC produces a series of numbers with finite values, not the smooth sloping curves you’ll find in a vinyl record. However, the corresponding DAC produces nothing like a bar graph or staircase. If you want to see for yourself, plug the output of one into an oscilloscope. The result: smooth, beautiful sound waves. Only the cheapest DAC would do a one-to-one mapping of the discrete data points to particular sound values. In contrast, any consumer-grade DAC will do the Nyquist-Shannon mathematics required to perfectly reproduce the sound wave as it was recorded. You could include even more data points, but they’ll end up on the same recreated sound curve anyway. It won’t make a difference at the end of the day.
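
You can check this numerically, too. Below is a toy sketch of the ideal (Whittaker-Shannon) interpolation a DAC approximates; the tone, sample rate, and test points are arbitrary, and real DACs use efficient reconstruction filters rather than this direct sum:

```python
import numpy as np

FS = 8      # toy sample rate (Hz), kept tiny for readability
TONE = 1    # a 1Hz tone, safely below FS / 2
N = 200     # plenty of samples, so edge effects don't intrude

n = np.arange(N)
samples = np.sin(2 * np.pi * TONE * n / FS)   # the stored data points

def reconstruct(t):
    """Ideal band-limited interpolation: a sum of shifted sinc pulses."""
    return np.sum(samples * np.sinc(FS * t - n))

# Evaluate the curve *between* the stored samples: it lands back on the
# original smooth sine, not on a staircase.
for t in (10.30, 10.55, 10.80):
    print(f"t={t:.2f}s  reconstructed={reconstruct(t):+.3f}  "
          f"original sine={np.sin(2 * np.pi * TONE * t):+.3f}")
```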

So why do so many hi-res audio services show us graphs of supposedly “better” digital representations? Well, for one thing, it’s great for marketing. It took a long time just to describe all the nuances of digital audio above. It’s much faster to sell the idea of more is more than it is to explain the reality behind it all.

More bits don’t mean bigger sound

Fair enough, higher sample rates don’t really make a difference, but surely having more bits available must be good, right? Yes, but actually, no. It is technically true that capturing more data per sample leads to a broader dynamic range (the difference between the quietest and loudest sounds in a given track), but we have to consider the pesky human ear again.
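
The back-of-the-envelope math is simple: each bit of depth adds roughly 6dB of dynamic range, which yields the figures usually quoted:

```python
import math

# Each bit of depth adds about 6.02dB (20 * log10(2)) of dynamic range.
for bits in (16, 24):
    print(f"{bits}-bit: ~{bits * 20 * math.log10(2):.0f}dB of dynamic range")

# 16-bit: ~96dB of dynamic range
# 24-bit: ~144dB of dynamic range
```

For context, 16-bit’s ~96dB already spans roughly the difference between a quiet room and the threshold of pain, so the extra headroom of 24-bit has nowhere audible to go.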

As mentioned, louder sounds drown out quieter ones, so even if a file contained data for sounds across a whole range of volumes, only the loudest would be discernible to our ears. This is even more true if they are close in frequency.

More bits don’t mean much to your ears, even if you ratchet up the volume.

What if we do turn up the volume, though? That’s where more bits will surely shine, right? We’ll amp up the quiet sounds, and they’ll make it to our ears. Except the loud sounds get louder too. Plus, there’s a limit to how loud we can listen to music before we permanently damage our hearing. Even if you did crank the knob to 11, you’d somehow have to bear music that loud and have perfect, microphone-like ears. Your brain does a lot of masking, too, so even if your ears picked up the sound of a mosquito’s wings at the same moment a cannon went off in a particular piece, you’d never notice. And by then, your hearing would be wrecked.

So theoretically, yes, a 24-bit hi-res file might have more dynamic range than a 16-bit one, but you’ll never hear the difference.

Is hi-res worth the high cost?

Given all the mathematical, physical, and psychological limitations showing that hi-res audio isn’t worth much, why do more and more streaming services offer it? In short, because they can charge more for it.

A fairer answer might be that streaming services were legitimately below CD quality for a while. That makes sense: the large, lossless file formats required to send CD-quality audio across the internet eat up lots of bandwidth and storage space. But as both get cheaper, the task becomes easier. So, why not claim to be better than CDs, which are old anyway?

However, as we explored above, CD-quality is good enough, and the recording industry knows that (and has already panicked about it before). In sum, hi-res audio won’t hurt anything — except maybe your wallet — but it doesn’t exactly help.
