EVGA Launches Its First Audio Card, the Nu Audio
by Ian Cutress & Anton Shilov on January 22, 2019 1:00 PM EST
EVGA currently sells a range of products: motherboards, graphics cards, power supplies, cases, and laptops. The company has been expanding into other markets for a number of years now, and at CES 2016 we saw the beginnings of a USB audio device, born of a collaboration with a professional audio company called Audio Note. We were impressed at the time, but since then we'd heard little about the project and had assumed it had been abandoned. At CES 2019, however, EVGA introduced the result of the collaboration: its first audio card. The card uses a PCIe-to-USB controller, making it effectively an internal USB audio product.
EVGA’s Nu Audio card was designed by Audio Note, a UK-based company that develops custom audio solutions. The PCIe 2.0 x1 card implements a PCIe-to-USB controller in hardware, and is built around the XMOS xCORE-200 DSP, accompanied by Asahi Kasei Microdevices’ (AKM) AK4493 DAC, the AKM AK5572 ADC, and the Cirrus Logic CS5346 ADC. The board uses a silver- and gold-plated multilayer PCB with isolated dual ground planes for the analogue and digital circuits. Aimed at users who want cleaner sound as well as broader format support from their audio outputs, the Nu Audio card uses audio-grade capacitors and resistors carrying the Audio Note, Nichicon, WIMA, and Panasonic brands. In addition, it features switchable op-amps as well as a dedicated Maxim amplifier for headphone volume control.
The Nu Audio card is equipped with two RCA line outs for left and right speakers that can output 32-bit audio at 384 kHz, a headphone output supporting impedances between 16 and 600 Ohms, an S/PDIF out, a line in supporting 32-bit audio at 384 kHz, and a mic in supporting 24-bit audio at 192 kHz. In addition to traditional analogue and digital outputs, the card supports USB Audio Class 2.0, enabled by the ASMedia ASM1042 PCIe-to-USB bridge. Meanwhile, to ensure that the board gets enough power, it has a SATA power connector coupled with a multi-stage VRM.
Since EVGA usually targets enthusiasts, its audio card is not only outfitted with a cooling system for its heat-producing components, but is also covered by a shroud featuring 10-mode RGB lighting as well as four Audio Reactive Lighting options that synchronize the board’s lighting with its audio. The bundled software allows for full EQ tuning, as well as a dynamic response implementation. With the right software, the card can support audiophile formats such as DSD, and switch between them as required.
EVGA is now selling the card at a price of $249. We were told this is the first in a line of cards: depending on feedback on the hardware, the collaboration with Audio Note might extend into a gaming-focused design or a more professional audio input/output design.
Related Reading
- EVGA Launches B360 Micro Gaming: Its First Budget Motherboard
- EVGA Releases CLC120 CL11 AIO CPU Cooler: Simple and Affordable
- EVGA Launches SC17 1080 Laptop: Core i7-7820HK, GeForce GTX 1080, TB3
- EVGA Torq X10 Gaming Mouse Review
Source: EVGA
83 Comments
mode_13h - Wednesday, January 23, 2019 - link
Sure, there's a lot of stupidity in the audiophile community. I don't defend that. However, what you're missing is the ethos that one can only achieve the best audible performance, from an entire system, by optimizing each stage *beyond* the audible range. By definition, this can only be done through engineering, and measurement.

And your assertion that the interpolation filter doesn't matter only applies to cases where the transition band is *completely* above the audible range (i.e. content that's already oversampled).
Finally, everyone here is forgetting that Shannon's Sampling Theorem only applies to *periodic* signals. Unless you sit around and listen to sine waves (or other repeating waveforms), audio data is not truly periodic. In other words, sampling theory is merely an approximation, for audio. Oversampling at (or close to) the source gives you additional margin for this, as well.
So, if you're trying to avoid over-engineering (and let's face it - the extra data we're talking about is cheap, by modern standards) and are satisfied with something that's merely decent, then good for you. Please just don't force your values on others.
CaedenV - Tuesday, January 22, 2019 - link
@willis936
You are absolutely right. Higher sample rates are not about the bits you can't hear; they're about making things as close to analog as possible before the signal hits the analog hardware. So, let's say (for the sake of argument) that -1V is the bottom of a waveform and 1V is the top, and we have some 440 'waves per second' for an A note. Divide each second up 44,100 times and you end up with a whole series of voltage readings which can be used to reproduce that note. In digital, that is fine to reproduce something for the human ear, but to analog hardware (and especially for good hardware) this is a very long time between samples. The hardware can overshoot or undershoot the next voltage reading, which leads to weird jagged stair-stepping where there should be a nice even curve, and that introduces weird artifacts in the music.
Up-sample that to 96–192 kHz (beyond that is perhaps overkill), and now you have lots of voltages for the DAC to read, and far less room for audible errors to be introduced by literally bored hardware that is just waiting to see what comes next. It's not about creating tones that you cannot hear (though they may give dogs a more satisfying experience); it's about smoothing the transitions in the stuff you can.
rpg1966 - Wednesday, January 23, 2019 - link
Caeden, that's not right. All you really need to think about is the number of bits (which limits your dynamic range), the sample frequency (which limits your frequency response), and the filter at the end of the chain. When you start thinking in terms like "oh the stair step is too steppy", then you're not understanding how the filters work.

mode_13h - Wednesday, January 23, 2019 - link
No, @CaedenV is exactly on the right track. In the 1980s and 1990s there seemed to be a race to reach ever-higher levels of oversampling in DACs and CD players. I'm pretty sure I remember even reading about 256x oversampling, but 8x was not uncommon. I have a DAC from the late 90s with a Pacific Microsonics HDCD interface chip that used 8x oversampling.

At a core level, the output voltage of PCM DACs is latched. This introduces "stair steps", like those being described. By increasing the sample rate, the noise introduced by this process is reduced in amplitude and pushed far outside the audible range, making it more easily addressed by simple analog filters at the output.
https://en.wikipedia.org/wiki/Zero-order_hold
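The zero-order-hold effect described above can be sketched numerically. A rough back-of-the-envelope estimate in NumPy (the 1 kHz test tone and 8x oversampling factor are illustrative assumptions, and the sinc roll-off is the idealized ZOH response, not a measurement of any real DAC):

```python
import numpy as np

def zoh_image(f0, fs):
    """Frequency (Hz) and relative level (dB) of the first spectral image
    that a zero-order-hold DAC produces for a tone at f0 sampled at fs."""
    f_img = fs - f0                          # first image frequency
    # Ideal ZOH magnitude response is |sinc(f / fs)| (normalized sinc)
    rel = np.sinc(f_img / fs) / np.sinc(f0 / fs)
    return f_img, 20 * np.log10(abs(rel))

# 1 kHz tone at 44.1 kHz: the image sits at 43.1 kHz, only ~33 dB down,
# so the analog output filter has to work hard just above the audio band.
print(zoh_image(1000, 44100))
# With 8x oversampling the image moves out to ~352 kHz and is further
# attenuated, so a gentle analog filter suffices.
print(zoh_image(1000, 8 * 44100))
```

This is exactly the trade-off being argued over: oversampling doesn't change what's in the audio band, it just makes the analog filter's job easy.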
Kirby64 - Wednesday, January 23, 2019 - link
@mode_13h, you're wrong. Because analog filtering is on both sides (on the input for recording, on the output for playback), the 'stair step' you talk about doesn't ever actually exist. Yes, oversampling does technically allow you to use a crappier analog filter, but modern technology doesn't have this issue unless it's designed poorly. Once filtering is applied, the frequency will be accurately represented. This is just the fundamentals of the Nyquist-Shannon sampling theorem (https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shan...).

Watch this video: https://xiph.org/video/vid2.shtml - if you care about 'stair stepping', skip to chapter 3 specifically, which addresses that.
The creator of that video goes into why sampling rate doesn't matter beyond say, 48 kHz.
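The core claim here, that a properly band-limited signal is fully determined by its samples, can be checked numerically. A minimal Whittaker-Shannon reconstruction sketch in NumPy (the 10 kHz tone, 48 kHz rate, and evaluation point are arbitrary choices):

```python
import numpy as np

fs = 48_000                            # sample rate (Hz)
f0 = 10_000                            # in-band tone, below fs / 2
n = np.arange(4096)                    # sample indices
x = np.sin(2 * np.pi * f0 * n / fs)    # the band-limited signal, sampled

def reconstruct(t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(t - n).
    t is in units of samples; no stair steps appear anywhere."""
    return np.sum(x * np.sinc(t - n))

t = 2048.37                            # an arbitrary instant between samples
exact = np.sin(2 * np.pi * f0 * t / fs)
print(abs(reconstruct(t) - exact))     # tiny; limited only by truncating the sum
```

The residual error comes purely from truncating the infinite sinc sum, not from any stair-stepping.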
mode_13h - Wednesday, January 23, 2019 - link
*sigh* This is kid stuff, covered in the first chapter of any signal processing text.

Yes, you *do* need to band-limit the signal before sampling, on the upstream side. No argument there. Failure to do that will result in aliasing, which is impossible to remove, unless you can make some assumptions about the signal (in which case you're also not actually using the full bandwidth of the channel).
Second, he's glossing over what the device is doing to groom the output. His test would be relevant to my point if he had a PCM DAC chip on a breadboard - not testing an end product.
Finally, see my above point about sampling theorem vs. audio. He's using sine waves, since they're easy to understand and work with. The way to actually quantify the performance of a system is to pass a full-bandwidth signal through it and subtract it from (or divide it by) the original. That's how you'd be able to observe any corners being cut along the way.
Here's something a bit more worthwhile:
http://bigwww.epfl.ch/publications/unser0001.pdf
slides: https://basesandframes.files.wordpress.com/2016/05...
mode_13h - Wednesday, January 23, 2019 - link
This explains the context of the slides. They were prepared to describe the paper, but not by its author.
https://basesandframes.wordpress.com/2016/05/12/50...
Kirby64 - Thursday, January 24, 2019 - link
Are we not talking about complete systems here? I'm sure if you use some raw DAC with no filtering then weird stuff can happen. No question there. Then you aren't properly band-limiting the system.

Your above post talking about sampling theorem vs. audio is incorrect. It applies to periodic OR non-periodic signals. I can cram as much as I want within the bandwidth of a device and it should be able to be accurately reproduced. If I sample a single step waveform, the only way to actually represent it is with ripples and a decay in accordance with the sampling rate I have available. There's only one pattern of signals that would satisfy that waveform. Yes, you have to worry about intermod distortion and other non-linear effects when you start throwing more and more signal content in there; nothing is perfect, but these issues are relatively minor.
What are you actually claiming is the benefit of oversampling DAC in relation to the output voltage? Closer accuracy to the signal because there's more points as it 'approximates' it? With a filter that doesn't matter.
I'm not sure how higher bit depth or higher sampling frequency fix any of the problems with 'corners being cut'. I'm sure I can design a crap 384kHz 32-bit DAC and an amazing 16-bit 48kHz DAC; the marketing numbers don't really have anything to do with how good it actually performs.
mode_13h - Thursday, January 24, 2019 - link
I was specifically talking about what comes out of a PCM DAC, prior to analog filtering, which you incorrectly said didn't resemble a stair step.
https://en.wikipedia.org/wiki/Zero-order_hold
Regarding sampling, you're contradicting yourself. Shannon says you can't have aliasing if you stay below the Nyquist limit, but that's only for periodic waveforms. Now, you're trying to add a bunch of caveats and tell me that I can't *really* use the entire channel bandwidth? Just read the paper I linked and bring yourself up to date, circa 20 years ago.
The author also made a presentation on the same subject material, 10 years later:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=...
And what I claim is the obvious - of course the quality of the interpolation filter matters! And no, your analog output filter is not magic. It can't make a non-oversampled signal/DAC perform as well, and it doesn't make the quality of the oversampling filter irrelevant.
I suppose you've never heard of noise shaping, either.
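For anyone who hasn't met noise shaping: a toy first-order sigma-delta modulator can be sketched in a few lines (the 0.3 DC input is an arbitrary test value; real converters are higher-order and run at large oversampling ratios):

```python
import numpy as np

def sigma_delta_1bit(x):
    """First-order sigma-delta modulator: the 1-bit output's running average
    tracks the input, while quantization error is pushed to high frequencies."""
    acc = 0.0
    y = np.empty(len(x))
    for i, s in enumerate(x):
        acc += s - (y[i - 1] if i else 0.0)   # integrate the error feedback
        y[i] = 1.0 if acc >= 0 else -1.0      # 1-bit quantizer
    return y

bits = sigma_delta_1bit(np.full(10_000, 0.3))
print(bits[:12])        # a dense +1/-1 pattern...
print(bits.mean())      # ...whose average converges on the 0.3 input
```

Despite quantizing to a single bit, the feedback loop keeps the low-frequency content accurate; the quantization noise lands at high frequencies where a filter can remove it.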
> I'm sure I can design a crap 384kHz 32-bit DAC and an amazing 16-bit 48kHz DAC; the marketing numbers don't really have anything to do with how good it actually performs.
It's cheaper and easier to get good performance with an oversampled signal. That was the whole point of delta-sigma DACs, but it also holds true of PCM DACs. You know enough to have opinions, but you're no DSP engineer.
As for bit-depth, it depends on the original signal. 16-bit doesn't quite match the dynamic range of human hearing. For home listening purposes, what's interesting with 24-bit (or more) is that you can actually cut out the analog preamp and run your DAC straight into your power amp, doing your volume control in the digital domain.
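The dynamic-range numbers in this exchange are easy to verify. A small sketch (the 997-cycle test tone and mid-tread quantizer are arbitrary modelling choices) comparing measured SNR against the 6.02N + 1.76 dB rule of thumb:

```python
import numpy as np

def quantized_snr_db(bits, n=1 << 16):
    """SNR (dB) of a full-scale sine after uniform quantization to `bits` bits."""
    t = np.arange(n)
    x = np.sin(2 * np.pi * 997 * t / n)      # full-scale test tone
    step = 2.0 / (1 << bits)                 # quantizer step over [-1, 1)
    q = np.round(x / step) * step            # mid-tread quantizer
    noise = q - x
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

# Rule of thumb: SNR ~ 6.02 * bits + 1.76 dB
print(quantized_snr_db(16))   # ~98 dB, a bit short of the range of hearing
print(quantized_snr_db(24))   # ~146 dB, with headroom for digital attenuation
```

The ~48 dB gap between 16 and 24 bits is the headroom that makes digital volume control viable without eating into the audible dynamic range.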
mode_13h - Thursday, January 24, 2019 - link
I forgot to add that bit depth also affects phase accuracy. Of course, if your input is only 16-bit, then oversampling obviously won't restore accuracy that's already been lost (though it can mitigate the impact of downstream filtering).

This brings up an interesting point, however. Because the high frequencies in music tend to be fairly low amplitude, their effective bit depth is lower. To address this, CDs have a feature called pre-emphasis, which is essentially an EQ curve that boosts high frequencies during mastering, plus a flag telling the CD player to attenuate them after the DAC (although CD players with oversampling and/or higher native bit-depth DACs can do it in the digital domain).
https://en.wikipedia.org/wiki/Emphasis_(telecommun...
I have some CDs with pre-emphasis and they sound great. Excellent highs and reverb. I wish it had caught on more.
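For reference, the CD pre-emphasis mentioned above is, to my understanding, a first-order shelf defined by 50 µs and 15 µs time constants (per the IEC 60908 "Red Book" standard); a small sketch of the boost curve it implies:

```python
import numpy as np

def preemphasis_db(f, t1=50e-6, t2=15e-6):
    """Boost (dB) of the 50/15 us first-order pre-emphasis shelf at f Hz."""
    w = 2 * np.pi * f
    return 10 * np.log10((1 + (w * t1) ** 2) / (1 + (w * t2) ** 2))

for f in (1_000, 5_000, 10_000, 20_000):
    print(f, round(preemphasis_db(f), 2))   # ~0.4 dB at 1 kHz up to ~9.5 dB at 20 kHz
# De-emphasis on playback applies the inverse curve, restoring a flat
# response while the boosted highs ride above the quantization floor.
```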