This new demo presents LPCNet, an architecture that combines signal processing and deep learning to improve the efficiency of neural speech synthesis. Neural speech synthesis models like WaveNet have recently demonstrated impressive speech synthesis quality. Unfortunately, their computational complexity has made them hard to use in real-time, especially on phones. As was the case in the RNNoise project, one solution is to use a combination of deep learning and digital signal processing (DSP) techniques. This demo explains the motivations for LPCNet, shows what it can achieve, and explores its possible applications.
After more than 10 years without a desktop at home, I recently started doing some deep learning work that requires more processing power than a quad-core laptop can provide. So I went looking for a powerful desktop machine, with the constraint that it had to be quiet. I do a lot of work with audio (e.g. Opus), so I can't have a lot of noise in my office. I could have gone with just a remote machine, but sometimes it's convenient to have some compute power locally. Overall, I'm quite pleased with the result, so I'm providing details here in case anyone else needs a similar machine. It took quite a bit of effort to find a good combination of components. I don't claim any component is the optimal choice, but they're all pretty good.
First, here's what the machine looks like right now.
Now let's look at each component separately.
Dual-socket Xeon E5-2640 v4 (10 cores/20 threads each) running at 2.4 GHz. One pleasant surprise is that even under full load, the turbo is able to keep all cores running at 2.6 GHz. With just one core in use, the clock can go up to 3.4 GHz. The listed TDP is 90 W, which isn't so bad. In practice, even when fully loaded with AVX2 computations, I haven't been able to reach 90 W, so I guess it's a pretty conservative value. The main reason I went with Intel was dual-socket motherboard availability. Had I chosen to go single-socket, I would probably have picked an AMD Threadripper 1950X instead. As to why I need so many CPU cores when deep learning is all about GPUs these days: some of the recurrent neural networks I'm training need several thousand time steps, and for now CPUs seem to be more efficient than GPUs on those.
Choosing the right CPU coolers has been one of the main headaches of this build. It's very hard to rely on manufacturer specs and even reviews, because they tend to use different testing methodologies (you can only compare tests coming from the same lab). Also, comparing water cooling to air cooling is hard because of the case itself. With air cooling, everything sits inside the case, so the noise level depends on the case damping. With water cooling, the fans are (obviously) set up in holes in the case, so they benefit less from noise damping. Add to that the fact that most tests report A-weighted dB values, which underestimate the kind of low-frequency noise that pumps tend to make. In the end, I went with a pair of Noctua NH-U12DX i4 air coolers. I preferred those over water coolers based on the reviews I saw, but also on the reasoning that the fan was about the same size as those on water radiators, but at least it was inside the case.
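To see why A-weighted numbers can hide pump noise, here's a quick sketch that evaluates the standard IEC 61672 A-weighting curve (the formula is the published one; the frequencies chosen are just illustrative):

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz), per IEC 61672."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.00  # +2.00 dB normalizes the curve to 0 dB at 1 kHz

# A low-frequency pump tone is heavily discounted relative to fan noise:
for f in (20, 100, 1000):
    print(f"{f:5d} Hz: {a_weighting_db(f):6.1f} dB")
```

A 100 Hz pump tone gets roughly 19 dB of "discount" in an A-weighted figure, while 1 kHz fan noise gets none, so two components with the same dB(A) rating can sound very different.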
Got an ASUS Z10PE-D16 WS motherboard. All I can say is that it works fine and I haven't had any issue with it. Not sure how it would compare to other boards.
I have 128 GB ECC memory, split as 8x 16 GB so as to fill all 4 channels of each CPU. Nothing more to say here.
Deep Learning GPU
For now, deep learning essentially requires CUDA, which means I had to get an NVIDIA card. The fastest GPU at a "reasonable" price point for now is the 1080 Ti, so that part wasn't hard to figure out. The hard part was finding a quiet 1080 Ti-based card. I originally went with the water-cooled EVGA GeForce GTX 1080 Ti SC2 HYBRID for which I saw good reviews and which was recommended by a colleague. I went with water cooling because I was worried about the smaller fans usually found on GPUs. Unfortunately, the card turned out to be too noisy for me. The pump makes an audible low-frequency noise even when idle and the 120 mm fan that comes with the radiator is a bit noisy (and runs even when idle). So that card may end up being a good choice for some people, but not for a really quiet desktop. I replaced it with an air-cooled MSI GeForce GTX 1080 Ti GAMING X TRIO. I chose it based on both good reviews and availability. I've been pleasantly surprised by how quiet it is. When idle or lightly loaded, the fans do not spin at all, which is good since I'm not using it all the time. So far the highest load I've been able to generate on it is around 150W out of the theoretical 280W max power. Even at that relatively high load, the fans are spinning at 28% of their max speed and, although audible, they're reasonably quiet. In fact, the air-cooled MSI is much quieter under load than the EVGA when idle.
Main Video Card
To avoid wasting resources on the main GPU, I went with a separate, much smaller GPU to handle the actual display. I wanted an AMD card because of the good open-source Linux driver support. I didn't put too much thought into that one, and went with an MSI Radeon RX 560 AERO. It's probably not the quietest card out there, but considering that I don't do any fancy graphics, it's very lightly loaded, so the fan spins pretty slowly.
According to some reviews I looked at, the Corsair HX1000i appeared to be the quietest PSU out there. All I can say is that so far, even under load, I haven't even been able to get the PSU fan to spin at all, so it looks like a good choice.
The case was another big headache. It's really hard to get useful data since the amount of noise depends on what's in the case more than on the case itself. After all, cases don't cause noise, they attenuate it. In the end, I went with the Fractal Design Define XL R2, mostly due to the overall good reviews and the sound absorbing material in the panels. Again, I can't compare it to other cases, but it seems to be dampening the noise from the CPU coolers pretty effectively.
The Define XL R2 case originally came with three Silent Series R2 3-pin fans. When running at 12 V, those fans are actually pretty noisy. The case comes with a 5V/7V/12V switch, so I had the fans run on 5 V instead, making them much quieter. The downside is of course lower air flow, but it looked (kinda) sufficient. Still, I wanted to see if I could get both better air flow and lower noise by trying a Noctua NF-A14 fan. The good news is that it indeed has a better air flow/noise ratio. The not-so-good news is that it requires more than 7 V to start, so I couldn't operate it at low voltage. The best I could do was to use it on 12 V with the low-noise adapter, which is equivalent to running it at around 9 V. In that configuration, it has a noise level similar to the Silent Series R2 running at 5 V, but better air flow. That's still good, but I wish I could make it even quieter. So for now I have one Noctua fan and two Silent Series R2, providing plenty of air flow without too much noise.
Got a 1 TB SSD, which of course is completely silent. Not much more to say on that one.
For the stuff that doesn't fit on an SSD, I decided to get an actual spinning hard disk. The main downside is that it is currently the noisiest component of the system by far. It's not so much the direct noise from the hard disk as the vibrations it propagates through the entire case. Despite the disk being mounted on rubber rings, the vibrations cause very audible low-frequency noise. For now I'm mitigating the issue by having the disk spin down when I'm not using it (and I'm not using it often), but it would be nice to not have to do that. I've been considering a home-made suspension, but I haven't tried it yet. I would prefer some off-the-shelf solution, but haven't found anything sufficiently interesting yet. The actual drive I have is an 8 TB, helium-filled WD Red, but I doubt any other 5400 rpm drive would have been significantly better or worse. The only extra annoyance with the WD Red is that it automatically moves the heads every 5 seconds, which makes additional noise. Apparently they call it preemptive wear leveling.
Case Fan Positioning
I've seen many contradictory theories about how to configure the case fans. Some say you need positive pressure (more intake than exhaust), some say negative pressure (more exhaust), and some say you need to balance them. I don't pretend to settle the debate, but I can talk about what works in this machine. I decided against negative pressure because of dust issues (you don't want dust to enter through all the openings in the case), so the initial configuration was one intake at the bottom of the case, one intake at the front, and one exhaust at the back. It worked fine, but then I noticed something strange: if I just removed the exhaust fan, my CPUs would run 5 degrees cooler! That's right, two intakes and no exhaust ran cooler. I don't fully understand why, but one thing I noticed was that with the exhaust fan running, the air coming out the back was cooler, while hot air was being expelled from the holes in the 5.25" bay. I have no idea why, but clearly it was disrupting the air flow. One thing worth pointing out is that even without the exhaust fan, the CPU fans are pushing air right towards the rear exhaust, and the positive pressure in the case is probably helping them. When the CPUs are under load, there's definitely a lot of hot air coming out of the rear. One theory I have is that not having an exhaust fan means higher positive pressure, causing many openings in the case to act as exhaust (no air would be expelled if the pressure were balanced). So in the end, I added a third fan as intake at the front, further increasing the positive pressure and reducing temperature under load by another ~1 degree. As of now, when fully loading the CPUs, I get a max temperature of 55 degrees for CPU 0 and 64 degrees for CPU 1. The difference may look strange, but it's likely due to the CPU 0 fan blowing its air onto the CPU 1 fan.
This demo presents the RNNoise project, showing how deep learning can be applied to noise suppression. The main idea is to combine classic signal processing with deep learning to create a real-time noise suppression algorithm that's small and fast. No expensive GPUs required — it runs easily on a Raspberry Pi. The result is much simpler (easier to tune) and sounds better than traditional noise suppression systems (been there!).
Opus gets another major upgrade with the release of version 1.2. This release brings quality improvements to both speech and music, while remaining fully compatible with RFC 6716. There are also optimizations, new options, as well as many bug fixes. This Opus 1.2 demo describes a few of the upgrades that users and implementers will care about the most. You can download the code from the Opus website.
Over the last three years, we have published a number of Daala technology demos. With pieces of Daala being contributed to the Alliance for Open Media's AV1 video codec, now seems like a good time to go back over the demos and see what worked, what didn't, and what changed compared to the description we made in the demos.
Here's my new contribution to the Daala demo effort. Perceptual Vector Quantization has been one of the core ideas in Daala, so it was time for me to explain how it works. The details involve lots of maths, but hopefully this demo will make the general idea clear enough. I promise that the equations in the top banner are the only ones you will see!
After more than two years of development, we have released Opus 1.1. This includes:
- new analysis code and tuning that significantly improve encoding quality, especially for variable bitrate (VBR),
- automatic detection of speech or music to decide which encoding mode to use,
- surround with good quality at 128 kbps for 5.1 and usable down to 48 kbps, and
- speed improvements on all architectures, especially ARM, where decoding uses around 40% less CPU and encoding uses around 30% less CPU.
With the changes, stereo encoding now produces usable audio (of course, not high fidelity) down to about 40 kb/s, with surround 5.1 sounding usable down to 48-64 kb/s. Please give this release a try and report any issues on the mailing list or by joining the #opus channel on irc.freenode.net. The more testing we get, the faster we'll be able to release 1.1-final.
As usual, the code can be downloaded from: http://opus-codec.org/downloads/
Also of interest at the convention was the Fraunhofer codec booth. It appears that Opus is now causing them some concerns, which is a good sign. And while we're on that topic, the answer is yes :-)
We just released Opus 1.1-beta, which includes many improvements over the 1.0.x branch. For this release, Monty made a nice demo page showing off most of the new features. In other news, the AES has accepted my paper on the CELT part of Opus, as well as a paper proposal from Koen Vos on the SILK part.
Ever since we started working on Opus at the IETF, it's been a recurring theme. "You guys don't know how to test codecs", "You can't be serious unless you spend $100,000 testing your codec with several independent labs", or even "designing codecs is easy, it's testing that's hard". OK, subjective testing is indeed important. After all, that's the main thing that differentiates serious signal processing work from idiots using $1000 directional, oxygen-free speaker cable. However, just like speaker cables, more expensive listening tests do not necessarily mean more useful results. In this post I'm going to explain why this kind of thinking is wrong. I will avoid naming anyone here because I want to attack the myth of the $100,000 listening test, not the people who believe in it.
In the Beginning
Back in the 70s and 80s, digital audio equipment was very expensive, complicated to deploy, and difficult to test at all. Not everyone could afford analog-to-digital converters (ADC) or digital-to-analog converters (DAC), so any testing required using expensive, specialized labs. When someone came up with a new piece of equipment or a codec, it could end up being deployed for several decades, so it made sense to give it to one of these labs to test the hell out of it. At the same time, it wasn't too hard to do a good job in testing because algorithms were generally simple and codecs only supported one or two modes of operation. For example, a codec like G.711 only has a single bit-rate and can be implemented in less than 10 lines of code. With something that simple, it's generally not too hard to have 100% code coverage and make sure all corner cases are handled correctly. Considering the investments involved, it just made sense to pay tens or hundreds of thousands of dollars to make sure nothing blew up. This was paid for by large telcos and their suppliers, who could afford it anyway.
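To give an idea of just how simple codecs of that era were, here's a sketch of a G.711 μ-law encoder and decoder in Python (the bit layout follows the standard companding scheme; the helper names are mine):

```python
BIAS = 0x84   # 132, standard mu-law bias
CLIP = 32635  # clip level for 16-bit input samples

def ulaw_encode(sample):
    """Compress one 16-bit linear PCM sample to an 8-bit mu-law code."""
    sign = 0x80 if sample < 0 else 0
    mag = min(-sample if sample < 0 else sample, CLIP) + BIAS
    exponent = (mag >> 7).bit_length() - 1     # segment number, 0..7
    mantissa = (mag >> (exponent + 3)) & 0x0F  # 4-bit step within the segment
    return ~(sign | (exponent << 4) | mantissa) & 0xFF

def ulaw_decode(code):
    """Expand an 8-bit mu-law code back to a 16-bit linear PCM sample."""
    code = ~code & 0xFF
    mag = (((code & 0x0F) << 3) + BIAS) << ((code >> 4) & 0x07)
    return -(mag - BIAS) if code & 0x80 else mag - BIAS
```

Round-tripping a sample keeps the error within the local quantization step, which is all a logarithmic companding codec promises; there are no modes, rates, or adaptive state to test.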
Things remained pretty much the same through the 90s. When G.729 was standardized in 1995, it still only had a single bit-rate, and the computational complexity was still beyond what a PC could do in real-time. A few years later, we finally got codecs like AMR-NB that supported several bit-rates, though the number was still small enough that you could test each of them.
When we first attempted to create a codec working group (WG) at the IETF, some folks were less than thrilled to have their "codec monopoly" challenged. The first objection we heard was "you're not competent enough to write a codec". After pointing out that we already had three candidate codecs on the table (SILK, CELT, BroadVoice), created by the authors of 3 already-deployed codecs (iSAC, Speex, G.728), the objection quickly switched to testing. After all, how was the IETF going to review this work and make sure it was any good?
The best answer came from an old-time ("gray beard") IETF participant and was along the lines of: "we at the IETF are used to reviewing things that are a lot harder to evaluate, like crypto standards. When it comes to audio, at least all of us have two ears". And it makes sense. Among all the things the IETF does (transport protocols, security, signalling, ...), codecs are among the easiest to test because at least you know the criteria and they're directly measurable. Audio quality is a hell of a lot easier to measure than "is this cipher breakable?", "is this signalling extensible enough?", or "Will this BGP update break the Internet?"
Of course, that was not the end of the testing story. For many months in 2011 we were again faced with never-ending complaints that Opus "had not been tested". There was this implicit assumption that testing the final codec improves the codec. Yeah, right! Apparently, the Big-Test-At-The-End is meant to ensure that the codec is good, and if it's not then you have to go back to the drawing board. Interestingly, I'm not aware of a single ITU-T codec for which that happened. On the other hand, I am aware of at least one case where the Big-Test-At-The-End revealed something wrong. Let's look at the listening test results for the AMR-WB (a.k.a. G.722.2) codec. AMR-WB has 9 bitrates, ranging from 6.6 kb/s to 23.85 kb/s. The interesting thing with the results is that when looking at the two highest rates (23.05 and 23.85 kb/s), one notices that the 23.85 kb/s mode actually has lower quality than the lower 23.05 kb/s mode. That's a sign that something went wrong somewhere. I don't know why that was the case or what exactly happened from there, but apparently it didn't bother people enough to actually fix the problem. That's the problem with final tests: they're final.
A Better Approach
What I've learned from Opus is that it's possible to have tests that are far more useful and much cheaper. First, final tests aren't that useful. Although we did conduct some of those, ultimately their main use ends up being for marketing and bragging rights. After all, if you still need these tests to convince yourself that your codec is any good, something's very wrong with your development process. Besides, when you look at a codec like Opus, you have about 1200 possible bitrates, using three different coding modes, four different frame sizes, and either mono or stereo input. That's far more than one can reliably test with traditional subjective listening tests. Even if you could, modern codecs are complex enough that some problems may only occur with very specific audio signals.
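To put a number on that combinatorial explosion, here's a quick back-of-the-envelope count using the figures from the paragraph above (the bitrate spacing and frame-size values are illustrative; only the counts come from the text):

```python
from itertools import product

bitrates = range(6000, 6000 + 1200 * 500, 500)  # ~1200 rates; spacing is made up
modes = ("SILK-only", "hybrid", "CELT-only")    # three coding modes
frame_sizes_ms = (2.5, 5, 10, 20)               # four frame sizes (illustrative values)
channels = ("mono", "stereo")

configs = list(product(bitrates, modes, frame_sizes_ms, channels))
print(len(configs))  # 28800 operating points -- hopeless for formal listening tests
```

At even a conservative cost per condition, a lab test covering every operating point would dwarf the mythical $100,000 budget, before you even consider signal-dependent corner cases.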
The single testing approach that gave us the most useful results was also the simplest: just put the code out there so people can use it. That's how we got reports like "it works well overall, but not on this rare piece of post-neo-modern folk metal" or "it worked for all our instruments except my bass". This is not something you can catch with ITU-style testing. It's one of the most fundamental principles of open-source development: "given enough eyeballs, all bugs are shallow". Another approach was simply to throw tons of audio at it and evaluate the quality using PEAQ-style objective measurement tools. While these tools are generally unreliable for precise evaluation of a codec quality, they're pretty good at flagging files the codec does badly on for further analysis.
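The "throw tons of audio at it" workflow can be sketched like this. Note this is not PEAQ: as a stand-in I use a crude segmental-SNR proxy between original and decoded audio, which is enough to show the flag-the-outliers idea (all names here are made up):

```python
import math

def segmental_snr_db(ref, deg, frame=256):
    """Average per-frame SNR (dB) between a reference and a degraded signal."""
    snrs = []
    for i in range(0, len(ref) - frame + 1, frame):
        sig = sum(x * x for x in ref[i:i + frame])
        err = sum((x - y) ** 2 for x, y in zip(ref[i:i + frame], deg[i:i + frame]))
        if sig > 0:
            snrs.append(10 * math.log10(sig / (err + 1e-12)))
    return sum(snrs) / len(snrs) if snrs else float("inf")

def flag_bad_files(corpus, codec, threshold_db=15.0):
    """Return the files whose objective score falls below the threshold."""
    return [name for name, samples in corpus.items()
            if segmental_snr_db(samples, codec(samples)) < threshold_db]
```

Here `corpus` maps file names to sample lists and `codec` is the encode-decode round trip under test; anything flagged gets a human listen, which is exactly the triage role objective tools are good at.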
We ended up using more than a dozen different approaches to testing, including various flavours of fuzzing. In the end, when it comes to the final testing, nothing beats having the thing out there. After all, as our Skype friends would put it:
Which codec do you trust more? The codec that's been tested by dozens of listeners in a highly controlled lab, or the codec that's been tested by hundreds of millions of listeners in just about all conditions imaginable?

It's not like we actually invented anything here either. Software testing has evolved quite a bit since the 80s, and we've mainly attempted to follow best practices rather than use antiquated methods "because that's what we've always done".
We just released Opus 1.1-alpha, which includes more than one year of development compared to the 1.0.x branch. There are quality improvements, optimizations, bug fixes, as well as an experimental speech/music detector for mode decisions. That being said, it's still an alpha release, which means it can also do stupid things sometimes. If you come across any of those, please let us know so we can fix it. You can send an email to the mailing list, or join us on IRC in #opus on irc.freenode.net. The main reason for releasing this alpha is to get feedback about what works and what does not.
Most of the quality improvements come from the unconstrained variable bitrate (VBR). The 1.0.x encoder's VBR always attempts to meet its target bitrate. The new VBR code is free to deviate from its target depending on how difficult the file is to encode. In addition to boosting the rate of transients as 1.0.x does, the new encoder also boosts the rate of tonal signals, which are harder for Opus to code. On the other hand, for signals with a narrow stereo image, Opus can reduce the bitrate. What this means in the end is that some files may deviate significantly from the target. For example, someone encoding their music collection at 64 kb/s (nominal) may find that some files end up using as little as 48 kb/s, while others may use up to about 96 kb/s. However, for a large enough collection, the average should be fairly close to the target.
There are a few more ways in which the alpha improves quality. The dynamic allocation code was improved and made more aggressive, the transient detector was once again rewritten, and so was the tf analysis code. A simple thing that improves the quality of some files is the new DC rejection (3 Hz high-pass) filter. DC isn't supposed to be present in audio signals, but it sometimes is and harms quality. Finally, there are many minor improvements for speech quality (both on the SILK side and on the CELT side), including changes to the pitch estimator.
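For the curious, a first-order DC rejection filter of the kind described here can be sketched in a few lines (the 3 Hz corner matches the text; the pole value and structure are my own illustration, not the actual Opus code):

```python
import math

def dc_reject(x, fs=48000, corner_hz=3.0):
    """First-order high-pass: y[n] = x[n] - x[n-1] + a*y[n-1]."""
    a = 1.0 - 2.0 * math.pi * corner_hz / fs  # pole just inside the unit circle
    y, prev_x, prev_y = [], 0.0, 0.0
    for v in x:
        out = v - prev_x + a * prev_y
        y.append(out)
        prev_x, prev_y = v, out
    return y
```

Feed it a constant (pure DC) and the output decays to zero, while a 1 kHz tone passes through essentially untouched, which is why the filter is safe to apply unconditionally.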
Another big feature is automatic detection of speech and music. This is useful for selecting the optimal encoding mode between SILK-only/hybrid and CELT-only. Contrary to what some people think, it's not as simple as encoding all music with CELT and all speech with SILK. It also depends on the bitrate (at very low rates we'll use SILK for music, and at high rates we'll use CELT for speech). Automatic detection isn't easy, and doing it in real-time (with no look-ahead) is even harder. Because of that, the detector tends to take 1-2 seconds to react to transitions and will sometimes make bad decisions. We'd be interested in hearing about any screw-ups of the algorithm.
The new encoder can also detect the bandwidth of the input signal. This is useful to avoid wasting bits encoding frequencies that aren't present in the signal. While easier than speech/music detection, bandwidth detection isn't as easy as it sounds because of aliasing, quantization, and dithering. The current algorithm should do a reasonable job, but again, we'd be interested in hearing about any failures.
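A toy version of bandwidth detection (ignoring the aliasing, quantization, and dithering headaches mentioned above) just looks for the highest frequency with non-negligible energy and rounds up to the nearest Opus bandwidth. This is a simplified sketch, not the actual Opus algorithm:

```python
import numpy as np

# Opus audio bandwidths and their cutoff frequencies (Hz)
BANDS = [(4000, "narrowband"), (6000, "mediumband"), (8000, "wideband"),
         (12000, "superwideband"), (20000, "fullband")]

def detect_bandwidth(x, fs=48000):
    """Classify a signal by the highest spectral bin with significant energy."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    significant = spec > spec.max() * 1e-4  # crude relative noise-floor threshold
    top = freqs[significant].max()
    for cutoff, name in BANDS:
        if top <= cutoff:
            return name
    return "fullband"
```

A real detector has to track the noise floor over time rather than use a fixed relative threshold, since dither and ADC noise can make an upsampled narrowband signal look wider than it is.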
We're also releasing both version 1.0.0, which is the same code as the RFC, and version 1.0.1, which is a minor update on that code (mainly with the build system). As usual, you can get those from http://opus-codec.org/
Thanks to everyone who contributed by fixing bugs, reporting issues, implementing Opus support, testing, advocating, ... It was a lot of work, but it was worth it.
I just got back from the 84th IETF meeting in Vancouver. The most interesting part (as far as I was concerned anyway) was the rtcweb working group meeting. One of the topics was selecting the mandatory-to-implement (MTI) codecs. For audio, we proposed having both Opus and G.711 as MTI codecs. Much to our surprise, most of the following discussion was over whether G.711 was a good idea. In the end, there was strong consensus (the IETF believes in "rough consensus and running code") in favor of Opus+G.711, so that's what's going to be in rtcweb. Of course, implementers will probably ship with a bunch of other codecs for legacy compatibility purposes.
The video codec discussion was far less successful. Not only is there still no consensus over which codec to use (VP8 vs H.264), but there's also been no significant progress towards a consensus. Personally, I can't see how anyone could consider H.264 a viable option. Not only is it incompatible with open source, but it's like signing a blank check: nobody knows how much MPEG-LA will decide to charge for it in the coming years, especially for the encoder, which is currently not an issue for HTML5 (which only requires a decoder). The main argument I have heard against VP8 is "we don't know if there are patents". While this is true in some sense, the problem is much worse for H.264: not only are there tons of known patents for which we only know the licensing fees in the short term, but there's still at least as much risk when it comes to unlicensed patents (see the current Motorola v. Microsoft case).