jmvalin: (Default)
I just got the news today that LCA 2011 has accepted my talk proposal: "Opus, the Swiss Army Knife of Audio Codecs". I'll be presenting it in Ballarat, Australia in January. If there's any specific topic you'd like me to include in the talk, please let me know (by email or comment on this post).
jmvalin: (Default)
As of yesterday, the IETF audio codec requirements are published as RFC 6366. While the requirements aren't interesting by themselves (why discuss abstract requirements when you can discuss actual running code?), this is an important milestone: it's the first document published by the Working Group. It also means one less source of pointless arguments. The guidelines document is next in line and should go to IETF last call soon.

Now for the interesting part: the Opus codec itself. That's the only document that really matters. It should go to Working Group Last Call (WGLC) pretty soon, possibly within the next week or two. In the meantime, we're working on improving the clarity of the draft, cleaning up the code, and fixing the last few issues that have been reported since the first WGLC. Stay tuned.
jmvalin: (Default)
I spent last week in Quebec City at the 81st IETF meeting. The most important session there for me was the codec WG. The good news is that there was a lot of progress in that meeting. A few issues with the Opus bit-stream (e.g. padding, frame packing) were resolved, and the chairs are planning a second working group last call in four weeks. After that, if all goes well, the codec can go to IETF last call and then become an RFC.

My week at the IETF meeting was also my first week at my new job working for Mozilla. I've been hired specifically to work on Opus and other codec/multimedia development, so I should have a lot more time for that than I used to. First thing on my list: finishing the Ogg mapping for Opus and releasing an Ogg encoder and decoder.
jmvalin: (Default)
Monty has just finished a very interesting CELT demo that covers most of the techniques used in CELT and their history. It also includes a large number of audio samples, including comparisons with Vorbis and various flavours of AAC. CELT has come a long, long way in the past three years. Even in the past three months, quality has gone up significantly, to the point where it sounds better than Vorbis on many (most?) samples and is even comparable to HE-AAC at 64 kb/s. The target is to freeze the bit-stream in early January for integration within the Opus codec, but there may still be a few quality improvements we can make before then -- not to mention all the encoder-side improvements we can make even after the bit-stream is frozen.
jmvalin: (Default)
Recently, I was curious about how CELT and Vorbis differ in the way they allocate bits. CELT's bit allocation is really explicit, with a fixed number of bits per band. That's not quite the case for Vorbis, so a comparison isn't straightforward. What I did was run some audio (a mono version of the audio I used in my previous post) through Vorbis and measure the SNR as a function of frequency. By dividing the SNR by 6 dB/bit, I can get the (approximate) bit allocation. The result (smoothed a bit) is shown below for encoding quality -1 to 10.

[Figure: estimated Vorbis bit allocation as a function of frequency, for quality settings -1 through 10]

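For the curious, here's roughly what such a measurement can look like in code. This is a minimal sketch, not the actual script I used (function and parameter names are mine): accumulate per-bin signal and error power over overlapping windowed FFT frames, compute the SNR, and divide by about 6 dB per bit.

    import numpy as np

    def estimate_allocation(reference, decoded, nfft=512):
        """Approximate per-bin bit allocation from the per-bin SNR."""
        window = np.hanning(nfft)
        sig_power = np.zeros(nfft // 2 + 1)
        err_power = np.zeros(nfft // 2 + 1)
        for start in range(0, len(reference) - nfft, nfft // 2):
            ref = reference[start:start + nfft] * window
            err = ref - decoded[start:start + nfft] * window
            sig_power += np.abs(np.fft.rfft(ref)) ** 2
            err_power += np.abs(np.fft.rfft(err)) ** 2
        snr_db = 10 * np.log10(sig_power / (err_power + 1e-12))
        return snr_db / 6.02  # ~6 dB of SNR per bit of resolution
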
Now, these are the curves currently used by CELT for its bit allocation:

[Figure: the bit allocation curves currently used by CELT]

Among the differences are:
1) The Vorbis allocation lines for different rates are nearly parallel, meaning that starting from a certain allocation, bits are added or removed nearly uniformly when the bit-rate changes.
2) Vorbis allocates a lot of bits to very low frequencies, and then there is a sharp drop-off around 400 Hz.
3) In the mid-high range, the Vorbis allocation is much flatter than CELT's.

Now, I tend to trust that the Vorbis allocation has been decently tuned, so the question is whether the differences in allocation are due to fundamental differences between Vorbis and CELT, or just to bad tuning of CELT so far. I suspect there's a bit of both. I've actually created an exp_vorbis_tuning branch to find out: I took the Vorbis data and turned it into CELT bit allocation data just to see what it would do. I expected something terrible, but it actually sounds quite decent. In some circumstances it sounds a bit worse than the original CELT tuning, but I think in other cases it actually sounds better. More investigation needed...
jmvalin: (Default)
I've been doing some tuning of CELT over the past few days and thought it would be interesting to compare how the quality of CELT has evolved over the course of its development. It's easy to lose track when each change you make provides only a tiny improvement. Using this stereo reference file, I've tried encoding with a few different versions. Even though I don't normally recommend using that bit-rate for stereo, I've used 40 kb/s for the comparison because it makes the artefacts (and thus the differences) more obvious. A bit more than two years ago, this is what CELT 0.3.2 sounded like at 40 kb/s. Then version 0.5.2 improved on that, followed by the latest release, 0.8.1. And now, here's what's in the current git, to be released as 0.9.

OK, I know the quality isn't that good at such a low rate, so here's a slightly higher bit-rate. This is current git at 64 kb/s, compared to G.719 at the same rate. I'm curious to hear comments about how CELT does compared to G.719 because we haven't done any formal comparison yet on music.

Even at 64 kb/s, the artefacts are generally audible, even though they're usually no longer annoying. They start becoming less audible at 80 kb/s, as you can hear, and the quality keeps going up all the way to 256 kb/s or even higher.
jmvalin: (Default)
I mentioned in my previous post that much technical work was done while at the IETF meeting. First, it's always good to have other people looking at your code, and meeting face to face is the best way to actually explain your code to others. The first thing that happened while Tim was looking at my code was that he found much simpler, closed-form ways to compute probability distributions I had been computing iteratively. The next thing that happened was that while I was trying to explain some bit allocation detail to him, I just couldn't figure out why there was a division by two in the bit allocation of the band split. The explanation was simple: we just shouldn't be dividing by two. That resulted in an easy (though small) increase in quality.

Another CELT-related topic that we were finally able to investigate further is the allocation of bits between the fine energy (gain) and the PVQ codebook (shape). There was a mismatch between the code and the theoretical analysis we had. After running actual calculations on (Laplacian) random data, Tim found that the data matched the theory almost perfectly. The only problem is that PQevalAudio (an objective quality measurement tool) disagrees with the theory as to what the optimal allocation is, and it's very hard to tell which one is really optimal just by listening, so this is still not fully resolved.

The last thing we worked on with Tim, which is still ongoing, is optimising the pdfs used by the range coder for coarse energy encoding. There may be a few bits we can save there, so it's worth trying.
jmvalin: (Default)
Here's good news from the codec Working Group meeting that was held on Monday. Koen Vos and I presented the prototype codec draft, including the results of an informal MUSHRA test (see slide 8). The bottom line is that the hybrid codec running with full audio bandwidth (48 kHz) at 32 kb/s significantly out-performed all other codecs under test, including BV32, SILK-WB, CELT alone and G.719. For the first three, this is hardly surprising: BV32 and SILK were using "wideband" (i.e. bandlimited at 7 kHz) audio, which just cannot match the bandwidth of the hybrid codec, and CELT was never designed for 32 kb/s and has annoying artefacts at that rate. G.719 was the closest contender in that test, but still had coding noise that was easily noticeable and relatively annoying. By contrast, several of the listeners had a very hard time telling the hybrid codec from the original.

Following the presentation, the chairs decided to take a hum and there was "rough consensus" in the room for adopting the proposed codec as the baseline codec and thus adopting the draft as a working group document. This still has to be confirmed on the mailing list, but at least things are looking good. This doesn't mean the codec will be accepted as is, but it's a good starting point from which we can keep improving. The rest of the meeting was a lot of discussions on the requirements and the testing, which I'm sure will be better summarized in the minutes.

Other than that, the most useful part of this IETF meeting was having Koen Vos, Timothy Terriberry and me in the same place. We managed to get a lot of technical work done -- both conceptual and actual code. More on that later.
jmvalin: (Default)
It's been a while since the last time I discussed CELT, so at last, here's an update. A while ago, I was working on a low-complexity "profile" of CELT. The idea is to disable the use of the pitch predictor, which is quite costly in terms of complexity. To help speed things up, I also changed the allocator to do the conversion from bits to pulses one band at a time instead of doing it jointly for all bands at once. This decreases the complexity while making the allocation a bit less optimal -- in theory. In practice, it means that at higher rates, where bands require a large number of bits, the encoding can actually be more efficient because no bits are wasted. Because of that, I was able to replace all of CELT's 64-bit arithmetic with 32-bit operations. On top of that, Timothy (derf) managed to -- again -- save some computation in the pulse encoding. The result is that in low-complexity mode, it takes about 1% CPU to encode and decode a 44.1 kHz mono stream at 64 kbit/s (on my 2 GHz box).
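
As an aside, here's the flavour of per-band bits-to-pulses conversion involved. This is a hypothetical sketch, not CELT's actual allocator: V(N, K), the number of N-dimensional integer vectors whose absolute values sum to K, obeys a simple recurrence, and for each band we can pick the largest K whose codebook fits in the band's bit budget.

    import math
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def pvq_size(n, k):
        """V(N, K): count of N-dim integer vectors with sum(|x_i|) == K."""
        if k == 0:
            return 1
        if n == 0:
            return 0
        return pvq_size(n - 1, k) + pvq_size(n, k - 1) + pvq_size(n - 1, k - 1)

    def pulses_for_bits(n, bits):
        """Largest pulse count K whose codebook needs at most 'bits' bits."""
        k = 0
        while math.log2(pvq_size(n, k + 1)) <= bits:
            k += 1
        return k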

Here's what lies ahead now. I'd like to slowly work towards freezing the bit-stream, but there are a few things I want to do before even thinking about a freeze:

- Dynamic bit allocation
Right now, the bit allocation in each band remains about the same for every frame. I'd like to change that and allow more bits in the regions of the spectrum that are hard to encode at any given time. It's not as easy as it looks, because 1) you need to figure out the best allocation based on psychoacoustics, and 2) you need to *encode* the allocation information compactly enough that it doesn't waste everything you saved from the dynamic allocation. So far, my attempts at 1) haven't been very successful.

- Folding decision
To prevent "birdie" artefacts, we use a certain amount of spectral folding that acts as a noise floor. In most cases, this improves quality, but for very tonal signals (e.g. glockenspiel), it transforms a pure tone into noise, which is annoying. So I'd like to be able to turn that feature on or off based on the data, but again, it's not simple.

- Stereo coupling
CELT already does stereo. It does it by encoding the energy independently for each channel and doing (sort of) M-S encoding of the "residual". This works, but probably doesn't save much compared to using two mono streams. So I want to see how it can be improved. There's already some (disabled) code to do intensity stereo, but maybe there's more that can be done; a rough mid-side sketch follows at the end of this post.

Of course, I only have a vague idea of how to do the three things I listed above, so suggestions are welcome.
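
On the stereo coupling point, the basic mid-side idea looks something like this. A toy sketch under my own conventions, not the actual CELT code: rotate the left/right band vectors into mid/side, which concentrates the energy of correlated channels into the mid vector.

    import numpy as np

    def ms_couple(left_band, right_band):
        """Orthonormal mid-side transform of two band vectors."""
        mid = (left_band + right_band) / np.sqrt(2.0)
        side = (left_band - right_band) / np.sqrt(2.0)
        return mid, side

    def ms_split(mid, side):
        """Inverse transform, back to left/right."""
        return (mid + side) / np.sqrt(2.0), (mid - side) / np.sqrt(2.0)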
jmvalin: (Default)
Before reading this, I recommend reading part 1 and part 2. As I explained in part 1, CELT achieves really low latency by using very short MDCT windows. In the current setup, we have two 256-sample overlapping (input) MDCT windows per frame. The reason for not using a single 512-sample MDCT instead is latency (the look-ahead of the MDCT is shorter). With that setup, we get 256 output samples per frame to encode (128 per MDCT window). Now, at 44.1 kHz, it means a resolution of 172 Hz, not to mention the leakage. That's far from enough to separate female pitch harmonics, much less male ones. To the MDCT, a periodic voice signal thus looks pretty much like noise, with no clear structure that can be used to our advantage.

To work around the poor MDCT resolution, we introduce a pitch predictor. Instead of trying to extract the structure from a single (small) frame, the pitch predictor looks outside the current frame (in the past of course) for similar patterns. Pitch prediction itself is not new. Most speech codecs (and all CELP codecs, including Speex) use a pitch predictor. It usually works in the excitation domain, where we find a time offset in the past (we use the decoded signal because the original isn't available to the decoder) that looks similar to the current frame. The time offset (pitch period) is encoded, along with a gain (the prediction gain). When the signal is highly periodic (as is often the case with voice), the gain is close to 1 and the error after the prediction is small.

Unlike CELP, CELT doesn't operate in the time domain, so doing pitch prediction is a bit trickier. What we need to do is find the offset in the time domain, and then apply the MDCTs (remember we have two MDCT windows per frame) and do the rest in the frequency domain. Another complication is the fact that periodicity is generally only present at lower frequencies. For speech, the pitch harmonics tend to go down (compared to the noisy part) after about 3 kHz, with very little present past 8 kHz. Most CELP codecs only have a single gain that is applied throughout the entire frame (across all frequencies). While Speex has a 3-tap predictor that allows a small amount of control on the amount of gain as a function of frequency, it's still very basic. Working in the frequency domain on the other hand, allows a great deal of flexibility. What we do is apply the pitch prediction only up to a certain frequency (e.g. 6 kHz) and divide the rest in several (e.g. 5) bands. For the example from part 2 (corresponding to mode1 of the 0.0.1 release), we use the following bands for the pitch (different from the bands on which we normalise energy):

{0, 4, 8, 12, 20, 36}

Another particularity of the pitch predictor in CELT (unlike any other algorithm I know of) is that the pitch prediction is computed on the normalised bands. That is, we apply the energy normalisation to both the current signal (X) and the delayed signal (P, the pitch prediction from the past). Because of that, the pitch gain can never exceed unity, which is a nice property when it comes to keeping things stable despite transmission losses. Despite a maximum value of one in the normalised domain, the "effective value" (not normalised) can be greater than one when the energy is increasing, which is the desired effect. The pitch gain for band i is computed simply as g_i = <X_i, P_i>, where <,> is the inner product and X_i is the sub-vector of X that corresponds to band i (same for P_i).
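
In code, the gain computation is just a few inner products. A small sketch using the pitch bands from the example above; for simplicity it normalises the sub-vectors within each pitch band (CELT normalises on the energy bands, but the idea is the same), which is what bounds each gain by one.

    import numpy as np

    PITCH_BANDS = [0, 4, 8, 12, 20, 36]  # pitch band edges from above

    def pitch_gains(X, P, bands=PITCH_BANDS):
        """g_i = <X_i, P_i> on unit-norm sub-vectors, so |g_i| <= 1
        by Cauchy-Schwarz."""
        gains = []
        for lo, hi in zip(bands[:-1], bands[1:]):
            x = X[lo:hi] / (np.linalg.norm(X[lo:hi]) + 1e-12)
            p = P[lo:hi] / (np.linalg.norm(P[lo:hi]) + 1e-12)
            gains.append(float(np.dot(x, p)))
        return gains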

Here's what the distribution of the gains looks like for each band:

[Figure: distribution of the pitch gains for each band]

It's clear from the figure above that the lower bands (lower frequencies) tend to have much higher pitch gains. Because of that, a single gain for all the bands wouldn't work very well. Once the gains are computed, they need to be encoded efficiently. Again, naive scalar quantisation, encoding each gain separately (using 3 or 4 bits each), would be a bit wasteful. So far, I've been using a trained (non-algebraic) vector quantiser (VQ) with 32 entries, which means a total of 5 bits for all gains. The advantage of VQ for that kind of data is that it eliminates all redundancy, so it tends to be more efficient. There are a few disadvantages as well: trained VQ codebooks are not as flexible, and can end up taking too much space when there are many entries (I don't think 32 entries is enough for 5 gains).

The last point to address about the pitch predictor is calculating the pitch period. We could try all delays, apply the MDCTs, compute the gains for each, and at the end decide which is best. Unfortunately, the computational cost would be huge. Instead, it's easier to do it in "open loop", just like in Speex (and many other CELP codecs). We compute the generalised cross-correlation (GCC) in the frequency domain (cheaper than computing it in the time domain). The cross-spectrum (before computing the IFFT) is weighted by an approximation of the psychoacoustic masking curve so that each band contributes to the result (instead of having the lower frequencies dominate everything else).
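
To make the idea concrete, here's a naive time-domain version of the open-loop search. It's a sketch only: the real thing computes the correlation via FFT with the psychoacoustic weighting described above, which is much cheaper.

    import numpy as np

    def open_loop_pitch(past, frame, min_period, max_period):
        """Return the delay whose past segment best matches the current
        frame (normalised cross-correlation). Assumes min_period >=
        len(frame), so every candidate lies entirely in the past."""
        n = len(frame)
        best_t, best_score = min_period, -np.inf
        for t in range(min_period, max_period + 1):
            cand = past[len(past) - t : len(past) - t + n]
            score = np.dot(cand, frame) / np.sqrt(np.dot(cand, cand) + 1e-12)
            if score > best_score:
                best_t, best_score = t, score
        return best_t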

Now the results: how much benefit does pitch prediction give? Quite a bit actually -- hear for yourself. Here's the same speech sample encoded with and without pitch prediction. Even on music, which is not always periodic, pitch prediction can help a bit, though not as much. I think there's potential to do better on music. There are a few leads I'd like to investigate (and again, I'm open to ideas):
  • Using two pitch periods
  • Frequency-domain prediction
Feel free to ask questions below in the (likely) case something's not clear.
jmvalin: (Default)
As mentioned in my previous post, one of the main ideas in CELT is to divide the signal into bands and directly encode the energy in each band. There are several reasons for that. First, the ear is generally more sensitive to the energy in a frequency band than to the exact details of where that energy is. This is especially true at higher frequencies, where we sometimes only need to get the rough shape of the spectrum right to get decent quality. A second reason is that it is convenient to separate the signal into energy and "details", just like CELP codecs (such as Speex) split the signal into a filter and an excitation, or the way Vorbis uses a "floor". In CELT, we go one step further and actually divide the data in each band by the band's energy, constraining each band to have unit magnitude (\sum_i x_i^2 = 1). Once a band has been normalised, its magnitude will always be equal to 1, no matter what happens to it. Any processing/encoding/mutilating we do to it needs to preserve that unit magnitude.

Ideally, the width of each band should be roughly one critical band. In practice, there isn't much point in having a single frequency bin per critical band, so although the ear has roughly 25 critical bands, we only use about 15-20 in CELT. Here's an example using 256-sample MDCTs (128 output samples) and 15 bands. The band boundaries are:

{0, 2, 4, 6, 8, 12, 16, 20, 24, 28, 36, 44, 52, 68, 84, 116, 128}

Using this, band number 0 includes samples 0 and 1, while band number 14 includes samples 84 to 115. The remaining samples (116-127) are just discarded because they are outside the ear's range (a 44.1 kHz sampling rate is assumed here).
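
As a concrete illustration, here's what the band splitting and normalisation amount to, as a toy sketch (not the actual CELT code) using the boundaries above:

    import numpy as np

    BANDS = [0, 2, 4, 6, 8, 12, 16, 20, 24, 28, 36, 44, 52, 68, 84, 116, 128]

    def normalise_bands(mdct):
        """Split a 128-sample MDCT output into bands, returning each
        band's energy and its unit-norm "shape" vector."""
        energies, shapes = [], []
        for lo, hi in zip(BANDS[:-1], BANDS[1:]):
            band = mdct[lo:hi]
            energy = np.sqrt(np.sum(band ** 2)) + 1e-10
            energies.append(energy)
            shapes.append(band / energy)
        # The last range (samples 116-127) is outside the ear's range
        # at 44.1 kHz and is simply discarded.
        return energies[:-1], shapes[:-1]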

Now, the first thing we need to do is encode the energy of each band in an efficient way. The ear is more sensitive to lower frequencies, so these will need to be encoded with better resolution. Of course, we use the log (dB) domain, and we add a small value (equivalent to -10 dB) just to prevent overflows when taking the log. In this example, we use a quantisation interval of 0.75 dB for the lowest band, increasing linearly to 4.25 dB for the highest band. Doing naive quantisation/encoding over a fixed range would require a prohibitive number of bits (>100 bits per frame) and is thus not an option. Measuring the ideal entropy (assuming a perfect probability model for the data) for some speech and music samples gives us an average of 71 bits per frame. That's still expensive, considering we're going to encode around 200 frames per second.
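
In code, the log conversion and band-dependent quantisation step might look like this. A small sketch matching the numbers above; it assumes energies are normalised so that a value of 1 corresponds to 0 dB, making the -10 dB floor equal to 0.1.

    import numpy as np

    def quantise_band_energies(energies, nbands=15):
        """Quantise per-band log energies, with step sizes growing
        linearly from 0.75 dB (band 0) to 4.25 dB (band 14)."""
        log_e = 10 * np.log10(np.asarray(energies) + 0.1)  # 0.1 = -10 dB floor
        steps = np.linspace(0.75, 4.25, nbands)
        return np.round(log_e / steps).astype(int)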

The only way to further reduce the number of bits used for energy quantisation is to eliminate redundancy. Energy usually doesn't vary that much from one frame to the next, so we can use a time-domain predictor of the form P(z) = 1 - alpha*z^-1. That means we subtract from the current energy alpha times the previous energy (we're already in the log domain). Here's what the entropy per frame looks like (as a function of alpha) if we use that predictor:

[Figure: entropy per frame as a function of the prediction coefficient alpha]

That's already much better. As we increase the prediction coefficient (alpha) from 0 to 1, we can reduce the entropy from 71 bits down to around 45 bits, a 26-bit improvement. Unfortunately, using alpha=1 for prediction isn't practical, because it would mean that any transmission error (e.g. a lost packet) would propagate through time with no attenuation (even 0.95 would take too long to decay). A value of alpha around 0.7 is a nice tradeoff between redundancy reduction and limited error propagation. That's 52 bits per frame. However, we're not done eliminating redundancy yet: there's still correlation across the bands within the same frame. This time, we can use any predictor we like, because a frame either arrives completely or not at all. So we use a second predictor, Q(z) = (1 - z^-1)/(1 - beta*z^-1). With that second predictor, the entropy goes down again:

[Figure: entropy per frame as a function of beta, for alpha=0.7]

With alpha=0.7 and beta=0.5, we have just under 44 bits of entropy. Much better than the 71 bits we started from, and even better than the first predictor alone with alpha=1. Of course, that entropy value is optimistic, because it assumes a perfect probability model and that the prediction isn't degraded by quantisation.
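
Putting the two predictors together looks roughly like this. A schematic sketch: the real codec predicts from the quantised values so that encoder and decoder stay in sync, but the filter structure is the same.

    import numpy as np

    ALPHA, BETA = 0.7, 0.5

    def coarse_energy_residual(log_e, prev_log_e, alpha=ALPHA, beta=BETA):
        """Apply P(z) = 1 - alpha*z^-1 across time, then
        Q(z) = (1 - z^-1)/(1 - beta*z^-1) across bands. The output is
        what gets entropy-coded."""
        r = log_e - alpha * prev_log_e       # time-domain prediction residual
        out = np.empty_like(r)
        prev_r = prev_out = 0.0
        for b in range(len(r)):              # recursion across the bands
            out[b] = r[b] - prev_r + beta * prev_out
            prev_r, prev_out = r[b], out[b]
        return out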

For encoding, it's not very practical to use the actual probability model, because that would require storing the probability of each value of each band (and for each bit-rate, if we change the resolution). However, it turns out that the distributions are somewhere between a Gaussian and a Laplacian. Although they are actually closer to Gaussian, we use a Laplacian model because it reduces the spikes in bit-rate (a Gaussian would significantly underestimate the probability of extreme values). Despite the rough approximation, the average actual encoding rate for all 15 bands is 46 bits per frame. That's just 2 bits worse than the theoretical best case using that predictor. Not bad at all.
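
For reference, the ideal code length under a discrete Laplacian model is easy to write down. A sketch with a hypothetical scale parameter (in practice it would be fitted per band):

    import numpy as np

    def laplace_bits(x, b=3.0):
        """Ideal code length (bits) of an integer residual x under a
        discrete Laplacian P(x) = C * r^|x| with r = exp(-1/b)."""
        r = np.exp(-1.0 / b)
        c = (1.0 - r) / (1.0 + r)   # normalisation over all integers
        return -np.log2(c * r ** abs(x))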

I've also played with the DCT (for intra-frame redundancy) without getting better results, mainly because it's harder to control the error in each band. Still, there may be better ways than what I've done so far to reduce the bit-rate for the energy. I'm open to ideas/suggestions on that.

Updated: Fixed the definition of the example bands.
