jmvalin: (Default)
As of yesterday, the IETF audio codec requirements are published as RFC 6366. While the requirements aren't by themselves interesting (why discuss abstract requirements when you can discuss actual running code?), it's an important milestone in that it's the first document published by the Working Group. It also means one less source of pointless arguments. The guidelines document is now next in line and should go to IETF last call soon.

Now for the interesting part: the Opus codec itself. That's the only document that really matters. It should go to Working Group Last Call (WGLC) pretty soon (possibly within the next week or two). In the meantime, we're working on improving the clarity of the draft, cleaning up the code and fixing the last few issues that have been reported since the first WGLC. Stay tuned.
jmvalin: (Default)
I spent my last week in Quebec City at the 81st IETF meeting. The most important meeting there for me was the codec WG. The good news is that there's been a lot of progress in that meeting. A few issues with the Opus bit-stream (e.g. padding, frame packing) were resolved and the chairs are planning a second working group last call in four weeks. After that, if all goes well, the codec can go to IETF last call and then RFC.

My week at the IETF meeting was also my first week at my new job working for Mozilla. I've been hired specifically to work on Opus and other codec/multimedia development, so I should have a lot more time for that than I used to. First thing on my list: finishing the Ogg mapping for Opus and releasing an Ogg encoder and decoder.
jmvalin: (Default)
Monty has just finished a very interesting CELT demo that covers most of the techniques used in CELT and their history. It also includes a large number of audio samples, including comparisons with Vorbis and various flavours of AAC. CELT has come a long, long way in the past three years and even in the past three months, quality has gone up significantly, to the point where it sounds better than Vorbis on many (most?) samples and is even comparable to HE-AAC at 64 kb/s. The target is to freeze the bit-stream in early January for integration within the Opus codec, but there may still be a few quality improvements we can make before that -- not to mention all the encoder-side improvements we can make even after the bit-stream is frozen.
jmvalin: (Default)
I've been doing some tuning of CELT over the past few days and thought it would be interesting to compare how the quality of CELT has evolved over the course of its development. It's easy to lose track when each change you make provides only a tiny improvement. Using this stereo reference file, I've tried encoding with a few different versions. Even though I don't normally recommend using that bit-rate for stereo, I've used 40 kb/s for the comparison because it makes the artefacts (and thus the differences) more obvious. A bit more than two years ago, this is what CELT 0.3.2 sounded like at 40 kb/s. Then there was version 0.5.2, which improved on that, followed by the latest release, 0.8.1. And now, here's what's in the current git, to be released as 0.9.

OK, I know the quality isn't that good at such a low rate, so here's a slightly higher bit-rate. This is current git at 64 kb/s, compared to G.719 at the same rate. I'm curious to hear comments about how CELT does compared to G.719 because we haven't done any formal comparison yet on music.

Even at 64 kb/s, the artefacts are generally audible, even though they're usually no longer annoying. They start being less audible at 80 kb/s, as you can hear, and then the quality continues going up all the way to 256 kb/s or even higher.
jmvalin: (Default)
I mentioned in my previous post that much technical work was done while at the IETF meeting. First, it's always good to have other people looking at your code, and meeting face to face is the best way to actually explain your code to others. The first thing that happened while Tim was looking at my code was he found much simpler ways (closed-form) to compute probability distributions I was computing in an iterative manner. The next thing that happened was that while I was trying to explain to him some bit allocation detail, I just couldn't figure out why there was a division by two in the bit allocation of the band split. The explanation was simple: we just shouldn't be dividing by two. That resulted in an easy (though small) increase in quality.

Another CELT related topic that we were finally able to investigate more is allocation of the bits between the fine energy (gain) and the PVQ codebook (shape). There was a mismatch between the code and the theoretical analysis we had. After actual calculations based on (Laplacian) random data, Tim found that it matched the theory almost perfectly. The only problem is that PQevalAudio (objective quality measurement) disagrees with the theory as to what the optimal allocation is. The problem is that it's very hard to tell which one is really optimal just by listening, so this is still not fully resolved.

The last thing we've worked on (with Tim) that's still ongoing is optimising the pdfs used by the range coder for coarse energy encoding. There may be a few bits we can save there, so it's worth trying.
jmvalin: (Default)
Here's good news from the codec Working Group meeting that was held on Monday. Koen Vos and I presented the prototype codec draft, including the results of an informal MUSHRA test (see slide 8). The bottom line is that the hybrid codec running with full audio bandwidth (48 kHz) at 32 kb/s significantly out-performed all other codecs under test, including BV32, SILK-WB, CELT alone and G.719. For the first three, this is hardly surprising: BV32 and SILK were using "wideband" (i.e. bandlimited at 7 kHz) audio, which just cannot match the bandwidth of the hybrid codec, and CELT was just never designed for 32 kb/s and has annoying artifacts at that rate. As for G.719, it was the closest contender in that test, but it still had coding noise that was easily noticeable and fairly annoying. On the other hand, several of the listeners had a very hard time telling the hybrid codec from the original.

Following the presentation, the chairs decided to take a hum and there was "rough consensus" in the room for adopting the proposed codec as the baseline codec and thus adopting the draft as a working group document. This still has to be confirmed on the mailing list, but at least things are looking good. This doesn't mean the codec will be accepted as is, but it's a good starting point from which we can keep improving. The rest of the meeting was a lot of discussions on the requirements and the testing, which I'm sure will be better summarized in the minutes.

Other than that, the most useful part of this IETF meeting was having Koen Vos, Timothy Terriberry and me in the same place. We managed to get a lot of technical stuff done -- both conceptual and actual code. More on that later.
jmvalin: (Default)
It's been a while since the last time I discussed CELT, so at last, here's an update. A while ago, I was working on a low-complexity "profile" of CELT. The idea is to disable the use of the pitch predictor, which is quite costly in terms of complexity. To help speed things up, I also changed the allocator to do the conversion from bits to pulses one band at a time instead of doing it jointly for all bands at once. This decreases the complexity, while making the allocation a bit less optimal -- in theory. In practice, it means that for higher rates where bands require a large number of bits, the encoding can actually be more efficient because no bits are wasted. Because of that, I was able to replace all the 64-bit arithmetic in CELT with 32-bit operations. On top of that, Timothy (derf) managed to -- again -- save some computation in the pulse encoding. The result is that in low-complexity mode, it takes about 1% CPU to encode and decode a 44.1 kHz mono stream at 64 kbit/s (on my 2 GHz box).

Here's what lies ahead now. I'd like to slowly work towards freezing the bit-stream, but there are a few things I want to do before even thinking about a freeze:

- Dynamic bit allocation
Right now, the bit allocation in each band remains about the same for every frame. I'd like to change that and allow more bits in the regions of the spectrum that are hard to encode at any given time. It's not as easy as it looks because: 1) you need to figure out the best allocation based on psychoacoustics, and 2) you need to *encode* the allocation information compactly enough that it doesn't waste all you've saved from the dynamic allocation. So far, my attempts at 1) haven't been very successful.

- Folding decision
To prevent "birdie" artefacts, we use a certain amount of spectral folding that acts as a noise floor. In most cases, this improves quality, but for very tonal signals (e.g. glockenspiel), it transforms a pure tone into noise, which is annoying. So I'd like to be able to turn that feature on or off based on the data, but again, it's not simple.

- Stereo coupling
CELT already does stereo. It does it by encoding the energy independently for each channel and doing (sort of) M-S encoding of the "residual". This works, but probably doesn't save much compared to using two mono streams. So I want to see how it can be improved. There's already some (disabled) code to do intensity stereo, but maybe there's more that can be done.
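For context, plain mid-side (M-S) coupling is just an orthonormal rotation of the two channels. Here's a minimal NumPy sketch of the idea (the function names and test signal are made up for illustration; CELT's actual residual coupling differs):

```python
import numpy as np

def ms_encode(left, right):
    """Orthonormal mid-side transform: mid holds the common content,
    side holds the inter-channel difference."""
    return (left + right) / np.sqrt(2.0), (left - right) / np.sqrt(2.0)

def ms_decode(mid, side):
    return (mid + side) / np.sqrt(2.0), (mid - side) / np.sqrt(2.0)

rng = np.random.default_rng(2)
L = rng.standard_normal(16)
R = 0.8 * L + 0.2 * rng.standard_normal(16)   # strongly correlated channels
M, S = ms_encode(L, R)
L2, R2 = ms_decode(M, S)
assert np.allclose(L, L2) and np.allclose(R, R2)  # perfect reconstruction
# Correlated channels concentrate the energy in the mid channel,
# which is what a coupled coder can exploit:
assert np.sum(M**2) > np.sum(S**2)
```

The point of the rotation is that for typical (correlated) stereo material, most of the energy ends up in the mid channel, so the side channel can be coded with far fewer bits than a second independent mono stream.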

Of course, I only have a vague idea of how to do the three things I listed above, so suggestions are welcome.
jmvalin: (Default)
I've been conducting a listening test for a paper on the CELT codec. I've been comparing it to AAC-LD, G.722.1C (aka Siren14) and MP3. Here are the results for the 48 kbit/s MUSHRA test (95% confidence intervals):


And here are the results for the 64 kbit/s MUSHRA test (95% confidence intervals):


Considering that I was just hoping CELT wouldn't be too much worse than these codecs, it's a pleasant surprise. That's because the version of CELT I tested had a latency of 8.7 ms, while the latency of AAC-LD was 34.8 ms (I know it's possible to get down to 20 ms, but the Apple implementation doesn't do it), G.722.1C was 40 ms and MP3 (LAME) was probably way above 100 ms.

In the graphs above, the error bars don't account for the fact that the MUSHRA test is paired, so there are more statistically significant results than are apparent. Basically, CELT and AAC-LD come out ahead of both G.722.1C and MP3 in both tests. CELT comes out ahead of AAC-LD at 48 kbit/s and the two are tied (i.e. no statistically significant difference could be observed) at 64 kbit/s.

Despite those results, I still think CELT can do better. Among the things I'd like to try once I'm done with the paper:
  • Add a psycho-acoustic mode and start changing the bit allocation based on the frequency content
  • Do lots of tuning
  • Do something to prevent time smearing of impulses (not TNS)
  • Encoding (or guessing) the spectral tilt in each band
  • Better stereo support
jmvalin: (Default)
Before reading this, I recommend reading part 1 and part 2. As I explained in part 1, CELT achieves really low latency by using very short MDCT windows. In the current setup, we have two 256-sample overlapping (input) MDCT windows per frame. The reason for not using a single 512-sample MDCT instead is latency (the look-ahead of the MDCT is shorter). With that setup, we get 256 output samples per frame to encode (128 per MDCT window). Now, at 44.1 kHz, it means a resolution of 172 Hz, not to mention the leakage. That's far from enough to separate female pitch harmonics, much less male ones. To the MDCT, a periodic voice signal thus looks pretty much like noise, with no clear structure that can be used to our advantage.
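As a quick sanity check on those numbers (a trivial sketch; the 44.1 kHz rate and 256-sample window come from the setup above):

```python
# Frequency resolution of a 256-sample (input) MDCT at 44.1 kHz.
# Each window produces 128 output bins covering 0..fs/2,
# so each bin spans fs/256 Hz.
fs = 44100.0                   # sampling rate (Hz)
window = 256                   # MDCT input window length
bins = window // 2             # 128 output samples per window
resolution = (fs / 2) / bins   # Hz per bin

print(bins, resolution)  # 128 bins, ~172.3 Hz each
```

At ~172 Hz per bin, a 100-200 Hz pitch and its harmonics simply can't be resolved, which is what motivates the pitch predictor below.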

To work around the poor MDCT resolution, we introduce a pitch predictor. Instead of trying to extract the structure from a single (small) frame, the pitch predictor looks outside the current frame (in the past of course) for similar patterns. Pitch prediction itself is not new. Most speech codecs (and all CELP codecs, including Speex) use a pitch predictor. It usually works in the excitation domain, where we find a time offset in the past (we use the decoded signal because the original isn't available to the decoder) that looks similar to the current frame. The time offset (pitch period) is encoded, along with a gain (the prediction gain). When the signal is highly periodic (as is often the case with voice), the gain is close to 1 and the error after the prediction is small.

Unlike CELP, CELT doesn't operate in the time domain, so doing pitch prediction is a bit trickier. What we need to do is find the offset in the time domain, and then apply the MDCTs (remember we have two MDCT windows per frame) and do the rest in the frequency domain. Another complication is the fact that periodicity is generally only present at lower frequencies. For speech, the pitch harmonics tend to go down (compared to the noisy part) after about 3 kHz, with very little present past 8 kHz. Most CELP codecs only have a single gain that is applied throughout the entire frame (across all frequencies). While Speex has a 3-tap predictor that allows a small amount of control on the amount of gain as a function of frequency, it's still very basic. Working in the frequency domain on the other hand, allows a great deal of flexibility. What we do is apply the pitch prediction only up to a certain frequency (e.g. 6 kHz) and divide the rest in several (e.g. 5) bands. For the example from part 2 (corresponding to mode1 of the 0.0.1 release), we use the following bands for the pitch (different from the bands on which we normalise energy):

{0, 4, 8, 12, 20, 36}

Another particularity of the pitch predictor in CELT (unlike any other algorithm I know of) is that the pitch prediction is computed on the normalised bands. That is, we apply the energy normalisation on both the current signal (X) and the delayed (pitch prediction from the past) signal (P). Because of that, the pitch gain can never exceed unity, which is a nice property when it comes to making things stable despite transmission losses. Despite a maximum value of one in the normalised domain, the "effective value" (not normalised) can be greater than one when the energy is increasing, which is the desired effect. The pitch gain for band i is computed simply as g_i = <X_i, P_i>, where <,> is the inner product and X_i is the sub-vector of X that corresponds to band i (same for P_i).
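Here's a small NumPy sketch of that gain computation (illustrative only: the band layout is the pitch-band example above, while the signals are random stand-ins, not real MDCT data):

```python
import numpy as np

def unit(v):
    """Normalise a band sub-vector to unit magnitude."""
    return v / np.linalg.norm(v)

def pitch_gains(X, P, bands):
    """Per-band gains g_i = <X_i, P_i> on the normalised sub-vectors."""
    return [float(np.dot(unit(X[lo:hi]), unit(P[lo:hi])))
            for lo, hi in zip(bands[:-1], bands[1:])]

bands = [0, 4, 8, 12, 20, 36]        # the pitch bands from the example
rng = np.random.default_rng(0)
X = rng.standard_normal(36)          # current frame
P = 0.9 * X + 0.1 * rng.standard_normal(36)  # similar "past" signal

g = pitch_gains(X, P, bands)
# On unit vectors, Cauchy-Schwarz bounds every gain to [-1, 1]:
assert len(g) == 5
assert all(-1.0 <= gi <= 1.0 for gi in g)
```

Since both sub-vectors have unit magnitude, the bound on the gain falls out of Cauchy-Schwarz for free, which is exactly the stability property mentioned above.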

Here's what the distribution of the gains look like for each band:



It's clear from the figure above that the lower bands (lower frequencies) tend to have a much higher pitch gain. Because of that, a single gain for all the bands wouldn't work very well. Once the gains are computed, they need to be encoded efficiently. Again, using naive scalar quantisation and encoding each gain separately (using 3 or 4 bits each) would be a bit wasteful. So far, I've been using a trained (non-algebraic) vector quantiser (VQ) with 32 entries, which means a total of 5 bits for all gains. The advantage of VQ for that kind of data is that it eliminates all redundancy, so it tends to be more efficient. There are a few disadvantages as well. Trained VQ codebooks are not as flexible and can end up taking too much space when there are many entries (I don't think 32 entries is enough for 5 gains).

The last point to address about the pitch predictor is calculating the pitch period. We could try all delays, apply the MDCTs and compute the gains for each, and at the end decide which is best. Unfortunately, the computational cost would be huge. Instead, it's easier to do it in "open loop" just like in Speex (and many other CELP codecs). We compute the generalised cross-correlation (GCC) in the frequency domain (cheaper than computing it in the time domain). The cross-spectrum (before computing the IFFT) is weighted by an approximation of the psychoacoustic masking curve so that each band contributes to the result (instead of having the lower frequencies dominate everything else).

Now the results: how much benefit does pitch prediction give? Quite a bit actually, hear for yourself. Here's the same speech sample encoded with or without pitch prediction. Even on music, which is not always periodic, pitch prediction helps a bit, though not as much. I think there's potential to do better on music. There are a few leads I'd like to investigate (and again, I'm open to ideas):
  • Using two pitch periods
  • Frequency-domain prediction
Feel free to ask questions below in the (likely) case something's not clear.
jmvalin: (Default)
As mentioned in my previous post, one of the main ideas in CELT is to divide the signal into bands and directly encode the energy in each band. There are several reasons for that. First, the ear is generally more sensitive to the energy in a frequency band than to the exact details of where that energy is. This is especially true at higher frequencies, where we sometimes only need to get the rough shape of the spectrum right to get decent quality. A second reason is that it is convenient to separate the signal into energy and "details", just like CELP codecs (such as Speex) split the signal into a filter and an excitation, or the way Vorbis uses a "floor". In CELT, we go one step further and actually divide the data in each band by the band's energy and then constrain each band to have unit magnitude (sqrt(\sum_i x_i^2) = 1). Once a band has been normalised, its magnitude will always be equal to 1, no matter what happens to it. Any processing/encoding/mutilating we do to it needs to preserve that unit magnitude.

Ideally, the width of each band should be roughly one critical band. In practice, there isn't much point in having a single frequency bin per critical band, so although the ear has roughly 25 critical bands, we only use about 15-20 in CELT. Here's an example using 256-sample MDCTs (128 output samples) and 15 bands. The band boundaries are:

{0, 2, 4, 6, 8, 12, 16, 20, 24, 28, 36, 44, 52, 68, 84, 116, 128}

Using this, band number 0 includes samples 0 and 1, while band number 14 includes samples 84 to 115. The remaining samples (116-127) are just discarded because they are outside the ear's range (a 44.1 kHz sampling rate is assumed here).
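Tying the boundaries and the normalisation together, here's an illustrative NumPy sketch of the split-and-normalise step (not the actual CELT code; the input is random data standing in for MDCT output):

```python
import numpy as np

# Band boundaries for 256-sample MDCTs (128 output bins), 15 coded bands;
# bins 116-127 are simply discarded.
BANDS = [0, 2, 4, 6, 8, 12, 16, 20, 24, 28, 36, 44, 52, 68, 84, 116, 128]

def split_and_normalise(mdct_bins):
    """Return per-band energies and unit-magnitude shape vectors."""
    energies, shapes = [], []
    for lo, hi in zip(BANDS[:-2], BANDS[1:-1]):   # the 15 coded bands
        band = mdct_bins[lo:hi]
        e = np.linalg.norm(band)                  # sqrt(sum of x_i^2)
        energies.append(float(e))
        shapes.append(band / e)
    return energies, shapes

rng = np.random.default_rng(1)
spectrum = rng.standard_normal(128)   # stand-in for real MDCT output
E, S = split_and_normalise(spectrum)
assert len(E) == 15 and len(S[0]) == 2      # band 0 holds bins 0 and 1
assert all(abs(np.linalg.norm(s) - 1.0) < 1e-9 for s in S)
```

Everything downstream of this point only ever sees the unit-magnitude shape vectors; the energies are coded separately, as described next.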

Now, the first thing we need to do is actually encode the energy of each band in an efficient way. The ear is more sensitive to lower frequencies, so these will need to be encoded with better resolution. Of course, we use the log (dB) domain and add a small value (equivalent to -10 dB) just to prevent overflows when taking the log. In this example, we use a quantisation interval of 0.75 dB for the lowest band, increasing linearly to 4.25 dB for the highest band. Doing naive quantisation/encoding over a fixed range would require a prohibitive number of bits (>100 bits per frame) and is thus not an option. Measuring the ideal entropy (assuming a perfect probability model for the data) for some speech and music samples gives us an average of 71 bits per frame. That's still expensive, considering we're going to encode around 200 frames per second.
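As a rough sketch of that quantisation (illustrative only: the linear step schedule between 0.75 and 4.25 dB and the exact placement of the -10 dB floor are my simplifications of the description above):

```python
import math

NB_BANDS = 15

def quant_step_db(band):
    """Quantisation interval in dB: 0.75 for band 0, rising linearly
    to 4.25 for band 14."""
    return 0.75 + (4.25 - 0.75) * band / (NB_BANDS - 1)

def quantise_energy(energy, band):
    """Quantise one band's energy in the log domain. A small offset
    (equivalent to -10 dB) keeps the log from blowing up on silence."""
    floor = 10.0 ** (-10.0 / 20.0)       # -10 dB as a linear amplitude
    e_db = 20.0 * math.log10(energy + floor)
    return round(e_db / quant_step_db(band))  # index to be entropy-coded

assert quant_step_db(0) == 0.75
assert abs(quant_step_db(NB_BANDS - 1) - 4.25) < 1e-12
```

The coarser steps at high frequencies are what keep the raw index range manageable before any prediction or entropy coding is applied.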

The only way to further reduce the number of bits used for energy quantisation is to eliminate redundancy. Energy usually doesn't vary that much from one frame to the next, so we can use a time-domain predictor of the form P(z) = 1 - alpha*z^-1. That means we remove from the current energy alpha times the previous energy (we're already in log domain). Here's what the entropy per frame looks like (as a function of alpha) if we use that predictor:



That's already much better. As we increase the prediction coefficient (alpha) from 0 to 1, we can reduce the entropy from 71 bits down to around 45 bits, a 26-bit improvement. Unfortunately, using alpha=1 for prediction isn't practical because it would mean that any transmission error (e.g. lost packet) would propagate through time with no attenuation (even 0.95 would take too long). A value of alpha around 0.7 would be a nice tradeoff between redundancy reduction and limited error propagation. That's 52 bits per frame. However, we're not done yet eliminating redundancy. There's still a correlation across the bands in the same frame. This time, we can use any predictor we like because a frame either arrives completely or it doesn't. So we use a second predictor Q(z) = (1 - z^-1)/(1 - beta*z^-1). With that second predictor, the entropy goes down again:



With alpha=0.7 and beta=0.5, we have just under 44 bits of entropy. Much better than the 71 bits we started from and even better than only the first predictor with alpha=1. Of course, that entropy value is optimistic because it assumes a perfect probability model and because it assumes that prediction isn't degraded by quantisation.
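In code, the two predictors might be sketched like this (illustrative; I'm assuming zero initial filter state, and a real coder would feed back quantised values rather than the exact ones):

```python
import numpy as np

ALPHA, BETA = 0.7, 0.5

def energy_residual(log_e, prev_log_e):
    """Residual after both predictors, on per-band log energies.
    Across time:  P(z) = 1 - ALPHA * z^-1 (per band, previous frame).
    Across bands: Q(z) = (1 - z^-1) / (1 - BETA * z^-1), i.e.
    y[i] = x[i] - x[i-1] + BETA * y[i-1]."""
    x = np.asarray(log_e) - ALPHA * np.asarray(prev_log_e)
    y = np.empty_like(x)
    prev_x = prev_y = 0.0
    for i, xi in enumerate(x):
        y[i] = xi - prev_x + BETA * prev_y
        prev_x, prev_y = xi, y[i]
    return y

# If nothing changes between frames and across bands, the residual is
# small: only the "leak" left by ALPHA < 1 remains to be coded.
flat = np.full(15, 6.0)               # 15 bands, constant log energy
r = energy_residual(flat, flat)
assert abs(r[0] - (1 - ALPHA) * 6.0) < 1e-12
assert all(abs(r[i]) < abs(r[0]) for i in range(1, 15))
```

The time predictor is leaky on purpose (loss robustness), while the in-frame predictor can be as aggressive as we like, exactly as argued above.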

For encoding, it's not very practical to use the actual probability model because it would require storing the probability for each value of each band (and for each bit-rate if we change the resolution). However, it turns out that the distributions are somewhere between a Gaussian distribution and a Laplacian distribution. Although actually closer to being Gaussian, we use a Laplacian model because it reduces the spikes in bit-rate (a Gaussian would significantly underestimate the probability of extreme values). Despite the rough approximation, the average actual encoding rate for all 15 bands is 46 bits per frame. That's just 2 bits worse than the theoretical best case using that predictor. Not bad at all.

I've also played with the DCT (for intra-frame redundancy) without getting better results, mainly because it's harder to control the error in each band. Still, there may be better ways than what I've done so far to reduce the bit-rate for the energy. I'm open to ideas/suggestions on that.

Updated: Fixed the definition of the example bands.
jmvalin: (Default)
Here's a bit more info on the CELT experimental codec I just released. First, the goals. For the past two years, Monty and I have been discussing what the next generation free audio codec would be. Monty's goal is basically to be better than Vorbis in terms of quality vs compression. My main goal, on the other hand, is to have a high-quality codec with very low latency, even if it means being less efficient. So, we're trying to combine both into the Ghost codec. Whether that'll succeed or we'll have to go with two separate codecs is still an open question. For now, I'm working on CELT, which I hope to be both a low-latency codec, and a noise encoder to be used in a lower bit-rate Ghost codec like Monty wants.

Below is an overview of how CELT works:


It may look a bit hairy, but it's actually a relatively simple idea. The four main ideas are:


  • We use a lapped transform (here an MDCT) on very short windows (128-256 samples)

  • The spectrum is divided in bands and the energy in each band is encoded and kept constant

  • We use a time-domain pitch predictor, with frequency-domain gains

  • The residual is encoded using a pulse codebook



I'll address each of these (and more) in later posts.
jmvalin: (Default)
Speex 1.2beta3 has been tagged and will be up on the website shortly. There should even be Windows builds this time thanks to Alexander Chemeris. I'm expecting the next release to be named 1.2rc1. There's still a few things to address before 1.2, but I'm hoping the libspeexdsp API will be complete for rc1. Stay tuned.

There's another release coming up: a new Code-Excited Lapped Transform (CELT) codec prototype. This codec is (for now at least) meant to compete with neither Vorbis, nor Speex. Instead, the primary idea is to reduce latency to a minimum -- currently around 8 ms (compared to ~25 ms for Speex and ~100 ms for Vorbis). Of course, this comes with a price in terms of efficiency, but I'm already surprised the price isn't bigger. I've been mainly focusing on speech, but unlike Speex, I'm hoping this one will handle music as well. For the curious, I've put up a 56 kbps CBR music file (original). This is still very experimental and everything is still likely to change, including the exact goals. I'm still trying to figure out how to put psychoacoustics into that. Stay tuned for the release of version 0.0.1 (or should I use a negative version number to make it clearer it's experimental?).

CELT is based on a paper I submitted to ICASSP and which I'm hoping will be accepted so I can make it available to everyone. The only difference is that the ICASSP paper was based on the FFT (non critically sampled), whereas this version is based on the MDCT. One part that is already published though is Timothy's explanation of the pulse codebook encoding along with some source code. Now, here's a challenge. Who can beat the algorithm on Timothy's page? Simply stated, the idea is to enumerate all combinations of M pulses in a vector of dimension N, knowing that pulses have unit magnitude and a sign, but all pulses at the same position need to have the same sign.
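For a sense of scale, the size of that codebook can be computed with a simple recurrence (my own counting sketch of the problem as stated, not Timothy's enumeration algorithm):

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def V(n, m):
    """Number of ways to place m signed unit pulses in n positions when
    all pulses sharing a position share a sign -- equivalently, the
    number of integer vectors of dimension n whose absolute values
    sum to m."""
    if m == 0:
        return 1
    if n == 0:
        return 0
    return V(n - 1, m) + V(n, m - 1) + V(n - 1, m - 1)

def brute_count(n, m):
    """Cross-check the recurrence by brute force over small vectors."""
    return sum(1 for v in product(range(-m, m + 1), repeat=n)
               if sum(abs(c) for c in v) == m)

assert V(2, 1) == 4                  # (+-1, 0) and (0, +-1)
assert V(3, 2) == brute_count(3, 2)  # 18 codewords
```

Enumerating (rather than just counting) these vectors in a way that maps each one to a compact integer index is exactly the problem Timothy's page addresses.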

Updated: The full source for CELT is available at: http://downloads.us.xiph.org/releases/celt/celt-0.0.1.tar.gz or through git at http://git.xiph.org/celt.git

Updated again: Speex 1.2beta3 is out.
