Fast factor 2 resampling

Stefan Westerfeld
   Hi!

I have worked quite a bit now on writing code for fast factor two
resampling based on FIR filters. So the task is similar to the last
posting I made. The differences are:

- I tried to really optimize for speed; I used gcc's SIMD primitives to
  design a version that runs really fast on my machine:

  model name      : AMD Athlon(tm) 64 Processor 3400+
  stepping        : 10
  cpu MHz         : 2202.916

  running in 64bit mode.

- I put some more effort into designing the coefficients for the filter;
  I used octave to do it; the specifications I tried to meet are listed
  in the coeffs.h file.

The resamplers are designed for streaming use; they do smart history
keeping. Thus a possible use case I designed them for would be to
upsample BEAST at the input devices and downsample BEAST at the output
devices.
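
To make the streaming use and "smart history keeping" concrete, here is a
minimal sketch; the class name, the toy 4-tap coefficients and everything
else in it are illustration only, not the actual ssefir.cc code:

#include <cstddef>
#include <vector>

// Toy factor-2 upsampler: keeps the last taps.size()-1 input samples as
// history, so consecutive blocks of a stream join without discontinuities.
class ToyUpsampler2 {
  std::vector<float> taps;    // interpolation taps for the odd outputs
  std::vector<float> history; // tail of the previous input block
public:
  ToyUpsampler2 ()
  {
    const float t[4] = { -1.f/16, 9.f/16, 9.f/16, -1.f/16 };
    taps.assign (t, t + 4);
    history.assign (taps.size () - 1, 0.f);
  }
  // consumes n input samples, produces 2 * n output samples
  void process_block (const float *in, size_t n, float *out)
  {
    std::vector<float> buf (history);            // prepend the kept history
    buf.insert (buf.end (), in, in + n);
    const size_t h = taps.size () - 1;
    for (size_t i = 0; i < n; i++)
      {
        out[2 * i] = buf[i + h / 2];             // even output: input copied
        float sum = 0;                           // odd output: FIR interpolation
        for (size_t k = 0; k <= h; k++)
          sum += taps[k] * buf[i + k];
        out[2 * i + 1] = sum;
      }
    history.assign (buf.end () - h, buf.end ()); // keep tail for next block
  }
};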

The benefits of using the code for these tasks are:

- the filters are linear-phase
- the implementation should be fast enough (at least on my machine)
- the implementation should be precise enough (near -96 dB error == 16
  bit precision)

The downside may be the delay of the filters.

I put some effort into making this code easy to test, with four kinds of
tests:

(p) Performance tests measure how fast the code runs

    I tried it on my machine with both gcc-3.4 and gcc-4.0; you'll see the
    results below. The speedup gain achieved using SIMD instructions
    (SSE3 or whatever AMD64 uses) is

                   gcc-4.0    gcc-3.4
    -------------+---------------------
    upsampling   |   2.82      2.85
    downsampling |   2.54      2.50
    oversampling |   2.70      2.64

    where oversampling is first performing upsampling and then
    performing downsampling. Note that there is a bug in gcc-3.3 which
    will not allow combining C++ code with SIMD instructions.

    The other output should be self-explanatory (if not, feel free to
    ask).

(a) Accuracy tests, which compare what should be the result with what is
    the result; you'll see that using SIMD instructions means a small
    loss of precision, but it should be acceptable. It occurs because
    the code doesn't use doubles to store the accumulated sum, but
    floats, to enable SIMD speedup.

(g) Gnuplot; much the same as the accuracy tests, but it writes out data
    which can be plotted by gnuplot. So it is possible to "see" the
    interpolation error, rather than just get it as output.

(i) Impulse response; this one can be used for debugging - it will give
    the impulse response of the combined (sub- and oversampling) system,
    so you can for instance see the delay in the time domain or plot the
    filter response in the frequency domain.

So I am attaching all code and scripts that I produced so far. For
compiling, I use g++ -O3 -funroll-loops as options; however, I suppose
on x86 machines you need to tell the compiler to generate SSE code.

I just tried it briefly on my laptop, and the SSE version there is much
slower than the non-SSE version. Currently I can't say why this is. I
know that AMD64 has extra registers compared to standard SSE. However,
I designed the inner loop (fir_process_4samples_sse) keeping in mind
not to use more than 8 registers: out0..out3, input, taps, and an
intermediate sum/product - that is 7 registers. Well, maybe AMD64 isn't
faster because of more registers, but because of better addressing
modes, or whatever.

Maybe it's just my laptop being slow, and other non-AMD64-systems will
perform better.

Maybe we need to write three versions of the inner loop. One for AMD64,
one for x86 with SSE and one for FPU.
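
For readers who want to picture such an inner loop, here is a hedged sketch
of a fir_process_4samples-style kernel using gcc's SIMD primitives. It only
illustrates the register budget described above (four accumulators, input,
taps, one temporary); the helper and all names are mine, not the ssefir.cc
code, and order must be a multiple of 4:

typedef float v4sf __attribute__ ((vector_size (16)));
union F4 { v4sf v; float f[4]; };

static inline v4sf
loadu4 (const float *p)              // unaligned 4-float load
{
  v4sf v;
  __builtin_memcpy (&v, p, sizeof v);
  return v;
}

// computes out[j] = dot (taps, input + j) for j = 0..3
static void
fir_process_4samples_simd (const float *input, const float *taps,
                           unsigned order, float *out)
{
  F4 acc[4] = { };                   // the four output accumulators
  for (unsigned i = 0; i < order; i += 4)
    {
      const v4sf t = loadu4 (taps + i);
      for (unsigned j = 0; j < 4; j++)
        acc[j].v += t * loadu4 (input + i + j);
    }
  for (unsigned j = 0; j < 4; j++)   // horizontal sums
    out[j] = acc[j].f[0] + acc[j].f[1] + acc[j].f[2] + acc[j].f[3];
}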

In any case, I invite you to try it out, play with the code, and give
feedback about it.

   Cu... Stefan
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan


Attachments: coeffs.h (2K), ssefir.cc (21K), ssefirtests.sh (202 bytes), RESULTS (6K)

Re: Fast factor 2 resampling

Tim Janik
On Mon, 6 Mar 2006, Stefan Westerfeld wrote:

>   Hi!
>
> I have worked quite a bit now on writing code for fast factor two
> resampling based on FIR filters. So the task is similar to the last
> posting I made. The differences are:
>
> - I tried to really optimize for speed; I used gcc's SIMD primitives to
>  design a version that runs really fast on my machine:
>
>  model name      : AMD Athlon(tm) 64 Processor 3400+
>  stepping        : 10
>  cpu MHz         : 2202.916
>
>  running in 64bit mode.
>
> - I put some more effort into designing the coefficients for the filter;
>  I used octave to do it; the specifications I tried to meet are listed
>  in the coeffs.h file.

hm, can you put up a description of how to derive the coefficients with
octave or with some other tool then, so they can be reproduced by someone
else?

> The resamplers are designed for streaming use; they do smart history
> keeping. Thus a possible use case I designed them for would be to
> upsample BEAST at the input devices and downsample BEAST at the output
> devices.
>
> The benefits of using the code for this tasks are:
>
> - the filters are linear-phase

*why* exactly is this a benefit?

> - the implementation should be fast enough (at least on my machine)
> - the implementation should be precise enough (near -96 dB error == 16
>  bit precision)

what would be required to beef this up to -120dB, or to provide an
alternative implementation? i'm asking because float or 24bit datahandles
are not at all unlikely for the future.
likewise, a 12bit variant may make sense as well for some handles (maybe
even an 8bit variant in case that's still significantly faster than the
12bit version).

> The downside may be the delay of the filters.
>
> I put some effort into making this code easy to test, with four kinds of
> tests:
>
> (p) Performance tests measure how fast the code runs
>
>    I tried on my machine with both: gcc-3.4 and gcc-4.0; you'll see the
>    results below. The speedup gain achieved using SIMD instructions
>    (SSE3 or whatever AMD64 uses) is
>
>                   gcc-4.0    gcc-3.4
>    -------------+---------------------
>    upsampling   |   2.82      2.85
>    downsampling |   2.54      2.50
>    oversampling |   2.70      2.64
>
>    where oversampling is first performing upsampling and then
>    performing downsampling. Note that there is a bug in gcc-3.3 which
>    will not allow combining C++ code with SIMD instructions.
>
>    The other output should be self-explaining (if not, feel free to
>    ask).

hm, these figures are pretty much meaningless without knowing:
- what exactly was performed that took 2.82 or 2.85
- what is the unit of those figures? milliseconds? hours? dollars?

> (a) Accuracy tests, which compare what should be the result with what is
>    the result; you'll see that using SIMD instructions means a small
>    loss of precision, but it should be acceptable. It occurs because
>    the code doesn't use doubles to store the accumulated sum, but
>    floats, to enable SIMD speedup.

what's the cost of using doubles for intermediate values anyway (is that
possible at all?)
and what does the precision loss mean in dB?

> (g) Gnuplot; much the same like accuracy, but it writes out data which
>    can be plottet by gnuplot. So it is possible to "see" the
>    interpolation error, rather than just get it as output.
>
> (i) Impulse response; this one can be used for debugging - it will give
>    the impulse response for the (for sub- and oversampling combined)
>    system, so you can for instance see the delay in the time domain or
>    plot the filter response in the frequency domain.
>
> So I am attaching all code and scripts that I produced so far. For
> compiling, I use g++ -O3 -funroll-loops as options; however, I suppose
> on x86 machines you need to tell the compiler to generate SSE code.
>
> I just tried it briefly on my laptop, and the SSE version there is much
> slower than the non-SSE version. Currently I can't say why this is. I
> know that AMD64 has extra registers compared to standard SSE. However,
> I designed the inner loop (fir_process_4samples_sse) with having in mind
> not to use more than 8 registers: out0..out3, input, taps, intermediate
> sum/product whatever. These are 7 registers. Well, maybe AMD64 isn't
> faster because of more registers, but better adressing mode, or
> whatever.
>
> Maybe its just my laptop being slow, and other non-AMD64-systems will
> perform better.
>
> Maybe we need to write three versions of the inner loop. One for AMD64,
> one for x86 with SSE and one for FPU.
>
> In any case, I invite you to try it out, play with the code, and give
> feedback about it.

first, i'd like to thank you for working on this.

but then, what you're sending here is still pretty rough and
looks cumbersome to deal with.
can you please provide more details on the exact API you intend to add
(best is to have this in bugzilla), and give precise build instructions
(best is usually down to the level of shell commands, so the reader just
needs to paste those).

also, more details of what exactly your performance tests do and how
to use them would be appreciated.

>   Cu... Stefan

---
ciao TJ

Re: Fast factor 2 resampling

Stefan Westerfeld
   Hi!

On Tue, Mar 28, 2006 at 03:27:14PM +0200, Tim Janik wrote:

> On Mon, 6 Mar 2006, Stefan Westerfeld wrote:
> >I have worked quite a bit now on writing code for fast factor two
> >resampling based on FIR filters. So the task is similar to the last
> >posting I made. The differences are:
> >
> >- I tried to really optimize for speed; I used gcc's SIMD primitives to
> > design a version that runs really fast on my machine:
> >
> > model name      : AMD Athlon(tm) 64 Processor 3400+
> > stepping        : 10
> > cpu MHz         : 2202.916
> >
> > running in 64bit mode.
> >
> >- I put some more effort into designing the coefficients for the filter;
> > I used octave to do it; the specifications I tried to meet are listed
> > in the coeffs.h file.
>
> hm, can you put up a description about how to derive the coefficients with
> octave or with some other tool then. so they can be reproduced by someone
> else?

As I have done it, it requires extra octave code (a bunch of .m files
implementing the ultraspherical window). I've copy-pasted the code from a
paper, and hacked around until it worked (more or less) in octave.

But if we want to include it as octave code in the BEAST distribution,
it might be worth investing a little more work into this window so that
we can provide a matlab/octave implementation we really understand and
then can provide a C implementation as well, so it can be used from
BEAST directly.

> >The resamplers are designed for streaming use; they do smart history
> >keeping. Thus a possible use case I designed them for would be to
> >upsample BEAST at the input devices and downsample BEAST at the output
> >devices.
> >
> >The benefits of using the code for this tasks are:
> >
> >- the filters are linear-phase
>
> *why* exactly is this a benefit?

Linear phase filtering means three things:

* we do "real interpolation", in the sense that for factor 2 upsampling,
  every other sample is exactly kept as it is; this means that we don't
  have to compute it

* we keep the shape of the signal intact, thus operations that modify
  the shape of the signal (non-linear operations, such as saturation)
  will sound the same when oversampling them

* we have the same delay for all frequencies - not having the same
  delay for all frequencies may result in audible differences between
  the original and up/downsampled signal

    http://en.wikipedia.org/wiki/Group_delay

  gives a table, which however seems to indicate that "not being quite"
  linear phase wouldn't lead to audible problems
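
To make the first point concrete, here is a hedged helper (mine, not from
coeffs.h) that checks both properties on a designed impulse response:
symmetry (linear phase), and every tap an even distance from the centre
being zero (half-band; centre tap 0.5 for a unity-gain design):

#include <cmath>
#include <cstddef>

bool
is_linear_phase_halfband (const double *h, size_t N) // N = tap count, odd
{
  const size_t c = N / 2;                      // centre tap index
  for (size_t k = 0; k < N; k++)
    if (std::fabs (h[k] - h[N - 1 - k]) > 1e-12)
      return false;                            // not symmetric
  for (size_t k = c % 2; k < N; k += 2)        // same parity as the centre
    if (k != c && std::fabs (h[k]) > 1e-12)
      return false;                            // not half-band
  return std::fabs (h[c] - 0.5) < 1e-12;
}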
 
> >- the implementation should be fast enough (at least on my machine)
> >- the implementation should be precise enough (near -96 dB error == 16
> > bit precision)
>
> what is required to beef this up to -120dB, or provide an alternative
> implementation. i'm asking because float or 24bit datahandles are not at
> all unlikely for the future.

Why -120dB? 6 * 24 = 144...?

The first factor that influences the precision is of course the filter
(and the resampling code doesn't hardcode the filter coefficients). The
filter can be tweaked to offer a -144dB (or -120dB) frequency response
by redesigning the coefficients (with the octave method I used), it will
be longer then (more delay, slower computation).

The second factor is the SSE code itself, because SSE limits us to float
precision. My implementation also uses a computation order that is
quite fast - but not too good for precision. Usually, for FIR filters it's
good to compute the influence of the small coefficients first and then the
influence of the larger ones. However, I compute the influence of the
coefficients in the order they occur in the impulse response.

In conclusion: it might be that SSE code - at least as implemented -
cannot attain the precision we desire for 24bit audio. How good it gets
probably can't be determined without trying it.
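
A self-contained toy that demonstrates the effect (made-up coefficients and
inputs, for illustration only): the same FIR sum accumulated in float, as
the SSE path must, and in double, as the FPU path does:

#include <cmath>
#include <cstdio>

int
main ()
{
  const int order = 64;
  double taps[order], x[order];
  double sum_d = 0;                           // FPU path: double accumulator
  float  sum_f = 0;                           // SSE path: float accumulator
  for (int k = 0; k < order; k++)
    {
      taps[k] = std::sin (k * 0.1) / order;   // stand-in coefficients
      x[k] = std::cos (k * 0.3);              // stand-in input samples
    }
  for (int k = 0; k < order; k++)             // impulse-response order
    {
      sum_d += taps[k] * x[k];
      sum_f += float (taps[k]) * float (x[k]);
    }
  const double err = std::fabs (sum_f - sum_d);
  std::printf ("float accumulation error: %g = %f dB\n",
               err, 20 * std::log10 (err));
  return 0;
}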

> likewise, a 12bit variant may make sense as well for some handles (maybe
> even an 8bit variant in case that's still significantly faster than the
> 12bit version).

That should be no problem - simply a matter of designing new coefficients.

> >The downside may be the delay of the filters.
> >
> >I put some effort into making this code easy to test, with four kinds of
> >tests:
> >
> >(p) Performance tests measure how fast the code runs
> >
> >   I tried on my machine with both: gcc-3.4 and gcc-4.0; you'll see the
> >   results below. The speedup gain achieved using SIMD instructions
> >   (SSE3 or whatever AMD64 uses) is
> >
> >                  gcc-4.0    gcc-3.4
> >   -------------+---------------------
> >   upsampling   |   2.82      2.85
> >   downsampling |   2.54      2.50
> >   oversampling |   2.70      2.64
> >
> >   where oversampling is first performing upsampling and then
> >   performing downsampling. Note that there is a bug in gcc-3.3 which
> >   will not allow combining C++ code with SIMD instructions.
> >
> >   The other output should be self-explaining (if not, feel free to
> >   ask).
>
> hm, these figures are pretty much meaningless without knowing:
> - what exactly was performed that took 2.82 or 2.85
> - what is the unit of those figures? milli seconds? hours? dollars?

These are speedup gains. A speedup gain is the factor between the "normal"
implementation and the SSE implementation:

speedup_gain = time_normal / time_sse

It has no unit, because the "seconds" unit of the two times cancels in
the division.

If you want to know the times, and the number of samples processed in
that time, you should read the RESULTS file. It is much more detailed
than the table I gave above.
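
As a worked example from the figures quoted further below: for upsampling,
the FPU variant processes 64000000 samples in 3.667876 seconds and the SSE
variant in 1.346511 seconds, so speedup_gain = 3.667876 / 1.346511, which
is about 2.72.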

> >(a) Accuracy tests, which compare what should be the result with what is
> >   the result; you'll see that using SIMD instructions means a small
> >   loss of precision, but it should be acceptable. It occurs because
> >   the code doesn't use doubles to store the accumulated sum, but
> >   floats, to enable SIMD speedup.
>
> what's the cost of using doubles for intermediate values anyway (is that
> possible at all?)
> and what does the precision loss mean in dB?

The non-SSE implementation does use doubles for intermediate values. The
SSE implementation could only use doubles if we rely on some higher
version of SSE (I think SSE2 or SSE3). However, the price of doing it
would be that the vectorized operations don't do four operations at
once, but two. That means it would become a lot slower to use SSE at
all.

As outlined above, the "real" precision loss is hard to predict.
However, I can give you one sample here for the -96dB filter:

$ ssefir au
accuracy test for factor 2 upsampling using FPU instructions
input frequency used to perform test = 440.00 Hz (SR = 44100.0 Hz)
max difference between correct and computed output: 0.000012 = -98.194477 dB
$ ssefir auf
accuracy test for factor 2 upsampling using SSE instructions
input frequency used to perform test = 440.00 Hz (SR = 44100.0 Hz)
max difference between correct and computed output: 0.000012 = -98.080477 dB

As you see, the variant which uses doubles for intermediate values is
not much better than the SSE variant, and both fulfill the spec without
problems.

However, as dB is a logarithmic measure, care has to be taken when
extrapolating what it would mean for a -144dB (or -120dB) filter.
And the other aspects that affect precision I mentioned above will also
affect the result.

> but then, what you're sending here is still pretty rough and
> looks cumbersome to deal with.
> can you please provide more details on the exact API you intend to add
> (best is to have this in bugzilla), and give precise build instructions
> (best is usually down to the level of shell commands, so the reader just
> needs to paste those).

I've uploaded a more recent version of the sources to bugzilla: #336366.
It also contains build instructions for the standalone thingy. For
ssefir.h, I added documentation comments

/**
 *...
 */

for those functions/classes that may be interesting for others. I also
marked a few more functions protected, so that only the interesting part
of the main classes, Upsampler2 and Downsampler2, remains public.

> also, more details of what exctaly your performance tests do and how
> to use them would be apprechiated.

Basically, they do the resampling processing for the same block of data
500000 times. You can modify the block size used. By timing this
operation, a throughput can be computed which then can be given as
samples per second, or for instance CPU usage for resampling a single
44100 Hz stream.
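
A hedged sketch of that measurement loop (process_block and the Resampler
template parameter are placeholder names, not the ssefir.cc API; only the
normalization mirrors the output shown below):

#include <cstddef>
#include <cstdio>
#include <ctime>

template<class Resampler> void
benchmark (Resampler &r, const float *in, float *out, size_t block_size)
{
  const int runs = 500000;                  // the same block, many times
  const std::clock_t start = std::clock ();
  for (int i = 0; i < runs; i++)
    r.process_block (in, block_size, out);
  const double secs = double (std::clock () - start) / CLOCKS_PER_SEC;
  const double samples_per_sec = double (runs) * block_size / secs;
  std::printf ("samples / second = %f\n", samples_per_sec);
  std::printf ("one 44100 Hz stream takes %f %% CPU usage\n",
               100.0 * 44100.0 / samples_per_sec);
}

With a block size of 128, the 500000 runs account for the 64000000 total
samples reported in the output below.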

If you run the shell script (or read the test file RESULTS I had
attached to the initial mail), you may understand it a bit more, because
the output is somewhat verbose. On my system:

$ ssefir pu
performance test for factor 2 upsampling using FPU instructions
  (performance will be normalized to upsampler input samples)
  total samples processed = 64000000
  processing_time = 3.667876
  samples / second = 17448790.501572
  which means the resampler can process 395.66 44100 Hz streams simultaneusly
  or one 44100 Hz stream takes 0.252740 % CPU usage
$ ssefir puf
performance test for factor 2 upsampling using SSE instructions
  (performance will be normalized to upsampler input samples)
  total samples processed = 64000000
  processing_time = 1.346511
  samples / second = 47530250.673020
  which means the resampler can process 1077.78 44100 Hz streams simultaneusly
  or one 44100 Hz stream takes 0.092783 % CPU usage

The arguments here are:

p = performance
u = upsampling
f = fast -> SSE implementation

run ssefir without args for help.

   Cu... Stefan
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan

Re: Fast factor 2 resampling

Tim Janik
On Tue, 28 Mar 2006, Stefan Westerfeld wrote:

>   Hi!
>
> On Tue, Mar 28, 2006 at 03:27:14PM +0200, Tim Janik wrote:
>> On Mon, 6 Mar 2006, Stefan Westerfeld wrote:
>>> I have worked quite a bit now on writing code for fast factor two
>>> resampling based on FIR filters. So the task is similar to the last
>>> posting I made. The differences are:
>>>
>>> - I tried to really optimize for speed; I used gcc's SIMD primitives to
>>> design a version that runs really fast on my machine:
>>>
>>> model name      : AMD Athlon(tm) 64 Processor 3400+
>>> stepping        : 10
>>> cpu MHz         : 2202.916
>>>
>>> running in 64bit mode.
>>>
>>> - I put some more effort into designing the coefficients for the filter;
>>> I used octave to do it; the specifications I tried to meet are listed
>>> in the coeffs.h file.
>>
>> hm, can you put up a description about how to derive the coefficients with
>> octave or with some other tool then. so they can be reproduced by someone
>> else?
>
> As I have done it, it requires extra octave code (a bunch of .m files
> implementing the ultraspherical window). I've copypasted the code from a
> paper, and hacked around until it worked (more or less) in octave.
>
> But if we want to include it as octave code in the BEAST distribution,
> it might be worth investing a little more work into this window so that
> we can provide a matlab/octave implementation we really understand and
> then can provide a C implementation as well, so it can be used from
> BEAST directly.

ok, ok, first things first ;)

as far as i see, we only have a couple of use cases at hand; supposedly
comprehensive filter setups are:
-  8bit:  48dB
- 12bit:  72dB
- 16bit:  96dB
- 20bit: 120dB
- 24bit: 144dB

if we have those 5 cases covered by coefficient sets, that'd be good enough
to check the stuff into CVS and have production-ready up/down sampling.

then, if the octave files and the paper you pasted from permit, it'd be good
to put the relevant octave/matlab files into CVS under LGPL, so the coefficient
creation process can be reconstructed later on (and by other contributors).

last, once all of the above has been settled and there is a valid use case
for creating these filters from C, the octave/matlab code can be translated
to be available during runtime. do you actually see a use case for this?

>>> The resamplers are designed for streaming use; they do smart history
>>> keeping. Thus a possible use case I designed them for would be to
>>> upsample BEAST at the input devices and downsample BEAST at the output
>>> devices.
>>>
>>> The benefits of using the code for this tasks are:
>>>
>>> - the filters are linear-phase
>>
>> *why* exactly is this a benefit?
>
> Linear phase filtering means three things:
>
> * we do "real interpolation", in the sense that for factor 2 upsampling,
>  every other sample is exactly kept as it is; this means that we don't
>  have to compute it
>
> * we keep the shape of the signal intact, thus operations that modify
>  the shape of the signal (non-linear operations, such as saturation)
>  will sound the same when oversampling them
>
> * we have the same delay for all frequencies - not having the same
>  delay for all frequencies may result in audible differences between
>  the original and up/downsampled signal
>
>    http://en.wikipedia.org/wiki/Group_delay
>
>  gives a table, which however seems to indicate that "not being quite"
>  linear phase wouldn't lead to audible problems

ok, thanks for explaining this. we should have this and similar things
available in our documentation actually. either on a wiki page on synthesis,
or even a real documentation chapter about synthesis. thoughts?

>>> - the implementation should be fast enough (at least on my machine)
>>> - the implementation should be precise enough (near -96 dB error == 16
>>> bit precision)
>>
>> what is required to beef this up to -120dB, or provide an alternative
>> implementation. i'm asking because float or 24bit datahandles are not at
>> all unlikely for the future.
>
> Why -120dB? 6 * 24 = 144...?

yeah, thanks for pointing this out. both are valid use cases, 20bit
samples and 24bit samples.

> The first factor that influences the precision is of course the filter
> (and the resampling code doesn't hardcode the filter coefficients). The
> filter can be tweaked to offer a -144dB (or -120dB) frequency response
> by redesigning the coefficients (with the octave method I used), it will
> be longer then (more delay, slower computation).
>
> The second factor is the SSE code itself, because SSE limits us to float
> float precision. My implementation also uses a computation order that is
> quite fast - but not too good for precision. Usually, for FIR filters its
> good to compute first the influence of small coefficients and then the
> influence of larger ones. However I compute the influence of the
> coefficients in the order they occur in the impulse response.
>
> As conclusion: it might be that SSE code - at least as implemented -
> cannot attain the precision we desire for 24bit audio. How good it gets
> probably can't be determined without trying it.

yeah, right. float might fall short on 20bit or 24bit (definitely the latter,
since 32bit floats have only 23bit of mantissa).
but as you say, we'll see once we have the other coefficient sets, and even
if at 144dB only the slow FPU variant can keep precision, the SSE code will
still speed up the most common use case which is 16bit.

what worries me a bit though is that you mentioned one of your machines
runs the SSE variant slower than the FPU variant. did you investigate
more here?

>> likewise, a 12bit variant may make sense as well for some handles (maybe
>> even an 8bit variant in case that's still significantly faster than the
>> 12bit version).
>
> That should be no problems, simply by designing new coefficients.

ok, as i understand, you're going to do this manually, using octave or
matlab as a tool for now, right?

>>>                  gcc-4.0    gcc-3.4
>>>   -------------+---------------------
>>>   upsampling   |   2.82      2.85
>>>   downsampling |   2.54      2.50
>>>   oversampling |   2.70      2.64
>>>
>>>   where oversampling is first performing upsampling and then
>>>   performing downsampling. Note that there is a bug in gcc-3.3 which
>>>   will not allow combining C++ code with SIMD instructions.
>>>
>>>   The other output should be self-explaining (if not, feel free to
>>>   ask).
>>
>> hm, these figures are pretty much meaningless without knowing:
>> - what exactly was performed that took 2.82 or 2.85
>> - what is the unit of those figures? milli seconds? hours? dollars?
>
> These are speedup gains. A speedup gain is a factor between the "normal"
> implementation and the SSE implementation.

ah, good you point this out.

> If you want to know the times, and the number of samples processed in
> that time, you should read the RESULTS file. It is much more detailed
> than the table I gave above.

well, i did read through it now. first, what's oversampling? how's that
different from upsampling?
and second, reading through the output doesn't lend itself to proper
comparisons of the figures in a good way. compiling them into a table
would be better for that.
and third, since this is the output of a single run, i have no idea how
much those figures are affected by processor/system/etc. jitter, so even
if all the performance figures were next to each other, i'd still have
no idea within what ranges they are actually comparable.

i.e. a bit of postprocessing on your side would have helped to make the
acquired information properly digestible ;)

i agree that the output itself is nicely verbose though.

>>> (a) Accuracy tests, which compare what should be the result with what is
>>>   the result; you'll see that using SIMD instructions means a small
>>>   loss of precision, but it should be acceptable. It occurs because
>>>   the code doesn't use doubles to store the accumulated sum, but
>>>   floats, to enable SIMD speedup.
>>
>> what's the cost of using doubles for intermediate values anyway (is that
>> possible at all?)
>> and what does the precision loss mean in dB?
>
> The non-SSE implementation does use doubles for intermediate values. The
> SSE implementation could only use doubles if we rely on some higher
> version of SSE (I think SSE2 or SSE3). However, the price of doing it
> would be that the vectorized operations don't do four operations at
> once, but two. That means it would become a lot slower to use SSE at
> all.

depending on sse2 also limits portability, e.g. out of 2 laptops here, only
1 has sse2 (both have sse), and out of 2 athlons here only one has sse (and
none sse2). the story is different with mmx of course, which is supported by
all 4 processors...

> As outlined above, the "real" precision loss is hard to predict.
> However, I can give you one sample here for the -96dB filter:

> max difference between correct and computed output: 0.000012 = -98.194477 dB

> accuracy test for factor 2 upsampling using SSE instructions

> max difference between correct and computed output: 0.000012 = -98.080477 dB

thanks. now let's see those figures for 120dB and 144dB ;)

> As you see, the variant which uses doubles for intermediate values is
> not much better than the SSE variant, and both fulfill the spec without
> problems.

have you by any chance benched the FPU variant with doubles against the
FPU variant with floats btw?

> However, as dB is a logarithmic measure, care has to be taken when
> extrapolating what it would mean for a -144dB (or -120dB) filter.
> And the other aspects that affect precision I mentioned above will also
> affect the result.

yeah, agree.

>> but then, what you're sending here is still pretty rough and
>> looks cumbersome to deal with.
>> can you please provide more details on the exact API you intend to add
>> (best is to have this in bugzilla), and give precise build instructions
>> (best is usually down to the level of shell commands, so the reader just
>> needs to paste those).
>
> I've uploaded a more recent version of the sources to bugzilla: #336366.
> It also contains build instructions for the standalong thingy. For
> ssefir.h, I added documentation comments
>
> /**
> *...
> */
>
> for those functions/classes that may be interesting for others. I also
> marked a few more functions protected, so that only the interesting part
> of the main classes, Upsampler2 and Downsampler2, remains public.

thanks for the good work, will have a look at it later.

>   Cu... Stefan

---
ciao TJ

Re: Fast factor 2 resampling

Stefan Westerfeld
   Hi!

On Tue, Mar 28, 2006 at 08:06:07PM +0200, Tim Janik wrote:

> On Tue, 28 Mar 2006, Stefan Westerfeld wrote:
> >>>- I put some more effort into designing the coefficients for the filter;
> >>>I used octave to do it; the specifications I tried to meet are listed
> >>>in the coeffs.h file.
> >>
> >>hm, can you put up a description about how to derive the coefficients with
> >>octave or with some other tool then. so they can be reproduced by someone
> >>else?
> >
> >As I have done it, it requires extra octave code (a bunch of .m files
> >implementing the ultraspherical window). I've copypasted the code from a
> >paper, and hacked around until it worked (more or less) in octave.
> >
> >But if we want to include it as octave code in the BEAST distribution,
> >it might be worth investing a little more work into this window so that
> >we can provide a matlab/octave implementation we really understand and
> >then can provide a C implementation as well, so it can be used from
> >BEAST directly.
>
> ok, ok, first things first ;)
>
> as far as i see, we only have a couple use cases at hand, supposedly
> comprehensive filter setups are:
> -  8bit:  48dB
> - 12bit:  72dB
> - 16bit:  96dB
> - 20bit: 120dB
> - 24bit: 144dB
>
> if we have those 5 cases covered by coefficient sets, that'd be good enough
> to check the stuff in to CVS and have production ready up/down sampling.

Yes, these sound reasonable. Although picking which filter setup to use
may not be as easy as looking at the precision of the input data.

For example ogg input data could be resampled with 96dB coefficients for
performance reasons, or 8bit input data could be resampled with a higher
order filter to get better transition steepness.

Anyway, I'll design coefficients for these 5 cases, and if we want to
have more settings later on, we still can design new coefficients.

> then, if the octave files and the paper you pasted from permit, it'd be good
> to put the relevant octave/matlab files into CVS under LGPL, so the
> coefficient
> creation process can be reconstructed later on (and by other contributors).

I've asked the author of the paper, and he said we can put his code in
our LGPL project. I still need to put some polishing into the octave
code, because I somewhat broke it when porting it from matlab to octave.

The original version has somewhat more readable/understandable filter
design parameters than my version. I hope I get the octave code right.

> last, once all of the above has been settled and there is a valid use case
> for creating these filters from C, the octave/matlab code can be translated
> to be available during runtime. do you actually see a use case for this?

One case I can think of is if we've got a FIR module with a GUI that
allows designing custom filters. But even then, ultraspherical
coefficient tweaking may be too much work for everyday use. Probably the
standard user will rather have "some" window which is somewhat optimal,
without a lot of tweaking, rather than an "almost optimal" window - such
as ultraspherical, with a lot of manual tweaking.

One could also try to automate the tweaking of the two window parameters
by using some kind of search algorithm. Then, it would be as easy to use
as a normal window.

> >Linear phase filtering means three things:
> >
> >* we do "real interpolation", in the sense that for factor 2 upsampling,
> > every other sample is exactly kept as it is; this means that we don't
> > have to compute it
> >
> >* we keep the shape of the signal intact, thus operations that modify
> > the shape of the signal (non-linear operations, such as saturation)
> > will sound the same when oversampling them
> >
> >* we have the same delay for all frequencies - not having the same
> > delay for all frequencies may result in audible differences between
> > the original and up/downsampled signal
> >
> >   http://en.wikipedia.org/wiki/Group_delay
> >
> > gives a table, which however seems to indicate that "not being quite"
> > linear phase wouldn't lead to audible problems
>
> ok, thanks for explaining this. we should have this and similar things
> available in our docuemntation actually. either on a wiki page on synthesis,
> or even a real documentation chapter about synthesis. thoughts?

Maybe a new doxi file on synthesis details? I could write a few
paragraphs on the resampler.

> >Why -120dB? 6 * 24 = 144...?
>
> yeah, thanks for pointing this out. both are valid use cases, 20bit
> samples and 24bit samples.

Although by the way -120dB should be ok for almost any practical use
case, because the human ear probably won't be able to hear the difference.

Since these are relative values (unlike when talking about integer
precisions for samples), even signals which are not very loud will get
really good resampling.

Thus you have error scenarios like this: a loud desired signal (sine
wave at 0 dB) plus a small error signal (sine wave at -120dB). I doubt
that the human ear can pick up the error signal. I even doubt it for
the -96 dB case. But well, we could perform listening tests to try it
out.

> yeah, right. float might fall short on 20bit or 24bit (definitely the
> latter,
> since 32bit floats have only 23bit of mantissa).
> but as you say, we'll see once we have the other coefficient sets, and even
> if at 144dB only the slow FPU variant can keep precision, the SSE code will
> still speed up the most common use case which is 16bit.

Yes. We need to try it once I have the coefficient sets. As I argued
above, the errors may be well below what the human ear can perceive.

> what worries me a bit though is that you mentioned one of your machines
> runs the SSE variant slower than the FPU varient. did you investigate
> more here?

Not yet.

> >>likewise, a 12bit variant may make sense as well for some handles (maybe
> >>even an 8bit variant in case that's still significantly faster than the
> >>12bit version).
> >
> >That should be no problems, simply by designing new coefficients.
>
> ok, as i understand, you're going to do this by manually, using octave or
> mathlab as a tool now, right?

Yes.

> >>>                 gcc-4.0    gcc-3.4
> >>>  -------------+---------------------
> >>>  upsampling   |   2.82      2.85
> >>>  downsampling |   2.54      2.50
> >>>  oversampling |   2.70      2.64
> >>>
> >>>  where oversampling is first performing upsampling and then
> >>>  performing downsampling. Note that there is a bug in gcc-3.3 which
> >>>  will not allow combining C++ code with SIMD instructions.
> >>>
> >>>  The other output should be self-explaining (if not, feel free to
> >>>  ask).
> >>
> >>hm, these figures are pretty much meaningless without knowing:
> >>- what exactly was performed that took 2.82 or 2.85
> >>- what is the unit of those figures? milli seconds? hours? dollars?
> >
> >These are speedup gains. A speedup gain is a factor between the "normal"
> >implementation and the SSE implementation.
>
> ah, good you point this out.
>
> >If you want to know the times, and the number of samples processed in
> >that time, you should read the RESULTS file. It is much more detailed
> >than the table I gave above.
>
> well, i did read through it now. first, what's oversampling? how's that
> different from upsampling?

Oversampling is first upsampling a 44100 Hz signal to 88200 Hz, and then
downsampling it again to 44100 Hz. It's what I first designed the filters
for: oversampling the engine. Thus I benchmarked it as a separate
case.

> and second, reading through the output doesn't lend itself for proper
> comparisions of the figures in a good way. compiling them into a table
> would be better for that.

Right.

> and third, since this is the output of a single run, i have no idea how
> much those figures are affected by processor/sytem/etc. jitter, so even
> if all the performance figures where next to each other, i'd still have
> no idea within what ranges they are actually comparable.

Well, the idea of putting the code in bugs.gnome.org was that you could
see for yourself how much jitter/... it produces.

> >>>(a) Accuracy tests, which compare what should be the result with what is
> >>>  the result; you'll see that using SIMD instructions means a small
> >>>  loss of precision, but it should be acceptable. It occurs because
> >>>  the code doesn't use doubles to store the accumulated sum, but
> >>>  floats, to enable SIMD speedup.
> >>
> >>what's the cost of using doubles for intermediate values anyway (is that
> >>possible at all?)
> >>and what does the precision loss mean in dB?
> >
> >The non-SSE implementation does use doubles for intermediate values. The
> >SSE implementation could only use doubles if we rely on some higher
> >version of SSE (I think SSE2 or SSE3). However, the price of doing it
> >would be that the vectorized operations don't do four operations at
> >once, but two. That means it would become a lot slower to use SSE at
> >all.
>
> depending on sse2 also limits portability, e.g. out of 2 laptops here, only
> 1 has sse2 (both have sse), and out of 2 athlons here only one has sse (and
> none sse2). the story is different with mmx of course, which is supported by
> all 4 processors...

But MMX only accelerates integer operations, which doesn't help much for
our floating point based data handles.

> >As outlined above, the "real" precision loss is hard to predict.
> >However, I can give you one sample here for the -96dB filter:
>
> >max difference between correct and computed output: 0.000012 = -98.194477
> >dB
>
> >accuracy test for factor 2 upsampling using SSE instructions
>
> >max difference between correct and computed output: 0.000012 = -98.080477
> >dB
>
> thanks. now lets see those figures for 120dB and 144dB ;)

Later. I want to clean up the octave files first (see above), before
doing the final design of the coefficients.

> >As you see, the variant which uses doubles for intermediate values is
> >not much better than the SSE variant, and both fulfill the spec without
> >problems.
>
> have you by any chance benched the FPU variant with doubles against the
> FPU variant with floats btw?

Well, I tried it now: the FPU variant without doubles is quite a bit (15%)
faster than the variant which uses doubles as intermediate values.

If you want *really cool* speedups, you can use gcc-4.1 with float
temporaries, -ftree-vectorize and -ffast-math. That auto vectorization
thing really works, and replaces the FPU instructions with SSE
instructions automagically. It's not much slower than my hand-crafted
version. But then again, we wanted a FPU variant to have a FPU variant,
right?

I haven't benchmarked a version which uses double filter coefficients,
because it would have required a lot of rewriting to get it to work with
my current source tree.

But here are the tests I described above:

### FPU CODE with temporary DOUBLES.
$ g++-4.1 -o ssefir ssefir.cc -O3 -funroll-loops
$ ./ssefir pu
performance test for factor 2 upsampling using FPU instructions
  (performance will be normalized to upsampler input samples)
  total samples processed = 64000000
  processing_time = 3.804806
  samples / second = 16820831.364426
  which means the resampler can process 381.42 44100 Hz streams simultaneusly
  or one 44100 Hz stream takes 0.262175 % CPU usage

### FPU CODE with temporary FLOATS.
$ g++-4.1 -o ssefir ssefir.cc -O3 -funroll-loops
$ ./ssefir pu
performance test for factor 2 upsampling using FPU instructions
  (performance will be normalized to upsampler input samples)
  total samples processed = 64000000
  processing_time = 3.317954
  samples / second = 19288996.585134
  which means the resampler can process 437.39 44100 Hz streams simultaneusly
  or one 44100 Hz stream takes 0.228628 % CPU usage

### SSE CODE with temporary FLOATS generated by AUTOVECTORIZER
$ g++-4.1 -o ssefir ssefir.cc -O3 -funroll-loops -ffast-math -ftree-vectorize
$ ./ssefir pu
performance test for factor 2 upsampling using FPU instructions
  (performance will be normalized to upsampler input samples)
  total samples processed = 64000000
  processing_time = 1.482929
  samples / second = 43157831.814407
  which means the resampler can process 978.64 44100 Hz streams simultaneusly
  or one 44100 Hz stream takes 0.102183 % CPU usage

### SSE CODE with temporary FLOATS generated by HAND
$ g++-4.1 -o ssefir ssefir.cc -O3 -funroll-loops
$ ./ssefir puf
performance test for factor 2 upsampling using SSE instructions
  (performance will be normalized to upsampler input samples)
  total samples processed = 64000000
  processing_time = 1.323285
  samples / second = 48364483.105296
  which means the resampler can process 1096.70 44100 Hz streams simultaneusly
  or one 44100 Hz stream takes 0.091183 % CPU usage

I must admit, I am really impressed with the quality of auto
vectorization. It's the first time I could try it out (I installed gcc-4.1
today). Hand-written code is somewhat faster, but not much. Another
advantage of the hand-written code is that it will work with any
compiler down to gcc-3.4.

> >I've uploaded a more recent version of the sources to bugzilla: #336366.
> >[...]
>
> thanks for the good work, will have a look at it later.

I uploaded a new version with my current sources.

   Cu... Stefan
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan

Re: Fast factor 2 resampling

Tim Janik
On Tue, 4 Apr 2006, Stefan Westerfeld wrote:

>   Hi!
>
> On Tue, Mar 28, 2006 at 08:06:07PM +0200, Tim Janik wrote:
>> On Tue, 28 Mar 2006, Stefan Westerfeld wrote:
>>>>> - I put some more effort into designing the coefficients for the filter;
>>>>> I used octave to do it; the specifications I tried to meet are listed
>>>>> in the coeffs.h file.
>>>>
>>>> hm, can you put up a description about how to derive the coefficients with
>>>> octave or with some other tool then. so they can be reproduced by someone
>>>> else?
>>>
>>> As I have done it, it requires extra octave code (a bunch of .m files
>>> implementing the ultraspherical window). I've copypasted the code from a
>>> paper, and hacked around until it worked (more or less) in octave.
>>>
>>> But if we want to include it as octave code in the BEAST distribution,
>>> it might be worth investing a little more work into this window so that
>>> we can provide a matlab/octave implementation we really understand and
>>> then can provide a C implementation as well, so it can be used from
>>> BEAST directly.
>>
>> ok, ok, first things first ;)
>>
>> as far as i see, we only have a couple use cases at hand, supposedly
>> comprehensive filter setups are:
>> -  8bit:  48dB
>> - 12bit:  72dB
>> - 16bit:  96dB
>> - 20bit: 120dB
>> - 24bit: 144dB
>>
>> if we have those 5 cases covered by coefficient sets, that'd be good enough
>> to check the stuff in to CVS and have production ready up/down sampling.
>
> Yes, these sound reasonable. Although picking which filter setup to use
> may not be as easy as looking at the precision of the input data.
>
> For example ogg input data could be resampled with 96dB coefficients for
> performance reasons, or 8bit input data could be resampled with a higher
> order filter to get better transition steepness.

but that'd just be another choice out of those 5, other than the obvious one.
or am i misunderstanding you and you want to point out a missing setup?

>
> Anyway, I'll design coefficients for these 5 cases, and if we want to
> have more settings later on, we still can design new coefficients.

yeah.

>
>> then, if the octave files and the paper you pasted from permit, it'd be good
>> to put the relevant octave/matlab files into CVS under LGPL, so the
>> coefficient
>> creation process can be reconstructed later on (and by other contributors).
>
> I've asked the author of the paper, and he said we can put his code in
> our LGPL project. I still need to put some polishing into the octave
> code, because I somewhat broke it when porting it from matlab to octave.

ugh. i think putting just the link to the paper into our docs or the code
comments would be enough, given it is publicly available. but we can mirror
it on-site if we have permission for redistribution and there is reason to
believe the original location may be non-permanent in any way.

>>> Linear phase filtering means three things:
>>>
>>> * we do "real interpolation", in the sense that for factor 2 upsampling,
>>> every other sample is exactly kept as it is; this means that we don't
>>> have to compute it
>>>
>>> * we keep the shape of the signal intact, thus operations that modify
>>> the shape of the signal (non-linear operations, such as saturation)
>>> will sound the same when oversampling them
>>>
>>> * we have the same delay for all frequencies - not having the same
>>> delay for all frequencies may result in audible differences between
>>> the original and up/downsampled signal
>>>
>>>   http://en.wikipedia.org/wiki/Group_delay
>>>
>>> gives a table, which however seems to indicate that "not being quite"
>>> linear phase wouldn't lead to audible problems
>>
>> ok, thanks for explaining this. we should have this and similar things
>> available in our docuemntation actually. either on a wiki page on synthesis,
>> or even a real documentation chapter about synthesis. thoughts?
>
> Maybe a new doxi file on synthesis details? I could write a few
> paragraphs on the resampler.

that'd be good. will you check that in to docs/ then?
one file that may be remotely suitable is:
   http://beast.gtk.org/architecture.html
but it's probably much better to just start synthesis-details.doxi.

>
>>> Why -120dB? 6 * 24 = 144...?
>>
>> yeah, thanks for pointing this out. both are valid use cases, 20bit
>> samples and 24bit samples.
>
> Although by the way -120dB should be ok for almost any practical use
> case, because the human ear probably won't able to hear the difference.
>
> Since these are relative values (unlike when talking about integer
> precisions for samples), even signals which are not very loud will get
> really good resampling.
>
> Thus you have error scenarios like this: a signal with a loud desired
> signal (sine wave with 0 dB) and a small error signal (sine wave with
> -120dB).  I doubt that the human ear can pick up the error signal. I
> even doubt it for the -96 dB case. But well, we could perform listening
> tests to try it out.

well, since we're writing a modular synthesis application here, keep in mind
that examining just one signal in isolation isn't good enough for all cases.
that'd be ok for a media player with one pluggable filter in its output
chain, but our signals are used for various purposes and *may* be strongly
amplified.
i'm not saying 144dB will be the common use case, but i think it's reasonably
within the range of filters we might want to offer synthesis users.

>
>> yeah, right. float might fall short on 20bit or 24bit (definitely the
>> latter,
>> since 32bit floats have only 23bit of mantissa).
>> but as you say, we'll see once we have the other coefficient sets, and even
>> if at 144dB only the slow FPU variant can keep precision, the SSE code will
>> still speed up the most common use case which is 16bit.
>
> Yes. We need to try it once I have the coefficient sets. As I argued
> above, the errors may be well below what the human ear can percieve.

i'll keep an eye on it with our FFT scope, which allows almost arbitrary
signal boosts ;)

>
>> what worries me a bit though is that you mentioned one of your machines
>> runs the SSE variant slower than the FPU varient. did you investigate
>> more here?
>
> Not yet.

ok, please keep posting once you've done that then ;)

>> well, i did read through it now. first, what's oversampling? how's that
>> different from upsampling?
>
> Oversampling is first upsampling a 44100 Hz signal to 88200 Hz, and then
> downsampling it again to 44100 Hz. Its what I first designed the filters
> for: for oversampling the engine. Thus I benchmarked it as seperate
> case.

hm, i still don't have a good idea whether we won't need n-times oversampling
for the whole engine. basically, because i have a rough idea of what usual
input/output rates are or could be (44.1k, 48k, 88.2k, 96k), but not what
good rates are to run the synthesis engine at (48K, 56K, 66.15K, 64K, 72K)...

>>> The non-SSE implementation does use doubles for intermediate values. The
>>> SSE implementation could only use doubles if we rely on some higher
>>> version of SSE (I think SSE2 or SSE3). However, the price of doing it
>>> would be that the vectorized operations don't do four operations at
>>> once, but two. That means it would become a lot slower to use SSE at
>>> all.
>>
>> depending on sse2 also limits portability, e.g. out of 2 laptops here, only
>> 1 has sse2 (both have sse), and out of 2 athlons here only one has sse (and
>> none sse2). the story is different with mmx of course, which is supported by
>> all 4 processors...
>
> But MMX only accelerates integer operations, which doesn't help much for
> our floating point based data handles.

sure, i'm just pointing out the availability of different technologies here.
i.e. mmx vs. sse vs. sse2. and the athlons of course also have 3dnow.
to sum it up, SSE seems feasible at the moment, SSE2 not so, out of the
available instruction sets.

>>> As you see, the variant which uses doubles for intermediate values is
>>> not much better than the SSE variant, and both fulfill the spec without
>>> problems.
>>
>> have you by any chance benched the FPU variant with doubles against the
>> FPU variant with floats btw?
>
> Well, I tried it now: the FPU variant without doubles is quite a bit (15%)
> faster than the variant which uses doubles as intermediate values.
>
> If you want *really cool* speedups, you can use gcc-4.1 with float
> temporaries -ftree-vectorize and -ffast-math. That auto vectorization
> thing really works, and replaces the FPU instructions with SSE
> instructions automagically. Its not much slower than my hand crafted
> version. But then again, we wanted a FPU variant to have a FPU variant,
> right?

erm, i can't believe your gcc did that without also specifying a
processor type...
and when we get processor specific, we have to provide alternative
compilation objects and need a mechanism to clearly identify and
select the required instruction sets during runtime.
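
One possible shape for that runtime selection, as a hedged sketch:
fir_process_4samples_sse is the name from ssefir.cc, the FPU fallback name
is made up, and __builtin_cpu_supports is a builtin from a much later gcc
than discussed here (at the time this would have meant hand-written CPUID
code instead):

// assumed to be compiled in separate objects elsewhere:
void fir_process_4samples_sse (const float *input, const float *taps,
                               unsigned order, float *out);
void fir_process_4samples_fpu (const float *input, const float *taps,
                               unsigned order, float *out);

typedef void (*FirProcess4) (const float *, const float *, unsigned, float *);

FirProcess4
choose_fir_inner_loop ()
{
  if (__builtin_cpu_supports ("sse"))   // CPUID check generated by gcc
    return fir_process_4samples_sse;    // SIMD variant
  return fir_process_4samples_fpu;      // portable FPU fallback
}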

>>> I've uploaded a more recent version of the sources to bugzilla: #336366.
>>> [...]
>>
>> thanks for the good work, will have a look at it later.
>
> I uploaded a new version with my current sources.

rock.

>   Cu... Stefan

---
ciao TJ

Re: Fast factor 2 resampling

Stefan Westerfeld
   Hi!

On Wed, Apr 05, 2006 at 02:05:37AM +0200, Tim Janik wrote:

> On Tue, 4 Apr 2006, Stefan Westerfeld wrote:
> >>ok, ok, first things first ;)
> >>
> >>as far as i see, we only have a couple use cases at hand, supposedly
> >>comprehensive filter setups are:
> >>-  8bit:  48dB
> >>- 12bit:  72dB
> >>- 16bit:  96dB
> >>- 20bit: 120dB
> >>- 24bit: 144dB
> >>
> >>if we have those 5 cases covered by coefficient sets, that'd be good
> >>enough
> >>to check the stuff in to CVS and have production ready up/down sampling.
> >
> >Yes, these sound reasonable. Although picking which filter setup to use
> >may not be as easy as looking at the precision of the input data.
> >
> >For example ogg input data could be resampled with 96dB coefficients for
> >performance reasons, or 8bit input data could be resampled with a higher
> >order filter to get better transition steepness.
>
> but that'd just be another choice out of those 5, other than the obvious
> one.
> or am i misunderstanding you and you want to point out a missing setup?

No, I was just pointing out that choosing from these 5 should not
(always) be automated for datahandles. In the plain C API, this means
that we now have

/* --- resampling datahandles with the factor 2 --- */
GslDataHandle* bse_data_handle_new_upsample2 (GslDataHandle *src_handle, int precision_bits);

instead of the old API

GslDataHandle* bse_data_handle_new_upsample2 (GslDataHandle *src_handle);


But actually there is a case that is not covered very well (and cannot
be covered with the code as it is designed right now), and that's
resampling files with a low sample rate. When it comes to that, the
aliasing area that I've designed into the inaudible area (22050-26100 Hz
if we're resampling 44100 Hz recordings) moves down into the audible
area (for instance 11025-13050 Hz when upsampling a 22050 Hz recording
by factor 2).

If that is a problem we need a non-halfband implementation as well (see
below for more cases where we need that), which is not too hard to write
but may be significantly slower (factor 2 or so).

> >>then, if the octave files and the paper you pasted from permit, it'd be
> >>good
> >>to put the relevant octave/matlab files into CVS under LGPL, so the
> >>coefficient
> >>creation process can be reconstructed later on (and by other
> >>contributors).
> >
> >I've asked the author of the paper, and he said we can put his code in
> >our LGPL project. I still need to put some polishing into the octave
> >code, because I somewhat broke it when porting it from matlab to octave.
>
> ugh. i think putting just the link to the paper into our docs or the code
> comments would be enough, give it is publically available. but we can mirror
> it on site if we got permission for redistribution and there is reason to
> believe the original location may be non-permanent in any way.

I wasn't speaking about the paper, but only about the octave/matlab
source code given in the paper (for ultraspherical windows). He allowed
us to redistribute this, and that's what I want to do, so that everybody
can reproduce how we designed the filter coefficients.

>
> >>>Linear phase filtering means three things:
> >>>
> >>>* we do "real interpolation", in the sense that for factor 2 upsampling,
> >>>every other sample is exactly kept as it is; this means that we don't
> >>>have to compute it
> >>>
> >>>* we keep the shape of the signal intact, thus operations that modify
> >>>the shape of the signal (non-linear operations, such as saturation)
> >>>will sound the same when oversampling them
> >>>
> >>>* we have the same delay for all frequencies - not having the same
> >>>delay for all frequencies may result in audible differences between
> >>>the original and up/downsampled signal
> >>>
> >>>  http://en.wikipedia.org/wiki/Group_delay
> >>>
> >>>gives a table, which however seems to indicate that "not being quite"
> >>>linear phase wouldn't lead to audible problems
> >>
> >>ok, thanks for explaining this. we should have this and similar things
> >>available in our documentation actually. either on a wiki page on
> >>synthesis, or even a real documentation chapter about synthesis. thoughts?
> >
> >Maybe a new doxi file on synthesis details? I could write a few
> >paragraphs on the resampler.
>
> that'd be good. will you check that in to docs/ then?
> one file that may be remotely suitable is:
>   http://beast.gtk.org/architecture.html
> but it's probably much better to just start synthesis-details.doxi.

Will do.

> >Oversampling is first upsampling a 44100 Hz signal to 88200 Hz, and then
> >downsampling it again to 44100 Hz. It's what I first designed the filters
> >for: for oversampling the engine. Thus I benchmarked it as a separate
> >case.
>
> hm, i still don't have a good idea if we won't need n-times oversampling
> for the whole engine. basically, because i have a rough idea on what usual
> input output rates are or could be (44.1k, 48k, 88.2k, 96k), but not what
> good rates are to run the synthesis engine at (48K, 56K, 66.15K, 64K,
> 72K)...

Well, I've at least already thought about accelerated FIR based
implementations: you can accelerate every rate change by a rational
factor (P/Q), e.g. 3/2 upsampling or 5/4 upsampling (because you can
then build complete coefficient tables).

However, the code will be quite a bit slower than factor 2 upsampling
due to two reasons:

 * you cannot copy samples from the original signal as often as you
   can for factor 2 upsampling
 * for rational rates, you cannot use half band filters, so you don't
   get filters with every other coefficient zero

So in the worst case, we lose a factor of 2 in performance for the
first point and another factor of 2 for the second. But of course these
are estimates and can't replace real benchmarking of an implementation
once it exists.
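
To make that concrete, here is a minimal (unoptimized) sketch of
rational P/Q resampling with a polyphase FIR; the coefficients h are
assumed to be a lowpass designed at P times the input rate with cutoff
min(pi/P, pi/Q), and all names are made up for this example:

#include <cstddef>
#include <vector>

std::vector<float>
resample_rational (const std::vector<float>& x,  /* input signal */
                   const std::vector<float>& h,  /* FIR lowpass taps */
                   int P, int Q)                 /* rate factor P/Q */
{
  std::vector<float> y;
  const long n_out = (long) x.size() * P / Q;
  for (long m = 0; m < n_out; m++)
    {
      const long pos = m * Q;       /* position in the zero stuffed signal */
      const int  phase = pos % P;   /* which polyphase branch applies here */
      double accu = 0;
      /* walk one branch: taps h[phase], h[phase + P], h[phase + 2P], ... */
      for (size_t k = phase; k < h.size(); k += P)
        {
          const long i = (pos - (long) k) / P;   /* input sample index */
          if (i >= 0 && i < (long) x.size())
            accu += h[k] * x[i];
        }
      y.push_back ((float) (P * accu));  /* compensate zero stuffing loss */
    }
  return y;
}

Note how every output sample needs a full branch of taps here; for
factor 2 upsampling with a half band filter, half of them would be zero
and every other output sample a plain copy.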

> >>>As you see, the variant which uses doubles for intermediate values is
> >>>not much better than the SSE variant, and both fulfill the spec without
> >>>problems.
> >>
> >>have you by any chance benched the FPU variant with doubles against the
> >>FPU variant with floats btw?
> >
> >Well, I tried it now: the FPU variant without doubles is quite a bit (15%)
> >faster than the variant which uses doubles as intermediate values.
> >
> >If you want *really cool* speedups, you can use gcc-4.1 with float
> >temporaries, -ftree-vectorize and -ffast-math. That auto vectorization
> >thing really works, and replaces the FPU instructions with SSE
> >instructions automagically. It's not much slower than my hand crafted
> >version. But then again, we wanted a FPU variant to have a FPU variant,
> >right?
>
> erm, i can't believe your gcc did that without also specifying a
> processor type...

Well, I never have to specify a processor type, because my gcc only
supports one target: native AMD64 code. But you are of course right in
the sense that my gcc always produces code that is optimized for the
processor it will run on, and always knows about all the instructions
my processor supports, and so on.

> and when we get processor specific, we have to provide alternative
> compilation objects and need a mechanism to clearly identify and
> select the required instruction sets during runtime.

I understand why you don't want to support _many_ object files per
algorithm. However, it may be reasonable to support _two_ object files
per algorithm: one compiled with -msse -ftree-vectorize and one without.
This also needs to be done for Bse::Resampler. It would make sense to
do this at least for common algorithms, like scaling a float block by a
float value or adding two float blocks together.

And we need a runtime check for whether SSE is available. But that's
not a problem, because there is one in arts/flow/cpuinfo.* that we can
simply copy over.
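
To illustrate the kind of two-object dispatching I mean, here is a
minimal sketch; the function names and file layout are made up, and
with a sufficiently recent gcc, __builtin_cpu_supports could stand in
for the copied cpuinfo check:

#include <cstddef>

/* plain FPU version, compiled without -msse */
static void
block_scale_fpu (float *block, size_t n, float factor)
{
  for (size_t i = 0; i < n; i++)
    block[i] *= factor;
}

/* in the real setup this identical source would live in a second
 * object file compiled with -msse -ftree-vectorize; it is repeated
 * here only to keep the sketch self contained */
static void
block_scale_sse (float *block, size_t n, float factor)
{
  for (size_t i = 0; i < n; i++)
    block[i] *= factor;
}

typedef void (*BlockScaleFunc) (float *block, size_t n, float factor);

/* pick the implementation once at startup */
static BlockScaleFunc
choose_block_scale ()
{
  if (__builtin_cpu_supports ("sse"))  /* gcc builtin CPU feature test */
    return block_scale_sse;
  return block_scale_fpu;
}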

   Cu... Stefan
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan

Re: Fast factor 2 resampling

Tim Janik
On Mon, 10 Apr 2006, Stefan Westerfeld wrote:

>   Hi!
>
> On Wed, Apr 05, 2006 at 02:05:37AM +0200, Tim Janik wrote:
>> On Tue, 4 Apr 2006, Stefan Westerfeld wrote:
>>>> ok, ok, first things first ;)
>>>>
>>>> as far as i see, we only have a couple use cases at hand, supposedly
>>>> comprehensive filter setups are:
>>>> -  8bit:  48dB
>>>> - 12bit:  72dB
>>>> - 16bit:  96dB
>>>> - 20bit: 120dB
>>>> - 24bit: 144dB
>>>>
>>>> if we have those 5 cases covered by coefficient sets, that'd be good
>>>> enough to check the stuff in to CVS and have production ready up/down
>>>> sampling.
>>>
>>> Yes, these sound reasonable. Although picking which filter setup to use
>>> may not be as easy as looking at the precision of the input data.
>>>
>>> For example ogg input data could be resampled with 96dB coefficients for
>>> performance reasons, or 8bit input data could be resampled with a higher
>>> order filter to get better transition steepness.
>>
>> but that'd just be another choice out of those 5, other than the obvious
>> one.
>> or am i misunderstanding you and you want to point out a missing setup?
>
> No, I was just pointing out that choosing from these 5
> should not (always) be automated for datahandles. In the plain C API,
> this means that we now have
>
> /* --- resampling datahandles with the factor 2 --- */
> GslDataHandle* bse_data_handle_new_upsample2 (GslDataHandle *src_handle, int precision_bits);
>
> instead of the old API
>
> GslDataHandle* bse_data_handle_new_upsample2 (GslDataHandle *src_handle);
>
>
> But actually there is a case that is not covered very well (and cannot
> be covered with the code as it is designed right now), and that's
> resampling files with a low sample rate. In that case, the aliasing
> area that I've designed into the inaudible range (22050-26100 Hz if
> we're resampling 44100 Hz recordings) moves down into the audible range
> (for instance 11025-13050 Hz when upsampling a 22050 Hz recording by
> factor 2).

sorry, i don't understand. can you elaborate on what you mean with
"aliasing area" here?

>> and when we get processor specific, we have to provide alternative
>> compilation objects and need a mechanism to clearly identify and
>> select the required instruction sets during runtime.
>
> I understand why you don't want to support _many_ object files per
> algorithm. However, it may be reasonable to support _two_ object files
> per algorithm: one compiled with -msse -ftree-vectorize and one without.
> This also needs to be done for Bse::Resampler. It would make sense to
> do this at least for common algorithms, like scaling a float block by a
> float value or adding two float blocks together.
>
> And we need a runtime check for whether SSE is available. But that's
> not a problem, because there is one in arts/flow/cpuinfo.* that we can
> simply copy over.

the runtime CPU check and build system changes to build plugins with
and without SSE support are now in CVS HEAD.

>   Cu... Stefan

---
ciaoTJ

Re: Fast factor 2 resampling

Stefan Westerfeld
   Hi!

On Tue, Apr 18, 2006 at 05:39:30PM +0200, Tim Janik wrote:

> >/* --- resampling datahandles with the factor 2 --- */
> >GslDataHandle* bse_data_handle_new_upsample2 (GslDataHandle *src_handle,
> >int precision_bits);
> >
> >instead of the old API
> >
> >GslDataHandle* bse_data_handle_new_upsample2 (GslDataHandle *src_handle);
> >
> >
> >But actually there is a case that is not covered very well (and cannot
> >be covered with the code as it is designed right now), and that's
> >resampling files with a low sample rate. In that case, the aliasing
> >area that I've designed into the inaudible range (22050-26100 Hz if
> >we're resampling 44100 Hz recordings) moves down into the audible range
> >(for instance 11025-13050 Hz when upsampling a 22050 Hz recording by
> >factor 2).
>
> sorry, i don't understand. can you elaborate on what you mean with
> "aliasing area" here?

Suppose this is your original spectrum:


    # #
    ###
    #### #
    ######
    ######
--------------------> frequency
    |    |
    0    nyquist

Zero padding (inserting a zero after every sample) introduces a
mirrored copy of the spectrum:

    # #      # #
    ###      ###
    #### ## ####
    ############
    ############
--------------------> frequency
    |    |
    |    old nyquist
    0          | new nyquist
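
In code, the zero padding step is just this (a sketch):

#include <cstddef>
#include <vector>

/* double the sample rate by inserting a zero after every input
 * sample; this is what creates the mirrored spectrum copy shown
 * above, which the filter then has to remove */
static std::vector<float>
zero_stuff2 (const std::vector<float>& x)
{
  std::vector<float> up (x.size() * 2, 0.0f);
  for (size_t i = 0; i < x.size(); i++)
    up[2 * i] = x[i];
  return up;
}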

We now need to filter this, and using a half band filter for the task
means that we'll have an attenuation of -6 dB (0.5 on a linear scale)
at the old nyquist frequency. Thus, there is an area starting slightly
below and ending slightly above the old nyquist frequency where the
result is not identical to the original signal.
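
The 0.5 value at the old nyquist frequency can be checked numerically;
the tiny [-1 0 9 16 9 0 -1]/32 kernel below is just a textbook half
band example, not one of our coefficient sets:

#include <cmath>
#include <complex>
#include <cstdio>

int
main ()
{
  /* half band filter: center tap 1/2, every other off-center tap zero */
  const double h[7] = { -1/32., 0, 9/32., 16/32., 9/32., 0, -1/32. };
  const double omega = M_PI / 2;   /* old nyquist at the doubled rate */
  std::complex<double> response = 0;
  for (int n = 0; n < 7; n++)
    response += h[n] * std::exp (std::complex<double> (0.0, -omega * n));
  printf ("|H| at old nyquist = %f\n", std::abs (response));  /* 0.500000 */
  return 0;
}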

Filter:

    ####
    #####
    ######
    #######
    ########
--------------------> frequency
    |   \---/
    |    aliasing area
    0          | new nyquist

Filtered signal:

    # #
    ###  
    ####  
    ######
    #######
--------------------> frequency
    |   \---/  
    |    aliasing area
    0          | new nyquist

As you can see, some frequencies have been introduced which were not
present in the original signal, and some which were present have been
attenuated. As long as the original sample rate is high, this is not a
problem, because the affected band is inaudible. But if the original
sample rate is low, the artifacts fall into the audible range.
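
To put numbers on "low": the band edge ratio 26100/22050 can be read
off the 44100 Hz case above and scales proportionally with the source
rate; a quick back-of-the-envelope sketch:

#include <cstdio>

int
main ()
{
  const double rates[4] = { 44100, 32000, 22050, 8000 };
  for (int i = 0; i < 4; i++)
    {
      const double lo = rates[i] / 2;                  /* old nyquist */
      const double hi = rates[i] / 2 * 26100 / 22050;  /* upper band edge */
      printf ("%5.0f Hz material: aliasing area %.0f-%.0f Hz\n",
              rates[i], lo, hi);
    }
  return 0;
}

For 8000 Hz material this already lands at 4000-4735 Hz, well inside
the audible range.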

   Cu... Stefan
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan