#164362 - Ruben - Wed Oct 29, 2008 5:26 pm
Hi again. So I was trying to do delay/reverb, but I keep getting "errors" in it.
By that, I mean either bias errors or some other random, unknown error. :P
So basically...
Does anyone know (or know of any links, for that matter) how to implement delay "properly"? Cos mine sucks. Lol.
#164367 - silent_code - Wed Oct 29, 2008 9:22 pm
Keep in mind that there are different "types" of delay effects that build on the same principle but differ in timing, among other parameters.
What kind of artifacts do you get? Please capture some samples and link them, so that we can hear for ourselves.
Then we'd need some implementation details to help you improve your program.
Kind regards. :^)
_________________
July 5th 08: "Volumetric Shadow Demo" 1.6.0 (final) source released
June 5th 08: "Zombie NDS" WIP released!
It's all on my page, just click WWW below.
#164379 - DekuTree64 - Thu Oct 30, 2008 2:01 am
I've never found a description of reverb/echo that didn't make it sound way more complicated than it is. Here's my algorithm:
Make a buffer of x samples, x being the delay amount. At the end of each mixer call, make sure it's filled up with the most recent x output samples.
During mixing, add a sample from the reverb buffer to each output, until you're either done mixing or the reverb buffer runs out.
If you finish mixing first, then just memcpy the remaining samples in the reverb buffer over to the start of it, and fill it the rest of the way from the freshly generated output samples.
If the reverb buffer runs out first, then switch the reverb buffer pointer to the start of the output buffer (i.e. where you're currently mixing to), and continue on.
And that's it! I hope that doesn't make it sound way more complicated than it is. The last step is fun because it basically makes an endless reverb buffer, since the reverb pointer follows a few samples behind the output pointer, which continually generates new samples.
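Here's a minimal sketch of that scheme in C. All the names and sizes are mine, the block works on one whole mix at a time, and I've halved the echo so it decays rather than repeating forever; treat it as an illustration, not a drop-in mixer:

```c
#include <string.h>

#define DELAY 4  /* reverb delay in samples (tiny, for illustration) */

/* Holds the last DELAY output samples from the previous mix call
   (hypothetical buffer; zero-initialized as if silence came before). */
static short rev[DELAY];

/* Mix one block of n samples: each output gets the sample from DELAY
   positions back added in, halved so the echo decays. Once the saved
   reverb samples run out (i >= DELAY), we read straight from the output
   buffer we are currently filling -- the "endless reverb buffer" trick,
   since the reverb read position trails DELAY samples behind the write
   position. */
static void mix_block(const short *in, short *out, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        short echo = (i < DELAY) ? rev[i] : out[i - DELAY];
        out[i] = (short)(in[i] + echo / 2);
    }
    /* Refill rev with the most recent DELAY output samples. */
    if (n >= DELAY)
        memcpy(rev, out + n - DELAY, DELAY * sizeof(short));
}
```

Feeding a single impulse through it shows the echo recurring every DELAY samples at half the previous level.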
Also, keep in mind that the reverb buffer will be signed 8-bit, even if you're doing unsigned mixing. So there can't be any bias errors from it, since it needs no bias. But do make sure you load the samples from it as signed.
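The signed-load point matters in C too: if you read the buffer through an unsigned type, a negative sample comes back as a large positive value instead of sign-extending. A tiny illustration (function names are mine):

```c
/* Load an 8-bit reverb sample with proper sign extension:
   the cast reinterprets the byte as signed two's complement. */
static int load_sample_signed(unsigned char raw)
{
    return (signed char)raw;
}

/* The buggy version: zero-extends, so the byte 0xF6 (which should
   be the sample value -10) turns into 246 and wrecks the mix. */
static int load_sample_unsigned(unsigned char raw)
{
    return raw;
}
```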
_________________
___________
The best optimization is to do nothing at all.
Therefore a fully optimized program doesn't exist.
-Deku
#164380 - Ruben - Thu Oct 30, 2008 2:19 am
Well, yeah, that wasn't complicated to understand, but the bias error happens cos I'm clipping the data while it's still unsigned...
Unless..~
Are you supposed to do something like...
Code:

do {
    s32 new_value = (*dat_src++ - bias) + *rev_src++;
    // check for the reverb buffer end, etc, etc
    // now clip??
} while(--samples);
#164381 - DekuTree64 - Thu Oct 30, 2008 2:54 am
Yeah, like that.
I can't think of any reason that unsigned clipping would be necessary, and it's a bit more complicated since the min/max values depend on the bias rather than being constant like for signed clipping.
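For the signed case, clipping is just clamping to constant bounds. A sketch (the function name is mine; the bounds would be ±32767/−32768 for 16-bit output instead):

```c
/* Clamp a mixed 32-bit value into signed 8-bit range. The bounds
   are compile-time constants, unlike the unsigned case, where the
   min/max would have to be derived from the bias at runtime. */
static int clip_s8(int v)
{
    if (v >  127) return  127;
    if (v < -128) return -128;
    return v;
}
```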