[Openal] Audio Entry Point/Compositing(? -- Mix)
sjbaker1 at airmail.net
Thu Sep 15 19:18:03 PDT 2005
Ryan C. Gordon wrote:
>>Even if you persuaded it to generate a readable WAV file, it would still
>>take an hour to mix together an hour's worth of music. What you'd like is
>>to have it work at maximum speed using close to 100% of the CPU to mix
>>an hour's worth of music in 30 seconds or so...but there is no way to do
> Ok, I'm intrigued, I admit.
Yeah - it's one of those gaps in the API that make you think
at a fundamental level. Sometimes the best ideas come from thinking
about these kinds of problems.
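The heart of such an offline path is just a mixing loop with nothing tying it to the audio clock. A minimal sketch of that idea (all names are mine - nothing here is OpenAL API):

```c
#include <assert.h>
#include <stddef.h>

/* Mix 'count' 16-bit mono samples from 'src' into 'dst' at the given
 * gain (fixed-point, 256 == unity), clamping to the 16-bit range.
 * Nothing rate-limits this loop, so an hour of audio mixes as fast as
 * the CPU can run it -- the whole point of an offline render path. */
static void mix_into(short *dst, const short *src, size_t count, int gain256)
{
    for (size_t i = 0; i < count; i++) {
        long s = dst[i] + ((long)src[i] * gain256) / 256;
        if (s >  32767) s =  32767;
        if (s < -32768) s = -32768;
        dst[i] = (short)s;
    }
}
```

Run once per source per output block and you have the guts of a faster-than-realtime mixer; the realtime path is the same loop, just throttled by the audio hardware.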
> In terms of extension, how would this work?
> Perhaps open a "file:///x.wav" device or something in alcOpenDevice(),
It would need additional parameters for sampling frequency, format
(byte or short), stereo or not, etc.
But while we're doing this - why force the output to have to go to
a file? Maybe the application wants to use it for something immediately?
I think it's a VERY bad idea to build a particular file format into
the OpenAL core library - that stuff belongs out in ALUT somewhere.
This is why (IMHO) it should be a 'render to alBuffer' feature...but then
we'd need to be able to read back from alBuffer (a contentious issue) - and
add ALUT API to write an alBuffer to disk in whatever formats we might
choose to support in the future.
The idea of queuing buffers that we have for sound SOURCES could also be
used for sound RECORDING - so you'd provide a set of buffers that you
could write out to disk and recycle as needed.
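The recycling discipline for such recording buffers mirrors the source-queue one exactly. Here's a self-contained sketch of just that discipline - the pool is a plain array, and none of these names are AL API:

```c
#include <assert.h>
#include <stddef.h>

#define NBUF  4
#define BUFSZ 256

/* A tiny pool modelling queued recording buffers: the recorder fills
 * the buffer at 'fill', the app drains ("writes to disk") the buffer
 * at 'drain', and drained buffers are immediately reusable. */
struct bufpool {
    short data[NBUF][BUFSZ];
    int   fill;    /* next buffer the recorder writes         */
    int   drain;   /* next buffer the app unqueues and saves  */
    int   queued;  /* buffers currently holding recorded data */
};

/* Recorder side: returns the index of the buffer it filled,
 * or -1 if the app fell behind and no free buffer remains. */
static int pool_fill(struct bufpool *p)
{
    if (p->queued == NBUF)
        return -1;                 /* overrun */
    int i = p->fill;
    p->fill = (p->fill + 1) % NBUF;
    p->queued++;
    return i;
}

/* App side: unqueue the oldest filled buffer so it can be recycled. */
static int pool_drain(struct bufpool *p)
{
    if (p->queued == 0)
        return -1;                 /* nothing recorded yet */
    int i = p->drain;
    p->drain = (p->drain + 1) % NBUF;
    p->queued--;
    return i;
}
```

With a handful of buffers in flight, the app writes each drained buffer to disk and hands it straight back, so memory stays bounded no matter how long the recording runs.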
> make sure the context is synchronous, so it mixes during
> alcProcessContext() when the app has adjusted all the sources (etc) and
> never any other time, and maybe add a query for ALC_TICKS (or something
> better named) that reports the number of milliseconds that the AL has
> processed (which may be way ahead of or way behind wallclock time,
> depending on the processing involved)?
Yes - I guess that could work.
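The bookkeeping behind such an ALC_TICKS query would be trivial on the implementation side: the context just accumulates the duration of the audio it has actually mixed, independent of wallclock. A sketch of that accounting (the ALC_TICKS idea is from the quoted proposal; the code and names are mine):

```c
#include <assert.h>

/* Offline context clock: each process call mixes 'frames' sample
 * frames and advances the processed-time counter by the matching
 * number of milliseconds -- which may run far ahead of (or behind)
 * wallclock, exactly as the proposal describes. */
struct ctx_clock {
    long freq;          /* output sampling frequency, Hz        */
    long processed_ms;  /* what an ALC_TICKS query would report */
};

static void ctx_process(struct ctx_clock *c, long frames)
{
    c->processed_ms += (frames * 1000) / c->freq;
}
```

An app rendering to disk would call this in a tight loop and watch processed_ms race ahead of real time; a heavily loaded realtime mixer would watch it fall behind.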
> GL totally dodges this issue by the nature of graphics...you don't
> actually care how long a given frame takes. Well, okay, you CARE, but
> not in the same way audio processing cares.
That's a really fundamental distinction - yes. Video is inherently
encoded in discrete frames - audio isn't. From that fairly inescapable
fact - all else follows.
I think the way to view that is through the whole business of buffer queuing.
If you had a recording API, OpenAL could fill up each buffer in one
near-instantaneous operation - no matter the length - at the sampling
frequency you set for that buffer.
The only problem would be if your application wanted to change positions
of sources and such during the recording process...but I guess it could
just give the recording system a lot of really tiny buffers to record into.
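Rendering in small chunks is enough to get moving sources: the app updates parameters between chunks, and the changes land at chunk granularity. A sketch of that loop, with a gain value standing in for source position (again, none of this is AL API):

```c
#include <assert.h>
#include <stddef.h>

#define CHUNK 64

/* Render 'nchunks' chunks of a unit-amplitude source, asking the
 * caller for the gain (0..256) before each chunk -- the same idea as
 * handing the recorder lots of tiny buffers so source parameters can
 * change partway through an offline render. */
static void render_chunks(short *out, int nchunks,
                          int (*gain_for_chunk)(int))
{
    for (int c = 0; c < nchunks; c++) {
        int g = gain_for_chunk(c);   /* app adjusts "position" here */
        for (int i = 0; i < CHUNK; i++)
            out[c * CHUNK + i] = (short)((32767 * g) / 256);
    }
}

/* Example parameter track: the source "arrives" at chunk 2. */
static int step_gain(int c) { return c < 2 ? 0 : 256; }
```

Smaller chunks mean finer-grained parameter changes at the cost of more per-chunk overhead - the same trade-off streaming apps already make when picking queue-buffer sizes.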
---------------------------- Steve Baker -------------------------
HomeEmail: <sjbaker1 at airmail.net> WorkEmail: <sjbaker at link.com>
HomePage : http://www.sjbaker.org
Projects : http://plib.sf.net http://tuxaqfh.sf.net
-----BEGIN GEEK CODE BLOCK-----
GCS d-- s:+ a+ C++++$ UL+++$ P--- L++++$ E--- W+++ N o+ K? w--- !O M-
V-- PS++ PE- Y-- PGP-- t+ 5 X R+++ tv b++ DI++ D G+ e++ h--(-) r+++ y++++
-----END GEEK CODE BLOCK-----