[Openal-devel] Some questions....
Thu Jan 8 12:23:07 PST 2004
On Thu, 2004-01-08 at 01:17, Garin Hiebert wrote:
> > On Wed, 2004-01-07 at 15:15, Garin Hiebert wrote:
> >> The reason why hardware sources don't "rollover" to software sources
> >> is that the transition from one to the other would be noticeable in a
> >> really bad way. If you have a 7.1 speaker setup playing X sources
> >> with EAX support, you don't want to add one more source and find that
> >> it is only spatialized into the front two speakers without reverb --
> >> that would just sound strange. On top of that, all of a sudden a
> >> game's framerate would radically change. You might be running along
> >> at 60 frames per second with hardware sources, and then suddenly the
> >> game creates five software sources and the framerate goes down to 45
> >> fps because of the entirely new rendering path being activated.
> > Well, A3D does this just nicely. I never noticed any framerate drop
> > or change in the way sources sound.
> > If software fallback is handled by the soundcard driver (as it happens
> > with A3D), it may have more knowledge to make the software fallback
> > sound similar to the hardware rendering. Seems reasonable to me.
> I don't mean to say that it _can't_ be done, just that given the way
> OpenAL was formed it wasn't a good idea.
> With A3D, Aureal provided a
> seamless transition by writing both the "hardware" and the "software"
> effects themselves. The framerate dropoff was linear for A3D because most
> of the work was in software, not hardware. On Creative hardware, we
> implement EAX entirely in hardware (meaning there is no performance
> penalty beyond parameter-passing overhead when adding EAX to a source).
No. Aureal hardware does 100% of the spatialization processing in
hardware, but their cards support at most 16 hardware sources. They
support stereo FIR HRTFs (56 taps each), ITD, ILD, atmospheric filters,
linear pitch shift, crosstalk cancellation, mixing, and equalization in
hardware. Besides that, they support 80 additional PCM DMA streams.
I'm pretty sure about that because I reverse engineered the 3D engine of
these cards.
I have never found a clear statement anywhere of what the Creative Labs
sound cards really do in hardware, since the vendor binary driver is the
only one that supports "something" on their DSP. For example, how many
taps do the HRTF filters of the SoundBlaster Live use? Or are they
implemented by other means (not FIR filters)?
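For reference, an HRTF FIR filter of the kind attributed to the Aureal
hardware above is just a pair of short convolutions, one per ear. This
is a minimal sketch of that computation; the 56-tap count comes from the
description above, but the function name is hypothetical and real HRTF
coefficient sets are not reproduced here:

```c
#include <stddef.h>

#define HRTF_TAPS 56  /* tap count cited above for the Aureal hardware */

/* Convolve a mono input block with per-ear FIR coefficients, producing
 * left and right outputs. The caller must provide HRTF_TAPS - 1 samples
 * of signal history before in[0] (i.e. `in` points into a larger
 * buffer), so the first output samples have a full filter window. */
void hrtf_fir_stereo(const float *in, size_t n,
                     const float left_taps[HRTF_TAPS],
                     const float right_taps[HRTF_TAPS],
                     float *out_l, float *out_r)
{
    for (size_t i = 0; i < n; ++i) {
        float l = 0.0f, r = 0.0f;
        for (size_t k = 0; k < HRTF_TAPS; ++k) {
            float s = in[(ptrdiff_t)i - (ptrdiff_t)k]; /* reaches into history */
            l += left_taps[k] * s;
            r += right_taps[k] * s;
        }
        out_l[i] = l;
        out_r[i] = r;
    }
}
```

At 56 taps per ear this is 112 multiply-accumulates per output sample,
which gives a feel for why doing it for 16 sources at once is a job for
dedicated hardware.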
> With OpenAL, the original idea was to include a variety of vendors who are
> not going to want to share all their core or extension code with one
> another to create a seamless experience across the hardware/software
> transition. For instance, Creative isn't going to share its EAX effect
> code to allow other vendors to adopt the feature at the same quality
> level. NVIDIA has its own capabilities and extensions that it isn't
> going to share with others either...
OK, that means it's not entirely a technical reason, but rather a
"political" one. Maybe one should fork to have an OpenAL variant backed
only by technical arguments.
> >> At some point, the application should deal with voice management
> >> anyways -- having an un-constrained number of sources for your
> >> application will just lead to miserable performance for no good reason
> >> (in that there probably aren't 758 things you really should be
> >> listening to at the same time anyways even if they can all be
> >> rendered).
> > Well, in the same way, this common functionality could be handled by
> > a library. Maybe an extension? Any comments?
> In my opinion, the audio engine which uses OpenAL should provide voice
> management capabilities to match the application's needs. It could
> certainly be done in OpenAL, but if your application's needs don't match
> how the feature was put into OpenAL, then the application is stuck having
> to interact with OpenAL in an awkward way.
Yep, I agree with that.
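The application-side voice management argued for above is essentially a
priority-based eviction policy over a fixed pool. Here is a minimal
sketch of just that policy (all names are hypothetical, and MAX_VOICES
of 16 merely echoes the hardware voice count mentioned earlier); in a
real OpenAL engine, allocating would map onto alGenSources and evicting
onto alSourceStop/alDeleteSources, which are omitted so the policy
stays self-contained:

```c
#include <stddef.h>

#define MAX_VOICES 16  /* e.g. the hardware voice count mentioned above */

typedef struct {
    int in_use;
    int priority;   /* higher = more important */
    unsigned id;    /* would hold the AL source name */
} Voice;

typedef struct {
    Voice v[MAX_VOICES];
    unsigned next_id;
} VoicePool;

/* Grant a voice to a play request: take a free slot if one exists,
 * otherwise steal the lowest-priority active voice if the request
 * outranks it. Returns the granted slot index, or -1 if every active
 * voice outranks the request (the sound is simply not played). */
int voice_alloc(VoicePool *p, int priority)
{
    int lowest = -1;
    for (int i = 0; i < MAX_VOICES; ++i) {
        if (!p->v[i].in_use) {            /* free slot: take it */
            p->v[i].in_use = 1;
            p->v[i].priority = priority;
            p->v[i].id = ++p->next_id;
            return i;
        }
        if (lowest < 0 || p->v[i].priority < p->v[lowest].priority)
            lowest = i;
    }
    /* Pool full: steal the lowest-priority voice if we outrank it.
     * A real engine would alSourceStop()/alDeleteSources() here. */
    if (p->v[lowest].priority < priority) {
        p->v[lowest].priority = priority;
        p->v[lowest].id = ++p->next_id;
        return lowest;
    }
    return -1;
}
```

Because the policy lives in the engine rather than in OpenAL, each game
can decide for itself what "priority" means (distance to listener,
gameplay importance, and so on), which is exactly the flexibility
argument made above.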