[Question] 16 speakers in Windows using AmbiSonics (without HDMI and Receiver) #999
Comments
Hi. The 16 channels for the 3rd order ambisonic mix that OpenAL Soft uses internally aren't 16 discrete outputs/speaker feeds. It needs to be decoded before being played back, and actually for a proper reproduction of a full 3rd order ambisonic mix, you need more than 16 speakers (you need more speakers than ambisonic channels to properly play it back), and those speakers need to be arranged in a suitable layout to sound good.

The internal decoders OpenAL Soft has are for outputting to the supported channel configurations: mono, stereo, quad, 5.1, 6.1, and 7.1 (along with an experimental 7.1.4 output, and an uncommon 3D7.1 setup). It can also decode using HRTF (binaural/headphones), or convert to stereo-compatible UHJ. Alternatively, the ambisonic mix can be output directly without decoding or converting it, but that requires a suitable output API and an external decoder if it's to be listened to.

The primary problem with outputting the internal ambisonic mix on Windows is that WASAPI seems limited to 8 channels for 7.1 output, which isn't enough even for 2nd order ambisonics (9 channels), let alone the 16 channels for 3rd order. To provide the ambisonic channels to ASIO, you'd need JACK installed as well as a custom build of OpenAL Soft with the JACK backend enabled. Or alternatively, make a build of the latest Git version with the PortAudio backend enabled, which should be able to output to ASIO. You'll need the proper development stuff installed to build OpenAL Soft with support for JACK or PortAudio if you go that route.

If there are free public development headers and such available to build for using ASIO, I can think about maybe creating a backend to use it directly. Or maybe I can enable JACK or PortAudio for future Windows binary builds (lower priority than the normal Windows audio APIs, so they have to be expressly selected in the config file) to have a way to output to ASIO. But to reiterate, the ambisonic mix needs to be decoded before being played on speakers, with an appropriate decoder designed for your particular speaker arrangement, regardless of how it gets to ASIO.
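For illustration, a rough sketch of what selecting such a backend in the config file could look like. Treat the option names and values here as assumptions to verify against the alsoftrc.sample that ships with your build:

```ini
# Hypothetical alsoft.ini fragment; check alsoftrc.sample for your version.
[general]
# Try the JACK backend first; the trailing comma leaves the remaining
# backends available as fallbacks.
drivers = jack,
# Optionally keep the mix as 3rd-order ambisonic channels instead of
# decoding to a speaker layout (if the build/version supports it).
channels = ambi3
```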
Thank you for the fast reply!
Oh, then I've got something seriously wrong. I thought that the order defines the amount of speakers directly (3rd order = (3+1)² = 16 speakers). So the current speaker presets all use 2nd order, but with far fewer than the 9 "channels" I would normally assume, e.g. a hexagonal layout with 6 or fewer. What's the formula for this, i.e. how many speakers can (or must) one address with the full 16 "channels" of 3rd order?
So the problem is not that I couldn't talk to ASIO; the problem is the decoder itself. I thought that the decoder algorithm currently implemented is universal and that I only need a proper *.ambdec file for any kind of speaker matrix, where it defines the angle and distance for each speaker. I'm just realizing that it's not that easy. So you have to write a separate decoder for each output config?
The order defines the number of channels in the ambisonic signal, but those channels relate to the spherical harmonics of the soundfield, not individual output speakers. Technically the number of ambisonic channels is the minimum number of discrete output feeds necessary to retain the complete soundfield, but it's not good enough for listening to (you can decode to and encode from an appropriate 16 channel layout for 3rd order without loss, but when listened to you'll get audible "holes" in the output where certain directions will have noticeably reduced quality compared to others, because of the way the sound interacts with our ears/brain). For example, full 1st order ambisonics uses 4 channels ((1+1)² = 4), but needs more than 4 speakers for good playback.
The internal decoders (except for 7.1.4 and 3D7.1) are horizontal only. They ignore the height-related channels, which cuts down on the number of ambisonic channels being decoded, reducing the number of output speakers needed for playback. For horizontal-only ambisonics, the channel calculation is 2×order + 1 (3 channels for 1st order, 5 for 2nd, 7 for 3rd).

As for figuring out how many speaker outputs you need to properly play back a given ambisonic order, I'm not sure if there are any hard and fast rules to it. Generally more is better, but avoid going overboard.
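To make the two counting rules concrete, a tiny standalone example (illustrative only, not OpenAL Soft code):

```cpp
// Channel counts per ambisonic order:
//   full-sphere (periphonic): (order + 1)^2
//   horizontal-only:          2*order + 1
#include <cstdio>

int main() {
    for (int order = 1; order <= 3; ++order) {
        int full3d = (order + 1) * (order + 1); // 4, 9, 16
        int horiz  = 2 * order + 1;             // 3, 5, 7
        std::printf("order %d: %2d full-3D channels, %d horizontal-only\n",
                    order, full3d, horiz);
    }
    return 0;
}
```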
The ambdec file is the decoder. It specifies how to take the ambisonic channels produced by OpenAL Soft, and mix them for each output channel on a given output configuration. Currently OpenAL Soft can only handle ambdec files for the supported surround output configurations (quad, 5.1, etc), basically overriding the internal decoder for specific output configurations. If you want something different, you'll need to have OpenAL Soft output the ambisonic mix to an external decoder instead. I do intend to add the ability to specify more generic output configurations for OpenAL Soft, to allow for decoders that output to a general assortment of channels unrelated to known configurations (e.g. a generic 8-channel output that's not 7.1, or a generic 16-channel output), but it's not possible for now.
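Conceptually, the matrix an ambdec file describes gets applied per sample frame; a minimal sketch of that idea (illustrative only, not OpenAL Soft's actual decoder code):

```cpp
// Toy ambisonic decode: each speaker feed is a weighted sum of the
// ambisonic channels, with the weights coming from the decoder matrix
// (which is what an .ambdec file ultimately encodes).
#include <array>
#include <vector>

using AmbiFrame     = std::array<float, 16>;              // 3rd order = 16 channels
using DecoderMatrix = std::vector<std::array<float, 16>>; // one row per speaker

std::vector<float> decode_frame(const DecoderMatrix &matrix, const AmbiFrame &in)
{
    std::vector<float> out(matrix.size(), 0.0f);
    for (size_t spk = 0; spk < matrix.size(); ++spk)
        for (size_t ch = 0; ch < in.size(); ++ch)
            out[spk] += matrix[spk][ch] * in[ch];
    return out; // one sample per output speaker
}
```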
Thank you for all the detailed descriptions!
3D-surround is moving in that direction anyway. The max config for 3D-Spatial (Dolby, Microsoft) at the moment is 8.1.4.4.
AFAIK, modes like 8.1.4.4 only work through headphones/virtual surround sound. From what I remember reading, anyway. Setting up bottom/ground/floor speakers would be pretty difficult for most people, likely even more difficult than ceiling speakers, since people sit pretty close to the floor and things like the sofa and end tables would get in the way (either preventing proper speaker placement, or blocking line-of-sight with the speaker). It's a lot easier to simulate it over headphones with HRTF than using physical speakers.

From a software perspective, the main issue with configurations like that is lack of system support. Most audio APIs don't have labels for bottom speakers, so there's no way to properly address them. Some APIs may let you access output channels without position labels, which leaves the order unspecified (there's no way to say which channel is which, leaving it to the user to fiddle with connections to assign them correctly). So far, the only audio API I'm aware of that allows specifying the appropriate bottom channels is the spatial audio extensions for WASAPI (not even base WASAPI can), which OpenAL Soft only has experimental support for.

That said, it wouldn't be hard for OpenAL Soft to at least support it internally. Such a decoder would even be better and more stable than the 7.1.4 one it currently has. But the availability of using it will be very limited.
@kcat I'm also interested in higher ambisonics output, but for storing it in WAV/AMB format via WaveWriter to use an external decoder/player like Virtual Home Theater, or maybe just to have a more precise sound field when visualizing it to see if a game's actually using height or checking if there are reversed axes. Like here's a 1OA recording I made a while ago: UT2004_FuMa_Ambisonics_recording_to_virtual_7.1_binaural_HRTF.mp4

By the way, is it possible to output to multiple backends simultaneously? Because it'd be neat to be able to hear the game audio while we're recording HOA.

Also, I know this is probably far-fetched so I'm not sure if it should be a whole new issue, but how feasible would it be to record into an object-based format? Not necessarily a completely new format, since there's already LAF (someone's working on an OpenAL Soft-based player, though we can already use Cavern), IAMF and MPEG-H 3D (which supports HOA, though I'm not sure if it's a free format).
Something like this? Apparently it just requires submitting a license agreement though
No, unfortunately. Can't output to multiple devices in one backend either. Best you could do is set up some kind of virtual splitter cable that can forward the samples to the ambisonic decoder device and pass a copy to another virtual device that outputs to a file on disk. Or maybe recording software like OBS can capture the audio from the app (making sure it doesn't try to remix/upmix/downmix the channels and keeps them as-is in the recording).
It would take a fair bit of work. The mixer would have to be redesigned to allow individual sources/source channels to output to their own channels instead of mixing into the main ambisonic buffer, somehow also passing along the position data. Voice/channel limits would also become an issue, something OpenAL Soft hasn't really had to worry about but these object-based audio systems do. It's something to keep in mind, but until a practical option becomes available, I don't know if the effort would be worth it.
That is extremely interesting. Particularly the point about "A PCM block is created for each second of audio. [...] For object-based exports, extra PCM tracks are used for object movement data." Does that mean the dynamic object positions only update once a second? That doesn't seem correct, given the Atmos demo (and that video's conversion/rendering of it) seems to update more frequently/smoothly. Unless the idea is the sound moves over the course of that second, which would make it more practical to deal with, though it would pose problems for outputting it in real-time. Quick fine movements would be lost, and there may be a delay in the position updates.

Still, LAF looks like a promising format that OpenAL Soft could play back in some fashion, at least (that video mentions it used their "wip audio renderer based on OpenAL Soft", and I'm not sure precisely what that means: a modified OpenAL Soft, the guts of OpenAL Soft ripped out and put into something else, or something on top of an unmodified OpenAL Soft?). And though outputting dynamic channels isn't currently practical, OpenAL Soft could output static channel layouts, including esoteric ones like 3D7.1, 7.1.4.4, etc., since it deals with positions instead of labels. And if Atmos and similar formats are known well enough to convert to LAF, then... hmm... Some LAF samples (and/or Atmos samples that can be converted to LAF) would be nice to look at and play around with.
Looks like it could be a problem for the LGPL. Or at least a problem for people making their own OpenAL Soft builds that want support for it.
@kcat here's the response from the LAF dev:
BTW, I forgot to mention that someone else made an object-based audio scene editor/player, also using OpenAL Soft (app+sample, WASD to move, QE for height or enter/exit reverb area), that uses an XML-based format to store metadata for listener and emitter positions over time and even environment effects, which I think would get lost in translation to LAF.
Here's Universe Fury and Test Tones.
This person suggests it's ok to distribute binaries with ASIO as long as the SDK itself isn't included (in the codebase or release binaries, I presume), but not according to this GPL project. However, FlexASIO downloads the SDK during CI and, after complying with license requests, it seems to be in the clear, though it's (modified) MIT. So are flexible licenses the reason why some projects are allowed to distribute binaries of ASIO-based projects? Or perhaps all it takes is to reach out to Steinberg to get explicit permission while adhering to their license requests?

WASAPI Exclusive might be easier to implement since there's already a proof of concept and fixes, though in my experience it seems to have similar latency to ASIO, but it's more prone to crackling at low buffer sizes. BTW, have you looked into WDM/Kernel Streaming? I think it's what ASIO uses, so we might as well just cut out the middle-man 🤔
Hmm, that seems a bit of an obtuse way to encode the position over time, interleaving it between each sample frame. Seems rather limiting (:P) as well, if you wanted to use proper compression like FLAC or Opus instead of raw PCM samples (which any user-oriented format should handle), since you presumably don't want to compress position vectors and you can't put them between PCM frames when it's compressed like that. Also unfortunate that the format doesn't specify sizes for headers; I guess you just have to hope a custom header won't contain the marker words the parser looks for. The more I look at it, the less I really like the format without some updates to support proper compression and have a more robust structure.

As for playback with OpenAL Soft, it seems the "proper" way to do it would be with a loopback device where you control the number of samples rendered between updates to source positions. Otherwise, the best that can be done is to keep positions buffered as they're read in, and continuously check the source offset and set the proper positions based on the current time being processed for each source. Pretty crude and inexact, prone to late or missed movements, but it's probably good enough for just listening to most things. More sophisticated options for specifying source movement at set times would be needed for being more precise and accurate with normal playback.
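A rough sketch of that loopback approach, assuming the ALC_SOFT_loopback extension names from AL/alext.h (error handling, source/buffer setup, and format checks via alcIsRenderFormatSupportedSOFT are omitted, so treat this as an outline rather than working playback code):

```cpp
#include <AL/al.h>
#include <AL/alc.h>
#include <AL/alext.h>
#include <vector>

int main() {
    // ALC_SOFT_loopback entry points are fetched at runtime.
    auto openLoopback = reinterpret_cast<LPALCLOOPBACKOPENDEVICESOFT>(
        alcGetProcAddress(nullptr, "alcLoopbackOpenDeviceSOFT"));
    auto renderSamples = reinterpret_cast<LPALCRENDERSAMPLESSOFT>(
        alcGetProcAddress(nullptr, "alcRenderSamplesSOFT"));

    ALCdevice *dev = openLoopback(nullptr);
    const ALCint attrs[] = {
        ALC_FORMAT_CHANNELS_SOFT, ALC_STEREO_SOFT, // or an ambisonic layout via ALC_SOFT_loopback_bformat
        ALC_FORMAT_TYPE_SOFT,     ALC_FLOAT_SOFT,
        ALC_FREQUENCY,            48000,
        0
    };
    ALCcontext *ctx = alcCreateContext(dev, attrs);
    alcMakeContextCurrent(ctx);

    // ... create sources/buffers and start playing them here ...

    const ALCsizei framesPerBlock = 1024;
    std::vector<float> block(framesPerBlock * 2); // stereo in this sketch
    for (int i = 0; i < 100; ++i) {
        // Set source positions for the time span the *next* block covers,
        // then render exactly that many frames; timing stays sample-accurate.
        // alSource3f(src, AL_POSITION, x, y, z);
        renderSamples(dev, block.data(), framesPerBlock);
        // ... write 'block' to a file or hand it to an external decoder ...
    }

    alcMakeContextCurrent(nullptr);
    alcDestroyContext(ctx);
    alcCloseDevice(dev);
    return 0;
}
```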
Perhaps. I think it would probably depend on what exactly is in the headers, and whether there are any free/libre alternatives to the official ASIO headers. You can build against non-free headers and still be GPL-compliant, otherwise you couldn't distribute builds of (L)GPL code made with MSVC using MS's headers (I don't think the system library exception says anything about headers, just the DLLs), but MinGW comes with a mostly complete and compatible free/libre alternative to the official C/C++ Windows SDK, so users have options to build the project without relying on non-free stuff, even if it's not the exact/official SDK.
I didn't think that was still a thing after Windows XP. If it is, it's probably internal Windows stuff and not stable (as in API/ABI stable) across versions, which is why there are things like ASIO and WASAPI exclusive mode to reduce the overhead and provide a stable interface.
Sadly I've looked at several projects and I've yet to find a FLOSS alternative to ASIO.
It seems the key here is Steinberg's license directly conflicting with GPL, like in the case of Audacity:
So if FLOSS isn't a viable option, would OpenAL Soft's license allow not including ASIO SDK code/headers in the repo while still providing binaries that do contain ASIO (added only during CI)? Perhaps @dechamps could comment on whether it'd be feasible to add ASIO here like in FlexASIO 👀
ASIO4ALL uses it and supports "any Windows OS since Win98SE". Also it has a C++ API and sample
Audio Repeater can also use Kernel Streaming to get low latency when using a virtual audio device for HeSuVi.
I don't think that's true. I remember dechamps/FlexASIO#100 from a few years back where it was demonstrated that WASAPI is able to open a 10-channel output device just fine, and you can check that for yourself using emulated software WDM drivers such as Virtual Audio Cable. IIRC that is true even in shared mode, as long as the channel count is correctly configured in the Windows control panel settings for the audio device. Of course, the bigger issue is that actual WDM audio devices that expose this many channels seem to be extremely rare - most many-channel audio interfaces typically expose pairs of channels as Windows audio endpoints, such that you end up with N/2 stereo endpoints, not a single N-channel endpoint. With these devices, typically the only way to get them to expose a single device with N channels is to use ASIO.
I would strongly recommend against enabling ASIO support in PortAudio due to PortAudio/portaudio#696. Especially in a library. You would run the risk of compromising the integrity of the application process, even if the user does not intend to use ASIO at all. I would strongly recommend using ASIO directly instead, without going through PortAudio.
It's not really correct to say that "ASIO uses WDM/KS". Some ASIO drivers may be implemented using WDM/KS internally (most notably ASIO4ALL and FlexASIO in WDM/KS mode, but it's conceivable some manufacturer-specific ASIO drivers do that as well), but there is nothing stopping anyone from developing an ASIO driver that interfaces with the hardware in a completely different way.
It is definitely still a thing - it is what the Windows Audio Engine (which runs in user mode) uses to talk to the underlying WDM audio device driver. It forms the interface between user mode and kernel mode for audio.
It's complicated. Nowadays, Microsoft mostly sees WDM/KS as the interface between the Windows Audio Engine (i.e. WASAPI) and the WDM audio device driver. Microsoft does not advertise nor recommend WDM/KS as an application API, but in practice there is nothing stopping an application from using WDM/KS directly. I doubt Microsoft will break applications using WDM/KS, because that would mean breaking ASIO4ALL, which is quite popular, therefore such a move would piss off quite a lot of Windows users. The WDM/KS ABIs are documented and very stable, because it is the ABI that WDM audio device drivers have to implement. Microsoft cannot break these ABIs without breaking audio drivers. That said, there have been cases in the past where Microsoft introduced new WDM/KS "modes" (the most prominent example being WaveRT, introduced in Vista), and over time new WDM audio device drivers dropped support for pre-Vista and ended up only supporting WaveRT, breaking any WDM/KS client code that did not support WaveRT. These things take many years to happen though.
I wouldn't recommend attempting to write a WDM/KS client. WDM/KS is an atrociously complicated and error-prone API with lots of device-specific sharp edges (e.g. workarounds for driver bugs) that require extensive hardware testing to sort out. To give an idea, the PortAudio WDM/KS client is 7 KLOC and despite that is still quite buggy/unreliable. I would strongly recommend using WASAPI Exclusive instead, which is basically just a wrapper around WDM/KS in practice (I mean that quite literally - if you use WASAPI Exclusive, your app ends up making WDM/KS calls directly from the application process behind the scenes).
I am not a lawyer so am just speculating here, but my understanding is you are bound to Steinberg licensing terms as soon as you use the ASIO SDK in any way. I believe one possible way to work around that could be to not use the ASIO SDK at all but instead enumerate and call ASIO drivers directly. My understanding (again, I am not a lawyer) is that this is protected activity in many legal jurisdictions, where providing interoperability is a valid legal excuse for using APIs you don't own (see also this famous case). I know of several ASIO Host Applications that do not use the ASIO SDK anywhere in their build process: they just enumerate drivers in the same way the ASIO SDK would (it's basically just listing some entries in the Windows Registry), and then instantiate ASIO driver COM classes directly and call the relevant methods using their own declarations in their own headers. NAudio is one such example. This is not particularly difficult to do: the code in the ASIO SDK is very small and trivial to reimplement.
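To give a sense of how small that enumeration step is, here's a hedged sketch (registry layout as commonly described for ASIO drivers: subkeys of HKLM\SOFTWARE\ASIO, each with a CLSID value; actually hosting a driver additionally needs your own declarations of its COM interface):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // ASIO drivers register themselves under HKLM\SOFTWARE\ASIO; each subkey
    // names a driver and holds a "CLSID" string usable with CoCreateInstance.
    HKEY asioKey;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "SOFTWARE\\ASIO", 0,
                      KEY_READ, &asioKey) != ERROR_SUCCESS)
        return 1;

    char name[256];
    for (DWORD i = 0;; ++i) {
        DWORD nameLen = sizeof(name);
        if (RegEnumKeyExA(asioKey, i, name, &nameLen, nullptr, nullptr,
                          nullptr, nullptr) != ERROR_SUCCESS)
            break; // no more drivers

        HKEY drvKey;
        if (RegOpenKeyExA(asioKey, name, 0, KEY_READ, &drvKey) != ERROR_SUCCESS)
            continue;

        char clsid[64] = {};
        DWORD clsidLen = sizeof(clsid);
        if (RegQueryValueExA(drvKey, "CLSID", nullptr, nullptr,
                             reinterpret_cast<BYTE*>(clsid), &clsidLen) == ERROR_SUCCESS)
            std::printf("ASIO driver \"%s\" -> CLSID %s\n", name, clsid);
        RegCloseKey(drvKey);
    }
    RegCloseKey(asioKey);
    return 0;
}
```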
Yeah, as far as I understand (also as a non-lawyer), APIs aren't copyrightable. As long as new headers and such can be created in a clean room manner, whatever license there may be with the original SDK isn't relevant to code that doesn't use it and uses clean alternatives instead. And projects like wineasio provide a FLOSS driver implementation that the headers would provide access to, so it wouldn't be the case that the interface could only be used for proprietary/closed source implementations either. What could be an issue is using the ASIO names/trademarks where not otherwise necessary, though. MESA used to have a similar issue: even though they were effectively an OpenGL implementation with their own headers and libs, they couldn't officially call it OpenGL without the trademark/conformance licensing.
That is informative, and may help me implement my own interface header. Having sample code that uses the API to play/record audio would be helpful too (I tried looking at PortAudio's source, but I couldn't find the code for their ASIO backend).
There's some in the ASIO SDK, but obviously it assumes you're going through the ASIO SDK "glue layer" (i.e. the small shim between the app-facing ASIO SDK API and the stable driver-facing ASIO API/ABI - a very small amount of code since the two are very similar). Another thing you can do is run some sample ASIO app (such as the one in the ASIO SDK) with FlexASIO with logging enabled - the FlexASIO log will show every single call going into a driver from the perspective of the driver API (i.e. what you would have to code against). If and when you get serious about this I would also recommend reading this.
It's here, but again this is going through the ASIO SDK client API, not the driver API.
@dechamps Is there any significant functional difference between WASAPI exclusive, WDM-KS and ASIO/4ALL in terms of...?
Asking because someone already managed to force OpenAL Soft to use WASAPI in exclusive mode, but it was an unfinished/outdated implementation, so I wonder if it would be more feasible/practical compared to ASIO/WDM-KS. Besides being limited to 10 channels, which isn't enough for what the OP was asking for, unless it's indeed possible to output to two 8-channel devices simultaneously 🤔 On a side note, in case we go off-topic here, we also started a latency-focused discussion here: #682
It really depends on the quality of the respective drivers. I don't think ASIO in itself makes it possible to achieve latencies that are lower than what you could achieve with a good WDM driver implementation. Indeed modern WDM/KS in WaveRT packet mode (which also describes WASAPI Exclusive, since it's just a wrapper around that) is basically the same thing as ASIO: a double buffer allocated on the driver side with a notification mechanism. So if there are any latency differences between the two, my money is on the manufacturer not spending an equal amount of time/money/effort on their ASIO driver vs. their WDM driver, not fundamental limitations of the framework used. Or it could also be the app's fault: the app having better ASIO client code than WASAPI client code (though that seems less likely as it's harder to get wrong).
I'm not intimately familiar with ASIO4ALL, but my understanding is it just wraps WDM/KS. That's bit-perfect in the exact same way WASAPI Exclusive is: the buffer goes directly to the WDM driver without any alteration. Now if there are features in ASIO4ALL where it would mess with the signal in some way, that's a different story, but as far as I know there aren't any. This is something that can easily be verified: just use a purely digital loopback connection, e.g. a S/PDIF loopback or a purely software WDM driver such as Virtual Audio Cable. Since it's purely digital, the bits you get on one end should be strictly identical to the bits you sent on the other end. This will always be true as far as WDM/KS and WASAPI Exclusive are concerned.
Yeah BSODs with WDM/KS are not too surprising. In theory they are never supposed to happen assuming bug-free WDM drivers, but WDM drivers like any other software are not completely bug-free. More importantly, I suspect audio WDM drivers are only tested with the Windows Audio Engine as a client (and hopefully WASAPI Exclusive), not niche/unusual clients like ASIO4ALL, PortAudio WDM/KS or this "AudioRepeater" thing you mentioned. This means that when using these third-party WDM/KS clients you may be triggering code paths in the WDM driver that the Windows Audio Engine would never exercise. At this point you end up in untested territory, and a bug in a Windows kernel-mode driver can often lead to a BSOD. To be clear this is never the client's fault - BSODs are always the fault of the driver, period - but unusual WDM/KS clients can be particularly effective at finding edge cases where the driver is not handling some inputs/states correctly.
I would always recommend using WASAPI if you can. It is the modern API that is recommended by Microsoft. It is more user-friendly and likely more reliable than ASIO (which requires the user to fiddle with often poor-quality drivers), and it is much, much easier to use correctly than WDM/KS.
As I said earlier I don't think WASAPI itself is limited to 10 channels, especially in exclusive mode. The real challenge is finding a device that will expose 10+ channels on a single endpoint from its WDM driver.
Yes. A final GPL binary cannot forfeit any of the freedoms.
That would be redistribution, and it would still violate the GPL.
To link to the DLLs... you need said headers?
I mean, it is really all just about some kind of API existing to get your waves to the sound card drivers and outputs (fun fact: before OpenAL 1.1 and the soon-to-be-released Vista, the whole thing was just a cross-platform abstraction that on Windows just used the normal native DirectSound, even on Audigys). FlexASIO (or at least the principles behind it, without having to rely on Steinberg's interfaces as if this were a legacy Windows XP application) is where bad bitches seem like they should be at today.
What about the mysterious
When building with MSVC, yes, that's my point. You don't need MS's headers to be able to link to system DLLs using MinGW as it comes with its own FLOSS headers, but when building with MSVC, you use MS's headers, which weren't always permissively licensed. So being able to build (L)GPL software with MSVC, and distribute binaries that used MS's proprietary headers, is apparently fine.
This is really where I'm standing. Admittedly, it's kinda fun to create my own interface to ASIO drivers to make them work, but I had been under the impression that ASIO was something more along the lines of JACK or CoreAudio, an audio server/service designed for low-latency audio production work, rather than giving apps direct hardware access. In the latter case, I'm not sure I see much benefit to it over WASAPI Exclusive mode if Shared mode has too much latency.

API-wise, ASIO does have the nice feature of being callback-driven, but I don't think that's much of a benefit given the drawbacks (no real device selection, no channel labels or configuration detection, names being limited to ANSI strings, the sloppy way "direct output" is indicated, having to deal with individual driver quirks). The only real utility ASIO would provide is having more than 8 or 10 output channels, but given how it's not designed to be a plugin-based system able to do something with those channels, and the awkwardness of making it output to a specific device that expects what's being output, it would really require some dedication on the part of the user to make it useful.

JACK for Windows includes an ASIO driver for connecting ASIO apps to a JACK server, where you have more freedom to route connections between apps, plugins, and devices, but at that point, why not just use JACK directly? Unless there's an alternative low-latency server with an ASIO driver that's preferred over JACK on Windows.
That just indicates the channels are unlabelled, so rather than saying "4 channels: front-left, front-right, back-left, back-right", it just says "4 channels", and it's completely up to the driver where they end up. That doesn't allow getting more channels than are specified; it just leaves unspecified what the channels you have are (really, any channels not specified in the channel mask are "direct out", e.g. if you specify 4 channels with only the front-left and front-right mask bits, then the first two channels are front stereo, and the extra two go wherever the driver wants).
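As an illustration of that nChannels/dwChannelMask relationship, a sketch of a format description asking for four channels while only labelling the front pair, leaving the other two as the unlabelled "direct out" channels described above (a fragment, not a complete WASAPI setup):

```cpp
#include <windows.h>
#include <initguid.h>  // so KSDATAFORMAT_SUBTYPE_IEEE_FLOAT is defined in this file
#include <mmreg.h>
#include <ks.h>
#include <ksmedia.h>

WAVEFORMATEXTENSIBLE make_format()
{
    WAVEFORMATEXTENSIBLE fmt{};
    fmt.Format.wFormatTag      = WAVE_FORMAT_EXTENSIBLE;
    fmt.Format.nChannels       = 4;       // four PCM channels...
    fmt.Format.nSamplesPerSec  = 48000;
    fmt.Format.wBitsPerSample  = 32;
    fmt.Format.nBlockAlign     = fmt.Format.nChannels * fmt.Format.wBitsPerSample / 8;
    fmt.Format.nAvgBytesPerSec = fmt.Format.nSamplesPerSec * fmt.Format.nBlockAlign;
    fmt.Format.cbSize          = sizeof(fmt) - sizeof(fmt.Format);
    fmt.Samples.wValidBitsPerSample = 32;
    // ...but only the first two are labelled; where the rest end up is
    // left to the driver ("direct out").
    fmt.dwChannelMask = SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT;
    fmt.SubFormat     = KSDATAFORMAT_SUBTYPE_IEEE_FLOAT;
    return fmt;
}
```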
Yes, but my main point was that's 100% covered by the system headers exception.
I don't know about CoreAudio, but AFAIR JACK is far more properly maintained and coded.
You must appreciate how you couldn't really do the former without the latter, at least on older Windows.
As #682 notes, shared mode is actually WAY better in Windows 10. Possibly not so much as to make exclusive redundant, but still. It would be interesting both to see just how good modern WASAPI now is (I wonder if they couldn't have cut another ms in W11 perhaps), and to check whether, as far as "underground APIs" go, KS isn't actually the most you'd ever really need (even though, who knows, the results may differ between shitty sound cards and professional ones, and perhaps ASIO could still have a measurable advantage).
I mean, JACK cannot do anything without relying on another API on Windows...
Considering what Etienne said (and considering that at least on <W10, KS truly seems the absolute best, at least when it comes to support), I guess it might indeed be worth looking into some lower-level library.
Uhm... then why can't you just specify more?
Sure, it would've included some way to handle low-latency audio I/O, but I don't see that being too useful on its own without something to make use of it.
Yes, but that would be an implementation detail of JACK. The point I was getting at was that if the intent is to use something like JACK, to connect apps to filters/converters to devices and such, and you make an ASIO host app that would use an ASIO driver that's just a front for accessing that server, it would be better to use that server directly.
Because you can't get more channels than the device provides. If the device is reporting 8 channels, you can't get 16.
@mirh In my experience, WASAPI exclusive often crackles when forcing latency/buffer low enough to match ASIO4ALL, which is why I prefer to use the latter in DAWs like Reaper and OmniMIDI. I don't recall that happening with the WASAPI exclusive fork, but then again I think the lowest sample size I could set without the app crashing was like 160 (and a bit lower on lower sample rates), compared to 64 in ASIO.
@kcat I gave it a shot, and maybe I messed up the setup but results weren't good. Details in #1033 |
Sorry, maybe I implied too much: of course I was hinting at IAudioClient3's WASAPI exclusive, not the original one. Putting aside that I haven't heard many KS vs ASIO comparisons, even this bloody IAudioClient3 seems very badly "reviewed".
I'm really having a bad time understanding where else one would like to have sound be sent, if we don't want (or have) so many physical outputs, and even "virtual" ones aren't desired?
I'm curious what

The main issue, it seems, with getting really low latencies with WASAPI exclusive is it being event driven. A period passes and the device is ready for more audio, so WASAPI signals an event that the app's mixing thread then has to wake up on and fill the buffer in response to, rather than the driver calling into the app directly.
The problem at issue was making WASAPI virtual devices have more channels. Despite being set up to take a 16-channel third-order ambisonic signal with a virtual cable, the device only reported 8 channels in a 7.1 configuration to apps querying it. So OpenAL Soft could try to create a stream that's 16 channels and specify a channel mask for them, but that doesn't get around what the device itself reports.
I have not looked into this in detail, but intuitively I would be very skeptical this would be a problem. First, this doesn't take as long as you think it does. As long as the proper thread priorities are set and things are operating normally, the Windows scheduler will immediately schedule the thread as soon as the kernel signals the event. There is no "wait". Second, I don't think anyone can actually implement a true "callback-driven" approach like you're describing, at least not on Windows. The kernel won't let you - you would end up calling into user space directly from the kernel, I don't see that happening. And even if you could, I'm not sure how that would be significantly faster than signaling an event and then immediately scheduling the thread waiting on the event (which is how it currently works). The only way to improve on that is to get rid of the context switch into user mode, and for that you would need to run the app logic itself in kernel mode (good luck). The ASIO API is callback-driven on the surface, but in practice I would expect all ASIO drivers to be ultimately implemented by spinning up a thread, waiting for events in that thread, and firing the ASIO callbacks when the event is signaled. It looks like callbacks from the ASIO host app's perspective, but behind the scenes it's all ultimately based on waiting for some event from the kernel. It may perhaps be possible to get this very slightly faster by having the thread spin on the event instead of waiting for it (to remove the context switch cost), but I doubt it'd be worth it.
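For reference, the event-driven model being described boils down to something like the following sketch (exclusive-mode WASAPI; device activation, Initialize() with AUDCLNT_STREAMFLAGS_EVENTCALLBACK, format negotiation, and error handling are all omitted):

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

// The "callback" is really just a thread blocking on an event that the
// audio engine/driver signals once per period.
void render_loop(IAudioClient *client, IAudioRenderClient *render,
                 UINT32 bufferFrames, volatile bool &running)
{
    HANDLE ev = CreateEventW(nullptr, FALSE, FALSE, nullptr);
    client->SetEventHandle(ev); // stream must have been initialized with
                                // AUDCLNT_STREAMFLAGS_EVENTCALLBACK
    client->Start();

    while (running) {
        // Sleep until the device has consumed a period and wants more audio.
        WaitForSingleObject(ev, INFINITE);

        BYTE *data = nullptr;
        if (SUCCEEDED(render->GetBuffer(bufferFrames, &data))) {
            // ... mix/fill bufferFrames frames of audio into 'data' here ...
            render->ReleaseBuffer(bufferFrames, 0);
        }
    }

    client->Stop();
    CloseHandle(ev);
}
```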
From a totally theoretical read of the documentation, it doesn't really seem that's up to you to bother or rethink.
I mean, if you are targeting a virtual device then I don't see why it should?
Yeah, I suppose requesting an
That doesn't seem to be talking about creating a stream with more channels than the WASAPI device accepts. If a device only accepts 8 channels, it won't accept 16 channels. But as for the quoted part about nChannels and dwChannelMask:
The mismatch behavior doesn't match my experience with the topic. If nChannels is greater than the number of bits set in dwChannelMask, the extra channels have acted as unlabelled "direct out" channels rather than being rejected, and I'd hope that behavior holds, as that would make 3- and 4-channel UHJ files more practical. You could specify just the front-left and front-right mask bits for the stereo-compatible pair and let the extra UHJ channels go through unlabelled.
In case it's relevant/useful regarding channel count expansion, it turns out we can use CRU to fake up to 7.1 by modifying the EDID of the stereo-only playback device of a DisplayPort/HDMI display, so that HeSuVi can virtualize the system surround mix into stereo that gets forwarded to the 2 real channels (the 6 extra ones are silent/unused, of course). I've always wanted to do something like that for regular sound cards, so I emailed dechamps and this was his response:
Hello kcat,
Are there any plans to use the full AmbiSonics 3rd order (16 speakers max) with OpenAL-Soft in a way that I can directly route 16 channels from the internal decoder to discrete outputs, let's say 16 ASIO channels on two sound cards?
I don't know how to explain it better, but I want to use up to 16 speakers with the outputs of two ASIO cards. (no Receiver, and also no HDMI)
The only way to get around the Windows 7.1 config (as I see it) would be to directly address the ASIO channels that are available in the system.
So the decoder has to use an algorithm that produces true 16 channels from the internal B-format stream, and map them to the channel table.
But I assume that OpenAL-Soft will never ship the tool to do the matrix calculations for 16 speakers, nor the ambdec config file itself.
The "highest" configuration at the moment uses 7.1.4 speakers. The downside of it is that it produces no sound from below.
The idea was that I'd build my own matrix/rig of speakers (up to 16), but I don't know at the moment how to implement this with OpenAL-Soft.