Rust Audio

Nannou the creative coding framework awarded MOSS grant for Rust audio dev

Hey folks, very happy to be announcing this!

I’ll likely start this work within the next week or two and will aim to keep the Rust Audio folks updated on plans and progress in this thread. I’ll post back here with the CPAL issues and PRs we have in our sights soon.


Yay! Congrats!

I’d appreciate work on CPAL very much; I just revived my project this weekend and hit the old CPAL problems again. The inability to cleanly exit the event loop is particularly annoying.

Hi @mindtree. This is great news!

I’m interested in the “standard audio graph abstraction and crate” because it would be awesome if it could be combined with rsynth as well. So please let me know when you’re about to design this, so that I can have a look at it and maybe give some useful input.
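To make the idea of a shared audio-graph abstraction a little more concrete, here is a rough sketch in plain Rust. The `Node` trait and `Gain` type are purely hypothetical names for illustration; they are not taken from any published cpal, nannou, or rsynth API, and the eventual design may look quite different:

```rust
// Hypothetical sketch of a shared audio-graph node abstraction.
// Names are illustrative only, not from any real crate.

/// A node processes zero or more input buffers into a single output buffer.
trait Node {
    fn process(&mut self, inputs: &[&[f32]], output: &mut [f32]);
}

/// A gain node: sums its inputs and scales the result.
struct Gain(f32);

impl Node for Gain {
    fn process(&mut self, inputs: &[&[f32]], output: &mut [f32]) {
        for (i, sample) in output.iter_mut().enumerate() {
            // Sum the i-th sample of every input, then apply the gain factor.
            *sample = inputs.iter().map(|buf| buf[i]).sum::<f32>() * self.0;
        }
    }
}

fn main() {
    let mut gain = Gain(0.5);
    let input = [1.0_f32, -1.0, 0.5, 0.0];
    let mut output = [0.0_f32; 4];
    gain.process(&[&input[..]], &mut output);
    println!("{:?}", output); // each sample halved
}
```

The appeal of a trait this small is that backend-specific crates (jack, cpal, VST hosts) would only need to drive `process` with their own buffers, while DSP crates implement it.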

I had a look at CPAL and it seems similar in purpose to rsynth, with the difference that CPAL focuses on “general purpose” audio backends (alsa, wasapi, coreaudio, …), whereas rsynth focuses on backends for music purposes (VST, jack, hopefully lv2, …). Ideally, rsynth and CPAL could be unified into one crate (only the “backend abstraction” part; the synth-specific stuff from rsynth could be split into a separate crate). I’m afraid I don’t have the bandwidth for unifying them, however (though adding CPAL as an extra backend for rsynth is now on the radar and seems doable).

That being said, I think it can be inspiring to compare the design of the two crates. You can find my notes on the design of rsynth here. I hope this can be of help.

Thanks for the info Pieter! I’m currently hacking on CPAL but I’ll let you know once my focus shifts onto the audio graph abstraction so that we can discuss some more!

I had a look at CPAL and it seems that it is similar in purpose to rsynth, with the difference that CPAL focuses on “general purpose” audio backends

To clarify a little, CPAL aims to provide access to input, output (and in the future, duplex) audio device streams in a portable, cross-platform manner and nothing more. Its scope can be likened to that of PortAudio or RtAudio, but with the aim of being built in pure Rust, at least down to the level of the platform’s native audio host API(s). The idea is to leave higher-level, opinionated abstractions such as synthesis, sample playback, and audio processing to other libraries that are built either on top of CPAL or used alongside it.

Regarding unification of rsynth and cpal: I think it’s fine to keep the two projects separate, as both have quite different goals, as you’ve mentioned. I haven’t looked too closely at the API exposed by rsynth, but one way in which rsynth might benefit would be to use cpal for the low-level, cross-platform stream support on the standalone-application side of things. Note, however, that cpal does not yet support jack as a host, though a recently added Host API should make it possible to do so.
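One way to picture “cpal as a low-level backend for rsynth” is a small callback-driven backend trait that either jack or cpal could sit behind. This is a hypothetical sketch in plain Rust; none of these names come from the real rsynth or cpal APIs, and the offline backend below simply stands in for a real-time host so the idea can be exercised without audio hardware:

```rust
// Hypothetical backend abstraction; names are illustrative only and are
// not taken from the actual rsynth or cpal APIs.

/// A render callback fills an output buffer once per processing cycle.
type RenderFn = Box<dyn FnMut(&mut [f32])>;

/// Anything that can drive a render callback: jack, cpal, a test harness...
trait AudioBackend {
    fn run(&mut self, callback: RenderFn);
}

/// An offline backend that renders a fixed number of buffers into memory,
/// standing in for a real-time host.
struct OfflineBackend {
    buffer_size: usize,
    num_buffers: usize,
    rendered: Vec<f32>,
}

impl AudioBackend for OfflineBackend {
    fn run(&mut self, mut callback: RenderFn) {
        for _ in 0..self.num_buffers {
            let mut buf = vec![0.0_f32; self.buffer_size];
            callback(&mut buf); // let the "synth" fill the buffer
            self.rendered.extend_from_slice(&buf);
        }
    }
}

fn main() {
    let mut backend = OfflineBackend {
        buffer_size: 4,
        num_buffers: 2,
        rendered: Vec::new(),
    };
    // A trivial "synth" that outputs a constant DC value.
    backend.run(Box::new(|buf| buf.fill(0.25)));
    println!("rendered {} samples", backend.rendered.len());
}
```

With a trait like this, the synth-specific code only ever sees the callback, and swapping jack for cpal becomes a matter of choosing which `AudioBackend` implementation drives it.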

Thanks again for sharing!

Yeah, I came to the same conclusion.

I would like to note that it’s also my intention to split the “higher level abstractions” currently in rsynth into separate crates anyway. The reason they are together now is that it’s much easier that way to synchronise the – still frequent – breaking changes.

Concerning jack: rsynth already has support for it. I think the value of CPAL lies more in its support for other, more mainstream audio systems, like alsa.

As an aside, speaking of alsa: supporting full duplex alsa is a pain. I tried it once in an earlier project, but I simply gave up after I discovered that the sample full-duplex application distributed with alsa refused to work on my PC, with an error stating that something was not supported by my audio card.

I noted that CPAL and rsynth use different approaches for handling audio buffers, which may make it difficult to have an ecosystem that supports both. I think this is important enough to warrant its own discussion, so I’ve created a separate forum topic for it. Because you seem to have a lot of experience using the “interleaved” approach, I am interested in hearing your input on this.
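For readers unfamiliar with the two buffer layouts being contrasted here: an interleaved buffer stores frames sample-by-sample across channels, e.g. stereo as `[L0, R0, L1, R1, …]`, while the separate-channels (planar) approach keeps `[L0, L1, …]` and `[R0, R1, …]` in distinct buffers. A minimal conversion sketch in plain Rust, assuming `f32` samples:

```rust
/// Convert an interleaved buffer into one Vec per channel (planar layout).
/// E.g. stereo [L0, R0, L1, R1] becomes [[L0, L1], [R0, R1]].
fn deinterleave(interleaved: &[f32], channels: usize) -> Vec<Vec<f32>> {
    let frames = interleaved.len() / channels;
    (0..channels)
        .map(|ch| (0..frames).map(|f| interleaved[f * channels + ch]).collect())
        .collect()
}

/// The inverse: merge per-channel buffers back into one interleaved buffer.
fn interleave(planar: &[Vec<f32>]) -> Vec<f32> {
    let frames = planar.first().map_or(0, |ch| ch.len());
    (0..frames)
        .flat_map(|f| planar.iter().map(move |ch| ch[f]))
        .collect()
}

fn main() {
    // Stereo, two frames: left channel 0.1/0.2, right channel 0.9/0.8.
    let interleaved = vec![0.1, 0.9, 0.2, 0.8];
    let planar = deinterleave(&interleaved, 2);
    let expected: Vec<Vec<f32>> = vec![vec![0.1, 0.2], vec![0.9, 0.8]];
    assert_eq!(planar, expected);
    // Converting back yields the original buffer exactly (pure copies).
    assert_eq!(interleave(&planar), interleaved);
    println!("round-trip ok");
}
```

An ecosystem that settles on one layout forces crates using the other to pay for conversions like these on every buffer, which is part of why the choice matters.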