Rust Audio

LV2 crate implementation and design

This is the topic for discussing the various design decisions and implementation details of the lv2 crate. :slight_smile:

As of now there are two different prototype implementations for this crate: @Janonard’s (here) and mine (here).

Therefore, the goal of this discussion is to define:

  1. the general directions and perimeter for the lv2 crate;
  2. which elements of the two prototypes’ designs are the best for those directions, then extract and use them to put together the design of the final lv2 crate.

@Janonard and I already started this discussion on his reddit thread a bit, so I’m going to continue it here, others are obviously more than welcome to give their opinions on this!

(Below are only my opinions about this, which are all up for debate, obviously.)

In terms of general design, I believe we should strive for safety, performance, and ergonomics, in that specific order. Although, thanks to Rust, we often end up being able to get all three. :wink:

The whole public API should be 100% safe to use (otherwise, why use Rust in the first place?), and this is the part about @Janonard’s prototype that I don’t really like: even the examples have to resort to lots of unsafe, which is definitely not a good thing, especially for beginners who may not comprehend the implications of unsafe in those cases.

In my prototype I managed to make a safe interface for most of those, though there is still quite a bit of work left before all of the book’s examples are done (actually I just pushed the code for the midigate example, to show off a bit more of the API).

Now, @Janonard, I believe you had some reservations about my own design (mostly the Atom API), so now that we’re here, feel free to share them!

I’m a bit of an outsider to LV2, so feel free to ignore me. :slight_smile:

We in the rust-vst group had a similar discussion recently, and I think the direction we’re going to start moving towards is having a vst-sys crate that just gives you pretty barebones, unsafe access to the VST2 protocol, and then on top of that we’re going to have a vst crate (which depends on vst-sys) that gives users the safety they expect from Rust.

This would allow anyone to come along and implement their own safe wrapper around the unsafe *-sys crate, if they wanted. This would also be pretty useful for the long-term, multi-API crate goal that people have been talking about.


Yep, that’s true, but I had to make them unsafe: the only part of my public API that’s unsafe is the dereferencing of the IO port pointers. These pointers come from the host and we have no way to know whether they are valid or whether the pointed-to values have the correct type. Therefore, I had to make it unsafe to let users know that these functions a) may yield undefined behavior, b) should only be used in the run method, and c) have to get their pointers from the connect_port method, exclusively.

I agree with you that this is not good, but the only way to properly solve it would require big abstractions without runtime costs, which is a heavy task. You made some progress there; could you maybe explain your approach to solving this problem? I’ve tried to look at it, but I’m not that familiar with Rust macros and the syn crate; it would be faster if you explained it! :grinning:

However, I would like to stress that, apart from that point, my public API is perfectly safe, especially the Atom crate! Not only that, it also uses Rust’s type system to ensure that written atoms are valid (e.g. borrowing mutably from the parent writing frame to ensure that there is only one frame writing to the port), something the C library can only dream of. There are unsafe methods publicly available, but they should be avoided from the outside as much as possible (just like, for example, std::mem::transmute).

Maybe the way to go is to stabilize your approach on the core specification first, and then fit my Atom library to it. Would that be an idea?

I don’t think that this is such a good idea for LV2: the normative part of LV2 is only a fairly small collection of type definitions. The provided functions are all inlined and very C-specific. Therefore, if someone wants to create their own safe wrappers for LV2, they just need to copy these types, which can be done in about half an hour, or a quarter if you’re fast. From my point of view, creating a new -sys crate is work that simply doesn’t give us an advantage.


That’s good to know! vst seems to be quite a bit more complicated unfortunately.

Our end goal is to create a moderate level of abstraction for a plugin that can target vst, lv2, AU, etc. with a single codebase. So, since VST has so much going on, we’re separating it into a -sys as @crsaracco said already.

To clarify - do you mean that either lv2-rs or lv2rs can be consumed by an aforementioned higher level library without the need for separating things into a sys crate, or that implementing lv2 is simple enough that it can be done standalone in little time? I’m very much new to lv2 as well.

edit: seems the discussion on this is already ongoing in another thread, I’ll take my questions over there :slight_smile:

In my implementation I actually did split the C bindings into a -sys crate, with the intents @crsaracco describes, mainly for two reasons:

  • Even though there is no binding to external functions in LV2 (there is no liblv2.so or lv2.dll to link to), the header files do actually define quite a bit: the struct definitions for all the specs, and all of the URIs as consts.
    So even though it could be made somewhat quickly from scratch, it’s still quite a bit of (potentially error-prone) work to vendor in, which goes against the code-sharing (and update-sharing) usually done in the Rust community.
  • I was lazy and just wanted to use bindgen to generate all of those C structs and consts for me. :stuck_out_tongue:

Actually (since I never really documented it…), here’s a crate-level view of my prototype:

  • lv2 is the crate that contains Rustic definitions and utilities for each specification. It defines stuff like port types, official extensions and host features types and methods. It is common to both plugins and hosts, and potentially higher-level libraries, as it pretty much implements LV2 as a general protocol.
  • lv2-plugin contains the utilities (mainly traits and macros) to safely implement LV2 plugins.
  • lilv is a high-level binding to the lilv library, which handles quite a few details for hosts (it is also used by Ardour, for instance). This library could be rewritten in Rust (outside of Turtle parsing it doesn’t do that much really), but I wanted to use it to be able to effectively test my Rust plugins.
  • lv2-plugin-test is a small crate to test LV2 plugins using the usual cargo tests. It actually does full-scale testing, loading the plugin object and metadata files using lilv, and running the plugin on randomly-filled test buffers.

There’s also some -sys and -derive crates for the crates that need C bindings and custom derives, respectively. :slight_smile:

If you’re having trouble understanding Rust macros (whether the declarative or procedural kind), I highly recommend taking a look at the cargo-expand tool. Here you can find a gist, produced by this tool, of the code the various macros generate for the amp example, but I’ll go over it a bit here:

For those who don’t know LV2 too well: pretty much all communication with the plugin is done via ports, whether it is for passing audio data (which, from a user’s perspective, behaves like literal audio ports on audio hardware, which you can plug and unplug to pretty much anything), for events (MIDI, …), or for control values. But in the end, they’re all pointers to various buffers.

The pointers to the buffers allocated by the host are passed to the plugin using the connect_port() function, which just associates a port index (defined in the plugin’s metadata) to a pointer.
Note however that the size of those buffers is never given in the connect_port function. It depends on the type of port (control buffers, for instance, are fixed-size, being just a single f32 value); for audio data, the number of samples the plugin is allowed to process is given in the run() function (the equivalent of the process() function in VST), as the sample_count parameter.
The plugin instance is therefore expected to store those pointers, and then only access them in the run() function, where it can know the size of the buffers behind the pointers.
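For readers who want to see the shape of this C interface, here is a rough sketch of those two entry points as they look through Rust FFI. The type aliases are mine for illustration; the actual definitions live in the lv2core C header.

```rust
use std::os::raw::c_void;

// Opaque plugin instance handle, as handed out by the plugin's instantiate().
pub type Lv2Handle = *mut c_void;

// connect_port(): associates a port index (from the plugin's metadata)
// with a host-allocated buffer. Note: no buffer size is passed here.
pub type ConnectPortFn =
    unsafe extern "C" fn(instance: Lv2Handle, port: u32, data_location: *mut c_void);

// run(): processes sample_count frames; only here does the plugin learn
// how large the audio buffers behind the stored pointers are.
pub type RunFn = unsafe extern "C" fn(instance: Lv2Handle, sample_count: u32);
```

Both functions are unavoidably unsafe from Rust’s point of view: they traffic in raw pointers whose validity only the host can guarantee.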

While this is fine in C, in safe Rust this poses some issues, as you pretty much have to build the slices from their separate components (one being from the instance struct and the other from a parameter), which naturally is an unsafe operation.

The way I fixed this is by declaring all of the ports in a dedicated struct, separate from the state of the plugin, and then deriving the Lv2Ports trait on it. The declared Ports struct actually combines the pointers with sample_count, so the slices can be obtained from it directly. The pointers inside the instance are actually held in a separate, automatically generated struct (called DerivedPortsConnections), which is then used to generate the final Ports struct in the run() function.
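To illustrate the two-struct pattern (with hypothetical names, not the actual derive output), the idea boils down to something like this: one struct holds the raw pointers between connect_port() calls, and a second, safe struct is built from it once sample_count is known.

```rust
use std::slice;

// What the derive would store between connect_port() calls:
struct AmpPortsConnections {
    input: *const f32,
    output: *mut f32,
}

// What run() actually receives: safe slices, sized by sample_count.
struct AmpPorts<'a> {
    input: &'a [f32],
    output: &'a mut [f32],
}

impl AmpPortsConnections {
    // Safety: the caller must guarantee both pointers are valid for
    // `sample_count` f32s. This is exactly the contract the LV2 host
    // promises, so the unsafety stays in one generated place.
    unsafe fn as_ports(&mut self, sample_count: u32) -> AmpPorts<'_> {
        AmpPorts {
            input: slice::from_raw_parts(self.input, sample_count as usize),
            output: slice::from_raw_parts_mut(self.output, sample_count as usize),
        }
    }
}

// The plugin's run() then only ever sees the safe struct:
fn run(ports: &mut AmpPorts, gain: f32) {
    for (out, sample) in ports.output.iter_mut().zip(ports.input.iter()) {
        *out = *sample * gain;
    }
}
```

The unsafe slice construction happens exactly once, in generated code, and the plugin author only writes safe code against AmpPorts.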

Oh, I missed that, sorry! I didn’t see the unsafety was only on the port access. ^^

We could do that, although I don’t really like how you handle URID caching, as it might allocate twice (once when calling the host’s map(), and once when inserting the result in the HashMap). I much prefer my approach, which just stores them in a struct, similarly to how C plugins do it (although the mapping behavior is derive’d and therefore safe), so accessing URIDs for a given feature or Atom type is pretty much zero-cost.

Also, you didn’t explain what you didn’t like in my Atom implementation, as you said in your reddit post! :slight_smile:

Okay, I hope that I got your approach on ports right:

You basically hide the connect_port method and the port de-referencing from the plugin implementer. Instead, the implementer creates one struct that contains the ports and their type. Then, a macro kicks in and creates the connect_port and the beginning of run for the plugin, right?

If that’s correct, I like it! At a later stage we could even create a build script or cargo extension that generates this port struct from the configuration files, which would mean that the port types are always correct, at least at compile time. Later, people could still modify the configuration, but I would consider this quite hacky and therefore not a big problem. Good!

Next, your point with the URID management. First, a little explanation for LV2 newcomers: LV2 heavily uses URIs to identify things. However, URIs are strings and therefore quite space-consuming and slow to compare. To fix this problem, hosts usually provide a feature to map URIs to integers, so-called URIDs. The host has to ensure that these URIDs are unique to every URI and therefore, this mapping can be quite costly. The standard recommends caching these URIDs instead of properly mapping them every time.
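As a toy illustration of that contract (this is not a real host implementation, just a sketch of the semantics): a host-side mapper only has to hand out one stable, unique integer per distinct URI.

```rust
use std::collections::HashMap;

// Toy model of the host's urid:map feature. Real hosts expose this
// through a function pointer in the URID map feature struct, and its
// cost is entirely up to the host.
struct HostUridMapper {
    next: u32,
    known: HashMap<String, u32>,
}

impl HostUridMapper {
    fn new() -> Self {
        HostUridMapper { next: 1, known: HashMap::new() }
    }

    // Same URI in, same URID out; new URI in, fresh unique URID out.
    fn map(&mut self, uri: &str) -> u32 {
        let next = &mut self.next;
        *self.known.entry(uri.to_string()).or_insert_with(|| {
            let id = *next;
            *next += 1;
            id
        })
    }
}
```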

I first tried to use structs to store URIDs too, but when I was working on the Atom library, I constantly had to pass references to this struct around. I also had to keep the type of the URID struct generic, since some atoms aren’t part of the core atom specification and therefore, use a different URID struct. This made the code very complicated and repetitive, which is something you should avoid in any case.

Also, the reason why it’s recommended to cache URIDs is that the cost of the provided mapping function is generally unknown: one host may do this pretty quickly, but another one may allocate 42GB of data every time someone wants to map a URI. Therefore, I used a HashMap to a) make the code simple and DRY and b) have a known cost for looking up URIDs. If you map and cache all URIDs you need before run is called, you only pay the cost of looking a URID up in a HashMap, which, overall, looked like a good deal to me. If you’ve got a more elegant solution for passing URID structs around, I’m open to it!
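A minimal sketch of this approach (hypothetical names, with a stand-in for the host’s map function, whose real cost is unknown): map everything once at instantiation, then only pay HashMap lookups in run.

```rust
use std::collections::HashMap;

struct UridCache {
    urids: HashMap<&'static str, u32>,
}

impl UridCache {
    // Map every needed URI once, at instantiation time, never in run().
    // `map` stands in for the host's urid:map function.
    fn new(map: &mut dyn FnMut(&str) -> u32, uris: &[&'static str]) -> Self {
        UridCache {
            urids: uris.iter().map(|&uri| (uri, map(uri))).collect(),
        }
    }

    // The only cost paid in run() is a HashMap lookup, which is bounded
    // and independent of how expensive the host's mapping is.
    fn get(&self, uri: &str) -> Option<u32> {
        self.urids.get(uri).copied()
    }
}
```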

Now, about your Atom implementation: what I miss is a certain level of abstraction, support for dynamically-sized types, and consistency validation:

  • You only support a fixed set of atoms and your design is not extensible enough to contain others. For example, there are multiple container types apart from Sequence: Tuple, Object, Property, and maybe also Vector, String and Literal. Following your design, you would create a new Forge struct for each of these types, although their behavior is very similar.
  • Also, your method that dereferences atoms to atom bodies cannot cope with types whose size is unknown at compile time. One example of such an atom body is the String.
  • There are many checks you could make that you don’t do: For example, you don’t check if the body size corresponds to a valid size for that type. This also gets into the field of dynamically sized types.

That’s the gist of what I’ve found; Your design simply isn’t finished yet! I just solved these problems while you solved others! :wink:

Another thing I’d like to mention is the crate structure: when we ignore the lilv crate for a while, you basically have three crates: a sys crate which contains the raw stuff, the lv2 crate which makes the raw stuff nice and rusty, and a plugin crate for plugin-specific stuff. My implementation, on the other hand, has a sub-crate for every specification, including raw, wrapping and plugin-specific stuff, and re-exports them in a root crate.

When planning ahead for the upcoming lv2 crate, I guess it will be necessary to have separated sys crates. However, I would like to keep that concept of “one spec - one crate”, because it really fits the style of LV2. This would result in a pair of lv2-<spec> and lv2-<spec>-raw for every spec and a re-exporting lv2 crate. What are your thoughts on this?


The beginning of the run function is handled by the lv2-plugin crate and not generated by a macro, but otherwise yes, this is correct. :slight_smile:
In general I prefer to have the least amount of code possible generated by macros, and the rest being handled by (potentially unsafe) functions in the library.

To be fair, I much prefer the opposite approach: having the port struct generate the configuration files from expressive types instead. As I explained in the other thread, structs generated from exterior sources are very hard to reason about, for both developers and tools like IDEs, as you can’t tell from the source code what your struct looks like.

You can find this in the midigate example I made, but here’s the gist of it:

struct URICache {
    midi_event: URIDOf<AtomMidiEvent>,
    sequence: URIDOf<UnknownAtomSequence>,
    frame: URIDOf<Frame>,
}

This is how you declare your URID cache structure, which has already the types (and their URIs) bound to fields of the struct.
The URIDCache derive implements a new() function to quickly create this struct from the map feature, but it also implements the URIDCacheMapping<U: UriBound> trait on the struct for each of its fields:

impl ::lv2::urid::URIDCacheMapping<AtomMidiEvent> for URICache {
    fn get_urid(&self) -> URIDOf<AtomMidiEvent> { self.midi_event }
}
impl ::lv2::urid::URIDCacheMapping<UnknownAtomSequence> for URICache {
    fn get_urid(&self) -> URIDOf<UnknownAtomSequence> { self.sequence }
}
impl ::lv2::urid::URIDCacheMapping<Frame> for URICache {
    fn get_urid(&self) -> URIDOf<Frame> { self.frame }
}

(This is an extract of the cargo expand command.)
This trick allows reasoning about URIDs at the type level: instead of having to pass around a HashMap-based struct, you can just ask for a URIDCache with the additional URIDCacheMapping<YourThing> trait constraint: :slight_smile:

// Returns an AtomMidiEvent if the URID matches
let my_midi = my_unknown_atom.read_as::<AtomMidiEvent>(&self.cache)?; 

You’re right on these issues, I must admit that my Atom implementation was just enough to get the MIDI examples working; aside from the alignment issues you mentioned earlier, yours looks much more complete. :slight_smile:

I actually like your idea better (“one spec = one crate”). I thought about using feature gates to only pull in the specs you’re interested in, but separate crates are probably cleaner. The only issue I have is that it would make us release something like 20 crates each time; I don’t know if that’s an issue though.

(Edit: Sorry, sent the response when I didn’t mean to ^^")

I don’t like that approach, because you would have to model everything a turtle file could include in source code. Having this information in Rust source code, and therefore in shipped binaries, only adds weight without any use. Also, turtle files in bigger projects may become larger and more complicated too, with several additional vocabularies that are only useful for some very special, custom standards that have nothing to do with the code. In the end, we would just be modeling turtle files in Rust, including information we don’t need.

One could say that only some bare information should be stated in source code, like the number and types of ports. However, after these bare turtle files were generated, the programmer would have to edit them to include all of the “unnecessary” information. This invalidates the advantage of having automated configurations in the first place, since people mess things up, and therefore leads nowhere.

One last case that comes to my mind is supporting only dynamic manifests. However, this removes the advantage of having configuration files in the first place and therefore isn’t a good choice either.

Instead, if the build script generates a ports struct from the turtle files, the programmer only has to look up the exact names or numbers of the ports and include them in the code. This has to be done now too, but with the generated structs, the build will fail if the turtle file changes or if the programmer makes some general mistakes.

Sure, IDEs don’t like auto-generated structs, but they may at some point. I haven’t tested it, but I guess there are also some cases in your code, as well as in mine, where RLS can’t help you; but RLS, and IDE support in general, is young and still developing. In a few years, people may wonder why they didn’t use automatically generated structs.

I like the general idea, but I see one problem: what if the number of required URIDs gets long? Very long. Then you would have to include every single URI bound in every where clause, which can become numerous too. This will lead to a lot of copy-pasting and therefore WET code. A way to bundle these types would be nice.

Also, not every URI is generally bound to a type, there may also be some that just stand for themselves. How are these handled?

I guess it won’t, since we don’t need to update every crate every time we update something. Regarding the number of crates, I guess that won’t be a problem either: if hundreds of empty nonsense crates aren’t a design problem, 40 active, useful crates won’t be either.

I think we can find some workarounds to only generate this data when exporting the metadata files and strip it from the binary at build time, but I agree with pretty much all of your other points here (although I would argue that even if IDEs get more support for generated structs, their contents are still obscured from the developer, which I think is the most important point).

Actually, my biggest issues with LV2’s ttl files are that:

  1. Both their syntax and contents are quite hard to grasp if you don’t know them (which is still pretty much my case);
  2. Messing up their contents will cause undefined behavior.

Perhaps we could do a bit of both methods instead, i.e. let the plugin author write the structs and the manifest file, but then let the compiler (read: a procedural macro) read the manifest file, pull from it the info it needs (i.e. port indexes, stuff like that), and check its coherence against the Rust code? That way you still have all of the metadata in the ttl file and the struct definitions in the Rust code, and while there will be a bit of overlap (though not that much), it will be checked and will not allow any mistakes.
Like the Rust compiler today, it could also suggest some fixes for errors in either code or the manifest file.

However, I think that turtle files integration (whichever method we choose in the end) shouldn’t be in our initial goals, due to its complexity (and the lack of a good, well-maintained RDF/TTL parser in Rust): while far from perfect it already works as-is, you just have to be careful while writing your manifest files, like LV2 plugin authors already do today.
Though it’s definitely something we’ll have to do properly to reach good productivity and stability (i.e. 1.0). :slight_smile:

On the declaration side, I suppose we could find a way to nest URIDCache-implementing structures, and provide some common ones out-of-the box? (Like “This cache struct contains all base Atom types”). Something like this:

struct MyCacheBundle { // This may come from another crate too
    foo: URIDOf<Foo>,
    bar: URIDOf<Bar>,
}

struct URICache {
    my_bundle: MyCacheBundle, // Automatically implements all mappings this bundle implements
    midi_event: URIDOf<AtomMidiEvent>,
    sequence: URIDOf<UnknownAtomSequence>,
    frame: URIDOf<Frame>,
}

I think it would help with the issue of URID Caches getting too long, and also has the advantage of being zero-cost since they are still contiguous in memory. :slight_smile:

About the number of where clauses on the usage side, I’m not sure where this issue would arise.
All functions from the lv2 crate are always generic over the URI mappings they require (like the atom-reading ones), and on the plugin side the developers would just pass their URIDCache struct around by reference and not use generics there.

Perhaps it would be an issue if there ends up being a “kinda-generic-but-not-really” intermediate layer between the lv2 crate and the plugin author, although I think this can be worked around either with super-traits, or a bit of generics-fu that bases itself on some of the aforementioned URIDCache bundles.

Do you mean URIs that are not bound to types because they don’t come from the lv2 crate, or because those are URIs we don’t want to attach a type to?

In the first case, you can bind any URI to a new (possibly zero-sized) type by implementing the core::uri::URIBound trait on them, and defining the associated const URI, and then it’ll integrate itself with all the lv2 functions, including the cache mapping.

In the second case, they are not handled by the URIDCache derive (though plugin authors could come up with their own struct to hold these), but I can’t really see a case where there are “free-standing” URIs that we don’t want a type bound to for some reason, but still want to map to a URID. Do you have any examples of this?

That’s my gut feeling too, but I want to make sure there isn’t some unspoken rule of “please don’t release 40 related crates” that we could break somehow. :upside_down_face:

That’s the case for me too, especially because it’s hard to find good documentation!

That looks good! The programmer still feels responsible, but gets some help from the build system. I also like it because it’s optional and therefore increases the flexibility of the system.

That’s exactly my point! Maybe it will become important, for example when we go for dynamic manifests, but not now.

That sounds reasonable! The only problem I see is that every mapping trait that is implemented for the nested bundle also needs to be implemented for the nesting bundle, which might result in code bloat and therefore increased compilation time. However, I’m thinking in orders of hundreds of URIs, grouped in dozens of bundle levels; maybe that’s a bit over the top!

I was thinking about internal methods calling internal methods that require URIDs. For example,
when you construct a sequence atom with my implementation (source file), you will need three URIDs for one method call: the URID of the sequence type as well as the URIDs of the Frames and Beats time units. Using your current method would require a where section that looks like this: where C: URIDCacheMapping<Sequence> + URIDCacheMapping<FramesTimeUnit> + URIDCacheMapping<BeatsTimeUnit>. You must admit that this is quite long, and it may get longer when things get more complicated.

Therefore, a way to bundle URID requirements would be nice too. Maybe this could be done with a supertrait that includes all mappings and is blanket-implemented for every type that provides them. However, such traits, which are implemented for all types that implement other traits, aren’t well liked by the current tool set (e.g. not discovered by RLS, not listed in rustdoc, etc.).
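For what it’s worth, such a bundling supertrait with a blanket impl could look roughly like this (all names are hypothetical, with minimal stand-ins for the traits under discussion):

```rust
use std::marker::PhantomData;

// Minimal stand-ins for the real traits:
trait UriBound { const URI: &'static str; }
struct URIDOf<T>(u32, PhantomData<T>);
trait URIDCacheMapping<T: UriBound> { fn get_urid(&self) -> URIDOf<T>; }

struct Sequence;
impl UriBound for Sequence { const URI: &'static str = "urn:example:sequence"; }
struct FramesTimeUnit;
impl UriBound for FramesTimeUnit { const URI: &'static str = "urn:example:frames"; }
struct BeatsTimeUnit;
impl UriBound for BeatsTimeUnit { const URI: &'static str = "urn:example:beats"; }

// The bundle: one supertrait instead of three bounds at every call site…
trait SequenceUrids:
    URIDCacheMapping<Sequence>
    + URIDCacheMapping<FramesTimeUnit>
    + URIDCacheMapping<BeatsTimeUnit>
{
}

// …with a blanket impl, so every cache providing the three mappings gets it.
impl<C> SequenceUrids for C where
    C: URIDCacheMapping<Sequence>
        + URIDCacheMapping<FramesTimeUnit>
        + URIDCacheMapping<BeatsTimeUnit>
{
}

// A concrete cache implementing the three mappings:
struct MyCache { seq: u32, frames: u32, beats: u32 }
impl URIDCacheMapping<Sequence> for MyCache {
    fn get_urid(&self) -> URIDOf<Sequence> { URIDOf(self.seq, PhantomData) }
}
impl URIDCacheMapping<FramesTimeUnit> for MyCache {
    fn get_urid(&self) -> URIDOf<FramesTimeUnit> { URIDOf(self.frames, PhantomData) }
}
impl URIDCacheMapping<BeatsTimeUnit> for MyCache {
    fn get_urid(&self) -> URIDOf<BeatsTimeUnit> { URIDOf(self.beats, PhantomData) }
}

// Internal methods can then just require `C: SequenceUrids`:
fn sequence_urid<C: SequenceUrids>(cache: &C) -> u32 {
    <C as URIDCacheMapping<Sequence>>::get_urid(cache).0
}
```

This keeps the long bound list in one place; the tooling-discoverability caveat mentioned above still applies.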

I was thinking about URIs that could be bound to a type, but where it would make little to no sense, for example the lv2:hardRTCapable feature. For the plugin, this is just a flag set in the features array, without any additional meaning. However, now that I think about it, having every URI bound to a type, even a zero-sized one, would make many things easier than using only a constant: for example, it was always a pain for me to store a URI as a CStr, because the conversion methods are either very long or unsafe, but not const; therefore I couldn’t store the URI directly as a CStr. Having all these conversions in one place will make things much easier; we should go for it!

We could write the team a mail asking if that’s okay. Additionally, we could cut down the crate count by having the sys and the “nice” parts for a spec in one crate, with the “nice” parts being optional. Then we would have around 20 crates, which is within the range of dependencies of the tokio ecosystem.

It looks to me like we’ve got pretty far regarding the core and urid specs. Maybe we should start to summarize it all and organize the development? I’m sure many problems will only arise once we get our hands on them.


I agree there. I think properly documenting these is something we should do in our edition of the book, at a later stage. :slight_smile:

That’s my idea as well! In my implementation all of the plugin-specific and QoL tools are only present in the lv2-plugin crate, which I expect most plugins to rely on, but you can remove it if you want your own layer on top (for a higher-level API, for instance).

I don’t think I’ll implement the trait for each URI of each sub-bundle; I’ll only be forwarding the impls to the bundle via generic impls, so it won’t create that much code (which will be codegen’d anyway).
Even with hundreds of URIs I don’t think it would impact compile times that much, though I’m speculating here. Even if it adds a few seconds, I don’t think it’s that bad of an issue. :slightly_smiling_face:
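A minimal stand-alone sketch of that forwarding idea (hypothetical names; a derive would generate the equivalent): one generic impl forwards every mapping the bundle provides, instead of one impl per bundled URI.

```rust
use std::marker::PhantomData;

// Minimal stand-ins for the real traits:
trait UriBound { const URI: &'static str; }

struct URIDOf<T>(u32, PhantomData<T>);
impl<T> Clone for URIDOf<T> { fn clone(&self) -> Self { URIDOf(self.0, PhantomData) } }
impl<T> Copy for URIDOf<T> {}

trait URIDCacheMapping<T: UriBound> { fn get_urid(&self) -> URIDOf<T>; }

struct Foo;
impl UriBound for Foo { const URI: &'static str = "urn:example:foo"; }
struct Bar;
impl UriBound for Bar { const URI: &'static str = "urn:example:bar"; }

// The nested bundle and its own mappings:
struct MyCacheBundle { foo: URIDOf<Foo>, bar: URIDOf<Bar> }
impl URIDCacheMapping<Foo> for MyCacheBundle {
    fn get_urid(&self) -> URIDOf<Foo> { self.foo }
}
impl URIDCacheMapping<Bar> for MyCacheBundle {
    fn get_urid(&self) -> URIDOf<Bar> { self.bar }
}

// The nesting cache forwards with a single generic impl: no code is
// duplicated per URI, only one impl per bundle field.
struct URICache { bundle: MyCacheBundle }
impl<T: UriBound> URIDCacheMapping<T> for URICache
where
    MyCacheBundle: URIDCacheMapping<T>,
{
    fn get_urid(&self) -> URIDOf<T> {
        <MyCacheBundle as URIDCacheMapping<T>>::get_urid(&self.bundle)
    }
}
```

Note that this only works cleanly as long as the nesting struct doesn’t also need direct impls for the same trait, which is presumably why the real derive would generate explicit per-type forwarding instead.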

In the specific case of the Atom Sequence, my implementation is generic over the type of the unit used, so it looks like this instead:

impl<U: Unit> AtomSequence<U> {
    pub(crate) fn new_header<C: URIDCacheMapping<UnknownAtomSequence> + URIDCacheMapping<U>>(cache: &C) -> Self {
        // …
    }
}

In general though, while I agree it’s not really nice to type out, I think 3-5 trait bounds on generics isn’t too bad if it’s only in our code that we have to type these, and it doesn’t leak out too much to the user.

In general, my approach to making APIs is that I don’t mind making them complex or annoying to implement if that means they are better for the developer to use. :slight_smile:

That’s the approach in my implementation, all features have their own type, even if they are zero-sized.
This also allows both plugins and hosts to reason about their features (required/optional) in their features structs, at zero additional cost (yay ZSTs!).

The URI handling is separated from features: I have a Uri type (analogous to CStr) which handles all the conversions, and then a UriBound trait to bind a URI to a type (which is the trait URIDCache uses).
For now it is unsafe to implement this trait, because the associated const must be null-terminated and there’s no way for us to check that at compile time, but I hope to make it safe to implement once const fn gets enough features.
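A rough sketch of what such an unsafe-to-implement binding trait could look like (illustrative only, not the actual lv2 crate API):

```rust
use std::ffi::CStr;

// Unsafe to implement: the constant MUST include the trailing null byte,
// and that invariant can't be checked at compile time (yet).
unsafe trait UriBound {
    const URI: &'static [u8]; // must be null-terminated

    fn uri() -> &'static CStr {
        // Sound only because implementers promised null termination above.
        unsafe { CStr::from_bytes_with_nul_unchecked(Self::URI) }
    }
}

// A feature that is "just a flag": a zero-sized type carrying only its URI.
struct HardRTCapable;

unsafe impl UriBound for HardRTCapable {
    const URI: &'static [u8] = b"http://lv2plug.in/ns/lv2core#hardRTCapable\0";
}
```

This keeps the long-or-unsafe CStr conversions in one place, and the zero-sized type costs nothing at runtime.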

Oh, would we still have the -sys parts integrated into each crate? I thought we would want them separated, so that relying on the -sys part of one of them does not bring in all the higher-level abstractions from the other specs it depends on.

I was going to say just that. ^^
I’m already working on cleaning up my implementation a bit, so it can be better split and integrated.

From what we discussed, I think we could go with the following plan:

  • Create a brand new repo on Github which we can both maintain (I can create the repo if you want), that will get moved to the Rust-DSP group eventually.
  • Bring in the lv2 re-exporting crate, my core implementation (as a sub-crate), as well as lv2-plugin and the amp example.
  • Bring in lv2-plugin-test (and lilv) so we can set some automated testing working for plugin loading (with CI potentially).
  • Bring in my urid implementation, with support for nested URID caches.
  • Bring in your atom implementation, but integrate it with my urid implementation (and try to fix the alignment issue you mentioned).
  • Bring in the midi spec, and the midigate and fifths examples.
  • Document all of those. :wink:

I think that’s a pretty good roadmap for now, what do you think? :slight_smile:


You’re welcome to create it in the rust-dsp group from the start - whichever you prefer.

Your URID handling looks pretty solid now, I think we should simply take it!

This is what I meant to say, just to be sure!

This is only an idea, and it’s a mixture of having it all completely separated or integrated: without the “nice” feature, every sub-crate looks like a sys crate; you only get the “nice” features when you enable them. It’s like having a sys and a “nice” version of the same crate, depending on what you need. This is also only necessary if we have to keep the crate count low; otherwise, ignore me! :wink:

Wait a second, we haven’t really talked about repo and project structure, as well as licensing!

First of all, I don’t think that we should have a separate plugin sub-crate, since it collides with the “one spec = one crate” idea. Also, its boundaries to the core crate are relatively unclear or arbitrary: for example, ports are included in the core crate, although hosts have relatively little to do with ports. Also, having these structs and macros in the core crate has no real disadvantage: a host simply doesn’t use them, which is often the case when you’re using a library; you’re not always using everything you have!

Next, the thing about lilv: when I said that this should be handled by a different crate, I meant that it should be handled by a different project altogether, because it is a different project altogether! Maybe it’s not a big one, but it’s a different one. I know that this would mean that we can’t have the big-scope tests, but on the other hand: testing lv2 with lilv (or another host) isn’t really a test for the crate, it’s a test for the ecosystem. Also, having one passing test with lilv doesn’t mean that our implementation is right; it just means that our interpretation of the specifications matches that of the lilv project. I often had cases where my translated examples would run with lilv, but not with Carla or Ardour. For now, we should lay lilv aside, do internal unit tests, and have the translated examples.

Talking about the examples: I think they should not be bundled with the lv2 crate itself, but with the book, since they are the “executed” version of the book’s descriptions. Also, the book should be a separate repository, because it is also a different project, and it will make hosting, for example with GitHub Pages, easier.

My last point is licensing: LV2 is licensed under the ISC license and therefore, our crate should be too. This way, we aren’t more or less restrictive than the original, which might otherwise become a problem for some plugin creators.

Also, I would prefer a spec-by-spec implementation cycle where we document and test each spec implementation as soon as we complete it. This incremental development makes it easier for other developers to get into the project and/or play around with some simple plugins. Also, having a dedicated "documenting and testing" phase for every spec ensures that the APIs are nice and that upcoming additions build on solid ground.

I would personally prefer to start in the rust-dsp group right away. We want to get there anyway and this way, we don’t have to make a transition.

Compiled together, my roadmap would look like this:

  • Getting the two of us added to the rust-dsp group
  • Creating the lv2 repo with a simple readme, license, contribution guide and a code of conduct, as well as the re-exporting crate
  • Importing/merging your core/lv2 implementation
  • Documenting and testing core
  • Creating the lv2-book repo, implementing the amp example, hosting the book
  • Importing/Implementing urid
  • Documenting and testing urid
  • Importing/Implementing atom
  • Documenting and testing atom
  • Importing/Implementing midi
  • Implementing the midigate and fifths examples

Once we've gotten that far, we go back to the drawing board for the next examples! What do you think about this plan?


Could someone please add @Prokopyl and/or me (@Janonard) to the Github group, or create the lv2 repo and add the two of us as maintainers?

This way, we could finally get started.


(Sorry for the late reply, I was quite busy around here!)

Oh, you’re right there! Apologies, got a bit carried away here! ^^’

I’d argue that hosts have just as much to do with ports as plugins: they have to host the buffers the plugins use, after all!
Other than that, I agree that we can, in fact, put the plugin-specific stuff in core.

At first I liked the idea of being able to rely on at least some already-implemented part of the LV2 ecosystem (lilv is used by Ardour, for instance), but after some thought I tend to agree it doesn't test much more than dynamic library loading and RDF parsing/checking (plus I never really liked how lilv is implemented; to me it was a good candidate for RIIR).

Also, I have to add that my implementation of lv2-plugin-test was very hacky (loading the .so file fresh from target is not great), and since this was the only use of lilv, I'm glad to toss it aside to simplify development for now.

Agreed as well. I put them in the same Cargo workspace to simplify development (considering I was making breaking API changes pretty much every time!), but since those APIs are pretty much settled now, we can separate them and make changes independently if needed.
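For reference, that single-repo setup is just a Cargo workspace; a minimal sketch (member names are illustrative, not the actual layout):

```toml
# Top-level Cargo.toml: one repository, several independent crates
# that share a target directory and a Cargo.lock during development.
[workspace]
members = ["core", "urid", "atom", "book"]
```

Splitting the book out later just means removing it from `members` and moving it to its own repository.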

Mandatory disclaimer: I am not a lawyer, and I know close to nothing about licensing other than the stuff I read on Wikipedia and on the Rust internals forum. So clearly not the best legal advice here. :upside_down_face:

After a quick check, it seems that ISC is essentially a less explicit version of MIT.
I think it would be better to stick with the Apache-2.0 + MIT dual licensing that the Rust API Guidelines suggest, as Apache-2.0 provides an explicit patent grant, which ISC (or MIT alone) does not.
And also because it keeps us aligned with what the Rust community as a whole uses for licensing, so users don't have to spend time and/or money on legal advice to check whether they can actually use this crate in their project (ISC is seemingly more obscure than MIT; at least I personally had never heard of it before now).
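For what it's worth, dual licensing is a one-line SPDX expression in each crate's manifest (plus LICENSE-MIT and LICENSE-APACHE files in the repository), as is common across the Rust ecosystem:

```toml
# In each crate's Cargo.toml; downstream users may choose either license.
license = "MIT OR Apache-2.0"
```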

I’m skipping the rest of your arguments, because I just agree with them! :wink:

Same! :slight_smile:
I also agree with your roadmap, it looks pretty good to me!


You two have been invited to the Github group!


For the sake of consistency, I propose that the repo be named rust-lv2, while the crate (hopefully, if we get the name) can simply be lv2.

This is slightly different from our rust-vst approach, where the crate is named vst despite being a VST2 implementation. However, it would seem that the "2" in LV2 is part of the standard's identity rather than a version number. Is that correct?



Yep, that's correct. "LV2" stands for "LADSPA (Linux Audio Developer's Simple Plugin API) Version 2", but the first version is simply known as LADSPA. Distinguishing LV2 from an "LV1" doesn't really make any sense!


The project is created:
It’s just the bare bones of a crate. @Prokopyl, please add yourself to the authors list and import your “core” stuff!

I've noted Apache-2.0 + MIT as the license for now, due to your comment, although drobilla (one of the creators of LV2) asked me to use ISC. As far as I know, we can also change the license later if we need to. Maybe I should stress that I'm not a lawyer either.


I just imported my core stuff into the repository, and I also temporarily added the amplifier example in, just to check everything builds correctly while we’re preparing the book repository.

I think I'll start documenting all this now, but also feel free to comment on the implementation details if you'd like (or anything else, really)! This goes for @Janonard, of course, but also for anyone who would like to take a look. :slight_smile: