Extending the API - plans?

I've quite a few ideas for modules, but they often revolve around getting access to data files (read/write) and also more complex displays.

I know this was talked about… but I wondered if there was any kind of roadmap / priority for these? in particular:

  • direct draw API
  • read/write access to data files, e.g. samples, but not limited to.

ofc, I recognise it's a new product, so you have lots of bugs/feature requests from users all requiring dev resources - very important to get these right.

however, I think the above are pretty important for (some) developers, and it would be good to try to encourage devs 'whilst the iron is hot'.

I for one am still excited by MM possibilities, but we have so many other projects… so I'll tend to just naturally 'move on' to others if I cannot do what I envisage. and unfortunately, time goes by, and you just never come back.

put another way, I think the success of VCV is the way it allows community devs to build the stuff they dream of… and I think this can help push MM too.

(also ofc, some porting efforts are, I'd say, likely 'on ice' due to the lack of the above)


edit:
idea: module full-screen mode.

another thought I had (not very developed, partly as direct draw is required first):
for custom UIs on a module, it will probably be useful to have access to the encoder (turn/push) directly… perhaps a way to really 'focus' on the module: make it full screen, with navigation via the encoder. or even perhaps 'suspend' the mapping of pots.

this may seem 'counter-intuitive'… and certainly not the way VCV generally works.
but it could allow devs to create MM-specific modules that have (more complex) UIs with more 'direct' control.
e.g. imagine creating an oscilloscope/spectrum analyser module.
a user could be fully focused on that one module, full screen… and then pots/encoders would control its functions, without having to do any mapping etc… just use fixed controls.

this makes no sense on a desktop, but 'focus' on something like the MM, with its screen size etc, I think makes a lot of sense.


I need to post the roadmap; I just need to organize my notes and separate out the backend stuff from the Plugin API stuff. I'll make a post here soon.

For many issues we are very flexible in terms of what to prioritize. Especially if there are developers ready to develop a plugin using a new feature and can give feedback as it’s being developed for how the API is working for them in their use case. It sounds like that might be the case for you?

We are just wrapping up the API for the textual displays (with the NoisePlethora), and I'm adding in the ability for plugins to load their own fonts.

Direct draw could work in a similar way as text displays, but there is the issue that most VCV modules use the nanovg library and we’d need to either 1) port that, or 2) make a wrapper/adaptor library around our internal graphics lib (LVGL), or 3) expose an API to use LVGL directly (probably not what we’ll do).
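Just to make option 2 a bit more concrete, here's a minimal, purely illustrative sketch of a nanovg-flavored adaptor that draws onto an LVGL canvas. The `lv_*` calls are standard LVGL v8 API; `NvgLikeContext` and its method names are invented for this sketch and are not actual MetaModule code:

```cpp
// Illustrative only: a tiny nanovg-style adaptor drawing onto an LVGL canvas.
// The lv_* calls are real LVGL v8 API; NvgLikeContext is an invented name.
#include "lvgl.h"
#include <cstdint>

struct NvgLikeContext {
    lv_obj_t *canvas;                 // LVGL canvas object the module draws into
    lv_area_t path{};                 // current "path" (rectangles only, in this sketch)
    lv_color_t fill_col = lv_color_black();

    void begin_path() { path = {}; }

    void rect(int x, int y, int w, int h) {
        path.x1 = x;
        path.y1 = y;
        path.x2 = x + w - 1;
        path.y2 = y + h - 1;
    }

    void fill_color(uint8_t r, uint8_t g, uint8_t b) { fill_col = lv_color_make(r, g, b); }

    // Rough equivalent of nvgFill(): rasterize the current "path" onto the canvas
    void fill() {
        lv_draw_rect_dsc_t dsc;
        lv_draw_rect_dsc_init(&dsc);
        dsc.bg_color = fill_col;
        lv_canvas_draw_rect(canvas, path.x1, path.y1,
                            path.x2 - path.x1 + 1, path.y2 - path.y1 + 1, &dsc);
    }
};
```

An adaptor along these lines would let ported modules keep calling nanovg-shaped functions while we translate to LVGL primitives underneath, but it's only a sketch of the idea, not a design.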

For direct-draw, can you tell me roughly what you have in mind in terms of throughput (number of pixels times number of redraws per second)? I haven't stress-tested the graphical redrawing to the max yet to see how much we can squeeze out, so I don't know where the limits are. Based on what you can observe when you have a large patch with high CPU and lots of cables as you scroll through the modules (there's some lag and visual tearing), I wouldn't get hopes up about 30FPS full-screen animations unless the audio processing load is very low.
The GUI updates happen in between audio updates. So a patch at 98% leaves little time for GUI processing, but a patch at 25% would allow for lots of screen updates.
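To put rough numbers on that (purely illustrative; the actual block size and scheduling may differ): at 48 kHz with 64-sample blocks, one block is about 1.3 ms, so a patch at 25% load leaves roughly 1 ms of idle time per block, or around 25 ms of accumulated GUI time per 33 ms frame at 30 FPS; at 98% load the same frame gets well under 1 ms.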

OK, sounds interesting… I'm not sure I can visualize what you mean yet, but full-screen module mode fits in with the MM flow where you zoom in from List of Patches → Patch → Module → Knob/Jack.
Giving the module the rotary encoder signals is fine; what do you have in mind for what turning and pushing the rotary would do? Like turning it adjusts some X parameter, push+turn adjusts some Y parameter, and long-hold toggles something else… is that kind of what you're talking about? Or like, a module can implement its own menu system, displaying text and graphics and letting you navigate with the rotary?
We’d need a dedicated “escape hatch” which naturally would be the Back button.

This I have "working", but there are lots of gaps to fill to make it truly general and flexible. The one I was working on when I paused this branch is a way for a user to coordinate the filesystem paths on their computer with the ones on the SD card/USB drive.
The filesystem API allows a module to create an asynchronous “thread” that can make (blocking) filesystem calls. Each core has a background task which processes filesystem requests, and the M4 co-processor handles them. Unlike the direct-draw feature, the bottleneck here is USB drive speed and SD Card speed.
I’m designing this one around our sample player module, so it’s suited for streaming from disk to RAM, but of course with async threads + FS access I’m sure devs will come up with other uses!
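As a rough illustration of that pattern (these names are invented for the sketch, not the actual MetaModule filesystem API): the module hands a blocking load off to the async task, and the audio side only ever polls a flag:

```cpp
// Hypothetical sketch of the pattern described above. The "load" method runs
// on the background/async task and may block; the audio callback only checks
// a flag, so it never waits on the filesystem.
#include <atomic>
#include <cstdio>
#include <vector>

struct SampleLoader {
    std::vector<float> data;
    std::atomic<bool> ready{false};

    // Runs on the async "thread"; blocking filesystem calls are OK here.
    // Assumes a raw file of 32-bit floats, purely for illustration.
    void load(const char *path) {
        if (FILE *f = std::fopen(path, "rb")) {
            float buf[256];
            size_t n;
            while ((n = std::fread(buf, sizeof(float), 256, f)) > 0)
                data.insert(data.end(), buf, buf + n);
            std::fclose(f);
            ready.store(true, std::memory_order_release);
        }
    }

    // Called from the audio thread: never blocks, just checks the flag.
    bool sample_ready() const { return ready.load(std::memory_order_acquire); }
};
```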


direct draw…
I'm happy to use a custom draw API, designed around hardware capabilities.
then I'd see that you can (later?) create the API which is used for porting existing modules… which would need to be closer to the VCV 'API' (yeah, I know it's not an API really, hence the issue)

render rate: again, I'll work with what I'm given :laughing:
I think sub-30fps is fine on these kinds of displays; it's fine if they are not perceived as smooth motion.

I'm a little surprised you say that the DSP load has a large impact on UI processing…
surely the UI processing is done on the M4, not the A7?

in a similar way to conventional plugin dev, where UI processing is on a separate thread/core from audio processing, you limit data transfer to the data needed for UI generation, and that can be rate-limited to your screen display rate (which we already know… see above… is sub-30fps)
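something like this sketch (totally hypothetical names, just to show the shape of it): the audio side publishes a small snapshot at most ~30 times a second, and the UI side reads the latest one whenever it redraws.

```cpp
// Illustrative only: rate-limiting the data handed from audio to UI.
// Not any real MetaModule API; names are invented for this sketch.
#include <array>
#include <atomic>
#include <cstdint>

struct ScopeSnapshot {
    std::array<float, 128> samples{};   // decimated waveform for display
};

class UiBridge {
    ScopeSnapshot snap_[2];             // simple double buffer
    std::atomic<uint32_t> front_{0};    // index of the buffer the UI may read
    uint32_t blocks_since_publish_ = 0;

public:
    // Audio thread: call once per block; publishes at ~30 Hz max.
    void maybe_publish(const ScopeSnapshot &s, uint32_t blocks_per_30hz) {
        if (++blocks_since_publish_ < blocks_per_30hz)
            return;
        blocks_since_publish_ = 0;
        uint32_t back = 1 - front_.load(std::memory_order_relaxed);
        snap_[back] = s;
        front_.store(back, std::memory_order_release);
    }

    // UI thread: grab the most recently published snapshot.
    // Note: a production version would want a triple buffer or seqlock so the
    // UI can never observe a buffer mid-write; kept simple here on purpose.
    const ScopeSnapshot &latest() const {
        return snap_[front_.load(std::memory_order_acquire)];
    }
};
```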

something like that… e.g. turning the encoder might zoom in on something, pressing the encoder might reset the zoom… but whatever the UI design wants. same with pots: they might adjust a range or parameter.

yeah, I don't see anyone implementing menus etc :wink:

indeed, I deliberately left out the back button for the ‘escape’

yeah, I see either asynchronous for streaming, or synchronous where you read into memory.
ofc, a dev can build synchronous on top of asynchronous if required.

whilst asynchronous is very useful, often synchronous is enough, if it's 'easier' to get as a first step.
(that said, asynchronous fits better into DSP models… so perhaps it's as easy to go straight there)
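e.g. something like this (hypothetical; std::async just stands in for whatever the real background filesystem task would be):

```cpp
// Sketch of building a synchronous read on top of an async one.
// read_async() is a stand-in for a real async filesystem primitive.
#include <cstdint>
#include <cstdio>
#include <future>
#include <string>
#include <vector>

inline std::future<std::vector<uint8_t>> read_async(const std::string &path) {
    return std::async(std::launch::async, [path] {
        std::vector<uint8_t> data;
        if (FILE *f = std::fopen(path.c_str(), "rb")) {
            uint8_t buf[4096];
            size_t n;
            while ((n = std::fread(buf, 1, sizeof(buf), f)) > 0)
                data.insert(data.end(), buf, buf + n);
            std::fclose(f);
        }
        return data;
    });
}

// Synchronous convenience wrapper built on the async call:
// fine for setup/loading code, never for the audio callback.
inline std::vector<uint8_t> read_sync(const std::string &path) {
    return read_async(path).get();
}
```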

edit/added:

overall, I'm happy to work with custom APIs; they don't need compatibility with VCV.
the way I see this (mostly) is, I'd like to get some efficiency/optimisations by using such an API, working within the limitations of the hardware.
so the plugins would not (potentially) run on VCV (if I need this, I can dual-implement functions), but rather I would develop using the simulator (which would need to support the custom API)

I see this as a pragmatic way of using the hardware of the MetaModule - using the 'player' as a container that integrates modules.

ofc, to some extent my module would need to appear in VCV so that it can be patched on the desktop.
but this might be similar to the 4ms MetaModule (on desktop), where it's kind of a placeholder/proxy - so not necessarily fully functional - or, as above, I use an alt implementation.

I’m interested in this topic, or at least what I think the topic is. :smiley: For me personally, I see a powerful processor with a screen and lots of IO and knobs, and think this would be awesome for some native applications. By native I don't mean non-VCV modules, I mean full screen, just this app (so no overhead for cables, patch management etc).

For example, I have ideas around a spectral processing sort of app I'd like to develop (that potentially can benefit from the beefy hardware here), and so I just want: here are blocks/arrays of inputs/outputs, here are knob values, here is your blocksize/sample rate etc, and here is some way to draw a basic GUI (I will happily use whatever is available).

Currently I'm just developing a plugin like NativeExample and wiring it all up with the patch player, which is fine, but I'd guess that a lower-level "here are the buffers - GO" might be more efficient. Just a thought!

OK, excellent. Having our own API makes it easier.

Nope… the bus architecture of the STM32MP1 has the DDR RAM and both A7 cores on an AXI bus, and the M4 is on an AHB bus with its SRAMs and the SDMMC and USB peripherals. There's an AXI-AHB bridge, but transferring across it is SSSLLLOOOWWWW… I made a valiant effort to run the GUI on the M4, but there is nowhere near enough SRAM for the code itself (let alone data), so it has to execute from DDR3 RAM over the AXI-AHB bridge. That makes it so slow I was getting around 1-2fps. So, not an option unfortunately. We do use DDR3 RAM via the bridge for some M4 data, but those aren't bottlenecks.

Yeah, with the hard real-time guarantees that the MM makes, synchronous filesystem access is a non-starter. All SD Cards and USB drives are 3rd-party devices and we can make no assumptions about how long it takes to load a file.

Great! Glad to hear that. This is very much in line with my intentions. MM has been an independent project/engine from the ground up, and only in the last year or so did we start adding an adaptor layer for VCV modules, and later the dynamic plugin loader and rack-interface API.

The simulator sometimes has audio issues (something with SDL and audio frame formats, I think?), but I will be keeping it updated with all features as much as it makes sense.


I like this… I’m thinking of what you’re describing as being able to run a custom app instead of the MM’s Patch Player “app”. So, keeping all the infrastructure (pot/jack IO, bootloaders and firmware updater, filesystem, etc) but allowing you to load a custom program that takes over the audio loop.
That's a pretty amazing idea. The dynamic loader could probably detect the plugin type and link against the appropriate API (modules plugin or app plugin). An app plugin could define `void audio_process(InputBlock const&, OutputBlock &, Params &);` and the MM would run that instead of the regular audio callback.
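Very roughly, an app plugin's entry point might look something like this (the signature is the one suggested above; InputBlock/OutputBlock/Params are placeholder types I'm inventing here, and the block size and jack count are assumptions; none of this is settled API):

```cpp
// Hypothetical sketch of an "app plugin" that takes over the audio loop.
// All types and constants are placeholders for illustration.
#include <array>
#include <cstddef>

constexpr size_t kBlockSize = 64;   // assumed block size
constexpr size_t kNumJacks = 8;     // assumed jack count

struct InputBlock  { std::array<std::array<float, kBlockSize>, kNumJacks> in; };
struct OutputBlock { std::array<std::array<float, kBlockSize>, kNumJacks> out; };
struct Params      { std::array<float, 12> knobs; };

// The MM would call this instead of the regular Patch Player audio callback.
void audio_process(InputBlock const &in, OutputBlock &out, Params &params) {
    float gain = params.knobs[0];
    for (size_t j = 0; j < kNumJacks; ++j)
        for (size_t i = 0; i < kBlockSize; ++i)
            out.out[j][i] = gain * in.in[j][i];   // trivial pass-through with gain
}
```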

Having the app plugin update the GUI might not be as straightforward, as there's currently not one function that sends the encoder/button states and returns a framebuffer of pixels. It would probably make sense to use the LVGL library, since we'd be using that when NOT running the app. Hmm… maybe exposing all of the LVGL API (it's huge, but…) and then having a gui_update() call that replaces the current Ui::page_update_task() call. Some changes would need to be made…
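For example, a gui_update() hook could be as simple as this (gui_update() itself is hypothetical, and its parameter is made up for the example; the lv_* calls are standard LVGL v8):

```cpp
// Rough idea of an app plugin's GUI hook, if the LVGL API were exposed.
// gui_update() is hypothetical; lv_* calls are standard LVGL v8 API.
#include "lvgl.h"
#include <cstdio>

static lv_obj_t *level_label = nullptr;

void gui_update(float current_level_db) {
    if (!level_label) {
        // First call: create a label on the active screen
        level_label = lv_label_create(lv_scr_act());
        lv_obj_center(level_label);
    }
    char text[32];
    std::snprintf(text, sizeof(text), "Level: %.1f dB", current_level_db);
    lv_label_set_text(level_label, text);   // LVGL copies the string
}
```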

But I like this. Even if it’s a custom firmware (not a dynamic plugin) it could be really great.

If a screen isn’t important, I did make a board that’s the same size and pinout as the Daisy Submodule, which has the same processor and RAM as the MetaModule. Fits on our Sampler and Looping Delay kits. But well, this is getting off track…


fair enough… my misunderstanding… so basically the M4 is responsible for SD/USB and, I guess, pots/encoder(?), so all hardware except the display.
oh well, I guess having the UI on the A7 makes the direct-draw task a bit easier :slight_smile:

when I run it up, I’ll take a little look …

I recently implemented a project (TraxHost) that uses SDL (on Linux and Mac).
though what I did was just use SDL for output to a frame buffer (Linux) / window (Mac) and a bit of keyboard input. then I used RtAudio for handling the audio side.
(basically, I found SDL a bit too heavy for my needs)

Ah ok, RtAudio looks nice. The original reason for the simulator was for GUI development, but then audio came for free…