CPU load per module and bypass option

VCV Rack has a couple of features I'd like to see on the MM, since I think they would really help when developing patches for the MM, given its limited DSP capacity.

a) Bypass (or disable)
Rather than delete a module, I'd just like to (temporarily) disable it… I don't need audio passed through or anything; I just want to see the effect a particular module is having on CPU load.

b) Performance meter
In VCV you can turn on the performance meter and see which modules are using the most CPU. This would be very valuable information on the MM.

The underlying reason for these is to help improve the workflow on the MM.

It's pretty easy to overload the DSP on the MM, and when you do, the first question you ask yourself is
“Which module(s) are using the most CPU?”
You can then decide…
do I need that module? Can I find a ‘cheaper’ alternative?

It's important to note here…
it's not enough to know the total CPU load… you need to know which module(s) are using the resources.

Similarly, because the MM will not run an ‘overloaded’ patch, we need the bypass to temporarily disable some parts of the patch, so we can dig into the CPU load of other parts.

However, we want this to be non-destructive, so that we can re-enable modules as we work through the problem, hopefully reaching a successful compromise.


Yeah, I agree with this.

Bypassing a module would help a lot. You could just view how the CPU number changes.

Per-module CPU numbers would be great, too. I’m not sure how intrusive measuring that will be, though my hunch is it would only add a few percent load (and that will go up with the number of modules).
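
To sketch what I mean (illustrative only, assuming a VCV-style Module::process() interface rather than the actual MM firmware API): time each module's process call and keep an exponentially smoothed per-module figure. The two clock reads per module are where the extra few percent of load would come from.

```cpp
#include <chrono>
#include <vector>

// Hypothetical module interface, loosely modelled on VCV Rack's Module::process().
struct Module {
    virtual void process() = 0;
    virtual ~Module() = default;

    float cpuSmoothed = 0.f;  // smoothed fraction of the audio budget used by this module
};

// Process every module once per block, timing each call and smoothing the result.
// 'budgetSeconds' is the time available per block (blockSize / sampleRate).
void processAll(std::vector<Module*>& modules, double budgetSeconds) {
    using clock = std::chrono::steady_clock;
    constexpr float kSmoothing = 0.99f;  // exponential moving average keeps the displayed number stable

    for (auto* m : modules) {
        auto t0 = clock::now();
        m->process();
        auto t1 = clock::now();

        double used = std::chrono::duration<double>(t1 - t0).count() / budgetSeconds;
        m->cpuSmoothed = kSmoothing * m->cpuSmoothed + (1.f - kSmoothing) * float(used);
    }
}
```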

Another thing I am going to play with is a variable throttle for updating CVs and Knobs. On some modules this will help tremendously.
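
As a rough idea of what such a throttle could look like (the divider value and the readControls() hook here are placeholders, not the actual implementation): instead of re-reading knobs and CV jacks every audio frame, only do it every N frames.

```cpp
// Illustrative sketch: poll knobs/CV inputs every N audio frames instead of every frame.
struct ControlThrottle {
    unsigned divider = 16;  // e.g. re-read controls once every 16 frames; could vary per module
    unsigned counter = 0;

    template <typename Fn>
    void step(Fn&& readControls) {
        if (counter++ % divider == 0)
            readControls();  // e.g. re-read knob/CV values and recompute coefficients
    }
};
```

Modules whose parameters trigger expensive recalculation (filter coefficients, wavetable scanning, etc.) would benefit the most; the trade-off is slightly coarser response to fast modulation.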


Indeed, though we need to be careful here…

As we have only one CPU % displayed (that of the most heavily loaded core, IIRC),

disabling one module won't necessarily reduce that number by the module's load %…
a) it might be on the less active core, in which case the number won't decrease at all, since the module doesn't affect the core whose % is displayed;
b) it might be on the most loaded core, but removing it means the other core is now the most loaded core, and so that core's % is now shown.

i.e. working with totals only really works if you display the load of both cores.

Also, I'm not sure how you load-balance the modules over the cores, but again (basically like b above), bypassing might mean modules move to different cores… so the % won't drop by that module's load/impact.

(Sorry, the above might not be that clear… it's tricky to explain… but I think you'll get the gist.)

This might be more intrusive than anyone wants, but what if, as you scroll through the modules that make up your patch, it shows the CPU hit next to the name as each one is highlighted?
So where we now see “Freeverb”, it could show “Freeverb 10%”.

Yeah, that’s how I envisioned it, too: just displaying the number as you hover over the module.

Right, good point: you’d need to see both cores’ loads to calculate a module’s load by bypassing it. Well, we could add an option to display both cores: 25%/32% instead of 32%.
Right now I don’t have any “rebalancing” of cores, but I’ve been pondering how to do that in a general way. Some patches would benefit from being balanced before being run, but since the load can change as the user performs, it’s a little hard to determine ahead of time.
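
For what it's worth, one simple pre-run approach would be a greedy pass over measured (or estimated) per-module loads: sort descending and always place the next module on the lighter core. This is purely a sketch under those assumptions, not how the MM assigns cores today, and it ignores the cable-dependency/latency questions discussed below.

```cpp
#include <algorithm>
#include <array>
#include <vector>

struct ModuleLoad {
    int id;
    float load;  // measured or estimated fraction of one core
};

// Greedy static balance across two cores: sort modules by descending load, then always
// place the next module on the currently lighter core. This is a pre-run estimate only;
// as noted above, the real load can change while the user performs.
std::array<std::vector<int>, 2> balance(std::vector<ModuleLoad> mods) {
    std::sort(mods.begin(), mods.end(),
              [](const ModuleLoad& a, const ModuleLoad& b) { return a.load > b.load; });

    std::array<std::vector<int>, 2> cores;
    std::array<float, 2> totals{0.f, 0.f};

    for (const auto& m : mods) {
        int target = (totals[0] <= totals[1]) ? 0 : 1;
        cores[target].push_back(m.id);
        totals[target] += m.load;
    }
    return cores;
}
```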

I see this ‘load’ meter as being like a debug mode… so it's fine to obscure panels etc.
(This will also encourage users to turn it off when not in use, which is a good thing.)

I think, if the extra load is not too much (e.g. < 5%), it would be useful to show it simultaneously for all modules, perhaps just as overlaid text in the centre of each panel. This would give a useful overview of the load.
But if doing so for all modules creates a prohibitive load increase (> 15%?), then sure, hover is fine.

Yeah, I think generally showing both cores' loads is useful…
perhaps as an option, as I can see that for many users it would be unnecessary clutter?
Actually, perhaps a single setting: CPU meter - none, single, both.

Balancing, yeah, that's a tricky one…
I'd be interested to know how you schedule the processing order: do you create a process graph to try to minimise latency?

In trax (my VM), I actually went for an approach where the user has some control over the processing order via the positioning of modules.
For sure, it's an advanced use case that most users won't need, but it can be important if you want consistent latency through “chains” of modules.
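
(To illustrate what I mean by a process graph: topologically sort the modules by their cable connections, so each module runs after the modules feeding it within the same block. A minimal sketch using Kahn's algorithm; the module/cable representation is just illustrative, not how either trax or the MM actually stores a patch.)

```cpp
#include <queue>
#include <utility>
#include <vector>

// Order modules so each one is processed after the modules feeding it, keeping a "chain"
// at a consistent one-block latency end to end. Each cable is (source module -> destination
// module); feedback cables would need to be broken before this is run.
std::vector<int> processOrder(int numModules, const std::vector<std::pair<int, int>>& cables) {
    std::vector<std::vector<int>> out(numModules);
    std::vector<int> inDegree(numModules, 0);
    for (auto [src, dst] : cables) {
        out[src].push_back(dst);
        inDegree[dst]++;
    }

    std::queue<int> ready;
    for (int m = 0; m < numModules; ++m)
        if (inDegree[m] == 0)
            ready.push(m);

    std::vector<int> order;
    while (!ready.empty()) {
        int m = ready.front();
        ready.pop();
        order.push_back(m);
        for (int next : out[m])
            if (--inDegree[next] == 0)
                ready.push(next);
    }
    return order;  // any modules missing from 'order' are part of feedback loops
}
```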

Anyway, that's for sure another topic entirely, for another day :slight_smile: