I like this! So about 75% of the modules run at 15% load or less under “average” conditions, and around 90% run at 25% or less. Those are good stats to keep in mind. Considering there are two cores (roughly 200% of total headroom), if you stick to that 90% bracket and prefer modules in the 75% bracket, you can run 8-12 modules at 48kHz. More if you can drop to 32k or 24k.
Obviously this is a gross oversimplification, since there are some must-try modules in that upper 10%, and there are examples of patches running 30-40 or more modules… but 8-12 is a helpful mental target when building a patch, especially for new users who aren’t familiar with the environment yet.
Dan,
Given we now have this CPU load database, would it be possible to code some kind of numerical display element in the 4ms MetaModule VCV plugin that displays the estimated CPU load of each user’s patch (using the database as an embedded lookup table) prior to uploading? It would be really useful for keeping track of things.
Yeah, something like that. I had promised it before the release, but a CPU load meter is feeling like less and less of a good idea the more data we collect on modules. Instead, something else might solve the need for a meter.
The problem with trying to distill CPU load estimates down to a single number (or even a min/max) is that I don’t see how to do it so that it’s both accurate and precise enough. An imprecise meter wouldn’t be useful (e.g. “This patch uses 10%-110% cpu”), and an inaccurate CPU meter would just cause frustration and confusion (e.g. “This patch will run at 75%” – but actually it overloads).
Looking at the spreadsheet as it exists now, the average spread between min and max CPU usage for a module at a 512-sample block size is about 4%. A 12-module patch therefore averages a 48% spread, e.g. “This patch uses 60% - 108%”. I tested some real-world patches and that’s about what an imaginary CPU meter would report. The meter would also need a caveat that even that range might be completely wrong because of knob positions or internal states/modes.
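To make that concrete, here’s a minimal sketch of what such a naive min/max meter would do: just sum each module’s min and max from the database. Everything here (the struct, field names, module names, and numbers) is invented for illustration; it’s not the real database format or plugin code.

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

// Per-module load range, measured at a 512-sample block size
struct LoadRange { float minPct; float maxPct; };

// Illustrative stand-in for the CPU load database
std::unordered_map<std::string, LoadRange> db = {
    {"OscA", {6.f, 11.f}}, {"FilterB", {4.f, 8.f}}, {"ReverbC", {14.f, 19.f}},
};

// A naive meter just adds up each module's min and max
LoadRange estimatePatch(const std::vector<std::string>& modules) {
    LoadRange total{0.f, 0.f};
    for (const auto& slug : modules) {
        auto it = db.find(slug);
        if (it == db.end()) continue; // no data for this module
        total.minPct += it->second.minPct;
        total.maxPct += it->second.maxPct;
    }
    return total;
}

int main() {
    auto est = estimatePatch({"OscA", "FilterB", "ReverbC"});
    // With a ~4% spread per module, 12 modules stack up to a ~48% spread
    std::printf("Estimated CPU: %.0f%% - %.0f%%\n", est.minPct, est.maxPct);
}
```

The per-module spreads add up linearly, which is exactly why the reported range gets less useful as patches grow.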
If I were making a patch in VCV and saw 60-108% with an asterisk that it might even be substantially higher or lower, I don’t think I’d find that very useful. On the other hand, if I added a new module and saw it jump to 80-128%, then that difference of 20% would be useful to know – and that’s making us re-think the whole idea of a CPU meter.
What we need is a solution that provides some information about the patch and about potential modules, but does not give a false sense of hope/doom. Something that encourages users to experiment but also provides some guidance in making choices about which modules to use.
Perhaps: instead of a meter with one number (or a min/max), we could have a link to a web page or a pop-up window that lets you filter and find modules by function/tag and see their CPU load numbers. Maybe it could also list the load numbers for the modules in your patch and provide a subtotal for each column of the spreadsheet. This idea is especially attractive after seeing @gabrielroth’s work on the tag website. It would at least solve the issue of “I’m at 80% and I need an LFO, what are the options?”, and it wouldn’t be misleading, since you’d be presented with the raw data and forced to reckon with the fact that CPU usage is inherently complicated.
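As a rough sketch of the kind of query that page would answer (the entries and field names below are made up; the real data would come from the CPU load database):

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct ModuleEntry {
    std::string name;
    std::vector<std::string> tags;
    float loadPct; // measured at a 512-sample block size, knobs at 25%
};

int main() {
    std::vector<ModuleEntry> db = {
        {"LFO-1", {"LFO"}, 3.2f},
        {"BigOsc", {"Oscillator"}, 22.5f},
        {"WobbleLFO", {"LFO", "Random"}, 7.8f},
    };

    // "I'm at 80% and I need an LFO, what are the options?"
    std::vector<ModuleEntry> hits;
    for (const auto& m : db)
        if (std::find(m.tags.begin(), m.tags.end(), "LFO") != m.tags.end())
            hits.push_back(m);

    // Cheapest options first
    std::sort(hits.begin(), hits.end(),
              [](const ModuleEntry& a, const ModuleEntry& b) { return a.loadPct < b.loadPct; });

    for (const auto& m : hits)
        std::printf("%-10s %.1f%%\n", m.name.c_str(), m.loadPct);
}
```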
On that topic (complexity), here are a few things making it hard to distill the CPU usage of a patch down to one or two useful numbers. Apologies in advance for the long-rant style!
- If you scroll through the CPU load spreadsheet, most modules are fairly constant in their usage for a given block size. But about 1 in 6 modules shoots up by at least 5% when you patch its jacks.
- Similarly, some modules change drastically depending on knob positions, e.g. some oscillators use more CPU as the frequency goes up. Measuring various knob positions would turn this into a big-data project: even just 6 knobs at a few positions each, combined with 6 jacks, requires over 100k measurements for a single module (there’s a back-of-envelope sketch after this list). Because measuring every combination is such a big undertaking, the spreadsheet represents each module with all knobs/params at 25%.
- Some modules have very different CPU usage depending on an internal state: a right-click menu selects the algorithm/mode, a button selects the oversampling amount, or you have to load a wavetable or sample file first. For some modules this is the difference between a cool 10% load and overloading at >100%. I don’t have any data about which modules, or how many, are like this, and I’m not even sure how to measure it programmatically.
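Here’s the back-of-envelope math behind that measurement count, assuming 5 test positions per knob (the exact total depends on how finely you sample):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    int knobs = 6, positionsPerKnob = 5; // e.g. 0%, 25%, 50%, 75%, 100%
    int jacks = 6;                       // each jack patched or unpatched
    double knobCombos = std::pow(positionsPerKnob, knobs); // 5^6 = 15625
    double jackCombos = std::pow(2, jacks);                // 2^6 = 64
    std::printf("Combinations: %.0f\n", knobCombos * jackCombos); // 1,000,000
}
```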
TY, I did not have an appreciation of all that. Perhaps we should define CPU load as an art: Elastika measuring high at idle and lower under load doesn’t look like any science I understand.
Perhaps a simple color-code system that shows max usage as low/med/high? That would at least give you an idea of the CPU impact of a module.
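Something like this, maybe, using the 15%/25% brackets mentioned earlier in the thread as cutoffs (the thresholds are just a guess at what would feel right):

```cpp
#include <cstdio>

// Map a module's max measured load to a color bucket.
// Cutoffs borrow the 15%/25% brackets from earlier in the thread.
const char* loadColor(float maxPct) {
    if (maxPct < 15.f) return "green (low)";
    if (maxPct < 25.f) return "yellow (medium)";
    return "red (high)";
}

int main() {
    std::printf("8%%  -> %s\n", loadColor(8.f));
    std::printf("18%% -> %s\n", loadColor(18.f));
    std::printf("40%% -> %s\n", loadColor(40.f));
}
```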
Yes, this helps. Are you sharing the sheet somewhere?
We just made a new page on our site, which combines the CPU load database with a list of modules/plugins and tags for each module.
It’s based on @gabrielroth’s Module Finder.
https://metamodule.info/modulefinder
I won’t be updating the CPU database on this thread anymore, as all new data will just go right to that page.
ty much for all the hard work with both 2.0 and the Module Finder!!
just noticing:
Seaside Modular Tala: 101573% – 101697%
don’t think i’ll be using that module haha, lest it melt my mm!
Ha ha. Yeah, that one takes a long time to load, but once it’s loaded it runs fine. The automatic CPU tester rolls the load time into the measurement.
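For a sense of scale, here’s the quick arithmetic on why a one-time load cost shows up as a six-digit percentage (assuming 48 kHz and 512-sample blocks for the math):

```cpp
#include <cstdio>

int main() {
    double blockMs = 512.0 / 48000.0 * 1000.0;   // ~10.67 ms of audio per block
    double reportedPct = 101573.0;               // the Tala reading above
    double impliedSec = blockMs * reportedPct / 100.0 / 1000.0;
    // ~10.8 s: consistent with a slow one-time load, not steady-state DSP cost
    std::printf("Implied time for the measured block: %.1f s\n", impliedSec);
}
```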