CPU load number database

Yeah, something like that. I had promised it before the release, but the more data we collect on modules, the less a CPU load meter feels like a good idea. Instead, something else might fill the same need.

The problem with trying to distill CPU load estimates down to a single number (or even a min/max) is that I don’t see how to do it so that it’s both accurate and precise enough. An imprecise meter wouldn’t be useful (e.g. “This patch uses 10%-110% CPU”), and an inaccurate CPU meter would just cause frustration and confusion (e.g. “This patch will run at 75%”, but actually it overloads).

Looking at the spreadsheet as it exists now, the average spread between min and max CPU usage for a module at 512 block size is about 4%. A 12-module patch therefore gives an average spread of about 48%, e.g. “This patch uses 60% - 108%”. I tested some real-world patches and that’s about what an imaginary CPU meter would report. We’d also have to caveat that the numbers might be completely inaccurate because of knob positions or internal states/modes.
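To make that arithmetic concrete, here’s a minimal sketch of how such a meter would have to add up per-module ranges. The module names and numbers below are invented for illustration; only the ~4% average spread comes from the spreadsheet.

```cpp
#include <cstdio>

// Hypothetical per-module CPU load range (percent of one audio callback
// at 512-sample block size). All values here are made up for illustration.
struct ModuleLoad {
    const char* name;
    float min_pct;  // knobs at 25%, jacks unpatched
    float max_pct;  // worst case measured
};

int main() {
    // A 12-module patch where each module averages a ~4% min/max spread.
    ModuleLoad patch[] = {
        {"osc1", 5, 10}, {"osc2", 5, 10}, {"filter", 7, 11},
        {"env1", 2, 5},  {"env2", 2, 5},  {"lfo", 1, 4},
        {"vca", 2, 5},   {"mixer", 3, 7}, {"delay", 9, 14},
        {"reverb", 14, 20}, {"seq", 4, 8}, {"scope", 6, 9},
    };

    float lo = 0, hi = 0;
    for (auto& m : patch) {
        lo += m.min_pct;  // per-module spreads simply add up
        hi += m.max_pct;
    }
    printf("This patch uses %.0f%% - %.0f%% CPU\n", lo, hi);  // 60% - 108%
}
```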

If I were making a patch in VCV and saw 60-108% with an asterisk that it might even be substantially higher or lower, I don’t think I’d find that very useful. On the other hand, if I added a new module and saw the range jump to 80-128%, that difference of 20% would be useful to know. The relative change matters more than the absolute numbers, and that’s making us re-think the whole idea of a CPU meter.

What we need is a solution that provides some information about the patch and about potential modules, but does not give a false sense of hope/doom. Something that encourages users to experiment but also provides some guidance in making choices about which modules to use.

Perhaps, instead of a meter with one number (or a min/max), we could have a link to a web page or a pop-up window that lets you filter and find modules based on function/tag and see their CPU load numbers. Maybe it could also list the load numbers for the modules in your patch and provide a subtotal for each column of the spreadsheet. This idea is especially attractive after seeing @gabrielroth’s work on the tag website. It would at least solve the issue of “I’m at 80% and I need an LFO, what are the options?”, without being misleading: you’d be presented with the raw data and forced to reckon with the fact that CPU usage is inherently complicated.
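To sketch what that lookup could feel like, here’s a minimal tag query over such a database. Everything in it is hypothetical (field names, tags, sample entries), not the real spreadsheet schema:

```cpp
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical row in the CPU load database. One min/max pair per
// block-size column is an assumption for this sketch.
struct ModuleEntry {
    std::string name;
    std::string tag;    // function/tag, e.g. "LFO", "VCO", "Reverb"
    float min_pct_512;  // measured min at 512 block size
    float max_pct_512;  // measured max at 512 block size
};

// "I'm at 80% and I need an LFO, what are the options?"
std::vector<ModuleEntry> find_by_tag(const std::vector<ModuleEntry>& db,
                                     const std::string& tag) {
    std::vector<ModuleEntry> out;
    for (const auto& e : db)
        if (e.tag == tag)
            out.push_back(e);
    // Cheapest worst-case first, so the user sees what still fits.
    std::sort(out.begin(), out.end(),
              [](const ModuleEntry& a, const ModuleEntry& b) {
                  return a.max_pct_512 < b.max_pct_512;
              });
    return out;
}

int main() {
    std::vector<ModuleEntry> db = {
        {"SimpleLFO", "LFO", 0.5f, 1.0f},  // made-up entries
        {"MegaLFO", "LFO", 3.0f, 8.0f},
        {"BigVerb", "Reverb", 14.0f, 20.0f},
    };
    for (const auto& e : find_by_tag(db, "LFO"))
        printf("%-10s %4.1f%% - %4.1f%%\n",
               e.name.c_str(), e.min_pct_512, e.max_pct_512);
}
```

The point is that you’d always see raw min/max numbers per candidate module (plus a subtotal for your current patch), never one blended figure.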

On that topic (complexity), here are a few things that make it hard to distill the CPU usage of a patch down to one or two useful numbers. Apologies in advance for the long-rant style!

  • If you scroll through the CPU load spreadsheet, most modules are fairly constant in their usage for a given block size. But about 1 in 6 modules jumps up at least 5% when you patch jacks.
  • Similarly, some modules change drastically depending on knob positions, e.g. some oscillators use more CPU as the frequency goes up. Measuring various knob positions would turn this into a big-data project: even just 6 knobs at a few positions plus 6 jacks requires > 100k measurements for a single module (see the sketch after this list). Since measuring every knob at various positions is such a big undertaking, the spreadsheet only represents each module with all knobs/params at 25%.
  • Some modules have very different CPU usage depending on an internal state: e.g. a right-click menu selects the algorithm/mode, a button selects the oversampling amount, or you have to load a wavetable or sample file. For some modules this is the difference between a cool 10% load and >100% overloading. I don’t have any data on which modules, or how many, are like this, and I’m not even sure how to collect that data programmatically.
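For the measurement-count estimate in the knobs bullet above: with 6 knobs at, say, 4 positions each (the number of positions is my assumption; “a few” is all I’d commit to) and 6 jacks that are each either patched or unpatched, the combinations multiply out well past 100k:

```cpp
#include <cstdio>

int main() {
    const int knobs = 6, positions = 4;  // 4 positions per knob is assumed
    const int jacks = 6;                 // each jack: patched or unpatched

    long long combos = 1;
    for (int k = 0; k < knobs; k++) combos *= positions;  // 4^6 = 4096
    for (int j = 0; j < jacks; j++) combos *= 2;          // 2^6 = 64
    printf("%lld measurements for one module\n", combos);  // 262144
}
```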