I have now created several patches that won’t play because of the “greater than 99% CPU” limit. I don’t expect the CPU usage on the MM to be anywhere near the same as my 16-core workstation, but should I expect a somewhat reliable correlation between the CPU in Rack and on the MM? How many hardware threads does the MM have? I assume I can limit Rack to the same number of threads to get a more accurate baseline for estimation? Having to incrementally chop modules from the patch, then test each iteration to see whether the MM CPU is adequate and it’s not clipping, is a pain in the arse.
Roughly, yes, there should be a correlation, but it’s not reliable: some optimizations that make a huge difference only happen on your host computer, or only happen on the MM. But I do use my desktop CPU reading as a rough guide. For instance, on my old MacBook that we take to trade shows (I don’t recall the model, but it’s ancient), I multiply by 4 to estimate my MM CPU usage. And I don’t push it too hard: better to shoot for 80% and be happy when it’s 75% than to shoot for 98% and be frustrated when it’s 101%.
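To put numbers on that rule of thumb, here’s a minimal sketch of the arithmetic (Python, just for illustration); the 4x factor and the 80% target are the figures from the paragraph above, and the scale factor is something each person would have to calibrate against their own machine:

```python
# Back-of-envelope estimate: scale a single-threaded desktop VCV Rack reading
# up to a guessed MetaModule load, and keep some headroom.
# The 4x factor is specific to the old MacBook mentioned above; find your own
# factor by comparing a few real patches between desktop and MM.

def estimate_mm_cpu(desktop_pct: float, scale_factor: float = 4.0) -> float:
    return desktop_pct * scale_factor

def has_headroom(estimated_pct: float, target_pct: float = 80.0) -> bool:
    # Shoot for ~80% so a missed estimate doesn't land at 101%.
    return estimated_pct <= target_pct

desktop = 22.0  # hypothetical single-thread reading from VCV Rack, in percent
est = estimate_mm_cpu(desktop)
print(f"estimated MM load ~{est:.0f}% -> {'fine' if has_headroom(est) else 'trim the patch'}")
```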
Two.
They are not technically “threads,” but it’s the same idea. For instance, you can run one Ensemble Osc and it’ll be around 30%. If you add a second one, the load barely changes, because they run in parallel. But add a third one and you’re up to 55% or so; the fourth one barely changes the CPU load again, etc.
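A rough mental model of that behaviour (an assumption for estimating purposes, not a description of how the firmware actually schedules modules) is that identical modules get split across the two cores and the reported load follows the busier core:

```python
import math

# Mental model only: identical modules split across the two cores, and the
# reported load roughly tracks the busier core. The ~30% per Ensemble Osc
# figure is from the example above.

def estimated_load(n_modules: int, per_module_pct: float = 30.0, cores: int = 2) -> float:
    busiest_core = math.ceil(n_modules / cores)   # modules on the most loaded core
    return busiest_core * per_module_pct

for n in range(1, 5):
    print(f"{n} x Ensemble Osc -> ~{estimated_load(n):.0f}%")
# 1 -> ~30%, 2 -> ~30%, 3 -> ~60%, 4 -> ~60% (the real third-oscillator number
# is ~55%, so treat this as an upper-bound style estimate)
```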
But if you want to use the computer to estimate, I recommend setting the VCV engine to 1 thread.
If you’re relatively new to VCV: always set the engine to a single thread. Adding unnecessary threads will only spin your fans up. Only go to more than one thread if you need more modules than a single thread supports, and that will never be the case within the MetaModule’s maximum module limit (32).
Sorry, but I think this is a red herring… at least from my experience so far.
On my Mac Mini M1, I created a relatively simple patch (I thought ;)), and it used about 1% CPU load on the desktop (compared to an ‘empty’ patch)
… and this wouldn’t run at all on the MM (> 99%).
(And yes, I only use one thread on VCV desktop.)
When it failed to run, I turned on the performance meters: the two Rings modules show 1% each, and Noise Plethora 1.8%… everything else is pretty much zero.
(Yup, no idea why VCV is reporting 1 + 1 + 1.8 as 1% total on one core, but that’s what it does… another reason why the estimate is not useful ;))
So what can I take away from this? The numbers are too small to really be useful.
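Just for a sense of scale, assuming the per-module meter is a percentage of the sample period (my reading of it, not something I’ve confirmed):

```python
# How small these numbers really are, assuming the per-module meter is a
# percentage of the sample period (my reading of it, not confirmed).

sample_rate = 48_000                    # Hz
frame_period_us = 1e6 / sample_rate     # ~20.8 microseconds per sample

for pct in (1.0, 1.8):                  # Rings at 1%, Noise Plethora at 1.8%
    print(f"{pct}% of a frame ~= {frame_period_us * pct / 100:.2f} us per sample")
# Roughly 0.2 us and 0.4 us per sample: small enough that per-block overhead,
# meter rounding and SIMD differences can easily swamp the comparison.
```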
That’s why I created this feature request: we need a way, on the module itself, to know which modules are CPU hungry (*).
Knowing the total load would only help if there were a quick workflow for incrementally developing a patch, so that you know the LAST module you added was the ‘problem’ one.
The current workflow is not incremental (due to transfer overhead); rather, you make the patch, copy it across… and try again. So it’s better to have more diagnostics on the module itself.
(*) Especially as this may be different on the MM compared to a particular computer, due to things like NEON/SIMD optimisation.
Edit: I wondered if I could somehow get my M1 to create enough load to make the CPU numbers a bit bigger/more useful.
So I upped VCV to 96k and used a 32-sample buffer… it didn’t really change the numbers much at all.
Interestingly, the total CPU load crept up a little, to 3%, but the performance meter numbers hardly moved… again, not sure what’s going on there.
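For what it’s worth, my naive expectation, under the same unconfirmed assumption about what the meter measures, was that doubling the sample rate should roughly double the per-module percentages:

```python
# Naive expectation for the 96 kHz experiment: if a module's per-sample cost
# stayed constant, its meter percentage should scale with the sample rate.

def expected_pct(old_pct: float, old_rate: float = 48_000, new_rate: float = 96_000) -> float:
    return old_pct * new_rate / old_rate

print(expected_pct(1.0))   # -> 2.0, i.e. a 1% module "should" read ~2% at 96 kHz
# The meters hardly moved, so either the per-sample cost actually drops at 96 kHz
# or the meter isn't measuring what I'm assuming here.
```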