sorry, but I think this is a red herring… at least from my experience so far.
on my Mac Mini M1, I created a relatively simple patch (I thought ;)), and it used about 1% CPU load on desktop (compared to an ‘empty’ patch)
… and this wouldn’t run at all on the MM, showing > 99%
(and yes, I only use one thread on VCV desktop)
when it failed to run, I turned on the performance meters:
the two Rings modules showed 1%, and Noise Plethora 1.8%… everything else is pretty much zero.
(yup, no idea why VCV is reporting 1 + 1 + 1.8 as 1% total on one core, but that’s what it does… another reason why the estimate is not useful ;))
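to be clear, I don’t know how the meters are actually implemented; this is just the mental model I’m working from (everything below is a made-up sketch, not VCV code): time each module’s process call, divide by the per-frame budget, then smooth heavily. even that simple model already shows why per-module numbers and the total don’t have to line up.

```cpp
// rough sketch of how I imagine a per-module CPU meter could work
// (NOT VCV's actual code -- names and numbers are made up for illustration)
#include <chrono>
#include <cstdio>
#include <vector>

struct FakeModule {
    float meter = 0.f;  // smoothed fraction of the per-frame time budget
    void process() {
        // stand-in for real DSP work
        volatile float x = 0.f;
        for (int i = 0; i < 200; i++) x += i * 0.001f;
    }
};

int main() {
    const float sampleRate = 48000.f;
    const double framePeriod = 1.0 / sampleRate;  // ~20.8 us of budget per frame
    const float smoothing = 0.001f;               // heavy smoothing -> meters lag the real load

    std::vector<FakeModule> modules(3);
    for (int frame = 0; frame < 48000; frame++) {
        for (auto& m : modules) {
            auto t0 = std::chrono::high_resolution_clock::now();
            m.process();
            auto t1 = std::chrono::high_resolution_clock::now();
            double seconds = std::chrono::duration<double>(t1 - t0).count();
            float load = (float)(seconds / framePeriod);  // fraction of the budget used
            m.meter += smoothing * (load - m.meter);      // exponential moving average
        }
    }
    for (auto& m : modules)
        printf("module meter: %.1f%%\n", m.meter * 100.f);
}
```

if the per-module meters and the total are measured over different windows, or smoothed differently, they can easily disagree, especially at these tiny values where the timer overhead itself is a big chunk of what’s being measured.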
so what can I take away from this? the numbers are too small to really be useful.
that’s why I created this feature request: we need a way, on the MM itself, to know which modules are CPU hungry ( * )
knowing the total load would only help if there were a quick workflow for incrementally developing a patch, so that you know the LAST module added was the ‘problem one’.
the current workflow is not incremental (due to transfer overhead); rather, you make the patch, copy it across… and try again. so it’s better to have more diagnostics on the MM itself.
( * ) especially as this may differ on the MM compared to a particular computer, due to things like NEON/SIMD optimisation.
edit: I wondered if I could somehow get my M1 to create enough load to make the CPU numbers a bit bigger/more useful.
so I upped VCV to 96k and used a 32-sample buffer… it didn’t really change the numbers much at all.
interestingly, total CPU load crept up a little, to 3%, but the performance meter numbers hardly moved… again, not sure what’s going on there.
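for what it’s worth, here’s the back-of-envelope I was expecting: going from 48k to 96k halves the time budget per frame, so for the same work per frame the percentages should roughly double (the 0.2 µs per-frame cost below is made up, just to illustrate the arithmetic).

```cpp
// quick back-of-envelope: why I expected the meters to roughly double at 96k
#include <cstdio>

int main() {
    double perFrameCost = 0.2e-6;  // pretend a module costs 0.2 us per frame (made-up number)
    double rates[] = {48000.0, 96000.0};
    for (double sr : rates) {
        double budget = 1.0 / sr;  // time available per frame at this sample rate
        printf("%.0f Hz: budget %.1f us -> load %.1f%%\n",
               sr, budget * 1e6, 100.0 * perFrameCost / budget);
    }
}
```

the total creeping up while the per-module meters hardly move is another hint that the meters aren’t precise enough at this scale to be useful for planning an MM patch.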