Thoughts on AI use to port an open source sequencer to VCV Rack, then MetaModule

Curious what people's thoughts are on using AI to port an open source sequencer to VCV and then MetaModule if you have no experience coding. I want to do that with the TB-303 sequencer from the Midilab Aciduino project on GitHub, which is MIT licensed, but I have no experience with coding. I'm almost done getting it to work.

What are your thoughts on using AI to make music?

Only joking. But bear in mind that using AI to do something other skilled/creative humans do can be a bit emotive at times. Though I find most developers (including myself) are embracing it, so perhaps it's less contentious than it appears to be for many musicians.

but on to your question…

It depends. Generally, coding with AI shows its strengths and weaknesses very clearly.

AI will usually create something that seems plausible, if not correct.
For coding this means that 95% of the time it will produce code that compiles (plausible).
But that doesn't mean it will function correctly (correct), though it often will.

For experienced programmers, this is a non-issue: they can read the code and decide if the AI is going mad, or has something of value to add.

Without that experience, you are just hoping… or going through an iterative debug/test loop, hoping the AI will solve the issue for you. It can feel like the infinite monkey theorem at times.

BUT… on a more practical level, it depends on the plugin.
Frankly, some plugins can be ported to the MM with zero code changes and without AI, whilst for some it's nigh impossible, aka a complete rewrite of the code.

Generally, AI knows something about the VCV SDK (it's been trawling GitHub ;)), but I suspect it does not really know much about the MM SDK, so results will vary.
You'd pretty much just have to try it and see…

When do you know if the AI is hallucinating? What are the signs?

Well, from a previous (major) project I did recently, the most common one is it going around in circles, trying to solve a problem in the same way that isn't working.
This is not A → B → A, but more like A → B → C → D → A.
This is where you need to intervene, to bring some clarity and creativity to get it out of what it thinks 'is the correct way'.

Also be prepared for it to lead you down many dead ends, and for it not being critical enough.
I had to seriously modify my prompting strategy, as it's very keen to say your ideas are great… when you are talking nonsense or have forgotten something.
I also frequently had to tell it its ideas were completely flawed and to point out why.

these things can be major time sinks… if you are not very careful.

overall, give it a go…
if you cannot get it to work reasonably quickly, then either
a) give up :wink:
b) learn some coding skills to evaluate what it's doing.

a couple of tips:

  • VS Code with GitHub Copilot is excellent
    for serious work, you'll need to use an AI that has an iterative approach and has access to local files.
  • all AI models are different
    not better, just different - models are trained in different ways, with a different focus. Claude might be good in one case, Grok Code might be better in another - sometimes GPT-4 is 'good enough'.
  • prompting is a skill !
    different prompts will lead to different solutions (on different models, see above).
    this is a fun skill to learn, and where you’ll really start to see how powerful AI can be - and also get to know its weaknesses.
  • don’t expect a dev to ‘finish it off’
    this would be a bit like getting Suno to write a track, then asking a musician to ‘make it good’ :wink:
  • credit / AI usage
    if you release something, always credit the original source that you used. Often as developers we stand on the shoulders of others' work, and it's important to credit this, esp. in open source.
    Personally, I also will let people know when I used AI, and for what - ethics in this field are ‘new’, but this is one trend that appears to be emerging.

overall just have fun… AI is here to stay, so I say, lets embrace it :slight_smile:

Oh, my sequencer is already 98% done and working, I just have one or two little bugs left to fix. It's already working in VCV for the most part, just some usability tweaks left. Next is porting it to MM; considering there are no real visual elements that are very flashy, I don't think porting to MM will be difficult. So far it has only taken a few hours to get it working.

if you don’t have any custom UI elements, it’ll likely just work on MM.

I’ll be interested to see how it turns out.
My main issue with (non-generative) sequencers on the MM is that the form factor does not really lend itself well to them - given sequencers are pretty 'hands on' and usually require a reasonable number of buttons to achieve this.
So overall, I'd prefer to use a hardware sequencer; Hapax / Hermod+ are my main go-tos.

No issue with generative ones, as these (obviously) use parameters to drive the generation, so they work OK with just a few pots.

The UI is very simple: just buttons for steps and blinking lights to correspond with gates and accents, so it should be easy to be hands-on with it using a standard MIDI keyboard and/or the button expander. I want it to be as simple and barebones as possible for my shrimple brain. It has one black rectangle with text on it as a mock screen and some other elements in the code, but it's just one SVG file.

This is a great response, and I think this part can’t be stressed enough:

Prompting is a skill.

My first love was journalism, not software engineering. That has been a useful journey when it comes to interacting with AI.

This sentiment was echoed on the VS Code Insiders podcast recently. The person being interviewed said that people have a tendency to “under-prompt.” This makes sense, because person-to-person communication has a contextual element that AI isn’t great at yet.

The gist of it is this: Overdescribe. Don't worry so much about 'prose'-related things (the rhythm of your writing, word repetition). Concentrate on painting as complete a picture as you can. Think photo-realistic, just in written form.

I usually prompt AI with AI, because even if you think you are good at prompting, AI knows AI better, and it usually knows exactly how to phrase something so that you get fewer errors.
I always ask what it thinks about my prompt:
"I want to do X and Y; I am going to post this prompt to my agent … give me a much improved prompt." I use Cursor or Copilot. Sometimes I also tell it which AI agent I use, because they don't all work the same, and often I ask which one to use for the task.

Then it comes up with a much, much improved version of my prompt. I usually have harsher personalities too, no sugar-coating; for coding, honesty is a must, obviously.

AI coding is so good right now that these models are for the most part written by AI.

A year ago, when I started using AI coding, it was time-consuming and you got a lot of errors and hallucinations.

If you tried it just months ago, maybe even just weeks ago, then you are not up to date with how good it is right now. It's changing that fast. You often see people saying AI coding is garbage, hallucinating, etc., and they base it off something they tried a year ago, and maybe not even in Cursor etc. but prompted in ChatGPT… big difference, it's not the same at all.

I have made one module from scratch (AI-coded) for VCV and later for MM (a drum pattern module),
and yesterday I ported TB3PO (from Ornament and Crime) to VCV. I'm going to continue today to port it to MM, and also make a bytebeat generator which can store and recall presets (it always annoyed me that you can't save your sweet spots), and also use CV for these presets.

No shame in using AI, if it's getting you the results you need. Getting the plugin to run on MetaModule efficiently requires a bit more understanding of best practices, but in most cases, if you're running just a single module, it will probably run fine on its own, even at high sample rates.

Optimizing your plugin and developing a deeper understanding of what the code is doing will let the plugin run with lower CPU to use in a more complex patch. The process of learning how to optimize by tweaking the AI generated code is a great and practical learning exercise. In the process you’ll realize ways to simplify what the AI is spitting out, or simply find a better method than what it provided for you.

I'm personally not particularly good at coding, nor am I very knowledgeable about computer science; I rely heavily on AI to do the majority of the groundwork of my code. I do have to go back in and optimize it for MetaModule, a process I'm just starting to learn.

All of that being said, what’s most important for this particular application is really understanding the underlying synthesis theory, analyzing the output and UI aesthetically, and understanding some basic physics concepts about audio + electronics to figure out why the AI isn’t giving you what you need. These categories I’m very confident in and have a deep knowledge of, which is why AI is a killer tool for me. If you have strong foundations in audio engineering and synthesis, you might find the tool to be a powerhouse for you. If you don’t really know what’s going on at a baseline theoretical level with the audio/synthesis you might find yourself running in circles.

For example, saying “make me an oscillator” could yield a million and one poor examples. A better prompt would be something very specific and discrete about each individual feature like:

“Here is my UI and setup for my module (copy and paste the widget and config that was generated from the VCV script from your artwork). I need to make a VCO with 4 basic waveshapes: Sine, Triangle, Saw, Square. There is a main pitch knob that has a range of 20hz to 20khz with logarithmic scaling across the knob. The 1v per octave input controls the pitch of the oscillator: each volt is equivalent to one octave in pitch (the AI will automatically realize that this means the scaling is an exponential curve). The output of the module should be in the range of -5v to +5v. A PWM input is provided for the square wave section of the module which allows external control voltage in the range of -5v to 5v to control the duty cycle of the square wave from 10% to 90%. All waveshapes need to have the same amplitude regardless of their waveshape. An FM input is provided with a dedicated attenuator. Voltages at this input should be rescaled to the range of -1 to 1v so that it can be used for a vibrato like effect.”

While a prompt this long might yield some incorrect results, you can start to see the type of language required to get it to stop hallucinating. I personally prefer to break each discrete step into its own prompt, test it, make sure it's doing what I need it to do, then feed it back into the AI with the next step.

Best of luck!