Hi,
In text-to-image, we can use the same seed, model, negative prompts, and parameters, and that lets us generate the SAME image every time.
With the "think" mode, since the LM interferes with the generation, we never obtain the same music from the same parameters, correct?
I was wondering if there is a way to give the "think" mode a "seed" of its own, so we can predict how it will alter the music.
That way, if I use the same parameters, model, shift, and seed for a generation, plus the same "seed for the LM", I would get THE SAME OUTPUT MUSIC.
I wonder if such a feature could exist, and whether it would be too complicated to implement.
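For what it's worth, the idea seems feasible in principle: if the "think" mode's token sampling is driven by a seeded RNG, the same seed should reproduce the same LM output. Here is a toy sketch (the vocabulary and function names are made up for illustration, not the actual implementation) showing that a seeded sampler is fully deterministic:

```python
import random

def sample_tokens(vocab, n, seed):
    # Seeding the sampler's RNG makes every token choice reproducible:
    # the same seed always yields the same sequence of picks.
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

# Hypothetical "note" vocabulary, standing in for LM tokens.
vocab = ["C4", "E4", "G4", "A4", "rest"]

run_a = sample_tokens(vocab, 8, seed=42)
run_b = sample_tokens(vocab, 8, seed=42)
print(run_a == run_b)  # same seed -> identical output
```

In a real pipeline the LM seed would be set alongside the diffusion seed, so the full (parameters, model seed, LM seed) tuple pins down the whole generation.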