MusicGen advice #3

@chlowden

Description

Hello,
This is filed as an issue, but maybe you have an idea that can help me. I am using your MusicGen interface with the metadata below:

{
  "_version": "0.0.1",
  "_hash_version": "0.0.3",
  "_type": "musicgen",
  "_audiocraft_version": "1.3.0",
  "models": {},
  "prompt": "((piano)) acoustic key F minor minimalist low energy 4/4, 150bpm 320kbps 48.0kHz Stereo",
  "hash": "05f88c6f4049307a5209b74c368f62fda1575c7ab45668e06b39f54806e0fbcd",
  "date": "2024-06-21_23-06-50",
  "text": "((piano)) acoustic key F minor minimalist low energy 4/4, 150bpm 320kbps 48.0kHz Stereo",
  "melody": "94240fe69b46edc19d55977a5b38598da85708c28bd932d51dbbd5f00e609076",
  "model": "facebook/musicgen-stereo-melody-large",
  "duration": 360,
  "topk": 250,
  "topp": 0,
  "temperature": 1,
  "cfg_coef": 3,
  "seed": "1538577670",
  "use_multi_band_diffusion": false
}
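For anyone reading along, here is a sketch of how the sampling fields in that metadata correspond to audiocraft's generation parameters. Only the JSON values are taken from the metadata above; the commented audiocraft calls are an assumption based on the standard audiocraft 1.3.0 API and are not the interface's actual code:

```python
import json

# Sampling-related fields copied from the metadata above (abridged).
metadata = json.loads("""{
  "model": "facebook/musicgen-stereo-melody-large",
  "duration": 360,
  "topk": 250,
  "topp": 0,
  "temperature": 1,
  "cfg_coef": 3
}""")

# In audiocraft (assumed API, not verified against this interface),
# these settings would typically be applied along these lines:
#
#   from audiocraft.models import MusicGen
#   model = MusicGen.get_pretrained(metadata["model"])
#   model.set_generation_params(
#       duration=metadata["duration"],        # 360 s, i.e. a 6-minute piece
#       top_k=metadata["topk"],               # sample from the 250 most likely tokens
#       top_p=metadata["topp"],               # 0 disables nucleus sampling; top_k is used
#       temperature=metadata["temperature"],  # 1 = no sharpening/flattening of logits
#       cfg_coef=metadata["cfg_coef"],        # classifier-free guidance strength
#   )

print(metadata["duration"], metadata["topk"])
```

One thing worth noting: MusicGen was trained on roughly 30-second segments, and longer outputs are produced with a sliding window, so a 360-second generation gives the model many opportunities to drift away from the melody conditioning.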

The melody reference is Philip Glass Metamorphosis 5
https://www.youtube.com/watch?v=Rebr_F53db8

This seemed to me reasonably within reach for a general AI model. After hours of playing around with different settings, I still don't get anything near the reference; I get an audio "soup" at best. As there seems to be rather little written about MusicGen, I was wondering if you have any ideas about the limits of the MusicGen model and what it might have been trained on?
Any thoughts are most welcome.
Thank you.
