How to add a new composer & unable to finetune the NotaGen-X model with 24GB VRAM #18
Description
Hello, first of all I would like to thank you for your work and for sharing it with us. This is amazing work that I really enjoyed and appreciate. I do have a few questions/reports in this issue.
How do I add a new composer?
There is a composer I really like, Charles-Valentin Alkan, who was a contemporary of Chopin.
I want the model to be able to generate pieces in his style. I have collected around 100 .mxl files of his pieces (I know that might not be enough; I'm still trying to collect more, but it's hard since he isn't that famous).
I looked at README.md and issue #6, but I'm still confused about how to actually do this.
Is there a detailed tutorial you can link me to?
(Sorry I'm new to these)
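In case it helps describe where I am: before any training I wanted to sanity-check the collected files. This is just my own stdlib sketch (I don't know what format the repo's preprocessing expects) that relies on the fact that a .mxl file is a zip archive containing MusicXML:

```python
import zipfile
from pathlib import Path

def check_mxl_files(folder):
    """Split .mxl files in `folder` into valid/invalid lists.

    A well-formed .mxl is a zip archive holding at least one
    .xml/.musicxml entry; anything else is flagged as bad.
    """
    good, bad = [], []
    for path in sorted(Path(folder).glob("*.mxl")):
        try:
            with zipfile.ZipFile(path) as z:
                if any(n.endswith((".xml", ".musicxml")) for n in z.namelist()):
                    good.append(path.name)
                else:
                    bad.append(path.name)
        except zipfile.BadZipFile:
            bad.append(path.name)
    return good, bad
```

This only checks the container, not the musical content, but it already caught a couple of corrupted downloads for me.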
Unable to finetune the Large model with 24GB VRAM
I'm using Google Colab with an L4 GPU and was unable to finetune the Large model: when I tried with the Alkan pieces, it throws torch.cuda.OutOfMemoryError in the first epoch (VRAM usage jumps from 3 MB to ~21 GB and then the error is raised).
The finetune ran successfully with the Small pretrained model, though it used nearly all of the VRAM.
I haven't tried finetuning the Medium pretrained model yet, but I expect it would also run out of memory, given how much the Small model already uses.
This might be worth noting in README.md. (NotaGen-X inference runs perfectly on 24GB VRAM, by the way.)
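For what it's worth, a back-of-the-envelope estimate already suggests full fp32 Adam finetuning is tight at 24 GB. This is my own rough calculation with hypothetical parameter counts (I don't know the real model sizes), counting only weights, gradients, and Adam's two moment buffers, before any activations:

```python
def adam_fp32_memory_gb(n_params: float) -> float:
    """Rough steady-state memory for fp32 Adam training, activations excluded:
    4 bytes (weights) + 4 bytes (grads) + 8 bytes (Adam m and v) per parameter."""
    bytes_per_param = 4 + 4 + 8
    return n_params * bytes_per_param / 1024**3

# Hypothetical sizes -- placeholders, not the actual NotaGen parameter counts:
for name, n in [("small?", 110e6), ("medium?", 350e6), ("large?", 500e6)]:
    print(f"{name:8s} ~{adam_fp32_memory_gb(n):.1f} GB before activations")
```

If the Large model is anywhere near half a billion parameters, that is already ~7.5 GB of fixed state, and long-sequence activations could plausibly push it past 24 GB, which would match what I saw.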
Inference seems to be stuck when using the Small/Medium model
I tried to run inference with the Small and Medium pretrained models, but the inference never ends, and the longer it runs, the stranger the output gets. Is that normal behavior for them? (I did change the variables in config.py before running.)
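As a temporary workaround on my fork, I wrapped generation in a hard cap so a run can't hang forever. I don't know the repo's actual inference loop, so this is a generic sketch with a hypothetical `step_fn` standing in for whatever produces the next token:

```python
def generate_bounded(step_fn, prompt, max_new_tokens=1024, eos_token="<eos>"):
    """Run a next-token function until it emits EOS or the token budget
    is exhausted, whichever comes first, so inference always terminates."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        nxt = step_fn(tokens)
        if nxt == eos_token:
            break
        tokens.append(nxt)
    return tokens

# Toy step function that never emits EOS -- the cap still stops it:
out = generate_bounded(lambda toks: "x", ["a"], max_new_tokens=5)
```

With a cap like this the Small/Medium runs at least finish, even if the output itself is still nonsense, which makes me suspect those checkpoints simply fail to emit an end-of-sequence token.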
==================================================
I forked the repo and am currently learning the code, so I may have more questions/reports in the future, but I'll open new issues for those. Once again, thank you very much for your work and for sharing it with us!
(I hope there are no typos, but I'll edit if I find any.)