
Is it necessary to relearn acoustic token embeddings during LM training? #485

Open
jhauret opened this issue Aug 19, 2024 · 0 comments
jhauret commented Aug 19, 2024

Dear authors,

Have you run any experiments where, instead of learning new embedding tables for the code indices produced by DAC/EnCodec, you fed the codecs' own codebook vectors directly into the transformer? If so, how did that perform compared to jointly training the LM weights and the embedding tables?

Apologies if this is already covered somewhere in the project's literature; I have not been able to find it.
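To make the two options in the question concrete, here is a minimal numpy sketch of the contrast: (A) reusing a frozen codec codebook (with a small learned projection, since the codec dimension rarely matches the LM width) versus (B) learning a fresh embedding table jointly with the LM. All names and dimensions here are hypothetical, and the frozen matrix merely stands in for an RVQ codebook exported from DAC/EnCodec; this is an illustration of the design choice, not code from either project.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, codec_dim, model_dim = 1024, 8, 16

# Hypothetical frozen codebook, standing in for the RVQ codebook
# vectors a codec like DAC/EnCodec would expose (assumption).
frozen_codebook = rng.normal(size=(vocab_size, codec_dim))

# Option A: reuse the codec's vectors; only a small projection to
# the LM width would be trained, the codebook itself stays frozen.
proj = rng.normal(size=(codec_dim, model_dim))

def embed_frozen(token_ids):
    # Lookup in the frozen codebook, then project to model width.
    return frozen_codebook[token_ids] @ proj

# Option B: a fresh table at model width, learned jointly with
# the rest of the LM (random init, updated by backprop in practice).
learned_table = rng.normal(size=(vocab_size, model_dim))

def embed_learned(token_ids):
    # Plain trainable embedding lookup.
    return learned_table[token_ids]

tokens = np.array([3, 17, 255])
assert embed_frozen(tokens).shape == (3, model_dim)
assert embed_learned(tokens).shape == (3, model_dim)
```

Either path yields inputs of the same shape for the transformer; the question is purely whether the geometry of the codec's codebook is a useful (or limiting) initialization compared to embeddings learned from scratch.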
