w2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-supervised Speech Pre-training

Motivated by the success of masked language modeling (MLM) in pre-training natural language processing models, the authors propose w2v-BERT, a framework that explores MLM for self-supervised speech representation learning.

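The article states the idea only at a high level. As a rough illustration of the masked-prediction half of the recipe, the toy PyTorch sketch below masks random spans of a speech feature sequence and trains a small context network to classify the masked frames into a discrete codebook. This is a hedged sketch, not the paper's implementation: the module sizes, masking probability, span length, and the random stand-in targets are all hypothetical, and the contrastive module that supplies the real discrete targets in w2v-BERT is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mask_spans(x, mask_prob=0.065, span_len=10):
    """Zero out random spans of frames; return masked features and a boolean mask."""
    batch, time, _ = x.shape
    mask = torch.zeros(batch, time, dtype=torch.bool)
    starts = torch.rand(batch, time) < mask_prob       # candidate span start frames
    for b in range(batch):
        for t in starts[b].nonzero(as_tuple=True)[0].tolist():
            mask[b, t:t + span_len] = True
    x_masked = x.clone()
    x_masked[mask] = 0.0                                # hide the masked frames
    return x_masked, mask

class MaskedPredictionHead(nn.Module):
    """Toy context network plus a classifier over a discrete codebook."""
    def __init__(self, dim=80, hidden=256, codebook_size=512):
        super().__init__()
        self.encoder = nn.GRU(dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, codebook_size)

    def forward(self, x):
        h, _ = self.encoder(x)
        return self.classifier(h)                       # per-frame logits

# Toy inputs: log-mel-like features and random stand-in codebook targets.
features = torch.randn(4, 200, 80)                      # (batch, frames, mel bins)
targets = torch.randint(0, 512, (4, 200))               # hypothetical discrete codes
x_masked, mask = mask_spans(features)
logits = MaskedPredictionHead()(x_masked)
mlm_loss = F.cross_entropy(logits[mask], targets[mask])  # loss only on masked frames
print(float(mlm_loss))
```

In the paper itself, the discrete targets at masked positions come from the quantizer of a contrastive module rather than being random, so the contrastive and masked-prediction objectives are trained jointly.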
