w2v-BERT: Combining Contrastive Learning and Masked Language Modeling for Self-Supervised Speech Pre-Training
Motivated by the success of masked language modeling (MLM) in pre-training natural language processing models, the authors propose w2v-BERT, which explores MLM for self-supervised speech representation learning by combining it with contrastive learning over quantized speech targets.
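To make the idea of combining the two objectives concrete, here is a minimal NumPy sketch, not the paper's implementation: all shapes, tensors, and the toy vocabulary are made-up illustrative values. A contrastive loss matches each masked context vector against quantized targets, an MLM-style cross-entropy predicts discretized token IDs, and the two losses are simply summed into one training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def contrastive_loss(context, quantized, temperature=0.1):
    # Cosine similarity between each masked context vector and all quantized
    # targets; the matching (diagonal) target is the positive, the rest act
    # as distractors. Standard InfoNCE-style formulation.
    c = context / np.linalg.norm(context, axis=1, keepdims=True)
    q = quantized / np.linalg.norm(quantized, axis=1, keepdims=True)
    logits = c @ q.T / temperature
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs.diagonal().mean()

def mlm_loss(pred_logits, token_ids):
    # Cross-entropy of predicted distributions against discretized target IDs,
    # analogous to BERT's masked-token prediction but over speech tokens.
    logits = pred_logits - pred_logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(token_ids)), token_ids].mean()

# Toy data: 4 masked positions, 16-dim features, 8-entry token vocabulary.
context = rng.normal(size=(4, 16))      # contextual vectors at masked positions
quantized = rng.normal(size=(4, 16))    # quantized target vectors
pred_logits = rng.normal(size=(4, 8))   # per-position token predictions
targets = rng.integers(0, 8, size=4)    # discretized target token IDs

total = contrastive_loss(context, quantized) + mlm_loss(pred_logits, targets)
print(round(float(total), 4))
```

In the sketch the two losses are weighted equally; in practice the relative weight of the contrastive and MLM terms is a tunable hyperparameter.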