A comparative study of deep learning based language representation learning models

Bibliographic Details
Main Authors: Boukabous, Mohammed (Author), Azizi, Mostafa (Author)
Format: EJournal Article
Published: Institute of Advanced Engineering and Science, 2021-05-01.
Online Access: https://ijeecs.iaescore.com/index.php/IJEECS/article/view/24400
Description
Summary: Deep learning (DL) approaches use multiple processing layers to learn hierarchical representations of data. Recently, many natural language processing (NLP) methods and model designs have shown significant progress, especially in text mining and analysis. For learning vector-space representations of text, there are well-known models such as Word2vec, GloVe, and fastText. NLP took a major step forward with the release of BERT and, more recently, GPT-3. In this paper, we highlight the most important language representation learning models in NLP and provide insight into their evolution. We also summarize, compare, and contrast these models on sentiment analysis, and discuss their main strengths and limitations. Our results show that BERT performs best among the compared language representation learning models.
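As an illustration of the two model families the abstract contrasts, below is a minimal sketch that learns vector-space word representations with Word2vec (via the gensim library) and runs sentiment analysis with a pretrained BERT-family model (via Hugging Face transformers). The toy corpus, hyperparameters, and the pipeline's default checkpoint (a distilled BERT variant) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch; assumes gensim and transformers are installed.
# The toy corpus and pipeline defaults are illustrative only.
from gensim.models import Word2Vec
from transformers import pipeline

# Learn static vector-space word representations with Word2vec.
corpus = [
    ["deep", "learning", "models", "learn", "hierarchical", "representations"],
    ["bert", "improved", "many", "natural", "language", "processing", "tasks"],
]
w2v = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=20)
print(w2v.wv["learning"][:5])  # first 5 dimensions of the learned vector

# Sentiment analysis with a pretrained BERT-family model; the pipeline
# downloads a default checkpoint fine-tuned for sentiment classification.
classifier = pipeline("sentiment-analysis")
print(classifier("This language representation model works remarkably well."))
```

The contrast is the point of the sketch: Word2vec produces one fixed vector per word regardless of context, while the BERT-based classifier builds contextual representations of the whole sentence, which is why models like BERT tend to do better on tasks such as sentiment analysis.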