NLP(76)
[2022-03-22] Today's NLP
BIOS: An Algorithmically Generated Biomedical Knowledge Graph
Biomedical knowledge graphs (BioMedKGs) are essential infrastructures for biomedical and healthcare big data and artificial intelligence (AI), facilitating natural language processing, model development, and data exchange. For many decades, these knowledge graphs have been built via expert curation, which can no longer catch up with t..
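As a generic illustration of what such a knowledge graph stores, here is a minimal sketch that represents a BioMedKG as (head, relation, tail) triples and indexes them for lookup. The entities and relations below are illustrative assumptions, not content taken from BIOS.

```python
# Minimal sketch: a knowledge graph as (head, relation, tail) triples.
# The example entities/relations are hypothetical, not from BIOS.
from collections import defaultdict

triples = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("headache", "is_a", "neurological symptom"),
]

# Index triples by head entity for quick neighborhood lookups.
graph = defaultdict(list)
for head, relation, tail in triples:
    graph[head].append((relation, tail))

print(graph["aspirin"])  # [('treats', 'headache'), ('interacts_with', 'warfarin')]
```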
[2022-03-21] Today's NLP
Do Multilingual Language Models Capture Differing Moral Norms?
Massively multilingual sentence representations are trained on large corpora of uncurated data, with a very imbalanced proportion of languages included in the training. This may cause the models to grasp cultural values, including moral judgments, from the high-resource languages and impose them on the low-resource languages. The lack ..
[2022-03-16] Today's NLP
Seamlessly Integrating Factual Information and Social Content with Persuasive Dialogue
Effective human-chatbot conversations need to achieve both coherence and efficiency. Complex conversation settings such as persuasion involve communicating changes in attitude or behavior, so users' perspectives need to be carefully considered and addressed, even when not directly related to the topic. In this..
[2022-03-15] Today's NLP
IndicNLG Suite: Multilingual Datasets for Diverse NLG Tasks in Indic Languages
In this paper, we present the IndicNLG suite, a collection of datasets for benchmarking Natural Language Generation (NLG) for 11 Indic languages. We focus on five diverse tasks, namely biography generation using Wikipedia infoboxes (WikiBio), news headline generation, sentence summarization, question generation and p..
[2022-03-14] Today's NLP
Integrating Dependency Tree Into Self-attention for Sentence Representation
Recent progress on parse tree encoders for sentence representation learning is notable. However, these works mainly encode tree structures recursively, which is not conducive to parallelization. Moreover, they rarely take into account the labels of arcs in dependency trees. To address both issues, we propo..
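To make the general idea concrete, below is a minimal sketch of one common way to fold labeled dependency arcs into self-attention: embed each arc label as a learned bias added to the attention scores, so the whole tree is consumed in parallel rather than recursively. All class names, shapes, and the biasing scheme are assumptions for illustration, not the method this paper proposes.

```python
# Sketch: self-attention biased by dependency-arc label embeddings,
# assuming a (batch, seq, seq) matrix of integer arc-label ids where a
# reserved "no-arc" id (e.g. 0) marks unconnected token pairs.
import torch
import torch.nn as nn

class DependencyBiasedSelfAttention(nn.Module):
    def __init__(self, d_model: int, num_labels: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One learned scalar bias per dependency label (nsubj, obj, ...).
        self.label_bias = nn.Embedding(num_labels, 1)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, arc_labels: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); arc_labels: (batch, seq, seq) long ids.
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = torch.matmul(q, k.transpose(-1, -2)) * self.scale
        # Add the arc-label bias so attention can follow labeled tree edges.
        scores = scores + self.label_bias(arc_labels).squeeze(-1)
        return torch.matmul(torch.softmax(scores, dim=-1), v)
```

Because the tree enters only as an additive score bias, every token pair is scored in one batched matrix multiply, avoiding the sequential bottleneck of recursive tree encoders.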
[2022-03-07] Today's NLP
Parameter-Efficient Mixture-of-Experts Architecture for Pre-trained Language Models
The state-of-the-art Mixture-of-Experts (MoE) architecture has achieved remarkable success in increasing model capacity. However, the widespread adoption of MoE has been hindered by its complexity, communication costs, and training instability. Here we present a novel MoE architecture based o..
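For context, the sketch below shows a bare-bones token-level MoE feed-forward layer with top-1 gating, the general mechanism the abstract builds on. The routing scheme, class names, and shapes are assumptions for illustration; the paper's parameter-efficient design is not reproduced here.

```python
# Sketch: a token-level Mixture-of-Experts feed-forward layer with top-1
# gating. Generic illustration only, not this paper's architecture.
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); each token is routed to its single best expert.
        probs = torch.softmax(self.gate(x), dim=-1)
        top_p, top_idx = probs.max(dim=-1)
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                # Scale by the gate probability so routing stays differentiable.
                out[mask] = expert(x[mask]) * top_p[mask].unsqueeze(-1)
        return out
```

With top-1 routing, per-token compute stays constant as experts are added, which is the usual motivation for scaling capacity with MoE despite the communication and stability costs the abstract mentions.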