Papers (61)
[2021-12-27] Today's NLP
Are E2E ASR models ready for an industrial usage? The Automated Speech Recognition (ASR) community is experiencing a major turning point with the rise of fully-neural (End-to-End, E2E) approaches. At the same time, the conventional hybrid model remains the standard choice for the practical usage of ASR. According to previous studies, the adoption of E2E ASR in real-world applications was hindere..
2021.12.27
[2021-12-24] Today's NLP
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? The fine-tuning of pre-trained language models has achieved great success in many NLP fields. Yet, it is strikingly vulnerable to adversarial examples, e.g., word substitution attacks using only synonyms can easily fool a BERT-based sentiment analysis model. In this paper, we demonstrate that adversarial training, the ..
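The excerpt cuts off, but as a concrete illustration of the synonym-based word substitution attack it mentions, here is a minimal Python sketch. The classify callable is an assumed placeholder for any sentiment classifier (e.g. a fine-tuned BERT), and the greedy single-swap search is my own simplification, not the paper's method.

from nltk.corpus import wordnet  # requires the WordNet data: nltk.download('wordnet')

def synonyms(word):
    # Collect single-word WordNet synonyms as candidate substitutions.
    out = set()
    for syn in wordnet.synsets(word):
        for lemma in syn.lemmas():
            name = lemma.name().replace('_', ' ')
            if name.lower() != word.lower() and ' ' not in name:
                out.add(name)
    return sorted(out)

def synonym_substitution_attack(tokens, classify):
    # Try single-word synonym swaps until the classifier's prediction flips.
    # classify: assumed callable, list of tokens -> predicted label.
    original_label = classify(tokens)
    for i, word in enumerate(tokens):
        for cand in synonyms(word):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            if classify(trial) != original_label:
                return trial, True   # adversarial example found
    return list(tokens), False       # no single swap changed the label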
2021.12.24
[2021-12-23] Today's NLP
Mixed Precision DNN Quantization for Overlapped Speech Separation and Recognition Recognition of overlapped speech has been a highly challenging task to date. State-of-the-art multi-channel speech separation systems are becoming increasingly complex and expensive for practical applications. To this end, low-bit neural network quantization provides a powerful solution to dramatically reduce their ..
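The excerpt is truncated, so as a rough illustration of what low-bit quantization does to a weight tensor, here is a minimal sketch of symmetric uniform quantization. The 4-bit setting and function name are illustrative assumptions, not the paper's mixed-precision scheme.

import numpy as np

def quantize_symmetric(weights, num_bits=4):
    # Symmetric uniform quantization: map float weights to signed integer
    # codes in [-qmax, qmax]; dequantize with codes * scale.
    qmax = 2 ** (num_bits - 1) - 1                 # 7 for 4-bit signed codes
    max_abs = float(np.max(np.abs(weights)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    codes = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return codes, scale

# Example: quantize a random weight matrix and check the reconstruction error.
w = np.random.randn(256, 256).astype(np.float32)
codes, scale = quantize_symmetric(w, num_bits=4)
w_hat = codes.astype(np.float32) * scale
print("mean abs error:", np.mean(np.abs(w - w_hat)))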
2021.12.23
Submitting a Paper to an NLP Workshop
Maybe it was because of COVID... I can't say, but during my entire master's I never once co-authored a paper with anyone in my lab....... With zero publications in that barren environment I started getting anxious, and a senior told me that workshops are less demanding than main conferences and suggested I submit a paper to one, so I started preparing this past summer! I submitted without big ambitions (since my advisor hardly ever gives feedback on papers..), mainly just to get feedback. But it turns out ambition isn't something you can throw away that easily ^^ Anyway, here is my write-up of the submission process from someone who did a fully remote master's and knew nothing. 1. Finding a workshop: submitting to the main venues like EMNLP or ACL felt too scary, and workshops seemed a bit less difficult, so I mostly looked for workshops. I just googled EMNLP workshops, ACL..
2021.11.25
E-petition popularity: Do linguistic and semantic factors matter? (Paper Notes)
www.sciencedirect.com/science/article/abs/pii/S0740624X16301253 Reference - Twitter studies: impact of textual patterns on retweets - Communication studies of..
2020.10.07
StructBERT: Incorporating Language Structures into Pretraining for Deep Language Understanding (Paper Notes)
StructBERT: incorporates language structures into pre-training (in practice it is more about word order than "language structure")
1) word-level ordering
2) sentence-level ordering
1) word-level ordering: as in the original BERT, mask some tokens, then pick 3 unmasked tokens (a trigram) and shuffle their order
* using 4 tokens did not give a meaningful performance difference, so 3 was chosen with robustness in mind
→ final hidden states of the masked tokens → softmax classifier → predict the original tokens
→ final hidden states of the shuffled tokens → softmax clas..
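The notes above cut off mid-sentence, so here is a minimal Python sketch of the word-level ordering objective as described: after BERT-style masking, pick three consecutive unmasked tokens, shuffle them, and train the model to recover the original tokens from the final hidden states at those positions. The function and argument names, and the exact sampling policy, are my own placeholders rather than StructBERT's code.

import random

def make_word_ordering_example(tokens, masked_positions, k=3, seed=None):
    # Build one training example for the word-level ordering objective:
    # choose k consecutive positions that were NOT masked, shuffle the tokens
    # there, and return the corrupted sequence plus the original tokens the
    # classifier must predict at those positions.
    rng = random.Random(seed)
    starts = [
        i for i in range(len(tokens) - k + 1)
        if all(j not in masked_positions for j in range(i, i + k))
    ]
    if not starts:                      # nothing to shuffle in this sequence
        return list(tokens), {}

    start = rng.choice(starts)
    span = list(range(start, start + k))
    original = [tokens[i] for i in span]

    perm = original[:]
    if len(set(original)) > 1:          # avoid an endless loop on repeated tokens
        while perm == original:         # make sure the order actually changes
            rng.shuffle(perm)

    shuffled = list(tokens)
    for pos, tok in zip(span, perm):
        shuffled[pos] = tok

    # Targets for the softmax classifier over the final hidden states: the
    # original token ids at the shuffled positions (the masked positions are
    # handled by the usual MLM prediction).
    targets = {pos: tok for pos, tok in zip(span, original)}
    return shuffled, targets

Per the notes, both the masked positions and the shuffled positions go through a softmax classifier that predicts the original tokens, so this corruption step simply sits alongside the standard masked-token objective.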
2020.10.07