Awesome Neural Adaptation in NLP

A curated list of awesome work on neural unsupervised domain adaptation in Natural Language Processing, including links to papers. The focus is on unsupervised neural DA methods and currently on work outside Machine Translation.
ACL, pp.8342-8360, (2020)
While our results demonstrate how these approaches can improve RoBERTa, a powerful language model, the approaches we studied are general enough to be applied to any pretrained language model
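For readers new to this line of work, here is a minimal sketch of continued (domain-adaptive) masked-LM pretraining using the Hugging Face transformers and datasets libraries; the checkpoint, the domain_corpus.txt file and the hyperparameters are placeholders, not the paper's setup.

```python
# Minimal continued-pretraining sketch: keep training a masked LM on in-domain text,
# then fine-tune the adapted encoder on the end task as usual.
# The corpus path, checkpoint and hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# One document per line in domain_corpus.txt (hypothetical file).
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
corpus = corpus.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-lm", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```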
Han Guo, Ramakanth Pasunuru, Mohit Bansal
AAAI, (2020)
The values are shown in Table 1, where we can see that most of the distance measures are correlated with actual performance, with DCORAL having the lowest correlation with empirical performance
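To make the distance measures mentioned here concrete, below is a small sketch of the CORAL distance (the measure behind DCORAL) between source and target feature matrices; the random arrays stand in for real encoder outputs.

```python
# CORAL distance: squared Frobenius norm between the source and target feature covariance
# matrices, scaled by 1 / (4 d^2) as in the Deep CORAL loss.
import numpy as np

def coral_distance(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    d = source_feats.shape[1]
    cov_s = np.cov(source_feats, rowvar=False)   # (d, d) covariance of source features
    cov_t = np.cov(target_feats, rowvar=False)
    return float(np.sum((cov_s - cov_t) ** 2) / (4 * d * d))

# Hypothetical 768-dimensional sentence encodings from two domains.
rng = np.random.default_rng(0)
print(coral_distance(rng.normal(size=(200, 768)), rng.normal(size=(200, 768))))
```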
Aakanksha Naik, Carolyn Rose
ACL, pp.7618-7624, (2020)
We note that models transferred from LitBank to TimeBank have high precision, while models transferred in the other direction have high recall
Deepanway Ghosal, Devamanyu Hazarika, Navonil Majumder, Abhinaba Roy, Soujanya Poria, Rada Mihalcea
In domain-adversarial neural network+ (DANN+), using the Adam optimizer leads to a substantial jump in performance that comfortably surpasses many of the recent advanced domain adaptation methods: Central Moment Discrepancy, Variational Fair Autoencoder, ASym and MTTri
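For context, the DANN family referenced in this highlight is built around a gradient reversal layer; the PyTorch sketch below shows that mechanism in a generic form (module sizes and the Adam setup are illustrative, not the paper's configuration).

```python
# Gradient reversal layer: identity in the forward pass, negated (scaled) gradients in the
# backward pass, so the shared features are trained to fool the domain classifier.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim=768, hidden=256, n_classes=2, n_domains=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, n_classes)    # trained on labelled source data
        self.domain_head = nn.Linear(hidden, n_domains)  # trained on source + target data

    def forward(self, x, lambd=1.0):
        h = self.features(x)
        return self.task_head(h), self.domain_head(GradReverse.apply(h, lambd))

model = DANN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # the optimizer choice discussed above
```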
Michael A. Hedderich, Lukas Lange, Heike Adel, Jannik Strötgen, Dietrich Klakow
We showed that it is essential to analyze resource-lean scenarios across the different dimensions of data-availability
Dustin Wright, Isabelle Augenstein
EMNLP, (2020)
We investigate the problem of unsupervised multi-source domain adaptation, where a model is trained on labelled data from multiple source domains and must make predictions on a domain for which no labelled data has been seen
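A much-simplified sketch of the multi-source setting described here: one classifier head per source domain over a shared encoder, with expert predictions averaged for the unseen target domain. This is an illustration of the setting, not the paper's mixture-of-experts model.

```python
# Hypothetical multi-source classifier: per-domain expert heads over shared encodings.
import torch
from torch import nn

class MultiSourceClassifier(nn.Module):
    def __init__(self, encoder_dim=768, n_classes=2, n_source_domains=3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(encoder_dim, n_classes) for _ in range(n_source_domains)])

    def forward(self, encoded, domain_id=None):
        if domain_id is not None:               # training: route to that source domain's expert
            return self.experts[domain_id](encoded)
        logits = torch.stack([e(encoded) for e in self.experts])  # (n_domains, batch, classes)
        return logits.mean(dim=0)               # unseen target domain: average the experts
```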
Eyal Ben-David, Carmel Rabinovitz, Roi Reichart
We propose PERL: A representation learning model that extends contextualized word embedding models such as BERT with pivot-based fine-tuning
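A loose illustration of the pivot idea: terms that are frequent in both domains and predictive of the source labels are masked preferentially during masked-LM fine-tuning. The helper below is hypothetical and does not reproduce PERL's exact objective.

```python
# Pivot-aware masking sketch: pivot terms are masked with higher probability than other tokens,
# so the language model is pushed to encode them well. Illustrative only.
import random

def pivot_mask(tokens, pivot_set, mask_token="[MASK]", pivot_prob=0.5, other_prob=0.1):
    masked, labels = [], []
    for tok in tokens:
        p = pivot_prob if tok in pivot_set else other_prob
        if random.random() < p:
            masked.append(mask_token)
            labels.append(tok)       # the model must reconstruct the original token
        else:
            masked.append(tok)
            labels.append(None)      # ignored by the masked-LM loss
    return masked, labels

tokens = "the plot was predictable but the acting was excellent".split()
print(pivot_mask(tokens, pivot_set={"predictable", "excellent"}))
```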
arXiv: Computation and Language, (2019): 569-631
We provide a comprehensive typology of cross-lingual word embedding models
arXiv: Learning, (2019)
We plan to use the datasets collected for a similar task, e.g., the Stanford Natural Language Inference data to investigate the utility of our model in transfer learning between inference and stance detection tasks
EMNLP/IJCNLP (1), pp.4237-4247, (2019)
This paper demonstrates the applicability of contextualized word embeddings to two difficult unsupervised domain adaptation tasks
Guy Rotman, Roi Reichart
TACL, (2019): 695-713
We follow Che et al. and define the ELMo word embedding for word i as w_i = Σ_j s_j · h^{ELMo}_{i,j}, where s_j is a trainable parameter and h^{ELMo}_{i,j} is the hidden representation for word i in the j'th BiLSTM layer of the ELMo model, which remains fixed throughout all experiments
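The weighted layer combination can be written as a small PyTorch module; the sketch below uses softmax-normalised scalar weights and a global scale over frozen layer outputs, with random tensors standing in for the real ELMo representations.

```python
# Trainable scalar mix of frozen BiLSTM layer representations.
import torch
from torch import nn

class ScalarMix(nn.Module):
    def __init__(self, n_layers=3):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(n_layers))   # trainable per-layer weights
        self.gamma = nn.Parameter(torch.ones(1))       # trainable global scale

    def forward(self, layer_reps):                     # (n_layers, seq_len, dim), kept fixed
        weights = torch.softmax(self.s, dim=0)
        return self.gamma * torch.einsum("l,lsd->sd", weights, layer_reps)

mix = ScalarMix()
print(mix(torch.randn(3, 10, 1024)).shape)             # torch.Size([10, 1024])
```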
Zhenghua Li, Xue Peng, Min Zhang, Rui Wang, Luo Si
ACL (1), pp.2386-2395, (2019)
Our proposed semi-supervised domain adaptation approach leads to absolute LAS improvements of 16.15% and 15.56% on the product-blog and ZX test sets, respectively, over the non-adapted parser trained on the source balanced-corpus training data
NAACL, (2019)
Our work is unique in showing that the standard task of mutual-information-selected pivot prediction is a high quality auxiliary task, though future work should explore whether their pivot selection algorithm is superior to mutual information in our joint model
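As a concrete reference point for mutual-information pivot selection, here is a small scikit-learn sketch: features are ranked by mutual information with the source labels and kept only if they also appear often enough in the unlabelled target data. Corpora, thresholds and n-gram settings are placeholders.

```python
# Mutual-information pivot selection sketch (placeholder corpora and thresholds).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def select_pivots(source_texts, source_labels, target_texts, n_pivots=100, min_target_count=10):
    vec = CountVectorizer(binary=True, ngram_range=(1, 2))
    X_src = vec.fit_transform(source_texts)
    X_tgt = vec.transform(target_texts)
    mi = mutual_info_classif(X_src, source_labels, discrete_features=True)
    frequent_in_target = np.asarray(X_tgt.sum(axis=0)).ravel() >= min_target_count
    vocab = vec.get_feature_names_out()
    ranked = np.argsort(-mi)                      # highest mutual information first
    return [vocab[i] for i in ranked if frequent_in_target[i]][:n_pivots]
```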
EMNLP/IJCNLP (1), pp.2510-2520, (2019)
We propose a new framework, Adversarial Domain Adaptation for Machine Reading Comprehension, to transfer a pre-trained MRC model from a source domain to a target domain
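A generic sketch of the adversarial ingredient in such a framework: a discriminator tries to tell source from target passage encodings, while the encoder is additionally trained to make them indistinguishable. This is a simplified illustration, not the paper's architecture.

```python
# Simplified adversarial objective over passage encodings (illustrative dimensions).
import torch
from torch import nn
import torch.nn.functional as F

discriminator = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2))

def discriminator_loss(src_enc, tgt_enc):
    logits = discriminator(torch.cat([src_enc, tgt_enc]))
    domains = torch.cat([torch.zeros(len(src_enc)), torch.ones(len(tgt_enc))]).long()
    return F.cross_entropy(logits, domains)

def encoder_adversarial_loss(tgt_enc):
    # The encoder is rewarded when target encodings are classified as "source" (label 0).
    return F.cross_entropy(discriminator(tgt_enc), torch.zeros(len(tgt_enc)).long())
```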
Yftah Ziser, Roi Reichart
ACL (1), pp.5895-5906, (2019)
On average across the test sets, all Task Refinement Learning-Pivot Based Language Modeling methods improve over the original PBLM, with the best-performing method, RF2, improving by as much as 2.1% on average
Shrey Desai, Barea Sinno, Alex Rosenfeld, Junyi Jessy Li
EMNLP/IJCNLP (1), pp.4717-4729, (2019)
An unsupervised domain adaptation framework capable of identifying political texts for a multi-source, diachronic corpus by only leveraging supervision from a single-source, modern corpus
DeepLo@EMNLP-IJCNLP, pp.11-21, (2019)
We have studied unsupervised language adaptation approaches on two natural language processing tasks, taking into consideration the assumptions made regarding the availability of unlabeled data in the source and target languages
Xia Cui, Danushka Bollegala
RANLP, pp.213-222, (2019)
Our experimental results on two datasets for cross-domain sentiment classification show that projection learning and self-training have complementary strengths and jointly contribute to improving Unsupervised Domain Adaptation performance
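The self-training half of this combination can be sketched in a few lines: a source-trained classifier pseudo-labels its most confident target predictions and retrains on them. The feature matrices and the confidence threshold below are hypothetical.

```python
# Minimal self-training loop over precomputed feature matrices (illustrative threshold).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_src, y_src, X_tgt, rounds=5, threshold=0.9):
    X_train, y_train = X_src, y_src
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        if len(X_tgt) == 0:
            break
        proba = clf.predict_proba(X_tgt)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo_labels = clf.classes_[proba[confident].argmax(axis=1)]
        X_train = np.vstack([X_train, X_tgt[confident]])
        y_train = np.concatenate([y_train, pseudo_labels])
        X_tgt = X_tgt[~confident]      # remove pseudo-labelled examples from the pool
    return clf
```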
Chen Jia, Xiaobo Liang, Yue Zhang
ACL (1), pp.2464-2474, (2019)
Cross-domain language modeling is conducted through a novel parameter generation network, which decomposes domain and task knowledge into two sets of embedding vectors
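To illustrate the parameter-generation idea, the sketch below produces the weights of a projection layer from a domain embedding and a task embedding via a learned base tensor; the dimensions and the contraction are illustrative, not the paper's exact design.

```python
# A linear layer whose weight matrix is generated from domain and task embedding vectors.
import torch
from torch import nn

class ParamGenLinear(nn.Module):
    def __init__(self, in_dim, out_dim, n_domains, n_tasks, emb_dim=8):
        super().__init__()
        self.domain_emb = nn.Embedding(n_domains, emb_dim)
        self.task_emb = nn.Embedding(n_tasks, emb_dim)
        # Base tensor contracted with the two embeddings to produce the weight matrix.
        self.base = nn.Parameter(torch.randn(emb_dim, emb_dim, out_dim, in_dim) * 0.01)

    def forward(self, x, domain_id, task_id):
        d = self.domain_emb(torch.tensor([domain_id])).squeeze(0)
        t = self.task_emb(torch.tensor([task_id])).squeeze(0)
        weight = torch.einsum("d,t,dtoi->oi", d, t, self.base)
        return x @ weight.T

layer = ParamGenLinear(in_dim=100, out_dim=50, n_domains=3, n_tasks=2)
print(layer(torch.randn(4, 100), domain_id=1, task_id=0).shape)   # torch.Size([4, 50])
```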
Neurocomputing, (2018)
Deep domain adaptation uses deep networks to enhance the performance of domain adaptation, for example by applying shallow domain adaptation methods to features extracted by deep networks
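One simple instance of "shallow adaptation on deep features" is to compare pretrained-encoder outputs from the two domains with a kernel maximum mean discrepancy; the sketch below uses an RBF kernel and random placeholder features.

```python
# RBF-kernel maximum mean discrepancy (MMD) between two sets of deep features (placeholders).
import numpy as np

def rbf_mmd(X, Y, gamma=1e-3):
    def k(A, B):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq_dists)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
print(rbf_mmd(rng.normal(size=(100, 768)), rng.normal(size=(100, 768))))
```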
Keywords
Data Mining, Data Mining Algorithm, Data Models, Different Data Distribution, Domain Adaptation, Feature Space, Future Data, Inductive Transfer Learning, Knowledge Engineering
Authors
Roi Reichart (5 papers)
Sebastian Ruder (3 papers)
Qiang Yang (2 papers)
Pascal Germain (1 paper)
Jianfeng Gao (1 paper)
Lifu Huang (1 paper)
Rui Wang (1 paper)
Isabelle Augenstein (1 paper)
Minmin Chen (1 paper)
Karl Stratos (1 paper)