Embedding

When some object X is said to be embedded in another object Y, the embedding is given by some injective and structure-preserving map f : X → Y. The precise meaning of 'structure-preserving' depends on the kind of mathematical structure of which X and Y are instances. In the terminology of category theory, a structure-preserving map is called a morphism.
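A standard illustration (a textbook example, not drawn from the papers below): the inclusion of the integers into the reals is an embedding of additive groups,

```latex
f \colon (\mathbb{Z}, +) \hookrightarrow (\mathbb{R}, +), \qquad f(n) = n,
\qquad f(m + n) = f(m) + f(n) \quad \text{for all } m, n \in \mathbb{Z}.
```

The map is injective and preserves the group operation, so it is a morphism that exhibits the integers as a substructure of the reals.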
arXiv: Computation and Language, (2019): 597-610
We propose an architecture to learn multilingual fixed-length sentence embeddings for 93 languages
Cited by 134
Nils Reimers, Iryna Gurevych
EMNLP/IJCNLP (1), pp.3980-3990, (2019)
We showed that BERT out-of-the-box maps sentences to a vector space that is rather unsuitable to be used with common similarity measures like cosine-similarity
Cited by 115
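The cosine similarity in question compares two embedding vectors by the angle between them, ignoring magnitude. A minimal sketch (the vectors here are toy values for illustration, not actual BERT output):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy sentence embeddings (hypothetical values).
a = np.array([0.2, 0.9, 0.1])
b = np.array([0.25, 0.85, 0.05])
print(cosine_similarity(a, b))  # close to 1.0 for near-parallel vectors
```

A vector is always maximally similar to itself (score 1.0), and the measure only behaves sensibly if the embedding space places semantically similar sentences at small angles, which is exactly the property the paper finds lacking in out-of-the-box BERT.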
Emily Alsentzer, John R. Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, Matthew B. A. McDermott
arXiv: Computation and Language, (2019)
We find robust evidence that our clinical embeddings are superior to general domain or BioBERT specific embeddings for non de-ID tasks, and that using note-type specific corpora can induce further selective performance benefits
Cited by 96
Ryan Cotterell, Hinrich Schütze
While contextual signatures provide a strong cue for morphological proximity, orthographic features are requisite for a strong model
Cited by 85
CVPR, (2019): 2437-2446
We have proposed a novel 3D-structured scene representation, called DeepVoxels, that encodes the view-dependent appearance of a 3D scene using only 2D supervision
Cited by 84
arXiv: Computation and Language, (2019)
Our results show that ELMo embeddings perform unequally on male and female pronouns: male entities can be predicted from occupation words 14% more accurately than female entities
Cited by 67
Alan Akbik, Tanja Bergmann, Roland Vollgraf
North American Chapter of the Association for Computational Linguistics, (2019)
We presented a simple but effective approach that addresses the problem of embedding rare strings in underspecified contexts
Cited by 65
Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Informatio..., (2019)
Although considerable attention has been given to neural ranking architectures recently, far less attention has been paid to the term representations that are used as input to these models. In this work, we investigate how two pretrained contextualized language models (ELMo and BE...
Cited by 42
arXiv: Computer Vision and Pattern Recognition, (2019): 6002-6012
We have shown that when applied to Deep convolutional neural networks, the Local Aggregation objective creates representations that are useful for transfer learning to a variety of challenging visual tasks
Cited by 36
Journal of the American Medical Informatics Association : JAMIA, no. 11 (2019): 1297-1304
We present an analysis of different word embedding methods and investigate their effectiveness on four clinical concept extraction tasks
Cited by 30
NeurIPS, pp.2731-2741, (2019)
We design a new knowledge graph embedding model which operates on the quaternion space with well-defined mathematical and physical meaning
Cited by 23
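Quaternion-space embeddings rest on the Hamilton product, which is associative but not commutative. A minimal sketch of the product itself (model details such as the scoring function are omitted here):

```python
def hamilton_product(p, q):
    """Hamilton product of quaternions p = (a, b, c, d), q = (e, f, g, h),
    each representing a + bi + cj + dk."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,   # real part
            a*f + b*e + c*h - d*g,   # i component
            a*g - b*h + c*e + d*f,   # j component
            a*h + b*g - c*f + d*e)   # k component

# The basis identity i * j = k:
print(hamilton_product((0, 1, 0, 0), (0, 0, 1, 0)))  # (0, 0, 0, 1)
```

The non-commutativity (j * i = -k, not k) is what lets quaternion rotations model asymmetric relations between head and tail entities.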
Vahe Tshitoyan, John Dagdelen, Leigh Weston, Alexander Dunn, Ziqin Rong, Olga Kononova, Kristin A. Persson, Gerbrand Ceder, Anubhav Jain
Nature, no. 7763 (2019): 95-98
We show that materials science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings [11–13] without human labelling or supervision
Cited by 0
ICASSP, pp.5329-5333, (2018)
We found that the x-vector system significantly outperformed two standard i-vector baselines on SRE evaluation Cantonese
Cited by 566
National Conference on Artificial Intelligence, (2018)
Test leakage through inverse relations of WN18 and FB15k was first reported by Toutanova and Chen: we investigate the severity of this problem for commonly used datasets by introducing a simple rule-based model, and find that it can achieve state-of-the-art results on WN18 and FB...
Cited by 340
Matteo Pagliardini, Prakhar Gupta, Martin Jaggi
NAACL-HLT, (2018)
Along with the models discussed in Section 3, this includes the sentence embedding baselines obtained by simple averaging of word embeddings over the sentence, in both the C-BOW and skipgram variants
Cited by 338
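The averaging baseline mentioned here is simple to state precisely: a sentence embedding is the element-wise mean of its word vectors. A minimal sketch with a toy lookup table (the vectors are hypothetical, not trained values):

```python
import numpy as np

# Toy word-embedding table (hypothetical 4-d vectors).
word_vectors = {
    "the":    np.array([0.1, 0.0, 0.2, 0.1]),
    "cat":    np.array([0.7, 0.3, 0.1, 0.9]),
    "sleeps": np.array([0.2, 0.8, 0.5, 0.3]),
}

def average_embedding(sentence, table):
    """Sentence baseline: element-wise mean of known word vectors."""
    vecs = [table[w] for w in sentence.split() if w in table]
    return np.mean(vecs, axis=0)

print(average_embedding("the cat sleeps", word_vectors))
```

The same pooling applies regardless of whether the word vectors come from a C-BOW or a skipgram model; only the lookup table changes.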
British Machine Vision Conference, (2018)
Notice that using ResNet152 and fine-tuning can only lead to 12.6% improvement using the VSE0 formulation, while our Max of Hinges loss function brings a significant gain of 8.6%
Cited by 313
Alan Akbik, Duncan Blythe, Roland Vollgraf
COLING, pp.1638-1649, (2018)
This paper proposes contextual string embeddings, a novel type of word embeddings based on character-level language modeling, and their use in a state-of-the-art sequence labeling architecture
Cited by 310
Meeting of the Association for Computational Linguistics, (2018)
We introduced a set of tasks probing the linguistic knowledge of sentence embedding methods
Cited by 267
Meeting of the Association for Computational Linguistics, (2018)
In contrast to adversarial methods, we propose to use an initial weak mapping that exploits the structure of the embedding spaces in combination with a robust self-learning approach
Cited by 215
Nature communications, no. 1 (2018)
Neural networks, which form the theoretical architecture of deep learning, were inspired by the primary visual cortex of cats where neurons are organized in hierarchical layers of cells to process visual stimulus
Cited by 198
Keywords
Metric Space, Word Embeddings, Dimensionality Reduction, Natural Language Processing, Algebraic Variety, Graph Theory, Normed Space, Scalar Curvature, Vector Field, Algorithmic Applications
Authors
Piotr Indyk (4 papers)
Kai-Wei Chang (3 papers)
Eneko Agirre (3 papers)
Antoine Bordes (3 papers)
Yoav Goldberg (3 papers)
Jason Weston (3 papers)
Gorka Labaka (3 papers)
Mikel Artetxe (3 papers)
Sanja Fidler (2 papers)
Stephan Mandt (2 papers)