An Analysis of Embedding Layers and Similarity Scores using Siamese Neural Networks
CoRR (2023)
Abstract
Large Language Models (LLMs) are gaining increasing popularity in a variety
of use cases, from language understanding and writing to assistance in
application development. One of the most important aspects for the optimal
functionality of LLMs is the embedding layer. Word embeddings are distributed
representations of words in a continuous vector space. In the context of LLMs,
words or tokens from the input text are transformed into high-dimensional
vectors using algorithms specific to each model. Our research examines
embedding algorithms from leading sources in the industry, such as
OpenAI, Google's PaLM, and BERT. Using medical data, we analyzed the
similarity scores of each embedding layer, observing differences in performance
among the algorithms. To enhance each model with an additional encoding
layer, we also implemented Siamese Neural Networks. After observing changes in
performance with the addition of this layer, we measured the carbon footprint
per epoch of training. The carbon footprint associated with large language
models (LLMs) is a significant concern and should be taken into consideration
when selecting algorithms for a variety of use cases. Overall, our research
compared the accuracy of different leading embedding algorithms and their
carbon footprints, allowing for a holistic review of each embedding algorithm.
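The two core ideas of the abstract — comparing embeddings by similarity score, and a Siamese network's shared-weight encoding — can be illustrated with a minimal sketch. The 3-dimensional vectors, the weight matrix, and the medical terms below are hypothetical stand-ins chosen for illustration; the paper's actual embeddings come from the named models and are far higher-dimensional.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two vectors: dot product over norms."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def encode(vec, weights):
    """Toy linear encoder. The Siamese property is that the SAME weights
    are applied to both inputs being compared, so the learned space is
    consistent across the pair."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

# Hypothetical low-dimensional "embeddings" for three medical terms.
emb_fever    = [0.9, 0.1, 0.3]
emb_pyrexia  = [0.8, 0.2, 0.35]   # near-synonym of "fever"
emb_fracture = [0.1, 0.9, 0.0]    # unrelated term

# Shared weights used by both branches of the (toy) Siamese encoder.
W = [[0.5, 0.2, 0.1],
     [0.0, 0.7, 0.3]]

a = encode(emb_fever, W)
b = encode(emb_pyrexia, W)
c = encode(emb_fracture, W)

print(cosine_similarity(a, b))  # synonyms: high similarity
print(cosine_similarity(a, c))  # unrelated terms: lower similarity
```

In a real Siamese setup the encoder is a trained neural network rather than a fixed linear map, and training pulls embeddings of matching pairs together while pushing non-matching pairs apart; the comparison step, however, is the same similarity computation shown here.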