JD's Joint

]20 goto 10

Clarification (Seeking Semantic Similarities)

Posted on November 18, 2023 by jdkendall

A quick nitpicking follow-up to the previous post about the embeddings used by LLMs. The examples I used were vectors as points in space, which is intuitive to picture. However, the actual representation inside vector databases and LLMs is a bit different – rather than comparing points by distance, an embedding is treated as a directional vector, and comparisons are made using the dot product of the (normalized) vectors, i.e. cosine similarity: two embeddings are similar when they point in roughly the same direction.
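To make the "direction, not distance" point concrete, here's a minimal sketch of cosine similarity in plain Python. The three-dimensional vectors are made up purely for illustration – real embeddings have hundreds or thousands of dimensions – but the comparison works the same way:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b.

    1.0 means same direction, 0.0 means perpendicular (unrelated),
    -1.0 means opposite direction.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings, invented for this example.
king = [0.9, 0.1, 0.4]
queen = [0.85, 0.15, 0.45]
banana = [0.1, 0.9, 0.2]

print(cosine_similarity(king, queen))   # close to 1.0: similar direction
print(cosine_similarity(king, banana))  # much smaller: different direction
```

Note that the magnitude of the vectors cancels out entirely – only the angle between them matters, which is exactly why "point in space" is a simplification.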

This distinction isn’t really necessary for a layman’s understanding, but if you wanted a more accurate analogy with that detail in mind, it’s as if each embedding were magnetic and being pulled toward various “idea poles”, with the direction of the vector representing what it means. That framing isn’t very intuitive and doesn’t offer any better insight from my perspective, so I went with the simplified version.

If you’re interested in reading a great write-up with the technical part attached, NickyP has a great article here. There’s quite a bit of good stuff on LessWrong in general, so have a look around the site while you’re there.
