A quick nitpicking follow-up to the previous post about the embeddings used by LLMs. The examples I used were vectors in space, which is an intuitive way to think about them. However, the actual representation inside vector databases and LLMs is different – instead of being a point in space, a semantic concept would…