Once you peel back the hype and mysticism, large language models (LLMs) are a fascinating application of statistical modeling: effectively what you get when you dial a basic auto-complete model up to eleven. Analyzing a mind-boggling amount of text and producing meaningful auto-completions involves quite a bit of math, and a recent three-part article series by [Giles] walks through the basics of inference, the prediction step performed with an already-trained model.
Text is encoded in the LLM's vocabulary as token IDs, each token being a text fragment that has some probability of following another, such as cats being found on desks, as in the above photo by [Giles]. During inference the model produces a vector of scores over all possible next tokens, and by picking from these scores step after step a sentence is pieced together. These so-called logits are detailed in the first article in the series, with the second article focusing on vocabulary space and embeddings, as well as the matrix operations used for inference.
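The idea of turning logits into a next token can be sketched in a few lines of NumPy. The five-token vocabulary and the logit values here are made up for illustration, but the softmax-then-pick step is the same in principle:

```python
import numpy as np

# Hypothetical 5-token vocabulary; the logits are made-up scores a
# model might assign to each candidate for the next token.
vocab = ["cat", "sat", "on", "the", "desk"]
logits = np.array([1.2, 0.4, -0.3, 2.1, 3.5])

# Softmax turns raw logits into a probability distribution
# (subtracting the max first for numerical stability).
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Greedy decoding: simply pick the most probable next token.
next_token = vocab[int(np.argmax(probs))]
print(next_token)  # → desk
```

Real decoders usually sample from this distribution (with temperature, top-k, and so on) rather than always taking the argmax, which is why the same prompt can produce different completions.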
Finally, the third article puts all of this together and looks at transformers, the crucial building block of the GPT (generative pre-trained transformer) LLM architecture. Of particular note is the attention mechanism, which takes GPTs beyond glorified auto-complete by letting the model match patterns across its whole input. Here we can see how the statistical model underlying the LLM generates rather plausible output, leaving it to the human reader to judge how correct that output actually is.
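At its core, the attention mechanism is just the well-known scaled dot-product formula, softmax(QK^T / sqrt(d)) V. A minimal single-head sketch in NumPy, with random matrices standing in for a real model's query, key, and value projections:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) @ V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Row-wise softmax over the key positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 query positions, head dimension 4
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one mixed value vector per query position
```

Each output row is a weighted mix of the value vectors, with the weights determined by how strongly that query matches each key, which is the "pattern matching" that lifts a transformer above plain auto-complete.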
Of course, a lot more goes into making LLMs and GPTs performant, such as key-value caches that massively speed up inference.
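The key-value cache idea can be sketched as well. This toy single-head decode loop (illustrative only, not a real model) shows the trick: each new token's key and value vectors are computed once and appended to a cache, so a generation step only attends over stored vectors instead of reprocessing the entire prefix:

```python
import numpy as np

d = 4  # toy head dimension
rng = np.random.default_rng(1)
# Random stand-ins for a trained model's projection matrices.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

k_cache, v_cache = [], []

def decode_step(x):
    """One generation step; x is the newest token's embedding, shape (d,)."""
    k_cache.append(x @ Wk)  # compute this token's key once...
    v_cache.append(x @ Wv)  # ...and its value once, then keep them
    q = x @ Wq
    K = np.stack(k_cache)   # all cached keys, no recomputation
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V            # attention output for the new token only

for _ in range(5):
    out = decode_step(rng.normal(size=d))
print(len(k_cache))  # → 5 (one cached key per generated position)
```

Without the cache, every step would redo the key and value projections for the whole sequence so far, making generation quadratically more expensive as the output grows.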
