Let the vibes flow
December 31, 2025
I happened to have started reading for a Master's degree in AI just before the LLM craze started. AI was cool and there were some very impressive use cases, but the discourse around it was nowhere near as mainstream as it is now.
When the hype took off, it was a crazy time to be in both industry and academia at once. In industry there was a mad rush to build something, anything, that used the ChatGPT API. In academia there was so much research happening that at one point we were being encouraged not to consider any papers older than three months.
Being one of the few folks who knew how LLMs actually worked, and who understood their inherent flaws and why they would never really go away, only be managed better (because bias is a part of reality), made me very fun at parties for a while. Getting good at bursting people's bubbles with "no, this is a very bad use case!" didn't make me any friends for a while either.
This came to a head when a team presented how they had integrated the ChatGPT API to run sentiment analysis on one of our products. They talked about how well it was going (without presenting any objective evaluation against a test set) and how one of their struggles was with inconsistent results.
This rang very weird to my brain. Inconsistent results with an LLM are (a) a feature, not a bug, and (b) configurable! So as the presentation continued I took a look at their source code and found the following: `temperature=0.5`.
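For context (and this is a minimal sketch, not the team's actual code): temperature controls how much randomness goes into sampling the model's output. For a classification-style task like sentiment analysis, you generally want it pinned at or near zero so the model picks its most likely answer every time; a mid-range value like 0.5 deliberately injects variation. Something like the following, using the OpenAI Python client, with the model name and prompt as placeholder assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_sentiment(text: str) -> str:
    """Ask the model for a one-word sentiment label.

    temperature=0 makes sampling as close to deterministic as the API
    allows, which is what you want when you need consistent labels.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": "Reply with exactly one word: positive, negative, or neutral.",
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip().lower()


print(classify_sentiment("The checkout flow keeps timing out and support never replies."))
```

Even at temperature 0 you won't get perfectly reproducible outputs from a hosted model, but you will get far fewer of the "inconsistent results" the team was struggling with.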