Going Deeper with AI: Reliable or Random?

We can control how consistent an LLM's output is when given the same input – the parameter that governs this is called temperature.

The typical temperature range of an LLM is 0.0 to 2.0.

At a temperature of 0.0, the LLM will always pick the single most likely output for the input. It won't consider other candidates.

At a temperature of 2.0, the LLM samples from a much flatter spread of possible outputs – and may well throw something irrelevant into the mix.
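Under the hood, temperature simply rescales the model's next-token scores (logits) before they are turned into probabilities by the softmax. Here's a minimal sketch in plain Python – the logits are made-up numbers for illustration, and note that real APIs typically treat a temperature of 0.0 as greedy argmax rather than dividing by zero:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Temperature divides the logits before the softmax:
    # a low temperature sharpens the distribution (near-greedy),
    # a high temperature flattens it (more randomness).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next tokens
logits = [4.0, 2.0, 1.0]

low = softmax_with_temperature(logits, 0.1)   # near-greedy
high = softmax_with_temperature(logits, 2.0)  # much flatter

print([round(p, 3) for p in low])   # top token takes almost all the probability
print([round(p, 3) for p in high])  # probability spreads across all tokens
```

At temperature 0.1 the top-scoring token ends up with essentially all of the probability, while at 2.0 even the weakest candidate keeps a meaningful share – which is exactly why high-temperature outputs feel more creative and more erratic.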

Imagine asking an AI to suggest a title for a book that sheds light on fast-growing plants – could there be a children's book on flowers called What's the Story? Morning Glory? Highly unlikely, the LLM may reckon, but not impossible – especially if the author is a fan of Oasis!