Why does working with LLMs feel like a Situationship?

Nov 8, 2024

4 Min

Deeptalk Team

Do you often wonder why ChatGPT or other LLMs give you different responses even when you ask the same question multiple times? It can be draining and puzzling, but this variability is a fundamental part of how Large Language Models (LLMs) work.

How LLMs Generate Responses:

Unlike traditional databases or search engines that retrieve specific pieces of stored information, LLMs like GPT-4 don’t "store" information in a conventional sense. Instead, they are trained on vast amounts of data from books, websites, and other sources. During training, these models learn to predict the next word or sequence of words based on patterns in this data.

When you ask a question, the LLM doesn’t look up a pre-stored answer. Instead, it generates a response on the fly by considering the patterns it learned during training. This process involves complex calculations in the neural network, which are influenced by a combination of factors including the context of the question and the patterns in the training data.
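To build intuition for "predicting the next word from patterns," here is a deliberately tiny sketch: a bigram counter trained on a toy corpus (the corpus and code are illustrative, not how a real transformer works).

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (hypothetical example text).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- a crude stand-in for learned patterns.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```

A real LLM replaces these raw counts with a neural network that scores every word in its vocabulary given the full context, but the core idea is the same: no lookup of a stored answer, just a prediction of what comes next.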

The Role of Randomness:

A key reason for the varying responses is the inherent randomness in how LLMs generate text. Every time you ask a question, the model uses probabilities to decide which words to generate. This means that even with the same input, slight differences in the model’s "thought process" can lead to different outputs.

Example: Think of it like a human conversation. When you ask someone the same question on different occasions, their response might vary based on their mood, recent experiences, or even their interpretation of the question. Similarly, an LLM's responses can vary due to the probabilistic nature of its language generation process.
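The probabilistic "thought process" described above can be sketched in a few lines. Assume the model has assigned hypothetical scores (logits) to candidate next words; softmax-with-temperature turns them into probabilities, and sampling picks one — so the same input can yield different outputs.

```python
import math
import random

def sample_next(logits, temperature=1.0, rng=random):
    """Softmax the scores at the given temperature, then sample one word."""
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(logits), weights=probs, k=1)[0]

# Hypothetical scores the model might assign to candidate next words.
logits = {"cat": 2.0, "dog": 1.5, "fish": 0.5}

rng = random.Random(0)
samples = [sample_next(logits, temperature=1.0, rng=rng) for _ in range(10)]
print(samples)  # a mix of words: same input, different outputs
```

Lower temperatures sharpen the distribution toward the top-scoring word (more consistent answers); higher temperatures flatten it (more varied answers).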

“Yes! It’s a Feature, not a Bug”

This variability is not a flaw or limitation of LLMs but a fundamental characteristic of how they function. It reflects the probabilistic nature of human language and understanding. Just as people might give different answers based on how they interpret a question or their recent thoughts, LLMs produce varied responses based on their training data patterns and internal calculations.

Understanding the Probabilistic Nature:

Recognizing that LLMs generate responses probabilistically helps in setting realistic expectations when deploying these models in real-world applications. It’s important to account for this variability in scenarios where consistency is crucial, such as customer support or information retrieval.

Implications:

So, how can we ensure the accuracy of responses when using LLMs for enterprise applications?

  1. Cross-check with other LLMs to compare answers.

  2. Use another LLM as a judge to evaluate output quality.

  3. Use RAG (Retrieval-Augmented Generation) to limit the universe of responses to relevant contexts.

  4. If that doesn't cut it, develop code-based orchestrators to ensure SOPs (standard operating procedures) are followed.

  5. Apply ROUGE scores to quantify how closely responses match the ideal answer.
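As a taste of point 5, here is a minimal ROUGE-1 F1 sketch in pure Python (production use would typically rely on an established ROUGE library; the example strings are hypothetical).

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: unigram overlap between a model response and an ideal answer."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())     # shared words, counted with multiplicity
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

ideal = "the order ships within two business days"
answer = "your order ships in two days"
print(round(rouge1_f1(answer, ideal), 2))  # → 0.62
```

A score near 1.0 means the response closely matches the reference wording; tracking this across repeated queries is one way to measure how much the variability discussed above actually affects your outputs.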

These methods can help verify that LLMs are providing accurate and reliable information, despite their inherent unpredictability.

Conclusion:

While we observed the inherent variability in LLM responses, we also identified methods to both measure and control these outputs effectively. This dual approach enables us to harness the flexibility of language models while ensuring consistency and alignment with desired outcomes.

Contributors
Naveen Upadhyay

Partner (Technology)

Aistra

Rijul Bansal

GenAI Architect

Deeptalk

307 Seventh Avenue Suite 1601, New York, NY 10001.

307 Seventh Avenue Suite 1601, New York, NY 10001.

307 Seventh Avenue Suite 1601, New York, NY 10001.