In a conversation with Salesforce CEO Marc Benioff at Dreamforce 2023, OpenAI CEO Sam Altman positioned generative AI hallucinations as a new catalyst for creativity (watch the full conversation here).
Until that moment, my perception of AI hallucination had been entirely negative. Sam’s comment encouraged me to better understand the concept of hallucination, but each definition I found included a new term I did not know. Here are the layers of the onion:
- Hallucination: A generative AI hallucination refers to the phenomenon where an artificial intelligence system, particularly a generative or language model like GPT, produces outputs that are creative and imaginative but not grounded in reality. In a generative model like GPT-3.5, for example, a hallucination might occur when the model generates text that is coherent and grammatically correct but contains fictional or improbable information. Hallucinations are inherently “creative” because they represent the output of a natively quantitative tool (a GPT) that extends beyond the “factual.” Hallucinations are the reason not to take GPT output as inherently accurate (as one attorney learned the hard way), and they are also the reason most generative models can be a natural enhancement to the creative process.
- GPT: My own mother freely uses GPT in her daily conversation now, and I am fairly confident that despite her wisdom, she could not define the acronym. GPT stands for “Generative Pre-trained Transformer.” It is a type of artificial intelligence model designed for natural language processing tasks. GPT models are part of a broader family of transformer-based neural networks.
- Generative: This refers to the model’s ability to generate text or other forms of data. GPT models are primarily used for tasks like text generation, completion, and language understanding.
- Pre-trained: Before fine-tuning on specific tasks, GPT models are pre-trained on massive amounts of text data. This pre-training involves predicting the next word in a sentence, which helps the model learn grammar, syntax, semantics, and even some degree of world knowledge.
- Transformer: GPT is built upon the transformer architecture, which is a deep learning architecture known for its effectiveness in handling sequential data like text. Transformers use self-attention mechanisms to capture dependencies between words in a sequence, making them suitable for a wide range of natural language processing tasks.
- Neural networks: A neural network is a computational model inspired by the structure and function of the human brain. It consists of interconnected processing units called neurons organized into layers. Information is processed through these neurons, with each neuron applying a mathematical operation to its input and passing the result to the next layer. Neural networks are used for various machine learning tasks, including pattern recognition, image classification, natural language processing, and regression, by adjusting the connections (weights) between neurons during training to learn complex relationships in data.
- [Grounded] Prompt: A grounded AI prompt is an input or query given to an artificial intelligence system that is rooted in a specific context, domain, or task. It provides clear and relevant information to guide the AI’s response, ensuring that the output is meaningful and contextually appropriate. Grounded prompts help AI systems generate accurate and useful responses by providing essential details or constraints for the task at hand. In the Salesforce context, Grounded Prompts focus the Einstein1 Copilot generative response on the data within your CRM system (either natively present or connected through Data Cloud).
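To make the “Pre-trained” idea above concrete, here is a minimal sketch of the next-word-prediction objective. Real GPT models learn this over billions of tokens with a neural network; this toy uses simple bigram counts on a made-up corpus (an assumption for illustration only):

```python
from collections import Counter, defaultdict

# Tiny stand-in "training corpus" -- purely illustrative.
corpus = "the model learns grammar the model learns syntax the model predicts".split()

# Count which word follows each word in the corpus.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))    # -> "model"
print(predict_next("model"))  # -> "learns"
```

Pre-training a GPT is this same game at enormous scale: by repeatedly guessing the next word and correcting itself, the model absorbs grammar, syntax, and a degree of world knowledge.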
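The “self-attention mechanism” in the Transformer definition above can be sketched in a few lines of NumPy. The dimensions and the random projection matrices here are illustrative assumptions; in a real model they are learned during training:

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, d_model = 4, 8                    # 4 "words", 8-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))    # stand-in word embeddings

# Query/key/value projections (random here; learned in practice).
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Each word scores its similarity to every other word...
scores = Q @ K.T / np.sqrt(d_model)
# ...and a softmax turns the scores into attention weights.
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

output = weights @ V   # each row is a context-aware mix of all the words

print(weights.shape)                          # (4, 4): one weight per word pair
print(np.allclose(weights.sum(axis=-1), 1.0)) # True: each row sums to 1
```

The attention weights are how the model captures dependencies between words: every position ends up represented as a weighted blend of the whole sequence.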
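The neural network definition above (neurons in layers, each applying an operation and passing the result along) can also be shown directly. This is a two-layer toy with made-up sizes and untrained random weights, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    # A common neuron "activation": keep positives, zero out negatives.
    return np.maximum(0, z)

# Two layers: 3 inputs -> 5 hidden neurons -> 2 outputs.
# The weights W1/W2 are what training adjusts to learn relationships in data.
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 2)), np.zeros(2)

def forward(x):
    """Pass an input through the layers: weighted sum, then nonlinearity."""
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

x = np.array([0.5, -1.0, 2.0])
print(forward(x).shape)  # (2,)
```

Training would nudge W1 and W2 so the outputs match known answers; the layered structure itself is the whole of the "brain-inspired" metaphor.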
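Finally, a minimal sketch of what “grounding” a prompt looks like in practice. The record fields and function name below are hypothetical, not a real Salesforce or Einstein API; the point is simply that trusted context is injected ahead of the question so the model answers from your data rather than improvising:

```python
# Mock CRM records -- hypothetical fields for illustration only.
crm_records = [
    {"account": "Acme Corp", "open_cases": 2, "renewal_date": "2024-06-01"},
    {"account": "Globex", "open_cases": 0, "renewal_date": "2024-09-15"},
]

def build_grounded_prompt(question, records):
    """Prefix the user's question with explicit context and constraints."""
    context = "\n".join(
        f"- {r['account']}: {r['open_cases']} open cases, renews {r['renewal_date']}"
        for r in records
    )
    return (
        "Answer using ONLY the CRM data below. "
        "If the answer is not in the data, say so.\n\n"
        f"CRM data:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt("Which accounts renew before July?", crm_records)
print(prompt)
```

Constraining the model to supplied context is exactly how grounding curbs hallucination: the “creative” completion machinery is still running, but it is fenced in by your data.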