A Glossary of Key Concepts
2026-01-19
This glossary provides brief definitions of key technical terms used throughout the Data Analysis with AI course.
Use this as a reference when you encounter unfamiliar terminology.
**Token**: The basic unit of text that LLMs process.
**Context window**: The maximum amount of text (in tokens) an LLM can consider at once.
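Both ideas can be made concrete with a rough sketch. Assuming the common ~4-characters-per-token rule of thumb (real tokenizers vary by model), we can estimate token counts and trim a conversation to fit a window; the function names here are illustrative, not any provider's API:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token (real tokenizers vary)."""
    return max(1, len(text) // 4)

def trim_to_window(messages: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest messages until the estimated total fits the window."""
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # discard the oldest message first
    return kept

history = ["a" * 400, "b" * 400, "c" * 400]  # ~100 estimated tokens each
print(len(trim_to_window(history, 250)))  # only the two most recent messages fit
```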
**Inference**: The process of generating output from a trained model.
**Latency**: The time delay between sending a request and receiving a response.
**Large language model (LLM)**: A neural network trained on massive text data to predict and generate language.
**Transformer**: The neural network architecture underlying modern LLMs.
**Parameters**: The learned values (weights) inside a neural network.
**System prompt**: Hidden instructions given to the AI before your conversation.
**Context engineering**: The practice of curating all information the model receives to optimize performance.
**Retrieval-augmented generation (RAG)**: A technique that retrieves relevant documents and adds them to the prompt.
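The retrieve-then-generate idea can be sketched in a few lines. This toy version ranks documents by keyword overlap as a stand-in for a real embedding search; all names are illustrative:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved documents so the model answers from them."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["GDP measures total output", "Inflation tracks price changes",
        "Unemployment counts jobless workers"]
print(build_prompt("what does GDP measure", docs))
```

A production system would swap the overlap score for embedding similarity, but the prompt-assembly step is the same shape.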
**Chain-of-thought prompting**: A prompting technique where the model shows intermediate reasoning steps.
**Reasoning model**: A model specifically trained to “think” before responding.
**Extended thinking**: A mode where the model explicitly reasons through problems.
**Thinking level**: Gemini 3’s approach to controlling reasoning depth.
**Agent**: An AI system that can take actions, not just generate text.
**Agentic workflow**: A process where AI acts across multiple steps with tool use.
**Model Context Protocol (MCP)**: An open standard for connecting AI to external tools and data.
**Tool use (function calling)**: The ability of an AI to invoke external functions or APIs.
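The mechanics of tool use can be sketched as a dispatch step: the model emits a structured request instead of plain text, the host program runs the named function, and the result goes back into the conversation. The tool names and JSON shape below are invented for illustration, not any provider's format:

```python
import json

# Hypothetical registry of tools the model may invoke.
TOOLS = {
    "mean": lambda args: sum(args["values"]) / len(args["values"]),
    "count": lambda args: len(args["values"]),
}

def handle_tool_call(model_output: str):
    """Parse a model's JSON tool request and run the named function."""
    request = json.loads(model_output)
    return TOOLS[request["name"]](request["arguments"])

# As if the model had emitted this instead of a prose answer:
call = '{"name": "mean", "arguments": {"values": [2, 4, 6]}}'
print(handle_tool_call(call))  # 4.0
```

An agentic workflow is essentially this loop repeated: each tool result is appended to the context and the model decides the next step.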
**Hallucination**: When an AI generates plausible-sounding but false information.
**Grounding**: Connecting AI responses to verified sources of truth.
**RLHF (reinforcement learning from human feedback)**: A training technique where models learn from human preferences.
**Constitutional AI**: Anthropic’s approach to training models with explicit principles.
**Skills**: Reusable, modular instruction packages in Claude.
**Gems**: Custom AI assistants in Gemini Advanced.
**Projects**: Workspaces with shared context across conversations.
**Artifacts / Canvas**: Interactive workspaces for editing AI-generated content.
**Temperature**: A parameter controlling randomness in model outputs. Note: Gemini 3 and reasoning models work best at the default setting (1.0).
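Temperature's effect can be seen directly in the softmax that turns the model's raw scores (logits) into a probability distribution over next tokens; dividing by a small temperature sharpens the distribution, a large one flattens it:

```python
import math

def sample_distribution(logits: list[float], temperature: float = 1.0) -> list[float]:
    """Softmax over temperature-scaled logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.0]
low = sample_distribution(logits, temperature=0.2)
high = sample_distribution(logits, temperature=2.0)
print(round(low[0], 3), round(high[0], 3))  # top token dominates at low temperature
```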
**Context rot**: Performance degradation as the context window fills up.
**KV cache**: A technical optimization that speeds up repeated inference.
**Vibe coding**: Describing desired behavior in natural language rather than writing syntax.
**CLAUDE.md**: A convention for providing project context to Claude Code.
**Prompt chaining**: Breaking complex tasks into sequential prompts.
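Chaining is simply feeding each step's output into the next prompt. In this sketch, `call_model` is a placeholder for a real API call (it echoes a canned reply so the example runs offline):

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real LLM API call; echoes a canned reply for the demo."""
    return f"[model reply to: {prompt[:40]}...]"

def chain(task: str) -> str:
    """Run a complex task as sequential prompts, passing each output forward."""
    outline = call_model(f"Outline an analysis plan for: {task}")
    draft = call_model(f"Write the analysis following this outline: {outline}")
    return call_model(f"Review and tighten this draft: {draft}")

print(chain("effect of education on wages"))
```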
**Input/output pricing**: Tokens are billed separately for input (prompt) and output (response).
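Estimating a request's cost is one multiplication per side. The rates below are purely hypothetical; check your provider's current price sheet:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_price: float, out_price: float) -> float:
    """Cost in dollars; prices are quoted per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical rates: $3 per million input tokens, $15 per million output tokens.
print(estimate_cost(50_000, 2_000, in_price=3.0, out_price=15.0))  # 0.18
```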
**Rate limits**: Restrictions on how many requests you can make.
**Prompt caching**: Storing and reusing processed prompts to reduce cost.
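The billing idea behind prompt caching can be sketched as memoization on a shared prefix. Real providers cache the model's internal state rather than text, but the effect is the same: a long, reused prefix (such as system instructions) is processed once, and later requests pay only for what changes:

```python
# Toy cache keyed on the shared prompt prefix (illustrative only).
cache: dict[str, str] = {}

def process_prefix(prefix: str) -> str:
    """Pretend to do the expensive prompt processing once per distinct prefix."""
    if prefix not in cache:
        cache[prefix] = f"<processed {len(prefix)} chars>"  # the costly step
    return cache[prefix]

system = "You are a data analysis assistant. " * 50  # long, reused instructions
process_prefix(system)   # first call pays full price
process_prefix(system)   # later calls reuse the cached result
print(len(cache))        # 1
```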
| Term | One-liner |
|---|---|
| Token | Basic text unit (~4 characters) |
| Context window | Max tokens model can see at once |
| Inference | Generating output from input |
| Agent | AI that takes actions via tools |
| MCP | Standard for AI tool connections |
| Hallucination | AI-generated false information |
| RAG | Retrieval + generation technique |
Gabors Data Analysis with AI – Technical Terms – 2026-01-19 v0.1.0