Last Updated: March 15, 2026
When most people think about improving LLM outputs, they focus on rewriting the prompt. But in real systems, the prompt is only one piece of a much larger input.
Modern LLM applications send far more than a single instruction to the model. The model may receive a system prompt defining its behavior, retrieved documents from a knowledge base, conversation history from earlier messages, tool definitions, and the user’s question. All of this together forms the context window the model uses to generate its response.
Context engineering is the practice of designing and managing that entire context. It involves deciding what information should be included, how it should be structured, how much history to keep, and how to combine retrieved knowledge with user instructions. In many production systems, getting the context right has a bigger impact on output quality than tweaking the wording of the prompt.
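To make this concrete, here is a minimal sketch of context assembly under stated assumptions: the function names, the message-dict shape, and the rough 4-characters-per-token estimate are all illustrative, not any particular library's API. It combines a system prompt, retrieved documents, trimmed conversation history, and the user's question into one ordered context, dropping the oldest history turns first when the token budget runs out.

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token (an assumption, not exact).
    return max(1, len(text) // 4)

def build_context(system_prompt, retrieved_docs, history, user_question,
                  max_tokens=4000):
    """Assemble a context window, trimming the oldest history first."""
    messages = [{"role": "system", "content": system_prompt}]

    # Retrieved documents are folded into a second system message here;
    # other designs inject them into a user turn instead.
    if retrieved_docs:
        docs_block = "\n\n".join(retrieved_docs)
        messages.append({"role": "system",
                         "content": "Reference material:\n" + docs_block})

    # Reserve budget for everything except conversation history.
    fixed = sum(estimate_tokens(m["content"]) for m in messages)
    fixed += estimate_tokens(user_question)
    budget = max_tokens - fixed

    # Walk history newest-to-oldest, keeping turns that fit the budget.
    kept = []
    for turn in reversed(history):
        cost = estimate_tokens(turn["content"])
        if cost > budget:
            break
        kept.append(turn)
        budget -= cost
    messages.extend(reversed(kept))  # restore chronological order

    messages.append({"role": "user", "content": user_question})
    return messages
```

Even this toy version surfaces the real design decisions: where retrieved knowledge sits relative to instructions, and which history turns survive when the budget is tight.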
In this chapter, we will explore how context engineering works in practice.