Context Is the Skill

The thing nobody talks about when they talk about prompting is context management. Not the prompt itself. The stuff around the prompt. What you include, what you leave out, and how you decide.

This is the actual skill of working with AI right now. And almost nobody treats it that way.

The instinct is to over-explain

When people start using LLMs for real work, the first instinct is to dump everything in. Here's my whole codebase. Here's every requirement. Here's the full conversation history. More context is better, right?

Not really. Context windows are finite. Every token you spend on background information is a token you're not spending on the actual task. And models get worse when they're drowning in irrelevant context. They start hedging, getting confused about which parts matter, and losing focus on what you actually asked.

I've seen this firsthand. A 200-line prompt with full file contents, design specs, and three paragraphs of backstory will often produce worse results than a 20-line prompt that gives the model exactly what it needs and nothing else.

The instinct is also to under-explain

The opposite failure mode is assuming the model knows things it doesn't. The model has broad training data. It knows Python, it knows React, it knows how to write SQL. But it doesn't know your codebase conventions. It doesn't know that your team uses a specific folder structure, or that your API returns errors in a particular format, or that there's a shared utility function that already does the thing it's about to rewrite from scratch.

This is the stuff that has to be explicit. The model can't infer your local conventions from first principles. If you don't tell it, it will make reasonable guesses that are wrong in your specific context.

The boundary is the skill

The real skill is knowing where the boundary sits between "the model already knows this" and "the model needs me to say this explicitly."

General knowledge: the model has it. How Python decorators work. What a REST API is. Common React patterns. You don't need to explain these things. Doing so wastes context and sometimes actually makes the output worse because the model starts pattern-matching to your explanation instead of using its own (often better) understanding.

Local knowledge: the model doesn't have it. Your file structure. Your naming conventions. Your specific business logic. The fact that your auth middleware expects a certain header format. This is what needs to be in the prompt.

The gray area is where it gets interesting. The model might know a library's API, but does it know the version you're using? It might understand the general pattern, but does it know the specific way your team implements it? This is where judgment matters.

Different models, different boundaries

Not all models need the same amount of context. A more capable model can infer more from less. Give Opus a function signature and it can often figure out the conventions from the surrounding code. Give a smaller model the same input and it might need you to spell out the patterns explicitly.

This matters when you're choosing which model to use for which task. I use Haiku for routine stuff where the task is straightforward and the context is minimal. Sonnet for standard development where some project-specific context is needed. Opus for architecture decisions and complex tasks where the model needs to reason about tradeoffs with less hand-holding.

The amount of context you provide should scale with the model's ability to fill in gaps. Smaller model, more explicit context. Larger model, more trust in inference.
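The scaling rule above can be sketched as a tiny lookup. This is a hypothetical helper, not a real API: the tier names and the complexity labels are assumptions for illustration.

```python
# Hypothetical sketch: pick a model tier from a rough complexity label,
# then decide how explicit the context needs to be for that tier.
# Names ("haiku", "sonnet", "opus") and labels are illustrative assumptions.

def pick_model(task_complexity: str) -> str:
    """Map a rough complexity label to a model tier."""
    tiers = {
        "routine": "haiku",    # straightforward task, minimal context
        "standard": "sonnet",  # needs some project-specific context
        "complex": "opus",     # architecture and tradeoff reasoning
    }
    return tiers[task_complexity]

def context_detail(model: str) -> str:
    """Smaller model: spell patterns out. Larger model: trust inference."""
    detail = {
        "haiku": "explicit, spelled-out patterns and examples",
        "sonnet": "key conventions plus pointers to relevant files",
        "opus": "pointers only; let it infer from surrounding code",
    }
    return detail[model]
```

The point of the sketch is the inverse relationship: as `pick_model` goes up a tier, `context_detail` goes down in explicitness.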

How I actually manage this

In my workflow, I have persistent configuration files that define project-level context: conventions, file paths, preferences, things the model should always know. These get loaded automatically so I'm not re-typing them every session.
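As a sketch, a persistent context file might look like this. The filename, stack, and conventions are illustrative, not my actual setup:

```markdown
# Project context (loaded every session)

- Stack: React Native + TypeScript
- Screens live in src/screens/, one folder per screen
- API errors come back as { code, message } objects, never raw strings
- Shared helpers are in src/lib/; check there before writing new utilities
```

The file answers exactly the "local knowledge" questions from earlier: structure, conventions, error formats, and what already exists.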

But when I write a specific prompt for a specific task, I'm selective. I tell the model which files to read. Not all of them. The ones that are relevant. I describe the feature in terms of what already exists, not from scratch. I point at patterns to follow rather than describing the pattern from nothing.

The prompt for a new screen in my app doesn't explain what React Native is. It says "look at the existing PrayerTimes screen, follow that pattern, here's what this new screen should do." That's the right level. The model knows React Native. It doesn't know my screen pattern. So I bridge that specific gap and nothing more.

Context as a budget

It helps to think of context like a budget. You have a fixed amount. Every token spent on something the model already knows is a token wasted. Every token you withhold that the model actually needed leaves a gap that produces worse output.

The optimal prompt is the one that spends every token on information the model genuinely couldn't have inferred on its own. Nothing more, nothing less.
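The budget framing can be made concrete with a rough audit. This is a sketch, not a real tokenizer: the four-characters-per-token rule is a common approximation, and the section names are hypothetical.

```python
# Sketch: treat context as a budget and audit what each section of a
# prompt costs. The chars // 4 heuristic approximates token count; for
# real numbers you'd use the model's actual tokenizer.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly four characters per token."""
    return max(1, len(text) // 4)

def audit_prompt(sections: dict[str, str], budget: int) -> dict[str, int]:
    """Cost out each section and warn when the total exceeds the budget."""
    costs = {name: estimate_tokens(text) for name, text in sections.items()}
    total = sum(costs.values())
    if total > budget:
        print(f"over budget ({total} > {budget}): "
              "cut what the model already knows")
    return costs

# Hypothetical prompt, split into the two kinds of knowledge.
sections = {
    "task": "Add a settings screen that follows the PrayerTimes pattern.",
    "local conventions": "Screens live in src/screens/; "
                         "API errors are { code, message } objects.",
}
costs = audit_prompt(sections, budget=2000)
```

The audit makes the tradeoff visible: anything in `sections` that restates general knowledge is budget spent for nothing.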

This sounds obvious. In practice, it's hard. It requires you to have a mental model of what the AI knows, what it doesn't, and where the edges are. That mental model gets better with experience. It's different for different models. And it changes as models improve.

This is a real skill and it compounds

People talk about prompt engineering like it's about finding magic phrases or secret techniques. It's not. It's about context management. Knowing what to include, what to leave out, and making that decision quickly for every interaction.

The people who are most productive with AI right now aren't writing better prompts. They're providing better context. They've built intuition for the boundary between what the model knows and what it needs to be told. And that intuition gets sharper every day they use it.

This is also the skill that's hardest to teach because it requires understanding both the model and the domain you're working in. You need to know your own codebase well enough to know which parts are relevant. And you need to know the model well enough to know what it can figure out alone.

If you're investing time in getting better at AI, invest it here. Not in prompt templates. In context judgment.