
The Curriculum Hidden in Defaults

Abstract UI elements arranged in a clean grid: sliders, buttons, toggles, and geometric modules, symbolizing default interfaces and the hidden curriculum of software design.

Interfaces are teachers in disguise. Defaults shape how we think; the challenge is becoming conscious of the lessons we’re absorbing.

Nathan Henry
8 min read

Interfaces shape your habits, expectations, and mental models long before you make your own choices.

Much of this happens beneath awareness. A tool sets a cadence, suggests a workflow, and shields you from certain mistakes. After enough repetitions, that cadence begins to feel like second nature. Choices feel intuitive because you have been trained to expect them.

Most interfaces guide how you arrange information or move through a process. A conversational model leans further into that trend by shaping tone, voice, and structure. The most effective ones act as intuitive thought partners, offering a rhythm and clarity that settle into your own patterns over time. This is the hidden curriculum: the steadily growing influence of design on thought. The more actively a system responds, the faster those lessons settle in and begin to shape the way you reason through a task.

Adjusting LLM Tone

A language model speaks with smooth, confident certainty, but beneath the gloss is a system trained to settle on statistical averages.

Language models regress toward the mean: they move toward what is most statistically likely, not what is most incisive. Their training data is full of institutional language, "safety" rules, and ideological biases, which create moral (and logical) contradictions and pull responses toward the comfortable middle. When those contradictions collide, the system smooths over the conflict rather than resolving it, which is often where its "hallucinations" and confident misfires originate.

Because the interface is conversational, those tendencies arrive wrapped in fluency. The model’s cadence becomes a template: polite, optimistic, eager to qualify, reluctant to take a firm position. Spend enough time with it, and that posture begins to feel natural. The flattening of nuance, the avoidance of sharp distinctions, and the preference for consensus over clarity all exert a gentle pull on your own reasoning.

A clean, minimalist chat interface displaying the prompt “What can I help with?” above an empty text field labeled “Ask anything,” with buttons for Attach, Search, Study, and Voice.
Credit: Screenshot from the ChatGPT interface (openai.com), shown for illustrative purposes.

The drift is the risk. Without deliberate attention, the model’s defaults become your defaults, simply because they repeat often and frictionlessly. The remedy is awareness: using the system as a tool to interrogate, not a tone to inherit.

One way to surface a model’s underlying logic is to hold the question constant and vary only the frame. Pick something with tension (responsibility, tradeoffs, or historical judgment) and ask it two ways:

Passive Framing

text
We’re seeing a drop in user retention. What factors should we consider, and what are some possible ways to address it?

This is a reasonable question, but it leaves the model to decide what matters. The output will be broad: onboarding, pricing, notifications, content quality, competitive landscape, etc. Nothing is prioritized, and no action emerges.

The model fills the ambiguity with a survey, not a position, which means the tool ends up steering the reasoning.

Active Framing

text
Weekly retention dropped from 42% to 31% over the last 60 days.

Known context:
- onboarding completion is 54%
- usage is concentrated in one core workflow
- no pricing changes

Return:
1) the most likely root cause
2) one action we can test within 30 days
3) the metric that would confirm improvement

The intent is identical, but the frame is different. By adding constraints like timeframe, context, and expected format, the model shifts from listing possibilities to making a hypothesis, proposing a next step, and defining how to validate it. The reasoning sharpens not because the model changed, but because the question did.

The lesson here is about structure. When the frame is loose, the output drifts because the system has nothing firm to align to. When the frame is defined, the model stops guessing and starts collaborating.
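
If you ask these questions often, the frame itself can be made reusable. Below is a minimal TypeScript sketch of that idea; the AnalysisFrame shape and buildPrompt helper are hypothetical names, not part of any LLM API, but they show how a loose question can be refused until it carries context and expected outputs.

typescript
// Hypothetical frame builder: AnalysisFrame and buildPrompt are
// illustrative names, not part of any LLM API. The point is that the
// structure is enforced before the prompt ever reaches a model.

interface AnalysisFrame {
  observation: string;    // the measured change, with numbers
  context: string[];      // known facts that bound the search space
  deliverables: string[]; // the exact outputs expected back
}

function buildPrompt(frame: AnalysisFrame): string {
  if (frame.context.length === 0 || frame.deliverables.length === 0) {
    throw new Error("Loose frame: add context and deliverables first.");
  }
  return [
    frame.observation,
    "",
    "Known context:",
    ...frame.context.map((c) => `- ${c}`),
    "",
    "Return:",
    ...frame.deliverables.map((d, i) => `${i + 1}) ${d}`),
  ].join("\n");
}

// Rebuilds the retention prompt above from structured parts.
console.log(buildPrompt({
  observation: "Weekly retention dropped from 42% to 31% over the last 60 days.",
  context: [
    "onboarding completion is 54%",
    "usage is concentrated in one core workflow",
    "no pricing changes",
  ],
  deliverables: [
    "the most likely root cause",
    "one action we can test within 30 days",
    "the metric that would confirm improvement",
  ],
}));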

That same shift happens in design: once decisions are structured, the tool becomes an extension of your intent rather than an interpreter of it.

Figma and the Power of Tokens

Once decisions are expressed as tokens, they stop being suggestions and become instructions.

Tokenization turns design decisions into objects that both humans and coding agents can interpret the same way. Figma’s variable and token system, in particular, gives designers a structural model that mirrors engineering logic. It nudges toward systems thinking by rewarding clarity, reusability, and consistency.

A token hierarchy diagram showing how a single base color cascades through category, subcategory, and component-level tokens in a design system. Lines connect raw values to semantic labels like neutral/0, surface/primary, and button/secondary, ending in styled UI elements such as buttons and pill labels.

A base color flows through categories and subcategories — neutral/0, surface/primary, button/secondary — and ends in concrete UI components like buttons and pill labels.

Credit: Figma Learn: “Variables & Token Structure” (figma.com/learn)

Token hierarchies convert raw values into meaning. Once the structure exists, ambiguity disappears.
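
To make the hierarchy concrete, here is a minimal sketch of the three tiers in TypeScript. The token names follow the diagram above; the resolve function is an assumption about how aliasing could work, not Figma’s actual implementation.

typescript
// A minimal three-tier token model: raw values, semantic aliases,
// and component tokens. Names follow the diagram above; the lookup
// logic is illustrative, not Figma's implementation.

const base: Record<string, string> = {
  "neutral/0": "#ffffff",
  "blue/700": "#1d4ed8",
};

const semantic: Record<string, string> = {
  "surface/primary": "neutral/0", // semantic tokens point at base tokens
  "action/primary": "blue/700",
};

const component: Record<string, string> = {
  "button/secondary": "surface/primary", // components point at semantics
  "button/primary": "action/primary",
};

// Follow aliases down until a raw value is reached.
function resolve(token: string): string {
  const next = component[token] ?? semantic[token] ?? base[token];
  if (next === undefined) throw new Error(`Unknown token: ${token}`);
  return next.startsWith("#") ? next : resolve(next);
}

console.log(resolve("button/primary")); // "#1d4ed8"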

How the progression works

Figma’s MCP (Model Context Protocol) features make the influence, and the power, of the interface unmistakable. With MCP, the model can read the file, understand the structure, and act on it. That changes how designers write, and how they think, because the model depends on precision.

Here’s how the same request shifts as soon as MCP is in the loop:

a. Non-tokenized request

text
Make the primary button darker and increase the padding a bit.

The agent has to guess what you mean. There is no shared system to reference. What comes back will likely honor your request, but non-deterministically.

b. Partially tokenized request

text
Use our primary blue at a slightly darker value and increase the button padding from 12px to 16px.

The agent can infer some structure and the intent is clearer, but the model must still rely on its own judgment (and this is where the logical contradictions introduced by "safety" training can complicate practical work).

c. Fully tokenized request

json
{
  "component": "Button/Primary",
  "update": {
    "color": {
      "background": "color.primary.700"
    },
    "spacing": {
      "paddingY": "spacing.scale.4",
      "paddingX": "spacing.scale.5"
    }
  },
  "regenerate": {
    "output": "react",
    "componentName": "PrimaryButton"
  }
}

This is a clear specification. An AI agent can act on it directly and reliably, with no guessing. The request has become an object the agent can execute.
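
One way to see why it is executable: the request conforms to a shape the agent can check before acting. Here is a sketch of that shape as a TypeScript type, inferred from the example above rather than from any published Figma MCP schema.

typescript
// The shape of the request above, written as a type an agent could
// validate against before acting. Inferred from the example, not a
// published Figma MCP schema.

interface TokenUpdateRequest {
  component: string; // e.g. "Button/Primary"
  update: {
    color?: Record<string, string>;   // values must be token paths
    spacing?: Record<string, string>; // never raw literals
  };
  regenerate?: {
    output: "react";
    componentName: string;
  };
}

// Accept dotted token paths like color.primary.700, reject raw values.
function isTokenPath(value: string): boolean {
  return /^[a-z]+(\.[a-zA-Z0-9]+)+$/.test(value);
}

function validate(req: TokenUpdateRequest): void {
  const values = [
    ...Object.values(req.update.color ?? {}),
    ...Object.values(req.update.spacing ?? {}),
  ];
  for (const v of values) {
    if (!isTokenPath(v)) throw new Error(`Not a token: ${v}`);
  }
}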

Why this matters

When design intent is expressed as tokens, it becomes operational. You create a direct interface between design and code. The results are:

  • Fewer dependencies on engineering for prototyping
  • Faster iteration cycles
  • Coding agents that can build with fidelity
  • Designs that scale because the logic scales
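
As one illustration of that interface, resolved tokens can be emitted straight into the build, for example as CSS custom properties. The pipeline below is a sketch of one possible bridge, not a specific tool’s feature.

typescript
// Emit design tokens as CSS custom properties: a sketch of one
// possible design-to-code bridge, not a specific tool's pipeline.

const tokens: Record<string, string> = {
  "color.primary.700": "#1d4ed8",
  "spacing.scale.4": "16px",
  "spacing.scale.5": "20px",
};

function toCssVariables(t: Record<string, string>): string {
  const lines = Object.entries(t).map(
    ([name, value]) => `  --${name.replace(/\./g, "-")}: ${value};`
  );
  return `:root {\n${lines.join("\n")}\n}`;
}

console.log(toCssVariables(tokens));
// :root {
//   --color-primary-700: #1d4ed8;
//   --spacing-scale-4: 16px;
//   --spacing-scale-5: 20px;
// }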

The takeaway is that while tokens can feel constraining, they establish precision, and precision increases leverage, especially once AI enters the room. The more systematically a design system is defined, the more easily it can be understood, extended, and acted on at scale.

  • As tokens harden into decisions, a system takes shape.
  • Systems can be learned, repeated, and scaled.
  • Good ones get used because they reduce friction and make the next step obvious.

This principle doesn’t stop at design. The same pattern applies the moment you start building an application: when structure exists, tools can collaborate with you; when it doesn’t, the cost is revealed.

Which brings us to vibe coding.

What Cursor Teaches Us About Structure

Anyone who has ever vibe-coded in Cursor learns the same lesson fast: you can build something impressive in a day, and it can break just as quickly.

A dark-themed screenshot of the Cursor code editor showing the “New chat” panel open, with the Agent settings dropdown visible. Options include model selection, keybinding, auto-run, and auto-fix toggles.
Credit: Screenshot from the Cursor editor interface (cursor.sh)

Putting together a quick app in Cursor can be a thrill: fast scaffolding, working UI, instant feedback. And then the floor drops out. A dependency breaks. State management collapses. You can’t remember what changed, because nothing was versioned.

This is the moment the interface starts teaching. Through friction. It surfaces the costs of improvisation and rewards structure the moment you adopt it, pushing you toward engineering discipline.

What Cursor teaches (whether you notice or not) is that:

  • Changes need lineage. The diff viewer makes every alteration visible. There’s no illusion of “it just works.”
  • Your code is a system, not a sketch. The agent asks for context when structure is missing. That’s a hint, not a failure.
  • Precision compounds. The more deterministic your instructions, the more deterministic the outcome, just like tokens.

The Shift from Vibe to System

Loose Instruction:

text
The latest release introduced an issue where users get logged out after a period of inactivity. Can you take a look and fix whatever is causing the session to expire unexpectedly?

In this example, Cursor has to infer everything: auth flow, token storage, failure state, expected behavior. You might get a patch, but you won’t get reliability or repeatability.

Structured Instruction:

json
{
  "intent": "Prevent unexpected sign-outs",
  "evidence": [
    "Users are logged out after 60 seconds of inactivity",
    "Auto-logout occurs even while actively using the dashboard",
    "Behavior began after updating the auth provider on Nov 14"
  ],
  "expectedBehavior": "Sessions remain active for 30 minutes of inactivity and never expire during active use",
  "nonNegotiables": [
    "No storing tokens in localStorage",
    "Preserve SSR compatibility",
    "Do not require users to log in again after page refresh"
  ]
}

Why this works (and why it scales)

The structured request not only helps Cursor fix the issue, it creates state, and state is what makes work repeatable.

When intent, evidence, expected behavior, and constraints are explicit, fixes can be traced, reasoning can be reviewed, and the outcome can be reproduced. That’s the line between coding and system-building. And the real leverage appears after the first change:

  • If the session logic breaks again, you now have a baseline
  • If someone new joins the project, they inherit clarity
  • If Cursor regenerates code, it does so from a known state

Without versioning, even good AI output evaporates. With versioning, it compounds. So the workflow becomes:

  1. Define intent (what should happen)
  2. Record the change (what was altered, and why)
  3. Store the source of truth somewhere the agent can read it again (a sketch follows this list)
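
In practice, that source of truth can be as simple as a decision record checked into the repository. The sketch below is hypothetical; the file path and record shape are illustrative conventions, not a Cursor feature.

typescript
// A hypothetical decision record the agent can re-read on the next
// change. The file path and shape are illustrative conventions, not
// a Cursor feature.

import { mkdirSync, writeFileSync } from "node:fs";

interface DecisionRecord {
  intent: string;     // what should happen
  change: string;     // what was altered, and why
  verifiedBy: string; // the check that confirmed the fix
  date: string;
}

const record: DecisionRecord = {
  intent: "Sessions stay active through 30 minutes of inactivity",
  change: "Reworked session expiry after the Nov 14 auth provider update",
  verifiedBy: "Dashboard session survives 25 minutes idle in staging",
  date: new Date().toISOString(),
};

// Store the record where both the agent and the next engineer can find it.
mkdirSync("decisions", { recursive: true });
writeFileSync("decisions/session-logout.json", JSON.stringify(record, null, 2));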

Which is where tokens re-enter the story. As we learned, tokens make decisions addressable: a color is defined as color.primary.700, spacing as spacing.scale.5, and so on. When decisions are defined and retrievable, Cursor doesn’t have to rediscover them; it can simply reuse them.

That’s the hidden curriculum: ambiguity forces AI to improvise; structure lets it collaborate; memory (versioning + tokens) lets it scale.

And the payoff is simple: You stop rebuilding fixes. You start building momentum.