The Ghost in the Document
Google’s internal design documents from 2023 surfaced in a public repository last fall, sparking a quiet panic among product teams. Not because of leaked roadmaps or unreleased features, but because of the writing. One document, outlining a new approach to on-device AI inference, read with unnerving fluency—clear, concise, and free of the usual corporate jargon. Engineers debated in Slack channels whether it had been drafted by a senior staffer or generated by an LLM fine-tuned on internal wikis. The answer? Neither. It was written by a mid-level product manager who had spent weeks refining it with feedback loops from three different teams. Yet its polish felt artificial, as if the human touch had been sanded down by consensus and revision.
This incident wasn’t an anomaly. Across Silicon Valley, design documents—once the sacred, messy blueprints of engineering culture—are becoming indistinguishable from AI-generated text. The shift is subtle but profound. Where design docs once bore the fingerprints of their authors—typos, digressions, emotional asides, even doodles in the margins—now they read as if composed by a committee of algorithms. The rise of AI-assisted writing tools, from GitHub Copilot to internal drafting assistants, has smoothed out the idiosyncrasies that once made technical writing feel human. The result is a creeping homogenization of thought, where clarity masks conformity.
The Death of the Voice
Design documents used to be personality tests. A Google doc from a decade ago might open with a bold claim, followed by a sarcastic footnote about legacy systems, or a personal anecdote about a failed prototype. These weren’t distractions—they were signals. They revealed how someone thought, what they valued, where their biases lay. Today, those signals are vanishing. Engineers and product managers are trained to write in a neutral, risk-averse tone. AI tools reinforce this by suggesting rewrites that eliminate ambiguity, emotion, and style. The output is technically correct, but emotionally sterile.
Consider the language itself. Older docs often used active voice: “We decided to deprecate the old API because it caused latency spikes.” Modern versions favor passive constructions: “It was determined that the legacy API would be deprecated due to observed performance degradation.” The subject disappears. The decision-maker vanishes. This isn’t just stylistic—it’s ideological. It reflects a culture that prioritizes defensibility over authorship, where taking ownership of an idea is riskier than diffusing responsibility across a system.
Worse, the tools themselves are shaping the content. AI drafting assistants are trained on vast corpora of existing documents, which means they amplify prevailing norms. If most design docs avoid strong opinions, the AI will too. If they favor bullet points over narrative, the AI will generate bullet points. The feedback loop is self-reinforcing: humans write like AI, AI trains on human writing, humans adapt to AI suggestions. The result is a flattening of intellectual diversity.
Why Clarity Isn’t Enough
Proponents of AI-assisted writing argue that cleaner documents lead to better execution. Fewer misunderstandings, faster reviews, more efficient decision-making. There’s truth to that. But clarity without character is dangerous. Engineering is not just about solving problems—it’s about choosing which problems to solve, and why. Those choices are shaped by values, intuition, and experience. When design docs erase the human element, they also erase the context that gives those choices meaning.
Take Twitter’s infamous 2015 decision to replace the star icon with a heart. The internal design doc, leaked years later, revealed a team deeply conflicted about the change. Some argued it would confuse users; others believed it signaled a shift toward emotional engagement. The debate was messy, emotional, and ultimately human. The final doc reflected that tension—footnotes, counterarguments, even a section titled “Why This Might Be a Bad Idea.” Today, that same document would likely be streamlined into a risk-assessment matrix with neutral pros and cons. The passion would be scrubbed out. The trade-offs would appear objective, when in reality they are always subjective.
This isn’t just about nostalgia. It’s about accountability. When a design doc reads like it was written by no one, it becomes easier to blame the process rather than the people. Failures get attributed to “systemic issues” or “misaligned priorities” instead of individual judgment. But systems are built by humans. And humans need to be visible in their work.
The Cost of Efficiency
The push for AI-generated clarity is driven by real pressures. Product cycles are faster. Teams are larger. Documentation is more critical than ever. But efficiency has a cost. When we optimize writing for speed and neutrality, we sacrifice the very things that make innovation possible: dissent, creativity, and individual voice.
Some companies are pushing back. At Stripe, engineers are encouraged to include a “personal note” section in design docs—a paragraph where they can share intuition, concerns, or even humor. At Figma, senior staffers deliberately leave in minor imperfections to signal authenticity. These are small acts of resistance, but they matter. They remind teams that design is not just logic—it’s judgment.
The question isn’t whether AI can write a good design doc. It can. The question is whether we want it to. A world where every technical document reads the same is a world where ideas converge too quickly, where risk aversion masquerades as rigor, and where the next breakthrough might be edited out before it’s even written. The next time you read a flawless, frictionless design doc, ask yourself: who, or what, is really behind it? And more importantly, what got lost in translation?