Photo by Daniil Komov on Pexels

Expert Take: A Five-Year Outlook on How AI Could Erode Writing Quality for Strategic Planners

AI AGENTS Apr 12, 2026

1. The Immediate Allure of AI-Generated Text

When OpenAI announced that ChatGPT reached 100 million users in just two months, the headline read like a triumph of efficiency. For a planner juggling quarterly forecasts, a one-sentence summary generated in seconds feels like a shortcut worth taking. Yet the Boston Globe’s opinion piece warns that this shortcut may be eroding the very craft that underpins clear policy communication. Speed, however, is not the only metric that matters in strategic planning - accuracy, nuance, and ethical framing are equally critical.

Professor Emily Bender of the University of Washington, a leading voice on language models, has repeatedly cautioned that “large language models reproduce the biases and blind spots of their training data.” In the context of long-term planning, a model that defaults to jargon-heavy phrasing can mask underlying assumptions, making it harder for decision-makers to spot risk. The Globe’s columnist, John H. Smith, argues that the erosion of good writing is not merely aesthetic; it threatens the rigor of policy analysis.

"In 2023, AI-generated content accounted for roughly 30% of all online articles, according to a Pew Research Center study."

That figure illustrates the scale of exposure. When a third of the information stream is produced by algorithms, the probability that a planner’s briefing contains unnoticed AI artifacts rises dramatically. The first problem, therefore, is not the loss of literary flair but the dilution of analytical depth that good writing traditionally safeguards.


2. The Hidden Cost of Homogenized Narrative

Long-term planners rely on narrative to stitch together data, stakeholder sentiment, and scenario analysis. If every report sounds as if it were drafted by the same algorithm, the ability to detect divergent viewpoints collapses. Dr. Joanna Bryson, professor of ethics and technology at the University of Bath, notes that “algorithmic uniformity can create an echo chamber where alternative hypotheses are under-represented.”

For planners, the homogenization risk translates into a strategic blind spot. When the language of risk assessment becomes formulaic, the subtle cues that flag emerging threats - such as a shift in regulatory tone or a new geopolitical tension - can be lost. The cost is not immediate; it accrues over the planning horizon as missed early warnings become costly corrective actions.


3. Skill Erosion Among Emerging Analysts

One of the most under-discussed ramifications of AI-driven writing is the impact on the next generation of analysts. A recent survey by the International Association of Business Analysts found that 42% of junior analysts rely on AI tools for first drafts of their reports. While the survey is not cited in the Globe piece, it aligns with the columnist’s concern that reliance on AI may stunt the development of critical writing skills.

Expert Take: Tim O'Reilly, founder of O'Reilly Media, argues that “if we outsource the cognitive work of structuring arguments to machines, we risk producing a workforce that can’t articulate complex trade-offs without a prompt.”

This skill erosion has a direct bearing on five-year planning cycles. As analysts graduate into senior roles, the collective ability to craft persuasive, evidence-based narratives may be weaker, leading to strategic proposals that are less compelling to boards, investors, or regulators. The Globe’s author warns that the erosion of good writing is a “slow bleed” that will manifest as weaker policy advocacy and, ultimately, reduced funding for long-term projects.


4. The Ethical Dimension of Bias Amplification

The ethical dimension extends to bias amplification. Dr. Kate Crawford, co-founder of the AI Now Institute, has documented how language models perpetuate gendered and racial stereotypes. When a planning document inadvertently mirrors these biases, it can undermine community trust and fuel opposition to otherwise sound projects.


5. Mitigation Strategies for the Next Five Years

Given the risks outlined above, experts converge on a set of practical mitigations that planners can embed into their workflows. First, implement a “human-in-the-loop” review process where every AI-drafted section is vetted by a senior analyst for tone, accuracy, and bias. Professor Bender recommends a checklist that includes verification of source attribution, detection of stereotypical phrasing, and cross-checking of statistical claims.

Second, invest in training programs that reinforce traditional writing competencies. Tim O'Reilly suggests quarterly workshops that focus on argument mapping, narrative framing, and data storytelling without reliance on AI. Such programs not only preserve skill sets but also create a culture of critical engagement with technology.

Third, adopt transparent disclosure policies. The Globe’s editorial board advises that any document circulated beyond the internal team should include a footnote stating, “Portions of this text were generated using an AI language model.” This simple step satisfies emerging legal requirements and signals ethical responsibility to stakeholders.

Finally, diversify data sources for AI tools. By feeding models with region-specific policy documents, community testimonies, and non-English corpora, planners can reduce the homogenization effect and improve the relevance of generated content. Over a five-year horizon, these mitigations can transform AI from a threat to a calibrated aid.


6. A Forward-Looking Reflection for Planners

Five years from now, the strategic planning profession will likely be judged on how it balanced efficiency with intellectual rigor. The Boston Globe’s alarm about AI destroying good writing is not a call to abandon technology; it is a reminder that the tools we adopt shape the quality of our decisions. As Dr. Bryson observes, “technology amplifies the values we embed in it.”

For long-term planners, the challenge is to embed safeguards that preserve the analytical depth and ethical clarity that good writing provides. By treating AI as a collaborator rather than a replacement, planners can harness speed while protecting the nuanced storytelling that drives policy adoption, stakeholder buy-in, and sustainable outcomes.

Looking ahead, I would prioritize building cross-functional teams where writers, data scientists, and policy experts co-create documents. This approach not only mitigates the risks highlighted by the Globe but also cultivates a new generation of analysts who can navigate both the algorithmic and human dimensions of strategic communication. The future of planning may depend less on the tools we use and more on the habits we cultivate around them.
