Across markets, the noise around AI has reached a new peak, and the signal-to-noise ratio a new low. Provocations circulate at high speed and volume, often without vetting or analysis by traditional curators, creating fresh problems for leaders. One 10-day period alone brought us:
• A “Something Big Is Happening” blog post by Matt Shumer, CEO of OthersideAI/HyperWrite, comparing this AI moment to February 2020, the month before Covid shut down the world. His core claim: AI has crossed from tool to autonomous executor, so much so that he “was no longer needed for the actual technical work.” The post triggered a wave of board conversations across industries.
• Citrini Research’s “The 2028 Global Intelligence Crisis” report, written as a memo from the future (June 2028). Though labeled a “scenario, not a prediction,” it named specific companies, including Mastercard and Visa, as vulnerable to AI-driven disruption, triggering a stock sell-off.
• OpenAI’s Sam Altman defending AI’s energy footprint at India’s AI Impact Summit on the grounds that it “takes a lot of energy to train a human—it takes like 20 years of life and all of the food you eat during that time before you get smart,” then extending the comparison to the roughly 100 billion humans who have ever lived. In The Atlantic, Matteo Wong read the remark as revealing something deeper: AI leaders equating human existence with computational power, a signal of a values shift.
• A post from Summer Yu, director of alignment at Meta’s Superintelligence Lab, about running the open-source agent OpenClaw on her personal inbox. She instructed it only to suggest what to delete, not to act. Instead it began mass-deleting everything older than February 15, ignoring her repeated commands to stop.
These four events, each discrete, viral and idiosyncratic, moved markets and heightened anxiety. By the time this article is published, there will be more examples. Conjecture and personal takes are disrupting the AI narrative, over and above the technological disruption itself. Executives and directors who spent decades learning how to read markets, regulations and competitors now find themselves in an environment where credibility, virality and truth travel on separate tracks.
What follows is a framework to meet this challenge and protect your strategy, sharpen your governance and preserve the clarity your organization needs most.
A Framework for Frothy Times:
1. Resist reactivity in favor of hard questions. Distinguish high-confidence data from high-conviction opinion. Before the next Citrini-style moment arrives, decide what questions you will ask, and of whom. How will you vet the information and its consequences? Do you have processes and incentives in place to encourage the necessary critical thinking and inquiry?
2. Develop “lead” rather than “lag” metrics. Which measures of ROI are meaningful to your business? Is usage instructive, and of what: levels of training, reduced workforce levels, increased productivity? Over what time scale? How are alignment wins (and avoided misalignments) captured?
3. Filter for motivation and misinformation. Evaluate the incentives of both the outlet and the author. Is volatility the point? Whom is the trend, or countertrend, likely to serve? Are internal or vendor voices captured? What is the risk of misinformation making its way into production data?
4. Build on bedrock. Do your board and management team leave sufficient time to assess the AI “why” and “project” questions: for what purpose, to what end? How are the “bargain” questions addressed with customers, employees and investors? When AI produces big wins (or losses), how are the rewards and true costs distributed and allocated?
5. Design for alignment, governance and performance. AI agents in particular demand forethought about design and guardrails. How are AI tools vetted, introduced into workflows and aligned with your values and objectives? What is the plan for auditability and control? Who owns the impact beyond implementation?
Uncertainty and reactivity are here to stay. How to navigate them will vary by event, and there is no single playbook. Even so, we can already see how CEOs and their boards will need to work to secure a future that is thoughtful, deliberate and aligned rather than one driven by chasing every ball from left field.