Jonathan Brill
Futurist-in-Residence at Amazon and Executive Director of the Center for Radical Change

Jonathan Brill is the Futurist-in-Residence at Amazon and Executive Director of the Center for Radical Change. Ranked the #1 futurist in the world by Forbes and described as “the world’s leading transformation architect” by Harvard Business Review, Brill converts the chaos of AI, geopolitical shifts, and economic disruption into bold advantage. He’s unlocked tens of billions for multinational corporations, frontier tech firms, and national governments. He’s the co-author, with Stephen Wunker, of AI and the Octopus Organization: Building the Superintelligent Firm. Learn more at www.aiandtheoctopus.com.

In this insightful conversation with CIO Magazine, Jonathan explains why many AI projects fail not because of technology, but because they’re implemented in rigid, outdated systems. Brill outlines how leaders can transform their organizations into agile, “octopus-like” networks where intelligence is distributed, collaboration thrives, and AI becomes a trusted partner rather than a threat. From building psychological safety to balancing centralized strategy with decentralized execution, Jonathan shares how leaders can prepare for the rise of the superintelligent firm. Below are the excerpts of the interview.

You’ve said that many AI initiatives fail not because of the technology itself, but because they’re deployed into broken systems. Could you expand on this idea? What are the most common structural or cultural flaws you see in organizations trying to embrace AI?

AI deployments rarely fail because the algorithms don’t work. They fail because the technology is introduced into organizations that were never designed to take advantage of it easily and efficiently. The core problem is that most companies still operate like ammonites — built around shells of stability, control, and hierarchy that once ensured survival but now inhibit it. These structures were optimized for a world of predictable inputs and linear improvement. AI, however, is exponential, fluid, and deeply interconnected. It doesn’t just automate processes; it reshapes the logic of how work gets done.

In many legacy organizations, information flows upward and decisions flow downward. Data lives in silos, middle managers act as interpreters between strategy and execution, and risk is managed through layers of approval. AI doesn’t work in those environments. Instead of empowering teams to act faster and make smarter choices, it becomes another tool waiting for permission.

Compounding the problem are leaders and managers who view AI as just another project to try out, and employees who are anxious about AI’s purpose and its implications for them.

For AI to succeed, organizations must dismantle the inertia in their processes, incentives, and mental models. They need to lean into fluidity to truly benefit from what AI offers.

The “Octopus Organization” is a striking metaphor. In what ways can leaders practically apply nature’s principles such as adaptability, distributed intelligence, and regeneration to make their organizations AI-ready?

The octopus is not just a metaphor; it’s a biological operating system for resilience. The octopus thrives because it is soft, adaptive, and distributed. It can sense, decide, and act through multiple centers of intelligence, reconfiguring itself in real time without losing coherence. Translating that into organizational practice means building systems that are both decentralized and deeply connected.

Leaders can apply this model by treating AI as the connective tissue—the “neural necklace”—that links an organization’s many arms. Rather than concentrating intelligence in a central brain, they must distribute it to the edges, where context is richest and speed matters most. AI enables this by giving teams the information and judgment support once reserved for senior leadership. Decision making moves closer to the customer, allowing strategy and execution to begin happening simultaneously.

From your experience with Fortune 500 companies and government agencies, what distinguishes organizations that thrive with AI from those that struggle or stall?

Steve Wunker, my co-author, and I have worked with countless Fortune 500 companies, and we have found that the most successful organizations are the ones that don’t chase AI for efficiency alone; they pursue it as a strategic amplifier of human intelligence. Their leaders make sure that every model and automation aligns with the business mission.

They also fully embrace the need for adaptability. Leaders of successful AI organizations recognize that they don’t yet know what the future will demand, so they build modular systems that can evolve. They use AI to create feedback loops between data, people, and decisions that inform future strategies.

Perhaps most importantly, successful adopters treat AI not as a substitute for human expertise but as an extension of collective intelligence. They see it as a coordination technology—one that enables faster pattern recognition, cross-boundary collaboration, and more intelligent risk-taking.

AI’s true promise isn’t replacing us. It’s amplifying our ability to coordinate at scale.

Fear and distrust often undermine AI adoption. What steps can leaders take early on to build psychological safety and ensure teams view AI as a partner rather than a threat?

Interestingly, the people who most need to experiment with AI—those in routine cognitive roles—often experience the highest psychological threat. Yet many AI rollouts are mandated with little input from the users, further stoking those fears.

Studies on proactivity show that people take initiative when they feel both autonomous and psychologically safe. To create psychological safety, leaders must begin with transparency: explain what AI is for, what it isn’t for, and how it will augment rather than erase human roles.

Trust grows through participation and initiative. When teams are invited to co-design how AI is used in their work—defining the boundaries, reviewing its outputs, shaping its feedback loops—they shift from feeling replaced to feeling empowered. Early wins should focus on freeing people from tedious tasks so they can focus on creative, judgment-driven work. Framing AI as a career mobility accelerator rather than a cost-cutting device signals that the organization values its people’s potential.

You’ve emphasized that becoming AI-ready doesn’t require a million-dollar rollout. What are some accessible, high-impact actions leaders can implement now to start that transformation?

The entry cost is not capital—it’s courage. The first step is to identify one recurring, time-consuming decision or process that delivers little strategic value. Equip that frontline team with access to AI tools, relevant data, and the authority to act. Measure the outcome not just in efficiency gains but in the quality of learning produced.

The transformation begins when people see AI working for them—simplifying workflows, uncovering insights, reducing friction. Once that trust is built, AI is no longer a mandate but a welcome collaborator.

As AI reshapes roles, decision-making, and workflows, how can leaders maintain balance between centralized strategy and decentralized execution much like the octopus’s distributed intelligence?

Going back to the octopus: while it acts locally through its arms, it maintains global coherence through its neural network. That same duality is essential to AI-enabled organizations. Strategy must provide the unifying purpose, ethics, and guardrails, but execution must be delegated to the periphery, where context lives.

Executives should set direction, define the parameters of acceptable risk, and ensure information symmetry. AI then allows local teams to operate autonomously within those boundaries, making thousands of micro-decisions aligned with the business strategy. Maintaining that balance requires constant feedback loops that allow insight from the edges to reshape strategy at the core.

Looking ahead, how do you see the concept of the “superintelligent firm” evolving over the next decade? What must today’s leaders unlearn to successfully lead in that future?

The superintelligent firm is not a company that outthinks humans; it’s one that thinks with them—a collective intelligence that merges human judgment, machine precision, and organizational adaptability, enabling it to react, learn, and reallocate resources dynamically.

To lead this type of organization successfully, executives will need to abandon the belief that foresight comes from prediction rather than preparedness. They will need to lean into ambiguity and create a culture that fosters experimentation, discussion, and a willingness to learn and move rapidly.

You’ve worked at the intersection of innovation, strategy, and AI for decades. What personal lessons or mindset shifts have guided your own approach to navigating disruptive change?

One thing that has become very clear to me over the years is that disruption is not an event but a state of being. Our goal shouldn’t be to predict the future, but to understand why the future is driving change, what we can do about it, and how we can start preparing for it today.

We shouldn’t fear change but be fascinated by it. AI, for example, doesn’t just automate work; it invites us to rediscover what makes us irreplaceably human: our capacity for meaning, creativity, and collective imagination.

Leading through disruption is not about mastering machines, but about mastering ourselves and our connection with others.
