
In boardrooms, creativity is often conflated with charisma—a founder’s flash of insight, a strategist’s “feel” for the market. The rise of creative A.I. complicates that mythology. Systems that once mimicked patterns are beginning to originate them, not by feeling their way through ambiguity, but by searching vast spaces of possibilities with tireless composure. The question for leadership is no longer whether A.I. can imitate the past. It is whether machines can meaningfully extend the frontier of invention—and how executives should organize decision-making when they do.
From imitation to invention
The cleanest evidence that A.I. is stepping past imitation arrives where truth is checkable: mathematics, molecular science and materials discovery.
In 2022, DeepMind’s AlphaTensor did more than learn to multiply matrices quickly: it discovered new, provably correct multiplication algorithms that improved on long-standing human results for several matrix sizes. That is not style transfer; it is algorithmic invention in a domain where proof, not opinion, decides progress.
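To make “faster algorithm” concrete, consider Strassen’s 1969 construction, the kind of human benchmark AlphaTensor was searching beyond: it multiplies two 2×2 matrices with seven scalar multiplications instead of the naive eight, and applying it recursively to matrix blocks turns that saving into an asymptotic speed-up. The sketch below is illustrative, not DeepMind’s code.

```python
# Strassen's 1969 scheme: multiply two 2x2 matrices with 7 multiplications
# instead of the naive 8. Applied recursively to matrix blocks, this yields
# an O(n^2.807) algorithm. AlphaTensor searched this same space of bilinear
# schemes and found constructions beating known records for some sizes.

def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B

    # Seven products (the naive method needs eight).
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    # Recombine with additions only.
    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4,           m1 - m2 + m3 + m6],
    ]

def naive_2x2(A, B):
    return [
        [A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
        [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]],
    ]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)  # correctness is cheaply checkable
```

Because correctness is machine-checkable, an algorithm found by search is exactly as trustworthy as one found by a human, which is what makes this domain so hospitable to machine invention.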
In late 2023, an A.I. system known as GNoME proposed 2.2 million crystal structures and identified roughly 381,000 as stable, nearly an order-of-magnitude expansion of the known “materials possibility space.” Labs have already begun synthesizing candidates for batteries and semiconductors, creating a faster loop between computational hypothesis and physical validation.
In 2024, AlphaFold 3 advanced from single-protein structure prediction to modeling interactions among proteins, nucleic acids and small molecules. This capability matters for drug design because binding, not just shape, drives efficacy. The model’s accuracy on complex assemblies has energized pharmaceutical R&D, though access limits have drawn pushback from academics who want open tools.
Progress is also visible in symbolic reasoning. In 2024, DeepMind reported that its AlphaProof and AlphaGeometry 2 systems solved International Mathematical Olympiad problems at a standard comparable to a silver medalist. At the same time, the research community continues to explore machine-generated conjectures, including the “Ramanujan Machine” work on fundamental constants.
None of this makes A.I. creative in the human sense. It does, however, expand the adjacent possible, surfacing options that were invisible or unaffordable to explore manually. When machines push frontiers in domains with crisp feedback—proofs or measured properties—boards should treat them not as autocomplete engines, but as option-generation machines for strategy.
A more recent wave of “reasoning models” underscores the shift. OpenAI’s “o” line prioritizes deliberate chains of thought and planning over fast pattern matching, and it posts measurable gains on mathematics and coding benchmarks. Whatever the brand names, the direction of travel is clear: more search, more planning, more verifiable problem-solving, and less reliance on past style to predict the future.
What machines still cannot feel
Creativity at the level that moves markets also rests on three human anchors:
- Intuition: tacit pattern recognition shaped by lived experience and domain immersion.
- Emotion: the energy to pick a fight with the status quo, to persist when the spreadsheet says “no.”
- Cultural context: sensitivity to norms, taste and symbolism that gives an idea social traction.
A.I. can simulate tone and recall cultural references. Still, it has no stake in the outcome and no phenomenology—no gut to trust, no fear to overcome, no values to defend. That absence is evident in strategy, where the “right” move hinges on timing, narrative and coalition-building as much as on optimization.
The practical stance, therefore, is not man versus machine, but machine-extended human judgment. Executives should treat creative A.I. as a means to broaden the search over hypotheses and prototypes, then apply human judgment, ethics and narrative sense to decide which bets to place and how to mobilize organizations around them.
How leaders should exploit machine invention—without outsourcing judgment
1) Run invention portfolios, not tool pilots.
The AlphaTensor and GNoME results serve as reminders that A.I.’s edge lies in search. Build portfolios where models explore thousands of algorithmic or design candidates in parallel, with clear funnels for lab validation or market testing. Resist vanity pilots; instrument programs like a venture portfolio with kill criteria, milestone economics and fast capital recycling.
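What “instrumenting like a venture portfolio” can look like is sketched below in a deliberately minimal, hypothetical form; the stage names, thresholds, costs and candidates are invented for illustration, not drawn from any real program.

```python
# A minimal, hypothetical sketch of portfolio instrumentation: every candidate
# faces a pre-registered kill threshold for its stage, and capital freed by
# kills is recycled into the next round. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    stage: str              # e.g. "generated" -> "lab_validation" -> "pilot"
    score: float            # model-side signal, e.g. predicted stability
    validation_cost: float  # cost to advance to the next stage

KILL_THRESHOLDS = {"generated": 0.60, "lab_validation": 0.75, "pilot": 0.85}

def triage(portfolio: list[Candidate], budget: float) -> tuple[list[Candidate], float]:
    """Advance candidates that clear their stage threshold, best first,
    while budget lasts; everything else is killed and the capital recycled."""
    survivors = []
    for c in sorted(portfolio, key=lambda c: c.score, reverse=True):
        if c.score >= KILL_THRESHOLDS[c.stage] and c.validation_cost <= budget:
            budget -= c.validation_cost
            survivors.append(c)
    return survivors, budget

pool = [
    Candidate("mat-001", "generated", score=0.91, validation_cost=20.0),
    Candidate("mat-002", "generated", score=0.55, validation_cost=15.0),  # below threshold
    Candidate("mat-003", "generated", score=0.78, validation_cost=30.0),  # exceeds remaining budget
]
survivors, recycled = triage(pool, budget=45.0)
print([c.name for c in survivors], recycled)  # ['mat-001'] 25.0
```

The mechanics are trivial; the discipline is not. Thresholds are pre-registered, kills are automatic, and freed capital immediately funds the next generation round.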
2) Separate generation from selection.
Let models overgenerate options; reserve selection for cross-functional councils that combine domain experts with brand, legal and policy voices. In drug discovery, for example, computational signals are necessary, but go-to-market narratives, regulatory risk and patient trust still decide value. AlphaFold 3’s critics highlight that access and transparency are strategic variables, not just technical ones.
3) Put proof and measurement at the core.
Favor use cases with verifiable feedback, such as proofs, A/B tests and measurable properties, before pushing into messier cultural domains. The faster the loop from hypothesis to truth signal, the more compounding advantage you build. That is why materials and algorithm discovery have progressed rapidly, while brand-level creativity remains a human-led endeavor.
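As a concrete picture of a “truth signal,” here is a minimal sketch of the statistical check behind a simple A/B test, using a standard two-proportion z-test; the conversion counts are hypothetical, and a production setup would add power analysis and multiple-testing controls.

```python
# A two-proportion z-test: the kind of crisp, verifiable feedback loop the
# playbook favors. The counts below are hypothetical.

from math import sqrt
from statistics import NormalDist

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the one-sided p-value that variant B converts better than A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - NormalDist().cdf(z)

# Hypothetical experiment: 10,000 users per arm.
p = ab_test(conv_a=520, n_a=10_000, conv_b=585, n_b=10_000)
print(f"p-value: {p:.4f}")  # a small p-value signals measurable lift
```

The output is a number that either supports the idea or does not; no narrative required.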
4) Couple A.I. with automated execution.
The materials ecosystem illustrates the compounding effect when A.I. designs are paired with automated synthesis and testing. The playbook for enterprises is similar: link generative systems to simulation, robotic process automation or programmatic experimentation to prevent ideas from dying in slide decks.
5) Govern for explainability where it matters—and for outcomes where it doesn’t.
Demand explanations in regulated or safety-critical contexts. Elsewhere, prioritize outcomes with robust testing and guardrails. AlphaTensor’s value lies in proofs; a marketing concept’s value lies in performance lift, not in the model’s narrative about why it works.
6) Incentivize “taste” as a strategic moat.
As models make it cheap to generate competent options, advantage shifts to taste—the human ability to recognize what resonates in a culture. Recruit and reward this scarce judgment. Machines can propose; only leaders can pick the hill to die on.
What this means for decision-making
The companies that convert creative A.I. into a durable advantage will do three things differently.
- Treat search as a first-class strategic function. Leaders will invest in compute, data and optimization talent the way prior generations invested in distribution—because the ability to search better than competitors becomes a compounding differentiator in R&D, pricing, logistics and design.
- Reframe “intuition” as a disciplined interface. Human intuition does not retire; it selects, sequences and stories the outputs of machine search. That interface needs structure: pre-registered criteria, red-team rituals, ethical review and explicit narrative strategy.
- Professionalize uncertainty. Creative A.I. expands the option set and the error surface. Governance must evolve from model-centric compliance to portfolio-centric risk control, with exposure limits, scenario triggers and graceful rollback plans. The lesson from AlphaFold 3’s access debate is that licensing, openness and ecosystem design are themselves strategic levers, not afterthoughts.
The bottom line is not that machines have acquired emotions or culture. They have acquired something strategically scarce: the capacity to search, prove and propose at superhuman scale in domains where every proposal can be checked against reality. That capability does not substitute for human attributes; it amplifies them. The winning organizations will be those that marry machine-scale exploration with human-grade selection, treating A.I. neither as a muse nor as a mask, but as the most relentless research partner strategy has ever had.
