

This past summer, an MIT report rattled the business community with its finding that 95 percent of enterprise A.I. applications fail to deliver the revenue growth companies expect. A more recent Wharton study, released in October, reached a similar conclusion, noting that it is still “too early” for most large organizations to see measurable gains from A.I. Even so, long-term optimism remains high: 88 percent of the Wharton study’s respondents said their organizations expect to increase A.I. spending next year.
“The narrative that A.I. can’t deliver business impact is misleading,” Adam Gabrault, CEO of Solvd, a software and digital infrastructure firm, told Observer. In July and August, Solvd surveyed 500 U.S. CIOs and CTOs from companies with annual revenues exceeding $500 million and found that nearly 60 percent reported business benefits from A.I. in specific areas such as predictive analytics, customer support, HR and data management.
Companies that use A.I. effectively tend to align it with clear goals and follow long-established digital transformation practices. These approaches have guided successful tech adoption since the rise of personal computers and the shift to cloud computing.
“There’s a huge amount of pressure from all sectors, and all industries, to figure out how A.I. could be a change agent,” said Gabrault. The first step, he said, is tying A.I. to a specific objective—reducing customer churn, improving support or lowering costs. The “think big, take small wins” mindset applies here as well. Companies that see returns on A.I. don’t try to use it to “solve all problems,” Gabrault added.
Deploying A.I. on top of legacy systems and poor-quality data is often futile. An insurance company still relying on 30-year-old systems to write policies and manage claims, for example, will struggle to make any A.I. platform work. “To even get to a place of A.I. adoption, companies need to start looking at their data stack and how to make it A.I.-ready,” Gabrault said.
“This is where most A.I. initiatives actually die, not from bad algorithms, but from the unglamorous reality of messy data and systems that weren’t designed to share information,” Bakul Banthia, co-founder of Tessell, an A.I.-native enterprise data platform, told Observer. A.I. models run best on complete and consistent data, he said. While bridging data silos and cleaning up databases takes time, systems can be connected through APIs, and automated tools—with human oversight—can help improve data quality.
“Once you start building that kind of infrastructure, then we’re starting to see the acceleration of A.I. adoption significantly change,” said Gabrault.
Navigating governance and regulation
Governing A.I. is complex. As a new technology, it lacks a well-established framework, leaving companies to navigate largely uncharted territory.
“The only real answer is for companies to be thoughtful and ethical around how they use A.I. in their business and to continue to monitor and reform their governance,” Steven Pappadakes, founder and CEO of NE2NE, an automation and data integration company, told Observer.
Privacy and data protection should be top priorities, Pappadakes said. Building a strong relationship with an A.I. provider can help companies understand the technology and train internal teams. As new regulations emerge, staying informed is essential, he added.
Companies should also be aware that regulators like the SEC have lost patience with A.I.-washing, the practice of overstating a product’s A.I. capabilities, which can lead to legal consequences, fines and lasting reputational damage.
In the U.S., federal regulators have been cautious about imposing broad A.I. rules, but most states have already enacted or plan to enact some form of A.I. legislation, and more measures are on the way. Companies operating in Europe face a more complex compliance landscape, with new laws such as the EU AI Act taking effect. “Those organizations that have that structure and framework ready are going to be in a much better position than those that do not,” Gabrault said.
Highly regulated sectors such as finance, banking and health care must involve strong compliance teams from the outset. These teams must vet A.I. projects, approve deployments and track new rules across local jurisdictions. Companies that plan for compliance early will be better prepared as new A.I. regulations inevitably emerge, Gabrault said.