
Good morning. According to new research, many boards are approving AI strategies without clear visibility into whether the underlying controls actually work, leaving CFOs exposed when regulators, auditors, or investors ask for proof. In the private sector, health care appears to face the steepest challenge.
Kiteworks, a technology security company, has released its “Data Security and Compliance Risk: 2026 Forecast Report,” based on a survey of 225 security, IT, compliance, and risk leaders across 10 industries and eight regions.
One of the key findings is that 53% of organizations cannot remove personal data from AI models once it has been used, creating long-term exposure under the EU's General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA), and emerging AI regulations.
All respondents said agentic AI is on their roadmap, but the controls to govern those systems are lagging. Overall, 63% cannot enforce purpose limitations on AI agents, 60% lack kill-switch capabilities, and 72% have no software bill of materials (SBOM) for AI models in their environment. The result: AI systems are accessing, processing, and learning from sensitive data while organizations cannot fully track where that data goes or prove how it is being used, according to the report.
Among the 10 industries surveyed, government faces the steepest challenges due to legacy systems. In the private sector, however, health care stands out for weaknesses in controls and AI governance.
Health care organizations are also among the most conservative in AI spending. More than 80% of health care respondents said they currently have no API agents planned—technology that enables AI agents to connect with external systems and operate in coordinated workflows. While cautious deployment can reduce near-term risk, organizations that delay may also fail to build the governance capabilities they will need as AI use expands, Kiteworks finds.
That caution reflects long-standing economic constraints. Health care has lagged industries such as banking and manufacturing in adopting advanced technologies, largely because of thin operating margins, according to reporting by Becker’s Hospital Review. Yet industry leaders increasingly see AI as essential to financial sustainability. Cleveland Clinic EVP and CFO Dennis Laraway told the publication that AI, robotics, and automation can help health systems scale by expanding patient coverage, increasing volume, and improving speed and accuracy—supporting cost transformation amid payment reform and regulatory pressure.
Those competing forces are landing squarely on CFOs’ desks.
“Health care CFOs are navigating a uniquely difficult balancing act as AI investment pressure intensifies,” Tim Freestone, chief strategy officer at Kiteworks, told me. “Unlike tech or retail, many health systems operate on 2–3% margins in good years, which makes every technology decision feel existential rather than experimental.”
Quantifying AI’s return on investment remains especially difficult, Freestone added. “How do you put a dollar figure on faster diagnosis or reduced clinician burnout?” At the same time, any AI deployment involving patient data brings substantial compliance and security costs, he said.
Because health care has been relatively slow to develop AI governance frameworks, CFOs are increasingly being asked to approve significant investments in technology their organizations may not yet have the internal expertise to evaluate or manage, Freestone said. “They’re essentially being asked to build the plane while deciding whether to buy it,” he said.
As scrutiny shifts from AI ambition to AI execution, CFOs may find that governance—not innovation—becomes the real test.
Sheryl Estrada
sheryl.estrada@fortune.com