Many organizations deploying AI recognize the need for guardrails, but few have figured out how to build a mature governance model. According to a recent survey from Cisco, three out of four organizations report having a dedicated AI governance process in place, but only 12% describe their efforts as mature.

Cisco's 2026 Data and Privacy Benchmark Study suggests not only that AI governance processes are still evolving, but also that privacy concerns are driving movement toward more guardrails, with 93% of organizations planning further investment to keep up with the complexity of AI systems and expectations of customers and regulators.

The struggle to establish good governance is real, AI experts agree. But recognition from the IT and security professionals surveyed that they have work to do is a good sign, says Jen Yokoyama, senior vice president for legal innovation and strategy at Cisco.

"It's a good statistic to show the awareness of the complexity that is facing these companies," she says. "If everyone said, 'Yes, we have a program, and that therefore means we're set and we are mature,' that would be showing a little bit of naivete over how incredibly complex this is."

One of the big challenges for organizations deploying AI is that governance has so far lagged adoption, Yokoyama says. Many IT leaders must make decisions on compliance, ethical issues, and transparency as the technology is being rolled out, she adds.

"A big part of the problem is the push for speed, quick adoption, and the push to figure out how to get returns on that technology," she says. "They need to do it at speed, because people are spending the money now, and people want to see returns, and so you're figuring it out at a pace."

IT leaders also need to consider issues such as privacy, data sharing with AI vendors, and data localization and sovereignty as they launch AI projects, Yokoyama says. "The problem is one size does not fit all," she adds.

Quick deployments

The speed of AI adoption has complicated governance efforts, agrees Jean-Matthieu Schertzer, chief AI officer at marketing IT provider Eagle Eye Group.

"Many organizations are moving quickly to deploy AI across functions such as marketing, automation, personalization, and operational efficiency, but governance maturity often lags as adoption scales," he says. "The opaque nature of many AI systems makes it difficult to trace decisions, identify bias, and establish clear accountability when something goes wrong."

Effective AI governance depends on structured operating practices such as documenting model limitations, conducting bias and security audits, and establishing review and oversight workflows, Schertzer notes. At the same time, AI leaders must meet growing expectations around transparency, consent, and regulatory compliance, he adds, and these issues cut across many departments within an organization.

"These requirements span legal, data, security, marketing, and product teams, and progress often slows when ownership is unclear or initiatives remain confined to siloed pilots rather than becoming standard operating practice," he says.

It's not only the speed of deployment that complicates AI governance, but also the speed of advancement within the technology itself, adds Ron Davis, senior vice president of product engineering and head of AI at manufacturing software vendor QAD Redzone.

"AI innovation is advancing faster than most enterprises can formalize controls, forcing teams to scale technology and governance simultaneously," he says.
"Compounding these challenges, AI governance remains an emerging discipline, with standards and operating models still taking shape."

The AI governance challenge is amplified in the manufacturing sector because AI increasingly influences operational decisions that affect safety and quality, he adds. "As a result, organizations are navigating uncertainty while balancing speed, risk, accountability and safety," Davis says.

Bad data governance

At many organizations, the struggle for better AI governance stems from a lack of good data governance, adds Anisha Vaswani, chief information and customer officer at networking equipment vendor Extreme Networks. Many enterprises are still trying to get their hands around good data governance, she notes.

"You overlay on top of that rapidly evolving technology landscapes, new investments, the upskilling of people that you need to be effective at governance, and it worsens the problem," she adds. "You're dealing with a lot of complexity in your data and fragmentation of data, fragmentation of models, rapidly evolving technology landscape in AI, and to be able to govern it, you need to understand it, you need to keep abreast of it, and it's moving so fast."

Vaswani recommends that organizations establish cross-functional teams to address governance issues. CIOs and other IT leaders have a big role in pushing for auditability and explainability in their AI tools, she adds.

"Part of governance is like being that negative Nelly and asking, 'What could go wrong, and how are we going to mitigate it?'" she says. "We haven't fully solved for it; it's emerging tech."

Cisco's Yokoyama agrees that creating good practices will be a collaborative effort because AI governance spans multiple disciplines within an organization.

"IT professionals see things that legal doesn't, that privacy doesn't, that the engineers don't, and all around that circle, the engineers see things that we don't," she says. "If you don't have a mechanism to have that conversation, especially in larger companies, you won't have them, and you'll learn after the fact, and you'll be reactive."

Effective AI governance requires broad, cross-functional participation, agrees Davis.

"Organizations should bring together product, engineering, operations, legal, and business leaders to define shared standards and accountability," he says. "This is not a one-time framework, but an ongoing operating model embedded into the product lifecycle across the organization that evolves as AI capabilities and use cases mature."

Leaders needed

Leadership is also important, and top executives need to define governance as a core responsibility of deploying AI, Davis adds. Leaders should establish clear ownership, decision rights, and escalation paths across the AI product life cycle, he recommends.

Organizations should also avoid treating regulation as the sole driver of governance models, he adds. "Instead, leaders should anchor governance decisions in human impact, ensuring AI systems are designed, deployed, and used with a clear focus on safety, trust, and responsible execution," Davis says.

Eagle Eye Group's Schertzer also recommends that IT leaders treat governance like financial oversight rather than red tape. IT leaders should run regular audits for bias and security, he says.

"The critical piece is answering, 'Who owns the decision when AI gets it wrong?' with named roles and an escalation path," he says.

IT leaders should also document which AI outputs can and cannot be explained, Schertzer says.
“Instead of promising perfect explainability, mature programs document model limitations, define review checkpoints, and create workflows for oversight and correction,” he adds. “This is what turns responsible AI into a repeatable day-to-day practice.”