Beyond compliance: Building trust through AI governance and control

A joint perspective from Saidot and Velatir

When 673 pages of chat logs from Tromsø municipality became public in 2025, they revealed something that should concern every board director and executive: entire sections of an official government report were 100% AI-generated, complete with fabricated research sources, and no human had verified any of it. The project leader's instruction to ChatGPT was chilling in its brevity: "Fill out the rest. No need for further quality control."

This wasn't just a municipal scandal. It was a preview of what happens when organisations rush to adopt AI without the operational frameworks to govern it. As the EU AI Act reshapes corporate accountability and boards face increasing scrutiny over AI oversight, the question is no longer whether to act, but how quickly organisations can implement the dual pillars of responsible AI deployment: operational governance and control.

Why companies must act now

The regulatory landscape has fundamentally shifted. The EU AI Act entered into force on 1 August 2024; prohibitions and AI literacy obligations have applied since 2 February 2025, and the governance rules and obligations for general-purpose AI models became applicable on 2 August 2025. The most significant milestone, full applicability, is scheduled for August 2026, when a large share of the high-risk system obligations become enforceable. Organisations operating in or serving the European market can no longer treat AI governance as a future concern.

The Tromsø case demonstrates the reputational and operational risks that emerge when AI is deployed without proper controls. When journalists requested the chat logs under Norway's Freedom of Information Act, the municipality initially refused, claiming they were merely "electronic traces" not subject to transparency laws. The County Governor disagreed, establishing in a landmark ruling that AI chat logs are documents covered by public access laws and must be available for scrutiny. This precedent signals a clear direction: AI interactions are not invisible, and organisations will be held accountable for how AI is used in their operations.

Operational governance: AI inventorying and risk-based governance throughout the AI lifecycle

Effective AI governance begins with visibility. Organisations cannot manage what they cannot see, and the first requirement of responsible AI deployment is knowing where, when, and how AI is being used across the enterprise. This is where AI mapping capabilities become essential, providing the operational governance layer that enables transparency and risk management.

Saidot's role in operational governance encompasses three critical functions:

Inventory of AI assets: Inventorying AI systems, agents, models and datasets makes them visible and subject to governance. Organisations need a consistent inventory to form the baseline for AI governance. Because AI systems are composed of multiple connected components, governance must operate at the right level of granularity. The EU AI Act requires organisations to detect AI systems and classify them according to their risk level. AI governance platforms support this through automated inventorying and classification based on regulations and internal policies alike.

Risk and compliance automation: Effective governance requires recognising that AI systems carry different levels of risk and should not all be treated the same. To avoid unnecessary governance burdens on AI teams, organisations must implement risk-based rules. A system's risk posture is shaped by its components, models, data, products, agents, tools, context and use. Automating this process helps consistently capture system risks and assign appropriate controls, while ensuring visibility into control implementation to provide audit-ready evidence.

Trigger-based governance throughout the lifecycle: AI systems continuously evolve as models are retrained, data or features change, or new components are added. Many enterprise AI systems now rely on third-party models, APIs and plug-ins that may update without notice. Trigger-based governance responds to these events by automatically initiating risk reassessment, control revalidation or compliance workflows. This lifecycle-aware approach ensures continuous compliance, prevents hidden risk accumulation and increases transparency across the AI supply chain.
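
To make the lifecycle mechanics concrete, the sketch below shows in schematic Python how an inventoried AI system might be re-classified and have its controls re-assigned when a lifecycle event fires. It is a minimal illustration under assumed names: RiskLevel, AISystem, CONTROLS_BY_RISK and the rest are invented for this example and are not Saidot's data model or API.

```python
# Illustrative only: a toy model of risk-based, trigger-based governance.
# All names here are invented for the example; this is not Saidot's product API.
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


@dataclass
class AISystem:
    """One entry in the AI inventory: a system and its governed components."""
    name: str
    models: list[str]
    datasets: list[str]
    third_party_components: list[str]
    intended_use: str
    risk_level: RiskLevel = RiskLevel.MINIMAL
    required_controls: list[str] = field(default_factory=list)


# Hypothetical policy table: which controls a given risk level demands.
CONTROLS_BY_RISK = {
    RiskLevel.MINIMAL: ["usage_logging"],
    RiskLevel.LIMITED: ["usage_logging", "transparency_notice"],
    RiskLevel.HIGH: ["usage_logging", "transparency_notice",
                     "human_oversight", "conformity_assessment"],
}

# Lifecycle events that should re-open the assessment.
TRIGGER_EVENTS = {"model_retrained", "dataset_changed", "component_added",
                  "third_party_update", "use_case_changed"}


def classify(system: AISystem) -> RiskLevel:
    """Placeholder classifier; a real one applies AI Act and internal policy rules."""
    if "recruitment screening" in system.intended_use:  # illustrative high-risk use
        return RiskLevel.HIGH
    return RiskLevel.LIMITED


def on_lifecycle_event(system: AISystem, event: str) -> AISystem:
    """Re-run risk assessment and control assignment whenever the system changes."""
    if event in TRIGGER_EVENTS:
        system.risk_level = classify(system)
        system.required_controls = CONTROLS_BY_RISK[system.risk_level]
        # A real platform would also open review tasks and record audit evidence.
    return system
```

A real platform would persist these records, pull component metadata from the systems themselves and attach evidence to each control; the point of the sketch is only that classification and control assignment are data-driven and are re-run automatically whenever the system changes.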

Operational control: Detecting, stopping, and escalating AI actions

Detection tells you what happened. Control determines what should happen. While visibility is essential, it is insufficient without the operational controls that enforce responsible AI use in real time. This is where Human-in-the-Loop (HITL) systems like Velatir become critical, providing the control layer that prevents AI incidents before they occur.

Velatir's role in operational control addresses three fundamental requirements:

Real-time enforcement of governance policies: Organisations need systems that don't just document AI usage but actively enforce rules about how AI can be used. This means configuring mandatory checkpoints where AI outputs cannot advance without human review, creating approval workflows that match organisational risk tolerance, and ensuring that policies are technically enforced. In the Tromsø scenario, a properly configured HITL system would have prevented the project leader from instructing ChatGPT to proceed "without further quality control," requiring instead that qualified personnel review and approve AI-generated content before incorporation into official documents. A minimal sketch of such a checkpoint follows these three requirements.

Structured approval workflows that embed expertise at critical decision points: Article 14 of the EU AI Act requires that high-risk AI systems be designed and developed with appropriate human-machine interface tools so they can be effectively overseen by natural persons during the period in which they are in use. Human oversight aims to prevent or minimise risks to health, safety or fundamental rights. HITL platforms enable this by routing AI outputs to appropriately qualified personnel, ensuring that decisions receive the level of human oversight they require, and creating clear accountability chains for AI-assisted work.

Security and access controls that protect against misuse: Operational control extends beyond governance to security, ensuring that AI capabilities are available only to authorised personnel, that usage is logged and auditable, and that sensitive AI interactions are protected from unauthorised access. This security dimension becomes increasingly important as AI systems handle more critical business functions and as regulatory scrutiny intensifies.
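
The sketch below, referenced under the first requirement above, shows in schematic Python what a mandatory human-review checkpoint might look like: AI output is routed to an appropriately qualified reviewer and cannot be released without an explicit approval. The roles, queue and function names are assumptions made for this example, not Velatir's API.

```python
# Illustrative only: a mandatory human-review checkpoint in front of AI output.
# The roles, routing and function names are assumptions, not Velatir's API.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class ReviewRequest:
    content: str             # the AI-generated output awaiting review
    risk_level: str          # e.g. "high" routes to more senior reviewers
    reviewer_role: str       # who is qualified to approve this output
    decision: Decision = Decision.PENDING
    decided_by: str | None = None


def require_human_approval(content: str, risk_level: str) -> ReviewRequest:
    """Route AI output to a qualified reviewer instead of releasing it directly."""
    reviewer_role = "senior_officer" if risk_level == "high" else "case_handler"
    # Submitting the request is what blocks the output: nothing advances until
    # a named human records an APPROVED decision on it.
    return ReviewRequest(content=content, risk_level=risk_level,
                         reviewer_role=reviewer_role)


def record_decision(request: ReviewRequest, approver: str, approved: bool) -> None:
    """Capture who decided and what they decided, for the audit trail."""
    request.decided_by = approver
    request.decision = Decision.APPROVED if approved else Decision.REJECTED


def release(request: ReviewRequest) -> str:
    """Only approved content can advance into official documents."""
    if request.decision is not Decision.APPROVED:
        raise PermissionError("AI output cannot advance without human approval.")
    return request.content
```

Under a gate like this, an instruction to skip quality control simply has no effect: the output stays blocked until a named person records an approval, and that decision becomes part of the audit trail.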

The control layer transforms AI from an unpredictable tool into a governed capability. It provides organisations with the confidence that AI is being used responsibly, that humans remain in control of critical decisions, and that policies are enforced consistently across the enterprise. The business case for operational control is compelling: as detailed in Velatir's analysis of AI control ROI, preventing a single governance incident or enabling safe AI adoption in one critical business process can justify years of investment in control infrastructure.

The Governance and Control Matrix: Why both are essential

Organisations often ask whether governance or control is more important. This question misunderstands the nature of effective AI governance. Governance and control are not alternatives but complementary capabilities that together create comprehensive AI governance.

Consider a simple two-by-two matrix.

On one axis, governance capability: can the organisation identify where AI is being used, what risks it introduces, and which controls are required?

On the other axis, control capability: can the organisation enforce and monitor those controls to ensure AI is used in alignment with policies?

Key insight: Comprehensive AI governance requires high-risk systems to operate in the quadrant where both capabilities are high. Organisations must implement both detection capabilities (to verify compliance) and control mechanisms (to enforce oversight). Point solutions addressing only one dimension leave critical governance gaps.

The EU AI Act lens: AI governance as fiduciary duty

The EU AI Act represents more than regulatory compliance; it signals a fundamental shift in how corporate governance must approach AI. The Act's human-centric approach and emphasis on oversight create new responsibilities for boards and executives that parallel traditional fiduciary duties.

Fiduciary duties require directors to act in good faith, with appropriate care, and in the interests of stakeholders. Directors must consider long-term consequences for the company and its reputation, the potential impact on employees, suppliers, customers, and the local community. When AI systems make or influence decisions affecting these stakeholders, directors cannot claim ignorance about how those systems operate or whether they're properly governed.

The AI Act's requirements for human oversight, transparency, and accountability effectively translate these fiduciary principles into technical and operational mandates. Article 14 requires that high-risk AI systems enable natural persons to properly understand relevant capacities and limitations, remain aware of possible automation bias, correctly interpret system outputs, and decide when not to use or to override the system. These aren't just technical specifications but governance obligations that demand board-level attention.

The Tromsø case offers a stark illustration. When AI-generated content containing fabricated sources appeared in official documents, it wasn't just a technical failure or a training gap. It represented a governance failure at the highest level: systems that allowed consequential decisions to be made without proper oversight, controls that could be bypassed with a simple instruction to skip quality control, and leadership that treated AI as an oracle rather than a tool requiring human judgment.

The practical AI checklist for boards and executives

Boards and executives need concrete ways to assess their organisation's AI governance posture. This checklist provides a framework for board-level discussions about AI governance and control capabilities:

Detection and classification

1. Can we identify where AI is being used across our organisation, including AI built by us and by third parties?
2. Do we have a clear mechanism to consistently classify AI based on risk and regulatory requirements?
3. Have we clearly defined which AI use cases are prohibited or high-risk, which require approval, and which fall within acceptable use?

Governance and risk management

4. Are we able to identify, evaluate and monitor AI-specific risks effectively?
5. Do we have processes for assessing new AI applications before deployment?
6. Have we conducted risk assessments of our high-risk AI systems in accordance with regulatory requirements?
7. Is AI governance integrated into our broader enterprise risk management framework?

Control and oversight

8. Have we implemented mandatory human review checkpoints for AI outputs?
9. Can our systems prevent AI outputs from advancing without appropriate approval?
10. Do our controls match the risk level of different AI use cases, with more stringent oversight for higher-risk applications?
11. Are our human oversight mechanisms technically enforced, or do they rely solely on employee discretion?

Audit and accountability

12. Do we maintain comprehensive logs of AI interactions, including what was asked, what was generated, and how it was used?
13. Can we produce audit trails demonstrating that human oversight actually occurred, not just that it was required by policy?
14. Have we tested our ability to respond to transparency requests about AI usage?

Competency and training

15. Have we ensured that personnel involved in AI deployment and oversight have appropriate competence, training, and authority?
16. Do we have clear role definitions for who can deploy AI tools, who must review AI outputs, and who has final approval authority?
17. Beyond general AI literacy training, do we have role-specific training for personnel with AI governance responsibilities?

Board oversight

18. Does our board receive regular reporting on AI usage patterns, governance compliance, and risk indicators?
19. Have we assigned clear accountability for AI governance at the executive level?
20. Do we have processes ensuring that AI-related risks and incidents reach board attention?
21. Are we treating AI governance with the same seriousness as financial controls and cybersecurity?

Organisations that cannot answer these questions affirmatively face significant exposure. Organisations that can answer them position themselves ahead of the compliance curve and demonstrate the kind of responsible AI governance that builds stakeholder trust.

The path forward: Building governance that enables innovation

The combination of regulatory requirements, board-level accountability, and public scrutiny has transformed AI governance from an optional best practice into a business-critical capability, and organisations should no longer deploy AI tools without robust frameworks for detection and control.

The Tromsø incident offers lessons that extend far beyond Norwegian municipalities. When organisations lack visibility into AI usage and controls over AI deployment, the results are predictable: fabricated information making it into consequential documents, reputational damage that takes years to repair, regulatory scrutiny that extends across the enterprise, and a cautionary tale that competitors use to position themselves as more responsible alternatives.

Why the Governance and Control Framework matters

At Velatir and Saidot, we believe that effective AI governance requires two distinct but complementary capabilities that most organisations currently lack. This isn't theoretical speculation. We've worked with enterprises across financial services, healthcare, public sector, and technology to understand where AI governance fails and what it takes to succeed.

What we've learned is that organisations consistently underestimate the complexity of AI governance. Many are still operating with a blind spot in which AI-specific risks are neither recognised nor acted on. Nor are they used to an environment in which AI is built from third-party components and therefore demands significant third-party risk management.

Organisations assume that policies will be followed, that training will be remembered, and that employees will exercise good judgment under pressure. Reality proves otherwise. Even well-intentioned professionals, when offered an expedient path through AI assistance, will take shortcuts that undermine governance objectives.

This is why we advocate for the governance and control framework. It's not simply about regulatory compliance, though it certainly achieves that. It's about building governance infrastructure that works with human nature rather than against it, that makes responsible AI use the path of least resistance, and that creates accountability through visibility rather than through hope.

Saidot's operational governance layer addresses the visibility challenge that undermines most AI governance efforts. Organisations cannot manage what they cannot measure. By providing comprehensive visibility across AI deployments, Saidot enables organisations to answer the questions regulators increasingly ask: Where is AI being used? What risks does it pose to the organisation and its stakeholders? How is compliance with disclosure obligations verified? Without these capabilities, those questions have no good answers. With them, organisations transform uncertainty into evidence-based governance.

Velatir's operational control layer addresses the enforcement challenge that turns policies from paper tigers into operational safeguards. Governance frameworks fail when they depend entirely on human discretion, when they can be bypassed under deadline pressure, and when they create friction that employees route around. Embedding Velatir directly into AI workflows ensures that governance policies aren't aspirational but operational. The system enforces what the organisation requires, creating accountability through technical controls rather than through trust alone.

Together, these capabilities answer the fundamental governance question boards must address: Can we use AI responsibly while maintaining accountability, building stakeholder trust, and satisfying regulatory requirements? Organisations with mature governance and control capabilities can answer yes with evidence. Organisations without them face an uncomfortable choice: limit AI adoption to avoid risk, or proceed with blind, ungoverned AI deployment and accept the consequences.

What this means for your organisation

If you're reading this as a board director, chief legal officer, chief compliance officer, or chief risk officer, you likely recognise elements of your own organisation in this discussion. Perhaps you've deployed AI tools without fully understanding usage patterns. Perhaps you have policies about AI use but lack technical enforcement. Perhaps you're concerned about the EU AI Act's August 2026 compliance deadline but uncertain where to start.

The encouraging news is that building comprehensive AI governance is achievable, and organisations that start now gain significant advantages over those that delay. The governance and control framework provides a clear roadmap: establish visibility into current AI usage, identify risks for AI and its components, implement technical controls that enforce governance policies, create audit trails that demonstrate compliance, and build the organisational capability to govern AI at scale.

The Tromsø case demonstrates where hope-based AI governance leads. The EU AI Act establishes where regulation is heading. Forward-looking organisations recognise that the combination of governance and control isn't just about avoiding scandals or satisfying regulators. It's about building the trust required to innovate with confidence, about demonstrating to customers and stakeholders that AI enhances rather than replaces human judgment, and about positioning AI governance as a competitive differentiator rather than a compliance burden.

The organisations that will lead their industries in the AI era aren't necessarily those with the most advanced AI capabilities. They're the organisations that can deploy AI responsibly, prove they're doing so, and build the stakeholder trust that enables continued innovation. This is the promise of comprehensive AI governance: not limiting what AI can do, but ensuring AI does it right.

This is the work we do at Velatir and Saidot, and we welcome the opportunity to discuss how the governance and control framework can strengthen your organisation's AI governance posture.

Ready to build comprehensive AI governance?

Learn how Velatir and Saidot can help your organisation meet EU AI Act requirements while enabling confident AI adoption through the combination of operational control and operational governance.

Velatir | Control your AI – everywhere.
Contact Velatir

Saidot | Graph-based AI governance
Contact Saidot
