Saidot enables transparency and accountability in AI
In Saidot, data-intensive algorithmic systems and AI applications are assigned identities. Each identity acts as a unique identifier and provides a means for ecosystem transparency, accountability, and cooperation. In this way, each identity can be developed to align with best practices and earn the trust of its users.
We standardise transparency by providing an open, modular data model that organisations and ecosystems can adapt and utilise according to their needs. Our model accommodates organisation- and sector-specific requirements whilst securing interoperability between different versions.
Saidot builds on this data model by using claims to define the behaviour and accountabilities of algorithmic systems. It also offers benchmark workflows designed specifically for people developing and governing responsible AI.
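As a purely illustrative sketch, not Saidot's actual schema, an identity with attached claims might be modelled along these lines (all names here are hypothetical):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Claim:
    # A single statement about a system's behaviour or accountability,
    # optionally verified by a partner organisation.
    subject: str                       # the identity the claim is about
    statement: str                     # the claim itself
    verified_by: Optional[str] = None  # verifying organisation, if any

@dataclass
class Identity:
    # A unique identifier for an AI system, plus its attached claims.
    identity_id: str
    system_name: str
    claims: List[Claim] = field(default_factory=list)

    def add_claim(self, statement: str) -> Claim:
        claim = Claim(subject=self.identity_id, statement=statement)
        self.claims.append(claim)
        return claim

# Example: register a system and attach a claim for partner review.
ident = Identity("ai-001", "Loan scoring model")
ident.add_claim("Model decisions are explainable to applicants on request.")
```

A modular structure like this lets sector-specific claim types extend a shared core without breaking interoperability.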
“Computer-supported anticipative intelligence is increasingly being used for decision-making that impacts people’s lives. Due to their complexity, for such algorithmic applications and systems to earn the trust of their users and the public, we must provide certified processes that can ensure that the systems are dependable, explicable and, as far as possible, unbiased. Saidot’s solution takes a big step from principles to practice. Their visionary AI identity management platform allows responsible organisations to collaborate on gathering data on how their algorithmic systems work, thus providing the critical means for transparency, accountability and putting ethics certifications into practice.”
The creation of responsible AI ecosystems requires new forms of collaboration. All participants share an important role in providing the transparency that enables end-to-end accountability. With Saidot, our customers can invite partner organisations to participate in defining, verifying, or reviewing claims and identities, helping to build a deeper collaboration based on shared understanding, responsibility, and accountability.
“Saidot's platform has the potential to address some of the biggest challenges in accountability and governance of machine learning systems. Responsible and ethical development of AI systems is now a priority, and Saidot's platform helps demystify how to introduce it in practice. This is why The Institute for Ethical AI & Machine Learning is a supporter of this great initiative.”
Our APIs welcome third parties – such as authorities and certifiers – to create and deploy sector-specific requirements, or to generate certifications and trust marks.
Our customers can also give stakeholders – from the general public to trusted third parties – the option to request and verify the information used and generated by their AI systems.
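One generic way such stakeholder verification can work (an assumption on our part, not a description of Saidot's internals) is to publish a cryptographic fingerprint of each claim, so anyone holding a copy of the record can check it has not been altered:

```python
import hashlib
import json

def claim_fingerprint(claim: dict) -> str:
    # Serialise the claim deterministically, then hash it. Anyone holding
    # the claim can recompute the fingerprint and compare it against the
    # published value to confirm the record is unaltered.
    canonical = json.dumps(claim, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical claim record; field names are illustrative only.
claim = {"subject": "ai-001", "statement": "Training data audited for bias."}
published = claim_fingerprint(claim)

# A stakeholder re-derives the fingerprint from the record they received.
assert claim_fingerprint(claim) == published

# Any tampering with the record changes the fingerprint.
tampered = {**claim, "statement": "No audit performed."}
assert claim_fingerprint(tampered) != published
```

This pattern keeps verification independent of trust in any single party: the check requires only the record and the published hash.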
Saidot adapts to the varying requirements of different algorithmic impact assessments by offering customised claims and claim sets. Our platform provides the means to operationalise AI transparency and accountability requirements such as the: