Introduction

Overview

The AI Verify Toolkit is an open-source, extensible toolkit that validates the performance of AI systems against a set of 11 internationally recognised AI ethics principles through process checks and technical tests. AI Verify can be used by companies for self-assessment, or by independent testers to verify AI models against the AI Verify Testing Framework. It supports the technical assessment of supervised learning models trained on most tabular and image datasets, covering binary/multiclass classification and regression.
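
As a concrete illustration of the kind of model the toolkit can technically assess, the sketch below trains and saves a binary classifier on tabular data using scikit-learn. The dataset name, file names and column names are hypothetical placeholders, and the exact artifact formats AI Verify accepts are described in its documentation rather than here.

```python
# A minimal sketch (assumptions: scikit-learn, a hypothetical tabular CSV with a
# binary "label" column, and a pickled model file). It shows the kind of
# supervised-learning artifact that the toolkit's technical tests target.
import pickle

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical tabular dataset with a binary "label" column.
df = pd.read_csv("loan_applications.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a binary classifier -- one of the model types the toolkit can assess.
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Persist the model and a hold-out dataset; artifacts like these are what you
# would supply when the toolkit prompts for the files a report's tests require.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
X_test.assign(label=y_test).to_csv("test_data.csv", index=False)
```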

How does the toolkit work?

The toolkit operates in a report-oriented workflow: you start with your end goal in mind. On the customisable report canvas, you design the report page by page using report widgets, and this layout determines which technical tests and process checks need to be run. AI Verify then streamlines your workflow, collecting only the relevant files, test arguments and user inputs needed to run the tests and generate your customised report.

To help companies align their reports with the AI Verify Testing Framework, the toolkit also comes with a set of report templates, which pre-define the report layout, technical tests and process checks needed.

To extend the suite of existing testing functionalities, you can install plugins built by the AI Verify Foundation or third parties.

Supported environments

The AI Verify Toolkit can be deployed in 2 ways.

For a more detailed breakdown of how AI Verify works, check out How It Works.

To jump directly into the tool, download the Quick Start Guide here.

If you are looking to contribute to the AI Verify Toolkit, visit the Developer Documentation here.