Trustworthy AI - What is it?

In the world of AI Ethics and Compliance, a key cornerstone of discussion is the term “Trustworthy AI”. In this short article I want to expand on that term: what it means, where it comes from, and what is currently being done to define it.

For Artificial Intelligence to be trusted to make decisions on behalf of humans, it has to fulfil a number of criteria that reassure the people affected by those decisions that they are being made in their best interests. These decisions may range from online processes, such as job applications or financial applications, through to real-world operations such as autonomous vehicles and industrial robots.

If an organisation does not ensure that its approach to AI is trustworthy, it can very quickly find its customers, employees, citizens or patients experiencing bias, discrimination and erroneous decisions, which could lead to reputational and financial harm, legal breaches and even risk to life.

There are many views on the characteristics that make AI trustworthy, but most flow from the work of the Organisation for Economic Co-operation and Development (OECD), which began work on Trustworthy AI in 2016 and published the first Principles in 2019, with revisions up to May this year. This is the first intergovernmental standard on AI, and 47 countries have signed up to it.

They define an AI system thus:

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

So, what are the OECD Guidelines for Trustworthy AI? I have summarised them here:


[Image: OECD Guidelines for Trustworthy AI]

Some of them are obvious, others less so; let's run through them briefly:

 

Inclusive Growth, Sustainable Development & Well-Being

Focusing on beneficial outcomes for people and planet, reducing inequalities and protecting natural environments. This is about driving inclusion, benefits for all, and sustainability.

Respect for the Rule of Law, Human Rights & Democratic Values, including Fairness and Privacy

Human-centred values around the law, equality, freedom, dignity, autonomy, privacy, diversity, fairness, social justice and labour rights. Preventing both intended and unintended discrimination.

Transparency & Explainability

Committing to transparency and disclosure of interactions with AI and, where possible, of the data and criteria that lead to a piece of content, a recommendation or a decision, so that a stakeholder can understand and challenge the output.

Robustness, Security & Safety

AI systems should be robust, secure and safe at all stages of their lifecycle, with mechanisms in place to repair, override or stop the system if required. They should be secured against bad actors stealing data, injecting bad data, or influencing content, recommendations or decisions.

Accountability

AI solutions require traceability as to the ownership of the system and accountability for its function and output. There should be a systematic risk management approach and the adoption of responsible business conduct to address the risks related to AI systems.

 

When you look across the landscape of AI guidelines from countries and organisations, you will find that they all come back to these five key principles. So if you are looking to put together Trustworthy AI guidelines for your own organisation, I would recommend the OECD approach as an excellent starting point, onto which you can layer your particular organisational, geographical and vertical-market elements.

Of course, all this is great, but how is it measured? Some aspects are already well covered by existing IT management frameworks and legislation (e.g. GDPR), but some are entirely new and subject to constant evolution as the technology, its applications and the legislation mature.

Risk management becomes a key part of the governance framework: your organisation's attitude to and appetite for risk, alongside the legal parameters, will determine what is acceptable and what is not – and it has to be transparent and explainable!

The first step to building Trustworthy AI into your organisation is to create an AI Governance Framework; this forms the Accountability part of Trustworthy AI. The key things to consider are the people and functions to include in the framework, who has overall accountability for AI in the organisation, and ensuring that there is board-level participation.

Now, most organisations already have mature governance around Data & Privacy (GDPR), Security, IT Operations and Risk Management. As these are key parts of your AI Governance model, these functions can be adapted to include AI, so you don't have to re-invent the wheel. AI Governance should be a layer over what is already there, letting you utilise and augment your existing expertise – a much more cost-efficient approach.

EmpathAIs can help with educating and working with your AI stakeholder teams to assist with the development of your approach to Trustworthy AI and the governance structure needed to meet your ethical and legal requirements.

We offer a range of training and workshop sessions to get your organisation off on the right foot. We recommend some short, virtual training for the key AI actors in your organisation to help you understand the fundamentals and piece together what and who you need to be involved in your governance team, followed by on-site training and workshops with that wider team to determine what the Governance model will look like for you.

More details are available here; we can of course tailor the approach to suit your needs:

https://www.empathais.com/ai-ethics-compliance

  

In my next article I will take a look at risks and harms, i.e. what can happen if you don't apply the Guidelines for Trustworthy AI, and the penalties for breaching compliance (now and in the future).

 
Thanks for Reading
Jeremy Atkins
Principal Consultant
jeremy.atkins@empathais.com

Follow-up reading:

https://oecd.ai/en/ai-principles

https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449