
12 January 2024

Responsible AI - an overview

Neal Johnson

Organisations are adopting AI, or feeling the pressure to adopt it, across their operations, products and services at a rate incomparable to recent innovations. Whilst companies might have flirted with crypto and the metaverse, the way AI has gripped the public interest, and that of senior leaders, is driving rapid adoption of the technology.

The pressure to implement AI cannot come at the cost of finding a valuable use case and, more importantly, adoption needs to consider the wider implications, risks and challenges. AI needs to be introduced to organisations, products and services in a responsible manner.

Until relatively recently the subject of Responsible AI was reserved for academics and specialists. Today most big technology companies have frameworks and guidelines for AI development.

It’s not just the tech companies – there are now 400+ publications from governments and standards bodies, which have further complicated this field of technology.

If your organisation doesn’t have such Responsible AI processes in place, we suggest you consider it.

So what changed? Why has the need for policy, governance and safe implementation become urgent now, if the technology has been in circulation for two decades?

Tech companies made to acknowledge AI risks

As noted above, large technology companies such as Amazon, Apple, Facebook/Meta and others have all been using some form of AI for close to two decades, and all have had problems along the way.

A few examples are given below, and all of these issues share a common thread: at some point the application overstepped an ethical line.

The cost of an “overstep” can be financial. Amazon cut its losses at $50 million, abandoning the project before it went live. Facebook were fined €1 million in the EU and $5 billion in the US for “deceiving users”.

In other cases it can be reputational: Apple Card never really recovered from claims of gender bias, with its financial partner Goldman Sachs looking to escape the deal.

How responsibility came to AI

Prompted by these mishaps, and in an attempt to avoid repeating them, companies turned to academic research on applying responsibility to artificial intelligence. That research originated in the 1950s and mainly dealt with philosophical arguments. What companies glimpsed in it was the potential for a set of principles covering all AI applications.

Here are some examples of the principles:

  • Fairness
  • Transparency
  • Accountability
  • Privacy
  • Safety
  • Explainability
  • Inclusivity
  • Sustainability

Each of the 400+ publications defines what is meant by each principle in often different and contradictory ways. For one, can you imagine the user story “As a user, I want to identify and mitigate any unfairness in the AI application”? What is an engineer supposed to do with that? How would you define “done”? Yes, you could use a metric, but which metric? Attempting to tackle responsibility and ethics at scale with a strict, imprecise and overly broad one-size-fits-all approach gives a false impression that the risks and issues are managed.

What got lost as AI responsibility scaled was the potential glimpsed in those principles. For large tech companies the approach works; for governments, not so much. Why does it work for large tech? Because they use the principles as intended: to form a common language across teams and stakeholders, so that when someone says “fairness” all parties understand what it means and what needs to be done. For governments, governing by consent rather than control creates a greater space for disagreement.

How we can use Responsible AI

“Responsible Artificial Intelligence? That’s just more documentation? We do not need that, it is just going to interfere with how we go about work.”
The above statement is a valid concern. However, just like GDPR before it, we won’t have a choice. Almost every government has released draft policies, guidelines and frameworks for AI – some of these will inevitably tell you what you can and cannot do within their jurisdiction.

Here we concentrate on a checklist that gives the general meaning of what is required by each concept: People, Context, Accountable, Valid & Reliable, and Transparent.


People

Artificial intelligence technologies should be designed and deployed in a way that prioritises human welfare and rights. This ensures the AI application aligns with societal values and human-centric principles. The goal is to create an AI application that augments and enhances human capabilities and decision-making, rather than replacing or undermining them.

How?
  • Encourage an environment where AI applications are accessible and their benefits are understood and shared.
  • Ensure human-in-the-loop or human-on-the-loop is designed into AI applications. Support human decision-making and do not replace it (see the sketch after this list).
  • Regularly evaluate the social impact of an AI application, making adjustments to ensure alignment with peoples’ expectations, welfare, and societal values.
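
Below is a minimal sketch of what a human-in-the-loop design can look like in code, assuming a hypothetical model that returns a label and a confidence score. Predictions below a configurable confidence threshold are routed to a human reviewer instead of being acted on automatically; the names (route_prediction, CONFIDENCE_THRESHOLD, review_queue) are illustrative, not part of any particular framework.

```python
from dataclasses import dataclass

# Illustrative threshold: predictions below this confidence go to a person.
CONFIDENCE_THRESHOLD = 0.85


@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"


def route_prediction(label: str, confidence: float, review_queue: list) -> Decision:
    """Accept high-confidence predictions; send everything else to human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label=label, confidence=confidence, decided_by="model")
    # Human-in-the-loop: the model suggests, a person decides.
    review_queue.append({"suggested_label": label, "confidence": confidence})
    return Decision(label="pending_review", confidence=confidence, decided_by="human")


if __name__ == "__main__":
    queue: list = []
    print(route_prediction("approve", 0.93, queue))  # decided by the model
    print(route_prediction("approve", 0.60, queue))  # escalated to a person
    print(f"{len(queue)} case(s) waiting for human review")
```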

Context

Context borrows from two basic elements of ethics: values and norms.

  • values – the degree of importance of a thing or action.
  • norms – what should be done or what is expected.

Values and norms are the unwritten rules that define what is considered appropriate or acceptable conduct when addressing an individual or group. A single AI application may have to match the values and norms of many diverse groups, and each group may hold very different ones. The application should treat these diverse groups equally, while recognising that a compromise may be needed where values conflict.

How?
  • AI applications are not special. Use traditional discovery methods; they work and are well understood.
  • Identify any group that may have been subject to historical bias.
  • Consider who is going to be affected by the application. Identify as many groups as possible, including those that may not be using the system directly but are involved, such as developers, decision makers and accountable owners.
  • Find out as much as possible about existing systems that cover the same or a similar use case. Collect as much information as possible about policies, procedures, limitations and issues.
  • Explore the possibility of mathematical proof that norms are implemented as expected (a minimal testing sketch follows this list).
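
One lightweight way to move a norm from prose towards proof is property-based testing: state the norm as an invariant and let a test framework search for counterexamples. The sketch below is illustrative only – it assumes a hypothetical score_applicant function standing in for a real model or rule set, and uses the hypothesis library to check a simple norm: two applicants who differ only in a protected attribute must receive the same score.

```python
from hypothesis import given, strategies as st


def score_applicant(income: int, years_employed: int, gender: str) -> float:
    """Hypothetical scoring function standing in for a real model or rule set."""
    # A responsible implementation does not use the protected attribute at all.
    return 0.6 * min(income / 100_000, 1.0) + 0.4 * min(years_employed / 10, 1.0)


# Norm under test: changing only the protected attribute must not change the score.
@given(
    income=st.integers(min_value=0, max_value=500_000),
    years_employed=st.integers(min_value=0, max_value=40),
)
def test_score_is_independent_of_gender(income, years_employed):
    assert score_applicant(income, years_employed, "female") == score_applicant(
        income, years_employed, "male"
    )
```

Run with pytest: hypothesis generates many input combinations and reports any counterexample it finds. This is not formal verification, but it turns a written norm into something the build can check automatically.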

Context is a documentation process that establishes the common language for a team, client and stakeholders.


Accountable

When people have to make decisions for someone else, they usually weigh-up the pros and cons based on what they think is best for that person. They consider the potential good outcomes and the possible bad outcomes, trying to choose the option that minimises harm and maximises benefits. Once made, they own the decision and are answerable for the outcomes, good or bad.

An AI application is no different.

How?
  • Ensure the system operates legally.
  • An accountable owner should be a person.
  • Ensure that each automated decision for a context group has an accountable owner. This includes internal stakeholders.
  • Always ensure you know what data set is used in a model. On any occasion the model goes wrong, you will need to know its data lineage; this information should be provided by a suitable transparency method (a minimal record sketch follows this list).
  • Always ensure you know what data is used in a model. Clearly document what data from the data set is used and how the data may have been altered. Although part of transparency, the use of specific data in a specific form must be an accountable decision.
  • Ensure a redress policy can be identified and easily applied in case something goes wrong.
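
Below is a minimal sketch of how the ownership and data-lineage points above could be captured as a machine-readable record rather than a loose document. The field names (accountable_owner, transformations, redress_contact and so on) are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DatasetRecord:
    name: str
    version: str
    source: str                                                # where the data came from
    transformations: list[str] = field(default_factory=list)   # how the data was altered
    known_limitations: list[str] = field(default_factory=list)


@dataclass
class ModelAccountabilityRecord:
    model_name: str
    model_version: str
    accountable_owner: str    # a named person, not a team or a system
    decision_context: str     # which context group the automated decision affects
    dataset: DatasetRecord
    redress_contact: str      # where affected people can raise a challenge
    last_reviewed: date


# Illustrative example; every value here is hypothetical.
record = ModelAccountabilityRecord(
    model_name="loan-triage",
    model_version="2.3.0",
    accountable_owner="Jane Doe, Head of Credit Risk",
    decision_context="Retail loan applicants",
    dataset=DatasetRecord(
        name="retail-applications",
        version="2023-11",
        source="internal CRM export",
        transformations=["removed direct identifiers", "imputed missing income"],
        known_limitations=["under-represents applicants under 21"],
    ),
    redress_contact="appeals@example.com",
    last_reviewed=date(2024, 1, 12),
)
```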

Accountability should be provided as both documentation and automated proof that the system can recover when it is not working as “normal” for the target audience.


Valid & reliable

Valid & reliable is a necessary condition of Responsible AI and the base for all the other characteristics – without it, nothing else works.

  • “Valid” in this sense means that the AI system accurately reflects or predicts the real-world scenarios it is designed for, ensuring that its outputs are meaningful and relevant.
  • “Reliable” involves the consistent performance of the AI system over time and across various conditions, maintaining its accuracy and effectiveness.

Validity and reliability of deployed AI systems are often assessed by ongoing testing or monitoring that confirms a system is performing as intended. Measuring validity and reliability contributes to trust and should take into account that certain types of failure cause greater harm. Efforts should prioritise minimising potential negative impacts, and may need to include human intervention in cases where the AI system cannot detect or correct errors.

How?

You need to sit down and think about who you are targeting with your AI application.

  • Identify the audience you are targeting. This is usually a subset of groups from the Context. The audience is not just the users.
  • With the audience, define the conditions in which the AI is expected to work as “normal”. An example of “normal” is “for this audience I expect at least an 80% good outcome”.
  • Check whether the data set has any biases affecting your audience.
  • When you have a model ready, make sure it behaves as “normal” for the different groups in your audience (see the check sketched after this list).
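
Below is a minimal sketch of an automated “working as normal” check along the lines described above, assuming a hypothetical evaluation set with a group column and borrowing the 80% threshold from the example. In practice the metric, the groups and the threshold come out of the Context work, not the code.

```python
import pandas as pd

GOOD_OUTCOME_THRESHOLD = 0.80  # "normal" as agreed with the audience


def check_per_group_performance(results: pd.DataFrame) -> dict:
    """Expects columns 'group', 'prediction' and 'actual'.

    Returns the share of good outcomes per group and raises if any group
    falls below the agreed threshold.
    """
    rates, failures = {}, {}
    for group, rows in results.groupby("group"):
        rate = (rows["prediction"] == rows["actual"]).mean()
        rates[group] = rate
        if rate < GOOD_OUTCOME_THRESHOLD:
            failures[group] = rate
    if failures:
        raise AssertionError(f"Below-threshold groups: {failures}")
    return rates


# Illustrative evaluation data; a real check would use a proper held-out set.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   1,   1,   0,   1,   1,   0],
    "actual":     [1,   1,   0,   1,   0,   1,   0,   1,   1,   1],
})
print(check_per_group_performance(results))
```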

Valid & reliable should be provided as automated proof that the system is working as “normal” for the target audience; it is not documentation.


Transparent

AI should be easy to detect and easy to understand. People generally do not trust things that cannot be explained or understood. Our interaction with AI is increasing daily and, as our usage grows, so does its use in decisions made about us. For an AI system to gain trust, people must have a level of understanding and knowledge, explained appropriately for them.

How?

You need to sit down and think about what the audience you are targeting needs to know about how your AI application works.

  • Understand the level of transparency allowed for by the client. As an example, describing how a road route for a company is chosen by an AI application – for a taxi app this may be acceptable; for a prisoner-transport system, probably not.
  • For each group in the Context, explain how the input values influence the model’s outcomes (one common technique is sketched after this list).
  • Test the explanations you align on with the groups in the Context to confirm they are understood.
  • Ensure that at least one explanation is good enough to satisfy normal audience usage.
  • Ensure that at least one explanation is good enough to satisfy accountability concerns.
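
As a minimal sketch of one common way to produce the “how input values influence outcomes” explanation, the example below uses permutation importance, which measures how much a model’s score drops when each feature is shuffled. It runs scikit-learn on synthetic data purely as an illustration; the feature names are hypothetical, and the explanation format your audience actually needs (model cards, per-decision reasons, and so on) comes out of the Context work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic stand-in data: three features, the first two drive the outcome.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
feature_names = ["income", "years_employed", "postcode_noise"]  # illustrative names

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does the model's accuracy drop when each feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda pair: -pair[1]
):
    print(f"{name}: {importance:.3f}")
```

The printed ranking is a starting point for an explanation, not the explanation itself; it still needs to be translated into language each Context group can act on.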

Transparency should be provided as both documentation and automated proof that the system is working as “normal”, as appropriate for the target audience. Transparency is not only documentation.

Examples of Transparent:
  • https://modelcards.withgoogle.com/object-detection
  • https://ai.meta.com/tools/system-cards/instagram-feed-ranking/

What next?

As noted, there are already 400+ publications from governments and standards agencies, with more on the way. Our clients are asking for support in this field. It is not a simple process, and even from this overview it is clear that answering the basic questions around the use of artificial intelligence is complex, with no single answer.

Our clients want to be enabled by AI and in such a way that it aligns with their values. Responsible AI is not a term we own; however, we are passionate about the principles behind it and are well placed to support responsible institutions in aligning this technology robustly.

Contact us to discuss how we can help with your policy, governance and implementation of responsible AI projects. We can help.