With the promise of vast improvements to productivity and efficiency, AI is set to disrupt every industry. However, if developed or deployed incorrectly, these same AI models can cause real social and economic harm. This has led to growing interest in AI ethics and in how organisations can prevent and mitigate the dangers of improperly deployed AI models.
Three concerns recur: AI bias, the explainability of models, and accountability. Addressing each of these challenges is imperative when deploying AI, as getting them wrong has significant economic, social and moral consequences.
Tackling algorithmic bias
One way to view intelligence in general is as a means of identifying patterns in raw data. In this respect, every AI system has an inherent bias; ideally, these “biases” discriminate towards answers that reflect useful patterns or truths. Implemented incorrectly, however, a model can inherit societal biases and statistical errors that are detrimental to its performance. The patterns and relationships an AI model perceives depend heavily on the data we choose to feed it, so the model is only as good as that data. When deploying models, then, we should not simply attempt to remove all bias, but rather ensure that undesired biases are mitigated.
Many of the datasets we currently draw from are limited by bias of some sort. For example, a longstanding oversight means that car crash-test data has been collected only from “male” crash dummies; as a result, inferences from that data can put women at greater risk of harm in car accidents than men. Even the largest technology companies struggle with this: one tech giant’s CV-scanning tool came under fire for bias because it had been trained largely on CVs from male engineers. The algorithm favoured words such as “captured” or “executed”, which are more common in male CVs, and so the tool ultimately favoured male candidates.
To combat this, AI models need to be trained on thoroughly examined and controlled datasets, and the data should be representative of the use case the model is being trained for. Experts in the domain the model is applied to should be involved throughout its lifecycle, from inception onwards, and should subject it to assessments whose rigour scales with the risk that applying AI to that field poses.
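As a minimal sketch of what such a dataset examination could look like in practice, the Python snippet below uses pandas to compare how a sensitive attribute is represented in a training set and how outcome rates differ across groups. The column names (`gender`, `hired`) and the toy data are hypothetical, and this is only a first step in a dataset audit, not a complete fairness assessment.

```python
import pandas as pd

def audit_representation(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Summarise how each group is represented and how outcomes differ by group.

    df          -- training data
    group_col   -- sensitive attribute to audit (hypothetical 'gender' column here)
    outcome_col -- binary label the model will learn (hypothetical 'hired' column here)
    """
    summary = df.groupby(group_col).agg(
        count=(outcome_col, "size"),          # rows per group
        positive_rate=(outcome_col, "mean"),  # share of positive outcomes per group
    )
    summary["share_of_data"] = summary["count"] / len(df)
    return summary

# Toy data: a group that is under-represented and under-selected
# stands out immediately in the summary table.
df = pd.DataFrame({
    "gender": ["male"] * 80 + ["female"] * 20,
    "hired":  [1] * 48 + [0] * 32 + [1] * 6 + [0] * 14,
})
print(audit_representation(df, group_col="gender", outcome_col="hired"))
```

A table like this does not decide whether a dataset is fit for purpose, but it gives domain experts something concrete to interrogate before any model is trained.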
Making AI explainable
An explainable AI model is one whose decisions we can understand and justify, and whose inferences and conclusions follow a clear chain of justification from the original input. In an explainable model we can also spot decisions based on unjust inferences caused by poor training data, which is why explainability should be seen as a means of preventing undesired biases from forming in an AI model.
Explainability also makes models interpretable by non-technical experts and regulatory authorities. This is important because many organisations face legal requirements to justify the decisions they make, so relying on an unexplainable model exposes them to fines and limits their ability to scale up their AI capabilities.
Historically, the drive towards explainability was something many feared, not least because more easily explainable models are often less complex, and were therefore assumed to be less accurate. However, with increasing legal demands to make models more explainable, and equipped with a broader set of tools and techniques, many teams have found that this putative trade-off is not as significant as expected.
Making an AI model explainable remains a difficult, and ultimately very necessary, undertaking, but teams have generally found that a smart allocation of resources and time can provide explainability without significantly hampering accuracy. Necessity has proven a mother of innovation: the requirement for explainability has driven new and efficient practices across a variety of AI use cases.
Explainability is now far more attainable through advanced feature engineering and human-in-the-loop processes. So while achieving it may require teams to devote effort beyond simply developing and deploying the model, this does not, crucially, reduce the model’s ability to make accurate predictions.
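To make this concrete, here is a minimal sketch, assuming scikit-learn is available, of one common explainability technique: permutation importance, which measures how much a model’s score drops when each input feature is shuffled. The public breast-cancer dataset is used purely as a stand-in for a real business problem; in practice the features, model, and metric would be your own.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a public dataset purely as a stand-in for a real business problem.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the model's score drops.
# Large drops indicate features the model leans on, which is the kind of
# evidence a reviewer or regulator can interrogate.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(
    zip(X_test.columns, result.importances_mean),
    key=lambda item: item[1],
    reverse=True,
)
for feature, importance in ranked[:5]:
    print(f"{feature}: {importance:.3f}")
```

This is one technique among many, but it illustrates the point: a modest amount of additional engineering can surface evidence about why a model behaves as it does, without changing the model itself.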
Clarifying accountability
AI can make decisions that radically affect people’s lives. Models can influence whether someone goes to jail, receives a mortgage, or gets hired. If a person were making the same decision, they would be held accountable for it; the fact that an AI rather than a human agent is making it makes accountability far more complex.
Who is responsible, for example, when someone is hit by a self-driving car? Is it the original designer of the AI model? Is it the developer who deployed the algorithm? Is it the compliance officer who signed it off? Or is it the civil servant who allowed that car to drive on the road?
This minefield of accountability has huge ramifications. Unless it is clear who is accountable for specific contingencies or risks, many organisations face a major disincentive to deploying and scaling their AI models. To ensure that accountability is clarified and maintained, organisations have to develop an understanding of the risks and assign accountability for them before the technology is implemented. From the most basic steps onwards, organisations must ensure that a human element safeguards and codifies accountability for the decision-making of AI systems.
A good regulatory environment is essential
The key to solving all of the above problems is to put a solid set of regulatory frameworks in place that ensure AI models are carefully designed, explained, and audited. History has shown that regulations which protect and standardise industries don’t just stop bad actors; they also help good actors become more efficient, thereby boosting confidence in the industry.
It’s my hope that business, society, academia, and government will come together to nurture discussion on the ethics of AI and begin to build these regulatory frameworks in the coming years. Interdisciplinary collaboration is critical to succeeding in the complex challenge of ensuring that the AI models we increasingly depend on are explainable, accountable, serve our common interests, and mitigate undesired biases.