Alejandro Saucedo of The Institute for Ethical AI & Machine Learning contributes to this article by Aaron Hurst on the key skillsets needed for successful AI deployments.
With artificial intelligence (AI) carrying many risks if deployed improperly, we explore the most important skills for workforces to have.
Just as technologies evolve and become more useful in the enterprise, so do the skills needed to deploy them successfully, and AI is no exception. For software development in general, the importance of formal technical education is waning: a Codingame report revealed that 80% of HR professionals have hired programmers who were self-taught.
“When we think of deploying AI in the enterprise at scale, the skills that are needed are evolving,” said Beatriz Sanz-Saiz, global data and analytics lead at EY. “The skills needed to obtain a PhD in the field, for example, are no longer necessary.
“Companies need a foundation of AI engineers that can not only manage the algorithms, but also the data involved.
“Companies will need increasing numbers of data engineers and data skills to shape modern architectures. Without those skills, it will be very difficult to bring in AI at scale.”
The ability to manage and analyse large volumes of data, along with a willingness to learn quickly and communicate clearly with colleagues across the enterprise, is now seen as more important than exactly how digital skills were obtained.
Compliance practices
Traditionally, AI development has been thought of as a model creation process that ends once the model has been built. Deploying the technology, however, now requires attention to a much wider range of considerations. With AI deployments drawing on multiple datasets, one of the most vital pieces of this puzzle is compliance.
“What is becoming clear is that in the model’s entire lifecycle, training is only the start of it, so the skills needed go beyond data science capabilities,” said Alejandro Saucedo, engineering director of machine learning at Seldon.
“The IT and compliance requirements are now just as critical to the process. Then you need to consider the operational components that are brought in, depending on the use case. Compliance checks call for roles such as operational managers, delivery managers and domain managers.
“Ultimately, the AI skills that are needed today boil down to data science capabilities, software engineering capabilities, IT operation capabilities, and domain expertise.”
DevOps and ModelOps
Dr Iain Brown, head of data science for UK & Ireland at SAS, has seen the evolution of skills needed to deploy AI first hand.
He said: “I’ve been in this industry around 15 years. I have a statistics background, and picked up the computer science elements over the years, but I was very much focused on the analytical side of things.
“What organisations really need more of now is those at the top and tail of the process. This means DevOps, procuring the right environments and putting in the infrastructure for these models to be developed, and then ModelOps, where those models are being taken through a process and deployed into production environments.”
It is at these later stages of the deployment process that monitoring, governance and validation of AI models need to be considered. These capabilities, alongside the ability to identify and put in place the most suitable infrastructure, have proven just as necessary as maths, statistics and computer science.
Brown believes that DevOps and ModelOps competencies have been especially valuable within the banking sector, where larger organisations have taken a combined view of business problems, identifying them and adapting the modelling ecosystem accordingly.
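As an illustration of the kind of post-deployment check that ModelOps involves, the sketch below compares live input data against the training distribution to flag drift before a model's predictions are trusted. It is a minimal, hypothetical example: the feature values, threshold and function names are assumptions for illustration, not code from SAS, Seldon or any specific platform.

```python
# Minimal sketch of a ModelOps-style drift check on one input feature.
# Data, threshold and names are illustrative assumptions only.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(training_feature: np.ndarray,
                 live_feature: np.ndarray,
                 p_threshold: float = 0.05) -> bool:
    """Return True if the live data distribution differs significantly
    from the training data, using a two-sample Kolmogorov-Smirnov test."""
    statistic, p_value = ks_2samp(training_feature, live_feature)
    return p_value < p_threshold

# Example: training data centred at 0, live traffic drifting towards 0.5
rng = np.random.default_rng(seed=42)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.5, scale=1.0, size=1_000)

if detect_drift(train, live):
    print("Drift detected: review the model before trusting its predictions.")
else:
    print("No significant drift detected.")
```

In practice, checks like this would run continuously against production traffic and feed into the monitoring and governance processes described above.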
Domain and sector knowledge
As AI has bolstered the operations of more and more sectors, it’s become apparent that knowledge of the technology alone isn’t enough for deployments to succeed. Whether the AI solution is serving companies or individuals, the engineers behind the roll-out need to understand the business at hand.
“The company needs people who know the principles of how these algorithms work, and how to train the machine, but can also understand the business domain and sector,” said Sanz-Saiz.
“Without this understanding, training an algorithm can be more complex. Any successful data scientist not only needs technical expertise, but domain and sector expertise as well.”
Without sufficient industry knowledge, decision-making can become inaccurate, and in some cases, such as healthcare, it can also be dangerous.
Companies such as Kheiron Medical have been using an AI solution to transform cancer screening, accelerating the process and minimising human error. For this to be effective, careful assessments and evaluations at every stage of the screening procedure need to be in place.
“I think a commitment to clinical rigour needs to underpin everything that we do,” explained Sarah Kerruish, chief strategy officer at Kheiron. “You need to be able to test and evaluate at scale in ways that are independently validated, and there’s no way of getting around that.”
Kerruish also believes that humility and close collaboration with colleagues are skills that are just as important in the medical space. In the case of helping to detect signs of cancer, this means working closely with radiologists, as well as bringing patients on the journey to ensure they understand how the process works.
She continued: “We’re not focused on replacing radiologists, but on helping them. Just like an accountant needs a calculator, we need better tools to achieve our goal.”
AI bias
Another important area in which the workforce needs to be well-versed is the possibility of bias. AI development teams need governance skills to ensure that bias is minimised.
Claire Woodcock, senior product manager at Onfido, said: “Cross-functional teams consisting of deeply technical specialisms alongside those with user experience and policy understanding are key to delivering governance.
“They ensure greater protection against fraud and a better customer experience for all. For example, we are living through a time where digital identity is the key to accessing essential services, making it vital that identity verification technology works as intended for everyone regardless of race, age, or any other human characteristic.
“Using tools such as governance frameworks can enable teams to amend their decision-making process and eliminate negative instances of AI such as bias.”
In collaboration with the Information Commissioner’s Office (ICO), Onfido has been looking to improve its facial recognition algorithm to reduce bias in identity verification for financial services and enterprises. The sandbox initiative has led to a false acceptance rate of 0.01% and a false rejection rate of 0.3%, as well as a 60-fold improvement in false acceptance rates for documents issued by African countries.
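To make metrics like these concrete, the sketch below shows one simple way of breaking false acceptance and false rejection rates down by group so that disparities become visible. The records, group labels and structure are invented for illustration; this is not Onfido's or the ICO's actual evaluation pipeline.

```python
# Hypothetical sketch: per-group false acceptance / false rejection rates.
from collections import defaultdict

# Each record: (group, is_genuine_user, was_accepted)
results = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_b", True, True),
    ("group_b", False, True), ("group_b", False, False),
]

counts = defaultdict(lambda: {"genuine": 0, "rejected": 0,
                              "impostor": 0, "accepted": 0})

for group, is_genuine, was_accepted in results:
    stats = counts[group]
    if is_genuine:
        stats["genuine"] += 1
        if not was_accepted:
            stats["rejected"] += 1   # false rejection of a genuine user
    else:
        stats["impostor"] += 1
        if was_accepted:
            stats["accepted"] += 1   # false acceptance of an impostor

for group, stats in counts.items():
    frr = stats["rejected"] / stats["genuine"] if stats["genuine"] else 0.0
    far = stats["accepted"] / stats["impostor"] if stats["impostor"] else 0.0
    print(f"{group}: false rejection rate={frr:.2%}, false acceptance rate={far:.2%}")
```

Reporting these rates per group, rather than only in aggregate, is what allows a team to see whether an identity verification system works as intended for everyone.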
Omer Artun, chief science officer at Acquia, added: “One key area which the workforce must be educated about is AI bias. While headlines like to proclaim that AI can be discriminatory, AI is a tool, and so isn’t inherently biased.
“Instead, it is trained on data sets – and if these data sets are biased, the AI is likely to adopt similar traits.
“Teams therefore need to be educated about transparent and open means of data collection, to ensure they’re not feeding the AI with biased data sets.”
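One simple form of the pre-training audit Artun alludes to is checking how a labelled dataset is distributed across a sensitive attribute before it is used for training. The sketch below is a minimal illustration; the column names and data are assumptions, not a prescribed method.

```python
# Hypothetical sketch: auditing a training set for skew across a sensitive attribute.
import pandas as pd

df = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+", "51+"],
    "label":    [1,        0,       1,       1,       0,     0,     0],
})

# Representation: how much of the dataset each group accounts for
representation = df["age_band"].value_counts(normalize=True)

# Outcome rate: proportion of positive labels within each group
positive_rate = df.groupby("age_band")["label"].mean()

print("Share of records per group:\n", representation, sep="")
print("\nPositive label rate per group:\n", positive_rate, sep="")
# Large gaps in either figure are a prompt to revisit data collection
# before training, rather than conclusive evidence of bias on their own.
```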