The ethics and challenges of AI

By Jonathan Ebsworth

As a society, we’re often slow to understand the impact that new technology will have on our lives. Sometimes these unexpected consequences are positive, such as the way that washing machines liberated women from hours of daily drudgery, enabling them to take an active part in public life. Often, they are negative: just think of the damage done by adding lead to petrol, or the heart-breaking consequences of prescribing thalidomide as a treatment for morning sickness.

The problem lies in the fact that the full impact of new technology takes time to emerge, and the jury is still out on the long-term effects of social media – or, for that matter, artificial intelligence.

Ethical questions unanswered

We live in a time of rapid and profound technological change that affects every aspect of our lives, so why pick on AI as deserving particular scrutiny? Ask most technologists and they will tell you that AI, of all the technologies currently being developed, will likely bring the biggest changes to our world in the next few decades. It will revolutionise our jobs, the services we use, and even the way that we think; it could also fundamentally alter humanity’s relationship with the machines it creates.

As things stand, we are rushing heedlessly into the future with a blithe disregard for the unsolved ethical questions of AI. That’s not true of everyone: Microsoft is aware of the racism problems experienced by its self-teaching Tay and Zo chatbots, while autonomous vehicle developers are grappling with the Trolley Problem – a thought experiment whose answer will ultimately determine who lives and who dies in a road accident.

While not every AI application will involve life-or-death decisions, a failure to examine and answer ethical questions will lead to damaging consequences for businesses and other organisations that deploy AI-based technologies.


Peculiar challenges of AI

If you think this is scaremongering, consider last year’s story about Compas (Correctional Offender Management Profiling for Alternative Sanctions), a machine learning tool used to assess defendants in the US justice system. The tool was found to mistakenly label black defendants as likely to reoffend, flagging them as recidivists at twice the rate of white defendants.

Or take the issue of autonomous weapons systems. We already have pilotless aircraft (in fact, remotely piloted ones), but should we leave the decision to launch a Hellfire missile to an algorithm?

The list of problematic questions is almost endless: we’ve already looked at the issue of driverless cars, but what about AI applications dealing with sensitive data? The Cambridge Analytica scandal has shown what happens when organisations take a cavalier approach to personal information; without an ethical foundation, future AI applications could wreak the same damage on an unimaginable scale.

We can scoff at Terminator-style scenarios, where AI gains self-awareness and turns against humanity, but the fact remains that machines are only as ethical as they are programmed to be. How, then, can we create an ethical framework for AI – and whose job is it to do so?

A delicate balance

There will be some who say that the answer to these difficult questions is to create a raft of legislation setting out the parameters for ethical AI, but I suggest that this would be an historic mistake.

The problems with this approach are legion: legislation is often heavy-handed, and a government-mandated set of rules would stifle technological advances in an area where the UK enjoys an enviable lead over other nations. Moreover, politicians (no matter how well-briefed) are not the best people to decide complex, fluid questions about technologies that they do not fully understand.

That’s not to say that politicians can’t play an important role in shaping our future relationship with artificial intelligence. One example of the positive effect that parliament can have is the publication of the House of Lords AI Select Committee’s report in April, which proposed a cross-sector code of ethics for AI based on five principles. These are sensible proposals that would provide an ethical foundation for future AI projects; they include the principles that artificial intelligence should not be used to diminish the data rights or privacy of individuals or groups, and that the autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

That’s a good start, but we can’t leave it to politicians to shape the future of artificial intelligence. Instead, we must show businesses that making AI ethical is a matter of enlightened self-interest – an argument that should resonate with any free-marketer. Businesses need a moral compass if for no other reason than that their customers, suppliers, and other partners expect them to protect their interests.

We’re all aware of the reputational damage dealt by, say, poor cybersecurity practices that lead to a massive data leak. Businesses should therefore approach AI with ethics at the forefront of their strategy. We don’t want to see organisations hamstrung by fear of what could go wrong; rather, they should weigh the ethical implications of the applications and services they create.

Every business needs to understand where it faces potential risks from AI, and having a code of ethics is an essential foundation to ensure that this technology brings as much good and as little evil as possible.

Jonathan Ebsworth, Partner in Disruptive Technologies, Infosys Consulting
