May 19, 2020

Building an ethical policy for AI

Jonathan Ebsworth

It’s easy to be ethical. We are all faced with obvious choices between right and wrong, and most of us try to choose the right path – don’t steal, don’t lie, don’t do harm. It’s only rarely that a person or organization decides to do something that’s patently wrong.

Sometimes, however, doing the right thing is difficult; not least when it concerns a new technology whose impact on our lives remains uncertain. Artificial intelligence (AI) is one such technology. Because it’s still very much in its infancy, full of promise but with its ethical implications largely unmapped, our enthusiasm for its potential can blind us to its harmful side-effects.

Take Google’s recent demonstration of its Duplex technology. Let’s be clear: passing the Turing Test is a momentous milestone in the history of AI, and Google’s engineers should be proud of their achievement – but it’s a problematic one. What’s worrying is that Duplex, proof-of-concept though it is, is engineered with deception at its heart.

At no point in the demonstration does the AI inform its human interlocutor that it’s a bot, and this runs directly contrary to the House of Lords’ recently published AI Code, the fifth tenet of which states that “the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.”

Duplex’s duplicity is only one example of the ethical minefield into which we are blindly stumbling. Every business is rushing to develop new AI-powered applications which will undoubtedly change our lives. Whether they are for the better or for the worse will depend on whether businesses tackle the tricky ethical questions inherent in such a transformative technology.

Businesses may think that such concerns are a matter for regulators and legislators, but this would be an historic mistake. Consumers care little about intentions or whether a company has acted legally; they care about outcomes. If an unethical AI application causes them harm, it will damage its creator just as much as if the harm had been intended.

Ethics clearly makes good business sense, but committing to an ethical AI policy is the easy bit. Much harder at this nascent stage is actually to create a meaningful ethical policy that protects users without hindering innovation. How, then, can we cut the Gordian Knot of ethical AI?

The House of Lords’ AI Select Committee’s report, mentioned above, is as good a place as any to start, with its commitment to AI that is developed for the common good of everyone, operates on the principles of intelligibility and fairness, protects privacy, and does not harm humanity.

There are other organizations working on this topic. Perhaps the most advanced is the work done by the Institute of Electrical and Electronics Engineers (IEEE) with its Ethically Aligned Design framework, which aims to define values and ethical principles for intelligent and autonomous systems. The IEEE is sponsoring the creation of standards (such as the IEEE P7000™ series) and future certification programmes. Other ideas and frameworks include the Department for Digital, Culture, Media and Sport’s (DCMS) Centre for Data Ethics and Innovation, and the Ada Lovelace Institute, which is examining the ethical and social issues arising from new technologies such as AI.

Clearly there is no shortage of activity, but these initiatives are still inchoate; businesses developing AI today cannot afford to wait for a consensus to form around ethics. They need to take action to create a policy that prevents them from creating applications that are open to abuse or which, unwittingly, could cause us harm. So, what should we consider when creating an ethical policy for AI?

1) Goals

The intention of such a policy should be to consider holistically the impact of the solutions we create and to confirm that they are consistent with the values of our business. Above all, we should be concerned with creating lasting value both for our shareholders and for society – not some ephemeral, selfish aim like Enron’s efforts to boost its share price at all costs.

2) Approach

To ensure we succeed, we need teams that are fully aware of, and share, our business values; that are focused on lasting value rather than ‘big ideas’; and that are supported and rewarded through our policies and processes. Above all, they must appreciate the significance and positive value of sound business ethics so that they can ensure the lasting value of the applications they create.

This requires strong leadership with the ability to manage ethical risk at three levels of the development process. At the individual or creative group level, we need people to consider the ethical implications of their ideas; at the business function level, we should formalise this assessment with actions appropriate to the identified level of risk; finally, operational management should be highly engaged throughout the process to map and monitor these risks against business goals. Management’s attitude to these issues will go a long way towards setting the tone for ethics in our business.

3) Commitment

There will be winners and losers in the race to AI, but those who succeed will be those that develop applications that are deemed ethically acceptable or, better, beneficial to society. We can choose to rely on luck – or we can commit wholeheartedly to pursuing an ethical path.

4) Outcome

Done right, ethics can strengthen our brand, reinforce our values, provide competitive advantage and create enhanced value. Done carelessly, we are putting our hard-won reputation on the line – along with the safety and security of every stakeholder. That alone should make us think very carefully about our approach to AI initiatives.

Artificial intelligence itself is amoral; it is how we use our creation that will determine its effects on humanity. Let us approach the AI-powered future in a spirit of hope tempered with caution. If we are open about our commitment to ethics, then we can realize our greatest hopes for AI – and reap the benefits of being seen to do the right thing.

Jonathan Ebsworth, Partner, Infosys Consulting


Jun 12, 2021

How changing your company's software code can prevent bias

Removing biased terminology from software can help organisations create a more inclusive culture, argues Lisa Roberts, Senior Director of HR at Deltek

Two-thirds of tech professionals believe organizations aren’t doing enough to address racial inequality. After all, many companies will just hire a DEI consultant, hold a few training sessions and call it a day.

Wanting to take a unique yet impactful approach to DEI, Deltek, the leading global provider of software and solutions for project-based businesses, examined its software code and removed all exclusionary terminology. By removing terms such as ‘master’ and ‘blacklist’ from its code, Deltek is working to ensure that diversity and inclusion are woven into every aspect of the organization.

Business Chief North America talks to Lisa Roberts, Senior Director of HR and Leader of Diversity & Inclusion at Deltek to find out more.

Why should businesses today care about removing company bias within their software code?  

We know that words can have a profound impact on people and leave a lasting impression. Many of the words that have been used in a technology environment were created many years ago, and today those words can be harmful to our customers and employees. Businesses should use words that will leave a positive impact and help create a more inclusive culture in their organization.

What impact can exclusive terms have on employees? 

Exclusive terms can have a significant impact on employees. It starts with the words we use in our job postings to describe the responsibilities of the position and of course, we also see this in our software code and other areas of the business. Exclusive terminology can be hurtful, and even make employees feel unwelcome. That can impact a person’s desire to join the team, stay at a company, or ultimately decide to leave. All of these critical actions impact the organization’s bottom line.

Please explain how Deltek has removed biased terminology from its software code

Deltek’s engineering team has removed biased terminology from our products, as well as from our documentation. The terms we focused on first, which were easy to identify, include blacklist, whitelist, and master/slave relationships in data architecture. We have also made some progress in removing gendered language, such as changing he and she to they in some documentation, as well as heteronormative language. We see this most commonly in pick lists that ask to identify someone as your husband or wife. The work is not done, but we are proud of how far we’ve come with this exercise!
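To make that concrete, here is a minimal before-and-after sketch of the kind of rename Lisa describes, applied to a hypothetical data-replication configuration. The class and field names are illustrative assumptions, not Deltek’s actual code.

```python
from dataclasses import dataclass, field

# Before: exclusionary terms baked into the identifiers (shown as comments).
#   master_host: str
#   slave_hosts: list[str]
#   blacklist: set[str]   # blocked IP addresses
#   whitelist: set[str]   # allowed IP addresses

@dataclass
class ReplicationConfig:
    # After: the same behaviour, expressed with neutral, more descriptive names.
    primary_host: str
    replica_hosts: list[str] = field(default_factory=list)
    denylist: set[str] = field(default_factory=set)   # blocked IP addresses
    allowlist: set[str] = field(default_factory=set)  # allowed IP addresses
```

In a shipped product, a rename like this would typically be rolled out alongside deprecated aliases so that existing customer integrations keep working.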

What steps is Deltek taking to ensure biased terminology doesn’t end up in its code in the future?

What we are doing at Deltek, and what other organizations can do, is to put accountability on employees to recognize when this is happening – if you see something, say something! We also listen to feedback our customers give us and have heard their feedback on this topic. Those are both very reactive things of course, but we are also proactive. We have created guidance that identifies words that are more inclusive and also just good practice for communicating in a way that includes and respects others.
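One way to make that proactive guidance enforceable is an automated terminology check in the build pipeline, so flagged terms are caught before they are merged. The script below is a minimal sketch under that assumption; the word list and behaviour are illustrative, not Deltek’s actual tooling.

```python
#!/usr/bin/env python3
"""Sketch of a CI terminology check: scan source files for flagged
terms and suggest inclusive alternatives (hypothetical word list)."""
import pathlib
import re
import sys

# Flagged terms mapped to suggested replacements (illustrative only).
FLAGGED_TERMS = {
    "blacklist": "denylist",
    "whitelist": "allowlist",
    "master": "primary",
    "slave": "replica",
}
PATTERN = re.compile(r"\b(" + "|".join(FLAGGED_TERMS) + r")\b", re.IGNORECASE)

def check_file(path: pathlib.Path) -> list[str]:
    """Return one warning line per flagged term found in the file."""
    warnings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for match in PATTERN.finditer(line):
            term = match.group(1).lower()
            warnings.append(
                f"{path}:{lineno}: found '{term}', consider '{FLAGGED_TERMS[term]}'"
            )
    return warnings

if __name__ == "__main__":
    # Usage: python check_terms.py <source-directory>
    root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    found = [w for p in root.rglob("*.py") for w in check_file(p)]
    print("\n".join(found))
    sys.exit(1 if found else 0)  # a non-zero exit fails the CI step
```

A real check would also need an exception list for legitimate uses (a quoted third-party API name, for instance), which is one reason the ‘if you see something, say something’ culture Lisa describes still matters.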

What advice would you give to other HR leaders who are looking to enhance DEI efforts within company technology? 

My simple advice is to start with what makes sense to your organization and culture. Doing nothing is worse than doing something. And one of the best places to start is by acknowledging this is not just an HR initiative. Every employee owns the success of D&I efforts, and employees want to help the organization be better. For example, removing biased terminology was an action initiated by our Engineering and Product Strategy teams at Deltek, not HR. You can solicit the voices of employees by asking for feedback in engagement surveys, focus groups, and town halls. We hear great recommendations from employees and take those opportunities to improve.
