May 19, 2020

The ethics and challenges of AI

Technology
AI
Jonathan Ebsworth
5 min

As a society, we’re often slow to understand the impact that new technology will have on our lives. Sometimes these unexpected consequences are positive, such as the way that washing machines liberated women from hours of daily drudgery, enabling them to take an active part in public life. Often, they are negative: just think of the damage done by adding lead to petrol, or the heart-breaking consequences of prescribing thalidomide as a treatment for morning sickness.

The problem lies in the fact that the full impact of new technology takes time to emerge, and the jury is still out on the long-term effects of social media – or, for that matter, artificial intelligence.

Ethical questions unanswered

We live in a time of rapid and profound technological change that affects every aspect of our lives, so why pick on AI as deserving particular scrutiny? Ask most technologists and they will tell you that AI, of all the technologies currently being developed, will likely bring the biggest changes to our world in the next few decades. It will revolutionise our jobs, the services we use, and even the way that we think; it could also fundamentally alter humanity’s relationship with the machines it creates.

As things stand, we are rushing heedlessly into the future with a blithe disregard for the unsolved ethical questions of AI. That’s not true of everyone: Microsoft is aware of the racism problems experienced by its self-teaching Tay and Zo chatbots, while autonomous vehicle developers are grappling with the Trolley Problem – an ethical question that will ultimately determine who lives and who dies in a road accident.

While not every AI application will involve life-or-death decisions, a failure to examine and answer ethical questions will lead to damaging consequences for businesses or other organisations who deploy AI-based technologies.


Peculiar challenges of AI

If you think that this is scaremongering, consider last year’s story about the Correctional Offender Management Profiling for Alternative Sanctions (Compas), a machine learning tool used in the US. The tool was found to mistakenly label black defendants as likely to reoffend, flagging them as recidivists at twice the rate of white defendants.

Or take the issue of autonomous weapons systems. We already have pilotless (in fact, remotely piloted) aircraft, but should we leave the decision to launch a Hellfire missile to an algorithm?

The list of problematic questions is almost infinite: we’ve already looked at the issue of driverless cars, but what about AI applications dealing with sensitive data? The Cambridge Analytica scandal has shown what happens when organisations take a cavalier approach to personal information; without an ethical foundation, future AI applications could wreak the same damage on an unimaginable scale.

We can scoff at Terminator-style scenarios, where AI gains self-awareness and turns against humanity, but the fact remains that machines are only as ethical as they are programmed to be. How, then, can we create an ethical framework for AI – and whose job is it to do so?

A delicate balance

There will be some who say that the answer to these difficult questions is to create a raft of legislation setting out the parameters for ethical AI, but I suggest that this would be an historic mistake.

The problems with this approach are legion: legislation is often heavy-handed, and a government-mandated set of rules would stifle technological advances in an area where the UK enjoys an enviable lead over other nations. Moreover, politicians (no matter how well-briefed) are not the best people to decide complex, fluid questions about technologies that they do not fully understand.

That’s not to say that politicians can’t play an important role in shaping our future relationship with artificial intelligence. One example of the positive effect that parliament can have is the publication of the House of Lords AI Select Committee’s report in April. This document proposed a cross-sector code of ethics for AI based on five principles. These represent sensible proposals that would provide an ethical foundation for future AI projects, including the principles that artificial intelligence should not be used to diminish the data rights or privacy of individuals or groups, and that the autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

That’s a good start, but we can’t leave it to politicians to shape the future of artificial intelligence. Instead, we must show businesses that making AI ethical is a matter of enlightened self-interest. It’s an argument that should resonate with any free-marketer. Businesses need a moral compass if for no other reason than that their customers, suppliers, and other partners expect them to protect their interests.

We’re all aware of the reputational damage dealt by, say, poor cybersecurity practices that lead to a massive data leak. Businesses should be approaching AI with ethics at the forefront of their strategy. We don’t want to see organisations hamstrung by fear of what could go wrong; rather, we want them to consider the ethical implications of the applications and services they create.

Every business needs to understand where it faces potential risks from AI, and having a code of ethics is an essential foundation to ensure that this technology brings as much good and as little evil as possible.

Jonathan Ebsworth, Partner in Disruptive Technologies, Infosys Consulting


Jun 18, 2021

Intelliwave SiteSense boosts APTIM material tracking

APTIM
Intelliwave
3 min
Intelliwave Technologies outlines how it provides data and visibility benefits for APTIM

“We’ve been engaged with the APTIM team since early 2019 providing SiteSense, our mobile construction SaaS solution, for their maintenance and construction projects, allowing them to track materials and equipment, and manage inventory.

We have been working with the APTIM team to standardize material tracking processes and procedures, ultimately with the goal of reducing the amount of time spent looking for materials. Industry studies show that better management of materials can lead to a 16% increase in craft labour productivity.

Everyone knows construction is one of the oldest industries, but comparatively it’s one of the least tech-driven. About 95% of engineering and construction data captured goes unused, 13% of working hours are spent looking for data, and around 30% of companies have applications that don’t integrate.

With APTIM, we’re looking at early risk detection through predictive analysis and forecasting of material constraints, integrating with the ecosystem of software platforms, and reporting on real-time data with a ‘field-first’ focus – through initiatives like the Digital Foreman. The APTIM team has seen great wins in the field, utilising barcode technology to check in thousands of material items quickly compared to manual methods.

There are three key areas when it comes to successful materials management in the software sector – culture, technology, and vendor engagement.

Given the state of world affairs, access to data needs to be off-site via the cloud to support remote working, providing a ‘single source of truth’ accessible by many parties. The tech sector is always growing, so companies need faster and more reliable access to this cloud data. Digital supply chain initiatives engage vendors a lot earlier in the process to drive collaboration with their clients, which gives more assurance as there is more emphasis on automating data capture.

It’s been a challenging period with the pandemic, particularly for the supply chain. Look what happened in the Suez Canal – things can suddenly impact material costs and availability, and you really have to be more efficient to survive and succeed. Virtual system access can solve some issues, and you need to look at data access more broadly.

Solving problems comes down to better visibility, and proactively solving issues with vendors and enabling construction teams to execute their work. The biggest cause of delays is not being able to provide teams with what they need.

On average, 2% of materials are lost or re-ordered, a figure which only accounts for the material cost. What is not captured is the duplicated effort of procurement, vendor and shipping costs, all of which have an environmental impact.

As things start to stabilise, APTIM continues to utilize SiteSense to boost efficiencies and solve productivity issues proactively. Integrating with 3D/4D modelling is just the tip of the iceberg of what we can do. Access to data can help you firm up bids to win work and make better cost estimates, and AI and ML are the next phase, providing an ecosystem of tools.

A key focus for Intelliwave and APTIM is to increase the availability of data, whether it’s creating a data warehouse for visualisations or increasing integrations to provide additional value. We want to move to more of an enterprise usage phase – up to now it’s been project-based – so more people can access data in real time.”

