What does Siri's co-founder think about the future of AI?

By Mark Matthews
In this exclusive interview, Siri co-founder Tom Gruber says humanistic AI can transform perceptions of the technology from Big Brother to Big Mother

When Tom Gruber co-founded Siri, he brought AI technologies to the mobile market and helped position Apple as one of the most popular mobile brands in the world. In recognition of this impact, Tom has received several awards and accolades, including being listed among the Top Technology Speakers for global conferences. In this exclusive interview, learn about Tom’s principles of humanistic AI and how businesses can avoid the ethical issues associated with AI technologies.

What are the guiding principles of humanistic AI?

Humanistic AI is a philosophy of AI. You hear a lot about machine intelligence as the goal; oftentimes it’s ‘let's do whatever it takes to make the machine exhibit the intelligence to automate the thing we respect about humans - let's build machine versions of us.’ That's a perfectly good thing to do.

The other way of thinking about it is humanistic AI: why don't we make machines that make humans more intelligent? The difference actually matters. If you pursue the goal of machine intelligence for its own sake, particularly in business, what you end up with is machine intelligence that often competes with humans, for their jobs or for their attention.

Oftentimes, if you pursue the machine intelligence goal for its own sake, all things being equal in an economic situation like an advertising-based attention economy, you end up with AI being used against humans, competing with them for their attention. Or, in the case of employment, competing with them for jobs.

That’s a thing that can make money for some companies, but it's not an effective use of AI to impact humanity. On the other hand, humanistic AI’s inherent design goal is to augment humans to make humans more effective.

Now, when we talk about automating things that people do at work, we mean the things that are dangerous or tedious or that take up time people don't have. That border will shift over time: as the automation of intelligence gets better, more and more menial tasks can go away.

For example, in healthcare there are a lot of things that are done in tedious ways because that was the only way we could do them. Now AI can do a lot more of that, so medical scientists and life scientists can think about the theories of health and how we can improve our response to disease – and that's true across the board. It turns out that this idea of augmenting human intelligence versus competing with it has been around for a while.

How do you believe AI will shape our everyday lives?

So many good ways it can help us! AI will be augmenting us in all kinds of ways; it's already beginning to augment us. I mean, Siri was an attempt to augment us. When we built Siri, the only way you could use a mobile phone was to tap on that tiny little screen – and of course, that's not easy for everybody.

At the same time, all kinds of cool things were available on the web – services like travel and restaurant booking, the kinds of things you can now do with Siri. Well, those things were only possible if you had 10 fingers, a big screen and an internet connection, right? We felt we wanted to bring the bounty of all that to someone with a mobile phone, using just their voice. That's a kind of augmentation: it gives you the power of having the 10 fingers and the screen while you're carrying it around.

Now we see a lot more examples of how AI is augmenting people in overcoming disabilities. For example, one of the companies I advise helps people who can't speak because of neurological conditions, like Stephen Hawking. The AI works out what they're trying to say by reading their brain waves and translating them into speech – that wasn't possible before AI came along to help make sense of that data.

The next thing is essentially to nurture. Nurture in the sense of a mother nurturing her offspring. You know, AI today is more like a Big Brother, oftentimes being used against people, but I like to think of AI as Big Mother, like a mother bear protecting her cubs... There's a lot of dangerous stuff out there, but what does a mother do? A mother tries to use her skills to make you a better person: healthier, with better self-care, better mental healthcare, better social interaction with peers and so on. That's what AI can do in the future.

I'm not just making this up. There're many real companies and projects working on those goals, starting with simple things like, wearables, watches and rings. They're starting to give us feedback about how well we sleep, how well we focus.

The final thing that AI is going to do is transform society. It's already beginning to transform it in negative ways, but I think we can turn that around. It's going to transform societies because of a couple of things.

One, AI can overcome differences between people that are unimportant. As they say on the internet, you can be any colour, any gender that you want to be. Even things like your cognitive disabilities, whether it's a spectrum emotional thing or it's just IQ, these differences can be shored up when AI is mediating for you. 

There’s a more subtle and even more powerful way. Right now, we as individuals haven't really realised our potential as a collective. We don't really think well together today. I mean, the only techniques we had for that were the media and politics… those traditional ways in which we would collectively think have been disrupted in a serious way. The one piece of collective intelligence that survives is science. 

Ironically enough, the reason AI advances so fast is its use of the scientific method. The point is that there are systems of thinking together that do work, like science, and ones that don't, like traditional politics. AI is going to play a big role in mediating many of these collaborations and in solving some of the problems we need to solve, like climate change, which depend on solutions at mass scale.

Are there ethical risks to AI and big data, and if so, how should businesses combat them?

There are huge ethical issues in AI. AI is probably the most powerful technology that's been invented this century; the CEO of Google calls it the most important invention since fire, and Google has invested more in AI than any other company in the world. It's very powerful, and all powerful technologies have ethical consequences.

In the case of AI today, there are people worried about the future – could it be like the Terminator or Skynet? That's a real concern, but it turns out that we have much more pressing real-life problems happening right now because of the power of AI. The most glaring one is the fact that AI is the engine of optimisation behind the big social media platforms like Facebook, YouTube and Snapchat. They've optimised for what's called growth hacking, which is about how long you can keep people addicted and online. They use the big data they gather from their users to predict what will keep them online and addicted.

So basically, humans don't have a chance against that kind of technology. It's not an issue of free speech or economics. This is a technology that is having an impact at human scale, political scale, geopolitical scale – lives are being lost because of misinformation. This is serious business in the real world.

How can businesses address this? Well, especially in technology, ethics is not a simple matter of ‘do the right thing’ or ‘have morals’. The interesting part is: how do you embed values in the very technology that you employ? An example of not doing that is making the AI care only about winning an adversarial game against humans for their attention; that's going to produce negative human impact.

We want to make sure the business is making money, and we want to make sure that human society is benefiting as well. You can literally engineer those two bottom lines into your equations.
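To make that ‘two bottom lines’ idea concrete, here is a minimal sketch of how a feed-ranking system might blend a business metric with a human one. Everything in it (the signal names, the weights, the example items) is an illustrative assumption for this article, not something Gruber or any real platform has described.

```python
# Hypothetical sketch of a "two bottom lines" ranking score:
# blend predicted engagement (the business metric) with an
# estimated wellbeing impact (the human metric) in one objective.
from dataclasses import dataclass


@dataclass
class ContentItem:
    item_id: str
    predicted_engagement: float  # e.g. modelled watch time, 0.0 to 1.0
    predicted_wellbeing: float   # e.g. modelled benefit to the user, -1.0 to 1.0


def ranking_score(item: ContentItem,
                  engagement_weight: float = 0.6,
                  wellbeing_weight: float = 0.4) -> float:
    """Combine the business objective and the human objective into one score."""
    return (engagement_weight * item.predicted_engagement
            + wellbeing_weight * item.predicted_wellbeing)


def rank_feed(items: list[ContentItem]) -> list[ContentItem]:
    # Items that engage people *and* are judged good for them rise to the top;
    # highly engaging but harmful items are pushed down by the wellbeing term.
    return sorted(items, key=ranking_score, reverse=True)


# Example usage with made-up values
feed = rank_feed([
    ContentItem("outrage-bait", predicted_engagement=0.9, predicted_wellbeing=-0.8),
    ContentItem("useful-tutorial", predicted_engagement=0.6, predicted_wellbeing=0.7),
])
print([item.item_id for item in feed])  # ['useful-tutorial', 'outrage-bait']
```

The point of the sketch is simply that the human-impact term sits inside the same score the business optimises, rather than being a separate policy bolted on afterwards.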

This exclusive interview with Tom Gruber was conducted by Mark Matthews.
