
Artificial intelligence (AI) is one of this year’s hottest topics. Governments and businesses are investing big-time (China’s said by some to be best placed). Wide-eyed consultancies spell out the wonders they foresee ahead (with predictions of big dollars). Androids and robots stalk the silver screen. 

I’ve been listening lately to AI experts talking about AI, and most of them rather want to calm things down. We’re a long, long way, they say, from Hollywood’s fantasias/dystopias. There’s been AI for decades, they say; it’s just an evolution of computing. There’s lots of it in place today and there’ll be more tomorrow in our daily lives. But we’re a long way yet, they say, from needing to worry about the ‘singularity’ – the point at which we’ve built machines that are so clever they can build new machines that are more clever still without involving us.

‘Narrow’ versus ‘general’ AI

The distinction that they draw is this. 

What we have, and what we’re getting, is a great deal of ‘narrow’ artificial intelligence: devices and applications that do specialist tasks much better than we ourselves could do them, mostly because they number-crunch in ways we can’t. Autonomous cars will be like this.

We’re a long way from what they call ‘general’ or ‘strong’ AI, in which devices and applications can do all the things we do (or most of them) better than ourselves, can join them up and then do things on their own, perhaps without our knowledge.

So should we worry?

In the medium and long term yes, say some tech gurus – among their number Elon Musk and Stephen Hawking. No, though, or at least not yet, say those who want to calm things down. The Terminator’s not just round the corner, so we don’t need to worry yet about regulating things as if he were. As the British government urged citizens in 1939, we should keep calm and carry on.

There’s a problem with this line of argument, though, and it’s one that’s going to be important if we wish to shape the digital society to suit us rather than letting it shape how we live in future. The thing is, we won’t be moving straight from A to Z – directly, in a single leap, from the narrow AI devices and applications that we have today to fully-fledged general AI at some time in the future. We’ll move through many stages (B to Y) along the way (and Z, of course, won’t be the final stop). 

What matters most is the transition

As we do so, we’ll adapt how we behave through many different stages of AI development. We’re likely to give more and more decision-making power to the algorithms from which AI will learn – to Siri, for example – as we’ve done to sites like Google and Facebook. As AI devices become more capable, we’ll cede more to them without being all that conscious that we’re doing so. It’s the transition to an AI world that will determine our relationship with it when we get to Z.

If we make rules for that relationship, we’ll make those too along the way. It’s those rules and regulations we make in the long transition between narrow and general AI that will decide what things look like when we reach Z.  

For example, with self-driving cars

I’ll illustrate what I mean here through self-driving cars. In spite of last week’s fatal incident in Arizona, it’s generally believed that self-driving cars will make accidents, and fatal accidents, less common. Once the technology’s perfected, and so long as it is hacker-free, virtual drivers will have quicker reactions and be less prone to error than real ones. (They also won’t drink-drive.) Motorways packed with self-driving cars should be safer than freeways full of tired, frustrated drivers.

The problem is, though, that we won’t be going, in one leap, from cars with human drivers everywhere to only cars with virtual drivers. For a generation, we’ll have both kinds on the road. The rules and regulations that we need will not be ones that suit roads that only have self-driving cars; they will be ones that work while we’ve a mix of real and virtual motoring. It’s those transition rules and regulations that will set the terms for designing and developing self-driving vehicles. Those terms will, with luck, be right for mixed environments, but maybe less than optimal in the longer term.

There’s a point about behaviour here, as well. On mixed roadways, real drivers are likely to start emulating virtual cars, but real drivers won’t have the same skills. Driverless cars will have faster reaction times, and so can drive closer together. If human drivers copy them, that raises the risk of a collision rather than reducing it.

I’m not opposed here to self-driving cars. We trust our lives these days to what are more-or-less self-driving planes and trains; why not to cars? All that investment’s unlikely to be wasted. Self-driving cars are coming. And the final outcome, when we get to Z, will almost certainly be faster, safer, more environmentally efficient. But getting there is going to be much messier than advocates would have us think, because we’ll only get there through a prolonged transition between B and Y.

That transition, too, of course, will be different in different places. Car companies are testing their prototypes in Nevada deserts and on LA freeways. Those are much easier environments for them to learn to navigate than the crowded streets of New York, Lagos or Mumbai, where people play by rather different rules of the road.

We’re slow to change our thinking …

Let me return, though, to the general principle I’m trying to get at here: that the way we handle the transition between past and future technologies is crucial to deciding outcomes.

The last 25 years have seen enormous changes in communications. I’ve watched telecommunications change from state monopolies on fixed voice networks to competitive markets for mobile data networks. I’ve seen the Internet develop from ftp for geeks to social media for billions; from innovation in university computer labs to innovation by global corporations. The changes in telecommunications were heavily regulated (to ensure competition); those in the Internet have largely not been.

The way we think about today’s communications markets is strongly influenced by how we dealt with them as they used to be. Most people in the sector have expected telecoms markets to be regulated, and still do, while assuming that the Internet should not be (or not much).

… but we should

It’s taken most of a decade, and a bunch of scandals, for us to realise that social networks are no longer innocent apps for chatting with our friends, as they may have been when first invented, but means for businesses to profit from our data. The power of today’s platforms and data corporations has grown because we’ve continued to think of them as little apps rather than big data. We thought we didn’t need to regulate them. Increasingly today, we think we do.  

When we have put regulations in place, though, we’ve often kept them longer than they’re fit for purpose. As a result, some issues that need regulating aren’t, while others that no longer need it are. For years, the development of telecoms was influenced by models of economic regulation – say, of spectrum management – or assumptions about markets – say, that they were or weren’t competitive – that had passed the time when they made sense.

The risk we face is that the same will happen with AI. When experts tell us that we needn’t worry yet about general AI, I think they’re wrong. Not because the singularity’s around the corner, but because the rules we set for narrow AI now will be the rules that influence AI’s development, including what happens when we do reach more sophisticated ‘general’ AI. The ways that we adopt and adapt to narrow AI devices and applications – the way we bring Amazon Echo and the like into our lives – will also determine how accepting we are if and when more powerful successors have greater power to determine what we do.

In summary

Decisions made today, in short, affect the longer term. We need to think about how we’d regulate sophisticated new devices before they’re in our hearts and homes.

Next week, inevitably, back to Facebook.

Image credit: "AI artwork" by David J used under CC BY 2.0 licence.  

David Souter writes a weekly column for APC, looking at different aspects of the information society, development and rights. David’s pieces take a fresh look at many of the issues that concern APC and its members, with the aim of provoking discussion and debate. Issues covered include internet governance and sustainable development, human rights and the environment, policy, practice and the use of ICTs by individuals and communities. More about David Souter.