USAID – the US government’s development agency – tweeted recently that we were ‘entering the second decade of digital development.’ Not so: we’re much further down the road than that.
When I was thinking, some years ago, of writing a book about the early years of ICT4D – or ‘digital development’ – a key figure from those early days directed me way back, towards United Nations studies in the 1960s and 1970s.
USAID’s comment sent me back to those reports. Two things are clear from them:
- how much has changed in terms of the technology;
- how little’s changed in public discourse.
A start(l)ing quote
My quotations in this blog come from one old UN document: The Application of Computer Technology for Development. (It’s pre-internet, I fear, so I can offer you no URL.)
Here’s how it starts:
Technology has an essential role to play in reducing the disparities that exist between the developing and the developed countries. Computers are especially important in this context, because so many computer applications have a direct bearing on some of the main facets of the development process and reflect certain aspects of the technology that has facilitated the growth of the economically advanced countries.
And later: Technology is not the only key to reducing the disparities between the developing and the developed countries, but technological progress is essential.
Substitute ‘digital technologies’ or ‘the internet’ for computers there, and nothing’s really changed. That’s still the root of ‘digital development’ and ICT4D. But the date of that report is 1971.
Let’s take the changing context first, and then the similarities.
Changing technology
Of course, computers then weren’t everywhere as they are now. Of the 51 governments that responded to the UN’s call for input, three said that there were no computers whatsoever in their countries, while five had only one. Canada had just under two thousand; India a few more than a hundred.
And what the word ‘computer’ means has changed.
- We’ve gone from mainframes through mini-computers to PCs, laptops, tablets, mobile phones, smart speakers. Quantum computing’s now on the horizon.
- Computing then was centralised, not distributed – there were no personal computers. The digital belonged to governments and the biggest businesses, with no mass market.
- Computing capabilities were the smallest fraction of what’s possible today. Their mainframe’s in your wristwatch.
- There were no graphical user interfaces, like Windows and the Web, to make them easier to use.
- There was no internet, no cloud, to link them seamlessly together.
- There were no apps, there was no Internet of Things, though (delightfully) the report does say – before rejecting the idea – that ‘Computer programs are often said to exhibit artificial intelligence.’
The differences are staggering: what computing can do now belongs to an entirely different world. So you’d expect public discourse and policy priorities to differ too. And yet they don’t, at least not as much as you’d expect. Let me give some examples from the same report. Direct quotes, as above, are in italics.
On digital policy development:
Decision-makers are usually overworked. They must … be encouraged to learn what can be done with a computing system…. Otherwise they will be subject to the pressures of local computer salesmen who paint optimistic pictures of what computers can do and fail to tell of the pitfalls and deficiencies.
… a blind faith in the miraculous powers of science and technology … can lead people to expect too much too soon. A belief that quick results are to be obtained by installing computers and pushing buttons can only delay the benefits.
Familiar? Too many policymakers remain in thrall to what the digital can do in ideal circumstances without thinking through whether or how to make those circumstances ideal. They buy a false prospectus, and then have to deal with project over-runs and unexpected consequences.
On public attitudes and fears:
In almost any country contradictory attitudes towards computers may be found. There are reports expressing admiration of the difficult and useful things computers can do: schedule traffic, play chess or help to locate a rare blood type when it is desperately needed. … And yet again there are articles pointing out how it is possible to maintain large files of personal data with the aid of computers, thereby contributing to an encroachment on personal privacy and a loss of human rights.
Again. The ambivalence present in the minds of many about computers comes from a growing anxiety about harmful side-effects of technology in general and in particular about the possible consequences of the widespread use of computers …
a. The fear that man [sic] is being rendered obsolete by an intelligent, infallible device …;
b. A fear that computers are propelling us towards a society run by technocrats, where important decisions are made every day by persons of narrow viewpoint … and by unfeeling robots;
c. A fear that computers, especially through the data banks which they make possible, will bring about an irretrievable loss of individual privacy.
Could have been written yesterday.
On future jobs:
The UN thought back then that fears about job losses were probably exaggerated but had to be addressed. There’d be disruption, not just among managers but also at lower skill levels. (Today, as next week’s blog will describe, the direction of disruption’s been reversed.) There’d be a need for new skills. Everyone should be educated in computer skills, because they’d be needed – and not just in the technology itself.
And today governments are still rummaging around to find the right employment and skill strategies.
On privacy and human rights:
Few countries have a legal concept of privacy. No country as yet has laws for regulating data banks as they relate to privacy … The problem is complicated because there are circumstances where highly confidential information about an individual is legitimately needed by the police or for reasons of national security.
That sounds familiar too, though there’s now more law in place in more countries. The report went on to consider ‘what types of regulations might be adopted for data banks’ and came up with these:
- what kind of data may be gathered and by whom;
- how long data should be kept;
- how correctness is verified;
- who should have access to the data and how security should be maintained.
That’s exactly the agenda now. These questions are both technical and concerned with rights, it added, concluding that ‘it is a matter of concern that computers should not be used as an instrument to limit … rights.’
Everything changes, everything stays the same
Or, in the original French, plus ça change, plus c’est la même chose. But the point here isn’t trivial.
If the international community recognised these opportunities and problems fifty years ago, when digital technology was in its infancy, why are we asking the same questions, in much the same ways, today?
Why are we no nearer answering them after so many international conferences and dialogues? Why is there still no consensus about appropriate ways forward?
Are the issues that the UN raised in 1971 intractable? Or is it partly also the result of how we’ve chosen to address them:
- perhaps that we’ve prioritised innovation in technology over shaping its impact on human society;
- perhaps that we’ve prioritised the good we hope for over the harms we fear;
- perhaps that we’ve done too little to enable effective international discourse, because its weakness has suited some political and business interests.
Will we be asking the same questions about ‘digital development’ in another fifty years? Or will we wish then that we’d come closer to answering them sooner?
Next week: comments on an important new book on the possibility that we’re heading for ‘a world without work’.