In this extremely topical blog, Martin Upchurch, Professor of International Employment Relations at Middlesex University, discusses digitalisation and robotics in the new workplace.
Earlier this month MEPs in the European Parliament debated a call for comprehensive rules governing how humans will interact with artificial intelligence (AI) and robots. The fear expressed by the politicians is that advances in AI could elevate robots to the status of an electronic ‘person’ with rights and privileges in law. It is surprising that such a prospect has taken so long to enter public discourse: ever since the birth of the computer seventy years ago, commentators have been writing on the prospect of technological singularity – the point at which intelligence would become ‘non-biological’ and creativity would be unbounded by human limitations. Machines would dominate production through processes of self-improvement, re-writing their own software to outstrip the functional capabilities of the human brain.
The scenario of singularity signals a complete collapse of human employment. Researchers at Oxford University have already calculated that almost half of all jobs in the US are at risk from new forms of automation in the coming decades, while the journalist Paul Mason has written a bestseller on the nirvana of a new ‘post-capitalist’ society. While most routine jobs would disappear, the destruction would also spill over into professional work. Doctors may be replaced by smartphone apps that diagnose a patient’s symptoms and robots that perform operations. The collection of big data and its processing by algorithms (machine learning) may also enable correlations of behaviour, genetic disposition, or symptoms to predict a person’s health. Even IT specialists would not be safe, as much of the ‘knowledge’ which enables them to hold down employment may be transferred to a central cloud computer accessible by all from any location.
The restraints of limited mobility and flexibility of robotic ‘arms’ have been eased by new technologies which enable a humanoid robot to grip and to turn with less pre-programming. Advances in algorithmic programming utilise the principles of neural networks that enable AI to discriminate, to ‘remember’ past decisions and to make finer judgements. In the early days of development such ‘thinking’ was measured by the degree to which the robot or computer passed the ‘Turing Test’ (after the celebrated British computer scientist Alan Turing). The test is based on the proposition that a machine would be able to think if it could hold a conversation that was indistinguishable from one with a human being.
Image recognition technology has improved, as has the conversion of text to speech (and vice versa). Robots can now be programmed remotely from the cloud, an advance comparable to the launch of the first ‘free-standing’ Programma 101 personal desktop computer by the Italian firm Olivetti in 1965. Combined with the falling cost of robots in the product market, it is not surprising that their numbers are on the rise. The International Federation of Robotics estimates that there were 1.5 million robots in operation worldwide in 2014. China now absorbs an increasing proportion of the total, spurred by rising labour costs and shortages, with the ‘payback’ period for investment falling to 1.5 years. But we should not get carried away with the rise of the robots: while their numbers may well rise to over 2 million, this compares with a worldwide workforce of 3 billion. Even in the country with the highest density of robots (South Korea) there are still fewer than 500 for every 10,000 workers.
If we adopt a socio-technical approach to examining AI and robots we may see that claims of total singularity may well prove to be a false dawn. For more complex tasks, robots still need to be minded by humans lest they break down or miscalculate precision movements. Efforts by a leading robotics manufacturer to create an affordable ‘plug and play’ robot capable of mimicking human movement for widespread use in industry also appear to have stalled.
A simple way of understanding the problem is to imagine a robot attempting to catch a tennis ball in flight. Not only must the speed and angle of flight be calculated in a split second; the weight of the tennis ball (which a human would remember from previous experience) also determines how hard the robot needs to grip the ball once caught, to stop it bouncing back out of the hand. Such a seemingly simple task for a human is a logistical nightmare for a robot. Mercedes-Benz, a lead player in developing autonomous cars, has now begun replacing its robots with humans in its factories due to this very lack of flexibility in the robotic machine.
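The catch can be put in back-of-envelope terms. The sketch below (illustrative numbers only, not a robot controller; all figures and function names are the author's invention for this example) shows the two calculations the robot must make: where the ball will be, and how hard to squeeze it.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def predict_flight(speed, angle_deg, release_height):
    """Predict how far the ball travels and how long it flies before
    returning to hand height (0 m), ignoring air resistance."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    # Solve release_height + vy*t - 0.5*G*t^2 = 0 for the positive root.
    t = (vy + math.sqrt(vy**2 + 2 * G * release_height)) / G
    return vx * t, t  # horizontal distance (m), time of flight (s)

def grip_force(mass, speed_at_catch, stop_time):
    """Average force (N) needed to bring the ball to rest in stop_time seconds:
    too little and the ball bounces out, too much and it is crushed."""
    return mass * speed_at_catch / stop_time

distance, flight_time = predict_flight(speed=10.0, angle_deg=30.0, release_height=1.5)
force = grip_force(mass=0.057, speed_at_catch=10.0, stop_time=0.02)  # 57 g tennis ball
```

The arithmetic itself is trivial; the robot's difficulty lies in sensing the inputs and timing the grip within the roughly one second the ball is in the air.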
Moves are now afoot to develop ‘cobots’ which operate side-by-side with humans to enable flexibility and creativity to flourish. While algorithms might replicate past human behaviour in robotic form, they are a long way from ‘consciousness’ and the ability to ‘think’ at the level of a human. Returning to the ‘Turing Test’, the ability of robots to ‘think’ as humans do remains only a remote possibility. Turing also identified the ‘halting problem’: no general procedure can decide in advance whether an arbitrary program will ever finish computing, so a computer using AI may never ‘know’ when it is ‘right’, and may simply continue to compute. The algorithms such machines feed from remain subject to human input in programming and coding, and so repeat the mistakes and false assumptions that humans made in the past but may consciously check against in the present. So, for example, the algorithm-fed robot Beauty.AI chose only women of light skin when asked to judge an international ‘beauty contest’, suggesting an unconscious (or even conscious) racist agenda among the humans creating the algorithm.
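The mechanism behind such inherited bias is simple to demonstrate. In the toy sketch below (the records and labels are entirely invented for illustration, not drawn from the Beauty.AI case), a naive ‘learn from history’ rule trained on prejudiced past verdicts faithfully reproduces the prejudice.

```python
from collections import Counter

# Hypothetical historical judging records: (feature, past_human_verdict).
# The labels encode past human prejudice, not any real dataset.
history = ([("light", "winner")] * 40 + [("light", "loser")] * 10 +
           [("dark", "winner")] * 5 + [("dark", "loser")] * 45)

def majority_verdict(feature):
    """Predict by copying the most common past verdict for this feature --
    the simplest possible 'learn from history' rule."""
    verdicts = Counter(v for f, v in history if f == feature)
    return verdicts.most_common(1)[0][0]

# The rule has learned nothing but the judges' historical pattern,
# so it awards 'winner' and 'loser' along exactly the old biased lines.
```

No malice is needed in the code itself: the bias enters entirely through the human-labelled training data.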
A further obstacle we need to address is that of economics and the related political implications of choices made by employers. Computers are a relatively small proportion of capital stock, and investment in computers has been declining since the height of the ‘IT Revolution’ of the 1990s. The overall impact on productivity, growth and jobs appears less dramatic than might otherwise be assumed. Evidence published in 2015 by Graetz and Michaels, from a dataset of companies in 17 countries gathered between 1993 and 2007, suggests that while productivity increases with robotic innovation and some semi-skilled and lower-skilled jobs are abandoned, “there is some evidence of diminishing marginal returns to robot use – ‘congestion effects’ – so they are not a panacea for growth … this makes robots’ contribution to the aggregate economy roughly on a par with previous important technologies, such as the railroads in the nineteenth century and the US highways in the twentieth century.” Neither do robots do away with the contradictions within capitalist accumulation. As capital-bias and labour shedding take place, proportionately less new value is created (labour being the only source of new value) relative to the cost of invested capital. Added to which, as the economist Michael Roberts reminds us, worker resistance to the dystopia of permanent joblessness would surely ensure that the road to ‘full automation’, if it is ever constructed, would be a very rocky one.
Indeed, the ‘full automation’ and post-capitalist schools of thought assume an ever-increasing thirst for new digital technology and a limitless supply of the necessary hardware and software. Yet these assumptions also need to be questioned. Predictions of the coming of singularity have been based on extrapolations from Intel co-founder Gordon Moore’s ‘law’, by which the number of transistors that can be packed onto an integrated circuit doubles roughly every two years, both lowering the cost and vastly increasing computing power. However, chip production depends on a finite supply of rare earth metals, and Moore himself has acknowledged that there is a physical limit to how many transistors you can squeeze into an integrated circuit. As the OECD reported in 2016, “…the introduction of new technologies is a slow process due to economic, legal and societal hurdles, so that technological substitution often does not take place as expected”. For example, the development of autonomous or driverless cars is subject to regulatory concerns over insurance liability, which will act to slow down or even impede development.
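The extrapolation underlying such predictions is nothing more than compound doubling, as this minimal sketch shows (the 1971 baseline figure for the Intel 4004 is illustrative; the point is that the naive curve grows without any physical limit).

```python
def transistor_count(base_count, base_year, year, doubling_period=2.0):
    """Naive Moore's-law extrapolation: the count doubles every
    doubling_period years, with no physical or economic ceiling."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# Illustrative baseline: roughly 2,300 transistors on a chip in 1971.
# Fifty years of uninterrupted doubling multiplies that by 2**25,
# which is what singularity forecasts quietly assume can continue forever.
projection_2021 = transistor_count(2300, 1971, 2021)
```

Real-world limits – materials supply, heat, atomic scale – are exactly what this smooth exponential leaves out.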
A sober analysis of the economics of singularity has been undertaken by William Nordhaus at Yale University. Applying econometric methods to both the supply and demand sides for digital technologies and AI, he attempts to predict when singularity might occur. He argues that two ‘accelerationist’ mechanisms could develop, from accelerating supply or from accelerating demand, and then applies a series of time-linked tests to both hypothetical scenarios, focusing on key input variables such as wages, productivity growth, prices, intellectual property products and R&D. Five of his seven tests for the likelihood of singularity proved negative (including those for ‘accelerating productivity growth’ and ‘rising wage growth’), while the two that proved positive (including a ‘rising share of capital’) indicated that singularity, if it did occur, would be at least 100 years away. And, as argued above, a rising share of capital may not only depress productivity growth but also trigger a crisis of profitability in the longer term.
We might suspect that the coming of singularity may falter, be delayed, or never happen because of economic, social and political factors that stretch beyond the technology itself. Despite these limitations, the prospect raised in Irving John Good’s 1965 musings on an ‘ultraintelligent’ machine – one that would “surpass all the intellectual activities of any man however clever … [so that] the intelligence of man would be left far behind” – will no doubt continue to fascinate many. The dream of singularity would, however, be faced with a simultaneous collapse of the underlying dynamic of capitalism. The only surviving ‘human’ industrial sectors might be defence and space exploration, to guard against terrorist or hostile foreign cyber attack, and against attack on humans by the super-intelligent machine!