The New Technological Revolution

Automation, Education and Work, part 1

There is a proliferation of articles in the media about how the accelerating technology of artificial intelligence and robots will change the world of work radically in the next 30 years. “47% of current jobs are under threat”. “Self-driving vehicles will put millions of truck drivers out of work”. “We are seeing a hollowing out of middle-income jobs.”

Some say this Luddite fear is unwarranted. “Just because we can’t imagine what new jobs will come along doesn’t mean the next generation won’t”. “We just need to educate everyone for the future knowledge economy.”

Others say “This time it’s different”. “We cannot compete with the robots”. “Maybe we need to tax the robots and have Universal Basic Income?”

This is the first of three parts of the talk ‘Automation, Education and Employment’, which will look beyond these articles at:

  1. how technological revolutions happen,
  2. what the new technology actually is, and
  3. how things may turn out differently from what we expect.

In this first part, I look at the first two items on that list.

Technological Revolutions of the Past

[Image: AEW_waves]

The Russian economist Nikolai Kondratiev identified technological waves with a period of about 50 years. The first wave starts with the industrial revolution around 1800 – the classic landmark being James Watt’s improvements to the steam engine around 1781 (examples, names and dates provide some reference points).

The third wave, around 1900, is sometimes called the ‘second industrial revolution’ with many developments of different underlying technologies:

  • the internal combustion engine (Otto 1876 and Diesel 1893),
  • plastics (Baekeland – 1907)
  • improvements to the electric motor (Sprague 1886)
  • the telephone (Bell 1876) and wireless communication (Marconi 1897)

Around 50 years earlier (1850), there were innovations such as steel-making (Bessemer 1856), fertilizer (von Liebig 1840) and rubber (Goodyear 1844).

And around 50 years later (1950), there was the invention of the transistor (Bardeen, Brattain and Shockley 1947) and the integrated circuit (Kilby and Noyce 1958), and of the theories of computing (Turing 1936) and information (Shannon 1948).

I note that each wave has launched a fundamentally new (truly revolutionary) technology as well as bringing in innovations building on previous revolutions. Each wave can be associated with an engineering department in a university:

  • 1800: mechanical engineering
  • 1850: chemical engineering
  • 1900: electrical engineering
  • 1950: information engineering, i.e. computer science

The early waves provided physical innovations: human, horse, wind and water power were replaced by the mechanical, chemical and electric power of machines. In contrast, the latest waves are providing cognitive innovations, reacting to events in the world with useful, appropriate and increasingly intelligent responses.

But there is another aspect to these waves: new technologies lead to new ways of doing things – new processes. When a new technology comes along, people typically apply it to their existing world as a better alternative to something more primitive. It typically takes a generation or two to shed the preconceived notion of what the technology is ‘for’: people who have grown up with the technology, for whom it is familiar, discover new ways of doing things with it.

Ford’s innovation of the production line (1908) was helped by the concurrent innovations of the internal combustion engine and electric motor, which made the layout of the factory less dependent on distributing a single common power source around the factory floor. But the production line method of manufacture could have been applied to the steam-engine-powered production of steam-powered cars – it is just that its effect – the mass ownership of cars – would have been far less pronounced. Ford’s factory also built upon previous process innovations such as Marc Brunel’s use of standard components in the Portsmouth Block Mills (1803) and Isambard Kingdom Brunel’s development of the engineering business as an institution (1843). So manufacture has been transformed, over a period of less than 250 years, from the ‘cottage industry’ ‘putting out’ system to automated factories (1948, as a consequence of the combined application of electric motors and computers).

The Current Technological Revolution

The Current Technological Revolution is a cognitive one, building on previous ones – the combination of electronic computers (Turing, Kilby and Noyce) and communications (Bell and Shannon), with the landmark development being the internet (reaching the mainstream public around 1995). The dotcom bubble burst after many companies tried to do things using the internet in the same way as they had been done without it. Since then, companies like Google and Facebook have come to dominance by applying entirely new business models. More on that later.

And finally, after promising so much for so long, artificial intelligence has come of age. For many years, there was progress along ‘traditional’ computing lines – skilled computer scientists writing programs. First it was to (unashamedly) imitate intelligence (Weizenbaum’s ‘ELIZA’ program of 1966, following on from Turing’s 1950 ‘imitation game’ thought experiment, now known as the Turing Test). Then there were landmark moments like IBM’s Deep Blue beating the world chess champion Garry Kasparov (1997) and IBM’s Watson beating the champions-of-champions in the U.S. general knowledge TV quiz show ‘Jeopardy!’ (2011). But in the very early years of computing, it was recognized that the enabling technology (transistors) could be put together based on how the brain is connected, rather than as a programmed computer. These ‘artificial neural nets’ promised much but delivered little for a long time. Then, in 2016, there was the landmark achievement of DeepMind’s AlphaGo beating the world Go champion, Lee Sedol. Go is more strategically complex than chess, and AlphaGo is not an incremental improvement upon Deep Blue: it is a revolutionary development.

To beat Kasparov, IBM had a team of programmers writing algorithms to search for good chess moves, helped by a chess grandmaster who built up its store of opening moves. Deep Blue’s intelligence was built upon the superior intelligence of its makers. Watson’s intelligence was built upon storing thousands upon thousands of Wikipedia pages. But AlphaGo was just put in an environment that defined the rules of the game, and it practised playing Go over and over again until it was pitted against Sedol – and won.

The traditional, programmed approach was intelligence of the artificial kind in the way that an artificial flower is artificial – it is an imitation. But I would argue that there is nothing artificial about the intelligence of ‘artificial neural networks’. They are only artificial in the sense that they are not natural (naturally, biologically evolved).

Breakout

Since I am not familiar with the game of Go, and I suspect that you probably aren’t either, I will illustrate the learning behaviour with the example of another, simpler game – the computer arcade game Breakout. Google DeepMind trained a machine to play this game only a year earlier, in 2015.

The network consists of many ‘artificial neurons’, each having a value of its own (e.g. a for neuron ‘A’) and connected to other artificial neurons by ‘weights’ (e.g. w_AB being the weight from neuron A to neuron B) – simple numbers that indicate the strength of the connections between neurons.

If neuron A’s inputs come from neurons B, C and D, the new value of a, written a′, is derived as follows:

x = b·w_BA + c·w_CA + d·w_DA

a′ = f(x)

where f(x) is some ‘activation function’ we do not need to go into here. Basically, there is just adding and multiplying going on (but with many, many iterations across many, many neurons, that becomes a huge number of additions and multiplications). During use, all the neuron values are being updated all the time; during training, the weight values get modified too.
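As a concrete sketch of that update (the neuron values and weights below are made-up illustrative numbers, and the sigmoid is assumed as the activation function – the text deliberately leaves f(x) unspecified):

```python
import math

def f(x):
    # One common choice of activation function: the sigmoid,
    # which squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Values of the input neurons B, C and D...
b, c, d = 0.5, -1.0, 2.0
# ...and the weights of their connections into neuron A.
w_ba, w_ca, w_da = 0.1, 0.4, -0.2

x = b * w_ba + c * w_ca + d * w_da   # x = b.w_BA + c.w_CA + d.w_DA
a_new = f(x)                         # a' = f(x)
```

Everything a neural network does during use is just this little calculation, repeated across huge numbers of neurons.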

For the game, the screen can be divided into about 1000 pixels, each with a number representing a particular colour. These 1000 numbers are the input to the network – analogous to the firing of the retina for the brain. And there are just 2 outputs. The machine just plays the game over and over again. At the end of each game, the score influences the net: higher scores mean ‘what you did is good’, so the weights are modified to make similar behaviour more likely in future. This is what is called ‘reinforcement learning’.

If we watch what is going on, after perhaps a thousand games, it looks like it has worked out that there is a ball on the screen and the 2 outputs control whether the paddle moves to the left or right. After a few thousand games, it is hitting the ball with some proficiency. After a few more thousand games, it seems to have noticed that bricks on the back wall disappear when they are hit and that the game ends when the ball can break through that wall. There will be a higher score the sooner this is done. After a few more thousand, it is able to play better than any human. Now, this explanation describes behaviour in terms of intentions: ‘the computer has worked out…’. But all that is actually happening is that weights are being modified within the network.

Someone wrote the programming code for the Breakout game in the first place, someone wrote the code for a network (with no specified application and a load of randomly-assigned weights), and someone wrote code to:

  1. start running the game,
  2. control the game from the network’s outputs,
  3. feed it its visual inputs,
  4. tell it its score value at the end of the game, and
  5. tell it to update its weights.

But nobody gave the network any code about the rules of the game. It worked them out by itself, starting with its 2 outputs waggling randomly. The only information it was provided with was the score – a good score meant that weights between neurons that fired together were ‘strengthened’ (the numbers were increased a bit).
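The harness described in those five steps can be sketched as a loop. The toy game and one-weight ‘network’ below are stand-ins of my own invention (nothing like DeepMind’s actual code), just to show the shape of the loop: the learner is only ever told its score, yet ends up favouring the action that scores well.

```python
import random

class ToyGame:
    # Stand-in for Breakout: 'pixels' are a few random numbers and the
    # score counts how often the action matched a hidden 'good' move.
    def reset(self):
        self.steps, self._score = 0, 0
    def over(self):
        return self.steps >= 10
    def pixels(self):
        return [random.random() for _ in range(4)]
    def step(self, action):
        self.steps += 1
        if action == 1:               # the hidden 'good' move
            self._score += 1
    def score(self):
        return self._score

class ToyNetwork:
    # A one-weight 'network': the weight is the probability of action 1.
    def __init__(self):
        self.weight = 0.5
    def act(self, pixels):
        # A real net would compute its action from the pixels; this toy
        # stand-in just acts randomly with probability 'weight'.
        return 1 if random.random() < self.weight else 0
    def reinforce(self, score):
        # Higher scores mean 'what you did is good': nudge the weight so
        # that similar behaviour becomes more likely in future.
        if score > 5:
            self.weight = min(0.99, self.weight + 0.05)

def run_training(network, game, num_games=200):
    for _ in range(num_games):
        game.reset()                             # 1. start running the game
        while not game.over():
            action = network.act(game.pixels())  # 2./3. outputs control the
            game.step(action)                    #       game, pixels go in
        network.reinforce(game.score())          # 4./5. score feeds back and
                                                 #       the weights update

random.seed(0)
net = ToyNetwork()
run_training(net, ToyGame())
```

After training, net.weight has drifted close to 1: the toy has ‘learned’ that action 1 scores well, with only the score as feedback.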

Note: A lot of games have been mentioned above (and there will be more to come). Games provide a very simple environment for ‘intelligent machines’ to operate in. This allows researchers to concentrate on the machines rather than the environment the machines interact with, and it allows comparison between machines.

The amount of skilled programming required for an artificial neural network is not large. It is the sheer amount of number crunching across many, many artificial neurons during training that gives rise to its intelligent behaviour.

Below is a Python program to implement, train and use a basic multi-layer neural network.

[Image: AEW_python]

In this code, the size of the network is determined by the vector num_neurons_in_layer. For example, if we set it to [100, 1000, 1000, 1000, 1], it will have 5 layers, with 100 neurons in the input layer, 1 neuron in the output layer and 3 ‘hidden layers’ of 1000 neurons each. If the number of layers is more than 4, the network is considered to be ‘deep’ – hence the terms ‘deep neural nets’ and ‘deep learning’.
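Just to give the flavour, here is my own minimal sketch along the same lines (not the full program referred to above): it assumes randomly initialised weights and a sigmoid activation, and implements only the forward pass, not the training.

```python
import numpy as np

def make_network(num_neurons_in_layer, seed=0):
    # One weight matrix per pair of adjacent layers, randomly initialised.
    rng = np.random.default_rng(seed)
    return [rng.standard_normal((m, n)) * 0.1
            for m, n in zip(num_neurons_in_layer[:-1],
                            num_neurons_in_layer[1:])]

def forward(weights, inputs):
    # Propagate values layer by layer: weighted sums, then activation.
    a = np.asarray(inputs, dtype=float)
    for w in weights:
        a = 1.0 / (1.0 + np.exp(-(a @ w)))   # sigmoid activation
    return a

# A 'deep' network: 100 inputs, three hidden layers of 1000, 1 output.
num_neurons_in_layer = [100, 1000, 1000, 1000, 1]
weights = make_network(num_neurons_in_layer)
output = forward(weights, np.zeros(100))
```

Note how little of this is specific to any application: the entire network is a handful of lines plus a list of layer sizes.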

The point here is not to examine the code above in any detail (see here for that) but just to show:

  1. how little code is needed for someone to create the neural network itself,
  2. how easy it is to define a large network, and
  3. that there is nothing specific to the application in here.

The ‘magic’ is in the huge, indecipherable set of numbers that make up the so-called weights, which adjust themselves during learning. The programmer has to create the setup so that the network can learn at the right pace, in the right way and for the right duration. But the programmer does not need to specify anything about what the network needs to do, and once the learning has been started, does not need to do anything whilst the network is learning.

Breakthrough

Moore’s law gradually increased computer processing power over the years to the point that impressive things could be done with ordinary amounts of hardware (e.g. PC processors and graphics processors). The previous disappointments arose because we underestimated just how many artificial neurons (how much number-crunching) were needed to get networks to do useful things.

But we are now at the stage where advanced ‘intelligent’ behaviour can be trained into these machines with relatively little effort on the part of humans. And, astoundingly, Moore’s law continues to work. The application of this technology to new problems is rapid. We just need lots of data to train a network in the first place.

Big Data

The internet helps when it comes to having lots of data. For example, Facebook has lots of data associating photos of people with their names, ages and genders. We could train a network with this data so that, given a face, it could guess the person’s gender or age (and guess very well).
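To illustrate the idea of training on labelled data (the ‘faces’ below are reduced to two entirely synthetic, made-up features, and the learner is a single neuron – far simpler than the deep networks actually used for this):

```python
import random

random.seed(1)
# 200 labelled examples: two invented 'face features' per example,
# drawn from two well-separated clusters, one per class.
data = ([((random.gauss(1.0, 0.3), random.gauss(1.0, 0.3)), 1)
         for _ in range(100)] +
        [((random.gauss(-1.0, 0.3), random.gauss(-1.0, 0.3)), 0)
         for _ in range(100)])

w1, w2, bias = 0.0, 0.0, 0.0
for _ in range(20):                      # passes over the training data
    for (x1, x2), label in data:
        predicted = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = label - predicted        # perceptron-style update:
        w1 += 0.1 * error * x1           # wrong answers nudge the weights
        w2 += 0.1 * error * x2           # towards the correct label
        bias += 0.1 * error

correct = sum(1 for (x1, x2), label in data
              if (1 if w1 * x1 + w2 * x2 + bias > 0 else 0) == label)
```

After a few passes over the labelled examples, the neuron classifies essentially all of them correctly; real systems do the same job with millions of photos and millions of weights.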

[Image: AEW_faces]

Alternatively, the network could be trained to identify specific individuals from faces.

Note: It would have to be a very deep network to be able to do this…

[Image: AEW_deep]

It is this technology that enables machines to read hand-written numerals very well, understand what we say (the voice recognition behind digital assistants such as Siri and Alexa), identify which tumours are cancerous – doing this better than specialists can…

[Image: AEW_tumour]

and detect objects in a street scene that would help self-driving cars navigate their way around.

[Image: AEW_scene]

The New Revolution

So, when I am talking about the new technology that is driving changes to employment and education, I am talking about:

  1. The continuation of existing technology: the application of ‘traditional’ programming techniques using computers, electric motors and various electronic sensors to automate physical and cognitive tasks, as they have been automated over the past 50 years, in the factory and online.
  2. The new technology of deep learning and its application to the automation mentioned above. (Intelligent robots just comprise artificial intelligence plus the technology above.)
  3. The new processes of the current technological wave, namely the ‘platform’ business models of the likes of Google, Amazon and Facebook.
  4. The combination of these new processes with the new technology of deep learning.

(I am not going to speculate about what might come in the next technological wave.)
