Technological unemployment
In contrast with previous technologies, the latest generation of machines equipped with artificial intelligence replaces many creative and manual jobs without creating new ones. The future will therefore probably see less work and more unemployment: how should the economic and social implications be managed?
Good afternoon. I am very, very happy to introduce one of the most interesting economists of today, Daniel Susskind from Oxford. He is a researcher in economics and teaches at Balliol College in Oxford, and he was a Kennedy Scholar at Harvard University in Cambridge. But there is another interesting detail in his curriculum: he worked for the British government, in the Prime Minister's Strategy Unit, in the Policy Unit in Downing Street, and as a senior policy adviser at the Cabinet Office, before the Brexit decision, I suppose. He is the author, together with his father, of a real bestseller, this book, The Future of the Professions: How Technology Will Transform the Work of Human Experts. For this book, in 2016, he was awarded the prize for the best business book of the year given by the Financial Times.

Well, we have the same thing in Germany, the same award for the best economic book of the year published in Germany. It is organised by my paper, the German economic daily Handelsblatt, together with the Frankfurt Book Fair and, for the money, Goldman Sachs, because if you win our prize you get 10,000 euros. In 2016, when Daniel's book came out, we awarded a German author for a book with the title Silicon Germany, on how we can master the digital transformation, a problem not only for German small and medium-sized businesses. The year before, and then last year again, we gave our award for the best book to international authors. Erik Brynjolfsson and Andrew McAfee, both from MIT in Boston, got it for their book The Second Machine Age, and last year we awarded the global bestseller Homo Deus: A Brief History of Tomorrow by that brilliant mind Yuval Noah Harari, professor at the Hebrew University of Jerusalem, a historian, not an economist. I am sure Daniel would have been awarded our prize as well, if only the book had been translated into German. But there is not even an Italian version, only a Spanish one as far as I know, and soon a Portuguese one as well. We have to fill this gap.

All the books I have mentioned have to do with Daniel's research and with his lecture today, but there is another one to begin with, one that started the discussion exactly 20 years ago and was the first in a line of books about the big issue of man and labour. Richard Sennett published in 1998 his book The Corrosion of Character: The Personal Consequences of Work in the New Capitalism. The German title of the book was much better, because it was Der flexible Mensch, the flexible man. And then ten years later, among many other books, came The Craftsman, in 2008. Sennett analysed for the first time the changes in modern working life that came with the rise of the new capitalism. Patterns like security of our labour, a lifelong job in the same place, il posto fisso as we say in Italy, and also experience or special skills, didn't count any longer. Sennett identified a sense of losing identity and security and coined the description "drift": moving from job to job, short-term engagements, frustration, and insecurity in private life as a consequence. That is what happened, mostly in the United States. His book The Craftsman then praised craftsmanship as a basic human impulse, the desire to do a job well for its own sake, and one of his examples was a computer programmer.

Then Brynjolfsson and McAfee wrote their Second Machine Age in the middle of the financial crisis. They said that intelligent technology and hyper-interconnectedness would cost millions of jobs in the next years.
What started in the factories would now spread to the knowledge sector: cognitive work is done by very clever computer systems. They are very optimistic, and they say in their book that they expected the digital revolution to bring a push for productivity. "We are optimistic because today we are able to generate more wealth than ever before," they write, "but we have to use the digital technology." Looking at the issue of jobs, they state that the problem is having too few people with high skills and too many with low skills, and they criticise politics for slowing down progress, protecting the past instead of looking at the future.

And then Harari, finally, coming from history, goes in another direction. He said that Homo sapiens as we know them will disappear in a century or so: with the help of medical research we will live for more than 100 years and will achieve god-like powers. This is fascinating food for thought. Coming to our issue of man and labour, Harari argued in a 2017 article that through continuing technological progress and advances in the field of artificial intelligence, and now I quote him, "by 2050 a new class of people might emerge: the useless class. People who are not just unemployed, but unemployable." He put forward the case that dealing with this new social class economically, socially and politically will be a central challenge for humanity in the coming decades.

And here we are. Reality is fast, things are moving ahead, and we are not talking about science fiction. 3D printers were only yesterday; today we talk about artificial intelligence. So after the sociologists and the historians, we now have the economists. We are no longer talking about machines doing the work of manual workers, but about the future of white-collar work. Daniel, in your book you say that in an internet society we will neither need nor want lawyers, accountants, doctors, teachers, architects, consultants, the clergy and many others to work as they did in the 20th century. So what about these occupations, what about the future of work? Your book came out in 2015, and time moves quickly. What have you discovered since then? Daniel, the floor is yours.

What have I discovered since then? The floor is mine, to talk? Yes, please. Okay, very good. And perhaps afterwards, when we have time for questions, somebody may want to ask how my thinking has changed, or what I would have wished had been in that book that was published in 2015 and wasn't there. Well, it's a great pleasure to be with you this afternoon. What I want to do, if we could bring up the opening slide, what I want to do in the next 45 minutes or so is talk to you about the future of work. The title is technological unemployment, but I really want to talk to you about the future of work, and in particular I want to do seven things with you.

The first is that I want to talk a little bit about the history of people talking about machines doing remarkable things, and people worrying in turn that there may not be enough jobs for human beings to do. I then want to point to blue-collar work; I think many people in the room will be comfortable with the idea that technological change might take on the sort of work that is traditionally done by blue-collar workers. And then I want to raise the question, as was suggested in the introductory remarks, that white-collar work might in turn be susceptible to these sorts of technological changes as well.
That really is the research and the ideas that we developed in this book, The Future of the Professions, and I want to share some of that with you. I then want to take a step back and say a little about technology, because what I am about to talk about this afternoon is driven by technology, and then I want to say a little about one technology in particular, artificial intelligence, which has really captured people's imaginations in the last year or two. We have a particular way of thinking about what has happened in artificial intelligence and why it is significant for thinking about the future of work. Then I want to come back to white-collar work and, in light of those ideas about technology, offer some reflections on why white-collar work might now also be susceptible to these technological changes. And finally I want to draw out a set of implications and think through what these changes might mean for individuals, what they might mean for businesses, and what questions they raise for governments as well.

So first, the history of some of these ideas. Today we talk about driverless cars, but in ancient times Homer, the great Greek poet, told a tale of driverless stools: he wrote of a fleet twenty strong that would scuttle to their owners on demand. Today we talk about robots, but Plato, again in ancient times, the ancient philosopher, wrote of Daedalus, a sculptor so talented that his statues had to be tied down to stop them running away, a first glimpse of this idea of robotic life. Now that story, the idea of sculptures so lifelike they might run away, might sound absurd, but it caused so much trouble to Aristotle, one of Plato's greatest students, that Aristotle wondered what would happen to the world of work, and this is perhaps the first glimpse of people worrying about technological unemployment. Aristotle wondered what would happen to the world of work if every tool we had could perform its task "either at our bidding or itself perceiving the need". He worried what would happen if all these machines could do their jobs without human beings having to operate them.

The old Jewish sages, again thinking about robotic life, wrote of mystical creatures called golems, fashioned out of mud and clay, which would come to life at the muttering of the right incantations from their owners; the right spell would bring them to life. One golem, called Yosef, is said to lie hidden in the attic of the grand synagogue in Prague to this day, and the legend goes that centuries ago the rabbi there, Rabbi Judah Loew, brought this golem to life to protect the Jews of Prague from persecution.

Leonardo da Vinci, you will be familiar with him, the great fifteenth-century polymath; let's jump forward a few hundred years. He set out designs for a driverless cart; he invented a mechanical lion which, if you whipped it three times, its belly would open and reveal a crest of the monarchy; and, it turned out when some papers were discovered in the 1950s, he was also one of the first people to try to sketch out what an android, what a robot, might look like and how it might work.

In the nineteenth century, moving forward a few more hundred years, the Luddites started causing trouble in Britain.
As many people who have heard stories about the Industrial Revolution in Britain will know, the Luddites were a group of disgruntled workers who took their name from their declared support for an apocryphal man called Ned Ludd, an East Midlands weaver. This was a group of East Midlands craftsmen who set about smashing machines in anger and fear right at the start of the Industrial Revolution; they worried what these machines would do to the work they had traditionally done. In 1812, and this is what this snapshot is from, machine breaking, breaking these looms, became a crime punishable by death. That is how seriously these disruptions were taken: the Destruction of Stocking Frames, etc. Act of 1812 was passed, and the following year several people were actually put to death under this piece of legislation, given the disruptions they were causing.

The word "robot" itself is actually a relatively new word. It was first used, I think, in 1920 by Karel Čapek, a Czech writer, and it comes from the Czech word "robota", which means slavery or drudgery. He wrote a science fiction play called R.U.R., and he developed the term there.

The first time the phrase "technological unemployment" was really used to great effect was by John Maynard Keynes, the great British economist, in an essay he wrote in the 1930s. He described technological unemployment as unemployment due to the discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour: the destructive effects of technological change in the labour market outrunning the creative effects, the ability of technological change to create new work for human beings to do. Some great minds of the twentieth century threw themselves behind these ideas. Albert Einstein, the great physicist, wrote of industrial techniques which were meant to serve the world's progress by liberating mankind from the slavery of labour, but which now threatened to overwhelm their creators. And then we have the politicians, who throughout the twentieth century said very much the same thing. John F. Kennedy, President Kennedy, in the 1960s called automation a revolution that carried with it "the dark menace of industrial dislocation". Industrial dislocation was what worried him in 1960. But then Barack Obama, in 2016, in his farewell address, used exactly the same words: he described automation, exactly as Kennedy had done, as the next wave of economic dislocation.

So that is a flavour of the history of some of these ideas, and of people's worries about increasingly capable systems and machines. I think many people, as I said right at the start, are comfortable with the idea that technological change of the sort I have just glimpsed might threaten blue-collar work. Think of agriculture, for example. In the United States in 1900, about 41 percent of the US workforce worked in agriculture; by the year 2000 that had fallen to 2 percent, producing far more agricultural goods than in the past but requiring fewer and fewer workers to do it. I think many people are comfortable with the idea that technological change might affect agricultural workers. Similarly, look at manufacturing in the US from 1980. What we see, and that is what that top line is, is the real output of manufacturing rising and rising and rising, but, in exactly the same way as we just saw in agriculture, total employment falling over the same period: able to produce more and more and more with fewer and fewer people required to do it.
Now, if you listen to President Trump, you might get the sense that this particular story about manufacturing is a story about trade, a story about China taking away American jobs in manufacturing. But of course that isn't really the most important part of the story here. The most important part of the story is technology; it is a story about productivity. It is the fact that American farmers and American industrialists can now produce more and more and more with the input of fewer and fewer workers, because they have more productive machinery and equipment at their disposal.

So, as I said, I think many people are comfortable with the idea that technology might affect the work of blue-collar workers: it might affect the work of people working in agriculture, or the work of people working in industry and manufacturing. But the idea that technological change might also affect the work that white-collar workers do is, I think, a more troubling proposition. And the reason many people think that, I believe, is because they have a particular conception of white-collar work as being different in nature from blue-collar work. When we think about blue-collar work, we tend to think of it as routine: relatively straightforward, process-based, easy to explain how human beings do it, and so, we tend to think, easy to automate. The difference with white-collar work is that people tend to think it requires things like creativity, judgment and empathy from human beings. Put another way, we tend to think that white-collar work is non-routine, and because it is non-routine, traditionally we have thought it might be hard to automate. That, I think, is the traditional way of thinking about technology and work.

What we did in our book in 2015, The Future of the Professions, was to look at what technology was doing to white-collar workers: what it was doing to doctors and teachers, accountants and nurses, architects, consultants, even the clergy. And what we found was that even though this work often turned out to be non-routine, technological change was starting to affect it as well. In our work there are hundreds of case studies of this happening; what I want to do now is just give you a flavour of the sort of thing I am talking about.

In education, more people signed up for Harvard's online courses in a single year than had attended the actual university in its entire existence up until that point. In medicine, a team of researchers at Stanford last year announced the development of a system which, if you gave it a photo of a freckle, could tell you as accurately as leading dermatologists whether or not that freckle is cancerous. In the world of journalism, Associated Press just a few years ago started to use algorithms to computerise the production of their earnings reports; using these algorithms they now produce about fifteen times as many earnings reports as when they relied upon traditional financial journalists alone. In the legal world, on eBay every single year 60 million disputes arise, and they are resolved online without any traditional lawyers, using what is called an e-mediation platform, 52 million of them without any human beings involved at all. Just to put that 60 million in context, that is 40 times the number of civil claims filed in the entire English and Welsh justice system.
It is three times the number of lawsuits filed in the entire US legal system, and they are resolved every year on this one website without any traditional lawyers. Again in the legal world, the bank JP Morgan announced the development of a system called COIN, which stands for Contract Intelligence. It scans commercial loan agreements; the details don't matter. What matters is that this system does in a matter of seconds what is thought to have required up to about 360,000 hours of traditional legal time. In the world of tax, last year about 50 million Americans used online tax preparation software, rather than a traditional tax accountant, to help them do their tax return.

In the world of audit, think about the traditional way in which an auditor does an audit. There are too many financial transactions to review them all, so what we do is take a small sample of those transactions; we have various methods for trying to ensure that the sample is representative, that it provides us with a good window on the rest of the data, and we extrapolate, drawing broader conclusions about the general population of data based on that narrow sample. That is the traditional approach to auditing. Only now there is a very different approach, and Halo at PwC, PricewaterhouseCoopers, the big auditing firm, is one example of it: rather than take a small snapshot of the data and extrapolate from it, these companies now use algorithms and run them through the entire population of data, hunting for financial irregularities that way.
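To make that contrast concrete, a minimal sketch of the two approaches might look like the following. The "irregularity" rule and the ledger data are invented placeholders; this illustrates the sampling-versus-full-scan contrast only, not how PwC's Halo actually works.

```python
# A minimal sketch of the two auditing approaches described above: estimate from a
# small random sample and extrapolate, versus run a check over every transaction.
# The "irregularity" rule and the ledger data are invented placeholders.
import random

def looks_irregular(txn: dict) -> bool:
    """Toy rule: flag large, suspiciously round amounts."""
    return txn["amount"] >= 10_000 and txn["amount"] % 1_000 == 0

def sample_and_extrapolate(transactions: list, sample_size: int = 200) -> int:
    """Traditional approach: review a random sample, then scale the flag rate up."""
    sample = random.sample(transactions, min(sample_size, len(transactions)))
    flagged = sum(looks_irregular(t) for t in sample)
    return round(flagged / len(sample) * len(transactions))

def scan_entire_population(transactions: list) -> list:
    """Algorithmic approach: test every single transaction and return the flagged ones."""
    return [t for t in transactions if looks_irregular(t)]

ledger = [{"id": i, "amount": random.randint(1, 50_000)} for i in range(100_000)]
print("Estimated irregular transactions (sample):", sample_and_extrapolate(ledger))
print("Flagged irregular transactions (full scan):", len(scan_entire_population(ledger)))
```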
Fukoku Mutual Life Insurance Company, a Japanese life insurance company, began to use a system at the start of last year to calculate insurance payouts. WeBuildHomes.nl, in the world of architecture, is a Dutch firm: architects go online to this website and, out of what are essentially digital Lego blocks, they assemble buildings, and somebody looking for a home can go online, sift through the buildings, choose one they like the look of, pay for it, and it gets delivered to them. A very different way of thinking about home construction. Or the new concert hall in Hamburg, an incredibly beautiful building, the sort of space you look at and think, wow, only a human being with a remarkably refined aesthetic sensibility would be capable of designing a space as beautiful as that. Only this space wasn't designed by a human being alone; it was designed with an algorithm. What happened was that the architects had a system and set it a relatively sparse set of design criteria: we want it to have these acoustic properties, we want it to be made of these materials, even some more granular things; in fact, you will see it is made of 10,000 interlocking panels, and they wanted it to be the case that if a panel was within reach of an audience member, the panel would have a particular texture to the touch. They set those design criteria, the system generated a set of designs, and the job of the architect was simply to look through them and choose one they liked the look of.

And, as I said, one of the professions we looked at in our work was divinity, and this is perhaps the most playful but also the most provocative of all the cases I am going to put to you. In 2011 the Vatican, or in fact it wasn't the Vatican, it was the Catholic Church, issued the first ever digital imprimatur. For those of you who don't know, the imprimatur is the official licence granted by the Catholic Church to religious texts. It granted it to an app called Confession that would help you prepare for confession: it has tools for tracking sin, and it has various drop-down panels with options for contrition. I should just say it was incredibly controversial at the time, and it remains controversial, so controversial that the Vatican itself felt it had to step forward and say: look, while you are allowed to use this app to prepare for confession, please remember that it is not a substitute for the real thing. Which we thought was interesting.

So, everything I have said so far is underpinned by technology, and we have a particular way of thinking about what is happening in technology that I want to share with you now. Before I do that, though, I want to take you back to 1996. This was when my co-author, Richard Susskind, who as you heard in the introduction is also my father, wrote a book called The Future of Law, and in the book one of the predictions he made was that the main way lawyers and their clients would come to communicate in the future would be through email. Now, that sounds completely unremarkable today. At the time, the Law Society of England and Wales, which is the professional body for lawyers in my homeland, said that my dad shouldn't be allowed to speak in public. The Law Society said my dad didn't understand lawyer-client confidentiality; in fact, they said he was bringing the legal profession into disrepute by suggesting that the main way lawyers and their clients would come to communicate in the future would be through email. I just want you to bear that anecdote in mind: I think nothing I am about to say about technology can seem as remarkable as email must have seemed to the Law Society back in 1996.

When we think about technology, there is so much happening that what we do is look at it through four different windows. The first is the exponential growth in the underpinning technologies: the fact that they are becoming more and more powerful. The second is that not only are they more powerful, they are also increasingly capable: we can use them to perform a wider range of tasks and activities than was possible in the past. The third is that these systems and machines are increasingly pervasive: not just that all of us have smartphones or tablets, but the internet of things, the idea that our devices are increasingly connected. And finally, we too as human beings are becoming increasingly connected, not least through all the various types of social media we will be familiar with. For this afternoon's purposes I want to focus on the first two, because I think they are particularly revealing.
The first, then, is the exponential growth in the underpinning technologies. The law here isn't a law of the land but Moore's law. Gordon Moore, the co-founder of Intel, in 1965, a few years before he co-founded Intel, made what was for him just a rough observation: he thought the number of transistors we would be able to fit on a silicon chip would double every two years. And that observation has roughly borne out since then. Sometimes it has been faster, sometimes slower, but that doubling every two years, every 18 to 24 months, does a good job of capturing the dynamics of the number of transistors we have been able to fit on a silicon chip. The consequence has been that every two years we have also seen a doubling in processing power, in data storage capability, and in bandwidth as well. Anyone who is mathematically orientated in the room will know that this doubling and doubling and doubling gives rise to exponential growth, or we can think of it more simply as just explosive growth.

To capture quite how powerful this process is, there is a great story, which some of you will be familiar with, of the king and the princess, and it goes something like this. There is a princess, and she is in some turmoil, and she is rescued from the turmoil, and her rescuer returns her to the king. The king says: thank you so much, how can I ever repay you? And the rescuer, who is mathematically astute, says: what I want you to do is set out a chessboard in the front square of your palace. On the first square of the chessboard I want you to put one grain of rice, on the second square two, on the third square four, on the fourth eight, then sixteen, thirty-two, sixty-four, one hundred and twenty-eight, and so on. I just want you to go around the chessboard like that, doubling the grains of rice as you go, and all I want in return is the pile of rice that is left over when you get to the 64th square. And the king, who is not mathematically astute, thinks he has struck a great bargain, and he assembles his servants and they start gathering rice on this giant chessboard in front of the palace. Only very quickly they realise that this is an impossible task, because if they were to get anywhere near the 64th square of that chessboard it would require more grains of rice than there probably are on planet Earth. That is how powerful this doubling process is, and that is only 64 squares of doubling.

It means that if we think in terms of processing power, and we just project processing power out to 2020, the average desktop computer in 2020 will have about the processing power of the human brain. More remarkably yet, if this process continues out to 2050, it means the average desktop computer will have the processing power of all of humanity combined. Now, you might think I am exaggerating, that this is hyperbole. Just to put it in context, go back to the turn of the century, when Michael Spence, a Nobel laureate in economics, gave his Nobel Prize lecture. In that lecture he noted a roughly ten-billion-times reduction in the cost of processing power in the first fifty years of the computer age. And that was sixteen or seventeen years ago now, more than eight doublings ago, and it has only continued since then.
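The arithmetic behind both of those stories is easy to check. A small sketch, using the round figures from the talk (one grain doubled across 64 squares, and a capacity that doubles every two years or every eighteen months), might look like this:

```python
# A small sketch of the doubling arithmetic in the stories above, using the talk's
# round figures rather than any engineering data.

def rice_grains(squares: int = 64) -> int:
    """One grain on the first square, doubling on every square after that."""
    return sum(2 ** (square - 1) for square in range(1, squares + 1))

def growth_factor(years: float, doubling_period_years: float) -> float:
    """How many times over capacity grows if it doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

print(f"Grains needed for the full board: {rice_grains():,}")                 # ~1.8 x 10**19
print(f"Fifty years at a 24-month doubling: {growth_factor(50, 2.0):,.0f}x")  # ~33 million x
print(f"Fifty years at an 18-month doubling: {growth_factor(50, 1.5):,.0f}x") # ~10 billion x,
# which is roughly the ten-billion-fold figure quoted from Spence's lecture
```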
But I suppose the most important point here isn't simply that these systems and machines are more powerful, in the sense that we have more raw computational power as a result of this exponential growth; it is also that they are more capable. We can use them to perform a wider range of tasks and activities than we could in the past, and we in turn think about this increasing capability through four different windows. The first is the idea of big data, a term some of you will be familiar with. The second is that we can use these systems to solve problems. The third is a fascinating field called affective computing, and I will say a little on that. And finally there is the field of robotics. So let's look at each of these in turn.

The first is the idea of big data. It is an inevitable fact that as more and more of our lives become digitised, as more and more of what we do is interact with digital systems and online systems, every decision we take and every action we make is captured in data. In a sense, we now trail behind us, as we move through life, a data exhaust, and when you gather up this data exhaust it can yield insights, patterns and correlations that human beings acting alone simply couldn't perceive. A nice example of this from the legal profession, from a white-collar profession, is a system called Lex Machina. The system can predict the outcome of patent disputes, it is said, as accurately as many leading patent lawyers. How does the system work? It knows nothing about the law. What it has is a database of about 100,000 past cases, with various features of those cases: the date and time, the people involved, the nature of the case. And it is able to make a statistical prediction, based on that body of data, about the outcome of a potential dispute that can rival the reasoning of really quite fine legal minds.
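The general flavour of that kind of purely statistical prediction can be shown with a toy sketch. This is not Lex Machina's actual method, and the case data, features and outcome labels below are invented; it only illustrates the idea of predicting an outcome from the recorded features of past cases, with no legal knowledge involved at all.

```python
# A toy of the general idea described above: predict an outcome purely from the
# recorded features of past cases, with no legal knowledge. Not Lex Machina's
# actual method; the features, data, and outcome labels are invented.
from collections import Counter

PAST_CASES = [
    {"court": "N.D. Cal.", "technology": "software", "outcome": "settled"},
    {"court": "E.D. Tex.", "technology": "pharma",   "outcome": "claimant wins"},
    {"court": "N.D. Cal.", "technology": "software", "outcome": "settled"},
    {"court": "N.D. Cal.", "technology": "software", "outcome": "claimant wins"},
    # ... a real database would hold on the order of 100,000 of these
]

def outcome_odds(court: str, technology: str, cases=PAST_CASES) -> dict:
    """Relative frequency of each outcome among past cases sharing these features."""
    matching = [c["outcome"] for c in cases
                if c["court"] == court and c["technology"] == technology]
    counts = Counter(matching)
    total = sum(counts.values()) or 1
    return {outcome: round(n / total, 2) for outcome, n in counts.items()}

print(outcome_odds("N.D. Cal.", "software"))   # {'settled': 0.67, 'claimant wins': 0.33}
```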
Then there is the whole idea of problem solving; this was the second category I mentioned. The great example of this is probably IBM's Watson. This was a supercomputer owned by IBM that went on the US quiz show Jeopardy! and beat the two human champions. Lots of you will be familiar with the story. What is interesting about it is that this was a system to which, if you posed a question or gave it a problem, it could provide you with an answer. And if you look at the direction many large technology companies are going in today, whether it is Cortana at Microsoft, Siri at Apple, Google's assistant at Google, or Alexa at Amazon, designing systems that solve problems in this way, where you pose a question and they provide you with an answer, is becoming an increasingly popular line of research.

Then there is the field of affective computing, and this I think is fascinating; it has gone largely unnoticed in the popular press. It is an entire field of computer science dedicated to designing systems that can both detect and respond to human emotions. There are now systems that can distinguish, more accurately than a human being, between a smile of genuine joy and a smile of social conformity; systems that can distinguish, more accurately even than human beings, between a face showing genuine pain and one showing fake pain; systems that can listen to a recording of a woman and a child and tell from their voices whether or not they are related; systems that can watch a video of a human being sitting in a courtroom being cross-examined and tell whether or not they are lying, more accurately than the best human lawyers. And in white-collar work, particularly in the professions I have been interested in for the last few years, where professionals tend to think that the core of what they do involves some kind of interpersonal, empathetic interaction, some of these developments raise quite interesting and quite troubling questions for white-collar workers.

Finally there is the field of robotics, and I think the great example of robotic achievement is the driverless car. This is particularly interesting from an economic point of view, because if you were to go back to 2003 and ask the leading economists in the world who were thinking about technological change and how it affects work, tell me a task that you think cannot readily be automated, one of the tasks they named at the time was turning left in a bakery truck. And it is just fascinating that within a year or two we had Sebastian Thrun develop the first driverless car, and today we have almost all major car manufacturers saying we can expect commercial versions of these on our roads in the coming years.

So those, I think, are four helpful windows through which to see what is happening in technology. But really, if I had to capture it in one sentence, the idea is that there is no finishing line here. Nobody in the world of technology is dusting their hands off and saying "job done". When you look at the technologies we have today, when you pick up your iPhone or look at your tablet or open your laptop, that is the worst they are ever going to be. And when thinking about what the world of work might look like in 5, 10, 15, 20 years, I think that is quite an important mindset to have: the technologies we have today may not give us a reliable guide to what the technologies of the future might be capable of doing.

I said at the start that I wanted to say a little about artificial intelligence, because this is an idea that has captured people's imaginations in the last few years, and we have a particular way of thinking about it which will be useful later for understanding why we think these changes pose quite a significant threat to the work of white-collar workers. To begin this story about what has happened in artificial intelligence, I want to take you back to what we identified as a first wave of artificial intelligence, which took place in the 1980s. As I said, I wrote this book, The Future of the Professions, with my dad, and my dad has had a very interesting career. He began it in the 1980s, when he wrote his doctorate at Oxford University on artificial intelligence and the law. So he was trying to build systems, almost 40 years ago, artificially intelligent systems, that could solve legal problems. What happened was that he finished his doctorate in 1986, and in 1986 a very difficult piece of legislation was passed in the UK called the Latent Damage Act. It turned out that the leading expert in the world at the time on this particular piece of legislation was a man called Phillip Capper, and Phillip Capper happened to be the dean of the law school at Oxford University, where my dad had just finished his doctorate. He had written the definitive book on this Latent Damage Act, and he came to my dad and said: look, it's absurd. Any time anyone wants to understand whether this legislation applies to them, whether this piece of law affects them, they have to come to me or they have to buy my book. Why don't we instead build a system together, based on the expertise in my head, that they can use instead, so they don't have to come and talk to me face to face and they don't have to buy a very expensive copy of my book? And that is what they did: they came together and built a system that could help people navigate this particularly difficult piece of law. Just to give you a sense of what they were up against, here is an extract from the law: "Section two of this Act shall not apply to an action to which this section applies." English is my first language, and it is incredibly difficult to understand what is going on there. This was the home screen design for the system; my dad assures me that this was a cool screen design 40 years ago. I have never really been convinced of that.
They published it in the form of two floppy disks; there was no internet, and it was a time when floppy disks genuinely were floppy. Essentially, what they built together was a gigantic decision tree, in which you answered yes-or-no questions and navigated your way through the tree. The tree really was gigantic, with about two million branches through it. You answered these questions, made your way through the tree, and ended up at an answer: does this legislation apply to me or not? This was the approach in the first wave of artificial intelligence, and they were doing it not just in law but in medicine, in tax, in audit and in consulting. The approach was the same in all these settings: if you wanted to build a system that could outperform a human expert, that could do the sort of thing a white-collar worker did, you had to identify a human expert, in this case Phillip Capper, the leading lawyer of the time; you had to get them to explain to you how it was they were so good at solving this problem, how they went about solving it; and then you had to try to capture that explanation in a set of instructions, a decision tree, for people who didn't have the expertise to navigate through. These were known as expert systems: they were based on the expertise of human beings.
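In miniature, an expert system of that kind is just a hand-built tree of yes-or-no questions encoding the expert's explanation. The sketch below is only illustrative: the questions are simplified placeholders, not the rules of the actual Latent Damage System, which ran to around two million branches.

```python
# A miniature of the "expert system" style described above: a hand-built tree of
# yes/no questions encoding a human expert's explanation, which the user walks
# through to reach an answer. The questions are simplified placeholders, not the
# rules of the actual Latent Damage System.

DECISION_TREE = {
    "question": "Was the damage hidden (latent) when it first occurred?",
    "yes": {
        "question": "Is the claim being brought within three years of discovering the damage?",
        "yes": "This legislation may well apply - consult the detailed rules.",
        "no": "The claim is probably time-barred under this legislation.",
    },
    "no": "Ordinary limitation rules apply; this legislation is not engaged.",
}

def walk(node) -> str:
    """Ask yes/no questions until we reach a leaf, which is a plain string answer."""
    while isinstance(node, dict):
        answer = input(node["question"] + " [y/n] ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    return node

if __name__ == "__main__":
    print(walk(DECISION_TREE))
```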
What is interesting, though, and we can talk more about this in the questions if people are interested, is that these expert systems didn't really catch on in the 1980s. People thought that by now they would be really quite widespread, and they are not; we don't see systems built in the way I have just described very often at all. What happened was that as the 1990s got under way, research interest and funding in artificial intelligence dried up, and a period began that is now known as the AI winter, in which really not a lot of progress was made in artificial intelligence at all.

The turning point for all of this, and this is where things start to get quite interesting, came in 1997, when Garry Kasparov, at the time the world chess champion, was beaten by a different supercomputer owned by IBM, called Deep Blue. Again, many people will be familiar with this case. What is important about it is this: in the 1980s, if you had gone and asked my dad and his colleagues, and remember these were some of the most progressive people thinking about technology and artificial intelligence at the time, do you think we will ever be able to build a machine that could beat someone like Garry Kasparov at chess, my dad and his colleagues would have said no, emphatically no. And the reason why they would have said no is very important, and it is very important for thinking about the future capabilities of machines. The reason they would have said no was that at the time, when they were trying to build these systems and machines, they thought the only way to do it, as I described, was to identify a human expert, sit down with them, get them to explain how they solved a particular problem, and then try to capture that human explanation in a set of instructions for a machine to follow. But here is the problem they saw in the 1980s: if you sat down with someone like Garry Kasparov and said, Garry, how is it that you are so good at chess, he would struggle to explain. He wouldn't be able to tell you. He might be able to give you a few clever opening moves or a few sneaky closing plays, but ultimately he would struggle. He would say things like: it requires creativity, or judgment, or intuition, or experience. And these were all things that were very hard to articulate, and so, it was thought, very hard to automate. If a human being can't explain how they perform a particular task, where on earth do we begin in writing a set of instructions for a machine to follow to perform that task? That was the mindset my dad and his colleagues had in the 1980s.

What, of course, they hadn't banked on was that exponential growth in processing power we saw before. By the time Garry Kasparov sat down with Deep Blue, and remember this was 1997, almost 20 years ago, Deep Blue was able to calculate up to 330 million moves a second. Garry Kasparov, at best, could juggle about 100 moves in his head on any one turn. Kasparov was blown out of the water by brute-force processing power and lots of data storage capability. In a sense, this system was playing a different game to him.
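The scale of that brute-force advantage is easy to illustrate with some back-of-the-envelope arithmetic. The branching factor below is a standard rough figure for chess rather than anything from the talk; the machine and human rates are the figures just quoted.

```python
# Back-of-the-envelope arithmetic for the brute-force contrast described above.
# The branching factor is a standard rough figure for chess (roughly 30-35 legal
# moves per position), not something from the talk; the two rates are the figures
# just quoted for Deep Blue and for Kasparov.

BRANCHING_FACTOR = 33           # rough average number of legal moves per position
MACHINE_RATE = 330_000_000      # positions per second, the figure quoted for Deep Blue
HUMAN_POSITIONS = 100           # positions Kasparov might juggle on a single turn

for depth in range(1, 7):
    positions = BRANCHING_FACTOR ** depth
    seconds = positions / MACHINE_RATE
    print(f"look-ahead depth {depth}: ~{positions:,} positions, "
          f"~{seconds:.2e} s of machine time, vs ~{HUMAN_POSITIONS} positions for the human")
# Even at depth 6 the machine has examined roughly ten million times more positions
# than the human considers on a single turn.
```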
We had a correspondence, when we were doing our work, with Patrick Winston, one of the founding fathers of the whole field of artificial intelligence back in the 1950s, and he said to us: there are lots of ways of being smart that aren't smart like us. I think this is a challenge to many white-collar workers, because many white-collar workers tend to think that the way to be smart is, in fact, to be smart exactly like them. We call this, and it is probably one of the most important ideas in our work, the artificial intelligence fallacy, the AI fallacy, and it is this: the mistaken assumption that the only way to develop systems that perform at the level of human experts, or higher, is to somehow replicate the thinking processes of human specialists. That is what we thought was the case in the 1980s, but what has transpired is that it really isn't the case any more.

So let me give you an example: judgment. Very often somebody will say to me, after hearing what I have said so far: look, Daniel, you don't understand, the work that I do requires judgment, and judgment is the sort of thing that simply cannot be performed by a machine; there is no way to automate it. And we say in our work that the question "can a machine ever exercise judgment, can a machine replicate the human faculty of judgment?" is the wrong question to be asking in thinking about the future of work. Instead, there are two more important questions to ask. The first is this: to what problem is judgment the solution? Why do people go to white-collar workers, a doctor, a lawyer, an accountant, and say, give us your judgment? And the answer to that question, it seems to us, is uncertainty. When the facts are unclear, when the information is ambiguous, when you don't know what to do, you go to a particular type of worker and say: give me your judgment, based on your experience, and help me make sense of this uncertainty. So really, the more important question we should be asking isn't whether a machine can ever exercise judgment, but whether a machine can deal with uncertainty better than a human being can. And the answer, in many cases, is of course it can. That is precisely what lots of these systems and machines are good at doing: they can handle far larger bodies of data than us and make sense of them in ways that we, acting alone, simply can't.

A nice example of this is the case I brought up at the start from the medical profession. Again, if you were to go back to 2003 and ask the leading economists in the world at the time who were thinking about technology and the future of work, tell me a task that you think cannot readily be automated, one of the tasks they named, alongside turning left in a bakery truck, was the task of medical diagnosis. Why did they think that in 2003? They had fallen into the same trap that my dad and his colleagues had fallen into in the 1980s: they thought the only way to perform a task like medical diagnosis was to sit down with a doctor, get her to explain how she made the diagnosis, and then try to capture that in a set of instructions for a system to follow. But this system, the system that performed the task of diagnosing a freckle, wasn't trying to replicate the judgment of a human doctor. It knows or understands nothing about medicine at all. What it has is a database of about 129,450 past cases, and it runs a pattern-recognition algorithm through those cases, hunting for similarities between the particular photo you have given it and the database of photos in question. It performs the task in an unhuman way, based on the analysis of more possible cases than a human doctor could hope to review in her lifetime. It no longer mattered that a human doctor couldn't explain how she performed the task; the system performed it in a fundamentally different way. It is what we call an increasingly capable non-thinking machine.
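The broad pattern being described, classifying a new case by its similarity to a large library of labelled past cases, can be sketched in a few lines. The actual Stanford system was a trained deep neural network, and the feature vectors and labels below are invented stand-ins, so this is an illustration of the general idea rather than of that system.

```python
# A toy of the broad idea described above: classify a new image by its similarity
# to a large library of labelled past cases. The feature vectors and labels are
# invented stand-ins; the real Stanford system was a trained deep neural network.
import math

LABELLED_CASES = [
    # (feature vector extracted from a past photo, diagnosis made by a dermatologist)
    ([0.91, 0.12, 0.40], "benign"),
    ([0.22, 0.80, 0.65], "malignant"),
    ([0.85, 0.20, 0.35], "benign"),
    ([0.25, 0.75, 0.70], "malignant"),
]   # the real database held about 129,450 labelled images

def classify(features, cases=LABELLED_CASES, k: int = 3) -> str:
    """Label the new photo with the majority diagnosis of its k most similar past cases."""
    nearest = sorted(cases, key=lambda case: math.dist(features, case[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(classify([0.88, 0.15, 0.42]))   # -> "benign" for this made-up input
```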
Can machines think? From the point of view of thinking about the future of work, I think that is a fantastically interesting philosophical question, but from an economic point of view it is not the question we should be asking. To see why, go back to IBM's Watson, the supercomputer that went on the US quiz show Jeopardy! in 2011 and beat the two human champions. The day after its victory, the Wall Street Journal ran a piece by the great philosopher John Searle with the title "Watson Doesn't Know It Won on Jeopardy". And it's brilliant, and it's true: Watson didn't let out a cry of excitement when it won, it didn't call up its parents to say what a good job it had done, it didn't want to go round the corner to the pub for a drink. It wasn't trying to replicate the thinking or reasoning processes of those human contestants; it wasn't thinking at all. And yet it still outperformed them. As I said, it is what we call an increasingly capable non-thinking machine, and that is what the second wave of artificial intelligence, the wave we are currently in, is about: systems and machines that are able to perform tasks which might require faculties like creativity or judgment or empathy when they are performed by human beings, but which, using lots of processing power, lots of data storage capability and increasingly smart algorithms, they can perform in very different ways.

So let us return now, in light of all of this, to white-collar work. I began by saying that many of us are familiar with the idea that technology might affect blue-collar work, the work of people in agriculture or manufacturing, but that the idea that it affects white-collar workers is more uncomfortable. So, in light of what I have just said, let's go back and think more concretely about white-collar work. If we are going to think clearly about this, one of the unhelpful things we do is talk about the different jobs that people do, and I have done this already today: I have talked about lawyers and teachers and doctors and accountants and architects and so on. But actually the term "jobs" is unhelpful, because it encourages us to think of the work of human beings as monolithic, indivisible lumps of stuff: a lawyer does lawyering, a doctor does doctoring, an accountant does accounting, and so on. Of course, what we know, and all of you will be familiar with this, is that people perform a wide range of tasks and activities in their work; they don't just perform one monolithic, indivisible lump of stuff.

Why does this matter for thinking about the future of work? Well, when we wrote our book, The Economist did a nice review of it, and alongside it was a great cartoon of "Professor Dr Robot QC". QC is the honorific given to distinguished lawyers in the UK. There is a sense, I think, when we think about the future of work in that top-down way, in terms of jobs, that the only way technological change can affect the work human beings do is by displacing entire jobs in an instant: Professor Dr Robot QC, or one of his relatives, pushes a human being out of their job. Clearly that isn't how technological change affects work. What it does is change the tasks and activities that people do in their work.

So, with that in mind, thinking about technology and the future of work in terms of tasks rather than jobs, the first reason I think technological change is likely to affect white-collar work more than people currently expect relates to an idea we call decomposition. If you take professional work, the work done by lawyers and doctors and teachers and accountants, take white-collar work and, rather than thinking about it in terms of jobs, break it into all the constituent tasks and activities that make it up, what you will find, and this is what research is increasingly finding at the moment, is that a large component of what professionals do is actually routine. It is relatively straightforward, relatively process-based, and easy for human beings to articulate how they do it, and so it is, in many cases, relatively straightforward for us to design systems and machines that can do it instead of human beings. That is the first thing to have in mind: when you break white-collar work down into all its different tasks and activities, a lot of them turn out to be routine.

The second reason I think a lot of white-collar work is also at risk from the sorts of technological change I have described this afternoon relates to the idea of the AI fallacy. Remember, the thought was that these systems and machines can't perform non-routine work, the sort of work that requires creativity or judgment or empathy, because these are the sorts of tasks and activities that human beings find very difficult to explain how they do, so where on earth do we begin in writing a set of instructions for a machine to follow? But to think in that way is to fall into the AI fallacy, and I think what is happening more and more is that many of those non-routine tasks that white-collar workers do are also being taken on by these systems and machines.
The temptation is to think that because machines can't think like a human being, they can't be creative like a human being; that because machines can't reason like a human being, they can't exercise judgment like a human being; that because machines can't feel like a human being, they can't exercise empathy like a human being. The mistake in all those cases is to fail to realise that while those tasks may require things like creativity or judgment or empathy when they are performed by human beings, what we are seeing more and more is that these systems and machines are able to perform them in very different ways, drawing on things like lots of processing power, lots of data storage capability and increasingly smart algorithms. That is the argument we make for why we think white-collar work, which has traditionally been thought to be out of reach of technological change, is also going to be affected, and is already being affected, just as happened in agriculture and just as happened in manufacturing. You break the work down into all these different tasks and activities, and it transpires that many of them are routine rather than non-routine; and secondly, in the second wave of artificial intelligence, many of the non-routine tasks can be done by systems and machines as well.

So what does this mean? What are the implications, and what do we draw from this? I just want to share, in the final few minutes, some thoughts on what this means for individuals, what it means for businesses, and what it means for governments.

For individuals, I think the main risk in the coming 10 to 15 years isn't that there won't be enough work for human beings to do, that there won't be enough demand for the work of human beings; the main risk will be that human beings lack the skills and capabilities to do that work. Both of those things lead people to be unable to find work, but, put another way, I think the challenge in the next 10 to 15 years is a skills challenge, an education challenge: making sure that people have the skills and capabilities to do the sorts of work that are available for human beings to do. Crudely, if a young person were to ask me, what strategies should I follow in thinking about the future of my job, what should I do to prepare for this world to come, I would say you have two strategies. Either you try to compete with the machines, you try to do the sorts of tasks and activities that as yet these systems and machines cannot do, and in spite of everything I have said there are lots of examples of these; or, alternatively, you try to build the machines, you try to become the sort of person who is capable of designing and operating these increasingly capable systems and machines. In the words of economists, there are two sets of tasks at which technology will complement human beings in the future, two sets of tasks at which technology will make human beings more productive rather than less productive, and these I think are the two categories: one, the set of tasks that machines cannot do; two, the set of tasks that involve putting these systems and machines to use. Those, I think, are the two strategies.

What does this mean for businesses, for thinking about the future of business? I think the main challenge here for businesses is one of mindset. There is a story that my co-author, my dad, likes to tell; he spends a lot of his time advising large companies. It is a story about the power-drill company Black & Decker, and it goes something like this.
At the Black & Decker away day, when they take their senior executives away to think about the company, they put up a slide with an image of a power drill on it, and the person hosting the session says to the assembled Black & Decker employees: this is what we sell, isn't it? And the slightly nervous audience nod and think, well, yes, this is what we sell; we are Black & Decker, we are a power-drill company, we sell power drills. And the people hosting the event say: no, this isn't what you sell. This, a hole in the wall, is what you sell, and your job as a business is to find ever more creative, ever more efficient, ever more productive, ever more effective ways of giving your clients what it is they actually want, which is the hole in the wall.

One of the things that has become very clear in talking to lots of large professional firms, and to lots of professionals and white-collar workers, whether it is doctors and teachers and lawyers, the big four auditing firms, the consulting firms and so on, is that many of them define themselves by the particular way in which they solve problems today. They have a sort of power-drill mentality. Doctors like the traditional craft of medicine, lawyers like the craft of lawyering, large professional consulting firms like the traditional approach of offering consulting advice. And what I think we are seeing technology do is change the ways, provide in a sense new power drills, new ways to solve these problems. The challenge for many businesses, particularly in the professions, is to be far more agnostic about the particular ways in which they solve problems and far more focused on the problems themselves, because I think technology is going to offer in the coming years, and we are already seeing it today, really quite different ways of solving the sorts of problems that have traditionally been solved by people in very particular ways.

Finally, let me finish with three thoughts on what this means for governments. The first question is education: what are we training the young to become, and how are we retraining the older? I spend a lot of time, as I said, talking to professional firms, professional associations and professional schools training young white-collar workers, and I think it is very clear that, as yet, most medical schools, most law schools, most professional training centres are training white-collar workers to be 20th-century rather than 21st-century professionals. There is a challenge here to make sure that we are giving people the skills and capabilities they need for the tasks and activities that will be important in the next 15 years, rather than those that were important in the last 15 years.

The second question is: who should own and control tomorrow's... and I have left this blank deliberately. In our work we are interested in an idea called practical expertise, which is the knowledge, the experience, the know-how of professionals, but the bigger question here really is who should own and control tomorrow's systems and machines. If it is right that these new technologies are going to become more prevalent in our economic life, the question of who owns these valuable new types of capital is going to become more and more important. Today we have lots of worries about big tech, about large technology companies and large internet service providers taking on or developing increasingly powerful systems and machines.
question will become more and more prominent over time.

The third thought, and I just want to finish on this, is a moral question, and I haven't said anything about the moral dimension of this problem at all today. What I have spoken about is the technical side of the story, what it might be possible to do. That is a very different question from the moral question, which is what we ought to do: not what these systems can do, but what they should do. This question becomes more and more pertinent when we think about white-collar work, about lawyers and doctors and teachers and accountants, because they are responsible for solving some of the most important problems in society: keeping us in good health, educating our children, making us aware of our financial and legal entitlements and so on. The impact technology might have on these areas of the labour market, on some of the most important problems we have to solve collectively as a society, starts to introduce quite an interesting moral dimension. For instance, very briefly: in the US we have systems that help judges make parole decisions, and we might feel comfortable with that, but how would we feel about a judge using a system like that to make life-sentencing decisions? There are now systems and machines, as we have seen, that can make medical diagnoses, and given that human doctors tend to make errors 10 to 20 per cent of the time, that is quite an exciting proposition; but how would we feel about using a system in a hospital to make decisions about the finite allocation of hospital beds, perhaps turning off life-support machines to make sure resources are used more efficiently and more productively? Even though such a system might make a more productive, more efficient decision than human beings, that feels, I think, to many people like a very uncomfortable proposition. So in our work we argue that we need a moral inquiry into the limits of some of these machines, and we have started to see this happen in various directions around the world, to try to mark out what the moral limits to these systems and machines, particularly in white-collar work, might be. In fact, as a British person, I think there is an interesting parallel with an inquiry that was carried out in the 1980s by Mary Warnock, a great philosopher, into IVF and test-tube babies. It was an emerging technology which raised lots of very difficult ethical and moral questions, not about what could be done but about what should be done in terms of the creation and the destruction of life, and her inquiry raised most of the important questions and helped establish a sort of moral consensus for thinking about what the limits to that technology might be.

So I will finish there. Thank you very much for your attention on this warm afternoon; I look forward to hearing your reflections and taking some questions in the time we have left. Thank you very much.

Thank you very much, Daniel, for this brilliant presentation. I put down a lot of notes; this is really food for thought, very interesting. I have a couple of questions, but I don't want to go first, so please tell me who wants to ask a question and we will take them one by one. OK, you were the first, yes please. It would be very useful for me if, before the question, you could say who you are, where you're from
and what you do, just so I can have some context as well. That would be great. So please, and we also have a microphone.

Question: Hi, I work here in Trento; as you may know, there are many people working on AI in Trento, and I am part of that community. You insisted a lot on the so-called AI fallacy: AI systems can do useful things for us even if they do not think the way we do. That is fine, but especially when it comes to white-collar work, using and exploiting these systems requires us to control them, and in particular it requires them to be accountable, and being accountable forces these AI systems, in a sense, to communicate with people the way people do. So even if we can be happy assuming that AI systems do not necessarily have to think, when it comes to decisions they have to be accountable, and being accountable means talking our language, explaining things and so on. And this is where we are now: these systems are not accountable.

It's a really interesting observation. I want to introduce another word alongside 'accountable', which is 'transparent', and it strikes me that what is important is that these systems are transparent, that people know how a particular outcome has been reached or how a particular decision has been made. What is interesting is that in the first wave of artificial intelligence, when those systems and machines were based on human reasoning, they were very transparent. It was very easy to understand why a particular medical diagnosis had been made; if I look at the system my dad built with the dean of the law school in Oxford, it was very easy to understand why it had reached a decision about whether or not a law applied to you, because it was based on human reasoning and you could very easily follow the reasoning of the system. What is challenging about some of these new systems is that they are opaque, or at least more opaque: because they are non-thinking machines, the reasoning they follow doesn't necessarily make sense to us at all. So I think it is very clear that an engineering challenge for the AI research community over the next decade or two is designing systems that can both achieve an impressive outcome and explain themselves in ways that human beings can understand; researchers have to attach mechanisms to these systems that help make them more transparent. I think it is transparency that matters in order to get accountability.

There is then another interesting question about accountability, which is: even if these systems are transparent, who is to blame, who is accountable, if they go wrong? One of the stories I really like is from the early days of the Google driverless car, when the car was pulled over somewhere on the west coast of America for speeding, and the question was who the ticket should go to. Does it go to the person who designed the system, or to the person sitting behind the wheel? The answer to that question, about where the accountability lies from a legal point of view, will be established, at least in the UK where we have the common law, by precedent; over time we will gradually, as a community and as a society, develop a set of laws and customs for thinking about how accountability ought to be attached to these systems and machines.

One final thought about accountability: not everything that white-collar workers do, not every task and activity, demands such a strict requirement of accountability. I can think of lots of things that doctors do that I am not really interested in holding them to account for. I want to hold them to account for the diagnoses they make, but managing their paperwork, reviewing the medical literature: there are lots of tasks and activities that white-collar workers do that perhaps don't have as demanding a requirement of accountability attached. So, in short, making these systems transparent is an interesting engineering challenge for lots of researchers now; accountability is going to be established by legal precedent, at least where I'm from; and I think the accountability requirement applies to some parts of white-collar work but not necessarily all of it. Thank you.
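A minimal sketch of the contrast being drawn here, with invented rules and symptoms rather than any real diagnostic system: a first-wave, rule-based system is transparent by construction, because its answer is simply the trace of the rules that fired, whereas a second-wave statistical system has no such trace, which is why explanation mechanisms have to be attached afterwards.

```python
# A toy first-wave, rule-based "expert system" (illustrative only; the
# rules and symptoms are invented). The point is that the conclusion
# arrives together with the chain of reasoning that produced it.

RULES = [
    # (conclusion, set of facts the rule requires)
    ("flu", {"fever", "aching_muscles"}),
    ("common_cold", {"runny_nose", "sneezing"}),
]

def diagnose(symptoms):
    """Return (diagnosis, explanation) for a set of reported symptoms."""
    for conclusion, required in RULES:
        if required <= symptoms:  # every fact the rule requires is present
            explanation = (
                f"Concluded '{conclusion}' because the patient reports "
                f"{', '.join(sorted(required))}, which satisfies that rule."
            )
            return conclusion, explanation
    return None, "No rule matched the reported symptoms."

diagnosis, why = diagnose({"fever", "aching_muscles", "cough"})
print(diagnosis)  # flu
print(why)        # a human-readable trace of the reasoning
```

In a learned model the equivalent of `RULES` is a set of numerical weights that do not read as reasons, which is the gap the transparency work described above is trying to close.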
Then you had a question? Yes, please, we have a microphone over here. No, I'm sorry, the second row first, and then you; I didn't see you.

Question: Good afternoon, I'm a private equity investor. You have spoken about blue-collar and white-collar work; I wonder whether you have given any thought to the 'green collar', to what is going on in the military.

It's interesting. Our book is called The Future of the Professions, and the military often think of themselves as the profession of arms; you will see that in our book we gave some thought to the military, particularly when it came to the moral questions. I gave the example at the end of whether we would feel morally comfortable with a system in a hospital making decisions to turn off life-support machines, and a very similar moral problem arises in the military with autonomous weaponry. We have systems and machines that can kill people more efficiently and more effectively than human beings, but efficiency and effectiveness are not necessarily the moral criteria we ought to be using in conflict, certainly not the criteria we think appropriate in all settings. So for us the military setting is where we test some of our concerns about the moral boundaries of these systems and machines. But there is no sense in which, once you break the work of soldiers down into its constituent tasks and activities, those tasks and activities are somehow immune from these changes; I don't think that's right at all, so I think a lot of these changes affect them as well.

Thank you. You, sir, in the last row, yes.

Question: I'm a junior economist, but I have a passion for moral philosophy as well. You described AI devices as increasingly capable non-thinking machines that approach problem solving not through critical reasoning; instead they sort of mimic the experience humans have, through their processing power and their capability for storing information. Yesterday, at another talk, somebody referred to an experiment done somewhere in America, at a university. Basically, what they did was they
wanted to predict potential offenders through deep learning, and they gave these AI devices images from a New York Police Department database, and the result was that the algorithm predicted that the probability of being an offender was larger among black people. So I was thinking that this is clearly an example of how, by omitting critical reasoning, these systems mirror human prejudices. How can we limit this problem?

I reached the opposite conclusion from that study, in the sense that the way that system likely worked is that it reviewed data produced by the behaviour of human beings, data created by the decisions human beings have made, and a large part of the racist bias that the system will have learned is a direct reflection of the racist bias that human beings, with their critical reasoning, put into the data through their actions and decisions. So in a sense that system, and many systems like it, hold a mirror up to ourselves: this is a system which learnt how to behave from the behaviour of human beings, and to an extent that is a rather damning reflection back on human beings, because the data was a reflection of human conduct. I don't know the particular system you are describing, so I can't talk authoritatively about it, but there is a whole field of algorithmic bias emerging that looks at precisely the sorts of problems you describe, and in many cases what troubles me is that these systems are revealing, capturing and uncovering biases that human beings didn't necessarily know they had, and that makes me uncomfortable about how human beings are using their critical reasoning. A great example of this is Microsoft's bot called Tay, which they had on Twitter. I don't know if people have seen it, but it was a system designed to interact with human beings on Twitter, and how did it learn to do that? It learned from what other people were saying on Twitter, and within a matter of hours it had become horrendously offensive and racist, espousing fascist views; it was really unpleasant. Again, for me, the system wasn't thinking, it wasn't reasoning; it had just picked up some quite distasteful behaviour from the thinking and reasoning of human beings. So I think we need to be careful, when we see these examples of biases appearing in these systems, to recognize and be clear about where those biases have come from. Then there is the question of how we get rid of them, how we strip them away, and that is what this field of algorithmic bias is trying to do, but first we have to be clear about the origin.
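A minimal sketch of the mechanism described in this answer, using synthetic data and hypothetical group names rather than the study mentioned above: even the simplest counting 'model', trained on biased human decisions, reproduces that bias in its outputs, because the skew lives in the data, not in any reasoning done by the machine.

```python
# Illustrative sketch with synthetic data (not the study discussed above):
# a trivial "model" that predicts by counting will faithfully reproduce
# whatever skew the human-generated training records contain.

from collections import Counter

# Hypothetical historical decisions made by people; the 70/30 skew is put
# in by hand to stand in for biased human behaviour.
history = (
    [("group_a", True)] * 70 + [("group_a", False)] * 30
    + [("group_b", True)] * 30 + [("group_b", False)] * 70
)

def train(records):
    """Estimate P(label | group) by simple counting."""
    totals, positives = Counter(), Counter()
    for group, label in records:
        totals[group] += 1
        positives[group] += label  # True counts as 1, False as 0
    return {group: positives[group] / totals[group] for group in totals}

model = train(history)
print(model)  # {'group_a': 0.7, 'group_b': 0.3}: the human skew, mirrored back
```

Removing the bias therefore means intervening in the data or the training procedure, which is what the field of algorithmic bias mentioned above works on.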
Are there any other questions? Then it's up to me; I have two more and then I'll let you go. I would like to know about the reactions to your work. It seems to me that not all categories were that happy about what you found out, thinking, in a rather old-fashioned way, that perhaps their labour, their job, was no longer needed. Is it right that only the architects liked your thesis, and why?

It's hard to generalize; different professions reacted in different ways, and in different ways in different countries. The one shift I would point to is that we began the book, began thinking about it, in 2009 and 2010, and the intellectual atmosphere was far more hostile to the ideas back then. In the subsequent seven or eight years there has been a growing recognition that these sorts of things are happening, and so it is now very rare, although it does happen, to meet the kind of rejection and hostility that might have been there a decade ago. I will say, though, that the most hostile audiences we talk to are often audiences of young professionals: young doctors in training, lawyers in training, teachers in training, who hear this and are furious. They are furious because they have spent a lot of time, and I don't know exactly what the system looks like in Italy, but in Britain, if you want to train to be a doctor or an architect, it is a very extensive and expensive process; they embark on this training, then they hear me and these ideas, and they are furious because nobody told them that they were training to do the sorts of things that were valuable and important fifteen years ago, but perhaps not in fifteen or twenty years' time, when they want to be at the peak of their careers.

So I suppose they will all tell their children to go to Silicon Valley.

That is the question: what should a young professional do? I still think that gaining some domain expertise in the professions is worthwhile: if you are interested in legal technology or medical technology or financial technology, spending some time in the traditional profession, learning the nature of the problems, learning the domain-specific expertise, is still useful. So I wouldn't necessarily discourage it as a route; what I would say is to be open-minded as you move through it, because there may be moments where you can move off in non-traditional directions.

OK, so my last question. I wanted to know, since as you said in your lecture there is no finishing line, what is coming next? Are you writing a new book, and where is all this going?

In fact I submitted a new book two weeks ago, and honestly it is on a similar set of themes, looking at how technology is affecting not only work but also, and this is something we haven't spoken about at all today, how technology affects inequality and the distribution of economic power in society. Not just economic power, actually, but also political power: I think a lot of people's worries today about big tech, about Facebook and Google, aren't always worries about economic power; they are worries about security, moral worries, political worries. So the book is really about how we will live together in a society where economic life, social life and political life look really quite different from how they do today.

So please come back next year, we would be delighted, and we would like to read this book in Italian, I suppose. Thank you very much, Daniel Susskind. Thank you.