Scared? Convenient? It’s already begun—the society of AI on the job

July 26, 2016

Kentaro Fukuchi
Associate Professor,
School of Interdisciplinary Mathematical Sciences, Meiji University
 
Recently, artificial intelligence (AI) has been in the spotlight. People are talking about a program that beat a master at the game of go, a hotel where a robot handles customer service, self-driving cars that become more realistic by the day, and so on. While a lot of hope is riding on the development of AI, some people fear that their jobs will be taken over by machines. There are other things to worry about with regard to the use of AI, however.
Will AI replace humans?

There is a lot of interest in AI. But excessive hope or fear diverts our attention from the true nature of the problem. For example, when a human master lost to a computer at the game of go, there were numerous reactions, and among them was the fear that the intelligence of computers has exceeded that of humans. But let's stop for a minute. Were we really good at the game of go to begin with? The human brain is not designed for the repetitive symbolic processing used in calculation. That's why people who can compute quickly or play go well are few in number, and why those few geniuses at computation or masters at go, activities we are not designed to excel in, garner respect. But put a computer in there and it's not even a competition. Computers rule the realm of calculation, and they are about to do the same in go.

The work of AI requires human pragmatism: the trade-off
But that takes nothing away from human beings. We seem to be on the verge of creating the best possible way to play the game of go. Just as we created cars as a way to travel down the road and airplanes as a way to fly through the sky, we are good at observing things and figuring out their mechanisms.

And here comes the warning. A car cannot run to its potential unless the road is flat. An airplane cannot land or take off without a maintained airfield. Compared to organisms in nature, the things we create depend heavily on artificially maintained environments. But we think of that as a good thing and accept the work of continuously maintaining our environment. We trade off that labor for the benefits of convenience.

The same is true of AI. In 2015, the talk of the town was a hotel where a robot provides customer service. The robot at the reception counter interacts with guests and does the work of a receptionist. But what of other hotels? Will they replace their receptionists with robots?

Now, I want you to recall. In the past, train tickets were purchased by telling the clerk the name of the station and paying whatever the fare was. Gradually, this process was replaced by automatic ticket machines. Now, a user looks up the fare to the destination, inserts that amount into the machine, and buys the ticket. Previously, the fastest way to figure out connecting trains was to ask the clerk, and if you happened to be on a trip, you could also get information on local spots. But that kind of flexible service is not needed if all you want is a ticket. Businesses take the pragmatic view that the ticket machine's service is simply to sell tickets, and users feel the same way. Through this agreement we arrived at the current situation.

Reception at hotels is in a similar situation. Currently, even the best AI cannot fully provide what a human clerk can. That's why, when introducing a robot, both the guests and the business need to understand this trade-off. Automation occurs only where this agreement has formed.

In that sense, whether a human job can be replaced by an AI is not so much a problem of AI as it is a problem of people. How we design this process of agreement formation will decide whether we can benefit from AI broadly and fairly in the society of our future.

AI as infrastructure

For an AI to have sufficiently sophisticated capabilities, in addition to the mechanism of the AI itself, enormous computational power to drive it and enormous amounts of training data are indispensable. Only a handful of major companies are currently capable of assembling all of this to provide a wide range of services. The concentration of control over the process of agreement formation in these organizations may create a whole new social divide.

For example, when developing an AI, businesses that can gather data broadly tend to have an advantage, which carries the risk of overconcentration of data. As a result, the number of businesses able to integrate sophisticated AI into their services is limited, which may bring about an oligopoly. Data directly linked to services with a public aspect should be deemed social infrastructure and provided in a way that everyone can use.

Another concern is that only one party's demands will be met during the agreement formation of the trade-off mentioned above. If there is healthy competition among providers, users will be able to choose services whose trade-offs fit their needs. However, if the number of businesses is limited for the reasons above, trade-offs more beneficial to the businesses will be forced upon users.

A trade-off requiring particular attention is the trend in which the thought process of AI is becoming a black box. Already, for go programs, it is said that human beings are increasingly incapable of discerning how exactly the programs select their moves. The larger the computational resources and data driving an AI become, the more difficult it is for human beings to decipher its steps. At a certain point, we fall into the habit of ignoring the inner workings of things and keep using them as long as they are convenient and the results are favorable. The same is true of conventional machines and services without AI, but only up to a point. The difference between those older machines and AI is that AI grows. AI, which is built on learning from the data given to it, absorbs more data as it is used and continues to grow.

But if something inconvenient happens during that growth process, that data may simply be set aside, or the AI may never learn of an adverse result if it is filtered out. If this is repeated, learning proceeds on convenient data that conforms to the AI's own predictions, leading to overadaptation. Human cognition has the characteristic that intrinsic bias is very hard to recognize, and an AI that has turned into a black box can reinforce that bias. It would be a disaster if it started controlling things to benefit a particular organization or system. To keep that from happening, society must be organized in such a way as to prevent the standardization and oligopolization of AIs.

Society where AI creates
As argued at the beginning, human beings are neither very good at calculation nor good at observing data without bias. On the other hand, human beings are reasonably well suited to creating things from theory by proposing hypotheses. The discussion of AI is often framed from the perspective of consumption, that is, how human beings can benefit from it. But if we think about capitalizing on what we are actually good at, it is even more interesting to consider how AI can be used in the creative process. Using AIs as tools for creation will necessarily require diversity among them, and better creation will require thorough knowledge of the characteristics of an AI as a tool. Japan is just now starting to discuss strengthening programming education to better utilize computers; the same thinking should be applied to AI. Spreading the power of AI throughout society by preventing excessive black-boxing, resisting control of AI by authorities, and developing people who can use AI creatively are requirements for the continuation of human society.


* The contents of articles on M's Opinion are based on the personal ideas and opinions of the author and do not indicate the official opinion of Meiji University.

Profile

Kentaro Fukuchi
Associate Professor, School of Interdisciplinary Mathematical Sciences, Meiji University

Research Fields:
Computer science, interactive media, cognitive science

Research Topics:

Research on self-image cognitive process/interface for stage performance
[Key word] Interface, media, entertainment computing

Main Books and Papers:
◆”An Evaluation of Concurrent Manipulation of Multiple Components on a Multipoint Input GUI” (“Taten nyuryoku GUI ni yoru fukusuu obujyekuto no heiko sousa no hyoka”) (Information Processing Society of Japan Journal, Vol.49, No.7, 2008)
◆”A Remote Input System Tracking Laser Trails for Visual Performance” (“reza pointa no kiseki wo tsuiseki suru eizo pafomansumuke enkaku nyuryoku sisutemu”) (Information Processing Society of Japan Journal, Vol.49, No.7, 2008)

