Over the year-end and New Year period, a mysterious Go program won 60 straight games online, beating top players from Japan, China and South Korea and attracting attention for its extraordinary strength. It turned out that the program was an updated version of "AlphaGo," a program developed by a Google subsidiary that uses artificial intelligence (AI).
The program is now far stronger than it was in March last year, when it shocked the world by beating South Korean master Lee Se-dol, who has a ninth dan ranking. It has even been rumored that the software can no longer be beaten by humans -- evidence of the astonishing speed of the evolution of artificial intelligence.
The wave of information and communications technology, including AI and the internet, together with the technological revolution incorporating robotics, has collectively been dubbed the "fourth industrial revolution," and it is having a large impact on our lives and society.
If the ability of AI advances further, will it one day come to rule over humans? Will a divide emerge between those who make use of AI and those who don't? Such questions are currently being raised.
One expanding area of technology that makes use of AI is the field of autonomous vehicles. There are already vehicles on the market that have automated two of the three main controls in cars (the accelerator, the brakes and the steering wheel), and competition in the development of completely automated vehicles that need no driver is intensifying. Particularly noticeable is cross-industry cooperation between car manufacturers and IT firms, such as that between Honda and Google.
The day when someone can use a smartphone to summon a self-driving taxi to their home and have it take them to their destination is no mere dream. More self-driving vehicles would mean fewer accidents caused by driver error and less traffic congestion, and this would suit the needs of an aging society. The benefits for society would no doubt be boundless.
Yet a question remains: where does responsibility for accidents lie? Under current laws, if someone is killed or injured in an accident, the driver can generally be held liable for damages, but the same would not apply to autonomous cars that have no driver.
The General Insurance Association of Japan last year compiled a report pointing out the need to overhaul legislation. Cases in which the responsibility of car manufacturers is called into question, and in which the issue of compensating victims arises, may become more frequent. The Japan Society of Mechanical Engineers, meanwhile, held two mock trials last year to consider the legal issues involved.
Kimihiko Nakano, an associate professor in the Institute of Industrial Science at the University of Tokyo, commented, "If the responsibility of manufacturers alone is harshly questioned, then it will delay the development of technology. That said, we cannot ignore relief measures for victims."
Japan needs to form social consensus on matters like this through wide-ranging discussion and prepare legislation. The issues at hand are not restricted to autonomous vehicle technology but apply to the introduction of AI in general.
The use of AI in the medical field is not far off, either. Researchers including Satoru Miyano, a professor at the Institute of Medical Science at the University of Tokyo, have been working with IBM's AI system Watson, researching its use in the diagnosis of cancer and the selection of medicines.
Last year, the researchers input into Watson the genetic data of a leukemia patient whose recovery had been delayed by the side effects of treatment, even though the anticancer drugs themselves were effective. Within about 10 minutes, Watson identified a characteristic genetic mutation. Based on this information, the patient's medication was changed, and the patient is said to have subsequently recovered.
Watson's strength lies in the massive amount of information it has absorbed: over 20 million pieces of medical literature, 15 million items of patent information, and data on cancer gene mutations.
There are limits to how much of the vast existing amount of medical information a single doctor can cover. That's precisely why AI can become an able "assistant" for a medical practitioner. But there will probably emerge fields in which AI will compete with humans.
"Eventually, a lot of white-collar work will be snatched away by AI." Noriko Arai, a mathematician at the National Institute of Informatics, has been voicing this warning from an early stage.
During development of "Torobo-kun," an artificial intelligence system whose goal was to pass the entrance exam to the University of Tokyo, researchers learned where AI falls short, Arai says. "AI's essential weakness is that it does not understand meaning, and there are currently limits to its reading comprehension," she says. On the flip side, that also means AI can quickly take over work in which understanding is not essential. It is therefore important for humans to equip themselves with capabilities that AI does not possess, she says.
A panel of experts at the Cabinet Office last autumn held a discussion on the ethical aspects of conducting research and development as well as using AI, in addition to legal issues. They surmised that cooperation with AI was an "extension of human ability," pointing out that it could become the basis for a new value system.
Experts also underscored the importance of researching what AI can and cannot do, and of cultivating abilities unique to humans. This is surely one of the issues that should be tackled.
Some scientists see further possibilities for AI. They believe that once the operation of AI reaches the same level as that of the human brain, AI will be able to understand like humans and have a mind.
While such scenarios currently remain thought experiments, beyond self-driving cars and the Watson system we may see the emergence of AI that requires us to fundamentally probe the question, "What are humans?"
Even today, thinking about issues pertaining to AI leads us to think about what humans are. We have arrived in an age in which we should focus on the future of humanity and seriously confront the issues.