In most countries, one becomes a “senior citizen” upon turning 65. By that measure, AI just turned 65 this year.
While most countries have “adopted” AI as an aspirational goal, one nation, Saudi Arabia, has even conferred citizenship on Sophia, an AI robot “fathered” by inventor David Hanson.
AI as a concept and a subject was formally born in 1956 at Dartmouth College in Hanover, New Hampshire, in the US. Its parents were John McCarthy (then a young Assistant Professor of Mathematics at Dartmouth), Marvin Minsky, Nathaniel Rochester and Claude Shannon. Their proposal stated:
We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
The proposal also discussed computers, natural language processing, neural networks, computation theory, abstraction, and creativity. We have come a long way since then.
While most AI proponents have become proficient in AI programming, few have devoted time or effort to examining the ethics underlying AI development and deployment. The noted thinker and writer Isaac Asimov explored the ethics of intelligent machines as early as the 1940s, in stories collected in his 1950 masterpiece I, Robot. Prompted by his editor John W Campbell Jr, Asimov proposed the Three Laws of Robotics to govern AI systems.
He spent considerable time testing the boundaries of his three laws, considering cases where they would break down or produce unanticipated behavior. The verdict: no set of fixed rules can anticipate every event or circumstance in which its ethical intent might be overridden.
More recently, academics and many governments have challenged the idea that AI can itself be held accountable. A panel convened in Britain in 2010 revised Asimov’s laws to clarify that responsibility for AI lies with its manufacturers, owners or operators.
An interesting experiment was conducted at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland in 2009. Robots were programmed to cooperate with each other in searching for a beneficial resource while avoiding a poisonous one. The intriguing outcome: the robots eventually learned to lie to each other in an attempt to hoard the valuable resource, Popular Science reported on Aug 18, 2009.
So it is apt that the Singapore Computer Society (SCS) and the Infocomm Media Development Authority of Singapore (IMDA) have collaborated to produce a unique AI Ethics & Governance Body of Knowledge (AI E&G BoK) that takes a deep dive into the ethical aspects of AI. The BoK is a “living document”, designed for periodic updates and enhancements. It comprises 22 chapters across seven sections, with contributions from 30 authors and 25 reviewers.
As an aside, Saudi Arabia bestowed citizenship on the robot Sophia in Oct 2017, becoming the first nation to grant citizenship to a robot. Hong Kong-based Hanson Robotics built Sophia in 2015; its inventor David Hanson designed the robot to mimic 62 human facial expressions. I’m not sure if irony was one of them.