Ethics and Emotional Intelligence in a Future of AI



Daniel Goleman,
Co-Director, Consortium for Research on
Emotional Intelligence in Organizations

There is an urgent need for compassion and empathic concern to play a central role in the creation of artificial intelligence.

There’s no doubt Artificial Intelligence (AI)–machines that reproduce human thought and actions–is on the rise, both in the scientific community and in the news. And along with AI, there comes “emotional AI,” from systems that can detect users’ emotions and adjust their responses accordingly, to learning programs that provide emotional analysis, to devices, such as smart speakers and virtual assistants, that mimic human interactions.
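To make that “detect the emotion, adjust the response” loop concrete, here is a minimal sketch in Python. The cue lists, function names, and canned replies are invented for illustration; real emotional AI relies on trained models over voice, text, or facial signals rather than keyword matching.

```python
# Illustrative only: a toy "emotional AI" loop that guesses a user's emotion
# from keywords and adjusts the reply's tone. Real systems use trained models;
# the cue lists and replies below are invented for this sketch.

NEGATIVE_CUES = {"angry", "frustrated", "upset", "annoyed", "hate"}
POSITIVE_CUES = {"great", "love", "happy", "thanks", "wonderful"}


def detect_emotion(utterance: str) -> str:
    """Crude stand-in for an emotion-recognition model."""
    words = set(utterance.lower().split())
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"


def respond(utterance: str) -> str:
    """Adjust the assistant's tone to the detected emotion."""
    emotion = detect_emotion(utterance)
    if emotion == "negative":
        return "I'm sorry this has been frustrating. Let me try to help."
    if emotion == "positive":
        return "Glad to hear it! What would you like to do next?"
    return "Okay. How can I help?"


if __name__ == "__main__":
    print(respond("I am so frustrated with this speaker"))
```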

As the pace of AI development and implementation accelerates–with the potential to change the ways we live and work–the ethics and empathy that guide those designing the technology of our future will have far-reaching consequences. It is this moral dimension that concerns me most: do the organizations and software developers creating these programs have an ethical rudder?

Long before the concept of AI became commonplace, science fiction writer Isaac Asimov introduced the “Three Laws of Robotics” in his 1942 short story “Runaround” (which was later included in his 1950 collection, I, Robot):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Much of Asimov’s robot-based fiction hinges upon robots finding loopholes in their interpretations of the laws, which are programmed into them as a safety measure that cannot be bypassed. Asimov later added a “Zeroth Law”: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” This enabled his robots to kill individual humans in service of abstracted humanity.

Above all, Asimov’s work suggests that there is no fixed set of laws that can adequately account for every possible scenario intelligent machines will encounter. Still, Asimov’s laws serve as a foundation for contemporary thought on the ethics of AI.
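As a thought experiment, the Three Laws can even be written down as a priority-ordered rule check, which also exposes their weakness: everything hinges on clean yes-or-no judgments about harm, and that judgment is precisely where Asimov’s robots find their loopholes. The Order type and its boolean fields below are hypothetical, a sketch rather than a workable safety mechanism.

```python
# A sketch of the Three Laws as a priority-ordered rule check. The Order type
# and its boolean fields are hypothetical; in practice, deciding what counts
# as "harm" is the unsolved problem the laws quietly assume away.

from dataclasses import dataclass


@dataclass
class Order:
    compliance_harms_human: bool    # would carrying out the order injure a human?
    refusal_harms_human: bool       # would refusing allow a human to come to harm?
    compliance_destroys_robot: bool


def should_comply(order: Order) -> bool:
    # First Law outranks everything: never harm a human, by action or inaction.
    if order.compliance_harms_human:
        return False
    if order.refusal_harms_human:
        return True
    # Second Law: otherwise obey. The Third Law (self-preservation) is
    # subordinate, so compliance_destroys_robot never justifies refusal here.
    return True


if __name__ == "__main__":
    print(should_comply(Order(False, False, True)))   # True: obey even at the robot's expense
    print(should_comply(Order(True, False, False)))   # False: the First Law forbids it
```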

In June 2018, Google created a code of ethics for its artificial intelligence programs in response to an April 2018 petition signed by 4,000 employees and the resignations of about a dozen others. The petition demanded that the company terminate its contract with the US Department of Defense’s “Project Maven,” which uses AI to aid drone strikes. Employees described the project as “biased and weaponized AI.”

Google later chose not to renew the contract, which will end in March 2019, and dropped its bid for the new, $10 billion Joint Enterprise Defense Infrastructure (JEDI) program, citing conflicts with its corporate mission and AI code of ethics.

Yet, while Google has committed to not pursuing “technologies whose purpose contravenes widely accepted principles of international law and human rights,” it has outlined few specifics on how this will work in practice. Further, the vagueness of its ethical code leaves room for profitable defense deals in the future.

The petitions and resignations of Google employees are part of a larger movement in which tech workers have exercised meaningful influence over their employers. Employees expect their leaders to walk their talk and act in accordance with the inspirational missions they espouse. And that includes taking a stand on controversial issues.

A research survey by Data & Society asserts that for AI to “benefit the common good,” it must avoid harm to fundamental human values. This includes the extent to which we allow AI to make decisions of its own. Machine learning systems, which process vast quantities of data and make decisions based upon that data–such as which candidates to interview for a job–have already been shown to exhibit bias and discrimination. Biased data yields biased learning systems.
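A small illustration of how biased data yields a biased learning system: in the synthetic hiring records below, one group was historically interviewed less often regardless of skill, and an ordinary classifier fit to those records reproduces the disparity. The numbers, features, and use of scikit-learn are assumptions made for this sketch, not a description of any real hiring tool.

```python
# Illustrative only: synthetic "hiring" data in which equally skilled candidates
# from group B were historically interviewed less often. A model trained on
# those records simply learns and repeats the bias. All numbers are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B
skill = rng.normal(size=n)           # identical skill distribution in both groups

# Historical decisions: skill matters, but group B is penalized regardless of skill.
interviewed = (skill - 1.5 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, interviewed)

skill_level = 0.5                    # compare two equally skilled candidates
prob_a = model.predict_proba([[skill_level, 0]])[0, 1]
prob_b = model.predict_proba([[skill_level, 1]])[0, 1]
print(f"Predicted interview chance at equal skill: group A {prob_a:.2f}, group B {prob_b:.2f}")
```

Running this shows a markedly lower predicted interview chance for group B at identical skill. The model was never told to discriminate; it simply learned from data that already did.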

Proposed solutions to AI’s potential for bias and harm often hinge upon human rights. International human rights, proponents argue, offer an established ethical framework for programming AI. I wholeheartedly agree that we should mandate that AI cannot be programmed to harm. Software developers have an ethical obligation to be transparent about their work. (Open-source AI projects, including OpenCog and OpenAI, offer important resources in this regard.) But we should not rely solely upon codes of ethics or international human rights to govern emerging technologies.

New research finds that reading the Association for Computing Machinery’s (ACM’s) code of ethics has no impact on the decision-making of computer scientists. In contrast, an awareness of newsworthy incidents, such as Volkswagen’s “Dieselgate,” in which the company cheated on emissions tests, did impact the decisions computer scientists said they would make.

Thus, the opportunity for a programmer to see the potential consequences of their actions–not in the form of an ethical code, but in the context of similar decisions in the news–could create a vital ethical rudder. I suspect this outcome is due largely to empathy. Ethical codes rely on generalizations and abstractions to encompass a range of scenarios. But when we zero in on a single, real scenario–such as “Dieselgate”–the consequences, including the impact on our children and the environment, come into focus.

My late uncle, Alvin Weinberg, was a nuclear physicist who often acted as the conscience of that sector. He once confided to me his ambivalence about for-profit companies running nuclear power plants; he feared that the profit motive would lead them to cut safety measures–a premonition of what contributed to the Fukushima disaster in Japan.

While it seems likely that many AI programs will remain in the hands of for-profit companies, I encourage employees to continue speaking up. Hold your organizations accountable for acting in accordance with the values they promote.

In short, there’s an urgent need for empathic concern and compassion to play a central role in the creation of artificial intelligence. As with nuclear power, putting AI in the hands of for-profit companies poses an ethical risk.

That’s one reason emotional intelligence may prove particularly important–not only in light of the empathic concern we must bring to the creation of AI, but for the future of work in an artificially intelligent world. AI will continue to automate logic-based tasks, such as diagnosing medical cases and managing investments. But paradoxically, the human side that involves understanding, engaging, and interacting with others–the skills of emotional intelligence–will become increasingly critical. Competencies including empathy, influence, and teamwork differentiate human capabilities from the work of AI and machine learning. While an AI system could learn to diagnose an illness, the ability to credibly express compassion for a patient’s situation, and to develop a treatment plan that works for them, necessitates a human touch.

But here’s my main qualm about EI in the age of AI. There are two kinds of knowing needed to create a meaningful moral rudder. The first is self-awareness, the second empathy.

Self-awareness in the realm of meaning and purpose requires us to get in touch with messages from ancient neural circuitry in structures deep down in the brain. These can give us a reading of a situation or scenario in terms we express as “feels right” or “feels wrong.” Some neuroscientists argue that this primitive sense of right and wrong operates as a moral compass, one we then put into words as our guiding principles and values in life. An AI algorithm can’t sense a moral message in this way–it just follows the rules it has been programmed for.

Then there’s empathy, particularly acts of moral imagination where we sense how a given decision would impact others. With this imagined other, we engage in a kind of empathy that seems far beyond the capabilities of an AI system.

Bottom line: AI cannot become truly emotionally intelligent; for one, the brain circuitry that gives rise to this human skill set is too complex to model. While an emotional AI algorithm might well read human emotions, it cannot do emotional tasks akin to being self-aware and experiencing deep empathy. And for the foreseeable future, our relatively minimal understanding of consciousness will limit the consciousness we can program artificially.
