The Humanity of AI - A Spotlight on the Ethical Questions

Mon, 01/28/2019

MSL engaged two ethics experts, John Havens and Jeff Catlin, on a rich topic: the humanity of AI.

MSL: What is the human role in AI interactions?

Jeff: It is similar to the human role in computer-human interactions. When using AI, we must ask the right questions, provide data and act on recommendations and predictions. If we point AI the wrong way, we're going to make bad decisions. A human should have their finger on the trigger for any decision that could impact someone's life. AI should make recommendations that humans can act on.

MSL: As AI starts leading so many customer interactions, what is the danger of losing the humanity in customer/brand relationships?

John: The primary loss of humanity comes from people's inability to access and control their data. While government protections like the European Union's General Data Protection Regulation (GDPR) are essential, corporations and brands must support sovereign identity structures that allow individuals to have their own digital/algorithmic 'terms and conditions.' Organizations like Meeco.me and MyData are paving the way for blockchain-like, peer-to-peer (P2P) methodologies where individuals can provide incredibly specific and useful information to brands, but only when companies commit to using that data the way customers stipulate.

MSL: How do brands and companies deal with this challenge and maintain humanity in customer relationships?

John: In terms of data, companies should create P2P, blockchain-enabled channels for their customers right away. The first brands to do this will experience a massive burst of trust from customers and a clear market edge. Soon, organizations that don't allow users to have their own terms and conditions for exchanging personal data will seem not only behind the times but of questionable ethical standing.
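To make that idea concrete: a customer's personal 'terms and conditions' could travel with their data as a machine-readable consent record that a brand's systems must check before every use. The Python sketch below is purely illustrative; the names (DataTerms, allowed_purposes, permits) are hypothetical and do not describe Meeco.me's or MyData's actual systems.

    from dataclasses import dataclass, field
    from datetime import datetime, timedelta, timezone

    # Illustrative sketch (not any real platform's API): a machine-readable
    # "terms and conditions" record an individual attaches to their own data.
    @dataclass
    class DataTerms:
        subject_id: str               # pseudonymous identifier for the individual
        allowed_purposes: set         # e.g. {"order_fulfilment", "support"}
        allow_resale: bool = False    # may the brand pass this data onward?
        expires: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc) + timedelta(days=365)
        )

        def permits(self, purpose: str) -> bool:
            """Check a proposed use against the customer's stipulations."""
            return purpose in self.allowed_purposes and datetime.now(timezone.utc) < self.expires

    # The brand's system consults the terms before each use of the data:
    terms = DataTerms("user-42", {"order_fulfilment"})
    print(terms.permits("order_fulfilment"))  # True: the customer stipulated this use
    print(terms.permits("ad_targeting"))      # False: never agreed to

In a blockchain-enabled P2P channel, a record like this (or a hash of it) would be anchored so both the customer and the brand can later prove what was agreed.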

Disclosure is essential to using AI to interact with customers. People don't like to feel tricked, even when an AI chatbot or algorithm is technologically exciting. Disclosure doesn't have to be dry and overly legal; it's communication about your AI system that teaches customers how and why you're reaching out to them. This has to do with agency. When people feel they're part of a magic show, for instance, they're open to being surprised and amazed. In the theatre world, this is called "the willing suspension of disbelief." But when a device is in your home and you don't know why it did something, you will not trust the manufacturer no matter what their intentions are. Disclosure is the key.

MSL: How do you think AI can contribute to better ethical decision-making?

Jeff: The most powerful enemy of ethical decision-making is bias. Biases can be implicit or explicit, known and unknown. All AI has biases; it is not Spock, the mythical pointy-eared Vulcan ruled by pure logic. AI bias depends on the data we feed it and how we structure the ‘learning’.

AI’s major advantage is that its biases are potentially quantifiable and always consistent, so we can account for them. This reduces biases’ impact and leads to inherently more ethical decisions.

MSL: Can we remove biases from the data we feed into AI? What do humans need to understand about this and how do we compensate?

Jeff: Biases can be minimized in AI, but not removed entirely. Society is biased, and therefore so is the data we generate. Keep in mind, bias is simply a slant. It isn't inherently good or bad, but it should be quantified.

There's a lot of interesting research going on right now on detecting problematic bias; for example, the sort that would lead a deserving person (in the U.S.) to be denied a loan because of their ethnicity. There are no great answers yet, but best practices are to gather data very broadly and to build aggressive test cases. A broad testing set allows institutions to understand biases and account for them with training data or in direct tuning, as the sketch below illustrates.
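One standard way to quantify bias on a test set is the disparate impact ratio: the rate of favorable outcomes for one group divided by the rate for another. The Python sketch below is a minimal illustration; the data and the 0.8 threshold (the 'four-fifths rule' from U.S. employment guidance, used here only as a rule of thumb) are illustrative, not a description of any particular institution's process.

    # Minimal sketch: quantifying bias in model decisions with the disparate
    # impact ratio. The (group, outcome) pairs below are made-up test data.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    def approval_rate(group: str) -> float:
        outcomes = [approved for g, approved in decisions if g == group]
        return sum(outcomes) / len(outcomes)

    # Disparate impact: one group's approval rate relative to the other's.
    ratio = approval_rate("group_b") / approval_rate("group_a")
    print(f"approval rates: a={approval_rate('group_a'):.2f}, b={approval_rate('group_b'):.2f}")
    print(f"disparate impact ratio: {ratio:.2f}")

    # Rule of thumb: a ratio below 0.8 flags potentially problematic bias,
    # warranting a look at the training data or direct tuning.
    if ratio < 0.8:
        print("potential problematic bias: investigate data and retune")

On this toy data the ratio is 0.33, which would trigger exactly the kind of investigation Jeff describes: broaden the data gathering, add test cases, and retrain or tune.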

MSL: John, you've written it is time for the AI industry to create ethical standards and leverage innovation stemming from transparent stakeholder dialogue. Can you explain?

John: This is what we've tried to do with IEEE, which now has fourteen approved standardization projects (the IEEE P7000™ series) focused on ethical Autonomous and Intelligent Systems (A/IS). Standards form a type of soft governance and work because they're created by consensus over two to three years with a lot of people from multiple industries asking questions like, "How do we eradicate negative algorithmic bias?" It is in these in-depth, global conversations that innovation happens. People truly empathize with colleagues in various disciplines and fully understand cultural specifics that affect AI design.

MSL: With AI, some companies may choose to reduce reliance on the human workforce. What is your take on the ethics of an automation-led society?

John: In the current status quo, I see no motivation for businesses to avoid automation just to ensure humans keep their jobs. Unless society moves away from its focus on gross domestic product (GDP)-driven exponential growth, which automation serves, the bottom line will always win. This isn't any company's fault: where shareholder value is enshrined as the ultimate goal, it's a legal mandate. Unless we adopt triple-bottom-line priorities as a society, with new legal models like the B Corporation providing ways for organizations to get from point A to point B, there will be no labor equality. Everything that can be automated will be. It's not a question of if, but when. You will never read a headline saying, "The X industry has decided to stop creating AI because they're worried about human workers."

MSL: What are the ethical risks of AI? What kinds of corporate crises could AI ethical lapses spark?

Jeff: There is an unfortunate tendency to believe AI is some sort of panacea for all human problems. Humans are seen as being slow and expensive. That may be true in some cases, but we're also really good at certain decisions. If we take humans completely out of the loop on life-changing decisions and leave everything to AI, there is the potential to make lots of really bad decisions really quickly.

Some major AI ethical issues will look familiar, such as the pathological thirst for data that leads to risky and unethical decision-making. Some issues will be new, for instance, blaming a medical device company's machine or AI for products shipped out with bad parts. A discussion of autonomous car liability is a good overview of the possibilities.

MSL: How will AI make business leaders smarter?

Jeff: When we talk about getting ‘smarter’ with AI, semantics are important. AI alone will not make anyone smarter. More knowledgeable, yes; able to process more data, yes. But not smarter, and therein lies the rub, just like when using a computer. People can drive AI to make poor recommendations, then use it to justify unethical behavior. Now more than ever, influencers and decision-makers must understand AI's positive and negative implications and leverage it to maximize the positive impact on business and society.

MSL: How should companies develop and design AI products and services to ensure they maintain humanity?

John: If companies truly want to ensure humanity in customer interactions, they must keep humans involved in all AI processes. Beyond ‘human in the loop’ protections to make AI transparent and accountable, this means actually prioritizing areas where AI will never be allowed to take over a human's role, at least not completely.
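Havens doesn't prescribe an implementation, but in practice a 'human in the loop' protection often reduces to a simple gate: the AI may recommend anything, while decision types placed off-limits always route to a person. A minimal illustrative Python sketch, with all names hypothetical:

    # Illustrative human-in-the-loop gate: AI recommends, but decisions that
    # can affect someone's life are always routed to a human reviewer.
    HIGH_STAKES = {"loan_denial", "medical_triage", "account_termination"}

    def request_human_review(decision_type: str, recommendation: str) -> str:
        # Stub: in a real system this would enqueue the case for a reviewer.
        print(f"[review queue] {decision_type}: AI suggests '{recommendation}'")
        return "pending_human_review"

    def decide(decision_type: str, ai_recommendation: str) -> str:
        if decision_type in HIGH_STAKES:
            # The AI output becomes advisory input, never the final word.
            return request_human_review(decision_type, ai_recommendation)
        return ai_recommendation  # low-stakes decisions may be automated

    print(decide("loan_denial", "deny"))         # escalated to a person
    print(decide("newsletter_topic", "sports"))  # safe to automate

The design choice is that the HIGH_STAKES set is an explicit, auditable list, which is one way to "prioritize areas where AI will never be allowed to take over a human's role."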

The other thing to do is train humans in emotional intelligence, empathy and compassion. While AI can replicate aspects of human emotions, it is not human or emotive, but a mirror of our innate abilities. This doesn't mean AI-oriented emotion is not fantastic and hugely useful. There are times when humans prefer AI emotion over human scrutiny; a great example is soldiers with post-traumatic stress disorder (PTSD) talking to AI therapists.

The final thing is to prioritize applied ethical methodologies in design. This approach, often called value sensitive design and based on Batya Friedman's work, functions as a sort of ‘agile for ethics.’ These methodologies don't use utilitarianism or virtue ethics to make final decisions, but they help design teams ask questions grounded in those traditions that they may not have asked in the past. AI directly affects human agency, emotion and identity, so methodologies that provide new levels of due diligence are extremely important to employ.

Header Image: 6eo tech on Flickr


John C. Havens
John C. Havens is executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and of The Council on Extended Intelligence. He is the former executive vice president of Social Media at Porter Novelli, where he worked with clients including P&G, Gillette, HP and Merck. John is a contributor to Mashable and The Guardian on technology and wellbeing issues and the author of the books "Heartificial Intelligence: Embracing Our Humanity To Maximize Machines" and "Hacking Happiness: Why Your Personal Data Counts and How Tracking It Can Change the World." He shares his insights on how to preserve humanity as AI technologies advance.

Jeff Catlin
Jeff Catlin is co-founder and CEO of Lexalytics, the leader in ‘words-first’ machine learning and AI that translates text into profitable decisions for social media monitoring, reputation management and voice-of-the-customer programs. With more than 20 years of experience in search, classification and text analytics products and services, he offers a unique perspective on how ethics play into AI interactions. Jeff is a frequent contributor to Forbes and VentureBeat, where he shares his views on technology-related issues.