Mary Brodie


Conversations — Not Just for Humans Anymore

The epilogue from Revenue or Relationships? Win Both.

This is an excerpt from my book, Revenue or Relationships? Win Both. This same piece is on my blog, but I put it here for greater visibility given the current discussions about AI.

I address human/machine conversations, or the interface between them, the role for AI, and sentience. Many authors and thinkers have defined sentience in some way, from philosophers contemplating the philosophy of mind to sci-fi writers like Isaac Asimov and Arthur C. Clarke to filmmakers like Stanley Kubrick. (And don’t knock sci-fi! Good science fiction can inspire out-of-the-box thinking because it is based in science. Otherwise, that genre is called fantasy.)

I don’t think machines are close to achieving sentience — yet. We are in phase 2 of chat technology sophistication; ChatGPT and the like will need some time to achieve what we consider to be AI (or what sci-fi has defined as AI). However, I think Generative AI as it exists today has the ability to cause absolute chaos in society because it can’t discern truth from fiction or make values-based decisions. (As we know from Antonio Damasio’s research on decision making, decisions are driven largely by feelings and somatic markers. We use our past to know what’s best to do today. Without it, well….)

That’s a big problem.

Learning from input and feedback is helpful. But what do you do with that information? Discernment, feelings, and emotions are required to know what to do with that input. That’s intelligence.

What’s missing in our technology is a “machine mind” to discern truth and reality from a fairy tale. We humans have five senses to help us do that. A machine has… nothing. Its reality is 1s and 0s. If you are in a two-dimensional existence like that, how do you know what’s true? One method to determine that is popularity: what’s said most often in the data you process.
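In code, that popularity strategy is trivial to write down, which is part of what makes it worrying. A toy sketch (the claims below are invented purely for illustration):

```python
from collections import Counter

# A toy "truth by popularity" filter: whichever claim appears most often
# in the processed data wins. The claims are invented for illustration.
observations = [
    "the earth orbits the sun",
    "the sun orbits the earth",
    "the sun orbits the earth",
    "the sun orbits the earth",
]

claim, count = Counter(observations).most_common(1)[0]
print(f"'Truth' by popularity: {claim!r} ({count} of {len(observations)} mentions)")
```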

But is that a good method? My discernment screams heck no.

This is why I see Generative AIs as parrots, spitting back what we entered on the Web (ooh, now that’s a scary thought given what’s in the depths of YouTube and some blogs out there).

To add to that problem, as a culture we are permissive about ignorance and “wrong ways” because we don’t value truth anymore. We once did. But maybe this cultural value shift comes from our desire for clicks and “fame” as a result of “challenging the established narrative.” At this point, everything is being challenged. Some of it is good and helps us progress as a society (for example, human rights and justice as defined in the UN’s Universal Declaration of Human Rights put us on a forward path). Some of it is pushing us backwards in history (mainly what goes against that same declaration).

Studying Buddhism showed me that yes, indeed, ignorance is a problem because if you aren’t seeing the true nature of things, you live in delusion (and therefore won’t have the opportunity to “see,” change, and be enlightened).

In some ways, we could say that computers that are accessing content on the web live in a delusional reality because they aren’t able to see all sides of what’s happening. And if they are considering only the “most popular” content in their construction of reality, watch out world!

We know that expertise and knowledge rely on deep understanding, complex reasoning, and discernment. This means that discovery and certainty don’t spring from majority belief systems. I believe we learned that from Copernicus and Galileo, but maybe we need to learn it again?

We should expect a wild ride, unless we accept Generative AI for what it is and stop trying to make it something it isn’t because we wish it so. And yes, it needs to be regulated, or to have a truth filter built into it, before it does more damage.

Enjoy!

Conversations are vital to building a relationship. They are ways for people to connect with each other, find common interests, and develop memories together. Social media and content marketing have elements of automated conversations. They provide information to readers to learn about the issues surrounding problems, describe solutions, and provide insights the reader should consider when making a decision. This first stage of communication starts a dialogue between companies and customers to help them recognize and understand their problems and realize they need a solution. The next stage usually involves online transactions, which is a type of conversation. The app or site requests information, the user provides it, and this banter continues until an agreement is reached and money is exchanged for an item. We are now exploring the possibility of chatbots and AIs reacting quickly to human input in an automated, digital conversation. But what does that mean? And why is this relevant to discuss in the context of customer experience?

Conversations extend beyond information and transactions to decision-making, influencing, and relationship-building, with more intricate goals like information-sharing and collaboration along the way. We have created apps to facilitate automating these conversations, but there is more to a conversation than exchanging pleasantries, thoughts, and ideas. The automation of communication and conversations through bots and AI is a vital component of automating business. This has proven successful for informative and transactional conversations, but can we achieve this for more complex, relationship-driven communications?

As we know, the more factual types of conversations — informational and transactional, related to things and action — are automated today. Decision-making, related to actions and thoughts, is semi-automated. We have tools available to help us, but humans need to actively use them to get any type of output. Influential conversations are more difficult to automate because they require conversations to discover information and insights, similar to relationship-building and brainstorming conversations. These types of conversations include emotions, feelings, empathy, curiosity, critical thinking, and problem-solving.

The bottom layer of the diagram refers to the types and topics of the conversations, as suggested by Judy Apps in The Art of Conversation. These complement the types of conversations at the top of the diagram. It’s rare when talking about information that you’d talk about heart-related topics (like love or relationships) or discuss what really motivates you (like a soul topic). The more personal the conversation, the more emotionally driven the topics become. The more transactional and informational, the more likely factual or “thing” or “action” topics are fitting. If you are completing a transaction with a person or company, knowing that someone feels a certain way about an object may help a decision-making discussion about a purchase, but it won’t complete the transaction. Two or more people could be discussing how to implement a product or service, but the discussion goes beyond the “things” and “actions” to “head,” “heart,” and “soul.” The team is building trust through various side conversations that develop a relationship. And they understand the problem by sharing different perspectives, which they bring together in their collaboration to determine the best solution.

Keeping all this in mind, without an appropriate program, a computer cannot reach the sentience necessary to be capable of making these connections between facts and emotions, curiosity and creativity, identifying problems and solving them. Human conversations beyond information and if/then transactions are too complex to model in a computer today. Relationship-building skills, like empathy, compassion, connection, and emotion, are required to complete more intricate life functions like decision-making, collaboration, and emotional connection.

Even if we were to create such a program, what would it look like?

One could argue that we have achieved some type of sentience with the world-famous robot, Sophia. She has been introduced to the media as the AI representative of the future, but is she? She became a citizen of Saudi Arabia in 2017 and attends all of the popular technology events. She has even made some frequently quoted quips about AIs and robots having emotions or how robots want to kill humans. But does she have true sentience? She can see. She can respond to humans. Yet as WIRED reported, even her creator, David Hanson of Hanson Robotics,

. . .acknowledges that her development is still more akin to a baby or toddler than an adult with a consciousness or intellect that could feasibly be rewarded with a full set of rights. Even this is pushing it — toddlers, for example, have consciousness; Sophia does not.1

Hanson has admitted that her responses are often based on programming, illustrating how far we can go with the if/then statement to model human behavior. We still have not created intelligence or sentience in a machine.

This brings us back to the original question: If we were to create such a program for decision-making, collaboration, and emotional connection, what would it look like?

It’s unclear. If we don’t know in detail how these cognitive functions work in our own brains, how could we create a model to possibly replicate ourselves in a computer? We could create a new model that’s completely different from our own image, but what would that look like? Do we have any theoretical models to use as a basis for that initial approach?

We often take for granted what is involved in creating a conversation. As we listen to someone speak, thoughts rush to us regarding questions to ask next, responses to provide, and insights to share. A computer today doesn’t have the ability to respond in such ways. A computer follows its program and responds to stimuli, mostly based on user input. It processes data to present results and findings; it doesn’t provide an analysis or summarized insights without its programmed direction. Humans usually provide their own insights based on what they believe is important, using the facts that they find through traditional research methods or computer output. Ironically, computer output is based on programs humans designed to access specific data points that a group originally decided were important. In many ways, one group of people is defining for another group what is important through a program. When the computer decides what is important for a user based on programmed judgment created by humans, that’s not really intelligence. From that perspective, we still haven’t reached sentience.

This raises the question of whether we are limiting our own data knowledge by not considering the impact of outlier data to improve situations and provide a different perspective. Are we developing AIs to help us in the way we want to be helped? Or are we developing AI to identify problems or patterns that we could use to create something new? There are initiatives in companies and consultancies to have AIs discover trends found in “dark data,” outside of the knowledge that people commonly have and can immediately leverage and reference. Leveraging such an approach is the only way we could use AI to expand human conversations and add value, helping us see problems and issues differently. Otherwise, we are defining what we need in a program, inadvertently limiting AI discoveries based on our existing knowledge.

If/Then versus How and Why

Conversations about “things” and “actions” are based on direct questions and answers. Do you have this in stock? When will it be shipped? How can I order that? That’s why it is easy to automate this into chatbots. They are if/then statements about information that’s required and requested.
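To make that concrete, here is a rough sketch of what such an if/then exchange might look like in code; the keywords and canned replies are invented purely for illustration:

```python
# A toy if/then chatbot: each supported question maps directly to a fixed answer.
# Keywords and replies are invented for illustration.
FAQ_RULES = {
    "in stock": "Yes, that item is currently in stock.",
    "shipped": "Orders placed today ship within two business days.",
    "order": "You can order it directly from the product page.",
}

def reply(message: str) -> str:
    """Return a canned answer when the message matches a known topic."""
    text = message.lower()
    for keyword, answer in FAQ_RULES.items():
        if keyword in text:   # the "if": a recognized factual request
            return answer     # the "then": a fixed, informational response
    return "Sorry, I can only help with stock, shipping, and ordering questions."

print(reply("Do you have this in stock?"))
print(reply("When will it be shipped?"))
print(reply("How are you feeling today?"))  # no rule for emotions; falls through
```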

However, when we talk about thoughts, emotions, and abstract ideas, the topics more relevant to decision-making, influencing, relationship-building, and collaborating, conversations no longer follow if/then structures to provide information. How are you feeling? Why are you feeling that way? What can I do so you feel better? One could create an if/then program to generate answers, but that’s not what’s required in these types of conversations. These are questions that require cognitive processing related to sentience, or self-awareness. They require that subjects know they are alive and want to remain that way. We organic beings “feel” because we are self-aware and we know what is happening in our bodies and minds. We are driven to stay alive based on this self-awareness. But are computers aware of their existence? Do they feel? Do they seek to stay alive at any cost? What does this mean for them?

Science fiction has explored these ideas for more than 75 years in books and movies like 2001: A Space Odyssey and Isaac Asimov’s I, Robot. It has been in the realm of fantastical thinking and philosophy for decades, if not centuries (for example, Frankenstein explores this idea at some level), but it is relevant today as we are in the early stages of creating intelligences and sentient beings that use AI.

Arthur C. Clarke and Stanley Kubrick created an AI entity, the HAL 9000 computer, in the movie 2001: A Space Odyssey. In one scene, Dave is dismantling and deactivating HAL because of his psychopathic actions. Unknown to Dave, those actions arose because HAL’s programming was conflicting with his orders; Dave assumed that HAL was simply malfunctioning. While Dave was dismantling HAL, the computer admitted his faults, attempted to apologize, and asked him to stop. HAL was aware of what Dave was doing and told Dave that he was afraid. If HAL was only a computer, how could he have identified — never mind experienced — an emotion like fear? Or felt his mind drifting away with the removal of each chip and circuit board? It seemed like HAL was aware of the physicality of what was happening and the impact on his own mind and being. Or was he? Was that part of his programming?2

The question that Clarke and Kubrick explored was: Was it possible to kill an AI like HAL, which seems to have the qualities of a sentient being, by deactivating his “brain”? That’s hard to say, because in the sequel HAL comes back to “life” when reactivated. The other question that Clarke and Kubrick explored with HAL as a character, which is more central to this discussion, is: What exactly is sentience for a computer or AI? Are they mimicking humans? Is it programmed behavior? Or do they have their own experience through their own desire to survive?

In a real-life example, we could consider the Facebook bots that were created to negotiate deals through chat.3

Programmers theorize that the bots created a language to streamline communications with each other. The programmers didn’t add code for the bots to use only human-friendly language. It’s pretty amazing that an AI would optimize a language to communicate better with another AI. This makes me wonder about their perception of what they were experiencing, if there was any at all. We assume there isn’t, but we also assumed for centuries that animals have no emotions, which has now been proven false. Animals do have emotions, possibly experienced differently from, or similarly to, the way humans experience them. We don’t know, because animals can’t speak about them. But this idea raises the question: Why couldn’t this also be true for an AI? Could an AI be aware of what it is? Could a program created to communicate be sentient without us being aware of it? In a way, the AI was sentient and self-aware enough to realize it was speaking with another AI rather than a human.

This introduces a more philosophical question: What constitutes sentience? If a bot is creating a language to communicate with another chatbot, that demonstrates some level of awareness, even if that is part of its programming. One could imagine a programmatic entity thinking: “I know from my programming that I am not a human, but a bot. It seems based on the input I am receiving that this other subroutine interacting with me appears to be another bot. Since we are both bots, I will communicate in ‘this’ style. If the entity communicated with me in this other human style, I would use that style to communicate with it.” Based on input provided by the other entity, it can determine if it is interacting with a bot or human. That is a sophisticated yet simple level of intelligence and self-awareness. It is if/then thinking, but it illustrates that it is possible to understand the difference between two audiences and have enough self-awareness to communicate differently. It’s unclear whether the bot experienced emotions and feelings, mainly because it doesn’t have a physical body. But we should consider that emotions and feelings as humans perceive them may be a human construct, and that we have more to discover and understand regarding what intelligence and sentience include.4
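Reduced to code, the imagined reasoning above is nothing more exotic than a branch on the detected audience. A rough sketch, with an invented detection heuristic and invented message styles (not how Facebook’s bots actually worked):

```python
# A toy version of the imagined if/then reasoning: guess the audience,
# then choose a communication style. Heuristic and styles are invented.
def looks_like_bot(message: str) -> bool:
    """Crude guess: heavy repetition of the same tokens reads as another bot."""
    words = message.lower().split()
    return len(words) > 0 and len(set(words)) < len(words) / 2

def respond(message: str) -> str:
    if looks_like_bot(message):
        return "offer: ball ball ball; accept?"           # compressed, bot-to-bot style
    return "I can offer you three balls. Do you accept?"  # human-friendly style

print(respond("ball ball ball to me to me to me"))  # reads as bot; terse reply
print(respond("Could I have the balls, please?"))   # reads as human; plain English
```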

In some of Antonio Damasio’s more recent work, he argues that cells and simpler organisms have feelings that help them stay alive. Emotions emerge from nervous systems and a type of brain to help keep the organism feeling good — and, consequently, alive. This will to live and to feel good is a sign of life that leads to intelligence and sentience. But what is part of this drive to live? According to Viktor Frankl, meaning. Beings will create meaning in their lives to drive them through adverse challenges. Frankl’s book, Man’s Search for Meaning, documents his experience in the concentration camps and its influence on him in developing logotherapy. He found that the search for meaning above all things (reproduction, power) drove men to survive the camps.

If we apply these ideas to an AI, we must first acknowledge that AIs often don’t have a body, except through robotics, but they do have a brain. It’s unclear if that brain has a desire to stay alive unless it is programmed to believe that. However, if we programmed an AI to have meaning, would that change an AI’s sentience? Isaac Asimov suggested this in his fiction, I, Robot, through his presentation of the Three Laws of Robotics:

  • “A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”5

How the AI interpreted these laws to give them meaning was what got it into trouble in his book. It had a different interpretation and perception of what the three laws represented for its purpose. From this, you could argue that having meaning and purpose is a type of sentience.

Would meaning or purpose change the nature of an AI so it could have self-awareness and be able to participate in more advanced conversations like collaboration and relationship-building? It may be worth considering.

We can’t forget that we are still in the very early stages of developing AI. I am aware that much of this section is based on conjecture and science fiction, but for us to support the automation of more complex conversations and human-computer interactions, AI programs need to evolve to achieve sentience, and to get there, we may need to dream and expand our perception of what sentience means.

To return to the question posed at the beginning of this section: Is it possible for us to automate conversations, and therefore, automate relationships? To me, this is highly unlikely any time soon. It is in the realm of dreams, philosophy, and science fiction. There will always be an element of human interaction required for two beings to connect and have the kind of conversation that humans have grown accustomed to having. AI allows us to identify and use data in ways we never dreamed possible. But when I dream of AI and humans having conversations, I keep remembering a scene in the movie Rogue One, with the droid K-2SO announcing, “There is a 97.6 percent chance of failure,” as they are flying toward their mission. The humans continued regardless of the challenges. This is what I perceive to be the balance between AI and bots and humans. As we know through the work of Antonio Damasio and Viktor Frankl, human conversations and decisions are not always driven by logic. Emotions and an individual’s self-perception often drive their will and a desire for a specific outcome that defies the odds. That element of human nature, using feelings and emotions to move towards a goal, won’t go away. If anything, with better data elements selected for us, we may be able to achieve our goals faster and more completely by using a better approach than we do today. It would be a tremendous partnership, providing us a complete picture of our options, choices, and current situation. And our corporate world could further expand to include employees, customers, and our computers, all interacting to create a more balanced emotional and factual customer experience.

1. Reynolds, Emily. “The Agony of Sophia, the World’s First Robot Citizen Condemned to a Lifeless Career in Marketing.” WIRED. June 1, 2018.

2. 2001: A Space Odyssey. Deactivation of HAL 9000.

3. McKay, Tom. “No, Facebook Did Not Panic and Shut Down an AI Program that Was Getting Dangerously Smart.” Gizmodo. July 31, 2017.

4. Griffin, Andrew. “Facebook’s Artificial Intelligence Robots Shut Down after They Started Talking to Each Other in Their Own Language.” The Independent. July 31, 2017.

5. Asimov, Isaac. The Three Laws of Robotics.
