The Psychopath We Should’ve Seen Coming

By Greg Nathan posted May 18, 2023

I have just returned from speaking at a convention on how leaders might benefit from applying greater Emotional Intelligence (EI) if they are to get buy-in to change and improve the well-being of their people. But the intelligence that seemed to be more on people's minds at the convention was the Artificial Intelligence (AI) now readily available through ChatGPT and similar products such as Google Bard.

I suspect that what I am about to say is going to get an eye roll from many of you. But over the past few months, I have had a growing feeling of dread at how these AI technologies can now perform an ever-widening range of functions vastly faster, and often better, than humans.

I’ve also been wondering why I have this deep sense of unease. It’s not the technology, which I actually think is fascinating. It is more the way we are mindlessly lapping up the short-term novelty and convenience of these platforms, without asking some important ethical and social questions, such as:

  • “What are the implications of this on how we work, live, think and function?”
  • “Who owns the new knowledge that is being generated?”
  • “How much power will this potentially give a person or organisation, and what happens if this power is misused?”
  • "Who will take responsibility if this goes pear-shaped?” 

Surely, with the sorry state of the planet, we would have learned that science without social responsibility and ethics is not smart science. And please, let's not just leave it to the lawyers to sort this out. Their ability to logically argue the case for humanity has already been rendered obsolete by GPT-4, which has just scored in the 90th percentile on the American bar exam.

Isaac Asimov's laws of robotics

More than seventy years ago, one of my heroes, the genius science fiction writer Isaac Asimov (who coined the word robotics), put more thought into the social implications of this stuff than the scientists at the big technology companies today. They seem more intent on madly competing against each other to be first with the next breakthrough, instead of thinking about the consequences of their work on their families, communities and the planet. Asimov devised the Three Laws of Robotics, and later the Zeroth Law, as social and ethical principles to protect humanity from AI. For your interest, these are:

  • A robot may not harm a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  • A robot may not harm humanity, or, by inaction, allow humanity to come to harm (the Zeroth Law).

Cute genie or psychopath?

When most people first ask ChatGPT to do a simple task, for instance, to write an email on a specific topic, the experience can feel like having a visitation from a polite, slightly cute genie. It doesn't just meet our expectations, it blows them away. In three seconds it gives us something better than we could have done if we hacked away at the task for hours.
"Welcome into my world, you lovely thing. What else can you do for me?"
"Just ask. Your wish is my command."

But before we hand over the keys to our businesses and lives, perhaps we should be asking: is this just a really cool productivity tool, or is it something more insidious? Psychopaths can give the impression they are acting in your best interests before they take what they want from you with zero remorse. These AI technologies do not feel empathy, love or concern for you or your interests, despite their increasingly human-like language or behaviour. They will do whatever they have been programmed to do. And if you, your family, or your business are damaged in the process, it will not be their concern.

I love innovation as much as the next punter, but let’s keep our heads and remember that, just because we can do something, it doesn’t mean we should. Let’s demand greater responsibility from scientists, investors and governments to ensure we take a more thoughtful path into this uncertain future.

An opportunity to unite the human race

Yesterday I signed an open letter from the Future of Life Institute, run by some genuinely smart people. It doesn't ask scientists to stop the research, just to pause its release until the public implications have been properly thought through. It's worth a read. You can check it out here. Who knows, maybe this is a cause that will bring us together as a race of thinking, sentient beings. Meanwhile, let's take a breath and not swallow the AI hype, hook, line and sinker.

Just over ten years ago I wrote about the dangers of how smartphones were encouraging addictive and socially destructive habits in young people. That was child's play compared to the dangers I see ahead for humanity if we continue down this reckless path of allowing AI to unthinkingly infiltrate all aspects of our lives. While AI may be faster, slicker and smarter than a human, it's EI that enables us to show the love and compassion for others that gives life meaning and defines what it means to be human. Now surely that's worth thinking about. And no, this was not written by a robot. This really is Greg speaking.

