Back in 1950, Alan Turing, the godfather of modern computing, proposed a criterion for determining whether or not a computer could be classified as an artificial intelligence. Basically, Turing's criterion was this: if a computer could successfully convince someone that it, too, was human, then you could say it was an AI.
For years this was the goal line for computer scientists, the big brass ring — creating a simulacrum of the human intellect. The Turing test was so widely accepted as the endpoint of our efforts to create a computational intellect that it’s appeared as a trope in movies about AI. It was science fiction, a computer that could pass the test. It was the future.
How quickly the future has arrived.
Unless you’ve been living under a rock this past month, you’ve probably heard about ChatGPT, a chatbot that gives such remarkably detailed and well-reasoned responses to user queries that the question isn’t whether artificial intelligence can pass for its human equivalent, but how much authority an AI can project, and how much credence we should lend it. The technology is as exciting as it is scary: one can easily imagine a future in which AIs successfully lie to and manipulate us, a future in which they become yet another way of disseminating fake news and propaganda.
People are also nervous that, in the near future, AIs will become so powerful that they will begin to take over creative jobs; for example, anyone who writes for a living (myself included) is absolutely freaked out right now.
Naturally, our thoughts turn to the future of psychotherapy, and how it might be permanently altered by the emergence of these powerful AIs.
If we had the API at our disposal, we could easily imagine ways to use a technology like ChatGPT to our advantage. Though we’re immensely proud of our Mental Health Fingerprint Questionnaire, our best-in-class onboarding survey, using a chatbot that didn’t feel like a chatbot — that felt more like talking to a human than it did to a computer — might be a great way of making the onboarding process feel that much more welcoming, that much more human (ironically enough).
Even though text-based communication can be a critical part of a therapeutic relationship, it can’t be the exclusive — or even the primary — mode of communication. Too much can be lost between the lines in a text-centric relationship — sarcasm and humor, for example. In a face-to-face meeting, though, when a qualified, experienced therapist can hear the way their patient is talking and pick up on every inflective hint and subtlety, it’s much harder to misinterpret a joke. Not to mention the fact that body language often speaks volumes — another form of communication that falls by the wayside with a chat-based AI.
Frankly, we doubt an AI can ever really act effectively as a psychiatrist. The crux of a truly beneficial therapeutic relationship is the rapport (or Rappore) you build with your provider; the artificiality of a program like ChatGPT will always stand in the way of that. Even if the AI passes the Turing test with flying colors, the knowledge that they are interacting with a computer will never escape patients’ subconscious. They will be forever aware that they’re dealing with a representation of a human mind, and not the real thing. In a field where authenticity is so critical, this is an impossible obstacle to overcome. There can’t be trust without authenticity, and there can’t be authenticity when you’re interacting with a simulacrum, something that is inherently imitative and inauthentic.
The mind turns to movies like Her and Ex Machina, films that ask this very question: can you actually have an intimate relationship with a machine, or only a relationship that imitates intimacy? It’s telling that even these films seem to net out on our side. It might be easy to fall in love with a robot, but it’s impossible to stay in love.
Rest assured, at Rappore, you’ll never talk to an AI. If you’re interested in starting to build a relationship with a real live therapist, click here.