Most of the commentators take great pains to point out that they understand the technology, that they know it isn’t a thinking, feeling creature talking to them -- but one leveraging vast troves of written content to string new troves of written content together in a way that seems meaningful.
“We’re not fools!” they seem to be saying. “We know how the magic trick works!”
But there are several points missing from this line of response. The first is obvious: We’re all just stringing new troves of content together in a way that seems meaningful. The second is perhaps less obvious: It doesn’t actually matter if it’s just a computer shuffling words around. The impact on us is the same.
In my courage-building courses, I have participants run through an exercise. They pick a difficult situation (but one they’re happy to talk about in public) and write down how they’re going to initiate a conversation about it: My yogurt’s gone missing for the past three days. The story I’m making up is that you’ve been eating it, so I wanted to check in with you about it. They then pair off with strangers and read their opening statements to each other.
It’s an intense experience for the folks practicing: writing it is always different than thinking it, and saying it is always different than writing it.
But it’s also an intense experience for the partners, who are inevitably surprised by their visceral reactions when the opening statements get read to them. I felt so defensive, even though I’ve never even met you before, and I’m lactose-intolerant!
Our systems -- by which I mean our physical bodies, our emotional bodies, our thoughts, the totality that is us -- do not distinguish between “real” and “fake” the way our frontal lobes do. I can know intellectually that you’re not accusing me of taking your yogurt, but my system is hard-wired to protect me, and it’s going to snap into gear without waiting for further instructions.
In other words, our systems respond how they respond even when we know the external situation isn’t “real.”
In other other words, our internal experience of a situation is distinct from the external “reality” of it.
Falling in love, becoming angry, feeling understood -- as much as we think of these things as being about our relations to others, they're all experiences that happen within us. We like to say, “A computer can’t empathize,” but it doesn’t matter; a computer can provoke in us the feeling of being empathized with.
So we may know intellectually that Sydney’s just spitting out words in an order that seems consistent with all of the examples it’s digested. But our system responds exactly the same way as if it were a person writing to us.
AI may not be “sentient” -- but that may not matter. In terms of the effect it can have on us, it’s already crossed the uncanny valley.
Any sufficiently advanced AI is indistinguishable from sentience. We’d be wise not to dismiss its impact.
Very interesting, Kaila.
I have a (I hope) interesting experiment regarding AI. Utilising a single AI application (say ChatGPT), get half a dozen 'run-of-the-mill' people to enter the EXACT same words, punctuation, etc., and see what is produced. Will the results be exactly the same? Will they be clones of the same response? Will they vary depending on the person who entered the text?
If there are recognisable differences, does that mean it 'knows' a lot about that individual person? If the responses are similar but differently worded, is it just a randomiser within the software? If they are absolutely identical, I think we all know what that would mean.