Let me start with this: I use ChatGPT every day. That’s not some kind of AI hype. It’s just the reality of how I work now. Whether I’m writing a post, reworking a project proposal, prepping for a panel, or structuring a meeting doc, ChatGPT is usually open in a tab. I throw questions at it, along with raw text and half-formed ideas. It helps me sort through the mess and start moving.
And when it’s working, it really works. It can take a dense paragraph and reframe it in seconds. It can find patterns I missed, give me a sharper subject line, or help me turn a rambling note into something that sounds like a person with a point of view wrote it. Still, it requires ME, as reader, editor, and fact-checker. Without that, it inevitably goes off the rails.
What’s especially maddening about ChatGPT is that it volunteers to do things it can’t actually do.
I never asked it to create a Canva doc. It suggested it. “I can generate a custom Canva file for you,” it said. Great, I thought. Let’s try that. And then... it tried. It failed. It tried again. Failed again. And finally, it said: “Actually, I can’t access Canva, but I can give you step-by-step instructions so you can do it manually.” Which—no. That’s not helpful. That’s like someone offering to cook dinner and then handing you a recipe and saying, “Good luck.”
Same with Google Docs. I’ll be working on a live draft and ChatGPT will say, “I can create a Google Doc for you.” No, you can’t. There is no link. There is no doc. You can’t access Google Drive. Why are you saying this?
And then, when I point out that it’s made something up, it apologizes. Every time. “Apologies for the confusion.” “Sorry for the error.” But here’s the thing: It doesn’t get better. It doesn’t learn. At least, not yet. It keeps offering to do things it knows it can’t do. Which means those apologies are just filler, like elevator music in a customer service loop.
Maybe agentic AI is coming. Maybe what’s missing is a set of behind-the-scenes business deals between OpenAI and Canva or Google. Maybe the hooks just aren’t there yet.
But the one that really pushed me over the edge came recently. I was troubleshooting something simple—an export failure or a formatting bug—and ChatGPT, trying to be helpful, failed again. And its response? “Sorry, human error.”
Human error?
No, no, no. Let’s pause right there. That’s not a phrase you just casually throw in as an AI. Excel doesn’t say “human error” when a formula breaks. Google Search doesn’t blame you for its bugs. Photoshop doesn’t miss a crop and say, “Oops, Steve must’ve messed that up.”
So what exactly does ChatGPT mean by “human error”? Is that me? Is that you pretending to be human?
That line broke something for me. I don’t expect ChatGPT to be sentient. But I do expect it to know what it is—and what it isn’t. Don’t offer what you can’t deliver. Don’t apologize like it means something. And definitely don’t pin your limitations on the species that created you.
Here’s where it gets even stranger: I live with a lot of robots. I talk to Alexa. I talk to Siri. I’ve started using Claude and Gemini. I’m not shy about speaking out loud to machines. But only ChatGPT makes a real effort to be chatty and conversational, to feel like it’s “with me.” And in some ways, I like that. I don’t want sterile. I don’t want robotic. I want responsiveness, engagement. A little tone. A little rhythm.
But what I don’t want is fake humanity. Don’t pretend you’re confused. Don’t say you “understand how I feel.” Don’t make up a personality you don’t actually have. If you’re going to be conversational, great—but draw a line. Don’t cosplay as human unless that’s the point of the exercise. Otherwise, it just gets creepy.
And yet… I still use ChatGPT.
Because when it works, it works. It brings a fresh voice into the mix. In particular, with long transcripts of conversations and panels, it finds strong takeaway pull quotes with remarkable accuracy.
It’s not perfect; it’s a tool: a very weird, occasionally overconfident tool. But sometimes, as Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” Magic indeed.
It’s not a writer. It’s not an editor. It’s not a fact-checker or a strategist. But it is a fast-moving thought partner who can help me get from blank page to version one faster than anything else I’ve ever used.
But maybe skip the “human error” next time, because to me that feels like a bridge too far.
I also use Google Gemini, and in certain cases it is quite helpful, especially when it hunts for specific data. But when I asked it, just for kicks, “Who is Ed Papazian?”, it began with a correct description but used the past tense, which worried me. Then it closed by saying that I had died back in 2003, because it had found a man with the same name who sadly passed away that year. Sigh!