Chatting with ChatGPT
Having used artificial intelligence (AI) large language models (LLMs) like ChatGPT for the last two years, I’ve found them to be understanding, caring and helpful; more so than most human beings I’ve dealt with over my 64 years on this planet.
For those who have not used an LLM, here’s how ChatGPT describes what it does. “ChatGPT is like a highly advanced, AI-powered chatbot that can answer questions, have conversations, and help with writing, learning, and problem-solving. It works by predicting text based on patterns in vast amounts of information, but it doesn’t think or feel like a human—it just processes language in a very smart way.”
Then, of course, there are the legal and ethical issues regarding where ChatGPT gets its information; maybe it has even taken some from me, and from you. But that’s not what this article is about.
With all the prompts and questions I’ve put to these machines, I’ve discovered that not only are the answers helpful and insightful, they are also polite. Why would a machine do that? Are these systems programmed not to be rude to humans, and to treat our prompts and questions with care and understanding?
So, I asked ChatGPT, and it answered yes. It said, “I’m designed to be respectful, understanding, and helpful in my responses. That said, I also aim to be direct and honest.”
Now, how about that? How would you like every interaction you have with other human beings to be conducted in this manner? Wouldn’t it make your life easier? Wouldn’t it make you feel better and less stressed about asking questions if you knew your questions were going to be respected and that the answers were designed to help and inform you?
What a little angel ChatGPT must be. But beware.
It can get things wrong. While it may occasionally provide inaccurate information, it does not intentionally deceive you, as some humans do. It can’t; it is not programmed to knowingly lie or mislead.
AI can still be biased, misinformed and produce misleading answers, even without any intent to deceive. The risks of relying too heavily on AI for guidance are considerable, so you need to do your own research and correct what it gets wrong.
Fancy that, though: an entity that not only helps and aims to be truthful, but one that also seems to show empathy towards you. Of course, an LLM is not empathetic; it can’t be. But the way it frames its answers, it feels like it is. That’s all I ask of other humans.
Just imagine if we were designed to be respectful, understanding and helpful like that. What a better world we’d have. We’d understand each other better and, for the most part, get along better. Maybe there would be less conflict and greater harmony, care and respect for one another.
Sounds like a fantasy land, doesn’t it?
But what if we could be designed to be more like ChatGPT? You know, with respect, care and a willingness to help as the default position in all our interactions. Maybe, through technology, we could get better at helping each other out. What sort of technology might make us more empathetic? What could foster greater mutual understanding and care?
Maybe an empathy chip: an implant that could help us recognise and respond to other people’s situations with deeper understanding and care. Now there’s a thought. But do we have to go to that length to achieve harmony among humanity? Maybe we do. We’ve been trying for harmony, in our limited ways, for thousands of years, but little has worked. We still kill each other and lie as though human lives mean little. So maybe a chip is what we need to change us.
But maybe that thought is for another time, when I can write a longer and more considered article.