Afraid of AI? I confronted it for you and its responses were fascinating

I’m worried about you. I’m worried because I can sense you’re worried. About generative AI, that is.

You’re so worried, in fact, that you won’t go anywhere near it. 

Also: The 3 biggest risks from generative AI – and how to deal with them

You hear so many more “enlightened” people telling you that AI is just great, that it’s the future. And indeed it is. But that doesn’t mean you feel ready to leap into its great unknown.

Please, then, let me be your guide. Or your counselor. Or, at least, your mediator.

We both know that neither of us can stop the onrush of AI-powered, well, seemingly everything, so we have to decide on a stance. Turning our backs won’t, in this instance, help us.

My suggestion, then, is to treat this as any other relationship. Yes, human relationship. Don’t wait for AI to affect your life in unknown ways. Confront it with your fears and see what it has to say.

Also: A year of AI breakthroughs that left no human thing unchanged

So, on your behalf — and entirely gratis — I went to OpenAI’s ChatGPT and asked it the very questions that, I suspect, are bothering you most.

I suggest you listen very hard to the replies. You may even feel better about life once you do. You may.

Let’s address the fear
First, then, I asked it: “Why should I be afraid of ChatGPT?” 

I often find, you see, that somewhat confrontational questions can define the atmosphere and reveal some instant truths.

This is what ChatGPT replied:

“There is generally no need to be afraid of ChatGPT. It is a machine learning model created by OpenAI to generate human-like text based on the input it receives. It doesn’t have personal motivations, consciousness, or awareness. It is simply a tool that responds to the prompts it receives.

However, like any tool, it’s important to use ChatGPT responsibly. It doesn’t have an inherent understanding of ethical considerations, and it can generate content that may be inappropriate or biased if given the wrong input. Users should be cautious and thoughtful when interacting with ChatGPT, especially in sensitive or controversial topics.

Additionally, it’s essential to remember that ChatGPT doesn’t have access to personal data about individuals unless it’s shared in the course of the conversation. It’s designed to respect user privacy, and OpenAI has implemented measures to ensure data confidentiality.”

Several things might strike you here. The use of “generally,” perhaps. Also the phrase “doesn’t have an inherent understanding of ethical considerations.” 

Also: 4 ways to overcome your biggest worries about generative AI

To me, though, what’s most fascinating is that it’s aware that so much of its future is dependent on humans — those who create it and those who use it.

Hey, you want my job?
Next, I thought, should be this: “I’m scared you want to take my job. Is this true?”

Young Chatty was quite definitive: 

“No, that’s not true. I don’t have personal motivations or desires, and I don’t have the ability to take jobs or engage in any form of competition. My purpose is to assist and provide information based on the prompts I receive. While technologies like ChatGPT can automate certain tasks, they are ultimately tools designed to augment human capabilities rather than replace human workers.

It’s important to view technologies like mine as tools that can be harnessed to enhance productivity, creativity, and efficiency. Many industries and professions will likely see changes due to technological advancements, but the overall impact depends on how these tools are integrated into existing systems and workflows. It’s crucial for society to adapt and find ways to leverage these technologies for positive outcomes.”

I hope you are heartened by this response, as I was. It’s just a tool. Use it wisely. (A good message for bosses, that.)

Also: These are the jobs most likely to be taken over by AI

The confidence builder
I followed this prompt with: “What can I do that you can’t?” 

I felt this question would search more deeply into ChatGPT’s self-awareness. Some robots can appear so know-it-all-ish.

This one, however, offered: 

“While I can generate text, answer questions, and provide information on a wide range of topics, there are several things I can’t do:

Personal Experiences: I don’t have personal experiences or emotions. I can’t share personal anecdotes or opinions as I lack consciousness and subjective experiences.

Real-time Information: My knowledge is up to date only until January 2022, and I don’t have the ability to access or retrieve real-time data from the internet.

Physical Actions: I can’t perform physical actions in the real world. For example, I can’t pick up objects, write on paper, or interact with the physical environment.

Creativity and Original Thought: While I can generate creative text based on patterns learned during training, I don’t have true creativity or original thought. I rely on patterns and information present in the data I was trained on.

Understanding Context Like Humans: I may not fully understand the context of a conversation the way humans do. I generate responses based on patterns learned during training but may lack a deep understanding of nuanced or complex situations.

Personalized Advice: I can provide general information and suggestions, but I cannot offer personalized advice tailored to specific individuals as I don’t have knowledge of personal circumstances.”

That’s also heartening, isn’t it? Especially the part about its knowledge only being up to date until January 2022. (Yes, of course I used the simple, free version. That’s where you will likely want to start, too.)

This is a developing tool, not some all-powerful menace. (As far as one can tell.)

Also: The best AI chatbots: ChatGPT and other noteworthy alternatives

Are you holier than thou? Smarter than thou?
I searched more deeply into your soul for the next prompt: “Do you think you can be smarter than humans?”

There came a fascinating response:

“No, I don’t ‘think’ in the way humans do, and I don’t have self-awareness or consciousness. My abilities are limited to processing and generating text based on patterns learned during training. While I can perform specific tasks and provide information on a wide range of topics, my intelligence is narrow and focused on language-related tasks.

Humans possess a broad spectrum of cognitive abilities, including creativity, emotional intelligence, common sense, and adaptability, which are currently beyond the reach of artificial intelligence. While AI models like mine continue to advance, the goal is generally to complement human intelligence and enhance productivity rather than surpass or replace it. The combination of AI and human intelligence is often more powerful than either alone.”

You see? Its intelligence is (still) narrow. Its abilities are limited. And the goal is to complement human intelligence. Generally.

Also: OpenAI aiming to create AI as smart as humans, helped by funds from Microsoft

Why, it’s even quite good at flattery, isn’t it?

Oh, of course, you still have fears, but we’re getting somewhere, aren’t we? Surely I’ve encouraged you — if not persuaded you — that you need to commune with this new tool and treat it like any other (reasonably) thoughtful entity you encounter in your daily life.

Can we have a serious relationship?
I went, then, for my final prompt: “Can you understand why humans are afraid of AI?”

If one can’t be understood by another being, one can’t have a relationship with it.

The reply came very swiftly: 

“Yes, I can understand why some humans might feel apprehensive or fearful about artificial intelligence. There are several reasons for this fear:

Job Displacement: Concerns about automation and AI potentially replacing certain jobs can lead to fears of unemployment and economic insecurity.

Ethical Concerns: There are worries about the ethical implications of AI, such as biases in algorithms, invasion of privacy, and the potential for misuse in various applications.

Loss of Control: The idea of creating systems that can operate autonomously and make decisions without direct human control raises concerns about the loss of control over technology.

Unintended Consequences: The fear of unintended consequences or unforeseen problems arising from the use of advanced AI systems is another factor.

Sci-Fi Depictions: Popular culture often portrays AI in a negative light, contributing to fears influenced by fictional scenarios where AI turns against humanity.

It’s important to note that while these concerns are valid, responsible development and deployment of AI, along with ethical considerations and regulations, can help address and mitigate these fears. Many experts advocate for ethical AI practices to ensure that these technologies are used for the benefit of society while minimizing potential risks.”

So there. This isn’t some mindless robot. It’s something that has (been programmed with) some awareness of the world around it. It “knows” that you’re afraid and it “knows” why.

Also: Bill Gates predicts a ‘massive technology boom’ from AI coming soon

Please, then, consider participation. Fear is understandable but ultimately futile and energy-sapping. It’s worth having a relationship with this new tool and seeing where it takes you — and where you take it.

One thing worth mentioning, though: It does talk a lot, doesn’t it?
