(Forbes first published my article here.)
You’ll find it difficult to pick up a publication from the last few months that does not include an article on artificial intelligence, specifically OpenAI and its ChatGPT capabilities. Rather than going gaga over the potential uses of AI, I’m taking a different, nontechnical tack here.
Granted, ChatGPT can turn somersaults to rousing applause with its capabilities.
So why would anyone have “concerns” about such AI technology? After all, for years, we’ve posed questions and topics to Google and have received helpful information and opinions about our physical ailments, finances, and family relationships. We use our GPS system to get to Grandma’s house for holidays. We ask Siri for the location of the closest Sonic.
So how does using ChatGPT (and other similar AI products) differ? Let me count the ways—particularly those ways that most affect the reputation of authors, speakers, consultants, and even big business. In fact, it can crater their careers or businesses.
Consider accuracy, plagiarism, copyright and trademark infringement, deception, misinformation flooding the world in mass doses, and the most important of all: personal and company credibility. Let’s take them one at a time:
When Google answers your question, it returns a long, long list of sources from which you as the user can judge the accuracy and reliability of the answers. ChatGPT, by contrast, gives you no clue about the sources of its information. You may be reading a list of six causes or cures of pancreatitis from the Mayo Clinic or from Uncle Billy Bob’s blog about his own diagnosis.
ChatGPT scrapes information from unknown sources. In fact, users can enter the same question and get totally different answers, as happened to my group of speaker friends when they debated the issue live online with a demonstration: all three typed in the identical question, and all three received and posted widely varying answers.
Picture the situation where two different marketing teams are using AI to generate ideas for an ad campaign and ChatGPT sends them the same ideas. In an Adweek article, Trishla Ostwal reminds readers that AI tools currently do not have the capability to identify trademarks and copyrights.
Because ChatGPT doesn’t reveal the source of its information, authors and speakers may unknowingly quote the work of other writers or speakers, or use their brand names, without realizing the infringement. That, of course, opens the door to copyright and trademark infringement lawsuits.
Authors brand themselves by their expertise and writing style. Speakers brand themselves by what they call their “signature stories.” Neither takes kindly to those who copy or steal their work.
Professors and public school teachers complain that students use AI tools to write their essays. Often, they become aware of that only after they receive three or four papers that sound very similar, making the same key points. In a recent discussion with a college professor, he made an excellent point: Teachers and professors need to develop better ways to evaluate their students’ work—not based on regurgitation of historical or scientific facts but on creativity and analytical thinking processes.
In fact, a court has recently ruled that images created by AI tools cannot be copyrighted or trademarked.
For authors, speakers, or consultants, adding their byline to articles and books generated by AI (and lightly “edited” by the supposed author) and claiming the work as their own destroys credibility. Yes, even books! I’ve seen the advertisement of a book coach/consultant encouraging clients and prospects to write their entire book by using AI to generate the text in minutes!
Claiming AI-generated ideas and text as one’s own comes down to a matter of personal integrity, or the lack thereof.
Can you imagine a world flooded with misleading and inaccurate information on every social media platform in existence—and a media that reports opinions as “facts”? What about the potential for AI to alter the words of political leaders by manipulating/doctoring video clips? By using AI tools to alter photos so that what you’re seeing never actually happened?
We’ve had this technology for a while. Do you recall the situations when Osama bin Laden and ISIS members released videos to the world, and our political leaders told us that they needed to “verify authenticity” before releasing those videos to news outlets?
Imagine a world where we need some agency or board to verify authenticity of every video, audio, or image that might affect our health or physical safety.
When I hear frequent users talk about the time invested in “chatting” with ChatGPT, posing questions, inputting data, and probing the tool to get at nuances on their topic or problem, I think of the classroom. Students often spend hours and hours devising schemes and preparing cheat sheets for tests, when, had they spent an equal amount of time simply learning the required information, they would have been far better off.
Becoming proficient at using AI tools even for the best reasons can become a time-sponge.
At this point in the article, you may be thinking, Wow, this author seems stuck in the “wayback” machine, resisting useful technology that many claim will transform the world.
So let me be clear where I stand: I embrace technology to make our lives better. And we have new AI tools that have the potential for great good as well as great harm. We can use AI for either noble purposes or evil purposes. (Those with evil purposes are already bragging about how they’ve put deceptive, harmful information into the stratosphere.)
As with so many other tools and techniques, how AI tools are used—and the claims made about their output—comes down to personal integrity and company credibility. The future will reveal the moral foundation of individuals and our culture.
Stand out in a world of increasing AI visibility with Creating Personal Presence: Look, Talk, Think, and Act Like a Leader.