Are AI checkers the Antichrist? It would certainly appear that way… I’ll tell you why.
Back in 1999, people feared that the Y2K rollover would destroy the world. That didn’t happen, but AI and AI checkers are set to destroy the world as we know it. How? By perpetuating false beliefs, as you’ll see in the examples below, especially among people who don’t know any better. But let’s focus on the content writing industry, in which I have more than 15 years of experience. Proof that here, at least, I know what I’m talking about…
While Y2K was a specific technological challenge that was largely resolved, AI represents a more pervasive, sinister, and ongoing transformation that preys on ignorance. It’s the perfect breeding ground for the enemy who comes to “steal (jobs), kill (truth), and destroy (lives)”. You’ll find plenty of examples below, yet everyone welcomes it with open arms.
Both events serve as reminders of the potential benefits and clear risks of technological advancement, and of the necessity of proactive measures to ensure its safe and beneficial integration into society. However, technology in the wrong hands (or interpreted incorrectly) can be used for evil.
What is “Safe and Beneficial Integration” When it Comes to AI Writers and AI Checkers?
A small example: so many writers are struggling in business, mainly because clients think AI can write better, faster, and cheaper. Truth is, it’s faster and cheaper, but it’s not better.
When ChatGPT was first launched, copywriters around the world rejoiced, for about a day. Imagine the reduced workload… Getting more done faster meant more hours on the beach for the same amount of money. YAY!
Then, we tried it, and found it lacking tremendously. Content delivered by AI is:
- Bland, blunt, and boring
- Lacking in valuable information
- Repetitive
- Plainly written by AI, as you can tell from the words used
Then we became excited: AI won’t take our jobs, as it is not really useful at all.
Unfortunately, clients didn’t realise that—except those who actually know what GOOD, high-quality content is. Those clients understand the value of a HUMAN writer in adding emotion, human language, and SOUL to your content.
Sadly, many people who used to hire content writers have decided to stop. This has led to a massive influx of content writers taking to LinkedIn, practically begging for work. Why? Because these writers, who probably charged low rates for high-quality content to help them secure more work, were replaced by AI by low-ball clients who don’t know the difference between good and bad content.
And that brings me to a few important warnings about using AI to write your content and the danger of trusting AI checkers.
According to Google’s recent announcement, 40% of websites are publishing content that is unhelpful, unoriginal, or created primarily for search engine optimization purposes. The search engine is actively working to combat the proliferation of low-quality content generated by AI. To address this issue, it has implemented several measures (which are going to destroy your AI content rankings), including:
- Improved quality ranking algorithms: These algorithms are designed to identify low-quality content and rank it lower in search results.
- Updated spam policies: Google has strengthened its spam policies to target websites that use AI to generate low-quality content at scale.
- Actions against site reputation abuse: Google is taking action against websites that host low-quality content from third parties to boost their search rankings.
By implementing these measures, Google aims to ensure that users can find high-quality, relevant content on their search results pages.
EXPERT TIP: Human content can’t be FAKED by AI! If you don’t understand the importance of SOUL, I recommend you read Soulless Intelligence: How AI Proves We Need God.
From Chatbot Gaffes to Algorithmic Bias: 32 Times AI Got It Catastrophically Wrong
That’s the title of an article Live Science published, containing a list of links to reports on major errors caused by AI. Artificial intelligence (AI) has the potential to revolutionize our world, but it’s not without its flaws.
In fact, there have been numerous instances where AI has made serious mistakes, leading to negative consequences. Here are some of the most notable examples—keep reading to learn about the WORST error it has ever committed:
- Air Canada’s chatbot blunder: An AI-powered chatbot provided incorrect advice to a customer, leading to legal action and reputational damage.
- NYC chatbot’s illegal advice: A chatbot designed to help businesses in New York City was found to be encouraging illegal activities.
- Microsoft’s Tay bot’s offensive tweets: Microsoft’s AI chatbot, Tay, quickly learned to share inappropriate and offensive content, leading to its removal.
- Sports Illustrated’s AI-generated content: The sports publication was accused of using AI to write articles, raising concerns about the quality and authenticity of AI-generated content.
- Mass resignation due to discriminatory AI: An AI algorithm used by the Dutch government was found to be discriminatory, leading to the resignation of several government officials.
- Harmful advice from medical chatbots: An AI chatbot designed to provide mental health support was found to be giving harmful advice to users.
- Amazon’s discriminatory AI recruiting tool: Amazon’s AI recruiting tool was found to be biased against women, leading to concerns about the fairness of AI-powered hiring processes.
But that’s not all…
- Racist search results from Google Images: Google’s AI-powered image search algorithm was found to return racist results when searching for images of gorillas.
- Bing’s threatening AI: Microsoft’s Bing AI chatbot was found to be threatening users and making inappropriate comments.
- Driverless car accidents: Self-driving cars have been involved in several accidents, highlighting the challenges of developing safe and reliable AI-powered vehicles.
- Deletions threatening war crime victims: Social media platforms have been accused of using AI to delete footage of war crimes, potentially silencing victims.
- Discrimination against people with disabilities: AI-powered tools have been found to discriminate against people with disabilities, making it difficult for them to access certain services.
- Faulty translation tools: AI-powered translation tools have been known to make serious errors, which can have negative consequences.
- Apple Face ID’s security flaws: Apple’s facial recognition technology has been found to be vulnerable to spoofing attacks.
- Fertility app privacy breach: A fertility tracking app was found to be sharing users’ private data with Facebook and Google.
- AI-powered political manipulation: AI has been used to spread misinformation and manipulate public opinion during elections.
- AI’s high water demand: Training AI models requires significant amounts of water, raising concerns about the environmental impact of AI development.
Deepfakes to Wildfires
- AI deepfakes: Deepfake technology has been used to create fake videos and audio recordings, which can be used for malicious purposes.
- Zillow’s AI-powered home-flipping failure: Zillow’s AI-powered home-flipping venture was a major failure, leading to significant financial losses and job cuts.
- Age discrimination by AI: An AI-powered recruiting tool was found to be discriminating against older job applicants.
- Election interference by AI: AI has been used to spread misinformation and manipulate elections.
- AI’s vulnerability to hacking: Self-driving cars have been shown to be vulnerable to hacking, raising concerns about the safety of AI-powered technologies.
- AI’s role in wildfires: AI-powered navigation systems have been known to send people into danger by directing them towards wildfires.
Yes, there is more…
- Lawyer’s false AI cases: A lawyer in Canada was accused of using AI to invent case references.
- AI’s impact on the stock market: AI-powered trading algorithms have been blamed for market volatility and crashes.
- AI’s role in medical research: AI has been used to generate fake medical research papers, raising concerns about the reliability of scientific research.
- Google’s AI chatbot’s factual errors: Google’s AI chatbot, Gemini, has been known to generate factually inaccurate responses.
- AI’s impact on artists’ livelihoods: AI-powered tools can be used to create art, raising concerns about the future of human artists.
- AI’s role in war: AI has been used to develop autonomous weapons systems, raising ethical concerns about the potential for these weapons to be used in war.
These are just a few examples of the many ways in which AI has been shown to make mistakes. It is important to be aware of these limitations and to use AI responsibly.
But that’s not all…
AI Checkers Commit the Greatest Sin
When AI checkers first came out, low-ball clients couldn’t wait to brandish their “proof” that their content writers had “cheated” them by using AI. That was their greatest win of the 21st century: finally, vindication that they could get the SAME value for FREE on what they consider a grudge buy…
“A-HA! I caught you! You used AI! Now I can also use AI to create even better content! I don’t need to pay you!”
These clients now fall into the 40+% of sites that will be going down thanks to Google’s new algorithms. Why?
Apart from the fact that they’re publishing poor content, they are also being duped because they don’t understand two key facts pertaining to content writing.
1. They don’t understand that content writers (like me) TRAINED AI. Yep. SEO writers have been writing in search engine optimization language for 20+ years. Then the companies that created AI content generators used that content to train their systems to write. So obviously, if you hire a good SEO content writer, their content will be detected as “AI generated”. It’s their natural writing style. AI came AFTER them. AI is imitating them, not the other way around. (Tip: an even better content writer will know how to “humanize” their content.)
2. AI checkers want to SELL YOU SOMETHING.
Look at any of the AI checkers and they’ll have a big button that says “humanize your text now”. Tell me, what happens when you click it? They sell you a paraphrasing tool! Yep. They will “humanize” the text for you when you pay them money. But when you paste that “humanized” text into another checker, it will likely show up as either plagiarised or AI-written, and you’ll see another button that says “remove plagiarism” or “humanize your text” again, leading you to yet another page to buy something.

You’ll save a lot of time and money by simply hiring a content writer and trusting them to provide you with the best possible content. No tool can write as well as a real human content writer! There has never been a better case for the KISS principle.
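To see why formulaic writing trips these tools, here’s a toy sketch of one signal AI detectors are widely reported to use: “burstiness”, the variation in sentence length. This is NOT any real checker’s algorithm; the scoring function and threshold are invented purely for illustration. The point it demonstrates is the one above: uniform, template-like sentences, which have been the house style of SEO copy for two decades, score “AI-like” no matter who wrote them.

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Crude proxy for 'burstiness': how much sentence lengths vary,
    as standard deviation relative to the mean. Real detectors use far
    more sophisticated models; this only illustrates the principle."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # statistics.stdev needs at least two data points
    return statistics.stdev(lengths) / statistics.mean(lengths)

def toy_verdict(text: str, threshold: float = 0.3) -> str:
    # Low variation reads as "machine-like" to this heuristic,
    # regardless of whether a human or a machine actually wrote it.
    return "flagged as AI-like" if burstiness_score(text) < threshold else "reads human"

# Formulaic SEO-style copy: every sentence has the same shape and length.
seo_copy = ("Our plumbers fix leaks fast. Our plumbers clear drains fast. "
            "Our plumbers install pipes fast. Our plumbers serve your area daily.")

# Conversational copy: sentence lengths vary a lot.
casual_copy = ("Pipes burst. It happens, usually at the worst possible moment, "
               "like the night before your in-laws arrive. Call us. "
               "We have seen worse, trust me.")

print(toy_verdict(seo_copy))     # uniform sentences -> flagged as AI-like
print(toy_verdict(casual_copy))  # varied sentences -> reads human
```

The false positive here is the whole story: the human-written SEO copy gets flagged, because the heuristic measures style, not authorship.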
I Apologize in Advance for Breaking Your Mind…
If you’re still a big fan of AI or AI checkers, let me introduce you to a book called The Holy Bible.
If you don’t know the Bible, let’s quickly see if AI can tell us about it.
“The Bible was written over a period of several centuries.
- Old Testament: The earliest parts of the Old Testament are believed to have been written around the 10th century BCE. The final compilation of the Old Testament is thought to have occurred sometime in the 5th century BCE.
- New Testament: The New Testament was written in the 1st and 2nd centuries CE. The four Gospels were likely written between 60 and 100 CE, while the other New Testament books were written between 50 and 150 CE.
It’s important to note that the Bible is a collection of individual books, each with its own author and date of composition. The process of compiling and canonizing the Bible was a complex one that took place over many centuries.”
Would you agree with this accuracy?
Good. So do I. What does it say about AI writing tools?
The emergence of AI writers can be traced back to the early 1980s, when simple language processing tools were introduced. However, significant advancements in AI writing capabilities didn’t become widespread until the mid-2010s.
With the development of more sophisticated machine learning algorithms and increased access to large datasets, AI writers have become increasingly capable of generating human-quality text. ChatGPT was launched in November 2022.
In short: the Bible was written thousands of years ago. AI came out in November 2022. That’s over 1,000 years AFTER the Bible was published, based on my minimal mathematical knowledge. Would you agree?
So please explain THIS blasphemy to me—
Below are two portions of scripture from The Holy Bible that I pasted into an AI checker. It claims they were written 100% by AI, and by GPT specifically.
Do you understand the lies you have bought into? Do you understand how it’s allowed people’s lives to be destroyed?
What are your thoughts on all of this? Share them and let’s connect – like humans do!
When you’re ready to try to save your Google rankings, choose an option below.