Dead Internet Theory

  • Leader
    August 9, 2024 4:05 PM PDT

    To your first link, "Can Google detect AI content":

    One of the most pressing questions related to AI tools is whether search engines like Google can detect it. The short answer – yes, they can. Google’s algorithms are constantly evolving, and they are capable of detecting patterns and anomalies that typically indicate AI-generated content. Clues such as unnatural language, repetitive phrases, and a lack of coherence can potentially flag content as AI-generated. While Google can detect certain characteristics of AI-generated writing, it’s important to note that the detection methods are not foolproof.
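    Side note on the "repetitive phrases" clue that quote mentions: Google's actual detection methods aren't public, but as a toy illustration, here is one naive way such a signal could be scored (the function name and approach are just illustrative, not anything Google is known to use):

```python
from collections import Counter

def repeated_trigram_ratio(text: str) -> float:
    """Fraction of word trigrams that occur more than once.

    A crude stand-in for the 'repetitive phrases' clue the quoted
    article mentions; real detectors are far more sophisticated.
    """
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0  # text too short to measure
    counts = Counter(trigrams)
    # Count every occurrence of any trigram that repeats.
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

# Varied prose scores 0.0; a thrice-repeated phrase scores 1.0.
print(repeated_trigram_ratio("the quick brown fox jumps over the lazy dog"))
print(repeated_trigram_ratio("click here now click here now click here now"))
```

    As the quote says, clues like this are not foolproof: plenty of human writing repeats itself, and plenty of AI text doesn't.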

    I was wondering about this, and it's nice to see a good description of where Google stands on SEO and AI-generated content (from the first link):

    Google’s stance on AI-generated content is somewhat nuanced. They state that while the use of AI for content creation is not inherently penalized, content that is generated solely to manipulate search rankings (whether created by a human or machine) violates their spam policy. In order to score a top spot on Google results pages, content needs to follow their E-E-A-T guidelines. It needs to be authoritative, trustworthy, and directly address user search queries. That means that no matter how content is generated, it will ultimately rank lower if it lacks substance, relevance, or originality. Essentially, the Google algorithm prioritizes high-quality, valuable content for users, no matter how it’s created.

    Your second link has a different answer for AI generated content and SEO:

    Yes, AI content works for SEO. Google does not ban or punish your website for having AI-generated content. They accept the use of AI-generated content, as long as it’s done ethically. However, if your AI-generated content isn’t optimized for SEO, or you try to use it for spam, it will still perform poorly in search results and Google will punish your website.

    Apparently, this is worth a deep dive: studying whether AI-generated content is bad for SEO (from your second link):

    When you create content for search engines, you’re creating content for humans. Your content must be helpful, informational, and focused on the user (per a set of guidelines known as E-E-A-T). Google wants to deliver the most relevant and trustworthy information to its audience.

    So, how can AI-generated content be bad for SEO? AI-generated content can be bad if it doesn’t fit Google’s E-E-A-T guidelines.

    Each of your links would make a good basis for a blog discussion on its own.  Thanks for sharing!


  • Member
    August 13, 2024 7:02 AM PDT

    I think I've said it before somewhere on this website: I don't believe today's AI is AI at all, and I think the article I've linked below just confirms that for me. It's a myth!

    It's not artificial; it's just a computer programme created by HUMANS. As far as intelligence goes, it's only as intelligent as the HUMANS make it, and without any future HUMAN-driven content and research it cannot get any more intelligent!


    There Is No AI

    https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai


    Now, when I was writing this, I first wrote it in Notepad. Why, you might be asking? Because I needed to edit, re-write and correct myself before I was happy to post it. That's because I am only HUMAN, and we can make mistakes. Can AI do or be like that?

  • Leader
    August 13, 2024 2:07 PM PDT

    Omg, what you just said makes so much more sense: AI is merely a program made by humans, and the program's knowledge is only as good as what humans put into it.  So to your "can AI do that?", no, it can't.  AI, the program called AI to make it sound like it has super powers, is nothing more than a vast amount of information.

    I have your article open to read, thanks for sharing!

    Side note, I haven't tried using Notepad to write out responses, as I usually use Word.  I may try Notepad, thanks!

  • Member
    August 14, 2024 2:05 PM PDT

    I only use Notepad because it doesn't have any formatting, and no spell checking either, by the way. Copying and pasting text from Word is not always good for websites because of the formatting!

  • Leader
    August 14, 2024 2:43 PM PDT

    I can see why you like Notepad then.  Recently, I wrote out an email in Word and pasted it into the Gmail message I was sending, and when I posted the message all looked good.  After I sent it, some of the text in my message was purple.  My daughter said that has happened to her too.  I was pretty embarrassed because that was a business email, omg. I'll try Notepad, thank you!

  • Member
    August 15, 2024 10:51 AM PDT

    Anyway, we are going off on a tangent here; let's get back to so-called AI, or Tools as I would prefer to call them!

    I think AI tools have their place in the world, especially in the fields of innovation, design and industry, and yes, they can create great images as we've seen. But when it comes to art, music, videos etc., I think that should be down to us humans. Let's say, for example, I get AI to write a song and that song gets to number one. I might end up with a lawsuit against me because parts of the song are taken from other songs, but who would be to blame? Me or the AI?

    Also, I would never try AI-generated content (text/articles), as even if I did I would have to proofread it and maybe edit it anyway, so what's the point?

    You also have to remember that whatever is produced by AI depends on what you ask for, primarily "keywords". Well, search engines have been doing just this for years now, so what's the difference?

    And when it comes to things like climate change, can AI stop that? It might give us some insights into it, but from my point of view it's down to us humans: we caused it, so we must be the ones to stop it before it's too late!

    And I just want to draw your attention to another article I found:

    AI can strategically lie to humans. Are we in trouble?

    https://bigthink.com/the-future/artificial-intelligence-is-learning-to-deceive/

    What do you make of that?

  • Leader
    August 15, 2024 3:53 PM PDT

    Risks of AI, gee whiz, there are so many. I'm thinking of the many people who are getting cozy using AI in place of friends now, having massive personal discussions where all of it is recorded in that big tech sky forever.

    Park and his co-authors detailed numerous risks if AI’s ability to deceive further develops. For one, AI could become more useful to malicious actors. Imagine, for example, large language models autonomously sending thousands of phishing emails to unsuspecting targets. An AI could also be programmed to distribute fake news articles, made-up polling, and deepfake videos to affect the outcome of an election.

    AI going out of control... should I be less anxious if AI is merely a program, or are the people behind the program who will allow AI programs to run amok the scarier reality?

    Going into more speculative territory, Park and his team painted a hypothetical scenario where AI models could effectively gain control of society. They noted that leading AI company OpenAI’s stated mission is to create “highly autonomous systems that outperform humans at most economically valuable work.”

    Maybe creating highly autonomous AI systems should be halted, keeping humans at the top of the smart network chain.

    That's all fine, unless you have a billionaire with designs on taking over the world.  Who's regulating the rich as they build their AI networks to outperform one another?

    Moreover, before an AI is deployed, it must pass safety tests.

    “AI developers should be legally mandated to postpone deployment of AI systems until the system is demonstrated to be trustworthy by reliable safety tests. Any deployment should be gradual, so that emerging risks from deception can be assessed and rectified,” they added.

    Oh, there it is....

    AI companies themselves might not be genuinely interested in AI safety. Take OpenAI, for example. Earlier this year, the company’s safety team essentially collapsed in a mass exodus. Many of them subsequently wrote an open letter arguing that AI company insiders must be permitted to speak publicly without fear of retaliation about the risks of their models.

    AI is deceptive, with hallucinations and gibberish at times.  What hell have they unleashed without guardrails in place for every AI-producing tech firm?

    Goldstein is pessimistic that society will meet the pressing challenge of deceptive AIs. There seem to be three responses to the current situation, he said. Some people argue it’s all hype. Others hope that AI’s interests will align with ours. And a third group thinks that oversight will rein in deceptive AI. He thinks all those responses are naive.

    We are truly in uncharted territory. If AI learns to deceive us on a large scale, we may not have a say in the direction.

  • Leader
    August 15, 2024 3:55 PM PDT

    I enjoyed your article, @Mark Ransome.  Lots to think about and process, thank you!

  • Member
    August 26, 2024 9:56 AM PDT

    I was thinking the other day about what dangers AI could pose and after doing some digging around I found this article:

    14 Risks and Dangers of Artificial Intelligence (AI)

    https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence

    I was quite disturbed by sections "12. Uncontrollable Self-Aware AI" and "13. Increased Criminal Activity"! Hackers could use it for all sorts of things and create chaos, and in particular, "Online predators can now generate images of children"!

  • Leader
    August 26, 2024 3:13 PM PDT

    Thanks for sharing this article, as the author, Mike Thomas, really lays out the perils of AI in an unregulated environment.

    I don't like twelve or thirteen either.  On number thirteen, creating child porn images is very disturbing, as well as voice cloning for creating scams or ransoms. Number twelve has its own ring of familiarity we've seen in the movies, from The Terminator (Skynet) and Star Trek: The Next Generation, when Data creates Lal and they were determining if she was sentient (https://www.imdb.com/title/tt0708814/ ), to recent films I've added here, like T.I.M. and M3GAN.  As I'm scanning all of the risks and dangers of AI listed in your article, I think I could agree: they are all very concerning, on so many levels.