A shocking story was promoted on the “front page” or main feed of Elon Musk’s X on Thursday:

“Iran Strikes Tel Aviv with Heavy Missiles,” read the headline.

This would certainly be a worrying development in world news. Earlier that week, Israel had conducted an airstrike on Iran’s embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed plausible.

But there was one major problem: Iran did not attack Israel. The headline was fake.

Even more concerning, the fake headline was apparently generated by X’s own official AI chatbot, Grok, and then promoted by X’s trending news product, Explore, on the very first day of an updated version of the feature.

  • cmnybo (+41/-3) · 6 months ago

    Oh, what a surprise. Another AI spat out some more bullshit. I can’t wait until companies finally give up on trying to do everything with AI.

    • Cosmic Cleric (+24/-3) · 6 months ago

      I can’t wait until companies finally give up on trying to do everything with AI.

      I don’t think that will ever happen.

      They find it acceptable when AI-driven cars cause accidents that harm people. To them, it’s all part of the learning / debugging process.

      • rottingleaf (+14/-1) · 6 months ago (edited)

        The issue is that the process won’t ever stop. It won’t ever be debugged sufficiently.

        EDIT: Due to the way it works. It’s a bit like steady-state error in control theory: for different applications it may or may not be acceptable. The “I” in PID controllers and all that, IIRC.
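
        A minimal sketch of the steady-state error idea above (the plant, gains, and disturbance here are arbitrary illustrative choices of ours, not anything from the thread): a proportional-only controller fighting a constant disturbance settles with a permanent offset, and adding the integral (“I”) term is what drives that offset to zero.

        ```python
        # Minimal sketch: steady-state ("static") error with P-only control
        # versus PI control on a first-order plant with a constant disturbance.
        # The plant, gains, and disturbance are arbitrary illustrative choices.

        def simulate(kp, ki, steps=5000, dt=0.01):
            y = 0.0             # plant output
            integral = 0.0      # accumulated error, the "I" in PID
            setpoint = 1.0      # target value
            disturbance = -0.5  # constant load the controller must fight
            for _ in range(steps):
                error = setpoint - y
                integral += error * dt
                u = kp * error + ki * integral    # control action
                y += (-y + u + disturbance) * dt  # plant: dy/dt = -y + u + d
            return y

        print(f"P-only settles at {simulate(kp=4.0, ki=0.0):.3f}")  # ~0.700, stuck below target
        print(f"PI     settles at {simulate(kp=4.0, ki=2.0):.3f}")  # ~1.000, offset removed
        ```

        With only the proportional term, the loop needs a nonzero error just to hold the output up against the disturbance, so the error never reaches zero; the integral term accumulates that leftover error until nothing remains.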

        • assassin_aragorn (+1/-0) · 6 months ago

          Due to the way it works. It’s a bit like steady-state error in control theory: for different applications it may or may not be acceptable. The “I” in PID controllers and all that, IIRC.

          Oh great, I’m getting horrible flashbacks now to my controls class.

          Another way to look at it: if there’s enough lag between your control action and your observed variable, you will never catch up to your target. You’ll always be chasing your tail with basic feedback control.
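
          A rough sketch of that lag effect (again with made-up gains, delays, and plant, nothing from the thread): the same proportional loop that behaves with an instant measurement starts to overshoot with a small measurement delay, and never settles at all with a larger one.

          ```python
          # Minimal sketch: basic feedback "chasing its tail" when the measurement
          # lags the plant. Gains, delays, and the plant are arbitrary choices.
          from collections import deque

          def simulate(kp, delay_steps, steps=1500, dt=0.01):
              y = 0.0
              setpoint = 1.0
              # buffer of past outputs; the controller only sees the oldest one
              history = deque([0.0] * delay_steps, maxlen=delay_steps)
              peak = 0.0
              for _ in range(steps):
                  measured = history[0] if delay_steps else y
                  u = kp * (setpoint - measured)  # plain proportional feedback
                  y += (-y + u) * dt              # first-order plant: dy/dt = -y + u
                  history.append(y)
                  peak = max(peak, y)
              return y, peak

          for d in (0, 10, 30):  # 0, 0.1, and 0.3 time units of measurement lag
              final, peak = simulate(kp=8.0, delay_steps=d)
              print(f"delay_steps={d:2d} -> final={final:.3g}, peak={peak:.3g}")
          ```

          The usual escape is to lower the gain until the loop tolerates the delay, which trades the oscillation for a sluggish response; either way, plain feedback stays behind the target.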

        • Cosmic Cleric (+5/-9) · 6 months ago (edited)

          It won’t ever be debugged sufficiently.

          It will, someday. Probably years and years down the road (pardon the pun), but it will.

          By the way, your reply to me seems very AI-ish. Are you a bot?

          • rottingleaf (+8/-0) · 6 months ago

            No, but English is not my first language.

            • Cosmic Cleric (+3/-0) · 6 months ago

              No, but English is not my first language.

              Fair enough. Apologies.

          • maynarkh (+6/-0) · 6 months ago

            I guess the argument is that this is what “innovation and disruption” looks like. When they finally iron things out so that chatbots won’t invent fake headlines, they’ll pile on a new technology that endangers us in a new way. This is the acceptable margin of error to them.

      • JackGreenEarth (+4/-4) · 6 months ago

        AI isn’t inherently bad. Once AI cars cause fewer accidents than human drivers (even if they still cause some), it will be moral to use them on roads.

        • anon987 (+3/-4) · 6 months ago

          AI cars already cause drastically fewer accidents. And the accidents they do cause are overwhelmingly minor.

          • Thorny_Insight (+1/-0) · 6 months ago

            People hate it when an accident happens and there’s no one to blame. For now the responsibility still falls on the driver, but that’s not always going to be the case. We’re never reaching zero traffic deaths, even with self-driving cars that are a hundred times better than the best human driver.