I’m trying to feel more comfortable using random GitHub projects, basically.

  • slazer2au (English) · 2 months ago

    What do you consider malicious, specifically? AI isn't a magic box; it's a regurgitation machine prone to hallucinations. You need to train it on examples of what you want it to identify.

    • unknowing8343 (OP) · 2 months ago

      I just want a report that says, “We detected on line 27 of file X a particular behavior that feels weird: it tries to upload your environment variables to some unexpected URL.”
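      Something like that report could come from a very crude static heuristic: flag any file that both reads environment variables and makes an outbound network call. The sketch below is hypothetical (no such off-the-shelf tool is named in this thread), and its name lists are illustrative, not exhaustive:

```python
import ast

# Hypothetical sketch, not an existing tool: a crude static heuristic
# that flags any Python file which both reads environment variables and
# makes an outbound HTTP request. The name sets below are illustrative
# and far from exhaustive; expect false positives (e.g. dict.get).
ENV_READS = {"environ", "getenv"}
NET_CALLS = {"post", "get", "urlopen", "urlretrieve"}

def scan(path):
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    env_lines, net_lines = [], []
    for node in ast.walk(tree):
        # os.environ / os.getenv(...) style reads
        if isinstance(node, ast.Attribute) and node.attr in ENV_READS:
            env_lines.append(node.lineno)
        # requests.post(...), urllib.request.urlopen(...) style calls
        if isinstance(node, ast.Call):
            func = node.func
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
            if name in NET_CALLS:
                net_lines.append(node.lineno)
    if env_lines and net_lines:
        print(f"{path}: reads env vars (lines {env_lines}) and "
              f"makes network calls (lines {net_lines}) -- review manually")
```

      Running `scan()` over a repo's files would give the kind of line-level report described above, though a real tool would need far more sophistication (taint tracking, allow-lists, cross-file analysis).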

      • slazer2au (English) · 2 months ago

        particular behavior that feels weird

        Yea, AI doesn’t do feelings.

        tries to upload your environment variables into some unexpected URL

        Most of the time that is obfuscated and can’t be detected as part of a code review. It only shows up in dynamic analysis.

        • unknowing8343 (OP) · 2 months ago

          AI doesn’t do feelings

          How can I have a serious conversation with these annoying answers? Come on, you know what I am talking about. Even an AI chatbot would know what I mean.

          Any AI chatbot, even a “general purpose” one, will read your code and return a description of what it does if you ask.

          And AI in particular could be great at catching “useless”, “weird”, or otherwise unexplainable code in a repository. Maybe not at current context lengths. But that's what I want to know: whether these tools (or anything similar) exist yet.

          Thank you.

          • FizzyOrange · 1 month ago

            Questions about AI seem to always bring out these naysayers. I can only assume they feel threatened? You see the same tedious fallacies again and again:

            • AI can’t “think” (using some arbitrary and unstated definition of the word “think” that just so happens to exclude AI by definition).
            • They’re stochastic parrots and can only reproduce things they’ve seen in their training set (despite copious evidence to the contrary).
            • They’re just “next word predictors” so they fundamentally are incapable of doing X (where X is a thing they have already done).
        • FizzyOrange · 2 months ago

          AI doesn’t do feelings

          It absolutely does. I don’t know where you got that weird idea.

          • Superb (English) · 1 month ago

            Honey your AI girlfriend doesn’t actually love you

              • Superb (English) · 1 month ago

                You’re right, I hope the two of you are very happy