Summary

This research, conducted by Microsoft and OpenAI, focuses on how nation-state actors and cybercriminals are using large language models (LLMs) in their attacks.

Key findings:

  • Threat actors are exploring LLMs for a range of tasks: gathering intelligence, developing tools, drafting phishing emails, evading detection, and social engineering.
  • No major LLM-enabled attacks have been observed, but early-stage experimentation suggests potential future threats.
  • Several nation-state actors were identified using LLMs, including groups linked to Russia, North Korea, Iran, and China.
  • Microsoft and OpenAI are taking action by disabling accounts associated with malicious activity and improving LLM safeguards.

Specific examples:

  • Russia (Forest Blizzard): Used LLMs to research satellite and radar technologies, and for basic scripting tasks.
  • North Korea (Emerald Sleet): Used LLMs for research on experts and think tanks related to North Korea, phishing email content, and understanding vulnerabilities.
  • Iran (Crimson Sandstorm): Used LLMs to draft social engineering emails, generate code snippets, and research detection-evasion techniques.
  • China (Charcoal Typhoon): Used LLMs for tool development, scripting, social engineering, and understanding cybersecurity tools.
  • China (Salmon Typhoon): Used LLMs for exploratory information gathering on various topics, including intelligence agencies, individuals, and cybersecurity matters.

Additional points:

  • The research identified eight LLM-themed TTPs (Tactics, Techniques, and Procedures) for the MITRE ATT&CK® framework to track malicious LLM use.

Comments:

  • AbouBenAdhem

    I assume they mean threat actors besides Microsoft and OpenAI?

  • Funderpants

    I mean, yea okay, but most of those use cases are exactly what everyone else is using them for so far.

  • Pantherina

    And that's why you don't produce tools that are not needed and cause harm, MicroShit

    • FaceDeer

      I am baffled that you appear to be attacking Microsoft over this. They’re doing research to counter bad actors here.

      • Pantherina

        They are funding and forcefully pushing that tool into Windows, and now they want to “protect” against “threat actors”.

        Don't believe a word that comes out of Big Tech PR departments.

        • FaceDeer

          You think Microsoft is the only organization capable of producing these tools? They weren’t even the first.

          • Pantherina

            That is true. Still, huge Big Tech companies are the biggest threat actors.

      • demonsword

        “They’re doing research to counter bad actors here”

        “Bad actors” as defined by the US gov't, of course. Home of the “brave” that bombs the shit out of everyone they dislike using unmanned drones, and currently supports an ongoing genocide in the Middle East. Literally the paradise of freedom and justice on Earth.