Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • KingJalopy · 9 months ago

    Check out the sci-fi book “Talbot” if you’re interested in a realistic depiction of a rogue AI (AGI). It was a fun read.