The model, called GameNGen, was made by Dani Valevski at Google Research and his colleagues, who declined to speak to New Scientist. According to their paper on the research, the AI can be played for up to 20 seconds while retaining all the features of the original, such as scores, ammunition levels and map layouts. Players can attack enemies, open doors and interact with the environment as usual.

After this period, the model begins to run out of memory and the illusion falls apart.

  • CheeseNoodle · 1 month ago

    I really hope this doesn’t catch on. Games are already horrifically inefficient; imagine if we started making them like this and a 4090 becomes the minimum system requirement for goddamn DOOM.

    • UnityDevice · 1 month ago

      Games are already horrifically inefficient

      That’s so far from the truth, it hurts me to read it. Games are some of the most optimised programs you can run on your computer. Just think about it: it’s an application rendering an entire imaginary world every dozen milliseconds. Compare that to anything else you run, like say Slack or Teams, which make your CPU sweat just to notify you about a new message.

      • Lucy :3 · 1 month ago

        Many games, especially AAA games or ones relying on common game engines, are actually horribly inefficient. It’s hard to run any Unity/Unreal game in 4K on my 1070, even if it has shit graphics like Lethal Company. What does run well? Smaller, custom engines: even Metro Exodus runs at 60+ FPS in 4K on my 1070, and still looks very good. Why? Because 4A Games is/was actually interested in creating a good engine and good games. That’s the whole reason they split from the S.T.A.L.K.E.R. team: because, in their opinion, the engine was too inefficient.

        Most games are just a quick cash grab though, especially ones by large companies like EA. Other large companies with a significantly lower output of games, e.g. Valve, do produce programmatically higher-quality games.

        • rhombus · 1 month ago

          It’s hard to run any Unity/Unreal game in 4K on my 1070

          Both of these engines are capable of making very optimized games; it’s just that most of the developers using them either don’t have the expertise or don’t care to put in the effort.

          • Lucy :3 · 1 month ago

            I know. The inherent problem with games made in those engines is the lack of motivation, knowledge and experience among devs to make (programmatically) good games. Only a very few games using those engines are good in that sense, and since exceptions prove the rule, I’d just simplify it to that statement.

        • PaellaVacuum · 1 month ago

          Right buddy, seems like you’ve never had to play on a 3-gen-old non-gaming laptop. That’s such a privileged view lmao.

  • YourNetworkIsHaunted · 1 month ago

    Note that the image here isn’t from the AI project; it’s from actual Doom. Their own screenshots have weird glitches, including a hit splat that looks like a butt in the image I’ve seen that comes closest to this one.

    And when they say they’ve “run the game”, they do not mean that there was a playable version that was publicly compared to the original. Rather, they released short video clips of alleged gameplay and had their evaluators try to identify whether they were from the AI recreation or from actual Doom.

    Even by the abysmal standards of generative AI projects this is a hell of a grift.

    • Telorand · 1 month ago

      Even by the abysmal standards of generative AI projects this is a hell of a grift.

      But if you invest now, you can make a game-generating AI a reality! /s

    • ElectricMachman · 1 month ago

      I’m pretty sure that screenshot is from the video. The zombieman has no feet.

      • YourNetworkIsHaunted · 1 month ago

        Possibly fair. I’m pretty sure I’ve seen that exact screenshot used in other articles about Doom, but I’m not enough of a Doom nerd to be sure.

        There’s a decent writeup over at Pivot-to-AI that looks at the paper as a whole in more detail.

  • the_artic_one · 1 month ago

    Thinking quickly, Generative AI constructs a playable version of Doom, using only some string, a squirrel, and a playable version of Doom.

  • Snot Flickerman · 1 month ago

    An AI-generated recreation of the classic computer game Doom can be played normally despite having no computer code or graphics.

    After this period, the model begins to run out of memory and the illusion falls apart.

    Why are we lying about this? Just because it happens in the AI “black box” doesn’t mean it’s not producing some kind of code in the background to make this work. They even admit that it “runs out of memory”. Huh, last I checked, you’d need to be running code to use memory. The AI itself is made of code! No computer code or graphics, my ass.

    The model, called GameNGen, was made by Dani Valevski at Google Research and his colleagues, who declined to speak to New Scientist.

    Always a good look. /s

    • xionzui · 1 month ago

      I mean, yes, technically you build and run AI models using code. The point is there is no code defining the game logic or graphical rendering. It’s all statistical prediction of what should happen next in a game of Doom by a neural network. The entirety of the game itself is learned weights within the model. Nobody coded any part of the actual game. No code was generated to run the game. It’s entirely represented within the model.
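
      Roughly, the whole “game” is just a next-frame predictor called in a loop. Here’s a minimal sketch of that idea (not GameNGen’s actual code; predict_next_frame and the sizes are made-up stand-ins for the trained network):

      ```python
      import numpy as np

      H, W, CONTEXT = 120, 160, 32  # frame size and history length (made-up numbers)

      def predict_next_frame(frames, actions):
          """Stand-in for the trained network: the real model conditions on
          the recent frames plus player actions and denoises the next image."""
          noise = np.random.rand(H, W, 3)
          return 0.9 * frames[-1] + 0.1 * noise  # pretend prediction

      frames = [np.zeros((H, W, 3))]  # "boot screen"
      actions = []

      for tick in range(200):  # roughly 10 s of play at 20 fps
          actions.append("move_forward")  # the player input for this tick
          nxt = predict_next_frame(frames[-CONTEXT:], actions[-CONTEXT:])
          frames.append(nxt)  # the only "game state" is pixels
      ```

      No door, ammo or map logic exists anywhere in that loop; whatever shows up on screen is whatever the weights say usually comes next.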

      • huginn · 1 month ago

        What they’ve done is flatten and encode every aspect of the Doom game into the model, which lets you play a very limited amount just by traversing the latent space.

        In a tiny and linear game like Doom that’s feasible, and a horrendous use of resources.

        • SpaceNoodle · 1 month ago

          It doesn’t even actually do that. It’s a glitchy mess.

        • Todd Bonzalez · 1 month ago

          And a horrendous use of resources.

          This was a Stable Diffusion model trained on hundreds of thousands of images. That’s actually a pretty small training set and a pretty lightweight model to train.

          Custom/novel SD models are created and shared by hobbyists all the time. It’s something you can do with a gaming PC, so it’s not any worse a waste of resources than gaming.

          I’m betting Google didn’t throw a lot of money at the “get it to play Doom” guys anyway.

    • Blue_Morpho · 1 month ago

      Imagine you are shown what Doom looks like, are told what the player does, and then you draw the frames of what you think it should look like. While your brain is a computation device, you aren’t explicitly running a program. You are guessing what the drawings should look like based on previous games of Doom that you have watched.

    • cdf12345 · 1 month ago

      Maybe they should have specified: the Doom source code.

    • fruitycoder · 1 month ago

      This would be like playing DnD where you see a painting and describe what you would do next as if you were in the painting, and then an artist painted the next scene for you.

      The artist isn’t rolling dice, following the rule book, or using any actual game elements; they are just painting based on the last painting and your description of what happens next.

      It’s an incredibly novel approach, even if it’s obviously a toy problem.

    • bamboo · 1 month ago

      “No code” programming has been a thing for a while, long before the LLM boom. Of course all the “no code” platforms generate some kind of code based on rules provided by the user, not fundamentally different from an interpreter. This is consistent with that established terminology.

      • Blue_Morpho · 1 month ago

        No-code programming meant using a GUI to draw flowcharts that then generate running code. This is completely different.

        • bamboo · 1 month ago

          Using a different high level interface to generate code is completely different? The fundamental concept is the same even if the UI is very different.

          • Blue_Morpho · 1 month ago

            Yes, it’s completely different. “No code” is actually all code, just written graphically instead of with words. Every instruction that is turned into CPU instructions has to be drawn on a flowchart. If you want the “no code” to add A + B, you have to write A + B in a box on the flowchart. Have you taken a computer class? You must know what a flowchart is.

            This Doom was done by having a neural net watch Doom being played. It then recreates the images from Doom based on what it “learned”. It doesn’t have any code for “mouse click -> call fire shotgun function”. Instead, it saw that when someone clicked the mouse, pixels on the screen changed in a particular way, so it simulates the same pixel pattern it learned.
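
            To make the contrast concrete, here’s a toy sketch (purely illustrative; neither half is real Doom or GameNGen code, and DummyModel just stands in for the trained network):

            ```python
            # 1) Conventional engine: the rule is written explicitly by a programmer.
            def classic_tick(state, clicked):
                if clicked:
                    state["ammo"] -= 1  # hand-written game logic
                    state["muzzle_flash"] = True
                return state

            # 2) GameNGen-style: no rules anywhere, only a learned frame predictor.
            class DummyModel:  # stand-in for the trained network
                def predict(self, recent_frames, action):
                    # "The shotgun fires" exists only as a pixel pattern the
                    # model picked up from watching gameplay footage.
                    return recent_frames[-1]  # pretend next frame

            def neural_tick(model, recent_frames, clicked):
                action = "fire" if clicked else "idle"
                return model.predict(recent_frames, action)

            print(classic_tick({"ammo": 8}, clicked=True))
            print(neural_tick(DummyModel(), ["frame_0"], clicked=True))
            ```

            In the first version the behaviour lives in an if-statement someone wrote; in the second it only lives in the weights.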

  • BetaDoggo_ · 1 month ago

    It’s cool but it’s more or less just a party trick.

    • Todd Bonzalez · 1 month ago

      Is it though? We can show an AI thousands of hours of something and it can simulate it almost perfectly. All the game mechanics work! It even makes you collect keys and stock up on ammo. For a stable diffusion model that’s pretty profound emergent behavior.

      I feel like you’re kidding yourself if you don’t think this has real-world applications. This is the kind of breakthrough we need for self-driving: the ability to simulate what would happen in real life given a precise current state and a set of fictional inputs.

      Doom is a low-graphics game, so it’s definitely easier to simulate, but this method could make the next generation of niche “VidGen” models extremely accurate.

      • fruitycoder · 1 month ago

        Honestly I think your self-driving example is something this could be really cool for. If the generation can exceed real time (i.e. 20 seconds of future image prediction can happen in under 20 seconds), then you can preemptively react with the self-driving model and cache the results (rough sketch below).

        If the compute costs can be managed, maybe even run multiple models against each other to develop an array of likely branch predictions (you know, what if I turned left?).

        It’s even cooler that player input helps predict the next image.
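
        Here’s a rough sketch of that “faster than real time” condition (the numbers and generate_frame are made up):

        ```python
        import time

        FPS = 20        # playback rate of the predicted video
        HORIZON_S = 5   # how many seconds ahead we want to look

        def generate_frame():
            time.sleep(0.01)  # pretend the model takes 10 ms per frame
            return "frame"

        start = time.time()
        cache = [generate_frame() for _ in range(FPS * HORIZON_S)]
        elapsed = time.time() - start

        realtime_factor = HORIZON_S / elapsed
        print(f"{HORIZON_S}s of future video generated in {elapsed:.1f}s "
              f"({realtime_factor:.1f}x real time)")
        # Only if realtime_factor > 1 is there time left over to act on
        # (or cache) the prediction before reality catches up.
        ```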

      • ElectricMachman · 1 month ago

        I’m not convinced. The ammo seems to go up and down on a whim, as does the health.

        • JDPoZ · 1 month ago

          Because AI isn’t actually “artificial intelligence”. It’s the marketing term that seems to have been adopted by every corporation to describe LLMs, which are more like extra-fancy, power-guzzling parrots.

          It’s why the best use cases for them are mimicking things brainlessly, like voice cloning for celebrity impressions. But that doesn’t mean they can act or comprehend emotion, or know how many fingers a hand should have, and it’s why they constantly hallucinate contextless bullshit. Just like a parrot doesn’t actually know the meaning of what it’s saying when it goes POLLY WANT A CRACKER; it just knows the tall thing will give it a treat if it makes this specific squawk with its beak.

      • Rolling Resistance · 1 month ago

        Can’t wait for your self-driving car to go out of memory mid ride.

    • Echo Dot · 1 month ago

      It’s a proof-of-concept demonstration, not a final product. You might as well say the Wright brothers didn’t have anything other than their party trick.

      There are so many practical applications for being able to do this beyond just video games; in fact, video games are probably the least useful application for this technology.

  • Drusenija · 1 month ago

    Regardless of the technology, isn’t this essentially creating a facsimile of a game that already exists? So the tech isn’t really about creating a new game; it’s about replicating something that already exists in a fairly inefficient manner. That doesn’t really help you create something new: I’m not going to be able to come up with an idea for a new game, throw it at this AI, and get something playable out of it.

    That and the fact it “can be played for up to 20 seconds” before “the model begins to run out of memory” seems like, I don’t know, a fairly major roadblock?

    • UndercoverUlrikHD · 1 month ago

      It’s just a research paper, not a product. It’s about discovering and learning new possible methods and applications.

      • Drusenija · 1 month ago

        That’s a fair point actually, I’m looking at it through a product lens, not a research one.

    • bob_lemon · 1 month ago

      Yes, this does nothing for game dev. But I don’t think it was supposed to.

      The fact that this is a genAI model generating a reasonable, context-aware image a whopping 20 times a second is nonetheless pretty impressive.

    • Goun · 1 month ago

      Unless you can monetize those 20 seconds like crazy

      • Drusenija · 1 month ago

        This sounds like the basis for a new Warioware game.

    • Echo Dot · 1 month ago

      So you think a project should be killed immediately upon inception because it’s not immediately perfect? That is a really really weird attitude.

      • Drusenija · 1 month ago

        I’m more taking issue with this quote from the article:

        “Researchers behind the project say similar AI models could be used to create games from scratch in the future, just as they create text and images today.”

        This doesn’t strike me as something that can create a game from scratch; it’s something that can take an existing game and replicate it without access to the underlying source code, using an immense amount of processing power to do it.

        Since it seems they’re using generative-AI-based technology underneath it, they’re effectively building a Doom model. You might be able to spin a Doom clone off from that, but I don’t see it as something you could practically throw another game type at.

        That being said, as I said in a different reply, I was viewing it through the lens of a product rather than that of a research project. As a field of research, it’s an interesting topic. But I’m not sure how you connect it to “create games from scratch” if you don’t already have an existing game available to train the model on.

        • Echo Dot · 1 month ago

          Why do you think it needs an existing game to train the model on? They used Doom precisely because it already exists.

          The entire point of the research paper was to see if humans could tell the difference between the generated content and the real game; that way they have a measurable metric of how viable this technology is, even if only in theory. That means they have to make something that’s based on a real game.

          Obviously the technology isn’t commercially viable yet. But the fact that it looks even remotely like Doom shows that there is promise to the technology.

    • locuester · 1 month ago

      Perhaps you could be missing the trajectory of continuous improvement. How long until The Matrix?

      • Echo Dot · 1 month ago

        It’s an exponential increase as well, and humans are very bad at judging exponential increases. They look at something like this and see no promise in it, because they can’t see that four or five iterations down the line (and in the world of AI that could very easily be three months) it will be hundreds of times better.

  • paraphrand · 1 month ago

    “Playable” nah. “Interactive” yes.

  • harsh3466 · 1 month ago

    Correct me if I’m wrong, but doesn’t there have to be a code layer somewhere in there?

    It’s like all those “no code” platforms that just obscure away the actual coding via a GUI and blocks/elements/whatever.

    • hasnt_seen_goonies · 1 month ago

      In this case, no. This is just predicting what the next frame should be from the previous one, like how the Sora videos work, but with input.

      • Virkkunen · 1 month ago

        So then the code is to make the AI generate images and take input.

        • hasnt_seen_goonies · 1 month ago

          Or the code is the operating system that the application is running on, or the code is the firmware that is operating the GPU that is crunching the numbers to make the neural net, or the code is the friends we made along the way.

    • Todd Bonzalez · 1 month ago

      I mean, yeah, there’s code, but none of it is Doom.

  • dustyData · 1 month ago

    This is just a pile of garbage. Jim Sterling’s breakdown is the most complete argument. But this is just a plain ol’ bag of shit.