There’s a video on YouTube where someone managed to train a network of rat neurons to play Doom, and the way they did it seems reminiscent of how we train ML models.

I got the impression from the video that real neurons are a lot better at learning than simulated ones (and far less power-hungry).

Could any ML problems, such as natural language generation, be solved using real neurons instead, and would that be in any way practical?

Ethically, at this point, is this neuron array considered conscious in any way?

  • flashgnashOP (+3/−0) · 6 months ago

    Think you might’ve commented on the wrong post

    • kakes (+2/−2) · 6 months ago

      Haha naw, it’s the same basic idea, just using something inorganic (like glass) to represent a neural network instead of something like biological neurons.

      • flashgnashOP (+3/−0) · 6 months ago

        Cool idea, though existing computers are also an inorganic way of representing a neural net.

        • kakes (+1/−2) · 6 months ago

          Well, yes, but something like etched glass would be better in basically every way, if it could be done. (See my other comment in this thread if you want more details.)

      • BreakDecks (+2/−1) · 6 months ago

        What on earth are you talking about?

        • kakes (+1/−2) · 6 months ago

          A neural network is an array of layered nodes, where each node contains some kind of activation function, and each connection represents some weight multiplier. Importantly, once the model is trained, it’s stateless, meaning we don’t need to store any extra data to use it - just inputs and outputs.

          If we could take some sort of material, like a glass, and modify it so that if you shone a light through one end, the light would bounce in such a way as to emulate these functions and weights, you could create an extremely cheap, compact, fast, and power efficient neural network. In theory, at least.
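
The stateless point above is the key one: once trained, a network is just a fixed function from inputs to outputs, which is what makes a passive medium like glass plausible for running it. A minimal sketch of such a forward pass in Python (the weights here are made up purely for illustration, not from any trained model):

```python
import math

# A tiny "trained" feedforward net: 2 inputs -> 2 hidden nodes -> 1 output.
# The weights and biases are fixed, so evaluation needs no stored state
# beyond the inputs themselves.
W1 = [[0.5, -0.3], [0.8, 0.2]]   # input -> hidden weights (one row per hidden node)
B1 = [0.1, -0.1]                 # hidden-node biases
W2 = [1.0, -1.5]                 # hidden -> output weights
B2 = 0.05                        # output bias

def sigmoid(x):
    # A common activation function; any nonlinearity would do.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    # Each hidden node: weighted sum of the inputs plus a bias, then the
    # activation function.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(W1, B1)]
    # Output node: weighted sum of the hidden activations.
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + B2)
```

Every multiply-accumulate and activation here is, in principle, something light passing through a shaped medium could compute in parallel, which is the appeal of the optical approach.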