• crispyflagstones · 6 months ago

    The completely different software stack is a killer. It’s not that you can’t find versions of a model to run, but almost everything that hits the GPU for compute is going to be targeting CUDA, not ROCm. From a compatibility standpoint alone, this killed AMD for me. I just do not want to spend my time fighting the stack to get these models running.

    • WormFood · 6 months ago

      on the one hand, cuda is vendor lock-in and if we’d all just agreed on an open standard decades ago then we wouldn’t be in this mess

      but on the other hand, ROCm is crap and AdaptiveCpp is very half-baked right now, at least in my limited experience

      • crispyflagstones · 6 months ago

        Yeah, it’s not that I like this state of affairs, but right now the vendor lock-in is so one-sided that it’s hard to say there’s a viable alternative to CUDA. I hope that changes one day.

    • aard · 6 months ago

      Admittedly I’m just toying around for entertainment purposes — but I didn’t really have any problems getting anything I wanted to try running with ROCm support. The bigger annoyance was different projects targeting specific distributions or specific software versions (mostly ancient Python), but since I’m doing everything in containers anyway, that was also manageable.
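
      A minimal sketch of the container approach described above, assuming AMD's official `rocm/pytorch` base image; the project layout, pinned Python version, and file names are illustrative, not from the comment:

      ```dockerfile
      # Sketch: isolate a project that pins an old Python inside a
      # ROCm-enabled container instead of touching the host distro.
      # Tag is illustrative; pick one matching your ROCm version.
      FROM rocm/pytorch:latest

      # Hypothetical project that insists on Python 3.8. Whether this
      # package is available depends on the base image's Ubuntu release
      # (a deadsnakes PPA may be needed on newer releases).
      RUN apt-get update && apt-get install -y python3.8 python3.8-venv \
          && rm -rf /var/lib/apt/lists/*

      WORKDIR /app
      COPY requirements.txt .
      RUN python3.8 -m venv /opt/venv \
          && /opt/venv/bin/pip install -r requirements.txt

      COPY . .
      CMD ["/opt/venv/bin/python", "main.py"]
      ```

      To give the container GPU access, ROCm containers are typically started with `--device=/dev/kfd --device=/dev/dri` (and often `--group-add video`) passed to `docker run`, per AMD's container documentation.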