• thespcicifcocean@lemmy.world · 16 hours ago

    I hate that AI just means LLM now. ML can actually be useful for making predictions based on past trends, and it’s not nearly as power-hungry.

    • Bazell@lemmy.zip · 14 hours ago

      Yeah, it’s especially funny how people forget that even tiny models, around 20 neurons, used for primitive NPCs in 2D games are called AI too, and can literally run on a button phone (not a Nokia 3310, but something slightly more powerful). These small specialized models have existed for decades. And the most interesting part is that relatively small models (a few thousand neurons) can work very well at predicting price trends, classifying objects by their parameters, estimating the chance of a specific disease from symptoms alone, and so on. They generally work better than LLMs on the same task.
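As a rough illustration of just how small such a model can be, here is a hand-wired two-layer network with three hidden neurons. The weights are made up for the sketch, not trained, so the exact boundary it draws is arbitrary:

```python
import math

# A hypothetical two-layer network: 3 tanh hidden neurons feeding one
# sigmoid output. Tiny by any standard, yet already non-linear.
# Weights are invented for illustration, not learned from data.

def tiny_net(x1, x2):
    # Hidden layer: 3 neurons with tanh activation
    hidden = [
        math.tanh(2.0 * x1 + 2.0 * x2 - 1.0),
        math.tanh(-2.0 * x1 - 2.0 * x2 + 3.0),
        math.tanh(1.0 * x1 - 1.0 * x2),
    ]
    # Output neuron: sigmoid squashes the weighted sum into (0, 1)
    z = 1.5 * hidden[0] + 1.5 * hidden[1] - 0.5 * hidden[2] - 1.0
    return 1.0 / (1.0 + math.exp(-z))

score = tiny_net(0.2, 0.9)
print(0.0 < score < 1.0)  # the output is a probability-like score
```

A real NPC version would just threshold that score into a decision (flee, attack, idle), which is why it runs fine on very weak hardware.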

      • chonglibloodsport@lemmy.world · 11 hours ago

        Do you have an example of some games that use small neural networks for their NPC AIs? I was under the impression that most video game AIs used expert systems, at least for built-in ones.

        • Bazell@lemmy.zip · 10 hours ago

          Well, as far as I know, modern chess engines are relatively small AI models that take the current state of the board as input and predict the next best move. Like Stockfish. Also, there is a game called Supreme Commander 2 that is confirmed to use small neural models to run its NPCs. And as a person somewhat involved in game development, I can say that the indie game engine libGDX provides an AI module that can be tuned to whatever level you need for NPC decisions. And it can be scaled any way you want.

          • Buddahriffic@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            4 hours ago

            As I understand it, chess AIs are more like brute-force models: they take the current board and generate a tree of all possible moves from that position, then iterate on those new positions up to a certain depth (which is what the depth of the engine refers to). Some use other algorithms to “score” each position and keep the search to the interesting branches, though that could introduce bias that makes them miss moves that look bad but actually set up a better position. Ultimately, they do need some way to compare different ending positions if the depth doesn’t bring them to checkmate in all paths.

            So it chooses the most intelligent move it can find, but does so by essentially playing out every possible game, kinda like Dr. Strange in Infinity War, except chess has a more finite set of states to search through.
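That search-plus-scoring idea is basically depth-limited minimax. Here is a toy sketch over a tiny hand-built game tree, with a hypothetical evaluate() standing in for a real engine's position heuristic:

```python
# Minimal depth-limited minimax sketch. `tree` maps a position to its
# child positions; positions not in the tree (or reached at depth 0)
# are scored by evaluate(), a stand-in for an engine's heuristic.

tree = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
leaf_scores = {"a1": 3, "a2": -2, "b1": 1, "b2": 5}

def evaluate(pos):
    # Invented scores: positive favors the maximizing player.
    return leaf_scores.get(pos, 0)

def minimax(pos, depth, maximizing):
    children = tree.get(pos, [])
    if depth == 0 or not children:
        return evaluate(pos)  # out of depth: fall back to the heuristic
    scores = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(scores) if maximizing else min(scores)

print(minimax("start", 2, True))  # → 1: move "b" avoids the -2 reply
```

The bias worry above lives in evaluate(): if pruning trusts a bad heuristic, whole branches with hidden good moves never get explored.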

            • Bazell@lemmy.zip · 4 hours ago

              Maybe. I haven’t studied modern chess engines that deeply. All I know is that you can either use the brute-force method, which recursively calculates each possible move, or train an AI model on existing brute-force engines so that it simply guesses the best move without actually recalculating every possibility. Both approaches work, each with its own benefits and downsides.

              But all of this is according to my knowledge, which may be incomplete, so I recommend double-checking this info.

        • Holytimes@sh.itjust.works · 11 hours ago

          Black & White used machine learning, if I recall. Absolutely a classic of a game; I highly recommend a play if you never have. Dota 2 has a machine-learning-based AI agent for its bots, though I’m unsure if those are actually in the standard game or not.

          Forza and a few other racing games throughout the years have used ML to various degrees.

          And Hello Neighbor was a rather infamously bad indie game that used it.

          For a topical example, Arc Raiders used machine learning to train its AI during development, though it doesn’t run on the live servers to keep updating it.

          For an LLM example, Where Winds Meet is using small LLMs for its AI dialogue interactions, which makes for very fun RP mini-games.

          I’m sure there are more examples, but these are what I can think of and find off Google.

    • JackbyDev@programming.dev · 14 hours ago

      What’s most annoying to me about the fiasco is that things people used to be okay with, like ML, which have always been lumped in under the term AI, are now getting hate because they’re “AI”.

      • thespcicifcocean@lemmy.world · 13 hours ago

        What’s worse is that management conflates the two all the time. Whenever I give the outputs of my own ML algorithm, they think it’s LLM output, and then they ask me to just ask ChatGPT to do any damn thing that I would usually do myself or feed into my ML to predict.

        • KeenFlame@feddit.nu · 10 hours ago

          ? If you make and work with ML, you are in a field of research. It’s not a technology that you “use”. And if you give the output of your “ML”, then that is exactly identical to an LLM output. They don’t conflate anything. ChatGPT is also the output of “ML”.

          • thespcicifcocean@lemmy.world · 9 hours ago

            When I say the output of my ML, I mean I give the prediction and a confidence score. For instance, if a process has a high probability of being late based on the inputs, I’ll say it’ll be late, along with the confidence. That’s completely different from feeding the figures into a GPT and reporting whatever the LLM says.

            And when I say “ML”, I mean a model I trained on specific data to do a very specific thing. There’s no prompting and no chat-like output. It’s not a language model.
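For illustration, that kind of output can be as simple as a label plus a probability from a small trained model. A sketch with invented feature names and weights (a real model would learn these from historical data):

```python
import math

# Hypothetical logistic model predicting whether a process will be
# late. It returns a label plus a confidence score: nothing chat-like.
# Feature names and weights are made up for this sketch.

WEIGHTS = {"queue_length": 0.8, "staff_on_shift": -0.5}
BIAS = -0.2

def predict_late(features):
    # Weighted sum of the inputs, then a sigmoid to get a probability
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    p_late = 1.0 / (1.0 + math.exp(-z))
    label = "late" if p_late >= 0.5 else "on time"
    return label, round(p_late, 2)

print(predict_late({"queue_length": 5, "staff_on_shift": 3}))
```

The whole "output" handed to management is that one (label, confidence) pair, which is exactly what gets lost when someone pastes the raw figures into a chatbot instead.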