• thespcicifcocean@lemmy.world · 13 hours ago

    What’s worse is that management conflates the two all the time, and whenever I give the outputs of my own ML algorithm, they think it’s LLM output. Then they ask me to just ask ChatGPT to do any damn thing that I would usually do myself, or feed into my ML model to predict.

    • KeenFlame@feddit.nu · 11 hours ago

      ? If you make and work with ML, you are in a field of research. It’s not a technology that you “use”. And if you give the output of your “ML”, that is exactly identical to an LLM output. They don’t conflate anything. ChatGPT is also the output of “ML”.

      • thespcicifcocean@lemmy.world · 9 hours ago (edited)

        When I say the output of my ML, I mean I give the prediction and confidence score. For instance, if there’s a process that has a high probability of being late based on the inputs, I’ll say it’ll be late, with the confidence. That’s completely different from feeding the figures into a GPT and saying whatever the LLM says.

        And when I say “ML”, I mean a model I trained on specific data to do a very specific thing. There’s no prompting, and no chat-like output. It’s not a language model.
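
        To make that concrete, here is a minimal sketch of that kind of workflow, assuming a scikit-learn classifier; the feature names, training data, and model choice are all illustrative assumptions, not the actual setup:

        ```python
        # Minimal sketch: a classifier trained on specific historical data that
        # reports a prediction plus a confidence score. No prompting, no chat output.
        # Features, data, and model choice are hypothetical placeholders.
        from sklearn.ensemble import RandomForestClassifier
        import numpy as np

        # Hypothetical historical records: [queue_length, staff_on_shift, batch_size]
        X_train = np.array([
            [12, 3, 500],
            [2, 5, 120],
            [9, 2, 430],
            [1, 6, 80],
            [11, 4, 390],
            [3, 5, 150],
        ])
        y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = late, 0 = on time

        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        # For a new process, report the predicted label and the model's
        # probability for that label (the "confidence").
        x_new = np.array([[10, 2, 450]])
        pred = int(model.predict(x_new)[0])
        conf = model.predict_proba(x_new)[0][pred]
        print(f"prediction: {'late' if pred else 'on time'} (confidence: {conf:.2f})")
        ```

        The output is just a label and a number, e.g. “late (confidence: 0.85)”, which is the kind of thing you can hand to management directly, rather than a language model’s paraphrase of the figures.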