What’s most annoying to me about the fiasco is that things people used to be okay with, like ML, which has always been lumped in with the term AI, are now getting hate because they’re “AI”.
What’s worse is that management conflates the two all the time: whenever I give the outputs of my own ML model, they think it’s LLM output, and then they ask me to just have ChatGPT do whatever I would usually do myself or feed into my ML model to predict.
? If you make and work with ML, you are in a field of research. It’s not a technology that you “use”. And if you give the output of your “ML”, then that is exactly identical to an LLM output. They don’t conflate anything. ChatGPT is also the output of “ML”.
When I say the output of my ML, I mean I give the prediction and a confidence score. For instance, if a process has a high probability of being late based on the inputs, I’ll say it’ll be late, along with the confidence. That’s completely different from feeding the figures into a GPT and repeating whatever the LLM says.
And when I say “ML”, I mean a model I trained on specific data to do one very specific thing. There’s no prompting and no chat-like output; it’s not a language model.
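
Something like this, roughly (a minimal sketch, assuming scikit-learn; the feature names and numbers are made up for illustration, not my actual pipeline):

```python
# A task-specific classifier trained on tabular data that returns a
# prediction plus a confidence score -- no prompting, no chat output.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: rows of process features
# (e.g. queue_depth, staffing_level, backlog_hours), label = 1 if late.
X_train = np.array([
    [12, 3, 40],
    [2, 5, 4],
    [20, 2, 55],
    [1, 6, 2],
])
y_train = np.array([1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# For a new process, report the prediction and its confidence.
new_process = np.array([[15, 3, 48]])
proba = model.predict_proba(new_process)[0]
prediction = "late" if proba[1] >= 0.5 else "on time"
print(f"Predicted {prediction} with confidence {max(proba):.0%}")
```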