This kind of AI use is a plague. I'm a fourth-year student at one of Romania's top medical universities, and it's insane how many of my peers can no longer write proper essays, conduct research, or carry out studies independently. Critical thinking, attention span, and time-management skills have all taken a huge hit. My girlfriend attends a highly ranked private high school (where annual tuition runs to five figures in euros), and the same issues are present there as well. Depressing times.
AI is unreliable in the sciences to the point of being almost dangerous. The more niche the question, the more likely it is to give you a completely incorrect answer. I'd rather it admit that it doesn't know.
Chatbots are text completion models, improv machines basically, so they don't really have that ability. You could look at logprobs, I guess (i.e., is it spreading probability pretty evenly across a bunch of candidate words?), but that's unreliable. Even adding an "I don't know" token wouldn't work, because that's not really trainable from text datasets: they don't know when they don't know; it's all just modeling which next word is most likely.
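The "spreading probability evenly" check above can be made concrete with per-token entropy: a sharply peaked next-token distribution has low entropy, a near-uniform one (the model guessing) has high entropy. A minimal sketch, using made-up logprob values rather than output from any real API:

```python
import math

def token_entropy(logprobs):
    """Shannon entropy (bits) of a next-token distribution.

    `logprobs` maps candidate tokens to natural-log probabilities,
    the format many LLM APIs expose for top candidates.
    """
    probs = [math.exp(lp) for lp in logprobs.values()]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical logprobs for a fact the model "knows":
# almost all mass on one token -> low entropy.
confident = {
    "Paris": math.log(0.97),
    "Lyon": math.log(0.02),
    "Nice": math.log(0.01),
}

# Hypothetical logprobs for a niche question:
# mass spread evenly across guesses -> high entropy.
guessing = {
    "1947": math.log(0.27),
    "1948": math.log(0.25),
    "1952": math.log(0.24),
    "1939": math.log(0.24),
}

print(token_entropy(confident))  # ~0.22 bits
print(token_entropy(guessing))   # ~2.0 bits, close to uniform over 4 options
```

The catch, as the comment says, is that this is unreliable: a model can also be confidently wrong, putting nearly all its probability mass on a single incorrect token, which low entropy will never flag.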
Some non-autoregressive architectures would be better at this, but unfortunately the "cutting edge" models people interact with, like ChatGPT, are developed far more conservatively than you'd think. They've left tons of innovations on the table.