I've personally seen this behavior a few times in real life, often with worrying implications. Generously, I'd like to believe these people use extruded text as a place to start thinking from, but in practice it seems to me that they tend to use it as a thought-terminating behavior.
IRL, I find it kind of insulting, especially if I’m talking to people who should know better or if they hand me extruded stuff instead of work they were supposed to do.
Online, it's usually just sort of harmless reply-guy stuff.
Many people simply straight-up believe LLMs to be genie-like figures, as they're advertised and written about in the "tech" rags. That bums me out in sort of the same way really uncritical religiosity bums me out.
HBU?
There's a difference between an answer from ChatGPT and an answer from ChatGPT that's been reviewed by a person, particularly if that person is knowledgeable about the topic. ChatGPT isn't deterministic, so if I go and ask it the same thing, there's no guarantee I'll get a remotely similar answer.
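For what it's worth, here's a minimal sketch of what I mean by non-deterministic, assuming the official `openai` Python client and an API key in the environment; the model name and prompt are just illustrative:

```python
# Sketch: the same prompt, asked twice, usually comes back different.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    # With temperature > 0, tokens are sampled rather than chosen
    # greedily, so repeated calls can (and usually do) diverge.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return resp.choices[0].message.content

a = ask("Explain the CAP theorem in two sentences.")
b = ask("Explain the CAP theorem in two sentences.")
print(a == b)  # frequently False: sampling makes outputs non-deterministic
```

Even at temperature 0 the outputs aren't guaranteed identical in practice, which is part of why "just ask it yourself" doesn't reproduce the answer someone pasted.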
The problem for me is that I have no way of knowing whether the person posting the ChatGPT response is an expert, or whether they actually reviewed the output. However, that's true of people in general; just replace "reviewing the output" with "not trolling," and the effort to assess the utility of a comment is pretty similar.