I do think LLMs are going to start getting worse as more data, increasingly their own output, is fed into them. And instead of admitting this, these companies will have so much capital that they'll just tune the AI to say exactly what they want it to every time. We already see some of this, but it will get worse.
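The "gets worse as more data is fed back in" worry is usually discussed under the name model collapse: a model trained on its own output tends to lose diversity over generations. A toy sketch of the mechanism, with a Gaussian standing in for the model and all parameters made up for illustration (this is not how any real LLM is trained, just the compounding-estimation-error idea):

```python
import random
import statistics

def collapse_demo(n_samples=50, generations=1000, seed=0):
    """Toy 'model collapse' illustration: repeatedly fit a Gaussian to
    samples drawn from the previous generation's fitted Gaussian.
    Finite-sample estimation error compounds, and the fitted spread
    tends to shrink toward zero -- i.e. the 'model' loses diversity
    once it only ever sees its own synthetic output."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real data" distribution
    for _ in range(generations):
        # Each generation trains only on the previous generation's output.
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)  # refit on synthetic data
    return sigma

print(f"spread after self-training: {collapse_demo():.4f}")
```

With these toy settings the fitted spread ends up far below the original 1.0, even though each individual refit looks harmless.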
Reject LLMs branded as AI, retvrn to 9999999 nested if statements.
I’ve been hearing this for a while; has it started happening yet?
And if it did, is there any reason why people couldn’t switch back to an older version?
Sometimes, yeah, you can see it. Not only across updates but within a single conversation: models degrade in effectiveness long before the context window is reached. Things like image generation tend to get worse after more than two edits, even when the image seed is given.