There do need to be filters in place to prevent results like this, though. They have a responsibility to stop their algorithms from doing stuff like this.
It's practically impossible to fully filter LLM outputs like this. As hard as all the big companies building frontier models have tried, you can still get them to say and tell you whatever you want if you prompt them right.
Also this was on a fucking Tesla I guess, so it's not like this was in any way a model designed for use with children. In fact, it sounds like it was just the standard Grok model, which is intended to be edgy and sexual. So the model is actually functioning as intended lol
The filter needs to happen at the end.
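To make the "filter at the end" idea concrete, here's a minimal sketch of output-side filtering: it scans the model's finished text rather than the user's prompt. The blocklist terms and function name are hypothetical, and a real deployment would use a trained moderation classifier instead of keyword matching.

```python
import re

# Hypothetical blocklist for illustration only; real systems use
# moderation classifiers, not keyword lists.
BLOCKED_PATTERNS = [r"\bbum\b"]

def filter_output(text: str, replacement: str = "[filtered]") -> str:
    """Redact blocked terms from the model's final output.

    Running this after generation means it catches disallowed
    content regardless of how the model was coaxed into producing it.
    """
    for pattern in BLOCKED_PATTERNS:
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(filter_output("Look at my bum"))  # -> Look at my [filtered]
```

Of course, as the thread notes, simple pattern filters like this are easy to evade, which is why output moderation remains an unsolved problem.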
I agree. Notepad should also have an NSFW filter so no children can read the word bum. Over-ascribing importance to AI output is the real issue here.
In reality, there probably is a filter and the AI just said ‘news’ and they misheard.
This is a strawman, because Notepad isn't going to spit shit out to children unprompted. Parents have a duty to prevent kids from seeing any 18+ material on their computer.
You are probably right about them mishearing though.