I run a small VPS host and rely on PayPal for payments, mainly because (a) most VPS customers pay that way if you aren't AWS or GoDaddy and (b) its fraud protection is very good. My prior venture had quite a few chargebacks through Stripe, so it went PP-only as well.
My dad told me I should "reduce the processing fees" and cited ChatGPT's inaccurate claim that PayPal charges 5% fees, when it actually charges 3-3.5% (plus 49 cents). Yet he insisted 5% was the charge.
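The gap is easy to check yourself. A minimal sketch, assuming the roughly 3.49% + $0.49 per-transaction rate cited above (actual rates vary by account and payment type), with invoice amounts made up purely for illustration:

    # Compare the cited PayPal rate (~3.49% + $0.49, assumed) against the claimed flat 5%.
    def paypal_fee(amount, pct=0.0349, fixed=0.49):
        """Approximate fee on a single payment at the assumed standard rate."""
        return amount * pct + fixed

    def claimed_fee(amount, pct=0.05):
        """Fee if the 5% figure from ChatGPT were actually correct."""
        return amount * pct

    for invoice in (5.00, 20.00, 100.00):  # illustrative VPS invoice sizes
        actual, claimed = paypal_fee(invoice), claimed_fee(invoice)
        print(f"${invoice:.2f} invoice: ~${actual:.2f} ({actual / invoice:.1%}) "
              f"vs claimed ${claimed:.2f} (5.0%)")

Note the fixed 49 cents dominates on very small invoices, so the effective rate there can exceed 5% anyway; on larger invoices it converges toward the percentage rate.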
Yes, PayPal sucks, but ChatGPT sucks even more. When I was a kid he said Toontown would ruin my brain, yet LLMs are ruining his even worse.
If you're looking up content written by humans and published in an article on the internet, it is far less likely to be wrong.
It's a bit less likely to be wrong, but there's still plenty of room for error, whether through malicious intent or through sloppiness in researching even basic things. Someone being wrong once, by misreading, by misinterpreting data, or by trying to steer perception of something, can easily snowball into many sources repeating that wrong information ("I've read it, so it must be true"). Many kinds of information are also very dependent on perspective, adding nuance beyond "correct" and "false".
There are plenty of reasons to double-check information (seemingly) written by humans; you just do it for different reasons than with AI content. But the basic idea of "it can easily be wrong" is the same.
No. Far less likely.