I’m now convinced that most holodeck malfunctions are the result of end users, who don’t know what they’re doing, using AI to generate poorly-written software they’re ill-equipped to debug or even really understand.
I’d argue that while fiction can present good arguments about things, it can’t prove anything, because the conclusion was decided in advance and written toward, rather than emerging from a series of events and choices.