

Following the asteroid analogy, I view it as this: If there’s a 20% chance that an asteroid could hit us in 2050, does that supplant the threat of climate change today?
I’m not trying to say that AI systems won’t kill us all, just that they are being used to directly harm entire populations right now, and the appeal to a future danger is being used to minimize that discussion.
Another thing to consider: If an AI system does kill us all, it will still be a human or organization that gave it the ability to do so, whether through training practices or by plugging it into weapons systems. Placing the blame on the AI itself absolves every person and organization of responsibility, which is in line with how AI is used today (i.e. the promise of algorithmic ‘neutrality’). Put another way, do the bombs kill us all in a nuclear armageddon, or do the people who pressed the button? Does the gun kill me, or does the person pulling the trigger?
Could we not be like this? Those kids didn’t vote for anyone.