cross-posted from: https://lemmy.ca/post/52046585
We make buildings install fire extinguishers for safety. Should AI plants be forced to install something that can shut it down in an instant?
The utility of a nuclear stockpile is as a deterrent against a threat we know exists (hostile foreign powers). The utility of this is a deterrent against, or response to, what exactly? A hypothetical AI beyond what we currently have the tech to build, and one which, if built, probably would not behave the way it is fictionally portrayed, so the button is unlikely to actually be pressed even if needed. Consider that the AIs we already have can be used to persuade people of things. If we somehow managed to build a Skynet-style super-AI bent on taking over the world, its most obvious move would not be to suddenly launch a war on humanity but to manipulate people into handing it control, so that the one in charge of pressing the button would be itself, or someone favorable to it, long before anyone realized pressing it was even necessary.
I get what you’re saying: once AI can manipulate, it will try to make sure the button never gets pressed. But humanity isn’t dumb either. We’ve spotted and contained world-ending risks before. Why assume we wouldn’t notice this one?
Have we? The closest I can think of is maybe the ozone hole, and as far as I understand it that wasn’t quite world-ending so much as a danger to people’s health.
Smallpox may be another one if the current Secretary of Health’s brain worm doesn’t decide that smallpox is good for your health or something.
That wasn’t world-ending though. Deadly, sure, but diseases like that don’t tend to just kill absolutely everyone, and we existed in spite of it for quite a long time.
Yeah, fair enough.
Global warming, comets, solar flares, nearby supernovas, overpopulation, nuclear war… I dunno.
Which of those have we actually done anything about? We’ve made some modest efforts on global warming, but not enough to actually solve the issue; overpopulation was never really a serious issue in the first place; nuclear weapons still exist and could still be used someday; and for the space stuff we have only the beginnings of an idea about how to deal with it someday, except maybe asteroids and comets, where we have an idea of what to do but not the infrastructure to launch a big enough craft to redirect a big one in time.
Apparently you don’t.
We’ve done nothing meaningful to contain global warming. Comets? That’s a laugh! What do you think we have that would stop a comet from creating a huge mess if one happened to be pointed at us? (You’re aware that Armageddon was a fictional movie, right?) And with solar flares and nearby supernovas you’ve entered the realm of delusion. What, precisely, have we done to “contain” solar flares and supernovas?
[citation needed]
Socrates (470–399 BCE) — ethics, questioning, Socratic method
Plato (427–347 BCE) — forms, justice, ideal state
Aristotle (384–322 BCE) — logic, science, virtue ethics
Confucius (551–479 BCE) — ethics, family, social harmony
Niccolò Machiavelli (1469–1527) — political realism
Francis Bacon (1561–1626) — scientific method
René Descartes (1596–1650) — rationalism, “I think, therefore I am”
Thomas Hobbes (1588–1679) — social contract, Leviathan
Baruch Spinoza (1632–1677) — pantheism, ethics
John Locke (1632–1704) — empiricism, liberalism
Gottfried Leibniz (1646–1716) — monads, optimism
David Hume (1711–1776) — empiricism, skepticism
Jean-Jacques Rousseau (1712–1778) — social contract, human freedom
Immanuel Kant (1724–1804) — categorical imperative, critique of reason
Georg Hegel (1770–1831) — dialectics, history as progress
Arthur Schopenhauer (1788–1860) — pessimism, will to live
John Stuart Mill (1806–1873) — utilitarianism, liberty
Karl Marx (1818–1883) — materialism, class struggle
Friedrich Nietzsche (1844–1900) — will to power, eternal recurrence
William James (1842–1910) — pragmatism, psychology
Ludwig Wittgenstein (1889–1951) — language, logic
Martin Heidegger (1889–1976) — being, existentialism
Jean-Paul Sartre (1905–1980) — existentialism, freedom
Simone de Beauvoir (1908–1986) — feminism, existential ethics
Michel Foucault (1926–1984) — power, knowledge, institutions
Hannah Arendt (1906–1975) — totalitarianism, political theory
Noam Chomsky (1928– ) — linguistics, political philosophy
Those are humans. A human is smart. Humanity is another story…
Ah. If you redefine “contain[ing] world-ending risks” to include “literally anything that someone blathers about” you can continue that line of blather forever.