I feel like most people who hate AI really hate the dipshits who monopolized the sector and have been inflating it like a balloon, fooling oafish CEOs and destroying people’s livelihoods in exchange for profit.
AGI research, if it achieves its end goals, could also teach us a lot about consciousness, creativity, feelings, mental illness, and so on (while raising a horde of ethical problems too…). Moreover, if AI were properly used for the benefit of humanity, to ease the burden of mundane tasks and give people more time for their lives and hobbies, it could be quite beneficial. But again, we are living in a generation of power-hungry, mentally deranged, soulless dipshit CEOs, shareholders and politicians. So maybe some civilization that comes later will make better use of it (provided we don’t self-destruct first).
I think the search for a perfect slave is kinda disgusting. I think LLMs are a bad technology made in an evil way. I think pretending computers can speak is an insult to personhood, and that creating tools to generate mediocrity at scale is anti-human.
Some technologies are just bad and we should not make them.
Why do you mod this subreddit?
Yeah, fully conscious AGIs obviously shouldn’t be used for that purpose; that’s no different from slavery. It isn’t even clear whether they could be researched ethically. That doesn’t mean we should stop thinking about it, though.
I am thinking more along these lines: if there were ever an energy-efficient and less error-prone version of today’s LLMs, it could in principle be beneficial if it were used to handle mundane tasks, reducing work hours while keeping the same workforce and salaries. We all know it is hard to push things that way, because as soon as some tech is capable of automating work, CEOs and shareholders will insist on reducing the workforce to increase profits. So for non-general AI to be useful to humanity, we obviously need some sort of social reform along with it, which does make it very hard to turn the net negative effects of current AI tech into positives. But this does not make non-general AI research inherently evil. Evil people will almost always try to monopolize new tech to increase their profits at the cost of destroying other people’s livelihoods. One could put even advances in computer hardware or dedicated software in the same bucket.
As to the question of moderation, I was actually added to the mod list without asking for it. I’m happy to be removed, as I am not a very active mod.