Yea, fully conscious AGIs obviously shouldn't be used for that purpose; that would be no different from slavery. It's not even clear whether they could be researched ethically. That doesn't mean we should stop thinking about it, though.
I am thinking more along the lines of: if there were ever an energy-efficient and less error-prone version of today's LLMs, it could in principle be beneficial if it were used to handle mundane tasks, reducing working hours while keeping the same workforce and salaries. We all know it is hard to push things in that direction, because as soon as some tech is capable of automating work, CEOs and shareholders will insist on cutting the workforce to increase profits. So for non-general AI to be useful to humanity, we obviously need some sort of social reform along with it, which makes it very hard to turn the net negative effects of current AI tech into a positive. But that does not make non-general AI research inherently evil. Evil people will almost always try to monopolize new tech to increase their profits at the cost of destroying other people's livelihoods. One could put even advances in computer tech or dedicated software in the same bucket.
As to the question of moderation, I was actually added to the mod list without asking for it. Happy to be removed as I am not a very active mod.