  • As a Data Scientist I can say: The present danger from AI isn’t the singularity. That’s science fiction. It’s the lack of comprehension of what an AI is & the push to involve it more and more in certain decision-making processes.

    Current AIs are, at their cores, just statistical models that assign probabilities to answers based on previously observed data.

    Governments and corporations around the globe try to use these models to automate decisions. One massive problem here is the lack of transparency and the human bias in the data.

    For example, say a corporation uses an AI to determine who should be fired. You get fired, you try to complain, but you just get the answer that the machine had a wide variety of input data & you should have worked harder.

    We have seen in the past that AIs focus on things we don’t necessarily want them to focus on. In the example above, maybe your job performance was better than your colleague Dave’s, but you are a PoC and Dave is white. In the past, PoCs were more likely to get fired, so the AI decided that you are the most probable answer to the question ‘Who should we fire?’.
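    To make this concrete, here is a minimal sketch of what “a statistical model that assigns probabilities based on previously observed data” means in this scenario. The data and the `p_fired` helper are entirely made up for illustration; a real system would be a far more complex model, but the failure mode is the same: biased history in, biased probabilities out.

    ```python
    # Toy sketch with hypothetical HR records: (race, was_fired).
    # The "model" just estimates P(fired | race) from past frequencies,
    # so it faithfully reproduces whatever bias is in the history.
    from collections import Counter

    history = [
        ("white", False), ("white", False), ("white", True),
        ("poc", True), ("poc", True), ("poc", False),
    ]

    fired = Counter(race for race, was_fired in history if was_fired)
    total = Counter(race for race, _ in history)

    def p_fired(race):
        """Probability assigned purely from observed frequencies."""
        return fired[race] / total[race]

    print(p_fired("poc"))    # higher, solely because of the biased history
    print(p_fired("white"))
    ```

    Nothing in this sketch “knows” anything about job performance; the skewed past records alone are enough to make the model single out one group.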

    If a human had made the decision, you could interview them and uncover the underlying racism in the decision. Deciphering the decision of an AI is next to impossible.

    So we slowly take away our ability to address wrongs in our bureaucratic processes by cementing them into statistical models, & thereby remove our ability to improve our societal values. AI has the potential to grind society’s progress to a halt & drag easily fixable problems decades or centuries into the future.