On January 7, 2025, Meta announced sweeping changes to its content moderation policies, including ending third-party fact-checking in the U.S. and rolling back its hate speech policy globally in ways that remove protections for women, people of color, trans people, and more. In the absence of data from Meta, we decided to go straight to users to assess whether and how harmful content is manifesting on Meta platforms in the wake of the January rollbacks.

    • fyzzlefry@retrolemmy.com · 1 day ago (+5/−2)

      That is the most simplistic, uneducated opinion on the subject that is possible to make. You should be ashamed.

      • lmmarsano@lemmynsfw.com · edited 11 hours ago (+2)

        Nah, and cool opinion.

        As someone else wrote, why should anyone put much confidence in “some giant/evil megacorp”? They’re not a philanthropic organization & they’re not real authorities. We can expect them to act in their own interest.

        If content is truly illegal or harmful, then the real authorities should handle it. Simply taking down that content doesn’t help real authorities or address credible threats. If it’s not illegal or harmful, then we can block or ignore it.

        People already curate their information offline. It seems reasonable to expect the same online.

        • zarkanian@sh.itjust.works · 10 hours ago (+1/−1)

          There are speech police in the real world. Workplaces don’t allow you to use slurs or to harass your co-workers. That’s just one example. In fact, any social group that I can think of will punish you for saying certain things. Some are more lenient than others, but every one has a line that you cannot cross.

          • Plebcouncilman@sh.itjust.works · 7 hours ago (+2)

            True, which is why I think an upvote/downvote system is the best form of moderation. Of course there are things you cannot allow, but it’s mostly the illegal stuff. I’m for low moderation, not no moderation. Facebook et al. were not doing low moderation; theirs was heavy-handed and unnecessary.

          • lmmarsano@lemmynsfw.com · 6 hours ago (+1/−1)

            There are speech police in the real world. Workplaces don’t allow you to use slurs or to harass your co-workers.

            That “speech police” traces back to the government in the form of labor laws & regulations in the remit of the EEOC, e.g., Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act. Employers didn’t conceive of such workplace policies on their own to invite lawsuits & put targets on their backs.

            These laws do not apply to social media as a communication platform. Offensive expression doesn’t deny anyone equal access or opportunities to platform resources that the companies are under any legal obligation to provide. Should we put much confidence in social media companies voluntarily assuming unnecessary obligations just because?

            It never made sense.