Hello everyone,

We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do because they will just post from another instance since we changed our registration policy.

We keep working on a solution, we have a few things in the works but that won’t help us now.

Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.

Edit: @Striker@lemmy.world, the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with 10 moderators. And if it wasn’t his community, it would have been another one. And it is clear this could happen on any instance.

But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what’s next very soon.

Edit 2: removed that bit about the moderator tools. That came out a bit harsher than we meant it. It’s been a long day, and having to deal with this kind of stuff got some of us a bit salty, to say the least. Remember we also had to deal with people posting scat not too long ago, so this isn’t the first time we’ve felt helpless. Anyway, I hope we can announce something more positive soon.

  • 𝕯𝖎𝖕𝖘𝖍𝖎𝖙@lemmy.world · 1 year ago

    Sorry let me word this correctly: social media wouldn’t exist.

    And this is hardly the argument you think it is. Again, it’s not true of all social media sites, but let’s steelman your argument for a moment and say that you are referring only to the major social media sites.

    Well then, we have a problem, don’t we? What’s something the major social media sites have that Lemmy doesn’t? Ad revenue, to the tune of millions of dollars. What do they do with that revenue? Well, some of it goes to pay real humans whose entire job is simply seeking out and destroying CSAM content on the site.

    So then how does Lemmy, with only enough money to pay hosting costs, if that… deal with CSAM when a user wants to create a botnet that posts CSAM to Lemmy instances all day? My answer is: the admins do whatever they think is necessary, including turning off the community for a bit. They have my full support in this.

    No, your solution is to permanently shut down Lemmy since there is the possibility of CSAM appearing on one instance. The community it’s posted in doesn’t matter. They can just keep spamming CSAM, and the mods can’t do anything about it except shut down the instance or community, unless there are better moderation tools. That’s basically what everyone wants: better tools and more automation so the job gets easier. It’s better to remove a picture that was wrongly flagged as CSAM than to leave up one that actually is CSAM.

    You’re strawmanning my argument. I’ve never said forever. I’ve said while the community gets cleaned up. I’ve even described a timeline below.

    The better tools you want for moderation are your own eyeballs. I’ve said this before, but there have been many attempts at automated CSAM detection tools, and they just don’t work as well as needed, requiring humans to intervene. Those humans are paid by major social media networks, but not by volunteer networks.

    The problem is that it won’t stop and will happen again.

    Yes, this is the internet! No one has a solution to stop CSAM from happening. We aren’t discussing that. We are discussing how to handle it WHEN it happens.

    You’re wrong at step 2. The posters might’ve used Tor, which basically makes it impossible to identify them. Also, in most cases LE doesn’t do shit, so the spamming won’t stop (unless someone other than LE does something against it). We can’t only rely on LE to do their job. We need better moderation tools.

    No, I’m correct about step 2, which I described as: “csam gets deleted, posters are identified, information turned over to law enforcement”

    I’ll break it down further:

    1. CSAM gets deleted from the instance. Admins and mods can do this, and they do this already.
    2. posters are identified. Admins and mods can do this, and might do this already. TO BE CLEAR, they can identify the users by IP address and user agent, that’s about it. The rest of it… is…
    3. “information turned over to law enforcement” … left up to law enforcement. “Hello police, I’m the owner of xyz.com, and today a user at 23.43.23.22 posted CSAM on my site at this time. The user has been banned, and we have given you all the information we have on this.” The cops can get a warrant for the ISP and go from there.
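    The record an admin can actually hand over at step 3 is small. Here is a minimal sketch (in Python, with hypothetical field and function names — not anything Lemmy actually ships) of what steps 1-3 amount to in practice: keep only the identifiers the instance really sees, plus a hash of the removed file rather than the file itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib

@dataclass
class AbuseReport:
    """Record of a removed post, kept for a law-enforcement handoff."""
    posted_at: str       # UTC timestamp of the original post
    ip_address: str      # what the instance actually sees
    user_agent: str
    content_sha256: str  # hash of the removed file, not the file itself

def build_report(posted_at: datetime, ip: str, user_agent: str,
                 content: bytes) -> AbuseReport:
    # Store only a hash so the illegal content itself is not retained.
    return AbuseReport(
        posted_at=posted_at.astimezone(timezone.utc).isoformat(),
        ip_address=ip,
        user_agent=user_agent,
        content_sha256=hashlib.sha256(content).hexdigest(),
    )

report = build_report(datetime(2023, 8, 27, 14, 3, tzinfo=timezone.utc),
                      "23.43.23.22", "Mozilla/5.0", b"<removed file bytes>")
print(report.ip_address)  # an IP and a user agent are about all an instance has
```

    That really is the whole handoff: everything past this record is up to law enforcement and the ISP.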

    Oh yeah, Tor. Well, we’re getting deep off topic here, but go on YouTube and watch some DEF CON talks about how Tor users are identified. You may think you’re slick going on Tor, but then you open up Facebook or check your Gmail and it’s all over.

    Either way, I’m not speaking to the success of catching CSAM posters; I’m only speaking to what the admins are most likely already doing.

    Also even if the community is turned back on, what’s stopping someone from doing it again? This time maybe a whole instance?

    Nothing, which is why social media sites dedicate teams of mods to handle this exact thing. It’s a cat and mouse game. But not playing the game and not trying to remove this content means the admins face legal trouble.

    It’s simply too easy to spam child porn everywhere. One instance of CP is much easier to moderate than thousands.

    This makes no sense to me. What was your point? Yes, one image is easier to delete than thousands of images. I don’t see how that plays into any of what we have been discussing though.

    • newIdentity@sh.itjust.works · 1 year ago (edited)

      I don’t want to write a long text, so here is the short version: these automated tools are not perfect, but they don’t have to be. They just have to be good enough to block most of it. The rest can be handled through manual labor, which people have also done voluntarily on Reddit. Reporting needs to get easier, and spammers can be slowed down by rate limiting them.
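      The automated part is usually hash matching against a list of known material. A toy sketch of the idea (exact SHA-256 matching only; real systems like PhotoDNA or PDQ use perceptual hashes from orgs such as NCMEC, which also match re-encoded copies — this is an illustration, not a real filter):

```python
import hashlib

# Hypothetical blocklist: hashes of known abusive images, distributed by
# a clearinghouse. Anything not matched still goes to human review.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def should_block(upload: bytes) -> bool:
    """Return True if the upload exactly matches a known-bad hash."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_BAD_HASHES

print(should_block(b"known-bad-image-bytes"))  # True  -> auto-removed
print(should_block(b"harmless cat picture"))   # False -> normal review queue
```

      Exact hashing only catches byte-identical copies, which is exactly why these tools are “not perfect but don’t have to be” — they block the bulk of reposts and leave the rest to humans.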

      To be clear, I don’t have anything against temporarily shutting down a community filled with CP until everything is cleared up. But we need better solutions to make this easier in the future, so it doesn’t need to go this far and stays more manageable.
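      The rate limiting mentioned above could be as simple as a per-poster token bucket — allow a small burst of posts, then throttle. A hypothetical sketch, not anything from Lemmy’s codebase:

```python
import time

class TokenBucket:
    """Per-poster rate limit: allow `capacity` posts as a burst, then
    refill `rate` tokens per second. Hypothetical sketch."""
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last attempt.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=0.1)  # 3-post burst, then 1 per 10 s
print([bucket.allow() for _ in range(5)])   # [True, True, True, False, False]
```

      A spam botnet posting as fast as it can hits the limit immediately, while a normal poster never notices it.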

      I’m sorry for the grammatical mistakes. I’m really tired right now and should probably go to bed.

      • 𝕯𝖎𝖕𝖘𝖍𝖎𝖙@lemmy.world · 1 year ago

        I agree with most of what you’ve written, just one small issue:

        The rest can be handled through manual labor, which people have also done voluntarily on Reddit.

        You’re probably right that some volunteers handle this content on reddit. By this I mean, mods are volunteers and sometimes mods handle this content.

        My point, however, has been that big social media sites can’t rely on volunteers to handle this content. Reddit, along with Facebook and other major sites (but not Twitter, as Elon just removed this team), has a team of people who pick up the slack where the automated tools leave off. These people are paid, and usually not well, but enough that it’s their job to remove this content (as opposed to a volunteer gig they do on the side). I’ll say that again: these people are paid to look at photographs of CSAM and other psychologically damaging content all day, usually for pennies.

        But we need better solutions to make it easier in the future so it doesn’t need to go this far and be more manageable.

        I fully agree with you. It’s just that, as a dev who has toyed around with AI and has been working on code for decades now, I don’t see a clear path forward. I am also not an expert in these tools, so I can’t speak specifically to how well they work. I can only say that they don’t work so well that humans are not required. Ideally, we want tools that work so well humans won’t be required (as it’s a psychologically damaging job), but at the same time, we don’t want legit users to be misflagged either.

        The other day there was a link posted to hackerne.ws by a YouTube creator who keeps needing to re-enable comments on her shorts. The YouTube algorithm keeps disabling comments on her shorts because it thinks there’s a child in the video. It’s only ever been her, and while she is petite in stature, she’s also 30 years old. She’s been reaching out to YouTube for 3-4 years now and they still haven’t fixed the issue. Each video she uploads, she needs to turn on comments manually, which affects her engagement. While nowhere near comparable to the sin of CSAM, it’s also not right for a legit user to be penalized just because of the way she looks, because the algorithm cannot properly determine her age.

        YouTube is a good example of how difficult it is to moderate something like this. A while ago, YouTube revealed that “a year’s worth of content is uploaded every minute” (or maybe it was every hour? still)… Consider how many people would be required to watch every minute of uploaded video, multiplied by each minute in their day. YouTube requires automated tools, and community reporting, and likely also has a team of mods. And it’s still imperfect.

        So to be clear, you’re not wrong, it’s just a very difficult problem to solve.