• AnarchistArtificer@slrpnk.net · 13 points · edited · 11 hours ago

    As a society, we need to better value the labour that goes into our collective knowledge bases. Non-English Wikipedia is just one example of this, but it highlights the core of the problem: the system relies on a tremendous amount of skilled labour that cannot easily be done by just a few volunteers.

    Paying people to contribute would come with problems of its own (in a hypothetical world where this was permitted by Wikipedia, which I don’t believe it is at present), but it would be easier for people to contribute if the time they wanted to volunteer wasn’t competing with their need to keep their head above water financially. Universal basic income, or something similar, seems like one of the more viable ways to ease this tension.

    However, a big component of the problem is around the less concrete side of how society values things. I’m a scientist in an area where we are increasingly reliant on scientific databases, such as the Protein Data Bank (PDB), where experimentally determined protein structures are deposited and annotated, as well as countless databases on different genes and their functions. Active curation of these databases is how we’re able to research a gene in one model organism, and then apply those insights to the equivalent gene in other organisms.

    For example, CG9536 is the name of a gene found in Drosophila melanogaster (fruit flies), a common model organism for genetic research due to the ease of working with them in a lab. Much of the research around this particular gene can be found on FlyBase, a database for D. melanogaster gene research. Despite fruit flies being super different to humans, many of their genes have human equivalents, and CG9536 is no exception; TMEM115 is what we call it in humans. The TL;DR answer to what this gene does is “we don’t know”: although we have some knowledge of what it does, the tricky part of this kind of research is figuring out how genes and proteins interact as part of a wider system. Even if we knew exactly what a gene does in a healthy person, it’s much harder to understand what kinds of illnesses arise from a faulty version of it, or whether the gene or its protein could be a target for developing novel drugs. I don’t know much about TMEM115 specifically, but I know someone who was exploring whether it could be relevant in understanding how certain kinds of brain tumours develop. Biological databases are a core component of how we can begin to make sense of the bigger picture.

    Whilst the data that fill these databases are produced by experimental research attached to published papers, there’s a tremendous amount of work that goes into making all these resources talk to each other. That FlyBase link above goes to the page on TMEM115, and I can use these resources to synthesise research across fields that would previously have been siloed: the folks who work on flies have a different research culture from those who work on human genes, or yeast, or plants, etc. TMEM115 is also sometimes called TM115, and it would be a nightmare if a scientist reviewing the literature missed important existing research just because it referred to the gene under a slightly different name.
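    To make that last point a bit more concrete, here’s a minimal sketch of what resolving gene names to a canonical identifier can look like. The tables and function names are made up for illustration only (real curated resources like FlyBase, NCBI Gene or UniProt maintain synonym and orthologue mappings at a vastly larger scale):

```python
# Illustrative sketch only: tiny hand-written tables standing in for the
# synonym and orthologue mappings that curated databases maintain.

SYNONYMS = {
    "TM115": "TMEM115",    # alternative symbol for the same human gene
    "TMEM115": "TMEM115",  # canonical symbol maps to itself
}

ORTHOLOGUES = {
    # (species, gene symbol) -> equivalent human gene symbol
    ("D. melanogaster", "CG9536"): "TMEM115",
}

def canonical(symbol: str) -> str:
    """Resolve a possibly non-standard human gene symbol to its canonical form."""
    return SYNONYMS.get(symbol, f"UNRESOLVED:{symbol}")

# A literature search that only matches one spelling misses papers; resolving
# every mention first lets results for the same gene be grouped together.
mentions = ["TMEM115", "TM115", "TM115"]
print({canonical(m) for m in mentions})            # {'TMEM115'}
print(ORTHOLOGUES[("D. melanogaster", "CG9536")])  # TMEM115
```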

    Making these biological databases link up properly requires active curation, a process that the philosopher of science Sabine Leonelli refers to as “data packaging”: a challenging task that includes asking “who else might find this data useful?” [1]. The people doing the experiments that produce the data aren’t necessarily the best placed to figure out how to package and label that data for others to use, because doing so inherently requires thinking in a way that spans many different research subfields. Crucially, though, this infrastructure work gives a scientist far fewer opportunities to publish new papers, which means this essential labour is devalued in our current system of doing science.

    It’s rather like how some of the people adding poor-quality articles to non-English Wikipedia feel like they’re contributing, because automated tools allow them to create more new articles than someone with actual specialist knowledge could. It’s the product of a culture of an ever-hungry “more” that fuels the production of slop, devalues the work of curators and degrades our knowledge ecosystem. The financial incentives that drive this behaviour play a big role, but I see them as a symptom of a wider problem: society’s desire to easily quantify value means that important work which is harder to quantify gets systematically devalued (a problem we also see in how reproductive labour, i.e. the labour involved in managing a family or household, has historically been dismissed).

    We need to start recognising how tenuous our existing knowledge is. The OP discusses languages with few native speakers, which likely won’t affect many who read the article, but we’re at risk of losing so much more if we don’t take that fragility seriously. The more we learn, the more we need to invest in expanding our systems of knowledge infrastructure, as well as in maintaining what we already have.


    [1]: Rather than the paper in which Sabine Leonelli coined the phrase “data packaging”, I’m going to cite her 2016 book “Data-Centric Biology: A Philosophical Study”. I don’t imagine that many people will read this large comment of mine, but if you’ve made it this far, you might be interested in checking out her work. Though it’s not aimed at a general audience, it’s still fairly accessible if you’re the kind of nerd who’s interested in the messy problem of making a database usable by everyone.

    If your appetite for learning is larger than your wallet, then I’d suggest that Anna’s Archive or similar is a good shout. Some communities aren’t cool with directly linking to resources like this, so know that you can check the Wikipedia page of shadow library sites to find a reliable link: https://en.wikipedia.org/wiki/Anna%27s_Archive



  • A_norny_mousse@feddit.org · 28 points · 19 hours ago

    As soon as you leave the big languages, esp. English, Wikipedia can be very problematic for all sorts of reasons.
    Mostly because of a lack of eyeballs.
    But it doesn’t end with merely badly written/generated content; there’s also narrative manipulation that, unlike in the English version, remains unchallenged.

    • Truscape@lemmy.blahaj.zone · 1 point · 12 hours ago

      I wonder if language and other cultural fields are the only areas where Linus’s law is impossible to apply safely. Programming seems quite easy by comparison.

      • squaresinger@lemmy.world · 2 points · 11 hours ago

        Hmm, the law begins with “Given enough eyeballs”. So it’s explicitly not about small-language Wikipedia sites having too few editors.

        It also doesn’t talk about finding consensus. “All bugs are shallow” means that someone can see the solution. In software development, that’s most often quite easy, especially when it comes to bugfixes. It’s rarely difficult to verify whether the solution to a bug works or not. So in most cases if someone finds a solution and it works, that’s good enough for everyone.

        In cultural fields, that’s decidedly not the case.

        For most of society’s problems, there are hardly any new solutions. We have had the same basic problems for centuries, and pretty much “all” the solutions were proposed decades or centuries ago.

        How to make government fair? How to get rid of crime? How to make a good society?

        These things have literally been issues since the first humans learned to speak.

        That’s why Linus’ law doesn’t really apply here. We all want different things and there’s no fix that satisfies all requirements or preferences.

        • Truscape@lemmy.blahaj.zone · 1 point · 11 hours ago

          Wikipedia (from my understanding) was built on a doctrine similar to Linus’ law: iterative improvement, where the dedicated and the many cull misinformation and outdated content.

          I wonder what a viable alternative would be for delicate situations like these, where the current model leads to “hugs of death” (too many eager users who don’t understand the damage they’re causing) in niche cultural systems. Maybe a council-established group of authors and editors, chosen based on their background and qualifications?

          • squaresinger@lemmy.world · 1 point · 8 hours ago

            That’s kinda what Wikipedia does. They have quite an elaborate review process before stuff goes live: https://en.wikipedia.org/wiki/Wikipedia:Reviewing

            In the English Wikipedia, that process works quite well. But smaller editions, like the Welsh Wikipedia, might only have a handful of reviewers in total. There’s no way such a small group of people could be knowledgeable in all subjects.

            Welsh Wikipedia has fewer than 200 active users in total, and there are dozens of small-language or dialect Wikipedias with <30 active users.

            https://en.wikipedia.org/wiki/List_of_Wikipedias

            I don’t think there’s an actual solution for this issue until AI translations become so good that there’s no need for language-specific content any more. If that ever happens.