• BodyBySisyphus [he/him]@hexbear.net
    10 days ago

    Unironically all the issues associated with the technology in general. Just open-sourcing the models doesn’t solve all the problems inherent with them, and as @Wheaties@hexbear.net noted in her pull quotes:

    The vast majority of data in Africa hasn’t been digitized, so contractors across the continent are paid to gather agricultural, medical and financial records, as well as audio in Yoruba, Hausa and Nigerian-accented English

    Like, is it better than OpenAI or Google doing it? Sure, but that doesn’t necessarily make it good.

    • ThermonuclearEgg@hexbear.net
      10 days ago

      Unironically all the issues associated with the technology in general.

      Oh yeah, definitely, those are legitimate complaints. As usual in tech, solve one problem and two more appear… unfortunately, making it less power-hungry can have the side effect of increasing its total power consumption, since efficiency gains tend to drive more overall use (the Jevons paradox).

      • BodyBySisyphus [he/him]@hexbear.net
        10 days ago

        Researchers have also written about the ethics. Is deploying these sorts of tools to resource-scarce environments going to actually help or will it simply justify decreasing their resources further? Why invest in training teachers, medical workers, and agricultural extension agents if you can just get a chatbot to teach lessons, give medical diagnoses, and tell you when to plant your crops? Are we just going to undermine social relations further by replacing work that plays intangible roles in maintaining communities with software that can only perform the strict job description? And that assumes that it can perform. What happens if your AI extension agent gives you bad crop advice and you lose your harvest?

        I’m still convinced that LLMs are mostly a solution looking for a problem and their appeal would be much lower if our ruling ideas weren’t dogmatically aligned with automating away all workers.

    • piccolo [any]@hexbear.net
      9 days ago

      It seems that EqualyzAI, the main company mentioned in the article, isn’t even using a hosted version of DeepSeek. They’re running the model themselves on their own hardware, so nobody but EqualyzAI is seeing this data, not even DeepSeek or Huawei.

      (Edit: Also, EqualyzAI only has to pay the server costs of running the model; they don’t have to pay DeepSeek anything to self-host it.)

      • BodyBySisyphus [he/him]@hexbear.net
        9 days ago

        Who’s doing what is kind of fuzzy - as worded, it implies that Huawei is doing the digitization and ingestion and EqualyzAI is getting a model produced with custom training weights. I would assume that whatever’s getting integrated is winding up as part of DeepSeek sensu lato because why wouldn’t it?

        Even if there weren’t potential issues with that, the broader problem is the uses these models are being put to - see my other comment below. It’s one thing to use an LLM for sentiment analysis or coding help and quite another to use one for actual advice or information that might significantly impact decision making.