• BotCheese@beehaw.org
    1 year ago

    And we’re nowhere near done scaling LLMs

    I think we might be. I remember hearing OpenAI was training on so much literary data that they didn’t, and couldn’t, find enough for testing the model. Though I may be misremembering.

    • newde@feddit.nl

      No, that’s definitely the case. However, Microsoft is now working on making LLMs rely more on a handful of high-quality sources. For example: encyclopedias will be weighted more heavily as sources than random Reddit posts.

        • Zaktor@sopuli.xyz
          Cunningham’s Law may be very helpful in this respect.

          “The best way to get the right answer on the internet is not to ask a question; it’s to post the wrong answer.”