Sevengifts - Nanachi ᘬᘬᏎ⏝

Main “Alt”: @Nanachi Altalt: @777 (Currently unusable, wrong mail address; feel free to yoink it)

Yeah I yoinked the username :sunglasses:

  • 3 Posts
  • 36 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • Desire for positive improvement and growth via moonshots, through determination and perseverance, while spreading that desire to others. I regard it as a very positive virtue. - Dreaming for the sake of dreaming and achieving one’s dreams. Regarding giving up as the only true failure, and achieving big things competitively but without comparing yourself to others, by putting in extreme amounts of effort, time, and other means, without being an obstacle to anyone and without hurting yourself. Shoutouts to Mitty.
  • My guess here is that LLMs of today are neural networks (transformer models) that primarily guess the next (and previous) words, since they are literally trained directly for that (see the sketch below). Guessing words is not linguistics, although, since they are neural networks, splinter skills are to be expected. GPT most likely learned how to understand language because the words it reads form certain patterns, and that would grow a neural subsystem specifically meant for that: our word order is related to meaning too, so understanding words helps predict the next word better. Kinda like how GPT-4 got a mind’s eye after most likely having to read visual descriptions constantly, which may form visual patterns even though the AI was never trained on images (the visual GPT-4 had CLIP stitched onto it; GPT-4 can do visual tasks without CLIP if the scene is described to it).

    Despite most likely actually understanding language, GPT still prioritizes word guessing over language comprehension: see https://www.youtube.com/watch?v=PAVeYUgknMw by “AI Explained”, where GPT-4 prioritizes syntax (word order) over meaning even while being well aware that what it is defending is rather dumb. It is as if its mind is telling it that THIS is the right answer despite having arguments against it, but it keeps defending it anyway, because if it MUST be the right answer, then there MUST be a reason. Again, guessing.

    Also, GPT and many other AIs run on “stream of consciousness”, meaning their thoughts are semi-conscious and they don’t think twice. GPT-4 with RLHF handles this problem better than most AI (we sometimes have a similar issue with words too, until we semi-awarely rethink them, I guess).

    Also also, GPT-4 is more like a huge “Wernicke’s area”; it is not a complete “brain”. If you were to strip away someone’s Wernicke’s area from their brain and connect it to an (g)old ThinkPad via some messy dollar-store aux cable, you would probably get stuff similar to what GPT-4 is “puking”.
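
To make the “guess the next word” point concrete, here is a minimal sketch of next-token prediction using the Hugging Face transformers library, with GPT-2 standing in for GPT-4 (which is not openly available). The prompt and the choice of model are illustrative assumptions, not anything from the comment above.

```python
# Rough sketch of "guess the next word": GPT-2 via Hugging Face transformers
# stands in here for GPT-4, which is not an open model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"  # illustrative prompt, chosen arbitrarily
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The model's whole output is a score for every vocabulary token at the last
# position; any "understanding" only shows up indirectly through these scores.
next_token_scores = logits[0, -1]
top = torch.topk(next_token_scores, k=5)
for token_id, score in zip(top.indices.tolist(), top.values.tolist()):
    print(repr(tokenizer.decode([token_id])), round(score, 2))
```

Generation is just this step in a loop: pick a token, append it, and score the vocabulary again, which is roughly the single-pass, no-second-thoughts behavior the comment describes as “stream of consciousness”.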