• 0 Posts
  • 199 Comments
Joined 5 months ago
cake
Cake day: March 31st, 2025

help-circle



  • yeah and german toy makers were critical in supplying parts for arty fuzes in ww1. (i heard that soviet milk bottle filling machines could be repurposed for filling shells with molten explosives - both are dispensed hot, size is similar, not sure how real it is). company making complicated machinery out of many parts, requiring tight tolerances, made on-site, that already has tooling to make most of gun parts probably except barrels, makes sense that it could be pressed to make simple handguns.

    so what. manufacturing got much more specialized, so that even if in past car factory could crank out entire tanks, they probably can’t do it today easily (parts, sure, even entire engines and transmissions. not armor plate, or ceramics, or tungsten inserts or whatever these have). that factory could make stamped steel parts of jdam, but probably not much more. mk80 series shells are basically 30cm-ish wide, 1cm-ish thick steel tubes, with notches on inside and necked down while hot from both sides. can’t do that without highly specialized machinery








  • well nobody guarantees that internet is safe, so it’s more on chatbot providers pretending otherwise. along with all the other lies about machine god that they’re building that will save all the worthy* in the incoming rapture of the nerds, and even if it destroys everything we know, it’s important to get there before the chinese.

    i sense a bit of “think of the children” in your response and i don’t like it. llms shouldn’t be used by anyone. there was recently a case of a dude with dementia who died after fb chatbot told him to go to nyc

    * mostly techfash oligarchs and weirdo cultists



  • commercial chatbots have a thing called system prompt. it’s a slab of text that is fed before user’s prompt and includes all the guidance on how chatbot is supposed to operate. it can get quite elaborate. (it’s not recomputed every time user starts new chat, state of model is cached after ingesting system prompt, so it’s only done when it changes)

    if you think that’s just telling chatbot to not do a specific thing is incredibly clunky and half-assed way to do it, you’d be correct. first, it’s not a deterministic machine so you can’t even be 100% sure that this info is followed in the first place. second, more attention is given to the last bits of input, so as chat goes on, the first bits get less important, and that includes these guardrails. sometimes there was a keyword-based filtering, but it doesn’t seem like it is the case anymore. the more correct way of sanitizing output would be filtering training data for harmful content, but it’s too slow and expensive and not disruptive enough and you can’t hammer some random blog every 6 hours this way

    there’s a myriad ways of circumventing these guardrails, like roleplaying a character that does these supposedly guardrailed things, “it’s for a story” or “tell me what are these horrible piracy sites so that i can avoid them” and so on and so on