  • I had a short-term ex say that and it really turned me off every time. If the relationship went on longer, I would’ve eventually said something. It really weirded me out, especially knowing that her dad died of cancer a year prior. It’s like, what the hell is going on in your head? Get that checked out.


  • Oh, I completely agree that we are turning everything to shit in about a million different ways. And as oligarchs take over more, while AI is a huge money-maker, I can totally see regulation around it being scarce or entirely non-existent. So as it’s introduced into areas like the DoD, health, transportation, crime, etc., it’s going to be sold to the government first and its ramifications considered second. That has also been my experience as someone working at the intersection of AI research and government application. I saw Elon’s companies, employees, and tech immediately get contracts without consultation with FFRDCs or competition from other for-profit entities. I’ve also seen people on the ground say “I’m not going to use this unless I can trust the output.”

    I’m much more on the side of “technology isn’t inherently bad, but our application of it can be.” Of course that can also be argued against with technology like atom bombs or whatever, but I lean much more toward that side.

    Anyway, I really didn’t miss the point. I just wanted to share an interesting research result that this comic reminded me of.


  • Oh no, I mean could you explain the joke? I believe I get it (shitty AI will replace experts). I was just leaving a comment about how systems that use LLMs to check the work of other LLMs do better than systems that don’t. And when I’ve introduced AI systems to stakeholders with consequential decision-making, they tend to want a human in the loop. I also said that this will probably change over time as AI systems get better and we get more used to using them. Is that a good thing? It will have to be judged on a case-by-case basis.




  • True! I’m an AI researcher, and using an AI agent to check the work of another agent does improve accuracy! I could see things becoming more and more like this, with teams of agents creating, reviewing, and approving (a rough sketch of that generate-and-review loop is at the end of this comment). If you use GitHub Copilot agent mode, though, it involves constant user interaction before anything is actually run. And I imagine (and can testify as someone who has installed different ML algorithms/tools on government hardware) that the operators/decision makers want to check the work, or understand the “thought process,” before committing to an action.

    Will this be true forever as people become more used to AI as a tool? Probably not.
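
    To make the “agents checking agents” idea concrete, here’s a hedged sketch of that pattern in Python. call_llm is a made-up placeholder for whatever model API you actually use (it is not a real library function), and the prompts and round limit are just illustrative defaults, not how any particular product does it.

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM API call; wire this up to your provider."""
        raise NotImplementedError

    def generate_with_review(task: str, max_rounds: int = 3) -> str:
        """Draft an answer, then have a second 'reviewer' pass critique and revise it.

        The loop stops when the reviewer approves or the round limit is hit.
        A human can still stay in the loop by inspecting the draft before use.
        """
        draft = call_llm(f"Complete this task:\n{task}")
        for _ in range(max_rounds):
            review = call_llm(
                "You are a strict reviewer. Point out factual or logical errors "
                "in the answer below, or reply APPROVED if it is correct.\n\n"
                f"Task: {task}\n\nAnswer: {draft}"
            )
            if review.strip().startswith("APPROVED"):
                break
            draft = call_llm(
                f"Task: {task}\n\nPrevious answer: {draft}\n\n"
                f"Reviewer feedback: {review}\n\n"
                "Rewrite the answer to address the feedback."
            )
        return draft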





  • Focus groups aren’t meant to be used for gaining an understanding of a broad swath of the population. Focus groups are used for exploratory research, concept testing, and understanding the “why” behind opinions and behaviors.

    If you want to generalize trends to large populations, you’re going to need a large sample size. Basic statistics says that with that few respondents, you’re left with extremely low confidence in the outcome.

    For example, if you are trying to judge the voting preferences of a population of 100,000 people, you’ll need 383 randomly sampled people in a survey to reach 95% confidence with a ±5% margin of error (a quick sketch of the math is at the end of this comment). 13 is nowhere near the number of people required to cover those who considered themselves “independents” before the debate.

    That’s not to say this tells us nothing, but it’s by no means a predictive study.

    *edit: I actually would say it’s harmful, because I think it presents the narrative as if it were predictive, when it’s not.
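
    For anyone who wants to check that 383 figure, here’s a small Python sketch using Cochran’s sample-size formula with a finite-population correction, at 95% confidence and a ±5% margin of error (p = 0.5 is the most conservative assumption; the function name is just for this example):

    import math

    def required_sample_size(population: int, z: float = 1.96,
                             margin_of_error: float = 0.05, p: float = 0.5) -> int:
        """Cochran's formula plus a finite-population correction."""
        n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # infinite-population estimate
        n = n0 / (1 + (n0 - 1) / population)                  # correct for finite population
        return math.ceil(n)

    print(required_sample_size(100_000))  # -> 383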


  • I’m not surprised. Alito is straight up huffing Newsmax like it’s paint but trying to hide it, Clarence Thomas is outwardly corrupt and unabashedly fascist, and the other conservatives are, weirdly, not as extreme and still attempt to maintain this air of professionalism and integrity in their profession. Don’t get me wrong, they don’t actually manage it; among them we have a religious nut, an idiot frat boy, an egoist, and, at the head, a conniving political operator, all of whom are driving us closer to fascism in their own style.

    But I get the feeling that John Roberts is embarrassed by Clarence Thomas and his clinically insane QAnon conspiracy wife, or by Alito and his “election was stolen” flag antics. So they’re going to see things differently.


  • I’m an AI researcher, and yes, that’s basically right. There is no special “lighting mechanism” portion of the network designed before training. It’s just that, after seeing enough images with correct lighting (whether in text-to-image transformer models or GANs), it learns what correct lighting should look like. It’s all about the distribution of the training data. A simple example is this-person-does-not-exist.com: all of the training images are high-resolution, close-up, well-lit headshots. If all the training data instead had unrealistic lighting, you would get unrealistic lighting out. If it’s something like 50/50, you’ll get every part of the spectrum between good lighting and bad lighting at the output (there’s a toy sketch of that idea at the bottom of this comment).

    That’s not to say that the overall training scheme of something like GPT-4 in particular doesn’t include secondary training stages for more complex tasks. But lighting in images is a simple thing to get right with enough training images.

    As an aside, I said that website above is a simple example, but I remember when it came out less than 6 years ago and it was revolutionary, so it’s crazy how fast the space has moved forward in such a short time.

    Edit: to answer the multiple-subjects question: the model has probably seen fewer images with multiple subjects and doesn’t have enough “knowledge” from its training data to accurately apply lighting in those scenarios. And you can imagine lighting is more complex in a scene with more subjects, so it’s harder for the model to apply a general solution it has seen many times to the more complex problem.
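
    To make the “it’s all about the distribution of the training data” point concrete, here’s a toy, purely illustrative GAN sketch in Python/PyTorch. It trains on a fake 1-D “lighting quality” score drawn 50/50 from a “well-lit” mode and a “badly-lit” mode; the generator can only reproduce whatever mix the data had. All names and numbers are made up for the example, and tiny GANs like this can mode-collapse, so treat it as a sketch rather than a demonstration of any real model.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Fake training data: a 1-D "lighting quality" score, 50% near +1 ("well lit")
    # and 50% near -1 ("badly lit").
    def sample_real(n):
        modes = torch.randint(0, 2, (n, 1)).float() * 2 - 1
        return modes + 0.1 * torch.randn(n, 1)

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(5000):
        real = sample_real(128)
        fake = G(torch.randn(128, 8))

        # Discriminator learns to tell real scores from generated ones.
        d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator learns to produce scores the discriminator accepts as real.
        g_loss = bce(D(fake), torch.ones(128, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

    # If training went well, samples cover both modes in roughly the same
    # proportion as the training data -- the model mirrors its data distribution.
    with torch.no_grad():
        samples = G(torch.randn(2000, 8))
    print("fraction of 'well lit' samples:", (samples > 0).float().mean().item())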