• 0 Posts
  • 56 Comments
Joined 2 years ago
Cake day: June 12th, 2023


  • It definitely depends on the person. There are people who are getting 90% of their coding done with AI, and I'm one of them. I have over a decade of experience, and I consider coding the easiest but most laborious part of my job, so it's a welcome change.

    One thing that's really changed the game recently is RAG and tools with very good access to our company's data. Good context makes a huge difference in the quality of the output. For my latest project, I've been using three internal tools: an LLM browser plugin that has access to our internal data and lets you pin pages (and docs) you're reading for extra focus; a coding assistant, which also has access to internal data and repos but is trained for coding (unfortunately, it's not integrated into our IDE); and an IDE agent with RAG that lets you pin specific files, but without broader access to our internal data, its output is a lot poorer.

    So my workflow is something like this: My company is already pretty diligent about documenting things so the first step is to write design documentation. The LLM plugin helps with research of some high level questions and helps delve into some of the details. Once that’s all reviewed and approved by everyone involved, we move into task breakdown and implementation.

    First, I ask the LLM plugin to write a guide for how to implement a task, given the design documentation. I'm not interested in code, just a translation of design ideas and requirements into actionable steps. (Even if you don't have the same setup as me, give this a try: asking an LLM to reason its way through a guide helps it handle much more complicated tasks.) Then I pass that guide to the coding assistant for code creation, including any relevant files as context. That code gets copied into the IDE. The whole process takes a couple of minutes at most, and that gets you about 90% of the way there.
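    A minimal Python sketch of that guide-then-code chain. `ask_llm`, `ask_coding_assistant`, and `implement_task` are hypothetical stand-ins for internal tools (not anything named in this thread), stubbed out here so the sketch is runnable:

```python
# Hypothetical sketch of the guide-then-code flow. `ask_llm` and
# `ask_coding_assistant` are made-up stand-ins for internal tools,
# stubbed out so the example runs.

def ask_llm(prompt: str) -> str:
    # Stub: a real tool would return the generated guide text.
    return f"[implementation guide derived from: {prompt[:40]}...]"

def ask_coding_assistant(prompt: str, context: list[str]) -> str:
    # Stub: a real tool would return generated code.
    return f"[code generated from guide, {len(context)} files as context]"

def implement_task(task: str, design_doc: str, relevant_files: list[str]) -> str:
    # Step 1: ask for an implementation guide, not code. Making the
    # model reason through actionable steps first lets it handle far
    # more complicated tasks than jumping straight to code generation.
    guide = ask_llm(
        f"Given this design doc:\n{design_doc}\n"
        f"write a step-by-step implementation guide for: {task}. "
        "No code, just actionable steps."
    )
    # Step 2: hand the guide plus the relevant files to the coding
    # assistant, which is trained for code generation.
    return ask_coding_assistant(guide, context=relevant_files)

print(implement_task("add rate limiting", "Design: ...", ["server.py"]))
```

    The point of the two-step structure is that the guide, not the code, is where the model does its reasoning.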

    Next is to get things compiling. This is either manual or done in iteration with the coding assistant. Then, before I worry about correctness, I focus on the tests. Get a good test suite up and it'll catch any problems and let you refactor without causing regressions. Again, this may be partially manual and partially iteration with LLMs. Once the tests look good, it's time to get them passing. And this is the point where I start really reading through the code and getting things from 90% to 100%.

    All in all, I’m still applying a lot of professional judgement throughout the whole process. But I get to focus on the parts where that judgement is actually needed and not the more mundane and toilsome parts of coding.


  • My favorite use is actually just to help me name stuff. Give it a short description of what the thing does and get a list of decent names. Refine if they’re all missing something.

    Also useful for finding things quickly in generated documentation, by attaching the documentation as context. And I use it when trying to remember some of the more obscure syntax stuff.

    As for coding assistants, they can help quickly fill in boilerplate or maybe autocomplete a line or two. I don’t use it for generating whole functions or anything larger.

    So I get some nice marginal benefits out of it. I definitely like it. It’s got a ways to go before it replaces the programming part of my job, though.



  • The chance you'll survive a half-life is exactly the same whether MWI is real or not. It doesn't give you any useful information. You have no way of distinguishing between being just that lucky and MWI being true.

    That's not the case with other experiments. If your hypothesis is correct, the chance of the experiment succeeding is higher than the chance of the same outcome occurring by random chance if your hypothesis is wrong. That's the key difference.
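    The argument can be sketched in Bayesian terms (my framing, with illustrative numbers): when the observed outcome is equally likely under both hypotheses, Bayes' rule leaves the prior unchanged.

```python
# Bayes' rule for a binary hypothesis: if the observed outcome is
# equally likely under both hypotheses, the posterior equals the prior
# and the observation carries no information.

def posterior(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

# Surviving 20 half-lives has probability 0.5**20 whether MWI is true
# or not, so a 50/50 prior stays exactly 50/50.
p_survive = 0.5 ** 20
print(posterior(0.5, p_survive, p_survive))  # -> 0.5

# An ordinary experiment, where the outcome is much more likely if the
# hypothesis is true (say 95% vs. 5%), does shift belief:
print(posterior(0.5, 0.95, 0.05))  # -> 0.95
```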



  • Collect data, and show how it's unlikely unless your hypothesis is true.

    The quantum immortality experiment doesn’t do that, though. The outcome, by definition, always occurs within the realm of random chance. Your environment needs to create an outcome that is extremely unlikely to occur by random chance. The experiment is not repeatable. It makes no predictions about what’s going to happen if you try again. It doesn’t do anything useful to bolster the many worlds theory.




  • Sure you can move some parts of the conversation to a review session, though I think the answers will be heavily influenced by hindsight at that point. For example, hearing about dead end paths they considered can be very informative in a way that I think candidates assume is negative. Nobody expects you to get it right the first time and telling the interviewer about your binary tree solution (that actually doesn’t work) can be a good thing.

    But the biggest problem with not being in the room as an interviewer is that you lose the opportunity to hint and direct the candidate away from unproductive solutions or uses of time. There are people who won't ask questions about things that are ambiguous, or who will misinterpret the problem, and that shouldn't be a deal breaker.

    Usually it only takes a very subtle nudge to get things back on track; otherwise you wind up with a solution that's not at all what you're looking for (and, more importantly, doesn't demonstrate the knowledge you're looking for). Or maybe you wind up with barely a solution because the candidate spent most of their time spinning their wheels. A good portion of the questions I ask during an interview serve this purpose of keeping the candidate focused on the right things.


  • I'm not sure that offline or solo coding tests are any better. A good coding interview should be about a lot more than just seeing whether the candidate produces well-structured, optimal code. It's about seeing what kinds of questions they'll ask, what kinds of alternatives and trade-offs they'll consider, and probing some of the decisions they make. All the stuff that goes into being a good SWE, which you can demonstrate even if you're having trouble coming up with the optimal solution to this particular problem.


  • My way of thinking differs: if, from my individual perspective, I experience a perfect coin (quantum particle) flipping tails a million times in a row, there must be a high likelihood that many worlds indeed exist, since I died in the ones where it came up heads.

    It doesn’t make that highly likely, though. It’s about equally likely that there’s a fairy controlling your coin flips. The experiment hasn’t proven anything about the cause of the unlikely outcome. You’ve just measured that it happened and then declared that your preferred explanation is the reason.


  • VoterFrog@lemmy.world to Memes@sopuli.xyz · AI Art. · 2 months ago

    I think it definitely depends on the level of involvement and the intent. Sure, not everybody who just asks for something to be made for them is doing much directing. But someone who does a lot of refinement and curation of AI-generated output needs to demonstrate the same kind of creativity and vision as an actual director.

    I guess I’d say telling an artist to do something doesn’t make you a director. But a director telling an AI to do the same kinds of things they’d tell an artist doesn’t suddenly make them not a director.



  • The language model isn't teaching anything; it is changing the wording of something and spitting it back out. And in some cases it's not changing the wording at all, just spitting the information back out without paying the copyrighted source.

    You could honestly say the same about most “teaching” that a student without a real comprehension of the subject does for another student. But ultimately, that’s beside the point. Because changing the wording, structure, and presentation is all that is necessary to avoid copyright violation. You cannot copyright the information. Only a specific expression of it.

    There’s no special exception for AI here. That’s how copyright works for you, me, the student, and the AI. And if you’re hoping that copyright is going to save you from the outcomes you’re worried about, it won’t.



  • If I understand correctly, they are ruling you can buy a book once and redistribute the information to as many people as you want without consequences. I.e., one student should be able to buy a textbook and redistribute it to all the other students for free. (Yet the rules only work for companies, apparently, as the students would still be committing a crime.)

    A student can absolutely buy a text book and then teach the other students the information in it for free. That’s not redistribution. Redistribution would mean making copies of the book to hand out. That’s illegal for people and companies.


  • It seems like a lot of people misunderstand copyright so let’s be clear: the answer is yes. You can absolutely digitize your books. You can rip your movies and store them on a home server and run them through compression algorithms.

    Copyright exists to prevent others from redistributing your work so as long as you’re doing all of that for personal use, the copyright owner has no say over what you do with it.

    You even have some degree of latitude to create and distribute transformative works with a violation only occurring when you distribute something pretty damn close to a copy of the original. Some perfectly legal examples: create a word cloud of a book, analyze the tone of news article to help you trade stocks, produce an image containing the most prominent color in every frame of a movie, or create a search index of the words found on all websites on the internet.

    You can absolutely do the same kinds of things an AI does with a work as a human.
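    As a minimal illustration of one of the transformative uses mentioned above, here's a sketch that tallies word frequencies from a text (the data behind a word cloud). The snippet and its sample text are my own; the point is that the counts are facts about the work, not a copy of its expression.

```python
# Tally word frequencies from a copyrighted text: the data behind a
# word cloud. The statistics are facts about the work, not a copy.

import re
from collections import Counter

def word_frequencies(text: str, top_n: int = 3) -> list[tuple[str, int]]:
    # Lowercase and split on non-letter characters, then count.
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

opening = "It was the best of times, it was the worst of times"
print(word_frequencies(opening))  # [('it', 2), ('was', 2), ('the', 2)]
```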



  • I think the problem with trying to imagine a good analogy for this is that orbital dynamics works in a somewhat (though not entirely) unintuitive way.

    An object in an elliptical orbit around earth is moving slowest at its furthest point from the earth. Like a thrown ball that slows when it reaches the top of its trajectory. That object is moving fastest at the point that it’s closest to earth.

    So you have this dynamic where decelerating changes your orbit such that you increase the speed you'll be moving at the opposite point of your orbit. E.g., if you decelerate at your slowest (furthest) point, your closest approach point moves closer to earth and you'll be moving even faster when you get there.

    You can decelerate at your closest approach point, but eventually that brings the opposite end of your orbit closer to earth than you are, and then you'll fall and of course speed up again. There's no real way around it: you're going to be moving fast when you approach earth unless you're doing a lot of very active deceleration.
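    The effect can be made concrete with the vis-viva equation, v = sqrt(GM(2/r − 1/a)): orbital speed depends only on the current distance r from Earth's center and the orbit's semi-major axis a. A short Python sketch (the specific orbit numbers are just illustrative):

```python
# Vis-viva: v = sqrt(GM * (2/r - 1/a)) for distance r from Earth's
# center and semi-major axis a. Orbit numbers below are illustrative.

import math

GM = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m

def speed(r: float, a: float) -> float:
    """Orbital speed (m/s) at distance r on an orbit with semi-major axis a."""
    return math.sqrt(GM * (2.0 / r - 1.0 / a))

# Elliptical orbit: 300 km perigee altitude, 35,786 km apogee altitude.
r_peri = R_EARTH + 300e3
r_apo = R_EARTH + 35_786e3
a = (r_peri + r_apo) / 2

print(f"apogee (slowest):  {speed(r_apo, a):.0f} m/s")   # ~1.6 km/s
print(f"perigee (fastest): {speed(r_peri, a):.0f} m/s")  # ~10.2 km/s

# Decelerating at apogee keeps r_apo but shrinks a (here the perigee
# drops to 100 km), so the craft arrives at its new, lower perigee
# even faster than before:
r_peri2 = R_EARTH + 100e3
a_lower = (r_peri2 + r_apo) / 2
print(f"new perigee speed: {speed(r_peri2, a_lower):.0f} m/s")
```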