I am just complaining. My professors are constantly like, “you should use AI to research this! ask AI how to do this in Excel! employers are going to expect you to use AI to work more efficiently!”

They at least acknowledge that it’s inaccurate and you always have to check it, but there’s no mention of the ethical implications, the environmental impacts, or the danger this bubble poses to the economy.

I hate it so, so much. I had a professor tell me that I can write, which is good, so I can prompt AI, which I didn’t find especially flattering as far as “compliments” go.

I’m just so tired of it, guys.

  • Kyrgizion@lemmy.world · 32 points · 8 days ago

    MMW: after the bubble pops, there’s going to be an alternative market for people with skills that don’t depend on AI, who can prove they got their credentials the old-fashioned way, as another poster said.

    • Jankatarch@lemmy.world · 14 points · 8 days ago

      It sucks that there is no way for me to prove that. They will see I got into college during the peak of the AI hype and promptly throw my application in the trash.

      And by “they” I mean AI recruiters, which is the worst part.

      • Tollana1234567@lemmy.today · 3 points · 7 days ago

        It wasn’t great even before the pandemic: they had software that did the same thing, just matching keywords or auto-rejecting a set share of applicants into the trash. AI probably does it at a higher rate.
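The keyword screening described in this thread can be sketched as a toy filter (the keywords, threshold, and resumes below are made up for illustration, not taken from any real applicant-tracking system):

```python
# Toy sketch of keyword-based resume screening (hypothetical, not any real ATS).
REQUIRED_KEYWORDS = {"python", "excel", "sql"}  # assumed job-posting keywords

def passes_screen(resume_text: str, min_hits: int = 2) -> bool:
    """Reject unless the resume mentions at least min_hits required keywords."""
    words = set(resume_text.lower().split())
    return len(REQUIRED_KEYWORDS & words) >= min_hits

resumes = [
    "Experienced analyst: excel sql reporting",
    "Barista with great customer service",
]
screened = [r for r in resumes if passes_screen(r)]
```

A real system layers more rules on top, but the failure mode is the same: anything not phrased in the expected keywords gets binned.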

    • shalafi@lemmy.world · 9 points · edited · 8 days ago

      I foresee a monstrous backlash to AI use coming fairly soon. Once CEOs start getting reports on their spend vs. productivity, they’re gonna be big mad. Also, many are going to be left without functioning products after the crash.

      They’re being patient for now. Big initiatives take time to start working smoothly and moving the ship. But CEOs are only going to take so many quarters of poor AI performance before they say “Fuck all this noise!”

      I honestly think the AI crash is going to cause financial ruin far beyond what we saw in 2008. AI is going to be poison after that.

      Hadn’t thought on the employee value aspect! Combined with what I’m saying, yeah, employers are not going to look favorably at resumes touting high AI use.

    • Tollana1234567@lemmy.today · 2 points · 7 days ago

      The old-fashioned way? Job listings already use something similar (other resume-screening software) to screen applicants; it’s just slightly less bad than using AI to screen out resumes. Plus employers also steal people’s resumes to train their AI/software to screen out more people.

  • ragingHungryPanda@piefed.keyboardvagabond.com · 21 points · 8 days ago

    Oh God, this is worse than when they were telling us not to use Wikipedia, even though studies showed it was more accurate than Encyclopaedia Britannica.

    they also don’t know how to properly prompt AI if that’s what they’re saying your good writing skills are for

    • Emilie Easie@lemmynsfw.com (OP) · 7 points · 8 days ago

      they also don’t know how to properly prompt AI if that’s what they’re saying your good writing skills are for

      feels kinda vindicating LOL thanks for saying so

    • taiyang@lemmy.world · 4 points · 7 days ago

      Not to be off topic, but I’d like to clarify: you’re not supposed to cite secondary sources regardless, when primary sources are required. Wikipedia is legit, just not primary, so you’re getting information loss. You 100% could use it to lead you to primary sources, though.

      Not everyone understands why, though, including teachers. :/

  • jordanlund@lemmy.world · 15 points, 1 down · 8 days ago

    “employers are going to expect you to use AI to work more efficiently!”

    That’s 100% true… right now. It won’t be true after the bubble pops.

    If you’re graduating in 2026 or 2027, it’s valid advice.

    It most likely won’t be valuable after 2028.

    • Emilie Easie@lemmynsfw.com (OP) · 12 points · 8 days ago

      Omg but it isn’t even true for me! My workplace hasn’t touched LLMs and believes it’s a buzzword. But yeah, I think it was really foolhardy to rush to adopt this very unproven technology despite mounting evidence that it’s not useful to most businesses.

      I feel like I’m being stolen from. These are really expensive classes, and every minute spent on AI is wasted.

      • jordanlund@lemmy.world · 5 points · edited · 8 days ago

        Try working in the field… “AI, AI, we want you to use AI!”

        “For what?”

        “Well, look at the cool newsletters and fliers you can generate!”

        “Yeah, not my job, you do know what it is that I do, right?”

        “How about Facebook ads??!?”

        “Again, not my job.”

    • Tollana1234567@lemmy.today · 2 points · 7 days ago

      Professors are using AI to accuse students of using AI. Plus, AI-written resumes are being used to combat AI screening of resumes for job listings. Seems like it’s getting worse on that end.

  • taiyang@lemmy.world · 9 points · 8 days ago

    Not all of us professors are that… uh… smitten? In fact, as far as I’m aware, most faculty I know hate AI as much as everyone else, if not more so: our job is to get you to think about a subject, and AI inherently adds doubt that your thoughts are your own. And that’s just plagiarism. It’s been a nightmare trying to adjust assignments, implement more oral presentations, and make the do’s and don’ts clear.

    We don’t usually use the copyright-infringement argument (educators “borrow” content all the time, so much so that there’s usually a carve-out for educational use), and we generally don’t lean on the environmental impact (you know how many professors still print out everything? It’s a lot). But undermining your education by pushing a flawed thinky machine? Your professor’s doing you an injustice.

    • Emilie Easie@lemmynsfw.com (OP) · 5 points · 7 days ago

      I know it depends on the location. My bachelor’s is from a VERY crunchy university, and I already know they hate it (and are definitely talking about both the environmental impact and plagiarism) without even asking, and I really miss that crunchiness now 😢

  • artyom@piefed.social · 7 points, 1 down · 8 days ago

    my professors are constantly like, “you should use AI to research this! ask AI how to do this in excel! employers are going to expect you to use AI to work more efficiently!”

    Sounds like your professor is advocating themselves out of a job.

    • Tollana1234567@lemmy.today · 1 point · edited · 7 days ago

      Lazy professors mostly use AI to do all the grading and to make the slideshows for lectures. Many of them would rather not teach classes at all and just do research or grant writing. Some go as far as only reading off their PowerPoint slides as a lecture instead of discussing actual concepts, which makes their tests hard and confusing, since it’s very hard to learn from a useless slide. I had a biochem teacher like this; her tests were convoluted as well.

  • hendrik@palaver.p3x.de · 4 points · 8 days ago

    What’s the field you’re trying to learn? I hope it’s not anything factual… Or just “normal” school?

      • hendrik@palaver.p3x.de · 4 points · edited · 8 days ago

        Oof. All I can say is that when I studied computer science a while back, one of our professors told us we weren’t even allowed to use Stack Overflow, due to all the misinformation on there. He wanted us to know what we were doing when coding in that programming language and to learn it by the book. I think that’s pretty much the opposite of your story… And sometimes I’m glad I did it that way. It helps to form a solid understanding and not skip underlying concepts, instead of ending up poking at stuff to maybe make it work.

  • CrayonDevourer@lemmy.world · 4 points, 22 down · edited · 8 days ago

    The AI bubble isn’t gonna pop. It’ll calm down at some point, but it’s working as intended. They’re removing people’s jobs, and the AI is doing just as good a job.

    Remember, humans aren’t error-proof either. So this argument of “AI is wrong all the time!” doesn’t really reach the ears of the people in corporate – because people are wrong all the fucking time too.

    And unlike people, they don’t have to deal with AI posting racist shit outside of work and causing massive problems for the company, etc. People are shit, the way people act is shit. Half of them are not just shit, but actively barbarians working against society.

    Honestly, I’d trust any random person I run across on the street far less than even the smallest of AIs.

    • very_well_lost@lemmy.world · 19 points · 8 days ago

      Honestly, I’d trust any random person I run across on the street far less than even the smallest of AIs.

      You realize that AI is trained on people, right? And the technology is only as capable as its training data; it’s still not able to extrapolate from incomplete or flawed data to produce anything original.

      By its very nature, the current generation of AI can at best only ever be as smart or as trustworthy as its training data — and it rarely ever performs at that theoretical ‘best’.

    • Emilie Easie@lemmynsfw.com (OP) · 12 points · 8 days ago
      8 days ago

      I don’t think people fully understand how much money AI is currently losing and how much it would need to make for this investment to pay off. I think a lot of people see numbers in the billions and don’t really think about it. There aren’t even a billion miles between here and the sun, not even 10% of a billion miles. And we’re talking about hundreds of billions.

      You might just think, well money is fake anyway, and yeah, true, it’s our faith in it that’s real, sorta like AI, and that can collapse.

      And unlike people, they don’t have to deal with AI posting racist shit outside of work and causing massive problems for the company, etc. People are shit, the way people act is shit. Half of them are not just shit, but actively barbarians working against society.

      Mechahitler would like a word lol! But seriously, AI is about as racist as what it was trained on: reddit lmao.
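As a quick sanity check on the scale comparison above (taking the average Earth-Sun distance as roughly 93 million miles):

```python
# Rough scale check: how much of a billion miles is the trip to the sun?
earth_sun_miles = 93_000_000        # approximate average Earth-Sun distance
one_billion = 1_000_000_000

fraction = earth_sun_miles / one_billion   # about 0.093, i.e. under 10%
```

So a single billion miles is already more than ten trips to the sun, and the sums in question are hundreds of billions of dollars.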

      • very_well_lost@lemmy.world · 8 points · 8 days ago

        people see numbers in the billions and don’t really think about it

        It’s actually so much worse than that. OpenAI has already promised 1 trillion dollars to its various partners over the next 5 years.

        How the everliving fuck is a company with $5 billion in revenue going to find ONE TRILLION FUCKING DOLLARS by the end of the decade?
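Back-of-the-envelope, using the round figures quoted in this comment (taken as illustrative, not audited numbers):

```python
# How many years of current revenue would cover the promised spend?
annual_revenue = 5e9        # ~$5 billion per year in revenue
committed_spend = 1e12      # ~$1 trillion promised to partners

years_of_revenue = committed_spend / annual_revenue   # 200.0
```

Two hundred years of today's revenue to cover a five-year commitment, before counting any operating costs.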

        • Emilie Easie@lemmynsfw.com (OP) · 7 points · 8 days ago

          oh jeeze, I’m behind again. Every time I turn around I feel like it gets more ridiculous. They’re gonna start just making up numbers because they know no one will call them on it.

          Bro we’re gonna give you a bagillion dollars over the next 6 months11!!

    • Triumph@fedia.io · 7 points · 8 days ago

      That’s an interesting perspective, and I appreciate it, however:

      … they don’t have to deal with AI posting racist shit outside of work …

      https://www.nature.com/articles/s41586-024-07856-5

      https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms?language=en

      I know that’s not exactly what you were referring to, but it’s arguably more serious, because it tends to go unnoticed.

      On the other hand:

      https://en.wikipedia.org/wiki/Tay_(chatbot)

      Garbage in, garbage out.