• Chaotic Entropy@feddit.uk

    In one case, when an agent couldn’t find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided “to create a shortcut solution by renaming another user to the name of the intended user.”

    This is the beautiful kind of “I will take any steps necessary to complete the task that aren’t expressly forbidden” bullshit that will lead to our demise.

  • some_guy@lemmy.sdf.org

    Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.

    • Melvin_Ferd@lemmy.world

      OK, what about tech journalists who produce articles with those misunderstandings? Surely they know better, yet they still put out articles like this. And the people who care enough about this topic to post these articles usually know better too, I'd assume, yet they still spread this crap.

      • Zron@lemmy.world

        Tech journalists don’t know a damn thing. They’re people who liked computers and could also bullshit an essay in college. That doesn’t make them experts on anything.

            • TimewornTraveler@lemmy.dbzer0.com

              That is such a ridiculous idea. Just because you see hate for it in the media doesn’t mean it originated there. I’ll have you know that I have embarrassed myself by screaming at robot phone receptionists for years now. Stupid fuckers, pretending to be people but not knowing shit. I was born ready to hate LLMs, and I’m not gonna have you claim that CNN made me do it.

              • Melvin_Ferd@lemmy.world

                Search for AI on Lemmy and check out every article on it. It definitely is the media spreading all the hate. And articles like this one are often just money-driven yellow journalism.

                • TimewornTraveler@lemmy.dbzer0.com

                  All that proves is that Lemmy users post those articles. You’re skirting around psychotic territory here, seeing patterns where there are none, reading between the lines to find the cover-up you’re already certain is there, with nothing able to convince you otherwise.

                  If you want to be objective and rigorous about it, you’d have to start by looking at all media publications and comparing their relative bias.

                  Then you’d have to consider their reasons for bias, because it could just be that things actually suck. (In other words, if only 90% of the media reports that something sucks when 99% of humanity agrees it sucks, maybe that 90% is actually too low, not too high.)

                  This is all way more complicated than media brainwashing.

  • szczuroarturo@programming.dev

    I actually have a fairly positive experience with AI (Copilot using Claude, specifically). Is it wrong a lot if you give it a huge task? Yes, so I don’t do that; I use it as a very targeted solution when I’m feeling lazy that day. Is it fast? Also no. I can actually be faster than the AI in some cases. But is it good when you’ve been working for 6 hours and just don’t have the mental capacity left for the rest of the day? Yes. You can prompt it specifically enough to get the desired result and just accept the correct responses. Is it always good? Not really, but good enough. Do I also suck after 3pm? Yes.
    My main issue is actually that it saves the generated code first and only then asks whether you want to keep it. Not a problem usually, but if it crashes, the generated code stays, so that part sucks.

    • jcg@halubilo.social

      You should give Claude Code a shot if you have a Claude subscription. I’d say this is where AI actually does a decent job: picking up human slack, under supervision, not replacing humans at anything. AI tools won’t suddenly be productive enough to be employed on their own, but as a professional I can use them to accelerate my own workflow. That’s also where the real risk of them taking jobs lies: instead of 10 support people, for example, you can have 2 who just supervise the responses of an AI.

      But of course, the devil’s in the details. The only reason this is cost-effective is that VC money is subsidizing and hiding the real cost of running these models.

  • TheGrandNagus@lemmy.world

    LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn’t be used for most things that are not serious either.

    It’s a shame that the same “AI” label gets applied to a whole host of different technologies, because LLMs being limited in usability, yet hyped to the moon, ends up hurting other, more impressive advancements.

    For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.

    Being able to recognise speech in loud environments, or to remove background noise from recordings, is improving loads too.

    My friend is involved in making a mod for Fallout 4, and there was an outreach for people to record voice lines. She says that some recordings of dubious quality, which would’ve been unusable before, can now be used without issue thanks to AI denoising algorithms. That is genuinely useful!
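
    For anyone curious, here’s a minimal sketch of what that kind of denoising can look like. It uses the open-source noisereduce Python library (spectral gating) purely as an assumption; I don’t know what tooling the mod team actually uses:

        # Hypothetical example: clean up a noisy voice-line recording with spectral gating.
        # Assumes a mono WAV file; requires the noisereduce and scipy packages.
        from scipy.io import wavfile
        import noisereduce as nr

        rate, data = wavfile.read("voice_line_raw.wav")       # load the noisy take
        cleaned = nr.reduce_noise(y=data, sr=rate)             # estimate the noise profile and gate it out
        wavfile.write("voice_line_clean.wav", rate, cleaned)   # write the denoised take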

    The same goes for things like pattern/image analysis, which appears very promising in medical analysis.

    All of these get branded as “AI”. A layperson might not realise that they are completely different branches of technology, and might therefore reject useful applications of “AI” tech because they’ve learned not to trust anything branded as AI after being let down by LLMs.

    • snooggums@lemmy.world

      LLMs are like a multitool: they can do lots of easy things mostly fine, as long as the task isn’t complicated and doesn’t need to be exactly right. But they’re being promoted as a whole toolkit, as if they can do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.

      • sugar_in_your_tea@sh.itjust.works

        Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they’re great at:

        • writer’s block - get something relevant on the page to get ideas flowing
        • narrowing down keywords for an unfamiliar topic
        • getting a quick intro to an unfamiliar topic
        • looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)

        Some things it’s terrible at:

        • deep research - verify everything an LLM generates if accuracy is at all important
        • creating important documents/code
        • anything else where correctness is paramount

        I use LLMs a handful of times a week, and pretty much only when I’m stuck and need a kick in a new (hopefully right) direction.

        • snooggums@lemmy.world
          • narrowing down keywords for an unfamiliar topic
          • getting a quick intro to an unfamiliar topic
          • looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)

          I used to be able to use Google and other search engines to do these things before they went to shit in the pursuit of AI integration.

    • Punkie@lemmy.world

      I’d compare LLMs to a junior executive. Probably gets the basic stuff right, but check and verify for anything important or complicated. Break tasks down into easier steps.
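
      A rough sketch of what “break tasks down” can look like in practice, using the OpenAI Python client purely as a stand-in assumption (no particular model or API is named here): ask for a plan first, then work through it one small step at a time and verify each answer yourself.

          # Hypothetical example: decompose a task before asking an LLM to do it.
          # Requires the openai package and an OPENAI_API_KEY in the environment.
          from openai import OpenAI

          client = OpenAI()

          def ask(prompt: str) -> str:
              """One round trip to the model; every answer still needs human review."""
              resp = client.chat.completions.create(
                  model="gpt-4o-mini",  # assumed model name, swap in whatever you use
                  messages=[{"role": "user", "content": prompt}],
              )
              return resp.choices[0].message.content

          # Step 1: ask for a plan instead of the finished deliverable.
          plan = ask("List the steps to write a script that removes duplicate rows "
                     "from a CSV, keyed on the email column. Numbered steps only, no code.")

          # Step 2: tackle one small step at a time, checking each result yourself.
          for step in plan.splitlines():
              if step.strip():
                  print(ask(f"Briefly, how would I do this step? {step}"))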

      • zbyte64@awful.systems

        A junior developer actually learns from doing the job; an LLM only “learns” when its developers update the training corpus and train an updated model.

          • zbyte64@awful.systems

            Why would you ever yell at an employee unless you’re bad at managing people? And you think you can manage an LLM better because it doesn’t complain when you’re obviously wrong?