• Badabinski@kbin.earth · 4 days ago

    I learned to program by shitting out god-awful shell scripts that got gently thrashed by senior devs. The only way I’ve ever learned anything is by having a real-world problem that I can solve. You absolutely do NOT need a CS degree to learn software dev or even some of compsci itself, and I agree that tools like Bolt are going to make shit harder. It’s one thing to copy Stack Overflow code because you have people arguing about it in the comments. You get to hear the pros and cons and it can eventually make sense. It’s something entirely different when an LLM shits out code that it can’t even accurately describe later.

    • peoplebeproblems@midwest.social · 4 days ago

      Or that it can’t produce repeatedly. That’s something that bothers me. Slight changes in the prompt and you get a wildly different result. Or, worse, you get the same bad output every time you prompt it.

      And then there are the security flaws.

      • shalafi@lemmy.world · 3 days ago

        You can use that to your advantage! Slight prompt changes can give you different ideas on how to proceed and some items to evaluate. But that’s all they’re good for, and while they can be solid at getting you past a block, I’m horrified to think anyone in the IT space thinks an LLM can output safe, working code.

    • shalafi@lemmy.world · 3 days ago

      LLMs are great for getting around specific stumbling blocks, and might even present a path you hadn’t thought of or known about. And that is it. Stop right there and you’ll be fine.

      I completely understand how an ignorant bystander would believe AIs pump out working code. I cannot understand anyone with any experience thinking that.