AI can’t be all that bad. The problem I keep seeing with AI is that it’s a double-edged sword. On one side, you have corporations shoving AI into just about everything and treating it like it’s a cure for cancer, which really rubs people the wrong way. Then, on a more societal level, you’ve got everything from people who make art with AI and still credit themselves as artists, to people who treat AI like a therapist even though that’s not advised.

However, I’ve found some benefits to AI. For example, I’ve been chatting with ChatGPT about credit cards, because it’s something I may look into getting. It’s helped me understand them better than most people who have tried explaining them to me, simply because it gives me a streamlined answer instead of beating around the bush.

  • communism@lemmy.ml · 5 hours ago

    The relevance for me personally is whether or not they can be useful for programming, and if they’re accessible to run locally. I’m not interested in feeding my data to a datacentre. My AMD GPU also doesn’t support ROCm so LLMs run slow as fuck for me. So, generally, I avoid them.

    LLMs consistently produce lower-quality, less correct, and less secure code than humans. However, they do seem to be getting better. I might be open to using them to generate unit tests if only they ran faster on my PC. I tried deepseek, llama3.1, and codellama; all take an hour or more to answer a programming question since they’re running on my CPU alone, as my GPU doesn’t support ROCm. So really not feasible for anything.

    Depending on what you count as AI, I think some of the long-existing predictive ML like autosuggestions based on learning your input patterns are fine and helpful. And maybe if I get a supported GPU I won’t mind using local LLMs for some things. But generally I’m not dying to use them. I can do things myself.