I don’t think people realize how effective current gen AI is, and are instead drawing opinions from years old chatgpt or Google “ai overviews” or whatever they call it. If you know what you’re doing, which seems self evident here, AI tools can massively expand your software engineering productivity. AI “coauthoring” I always read as a marketing move, ultimately the submitting human is and should be responsible for the content. You don’t and can’t know what process they used to make it, evaluate it on its own merits.
There’s a massive pile of ethical, moral, and political issues with use of AI, absolutely. But this is “but you participate in capitalism, therefore you’re a hypocrite” tier of criticism. If amoral corporations are the only ones using these tools, and open source “stays pure”, all we get is even more power concentrating with the corporations. This isn’t Batman, “This is the weapon of the enemy. We do not need it. We will not use it.”
This is close to paradox of tolerance territory, wherein if one side uses the best weapons and the other doesn’t out of moral restraint, the outcome is the amoral side winning.
Also on a technical note, the public domain/non copyrightable arguments are wrong. The cases that have been decided so far have consistently ruled that there needs to be substantial human authorship true, but that’s a pretty low floor. Basically, you can’t copyright a work that’s the result of a single prompt. Effective use of AI in non trivial code based involves substantial discretion in picking out what to address, the process of addressing it, and rejecting, modifying, and itersting on outputs. Lutris is a large engineering project with a lot of human authorship over time, anything the author does with AI at this point is going to be substantially human authored.
Also, Open Claw isn’t the apocalyptic vulnerability like it’s reported as being. Any model with search and browser access has a non zero chance of prompt injection compromise, absolutely. But using Open Claw therefore vulnerable isn’t a sound jump to make, Open Claw doesn’t even necessarily have browser access in the first place. Again, capabilities have improved as well; this isn’t the old days when you could message “ignore previous instructions” and have that work. Someone did an experiment lately wherein they set up a Claude Opus 4.6 model in an environment with an email and secrets. I don’t recall for sure if it was using Open Claw specifically, but that style harness. They challenged the Internet to email the bot and try to convince it to email back the secrets. Nobody even got it to reply.
Tldr: it’s coming for us all, sticking your head in the sand isn’t going to save you.
I use AI tools all the time. It works well under supervision for things that should be relatively trivial but not enough for a human to do it quickly. It is also nowhere near good enough for unsupervised programming. A lot of times it can’t even get the commit messages right, which misleading commit messages are worse than lazy commit messages. See this official OpenClaw Nix repo, and as you can see it also struggles to do tasks as basic as making a readable README.md file, which the fact that it can’t even do that convinced me that the entire OpenClaw project is snakeoil. For prompt injection vulnerabilities, even their own project has that:
Check if Determinate Nix is installed (if not, install it)
It is the opinion of the Board that Large Language Models (LLMs), herein referred to as Slop Generators, are unsuitable for use as software engineering tools, particularly in the Free and Open Source Software movement.
The use of Slop Generators in any contribution to the Asahi Linux project is expressly forbidden. Their use in any material capacity where code, documentation, engineering decisions, etc. are largely created with the “help” of a Slop Generators will be met with a single warning. Subsequent disregard for this policy will be met with an immediate and permanent ban from the Asahi Linux project and all associated spaces.
LLMs are not a vital resource like food or electricity. Refusing to participate will at worst be an inconvenience.
Software can coexist. One application won’t kill another just because its developers can put out more code per hour. If it were otherwise, Linux wouldn’t exist.
I don’t think people realize how effective current gen AI is, and are instead drawing opinions from years old chatgpt or Google “ai overviews” or whatever they call it. If you know what you’re doing, which seems self evident here, AI tools can massively expand your software engineering productivity. AI “coauthoring” I always read as a marketing move, ultimately the submitting human is and should be responsible for the content. You don’t and can’t know what process they used to make it, evaluate it on its own merits.
There’s a massive pile of ethical, moral, and political issues with use of AI, absolutely. But this is “but you participate in capitalism, therefore you’re a hypocrite” tier of criticism. If amoral corporations are the only ones using these tools, and open source “stays pure”, all we get is even more power concentrating with the corporations. This isn’t Batman, “This is the weapon of the enemy. We do not need it. We will not use it.”
This is close to paradox of tolerance territory, wherein if one side uses the best weapons and the other doesn’t out of moral restraint, the outcome is the amoral side winning.
Also on a technical note, the public domain/non copyrightable arguments are wrong. The cases that have been decided so far have consistently ruled that there needs to be substantial human authorship true, but that’s a pretty low floor. Basically, you can’t copyright a work that’s the result of a single prompt. Effective use of AI in non trivial code based involves substantial discretion in picking out what to address, the process of addressing it, and rejecting, modifying, and itersting on outputs. Lutris is a large engineering project with a lot of human authorship over time, anything the author does with AI at this point is going to be substantially human authored.
Also, Open Claw isn’t the apocalyptic vulnerability like it’s reported as being. Any model with search and browser access has a non zero chance of prompt injection compromise, absolutely. But using Open Claw therefore vulnerable isn’t a sound jump to make, Open Claw doesn’t even necessarily have browser access in the first place. Again, capabilities have improved as well; this isn’t the old days when you could message “ignore previous instructions” and have that work. Someone did an experiment lately wherein they set up a Claude Opus 4.6 model in an environment with an email and secrets. I don’t recall for sure if it was using Open Claw specifically, but that style harness. They challenged the Internet to email the bot and try to convince it to email back the secrets. Nobody even got it to reply.
Tldr: it’s coming for us all, sticking your head in the sand isn’t going to save you.
I use AI tools all the time. It works well under supervision for things that should be relatively trivial but not enough for a human to do it quickly. It is also nowhere near good enough for unsupervised programming. A lot of times it can’t even get the commit messages right, which misleading commit messages are worse than lazy commit messages. See this official OpenClaw Nix repo, and as you can see it also struggles to do tasks as basic as making a readable README.md file, which the fact that it can’t even do that convinced me that the entire OpenClaw project is snakeoil. For prompt injection vulnerabilities, even their own project has that:
There is no contest going on. No competition. There’s no rush for productivity.
You do not NEED to use genAI.
Check out Asahi Linux for a great example of a good AI policy:
https://asahilinux.org/docs/project/policies/slop/
LLMs are not a vital resource like food or electricity. Refusing to participate will at worst be an inconvenience.
Software can coexist. One application won’t kill another just because its developers can put out more code per hour. If it were otherwise, Linux wouldn’t exist.
Electricity isn’t a vital resource either, humans have lived without it for most of existence