If this is possible, then your AI workflows are catastrophically broken. Even my dumbass company knows AI needs human supervision at all times.
Reddit and Lemmy are so extreme on this topic that it’s impossible to express a nuanced opinion. AI is an undeniably powerful tool for any good programmer, but it needs to be used properly.
People being this irresponsible with it must work on software where there are no legal consequences if it breaks. As brainwashed as my company is on AI, they would never allow us to create a process that releases unreviewed code.
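And that kind of guardrail doesn’t even have to be elaborate. Here’s a rough sketch of what a release gate could look like, as a CI step that refuses to ship a PR unless a human has approved it. The repo name and environment variables are made up for illustration; it leans on GitHub’s standard pull request reviews API:

```python
# Hypothetical CI release gate: refuse to ship a PR with no human approval.
# GITHUB_TOKEN, REPO, and PR_NUMBER are assumed to be set by the CI runner.
import os
import sys

import requests

token = os.environ["GITHUB_TOKEN"]
repo = os.environ["REPO"]            # e.g. "acme/payments-service" (made up)
pr_number = os.environ["PR_NUMBER"]

resp = requests.get(
    f"https://api.github.com/repos/{repo}/pulls/{pr_number}/reviews",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
resp.raise_for_status()

# Count only APPROVED reviews from human accounts; GitHub marks bot
# accounts (including AI reviewers) with user.type == "Bot".
human_approvals = [
    r for r in resp.json()
    if r["state"] == "APPROVED" and r["user"]["type"] != "Bot"
]

if not human_approvals:
    sys.exit("No human approval on this PR -- refusing to release.")

print(f"{len(human_approvals)} human approval(s) found, proceeding.")
```

In practice you’d get much of this with branch protection rules requiring reviews before merge, but a script like this sketch also covers pipelines that deploy outside of the merge itself.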
Oh they are lol. Our company was full steam on it and is just now pumping the brakes as they’ve seen the chaos.
Don’t get me wrong. I think Gen AI can be, gasp, useful! It’s great in small pockets where you can handhold it and verify the output. It’s great at cutting through the noise that Google and others have failed to address. It’s good at summarizing text.
I’m not so high on it being this massive reckoning that’s going to replace people. It’s just not built for that. Text prediction can only go so far and that’s all GenAI is.
We’ve had multiple instances of AI slop being automatically released to production without any human review, and some of our customers are very angry about the broken workflows and downtime, yet the execs are still all-in on it. Maybe the tune is changing to, “well, maybe we should have some guardrails”, but very slowly.
The incident I mentioned above was the final straw, but I’ve slowly seen the enthusiasm for LLMs start to whittle away.
It’s still the shiny new toy that everyone must play with, but we went from “drop your entire roadmap for AI” to “eh, maybe we don’t scrap all UIs just yet”.
They’ll change their tune when a few of their new workflows go rogue, auto-commit PRs they shouldn’t, and cause build issues.
I have a feeling it’s gonna drop more from there.