<a rel="me" href="https://layer8.space/@helix">Mastodon</a>

  • 1 Post
  • 175 Comments
Joined 2 years ago
Cake day: July 27th, 2024

  • You have a Studio Ghibli avatar; how do you think that makes you look when you rant about AI?

    You should’ve simply made this a talk at a conference.

    Instead you decided to have an AI write a mini-book (there are telltale signs like “this is not x, this is y” sentences) and tagged a few accounts who have reach.

    Is this ragebait, or are you serious? Is there a possibility you’re currently mentally unwell? There’s this issue called “AI psychosis”, and I’d hate to be witnessing it here. If you think the AI speaks to you personally, please do not do what it says; instead, get help from a professional, or reach out to friends and family and listen to their consensus.


  • No one does that in a project they’re building for themselves.

    Speak for yourself; I’ve always done that, and I find it even easier with LLMs nowadays.

    I hate most AI shite with a passion, but when it helps my colleagues write commit messages that say more than “add stuff” or “fix some things”, I’m fine with it.

    I rarely use AI to generate code, usually only when I need a starting point. It’s much easier to unfuck AI code than to stare blankly at a screen for an hour. I’d never commit code I don’t fully understand or haven’t read to the last byte.

    I hope OP is doing the same. LLMs fail at 90% of coding tasks for me, but for the other 10% (mostly writing tests, readmes, and boilerplate) they’re really OK for productivity.

    Ethics of LLMs aside, if you use them for exactly what they’re built for – being a supercharged, glorified autocomplete – they’re cool. As soon as you try to use them for something else, like “autocompletion from zero”, aka “creativity”, they fail spectacularly.