while(true){💩};

  • 0 Posts
  • 57 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • My argument is incredibly simple:

    YOU exist. In this universe. Your brain exists. The mechanisms for sentience exist. They are extremely complicated and complex, but magic and mystic Unknowables do not exist. Therefore, at some point in time, it is physically possible for a person (or team of people) to replicate these exact mechanisms.

    We currently do not understand enough about them yet to do this. YOU are so laser-focused on how a Large Language Model behaves that you cannot take a step back and look at the bigger picture. Stop thinking about LLMs specifically. Neural-network artificial intelligence comes in many forms, and many are domain-specific, such as molecular analysis for scientific research. The AI of tomorrow will likely behave very differently from those of today, and may require hardware breakthroughs to accomplish (I don't know that the x86_64 or ARM instruction sets are sufficient or efficient enough for this process). But regardless of how it happens, you need to understand that because YOU exist, you are the prime reason it is not impossible, or even infeasible, to accomplish.


  • This argument feels extremely hand-wavey and falls prey to the classic problem of “we only know about X and Y that exist today, therefore nothing on this topic will ever change!”

    You also limit yourself when sticking strictly to narrow thought experiments like the Chinese room.

    If you accept that the human brain, which is made up of nigh-innumerable smaller domain-specific neural nets combined together with the frontal lobe, has consciousness, then it is absolutely physically possible to replicate that process by other means.

    We noticed how birds fly and made airplanes. It took many, MANY iterations that seem excessively flawed by today's standards, but they were stepping stones to a world-changing new technology.

    LLMs today are like da Vinci's corkscrew flying machine: clunky, technically performing something resembling the end goal, but ultimately failing the task they were built for, in part or in whole.

    But then the Wright brothers happened.

    Whether sentient AI will be a good thing or not is something we will have to wait and see. I strongly suspect it won’t be.


    EDIT: A few other points I wanted to dive into (will add more as they come to mind):

    "AI derangement" or "AI psychosis" is a term meant to describe people forming incredibly unhealthy relationships with AI, to the point where they stop seeing its shortcomings. But I'm noticing more and more that people are throwing it around the way "Trump Derangement Syndrome" gets thrown around, and that's not okay.


  • If it’s not insisting, it’s demanding, which is worse.

    There are many tools now that replace X11 behavior. If Wayland doesn't "do what they need" at this point, there's a strong chance they haven't put any effort into making it work for them.

    For desktop forwarding there’s waypipe.

    For tablet users, KDE (and probably GNOME) has pretty good tablet support at this point.

    For artists, KDE JUST got much better color calibration and HDR support.

    For gamers, WINE now has an experimental Wayland-native mode, and barring that there's Gamescope to make it behave semi-natively (so this one is more of a future-ish solution that you can already use today).

    Screen recording mostly just works with PipeWire, and almost everything supports it now, including Discord.

    Etc.


  • Looking forward to seeing your work - it’s always good to have competitors, and gpt4all is also very crashy. If you have a lead in stability, I’d definitely use yours over theirs.

    Some other areas you could probably look into if you want to differentiate are:

    • Getting Started experience - recommend some high-quality models and update the list as time goes on. Maybe include a good default one as part of the package.

    • Convenience - include a way to do what the modern chat interfaces do, where asking for something other than text (image generation, etc.) calls a different AI model built for that purpose and returns the result - see the sketch after this list.

    • Voice conversations - Can we actually talk to the dang thing?

    • Assistant module - piggybacking off the last one, can we invoke it with a wake-word or a button press and have it "always available"? (Similar to Home Assistant with a Whisper plugin, but on-device.)
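
    Here's roughly what that dispatch could look like in Python - a minimal sketch where text_model, image_model, and the keyword triggers are all hypothetical placeholders, not anything from your project:

    ```python
    def text_model(prompt: str) -> str:
        # Placeholder: call the local chat LLM here.
        return f"[chat reply to: {prompt}]"

    def image_model(prompt: str) -> str:
        # Placeholder: call a local image backend (e.g. Stable Diffusion) here.
        return f"[image generated for: {prompt}]"

    def route_request(prompt: str) -> str:
        """Send the prompt to whichever backend fits the request."""
        image_triggers = ("draw", "image of", "picture of")
        if any(t in prompt.lower() for t in image_triggers):
            return image_model(prompt)
        return text_model(prompt)  # default: plain text chat

    print(route_request("draw a cat wearing a tiny hat"))
    print(route_request("summarize how birds inspired airplanes"))
    ```

    A real implementation would want something smarter than keyword matching for intent detection, but the shape is the same: one front-end, several task-specific models behind it.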

    Anyway, I wish you well in your endeavor and will keep an eye out.

    EDIT: looks like the conversational bits are on your roadmap, and you do have some basic suggestions on startup.

    As for voice, the open-source Whisper model might fit your project's theme a bit closer than ElevenLabs.
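
    If it helps, here's roughly what on-device transcription with that package looks like in Python - a minimal sketch, assuming `pip install openai-whisper` and a recording named question.wav (the filename is just a stand-in):

    ```python
    import whisper

    # "base" is small and fast; "small" or "medium" trade speed for accuracy.
    model = whisper.load_model("base")

    # Transcription runs locally - no cloud API needed
    # (the model weights download once on first use).
    result = model.transcribe("question.wav")
    print(result["text"])
    ```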