• 0 Posts
  • 99 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • Very efficiently.

    Or, for a less cheeky answer: I believe the method they used, at a high level, was pointing a camera at a few guide stars, so the 30 lines of assembly might have been a loop that checked those cameras for any drift of those stars and fired a correction pulse from the attitude thrusters to keep them centered. Oh, one of the references might have been the signal strength from home, too (the signal gets weaker if the antenna isn’t aligned).

    Unless it was an emergency, it might only need to look at 5 pixels to determine alignment and correction.

    Also, just because it’s assembly doesn’t mean it can’t call subroutines and functions, so that 30-line count might be misleading, the same way those few lines in the other reply have far more going on behind them. That said, if it’s just doing a pixel brightness comparison: one line to read the central pixel, then for each direction one line to read that pixel, one to compare, one to jump to the next comparison if the center is brighter, one instruction to initiate a correction burn, and one to stop it immediately after, then one instruction to return to the start of the loop. That comes to 22 lines total, leaving 8 for logging or maybe timing the burn. And that’s assuming their instruction set didn’t have anything fancy like read-and-compare, compare-and-jump, or a single-instruction burn pulse.


  • It was a different commenter, though I also like snacking on dark chocolate chips. Baker’s chocolate is also good, but the consistency of the squares isn’t great for snacking.

    I just read it as a tip for how to get chocolate anyway, even if all the chocolate-bar makers stop using it. The chocolate-like but cheaper stuff they’re using instead of chocolate sounds more like the Dust Bowl/Depression-era tricks for enjoying food when you can’t afford it.

    Though part of my perspective comes from getting my cooking to a level where store-bought prepared stuff is just the easy/convenient option, not the high-quality one (for health or taste). I also love dark chocolate and prefer the high-cocoa-content ones over most chocolate bars.



    Yeah, I think sucralose is the only one that doesn’t taste awful to me. I’ve always been skeptical of the defense of aspartame because it tastes like something I shouldn’t be eating. I was looking forward to stevia back when it got popular, but it also has that taste (I’m guessing from leftover solvent, since it’s not water soluble like sugar).

    There are plenty of ways to make things taste great without relying so heavily on sweetness. I hate the western food industry’s obsession with it, along with the capitalist obsession with selling as much as possible, because the result is that the “less sugar” I’ve wanted to see instead means the sugar gets replaced with other chemicals that taste sweet (and “chemically”).

    And I doubt safety studies looked at anything beyond “does it so obviously cause issues that we’ll be sued the moment we try to sell this?”


  • Yeah, it’s good enough that it even had me fooled, despite all my “it just correlates words” comments. It was getting to the desired result, so I was starting to think that the framework around the agentic coding AIs was able to give it enough useful context to make the correlations useful, even if it wasn’t really thinking.

    But it’s really just a bunch of duct tape slapped over cracks in a leaky tank they want to put more water in. While it’s impressive how far it has come, the fundamental issues will always be there because it’s still accurate to call LLMs massive text predictors.

    The people who believe LLMs have achieved AGI are either lying to prolong the bubble (in the hopes of actually reaching the singularity before it pops) or revealing their own lack of expertise, because they either haven’t noticed the fundamental issues or think those are minor things that can be solved, since any individual instance can be patched.

    But a) they can only be patched by people who already know the correction (so the patches won’t happen at the bleeding edge until humans solve the problem they wanted the AI to solve), and b) it would take an infinite number of these patches just to cover all permutations of what we do know.


    Here’s an example I ran into. Work wants us to use AI to produce work stuff; whatever, they get to deal with the result.

    I had asked it to add some debug code to verify that a process was working by saving the in-memory result of that process to a file, so I could check whether the next step was even possible based on the output of the first step (because the second step was failing). I got the file output and it looked fine, other than missing some whitespace, but that’s okay.

    And then while debugging, it said the issue was that the data from step 1 wasn’t being passed into the next function at all. Wait, how can that be, the file looks fine? Oh: when it added the debug code, it added a new code path that just calls the step 1 code (properly). Which does work for verifying step 1 on its own, but not for verifying the actual code path.
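    To make the failure mode concrete, here’s a minimal Python sketch (all names hypothetical, not my actual work code) of the difference between dumping the real in-memory result and adding a separate code path that re-runs step 1 just for the dump:

```python
import json

# Hypothetical reconstruction of the anecdote: step1/step2 stand in for
# the real pipeline stages, and the "bug" is step2 never receiving
# step1's output.

def step1():
    return {"records": [1, 2, 3]}

def step2(data):
    return sum(data.get("records", []))

def run_with_good_debug(path):
    # What was wanted: dump the exact in-memory object that step2
    # receives, on the real code path.
    data = step1()
    with open(path, "w") as f:
        json.dump(data, f)
    return step2(data)

def run_with_bad_debug(path):
    # What actually got added: a fresh step1() call just for the dump.
    with open(path, "w") as f:
        json.dump(step1(), f)
    # Meanwhile the real handoff is still broken (step2 gets nothing),
    # and the debug file can't reveal that.
    return step2({})
```

    Both versions write an identical-looking file to disk, which is exactly why the dump “looked fine” while step 2 kept failing.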

    The code for this task is full of examples like that. It’s almost as if it is intelligent, but using the genie model of being helpful, where it technically follows directions while subverting expectations anywhere they aren’t specified.

    Thinking about my overall task, I’m not sure using AI has saved time. It produces code that looks more like final code, but adds a lot of subtle, unexpected issues along the way.