If we increase an LLM’s predictive utility it becomes less interesting, but if we make it more interesting it becomes nonsensical, since it then predicts typical human outputs less accurately.
Humans, however, can be interesting without resorting to randomness, because they have subjectivity, which grants them a unique perspective that artists simply attempt (and often fail) to capture.
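To make the LLM half of that contrast concrete, here is a minimal sketch of temperature sampling (the tokens and scores are made up, and real inference stacks are more involved, but the tradeoff is the same): lower the temperature and you get the most predictable continuation; raise it and the output gets “interesting” only because the weighted coin flip lands on less likely tokens more often.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick the next token from raw model scores via a softmax at the given temperature."""
    if temperature <= 0:
        # Greedy decoding: maximally predictable, least surprising.
        return max(logits, key=logits.get)
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(weights.values())
    draw = random.uniform(0, total)  # the "weighted coin flip"
    for tok, w in weights.items():
        draw -= w
        if draw <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical next-token scores after the prompt "The sky was"
scores = {"blue": 4.0, "grey": 2.5, "falling": 0.5, "a lie": -1.0}
print(sample_next_token(scores, temperature=0.1))  # almost always "blue"
print(sample_next_token(scores, temperature=2.0))  # occasionally "falling" or "a lie"
```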
Anyways, however we eventually create an artificial mind, it will not be with a large language model; by now, that much is certain.
Ah, but if there’s no random element to human cognition, it should produce the exact same output time and time again. What is not random is deterministic.
Biologically, there’s an element of randomness to neurons firing. If they fire too randomly, that’s a seizure. If they don’t ever fire spontaneously, you’re in a coma. How they produce ideas is nowhere close to being understood, but there’s almost certainly an element of ordered firing patterns emerging spontaneously. You can even see a bit of that with imaging.
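To caricature that spectrum, here’s a toy sketch (a leaky threshold unit driven by Gaussian noise; purely illustrative, not a claim about real neurophysiology): with no noise it never fires on its own, with a little it fires now and then, and with a lot it fires nearly constantly.

```python
import random

def spontaneous_spikes(noise_std, threshold=1.0, leak=0.95, steps=1000):
    """Count spikes of a toy leaky threshold unit driven only by Gaussian noise."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = leak * v + random.gauss(0.0, noise_std)  # decay plus random input
        if v >= threshold:
            spikes += 1
            v = 0.0  # reset after firing
    return spikes

print(spontaneous_spikes(0.0))  # 0: no spontaneous activity at all
print(spontaneous_spikes(0.2))  # occasional spontaneous spikes
print(spontaneous_spikes(1.0))  # fires on a large fraction of steps
```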
“Anyways, however we eventually create an artificial mind, it will not be with a large language model; by now, that much is certain.”
It does seem to be dead-ending as a technology, although the definition of “mind” is, as ever, very slippery.
The big AI/AGI research trend is “neuro-symbolic reasoning”, which is a fancy way of saying embedding a neural net deep in a normal algorithm that can be usefully controlled.
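A schematic of one common pattern, as I understand it (the scorer below is a stub standing in for a trained net, and every name here is invented for illustration): an ordinary best-first search is the outer, controllable algorithm, and the learned model only ranks which candidate step to expand next.

```python
import heapq

def neural_score(state, step):
    """Stub standing in for a trained model that rates how promising a step looks (higher is better)."""
    return -abs(sum(step) - 10)  # toy heuristic pretending to be a learned scorer

def propose_steps(state):
    """Symbolic part: enumerate the legal moves. Here, append one digit to a sequence."""
    return [state + (d,) for d in range(10)]

def best_first_search(goal_check, max_expansions=1000):
    """Ordinary, inspectable search loop; the 'neural net' only influences the ordering."""
    frontier = [(0.0, ())]  # (priority, state), lower priority popped first
    expanded = 0
    while frontier and expanded < max_expansions:
        _, state = heapq.heappop(frontier)
        expanded += 1
        if goal_check(state):
            return state  # the symbolic layer, not the net, decides what counts as success
        for nxt in propose_steps(state):
            heapq.heappush(frontier, (-neural_score(state, nxt), nxt))
    return None

# Toy goal: a three-digit sequence summing to 10, standing in for a reasoning target.
print(best_first_search(lambda s: len(s) == 3 and sum(s) == 10))
```

The point of the structure is that the search loop, the legal moves, and the success test all stay in plain code you can inspect and constrain; the net is confined to suggesting what to try first.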
I didn’t say there’s no randomness in human cognition. I said that the originality of human ideas is not a matter of randomized thinking.
Randomness is everywhere. But it’s not the “randomness” of an artist’s thought process that accounts for the originality of their creative output; if anything, it’s detrimental to it.
For LLMs, the opposite is true.
Actually, it seems pretty likely randomness is a central part of a human coming up with an idea.
Consider the following question: “why did you write something sad?”
For an LLM, the answer is that a mathematical formula came up heads.
For a person, the answer is “I was sad.”
Maybe the sadness is random. (That’s depression for you.) But it doesn’t change the fact that the subjective nature of sadness fuels creative decisions. It is why characters in a novel do so and so, and why their feelings are described in a way that is original and yet eerily familiar — i.e., creatively.
“randomness is a central part of a human coming up with an idea.”
So, here’s how I understand this claim. Either:
(1) as an endorsement of the Copenhagen Interpretation about the ubiquity of randomness at the quantum level, or
(2) as a rejection of subjectivity (à la eliminative materialism), which reduces thoughts, emotions, and consciousness to facts about neural activation vectors.
Interpretation (1) means randomness is background noise cancelled out at scale. We would still ask why some people are more creative than others (or why some planets are redshifted compared to others), and presumably we have more to say than “luck,” since the chance that Shakespeare wrote his plays at random is essentially 0 (rough numbers below).
Interpretation (2) suggests that creativity doesn’t exist and this whole conversation is senseless.
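And the “chances are 0” remark survives even a generous back-of-envelope check; here’s a rough calculation for a single famous line, with an illustrative 27-character alphabet (assumptions mine):

```python
from math import log10

line = "to be or not to be that is the question"  # a single line, punctuation dropped
alphabet_size = 27  # 26 letters plus the space, ignoring case

# Probability of producing exactly this line by drawing characters uniformly at random.
p = (1 / alphabet_size) ** len(line)
print(f"about 1 in 10^{-log10(p):.0f}")  # on the order of 1 in 10^56, for one line of one play
```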