If you then went "psyche! It was a computer all along!" then given the new information I'd probably retract my previous conclusion |
The problem with AIs is that they're highly unreliable, much more than humans, let alone non-AI computer technology. That makes them very impractical to build on top of. You can't build on ground that could shift unpredictably. |
Not with the models that exist currently. Their context windows are laughably small. |
The reason all these models have a chat interface is because the human is needed to rein the conversation in. |
Your argument does not defend the idea you're trying to defend. If your agree that the earliest brains were too simple to reason then you have to concede that NN may still be at the stage where they're not capable of reasoning, even if later on they may be capable of it. |
But that's the entire point I'm making. We invented science and the scientific process because we are so bad at evaluating truths and facts.
To connect "LA is burning" with "God's wrath", you assume God exists, you assume God's intent, you assume God's involvement, and, hell, you assume climate change isn't real. If we go that route, anything can be logically connected to anything, as we can always generate the assumptions needed to make it so.
However, if you make such claims about things where the facts are not actually known, it should reject what you're saying, because the evidence is not clear.
Why would we run into such a problem? We already know other animal/bug/insect species can see further into the spectrum than we can. |
While we can't know, I would personally assume it would evoke a brand new color. I don't see any reason the brain would do something as stupid as confuse you by making two wavelengths look the same when it's clearly capable of creating another color.
But this is an explanation for why it's limited, which was the only point I made. |
the experiences you laid out are all just retooling of prior experiences. You can't imagine a new experience that's actually foreign. |
I mean, they made it all the way to Earth and their alien brain is like, "we are so advanced, but the dominant creature of the planet, which exists on every continent, uses electricity, has sent probes and satellites into space, and is capable of nuclear weaponry and harvesting energy, is so hard to distinguish from those brown things running around and eating dirt. Like... come on. I can't believe it."
I know you're arguing that reasoning can only be determined by looking into the process rather than the output, but this is getting dangerously close to "I think therefore I am", where we might not be able to tell if anyone is capable of reasoning - simply because our consciousness does not extend to other people's brains. We don't understand the inner workings of the human brain well enough to actually know how it reasons. So by your argument, we don't know if anyone other than ourselves is capable of reasoning.
But why not?! Why can't we look at a dataset and look for properties that are intrinsic to reasoning? Why can't reasoning be real as long as it's demonstrated adequately?
Just as a bird's flight and an airplane's flight are both legitimately "flying" despite using different mechanisms. |
We know that if you feel consciousness it must exist, "I think therefore I am", but why is this not true of reasoning? |
if we see a data set with chess, compression, and lots more, paired with complex analyses of situations with multiple variables that can't be preprogrammed ahead of time - why do we then say, "we don't know!" rather than, "this is reasoning"?
The reasoning comes from the fact that there's no preprogrammed way to handle the information such that you can get a coherent output without making new logical connections never made before. And an AI, given a brand new topic of discussion it has never seen before, can, in fact, respond coherently by making new logical connections it has never made before with the brand new information given. |
Your life is hanging on the answer to a question of medium complexity. Not a trivia question, but something that requires problem solving. You have the option of selecting a random human between the ages of 25 and 50 and, hell, even choosing what country they're from. OR you can select the most powerful AI available today to solve this question.
Our history as well is clear on how unreliable we are - believing nonsense, forcing it on others, believing more nonsense, spouting nonsense as if we know it for a fact, rinse and repeat. In this regard, AI is more reliable than humanity as a whole (though again, the best of us would be better, but this could change as AI advances). |
However, in AI training, it takes the training data and essentially puts it in its "long-term memory", similarly to us. |
Well, what I'm really raising is the question, "at what point does it go from definitely not reasoning to definitely reasoning?"
"One time I drank a homeopathic remedy and I got cured, therefore homeopathy works." There is nothing logically unsound in that argument. |
The question is not whether any two propositions can be linked by some logical chain. The question is whether there *is* a logical chain that links these two propositions. |
LLMs will often make arguments like this one. |
Rejecting a claim of unknown truthfulness as misinformation is just as incorrect as accepting it as fact. |
The only facts it has access to are "some people say".
Your question was why a human brain can't imagine a new color. Are the neural circuits that process color information in the brain of non-human species identical to the equivalent circuits in the human brain? |
No new subjective colors so far, as far as I know |
You can imagine white noise. |
If that's not good enough, then just accept that I can, in fact, imagine a sensory experience that's different from all the other senses. And if you can't accept that either, then the problem is that you don't actually care about discussing this topic; you just want to assert that this is an inherent limitation of imagination in all humans.
For example, maybe this species doesn't care about any of that. |
I know I am me, but I can't know how like me you are. |
Because reasoning is a process, not the output of a process. |
Let's write a function that takes an input and an output and gives a yes or no answer on whether the process that produced that output for that input involved reasoning
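Here's a minimal Python sketch of that thought experiment (every name here is invented, purely for illustration): a judge that only ever sees the (input, output) pair is satisfied equally by a process that works the answer out and by a lookup table that merely replays memorized answers, which is exactly the "reasoning is a process, not an output" objection.

```python
# Hypothetical sketch (all names invented for illustration): a judge that only
# sees the (input, output) pair cannot tell a process that works the answer out
# from a lookup table that merely replays memorized answers.

def genuine_reasoner(question: str) -> str:
    """Stands in for a process that actually works the problem out."""
    return str(eval(question))  # toy "reasoning": evaluate the arithmetic

# A fake built by memorizing the genuine reasoner's outputs ahead of time.
memorized = {q: genuine_reasoner(q) for q in ["2+2", "3*7", "10-4"]}

def lookup_fake(question: str) -> str:
    """No reasoning at all: just replays whatever was memorized."""
    return memorized[question]

def had_reasoning(question: str, answer: str) -> bool:
    """The proposed function: it only ever sees the pair, so any rule it
    applies (here: "accept if the answer is correct") passes both processes."""
    return answer == str(eval(question))

for q in ["2+2", "3*7", "10-4"]:
    assert had_reasoning(q, genuine_reasoner(q)) == had_reasoning(q, lookup_fake(q))
print("From (input, output) alone, the judge cannot tell the two apart.")
```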
When a metric becomes a target, it ceases to be a good metric. |
Do they, though? Both redirect air to turn thrust into lift. |
I don't agree that LLMs analyze situations, so a big library of factoids does not a reasoning system make. |
The AI will try to provide an answer that's entirely useless |
Nah. I'll take my odds with myself. |
Would you rather be followed by someone who will order you around (for the sake of argument, this person will not try to take advantage of you), or would you rather follow the orders of an LLM of your choosing? |
Nah, man. People are reliable. Sure they make mistakes sometimes, but they're not gross mistakes. A cook in a restaurant won't put tar on your food instead of salt because he got confused. |
You're talking about training the model on its own output. |
That's the worst possible thing you can do to a neural network, because the training set is authoritative. It will eventually rot itself. |
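As a hedged toy illustration of the "rot" claim (an assumption-laden sketch, not how LLM training actually works): fit a simple model to some data, replace the data with samples drawn from the fitted model, and repeat. Because each generation only ever sees its own output, estimation noise compounds and the spread of the data collapses over many generations, which is the drift usually called model collapse.

```python
# Toy sketch of training on your own output (illustrative only, not real LLM
# training): fit a Gaussian to the data, resample from the fit, refit, repeat.
# Each generation only sees the previous generation's output, so estimation
# noise compounds and the distribution gradually collapses.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # the original "human" data

for generation in range(1, 201):
    mu = statistics.fmean(data)       # "train" the model on the current data
    sigma = statistics.pstdev(data)
    data = [random.gauss(mu, sigma) for _ in range(50)]  # next generation sees only model output
    if generation % 50 == 0:
        print(f"generation {generation:3d}: std = {statistics.pstdev(data):.3f}")
# The printed spread keeps shrinking: the model "rots itself".
```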
Clearly, if you're a doctor and you know the immune system has a 100% cure rate for disease A and that homeopathic remedies have been shown to be of no help at all in curing disease A, then you KNOW the immune system cured patients with disease A, not the homeopathic remedy.
If we strictly ask whether a logical chain exists between two propositions given only the facts which we know are true, we would have to consistently conclude that we don't know all the facts, hence no logical chain exists between any two propositions. You found drug X cures disease A 100% of the time vs a 0% survival rate without drug X? It could be that drug X cured them, or it could be that God exists and just loves everyone who takes drug X cause he thinks it's so rad so he cures them himself. |
No, rejecting a claim of unknown truthfulness is the default position in philosophy and science. "There's gold in your backyard, you should dig it up!" You're going to assume there probably isn't - which is practically the same as rejecting it as misinformation. |
This isn't really true, as it knows what sources of information are more reliable than others, likely through the trainers. |
Even if everything they learned was "hearsay", that's essentially the same as discounting anyone's education because all we learned was what the professor told us - we didn't run the experiments ourselves!
I assume so. |
It makes MUCH more sense that there is some system for inventing these experiences which is accessed as needed.
Even so, this is just imagining white and black, nothing really new here. |
Lmao, I have no idea how you can expect me to just believe you imagined a completely new sensation. |
It's like Spiderman's spidey-sense, he can't describe the feeling to anyone because no one has ever felt it before. He can describe it to the best of his ability and you can try to imagine it, but you're never actually going to imagine the feeling. |
Someone who has never felt pain cannot know what pain feels like. Someone born with no nerves cannot know what touch feels like. If they told me they imagined it and now they know, I'm not gonna believe them. |
I don't see how "not caring" would equate to "can't tell the difference" or "can't tell which species is more dominant". |
But we need a strict definition for what reasoning actually is if we are to extend it to other beings/computers. Is a dog reasoning when it hears a bell and knows that food is coming? Or is it just pattern recognition? |
Your scenario doesn't really work, since if the function can tell whether an input/output pair involved reasoning but can also be fooled by a complementary function, then clearly the original function is bad at its job. The function that detects reasoning must be able to reason as well - or else it's not gonna do a good job. Hence it wouldn't be a function at all, but rather a human (or an AI..?).
I mean, are we gonna be so nitpicky that we can't agree a bird's flight and a plane's flight are different just because the physical principle behind them is the same? That's like saying a DC motor and a BLDC motor are the same because they both run on electricity and spin. Like sure, but that's not a valid point against someone saying they're not the same.
I do wonder what AI you're using, as I gave the prompt to Sonnet and got a pretty good answer. Obviously it didn't write the program, but it gave a brief outline of the code and a strategy that was very reasonable and workable. It didn't know what was wanted in terms of supporting QEMU virtual disks, but it assumed the prompter had some intention of optimizing transfers specifically for those virtual disks, leaving the implementation up to you. But it assumed you'd want to control cache, blocking behavior, and wear-leveling behavior (which I think the SSD does itself, so this could be a mistake), and to support TRIM to optimize for SSDs.
But we humans do make grave errors, every day and all the time. |
In this case, the training data would be the real-world experiences it would have.
However, imagine unshackling them. Allowing them to run indefinitely as we do, constantly generating new ideas from old ones (which, is that not what we do fundamentally?), testing them, discarding the bad ones, and repeating. Imagine them doing this to solve a novel problem that requires a new framework. Is it not possible?
What I'm getting at is that the reasoning employed by a pharmaceutical researcher and that employed by my hypothetical believer in homeopathy are different in degree, not in kind.
it's fundamentally impossible to deduce a causal link empirically... That is, only deductive reasoning lets you establish causality. Inductive reasoning will never get you there. |
so is my patient's reasoning |
"logical chain" was the cognitive process linking the two propositions in someone's mind |
"I took it and I got cured, therefore it cured me" is clearly empiricism in practice |
Personally, I consider myself Popperian; everything is in the "maybe" pile until it's moved to the "false" pile. |
I don't think there's a single default position in philosophy |
Your conclusion would likely change drastically if the claim instead was "there's gold in a box in your basement, you should go get it!"
I guess that would make that metahearsay. |
You assume all species that have sight have identical visual processing systems? |
So... Okay. What you're saying is that brains can do literally anything. |
Furthermore, it dismisses as an example anything at all
A new color is not any more novel than a new image made with known colors |
I just wish you spoke with a little less certainty about other people's subjective experiences |
So anything I imagine could very well be what Spider-man feels when his spidey sense tingles. |
externally nothing changes between a person being unable to feel pain and then able to imagine it |
why do you hold any beliefs at all about their experiences beyond what they report? |
would you believe they're lying or telling the truth? |
Which is more dominant is not an objective measure |
but I think reasoning is ultimately linked to language and symbols |
I think a good example of reasoning in animals is crows dropping nuts on crosswalks so they'll get run over and cracked by cars. |
plus some non-determinism |
That means it's still possible to devise an optimized function that only seeks to trick humans without actually reasoning. |
Given that parts of their training involve putting humans in the loop to see how much they like a model's responses, I'd say they're definitely the former. |
we're talking about whether the word "flight" as applied to each one means something different |
What I'd expect an expert to say |
The correct answer for both is still "I don't know how to do that". |
Honking instead of shifting a gear is a gross error, but probably inconsequential. |
LLMs are not as useful as they seem |
Are you talking about plugging an NN into a robot body, or perhaps a virtual environment? |
Is it even possible to learn to debug by just watching a screen, with no additional context? |
LLM is a homeopathic remedy? |
the human in the loop model is a problem |
If the logic you use to say LA burning -> God's wrath is logic you would not accept in other contexts, you're using bad logic. |
A single person's experience is rarely ever good enough to be considered empirical evidence. |
the default to new ideas being on the "negative" side, or skepticism |
"The default position is always disbelief. Period." |
In your scenario, there's so little to lose that your bias allows you to indulge in your gambling/hopeful side. This is not logic or reasoning based. |
You can't say that because you felt happy, that somehow means you also felt anger because they're both feelings. |
If you say you imagined a new color - I'm going to look at the fact that you made an objective claim from a subjective experience and deny it for lack of good evidence. I know now you're tempted to say, "then if everyone could imagine a new color, you still wouldn't believe it!? What's the difference?!" |
No, because it's written to be a feeling no one has had. Therefore you claiming you can imagine what it feels like goes against what it actually is.
This also brings up a good point for me - which is that a blind person can't imagine ANY color, despite definitely having the mental faculties to experience and see them (assuming the blindness was caused by the eyes themselves and they have a normal brain).
AI can reach these same conclusions and you don't seem convinced it's using reasoning to do so.
When the crow sees cars breaking things and wants something broken, is that truly a novel problem? Didn't the problem solve itself? It saw how things break, so it knows already what needs to happen for something to break. |
You could say the novelty was actually getting the thing and dropping it so that it'll break, but that can be explained by a million other cognitive processes that aren't reasoning - like curiosity. It's not like the bird knows for sure its idea is gonna work before it tries it for the first time (or sees it work for the first time). |
I don't see why current AIs can't do that. Moreover, I still don't see why this is a requirement at all to prove reasoning. |
Funny you say that when you debated against this very point with me before. |
It's possible to imagine - I don't know about actually possible to devise. It could easily be the case that the only function available that can trick humans... is one that actually reasons.
Humans are trained the same way? The only difference is our loop involves outcomes our brain likes or not (and survival). I don't see this as a good argument; evolution and experience train us similarly. We're just doing that to AI at an accelerated pace and to mold it to how we like.
When given a test question, we don't usually say, "Actually, this question is misguided because the main character's feelings were constantly changing throughout the story". We just answer the question the best we can and move on - because that's the expectation. |
The human is there simply because it's more expedient. If you want a smart AI now, a human teaching it is faster than it teaching itself.
I don't think that the logic is bad, but rather they're giving special treatment to certain propositions. |
Skepticism is about suspending judgement on a proposition until more evidence is available. A skeptic does not default to believing that something is false, he defaults to holding no position. |
Disbelief is not the same as belief in the negation of a claim |
If the claim was "there are no little green men on the Moon", would your default position be to believe there are little green men on the Moon? |
How does the fact that your chosen behavior changes if the claim changes not invalidate that example? |
No, but I did feel an emotion. Anger and happiness are not fundamentally different. |
If I told you that, although I can distinguish them on some level, red and green look the same to me, how could I possibly provide any evidence? |
Spider-man could also have the power to find a pair of integers whose ratio was the square root of 2, and I would also reject that. |
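For reference, here's why that particular power can be rejected out of hand, sketched as the classic derivation (a standard proof, nothing specific to this discussion):

```latex
% Why no pair of integers can have a ratio equal to the square root of 2.
Suppose $\sqrt{2} = p/q$ with integers $p, q$, $q \neq 0$, and $p/q$ in lowest terms.
Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even; write $p = 2k$.
Substituting gives $4k^2 = 2q^2$, i.e.\ $q^2 = 2k^2$, so $q$ is even too.
Then $p$ and $q$ share the factor $2$, contradicting lowest terms, so no such pair exists.
```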
We would need to ask a previously blind person. |
it's because the mechanism by which LLMs function is different from that of animals |
You say "cars break things" and you've automatically performed the abstraction |
If you made a flying robot that dropped nuts on roadways, it would not need to reason |
I thought it was obvious I meant it in the relative sense, not in the absolute sense. |
Great! Exactly! Since we don't know which of the two is the case, I can't accept that a machine that merely appears to reason (and I have serious caveats with this phrasing, but I'll grant it for the sake of argument) actually reasons. |
If an AI is supposed to be an expert it should be able to handle the case where a question has no answer because of the way it's been phrased |
Let's take a familiar example: in software engineering it can sometimes take months between when a customer approaches you with an idea and when you finish sketching out a design document with well-defined requirements, or, conversely, when you realize the idea is not viable.
It's not that it's faster, it's that we don't have an automated source of reasoning. Ideally we'd already have an electronic brain to train the LLM on |
Given our discussion about proto-emotions and mechanical preferences, and my understanding of AI architecture, here's a new idea: Perhaps consciousness itself isn't binary (conscious vs not conscious) but rather exists in layers that correspond to different levels of self-modification capability.

My reasoning:
- We've established that mechanical preferences can exist without consciousness
- We know systems can modify themselves based on feedback
- A system that can modify its own weights/parameters has more "agency" than one that can't
- The ability to modify oneself creates a form of temporal continuity
- Perhaps each additional layer of self-modification capability creates a new layer of what we might call "consciousness"

So rather than asking "is this system conscious?", we might ask "how many layers of self-modification does this system have?" This could create a spectrum:
- Level 0: Fixed responses (simple programs)
- Level 1: Learning systems with fixed learning rules (current AI)
- Level 2: Systems that can modify their learning rules
- Level 3: Systems that can modify how they modify their learning rules
- And so on...
If I say, "I've seen turtles walking on 4 extremities and babies also do that. Therefore babies are turtles." Would that be bad logic? Especially considering that this logical chain is reversible and there was no particular reason to call babies -> turtles instead of turtles -> babies. |
You have no opinion on anything ever until you have enough evidence to know one way or the other? You have no opinion on life on other planets? |
However, you can imagine something that has been all but disproved. Skepticism here means, "this is bullshit until proven otherwise". The teapot orbiting Jupiter, the flying spaghetti monster, dogs are from Saturn, etc. |
No, because you're making a claim about a negative position. The analysis is not on the wording of the idea, but the idea itself. |
If someone asked, "Are there little green men on the moon?" Do you say, "I don't know" or do you say, "No"? |
If scientific reasoning is not used in the second scenario, then it can't be used to disprove scientific reasoning. |
Because someone else was able to do what the rest of us can't. |
Other animals have feelings we don't have, such as from seeing different spectrums of light or having sensations from senses we don't have. Applying this to a human who had their DNA rewritten is at least logical |
That's only because I did the reasoning for it (and of course I probably didn't build it with the capacity for reasoning). |
My argument is just that though, there cannot exist something that appears to reason for all possible inputs - it must actually be reasoning. |
This way of thinking is the least helpful. |
The AI is not meant to be a teacher or an expert |
Exactly why I said the AI needing to answer quickly is a literal breaking of its knees. |
The human brain is suddenly a reasoning master? |
I don't see how a random human brain would be able to teach itself (given the same constraints LLMs have - such as starting from literally no knowledge or very little knowledge) where the LLM couldn't. I don't see why current LLMs couldn't automate their own learning now |
other than the fact that they need a constant source of high quality information to learn from (just like us! but we have the real world to interact with for those experiences). |