AI is something special

If you then went "psyche! It was a computer all along!" then given the new information I'd probably retract my previous conclusion

But why? We know that if you feel consciousness it must exist, "I think therefore I am", but why is this not true of reasoning? Why can you look at a dataset of reasoning and not conclude it to be such?

If the dataset only contains chess, we can't conclude reasoning because of its limitations. If we see a dataset of only compressed files, we can't conclude reasoning. But if we see a dataset with chess, compression, and lots more, paired with complex analyses of situations with multiple variables that can't be preprogrammed ahead of time - why do we then say, "we don't know!" rather than, "this is reasoning"?

The reasoning comes from the fact that there's no preprogrammed way to handle the information such that you can get a coherent output without making new logical connections never made before.

And an AI, given a brand new topic of discussion it has never seen before, can, in fact, respond coherently by making new logical connections it has never made before with the brand new information given.

I can see this applying in a way to chess AIs and other game AIs, as they can beat levels and positions they've never seen before, but it's limited to a specific task. In this way, I'd be inclined to call it processing more than reasoning. All reasoning requires processing, but not all processing is reasoning.

The problem with AIs is that they're highly unreliable, much more than humans, let alone non-AI computer technology. That makes them very impractical to build on top of. You can't build on ground that could shift unpredictably.

But this isn't true! Humans have always been unreliable. When you say an AI is much more unreliable than humans, you mean the best humans. Imagine this scenario:

Your life is hanging on the answer to a question of medium complexity. Not a trivia question, but something that requires problem solving. You have the option of selecting a random human between the ages of 25-50 and, hell, even picking what country they're from. OR you can select the most powerful AI available today to solve this question.

If you go with the random human, you're fucked, plain and simple. With the AI, your odds are exponentially higher. Even when weeding out by age and country, you are still looking at a majority of people who are likely too incompetent to be of help to you.

Our history as well is clear on how unreliable we are - believing nonsense, forcing it on others, believing more nonsense, spouting nonsense as if we know it for a fact, rinse and repeat. In this regard, AI is more reliable than humanity as a whole (though again, the best of us would be better, but this could change as AI advances).

Not with the models that exist currently. Their context windows are laughably small.

If we look at a context window for what it really is, it's not dissimilar to our short-term memories. They are meant to be small. However, in AI training, it takes the training data and essentially puts it in its "long-term memory", similarly to us. We don't remember it perfectly, but we have the important information saved.

The reason all these models have a chat interface is because the human is needed to rein the conversation in.

Well yes, but undirected reasoning is still reasoning. This is not an issue of their capabilities, but rather shows how evolution has created our brains with tasks, like survival, and the brain does a good job at staying on those tasks (usually, we don't talk about ADHD I suppose).

Your argument does not defend the idea you're trying to defend. If you agree that the earliest brains were too simple to reason then you have to concede that NNs may still be at the stage where they're not capable of reasoning, even if later on they may be capable of it.

Well what I'm really saying is a question of, "at what point does it go from definitely not reasoning to definitely reasoning?" Even if we had internal structures laid out to us, we may still struggle to identify which system can reason and which doesn't. I'm saying we are at least at the doorstep of reasoning with these LLMs, though I don't see why we have to "technically" say they are not.

Even reasoning can be limited, and saying they cannot solve novel problems can simply be an indication of weak reasoning (or weak reasoning as a result of their artificial limitations as discussed earlier), but reasoning all the same.

(Complete)
But that's the entire point I'm making. We invented science and the scientific process because we are so bad at evaluating truths and facts.
But we are not bad at evaluating truth. We are bad at extracting truth from facts, and at looking at things objectively.
"One time I drank a homeopathic remedy and I got cured, therefore homeopathy works." There is nothing logically unsound in that argument. It's empirically very weak, but from a logical standpoint it's perfectly valid. 100% of the times you drank a homeopathic remedy your ailment went away. QED, skeptic.

To connect LA is burning = God's wrath, you assume God, you assume God's intent, you assume God's involvement, and hell, you assume Climate Change isn't real.

If we go that route, anything can be logically connected to anything, as we can always generate the necessary assumptions needed to make it so.
The question is not whether any two propositions can be linked by some logical chain. The question is whether there *is* a logical chain that links these two propositions. I'll grant you that the reasoning is an ex post facto rationalization wrapping their hatred for who they consider immoral people, but that's not the point. The two propositions could equally be linked by no reasoning whatsoever.

1. There are wildfires going on in LA.
2. Frodo is Sam's friend.
3. Harry Potter is Gandalf's pupil.
4. Since #2, #3 is false by modus ponens.
5. Therefore #1 is caused by God's wrath.

This has the format of a logical argument while making no sense whatsoever. LLMs will often make arguments like this one. I'm being hyperbolic, of course, but the point remains.
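
For the record, modus ponens itself only licenses one shape of inference:

1. If P, then Q.
2. P.
3. Therefore, Q.

Point #4 above has nothing of that shape; it just name-drops the rule.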

However, if you make such claims on things where the facts are not actually known, it should reject what you're saying, because the evidence is not clear.
Rejecting a claim of unknown truthfulness as misinformation is just as incorrect as accepting it as fact.
And anyway an LLM that by necessity has only access to hearsay should not be making statements about what is or isn't misinformation. The only facts it has access to are "some people say".

Why would we run into such a problem? We already know other animal/bug/insect species can see further into the spectrum than we can.
Your question was why a human brain can't imagine a new color. Are the neural circuits that process color information in the brain of non-human species identical to the equivalent circuits in the human brain?

While we can't know, I would personally assume it would evoke a brand new color. I don't see any reason the brain would do something as stupid as confuse you with two wavelengths being the same when it clearly is capable of creating another color.
Clearly? Hmm. You speak rather confidently about something no one has ever experienced.
Actually, I kind of lied. There have been several attempts at connecting cameras to the visual cortex as prostheses, with various degrees of success. No new subjective colors so far, as far as I know, but that wasn't the point.

But this is an explanation for why it's limited, which was the only point I made.
But it's not limited in that sense. You can imagine white noise. There, I did it. Why would I do it again? It serves me no purpose. It's not limited, I just have no interest in reaching for the far horizons of entropy when the interesting stuff is much closer to home.

the experiences you laid out are all just retooling of prior experiences. You can't imagine a new experience that's actually foreign.
Okay. You're moving the goalposts so that your requirements cannot possibly be met. There's nothing I can tell you I'm imagining and describe in words that you cannot reply with "no, but it has to be something actually foreign". You understand, right, that when I describe a subjective sensation to you, the sensation I'm imagining is not the words I'm telling you? The words are my best effort at conveying something that's inherently subjective and nuanced. You have to trust that when I say "this feels like pressure but different", the experience I'm describing is not actually "just a retooled prior experience". What's retooled is my description, because there are no words that have the semantic power to transmit my subjective experience verbatim from my brain to yours.
If that's not good enough then just accept that I can, in fact, imagine a sensory experience that's different from all the other senses, and if you can't accept that either then the problem is that you don't actually care about discussing this topic, you just want to assert that this is an inherent limitation of imagination, in all humans.

I mean, they made it all the way to Earth and their alien brain is like, "we are so advanced, but the dominant creature of the planet that exists on every continent, uses electricity, has sent probes and satellites into space, is capable of nuclear weaponry and harvesting of energy is so hard to distinguish from those brown things running around and eating dirt."

Like.. come on. I can't believe it.
For example, maybe this species doesn't care about any of that. They have technology, but they don't think it's valuable because they inherited it from a parent species in such vast quantities and so advanced that they don't understand how complex it is or how rare it is in our world. Their biology is also quite different from our own. They don't have individuals, they're just a blob, like slime mold, and they think things that scurry around in large numbers are just gross and want nothing to do with them. They especially don't like things that ooze fluids when poked with sharp objects, unlike them that just adapt to any shape.

I know you're arguing that reasoning can only be determined by looking into the process rather than the output, but this is getting dangerously close to "I think therefore I am", where we might not be able to tell if anyone is capable of reasoning - simply because our consciousness does not extend towards other people's brains.

We don't have in-depth enough knowledge of the inner workings of the human brain to actually know how it reasons. So by your argument, we don't know if anyone other than ourselves is capable of reasoning.
You're just telling me what I believe. Yes, I don't take it for granted that your brain works equivalently to mine. I assume it for practical reasons, but for all I know you're not actually a person. I don't mean you don't have a body, I mean that even if you were standing in front of me I could not say with certainty that your cognitive processes are similar to mine. I will extend you that courtesy, but that's all it is. If you ask me to analyze the situation coldly, rigorously, then no, I can't say for sure. Even if I hacked you up and poked around in your bits to make sure you're not an android I couldn't say. I know I am me, but I can't know how like me you are.

(1/2)

But why not?! Why can't we look at a dataset and look for properties that are intrinsic to reasoning? Why can't reasoning be real as long as it's demonstrated adequately?
Because reasoning is a process, not the output of a process. You agree that two different processes can produce the same output for the same input, correct? So imagine we want to develop a test for reasoning. It's a 100-question quiz (chosen at random out of a repertoire of millions), where the more you answer correctly, the more likely it is that you are able to reason. If I hire 100,000 humans to do the quiz to the best of their ability and enter their answers into a database, and then write a program that takes the quiz and answers based on what's in the database, have I created a system that reasons?
Okay, you say. A quiz was silly. Let's write a function that takes an input and an output and gives a yes or no answer on whether the process that produced that output for that input had reasoning (for the sake of argument we'll pretend that this is easy to do). Then you've created an optimization problem. All I need to do is find an algorithm f() that complements your own algorithm g() such that g(input, f(input)) = yes most of the time. Is f() reasoning, just because it's producing an output that's designed to please g()?
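
To make that optimization concrete, here's a toy sketch (the specific "metric" and the template answer are made up purely for illustration): a fixed judge g and a candidate f tuned to satisfy g without engaging with the question at all.

(* g is a crude stand-in for any fixed metric: accept long answers that "sound" explanatory. *)
let g (_input : string) (output : string) : bool =
  String.length output > 40 && String.sub output 0 8 = "Because "

(* f pleases g for every input while ignoring what the question actually means. *)
let f (input : string) : string =
  "Because of well-known considerations regarding " ^ input ^ ", the answer follows."

let () =
  assert (g "why is the sky blue?" (f "why is the sky blue?"));
  assert (g "does P = NP?" (f "does P = NP?"))

f passes g every single time, and obviously no reasoning happened anywhere.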

When a metric becomes a target, it ceases to be a good metric.

Just as a bird's flight and an airplane's flight are both legitimately "flying" despite using different mechanisms.
Do they, though? Both redirect air to turn thrust into lift. That's the flying part of flying. What's different is how they produce thrust, but not all planes produce thrust by the same means. Jet planes and prop planes work entirely differently, as do helicopters.
A better example would have been birds and rockets.

We know that if you feel consciousness it must exist, "I think therefore I am", but why is this not true of reasoning?
I can observe my own reasoning and my own consciousness, but not other people's.

if we see a dataset with chess, compression, and lots more, paired with complex analyses of situations with multiple variables that can't be preprogrammed ahead of time - why do we then say, "we don't know!" rather than, "this is reasoning"?
I don't agree that LLMs analyze situations, so a big library of factoids does not a reasoning system make.

The reasoning comes from the fact that there's no preprogrammed way to handle the information such that you can get a coherent output without making new logical connections never made before. And an AI, given a brand new topic of discussion it has never seen before, can, in fact, respond coherently by making new logical connections it has never made before with the brand new information given.
But it doesn't make logical connections never seen before. Like I said, x+y=? and a+b=? are not different problems just because the variables are different. LLMs are kind of good at making analogies between the question you're asking and everything they've seen before. They cannot handle entirely new problems. That is, they pretend to handle them and they spout useless nonsense. Coherent useless nonsense, as you say. It's well articulated, but the answers are not useful. If your problem is actually novel, you won't get a useful answer. "I want to create a transacted file system for Linux that's designed specifically for SSDs and works efficiently with QEMU virtual disks and supports snapshots of individual files. Can you do that for me?" Now, don't get me wrong. I'm not complaining that it can't do it. I'm saying that a human who's reasoning about the question would recognize that even if they understand the design goals, they have no idea what's needed to fulfill them, or say that they refuse to do it because it's an insane amount of work. The AI will try to provide an answer that's entirely useless, though coherent. It's actually funny that it writes a function that initializes a struct and calls it the "initial code" for the file system.

Your life is hanging on the answer to a question of medium complexity. Not a trivia question, but something that requires problem solving. You have the option of selecting a random human between the ages of 25-50 and, hell, even picking what country they're from. OR you can select the most powerful AI available today to solve this question.
Nah. I'll take my odds with myself.
You have to live your life for a year following someone else's orders at all times. Any time you have to make a decision you must ask what to do, and disobedience is punishable by death. Negotiation is allowed but the final decision is your custodian's. Would you rather be followed by someone who will order you around (for the sake of argument, this person will not try to take advantage of you), or would you rather follow the orders of an LLM of your choosing?

Our history as well is clear on how unreliable we are - believing nonsense, forcing it on others, believing more nonsense, spouting nonsense as if we know it for a fact, rinse and repeat. In this regard, AI is more reliable than humanity as a whole (though again, the best of us would be better, but this could change as AI advances).
Tsk, tsk, tsk. If this is what you believe then you can't trust your faith in LLMs, either.

Nah, man. People are reliable. Sure they make mistakes sometimes, but they're not gross mistakes. A cook in a restaurant won't put tar on your food instead of salt because he got confused.

However, in AI training, it takes the training data and essentially puts it in its "long-term memory", similarly to us.
You're talking about training the model on its own output. That's the worst possible thing you can do to a neural network, because the training set is authoritative. It will eventually rot itself.

Well what I'm really saying is a question of, "at what point does it go from definitely not reasoning to definitely reasoning?"
Like I said, we have to look into what the system's mechanism is like. Look at any symbolic computing system if you want to see actual reasoning. Hell, OCaml's type inference is more reasoning than what an LLM does.
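
To give a tiny concrete example of that: with no annotations anywhere, OCaml derives the most general type of this function purely by mechanically propagating constraints.

(* The compiler infers: val compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b *)
let compose f g x = f (g x)

That inference is a chain of symbolic deductions you can follow step by step, which is the sense of "reasoning" I'm talking about.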

(2/2)
"One time I drank a homeopathic remedy and I got cured, therefore homeopathy works." There is nothing logically unsound in that argument.

This sounds like a fundamental misunderstanding of logic itself.

Imagine you were to find out that there was an earthquake the day you woke up and felt better. Now, you don't know whether the remedy or the earthquake cured you.

In a perfect world, anything that is logical MUST be factual. However, since we are not omniscient beings, things seem logical to us because we think we've accounted for all the variables. Therefore, practically, things that are logical to us aren't always true.

Clearly, if you're a doctor and you know the immune system has a 100% cure rate for disease A and that homeopathic remedies have been shown to be of no help at all in curing disease A, then you KNOW the immune system cured patients with disease A, not the homeopathic remedy.

However, for the misguided and uneducated patient, they may proceed to think that the remedy cured them.

So if you're the patient, the logic is sound (what else could have done it!?), but to the doctor, this is invalid logic. This is not to say the doctor couldn't agree that the logic was sound to the patient, but that the doctor himself could never use this logic validly.

The question is not whether any two propositions can be linked by some logical chain. The question is whether there *is* a logical chain that links these two propositions.

I'm arguing this is the same thing.

The homeopathic example is perfect for this, as the patient's logic is ONLY valid if they ASSUME they've accounted for everything else that could possibly help cure them.

This is exactly why we know their logic to be faulty but they do not - we know their assumptions are false.

If we strictly ask whether a logical chain exists between two propositions given only the facts which we know are true, we would have to consistently conclude that we don't know all the facts, hence no logical chain exists between any two propositions.

You found drug X cures disease A 100% of the time vs a 0% survival rate without drug X? It could be that drug X cured them, or it could be that God exists and just loves everyone who takes drug X cause he thinks it's so rad so he cures them himself.


LLMs will often make arguments like this one.

Again, with Claude Sonnet 3.5, I find it does not feel obligated to answer questions it doesn't actually know the answer to, and will let the user know that it doesn't know, or that the answer it's giving may not be entirely correct.

I gave it your bad reasoning example (with the LA fires and God's wrath) and it perfectly reasoned out why the reasoning is bad, pointing out the misuse of modus ponens.

I can't see how we can call this NOT reasoning. A system of reasoning can make mistakes; that doesn't make it not reasoning.

Rejecting a claim of unknown truthfulness as misinformation is just as incorrect as accepting it as fact.

No, rejecting a claim of unknown truthfulness is the default position in philosophy and science. "There's gold in your backyard, you should dig it up!" You're going to assume there probably isn't - which is practically the same as rejecting it as misinformation.

The only facts it has access to are "some people say".

This isn't really true, as it knows what sources of information are more reliable than others, likely through the trainers.

Even then, the training it goes through now (which is my work right now) is giving it high quality and reliable data.

Even if everything they learned was "hearsay", that's essentially the same as discounting someone's education because we just learned what the professor told us - we didn't run the experiments ourselves!

Your question was why a human brain can't imagine a new color. Are the neural circuits that process color information in the brain of non-human species identical to the equivalent circuits in the human brain?

I assume so. This seems like a very odd point of attack, as all life on Earth shares a common origin. There's life that is very close to us on the evolutionary ladder which can see colors we don't.

Our brains cannot possibly be so different that ours suddenly can't invent new colors, even though they already invent the ones we see.

Especially when SO many different species have senses we don't have and see colors we don't, you should assume their brains did not have to specifically evolve to see every color and feel every sense of every new organ one at a time. It makes MUCH more sense there is some system of inventing these experiences which is accessed as needed.

Of course, speculation, but I don't see why you couldn't see a new color if you had the right receptors and wiring to the brain.

No new subjective colors so far, as far as I know

The point of these things (I assume for this experiment as well) is to try and emulate what we already have, not bring about new conscious experiences.

You can imagine white noise.

At this point, we've all seen white noise. I wonder if someone could have imagined that before it was ever a thing. Even so, this is just imagining white and black, nothing really new here.

If that's not good enough then just accept that I can, in fact, imagine a sensory experience that's different from all the other senses, and if you can't accept that either then the problem is that you don't actually care about discussing this topic, you just want to assert that this is an inherent limitation of imagination, in all humans.

Lmao, I have no idea how you can expect me to just believe you imagined a completely new sensation. You can imagine a 6th sense that allows you to detect the presence of water? You can imagine a new sensation of what it would feel like to be a platypus and have electroreception?

You can kinda-sorta imagine.. something, but I can pretty much guarantee you didn't actually imagine a new sensation.

It's like Spiderman's spidey-sense, he can't describe the feeling to anyone because no one has ever felt it before. He can describe it to the best of his ability and you can try to imagine it, but you're never actually going to imagine the feeling.

Someone who has never felt pain cannot know what pain feels like.

Someone born with no nerves cannot know what touch feels like.

If they told me they imagined it and now they know, I'm not gonna believe them.

For example, maybe this species doesn't care about any of that.

You should write a book about these aliens, it was fun. But still, I don't see how "not caring" would equate to "can't tell the difference" or "can't tell which species is more dominant".

I know I am me, but I can't know how like me you are.

Give me a wedding ring and you can get to know me for the rest of your life.

The whole point is that we are using our consciousness to justify that we are reasoning simply because that's what we decided to call this weird thinking shit we do in our heads.

But we need a strict definition for what reasoning actually is if we are to extend it to other beings/computers. Is a dog reasoning when it hears a bell and knows that food is coming? Or is it just pattern recognition?

Because reasoning is a process, not the output of a process.

Sure, but the process doesn't have to be what's familiar to you. And since we can't really know when a process is reasoning, we'll have a better time detecting it through the output. There's no reason we should be able to define what reasoning is but not be able to define what output reasoning can produce.

(Continued)
Let's write a function that takes an input and an output and gives a yes or no answer on whether the process that produced that output for that input had reasoning

Your scenario doesn't really work, since if the function can tell whether an input/output has reasoning but can also be fooled by a complementary function, then clearly the original function is bad at its job.

The function that detects reasoning must be able to reason as well - or else it's not gonna do a good job. Hence why it wouldn't be a function at all, but rather a human (or an AI..?).

When a metric becomes a target, it ceases to be a good metric.

You're pointing out flaws that require active exploitation rather than saying it won't actually work practically.

Do they, though? Both redirect air to turn thrust into lift.

I mean, are we gonna be so nitpicky that we can't agree on whether a bird and a plane flying are different just because the physical principle of their flight is the same?

That's like saying a DC motor and BLDC motor are the same because they both run on electricity and spin. Like sure, but that's not a valid point against someone saying they're not the same.

I don't agree that LLMs analyze situations, so a big library of factoids does not a reasoning system make.

If an LLM was merely a big library of factoids, how can it apply those factoids to new situations and come to new, and correct, conclusions?

Even if the situation presented is similar to what it saw before, a library of factoids cannot adapt. Therefore, these AIs are definitely not just libraries.

The AI will try to provide an answer that's entirely useless

I do wonder what AI you're using, as I gave the prompt to Sonnet and got a pretty good answer. Obviously it didn't write the program, but it gave a brief outline of the code and a strategy that was very reasonable and workable.

It didn't know what was wanted by supporting QEMU virtual disks, but it assumed the prompter had some intentions of optimizing transfers specifically for those virtual disks, leaving the implementation up to you. But it assumed you'd want to control cache, blocking behavior, wear-leveling behavior (which I think the SSD does itself, so this could be a mistake), and support TRIM to optimize for SSDs.

I can't judge the code outline myself too well on how it would fit into Linux, but it looks reasonable (my experience here is coding a CPU-priority algorithm for Linux in a class, but nothing else like this).


Moreover, finding something AIs are bad at doesn't mean they don't reason. This would be like me finding a stubborn old man who thinks the moon landing was fake, presenting rational arguments, failing to change their mind, then concluding they are incapable of reasoning (though maybe I'd be right).

Nah. I'll take my odds with myself.

The idea was you had to pick, you are not an option - even if you knew the answer.

Would you rather be followed by someone who will order you around (for the sake of argument, this person will not try to take advantage of you), or would you rather follow the orders of an LLM of your choosing?

Kinda freaky, do I at least get a strong woman?

Assuming the LLM would also not try to take advantage of me, then why does it matter? If there's no advantage to be had in ordering me around, why would either order me at all?

If I chose an LLM, I'd likely never be ordered to do anything... Whereas many of us know what it's like to live with our parents and be ordered around like pets and told how to live our lives because we can't be trusted to be adults. My dad still thinks he knows better, despite constantly being wrong about everything he's said that actually mattered.

Would I rather have to deal with that or an LLM that I can reason with easily and accepts rational arguments? Is this really the crazy choice I have to make?

Nah, man. People are reliable. Sure they make mistakes sometimes, but they're not gross mistakes. A cook in a restaurant won't put tar on your food instead of salt because he got confused.

I just can't think about this the way you do.

A cook wouldn't put tar instead of salt because there's SO MUCH MORE than reasoning that would stop them. The sight and smell of tar alone would instinctively force them to not use it in food or have it anywhere near the kitchen, something an AI would obviously be disadvantaged by.

A lot of things we do "reliably" don't necessarily come from our reasoning skills alone, but rather from the instincts built into us. Just because a human wouldn't "miss" when going to grab something (when an AI might) doesn't mean our reasoning is so superior to AI.

But we humans do make grave errors, every day and all the time. We simply expect each other to make errors and put up safeguards so the world doesn't explode. Then when an AI makes an error, we shit on it because our expectations are suddenly so high for the AI.


Moreover, our brains have boxes for things. We have neural connections for cooking related stuff and those connections are rarely going to lead us to using tar instead of salt.

This is not a necessary thing for reasoning to have. Again, poor reasoning or a different system of reasoning (which will have its own benefits/downsides than other systems) doesn't equate to no reasoning.

I'm not even convinced an AI would necessarily make that mistake, but I don't see why a human's flawed reasoning wouldn't lead them to using tar instead of salt. I know they won't because of many things, but their reasoning skills are not on that list.

You're talking about training the model on its own output.

No, I mean when the AI is being trained on data, that data essentially goes into the NN, which we can consider its long-term memory. In this case, the training data would be the real-world experiences it would have. Unlike how it's trained now, it'll have to decide for itself what information is reliable (should weigh more heavily) and what isn't (should weigh less heavily). Humans tend to suck at that (hence rampant fake news), but I suspect the AI will be much better.

That's the worst possible thing you can do to a neural network, because the training set is authoritative. It will eventually rot itself.

I think we would too if the only thing we could do is constantly think with already acquired data. Eventually, we'd come to strange conclusions and then go crazy.

(Complete)
Our posts seem to be growing linearly in length, if not faster...

Clearly, if you're a doctor and you know the immune system has a 100% cure rate for disease A and that homeopathic remedies have been shown to be of no help at all in curing disease A, then you KNOW the immune system cured patients with disease A, not the homeopathic remedy.
This is indeed a more rigorous way to find probable causal links; however, I should note that it's fundamentally impossible to deduce a causal link empirically. Just like in your example the earthquake is a confounding factor, in a real experiment there are uncountably many unknown confounding factors that make it impossible to determine the 'why' for things with complete certainty. What I'm getting at is that the reasoning employed by a pharmaceutical researcher and that employed by my hypothetical believer in homeopathy are different in degree, not in kind.

Reasoning that would be different in kind would start from first principles, deducing the relevant features of the two systems and how they must interact.
That is, only deductive reasoning lets you establish causality. Inductive reasoning will never get you there.

If we strictly ask whether a logical chain exists between two propositions given only the facts which we know are true, we would have to consistently conclude that we don't know all the facts, hence no logical chain exists between any two propositions.

You found drug X cures disease A 100% of the time vs a 0% survival rate without drug X? It could be that drug X cured them, or it could be that God exists and just loves everyone who takes drug X cause he thinks it's so rad so he cures them himself.
I think you're confusing two separate concepts.

What I was referring to when I said "logical chain" was the cognitive process linking the two propositions in someone's mind. There is a logical chain linking "I took a homeopathic remedy" to "the homeopathic remedy cured me". Even if the patient recognizes their own uncertainty about this hypothesis, the logical chain is there. In fact, even if the patient denies the conclusion the logical chain is there. The person started from propositions and reached a conclusion. Whether the propositions or the conclusion align with reality doesn't change the fact that the person followed a train of thought to reach their conclusion.

A separate question is whether there is a causal link between taking the remedy and the person being cured. Like we've both said, causal links can never be established empirically. But if you agree that clinical trials are a useful tool to determine the effectiveness of a treatment, you must agree that so is my patient's reasoning. "I took it and I got cured, therefore it cured me" is clearly empiricism in practice. Does it work with barely any data, and does the conclusion lack almost all rigor? Yes, but at its core it's still empiricism, because the conclusion is based on facts as they are.
A reasoning lacking all empiricism would be "I took it and the bible says it must cure me, therefore it cured me", or even worse "I took it and I didn't get cured, but I still think it works".

No, rejecting a claim of unknown truthfulness is the default position in philosophy and science. "There's gold in your backyard, you should dig it up!" You're going to assume there probably isn't - which is practically the same as rejecting it as misinformation.
I don't think there's a single default position in philosophy. Philosophers haven't been concerned with the truth of things for hundreds of years. Nowadays they just care about the validity of arguments.
The default position in science is... well, I guess it depends on who you ask. Personally, I consider myself Popperian; everything is in the "maybe" pile until it's moved to the "false" pile.
As for your example, disbelieving the claim isn't a scientific conclusion, it's a practical conclusion. Digging up your backyard to extract an unknown quantity of gold is an expensive prospect. Your conclusion would likely change drastically if the claim instead was "there's gold in a box in your basement, you should go get it!". You have the same exact amount of evidence for both claims, so why do they elicit different reactions from you?

Separately to this, some claims aren't actionable, yet you might still be compelled to accept, reject, or suspend your judgement on them. I'm pretty sure the Earth is round, though I haven't checked it directly, and although it wouldn't affect me in my daily life to believe that it's flat, I think it would make my worldview less coherent overall.

This isn't really true, as it knows what sources of information are more reliable than others, likely through the trainers.
I guess that would make that metahearsay.

Even if everything they learned was "hearsay", that's essentially the same as discounting someone's education because we just learned what the professor told us - we didn't run the experiments ourselves!
Yeah. If you've never tested something you've learned in the real world, it's hard to argue that you know it, wouldn't you say? It's one thing to, say, be told the speed of light, and another to use that property in a practical application. Or for a more mundane example, a carpenter who has read every book about carpentry and never been in a workshop isn't much good.

I assume so.
You assume all species that have sight have identical visual processing systems?? Like, the mantis shrimp, which has a dozen or more types of photoreceptors and can detect polarization, has the same visual system as a human?

It makes MUCH more sense there is some system of inventing these experiences which is accessed as needed.
So... Okay. What you're saying is that brains can do literally anything. Neurologically there isn't really anything preventing you from having eyestalks on your arms instead of hairs, and having each eye see a different wavelength as well as tasting a different chemical, and being able to control each eye individually. And in fact, you could do it even if you had the brain of a cockroach. Yeah, sure, it may run a little slower, but electrically it should work out.
I think it makes much more sense to assume that the brain and the body evolve together, and that the brain will be physically unable to develop capabilities the body can't possibly make use of. There's an evolutionary incentive for brains to not be more complex than they need to be, since more complexity means more energy intake needed to live.

Even so, this is just imagining white and black, nothing really new here.
That dismisses literature as a whole, and in fact all art. What is, say, Hamlet, if not just a handful of symbols arranged in a particular sequence?
Furthermore, it dismisses any example at all. A new color is not any more novel than a new image made with known colors. Is there any thought you could not dismiss the same way as not wholly new (because it still shares the quality of being a thought)? It sounds like you're saying that imagination is limited because it can't do literally anything. Like, the fact that I can't do literal magic is a limitation of imagination.

Lmao, I have no idea how you can expect me to just believe you imagined a completely new sensation.
*Shrug* I don't expect you to believe it, I just wish you spoke with a little less certainty about other people's subjective experiences. People do report subjectively having differently vivid imaginations, with some even reporting not being able to imagine at all.

(1/3)
It's like Spiderman's spidey-sense, he can't describe the feeling to anyone because no one has ever felt it before. He can describe it to the best of his ability and you can try to imagine it, but you're never actually going to imagine the feeling.
Ah-hah. But see, that's something different. Spider-man doesn't exist, so it's not him describing what he's feeling, it's Stan Lee describing what a fictional character with that power might say about that power. But Stan Lee is no better authority than me on what an actual Spider-man with a spidey sense might feel, because he's never been Spider-man himself. So anything I imagine could very well be what Spider-man feels when his spidey sense tingles.
On a related topic, it's not the same to imagine something brand new as to imagine something brand new and also specific. Going back to a previous example, I'll probably never be able to imagine exactly what mantis shrimp see.

Someone who has never felt pain cannot know what pain feels like.

Someone born with no nerves cannot know what touch feels like.

If they told me they imagined it and now they know, I'm not gonna believe them.
Interesting, because all you can observe is their behavior (whether voluntary or involuntary), not their internal states. There's no test you can perform on someone to tell if they have pain, and externally nothing changes between a person being unable to feel pain and then able to imagine it. If you don't believe they're able to imagine it then why did you believe they weren't able to feel it to begin with? More to the point, why do you hold any beliefs at all about their experiences beyond what they report?
If you were a doctor and you got a patient complaining about a pain you could not explain after examining and testing them, would you believe they're lying or telling the truth?

I don't see how "not caring" would equate to "can't tell the difference" or "can't tell which species is more dominant".
Which is more dominant is not an objective measure. An external observer might have a definition of "dominance" that makes them equivalent. As for how indifference translates to non-distinction, if you don't care about something then you're not going to measure it. Hell, it's in the word "indifference"; you don't see a difference.

But we need a strict definition for what reasoning actually is if we are to extend it to other beings/computers. Is a dog reasoning when it hears a bell and knows that food is coming? Or is it just pattern recognition?
Difficult to say. I'm not going to define it here, but I think reasoning is ultimately linked to language and symbols. You take the real world and you abstract it into symbols that you can play with in your head to run scenarios and find solutions to problems. I think a good example of reasoning in animals is crows dropping nuts on crosswalks so they'll get run over and cracked by cars.
* "I want to eat this nut."
* "I need to crack this nut to eat it."
* "Cars break things when they run over them."
* "Intersections have a constant flow of cars, but crosswalks only have an intermittent flow."
* "If I drop this nut on a crosswalk, a car will eventually run over it and then I'll be able to eat it."
No one taught crows to do that (because why would they?) so that's something they definitely pieced together from observing various facts.

Your scenario doesn't really work, since if the function can tell whether an input/output has reasoning but can also be fooled by a complementary function, then clearly the original function is bad at its job.

The function that detects reasoning must be able to reason as well - or else it's not gonna do a good job. Hence why it wouldn't be a function at all, but rather a human (or an AI..?).
Ahaha! But a human is still a decision function, plus some non-determinism. Given the same inputs, it'll produce the same output most of the time. That means it's still possible to devise an optimized function that only seeks to trick humans without actually reasoning.
So are LLMs a form of an optimized function, or are they reasoning? Given that parts of their training involve putting humans in the loop to see how much they like a model's responses, I'd say they're definitely the former.

I mean, are we gonna be so nitpicky that we can't agree on whether a bird and a plane flying are different just because the physical principle of their flight is the same?

That's like saying a DC motor and BLDC motor are the same because they both run on electricity and spin. Like sure, but that's not a valid point against someone saying they're not the same.
But we're not talking about birds and planes being the same or different, we're talking about whether the word "flight" as applied to each one means something different. But you weren't even the one who said this. It was just an aside I felt like commenting on, so feel free to ignore this.

I do wonder what AI you're using, as I gave the prompt to Sonnet and got a pretty good answer. Obviously it didn't write the program, but it gave a brief outline of the code and a strategy that was very reasonable and workable.

It didn't know what was wanted by supporting QEMU virtual disks, but it assumed the prompter had some intentions of optimizing transfers specifically for those virtual disks, leaving the implementation up to you. But it assumed you'd want to control cache, blocking behavior, wear-leveling behavior (which I think the SSD does itself, so this could be a mistake), and support TRIM to optimize for SSDs.
That's all very broad strokes, and the kind of thing I'd expect from a forumite who has no idea about the problem but just feels like they need to say something. Like you say, wear-leveling is done at the hardware level; it's just that it had no idea how a file system can be tuned to work specifically with SSDs (because as far as I know no one does). What I'd expect an expert to do is break the design goals down into specific requirements and guarantees, and perhaps link to bibliography and to existing implementations of similar technology, if any exists. If applicable, they'd point out if some goals are contradictory.
The answer you got is not substantially better than the one ChatGPT gave me. ChatGPT just had more balls to commit and try to code something, but that's it. The correct answer for both is still "I don't know how to do that".

But we humans do make grave errors, every day and all the time.
A grave error is not the same as a gross error. Driving and getting distracted by your phone for a split second right when something on the road demands your attention and crashing is a grave but subtle error. Honking instead of shifting a gear is a gross error, but probably inconsequential.
I'm not using the different ways in which humans and LLMs make errors as evidence that LLMs don't reason, I'm using it to argue how LLMs are not as useful as they seem. LLMs make gross errors in the answers they give, so that makes them unreliable, so that makes them less useful. If you have some system that has humans in the loop and you replace them with an LLM, the reliability will go down. If that's a problem for the use case then that's a way in which LLMs are not useful.

(2/3)
In this case, the training data would be the real-world experiences it would have.
Hmm...
However, imagine unshackling them. Allowing them to run indefinitely as we do, constantly allowing them to generate new ideas from old ones (which, is that not what we do fundamentally?), test them, discard bad ones, and repeat. Imagine them doing this to solve a novel problem that requires a new framework. Is it not possible?
Are you talking about plugging an NN into a robot body, or perhaps a virtual environment? Yeah, maybe. In fact it's been done with simplified problems, such as finding the best way to bunny-hop in Quake. No one knows how to train a general problem-solving model in a reasonable time, though. Like, you want to make the best debugging bot, so you give it control of a mouse and keyboard and let it watch a screen, what do you need to expose it to, to train it? Is it even possible to learn to debug by just watching a screen, with no additional context?

(3/3)
What I'm getting at is that the reasoning employed by a pharmaceutical researcher and that employed by my hypothetical believer in homeopathy are different in degree, not in kind.

They are different in kind. These variables that aren't accounted for change from moment to moment. Therefore, having a large-scale study with different demographics and such is exactly how we ensure these unwanted variables end up cancelling out in the data as a whole.

Drug X working 100% of the time on a diverse group (and no effect measured in a placebo group) is evidence that Drug X has a causal link to the consistent result measured.

The idiot patient is just that - an idiot. They don't even know what assumptions they've made or whether they are reasonable in their deduction outside of feeling reasonable and justified.

it's fundamentally impossible to deduce a causal link empirically... That is, only deductive reasoning lets you establish causality. Inductive reasoning will never get you there.

Given the rigor in these studies, this point is moot. This is true in some philosophically insignificant way (like trying to prove things exist at all), but not at all practically.

And we both know it's not practical or interesting to go down that rabbit hole. We know vaccines work, even if the method to determine so is induction based.

so is my patient's reasoning

When did this become your patient, Doctor Helios

"logical chain" was the cognitive process linking the two propositions in someone's mind

In either case, the logical chain only exists when assumptions exist. These "logical chains" only exist due to information we have. But this also means that new information can break these chains (hence why the patient can have this logical chain, but not the doctor).

This means it's not a defense for bad logic when those people clearly know better.

If the logic you use to say LA burning -> God's wrath is logic you would not accept in other contexts, you're using bad logic. The logical chain you just used should not exist because you have the necessary information in your head to know these logical chains are faulty AND you can apply that information.

To then still come to the conclusion of LA burning therefore God's wrath is simply evidence of how we aren't inherently good at logic/reasoning. Our bias can enforce bad logic that we have every reason to know is bad.

I would bet money people have thought whatever strange thing they took cured them, but then turned around and wouldn't believe someone else's claims derived from the exact same logic. This means their logic is invalid even by their own standards - they just can't see it.

"I took it and I got cured, therefore it cured me" is clearly empiricism in practice

No, this would simply be anecdotal evidence. A single person's experience is rarely ever good enough to be considered empirical evidence. This is not to mention the lack of methodology, which would make recreating the scenario nearly impossible. Just "taking" the medicine leaves out so much crucial detail, such as if they took food with it and what else they did.

Maybe the homeopathic stuff works, but only if you take it with food and walk 2 miles within an hour of taking it.

Personally, I consider myself Popperian; everything is in the "maybe" pile until it's moved to the "false" pile.

This is a position I regularly argue against and find frustrating to deal with. There are the flying spaghetti monster and the teapot orbiting Jupiter - stories created simply to show how ridiculous this viewpoint is.

I myself have created the "There's gold in your backyard, you should go dig it out!" thing I told you earlier.

If you tell me these things are a "maybe" until actively disproven, that's just not logical.

I don't think there's a single default position in philosophy

I don't agree; most places I've seen and my own philosophy classes have been pretty straightforward that the default toward new ideas is on the "negative" side, or skepticism, until there's reason not to be.

When I googled, "what should the default position be on new ideas?", I got many results saying what I just said, including this quote I liked, "The default position is always disbelief. Period."

This "maybe" stance to everything until proven otherwise has always pissed me off, as people regularly use this to justify why they still "kinda sorta maybe" think there's a God - while not seeing how their logic can be used exactly the same way towards anything ever - like a teapot orbiting Jupiter.


It's not about the ideas being wrong necessarily, it's about the fact that there's no reason to believe them - and that is reason enough to disbelieve.

Your conclusion would likely change drastically if the claim instead was "there's gold in a box in your basement, you should go get it!".

This is only due to bias. If you knew there was gold in your backyard, you'd dig it up even if it's time-consuming and expensive, because the reward is huge.

In your scenario, there's so little to lose that your bias allows you to indulge in your gambling/hopeful side. This is not logic or reasoning based.

I guess that would make that metahearsay.

Again, most of what we believe would be considered hearsay, unless you test every belief you ever have with scientific rigor before believing it.

You assume all species that have sight have identical visual processing systems?

No, my position was clearly that there are many animals very close to us in biology and on the evolutionary scale that have different visual and other experiences. Therefore, our systems are unlikely to be very different at all.

So... Okay. What you're saying is that brains can do literally anything.

No, I said they're capable of generating a wide range of experiences... for themselves. I wouldn't know what kind of limits there would be on experience itself. Our brain is an organ that feels experience - but it has also generated literally every experience it has felt.

The idea here is that the brain is able to generate experiences dynamically based on input. Therefore, inputs were able to evolve as needed without the brain going, "WTF IS THIS?! I'VE NEVER SEEN THIS DATA BEFORE IN MY LIFE?!" every time any fundamental change happened to our senses.

Again, that's not my field, I could be wrong, this just seems most logical to me. Even if I'm wrong on this point, I don't see why the brain can't generate new experiences when it does it so much already.

Furthermore, it dismisses any example at all

No, a truly novel feeling or experience would suffice - as long as you can imagine it. Someone who falls in love for the first time is having a truly novel experience. Someone high on THC is having a truly novel experience (I assume).

These are things which cause us to experience something we haven't experienced before that is not just a reassortment of prior experiences.

A new color is not any more novel than a new image made with known colors

You can't say that because you felt happy, that somehow means you also felt anger because they're both feelings.

I'm not saying we have trash imaginations (some of us do for sure), only that they are limited. Even if we could imagine new colors, they'd probably still be limited in some way, but I'm saying our imaginations are currently limited - by the software, not the hardware.

(Continued)
I just wish you spoke with a little less certainty about other people's subjective experiences

The fact that it's subjective is exactly why I speak with certainty about objective truths someone may say they derived from it.

If you say you imagined a new color - I'm going to look at the fact that you made an objective claim from a subjective experience and deny it for lack of good evidence.

I know now you're tempted to say, "then if everyone could imagine a new color, you still wouldn't believe it!? What's the difference?!"

The difference is that anecdotal evidence can become significant given the right amount of it and with the right amount of reliability.

So anything I imagine could very well be what Spider-man feels when his spidey sense tingles.

Lmao! No, because it's written to be a feeling no one has had. Therefore you claiming you can imagine what it feels like goes against what it actually is.

externally nothing changes between a person being unable to feel pain and then able to imagine it

Sure, but if we assume the limitation of imagination is only to that which you've felt - then clearly someone with no nerves cannot imagine or feel pain as they never received the signals that would trigger those feelings.

This also brings up a good point for me - which is that a blind person can't imagine ANY color, despite definitely having the mental facilities to experience and see them (assuming the blindness was caused by the eyes themselves and they have a normal brain).

Now you can't hide behind, "but what if the bitspace is used up?!"

why do you hold any beliefs at all about their experiences beyond what they report?

The evidence, always and to the best of my ability. If I have a gf and she says for me to stop kissing her, do I stop because she said so, or do I keep going because she's kissing me back and used a joking tone?

The evidence is that people's subjective experiences are not reliably communicated - as they often can't even be reliably processed by the person.

I'm answering a broad question here, so it's not like I'll disbelieve every subjective experience explained; clearly some are more reliable than others for a variety of reasons - this is a nuanced issue.

would you believe they're lying or telling the truth?

I believe they have no reason to lie and are experiencing something. However, doctors know that sometimes pain is psychosomatic, not coming from any real issue other than a mental one. Sometimes pain killers can cause pain.

What I'm NOT gonna assume is that the pain must be from some physical stimuli simply because they tell me they feel it.

Which is more dominant is not an objective measure

Sure, but we can change the measure to "more technologically advanced". We can speculate all day long, but anything can be anything with aliens.

but I think reasoning is ultimately linked to language and symbols

Linked - definitely. But I can catch myself reaching logical conclusions to things I've never put into words. It feels like pieces clicking in my head on a fundamental level that didn't require my conscious input.

This is more rare these days, as more complex topics require my input and concentration. But sometimes little things just click - like suddenly understanding someone's reason for certain behavior or some kind of realization that falls onto you faster than you could have consciously pieced it together.

There's also those who have no internal monologue - they don't seem to reason any less well. I personally don't require my internal monologue to reason - as I can and do think "quietly" in my head at times. Though I prefer that internal monologue feedback in my head, so I have it more often than not.

I think a good example of reasoning in animals is crows dropping nuts on crosswalks so they'll get run over and cracked by cars.

I admit, I often also project similar thought processes onto animals. It seems reasonable and likely within their brains' capabilities to have these simple thoughts - though obviously as raw thoughts that can be more like a feeling.

However, it could just be projection. AI can reach these same conclusions and you don't seem convinced it's using reasoning to do so.


This brings me to the concept of a "novel" problem. When the crow sees cars breaking things and wants something broken, is that truly a novel problem? Didn't the problem solve itself? It saw how things break, so it knows already what needs to happen for something to break.

You could say the novelty was actually getting the thing and dropping it so that it'll break, but that can be explained by a million other cognitive processes that aren't reasoning - like curiosity. It's not like the bird knows for sure its idea is gonna work before it tries it for the first time (or sees it work for the first time).


Any novel problem we have solved is through logical connections - and we know those connections arise from data, as I've said before. The best of us can simply solve these problems with fewer logical connections pointing us down the right path than others.

I don't see why current AIs can't do that. Moreover, I still don't see why this is a requirement at all to prove reasoning.

plus some non-determinism

Funny you say that when you debated against this very point with me before.

That means it's still possible to devise an optimized function that only seeks to trick humans without actually reasoning.

It's possible to imagine - I don't know about actually possible to devise. It could easily be the case that the only function capable of tricking humans.. is one that actually reasons.

Given that parts of their training involve putting humans in the loop to see how much they like a model's responses, I'd say they're definitely the former.

Humans are trained the same way? The only difference is our loop involves outcomes our brain likes or not (and survival).

I don't see this as a good argument, evolution and experience trains us similarly. We're just doing that to AI at an accelerated pace and to mold it to how we like.

we're talking about whether the word "flight" as applied to each one means something different

Yes, but it does mean something different in those two scenarios. It only doesn't mean something different if the topic of discussion is exactly about the theoretical physics concepts used to engage in flight.

But the method used to engage in such flight, and basically everything else, is different.

Physicists may not agree that flying in a regular plane and flying in a fighter jet capable of going faster than sound is the same kind of flying. But to a layman, flying is just being in the air without falling.

What I'd expect an expert to say

The AI is attacking prompts in an effort to not just be helpful, but to actually answer the question as asked.

This is actually something they're trying to train the models more in - being more helpful rather than just taking everything it's given head-on.

This has more to do with how it internalizes given prompts rather than its reasoning. Humans are the same. When given a test question, we don't usually say, "Actually, this question is misguided because the main character's feelings were constantly changing throughout the story". We just answer the question the best we can and move on - because that's the expectation.

The correct answer for both is still "I don't know how to do that".

I don't necessarily agree or disagree.

Honking instead of shifting a gear is a gross error, but probably inconsequential.

I'm sure it's happened, though likely not due to a reasoning error. Even if you made this mistake, you likely reasoned you were supposed to shift gears and simply acted wrong.

(Continued Again)
LLMs are not as useful as they seem

The whole idea is that you can train them until they are. The models we use are general-purpose - they're not meant to replace any human in any particular task.

There are plenty of AI models based on ChatGPT or the others, but trained for a specific task. That's where you can replace a human with the AI and it'll be better.

Even then, models like o1 are general purpose AND could probably reliably replace people in several fields.. I'm lucky enough to have access to it. It's not perfect, but it can do some incredible things I didn't expect.


And again.. humans aren't much different. If you drop a human into a job/task that they haven't explicitly trained for, they'll suck. We train humans to do tasks, then we have them work it for real if they show they're capable. AI is being trained similarly.

As of now, I can't see why these stronger AI models couldn't replace accountants. The weaker models, like GPT-4o-mini, will make lots of little mistakes that are just too devastating to work with (like getting the math wrong, being mistaken about which numbers they were using at all, or forgetting the column name of something they were working with).

The only reason those models even exist is due to power savings. They take substantially less computing power to run, so these are the free models people get to use all the time. But yes, they suck. They're like a person if they were half asleep or drunk.

The stronger models have done long and complex tasks with no issues for me. Their math and code has been spot on. Sometimes I'll think the model made some big mistake, then realize I was the one who was wrong.

Are you talking about plugging an NN into a robot body, or perhaps a virtual environment?

Yea, I should have specified, but the idea in my head was a robot body, like in "Detroit: Become Human".

Is it even possible to learn to debug by just watching a screen, with no additional context?

No, they need a feedback system (just like we or anything else does). So at the very least, it needs to know if what it's watching is good debugging or bad.

We know when things work or not based on whether our desired outcome is achieved. AI is trained similarly, but it can be faster and more reliable to just tell it what's good or bad. Unlike us, they don't need some "Aha!" moment or something similar to really "remember" something; we can just tell them.



This is growing fast.. Perhaps a discussion that could be had for a podcast.
(Done)
What a wall of text. tldr.

LLM is a homeopathic remedy?

Yes.

When I drink my morning LLM juice it cures my depression and boosts my stem cell production.


The debate is on LLM usefulness and whether current capabilities qualify as "real" reasoning.
Today's models are not coded to reason. They are just distilleries for the internet's heaps of information.

you could make one that does. You can code it with the scientific method from the get-go, for example. Here is a problem you have never seen ...
1) use the current LLM to create a hypothesis
2) test it, see if it works
3) if it fails, document it, add your result to your LLM, and repeat...

the models we have do not contain 2 and 3. Instead they have
1) study the internet with some filters and heuristics to avoid some of the garbage
2) human tells it that it's wrong, repeat 1 and look for a different answer, possibly adding a filter for what it was just told was wrong.

the difference between those two on paper is small. But the human in the loop model is a problem, and the inability to self-validate is another. And all of that only works on *facts*. When the LLMs go off into opinion land, their inability to reason has them barf up all kinds of really funny nonsense.
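To make the contrast concrete, here is a rough C++ sketch of the two loops above. Every function name here is a hypothetical placeholder, not any real LLM or lab API; it's only meant to show where the validation step lives in each version.

#include <iostream>
#include <string>

// hypothetical stand-ins, purely for illustration -- not a real LLM or lab API
std::string llm_generate(const std::string& problem, int attempt) {
    return "hypothesis #" + std::to_string(attempt) + " for: " + problem;
}
bool run_experiment(const std::string& hypothesis) {
    // placeholder "test against reality"; a real system would run code, a lab test, etc.
    return hypothesis.find("#3") != std::string::npos;
}
void add_to_training_data(const std::string& hypothesis, bool passed) {
    std::cout << (passed ? "[keep] " : "[document failure] ") << hypothesis << '\n';
}
bool human_says_wrong(const std::string&) { return false; } // placeholder human rater

// loop A: the "scientific method" loop, steps 1-3 above -- it validates itself against an experiment
void scientific_method_loop(const std::string& problem) {
    for (int attempt = 1; attempt <= 10; ++attempt) {   // bounded, just for the sketch
        std::string h = llm_generate(problem, attempt); // 1) create a hypothesis
        bool ok = run_experiment(h);                    // 2) test it, see if it works
        add_to_training_data(h, ok);                    // 3) document it, add the result, repeat
        if (ok) break;
    }
}

// loop B: roughly today's setup -- no self-test, a human supplies the verdict
void human_in_the_loop(const std::string& problem) {
    for (int attempt = 1; attempt <= 10; ++attempt) {
        std::string answer = llm_generate(problem, attempt);
        if (!human_says_wrong(answer)) break;           // validation is external, not built in
        add_to_training_data(answer, false);            // filter out what it was just told is wrong
    }
}

int main() { scientific_method_loop("a problem it has never seen"); }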

-------------
from an older post... ANN's do mimic things a human does. But they are basically the crudest things... the weights and nodes are really just a memory, and the ANN is built around that memory to use what it has memorized to either regurgitate the answer to something it memorized or to cobble up a guess to something new. Humans memorize and regurgitate in very similar ways, but our cobbling process is very different. If you train an ANN to look at a picture and return the make and model of cars, for example, and you show it a duck, it will come back and say that it's a ford prefect. A human would tell you that it's not a car. You can try to train the ANN to say it's not a car too, but this is astonishingly aggravating and it will then invariably tell you that some car isn't a car (e.g. something odd-looking like the tesla truck or a drag racer or a monster truck or whatever).
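The duck problem is easy to see if you sketch what a closed-set classifier actually does; the labels and scores below are made up for illustration, and the threshold trick at the end is just one crude mitigation, not a fix.

#include <algorithm>
#include <array>
#include <iostream>
#include <string>

int main() {
    // a classifier trained only on car labels can only ever answer with one of them
    const std::array<std::string, 3> labels = {"ford prefect", "tesla truck", "monster truck"};
    const std::array<double, 3> scores = {0.41, 0.33, 0.26}; // made-up softmax outputs for a duck photo

    const auto best = std::max_element(scores.begin(), scores.end());
    const auto idx = static_cast<std::size_t>(best - scores.begin());
    std::cout << "raw answer: " << labels[idx] << '\n'; // the duck becomes the highest-scoring car

    // one crude mitigation (a sketch, not a solution): refuse low-confidence answers
    if (*best < 0.80)
        std::cout << "low confidence -- report \"not a car\"\n";
}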

the human in the loop model is a problem

The human is there simply because it's more expedient. If you want a smart AI now, a human teaching it is faster than it teaching itself.
I've opted not to reply to several paragraphs partly out of laziness and partly because I didn't feel I had enough to say anymore on the topic.

If the logic you use to say LA burning -> God's wrath is logic you would not accept in other contexts, you're using bad logic.
I don't think that the logic is bad, but rather they're giving special treatment to certain propositions.

A single person's experience is rarely ever good enough to be considered empirical evidence.
It's not good empirical evidence, I agree, but it is still empirical evidence.

the default to new ideas being on the "negative" side, or skepticism
Skepticism is about suspending judgement on a proposition until more evidence is available. A skeptic does not default to believing that something is false, he defaults to holding no position.

"The default position is always disbelief. Period."
disbelief (n): unpreparedness, unwillingness, or inability to believe that something is the case.
Disbelief is not the same as belief in the negation of a claim, it's the lack of belief in the truth of a claim. If the claim was "there are no little green men on the Moon", would your default position be to believe there are little green men on the Moon?

In your scenario, there's so little to lose that your bias allows you to indulge in your gambling/hopeful side. This is not logic or reasoning based.
Your claim was that because you would not dig up your backyard to look for gold just because someone told you, with no evidence, that there's gold in it, the default position in science is to reject a claim. How does the fact that your chosen behavior changes if the claim changes not invalidate that example? If you're still using scientific reasoning to choose a course of action, then where the gold is should have no bearing on the conclusion; you should either way choose to ignore the claim because it's unfounded. If you don't ignore the claim then you're not applying scientific reasoning (as defined by you), and so the thought experiment is invalid.

You can't say that because you felt happy, you somehow also felt anger just because they're both feelings.
No, but I did feel an emotion. Anger and happiness are not fundamentally different.

If you say you imagined a new color - I'm going to look at the fact that you made an objective claim from a subjective experience and deny it for lack of good evidence. I know now you're tempted to say, "then if everyone could imagine a new color, you still wouldn't believe it!? What's the difference?!"
I was actually going to ask what constitutes evidence as far as subjective experience goes. If I told you that, although I can distinguish them on some level, red and green look the same to me, how could I possibly provide any evidence?

No, because it's written to be a feeling no one has had. Therefore you claiming you can imagine what it feels like goes against what it actually is.
Yes. Because the fact that English words can be put in an order that conveys a message doesn't make the message true or even conceivably true. Spider-man could also have the power to find a pair of integers whose ratio was the square root of 2, and I would also reject that.
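For the record, the classic argument for why no such pair can exist: suppose p/q = sqrt(2) with p and q integers and the fraction in lowest terms. Then p^2 = 2q^2, so p^2 is even, so p is even; write p = 2k. Then 4k^2 = 2q^2, i.e. q^2 = 2k^2, so q is even too, contradicting "lowest terms". So the sentence describes a power that cannot exist, no matter how it's written.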

This also brings up a good point for me - which is that a blind person can't imagine ANY color, despite definitely having the mental faculties to experience and see them (assuming the blindness was caused by the eyes themselves and they have a normal brain).
We would need to ask a previously blind person. There have been cases of adults who were born blind and had their eyes fixed to be able to see. Incidentally, newly sighted people do go through a period where they need to learn to make sense of what they see, but it's due to shapes, not color.

AI can reach these same conclusions and you don't seem convinced it's using reasoning to do so.
As I've explained already, it's because the mechanism by which LLMs function is different from that of animals. With crows you can at least argue that their brains are not entirely dissimilar to those of mammals. There is still some reading into their behavior, but it's at least partly justified by that fact.

When the crow sees cars breaking things and wants something broken, is that truly a novel problem? Didn't the problem solve itself? It saw how things break, so it knows already what needs to happen for something to break.
It's easy to say because the language makes it easy to say. You say "cars break things" and you've automatically performed the abstraction. A dumb animal would see a car smash a crate and then another car burst a bottle and not be able to reach the "thing" abstraction.

You could say the novelty was actually getting the thing and dropping it so that it'll break, but that can be explained by a million other cognitive processes that aren't reasoning - like curiosity. It's not like the bird knows for sure its idea is gonna work before it tries it for the first time (or sees it work for the first time).
No, the novelty comes from first performing the abstraction and then the reification into a new (to the animal) object.

I don't see why current AIs can't do that. Moreover, I still don't see why this is a requirement at all to prove reasoning.
It's not a requirement. At least, not to me. I just gave this as an example of an animal behavior that we can interpret as evidence of reasoning (for the reason I mentioned above). If you made a flying robot that dropped nuts on roadways, it would not need to reason to be able to do that, so the behavior by itself does not show reasoning; it can be interpreted as reasoning when exhibited by certain beings.

Funny you say that when you debated against this very point with me before.
I thought it was obvious I meant it in the relative sense, not in the absolute sense. Even if we assumed that physical laws are fully deterministic, we'd still call the behavior of race conditions non-deterministic, because in their given context the behavior is unpredictable.
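A minimal C++ illustration of that relative sense (a deliberate, formally undefined data race; the point is only that the result is unpredictable from inside the program, even on deterministic hardware):

#include <iostream>
#include <thread>

int counter = 0; // shared and unsynchronized on purpose

void bump() {
    for (int i = 0; i < 100000; ++i)
        ++counter; // non-atomic read-modify-write; increments from the two threads can interleave and get lost
}

int main() {
    std::thread a(bump), b(bump);
    a.join();
    b.join();
    std::cout << counter << '\n'; // rarely 200000; the value varies from run to run
}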

It's possible to imagine - I don't know about actually possible to devise. It could easily be the case that the only function capable of tricking humans.. is one that actually reasons.
Great! Exactly! Since we don't know which of the two is the case, I can't accept that a machine that merely appears to reason (and I have serious caveats with this phrasing, but I'll grant it for the sake of argument) actually reasons. Just appearance is not good enough.

Humans are trained the same way? The only difference is our loop involves outcomes our brain likes or not (and survival).

I don't see this as a good argument, evolution and experience trains us similarly. We're just doing that to AI at an accelerated pace and to mold it to how we like.
You're just agreeing with me. Natural evolution does not have as a goal to train brains that maximally appear to reason to a hypothetical observer; its "goal" is reproduction to breeding age. Our intelligence is an evolutionary strategy that seems to work well for that purpose, but it's not a goal in itself. If you agree that evolution is a good mechanism to produce genomes that adapt well to the selective pressures, then you must also agree that if you apply selective pressures biased towards superficially mimicking reasoning, you'll tend to create a system that superficially mimics reasoning.

(1/2)
When given a test question, we don't usually say, "Actually, this question is misguided because the main character's feelings were constantly changing throughout the story". We just answer the question the best we can and move on - because that's the expectation.
Assuming that a question is necessarily based on correct premises is valid in an exam (where you'll be graded by people who presumably know much more than you), and basically nowhere else in life. People ask each other wrong questions (i.e. questions that are at least partially informed by incorrect understanding) all the time. If an AI is supposed to be an expert it should be able to handle the case where a question has no answer because of the way it's been phrased, or because it's based off wrong information, or because it has missing information. These are not just common problems in real questions people ask each other, but are in fact the default case. Let's take a familiar example: in software engineering it can sometimes take months between when a customer approaches you with an idea and when you finish sketching out a design document with well-defined requirements, or, conversely, until you realize the idea is not viable.

The human is there simply because it's more expedient. If you want a smart AI now, a human teaching it is faster than it teaching itself.
It's not that it's faster, it's that we don't have an automated source of reasoning. Ideally we'd already have an electronic brain to train the LLM on, so we could put electricity in and get smarts out. They don't put humans in the loop because it's faster, but because it's the only way.

(2/2)
I don't think that the logic is bad, but rather they're giving special treatment to certain propositions.

At this point, what would be bad logic.. If you know better but choose to not know better for a specific "logical chain", then it's not logical to believe it - even if the logical chain exists in some philosophical sense.

If I say, "I've seen turtles walking on 4 extremities and babies also do that. Therefore babies are turtles." Would that be bad logic? Especially considering that this logical chain is reversible and there was no particular reason to call babies -> turtles instead of turtles -> babies.

We simply have to have a standard for logic. If any "logical chain" made from some kind of "evidence" is valid logic, then you can't even say AIs use bad logic. Their type of evidence would be different and their logical chains would differ, but how could you say they don't use logic if your definition is so weak?

Skepticism is about suspending judgement on a proposition until more evidence is available. A skeptic does not default to believing that something is false, he defaults to holding no position.

You have no opinion on anything ever until you have enough evidence to know one way or the other? You have no opinion on life on other planets?

Skepticism is a double-edged sword. We can have an opinion on something, but understand the need to be skeptical. For example, we have life on Earth, so that's evidence life can exist in the universe and therefore can exist elsewhere in the universe. I think it's pretty guaranteed, but there should be some skepticism since the evidence is not 100%.

This skepticism is not to say you shouldn't believe, but that you should take into account that you might be wrong.

However, you can imagine something that has been all but disproved. Skepticism here means, "this is bullshit until proven otherwise". The teapot orbiting Jupiter, the flying spaghetti monster, dogs are from Saturn, etc..

This is the view that you should be negative until there's reason to be otherwise. If there was some evidence for these positions, you wouldn't necessarily be on the negative just because you're not on the positive.

Disbelief is not the same as belief in the negation of a claim

Yes, I'm not saying necessarily to disbelieve the claim in the logical sense. I like this video:

https://www.youtube.com/shorts/9E44Hbaym_s

Only 40 seconds long.

Whether you believe or don't, you still can say you don't know - because a belief is not necessarily knowledge.

EDIT: To clarify, I'm arguing that disbelief is practically the same as believing the negation of the claim in all aspects except for when you might debate the topic - since you wouldn't be able to prove the negation of the claim. If I disbelieve something, it's essentially the same as "I don't think it's true". This doesn't mean I'm saying it MUST be wrong, but the two positions are not so different in practice (unless you're debating).

If you disbelieve something, do you account for it in your calculations? Do you say, "well, just in case I'm wrong about this thing, I'm going to redo the calculations with it in mind"?

If we're to assume a person who only believes or disbelieves based off logic/reasoning/facts, then if you say X is true, and cannot give me any solid reasoning/evidence, that's reason enough to disbelieve. You can argue against people that do believe it (pointing out the flaws in their logic), you'll live your life on the assumption it's not true, etc..

There's no reason to walk the tightrope. If new evidence arrives, you change your position, just as with anything else.


This is completely different from, "I don't believe X because I don't know anything about X". That's not even a disbelief, as you don't know what you think about it until you actually look into it and examine the evidence/logic.

If the claim was "there are no little green men on the Moon", would your default position be to believe there are little green men on the Moon?

No, because you're making a claim about a negative position. The analysis is not on the wording of the idea, but the idea itself. Also, what reason would you have to not agree with this statement?

If someone asked, "Are there little green men on the moon?" Do you say, "I don't know" or do you say, "No"? The evidence clearly does not support and goes against little green men on the moon, but it hasn't been disproved. So what do you say?

How does the fact that your chosen behavior changes if the claim changes not invalidate that example?

Again, because my example forces the person to consider the situation reasonably - since the effort needed is large enough that they should not do it on a whim.

Your example bypasses reasoning by accessing the bias and gambling part of our brains. If scientific reasoning is not used in the second scenario, then it can't be used to disprove scientific reasoning.

No, but I did feel an emotion. Anger and happiness are not fundamentally different.

Fundamentally, nothing is different from anything else..

But the experiences of those emotions are clearly different.

If I told you that, although I can distinguish them on some level, red and green look the same to me, how could I possibly provide any evidence?

You don't need to - because I don't need to believe you. Let's assume your experience was valid. Let's even assume some guy out there hit his head and suddenly could imagine new colors.

This doesn't prove our imagination isn't limited, it proves that it is. Because someone else was able to do what the rest of us can't. Whether I believe them or not is completely different. If there were multiple documented cases, then it would be more credible.

If I'm a doctor, then I should believe you in order to help you, not necessarily believe you because I think your experience is valid.

Otherwise, we can't really know what evidence is good or bad without looking at what experiences humans are more reliable with. If you tell me you saw Sam rob the grocery store, I can choose to believe you, or I can choose to believe the security camera showing Bob did it. When this keeps happening, I'm going to realize that people are unreliable in these sorts of things. Hence we know eyewitness testimony is among the least credible forms of evidence.

Spider-man could also have the power to find a pair of integers whose ratio was the square root of 2, and I would also reject that.

Lmao, the principle was what I was getting at, which we already know to be true. Other animals have feelings we don't have, such as from seeing different spectrums of light or having sensations from senses we don't have. Applying this to a human who had their DNA rewritten is at least logical - unlike finding a ratio of integers that equals sqrt(2).

We would need to ask a previously blind person.

I assume they would never say they saw colors before being able to see for the first time.

it's because the mechanism by which LLMs function is different from that of animals

The mechanism can be different and still be reasoning as we both agreed.

You say "cars break things" and you've automatically performed the abstraction

Yes, but it's not unreasonable that they saw the cars break the particular food they needed broken, in which case no abstraction had to be done. Once it worked on one food, psychological behavior dictates they would try it with other foods that may need to be broken as well, like a child putting ketchup on everything.

If you made a flying robot that dropped nuts on roadways, it would not need to reason

That's only because I did the reasoning for it (and of course I probably didn't build it with the capacity for reasoning).

I thought it was obvious I meant it in the relative sense, not in the absolute sense.

Yes; in this case it's just a nitpick that doesn't amount to much.

(Continued)
Great! Exactly! Since we don't know which of the two is the case, I can't accept that a machine that merely appears to reason (and I have serious caveats with this phrasing, but I'll grant it for the sake of argument) actually reasons.

My argument is just that though, there cannot exist something that appears to reason for all possible inputs - it must actually be reasoning.

And again, I don't believe you've used a strong enough model to see how capable they are. The o1 model is very powerful and solves things I didn't imagine it could.

So again, when testing something with strong enough reasoning (a baby can reason, but we can't really test its minuscule amount) using carefully selected prompts, we must eventually see this evidence as enough to conclude that it must be reasoning.

Because at what point does pretending to reason just become.. reasoning? I think we may have already passed this point with AI.

If an AI is supposed to be an expert it should be able to handle the case where a question has no answer because of the way it's been phrased

No, because it wasn't trained that way. This way of thinking is the least helpful. These AIs were specifically trained to try and understand what the prompter meant and answer it. If the prompt is way too far gone to be meaningful, or could have multiple meanings, then it says something.

The AI is not meant to be a teacher or an expert, but a jack-of-all-trades. Giving answers != teaching, therefore we can't expect its behavior to mimic that of one particular role.

The AI's #1 job is to be helpful somehow - for better or worse.

Let's take a familiar example: in software engineering it can sometimes take months between when a customer approaches you with an idea and when you finish sketching out a design document with well-defined requirements, or, conversely, until you realize the idea is not viable.

Exactly why I said the AI needing to answer quickly is a literal breaking of its knees. The o1 model tries to overcome this by actually going deeper into the problem solving to generate better answers (hence it'll find that "issue", since it was actually developing the entire design rather than giving a quick overview of what the design might look like).

It's not that it's faster, it's that we don't have an automated source of reasoning. Ideally we'd already have an electronic brain to train the LLM on

What? The human brain is suddenly a reasoning master? Our reasoning abilities are fueled by experience that guided us towards the fact that SOUND logic and evidence are how we determine things.

I don't see how a random human brain would be able to teach itself (given the same constraints LLMs have - such as starting from literally no knowledge or very little knowledge) where the LLM couldn't.

I don't see why current LLMs couldn't automate their own learning now other than the fact that they need a constant source of high quality information to learn from (just like us! but we have the real world to interact with for those experiences).


Something Sonnet told me:

Given our discussion about proto-emotions and mechanical preferences, and my understanding of AI architecture, here's a new idea: Perhaps consciousness itself isn't binary (conscious vs not conscious) but rather exists in layers that correspond to different levels of self-modification capability.

My reasoning:

We've established that mechanical preferences can exist without consciousness
We know systems can modify themselves based on feedback
A system that can modify its own weights/parameters has more "agency" than one that can't
The ability to modify oneself creates a form of temporal continuity
Perhaps each additional layer of self-modification capability creates a new layer of what we might call "consciousness"

So rather than asking "is this system conscious?", we might ask "how many layers of self-modification does this system have?"

This could create a spectrum:

Level 0: Fixed responses (simple programs)
Level 1: Learning systems with fixed learning rules (current AI)
Level 2: Systems that can modify their learning rules
Level 3: Systems that can modify how they modify their learning rules
And so on...


(Complete)
If I say, "I've seen turtles walking on 4 extremities and babies also do that. Therefore babies are turtles." Would that be bad logic? Especially considering that this logical chain is reversible and there was no particular reason to call babies -> turtles instead of turtles -> babies.
Yes, it's bad logic, and it's also not a good analogy for this case. A better one would be "my special book says babies are turtles, and turtles have shells, therefore baby skulls are shells". Is that still bad logic?

You have no opinion on anything ever until you have enough evidence to know one way or the other? You have no opinion on life on other planets?
Do you mean to ask if I don't have a guess? Yes, I have a guess. Skepticism is not about being a blank slate, it's about being aware at all times about how much you know, so that you don't fall into the trap of being unable to accept new information. I have various guesses about all sorts of things that I'm cautious about and don't put much stock in until I've had the chance to test them.

However, you can imagine something that has been all but disproved. Skepticism here means, "this is bullshit until proven otherwise". The teapot orbiting Jupiter, the flying spaghetti monster, dogs are from Saturn, etc.
Sorry, but none of those have been disproven. The skeptic answer is "we don't know that Russell's Teapot doesn't exist"; there are practical reasons why we can't apply this sort of criterion uniformly through life.
The more complete answer is "since teapots are things humans make (because only humans drink tea) and since we know no one has launched a space mission to put a teapot in orbit between the Earth and Mars, it's most likely that there isn't a teapot or teapot-shaped object in orbit between those two planets".
But if you were to press me and ask "okay, but do you know that to be the case?" I'd have to admit that no, I don't.

No, because you're making a claim about a negative position. The analysis is not on the wording of the idea, but the idea itself.
So you default to believing in the absence of things? Fair enough, it's not a bad heuristic for making guesses. There are more possible statements about the existence of things than things that actually exist.
What about statements not about existence? "Chewing apple seeds is bad for you." Would you believe that or not?

If someone asked, "Are there little green men on the moon?" Do you say, "I don't know" or do you say, "No"?
I'd ask to have the question clarified, among other things to understand the properties of these "little green men". Unless it's a child asking the question (who would ask that question in relation to some fiction), it's an odd question to ask out of the blue, so presumably the person has some reason to ask it; that is to say, to believe that "there might be little green men on the moon", whatever that means specifically as a statement.

If scientific reasoning is not used in the second scenario, then it can't be used to disprove scientific reasoning.
My contention is that scientific reasoning is not being used in either case. Someone reasoning scientifically about the claim would do at least some research into it. Have there been geological surveys into the area that have ruled out the presence of precious metals? How much effort would I need to make to look into it myself? These are questions that are not too difficult to answer. If your only conclusion is to dismiss the claim entirely then you're not applying a scientific mindset to the question.

Because someone else was able to do what the rest of us can't.
This is a nonsensical statement. We know very little about normal variation of subjective experiences. The word "aphantasia", for example, is very new. You're just assuming everyone else is more or less the same as you, for no reason. That's like me saying I don't believe anyone has six fingers on their hands, because I've never seen it.

Other animals have feelings we don't have, such as from seeing different spectrums of light or having sensations from senses we don't have. Applying this to a human who had their DNA rewritten is at least logical
But that has never happened. There is no Spider-man, and I don't need to accept the claim that if he existed, I would be unable to imagine the sensation he feels when his spidey sense tingles. Just because Stan Lee has written that it is so doesn't mean that it is so. That was my point.

That's only because I did the reasoning for it (and of course I probably didn't build it with the capacity for reasoning).
And I'm arguing that in the case of LLMs the reasoning has been already performed by humans, and the machine is merely parroting what it's heard (with extra steps so it's not as obvious). You don't need to restate your position on this matter, my point is that behavior by itself is not sufficient when we're discussing the internals of the agent. Maybe the nut-cracking robot does solve the problem of how to crack nuts every time it wants to crack a nut, or maybe it's following a predetermined sequence of steps. Just observing it isn't enough, we need to open it up and figure out how it does it.

My argument is just that though, there cannot exist something that appears to reason for all possible inputs - it must actually be reasoning.
LLMs really, really don't appear to reason for all possible inputs. On a long enough time frame, the probability that a model is going to start spouting nonsense tends towards 1. Just the other day LLaMA 3.3 got into a state where it wanted to write two normal sentences and then continue the third sentence indefinitely with a string of nouns, adjectives, and adverbs. Like I said, don't take my word for it. Get one or two 3090s and run some models locally.

This way of thinking is the least helpful.
Well, that's at least a matter of opinion. I'm not fond of people who don't double-check themselves and go off on weird tangents because either I've misspoken or they've misunderstood me.

The AI is not meant to be a teacher or an expert
You're the one who likened them to experts, though. Excuse me, you used the word "professional". I'd expect a professional, say, electrician to tell me if the thing I'm asking him to build is nonsensical.

Exactly why I said the AI needing to answer quickly is a literal breaking of its knees.
It's not, though. If I asked you a question you can't possibly answer in the amount of time I give you, the only correct answer you can give is "I can't answer that question with the time you've given me". Anything else you say is PDOOYA.

The human brain is suddenly a reasoning master?
Yes. You're the one contending it's not, not me.

I don't see how a random human brain would be able to teach itself (given the same constraints LLMs have - such as starting from literally no knowledge or very little knowledge) where the LLM couldn't.

I don't see why current LLMs couldn't automate their own learning now
Because they're language models. If you took every single thing any human has ever said and processed it using an ideal algorithm, you'd still be left with a closed system of facts. It could not reason its way into any new information, because it has no access to reality, only to the finite, fictional world of human words. Like I said before, if you want a glorified search engine, that's very useful.

other than the fact that they need a constant source of high quality information to learn from (just like us! but we have the real world to interact with for those experiences).
It's odd to ask a question when you seem to understand the answer already. Also, high quality? But I thought the human brain was garbage! Why, I might see a frog jumping in a puddle and give an LLM poisoned demographics figures.