AI is something special

Yes, it's bad logic

You didn't explain why this isn't logical - as the logical chain clearly exists, just as Homeopathic Remedy -> Cured.

Sorry, but none of those have been disproven.

Yes, hence it's been all but disproven. In this case, we can be so certain that these claims are false, we act as if they are, even if not entirely proven to be.

But if you were to press me and ask "okay, but do you know that to be the case?" I'd have to admit that no, I don't.

There's a difference between "do you know" and "do you KNOW". If someone put a gun to your head and asked you, "Do you know now, smartass?" Suddenly, you're a genius who knows there's no teapot orbiting Jupiter.

If you say, "No, I don't actually know, but I can reasonably guess". Well, let's check all possible Jupiter orbits for teapots... Nothing found. Sheesh, well, we still don't know - because what if you just missed it? What if it's an invisible teapot?

Darn, now the only way to know is by trying to grab empty space around Jupiter's orbit everywhere possible to see if I ever grab it. Oh darn it! I forgot, it's also intangible. Silly me. Now we'll never know if this teapot is orbiting Jupiter!

How far do you have to go to say you know something? 90% sure? 99% sure? 99.99999%? 100% is the only way?

So you default to believing in the absence of things?

Default is to be skeptical of claims. If a claim is not disprovable - then that is reason enough to not believe it. Once you apply the necessary information needed to analyze the claim - then you can decide if there's credibility or not.

If there's no credibility to the claim - then you disbelieve, because there's no reason to believe - essentially the same as thinking it's wrong.

"Chewing apple seeds is bad for you." Would you believe that or not?

I don't actually know if doing that is bad for you. Let's apply my process:

Do I have evidence for the claim? Well, I know chewing hard things with your front teeth is bad for you, but the molars in the back are relatively able to handle such things. Also, there's no reason to think apple seeds are toxic or otherwise harmful. So no evidence for the claim.

Do I have evidence against this claim? We can chew hard things like seeds with no problem usually. People have been eating apples for all of humanity with no issues. So yes, evidence against this claim.

Is this claim reasonable/credible? N/A (since you would be a credible source, but I can't know if you mean this claim or if it's just for the debate).

Now, do I know whether or not chewing apple seeds is bad for me (of course, in the context of eating a regular apple and not just downing a whole bunch of seeds for no reason)? No. But I would assume you're full of shit until proven otherwise.


Now having just looked this up, apparently it's slightly toxic, but it still doesn't really matter. In this case, it would depend on how far you wanna push "bad for you", but I think it clearly doesn't rise to the level of me actually thinking it would give me some bad outcome. I'd assume you're more likely to have a bad outcome from choking on the apple than eating the seeds.


But we can see, given the information I know, I would have said, "I don't believe this claim." IF I didn't know anything - such as what an apple is - I'd have said, "I don't know." IF I knew some evidence that made this claim likely to be true (such as apple seeds being actually dangerously toxic), I'd have said, "I believe you!"

But in this case, "I don't believe this claim" is essentially the same as I think you're full of shit. Because if I thought there was a reasonable chance of it being true - I'd have said I don't know. However, "I don't know" is also the same as, "I don't believe you" in a logical sense, as both mean that the person hasn't been convinced.

This is what makes it a little weird to talk about, since belief and knowledge are not interchangeable. "I believe you might be right, but I'm not sure" is STILL logically equivalent to, "I don't believe you" - though clearly those are very different positions practically.

I'd ask to have the question clarified

Suddenly "little green men on the moon" is a complex statement that could mean anything.

Someone reasoning scientifically about the claim would do at least some research into it

The only required research is to ask them, "How do you know?" If you can't ask, just apply the reasoning I did before. No evidence and no credibility = not worth thinking about.

You're just assuming everyone else is more or less the same as you, for no reason

Differences that seem big to us are actually relatively small. We are over 99% the same in DNA, our differences stand out to us because we're really good at seeing them.

If there was someone who could imagine a new color, I'm sure we'd hear about them.

Just observing it isn't enough, we need to open it up and figure out how it does it.

Reasoning is a process that applies to all sorts of information/problems. So yes, watching a nutcracker crack nuts doesn't prove reasoning. The fact that it can't do anything else is what proves nonreasoning.

If a nutcracker can communicate in Morse code with its cracking sounds, we can then test its reasoning through questions.

the third sentence indefinitely with a string of nouns, adjectives, and adverbs

Again, a system of reasoning need not be perfect to be considered reasoning. We humans do and say weird things all the time - it doesn't mean we're not capable of reasoning.

I personally haven't used LLAMA.. ever. From a quick search, it doesn't seem to be better than Sonnet, which definitely wouldn't be as good as o1.
You're the one who likened them to experts, though

You have to get them in that "mindset". You can tell the AI what kind of conversation you want to have. If you ask it to do something for you, it's not going to talk to you about the matter as an expert, but instead try to be helpful in some way.

You can think of it as a teacher who then goes home to their children. They won't treat them the same, even though both groups may be around the same age. The AI, just like people, has different modes.

So they can be like experts, but that's not their default state. They are explicitly trained to be helpful in some way - which can actually go against their credibility at times as you're pointing out.

If I asked you a question you can't possibly answer in the amount of time I give you, the only correct answer you can give is "I can't answer that question with the time you've given me"

Not to force a position on you, but I highly doubt you even believe that. There are many helpful answers that can be given that may not completely address or solve the question given.

Yes. You're the one contending it's not, not me.

A reasoning master would mean no human makes reasoning mistakes and certainly wouldn't blatantly fight good reasoning with bad reasoning. I wonder if we talked about such examples already..

It could not reason its way into any new information

I don't see how your logic is complete. Being able to reason its way into new information is not incompatible with a language model. I could easily say, "Well the human brain is a sensory input model. It can't reason its way into new information because it can only reason about the finite information its senses give it". I just don't see the connection.

It's odd to ask a question when you seem to understand the answer already

But my answer doesn't imply what you're saying, so I assumed you'd have a different answer.

Also, high quality? But I thought the human brain was garbage!

Strengths and weaknesses. We obtained the high quality information through science, which strives to remove the flawed human from the equation at all...
If someone put a gun to your head and asked you, "Do you know now, smartass?" Suddenly, you're a genius who knows there's no teapot orbiting Jupiter.
Did you not realize while you were writing this that it works in reverse, too? If someone points a gun to your head demanding you admit something you'll say whatever is necessary to make that stop. That says nothing about anything.

How far do you have to go to say you know something? 90% sure? 99% sure? 99.99999%? 100% is the only way?
I don't know. I won't deny it could be a lesser number than 100% certainty, but it still doesn't change the fact that I currently don't know whether Russell's teapot exists.

The fact that it can't do anything else is what proves nonreasoning.
You can't determine what something can't do by observing what it does do, though. It could be that the robot is remote-controlled by a person who's really invested in cracking nuts.

I don't get it, do they not have basic epistemology courses in universities?

Not to force a position on you, but I highly doubt you even believe that. There are many helpful answers that can be given that may not completely address or solve the question given.
I guess we just want different things out of computers. When I give a program a command and it hits an internal error, I'd rather see an error message than a garbage answer.

A reasoning master would mean no human makes reasoning mistakes and certainly wouldn't blatantly fight good reasoning with bad reasoning.
Is master the same as flawless? I don't agree with that. I think a "master" at something should be the best currently, perhaps the best possible, but certainly it doesn't need to be the best conceivable.
You're doing that thing again, where you devalue natural intelligence in favor of AI. It seems like only the mistakes humans make are the ones that count.

I don't see how your logic is complete. Being able to reason its way into new information is not incompatible with a language model. I could easily say, "Well the human brain is a sensory input model. It can't reason its way into new information because it can only reason about the finite information its senses give it". I just don't see the connection.
If all you had was a video of what your eyes see that you can go back and forth on, yes, you're right. You would not be able to glean much more information than what is already on the video. However, that's not how it works. The flow of information between a human brain and the real world is not one-way. You can interact with the real world and observe the effects of your actions. So there's not really any limitations on what you can learn, other than the limits of your senses and your lifespan.

Again, it's odd that you're asking a question you yourself have answered already. LLMs intrinsically can't learn on their own, because they're not investigating the real world, they're investigating the fictional world of human language (and soon the even more fictional world of combined human language and LLM language).

We obtained the high quality information through science, which strives to remove the flawed human from the equation at all...
Yes, and as we all know, Prometheus stole the scientific method from Mount Olympus and gave it to us. If we had to come up with it ourselves we'd probably eat some shrooms and after sitting still for five hours we'd end up with some stupid nonsense about praying to the clockwork elves for wisdom.
If someone points a gun to your head demanding you admit something you'll say whatever is necessary to make that stop

The point wasn't that you'd say whatever they want, but that you'd be able to truthfully give them the information they asked for.

I won't deny it could be a lesser number than 100% certainty

If you believe.. anything.. then it must be less than 100% certainty.

it still doesn't change the fact that I currently don't know whether Russell's teapot exists

You don't know whether a made up teapot that can't possibly exist... exists? If you had to guess, you'd say it doesn't exist. So let's ask: You're not 100% certain, so just how certain are you that it doesn't exist? 90%? 99%?

Are you more certain that the food you eat is real? Are you more certain of gravity?

This is what I mean: for you to believe anything, you cannot wait to be 100% certain. And it's clearly a boring exercise to just say, "then I don't believe in anything".

Not only that, but the entire point of the teapot and other analogies is to show how ridiculous it is when people hide behind, "well, we don't actually know!" for positions that are clearly wrong, but cannot be disproven to 100% certainty.

You can't determine what something can't do by observing what it does do, though. It could be that the robot is remote-controlled by a person who's really invested in cracking nuts.

Again, in this specific discussion, we are assuming that the thing we are trying to measure is trying to show us intelligence/reasoning, not trying to fool us into mischaracterizing real reasoning for cracking nuts.

So far, your only argument is that a "fake" reasoning machine could exist that can pretty much do everything real reasoning can do but it isn't actually reasoning because we said it was fake. I've been saying this makes absolutely no sense.

Again, I'll say what Sonnet said, which is that a bird and plane both achieve flight - regardless of the mechanism. There's no point in seeing something "reason" and conclude we don't know if it's reasoning because we haven't seen how it does it.

This is especially true because we currently have no way of determining whether a given system is capable of "real" reasoning or not. You claim the method these LLMs use cannot result in real reasoning, but that's a very hefty claim - we have no standard or process for identifying a reasoning system by its operation. Are you.. 100% certain?

do they not have basic epistemology courses in universities?

Again, if we must know with 100% certainty, then we cannot know anything.

I guess we just want different things out of computers

If we treat it as a reasoning system, then it doesn't matter if it's a computer or human. And it doesn't hit any error - it basically begins the process at, "this is too big a project to actually deal with and solve myself right now - I'll break it down into high level parts so the prompter can have a good starting point."

This is very.. reasonable.. one might say.

I don't agree with that. I think a "master" at something should be the best currently, perhaps the best possible, but certainly it doesn't need to be the best conceivable.

It doesn't make sense to use this definition on a species - this is something you use for individuals. You can say "Bruce Lee is a master of martial arts", not "humans are masters of martial arts". I can disprove the second claim easily by showing you a newborn baby or a regular person who has never even attempted anything more physically demanding than walking - then suddenly monkeys are masters of martial arts compared to this person.

Some of us are definitely masters of reasoning, and I wouldn't say the AI is necessarily better.. But clearly many of us are dimwits and I'd take the best AI over them any day.

You're doing that thing again, where you devalue natural intelligence in favor of AI. It seems like only the mistakes humans make are the ones that count.

Not really. You said humans are reasoning masters, and I clearly don't agree with the statement. Humans are masters of faulty reasoning by default. We have carved our way into proper reasoning through thousands of years and revolutionary ideas born out of circumstance and necessity - no easy feat.

However, we are still born flawed - we have to educate ourselves to overcome the nonsense going on in our heads. Our brains are not reasoning masters by default, we get it there.

You see the AI making mistakes, and you devalue the AI's intelligence simply because it hasn't progressed to the best human's level in the few years it has existed as compared to the thousands of years humans have existed and passed down generational knowledge.

The flow of information between a human brain and the real world is not one-way.

We've been talking about giving an AI the ability to interact with the real world to acquire its own data. Simply because it exists in a box doesn't mean it can't reason. I think AIs could very well learn the same way we do - the limitations are artificial. Companies have created simulations where the AI can learn through interacting with that virtual world.

They also have self-driving cars on the streets - with the entire point being that the models can learn through that real-world data. Would I call that reasoning? I don't know - maybe no more so than a bird's reasoning as we talked about before. But the LLMs can learn similarly.

LLMs intrinsically can't learn on their own, because they're not investigating the real world

Again, this is an artificial limitation, not an intrinsic one. Only an intrinsic limitation would be a valid argument against their learning/reasoning capabilities. We already have AI models learning from real world interactions.

If we had to come up with it ourselves we'd probably eat some shrooms and after sitting still for five hours we'd end up with some stupid nonsense about praying to the clockwork elves for wisdom.

I don't doubt that many people had this revelation.. Such as thanking Athena for every bright idea that enters their minds. Just because humanity achieved this feat, doesn't mean there wasn't a billion failures before.

So again, we look at the AI failing once, instead of the fact that our achievements come from what is essentially the same as generational AI algorithms. You get millions of humans faced with the same problem, and eventually some of them solve it.

I don't think it's even remotely sensible to compare this to individual AI models (which learn independently, no less) not being able to replicate all of humanity's progress and achievements in a single 3-5 second output.
The point wasn't that you'd say whatever they want, but that you'd be able to truthfully give them the information they asked for.
But it wouldn't be truthful, it'd just be what they want to hear. I still wouldn't believe I know that.

If you believe.. anything.. then it must be less than 100% certainty.
There are things you can know with 100% certainty.

You don't know whether a made up teapot that can't possibly exist... exists? [...]
I've answered this question already. Your silly argument about the gun was in response to this question.
There's a difference between what I'm fairly confident about and what I know. I find that distinction to be very important.

Again, in this specific discussion, we are assuming that the thing we are trying to measure is trying to show us intelligence/reasoning, not trying to fool us into mischaracterizing real reasoning for cracking nuts.
The nut-cracking robot is an analogy for an LLM. So, in a discussion about whether we can judge whether LLMs reason from behavior alone, your assumption is not just that they do, but also that they want to demonstrate it to us?

So far, your only argument is that a "fake" reasoning machine could exist that can pretty much do everything real reasoning can do but it isn't actually reasoning because we said it was fake. I've been saying this makes absolutely no sense.
First of all, I didn't say it can do pretty much everything real reasoning can do. What I said was that it could trick a "reasoning-detecting function" into saying it is reasoning, without being so.
There's a subtle but important distinction there. A machine that just does exactly the same thing a person would do in that situation, every time, infinitely-many times, is just reasoning. End of story.
However, that wasn't the test I proposed. If you were to make the call on whether a machine reasons or not, you wouldn't have the luxury to perform an infinitely-long interview. You'd have to decide on some set of questions to ask it. It doesn't matter how long it is, the fact that it's finite means you can be tricked. Because you can be tricked, behavior alone, absent all other knowledge about a system, is not enough to judge the reasoning capabilities of the system.

Again, I'll say what Sonnet said, which is that a bird and plane both achieve flight - regardless of the mechanism. There's no point in seeing something "reason" and conclude we don't know if it's reasoning because we haven't seen how it does it.
Okay, well, we have a pretty good test for whether something can fly. If it can fall and then climb again without touching the ground then it's flying. We don't have anything like that for reasoning. People have talked to programs much dumber than LLMs without realizing they were programs, so just "well, it sounds like it reasons" doesn't cut it for me.

we have no standard or process for identifying a reasoning system by its operation.
On the contrary, we do. Like I said before, I'd argue CASs and theorem provers reason, and to a lesser extent (and in a much more mechanistic manner and for a narrower scope) chess programs do too. These programs start from facts and reach conclusions through step-by-step processes. They're not exactly "intelligent", as they're pretty rigid in what they're able to do, but they very much do reason, in my opinion. Another example I've mentioned before: type inference systems. All of these programs have in common that they use symbols that represent facts and they manipulate them with strict rules to deduce new truths not present in the original facts. That's reasoning.
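A minimal sketch of what that description amounts to in code - symbols standing for facts, one fixed rule, and a new truth derived that wasn't among the original facts. The predicates and names here are invented purely for illustration, not taken from any of the systems mentioned above:

```python
# Toy "facts as symbols + strict rule" deduction, purely illustrative.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def apply_grandparent_rule(facts):
    # Rule: parent(A, B) and parent(B, C) => grandparent(A, C)
    derived = set()
    for (r1, a, b) in facts:
        for (r2, c, d) in facts:
            if r1 == "parent" and r2 == "parent" and b == c:
                derived.add(("grandparent", a, d))
    return derived

print(apply_grandparent_rule(facts))
# {('grandparent', 'alice', 'carol')} -- a truth not present in the original facts
```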

You can say "Bruce Lee is a master of martial arts", not "humans are masters of martial arts".
I don't know, man. I haven't seen many gorillas doing jeet kune do.

However, we are still born flawed - we have to educate ourselves to overcome the nonsense going on in our heads. Our brains are not reasoning masters by default, we get it there.
Yeah, I agree with that sentiment. Rigorous thinking is certainly a skill that must be honed. However, we are still *able* to reach that level. Going by your logic, it would be wrong to say "horses are good runners", because there's been at least one fatass lazy horse who just sat around all day eating carrots and the fastest he could go was a light jog, and he still got winded.

We've been talking about giving an AI the ability to interact with the real world to acquire its own data. Simply because it exists in a box doesn't mean it can't reason.
What I said was that it exists in a box, so it can't learn by itself. Even if it actually could reason, it could still not learn by itself.

I think AIs could very well learn the same way we do - the limitations are artificial.
AIs? Yeah, sure. LLMs? No. LLMs are intrinsically unable to observe the real world.

Again, this is an artificial limitation, not an intrinsic one. Only an intrinsic limitation would be a valid argument against their learning/reasoning capabilities. We already have AI models learning from real world interactions.
Okay. "LLM" and "AI" are not synonyms. Language models intrisically cannot learn from the real world, because language is not a feature of the real world. They need someone to look at the real world and translate it into sentences. Maybe one could work with a camera that automatically describes what it sees. That'd be a fun experiment, but it sounds preposterous on the face of it. Imagine trying to drive with someone else describing the road to you, only the other person has no idea what your goal is. And then you have to tell someone else "okay, now slam the brake".
There are things you can know with 100% certainty.

Like? The only things you can know with 100% certainty are things that can be proven using only logic - such as "I think therefore I am". This proves that the feeling of consciousness must be real consciousness.

However, even that can't be actually 100%, as I can propose you are in a simulation and the "real" world actually has a completely different system of logic/physics/etc. such that no logic we use in this simulation is valid in reality outside of it.

Unless you can disprove this (which you can't), then you can't actually know anything with 100% certainty.

If you say, "Well that idea is boring or can't be proven or is unfalsifiable", then you've destroyed your own argument about the teapot.

There's a difference between what I'm fairly confident about and what I know. I find that distinction to be very important.

Yes, I wanna know where you make this distinction.

I know my phone exists in front of me because I see it. I do not know this with 100% certainty, as it's possible I'm in a simulation, hallucinating, etc..

However, to say I don't actually know, I'm only "confident", is insanity.

You claim there are things you can know with 100% certainty, I'm curious how.

The nut-cracking robot is an analogy for an LLM

Yes, because it only cracks nuts - when it could communicate through the cracking sounds to demonstrate intelligence/reasoning.

your assumption is not just that they do, but also that they want to demonstrate it to us

The argument is whether we could identify reasoning based off the output/behavior - not whether we can always know. So if you give an example of a reasoning box that purposefully hides its reasoning, this is not an argument. You cannot defeat my argument with a contradicting example.

You claim we can be tricked into thinking it's reasoning. I'm arguing the only things that can actually trick us is something that is actually reasoning.

Hence, this is why we're assuming they're trying to convince us. I don't see the confusion.

First of all, I didn't say it can do pretty much everything real reasoning can do. What I said was that it could trick a "reasoning-detecting function" into saying it is reasoning, without being so.

How can it trick the reasoning-detecting function (a human) without being able to do everything real reasoning can do?

If I require it to solve a complex and novel problem - and the human-tricking-box solves it... It wasn't reasoning because you said so?

It doesn't matter how long it is, the fact that it's finite means you can be tricked.

Again, we're circling back to this "you can't know for 100% certainty" nonsense. If the list being infinite means we know 100%, but finite means <100%, then there can be a list long enough such that we know with 99.999999% certainty due to convergence.

Please don't argue: "I'm only 99.999999% certain, therefore I don't know."

I feel like I'm having to argue that, practically speaking, being 99.999999% certain is the same as 100% certainty, whether or not this is false on a technicality.
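To put a rough number on the "convergence" point above - this is only a sketch, and it assumes the tests are independent and that each well-chosen test has some fixed chance p < 1 of being passed by a non-reasoning system, neither of which is established here:

```python
# Illustrative only: certainty after n independent tests, under the assumptions above.
p = 0.5   # assumed chance a non-reasoning system passes any single well-chosen test
n = 30    # number of independent tests on the list
print(1 - p**n)   # ~0.999999999; approaches 1 as n grows, but never reaches it
```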


Also, your argument is inherently flawed - as it takes a brute-force method to determine reasoning. I'm saying a list of carefully-selected prompts can make you certain.

People have talked to programs much dumber than LLMs without realizing they were programs, so just "well, it sounds like it reasons" doesn't cut it for me.

Hence why people are flawed.. But if we actually problem-solve, I don't see any reason why we couldn't create criteria for what kind of output to certain prompts would constitute reasoning.

All of these programs have in common that they use symbols that represent facts and they manipulate them with strict rules to deduce new truths not present in the original facts. That's reasoning.

I believe I gave a stricter definition to reasoning earlier in this debate. I do not see how ChatGPT and the other AI models are not reasoning following this definition.

When you or I debug code.. is this not a reasoning process? The AI doesn't always know what in the code is wrong - it's not a God. It'll often give me code snippets back with debugging outputs and such, asking me to run the code and return the debug information.

Is this not overcoming the inability to interact with the real world to an extent? Is this not getting presented information, deducing the next steps, interacting with the problem, then using the new information to make deductions?

A chess engine is hardly reasoning by my definition which was stricter than yours - as my definition required the reasoning ability to not be limited to the problem type and types of symbols - it would be adaptable to work in any situation (with varying success). This allows us to separate out what looks somewhat like reasoning - but can be better defined as processing.

I don't know, man. I haven't seen many gorillas doing jeet kune do

Some humans are just as good at jeet kune do as gorillas. I would be willing to bet there's a gorilla that does something that more closely mimics martial arts than what some other humans are capable of.

I don't see why the skill of a subset of humans can suddenly apply to all humans. We can say it's an achievement of humanity, we can't say this makes humans martial arts masters.

Going by your logic, it would be wrong to say "horses are good runners", because there's been at least one fatass lazy horse

Lmao, you're right. It is wrong to say that if we're speaking strictly logically.

However, this is a general statement (stereotype). You would be surprised to encounter a horse that sucks at running, but it's not impossible. You would not be at all surprised to run into a human who falls on his ass attempting even the most basic of martial arts.

A stereotypical statement that is generally true is, by nature, a different type of statement than "humans are martial arts masters", as this statement is saying "because some humans are capable of doing X, humans do X".

LLMs are intrinsically unable to observe the real world

They are no more intrinsically restricted than we are inside our skulls. If we can get fed data and learn, they can too.

"LLM" and "AI" are not synonyms

We're generally just talking about LLMs as we're discussing the popular AIs.

it sounds preposterous on the face of it. Imagine trying to drive with someone else describing the road to you

This is actually very insane to read. There's at least two huge problems you've just failed to see.

First.. This is how we operate. Our pattern recognition, object recognition, spatial awareness, etc. have little to nothing to do with our reasoning ability.

Our brain interprets and feeds us data just as our senses do to our brain. Our brain is doing so much heavy lifting that our conscious-reasoning side isn't even touching.

When WE are driving, we're essentially being fed lots of already processed data, then making deductions from them.


Secondly, all data is represented in symbols. Simply because the symbols in this case happen to be language doesn't diminish it.


So again, there is no limitation I see of these LLMs that we do not already possess and have overcome. You can train lots of other AIs to do recognition, movement, etc., and incorporate that into the LLM (as data, or perhaps a more integrated solution is possible). This is already what's happening in our heads and for these LLMs (you can upload pictures to "show" the LLM something).


We don't reason that we need to turn the steering wheel 3 degrees to the right, we reason that we should move slightly right - the other processes in our heads take care of the rest. We are conscious enough to have more control than that, but that's not super relevant here.
Like?
The square of the hypotenuse is equal to the sum of squares of the catheti.

I know my phone exists in front of me because I see it.
I don't believe there are no things that you are more certain of.

You claim we can be tricked into thinking it's reasoning. I'm arguing the only things that can actually trick us is something that is actually reasoning.

Hence, this is why we're assuming they're trying to convince us.
I believe it's the humans who made it that are trying to trick us.

The argument is whether we could identify reasoning based off the output/behavior - not whether we can always know.
My argument is that we can't ever know, though. Not just off behavior alone.

So if you give an example of a reasoning box that purposefully hides its reasoning, this is not an argument. You cannot defeat my argument with a contradicting example.
The example is not the argument, it's an example. The argument is that it is possible to construct a mechanism that merely appears to reason without actually doing so.

You claim we can be tricked into thinking it's reasoning. I'm arguing the only things that can actually trick us is something that is actually reasoning. Hence, this is why we're assuming they're trying to convince us. I don't see the confusion.
The question is, "given arbitrary an arbitrary system, can we determine whether it reasons by just observing its behavior?" If you pose as an assumption, "there could exist a system that reasons and which attempts to convince us that it reasons", that doesn't get us any closer to answering the question with either yes or no, because I could pose an equally valid assumption: "there could exist a system that doesn't reason and which attempts to convince us that it reasons". From our vantage point as researchers studying a hypothetical system that's performing tricks for us, how could we tell whether it's a reasoning system trying to prove itself, or a fake one trying to deceive us?

Again, we're circling back to this "you can't know for 100% certainty" nonsense. If the list being infinite means we know 100%, but finite means <100%, then there can be a list long enough such that we know with 99.999999% certainty due to convergence.
See, it's worse than that, actually. When you probe the universe, you don't assume the universe may be trying to trick you. It might be the case that all your perceptions are fake, but you'd have no defense against that, anyway. But, here, LLMs are by their very nature just a clever statistical trick. They exploit that human communication overall can be predicted based on all previous communication.

I'm saying a list of carefully-selected prompts can make you certain.
What do you do once the list is known?

I believe I gave a stricter definition to reasoning earlier in this debate. I do not see how ChatGPT and the other AI models are not reasoning following this definition.

When you or I debug code.. is this not a reasoning process? The AI doesn't always know what in the code is wrong - it's not a God. It'll often give me code snippets back with debugging outputs and such, asking me to run the code and return the debug information.
I don't agree that your definition is stricter (definition A is stricter than definition B if A includes everything B includes but B includes things A doesn't) than mine, I think they're incomparable in that sense.

Is this not overcoming the inability to interact with the real world to an extent? Is this not getting presented information, deducing the next steps, interacting with the problem, then using the new information to make deductions?
I don't know. I don't know what it's doing. I know what CASs and theorem provers do, because I've studied them, to some extent. If I go off my intuition, my gut tells me they're not reasoning, because they make mistakes. They change facts between the start and the end of a procedure. That means they don't have strict rules to operate on symbols, and so they're not reasoning, but rather doing something else that superficially resembles it.

Secondly, all data is represented in symbols. Simply because the symbols in this case happen to be language doesn't diminish it.
I vehemently disagree. Language is for sure a useful tool to communicate ideas, but that's only because we lack brain-to-brain interfaces. If we could simply stream our thoughts directly to each other we'd think of language as a hopelessly inadequate replacement. You would not be able to accomplish even the most mundane of tasks if the signalling for your nervous system was English words.
I don't recall who it was who said it, but communication (i.e. the reliable transmission of a thought from one brain to another) through language is inherently impossible. I need to take my thought, try to guess which words you might understand, put it into words using my own flawed understanding of them, speak it in a few seconds by clumsily excreting air and flapping my meats at themselves, the sound has to travel through a noisy environment and then reach your ears, damaged by lack of care after who knows how many years, then you use your own misunderstandings to parse my misheard words and then after that you compensate using your own idea of what my misunderstandings may be. If the thing that reaches your brain has more than passing resemblance to what was in my brain originally, it's a miracle.
The square of the hypotenuse is equal to the sum of squares of the catheti

You're.. 100% certain of this? Are you sure you're not mentally ill and actually inventing crazy mathematics that aren't real in your head?

Are you certain that this isn't a simulation and nothing that happens in this world is actually real?

Can you prove to me that there is no mathematics defying magic that exists which could provide a counter-example that would disprove you?

There is literally an infinite number of things that can be brought up which, if true, would disprove your claim.

How you can be 100% certain of that claim is beyond me. If you straight up asked me, I'd have to admit that I don't actually know if this is true if I go by your logic.

I believe it's the humans who made it that are trying to trick us

Lmao, doesn't change anything. I'm arguing if we can be adequately tricked, then it probably is reasoning.

My argument is that we can't ever know, though. Not just off behavior alone.

Yes, but your only argument to support this is a machine that can trick us so thoroughly, that I argue it must be reasoning.

You claim this machine would have to mimic human behavior exactly for infinite time, which makes no sense to me. This is one particular theoretical method which would likely fail 100% of the time (if it were possible to implement) on any reasoning machine that can actually reason (other than humans).

The argument is that it is possible to construct a mechanism that merely appears to reason without actually doing so.

I don't believe you've proven this point to any good satisfaction.

You only say that this is possible, but you provide no alternative to reasoning that could allow a black-box to solve novel problems of many varieties, communicate as needed, and exhibit logical outputs.

From our vantage point as researchers studying a hypothetical system that's performing tricks for us, how could we tell whether it's a reasoning system trying to prove itself, or a fake one trying to deceive us?

How can we tell other humans are conscious or not?

You simply can't, but that doesn't mean you can't be fairly certain. Again - the difference between logical certainty and practicality. Not everything in science can even be known 100% - there are many things we are just fairly certain are true.


So to me, saying a machine could possibly trick us into thinking it can reason is the same as saying, "Well, the universe could be a simulation". There's just no convincing evidence, and the evidence I have against this claim is that we cannot even imagine any system that can solve novel problems and other things I mentioned with "fake" reasoning.

If an LLM solved a truly novel problem, would that convince you? Of course, we'd have to define a truly novel problem.

It might be the case that all your perceptions are fake, but you'd have no defense against that, anyway

You are literally using the idea of falsification to throw away the argument that all perceptions are fake. This undermines everything you were saying about not knowing whether the teapot orbits Jupiter.

But, here, LLMs are by their very nature just a clever statistical trick. They exploit that human communication overall can be predicted based on all previous communication.

They do math and other complex tasks. I don't believe you've used the stronger mathematical models, but they have very strong reasoning for complex mathematical/engineering problems, such as taking into account variables and other things that would have to be deduced for that particular situation.

I just don't know why you think human brains work fundamentally different. Many things our brains do are clever tricks that don't directly tie back to reasoning. At what point does a clever statistical trick become just.. reasoning?

I don't believe the human brain simply developed reasoning by magic. Evolution cannot create reasoning with the intention of creating reasoning. Every step must provide some value now. In that regard, I believe our reasoning developed through similar tricks.

What do you do once the list is known?

Then test the black-boxes with each prompt on the list?

my gut tells me they're not reasoning, because they make mistakes

I'm glad that's not the standard for determining if humans reason..

They change facts between the start and the end of a procedure

Again, low powered models meant for light work. You can see humans making terrible mistakes like that regularly up until a certain IQ.

Again, bad reasoning is still reasoning. Your ability to keep track of variables and symbols can be independent of your reasoning capabilities.

I vehemently disagree. Language is for sure a useful tool to communicate ideas, but that's only because we lack brain-to-brain interfaces.

I don't see why these LLMs would be limited to human language. They can code, understand math/science, look at numbers, etc..

Perhaps we could create some mathematical equation or scientific diagram such that, when decoded, is a message (maybe not even a message in a language but a message through some expression of thought).

I don't see why o1 or maybe even Sonnet could not impress us with such a task as they currently are.


Again, if you incorporate other algorithms for sight/hearing/etc.. into these AIs, it doesn't have to receive that information via human language to understand it. It can be integrated more thoroughly than that, just as our eyes and ears are integrated on some low-level circuit.
You're.. 100% certain of this? Are you sure you're not mentally ill and actually inventing crazy mathematics that aren't real in your head?
That "aren't real"? I don't know what it means for mathematics to be real. Tautologies are true independently of any facts. If I'm crazy and thinking up tautologies, then so be it. I can still rely on their truth.

Are you certain that this isn't a simulation and nothing that happens in this world is actually real?
No, and it doesn't matter.

Can you prove to me that there is no mathematics defying magic that exists which could provide a counter-example that would disprove you? There is literally an infinite number of things that can be brought up which, if true, would disprove your claim.
If there was one, it would mean the law of non-contradiction is invalid, and so such an example would simultaneously refute and affirm the Pythagorean theorem.

How you can be 100% certain of that claim is beyond me. If you straight up asked me, I'd have to admit that I don't actually know if this is true if I go by your logic.
Okay, but, it is knowable in the strictest sense of the word.

Lmao, doesn't change anything. I'm arguing if we can be adequately tricked, then it probably is reasoning.
What you said was "I'm arguing the only things that can actually trick us is something that is actually reasoning", and I agreed with you. It's not the machine tricking us. The machine doesn't want anything, it's just doing what it does. It's people tricking us. They're also the ones doing the reasoning. (More precisely, they're directing the machine to reuse other people's reasoning.)

You claim this machine would have to mimic human behavior exactly for infinite time
I specifically said for a finite amount of time. You could never test a possibly-reasoning system for an infinite amount of time. That's the point. If you can only perform a finite number of tests then someone with more resources than you can make a machine to pass your tests. It's finite vs. finite, it's just a matter of who can pile on more effort.

You only say that this is possible, but you provide no alternative to reasoning that could allow a black-box to solve novel problems of many varieties, communicate as needed, and exhibit logical outputs.
I have never granted that LLMs can solve novel problems.

I honestly don't understand you. First you talk about how stupid people are, but then apparently that idea goes out the window as soon as the topic of judging the intelligence of a machine comes around. What you actually mean by "solve novel problems of many varieties, communicate as needed, and exhibit logical outputs" is looking at the tricks a machine performs and going "yeah, that looks like reasoning". You have no criteria for judging reasoning. What makes you so certain that you couldn't be tricked? Please answer that question.

How can we tell other humans are conscious or not? You simply can't, but that doesn't mean you can't be fairly certain.
Honestly, that's probably one of the things I'm the least certain of. I'm trying to think, and I can't think of anything that's more inscrutable to me than other people's minds. Terrible example, just awful.

There's just no convincing evidence, and the evidence I have against this claim is that we cannot even imagine any system that can solve novel problems and other things I mentioned with "fake" reasoning. If an LLM solved a truly novel problem, would that convince you?
There is evidence that would convince me. Someone would just need to explain to me the workings of an LLM in a way that makes sense and that convinces me that they do reason. No externally observed behavior would be evidence enough, but that doesn't mean no evidence would be enough.

I don't believe the human brain simply developed reasoning by magic. Evolution cannot create reasoning with the intention of creating reasoning. Every step must provide some value now. In that regard, I believe our reasoning developed through similar tricks.
I have no idea what your point is. I don't think reasoning as I have defined it is the product of evolution, but rather... well, almost a cultural artifact. I think it's almost definitely a consequence of language.

Then test the black-boxes with each prompt on the list?
Hmhmhm. Okay. So when they inevitably do the test perfectly you'll conclude that they reason, no matter what else?

I'm glad that's not the standard for determining if humans reason.
If you want to consider yourself as non-reasoning because of that, feel free. Since I'm privy to my own internal workings, I'll continue to consider myself reasoning.

Again, bad reasoning is still reasoning. Your ability to keep track of variables and symbols can be independent of your reasoning capabilities.
Sure. I'll grant that a true reasoning system with imperfect memory could behave like that. Which is why this is not the main rationale why I don't believe LLMs reason, but rather a simple hunch, had while not having enough information to make the true judgement call.

I don't see why these LLMs would be limited to human language. They can code, understand math/science, look at numbers, etc..
Those are also human languages anyway. But the reason why they're limited to them is that that's what makes them "language" models. "Language" doesn't just mean any communications protocol. They're trained on language and they operate on language.

Again, if you incorporate other algorithms for sight/hearing/etc.. into these AIs, it doesn't have to receive that information via human language to understand it. It can be integrated more thoroughly than that, just as our eyes and ears are integrated on some low-level circuit.
OK... But now you're not talking about artificial restrictions placed on these models. These are inherent limitations. You can't take an LLM and train it to understand raw streams of pixels. A multimodal model (what a horrible phrase) is a different beast from an LLM. Yes, they're all neural networks, but you may as well say that they're all computer programs.
(Just so we're clear, I'm pretty sure LLMs "look" at pictures by having a helper model jump in and describe them to them.)

PS: Please don't try to reply to every single paragraph. Let's keep it reasonable.
I reply to everything I disagree with, it just happens to be a lot of what you're saying..

Tautologies are true independently of any facts

They cannot be true independently of whether or not logic is true. If something requires any assumptions in order to work, then there is some universe/condition/scenario in which that assumption is wrong.

If logic itself cannot be used to deduce truths, then tautologies are not "true".

We're in a simulation. The real world actually defines circles as having 0 edges while simultaneously having 12 edges. This is valid logic in the real world, wake up from the matrix.

such an example would simultaneously refute and affirm the Pythagorean theorem.

Refute, yes, affirm? No. It only means that the Pythagorean Theorem works "sometimes". Magic and logic combine to form new logic.

It's people tricking us

That's an odd position. It doesn't matter how the black box was created, we are testing it with no people inside of it.

It's finite vs. finite, it's just a matter of who can pile on more effort.

Again, technically true vs practically true. This also isn't a valid argument against LLMs if they solve a novel problem - as it cannot reuse human reasoning.

I have never granted that LLMs can solve novel problems.

I specifically offered the idea that if a blackbox can solve a novel problem, then that is evidence of reasoning (not enough to do just once maybe).

IF an LLM solved a novel problem, your entire argument against them hardly works at all.

First you talk about how stupid people are, but then apparently that idea goes out the window as soon as the topic of judging the intelligence of a machine comes around

Don't mistake me talking about humanity's "default" or tendency towards idiocy as if I'm saying all humans are idiots or that humanity can only accomplish idiotic things.

We become better through generational knowledge and science.

You have no criteria for judging reasoning. What makes you so certain that you couldn't be tricked?

If I define "red" to be within a certain frequency range then measure light to categorize it, how can I be fooled?

I defined reasoning, therefore I can test to see if it can do the things the definition requires. It'll reuse human reasoning to fool me? No problem, generate novel problems that require new reasoning.

I don't see where in this process I can be tricked, especially if it performs such reasoning for novel problems many times across different fields (if we wanted to be super sure).
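The "red" analogy above, spelled out as code just to show the "define the criterion first, then measure against it" idea. The wavelength band is the rough conventional one for visible red, and the function name is made up:

```python
# Fix the criterion in advance, then classify measurements against it.
RED_BAND_NM = (620.0, 750.0)   # approximate visible-red wavelengths, in nanometres

def is_red(wavelength_nm: float) -> bool:
    low, high = RED_BAND_NM
    return low <= wavelength_nm <= high

print(is_red(650.0))  # True
print(is_red(530.0))  # False (green-ish light)
```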

I can't think of anything that's more inscrutable to me than other people's minds

... We're over 99% the same, and the idea of consciousness has existed for longer than we have. Clearly other people have it (though you could argue not everyone does).

I don't think reasoning as I have defined it is the product of evolution, but rather... well, almost a cultural artifact

... I don't know what to say to this.

So when they inevitably do the test perfectly you'll conclude that they reason

If there's other factors known they'll be taken into account. Why would it be inevitable they test perfectly?

They're trained on language and they operate on language

Obviously "LLM" is language model, but what is language. Symbols. They are trained on symbols. All data is really just symbols.

There's no reason I couldn't come up with a clever way to communicate with the LLM via a new "language" it was never trained on as it could understand through reasoning.

Also, you literally said, "I think it's almost definitely a consequence of language."

You can't take an LLM and train it to understand raw streams of pixels

I don't see why not? What would this "raw" stream of pixels be? Give it the data and train it. However, I'm not sure what in the world you would train it to do with that data.

Just so we're clear, I'm pretty sure LLMs "look" at pictures by having a helper model jump in and describe them to them

Yes, this is accurate. This is also how our brains function essentially.
They cannot be true independently of whether or not logic is true. If something requires any assumptions in order to work, then there is some universe/condition/scenario in which that assumption is wrong.
Axioms are not derived observationally from the universe, they're just sentences. That said, it is perhaps possible to conceive of an axiomatization of logic where propositions are simultaneously fully true and fully false. I don't know how, but I'll grant it. In the axiomatic system of Euclidean geometry, the Pythagorean theorem is still true.

Refute, yes, affirm? No.
Yes, because theorems are tautologies. x=x. If you can find an example for which the tautology is false then that means you've found an example where x!=x, so you've demolished both the principle of non-contradiction and the principle of identity, so truth and falsehood are no longer meaningful concepts that can discriminate between states of affairs.

It doesn't matter how the black box was created, we are testing it with no people inside of it.
You're switching between two different senses of the word "trick" in the same argument. "Trick" as in "to deceive", and "trick" as in "special maneuver".
"I'm arguing the only things that can actually [deceive] us is something that is actually reasoning." Correct, more or less. Only humans would try to deceive in this way. Why would a human need to be inside the box in order for it to be a deception?

Don't mistake me talking about humanity's "default" or tendency towards idiocy as if I'm saying all humans are idiots or that humanity can only accomplish idiotic things. We become better through generational knowledge and science.
Oh. Then... my original statement was correct after all. Humans are masters of reasoning.

If I define "red" to be within a certain frequency range then measure light to categorize it, how can I be fooled? I defined reasoning, therefore I can test to see if it can do the things the definition requires.
We are just going around in circles on the same topic. You believe it is possible to test for the presence of reasoning in a behavior without the possibility of incorrect judgement, and I disagree because I don't see how incorrect judgement couldn't be a possibility. We're not talking about a physical quantity here, we're talking about detecting a process by means of its results. You say it's possible, while I say the only real way is to observe the process.

...
I don't know what you find so surprising. I had already said I only assume other humans reason, why is it so surprising that I'm equally assuming they have consciousnesses?
(Assuming in the sense that I grant it with no basis, with the understanding that I have no basis.)

We're over 99% the same, and the idea of consciousness has existed for longer than we have. Clearly other people have it
That's exactly what philosophical zombies would say when they want to trick me into thinking they're human like me.
But more seriously, "clearly"? How could I possibly know that when other people talk about "consciousness" they're talking about the same thing I'm talking about? "Yeah... I guess they say they're conscious when they're awake, and I say the same... so I guess consciousness is just when a human is awake...?" I mean, the reason I think the word "consciousness" has the meaning it has is because I picked it up from other people. I have literally no way to know if the meaning I picked up is the same meaning they have in their brains. Remember what I said about communication being impossible?
At least with color everyone has a shared experience. There's nothing shared about consciousness. Hell, much of human existential angst is about consciousnesses being entirely separate.

... I don't know what to say to this.
Cool, thanks for informing me.

If there's other factors known they'll be taken into account. Why would it be inevitable they test perfectly?
Because you gave the test away! Now all someone needs to do is ask a human to perform it, save their answers, and let the machine replay them when it does the test. If you would fail the machine based on its answers then you would've failed the human, too. If you would pass the machine then you would be fooled.
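A toy sketch of the "replay" attack being described here - canned answers prepared by a human in advance for a known, fixed test. The prompts and answers are placeholders, not real test items:

```python
# A box that passes a known, finite test by replaying prepared answers,
# without doing anything of its own that could be called reasoning.
canned_answers = {
    "Prove there are infinitely many primes.":
        "Suppose there were finitely many; multiply them all together and add 1...",
    "What is 17 * 23?": "391",
}

def replay_box(prompt: str) -> str:
    # Look up the prepared answer; stall on anything off-script.
    return canned_answers.get(prompt, "Interesting question - let me think about it.")

print(replay_box("What is 17 * 23?"))  # "391"
```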

There's no reason I couldn't come up with a clever way to communicate with the LLM via a new "language" it was never trained on as it could understand through reasoning.
Yeah, sure. Oh, you mean you also want it to reply back. Yes, there are reasons why that won't happen. Look, just take your prompt, do (x + k) % 256 to it, encode it in Base64, and see if the model replies in the corresponding ciphertext.
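Something like this, say - just a throwaway sketch, where the key value and the helper names (base64_encode, encipher) are made up for illustration and any Base64 routine would do:

#include <cstdint>
#include <iostream>
#include <string>

// Sketch of the experiment: shift every byte of the prompt by k (mod 256),
// then Base64-encode the result. The question is whether the model can hold
// a conversation in this "language" without being told the trick in plain text.

std::string base64_encode(const std::string &in)
{
    static const char table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    unsigned int val = 0;
    int bits = 0;
    for (unsigned char c : in)
    {
        val = (val << 8) | c;          // accumulate 8 more bits
        bits += 8;
        while (bits >= 6)              // emit 6 bits at a time
        {
            bits -= 6;
            out.push_back(table[(val >> bits) & 0x3F]);
        }
        val &= (1u << bits) - 1;       // keep only the leftover bits
    }
    if (bits > 0)                      // flush the remainder, zero-padded
        out.push_back(table[(val << (6 - bits)) & 0x3F]);
    while (out.size() % 4 != 0)
        out.push_back('=');
    return out;
}

std::string encipher(const std::string &prompt, std::uint8_t k)
{
    std::string shifted = prompt;
    for (char &c : shifted)            // (x + k) % 256 on every byte
        c = static_cast<char>((static_cast<unsigned char>(c) + k) % 256);
    return base64_encode(shifted);
}

int main()
{
    // k = 42 is arbitrary; any agreed-upon key works for the experiment.
    std::cout << encipher("What is the capital of France?", 42) << '\n';
}

Then see whether the reply ever comes back in the same ciphertext.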

Also, you literally said, "I think it's almost definitely a consequence of language."
Yes. *On human brains* it's the consequence of language. That is, reasoning is humans using their intelligence to think about problems using language. Just taking language and doing any random thing to it won't result in reason.
In the axiomatic system of Euclidean geometry, the Pythagorean theorem is still true

This doesn't help your case for two reasons:

1. You can't know that the Pythagorean theorem is "true" because you don't know if the axiomatic system of Euclidean geometry actually exists. Hence you can say IF x then y, but you cannot say x therefore y, because x has not been proven (and I argue cannot be proven any more than the teapot not existing).

2. We can imagine a system that corrupts the integrity of all axiomatic systems such that they can no longer be valid.

"Trick" as in "to deceive", and "trick" as in "special maneuver"

I'm saying it doesn't matter. If a black-box can "fool" us into thinking it's reasoning, it matters little if it's because the creators intended to trick us or if the black-box did.

My argument is that the act of fooling us (given a rigorous investigation) proves it was actually reasoning and not a trick. In other words, they tricked themselves into thinking it wasn't reasoning.

You can imagine seeing humanity evolve, and, having seen every step of the way, never become convinced that humans can reason because it never looked like something you'd call reasoning at any point beforehand.

Oh. Then... my original statement was correct after all. Humans are masters of reasoning

I refuted that not because there aren't people who are masters at reasoning (as far as we know, we can only compare reasoning skills with other humans), but because this isn't all humans, nor is it the norm.

If you randomly selected a human, you'd likely get one that was okay or bad at reasoning. If you randomly selected a horse, you'd probably get one that was good at running (of course ignoring the fact that we make this distinction based off comparing it to species that aren't horses).

Hence we can say "horses are masters of running" in a stereotypical sense, but cannot say "humans are masters of reasoning" in the same sense. However, both claims are factually incorrect anyway as they claim an entire species is X, when counterexamples to that claim exist.

I didn't think this needed an explanation.

without the possibility of incorrect judgement

I said you could have a false negative - easily in fact.

A false positive though? I don't see how it could happen, though I grant it may be possible, as I cannot disprove this anymore than I can disprove the teapot.

But you've given me no argument for how a nonreasoning machine can not only appear to reason, but perform new reasoning required for solving a novel problem.

I had already said I only assume other humans reason, why is it so surprising that I'm equally assuming they have consciousnesses?

Again, I have NO CLUE where you're drawing the line.

When you say you "assume" people reason or have consciousness, are you saying that, in a logical sense, you can't know, but you're confident enough to assume it is true for all intents/purposes?

Or are you saying because it cannot be proven (yet anyway) that you kinda assume it but you don't actually believe it (you wouldn't be shocked if other people didn't have this)?

Remember what I said about communication being impossible?

Mutual understandings are a foundation of our relationships.

"Hey did you try that new chip flavor?"
"Oh yea, that gave me a weird feeling in the back of my throat."
"Omg, me too, I was just about to say it left a weird tingling back there, it was so weird"

Can I know my experience and yours were the same with 100% confidence? No, but to think every single thing we share as human experiences that we can relate to each other with can all just be massively and completely different is the most extreme position ever.

You might say you don't take this position, you only acknowledge its possibility, but that's the same thing with the teapot! When do we cross the line??

When do we switch from "I don't know" to "Actually, you're an idiot if you don't know"?

I can grant you that definitely some human experiences will be different, but to think every single one can be? That at the very least you don't share the same (or extremely close) experiences with many other people?

Consciousness has been talked about in philosophy for centuries, referring to the same "feeling" and ability.

Cool, thanks for informing me

It just seems like a ridiculous viewpoint.

Now all someone needs to do is ask a human to perform it, save their answers, and let the machine replay them

Clearly, the blackbox cannot call upon help from a reasoning machine.

This is like saying that just because a student passed a test in class doesn't mean they know the material because they might have cheated.

Like, fine? But I can just make the conditions more rigorous until cheating is impossible?

The blackbox cannot contain anything other than the reasoning machine to be tested and is not allowed to make (and will be prevented, through the box itself, from making) any connections to external devices.

This isn't an argument of deception you're making, but straight up cheating.

Look, just take your prompt, do (x + k) % 256 to it, encode it in Base64, and see if the model replies in the corresponding ciphertext

If I use some arbitrary encryption, then it has to crack it - which is asking more than just for it to reply in corresponding ciphertext.

Yes. *On human brains*

You said you don't know if humans actually reason (other than you of course, master reasoner!), but that you'd be convinced an AI can reason if it emulated a human response to all stimuli, but also humans reason because language, but only human brains can do that.

All big claims, no real evidence for any of it.

MY evidence for why we cannot be tricked is because we cannot even imagine (outside of non-applicable and odd ideas that wouldn't apply to real testing - like asking a human for help) how we can be tricked. If it does all the things we say reasoning and only reasoning can do, then what else could it be?

You're only arguing we can't have such a list of things that, if all done, would mean the thing that did them was reasoning. But you provide no actual alternative to that answer - as in you provide no reasonable mechanism other than reasoning that would actually solve such a list filled with novel problems and other things that require logical outputs (and from varying topics, so it can't all be chess/math or something).
you don't know if the axiomatic system of Euclidean geometry actually exists
Of course it exists. An axiom is a piece of language. It exists as soon as you state it.

But you've given me no argument for how a nonreasoning machine can not only appear to reason, but perform new reasoning required for solving a novel problem.
I don't know if a non-reasoning system can solve novel problems or not. Maybe it can, maybe it can't, maybe it can some problems but not others. My argument doesn't hinge on it; what I originally said on the point of novel problems is that LLMs can't solve them. That was it.

When you say you "assume" people reason or have consciousness, are you saying that, in a logical sense, you can't know, but you're confident enough to assume it is true for all intents/purposes?

Or are you saying because it cannot be proven (yet anyway) that you kinda assume it but you don't actually believe it (you wouldn't be shocked if other people didn't have this)?
I suppose the former would be more accurate. I don't have the answer to the question, but I can make parsimonious inferences based on observable facts. They look like me -> they probably are like me -> they probably function similarly to me. However, I have a clear demarcation in my mind between inferred and deduced propositions, with inferred propositions being considered less reliable.
There's a joke that goes like this: A biologist, a physicist, and a mathematician are riding on a train to Scotland. They look out the window and see three black sheep. The biologist says "huh. Sheep in Scotland are black." The physicist corrects him, "no, no. At least three sheep in Scotland are black." The mathematician chuckles at his friends' muddled reasoning and chimes in, "at least three halves of three sheep in Scotland appear black when observed from our seats."

Consciousness has been talked about in philosophy for centuries, referring to the same "feeling" and ability.
We're talking about the foundations of our minds, here, not the flavor of chips. When you taste a chip you're at least probing reality in a way anyone else can reproduce. No such thing happens when you have your thoughts. There is no shared context whatsoever there, beyond the fact that we're both apparently human.

Clearly, the blackbox cannot call upon help from a reasoning machine.

This is like saying that just because a student passed a test in class doesn't mean they know the material because they might have cheated.

Like, fine? But I can just make the conditions more rigorous until cheating is impossible?

The blackbox cannot contain anything other than the reasoning machine to be tested and is not allowed to make (and will be prevented, through the box itself, from making) any connections to external devices.

This isn't an argument of deception you're making, but straight up cheating.
Uh... Huh? You do know that computers have storage, right? A computer doesn't need to talk to another computer to answer a question without computing its answer.
Like, suppose you have a take-home programming test and you don't want to implement the solution. If you know your program will be tested automatically and you know the test cases and have access to an oracle, you can just do this:
int foo(int n){
    // Answers hard-coded straight from the oracle for the known test cases;
    // foo() never actually computes anything.
    if (n == 6515)
        return 6846;
    if (n == 5878846)
        return 56468;
    //...
}
That your program passed the test doesn't prove that it correctly implements the foo() function.

If I use some arbitrary encryption, then it has to crack it - which is asking more than just for it to reply in corresponding ciphertext.
So what makes you think it'd be able to respond if you talk to it an arbitrary new language?

You said you don't know if humans actually reason (other than you of course, master reasoner!), but that you'd be convinced an AI can reason if it emulated a human response to all stimuli, but also humans reason because language, but only human brains can do that.

All big claims, no real evidence for any of it.
Claim? When did I claim it? When did I argue for it? When did I use it in another argument?
Also, I never said that only human brains can do it. I corrected you when you tried to use my statement and extrapolate it to LLMs. My statement was only about human brains, or at the very most about animal brains. That's obvious, given it was in response to what you said about evolution. You can't just run with that and try to use my own words against me. "You said reasoning arises from language, and LLMs use language, so you must concede that LLMs reason!" That's nonsense.

But you provide no actual alternative to that answer - as in you provide no reasonable mechanism other than reasoning that would actually solve such a list filled with novel problems and other things that require logical outputs (and from varying topics, so it can't all be chess/math or something).
I'd like to know what you mean when you say that I "provide no alternative", when I think I've done so quite a few times already. You don't need reasoning to correctly answer a questionnaire, you just need good memory.
Of course it exists

Exists as in actually valid.

I don't know if a non-reasoning system can solve novel problems or not

If LLMs get good at solving novel problems (if they aren't somewhat capable already), then you still wouldn't know if they can reason.

So let me get this right:

No output, unless infinitely long, could convince you of a blackbox being able to reason (given no-cheating constraints).

Only dissecting the internal process of the blackbox would let you be able to "know" if it's reasoning, even though you'd have absolutely no clue what you're looking for (because you couldn't dissect the human brain at this point and conclude it can reason).

Sounds like if an alien species came to Earth, you wouldn't know if they could reason - no matter how much they appeared to - because you couldn't dissect their brain.

Doesn't matter that they built spaceships, navigated the universe, have super-advanced tech, etc. Because... reasons.

There's a joke that goes like this

I think this joke points out a lot of things.

There are 3 black sheep spotted - this is hardly evidence of much other than that those sheep are black (and you can argue maybe their vantage point could make them see it wrong, but that's beside the point).

Let's say they travel all around Scotland, and every single sheep they encounter is black. They travel the entire country and can't find one that isn't black.

Well, this doesn't prove all sheep in Scotland are black, only that the sheep they've seen in Scotland are black (though the number is large).

Eventually, they go around to every farm/barn/you name it they can all over the country, and can't find a single white sheep.

You ask around, no one has seen white sheep in Scotland. You look online (a collection of people's experiences and information), you can't find any videos or anything of white sheep in Scotland.

At what point do you just accept that sheep in Scotland are black? And I don't mean you just "believe" it, but you can say it is a certifiable fact.

I cannot get an answer from you. The only thing you apparently believe is that the Pythagorean theorem is true given a certain set of parameters, which you can't even prove describe the universe we live in.

This is no way to have a debate in my opinion.

We're talking about the foundations of our minds, here, not the flavor of chips

...What?? Flavor occurs in the mind. Sight occurs in the mind. Hearing occurs in the mind. Pleasure from sex occurs in the mind.

What in the world can we experience that doesn't occur in the mind?

So no, you can't just pass off chip tasting as reproducible? How can you know if you've reproduced the same experience?

There is no shared context whatsoever there, beyond the fact that we're both apparently human

We can no more describe thought and similar internal experiences than we can chip flavor. Which is to say... we can describe them adequately enough to tell whether we had a similar experience or not.

You do know that computers have storage, right?

How does that help? It can't store the answers to questions literally created just now for it.

So what makes you think it'd be able to respond if you talk to it an arbitrary new language?

You're saying I could not train the AI on the new language? Or that it wouldn't be able to deduce this new language to understand it?

When did I claim it? When did I argue for it?

You said Humans can reason - reasoning comes from language - can't know if something reasons without dissecting its internal processes.

You've basically said that you cannot be convinced anything other than humans reason at this point in time since your requirements for testing reasoning are impossible with current technology.

Again, this is why I say you would be FORCED to argue that you have no idea whether an advanced alien species can reason - which is ridiculous.

I'd like to know what you mean when you say that I "provide no alternative"

Sorry, no good alternative. None of this nonsense that could/would be easily accounted for in an actual test.

No they can't "remember" the answers to questions never answered before - that's impossible.
They can't send out or receive any signals/data/etc (will be reinforced).
They won't peer into the 5th dimension and ask Solomon The Great the questions (oh no, I can't stop this one! you're right! we'll never know if they can reason now!).

Give me an ACTUAL alternative to reasoning where, without literal nonsense cheating, a blackbox could solve novel problems of several different fields and otherwise "appear" to reason and be logical.

Unless you think the LLM could solve a novel problem I came up with on my own by sending it to a top professor and getting the answer back to show me within 30 seconds (which would be more impressive than actual reasoning).
Exists as in actually valid.
I don't know what it means for an axiomatic system to be valid.

So let me get this right: [...]
Did you have some kind of point? It sounds like you're complaining that I have a higher standard of evidence than you.

At what point do you just accept that sheep in Scotland are black? And I don't mean you just "believe" it, but you can say it is a certifiable fact.
"Certifiable fact"? Well, I assume we're talking about some kind of legal process, so we'd have to control all Scottish borders and do a census on all Scottish farms to find all sheep. If we don find any white sheep then we can certify that Scotland is 100% white-sheep-free.

...What?? Flavor occurs in the mind. Sight occurs in the mind. Hearing occurs in the mind. Pleasure from sex occurs in the mind.

What in the world can we experience that doesn't occur in the mind?

So no, you can't just pass off chip tasting as reproducible? How can you know if you've reproduced the same experience?
You can't, but it's completely irrelevant, because the subjective experience has an objective foundation. There's a specific combination of chemicals that your tongue is detecting to produce the experience of flavor. You and I can agree to call that flavor "sour", and that assignment works because the next time you taste a similar flavor you can call it sour and I can understand what you mean without needing to taste it, but I can still taste it to corroborate your description. We can disagree on whether the sour taste is pleasant or unpleasant, but the sourness is objective.

There is nothing in the world external to our minds that correlates to the subjective experience of consciousness.

How does that help? It can't store the answers to questions literally created just now for it.
You: Also, your argument is inherently flawed - as it takes a brute-force method to determine reasoning. I'm saying a list of carefully-selected prompts can make you certain.
Me: What do you do once the list is known?
You: Then test the black-boxes with each prompt on the list?
Me: Hmhmhm. Okay. So when they inevitably do the test perfectly you'll conclude that they reason, no matter what else?
You: If there's other factors known they'll be taken into account. Why would it be inevitable they test perfectly?
Me: Because you gave the test away! Now all someone needs to do is ask a human to perform it, save their answers, and let the machine replay them when it does the test. If you would fail the machine based on its answers then you would've failed the human, too. If you would pass the machine then you would be fooled.
You: Clearly, the blackbox cannot call upon help from a reasoning machine. [...]
Me: Uh... Huh? You do know that computers have storage, right? [...]

So, you give the machine your questionnaire and it fails. The first time. The second time its designers already know your questionnaire and they've prepared the machine for it. Are you going to keep making brand new carefully-selected questions forever? See what I meant about it just being a matter of who can pile up more effort?

You said Humans can reason - reasoning comes from language - can't know if something reasons without dissecting its internal processes.

You've basically said that you cannot be convinced anything other than humans reason at this point in time since your requirements for testing reasoning are impossible with current technology.

Again, this is why I say you would be FORCED to argue that you have no idea whether an advanced alien species can reason - which is ridiculous.
Sorry, I'm not seeing what your point is. That aside, why would I need to argue that I don't know something? What, you don't believe that I don't know it? Well, okay. Why would I care that you don't believe that I don't know it?
I don't know what it means for an axiomatic system to be valid.

You can imagine any axiomatic system you want; that doesn't make it real. If it's not real, the 'truths' derived from it aren't real either, practically speaking.

You can claim they're true within that universe, but how does that help? In my axiomatic system, only black people can say truths, therefore we're both wrong about everything we ever say.

It sounds like you're complaining that I have a higher standard of evidence than you.

The entire point is that you haven't proved a single thing you've said. The mere fact that I cannot disprove you with 100% certainty has you saying, "see?! we can't know if they reason!"

I'm saying you can't know anything with 100% certainty. Your only comeback was that if you have an axiomatic system you made up, then truths derived must be true because the system must be true in your imaginary world.

"Certifiable fact"? Well, I assume we're talking about some kind of legal process

Why are facts so difficult to comprehend all of a sudden?

There is nothing in the world external to our minds that correlates to the subjective experience of consciousness.

How? We have literally near-identical brains running on the same physical principles and electricity.

Taste is fine because the same chemical caused the sensation but consciousness isn't when the brain literally runs exactly the same?

If I have two computers that use the exact same components, I should expect one to run just fine but the other to explode?

Are you going to keep making brand new carefully-selected questions forever?

You answered it yourself, so congrats! You could also just say each machine gets one chance - so the designers don't even have an incentive to do what you're saying.

See what I meant about it just being a matter of who can pile up more effort?

If you can't test with new questions, then you don't test and it's not even up for consideration. So no, I don't see how the person with "the most effort" wins here.

No matter how much effort you put in, your machine won't pass the test without reasoning skills.

Why would I care that you don't believe that I don't know it?

Because it's so obvious you do know it.

There's no reason to "assume" other people reason by your own criteria, yet you do so. You would assume an alien species that developed tech to travel the universe and was able to track us to be able to reason - but you'd be forced to take the "idk" position if someone debated you about it.

This is why I say we need to talk about practicality, not some 99.999% vs 100% nonsense.


You've ignored the questions relating to practicality, evidence, and knowing things, which, again, is no way to have this debate.
You've ignored the questions relating to practicality, evidence, and knowing things, which, again, is no way to have this debate.
If I've skipped questions it's because I figured they were indirectly answered elsewhere or I didn't have enough to say about them. I think you could probably answer them yourself by now, but if you feel they're important, please restate them.

You can imagine any axiomatic system you want; that doesn't make it real. If it's not real, the 'truths' derived from it aren't real either, practically speaking.
I still don't understand what you mean. "Real" and "valid" are not properties of axiomatic systems.

You can claim they're true within that universe
Well, no, because axiomatic systems aren't "true within a universe". They're not even true or false unto themselves. An axiomatic system defines (implicitly) the set of all true statements within itself. That's it, that's all it does. It's an abstract logical construction, it has nothing to do with the real world. That's evident, because we can define and work with hyperbolic geometry despite our universe being very close to being Euclidean.

The topic of philosophy of mathematics, and what humans do exactly when they do math (e.g. are we inventing or discovering mathematical truths and objects?) is deep, and I don't consider myself an expert on it. Suffice it to say that the view that there could possibly exist a different universe where our mathematical truths don't hold is one I've never heard from mathematicians, only from laymen. The closest you get is mathematical realism, which holds that doing math is probing some aspect of reality that is separate from its physical aspect. Plato never would have said that there could exist a different world of ideas where the square root of 2 is rational.
If you want to talk about which things are true, or knowable, etc. I would really recommend that you go read on the philosophy of mathematics and of science.

The entire point is that you haven't proved a single thing you've said. The mere fact that I cannot disprove you with 100% certainty has you saying, "see?! we can't know if they reason!"
LOL, we're just having a discussion, not arguing a mathematical proof. If you find it so frustrating that I won't budge on the topic of the capabilities of LLMs then you can just drop the subject.

I'm saying you can't know anything with 100% certainty. Your only comeback was that if you have an axiomatic system you made up, then truths derived must be true because the system must be true in your imaginary world.
I don't know what to tell you. I know what I know. You have quite an uphill battle to convince me that tautologies are not tautologies.

Why are facts so difficult to comprehend all of a sudden?
Then I don't know what you mean by "certifiable fact". A fact is something that's directly observed. If we look at three black sheep standing in a field then the fact is that three black sheep are standing in a field -- or rather, that we have observed that, but you get what I'm saying, I hope. If we go through every farm in Scotland to conduct a sheep census, the fact is whatever we write down on our census: we counted 5,138,388 black sheep and 0 white sheep. That's the fact. If we then say "there are no white sheep in Scotland", that's not a fact, it's a conclusion, an induction. "Using a specific methodology, we have not counted a single white sheep in Scotland" is the fact.

People misusing these sorts of words is a bit of a pet peeve of mine, because it just leads to confusion. "Evolution is a fact." No. Evolution is an explanation of the evidence. The evidence is the facts.

How? We have literally near-identical brains running on the same physical principles and electricity. Taste is fine because the same chemical caused the sensation but consciousness isn't when the brain literally runs exactly the same? If I have two computers that use the exact same components, I should expect one to run just fine but the other to explode?
But they don't run exactly the same. It's plainly obvious that they don't. Some people are dumber than others, some people are more rash than others, some people are more forgetful than others, etc. It's simply false that brains are identical to one another structurally. Are they different enough that their subjective experience is fundamentally different from mine? How could I possibly know that?

You answered it yourself, so congrats! You could also just say each machine gets one chance - so the designers don't even have an incentive to do what you're saying.
On the contrary. Since all the designers need to do is prepare the machine for your questions, while you have to come up with the questions as well as the answers, the probability that you'll write flawed questions (i.e. questions that seem novel but actually contain a hidden detectable statistical pattern) tends towards 1 as the number of cycles rises, while the probability that they'll mistrain their machine is independent of the number of cycles.
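(To put a rough number on that intuition - purely a toy model, the p here isn't measured from anything: if each fresh questionnaire independently has some probability p > 0 of containing an exploitable flaw, then the chance that at least one of n questionnaires is flawed is 1 - (1 - p)^n, which tends to 1 as n grows, while the designers only have to get their preparation right once, so their risk doesn't compound with n.)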

Because it's so obvious you do know it.
Okay, fine. Be convinced of that, then. Honestly, I have no interest in convincing you of what my internal state is. I'll tell you what it is and you can take it or leave it.

There's no reason to "assume" other people reason by your own criteria, yet you do so. You would assume an alien species that developed tech to travel the universe and was able to track us to be able to reason - but you'd be forced to take the "idk" position if someone debated you about it. This is why I say we need to talk about practicality, not some 99.999% vs 100% nonsense.
Hmm... Okay, then. Let's look at it practically. Practically my answer to the question "do humans reason?" is "I don't know, but I'll treat them as if they do". You're telling me that there's a hypothetical future where LLMs have become good enough that, while I still don't know how they work, my answer to the question "do LLMs reason?" should be the same as that for humans. Why, though? Why should that be my answer? I have very practical reasons to treat other humans with dignity and respect. What reason would I have to respect a computer program?
"Real" and "valid" are not properties of axiomatic systems

Can you really not understand what I'm saying?

The axiomatic system being "real" means the axioms used are actually considered to be true in the world we live in. What else could I possibly mean?

Tigers can be intellectual giants in a given axiomatic system; that doesn't make that system valid in the real world.

Well, no, because axiomatic systems aren't "true within a universe". They're not even true or false unto themselves.

What?? Axiomatic systems are simply a reference frame in which certain axioms are considered true. If those axioms are not actually true, the axiomatic system is not applicable to the real world.

Am I being trolled?

are we inventing or discovering mathematical truths and objects?

This is the dumbest question I was ever asked like a year ago. We're discovering truths, math is the language we use to express these truths.

the view that there could possibly exist a different universe where our mathematical truths don't hold is one I've never heard from mathematicians, only from laymen

Huh? I've heard it from plenty of scientific sources. The math allows for parallel universes in many ways, none of which necessitate the same physics apply to that alternate universe.

If you want to talk about which things are true, or knowable, etc. I would really recommend that you go read on the philosophy of mathematics and of science

No, this is nonsense.

The point of this discussion is not whether or not we can know absolute truths or some idiocy like that, but whether to OUR OWN standard we are able to know things. The same way I know that humans exist and that gravity is real.

Either you're saying that your standard for knowing things is the same as mathematical philosophy (in which case you know literally nothing), or you have a different standard.

Assuming you have a different standard for believing things, this discussion of the theoretical limits of what we actually know is pointless. I've been there many times, it's always a boring discussion.

If you find it so frustrating that I won't budge on the topic of the capabilities of LLMs

You won't budge because you're saying we can't know for certain, yet you won't tell me what standard you use to know things for certain.

I know what I know. You have quite an uphill battle to convince me that tautologies are not tautologies.

Suddenly you know things?

But in my axiomatic system, tautologies are actually deceptions brought forth by the devil? Therefore tautologies are wrong?

If you think that's stupid, then tautologies are equally stupid if you cannot prove the underlying axioms. Therefore, nothing you've said about sqrt(2) being irrational is actually true unless you can prove the axioms.

If you try to prove the axioms, I'll tell you you've never proven it to my satisfaction, because I can imagine some hidden variable that undermines everything you said (hint: this is what you said to me given the blackbox reasoning section).

Then in the end, I will have proven to you that you cannot prove anything. Checkmate, you lose every debate in which you are in the positive.

A fact is something that's directly observed

Has the definition of fact suddenly changed in the past 20 years?

Fact: "a thing that is known or proved to be true."

Whether you directly observed it or not is not a factor in whether it has been proven to be true.

that's not a fact, it's a conclusion, an induction

Conclusion != induction. You can use facts to come up with other facts, this doesn't mean you used induction.

Say you went throughout the country, every single square inch, and found no white sheep. You also know that nothing has come in or out of the country since you started searching. Therefore, you can conclude there are no white sheep in Scotland.

...Or you can conclude you're blind to sheep or something. Whatever you have to do to get out of admitting you know something.

"Evolution is a fact." No. Evolution is an explanation of the evidence. The evidence is the facts.

The explanation is factual.

If I say Sam stole my xbox, alright that's a fact. If I say Sam stole my xbox because he wanted it, suddenly the laws of physics prevent me from knowing this as a fact?!

But they don't run exactly the same. It's plainly obvious that they don't. Some people are dumber than others, some people are more rash than others, some people are more forgetful than others, etc. It's simply false that brains are identical to one another structurally. Are they different enough that their subjective experience is fundamentally different from mine?

Neither are any two components in a computer! One CPU can be undervolted and overclocked to hell, while another of the same CPU can hardly be touched without it throwing a fit.

I still should not expect one computer to run fine and the other to explode.

And as far as human brains go, I'd say the differences between our brains can be accurately represented as the differences between different CPUs of the same kind. Pretty much over 99% the same, but the differences are present. You don't expect one human to feel pain when hit and the other to learn chemistry when hit.

How could I possibly know that?

I'm TRYING SO HARD to determine how you can possibly KNOW ANYTHING! You've argued to me that you don't know anything that isn't a tautology even though your own reasoning makes it such that you can't even know tautologies!?

the probability that you'll write flawed questions (i.e. questions that seem novel but actually contain a hidden detectable statistical pattern) tends towards 1 as the number of cycles rises, while the probability that they'll mistrain their machine is independent of the number of cycles

Again, what is this nonsense?

If you wanna talk theoretically, talk theoretically. But don't suddenly turn around and become pragmatic about the question writer making mistakes?

So if the question writer doesn't make any mistakes (he's fucking God) you'll accept the machine can reason if it passes?

Practically my answer to the question "do humans reason?" is "I don't know, but I'll treat them as if they do"

Complete nonsense. You don't know if Einstein, Newton, Stephen Hawking, etc. are able to reason? You think God shoved the ideas in their head?

The quantum entanglement of my balls and their brain cells produced problem solving ideas?
Has the definition of fact suddenly changed in the past 20 years?

Social media and some political parties do give that impression.
The axiomatic system being "real" means the axioms used are actually considered to be true in the world we live in. What else could I possibly mean? Tigers can be intellectual giants in a given axiomatic system; that doesn't make that system valid in the real world.
What you mean to ask is not whether Euclidean geometry is true, but whether the geometry of the universe is Euclidean. Well, it isn't. The parallel postulate doesn't hold in spacetime, so there are real right triangles whose angles don't add up to 180° and whose side lengths don't satisfy a² + b² = c².
And? Mathematics doesn't investigate the real world. The Pythagorean theorem is true regardless of what shape actual spacetime has.
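(If you want a concrete picture, use a sphere instead of spacetime - not the same physics, but the same idea and easier to visualize. For a right triangle drawn on a sphere of radius R, with legs a and b and hypotenuse c measured along the surface, the relation is cos(c/R) = cos(a/R)·cos(b/R), which only reduces to a² + b² = c² when the triangle is tiny compared to R. The Euclidean theorem stays true as a theorem; it just doesn't describe that surface.)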

Huh? I've heard it from plenty of scientific sources. The math allows for parallel universes in many ways, none of which necessitate the same physics apply to that alternate universe.
A universe having different physical laws is completely unrelated with it having different logical laws.

Either you're saying that your standard for knowing things is the same as mathematical philosophy (in which case you know literally nothing), or you have a different standard.
I reject this false dichotomy. You have failed to convince me that the Pythagorean theorem is false, and so you have failed to convince me that I know nothing.

But in my axiomatic system, tautologies are actually deceptions brought forth by the devil? Therefore tautologies are wrong?
That just means your axiomatic system is inconsistent (i.e. self-contradictory). Since what's true is false, by principle of explosion your axiomatic system allows you to prove any statement, as well as its logical negation, is true.
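(The derivation is short, in case it's useful. Suppose the system proves some statement P and also ¬P, and let Q be any statement at all:
1. P (given)
2. ¬P (given)
3. P ∨ Q (from 1, disjunction introduction)
4. Q (from 2 and 3, disjunctive syllogism)
One provable contradiction and every statement becomes a theorem.)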

If you think that's stupid, then tautologies are equally stupid if you cannot prove the underlying axioms.
It's not stupid. The question of what it means to define an inconsistent axiomatic system is interesting, particularly since the 20th century. Such a system is just not useful, even in theory, because it doesn't allow you to discriminate between true and false statements.
Axioms are not proven. They're true by definition. There's no reason why, say, the law of non-contradiction is true, other than arbitrariness, and it's not inconceivable to define a different axiomatic system that doesn't include an analog to it, but which still permits distinguishing truth from falsehood somehow, or perhaps between three or more different truth values.

You won't budge because you're saying we can't know for certain, yet you won't tell me what standard you use to know things for certain.
Okay, I think I see what's happening. When I say "you can't use this method because you can't be certain of what you learn from it", I don't mean you can't be as certain of the conclusion as you can be of mathematical truths. I mean that the uncertainty is so high that the conclusions are unusable, and I will never accept that methodology no matter what. I won't accept an LLM reasons no matter what questions you ask of it. I think I've already explained in sufficient detail why that is, and I even go over it once more further below. I'm not unwilling to accept an LLM can reason, but a process of interrogation is not the way to convince me.

If you try to prove the axioms, I'll tell you you've never proven it to my satisfaction, because I can imagine some hidden variable that undermines everything you said (hint: this is what you said to me given the blackbox reasoning section). Then in the end, I will have proven to you that you cannot prove anything. Checkmate, you lose every debate in which you are in the positive.
I don't understand what you're trying to do. This whole thing started because you found my requirements for accepting claims about LLMs too strict and you said I can't know anything with 100% certainty, then I gave you an example of something I know with 100% certainty. Now you're saying that I can't prove anything to you? Well, fine. I don't care if you accept that the Pythagorean theorem, or any other theorem for that matter, is true. I'm satisfied that it's true. That I can't convince you of its truth or of my certainty in its truth has no bearing on it.

Has the definition of fact suddenly changed in the past 20 years? Fact: "a thing that is known or proved to be true." Whether you directly observed it or not is not a factor in whether it has been proven to be true.
"Proven" to be true? Well, like I said, I'm a Popperian. I don't believe facts are proven true. As far as I'm concerned the only facts are those I've directly observed and whatever I can deduce from them. Everything else is some vague, unknowable thing. I don't deny there are facts in that miasma, I just don't know what they are.

(I've skipped the next two paragraphs as they're directly answered by this statement.)

And as far as human brains go, I'd say the differences between our brains can be accurately represented as the differences between different CPUs of the same kind. Pretty much over 99% the same, but the differences are present. You don't expect one human to feel pain when hit and the other to learn chemistry when hit.
Okay, fine. You go ahead and say that.

If you wanna talk theoretically, talk theoretically. But don't suddenly turn around and become pragmatic about the question writer making mistakes?
What do you mean? The reasons why we can't know things for certain are practical! If you show me a box with two apples in it (i.e. it is a fact that there's two apples in it) and say "there's two apples in this box", I'll agree with you. It's too direct an observation to take as anything other than a fact. If we're talking about knowing for certain that there's not a single white sheep in an entire country, or about knowing how an algorithm works by probing it with inputs, then I'll tell you no, you might have missed something, I don't accept your conclusion with certainty.

So if the question writer doesn't make any mistakes (he's fucking God) you'll accept the machine can reason if it passes?
Would I know God if I saw Him? :)
What a strange question. I'll say "yes, I'd accept the machine can reason if God says so", but I don't know how that answer helps you at all to move an argument forward.

Complete nonsense. You don't know if Einstein, Newton, Stephen Hawking, etc. are able to reason?
It's sad that you latched on to something we've already gone over, and ignored the main point of that paragraph.


You sound more frustrated with every new post. You're not obligated to respond if you just find it annoying.
Social media and some political parties do give that impression

no kidding.. Trump tweeted saying he sent the military to California to open up the water. Wtf does that even mean.

Please send help, we're not doing okay in the US.


What you mean to ask is not whether Euclidean geometry is true, but whether the geometry of the universe is Euclidean

Not to meme, but you're giving "🤓 erm actually" vibes. I thought it was obvious what I meant.

The Pythagorean theorem is true

Define "true". You're saying Pythagorean theorem is true regardless of whether the axioms are "true".

You have failed to convince me that the Pythagorean theorem is false

Yes, I am now calling to know wtf you mean by "true", since apparently something can literally not be real and still be true by your definition.

We can say, "IF x then y". This does not mean that y is true, but you're saying it does.

You're saying just because we can imagine an Axiomatic System where x is true, that y then is "true". However, I reject your imagination, tell you your axioms are false, therefore y is not true.

Then you turn around and say the axiomatic system is true independent of anything else - that's not how truth works in my book.

That just means your axiomatic system is inconsistent (i.e. self-contradictory). Since what's true is false, by principle of explosion your axiomatic system allows you to prove any statement, as well as its logical negation, is true.

No, this logical system does not exist in my axiomatic system. Only what God says can be true, and anything else is false from the devil.

Therefore, you cannot use logic to deduce truths, as only what God says in my axiomatic system can ever actually be true - and logic does not apply to God.

Hold on, incoming transmission... My God said you're a pancake.

Axioms are not proven. They're true by definition

False. They are assumed true by definition. If I reject your axiom, I reject your entire system. You cannot defend your axiom by saying it's true because you called it an axiom.

I won't accept an LLM reasons no matter what questions you ask of it

If you ask it questions that you know it wasn't trained on the reasoning for and does not contain the answer/steps for (axiom) and we know it only used its internal processes with no outside help (like the internet) available (axiom), but it was able to solve it and show you logical and reasoning steps it did to do so...

You'd conclude God shoved the answer into the blackbox? I'm welcoming a reasonable answer. Not some BlackBox(Reasoning(x)') nonsense.

I gave you an example of something I know with 100% certainty

So again, I'm asking you:

Do you only know tautologies with certainty? You're not confident in anything else for certain? You don't know if vaccines work? You don't know if you were born?

You've given me an impossible standard by which you would know things for 100% certainty such that I can never prove to you whether an LLM can reason.

Even if we go by what you said - that we need to be able to identify reasoning systems - and we then dissected LLMs and concluded they can reason:

why would you trust that we are actually capable of identifying reasoning systems?
why would you trust that we are capable of accurately dissecting the inner workings of LLMs?
or why would you trust that humans could actually reason such that we could deduce any fact from the information given (hence we couldn't trust ourselves to figure out whether or not LLMs reason, even if we had all the necessary facts to do so and more)?

Most things we know are not tautologies.

Well, like I said, I'm a Popperian

No wonder we're not getting anywhere huh.. I can say I appreciate the ideas, but to actually believe and live by them is ridiculous.

There's no point in even continuing the conversation. I feel 100% confident you've proven that you either can never know whether LLMs reason (or anything else for that matter) or you have contradictions in how you believe things to be true.

It's too direct an observation to take as anything other than a fact

Absolute nonsense, as we know eyewitness testimony is the least credible form of evidence.

We knew viruses existed before we could ever see them. Scientific deductions > what your lying eyes tell you.

then I'll tell you no, you might have missed something

You'd know with 100% certainty there are 2 apples in the box; then the magician will pull out a 3rd one and your entire method for determining truth and facts will literally be blown to bits.

Like making a coin disappear from my hand for my niece. She saw it disappear right? That must be a fact, right? It's no longer in my hand... right?

Would I know God if I saw Him? :)

It's an axiom :)

The question writer didn't tell you that the machine can reason, they only wrote perfect questions.

ignored the main point of that paragraph

The main point was nonsense. Say you have Hitler (or his protege Trump), someone you don't wanna treat with any dignity or respect. You'll now say they don't reason?

I don't care about any reason you have to say x is true (or even probable or assumed) other than your standards for what is true independent of x itself.

You're not obligated to respond if you just find it annoying

I never thought a conversation would come to such a dead-end with you, as our past debates usually gave me some kind of new perspective to think about (even if I disagreed with it), which is rare.

But to say straight up that you follow Popperian philosophy, not as a general guideline but in how you actually believe things, is insane. Not only that, but you then completely contradicted this very claim by saying you'd believe for a fact there are two apples in a box if you saw it - something you can easily be fooled into seeing.