Yes, it's bad logic
Sorry, but none of those have been disproven.
But if you were to press me and ask "okay, but do you know that to be the case?" I'd have to admit that no, I don't.
So you default to believing in the absence of things?
"Chewing apple seeds is bad for you." Would you believe that or not?
I'd ask to have the question clarified
Someone reasoning scientifically about the claim would do at least some research into it
You're just assuming everyone else is more or less the same as you, for no reason
Just observing it isn't enough, we need to open it up and figure out how it does it.
the third sentence indefinitely with a string of nouns, adjectives, and adverbs
You're the one who likened them to experts, though
If I asked you a question you can't possibly answer in the amount of time I give you, the only correct answer you can give is "I can't answer that question with the time you've given me"
Yes. You're the one contending it's not, not me.
It could not reason its way into any new information
It's odd to ask a question when you seem to understand the answer already
Also, high quality? But I thought the human brain was garbage!
If someone put a gun to your head and asked you, "Do you know now, smartass?", suddenly you'd be a genius who knows there's no teapot orbiting Jupiter.
How far do you have to go to say you know something? 90% sure? 99% sure? 99.99999%? Is 100% the only way?
The fact that it can't do anything else is what proves nonreasoning.
Not to force a position on you, but I highly doubt you even believe that. There are many helpful answers that can be given that may not completely address or solve the question given.
Being a reasoning master would mean no human makes reasoning mistakes, and a master certainly wouldn't blatantly fight good reasoning with bad reasoning.
I don't see how your logic is complete. Being able to reason its way into new information is not incompatible with a language model. I could easily say, "Well, the human brain is a sensory input model. It can't reason its way into new information because it can only reason over the finite information its senses give it." I just don't see the connection.
We obtained the high-quality information through science, which strives to remove the flawed human from the equation at all...
If someone points a gun to your head demanding you admit something, you'll say whatever is necessary to make that stop
I won't deny it could be a lesser number than 100% certainty
it still doesn't change the fact that I currently don't know whether Russell's teapot exists
You can't determine what something can't do by observing what it does do, though. It could be that the robot is remote-controlled by a person who's really invested in cracking nuts.
do they not have basic epistemology courses in universities?
I guess we just want different things out of computers
I don't agree with that. I think a "master" at something should be the best currently, perhaps the best possible, but certainly it doesn't need to be the best conceivable.
You're doing that thing again, where you devalue natural intelligence in favor of AI. It seems like only the mistakes humans make are the ones that count.
The flow of information between a human brain and the real world is not one-way.
LLMs intrinsically can't learn on their own, because they're not investigating the real world
If we had to come up with it ourselves, we'd probably eat some shrooms, and after sitting still for five hours we'd end up with some stupid nonsense about praying to the clockwork elves for wisdom.
The point wasn't that you'd say whatever they want, but that you'd be able to truthfully give them the information they asked for.
If you believe... anything... then it must be less than 100% certainty.
You don't know whether a made-up teapot that can't possibly exist... exists? [...]
Again, in this specific discussion, we are assuming that the thing we are trying to measure is trying to show us intelligence/reasoning, not trying to fool us into mistaking mere nut-cracking for real reasoning.
So far, your only argument is that a "fake" reasoning machine could exist that can do pretty much everything real reasoning can do but isn't actually reasoning because we said it was fake. I've been saying this makes absolutely no sense.
Again, I'll say what Sonnet said, which is that a bird and a plane both achieve flight - regardless of the mechanism. There's no point in seeing something "reason" and concluding we don't know if it's reasoning because we haven't seen how it does it.
we have no standard or process for identifying a reasoning system by its operation.
You can say "Bruce Lee is a master of martial arts", not "humans are masters of martial arts".
However, we are still born flawed - we have to educate ourselves to overcome the nonsense going on in our heads. Our brains are not reasoning masters by default; we get them there.
We've been talking about giving an AI the ability to interact with the real world to acquire its own data. Simply because it exists in a box doesn't mean it can't reason.
I think AIs could very well learn the same way we do - the limitations are artificial.
Again, this is an artificial limitation, not an intrinsic one. Only an intrinsic limitation would be a valid argument against their learning/reasoning capabilities. We already have AI models learning from real-world interactions.
There are things you can know with 100% certainty.
There's a difference between what I'm fairly confident about and what I know. I find that distinction to be very important.
The nut-cracking robot is an analogy for an LLM
your assumption is not just that they do, but also that they want to demonstrate it to us
First of all, I didn't say it can do pretty much everything real reasoning can do. What I said was that it could trick a "reasoning-detecting function" into saying it is reasoning, without being so.
It doesn't matter how long it is; the fact that it's finite means you can be tricked.
People have talked to programs much dumber than LLMs without realizing they were programs, so just "well, it sounds like it reasons" doesn't cut it for me.
All of these programs have in common that they use symbols that represent facts and manipulate them with strict rules to deduce new truths not present in the original facts. That's reasoning.
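(A toy sketch of that pattern in Python - the facts and rules here are invented for illustration, but the mechanism is the one described: symbolic facts, strict if-then rules, and new truths deduced that weren't in the original set:)

```python
# Toy forward-chaining inference: symbols for facts, strict rules,
# and a loop that derives truths not present in the original facts.
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),   # all men are mortal
    ({"socrates_is_mortal"}, "socrates_will_die"),   # all mortals die
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # a deduced truth, not a given one
            changed = True

print(sorted(facts))
# ['socrates_is_a_man', 'socrates_is_mortal', 'socrates_will_die']
```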
I don't know, man. I haven't seen many gorillas doing jeet kune do
Going by your logic, it would be wrong to say "horses are good runners", because there's been at least one fatass lazy horse
LLMs are intrinsically unable to observe the real world
"LLM" and "AI" are not synonyms
it sounds preposterous on the face of it. Imagine trying to drive with someone else describing the road to you
Like?
I know my phone exists in front of me because I see it.
You claim we can be tricked into thinking it's reasoning. I'm arguing the only thing that can actually trick us is something that is actually reasoning. Hence, this is why we're assuming they're trying to convince us.
The argument is whether we could identify reasoning based off the output/behavior - not whether we can always know.
So if you give an example of a reasoning box that purposefully hides its reasoning, this is not an argument. You cannot defeat my argument with a contradicting example.
You claim we can be tricked into thinking it's reasoning. I'm arguing the only thing that can actually trick us is something that is actually reasoning. Hence, this is why we're assuming they're trying to convince us. I don't see the confusion.
Again, we're circling back to this "you can't know with 100% certainty" nonsense. If the list being infinite means we know 100%, but finite means <100%, then there can be a list long enough such that we know with 99.999999% certainty due to convergence.
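(Sketching that arithmetic, under the assumption that each test is independent and a non-reasoner has some fixed chance $p < 1$ of passing any single test:)

$$\Pr[\text{a fake passes all } n \text{ tests}] = p^n \xrightarrow{\,n \to \infty\,} 0, \qquad \text{confidence} = 1 - p^n \to 1.$$

(For instance, with $p = 1/2$, $n = 27$ already gives $1 - 2^{-27} > 99.999999\%$.)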
I'm saying a list of carefully-selected prompts can make you certain.
I believe I gave a stricter definition of reasoning earlier in this debate. I do not see how ChatGPT and the other AI models are not reasoning by this definition. When you or I debug code... is this not a reasoning process? The AI doesn't always know what in the code is wrong - it's not a God. It'll often give me code snippets back with debugging outputs and such, asking me to run the code and return the debug information.
Is this not overcoming the inability to interact with the real world, to an extent? Is this not being presented with information, deducing the next steps, interacting with the problem, then using the new information to make deductions?
Secondly, all data is represented in symbols. Simply because the symbols in this case happen to be language doesn't diminish it.
The square of the hypotenuse is equal to the sum of the squares of the catheti
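(In symbols, for a right triangle with catheti $a$, $b$ and hypotenuse $c$:)

$$c^2 = a^2 + b^2$$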
I believe it's the humans who made it who are trying to trick us
My argument is that we can't ever know, though. Not from behavior alone, anyway.
The argument is that it is possible to construct a mechanism that merely appears to reason without actually doing so.
From our vantage point as researchers studying a hypothetical system that's performing tricks for us, how could we tell whether it's a reasoning system trying to prove itself or a fake one trying to deceive us?
It might be the case that all your perceptions are fake, but you'd have no defense against that, anyway
But, here, LLMs are by their very nature just a clever statistical trick. They exploit the fact that human communication, overall, can be predicted from all previous communication.
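(A toy illustration of "predicting communication from previous communication" - a bigram counter, nothing like a real LLM in scale, but it shows the statistical idea in its simplest form; the corpus is made up:)

```python
from collections import Counter, defaultdict

# Count, for every word, what followed it in previous communication.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word: str) -> str:
    # Predict the statistically likeliest continuation seen so far.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' - it followed "the" most often
```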
What do you do once the list is known?
my gut tells me they're not reasoning, because they make mistakes
They change facts between the start and the end of a procedure
I vehemently disagree. Language is for sure a useful tool to communicate ideas, but that's only because we lack brain-to-brain interfaces.
You're... 100% certain of this? Are you sure you're not mentally ill and actually inventing crazy mathematics that aren't real in your head?
Are you certain that this isn't a simulation and nothing that happens in this world is actually real?
Can you prove to me that there is no mathematics-defying magic that exists which could provide a counter-example that would disprove you? There is literally an infinite number of things that can be brought up which, if true, would disprove your claim.
How you can be 100% certain of that claim is beyond me. If you straight up asked me, I'd have to admit that I don't actually know if this is true if I go by your logic.
Lmao, doesn't change anything. I'm arguing that if we can be adequately tricked, then it probably is reasoning.
You claim this machine would have to mimic human behavior exactly for infinite time
You only say that this is possible, but you provide no alternative to reasoning that could allow a black box to solve novel problems of many varieties, communicate as needed, and exhibit logical outputs.
How can we tell whether other humans are conscious or not? You simply can't, but that doesn't mean you can't be fairly certain.
There's just no convincing evidence, and the evidence I have against this claim is that we cannot even imagine any system that can solve novel problems and the other things I mentioned with "fake" reasoning. If an LLM solved a truly novel problem, would that convince you?
I don't believe the human brain simply developed reasoning by magic. Evolution cannot create reasoning with the intention of creating reasoning. Every step must provide some value now. In that regard, I believe our reasoning developed through similar tricks.
Then test the black boxes with each prompt on the list?
I'm glad that's not the standard for determining if humans reason.
Again, bad reasoning is still reasoning. Your ability to keep track of variables and symbols can be independent of your reasoning capabilities.
I don't see why these LLMs would be limited to human language. They can code, understand math/science, look at numbers, etc.
Again, if you incorporate other algorithms for sight/hearing/etc. into these AIs, they don't have to receive that information via human language to understand it. It can be integrated more thoroughly than that, just as our eyes and ears are integrated on some low-level circuit.
Tautologies are true independently of any facts
such an example would simultaneously refute and affirm the Pythagorean theorem.
It's people tricking us
It's finite vs. finite; it's just a matter of who can pile on more effort.
I have never granted that LLMs can solve novel problems.
First you talk about how stupid people are, but then apparently that idea goes out the window as soon as the topic of judging the intelligence of a machine comes around
You have no criteria for judging reasoning. What makes you so certain that you couldn't be tricked?
I can't think of anything that's more inscrutable to me than other people's minds
I don't think reasoning as I have defined it is the product of evolution, but rather... well, almost a cultural artifact
So when they inevitably do the test perfectly, you'll conclude that they reason
They're trained on language and they operate on language
You can't take an LLM and train it to understand raw streams of pixels
Just so we're clear, I'm pretty sure LLMs "look" at pictures by having a helper model jump in and describe them to them
They cannot be true independently of whether or not logic is true. If something requires any assumptions in order to work, then there is some universe/condition/scenario in which that assumption is wrong.
Refute, yes; affirm? No.
It doesn't matter how the black box was created; we are testing it with no people inside of it.
Don't mistake me talking about humanity's "default" or tendency toward idiocy as saying that all humans are idiots or that humanity can only accomplish idiotic things. We become better through generational knowledge and science.
If I define "red" to be within a certain frequency range and then measure light to categorize it, how can I be fooled? I defined reasoning, therefore I can test to see if it can do the things the definition requires.
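(A minimal sketch of that kind of definitional test in Python - the 620-750 nm band is the commonly cited approximate range for red light; the point is that once the definition is fixed, the categorization is mechanical:)

```python
# "Red" is defined up front as a wavelength band; measurements are
# then checked against the definition, leaving no room to be fooled.
RED_BAND_NM = (620.0, 750.0)  # approximate visible-red band, in nanometers

def is_red(wavelength_nm: float) -> bool:
    lo, hi = RED_BAND_NM
    return lo <= wavelength_nm <= hi

print(is_red(650.0))  # True: inside the defined band
print(is_red(520.0))  # False: green light, outside the definition
```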
...
We're over 99% the same, and the idea of consciousness has existed for longer than we have. Clearly other people have it
... I don't know what to say to this.
If there are other known factors, they'll be taken into account. Why would it be inevitable they test perfectly?
There's no reason I couldn't come up with a clever way to communicate with the LLM via a new "language" it was never trained on, which it could come to understand through reasoning.
Also, you literally said, "I think it's almost definitely a consequence of language."
In the axiomatic system of Euclidean geometry, the Pythagorean theorem is still true
"Trick" as in "to deceive", and "trick" as in "special maneuver"
Oh. Then... my original statement was correct after all. Humans are masters of reasoning
without the possibility of incorrect judgement
I had already said I only assume other humans reason; why is it so surprising that I'm equally assuming they have consciousnesses?
Remember what I said about communication being impossible?
Cool, thanks for informing me
Now all someone needs to do is ask a human to perform it, save their answers, and let the machine replay them
Look, just take your prompt, do (x + k) % 256 to it, encode it in Base64, and see if the model replies in the corresponding ciphertext
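(A minimal sketch of that test in Python - the key k is arbitrary, and the helper names here are made up for illustration; the model passes only if its reply decodes to sensible text under the same scheme:)

```python
import base64

def encode_prompt(prompt: str, k: int) -> str:
    # Shift every byte by k modulo 256, then Base64 so it's plain ASCII.
    shifted = bytes((b + k) % 256 for b in prompt.encode("utf-8"))
    return base64.b64encode(shifted).decode("ascii")

def decode_reply(ciphertext: str, k: int) -> str:
    # Reverse it: Base64-decode, then shift every byte back by k.
    shifted = base64.b64decode(ciphertext)
    return bytes((b - k) % 256 for b in shifted).decode("utf-8")

k = 42
challenge = encode_prompt("What is 17 + 25?", k)
print(challenge)  # send this to the model, then decode_reply() its answer
```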
Yes. *On human brains*
you don't know if the axiomatic system of Euclidean geometry actually exists
But you've given me no argument for how a nonreasoning machine can not only appear to reason, but perform the new reasoning required for solving a novel problem.
When you say you "assume" people reason or have consciousness, are you saying that, in a logical sense, you can't know, but you're confident enough to assume it is true for all intents/purposes? Or are you saying that because it cannot be proven (yet, anyway) you kinda assume it but don't actually believe it (you wouldn't be shocked if other people didn't have this)?
Consciousness has been talked about in philosophy for centuries, referring to the same "feeling" and ability.
Clearly, the blackbox cannot call upon help from a reasoning machine. This is like saying that just because a student passed a test in class doesn't mean they know the material, because they might have cheated. Like, fine? But I can just make the conditions more rigorous until cheating is impossible. The blackbox cannot contain anything other than the reasoning machine to be tested, and it is not allowed (and will be physically prevented by the box) to make any connections to external devices. This isn't an argument about deception you're making, but about straight-up cheating.
If I use some arbitrary encryption, then it has to crack it - which is asking more than just for it to reply in the corresponding ciphertext.
You said you don't know if humans actually reason (other than you, of course, master reasoner!), but that you'd be convinced an AI can reason if it emulated a human response to all stimuli, but also humans reason because of language, but only human brains can do that. All big claims, no real evidence for any of it.
But you provide no actual alternative to that answer - as in, you provide no reasonable mechanism other than reasoning that would actually solve such a list filled with novel problems and other things that require logical outputs (and from varying topics, so it can't all be chess/math or something).
Of course it exists
I don't know if a non-reasoning system can solve novel problems or not
There's a joke that goes like this
We're talking about the foundations of our minds here, not the flavor of chips
There is no shared context whatsoever there, beyond the fact that we're both apparently human
You do know that computers have storage, right?
So what makes you think it'd be able to respond if you talk to it in an arbitrary new language?
When did I claim it? When did I argue for it?
I'd like to know what you mean when you say that I "provide no alternative"
Exists as in actually valid.
So let me get this right: [...]
At what point do you just accept that sheep in Scotland are black? And I don't mean you just "believe" it, but that you can say it is a certifiable fact.
...What?? Flavor occurs in the mind. Sight occurs in the mind. Hearing occurs in the mind. Pleasure from sex occurs in the mind. What in the world can we experience that doesn't occur in the mind? So no, you can't just pass off chip-tasting as reproducible. How can you know if you've reproduced the same experience?
How does that help? It can't store the answers to questions literally created just now for it.
You said humans can reason - reasoning comes from language - we can't know if something reasons without dissecting its internal processes. You've basically said that you cannot be convinced that anything other than humans reason at this point in time, since your requirements for testing reasoning are impossible with current technology. Again, this is why I say you would be FORCED to argue that you have no idea whether an advanced alien species can reason - which is ridiculous.
I don't know what it means for an axiomatic system to be valid.
It sounds like you're complaining that I have a higher standard of evidence than you.
"Certifiable fact"? Well, I assume we're talking about some kind of legal process
There is nothing in the world external to our minds that correlates to the subjective experience of consciousness.
Are you going to keep making brand new carefully-selected questions forever?
See what I meant about it just being a matter of who can pile up more effort?
Why would I care that you don't believe that I don't know it?
You've ignored the questions relating to practicality, evidence, and knowing things, which, again, is no way to have this debate.
You can imagine any axiomatic system you want; that doesn't make them real. If they're not real, then practically speaking the "truths" derived from them aren't real either.
You can claim they're true within that universe
The entire point is that you haven't proved a single thing you've said. The mere fact that I cannot disprove you with 100% certainty has you saying, "see?! we can't know if they reason!"
I'm saying you can't know anything with 100% certainty. Your only comeback was that if you have an axiomatic system you made up, then the truths derived must be true, because the system must be true in your imaginary world.
Why are facts so difficult to comprehend all of a sudden?
How? We have literally near-identical brains running on the same physical principles and electricity. Taste is fine because the same chemical caused the sensation, but consciousness isn't, when the brain literally runs exactly the same? If I have two computers that use the exact same components, should I expect one to run just fine but the other to explode?
You answered it yourself, so congrats! You could also just say each machine gets one chance - so the designers don't even have an incentive to do what you're saying.
Because it's so obvious you do know it.
There's no reason to "assume" other people reason by your own criteria, yet you do so. You would assume an alien species that developed tech to travel the universe and was able to track us to be able to reason - but you'd be forced to take the "idk" position if someone debated you about it. This is why I say we need to talk about practicality, not some 99.999% vs 100% nonsense.
"Real" and "valid" are not properties of axiomatic systems
Well, no, because axiomatic systems aren't "true within a universe". They're not even true or false unto themselves.
are we inventing or discovering mathematical truths and objects?
the view that there could possibly exist a different universe where our mathematical truths don't hold is one I've never heard from mathematicians, only from laymen
If you want to talk about which things are true, or knowable, etc., I would really recommend that you go read up on the philosophy of mathematics and of science
If you find it so frustrating that I won't budge on the topic of the capabilities of LLMs
I know what I know. You have quite an uphill battle to convince me that tautologies are not tautologies.
A fact is something that's directly observed
that's not a fact, it's a conclusion, an induction
"Evolution is a fact." No. Evolution is an explanation of the evidence. The evidence is the facts.
But they don't run exactly the same. It's plainly obvious that they don't. Some people are dumber than others, some people are more rash than others, some people are more forgetful than others, etc. It's simply false that brains are identical to one another structurally. Are they different enough that their subjective experience is fundamentally different from mine?
How could I possibly know that?
the probability that you'll write flawed questions (i.e. questions that seem novel but actually contain a hidden detectable statistical pattern) tends towards 1 as the number of cycles rises, while the probability that they'll mistrain their machine is independent of the number of cycles
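(That asymmetry in sketch form, assuming each freshly written question independently carries some fixed chance $q > 0$ of hiding a detectable pattern, while the one-time training either succeeded or didn't:)

$$\Pr[\text{at least one flawed question in } n \text{ cycles}] = 1 - (1 - q)^n \xrightarrow{\,n \to \infty\,} 1, \qquad \Pr[\text{mistrained machine}] \text{ is constant in } n.$$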
Practically, my answer to the question "do humans reason?" is "I don't know, but I'll treat them as if they do"
Has the definition of fact suddenly changed in the past 20 years?
The axiomatic system being "real" means the axioms used are actually considered to be true in the world we live in. What else could I possibly mean? Tigers can be intellectual giants in a given axiomatic system; that doesn't make that system valid in the real world.
Huh? I've heard it from plenty of scientific sources. The math allows for parallel universes in many ways, none of which necessitate that the same physics apply to that alternate universe.
Either you're saying that your standard for knowing things is the same as mathematical philosophy's (in which case you know literally nothing), or you have a different standard.
But in my axiomatic system, tautologies are actually deceptions brought forth by the devil? Therefore tautologies are wrong?
If you think that's stupid, then tautologies are equally stupid if you cannot prove the underlying axioms.
You won't budge because you're saying we can't know for certain, and then you won't tell me what standard you use to know things for certain.
If you try to prove the axioms, I'll tell you you've never proven them to my satisfaction, because I can imagine some hidden variable that undermines everything you said (hint: this is what you said to me in the blackbox reasoning section). Then in the end, I will have proven to you that you cannot prove anything. Checkmate, you lose every debate in which you are in the positive.
Has the definition of fact suddenly changed in the past 20 years? Fact: "a thing that is known or proved to be true." Whether you directly observed it or not is not a factor in whether it has been proven to be true.
And as far as human brains go, I'd say the differences between our brains can be accurately represented as the differences between different CPUs of the same kind. Pretty much over 99% the same, but the differences are present. You don't expect one human to feel pain when hit and the other to learn chemistry when hit.
If you wanna talk theoretically, talk theoretically. But don't suddenly turn around and become pragmatic about the question writer making mistakes.
So if the question writer doesn't make any mistakes (he's fucking God), you'll accept the machine can reason if it passes?
Complete nonsense. You don't know if Einstein, Newton, Stephen Hawking, etc. are able to reason?
Social media and some political parties do give that impression
What you mean to ask is not whether Euclidean geometry is true, but whether the universe has a Euclidean topology
The Pythagorean theorem is true
You have failed to convince me that the Pythagorean theorem is false
That just means your axiomatic system is inconsistent (i.e. self-contradictory). Since what's true is false, by the principle of explosion your axiomatic system allows you to prove any statement, as well as its logical negation, to be true.
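(The derivation behind the principle of explosion, for an arbitrary statement $Q$:)

$$
\begin{aligned}
&1.\ P \land \lnot P && \text{premise (the inconsistent axioms)} \\
&2.\ P && \text{from 1} \\
&3.\ \lnot P && \text{from 1} \\
&4.\ \bot && \text{2 and 3 contradict} \\
&5.\ Q && \text{ex falso quodlibet: from a contradiction, anything follows}
\end{aligned}
$$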
Axioms are not proven. They're true by definition
I won't accept that an LLM reasons, no matter what questions you ask of it
I gave you an example of something I know with 100% certainty
Well, like I said, I'm a Popperian
It's too direct an observation to take as anything other than a fact
then I'll tell you no, you might have missed something
Would I know God if I saw Him? :)
ignored the main point of that paragraph
You're not obligated to respond if you just find it annoying