22 April, 2016

Another Argument Against Solipsism

Epistemic Status: Likely

i.


I like to think most of us are familiar with the cognitohazard called 'solipsism'. Most of us likely had a run-in with it during some I'm14AndThisIsDeep phase, when we were intelligent enough to produce dangerous ideas like solipsism and determinism, but not intelligent enough to produce the counterarguments.

Given this blog's vague association with the rationalist and post-rationalist memeplexes, I think there's a good chance you're already over solipsism if you ever contracted it. But hey, there's a chance I could successfully put an anti-solipsism meme into circulation, and there are also some more insidious forms of solipsism affecting philosophy of mind that I'd like to address here.

First, some preliminaries. I'll define common solipsism, or 'type 1 solipsism', as the assertion that you alone are conscious, or (in a weaker form) that you alone are certainly aware, other people being merely possibly aware. The second relevant form is more sophisticated; I'll call it systems solipsism, or 'type 2 solipsism'. It is the assertion that consciousness resides in certain structures of dynamic systems. Later on I'll argue against this proposition and sketch an alternative.

(you may wonder why I lumped two unrelated positions under one term. Part of this is rhetorical, and part of it is laziness. I could defend it by pointing out that both positions ignore certain realities of consciousness, but that would just be a rationalization)


ii.


The easiest dragon to slay here is the spectre of common solipsism, and the counter is mostly semantic. It says:

"Okay, okay, maybe other people aren't really conscious, I'll concede that. But, these objects-that-aren't-conscious behave in certain common ways that are somewhat advanced, and can be described and modeled in ways that, ultimately, will have you behaving as if they're actually conscious (whatever that means) so is arguing this really a good use of our time?"

This is the practical argument. It completely ignores the (possibly ill-founded) premise of the solipsist's position and goes right for the jugular, in true empiricist fashion.

This was easy to counter, because it's a low-complexity argument; it only takes slightly more complex conceptual tools to dismantle.

iii.


There's another argument, which is capable of defeating even a stronger version of the great look-up table thought experiment.

Suppose an interlocutor were replaced entirely by a giant look-up table of input-response pairs. By our practical argument, this changes nothing, since the look-up table would have us behaving as if it were conscious anyway.

It still feels kinda empty, though; they're still just a static look-up table, with no free will or creativity.

But look closer.

This look-up table simply can't be a Python-esque dictionary, with every input assigned some output. Demonstration: suppose you tell this look-up table, "I'm going to ask you a simple yes or no question. If your answer is yes, then when I say 'red' respond with 'green'. If the answer is no, respond with 'blue' instead."

You ask your simple question, and ve answers. You say 'red'. What happens next?

"Well, it depends on what ve answered with, doesn't it?"

Then it looks like the so-called look-up table must be doing some kind of internal modeling, does it not? And with these probes potentially becoming arbitrarily advanced, it seems as though the 'look-up table' might just as well be emulating whole brains.

Huh.
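For the curious, here's a hedged sketch of what the 'stronger version' of the table has to look like (entries again invented): keyed on the entire conversation history rather than the latest utterance, which is exactly where the internal modeling hides.

```python
# The obvious repair: key the table on the whole transcript so far,
# not just the latest utterance. The history itself becomes state.
history_table = {
    ("Do you like tea?",): "Yes.",
    ("Do you like tea?", "red"): "green",  # earlier answer was yes
    ("Do you hate tea?",): "No.",
    ("Do you hate tea?", "red"): "blue",   # earlier answer was no
}

def respond(history):
    return history_table[tuple(history)]

print(respond(["Do you like tea?", "red"]))  # -> green
print(respond(["Do you hate tea?", "red"]))  # -> blue

# The catch: the number of entries grows exponentially with the
# length of the conversation, so an adequate table is, in effect,
# an unrolled model of a mind.
```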

iv.

Let's get even more abstract. What if we imagine a randomly generated list of responses which, by some miraculous chance, when implemented look-up-table style, happens to produce a human who behaves relatively normally?

This, this randomly generated string must be unconscious, right? It's just a bunch of random bits that happen to align into a human face. It can't be modeling anything, because it's just a random string.

I'm going out on a limb and will answer in the positive: this string is conscious. The argument is short, and I think it should be easy to follow.

Unfocus your metaphorical eyes and look at the situation we have here: a string which just so happens to encode human-ish behavior over a lifetime. We're ignoring the sheer improbability, and going purely off the necessity of this string's existence.

But there's a reason we selected this string out of the entire space, and that reason is that the string is conscious. It's conscious by definition, by specification. We chose this string because it behaves consciously; if it weren't conscious, why would we be considering it?

v.

And this argument illustrates my particular ontological approach to consciousness: it must be located in actions, in the output of a process rather than in the process itself. Deciding whether a string is conscious is a decision problem that's probably uncomputable in general. A mind bent on pretending to be nonsentient would give a false negative, and the extremely rare fluke of a stochastic process would give a false positive. A trade-off.

At the core, truth grounds itself in behavior: that eccentric mind will have you behaving as if it were nonsentient, and the hypothetical 'random' string from the last section will have you behaving as if it were sentient.

In the end, that's what matters.
