20 May, 2016

Moral Efficacy, Cosmic Cheesecake, and the Epistemic Vacuum

Epistemic Status: exploratory, uncertain
Tagged in: philosophy, meaningness

i.


Somewhere in the environs of the blogosphere (if I recall correctly, in a Luke Muehlhauser piece), I encountered the notion that if the universe is indeed infinite, then as a corollary our actions cannot have a significant impact on the moral status of the universe. It would be the equivalent of emptying the ocean with thimbles. The author's claim was that in this case, we should just adjust our utility calculus so as to give more weight to people closer to us and less to people (say) light-years from us.

I didn't like this conclusion. It was not parsimonious. It was inelegant. My objection, however, was filed away in the depths of my mental attic: long enough to collect dust, long enough that, as I mentioned, I no longer know exactly where I encountered the notion.

I'm going to tackle it anyway.

ii.


The thrust of my argument is that there are three possibilities for the (infinite) universe: it is infinitely bad, and thus we cannot make it better; it is infinitely good, and thus we cannot make it better; or it is finitely good or bad, and thus we can make it better.

It is clear, then, that our actions have zero expected effect on the universe's total utility if we live in either of the first two universes. This should be no cause for despair, I think, since all it does is shift the expected utility of every action by the same amount, and only relative utilities matter for agency; utility calculation involves the comparison of utilities, so a uniform shift makes no difference.
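
To spell out the invariance (my formalization, not anything in the original, and setting aside the usual headaches of doing arithmetic with actual infinities): an agent's choice is unchanged when every option's expected utility is shifted by the same constant,

\[
\operatorname*{arg\,max}_{a}\, \mathbb{E}[U \mid a] \;=\; \operatorname*{arg\,max}_{a} \bigl( \mathbb{E}[U \mid a] + c \bigr) \quad \text{for any constant } c.
\]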

I was quite satisfied with my brain when this reasoning leapt from my brow. But something in it made me instinctively uneasy, an epistemic smell.

This is an argument that doesn't get any less convincing no matter how low its probability goes: a generalized Pascal's Mugging.

iii.


Note: the claim argued against here has been flagged by its author as "wrong, obsolete, deprecated by an improved version, or just plain old". While this means there is a significant chance the original author no longer endorses the idea, the creature has already been released. Since I myself encountered this argument with no further mention or qualification, I'll engage it in exactly that way.

Yudkowsky writes:
... it isn't necessary to have some nonzero goal when the system starts up.  It isn't even necessary to assume that one exists.  Just the possibility that a nonzero goal exists, combined with whatever heuristics the system has learned about the world, will be enough to generate actions.
Another note: this quote is lifted out of its context, but I'm not interested in the context, because this is a completely modular idea.

Preamble aside, this seems like a convincing argument. What's wrong with it?

Recall the warning of Where Two-Valued Logic..., which can be summed up well in a Graham Harman quote: "Reality is not made of statements."

Yudkowsky starts out by partitioning possibility into two separate magisteria, nihilism and anti-nihilism: that is, there is no meaningful goal in the world, and there is, respectively.

This is begging the question. It may carve reality at the joints, but it still carves reality; anything less than the whole is not the whole. I have advocated for a more upfront view, that there are three categories a statement can be in: 'right', 'wrong', and 'not even wrong', where the last is the case where things get weird.

In one case you are, as Alone of TheLastPsychiatrist (RIP) was fond of saying, "accepting the form of the argument, which is itself mistaken". Cases of this, such as the Nature/Nurture debate, abound. In the other case, you are looking at two sides of the same coin, without that being immediately clear. We have a neat metonym for this last case: antinomy.

(Though I'll use 'antinomy' to refer to both cases for symmetry's sake, I think you can also make a case for antinomy proper in both, with only the qualification that the latter is a contradiction of the positive statements, and the former a contradiction of the negatives.)

iv.


Yudkowsky again:
One often hears, in futurism, a line of reasoning that goes something like this. Someone says: "When technology advances far enough, we’ll be able to build minds far surpassing human intelligence. Now it’s clear, that if you’re baking a cheesecake, how large a cheesecake you can bake depends on your intelligence. A superintelligence could build enormous cheesecakes - cheesecakes the size of cities. And Moore's Law keeps dropping the cost of computing power. By golly, the future will be full of giant cheesecakes!" I call this the Giant Cheesecake Fallacy. It happens whenever the argument leaps directly from capability to actuality, without considering the necessary intermediate of motive.
And so was born the giant cheesecake fallacy.

The general form is pretty clear: a cheesecake argument is a variant of motivated cognition, of rationalization. The conclusions are already there in your mind, but those damn philistines can't see the obvious truth there, so you must devise an argument that could convince even them.

Of course, this implies the cause of the fallacy is malice and not simple efficiency. It would seem plausible that in most cases, being able to construct an informal argument and not seeing immediate contradictions is a good measure of accuracy. But sometimes it isn't. Here, it isn't.

The Epistemic Vacuum is the gully of posterior probability space; it's where the odds of the hypothesis of interest get so low that tiny fluctuations, like, say, "the moon landing was faked" or "astral projection is a real phenomenon", would have relevant effects on (completely unbiased) reasoning. More clearly: things that are really unlikely but not impossible become relatively likely enough that they take up a non-negligible portion of your immediate utility calculus.
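
A toy sketch of the effect, with numbers invented purely for illustration (the hypothesis names and probabilities are mine, not measurements of anything):

```python
# A toy illustration of the epistemic vacuum, with made-up numbers.
# As the probability of the hypothesis of interest shrinks, fringe
# hypotheses take up a growing share of the probability mass that an
# unbiased expected-utility calculation has to weigh.

fringe = {
    "the moon landing was faked": 1e-7,
    "astral projection is real": 1e-8,
}

for p_main in (1e-2, 1e-5, 1e-8):
    ratio = sum(fringe.values()) / p_main
    print(f"P(main) = {p_main:.0e}: fringe-to-main weight ratio = {ratio:.2g}")
```

By the time P(main) is down around 1e-8, the fringe hypotheses outweigh the hypothesis of interest by an order of magnitude: that's the gully.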

So the problem with Yudkowsky's argument is that you can't slice up reality with human constructs like statements and propositions and reason from there. The myriad possible worlds will not sort neatly into 'nihilistic' and 'not nihilistic'.

Stepping down from the meta level to interpret what this result means for the actual question of life's potential meaning, and what to do about it, is a bit hairier. When doing three-value analysis, the principle is to shift your focus from the pointing finger (the statement) to the moon (reality). In other words, we dissolve the question.

Yudkowsky does the light lifting here and gives a nice object-level target. The meaning of life is the answer to 'why do anything?' or, equivalently, 'what should be done?'. Nihilism insists there is no 'should'. So, what expectation varies when we shift from ambiguous anti-nihilism to nihilism? The correct answer, it seems to me, is plural.

On one hand, there's the 'you can do whatever' result, trivial nihilism. It acts like a self-consistency axiom added to a powerful logic system: everything becomes provable. The opposite is the 'you'll do nothing' result, empty nihilism, which is essentially a computation that produces no output. The AI will search for oughts, and never find them; nothing doing.
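
To unpack that analogy (my gloss, assuming the 'powerful logic system' is recursively axiomatized and strong enough for Gödel's second incompleteness theorem to apply): a system containing its own consistency as an axiom proves its consistency, is therefore inconsistent, and by the principle of explosion proves every sentence:

\[
T \vdash \mathrm{Con}(T) \;\Longrightarrow\; T \text{ is inconsistent} \;\Longrightarrow\; T \vdash \varphi \ \text{ for every sentence } \varphi.
\]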

v.


But let's step up an abstraction level. We're ignoring the agent-y dynamics already present. Like an anthropic scenario, the very fact that you're asking the question demonstrates that the question can be asked. Asking is an action.

Some (hypothetical) minds were born without a seed of truth. To us, induction seems to be a principle of reality, but an anti-inductionist would never see that. Ze would notice anti-inductionism never works, and conclude from this that anti-inductionism is about to start working. After all, "it's never worked before".
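
For concreteness, a cartoon of that update rule in code (entirely my invention, not a serious model of inference):

```python
# A cartoon anti-inductive updater. Where an inductor raises its
# confidence in a rule after each success, the anti-inductor raises
# its confidence after each observed failure: "it's never worked
# before, so it's due to start working."

def anti_inductive_update(confidence: float, worked: bool) -> float:
    """Increase confidence on failure, decrease it on success."""
    return confidence * 0.5 if worked else min(1.0, confidence * 1.5)

conf = 0.1
for outcome in [False, False, False, False]:  # anti-induction keeps failing
    conf = anti_inductive_update(conf, outcome)
print(f"confidence after four failures: {conf:.2f}")  # ~0.51, and rising
```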

So too with meaning. By the very fact that you ask the question, you demonstrate there exists an ounce of meaning in the universe; it's what compels you to ask 'but why x?' (where here x is 'act'). In an empty-nihilist subjective universe, you could never even become aware. Awareness is an action, and an action presupposes a goal.

Here's a nugget: empty nihilists exist, and they are everywhere about you. Inert matter has no meaning in its 'life', not even an interim meaning like 'figure out what the real meaning is'. It was never able to even get to the point where it could ponder the grand narrative of reality.

vi.


So what about trivial nihilism?

When we move from trivial nihilism to ambiguous anti-nihilism, the change is that one now has trash and taboo. Trash is stuff that's meaningless. E.g.
A tiny gray pebble slides half an inch down a slope on a lifeless planet a million light-years from the nearest star. No being ever knows about this, and nothing happens as a result of it.
Taboo is closer to the province of morals: it's simply a moral taboo or, more rationally, a negative utility. Something anti-meaningful.
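
In loose utility-function terms (my own mapping, not anything from the quoted sources):

\[
U(x) > 0 \;\text{(meaningful)}, \qquad U(x) = 0 \;\text{(trash)}, \qquad U(x) < 0 \;\text{(taboo)}.
\]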

This is as far as my thinking has gone. It's not as dissolved as empty nihilism, so I'd encourage readers to do their own thinking along these lines, perhaps coming to a new insight.

vii.


What a tangent!

Earlier, we were discussing:

I encountered the notion that if the universe is indeed infinite, then as a corollary our actions cannot have a significant impact on the moral status of the universe. The author's claim was that in this case, we should just adjust our utility calculus so as to give more weight to people closer to us and less to people (say) light-years from us.

I didn't like this claim. I refuted it, but my refutation had a pathological case (an escape clause, if you will). I had to invent the dialectical machinery of the epistemic vacuum in order to address my gut-level objection to my own objection, and in the process I was sidetracked by the alluring sight of a claim I wanted to test my new toy on.

With all the hot air of serious intellectual argumentation behind us, the solution to this little mess will seem quite simple, and is left as an exercise for the reader.
