23 April, 2016

Narrative Morality

Epistemic Status: Plausible Crackpottery

i.


Man-with-a-hammer syndrome. It's a horrible, horrible affliction that strikes intellectuals and can subvert their critical thinking in a certain area for weeks at a time.

I used to think I was immune. Then I went through a spell where I was utterly fascinated with the idea of languages and ontology, sparked by a LessWrong post which said something along the lines of 'every language implies a certain ontological framework'. That isn't a literal quote, but it captures the insight that occupied my thinking for quite a bit. Ultimately, it produced Translating Alien Languages, and I think it might have produced another post, had I been more productive.

My next fascination (as far as I remember, which shouldn't inspire confidence) was narratives. The cognitive dynamics it inspired are mostly a kind of taste, which is quite hard to communicate.

At best it can be summed up in my tweet: "You are a story. Your brain is the story-teller. Sometimes there are translation errors."

Like a true case of man-with-a-hammer, I applied this idea to everything under the sun. Arguments, ideology, culture, popular science articles, etc.

As luck would have it, I've moved on from narratives. I might be contracting a rather minor case with my new, as-yet-unnamed Big Idea(tm), but that's neither here nor there.

22 April, 2016

Another Argument Against Solipsism

Epistemic Status: Likely

i.


I like to think most of us are familiar with the cognitohazard called 'solipsism'. Most of us likely had a run-in with it during some I'm14AndThisIsDeep phase, where we were intelligent enough to produce dangerous ideas like solipsism and determinism, but not intelligent enough to produce the counter-arguments.

Given this blog's vague association with the rationalist and post-rationalist memeplexes, I think there's a good chance you're already over solipsism if you ever contracted it, but hey, there's a chance I could successfully put an anti-solipsism meme in circulation, and there are also some more insidious forms of solipsism affecting philosophy of mind that I'd like to address with this.

First, some preliminaries. I'll define common solipsism or 'type 1 solipsism' as the assertion that you alone are conscious or (in a weaker form) that you alone are certainly aware (other people are merely possibly aware). The second relevant form is more sophisticated; I'll call it systems solipsism or 'type 2 solipsism'. It is the assertion that consciousness resides in certain structures of dynamic systems. Later on I'll argue against this proposition, and sketch an alternative.

(You may wonder why I lumped two unrelated positions under one term. Part of this is rhetorical, and part of it is laziness. I could defend it by pointing out that both positions ignore certain realities of consciousness, but that's just a rationalization.)

21 April, 2016

Anti-Perfect Objects

Epistemic Status: likely; I probably didn't make obvious mistakes, but I doubt this concept is anything more than an encapsulation of something trivial.

I.

To me, Cantor's diagonal argument is a pretty good intro to what antiperfection is all about. If you hang out in the geek aisle long enough, you'll run into it, and given that you're reading a blog as obscure as this one, you've probably already seen it. I hope so, because I'm writing a cliff-notes version and it's going to be mangled to heck.

Seriously, look it up, if you need to.

Anyway, Cantor worked with a notion called 'cardinality', and applied it to infinite sets. He discovered that most of the infinities we know about are really the same infinity, albeit with fake glasses and a mustache.

For instance, an infinite set like 0, 1, 2, 3, 4... (the naturals) is equivalent to itself sans one element, e.g. 1, 2, 3, 4, 5... (the counting numbers). By the same trick of pairing elements off, an infinite set is equivalent to itself sans infinitely many elements, for instance 2, 4, 6, 8... (the evens).

It also means an infinite set is equivalent to itself plus infinitely many elements, so ...-1, 0, 1... (the integers) are equivalent as well.

It keeps going. An infinite set is equivalent to itself times infinitely many elements: the rationals (.25, .5, .75, and friends) are equivalent as well! The proof is outside the scope of this article, mostly because I forgot bits of it and it'd take cognitive effort to recover them.
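To make the 'same infinity' claim concrete: two sets are equivalent when there's a bijection between them, i.e. a pairing-off with nothing left over. Here's a minimal sketch of the pairings above in Python (the function names are my own, just for illustration):

```python
def to_counting(n):
    """Naturals {0, 1, 2, ...} -> counting numbers {1, 2, 3, ...}."""
    return n + 1

def to_even(n):
    """Naturals -> evens: 0, 2, 4, 6, ..."""
    return 2 * n

def to_integer(n):
    """Naturals -> integers, zig-zagging: 0, 1, -1, 2, -2, ..."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

# Each map is invertible, so despite dropping or adding "infinitely many"
# elements, every one of these sets has the same cardinality.
```

Each function hits every element of its target set exactly once, which is all 'equivalent' means here.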

Ok, so what's not equivalent to the natural numbers?

The reals, as it happens.

Here's the crux of its relevance. The diagonal argument says: suppose the reals were denumerable (read: countable). Then we could list them out in some ordering. The reals are infinite sequences of converging rationals, so we'd have a list of items with infinitely many elements each. For convenience, suppose we just convert each to a binary representation instead.

So: take the first digit of the first sequence, flip that bit, and make it the first bit of a new sequence we're constructing. In general, take the nth bit of the nth sequence, flip it, and append that.

For obvious reasons, this new sequence can't be anywhere on the list: it differs from the nth entry in the nth bit. QED
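The construction is easy to demonstrate on a finite table. A toy sketch (my own code, restricted to finitely many bits, so it only shows the flavor of the real argument):

```python
def diagonalize(sequences):
    """Given a list of equal-length bit lists, return a bit list that
    differs from sequences[i] at index i, hence can't appear in the
    list (at least, not in the first len(sequences) positions)."""
    return [1 - seq[i] for i, seq in enumerate(sequences)]

table = [
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]
diag = diagonalize(table)
# diag disagrees with row i in column i, so it's not any row.
assert all(diag[i] != row[i] for i, row in enumerate(table))
assert diag not in table
```

In Cantor's version the table and the rows are both infinite, so the constructed sequence escapes *every* row, not just the first few.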

you can stop skimming now i mean: II.

Now, I consider this a good enough example of an antiperfect construction. The essential insight is that of the special case which defeats the general theorem. The antiperfect lies close enough to the normal that it prevents regularities of the normal from becoming universally true. It kills theorems just by existing.

This doesn't mean antiperfect objects are objectively well-defined, or that they are a minority in a set. An antiperfect object is always antiperfect relative to some fledgling conjecture. And from the diagonal argument above, you can see the antiperfect objects could just as easily be the majority in a set.

But in a sense, the diagonal argument itself is kind of antiperfect, because the prototypes of my mind's category of 'antiperfections' are special cases, unique in that the generalization they refute is more or less true for the rest of the set.

Consider that famous proof of the Halting Problem's uncomputability. If we did have a program which solves the halting problem, then we could build a bizarro variant of that program which halts iff the given program doesn't, and vice versa.

Now we ask ourselves what the output will be when we feed this program its own source code, and are forced to admit a contradiction.
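The bizarro construction fits in a few lines. A sketch (assuming a claimed oracle `halts` is handed to us; the names are mine, and no real `halts` can exist, which is the point):

```python
def bizarro_factory(halts):
    """Given a claimed halting oracle halts(prog, input), build the
    program that defeats it."""
    def bizarro(prog):
        if halts(prog, prog):
            while True:          # oracle says "halts" -> loop forever
                pass
        else:
            return "halted"      # oracle says "loops" -> halt at once
    return bizarro

# Whatever the oracle answers about bizarro(bizarro), it's wrong:
says_loops = bizarro_factory(lambda p, i: False)
assert says_loops(says_loops) == "halted"   # oracle said "loops"; it halted

says_halts = bizarro_factory(lambda p, i: True)
# says_halts(says_halts) would loop forever, though the oracle said "halts".
```

Either way the oracle's verdict on the bizarro program is falsified, which is the contradiction the proof needs.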

What's interesting about this is that the proof only says the problem is undecidable in general, because one very specific input can't be decided by any would-be implementation. You could very well implement a program which decides correctly on most programs; it just isn't universal. Its deserved title is stolen by the antiperfect programs.

III.

I'm probably overstepping the bounds of my theory here, but I also think antiperfection is a way to think about the uncanny valley effect. At least, a new perspective on it. Uncanny faces are antiperfect in the sense that they are close enough to the ideal to be recognizably human, but far enough from it to be disturbing.

There's a good chance I'm missing something, though, because the uncanny valley doesn't really affect me that much. The image above is pretty common in popular treatments of the valley, and honestly, I don't get what's so creepy about it. I'm just weird.

12 April, 2016

Where Two-Valued Logic Fears to Tread

Epistemic Status: Uncertain
Errata:
(16/4/14) I added some quotes in the second section, to make the reference to mtraven's articles clearer, and split it into two sections, delineated by the end of the quotes.
(16/4/24) changed epistemic status from 'Likely' to 'Uncertain'.

This post was originally called "Propositional Uncertainty as an Epistemically Useful Type of Logical Uncertainty", but then I realized how much that sounded like snarXiv-esque word salad with a decision-theoretic bent. I'm better than that, I hope.

i.


In the Friendly AI literature, there's this concept known as an 'ontological crisis'. Simply put, it's a situation where your model of reality blows up in your face, and since your value system is probably intricately hooked into your model (you aren't a wirehead, are you?) things don't look pretty. You're faced with the task of reconstructing a utility function now that your old one has been thrown out with the bathwater.

The canonical example of this is the loss of faith accompanied by realizing God doesn't exist.

What people don't realize is that ontological micro-crises (ontological stress?) are far more ubiquitous than the flashy loss of faith many educated people eventually face. I think we can build a far more interesting model by investigating some phenomena superficially distinct from (but deeply related to) the ontological crisis.

And inward we go.

11 April, 2016

A Meditation on Writing

[Epistemic Status: Uncertain]

i.

Suppose you're consuming some fiction. Its prose is readable, its ideas coherent. The plot isn't mind-blowing, but it's compelling. Say it's some maybe-supernatural mystery/thriller or something. In it, the story is set off by some local detectives receiving a tip from a Joe Everyman about some weird stuff happening in town. And lo and behold, upon investigation, a frighteningly competent local neighborhood conspiracy is brought to light.

That's all well and good, nothing to write home about, maybe. But you're grabbing a drink from the fridge, as it goes, and you have a thought.

If this conspiracy is so competent, how did Joe Everyman even notice enough to give a tip?

It gets worse.

Suppose this Joe actually helps out the investigation. Maybe he contributes valuable input, some insight, a deduction or two, to this team of otherwise competent investigators. He even pulls a few tricks from his sleeve, gets them out of some sticky situations. Seems a bit Sue-ish, perhaps?

It gets worse.

Suppose this fiction wasn't just any fiction, but fanfiction.

You'd drop this poorly-thought-out, self-insert crap, right?

I probably would.

But. What if it turns out Joe isn't just an Everyman, but (say) an agent of a much larger, greater conspiracy? Maybe the local villains were rogue elements, and he was just there to take them out, recruiting the investigators to avoid getting his hands dirty.

Eliezer has said some good stuff about noticing confusion. Those details weren't supposed to add up. It was a clue.

But no one kept reading that far.

08 April, 2016

Several Case Studies of Metamathematics in Everyday Life

I have no idea why I think it's a good idea to post this, but it's been on my mind lately.
Corrections, qualifications, and suggestions welcome.

Gödel's Incompleteness

  • "Are you crazy if you think you're crazy?"
  • Free will.

Chaitin's Incompleteness

Tarski's Incompleteness

  • Squabbles about the meaning of words.
  • Linguistic paradoxes (ofc)

Löb's Theorem

Fixed Point Theorem

  • Being meta. (e.g. metahumor, etc.)

The Halting Problem

  • Telling if a deadfic is really dead, maybe.
  • Free will (again)

04 April, 2016

Aegri Somnia Vana

Epistemic Status: Fiction
Errata: Changed the title not to be a horrible pun. Check the URL if you're curious.

--

This isn't over and I'm not dead. I just needed to post something, so here's a little vignette I wrote while bored in class.
P̶l̶e̶a̶s̶e̶ ̶e̶x̶c̶u̶s̶e̶ ̶t̶h̶a̶t̶ ̶h̶o̶r̶r̶i̶b̶l̶e̶ ̶p̶u̶n̶ ̶p̶r̶e̶t̶e̶n̶d̶i̶n̶g̶ ̶t̶o̶ ̶b̶e̶ ̶a̶ ̶t̶i̶t̶l̶e̶.̶

i.

Location: somewhere cinematic with dramatic cliffs overlooking a sunset on the ocean.

I heard the click of the gun pressed to my temple. How had he sneaked up on me like that? I needed to know that trick.

A moment passed.

"Why not shoot?" I said.