04 July, 2016

Dead before life

If, somehow, you managed to stumble upon this forgotten corner of the internet, and, somehow, you actually enjoy my writings, you should know I no longer blog here. I have moved to sexlesshydrogen.wordpress.com, though I haven't made any original posts there yet. I am leaving this up because I have written too much to edit and move it to wordpress in a timely fashion, and so this makes it far more convenient to link people to things I have written.

02 June, 2016

Intrinsic vs. Extrinsic Funniness

Epistemic Status: exploratory, plausible, uncertain
Tagged in: amateur sociology, memes,


The typical explanation for the funniness of jokes is surprise; a joke is funny because it is unexpected, because it challenges expectations. This jibes well with experience; you typically don't laugh as hard when you've heard the same chestnut twenty times: it becomes predictable.

As parsimonious as this explanation is, and it's pretty good (see Hurley et al. for a deeper treatment), I want to explore a kind of extension that models some aspects of the phenomenon well.

There is a certain class of statements, such as "We have them surrounded in their tanks!", that are seemingly intrinsically funny; you can hear it again and again and you at least crack a smile each time. This contradicts the surprise model, since not only does it not meaningfully violate object-level expectations, it doesn't seem to get less funny with time; indeed, if the author plays his hand right, he can even make it more funny because you've heard it before.

Contrast this with what I'll call extrinsically funny statements. These are the jokes with old, time-tested formulas for execution: knock-knock jokes, 'whadaya call x' jokes, 'these dudes walk into a bar' jokes, etc. They are plainly funny because they violate object-level expectations at the surface level. This can be done in two ways: first, when a word means something different from what you thought it meant but everything makes sense when you substitute the new meaning (puns), and second, when the words mean their usual things but the solution is unexpected and fits (though this heavily overlaps with wonder in Sarah Perry's theory of puzzles).

23 May, 2016

Quotes #1

A collection of interesting passages I've come across in the interim.


From Greg Egan's Diaspora:
Inoshiro said, "I feel great compassion for all conscious beings. But there's nothing to be done. There will always be suffering. There will always be death."

"Oh, will you listen to yourself? Always! Always! You sound like that phosphoric acid replicator you fried outside Atlanta!" Yatima turned away, trying to calm down. Ve knew that Inoshiro had felt the death of the fleshers more deeply than ve had. Maybe ve should have waited before raising the subject; maybe it seemed disrespectful to the dead to talk so soon about leaving the Earth behind.

It was too late now, though. Ve had to finish saying what ve'd come here to say.

"I'm migrating to Carter-Zimmerman. What they're doing makes sense, and I want to be part of it."

Inoshiro nodded blithely. "Then I wish you well."

"That's it? Good luck and bon voyage?" Yatima tried to read vis face, but Inoshiro just gazed back with a psychoblast's innocence. "What's happened to you? What have you done to yourself?"
Inoshiro smiled beatifically and held out vis hands. A white lotus flower blossomed from the center of each palm, both emitting identical reference tags. Yatima hesitated, then followed their scent.

It was an old outlook, buried in the Ashton-Laval library, copied nine centuries before from one of the ancient memetic replicators that had infested the fleshers. It imposed a hermetically sealed package of beliefs about the nature of the self, and the futility of striving ... including explicit renunciations of every mode of reasoning able to illuminate the core beliefs' failings.

Analysis with a standard tool confirmed that the outlook was universally self-affirming. Once you ran it you could not change your mind. Once you ran it, you could not be talked out of it.

Yatima said numbly, "You were smarter than that. Stronger than that." But when Inoshiro was wounded by Lacerta, what hadn't ve done that might have made a difference? That might have spared ver the need for the kind of anesthetic that dissolved everything ve'd once been?

Inoshiro laughed. "So what am I now? Wise enough to be weak? Or strong enough to be

"What you are now-" Ve couldn't say it.

What you are now is not Inoshiro.

Yatima stood motionless beside ver, sick with grief, angry and helpless. Ve was not in the fleshers' world anymore; there was no nanoware bullet ve could fire into this imaginary body. Inoshiro had made vis choice, destroying vis old self and creating a new one to follow the ancient meme's dictates, and no one else had the right to question this, let alone the power to reverse it.

Yatima reached out to the scape and crumpled the satellite into a twisted ball of metal floating between them, leaving nothing but the Earth and the stars. Then ve reached out again and grabbed the sky, inverting it and compressing it into a luminous sphere sitting in vis hand.

"You can still leave Konishi." Yatima made the sphere emit the address of the portal to Carter-Zimmerman, and held it out to Inoshiro. "Whatever you've done, you still have that choice."

Inoshiro said gently, "It's not for me, Orphan. I wish you well, but I've seen enough."

Ve vanished.

Yatima floated in the darkness for a long time, mourning Lacerta's last victim. 

20 May, 2016

Moral Efficacy, Cosmic Cheesecake, and the Epistemic Vacuum

Epistemic Status: exploratory, uncertain
Tagged in: philosophy, meaningness,


Somewhere in the environs of the blogosphere (if I recall correctly, it was in a Luke Muehlhauser piece), I encountered the notion that if the universe is indeed infinite, then it is a corollary that our actions cannot have a significant impact on the moral status of the universe. It would be the equivalent of emptying the ocean with thimbles. The author's claim was that in this case, we should just adjust our utility calculus so as to give more weight to people closer to us and less to people (say) light years from us.

I didn't like this conclusion. It was not parsimonious. It was inelegant. My objection, however, was filed away into the depths of my mental attic. Long enough to collect dust; long enough that, as I mentioned, I actually don't know where I encountered the notion.

I'm going to tackle it anyway.

17 May, 2016

Motions of Meaningness

This is a riff on David Chapman's forever incomplete Meaningness html book. Knowing what the hell that is isn't required, or really even beneficial. I just steal borrow his terminology to point at a neat pattern that would probably better suit a series of twitter postings.

Epistemic Status: I'm writing this late at night; expect moderate incoherence.


When you ask people 'what's the meaning of life?', there are two common answers, and a few uncommon ones I think I saw mentioned on Chapman's site but forgot too much about to even find the reference again. Nihilism is the total denial of true meaning. Eternalism is Chapman's coinword for what's basically anti-nihilism. They are mirror images in two ways. The first is illustrated on his site: they are (mostly) motivated by fear of each other, acting like something of a distributed sorting algorithm for people's emotional proclivities.

The other way is simply in terms of connections. Meaning in logical systems is about isomorphism. This meshes quite well with the intuitive impression of meaning in most cases, where something can be thought of as meaningful if it maps onto something more familiar via some kind of structure-preserving map. These words are meaningful because they map onto concepts in your head, for instance.

Switching tracks, let me tell you about an idea that's been stewing in my head for the longest: deep symbolism. Sometimes I wonder about what types of things are intrinsic to humanity and the environs we inhabit, and produce images that are invariant over time and space. Concepts, narratives, images, that all emerge naturally from our neural makeup. Attractors in mindspace. Call these things, if they exist at all, 'deep symbols'.

Occasionally, when checking out a musical album, for example, I'll wonder about what deep symbol I'm brushing up against. I think about whatever thoughts/ideas are expressed in the music, and wonder about what paradigm I can fit them into, such that this particular piece of culture would fall out as a specific instance.

At least once, which prompted this post, I've had the self-awareness to realize what I'm doing. I'm reaching for meaning; this is what meaning-search feels like from inside. And this gives me a nice, neat analogy for the eternalism/nihilism split: eternalism is the insistence that there is a very simple, elegant conceit onto which you have a structure-preserving mapping of everything in existence, and isn't it nice?

(fans of Unsong, this is basically what Adam Kadmon is)

Nihilism is the exact opposite. It sees the world through the lens of a metaphorical sensory processing disorder where everything is noise. No patterns, no regularity, etc. This may or may not be how SPDs actually work, but it's a conceit that's gotten into my head and stuck around.

Even simpler: nihilism is a graph with no connections, and eternalism is a completely-connected graph.
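For the programmatically inclined, here's a throwaway sketch of that analogy (the concept names are placeholders I made up, obviously):

```python
# Toy rendering of the analogy: n concepts, with either no
# connections at all (nihilism) or every possible connection (eternalism).
from itertools import combinations

concepts = ["self", "world", "value", "death"]
nihilism = []                                 # edge set: empty
eternalism = list(combinations(concepts, 2))  # edge set: every pair

print(len(nihilism), len(eternalism))  # 0 6
```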

Neither of these is what reality looks like, when you get down to it. It's a delicate interplay of order and randomness; a transcendental number rather than an integer or an undefinable real number.

I hope at least I can help others feel and notice this same reaching for insight, for small-m meaning, that happens to me from time to time.

10 May, 2016

Hierarchy of Conventions

Epistemic Status: Useful & Likely

This concept frequently comes up in conversations with my friends, so I'm primarily writing it up to have a convenient place to link them the next time I have to explain it.

first rung: 'useful'

The levels of my hierarchy are essentially systemic safety nets; anything that falls through the first level is tested against the second level, and so on. The requirements get progressively more lax to cast the net wider and wider.

At the first level, we must ask ourselves 'is this convention useful?'. This is the highest, strictest and most salient level, dealing with all 'important' conventions. Here, we have Newtonian Mechanics, folk theories of mind, most (vertically transmitted) religions, etc.

The property of interest here is that these cultural inventions really (in the Chapman sense of 'really', denoting 'in some sense') reflect structures in the real world. The gotcha is that things first go through a homomorphism-distorting utility calculus.

I'll unpack that last mouthful of jargon. A homomorphism is a map that preserves structure (an isomorphism is a homomorphism that's also invertible). A function like x^2 or 7x maps most xs onto different numbers than themselves (that is, 3 becomes 9 or 21, 7 becomes 49, etc.), yet it preserves several useful relations, such as the ordering '>' (on the positives, at least).
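A minimal sketch of that order-preservation claim, for concreteness (purely illustrative; the numbers and names are mine):

```python
# f(x) = 7x changes the numbers themselves, but preserves the
# ordering relation '>' between any two (positive) inputs.
def f(x):
    return 7 * x

xs = [3, 1, 4, 1, 5, 9, 2, 6]
# pairwise comparisons come out the same before and after mapping:
assert all((a > b) == (f(a) > f(b)) for a in xs for b in xs)
# so sorting by mapped values gives the same order as sorting directly:
assert sorted(xs) == sorted(xs, key=f)
print("order preserved")
```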

It has been neatly demonstrated (Quanta article, paper) that, generally speaking, homomorphic internal representations of the external world, i.e. belief structures that accurately map reality, are not evolutionarily viable, particularly when computation is expensive (i.e. always).

And this captures the 'utility calculus' part of it as well; the distortion is caused simply by the fact that usefully non-homomorphic representations are generally 'better' than their undistorted alternatives.

But I digress. It'd be instructive to point at things that are patently not in this category: Quantum Mechanics/General Relativity, consensus neurology/psychology/sociology, etc.

second rung: 'accurate'

The next level catches anything which isn't useful per se, but accurately respects reality. Everything I mentioned at the end of the last section falls here.

You might have noticed that the first rung is relative: generalizations like 'bigger brains mean higher intelligence' are useful when describing populations and averages, but hopelessly imprecise for describing individual people. In fact, accuracy becomes usefulness whenever the domain of interest is scientific.

The second rung is equally nebulous, in a distinct way; where what is 'useful' mutates whenever the domain of interest changes, what is 'accurate' varies with time. As the canonical example: Newtonian Mechanics was accurate centuries ago, but isn't now.

third rung: 'convenient'

This is an even more nebulous rung. 'Convenient' applies to anything where you can do it any way you want, but some ways are just plainly easier. In linear algebra, you can describe vectors after translating your coordinate origin by any constant amount you like, and the calculations are equivalent. But still, a translation by +2,-8 or 0,+5 is much more convenient than (say) +(pi),-sqrt(3) or -e,+8/9.
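As a quick sanity check of the 'any choice works, some are just easier' point, here's a sketch with made-up numbers:

```python
# Shifting the origin by any constant offset leaves relative quantities,
# like the displacement between two points, unchanged.
import math

def shift(p, offset):
    return tuple(a + b for a, b in zip(p, offset))

p, q = (1.0, 2.0), (4.0, 6.0)
for offset in [(2.0, -8.0), (0.0, 5.0), (math.pi, -math.sqrt(3))]:
    ps, qs = shift(p, offset), shift(q, offset)
    diff = tuple(a - b for a, b in zip(qs, ps))
    # same displacement under every choice of origin (up to float error)
    assert all(math.isclose(d, e) for d, e in zip(diff, (3.0, 4.0)))
print("all origins agree")
```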

I'd guess the main thing making this a category distinct from usefulness is that with usefulness, if you do it a different way, you get a different answer. GR gives slightly different answers than NM (and by the accuracy criterion, different in the right way), but there is a reason we use NM for things like mundane trajectory calculations, and we'll define that reason by that and denote it 'usefulness'.

Likewise, there is a reason we do our calculations in base ten rather than base three or base phi, and we'll define the reason by that and denote it 'convenience'.

fourth rung: 'convention'

This is finally where everything has fallen through: frameworks that are neither especially useful, especially accurate, nor especially convenient. This is the domain of ISO standards, certain cultural artifacts, and whether we spell it 'color' or 'colour'.

fifth rung: may god have mercy on your souls

23 April, 2016

Narrative Morality

Epistemic Status: Plausible Crackpottery


Man-with-a-hammer syndrome. It's a horrible, horrible affliction affecting intellectuals, and it can subvert their critical thinking in a given area for weeks at a time.

I used to think I was immune. Then I went through a spell where I was utterly fascinated with the idea of languages and ontology, sparked by a lesswrong post which said something along the lines of 'every language implies a certain ontological framework'. That isn't a literal quote, but it captures the insight which then occupied my thinking for quite a bit. Ultimately, it produced Translating Alien Languages, though I think it might have produced another post, had I been more productive.

My next fascination (as far as I remember, which shouldn't inspire confidence) was narratives. The cognitive dynamics it inspired are mostly a kind of taste, which is quite hard to communicate.

At best it can be summed up in my tweet: "You are a story. Your brain is the story-teller. Sometimes there are translation errors."

Like a true case of man-with-a-hammer, I applied this idea to everything under the sun. Arguments, ideology, culture, popular science articles, etc.

As luck would have it, I've moved on from them. I might be contracting a rather minor case with my new, as-yet-unnamed Big Idea(tm), but that's neither here nor there.

22 April, 2016

Another Argument Against Solipsism

Epistemic Status: Likely


I like to think most of us are familiar with the cognitohazard called 'solipsism'. Most of us likely had a run-in with it during some I'm14AndThisIsDeep phase, where we were intelligent enough to produce dangerous ideas like solipsism and determinism, but not intelligent enough to produce the counter-arguments.

Given this blog's vague association with the rationalist and post-rationalist memeplexes, I think there's a good chance you're already over solipsism if you ever contracted it, but hey, there's a chance I could successfully put an anti-solipsism meme into circulation, and there are also some more insidious forms of solipsism affecting philosophy of mind that I'd like to address with this.

First, some preliminaries. I'll define common solipsism, or 'type 1 solipsism', as the assertion that you alone are conscious, or (in a weaker form) that you alone are certainly aware (other people are merely possibly aware). The second relevant form is more sophisticated; I'll call it systems solipsism, or 'type 2 solipsism'. It is the assertion that consciousness resides in certain structures of dynamic systems. Later on I'll argue against this proposition, and sketch an alternative.

(you may wonder why I lumped two unrelated positions into one term. Part of this is rhetorical, and part of it is laziness. I can defend it by pointing out that both positions ignore certain realities of consciousness, but that's just a rationalization)

21 April, 2016

Anti-Perfect Objects

Epistemic Status: likely; I probably didn't make obvious mistakes, but I doubt this concept is anything more than an encapsulation of something trivial.


To me, Cantor's diagonal argument is a pretty good intro to what antiperfection is all about. If you hang out in the geek aisle long enough, you'll see it. Given that you're reading a blog as obscure as this one, I'm sure you've probably already seen it. I hope so, because I'm writing a cliff-notes version and it's going to be mangled to heck.

Seriously, look it up, if you need to.

Anyway, Cantor worked with a notion called 'cardinality', and applied it to infinite sets. He discovered that most infinities we know about are really the same infinity, albeit with fake glasses and a mustache.

For instance, an infinite set like 0,1,2,3,4... (naturals) is equivalent to itself sans one element, e.g. 1,2,3,4,5... (counting numbers). Then, by induction, we see an infinite set is equivalent to itself sans infinitely many elements, for instance 2,4,6,8... (evens).

It also means it's equivalent to itself plus infinitely many elements, so ...,-1,0,1,... (integers) are equivalent as well.

It keeps going. An infinite set is equivalent to itself times infinitely many elements. The .25, .5, .75 rationals are equivalent as well! The proof is outside the scope of this article, mostly because I forgot bits of it and it'd take cognitive effort to recover them.
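A couple of the equivalences above, written out as explicit pairings (my own sketch; any bijection would do):

```python
# Bijections from the naturals 0, 1, 2, ... witnessing 'same cardinality'.

def to_even(n):
    """naturals -> evens: 0, 2, 4, 6, ..."""
    return 2 * n

def to_integer(n):
    """naturals -> integers: 0, 1, -1, 2, -2, ..."""
    return (n + 1) // 2 if n % 2 else -(n // 2)

print([to_even(n) for n in range(5)])     # [0, 2, 4, 6, 8]
print([to_integer(n) for n in range(7)])  # [0, 1, -1, 2, -2, 3, -3]
```

Every even number (resp. every integer) gets hit exactly once, which is all 'equivalent' means here.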

Ok, so what's not equivalent to the natural numbers?

The reals, as it happens.

Here's the crux of its relevance. The diagonal argument says: suppose the reals were denumerable (read: countable). Then we can list them out in some ordering. The reals are infinite sequences of converging rationals, so we have a list of items with infinitely many elements each. For convenience, suppose we just convert each to a binary representation instead.

So: take the first bit of the first sequence, and flip it. That's the first bit of the sequence we're constructing. In general, take the nth bit of the nth sequence, flip it, and append it to the sequence.

For obvious reasons, this sequence can't be on the list. QED
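Here's a finite toy of the construction, if that helps (finite lists only, so it's an illustration of the mechanism rather than the real infinite argument):

```python
# Build a bit sequence differing from the nth row in its nth bit,
# so it cannot equal any row on the list.
def diagonalize(rows):
    return [1 - rows[n][n] for n in range(len(rows))]

rows = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
d = diagonalize(rows)
assert all(d != row for row in rows)  # off the list, by construction
print(d)  # [1, 0, 1, 1]
```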

you can stop skimming now i mean: II.

Now, I consider this a good enough example of an antiperfect construction. The essential insight is that of the special case which defeats the general theorem. The antiperfect lies close enough to the normal that it prevents regularities of the normal from becoming universally true. It kills theorems just by existing.

This doesn't mean antiperfect objects are objectively well-defined, or that they are a minority in a set. An antiperfect object is always antiperfect relative to some fledgling conjecture. And from the diagonal argument above, you can see the antiperfect objects could just as easily be the majority of a set.

But in a sense, the diagonal argument itself is kinda antiperfect, because most prototypes in my mind's category of 'antiperfections' are special cases that are unique, in that the generalization they refute is more or less true for the rest of the set.

Consider that famous proof of the Halting Problem's uncomputability. If we did have a program which solves the halting problem, then we could build a bizarro program on top of it: one which halts iff the would-be solver predicts it doesn't, and vice versa.

Now we ask ourselves what happens when we feed this bizarro program its own source code, and we are forced to admit a contradiction.
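In pseudo-Python, the shape of the argument looks something like this (hedged: `halts` is the assumed-impossible decider, and both names are mine):

```python
def halts(program):
    """Hypothetical perfect decider: True iff program() halts."""
    raise NotImplementedError("the proof shows this can't exist")

def bizarro():
    # Do the opposite of whatever halts() predicts about bizarro itself.
    if halts(bizarro):
        while True:   # predicted to halt -> loop forever
            pass
    # predicted to loop -> return (i.e. halt) immediately

# halts(bizarro) can now be neither True nor False: contradiction.
```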

What's interesting about this is that the proof only says the problem is generally undecidable, because a very specific input can't be decided by any would-be implementation. You could very well implement a program which decides correctly on most programs; it just isn't universal. Its deserved title is stolen by the antiperfect programs.


I'm probably overstepping the bounds of my theory here, but I also think antiperfection is a way to think about the uncanny valley effect. At least, a new perspective. Uncanny valleys are antiperfect in the sense that the uncanny faces are close enough to the ideal to be recognizably human, but far enough from it to be disturbing.

There's a good chance I'm missing something though, because the uncanny valley doesn't really affect me that much. The above image is pretty common in popular treatments of this valley, and honestly, I don't get what's so creepy about it. I'm just weird.

12 April, 2016

Where Two-Valued Logic Fears to Tread

Epistemic Status: Uncertain
(16/4/14) I added some quotes in the second section, to make the reference to mtraven's articles clearer, and split it into two sections, delineated by the end of the quotes.
(16/4/24) changed epistemic status from 'Likely' to 'Uncertain'.

This post was originally called "Propositional Uncertainty as an Epistemically Useful Type of Logical Uncertainty", but then I realized how much that sounded like snarXiv-esque word salad with a decision-theoretic bent. I'm better than that, I hope.


In the Friendly AI literature, there's a concept known as an 'ontological crisis'. Simply put, it's a situation where your model of reality blows up in your face, and since your value system is probably intricately hooked into your model (you aren't a wirehead, are you?), things don't look pretty. You're faced with the task of reconstructing a utility function now that your old one has been thrown out with the bathwater.

The canonical example of this is the loss of faith accompanied by realizing God doesn't exist.

What people don't know is that ontological micro-crises (ontological stress?) are far more ubiquitous than the flashy loss of faith many educated people eventually face. I think we can build a far more interesting model by investigating some phenomena superficially distinct from (but deeply related to) the ontological crisis.

And inward we go.

11 April, 2016

A Meditation on Writing

[Epistemic Status: Uncertain]


Suppose you're consuming some fiction. Its prose is readable, its ideas coherent. The plot isn't mind-blowing, but it's compelling. Say it's some maybe-supernatural mystery/thriller or something. In it, the story is set off by some local detectives receiving a tip from a Joe Everyman about some weird stuff happening in town. And lo and behold, upon investigation, a frighteningly competent local neighborhood conspiracy is brought to light.

That's all well and good; nothing to write home about, maybe, but fine. Then you're grabbing a drink from the fridge, as it goes, and you have a thought.

If this conspiracy is so competent, how did Joe Everyman even notice enough to give a tip?

It gets worse.

Suppose this Joe actually helps out the investigation. Maybe he contributes valuable input, some insight, a deduction or two, to this team of otherwise competent investigators. He even pulls a few tricks from his sleeve, gets them out of some sticky situations. Seems a bit Sue-ish, perhaps?

It gets worse.

Suppose this fiction wasn't just any fiction, but fanfiction.

You'd drop this poorly-thought-out, self-inserted crap right?

I probably would.

But. What if it turns out Joe isn't just an Everyman, but (say) an agent of a much larger, greater conspiracy; that maybe the local villains were rogue elements, and he was just there to take them out, and he recruited the investigators to avoid getting his hands dirty.

Eliezer has said some good stuff about noticing confusion. Those details weren't supposed to add up. That was a clue.

But no one kept reading that far.

08 April, 2016

Several Case Studies of Metamathematics in Everyday Life

I have no idea why I think it's a good idea to post this, but it's been on my mind lately.
Corrections, qualifications, and suggestions welcome.

Gödel's Incompleteness

  • "Are you crazy if you think you're crazy?"
  • Free will.

Chaitin's Incompleteness

Tarski's Incompleteness

  • Squabbles about the meaning of words.
  • Linguistic paradoxes (ofc)

Löb's Theorem

Fixed Point Theorem

  • Being meta. (e.g. metahumor, etc.)

The Halting Problem

  • Telling if a deadfic is really dead, maybe.
  • Free will (again)

04 April, 2016

Aegri Somnia Vana

Epistemic Status: Fiction
Errata: Changed the title not to be a horrible pun. Check the URL if you're curious.


This isn't over and I'm not dead. I just needed to post something, so here's a little vignette I wrote while bored in class.
P̶l̶e̶a̶s̶e̶ ̶e̶x̶c̶u̶s̶e̶ ̶t̶h̶a̶t̶ ̶h̶o̶r̶r̶i̶b̶l̶e̶ ̶p̶u̶n̶ ̶p̶r̶e̶t̶e̶n̶d̶i̶n̶g̶ ̶t̶o̶ ̶b̶e̶ ̶a̶ ̶t̶i̶t̶l̶e̶.̶


Location: somewhere cinematic with dramatic cliffs overlooking a sunset on the ocean.

I heard the click of the gun pressed to my temple. How had he sneaked up on me like that? I needed to know that trick.

A moment passed.

"Why not shoot?" I said.

04 March, 2016

More Notes on my Philosophy of Mind

Epistemic Status: Unlikely

This wasn't the first time I had this thought, but it's the first time it's left my head.


As I alluded to a few weeks ago, I have a model of thought as intentionally activating neural patterns associated with the referent of the thought.

 [M]y ... pet theory involved visualization triggering the activation patterns associated with what you're trying to visualize
Let me unpack that statement.

16 February, 2016

How to Fight Fate

Epistemic Status: Likely


Imagine you're in one of those ancient tragedies. Y'know, the ones with prophecies and fate and stuff.

Yeah, I know, prophecy is so last paradigm and considered harmful and so on. Just stick with me here.

Suppose you're walking down the road, minding your own business, and you happen to walk by some two-bit oracle. Then, all of a sudden, their eyes roll back and they speak in the classic deep, inhuman voice of prophecy. It's something about how some horrible event shall betide thee; maybe "thou shalt killeth thine heir" or something similarly cliché. Let's just go with that as an example.

Generally, prophecies are infallible, and this one is no different. Thus, you know with certainty that the aforementioned tragedy shall come to pass. Maybe thou'll accidentally kill your child in a sparring match, when you're just trying to show them the ropes. Perhaps thine child will commit some unforgivable crime, and you must sentence them to death. Or you just lose your temper with them at the wrong time.

They will die by your hand, is what I'm saying.

However, you're also well-read enough to know how these patterns typically resolve. Subverting a prophecy never works. If you forego the sparring match, your child will die unprepared in battle. If you exonerate them of their crime, ve'll recidivate. Or you just won't be there to save them while avoiding them.

You'd be hoist by your own petard, is what I'm saying.

Thou canst not flee either. Mayhaps if thou runneth to the hills, your child attempts a desperate search, and by a thoroughly bizarre sequence of coincidences you mistake them for a bandit, realizing only when it's too late.

Even if you attempt suicide, your child, unable to live with the grief, will do likewise.

The game was rigged. Fate was the original master of the Xanatos Gambit.[1]

[1] Warning: TVtropes

28 January, 2016

Translating Alien Languages

Epistemic Status: Likely

Related: Three Worlds Collide

I'm learning Lojban sorta off and on, mostly off, and yesterday I was idly pondering whether the meanings of words in Lojban would be biased by the words used in their English descriptions (I'm not considering this seriously, since I doubt someone would make a conlang without knowing a few languages). It eventually led to me thinking about languages in general, and how dictionaries create endless circularity of meaning.

25 January, 2016

You are a Pattern-Matching Agent

Or, "My (crude) model of human thought"

Epistemic Status: Plausible, but uncertain.


I sorta want to learn how to draw. Until recently, my feelings about drawing were (statistically speaking) probably similar to yours: it'd be cool to do, but it's the type of thing other people are good at, not me. Naturally, my thoughts weren't so blunt, e.g. "I don't know how to draw because it wouldn't be useful to me". Right.

Then I read Dennett's Consciousness Explained. I haven't finished it, of course, but that might be fixed in a few months (I'm a busy dude). It's not a drawing book, in case you're wondering, but something I read in it made me think that pursuing basic competence in sketching stuff would be something I'd like doing.

In the section I recently finished reading, Dennett argues against the idea of 'pictures in the mind' in a particularly persuasive way (to me, anyway): if we really did have 'pictures' in our heads, then everyone would know how to draw; it'd be as simple as transferring the internal pictures to an external medium, and the only barrier to everyone being professional-quality (photorealistic) artists would be hand-eye coordination (which tends to be fairly good in humans).

But that isn't the case, which was to be demonstrated, thus completing the reductio ad absurdum.

The persuasiveness likely doesn't carry over in my blunt-force summary, but you just need the gist of it. This argument resonated with me when I first read it, partially because I had a superficially similar pet theory of my own. I'll try to explain it, though I'll stress that the mental environment and train of thought which gave birth to this idea have long since dissolved, so I can't faithfully reconstruct my exact thoughts any more than you can.

Personally, I have no idea how other people mentally see their act of mentally seeing, but for me, it's always been a very low-res affair. Seeing a dog and visualizing a dog are utterly distinct things to me. One is 'high definition', with minuscule details and structural stability. The other is, in a word, not. Which isn't to say that I lack imagination, just that I lack a vivid imagination, if such a thing exists (I doubt it does).

Now, my original hypothesis about "how visualizing works" is that we don't actually visualize.


24 January, 2016

A Statement of Intent

Errata: Made some fun edits.

This intro post has been weltering in the cloud for weeks now; at this point I'm sure I'll have to force myself to start this.

Here we go.


A few months ago, I was writing this intro post to be some stupid self-aware rant about how insignificant this blog is, how no one will ever read it, etc. It blossomed into some analysis of the Gödelian interaction between the implications of self-aware angst and the statements on significance (or lack thereof, more like) that typically get paired with it.

Or at least that's what my second draft said my first draft said. The actual analysis is gone both from my mind and my computer, so we'll just have to take that draft on its word.

That kind of angsty prose has an unpleasant taste, so at least I had the self-restraint not to post it, avoiding inflicting those horrors on the world.

But I digress. This is a statement of intent, which I read blogs should probably start off with.

In an ideal world, this blog will be an outlet for my uninformed opinions and bad ideas, as well as a place to cut my teeth and improve my writing before moving on to meaningful things.

I mostly care about mathematics/logic and sometimes programming, with some occasional science and philosophy thrown into the mix. At least, those are the things I care about and plan on using this blog to discuss.

In the real world, I lack the ability to commit to anything. I'll likely never write anything on this blog, and anything I do will be worth nothing more than a cringe as I delete my fourth google account some years down the line. Read by no one other than some poor sap who had impliedimplication.blogspot.com in their search space of domain names, and was curious what rag was occupying the rightful place of their blog.

Edit: Haha past self, I defeated your expectations!

But self-pity aside, I have a few posts lined up, and if all goes well, they should actually be posted in a few days. Which translates to at least another week before the next post (probably "You are a pattern matching agent" or "Why your brain hates determinism") gets here.



PS. If you happened upon this, and it's been more than a few days, try to leave a comment so I'll remember this exists. I don't put it past my future self to forget about this.

Edit: Haha, again.