philosophy

Minority Report - Individual Autonomy vs Collective Good

Warning: this essay contains spoilers for the movie Minority Report

You can’t run, John!

One of the fundamental questions Steven Spielberg’s Minority Report poses is whether individual autonomy weighs more than the collective good. This question is particularly apparent in the finale, which, albeit conclusive, roots itself in a rather open-ended question: 

Why exactly was PreCrime disbanded? 

The most obvious answer assumes that once Lamar Burgess’s murderous past in establishing PreCrime was uncovered, the public moral outrage was too much of a PR nightmare to deal with (presumably, PreCrime was funded with taxpayers’ money). However, such an assumption would be far too simplistic and naive, especially given America’s notorious history of violating human rights in the name of homeland security (think Guantanamo Bay, for starters). No, to assume that one woman’s death would sufficiently rouse public protest against a seemingly perfect system that creates a utopia of safety is, sadly, too naive a notion to be a sufficient reason. Instead, let’s consider the alternative reason why PreCrime was disbanded – the possibility that innocent people may have been wrongly jailed. 

While this second reason may seem obvious to the viewer, consider that, like many things hidden away from the public, PreCrime’s internal workings were unknown to most people outside its precincts: in one scene, a man giving a tour to school children claims that the three precogs – Arthur, Dashiell, and Agatha – have their own rooms with toys, books, and exercise equipment, and that they must be kept in isolation because they are too sensitive to normal environmental stimuli; in reality, the precogs are kept in a narcotic state in an isolated, antiseptic room, never quite awake nor quite asleep, just so they can function 24/7 for PreCrime to predict any potential murder at any given time. So while the tour scene is a small one (mostly background dialogue while we watch John Anderton paralyze his own facial muscles to look twenty years older), we can assume that, like the naive and outright ignorant touring school children and their tour guide, the general public in Minority Report really has no idea of the intricate, internal workings of PreCrime beyond its ultimate result – that it stops murders from happening, and the numbers show it. 

So now let’s assume that after Burgess committed suicide, Anderton (presumably) testified to everything that happened – how PreCrime works, who Ann Lively was, why and how she was murdered, why he was set up by Lamar. Would the arguments for individual autonomy and innocent-until-proven-guilty still hold? Somehow, I find myself doubting either reason: Anderton states that there has not been a single murder in D.C. since PreCrime was established; additionally, most crimes after the establishment of PreCrime are spontaneous crimes of passion, which means the PreCrime officers more or less caught the perpetrators in the act of murder, as can be seen in the opening sequence of the movie. This means PreCrime is 100% efficient on paper – something unheard of in the real world. So would anyone dare to suggest disbanding such a system if there was even the slightest, most minute chance someone may have been jailed unjustly? Somehow, I find that very doubtful. 

While the odds of a minority report (when one precog disagrees with the other two in a prediction) are never stated, we can assume it happens infrequently enough that the original creators of PreCrime would design the system to erase said minority report (note: Wally, the caretaker of the precogs, only erases echoes, i.e. the instances when the precogs visualize past predictions. Presumably he does not manually erase minority reports, because that could lead to error – which PreCrime touts as non-existent – so the erasure of minority reports is likely done by a computer). Even then, only a handful of people would know that such a mechanism existed – Anderton, himself the head police officer of PreCrime, did not know of this until consulting the retired Dr. Iris Hineman, the other co-founder of PreCrime – so until the revelation about Burgess, it’s highly unlikely that anyone knew how to manipulate the PreCrime system to commit murder undetected. PreCrime, despite its intrinsic human error, is, to all appearances, a perfect system. 

You could argue that PreCrime was disbanded on the basis of human error, given it was touted as an absolutely perfect system. However, put it in perspective: there really isn’t any system that’s truly perfect. Random error is inevitable, and the goal of a system designer is to minimize (ideally eradicate) systematic errors as much as possible. In this case, PreCrime is possibly one of the best systems you could ever ask for: minority reports (analogous to random errors) are known to exist by only a few people, and beyond that everything is controlled with perfect, surgical precision. 
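To make that random-versus-systematic distinction concrete – and this is purely my own toy sketch in Python, with invented names like predict and systematic_blindspot, nothing drawn from the film – a rare random disagreement behaves very differently from a built-in bias that an insider knows about:

```python
import random

random.seed(0)

def predict(truth, p_random_error=0.001, systematic_blindspot=False):
    """Toy predictor: almost always correct, except for rare random
    disagreement (a 'minority report') or a built-in blind spot that
    an insider could exploit (a systematic error)."""
    if systematic_blindspot:
        return "no murder"          # the bias: these cases are never flagged
    if random.random() < p_random_error:
        return "no murder" if truth == "murder" else "murder"
    return truth

# Random error: rare, unpredictable, and shrinks with better design.
trials = 100_000
misses = sum(predict("murder") != "murder" for _ in range(trials))
print(f"random-error rate: {misses / trials:.4%}")

# Systematic error: the exploited case is missed every time, by construction.
print("exploited case flagged as:", predict("murder", systematic_blindspot=True))
```

The asymmetry is the point of the paragraph above: you can live with the first kind of flaw and still call the system “perfect on paper,” but the second kind is exactly the sort of thing a Burgess can exploit indefinitely without anyone noticing.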

The most feasible reason for PreCrime’s disbandment, then, must be the potential corruption of the upper echelons of PreCrime – the systematic error, per se. When Burgess’s motives and means of securing PreCrime became public, it became very clear that, with the right position, power and knowledge, anyone could manipulate the PreCrime system and commit murder any which way they wanted. Of course, such manipulation takes considerable planning and time – I can only imagine the intricate steps Burgess took to kill Ann Lively without being caught, or what sum of money he must have offered Leo Crow in order to imprison Anderton before the truth about Lively’s death and the inherent, systematic flaw of PreCrime became apparent – so presumably, Burgess wanted to silence Anderton and anyone else who could potentially uncover the truth about PreCrime and minority reports, so that such knowledge would become untraceable to subsequent generations (this assumes, of course, that the existence of minority reports was never documented and was known only to the co-founders of PreCrime). We could infer, thus, that Burgess effectively wanted to create the ultimate utopian country once PreCrime became a national establishment. 

However, once it became apparent that the systematic flaw of PreCrime was not the minority reports themselves but the way it dealt with random flaws (by erasing minority reports instead of allowing the PreCrime officers to consider alternative futures for the supposed perpetrators and/or victims), the disbandment of PreCrime was inevitable: such a systematic flaw not only rendered PreCrime an imperfect system, but also meant the system could no longer justify its lack of “innocent until proven guilty” judicial process. Even if hundreds of potential perpetrators were caught in the act of committing murder, it also means that those convicted of premeditated murder (those not caught in the act), regardless of their murderous potential, were jailed without due process, violating their civil right to testify in court. PreCrime is nothing short of an autocracy – a utopian one, but autocratic nonetheless. 

Minority Report ends on a rather unique note. On the surface, it argues that individual autonomy and civil rights outweigh the needs of the collective – Arthur, Dashiell and Agatha are eventually released to live out the rest of their lives in peace and isolation – and that wrongdoing can only be rightly punished after the fact, because the future, no matter how accurate a prediction may be, is never absolute. The more interesting implication stems from the fact that PreCrime, once its flaws become apparent, is no different from an autocracy, yet until its disbandment it is fully supported by the American public. The remaining question once again concerns the balance between individual autonomy and the collective good, and what we are willing to sacrifice to fulfill the needs of one side of the scale; unsurprisingly, Minority Report argues for the former (it’s an American film, after all) and ends on a rather hopeful, humane note as well. 

Everybody runs.

Old Writing on Minority Report and Recommended Reading: 

The Metaphysics and Paradoxes of Minority Report – originally posted on October 12th, 2010

Is there a Minority Report?, or: What is Subjectivity? – Matthew Sharpe, PhD

Be Human

To be real is to be mortal; to be human is to love, to dream and to perish.

– A.O. Scott

There’s been a lot of buzz on the internet lately, post-The Social Network, about whether the internet is a bane to our humanity. Enthusiasts say it allows connection beyond physical limits, and that it is democracy in the dark; detractors say it allows us to release innate bestial behavior that we’d otherwise control in the physical world, and that it’s an endless sea of voices dumbing one another down. 

After mulling it over for some time, I realized that almost no argument defined one very important term – what is humanity, and what does it mean to be human? 

This thought popped up after reading Richard Brody’s counterargument¹ to Zadie Smith’s Generation Why?² piece. Having read Smith first, I could see why Brody takes so many issues with her seeming diatribe. She writes:  

How long is a generation these days? I must be in Mark Zuckerberg’s generation—there are only nine years between us—but somehow it doesn’t feel that way… I often worry that my idea of personhood is nostalgic, irrational, inaccurate. Perhaps Generation Facebook have built their virtual mansions in good faith, in order to house the People 2.0 they genuinely are, and if I feel uncomfortable within them it is because I am stuck at Person 1.0.

Brody then lambasts Smith on this angle, saying: 

The distinction between “People 1.0” and “People 2.0” is a menacingly ironic metaphor: following Smith’s suggestion that those in the Facebook generation have lost something essential to humanity, it turns “People 2.0” into, in effect, “People 0.0”—inhuman, subhuman, nonhuman. That judgment is not merely condescending and hostile, it’s indecent.

On this level I couldn’t agree more with Brody, not only because the 1.0 vs. 2.0 distinction is arrogant and subjective – where’s the cutoff to begin with? – but because Smith falls on her own sword by failing to define what she considers human in the first place. To be fair, neither has Brody, but his response is a critical one, and rightfully so. 

Perhaps Smith didn’t intend for the 1.0/2.0 distinction to be interpreted this way; maybe she simply meant our brains are wired differently. According to Nicholas Carr’s The Internet Makes Deep Thought Difficult, if Not Impossible³, 

The Net delivers precisely the sensory and cognitive stimuli – repetitive, intensive, interactive, addictive – that result in strong and rapid alterations in the brain cell

Regardless, the very notion that the internet-age generation is somehow “less human” because of how we operate, or even how our brains are wired, is a condescending one – conservative, even. It implies some overarching normality and human ideal that, frankly, is not universally applicable to how people operate on a day-to-day basis. To claim that those engaging on the net are “less human” implies that we cannot exist as distinct entities if we are not tied to our physical forms, period. 

I’ve written before⁴ about how much I disagree with this assessment, primarily because the body-existence relationship theory implies you somehow become “less human” if certain bodily functions cease to exist (e.g. being unable to eat, drink, or talk, or losing appendages). An existence is influenced and defined to an extent by its environment, but it has been shown over and over again that this existence can extend beyond the shell of the body it occupies. The very act of writing anything down is already an extension of someone’s existence: books were one of the earliest gateways to an alternative reality, and television, film, and other media followed suit. The internet is no different. 

To be fair to Smith, she makes a good point about how the internet has become less personal in some respects: 

When a human being becomes a set of data on a website like Facebook, he or she is reduced. Everything shrinks. Individual character. Friendship. Language. Sensibility.

Brody rightfully criticizes her for this very phrase, but let’s put her rather negative parsing into context: remember the early days, when everyone was building their own websites and journaling online however they could? Before Wikipedia, YouTube, hell, even Google – we were all trying to build our own realm, experimenting with different media and the limits of the internet at the time. I remember my friend Dominique once showed me some clips of Pokemon she’d managed to upload onto her dot-com site, and how much that impressed me; over ten years later, we’re posting minutes of clips on YouTube, no problem. I used Geocities (RIP) to create an online portfolio of writing, drawings, and even a journal; now there’s flickr, tumblr, more platforms than I could’ve ever imagined. There are now distinct hubs of websites, many long established during the dot-com bubble – Amazon for shopping, Photobucket for photos, and Google for pretty much everything these days. 

There is, of course, some weight to what Smith says here: plenty of psychological research has examined how much empathy people can show one another depending on the degree of separation. For instance, in light of the Holocaust, the elephant-in-the-room question was this – how could ordinary people enlisted in the Nazi regime perform such atrocities? It turns out multiple things were at play: primarily, the threat of death and harm to their families kept many members in check, regardless of their personal thoughts and philosophies on subjugating Jews to such depravity and genocide; it also turned out that simply by not seeing the person you are about to harm or even kill, people are more likely to follow orders (i.e. “the person in the other room is a prime subject, so you need to keep pushing the shock button every five minutes or every time they deny they are a liar”) because of the physical disconnect from the person they are engaging with. This explains why we get so many trolls and inflammatory remarks on the internet – a sorry symptom of a medium that allows us to be physically disconnected from actually seeing the person we may be engaging with. If this is the angle Smith meant when she said “we are reduced,” then yes – there is the potential for us to be less civil and collected given the chance to simply detach ourselves from physical sociocultural restraints. However, Smith goes on to say that the very existence of the net makes us less human in terms of individual character – all of which I fundamentally disagree with because, like Brody, I find it arrogant and hostile. It’s one thing to say a degree of separation enables us to be more belligerent and in poor taste (even bestial, with the recent surge of cyberbullying), but it is another to claim no good can come of the medium either, and that to subscribe to the net is to subscribe to an existence of hollow emptiness. 

It’s true that institutionalizing websites has made the internet less personalized, and that to subscribe to such categorical organization is to give up certain programming and web-design idiosyncrasies otherwise expressed outside the set parameters of a search engine, mail client, or social networking tool. Smith’s wording, however, is much too heavy for an assertion without a real foundation: we’re still ourselves on the net, and while we may share similarities with one another, those similarities do not detract from who we are as individuals. Standardization, albeit gray-scaling things down a bit, allows us to connect more easily to others, and then it’s possible to use such a connection as a segue to another piece of the internet that is perhaps more personal to ourselves, and ourselves alone. 

A perfect analogy for how the internet has transitioned is the radio. When it began, everyone was on their own: you played with radio waves, made your own messages, relayed back and forth between different frequencies – it was a free-for-all. However, as broadcasters became established and distinct stations fell into place, this free-for-all diminished and everyone began subscribing to the programs of their liking. 

If we were to go by Smith’s standards, the radio equally made everyone “less human” than their predecessors once stations’ broadcasts came to dominate and people stopped playing around with frequencies and wavelengths themselves. With this additional perspective, I couldn’t agree more with Brody’s response: 

Smith’s piece is just another riff on the newly resurgent “what’s wrong with kids” meme that surges up from the recently young who, in the face of rapid change, may suddenly feel old… Democratization of culture implies democratization of influence. Biological age, years of study, and formal markers of achievement aren’t badges of authority any more; they haven’t been for a long time. In school, in universities, they do remain so, and Facebook arose in response to, even in rebellion against, the social structures of university life that were invented for students by their elders the administrators on the basis of generations’ worth of sedimented tradition. Who can blame Zuckerberg for trying to reorganize university life on his own terms—and who can blame those many young people, at Harvard and elsewhere, who found that his perspective matched, to some significant extent, their own world view and desires? And if Facebook also caught on even among us elders, maybe it’s because—in the wake of the exertions of the late-sixties generation—in some way we are now, today, all somehow still in a healthy state of struggle with institutions and traditions.

No matter how you look at it, the internet truly is democracy in the dark. Democracy does not necessarily mean people do or believe in the positive – Apartheid is a perfect example of democracy in the negative – but it is simply an avenue for voices to be heard, and for the truly strong, competent and remarkable individuals to shine even brighter amongst the screams and laughter of the net. To claim that the internet somehow makes us dumber⁵ ignores the very external institutions that have collapsed so badly (the American education system is riddled with so many problems that I’m still astounded the federal government insists upon school curricula determined by the states), and ignores the fact that the internet, no matter how you look at it, is an extreme representation of our very selves, which have inherently been shaped by the very institutions and policies formed in the non-virtual world. To claim also that media somehow make us “less human” (a claim made famous by Chuck Klosterman in this interview⁶) is, like Brody says, incredibly inappropriate and condescending, and again ignores a grave, fundamental definition – what does it mean to be human in the first place? 

Lastly, to add my two cents: not once does she directly define what it is to be “human,” claiming only that people 1.0, “real people,” are simply not people 2.0, “the Zuckerberg generation.” This is where and why Smith’s argument (and rant, for that matter) falls apart from the very beginning: she chooses to define her terms in opposition rather than in definite terms. Defining in opposition is all relative: if you say “I’m against those who are sexually depraved,” you’re not defining what sexual depravity is, merely implying some subjective grouping of those you consider “sexually depraved”; conversely, if you say directly “I’m against the philosophy of free love because I believe in monogamy,” the terms of “sexual depravity” are more clearly defined, and while these parameters are subjective, there is at least a constant the argument can fall back on. 

So to you, Professor Smith: to be human is to have an irrational desire to love and be loved, regardless of what these emotional, illogical actions may entail for our physical well-being. There is a universal sense of our mortality and of the gateway to death that awaits us at the undefined chronological end. So no matter how much the world changes, what new innovations happen to come our way, or even how our behaviors may fluctuate with pulsating sociocultural upheavals and revisions, humanity will always be present so long as we have a sense of illogic in dealing with the world around us. I am not a 2.0 person, just as you are not a 1.0 person: we are both in the same game, only I do not crave the nostalgia and the older institutions you are more familiar and comfortable with. Let us all agree that change is one of the few constants in life, alongside life and death. 

Irrational love defeats life’s programming. 

– Andrew Stanton


Referenced Reading: 

¹Plus on Change – by Richard Brody

²Generation Why? – by Zadie Smith

³The Internet Makes Deep Thought Difficult, if Not Impossible – by Nicholas Carr, published in “Discover Presents: The Brain,” Fall 2010

⁴Ghosting

⁵Dumb and Getting Dumber – by Janice Kennedy

⁶The Chuck Klosterman Interview Part 2 – conducted by Hunter Stephenson of /Film

Alzheimer's and Algernon

If there is one disease that is truly terrifying from a philosophical perspective, Alzheimer’s is an indisputable contender. Neurodegenerative, the disease slowly but surely robs its victims of their dignity, destroying the very essence that years of invaluable experience have culminated in. 

Philosophy takes for granted that the human condition cannot plummet like a meteorite: presumably our minds continue to pulsate and change, and these pulsations result in our very being – a unique entity within the blistering swarm of the universe. Philosophers took for granted that this uniqueness could not somehow be degraded from its essence, that it was an all-or-nothing relationship: existence or death. Yet Alzheimer’s has proven otherwise, manifesting as one of the slowest and cruelest forms of degeneration. Most other diseases eat away at your body, some perhaps resulting in insanity or dementia; Alzheimer’s, however, does not simply destroy you physically – it also sheds away your personality and mind until nothing is left but sensationless pulp. 

There have been books written and films made about Alzheimer’s – perhaps the most famous and critically acclaimed is Away from Her – but I have always been curious about the sort of despair and devastation a patient feels when the very characteristics that defined them become impossible to act out. Most of the time the patient is unaware of their own degeneration; some know their predicament, and that it is only a matter of time before their own sense of being is really no more. I know the families of patients perhaps suffer more than the patients themselves (especially since, in the worst stages, the patients are blissfully ignorant of their state of being), but I do wonder – what on earth is it like to suddenly realize you’re falling down, down and down from a pillar you once proudly stood upon, the pillar which effectively defined your own existence? 

Such is the question beautifully and poignantly explored in the novel Flowers for Algernon. The premise is this: a mouse called Algernon successfully undergoes experimental surgery to artificially improve its intelligence. Charlie, a mentally disabled man, volunteers for the treatment in hopes of becoming more intelligent. Charlie’s treatment is also successful, and his intelligence grows exponentially, to the point of outclassing the finest minds in the world. However, Algernon begins deteriorating, and very soon it’s obvious that Charlie will meet the same fate. 

The book is exceptional not only because of the moral and ethical dilemmas it presents – the treatment of the mentally disabled, the way academia conducts itself – but especially because of how it is written and presented. The book is structured as a journal, supposedly maintained by Charlie from when he first opts for the treatment, through the height of his fantastic intellect, to the beginnings of his decline, where he eventually stops writing because he is afraid and devastated by the idea of documenting his deterioration any further. 

I can only imagine that Charlie’s fall from intellectual greatness is perhaps analogous to the experience of an Alzheimer’s patient. For the first time in his life, Charlie exudes an intellect so utterly spectacular, so magnificent, that he truly feels a sense of pride in himself. However, the side effects of the experiment kick in, and like the plaques and dying neurons of Alzheimer’s, Charlie finds himself losing more and more of himself every day; more horrifying is the prospect of falling from greatness back into a state even more mentally handicapped than before, and possibly brain death for that matter. These last journal entries are devastating: the anger, the despair, the desperation, and finally the acceptance – there’s almost a cruel irony that the greatest genius on earth should perish as a vegetable. Understandably, Charlie abandons his journal before he begins documenting himself in a more degenerated state. This last move is Charlie’s desperate effort to maintain himself somehow, to leave a record chronicling that the intelligent Charlie did exist, while his now handicapped self can still remember as much; further recording would only show how this Charlie was replaced by another Charlie, and another, and so on. Denial? Perhaps, but in his situation wouldn’t you opt for the same thing? 

Perhaps a more interesting question to consider is this: is there a certain point where our physical or mental degeneration effectively makes us a completely different person than before? That is, is our existence all-or-nothing, or a gradient? With Alzheimer’s, I feel the all-or-nothing model fails to account for the nuances and accumulating changes an afflicted individual undergoes; many of the earliest symptoms are mistakenly attributed to senility, but as the symptoms become more and more frequent it becomes obvious that something is amiss – and that’s when the diagnosis comes in (interestingly, doctors can only confirm Alzheimer’s by autopsy). By the time a diagnosis is made, it’s only a matter of time until the person you know and love is no longer there, effectively snuffed out of their existential essence and ghost. 

The analogy between Flowers for Algernon and Alzheimer’s is nothing more than my own projection. I’ve never had relatives or friends afflicted with the disease; the closest experience I had with Alzheimer’s was back in July 2009, when I volunteered at an Alzheimer’s clinic, where I simply kept patients company and interacted with them so they wouldn’t be left all day to watch nothing but television. Yet I still wonder about the sort of distress (or lack thereof) one feels when, slowly but surely, one becomes less and less oneself. 

I recently read a Time magazine article titled “Alzheimer’s Unlocked.” Detailing current developments and advances against the disease, the article optimistically stated that with recent medical imaging techniques like advanced MRI machines, doctors and researchers are now able to visualize pieces of anatomy and physiological pathways in the brain that were previously completely out of the question with traditional dissection techniques. The biggest hope is that more avenues of research will open up, and that now we can really see what else we might have missed in researching the disease: traditionally, many believed that plaque formation corresponded to Alzheimer’s development, but to what extent that relation holds (direct or indirect), or whether one or several other physiological mechanisms are at work, is the more recent question. 

Surprisingly, the article did not speculate on why Alzheimer’s occurs in the first place, beyond the physical fact that some are genetically predisposed to it. I wonder, though, if the disease itself is perhaps a natural, inherited mechanism to shut down the human body when our physical forms are no longer reproductively viable or energetically sustainable – possibly, Alzheimer’s is almost a way of slowly shutting down a physical system that is simply too old. 

This is all conjecture, of course. The only basis for it is that Alzheimer’s is considered a disease of the elderly, while other diseases like Parkinson’s, tuberculosis and cancer can afflict anyone at any point in life, and afflictions at birth such as Down syndrome are the effects of genetics seen immediately. Perhaps Alzheimer’s is just a genetic affliction triggered by the mere physical state of being elderly, and if certain aspects of the environment cause chronic stress (a constant firing of the sympathetic nervous system with little chance for the parasympathetic nervous system to balance it out) and thereby accelerate the aging process, Alzheimer’s manifests as a way to simply shut down the now overworked body. 

We may not know for many more years, or a finding could very easily reinforce or completely disprove what I’ve just laid out here. However, I’m sure we can all agree on one thing – that Alzheimer’s unequivocally destroys any sense of being we might have of ourselves, ghost and all. 

Recommended Reading

Flowers for Algernon – Daniel Keyes

Away from Her – Roger Ebert movie review

Charly – Roger Ebert movie review

Away from Her – A.O. Scott movie review

New Research on Understanding Alzheimer’s – Alice Park of Time Magazine

Time of Eve (イヴの時間) - An Exploration of Our Humanity

In light of my discussion on “Ghosting,” a few weeks ago Allan Estrella recommended Time of Eve, commenting that the story was exceptional in exploring human behavior with respect to artificial beings – specifically robots and androids, or artificial “ghosts.”

The premise is this: in the (likely) future of Japan, androids have become as commercial as the cell phone and the laptop. However, in order to maintain traditional social structure, humans and androids are discouraged from interacting beyond basic controls and commands, and androids are required to always maintain a halo-like projection above their heads so they may not be mistaken for humans. 

The main character, Rikuo, has taken robots for granted his entire life. One day, he discovers that his family’s home android, Sammy, has begun acting independently, and with his friend Masaki he traces her movements to a cafe called “Time of Eve,” where any patron – android or human – is welcome, and no one is discriminated against. 

From there on out, the story explores different vignettes of characters, from the hyperactive Akiko to the lovers Koji and Rina. The main conflict, of course, is how human-centric behavior arises in response to intelligent, artificial beings created by humans, and how such fears, prejudices, and pride can make us as inhuman as we make the androids out to be. In Time of Eve, humans commonly treat androids subserviently, coldly ordering them about without a single glance. Social stigma additionally deters people from acting kindly, graciously or gratefully toward androids: the mere act of holding an umbrella over an android will get others pointing and laughing at you, derogatorily labeling you a dori-kei (“android-holic”). Such behavior is encouraged by a non-governmental organization, the Robot Ethics Committee, which advocates segregation between humans and robots and lobbies the government to enforce it. 

At the heart of this conflict is a question of emotional legitimacy: given that robots and androids are cognitively capable (if not more so than humans in terms of information processing) thanks to their algorithmic programming (and are thus self-learning, perhaps to an extent), does this mean they are capable of displaying and receiving emotion? And if so, should we consider those emotions legitimate? 

First, let’s consider living organisms, particularly the vertebrates (reptiles, birds, mammals). Animals, while possibly exhibiting physical features or behavior similar to humans (chimpanzees, for example), are not us: we cannot produce viable offspring with non-Homo sapiens, yet there is a tendency for animal lovers to anthropomorphize certain aspects of the animals we observe (I’m particularly fond of Oxboxer’s description of cheetah cubs: “They look like the kid you hated in preschool because he got light-up sneakers three months before they were even being sold in the States, and lorded it over everyone until your friend colored his hair green with a marker during nap time.”) This is especially true of household pets, and it leads us to distress whenever they pass away. Understandably, our tendency to become emotionally attached to animals is not unusual: their behaviors are invariably tied to their emotions, and while we cannot completely communicate with or understand them, the underlying attachment is one of organic core – our natural, organic ghosts, per se. 

Now let’s consider why we get attached to inanimate objects. Most of the time it’s because of nostalgia or sentiment, or perhaps even habit. These objects are not human, yet somehow we find some sort of personal meaning in them. For instance, for months I rode an 11-year-old bike that was too small for me, with a broken front derailleur, severely misaligned rim brakes, an old chain, and a steel frame so heavy I’m pretty sure my upper-arm strength increased significantly just from lifting it on occasion; yet I never had the heart to abandon it, because I had so many biking memories attached to it (I even named it “Bikey” to commemorate my affection). Eventually, I had to invest in a new bike because the effort of pedaling up and down hills with Bikey increasingly irritated the tendonitis in my left knee, and unless I wanted to continue half-limping on foot I knew it was time to put Bikey in the garage (for the record, I named my current bike “BB,” only highlighting another tendency of mine to become attached to otherwise inanimate objects). 

This leads us to the last level, which sits on the verge of the uncanny valley: an intelligent artificial being constructed by our own algorithms and for our own purposes. Assuming that A.I. are capable of self-learning to an extent, the conflict is now a question of whether our emotional reactions to them, and theirs to ours, carry true emotional weight, or whether we should abide by our own logic and merely consider them derivatives of our own being – tools anthropomorphized very closely to our likeness, but derivatives nevertheless. 

This latter mentality is presented in Roger Ebert’s review of Stanley Kubrick’s and Steven Spielberg’s A.I. Artificial Intelligence, where he states:

But when a manufactured pet is thrown away, is that really any different from junking a computer? … From a coldly logical point of view, should we think of David, the cute young hero of “A.I.,” as more than a very advanced gigapet? Do our human feelings for him make him human? Stanley Kubrick worked on this material for 15 years, before passing it on to Spielberg, who has not solved it, either. It involves man’s relationship to those tools that so closely mirror our own desires that we confuse them with flesh and blood…

Ebert brings up an interesting point: whether we impose and project our own beliefs and feelings upon what is otherwise an animate and well-programmed tool – a practice not too dissimilar to a child projecting their fantasies and adventures onto a doll or stuffed animal, for instance. There is also the question of an A.I. being so well programmed – detecting our facial muscles twitch, contract and relax, and reacting in so appropriately human a way – that it effectively tricks us into believing its emotions are real, resulting in the illogical mentality of humanizing something that is nothing more than an extremely sophisticated tool. 

Do you remember that one that was constantly reading books? Well, when we got to the lab, the first thing the techs did was take apart its brain! It kind of seemed like that tachikoma liked it though. 

Oh, I see! Then, they were lucky enough to experience death…

Consider this: in Ghost in the Shell: Stand Alone Complex, Major Motoko Kusanagi and her team at Section 9 work with A.I. tanks called Tachikoma. As the series progresses, the Tachikoma develop more and more distinct personalities and an increasing tendency to act independently despite orders from their users. Troubled by this, Motoko eventually halts use of the Tachikoma and has them sent back to the lab for further testing. However, as the series progresses and the three remaining Tachikoma return to help Batou (Motoko’s closest companion among the Section 9 members), they eventually sacrifice themselves in order to save Batou’s life; and as Motoko looks on at their remains, she acknowledges that she was mistaken to have them put out of commission, and even ponders whether they had reached the point of creating their own distinct ghosts. 

While these questions are interesting to mull over, I believe the more important question is how we behave toward an intelligent entity that is otherwise unbound by our biological, organic limits of the flesh. We can argue to the end of time over whether an A.I.’s “emotions” are real, and there can really be no way of knowing for sure; what we can assess is our own reactions, feelings and behavior when confronted with them. 

For an analogy, let’s consider video games: I’m not going to argue whether the medium is an art form, but I think we can all agree that every video game offers a virtual simulation of something – fighting, adventure, strategy, interaction, etc. The virtual environment is the product of programmers piecing together polygons into what concept artists conceived and writers hoped to flesh out within the constructs of a console or computer; algorithms and code allow players to do whatever they want within the confines of the programmed environment, complete with individual A.I.s and environmental elements for us to talk to or mess around with. Now, logic dictates that these virtual environments are nothing more than gateways for temporary detachment from our immediate physical environment; yet I dare anyone to claim that they did not experience something while running through the deserts of Red Dead Redemption or confronting the likes of Andrew Ryan in BioShock. 


The process of creating a videogame may be the greatest and grandest illusion ever created, but when finished, it holds the capacity to grant us experiences we can never experience. Loves we have never loved, fights we have never fought, losses we have never lost. The lights may turn off, the stage may go dark, but for a moment, while the disc still whirs and our fingers wrap around the buttons, we can believe we are champions.

– Viet Le, “The Illusionist”

Video game players will always invest a certain amount of emotion into any game they choose to engage with. Whether it be placing your heart on Pikachu in Super Smash Brothers Brawl or wondering when the hell the story in Final Fantasy XIII is going to actually become interesting, there is almost a guarantee that these games elicit some emotional reaction from us – excitement, fear, frustration, sorrow – and these emotions are real to us. Whether or not the game A.I.s share such sentiment is irrelevant, for we can only truly account for ourselves, and ourselves alone. 

So perhaps a robot or android may create the illusion of seeming more human than it actually is, or perhaps deep down in its circuitry it really does care about how we feel – we will never know. What we can account for is our behavior toward such A.I., and what exactly we feel entitled to in our given society and culture. 

In Time of Eve, there is a distinct political and social structure that discourages people from acting humanely toward androids, who are governed by the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Additionally, all androids in Time of Eve are required to display a halo projection above their heads at all times, a marker of their subservient status. Constant propaganda ads spearheaded by the non-governmental Ethics Committee claim that social interaction between humans and androids is unhealthy, will end in disaster, and may possibly lead to the end of humanity; nor is it uncommon for android owners to toss luggage at them without so much as a glance, let alone a thank you, lest they be deemed dori-kei and face ridicule from their peers. To be blunt, human-android segregation is enforced by nothing short of both social norms and policy. 

Stepping back, the social and political structures in Time of Eve are not so unlike a democracy that deems segregation a norm. The most obvious example is that of Apartheid in South Africa, where a white minority democratically voted for segregation and the denial of civil rights to the native majority of their African country. It took years for the likes of Nelson Mandela and other activists to end the political mandate justifying racism, mostly because for years the empowered white South African minority considered the social and political barriers a norm: by virtue of politics dating back to colonial times, Caucasian Afrikaners were obviously quite comfortable with their perceived birthright; it didn’t matter that their comfort and political representation came at the expense of the black majority – they had politicians to back up their views, and democratically so, because for years the black majority was deprived of citizenship. 

The argument can be made that because androids are not human, we cannot treat them the way we would treat fellow human beings. Perhaps this would be convincing if incarnations of the same argument had not already been used to justify injustice between fellow human beings: African slavery, European colonialism, the Holocaust – all atrocities in which humanity twisted itself into a sort of superiority complex, rationalizing the entitlements some groups of people believed they held over others. Furthermore, this argument again ignores the most pressing issue – how we behave as humans when dealing with individuals we are unfamiliar with. 

* Some may strongly stand by the divide between organic and inorganic beings, and state that since androids are artificial intelligence, we cannot equate such segregation with segregation between humans. If that is the case, then I offer this other example: if I were to equate androids with computers by virtue of both being created as tools, our behavior is still indicative of ourselves, at the very least. That is, if I take horrendous care of my MacBook, repeatedly dropping it or failing to do simple maintenance on it, my MacBook may still operate and function, but my carelessness reflects poorly on me – on my behavior and lack of responsibility toward maintaining my computer; if I take excellent care of my MacBook (and I contend that I do), my MacBook may still operate and function, but my maintenance and care reflect well on me as a computer owner and on my sense of responsibility toward it. 

In Time of Eve, the policies and social structures against human-android interaction likely stem from public fear, distrust and insecurity culminating in a nationwide superiority complex, in which it is absolutely normal for a person to feel superior to an android, regardless of the android’s intellectual and functional capabilities. As this negativity became more and more widespread, social structures morphed to accommodate such fervor, eventually forming the policies which forbade human-android relationships from progressing into the uncanny valley of emotions and attachment. It’s considered taboo for humans to be humane to androids. Now, given social and political structures that deem inhumane behavior proper and normal, what does it mean when one chooses to abide – or not abide – by such norms? 

It takes no courage to act accordingly within social and political structures which grant you power at the expense of others’ dignity and civil rights; it takes an extraordinary person to break away from inhumane behavior otherwise deemed normal by a democratic majority, and doing so speaks volumes about our ability to aspire toward a humanistic ideal above and beyond our dark, personal demons. Our emotions are our own, and if we feel an attachment to something otherwise illogical, then so be it – it is our right as humans, as well as our responsibility, to act in the positive if we are to claim our rights to humanity. So if it means I’ll get laughed at for holding an umbrella over an android’s head, that’s fine by me. 

To be real is to be mortal; to be human is to love, to dream and to perish. 

– A.O. Scott

Recommended Articles

A.O. Scott’s Review on A.I. Artificial Intelligence

Roger Ebert’s Review on A.I. Artificial Intelligence

• “The Illusionist” by Viet Le

Armond White’s Review on District 9

• “Not in defense of Armond White” by Roger Ebert

District 9 Soundtrack - Main Theme by Clinton Shorter

*Edit @ 9:41pm - I forgot to add an important paragraph. It is italicized and is marked by a *

Ghosting

‘Yes, but they – Wurst, and Knaust, and Pripasov – will tell you that your consciousness of existence is derived from the conjunction of all your sensations – is, in fact, the result of your sensations. Wurst even goes so far as to say that where sensation ceases to exist there is no consciousness of existence.’

'I would maintain the contrary,’ began Koznyshev. 

But here again it seemed to Levin that just as they were reaching the root of the matter they again retreated; and he made up his mind to put a question to the Professor. 

'So if my senses are annihilated, if my body dies, no further existence is possible?’ he asked. 

– Anna Karenin by Leo Tolstoy

After reading this passage from Tolstoy’s masterpiece, I stopped and pondered for a while on the entire discourse and its implications. The idea of existence has been simmering on the back burner of my mind for quite some time, and this small portion of Anna Karenin amped me back into full throttle. Likewise, in answer to Levin’s question, I decided – no, if one’s senses are annihilated and one’s body dies, existence is still possible. 

The professor in Anna Karenin assumes that sensory experience shapes and defines one’s existence, which is a fairly reasonable assertion. However, when you consider the assumptions behind the statement, there are rather questionable implications for basic humanity and the human condition: essentially, the professor assumes that existence is directly related to how much we can sense and feel from our immediate environment – assuming, of course, the professor weighs all sensations as equal (non-equal weightings of sensations are too subjective to really add to or detract from this statement). This linear relationship is really the downfall of the sensory-existence argument, for a few reasons: 

If this is the case, then those who have lost some amount of sensory function are less of an existing consciousness. Take, for example, an amputee: having lost an appendage, compared to their former selves these individuals are less of a conscious existence by virtue of having less surface area of sensory nerves (while there is the phenomenon of “ghost limbs,” strictly anatomically, amputees have lost a certain amount of sensory function). We could also look at paraplegics, who can no longer use their lower limbs – according to the professor’s assertion, these individuals are only half the consciousness of a non-handicapped peer. We can easily look at other physical conditions that render individuals into relatively handicapped status – blindness, hearing loss, anosmia, burns, etc. – and see that the professor’s statement, while intriguing, is short-sighted: it essentially states that an existence is solely dependent on the cumulative sensations one is able to acquire and experience, and consequently that those who are not in a normal physical condition are essentially “lesser” consciousnesses, since their cumulative sensations are comparatively fewer by virtue of their physical condition. 

The professor’s logic equates public figures like Stephen Hawking and Roger Ebert with “lesser” conscious existences because both rely on artificial means to articulate their thoughts to the world. The implications of his argument extend to the case of an individual in a vegetative state, whose body still functions biologically but whose probability of ever regaining conscious thought or cognitive function is less than that of an elephant suddenly appearing in your living room out of thin air by virtue of metaphysics – according to him, they are greater conscious entities because their bodies can still pick up sensations. 
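To spell out that “linear relationship” – this is my own formalization of the professor’s premise, not anything stated in Tolstoy – it amounts to something like:

$$ \text{consciousness of existence} \;\propto\; \sum_{i \,\in\, \text{functioning senses}} s_i $$

Under this reading, removing any sensory channel $s_i$ strictly lowers the sum, which is precisely the reductio above: amputees, paraplegics, Hawking, Ebert, and the patient in a vegetative state all end up ranked as “more” or “less” of a conscious existence by sheer sensory surface area.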

I disagree with the professor’s statement simply because I define existence slightly but significantly differently: one’s root existence is conscious thought, and this root existence manifests in the physical condition of a body that one’s cognitive function puppeteers and performs with. Additionally, if someone is effectively brain-dead without any chance of recovery, then I believe this individual has effectively died, regardless of their body’s physical condition. This distinction between one’s consciousness and one’s physical manifestation relates to the prime idea of this article: ghosting. 

I’ve watched Ghost in the Shell: Stand Alone Complex (and 2nd GIG as well) on and off for a few years, and this past summer I rewatched some episodes with my older brother. Each episode is dense, complex, and philosophically intriguing – so much so that if you stop paying attention for a few moments, you’ll likely be lost as to what’s going on and what the characters are thinking. 

GITS: SAC takes place in the future, where cyborg technology is sophisticated and commercial. It’s not uncommon to see someone with a cybernetic attribute walking around and living everyday life as usual (in fact, nearly everyone has cybernetic eyes and chips in their brains, enabling them to receive information without a screen and so forth). This cybernetic society essentially ties everyone together on a metaphysical, technological net – almost as if you could access the world wide web anytime, anywhere. Likewise, this means capable hackers can cause societal mayhem if left unchecked – which is where Public Security Section 9 comes in, led by Major Motoko Kusanagi. 

Motoko is a unique character in the GITS: SAC universe because, unlike most others, her body is completely cybernetic – she possesses no natural biological function. Her condition is the result of a plane crash she was in when she was only six years old: she was in a coma until it became obvious she would die unless she underwent full cyberization. This process forced Motoko to separate body and mind to the extreme; unable to feel real sensations as a cyborg, she regards her body more as a shell within which her true essence resides and acts – her ghost. 

Theoretically, in the world of GITS: SAC you could surpass “dying” by uploading your consciousness into the collective technological “net”; and while your body would decay, your consciousness would still exist, and therefore you would not necessarily have died (however, in the unfortunate case that the server somehow crashed and wiped out all its data, you really would cease to exist). More pressing, however, is the idea of one’s ghost and shell being separate entities – that the relationship between mind and body is not entirely necessary for one to still exist. 

Here’s a thought experiment: say that somehow, in some dimension, you were able to separate your consciousness from your current body and then occupy a different body – are you still the same consciousness, the same person? 

I believe that if someone still acts out the behavioral traits and personality quirks unique to themselves, regardless of what body, what shell they occupy, they are still that same individual. They still exist as a distinct consciousness. 

In one episode of GITS: SAC, called “Runaway Evidence – Testation,” a rogue tank runs amok in the city, hacked by the recognition code of the tank’s designer, Kago Takeshi, who had died a week earlier. It turns out the “ghost” of this tank is actually Kago’s: for religious reasons, his parents refused to let him undergo cyberization despite his serious medical problems, which invariably led to his physical death at an early age; however, he manages to transfer his ghost into the tank, and before Motoko short-circuits the tank’s brain she discovers, in a brief moment, that Kago simply wanted to show his parents his new steel body. 

“Runaway Evidence” is an intriguing episode because it really addresses the core argument of whether one’s existence depends solely on the physical medium through which one acts out one’s conscious functions. While we never know whether the tank exhibited the same personality traits Kago had while biologically alive, it’s clear that the tank’s motive derives from Kago’s consciousness, his ghost. His actions are no different from a hermit crab migrating into a different shell. 

This all leads to the final portion of Tolstoy’s passage in Anna Karenin, where Levin asks if one can still exist if their physical being is somehow exterminated – that is, can one still exist without a shell? 

I believe yes, for various reasons. If you look around you, there are billions of pieces of information and narratives documented in multiple media forms – books, film, painting, photography, everything. Every word, every letter, every frame and every brush stroke that goes into each of these media was made by someone, a distinct somebody, and as we gloss over and take in the contents of each medium we invariably soak up the presentation, wording, dilution and creativity of this unique and distinct somebody. In the midst of these actions, we experience the remnant pieces of one’s ghost. 

On a less abstract level, you can easily consider the internet a prime example of separating one’s ghost from one’s shell, mind from body. As a distinct individual on the net, you define yourself whichever way you want, whether by writing, subject, ethnicity, age, interests, purpose, and so on; but unless you know the user in real life, there’s no real way of confirming one hundred percent that what a user says they are is really who they are in real life. On the net, we are defined solely by how we want to be, independent (though not mutually exclusive) of who we are in real life. 

For instance, I could easily claim that in real life I look like this, or this, or even this – and if I were savvy, charming and mischievous enough, I might actually get away with claiming my genetic origin as a Timelord, with a TARDIS and Sonic Screwdriver and all. 

More seriously, though, our existence on the net is defined more or less by how we present ourselves in writing (and perhaps photography or video as well). This is wholly separate from our physical being, our shell – yet we still exist in the form of our distinct internet avatars, cached and all. We still communicate with one another via the internet medium: from the established email to live tweeting, we are speaking to one another, directly and indirectly, distinct conscious entities in mental collision – and all of this independent of our bodies in the physical world. 

So to finally answer Levin’s question: yes, I believe you can still exist if your body has deteriorated or been destroyed, so long as your ghost remains a distinct entity through whatever natural or artificial means possible. This is the ultimate philosophical implication of ghosting, of one’s ghost of existence.