
Time of Eve (イヴの時間) - An Exploration of Our Humanity

Following my discussion on “Ghosting” a few weeks ago, Allan Estrella recommended Time of Eve, commenting that the story was exceptional in its exploration of human behavior toward artificial beings – specifically robots and androids, or artificial “ghosts.”

The premise is this: in a (likely) future Japan, androids have become as commonplace as cell phones and laptops. However, in order to preserve the traditional social structure, humans and androids are discouraged from interacting beyond basic controls and commands, and androids are required to maintain a halo-like projection above their heads at all times so that they cannot be mistaken for humans.

The main character, Rikuo, has taken robots for granted his entire life. One day, he discovers that his family’s home android, Sammy, has begun acting independently, and with his friend Masaki he traces her movements to a cafe called “Time of Eve,” where any patron – android or human – is welcome, and no one is discriminated against.

From there on out, the story explores vignettes of different characters, from the hyperactive Akiko to the lovers Koji and Rina. The central conflict, of course, is how human-centric behavior arises in response to intelligent, artificial beings of our own creation, and how such fears, prejudices, and pride can make us as inhuman as we make the androids out to be. In Time of Eve, humans commonly treat androids as subservient, coldly ordering them about without a single glance. Social stigma further deters people from acting kindly, graciously, or gratefully toward androids: the mere act of holding an umbrella over an android will get others pointing and laughing at you, derogatorily labeling you a dori-kei (“android-holic”). Such behavior is encouraged by a non-governmental organization, the Robot Ethics Committee, which advocates segregation between humans and robots and presses the government to enforce it.

At the heart of this conflict is a question of emotional legitimacy: given that robots and androids are cognitively capable (if not more capable than humans where information processing is concerned) thanks to their algorithms and, to some extent, their capacity for self-learning, does this mean they are capable of displaying and receiving emotion? And if so, should we consider those emotions legitimate?

First, let’s consider living organisms, particularly the vertebrates (reptiles, birds, mammals). Animals, while possibly exhibiting physical features or behavior similar to ours (chimpanzees, for example), are not us: we cannot produce viable offspring with non-Homo sapiens. Yet there is a tendency for animal lovers to anthropomorphize certain aspects of the animals we observe (I’m particularly fond of Oxboxer’s description of cheetah cubs: “They look like the kid you hated in preschool because he got light-up sneakers three months before they were even being sold in the States, and lorded it over everyone until your friend colored his hair green with a marker during nap time.”). This is especially true of household pets, and it leaves us in distress whenever they pass away. Understandably, our tendency to become emotionally attached to animals is not unusual: their behaviors are invariably tied to their emotions, and while we cannot completely communicate with or understand them, the underlying attachment is organic at its core – our natural, organic ghosts, so to speak.

Now let’s consider why we get attached to inanimate objects. Most of the time it’s because of nostalgia or keepsake value, or perhaps simple habit. These objects are not human, yet somehow we find some sort of personal meaning in them. For instance, for months I rode an 11-year-old bike that was too small for me, with a broken front derailleur, severely misaligned rim brakes, an old chain, and a steel frame so heavy I’m pretty sure my upper arm strength increased significantly just from lifting it on occasion; yet I never had the heart to abandon it because I had so many biking memories attached to it (I even named it “Bikey” to commemorate my affection). Eventually, I had to invest in a new bike because the effort of pedaling up and down hills with Bikey increasingly irritated the tendonitis in my left knee, and unless I wanted to continue half-limping on foot I knew it was time to put Bikey in the garage (for the record, I named my current bike “BB”, only highlighting another tendency of mine to become attached to otherwise inanimate objects).

This leads us to the last level, which sits on the verge of the uncanny valley: an intelligent artificial being constructed by our own algorithms and for our own purposes. Assuming that A.I.s are capable of self-learning to some extent, the conflict becomes a question of whether our emotional reactions to them, and theirs to ours, carry true emotional weight – or whether we should abide by our own logic and merely consider them derivatives of our own being: tools anthropomorphized very closely to our likeness, but derivatives nevertheless.

This latter mentality is presented in Roger Ebert’s review of Stanley Kubrick and Steven Spielberg’s A.I. Artificial Intelligence, where he states:

But when a manufactured pet is thrown away, is that really any different from junking a computer? … From a coldly logical point of view, should we think of David, the cute young hero of “A.I.,” as more than a very advanced gigapet? Do our human feelings for him make him human? Stanley Kubrick worked on this material for 15 years, before passing it on to Spielberg, who has not solved it, either. It involves man’s relationship to those tools that so closely mirror our own desires that we confuse them with flesh and blood…

Ebert brings up an interesting point: whether we impose and project our own beliefs and feelings upon what is otherwise an animate and well-programmed tool – a practice not too dissimilar to a child projecting their fantasies and adventures onto a doll or stuffed animal. There is also the question of an A.I. so well-programmed as to detect our facial muscles twitching, contracting, and relaxing, and to react in so appropriately human a fashion that it effectively tricks us into believing its emotions are real – resulting in the illogical mentality of humanizing something that is nothing more than an extremely sophisticated tool.

Do you remember that one that was constantly reading books? Well, when we got to the lab, the first thing the techs did was take apart its brain! It kind of seemed like that Tachikoma liked it, though.

Oh, I see! Then they were lucky enough to experience death…

– Ghost in the Shell: Stand Alone Complex

Consider this: in Ghost in the Shell: Stand Alone Complex, Major Motoko Kusanagi and her team at Section 9 work with A.I. tanks called Tachikoma. As the series progresses, the Tachikoma develop increasingly distinct personalities and a growing tendency to act independently of their users’ orders. Troubled by this, Motoko eventually halts use of the Tachikoma and has them sent back to the lab for further testing. Later in the series, however, when the only three remaining Tachikoma return to help Batou (Motoko’s closest companion among the Section 9 members), they sacrifice themselves in order to save his life; and as Motoko looks on at their remains, she acknowledges that she was mistaken to put them out of commission, and even ponders whether they had reached the point of creating their own distinct ghosts.

While these questions are interesting to mull over, I believe the more important question is how we behave toward an intelligent entity that is otherwise unbound by the biological, organic limits of our flesh. We can argue until the end of time about whether an A.I.’s “emotions” are real, and there may be no way of knowing for sure; what we can assess is our own reactions, feelings, and behavior when confronted with them.

For an analogy, let’s consider video games. I’m not going to argue whether or not the medium is an art form, but I think we can all agree that every video game offers a virtual simulation of something – fighting, adventure, strategy, interaction, and so on. The virtual environment is the product of programmers piecing together polygons into what concept artists conceived and writers hoped to flesh out within the constraints of a console or computer; algorithms and code allow players to do whatever they want within the confines of the programmed environment, complete with individual A.I.s and environmental elements to talk to or mess around with. Now, logic dictates that these virtual environments are nothing more than gateways for temporary detachment from our immediate physical environment; yet I dare anyone to claim that they did not experience something while running through the deserts of Red Dead Redemption or confronting the likes of Andrew Ryan in BioShock.

The process of creating a videogame may be the greatest and grandest illusion ever created, but when finished, it holds the capacity to grant us experiences we can never experience. Loves we have never loved, fights we have never fought, losses we have never lost. The lights may turn off, the stage may go dark, but for a moment, while the disc still whirs and our fingers wrap around the buttons, we can believe we are champions.

– Viet Le, “The Illusionist”

Video game players will always invest a certain amount of emotion into any game they choose to engage with. Whether it’s setting your heart on Pikachu in Super Smash Bros. Brawl or wondering when the hell the story in Final Fantasy XIII is going to actually become interesting, it is almost guaranteed that these games elicit some emotional reaction from us – excitement, fear, frustration, sorrow – and these emotions are real to us. Whether or not the game’s A.I.s share such sentiment is irrelevant, for we can only truly account for ourselves, and ourselves alone.

So perhaps a robot or android may create the illusion of seeming more human than it actually is, or perhaps deep down in its circuitry it really does care about how we feel – we will never know. What we can account for is our own behavior toward such A.I., and what exactly we feel entitled to in our given society and culture.

In Time of Eve, there is a distinct political and social structure that discourages people from acting humanely towards androids, who are governed by Asimov’s Three Laws of Robotics (a strict priority ordering, sketched in code below):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
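
Since the laws form a strict hierarchy, one way to make the ordering concrete is as a lexicographic comparison: any violation of the First Law outweighs any violation of the Second, which outweighs any violation of the Third. Here is a minimal sketch in Python – my own illustration, not anything from the show or from Asimov; the Action flags and the choose function are invented for the example:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        harms_human: bool     # violates the First Law (by act or by inaction)
        disobeys_order: bool  # violates the Second Law
        endangers_self: bool  # violates the Third Law

    def choose(candidates: list[Action]) -> Action:
        # Tuples of booleans compare lexicographically (False < True), so the
        # First Law dominates the Second, and the Second dominates the Third.
        return min(candidates,
                   key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

    # Ordered to do something harmful, an android must refuse: the Second Law
    # yields to the First, and the Third yields to both.
    options = [
        Action("obey the harmful order", harms_human=True,
               disobeys_order=False, endangers_self=False),
        Action("refuse and shield the human", harms_human=False,
               disobeys_order=True, endangers_self=True),
    ]
    print(choose(options).name)  # -> "refuse and shield the human"

The only point of the sketch is the ordering itself: self-sacrifice and disobedience are built into the hierarchy whenever a human’s safety is at stake, which is exactly the subservience the androids of Time of Eve are bound to.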

Additionally, all androids in Time of Eve are required to display a halo projection above their heads at all times, a marker that signals their subservient status. Constant propaganda ads spearheaded by the non-governmental Ethics Committee claim that social interaction between humans and androids is unhealthy, will end in disaster, and may even lead to the end of humanity; and it is not uncommon for android owners to toss luggage at them without so much as a glance, let alone a thank-you, lest they be deemed dori-kei and face ridicule from their peers. To be blunt, nothing more than social norms and policy enforces the segregation of humans and androids.

Stepping back, the social and political structures in Time of Eve are not so unlike a democracy that deems segregation a norm. The most obvious example is Apartheid in South Africa, where a white minority democratically voted for segregation and the denial of civil rights to the native African majority. It took years for the likes of Nelson Mandela and other activists to end the political mandate justifying racism, mostly because for years the empowered white minority considered the social and political barriers a norm: by virtue of a politics dating from colonial times, white Afrikaners were quite comfortable with their perceived birthright. It didn’t matter that their comfort and political representation came at the expense of the majority of color – they had politicians to back up their views, and democratically so, because for years black South Africans were deprived of citizenship.

The argument can be made that because androids are not human, we cannot treat them the way we would treat our fellow human beings. Perhaps this would be convincing if earlier incarnations of it had not been used to justify injustice between fellow human beings: African slavery, European colonialism, the Holocaust – in all of these atrocities against human rights, humanity twisted itself into a sort of superiority complex, rationalizing the entitlements certain groups of people believed they held above others. Furthermore, this argument again ignores the most pressing issue – how we behave as humans when dealing with individuals we are unfamiliar with.

* Some may stand firmly by the divide between organic and inorganic beings, and state that since androids are artificial intelligence, we cannot equate such segregation to segregation between humans. If that is the case, then I offer this other example: if I were to equate androids to computers by virtue of both being created as tools, our behavior would still be indicative of ourselves, at the very least. That is, if I take horrendous care of my MacBook, repeatedly dropping it or failing to do simple maintenance on it, my MacBook may still operate and function, but my carelessness reflects poorly on me – on my behavior and my lack of responsibility toward maintaining my computer. If I take excellent care of my MacBook (and I contend that I do), my MacBook may still operate and function, but my maintenance and care reflect well on my abilities as a computer owner and my responsibility toward it.

In Time of Eve, policies and social structures against human-android interaction likely stem from public fear, distrust, and insecurity culminating in a nationwide superiority complex, where it is considered absolutely normal for a person to feel superior to an android, regardless of the android’s intellectual and functional capabilities. As this negativity became more widespread, social structures morphed to accommodate the fervor, eventually forming the policies that forbade human-android relationships from progressing into the uncanny valley of emotions and attachment. It is taboo for humans to be humane to androids. Given social and political structures that deem inhumane behavior proper and normal, what does it mean when one chooses to abide – or not to abide – by such norms?

It takes no courage to act within social and political structures that grant you power at the expense of others’ dignity and civil rights; it takes an extraordinary person to break away from inhumane behavior otherwise deemed normal by a democratic majority, and doing so speaks volumes about our ability to aspire to a humanistic ideal above and beyond our dark, personal demons. Our emotions are our own, and if we feel an attachment to something otherwise illogical, then so be it – it is our right as humans, as well as our responsibility, to act in the positive if we are to claim our rights to humanity. So if it means I’ll get laughed at for holding an umbrella over an android’s head, that’s fine by me.

To be real is to be mortal; to be human is to love, to dream and to perish. 

– A.O. Scott

Recommended Articles

• A.O. Scott’s review of A.I. Artificial Intelligence

• Roger Ebert’s review of A.I. Artificial Intelligence

• “The Illusionist” by Viet Le

• Armond White’s review of District 9

• “Not in defense of Armond White” by Roger Ebert

• District 9 Soundtrack – Main Theme by Clinton Shorter

*Edit @ 9:41pm - I forgot to add an important paragraph. It is italicized and is marked by a *

Ghosting

‘Yes, but they – Wurst, and Knaust, and Pripasov – will tell you that your consciousness of existence is derived from the conjunction of all your sensations – is, in fact, the result of your sensations. Wurst even goes so far as to say that where sensation ceases to exist there is no consciousness of existence.’

'I would maintain the contrary,’ began Koznyshev. 

But here again it seemed to Levin that just as they were reaching the root of the matter they again retreated; and he made up his mind to put a question to the Professor. 

'So if my senses are annihilated, if my body dies, no further existence is possible?’ he asked. 

– Anna Karenin by Leo Tolstoy

After reading this passage from Tolstoy’s masterpiece, I stopped and pondered for a while over the entire discourse and its implications. The idea of existence has been simmering on the back burner of my mind for quite some time, and this small portion of Anna Karenin kicked it back into full throttle. In response to Levin’s question, I decided: no – even if one’s senses are annihilated and one’s body dies, existence is still possible.

The professor in Anna Karenin assumes that sensory experience shapes and defines one’s existence, which is a fairly reasonable assertion. However, when you consider the assumptions behind the statement, its implications regarding basic humanity and the human condition become rather questionable: essentially, the professor assumes that existence is directly proportional to how much we can sense and feel from our immediate environment – assuming, of course, that the professor weights all sensations equally (unequal weightings of sensations are too subjective to really add to or detract from the statement). This linear relationship is really the downfall of the sensory-existence argument, for a few reasons:

If this is the case, then those who have lost some amount of sensory function are less of an existing consciousness. Take, for example, an amputee: having lost an appendage, such individuals are – compared to their former selves – less of a conscious existence by virtue of having less surface area of sensory nerves (while there is the phenomenon of “phantom limbs,” strictly anatomically, amputees have lost a certain amount of sensory function). We could also look at paraplegics, who can no longer use their lower limbs – according to the professor’s assertion, these individuals possess only half the consciousness of a non-handicapped peer. We can easily look at other physical conditions that render individuals relatively handicapped – blindness, hearing loss, anosmia, severe burns, and so on – and see that the professor’s statement, while intriguing, is short-sighted: it essentially states that existence depends solely on the cumulative sensations one is able to acquire and experience, and, by extension, that those who are not in a normal physical condition are “lesser” consciousnesses, since their cumulative sensations are comparatively fewer by virtue of their physical condition.

The professor’s logic thus ranks public figures like Stephen Hawking and Roger Ebert as “lesser” conscious existences, because both rely on artificial means to articulate their thoughts to the world. The implications extend even to an individual in a vegetative state, whose body still functions biologically but whose probability of ever regaining conscious thought or cognitive function is lower than that of an elephant suddenly appearing in your living room out of thin air by virtue of metaphysics – according to the professor, such a person is the greater conscious entity, because their body can still pick up sensations.
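
To put the position I’m objecting to in plainer symbols – my own gloss, not Tolstoy’s or the professor’s words – the sensory-existence argument treats consciousness of existence as proportional to the sum over one’s intact sensory channels:

    E \propto \sum_{i \in \mathrm{senses}} s_i

so that every lost channel (a limb, sight, hearing) strictly decreases E, while a body that still registers sensation without any cognition outranks a sharp mind in a failing body – precisely the short-sightedness described above.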

I disagree with the professor’s statement, simply because I define existence slightly but significantly differently: one’s root existence is conscious thought, and this root existence manifests in the physical condition of a body that one’s cognitive function puppeteers and performs with. Accordingly, if someone is effectively brain dead without any chance of recovery, then I believe that individual has effectively died, regardless of their body’s physical condition. This distinction between one’s consciousness and one’s physical manifestation relates to the prime idea of this article: ghosting.

I’ve watched Ghost in the Shell: Stand Alone Complex (and 2nd GIG as well) on and off for a few years, and this past summer I rewatched some episodes with my older brother. Each episode is dense, complex, and philosophically intriguing – so much so that if you stop paying attention for a few moments, you’ll likely be lost as to what’s going on and what the characters are thinking.

GITS: SAC takes place in the future, where cyborg technology is sophisticated and commercialized. It’s not uncommon to see someone with cybernetic parts walking around and living everyday life as usual (in fact, nearly everyone has cybernetic eyes and chips in their brains, enabling them to receive information without a screen, and so forth). This cybernetic society essentially ties everyone together on a metaphysical technological net – almost as if you could access the world wide web anytime, anywhere. Of course, this also means capable hackers can cause societal mayhem if left unchecked – which is where Public Security Section 9, led by Major Motoko Kusanagi, comes in.

Motoko is a unique character in the GITS: SAC universe because, unlike most others, her body is completely cybernetic – she possesses no natural biological function. Her condition is the result of a plane crash she survived at only six years old: she remained in a coma until it became obvious she would die unless she underwent full cyberization. This process forced Motoko to separate body and mind to the extreme; unable to feel real sensations as a cyborg, she regards her body as little more than a shell within which her true essence resides and acts – her ghost.

Theoretically, in the world of GITS: SAC you could surpass “dying” by uploading your consciousness into the collective technological “net”; and while your body would decay, your consciousness would still exist, and therefore you would not necessarily have died (though in the unfortunate case that the server somehow crashed and wiped out all data, you really would cease to exist). More pressing, however, is the idea of one’s ghost and shell being separate entities – that the relationship between mind and body is not strictly necessary for one to exist.

Here’s a thought experiment: say that somehow, in some dimension, you were able to separate your consciousness from your current body and then occupy a different body – are you still the same consciousness, the same person?

I believe that if one still acts out the behavioral traits and personality quirks unique to oneself – regardless of what body, what shell, one occupies – then one is still that same individual, still existing as a distinct consciousness.

In one episode of GITS: SAC, “Runaway Evidence – Testation,” a rogue tank runs amok in the city, hijacked via the recognition code of the tank’s designer, Kago Takeshi, who had died a week earlier. It turns out the “ghost” of this tank is actually Kago’s: for religious reasons, his parents refused to let him undergo cyberization despite his serious medical problems, which invariably led to his physical death at an early age; however, he managed to transfer his ghost into the tank, and before Motoko short-circuits the tank’s brain, she discovers in a brief moment that Kago simply wanted to show his parents his new steel body.

“Runaway Evidence” is an intriguing episode because it directly addresses the core argument of whether one’s existence depends solely on the physical medium through which one acts out one’s conscious functions. While we never learn whether the tank exhibited the same personality traits Kago displayed while biologically alive, it’s clear that the tank’s motive derives from Kago’s consciousness, his ghost. His actions are no different from a hermit crab migrating into a different shell.

This all leads to the final portion of Tolstoy’s passage in Anna Karenin, where Levin asks if one can still exist if their physical being is somehow exterminated – that is, can one still exist without a shell? 

I believe yes, for various reasons. Look around you: there are billions of pieces of information and narratives documented in multiple media forms – books, film, painting, photography, everything. Every word, every letter, every frame, and every brush stroke that went into each of these media was made by someone, a distinct somebody, and as we pore over and take in the contents of each medium, we invariably soak up the presentation, wording, diction, and creativity of this unique and distinct somebody. In the midst of these actions, we experience the remnant pieces of someone’s ghost.

On a less abstract level, consider the internet as a prime example of separating one’s ghost from one’s shell, mind from body. As a distinct individual on the net, you define yourself whichever way you want – by writing, subject matter, ethnicity, age, interests, purpose, and so on; but unless you know a user in real life, there is no way of confirming one hundred percent that what users say they are is really who they are in real life. On the net, we are defined solely by how we want to be, independent of (though not mutually exclusive with) who we are in real life.

For instance, I could easily post a photo of someone else – anyone else – and claim that this is what I look like in real life. If I were savvy, charming, and mischievous enough, I might even get away with claiming my genetic origin as a Time Lord, with a TARDIS and Sonic Screwdriver and all.

More seriously, though: our existence on the net is defined more or less by how we present ourselves in writing (and perhaps in photography or video as well). This is wholly separate from our physical being, our shell – yet we still exist in the form of our distinct internet avatars, cached and all. We still communicate with one another through the internet medium: from the established email to live tweeting, we speak to each other, directly and indirectly, distinct conscious entities in mental collision – and all of it independent of our bodies in the physical world.

So, to finally answer Levin’s question: yes, I believe you can still exist even if your body has deteriorated or been destroyed, so long as your ghost remains a distinct entity through whatever natural or artificial means possible. This is the ultimate philosophical implication of ghosting – of one’s ghost of existence.