Time of Eve (イヴの時間) - An Exploration of Our Humanity

Following my discussion of “Ghosting” a few weeks ago, Allan Estrella recommended Time of Eve, commenting that the story was exceptional in exploring human behavior toward artificial beings – specifically robots and androids, or artificial “ghosts.”

The premise is this: in the (likely) future of Japan, androids have become as commonplace as cell phones and laptops. However, in order to maintain the traditional social structure, humans and androids are discouraged from interacting beyond basic controls and commands, and androids are required to always display a halo-like projection above their heads so they cannot be mistaken for humans.

The main character, Rikuo, has taken robots for granted his entire life. One day, he discovers that his family’s home android, Sammy, has begun acting independently, and, with his friend Masaki, traces her movements to a cafe called “Time of Eve,” where any patron – android or human – is welcome, and no one is discriminated against.

From there, the story explores vignettes of different characters, from the hyperactive Akiko to the lovers Koji and Rina. The central conflict, of course, is how human-centric behavior arises in response to intelligent, artificial beings of our own creation, and how such fears, prejudices, and pride can make us as inhuman as we make the androids out to be. In Time of Eve, humans commonly treat androids as subservient, coldly ordering them about without a single glance. Social stigma further deters people from acting kindly, graciously, or gratefully toward androids: the mere act of holding an umbrella over an android will get others pointing and laughing at you, derisively labeling you a dori-kei (“android holic”). Such behavior is encouraged by a non-governmental organization, the Robot Ethics Committee, which advocates segregation between humans and robots and presses the government to enforce it.

At the heart of this conflict is a question of emotional legitimacy: given that robots and androids are cognitively capable (if not more capable than humans at processing information) by virtue of their algorithms, and are thus self-learning, perhaps to an extent, are they capable of displaying and receiving emotion? And if so, should we consider such emotions legitimate?

First, let’s consider living organisms, particularly the vertebrates (reptiles, birds, mammals). Animals, while possibly exhibiting physical features or behaviors similar to ours (chimpanzees, for example), are not us: we cannot produce viable offspring with non-Homo sapiens. Yet animal lovers have a tendency to anthropomorphize the animals we observe (I’m particularly fond of Oxboxer’s description of cheetah cubs: “They look like the kid you hated in preschool because he got light-up sneakers three months before they were even being sold in the States, and lorded it over everyone until your friend colored his hair green with a marker during nap time.”). This is especially true of household pets, and it leaves us distressed whenever they pass away. Understandably, our tendency to become emotionally attached to animals is not unusual: their behaviors are invariably tied to their emotions, and while we cannot completely communicate with or understand them, the underlying attachment is organic at its core – our natural, organic ghosts, so to speak.

Now let’s consider why we get attached to inanimate objects. Most of the time it’s out of nostalgia or sentimentality, or perhaps simple habit. These objects are not human, yet somehow we find some sort of personal meaning in them. For instance, for months I rode an 11-year-old bike that was too small for me, with a broken front derailleur, severely misaligned rim brakes, an old chain, and a steel frame so heavy I’m pretty sure my upper-arm strength increased significantly just from lifting it on occasion; yet I never had the heart to abandon it because I had so many biking memories attached to it (I even named it “Bikey” to commemorate my affection). Eventually, I had to invest in a new bike because the effort of pedaling up and down hills with Bikey increasingly irritated the tendonitis in my left knee, and unless I wanted to continue half-limping on foot I knew it was time to put Bikey in the garage (for the record, I named my current bike “BB,” only highlighting another tendency of mine to become attached to otherwise inanimate objects).

This leads us to the last level, one on the verge of the uncanny valley: an intelligent artificial being constructed by our own algorithms and for our own purposes. Assuming that A.I. are capable of self-learning to some extent, the conflict becomes a question of whether our emotional reactions to them, and theirs to ours, carry true emotional weight, or whether we should abide by our own logic and merely consider them derivatives of our own being – tools anthropomorphized very closely to our likeness, but derivatives nevertheless.

This latter mentality is presented in Roger Ebert’s review of Stanley Kubrick’s and Steven Spielberg’s A.I. Artificial Intelligence, where he states:

But when a manufactured pet is thrown away, is that really any different from junking a computer? … From a coldly logical point of view, should we think of David, the cute young hero of “A.I.,” as more than a very advanced gigapet? Do our human feelings for him make him human? Stanley Kubrick worked on this material for 15 years, before passing it on to Spielberg, who has not solved it, either. It involves man’s relationship to those tools that so closely mirror our own desires that we confuse them with flesh and blood…

Ebert brings up an interesting point: whether we impose and project our own beliefs and feelings onto what is otherwise an animate, well-programmed tool – a practice not too dissimilar to a child projecting their fantasies and adventures onto a doll or stuffed animal. There is also the question of an A.I. so well programmed that it can detect our facial muscles twitch, contract, and relax, and react in so convincingly human a fashion that it effectively tricks us into believing its emotions are real, resulting in our illogical mentality of humanizing something that is nothing more than an extremely sophisticated tool.

Do you remember that one that was constantly reading books? Well, when we got to the lab, the first thing the techs did was take apart its brain! It kind of seemed like that tachikoma liked it though. 

Oh, I see! Then, they were lucky enough to experience death…

– Tachikoma dialogue, Ghost in the Shell: Stand Alone Complex

Consider this: in Ghost in the Shell: Stand Alone Complex, Major Motoko Kusanagi and her team at Section 9 work with A.I. tanks called Tachikoma. As the series progresses, the Tachikoma develop increasingly distinct personalities and a growing tendency to act independently despite orders from their users. Troubled by this, Motoko eventually halts use of the Tachikoma and has them sent back to the lab for further testing. Later in the series, however, the three remaining Tachikoma return to help Batou (Motoko’s closest companion among the Section 9 members) and eventually sacrifice themselves to save his life; as Motoko looks on at their remains, she acknowledges that she was mistaken to put them out of commission, and even ponders whether they had reached the state of creating their own distinct ghosts.

While these questions are interesting to mull over, I believe the more important question is how we behave toward an intelligent entity otherwise unbounded by our biological, organic limits of the flesh. We can argue until the end of time about whether an A.I.’s “emotions” are real, and there may be no way of knowing for sure; what we can assess is our own reactions, feelings, and behavior when confronted with them.

For an analogy, let’s consider video games. I’m not going to argue whether the medium is an art form, but I think we can all agree that every video game offers a virtual simulation of something – fighting, adventure, strategy, interaction, and so on. The virtual environment is the product of programmers piecing together polygons into what concept artists conceived and writers hoped to flesh out within the constructs of a console or computer; algorithms and code allow players to do whatever they want within the confines of the programmed environment, complete with individual A.I. and environmental elements for us to talk to or mess around with. Now, logic dictates that these virtual environments are nothing more than gateways for temporary detachment from our immediate physical environment; yet I dare anyone to claim that they did not experience something while running through the deserts of Red Dead Redemption or confronting the likes of Andrew Ryan in BioShock.


The process of creating a videogame may be the greatest and grandest illusion ever created, but when finished, it holds the capacity to grant us experiences we can never experience. Loves we have never loved, fights we have never fought, losses we have never lost. The lights may turn off, the stage may go dark, but for a moment, while the disc still whirs and our fingers wrap around the buttons, we can believe we are champions.

– Viet Le, “The Illusionist”

Video game players will always invest a certain amount of emotion into any game they choose to engage with. Whether it’s setting your heart on Pikachu in Super Smash Bros. Brawl or wondering when the hell the story in Final Fantasy XIII is going to actually become interesting, these games are almost guaranteed to elicit some emotional reaction from us – excitement, fear, frustration, sorrow – and these emotions are real to us. Whether or not the games’ A.I. share such sentiments is irrelevant, for we can only truly account for ourselves, and ourselves alone.

So perhaps a robot or android may create the illusion of being more human than it actually is, or perhaps, deep down in its circuitry, it really does care about how we feel – we will never know. What we can do is account for our own behavior toward such A.I., and consider what exactly we feel entitled to in our given society and culture.

In Time of Eve, there is a distinct political and social structure that discourages people from acting humanely toward androids, who are governed by Asimov’s Three Laws of Robotics (a brief code sketch of their precedence follows the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
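
For the programmatically minded, here is a minimal, purely illustrative sketch of the laws’ strict precedence. Nothing here comes from the film; the Action fields and function are my own hypothetical simplification of how the three priorities stack:

```python
from dataclasses import dataclass

# Hypothetical model of an act a robot might take; these fields are an
# illustration only, not anything specified in Time of Eve.
@dataclass
class Action:
    harms_human: bool = False        # would directly injure a human
    allows_human_harm: bool = False  # inaction that lets a human come to harm
    disobeys_order: bool = False     # violates an order given by a human
    order_is_harmful: bool = False   # the order itself conflicts with the First Law

def permitted(action: Action) -> bool:
    """Evaluate an action against the Three Laws in strict priority order."""
    # First Law outranks everything: no harm by action or inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obedience is required, except where the order itself
    # would violate the First Law.
    if action.disobeys_order and not action.order_is_harmful:
        return False
    # Third Law: self-preservation is permitted by default - any action
    # reaching this point threatens neither a human nor a lawful order.
    return True

# Ignoring an owner's harmless command fails the Second Law:
print(permitted(Action(disobeys_order=True)))                          # False
# Refusing an order that would harm someone is allowed:
print(permitted(Action(disobeys_order=True, order_is_harmful=True)))   # True
```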

Additionally, all androids in Time of Eve are required to display a halo projection above their heads at all times, a marker that signals their subservient status. Constant propaganda ads spearheaded by the non-governmental Ethics Committee claim that social interaction between humans and androids is unhealthy, will end in disaster, and could even lead to the end of humanity; and it is not uncommon for android owners to toss luggage at them without so much as a glance, let alone a thank-you, lest the owners be deemed dori-kei and face ridicule from their peers. To be blunt, nothing but social norms and policy enforces the segregation of humans and androids.

Stepping back, the social and political structures in Time of Eve are not so unlike a democracy that deems segregation a norm. The most obvious example is Apartheid in South Africa, where a white minority democratically voted to segregate the native African majority and strip them of civil rights in their own country. It took years for the likes of Nelson Mandela and other activists to end the political mandate justifying racism, largely because for years the empowered white minority considered the social and political barriers a norm: by virtue of politics dating back to colonial times, white Afrikaners were quite comfortable with their perceived birthright. It didn’t matter that their comfort and political representation came at the expense of the black majority – they had politicians to back up their views, and democratically so, because for years black South Africans were deprived of citizenship.

The argument can be made that because androids are not human, we cannot treat them the way we would treat fellow human beings. Perhaps this would be convincing if earlier incarnations of it had not been used to justify injustice between fellow human beings: African slavery, European colonialism, the Holocaust – atrocities against human rights that twisted humanity into a sort of superiority complex, rationalizing the entitlements that groups of people believed they held above others. Furthermore, this argument again ignores the most pressing issue – how we behave as humans when dealing with individuals we are unfamiliar with.

* Some may strongly stand by the divide between organic and inorganic beings, and insist that since androids are artificial intelligences, we cannot equate their segregation to segregation between humans. If that is the case, then I offer another example: if I were to equate androids to computers by virtue of both being created as tools, our behavior would still be indicative of ourselves. That is, if I take horrendous care of my MacBook, repeatedly dropping it and failing to do simple maintenance, my MacBook may still operate, but my carelessness reflects poorly on me and my lack of responsibility as its owner; if I take excellent care of my MacBook (and I contend that I do), it likewise still operates, but my maintenance and care reflect well on me as a computer owner and on my sense of responsibility toward it.

In Time of Eve, the policies and social structures against human-android interaction likely stem from public fear, distrust, and insecurity culminating in a nationwide superiority complex, where it is considered absolutely normal for a person to feel superior to an android, regardless of the android’s intellectual and functional capabilities. As this negativity became more widespread, social structures morphed to accommodate the fervor, eventually forming the policies that forbade human-android relationships from progressing into the uncanny valley of emotion and attachment. It is taboo for humans to be humane to androids. Given social and political structures that deem inhumane behavior proper and normal, what does it mean when one chooses whether or not to abide by such norms?

It takes no courage to act within social and political structures that grant you power at the expense of others’ dignity and civil rights; it takes an extraordinary person to break away from inhumane behavior otherwise deemed normal by a democratic majority, and doing so speaks volumes about our ability to aspire toward a humanistic ideal above and beyond our dark, personal demons. Our emotions are our own, and if we feel an attachment to something otherwise illogical, then so be it – it is our right as humans, as well as our responsibility, to act in the positive if we are to claim our rights to humanity. So if it means I’ll get laughed at for holding an umbrella over an android’s head, that’s fine by me.

To be real is to be mortal; to be human is to love, to dream and to perish. 

– A.O. Scott

Recommended Articles

• A.O. Scott’s review of A.I. Artificial Intelligence

• Roger Ebert’s review of A.I. Artificial Intelligence

• “The Illusionist” by Viet Le

• Armond White’s review of District 9

• “Not in defense of Armond White” by Roger Ebert

• District 9 soundtrack – main theme by Clinton Shorter

*Edit @ 9:41pm - I forgot to add an important paragraph. It is italicized and is marked by a *