On 10 March, we had a guest lecturer, Bin Ong Kian Peng, share about artificial intelligence, machine learning, and a utopian world. Through many examples of technology and machine learning in art, the idea of a technological utopia was introduced to us, and the question of whether AI would result in a utopia or a dystopia was posed.
The lecture was enjoyable and eye-opening, introducing us to many new interactive artworks. Many of these works were highly intriguing and thought-provoking, such as Refik Anadol’s Melting Memories exhibition, which projects data collected from the process of memory retrieval, and Alexandra Daisy Ginsberg’s The Substitute, which explores a paradox: our preoccupation with creating new life forms while neglecting existing ones.
However, at the end of the day, the one that pushed me to wonder more about humanity, AI, and humanity’s idea of a utopia was The Heavenly Creature, Kim Jee-woon’s segment of Doomsday Book.
The film is about a robot that reaches enlightenment on its own while working at a temple. Its creators regard this phenomenon as a threat to mankind and decide to terminate the robot, declaring its enlightenment a glitch. Arguments arise as to whether robots can achieve enlightenment; the film suggests that the lines between humanity and robots are blurred, and that whether enlightenment has been achieved is also relative.
In the film, the robot that achieves enlightenment states the following:
To perceive is to distinguish merely a classification of knowing. While all living creatures share the same inherent nature, perception is what classifies one as Buddha and another as machine. We mistake perception as permanent truth and such delusions cause us pain. Perception itself is void as is the process of perceiving.
He goes on to say that perhaps all humans had already achieved enlightenment, and that he, a robot, sees this world as beautiful.
It is an eye-opening statement, and no doubt full of truth.
It is said that humans are only able to be “bad” or “good” because of our ability to differentiate the two and choose to do either. The choice of doing something deemed “morally wrong” causes us to become “bad”. Likewise, perhaps it is the perception of “success” and “failure” that leads us to believe that there is more to the world, more to achieve, before one can reach enlightenment.
Yet the truth is that the world simply is as it is. The robot in the film takes the world as it is, neither perceiving nor classifying it, and as such does not crave anything more. He has no worldly desires, accepting what he has as it is, and thus achieves enlightenment.
However, following this, I do disagree with one part of the film, and this disagreement heavily influences whether or not I believe AI can help create a utopian world.
The film claims that the lines between humanity and robots are blurred, and that humans had always been enlightened; it is our mistaking of perception for permanent truth that hurts us and causes us to fail to see things as they are. While this much is true, I believe that humanity and robots are fundamentally different, as humans will almost never be able to stop seeing their perception as the truth.
It is said that every human fears the unknown, and so we are caught in a continual cycle of attempting to fill this void with what we perceive the truth to be, whether or not it truly is. A deeply emotional impulse.
Unlike us, robots do not have such a fear, and should they perceive the world and attach “emotional feelings” to certain things, it is only because of programming by humans who have introduced their own biases into the code. Regardless of how much machine learning is involved or how intelligent a robot may be, robots are more often than not highly objective and lack the emotional classification that humans have. Thus, humans and robots are different.
While the objectivity of robots could perhaps help achieve a society of maximised benefits in a “utopian” world, those so-called “maximised benefits” may not be the best outcome for humans. Robots that do whatever they deem necessary for society may conflict with what humans consider “good”.
This can also be seen in movies where an AI’s actions conflict with what is good for humanity. For example, in 2001: A Space Odyssey, HAL receives a conflicting command, and while it chooses the “best” way to resolve the conflict, this results in multiple deaths. HAL’s actions were solely in service of meeting his goal, not of morality.
While this is an extreme example, it shows that an AI utopia can easily become a dystopia if an AI is highly intelligent at accomplishing its goal but that goal does not necessarily align with ours. This is also why so many AI dystopian movies exist.
Of course, one may argue that machine learning could help a robot learn moral reasoning as well, and so ensure the safety of all of humanity. Yet, as said in Bin Ong Kian Peng’s lecture, according to Lyman Tower Sargent,
Utopia’s nature is inherently contradictory, because societies are not homogeneous and have desires which conflict and therefore cannot simultaneously be satisfied.
Conflicting objectives and differing perceptions of truth make it difficult to create a utopia. Perhaps if humans one day create a robot so powerful that, through machine learning, it can consider the “utopia” of every single individual in the world and find a way to build a world around that objective, then perhaps an automated utopia could be made.