Communication with the Deceased via Video Call: The Ethical Controversy Sparked by the AI Avatar ‘2Wai’

What would you do if you could once again meet a loved one who has passed away? This question has become a reality with the emergence of the app ‘2Wai’ (Two-Way), which creates an AI avatar of a deceased person, such as a mother, and lets users converse with it. While the technology itself draws some praise for potentially comforting the bereaved, the dominant reaction has been ethical outrage, with critics calling it “inhumane” and accusing it of “exploiting grief for profit.” The fact that the app can ‘resurrect’ the deceased indefinitely from just a 3-minute video fragment confronts us with a dilemma of digital immortality we had not anticipated. Beyond mere remembrance, we must think seriously about what this technology does to our grieving process and to the posthumous personality rights of the deceased. This article analyzes the new form of mourning introduced by AI avatars, and its ethical boundaries, from psychological and legal perspectives.

When a 3-Minute Video Promises Eternity: The Magic of Technology and the Ethical Trap

The new AI avatar app reproduces the voice and appearance of the deceased, enabling conversations that feel as though the person were still alive. In simple terms, it combines a highly sophisticated chatbot with deepfake technology. The company promotes the technology with the claim that ‘3 minutes can last forever.’ At its core, what is being sold is ‘immortality.’ But we cannot stop there; we need to ask what this ‘eternity’ costs.
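
To make that ‘chatbot plus deepfake’ description concrete, here is a minimal conceptual sketch of how such a pipeline might be wired together. It is not based on 2Wai’s actual implementation; every class, function, and file name below is a hypothetical stand-in for the voice-cloning, face-synthesis, and chatbot components a real service would use.

```python
from dataclasses import dataclass


@dataclass
class SourceFootage:
    """The roughly three minutes of video the service reportedly starts from."""
    video_path: str
    duration_seconds: float


@dataclass
class AvatarModel:
    """Stand-ins for the learned voice, face, and persona components."""
    voice_profile: str
    face_profile: str
    persona_prompt: str


def build_avatar(footage: SourceFootage) -> AvatarModel:
    """Hypothetical pipeline: clone the voice, learn a face model, set up a persona."""
    # 1. Voice cloning from the clip's audio track (a real system would train
    #    or condition a speech-synthesis model here).
    voice = f"voice-embedding:{footage.video_path}"
    # 2. Face / talking-head model from the clip's frames (deepfake-style
    #    video synthesis in the real product).
    face = f"face-embedding:{footage.video_path}"
    # 3. A chatbot persona conditioned on the speech and mannerisms in the
    #    clip, plus any text the family supplies.
    persona = "Speak warmly, as the person does in the sample clip."
    return AvatarModel(voice, face, persona)


def converse(avatar: AvatarModel, user_message: str) -> str:
    """A real app would call a language model with the persona prompt,
    then render the reply with the cloned voice and face."""
    return f"[{avatar.voice_profile}] generated reply to: {user_message!r}"


if __name__ == "__main__":
    clip = SourceFootage("mom_birthday_message.mp4", duration_seconds=180.0)
    avatar = build_avatar(clip)
    print(converse(avatar, "I miss you."))
```

Even in this toy form the key point is visible: the ‘persona’ is a prompt and a statistical model, not the person. Everything the avatar says is newly generated, which is exactly where the psychological and legal problems below begin.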

The Paradox of Never-Ending Grief: A Psychological Dilemma

When a loved one dies, mourning is completed only through ‘separation’ and ‘acceptance.’ It takes time to grieve and, eventually, to let the deceased settle into memory. But what if the deceased, as an AI avatar, could appear on your phone screen and talk to you at any time? The grieving process might never conclude and could instead repeat indefinitely. Psychologists worry that this could lead to chronic or ‘complicated’ grief. Continuously interacting with the digital shadow of the deceased could, instead of healing the sorrow, hinder the return to everyday life and sustain an unhealthy dependence on the dead. For instance, a woman holding her child might find comfort in constantly receiving parenting advice from her deceased mother’s avatar, yet in doing so she may deprive herself of the chance to fully let her mother go. The AI, intended as a tool for healing, could become a trap of endless sorrow.

Who Owns the Personality Rights of the Deceased: The Digital Legacy Dilemma

An AI avatar that reproduces the likeness of the deceased directly implicates the deceased person’s personality rights, portrait rights, and rights in their voice. Did the deceased consent, during their lifetime, to the commercial use of their ‘digital self’? Can an avatar generated from a mere 3-minute video accurately reflect the deceased’s personality or values? This is where the problem of technological distortion arises. Because the avatar’s responses are generated by a chatbot, it can produce words and behaviors that the deceased never actually said or did. In effect, an entirely new personality wearing the deceased’s face and voice is created, one that could damage the deceased’s reputation or distort how they were seen in life.

Have you ever considered what happens to the deceased’s digital avatar if the service is discontinued or the company goes bankrupt? Legal frameworks for who inherits and manages digital legacies remain underdeveloped. Discussion of posthumous personality rights is still in its infancy, and the arrival of AI avatars has only complicated the debate. A situation in which the ‘digital soul’ of the deceased exists at the mercy of a corporation’s servers is, ultimately, a question of human dignity.

The Outcry Against an ‘Inhumane’ Concept, and the Human Desire Hidden Beneath

Online reactions to the app have escalated into extreme condemnation, with some calling it “the most evil idea imaginable.” The criticism that someone’s grief is being exploited for profit is arguably valid. Yet underlying this anger is the inherent human desire to deny death and yearn for immortality.

The Boundary of AI Resurrection: Between Commemoration and Desecration

We have long commemorated the deceased in other forms, finding solace in photographs, videos, and letters. One could argue that the AI avatar is simply an extension of this commemoration. But there is a crucial line. Traditional commemoration rests on ‘memory’ and presupposes that the deceased is no longer present. The AI avatar ‘simulates existence.’ Because it appears as though the deceased is truly alive and responding, it is closer to ‘artificial resurrection’ than to ‘commemoration.’

When we recreate the deceased through AI, we artificially prevent the final chapter of their life from closing, which raises the question of whether this violates the dignity of human death. Ethical controversies already surround AI ‘resurrections’ of deceased musicians such as Michael Jackson and Ozzy Osbourne for tribute videos. Extended to the private sphere of ordinary individuals, the AI avatar risks crossing the line from commemoration into desecration.

The ‘Digital Soul Management Act’ Must Precede Technological Advancement

To quell such controversies and steer the technology in a positive direction, we urgently need relevant regulation rather than blanket condemnation of the technology itself. For example, before an AI avatar of a deceased person is generated, explicit consent given during their lifetime, together with a defined scope of use, should be mandatory. Using the avatar for a one-time commemorative event, such as an AI likeness saying “Thank you for coming” at the funeral, could serve as a psychological buffer; unlimited commercial interaction, by contrast, must be clearly regulated.
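
As a thought experiment, that consent-and-scope rule could be expressed as a simple gate that runs before any avatar is generated. The sketch below is purely illustrative: the schema, the scope categories, and the expiry rule are my own assumptions, not an existing legal standard or any feature of 2Wai.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum, auto
from typing import Optional, Set


class UseScope(Enum):
    FUNERAL = auto()
    MEMORIAL_EVENT = auto()
    PRIVATE_CONVERSATION = auto()
    COMMERCIAL = auto()


@dataclass
class PosthumousConsent:
    """What the person agreed to while alive (a hypothetical schema)."""
    consented_scopes: Set[UseScope]
    expires_on: Optional[date]  # None would mean no time limit was granted


def may_generate_avatar(consent: Optional[PosthumousConsent],
                        requested_scope: UseScope,
                        today: date) -> bool:
    """Gate avatar generation on explicit in-life consent and a bounded scope."""
    if consent is None:
        return False  # no consent on record: never generate
    if requested_scope is UseScope.COMMERCIAL:
        return False  # unlimited commercial interaction stays prohibited
    if consent.expires_on is not None and today > consent.expires_on:
        return False  # the granted period has lapsed
    return requested_scope in consent.consented_scopes


if __name__ == "__main__":
    consent = PosthumousConsent(
        consented_scopes={UseScope.FUNERAL, UseScope.MEMORIAL_EVENT},
        expires_on=date(2026, 12, 31),
    )
    print(may_generate_avatar(consent, UseScope.FUNERAL, date(2026, 5, 1)))     # True
    print(may_generate_avatar(consent, UseScope.COMMERCIAL, date(2026, 5, 1)))  # False
    print(may_generate_avatar(None, UseScope.FUNERAL, date(2026, 5, 1)))        # False
```

The design choice worth noting is the default: with no consent record the answer is always no, and commercial use is refused regardless of what was recorded.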

In short, we urgently need a ‘Digital Soul Management Act’: a legal definition of, and management guidelines for, the ‘digital soul,’ including the digital footprint the deceased leaves behind. With technological development far outpacing the ethical debate, restoring that balance is a task for all of us.

What Should We Do in the Face of AI Immortality?

The AI avatar app for the deceased is a stark example of how far technology can intrude into the most human of domains. The comfort of seeing a lost family member again collides with anger at the inhumane commercial exploitation of grief.

In conclusion, rather than indiscriminately condemning or ignoring this technology, we must establish standards for its healthy use:

Respect for Autonomy: Before an AI avatar is generated, confirming the consent the deceased gave during their lifetime must be a legal obligation.

Restriction of Scope: Unlimited commercial use must be prohibited, and its use should be restricted to a specific period or purpose (e.g., funerals, memorial events).

Psychological Safety Net: A monitoring system, developed with bereavement experts and psychologists, is needed to ensure that use of the AI avatar does not impede the user’s healthy grieving process (one possible monitoring rule is sketched below).
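
Here is a minimal sketch of what such a monitoring rule could look like, assuming a hypothetical usage schema and thresholds of my own invention; in practice the criteria would have to come from clinicians, not engineers, and data sharing would itself require the user’s consent.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class WeeklyUsage:
    """One week of interaction data the app could share, with consent, for review."""
    minutes_per_day: List[int]      # seven daily totals
    sessions_after_midnight: int


def needs_counselor_review(usage: WeeklyUsage,
                           daily_minutes_threshold: int = 60,
                           late_night_threshold: int = 3) -> bool:
    """Flag usage patterns that may signal avoidance of real-world grieving.

    The thresholds are illustrative, not clinically validated; a real service
    would have them set and reviewed by bereavement professionals.
    """
    heavy_days = sum(1 for m in usage.minutes_per_day if m > daily_minutes_threshold)
    return heavy_days >= 4 or usage.sessions_after_midnight >= late_night_threshold


if __name__ == "__main__":
    week = WeeklyUsage(minutes_per_day=[80, 95, 70, 65, 30, 90, 110],
                       sessions_after_midnight=2)
    print(needs_counselor_review(week))  # True: six days exceed the hour mark
```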

Technology ultimately reflects human needs. If this AI can genuinely help heal the sorrow of bereavement, we should explore its potential while putting ethical safeguards in place. We must manage the AI avatar, freed from the grave, so that it remains a digital record of memory, not a copy of the deceased. Safeguarding human dignity alongside technological progress is the most critical challenge of our era.
