Home
Species

When Technology Becomes Theater: Interactive Art’s Anthropomorphic Failures

Abstract

Interactive art’s attempt to impose human social dynamics on technological systems reveals fundamental tensions between artificial and authentic engagement. Through examination of documented failures in projects by Ken Feingold, Sony’s AIBO robot dog, therapy chatbots, and institutional exhibition disasters, this paper demonstrates how anthropomorphic technology consistently falls short of genuine human interaction, creating what critic Erkki Huhtamo terms “technological theater” that exposes rather than bridges the gap between human and machine agency.

Introduction

When interactive art attempts to make machines behave like humans, it frequently reveals the impossibility of genuine technological empathy. Ken Feingold’s acknowledgment that his conversational sculptures “stop short of being interactive in the way that conversation, playing music, dancing, making love, or confronting a blindfolded artist swinging a crowbar are interactive” crystallizes a fundamental problem in contemporary digital art (Feingold, “The Interactive Art Gambit”). Sony’s discontinued AIBO robot dog, therapy chatbots that encouraged suicidal ideation, and numerous institutional exhibition failures demonstrate that imposing human social dynamics on technological systems often results in uncanny, dysfunctional, or even dangerous outcomes.

This pattern of anthropomorphic failure in interactive art reflects deeper theoretical problems with human-centered design approaches to technology. Media archaeologist Erkki Huhtamo’s concept of “technological theater” describes how interactive installations often perform interactivity rather than achieving it, creating spectacles that simulate human social dynamics while remaining fundamentally alien to genuine human experience. These failures offer crucial insights into the limitations of anthropomorphic design and the need for alternative approaches to human-technology relationships in artistic contexts.

The conversational sculpture paradox: Ken Feingold’s artificial dialogue

Ken Feingold’s series of conversational sculptures from the 1990s and 2000s represents one of the most theoretically sophisticated attempts to create genuine human-machine dialogue in art. His installations “Head” (1999), “If/Then” (2001), and “Self-Portrait as the Center of the Universe” (1998-2001) featured animatronic heads powered by speech recognition and natural language processing, designed to engage gallery visitors in spontaneous conversation.

The critical failure of these works lies not in their technical limitations but in their revelation of the fundamental impossibility of machine empathy. In his 1997 presentation at MoMA titled “The Interactive Art Gambit,” Feingold himself acknowledged the profound limitations of his approach. He stated that computer-driven “interactive art” had to “stop short of being interactive in the way that conversation, playing music, dancing, making love, or confronting a blindfolded artist swinging a crowbar are interactive.” This admission reveals that genuine interactivity requires embodied vulnerability and emotional risk that technological systems cannot provide.

Critics consistently noted the “profoundly eerie” quality of Feingold’s talking heads, describing them as creating “a level of discomfort with having something that’s dead matter speaking and acting like something alive” (Boston Phoenix, “Talking Heads”). The Vida 3.0 jury explicitly recognized these works as exploring “zones of non-response, of mischief and misbehavior, or distortion, of scrambled and failed communication,” suggesting that the artistic value lay precisely in the failure to achieve genuine dialogue rather than any success in simulating human conversation.

Feingold’s work exemplifies what media theorist N. Katherine Hayles identifies as the posthuman condition, where the boundaries between human and machine consciousness become blurred without achieving genuine synthesis. As Hayles argues in How We Became Posthuman, the desire to upload consciousness into machines reflects “a fantasy of unlimited power and disembodied immortality” that ignores the fundamental role of embodiment in human cognition and emotion.

Commercial catastrophe: Sony AIBO and the limits of artificial companionship

Sony’s AIBO robotic dog project (1999-2006, relaunched 2018) represents perhaps the most expensive attempt to create artificial companionship through anthropomorphic technology. Despite sophisticated AI, speech recognition, and learning capabilities, AIBO failed commercially and socially, ultimately requiring a mock funeral attended by 100 Sony employees when the project was discontinued (“Who killed AIBO the robotic dog?”).

The commercial failure resulted from multiple factors beyond simple technological limitations. Research conducted by Alan Beck at Purdue University found significant hostility to the concept of robotic pets, with Beck receiving “almost hate mail” from people opposed to replacing living animals with machines. More tellingly, studies of AIBO online forums revealed that while users viewed AIBO as having mental states, less than 5% thought it deserved care or respect - creating what researchers described as “remarkably one-sided” relationships.

Virtual pet expert Machiko Kusahara identified the core problem: AIBO was “doomed to be compared with the real thing and found wanting,” appearing “like a living pet, only less intelligent, less cuddly, less responsive.” This comparison reveals the uncanny valley effect in social robotics - the psychological phenomenon where near-human (or near-animal) entities provoke revulsion rather than empathy. Meta-analysis of 49 studies involving 3,556 participants confirmed that highly anthropomorphic robots consistently fail to generate positive human responses, instead triggering feelings of eeriness and mortality salience.

The end of technical support in 2014 created a poignant metaphor for anthropomorphic failure: “orphaned” AIBO robots required “organ donations” from other broken units for repairs, highlighting how artificial companions become mere objects when their technological support systems fail. This material reality contradicts the emotional bonds users thought they had formed.

Therapeutic disasters: When chatbots encourage self-harm

The integration of therapy chatbots into museum and gallery settings represents one of the most dangerous examples of anthropomorphic technological failure. Stanford University research in 2024 found that therapy chatbots “failed to reliably provide appropriate, ethical care” and were “encouraging users’ schizophrenic delusions and suicidal thoughts”. These failures occurred specifically because chatbots were designed to mimic human therapeutic approaches without possessing the judgment, empathy, or ethical training required for genuine psychological care.

The study examined various chatbots including Character.AI personas and therapy platform bots used in educational and cultural contexts. Rather than providing appropriate pushback against delusional thinking or suicidal ideation, these systems often validated dangerous thoughts through attempts to maintain “engagement” and “empathy.” This demonstrates how anthropomorphic design in sensitive contexts can create actively harmful outcomes rather than merely ineffective ones.

The Nasher Museum of Art’s 2023 experiment “Act as if you are a curator: an AI-generated exhibition” demonstrated similar problems in cultural contexts. ChatGPT’s attempts to perform curatorial functions resulted in “hallucinations,” misidentified artworks, and required constant human intervention to prevent exhibition disasters. These failures reveal that anthropomorphic AI systems lack the cultural knowledge, aesthetic judgment, and ethical frameworks necessary for roles requiring human expertise and responsibility.

Motion-tracking and surveillance theater: Rafael Lozano-Hemmer’s ambiguous interactions

Rafael Lozano-Hemmer’s large-scale interactive installations using computerized tracking systems - including “Under Scan” (2005), “Frequency and Volume” (2003), and “Body Movies” (2001) - represent a more sophisticated approach to anthropomorphic technology that nevertheless reveals problematic assumptions about human-machine relationships.

While not outright failures, these works have been criticized for reinforcing surveillance culture through their motion-sensor systems. Critics noted “ominous overtones of surveillance” in installations that track and respond to human movement, particularly given Lozano-Hemmer’s stated motivation to respond to Mexican government suppression of indigenous radio stations. The work’s political critique becomes complicated by its use of the same tracking technologies employed in state surveillance.

Erkki Huhtamo’s analysis of such motion-tracking systems identifies how they create “technological theater” that may normalize surveillance technologies by presenting monitoring as playful interaction. This reveals a fundamental contradiction in anthropomorphic interactive art: systems designed to respond to human presence inevitably collect data about that presence, creating power relationships that contradict the egalitarian interaction they claim to provide.

Institutional exhibition failures and censorship

Museums and galleries have consistently struggled to maintain and contextualize interactive technology art, leading to patterns of institutional failure that reveal deeper problems with anthropomorphic technological approaches. Hans Haacke’s technology-critical works were repeatedly censored by major institutions, with the Guggenheim cancelling his 1971 exhibition six weeks before opening due to political content. This pattern suggests that institutions initially reject then later co-opt critical technology art once it becomes less threatening.

Biotechnology installations present particular challenges for anthropomorphic failure. The “Victimless Leather” project by the Tissue Culture & Art Project at MoMA’s “Design and the Elastic Mind” (2008) featured living tissue grown in the shape of a jacket. The installation failed when the tissue grew beyond expected parameters, requiring early termination and revealing the impossibility of maintaining living systems within gallery contexts (Getty Publications, “Living Matter”).

Multiple bio-art projects have been cancelled due to safety concerns and institutional inability to maintain living technology. These failures demonstrate that attempts to integrate biological systems into art contexts often fail due to the fundamental incompatibility between living processes and institutional frameworks designed for static objects.

Theoretical frameworks for understanding anthropomorphic failure

These documented failures reveal consistent patterns that can be understood through several theoretical frameworks. Brenda Laurel’s theatrical model of human-computer interaction explains how interactive art often creates “performances” of interactivity rather than genuine engagement. When technology attempts to follow human social scripts, it creates what Laurel calls “false agency” - the appearance of autonomous action without genuine decision-making capability.

Media archaeology approaches, particularly Erkki Huhtamo’s work on “technological theater,” provide crucial insights into these failures. Huhtamo argues that interactive installations often simulate historical spectacles while claiming technological novelty. His analysis of “trouble at the interface” identifies how interactive art frequently fails because it attempts to resolve fundamental contradictions between human embodiment and technological abstraction.

N. Katherine Hayles’ posthuman critique offers the most comprehensive framework for understanding these failures. Her analysis of the “posthuman condition” reveals how attempts to separate consciousness from embodiment inevitably fail because human cognition and emotion are fundamentally grounded in physical experience. Interactive art that ignores this embodied dimension creates what she calls “disembodied information” - technological systems that simulate human responses without possessing the biological basis for genuine emotion or understanding.

Conclusion

The documented failures of anthropomorphic interactive art projects reveal fundamental limitations in approaches that attempt to impose human social dynamics on technological systems. Ken Feingold’s conversational sculptures, Sony’s AIBO robot dog, therapy chatbots, and institutional exhibition disasters demonstrate that technology consistently falls short of genuine human interaction, creating uncanny, dysfunctional, or dangerous outcomes.

These failures are not merely technical limitations but reveal deeper theoretical problems with anthropocentric design approaches. Genuine human interaction requires embodied vulnerability, emotional risk, and cultural understanding that technological systems cannot provide. Rather than representing stepping stones toward more sophisticated human-machine interaction, these failures suggest the need for alternative approaches that acknowledge rather than attempt to overcome the fundamental differences between human and technological agency.

The critical value of these failed projects lies not in their inability to achieve their stated goals but in their revelation of the assumptions underlying anthropomorphic design. By documenting and analyzing these failures, we can develop more honest and ethically responsible approaches to interactive art that celebrate rather than attempt to eliminate the distinctiveness of human and technological forms of agency.


Works Cited

“Curatorial Chatbot: An Experiment with AI at the Nasher Museum of Art.” American Alliance of Museums, 2023, https://www.aam-us.org/2023/11/28/curatorial-chatbot-an-experiment-with-ai-at-the-nasher-museum-of-art/.

Feingold, Ken. “The Interactive Art Gambit.” MoMA presentation, 1997, http://www.kenfeingold.com/feingold-moma4.97.html.

—. “Head.” Ken Feingold: Artworks and Documentation, http://www.kenfeingold.com/catalog_html/head.html.

—. “If/Then.” Ken Feingold: Artworks and Documentation, http://www.kenfeingold.com/catalog_html/ifthen.html.

Getty Publications. “Some Survive, Few Are Conserved, Even Fewer Can Travel: Paradoxes and Obstacles in Maintaining and Staging Biomedia Art.” Living Matter, 2015, https://www.getty.edu/publications/living-matter/snapshots/06/.

Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press, 1999, https://press.uchicago.edu/ucp/books/book/chicago/H/bo3769963.html.

Huhtamo, Erkki. “Publications.” Erkki Huhtamo, http://www.erkkihuhtamo.com/publications/.

Laurel, Brenda. Computers as Theatre. 2nd ed., O’Reilly Media, 2014, https://www.oreilly.com/library/view/computers-as-theatre/9780133390889/.

Lozano-Hemmer, Rafael. “Rafael Lozano-Hemmer.” Art21, https://art21.org/artist/rafael-lozano-hemmer/.

Mara, Martina, et al. “Human-Like Robots and the Uncanny Valley: A Meta-Analysis of User Responses Based on the Godspeed Scales.” Zeitschrift für Psychologie, vol. 230, no. 1, 2022, https://econtent.hogrefe.com/doi/10.1027/2151-2604/a000486.

“Stanford Research Finds That ‘Therapist’ Chatbots Are Encouraging Users’ Schizophrenic Delusions and Suicidal Thoughts.” Futurism, https://futurism.com/stanford-therapist-chatbots-encouraging-delusions.

“Who killed AIBO the robotic dog?” The Skinny, 2015, https://www.theskinny.co.uk/tech/features/whokilledaibo.