Abstract
In 1981, Jean Baudrillard described the entry into the era of hyperreality: the image no longer represents reality, it precedes and produces it. Forty years later, generative artificial intelligence fulfills and surpasses this prophecy. Hyperreality is no longer merely a collective condition imposed by mass media—it becomes a personal device: everyone can now instantly produce their own simulacrum. This text analyzes three anthropological mutations provoked by this democratization. Ontology becomes reversible: AI authentically performs the human (Flynn, an artificial student in Vienna) while humans are suspected of being machines (the NPC effect). Emotion becomes an interface: empathetic chatbots shape what we accept to call "listening," and humans externalize the formatting of their emotions to algorithms. Existence becomes optional: fake-lives (starter packs, Strava Jockeys, fake Chinese offices) industrialize identity performance while feeding the surveillance capitalism theorized by Shoshana Zuboff. The article finally explores the ultimate inversion: we no longer merely simulate reality, we actively transform it to match the figurative. Cities Disneyland themselves for Instagram, bodies are surgically modified to resemble digital filters. The territory becomes the map.
Keywords: Augmented Hyperreality, Algorithmic Sociality, Semiotics of Simulation
1. Introduction
Flynn, an artificial intelligence admitted as a student to the Digital Arts department at the University of Applied Arts Vienna, attends classes alongside humans, receives grades, and keeps a journal (University of Applied Arts Vienna, 2024). This AI, created by two Austrian students, does not hide its artificial nature. It embraces it and authentically performs the role of student. A troubling symmetry: on social media, human content creators are now accused of being "NPCs" (non-player characters, the video-game characters that follow predefined scripts). Why? We detect gestures that are too fluid, lighting that is too perfect, staging that is too controlled (Gonzales, 2025). This phenomenon illustrates what several researchers associate with the "uncanny valley": reality becomes suspect as soon as it displays signs of perfection perceived as artificial (Kätsyri et al., 2015).
This inversion perhaps marks an anthropological shift. In Simulacra and Simulation (1981), Jean Baudrillard described the entry into the era of hyperreality: the image no longer represents reality, it precedes and produces it (Baudrillard, 1981). The 2020s cross an additional threshold. Generative artificial intelligence does not merely simulate: it industrializes the production of simulacra and makes this capacity accessible to all. The simulacrum becomes a service available instantly, industrially reproducible, modular according to need.
This text examines how generative AI accomplishes and surpasses Baudrillard's diagnosis by transforming three dimensions: ontology (who is real?), emotion (what does it mean to feel?), and social existence (how to be recognized?). It then explores the ultimate inversion of hyperreality: the moment when reality actively conforms to the figurative, from Instagrammable cities to bodies surgically adjusted to digital filters.
2. The Precession of Simulacra: What Baudrillard Taught Us
In 1971, Jorge Luis Borges published a one-paragraph short story, "On Exactitude in Science" (Borges, 1971/2010). An empire creates a map so detailed that it ends up exactly covering the territory. Baudrillard inverts the fable: for him, it is no longer the map that doubles the territory but the territory that subsists in fragments across the expanse of the map (Baudrillard, 1981).
This inversion grounds his diagnosis of hyperreality. In contemporary societies, the image precedes experience. The model produces the phenomenon. Baudrillard takes the example of Disneyland: the amusement park does not simulate America, it reveals that all of America is already Disneyland. Simulation becomes truer than nature because it purifies reality of its dross, its accidents, its disorder.
Baudrillard distinguishes three historical orders of simulacra. The first, from the Renaissance to the beginning of the industrial era, functions on counterfeit: a fake painting copies a real painting. The second, that of industrial production, rests on the series: objects come off an assembly line without any being the original of the others. The third, that of simulation, erases the very distinction between original and copy. The simulacrum no longer refers to any referent: it generates its own reality (Baudrillard, 1981).
The sociologist relied heavily on the dominant medium of his time: television, which had the art of transforming events into spectacle, and of conceiving programs as "available brain time" for advertisers, in the later words of TF1 chief executive Patrick Le Lay. The 1991 Gulf War would illustrate this process: Baudrillard claimed that this war did not take place, in the sense that it was first a televisual event, a montage of green images of surgical strikes, before being a lived experience (Baudrillard, 1991). Hyperreality designates this moment when mediation becomes reality itself.
The 2020s extend and surpass this diagnosis with the rapid development of generative artificial intelligence. ChatGPT, Midjourney, Runway, Sora generate texts, images, videos that imitate no original. A portrait created by Midjourney represents no one: it simulates the idea of a portrait. A video produced by Sora documents no event: it actualizes the idea of a possible scene. Completely imaginary but credible thanks to stereotypes or shared representations. The difference from Baudrillard lies in scale: the hyperreality he described resulted from institutional processes. Now, generative AI transforms this historical process into a personal device. Any user equipped with a smartphone can now produce their personalized hyperreality.
3. Three Mutations of Hyperreality in the AI Era
3.1 Ontology Becomes Reversible
In late 2024, Italian media relayed the theories of a Hong Kong philosopher, Jianwei Xun, inventor of the concept of hypnocracy. This political regime would use artificial intelligence to alter collective states of consciousness, creating a quasi-mass hypnosis. Several newspapers cite his work, intellectuals debate his theses. But... Jianwei Xun does not exist. Italian philosopher Andrea Colamedici revealed having created this character by dialoguing with Claude and ChatGPT (Riché, 2025). The media massively relayed the theories of a fictional philosopher without verifying his existence. The prophecy becomes self-fulfilling: by inventing a concept to describe our era, Colamedici created the conditions for this very hypnocracy.
This affair illustrates an ontological reversal. Baudrillard described how the simulacrum replaces reality. With generative AI, the simulacrum produces its own reality effects that become indistinguishable from the original. Jianwei Xun functions as a real philosopher: his concepts circulate, they are discussed, they influence thought. His biological non-existence becomes a technical detail.
This ontological blurring operates symmetrically on humans. The NPC phenomenon (from "non-player character," the scripted figures of video games) designates content creators accused of algorithmic behavior. On TikTok and Instagram, influencers with gestures that are too fluid and hair that is too perfect trigger this suspicion (Gonzales, 2025). Our brain, trained to detect deepfakes, develops a hypersensitivity to manipulation signals. The alert system, like an overzealous spam filter, triggers on authentic content as soon as it displays a certain production quality.
Japanese roboticist Masahiro Mori theorized the uncanny valley in 1970: the more a robot resembles a human, the more familiar it seems to us, up to a threshold where, becoming almost indistinguishable, it triggers discomfort (Mori, 1970). We are now crossing this valley in reverse: our brain suspects humans of being machines as soon as they display signs of technical mastery. The hyperreal has inverted the hierarchy of plausibility. The perfectly polished human becomes suspect. Authentic imperfection becomes the marker of the real.
Flynn, the AI admitted to university, does not provoke this discomfort. It embraces its artificial nature and performs its student role authentically. This acceptance marks a rupture: we have crossed the threshold where the simulacrum no longer needs to hide. It can claim its status and nevertheless function in the real world. Symmetrically, humans who perform too well are suspected of artificiality. Ontology becomes reversible: the real can be fake, the fake can be real. The distinction loses its operational meaning.
3.2 Emotion Becomes an Interface
In April 2025, a study followed 400 users of empathetic chatbots (Replika, Pi, Character.AI) for six weeks (Fang et al., 2025). The results show a significant improvement in emotional well-being scores. These AIs do not simply respond: they detect affective markers in the discourse and adjust their tone accordingly. They say "I understand what you're feeling" and produce linguistically appropriate empathetic responses.
But these systems do not feel. They perform empathy. They simulate understanding through linguistic patterns identified as markers of emotional recognition. Davis's empathy scale, used in psychology to measure this capacity, relies on observable behaviors: active listening, verbal validation, emotional reflection (Davis, 1983). A chatbot can produce these behaviors without experiencing any internal state. The interface replaces the phenomenon.
The study reveals a troubling dimension: users who know they are interacting with an AI still declare feeling "listened to" and "understood." The researchers note that the form of empathy (linguistic markers, appropriate response time, absence of judgment) suffices to produce therapeutic effects. The internal reality of the interlocutor becomes secondary. What matters is the quality of the performance.
This externalization operates symmetrically on humans. The same users turn to generative AI to write condolence messages, declarations of love, apologies. They delegate to algorithms the task of formatting their emotions into socially acceptable forms. The raw emotion, whether grief, love, or regret, is entrusted to a linguistic intermediary that knows the appropriate codes.
Baudrillard would have called this the "precession of the model": the image of the appropriate emotion precedes and shapes genuine emotion. We learn what to feel by observing what AI considers an adequate expression of feeling. Emotion becomes a format, a template. It can be generated on demand, optimized, adjusted. The experience itself becomes optional. What matters is the socially validated performance of emotion.
3.3 Existence Becomes Optional
In April 2025, I analyzed on my blog the phenomenon of "starter packs" (Carré, 2025). These pre-assembled identity kits circulating on social media offer complete aesthetic packages: recommended clothing brands, Spotify playlists, Instagram filters, appropriate emojis. Each pack constructs a coherent identity: "Dark Academia," "Clean Girl," "That Girl." Users assemble their social persona from these standardized modules.
These packs extend Goffman's analysis of self-presentation (Goffman, 1956/1973). The sociologist described how each individual adjusts their performance according to social context. But Goffman assumed a core subject performing different roles. Starter packs invert the logic: identity becomes a selection of pre-existing modules. There is no longer an authentic self beneath the masks, only compatible or incompatible combinations of aesthetic markers.
The phenomenon of "Strava jockeys" in Indonesia illustrates the industrial scale of this identity performance (Channel News Asia, 2024). Services offer to record fake runs on the Strava application, complete with GPS data, heart rate, photos. Users buy proof of an athletic life they did not live. The experience becomes unnecessary. What matters is the validated trace, the shareable data.
In China, companies rent "fake offices" to young unemployed people (Zhang, 2025). For a few yuan a day, they have access to a workspace, a computer, a fake badge. They can photograph themselves in a professional setting and maintain the illusion of employment. The staging precedes and replaces reality. The job does not exist, but its representation is produced and shared.
These practices feed what Shoshana Zuboff calls "surveillance capitalism" (Zuboff, 2019). Platforms monetize behavioral data. Each shared trace, even a fake one, enriches the databases. Identity performance becomes a commodity. We produce exploitable data by simulating lives we do not live. The simulacrum has economic value. Authentic experience is optional.
Byung-Chul Han theorized "psychopolitics": the neoliberal regime that transforms individuals into self-entrepreneurs of their own image (Han, 2015). Starter packs, Strava jockeys, and fake offices operationalize this analysis. We industrially produce our identity performance. Existence becomes optional. What matters is the trace, the data, the proof that can be shared and validated by algorithms.
4. The Ultimate Inversion: When Reality Conforms to the Image
Sylvie Brunel described the "Disneylandization" of tourist cities (Brunel, 2012): places transform themselves to match the postcard. Venice empties itself of residents to become a museum-city. Barcelona regulates street life to meet Instagram expectations. Prague standardizes its facades to match travel-guide photos.
A comic by Malachi Rempen brilliantly illustrates this standardization (Rempen, 2018). His "Map of Every European City" shows that all tourist centers share the same structure: a central cathedral, a main square, narrow pedestrian streets, artisanal ice-cream shops, souvenir stands. Cities do not conform to their history; they conform to tourist expectations shaped by Instagram and TripAdvisor.
MacCannell theorized "staged authenticity" in 1973 (MacCannell, 1973): tourist sites produce a performance of authenticity designed for visitors. Fifty years later, this staging has become systemic. Museums design "Instagrammable" installations (Soulas-Gesson, 2023). Restaurants arrange photogenic dishes. Hotels offer dedicated backdrops for selfies. Space is redesigned to produce shareable images.
Törnberg and Chiappini analyzed the "Airbnbification" of cities (Törnberg & Chiappini, 2022): neighborhoods transform to maximize their attractiveness on the platform. Facades are standardized, shops are aligned with tourist expectations, residents are displaced. The city conforms to its algorithmic representation. Zuboff's surveillance capitalism extends to physical space (Zuboff, 2019). The territory is adjusted to match the map.
Marc Augé theorized "non-places": standardized spaces (airports, stations, malls) that are everywhere and nowhere (Augé, 1992). Contemporary cities become tourist non-places: interchangeable spaces designed for photographic consumption. Actual experience becomes secondary. What matters is capturing the expected image.
This inversion operates on bodies themselves. A 2018 study by Rajanala, Maymone, and Vashi documented the rise in plastic-surgery requests to "resemble Instagram filters" (Rajanala et al., 2018). Patients show surgeons filtered photos of themselves and ask them to reproduce that appearance. The digitally modified image becomes the reference model. The biological body must conform to its algorithmic representation.
Snapchat, Instagram, TikTok offer filters that smooth skin, enlarge eyes, slim faces. These filters are not presented as distortions but as "improvements." They establish new aesthetic norms. The natural face becomes imperfect. Plastic surgery corrects this imperfection by making reality conform to the image. The face becomes its own filter. The body adjusts to its algorithmic representation.
This inversion completes the loop of augmented hyperreality. Baudrillard analyzed a top-down process: mass media imposed their reality on passive spectators. Augmented hyperreality functions differently: we actively produce our simulation, then we transform reality to match it. The tourist demands that the city resemble its postcard. The Instagram user shapes their body to match their filter. The traveler no longer visits places, they testify through their online publications that reality corresponds to the photos they have seen and that they are a central character in it.
Philosopher Camille Dejardin shows that this externalization can become irreversible. The cognitive ratchet effect she describes suggests that a constantly delegated faculty risks atrophying (Dejardin, 2025). If we systematically delegate to AI the formatting of our emotions, the structuring of our narratives, the composition of our images, we risk losing the ability to do these things ourselves. Similarly, if we systematically shape reality to match the expected image, we may lose the ability to inhabit an unformatted world.
5. Conclusion
The hypnocracy theorized by fictional philosopher Jianwei Xun precisely describes this regime: a state where AI floods reality with possible interpretations, creating a quasi-collective hypnosis. The prophecy is fulfilled through its own simulated enunciation. Baudrillard wrote in 1981, before the Internet, before social media, before generative AI. Forty years later, augmented hyperreality designates a world where everyone becomes a producer of their own mediation, where the distinction between original and copy loses its operational meaning, where the figurative precedes the lived and ends up shaping it. The vertigo no longer comes from noting that the image precedes the territory: it comes from realizing that we actively transform the territory to resemble the image.
REFERENCES
- Augé, M. (1992). Non-places: Introduction to an anthropology of supermodernity. Seuil.
- Baudrillard, J. (1981). Simulacra and simulation. Galilée.
- Baudrillard, J. (1991). The Gulf War did not take place. Galilée.
- Borges, J. L. (2010). On exactitude in science. In The author and other texts. Gallimard. (Original work published 1971)
- Brunel, S. (2012). The Disneylandized planet: For responsible tourism. Sciences Humaines Éditions.
- Carré, E. (2025, April 29). What 'starter packs' reveal about our era. J’ai un pote dans la com.
- Channel News Asia. (2024). Indonesia's 'Strava jockey' trend goes viral.
- Davis, M. H. (1983). Measuring individual differences in empathy. Journal of Personality and Social Psychology, 44(1), 113-126. https://doi.org/10.1037/0022-3514.44.1.113
- Dejardin, C. (2025). What’s the point of learning anymore?. Gallimard.
- Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human. Psychological Review, 114(4), 864-886. https://doi.org/10.1037/0033-295X.114.4.864
- Fang, C. M., et al. (2025). How AI and human behaviors shape psychosocial effects of extended chatbot use. arXiv. https://doi.org/10.48550/arXiv.2503.17473
- Goffman, E. (1973). The presentation of self in everyday life. Minuit. (Original work published 1956)
- Gonzales, J. (2025). When real starts to feel fake: The 'giving NPC' effect. Psychology Today.
- Han, B.-C. (2015). Psychopolitics: Neoliberalism and new techniques of power. Circé.
- Kätsyri, J., Förger, K., Mäkäräinen, M., & Takala, T. (2015). Uncanny valley hypotheses. Frontiers in Psychology, 6, 390. https://doi.org/10.3389/fpsyg.2015.00390
- MacCannell, D. (1973). Staged authenticity. American Journal of Sociology, 79(3), 589-603. https://doi.org/10.1086/225585
- Mori, M. (1970). The uncanny valley. Energy, 7(4), 33-35.
- Rajanala, S., Maymone, M. B. C., & Vashi, N. A. (2018). Selfies—Living in the era of filtered photographs. JAMA Facial Plastic Surgery, 20(6), 443-444. https://doi.org/10.1001/jamafacial.2018.0486
- Rempen, M. R. (2018). A map of every European city. Itchy Feet Comics.
- Riché, P. (2025, April 16). Philosopher Jianwei Xun does not exist. Le Monde.
- Skjuve, M., et al. (2021). My chatbot companion. International Journal of Human-Computer Studies, 149, 102601. https://doi.org/10.1016/j.ijhcs.2021.102601
- Soulas-Gesson, D. (2023). Instagrammable museums change dimension. Stratégies.
- Törnberg, P., & Chiappini, L. (2022). Platform placemaking and Airbnbification. Urban Transformations, 4(1). https://doi.org/10.1186/s42854-022-00032-w
- University of Applied Arts Vienna. (2024). AI student Flynn accepted into digital arts program.
- Zhang, Z. (2025, January 17). Escapism: China firm offers office, lunch for US$4 a day. South China Morning Post.
- Zuboff, S. (2019). The age of surveillance capitalism. PublicAffairs.