This article examines how social robots, or “bots,” have transformed online interactions and information manipulation, particularly on the platform X (formerly Twitter). It retraces their socio-historical evolution, from early chatbots such as ELIZA to AI-driven agents capable of realistic human mimicry. Drawing on the Beelzebot research project, the paper proposes a classification of “malicious” bots according to their technical sophistication, intentionality, and interaction strategies. These bots amplify, polarize, and distort public debate through mechanisms such as astroturfing, fake engagement, and echo-chamber exploitation. The integration of generative AI has produced a new generation of adaptive, persuasive “AI bots” that blur the boundaries between human and machine. The article highlights how these entities shape information flows, foster disinformation, and undermine trust in institutions. It argues for a socio-technical “archaeology” of bots to understand their evolving power in digital public spaces. Finally, it calls for new multidisciplinary tools, at once technical, educational, and regulatory, to preserve the authenticity of social interaction and democratic deliberation in the AI era.
Our study proposes a socio-relational approach intended to inform the training of an artificial intelligence system for botnet detection. First, a corpus of accounts likely to be automated was assembled using individual criteria defined by the Beelzebot team (Brachotte et al.). These accounts were then analysed through their interaction dynamics in order to identify relational configurations that could serve as relevant signals for automated detection. The article presents a socio-relational analysis based on a three-step protocol: (1) identifying forms of self-interaction; (2) examining internal interactions among suspected accounts; and (3) analysing their external interactions with third-party actors. This research was conducted within the ANR Beelzebot project, which aims to develop the first French-language solution capable of detecting information-manipulation strategies deployed by automated networks in the French-speaking X-sphere; it constitutes an exploratory phase designed to calibrate the data-preparation methodology required to train an AI model that integrates socio-relational indicators. Beyond producing a quantitative score, our model aims to provide a complementary qualitative output describing the characteristics of the botnet and the functional roles occupied by different bot profiles within the network. From an ethical standpoint, this approach contributes to the development of a more explainable AI model.
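The three-step protocol can be pictured as a simple labelling pass over an interaction graph: each directed interaction is classified as a self-interaction, an internal interaction among suspected accounts, or an external interaction with a third-party actor. The sketch below is a hypothetical illustration of that idea; the function names, data shapes, and account names are assumptions for exposition, not the Beelzebot project's actual pipeline.

```python
# Hypothetical sketch of the three-step relational classification.
# Interactions are directed edges (source, target); `suspects` is the
# corpus of accounts flagged as likely automated.

def classify_interaction(source, target, suspects):
    """Label one directed interaction according to the three-step protocol."""
    if source == target:
        return "self"        # step 1: account interacting with itself
    if source in suspects and target in suspects:
        return "internal"    # step 2: interaction inside the suspected set
    return "external"        # step 3: interaction with a third-party actor

def relational_profile(edges, suspects):
    """Count each interaction type across a suspected network."""
    counts = {"self": 0, "internal": 0, "external": 0}
    for source, target in edges:
        counts[classify_interaction(source, target, suspects)] += 1
    return counts

# Toy example: three suspected accounts and four interactions.
suspects = {"botA", "botB", "botC"}
edges = [
    ("botA", "botA"),    # self-interaction (e.g. quoting its own post)
    ("botA", "botB"),    # internal amplification
    ("botB", "botC"),    # internal amplification
    ("botC", "user42"),  # external interaction with a third party
]
print(relational_profile(edges, suspects))
# {'self': 1, 'internal': 2, 'external': 1}
```

Such a relational profile is one way the quantitative score could be paired with a qualitative reading of the roles accounts play within the network.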