For Lotman, it is human nature to want to “foresee” the future. This desire is reinforced in times of crisis, of which we currently seem to be experiencing several. One of the most critical obstacles to this desire is the fundamental unpredictability of sociocultural events. The quest to “control” the future therefore often manifests in attempts to mechanize the sociocultural world and to minimize both chance and human agency. Such endeavours have recurred throughout history: first in the conception of mechanistic philosophy, then in its application in various forms of automata, scientific instruments, and the Industrial Revolution, and today in the applications of “artificial intelligence” (AI). The speed of current technological progress is perceived to outpace any historical precedent. Compounded by global environmental change, the accompanying fear of an unpredictable future culminates in eschatological visions of an AI-induced apocalypse. The presentation frames the problem of fear and its articulations in techno-discourse through specific types of tension between discrete and non-discrete types of signification (Lotman, Uspensky 1978; J. Lotman 2007, 2019; Madisson 2014) and through the semiotics of fear (Madisson 2010; Ventsel et al. 2021; M. Lotman 2009a). The combination of different types of signification with the discourse of fear revives archaic code-texts underlying the contemporary discourse on AI.