Recent advances in information technology have led to a flood of powerful artificial intelligence tools on the market. The limits of these tools' capabilities are still being studied, especially after their gradual shift from classifying information to generating it. Alongside concerns over personal data loss and intellectual property, AI tools are praised for producing knowledge that would otherwise be inaccessible.

The object of this presentation is to delve deeper into the epistemological status of the objects represented in AI images. While no one questions the ability of AI to create images of high fidelity, their connection to reality is severed. For the time being, this anomaly is visible in certain graphical artifacts (the number of fingers, typography, etc.). With technological progress promising to eradicate these imperfections, it will soon be impossible to identify AI-generated visual content as such. Still, the loose connection to reality will persist, since the computational foundations will remain unchanged.

To investigate this, the algorithmic generation of images will be compared with the multifaceted process of scientific imaging. It is argued that the surprisingly impressive results of AI are more closely related to the Lotmanian notion of unpredictability than to Kuhn's paradigm shift. To assess their content correctly, AI images need to be viewed and read from a different perspective, one that includes at least partial access to their programming and training datasets.