During my talk, I will focus on two image generative models, Midjourney and DALL·E, in order to understand how machine processing has been applied to works of art, especially painting and photography. Midjourney uses, as the substance of its plane of expression, databases of artistic images, and it is able to combine and reconfigure famous painting styles, as Manovich has noted in his recent writings. This mixing of artistic styles in Midjourney's generations is crucial to understanding how the database and the algorithm work together to build a composite whole in which the styles of different painters receive a homogeneous treatment. DALL·E, by contrast, works differently: it seems to know painting iconographies better than Midjourney does, but not styles. I will address the relationship between the plane of expression and the plane of content of artificially generated images, with a special focus on the effects of (un)coordinated variations of the verbal prompt on the visual results, in order to understand the degree to which the aleatory participates in this computational practice. The general aim is to describe the different ways in which image generative models incorporate painting traditions, and to assess the stereotypical or innovative contribution these models make in manipulating art-historical forms.