If we assume that with ChatGPT we have, for the first time, a non-human “author” generating entire texts on demand, what lessons can the major “competing” semiotic paradigms draw from this? How the meaning of a text is formed, interpreted and explained is one of the core, transversal problems of semiotics. I present a theoretical investigation of what is known so far about the training of Large Language Models (LLMs), the transformer architecture and word embeddings in Natural Language Processing, in order to project the general ideas coming from these fields onto two approaches to explaining the text: the generative (structuralist) one and the textual pragmatics of Umberto Eco (inspired by pragmatism). Although generative semiotics might have been expected to show greater affinity with text-generating algorithms, we actually found greater potential for collaboration with interpretative semiotics, and in particular with the way the encyclopedic model of culture is defined, with its rhizomatic structure and nodes based on the statistical constancy of sign uses: “(a) Every point of the rhizome can and must be connected with every other point. (d) The rhizome is antigenealogical. (g) A network of trees which open in every direction can create a rhizome. (h) No one can provide a global description of the whole rhizome; […] the rhizome is multidimensionally complicated, but also […] its structure changes through time; […] every node can be connected with every other node […].”
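The parallel drawn here between Eco's encyclopedia and word embeddings rests on co-occurrence statistics: in both cases, connections between signs are weighted by how regularly they appear together. A minimal sketch of this idea, using a hypothetical mini-corpus and sign names invented purely for illustration, might look like the following (this is not the author's model, only an assumption-laden toy of a rhizome-like network):

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini-corpus: each "text" is a sequence of sign uses.
# All words here are illustrative placeholders, not data from the paper.
corpus = [
    ["rose", "flower", "love"],
    ["rose", "thorn", "pain"],
    ["love", "pain", "poetry"],
    ["flower", "poetry", "rose"],
]

# Rhizome-like network: nodes are signs, and an edge's weight is the
# number of co-occurrences -- a crude stand-in for Eco's "statistical
# constancy of the sign uses" and for the co-occurrence statistics
# that underlie word embeddings in NLP.
edges = Counter()
for text in corpus:
    for a, b in combinations(sorted(set(text)), 2):
        edges[(a, b)] += 1

# Property (a) of the quoted rhizome: any node may be connected with
# any other. Adding a new text merely re-weights or creates edges, so
# the structure "changes through time", as the quotation notes.
corpus.append(["thorn", "poetry"])
for a, b in combinations(sorted(set(corpus[-1])), 2):
    edges[(a, b)] += 1

print(edges[("flower", "rose")])   # weight between two connected signs
print(edges[("poetry", "thorn")])  # newly created connection
```

The point of the sketch is only structural: no node is privileged as a root (the network is “antigenealogical”), and no single traversal yields a global description of the whole graph.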