In this paper, I propose to connect network-based semantic analysis with the semiotic notion of aboutness. Stevan Harnad (1990) famously took up the challenge to the behavioral approach to symbolic AI posed by John Searle's Chinese Room puzzle, which contested the validity of the Turing test. Harnad redirected Searle's doubts towards humans, whom many posit to be symbolic machines. This raises the question of how we inject meaning into language, and whether that process leaves traces in the latent structure of natural-language lexicons. In an exploratory study, Vincent-Lamarre et al. (2016) analyzed definition graphs for English and showed that such minimal fundamental sets, from which the rest of the lexicon can be defined, may in fact be identified in real-world data.

Aboutness (Yablo 2011) is a highlighted subject matter whose role is not explained by syntactic, grammatical, or truth-conditional factors. If sentences can have aboutness as a separate component of their meaning, independent of satisfaction conditions, then aboutness could function on its own and might even predate the propositional, truth-conditional relations with which philosophers of language and semioticians are most often concerned.

Elsewhere, I performed a comparative analysis, across several languages, of the structure of the subgraph of the definition graph induced by the exponents of the semantic primes (Goddard, Wierzbicka 2014), and found that these subgraphs differ significantly from the lexicons' bootstrapped representative sets.

In this paper, I will show how the results of that work may be applied to sentence-level analyses and, in particular, whether we can make inferences about overlap in aboutness from latent lexicon structures and their topology.
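To make the definition-graph construction concrete, the following is a minimal sketch in the spirit of the analysis by Vincent-Lamarre et al. (2016), not their actual pipeline: each headword maps to the words used in its definition, and words occurring in no remaining definition are stripped iteratively, leaving a self-sustaining core. The dictionary entries and the `kernel` helper are invented for illustration.

```python
# Toy dictionary: each headword maps to the words used in its
# definition (all entries here are invented examples).
definitions = {
    "big":    ["large"],
    "large":  ["big"],
    "huge":   ["very", "big"],
    "very":   ["big", "degree"],
    "degree": ["very", "large"],
}

def kernel(defs):
    """Iteratively strip words used in no remaining definition.

    What survives is a self-sustaining core of the lexicon: every
    remaining word occurs in some remaining definition, so definitional
    grounding circulates entirely within the core.
    """
    words = set(defs)
    while True:
        used = {w for head in words for w in defs[head] if w in words}
        unused = words - used
        if not unused:
            return words
        words -= unused

# "huge" defines nothing else, so it is stripped; the mutually
# defining words remain.
print(sorted(kernel(definitions)))  # → ['big', 'degree', 'large', 'very']
```

Real dictionaries yield much larger cores, and identifying a *minimal* fundamental set further requires breaking the definitional cycles inside such a core; the sketch only illustrates the graph-theoretic setting.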