As my artificial intelligence Asa H learns spatio-temporal patterns from the world it collates these observations into concepts of various degrees of abstraction. That is, it learns a hierarchically organized vocabulary/language (or series of vocabularies/languages) with which it then describes/understands the world it lives (acts) in. If Wittgenstein, Dewey, and Quine are right, no private language is possible, and it should be possible for me to decode all of Asa's casebases and translate them into some human-understandable natural language. (The translation process might be very difficult, however.) I have been successful at some of this, as reported in my publications and in this blog over the years. But there have also been portions of Asa's casebase that I have not been able to translate, and still other bits that I later found I had gotten wrong.
It is also true that if one starts with two identical AIs and trains both on the same input examples, but presented in different orders, different concepts (internal vocabularies) can develop in the two resulting minds. Various machine learning algorithms exhibit this order dependence.
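This order dependence is easy to demonstrate with a toy online learner. The sketch below is not Asa H's actual algorithm; it uses simple online k-means clustering as a stand-in, with cluster centroids playing the role of learned "concepts." Two learners start with identical centroids and see identical data, one in forward order and one in reversed order, and end up with different concepts.

```python
# Illustrative sketch only (not Asa H's algorithm): two identical online
# k-means learners, same data, different presentation order, end up with
# different "concepts" (cluster centroids).

def online_kmeans(points, centroids, lr=0.5):
    """Move the nearest centroid part way toward each point as it arrives."""
    cents = [list(c) for c in centroids]
    for p in points:
        # find the index of the nearest centroid (squared Euclidean distance)
        i = min(range(len(cents)),
                key=lambda j: sum((c - x) ** 2 for c, x in zip(cents[j], p)))
        # nudge that centroid toward the point
        cents[i] = [c + lr * (x - c) for c, x in zip(cents[i], p)]
    return cents

data = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.1), (0.5, 0.5)]
init = [(0.0, 1.0), (1.0, 0.0)]          # both learners start identically

cents_a = online_kmeans(data, init)           # forward order
cents_b = online_kmeans(list(reversed(data)), init)  # reversed order

print(cents_a)
print(cents_b)  # differs from cents_a: the learned concepts diverged
```

Both learners are "reliable" in the sense that each has compressed the same experience, yet their internal representations no longer match, which is the point Kelly makes more rigorously below.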
Kelly has suggested conditions under which "minor differences in the order in which they receive the data may lead to different inductive conclusions in the short run. These distinct conclusions cause a divergence of meaning between the two scientists..." (The Logic of Reliable Inquiry, Oxford University Press, 1996, pp. 381-382). And "two logically reliable scientists can stabilize in the limit to theories that appear to each scientist to contradict one another" (p. 383), while "nothing in what follows presupposes meaning invariance or intertranslatability" (p. 384). Perhaps neither scientist could then understand (or translate) the other's private language (concepts/vocabulary/ontology).
Clearly, this is also related to scientific pluralism, to the idea of reconceptualizing reality, and to the possibility of alternate realities.