Sunday, January 10, 2016

Can we force an alternate conceptualization of reality?

I am interested in alternate models of the world. (See, for example, my blog of 22 April 2013.) Asa H was able to learn all of Wierzbicka's natural semantic metalanguage in a hierarchy about 5 layers deep. Hinton's deep backprop net learned the concept "cat" at about layer 6 and "human face" at about layer 5. A shallow backprop network might be trained on the same input and learn the same mapping function as the deep network did. The shallow network could conceivably learn the very same concepts as the deep network if they were all distributed representations, but this seems unlikely. Teasing out the concepts generated in either net would not be easy, of course.
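
As a concrete illustration of the shallow-mimics-deep idea, here is a minimal sketch, not from any of the systems mentioned above: a randomly initialized deep ReLU network stands in for the trained deep net, and a one-hidden-layer student is fit by backprop to reproduce the teacher's input-output mapping. All layer sizes, the learning rate, and the numpy-only implementation are illustrative assumptions of mine.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Stand-in "deep" teacher: four hidden ReLU layers plus a linear readout.
teacher_hidden = [rng.normal(0, 0.3, (20, 20)) for _ in range(4)]
teacher_readout = rng.normal(0, 0.3, (20, 1))

def deep_net(x):
    h = x
    for W in teacher_hidden:
        h = relu(h @ W)
    return h @ teacher_readout

# Shallow student: a single (wide) hidden layer.
W1 = rng.normal(0, 0.1, (20, 200)); b1 = np.zeros(200)
W2 = rng.normal(0, 0.1, (200, 1));  b2 = np.zeros(1)

lr = 1e-2
for step in range(5000):
    x = rng.normal(0, 1, (64, 20))   # fresh random inputs each step
    y = deep_net(x)                  # the teacher's outputs serve as targets
    h = relu(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                   # gradient of 0.5 * squared error
    # Backpropagation through the single hidden layer.
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final batch MSE:", float(np.mean(err ** 2)))

Even if the student drives this error close to zero, matching the mapping says nothing about matching the concepts; one would still have to probe which inputs maximally activate each of the student's hidden units to see whether anything like the teacher's layered categories had formed.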
