Saturday, June 11, 2016

Subsymbolic?

Asa H can be taught names for the concepts it learns. For example, the learned concept
Collision=(sense near, bump, decelerate) can be expanded (by teaching) to:
Collision=(sense near, bump, decelerate, sound "collision")
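As a rough sketch of this idea (a hypothetical representation, not Asa H's actual code), a learned case can be treated as a sequence of component concepts, and "teaching a name" amounts to appending a symbolic token to that sequence:

```python
def teach_name(case, name):
    """Return the case vector extended with a spoken-name component."""
    return case + (f'sound "{name}"',)

collision = ("sense near", "bump", "decelerate")
named_collision = teach_name(collision, "collision")
print(named_collision)
# ('sense near', 'bump', 'decelerate', 'sound "collision"')
```

The name token then behaves like any other component of the case vector, so the concept can be evoked either by its sensory components or by its word.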

Artificial neural networks, on the other hand, are frequently subsymbolic.

How many of the concepts (case vectors) that Asa learns should be named (symbolic)?

Going in the other direction, Theodore Sider has suggested that complex linguistic entities be constructed as sequences or tree-structures of linguistic atoms (words) (Writing the Book of the World, OUP, 2011, page 295). This is exactly what Asa H creates (learns). We would certainly not want to assign names (words) to all of these larger-scale case vectors; i.e., vocabulary choice is required at this point.
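One way to picture this vocabulary choice (again a hypothetical sketch, with invented case names and a made-up usage threshold): higher-level cases are trees whose leaves are primitive concepts, and only the cases that recur often enough earn a word.

```python
collision = ("sense near", "bump", "decelerate")
avoid = ("sense near", "swerve")          # hypothetical second low-level case
trip = (collision, avoid, collision)      # a larger-scale case built from cases

vocabulary = {}                           # the names we choose to assign

def maybe_name(case, word, uses, threshold=2):
    """Assign a word only to cases observed at least `threshold` times."""
    if uses >= threshold:
        vocabulary[word] = case

maybe_name(collision, "collision", uses=5)  # common case: gets a word
maybe_name(trip, "trip", uses=1)            # rare composite: stays unnamed
print(sorted(vocabulary))
# ['collision']
```

Under a rule like this, most large-scale case vectors remain unnamed (subsymbolic), while the frequently reused ones acquire symbols.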
