I have been working on the following abstract for a conference early next year:
Our recently developed "Asa H" software architecture (KAS Transactions 109 (3/4): 159-167) consists of a hierarchical memory assembled out of clustering modules and feature extractors. Various experiments have been performed with Asa H 2.0:
1. prefer extrapolation from real recorded patterns over extrapolation from synthetic cases
2. record signal input only when it changes by several standard deviations (sketched below)
3. include the number of times a pattern has been seen as a contribution to that pattern's utility (sketched below)
4. weight a pattern's feature more heavily when that feature's standard deviation is smaller (sketched below)
5. search for short sequences common to longer patterns that have high utilities (sketched below)
6. search only until a "close enough" match is found (sketched below)
7. if a component of a case vector varies less (as measured by its standard deviation), then value (weight) it more (see the sketch for item 4 below)
8. as a postprocessing stage, Asa outputs can serve as the reference input commands to adaptive approximation-based controllers
9. if, after some number of tries, we can't improve all components of a vector utility, then approximate it with a scalar utility (see the sketch for item 3 below)
10. store and update a mean and standard deviation for time warpage and spatial dilation, and use these as components in the subsequent degree of matching (sketched below)
11. keep very low utility cases in memory longer (so that we can avoid them)
12. prefer pattern changes that involve a change in agent output
13. try randomness detection as a filter at the input and at other levels in the network hierarchy (sketched below)
14. compression by blending: if V1 and V2 are similar enough, replace them with their vector average (sketched below)
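A minimal Python sketch of experiment 2. The Welford-style running statistics and the threshold k are my assumptions about one way to implement it, not a description of Asa H internals:

import math

class ChangeTriggeredRecorder:
    def __init__(self, k=3.0):
        self.k = k          # how many standard deviations count as "a change"
        self.n = 0          # samples seen so far
        self.mean = 0.0
        self.m2 = 0.0       # sum of squared deviations (Welford's method)
        self.recorded = []

    def observe(self, x):
        # Record only if x is far from the running mean (after a short warm-up).
        std = math.sqrt(self.m2 / self.n) if self.n > 1 else 0.0
        if self.n < 2 or (std > 0 and abs(x - self.mean) > self.k * std):
            self.recorded.append(x)
        # Update running statistics with every sample, recorded or not.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

recorder = ChangeTriggeredRecorder(k=3.0)
for x in [1.0, 1.1, 0.9, 1.0, 5.0, 1.0, 1.05]:
    recorder.observe(x)
print(recorder.recorded)   # the spike at 5.0 is kept, small jitter is not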
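Experiments 3 and 9 both adjust utilities. A sketch, assuming a log-frequency bonus for item 3 and an unweighted mean for item 9's scalar reduction; neither functional form is given in the abstract:

import math

def utility_with_frequency(base_utility, times_seen, c=0.1):
    # Experiment 3: a pattern seen more often earns a utility bonus.
    return base_utility + c * math.log(1 + times_seen)

def scalar_fallback(vector_utility, tries, max_tries=10):
    # Experiment 9: if repeated attempts fail to improve every component of
    # a vector utility, collapse it to a single scalar (here, a plain mean).
    if tries >= max_tries:
        return sum(vector_utility) / len(vector_utility)
    return vector_utility   # otherwise keep optimizing the full vector

print(utility_with_frequency(0.8, times_seen=50))   # about 0.8 + 0.39
print(scalar_fallback([0.9, 0.2, 0.6], tries=12))   # 0.566...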
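Experiments 4 and 7 express the same idea at two levels: trust low-variance components more. A sketch using 1/sigma weights inside a normalized dot product; this particular weighting is my assumption:

def weighted_similarity(case, pattern, sigmas, eps=1e-6):
    # Components with small standard deviation get large weights.
    weights = [1.0 / (s + eps) for s in sigmas]
    num = sum(w * c * p for w, c, p in zip(weights, case, pattern))
    den = (sum(w * c * c for w, c in zip(weights, case)) *
           sum(w * p * p for w, p in zip(weights, pattern))) ** 0.5
    return num / den if den else 0.0

# A feature with sigma 0.01 counts far more than one with sigma 2.0:
print(weighted_similarity([1.0, 0.5], [1.0, -0.5], sigmas=[0.01, 2.0]))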
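For experiment 5, one simple way to find short sequences shared by high-utility patterns is to count n-grams over the high-utility subset of case memory. The n-gram counting below is my stand-in for whatever search Asa H actually performs:

from collections import Counter

def common_short_sequences(patterns, utilities, min_utility=0.7, n=3):
    counts = Counter()
    for pattern, u in zip(patterns, utilities):
        if u >= min_utility:
            for i in range(len(pattern) - n + 1):
                counts[tuple(pattern[i:i + n])] += 1
    # Keep only the n-grams that occur more than once in the high-utility set.
    return [seq for seq, k in counts.items() if k > 1]

patterns = [["a", "b", "c", "d"], ["x", "b", "c", "d"], ["a", "b", "x", "y"]]
print(common_short_sequences(patterns, [0.9, 0.8, 0.2], n=3))
# [('b', 'c', 'd')] -- shared by the two high-utility patterns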
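Experiment 6 trades match quality for speed by stopping at the first case that clears a similarity threshold. A sketch with a toy similarity function; the threshold value is a placeholder:

def first_close_enough(query, memory, similarity, threshold=0.9):
    best, best_score = None, -1.0
    for case in memory:
        score = similarity(query, case)
        if score >= threshold:
            return case, score          # close enough: stop searching
        if score > best_score:
            best, best_score = case, score
    return best, best_score             # otherwise fall back to the best seen

def sim(a, b):
    # Toy similarity on equal-length number lists.
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

memory = [[0.0, 0.0], [0.5, 0.5], [0.96, 1.0], [1.0, 1.0]]
print(first_close_enough([1.0, 1.0], memory, sim, threshold=0.9))
# returns [0.96, 1.0] without ever scoring the exact match [1.0, 1.0]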
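For experiment 10, a sketch of how stored time-warpage and dilation statistics might enter the degree of match. The Gaussian-style penalty is my assumption; the abstract only says the statistics become components of the match:

import math

def match_with_warp_stats(raw_match, warp, dilation, stats):
    # stats holds a (mean, std) pair for each deformation, updated elsewhere.
    def penalty(value, mean, std):
        if std <= 0:
            return 1.0
        z = (value - mean) / std
        return math.exp(-0.5 * z * z)   # 1.0 when the deformation is typical
    return (raw_match
            * penalty(warp, *stats["warp"])
            * penalty(dilation, *stats["dilation"]))

stats = {"warp": (1.0, 0.2), "dilation": (1.0, 0.1)}
print(match_with_warp_stats(0.95, warp=1.1, dilation=1.0, stats=stats))
# slightly discounted: the required time warp is half a sigma from typical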
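For experiment 13, one cheap randomness detector is compressibility: structured signals compress well and noise does not. Using zlib here is my choice; the abstract doesn't say how the detection is actually done:

import random
import zlib

def looks_random(samples, ratio_threshold=0.9):
    # Quantize to bytes, compress, and compare compressed size to original.
    data = bytes(int(max(0.0, min(1.0, x)) * 255) for x in samples)
    ratio = len(zlib.compress(data)) / len(data)
    return ratio > ratio_threshold      # little compression => likely noise

structured = [0.5] * 200
noisy = [random.random() for _ in range(200)]
print(looks_random(structured))   # False: compresses heavily
print(looks_random(noisy))        # True: nearly incompressible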
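Finally, a sketch of experiment 14's compression by blending. Cosine similarity and the 0.95 threshold are my choices:

def cosine(v1, v2):
    num = sum(a * b for a, b in zip(v1, v2))
    den = (sum(a * a for a in v1) ** 0.5) * (sum(b * b for b in v2) ** 0.5)
    return num / den if den else 0.0

def blend_memory(memory, threshold=0.95):
    out = []
    for v in memory:
        for i, u in enumerate(out):
            if cosine(u, v) >= threshold:
                # Similar enough: replace the pair with their average.
                out[i] = [(a + b) / 2.0 for a, b in zip(u, v)]
                break
        else:
            out.append(v)
    return out

memory = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(blend_memory(memory))   # first two merge; the orthogonal vector stays

A fuller version would weight the average by how many cases each stored vector already summarizes, so that repeated blends don't drift toward the most recent inputs.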