Wednesday, October 28, 2020

Adversarial attacks on A.s.a. H.

A.s.a. H.* is a deep learning network in which backprop layers have been replaced by clustering.** When trained to recognize typical human concepts/categories,*** A.s.a. H. does not seem to be vulnerable to the kind of adversarial attacks that other deep learning networks fall victim to.**** If and when we use neural networks as preprocessors, that may introduce a vulnerability.
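
To make the first sentence concrete, here is a minimal, hypothetical sketch (mine, not A.s.a. H.'s actual code) of a "deep" hierarchy in which each layer is an unsupervised clustering stage rather than a backprop-trained weight layer. The use of k-means, the layer sizes, and the centroid-similarity outputs are illustrative assumptions only:

# Hypothetical sketch: a hierarchy whose layers are clustering stages
# rather than gradient-trained weight layers. This is NOT A.s.a. H.
# itself; it only illustrates the general idea described above.
import numpy as np
from sklearn.cluster import KMeans

class ClusteringLayer:
    """One layer: cluster its inputs, then output soft memberships."""
    def __init__(self, n_clusters):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)

    def fit(self, X):
        self.kmeans.fit(X)
        return self

    def transform(self, X):
        # Similarity to each centroid (inverse distance), normalized per sample.
        d = self.kmeans.transform(X)          # distances to cluster centers
        sim = 1.0 / (1.0 + d)
        return sim / sim.sum(axis=1, keepdims=True)

class ClusteringHierarchy:
    """Stack of clustering layers, trained greedily from bottom to top."""
    def __init__(self, layer_sizes):
        self.layers = [ClusteringLayer(k) for k in layer_sizes]

    def fit(self, X):
        for layer in self.layers:
            X = layer.fit(X).transform(X)
        return self

    def transform(self, X):
        for layer in self.layers:
            X = layer.transform(X)
        return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 32))            # placeholder feature vectors
    net = ClusteringHierarchy([16, 8, 4]).fit(X)
    print(net.transform(X[:3]))               # top-level category activations

One design point worth noting: because each layer is fit greedily on the output of the layer below, there is no end-to-end gradient for an attacker to follow, which is one plausible reason such an architecture would resist gradient-based adversarial perturbations.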

* R. Jones, Kansas Academy of Science Transactions, vol. 109, no. 3/4, pp. 159-167, 2006, and my blogs of 14 May 2012 and 10 February 2011.

** See Introduction to Artificial Intelligence, 2nd edition, by W. Ertel, Springer, 2017, p. 280.

*** Learning from a curriculum like that in my 12 September 2020 blog. 

**** See Explaining and Harnessing Adversarial Examples, Goodfellow et al., ICLR, 2015.
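
For readers who have not seen the attack cited in ****, the following is a minimal, hypothetical sketch of the fast gradient sign method from that paper, written in PyTorch with placeholder model, x, and y standing in for whatever classifier and data one might attack; it is not specific to A.s.a. H. or to any particular network:

# Hypothetical FGSM sketch (Goodfellow et al., 2015): nudge the input
# in the direction of the sign of the loss gradient to fool a
# gradient-trained classifier. model, x, and y are placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Return an adversarially perturbed copy of input x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. true labels y
    loss.backward()
    # Step in the sign of the input gradient to maximize the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()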
