The best curriculum for training any given AI agent probably depends on the specialization that agent will take up. For an artificial general intelligence, I have felt that perhaps one should begin by giving the agent something like the set of concepts listed in my blog of 1 October 2015, then fill in the remaining concepts needed for the Toki Pona language. From there one can build up the vocabulary of Ogden's Simplish language. After that, reading of dictionaries and an encyclopedia. (This tends to emphasize human conceptualizations and vocabulary, of course, while deemphasizing possible alternative concepts.)
Tuesday, September 1, 2020
At one time or another I have taught A.s.a. H. much* of Ogden's Simplish (Basic English).** Rather than reading the internet, perhaps A.s.a. should read a good dictionary, grow its vocabulary, and then read a good encyclopedia.*** The whole issue of AI curriculum again.
Humans typically employ a fairly large vocabulary. What can be done with a small vocabulary like Toki Pona and what requires a larger one? Is greater compression simply placing more demands upon context?
* I don't necessarily want to give A.s.a. concepts of church and religion for example.
** What vocabulary an agent needs depends, of course, on its specialization.
*** There are computer programs to translate English into Simplish. I don't know how good they are.
Wednesday, August 19, 2020
Tuesday, August 11, 2020
A.s.a. H. can make use of various clustering algorithms, including Grossberg's adaptive resonance theory. I have an ultralight version of A.s.a., using a.r.t., written in Python and running on Raspberry Pis, which can be carried by and interfaced with mobile robots. I have a second small program like this but using k-means clustering, a third using another learning vector quantization algorithm, and a fourth employing Kohonen's self-organizing map (allowing some comparison between different clustering algorithms).
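A minimal sketch of the simplest of these clusterers, k-means, in plain Python; the data and parameter names here are illustrative only, not those of the actual A.s.a. programs:

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Cluster 2-D points by repeatedly assigning each point to its
    nearest centroid and then re-averaging each cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:  # keep the old centroid if a cluster empties
                centroids[i] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return sorted(centroids)

# two well-separated blobs of sensor readings; the two centroids
# settle near the blob means
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(kmeans(pts, 2))
```

The a.r.t., learning vector quantization, and self-organizing map versions differ mainly in how the winning prototype is chosen and updated, which is what makes a side-by-side comparison of the clusterers possible.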
Sunday, August 2, 2020
Thursday, July 30, 2020
* Quantum mechanics and relativity, for example.
** And may be used in real emergencies like the covid-19 pandemic.
Friday, July 24, 2020
* See my blog of 1 January 2016 for example and the end of my 23 August 2017 blog.
** This may be further enhanced when we give A.s.a. the very austere Toki Pona language.
Thursday, July 23, 2020
* M Tegmark, Fortsch. Phys., 46: 855-862, 1998.
Thursday, July 16, 2020
* Shown in my blog of 7 January 2012.
Tuesday, July 14, 2020
Monday, July 13, 2020
* See my blog of 12 October 2010.
* See my blog of 23 August 2010.
Monday, July 6, 2020
* registered at a single time step
Monday, June 22, 2020
* See, for example, my blog of 15 Oct. 2010.
Thursday, June 18, 2020
* See, for example, my blogs of 1 Oct 2015 and 5 Nov 2015.
** See, for example, my blogs of 23 Jan 2013, 19 Oct 2015, 21 Feb 2020, and 27 Feb 2020.
Wednesday, June 10, 2020
* See my blog of 20 June 2019.
Saturday, June 6, 2020
Perhaps we can not simply turn an AI loose reading from the internet and expect it to learn.
Thursday, May 28, 2020
Wednesday, May 20, 2020
Tuesday, May 19, 2020
* See, for example, Artificial Intelligence: A Modern Approach, 4th edition, Russell and Norvig, Pearson, 2020, page 252.
Friday, May 15, 2020
Saturday, May 9, 2020
* Any translation software is acting as a preprocessor, in effect performing dimensionality reduction.
** I’ve been using period or pause to signal the end of a temporal sequence. Is this the “correct”/only/“best” thing to do?
Wednesday, April 22, 2020
Thursday, April 16, 2020
Saturday, April 11, 2020
Sunday, March 22, 2020
Friday, March 20, 2020
Thursday, March 19, 2020
* In terms of actual physical agents I now have 40-50 small robots like those in my blog of 8 Jan. 2018.
* See my blog of 28 Oct. 2018.
** See for example my original paper on A.s.a. H, Trans. Kan. Acad. Sci., vol. 109, No. 3/4, 2006.
Tuesday, March 17, 2020
Friday, March 6, 2020
Monday, March 2, 2020
Thursday, February 27, 2020
A.s.a. is hierarchical. Low level regularities are learned more quickly than higher level ones. We have also played with adjusting the learning rates differently on different levels of the concept hierarchy.** When we have done some hand coding of concepts this is equivalent to giving A.s.a. innate concepts. We have sometimes given a layer in the hierarchy a two dimensional memory to allow it to create a spatial map or 2-D vision field. A.s.a. has been given an innate sense of time via time stepping and the time dilation algorithm.
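The effect of adjusting learning rates differently on different levels can be sketched with a simple prototype-update rule; the particular rates here are illustrative assumptions, not values from the actual A.s.a. code:

```python
def update(prototype, observation, rate):
    """Move a prototype vector toward an observation by fraction `rate`."""
    return [p + rate * (o - p) for p, o in zip(prototype, observation)]

# assumed rates: a fast-adapting low level and a slow-adapting high level
low, high = [0.0], [0.0]
for _ in range(10):
    low = update(low, [1.0], 0.5)     # low-level regularity
    high = update(high, [1.0], 0.05)  # high-level regularity
print(round(low[0], 3), round(high[0], 3))  # the low level converges first
```

After ten identical observations the fast low level has essentially locked onto the regularity while the slow high level is still less than halfway there, which is the behavior described above.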
A.s.a. records, updates, and employs probabilities; are they sufficient?
A.s.a.'s hierarchically organized concepts are immediately available for reuse in new combinations. I've emphasized the importance of output/actions, prediction, and extrapolation in addition to simply passively learning sensory input patterns.
A.s.a. may be more comparable to a society of humans than to one single person.*** Agents can specialize, helping to deal with the combinatorial explosion.**** Various agents can compete against each other in each generation. A.s.a. really can multitask even if individual humans cannot.
I have been continuously working on attention mechanisms. How should error correction be propagated between layers of the concept hierarchy? What should a good object concept include? Can consolidation of learning be equated with a society training a specialist agent or is more needed?
* See, for example, How We Learn, Viking, 2020. (Something of a counter argument is in my blog of 21 February 2020.) Dehaene may equate AI to deep learning neural networks and big data, the current fad. There is, of course, a lot more to AI than that.
** And a simulated annealing process.
*** Alternatively, an A.s.a. agent might be likened to one of the specialized regions in a human brain.
**** One sort of attention mechanism.
Monday, February 24, 2020
Friday, February 21, 2020
* See, for example, Stanislas Dehaene, How We Learn, Viking, 2020.
** For example, number neurons that activate when they see 1 thing, or 2 things, or 3 things...
Sunday, February 16, 2020
Wednesday, February 12, 2020
Thursday, February 6, 2020
Saturday, February 1, 2020
* Seeking out things like abundant light for solar panels, moderate temperatures, low clutter environment, etc. in order to maximize utility.
Tuesday, January 28, 2020
I don't think that consciousness is as difficult as the "hard problem" people would have us believe. On the other hand I don't think that Hyper-ConceptNet is as fully conscious as A.s.a. H. is.*****
The attention issue is part of dealing with the curse of dimensionality. It's a problem that must be faced by any machine trying to operate in a large state space.
* arXiv:2001.09442v1, 26 Jan. 2020
** See my blog of 19 Oct. 2016
*** See Trans. Kansas Academy of Sci., 2017, page 108
**** For example, it seems to lack orientation, emotion, and values.
***** But ConceptNet is a large knowledge base of almost 3 million axioms in first-order logic!
Monday, January 20, 2020
- Society requires that most of us work.
- But physics tells us that work is energy. “Labor saving appliances” allow us to replace human labor with other energy sources.
- It might be possible to make energy free. Tesla thought that there might be sources of free cosmic energy. Much of his physics was unsound, but solar energy is a possible example. Lewis Strauss, the chairman of the Atomic Energy Commission (1954), thought nuclear energy might become “too cheap to meter.” Plentiful thorium or deuterium fuels, for example.
- No one then need work any longer. Machines would replace all human labor. (Today machines are able to do half of all human jobs. But completing the task might involve the creation of “mechanical life” and the subsequent class struggle between humans and AIs.)
Sunday, January 12, 2020
* A.s.a. H. frequently makes use of a vector value system (see my blog of 19 Feb. 2011) and my criticism of capitalism is based in part on the need to avoid a scalar utility (see my paper www.robert-w-jones, philosopher, Capitalism is Wrong).
Friday, January 10, 2020
It also learns that sweeping the ultrasonic (obstacle) sensor back and forth correlates with fewer collisions than keeping the ultrasonic sensor fixed in a single direction. A.s.a. H. then learns to sweep its sensor, looking for obstacles and spending more time attending to this particular input channel.
Alternatively, if the robot has a single fixed-mounted sensor it may learn to make small repeated left and right turns as it advances forward.
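A toy sketch of how such a preference could be learned from collision counts alone; the collision probabilities and names here are assumptions for illustration, not measurements from the actual robots:

```python
import random

def trial(sweeping, rng):
    """One simulated run: sweeping the sonar covers a wider arc, so in
    this toy model a collision is less likely.  Returns 1 on collision."""
    p_collide = 0.1 if sweeping else 0.4  # assumed collision probabilities
    return 1 if rng.random() < p_collide else 0

def learn_preference(n_trials=500, seed=1):
    """Tally collisions under each behavior and prefer the behavior
    correlated with fewer of them."""
    rng = random.Random(seed)
    collisions = {"sweep": 0, "fixed": 0}
    for behavior in collisions:
        for _ in range(n_trials):
            collisions[behavior] += trial(behavior == "sweep", rng)
    return min(collisions, key=collisions.get)

print(learn_preference())  # the agent comes to prefer sweeping
```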
Thursday, January 2, 2020
Wednesday, January 1, 2020
Pick and place = (Grasp ball -> Carry ball -> Drop ball)
Grasp ball = (detect ball inside grippers -> close grippers -> sense force against grippers)
Carry ball = (sense force against grippers -> move)
Drop ball = (sense force against grippers -> open grippers -> sense no force against grippers)
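The case frames above can be encoded as a small concept dictionary and unfolded into the full primitive sequence; this encoding is a hypothetical sketch, not the actual A.s.a. H. representation:

```python
# the pick-and-place case frames, one entry per learned concept
concepts = {
    "pick and place": ["grasp ball", "carry ball", "drop ball"],
    "grasp ball": ["detect ball inside grippers", "close grippers",
                   "sense force against grippers"],
    "carry ball": ["sense force against grippers", "move"],
    "drop ball": ["sense force against grippers", "open grippers",
                  "sense no force against grippers"],
}

def expand(concept):
    """Recursively unfold a concept into its primitive sensation/action
    sequence; anything not in the dictionary is treated as primitive."""
    if concept not in concepts:
        return [concept]
    steps = []
    for sub in concepts[concept]:
        steps.extend(expand(sub))
    return steps

print(expand("pick and place"))
```

Because the subconcepts are stored separately, each is immediately available for reuse in new combinations, as discussed in the 27 February 2020 post above.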