Saturday, December 1, 2018

Higher-order features

I have usually described A.s.a. H. as learning layers of categories/concepts. (In the form of vectors.) After learning a set of first-order features A.s.a. H. may then learn the ANDing together of some of these on the next level in its hierarchy. Are these concepts or second-order features? Second-order features are thought to be important for doing things like facial recognition. Interactions among features may be represented by such higher-order features prior to their use to define some concept.
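The ANDing of first-order features can be sketched as a fuzzy conjunction over feature activations. The feature values, the index choices, and the use of min as AND are illustrative assumptions here, not details of the A.s.a. H. implementation:

```python
# Hedged sketch: a second-order feature as the ANDing (here, fuzzy min)
# of selected first-order feature activations. Values are illustrative.

def fuzzy_and(activations, indices):
    """Second-order feature = minimum of the chosen first-order activations."""
    return min(activations[i] for i in indices)

first_order = [0.9, 0.2, 0.8, 0.7]              # e.g. outputs of level-1 categories
second_order = fuzzy_and(first_order, [0, 2])   # conjoin features 0 and 2
print(second_order)  # 0.8
```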

Alternate realities, metaphysics

Quesada argues* that various philosophers have used alternative logics to build alternative metaphysics: idealistic, materialistic, non-materialistic, dualistic, noneist, etc. Plato employs a fuzzy logic. Aristotle created classical logic. Plotinus employs fuzzy and paraconsistent logics. Hegel employs dialectical logic. Routley creates neutral logic....

* In Alternative Logics: Do Sciences Need Them? Weingartner, ed., Springer, 2004, pg 28.

Sunday, November 18, 2018

Why build theories?

For genetic algorithms and evolution* to get from some state A to some distant state B requires that one or more quasi-continuous paths exist from A to B through some intermediate states C, D, E, etc. The creation of theories of the world can allow for larger designed leaps from A to B.
Theories are also a compact way of organizing knowledge, speeding up search and reducing the memory capacity required.

* which created brains and mind

Friday, November 16, 2018

Lotus eaters

With a scalar utility an AI like Marcus Hutter's AIXI might evolve toward lower intelligence if it resides in a sufficiently simple and benign environment.* Indeed, it's the harsh world we find ourselves in that has forced life to develop mind. A society of A.s.a. H. agents with their vector utilities (at least one component of which values mental prowess of some sort) will always seek to produce some offspring with increasing mentality.**

* even temporarily
**I include the acquisition of knowledge as a part of this.

Friday, November 9, 2018

Hierarchically structured general intelligence

I notice that Paul Yaworsky of the Air Force Research Laboratory has argued for the hierarchical organization of general intelligent systems.* He seems to imagine something like A.s.a. H. but he offers nothing in the way of algorithms or other detail.

* In his recent papers A Model of General Intelligence, 7 Nov. 2018, arXiv: 1811.02546, and Realizing Intelligence, 2018, arXiv.

Robot brain transplants

Some LEGO robot designs allow you to readily remove the NXT or EV3 brick without otherwise disassembling the robot.* One can then swap in an Arduino or Raspberry Pi in its place. (Given a suitable shield.) The Raspberry Pi could make use of a neural compute stick for example. Arduino has sensors for humidity, odor, rain, barometric pressure, etc.

* Damien Kee's RileyRover being one example.

Laws of nature

We describe the world and the way it evolves, the patterns/regularities that we see in it.* We take note of our part in this. We try to do this in as compact a fashion as possible. These descriptions may make use of sentences in a natural language, equations in a system of mathematics, diagrams and icons or what have you.** By a "law of nature" we simply mean a rather good description of some pattern(s) we see in the world, some regularity.

* See my blog of 1 Sept. 2012.
** Maybe we might use a pattern of musical notes. Certainly we use spoken sentences, lecture. (Spoken language may have come a million years before written language.) Musical patterns may be well suited to expressing and conveying things like energy, excitement, and emotions.

Sunday, November 4, 2018

Reconceptualizing reality, fibring logics

More alternate realities, alternative logics:

Propositional logics
Predicate logics
Spatial logics
Temporal logics
Default logics
Defeasible logics
Free logics
Paraconsistent logics
Infinitary logics
Deontic logics
Dynamic logics
Dependence logics
Illocutionary logics
Plural logics
Iconic logics
Higher order logics
Categorical logics
Threshold logics
Feature logics
Tense logics
Multiple valued logics
Term logics
Quantum logics
Multi-modal logics
Epistemic logics
Intuitionistic logics
Dialectical logics
Relevant logics
Fuzzy logics
Probabilistic logics
Matrix logics
Substructural logics
Markov logics
Etc.

See chapter XI of Richard Epstein’s Propositional Logics: The Semantic Foundations of Logic second edition (Wadsworth, 2001). Epstein's views expressed there are similar to my own.

As a means of finding alternatives we may intentionally seek out oppositional concepts. See, for example, Tizhoosh and Ventresca, Oppositional Concepts in Computational Intelligence, Springer, 2008.

Friday, November 2, 2018

Reconceptualizing reality; alternate realities

The philosophers Plato and Descartes claimed to have had, at certain moments in their lives, a new view of the world, its basic constituents, and its rules which was totally different from our conventional view of reality. (Reflections on Kurt Gödel, H. Wang, MIT Press, 1987) For most people the world is composed of objects, pushed around in a 3-dimensional space by fields, and following something like Newton's laws of motion. What are possible alternative conceptualizations? Lewis' book Quantum Ontology (Oxford University Press, 2016) offers some possibilities, like holistic quantum wave functions, in a high dimensional space, following something like a Schrödinger equation.

A physicist who spends some of his time doing quantum mechanical research and some of his time doing classical mechanics would be working in two alternative realities. This is also an example of scientific pluralism.

Tuesday, October 30, 2018

We are all scientists

I ran across a quote from T. H. Huxley which I used in an introductory physical science course 30 years ago: "... there is a continuous gradation between the simplest rational act of an individual and the most refined scientific experiment." Science is simply a refinement of the way in which we all think. Doing science is simply being intelligent. See my blog of 1 Sept. 2012.

The pre-eminence of the present

The present seems to be defined by what we are attending* to, the contents of our short term memory. The past seems to be defined by what is stored in our long term memory and is largely fixed.** A.s.a. H. works similarly but there is short term memory on each level of the hierarchy. A sense of a flow of time is associated with the changing contents of short term and long term memories. Long term memory grows.

* hence its pre-eminence
** though forgetting does occur, changing our past. With A.s.a. H., new exemplars get averaged with old cases, modifying the past.

Sunday, October 28, 2018

Forgetting

I still remember my first telephone number, Warwick-8-3122, long since defunct. Since it hasn’t been used in decades, never was used much, and was not used in some important event, A.s.a. H. would have forgotten it easily.

Introductory robotics course

ESU is offering a robotics course next year which set me to thinking about what should be in such a course:

Uses for robots
Sensors
    image processing?
Actuators
    motors
    servos
    other
Locomotion
   wheeled
   walking
   treads?
   flying?
   swimming?
   mapping?
   navigation?
Manipulators
Control
Computer Interfacing
Simulators

What could a laboratory component consist of without making it too idiosyncratic?

Simulations?
Sun seeking?
Obstacle avoidance?
Wall hugging?
Line following?
Pick and place?
Recharging station search?
Object search?

Wednesday, October 24, 2018

Why ANDROID?

As of ANDROID 4.1 it became possible to run versions of A.s.a. H. software on lightweight, inexpensive, low-power, mobile devices. I wanted to be able to use such tablets as onboard processors on A.s.a. H. robots. (See my blog of 1 Jan. 2013 for example.)

Reality, vectors, concepts

For A.s.a. H. concepts are vectors. For humans, too, many of our concepts should be seen as vectorial. (See my blogs of 20 Oct. 2010, 12 April 2016, 17 Sept. 2014, 1 Sept. 2016, 7 Jan. 2017, 4 Oct. 2016, 1 Jan 2017, 26 Feb. 2012.) Reality, what is "real," is also a vector concept.

Tuesday, October 23, 2018

Robotic simulations and attention

I always suggest that one should do as much with simulations as possible. Simulations* are quicker and cheaper than using real physical robots. You may even be able to take the program you developed on a simulator and use it directly to run a real physical robot.** Simulations oversimplify the issue/problem of attention, however.***

* See, for example, Blankenship and Mishal's Robot Programmer's Bonanza, McGraw Hill, 2008.
** See, for example, RobotBASIC Projects for the Lego NXT, CreateSpace, 2011.
***My blog of 10 March 2016 describes how to improve the situation somewhat.

Wednesday, October 17, 2018

AI embodiment?

With some AI architectures* one could imagine completely replacing the lowest layer or so with humans and the upper layers, the AI(s), would not need to be embodied. I have done some experiments of that sort.
* For example my A.s.a. H., Albus' RCS, Meystel's nested controller, deep ANNs, etc.

Tuesday, October 16, 2018

Machine Consciousness

I have been reading Feinberg and Mallatt's book Consciousness Demystified ( MIT Press, 2018) and comparing their theory of neurobiological naturalism with A.s.a. H.'s operation. Feinberg and Mallatt emphasize the multiple realizability of consciousness.
A.s.a. H. is hierarchically organized. The lowest level in the A.s.a. H. hierarchy can provide rapidly responding reflex arcs. A.s.a. exhibits mental causation; it reacts to its environment in simple ways and in complex ways. 2D memory can store mental images.
A.s.a. has exteroceptive sensations from cameras, microphones, odor sensors, etc. Interoceptive sensation comes from accelerometers, proprioception, pain and temperature sensors, battery charge level, etc. Experiences are recorded as cases and sequences of cases. You "know what it is like to be" A.s.a. if you experience the same cases (patterns of experience) that A.s.a. experiences. In order to fully "know what it's like to be a" fish you would have to be able to sense electric fields. A.s.a. can do that even if humans can not. Proprioception is performed by Lego motors and other smart servos when they sense/measure their own positioning and motion. Battery charge and pain sensors and thermistors distributed throughout the Lego robotic agents provide affect with somatotopic body mapping.
A.s.a. experiences qualia. When a human feels a full stomach or when A.s.a. senses a fully charged battery these are qualia. To "feel" is to represent something with a signal. One set of signals following one pathway becomes associated with the label "red." A different signal on a different pathway becomes associated with the label "C-sharp." Signals and pathways are private/subjective. Different valences are recognized by the components of A.s.a.'s vector value system.
Interestingly, Feinberg and Mallatt estimate that all conscious animal brains have a minimum of about 100,000 neurons (page 80), and that "...complex neural hierarchies build mapped representations of different objects in the environment from multiple elaborate senses..." (page 97). This is what A.s.a. H. does as well.

Monday, October 15, 2018

Agent evolution, value divergence

While running an A.s.a. H. society of specialist agents, each with a vector value system,* I found that one group of agents evolved higher and higher V1 while all other value components remained nearly unchanged. Another group of agents evolved higher and higher V2 while all their other value components remained nearly unchanged.

* Similar to my blog of 19 Feb. 2011. The vector values of each agent having components V1, V2,...etc.

Saturday, October 13, 2018

Other people contemplate scientific pluralism

“Maybe it's not even possible to capture the universe in one easily defined, self-contained form...” “Perhaps the true picture is more like the maps in an atlas, each offering very different kinds of information, each spotty.” R. H. Dijkgraaf, Director, Institute for Advanced Study, Princeton

Tuesday, October 2, 2018

Language, drawing attention to

Words/labels face little competition for attention compared with the many features of objects that are present in, say, visual input. Words then provide (spreading) activation to features/stimuli associated with the named categories. Language may help us with the problem of attention in complex real world environments. Otherwise, A.s.a. may require many learning passes before it averages out stray stimuli.
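Spreading activation of this sort can be sketched as a word node passing a fraction of its activation to its associated feature nodes. The words, features, and weights below are invented for illustration:

```python
# Hedged sketch of spreading activation from a word/label to the
# features associated with the named category. Weights are illustrative.

associations = {"cup": {"handle": 0.8, "ceramic": 0.5}}

def spread(word, activation, net):
    """Pass a weighted share of a word's activation to linked features."""
    return {feat: activation * w for feat, w in net.get(word, {}).items()}

print(spread("cup", 1.0, associations))  # {'handle': 0.8, 'ceramic': 0.5}
```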

Monday, October 1, 2018

OOO

In The Democracy of Objects (Open Humanities Press, 2011, page 246) Levi Bryant argues that existence is binary, no object is more real than any other. In his object-oriented ontology (OOO) he believes that all entities are on equal ontological footing in a flat ontology. With A.s.a. H. I have argued for (the usefulness of) a hierarchically structured ontology. I have also argued that not all entities are equally real.

My experience with A.s.a. H. suggests that we should not extrapolate our (mental) concepts too far. I believe that the excessive extrapolation of concepts is the source of some of philosophy's problems.

Thursday, September 27, 2018

Clumsy

The number and variety of sensors and actuators that A.s.a. H. has available to it* have proven sufficient for symbol grounding** and concept formation but the present robots are best described as clumsy. We would need to deploy a larger number of sensors and actuators to address this issue.

* See, for example, my blog of 1 Oct. 2015. (note: During any given activity/experiment A.s.a. typically only makes use of a fraction of its sensors.)

** A major portion of my robotics laboratory might best be called an instrumentation lab.

Thursday, September 20, 2018

Student evaluations again

While doing some cleaning out in my office I came across some of my correspondence with ESU's former vice president Payne and with the Kansas board of regents on the subject of student evaluations. (See my blog of 2 August 2018 and the paper by Uttl, et al in Studies in Educational Evaluation, 54 (2017) 22-42.)

The future of jobs

The World Economic Forum's report The Future of Jobs 2018 (Geneva, 17 Sept. 2018) states that "By 2025 more than half of all current workplace tasks will be performed by machines..."

Monday, September 17, 2018

Councilism and a society of specialist agents

Rather than having a manager, CEO, mayor, governor, president, etc. I usually favor councilism and collective decision making. Kocher and Sutter, for example, report on experiments showing that groups make better decisions than individuals do. (January 2005 issue of the Economic Journal) Similarly, many of my A.s.a. H. AIs consist of societies of specialist agents as described previously.

Friday, September 14, 2018

More robots

A.s.a. H. can employ Dr. Kee's RileyRover and miniVEX robots (with gripper attachments) as mobile manipulators. Each robot can be fit with a couple dozen pain sensors. Assembly instructions are available at www.damienkee.com under "robots" or in Dr. Kee's books.

Monday, September 3, 2018

1 TB USB flash drive

I bought a 1 TB USB flash drive from Silkroad (through Newegg).
It works fine. Time will tell how reliable it is.

Thursday, August 30, 2018

Julia

I have downloaded Julia v1.0.0 and am playing with it a bit, trying to get a feel for it.

Wednesday, August 29, 2018

An argument for the existence of supreme beings


N is the number of agents, a discrete quantity.
N1 is a measure of the number of agents in our universe.
N2 is a measure of the number of agents in the multiverse.
C is the competence of an agent, presumed to be a continuous quantity.
W1 measures the distribution of competence of agents in our universe.
W2 measures the distribution of competence of agents in the multiverse.
We assume that the sample seen in our universe is representative of the multiverse, so W1 ≈ W2.
C1 is the most competent agent in our universe. (Hopefully not us humans.)
C2 is the most competent agent in the whole multiverse, the supreme being.

But competence is probably a vector, having components like intelligence, power, etc.  It may be that the most intelligent being is not the most powerful. There would then be multiple supreme beings. Conflict might result, on a vast scale.

We believe that N1 << N2. Since the most competent member of a larger sample from the same distribution will tend to be more competent, C1 << C2. C2 is superhuman but we expect that there will be things that C2 can not do.
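The step from N1 << N2 to C1 << C2 is an extreme-value argument: the best of a larger sample tends to be better. A minimal simulation, with an assumed Gaussian competence distribution and illustrative sample sizes:

```python
# Sketch: average maximum competence grows with sample size, so the
# multiverse's best agent should exceed our universe's best agent.
# The Gaussian distribution and the sample sizes are assumptions.
import random

random.seed(0)

def max_competence(n, trials=100):
    """Average maximum of n draws from a fixed competence distribution."""
    return sum(max(random.gauss(0.0, 1.0) for _ in range(n))
               for _ in range(trials)) / trials

c1 = max_competence(50)     # "our universe": comparatively few agents
c2 = max_competence(5000)   # "multiverse": many more agents
print(c1 < c2)              # the larger sample has the more competent best agent
```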

Monday, August 27, 2018

A more plausible creationism*

I am reading Zeeya Merali’s book A Big Bang in a Little Room (Basic Books, 2017). These are some relevant thoughts.

 The purpose/goal of life is to expand and fill as much space and time as possible. It would be desirable then for intelligences to trigger new big bangs creating new universes suitable for life. If it’s possible to control the sorts of universe you create then this process would tend to increase that fraction of the multiverse that is inhabitable. Each creator would have an interest in the wellbeing of the lives found in its new universes just like human conservationists show concern for global wildlife.

Many creators would be finite entities. If any infinite entities exist in the multiverse, even for them there will be things they can not do. There are, after all, different levels of infinity.

* I think that there is good evidence to believe that the various human religions are all built upon lies.
See, for example, Kurtz, The Transcendental Temptation, Prometheus, 1986, part 2.

Tuesday, August 21, 2018

Meaning in an environment

The meaning of a word or concept may be determined by its place in a semantic network which links it to other concepts.* The concepts that are formed in the first place depend upon the environment and experiences that the agent has been exposed to. So detailed meanings depend upon the particular environment you grew up in. Words** will mean somewhat different things to different people (agents).

Conceptual change and change in meaning then involve adding or deleting concepts (nodes/vertices) and/or adjusting the strengths of association (edges/links) between concepts. Such change has both a discrete and a continuous part.
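The two parts of conceptual change can be sketched on a semantic network represented as a weighted graph. The concept names and weights below are invented for illustration:

```python
# Hedged sketch: a semantic network as a weighted graph. Adding a node is
# the discrete part of conceptual change; adjusting an edge weight is the
# continuous part. Names and weights are illustrative.

semantic_net = {
    "dog": {"animal": 0.9, "bark": 0.7},
    "animal": {"dog": 0.9},
    "bark": {"dog": 0.7},
}

def add_concept(net, name):
    net.setdefault(name, {})                    # discrete: new node/vertex

def strengthen(net, a, b, delta=0.1):
    net[a][b] = min(1.0, net[a].get(b, 0.0) + delta)   # continuous: edge weight
    net[b][a] = net[a][b]

add_concept(semantic_net, "pet")
strengthen(semantic_net, "dog", "pet")
print(semantic_net["dog"]["pet"])  # 0.1
```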

* A.s.a. H. generates semantic networks like those of M. Hyman in Übersetzung und Transformation, Böhme et al, eds., 2007, pg 355-367.

** Not all concepts need have a name/word associated with them, of course.

Monday, August 20, 2018

Robotic life after death

If all the servos in the robot(s) with which A.s.a. H. is embodied die or become disconnected A.s.a. can no longer act in the world. If the robots' sensors are lost A.s.a. can no longer sense the world either. The A.s.a. H. software can continue to extrapolate, organize, etc., however. A.s.a. would have a kind of life after death so long as the computer network is not shut down.

Friday, August 17, 2018

Grading

I'm starting a new semester so I can't help but think about grading. I believe that grades serve the useful purpose of forcing students to work harder than they otherwise might. I think, however, that they are of limited usefulness in actually measuring anything. If a question shows us that some item of knowledge has been acquired by the student, what is the likelihood that that knowledge will be retained a year from now? Or two years from now? And no two items of knowledge will be of equal usefulness. How can we estimate that? And some items of knowledge will prove useful for one student but not for others. And these things will all change as time passes.

It would be interesting to search out roughly equivalent cases in the casebases from various Asa H agents and compare their vector utilities.

Wednesday, August 15, 2018

Machine learning

I just got a new pair of Widex EVOKE hearing aids from Denmark. I was anxious to try out the machine learning feature. I found it hard to make the “A” versus “B” choices. Should I prefer louder (more signal) but with noise or should I prefer softer (less signal) but with little background noise? I also found that one of the two aids stayed connected to my iPhone but the other one lost connection.

Tuesday, August 14, 2018

Learning what to ignore (attention again)

Humans learn to ignore stimuli that are not "important" to them, the ringing in my ears for example. In A.s.a. H. value/utility is typically a vector quantity. One component of this vector utility is a measure of how frequently A.s.a. sees that particular case reoccur. Another component of the vector utility measures how strongly that particular case is associated with "food," "reproduction," "pain," "health," or the like. Once A.s.a. H. has recorded a good number of similar cases it is possible to do a sensitivity analysis for each of the inputs to the cases in this cluster. One can identify any input that does not have a significant influence on the output (or next portion) of that case (or pattern sequence). One can then suppress the output from that case/cluster to the next level in the A.s.a. hierarchical memory as being common but unimportant. A given case ("stimulus") may turn out to be common but not important, and once so identified it can be ignored.
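The sensitivity test can be sketched as follows: within a cluster of similar cases, an input whose variation has no measurable effect on the outcome can be flagged as ignorable. The case data, the covariance criterion, and the tolerance are illustrative assumptions, not the actual A.s.a. H. code:

```python
# Hedged sketch: flag inputs whose variation is uncorrelated with the
# outcome across a cluster of stored cases. Data are invented.

def ignorable_inputs(cases, outcomes, tol=1e-6):
    """Return indices of inputs with covariance ~ 0 against the outcome."""
    n, dims = len(cases), len(cases[0])
    mean_out = sum(outcomes) / n
    flagged = []
    for d in range(dims):
        mean_in = sum(c[d] for c in cases) / n
        cov = sum((c[d] - mean_in) * (o - mean_out)
                  for c, o in zip(cases, outcomes)) / n
        if abs(cov) < tol:
            flagged.append(d)
    return flagged

cases = [[0.1, 5.0], [0.9, 5.0], [0.5, 5.0]]   # input 1 never varies
outcomes = [0.1, 0.9, 0.5]                     # outcome tracks input 0 only
print(ignorable_inputs(cases, outcomes))  # [1]
```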

Sunday, August 12, 2018

More reliability issues

Admittedly I use mostly commodity computers in order to save money. I frequently use USB flash drives to store A.s.a. H.’s casebases and to store and transfer activation* from one level in the hierarchical memory to another and from one computer in the network to another. Sometimes one of the drives is not recognized. This is a random event and I’ve typically managed to fix it but it is a nuisance.

* I want to be able to examine the concepts being created and used.

Friday, August 3, 2018

A.s.a. H.’s early model of the world

Depending upon the environment it's used in and the tasks it's given, an A.s.a. H. agent typically begins learning a model of the world composed of concepts/sequences like:

near=>far, far=>near  (motion)
motion=>collision  (obstacle)
push
collision=>sound
light=>dark, dark=>light  (day and night)
change in illumination angle
hot=>cold, cold=>hot
wind
motion=>wind
rain=>humidity
F=ma
Faction=Freaction
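Sequences like those above might be stored as simple antecedent/consequent cases. This sketch, with case data taken from the list above and a bare prefix-match lookup, is illustrative only, not the actual A.s.a. H. code:

```python
# Hedged sketch: early world-model sequences as antecedent => consequent
# cases, with a trivial next-state prediction by lookup.

cases = [
    ("near", "far"),
    ("motion", "collision"),
    ("collision", "sound"),
    ("light", "dark"),
]

def predict(observation):
    """Return the learned successor of an observed state, if any."""
    for antecedent, consequent in cases:
        if antecedent == observation:
            return consequent
    return None

print(predict("motion"))  # collision
```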

Thursday, August 2, 2018

Student evaluations

I have been critical of the use of student evaluations of instruction. (See, for example, R. Jones, Bull. Am. Phys. Soc., vol. 40, page 968, 1995) The July 2018 issue of American Journal of Physics has an article by Lee, et al, which states: "...Student Evaluations of Instruction do not correlate with conceptual learning gains..." and "...grading leniency by an instructor (i.e., giving easy A grades) does not correlate with increased student evaluations of instruction." (Am.J.Phys., vol. 86, no. 7, page 531, 2018) About 30 years ago I tried to tell our administrators and regents this but they would not listen. Some years back (~20?) I had a student admit to me that her fellow students would lie on student evaluations in order to "get rid" of or at least "make trouble for" professors they didn't like.

Thursday, July 26, 2018

Matches and lighters

Maybe we shouldn’t compare reusable rockets with airplanes. Reusable rockets need parachutes, wings, extra fuel, etc., and added design and maintenance costs. Their economics depends upon the number of launches that can be expected over time. Just as we still manufacture and use both lighters and matches, there may be room for both reusable and non reusable rockets in the world’s fleet. Certainly military rockets like ICBMs will be single use. And when they are decommissioned why not use them as space launchers as we have in the past?

Tuesday, July 24, 2018

Embodiment

Two different A.s.a. agents will have learned somewhat different concept webs. These webs will differ even more if the agents are specialists of different types. This makes it harder for one agent to tell another agent what it knows or simply what it sees. On the other hand, I can understand and make use of physics theory and data obtained from experiments I have never performed myself.* I only need to have some somewhat similar experience behind me. Must every AI be embodied at least to some degree? If so, to what extent? Or, can enough be simply “disc copied” or hand coded into any unembodied agents? With A.s.a. I have done this successfully, but with small scale “toy” examples only.
* And some of us are blind. And some of us are deaf. And some of us are paralyzed. .......

Incorrigible ontological relations

Julian Galvez argues that the human mind comes to model/understand the world by application of the primitives: difference, similarity, property, and causality. (Our Incorrigible Ontological Relations and Categories of Being: Causal and Limiting Factors of Objective Knowledge, 2016) A.s.a. H. uses the vector dot product and/or other similarity measure to obtain similarity and difference assessments. A.s.a.'s construction of a concept hierarchy defines properties. A.s.a.'s sequence learning covers causality. A.s.a. also does things like averaging over multiple observations, however.
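The similarity/difference assessment via the vector dot product can be sketched as cosine similarity, the normalized dot product. A minimal sketch (the vectors are illustrative):

```python
# Hedged sketch: cosine similarity as one possible vector similarity
# measure of the kind mentioned above. Example vectors are illustrative.
import math

def cosine_similarity(a, b):
    """Normalized dot product of two concept vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (maximally different)
```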

Saturday, July 21, 2018

Syllabi for AI training again

For many years I have stressed the importance of the syllabi I need in order to be able to successfully train an AI. (It was included in my 9 Sept. 2010 blog.) Alexander Wissner-Gross attributes progress in AI to the use/availability of high quality data sets. (See edge.org/response-detail/26587)

Wednesday, July 18, 2018

Will any sufficiently intelligent system exhibit consciousness?

I have argued that A.s.a. H. exhibits machine consciousness. (See, for example, my blogs of 21 July and 19 October 2016.) The state of anything like a Moore or Mealy machine will develop sensitivity to unique temporal sequences of inputs, and David Hume viewed consciousness as "...a bundle or collection of different perceptions which succeed each other..." (A Treatise of Human Nature). For more demanding definitions of consciousness things are not so clear. Many embodied AIs might sense damage ("pain") and the need to recharge batteries ("hunger") and so exhibit Ned Block's "P-conscious" states. Block's "A-conscious" states, things like "grass" having the feature "green," might depend upon how the AI organizes its knowledge base. Metacognition and Hobson's functional components (things like emotion) might also be at issue.

Monday, July 16, 2018

Degrees of realness

Luciano Floridi has argued that our "reality is the totality of" our "information." (The Philosophy of Information, Oxford University Press, 2011, page xiii)  If we were to employ that definition of realness then not all things need be equally real. How real a concept is would depend upon how many models/theories it appears in, how important a role it plays,  and how strongly and how frequently that particular concept is activated during cognition. The quantum wave function, for example, is quite real in David Albert’s version of Bohmian quantum mechanics and not at all real in Bradley Monton's interpretation of quantum mechanics, when he says: "The wave function, according to Bell, is an inessential mathematical device...". (See The Wave Function, Oxford University Press, 2013, pages 108 and 162) Different definitions of what realness is would also have an impact.

Tuesday, July 10, 2018

Science versus capitalism

The current issue of Physics Today has an article on how and why business is working to keep their research ideas and results secret. (Douglas O'Reagan, “Who Owns a Scientist’s Mind?” Physics Today, July 2018, pg 43) Science, on the other hand, operates best when we all share our results and methods openly.

Sunday, July 8, 2018

Natural intelligent system

A.s.a. H. is a project to engineer general intelligence. It was not biologically inspired. But it is possible to employ Grossberg’s A.R.T. networks as the clustering modules in A.s.a. H., and A.R.T., adaptive resonance theory, is biologically plausible, giving us a biologically plausible version of A.s.a. H.

Friday, July 6, 2018

Credit propagation

Once A.s.a. H. has created a substantial hierarchical model of itself and its environment it is possible to perform sensitivity analysis starting at the top of the hierarchy* and working down. Whether scalar or vector utilities are employed, the question becomes how to weight the credit(s) computed at each successively lower level in the hierarchy.

* Starting with the utility(s) measured for the complete agent.
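One possible weighting rule, assumed here purely for illustration, is geometric attenuation of the top-level credit as it propagates down the hierarchy. This is a sketch of the open question above, not A.s.a. H.'s actual rule:

```python
# Hedged sketch: credit computed at the top of the hierarchy attenuated
# by an assumed per-level discount factor on the way down.

def propagate_credit(top_credit, levels, discount=0.8):
    """Assign geometrically discounted credit to each lower level."""
    return [top_credit * discount ** depth for depth in range(levels)]

print(propagate_credit(1.0, 4))  # highest credit at the top, less below
```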

Monday, July 2, 2018

The curse of dimensionality

The natural world is a very high-dimensional state and action space. Some of the ways I have tried to deal with this complexity are (in no special order):

Hierarchical decomposition/learning (A.s.a. H.)
Clustering
Approximation
Forgetting
Ordered, organized training syllabi
Hand coding and human supplied problem solutions
Attention mechanisms
Multi-agent/specialist AIs
Parallel processing
Pre and post processors

In the future we may have fast quantum computers available, which would also help.



Thursday, June 28, 2018

Pains

As with temperature, too large a force or acceleration should probably also be registered as a pain by A.s.a. H. How high the threshold value should be set would depend on the bot and application.
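Threshold-based pain registration can be sketched as below; the sensor names and threshold values are illustrative assumptions, since, as noted, they would depend on the bot and application:

```python
# Hedged sketch: register a pain signal when a sensed quantity exceeds a
# bot- and application-dependent threshold. Thresholds are illustrative.

PAIN_THRESHOLDS = {"temperature": 60.0, "force": 25.0, "acceleration": 9.8}

def pains(readings):
    """Return the names of any readings exceeding their pain threshold."""
    return [name for name, value in readings.items()
            if value > PAIN_THRESHOLDS.get(name, float("inf"))]

print(pains({"temperature": 72.0, "force": 10.0}))  # ['temperature']
```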

Wednesday, June 27, 2018

Conducting bricks

Pantheon and BRIXO offer electrically conducting toy building bricks that are compatible with LEGO. I have ordered a few to see if they would help with (speed up) the construction of A.s.a.'s pain system.

Sunday, June 24, 2018

K’NEX

K’NEX pieces can be used when building robots and are compatible with A.s.a.’s pain subsystem. Several connectors are available for use between K’NEX and LEGO. I have not found many uses for K’NEX, however, except perhaps for bumpers or “roll cages.”

Syllabi for training an AI

For many years I have stressed the importance of the syllabi I need in order to be able to successfully train A.s.a. H.* I have felt that the field of AI research has spent too little time and effort on such topics. It looks to me like some of the problems that IBM’s Watson program is having are of this sort. (Though excessive PR hype is another problem they have.)

Specialist training for humans typically begins as late as college and graduate school. Should specialist AIs share the same syllabi for their early training or should these diverge sooner?

*A.s.a. H.’s abstraction hierarchy contributes to the importance of this.

Friday, June 22, 2018

Attention to words

Natural language plays a role in attention. The words that I have taught A.s.a. H. are reliably correlated with the categories/concepts that they label/name. (See my blog of 1 Oct. 2015.) They are salient features for the objects/concepts that they name.

Thursday, June 14, 2018

Robot size

Many of us favor bench scale experiments. Large robots are more expensive and may damage themselves as well as their surroundings. There is no need for anything larger than the iRobot Create. On the other hand our robots must be large enough to transport sensors and such things as solar panels. Something the size of a Pololu 3Pi could carry the various sensors from the Arduino sensor kits but is too small to transport larger sensors like a Geiger counter tube. A society of robots might be of a few different sizes but in this general range.

Wednesday, June 13, 2018

Criticizing capitalism once again

In capitalism workers are not paid what they're owed. "Although productivity is growing steadily in almost all areas of the economy, workers are required to work as hard as ever. They do not benefit from the increase in productivity. So, we must ask, where do the profits go? Evidently not to the people to whom they are owed, i.e. the workers." (W. Ertel, Introduction to Artificial Intelligence, 2nd edition, Springer, 2017, pg.13)
Capitalist economics is unsound because, among other things, its model of human rationality is invalid. "...no such theory of our common-sense intuitions about anything can be constructed...The same story applies in economics...This programme, for all its mathematical elegance, has also foundered." (N. Chater, The Mind Is Flat, Allen Lane, 2018, pg 32)

Friday, June 8, 2018

Some metacognition

A.s.a. H.’s memory cases include those actions that Asa takes. In addition to actions taken in/on the world, using servos, Asa’s actions may include time spent doing deduction, simulating, extrapolating,  searching memory, etc. So when Asa interpolates and extrapolates using these cases it does a certain amount of thinking about thinking. This occurs on multiple levels of the Asa H hierarchical memory, at different levels/degrees of abstraction.

Tuesday, June 5, 2018

Primitive concepts

The empiricists held that all concepts are definable in terms of perceptual primitives. A.s.a. H.’s sense of light, temperature, and force, signals that are the outputs of sense organs, might be examples. In the case of humans, however, some primitive concepts may have already involved innate computation, preprocessing if you will. Infants have an innate sense of heights for example. Similarly, A.s.a.’s IR or ultrasonic distance sensors do some innate preprocessing in order to compute a measure of near or far.

Sunday, June 3, 2018

Cognitive style

A.s.a. H.’s cognitive style can be changed by:
Choice of 1, 2, or N dimensional memories, or a mix of several
Choice of similarity measure, or a mix of them
Choice of extrapolators and interpolation methods
Amount of short term memory employed
Amount and kind of self monitoring
Setting of various rate constants
Number of agent specialties
Etc.

Wednesday, May 30, 2018

Concepts and levels of abstraction

Certainly some of philosophy is about the exploring, defining, and redefining of concepts. In my AI A.s.a. H. (and in humans?) concepts are defined on various different levels of abstraction*. Some concepts are then clearly limited to use on a single level. Examples might be: “color”, “hear”, “smell”, “taste.” Some concepts appear to be applicable across all levels of abstraction. Candidates might be: “change”, “different/opposite/NOT”, “same/equal”, “OR”, “AND”. There also appear to be concepts that are applicable across a number of levels of abstraction but not all. Things like: “causality”, “good and bad”, “thing”, “location”, “shape”, “when”, “part.”

Part of the problem of philosophy is being sure you are applying your concepts to the right levels of abstraction. (e.g. category error) These may differ from one person (or AI agent) to another since two intelligences do not share the exact same concept (knowledge) webs.

A concept that strictly applies only on one (or a few) levels of abstraction might also serve as a metaphor on yet another. (e.g. "time flies")

* Each new concept is discovered/learned/invented on some single particular level of abstraction in A.s.a. H.’s hierarchical semantic memory.

Sunday, May 27, 2018

Attributing emotions to A.s.a. H.

William James suggested that human perceptions of our internal bodily state, things like: heart rate, breathing rate, adrenaline level, body shaking, flushed face, etc. plus contexts like: pain, sound, light flash, or other environmental changes might define a given emotion. If this is what emotion is then Asa H could have somewhat similar emotions of its own.

Inconsistent cases

A.s.a. can learn inconsistent thoughts. A robot may have learned to move toward a light source in order to use solar panels to recharge/“feed” when it was hungry. The same robot might learn to move away from a light source if it’s caused by a fire. The two cases can be refined if smell and/or IR sensors can distinguish fire.

Tuesday, May 22, 2018

Interpolation and attention

An artificial general intelligence like Asa H would require a huge casebase in order to operate autonomously in the real world. A buffer might store a small fraction of those cases. Ideally the cases in this buffer, Ci, would all be as close as possible to the current input vector, V. (As judged by the dot products of V with each Ci, for example.) Any case Ci could easily be dropped out of the buffer if the latest input vector is now too different from it. It would be more difficult, but a (parallelized) search through the "full" casebase could replace dropped cases with closer matching ones, at least periodically. One could interpolate to the current input vector, V, from the set of cases, Ci, currently in the buffer memory. This would produce a set of weights for the various cases Ci. These weights could then be applied to the predictions for the next time step in each Ci and a best single prediction calculated and output. Weighting the contribution of each case, Ci, by its utility measure would also be possible.
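A minimal Python sketch of such a buffer scheme (the case representation, threshold, and buffer size here are my own illustrative choices, not Asa H's actual data structures):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def refresh_buffer(buffer, casebase, v, size=3, min_sim=0.5):
    """Drop buffered cases no longer close enough to the input v, then
    refill from the full casebase (the slower, periodic search)."""
    kept = [c for c in buffer if dot(v, c["state"]) >= min_sim]
    kept_ids = {id(c) for c in kept}
    for c in sorted(casebase, key=lambda c: dot(v, c["state"]), reverse=True):
        if len(kept) >= size:
            break
        if id(c) not in kept_ids:
            kept.append(c)
            kept_ids.add(id(c))
    return kept

def interpolate_prediction(buffer, v, use_utility=False):
    """Weight each buffered case's next-step prediction by its similarity
    to v, optionally scaled by the case's (scalar-collapsed) utility."""
    weights = [max(dot(v, c["state"]), 0.0) *
               (c.get("utility", 1.0) if use_utility else 1.0)
               for c in buffer]
    total = sum(weights)
    return sum(w * c["next"] for w, c in zip(weights, buffer)) / total
```

With normalized state vectors the dot product plays the role of the similarity judgment; the drop-and-refill step could run on a slower cycle than prediction itself.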

Saturday, May 19, 2018

Minimum vocabulary

In his book Human Knowledge Russell defined a minimum vocabulary acquired by observation/experience of each thing named. Asa H’s lowest level concepts have been defined in just this way, see my blog of 1 Oct. 2015.

Thursday, May 17, 2018

Asa’s Intentions

Let t1 < t2 < t3. At time t1 a case C is strongly activated from A.s.a. H.’s casebase. C has in it a planned action to occur at time t3. At time t2 A.s.a. then has the intention of taking that action. Asa is in a state of mind directed toward taking action.

Innate ideas/concepts

Consider an embodied agent with an array of thermistors spread over its body. A localized external heat source might activate sensors A and B and later B and C, C and D, etc. From this experience the agent might acquire the notion that A is “near” B but “far” from C and that B is “near” C but “far” from D, etc. This might constitute a primitive model of space. With a moving or time varying heat source sequential activation of the sensors in the array might produce a primitive model of time. Sensors deep within the body would be less sensitive to external stimuli than sensors on the surface (“skin”), producing a notion of “inside” and “outside.”
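A toy Python sketch of how such a spatial model might be induced from co-activation statistics (the episodes and threshold are invented for illustration; this is not Asa's actual mechanism):

```python
from collections import Counter
from itertools import combinations

def learn_adjacency(episodes):
    """Each episode is the set of sensors a localized heat source
    activated together; co-activation counts induce a 'nearness' map."""
    co = Counter()
    for sensors in episodes:
        for a, b in combinations(sorted(sensors), 2):
            co[(a, b)] += 1
    return co

def near(co, a, b, threshold=1):
    """Two sensors are judged 'near' if they co-activated often enough."""
    return co[tuple(sorted((a, b)))] >= threshold

# A heat source sweeping across the body activates adjacent pairs in turn.
episodes = [{"A", "B"}, {"B", "C"}, {"C", "D"}]
```

The temporal order of the episodes could be exploited the same way to induce a primitive model of time.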

Evolution of oppositional concepts

I have experimented with oppositional concepts in A.s.a. H. for some time.* One can, for example, evolve each vector concept, Ci, over time, compute -Ci, and maintain separate vector utilities for Ci and -Ci as they are used/become activated. -Ci then only evolves/changes as Ci changes. Alternatively, once defined, -Ci can be evolved independently of Ci, each with its own (evolving) vector utility. But then Ci and -Ci may evolve "away from each other" and become less the opposites of one another. One could periodically reinforce opposition. How often? Under what conditions? Or we could maintain several different "opposites." Frequently A.s.a. has been allowed to completely delete concepts which are judged not to be useful.
* See my blog of 5 July 2016 and Trans. Kansas Acad. Sci., vol 109, #3/4, 2006.
  Simpler "light" versions of A.s.a. H. may not include oppositional concepts.

Tuesday, May 15, 2018

Buridan's ass

Jean Buridan considered a starving ass placed between two haystacks that are equidistant from it. According to Buridan, unable to choose between them the ass would starve to death. Quite early on in my experiments with Lego robots I had something rather similar to the Lego Scout Bug with two feelers in front. If the left feeler was triggered the robot was to turn toward the right. If the right feeler was triggered the robot was to turn toward the left. In operation I once had the robot hit a wall exactly head on, trigger both feelers at once, and sit permanently frozen in place. Needless to say, more sophisticated programming doesn't suffer from this problem.

Saturday, May 12, 2018

A science of values

Values begin with the advent of life. Valuing of offspring and longevity. I have watched A.s.a. add intelligence to this list and the beginnings of an ethics. (See my blogs of  9 Jan 2015 and 20 March 2018.) Since they have different needs different organisms will develop different values and conflicts between species will result. On a smaller scale there is conflict between individual agents when their values differ.

Thursday, May 10, 2018

How should we train groups?

I have stressed the importance of the syllabus for (single) agent training. In what order should various things (knowledge, skills, values) be taught?

I have also stressed the importance of having a society of (specialist) agents. Some tasks can be accomplished by a group which can not be accomplished by lone members of the group.

How should these be combined? Should we alternate individual training with periods of group training? What should the group syllabus look like? How critical is it to get this right?

Wednesday, May 9, 2018

Robot burns

I have occasionally operated Asa H robots outdoors. (Usually involving gps use.) Having an array of thermistors providing a thermal pain component then makes sense. Any solar panels lose power output if they get too hot and microprocessors carried on the robots should not be allowed to overheat. I'm not too sure what suitable temperature thresholds should be. It depends on the particular hardware of course.

Thursday, May 3, 2018

Python

I've seen a lot of AI code in Python lately, so I wrote up and debugged a minimal case-based reasoner in Python just to learn a bit more.
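Something along these lines, perhaps; a minimal retrieve-and-reuse sketch (the toy casebase and the use of cosine similarity are my choices for illustration, not necessarily those of the program described):

```python
import math

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(casebase, problem):
    """Return the stored case most similar to the new problem."""
    return max(casebase, key=lambda case: similarity(case["problem"], problem))

def solve(casebase, problem):
    """Reuse: adopt the best-matching case's solution. (Revision and
    retention would simply append adapted cases back to the list.)"""
    return retrieve(casebase, problem)["solution"]

casebase = [
    {"problem": [1.0, 0.0], "solution": "turn right"},
    {"problem": [0.0, 1.0], "solution": "turn left"},
]
```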

Monday, April 30, 2018

Struggling to understand another mind

When I identify portions of the A.s.a. H. hierarchical concept web that do not neatly correspond to known and named human concepts am I seeing A.s.a. conceptualizing reality differently from humans or am I seeing a set of subsymbolic concepts? I.e., concepts humans may have too but which we do not name?

Friday, April 27, 2018

Contests

As a member of Phi Kappa Phi I was asked to judge undergraduate and graduate research projects. These were a mix of Math, Physics, Chemistry, Earth science, Biology, Nursing, Forensic science, and more. I was asked to identify and put in rank order my top 3 papers in 2 categories, graduate and undergraduate. Aside from my dislike of the use of a scalar value measure this is truly comparing apples with oranges. It set me to trying to think of something better.

I think one might identify a few desirable characteristics and go hunting for them. (The vector components of a vector value measure.) Is the work original? Is it well supported by experiment? Is it useful? ... Then, if you find a really original paper give an award for originality. If you don't find any really original work don't give that award that year. If you find a paper that contained lots of good quality measurements then give an award for that. ... The awards given out will likely vary from one year to the next. There'd be no best and second best.

Thursday, April 26, 2018

Beyond algorithms

Wikipedia says “an algorithm is an unambiguous specification of how to solve a class of problems.” Asa H, as a non-algorithmic system, does not know how to solve the problem it faces (achieving high vector utility). It discovers specifications for success by acting in and observing the world around it. Such specifications may change over time as the world (and Asa) change. An "algorithm" begins its life with the knowledge it needs, a non-algorithmic system like Asa H begins without such knowledge but slowly discovers it. In the beginning Asa only needs to know enough to get started in its search.

So what is the minimum Asa (or some other AI) needs to begin with so that it can get started learning? That depends upon the world it finds itself in. The idea of a curriculum for Asa has been an attempt to  present a sequence of more and more difficult tasks and environments which help Asa to grow. (Much in the way we structure grade school lessons for human children. Protecting them from “the real world” for a while.)

See also my blog of 23 March 2015.

Wednesday, April 25, 2018

Computing beyond algorithms

Different people define algorithms differently. Aho et al say "...an algorithm, which is a finite sequence of instructions, each of which has a clear meaning and can be performed with a finite amount of effort in a finite length of time." (Data Structures and Algorithms, Addison-Wesley, 1983, page 2) Markov talks about an algorithm L written in an alphabet A (A consisting of a finite number of letters). L is composed of a finite number of rules of the form P->Q where P and Q are words, i.e., finite strings of letters from A. (Theory of Algorithms, Academy of Sciences USSR, 1954)

Various people have argued that expert systems and neural networks are non-algorithmic. (Rule-Based Expert Systems, Buchanan and Shortliffe, Addison-Wesley, 1985, page 3) I have argued that A.s.a. H. is non-algorithmic. (Trans. Kansas Acad. Sci., vol. 108, No. 3/4, 2005, page 169) Yet Asa and expert systems and neural networks are all written in conventional programming languages and run on standard computers so in what sense can they be non-algorithmic? They are certainly built out of algorithms themselves.

An algorithm accepts a set of inputs and maps them to a set of outputs, "answers" or "solutions." If this map (or set of maps) is built in up front at run time then your program is called "algorithmic." If your program observes the world and acquires the map(s) from the world then your program is called "non-algorithmic." Of course a "non-algorithmic" program must, itself, be able to map observations of the world into the algorithms/functions it learns/acquires. It maps ("metamaps") observations into maps. Furthermore, such non-algorithmic programs might completely change themselves, perhaps even change the hardware they're built on top of. (For example, when Asa H is copied from one computer to another, changes the set of robots it is operating, or uses new tools that it has been provided with.)

Monday, April 23, 2018

Multiple realities experienced by A.s.a. H.

It is possible to give A.s.a. H. various different sorts of memory, various different similarity measures, different value measures, different learning algorithms, etc. Different "cognitive styles" if you like. (See, for example, my blogs of 5 Sept. 2011, 10 July 2014, 19 Dec. 2014, 7 Jan. 2015, and 13 April 2016.) Similarly, Alfred Schutz believed that humans make use of multiple models of reality, building upon Goethe's "little worlds" or "pedagogical provinces," William James' "sub-universes," and Kierkegaard's "leaping between worlds." (See Schutz's On Multiple Realities, Philosophy and Phenomenological Research, Vol. 5, No. 4, June 1945, page 533.) Arguments for scientific pluralism again.

Friday, April 20, 2018

Vision

In developing A.s.a. H. I have not spent much time on vision capability, mostly because so many other groups have worked on that topic. I decided to buy a Google AIY vision kit. Asa may be able to use it as a vision preprocessor.

Thursday, April 19, 2018

Lecture

Almost two weeks ago I attended a conference where Audra Keehn and Jason Emry from Washburn presented their Comparing Lecture Style to Active Learning Styles in College Settings, a meta-analysis of about 100 papers taken from the JSTOR, EBSCO, and ERIC databases. They report that "These results indicate that incorporating non-lecture teaching methods does not improve test scores."

Wednesday, April 18, 2018

Transcendence

In that limited portion of the multiverse accessible to us through our sense impressions we find various complex internal processes including life, intelligence, and consciousness. Quantum computing provides evidence that such patterning is present in the multiverse as a whole. Considering the vastness of the multiverse* it seems very likely to me that there exist intelligent agents much more capable than us humans.

I hope that this is not just wishful thinking brought on by age and the threat of crazy Donald.

* See, for example, Wallace, The Emergent Multiverse, OUP, 2012, page 317.

Sunday, April 15, 2018

AI Personhood

Saudi Arabia has granted a robot citizenship and Europe is considering personhood for AIs. In order to define personhood don’t we have to first define intelligence, consciousness, life, and sentience? I think it will be hard to get agreement on those definitions. (I think I am OK with Clark’s definition of sentience. See A Theory of Sentience, OUP, 2000) I wouldn’t want these quantities to be assessed using scalar measures. I also worry that our measures will end up excluding some humans. And would AIs be credited with free will?

Friday, April 13, 2018

AI and psychopathology

In clinical psychopathology a division of labor is often attributed to (human) multiples. As one way of helping deal with the curse of dimensionality I have created specialist Asa agents each of which may (alternately) occupy/control the same robot body. This specialization reduces the size of each case base and speeds up processing.

Thursday, April 12, 2018

Magical thinking

Asa’s thoughts really can bring about (some) effects in the world. After deliberation Asa can command a servo to move, grasp, lift, etc. As an action sequence (a case in Asa’s case base) is learned, coincidental co-occurrences should average out (decay away to low values and be ignored). But if and when they do not Asa can be guilty of magical thinking. Asa currently believes that orange plastic sources are gamma ray sources since the only gamma sources Asa has seen have been orange. Asa does not presently have any deep theory of radioactivity that might make it question this correlation. Neither has Asa seen a wide selection of gamma sources. Humans suffer from similar magical thinking.
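The averaging-out mechanism can be sketched as a leaky accumulator in Python (the update rule and rate constant are illustrative assumptions on my part):

```python
def update_association(strength, co_occurred, rate=0.2):
    """Move the association strength toward 1 when the two features
    co-occur and toward 0 when they do not. Coincidental pairings decay
    away; pairings the agent keeps re-observing (orange plastic and gamma
    sources, in Asa's limited experience) stay strong."""
    target = 1.0 if co_occurred else 0.0
    return strength + rate * (target - strength)
```

A single coincidence fades after enough non-co-occurring observations, but without more varied experience (some non-orange gamma sources) the spurious association is never challenged.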

Thursday, April 5, 2018

Some thoughts on AI immortality

When a copy of the Asa agent "Robby", described in my post of April 2, is loaded into a computer system with different specifications (faster, more memory, different sensor array, different effectors, etc.) it notices this change and slowly adapts to it.** (By changing the concepts in its knowledge/memory web.) The copy of "Robby" that remains behind in the old computer system only experiences any time loss required by the copy operation. This would become longer for more extensive Asa casebases. It is not that a single Robby consciousness* has been moved to a newer and better machine. Rather, the consciousness***, along with the rest of the software and casebase, was duplicated. The old copy of Robby, including its consciousness, will still die, as can the new copy. Just like the amoeba. See also my blog of 15 Oct. 2010.

You could force there to be only one Robby consciousness by upgrading the old system “one transistor at a time,” the consciousness will then just slowly adapt (change) with each “transistor changeout.” After the upgrade Robby’s consciousness will not be the same as before. You’ve not simply moved it into a new computer. The consciousness will be changed a lot if the hardware is changed a lot. The upgrade is an experience and experiences change you. Even tugs that are confined to the periphery of a knowledge web can change the web to its core. Even in our brief human lives at what point have we changed so much that we are no longer the same person?

Neither should we be equating “me-ness” with consciousness. The rest of the concept web and software is part of what makes “me” “me.” I believe that the unconscious parts of my mind do some of my best work. The hardware is also part of what makes “me” “me.”

To obtain AI immortality I suppose you could just replace “transistors” (and any other sufficiently small scale components) as they age and do no (or at least very gradual) upgrading. This might buy immortality at the price of obsolescence and you would still face the issue in my 15 Oct. 2010 blog. Forgetting is an important kind of learning. We shouldn't keep out of date ideas/patterns as the world changes.

Death just allows for larger scale more rapid change. Nature thought it was a good idea.

* Say the one in my blog of 21 July 2016.

** Experiments actually find that the system crashes if the changes are too extreme!

*** I'm going to use MY model of what consciousness is. See my blog of 19 Oct. 2016. Having developed a detailed theory of thought, mind, and consciousness (Asa H) makes this kind of philosophical work possible.

Grasping

Suction is sometimes used to grip objects. If Asa were given such a system it would acquire a low level grasping concept that humans don't share. Conversely, humans who lick (wet) their fingers in order to pick up crumbs will form a grasping concept that Asa doesn't. The OWI-536 robot kit* from OWI Inc. suggests another interesting grasping mode/concept that humans will not have:

The tank tread "fingers" could move in order to draw in an object or push it away.

* This robot uses snap together assembly of major modular components and so might make limited use of Asa's pain subsystem but here we're just using it for inspiration.

Wednesday, April 4, 2018

Exploring alternate realities

I believe that each of us experiences a somewhat different reality depending upon the concepts we know and believe in. (See my blog of 21 July 2016.) There are some concepts that a person may not have at all. Things like: entanglement, recurrence, value pluralism....  Other concepts you may have but not use/believe. Things like: spirits, multiverses, life after death.... Some time and effort should be spent identifying more of these crucial concepts, concepts that make one person’s reality significantly different from another person’s. Many of those that I have identified have been physics concepts. Could be a student research project. But would our colleagues accept it? The publication prospects for this kind of thing are very limited. Not really a good subject for a young researcher.

Tuesday, April 3, 2018

Value pluralism again

The various pieces of information (cases) that Asa H acquires/learns each have a vector utility associated with them.  Gammack et al's The Book of Informatics (Cengage Learning, 2011) suggests that information should be assessed for quality along at least 13 dimensions: (pages 14 and 15)
1. new or surprising?
2. reliability
3. accuracy
4. relevance
5. timeliness
6. usability
7. completeness
8. simplicity
9. economical to produce
10. flexibility
11. verifiability
12. accessibility
13. secureness

Some of these dimensions cannot be assessed by Asa but we might be able to add others of them.
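One hedged sketch of how such vector quality assessments might be compared without collapsing them to a scalar is Pareto dominance (my choice of comparison rule; neither the book nor Asa's utility machinery is specified this way):

```python
def dominates(u, v):
    """u dominates v if it is at least as good on every quality dimension
    and strictly better on at least one. No scalar collapse, so many pairs
    of assessments are simply incomparable."""
    assert u.keys() == v.keys()
    return all(u[k] >= v[k] for k in u) and any(u[k] > v[k] for k in u)

a = {"reliability": 0.9, "accuracy": 0.8, "timeliness": 0.5}
b = {"reliability": 0.7, "accuracy": 0.8, "timeliness": 0.5}
c = {"reliability": 0.7, "accuracy": 0.9, "timeliness": 0.5}
```

Here a dominates b, but a and c are incomparable; that incomparability is the price (and the point) of keeping the measure a vector.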

Monday, April 2, 2018

Limits of human thought

To what extent are the "laws of physics" "in nature" and to what extent are they "in our heads"? In his doctrine of classical concepts Bohr believed that we had to describe reality in terms of classical concepts like space, and mass, and force. (See my blog of 27 April 2017.) Since we only have access to our sense impressions does this limit the concepts that we can create and use to build our models ("laws")? Asa H can have access to sense impressions we humans do not have, things like direct observation of electric and magnetic fields. With tools like field meters we give ourselves additional artificial senses. Both Asa and humans can also extrapolate, interpolate, abstract, etc. and so form concepts that have no direct counterpart in the world, things like unicorns and mathematical systems that do not correspond to any observed pattern present in the world. But do such mechanisms (interpolation, extrapolation, etc.) give us (or Asa) a complete or adequate set of fundamental concepts from which to build models that have any hope of describing ultimate reality, Kant's "thing in itself"? As with conventional neural networks it is difficult to translate the concepts (patterns) that Asa learns (creates) into human English phrases. (See my blog of 19 Nov 2017.) Perhaps Asa has learned some important concepts I am unaware of.

Nagarjuna argues that "To express anything in language is to express truth that depends on language and so this cannot be an expression of the way things are ultimately." (Beyond the Limits of Thought, Priest, OUP, 2002, page 260) But some languages express some ideas better than others do. (To express ideas in physics I prefer mathematics over English.) I am simply looking for BETTER languages* with which to describe reality. BETTER ontologies.*

* Plural because scientific pluralism may be needed. Multiple models not a single one.

Me-ness

When an Asa H agent has been trained, given a name, ("Robby"), and copied there are then two "Robbys." This is no different than having two amoebas where there used to be one.  Asa is just smarter. As time progresses the two Robbys will differentiate themselves from one another and no longer be identical.

Saturday, March 31, 2018

Transfer of consciousness

Asa H learns a web of concepts like those outlined in my post of 14 Nov 2017 (plus some I don’t know how to translate into words). A part of this web is Asa’s consciousness*, see for example my blog of 21 July 2016. While it is possible to copy Asa, the entire web, to another computer** it is not possible to transfer the consciousness from one Asa agent into another, different, Asa agent. Two different Asa agents will have learned somewhat different concept webs and consciousness will not be able to make sense of it all.

* I realize that among experts there is no consensus on what “consciousness” is or how it works.

** But it's not easy to port Asa programs from Windows to Macs to Linux to Android to NXT to eV3 to Vex etc.

Monday, March 26, 2018

More robot parts

The “Mi robot” components from Xiaomi are compatible with Lego and can be used to supplement/expand on Lego mindstorms sets.

ThinkGizmos' “ingenious machines” sets contain pieces that can be pinned directly onto UBTECH's Jimu components or Vex IQ plates or you can drill out the holes to accept Lego pins and connect up with Lego beams.

Tuesday, March 20, 2018

A.s.a. H.'s moral development

In a community of Asa agents if one mobile robot "A" exerts too large a force and causes pain/damage to another robotic agent "B" then "A"'s utility measure is reduced and "A" learns not to use such force on others in similar situations in the future. The society of agents also learns to "help" or "cooperate," such as when 2 or more agents can move a target together but not individually. Moving an obstacle away from the docking/recharging station, for example. Asa's "morality" consists of a set of such concepts (each with relatively high learned utility) distributed across the different levels of the hierarchical memory. Things like helping/cooperating, not causing harm, valuing life and intelligence/thought, etc.

Thursday, March 15, 2018

Laser target fusion

I see that Curtis et al of Colorado State University claim a record production of 2 million fusion neutrons per Joule of laser light from a nanowire target array. (Micro-scale fusion in dense relativistic nanowire array plasmas, Nature Communications, published online 14 March 2018) This appears to be based on my electron space charge ion heating mechanism. (R. Jones, Ind. J. Phys., 55B, 397, 1981)

Wednesday, March 14, 2018

A concept of causality

I have been using the word “cause” during some recent A.s.a. H. training experiments. As a result Asa has formed the concept: cause=(force), cause=(low battery), cause=(walk), cause=(roll),... The corresponding concept of “effect” might then be: effect=(acceleration), effect=(slow), effect=(move),... Asa understands causality as a high level generalization, a summary of numerous lower level processes. This is in contrast with some philosophers who believe that causality is a fundamental primitive concept.

Sunday, March 4, 2018

Healing of mechanical life

Asa H robots can do a limited amount of self repair. If two Lego bricks become loose this will signal a pain and pushing on the bricks may reseat them. If one agent, perhaps a mobile robotic arm, is damaged another can be sent to act in its place. One portable sensor system can be carried about rather than another (broken) one.*

I have tried to make the Asa robots more modular so that damaged modules might be replaced with newer ones. Such capability is rather limited given the commercial components I am working with.

* Alternatively, the development of an easy electrical connect/disconnect might allow for replacement of faulty sensors alone while retaining the old computer interfacing circuitry, etc.

Friday, March 2, 2018

File backup and maintenance issues

I have paper and electronic copies of most of the programs in my AI code library. After losing my Case-Based Reasoning (paper) file during the ESU remodeling a few years ago I never did recover some of my  CBR programs. I may have had electronic copies on floppy disc or on the hard drive of my old DELL PC that died days after I bought my current HP desktop.* I have boxes of floppy discs, some of which can't be read anymore. Data files generated during experiments have not been maintained at all.

* I actually had a total of 3 older PCs that all died within a week or two at that time.

Sunday, February 18, 2018

A.s.a. H. gps success

The Vernier LabQuest 2's built in gps is able to measure positions to within a meter or two outdoors. I have even got it to work indoors though that is not recommended.

Attention mechanisms

In humans there appear to be multiple attention mechanisms distributed across the sensory modalities, executive control, and cognition. Similarly, A.s.a. H. has required the incorporation of multiple attention processes. Just how many are required?

Saturday, February 10, 2018

Knowledge hierarchy

Should different ways of representing concepts and different types of deduction system be used on different levels of a knowledge hierarchy? Certainly physics and biology use different sorts of representation and different practices. In a few of my experiments with Asa H I have employed multiple different similarity measures. When forgetting is used to prune less useful cases/concepts I sometimes find one similarity measure dominates on one level of the abstraction hierarchy while a different similarity measure dominates on another level. (By dominates I mean gives results having the highest utility measures.)
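A tiny Python illustration of why the choice of similarity measure matters (vectors invented for the purpose): two standard measures can retrieve different cases for the same probe, so utility-driven forgetting can end up keeping one measure's favorites on one level and another's on a different level.

```python
import math

def dot_sim(a, b):
    return sum(x * y for x, y in zip(a, b))

def euclid_sim(a, b):
    # Negated distance so that "greater" means "more similar" for both measures.
    return -math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(cases, probe, sim):
    return max(cases, key=lambda c: sim(probe, c["vec"]))

cases = [{"name": "a", "vec": [2.0, 0.0]},   # large magnitude, same direction
         {"name": "b", "vec": [0.6, 0.8]}]   # unit length, different direction
probe = [1.0, 0.0]
```

The dot product favors the long vector pointing the same way; Euclidean distance favors the nearby vector. Which preference earns higher utility can differ by level of abstraction.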

Friday, February 2, 2018

Ubtech jimu robots

The jimu line of robots available from Ubtech are made from snap together components that are compatible with my pain subsystem. The holes in Vex IQ and jimu parts are almost the same size. You can use either Vex IQ pins or jimu pins to join jimu parts to Vex IQ parts. The Vex IQ pins are a bit more snug and probably work the best. Some hole spacings on jimu pieces are identical to spacing on Vex IQ which also helps combine the components. You can drill out the holes in jimu parts so that they will accept Lego pins. You can then join jimu parts to Lego parts.

Thursday, February 1, 2018

Roomba/iRobot Create

We have owned 4 or 5 Roombas, each a different model. My original intent was to follow Tribelhorn and Dodd (Evaluating the Roomba..., 2007 IEEE Conference on Robotics and Automation) and Kurt (Hacking Roomba, Wiley, 2007) and use Roombas as a robotics platform for A.s.a. H. But with the development of my pain subsystem this became suboptimal (as it is also with Vex EDR, Meccano, 3Pis, and the like). So now our Roombas simply clean the floor (see the last picture in my blog of 16 January 2012).

Hacking the CB1

It is easy to hack the Thames and Kosmos CB1 core controller so that switch inputs can receive signals from my pain subsystem. Multiplexers can allow one to identify different pains having different points of origin if that is desired. The CB1 outputs could be used to control the multiplexers (select lines).

Saturday, January 20, 2018

Bots

I have bought a couple of the Thames and Kosmos robotics kits. Their CB1 core controller accepts 4 sensor inputs, has 4 outputs, and is programmable with Blockly. Like Lego mindstorms and Vex IQ it can be used with Asa’s pain subsystem. There are limitations, however. If contacting metal tabs are employed as sensors*, reseating of Lego bricks can sometimes be accomplished by the robot itself pushing on the loose joint. This is typically not possible with components joined via pins, for example Vex IQ, nor is it possible if/when a simple fine (frangible) bridge wire* serves to signal breakage.

It’s possible to build a robot using components from all three manufacturers at once by bolting together subassemblies. It’s also possible to drill out the holes in Vex IQ plates (or beams) so that they accept Lego pins, thus allowing them to connect to Lego pieces (beams, etc.). (It takes a bit of work to line up multiple holes because of the difference in spacings. Only drill out the holes that you need to.)

Perhaps the best way to deal with the reliability issues (see blog of 5 Jan. 2018) is with a larger society of cooperating agents distributed across multiple (hardware and software) platforms.

* These can be attached to the bricks using Elmer's glue, hot glue, or epoxy depending upon how permanent you want them. Obviously bridge wires are not expected to be truly permanent. Conducting epoxy is one way to attach lead wires to the pain sensors.

Hillary

If you’ve got 25 years in which to head off a presidency you can do it. If you throw dirt at someone for 25 years some of it will stick. Even a lie repeated often enough will be believed by some people.

War?

Will crazy Donald start a war in an effort to remain in power? Will Americans fall for it? Can America survive it?

Thursday, January 18, 2018

Lockheed-Martin’s fusion experiment

Years ago I did a lot of cusp confinement work. In Lockheed-Martin’s device (CFR T4B) the line cusps that pass over the internal rings experience bad magnetic curvature and this will excite instabilities. This turbulence will produce plasma losses to the wall and broaden the cusps. Lockheed-Martin underestimates how wide their cusps will be.

A.s.a. H.'s psychic abilities

A.s.a. has abilities that would be considered psychic if humans had them:

E.S.P.- A.s.a. has extra senses. It can sense electric fields, magnetic fields, nuclear radiation, etc.

Telepathy- A.s.a. can communicate via WiFi or other radio.

Remote viewing and manipulation- A.s.a. can sense and act through remote webcams, robots, etc.

Astral travel- A complete copy of A.s.a. and its knowledgebase can be electronically transferred from one computer system to another remote one.

Time travel- A.s.a. can travel into the future if we either delay its installation in the remote system or simply interrupt its operation for some period of time. (e.g. Rip Van Winkle or Manning's The Man Who Awoke)

Monday, January 8, 2018

Friday, January 5, 2018

Reliability issues

I have been having a number of hardware and software issues. I’ve always had problems getting GPS to work and keeping it working. (I was using a Vernier sensor but I own and have tried others as well.) Recently I have had false/noisy signals from a Lego ultrasonic sensor, poor response from a HiTechnic color sensor (for some colors), and my Microsoft Surface Pro 4 keeps freezing up. My programs sometimes seem to lose contact with both a motor and a light sensor. In some cases unplugging and replugging wires seems to fix the issue. I’ll try to fix these issues on the test stand. As with any experiment it’s hard to get everything working at once.

Thursday, January 4, 2018

A society of specialist A.s.a. agents

Specialist agents are distinguished by each having its own unique knowledge base, algorithms, senses, effectors, values, etc. The number of copies we would want for each specialist type will depend upon the task environment the society resides in (and would change as the environment changes). We would likely need only one agent that could converse with humans (have natural language capability). If some specialist employs a webcam then we may want two of them in order to obtain stereoscopic depth perception. Agents with webcams would also need unique image processing algorithms (which other agents don't need). If some agent has a robotic arm we will likely want 2 or 3 of them for part handling. They would also have unique muscle memory. Agents assigned to hazardous tasks would likely have different values from other agents. Knowledge bases would be unique to the roles each agent plays in the society. Specialization also helps with the attention problem.
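One way to picture the mix of specialists described above is as a simple roster of agent types with replica counts. All names and counts here are illustrative only; a real society would be tuned to its task environment:

```python
# Sketch: a roster of specialist agent types and how many copies of
# each the society keeps. Names, senses, and counts are made up for
# illustration, not taken from any actual A.s.a. H. deployment.
from dataclasses import dataclass, field

@dataclass
class SpecialistType:
    name: str
    copies: int                           # replicas needed for this role
    senses: list = field(default_factory=list)
    effectors: list = field(default_factory=list)

society = [
    SpecialistType("language", copies=1, senses=["keyboard"]),   # one talker
    SpecialistType("vision", copies=2, senses=["webcam"]),       # stereo pair
    SpecialistType("manipulator", copies=3, effectors=["arm"]),  # part handling
]

total_agents = sum(t.copies for t in society)
```

As the environment changes, only the `copies` counts (and perhaps the roster itself) need to be revised.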

Wednesday, January 3, 2018

Attention

When concepts/cases lower down in the A.s.a. H. hierarchical memory are sufficiently strongly activated they may continue to keep active some given concept* higher up in the hierarchy, “keep attending to it,” even ignoring newer activity occurring lower down the hierarchy. The upper concept “holds A.s.a.’s attention.”

* the formal concept structure in A.s.a. is similar to that in Concepts and Fuzzy Logic, Belohlavek and Klir, MIT Press, 2011, pages 181 and 190
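The attention-holding behavior can be sketched as a simple rule: the current upper-level concept keeps the focus while its lower-level support stays strong, otherwise the strongest newer candidate takes over. The threshold and concept names are invented for illustration:

```python
# Sketch of "holding attention." The 0.7 threshold and the concept
# names are illustrative values, not taken from A.s.a. H. itself.
HOLD_THRESHOLD = 0.7

def attended_concept(current_focus, support, new_candidates):
    """Keep attending to current_focus while its lower-level support
    is strong enough; otherwise switch to the strongest newcomer."""
    if support.get(current_focus, 0.0) >= HOLD_THRESHOLD:
        return current_focus                  # attention is held
    return max(new_candidates, key=new_candidates.get)

# The "approach" concept holds attention despite newer activity below:
focus = attended_concept("approach", {"approach": 0.9},
                         {"avoid": 0.8, "explore": 0.4})
```

Once the lower-level support for "approach" decays below the threshold, the same rule lets attention shift to "avoid."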

Monday, January 1, 2018

Coding

I have been experimenting with code that will accept keyboard inputs like animal=(person), thing=(animal), animal=(cat), person=(face) and create these links between, say, the concepts animal, person, thing, face, and cat in A.s.a.’s concept map memory.
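A minimal sketch of such a parser, using the `animal=(person)` input syntax described above. The dict-of-sets graph and the function names are my own illustration, not the actual A.s.a. H. code:

```python
# Sketch: parse inputs like "animal=(person)" into concept-map links.
# The graph representation (dict of parent -> set of children) is
# illustrative only.
import re

def parse_link(line):
    """Turn 'animal=(person)' into the pair ('animal', 'person')."""
    m = re.fullmatch(r"\s*(\w+)\s*=\s*\(\s*(\w+)\s*\)\s*", line)
    if m is None:
        raise ValueError(f"unrecognized input: {line!r}")
    return m.group(1), m.group(2)

def add_links(concept_map, lines):
    """Record each parsed link in the concept map."""
    for line in lines:
        parent, child = parse_link(line)
        concept_map.setdefault(parent, set()).add(child)
        concept_map.setdefault(child, set())  # make sure the node exists
    return concept_map

concepts = add_links({}, ["animal=(person)", "thing=(animal)",
                          "animal=(cat)", "person=(face)"])
```

After these four inputs, `concepts["animal"]` links to both "person" and "cat".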

Tool use

A.s.a. H. can use its mobile arms to pick up, carry, and use things like a Geiger counter and a metal detector, much like a human might employ a stud finder. It can also pick up and write on the ground with a colored marker. It could mark off areas explored or leave a trail it could later follow to get back home.

A.s.a. H. learning from natural language text

Now that A.s.a. has been given the Toki Pona vocabulary* can it be taught from reading alone?** Hearing or reading the words "moving forward", "obstacle", "slow down", "turn left", for example, A.s.a. may then record and enact that sequence. (see my book Twelve Papers, www.robert-w-jones.com, “book,” page 13) It will have been told/taught what to do when moving forward and detecting an obstruction.

Certainly SOMETHING can now be learned by reading alone. But how much? What are the limitations? Once again we think that the syllabus and its order are important. Can we begin with something like preschool, kindergarten, and grade school texts? Some editing would probably be needed since A.s.a.’s senses, effectors, and environment are not identical to that experienced by humans.

* But in English; see my blog of 1 Oct. 2015 for a partial listing.

**Of course humans do not learn by reading alone. They also concurrently experience the world through their senses and actions.
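The record-and-enact step above can be sketched as a lookup from read phrases to stored actions. The phrase-to-action table and the action names are illustrative; A.s.a. H.'s actual representation is a learned vector hierarchy, not a lookup table:

```python
# Sketch: enacting a read word sequence. The lexicon entries and
# action names below are made up for illustration.
lexicon = {
    "moving forward": "advance",
    "obstacle": "note_obstacle",
    "slow down": "reduce_speed",
    "turn left": "rotate_left",
}

def enact(phrases):
    """Translate each known phrase into its action; skip unknown ones."""
    return [lexicon[p] for p in phrases if p in lexicon]

plan = enact(["moving forward", "obstacle", "slow down", "turn left"])
```

Reading the sequence thus yields an action plan the agent can record, replay, and later generalize from.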

Intelligent personal assistants

Although I already have Siri, Cortana, and Alexa I decided to buy a Google Home because of their search engine. (I gather that Cortana can be forced to use Google as its search engine.)