Thursday, June 14, 2018

Robot size

Many of us favor bench-scale experiments. Large robots are more expensive and may damage themselves as well as their surroundings, so there is no need for anything larger than an iRobot Create. On the other hand, our robots must be large enough to transport sensors and such things as solar panels. Something the size of a Pololu 3Pi could carry the various sensors from the Arduino sensor kits but is too small to transport larger sensors like a Geiger counter tube. A society of robots might come in a few different sizes, but within this general range.

Wednesday, June 13, 2018

Criticizing capitalism once again

In capitalism workers are not paid what they're owed. "Although productivity is growing steadily in almost all areas of the economy, workers are required to work as hard as ever. They do not benefit from the increase in productivity. So, we must ask, where do the profits go? Evidently not to the people to whom they are owed, i.e. the workers." (W. Ertel, Introduction to Artificial Intelligence, 2nd edition, Springer, 2017, pg. 13)
Capitalist economics is unsound because, among other things, its model of human rationality is invalid. "[No] such theory of our common-sense intuitions about anything can be constructed...The same story applies in economics...This programme, for all its mathematical elegance, has also foundered." (N. Chater, The Mind Is Flat, Allen Lane, 2018, pg. 32)

Friday, June 8, 2018

Some metacognition

A.s.a. H.’s memory cases include the actions that Asa takes. In addition to actions taken in/on the world using servos, Asa’s actions may include time spent doing deduction, simulating, extrapolating, searching memory, etc. So when Asa interpolates and extrapolates using these cases, it does a certain amount of thinking about thinking. This occurs on multiple levels of the Asa H hierarchical memory, at different levels/degrees of abstraction.
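A minimal sketch of the idea, with hypothetical names (Case, INTERNAL_OPS, thinking_fraction are illustrative, not Asa H's actual code): a case's action slots can hold internal (thinking) operations alongside external (servo) ones, so the same case memory that drives action in the world also records deliberation.

```python
# Internal "thinking" operations recorded in cases, just like servo actions.
# These names are assumptions for illustration only.
INTERNAL_OPS = {"deduce", "simulate", "extrapolate", "search_memory"}

class Case:
    def __init__(self, state, actions):
        self.state = state      # observed feature vector
        self.actions = actions  # mix of external and internal actions

def thinking_fraction(cases):
    """Fraction of recorded actions that are internal operations,
    i.e. how much of the case memory is thinking about thinking."""
    acts = [a for c in cases for a in c.actions]
    internal = [a for a in acts if a in INTERNAL_OPS]
    return len(internal) / len(acts) if acts else 0.0

cases = [Case([0.2, 0.9], ["move_forward", "search_memory"]),
         Case([0.8, 0.1], ["extrapolate", "turn_left"])]
print(thinking_fraction(cases))  # 0.5
```

When Asa interpolates among such cases, the interpolation covers the internal operations too, which is where the metacognition comes from.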

Tuesday, June 5, 2018

Primitive concepts

The empiricists held that all concepts are definable in terms of perceptual primitives. A.s.a. H.’s senses of light, temperature, and force, signals that are the outputs of sense organs, might be examples. In the case of humans, however, some primitive concepts may already involve innate computation, preprocessing if you will. Infants have an innate sense of heights, for example. Similarly, A.s.a.’s IR or ultrasonic distance sensors do some innate preprocessing in order to compute a measure of near or far.
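The ultrasonic case can be sketched concretely. This is an illustrative example, not Asa H's actual code: the raw echo time is first converted to a distance (the innate preprocessing), and only then mapped to the primitive perceptual concept "near" or "far". The 30 cm threshold is an assumption.

```python
# Speed of sound in air at roughly 20 C, in cm per microsecond.
SPEED_OF_SOUND_CM_PER_US = 0.0343

def echo_to_distance_cm(echo_us):
    # The echo travels to the obstacle and back, hence the divide by 2.
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2.0

def near_or_far(echo_us, threshold_cm=30.0):
    """Primitive perceptual concept computed by innate preprocessing."""
    return "near" if echo_to_distance_cm(echo_us) < threshold_cm else "far"

print(near_or_far(1000))  # ~17 cm  -> "near"
print(near_or_far(4000))  # ~69 cm  -> "far"
```

The point is that "near" is not raw sensation; it is already the output of a small innate computation.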

Sunday, June 3, 2018

Cognitive style

A.s.a. H.’s cognitive style can be changed by:
Choice of 1, 2, or N dimensional memories, or a mix of several
Choice of similarity measure, or a mix of them
Choice of extrapolators and interpolation methods
Amount of short term memory employed
Amount and kind of self monitoring
Setting of various rate constants
Number of agent specialties
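The knobs listed above can be gathered into one configuration record. This is a hypothetical sketch; the field names and default values are illustrative assumptions, not Asa H's actual parameters.

```python
from dataclasses import dataclass

@dataclass
class CognitiveStyle:
    # All field names/defaults are assumptions for illustration.
    memory_dimensions: tuple = (1,)        # 1, 2, or N dimensional memories, or a mix
    similarity_measures: tuple = ("dot",)  # e.g. dot product, Euclidean, or a mix
    extrapolator: str = "linear"
    interpolation: str = "nearest"
    short_term_memory_size: int = 10       # amount of short term memory
    self_monitoring: str = "none"          # amount and kind of self monitoring
    learning_rate: float = 0.1             # one of the various rate constants
    n_agent_specialties: int = 1

# Changing a field changes the agent's cognitive style.
style = CognitiveStyle(memory_dimensions=(2,), short_term_memory_size=25)
print(style.short_term_memory_size)  # 25
```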

Wednesday, May 30, 2018

Concepts and levels of abstraction

Certainly some of philosophy is about the exploring, defining, and redefining of concepts. In my AI A.s.a. H. (and in humans?) concepts are defined on various different levels of abstraction*. Some concepts are then clearly limited to use on a single level. Examples might be: "color", "hear", "smell", "taste." Some concepts appear to be applicable across all levels of abstraction. Candidates might be: "change", "different/opposite/NOT", "same/equal", OR, AND. There also appear to be concepts that are applicable across a number of levels of abstraction but not all. Things like: "causality", "good and bad", "thing", "location", "shape", "when", "part."

Part of the problem of philosophy is being sure you are applying your concepts to the right levels of abstraction (e.g., avoiding category errors). These may differ from one person (or AI agent) to another, since two intelligences do not share the exact same concept (knowledge) webs.
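One way to make the category-error check concrete: tag each concept with the abstraction levels at which it applies, then flag applications outside that set. The level assignments below are illustrative assumptions only, not Asa H's actual concept web.

```python
LEVELS = range(1, 6)  # 1 = most concrete, 5 = most abstract (assumed scale)

applicable_levels = {
    "color": {1},            # limited to a single level
    "change": set(LEVELS),   # applicable across all levels
    "causality": {2, 3, 4},  # applicable to some levels but not all
}

def category_error(concept, level):
    """True if applying 'concept' at 'level' misuses it."""
    return level not in applicable_levels.get(concept, set())

print(category_error("color", 4))   # True: "color" misapplied at level 4
print(category_error("change", 4))  # False: "change" applies anywhere
```

Two agents with different applicable_levels tables would disagree about which applications are category errors, matching the point above about differing concept webs.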

A concept that strictly applies only on one (or a few) levels of abstraction might also serve as a metaphor on yet another. (e.g. "time flies")

* Each new concept is discovered/learned/invented on some single particular level of abstraction in A.s.a. H.’s hierarchical semantic memory.

Sunday, May 27, 2018

Attributing emotions to A.s.a. H.

William James suggested that human perceptions of our internal bodily state (heart rate, breathing rate, adrenaline level, body shaking, flushed face, etc.), plus contexts like pain, sound, a light flash, or other environmental changes, might define a given emotion. If this is what emotion is, then Asa H could have somewhat similar emotions of its own.
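A minimal sketch, assuming a Jamesian model: an emotion label is just a function of perceived internal state plus context. The thresholds, labels, and the robot analogues named in the comments are hypothetical, not Asa H's actual values.

```python
def label_emotion(heart_rate, adrenaline, context):
    """Toy Jamesian labeling: bodily state + context -> emotion.
    All thresholds and labels are illustrative assumptions."""
    if context == "pain" and heart_rate > 100:
        return "fear"
    if context == "environmental_change" and adrenaline > 0.5:
        return "surprise"
    return "calm"

# A robot analogue could substitute motor current, battery drain,
# and chassis vibration for heart rate, adrenaline, and shaking.
print(label_emotion(120, 0.8, "pain"))  # "fear"
```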