Tuesday, January 28, 2020

A conscious machine

Working within Baars' global workspace theory, Barthelmess, Furbach, and Schon argue* that the Hyper reasoning system, with ConceptNet as its knowledge base, is conscious. While I agree with much of this, I believe there are different degrees of consciousness. I have also argued** that consciousness is a collection of processes, not one single thing. The Hyper-ConceptNet system does not have a notion of self,*** nor does it have all 10 of Hobson's "functional components".****

I don't think that consciousness is as difficult as the "hard problem" people would have us believe. On the other hand, I don't think that Hyper-ConceptNet is as fully conscious as A.s.a. H. is.*****

The attention issue is part of dealing with the curse of dimensionality. It's a problem that must be faced by any machine trying to operate in a large state space.
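
As a toy illustration (all numbers hypothetical, not from any of my experiments), discretizing n input channels into k levels each gives k^n possible states, so attending to only a few channels at a time is what keeps the space an agent must learn over tractable:

# A minimal sketch (hypothetical numbers) of how attention shrinks
# the effective state space an agent must learn over.

def state_space_size(n_channels, levels_per_channel):
    """Discretized state space grows exponentially with channel count."""
    return levels_per_channel ** n_channels

ALL_CHANNELS = 20      # e.g., sonar, touch, battery, odometry, ...
ATTENDED = 3           # channels attended to at any one moment
LEVELS = 10            # discretization levels per channel

print(state_space_size(ALL_CHANNELS, LEVELS))  # 10^20 states, intractable
print(state_space_size(ATTENDED, LEVELS))      # 10^3 states, learnable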

* arXiv:2001.09442v1, 26 Jan. 2020
** See my blog of 19 Oct. 2016
*** See Trans. Kansas Academy of Sci., 2017, page 108
**** For example, it seems to lack orientation, emotion, and values.
***** But ConceptNet is a large knowledge base of almost 3 million axioms in first-order logic!

Monday, January 20, 2020

The Communist Utopia

The argument goes something like this:
- Society requires that most of us work.
- But physics tells us that work is a form of energy. "Labor-saving appliances" allow us to replace human labor with other energy sources.
- It might be possible to make energy free. Tesla thought that there might be sources of free cosmic energy. Much of his physics was unsound, but solar energy is a possible example. Lewis Strauss, the chairman of the Atomic Energy Commission (1954), thought nuclear energy might become "too cheap to meter," running on plentiful thorium or deuterium fuels, for example.
- No one then need work any longer. Machines would replace all human labor. (Today machines are able to do about half of all human jobs, but completing the task might involve the creation of "mechanical life" and a subsequent class struggle between humans and AIs.)

Sunday, January 12, 2020

Vector values

The idea that humans have a vector value system* receives some support from Shalom H. Schwartz's "circular model of values" (see, for example, Journal of Research in Personality, June 2004, pp. 230-255).

*A.s.a. H. frequently makes use of a vector value system (see my blog of 19 Feb. 2011) and my criticism of capitalism is based in part on the need to avoid a scalar utility (see my paper at www.robert-w-jones.com, philosopher, Capitalism is Wrong).
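
As a minimal sketch (not A.s.a. H.'s actual code, and with made-up value components), the practical difference between the two value systems: a scalar utility can always rank two alternatives, while a vector value system yields a preference only when one alternative Pareto-dominates the other:

# Hypothetical sketch contrasting a scalar utility with a vector value
# system: vector values only yield a preference under Pareto dominance.

def scalar_prefers(u_a, u_b):
    """A scalar utility always ranks the two alternatives."""
    return u_a > u_b

def vector_prefers(v_a, v_b):
    """Prefer a over b only if a is at least as good on every value
    component and strictly better on at least one (Pareto dominance)."""
    at_least = all(a >= b for a, b in zip(v_a, v_b))
    better = any(a > b for a, b in zip(v_a, v_b))
    return at_least and better

# Hypothetical value components, e.g. (safety, knowledge, energy).
a = (0.9, 0.2, 0.5)
b = (0.4, 0.8, 0.5)
print(vector_prefers(a, b))            # False: neither alternative dominates
print(scalar_prefers(sum(a), sum(b)))  # a scalar sum forces a ranking anyway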

Friday, January 10, 2020

An example of learned attention (attending to an input channel)

A.s.a. H. learns that (a robot's) collisions correlate with increased pain and damage.
It also learns that sweeping the ultrasonic (obstacle) sensor back and forth correlates with fewer collisions than keeping the ultrasonic sensor fixed in one direction. A.s.a. H. then learns to sweep its sensor, looking for obstacles and spending more time attending to this particular input channel.

Alternatively, if the robot has a single fixed-mounted sensor it may learn to make small repeated left and right turns as it advances.
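
A toy sketch of this kind of learned attention (not A.s.a. H.'s actual learning code): the agent tracks the collision rate observed under each sensing policy and shifts its time toward the policy correlated with less pain and damage:

# Toy sketch of learned attention: track average collisions under each
# sensing policy and shift toward the policy with less pain/damage.

from collections import defaultdict
import random

collision_rate = defaultdict(lambda: [0, 0])  # policy -> [collisions, trials]

def record_trial(policy, collided):
    stats = collision_rate[policy]
    stats[0] += int(collided)
    stats[1] += 1

def choose_policy(policies, explore=0.1):
    """Mostly exploit the policy with the lowest observed collision rate."""
    if random.random() < explore:
        return random.choice(policies)
    return min(policies,
               key=lambda p: collision_rate[p][0] / max(collision_rate[p][1], 1))

# Simulated experience: sweeping detects more obstacles, so fewer collisions.
for _ in range(1000):
    policy = choose_policy(["fixed", "sweep"])
    collided = random.random() < (0.3 if policy == "fixed" else 0.1)
    record_trial(policy, collided)

print(choose_policy(["fixed", "sweep"], explore=0.0))  # -> "sweep"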

Thursday, January 2, 2020

A kind of intentional thought

Whenever A.s.a. H. learns a case (sequence) it will include any actions that were taken. Actions need not be the activation of servo motors; they can include things like choosing to perform "thinking with a simulation" (see my 30 May 2019 blog) or adjusting things like the time spent extrapolating, doing feature extraction, etc. (e.g., adjusting parameters like L and skip; see my 10 Feb. 2011 blog). See also my book Twelve Papers, pages 15 and 16, self monitoring, www.robert-w-jones.com.
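
A hypothetical sketch of such a case: the action slots hold internal (mental) operations alongside motor commands, so replaying the case re-enacts the thought as well as the movement. All of the names here are illustrative only:

# Hypothetical sketch of a learned case (sequence) whose actions include
# internal "thought" operations as well as motor commands.

case = [
    {"percept": "obstacle ahead", "action": ("motor", "stop")},
    # An intentional, internal action: think with a simulation before moving.
    {"percept": "stopped",        "action": ("mental", "run_simulation")},
    # Another internal action: adjust a learning parameter like L.
    {"percept": "sim complete",   "action": ("mental", ("set_param", "L", 5))},
    {"percept": "path clear",     "action": ("motor", "advance")},
]

def replay(case):
    """Replaying the case re-issues mental actions just like motor ones."""
    for step in case:
        kind, act = step["action"]
        print(f"{kind:6} -> {act}")

replay(case)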

Wednesday, January 1, 2020

Disembodied AI, a complication

Following up on my 1 May 2019 blog, I have replaced A.s.a. H.'s lowest layer with human inputs. Unfortunately, some common human inputs need to go to A.s.a.'s second, third, and fourth layers. This complicates learning, among other things.
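
An illustrative sketch of the complication (the layer assignments below are made up): inputs that are already abstractions have to be injected at a matching layer of the hierarchy, so the layers below never see the raw evidence they would normally learn from:

# Illustrative sketch: human-supplied inputs are already abstractions,
# so they must enter the hierarchy at a layer matching their level.

INPUT_LAYER = {
    "raw sonar reading": 1,     # low-level sense data -> lowest layer
    "there is a chair":  2,     # an object concept -> second layer
    "the room is tidy":  3,     # a scene-level judgment -> third layer
    "we are moving out": 4,     # a plan/narrative -> fourth layer
}

def inject(hierarchy, message):
    layer = INPUT_LAYER.get(message, 1)
    hierarchy[layer].append(message)   # bypasses learning in layers below

hierarchy = {1: [], 2: [], 3: [], 4: []}
inject(hierarchy, "there is a chair")
print(hierarchy)  # lower layers never see evidence for this concept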

A.s.a. H. learns behavior trees

Colledanchise and Ögren have discussed the advantages of behavior trees in their book Behavior Trees in Robotics and AI: An Introduction (arXiv:1709.00084v3, 15 Jan. 2018). The advantages are said to include modularity, hierarchical organization, reusability, and reactivity. A.s.a. H. learns behavior trees similar to the one in figure 1.1 from Colledanchise and Ögren's book:

Pick and place = (Grasp ball -> Carry ball -> Drop ball)
Grasp ball = (detect ball inside grippers -> close grippers -> sense force against grippers)
Carry ball = (sense force against grippers -> move)
Drop ball = (sense force against grippers -> open grippers -> sense no force against grippers)
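
Rendered as a minimal sketch (the leaf behaviors are stand-in stubs, not real robot code), a sequence node ticks its children in order and succeeds only if all of them succeed, which reproduces the pick-and-place tree above:

# Minimal sketch of the pick-and-place tree as nested sequence nodes.

def sequence(*children):
    """A sequence node: tick children in order, fail on first failure."""
    def tick():
        return all(child() for child in children)
    return tick

# Stand-in leaf behaviors (sensing and acting stubs that always succeed).
detect_ball_inside_grippers = lambda: True
close_grippers = lambda: True
sense_force_against_grippers = lambda: True
move = lambda: True
open_grippers = lambda: True
sense_no_force_against_grippers = lambda: True

grasp_ball = sequence(detect_ball_inside_grippers, close_grippers,
                      sense_force_against_grippers)
carry_ball = sequence(sense_force_against_grippers, move)
drop_ball = sequence(sense_force_against_grippers, open_grippers,
                     sense_no_force_against_grippers)
pick_and_place = sequence(grasp_ball, carry_ball, drop_ball)

print(pick_and_place())  # True when every step succeeds in order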


A.s.a. H.'s divided self

In some experiments I have employed a society of Asa agents. In others, a small processor (a LEGO NXT, EV3, Arduino, or Raspberry Pi) rode on each mobile effector (or sensor array). These little brains were then linked (frequently by a power and communication tether) to a larger processor (brain), somewhat like in an octopus. The self concept that A.s.a. H. forms (see, for example, my blogs of 21 July 2016 and 1 January 2017) is then distributed among multiple brains in multiple locations.
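
A hypothetical sketch of this arrangement (names and readings invented): each little brain preprocesses its own sensor locally and sends only a summary over the tether, modeled here as a queue, to the big brain, which assembles the distributed self-model:

# Hypothetical sketch: on-effector "little brains" summarize locally and
# report over a tether to a central brain that integrates a self-model.

from queue import Queue

tether = Queue()

def little_brain(name, raw_reading):
    """Runs on the effector's small processor; sends only a summary."""
    summary = (name, "obstacle" if raw_reading < 20 else "clear")
    tether.put(summary)

def big_brain():
    """Integrates reports from every little brain into one self-model."""
    self_model = {}
    while not tether.empty():
        name, state = tether.get()
        self_model[name] = state
    return self_model

little_brain("left_arm", 12)
little_brain("right_arm", 45)
print(big_brain())  # the "self" is assembled from distributed parts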