Friday, February 13, 2026

Is nature non-Markovian?

A non-Markovian process is one whose future states depend on its history, not just on its current state. Jacob Barandes argues* that "...perhaps when one takes physically fundamental non-Markovian processes and tries to shoehorn them into a Markovian paradigm, the result is quantum theory..."

* Pilot-Wave Theories as Hidden Markov Models, arXiv:2602.10569, 12 Feb. 2026
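To make the distinction concrete, here is a toy sketch (my own illustration in Python, not anything from Barandes's paper) contrasting a Markovian update rule, which reads only the current state, with a non-Markovian one, which reads the entire past trajectory:

```python
import random

def markov_step(state):
    # Markovian: the next state depends only on the current state.
    return (state + random.choice([-1, 1])) % 10

def non_markov_step(history):
    # Non-Markovian: the next state depends on the whole history.
    # Here, a toy "memory effect": drift toward the long-run average
    # of all past states.
    avg = sum(history) / len(history)
    step = 1 if history[-1] < avg else -1
    return (history[-1] + step) % 10

# Example: the Markov walker needs one number; the non-Markov
# walker must carry its full past around with it.
history = [2, 4, 9]
history.append(non_markov_step(history))
```

Trying to describe the second process with a state-transition rule that sees only the current state forces extra structure into the description, which is the flavor of the "shoehorning" Barandes describes.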

Sunday, February 1, 2026

Robotics development

It makes sense to develop industrial robots first*; their environment is simpler and more structured. And, again, how intelligent the robot needs to be will depend upon the task and the environment.

* Before home robots, for instance. (Of course many tasks at home are automated already: dish washing, clothes washing, drying, some cooking, robot vacuums, ....)

The AI label

On the one hand, "AI" has become a marketing label intended to draw in suckers. On the other hand, there really are a lot of valid AI algorithms, including things as simple as linear regression. So the "AI" label can be valid.* You have to look closely.

My new hearing aids are advertised as having "AI", including advanced noise cancellation.** The in-office demo was impressive; we'll see how well they work over time "in the wild."

* On the other hand the label "agent" is almost always misleading. A correct definition of "A.I. agent" is something like what's given in my 1 April 2025 blog.

** My previous pair were advertised as having machine learning. I did not find that feature to be all that useful.
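As an illustration of just how simple a valid "AI" algorithm can be, here is ordinary least-squares line fitting in a few lines of plain Python (a minimal sketch of my own, not taken from any particular library):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = slope * x + intercept,
    # using the closed-form solution for a single feature.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Fit a line to four points lying exactly on y = 2x + 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Whether a vendor calling this "AI" is honest marketing or hype is exactly the judgment call the label forces on us.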

Thursday, January 1, 2026

Good enough A.I.

Typical assembly-line robots need not be mobile; the work can be brought to them. Clean rooms are required only for certain operations. Robotic surgery needs to be precise, but it need not lift heavy loads. My A.I. specialists only need to be good enough for their specialized tasks. The agent's required memory, speed, processing power (I.Q.), architecture*, and cost depend on the specialty.

* There's way more to A.I. than neural networks. See my blogs of 9 and 13 September 2010. (The majority of these subfields were needed in order to create A.s.a. H.) The big AI companies today have much too narrow a focus.

The limited ability of LLMs to learn temporal sequences*

Huang et al.** argue that "transformers incur a quadratic attention cost, limiting their ability to model long spatial and temporal sequences..."

* see my blog of 1 Feb 2025

** Jihao Huang et al., LADY: Linear Attention for Autonomous Driving Efficiency without Transformers, arXiv:2512.15038v1, 17 Dec. 2025.
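The quadratic-versus-linear point can be seen in a toy implementation. The sketch below (my own, in pure Python; the feature map phi is an arbitrary choice for illustration and is not LADY's actual formulation) contrasts standard softmax attention, which scores every query against every key, with a kernel-trick linear attention that keeps running sums and so does work linear in sequence length:

```python
import math

def quadratic_attention(qs, ks, vs):
    # Standard softmax attention: every query is scored against every
    # key, so a length-n sequence costs O(n^2 * d) work.
    out = []
    for q in qs:
        scores = [math.exp(sum(qi * ki for qi, ki in zip(q, k))) for k in ks]
        z = sum(scores)
        out.append([sum(s * v[j] for s, v in zip(scores, vs)) / z
                    for j in range(len(vs[0]))])
    return out

def linear_attention(qs, ks, vs):
    # Kernel-trick (causal) linear attention with an assumed feature
    # map phi(x) = 1 + max(x, 0). Running sums S = sum phi(k) v^T and
    # z = sum phi(k) are updated once per token: O(n * d^2) total work,
    # linear in the sequence length n.
    phi = lambda x: [1.0 + max(xi, 0.0) for xi in x]
    d, dv = len(qs[0]), len(vs[0])
    S = [[0.0] * dv for _ in range(d)]
    z = [0.0] * d
    out = []
    for q, k, v in zip(qs, ks, vs):
        fk = phi(k)
        for i in range(d):
            z[i] += fk[i]
            for j in range(dv):
                S[i][j] += fk[i] * v[j]
        fq = phi(q)
        denom = sum(fqi * zi for fqi, zi in zip(fq, z))
        out.append([sum(fq[i] * S[i][j] for i in range(d)) / denom
                    for j in range(dv)])
    return out
```

The two compute different attention weights in general; the point of the comparison is only the cost scaling, which is what the Huang et al. quote is about.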

AI progress

It remains to be seen whether transformer neural networks (LLMs) can learn (approximate) the human cognitive architecture. Even if they could, humans are not, themselves, rational*, nor do they possess a sound value system.** Noorbakhsh Amiri Golilarz et al. have a good discussion*** of some of the things that are missing from LLMs. With my A.s.a. H. architecture I have tried to address the shortcomings Golilarz et al. outline. Self-monitoring, allocation of resources, and meta-cognitive awareness are discussed in my "Experiments with Asa H" in my book Twelve Papers.**** The learned case base (knowledge representation/concept structure/memory) is corrected and repaired, grows, updates, and evolves over time. Deployed on robots, Asa is fully embodied (sensorimotor grounding). The hierarchical case base is distributed across multiple timescales.

* see Predictably Irrational, Dan Ariely, Harper, 2008.

** We are, after all, apes. (Gentle, very modern apes as Erika would say.)

*** Bridging the Gap: Toward Cognitive Autonomy in Artificial Intelligence, arXiv:2512.02280v1, 1 Dec. 2025.

**** Available under "book" at www.robert-w-jones.com
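For readers unfamiliar with case-based memory in general, here is a toy sketch of the generic idea: store (situation, action, utility) cases, retrieve the nearest past case, and revise utilities with experience. This is my own minimal illustration of textbook case-based reasoning, not the actual Asa H architecture:

```python
def distance(a, b):
    # Euclidean distance between two situation vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class CaseBase:
    def __init__(self):
        self.cases = []  # each case: [situation, action, utility]

    def add(self, situation, action, utility):
        # The case base grows as new experiences are recorded.
        self.cases.append([list(situation), action, float(utility)])

    def best_action(self, situation):
        # Retrieve the nearest stored case and reuse its action.
        nearest = min(self.cases, key=lambda c: distance(c[0], situation))
        return nearest[1]

    def reinforce(self, situation, reward, rate=0.5):
        # Revise: nudge the nearest case's utility toward the observed
        # reward, so stored knowledge is corrected/updated over time.
        nearest = min(self.cases, key=lambda c: distance(c[0], situation))
        nearest[2] += rate * (reward - nearest[2])
```

A hierarchical system would stack such memories, with higher levels operating over longer timescales; this flat toy shows only the grow/retrieve/revise cycle.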

ESP32

Science is, inherently, an open-source activity. So, with the threat to the open-source character of the Arduino ecosystem, I am now buying some more ESP32 development boards.