It should be impossible for Donald Trump to be elected US president. The fact that it isn't is proof, once again, that the human value system is inadequate. It is not necessary to "align" A.I.s' values with human values. It isn't even desirable.
Tuesday, October 22, 2024
Monday, October 21, 2024
Space "race"
Large projects must be broken up into pieces and the work distributed among many cooperating specialists. For these specialists to coordinate with one another, there have to be schedules.
Work can be done either too fast or too slowly. Attempts to work too fast drive up costs and lower safety; the death on the first Soviet manned Soyuz flight (Soyuz 1) is an example. Working too slowly has its own problems: fixed overhead must be covered year after year, and a low launch rate erodes worker skill levels.
But it seems to me that the U.S. has no need to "race" at the current moment.
If the Soviets had been first to land a crew vehicle on the moon, that might well have led to a stronger American post-Apollo effort. Similarly, today, if the Chinese land a crew on the moon before the U.S., that might lead to a stronger U.S. program going forward.
I also think that the payback from space telescopes and other unmanned space assets has exceeded that of the manned programs. Crew should only be included when they are actually needed for the mission. Also, see my blog of 3 November 2010.
Friday, October 11, 2024
Musk
If you try to do too many things at once, you end up doing none of them well. Some years ago I criticized SpaceX for this. I would now suggest that Musk is trying to do too much: SpaceX, Starlink, electric cars, solar power, AI, robots, Twitter, Neuralink, politics, the Boring Company, ... I don't care how many good engineers he hires; this is too much.
Robot skin/clothes
My AI robots require a lot of sensors and, therefore, a lot of wiring:
One should, of course, first shorten all the wires as much as possible. The wiring (and the bot) can then be covered with "skin" or "clothing." This can be "resistive rubber" sheeting (Velostat, Linqstat) or more conventional cloth with embedded flex/force-sensing thin-film ribbon sensors. Any tethers can have their wiring similarly covered.
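As an illustration only, not something from the post itself, a sheet-style resistive skin is often read as a row/column grid of force-sensing patches scanned one row at a time. The sketch below assumes hypothetical set_row_voltage() and read_column_adc() hardware calls; both names, and the 8x8 grid size, are placeholders rather than any real library API.

```python
# Hypothetical sketch: scanning a resistive "skin" (e.g. Velostat) wired as a
# row/column grid of force-sensing patches. set_row_voltage() and
# read_column_adc() are placeholder hardware calls, not a real API.

N_ROWS, N_COLS = 8, 8

def set_row_voltage(row, on):
    """Placeholder: drive one row electrode high (True) or low (False)."""
    pass

def read_column_adc(col):
    """Placeholder: return the raw ADC reading for one column electrode."""
    return 0

def scan_skin():
    """Return an N_ROWS x N_COLS list of raw pressure readings."""
    frame = []
    for r in range(N_ROWS):
        set_row_voltage(r, True)           # energize one row at a time
        frame.append([read_column_adc(c) for c in range(N_COLS)])
        set_row_voltage(r, False)
    return frame
```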
Sunday, October 6, 2024
Attention again
Typically, each layer of the A.s.a. H. concept hierarchy broadcasts forward (upward) its N strongest outputs*. It is also possible for layers to accept only the M strongest inputs they see (from the one or more layers beneath them). Feedback can be incorporated if layers are allowed to see outputs from the layers above them. A rough sketch follows the footnote below.
* R. Jones, Trans. Kansas Academy of Science, vol. 109, pg 159, 2006.
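As an illustration only, not the actual Asa H code, here is a minimal sketch of this kind of attention: each layer passes upward only its N strongest outputs and attends to only the M strongest inputs it receives from below, with optional feedback from above. The Layer class, its random weights, and the layer sizes in the example at the bottom are arbitrary placeholders.

```python
import numpy as np

def top_k_mask(values, k):
    """Keep only the k largest-magnitude entries; zero out the rest."""
    if k >= len(values):
        return values.copy()
    keep = np.argsort(np.abs(values))[-k:]
    out = np.zeros_like(values)
    out[keep] = values[keep]
    return out

class Layer:
    """One layer that broadcasts its N strongest outputs upward and
    accepts only the M strongest inputs it sees from below."""
    def __init__(self, in_dim, out_dim, n_broadcast, m_accept, rng):
        self.w = rng.standard_normal((out_dim, in_dim)) * 0.1  # placeholder weights
        self.n_broadcast = n_broadcast
        self.m_accept = m_accept

    def forward(self, x, feedback=None):
        x = top_k_mask(x, self.m_accept)        # attend to the M strongest inputs
        if feedback is not None:
            x = x + feedback                    # optional signal from layers above
        y = self.w @ x
        return top_k_mask(y, self.n_broadcast)  # broadcast only the N strongest outputs

# Example: two stacked layers, each passing on only its strongest activations.
rng = np.random.default_rng(0)
layers = [Layer(16, 8, n_broadcast=3, m_accept=5, rng=rng),
          Layer(8, 4, n_broadcast=2, m_accept=3, rng=rng)]
x = rng.standard_normal(16)
for layer in layers:
    x = layer.forward(x)
print(x)  # at most 2 nonzero entries reach the top layer
```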
Tuesday, October 1, 2024
"mind-only" Idealism
Why have I not used idealist models?* I think that my biggest problem with idealism is the following: one of the most fundamental distinctions we make is between "ourselves" (what we can immediately/directly control/influence) and "the external world" (that which we can't directly influence). "Inside" and "outside." "Me" and "not me." If idealism were true, couldn't we control everything we "see"?
Of course we could simply make idealism more complex. We could suppose there are other minds and that they control those things we can't. But when I postulate the existence of other minds there still seems to be an "outside" that NONE of us can directly control. (If rocks, for example, are other "minds," they don't seem to be like "me" at all. So could I consider them to BE "minds"?)
Or perhaps there are "laws of thought" that limit what can be influenced? If so, what are these laws? Is idealism simply poorly developed, i.e., in a pre-theoretic state of development? That part of our experience that we can influence would be related to consciousness, while things like rocks might be related to something like a subconscious.
I also find it hard to come up with idealist models and then to make practical use of them.
* See my blog of 1 April 2024.
teaching, attention, syllabus/curriculum
Teaching an AI agent from an idealized (noise- and distraction-free) syllabus is another way of imposing attention, as well as of speeding up learning and reducing its cost; a small sketch follows below.
Decomposing the AI into a society of specialists also helps with attention, as well as with managing complexity.
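As a rough illustration only, not code from this blog, the sketch below trains an agent on an idealized, noise-free syllabus before exposing it to raw, noisy experience. The teach() function and the train_step, clean_syllabus, and noisy_experience names are hypothetical placeholders.

```python
# Hypothetical sketch of a "clean syllabus first" curriculum.
# train_step(), clean_syllabus, and noisy_experience are placeholders.

def teach(agent, clean_syllabus, noisy_experience, train_step):
    # Phase 1: idealized, noise/distraction-free lessons focus the agent's
    # attention on the intended concepts and make learning faster and cheaper.
    for lesson in clean_syllabus:
        train_step(agent, lesson)
    # Phase 2: only afterward expose the agent to raw, noisy experience.
    for episode in noisy_experience:
        train_step(agent, episode)
```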