Nonlinear descriptions of reality may be one of the origins of what we think of as free will.
For some problems, like robot motion planning, it is appropriate to explore multiple alternative solutions (e.g., alternative routes). Suppose an AI has learned to model some activity using a quadratic function. For a given input condition it computes the roots (more than one) of this model quadratic. Even if the AI always picks a solution (root) in the same way (first found, smallest, random, etc.) it sees that another output (solution) would work too. It sees itself as free to use either solution to its problem.
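The situation can be sketched in a few lines of Python. This is only an illustration of the idea, not code from Asa H itself; the quadratic coefficients stand in for whatever model the AI has learned.

```python
import math

def quadratic_roots(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0, sorted."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # no real solution at all
    r = math.sqrt(disc)
    # a repeated root collapses to a single entry via the set
    roots = {(-b + r) / (2 * a), (-b - r) / (2 * a)}
    return sorted(roots)

# The model quadratic stands in for some learned activity model.
roots = quadratic_roots(1, -5, 6)      # x^2 - 5x + 6 = 0
print(roots)                           # [2.0, 3.0] -- either root would work
```

Whenever the discriminant is positive the AI is handed two workable outputs at once, which is the seed of the "I could have done otherwise" impression.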
The AI is going to store and reuse some of its problem solutions. As goals and external conditions change it may even begin using other roots, or choosing among the available solutions (roots) in some different way. A notion/concept of free will might develop from this.
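A minimal sketch of such a solution store, with a selection policy the agent can change as its goals change. The policy names and the dictionary-based store are my own illustrative assumptions, not Asa H's actual data structures.

```python
import random

# Hypothetical selection policies for choosing among stored solutions (roots).
POLICIES = {
    "first":    lambda sols: sols[0],
    "smallest": lambda sols: min(sols),
    "largest":  lambda sols: max(sols),
    "random":   lambda sols: random.choice(sols),
}

class SolutionStore:
    """Remembers the solutions found for each problem, picks one by policy."""
    def __init__(self, policy="first"):
        self.policy = policy
        self.solutions = {}            # problem id -> list of known solutions

    def remember(self, problem, sols):
        self.solutions[problem] = list(sols)

    def choose(self, problem):
        return POLICIES[self.policy](self.solutions[problem])

store = SolutionStore(policy="smallest")
store.remember("reach-goal", [2.0, 3.0])
print(store.choose("reach-goal"))      # 2.0

# As goals or external conditions change, the agent may switch policy:
store.policy = "largest"
print(store.choose("reach-goal"))      # 3.0
```

The same stored problem yields a different chosen action after the policy switch, which is one concrete way "choosing differently" could look to the agent.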
With a society of Asa H agents I sometimes use an executive, or router, to assign tasks (or send input) to one or more of the specialist agents. I am looking to see whether a concept like "free will" evolves in this executive.
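A toy version of such an executive might look like the following. The specialist names and the keyword-matching rule are invented for illustration; Asa H's actual routing mechanism is not specified here.

```python
class Specialist:
    """A specialist agent that claims tasks matching its keywords."""
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = set(keywords)

    def can_handle(self, task):
        return bool(self.keywords & set(task.split()))

class Executive:
    """Routes each incoming task to every specialist that claims it."""
    def __init__(self, specialists):
        self.specialists = specialists

    def route(self, task):
        return [s.name for s in self.specialists if s.can_handle(task)]

agents = [Specialist("vision", ["image", "scene"]),
          Specialist("planner", ["route", "plan"])]
executive = Executive(agents)
print(executive.route("plan a route home"))   # ['planner']
```

When a task matches more than one specialist, the executive faces the same multiple-workable-solutions situation as the root-choosing agent above, so it is a natural place to look for an emerging free-will-like concept.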