I have argued previously that science, like other forms of cognitive processing, cannot be value-free.
A reinforcement learner accepts, among its inputs, a stream of rewards. These rewards constitute some of what it values.
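As a minimal sketch of this point (a hypothetical two-armed bandit, not anything described in the text): the learner's preferences are determined entirely by the reward stream supplied to it from outside.

```python
def learn_values(rewards_per_arm, pulls=100, lr=0.1):
    """Incremental value estimates for each arm under a fixed reward stream."""
    values = [0.0] * len(rewards_per_arm)
    for t in range(pulls):
        arm = t % len(values)      # pull each arm in turn
        r = rewards_per_arm[arm]   # reward supplied by the environment/designer
        values[arm] += lr * (r - values[arm])
    return values

# A reward stream that pays 1 for arm 1 and 0 for arm 0.
v = learn_values([0.0, 1.0])
# The learner comes to "value" arm 1 simply because that is what was rewarded.
```

Change the reward stream and the learner's values change with it; nothing internal to the learner defines what counts as good.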
A learning system such as a backpropagation neural network may have no reward stream, but if it learns from sets of inputs and outputs, one of the things it will slowly absorb is the set of preferences (values) inherent in the training data supplied to it.
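A toy illustration (a hypothetical perceptron rather than a full backpropagation network, and data invented for the example): whatever preference the labeller encoded in the outputs ends up encoded in the learned weights.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron rule on 2-feature inputs; weights start at zero."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Labels encode the labeller's preference: feature 0 is what matters.
data = [((0.9, 0.1), 1), ((0.8, 0.9), 1), ((0.2, 0.9), 0), ((0.1, 0.2), 0)]
w, b = train_perceptron(data)
# The learned weights mirror that preference: w[0] ends up larger than w[1].
```

The network never saw a reward, yet it now embodies the values of whoever assembled the training set.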
A learning system trained on inputs (observations) alone, with no output actions, will value things like the amount learned, the speed of learning, the precision of recall, and so on. These values are built into the learner in the form of various thresholds, learning-rate parameters, vigilance parameters, similarity measures, and the like.
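A sketch of how such a built-in parameter acts as a value (a hypothetical online clusterer with a vigilance threshold, loosely inspired by adaptive-resonance-style learners; the numbers are invented): the same observations yield different learned concepts depending on a threshold the designer chose.

```python
def cluster(stream, vigilance):
    """Assign each point to the nearest prototype, or create a new
    prototype when the nearest one is farther away than `vigilance`."""
    prototypes = []
    for x in stream:
        if prototypes:
            i = min(range(len(prototypes)), key=lambda j: abs(prototypes[j] - x))
            if abs(prototypes[i] - x) <= vigilance:
                # Move the matching prototype toward the point.
                prototypes[i] += 0.5 * (x - prototypes[i])
                continue
        prototypes.append(x)
    return prototypes

stream = [0.0, 0.1, 1.0, 1.1, 5.0]
loose = cluster(stream, vigilance=2.0)   # tolerant: few, broad categories
strict = cluster(stream, vigilance=0.2)  # vigilant: many, narrow categories
```

Neither carving of the world is forced by the data; the vigilance setting is a value judgment about how finely experience should be divided, made before any observation arrives.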
Science will be no more value-free than any other cognitive process going on in such a machine or human.