During neural network training, the influence of rarely occurring training examples* can be diluted relative to more frequently occurring ones: their contribution gets averaged out. When such uncommon examples are important*, one can (and often should) artificially increase the number of times the ANN/LLM sees them during training. In other words, frequency of training exposure should be weighted by example importance/value. I've employed this technique since my earliest work with ANNs.
* A key example would be rare but dangerous scenarios in Level 5 vehicle autonomy.
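One simple way to realize this is importance-weighted oversampling: draw each epoch's training indices with probability proportional to an importance weight assigned to each example, so rare-but-critical cases appear far more often than their raw frequency would allow. The sketch below is a minimal illustration of that idea (the function name, toy weights, and epoch size are all made up for the example), not a reconstruction of the author's original implementation.

```python
import numpy as np

def build_epoch_indices(importance, epoch_size, seed=None):
    """Sample training-example indices with probability proportional
    to each example's importance weight, so rare-but-important examples
    are seen more often than uniform sampling would allow."""
    rng = np.random.default_rng(seed)
    importance = np.asarray(importance, dtype=float)
    probs = importance / importance.sum()
    return rng.choice(len(importance), size=epoch_size, p=probs)

# Hypothetical toy dataset: 99 routine examples (weight 1 each)
# and 1 rare dangerous scenario (weight 50).
weights = [1.0] * 99 + [50.0]
epoch = build_epoch_indices(weights, epoch_size=10_000, seed=0)

# Under uniform sampling the rare example would appear ~100 times;
# with weighting it is drawn with probability 50/149, i.e. ~3300 times.
rare_hits = int((epoch == 99).sum())
```

Equivalent machinery exists off the shelf, e.g. PyTorch's `WeightedRandomSampler`, which takes per-example weights and feeds a `DataLoader` in exactly this fashion.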