A Multi-Method Fusion Interpretation Method for Feature Importance Based on Evidence Fusion

The present invention discloses a multi-method fusion interpretation approach for feature importance based on evidence fusion, belonging to the field of machine learning. The method comprises the following steps: first, the importance scores produced by multiple explanation methods are transformed into normalized ranking scores and mapped to propensity probabilities over a binary proposition space; second, a reliability weight is computed for each method by evaluating its stability and fidelity, and the initial evidence is discounted accordingly; subsequently, the multi-source evidence is fused using Dempster–Shafer (D–S) evidence theory, with the combination rule selected adaptively according to the degree of conflict; finally, a fused importance score, an uncertainty interval, and an explanation conflict degree are output for each feature, thereby achieving reliable fusion of interpretability results together with uncertainty quantification.
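The pipeline described above can be sketched in code. The following is a minimal illustration, not the patented implementation: the binary frame {Important, Unimportant}, the linear rank-to-propensity mapping, the Shafer-style reliability discounting, and the conflict threshold with an averaging fallback are all assumptions chosen for concreteness, since the abstract does not specify exact formulas.

```python
import numpy as np

def ranks_to_propensity(scores):
    """Map raw importance scores to rank-based propensity probabilities in [0, 1].

    The highest-magnitude score receives propensity 1, the lowest 0
    (a simple linear rank normalization, assumed for illustration).
    """
    rank = np.argsort(np.argsort(-np.abs(scores)))  # 0 = most important
    return 1.0 - rank / (len(scores) - 1)

def discounted_mass(p, reliability):
    """Build a discounted mass function on the frame {I, U, Theta}.

    Shafer discounting: a fraction (1 - reliability) of the mass is moved
    onto the whole frame Theta, representing the method's unreliability.
    """
    return np.array([reliability * p,          # m({Important})
                     reliability * (1.0 - p),  # m({Unimportant})
                     1.0 - reliability])       # m(Theta), i.e. uncertainty

def dempster(m1, m2):
    """Dempster's rule on the binary frame; returns the fused mass and conflict K."""
    k = m1[0] * m2[1] + m1[1] * m2[0]  # mass assigned to the empty set
    m_i = m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]
    m_u = m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]
    m_t = m1[2] * m2[2]
    return np.array([m_i, m_u, m_t]) / (1.0 - k), k

def fuse_feature(masses, conflict_threshold=0.8):
    """Fuse one feature's evidence from all methods, adapting to conflict.

    If the worst pairwise conflict exceeds the threshold, fall back to
    simple mass averaging (one possible conflict-adaptive choice; the
    abstract does not name the alternative rule). Returns the belief and
    plausibility of {Important} (the uncertainty interval) and the conflict.
    """
    fused, worst_k = masses[0], 0.0
    for m in masses[1:]:
        fused, k = dempster(fused, m)
        worst_k = max(worst_k, k)
    if worst_k > conflict_threshold:
        fused = np.mean(masses, axis=0)
    belief, plausibility = fused[0], fused[0] + fused[2]
    return belief, plausibility, worst_k
```

For example, fusing two explainers that both rank a feature first, with reliabilities 0.9 and 0.7, yields a narrow high-belief interval, while contradictory rankings raise the reported conflict and trigger the averaging fallback.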

By integrating evidence from multiple explanation methods and introducing reliability assessment together with a conflict-adaptive combination mechanism, the invention provides more consistent and robust feature importance interpretations, accompanied by quantifiable uncertainty intervals and diagnostic information on explanatory conflicts.