ADvisor: An Open-Source Tool for Applicability Domain Definition and Optimization in Molecular Predictive Modeling
Piazza, Lisa; Poles, Clarissa; Bononi, Giulia; Granchi, Carlotta; Di Stefano, Miriana; Poli, Giulio; Macchia, Marco; Tuccinardi, Tiziano
2025-01-01
Abstract
The Applicability Domain (AD) is a critical element in ensuring the reliability of in silico models for chemical safety assessment, as it defines the chemical space within which model predictions can be considered valid. Despite its relevance, AD is often implemented using generic strategies that fail to account for the specific characteristics of the model and data set, limiting both scientific robustness and regulatory confidence. To address this limitation, we conducted a systematic benchmark of established AD methods across regression models trained on OECD-compliant data sets; these models showed robust predictive ability, outperforming established QSAR approaches such as read-across. A regulatory-accepted AD approach showed clear limitations: its rigid structure, reliant on fixed parameters and proprietary libraries, restricted its adaptability and transparency. We therefore reimplemented and optimized it using open-source tools to improve flexibility, reproducibility, and predictive accuracy. Most notably, our method proved to be the preferred choice for the largest number of end points. Nonetheless, our analysis showed that no single strategy performs best across all scenarios. Driven by these findings and the recognized need for model-adaptive AD assessment, we developed ADvisor, a modular and broadly applicable tool that allows users to evaluate, compare, and implement AD strategies suited to the characteristics of their specific models and data sets, thereby promoting transparency and regulatory compliance in in silico applications.


