Practically significant method comparison protocols for machine learning in small molecule drug discovery.
Comments
First, thanks for putting this together. Long overdue, and I am excited to move away from the dreaded bold table. I'm new to rigorous stats-world, so please forgive me if the below is totally off-base.

My question relates to the suggestion that repeated random sampling is undesirable. I prefer this method since (I believe) it rigorously permits parametric testing for comparisons, and because it allows using more advanced splitting methods (fingerprint-based clustering and partitioning, for example) without having to worry about rigorously 'striping' through the data.

From section 3.1.2 (v2): "Commonly used alternatives to CV like bootstrapping and repeated random splits of the data have also been shown to result in strong dependency between samples and are generally not recommended [13]." Reference 13 is: Bates, S., Hastie, T. & Tibshirani, R. Cross-validation: What does it estimate and how well does it do it? Journal of the American Statistical Association 119, 1434–1445 (2023). URL: http://dx.doi.org/10.1080/01621459.2023.2197686

(1) Where in this paper is this claim made?
(2) I find it unintuitive that repeated random splits would result in strong dependency, especially given that the suggested Repeated CV is very similar. Repeated random sampling is basically just Repeated CV (5x2) but without the x2 part, isn't it?
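To make the comparison concrete, here is a minimal sketch of the intuition, assuming scikit-learn; the choice of splitters and the Jaccard overlap of training sets are my own illustrative proxy for the dependence at issue, not anything taken from the paper. Repeated 80/20 random splits reuse most of the training data in every split, whereas 2-fold CV forces the two training sets within each repeat to be disjoint.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit, RepeatedKFold

X = np.arange(100).reshape(-1, 1)  # toy dataset of 100 samples

# Repeated random splits: 10 independent 80/20 splits.
ss = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
# Repeated CV: 5x2, i.e. 5 repetitions of 2-fold CV (also 10 splits).
rkf = RepeatedKFold(n_splits=2, n_repeats=5, random_state=0)

def mean_pairwise_train_overlap(splitter, X):
    """Average Jaccard overlap between the training sets of all split pairs."""
    trains = [set(tr) for tr, _ in splitter.split(X)]
    overlaps = [
        len(a & b) / len(a | b)
        for i, a in enumerate(trains)
        for b in trains[i + 1:]
    ]
    return np.mean(overlaps)

print("ShuffleSplit train-set overlap: ", mean_pairwise_train_overlap(ss, X))
print("RepeatedKFold train-set overlap:", mean_pairwise_train_overlap(rkf, X))
```

In this toy run the ShuffleSplit training sets overlap roughly twice as much on average as the RepeatedKFold ones (about 0.67 vs 0.30 Jaccard), so the repeated-random-split estimates would share more data and be more correlated. Whether that gap is what Bates et al. mean by "strong dependency" is exactly my question (1).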
Hello, I would be interested to know when this gets published somewhere.
Response
Cas Wognum: Nov 10, 2024, 15:59
Hey Francois, thanks for reaching out! We're sharing this work as a preprint to seek feedback from the community on the proposed guidelines. Afterwards, we intend to submit the paper for publication in a peer-reviewed journal early next year. The best way to share your feedback is as a GitHub Discussion here: https://github.com/polaris-hub/polaris-method-comparison/discussions. I hope that helps!