AlzPED Analytics

Full AlzPED summary reports and presentations can be downloaded here.

AlzPED ANALYTICS SHOWCASE - EXPERIMENTAL DESIGN

AlzPED identifies 24 experimental design elements that should be included in any preclinical efficacy study to improve its rigor, reproducibility and translational value (Figure 1). Comprehensive analysis of the 1030 published preclinical efficacy studies compiled in AlzPED demonstrates considerable variation in the frequency with which these elements of experimental design are included and reported. For example, design elements such as the dose and formulation of the therapeutic agent under examination and the treatment paradigm are reported consistently (at 95% or greater), whereas other critical design elements such as power calculation, blinding for treatment allocation and outcome measures, randomization, and balancing a study for sex as a biological variable are reported far less frequently (less than 35%).

[Figure 1]
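Reporting frequencies of this kind reduce to a simple per-element tally. As a minimal sketch, assuming the curated studies were exported as a table with one yes/no column per design element (the column names and values below are illustrative placeholders, not AlzPED's actual export schema):

```python
# Minimal sketch: each row is one curated study; each column records
# whether that study reported a given design element (True/False).
# Column names and values are illustrative, not AlzPED's actual schema.
import pandas as pd

studies = pd.DataFrame({
    "dose":              [True,  True,  True,  True],
    "formulation":       [True,  True,  True,  False],
    "power_calculation": [False, False, True,  False],
    "randomization":     [False, True,  False, False],
    "blinding_outcomes": [False, False, True,  False],
})

# Fraction of studies reporting each element, expressed as a percentage.
reporting_rate = studies.mean().mul(100).sort_values(ascending=False)
print(reporting_rate.round(1))
```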

AlzPED further defines 9 core experimental design elements, derived from Shineman et al., 2011, Landis et al., 2012, Snyder et al., 2016 and the ARRIVE guidelines, that are critical for ensuring the scientific rigor and reproducibility of a preclinical efficacy study (Figure 2). These are power/sample size calculation, randomization, blinding for treatment allocation, blinding for outcome measures, sex as a biological variable, balancing a study for sex as a biological variable, animal genetic background, a financial conflict of interest statement, and inclusion and exclusion criteria. Of these 9 core design elements, sex as a biological variable, the genetic background of the animals used in the study, and a financial conflict of interest statement from authors are well reported, although this reflects changes in data reporting and publication policies and requirements recommended by federal funding agencies, private foundations and peer-reviewed scientific journals. The remaining 6 core design elements, namely power/sample size calculation, randomization, blinding for treatment allocation, blinding for outcome measures, balancing a study for sex as a biological variable, and inclusion and exclusion criteria, are very poorly reported.

[Figure 2]

Further evaluation of reporting trends for the 9 core experimental design elements demonstrates that few studies report more than 5 core design elements, with most studies reporting only 2 to 4. Of the 1030 curated preclinical studies in the database, 5% report none of the core design elements, 14% report just 1 core design element, 25% report 2, 26% report 3, 18% report 4, 8% report 5, 3% report 6, 2% report 7, and 0.2% report all 9 core design elements (Figure 3).

[Figure 3]
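The Figure 3 distribution can be reproduced by counting, for each study, how many of the 9 core elements it reports and then tabulating the counts. A minimal sketch, again assuming a hypothetical boolean column per core element:

```python
# Sketch of the Figure 3 tally. Column names are illustrative placeholders.
import pandas as pd

CORE_ELEMENTS = [
    "power_calculation", "randomization", "blinding_treatment",
    "blinding_outcomes", "sex_as_variable", "balanced_for_sex",
    "genetic_background", "conflict_of_interest", "inclusion_exclusion",
]

# Three made-up studies reporting 0, 3 and 5 core elements respectively.
studies = pd.DataFrame(
    [[False] * 9,
     [True] * 3 + [False] * 6,
     [True] * 5 + [False] * 4],
    columns=CORE_ELEMENTS,
)

# Number of core elements reported per study, then the percentage of
# studies at each count (0 through 9).
counts = studies[CORE_ELEMENTS].sum(axis=1)
distribution = counts.value_counts(normalize=True).mul(100).sort_index()
print(distribution.round(1))
```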

NIH-issued policies to enhance the rigor, reproducibility and translatability of its supported research place significant emphasis on rigorous scientific method and study design and on the consideration of biological variables such as sex. Analysis of the study design and methodology of NIH-funded published research curated in AlzPED demonstrates a positive impact of these policies. NIH-funded studies report sex as a biological variable, balancing a study for sex as a biological variable, blinding for outcome measures, randomization, and conflict of interest at a significantly higher frequency than studies supported by non-NIH funding sources (Figure 4). Non-NIH funded studies include those funded by nonprofit organizations, pharmaceutical companies, the European Union, and the Chinese, Japanese and Korean governments.

[Figure 4]
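A comparison like the one in Figure 4 amounts to testing whether the proportion of studies reporting a given element differs between funding sources. The source does not name the test used; a chi-square test on a 2x2 contingency table is one reasonable choice. A minimal sketch with made-up counts, not AlzPED's actual numbers:

```python
# Sketch: does the reporting rate of one design element differ between
# NIH-funded and non-NIH-funded studies? Counts are made-up placeholders.
from scipy.stats import chi2_contingency

#        reported  not reported
table = [
    [220, 180],  # NIH-funded studies
    [150, 480],  # non-NIH-funded studies
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4g}")
```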

The lack of rigor and reproducibility of research findings in scientific publishing is well documented. A systematic review of the rigor and translatability of highly cited animal studies published in leading scientific journals (including Science, Nature, Cell, and others) demonstrates a lack of scientific rigor in study design. A comprehensive review of reporting trends for critical experimental design elements in highly cited studies curated in AlzPED and published in leading journals revealed a similar pattern of poor study design and reporting practices. These analyses are described in greater detail below. Reporting trends for the 9 core and all 24 experimental design elements were evaluated based on the impact factor of the journal in which each curated preclinical study was published (Figure 5). Studies were categorized into 4 groups based on 2019 journal impact factor values: studies published in journals with impact factors below 3 were sorted into Group 1, those published in journals with impact factors between 3 and 4.99 or between 5 and 9.99 were sorted into Groups 2 and 3 respectively, and studies published in high impact journals with impact factors of 10 or greater were sorted into Group 4. While t-tests show statistically significant differences between these four groups in the reporting of both the 9 core elements and all 24 elements of experimental design, overall the data demonstrate poor reporting practices irrespective of journal impact factor.

[Figure 5]
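The grouping scheme described above translates directly into binning on impact factor. A minimal sketch with illustrative data, pairing the bins with an unpaired t-test between two of the groups as in the comparison described above:

```python
# Sketch of the Figure 5 grouping: bin studies by 2019 journal impact
# factor, then compare reported-element counts between groups.
# All values are illustrative placeholders.
import pandas as pd
from scipy.stats import ttest_ind

studies = pd.DataFrame({
    "impact_factor":     [2.1, 4.5, 8.2, 31.0, 1.8, 6.7, 12.4, 3.3],
    "elements_reported": [10,  12,  14,  15,   9,   13,  16,   11],
})

# Group 1: IF < 3; Group 2: 3-4.99; Group 3: 5-9.99; Group 4: IF >= 10.
bins = [0, 3, 5, 10, float("inf")]
studies["group"] = pd.cut(studies["impact_factor"], bins=bins,
                          labels=["1", "2", "3", "4"], right=False)

group1 = studies.loc[studies["group"] == "1", "elements_reported"]
group4 = studies.loc[studies["group"] == "4", "elements_reported"]
print(ttest_ind(group1, group4))
```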

Reporting trends for the 9 core and all 24 experimental design elements were also evaluated based on the relative number of citations per year of each curated study published between 1996 and 2019 (Figure 6). The relative number of citations for each curated study was calculated by dividing the study's total number of citations by the number of years since publication; for example, the total number of citations for a study published in 2018 was divided by 2, for a study published in 2017 by 3, and so on. Studies were categorized into 3 groups: curated studies with fewer than 3 citations per year were sorted into Group 1, and those with 3 to 7 citations per year or more than 7 citations per year were sorted into Groups 2 and 3 respectively. While t-tests show statistically significant differences between these three groups in the reporting of both the 9 core elements and all 24 elements of experimental design, overall the data demonstrate poor reporting practices irrespective of the relative number of citations per year.

[Figure 6]
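The citation metric itself is simple arithmetic: total citations divided by years since publication, with 2020 as the implied analysis year (a study published in 2018 is divided by 2). A minimal sketch with made-up values:

```python
# Sketch of the Figure 6 metric and grouping. Citation counts are made up;
# the analysis year of 2020 is inferred from the "published 2018, divide
# by 2" example in the text.
import pandas as pd

ANALYSIS_YEAR = 2020

studies = pd.DataFrame({
    "year_published":  [2018, 2017, 2005, 1999],
    "total_citations": [12,   45,   300,  14],
})

studies["citations_per_year"] = (
    studies["total_citations"] / (ANALYSIS_YEAR - studies["year_published"])
)

# Group 1: < 3 citations/year; Group 2: 3-7; Group 3: > 7.
bins = [0, 3, 7, float("inf")]
studies["group"] = pd.cut(studies["citations_per_year"], bins=bins,
                          labels=["1", "2", "3"])
print(studies)
```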

Therefore, it is clear that there are serious deficiencies in reporting critical elements of methodology such as power/sample size calculation, blinding for treatment allocation and outcome measures, and randomization, even in high impact factor journals and in highly cited published preclinical research. Consequently, the scientific rigor, reproducibility and translational value of preclinical studies are diminished. In light of these results, it is evident that a standardized set of best practices is required for the successful translation of therapeutic efficacy in AD research. To this end, several meetings and workshops have been held between the NIH and journal publishers to discuss the reproducibility and rigor of research findings and to identify common opportunities to enhance rigor and support research that is reproducible and transparent. It is imperative that the NIH, other federal funding agencies, private foundations and scientific journal publishers continue to collaborate on this issue and enforce a standardized set of best practices.

View the "AlzPED Analytics Summary 2020" downloadable PDF above for more detailed analytics.