Search Results

  • Article
    Yao T, Sweeney E, Nagorski J, Shulman JM, Allen GI.
    PLoS One. 2020;15(11):e0241707.
    Even though there is a clear link between Alzheimer's Disease (AD) related neuropathology and cognitive decline, numerous studies have observed that healthy cognition can exist in the presence of extensive AD pathology, a phenomenon sometimes called Cognitive Resilience (CR). To better understand and study CR, we develop the Alzheimer's Disease Cognitive Resilience Score (AD-CR Score), which we define as the difference between the observed and expected cognition given the observed level of AD pathology. Unlike other definitions of CR, our AD-CR Score is a fully non-parametric, stand-alone, individual-level quantification of CR that is derived independently of other factors or proxy variables. Using data from two ongoing, longitudinal cohort studies of aging, the Religious Orders Study (ROS) and the Rush Memory and Aging Project (MAP), we validate our AD-CR Score by showing strong associations with known factors related to CR such as baseline and longitudinal cognition, non AD-related pathology, education, personality, APOE, parkinsonism, depression, and life activities. Even though the proposed AD-CR Score cannot be directly calculated during an individual's lifetime because it uses postmortem pathology, we also develop a machine learning framework that achieves promising results in terms of predicting whether an individual will have an extremely high or low AD-CR Score using only measures available during the lifetime. Given this, our AD-CR Score can be used for further investigations into mechanisms of CR, and potentially for subject stratification prior to clinical trials of personalized therapies.
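The AD-CR Score described above is a nonparametric residual: observed cognition minus the expected cognition given the observed pathology. A minimal sketch of that idea, using a Nadaraya-Watson kernel smoother on synthetic data (the function name, bandwidth, and data are illustrative assumptions, not the authors' actual estimator or the ROS/MAP data):

```python
import numpy as np

def ad_cr_scores(pathology, cognition, bandwidth=0.5):
    """Illustrative resilience score: observed minus expected cognition,
    where 'expected' is a kernel-weighted average over similar pathology."""
    pathology = np.asarray(pathology, dtype=float)
    cognition = np.asarray(cognition, dtype=float)
    # Gaussian kernel weights between every pair of pathology values
    diffs = (pathology[:, None] - pathology[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs**2)
    # Expected cognition at each pathology level (Nadaraya-Watson estimate)
    expected = weights @ cognition / weights.sum(axis=1)
    # Positive residual = better cognition than pathology predicts
    return cognition - expected

# Synthetic example: cognition declines with pathology, plus noise
rng = np.random.default_rng(0)
path = rng.uniform(0, 3, 200)
cog = -0.8 * path + rng.normal(0, 0.3, 200)
scores = ad_cr_scores(path, cog)
```

In this sketch a large positive score plays the role of high cognitive resilience; the published score is derived differently but shares the same observed-minus-expected structure.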
  • Article
    Pernot P, Savin A.
    J Chem Phys. 2018 Jun 28;148(24):241707.
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users about the expected amplitude of prediction errors attached to these methods. Because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful in assessing the statistical reliability of benchmarking conclusions.
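The two ECDF-based statistics this abstract advocates are straightforward to compute from a sample of benchmark errors. A minimal sketch on synthetic, non-zero-centered errors (the function names and error distribution are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def prob_below(errors, threshold):
    """Statistic (1): probability that a new calculation has an
    absolute error below the chosen threshold (ECDF of |error|)."""
    return np.mean(np.abs(errors) < threshold)

def max_error_at(errors, level=0.95):
    """Statistic (2): maximal error amplitude expected at the chosen
    confidence level, i.e. the `level`-quantile of unsigned errors."""
    return np.quantile(np.abs(errors), level)

# Synthetic error sample that is neither normal-centered at zero
# nor symmetric about it, mimicking the situation the paper describes
rng = np.random.default_rng(1)
errors = rng.normal(0.5, 1.0, 1000)

p = prob_below(errors, 1.0)       # chance a new result errs by < 1.0
q95 = max_error_at(errors, 0.95)  # 95% of errors fall below this amplitude
```

Note how `q95` answers the end-user question ("how wrong could a new prediction be?") that a mean unsigned error, computed on a skewed distribution, does not.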