22 March 2025

Don't Be Fooled by AI and Humans

Why Critically Evaluate:

  • Bias in Data and Algorithms
    • Biased training data leads to biased models and algorithms (see the data-bias sketch after this list)
  • Black Box Problem
    • Opaque internal workings make it difficult to understand why a model produces a given output, reducing trust and accountability
  • Overfitting and Lack of Generalization
    • Overfitting to the training data limits how well a model generalizes to new data (see the overfitting sketch after this list)
  • Publication Bias
    • Methods get overestimated because papers tend to report overly positive results
  • Speed of the Field
    • Research papers are not vetted thoroughly enough to keep pace with the field
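
To make the data-bias point concrete, here is a minimal sketch in Python (assuming scikit-learn; the 95/5 class imbalance, the synthetic dataset, and the model choice are illustrative assumptions, not part of these notes). A model trained on skewed data can report strong overall accuracy while largely failing on the under-represented class.

    # Minimal sketch: a skewed training set can yield a skewed model.
    # Assumes scikit-learn; the 95/5 class imbalance is an illustrative choice.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    # Synthetic data where one class is heavily under-represented.
    X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)
    pred = model.predict(X_test)

    # Overall accuracy can look strong while recall on the rare class lags far
    # behind: the model has absorbed the skew of its training data.
    print("overall accuracy:     ", model.score(X_test, y_test))
    print("minority-class recall:", recall_score(y_test, pred))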
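
Likewise for overfitting, a minimal sketch (again assuming scikit-learn; the noisy dataset and unconstrained decision tree are simply an easy way to provoke the effect) that compares training accuracy against held-out accuracy. A large gap between the two is the warning sign that a reported result may not generalize.

    # Minimal sketch: a large train/test gap is the signature of overfitting.
    # Assumes scikit-learn; the dataset and unconstrained tree are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # A small, noisy dataset makes overfitting easy to provoke.
    X, y = make_classification(n_samples=300, flip_y=0.2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # An unconstrained tree can memorize the training set, noise included.
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    train_acc = model.score(X_train, y_train)  # close to 1.0
    test_acc = model.score(X_test, y_test)     # noticeably lower on noisy data
    print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
    print(f"generalization gap: {train_acc - test_acc:.2f}")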

How to Critically Evaluate:
  • Check Authors and Affiliations
    • Assess the authors' reputation and expertise
  • Examine Data and Methodology
    • Evaluate the quality of the data and the rigor of the experimental methodology
  • Look for Reproducibility
    • Can the results be reproduced from the released code and data? (see the sketch after this list)
  • Consider Limitations
    • Do the authors critically evaluate their own results and limitations? Are the results sound and sensible?
  • Seek Peer Review
    • Look for reputable peer-reviewed sources, though even peer review is not a guarantee
  • Cross-Reference and Compare
    • Compare findings with related research and look for consensus or conflicting results
  • Be Aware of Funding Sources
    • Who funded this research? Is there a conflict of interest?
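
As a first, concrete reproducibility check (a minimal sketch; the libraries and seed value are illustrative assumptions), pinning the random seeds is what makes it possible to get the same numbers twice from released code:

    # Minimal sketch: pin the random seeds so a run can be reproduced exactly.
    # The seed value and libraries here are illustrative assumptions.
    import random

    import numpy as np

    SEED = 42
    random.seed(SEED)
    np.random.seed(SEED)

    # Any computation that draws only from these generators now produces
    # identical numbers on every run, the baseline for a reproducible result.
    print(np.random.normal(size=3))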