Our team conducts innovative studies on the ethical and societal consequences of research projects, in particular AI research.

We don’t just help researchers reflect on the broader consequences of their grant proposals. We also produce our own studies on common ethical and societal risks, reasonable mitigation strategies, and more.

We use the ethics statements and panelists’ feedback created during our ethical reflection process to evaluate its procedural and substantive effectiveness. We are committed to using evidence from each grant cycle to improve our ethics reflection process, update our resources for researchers, and advocate for the incorporation of ethical reflection into academic research.

Common Harms Associated with AI Research

Through our research, we have compiled this list of common potential harms and mitigation strategies for AI. Researchers can use these to identify different ways to address issues throughout the research and development process, along with questions to consider when employing those strategies.

Bias and Exacerbating Inequities
Bias refers to insufficient or unequal representation of data, participants, or the intended user population. When groups are inappropriately represented in a project, the project's design, implementation, and/or foreseeable use could create new inequities or exacerbate existing ones for those populations.
Exacerbating inequities refers to unjust differences between populations in the access to, use of, quality of, and outcomes of public goods and services.


Questions to consider:

  • Whose interests are represented in this research? Whose are excluded?

  • Who could benefit from the success of this project? Who could be harmed by the project’s success?

  • How will researchers measure or label social and demographic information in their data/model? Do these methods reify any structural stereotypes?

Erosion of privacy
When individuals’ general expectations of privacy or control over personally identifiable information are not met. Concerns arise, for example, when those who generated publicly available data are unaware of the researchers’ intended use.

Questions to consider:

  • How pervasive are user data collection techniques?

  • How will researchers handle incidental user data collected in their work?

  • How are users notified and given control over the collection and use of their information?


Harms to institutions
When project design, implementation, and/or foreseeable use could create new or contribute to existing strains on social, educational, and healthcare institutions.

Questions to consider:

  • If the tool is implemented into established workflows, how could this affect workers?

  • How could users come to over-rely on the tool?

  • What would the consequences of automation bias be?



Motivated misuse
A project’s outcomes, products, or translation into policy or practice could be foreseeably misused by others for harmful purposes.

Questions to consider:

  • How could the data the researchers collect be misused by bad actors?

  • What unintended activities could the researchers’ work enable?

  • See our paper on mitigating the misuse of AI in biomedical research for more resources.


User Error
Potential harms that arise due to users’ unintentional misuse or operation of the tool.

Questions to consider:

  • How could users misapply this research?

  • How could the tools, products, or outcomes of this research be misinterpreted or misunderstood?


Tool Malfunction or Model Error
Potential harms that arise due to tool malfunction or error.

Questions to consider:

  • What could go wrong if the model or tool malfunctions during deployment?

This list is intended to be neither exhaustive nor prescriptive, but rather a starting point for researchers.

Common Mitigation Strategies Used in AI Research

In addition to common harms, we have also compiled this list of frequently used mitigation strategies for AI research. Researchers can use the information below to inspire different mitigation strategies for addressing harms throughout the research and development process.

Seek guidance from stakeholder communities

  • Will the researchers engage their user community or other applicable stakeholders in the design of their research?

  • What strategies will the research team use to engage these interest-holders?

  • How will the research team incorporate interest-holder feedback into their work?

Consult an ethicist

  • Does the project involve novel issues that make it difficult to foresee downstream consequences? If so, how could the research team incorporate an ethicist’s expertise in their project development to help them identify and address these issues?

Bring other experts onto the research project

  • What types of domain expertise would the research team like to have in the design of their project?

  • Could the research team benefit from expert guidance on any specific ethical issues?

Add independent audit, evaluation, and human oversight protocols

  • How will the research team evaluate bias in the data input and model output?

  • How will they account for distribution shifts between training, testing, and implementation?

  • How will the researchers assess potential vulnerabilities of the model or tool?
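Questions like the first one above can be made operational with simple checks. As an illustrative sketch only (the binary predictions, group labels, and choice of a demographic-parity metric are assumptions for this example, not part of our process), a team might compare positive-prediction rates across demographic groups:

```python
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of positive (1) predictions for each group label."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: group "a" receives positive predictions far more often than "b".
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

A large gap does not by itself prove unfairness, but it flags where the team should look more closely, and the same audit can be re-run after deployment to catch drift.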

Create data security protocols

  • How will the researchers ensure data privacy during tool development and after deployment?

  • What methods can the team use to decrease the sensitivity of the collected data?
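One common technique for decreasing sensitivity, sketched here purely as an illustration (the field names and salting scheme are assumptions, not a prescribed protocol), is to replace direct identifiers with keyed one-way hashes before analysis:

```python
import hashlib
import hmac

# A project-specific secret, stored separately from the data
# (a hypothetical value for illustration only).
SALT = b"project-secret-salt"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed one-way hash (HMAC-SHA256).

    The same input always maps to the same pseudonym, so records can
    still be linked, but the original identifier cannot be recovered
    without the secret salt.
    """
    return hmac.new(SALT, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "participant@example.org", "response": 4}
safe_record = {
    "participant_id": pseudonymize(record["email"]),
    "response": record["response"],
}
# The raw email never needs to leave the intake step; downstream
# analysis joins on the stable pseudonym instead.
```

This reduces, but does not eliminate, re-identification risk; teams should still consider access controls and whether quasi-identifiers remain in the data.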

Controlled data/model release

  • Will the researchers control access to their data or model? If so, what parameters will they place around access?

Supplement dataset

  • What strategies can the research team use to improve the representativeness of their data as compared to their population of interest?
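To make "representativeness" measurable, a team could compare each group's share of the sample against its known share of the population of interest. A minimal sketch (the group names and census-style population figures below are invented for illustration):

```python
def representation_gaps(sample_counts, population_shares):
    """Difference between each group's share of the sample and its share
    of the population of interest (positive = over-represented)."""
    total = sum(sample_counts.values())
    return {g: sample_counts.get(g, 0) / total - population_shares[g]
            for g in population_shares}

# Hypothetical numbers for illustration only.
sample = {"group_a": 80, "group_b": 15, "group_c": 5}
population = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}
print(representation_gaps(sample, population))
# group_a is over-represented by ~0.20; group_b and group_c are under-represented
```

Gaps like these can then guide targeted data collection or reweighting before the model is trained.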

Public advocacy

  • How can researchers use their positions at their university and standing within their fields to advocate for addressing this issue in research?

User education and training

  • How will researchers educate and train potential users or explain their work to interest-holders?

  • Will the researchers explicitly disavow certain uses of their tool or research?

  • What type of expertise will users require to appropriately use the tool or research?

  • How can the research team convey that information to their audience?

Transparency and explainability methods

  • What information will the research team disclose about their tool, methods, or research in publications and other releases?

  • How can the research team leverage transparency and explainability methods to improve users’ understanding and application of their work?

This list is intended to be neither exhaustive nor prescriptive, but rather a starting point for researchers.

Journal Articles

Towards a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research

The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increase in inequity and abuse of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures researchers can adapt to their own work, next to off-the-shelf AI solutions, then to design-specific solutions researchers can build into their AI to mitigate misuse. When…

Ethics and society review: Ethics reflection as a precondition to research funding

Researchers in areas as diverse as computer science and political science must increasingly navigate the possible risks of their research to society. However, the history of medical experiments on vulnerable individuals influenced many research ethics reviews to focus exclusively on risks to human subjects rather than risks to human society. We describe an Ethics and Society Review board (ESR), which fills this moral gap by facilitating ethical and societal reflection as a requirement to access…

Response to National Telecommunications and Information Administration’s Request for Information on Ethical Guidelines for Research Using Pervasive Data

We, a group of scholars affiliated with Stanford’s Ethics & Society Review (ESR), offer the following submission in response to the National Telecommunications and Information Administration’s Request for Information on Ethical Guidelines for Research Using Pervasive Data.

Response to NSF’s Request for Information on Research Ethics

We, a group of scholars affiliated with Stanford’s Ethics and Society Review (ESR) and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), offer the following submission in response to your Request for Information on the CHIPS and Science Act Section 10343.

Response to Request for Information on Design and Development of ARPA-H Ethical, Legal, and Social Implications (ELSI) Initiative

The ESR’s Quinn Waeiss and David Magnus partner with Stanford’s Mildred Cho and PREVADE’s Katie Shilton to respond to the Advanced Research Projects Agency for Health’s (ARPA-H) Request for Information regarding the design and development of its ethical, legal, and social implications (ELSI) initiative.

Other Publications

Looking Beyond the IRB

Understanding AI research ethics as a collective problem: Changing the culture on AI-driven harms through Stanford University’s Ethics & Society Review

Societal harms perpetuated by artificial intelligence are well documented. Although some organisations and individuals have taken steps to counter harms in their work, these problems continue to arise as AI technology proliferates. From discriminatory bail algorithms to racist facial recognition matches to flawed healthcare algorithms, the pernicious consequences of AI technologies beg the question: how can we change the culture of AI research and development to foreground preventing harms to society?  Generating cultural change is no easy feat. It requires buy-in and participation…

Broadening the Ethical Scope

In this open peer commentary for The American Journal of Bioethics, the ESR’s Margaret Levi, Michael Bernstein, and Quinn Waeiss, explore [SUMMARY]