Research
Our team conducts innovative studies on the ethical and societal consequences of research projects, in particular AI research.
Evaluating the ESR’s Effectiveness
We don’t just help researchers reflect on the broader consequences of their grant proposals. We also produce our own studies on common ethical and societal risks, reasonable mitigation strategies, and more.
We use the ethics statements and panelists’ feedback created during our ethical reflection process to evaluate its procedural and substantive effectiveness. We are committed to using evidence from each grant cycle to improve our ethics reflection process, update our resources for researchers, and advocate for the incorporation of ethical reflection into academic research.
Common Harms Associated with AI Research
Through our research, we have compiled this list of common potential harms associated with AI, along with mitigation strategies. Researchers can use it to consider different ways to address issues throughout the research and development process, and questions to weigh when employing those strategies.
Exacerbating inequities refers to unjust differences between populations in the access to, use of, quality of, and outcomes of public goods and services.
Questions to consider:
- Whose interests are represented in this research? Whose are excluded?
- Who could benefit from the success of this project? Who could be harmed by the project’s success?
- How will researchers measure or label social and demographic information in their data/model? Do these methods reify any structural stereotypes?
Privacy violations
Questions to consider:
- How pervasive are user data collection techniques?
- How will researchers handle incidental user data collected in their work?
- How are users notified and given control over the collection and use of their information?
Labor displacement and over-reliance
Questions to consider:
- If a tool is integrated into established workflows, how could this affect workers?
- How could users come to over-rely on the tool?
- What would the consequences of automation bias be?
Misuse and dual use
Questions to consider:
- How could the data the researchers collect be misused by bad actors?
- What unintended activities could the researchers’ work enable?
- See our paper on mitigating the misuse of AI in biomedical research for more resources.
Misapplication and misinterpretation
Questions to consider:
- How could users misapply this research?
- How could the tools, products, or outcomes of this research be misinterpreted or misunderstood?
Malfunction
Questions to consider:
- What could go wrong if the model or tool malfunctions during deployment?
This list is intended to be neither exhaustive nor prescriptive; rather, it is a starting point for researchers.
Common Mitigation Strategies Used in AI Research
In addition to common harms, we have also compiled this list of frequently used mitigation strategies for AI research. Researchers can use the information below to inspire mitigation strategies for addressing harms throughout the research and development process.
Interest-holder engagement
- Will the researchers engage their user community or other applicable stakeholders in the design of their research?
- What strategies will the research team use to engage these interest-holders?
- How will the research team incorporate interest-holder feedback into their work?
Consulting ethicists and domain experts
- Does the project involve novel issues that make it difficult to foresee downstream consequences? If so, how could the research team incorporate an ethicist’s expertise in their project development to help them identify and address these issues?
- What types of domain expertise would the research team like to have in the design of their project?
- Could the research team benefit from expert guidance on any specific ethical issues?
Bias and robustness evaluation
- How will the research team evaluate bias in the data input and model output?
- How will they account for distribution shifts between training, testing, and implementation?
- How will the researchers assess potential vulnerabilities of the model or tool?
Data privacy and access controls
- How will the researchers ensure data privacy during tool development and after deployment?
- What methods can the team use to decrease the sensitivity of the collected data?
- Will the researchers control access to their data or model? If so, what parameters will they place around access?
Improving data representativeness
- What strategies can the research team use to improve the representativeness of their data as compared to their population of interest?
Advocacy
- How can researchers use their positions at their university and standing within their fields to advocate for addressing this issue in research?
User education and transparency
- How will researchers educate and train potential users or explain their work to interest-holders?
- Will the researchers explicitly disavow certain uses of their tool or research?
- What type of expertise will users require to appropriately use the tool or research?
- How can the research team convey that information to their audience?
- What information will the research team disclose about their tool, methods, or research in publications and other releases?
- How can the research team leverage transparency and explainability methods to improve users’ understanding and application of their work?
This list is intended to be neither exhaustive nor prescriptive; rather, it is a starting point for researchers.