Deadline extended to September 25 for Call for Proposals: Auditing Artificial Intelligence

Nicole McAlee
Thu, Sep 15, 2022 7:33 PM

Notre Dame-IBM Technology Ethics Lab

2022-2023 Call for Proposals

Auditing Artificial Intelligence


OVERVIEW ----------------------------------------

Artificial intelligence increasingly undergirds, digitizes, and automates
important processes in our society and economy, such as processing loan
applications, making medical diagnoses, informing hiring decisions,
surfacing information, and piloting autonomous vehicles. Recent headlines
underscore that, with such important outcomes on the line for individuals,
organizations, and the broader society, it is critical that AI can be
trusted to make fair and accurate decisions.

Globally, legislation has been both proposed and passed to regulate AI, yet
there is no broad consensus on the most effective ways to assess the
implications of a given system prior to and during deployment. The
objectives that inform financial audits may translate to artificial
intelligence: just as a financial auditor gathers and inspects evidence to
determine whether a company’s practices are free from material misstatement
or fraud, an AI auditor may examine design documents, code, and training
data to determine whether a company’s algorithms are free from material
bias, inaccuracy, or other potentially consequential impacts. Financial
audits are just one example; other kinds of audits, such as IT, privacy,
security, and operational audits, may provide models for examining the
impact of decisions made by AI systems.

The predicament remains daunting, however. As AI becomes more
sophisticated, the ethical, social, and regulatory challenges of auditing
it will only grow more complicated. Dimensions that require auditing must
be identified, agreed upon, and measured. AI auditors must be trained.
Policies must be developed to govern the operations, credentialing, and
impact of audits.

The Notre Dame-IBM Technology Ethics Lab invites proposals for projects
that grapple with these challenges and suggest innovative solutions.
Potential areas for research and scholarship include, but are not limited
to, the following:

  • Scope of AI audits

  • Regulatory frameworks for AI audits

  • Methodologies for AI audits

  • Skills for future AI auditors

  • Teaching methodologies for AI audits

  • How AI audits may impact various sectors and industries

  • Suggested best practices for AI audits

  • Adoption and deployment of AI audits

Successful applications will propose a defined deliverable (such as, but
not limited to, research papers, draft policy, model legislation, teaching
materials, and impact assessments) that addresses the above challenges and
can be completed between January 1, 2023 and December 31, 2023.

Key words: AI Ethics, Privacy, Controls, Risk assessment, Risk mitigation,
Governance, Policy, Compliance, Evidence, Opinion, Security, Availability,
Confidentiality, Integrity, Fairness, Bias, Accuracy, Trust, Impact
assessment, Transparency


KEY DATES ----------------------------------------

2022

  • August 15 - Application period opens

  • September 25 - Application period closes (extended from September 18)

  • November 18 - Recipients notified

2023

  • January 1 - Projects kick off

  • April - Virtual recipient workshop

  • June - Recipient workshop at the University of Notre Dame in Indiana, USA

  • October - Virtual recipient workshop

  • December 31 - Projects close


PROJECT AWARDS ----------------------------------------

Recipients may apply for up to $60,000 USD in award funding and will have
from January 1, 2023 through December 31, 2023 to complete their project,
including preparation, research, execution, and deliverables. A detailed
budget and timeline should be submitted as part of the application.


TO APPLY ----------------------------------------

For full details of the call, including eligibility criteria, proposal
preparation guidelines, and a link to the application form, please visit
https://techethicslab.nd.edu/call-for-proposals/.

Questions may be directed to techlab@nd.edu.
