Accountability through Interpretability in Visual AI Systems
Funder: Volkswagen Foundation
Scheme: Artificial Intelligence and the Society of the Future
Cambridge PI: Dr Leonardo Impett
Collaborating institutions: University of Arts and Design Karlsruhe, University of California Santa Barbara, Durham University, University of Kassel
AI Forensics is a collaborative project between researchers at universities in Germany, the UK, and the US, spanning disciplines including media theory, STS, design, computer science, and digital humanities.
The project aims to design a new sociotechnical and political framework for the analysis and critique of visual AI systems. This includes the design and development of new tools, methods, and metaphors for the critical understanding of AI systems at three levels:
- Datasets: exploring and developing tools for examining large AI image datasets (such as ImageNet or Celeb-500K) that are too large to be viewed ‘manually’;
- Models: uncovering biases and hidden assumptions in deep learning architectures and other AI/vision models;
- Applications: understanding applied AI models in their wider social, political, and historical contexts.
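The dataset strand above turns on a practical point: collections like ImageNet contain millions of images, far too many for frame-by-frame viewing, so exploration typically works through embeddings from a pretrained vision model. As a minimal, illustrative sketch (not the project's actual tooling), the following clusters synthetic stand-in embeddings with plain k-means and surfaces one representative image per cluster for a human curator to inspect:

```python
import numpy as np

# Illustrative sketch: exploring a dataset too large to view manually by
# clustering image embeddings. The embeddings here are synthetic stand-ins
# for features that would come from a pretrained vision model.
rng = np.random.default_rng(0)
n_images, dim, k = 10_000, 64, 5

# Synthetic embeddings drawn around k hidden "visual themes".
centres = rng.normal(size=(k, dim))
labels_true = rng.integers(0, k, size=n_images)
embeddings = centres[labels_true] + 0.1 * rng.normal(size=(n_images, dim))

def kmeans(x, k, iters=20, seed=0):
    """Plain k-means: returns per-image cluster assignments and centroids."""
    r = np.random.default_rng(seed)
    centroids = x[r.choice(len(x), k, replace=False)].copy()
    for _ in range(iters):
        # Distance from every embedding to every centroid: shape (n, k).
        d = np.linalg.norm(x[:, None] - centroids[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centroids[j] = x[assign == j].mean(axis=0)
    return assign, centroids

assign, centroids = kmeans(embeddings, k)

# One representative image index per cluster: the embedding closest to its
# centroid -- the images a curator would inspect first.
for j in range(k):
    members = np.where(assign == j)[0]
    dists = np.linalg.norm(embeddings[members] - centroids[j], axis=1)
    print(f"cluster {j}: {len(members)} images, representative index {members[dists.argmin()]}")
```

In practice the representatives (and outliers far from any centroid) give a tractable entry point into a dataset's hidden structure, which is where questions of bias and curation begin.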
The Cambridge component of the project will focus chiefly on datasets and models, and will consider the ‘visual culture’ of computer vision systems, including:
- the visual culture of generative image networks like DALL·E;
- the application of cultural analytics and digital art history techniques to dataset critique in computer vision;
- the implicit philosophy of vision of contemporary computer vision architectures;
- the methodological and epistemic implications, and implicit theoretical assumptions, of text-image transformers.