 

Dependable AI: Social transparency in AI systems

Persons


Hussain Abid Syed

Room: US-G 006
Phone: +49 (0) 271/ 740 – 4410
Hussain.Syed(at)uni-siegen.de

Short Description

AI has seen a revival in recent times, and the technological world appears to be shifting its focus from an era of digital revolution to an era of intelligent technologies and systems. AI systems are often socio-organizationally embedded. As AI-powered systems increasingly support significant decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interaction are socially situated; Explainable AI (XAI) approaches, however, have been predominantly algorithm-centered [ref]: we use algorithms to make algorithms explain their decision-making. These transparency initiatives aim to turn a black-box algorithm into a white box, and further into a see-through glass box, so that the user can establish trust in the intelligent system. The central questions are:

  • What does it mean for a technology to be intelligent? 
  • Why don't we demand such explainability from other decision-making or decision-aiding technologies? 
  • And, most importantly, what is the role of a machine's comprehensibility in socially situated human-AI interaction?

This project will empirically test frameworks for explainability and social transparency presented in the recent AI literature, in order to understand and enhance the underlying notions of dependable AI.
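To give a concrete sense of what "algorithms explaining algorithms" looks like in practice, the sketch below trains an opaque classifier and then uses a second, post-hoc algorithm (permutation feature importance) to attribute its predictions to input features. The dataset, model, and explanation method are illustrative assumptions only, not part of the project brief; such purely algorithm-centered explanations are exactly what this project contrasts with socially transparent ones.

```python
# Minimal sketch of algorithm-centered, post-hoc explainability:
# one algorithm (the explainer) probes another (the black box).
# Dataset and model choices are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an opaque ensemble model making the decision.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The "explainer": a second algorithm that shuffles each feature and
# measures how much the black box's accuracy degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the explanation attributes the most weight to.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f}")
```

The output is a ranked list of feature attributions, i.e., a technically faithful but socially decontextualized account of the model's behaviour; whether such an account actually supports informed and accountable action by end-users is one of the questions the project examines.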

Formalities

  • Project A, B and C
  • Master/Bachelor Project
  • Master/Bachelor Thesis

Schedule

  • To be decided in collaboration with students

Notes

This project can be adjusted to fit option A, B, or C depending on the level of difficulty and the specific interests of the student(s). It can start as Project A or B, be continued as Project C, and can also lead to a Master Thesis.