How we organize shapes the work that we do

Interactive implications tutorial at the ACM FAT* 2020 Conference

The work within the Fairness, Accountability, and Transparency of ML (fair-ML) community would benefit from understanding the role that organizational culture and structure play in the effectiveness of fair-ML efforts by individuals, teams, and initiatives within industry. In this tutorial session we will explore various organizational structures and possible leverage points for intervening effectively in the development and deployment of AI systems, so as to contribute to positive fair-ML outcomes.

What could the organizational culture that exists in the Aviation Industry teach us about the possibilities for shared accountability and shared responsibility in the field of AI?

We looked at how automation was introduced in the field of Aviation, specifically at how agency and responsibility are split between the different actors involved. For example, the pilot of an aircraft, the operators at the Air Traffic Control station on the ground, and the manufacturer of the aircraft engine all bear responsibility. In 1967, the National Transportation Safety Board (NTSB) was established in the US, introducing the "go-team" model.

The "go-team" model employs a rotating task-force of 8-12 experts trained in technical, policy, social-science, media, forensic, and community management matters. The multi-faceted quantitative and qualitative data that this task-force generates feeds back to industry stakeholders to improve airline safety and to inform government stakeholders so they can improve the relevant infrastructure, such as Air Traffic Control systems.

The NTSB was involved in the investigation of a recent fatal self-driving vehicle accident, producing a report of more than 400 pages that includes findings and recommendations.

What could the organizational culture that exists in the Space Industry teach us about the possibilities for shared accountability and shared responsibility in the field of AI?

"One person’s mistake is everybody’s mistake.
In Space, if you make a mistake … it’s more the organization that did not see the possible mistake, it did not put in place all the blocks for that mistake not to happen or to be caught at the beginning."

Teams are interdependent [1], and the best teams are highly interdependent [2]. Interdependence is associated with innovation [3,4]; however, the state of interdependence relies on multiple factors, including adaptability, integration, reducing uncertainty, and focus [5].

There are many books that have been influential in the field of Management and Organizational Science. Some of the references we found helpful include:

  • Wheatley, M. J., & Rogers, M. E. (1998). A simpler way. Berrett-Koehler Publishers.
  • Horowitz, B. (2019). What You Do Is Who You Are: How to Create Your Business Culture. New York: Harper Business.
  • Coffman, C., & Sorensen, K. (2013). Culture Eats Strategy for Lunch: The Secret of Extraordinary Results, Igniting the Passion Within. Denver, CO.

Ethnography, and the many disciplines that count ethnography as a core method, provide crucial perspectives on the topics within fair-ML work by illuminating the context and interconnected relationships surrounding algorithmic systems. As scholar Nick Seaver explores in his work on critical algorithmic studies, ethnography could enable practitioners to "enact algorithms not as inaccessible black boxes, but as heterogeneous and diffuse sociotechnical systems, with entanglements beyond the boundaries of proprietary software" [1]. The "scavenging ethnographer" he describes develops an understanding of algorithms not as singular technological objects but as "culture", because they are composed of collective human practices. Considering how people enact algorithms through their actions requires developing reflexive practices that recognize that "any model of social behavior is inseparable from the social context and research methods from which it was produced" [2].

References:

  • [1] Nick Seaver. 2017. Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society 4, 2 (2017).
  • [2] Anna Lauren Hoffmann. 2019. Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society 22, 7 (2019), 900–915.

See Also:

  • R. Stuart Geiger, Dan Sholler, Aaron Culich, Ciera Martinez, Fernando Hoces de la Guardia, François Lanusse, Kellie Ottoboni, Marla Stuart, Maryam Vareth, Nelle Varoquaux, Sara Stoudt, and Stéfan van der Walt. "Challenges of Doing Data-Intensive Research in Teams, Labs, and Groups." BIDS Best Practices in Data Science Series. Berkeley Institute for Data Science: Berkeley, California. 2018. doi:10.31235/osf.io/a7b3m
  • R. Stuart Geiger, Orianna DeMasi, Aaron Culich, Andreas Zoglauer, Diya Das, Fernando Hoces de la Guardia, Kellie Ottoboni, Marsha Fenner, Nelle Varoquaux, Rebecca Barter, Richard Barnes, Sara Stoudt, Stacey Dorton, Stéfan van der Walt. “Best Practices for Fostering Diversity and Inclusion in Data Science: Report from the BIDS Best Practices in Data Science Series.” BIDS Best Practices in Data Science Series. Berkeley, CA: Berkeley Institute for Data Science. 2019. doi:10.31235/osf.io/8gsjz
  • Dan Sholler, Sara Stoudt, Chris Kennedy, Fernando Hoces de la Guardia, François Lanusse, Karthik Ram, Kellie Ottoboni, Marla Stuart, Maryam Vareth, Nelle Varoquaux, Rebecca Barter, R. Stuart Geiger, Scott Peterson, and Stéfan van der Walt. “Resistance to Adoption of Best Practices.” BIDS Best Practices in Data Science Series. Berkeley Institute for Data Science: Berkeley, California. 2019. doi:10.31235/osf.io/qr8cz

Watch the FAT* 2019 Translation Tutorial: Challenges of incorporating algorithmic fairness into industry practice.

Drawing from prior work by Michael Veale et al. investigating the challenges faced by practitioners operating within high-stakes public sector institutions, we see that many of these challenges are related to organizational structure [1]. Situating the development of AI systems within different organizational cultures gives us new insights into debated FAT* issues such as power imbalance and discrimination. As scholar Anna Lauren Hoffmann explores in her work, "designers and engineers are limited in their ability to address broad social and systemic problems" [2], and overcoming these limits requires a broader socio-technical understanding. Recent work by Andrew Selbst et al. developed a framework for identifying and mitigating the failure modes, or "traps", that arise from failing to consider the interrelationship between social context and technology [3]. We build on their work and provide new perspectives on some of the concrete characteristics of the Framing Trap, the Portability Trap, and the Solutionism Trap. Our work remains focused on the complex human and algorithmic interactions [4] within the larger organizational and social context where these traps are enacted.

The work within the Fairness, Accountability, and Transparency of AI community relates to the field of AI Safety and Verification. In the fields of Computer Science and Machine Learning, verification has been defined by ML researchers as "producing a compelling argument that the system will not misbehave under a broad range of circumstances" [1]. Traditionally, Machine Learning research distinguishes between testing and verification: testing refers to evaluating the system under concrete conditions and making sure that it behaves as expected, while verification aims at the broader guarantee described above. Testing has been a major part of software development ever since its early days in the 1950s.
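To make the distinction concrete, here is a minimal, hypothetical sketch in Python (the `credit_model` and the check functions are our own illustration, not from the tutorial materials or any particular library): the first check exercises the system under one concrete condition (testing), while the second only samples a broad range of inputs, gesturing toward, but not achieving, the stronger guarantee that verification aims for.

```python
import random

# Hypothetical toy model standing in for a trained ML system.
def credit_model(income: float, debt: float) -> float:
    """Toy stand-in for a trained model: returns a score in [0, 1]."""
    return max(0.0, min(1.0, 0.5 + (income - debt) / 1_000_000))

# Testing: evaluate the system under concrete conditions and check that
# it behaves as expected on those specific inputs.
def test_known_case() -> None:
    assert credit_model(income=50_000, debt=10_000) > 0.5

# Gesturing toward verification: argue that a property holds under a broad
# range of circumstances. Sampling inputs, as below, is still testing;
# true verification would establish the property for *all* inputs.
def check_monotonicity_on_samples(samples: int = 1_000) -> bool:
    for _ in range(samples):
        income = random.uniform(0, 200_000)
        debt = random.uniform(0, 100_000)
        # More income at the same debt level should never lower the score.
        if credit_model(income + 1_000, debt) < credit_model(income, debt):
            return False
    return True

if __name__ == "__main__":
    test_known_case()
    print("Monotonicity held on sampled inputs:", check_monotonicity_on_samples())
```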

"Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations."

Melvin Conway [2]

ABOUT ML (Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles) is a multi-year, multi-stakeholder initiative led by the Partnership on AI that aims to bring together a diverse range of perspectives to develop, test, and promulgate documentation best practices for transparency in machine learning, synthesizing learnings from academic research and existing practices. This is an ongoing, iterative process designed to co-evolve with the rapidly advancing research and practice of AI technology.

Further understanding the intersection of organizational culture and fair-ML work will contribute to bridging the gap between best practices and industry practitioners.

By broadening the scope of the discussion, we seek to create space for multidisciplinary and intergenerational insights to emerge. We invite all of you to participate virtually or during the ACM FAT* conference. The tutorial will be most relevant to you if you are currently involved in or interested in initiating fair-ML efforts in industry. Use the visualization above to explore and contribute to the conversation.

What to expect:

  • An interdisciplinary discussion about organizational change.
  • An overview of the results of an ethnographic study we conducted among industry practitioners working specifically within the fair-ML field.
  • A facilitated design-thinking session where we'll do a deep dive into the themes that have emerged from the study.

Use the interactive sketch above to explore the work we think will influence the discussion during the tutorial session.
Most of all, what are we missing? Use the "+" bubble to be part of this exploration and contribute your ideas and experience.

We've adopted the Berkana Institute's Two Loop Theory of Change model, further explored by Cassie Robinson:

Source: Cassie Robinson, Hospicing The Old

For questions, thoughts, ideas, feedback, please reach out to us here. Thank You!