Data Science Checklists (Part 3): Design Justice Network Principles and Checklists for Data Scientists

We have established that checklists can help data scientists handle the complexity of ensuring their work aligns with a given set of values, but to actually propose a checklist, we need to state explicitly what those values are. Up until now, we have discussed AI principles in general, without making reference to any specific set of ideals. Our checklist intends to help put the Design Justice Network principles into action. In this post we will argue for why the Design Justice Network principles, in particular, may benefit from a checklist and then propose a prototype checklist.

Read More

Data Science Checklists (Part 2): Can Checklists Help?

Now that we have established the complexity of putting AI principles into practice, we can begin discussing whether checklists can help. At first glance, it would appear that they cannot. Checklists seem designed to handle simple problems, maybe complicated problems, but certainly not complex problems. In The Checklist Manifesto, Gawande writes that one way to understand the benefit of a checklist is that "[t]hey remind us of the minimum necessary steps and make them explicit" (36). This type of solution seems well suited to simple and complicated processes. For a simple problem that can be solved via a recipe, this is exactly what one would be looking for: a tool that ensures that the most critical steps in the recipe are followed. For complicated tasks, a tool that reminds one of the minimum necessary steps can be useful because expert knowledge is fallible. Ultimately, since both simple and complicated problems have solutions that can be applied to all instances of the problem with relative success, checklists can be written to capture this underlying order.

Read More

Data Science Checklists (Part 1): The Complexity of Putting Principles Into Practice

A recent trend in discussions surrounding the governance of machine learning (ML) algorithms is the drafting of so-called "AI principles". Organizations ranging from technology firms to governments to civil society to multi-stakeholder initiatives have published documents outlining the values they believe should govern these technologies. In the context of Internet regulation, this phenomenon has been called digital constitutionalism, in the sense that these documents might be the building blocks for future internet regulation but lack any codified rules or enforcement mechanisms in their current formulation. These AI principles make foundational normative claims but cannot demand compliance: AI principles are, for now, only politically charged PDFs. Thus, another question is implied by the very existence of these documents: how can a given set of principles best be put into practice?

Read More

BlueDot in Wired!

Wired just wrote an article on the work I helped with when I was a data scientist at BlueDot. We built a system that ingests news articles from around the world and uses the information to create a database of global infectious disease outbreaks. The product identified the recent coronavirus outbreak a week before the CDC and 10 days before the World Health Organization.

Read More

Principled Artificial Intelligence

I was only tangentially involved in this project, doing some NLP work that didn’t really go anywhere, but I think it’s very cool, so I wanted to post about it. There has been a recent trend for organizations, both private and public, to publish “AI Principles”. These documents are intended to guide the development of AI, but they could also just be ethics-washing.

Read More