Talk at Strata + Hadoop World Conference 2016, San Jose, CA.

Today, algorithms predict our preferences, interests, and even future actions; recommendation engines, search, and advertising targeting are the most common applications. With data collected from mobile devices and the Internet of Things, these user profiles become algorithmic representations of our identities, which can supplement, or even replace, traditional social research by providing deep insight into people’s personalities. We can also use such data-based representations of ourselves to build intelligent agents that act in the digital realm on our behalf: the AlgorithmicMe.

These algorithms must make value judgments: decisions about methods and presets of a program’s parameters, choices about how to handle tasks according to social, cultural, or legal rules, or to personal persuasion. This raises important questions about the transparency of these algorithms, including our ability (or lack thereof) to change or affect the way an algorithm views us.
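To make the idea of a value judgment hidden in a parameter preset concrete, here is a toy sketch; the function, names, and numbers are entirely hypothetical and stand in for any scoring algorithm with a tunable default:

```python
def recommend(items, popularity_weight=0.8, top_n=2):
    """Rank items by a blend of popularity and novelty.

    The default popularity_weight=0.8 is itself a value judgment:
    it favors mainstream items over novel ones, and most users will
    never see, question, or change it.
    """
    scored = [
        (popularity_weight * item["popularity"]
         + (1 - popularity_weight) * item["novelty"], item["name"])
        for item in items
    ]
    return [name for _, name in sorted(scored, reverse=True)[:top_n]]

# A hypothetical catalog of items with made-up scores.
catalog = [
    {"name": "blockbuster", "popularity": 0.9, "novelty": 0.1},
    {"name": "indie_film",  "popularity": 0.2, "novelty": 0.9},
    {"name": "documentary", "popularity": 0.5, "novelty": 0.6},
]

print(recommend(catalog))                         # → ['blockbuster', 'documentary']
print(recommend(catalog, popularity_weight=0.2))  # → ['indie_film', 'documentary']
```

The "same" algorithm returns different answers depending on a preset the end user never chose, which is exactly the kind of subjective decision the talk addresses.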

Using key examples, Joerg Blumtritt and Majken Sander outline some of these value judgments, discuss their consequences, and present possible solutions, including algorithm audits and standardized specifications as well as more visionary concepts such as an AlgorithmicMe, a data ethics oath, and algorithm angels that could raise awareness and guide developers in building their smart things. Joerg and Majken underscore the importance of greater awareness, education, and insight regarding the subjective algorithms that affect our lives. We need to examine how we, as data consumers, data analysts, and developers, knowingly or unknowingly produce subjective answers through our choice of methods and parameters, often unaware of the bias we impose on a product, a company, and its users.