Medication Plans on The Blockchain – Building a Decentralised Application in Healthcare

The theme of this post is easily generalised to other use cases and serves as an example of how blockchain technology can shift power and trust in a well-established system, in this case that of healthcare.

TL;DR

Medical prescriptions should be unified and digitalised. They should be resilient and controlled by the real owner of the prescription (and thus of the personal data). This can be achieved by a blockchain-based solution. A system of smart contracts in Solidity is proposed which achieves this and which, furthermore, is modular and updatable. Some general advice on designing a blockchain solution is given.

What’s the problem?

How many of you know what iatrogenic illness means? I confess that prior to writing the Master’s thesis on which this post is based, I also had no idea. So, to not keep you waiting, here’s the definition from Merriam-Webster:

iatrogenic: induced inadvertently by a physician or surgeon or by medical treatment or diagnostic procedures

from the Greek word for physician (iatros). Add an illness to that and you have an illness caused by a physician. It sounds like an oxymoron, but it is in fact more common than we would like it to be. The causes of iatrogenic illness can be divided into so-called Adverse Drug Events (ADEs) and, to be completely MECE*, other reasons. Other reasons include things such as rough examinations, surgical errors (there’s a reason they draw arrows on the limb to be amputated) and so on. ADEs include all injuries or complications caused by medication, be it the wrong medication, drugs interacting in unintended ways and so on. [1] ADEs have been shown to be the most common cause of injury to hospitalised patients and, furthermore, the most preventable one.

Where is the problem coming from?

In fact, computer-based prescribing systems were shown to decrease medication errors by 55% to 80% in a study from 2004. [2] Using an IT solution does not, however, guarantee that the most severe of those medication errors are prevented. Among ADEs, the most common form of avoidable medication error is the prescribing error (i.e. an error made somewhere in the process of getting a drug to a patient). There is a list of sixteen classes of these prescribing errors, but basically they boil down to:

  • Knowledge deficiencies – among doctors, patients or pharmacists, about drugs, other parties, etc.
  • Mistakes or memory lapses – e.g. a patient forgets what medication he/she is already on
  • Name-related errors – complicated-sounding substance gets mistaken for other complicated-sounding substance
  • Transferring errors – information is missing or incorrect once the order arrives at the pharmacist
  • ID checks – patient, doctor or pharmacist ID isn’t properly verified
  • Illegible handwriting (!)
  • Wrong type of document filled out

These errors all illustrate why prescribing errors are so common, but also why they should, to a large extent, be avoidable. [3] The thing is that, because the rate of prescribing errors actually causing damage or danger to patients is relatively low (ca. 2% [2]), the topic is overshadowed by more clinical research in medicine and is thus overlooked by the research community and the public in general. One reason for this could be the wide-ranging competencies required to implement a system that decreases the rate of prescribing errors to zero. To do such a thing, one would need technical expertise in security and privacy as well as the various skills of application development; one would also need medical and pharmacological knowledge; and, essentially, one would need experience in information systems management.

A step in the right (digital) direction

To combat prescribing errors, many public health systems require or recommend that patients with more than three different prescribed medications have a unified medication plan, which should theoretically contain all prescriptions. The effectiveness and quality of medication plans were examined in 2015 by a group of German researchers. The results were scary: only 6.5% of all medication plans examined were free of discrepancies, where discrepancies means differences in drug names, additional or missing drugs, deviations in dosage, etc. In spite of this, or perhaps to improve the quality of medication plans, a law was passed in Germany three months after the publication of the medication plan review, making a medication plan mandatory for all patients with three or more medications. To cope with the slowness of technology adoption in healthcare, there is no requirement that medication plans be digital until January 2018. Thereafter they should be available on an electronic health card (eGK). [4]

Considering the different types of prescribing errors we’ve identified, it is not difficult to translate them into requirements for a system that prevents those errors. The resulting requirements happen to fit a blockchain system with smart contracts very well, so we’ll propose a design for a system of smart contracts to function as a medication plan. Let’s look at the errors one by one and explain which requirements correspond to them:

Knowledge deficiencies

To resolve this error, data regarding patients and their medications needs to be unified, available and guaranteed correct. There shouldn’t be multiple versions of equal or uncertain validity. Additionally, there should be little chance of the data being lost or unavailable when it is needed.

Mistakes or memory lapses

It is entirely human and to be expected that a patient taking many different medications can’t remember the details of each substance’s complicated name. This can be solved, however, by unifying medication plans and ensuring that all prescriptions are correct and active.

Name-related errors

See point Knowledge deficiencies.

Transferring errors

Unifying the various systems currently available would simplify the process of transferring prescriptions.

ID checks

Through digitalisation and the implementation of a permissions management system, patients would only need some type of identification (which could be biometric) to collect their medication.

Illegible handwriting

Assuming the doctor enters the prescription into a digital system and doesn’t write with pen and paper, this problem is practically eliminated.

Wrong type of document filled out

Again, through the unification of the different ways to prescribe a medication, there would be no such thing as the wrong type of document. At least not inside the system.

Design choices in the solution

So what are the technical details one needs to consider when designing a blockchain-based system for a medication plan? I’ll describe the three most important design choices in this blog post. The three questions are:

  • Who needs to participate in the network?

In this case, the only users are doctors, patients and pharmacies. So as not to take on additional risk of data exposure, only those who are on-boarded and verified through some separate process should be allowed to participate in the network. There are, however, some negative aspects to choosing a private or permissioned blockchain, one being that there might not always be enough active nodes to keep consensus building at an acceptable fault-tolerance level. This can be solved by some type of incentive, or a requirement that, for example, doctors keep a node running at all times. Another risk of running a private blockchain is that, when the number of nodes isn’t very large and the users consist of a specific group of people (such as doctors in Germany), the risk of collusion becomes considerable. To combat this, consensus-making should be well spread geographically and demographically.

  • What data and functions need to be on the blockchain and what should definitely not be there?

In the case of a medication plan, the data which is required to be on the blockchain consists of three parts: user IDs, prescriptions, and doctor/pharmacy permissions to prescribe/sell medications. Naturally, we can’t have plaintext information about patients and their prescriptions, even in a private network. Therefore, IDs are formed from a public/private key pair (similar to Bitcoin or Ethereum), which should be generated by the user, on a user device. Prescriptions are only ever published on the blockchain as hashes, because even though the users are theoretically anonymous, it has been shown that Bitcoin transactions can be traced back to a person. [5] The permissions of doctors and pharmacies also need to be stored on the blockchain, in a smart contract, to ensure that they aren’t manipulated or somehow overruled. Including permissions and sensitive data in smart contracts means that extreme caution is needed when programming them, to ensure that no syntactic or logical mistakes are made. The functionality needed on the blockchain is basically complementary to these data pieces: getters and setters. Additionally, permissions need to be handled on-chain.
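To make this concrete, here is a minimal sketch in Solidity of what such an on-chain registry could look like. It only illustrates the ideas above; all names (PrescriptionRegistry, setDoctor, addPrescription) and the single-admin on-boarding scheme are simplifying assumptions, not the design from the thesis.

```solidity
pragma solidity ^0.4.11;

// Minimal sketch of the on-chain part described above. All names and
// the single admin doing the on-boarding are illustrative assumptions.
contract PrescriptionRegistry {

    address public admin;                        // on-boarding authority
    mapping(address => bool) public isDoctor;    // permission to prescribe
    mapping(address => bool) public isPharmacy;  // permission to dispense
    mapping(address => bytes32[]) private plans; // patient ID => prescription hashes

    function PrescriptionRegistry() {
        admin = msg.sender;
    }

    modifier onlyAdmin() { require(msg.sender == admin); _; }
    modifier onlyDoctor() { require(isDoctor[msg.sender]); _; }

    // Permissions live on-chain so they cannot be manipulated or overruled.
    function setDoctor(address who, bool allowed) onlyAdmin {
        isDoctor[who] = allowed;
    }

    function setPharmacy(address who, bool allowed) onlyAdmin {
        isPharmacy[who] = allowed;
    }

    // Only the hash of a prescription is ever published; the document
    // itself is exchanged off-chain between doctor, patient and pharmacy.
    function addPrescription(address patient, bytes32 prescriptionHash) onlyDoctor {
        plans[patient].push(prescriptionHash);
    }

    // Anyone holding the plaintext prescription can check it against the
    // anchored hash, e.g. a pharmacy before dispensing.
    function verify(address patient, uint index, bytes32 candidateHash) constant returns (bool) {
        return plans[patient][index] == candidateHash;
    }
}
```

Note how the contract never sees a prescription in plaintext: the anchored hash guarantees integrity and provenance, while the sensitive content stays off-chain.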

  • How should the smart contracts be written?

There are relatively few resources by experienced smart contract developers on best practices for building smart contracts, but mostly the general advice for writing good code (failing loudly and as early as possible, commenting, etc.) should be followed. There is, however, so much to say about smart contract programming specifically that it will be covered in another blog post. Here, I’ll just briefly discuss the architecture of the system of smart contracts – starting with what “failing loudly” looks like in Solidity.
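A hypothetical toy example (nothing from the thesis): require() aborts and reverts the whole transaction when a precondition fails, so the contract never continues in a bad state.

```solidity
pragma solidity ^0.4.11;

// Toy illustration of "failing loudly and as early as possible".
contract FailFast {
    mapping(address => uint) public doses;

    function setDose(address patient, uint dose) {
        require(patient != address(0));   // reject the zero address early
        require(dose > 0 && dose <= 100); // reject implausible dosages loudly
        doses[patient] = dose;            // only reached if all checks passed
    }
}
```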

To keep an overview of the smart contracts and the functionality used in the application, the contracts should be as small and simple as possible, which facilitates analysis. Say you start with a fairly complicated piece of functionality (not in a computational sense), then separate it into multiple smart contracts and end up with maybe five to ten of them. How are you supposed to keep track of them and increase the modularity of your system? Enter the contract-managing contract. [6] It is basically a contract that keeps track of (and manages) the different contracts in your system: it logs the address and name of each separate contract and provides another contract, the endpoint of the user-facing application, with the means to access them. A sketch of such a manager follows.
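Here is a minimal sketch of a contract-managing contract in the spirit of the five-types model [6]. Again, the names and the single-owner scheme are illustrative assumptions:

```solidity
pragma solidity ^0.4.11;

// A registry that maps contract names to their current addresses, so the
// user-facing endpoint looks components up instead of hard-coding them.
contract ContractManager {

    address public owner;
    mapping(bytes32 => address) private contracts; // contract name => address

    function ContractManager() {
        owner = msg.sender;
    }

    modifier onlyOwner() { require(msg.sender == owner); _; }

    // Registering a new address under an existing name replaces the old
    // version: this is how a component is updated without touching the
    // rest of the system.
    function setContract(bytes32 name, address addr) onlyOwner {
        contracts[name] = addr;
    }

    function getContract(bytes32 name) constant returns (address) {
        return contracts[name];
    }

    function removeContract(bytes32 name) onlyOwner {
        delete contracts[name];
    }
}
```

The endpoint contract of the user-facing application would resolve components by name via getContract, so fixing a bug only means deploying the corrected contract and re-registering its address under the same name – which is exactly what makes the system modular and updatable.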

Conclusion

An application for managing sensitive personal information needs to be resistant to failure and privacy-preserving, and it needs to provide accountability so that any changes to the information can be traced. A very relevant use case for such an application is a medication plan. A suitable system for building the application back-end is a blockchain-based system of smart contracts. Smart contract programming is a fairly new phenomenon and is based on decentralisation, so much thought should be given to how such a system is designed. A possible solution was drafted above.

*MECE stands for Mutually Exclusive, Collectively Exhaustive

References

1. Tierney LM. Iatrogenic Illness. Western Journal of Medicine. 1989;151(5):536-541.
2. Bobb A, Gleason K, Husch M, et al. The Epidemiology of Prescribing Errors: The Potential Impact of Computerized Prescriber Order Entry. Arch Intern Med. 2004;164(7):785-792. doi:10.1001/archinte.164.7.785
3. Hamid, Harper, Cushley, et al. Prescription errors in the National Health Service: time to change practice. Scottish Medical Journal. 2016;61(1):1-6.
4. Full legal text available at: http://www.bgbl.de/xaver/bgbl/start.xav?startbk=Bundesanzeiger_BGBl&jumpTo=bgbl115s2408.pdf
5. Biryukov A, Khovratovich D, Pustogarov I. Deanonymisation of Clients in Bitcoin P2P Network. In: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (CCS ’14), November 3-7, 2014, pp. 15-29.
6. Monax – Solidity tutorials, https://monax.io/docs/solidity/solidity_1_the_five_types_model/, accessed on 15/05/2017.

My AlgorithmicMe: Our representation in data

Talk at Strata + Hadoop World Conference 2016, San Jose, Ca.

Today, algorithms predict our preferences, interests, and even future actions—recommendation engines, search, and advertising targeting are the most common applications. With data collected on mobile devices and the Internet of Things, these user profiles become algorithmic representations of our identities, which can supplement—or even replace—traditional social research by providing deep insight into people’s personalities. We can also use such data-based representations of ourselves to build intelligent agents who can act in the digital realm on our behalf: the AlgorithmicMe.

These algorithms must make value judgments, decisions on methods, or presets of the program’s parameters—choices made on how to deal with tasks according to social, cultural, or legal rules or personal persuasion—but this raises important questions about the transparency of these algorithms, including our ability (or lack thereof) to change or affect the way an algorithm views us.

Using key examples, Joerg Blumtritt and Majken Sander outline some of these value judgements, discuss their consequences, and present possible solutions, including algorithm audits and standardized specifications, but also more visionary concepts like an AlgorithmicMe, a data ethics oath, and algorithm angels that could raise awareness and guide developers in building their smart things. Joerg and Majken underscore the importance of higher awareness, education, and insight regarding those subjective algorithms that affect our lives. We need to look at how we—data consumers, data analysts, and developers—more or less knowingly produce subjective answers with our choice of methods and parameters, unaware of the bias we impose on a product, a company, and its users.

The Need For Encryption

With this post we do something quite unusual: we re-publish Apple’s letter to their customers. It’s not that we are such Apple fans that we would join the company’s marketing efforts, but the encryption of iPhones has been discussed for a long time. And this week the FBI and the United States government demanded that Apple build a new version of iOS that bypasses security and creates a backdoor. This might not be the first case of a private company being asked to install a backdoor (we know that some have done it), but Apple is not backing down. Please read this letter and add your thoughts on the topic – either here at Datarella or on other channels – we think that this needs to be discussed in public. And – maybe – there even are (secure) technical solutions that comply with the FBI’s requests AND preserve privacy.

“A Message to Our Customers”, published by Apple, Inc., February 16:

February 16, 2016 

A Message to Our Customers

The United States government has demanded that Apple take an unprecedented step which threatens the security of our customers. We oppose this order, which has implications far beyond the legal case at hand.

This moment calls for public discussion, and we want our customers and people around the country to understand what is at stake.

The Need for Encryption

Smartphones, led by iPhone, have become an essential part of our lives. People use them to store an incredible amount of personal information, from our private conversations to our photos, our music, our notes, our calendars and contacts, our financial information and health data, even where we have been and where we are going.

All that information needs to be protected from hackers and criminals who want to access it, steal it, and use it without our knowledge or permission. Customers expect Apple and other technology companies to do everything in our power to protect their personal information, and at Apple we are deeply committed to safeguarding their data.

Compromising the security of our personal information can ultimately put our personal safety at risk. That is why encryption has become so important to all of us.

For many years, we have used encryption to protect our customers’ personal data because we believe it’s the only way to keep their information safe. We have even put that data out of our own reach, because we believe the contents of your iPhone are none of our business.

The San Bernardino Case

We were shocked and outraged by the deadly act of terrorism in San Bernardino last December. We mourn the loss of life and want justice for all those whose lives were affected. The FBI asked us for help in the days following the attack, and we have worked hard to support the government’s efforts to solve this horrible crime. We have no sympathy for terrorists.

When the FBI has requested data that’s in our possession, we have provided it. Apple complies with valid subpoenas and search warrants, as we have in the San Bernardino case. We have also made Apple engineers available to advise the FBI, and we’ve offered our best ideas on a number of investigative options at their disposal.

We have great respect for the professionals at the FBI, and we believe their intentions are good. Up to this point, we have done everything that is both within our power and within the law to help them. But now the U.S. government has asked us for something we simply do not have, and something we consider too dangerous to create. They have asked us to build a backdoor to the iPhone.

Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software — which does not exist today — would have the potential to unlock any iPhone in someone’s physical possession.

The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a backdoor. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.

The Threat to Data Security

Some would argue that building a backdoor for just one iPhone is a simple, clean-cut solution. But it ignores both the basics of digital security and the significance of what the government is demanding in this case.

In today’s digital world, the “key” to an encrypted system is a piece of information that unlocks the data, and it is only as secure as the protections around it. Once the information is known, or a way to bypass the code is revealed, the encryption can be defeated by anyone with that knowledge.

The government suggests this tool could only be used once, on one phone. But that’s simply not true. Once created, the technique could be used over and over again, on any number of devices. In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks — from restaurants and banks to stores and homes. No reasonable person would find that acceptable.

The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals. The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.

We can find no precedent for an American company being forced to expose its customers to a greater risk of attack. For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.

A Dangerous Precedent

Rather than asking for legislative action through Congress, the FBI is proposing an unprecedented use of the All Writs Act of 1789 to justify an expansion of its authority.

The government would have us remove security features and add new capabilities to the operating system, allowing a passcode to be input electronically. This would make it easier to unlock an iPhone by “brute force,” trying thousands or millions of combinations with the speed of a modern computer.

The implications of the government’s demands are chilling. If the government can use the All Writs Act to make it easier to unlock your iPhone, it would have the power to reach into anyone’s device to capture their data. The government could extend this breach of privacy and demand that Apple build surveillance software to intercept your messages, access your health records or financial data, track your location, or even access your phone’s microphone or camera without your knowledge.

Opposing this order is not something we take lightly. We feel we must speak up in the face of what we see as an overreach by the U.S. government.

We are challenging the FBI’s demands with the deepest respect for American democracy and a love of our country. We believe it would be in the best interest of everyone to step back and consider the implications.

While we believe the FBI’s intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect.

Tim Cook

Telling the story of people’s lives – Strata+Hadoop, Feb 15, San Jose

We can draw a colorful picture of people’s everyday lives from the data we collect via smartphones. To tell the data story, we need to translate the raw measurements into meaningful events, like “driving a car”, “strolling in a mall”, or, even more intimate, “being nervous”. We will show how to access the phone’s data, how to derive complex events from the raw readings, how to weave them into a meaningful story, and how to make it work for businesses.

Cases we’ll show: an app for the automotive industry to support ecological driving, learning about the preferences of Chinese passengers at an international airport, and supporting people suffering from osteoporosis in stabilizing their condition and maintaining mobility.

More on Strata+Hadoop

 

BYOD – Bring your own Data. Self-Tracking for Medical Practice and Research

“Facebook would never change their advertising relying on a sample size as small as we do medical research on.”
(David Wilbanks)

People want to learn about themselves and have their lives soundly supported by data. Parents record the height of their children. When we feel ill, we measure our temperature. And many people own a bathroom scale. But without context, data means little. Thus we try to compare our measurements with those of other people.

Data that we track just for us alone

Self-tracking has been trending for years. Fitness trackers like Fitbit count our steps; training apps like Runtastic deliver analyses and benchmark us against others. Since 2008, there has been a movement that puts self-tracking at its center: the Quantified Self.

Self-tracking has been trending for years. In this picture you see a wristband that already made it into a museum and is now on display in the London Science Museum.

However, it is not just self-optimizers and fitness junkies who measure themselves. An essential impetus for self-tracking came from chronically ill people caring for themselves.

Data for the physician, for family members, and for nursing staff

In the US, as in many countries lacking strong public healthcare, it is becoming increasingly common to bring self-measured data to the physician. For many examinations this saves significant costs and speeds up treatment. With Quantified Self, many people have been able to get good laboratory analytics about their health for the first time ever. One example is kits for blood analysis that send the measurements via mobile to the lab and then display the results. Such kits are widely used in India, for example.
Self-tracked data is also useful for family members and nursing staff. It draws a realistic picture of our condition for those who care for us. Even automatic emergency calls based on data measured on site are possible today.

The image at the top is taken from the blog of Sara Riggare, who suffers from Parkinson’s. Sara tracks her medication and the symptoms of her Parkinson’s disease with her smartphone. Her story is worth reading in any case, and it shows all the facets that make the topic of “own data” so fascinating:
http://www.riggare.se/ and
http://quantifiedself.com


Mood-tracking – a mood diary. People suffering from bipolar disorder try to help themselves by recording their mood and other influences on their lives. By doing so, they can counteract an approaching depression and fine-tune their medication much better than would be possible through the rare visits to their psychiatrist. (Shown here is soundfeelings.com)

Data for research

Self-recorded data maps people’s actions and condition into an uninterrupted image for the first time. For research, these data are significantly richer than the snapshots made by classic clinical research – in terms of case numbers, and also because they make it possible for the first time to include the multivariate influences of all kinds of behavior and environment. Even if only a small fraction of self-trackers is willing to share their data with researchers, it is hard to overstate the value the resulting findings will have for medicine.

Privacy

The difficulty with these data: they are so rich and so personal that it is always possible to drill down to the single individual. Anonymization, e.g. by deleting the user ID or the IP address, is not possible. Like fingerprints, the trace we leave in the data can always identify us. This problem cannot be solved by even more privacy regulation. Already today, the mandatory commitment to informed consent and to data avoidance impedes research with medical data to such an extent that it is hardly worthwhile to work with it at all. The only remedy would be comprehensive legal protection. Every person sharing their data with research has to be sure that no disadvantages will come from their cooperation. Insurance companies and employers must not take advantage of people’s openness. This could be shaped similarly to anti-discrimination laws: today, for example, insurance companies are not allowed to differentiate their rates by the insured person’s gender.

Algorithm ethics

Another issue lies within the data itself. First, arbitrary technical differences like hardware defects, compression algorithms, or sampling rates make the data hard to match. Second, it is hardly ever the raw data itself, but rather mathematical abstractions derived from the data, that get further processed. Fitbit or Jawbone UP don’t store the three-dimensional measurements of the gyroscope, but the steps calculated from them. However, what is regarded as a step, and what as another kind of movement, is an arbitrary decision by the author of the algorithm programmed for this task. Here it is important to open the black boxes of the algorithms. Just as the EU Commission demands that Google open its search algorithms, suspecting (probably with good reason) that Google clandestinely discriminates against unwelcome content, we have to demand that the makers of tracking devices let us see behind them.
Data is generated by the users. The users must have a say in what is made from it.

There is no privacy in mobile

Our phones register in radio cells to route calls into the phone network. When we move around, we occasionally leave one cell and enter another. So our movements leave a trace through the cells we pass in the course of the day. Yves-Alexandre de Montjoye and his co-authors at MIT explored how many observations are needed to identify a specific user. Based on actual data provided by telephone companies, they calculated that just four observations are sufficient to identify 95% of all mobile users. So little evidence suffices because people’s movement patterns are surprisingly unique; like our fingerprints, they are more or less reliable identifiers.

Location

When we analyzed the raw data that we collect through our mobile sensor framework ‘explore’, we found several other fingerprint-like traces that all of us continuously drop by using our smartphones. Obviously we can reproduce de Montjoye’s experiment with much more granular resolution when we use the phone’s own location tracking data instead of the rather coarse grid of the cells: GPS and mobile positioning spot us with high precision.

Wifi

Inside buildings, we receive the surrounding Wifi networks. Each Wifi has a unique identifier, the BSSID, and provides lots of other useful information.

Wifi networks in reception around my office. When the location of the wifi emitter is known, we can use signal strength to locate users within buildings.

Even the arbitrary label “SSID” can often be telling: you can immediately see what kind of printer I use.

Magnetic fields

To provide compass functionality, most smartphones carry a magnetic flux sensor. This probe monitors the surrounding magnetic fields in all three dimensions.

Each location has its very own magnetic signature. Many things we do also leave telling magnetic traces – like driving a car or riding on a train. In this diagram you see my magnetic readings. You can immediately detect when I was home and when I was traveling.

Battery

The way we use the phone affects its power consumption. This can be monitored via the battery charge probe:

The battery drain and charge pattern is quite unique and also tells the story of our daily lives.

Hardware artifacts

All the sensors in our phones have typical and very unique inaccuracies. In the gyroscope data shown at the top of the page, you see spikes that shoot out from the average pattern quite regularly. Such artifacts, caused by small hardware defects, are specific to a single phone and can easily be used to re-identify it.

No technical security

“We no longer live in a world where technology allows us to separate communications we want to protect from communications we want to exploit. Assume that anything we learn about what the NSA does today is a preview of what cybercriminals are going to do in six months to two years.”
Bruce Schneier, “NSA Hacking of Cell Phone Networks”

As Bruce Schneier points out in his post, there are more than enough hints that we should not regard our phones as private. Not only have we learned how corrosive governmental surveillance has been for a long time; there are also plenty of commercial offerings to breach the privacy of our communications and tap into the other, even more telling data.

But what to do? We can’t just opt out. For most people, not using a mobile phone is not an option. And frankly: I don’t want to quit my mobile. So how should we deal with it? Well, for people like me – white, privileged, supported by a legal system that grants me civil rights protection – this is more a discomfort than a real threat. But for everyone else, people who cannot be confident that the system will protect them, the situation is truly grim.

First, we have to show people what the data tells about them. We have to make people understand what is happening, because most people don’t. I am frequently baffled by how naive even data experts often are.

Second, as Bruce Schneier argues, we have to get the NSA and other governmental agencies to use their knowledge to protect us, to patch security breaches rather than exploit them for spying.

Third, it is more important than ever to work and fight for a just society with broad protection of not only civil but also human rights. Adelante!

“Mobile Data: Under the Hood”

The slides of our talk at Munich DataGeeks Meetup:
“A smartphone is a mighty array of sensors. How to access the data, and get meaningful information from the various readings, like geo-location, gyroscope, accelerometer, or even the magnetic flux.
We also discuss the ethical implications of mobile tracking: informational self-determination, ‘other-tracking’ vs. self-tracking, and how to do spooky things with apparently innocent measurements.”

Data Courtesy

Picture above: The court of Louis XVI is regarded as the ecstasy of courtesy. Esprit, the bon mot and courtly attire had been overdone to an extent never to be reached again. The end: the Terror – the most uncourtly form of social cohabitation.

“Privacy invasion is now one of our biggest knowledge industries.”
“The more the data banks record about us, the less we exist.”
Marshall McLuhan

“Handle so, dass du die Menschheit sowohl in deiner Person, als in der Person eines jeden anderen jederzeit zugleich als Zweck, niemals bloß als Mittel brauchst.”
(“Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end and never merely as a means to an end.”)
Immanuel Kant

“Being socially exposed is OK when you hold a lot of privilege, when people cannot hold meaningful power over you, or when you can route around such efforts. Such is the life of most of the tech geeks living in Silicon Valley. But I spend all of my time with teenagers, one of the most vulnerable populations because of their lack of agency (let alone rights). […] The odd thing about forced exposure is that it creates a scenario where everyone is a potential celebrity, forced into approaching every public interaction with the imagined costs of all future interpretations of that ephemeral situation.”

I completely sympathize with danah boyd’s concern. Usually we recommend that children and teenagers “stay safe from the dangers of the Internet”. But what does that mean? Should they abstain from connecting to others on Facebook? And how should a teenager ask a peer or friend not to post a picture in which she is visible? (The option of letting their parents solve problems with unwanted pictures for them is not realistic – apart maybe from cases of gross denigration and mobbing.)

There is no real choice. Either we will be regarded as whimsical, even Luddite, or we will leave a broad track of data in the world. As time goes by, our behavior maps us into an image, a projection into the data realm. This image of ourselves lies more or less in the open. And many clandestine and creepy companions lurk around, watching our lives in the mirror of our data: Google, Facebook, advertising targeting systems, shop recommendation engines, and finally the governmental surveillance services, crouching behind the curtain, waiting for us to fail.

But indiscretion is not restricted to professional data krakens. Our profiles with personal information, our posts, our check-ins can be read by anyone who wants to. And indeed this is our very intention: of course I love people following me on Twitter, and I have met some of my closest friends in social networks. Social media works through authenticity – this buzzword has been written and repeated so many times that it leaves a stale taste. But it is true: if we are not open and do not actually tell about ourselves, we will hardly get in contact with others. It is part of the culture of social media (as of social life in general) to disclose details about ourselves, even if they could be used against us. Like me, posting my drinking habits quite regularly; and of course I want people who know me, or who have an interest in me, to read this. But what would it feel like if someone set up a “Joerg-drinks” bot, publishing statistics about my wine tweets? Without context this would certainly draw a rather unfavorable picture of me.

From personal experience, I would judge the damage from mortifications by “manual” data access to be much more severe than that from the professional analyses data krakens perform for commercial use. And while for the latter, privacy and self-determination rights can be defined and often also enforced legally (e.g. via laws against unfair competition), assaults on personal data by individuals can hardly be contained by legal means. Where does stalking start? What exactly is an insult, what is malicious gossip? The worst part is that the victim is forced to act in self-defense – the “Streisand effect”: sneering and mockery are poured on people who “just don’t understand the rules”, who are “stupid enough” to try to resist.

“Just because people can profile, stereotype, and label people doesn’t mean that they should.” (danah boyd in her essay)

If you ask yourself where to set the limits on what may be done with data, the answer is not really hard:
data courtesy

Courtesy is a cultural technique for maintaining distance. We are courteous to organize our distance from others, so as not to offend them. We become courteous by keeping within our boundaries, which are defined not by laws or other written rules but by our understanding of, respect for, and sympathy with others. Courtesy is the esprit de conduite, the good spirit of conduct. What was defined as religious command or feudal duty in ancient or medieval times was unfolded as philosophy during the Enlightenment. While still an act of dominance and power of the strengthening king over the weakening noblemen at the court of Louis XIV, esprit became a bourgeois habit after the French Revolution. And Kant’s maxims of human dignity were made into explicit recommendations for everyday life by Adolph von Knigge in his guidebook “Über den Umgang mit Menschen”.

Even before the invention of the web, the community of the first users on the net had formulated the Netiquette. “When someone makes a mistake – whether it’s a spelling error or a spelling flame, a stupid question or an unnecessarily long answer – be kind about it.” Kindness – an integral part of courtesy – was a topic even back then.

“Gar zu leicht missbrauchen oder vernachlässigen uns die Menschen, sobald wir mit ihnen in einem vollkommen vertraulichen Tone verkehren. Um angenehm zu leben, muss man fast immer als ein Fremder unter den Leuten erscheinen.”
(“Far too easily people abuse or neglect us as soon as we converse with them in a completely confidential tone. To live pleasantly, one must almost always appear as a stranger among people.”)
Adolph von Knigge

So the culture of courtesy of the 19th century might be well suited to our age of post-privacy. Courtesy is a cultural thing; cultivated means taken care of. It is time to act carefully with our data, which is so closely tied to us. It is time for data courtesy.

[this text was originally posted in German at slow-media.net]