BYOD – Bring your own device to the car

BYOD – “Bring your own device” – has recently become one of the most debated IT topics. BYOD means that employees use their private computers, smartphones, or tablets at their everyday workplaces and office desks, instead of getting separate hardware administered by the company’s IT department.

More and more people want to use the same technology at work that they have chosen for their private purposes. For younger professionals in particular, it is less and less accepted not to have all their tools at hand. Why would they let themselves be restricted to outdated operating systems, cheap hardware, and crippled internet access? Although some companies have set up rules allowing their employees to use the devices of their choice, for most businesses the concerns still outweigh any potential advantages.

What about cars? Some manufacturers provide rudimentary interfaces via Bluetooth to connect some functionality of our smartphones with the car’s entertainment system. Most, however, seem to believe that people still want to rely solely on the car’s onboard systems. Hardly any model has a proper place to put your device while driving. With our phone stored away in the usual bowl or compartment next to the driver’s seat, we cannot use it directly. We have to access it via the car’s system, which supports only a tiny fraction of the phone’s functionality. To really use the smartphone, we still have to install cheap third-party hands-free car kits.

Using our own mobile devices while driving is not just a matter of laziness. While our own gadgets are up to date, the car’s technology is already outdated when it first hits the road, due to the long development cycles that are unavoidable in car construction.
Furthermore, our apps have optimized user interfaces that are continuously adjusted to users’ behavior. We might drive various cars, some of which we don’t even own. How convenient would it be if we could use the same interface, no matter which model we drive?

Cars should support the technology of our choice. They should become agnostic to the way people want to navigate, listen to music, or even control the climate. Instead of forcing us to rely on their proprietary interfaces, they should give us as much freedom as reasonably possible to control the car with our mobile devices. There may be limits due to security concerns.

Just last week, Jeep had to recall millions of vehicles due to a vulnerability in the cars’ computer system: safety-critical functions could be accessed wirelessly. BYOD might be a good occasion to rethink the architecture of automotive electronic systems. Accessing entertainment, air conditioning, and other passenger support systems is not as dangerous as controlling acceleration, airbags, or the brakes. The different systems should be physically separated. While the core of the car should be protected and not accessible without proper authorization, the peripherals should be as easy to connect as possible.

I have seen people taping their phones atop the car’s dashboard after losing patience with the clumsy user interface of the built-in navigation system. Does anyone use these dinosaurs of consumer electronics anymore, at all? It is high time to change the way car companies treat their drivers. BYOD is a good first step.

Breaking Bad Habits with Self-Tracking

For the fourth time, the Quantified Self Conference took place in Amsterdam. Quantified Self is a way to get “self knowledge through numbers”, as the two founders Kevin Kelly and Gary Wolf put it: learning about one’s life by measuring various aspects of our bodily functions, our actions, habits, and environment. With all kinds of tracking devices, from simple step counters to complex sleep monitors, now generally available in every consumer electronics store, Quantified Self has matured from a nerdy, rather esoteric niche into a mainstream trend. In many countries, healthcare institutions are experimenting with self-tracking, and there is a plethora of self-tracking apps for iOS and Android smartphones.

“Self-tracking is about change. But change is more often not about doing, but about stopping to do something.” Gary Wolf introduced this year’s conference with a keynote about breaking routines. A routine, he remarked, is a method to fight entropy; it consumes energy to maintain routines. Routines are efficient as long as the conditions remain unchanged, but they restrain us from acting freely. For most people, self-tracking is about uncovering routines in daily life, making bad habits visible, and then guiding the change by supplying an indicator.

When self-tracking is used to break habits, it opens additional degrees of freedom. Self-tracking is thus not so much about self-discipline, about restricting actions and living according to more rules, but about pushing the boundaries and gaining relief from constraints that are not necessary but exist just because we are used to doing things that way.

Bad habits can creep into all our everyday activities. And self-tracking is not limited to counting steps or measuring blood pressure. There are already a few apps that support people by tracking their driving. Acceleration (and braking), turning, and speed can easily be tracked with the sensors built into every smartphone. From the readings of these probes, indexes can be derived that give feedback on the quality and safety of driving. Becoming aware of bad habits can not only help drivers save energy by learning to drive more ecologically, but also reduce stress and lower the risk of accidents. Self-tracking can thereby help drivers act more consciously, and thus give them more freedom on the road.
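To illustrate the idea, here is a minimal sketch of such a driving index, assuming gravity-corrected accelerometer samples in m/s². The function names and the 3 m/s² threshold are invented for illustration; the actual apps mentioned above use their own, more elaborate models.

```python
import math

def driving_events(accel_samples, threshold=3.0):
    """Count harsh acceleration/braking/turning events.

    accel_samples: (ax, ay, az) tuples in m/s^2, gravity removed.
    threshold: magnitude above which a sample counts as harsh
    (3 m/s^2 is an illustrative choice, not an established value).
    """
    events = 0
    for ax, ay, az in accel_samples:
        if math.sqrt(ax * ax + ay * ay + az * az) > threshold:
            events += 1
    return events

def driving_score(accel_samples, threshold=3.0):
    """A 0..100 index: the smaller the share of harsh events, the higher."""
    if not accel_samples:
        return 100.0
    rate = driving_events(accel_samples, threshold) / len(accel_samples)
    return round(100.0 * (1.0 - rate), 1)

# Mostly gentle driving, with one harsh braking sample out of ten.
samples = [(0.2, 0.1, 0.0)] * 9 + [(0.5, -6.0, 0.0)]
print(driving_score(samples))  # -> 90.0
```

A real implementation would additionally smooth the signal and account for sampling rate, but the principle of deriving a single feedback number from raw probe readings is the same.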

Second Screen

In the US, smartphone and tablet displays have replaced the TV set as screen number one. Meanwhile, not only has the time spent on mobile devices become longer than the time spent on the tube; above all, the intensity of mobile use, the attention people dedicate to mobile content, is higher.

Smartphones and tablets have thus become the first screen.

Nevertheless, there are plenty of occasions when a smartphone is out of place, or even annoying: at the table, during a lively conversation, or while driving a car, to name just three obvious examples. My Twitter friend Jürgen Geuter has put in one sentence why smartwatches will be the remedy here: checking a wrist watch is socially accepted. Completely within common boundaries, we may glance at our smartwatch to see whether the awaited WhatsApp reply has arrived, just as we would have consulted a watch to get the time. Likewise, smartwatches are handy in the literal sense of the word when we drive a car, and in combination with speech recognition many smartphone apps will work even better than a hands-free kit.

Smartwatches will thus become the second screen, the companion of our smartphones. For this reason alone, they will find their buyers.

We are the content!

Since 1967, the Consumer Electronics Show (CES) in Las Vegas has been the showcase for the latest products in electronic entertainment. Every first week of January, the big brands unveil the secret products that will soon be stacked on the shelves, or rather, listed in the online shops.

Until recently, CES was a home game for the classic industry: hi-fi, TV sets, car radio, and so on. Even as Apple, Microsoft, and the like have played an increasingly important role, the focus at this trade show was still “classic” home entertainment: smart TV, home cinema, computer games.

TV is dowdy

For 2015, too, the industry had set up its stage. Companies like Samsung had made their strategy clear in advance: even fancier TV sets with even more features. But things turned out quite differently. It seemed as if nobody wanted to watch the TV screens at this year’s CES. Classic home electronics appeared like a relic from the 70s. At CES 2015, if not before, it became clear that TV is no longer the “first screen”.

Three Trends at CES 2015

Three trends dominated the visitors’ interest and the media coverage of this year’s CES:

First: the connected home, in particular smoke detectors, thermostats, and other totally inconspicuous devices, which however gain enormously in usefulness when connected via the internet. It is thus not the “talking refrigerator” that industry journalists have predicted for years, but classic, dry appliances.

Second: the connected car. What the automotive industry showed at CES was really amazing: autonomous cars, ready for the market; convincing systems for accident prevention; and other novelties in driving safety that we can now expect in new cars.

Third: wearable tech, electronics that we carry directly on our bodies. Apart from fitness gadgets like wristbands, pedometers, or connected scales, there are in fact hundreds of smartwatches with all kinds of applications, from fitness to caring for the severely ill.

All three trends have one thing in common: The display by which we interact with them is mobile.

Our smartphones are already no longer just the second screen behind the TV set. In the US, the time spent with mobile devices has exceeded the time spent watching TV. And the attention we dedicate to our mobile screens, checking them more than 200 times a day, is far greater.

The Content

The content that works on our mobiles is by no means the same as online content with just less space on the screen. On our smartphones and tablets, we mainly use direct communication. Social media, especially Facebook and Twitter, and in particular image-sharing platforms like Instagram, are still the closest to what used to be “classic” media. But more and more forms of connected communication draw users by the millions or even billions: messaging apps like WhatsApp or Snapchat, or totally new forms like Yo.

What not long ago would have been derogatorily called ‘user generated content’, UGC, has become simply the content.

The trends at CES show how this development will evolve further. Next to the content that users post consciously and deliberately, like texts or images, more and more automatically generated data is added. Every morning, thousands of automatic posts are published by fitness apps like Runtastic about their users’ training. Lifelogging cameras like the Narrative Clip document our daily routine pictorially without our being active; Jawbone reports our deep sleep phases to all connected friends in our user group. With “Bring Your Own Data”, the sharing of very intimate data gathered by our gadgets becomes a serious part of our health care: our self-tracked data supports our doctors in helping us.

Social media (shared thoughts and pictures) and self-tracking (shared data) are two sides of the same coin: the most interesting stories are the humans themselves.

There is no privacy in mobile

Our phones register in radio cells to route calls into the phone network. When we move around, we occasionally leave one cell and enter another. Our movements thus leave a trace through the cells we pass in the course of the day. Yves-Alexandre de Montjoye and his co-authors from MIT explored how many observations are needed to identify a specific user. Based on actual data provided by telephone companies, they calculated that just four observations are sufficient to identify 95% of all mobile users. So little evidence is needed because people’s movement patterns are surprisingly unique; just like our fingerprints, they are more or less reliable identifiers.
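The unicity measure behind this result can be sketched with a toy computation. The traces and function names below are invented for illustration and are of course vastly smaller than the telephone-company datasets the MIT team worked with:

```python
import random

def is_unique(points, traces):
    """True if exactly one trace contains all of the given points."""
    return sum(1 for t in traces.values() if points <= t) == 1

def fraction_identified(traces, n_points=4, seed=0):
    """Fraction of users uniquely identified by n_points random
    (cell, hour) observations drawn from their own trace."""
    rng = random.Random(seed)
    identified = 0
    for trace in traces.values():
        sample = set(rng.sample(sorted(trace), n_points))
        if is_unique(sample, traces):
            identified += 1
    return identified / len(traces)

# Invented toy traces: each user's day as a set of (cell, hour) points.
traces = {
    "alice": {("A", 8), ("B", 9), ("C", 12), ("A", 18), ("D", 20)},
    "bob":   {("A", 8), ("E", 9), ("F", 12), ("G", 18), ("A", 20)},
    "carol": {("H", 8), ("B", 9), ("I", 12), ("J", 18), ("K", 20)},
}
print(fraction_identified(traces))  # -> 1.0 (all three are unique)
```

Even though the users share some cells at some hours, four observations already pin each of them down; with realistic, much richer traces the effect is the same.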


When we analyzed the raw data that we collect through our mobile sensor framework ‘explore’, we found several other fingerprint-like traces that all of us continuously drop by using our smartphones. Obviously we can reproduce de Montjoye’s experiment with much more granular resolution when we use the phone’s own location tracking data instead of the rather coarse grid of the cells: GPS and mobile positioning spot us with high precision.


Inside buildings, there are the Wifi networks in reception range. Each Wifi has a unique identifier, the BSSID, and provides lots of other useful information.

Wifis in reception range around my office. When the location of the Wifi emitter is known, we can use signal strength to locate users within buildings.

Even the arbitrary label, the SSID, can often be telling: you can immediately see what kind of printer I use.
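A common way to turn signal strength into a distance estimate is the log-distance path loss model. The sketch below assumes the emitter’s reference power at 1 m is known; both constants are illustrative and strongly environment-dependent:

```python
def rssi_to_distance(rssi_dbm, ref_power_dbm=-40.0, path_loss_exp=2.0):
    """Distance in meters from a Wifi RSSI reading, using the
    log-distance path loss model. ref_power_dbm is the expected RSSI
    at 1 m distance; path_loss_exp ~2 models free space, higher
    values model walls and obstacles. Both are illustrative here."""
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# With -40 dBm expected at 1 m, a -60 dBm reading suggests roughly 10 m.
print(rssi_to_distance(-60.0))  # -> 10.0
```

With distance estimates to three or more known emitters, a position inside the building can then be trilaterated.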

Magnetic fields

To provide compass functionality, most smartphones carry a magnetic flux sensor. This probe monitors the surrounding magnetic fields in all three dimensions.

Each location has its very own magnetic signature. Many things we do also leave telling magnetic traces, like driving a car or riding on a train. In this diagram you see my magnetic readings: you can immediately detect when I was home and when I was traveling.
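A minimal sketch of how such readings could be segmented, assuming three-axis flux values in microtesla. The labels and thresholds are invented for illustration, not calibrated values:

```python
import math

def flux_magnitude(bx, by, bz):
    """Total magnetic flux density (microtesla) from three-axis readings."""
    return math.sqrt(bx * bx + by * by + bz * bz)

def classify(samples, earth_field=50.0, tolerance=15.0):
    """Label each (bx, by, bz) sample 'ambient' when its magnitude is
    close to the Earth's field (roughly 25-65 uT at the surface) and
    'disturbed' when a strong local source, e.g. a car or a train,
    dominates. Thresholds here are illustrative, not calibrated."""
    return ["ambient" if abs(flux_magnitude(*s) - earth_field) <= tolerance
            else "disturbed" for s in samples]

readings = [(20.0, 30.0, 30.0), (80.0, 90.0, 10.0)]
print(classify(readings))  # -> ['ambient', 'disturbed']
```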


The way we use the phone affects its power consumption. This can be monitored via the battery charge probe:

The battery drain and charge pattern is quite unique, and it too tells the story of our daily lives.

Hardware artifacts

All the sensors in our phones have typical and very unique inaccuracies. In the gyroscope data shown at the top of the page, you see spikes that shoot out from the average pattern quite regularly. Such artifacts, caused by small hardware defects, are specific to a single phone and can easily be used to re-identify it.
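A crude version of such a spike detector flags samples that deviate strongly from the mean; the threshold and the sample values below are invented for illustration:

```python
def find_spikes(samples, k=2.5):
    """Indices whose value deviates more than k standard deviations
    from the mean: a crude detector for the hardware spikes described
    above. The choice k = 2.5 is illustrative."""
    n = len(samples)
    mean = sum(samples) / n
    std = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return [i for i, x in enumerate(samples) if abs(x - mean) > k * std]

# Invented gyroscope trace: small noise plus one hardware spike.
gyro = [0.01, 0.02, 0.00, -0.01, 0.02, 0.95, 0.01, -0.02, 0.00, 0.01]
print(find_spikes(gyro))  # -> [5]
```

A fingerprinting attack would go further and correlate the timing and shape of such spikes across sessions, but even this simple statistic shows how much a “defect” can reveal.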

No technical security

“We no longer live in a world where technology allows us to separate communications we want to protect from communications we want to exploit. Assume that anything we learn about what the NSA does today is a preview of what cybercriminals are going to do in six months to two years.”
Bruce Schneier, “NSA Hacking of Cell Phone Networks”

As Bruce Schneier points out in his post, there are more than enough hints that we should not regard our phones as private. Not only have we learned how corrosive governmental surveillance has been for a long time; there are also lots of commercial offerings to breach the privacy of our communication and to tap into the other, even more telling data.

But what to do? We can’t just opt out. For most people, not using mobile phones is not an option. And frankly: I don’t want to quit my mobile. So how should we deal with it? Well, for people like me – white, privileged, supported by a legal system that provides civil rights protection – this is more a discomfort than a real threat. But for everyone else, people who cannot be confident that the system will protect them, the situation is truly grim.

First, we have to show people what the data tells about them. We have to make people understand what is happening, because most people don’t. I am frequently baffled by how naive even data experts often are.

Second, as Bruce Schneier argues, we have to get the NSA and other governmental agencies to use their knowledge to protect us, to patch security breaches rather than exploit them for spying.

Third, it is more important than ever to work and fight for a just society with general protection of not only civil but also human rights. Adelante!

Access Smartphone Data With our new API

Datarella now provides an API for our app ‘explore’ that allows every user to access the data collected and stored by the app.

An Application Programming Interface, API for short, is an interface for accessing software or databases externally. Web APIs, which give access via the internet, have become a principal precondition for most businesses on the web. Whenever we pay for something online with our credit card, the shop system accesses our account via the API of the card-issuing company. eBay, Amazon, PayPal: they all provide APIs so that their whole functionality can be included in our own websites’ services. Most social networks offer APIs, too. Through these we can post automated messages, analyze data about usage and reach, or control ad campaigns.

The ‘explore’ app was developed by Datarella to access the smartphone’s internal sensors (or probes) and to store the data. It is, however, not just about standard location data, widely known from Google Maps. ‘explore’ reads all movements in three dimensions via the gyroscope, acceleration, and the magnetic fields in the environment. Mobile network providers and Wifis in reception range are tracked as well. From these data we can learn many interesting things about ourselves, our surroundings and environment, and our behavior. To set the data in context, the API also returns data from other users. For the sake of privacy and informational self-determination, this is aggregated and averaged over several users, so that identifying a specific person is not possible.
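To give an idea of how such an API would be used, here is a minimal sketch that parses an aggregated response. The JSON field names and values are invented for illustration; consult the actual API documentation linked below for the real format:

```python
import json

# A hypothetical aggregated response; the field names are invented to
# mirror the privacy-preserving, averaged output described above.
sample_response = """
{
  "probe": "magnetic_flux",
  "aggregation": "mean",
  "user_count": 42,
  "readings": [
    {"timestamp": "2015-03-01T08:00:00Z", "value": 48.2},
    {"timestamp": "2015-03-01T09:00:00Z", "value": 51.7}
  ]
}
"""

def mean_reading(raw_json):
    """Average the aggregated sensor values in one API response."""
    data = json.loads(raw_json)
    values = [r["value"] for r in data["readings"]]
    return sum(values) / len(values)

print(round(mean_reading(sample_response), 2))  # -> 49.95
```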

With our API, Datarella commits to open data: we are convinced that data has to be available to its users.

➜ Here is our API’s documentation:

➜ Here the download-link for ‘explore’:

We are excited to learn what you will make of the data.

Download the ‘explore’ app here.

Crowdsourced environmental monitoring

Many parameters a smartphone incidentally measures are useful for monitoring the environment. We have recently discussed how air pollution with particulate dust can be monitored with a simple add-on to the phone’s camera. But there are even more subtle ways in which users can help to research and monitor environmental conditions.

Another example is given by A. Overeem and colleagues, who track urban temperatures over time in various metropolitan regions around the globe. The approach is as simple as it is powerful: a regression over the battery temperature (which every smartphone measures anyway).
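The regression itself is ordinary least squares. The calibration pairs below are invented for illustration, not data from the study:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = a * x + b, one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Invented calibration pairs: (mean battery temp C, outdoor air temp C).
battery = [25.0, 27.0, 29.0, 31.0]
air = [10.0, 14.0, 18.0, 22.0]
a, b = fit_line(battery, air)
print(a * 30.0 + b)  # predicted air temperature for a 30 C battery: 20.0
```

Averaged over thousands of phones in a city, such a simple model smooths out individual usage effects (charging, gaming, a phone left in the sun) and tracks the ambient temperature surprisingly well.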

The microphone, too, can give valuable data on local environmental conditions, given an unlimited number of participating users. Sound levels show noise emission that can be located in space and time. Noise is regarded as a prime source of stress, but rather little is known about the changes that occur in different microgeographic regions.

Apps like Weather Signal thus use a combination of the phone’s sensors to contribute to a richer model of weather conditions.

Apart from passively deploying the phones as sensor boards themselves, it is of course also possible to collect data from other local sources and just transmit the results via smartphone. This can be done by letting users take a picture of some scale reading, which is then processed via image recognition. Or you simply ask people to enter their readings or observations into some kind of questionnaire.

The fascinating thing is: since so many people in almost every country carry a smartphone, monitoring environmental conditions and changes is now possible on far larger scales than ever before.

Tracking Lung Function with the Phone’s Microphone

Asthma is one of the most common chronic conditions. For many who are affected, it would be necessary to monitor their lung function much more frequently than by visiting their doctor once or twice a year.

Spirometers, which measure the volume of air taken in and out with a breath, are expensive, and even if you bought one, you would still have to carry another device with you. Smartphones are ubiquitous; everybody carries one. This is what makes mHealth so powerful, after all.

SpiroSmart is an app that makes use of the most basic function of any phone: the microphone. By exhaling your lungs’ entire content into the phone’s mike at a full arm’s length, SpiroSmart calculates the breath capacity. The app analyzes the dynamics of the sound the exhaling makes, fulfilling the task of classic spirometers, which do the same with a small fan propelled by the exhaled breath inside a mouthpiece. The error rate lies close to the parameters set by the American Thoracic Society (ATS).
SpiroSmart was developed by an interdisciplinary team at the University of Washington in Seattle.

“Tracking Lung Function on any Phone”. Poster by E. Larson

Mapping particulate dust with phones

iSpex device on a smartphone. Image by Sebastiaan ter Burg, published under license CC BY 2.0
iSpex is a plastic contraption that can be clipped onto a smartphone’s camera. In this simple slit spectrograph, light is diffracted and polarized by shining through birefringent plastic sheets and a polarization film. iSpex measures how aerosols (microscopic or nanoscopic particles hovering in the atmosphere) change the polarization of the highly polarized light that shines from a cloudless, blue sky. This change in polarization renders a distinct pattern in the spectrum that the iSpex device casts into the phone’s camera. By this approach, iSpex can measure how polluted the air is with particulate dust, which is regarded as especially unhealthy and has become the topic of fierce political discussion since the EU ordered city governments to regulate and even lock out automotive traffic.

Behind iSpex stands a consortium of the Netherlands Research School for Astronomy at University of Leiden, Netherlands Institute for Space Research (SRON), National Institute for Public Health and the Environment (RIVM), and the Royal Netherlands Meteorological Institute (KNMI).

Over the course of summer and autumn 2013, thousands of people in the Netherlands participated in “national iSpex days”, jointly measuring particulate dust. The first results of this awesome social effort have been published, and we can hope this project will find imitators in other countries.

iSpex website:
Measuring aerosols with spectropolarimetry