IOTA Hackathon: Fraud Detection (Part 2)

This is the second installment in our posts about the experiences of the "Freedom Pass" team during the IOTA Hackathon. In the first post (found here), Kira set the stage and explained the current issues with the London Freedom Pass. In this post, we'll go into more detail about how we built the project.

DISCLAIMER: Even though the project is called "Fraud Detection", the technological focus is very much on IOTA and not at all on the machine-learning methodologies or data science that one would commonly associate with fraud detection and prevention.

After we'd narrowed the scope down to what we thought would be achievable during a hackathon, we started getting familiar with the IOTA tangle. We followed this tutorial for making a simple transaction, written only a few weeks earlier but already requiring some modifications. After familiarizing ourselves with the general concepts of the Tangle (much accelerated by a presentation and Q&A by Chris Dukakis of IOTA), we connected to a testnet node and started issuing transactions.

Before we get into the details of the project, a short comment on the decision whether to run a full node – the IOTA Reference Implementation (IRI) – or to connect to pre-existing nodes. In short, running the IRI requires a complete Java Runtime Environment, which is one of the reasons why IOTA can't be run on an IoT device at this point. Each node connected to the tangle exposes an HTTP API through which transactions can be issued. To set up an instance of the IRI, one has to acquire the addresses of nodes already connected to the tangle; the recommended way to do this is by asking people in the Slack channel #nodesharing. Because of these restrictions and our time constraints, we didn't think it necessary to run our own node.
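Since we connected to an existing node, all interaction happens over that node's HTTP API: every call is a JSON POST with a `command` field and the `X-IOTA-API-Version` header. As a rough sketch (the endpoint URL below is a placeholder, not a real node):

```javascript
// Minimal sketch of an IRI HTTP API call: a JSON POST with a
// "command" field and the X-IOTA-API-Version header.
function buildRequest(command) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-IOTA-API-Version': '1',
    },
    body: JSON.stringify({ command }),
  };
}

const req = buildRequest('getNodeInfo');
console.log(req.body); // {"command":"getNodeInfo"}

// Against a live testnet node one would then do something like:
// fetch('http://node.example.org:14265', req).then(r => r.json()) ...
```

The actual network call is left commented out; in practice the library wraps these commands for you.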

Back to the task of solving the problem of fraud in the application process for the Freedom Pass in London boroughs. We settled on the JavaScript library since it does a lot of the heavy lifting on top of the API and is by far the best-documented library. (The winning team used the mostly undocumented Python library and still managed to interact fairly smoothly with the tangle.) iota.lib.js implements the standard API as well as some useful extra functionality like signing, unit conversion, and reading from the tangle. In our project, we set out to supply the following interactions between the tangle and our users:

  1. Register Doctor as a seed on the tangle
  2. Register Applicant as a seed on the tangle
  3. Perform a transaction for each certificate from the issuing Doctor to the Applicant.
  4. Verify that a certificate was registered on the tangle given a Doctor and an Applicant
  5. Read information off of the tangle about outgoing transactions from all Doctors

Given the above functionality, how could we leverage the existing IOTA library in the best way possible? Well, since smart contracts or most types of advanced transactions aren’t really possible on IOTA (yet), we will need some off-tangle processing, storage and UI.

For this, we implemented a backend and some wrapping to process the information from the applications. The server side was written using Node.js and the Express framework. To model the logic and structure of the database, we used MongoDB and mongoose. The database contained a simple key-value store holding the relevant applicant information. One could imagine upgrading it to a graph model to better mirror the tangle structure and to analyse connections between Doctors and Applicants more efficiently; however, that was out of scope during the ~24h of coding we had.
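To make the shape of the stored data concrete, here is a rough sketch of the kind of application record we kept in the collection; the field names and the tiny validator are illustrative, not our exact mongoose schema:

```javascript
// Illustrative shape of an application record (field names are
// examples, not the exact schema from the hackathon project).
const exampleApplication = {
  applicantName: 'Jane Doe',
  nationalInsuranceNumber: 'QQ123456C',
  applicantPostcode: 'SW1A 1AA',
  doctorId: 'GMC-0000000',
  doctorPostcode: 'E1 6AN',
  phoneNumber: '+44 20 7946 0000',
};

// A tiny validator standing in for mongoose's `required` checks:
// the two identifiers that seeds are later derived from must exist.
function isValidApplication(app) {
  const required = ['nationalInsuranceNumber', 'doctorId'];
  return required.every(
    (field) => typeof app[field] === 'string' && app[field].length > 0
  );
}

console.log(isValidApplication(exampleApplication)); // true
```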

In order for the user to interact with the tangle in an easy way, we built a small web-frontend. It allows the user to enter information about an application such as the national insurance number of an Applicant, postal code of the Doctor and Applicant, phone numbers, etc. At this stage, four things need to happen:

  1. The information is saved in the MongoDB-collection,
  2. seeds for the Applicant and Doctor are created based on an aggregate of identifying information,
  3. new test tokens are generated and sent to the Doctor’s account and
  4. an IOTA transaction is issued from the Doctor to the Applicant.

To save the information into a MongoDB collection, a controller instantiates and returns a new model containing the just-entered data, and passes it on to server.js, which handles the HTTP requests from the client.

There is no dedicated IOTA API call for generating seeds, but they do supply a command-line command for generating a random seed. We made our seeds relatable to the private information by concatenating the private key with the national insurance number for Applicants and with the Doctor's ID for Doctors. Once a seed has been generated, a fresh address is created for each new transaction.
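IOTA seeds are 81-character strings over the tryte alphabet A–Z plus 9. A sketch of the kind of derivation described above could look like the following; `deriveSeed` is a hypothetical helper, not our actual code, and seeds derived from guessable identifiers are weak on their own, which is why a private key is mixed in:

```javascript
// Sketch of deriving a deterministic 81-tryte seed from identifying
// information. Hypothetical helper; production seeds should come
// from a cryptographically secure random source.
const TRYTE_ALPHABET = '9ABCDEFGHIJKLMNOPQRSTUVWXYZ';

function deriveSeed(privateKey, identifier) {
  const input = privateKey + identifier;
  let seed = '';
  for (let i = 0; i < 81; i++) {
    // Cycle over the input and map each character into the alphabet.
    const code = input.charCodeAt(i % input.length) + i;
    seed += TRYTE_ALPHABET[code % TRYTE_ALPHABET.length];
  }
  return seed;
}

const seed = deriveSeed('SOMEPRIVATEKEY', 'QQ123456C');
console.log(seed.length); // 81
```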

To make the functions from iota.lib.js a bit more usable, we wrapped the existing callback-based structure in Promises. This made the asynchronous code a bit easier to compose than it is 'out of the box'.

Here is an overview of the architecture:

„Freedom Pass“ System Architecture

Once the data and the transactions were issued, the next step was to provide a way of viewing the existing applications and certificates. So we created a second page of the UI for listing all applications with relevant information read from the MongoDB-collection.

UI for entering Doctor’s and Applicant’s data

This doesn't, however, provide a great way of finding the main type of fraud we were considering, namely Applicants reusing information about Doctors. This makes it look like a single Doctor issued an unreasonable number of certificates. A pretty easy case to catch, one would think, but since it is a completely analog process, done on paper in different boroughs by different administrators, it adds up to quite a large number of faked applications. This is the type of fraud we focused on in our processing.

So how can we flag cases that should be investigated in a user-friendly way? We chose the simplest option and created a second view of the UI where each Doctor in the system is listed along with the number of certificates they've supposedly issued, sorted by that number. One could imagine making this a bit smarter by including the date each certificate was issued and deriving a more differentiated metric of certificates per time unit, but that wasn't in scope this time around. If a Doctor issued more than 10 certificates, they were highlighted in red – a very simple but potentially effective way of telling the user that something needs to be investigated. Of course, the number 10 was completely arbitrary and could have been chosen differently; to decide on it properly, one would first have to analyze historical data.
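The flagging logic itself is a simple aggregation. A sketch of it (the helper name `buildHitlist`, the field names, and the threshold constant are ours, chosen to match the description above):

```javascript
// Count certificates per doctor, sort descending, and flag doctors
// above the (arbitrary) threshold for investigation.
const THRESHOLD = 10;

function buildHitlist(certificates) {
  const counts = new Map();
  for (const cert of certificates) {
    counts.set(cert.doctorId, (counts.get(cert.doctorId) || 0) + 1);
  }
  return [...counts.entries()]
    .map(([doctorId, count]) => ({
      doctorId,
      count,
      flagged: count > THRESHOLD,
    }))
    .sort((a, b) => b.count - a.count);
}

const certs = [
  ...Array(12).fill({ doctorId: 'DOC-1' }),
  ...Array(3).fill({ doctorId: 'DOC-2' }),
];
console.log(buildHitlist(certs));
// [ { doctorId: 'DOC-1', count: 12, flagged: true },
//   { doctorId: 'DOC-2', count: 3, flagged: false } ]
```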

Hitlist of certificates issued by Doctors

To sum up, Team Freedom had a lot of fun and learned tons about IOTA, ideation, cooperation, and creation in a short time frame. We managed to build a functioning proof of concept for how IOTA can be used for the secure issuing of medical certificates in order to prevent and detect fraud. We applied it to the Freedom Pass so that it would be easier to understand what was being done and why, but that in no way means the base structure cannot be used for other purposes; in fact, it was written specifically to be general enough to be interesting in other areas as well.

Is this the only way the problem could have been solved? No. Was it the easiest way of solving it? Absolutely not. However, we believe that only by experimenting with and utilizing one of the few scalable and future-resistant distributed ledger solutions can we achieve real applicability. Generally speaking, almost every distributed ledger application could have been built without a distributed ledger – but at great financial, organizational, or trust cost. IOTA is a very cost-effective and scalable solution, with the caveat that it is still in its infancy.

Team "Freedom Pass" at the IOTA Hackathon in Gdansk, Poland

Here is an overview of all reports on the IOTA Hackathon's projects:

1st place – "PlugInBaby":
Team "PlugInBaby": Open Car Charging Network (Part 1)
…describes the idea and the pivot of the project
Team "PlugInBaby": Open Car Charging Network (Part 2)
…describes the technical level and provides resources

2nd place – "Freedom Pass":
Team Freedom Pass: Fraud Detection (Part 1)
…describes the high level of the project
Team Freedom Pass: Fraud Detection (Part 2)
…describes the technical level of the project

Wearable Data Hack Munich 2015

Today, we would like to announce something special – something we can't wait to see happen; until mid-June it's going to be tough to sit tight. Please feel invited to our Wearable Data Hack Munich 2015!

The Wearable Data Hack Munich 2015 is the first hack day on wearable tech applications and data. It will take place right after the launch of the Apple Watch – the gadget we expect to raise the tide for all wearables. With the Wearable Data Hack Munich 2015, we aim to kick off app development for the emerging smartwatch and wearable tech market. During this weekend you will have the first occasion to share your views and ideas and jointly gather experience with the new data realm.

Apple calls the Apple Watch "Our most personal device ever". And with good cause: the data from wearable tech, smartphones, and smartwatches is really the most personal data ever. Our mobile devices accompany every step we take, every move we make. A plenitude of sensors on the devices draws a multidimensional picture of our daily lives. Applications of wearable data range from fitness to retail, from automotive to health. There is hardly an industry that cannot make direct use of it. And yet, wearable apps are still in their infancy. The Apple Watch will be hitting the street in April and will get the ball rolling.

The Wearable Data Hack Munich 2015 is jointly organized by Stylight and Datarella.

Developers, data geeks and artists will pursue one or more of these threads:
– Data-driven business models for wearables
– Data-driven wearables
– Smartwatch app (stand-alone / combined with smartphone)
– User Experience
– Open Data
– mHealth / Medical Data

So let’s explore what we can do with this data! Let’s play with the possibilities of our wearable gadgets and mobile sensors.

To apply for the Wearable Data Hack Munich 2015, please send us an email with
– your name
– your profession
– your take on wearable data
– 3 tags describing yourself best.
Don't wait too long – the number of participants is limited.

For more information, please have a look here! See you at Wearable Data Hack Munich 2015!

The Datarella World Map Of Behavior

Every smartphone user produces more than 20 MB of data per day, collected by her phone's sensors. Now imagine the sensor data of 2 billion smartphone users worldwide, translated into real-time human behavior and shown on a global map. That is the vision of the Datarella World Map of Behavior.

A typical 2015-generation smartphone sports up to 25 sensors, measuring things as diverse as movement, noise, light, or magnetic flux. Most smartphone users aren't even aware that their phone's camera or microphone is never really "off", but constantly collects data about the noise level or the intensity of light the user is experiencing.

Actions speak louder than words
Actions speak louder than words – if we want to really know a person, we have to know how she behaves, not only what she says. And that's not only true for politicians. We all form our opinions of others by looking at their actions more than their words. Many interpersonal problems result from NOT looking at people's actions but focusing on other aspects, such as their looks or their words. Behind superficial distinctions such as physical appearance, over time we often discover similarities with other people based on their actions and ours.


Our vision of a World Map of Behavior
At Datarella, our vision is to show the actions of people around the world on a global map. By displaying the actions of people on all continents, we want to tell stories about the differences and similarities of global human behavior – to draw a picture of human co-existence. There already are snapshots of global behavior provided by data-focused companies such as Jawbone, who map sleep patterns worldwide. From that we know that Russians get up the latest and the Japanese get the least sleep in total. And there are various behavior-related maps showing the world's most dangerous places, defined by the number and seriousness of crimes or acts of war.

Co-operation & Participation
Creating the World Map of Behavior is our ambitious project for 2015, and we won't complete it alone. We need your support: perhaps you are an expert in the field of mobile sensor data, or your company already focuses on collecting and interpreting mobile sensor data in the fields of mobility, finance, health, or transport and travel. If you are interested in playing a role in this project, please send us an email with a brief description of how you would like to contribute. We are looking forward to hearing from you!

Data is the new media

Data storytelling, data journalism, and even data fiction – since the advent of Big Data, we find data more and more often as a tool of narrative. With pattern recognition, exploratory data analytics, and especially data visualization, data has re-centered from the quantitative to the qualitative.

More and more applications support us in using data to tell a story. Dashboards like Tableau or DataLion plug into our data sources and translate the numbers into a visual format that can be much more easily digested. Even highly multivariate data can deliver straightforward meaning to us when we use tools like Gephi, or say, the notorious Palantir. These tools also make social media analytics and text mining feasible techniques to research society, advertising, and markets.

Jawbone Up not only tracks our sleep. The app also shares our data in a meaningful way with our friends – like we share our thoughts on Twitter.
Data-driven storytelling has conquered most of non-fiction publishing. News publishers like the New York Times or The Guardian employ huge teams of infographic specialists to enrich their reports with meaningful data visualization, and some of their editors have put together awesome collections of beautiful examples.

Our most personal data however is generated on our mobile and wearable devices. On our smartphones, wristbands, or smartwatches, some twenty sensors continuously track our behavior and our actions. There is a plenitude of apps making use of mobile data: To support our training, to guide our routes, to find friends nearby, to share images, etc. etc.

Many people already share their daily workout via apps like Strava or Runtastic. It is even quite common to let such apps automatically post your training results into your social media timeline, e.g. to Twitter or Facebook.

Apps like Jawbone Up or Strava not only track our workouts, they also provide an easy way to share the data they measure. We publish our training data there the same way we publish our stories on Twitter or Facebook. Our data becomes equivalent to the texts and images we post. The most highly integrated version of this data-as-story so far is Google Now.

Image on top: Google Now. Google Now follows the idea of displaying all kinds of information in the form of tiles, like Twitter or Facebook would display the posts of the people you follow in a timeline. Funny enough, Google obviously has no clue where my "place of work" seems to be.

Data is media not only with regard to content. Advertising, which has by and large been data-driven for decades, is facing a major transformation. Media planning and buying – the art of placing ads in the most efficient way, i.e. optimizing effect for a given budget – is changing dramatically. About 20% of all ads are now placed programmatically. Programmatic buying means that an algorithm decides which exact user should see the ad, instead of the spot being bought via an explicit insertion order, as it used to be. The decision whether a certain user matches the campaign's objective is made by predictions based on the user's observed behavior. Data thus drives the ads we get displayed.

With the idea of 'The Quantified Self', data starts to conquer even the concept of our identity. We are not only what we tell, how we appear, and how we act voluntarily; we are also defined by our innards, by our bodies' functions, by the data that comes from our physical being. The concept of 'self' is changing with this notion, overcoming the strict separation of mind and body, of conscious and unconscious. The physical aspects of our lives now get equal credit as a veritable part of our being ourselves.

Data is becoming an integral part of our stories. It pervades all media. We should learn to see data as part of our lives the same way we are used to telling about things with words.

Further reading:

We are content!
Data stories: From facts to fiction.

A Bit of Data Science – What Your Battery Status Tells About You

Working with lots of data, the biggest challenge is not to store or handle the data – those jobs are far from trivial, but there are solutions for nearly any kind of problem in this space. The real work with data starts when you ask yourself: what's behind the data? How can you interpret it? What story can you tell with it? That's what we do, and we want to share some of our findings with you and motivate you to join our discussion about the meaning of the data. We want to create Data Fiction.

Today, we start with some sensor data collected by our explore app – the smartphone's battery status, including the charging process. Below you see sample data for our users' behavior during the week (feature visual) and at the weekend (Figure 1).

Smartphone Battery Weekend
Figure 1: Smartphone Battery Status (weekend) (Datarella)

In the feature visual you can see that most users charge their smartphones around 7 a.m. and again around 5 p.m. What does that tell us? First, we know when most users wake up in the morning – around 7 a.m. Most probably they used their smartphone's alarm function and then connected their devices to the power supply. In the late afternoon, they charge their devices a second time – probably at their office desks – before they leave their workplaces. During weekends, the charging behavior is different: people get up later and maybe use their devices for reading, social networking, or gaming before they reconnect them to their power supplies.

Late rising leads to an average minimum battery status of 60% during weekends, whereas during the week users let their smartphones' batteries go down to 50%. This 10% difference is interesting, but the real surprise is the absolute minimum battery status of 50% or 60%, respectively. It seems that the days of "zero battery" and hazardous action to get your device "refilled" are completely over.

For some, data is art. And often, it’s possible to create data visualizations resembling modern art. What do you think of this piece?

Figure 2: Battery Charging Matrix (Datarella)

This matrix shows the daily smartphone charging behavior of explore users by time of day. Each color value represents a battery status (red = empty, green = full). So you can either print it and use it as data art on your office wall, or think about the different charging types: some people seem to "live on the edge", others do everything (i.e. charge early) to stay on the safe side of smartphone battery status.

What are your thoughts on this? When and how often do you charge your mobile device? Would you describe your charging behavior as "charging on the edge" or "safe"? We would love to read your thoughts! Come on – let's create Data Fiction!

The design of the explore app – The Datarella Interview

Today, we speak with Kira Nezu (KN), Co-founder of Datarella, about the design of the explore app.

The explore app is available for Android smartphones only. What is the reason not to launch an iPhone version, too?

We started to develop explore as a so-called MVP, a Minimum Viable Product. We chose Android to start with since it offers more variety regarding sensor and phone data. So we only test and make mistakes on one platform. At some point, we will also launch an iPhone version.

explore consists of two different elements: the sensor tracking and the interaction area with surveys, tasks and recommendations. Could you tell us more about the structure and the functionalities of the app?

With the MVP we are trying to stay as flexible as possible to enable fast changes and bug fixing. So we decided to create a hybrid app which incorporates native and web elements. The native part is basically the container with most of the graphics; its content is dynamically fetched from our backend, whereas the result area is fully created with web views. This brings great flexibility: we can update our content within minutes.

Regarding the structure there are 3 areas:
– main content area – divided into the survey area and recommendations,
– menu area,
– result area.

Regarding surveys: there are already mobile survey apps on the market. How does the explore app differ from those?

Before designing the app, we did a lot of research on existing survey apps. We found that the apps either had a very technical design that reminded us of Windows 95, or were very playful but simplistic – i.e. one app would show two images, and the user could tap one of them to make a choice.

We want the user to have a playful experience while keeping the flexibility of different interaction formats.

You call explore a Quantified Self app. Can you elaborate on that?

The Quantified Self aspect of explore relies on regular interactions which ask the user for the same information. In the result area we show the user her personal mood chart, with her own results compared to those of other explore users. Currently we are working on a location heat map in which the user can see her personal location history of the past days – and also that of other users. We had some moments of surprise in internal tests: it took quite a while to recall why we had been at certain locations. You could compare that with boiling water for tea five times before finally remembering to brew your tea.

So what are the next steps for explore?

We will focus on adding more Quantified Self elements to the results area, as well as offering an API for users to play with their own data. We are really looking forward to seeing what our users will come up with! If you are interested in playing with your data now, you are welcome to participate in our Call for Data Fiction.

Thank you very much.

Call for Data Fiction


Do you read science fiction? Can you make data interesting? Can you tell the story behind a pool of data? Are you a data fictionista? Submit your data fiction.

People, animals, plants, and things produce data – a lot of data. The data itself is the basic resource, like words are the basis of language. If you put words together into sentences, combine sentences into chapters, and aggregate several chapters, you write a story – you create fiction. The same goes for data: if you combine different data sources into data pools and aggregate them, you write the story behind the data – you create data fiction.

[Strong narrative] augments the available data by way of context, and extends the patience of the audience by sustaining their interest as well.

Does that sound like you?

We’d love to see and discuss your applications, analyses, case studies and models with you and help you make your data fiction become reality.

The Data
We will provide you with sample data resulting from the usage of our explore app.

The App
The data has been created by users of the explore app. In explore, the user interacts by answering surveys, attending to tasks, and heeding valuable recommendations based on her behavior. She immediately sees the results of her interactions in the feedback area. Second, explore tracks several sensors of the user's phone, which can be turned on and off by the user herself (see the full list of sensors below). explore connects both areas – the interactions and the sensor tracking – with the integrated Complex Event Processing Engine (CEPE).

datarella explore app

The Complex Event Processing Engine (CEPE)
The CEPE is a mechanism for the efficient processing of continuous event streams in sensor networks. It enables the rapid development of applications that process large volumes of incoming messages or events, regardless of whether the incoming messages are historical or real-time in nature. Our CEPE is based on Esper and the Event Processing Language (EPL).
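As a loose illustration of the idea in plain JavaScript (not Esper/EPL and not the actual CEPE), a complex-event rule might watch a battery-status stream and emit a higher-level event whenever the level rises between readings, i.e. when the phone was plugged in:

```javascript
// Toy complex-event rule over a battery-status stream: emit a
// "charging started" event whenever the level rises between two
// consecutive readings. Illustrative only; the real CEPE uses Esper.
function detectChargingStarts(readings) {
  const events = [];
  for (let i = 1; i < readings.length; i++) {
    if (readings[i].level > readings[i - 1].level) {
      events.push({ type: 'CHARGING_STARTED', at: readings[i].time });
    }
  }
  return events;
}

const stream = [
  { time: '06:50', level: 40 },
  { time: '07:00', level: 38 },
  { time: '07:10', level: 55 }, // plugged in around 7 a.m.
];
console.log(detectChargingStarts(stream));
// [ { type: 'CHARGING_STARTED', at: '07:10' } ]
```

In Esper, a rule like this would be expressed declaratively as an EPL statement over the event stream instead of an explicit loop.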

List of Sensors
– GPS location data
– Network location data
– Accelerometer
– Gyroscope
– Wifi
– Magnetic field
– Battery status
– Mobile Network

Your submission should include:
– an overview and extended description or representation of your main idea, any subtopics, and a conclusion
– the use or integration of at least 1 (one) category of sensor data (e.g. gyroscope). If you use GPS location, you should use or integrate at least 1 (one) additional category of sensor data besides GPS location data.

Possible formats:
– Presentation
– Video
– Installation

We will reward fascinating data fiction with preferred access to our data, a post on the QS Blog and the possibility of making data fiction come true.

Yes, I am a data fictionista and want to submit my data fiction!

Big Data helping people to understand real-time pollution risks

From rapid urbanization in China to dung-fired stoves in New Delhi, air pollution claimed 7 million lives around the world in 2012, according to the World Health Organization. Globally, one out of every eight deaths is tied to dirty air – which makes air pollution the world’s single biggest environmental health risk. And, in areas with very bad air pollution, people live an average of 5 fewer years than those in other areas. 

It is not only in Chinese megacities or Indian agricultural areas that people are trying hard to keep air pollution at bay. In Portland, Oregon, a local initiative called Neighbors for Clean Air is using Big Data to make bad air visible. The group is part of an experiment initiated by Intel Labs that uses 17 common, low-cost sensors, each weighing less than a pound, to gather air quality data. This data feeds into websites that analyze and present comprehensible visualizations of it. The sensors themselves are built using an Arduino controller; they measure carbon and nitrogen dioxide emissions, temperature, and humidity.

By making the air pollution problem visible, the experiment not only made people recognize the importance of technology in understanding air quality; it also enabled Neighbors for Clean Air to forge an agreement with a local metal foundry to cut emissions.

If you want to have a look at your own air pollution, go to Air Quality Egg – perhaps one of the several hundred eggs worldwide has been installed in your neighborhood.

Wearable app 'explore' succeeds in field trial

We have been trialing our app explore in the field since the beginning of December, together with our partner Serviceplan Group. Employees of the different branches of Serviceplan tested explore for stability and usability. After four weeks of testing we can happily announce: explore has passed the test with excellence. The app runs smoothly and stably and will be used in the wild from January on.

Main user locations explore app, December 2013 (Source: Datarella)

The app's users have mostly been situated in Germany, but explore was also in use in London, Poland, Thailand, and on the US West Coast, as you can see in the figure above. The aggregated paths and whereabouts impressively map the main routes of the local transit services within Munich, including the lines to the airport, as well as the most frequented areas of the city – Schwabing, Isarvorstadt, and Sendling – and the south-west:

explore usage in Munich

Main user locations within Greater Munich, explore app, December 2013 (Source: Datarella)

All explore participants kept using the GPS functionality over the full span of the trial – no one chose the option to switch off the tracking, which can be done in the settings of the app. This is an important result of the trial: users were notified of the opt-out feature, but no one actively quit the tracking. For the Datarella team this clearly indicates that GPS tracking is generally accepted by the user, at least if she recognizes that she gets something back from it.

Apart from mapping individual paths, the GPS tracking was used to ask users questions about the specific places they passed by and the corresponding times. Thus we could learn why a user stayed at the train station and what she did there apart from getting on a train.

In addition to the location-specific questionnaires, we issued up to three surveys per day, each with up to 10 questions. These surveys covered different parts of life: the job, of course (it was, after all, a trial with employees of a company), but also leisure and other topics like general well-being, media consumption, environmental concerns, and much more. Finding an optimal compromise between the app's usability and the operational feasibility of the results was one of the main tasks of the trial.

Mood barometer: "How are you?"

Survey: "How are you?", explore app, format: smileys, 1 = very bad, 5 = very good (Source: Datarella)

This figure shows the format of the survey (as displayed within the explore app) and how the responses can be analyzed (in our backend), for the example "How are you?" ("Wie fühlst du dich?") from December 3rd to 18th. This question is part of the "mood barometer" survey, which the user can answer on a scale of five smileys. In the analysis, the weeping face corresponds to a value of 1, the beaming smiley to 5. This survey is presented repeatedly and thus makes the mood trend visible over time.

How are you?

Survey: "How are you?", explore app, format: smileys, 1 = very bad, 5 = very good (Source: Datarella)

This is only a small excerpt of the results of the explore field trial. We find it just awesome that the results have been so positive, and we are now deploying the app to a much broader circle of users. This will be announced early in January in another blog post. If you want to participate, just contact us. The app can be downloaded from Google Play.

We want to thank all participants of the field trial and look forward to continuing with an enlarged user base. To all our readers, we wish a good start to 2014!

explore passes its field trial with flying colors

Since the beginning of December, the explore app has been running in a field trial with our partner, the Serviceplan Gruppe. Employees of various subsidiaries of the group have been testing explore for stability and usability. After four weeks of testing, we are pleased to report: explore has passed with flying colors. The app runs smoothly and stably, and in January it will be evaluated for its possible uses in the field.

Main user locations, explore app, December 2013 (Source: Datarella)

As the figure above shows, the app’s users spent most of their time in Germany, but explore was also used in London, Poland, Thailand, and on the west coast of the United States. Within Munich, the aggregated routes and dwell points of the explore testers vividly trace the main lines of the public transport network, including the S-Bahn to the airport, as well as the frequently visited areas of Munich’s city center, with Schwabing, Isarvorstadt and Sendling, and Munich’s southwest:

explore usage in Munich

Main user locations within Greater Munich, explore app, December 2013 (Source: Datarella)

All explore participants used the GPS functionality over the entire test period; nobody made use of the option to switch off tracking in the app’s profile section. This is an important individual result of the test: the participants were aware of this opt-out option, yet none of them actively withdrew from tracking. For the Datarella team, this is a clear indication that users fundamentally accept GPS tracking, at least when they recognize a benefit or purpose in the feature.

Besides enabling the creation of individual route maps, GPS tracking was used during the test to ask users questions about the specific places where they were staying, and about the corresponding times. In this way, we could find out why participants were at the train station, and what else they did there besides, for example, taking a train.
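At its core, this kind of location-triggered question amounts to checking whether a GPS fix falls within a certain radius of a point of interest. A minimal sketch of such a geofence check, with a made-up point of interest and trigger radius (not the app’s actual implementation):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius of 6371 km

# Hypothetical point of interest: Munich central station, 200 m trigger radius.
POI = {"name": "Hauptbahnhof", "lat": 48.1402, "lon": 11.5600, "radius_m": 200}

def should_ask(fix_lat, fix_lon, poi=POI):
    """Trigger the location-specific question when a fix is inside the radius."""
    return haversine_m(fix_lat, fix_lon, poi["lat"], poi["lon"]) <= poi["radius_m"]

print(should_ask(48.1403, 11.5602))  # fix right at the station
print(should_ask(48.1500, 11.5800))  # fix roughly 2 km away
```

Combining such a check with the timestamp of the fix yields the place-and-time context in which the corresponding question is shown to the user.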
