Earlier this year, Datarella and Wirecard AG started a collaboration on a couple of blockchain projects. One of them is RAW.coin: making the trading of commodities more efficient through the use of blockchain.
We started with a thorough analysis of various commodity supply and trade chains, which led us to identify the following challenges:
rising pressure from global competition
many intermediaries and complex governance structures
end-consumers demanding ever-higher levels of transparency
supply chain stakeholders struggling to maintain an adequate overview of their networks and the associated supply costs
difficulty in ensuring the quality and integrity of raw materials
We decided to narrow the PoC down to one specific use case: coffee beans. The reason is not simply that we’re huge fans of (good) coffee; coffee is the second most sought-after commodity globally after crude oil. It has a trading volume of $100B per year and is grown in 50 countries (in some of which Wirecard offers financial services). Finally, the coffee supply chain has a large number of middlemen and intermediaries adding marginal value but capturing a large share of the end price paid by consumers.
The basic idea of RAW.coin is to digitise trading mechanisms and replace middlemen. As a larger vision, we aim to establish the RAW.coin network to become the ecosystem for supply chains, ensuring the origin, quality, compliance and proper handling of items tracked by the network.
The solution connects the Producer and the Importer while providing a marketplace where any commodity can be traded. It’s based on Ethereum, using smart contracts and a modern decentralised architecture. We implemented smart contracts to represent the terms and conditions of the network and, at the same time, to enforce them through automation. Additionally, we integrated with Wirecard to provide truly seamless B2B payments and fiat interactions. All transactions are done using the cryptocurrency RAW, which can be immediately exchanged into fiat using the Wirecard Gateway API.
This is an actual screenshot of the current product:
We are currently looking into the best way to scale the solution up, and into which other commodities and markets to apply it to. If you wish to learn more, tweet us at @datarella or contact us!
A: Mobility as a Service (MaaS), achieved by connecting Ethereum smart contracts with data streams on IOTA.
Q: What problem does your project solve?
A: We wanted to develop a case in the field of autonomous urban air mobility, better known through the concept of air taxis. To implement such an autonomous mobility concept, you have to bring together different partners. Making an air taxi fly is actually a bit more complicated than sending an autonomous car into traffic, because you need some very important permissions before you are allowed to take off.
There is, for example, air traffic control, which has to check all the details of your flight to make sure that no other vehicle is on the same route. Or the weather forecast service telling you whether conditions are okay for your flight. Beyond the permits, there are also other stakeholders who should be integrated into the process. What about the insurance company offering real-time insurance conditions depending on how many passengers you have and how long your flight will take? Or the booking platform where everybody can easily book a flight in an air taxi? In the end, it simply makes sense to involve all parties in smart contracts on a blockchain and automate as many processes as possible. And this is what we did.
Q: What expertise and roles do your team members have?
A: Our team consisted of one designer and two backend developers, all with extensive experience in blockchain projects.
Q: Which technologies do you use for which purposes?
A: Mainly, we used Ethereum-based smart contracts to bring the different participants together on a decentralized network. To get real-time data from the air taxi, we used IOTA’s data streaming technology. We created a process that automatically opens a new data stream the moment the air taxi takes off; from then on, we push the stream data into our Ethereum-based smart contracts. To demonstrate the connection between IOTA and Ethereum, we used the air taxi’s battery data. The battery will be a critical point for every autonomous process in the future: while the air taxi is flying, the surrounding partners, such as the marketplaces where you book your air taxi, need to know when the taxi will be available for the next flight, and that depends heavily on the current status of the battery. So, to know in advance how long the air taxi has to charge after a flight before it can return to the air, you need real-time information about the battery’s condition.
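To make the battery example concrete, here is a minimal sketch of the availability logic described above. All names, the linear charge model, and the in-memory stream are illustrative assumptions, not the actual evan.network implementation:

```javascript
// Hypothetical sketch: estimate when an air taxi is available again,
// based on battery readings pushed from a data stream.

// Estimate minutes until the battery reaches the level required for takeoff,
// assuming a constant (linear) charge rate.
function minutesUntilReady(batteryPct, requiredPct, chargeRatePctPerMin) {
  if (batteryPct >= requiredPct) return 0;
  return Math.ceil((requiredPct - batteryPct) / chargeRatePctPerMin);
}

// A minimal in-memory stand-in for the IOTA data stream: in the real setup
// the latest reading would be forwarded to the Ethereum smart contract.
const stream = [];
function pushReading(batteryPct) {
  stream.push({ batteryPct, at: Date.now() });
}

pushReading(35);
const latest = stream[stream.length - 1];
console.log(minutesUntilReady(latest.batteryPct, 80, 1.5)); // 30
```

A marketplace could poll this estimate to show the next bookable slot, while the contract itself only needs the stream’s latest value.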
Q: How do you plan to proceed with your project?
A: We will use the demo we developed to show what types of use cases can be implemented on evan.network.
This is the second installment in our posts about the experiences of the “Freedom Pass” team during the IOTA Hackathon. In the first post (found here), Kira set the stage and explained the current issues of the London Freedom Pass. In this post, we’ll get a bit more detailed with regards to how we built the project.
DISCLAIMER: Even though the project is called “Fraud Detection”, the technological focus is very much on IOTA and not at all on the machine-learning methodologies or data science one would commonly associate with fraud detection and prevention.
After we’d narrowed the scope down to what we thought would be achievable during a hackathon, we started getting familiar with the IOTA Tangle. We followed this tutorial for making a simple transaction; it was written only a few weeks earlier but already required some modifications. Once we were familiar with the general concepts of the Tangle (much accelerated by a presentation and Q&A by Chris Dukakis of IOTA), we connected to a testnet node and started issuing transactions.
Before we get into the details of the project, a short comment on the decision between running a full node via the IOTA Reference Implementation (IRI) and connecting to pre-existing nodes. In short, running the IRI requires a complete Java Runtime Environment, which is one of the reasons why IOTA can’t be run on an IoT device at this point. Each node connected to the tangle exposes an HTTP API through which transactions can be issued. To set up an instance of the IRI, one has to acquire the addresses of nodes already connected to the tangle; the recommended way to do this is by asking people in the #nodesharing Slack channel. Given these constraints and our limited time, we decided against running our own node.
Register Doctor as a seed on the tangle
Register Applicant as a seed on the tangle
Perform a transaction for each certificate from the issuing Doctor to the Applicant.
Verify that a certificate was registered on the tangle given a Doctor and an Applicant
Read information off of the tangle about outgoing transactions from all Doctors
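As a sketch of the verification step above, assuming the transactions have already been read off the tangle: the in-memory list and its field names are stand-ins for illustration, not the actual iota.lib.js response format.

```javascript
// Hypothetical stand-in for transactions read off the tangle.
const transactions = [
  { from: 'DOCTOR9SEED9A', to: 'APPLICANT9SEED9X', tag: 'CERTIFICATE' },
  { from: 'DOCTOR9SEED9B', to: 'APPLICANT9SEED9Y', tag: 'CERTIFICATE' },
];

// A certificate counts as registered if at least one transaction exists
// from the Doctor's address to the Applicant's address.
function certificateRegistered(txs, doctorSeed, applicantSeed) {
  return txs.some(
    tx => tx.from === doctorSeed && tx.to === applicantSeed && tx.tag === 'CERTIFICATE'
  );
}

console.log(certificateRegistered(transactions, 'DOCTOR9SEED9A', 'APPLICANT9SEED9X')); // true
```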
Given the above functionality, how could we leverage the existing IOTA library in the best way possible? Well, since smart contracts or most types of advanced transactions aren’t really possible on IOTA (yet), we will need some off-tangle processing, storage and UI.
For this, we implemented a backend and some wrapping to process the information from the applications. The server side was written using Node.JS and the express framework; to model the logic and structure of the database, we used MongoDB and mongoose. The database contained a simple key-value store holding the relevant applicant information. One could imagine upgrading it to a graph model to better mirror the tangle structure and to analyse connections between Doctors and Applicants more efficiently; however, that was out of scope during the ~24h of coding we had.
In order for the user to interact with the tangle in an easy way, we built a small web-frontend. It allows the user to enter information about an application such as the national insurance number of an Applicant, postal code of the Doctor and Applicant, phone numbers, etc. At this stage, four things need to happen:
The information is saved in the MongoDB-collection,
seeds for the Applicant and Doctor are created based on an aggregate of identifying information,
new test tokens are generated and sent to the Doctor’s account and
an IOTA transaction is issued from the Doctor to the Applicant.
To save the information into a MongoDB collection, a controller instantiates and returns a new model containing the just-entered data, then passes it on to server.js, which handles the HTTP requests from the client.
There is no dedicated IOTA API call for generating seeds, but a command-line one-liner for generating a random seed is supplied. We made our seeds relatable to the private information by concatenating the private key with the national insurance number for Applicants and the Doctor’s ID for Doctors. Once a seed is generated, a fresh address is created for each new transaction.
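The seed derivation could be sketched roughly as follows. IOTA seeds are 81 characters from the tryte alphabet (A–Z and 9); the character mapping, padding, and function names here are illustrative assumptions rather than our exact hackathon code.

```javascript
// Hypothetical sketch of deriving a deterministic seed from identifying data.
const TRYTE_ALPHABET = '9ABCDEFGHIJKLMNOPQRSTUVWXYZ';

function deriveSeed(privateKey, identifier) {
  const raw = (privateKey + identifier).toUpperCase();
  let seed = '';
  for (const ch of raw) {
    // Map every character deterministically onto the tryte alphabet.
    seed += TRYTE_ALPHABET[ch.charCodeAt(0) % TRYTE_ALPHABET.length];
  }
  // Pad with '9' (or truncate) to the required 81 characters.
  return seed.padEnd(81, '9').slice(0, 81);
}

// e.g. the Applicant's national insurance number as the identifier
const seed = deriveSeed('SECRETKEY', 'QQ123456C');
console.log(seed.length); // 81
```

The same inputs always yield the same seed, which is what makes the certificate relatable to the person without storing the raw data on the tangle.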
To make the functions from iota.lib.js a bit more usable, we wrapped the existing callback-based structure in Promises. This made our asynchronous code easier to compose than it is ‘out of the box’.
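A minimal sketch of such a wrapper, with `sendTransfer` here as a stand-in defined inline — it only mimics the usual Node `(error, result)` callback shape of the real iota.lib.js calls:

```javascript
// Wrap a Node-style callback function in a Promise.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) => {
      fn(...args, (err, result) => (err ? reject(err) : resolve(result)));
    });
}

// Stand-in for a callback-based API call such as iota.api.sendTransfer.
function sendTransfer(seed, transfers, callback) {
  setImmediate(() => callback(null, { seed, transfers, attached: true }));
}

const sendTransferAsync = promisify(sendTransfer);
sendTransferAsync('SOME9SEED', [{ value: 0 }]).then(res =>
  console.log(res.attached) // true
);
```

With the wrapper in place, issuing a transaction becomes a single awaitable expression instead of nested callbacks.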
Here is an overview of the architecture:
Once the data was saved and the transactions issued, the next step was to provide a way of viewing the existing applications and certificates. So we created a second page in the UI listing all applications with the relevant information read from the MongoDB collection.
This doesn’t, however, provide a great way of finding the main type of fraud we were considering: Applicants reusing information about Doctors, which makes it look as if a single Doctor issued an unreasonable number of certificates. A pretty easy case to catch, one would think; but since the process is completely analog, done on paper in different boroughs by different administrators, it adds up to quite a large number of faked applications. This is the type of fraud we focused on in our processing.
So how can we flag cases that should be investigated in a user-friendly way? We chose the simplest option and created a second view of the UI where each Doctor in the system is listed along with the number of certificates they have supposedly issued, sorted by that number. One could imagine making this a bit smarter by including the date each certificate was issued and building a more differentiated metric of certificates per time unit, but that wasn’t in scope this time around. If a Doctor had issued more than 10 certificates, they were highlighted in red: a very simple but potentially effective way of communicating to the user that something needs to be investigated. Of course, the number 10 was completely arbitrary; to choose it properly, one would first have to analyze historical data.
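The flagging logic itself fits in a few lines; the names and data shapes below are illustrative:

```javascript
// Arbitrary threshold, as discussed above; a real deployment would derive
// it from historical data.
const FLAG_THRESHOLD = 10;

// Rank Doctors by certificates issued, flagging those above the threshold.
function rankDoctors(certCounts) {
  return Object.entries(certCounts)
    .map(([doctor, count]) => ({ doctor, count, flagged: count > FLAG_THRESHOLD }))
    .sort((a, b) => b.count - a.count);
}

const ranked = rankDoctors({ 'Dr. A': 3, 'Dr. B': 27, 'Dr. C': 11 });
console.log(ranked[0]); // { doctor: 'Dr. B', count: 27, flagged: true }
```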
To sum up, Team Freedom had a lot of fun and learned tons about IOTA, ideation, cooperation, and creation in a short time frame. We managed to build a functioning Proof of Concept for how IOTA can be used for the secure issuing of medical certificates in order to prevent and detect fraud. We applied it to the Freedom Pass so that it would be easier to understand what was being done and why, but that in no way means the base structure cannot be used for other purposes. In fact, it was written specifically to be general enough to be interesting in other areas.
Is this the only way the problem could have been solved? No. Was it the easiest way of solving it? Absolutely not. However, we believe that only by experimenting with one of the few scalable and future-resistant distributed ledger solutions can we achieve real applicability. Generally speaking, almost any distributed ledger application could have been built without a distributed ledger, but doing so would have incurred great financial, organizational, or trust costs. IOTA is a very cost-effective and scalable solution, with the caveat that it is still in its infancy.
Here is an overview of all reports on the IOTA Hackathon’s projects:
Today, we would like to announce something special. Something we can’t wait for, and until mid-June it’s going to be tough to sit tight. Please feel invited to our Wearable Data Hack Munich 2015!
The Wearable Data Hack Munich 2015 is the first hack day on wearable tech applications and data. It will take place right after the launch of the Apple Watch, the gadget we expect to raise the tide for all wearables. With the Wearable Data Hack Munich 2015, we aim to kick off app development for the emerging smartwatch and wearable tech market. During this weekend you will have the first occasion to share your views and ideas and jointly gather experience with the new data realm.
Apple calls the Apple Watch “our most personal device ever”, and with good reason: the data from wearable tech, smartphones and smartwatches is truly the most personal data ever. Our mobile devices accompany every step we take, every move we make. A plenitude of sensors on these devices draws a multidimensional picture of our daily lives. Applications of wearable data range from fitness to retail, from automotive to health; there is hardly an industry that cannot make direct use of it. And yet wearable apps are still in their infancy. The Apple Watch will hit the streets in April and get the ball rolling.
THREADS TO BE PURSUED
Developers, data geeks and artists will pursue one or more of these threads:
– Data-driven business models for wearables – Data-driven wearables – Smartphone app (Stand alone / combined with smartphone) – User Experience – API – Open Data – mHealth / Medical Data
So let’s explore what we can do with this data! Let’s play with the possibilities of our wearable gadgets and mobile sensors.
To apply for the Wearable Data Hack Munich 2015, please send us an email with
– your name
– your profession
– your take on wearable data
– 3 tags describing yourself best.
Don’t wait for too long – the number of participants is limited.
For more information, please have a look here! See you at Wearable Data Hack Munich 2015!
Every smartphone user produces more than 20 MB of data collected by her phone’s sensors per day. Now, imagine the sensor data of 2 billion smartphone users worldwide, translated into realtime human behavior, shown on a global map. That is the vision of the Datarella World Map of Behavior.
A typical 2015-generation smartphone sports up to 25 sensors, measuring things as diverse as movement, noise, light, or magnetic flux. Most smartphone users aren’t even aware that their phone’s camera or microphone is never really “off”: the phone constantly collects data about the noise level and the intensity of light the user is experiencing.
Actions speak louder than words
If we want to really know a person, we have to know how she behaves, not only what she says. And that’s not only true for politicians. We all form our opinions of others by looking at their actions more than their words. Many inter-personal problems result from NOT looking at people’s actions but focusing on other aspects, such as their looks or their words. Behind superficial distinctions such as physical appearance, over time we often recognize similarities with other people based on their and our actions.
Our vision of a World Map of Behavior
At Datarella, our vision is to show the actions of people around the world on a global map. By displaying the actions of people on all continents, we want to tell stories about the differences and similarities of global human behavior and draw a picture of human co-existence. There already are snapshots of global behavior provided by data-focused companies such as Jawbone, who map sleep patterns worldwide. From that we know that Russians get up the latest and that the Japanese get the least sleep in total. And there are various behavior-related maps showing the world’s most dangerous places, defined by the number and seriousness of crimes or acts of war.
Co-operation & Participation
Creating the World Map of Behavior is our ambitious project for 2015, and we won’t complete it alone. We need your support if you are an expert in the field of mobile sensor data, or if your company focuses on collecting and interpreting mobile sensor data in mobility, finance, health, transport or travel. If you are interested in playing a role in this project, please send us an email with a brief description of how you would like to contribute. We are looking forward to hearing from you!
Data storytelling, data journalism, and even data fiction: since the advent of Big Data, we find data used more and more as a tool of narrative. With pattern recognition, exploratory data analytics, and especially data visualization, the emphasis of data has shifted from the quantitative to the qualitative.
More and more applications support us in using data to tell a story. Dashboards like Tableau or DataLion plug into our data sources and translate the numbers into a visual format that can be much more easily digested. Even highly multivariate data can deliver straightforward meaning to us when we use tools like Gephi, or say, the notorious Palantir. These tools also make social media analytics and text mining feasible techniques to research society, advertising, and markets.
Data-driven storytelling has conquered most of non-fiction publishing. News publishers like The New York Times or The Guardian employ huge teams of infographic specialists to enrich their reports with meaningful data visualization. Some editors have put together awesome collections of beautiful examples, e.g. informationisbeautiful.net.
Our most personal data however is generated on our mobile and wearable devices. On our smartphones, wristbands, or smartwatches, some twenty sensors continuously track our behavior and our actions. There is a plenitude of apps making use of mobile data: To support our training, to guide our routes, to find friends nearby, to share images, etc. etc.
Apps like Jawbone Up or Strava not only track our workouts, they also provide an easy way to share the data they measure. We publish our training data there the same way we publish our stories on Twitter or Facebook; our data becomes equivalent to the texts and images we post. The most highly integrated version of this data-as-story so far is Google Now.
Data is media, and not only with regard to content. Advertising, which has by and large been data-driven for decades, is facing a major transformation. Media planning and buying – the art of placing ads in the most efficient way, i.e. optimizing effect for a given budget – is changing dramatically. About 20% of all ads are now placed programmatically. Programmatic buying means that an algorithm decides which exact user should see the ad, instead of the spot being bought via an explicit insertion order, as it used to be. Whether a certain user matches the campaign’s objective is predicted from the user’s observed behavior. Data thus drives the ads we get displayed.
With the idea of ‘The Quantified Self’, data starts to conquer even the concept of our identity. We are not only what we tell, how we appear, and how we act voluntarily; we are also defined by our innards, by our bodies’ functions, the data that comes from our physical being. This notion is changing the concept of ‘self’, overcoming the strict separation of mind and body, of conscious and unconscious. The physical aspects of our lives now get equal credit as a veritable part of our being ourselves.
Data is becoming an integral part of our stories. It pervades all media. We should learn to see data as part of our lives in the same way we are used to telling about things with words.
When working with lots of data, the biggest challenge is not storing or handling it – those jobs are far from trivial, but there are solutions for nearly every kind of problem in this space. The real work with data starts when you ask yourself: what’s behind the data? How could you interpret it? What story can you tell with it? That’s what we do, and we want to share some of our findings with you and motivate you to join our discussion about the meaning of the data. We want to create Data Fiction.
Today, we start with some sensor data collected by our explore app: the smartphone’s battery status, including the charging process. Below you see sample data for our users’ behavior during the week (feature visual) and at the weekend (Figure 1).
Figure 1: Smartphone Battery Status (weekend) (Datarella)
In the feature visual you see that most users charge their smartphones around 7 a.m. and again around 5 p.m. What does that tell us? First, we know when most users wake up in the morning: around 7 a.m. Most probably they have used their smartphone’s alarm function and then connect the device to the power supply. In the late afternoon, they charge their devices a second time – probably at their office desks – before they leave their workplaces. During weekends, the charging behavior is different: people get up later, and perhaps use their devices for reading, social networking or gaming before they reconnect them to their power supplies.
Late rising leads to an average minimum battery status of 60% during weekends, whereas during the week users let their smartphone batteries go down to 50%. This 10% difference is interesting, but the real surprise is the absolute minimum battery status of 50% or 60%, respectively. It seems that the days of “zero battery” and hazardous actions to get your device “refilled” are completely over.
For some, data is art. And often, it’s possible to create data visualizations resembling modern art. What do you think of this piece?
Figure 2: Battery Loading Matrix (Datarella)
This matrix shows the daily smartphone charging behavior of explore users per time of day. Each color value represents a battery status (red = empty, green = full). So you can either print it and use it as data art on your office wall, or think about the different charging types: some people seem to “live on the edge”, while others do everything (i.e. charge often) to stay on the safe side of smartphone battery status.
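For the curious, here is a rough sketch of how such a matrix could be computed from raw readings. The data shape and the linear red-to-green colour mapping are illustrative assumptions, not our actual pipeline:

```javascript
// Average battery status per hour of day from raw readings.
function hourlyAverages(readings) {
  const sums = Array(24).fill(0);
  const counts = Array(24).fill(0);
  for (const { hour, batteryPct } of readings) {
    sums[hour] += batteryPct;
    counts[hour] += 1;
  }
  // null marks hours without any readings.
  return sums.map((s, h) => (counts[h] ? s / counts[h] : null));
}

// Map a battery percentage linearly between red (empty) and green (full).
function toColor(batteryPct) {
  const g = Math.round((batteryPct / 100) * 255);
  return { r: 255 - g, g, b: 0 };
}

const avg = hourlyAverages([
  { hour: 7, batteryPct: 40 },
  { hour: 7, batteryPct: 60 },
  { hour: 17, batteryPct: 80 },
]);
console.log(avg[7], toColor(avg[7])); // average battery at 7 a.m. and its colour
```

Each row of the printed matrix would then be one day, with 24 such coloured cells.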
What are your thoughts on this? When and how often do you charge your mobile device? Would you describe your charging behavior as “charging on the edge” or “safe”? We would love to read your thoughts! Come on – let’s create Data Fiction!
Q The explore app is available for Android smartphones only. What is the reason not to launch an iPhone version, too?
We started developing explore as a so-called MVP, a Minimum Viable Product. We chose to start with Android since it offers more variety regarding sensor and phone data; this way, we test and make mistakes on only one platform. At some point, we will also launch an iPhone version.
Q explore consists of two different elements: the sensor tracking and the interaction area with surveys, tasks and recommendations. Could you tell us more about the structure and the functionalities of the app?
With the MVP we are trying to stay as flexible as possible to enable fast changes and bug fixing. So we decided to create a hybrid app which incorporates native and web elements. The native part is basically the container with most of the graphics; the content is dynamically fetched from our backend, and the result area is fully built with web views. This brings great flexibility: we can update our content within minutes.
Regarding the structure, there are three areas:
– main content area – divided into the survey area and recommendations,
– menu area,
– result area.
Q Regarding surveys: there are already mobile survey apps on the market. How does the explore app differ from those?
Before designing the app, we did a lot of research on existing survey apps. We found that the apps either had a very technical design reminiscent of Windows 95, or were very playful but simplistic – e.g. one app would show two images, and the user could tap one of them to make a choice.
We want the user to have a playful experience while keeping the flexibility of different interaction formats.
Q You call explore a Quantified Self app. Can you elaborate on that?
The Quantified Self aspect of explore relies on regular interactions which ask the user for the same information. In the result area we show the user her personal mood chart, with her own results compared to those of other explore users. Currently we are working on a location heat map in which the user can see her personal location history of the past days – and also that of other users. We had some surprising moments in internal tests: it took quite a while to recall why we had been at certain locations. You could compare that with boiling water for tea five times before finally remembering to brew your tea.
Q So what are the next steps for explore?
We will focus on adding more Quantified Self elements as results, as well as offering an API for users to play with their own data. We are really looking forward to seeing what our users will come up with! If you are interested in playing with your data now, you are welcome to participate in our Call for Data Fiction.
Do you read science fiction? Can you make data interesting? Can you tell the story behind a pool of data? Are you a data fictionista? Submit your data fiction.
People, animals, plants and things produce data – a lot of data. Data itself is the basic resource, like words are the basis of language. If you put words together into sentences, combine sentences into chapters, and aggregate several chapters, you write a story – you create fiction. The same goes for data: if you combine different data sources into data pools and aggregate them, you write the story behind the data – you create data fiction.
[Strong narrative] augments the available data by way of context, and extends the patience of the audience by sustaining their interest as well.
Does that sound like you?
We’d love to see and discuss your applications, analyses, case studies and models with you and help you make your data fiction become reality.
DATA, APP & COMPLEX EVENT PROCESSING ENGINE
The Data
We will provide you with sample data resulting from the usage of our explore app.
The data has been created by users of the explore app. In explore, the user interacts by answering surveys, attending to tasks and heeding valuable recommendations based on her behavior; she immediately sees the results of her interactions in the feedback area. In addition, explore tracks several sensors of the user’s phone, each of which can be turned on and off by the user herself (see the full list of sensors below). explore connects both areas – the interactions and the sensor tracking – through the integrated Complex Event Processing Engine (CEPE).
The Complex Event Processing Engine (CEPE)
The CEPE is a mechanism for the efficient processing of continuous event streams in sensor networks. It enables rapid development of applications that process large volumes of incoming messages or events, regardless of whether those messages are historical or real-time in nature.
Our CEPE is based on ESPER and the Event Processing Language (EPL).
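To give a flavour of the kind of continuous query a CEPE evaluates, here is a toy sliding-window average over incoming sensor events; in EPL this would read roughly as `select avg(value) from SensorEvent.win:length(3)`. The JavaScript stand-in below is illustrative only, not part of the actual engine:

```javascript
// A minimal sliding-window average over a stream of sensor events.
class SlidingWindowAvg {
  constructor(size) {
    this.size = size;
    this.events = [];
  }
  // Push a new event value; return the current window average.
  push(value) {
    this.events.push(value);
    if (this.events.length > this.size) this.events.shift();
    return this.events.reduce((a, b) => a + b, 0) / this.events.length;
  }
}

const win = new SlidingWindowAvg(3);
[10, 20, 30, 40].forEach(v => console.log(win.push(v)));
// 10, 15, 20, 30
```

A real engine like ESPER evaluates many such standing queries concurrently and fires listeners as new events arrive.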
List of Sensors
– GPS location data
– Network location data
– Magnetic field
– Battery status
– Mobile Network
REQUIRED
– Overview and extended description or representation of your main idea, any subtopics and a conclusion
– Use or integration of at least 1 (one) category of sensor data (e.g. Gyroscope). If you use GPS location, you should use or integrate at least 1 (one) additional category of sensor data beside GPS location data.
DATA FICTION TYPES
– Presentation
We will reward fascinating data fiction with preferred access to our data, a post on the QS Blog and the possibility of making data fiction come true.
Yes, I am a data fictionista and want to submit my data fiction!
From rapid urbanization in China to dung-fired stoves in New Delhi, air pollution claimed 7 million lives around the world in 2012, according to the World Health Organization. Globally, one out of every eight deaths is tied to dirty air – which makes air pollution the world’s single biggest environmental health risk. And, in areas with very bad air pollution, people live an average of 5 fewer years than those in other areas.
It is not only in Chinese megacities or Indian agricultural areas that people are trying hard to keep air pollution at bay. In Portland, Oregon, a local initiative called Neighbors for Clean Air is using Big Data to make bad air visible. The group is part of an experiment initiated by Intel Labs that uses 17 common, low-cost sensors, each weighing less than a pound, to gather air quality data. The data feeds into websites that analyze it and present comprehensible visualizations. The sensors themselves are built using an Arduino controller; they measure carbon and nitrogen dioxide emissions, temperature and humidity.
By making the air pollution problem visible, the experiment not only made people recognize the importance of technology in understanding air quality; Neighbors for Clean Air was also able to forge an agreement with a local metal foundry to cut emissions.
If you want to have a look at your own air pollution, go to Air Quality Egg – perhaps one of the several hundred eggs worldwide has been installed in your neighborhood.