How EU data protection law could interfere with targeted ads

An interesting article in The Conversation by James Davenport at the University of Bath about some of the possible implications of the GDPR.  The extent to which cloud computing providers, such as Amazon Web Services, should be considered data processors is particularly interesting.  After all, these companies need to exercise some basic competence to ensure data security, but beyond that have no real say in what’s happening to data since they’re involved only at the “bit” level.

From a consent perspective, does an infrastructure provider matter, or is this a case where just regulating these companies as utility providers would be the best approach?

Source: How EU data protection law could interfere with targeted ads

Microsoft announces new Skype ToS

Microsoft has announced that Skype will be governed by the new Microsoft Services Agreement and the Microsoft Privacy Statement from 1st August 2015.

This is part of an effort to standardise all services under a single Terms of Service document and Privacy Policy. Fewer terms of service to read should be good for consent, but Microsoft provides such a broad range of services that a single document might be too vague to really inform digital citizens about what, specifically, is happening to their data. Google took a lot of flak, including a fine from the French data protection authority, when it attempted a similar unification in 2012.

When Microsoft acquired Skype in 2011, we brought together our communication technologies to help you stay closer to friends, family and colleagues. And, if you’re like millions of other people who use a number of Microsoft’s services (for example, for email, Bing, Xbox, Office 365, etc.) we’re making life a little easier for everyone. How? Well, most of Microsoft’s consumer services are being brought together under a single Microsoft Services Agreement and a consolidated Microsoft Privacy Statement.

Full Email

Meanwhile, Apple assumes that bloggers consent to the terms of service for its forthcoming News service unless they opt out – apparently even if they’ve never heard of it!

A good week for Meaningful Consent? Leave your thoughts in the comments.

DATA-PSST Seminar 2 Position Statement

I was lucky enough to be invited along to take part in the Second DATA-PSST workshop in Sheffield last week, hosted by Vian Bakir of Bangor University. It was a very thought-provoking event and raised many issues around the ethical and technical limits of privacy, which I’m sure will be available in the report in due course.

A particular issue raised for me is the degree to which human eyes need to be involved in a surveillance practice before it constitutes “surveillance” – I’d argue that any form of agency exercised in response to that data, whether by a human or an algorithm, has a surveillance element. Drawing the line at mere “processing” of data is problematic, since merely indexing the data for targeted search by a human operator would be processing in itself. Definitions like this are important when it comes to meaningful consent by individuals, and the idea of collective consent to being policed or governed adds a whole extra dimension to what is already a very nuanced problem!

In the meantime, this is the position statement that I submitted, based in part on the arguments outlined in “The Grey Web” paper authored with colleagues mc schraefel and Natasa Milic-Frayling. Any inaccuracies or outrageous claims in this position statement are entirely my own, though.


The Web is a Surveillance Tool

Today’s web is funded primarily by advertising: the sub-millisecond delivery of targeted advertisements alongside content of genuine interest to users. Networks of content providers, advertising brokers and advertisers allow private companies to record extensive amounts of web browsing history from individual web users. Our research indicates that after visiting only 30 search results there is a 99.5% chance that an individual user has been tracked at least once by each of the top ten third party tracking domains.
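As a back-of-the-envelope illustration of what a figure like this implies (the per-page presence derived here is an inference from the quoted statistic, not a number reported by the study): if a tracker is embedded on a fraction p of the pages a user visits, the chance it observes that user at least once over n visits is 1 − (1 − p)^n.

```python
# Back-of-the-envelope illustration; the per-page figure is inferred
# from the quoted 99.5% statistic, not taken from the study itself.

def p_tracked_at_least_once(p_per_page, n_visits):
    """Probability that a single tracker, present on a fraction
    p_per_page of pages, observes a user at least once in n_visits."""
    return 1 - (1 - p_per_page) ** n_visits

# What per-page presence would a tracker need for a 99.5% chance
# of observing a user within 30 page visits?
n = 30
target = 0.995
p_needed = 1 - (1 - target) ** (1 / n)

print(f"per-page presence needed: {p_needed:.1%}")
print(f"check: {p_tracked_at_least_once(p_needed, n):.3f}")
```

Working this backwards suggests a tracker need only appear on roughly one in six pages for near-certain coverage over 30 visits, which is why a handful of large tracking domains can achieve such reach.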

These private digital dossiers allow the inference of many pieces of personal information; both in practice (for the purposes of delivering targeted advertisements) and in theory (were the data to be obtained by a fourth party and put to new uses).

Through the research that we are conducting in the Meaningful Consent Project, we observe that even people in the small minority of web users that understand the mechanisms through which third party tracking operates are surprised when we demonstrate the extent of third party tracking that they are subject to. When asked to suggest information that Facebook holds about them, no participants in any of our focus groups or interviews (n ≈ 35) have mentioned data about their browsing history, which is collected via the “Like” and “Share” widgets that Facebook provides to site operators. This undermines, among other things, the notion that web users give up information as “payment” for service usage – most have never considered the data that is being collected, let alone balanced this against the value of the service that they receive. First party websites and advertisers themselves become complicit in the process of tracking their users and customers, often without a full understanding of the implications or mechanism through which the advertising networks operate.

Unlike state surveillance, which is typically intentional, deliberately engineered and subject to oversight, the development of this private surveillance infrastructure has been driven by commercial ends and without any oversight, direction or regulation. Yet, there is an unclear relationship between this organic but pervasive surveillance and the more deliberate, structured surveillance of nation states. Individual users (particularly those outside the US, to whom most of the USA’s legal privacy safeguards do not apply) are left wondering how porous the relationship between the primarily US-based third party tracking companies and the US secret services really is.

The technology that underpins this third party tracking is often either undetectable – the stateless ‘device fingerprint’ – or functionally ambiguous, by virtue of being the very same technologies that support end-users’ own legitimate aims – the stateful browser cookie that stores your shopping basket. These properties of the technology make it virtually impossible to determine the extent of the tracking that a particular user is subject to and limit the feasibility of technical countermeasures to block it. Given the ubiquity of third party tracking on today’s web, this provides a very real limit to the technical feasibility of online privacy.

Far from its initial purpose as a tool for academic collaboration, or the grand vision of an egalitarian, pro-human interchange of ideas, the Web that we have today is (at least quantitatively) primarily a surveillance tool.

The team #2: Michael Vlassopoulos, Mirco Tonin & Helia Marreiros

This is the second in a series of introductory posts, outlining the different people within the Meaningful Consent project.

Hi, we are Michael Vlassopoulos, Mirco Tonin and Helia Marreiros from the division of Economics at the University of Southampton. We use experiments, both in the lab and in the field, to do research in behavioural economics with a particular focus on public and organizational economics.

Our particular areas of interest within the Meaningful Consent in the Digital Economy project (MCDE) are connected with the economics of privacy. Our aim in this project is to contribute to the behavioural economics of privacy and to move a step forward in developing a framework for understanding the behavioural economics of meaningful consent in the digital economy.
The economics of privacy studies the costs and benefits associated with the protection or disclosure of personal data – for the data subject, the data holder, and for society as a whole. As a field of research, it has been active for some decades. One of the main research questions is whether there is a combination of economic incentives and technological solutions to privacy issues that is acceptable for the individual and beneficial to society.

To understand the benefits of the digital economy for the individual, it is essential to study their actual behaviour, hence the behavioural economics of privacy.
In today’s digital era, increasingly many of our daily market transactions, as well as our social interactions, occur online. This raises numerous questions and challenges that can be fruitfully addressed by applying the standard tools in an economist’s toolbox (i.e. the rational choice model of consumer behaviour), enriched by insights from Behavioural Economics (e.g. biases in decision-making) and data obtained through experimental methods. Some example research questions we are interested in are:

  • Do people value online privacy?
  • Is there heterogeneity in the preferences for privacy?
  • Is there a paradox between stated attitudes toward online privacy and actual behaviour?
  • Are users aware of the “risks” associated with sharing personal information online?
    • If not, is it because of the costs associated with acquiring information (time, cognitive effort, financial cost, technological obstacles)?
    • Can sharing choices be made more meaningful through the dissemination of relevant information (or nudges) regarding the “risks”?
  • Do behavioural biases affect users’ choices regarding sharing personal information online? Here are some examples of possible biases relevant in this field:
    • Bounded Rationality – Framing effects, Limited Attention
    • Endowment effect – Loss Aversion
    • Present Bias – Self Control problems, overconfidence.

Presently, we are mapping preferences for online privacy, where we observe attitudes, private actions (giving private information away for free) and public actions (supporting a privacy advocacy group).

First, we observe if attitudes and actions are consistent across subjects or disconnected. Second, we observe how these three elements change in response to a positive/neutral/negative privacy policy frame. Specifically, those frames are statements retrieved from Facebook and Google privacy policies that are considered by users to signal a positive, neutral or negative attitude toward users.

This first study is very relevant to informed consent. Once the trade-off between privacy and services is highlighted in an intelligible way (as opposed to current terms and conditions that nobody reads), it is important to understand how this changes attitudes, private actions and public actions. We may well expect that a negative frame affects people’s attitudes, but will it also change how they behave? This is not at all obvious, given the disconnect between attitudes and actions observed in many economic markets.

Mapping users’ preferences and behaviour on online privacy and meaningful consent can help policy makers and organizations in general to find common ground in the “terra incognita” that the digital economy still is.

Moreover, this knowledge can help the design of automated vs manual negotiation models studied by other members of our team in the Agents, Interaction and Complexity Research Group of the School of Electronics and Computer Science.

Our final goal is that the research we produce in this project can help decisions of policy makers and organizations and therefore have an impact on society.

Consenting agents: semi-autonomous interactions for ubiquitous consent

In September, the Meaningful Consent project was represented at the UBICOMP2014 workshop “How do you solve a problem like consent?” in Seattle.

The full workshop note is available online, from the Southampton open access repository, and the abstract is below.

Ubiquitous computing, given a regulatory environment that seems to favor consent as a way to empower citizens, introduces the possibility of users being asked to make consent decisions in numerous everyday scenarios such as entering a supermarket or walking down the street. In this note we outline a model of semi-autonomous consent (SAC), in which preference elicitation is decoupled from the act of consenting itself, and explain how this could protect desirable properties of informed consent without overwhelming users. We also suggest some challenges that must be overcome to make SAC a reality.
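A minimal sketch of the SAC idea described in the abstract (the class, method names and policy details are illustrative assumptions, not taken from the note): preferences are elicited up-front, decoupled from the moment of consent, and an agent then answers routine requests automatically, deferring unfamiliar ones back to the user so that consent stays informed.

```python
# Illustrative sketch of semi-autonomous consent (SAC); names and the
# deferral policy are assumptions, not taken from the workshop note.

class ConsentAgent:
    def __init__(self, preferences, defer_if_unknown=True):
        # Preferences elicited in advance, decoupled from the act of
        # consenting, e.g. {"location:supermarket": False}.
        self.preferences = preferences
        self.defer_if_unknown = defer_if_unknown

    def decide(self, request):
        """Answer a consent request automatically where a preference
        exists; otherwise defer to the user rather than guess."""
        key = f"{request['data']}:{request['context']}"
        if key in self.preferences:
            return "grant" if self.preferences[key] else "deny"
        return "defer" if self.defer_if_unknown else "deny"

agent = ConsentAgent({
    "location:supermarket": False,
    "loyalty_id:supermarket": True,
})

print(agent.decide({"data": "loyalty_id", "context": "supermarket"}))  # grant
print(agent.decide({"data": "location", "context": "supermarket"}))    # deny
print(agent.decide({"data": "gait", "context": "street"}))             # defer
```

The point of the decoupling is that the user reflects on each class of request once, at leisure, instead of being interrupted every time they enter a supermarket or walk down the street.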

Download the full note to continue reading about semi-autonomous agents for meaningful consent

The team #1: Enrico Gerding & Tim Baarslag

This is the first in a series of introductory posts, outlining the different people within the Meaningful Consent project.

Hi, we are Enrico Gerding and Tim Baarslag from the Agents, Interaction and Complexity Research Group of the School of Electronics and Computer Science.

Our areas of particular interest within the meaningful consent project are:

Consent support: We develop practical mechanisms, including user interface aspects, that allow users to signal consent in common scenarios in an automated manner, minimising the negative effects of habituation and decision fatigue. At the same time, we take into account the needs of service providers, both in terms of ease of implementation (a factor that will influence real-world deployment) and their legal obligations for obtaining informed consent.

Negotiation support: We explore implementations of interfaces and engagement models that enable negotiation between consumers and vendors on the terms of consent and service agreements. We also investigate potential community services for group negotiation of service terms.

Automated vs Manual Negotiation for Consent: Results from these studies will act as a foundation for our explorations into the design of automated vs manual negotiation. We anticipate that classes of consent may emerge, and that an agent-based approach may therefore be designed to manage the possible combinations of consent terms and handle these multiple micro consent requests. This approach – introducing potentially low-cost negotiation systems into the economy – in itself offers a new economic model.