Privacy body backs ‘explicit consent’ rules in data protection reforms

The Article 29 Working Party has released an opinion concerning the requirements for consent in the upcoming GDPR.

I’d agree that it’s important that there is no doubt as to the subject’s intent to consent – that is fundamental, and arguably a good definition of what we try to encapsulate through the term “meaningful”. Their use of the word “explicit” seems problematic to me, though. For a start, it isn’t really clear what that means – to me, explicit consent means an act that ONLY creates consent, with no other meaning or effect. That feels like overkill, and will constrain innovation around genuinely consentful interactions. My view is that we should be able to measure whether an interaction really embodies consent, and it’s clear (to cite the common example of holding out your arm to give blood) that consent CAN be intended, meaningful and implicit all at once. That is to say, the act of holding out my arm intuitively gives consent to phlebotomy, but it also makes my arm physically available for the procedure.

Uploading a photograph by clicking “upload”, in the clear presence of an appropriate notice, is not necessarily explicit consent, but to me it does embody a signal of consent that is unambiguous and meaningful. The act of transmitting the photograph may not be an explicit consent signal, but it clearly embodies consent to the same extent that ticking a box would.

My own feeling is that we should really be talking in terms of whether or not consent signals are intended and unambiguous, rather than whether they are “opt-in” and “explicit”. Opt-in mechanisms and explicitness are clearly ways to reduce ambiguity, but they become box-ticking requirements for interaction designers: they constrain us to a subset of meaningful consent interactions, rule out some interactions that would actually fulfil our aims, and reinforce some of the extant problems with consent, such as user bother and consent fatigue.

Have a read of the Article 29 opinion, and maybe watch my recent WSI talk for more about my thoughts on taking a broader, more innovative approach to consent.

Source: Privacy body backs ‘explicit consent’ rules in data protection reforms

How EU data protection law could interfere with targeted ads

An interesting article in The Conversation by James Davenport at the University of Bath about some of the possible implications of the GDPR.  The extent to which cloud computing providers, such as Amazon Web Services, should be considered data processors is particularly interesting.  After all, these companies need to exercise some basic competence to ensure data security, but beyond that have no real say in what’s happening to data since they’re involved only at the “bit” level.

From a consent perspective, does an infrastructure provider matter, or is this a case where just regulating these companies as utility providers would be the best approach?

Source: How EU data protection law could interfere with targeted ads

The man who read all the small print on the internet

Very nice piece in The Guardian by someone who decided to read all the terms and conditions before doing anything online. Did you know Sony is allowed to brick your (offline!) PlayStation if you were to translate the PS4 software?

The article makes a great case for increasing the negotiating power of consumers. See http://www.theguardian.com/technology/2015/jun/15/i-read-all-the-small-print-on-the-internet

Microsoft announces new Skype ToS

Microsoft has announced that Skype will be governed by the new Microsoft Services Agreement and the Microsoft Privacy Statement from 1st August 2015.

This is part of an effort to standardise all services under a single Terms of Service document and Privacy Policy. Fewer terms of service to read should be good for consent, but Microsoft provides such a broad range of services that a single document might be too vague to really inform digital citizens about what, specifically, is happening to their data. Google took a lot of flak, including a fine from the French data protection body, when it tried a similar unification in 2012.

When Microsoft acquired Skype in 2011, we brought together our communication technologies to help you stay closer to friends, family and colleagues. And, if you’re like millions of other people who use a number of Microsoft’s services (for example, Outlook.com for email, Bing, Xbox, Office 365, etc.) we’re making life a little easier for everyone. How? Well, most of Microsoft’s consumer services are being brought together under a single Microsoft Services Agreement and a consolidated Microsoft Privacy Statement.

Full Email

Meanwhile, Apple assumes that bloggers consent to the terms of service for its forthcoming News service unless they opt out – apparently even if they’ve never heard of it!

A good week for Meaningful Consent? Leave your thoughts in the comments.

DATA-PSST Seminar 2 Position Statement

I was lucky enough to be invited along to take part in the second DATA-PSST workshop in Sheffield last week, hosted by Vian Bakir of Bangor University. It was a very thought-provoking event and raised many issues around the ethical and technical limits of privacy, which I’m sure will be covered in the report in due course.

A particular issue raised for me is the degree to which human eyes need to be involved in a surveillance practice before it constitutes “surveillance” – I’d argue that any form of agency exercised in response to that data, whether by a human or an algorithm, has a surveillance element. Drawing the line at mere “processing” of data is problematic, since merely indexing the data for targeted search by a human operator would be processing in itself. Definitions like this are important when it comes to meaningful consent by individuals, and the idea of collective consent to being policed or governed adds a whole extra dimension to what is already a very nuanced problem!

In the meantime, this is the position statement that I submitted, based in part on the arguments outlined in “The Grey Web” paper authored with colleagues mc schraefel and Natasa Milic-Frayling. Inaccuracies or outrageous claims in this position paper are entirely my own, though.

Richard

The Web is a Surveillance Tool

Today’s web is funded primarily by advertising: the sub-millisecond delivery of targeted advertisements alongside content of genuine interest to users. Networks of content providers, advertising brokers and advertisers allow private companies to record extensive web browsing histories of individual web users. Our research indicates that after visiting only 30 search results there is a 99.5% chance that an individual user has been tracked at least once by each of the top ten third party tracking domains.
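To put that figure in perspective: if a given tracker is present on a fraction p of the pages a user visits, and visits are treated as independent, the chance of being seen at least once in n visits is 1 − (1 − p)^n. The sketch below is a minimal illustration of that arithmetic; the per-page probability used is a hypothetical value chosen to reproduce the 99.5% figure, not a measured one.

```python
# Chance of being tracked at least once over n page visits, assuming
# independent visits and a fixed per-page probability of encountering
# a given third-party tracker.
def tracked_at_least_once(p_per_page: float, n_visits: int) -> float:
    return 1 - (1 - p_per_page) ** n_visits

# Hypothetical: a tracker present on ~16% of visited pages is almost
# certain (~99.5%) to see a user at least once within 30 visits.
print(tracked_at_least_once(0.162, 30))  # ~0.995
```

The striking part is how quickly near-certainty is reached, even for trackers with modest per-page coverage.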

These private digital dossiers allow the inference of many pieces of personal information; both in practice (for the purposes of delivering targeted advertisements) and in theory (were the data to be obtained by a fourth party and put to new uses).

Through the research that we are conducting in the Meaningful Consent Project, we observe that even people in the small minority of web users who understand the mechanisms through which third party tracking operates are surprised when we demonstrate the extent of the third party tracking that they are subject to. When asked to suggest information that Facebook holds about them, no participants in any of our focus groups or interviews (n ≈ 35) have mentioned data about their browsing history, which is collected via the “Like” and “Share” widgets that Facebook provides to site operators. This undermines, among other things, the notion that web users give up information as “payment” for service usage – most have never considered the data that is being collected, let alone balanced this against the value of the service that they receive. First party websites and advertisers themselves become complicit in the process of tracking their users and customers, often without a full understanding of the implications or of the mechanisms through which the advertising networks operate.
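The mechanics of that widget-based collection are worth spelling out, because they are what make it invisible: each embedded button is fetched from the widget provider’s own servers, so the browser sends along both the address of the embedding page (the Referer header) and any cookie the provider has previously set – whether or not the user ever clicks the button. Here is a hypothetical sketch of what the provider can assemble from those two fields alone:

```python
from collections import defaultdict

# Hypothetical request log as seen by a widget provider: every page that
# embeds a "Like"/"Share" button triggers a request carrying the host
# page URL (Referer) and the provider's own identifying cookie.
widget_requests = [
    {"cookie_id": "u-42", "referer": "https://news.example.com/article-1"},
    {"cookie_id": "u-42", "referer": "https://health.example.org/condition-x"},
    {"cookie_id": "u-42", "referer": "https://shop.example.net/basket"},
]

# Grouping by cookie reconstructs a cross-site browsing history.
history = defaultdict(list)
for request in widget_requests:
    history[request["cookie_id"]].append(request["referer"])

print(history["u-42"])  # three sites visited by the same browser
```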

Unlike state surveillance, which is typically intentional, deliberately engineered and subject to oversight, the development of this private surveillance infrastructure has been driven by commercial ends, without any oversight, direction or regulation. Yet there is an unclear relationship between this organic but pervasive surveillance and the more deliberate, structured surveillance of nation states. Individual users (particularly those outside the US, to whom most of the USA’s legal privacy safeguards do not apply) are left wondering how porous the relationship between the primarily US-based third party tracking companies and the US secret services really is.

The technology that underpins this third party tracking is often either undetectable – the stateless ‘device fingerprint’ – or functionally ambiguous, by virtue of being the very same technologies that support end-users’ own legitimate aims – the stateful browser cookie that stores your shopping basket. These properties of the technology make it virtually impossible to determine the extent of the tracking that a particular user is subject to and limit the feasibility of technical countermeasures to block it. Given the ubiquity of third party tracking on today’s web, this provides a very real limit to the technical feasibility of online privacy.
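As an illustration of why the stateless variant is so hard to detect: a fingerprint can be derived entirely from attributes the browser volunteers with every request, so no identifier ever needs to be stored on the device. The attribute set below is a simplified, hypothetical example – real fingerprinting scripts draw on many more signals (installed fonts, canvas rendering, plugins, and so on).

```python
import hashlib

def device_fingerprint(user_agent: str, screen: str, timezone: str, language: str) -> str:
    """Derive a stable identifier from passively observable browser attributes.

    Nothing is written to the client, so there is no cookie to find or
    delete: the same browser simply hashes to the same value on every visit.
    """
    raw = "|".join([user_agent, screen, timezone, language])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

print(device_fingerprint(
    "Mozilla/5.0 (Windows NT 6.1; rv:38.0) Gecko/20100101 Firefox/38.0",
    "1920x1080",
    "Europe/London",
    "en-GB",
))
```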

Far from its initial purpose as a tool for academic collaboration, or the grand vision of an egalitarian, pro-human interchange of ideas, the Web that we have today is (at least quantitatively) primarily a surveillance tool.

The team #2: Michael Vlassopoulos, Mirco Tonin & Helia Marreiros

This is the second in a series of introductory posts, outlining the different people within the Meaningful Consent project.

Hi, we are Michael Vlassopoulos, Mirco Tonin and Helia Marreiros from the Division of Economics at the University of Southampton. We use experiments, both in the lab and in the field, to do research in behavioural economics, with a particular focus on public and organizational economics.

Our particular areas of interest within the Meaningful Consent in the Digital Economy project (MCDE) are connected with the economics of privacy. Our aim in this project is to contribute to the behavioural economics of privacy and to move a step further by developing a framework for understanding the behavioural economics of meaningful consent in the digital economy.
The economics of privacy studies the costs and benefits associated with the protection or disclosure of personal data – for the data subject, the data holder, and for society as a whole. As a field of research, it has been active for some decades. One of the main research questions is whether there is a combination of economic incentives and technological solutions to privacy issues that is acceptable for the individual and beneficial to society.

To understand the benefits of the digital economy for the individual, it is essential to study their actual behaviour – hence the behavioural economics of privacy.
In today’s digital era, increasingly many of our daily market transactions, as well as our social interactions, occur online. This raises numerous questions and challenges that can be fruitfully addressed by applying the standard tools of an economist’s toolbox (i.e. the rational choice model of consumer behaviour), enriched by insights from behavioural economics (e.g. biases in decision-making) and data obtained through experimental methods. Some examples of research questions we are interested in are:

  • Do people value online privacy?
  • Is there heterogeneity in the preferences for privacy?
  • Is there a paradox between stated attitudes toward online privacy and actual behaviour?
  • Are users aware of the “risks” associated with sharing personal information online?
    • If not, is it because of the costs associated with acquiring information (time, cognitive effort, financial cost, technological obstacles)?
    • Can sharing choices be made more meaningful through the dissemination of relevant information (or nudges) regarding the “risks”?
  • Do behavioural biases affect users’ choices regarding sharing personal information online? Here are some examples of possible biases relevant in this field:
    • Bounded Rationality – Framing effects, Limited Attention
    • Endowment effect – Loss Aversion
    • Present Bias – Self-Control problems, Overconfidence

Presently, we are mapping preferences for online privacy, where we observe attitudes, private actions (giving private information away for free) and public actions (supporting a privacy advocacy group).

First, we observe whether attitudes and actions are consistent across subjects or disconnected. Second, we observe how these three elements change in response to a positive, neutral or negative privacy policy frame. Specifically, those frames are statements retrieved from Facebook and Google privacy policies that users consider to signal a positive, neutral or negative attitude toward them.

This first study is very relevant to informed consent. Once the trade-off between privacy and services is highlighted in an intelligible way (as opposed to current terms and conditions that nobody reads), it is important to understand how this changes attitudes, private actions and public actions. We may well expect a negative frame to affect people’s attitudes, but will it also change how they behave? That is not at all obvious, given the disconnect between attitudes and actions observed in many economic markets.

Mapping users’ preferences and behaviour around online privacy and meaningful consent can help policy makers, and organizations in general, to find common ground in the “terra incognita” that the digital economy still is.

Moreover, this knowledge can help the design of automated vs manual negotiation models studied by other members of our team in the Agents, Interaction and Complexity Research Group of the School of Electronics and Computer Science.

Our final goal is for the research we produce in this project to inform the decisions of policy makers and organizations, and thereby have an impact on society.

Consenting agents: semi-autonomous interactions for ubiquitous consent

In September, the Meaningful Consent project was represented at the UBICOMP2014 workshop “How do you solve a problem like consent?” in Seattle.

The full workshop note is available online, from the Southampton open access repository, and the abstract is below.

Ubiquitous computing, given a regulatory environment that seems to favor consent as a way to empower citizens, introduces the possibility of users being asked to make consent decisions in numerous everyday scenarios such as entering a supermarket or walking down the street. In this note we outline a model of semi-autonomous consent (SAC), in which preference elicitation is decoupled from the act of consenting itself, and explain how this could protect desirable properties of informed consent without overwhelming users. We also suggest some challenges that must be overcome to make SAC a reality.
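As a concrete reading of that decoupling, here is a minimal sketch of what a semi-autonomous consent agent might look like. The policy structure, category names and deferral rule are hypothetical illustrations of the general idea, not the SAC model from the note itself.

```python
from dataclasses import dataclass

@dataclass
class ConsentRequest:
    requester: str       # e.g. "supermarket-entrance-beacons"
    data_category: str   # e.g. "location", "purchase-history"

class ConsentAgent:
    """Preferences are elicited once, up front; the agent then answers
    routine consent requests on the user's behalf."""

    def __init__(self, preferences: dict[str, bool]):
        self.preferences = preferences  # category -> willing to share?

    def decide(self, request: ConsentRequest) -> str:
        if request.data_category not in self.preferences:
            # No stored preference: defer to the user rather than guess,
            # preserving the informed-consent property.
            return "ask-user"
        return "grant" if self.preferences[request.data_category] else "deny"

agent = ConsentAgent({"location": False, "purchase-history": True})
print(agent.decide(ConsentRequest("supermarket-entrance-beacons", "location")))  # deny
```

The point of the sketch is the separation: the potentially overwhelming stream of everyday requests is handled automatically, while genuinely novel decisions still reach the user.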

Download the full note to continue reading about semi-autonomous agents for meaningful consent