‘Design Jam’ in Berlin Delivers New Approach to Data Transparency and Control

(Originally posted at Facebook Brussels)

By Stephen Deadman, Global Deputy Chief Privacy Officer, Facebook, and Richard Gomer, Senior Research Assistant, Agents, Interaction and Complexity Research Group, University of Southampton

Last weekend in Berlin, organizations including service innovation consultancy Work Play Experience, business consultancy Ctrl-Shift, the University of Southampton and Facebook piloted a unique workshop — called a Design Jam — dedicated to re-inventing the way we help people understand and manage how their data is used. Our goal: use design thinking – the methods used by designers to make technology usable and our lives simpler – to give people better visibility and control over their data.

Technology is becoming smarter and more intuitive all the time – smart thermostats adjust their temperatures depending on whether someone is home, cars adjust their seating positions depending on who is driving and navigation apps suggest alternative routes to avoid traffic. Data drives these advances, but people need better ways of understanding how it all works. While legal documents are important, we need to couple them with people-centric design that consumers find engaging and intuitive.

What does people-centric design mean? It means products, user interfaces and experiences that are built around how real people behave and what they expect and want. The best designs come from diverse perspectives. With this in mind, the Design Jam brought cross-industry experts from the fields of service design, user experience, behavioural science and product development together with policymakers and regulators to collaborate in person. Joined by participants from organisations including The Financial Times, Adidas, Microsoft, Barclays, University of Trento, the United Kingdom Cabinet Office, the Centre for Information Policy Leadership and a delegation from the Privacy Bridges Project [https://privacybridges.mit.edu/], we set out on a mission: create designs that help people understand and control the way services use their data, and make sure the designs build trust, transparency and control while also delivering a frictionless, enjoyable experience. Participants went from brainstorms to prototypes to in-the-wild testing in three action-packed, high-energy days.

We structured the Design Jam around three core principles.

1. Put people front and center
Effective solutions start with listening to people – understanding what they want, what their concerns are and how they like to receive information and make choices. To this end, we took to the streets of Berlin to interview people with different perspectives on personal data. And we used Skype to have more in-depth conversations with a mix of people based on their backgrounds, demographics and general attitudes toward privacy and data. Based on those insights, we set out to create user-interface design templates that reflect how people actually behave and interact online.

2. The mix of perspectives matters
Traditionally, service designers and policy experts have worked on trust and transparency challenges in separate camps. But we believe that in order to create solutions that are useful to people and businesses, and that regulators believe in, you need diverse perspectives from the outset. In line with this thinking, we split participants into teams with a mix of disciplines and backgrounds.

In feedback so far, participants have said that working with other disciplines, often for the first time, was critical to the effort. One Senior Policy Officer from the UK Information Commissioner’s Office said working with designers gave him a fresh perspective that he would take back to the ICO. “For there to be transparency and accountability, people need to understand what’s happening with their data. Effective design can help to achieve that.” He noted this was especially important with new data protection regulation coming into force across Europe next year.

3. Create real world solutions
Success means channeling the Jam’s efforts towards real-world implementations. That’s why we focused heavily on prototyping and user testing the ideas with real people. First, each team created a persona to design for – like ‘Anke,’ a 25-year-old woman from Berlin who worries about sharing her data online but, to her regret, doesn’t have time to research how she can protect her privacy.

Next, teams turned the ideas with the most potential into working prototypes of apps and tested them with consumers via Skype. At each stage of the design process, teams took feedback and revised again, creating real templates as opposed to ideas for solutions. To understand how people would interact with a concept, teams also used props like Lego, Playmobil and Play Doh to play out and test use cases.

Looking ahead
This Jam was an experiment and we’re excited by its success. It won’t be a one-off. If we are going to make a lasting impact, we need to take the approach here and scale it. The ultimate goal is to create a system in which design jams for trust, transparency and control become common and frequent. Our hope is also that the templates and design patterns become available for everyone to take, replicate, learn from and build on. The best ones would go on to be rigorously tested and refined. The collaborative spirit and energy people brought to the challenge were inspiring and we look forward to planning the next jam later in the year.

Trust and Consentfulness

I’ve given some presentations lately, as part of the RealConsent series organised by Mark Lizar and Richard Beaumont of the Personal Data and Trust Network, on a couple of topics that I think are worth sharing.

Notes – i.e. what I said – are available in the second half of each document, in lieu of an actual video.

The first, consentfulness, is about getting back to the first principles of consent and coming up with ways to measure it, empirically, so we can open up new innovation in that space, and make consent design more scientific.

The second, on the relationship between trust and consent, tries to unpick the relationship between these two hugely important concepts, both of which are pretty hot in the data protection and privacy space right now.

As always, comments are very welcome – post them here, or drop an email to r.gomer at soton.ac.uk

‘Smart’ Things: Making disempowerment physical?

I’ve published another post in my series about the crisis of intelligibility that I think we have in modern technology, this time “‘Smart’ Things: Making disempowerment physical?”

This is the second in a series of posts about the crisis of intelligibility and empowerment in modern technology. If you’ve not read the first post, “Technology Indistinguishable from Magic,” that might be a good place to start.

The Internet of Things (IoT) is set to continue as the Hottest Thing in Tech™ in 2016, and is receiving huge attention from industry and bodies such as the UK’s Digital Catapult. There is clear promise in the idea of using established communications technology (TCP/IP) and infrastructure to control and orchestrate previously disconnected objects, or to enable entirely new classes of device such as smart dust.

Read more…

We need to talk about identifiers

Our PI, @mcphoo, raised the issue of tracking Bluetooth MAC addresses last week. The debate over whether these IDs – the hardware identifiers burnt into the networking hardware of our smartphones, laptops and other devices – are personal identifiers is ongoing. On one side are those who claim these are just hardware IDs, that they don’t identify people, just devices. On the other are those who claim that the links between the device and the individual are strong enough that by tracking the device you’re actually tracking a person. I fall firmly into the latter camp. Interestingly, the Information Commissioner does not. Quelle surprise.

To properly explain my own position, it’s necessary to unpack what we mean by “identify”. Broadly, identification is about differentiating one thing from another. An identity is a collection of properties about something that can be identified. An identifier is a piece of information that sets one individual apart from others. An identifier could be completely unique, like a passport number (at least the long one on the bottom), or a fairly uncommon piece of data, like a name. Non-unique identifiers don’t identify globally, but in a particular context (or combined with other pieces of data) they are identifying.
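To make that concrete, here’s a minimal sketch (the records and attribute values are invented for illustration) of how two attributes that are non-unique on their own become uniquely identifying once combined within a small enough context:

```python
# Toy dataset: neither "name" nor "city" is unique on its own,
# but together, within this context, they single out one person.
people = [
    {"name": "Anna", "city": "Berlin"},
    {"name": "Anna", "city": "Hamburg"},
    {"name": "Ben",  "city": "Berlin"},
]

def matches(records, **attrs):
    """Return the records whose values agree with all the given attributes."""
    return [r for r in records if all(r[k] == v for k, v in attrs.items())]

print(len(matches(people, name="Anna")))                 # 2 -- name alone doesn't identify
print(len(matches(people, name="Anna", city="Berlin")))  # 1 -- combined, it does
```

This is the same mechanism behind the well-known finding that a handful of coarse attributes – postcode, birth date, gender – is enough to single most individuals out of a national population.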

Immediately, we have two classes of identifier – analogous to the URLs and URNs of the web – those that allow us to find an individual and those that allow us simply to recognise them. As an intuitive example, a home address allows us to find an individual, physically. A phone number or email address facilitates communication and so, in a sense, lets us find its owner. What about a photograph of someone’s face, or a copy of their fingerprint, though? Armed with these pieces of information we could recognise a person if they presented themselves to us, but we’d be hard pressed to go and find that person except in quite limited contexts.

In reality, no piece of information is inherently identifying; each depends to some extent on a broader context. Phone numbers identify because they’re built on a global telecoms infrastructure; photographs identify because we can compare a photograph to what we see when we look at someone; even the latitude and longitude of a person’s current location are only identifying in the context of an agreed standard for naming points on the surface of the earth. The extent to which something is identifying is, therefore, largely determined by the uniqueness of the data, and the availability of the directories, databases and other information sources that are necessary to actually perform the identification.

With that in mind, a device MAC address is identifying in much the same way as a person’s fingerprint. Absent a database of fingerprints, a fingerprint only allows recognition, and that’s (currently) true of MAC addresses, too. Given your Bluetooth MAC address I can’t go and find you, or even email you, but if you walk into my home I can tell whether you’re the person the address ‘belongs’ to. Which brings us to the second question: the extent to which a MAC address is related to a particular person – does it ‘belong’ to them in a meaningful sense? Not by design, and not when the MAC address is created. Unlike a fingerprint, which is born and dies with a particular person, a MAC address is created for a device. Until the device is purchased and starts routinely sitting in a pocket, that address relates only to the device. But once it does start sitting in a pocket, it typically sits there every time we leave the house. Our smartphones accompany us to work, to the supermarket, on the street and on holiday. About the only place you’d have a hard time finding out the MAC address associated with someone is in a swimming pool.
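One reason the fingerprint analogy holds is that the address space is huge relative to any real crowd. A back-of-the-envelope check (treating addresses as if they were randomly assigned – in reality vendor prefixes shrink the effective space, though not enough to change the conclusion):

```python
# A Bluetooth device address (BD_ADDR) is 48 bits long.
space = 2 ** 48
crowd = 10_000  # devices observed by a scanner, say

# Birthday-problem upper bound on the chance that ANY two of those
# devices share an address, assuming uniformly random assignment.
collision_prob = crowd * (crowd - 1) / (2 * space)

print(f"{space:,} possible addresses")             # 281,474,976,710,656
print(f"collision chance ~ {collision_prob:.1e}")  # ~1.8e-07
```

In other words, within any venue you could realistically scan, a MAC address is effectively unique to one device.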

Recognition of Bluetooth devices, and hence their owners, is trivial. It’s not a secure identifier in the way a fingerprint is – it would be stupid to unlock a bank vault just because a particular MAC address was in range – but from a pragmatic point of view it is a viable and low-noise way to correlate an observation of a person in one location with a later observation of the same person in another location. What’s more, unlike fingerprints or face recognition, MAC address detection is both physically and computationally practical to do on a large scale, with high accuracy, and with little (if any) co-operation from the people you want to track.

The fact that MAC addresses are a good proxy for identifying humans is precisely why they’re useful for seeing how long those humans spend in a queue, or for detecting the routes shoppers take through a store.
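As a sketch of how little machinery that takes, suppose a pair of scanners logs (timestamp, address) sightings at the entrance and exit of a queue. The log below is invented, but the bookkeeping is representative:

```python
# Hypothetical sightings: (seconds since start, MAC address, sensor).
sightings = [
    (0,   "AA:BB:CC:11:22:33", "queue_entry"),
    (15,  "DD:EE:FF:44:55:66", "queue_entry"),
    (420, "AA:BB:CC:11:22:33", "queue_exit"),
    (610, "DD:EE:FF:44:55:66", "queue_exit"),
]

entered = {}
for t, mac, sensor in sightings:
    if sensor == "queue_entry":
        entered[mac] = t
    elif sensor == "queue_exit" and mac in entered:
        # The MAC address is the join key: it links two observations of
        # the same device -- and, in practice, of the same person.
        print(f"{mac} queued for {t - entered[mac]} seconds")
```

No name, face or phone number is needed; the address alone re-identifies the device, and hence its carrier, from one sensor to the next.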

The real question in scenarios that measure human activity is not whether particular data points are personally identifiable or not – most of them are, given the correct context – but whether the collection and processing is justified, whether it is fair, and whether the data subject has a chance to opt out. Empowered citizens deserve to understand when they’re being monitored, and to understand how to exercise their right to choose whether to take part. Privacy is not just about data; it’s about the purpose for which the data is being collected, the person who’s collecting it, and the subject’s own unique concerns, context and circumstances. Denying us the choice to decide whether we want to be tracked in a queue, or around a store, or as we go about our lives, on the grounds that the data you’re collecting isn’t technically about a person, misses the bigger picture.


Reasonable Consent?

From:
http://www.bbc.co.uk/news/blogs-the-papers-33671192

“Finally, the Times says it is not a job that anyone is likely to want, but reading the terms and conditions of Britain’s most popular websites would be almost a full-time occupation.

“The paper has been doing a bit of research and discovered that if the average Briton wanted to read the small print of every website they visit in a typical year, it would take 124 working days. This is equivalent to about six months of full-time employment.

“The Times says the T&Cs of the country’s 10 most-visited websites amount to more words than Romeo and Juliet, Macbeth, Hamlet and The Tempest put together.”

“Anonymous” app funded by DfE stores IP address for 5 years

(via @digitalmaverick)

[Image: Silent Secret app]

This story raised a few issues for me…

Firstly, the appeal to the status quo – “standard for any business” – to justify a practice. This sort of unthinking doing-what-everyone-else-does would maybe be OK if the status quo weren’t so awful, but in this case an anonymous app in a world of over-tracked technology should probably be using the status quo as an example of what NOT to do.

Second, the idea that if something is “clearly stated” (in a privacy policy) then it must be fine. That’s obviously not true. It reminds me of an episode of Panorama in which they created a product called “Fit and Fruity” and then crammed as much sugar into it as possible, to demonstrate how misleading food labelling is allowed to be. An “anonymous” app that stores a poster’s IP address is not anonymous, and hiding the truth in a privacy policy is not disclosing information; it’s burying it.

Third, the issue of mergers and bankruptcy. I’ve suggested, in conversations over the last few years, that personal data should probably be considered by competition regulators when deciding whether mergers and acquisitions should be allowed. More broadly, I think we need better guidelines around personal data when the controller is liquidated. We’re (slowly) recognising that personal data isn’t like other assets. Data subjects have a stake in personal data that simply doesn’t exist in fungible assets like gold or furniture, or even in some non-fungible ones like intellectual property. There shouldn’t be a market in trading consent – it should be like a parking ticket: non-transferable, whether through acquisition or liquidation. Who is processing data is a fundamental part of the decision about whether or not to allow that processing, and if the ‘who’ changes, then the consent is no longer meaningful.

A rare example of a scenario where someone might actually need to think of the children!

Privacy body backs ‘explicit consent’ rules in data protection reforms

The Article 29 Working Party has released an opinion concerning the requirements for consent in the upcoming GDPR.

I’d agree that it’s important that there is no doubt as to the subject’s intent to consent – that is fundamental, and arguably a good definition of what we try to encapsulate through the term “meaningful”. Their use of the word “explicit” seems problematic to me, though. For a start, it isn’t really clear what it means – to me, explicit consent means an act that ONLY creates consent, with no other meaning or effect. That feels like overkill, and will constrain innovation around genuinely consentful interactions. My view is that we should be able to measure whether an interaction really embodies consent, and it’s clear (to cite the common example of holding out your arm to give blood) that consent CAN be at once intended, meaningful and implicit. That is to say, the act of holding out my arm intuitively gives consent to phlebotomy, but it also makes my arm physically available for the procedure.

Uploading a photograph by clicking “upload”, in the clear presence of an appropriate notice, is not necessarily explicit consent, but it does to me embody a signal of consent that is unambiguous and meaningful. The act of transmitting the photograph may not be an explicit consent signal, but it clearly does embody consent to the same extent that ticking a box would.

My own feeling is that we should really be talking in terms of whether or not consent signals are intended and unambiguous, rather than whether they are “opt-in” and “explicit”. Opt-in and explicitness are clearly ways to reduce ambiguity, but they risk becoming box-ticking requirements for interaction designers – requirements that constrain us to a subset of meaningful consent interactions, rule out some interactions that would actually fulfil our desires, and reinforce some of the extant problems with consent, like user-bother and consent fatigue.

Have a read of the Article 29 opinion, and maybe watch my recent WSI talk for more about my thoughts on taking a broader, more innovative approach to consent.

Source: Privacy body backs ‘explicit consent’ rules in data protection reforms