Trust and Consentfulness

I’ve given some presentations lately, as part of the RealConsent series organised by Mark Lizar and Richard Beaumont of the Personal Data and Trust network, on a couple of topics that I think are worth sharing.

Notes – i.e. what I said – are available in the second half of each document, in lieu of an actual video.

The first, on consentfulness, is about getting back to the first principles of consent and coming up with ways to measure it empirically, so that we can open up new innovation in that space and make consent design more scientific.

The second, on the relationship between trust and consent, tries to unpick the relationship between these two hugely important concepts, both of which are pretty hot in the data protection and privacy space right now.

As always, comments are very welcome – post them here, or drop an email to r.gomer at

‘Smart’ Things: Making disempowerment physical?

I’ve published another post in my series about the crisis of intelligibility that I think we have in modern technology, this time “‘Smart’ Things: Making disempowerment physical?”

This is the second in a series of posts about the crisis of intelligibility and empowerment in modern technology. If you’ve not read the first post, “Technology Indistinguishable from Magic,” that might be a good place to start.

The Internet of Things (IoT) is set to continue as the Hottest Thing in Tech ™ in 2016, and is receiving huge attention from industry and bodies such as the UK’s Digital Catapult. There is clear promise in the idea of using established communications technology (TCP/IP) and infrastructure to control and orchestrate previously disconnected objects, or to enable entirely new classes of device such as smart dust.

Read more…

We need to talk about identifiers

Our PI, @mcphoo, raised the issue of tracking Bluetooth MAC addresses last week. The debate over whether these IDs – the hardware identifiers that are burnt into the networking hardware in our smartphones, laptops and other devices – are personal identifiers is ongoing. On the one hand are those who claim these are just hardware IDs, that they identify devices, not people. On the other are those who claim that the links between the device and the individual are strong enough that by tracking the device you’re actually tracking a person. I fall firmly into the latter camp. Interestingly, the Information Commissioner does not. Quelle surprise.

To properly explain my own position, it’s necessary to unpack what we mean by “identify”. Broadly, identification is about differentiating one thing from another thing. An identity is a collection of properties about something that can be identified. An identifier is a piece of information that sets one individual apart from others. An identifier could be completely unique like a passport number (at least the long one on the bottom), or a fairly uncommon piece of data like a name. Non-unique identifiers don’t identify globally, but in a particular context (or combined with other pieces of data) they are identifying.

Immediately, we have two classes of identifier – analogous to the URLs and URNs of the web – those that allow us to find an individual and those that simply allow us to recognise them. As an intuitive example, a home address allows us to find an individual, physically. A phone number or email address facilitates communication and so, in a sense, lets us find their owner. What about a photograph of someone’s face, or a copy of their fingerprint, though? Armed with these pieces of information we could recognise a person if they presented themselves to us, but we’d be hard pressed to go and find that person except in quite limited contexts.

In reality, no piece of information is inherently identifying; they all depend to some extent on a broader context. Phone numbers identify because they’re built on a global telecoms infrastructure; photographs identify because we can compare a photograph to what we see when we look at someone; even the latitude and longitude of a person’s current location is only identifying in the context of an agreed standard for naming points on the surface of the Earth. The extent to which something is identifying is, therefore, largely determined by the uniqueness of the data and the availability of the directories, databases and other information sources that are necessary to actually perform the identification.

With that in mind, a device MAC address is identifying in much the same way as a person’s fingerprint. Absent a database of fingerprints, a fingerprint only allows recognition, and that’s (currently) true of MAC addresses, too. Given your Bluetooth MAC address I can’t go and find you, or even email you, but if you walk into my home I can tell whether you’re the person the address ‘belongs’ to. Which brings us to the second question: the extent to which a MAC address is related to a particular person – does it ‘belong’ to them in a meaningful sense? Not by design, and not when the MAC address is created. Unlike a fingerprint, which is born with, and dies with, a particular person, a MAC address is created for a device. Until the device is purchased and starts routinely sitting in a pocket, that address relates only to the device. But once it does start sitting in a pocket, it typically sits there every time we leave the house. Our smartphones accompany us to work, to the supermarket, on the street and on holiday. About the only place you’d have a hard time finding the MAC address associated with someone is in a swimming pool.

Recognition of Bluetooth devices, and hence their owners, is trivial. A MAC address is not a secure identifier in the way a fingerprint is – it would be stupid to unlock a bank vault just because a particular MAC address was in range – but from a pragmatic point of view it is a viable and low-noise way to correlate an observation of a person in one location with a later observation of the same person in another location. What’s more, unlike fingerprints or face recognition, MAC address detection is both physically and computationally practical to do on a large scale, with high accuracy, and with little (if any) co-operation from the people you want to track.

The fact that MAC addresses are a good proxy for identifying humans is precisely why they’re useful for seeing how long those humans spend in a queue, or for detecting the routes shoppers take through a store.
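To make concrete how little machinery this kind of tracking needs, here’s a minimal sketch in Python (the sightings, zone names and MAC addresses are all made up for illustration, not taken from any real deployment): passive scanner sightings are grouped by MAC address, and the gap between a device’s first appearance in the queue zone and its first appearance at the checkout gives an estimate of how long its owner queued.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sightings: (timestamp, MAC address, sensor zone).
# In a real deployment these would come from passive Bluetooth/Wi-Fi scanners.
sightings = [
    ("2016-02-01 09:00:10", "AA:BB:CC:DD:EE:01", "queue"),
    ("2016-02-01 09:04:45", "AA:BB:CC:DD:EE:01", "checkout"),
    ("2016-02-01 09:01:30", "AA:BB:CC:DD:EE:02", "queue"),
    ("2016-02-01 09:09:05", "AA:BB:CC:DD:EE:02", "checkout"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

# Group observations by MAC address: the address is the only "identity"
# needed to link a sighting in one place to a later sighting in another.
tracks = defaultdict(list)
for ts, mac, zone in sightings:
    tracks[mac].append((parse(ts), zone))

# Estimate queue dwell time per device: time from first sighting in the
# queue zone to first sighting at the checkout.
for mac, observations in tracks.items():
    observations.sort()
    entered = next((t for t, z in observations if z == "queue"), None)
    left = next((t for t, z in observations if z == "checkout"), None)
    if entered and left:
        minutes = (left - entered).total_seconds() / 60
        print(f"{mac}: ~{minutes:.1f} minutes in the queue")
```

The point isn’t the code, it’s that the only “identity” involved is the MAC address itself – no name, no account, and no co-operation from the person being measured.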

The real question in scenarios that measure human activity is not whether particular data points are personally identifiable – most of them are, given the correct context – but whether the collection and processing is justified, whether it is fair, and whether the data subject has a chance to opt out. Empowered citizens deserve to understand when they’re being monitored, and to understand how to exercise their right to choose whether to take part. Privacy is not just about data; it’s about the purpose for which it’s being collected, the person who’s collecting it, and the subject’s own unique concerns, context and circumstances. Denying us the choice to decide whether we want to be tracked in a queue, or around a store, or as we go about our lives, on the grounds that the data you’re collecting isn’t technically about a person, misses the bigger picture.


Reasonable Consent?


“Finally, the Times says it is not a job that anyone is likely to want, but reading the terms and conditions of Britain’s most popular websites would be almost a full-time occupation.

“The paper has been doing a bit of research and discovered that if the average Briton wanted to read the small print of every website they visit in a typical year, it would take 124 working days. This is equivalent to about six months of full-time employment.

“The Times says the T&Cs of the country’s 10 most-visited websites amount to more words than Romeo and Juliet, Macbeth, Hamlet and The Tempest put together.”

“Anonymous” app funded by DfE stores IP address for 5 years

(via @digitalmaverick)


This story raised a few issues for me…

Firstly, the appeal to the status quo – “standard for any business” – to justify a practice. This sort of unthinking doing-what-everyone-else-does might be OK if the status quo weren’t so awful, but an anonymous app, in a world of over-tracked technology, should probably be treating the status quo as an example of what NOT to do.

Second, the idea that if something is “clearly stated” (in a privacy policy) then it must be fine. That’s obviously not true. It reminds me of an episode of Panorama in which they created a product called “Fit and Fruity” and then crammed as much sugar into it as possible, to demonstrate how misleading food labelling is allowed to be. An “anonymous” app that stores a poster’s IP address is not anonymous, and hiding the truth in a privacy policy is not disclosing information; it’s burying it.

Third, the issue of mergers and bankruptcy. I’ve suggested, in conversations over the last few years, that personal data should probably be considered by competition regulators when deciding whether mergers and acquisitions should be allowed. More broadly, I think we need better guidelines around personal data when the controller is liquidated. We’re (slowly) recognising that personal data isn’t like other assets. Data subjects have a stake in personal data that simply doesn’t exist in fungible assets like gold or furniture, or even in some non-fungible ones like intellectual property. There shouldn’t be a market in trading consent – it should be like a parking ticket: non-transferable, whether through acquisition or liquidation. Who is processing data is a fundamental part of the decision whether or not to allow that processing, and if the who changes, then the consent is no longer meaningful.

A rare example of a scenario where someone might actually need to think of the children!

Privacy body backs ‘explicit consent’ rules in data protection reforms

The Article 29 Working Party has released an opinion concerning the requirements for consent in the upcoming GDPR.

I’d agree that it’s important that there is no doubt as to the subject’s intent to consent – that is fundamental, and arguably a good definition of what we try to encapsulate through the term “meaningful”. Their use of the word “explicit” seems problematic to me, though. For a start, it isn’t really clear what that means – to me, explicit consent means an act that ONLY creates consent, with no other meaning or effect. That feels like overkill, and will constrain innovation around genuinely consentful interactions. My view is that we should be able to measure whether an interaction really embodies consent, and it’s clear (to cite the common example of holding out your arm to give blood) that consent CAN be intended, meaningful and implicit. That is to say, the act of holding out my arm intuitively gives consent to phlebotomy, but it also makes my arm physically available for the procedure.

Uploading a photograph by clicking “upload”, in the clear presence of an appropriate notice, is not necessarily explicit consent, but it does to me embody a signal of consent that is unambiguous and meaningful. The act of transmitting the photograph may not be an explicit consent signal, but it clearly does embody consent to the same extent that ticking a box would.

My own feeling is that we should really be talking in terms of whether or not consent signals are intended and unambiguous, rather than whether they are “opt-in” and “explicit”. Opt-in and explicitness are clearly ways to reduce ambiguity, but they risk becoming box-ticking requirements for interaction designers: requirements that constrain us to a subset of meaningful consent interactions, that rule out some interactions that actually would fulfil our desires, and that reinforce some of the extant problems with consent, like user bother and consent fatigue.

Have a read of the Article 29 opinion, and maybe watch my recent WSI talk for more about my thoughts on taking a broader, more innovative approach to consent.

Source: Privacy body backs ‘explicit consent’ rules in data protection reforms

How EU data protection law could interfere with targeted ads

An interesting article in The Conversation by James Davenport at the University of Bath about some of the possible implications of the GDPR.  The extent to which cloud computing providers, such as Amazon Web Services, should be considered data processors is particularly interesting.  After all, these companies need to exercise some basic competence to ensure data security, but beyond that have no real say in what’s happening to data since they’re involved only at the “bit” level.

From a consent perspective, does an infrastructure provider matter, or is this a case where just regulating these companies as utility providers would be the best approach?

Source: How EU data protection law could interfere with targeted ads