Can we use data to benefit society while protecting the individual?

September 1, 2017

by Ryan Meade, Public Affairs and Communications Consultant

This is part of a blog series examining the theme of online data privacy and public libraries. More information about the wider project can be found here.


While the US and the EU have very different cultural and regulatory approaches to privacy, there is a shared determination among politicians, activists, technologists and other civil society actors, on both sides of the Atlantic, to create a framework in which digital technology can flourish while the rights of citizens are protected and indeed enhanced. This was the positive conclusion I took from a week diving deep into the topic of online privacy and public libraries in New York earlier this year.

In my day job as a Public Policy Adviser I work with clients on a range of issues associated with the digital economy, including online privacy and security, usually with a focus on the debates around specific legislative or regulatory proposals. Being asked by the Carnegie UK Trust to join their study trip was a welcome opportunity to put these issues into a broader context, to step back and consider the digital future without a narrow focus on what is happening today in Ireland and the EU.


Why do we need to get this right?

In part it is because digital technology is now so integrated into our lives that few of us will be willing to do without it. In many cases the trail of data produced by a technology, such as a smartphone, is fundamental to its operation and can’t be wished or regulated away. Many of the online platforms that we now consider vital tools of expression and cultural interaction are supported by advertising models that make use of data provided by users. Regulating away these business models would threaten the platforms’ availability, or at least limit it to those who are willing and able to pay.

Another reason is that data and technology hold great promise for improving lives beyond those of today’s digital consumers. The most exciting technology innovations are not strictly commercial, but relate to the use of data to address social needs. As Bruce Schneier told us at a seminar at the Carnegie Council for Ethics in International Affairs:

But there is a real fundamental quandary here, and that is: How do we design systems that benefit society as a whole while at the same time protecting people individually? I actually think this is the fundamental issue of the information age. Our data together has enormous value to us collectively; our data apart has enormous value to us individually. Is data in the group interest or is data in the individual interest? It is the social benefit of big data versus the individual risks of personal data.

Schneier mentioned the aggregation of medical data and transport data as examples of uses with great potential benefit to society but with obvious privacy concerns for the individual. Back in Dublin in June I attended the Dublin Data Summit, which showcased several other projects using “data for good” and hosted a discussion on the privacy and trust implications of such initiatives. The projects highlighted included the Alone Platform, which uses sensors and apps to help older people live independently at home, and Space Engagers, which crowdsources geographic data from citizens to help tackle homelessness. Another was FoodCloud, an Irish startup which uses data to match the availability of surplus food in supermarkets with charities that can distribute it safely to those in need.
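Schneier’s medical data example points to one well-established technical response to his quandary: publish only aggregates, with calibrated noise added so that no individual’s record can be inferred from the published figures. The sketch below illustrates this idea, known as differential privacy, applied to a simple count query; the dataset, names and epsilon value are all hypothetical, and none of the projects mentioned above is known to use this exact approach.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float) -> float:
    """Answer "how many records match?" with epsilon-differential privacy.

    A count query has sensitivity 1: adding or removing one person changes
    the true answer by at most 1, so Laplace noise with scale 1/epsilon is
    enough to mask any single individual's presence in the dataset.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical aggregate query over a synthetic medical dataset: the noisy
# total remains useful to researchers, while revealing almost nothing about
# whether any particular patient appears in the data.
patients = [{"diabetic": random.random() < 0.06} for _ in range(10_000)]
print(private_count(patients, lambda p: p["diabetic"], epsilon=0.1))
```

The trade-off Schneier describes is explicit in the epsilon parameter: a smaller value means stronger protection for the individual and a noisier, less precise figure for society.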

These examples show data and digital technology helping to meet a wide range of social needs, but in most cases the scale is small compared to well-established commercial services. Realising the potential of digital technology to address the most pressing social and environmental issues will require scale and intensification.

Any serious response to the problem of climate change, for example, will imply a radical increase in resource efficiency, and data can help us here. Smart energy grids enable advanced demand management techniques, including the ability to power down connected appliances such as fridges and washing machines to address spikes in demand or troughs in supply. Smart cities aim to use digital technology to reduce the environmental footprint of cities by driving efficiency in transport, buildings and energy through ubiquitous monitoring, analysis and integration.[1]
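To make the demand management idea concrete, here is a minimal sketch of the kind of logic involved; the appliance names, loads and thresholds are entirely hypothetical, and real smart grid protocols are far more sophisticated. A controller pauses deferrable appliances, largest load first, when household demand exceeds the capacity the grid has signalled, and resumes them when headroom returns.

```python
from dataclasses import dataclass

@dataclass
class Appliance:
    name: str
    load_kw: float    # power drawn when running
    deferrable: bool  # can this appliance safely be paused?
    running: bool = True

def manage_demand(appliances, demand_kw: float, capacity_kw: float) -> None:
    """Shed or restore deferrable loads so demand fits within capacity."""
    overload = demand_kw - capacity_kw
    if overload > 0:
        # Spike: pause deferrable appliances, largest load first.
        for a in sorted(appliances, key=lambda a: -a.load_kw):
            if overload <= 0:
                break
            if a.deferrable and a.running:
                a.running = False
                overload -= a.load_kw
    else:
        # Headroom has returned: resume paused appliances that fit.
        headroom = -overload
        for a in appliances:
            if a.deferrable and not a.running and a.load_kw <= headroom:
                a.running = True
                headroom -= a.load_kw

# An evening peak: the grid signals this home to stay under 4 kW.
home = [Appliance("fridge", 0.2, deferrable=True),
        Appliance("washing machine", 2.0, deferrable=True),
        Appliance("medical device", 0.5, deferrable=False)]
manage_demand(home, demand_kw=6.0, capacity_kw=4.0)
print([(a.name, a.running) for a in home])  # the washing machine pauses
```

Even this toy version makes the privacy stakes visible: to decide what to pause, the controller needs a live, appliance-level view of what is running inside the home.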

These innovations will involve decisions being made using personal data that will have real-world consequences. They will also involve the generation of huge quantities of data through sensors, including sensors installed in private homes. If privacy concerns already threaten trust in the digital economy, how can we engender the even greater degree of trust required to support these more intrusive, but potentially much more beneficial, uses of data?

A recent case in the UK demonstrates how easily trust in socially beneficial uses of data can be squandered. A group of homelessness charities collaborated with the Greater London Authority to provide data on rough sleepers in London, in order to provide better support for them and to help policy makers identify emerging needs. However, The Guardian has reported that the Home Office sought access to the data, which included sensitive information such as nationality and mental health status, and subsequently used it to target people for deportation.

This case resonated with another session from the New York study trip. At the offices of the research institute Data & Society, Mary Madden introduced us to her valuable work on the privacy and security experiences of low socio-economic status populations in the USA. She and her co-authors have now written up their survey research in an article for the Washington University Law Review.[2] Among other things it reveals that low-income people face vulnerability to particular forms of privacy and security harms, which may not be sufficiently addressed in the creation of legal frameworks.

The London case and Madden’s research aptly demonstrate that data privacy is not “one size fits all”. Not all data subjects enjoy the “digital privilege” of not only being in control of the data they share, but also being insulated from the worst potential consequences of its inappropriate use. An inclusive digital future must be based on the vision of this digital privilege being extended to all. This puts the library in a unique position. As others in this blog series have argued, public libraries can use their unique advisory and educational role to empower citizens with knowledge of their privacy rights.

The role of public libraries, however, need not be limited to that of a contact point for less digitally privileged citizens; libraries can also be advocates for policies that will help everybody to reap the benefits of the digital age. Helping individuals to understand, and thus protect, their privacy is necessary but not sufficient. Libraries can also help to develop a broader social understanding of, and debate around, privacy that will drive better regulation and legal frameworks and ultimately help to create the conditions of trust in a digital society. Civil society groups and private sector organisations that have an interest in shaping a positive digital future are already working towards this goal, and public libraries will find many allies in this common cause.

[1] I have previously written about some of the ethical and privacy challenges associated with smart grids and smart cities for the Institute of International and European Affairs.
[2] Mary Madden, Michele Gilman, Karen Levy, and Alice Marwick, Privacy, Poverty, and Big Data: A Matrix of Vulnerabilities for Poor Americans, 95 Wash. U. L. Rev. 53 (2017). Available at: http://openscholarship.wustl.edu/law_lawreview/vol95/iss1/6