Q&A with Ivana Bartoletti: How can we protect our security and privacy during the pandemic?


By Lottie Jackson

The Covid-19 pandemic has forced us to confront a diverse range of issues that have long been present in society. Security and personal data protection is one such area. Many fear the shift to virtual workplaces and online socialising will spark a free-for-all on our confidential information. With an NHS coronavirus tracking app in development (trials are currently taking place on the Isle of Wight), there are concerns that this form of data sharing and collection is the first stop on the way to a surveillance state.

Under the government’s new policy of contact tracing, millions of people in the UK will soon be asked to monitor their movements to limit the spread of coronavirus. While this is clearly a vital step in controlling the pandemic, we should pause to assess the risks of tracking everything from where we have been to our social interactions.

In a bid to decipher the key issues in this debate, we’ve asked Ivana Bartoletti, a leading privacy and digital ethics expert and author of the upcoming book An Artificial Revolution: On Power, Politics and AI, to share her expertise.

 

What are the biggest dangers associated with coronavirus tracking apps? And do you think these risks outweigh the benefits? 

A digital tracking app can only work if it's part of a bigger picture, and operates alongside testing, traditional human tracing and forms of social distancing. The race to trace that we are witnessing all around the world is an interesting phenomenon. On the one hand, it is great to see that technology can serve our common endeavour and help us navigate through this pandemic.

On the other hand, tracing brings challenges because, ultimately, tracking apps are an invasion of privacy. The digitalisation of the COVID response also raises issues around the digital divide, exclusion, and inequality.

The starting point here is that our response to the pandemic is built on the digital surveillance ecosystem we have grown into. It’s no surprise that people are now suspicious and wary that COVID data - especially bio-surveillance data - could be misused later. 

It is actually a good thing that citizens are more aware now. The issue is that data privacy laws exist precisely so that citizens do not have to worry about what happens to their data. However, we are now in an emergency with no end date, and emergencies bring a completely new dimension to the checks and balances we have in a democracy.

The most important thing is for us to retain democratic oversight and demand accountability. I am all in favour of data being used for the common good, if the common good is to get rid of this deadly virus. However, the long-term consequences of trampling on human rights and privacy can be devastating. The rule of law must always apply.

 

Is digital surveillance ethical when it comes to public health?

I think the real issue is that privacy and health are seen as opposites. It is right that we worry about a far-reaching system for tracing us. However, privacy vs health is a dangerously misleading dichotomy, as it plays into the hands of those who want greater digital authoritarianism and unbounded surveillance. They will say that if you asked most people whether they would be willing to download an app in exchange for the freedom to return safely to work and normality, they would welcome it. And therefore, why shouldn’t we use it?

We must work to do better and see privacy as the most important collective good. Never before has it been so clear how interconnected and interdependent we all are in our humanity, our bodies, our experiences, our mobility, and our illness. Our shared clinical data is the greatest public asset that we have.

 

If you were drafting the data privacy provisions of the Coronavirus Act, what would you include?

First, sunset clauses: where does all this end? At which point do we get rid of the data we have collected for the pandemic? This is crucial, as the imposition of an emergency without an expiry date is not good for anyone.

Second, a list of what can and cannot be done with the data. If the app works and becomes mandatory, that has to happen through expedited rule-making.

Third, oversight and public accountability. Who looks at what happens to the data, and reports to the media and the public?

And fourth, the establishment of a COVID Transparency Officer for accountability purposes.

 

What are the longer-term consequences of more of our lives being online?

Most of our structures, rules and frameworks will become obsolete. For example, let's think of the consent-based model of privacy, which says that we, consumers, are in control and can decide whether or not we want to accept something. How is the consent model sustainable in a world where we are mostly online or, as Luciano Floridi, Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the University of Oxford, describes it, “onlife”? “Onlife” means being both online and offline, in and out of those dimensions at all times. We need to rethink all of this. We cannot expect consumers to read one hundred privacy notices every day, especially if the “onlife” space is a smart city!

The other big thing is that our digital space is too polarised, and this is because of the digital architecture underpinning it. It is often unsafe too, especially for young girls. I am horrified by the body shaming young women are subjected to. But I am also worried about the digital advertising mechanisms which recommend things based on what we like, read, and purchase online. They are a true intrusion on our autonomy of thought and our freedom to live life as a discovery rather than an imposition. This can be changed by reforming the digital advertising ecosystem and by replacing polarisation with empathy.

 

At the moment, do you think people are more likely to take risks and let their guard down when it comes to privacy and data protection?

There is no choice; that is the problem. Part of it is that people are sometimes deceived by design and tricked into data extraction practices. Good, resilient companies that are in it for the long term do the opposite, engaging with customers through trust and empowerment.

The other big thing is that data underpins our economy, with some misuses of it but also some great missed uses. If people start to trust the digital ecosystem, then we can really use data for some amazing things.

 

How can we protect ourselves?

By ceasing to think that we can control everything. It is impossible. This is why regulation and rules matter: companies need to embed privacy and ethics into technology at the design stage. The narrative of control and consent is not working because the current digital ecosystem is too complex for us to manage everything we do in it. This is why rules exist. So the way we protect ourselves is by rewarding companies that we trust with the handling of our data; by demanding enforcement when that trust is breached; and by moving from privacy as a merely individual right to privacy as a public value that holds our community and society together.

 

How can we integrate virtual and real life more holistically?

By embracing environmentalism as the greatest challenge ahead of us. One thing is certain: this crisis is demonstrating how vulnerable we are, and how our vulnerability is fuelled by inequality, racial capitalism, and social divisions. Our digital spaces are no different from the real ones: issues of accumulation, pollution and hate pullulate there, too. If we want to move forward and make this crisis a watershed moment, we must think of our digital and physical environment as one, and dramatically rethink it.

  

An Artificial Revolution: On Power, Politics and AI (The Indigo Press) by Ivana Bartoletti will be released in eBook format on 20th May. Available here.

Tickets to Ivana's virtual launch of the book are available here.