
Data (in)justices

During the COVID-19 pandemic, public and private actors have pursued mass data collection and surveillance as solutions to the biosecurity threat. Contact tracing apps, GPS ankle bracelets, ‘immunity passports’, and thermal cameras embedded in public spaces are only a few of the tech fixes proposed as strategic responses to the pandemic. Simultaneously, quarantine and social isolation measures have forced the normalisation of remote work. Who can and cannot work from home exposes the entrenched inequalities that underpin the digital divide. In cities such as Australia’s, critical telecommunications infrastructure (for example, internet connections) that services and maintains a functioning city is concentrated in central business districts. What happens, then, when forced distancing measures stretch economic activity spatially into remote locations? And how do government decision-makers ensure that their fixes are timely, necessary and proportionate, so that they do not hardcode existing inequality? This “digital solutionism” and its associated tech fixes map onto a broader paradigm shift towards embedded technologies that monitor, mediate and control public behaviour. The COVID-19 pandemic has caused a seemingly natural acceleration towards a consolidated socio-technical domain, as if tech were the panacea for the world’s problems.

Privacy protections existing in law overlay the mass collection of citizen data. Legislative instruments that have proliferated in recent years, each dealing with a specific novel digital situation, undermine the coherence of legal protection frameworks and force us to ask, as Professor Lyria Bennett Moses does, “[t]o what extent is it problematic that we solve concerns about privacy one dataset at a time?” “One dataset at a time” reads euphemistically as a catch cry for the continuing immanence of the digital future, egregiously branded “the new normal”. In this new normal, we are constituted and understood as people through cumulative datapoints, transformed from analogue citizen subjects into digitised data subjects. This setting, and the (almost) first anniversary of life in the time of a global pandemic, formed the basis of the second installation of the Data Justice Research Network’s “Disrupting Data Injustices” workshop series, held as a Virtual Roundtable on 10 November.

A common theme throughout the discussions was a second ‘risk’ that has mobilised around the pandemic and propelled a rapid transformation towards an increasingly digital society. The digitalisation of the pandemic has seen “tech solutions” enmeshed as hallmarks of government pandemic responses. How, by whom and for what purposes are datapoints being created, collected and sold on in service of other proprietary interests? The legitimate function that tech, notably contact tracing apps, proposed to perform in curbing the spread of COVID-19 came to prominence as a battle between centralised and decentralised models. The Australian Federal Government’s COVIDSafe App, a centralised example, was arguably a policy failure: its success in reducing transmission rates was negligible, and its heightened vulnerability to security breaches further incited public distrust. Of the App’s seven million users, it has been responsible for identifying seventeen cases. Despite this, the Federal Government has fiercely defended the App on the basis that one life saved is evidence of success. It is the unarguability of that statement that compels citizen reticence, inviting us to imagine that one life as located somewhere in our own networks.

The focus of citizen criticism on security flaws in centralised platforms has diverted attention from the fact that data is a commodity, the overwhelming majority of which (by orders of magnitude) is controlled by the private sector. The transparency, accountability and legal protections that regulate government-held citizen data do not necessarily apply with the same rigour to privately held data. The intervention of technocapitalism and the progressive merging of the public and private raise a fundamental question: do we need an app for that? From an interdisciplinary perspective, is the digitisation of the health crisis necessary at all? The hollowing out of the public sector’s ‘brains trust’ is exacerbated when online infrastructures are controlled by its private competitors. Who loses in this trade-off, and who counts as the ‘public’ when the merging is justified as being ‘in the public benefit’? Take the building of ‘smart cities’ as an example: its packaging as an urban transformation in the public benefit operates coercively to control who enters, demarcating whose presence is protected and whose is marked as a threat. This also plays out in the Singaporean context, where control measures have disproportionately impacted vulnerable groups. Migrant workers, who account for approximately a fifth of the population, have accounted for more than 90% of Singapore’s coronavirus infections. The legislative frameworks intended to safeguard the health and well-being of migrant workers are rendered ineffective by migrant workers’ vulnerability, coupled with superseding proprietary interests.

Returning to the question posed by Professor Bennett Moses, a more immediate and simpler legal solution would be to fix data (in)security’s legislative architecture, which is currently fragmented, overlapping and internally overriding, further complicating the development of an effective framework that compels compliance with regulatory regimes. Any legislative framework needs to ensure that measures to protect data privacy exist in practice, not just on paper, and that they are enforced and remain effective.

Should this ‘new normal’, too, be accepted as a product of pandemic discourses stuck within a false duality? Disquiets surrounding data collection and government authority; disquiets regarding Big Tech’s internal architectures of control. These disquiets should compel us, as future digitised data subjects, to demand a more nuanced critique, one that centres the data subject’s perspective and makes space for digital self-determination. Discourses need to be interdisciplinary, including the direct participation of data subjects, to develop solutions adequate to address structural inequalities. With a vaccine rollout now imminent, how will flows of access dictate who gets to be part of a post-pandemic world? And why aren’t we asking these questions now?

*All ideas have been adapted from those of the Roundtable’s discussants and participants in the Ideas Session. Any misinterpretation of discussants’ arguments, thoughts and opinions is the author’s own. Many thanks to the Data Justice Research Network and the participants of the Roundtable, which was a joint initiative between the UNSW Allens Hub and Singapore Management University’s (SMU) Centre for AI and Data Governance. A special thank you to the panellists, Claire Daniel (UNSW), Dr Monique Mann (Deakin), Professor Lyria Bennett Moses (UNSW), Professor Mark Findlay (SMU), Nydia Remolina (SMU), Jane Loo (SMU), and Alicia Wee (SMU); and also to Danielle Hynes (UNSW), Professor Janet Chan (UNSW), Dr Michael Richardson (UNSW), Josephine Seah (SMU), and Rachel Rowe (UNSW) for hosting the event.