Fascinated as we are by digital gadgets and promises, we do not look deeply enough at the radical impact of the digital transformation on our health and lives. Increasingly, social scientists warn of a new phase in the organisation of health and medical knowledge, one tailored to data extraction that will enable a new structure of power over individuals if the right political choices are not made (Couldry 2018).
The UN Special Rapporteur on extreme poverty and human rights, Philip Alston, describes this development as the emergence and expansion of a digital welfare state which is increasingly driven “to automate, predict, identify, surveil, detect, target and punish.”
His report, submitted to the UN in October 2019, reinforces concerns linked to the digital transformation of the health sector. It is not only the massive appropriation of data by the big technology companies and their platforms that should concern us, but also the expansion of digital governance by the state, especially in the sectors of welfare and health. Alston describes how the social sector, which provides citizens with benefits that they have a right to receive, can morph into a technology- and data-driven control system. Vast quantities of data are collected from a wide range of sources, connected across government silos, and then processed to enable automated decision making by algorithms and to apply predictive analytics to foresee risks. Combined with behavioural economics, this also allows governments to target public health messages at changing social behaviours. Those who have read Foucault’s Discipline and Punish will not be surprised by the application of new scientific knowledge and technological development to new forms of control.
In 2016, NHS England implemented a new policy called “Healthy Children: transforming child health information” which sets out its approach to digitising childhood (Lupton 2017).
The aim is “to transform child health information services allowing better monitoring of every child’s health and providing access to information for all those that are involved in the child’s care, where appropriate, to ensure that all children get the best possible start in life.”
The policy promises to improve care by collecting and connecting data across health, social services, and education to allow for “better monitoring, personalised care, and greater control of decisions.” It is stated that young people and families can set their own preferences, but will anyone realistically do this? Children have no say in how their lives are monitored and quantified from day one. Their data become part of the global data economy. This is reinforced by the ominous statements that explain the approach further:
“There will be an online record of a child’s health and development as well as their health and care issues; professionals will have access to key health information at the point of care to improve decision making; professionals delivering preventative programmes of care will be alerted by a failsafe (sic!) management service when an intervention is due or has been missed. And it becomes possible to deliver personalized health promotion materials.”
Anyone engaged in digital health should read Alston’s report carefully. He states unequivocally that from a human rights perspective such approaches imply that the most vulnerable and powerless in society are subject to demands and forms of intrusiveness without accountability. Citizens become ever more visible to their governments—not the other way round. The drive for relational data is expanding in the digital health sphere as well—all with the good intentions of identifying social determinants, instilling and monitoring healthy behaviours, and predicting disease, or at least detecting it as early as possible, and then personalising and targeting treatment.
This development is presented as progress, and as being necessary and inevitable in order to take all the factors that determine health into account, to increase efficiency and cost savings, or to improve communication between the system and the client (Sandvik 2019). Only the smallest part of health data is tracked within the confines of present-day healthcare—the larger value lies in the tracking and linking of a wide range of behavioural, contextual, and social data. In some countries such data are used to define health and welfare programmes, in others to establish social credit scores, in others to market and sell goods and services. We already know that many of these approaches are not failsafe and exhibit racial and gender bias, as well as other forms of built-in discrimination. The health sector might not be far removed from morphing into a technology- and data-driven control system—all for the benefit of health. I believe the dangers in sectors that are considered benign—such as health, education, and social welfare—are possibly even greater than elsewhere.
Michelle Bachelet, the UN High Commissioner for Human Rights, stated recently that “The digital revolution is a major global human rights issue. Its unquestionable benefits do not cancel out its unmistakable risks. We cannot ignore the dark side.” We must not be naive. We cannot develop digital health as a human rights free zone and destroy our children’s future in the process.
Competing interests: None declared