Data and Privacy From an Ethical Perspective

In April 2020, Norwegian authorities launched the app Smittestopp (“infection stop”) to prevent the spread of the coronavirus. The app was itself stopped in June 2020 by Datatilsynet, the Norwegian Data Protection Authority (DPA), due to insufficient privacy protections and unjustified data collection and processing. The story of Smittestopp reminds us of something we often see with new digital solutions: even with large resources and the best of intentions, things can still go wrong.
If the app’s developers had conducted a more rigorous assessment themselves, the DPA would not have needed to intervene and shut down the app. The developers could have addressed and resolved the privacy concerns before launch; the main issue was a violation of the principle of data minimisation (see chapter 3).
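To make the principle of data minimisation concrete, here is a small, purely illustrative sketch in Python. It is not the actual Smittestopp code, and the field names are invented for the example; it simply contrasts a design that collects identifying and location data with one that keeps only what a contact-tracing notification strictly needs.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical over-collecting design: ties health-related data to identity and movement.
@dataclass
class ContactEventMaximal:
    national_id: str        # directly identifies the person
    phone_number: str
    gps_latitude: float     # reveals where the encounter took place
    gps_longitude: float
    timestamp: datetime

# Hypothetical data-minimised design: only what is needed to notify close contacts.
@dataclass
class ContactEventMinimal:
    rotating_id: str        # short-lived random identifier, not linkable to a person
    timestamp: datetime
    duration_seconds: int   # whether the contact lasted long enough to matter

# The minimal record still supports the app's core purpose (warning people who were
# near an infected person) without building a movement profile of the population.
```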
Legal or security issues can go unnoticed in new digital services. Sometimes developers do not have a complete understanding of the laws and regulations. Or they might knowingly enter a grey area or overstep boundaries, hoping they won’t get caught. In any case, most services have never been specifically examined by an independent third party such as the DPA.
In other words: the fact that we have laws like the GDPR does not automatically mean that all services are safe to use. There will always be a gap between the number of apps and programs you can choose from and the number that have been legally assessed and checked by an independent third party. New situations and issues that the law has not anticipated will also keep appearing. Our laws are developed through a democratic process, and democracy takes time.
One thing we can do to reduce the risks of using new technology is to assess its ethics. Ethics, in short, is about assessing what is good and right. We call it digital ethics when we reflect on ethics related to new technology, or on actions performed with digital technology.

What is digital ethics?

Ethics is about reflecting on what the moral thing to do is, and acting accordingly. Being moral means making choices that promote good lives for ourselves and others.
People around the world have different ideas of what it means to do the right thing. In Norway, and many other places, human dignity and equality are considered key values. Actions that uphold these values are deemed good.
These values don’t just form the foundation of ethics; they also shape our laws. For instance, human rights and equality laws help define what dignity and equality really mean. Privacy regulations also draw on these values. The reasoning is that if all people are equally valuable, they should have the same rights to participate in society and to share or withhold personal information as they wish. This necessitates a right to privacy and, by extension, data protection.
However, digital ethics goes beyond basic Internet manners. Data is collected, analysed, and used in ways that can be hard for the average person to grasp. So, a crucial aspect of digital ethics is understanding how this process works. The more we understand about data and how it’s used, the better equipped we are to judge whether it’s ethically justifiable.
Digital ethics prompts us to ask: what kind of society do we want? While privacy law focuses on what is legal, digital ethics considers what is desirable and beneficial for individuals and society, both now and in the future.

What is ethically defensible collection and use of data?

In digital ethics, we often have to balance competing considerations. For instance, with the first version of the Smittestopp app (a new app with the same name has since been released), the authorities prioritised societal needs over individual privacy. They designed an app that made data accessible for research and analysis to promote public health, but that did not fully protect the privacy of individual users.
Datatilsynet considered the app’s potential for surveillance to be disproportionately invasive, given its stated purpose.
There are many other examples of how digital services force us to choose between different values. One relevant example is the debate over how much online surveillance we should allow in order to prevent child abuse and other crimes.
Ethical reflection does not provide a definite answer, but it may lead us toward well-reasoned decisions. This applies to digital ethics as well.
When you are in a position to collect or share data, you need to balance these various factors. You have to understand how to approach an issue ethically and document your reasoning, so that you can explain your decisions later.
Deciding what counts as ethical data collection and use isn’t straightforward. Various factors come into play and need to be weighed against each other. Making these assessments is challenging but important. Failing to do so can have serious repercussions for individuals and society alike.
Just consider the Cambridge Analytica scandal, in which an external party was allowed to harvest data from millions of Facebook users without their knowledge or consent. The data was used to target and influence voters in democratic elections, and may have contributed to both Donald Trump’s 2016 election victory and the outcome of the Brexit referendum.