OPINION Simon Rogerson, professor emeritus in Computer Ethics at the Centre for Computing & Social Responsibility, De Montfort University, explains why ‘the data self’ has ethical implications for everyone in a connected world. Additional reporting: Chris Middleton.
Wireless technology has transformed the space surrounding us from a physical public space into a virtual private one that moves everywhere with us.
We can instantly disengage from the physical world and engage in virtual interactions of our choosing – on what we perceive to be a global basis. The smartphone revolution was one of the final pieces of this jigsaw, but the Internet of Things will complete the picture, providing a virtual shadow of the physical world.
This we all know. But the very meaning of space and time has changed and this, in turn, has changed the way we behave in this new space. The ethical and social implications of that cannot be overestimated.
We’ve become digital citizens with our needs satisfied and aspirations fulfilled through a combination of the real and virtual worlds – as the people surrounding you in cafes, on trains, and in physical public spaces attest, as they gaze into their screens.
The speed of technological advance is astonishing – as illustrated by the following statistics. In 2010, 400 million people had Facebook accounts, 126 million blogs existed, 50 million tweets were created daily, and 91 percent of mobile Web users accessed social networking sites.
But by 2018, that picture has changed dramatically: 2.2 billion people now have Facebook accounts; over 440 million blogs exist on Tumblr, Squarespace, and WordPress alone; more than five billion videos are watched every day on YouTube; 500 million tweets are created daily; 546 million people are LinkedIn users in over 200 countries; and most online/social time is spent on smartphones, with WhatsApp boasting 1.5 billion active users.
Social media, supported by high-performance wireless, is now an essential part of the global societal infrastructure, and its variety and reach continue to expand.
Consultant and tech blogger Fred Cavazza’s diagram, above, clearly shows that many of the activities that provide the ‘social glue’ of the human world are increasingly supported through social media. This raises a number of ethical issues, such as access, technological literacy and aversion, and economic disparity.
The graphic below, showing internet penetration, highlights these issues. It’s clear from this that regions of socio-economic deprivation, such as central Africa, are increasingly disadvantaged in a world dependent on online services.
Put another way, many of these people are not currently part of the same conversation.
Privacy
Privacy is another critical area for ethics in the connected age.
In 2011, Viviane Reding, vice president of the European Commission and EU Justice commissioner, laid out the foundations on which new data protection regulations should be based.
These comprised four pillars: the right to be forgotten; transparency; privacy by default; and protection regardless of data location. In May 2018, the European General Data Protection Regulation (GDPR), which is the third version of European data protection legislation, was rolled out.
It contains a list of an individual’s rights as to how personal data is handled. These include the rights: to be informed; of access; to rectification; to erasure; to restrict processing; to data portability; to object; and not to be subject to automated decision-making, including profiling.
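The eight rights listed above can be sketched as a simple data model. The following is a hypothetical illustration only, not part of any official GDPR API; the dispatcher handles just two of the rights, and all names are invented:

```python
from enum import Enum

class GDPRRight(Enum):
    """The eight data subject rights listed in the GDPR."""
    INFORMED = "right to be informed"
    ACCESS = "right of access"
    RECTIFICATION = "right to rectification"
    ERASURE = "right to erasure"
    RESTRICT_PROCESSING = "right to restrict processing"
    DATA_PORTABILITY = "right to data portability"
    OBJECT = "right to object"
    NO_AUTOMATED_DECISIONS = "right not to be subject to automated decision-making"

def handle_request(right: GDPRRight, records: dict, subject_id: str) -> dict:
    """Hypothetical dispatcher for data subject requests.
    Only two rights are sketched; a real data controller must
    support all eight."""
    if right is GDPRRight.ACCESS:
        # Return a copy of everything held on the data subject.
        return dict(records.get(subject_id, {}))
    if right is GDPRRight.ERASURE:
        # Delete the subject's records (the 'right to be forgotten').
        records.pop(subject_id, None)
        return {}
    raise NotImplementedError(f"{right.value} is not sketched here")
```

Even this toy model makes one point clear: erasure and access are operations on the organisation's records, which is why the regulation treats them as enforceable obligations rather than courtesies.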
Some believe the US is now on the cusp of adopting similar rules.
California has a history of being in the vanguard of privacy legislation. In 1972, voters amended the state’s Constitution to include the legal and enforceable right to privacy as being among the “inalienable” rights of all citizens. However, over the past quarter century, that right has been encroached on by the digital economy – ironically, led by companies in the state, such as Google and Facebook.
In November 2017, lawyers acting on behalf of the citizens of California wrote to the Attorney General, outlining proposals for a new consumer privacy act. Their proposed law entailed adding 15 clauses to the state’s Civil Code. The most significant ones for data-collecting organisations, such as social platforms, were:
- The right to know what personal information is being collected
- The right to know if personal information is sold or disclosed, and to whom
- The right to say no to the sale of that personal information
- The right to equal service and price (i.e. not to be discriminated against, based on that personal data).
Moreover, the draft legislation’s definition of personal information was extremely broad, and included:
- Identifiers such as name, address, IP address, email address, account name, social security number, passport number, and driving licence
- Property records
- Biometric data
- Browsing history, interaction with advertisements, apps, or websites
- Geolocation data
- Audio, electronic, visual, thermal, olfactory, or similar information, including facial recognition
- Psychometric data
- Employment history
- Inferences drawn from any of the information identified above
- All of the above as applied to any minor children of the data subject.
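The breadth of that definition is easier to appreciate in code. The sketch below, a hypothetical illustration with invented field and category names, flags which fields in a user record would fall under a few of the draft categories:

```python
# Hypothetical mapping from record fields to draft CCPA categories.
# Field names are invented; the category list is abridged.
DRAFT_CATEGORIES = {
    "identifiers": {"name", "address", "ip_address", "email",
                    "ssn", "passport_no", "driving_licence"},
    "biometric": {"fingerprint_hash", "face_template"},
    "activity": {"browsing_history", "ad_clicks"},
    "geolocation": {"last_known_location"},
}

def classify_fields(record: dict) -> dict:
    """Return, for each field in a user record, the draft
    category of personal information it falls under (if any)."""
    result = {}
    for field in record:
        for category, fields in DRAFT_CATEGORIES.items():
            if field in fields:
                result[field] = category
    return result
```

Note that the final clause of the draft definition, inferences drawn from any of the above, means even fields absent from such a mapping can become personal information once combined.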
On 28 June 2018, the California Consumer Privacy Act of 2018 (CCPA) was passed unanimously. However, it watered down some of the November proposals.
Most significantly, it includes an exception to the right to equal service, allowing companies to offer different levels of service, depending on how customers interact with a site, app, or advertisement – the so-called ‘Spotify exception’.
Corporate support
Google, Facebook, and others, oppose tighter regulations of any kind, which is why they’ve been lobbying the US government for a watered-down federal solution that serves their commercial interests. The aim is to neuter California’s act.
But Europe is getting involved once again. In September 2018, France’s data regulator, the Commission Nationale de l’Informatique et des Libertés, announced that it is seeking to extend the so-called right to be forgotten globally, arguing that any Europe-only removal of data is meaningless on a global platform in an age of IP cloaking.
Either way, new European and US state legislation acknowledges something critically important: that personal data is now an important part of an individual and, consequently, the individual must have much greater control over that data, given that it is now a fundamental element of digital citizenship.
As such, there is an implied acceptance that humans have become more than their organic selves, and now extend into data.
Acknowledging that humans are becoming composite beings, in effect, leads to a requirement to think of ourselves not as data subjects, but as data selves.
It therefore follows that data – our virtual anatomy, if you like – should never be owned by third parties. This additional right of the individual will perhaps be included in the fourth version of data protection legislation sometime in the future.
The problem is that there is currently no widely accepted mechanism for managing the digital self online, such as a personal API, although some blockchain projects are working in this area.
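What might such a personal API look like? The sketch below is a speculative illustration of the idea, not based on any existing product, standard, or blockchain project: the individual holds their own data and issues revocable access grants to third parties.

```python
import secrets

class PersonalDataAPI:
    """Hypothetical sketch of a 'personal API': the data self holds
    its own records and issues revocable grants to third parties."""

    def __init__(self):
        self._data = {}
        self._grants = {}  # token -> (party, set of readable fields)

    def put(self, field: str, value) -> None:
        """Store or update one item of personal data."""
        self._data[field] = value

    def grant(self, party: str, fields: list) -> str:
        """Issue a revocable token allowing `party` to read `fields`."""
        token = secrets.token_hex(8)
        self._grants[token] = (party, set(fields))
        return token

    def revoke(self, token: str) -> None:
        """Withdraw a grant: the individual stays in control."""
        self._grants.pop(token, None)

    def read(self, token: str) -> dict:
        """Read only the fields the grant covers; revoked tokens fail."""
        if token not in self._grants:
            raise PermissionError("grant revoked or never issued")
        _, fields = self._grants[token]
        return {f: self._data[f] for f in fields if f in self._data}
```

The design choice worth noting is that third parties never receive the data store itself, only scoped, revocable tokens, which is the inversion of today's model, where platforms hold the data and individuals petition for access.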
Not the only ethical issue
While privacy is an important ethical issue in the context of social media, it is by no means the only one as the connected world expands, along with its ethical challenges.
Moral norms and values are embedded in social networking sites. As such, according to Light and McGrath [1], the technology shapes the user experience and, ultimately, changes the user as more and more time is spent on a platform.
Bateman et al [2] explain that self-disclosure on social media has begun to change what spaces, time, and information we judge to be private or public. This tension raises a number of ethical issues.
Dual use and even multiple uses are commonplace across social media. Witt [3] provides evidence of employers perusing social profiles before making hiring decisions. Lawyers often access social media to collect incriminating information in divorce and child custody cases, she says.
Thus social platforms present a danger to participants and society as a whole, if improperly accessed or used.
Social network sites can be, and are, monitored, and legal precedents have been established. For example, Strutin [4] explains how a juvenile gang member pleaded guilty to a weapons offence in a California court. He was sentenced to probation, which included barring him from access to any social network.
This judicial recognition of social networks as communication media that can be monitored should change our perception of them.
Meanwhile, Louch et al [5] suggest that the teenage years are when adolescents work on determining their identity. Advertisements, which are core to the social media experience, have an impact on their sense of who or what they can become.
As such, it could be argued that targeted marketing using social profiles is tantamount to covert social manipulation, which is discriminatory. Such action requires ethical scrutiny.
This challenging ethical landscape was amply illustrated by the recent problems with Facebook and Cambridge Analytica, a scandal in which 87 million data profiles were shared.
Melrose explains that people’s data selves were harvested by Cambridge Analytica using a digital personality quiz, together with data from their Facebook profiles and information about their friends.
It was ‘people as data’ who were the priceless commodities mined by Cambridge Analytica, and subsequently used for a disingenuous purpose.
This type of unethical behaviour is not uncommon. On 21 July 2018, the BBC reported that Facebook had suspended Crimson Hexagon, a US-based analytics firm, while it investigated concerns about the collection and sharing of user data.
Internet of Business says
The ethical issues surrounding social media are complex and difficult to address, and consideration of them must go beyond mere compliance with current regulations and laws.
Creating simple checklists is problematic, as the ethical dimension of ICT can, and does, change rapidly as technology evolves. The appliance of ethical sensitivity – rather than compliance with an ethical checklist – must be the way forward.
Our data shadows will remain for as long as the virtual world exists. We are, therefore, all permanently vulnerable in this technologically dependent world.
• Professor Rogerson can be reached at: srog@dmu.ac.uk.
[1] Light, B. and McGrath, K., 2010. Ethics and social networking sites: a disclosive analysis of Facebook. Information Technology & People, 23(4), pp.290-311.
[2] Bateman, P.J., Pike, J.C. and Butler, B.S., 2011. To disclose or not: Publicness in social networking sites. Information Technology & People, 24(1), pp.78-100.
[3] Witt, C.L., 2009. Social networking: Ethics and etiquette. Advances in Neonatal Care, 9(6), pp.257-258.
[4] Strutin, K., 2011. Social media and the vanishing points of ethical and constitutional boundaries. Pace L. Rev., 31, p.228.
[5] Louch, M.O., Mainier, M.J. and Frketich, D.D., 2010. An analysis of the ethics of data warehousing in the context of social networking applications and adolescents. 2010 ISECON Proceedings, 27(1392).