In the early days of the internet, there was an optimistic belief that technology could foster a wise and self-governing global community. However, as social networks emerged and became monopolistic, this ideal was undermined. The 2016 elections in the US and Brexit vote demonstrated how social media could polarize societies and disrupt democratic processes. Then, the Facebook-Cambridge Analytica scandal of 2018 exposed the manipulative power of targeted advertising, sparking further widespread concern over data privacy and the influence of social media on public opinion.
Various mechanisms of social networks contributed to this dysfunction. These include the abuse of personal data through extensive profiling and tracking, the spread of misinformation and clickbait, and the optimization of content to maximize user engagement at the expense of truth and societal well-being. The consequences of these practices are far-reaching, leading to political polarization, mental health issues, and the erosion of trust in democratic institutions.
The response to these challenges must be multifaceted, involving efforts from tech companies, governments, parents, and activists to mitigate the negative impacts of social networks. Crucial elements include transparency, ethical design, and regulatory measures to create a safer and more humane digital environment. We need a collective effort to harness the positive potential of the internet while addressing its darker aspects.
After a few years, people joined the social networks instead of making websites, and the social networks became monopolies. In 2016, there was concern that the election of Donald Trump and the Brexit referendum may have been influenced by the manipulation of people on social media.
Then in 2018, with the Facebook-Cambridge Analytica scandal [CA], the public at large realized that the way the internet had been working had led to people being manipulated by targeted advertising, based on profiles built from their social media activity. People worried not only about the exposure of their own personal data, but also about the indirect effect it was having on other people. Since then, the political battle in the USA and the UK seems to have become more polarized, to the point that Congress in 2024 is not functioning properly [CON].
There is a widespread but questionable assumption that the only way to get revenue on the web is with targeted advertising. To do that, you need to build profiles of users to be able to target them with particular ads, particular languages, particular arguments. To build that profile, you collect the data associated with an online identity, connect it with data about other online interactions from the same person, and then trade the data with other companies.
Here we use "Spying" for any retention of user data. We can divide it roughly into data which the user is aware of, like the data entered on forms and photos uploaded, and data the user is not aware of, like their IP address, location, and super-cookie parameters which help identify them.
The dark art of concluding that two different bits of personal data in fact describe the same person is a very well developed one. It draws on signals like browser type, processor speed, and typing timing, which together make a characteristic signature for a person which will be consistent between different sites.
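The linking technique described above can be sketched very simply: combine whatever attributes are observable into one stable signature. This is a minimal illustration, not a real tracker; the attribute names are illustrative, and real fingerprinting systems use dozens more signals (installed fonts, canvas rendering, audio stack, and so on).

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Combine observable browser attributes into a stable identifier.

    Serializing deterministically (sorted keys) means the same set of
    attributes always produces the same hash, even on different sites.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# The same device, observed on two unrelated sites:
device = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/126.0",
    "screen": "1920x1080",
    "timezone": "Europe/London",
    "fonts": ["Arial", "DejaVu Sans", "Noto Sans"],
}
visit_on_site_a = fingerprint(device)
visit_on_site_b = fingerprint(device)

# The signature matches across sites, so the two visits can be
# linked to one person without any cookie being set.
assert visit_on_site_a == visit_on_site_b
```

The point of the sketch is that no single attribute identifies you; it is the combination, consistent between sites, that does.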
Any application has to build some kind of profile of its user. Profile building is needed in order to serve the user well, if nothing else, so the profile building itself is not the problem. The problems arise when that profile is used to target the user in a way that takes advantage of them, whether by selling products or promoting ideas that are not good for the user. The market in online profiles means that once a profile has been built, it is sometimes sold to another organization which may not be benevolent in its use of it. The profile may also be augmented with third-party data from the personal data market.
Once one has a stream of deep profile data about a user, one can target them with advertising. This is the most effective way of selling things on the web, but more concerning is that it is a way to manipulate politics and ideologies, particularly in the run-up to an election. The ad market on Facebook, for example, allows one to select very specific demographics and attributes to target, or to re-target people who have interacted before [FBAD].
If you are an advertiser, targeted advertising is much more effective than non-targeted. For the user, it means that ads are more relevant and more useful, though it also narrows your world, in that it limits the things which you are shown.
Clearly a drawback of any powerful technique, like targeted advertising, is that it can be used for malevolent purposes. In this case it can be used to manipulate people into buying something which they should not be buying, or voting for something which is not in their own best interests. An insidious aspect is that targeted individuals are not aware that other groups may be shown wildly different and inconsistent versions of the same message.
If the goal is to get the user to click on that link, then the most effective thing to do is to lie about what the link goes to. The link may seem to be something free but in fact be about something for pay. The link may seem to be something about sex but in fact be to something about chess. The link may seem to be something about chess but in fact be to something about sex. And so on. Clickbait takes many forms, and is rife. It is a classic deceptive pattern.[DARK1]
So far we have looked at the processes of building profiles of users, and of using those profiles for targeted advertising. We have not looked at the other side of the social network dystopia: the selection of feed items, which the site optimizes for maximum engagement. What seems to happen is that the social networking sites use an AI system that learns to keep a user on the site. It turns out that people are kept engaged by things like claims which are surprising but untrue, and content which makes them angry at other people.
The first leads to misinformation, which directly undermines a democracy, where you need an informed electorate.
The second contributes to a toxic online environment, and to the polarization of society.
Why are people motivated to lie on the internet? Because that is what gets the clicks. The young Macedonians who, during the first Trump election campaign, made a living by creating sites funded by Google ads experimented with all sorts of headlines and tweets to draw traffic to their websites [MAC]. The best was "Hillary really wants Trump to win" - a brilliant concoction from one of their creatives that brought in more ad revenue than anything else.
The Google ad machine was training them in what to say to be rewarded with money. And so untruths about the election were being spread by people who personally had no skin in the game, no direct way of benefitting from the result of the election either way.
There are of course many other subsystems in the net which incentivize people to lie.
While the psychology profession may have been loath to use the term "addiction" for behaviors rather than substances, the media have been using it for social media and online gaming.
There are many causes of misinformation -- people believing things which are not true -- not all of which is disinformation -- misinformation deliberately created by others. Classic examples of disinformation range from intense operations to scam one target person at a time, to broad conspiracy theories believed by huge numbers of people.
"While Facebook, the largest social media platform, has gone out of its way to deny that it contributes to extreme divisiveness, a growing body of social science research, as well as Facebook’s own actions and leaked documents, indicate that an important relationship exists." [POL1]. So the link from social media to polarization is contested.
The polarization of society, most evident in the USA but widespread, has been blamed for the fact that it is very difficult to have civilized conversations between people of differing views. This has had a significant effect on democracy in the USA [CONG] and elsewhere [DEM1][DEM2].
There has been a lot of discussion about the extent to which variations of these processes on social media actually affected the results of elections. It is widely believed in the US that Russia tried to influence the 2016 US elections [EL1][EL2][EL3]. In the run-up to 2024, a year of many elections, people have naturally wondered whether the upcoming elections will be affected by the mass of misinformation which exists, or by deliberate manipulation of voters through social media [EL4].
It would be hard to catalog and measure all the ways in which mental health is affected by the various things on the web. In sad, extreme cases, there is a paper trail where a person's suicide is traceable to their use of the internet [MH1][MH2]. In the USA recently, the US Surgeon General’s suggestion that perhaps social networks, like cigarettes, should carry mandatory health warnings certainly raised the level of public discussion [MUR1][MUR2].
One common aspect of many of the platforms, not just the addictive ones, is the process of comparing oneself with others, whether it is beauty on Instagram or any other aspect of one's life and lifestyle: comparing oneself with friends, or with public figures, whether they be film stars or influencers. Peer pressure can be unhealthy, and is close to the fear of missing out ("FOMO").
Of course mental health can also be improved, through access to resources, and through supportive relationships with friends. [WEL1][WEL2].
There are many benevolent processes in which individuals and groups work together to make their own lives, and the world a better place. In the next article we go over some of them.
In a short article like this, it is hard to describe all of the issues with social media and the ways they interact, but there is a lot of material out there which does it in more depth.
It is interesting to see how these platforms move within this space over time. When I first got an Instagram account, I used it to keep track of friends and family, and when I opened it I knew I would get a list of posts by people I had followed. Now, years later, I find that I get a feed of amazing videos of amazing things and products -- all very clickable and doom-scrollable -- but it is no longer useful as a private place for friends and family.
Also, Skype, which many use for chat and video conferencing, seems to suggest I follow not just people but subjects: basically Skype-controlled feeds. WhatsApp, too, seems to suggest following some sort of channel. Is the reward for a platform that grabs my attention so great that all kinds of current social media platforms will drift toward the same model, and lose their originality? Some people fear that is inevitable.
What has been society's response to these problems? There has been a lot of press coverage. There have been a number of books about different aspects of the problem. A few things are listed in the references below, but searching on the web will get you deeper into the problem. Organizations like the World Wide Web Foundation [WF0] and the Center for Humane Technology [CHT] were set up to counter these problems. CHT produced a movie, The Social Dilemma [SD], describing a family severely impacted. The Web Foundation had projects including, for example, affordable access for all and women's rights online. Research groups at various universities have taken on various aspects of this.
When you build a feed and you use some form of AI to learn an algorithm for it, you have a choice as to how to train the AI which selects the feed items. This is a really important choice! The common understanding is that a lot of the dysfunction comes from training the AI to optimize for engagement: for the time the user spends on the website or app. This ends up correlating with the user experiencing anger and hate.
Suppose you optimize instead for other things, such as constructive discussion, exploring other cultures, or meeting slightly different new people ("Stretch Friend"): things which would not completely destroy your revenue, but would leave the user -- and the world -- in a better place, not a worse one.
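The choice of objective described above can be sketched as a weighted scoring function over candidate feed items. This is a toy illustration, not how any real platform's ranker works; the metric names, weights, and example items are all hypothetical.

```python
def score_item(item: dict, weights: dict) -> float:
    """Score a candidate feed item as a weighted sum of predicted outcomes.

    The metric names and weights are hypothetical -- the point is only
    that the training objective is a design choice, not a given.
    """
    return sum(w * item.get(metric, 0.0) for metric, w in weights.items())

# Optimizing purely for time-on-site rewards whatever keeps people hooked...
engagement_only = {"predicted_minutes_on_site": 1.0}

# ...whereas a broader objective trades some engagement for constructive
# discussion and exposure to other views.
humane = {
    "predicted_minutes_on_site": 0.4,
    "predicted_constructive_replies": 0.3,
    "predicted_cross_cultural_exposure": 0.3,
}

candidates = [
    {"id": "outrage_post", "predicted_minutes_on_site": 9.0,
     "predicted_constructive_replies": 0.1},
    {"id": "stretch_friend_post", "predicted_minutes_on_site": 4.0,
     "predicted_constructive_replies": 6.0,
     "predicted_cross_cultural_exposure": 5.0},
]

top_engagement = max(candidates, key=lambda c: score_item(c, engagement_only))
top_humane = max(candidates, key=lambda c: score_item(c, humane))
print(top_engagement["id"])  # the engagement objective picks the outrage post
print(top_humane["id"])      # the broader objective picks the stretch-friend post
```

The same candidates, ranked under two objectives, produce opposite winners: that is the design choice in miniature.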
Be transparent about the algorithms you are using.[WF1]
The Web Foundation, over the last few years, had a project on women's rights on the web, and specifically on online gender-based violence (OGBV). Among other things, its report highlighted three changes to the platforms which women had pointed out would actually make the situation better. When these were pointed out, each change was actually made by the platform concerned. (One was the ability to opt out of being included in the global heat map on Strava.) [WF2][WF3]
And the basic things you already know about making a web site.
Legislation is an important part of the solution, along with the regulation it typically involves. It has to be done, of course, in close collaboration with industry, academia, and the NGO sector. In 2024*,
Public information and online resources about the issues are important tools. Another one is "pre-bunking": preparing a population for the future onslaught of fake news before the election season by getting people to imagine in advance what could happen. It is also good to train people to recognize provenance, like domain names and SMS numbers, of government information.[PRE1][PRE2]
Here are a few tips that parents can keep in mind to help keep their children safe online:
When governments and large companies are not doing the things they could do to reduce harms from social media on the web, it is reasonable to protest. Protests from "normal people" and from celebrities have been important in the past (as with Net Neutrality).
A simple way of mitigating the fact that there are so many subsystems on the net which lead to bad things is simply: don't use them. And use the many, many good things. The next article in this series points out what a huge amount of good stuff there is on the net.
If you are a developer, if you can code -- or if you can design things that other people can code, or commission things that people can build -- then you can make systems which are better than the ones we have. One could argue that you have an obligation to do so. Some of these will be new takes on an existing thing which make it one notch more humane. Others will be whole new wonderful systems which we can't imagine now.
You can't build something as big as the web -- or as big as a social network -- and just launch it, hoping that good things will happen. You can hope for utopian results, but you have to carefully watch and research what actually happens, what unexpected consequences emerge in the behaviors of huge groups of humanity connected by technology. Then you need to re-engineer the system -- technical, social, regulatory, and legal -- to fix the problems and enhance the benefits.