Fix Our Feeds Archives - People vs. Big Tech
https://peoplevsbig.tech/category/fix-our-feeds/
We’re people, not users

Briefing: protecting children and young people from addictive design
https://peoplevsbig.tech/briefing-protecting-children-and-young-people-from-addictive-design/
7 November 2024
Research has shown the deep harm excessive social media use can do to young brains and bodies. The EU Commission must tackle the root cause.


Social media companies design their platforms to encourage users to spend as much time on them as possible. Addictive design affects everyone, but children and young people are especially susceptible. Research shows that, given their stage of neural development, young users are particularly prone both to excessive use of social media and to its harmful effects, and young users with pre-existing psychosocial vulnerabilities are at even greater risk.

What is addictive design?

Social media platforms’ business model relies on keeping users online for as long as possible, so they can display more advertising. The platforms are optimised to trigger the release of dopamine - a neurotransmitter the brain releases when it expects a reward - making users crave more and use more.
Young users are far from exempt: documents reveal that Meta has invested significant resources in studying the neurological vulnerabilities of young users, and has even created an internal presentation on how to exploit them.

While more research is needed, the following addictive features have been identified (a simplified sketch of how they combine in an engagement-driven feed follows this list):

  • Notifications such as “likes”: both the novelty and the validation of another user’s engagement trigger a dopamine release, reinforcing the desire to post and interact and creating a “social validation feedback loop”.
  • Hyper personalised content algorithms or “recommender systems”: Brain scans of students showed that watching a personalised selection of videos triggered stronger activity in addiction-related areas of the brain compared to non-personalised videos.
  • Intermittent reinforcement: users receive content they find less interesting, punctuated by frequent dopamine hits from likes or a video they really enjoy. This keeps the user scrolling in anticipation of the next reward. The randomisation of rewards has been compared to “fruit machines” in gambling.
  • Autoplay and infinite scroll: automatically showing the next piece of content to provide a continuous, endless feed, makes it difficult to find a natural stopping point.
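To make these mechanics concrete, below is a deliberately simplified, hypothetical sketch of how such features combine in an engagement-optimised feed. It is not any platform’s actual code: the function names, weights and probabilities are illustrative assumptions only.

```python
import random

def predicted_engagement(item, user_profile):
    """Illustrative stand-in for a trained engagement-prediction model:
    score items by their overlap with the interests inferred about the user."""
    overlap = len(set(item["topics"]) & set(user_profile["inferred_interests"]))
    return overlap / max(len(item["topics"]), 1)

def next_batch(candidates, user_profile, size=10):
    """Rank by predicted engagement, then randomly swap in an occasional
    high-novelty item -- the intermittent-reinforcement pattern that keeps
    users scrolling in anticipation of the next reward."""
    ranked = sorted(candidates,
                    key=lambda c: predicted_engagement(c, user_profile),
                    reverse=True)
    batch, rest = ranked[:size], ranked[size:]
    if batch and rest and random.random() < 0.3:   # randomised "reward"
        batch[random.randrange(len(batch))] = random.choice(rest)
    return batch

def infinite_scroll(candidates, user_profile):
    """Autoplay / infinite scroll: the feed never offers a stopping point."""
    while True:                                    # endless by design
        yield from next_batch(candidates, user_profile)
```

Even in this toy version, nothing in the loop is optimised for the user’s wellbeing; every design choice serves time spent on the platform.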

 Why is addictive design so harmful? 

Excessive screen time and social media use have been shown to cause:

  • Neurological harm:
    • Reduction in grey matter in the brain according to several studies, similar to the effects seen in other addictions.
    • Reduced attention span and impulse control are linked to the rapid consumption of content on social media, particularly short-form videos, and especially in younger users.
    • Possible impairment of prefrontal cortex development, which is responsible for decision-making and impulse control, due to early exposure to  social media's fast-paced content. N.B. the prefrontal cortex does not fully develop until around age 25.
    • Possible development of ADHD-like symptoms, which early studies suggest may be linked to excessive screen time.
    • Temporary decline in task performance identified in children after watching fast-paced videos.

  • Psychological harm:
    • In November 2023, Amnesty International found that within an hour of launching a dummy TikTok account posing as a 13-year-old child who interacted with mental health content, multiple videos romanticising, normalising or encouraging suicide had been recommended. This illustrates the risk both of prolonged screen time and of the hyper personalisation of content recommender systems.
    • Increased anxiety, depression, and feelings of isolation have been linked to prolonged online engagement, as social media can negatively affect self-esteem, body image and overall psychological well-being.
    • Risk exposure: Longer time online exposes children and young people more to risks such as cyberbullying, abuse, scams, and age-inappropriate content.

  • Physical harm:
    • “93% of Gen Z have lost sleep because they stayed up to view or participate in social media,” according to the American Academy of Sleep Medicine.
    • Reduced sleep and activity: Social media usage can lead to sleep loss and decreased physical activity, which affect weight, school performance and mental health, and distract from real-life experiences.

Gone is the time when the streets were considered the most dangerous place for a child to be - now, for many young people the most dangerous place they can be is alone in their room with their phone.

What’s the solution?

Given the severity of the risks to children online, we need binding rules for platforms. Unfortunately, the very large online platforms (VLOPs) have repeatedly demonstrated that they choose profit over the safety of children, young people and society in general.

The adjustments that some have made have been minor. For example, TikTok no longer allows push notifications after 9 pm for users aged 13 to 15, but those users are still exposed to push notifications (which are linked to addictive behaviour) for most of the day. In March 2023, TikTok introduced a new screen-time management tool that requires under-18s to actively extend their time on the app once they have reached a 60-minute daily limit. However, this measure puts the burden on children, who in large numbers describe themselves as “addicted” to TikTok, to set limits on their own use of the platform. The prompt can also be easily dismissed and does not include a health warning. The measure is further limited because it applies only to users whom the system identifies as children, and the effectiveness of TikTok’s age verification has been called into question: the UK’s media regulator Ofcom has found that 16% of British three- and four-year-olds have access to TikTok.

Meta’s leaked internal documents reveal that the corporation knowingly retains millions of users under 13 years old, and has chosen not to remove them. Notably, Harvard University research last year estimated that in the US alone, Instagram made $11 billion in advertising revenue from minors in 2022.

Risk of overreliance on age verification 

While we welcome norms on an appropriate age to access social media platforms, relying on age-gating and age verification alone to protect minors online is, unfortunately, unrealistic: even the most robust age verification can be circumvented.
Age-gating and age verification also assume that parents or guardians have the time, capacity and inclination to monitor internet usage. Frequent monitoring is unrealistic for most families, and this approach particularly risks disadvantaging young people who face additional challenges, such as those living in care, or whose parents work long hours or face language barriers in their country of residence.
To truly protect children and young users, we need safe defaults for all. Please see our whitepaper prepared in collaboration with Panoptykon and other researchers and technologists: Safe by Default: Moving away from engagement-based rankings towards safe, rights-respecting, and human centric recommender systems.
Aside from this, age verification can present its own risks to privacy, security and free speech, as well as costs and inconvenience for businesses.

Establishing binding rules

Fortunately, there has been momentum to tackle addictive design in the EU; last December the European Parliament adopted by an overwhelming majority a call urging the Commission to address addictive design. In its conclusions for the Future of Digital Policy, the Council stressed the need for measures to address issues related to addictive design. In July, Commission President von der Leyen listed this as a priority for the 2024-2029 mandate. The Commission’s recent Digital Fairness Fitness Check also outlined the importance of addressing addictive design.

The Commission must:

  • assess and prohibit the most harmful addictive techniques not already covered by existing regulation, with a focus on provisions on children and special consideration of their specific rights and vulnerabilities.
    • examine whether an obligation not to use profiling/interaction-based content recommender systems ‘by default’ is required in order to protect users from hyper personalised content algorithms; 
    • put forward a ‘right not to be disturbed’ to empower consumers by turning all attention-seeking features off.
  • ensure strong enforcement of the Digital Services Act on the protection of minors, prioritising:
    • clarifying the additional risk assessment and mitigation obligations of very large online platforms (VLOPs) in relation to potential harms to health caused by the addictive design of their platforms;
    • independently assessing the addictive and mental-health effects of hyper-personalised recommender systems;
    • naming features in recommender systems that contribute to systemic risks.

Letter to European Commissioner Breton: Tackling harmful recommender systems
https://peoplevsbig.tech/letter-to-european-commissioner-breton-tackling-harmful-recommender-systems/
5 February 2024
Civil society organisations unite behind Coimisiún na Meán's proposal to disable profiling-based recommender systems on social media video platforms.


Dear Commissioner Breton,

Coimisiún na Meán’s proposal to require social media video platforms to disable, by default, recommender systems based on intimately profiling people is an important step toward realising the vision of the Digital Services Act (DSA). We, eighteen civil society organisations, urge you not to block it and, moreover, to recommend it as a risk mitigation measure under Article 35 of the DSA. This is an opportunity to once more prove European leadership.

Disabling profiling-based recommender systems by default has overwhelming support from civil society, the Irish public and cross-group MEPs. More than 60 diverse Irish civil society organisations endorsed a submission strongly backing this measure, as covered by the Irish Examiner. We are united in our support for this Irish civil society initiative. 82% of Irish citizens are also in favour, as shown in a national poll across all ages, education levels, incomes and regions of Ireland, conducted independently by Ireland Thinks in January 2024. At the end of last year, a cross-party group of MEPs wrote a letter urging the Commission to adopt the Irish example across the European Union.

Our collective stance is based on overwhelming evidence of the harms caused by profiling-based recommender systems, especially for the most vulnerable groups such as children. Algorithmic recommender systems select emotive and extreme content and show it to the people they estimate are most likely to engage with it. These people then spend longer on the platform, which allows Big Tech corporations to sell more ad space. Meta's own internal research disclosed that a significant 64% of extremist group joins were caused by its toxic algorithms. Even more alarmingly, Amnesty International found that TikTok’s algorithms exposed multiple accounts posing as 13-year-old children to videos glorifying suicide within less than an hour of the accounts being launched.

Platforms that originally promised to connect and empower people have become tools that are optimised to “engage, enrage and addict” them. As described above, profiling-based recommender systems are one of the major areas where platform design decisions contribute to “systemic risks” as defined in Article 34 of the DSA, especially when it comes to “any actual or foreseeable negative effects” on the exercise of fundamental rights, the protection of personal data, respect for the rights of the child, civic discourse and electoral processes, public security, gender-based violence, the protection of public health and minors, and people’s physical and mental well-being. By determining how users find information and how they interact with all types of commercial and noncommercial content, recommender systems are a crucial design layer of the Very Large Online Platforms regulated by the DSA.

We therefore urge the European Commission not only to support Ireland’s move, but to apply it across the European Union by recommending that Very Large Online Platforms disable, by default, recommender systems based on profiling people on social media video platforms, as a mitigation measure under Article 35(1)(c) of the Digital Services Act.

Furthermore, we join the Irish civil society organisations in urging Coimisiún na Meán and the European Commission to foster the development of rights-respecting alternative recommender systems. For example, experts have pointed to various alternatives, including recommender systems built on explicit user feedback rather than data profiling, and signals that optimise for outcomes other than engagement, such as quality content and plurality of viewpoint. Ultimately, the solution is not for platforms to provide only one alternative to the currently harmful defaults, but rather to open up their networks to allow a marketplace of options offered by third parties, competing on a number of parameters, including how rights-respecting they are, thereby promoting much greater user choice.
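As an illustration only, a recommender built on explicit feedback and plurality of viewpoint might rank content along the following lines. This is a hedged sketch under a deliberately simple data model; every field name and weight is our own assumption, not a prescription.

```python
def preference_score(item, explicit_prefs):
    """Score items on signals the user has explicitly given (follows,
    'show more/less of this' choices), not on inferred behavioural profiles."""
    score = 0.0
    if item["author"] in explicit_prefs["followed_authors"]:
        score += 2.0
    score += sum(1.0 for t in item["topics"] if t in explicit_prefs["more_of"])
    score -= sum(2.0 for t in item["topics"] if t in explicit_prefs["less_of"])
    score += item.get("quality_rating", 0.0)   # e.g. an independent quality label
    return score

def rank_with_plurality(items, explicit_prefs, feed_size=20):
    """Greedy re-ranking that penalises feeds dominated by a single source,
    optimising for plurality of viewpoint rather than raw engagement."""
    feed, source_counts = [], {}
    ordered = sorted(items, key=lambda i: preference_score(i, explicit_prefs),
                     reverse=True)
    for item in ordered:
        penalty = 0.5 * source_counts.get(item["source"], 0)
        if preference_score(item, explicit_prefs) - penalty > 0:
            feed.append(item)
            source_counts[item["source"]] = source_counts.get(item["source"], 0) + 1
        if len(feed) == feed_size:
            break
    return feed
```

In a genuinely open marketplace of recommenders, third parties could compete on exactly these kinds of parameters.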

We believe these actions are crucial steps towards mitigating the inherent risks of profiling-based recommender systems and towards a rights-respecting and pluralistic information ecosystem. We look forward to your support and action on this matter.

Yours sincerely,

  1. Amnesty International
  2. Civil Liberties Union for Europe (Liberties)
  3. Defend Democracy
  4. Ekō
  5. The Electronic Privacy Information Center (EPIC)
  6. Fair Vote UK
  7. Federación de Consumidores y Usuarios CECU
  8. Global Witness
  9. Irish Council for Civil Liberties
  10. LODelle
  11. Panoptykon Foundation
  12. People vs Big Tech
  13. The Citizens
  14. The Real Facebook Oversight Board
  15. Xnet, Institute for Democratic Digitalisation
  16. 5Rights Foundation
  17. #jesuislà
  18. Homo Digitalis

Open letter to the European Parliament: A critical opportunity to protect children and young people
https://peoplevsbig.tech/open-letter-to-the-european-parliament-on-the-addictive-design-of-online-services/
11 December 2023


Dear Members of the European Parliament,

We, experts, academics and civil society groups, are writing to express our profound alarm at the social-media driven mental health crisis harming our young people and children. We urge you to take immediate action to rein in the abusive Big Tech business model at its core to protect all people, including consumers and children. As an immediate first step, this means voting for the Internal Market and Consumer Protection Committee’s report on addictive design of online services and consumer protection in the EU, in its entirety.

We consider social media's predatory, addictive business model to be a public health and democratic priority that should top the agenda of legislators globally. Earlier this year, the US Surgeon General issued a clear warning about the impact of addictive social media design: “Excessive and problematic social media use, such as compulsive or uncontrollable use, has been linked to sleep problems, attention problems, and feelings of exclusion among adolescents… Small studies have shown that people with frequent and problematic social media use can experience changes in brain structure similar to changes seen in individuals with substance use or gambling addictions”.

This is no glitch in the system; addiction is precisely the outcome tech platforms like Instagram, TikTok and YouTube are designed and calibrated for. The platforms make more money the longer people are kept online and scrolling, and their products are therefore built around ‘engagement at all costs’ – leading to potentially devastating outcomes while social media corporations profit. One recent study by Panoptykon Foundation showed that Facebook's recommender system not only exploits users' fears and vulnerabilities to maintain their engagement but also ignores users' explicit feedback, even when they request to stop seeing certain content.

The negative consequences of this business model are particularly acute among those we should be protecting most closely: children and young people whose developing minds are most vulnerable to social media addiction and the ‘rabbit hole’ effect that is unleashed by hyper-personalised recommender systems. In October 2023, dozens of states in the U.S. filed a lawsuit on behalf of children and young people accusing Meta of knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms, leading to "depression, anxiety, insomnia, interference with education and daily life, and many other negative outcomes".

Mounting research has revealed the pernicious ways in which social media platforms capitalise on the specific vulnerabilities of the youngest in society. In November 2023, an investigation by Amnesty International, for example, found that within 20 minutes of launching a dummy account posing as a 13 year old child on TikTok who interacted with mental health content, more than half of the videos in the ‘For You’ feed were related to mental health struggles. Within an hour, multiple videos romanticising, normalising or encouraging suicide had been recommended.

The real-world ramifications of this predatory targeting can be devastating. In 2017, 14-year-old British teenager Molly Russell took her own life after being bombarded with 2,100 posts discussing and glorifying self-harm and suicide on Instagram and Pinterest over a 6-month period. A coroner’s report found that this material likely “contributed to her death in a more than minimal way”. The words of Molly’s father, Ian Russell, must serve as an urgent message to us all: “It’s time to protect our innocent young people, instead of allowing platforms to prioritise their profits by monetising their misery.”

Across Europe, children and young people, parents, teachers and doctors are facing the devastating consequences of this mental health crisis. But change will not come about from individual action. We urgently need lawmakers and regulators to stand up against a social media business model that is wreaking havoc on the lives of young people. We strongly endorse and echo the IMCO Committee Report’s calls on the European Commission to:

1. ensure strong enforcement of the Digital Services Act on the matter, with a focus on provisions on children and special consideration of their specific rights and vulnerabilities. This should include as a matter of priority:

  • independently assessing the addictive and mental-health effects of hyper-personalised recommender systems;
  • clarifying the additional risk assessment and mitigation obligations of very large online platforms (VLOPs) in relation to potential harms to health caused by the addictive design of their platforms;
  • naming features in recommender systems that contribute to systemic risks;
  • naming design features that are not addictive or manipulative and that enable users to take conscious and informed actions online (see, for example, People vs Big Tech and Panoptykon report: Prototyping user empowerment: Towards DSA-compliant recommender systems).

2. assess and prohibit harmful addictive techniques that are not covered by existing legislation, paying special consideration to vulnerable groups such as children. This should include:

  • assessing and prohibiting the most harmful addictive practices;
  • examining whether an obligation not to use interaction-based recommendation systems ‘by default’ is required in order to protect consumers;
  • putting forward a ‘right not to be disturbed’ to empower consumers by turning all attention-seeking features off by design.

Signed by the following experts and academics,

Dr Bernadka Dubicka Bsc MBBs MD FRCPsych, Professor of Child and Adolescent Psychiatry, Hull and York Medical School, University of York

Dr Elvira Perez Vallejos, Professor of Mental Health and Digital Technology, Director RRI, UKRI Trustworthy Autonomous Systems (TAS) Hub, EDI & RRI Lead, Responsible AI UK, Youth Lead, Digital Youth, University of Nottingham

Ian Russell, Chair of Trustees, Molly Rose Foundation

Kyle Taylor, Visiting Digital World and Human Rights Fellow, Tokyo Peace Centre

Dr Marina Jirotka, Professor of Human Centred Computing, Department of Computer Science, University of Oxford

Michael Stora, Psychologist and Psychoanalyst, Founder and Director of Observatoire des Mondes Numériques en Sciences Humaines

Dr Nicole Gross, Associate Professor in Business & Society, School of Business, National College of Ireland

Dr S. Bryn Austin, ScD, Professor, Harvard T.H. Chan School of Public Health, and Director, Strategic Training Initiative for the Prevention of Eating Disorders

Dr Trudi Seneviratne OBE, Consultant Adult & Perinatal Psychiatrist, Registrar, The Royal College of Psychiatrists

Signed by the following civil society organisations,

AI Forensics

Amnesty International

ARTICLE 19

Avaaz Foundation

Civil Liberties Union for Europe (Liberties)

Federación de Consumidores y Usuarios CECU

Defend Democracy

Digital Action

D64 - Center for Digital Progress (Zentrum für Digitalen Fortschritt)

Ekō

Fair Vote UK

Global Action Plan

Global Witness

Health Action International

Institute for Strategic Dialogue (ISD)

Irish Council for Civil Liberties

Mental Health Europe

Panoptykon Foundation

Superbloom (previously known as Simply Secure)

5Rights Foundation

#JeSuisLà

Prototyping User Empowerment – Towards DSA-compliant recommender systems
https://peoplevsbig.tech/prototyping-user-empowerment-towards-dsa-compliant-recommender-systems/
8 December 2023
What would a healthy social network look like? Researchers, civil society experts, technologists and designers came together to imagine a new way forward.


Executive Summary (full briefing here)

What would a healthy social network look and feel like, with recommender systems that show users the content they really want to see, rather than content based on predatory and addictive design features?

In October 2022, the European Union adopted the Digital Services Act (DSA), introducing transparency and procedural accountability rules for large social media platforms – including giants such as Facebook, Instagram, YouTube and TikTok – for the first time. When it comes to their recommender systems, Very Large Online Platforms (VLOPs) are now required to assess systemic risks of their products and services (Article 34), and propose measures to mitigate against any negative effects (Article 35). In addition, VLOPs are required to disclose the “main parameters” of their recommender systems (Article 27), provide users with at least one option that is not based on personal data profiling (Article 38), and prevent the use of dark patterns and manipulative design practices to influence user behaviour (Article 25).
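To give a sense of what the Article 27 and Article 38 obligations imply at the design level, here is a minimal, hypothetical sketch of a ranking layer that discloses its main parameters and offers an option not based on profiling. The class, field and function names are our own illustration, not any platform’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FeedOptions:
    """User-facing recommender settings a VLOP could expose under the DSA."""
    use_profiling: bool = True          # Article 38: at least one non-profiling option
    main_parameters: dict = field(default_factory=lambda: {
        # Article 27: "main parameters" disclosed in plain language
        "signals_used": ["follows", "recency"],
        "optimised_for": "predicted engagement",
    })

def rank_feed(items, options: FeedOptions, profile=None):
    """Serve the profiled ranking only when the user has not opted out;
    otherwise fall back to a ranking that uses no personal-data profiling."""
    if options.use_profiling and profile is not None:
        # 'profile' is assumed to expose a predicted_engagement(item) method.
        return sorted(items, key=profile.predicted_engagement, reverse=True)
    # Non-profiled option: a simple reverse-chronological feed.
    return sorted(items, key=lambda i: i["published_at"], reverse=True)
```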

Many advocates and policy makers are hopeful that the DSA will create the regulatory conditions for a healthier digital public sphere – that is, social media that act as public spaces, sources of quality information and facilitators of meaningful social connection. However, many of the risks and harms linked to recommender system design cannot be mitigated without directly addressing the underlying business model of the dominant social media platforms, which is currently designed to maximise users’ attention in order to generate profit from advertisements and sponsored content. In this respect, changes that would mitigate systemic risks as defined by the DSA are likely to be heavily resisted – and contested – by VLOPs, making independent recommendations all the more urgent and necessary.

It is in this context that a multidisciplinary group of independent researchers, civil society experts, technologists and designers came together in 2023 to explore answers to the question: ‘How can the ambitious principles enshrined in the DSA be operationalised by social media platforms?’. On August 25th 2023, we published the first brief, looking at the relationship between specific design features in recommender systems and specific harms. Our hypotheses were accompanied by a list of detailed questions to VLOPs and Very Large Online Search Engines (VLOSEs), which serve as a ‘technical checklist’ for risk assessments, as well as for auditing recommender systems.

In this second brief, we explore user experience (UX) and interaction design choices that would provide people with more meaningful control and choice over the recommender systems that shape the content they see. We propose nine practical UX changes that we believe can facilitate greater user agency, from content feedback features to controls over the signals used to curate feeds, and specific ‘wellbeing’ features. We hope this second briefing serves as a starting point for future user research to ground UX changes related to DSA risk mitigation in a better understanding of users’ needs.
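One of the simplest of those changes is making explicit content feedback actually bind the ranking rather than being ignored. The hypothetical sketch below, with invented names and weights, shows how a ‘show me less of this’ control could persistently feed back into curation.

```python
class FeedbackAwareRanker:
    """Illustrative only: user feedback ('not interested', 'show less')
    persistently down-weights matching content instead of being ignored."""

    def __init__(self):
        self.muted_topics = set()
        self.downweighted_sources = {}

    def record_feedback(self, item, action):
        """Store explicit feedback so it shapes every future ranking pass."""
        if action == "not_interested":
            self.muted_topics.update(item["topics"])
        elif action == "show_less":
            source = item["source"]
            self.downweighted_sources[source] = self.downweighted_sources.get(source, 0) + 1

    def adjust(self, item, base_score):
        """Apply the stored feedback to whatever base score the platform computes."""
        if self.muted_topics & set(item["topics"]):
            return 0.0                                   # honour the request outright
        penalty = 0.3 * self.downweighted_sources.get(item["source"], 0)
        return max(base_score - penalty, 0.0)
```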

This briefing concludes with recommendations for VLOPs and the European Commission.

With regard to VLOPs, we would like to see these and other design provocations user-tested, experimented with and iterated upon. This should happen in a transparent manner to ensure that conflicting design goals are navigated in line with the DSA. Risk assessment and risk mitigation are not a one-time exercise but an ongoing process, which should engage civil society, the ethical design community and a diverse representation of users as consulted stakeholders.

The European Commission should use all of its powers under the DSA, including the power to issue delegated acts and guidelines (e.g., in accordance with Article 35), to ensure that VLOPs:

  • Implement the best UX practices in their recommender systems
  • Modify their interfaces and content ranking algorithms in order to mitigate systemic risks
  • Make transparency disclosures and engage stakeholders in the ways we describe above.

Read the full briefing here.



Civil Society Organisations Call on EU Parliament to Close Disinformation Loophole
https://peoplevsbig.tech/civil-society-organisations-call-on-eu-parliament-to-close-disinformation-loophole/
24 September 2023
The carve-out for media in the proposed European Media Freedom Act will seriously impede efforts to combat hate speech and disinformation.


Dear Members of the European Parliament,


We, 24 civil society groups and experts from across Europe, are writing to express our deep concern about the danger posed to public safety by Article 17.2 of the European Media Freedom Act and to urge you to vote for plenary amendments seeking to mitigate its threat.


As written, the proposed Article would introduce a dangerous carve-out from online content moderation for media, seriously impeding the fight against hate speech and disinformation, hindering protection of minors, and laying Europe’s democracies bare to interference from malign foreign and domestic actors.


It would also damage the very people it seeks to protect, corroding the reach of legitimate journalists and drowning out their voices with clickbait and disinformation. Indeed, Nobel prize-winning journalists Maria Ressa and Dmitry Muratov have warned against such “special exemptions” in their 10-point-plan to fix the information crisis, stressing that such carve-outs would give a “blank check” to governments and non-state actors producing industrial-scale disinformation to harm democracy. The plan has been signed by over 289 Nobel Laureates, organisations, and individuals around the world.


While well-intentioned, the CULT Committee’s version of Article 17.2 is the worst of all worlds, with parameters so wide, and vetting procedures so weak, that virtually anyone describing themselves as media would be entitled to privileged treatment. By requiring platforms to keep problematic “media” content up for 24 hours, and preventing them from labelling or blurring posts, it would remove the ability to take swift action to prevent the viral spread of disinformation or other harmful content in the most crucial hours, or to contain the subsequent damage.


A must-carry provision for media content raises particular concerns in countries where the ruling party controls public service broadcasting as state media. It would also mean content from pro-Putin disinformation sites would be subject to lighter rules than posts from ordinary people, a situation as perilous as it would be unjust. This rash approach is all the more alarming given its timing, coming just after a Commission study found that online disinformation is still thriving and that tech companies are still failing to remove a large share of it.


A media exemption was already considered, and rejected, in the Digital Services Act. MEPs wisely said no, understanding that the measure would seriously undermine Europe’s efforts to rein in the worst abuses of the tech platforms and compromise user expectations of unbiased content moderation. A year later it is back, pushed by a powerful media lobby, despite posing the same threat to democracy, public safety, and the future of robust, fact-based journalism. By assigning a privilege to media service providers, Article 17 undermines the EU code of practice on disinformation and the EU’s Digital Services Act by adding new and potentially conflicting procedures.



A media exemption was a bad idea for the DSA, and it is a bad idea for the EMFA – even if disguised under a new name. We urge you to once again stand up for European citizens, democracy and media integrity and vote for alternative plenary amendments that would:

  • Remove “restrict” from “suspend or restrict” so that platforms will still be able to automatically blur, label, or algorithmically downrank content that violates their policies even if they cannot suspend content, limiting the damage of the media loophole;
  • Remove the 24-hour must-carry obligation, which allows huge damage to be done by the spread of viral disinformation and hate speech;
  • Remove the involvement of national regulators in the designation of media service providers, which is ripe for abuse by member states where media freedom is at threat.

Yours sincerely, 

Bits of Freedom

Centre for Peace Studies

Coalition For Women In Journalism (CFWIJ)

Defend Democracy

Digital Action

Ekō

Electronic Frontier Foundation (EFF)

Electronic Frontier Finland

EU Disinfo Lab

European Digital Rights (EDRi)

European Partnership for Democracy (EPD)

Fair Vote UK

Foundation The London Story

Global Witness

HateAid

Homo Digitalis

Institute for Strategic Dialogue (ISD)

Liberties

‘NEVER AGAIN’ Association

Panoptykon

People vs Big Tech

Politiscope

WHAT TO FIX

#jesuislà

Safeguarding Europe’s 2024 Elections: a Checklist for Robust Enforcement of the DSA
https://peoplevsbig.tech/safeguarding-europes-2024-elections-a-checklist-for-robust-enforcement-of-the-dsa/
23 August 2023
Over 50 civil society groups urge the European Commission to rigorously enforce the Digital Services Act in a critical year for democracy.


Democracy is in crisis and 2024 will be its biggest test yet. With critical elections due to take place across the world amid the wrecking ball of viral disinformation and deepening polarisation, the choices made by social media companies – and those who regulate them – will have profound consequences for years to come.

As Europe faces crucial elections, and alarmed by the backward slide of our democracies, 56 organisations are urgently calling on European leaders to meet this challenge head on. We ask you to take decisive action to safeguard the integrity of the election information environment, protect people’s rights as voters and set a global standard that others may follow.

The critical first step is for the European Commission to use its new powers under the Digital Services Act to require Big Tech companies to publish robust and comprehensive election plans - outlining publicly how they intend to mitigate “systemic risks” in the context of upcoming national and EU elections.

As a minimum, election plans must include meaningful transparency and mitigation measures to:

1. Deamplify disinformation and hate

Tech platforms have shown they can switch on measures to make content less viral at critical moments. They must, as a matter of course (one possible approach is sketched after this list):

  • Make their recommender systems safe-by-design, by default and all the time (not just during election periods), including measures to suppress the algorithmic reach and visibility of disinformation and hate-spreading content, groups and accounts.
  • Implement meaningful user control features, including giving users clear options to choose over which types of data are used for ranking and recommending content and the ability to optimise their feeds for values other than engagement.
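De-amplification does not require removing content. As a purely illustrative sketch, assuming a content classifier whose risk scores and thresholds are invented for this example, a safe-by-default ranking layer could simply cap the reach of high-risk items all year round:

```python
def deamplify(items, classifier, risk_threshold=0.7, score_cap=0.2):
    """Safe-by-default sketch: content the classifier flags as likely
    disinformation or hate is not removed, but its ranking score is capped
    so the algorithm cannot amplify it. Runs all the time, not only during
    election periods. Threshold and cap values are illustrative."""
    ranked = []
    for item in items:
        risk = classifier.risk_score(item)      # assumed to return 0.0-1.0
        score = item["base_score"]
        if risk > risk_threshold:
            score = min(score, score_cap)       # suppress reach and visibility
        ranked.append((score, item))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [item for _, item in ranked]
```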

2. Ensure effective content moderation in every European language

The tragic impacts of viral hate speech in Ethiopia, Myanmar and countless other places show content moderation is worthless if not properly and equitably resourced. Tech platforms must:

  • Properly resource moderation teams in all languages, including both cultural and linguistic competency
  • Make content moderation rules public, and apply them consistently and transparently.
  • Pay moderators a decent wage, and provide them with psychological support.

3. Stop microtargeting users

The potential to exploit and manipulate voters with finely targeted election disinformation is an existential danger for democracy. The solution (sketched after this list) is to:

  • End processing of all observed and inferred data for political ads, for both targeting and amplification. Targeting on the basis of contextual data would still be permitted.
  • Enforce the ban on using sensitive categories of personal data, including data voluntarily provided by the user, for both targeting and amplification.
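In practice, the line to draw is between data about the person (observed or inferred) and data about the context in which an ad appears. The following is a hedged sketch of such a rule; the signal and category names are illustrative assumptions, not an existing ad-platform API.

```python
ALLOWED_CONTEXTUAL_SIGNALS = {"page_topic", "language", "country", "time_of_day"}
BANNED_PERSONAL_SIGNALS = {"browsing_history", "inferred_interests",
                           "lookalike_audience", "engagement_profile"}
SENSITIVE_CATEGORIES = {"political_opinion", "religion", "health",
                        "sexual_orientation", "ethnicity"}

def political_ad_targeting_allowed(targeting_spec: dict) -> bool:
    """Permit only contextual targeting for political ads: reject any spec
    relying on observed or inferred personal data, or on special-category
    data, even where the sensitive data was volunteered by the user."""
    signals = set(targeting_spec.get("signals", []))
    categories = set(targeting_spec.get("audience_categories", []))
    if signals & BANNED_PERSONAL_SIGNALS or categories & SENSITIVE_CATEGORIES:
        return False
    return signals <= ALLOWED_CONTEXTUAL_SIGNALS
```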


4. Build in transparency

Elections are for the people, not social media companies. Tech platforms must not be allowed to shape the fate of elections behind closed doors – instead, they must:

  • Be fully transparent about all measures related to political content and advertisements, including explanations of national variations in the measures they put in place, technical documentation about the algorithms used to recommend content, publication of ad libraries and their functionality (as well as ad financing) and full disclosure of content moderation policies and enforcement including notice, review and appeal mechanisms.
  • Allow researchers and wider civil society to independently monitor the spread of dis/misinformation and potential manipulation of the information space by sharing real-time, cross-platform data, including content metadata; information on content that is demoted, promoted and recommended; and tools to analyse the data.
  • Provide training for researchers, civil society, independent media and election monitors to monitor activity on the platforms.
  • Facilitate independent audits on the effectiveness of mitigation measures adopted in the context of elections and publish their results.


5. Increase and strengthen partnerships

Companies are not experts in elections. They must work with those who are.

  • Companies must meaningfully engage with partners such as fact-checkers, independent media, civil society and other bodies that protect electoral integrity, taking into account partners’ independence and reporting on their engagement in a standardised format.

Alongside the European elections, over 50 other countries will be going to the polls in 2024 – and even in the remainder of this year, several crucial elections are due to take place. Very large online platforms pose significant global risks if they fail to safeguard people and elections in the coming year. In making full use of its powers, the European Commission has a critical opportunity to lead the way globally in demonstrating that platforms can bring their operations in line with democracy and human rights.

Signed, the following organisations active in the EU,




AI Forensics

AlgorithmWatch

Alliance 4 Europe

Association for International Affairs (AMO) in Prague

Avaaz Foundation

Centre for Peace Studies

Centre for Research on Multinational Corporations (SOMO)

Checkfirst

Coalition For Women In Journalism (CFWIJ)

Corporate Europe Observatory (CEO)

Cyber Rights Organization

CyberPeace Institute

Defend Democracy

Delfi Lithuania

Democracy Reporting International gGmbH

digiQ

Digital Action

Donegal Intercultural Platform

Doras

Ekō

Epicenter.works

European Federation of Public Services Unions (EPSU)

Eticas

EU DisinfoLab

European Digital Rights (EDRi)

Federación de Consumidores y Usuarios (CECU)

Global Witness

Gong

HateAid

Institute for Strategic Dialogue (ISD)

Irish Council for Civil Liberties

Kempelen Institute of Intelligent Technologies

LGBT Ireland

‘NEVER AGAIN’ Association

Panoptykon Foundation

Pavee Point Traveller and Roma Centre

SUPERRR Lab

The Daphne Caruana Galizia Foundation

The London Story


The Rowan Trust

Transparency International EU

Uplift

Waag Futurelab

Women In Journalism Institute - Canada

#jesuislà

#ShePersisted

Endorsed by the following global organisations,

Accountable Tech

ANDA - Agência de Notícias de Direitos Animais

Consortium of Ethiopian Human Rights Organizations (CEHRO)

Fair Vote UK

Full Fact

Global Action Plan

Legal Resources Centre

Open Britain

Rede Nacional de Combate à Desinformação-RNCD BRASIL

Tech4Peace

BRIEFING: Fixing Recommender Systems: From identification of risk factors to meaningful transparency and mitigation
https://peoplevsbig.tech/briefing-fixing-recommender-systems-from-identification-of-risk-factors-to-meaningful-transparency-and-mitigation/
23 August 2023
As platforms gear up to submit their first risk assessments to the European Commission, civil society experts set out what the regulator should look for.


From August 25th 2023, Europe’s new Digital Services Act (DSA) rules kick in for the world’s largest digital platforms, shaping the design and functioning of their key services. For the nineteen platforms that have been designated “Very Large Online Platforms” (VLOPs) and “Very Large Online Search Engines” (VLOSEs), there will be many new requirements, from the obligation to undergo independent audits and share relevant data in their transparency reports, to the responsibility to assess and mitigate “systemic risks” in the design and implementation of their products and services. Article 34 of the DSA defines “systemic risks” by reference to “actual or foreseeable negative effects” on the exercise of fundamental rights, the dissemination of illegal content, civic discourse and electoral processes, public security and gender-based violence, as well as on the protection of public health and minors and physical and mental well-being.

One of the major areas where platform design decisions contribute to “systemic risks” is through their recommender systems – algorithmic systems used to rank, filter and target individual pieces of content to users. By determining how users find information and how they interact with all types of commercial and noncommercial content, recommender systems have become a crucial design layer of the VLOPs regulated by the DSA. Shadowing their rise is a growing body of research and evidence indicating that certain design features in popular recommender systems contribute to the amplification and virality of harmful content such as hate speech, misinformation and disinformation, as well as to addictive personalisation and discriminatory targeting, in ways that harm fundamental rights, particularly the rights of minors. As such, social media recommender systems warrant urgent and special attention from the Regulator.

VLOPs and VLOSEs are due to submit their first risk assessments (RAs) to the European Commission in late August 2023. Without official guidelines from the Commission on the exact scope, structure and format of the RAs, it is up to each large platform to interpret what “systemic risks” mean in the context of their services – and to choose their own metrics and methodologies for assessing specific risks.

In order to assist the Commission in reviewing the RAs, we have compiled a list of hypotheses that indicate which design features used in recommender systems may be contributing to what the DSA calls “systemic risks”. Our hypotheses are accompanied by a list of detailed questions to VLOPs and VLOSEs, which can serve as a “technical checklist” for risk assessments as well as for auditing recommender systems.

Based on independent research and available evidence we identified six mechanisms by which recommender systems may be contributing to “systemic risks”:

  1. amplification of “borderline” content (content that the platform has classified as being at higher risk of violating its terms of service) because such content drives “user engagement”;
  2. rewarding users who provoke the strongest engagement from others (whether positive or negative) with greater reach, further skewing the publicly available inventory towards divisive and controversial content;
  3. making editorial choices that boost, protect or suppress some users over others, which can lead to censorship of certain voices;
  4. exploiting people’s data to personalise content in a way that harms their health and wellbeing, especially for minors and vulnerable adults;
  5. building in features that are designed to be addictive at the expense of people’s health and wellbeing, especially minors;
  6. using people’s data to personalise content in ways that lead to discrimination.

For each hypothesis, we provide highlights from available research, which support our understanding of how design features used in recommender systems contribute to harms experienced by their users. However, it is important to note that researchers attempting to verify causal relationships between specific features of recommender systems and observed harms have been constrained by the data made available to them, whether by online platforms or by platforms’ users. Because of these limitations, external audits have spurred debates about the extent to which observed harms are caused by recommender system design decisions or by natural patterns in human behaviour.

It is our hope that risk assessments carried out by VLOPs and VLOSEs, followed by independent audits and investigations led by DG CONNECT, will end these speculations by providing data for scientific research and revealing specific features of social media recommender systems that directly or indirectly contribute to “systemic risks” as defined by Article 34 of the DSA.

In the second part of this brief (page 14) we provide a list of technical information that platforms should disclose to the Regulator, independent researchers and auditors to ensure that results of the risk assessments can be verified. This includes providing a high-level architectural description of the algorithmic stack as well as specifications of different algorithmic modules used in the recommender systems (type of algorithm and its hyperparameters; input features; loss function of the model; performance documentation; training data; labelling process etc).
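For concreteness, the module-level specification we have in mind could be captured in a structured disclosure along the lines of the hypothetical record below. The field names and example values are our own illustration, not a format prescribed by the DSA or used by any platform.

```python
from dataclasses import dataclass, field

@dataclass
class RecommenderModuleDisclosure:
    """Hypothetical disclosure record for one module of a recommender stack."""
    module_name: str                  # e.g. "candidate ranking"
    algorithm_type: str               # type of algorithm used
    hyperparameters: dict             # its hyperparameters
    input_features: list              # signals fed into the model
    loss_function: str                # what the model is optimised for
    training_data_description: str    # provenance and time span of training data
    labelling_process: str            # who labels the data, and against what guidance
    performance_metrics: dict = field(default_factory=dict)

# Illustrative example only -- the values do not describe any real platform.
example = RecommenderModuleDisclosure(
    module_name="feed ranking",
    algorithm_type="neural ranking model",
    hyperparameters={"learning_rate": 0.01, "layers": 4},
    input_features=["watch_time", "predicted_like_probability", "recency"],
    loss_function="weighted engagement objective",
    training_data_description="90 days of interaction logs from EU users",
    labelling_process="outsourced labelling against internal content policy",
)
```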

Revealing key choices made by VLOPs and VLOSEs when designing their recommender systems would provide a “technical bedrock” for better design choices and policy decisions aimed at safeguarding the rights of European citizens online.

You can find a full glossary of technical terms used in this briefing on page 16 of the full report.


Read the full report in the pdf attached.

ACKNOWLEDGEMENTS

This brief was drafted by Katarzyna Szymielewicz (Senior Advisor at the Irish Council for Civil Liberties) and Dorota Głowacka (Panoptykon Foundation), with notable contributions from Alexander Hohlfeld (independent researcher), Bhargav Srinivasa Desikan (Knowledge Lab, University of Chicago), Marc Faddoul (AI Forensics) and Tanya O’Carroll (independent expert).

In addition, we are grateful to the following civil society experts for their contributions:

Anna-Katharina Meßmer (Stiftung Neue Verantwortung (SNV)). Asha Allen (Centre for Democracy and Technology, Europe Office). Belen Luna (HateAid). Josephine Ballon (HateAid). Claire Pershan (Mozilla Foundation). David Nolan (Amnesty International). Fernando Hortal Foronda (European Partnership for Democracy). Jesse McCrosky (Mozilla Foundation/Thoughtworks). John Albert (AlgorithmWatch). Lisa Dittmer (Amnesty International). Martin Degeling (Stiftung Neue Verantwortung (SNV)). Pat de Brún (Amnesty International). Ramak Molavi Vasse’i (Mozilla Foundation). Richard Woods (Global Disinformation Index).

Fixing Recommender Systems_Briefing for the European Commission (PDF)
