DSA Archives - People vs. Big Tech
https://peoplevsbig.tech/category/dsa/
“We’re people, not users” | Wed, 26 Jun 2024

Letter to European Commissioner Breton: Tackling harmful recommender systems
https://peoplevsbig.tech/letter-to-european-commissioner-breton-tackling-harmful-recommender-systems/
Mon, 05 Feb 2024
Civil society organisations unite behind Coimisiún na Meán’s proposal to disable profiling-based recommender systems on social media video platforms

The post Letter to European Commissioner Breton: Tackling harmful recommender systems appeared first on People vs. Big Tech.


Dear Commissioner Breton,

Coimisiún na Meán’s proposal to require social media video platforms to disable, by default, recommender systems based on intimately profiling people is an important step toward realising the vision of the Digital Services Act (DSA). We, eighteen civil society organisations, urge you not to block it and, moreover, to recommend it as a risk mitigation measure under Article 35 of the DSA. This is an opportunity to once more prove European leadership.

Disabling profiling-based recommender systems by default has overwhelming support from civil society, the Irish public and MEPs across political groups. More than 60 diverse Irish civil society organisations endorsed a submission strongly backing this measure, as covered by the Irish Examiner. We are united in our support for this Irish civil society initiative. 82% of Irish citizens are also in favour, as shown in a national poll across all ages, education levels, incomes and regions of Ireland, conducted independently by Ireland Thinks in January 2024. At the end of last year, a cross-party group of MEPs wrote to the Commission urging it to adopt the Irish example across the European Union.

Our collective stance is based on overwhelming evidence of the harms caused by profiling-based recommender systems, especially for the most vulnerable groups such as children. Algorithmic recommender systems select emotive and extreme content and show it to the people they estimate are most likely to engage with it. Those people then spend longer on the platform, which allows Big Tech corporations to sell more ad space. Meta's own internal research disclosed that 64% of extremist group joins were caused by its own toxic algorithms. Even more alarmingly, Amnesty International found that TikTok’s algorithms exposed multiple 13-year-old child accounts to videos glorifying suicide within an hour of the accounts being created.

Platforms that originally promised to connect and empower people have become tools optimised to “engage, enrage and addict” them. As described above, profiling-based recommender systems are one of the major areas where platform design decisions contribute to “systemic risks” as defined in Article 34 of the DSA, especially when it comes to “any actual or foreseeable negative effects” on the exercise of fundamental rights, the protection of personal data, respect for the rights of the child, civic discourse and electoral processes, public security, gender-based violence, the protection of public health and minors, and serious negative consequences for a person’s physical and mental well-being. By determining how users find information and how they interact with all types of commercial and noncommercial content, recommender systems are a crucial design layer of the Very Large Online Platforms regulated by the DSA.

We therefore urge the European Commission not only to support Ireland’s move, but to apply it across the European Union and to recommend disabling profiling-based recommender systems by default on social media video platforms as a mitigation measure for Very Large Online Platforms, as outlined in Article 35(1)(c) of the Digital Services Act.

Furthermore, we join the Irish civil society organisations in urging Coimisiún na Meán and the European Commission to foster the development of rights-respecting alternative recommender systems. For example, experts have pointed to various alternatives, including recommender systems built on explicit user feedback rather than data profiling, and systems that optimise for outcomes other than engagement, such as quality content and plurality of viewpoints. Ultimately, the solution is not for platforms to provide only one alternative to the currently harmful defaults, but to open up their networks to allow a marketplace of options offered by third parties, competing on a number of parameters including how rights-respecting they are, thereby promoting much greater user choice.
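Such an alternative is easy to sketch in outline. The toy ranker below is a purely illustrative sketch: the post type, signals and weights are all hypothetical choices of ours, not any platform's actual design. It orders a feed by explicit user feedback and an independent quality signal, with predicted engagement deliberately excluded as an input.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    # Explicit signals the user chose to give, not inferred from behaviour.
    upvotes: int = 0
    downvotes: int = 0
    # An independent quality signal, e.g. from fact-checkers or media ratings,
    # on a 0.0-1.0 scale.
    quality: float = 0.5

def rank_feed(posts, quality_weight=2.0):
    """Order posts by explicit feedback plus a weighted quality signal.

    quality_weight is a hypothetical knob: exposing it to users would let
    them optimise their own feed for quality rather than popularity.
    Predicted engagement is deliberately NOT an input.
    """
    def score(p):
        feedback = p.upvotes - p.downvotes
        return feedback + quality_weight * (p.quality * 10)
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("Outrage bait", upvotes=90, downvotes=60, quality=0.1),
    Post("Explainer on the DSA", upvotes=40, downvotes=2, quality=0.9),
])
# The well-sourced explainer outranks the more "engaging" outrage post.
```

The design point is the signature itself: because the ranking function takes only signals users knowingly provided (plus a transparent quality input), there is no profiling data for a platform, or a third-party ranker in the marketplace envisaged above, to exploit.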

We believe these actions are crucial steps towards mitigating the inherent risks of profiling-based recommender systems and towards a rights-respecting and pluralistic information ecosystem. We look forward to your support and action on this matter.

Yours sincerely,

  1. Amnesty International
  2. Civil Liberties Union for Europe (Liberties)
  3. Defend Democracy
  4. Ekō
  5. The Electronic Privacy Information Center (EPIC)
  6. Fair Vote UK
  7. Federación de Consumidores y Usuarios CECU
  8. Global Witness
  9. Irish Council for Civil Liberties
  10. LODelle
  11. Panoptykon Foundation
  12. People vs Big Tech
  13. The Citizens
  14. The Real Facebook Oversight Board
  15. Xnet, Institute for Democratic Digitalisation
  16. 5Rights Foundation
  17. #jesuislà
  18. Homo Digitalis

Prototyping User Empowerment – Towards DSA-compliant recommender systems
https://peoplevsbig.tech/prototyping-user-empowerment-towards-dsa-compliant-recommender-systems/
Fri, 08 Dec 2023
What would a healthy social network look like? Researchers, civil society experts, technologists and designers came together to imagine a new way forward


Executive Summary (full briefing here)

What would a healthy social network look and feel like, with recommender systems that show users the content they really want to see, rather than content selected by predatory and addictive design features?

In October 2022, the European Union adopted the Digital Services Act (DSA), introducing transparency and procedural accountability rules for large social media platforms – including giants such as Facebook, Instagram, YouTube and TikTok – for the first time. When it comes to their recommender systems, Very Large Online Platforms (VLOPs) are now required to assess the systemic risks of their products and services (Article 34) and propose measures to mitigate any negative effects (Article 35). In addition, VLOPs are required to disclose the “main parameters” of their recommender systems (Article 27), provide users with at least one option that is not based on personal data profiling (Article 38), and refrain from using dark patterns and manipulative design practices to influence user behaviour (Article 25).

Many advocates and policy makers are hopeful that the DSA will create the regulatory conditions for a healthier digital public sphere – that is, social media that act as public spaces, sources of quality information and facilitators of meaningful social connection. However, many of the risks and harms linked to recommender system design cannot be mitigated without directly addressing the underlying business model of the dominant social media platforms, which is currently designed to maximise users’ attention in order to generate profit from advertisements and sponsored content. In this respect, changes that would mitigate systemic risks as defined by the DSA are likely to be heavily resisted – and contested – by VLOPs, making independent recommendations all the more urgent and necessary.

It is in this context that a multidisciplinary group of independent researchers, civil society experts, technologists and designers came together in 2023 to explore answers to the question: ‘How can the ambitious principles enshrined in the DSA be operationalised by social media platforms?’. On August 25th 2023, we published the first brief, looking at the relationship between specific design features in recommender systems and specific harms. Our hypotheses were accompanied by a list of detailed questions to VLOPs and Very Large Online Search Engines (VLOSEs), which serve as a ‘technical checklist’ for risk assessments, as well as for auditing recommender systems.

In this second brief, we explore user experience (UX) and interaction design choices that would give people more meaningful control and choice over the recommender systems that shape the content they see. We propose nine practical UX changes that we believe can facilitate greater user agency, from content feedback features, to controls over the signals used to curate feeds, to specific ‘wellbeing’ features. We hope this second briefing serves as a starting point for future user research to ground UX changes related to DSA risk mitigation in a better understanding of users’ needs.

This briefing concludes with recommendations for VLOPs and the European Commission.

With regard to VLOPs, we would like to see these and other design provocations user-tested, experimented with and iterated upon. This should happen in a transparent manner to ensure that conflicting design goals are navigated with respect to the DSA. Risk assessment and risk mitigation are not a one-time exercise but an ongoing process, which should engage civil society, the ethical design community and a diverse representation of users as consulted stakeholders.

The European Commission should use all of its powers under the DSA, including the power to issue delegated acts and guidelines (e.g., in accordance with Article 35), to ensure that VLOPs:

  • Implement the best UX practices in their recommender systems
  • Modify their interfaces and content ranking algorithms in order to mitigate systemic risks
  • Make transparency disclosures and engage stakeholders in the ways we describe above.

Read the full briefing here.



Safeguarding Europe’s 2024 Elections: a Checklist for Robust Enforcement of the DSA
https://peoplevsbig.tech/safeguarding-europes-2024-elections-a-checklist-for-robust-enforcement-of-the-dsa/
Wed, 23 Aug 2023
Over 50 civil society groups urge the European Commission to rigorously enforce the Digital Services Act in a critical year for democracy


Democracy is in crisis and 2024 will be its biggest test yet. With critical elections due to take place across the world amid the wrecking ball of viral disinformation and deepening polarisation, the choices made by social media companies – and those who regulate them – will have profound consequences for years to come.

As Europe faces crucial elections, and alarmed by the backward slide of our democracies, 56 organisations are urgently calling on European leaders to meet this challenge head on. We ask you to take decisive action to safeguard the integrity of the election information environment, protect people’s rights as voters and set a global standard that others may follow.

The critical first step is for the European Commission to use its new powers under the Digital Services Act to require Big Tech companies to publish robust and comprehensive election plans, outlining publicly how they intend to mitigate “systemic risks” in the context of upcoming national and EU elections.

As a minimum, election plans must include meaningful transparency and mitigation measures to:

1. Deamplify disinformation and hate

Tech platforms have shown they can switch on measures to make content less viral at critical moments. They must, as a matter of course:

  • Make their recommender systems safe-by-design, by default and all the time (not just during election periods), including measures to suppress the algorithmic reach and visibility of disinformation and hate-spreading content, groups and accounts.
  • Implement meaningful user control features, including giving users clear options to choose over which types of data are used for ranking and recommending content and the ability to optimise their feeds for values other than engagement.

2. Ensure effective content moderation in every European language

The tragic impacts of viral hate speech in Ethiopia, Myanmar and countless other places show that content moderation is worthless if not properly and equitably resourced. Tech platforms must:

  • Properly resource moderation teams in all languages, ensuring both cultural and linguistic competency.
  • Make content moderation rules public, and apply them consistently and transparently.
  • Pay moderators a decent wage, and provide them with psychological support.

3. Stop microtargeting users

The potential to exploit and manipulate voters with finely targeted election disinformation is an existential danger for democracy. The solution is to:

  • End processing of all observed and inferred data for political ads, for both targeting and amplification. Targeting on the basis of contextual data would still be permitted.
  • Enforce the ban on using sensitive categories of personal data, including data voluntarily provided by the user, for both targeting and amplification.


4. Build in transparency

Elections are for the people, not social media companies. Tech platforms must not be allowed to shape the fate of elections behind closed doors – instead, they must:

  • Be fully transparent about all measures related to political content and advertisements, including explanations of national variations in the measures they put in place, technical documentation about the algorithms used to recommend content, publication of ad libraries and their functionality (as well as ad financing), and full disclosure of content moderation policies and enforcement, including notice, review and appeal mechanisms.
  • Allow researchers and wider civil society to independently monitor the spread of dis/misinformation and potential manipulation of the information space by sharing real-time, cross-platform data, including content meta-data; information on content that is demoted, promoted and recommended; and tools to analyse the data.
  • Provide training for researchers, civil society, independent media and election monitors to monitor activity on the platforms.
  • Facilitate independent audits on the effectiveness of mitigation measures adopted in the context of elections and publish their results.


5. Increase and strengthen partnerships

Companies are not experts in elections. They must work with those who are.

  • Companies must meaningfully engage with partners such as fact-checkers, independent media, civil society and other bodies that protect electoral integrity, taking into account partners’ independence and reporting on their engagement in a standardised format.

Alongside the European elections, over 50 other countries will be going to the polls in 2024 – and even in the remainder of this year, several crucial elections are due to take place. Very large online platforms pose significant global risks if they fail to safeguard people and elections in the coming year. In making full use of its powers, the European Commission has a critical opportunity to lead the way globally in demonstrating that platforms can bring their operations in line with democracy and human rights.

Signed by the following organisations active in the EU:




AI Forensics

AlgorithmWatch

Alliance 4 Europe

Association for International Affairs (AMO) in Prague

Avaaz Foundation

Centre for Peace Studies

Centre for Research on Multinational Corporations (SOMO)

Checkfirst

Coalition For Women In Journalism (CFWIJ)

Corporate Europe Observatory (CEO)

Cyber Rights Organization

CyberPeace Institute

Defend Democracy

Delfi Lithuania

Democracy Reporting International gGmbH

digiQ

Digital Action

Donegal Intercultural Platform

Doras

Ekō

Epicenter.works

European Federation of Public Services Unions (EPSU)

Eticas

EU DisinfoLab

European Digital Rights (EDRi)

Federación de Consumidores y Usuarios (CECU)

Global Witness

Gong

HateAid

Institute for Strategic Dialogue (ISD)

Irish Council for Civil Liberties

Kempelen Institute of Intelligent Technologies

LGBT Ireland

‘NEVER AGAIN’ Association

Panoptykon Foundation

Pavee Point Traveller and Roma Centre

SUPERRR Lab

The Daphne Caruana Galizia Foundation

The London Story


The Rowan Trust

Transparency International EU

Uplift

Waag Futurelab

Women In Journalism Institute - Canada

#jesuisla

#ShePersisted

Endorsed by the following global organisations:

Accountable Tech

ANDA - Agência de Notícias de Direitos Animais

Consortium of Ethiopian Human Rights Organizations (CEHRO)

Fair Vote UK

Full Fact

Global Action Plan

Legal Resources Centre

Open Britain

Rede Nacional de Combate à Desinformação-RNCD BRASIL

Tech4Peace

BRIEFING: Fixing Recommender Systems: From identification of risk factors to meaningful transparency and mitigation
https://peoplevsbig.tech/briefing-fixing-recommender-systems-from-identification-of-risk-factors-to-meaningful-transparency-and-mitigation/
Wed, 23 Aug 2023
As platforms gear up to submit their first risk assessments to the European Commission, civil society experts set out what the regulator should look for


From August 25th 2023, Europe’s new Digital Services Act (DSA) rules kick in for the world’s largest digital platforms, shaping the design and functioning of their key services. For the nineteen platforms that have been designated “Very Large Online Platforms” (VLOPs) and “Very Large Online Search Engines” (VLOSEs), there will be many new requirements, from the obligation to undergo independent audits and share relevant data in their transparency reports, to the responsibility to assess and mitigate “systemic risks” in the design and implementation of their products and services. Article 34 of the DSA defines “systemic risks” by reference to “actual or foreseeable negative effects” on the exercise of fundamental rights, the dissemination of illegal content, civic discourse and electoral processes, public security and gender-based violence, as well as on the protection of public health and minors and on physical and mental well-being.

One of the major areas where platform design decisions contribute to “systemic risks” is through their recommender systems – algorithmic systems used to rank, filter and target individual pieces of content to users. By determining how users find information and how they interact with all types of commercial and noncommercial content, recommender systems have become a crucial design layer of the VLOPs regulated by the DSA. Shadowing their rise is a growing body of research and evidence indicating that certain design features in popular recommender systems contribute to the amplification and virality of harmful content such as hate speech, misinformation and disinformation, as well as to addictive personalisation and discriminatory targeting, in ways that harm fundamental rights, particularly the rights of minors. As such, social media recommender systems warrant urgent and special attention from the Regulator.

VLOPs and VLOSEs are due to submit their first risk assessments (RAs) to the European Commission in late August 2023. Without official guidelines from the Commission on the exact scope, structure and format of the RAs, it is up to each large platform to interpret what “systemic risks” mean in the context of their services – and to choose their own metrics and methodologies for assessing specific risks.

In order to assist the Commission in reviewing the RAs, we have compiled a list of hypotheses that indicate which design features used in recommender systems may be contributing to what the DSA calls “systemic risks”. Our hypotheses are accompanied by a list of detailed questions to VLOPs and VLOSEs, which can serve as a “technical checklist” for risk assessments as well as for auditing recommender systems.

Based on independent research and available evidence we identified six mechanisms by which recommender systems may be contributing to “systemic risks”:

  1. amplification of “borderline” content (content that the platform has classified as being at higher risk of violating their terms of service) because such content drives “user engagement”;
  2. rewarding users who provoke the strongest engagement from others (whether positive or negative) with greater reach, further skewing the publicly available inventory towards divisive and controversial content;
  3. making editorial choices that boost, protect or suppress some users over others, which can lead to censorship of certain voices;
  4. exploiting people’s data to personalise content in a way that harms their health and wellbeing, especially for minors and vulnerable adults;
  5. building in features that are designed to be addictive at the expense of people’s health and wellbeing, especially minors;
  6. using people’s data to personalise content in ways that lead to discrimination.

For each hypothesis, we provide highlights from available research that support our understanding of how design features used in recommender systems contribute to harms experienced by their users. It is important to note, however, that researchers attempting to verify causal relationships between specific features of recommender systems and observed harms have been constrained by the limited data made available to them by online platforms or by platforms’ users. Because of these limitations, external audits have spurred debates about the extent to which observed harms are caused by recommender system design decisions rather than by natural patterns in human behaviour.

It is our hope that risk assessments carried out by VLOPs and VLOSEs, followed by independent audits and investigations led by DG CONNECT, will end this speculation by providing data for scientific research and by revealing the specific features of social media recommender systems that directly or indirectly contribute to “systemic risks” as defined by Article 34 of the DSA.

In the second part of this brief (page 14) we provide a list of technical information that platforms should disclose to the Regulator, independent researchers and auditors to ensure that the results of the risk assessments can be verified. This includes providing a high-level architectural description of the algorithmic stack, as well as specifications of the different algorithmic modules used in the recommender systems (the type of algorithm and its hyperparameters; input features; the loss function of the model; performance documentation; training data; the labelling process, etc.).
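To make the scale of that disclosure concrete, such a per-module specification could be captured in a machine-readable record along the following lines. This is an illustrative sketch only: the DSA mandates no such schema, and every field name and value here is our assumption, mirroring the items listed above.

```python
# Hypothetical disclosure record for one algorithmic module; the field
# names track the items listed in the brief and are not a DSA-mandated schema.
ranking_module_disclosure = {
    "module": "feed_ranker",
    "algorithm_type": "gradient-boosted trees",       # type of algorithm
    "hyperparameters": {"n_trees": 500, "max_depth": 8},
    "input_features": ["watch_time", "shares", "account_age"],
    "loss_function": "weighted cross-entropy over engagement labels",
    "performance": {"offline_auc": 0.81, "evaluated": "2023-06"},
    "training_data": {"window_days": 90, "sampling": "engagement-stratified"},
    "labelling_process": "implicit labels derived from user interactions",
}

# The fields an auditor would expect every module record to contain.
REQUIRED = ["algorithm_type", "hyperparameters", "input_features",
            "loss_function", "performance", "training_data",
            "labelling_process"]

def missing_fields(disclosure, required):
    """Return the required fields absent from a disclosure record."""
    return [f for f in required if f not in disclosure]
```

A structured record like this is what would let independent researchers verify a risk assessment mechanically (`missing_fields` flags any gap) rather than parse free-text PDF submissions.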

Revealing key choices made by VLOPs and VLOSEs when designing their recommender systems would provide a “technical bedrock” for better design choices and policy decisions aimed at safeguarding the rights of European citizens online.

You can find a full glossary of technical terms used in this briefing on page 16 of the full report.


Read the full report in the PDF attached.

ACKNOWLEDGEMENTS

This brief was drafted by Katarzyna Szymielewicz (Senior Advisor at the Irish Council for Civil Liberties) and Dorota Głowacka (Panoptykon Foundation), with notable contributions from Alexander Hohlfeld (independent researcher), Bhargav Srinivasa Desikan (Knowledge Lab, University of Chicago), Marc Faddoul (AI Forensics) and Tanya O’Carroll (independent expert).

In addition, we are grateful to the following civil society experts for their contributions:

Anna-Katharina Meßmer (Stiftung Neue Verantwortung (SNV)). Asha Allen (Centre for Democracy and Technology, Europe Office). Belen Luna (HateAid). Josephine Ballon (HateAid). Claire Pershan (Mozilla Foundation). David Nolan (Amnesty International). Fernando Hortal Foronda (European Partnership for Democracy). Jesse McCrosky (Mozilla Foundation/Thoughtworks). John Albert (AlgorithmWatch). Lisa Dittmer (Amnesty International). Martin Degeling (Stiftung Neue Verantwortung (SNV)). Pat de Brún (Amnesty International). Ramak Molavi Vasse’i (Mozilla Foundation). Richard Woods (Global Disinformation Index).

Fixing Recommender Systems_Briefing for the European Commission (PDF)

Open Letter to President Macron re Sensitive Ads Ban in the DSA
https://peoplevsbig.tech/open-letter-to-president-macron-re-sensitive-ads-ban-in-the-dsa/
Wed, 30 Mar 2022
Coalition urges French president not to betray promise to European citizens on crucial DSA measure


To: Mr Emmanuel Macron, President of the French Republic

Brussels, 30 March 2022

Open Letter: France must not betray its promise to European citizens to prohibit the most invasive practices in online advertising in the Digital Services Act


Dear President Macron,

We are writing to you on behalf of the People vs Big Tech Coalition to express our deep concern regarding France’s failure to follow through on its promise to meaningfully protect EU citizens from the invasive use of their sensitive personal data for targeted advertising in the Digital Services Act. This is a system that has been weaponised by foreign and nefarious actors – not least Russia – to distort public debate and democracy. It is also a system that routinely tramples on the rights of European citizens. According to our newly published YouGov poll, an overwhelming majority of French citizens (70%) support a ban on the use of people’s sensitive personal data to target online advertisements. They are counting on you to secure this baseline protection in the DSA.

While we commend you for France’s tenacity in seeing through sweeping reform of the Big Tech platforms in the form of the Digital Markets Act, agreed last week, one of our movement’s core demands is that the Digital Services Act and Digital Markets Act take adequate steps to rein in the most invasive and harmful practices in online advertising. This is why we were so encouraged by Minister O’s announcement on Friday that the DSA would include the proposal to prohibit targeted advertising to minors as well as the use of sensitive information for ad targeting. Minister O rounded off his commitment with a reference to “how much trust there was between (the negotiators) to allow us to move forward and to take the most logical approach” on this all-important issue.

To our dismay, that trust now appears to have been broken. Mere days later, the French Council Presidency appears to have diluted the provision on sensitive data in ad targeting by moving it to a recital and severely weakening it so that it no longer meaningfully protects citizens from this exploitative practice. This means European citizens will continue to be exposed to intrusive advertising on the basis of inferences about them which they may never choose to explicitly share or meaningfully consent to - including sensitive categories such as religious or political views, health conditions, and sexual preferences.

Beyond the well documented harms to people’s rights, the use of sensitive data for advertising raises serious democracy and national security concerns. By segmenting the paid-for messages that are seen by specific groups of the electorate, dialogue between communities is prevented and disinformation can more easily thrive. This type of advertising can and has already been weaponised by nefarious actors to distort public debate and influence democratic processes in Europe. Russian interference in the US 2016 election via targeted ads was a clear example and, at a time when the world order is increasingly precarious and actors such as Russia seek to undermine the EU, the risks are now even higher.

The Digital Services Act is a vital opportunity to move towards a safer online advertising system which European consumers and businesses are able to trust and which safeguards citizens’ fundamental rights. European citizens are counting on France to follow through on its promise to ensure that a final deal on the DSA prohibits the use of sensitive data, including the drawing of inferences about a person’s sensitive characteristics, for the purpose of displaying advertisements. This is a critical baseline protection, already limited in scope to online platforms only, proportionate to the harms and necessary to achieve the aims.

If France wants a swift deal on the Digital Services Act, it cannot afford to betray European citizens at the eleventh hour. We hope that instead you will lead the way in ensuring a Digital Services Act that offers vital and overdue protections for European citizens.

Yours sincerely,


Access Now

All Out

Alliance4Europe

Avaaz

Bits of Freedom

Bulgarian Helsinki Committee

Civil Liberties Union for Europe (Liberties)

Citizen D / Državljan D

Cultural Broadcasting Archive (CBA)

Defend Democracy

D3 - Defesa dos Direitos Digitais

Democracy and Human Rights Education in Europe - DARE network

Digitas Institute

European Digital Rights Initiative (EDRi)

European Youth Forum

Fair Vote

Federation of German Consumer Organisations (vzbv)

Fix the Status Quo

Global Action Plan UK

Global Forum for Media Development

Global Witness

HateAid

Institute for Strategic Dialogue

Irish Council for Civil Liberties

#JeSuisLà

Lie Detectors

LobbyControl

Panoptykon Foundation

Peter Tatchell Foundation

Ranking Digital Rights

Sum of Us

The Coalition For Women In Journalism

The Daphne Caruana Galizia Foundation

The Signals Network

Vrijschrift.org

Waag

WeMove Europe

Wikimedia Deutschland

Wikimedia France

CC:

Mr Thierry Breton, European Commissioner for Internal Market

Mr Bruno Le Maire, Minister of the Economy, Finance and the Recovery

Mr Cédric O, Minister of State for Digital Transition and Electronic Communications

Ms Agnès Pannier-Runacher, Minister Delegate for Industry

Mr Clément Beaune, Secretary of State for European Affairs

Vast Majority of French People Do Not Want to be Targeted with Online Ads Based on Their Sensitive Personal Data (YouGov Poll)
https://peoplevsbig.tech/vast-majority-of-french-people-do-not-want-to-be-targeted-with-online-ads-based-on-their-sensitive-personal-data-yougov-poll/
Fri, 25 Mar 2022


New polling reveals 70% of adults in France support a ban on the use of people’s sensitive personal data in online targeted advertising.

A new YouGov poll commissioned by the People vs Big Tech network demonstrates that people in France find it wholly unacceptable for technology companies to target individuals with online advertising based on their personal data. The findings come just as the French Minister of State for Digital Transition and Electronic Communications, Cédric O, announced that the Digital Services Act (DSA) will include a ban on targeted ads using sensitive data, as well as on targeted ads to minors. While this is a promising development, it is not a done deal: EU lawmakers continue to discuss the final provisions of the landmark legislation, which aims to hold large online platforms accountable for the harms of their business model.

At a time when industry pressure appears to be causing representatives of the EU Commission and EU Council to consider backsliding and removing the measure proposed by the European Parliament, the survey shows that 69% of people in France find it unacceptable for technology companies to use their personal data in this way.

Rewan Al-Haddad, Campaign Director at SumOfUs, said: “For years now people across Europe have been demanding stronger protection against Big Tech’s toxic and abusive data practices which only serve to enrich the coffers of the likes of Mark Zuckerberg – and these poll results in France demonstrate exactly that. Now the question is whether EU lawmakers will follow Cédric O’s lead, or capitulate to Big Tech lobbyists.”

A full 70% of those polled supported a ban on the use of people’s sensitive personal data (such as a person’s ethnicity, religious beliefs, sexual preferences, health conditions, or political opinions) in online targeted advertising – with 42% strongly supporting such a ban. By contrast, just 10% strongly opposed a proposed ban on the invasive practice.

These findings are particularly significant for French politicians as France currently chairs the negotiations for the DSA and plays a pivotal role in shaping the rules by which Big Tech companies will be allowed to operate in Europe going forward. Of note, the overall results of the survey show public opinion in France is in line with the views of small business leaders as well. For more information on their views, see January 2022 polling.

The EU has a decisive opportunity to establish the baseline protections necessary to secure the fundamental rights of EU citizens. Because no regulation currently prevents companies like Meta/Facebook or Google from profiling people based on their sensitive data (or making inferences about such characteristics), the online targeted advertising system has been repeatedly weaponised with ads designed to suppress votes, sell miracle cures, recruit militias, and incite violence.

But an opportunity to stem these harms is available and, as the YouGov poll results demonstrate, the time to act is now. The European Parliament's proposed measure in Article 24 of the DSA to prohibit the use of sensitive data (including the drawing of inferences about a person's sensitive characteristics) for the purpose of displaying advertisements must be adopted into the final provisions of the law. A system that enables foreign interference, facilitates the spread of disinformation, and inflames tensions between groups can no longer be permitted to operate without robust government oversight and new rules in place to protect privacy, promote safety, and maintain the integrity of Europe's democratic processes.


Polling Note: All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 1023 adults. Fieldwork was undertaken between 21st - 23rd March 2022. The survey was carried out online. The figures have been weighted and are representative of all French adults (aged 18+).

BRIEFING: Priorities for the Digital Services Act Trilogues
https://peoplevsbig.tech/briefing-priorities-for-the-digital-services-act-trilogues/
Thu, 17 Feb 2022

A civil society briefing for EU negotiators sets out key requirements for ensuring the DSA protects citizens' fundamental rights.


This briefing paper has been compiled by SumOfUs, Panoptykon Foundation, Global Witness, Alliance 4 Europe, Je Suis Là, HateAid, Amnesty International, The Signals Network, AlgorithmWatch, Defend Democracy, Avaaz and Vrijschrift.

The Digital Services Act (DSA) is a crucial and welcome opportunity to hold online platforms to account and ensure a safer and more transparent online environment for all. EU negotiators must ensure that the DSA has the protection of citizens’ fundamental rights and democracy at its core, establishing meaningful long-term accountability and scrutiny of online platforms. Key outstanding issues must be resolved in the Trilogues, including EU-level enforcement, due diligence requirements, data scrutiny and tackling systemic risks related to tracking-based advertising.

As the DSA negotiations progress, we therefore urge you to prioritise the following issues:

A strong EU-level enforcement regime for VLOPs (Art 50)

We commend the Council for its support for an EU-level enforcement structure, as confirmed in the General Approach, and recommend giving enforcement powers to an independent unit inside the European Commission to oversee VLOPs. Matched with adequate resources, we believe independent EU-level enforcement powers offer the best opportunity for ensuring deep and consistent checks of VLOPs' compliance with due diligence measures from the outset. We urge you to prioritise this in the negotiations, avoiding the pitfalls of fragmentation and delay that have plagued other EU legislation such as the GDPR.

Tackling the most egregious forms of tracking-based advertising (Art 24)

The European Parliament's DSA position secures important new safeguards against some of the most egregious and invasive forms of profiling for tracking-based advertising: the targeting of minors and the use of sensitive data, including sexual orientation, health data, and religious or political beliefs. EU policymakers must urgently guarantee this protection for citizens. This type of data should not be used for advertising purposes, given the inherent systemic risks posed. Recent polling from Global Witness and Amnesty Tech in France and Germany has shown that not only are citizens deeply uncomfortable with their sensitive data being used for advertising, but SMEs are also wary, believing their own customers would disapprove and wanting to see more regulation.

An end to manipulative practices and fair access (Art 13a & 24)

If the DSA is to truly empower users and protect fundamental rights, platforms must be prevented from using manipulative design techniques, or "dark patterns", to coerce users' consent and decisions. The Parliament's addition of Article 13a on "Online interface design and organisation" is an essential development for safeguarding users' rights and protecting them from unfair consumer practices. This must include the ability for users to indicate their opt-out preference in the browser via a legally binding "do not track" signal, sparing them from continuous consent banners. Refusing consent should be just as easy as giving it, and users who reject tracking should still have alternative access options which are fair and reasonable (Art 13a 1; Art 24 1a).

Ensuring meaningful third party scrutiny of VLOPs (Art 31)

While we welcome the DSA's ambition to mandate data scrutiny of VLOPs by third parties in relation to their systemic risks, we are concerned this crucial oversight measure will be severely weakened if it is limited to academics and if platforms are able to invoke a broad "trade secrets" exemption. Given the crucial role civil society organisations play in holding platforms to account and exposing rights breaches and other harms, access should be extended to them, provided their proposals adhere to the highest ethical and methodological standards and they are able to secure any personal data they receive. Currently, scrutiny is severely hampered by the lack of data available as well as by a hostile approach from key platforms. This includes Facebook weaponising its terms of service to intimidate AlgorithmWatch into shutting down its Instagram Monitoring Project. We therefore strongly urge you to support the Parliament's position to widen access to include "vetted not-for-profit bodies, organisations or associations" and remove the trade secrets exemption.

Widening risk assessment to cover all rights and social harms (Art 26 and 27) 

We urge you to support the Parliament's position on risk assessment and clarify the text to ensure that it expands risk assessment to consider all fundamental rights, as set out in the EU Charter of Fundamental Rights, while maintaining a focus on social harms such as disinformation. This expansion is essential to ensure risk assessment is comprehensive and sufficiently addresses all systemic risks, current and future. A crucial addition in the Parliament's position is the requirement to assess the risks posed by algorithms, activities, and business-model choices before new products are deployed, together with an explicit focus on VLOPs' business-model choices and the inclusion of risks stemming from "algorithmic systems". Finally, the DSA should require that civil society organisations be consulted as part of VLOPs' risk assessments and when designing risk mitigation measures, as the Parliament's position underlines (Art 26 2a; Art 27 1a). This is essential as a check on potential negative effects of mitigation measures on citizens or minorities, such as discriminatory moderation or over-removal of content.

Empowering users to seek redress (Art 17)

We commend the Council for its position regarding the internal complaint-handling system, empowering users to seek redress against wrongful actions and inactions by the platforms. As the General Approach makes clear, the system must be broadened so that it covers all cases, including where users want to act when a platform has not removed or disabled access to a piece of content. Failing to broaden the application of this Article would further harm victims of hate speech and vulnerable communities, who would be left powerless. We therefore strongly urge you to follow the Council's position (by including "whether or not" in Art 17(1)) and provide redress through internal complaint-handling mechanisms to all users.

Priorités pour les trilogues (pdf)

Schwerpunkte für die Triloge (pdf)

Key issues for trilogues (pdf)

MEPs Stand Up to Big Tech with Significant DSA Vote
https://peoplevsbig.tech/meps-stand-up-to-big-tech-with-significant-dsa-vote/
Mon, 14 Feb 2022


The European Parliament moves to curtail invasive advertising and block loopholes that would worsen vulnerability to disinformation attacks.

In a full vote of the European Parliament in Strasbourg on the evening of 19 January, with the results announced the following morning, Members of the European Parliament (MEPs) backed amendments to Article 24 of the Digital Services Act (DSA) that will see tougher restrictions on how personal data can be used in targeted advertising, including a ban on the use of sensitive data for targeted ads and a requirement that platforms provide continued fair access to users who turn off targeted ads. Although the Parliament missed a historic opportunity to fully outlaw targeted ads based on people's personal data, these essential steps will help restrict the current abusive business model, which allows Big Tech companies to profit off the invasive collection and use of their users' data.

The final DSA text (which still must go through the Trilogues process before it becomes law) comes after months of hard campaigning from civil society groups in the face of unprecedented lobbying from Silicon Valley firms. Another welcome development was the voting down of an amendment (Recital 38) that would have effectively mandated the continued algorithmic promotion of content from any outlet calling itself media, even if the content is disinformation. Other wins represented in the vote include last year’s defeat of a broad trade secrets exemption that would have undermined crucial data access and scrutiny provisions in the DSA, as well as widened access to platform data for third-party researchers including civil society.

MEPs' vote to outlaw the most invasive targeted-advertising practices embodies the growing global momentum against Big Tech's surveillance advertising model. The crucial European vote came on the heels of US Members of Congress separately proposing legislation to ban surveillance advertising in the US – the latest signal that lawmakers around the world are looking to take a stand against Big Tech's abusive business model.

Members of the People vs Big Tech network welcomed the outcome of the European Parliament’s vote and called on EU leaders to ensure these changes are signed into law later this year, releasing a joint statement here.

In response to the outcome of the vote and the collective efforts of the People vs Big Tech network to help secure it, MEPs said the following:

  • MEP Karen Melchior, Danish Social Liberal Party: "This week the people of Europe took back control. People Vs Big Tech campaign allowed to unify civil society and digital rights activists; bringing the debate into the mainstream. The united front paid off when we voted the amendments in plenary, we managed to fight off the media exception, and got a majority for protection against tracking ads. I’m grateful for the work of People Vs Big Tech. You should all be proud of the results achieved!"
  • MEP Alexandra Geese, Greens: "I thank all of you, who helped us to expose the interests of the big-tech lobby in the public debate and to ensure objectification. The outcome is a tremendous success. You have opened up the discussion space and brought the debate back to the facts."
  • MEP Paul Tang, Progressive Alliance of Socialists and Democrats: "[The] DSA voting result proved Big Tech's long-lasting campaign - worth millions of euros - couldn't stand the power of all these individuals and civil society organisations defending their rights and interests. Civil society won! We are, as MEP's, but foremost as members of the Tracking-free Ads Coalition, enormously grateful for all your efforts and this powerful result! We are not there yet. However, by continuing this cooperation and unity, I'm confident we will effectively limit the harmful practices of a few and make the many powerful. Many thanks once again!"
  • MEP Kim Van de Sparrentak, Greens: "We made a number of groundbreaking steps! Thank you for creating a strong movement, raising the momentum and making sure people’s voices were heard, to counter big tech’s lobbying efforts. We’ll keep up the fight for more fundamental change in the next years. Together we will end the divisive recommender algorithms and toxic business models for once and for all."

The next round of DSA negotiations (the so-called Trilogues) is already underway, with the aim of agreeing the final package as early as April. Going forward, the People vs Big Tech network will continue to work to protect these victories, while also demanding greater access to justice for victims of online abuse under Article 17 of the Act. As it stands, the DSA leaves victims of digital violence and abuse with no option to appeal if their notifications or requests for remedy are denied by the platforms. Platforms' content moderation practices already disproportionately harm marginalised groups – a lack of appeal options would have a silencing effect on large numbers of platform users such as women and minorities.

On this important issue, Josephine Ballon, Head of Legal of HateAid, said “Every second woman is afraid to express their opinion freely online. With this vote, the European Parliament leaves millions of users defenseless against hate speech and disinformation - with devastating consequences especially for women and minority groups. HateAid, the first counseling center for victims of online violence in Germany, is calling on the Council to uphold their position concerning equal access to mechanisms laid out in Article 17 and Article 18 in the Trilogues.”

Small Businesses Want EU to Get Tough on Big Tech Ads
https://peoplevsbig.tech/small-businesses-want-eu-to-get-tough-on-big-tech-ads/
Mon, 17 Jan 2022


YouGov poll reveals small business leaders are uncomfortable with Facebook and Google’s tracking-based advertising

A new YouGov poll commissioned by Amnesty International and Global Witness has shown that small business leaders in France and Germany want alternatives to Facebook and Google’s dominant tracking-based advertising. The findings come ahead of a key EU vote this week on the Digital Services Act (DSA), which aims to impose stricter rules on tracking-based digital advertising and force more accountability from Big Tech companies.

Facebook and other industry leaders have emphasized their belief that targeted advertising is necessary for the survival of European small and mid-sized enterprises. But the survey shows that 79% of leaders of small and medium-sized enterprises felt that large online platforms – such as Facebook and Google – should face increased regulation of how they use personal data to target users with online advertising.

The poll further reveals that 75% of respondents believe tracking-based advertising undermines people's privacy and other human rights.

“The constant and invasive monitoring of our lives to target people with ads is unacceptable, annihilates our right to privacy, and fuels discrimination. These results show that business owners are extremely uncomfortable with the approach to tracking-based advertising that their customers currently experience,” said Claudia Prettner, Legal and Policy Adviser at Amnesty Tech. “This week’s plenary vote on the Digital Services Act represents a vital opportunity for MEPs to stand up for human rights, and to take action to address advertising practices that rely on intrusive surveillance.”

Respondents were also uncomfortable with the influence and near-monopoly that platforms like Facebook and Google hold: a total of 69% of business owners surveyed said that they felt they had no option but to advertise with them due to their dominance of the industry.

The survey also showed that business owners believed their customers were not comfortable being targeted with online ads based on their race or ethnicity (62%), their sexual orientation (66%), information about their health (67%), their religious views (65%), their political views (65%), or personal events in their life (62%).

“It’s been part of Facebook and Google’s lobbying playbook to use small business’ reliance on their services as a fig leaf to justify their invasive profiling and targeting of users for advertising,” said Nienke Palstra, Senior Campaigner on Digital Threats to Democracy at Global Witness. “In fact, our polling shows small business leaders in France and Germany are deeply wary of their ad tech practices – but don’t see an alternative. Given the overwhelming support from small business to regulate ad tech giants, there is every reason for MEPs to go further in the Digital Services Act and protect individuals from surveillance advertising.”

The latest findings support previous Global Witness polling conducted in February 2021 that investigated French and German social media users’ attitudes to targeted advertising. Those results showed overwhelmingly that people were deeply uncomfortable about the ways they are targeted by advertisers every day, from being categorized by income and religious views to life events such as pregnancy, bereavement, or illness.

The YouGov poll was conducted with more than 600 leaders of small and medium-sized enterprises in France and Germany. The full results are available to download for the France and Germany polls.

MEPs Must Reject “Media Exemption” Loopholes in the DSA
https://peoplevsbig.tech/meps-must-reject-media-exemption-loopholes-in-the-dsa/
Sun, 16 Jan 2022


A proposed "media exemption" amendment will lead to greater online disinformation attacks against EU citizens.

On Thursday, Members of the European Parliament (MEPs) will vote on the Digital Services Act (DSA), a landmark piece of legislation that serves as a golden opportunity for Europe to address algorithmic harms and turn off Big Tech's manipulation machine. Years in the making, the DSA has the potential to make a significant impact in the critical fight against online disinformation by requiring platforms to mitigate the serious risks created by the functionality of their services – including the way their algorithms amplify illegal and harmful content such as propaganda and disinformation attacks. Yet the DSA's potential to tackle online disinformation could be severely undermined by a proposed "media exemption" amendment, which would effectively mandate the continued algorithmic promotion of media news content even when that content is false. This is particularly problematic while who or what constitutes "media" remains vague. If a false story is published by a media outlet, platforms would not be able to apply circuit breakers to their own algorithms to deamplify the disinformation – regardless of how damaging it may be.

It is important to acknowledge that rigorous journalism and fact-checking are vital in the fight against disinformation and must be protected and promoted in a healthy democracy. Platforms should also not be allowed to arbitrarily abuse their power. But creating carve-outs in the DSA is a dangerous solution, prompting European Commission Vice-President Věra Jourová (responsible for disinformation and media freedom) to call it “good intentions leading to hell.” That’s why over 50 fact-checkers, journalists, and experts alike called on MEPs to reject “media exemption” loopholes in the DSA. As Thursday’s vote approaches, the People vs Big Tech network once again affirms that any “media exemptions” must be categorically rejected in order to maintain the efficacy of the DSA. Such loopholes, if passed, would open the floodgates for disinformation because:

  1. Current media rules do not provide sufficient limitations on the production and publication of false news and information (regulatory frameworks governing press and broadcasters are designed to offer ex-post, not ex-ante, solutions). Given this, a “media exemption” to the DSA would allow the damage from a false story to keep spreading online before any remedy could be imposed.
  2. Troubling trends in media ownership have allowed for “a resurgence of press baronism and politicisation.” Under a “media exemption” loophole, potentially compromised media outlets would gain cover for publishing conspiracy theories or other falsehoods designed to advance the personal and political agendas of their owners.
  3. State-controlled media from countries like Russia are able to produce what amounts to essentially “licensed propaganda,” and self-regulated private media organisations don’t always come with quality assurances. If a “media exemption” loophole was passed, even if the stories were dangerously misleading or blatantly false, they’d continue being shown to millions of people.

In line with these crucial considerations, media scholar Dr. Justin Schlosberg has just released a new briefing outlining why any “media exemption” in the DSA will lead to greater online disinformation attacks against European citizens. A Reader in Journalism and Media at Birkbeck College, University of London, and the author of multiple books about the media, Dr. Schlosberg unpacks the above points and makes plain why it is paramount for MEPs to reject any proposed exemptions. As he notes: “A frictionless system that algorithmically amplifies content cannot have special carve-outs for any type of content.” The briefing also shows how current provisions in the DSA already protect the media from arbitrary decisions by powerful platforms without needing exemptions.

With voting on the DSA this week, now is the time to take a stand and tell MEPs that the DSA can still protect media freedom while rejecting media exemptions.

Download Dr. Schlosberg’s full briefing, and a tweet-ready image calling for MEPs to reject any “media exemption” loophole, below.

Schlosberg DSA Media Exemption Briefing (pdf)

No Media Exemption Loophole (Image) (png)
