Mis/disinformation Archives - People vs. Big Tech
https://peoplevsbig.tech/category/mis-disinformation/

Civil Society Organisations Call on EU Parliament to Close Disinformation Loophole
24 September 2023
https://peoplevsbig.tech/civil-society-organisations-call-on-eu-parliament-to-close-disinformation-loophole/

The carve-out for media in the proposed European Media Freedom Act will seriously impede efforts to combat hate speech and disinformation


Dear Members of the European Parliament,


We, 24 civil society groups and experts from across Europe, are writing to express our deep concern about the danger posed to public safety by Article 17.2 of the European Media Freedom Act and to urge you to vote for plenary amendments seeking to mitigate its threat.


As written, the proposed Article would introduce a dangerous carve-out from online content moderation for media, seriously impeding the fight against hate speech and disinformation, hindering protection of minors, and laying Europe’s democracies bare to interference from malign foreign and domestic actors.


It would also damage the very people it seeks to protect, corroding the reach of legitimate journalists and drowning out their voices with clickbait and disinformation. Indeed, Nobel prize-winning journalists Maria Ressa and Dmitry Muratov have warned against such “special exemptions” in their 10-point plan to fix the information crisis, stressing that such carve-outs would give a “blank check” to governments and non-state actors producing industrial-scale disinformation to harm democracy. The plan has been signed by over 289 Nobel Laureates, organisations, and individuals around the world.


While well-intentioned, the CULT Committee’s version of Article 17.2 is the worst of all worlds, with parameters so wide, and vetting procedures so weak, that virtually anyone describing themselves as media would be entitled to privileged treatment. By requiring platforms to keep problematic “media” content up for 24 hours, and preventing them from labelling or blurring posts, it would remove the ability to take swift action to prevent the viral spread of disinformation or other harmful content in the most crucial hours — or contain the subsequent damage.


A must-carry provision for media content raises particular concerns in countries where the ruling party controls public service broadcasting as state media. It would also mean content from pro-Putin disinformation sites would be subject to lighter rules than posts from ordinary people, a situation as perilous as it would be unjust. This rash approach is even more alarming for the timing, coming just after a Commission study found that online disinformation is still thriving, and tech companies still failing to remove a large share.


A media exemption was already considered, and rejected, in the Digital Services Act. MEPs wisely said no, understanding that the measure would seriously disable Europe’s efforts to rein in the worst abuses of the tech platforms and compromise user expectations for unbiased content moderation. A year later it is back, pushed by a powerful media lobby, despite posing the same threat to democracy, public safety, and the future of robust, fact-based journalism. By granting a special privilege to media service providers, Article 17 undermines the EU Code of Practice on Disinformation and the EU’s Digital Services Act by adding new and potentially conflicting procedures.



A media exemption was a bad idea for the DSA, and it is a bad idea for the EMFA – even if disguised under a new name. We urge you to once again stand up for European citizens, democracy and media integrity and vote for alternative plenary amendments that would:

  • Remove “restrict” from “suspend or restrict” so that platforms will still be able to automatically blur, label, or algorithmically downrank content that violates their policies even if they cannot suspend content, limiting the damage of the media loophole;
  • Remove the 24-hour must-carry obligation, which allows huge damage to be done by the spread of viral disinformation and hate speech;
  • Remove the involvement of national regulators in the designation of media service providers, which is ripe for abuse by member states where media freedom is at threat.

Yours sincerely, 

Bits of Freedom

Centre for Peace Studies

Coalition For Women In Journalism (CFWIJ)

Defend Democracy

Digital Action

Ekō

Electronic Frontier Foundation (EFF)

Electronic Frontier Finland

EU Disinfo Lab

European Digital Rights (EDRi)

European Partnership for Democracy (EPD)

Fair Vote UK

Foundation The London Story

Global Witness

HateAid

Homo Digitalis

Institute for Strategic Dialogue (ISD)

Liberties

‘NEVER AGAIN’ Association

Panoptykon

People vs Big Tech

Politiscope

WHAT TO FIX

#jesuislà

A 10-point plan to address our information crisis
2 September 2022
https://peoplevsbig.tech/a-10-point-plan-to-address-our-information-crisis/

Nobel prize laureates Maria Ressa and Dmitry Muratov launch a roadmap for fixing our global public square


Presented by 2021 Nobel Peace Prize laureates Maria Ressa and Dmitry Muratov at the Freedom of Expression Conference, Nobel Peace Center, Oslo, 2 September 2022



We call for a world in which technology is built in service of humanity and where our global public square protects human rights above profits.

Right now, the huge potential of technology to advance our societies has been undermined by the business model and design of the dominant online platforms. But we remind all those in power that true human progress comes from harnessing technology to advance rights and freedoms for all, not sacrificing them for the wealth and power of a few.

We urge rights-respecting democracies to wake up to the existential threat of information ecosystems being distorted by a Big Tech business model fixated on harvesting people’s data and attention, even as it undermines serious journalism and polarises debate in society and political life.

When facts become optional and trust disappears, we will no longer be able to hold power to account. We need a public sphere where fostering trust with a healthy exchange of ideas is valued more highly than corporate profits and where rigorous journalism can cut through the noise.

Many governments around the world have exploited these platforms’ greed to grab and consolidate power. That is why they also attack and muzzle the free press. Clearly, these governments cannot be trusted to address this crisis. But nor should we put our rights in the hands of technology companies intent on sustaining a broken business model that actively promotes disinformation, hate speech and abuse.

The resulting toxic information ecosystem is not inevitable. Those in power must do their part to build a world that puts human rights, dignity, and security first, including by safeguarding scientific and journalistic methods and tested knowledge. To build that world, we must:

Bring an end to the surveillance-for-profit business model

The invisible ‘editors’ of today’s information ecosystem are the opaque algorithms and recommender systems built by tech companies that track and target us. They amplify misogyny, racism, hate, junk science and disinformation – weaponizing every societal fault line with relentless surveillance to maximize “engagement”. This surveillance-for-profit business model is built on the con of our supposed consent. But forcing us to choose between allowing platforms and data brokers to feast on our personal data or being shut out from the benefits of the modern world is simply no choice at all. The vast machinery of corporate surveillance not only abuses our right to privacy, but allows our data to be used against us, undermining our freedoms and enabling discrimination.

This unethical business model must be reined in globally, including by bringing an end to surveillance advertising that people never asked for and of which they are often unaware. Europe has made a start, with the Digital Services and Digital Markets Acts. Now these must be enforced in ways that compel platforms to de-risk their design, detox their algorithms and give users real control. Privacy and data rights, to date largely notional, must also be properly enforced. And advertisers must use their money and influence to protect their customers against a tech industry that is actively harming people.

End tech discrimination and treat people everywhere equally

Global tech companies afford people unequal rights and protection depending on their status, power, nationality, and language. We have seen the painful and destructive consequences of tech companies’ failure to prioritize the safety of all people everywhere equally. Companies must be legally required to rigorously assess human rights risks in every country they seek to expand in, ensuring proportionate language and cultural competency. They must also be forced to bring their closed-door decisions on content moderation and algorithm changes into the light and end all special exemptions for those with the most power and reach. These safety, design, and product choices that affect billions of people cannot be left to corporations to decide. Transparency and accountability rules are an essential first step to reclaiming the internet for the public good.

Rebuild independent journalism as the antidote to tyranny

Big tech platforms have unleashed forces that are devastating independent media by swallowing up online advertising while simultaneously enabling a tech-fueled tsunami of lies and hate that drown out facts. For facts to stand a chance, we must end the amplification of disinformation by tech platforms. But this alone is not enough. Just 13% of the world’s population can currently access a free press. If we are to hold power to account and protect journalists, we need unparalleled investment in a truly independent media persevering in situ or working in exile that ensures its sustainability while incentivizing compliance with ethical norms in journalism.

21st century newsrooms must also forge a new, distinct path, recognizing that to advance justice and rights, they must represent the diversity of the communities they serve. Governments must ensure the safety and independence of journalists who are increasingly being attacked, imprisoned, or killed on the frontlines of this war on facts.

We, as Nobel Laureates, from across the world, send a united message: together we can end this corporate and technological assault on our lives and liberties, but we must act now. It is time to implement the solutions we already have to rebuild journalism and reclaim the technological architecture of global conversation for all humanity.

We call on all rights-respecting democratic governments to:
1. Require tech companies to carry out independent human rights impact assessments that must be made public as well as demand transparency on all aspects of their business – from content moderation to algorithm impacts to data processing to integrity policies.


2. Protect citizens’ right to privacy with robust data protection laws.


3. Publicly condemn abuses against the free press and journalists globally and commit funding and assistance to independent media and journalists under attack.


We call on the EU to:
4. Be ambitious in enforcing the Digital Services and Digital Markets Acts so these laws amount to more than just ‘new paperwork’ for the companies and instead force them to make changes to their business model, such as ending algorithmic amplification that threatens fundamental rights and spreads disinformation and hate, including in cases where the risks originate outside EU borders.


5. Urgently propose legislation to ban surveillance advertising, recognizing this practice is fundamentally incompatible with human rights.


6. Properly enforce the EU General Data Protection Regulation so that people’s data rights are finally made reality.


7. Include strong safeguards for journalists’ safety, media sustainability and democratic guarantees in the digital space in the forthcoming European Media Freedom Act.


8. Protect media freedom by cutting off disinformation upstream. This means there should be no special exemptions or carve-outs for any organisation or individual in any new technology or media legislation. With globalised information flows, this would give a blank check to those governments and non-state actors who produce industrial scale disinformation to harm democracies and polarise societies everywhere.


9. Challenge the extraordinary lobbying machinery, the astroturfing campaigns and recruitment revolving door between big tech companies and European government institutions.


We call on the UN to:
10. Create a special Envoy of the UN Secretary-General focused on the Safety of Journalists (SESJ) who would challenge the current status quo and finally raise the cost of crimes against journalists.


Signed by:
Dmitry Muratov, 2021 Nobel Peace Prize laureate
Maria Ressa, 2021 Nobel Peace Prize laureate

For more information and the full list of signatories, go to: www.10pointplan.org

Fueled by Social Media, Calls for Violence Against Muslims Reach Fever Pitch in India
22 February 2022
https://peoplevsbig.tech/fueled-by-social-media-calls-for-violence-against-muslims-reach-fever-pitch-in-india/


New research documents the dangerous degree to which hate speech and disinformation on Facebook are thriving. Genocide Watch warns that the “early warning signs of genocide” are present in India.

A recent study by The London Story (TLS), a diaspora-led foundation in the Netherlands working to combat disinformation and hate speech online, reveals the shockingly pervasive nature of hate speech on Facebook in India. The foundation’s report, Face of Hatebook, finds the world’s most popular social media platform hosts and promotes “a disturbing volume of direct hate speech, disinformation, and calls to violence against minorities” – particularly those of Muslim faith. The prevalence of this type of content is all the more dangerous in a country such as India, notes TLS, which has the world’s largest number of Facebook users, where the platform is a de facto “public square” for news and propaganda, and where violent far-right voices have become more mainstream due to amplification on Facebook and its associated apps.

The nature of the hate speech, the protection of perpetrators of violence, and the complicity of elected and law enforcement officials have all prompted genocide experts to sound the alarm that India’s minorities, particularly Muslims, are at grave risk. Genocide Watch, which issued the alert, is however careful to say that it does not mean genocide is underway – only that the signs of danger are present – and that if or when genocidal violence is unleashed, it will be by mobs rather than directly by the State. Civil society activists have been warning for years that Silicon Valley’s tools have emboldened and enabled these violent mobs. Activists are demanding that Facebook release the full human rights impact assessment it commissioned on India.

In the wake of Frances Haugen’s disclosures, Facebook’s non-English content moderation failures are well documented. Although roughly two-thirds of Facebook users engage on the platform in a language other than English, internal company documents show dangerous content and hate speech are common in regions where Facebook lacks moderators and AI solutions capable of understanding and detecting threatening content in other languages. This allocation of resources contributes to a gap in protection for at-risk populations in non-English-speaking countries such as India where unaddressed online hate speech and disinformation can be weaponised to prompt and accelerate real-world attacks.

The widespread violence that occurred against Rohingya Muslims in Myanmar is one of the most acute examples of how dangerous this environment can be. A U.N. fact-finding mission examining the Myanmar crisis found Facebook was “a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet.” The mission further noted the company’s response to the ethnic cleansing was “slow and ineffective.” A subsequent independent report commissioned by the Big Tech giant itself confirmed these issues, finding the platform had created an “enabling environment” for human rights abuse and concluding: “Facebook has become a means for those seeking to spread hate and cause harm, and posts have been linked to offline violence.”

A similarly explosive situation is now brewing in India. Documents disclosed last year by Haugen to the U.S. Securities and Exchange Commission reveal Facebook knows it is struggling with a similar problem in its largest user market, where nearly 400 million accounts exist. In March of 2021, documents show company employees discussing whether they would be able to address the “fear mongering, anti-Muslim narratives” being broadcast on the platform by a far-right Hindu nationalist group with ties to Indian Prime Minister Narendra Modi. Another internal document shows significant portions of India-based Islamophobic content were “never flagged or actioned” due to Facebook’s inability to effectively moderate content in Hindi and Bengali. Facebook’s content policies in India faced additional scrutiny in 2020 when it came to light that the company’s former Public Policy Director for India, South & Central Asia, Ankhi Das, opposed applying hate-speech rules to politicians associated with Modi’s Bharatiya Janata Party and told staff doing so “would damage the company’s business prospects in the country.”

Aware of these glaring shortcomings, and pursuant to its mission to investigate human rights violations and abuses, TLS conducted its own research into hate speech in India on Facebook. TLS researchers used specific keywords to identify over 607 active Facebook groups that post anti-Muslim and pro-"Hindu Rashtra" (Hindu nation) narratives on the platform. The team compared the content of these groups’ posts to Facebook’s Community Standards, and found the following:

  • Violence and Incitement: “There are several examples of threats to kill, raze and demolish persons, properties and communities on the Facebook. These include direct threats and calls to action, as well as indirect subtle commenting that some people, religion, etc. should perish. Both direct and indirect threats have resulted in real-time violence across the world. These threats have also led to imminent harm, such as lynching, assembling of armed groups with machetes and firearms.”
  • Hate Speech: “Hateful content dehumanizing Indian Muslims, attacking them as inferior, morally bankrupt, alleging them to be violent and sexual predators, and calling for their exclusion and segregation continues to get a platform on Facebook. By allowing such massive amount of hate and vitriol, Facebook is not just complicit in dehumanizing Indian Muslims, it is also shares responsibility for creating an atmosphere of fear in India.”
  • Dangerous Individuals and Organizations: “Facebook continues to host ultra-right wing Hindu outfits like RSS, Vishwa Hindu Parishad, Hindu Swayam Sewak, Bajrang Dal, and its support system of fan pages, despite the violent proclamations against Indian Muslims of these groups and pages. Facebook continues to allow not only these organizations, but also their millions of supporters to praise and incite violence against Indian Muslims.”

Throughout the study, TLS reported its troubling findings to Facebook. Yet consistent with the moderation concerns identified by the above disclosed internal documents, the platform’s automated processes repeatedly responded that the content was not in violation of any of Facebook’s Community Standards. Although several of the most problematic posts were also submitted for human review, they all remained on the platform.

Excerpt from Face of Hatebook report: Post calling for violence against Muslims and stating “They all deserve to be kept in camps like China keeps Uyghur Muslim.”

One such post, a 2019 video of a speech in which influential Hindu religious leader Yati Narsinghanand calls for the “extermination” of Islam “from the face of the Earth,” has been viewed over 32 million times. The speech, and the bulk of its 144,000 comments, are in Hindi. The TLS team referred this post to Facebook’s Oversight Board, but the case was not selected for review.

The continued public availability of posts like this video beggars belief, because the priest featured in it, Yati Narsinghanand, was ultimately arrested after a December 2021 speech in Haridwar in which he called for violence against India’s Muslims and encouraged an “ethnic cleansing” similar to the attacks on Rohingya Muslims in Myanmar. Video of the event went viral, elevating hate-filled and violence-tinged rhetoric in the country to dangerous levels.

TLS’s additional research revealed videos of Narsinghanand’s December speech remained up and publicly available in various segments on Facebook at the time of this post’s publication. Other similarly inciting videos, such as one from last year with nearly six million views in which Narsinghanand refers to a fifteen-year-old Muslim boy who was beaten in his temple as a “poisonous snake”, remain available on the platform as well.

In light of Facebook’s inability to contain hate speech and inciting calls to violence on its platform, TLS is calling for Facebook to be shut down in India to help protect millions of Muslims and other minorities from hate speech and dehumanisation on social media. The foundation is also urging Facebook shareholders not to turn a blind eye to these harms, but rather to consciously divest from Facebook and its businesses. A public petition demanding the release of the human rights impact assessment on India that Facebook commissioned in 2020 is available for signature here. Given the volatility of the situation, and the fact that these online harms are increasingly translating into offline violence, the time for action is now.

To help raise awareness of this pressing issue, TLS is hosting several conversations on the issue of hate speech and digital propaganda at the India on the Brink: Preventing Genocide summit from February 26-28, 2022. The virtual event will bring together a variety of expert speakers to commemorate the 20th anniversary of the 2002 Gujarat pogrom, share insights and warning signs of what may be to come, and put forth possibilities for a way forward that prevents genocide from occurring in India. Those speaking at the event include former UN Special Adviser on the Prevention of Genocide Adama Dieng, Executive Director of Genocide Watch Dr. Gregory Stanton, and international genocide experts like Elisa von Joeden. All are welcome to attend and encouraged to help amplify the risks facing Indian minorities. More details and sign-up information are available here.

BRIEFING: Priorities for the Digital Services Act Trilogues
17 February 2022
https://peoplevsbig.tech/briefing-priorities-for-the-digital-services-act-trilogues/

A civil society briefing for EU negotiators sets out key requirements for ensuring the DSA protects citizens’ fundamental rights


This briefing paper has been compiled by SumOfUs, Panoptykon Foundation, Global Witness, Alliance 4 Europe, Je Suis Là, Hate Aid, Amnesty International, The Signals Network, AlgorithmWatch, Defend Democracy, Avaaz and Vrijschift

The Digital Services Act (DSA) is a crucial and welcome opportunity to hold online platforms to account and ensure a safer and more transparent online environment for all. EU negotiators must ensure that the DSA has the protection of citizens’ fundamental rights and democracy at its core, establishing meaningful long-term accountability and scrutiny of online platforms. Key outstanding issues must be resolved in the Trilogues, including EU-level enforcement, due diligence requirements, data scrutiny and tackling systemic risks related to tracking-based advertising.

As the DSA negotiations progress, we therefore urge you to prioritise the following issues:

A strong EU-level enforcement regime for VLOPs (Art 50)

We commend the Council for its support for an EU-level enforcement structure, as confirmed in the General Approach, and recommend giving enforcement powers to an independent unit inside the European Commission to oversee VLOPs. Matched with adequate resources, we believe independent EU-level enforcement powers offer the best opportunity for ensuring deep and consistent checks of VLOPs’ compliance with due diligence measures from the outset. We urge you to prioritise this in the negotiations, avoiding the pitfalls of fragmentation and delay that have plagued other EU legislation such as the GDPR.

Tackling the most egregious forms of tracking-based advertising (Art 24)

The European Parliament’s DSA position secures important new safeguards against some of the most egregious and invasive forms of profiling for tracking-based advertising: the targeting of minors and the use of sensitive data - including sexual orientation, health data, or religious and political beliefs. EU policymakers must urgently guarantee this protection for citizens. This type of data should not be used for advertising purposes, given the inherent systemic risks posed. Recent polling from Global Witness and Amnesty Tech in France and Germany has shown that not only are citizens deeply uncomfortable with their sensitive data being used for advertising, but SMEs are also wary, believing their own customers would disapprove and wanting to see more regulation.

An end to manipulative practices and fair access (Art 13a & 24)


If the DSA is meant to truly empower users and protect fundamental rights, platforms must be prevented from using manipulative design techniques, or “dark patterns”, to coerce users’ consent and decisions. The Parliament’s addition of Article 13a on “Online interface design and organisation” is an essential development for safeguarding users’ rights and protecting them from unfair consumer practices. This must include the ability for users to indicate their opt-out preference in the browser via a legally binding “do not track” signal, sparing them from continuous consent banners. Refusing consent should be just as easy as giving it, and users who reject tracking should still have alternative access options which are fair and reasonable (Art 13a 1; Art 24 1a).

Ensuring meaningful third party scrutiny of VLOPs (Art 31)

While we welcome the DSA’s ambition to mandate data scrutiny of VLOPs by third parties in relation to their systemic risks, we are concerned this crucial oversight measure will be severely weakened if it is limited to academics and if platforms are able to invoke a broad “trade secrets” exemption. Given the crucial role civil society organisations play in holding platforms to account and exposing rights breaches and other harms, access should be extended to them - provided their proposals adhere to the highest ethical and methodological standards and they are able to secure any personal data they receive. Currently, scrutiny is severely hampered by the lack of data available as well as a hostile approach from key platforms. This includes Facebook’s intimidation of AlgorithmWatch into shutting down its Instagram Monitoring Project by weaponizing the company’s terms of service. We therefore strongly urge you to support the Parliament’s position to widen access to include “vetted not-for-profit bodies, organisations or associations” and remove the trade secrets exemption.

Widening risk assessment to cover all rights and social harms (Art 26 and 27) 

We urge you to support the Parliament’s position on risk assessment and clarify the text to ensure that it expands risk assessment to consider all fundamental rights, as set out in the EU Charter of Fundamental Rights, while maintaining a focus on social harms such as disinformation. This expansion is essential to ensure risk assessment is comprehensive and sufficiently addresses all systemic risks - current and future. A crucial addition from the Parliament’s position is to require that risks posed by algorithms, activities, and business-model choices be assessed before new products are deployed, with an explicit focus on VLOPs’ business-model choices and the inclusion of risks stemming from “algorithmic systems”. Finally, the DSA should require that civil society organisations be consulted as part of VLOPs’ risk assessment and when designing risk mitigation measures, as the Parliament’s position underlines (Art 26 2a; Art 27 1a). This is essential as a check on potential negative effects of mitigation measures on citizens or minorities, such as discriminatory moderation or over-removal of content.

Empowering users to seek redress (Art 17)

We commend the Council for its position regarding the internal complaint handling system, empowering users to seek redress against wrongful actions and inactions by the platforms. As the General Approach makes clear, the system must be broadened so it covers all cases, including where users want to act when a platform has not removed or disabled access to a piece of content. Failing to broaden the application of this Article would further harm victims of hate speech and vulnerable communities, who would be left powerless. We therefore strongly urge you to follow Council’s position (by including “whether or not” in Art.17 (1)) and provide redress through internal complaint handling mechanisms to all users.

Priorités pour les trilogues (pdf)

Schwerpunkte für die Triloge (pdf)

Key issues for trilogues (pdf)

MEPs Stand Up to Big Tech with Significant DSA Vote
14 February 2022
https://peoplevsbig.tech/meps-stand-up-to-big-tech-with-significant-dsa-vote/


The European Parliament moves to curtail invasive advertising and block loopholes that would worsen vulnerability to disinformation attacks.

In a full vote held in Strasbourg on the evening of 19 January, with results announced the following morning, Members of the European Parliament (MEPs) backed amendments to Article 24 of the Digital Services Act (DSA) that impose tougher restrictions on how personal data can be used in targeted advertising, including a ban on the use of sensitive data for targeted ads and a requirement that platforms provide continued, fair access to users who turn off targeted ads. Although the Parliament missed a historic opportunity to fully outlaw targeted ads based on people’s personal data, these essential steps will help restrict the abusive business model that allows Big Tech companies to profit from the invasive collection and use of their users’ data.

The final DSA text (which must still go through the Trilogue process before it becomes law) comes after months of hard campaigning by civil society groups in the face of unprecedented lobbying from Silicon Valley firms. Another welcome development was the voting down of an amendment (Recital 38) that would have effectively mandated the continued algorithmic promotion of content from any outlet calling itself media, even if that content is disinformation. Other wins reflected in the vote include last year’s defeat of a broad trade-secrets exemption that would have undermined crucial data-access and scrutiny provisions in the DSA, as well as widened access to platform data for third-party researchers, including civil society.

The MEPs’ vote to outlaw the most invasive targeted-advertising practices embodies the growing global momentum against Big Tech’s surveillance-advertising model. The crucial European vote came on the heels of US Members of Congress separately proposing legislation to ban surveillance advertising in the US – the latest signal that lawmakers around the world are looking to take a stand against Big Tech’s abusive business model.

Members of the People vs Big Tech network welcomed the outcome of the European Parliament’s vote and called on EU leaders to ensure these changes are signed into law later this year, releasing a joint statement here.

In response to the outcome of the vote and the collective efforts of the People vs Big Tech network to help secure it, MEPs said the following:

  • MEP Karen Melchior, Danish Social Liberal Party: "This week the people of Europe took back control. People Vs Big Tech campaign allowed to unify civil society and digital rights activists; bringing the debate into the mainstream. The united front paid off when we voted the amendments in plenary, we managed to fight off the media exception, and got a majority for protection against tracking ads. I’m grateful for the work of People Vs Big Tech. You should all be proud of the results achieved!"
  • MEP Alexandra Geese, Greens: "I thank all of you, who helped us to expose the interests of the big-tech lobby in the public debate and to ensure objectification. The outcome is a tremendous success. You have opened up the discussion space and brought the debate back to the facts."
  • MEP Paul Tang, Progressive Alliance of Socialists and Democrats: "[The] DSA voting result proved Big Tech's long-lasting campaign - worth millions of euros - couldn't stand the power of all these individuals and civil society organisations defending their rights and interests. Civil society won! We are, as MEP's, but foremost as members of the Tracking-free Ads Coalition, enormously grateful for all your efforts and this powerful result! We are not there yet. However, by continuing this cooperation and unity, I'm confident we will effectively limit the harmful practices of a few and make the many powerful. Many thanks once again!"
  • MEP Kim Van de Sparrentak, Greens: "We made a number of groundbreaking steps! Thank you for creating a strong movement, raising the momentum and making sure people’s voices were heard, to counter big tech’s lobbying efforts. We’ll keep up the fight for more fundamental change in the next years. Together we will end the divisive recommender algorithms and toxic business models for once and for all."

The next round of DSA negotiations (the so-called Trilogues) is already underway, with the aim of agreeing the final package as early as April. Going forward, the People vs Big Tech network will continue to work to protect these victories while also demanding greater access to justice for victims of online abuse under Article 17 of the Act. As it stands, the DSA leaves victims of digital violence and abuse with no option to appeal if their notifications or requests for remedy are denied by the platforms. Platforms’ content moderation practices already disproportionately harm marginalised groups – a lack of appeal options would have a silencing effect on large numbers of platform users, such as women and minorities.

On this important issue, Josephine Ballon, Head of Legal of HateAid, said “Every second woman is afraid to express their opinion freely online. With this vote, the European Parliament leaves millions of users defenseless against hate speech and disinformation - with devastating consequences especially for women and minority groups. HateAid, the first counseling center for victims of online violence in Germany, is calling on the Council to uphold their position concerning equal access to mechanisms laid out in Article 17 and Article 18 in the Trilogues.”

MEPs Must Reject “Media Exemption” Loopholes in the DSA https://peoplevsbig.tech/meps-must-reject-media-exemption-loopholes-in-the-dsa/ Sun, 16 Jan 2022 06:02:00 +0000 https://peoplevsbig.tech/?p=557


A proposed "media exemption" amendment will lead to greater online disinformation attacks against EU citizens.

On Thursday, Members of the European Parliament (MEPs) will vote on the Digital Services Act (DSA), a landmark piece of legislation and a golden opportunity for Europe to address algorithmic harms and turn off Big Tech’s manipulation machine. Years in the making, the DSA has the potential to make a significant impact in the critical fight against online disinformation by requiring platforms to mitigate the serious risks created by the functioning of their services – including the way their algorithms amplify illegal and harmful content such as propaganda and disinformation attacks. Yet this potential could be severely undermined by a proposed “media exemption” amendment, which would effectively mandate the continued algorithmic promotion of media news content even if that content is false. This is particularly problematic given that who or what constitutes “media” remains vague. If a false story were published by a media outlet, platforms would be unable to apply circuit breakers to their own algorithms to deamplify the disinformation – regardless of how damaging it may be.

It is important to acknowledge that rigorous journalism and fact-checking are vital in the fight against disinformation and must be protected and promoted in a healthy democracy. Nor should platforms be allowed to abuse their power arbitrarily. But creating carve-outs in the DSA is a dangerous solution – one that prompted European Commission Vice-President Věra Jourová (responsible for disinformation and media freedom) to call it “good intentions leading to hell.” That’s why over 50 fact-checkers, journalists, and experts called on MEPs to reject “media exemption” loopholes in the DSA. As Thursday’s vote approaches, the People vs Big Tech network once again affirms that any “media exemption” must be categorically rejected in order to maintain the efficacy of the DSA. Such loopholes, if passed, would open the floodgates to disinformation because:

  1. Current media rules do not sufficiently limit the production and publication of false news and information (regulatory frameworks governing the press and broadcasters are designed to offer ex-post, not ex-ante, remedies). A “media exemption” to the DSA would therefore allow the damage from a false story to keep spreading online before any remedy could be imposed.
  2. Troubling trends in media ownership have allowed for “a resurgence of press baronism and politicisation.” Under a “media exemption” loophole, potentially compromised media outlets would gain cover for publishing conspiracy theories or other falsehoods designed to advance the personal and political agendas of their owners.
  3. State-controlled media in countries like Russia can produce what amounts to “licensed propaganda,” and self-regulated private media organisations do not always come with quality assurances. If a “media exemption” loophole were passed, such stories – even dangerously misleading or blatantly false ones – would continue being shown to millions of people.

In line with these crucial considerations, media scholar Dr. Justin Schlosberg has just released a new briefing outlining why any “media exemption” in the DSA will lead to greater online disinformation attacks against European citizens. A Reader in Journalism and Media at Birkbeck College, University of London, and the author of multiple books about the media, Dr. Schlosberg unpacks the above points and makes plain why it is paramount for MEPs to reject any proposed exemptions. As he notes: “A frictionless system that algorithmically amplifies content cannot have special carve-outs for any type of content.” The briefing also shows how current provisions in the DSA already protect the media from arbitrary decisions by powerful platforms without needing exemptions.

With voting on the DSA this week, now is the time to take a stand and tell MEPs that the DSA can still protect media freedom while rejecting media exemptions.

Download Dr. Schlosberg’s full briefing, and a tweet-ready image calling for MEPs to reject any “media exemption” loophole, below.

Schlosberg DSA Media Exemption Briefing (pdf)

No Media Exemption Loophole (Image) (png)

The Tale of the Heckler https://peoplevsbig.tech/the-tale-of-the-heckler/ Thu, 11 Nov 2021 06:15:00 +0000 https://peoplevsbig.tech/?p=569


WaterBear releases a short film illustrating how social media platforms are amplifying mis- and disinformation online.

Interactive streaming platform WaterBear has released a short documentary film called The Tale of the Heckler. The film illustrates how social media platforms amplify mis- and disinformation, undermining our ability to make progress on critical issues like climate change.

Watch the film below.

Also available with German and French subtitles.

The short film tells the story of a heckler at a town hall meeting where people are discussing the increase in extreme weather, its connection to climate change, and what could be done about it. The heckler claims that the storms have nothing to do with climate change, and at first only the people closest to him can hear his dissent. But then he is handed a megaphone, and suddenly his opinion travels further and louder than before. Attention is quickly drawn away from the purpose of the town hall meeting, and people leave dispirited and divided by the heckler’s takeover.

This is how mis- and disinformation spread on social media – powered by algorithms that can make unverified, triggering information go viral, creating a crisis in our information ecosystem. These platforms’ algorithms end up rewarding outright lies and hate because doing so keeps people clicking and scrolling for longer. This has deepened fault lines across societies, and the film shows how digital-world lies exacerbate real-world crises.

The film was commissioned by the global philanthropic organisation Luminate. Alaphia Zoyab, Director of Advocacy at Reset, an initiative of Luminate, said: “We live in a disinformation age where lies and hate powered by algorithms are getting amplified to millions of people. Through this simple tale of what happens to a heckler at a town hall meeting, we see how social media is polluting our public square, threatening our progress on so many vital issues. To solve this information crisis, first, we need to understand it.”

The film arrives at a critical juncture, as draft laws to regulate Big Tech – the EU’s Digital Services Act and the UK’s Online Safety Bill – are currently being debated. A massive movement of citizens across Europe is urging these governments to ensure that the new laws don’t just tackle illegal content but force platforms to address the systemic risks created by social media algorithms – such as the amplification of disinformation, hate and abuse.

Facebook’s Climate Commitments Miss the Bigger Picture https://peoplevsbig.tech/facebooks-climate-commitments-miss-the-bigger-picture/ Fri, 05 Nov 2021 06:17:00 +0000 https://peoplevsbig.tech/?p=571


As global leaders convene to set stronger emission goals at COP26, a new report reveals the alarming growth of climate misinformation on the world’s most popular platform.

Politicians and policymakers from all over the world have gathered in Glasgow for the 2021 United Nations Climate Change Conference (COP26) with the goal of establishing emission reduction standards consistent with targets from the landmark 2015 Paris Agreement. Yet while the scientific community warns that strong action must be taken immediately to stem the climate crisis, the threat posed by climate change deniers is growing with alarming speed due to the amplification of their outlier voices on online platforms. As a new report by Stop Funding Heat and the Real Facebook Oversight Board reveals, not only do climate misinformation posts rarely receive fact-checking labels on Facebook, but views of such posts on the platform far outpace the number of visitors Facebook sends to its Climate Science Information Center to get accurate information.

Though Facebook has implemented significant sustainability policies across its operations and carbon footprint, the reality remains that climate change denial, perhaps the largest threat to meaningful environmental progress, continues to thrive on the platform. An analysis by Avaaz from May 2021 found an estimated 25 million views on Facebook of “misinformation related to climate science and renewable energy” in the first two months of President Biden’s term alone. The constant spread of this toxic information plainly undermines the tech titan’s other environmental efforts and threatens the viability of new, large-scale climate policies such as those being discussed at COP26.

In an effort to capture the scale of this complex problem, Stop Funding Heat, a group of concerned individuals committed to making climate misinformation unprofitable, prepared two reports in advance of COP26. The first, On The Back Burner: How Facebook’s Inaction on Misinformation is Fueling the Climate Crisis, addressed Facebook’s lack of climate misinformation policies and its failure to contain climate misinformation on its platform.

The second, #In Denial - Facebook’s Growing Friendship with Climate Misinformation, takes this analysis a step further by documenting the significant spread of misinformation across organic posts and ads on the platform. Drawing on an English-only dataset of 196 accounts and 48,700 posts from January to August 2021, the just-released report from Stop Funding Heat and the Real Facebook Oversight Board highlights several key findings, including:

  • Climate misinformation posts are constantly being made and interacted with on Facebook. Across the dataset, there were 38,925 instances of climate misinformation, with 10 million combined interactions (likes, reactions, comments, and shares). Notably, interactions do not include the following engagements: link clicks, photo views, video views, or event responses.
  • Climate misinformation posts are constantly being viewed on Facebook. Across the dataset, there were 818,000 to 1.36 million daily views of climate misinformation (based on a conservative interaction rate of 3 to 5%).
  • Fact-checking labels are very rarely applied to climate misinformation posts. Only 3.6% of climate misinformation posts in the dataset had a fact-checking label.
  • Climate misinformation posts made by news channels and news presenters serve as hotspots for interactions. Despite making up just 4% of climate misinformation posts in the dataset, posts by news channels and news presenters accounted for 67% of all interactions.

Further unpacking these numbers helps to demonstrate the extent of the problem. As the report notes, the 818,000 to 1.36 million daily views of climate misinformation are particularly troublesome because they represent 8.2x to 13.6x the number of visitors Facebook sends to its Climate Science Information Center on a daily basis (according to Facebook’s public claim that its center has “over 100,000 daily visitors”). And for some of the most problematic climate misinformation accounts (i.e., the 41 pages and groups in the dataset that exclusively post climate misinformation), engagement is growing. In January 2021, these accounts saw 165,000 interactions with their posts. By August 2021, that monthly figure had climbed to 241,000. Further muddying the waters, only 10.6% of posts in this subset included a Facebook-applied link to the Climate Science Information Center, and only 9.1% had a fact-checking label applied. That means 80% of climate misinformation posts from accounts solely dedicated to posting this type of content received no intervention from Facebook whatsoever.

In addition to profiting off increased user interactions with inflammatory posts, the report also demonstrates that Facebook continues to accept payment for climate misinformation ads. Using the Facebook Ad Library, the report identified 113 climate misinformation ads from 1 January to 17 October 2021. Facebook’s own figures estimate these ads received 11.7 million to 14.1 million views. These ads included language such as Turning Point USA’s “Climate Change Is A HOAX!” and PragerU’s “Are you fed up with the left’s dishonest climate propaganda and hysteria like Tomi Lahren is? Sign this petition to tell mainstream media to stop censoring the important work of scientists and scholars.”

To help draw attention to the report and the climate misinformation crisis on Facebook, global activist group SumOfUs placed a 5,000 lb block of recycled ice in front of the U.S. Capitol on 4 November. As the ice melted, Facebook’s logo and flames became increasingly apparent, paralleling the damage being done to the planet by the company’s climate misinformation failures.

Said SumOfUs campaigns advisor Rewan Al-Haddad, “The flood of disinformation unleashed by Mark Zuckerberg and Facebook is drowning audiences around the world, with more than 1.3m views of climate misinformation each day. Demanding better of Facebook only leads to more greenwashing and lies. We call for the US, UK, and the EU to stop Facebook’s unchecked power, for the sake of our climate and our collective future.”

In light of the report's findings, Stop Funding Heat is calling for Facebook to (1) go public with its definition of climate misinformation, (2) share its internal research on how climate misinformation spreads on the platform, and (3) produce a transparent plan to meaningfully reduce the spread of climate misinformation on Facebook.

Said Sean Buchan, Chief Researcher for Stop Funding Heat, “Facebook has been told over and over, through public reports and in private meetings, that its platform is a breeding ground for climate misinformation. Either they don’t care or they don’t know how to fix it.”

Download #In Denial - Facebook’s Growing Friendship with Climate Misinformation here.
