Human rights Archives – People vs. Big Tech
https://peoplevsbig.tech/category/human-rights/
We’re people, not users

LGBTQ+ groups tell Leo Varadkar: reform the Irish Data Protection Commission
https://peoplevsbig.tech/lgbtq-groups-tell-leo-varadkar-reform-the-irish-data-protection-commission/
Wed, 22 Feb 2023


Dozens of LGBTQ+ and human rights groups have written to Irish Taoiseach Leo Varadkar asking him to stop enabling hate and reform the Irish Data Protection Commission. Here’s the letter:

Dear Taoiseach,

As LGBTQ+ people, organisations and allies across the world, we know the vital importance of social media for forging connection and finding community. But we also see first hand the damage inflicted by the business model of the Big Tech companies – whose algorithms are primed to amplify the worst of humanity and cause deep harm to marginalised groups.

Research suggests 78% of our community in Europe faced anti-LGBTQ+ hate crime or hate speech online in the 5 years to 2020, and that we are disproportionately affected by digital privacy violations. Just this month, Facebook, TikTok and YouTube all approved ads for publication containing extreme and violent anti-LGBTQ+ content, in a test carried out by campaign group Global Witness.

Such findings equate to real-world suffering: marginalised people viciously trolled, gay teens targeted with conversion therapy ads, harassment of Pride participants live-streamed on YouTube. And underpinning this proliferation of hate is the business model of the Big Tech companies, which mines vast amounts of our personal data to maximise “engagement” and enables deeply invasive profiling and targeting.

As an open, inclusive country committed to the protection of human rights, Ireland should be leading the charge against the corporate culture and incentives that facilitate harmful exploitation of our data. But rather than act against hate, Ireland is enabling it – neglecting enforcement of European data protection law and prioritising its relationship with Silicon Valley over the safety and dignity of millions.

This is all too obvious from the Irish Data Protection Commission’s (DPC) disappointing enforcement record. The DPC is the lead data protection regulator, with unique responsibility for holding the world’s biggest – Dublin-based – tech platforms to account. But privacy advocates, legislators, regulators and citizens across Europe and the world are starting to see the stark truth that Ireland would rather protect Silicon Valley profits than fundamental human rights.

That is why we, LGBTQ+ people, organisations and allies across the world, are calling on you to change course. We ask you to stand up for citizens everywhere, be true to Irish values and ensure Big Tech platforms respect our rights. These actions will be popular with your own citizens: according to a Yonder poll from last year, only 18% of Irish citizens think these companies are trustworthy, while 58% hold your government responsible for protecting people from the harm these companies cause.

The first step in fulfilling this responsibility must be urgent reform of the DPC. Only the DPC has the power to ensure Europe’s General Data Protection Regulation (GDPR) is working as intended when it comes to the world’s most powerful platforms. But so far it has dragged its feet on enforcement and allowed Big Tech to keep trampling on people’s rights.

In the first three and a half years of GDPR, Ireland issued just four draft decisions – 10 times fewer than Spain, for example, despite the DPC’s larger budget. The European Data Protection Board (EDPB) recently had to step in to overrule DPC decisions that allowed Meta to illegally force users to accept targeted ads. (And even now, the DPC is seeking a court order to block an EU demand that it investigate Meta’s use of data about our sexual orientations and other intimate aspects of our lives.) This is not the first time the EDPB has been forced to intervene because Ireland is failing to protect people’s basic data rights – a record sharply at odds with the country’s professed commitment to human rights globally.

In the coming months, your government has a unique opportunity to change course when it appoints two new Commissioners to the DPC. This is a welcome step, but only if it leads to meaningful change. We therefore urge you to ensure that only candidates with the expertise and integrity to hold Big Tech to account are considered for the job. It is also essential that the Irish government commission an entirely independent review of how to strengthen and reform the DPC, to make it fit for purpose, and that the new commissioners appointed are given a mandate to implement these changes.

For LGBTQ+ people across the world, social media is a vital space for expression, community and connection, with the capability to transform – even save – lives. But its positive role is being significantly undermined by a surveillance-based business model fine-tuned for spreading hate, and by the failure of regulators to bring it into line. Ireland has a powerful opportunity to change that by reforming the DPC. We urge you to seize it.

Yours sincerely,

All Out

Alliance4Europe

AI Forensics

Amnesty International

Centre for Peace Studies, Croatia

Coalition For Women In Journalism (CFWIJ)

Defend Democracy

Digital Action

Ekō

Fair Vote UK

Far Right Observatory Ireland

Global Witness

ILGA-Europe

Irish Network Against Racism (INAR)

Irish Council for Civil Liberties

Institute for Strategic Dialogue (ISD)

LGBT Ireland

'NEVER AGAIN' Association

Outhouse LGBTQ+ Centre - Ireland

Transgender Europe (TGEU)

Transgender Equality Network Ireland (TENI)

Transvanilla Transgender Association - Hungary

Panoptykon Foundation

Peter Tatchell Foundation

Rouge Direct

STOP homophobie (stophomophobie.og)

Uplift Ireland

Zagreb Pride

#jesuislà

A 10-point plan to address our information crisis
https://peoplevsbig.tech/a-10-point-plan-to-address-our-information-crisis/
Fri, 02 Sep 2022

Nobel Peace Prize laureates Maria Ressa and Dmitry Muratov launch a roadmap for fixing our global public square.

Also available in: Arabic, Filipino, Dutch, French, German, Hindi, Italian, Korean, Portuguese, Russian and Spanish.

Presented by 2021 Nobel Peace Prize laureates Maria Ressa and Dmitry Muratov at the Freedom of Expression Conference, Nobel Peace Center, Oslo, 2 September 2022.



We call for a world in which technology is built in service of humanity and where our global public square protects human rights above profits.

Right now, the huge potential of technology to advance our societies has been undermined by the business model and design of the dominant online platforms. But we remind all those in power that true human progress comes from harnessing technology to advance rights and freedoms for all, not sacrificing them for the wealth and power of a few.

We urge rights-respecting democracies to wake up to the existential threat of information ecosystems being distorted by a Big Tech business model fixated on harvesting people’s data and attention, even as it undermines serious journalism and polarises debate in society and political life.

When facts become optional and trust disappears, we will no longer be able to hold power to account. We need a public sphere where fostering trust with a healthy exchange of ideas is valued more highly than corporate profits and where rigorous journalism can cut through the noise.

Many governments around the world have exploited these platforms’ greed to grab and consolidate power. That is why they also attack and muzzle the free press. Clearly, these governments cannot be trusted to address this crisis. But nor should we put our rights in the hands of technology companies intent on sustaining a broken business model that actively promotes disinformation, hate speech and abuse.

The resulting toxic information ecosystem is not inevitable. Those in power must do their part to build a world that puts human rights, dignity, and security first, including by safeguarding scientific and journalistic methods and tested knowledge. To build that world, we must:

Bring an end to the surveillance-for-profit business model

The invisible ‘editors’ of today’s information ecosystem are the opaque algorithms and recommender systems built by tech companies that track and target us. They amplify misogyny, racism, hate, junk science and disinformation – weaponizing every societal fault line with relentless surveillance to maximize “engagement”. This surveillance-for-profit business model is built on the con of our supposed consent. But forcing us to choose between allowing platforms and data brokers to feast on our personal data or being shut out from the benefits of the modern world is simply no choice at all. The vast machinery of corporate surveillance not only abuses our right to privacy, but allows our data to be used against us, undermining our freedoms and enabling discrimination.

This unethical business model must be reined in globally, including by bringing an end to surveillance advertising that people never asked for and of which they are often unaware. Europe has made a start, with the Digital Services and Digital Markets Acts. Now these must be enforced in ways that compel platforms to de-risk their design, detox their algorithms and give users real control. Privacy and data rights, to date largely notional, must also be properly enforced. And advertisers must use their money and influence to protect their customers against a tech industry that is actively harming people.

End tech discrimination and treat people everywhere equally

Global tech companies afford people unequal rights and protection depending on their status, power, nationality, and language. We have seen the painful and destructive consequences of tech companies’ failure to prioritize the safety of all people everywhere equally. Companies must be legally required to rigorously assess human rights risks in every country they seek to expand in, ensuring proportionate language and cultural competency. They must also be forced to bring their closed-door decisions on content moderation and algorithm changes into the light and end all special exemptions for those with the most power and reach. These safety, design, and product choices that affect billions of people cannot be left to corporations to decide. Transparency and accountability rules are an essential first step to reclaiming the internet for the public good.

Rebuild independent journalism as the antidote to tyranny

Big Tech platforms have unleashed forces that are devastating independent media, swallowing up online advertising while simultaneously enabling a tech-fueled tsunami of lies and hate that drowns out facts. For facts to stand a chance, we must end the amplification of disinformation by tech platforms. But this alone is not enough. Just 13% of the world’s population can currently access a free press. If we are to hold power to account and protect journalists, we need unparalleled investment in truly independent media – whether persevering in situ or working in exile – that ensures its sustainability while incentivizing compliance with ethical norms in journalism.

21st century newsrooms must also forge a new, distinct path, recognizing that to advance justice and rights, they must represent the diversity of the communities they serve. Governments must ensure the safety and independence of journalists who are increasingly being attacked, imprisoned, or killed on the frontlines of this war on facts.

We, as Nobel Laureates, from across the world, send a united message: together we can end this corporate and technological assault on our lives and liberties, but we must act now. It is time to implement the solutions we already have to rebuild journalism and reclaim the technological architecture of global conversation for all humanity.

We call on all rights-respecting democratic governments to:
1. Require tech companies to carry out independent human rights impact assessments that must be made public as well as demand transparency on all aspects of their business – from content moderation to algorithm impacts to data processing to integrity policies.


2. Protect citizens’ right to privacy with robust data protection laws.


3. Publicly condemn abuses against the free press and journalists globally and commit funding and assistance to independent media and journalists under attack.


We call on the EU to:
4. Be ambitious in enforcing the Digital Services and Digital Markets Acts so these laws amount to more than just ‘new paperwork’ for the companies and instead force them to make changes to their business model, such as ending algorithmic amplification that threatens fundamental rights and spreads disinformation and hate, including in cases where the risks originate outside EU borders.


5. Urgently propose legislation to ban surveillance advertising, recognizing this practice is fundamentally incompatible with human rights.


6. Properly enforce the EU General Data Protection Regulation so that people’s data rights are finally made reality.


7. Include strong safeguards for journalists’ safety, media sustainability and democratic guarantees in the digital space in the forthcoming European Media Freedom Act.


8. Protect media freedom by cutting off disinformation upstream. This means there should be no special exemptions or carve-outs for any organisation or individual in any new technology or media legislation. With globalised information flows, such exemptions would give a blank check to those governments and non-state actors who produce industrial-scale disinformation to harm democracies and polarise societies everywhere.


9. Challenge the extraordinary lobbying machinery, the astroturfing campaigns and recruitment revolving door between big tech companies and European government institutions.


We call on the UN to:
10. Create a Special Envoy of the UN Secretary-General focused on the Safety of Journalists (SESJ) who would challenge the current status quo and finally raise the cost of crimes against journalists.


Signed by:
Dmitry Muratov, 2021 Nobel Peace Prize laureate
Maria Ressa, 2021 Nobel Peace Prize laureate

For more information and the full list of signatories, go to: www.10pointplan.org

Stop Facebook from Silencing Whistleblower Daniel Motaung
https://peoplevsbig.tech/stop-facebook-from-silencing-whistleblower-daniel-motaung/
Wed, 20 Jul 2022

Over 80 organisations demand that Meta drop its bid to gag South African human rights defender and whistleblower Daniel Motaung.


Days after Meta published its first human rights report, an international coalition of more than 80 organisations is demanding the company respect South African human rights defender and whistleblower Daniel Motaung. In an open letter published today, they demand Meta and Facebook content moderation outsourcing company Sama immediately cease all attempts to silence Daniel Motaung.

Meta’s most high-profile whistleblower, Frances Haugen, is also a signatory.

The letter reads:

Dear Mr Zuckerberg and Ms Gonzalez,

We are writing to you as more than 80 organizations, lawyers and citizens from around the world to demand that you drop all attempts to silence whistleblower Daniel Motaung and to crush his efforts to improve labor conditions for Facebook content moderators in Kenya and around the world.

Daniel is a human rights defender. He has the right to express himself freely under international human rights law and within the Kenyan constitution, as well as the right to seek justice for the abuses he and his colleagues say they experienced at your hands – working as Facebook content moderators for Meta’s Kenyan outsourcing partner Sama. It is on this basis that he is taking your companies to court, alleging that he and his former colleagues are victims of forced labor, human trafficking and union-busting.

But rather than engage with and learn from his story, your companies are aggressively attempting to silence Daniel, as well as Foxglove, the legal NGO supporting him, with a gag order and contempt of court proceedings. Your lawyers have even asked a judge to “crack the whip” against Daniel, a frontline worker who suffers from post-traumatic stress disorder as a result of the work he did for you – work for which he was earning just $2.20 per hour. It appears Meta and Sama would rather shut Daniel up than meaningfully address his allegations.

Daniel and the hundreds of colleagues who he is standing up for are an integral part of Facebook’s global workforce. Their relentless work sifting through the most toxic and harmful content on the platform, including beheadings and child abuse, hour after hour, day upon day, is what keeps the company in business. Their experiences should be taken seriously and they should be encouraged and supported to speak up – not fired from their jobs and gagged.

It should be a source of intense shame for Meta, one of the richest companies on earth, that it has chosen to focus its corporate clout and resources on the latter course of action. Sama, a company that professes to champion dignified work for all but has instead treated its own workers with callous disdain, should equally hang its head. It couldn’t be clearer that both Facebook and Sama view Daniel, and workers like him, as expendable.

Facebook’s treatment of a low-paid, Black whistleblower is all the more shocking when compared to its response to other whistleblowers with more privilege and profile. Frances Haugen, for example, a white former Facebook product manager who won global media attention after leaking thousands of internal company documents, has rightfully been left to speak freely. It appears to us that the company is making a racist calculation that it can safely seek to silence Daniel without causing itself a PR crisis.

Meta and Sama publicly claim to champion freedom of expression, and to support global movements fighting for equality and racial justice. It is impossible to square such statements with your actions in Kenya and with your treatment of content moderation workers globally. The first step to fixing this is to publicly affirm that you will respect Daniel’s right to speak his truth about his experiences working for your companies and to immediately cease your attempts to impose a gag order on Daniel, Foxglove and his legal team.

We also urge both Facebook and Sama to support the unionization of your content moderation workforce as a vital step towards guaranteeing fair conditions and labor rights in this hazardous industry.

Yours sincerely,


Rebecca Dixon, National Employment Law Project
Dr. Cory Doctorow (h.c.), Author and Activist
Patrick Gaspard, President and Chief Executive Officer, Center for American Progress
Frances Haugen, Facebook whistleblower
Dr. Ritumbra Manuvie, Lecturer of International Law and Human Rights, University of Groningen, The Netherlands.
Dr. J. Nathan Matias, Assistant Professor, Cornell University Departments of Communication and Information Science
Roger McNamee, Author of Zucked: Waking Up to the Facebook Catastrophe; early investor in Facebook
Dr. Safiya Noble, Author, Algorithms of Oppression: How Search Engines Reinforce Racism
Erecu Richard, Women, Climate Change & Environmental Rights Defender
Aliganyira Moses Sabiiti, Program Officer, Bunyoro Choice Uganda Masindi
Anya Schiffrin, Director, Technology, Media, and Communications specialization, School of International and Public Affairs, Columbia University
Phumzile van Damme, Ethical Tech Activist and Former South Africa MP
Shoshana Zuboff, author, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power and Charles Edward Wilson Professor Emerita at Harvard Business School
Dr. Susie Alegre, Human Rights Lawyer and Author of Freedom to Think: The Long Struggle to Liberate Our Minds

Access Now
Accountable Tech
AfricanDefenders (Pan African Human Rights Defenders Network)
Africa Freedom of Information Centre (AFIC)
Amalgamated Rural Teachers Union of Zimbabwe (ARTUZ)
ARTICLE 19 Eastern Africa
Avaaz
Campaign for Free Expression (CFE)
Central Organization of Trade Unions, Kenya (COTU-K)
Centre for Peace Studies
Center for Research on Multinational Corporations (SOMO)
CFDT Cadres /French Union of White Collars
Color of Change
Corporate Accountability Lab
CoWorker.org
The Daphne Caruana Galizia Foundation
Dare to be Grey
Defend Democracy
DefendDefenders (East and Horn of Africa Human Rights Defenders Project)
Državljan D / Citizen D
Fair Vote UK
Freedom of Expression Institute
Free Press
Global Action Plan UK
Global Forum for Media Development
Global Witness
Institute for Strategic Dialogue (ISD)
International Federation of Journalists
International Lawyers Assisting Workers Network
International Trade Union Confederation (ITUC)
Irish Council of Civil Liberties
Jeevika - Jeeta Vimukti Karnataka, India
Jobs With Justice
Kenya Human Rights Commission
Labour Start
Legal Resources Centre
Lesotho Teachers Trade Union (LTTU)
Lie Detectors
Local Sustainable Communities Organization (LOSCO)
Mazingira Institute (Kenya)
Media Institute of Southern Africa (MISA)
Namibia Media Trust (NMT)
National Coalition of Human Rights Defenders Uganda
Nothing2Hide (N2H)
Panoptykon Foundation
People Vs Big Tech
People Forum for Human Rights (People Forum)
Ranking Digital Rights
Real Facebook Oversight Board
Sherpa
Socio-Economic Rights and Accountability Project [SERAP]
Stichting the London Story
SumofUs
The All Africa Students Union (AASU)
The Coalition For Women In Journalism (CFWIJ)
The Platform to Protect Whistleblowers in Africa (PPLAAF)
The Red Vests Movement
The Signals Network
Transparency International EU
Tribeless Youth
UCLA Center for Critical Internet Inquiry
UNI Global Union
Umbrella for Journalists in Kasese (UJK)
Uplift
Uyghur Human Rights Project
WeMove Europe
Women Human Rights Defenders Hub (The Hub)
#ShePersisted
#jesuisla

Further information on Daniel’s case:

Time, 1 July 2022, Facebook Asks Judge to 'Crack the Whip' in Attempt to Silence a Black Whistleblower

Foxglove, 14 February 2022, NEW CASE: Foxglove supports Facebook content moderator sacked for leading workers to form a trade union in Kenya

Fueled by Social Media, Calls for Violence Against Muslims Reach Fever Pitch in India
https://peoplevsbig.tech/fueled-by-social-media-calls-for-violence-against-muslims-reach-fever-pitch-in-india/
Tue, 22 Feb 2022


New research documents the dangerous degree to which hate speech and disinformation on Facebook are thriving. Genocide Watch warns that the “early warning signs of genocide” are present in India.

A recent study by The London Story (TLS), a diaspora-led foundation in the Netherlands working to combat disinformation and hate speech online, reveals the shockingly pervasive nature of hate speech on Facebook in India. The foundation’s report, Face of Hatebook, finds the world’s most popular social media platform hosts and promotes “a disturbing volume of direct hate speech, disinformation, and calls to violence against minorities” – particularly those of Muslim faith. The prevalence of this type of content is all the more dangerous, notes TLS, in a country such as India, which has more Facebook users than any other country, where the platform is a de facto “public square” for news and propaganda, and where violent far-right voices have become more mainstream through amplification on Facebook and its associated apps.

The nature of the hate speech, the protection of perpetrators of violence, and the complicity of elected and law enforcement officials have all prompted genocide experts to sound the alarm that India’s minorities, particularly Muslims, are at grave risk. Genocide Watch, which issued the alert, is however careful to say that it does not mean genocide is underway – only that the signs of danger are present – and that if or when genocidal violence is unleashed, it will be by mobs rather than directly by the State. Civil society activists have been warning for years that Silicon Valley’s tools have emboldened and enabled these violent mobs. Activists are demanding that Facebook release the full human rights impact assessment it commissioned on India.

In the wake of Frances Haugen’s disclosures, Facebook’s non-English content moderation failures are well documented. Although roughly two-thirds of Facebook users engage on the platform in a language other than English, internal company documents show dangerous content and hate speech are common in regions where Facebook lacks moderators and AI solutions capable of understanding and detecting threatening content in other languages. This allocation of resources contributes to a gap in protection for at-risk populations in non-English-speaking countries such as India where unaddressed online hate speech and disinformation can be weaponised to prompt and accelerate real-world attacks.

The widespread violence that occurred against Rohingya Muslims in Myanmar is one of the most acute examples of how dangerous this environment can be. A U.N. fact-finding mission examining the Myanmar crisis found Facebook was “a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the Internet.” The mission further noted the company’s response to the ethnic cleansing was “slow and ineffective.” A subsequent independent report commissioned by the Big Tech giant itself confirmed these issues, finding the platform had created an “enabling environment” for human rights abuse and concluding: “Facebook has become a means for those seeking to spread hate and cause harm, and posts have been linked to offline violence.”

A similarly explosive situation is now brewing in India. Documents disclosed last year by Haugen to the U.S. Securities and Exchange Commission reveal Facebook knows it is struggling with a similar problem in its largest user market, where nearly 400 million accounts exist. In March of 2021, documents show company employees discussing whether they would be able to address the “fear mongering, anti-Muslim narratives” being broadcast on the platform by a far-right Hindu nationalist group with ties to Indian Prime Minister Narendra Modi. Another internal document shows significant portions of India-based Islamophobic content were “never flagged or actioned” due to Facebook’s inability to effectively moderate content in Hindi and Bengali. Facebook’s content policies in India faced additional scrutiny in 2020 when it came to light that the company’s former Public Policy Director for India, South & Central Asia, Ankhi Das, opposed applying hate-speech rules to politicians associated with Modi’s Bharatiya Janata Party and told staff doing so “would damage the company’s business prospects in the country.”

Aware of these glaring shortcomings, and pursuant to its mission to investigate human rights violations and abuses, TLS conducted its own research into hate speech in India on Facebook. TLS researchers used specific keywords to identify over 607 active Facebook groups that post anti-Muslim and pro-"Hindu Rashtra" (Hindu nation) narratives on the platform. The team compared the content of these groups’ posts to Facebook’s Community Standards, and found the following:

  • Violence and Incitement: “There are several examples of threats to kill, raze and demolish persons, properties and communities on the Facebook. These include direct threats and calls to action, as well as indirect subtle commenting that some people, religion, etc. should perish. Both direct and indirect threats have resulted in real-time violence across the world. These threats have also led to imminent harm, such as lynching, assembling of armed groups with machetes and firearms.”
  • Hate Speech: “Hateful content dehumanizing Indian Muslims, attacking them as inferior, morally bankrupt, alleging them to be violent and sexual predators, and calling for their exclusion and segregation continues to get a platform on Facebook. By allowing such massive amount of hate and vitriol, Facebook is not just complicit in dehumanizing Indian Muslims, it is also shares responsibility for creating an atmosphere of fear in India.”
  • Dangerous Individuals and Organizations: “Facebook continues to host ultra-right wing Hindu outfits like RSS, Vishwa Hindu Parishad, Hindu Swayam Sewak, Bajrang Dal, and its support system of fan pages, despite the violent proclamations against Indian Muslims of these groups and pages. Facebook continues to allow not only these organizations, but also their millions of supporters to praise and incite violence against Indian Muslims.”

Throughout the study, TLS reported its troubling findings to Facebook. Yet, consistent with the moderation failures revealed in the internal documents described above, the platform’s automated processes repeatedly responded that the content did not violate any of Facebook’s Community Standards. Although several of the most problematic posts were also submitted for human review, they all remained on the platform.

Excerpt from Face of Hatebook report: Post calling for violence against Muslims and stating “They all deserve to be kept in camps like China keeps Uyghur Muslim.”

One such post, a 2019 video of a speech in which influential Hindu religious leader Yati Narsinghanand calls for the “extermination” of Islam “from the face of the Earth,” has been viewed over 32 million times. The speech, and the bulk of its 144,000 comments, are in Hindi. The TLS team referred this post to Facebook’s Oversight Board, but the case was not selected for review.

The continued public availability of posts like this video beggars belief: Narsinghanand was ultimately arrested after a subsequent speech in Haridwar in December 2021, in which he called for violence against India’s Muslims and encouraged an “ethnic cleansing” similar to the attacks on Rohingya Muslims in Myanmar. Video of that event went viral, elevating hate-filled and violence-tinged rhetoric in the country to dangerous levels.

TLS’s additional research revealed that videos of Narsinghanand’s December speech remained publicly available in various segments on Facebook at the time of this post’s publication. Other similarly inciting videos remain available on the platform as well, such as one from last year, with nearly six million views, in which Narsinghanand refers to a fifteen-year-old Muslim boy who was beaten in his temple as a “poisonous snake”.

In light of Facebook’s inability to contain hate speech and inciting calls to violence on its platform, TLS is calling for Facebook to be shut down in India to help protect millions of Muslims and other minorities from hate speech and dehumanisation on social media. The foundation is also urging Facebook shareholders not to turn a blind eye to these harms, but rather to consciously divest from Facebook and its businesses. A public petition demanding the release of Facebook’s human rights impact assessment on India in 2020 is available for signature here. Given the volatility of the situation, and the fact that these online harms are increasingly translating into offline violence, the time for action is now.

To help raise awareness of this pressing issue, TLS is hosting several conversations on the issue of hate speech and digital propaganda at the India on the Brink: Preventing Genocide summit from February 26-28, 2022. The virtual event will bring together a variety of expert speakers to commemorate the 20th anniversary of the 2002 Gujarat pogrom, share insights and warning signs of what may be to come, and put forth possibilities for a way forward that prevents genocide from occurring in India. Those speaking at the event include former UN Special Adviser on the Prevention of Genocide Adama Dieng, Executive Director of Genocide Watch Dr. Gregory Stanton, and international genocide experts like Elisa von Joeden. All are welcome to attend and encouraged to help amplify the risks facing Indian minorities. More details and sign-up information are available here.

The post Fueled by Social Media, Calls for Violence Against Muslims Reach Fever Pitch in India appeared first on People vs. Big Tech.

Big Tech’s Assault on Women https://peoplevsbig.tech/big-techs-assault-on-women/ Wed, 24 Nov 2021 06:09:00 +0000 https://peoplevsbig.tech/?p=563 Ahead of the DSA vote, online platform priorities continue to enable rampant abuse, sexism, and racism.

The post Big Tech’s Assault on Women appeared first on People vs. Big Tech.


Ahead of the DSA vote, online platform priorities continue to enable rampant abuse, sexism, and racism.

As European leaders shape the final provisions of the Digital Services Act (DSA) and the Digital Markets Act (DMA), it is essential that they pay attention to the particular ways in which Big Tech products harm the women and gender non-conforming people who use their platforms. Considering how Facebook began -- as a way for male Harvard students to rate and rank women on their “hotness” -- it’s perhaps not surprising that these platforms continue to perpetuate and enable misogyny. But it is absolutely unacceptable going forward.

Big Tech is an industry still largely dominated by men. While there have been efforts to increase diversity at tech companies in recent years, women remain massively underrepresented -- especially in leadership and tech roles -- making up just a quarter of the entire workforce. And though Big Tech companies have pledged time and time again that they will work harder to eradicate online misogyny and disinformation, we have yet to see any meaningful results. To the contrary, we have all too often been met with further studies and internal documents revealing just how bad these issues truly are. It’s plain to see that the parties who write the rules must change.

In honour of International Day for the Elimination of Violence Against Women, this article surveys the wealth of existing research that demonstrates how women -- and especially women of colour and from LGBTQ communities -- face an increased risk of harm and abuse while they engage on online platforms. From vile hate speech and threats to rampant disinformation designed to exploit sexist and racist tropes, the facts are clear: Big Tech companies are incapable of regulating themselves. To avert further damage, decisive action must be taken by civic leaders to create an online world that is safe for all users. With a majority of young women and girls experiencing online abuse, and 87% reporting the problem is getting worse, there is simply no time to waste. European legislators must seize the golden opportunity now before them.

A troubling rise in online abuse, hate speech, and revenge porn

Academic and civil society research reveals that not only is online gendered abuse widespread -- it is on the rise. Globally, 38% of women have personally experienced online violence -- and 65% know women from within their networks who have experienced it.

During the Covid-19 pandemic, this type of abuse has increased even further. A study by UK charity Glitch into the impact of the UK’s national lockdown on online abuse against women and non-binary individuals found that 46% of respondents had experienced online abuse since the beginning of Covid-19, with 29% reporting it had gotten worse during the pandemic (for women of colour and non-binary people, this figure rose to 38%). A sharp increase in online violence against women coincided with the rise in people staying at home and spending more time online. With many workers transitioning to remote offices, online abuse alarmingly began to include the actions of colleagues (9%) as well.

For women in high profile positions, such as journalists, politicians and influencers, online abuse and threats are common. These threats make many women feel unsafe in offline spaces, too, forcing them to take additional measures to protect their safety. For some that means hiring private security or moving locations; for others it means removing themselves from online spaces and networks, censoring their actions and speech, or taking other similar precautions that unjustly inhibit their ability to express themselves.

A study by Amnesty International examining the Twitter presence of women journalists and politicians in the US and UK found that 7.1% of tweets they received were abusive or problematic. Black women in the study were 84% more likely to be the targets of online abuse than white women. The study estimated that “of the 14.5 million tweets mentioning the women, 1.1 million were abusive or problematic. That’s a problematic or abusive tweet every 30 seconds.”

Unlike the type of abuse men may receive online, the nature of gendered abuse means that the kinds of messages women receive are more violent, and often involve threats of sexual or other physical violence.

Women are also disproportionately at risk of image-based online abuse -- commonly known as revenge porn -- where private photos are leaked to online platforms or porn sites without their consent. There is little a person can do once their photo is circulating on these platforms -- while it can be reported, the onus lies with the platform to remove it, and police often lack the powers or resources to follow this up properly. This reality can cause anxiety: a survey by HateAid found that 30% of women fear their photos will be stolen or leaked online.

Troublingly, there are platforms deliberately designed to facilitate the use of women’s pictures for porn against their will, with easy-to-use interfaces where a woman’s picture or video can be uploaded in a couple of clicks. Some research estimates that between 90% and 95% of all online deepfake videos are non-consensual porn, and around 90% of those feature women. The emotional impact this can have on survivors is immense.

Disinformation designed to perpetuate sexism

While the above examples of gender-based violence focus on instances where women receive comments or messages that are targeted at them, another form of gender-based violence online is gendered disinformation, i.e., abuse about women. These kinds of disinformation campaigns are designed to exploit existing gender narratives, language, and discrimination in order to “maintain the status quo of gender equality or creating a more polarised electorate”.

Gendered disinformation is often used to discredit female politicians running for office. For example, in the US, once Kamala Harris was named as President Biden’s running mate, false claims about her were being shared 3,000 times an hour on Twitter. These kinds of disinformation campaigns work to promote the narrative that women are not good political leaders and aim to undermine female candidates by spreading disinformation about their qualifications and experience, or implying they are “too emotional” for the task -- with the aim of keeping women out of politics altogether and ultimately harming democratic processes.

Notably, these coordinated attacks on women are often orchestrated by far-right groups (as in the US context) or groups aligned with government authorities (as in the Philippines). One of the impacts of these campaigns is that they shift the narrative away from the political to the personal, meaning that women are forced to spend time refuting personal attacks and thus have less time to talk about substantive issues. This disinformation also creates barriers for other women wanting to get involved in politics, or dissuades them from standing for office at all.

Platforms have also played a significant role in facilitating the spread of disinformation targeted at transgender people, including false claims and hateful rhetoric about bathrooms, gender dysphoria, puberty blockers, "detransitioning," and mental illness. The impact of these disinformation campaigns on the trans community in particular should not be underestimated.

Young women and girls at particular risk of abuse and mental health damage

A 2020 survey by the World Wide Web foundation found that 52% of young women and girls have experienced online abuse, and 87% think the problem is getting worse. Of those that have experienced it, 51% said it affected their emotional wellbeing.

Even for those not directly under attack online, image-based platforms such as Instagram subject young women and girls to a constant stream of problematic content. Recent research by SumOfUs showed how quickly and easily users can find content promoting eating disorders or extreme dieting on Instagram -- despite the platform banning certain hashtags related to these topics. Those promoting their products know they can easily get around such restrictions by using alternative hashtags. Content promoting plastic surgery was also rampant, with promoters targeting young people and collaborating with influencers to convince girls and young women to spend money on altering their bodies.

Though Facebook, which owns Instagram, has long promised to curb this type of harmful content, we now know, thanks to Frances Haugen’s disclosures, that the company has turned a blind eye to the toxic impact its platform has on young people, particularly teenage girls. Facebook’s own research from 2019 confirms that “32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” While Facebook continues to downplay these negative impacts, its failure to address this problem allows the company to continue to profit from the ad revenue related to this kind of harmful content.

Online harms can also affect young women in other ways. For example, what message does it send when they see that the few women in the public eye are regularly targeted by vile and vicious digital violence? If Big Tech platforms aren’t properly regulated and forced to tackle digital violence adequately, young women could be deterred from taking up positions that would make them a target - limiting their career or education choices.

A strong Digital Services Act could help to tackle this

Women’s rights organisations have appealed to EU lawmakers to address these harms by supporting the key accountability tools on risk assessment, risk mitigation and mandatory audits in the European Commission’s proposal for the Digital Services Act.

They say: “Since some of this abusive behaviour is facilitated and indirectly encouraged by platforms’ design features, there needs to be a clear obligation imposed on very large platforms in particular to identify, prevent and mitigate the risk of gender-based violence taking place on and being amplified by their products. Through Article 26.1 of the DSA, platforms should be forced to take into account the ways in which design choices and operational approaches can influence and increase these risks, especially as defined in article 26.1.a-c.”

The DSA provides a unique opportunity to act on, and prevent, further harms against women and all users of Big Tech platforms. EU leaders must embrace this opportunity and pass a strong DSA that puts people ahead of Big Tech profits. "The status quo is in no way supporting freedom of expression," says Lucina Di Meco, a co-founder of #ShePersisted Global. “It’s in reality supporting censorship of women online.”

Read more about the People’s Declaration and our movement’s demands to EU leaders.


Who Writes the Rules? https://peoplevsbig.tech/who-writes-the-rules/ Mon, 04 Oct 2021 06:20:00 +0000 https://peoplevsbig.tech/?p=577 Six campaigners highlight marginalised people’s exclusion from the process of writing the rules that govern the online experience

The post Who Writes the Rules? appeared first on People vs. Big Tech.


Six campaigners highlight marginalised people’s exclusion from the process of writing the rules that govern the online experience.

Over the next few weeks, policymakers in the European Union are gearing up for a crucial round of voting on the Digital Services Act (DSA) - a legislative proposal aimed at governing digital content and laying out the roles and responsibilities of Big Tech platforms. Yet all too often, marginalised groups - often those most affected by the issues the DSA is trying to address - are excluded from these processes.

That’s why six women have come together to form Who Writes The Rules. They want to highlight the fact that marginalised people are routinely excluded from the process of writing the rules that govern the online experience - because the European Commission’s employees are overwhelmingly white and male. And they want to draw attention to the fact that the rules being written often lack adequate enforcement to protect those facing the brunt of Big Tech harm.

The Who Writes the Rules campaigners come from a variety of backgrounds: they research and prevent online abuse, run tech companies, support refugee and immigrant women to code, fight gender-based violence and advocate for women’s rights. They have come together to represent some of the people disproportionately impacted by the systemic threats to their online experience.

Below are excerpts from their stories - the full stories can be found at https://www.whowritestherules.online/ourstories

Aina Abiodun: The cost of their enrichment is my continued oppression

As a Black tech entrepreneur, Aina Abiodun has had to curate her online experience. Using digital platforms is a professional necessity, and she feels forced, against her own principles, to participate in (and fund) the continuation of their oppressive practices.

She wants policy-makers to pay more attention to these seeming ‘technicalities’ or intricacies of technological oppression, as this is clearly complicity.

“In the West, the dismantling of the legacy of colonialism and race-based oppression must include a rigorous investigation into the ways in which white power is perpetuated online and reinforces the never-ending metamorphosis of a vile and immoral anti-Black, anti-femme agenda.” - Aina Abiodun

Read Aina Abiodun’s full story here: https://www.whowritestherules.online/stories/abiodun

Asha Allen: The Brussels Bubble: Advocating for the rights of marginalised women and girls in EU tech policy

As a young Black woman advocate in the Brussels political bubble, Asha Allen has personal experience of the political exclusion of marginalised and racialised communities that continues to characterise the European decision-making space. This is mirrored by the same lack of inclusion in the digital and tech sphere, which remains overwhelmingly male and pale.

“In the case of online violence, the experience of Black women [...] represents not only some of the worst manifestations of the systemic issues regarding harm in the online space, but how our continued exclusion from these decision-making spaces only further exacerbates online violence despite efforts to combat it.” - Asha Allen

Asha Allen collaborates with activists leading the charge for digital citizenship and transformative change - and they will be watching and holding decision-makers and Big Tech to account.

Read Asha’s full story here: https://www.whowritestherules.online/stories/allen

Dr Carolina Are: Bodies have rights just like words do

Dr Carolina Are has found support and education as a survivor in sex-positive networks and spaces online. Those spaces helped her to love her body again after abuse. But now, those spaces and networks are under threat:

“Simply because social media platforms have decided that nudity - aka women’s and marginalised users’ bodies - are inappropriate, risky and worth censoring. I don’t want to lose those networks and the opportunities they provide, and I don’t want people who go through my same experiences to be left unsupported.” - Dr Carolina Are

She wants the Digital Services Act legislative process to include those affected by the policies, and not to repeat previous examples that have led to blanket censorship of bodies.

Read Dr Carolina Are’s full story here: https://www.whowritestherules.online/stories/are

Hera Hussain: Decolonising digital rights

Hera Hussain talks about the need for a decolonised approach to digital rights, because Eurocentricity in discussions of digital rights is exclusionary and short-sighted. For example, videos containing disinformation in languages other than English take longer to take down, simply because YouTube hasn’t invested in staff in the relevant countries. And while everyone worldwide was made aware of fact-checking features during elections in the US, disinformation flourishes in Hungary and Myanmar.

“We need radical reform but one that works for everyone. When we talk about reform, let’s not forget that the ripples of policies in Europe can create a tsunami in the rest of the world. Though courts see jurisdictions - the web sees none.” - Hera Hussain

Read Hera Hussain’s full story here: https://www.whowritestherules.online/stories/hussain

Dr Nakeema Stefflbauer: #defundbias in online hiring and listen to the people in Europe whom AI algorithms harm

Having lived and worked in Europe for close to a decade, Dr Nakeema Stefflbauer knows that biased hiring practices are far from unusual. And now that many EU institutions receive thousands of applications per job, more and more employers are moving to artificial intelligence (AI) hiring algorithms.

These algorithms make it far too easy to simply filter out candidates based on criteria such as their religious background, age or education. The algorithms will even select “top candidates” for the employer. But no one knows exactly what information those matches are based on. In this way, AI hiring algorithms may unfairly exclude people from job opportunities - without them ever knowing why.

Dr Nakeema Stefflbauer asks,

“What if we looked at the reality of employment discrimination in Europe and whom it actually harms? What if hiring bias in Europe was addressed with input from people with actual lived experience of the problem?”

Read her full story here: https://www.whowritestherules.online/stories/stefflbauer

Raziye Buse Çetin: The absence of marginalised people in AI policymaking

Raziye Buse Çetin has frequently witnessed how people of colour are almost totally absent from AI policy conversations. But she sees this as more than an issue of representation: AI systems can also inherently contain bias. Yet it is currently impossible even to measure algorithmic bias related to race, since collecting such data is forbidden in the EU.

For example, people of colour have shared their ongoing and traumatic experiences of not being recognised by AI security machines at the airport, and how this can automatically place them under suspicion. Most of the people involved in AI policymaking simply do not have this lived experience.

Raziye Buse Çetin says there is a reluctance in the EU to acknowledge racism, and to call out its historic roots in European colonialism.

“With the inclusion of racialised people and welcoming policies; the EU needs to adopt a racial equity approach in AI policy and understand how discrimination and inequity manifests in AI in the EU. Algorithmic bias is only one of the visible results of many intertwined forms of inequity; but the problem has deeper roots.” - Raziye Buse Çetin

Read her full story here: https://www.whowritestherules.online/stories/cetin

