Family Crest

Motto: I will never forget. [Source: HouseofNames]

HUMANITY DOOMSDAY CLOCK - Moves forward to 2125 due to the election of US President Trump.

An estimate of the time at which humanity will go extinct or civilization will collapse. The HUMANITY DOOMSDAY CLOCK moves forward to 2125 due to US President Trump's abandonment of climate change goals. The clock moved to 90 seconds to doom in December 2023. Apologies to the Bulletin of the Atomic Scientists for using the name.

PLEASE QUOTE, COPY and LINK

While this material is copyrighted, you are hereby granted permission and encouraged to copy and paste any excerpt and/or complete statement from any entry on this blog into any form you choose. In return, please provide explicit credit to this source and a link or URL to the publication. Email links to mckeever.mp@gmail.com

You may also wish to read and quote from these groundbreaking essays on economic topics, with the same permission outlined above:

The Jobs Theory of Growth [https://miepa.net/apply.html]

Moral Economics [https://miepa.net/moral.html]

Balanced Trade [https://miepa.net/essay.html]

There Are Alternatives to Free Market Capitalism [https://miepa.net/taa.html]

Specific Country Economic Policy Analyses - More Than 50 Countries from Argentina to Yemen [https://miepa.net/]





Monday, February 20, 2023

It’s Time to Tear Up Big Tech’s Get-Out-of-Jail-Free Card



By Julia Angwin, New York Times Opinion section


Ms. Angwin is a contributing Opinion writer and an investigative journalist.


I still remember the shock I felt when I was able to buy a Facebook ad aimed only at house hunters who were white — something the Fair Housing Act was designed to prevent — in just minutes. But even more shocking is that it took six years after my test for Meta, Facebook’s parent company, to comply with the act. As of today, the company still has not fully fixed its discriminatory ad system.


A major reason for the delay: Section 230, the notorious snippet of law embedded in the 1996 Telecommunications Act, which Meta and others have successfully used to protect themselves from a broad swath of legal claims.


The law, created when the number of websites could be counted in the thousands, was designed to protect early internet companies from libel lawsuits when their users inevitably slandered one another on online bulletin boards and chat rooms. But since then, as the technology evolved to billions of websites and services that are essential to our daily lives, courts and corporations have expanded it into an all-purpose legal shield that has acted similarly to the qualified immunity doctrine that often protects police officers from liability even for violence and killing.


As a journalist who has been covering the harms inflicted by technology for decades, I have watched how tech companies wield Section 230 to protect themselves against a wide array of allegations, including facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking — behavior that they would have likely been held liable for in an offline context.


This week the Supreme Court will hear arguments in a case that could limit tech companies’ use of the legal shield of Section 230. The family of a victim of an ISIS terrorist shooting in Paris argued that Google’s algorithms should be held responsible for promoting ISIS videos. Google says it is protected by Section 230.


Big tech companies argue that any limitations to the broad immunity they enjoy could break the internet and crush free speech, while advocates for reform argue that broad immunity incentivizes tech companies to underinvest in harm reduction.


But there is a way to keep internet content freewheeling while revoking tech’s get-out-of-jail-free card: drawing a distinction between speech and conduct.


In this scenario, companies could continue to have immunity for the defamation cases that Congress intended, but they would be liable for illegal conduct that their technology enables.


Courts have already been heading in this direction by rejecting the use of Section 230 in a case where Snapchat was held liable for its design of a speed filter that encouraged three teenage boys to drive incredibly fast in the hopes of receiving a virtual reward. They crashed into a tree and died.


In its Supreme Court brief, the Biden administration argues in favor of drawing a line between the benign algorithmic sorting that enables popular products like Google search and algorithmic manipulation that can violate the law, such as recommending terrorism-related content. “When an online service provider substantially adds or otherwise contributes to a third party’s information,” it may be held liable, the government argues.


I have seen firsthand how Section 230 enables tech companies to do little to address the harms their technologies can enable. In 2016 the civil rights attorney Rachel Goodman called to tell me that she had been trying unsuccessfully to warn Facebook that advertisers could use its ad targeting algorithms to violate the Fair Housing Act.


With Facebook’s automated ad targeting system, Ms. Goodman told me, advertisers could buy ads that were shown only to audiences Meta had identified as white without anyone being the wiser. To test her claim, my colleague Terry Parris Jr. and I decided to buy an ad. We logged onto Facebook’s ad portal and selected an audience of people interested in buying a house.


We were then offered a drop-down menu with a choice of audiences to exclude from seeing our ad. We chose to exclude three “ethnic affinity” groups: African Americans, Asian Americans and Hispanics. After 15 minutes, our ad was approved.


We immediately deleted our test. We had just witnessed the face of 21st-century discrimination: silent attributes hidden in code. There was no need for a “whites only” label in the ad. Hardly anyone but white people would ever see the ad.


Facebook responded to public pressure by adding language to its fine print notifying advertisers that they were responsible for complying with civil rights laws. It said it would build an algorithm to stop advertisers from exploiting racial categories in housing, employment and credit ads. (The company’s algorithm didn’t address other protected categories in civil rights law, such as age and gender.)


After our article was published, several lawsuits were filed against Facebook alleging violations of the Fair Housing Act. Facebook responded with claims of immunity under Section 230. Its view was that the advertisers alone were liable for any illegality. Historically, courts had agreed. In 2008, for instance, a federal court of appeals ruled that Craigslist was not liable for discriminatory housing ads posted on its website.


Less than a year after Facebook started using a new algorithm, I was able to buy another housing ad targeted at white audiences. Facebook blamed this on a “technical failure” of its new algorithmic system. Soon after, I found dozens of companies using Facebook’s ad targeting system to exclude older people from seeing employment ads. Facebook argued that targeting job ads by age was acceptable when “used responsibly,” despite a federal law prohibiting employers from indicating an age preference in advertising.


In 2019, three years after I purchased that first discriminatory housing ad, Facebook reached a settlement to resolve several legal cases brought by individual job seekers and civil rights groups and agreed to set up a separate portal for housing, employment and credit ads, where the use of race, gender, age and other protected categories would be prohibited. The Equal Employment Opportunity Commission also reached settlements with several advertisers that had targeted employment ads by age.


But later that year, researchers at Northeastern University found that the new portal’s algorithm continued to distribute ads in a biased manner: “Ads for supermarket jobs were shown primarily to women, while ads for jobs in the lumber industry were presented mostly to men.”


This is the problem with automated systems. They can create discrimination even when discriminatory variables are removed from their inputs because they often have enough information to make surprisingly accurate inferences.
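The mechanism described here, delivery skew arising from proxy features rather than from an explicit race variable, can be illustrated with a small simulation. The sketch below is hypothetical: the ZIP codes, probabilities and delivery rule are invented for illustration and are not Meta's actual system; it only shows how a rule that never sees race can still reproduce a racial skew through a correlated feature.

```python
# A minimal, hypothetical sketch of "proxy" discrimination.
# The protected attribute is never an input to the delivery rule,
# yet a correlated feature (a made-up ZIP code) reproduces the skew.

import random

random.seed(0)

# Synthetic audience: ZIP 11111 is ~90% white, ZIP 22222 is ~90% non-white.
def make_person():
    zip_code = random.choice(["11111", "22222"])
    if zip_code == "11111":
        race = "white" if random.random() < 0.9 else "non-white"
    else:
        race = "non-white" if random.random() < 0.9 else "white"
    return {"zip": zip_code, "race": race}

audience = [make_person() for _ in range(10_000)]

# Ad delivery rule that never looks at race -- only at the ZIP-code proxy.
def show_ad(person):
    return person["zip"] == "11111"

shown = [p for p in audience if show_ad(p)]
share_white = sum(p["race"] == "white" for p in shown) / len(shown)
print(f"Share of ad impressions going to white users: {share_white:.0%}")
# Prints roughly 90%, even though 'race' was never an input to the rule.
```

Removing the race column from the inputs, as in this toy example, does nothing to the outcome as long as another feature carries the same information, which is why the settlements that only banned explicit use of protected categories did not end the biased delivery.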


Meanwhile, reporters at the nonprofit newsroom I founded, The Markup, identified credit card advertisements targeted by age. Facebook said “enforcement is never perfect” and that it would remove the ads we identified. Because of Section 230, Meta kept winning in the courts.


Last year, Meta agreed to yet another settlement, this time with the U.S. Department of Justice. The company agreed to pay a fine of more than $115,000 and to build a new algorithm — just for housing ads — that would distribute such ads in a nondiscriminatory manner. But the settlement didn’t fix any inherent bias embedded in credit, insurance or employment ad distribution algorithms.


And so here we are, seven years after my first purchase, and Meta still hasn’t fully fixed its discriminatory ad system, even as its revenues have quadrupled. As Judge Frank Easterbrook wrote in 2003, Section 230 makes internet providers “indifferent to the content of information they host or transmit” and encourages them to “do nothing.”


Drawing a distinction between speech and conduct seems like a reasonable step toward forcing big tech to do something when algorithms can be proved to be illegally violating civil rights, product safety, antiterrorism and other important laws. Otherwise, without liability, the price of doing nothing will always outweigh the cost of doing something.


Julia Angwin is a contributing Opinion writer, an investigative journalist and the author of “Dragnet Nation: A Quest for Privacy, Security and Freedom in a World of Relentless Surveillance.”


