Deep Links

EFF's Deeplinks Blog: Noteworthy news from around the internet

Sen. Hawley’s “Bias” Bill Would Let the Government Decide Who Speaks

Thu, 06/20/2019 - 20:50

Despite its name, Sen. Josh Hawley’s Ending Support for Internet Censorship Act (PDF) would make the Internet less safe for free expression, not more. It would violate the First Amendment by allowing a government agency to strip platforms of legal protection based on their decisions to host or remove users’ speech when the federal government deems that action to be politically biased. Major online platforms’ moderation policies and practices are deeply flawed, but putting a government agency in charge of policing bias would only make matters worse.

The bill targets Section 230, the law that shields online platforms, services, and users from liability for most speech created by others. Section 230 protects intermediaries from liability both when they choose to edit, curate, or moderate speech and when they choose not to. Without Section 230, social media would not exist in its current form—the risks of liability would be too great given the volume of user speech published through them—and neither would thousands of websites and apps that host users’ speech and media.

Under the bill, platforms over a certain size—30 million active users in the U.S. or 300 million worldwide—would lose their immunity under Section 230. In order to regain its immunity, a company would have to pay the Federal Trade Commission for an audit to prove “by clear and convincing evidence” that it doesn’t moderate users’ posts “in a manner that is biased against a political party, political candidate, or political viewpoint.”

It’s foolish to assume that anyone could objectively judge a platform’s “bias,” but particularly dangerous to put a government agency in charge of making those judgments.

It might be tempting to understate the bill’s danger given that it limits its scope to very large platforms. But therein lies one of the bill’s most insidious features. Google, Facebook, and Twitter would never have climbed to dominance without Section 230. This bill could effectively set a ceiling on the success of any future competitor. Once again, members of Congress have attempted to punish social media platforms by introducing a bill that will only reinforce those companies’ dominance. Don’t forget that last time Congress undermined Section 230, large tech companies cheered it on.

Don’t Let the Government Decide What Bias Is

Sen. Hawley’s bill is clearly unconstitutional. A government agency can’t punish any person or company because of its political viewpoints, or because it favors certain political speech over others. And decisions about what speech to carry or remove are inherently political.

What does “in a manner that is biased against a political party, political candidate, or political viewpoint” mean, exactly? Would platforms be forced to host propaganda from hate groups and punished for doing anything to let users hide posts from the KKK that express its political viewpoints? Would a site catering to certain religious beliefs be forced to accommodate conflicting beliefs?

What about large platforms where users intentionally opt into partisan moderation decisions? For example, would Facebook be required to close private groups that leftist activists use to organize and share information, or instruct the administrators of those groups to let right-wing activists join too? Would Reddit have to delete r/The_Donald, the massively popular forum exclusively for fans of the current U.S. president?

The bill provides no guidance on any of these questions. In practice, the FTC would have broad license to enforce its own view on which platform moderation practices constitute bias. The commissioners’ enforcement decisions would almost certainly reflect the priorities of the party that nominated them. Since the bill requires that a supermajority of commissioners agree to grant a platform immunity, any two of the five FTC commissioners could decide together to withhold immunity from a platform.

That’s the problem: this bill would let the government make decisions about whose speech stays online, something the government simply cannot do under the U.S. Constitution. To see how a government might attempt to push the FTC to focus only on certain types of bias or censorship, consider President Trump’s relentless focus on perceived anti-conservative bias on social media. Before supporting the bill, conservatives in Congress may want to consider how it might be used by future administrations.

The Problem Isn’t Bias; It’s Censorship

Hawley’s bill is rooted in a long-running meme about anti-conservative bias on social media. The White House recently launched a survey on bias on social media platforms with the obvious policy goal of bolstering President Trump’s claims that the big Internet companies are stacked against conservatives. Congress held a hearing last year to discuss platform moderation practices, but most of the discussion centered on Diamond and Silk, the conservative commentators who claim to have been censored on Facebook.

In reality, there is little evidence of systemic bias against conservatives on social media. The most egregious examples of censorship online spring not from political bias but from naïve platform moderation policies: YouTube scrubbing Syrian activists’ documentation of human rights violations under rules intended to curb extremism; state actors taking advantage of Facebook’s reporting mechanisms to remove political dissidents’ posts; Tumblr’s nudity filter censoring innocuous patent illustrations. The national discussion on bias in social media too often ignores such stories, and under Sen. Hawley’s bill, it’s hard to imagine the FTC doing much about them.

Social media platforms must work to ensure that their moderation policies don’t silence innocent people, intentionally or unintentionally. That requires common sense measures like clear, transparent rules and letting users appeal inappropriate moderation decisions, not a highly politicized system where a government agency assesses a platform’s perceived bias.

As we have argued in several recent amicus briefs, Internet users are best served by the existence of both moderated and unmoderated platforms, both those that are open forums for all speech and those that are tailored to certain interests, audiences, and user sensibilities. This bill threatens the existence of the latter.

Section 230 Doesn’t—and Shouldn’t—Preclude Platform Moderation

Sen. Hawley’s bill comes after a long campaign of misinformation about how Section 230 works. A few members of Congress—including Sen. Hawley—have repeatedly claimed that under current law, platforms must make a choice between their right under the First Amendment to moderate speech and the liability protections that they enjoy under Section 230. In truth, no such choice exists. Under the First Amendment, platforms have the right to moderate their online platforms however they like; Section 230 additionally shields them from most types of liability for their users’ activity. It’s not one or the other. It’s both.

Indeed, one of Congress’ motivations for passing Section 230 was to remove the legal obstacles that discouraged platforms from filtering out certain types of speech (at the time, Congress was focusing its attention on sexual material in particular). In two important early cases over Internet speech, courts allowed civil defamation claims against Prodigy but not against CompuServe. Because Prodigy deleted some messages for “offensiveness” and “bad taste,” a court reasoned, it could be treated as a publisher and held liable for its users’ posts even if it lacked knowledge of the contents.

Reps. Chris Cox and Ron Wyden realized in 1995 that that precedent would hamstring the nascent industry of online moderation. That’s why they introduced the Internet Freedom and Family Empowerment Act, which we now know as Section 230.

Hawley’s bill would bring us closer to that pre-230 Internet, punishing online platforms when they take measures to protect their users, including efforts to minimize the impacts of harassment and abuse—the very sorts of efforts that Section 230 was intended to preserve. While platforms often fail in such measures—and frequently silence innocent people in the process—giving the government discretion to shut down those efforts is not the solution.

Section 230 plays a crucial, historic role in protecting free speech and association online. That includes the right to participate in online communities organized around certain political viewpoints. It’s impossible to enforce an objective standard of “neutrality” on social media—giving government license to do so would pose a huge threat to speech online.

Categories: Privacy

Massachusetts Can Become a National Leader to Stop Face Surveillance

Tue, 06/18/2019 - 16:27

Massachusetts has a long history of standing up for liberty. Right now, it has the opportunity to become a national leader in fighting invasive government surveillance. Lawmakers need to hear from the people of Massachusetts that they oppose government use of face surveillance.

Face surveillance poses a threat to our privacy, chills protest in public places, and gives law enforcement unregulated power to undermine due process. The city of Somerville—home of Tufts University—has heard these concerns and is considering a ban on that city’s use of face surveillance. Meanwhile, bills before the Massachusetts General Court would pause the government’s use of face surveillance technology on a statewide basis. This moratorium would remain in place unless the legislature passes measures to regulate these technologies, protect civil liberties, and ensure oversight of face surveillance use.

Face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies often rely on images pulled from mugshot databases—which exacerbates historical biases born of unfair policing in Black and Latinx neighborhoods. If such systems are incorporated into street lights or other forms of surveillance cameras, people in these communities may be unfairly targeted simply because they appeared in another database or were subject to discriminatory policing in the past.

Last month, San Francisco became the first city in the country to ban government use of face surveillance, showing it is possible for us to take back our privacy in public places. Oakland is now examining a similar proposal. Somerville is the first community on the East Coast to consider a ban.

The people of Somerville, with support from Ward 3 Council Member Ben Ewen-Campen, have a chance now to stand against government use of face surveillance and proclaim that they do not want it in their community. Speak up to protect your privacy rights, and demand that the Somerville City Council pass Councilor Ewen-Campen’s ordinance banning government use of face surveillance in Somerville.

TAKE ACTION

Support Somerville’s ban on face surveillance

If you are in the Somerville area and would like to speak at the city’s legislative affairs council meeting, please contact organizing@eff.org.

The Somerville City Council has also endorsed a pair of bills in the state legislature that would press pause on the use of face surveillance throughout Massachusetts. Specifically, Massachusetts bills S.1385 and H.1538 would place a moratorium on government use of face surveillance.

TAKE ACTION

Tell your legislators to press the pause button on face surveillance

Polling from the ACLU of Massachusetts has found that 91 percent of likely voters in the state support government regulation of face recognition surveillance and other biometric tracking. More than three-quarters, 79 percent, support a statewide moratorium.

Governments should immediately stop using face surveillance in our communities, given what researchers at MIT’s Media Lab and others have said about its high error rates—particularly for women and people of color. But even if manufacturers someday mitigate these risks, government use of face recognition technology will threaten safety and privacy, amplify discrimination in our criminal justice system, and chill every resident’s free speech.

Support bans in your own communities and tell lawmakers it’s time to hit the pause button on face surveillance across the country.

Categories: Privacy

The Lofgren-Amash Amendment Would Check Warrantless Surveillance

Tue, 06/18/2019 - 11:10

The NSA has used Section 702 of the FISA Amendments Act to justify collecting and storing millions of Americans’ online communications. Now, the House of Representatives has a chance to pull the plug on funding for Section 702 unless the government agrees to limit the reach of that program.

The House of Representatives must vote yes to adopt this important corrective. Amendment #24, offered by Representatives Lofgren (CA) and Amash (MI), would make sure that no money in next year’s budget funds the warrantless surveillance of people residing in the United States. Specifically, their amendment would withhold money [PDF] intended to fund Section 702 unless the government commits not to knowingly collect the data of people who are communicating from within the U.S. with other U.S. residents and who are not specifically communicating with a foreign surveillance target.

Section 702 allows the government to collect and store the communications of foreign intelligence targets outside of the U.S. if a significant purpose is to collect “foreign intelligence” information. Although the law contains some protections—for example, a prohibition on knowingly collecting communications between two U.S. citizens on U.S. soil—we have learned that the program actually does sweep up billions of communications involving people not explicitly targeted, including Americans. For example, a 2014 report by the Washington Post that reviewed a “large cache of intercepted conversations” provided by Edward Snowden revealed that 9 out of 10 account holders “were not the intended surveillance targets but were caught in a net the agency had cast for somebody else.”

The Lofgren-Amash amendment would require the government to acknowledge the protections in the law and to explicitly promise not to engage in “about collection,” the practice of collecting communications that merely mention a foreign intelligence target. About collection has been one of the most controversial aspects of Section 702 surveillance, and although the government ended this practice in 2017, it has consistently claimed the right to restart it.

With a big fight looming later this year on whether Congress should renew another controversial national security law, Section 215 of the Patriot Act, we encourage the House of Representatives to vote Yes on the Lofgren-Amash Amendment to take a step toward reining in Section 702.

Categories: Privacy

Certbot's Website Gets a Refresh

Mon, 06/17/2019 - 21:46

Certbot has a brand new website! Today we’ve launched a major update that will help Certbot’s users get started even more quickly and easily.

Certbot is a free, open source software tool for enabling HTTPS on manually-administered websites, by automatically deploying Let’s Encrypt certificates. Since we introduced it in 2016, Certbot has helped over a million users enable encryption on their sites, and we think this update will better meet the needs of the next million, and beyond.
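
To give a sense of what that automation looks like in practice, here is a minimal sketch (not an official Certbot interface) that drives the certbot command-line tool from Python to request and install a certificate for a site served by nginx. It assumes Certbot and its nginx plugin are already installed and that the domain already points at the server; the exact flags you need will depend on your web server and Certbot version.

    # Minimal sketch: drive the certbot CLI from Python.
    # Assumes certbot (with the nginx plugin) is installed and that
    # the domain's DNS already points to this server.
    import subprocess

    def obtain_certificate(domain, email):
        # --nginx: let Certbot obtain the certificate and configure nginx.
        # --non-interactive and --agree-tos allow unattended runs.
        subprocess.run(
            [
                "certbot", "--nginx",
                "-d", domain,
                "-m", email,
                "--agree-tos",
                "--non-interactive",
            ],
            check=True,  # raise CalledProcessError if certbot fails
        )

    if __name__ == "__main__":
        obtain_certificate("example.com", "admin@example.com")

Renewal is typically handled separately (for example, by running certbot renew on a schedule), and the instructions on the new site walk through those steps for different servers and operating systems.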

Certbot is part of EFF’s larger effort to encrypt the entire Internet. Websites need to use HTTPS to secure the web. Along with our browser add-on, HTTPS Everywhere, Certbot aims to build a network that is more structurally private, safe, and protected against censorship.

This change is the culmination of a year’s work in understanding how users interact with the Certbot tool and information around it. Last year, the Certbot team ran user studies to identify areas of confusion—from questions users had when getting started to common mistakes that were often made. These findings led to changes in both the instructions for interacting with the command-line tool, and in how users get the full range of information necessary to set up HTTPS.

The new site will make it clearer what the best steps are for all users, whether that’s understanding the prerequisites to running Certbot, getting clear steps to install and run it, or figuring out how to get HTTPS in their setup without using Certbot at all.

Over a year ago, Let’s Encrypt hit 50 million active users—and counting. We hope this update will help us build on that growth, and make unencrypted websites a thing of the past.

Categories: Privacy

EFF's Recommendations for Consumer Data Privacy Laws

Mon, 06/17/2019 - 13:18

Strong privacy legislation in the United States is possible, necessary, and long overdue. EFF emphasizes the following concrete recommendations for proposed legislation regarding consumer data privacy.

Three Top Priorities

First, we outline three of our biggest priorities: avoiding federal preemption, ensuring consumers have a private right of action, and using non-discrimination rules to avoid pay-for-privacy schemes.

No federal preemption of stronger state laws

We have long sounded the alarm against federal legislation that would wipe the slate clean of stronger state privacy laws in exchange for one, weaker federal one. Avoiding such preemption of state laws is our top priority when reviewing federal privacy bills.

State legislatures have long been known as “laboratories of democracy” and they are serving that role now for data privacy protections. In addition to passing strong laws, state legislation also allows for a more dynamic dialogue as technology and social norms continue to change. Last year, Vermont enacted a law reining in data brokers, and California enacted its Consumer Privacy Act. Nearly a decade ago, Illinois enacted its Biometric Information Privacy Act. Many other states have passed data privacy laws and many are considering data privacy bills.

But some tech giants aren’t happy about that, and they are trying to get Congress to pass a weak federal data privacy law that would foreclose state efforts. They are right about one thing: it would be helpful to have one nationwide set of protections. However, consumers lose—and big tech companies win—if those federal protections are weaker than state protections.

Private right of action

It is not enough for government to pass laws that protect consumers from corporations that harvest and monetize their personal data. It is also necessary for these laws to have bite, to ensure companies do not ignore them. The best way to do so is to empower ordinary consumers to bring their own lawsuits against the companies that violate their privacy rights.

Often, government agencies will lack the resources necessary to enforce the laws. Other times, regulated companies will “capture” the agency, and shut down enforcement actions. For these reasons, many privacy and other laws provide for enforcement by ordinary consumers.

Non-discrimination rules

Companies must not be able to punish consumers for exercising their privacy rights. New legislation should include non-discrimination rules, which forbid companies from denying goods, charging different prices, or providing a different level of quality to users who choose more private options.

Absent non-discrimination rules, companies will adopt and enforce “pay-for-privacy” schemes. But corporations should not be allowed to require a consumer to pay a premium, or waive a discount, in order to stop the corporation from vacuuming up—and profiting from—the consumer’s personal information. Privacy is a fundamental human right. Pay-for-privacy schemes undermine this fundamental right. They discourage all people from exercising their right to privacy. They also lead to unequal classes of privacy “haves” and “have-nots,” depending upon the income of the user.

Critical Privacy Rights

In addition to the three priorities discussed above, strong data privacy legislation must also ensure certain rights: the right to opt-in consent, the right to know, and the right to data portability. Along with those core rights, EFF would like to see data privacy legislation include information fiduciary rules, data broker registration, and data breach protection and notification.

Right to opt-in consent

New legislation should require the operators of online services to obtain opt-in consent to collect, use, or share personal data, particularly where that collection, use, or transfer is not necessary to provide the service.

Any request for opt-in consent should be easy to understand and clearly advise the user what data the operator seeks to gather, how they will use it, how long they will keep it, and with whom they will share it. This opt-in consent should also be ongoing—that is, the request should be renewed any time the operator wishes to use or share data in a new way, or gather a new kind of data. And the user should be able to withdraw consent, including for particular purposes, at any time.

Opt-in consent is better than opt-out consent. The default should be against collecting, using, and sharing personal information. Many consumers cannot or will not alter the defaults in the technologies they use, even if they prefer that companies do not collect their information.

Some limits are in order. For example, opt-in consent might not be required for a service to take steps that the user has requested, like collecting a user's phone number to turn on two-factor authentication. But the service should always give the user clear notice of the data collection and use, especially when the proposed use is not part of the transaction, like using that phone number for targeted advertising.

There is a risk that extensive and detailed consent requirements can lead to “consent fatigue.” Any new regulations should encourage entities seeking consent to explore new ways of obtaining meaningful consent to avoid that fatigue. At the same time, research suggests companies are becoming skilled at manipulating consent and steering users to share personal data.

Finally, for consent to be real, data privacy laws must prohibit companies from discriminating against consumers who choose not to consent. As discussed above, “pay-for-privacy” systems undermine privacy rules and must be prohibited.

Right to know

Users should have an affirmative “right to know” what personal data companies have gathered about them, where they got it, and with whom these companies have shared it (including the government). This includes the specific items of personal information and the specific third parties who received it, not just categorical descriptions of the general kinds of data and recipients.

Again, some limits are in order to ensure that the right to know doesn’t impinge on other important rights and privileges.  For example, there needs to be an exception for news gathering, which is protected by the First Amendment, when undertaken by professional reporters and lay members of the public alike. Thus, if a newspaper tracked visitors to its online edition, the visitors’ right-to-know could cover that information, but not extend to a reporter’s investigative file.

There also needs to be an effective verification process to ensure that an adversary cannot steal a consumer’s personal information by submitting a fraudulent right to know request to a business.

Right to data portability

Users should have a legal right to obtain a copy of the data they have provided to an online service provider. Such “data portability” lets a user take their data from a service and transfer or “port” it elsewhere.

One purpose of data portability is to empower consumers to leave a particular social media platform and take their data with them to a rival service. This may improve competition. Other equally important purposes include analyzing your data to better understand your relationship with a service, building something new out of your data, self-publishing what you learn, and generally achieving greater transparency.

Regardless of whether you are “porting” your data to a different service or to a personal spreadsheet, data that is “portable” should be easy to download, organized, tagged, and machine-parsable.
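
As a rough illustration of what “organized, tagged, and machine-parsable” could mean in practice, the sketch below writes an export as JSON. The field names and structure here are entirely hypothetical, not a schema defined by any law or platform; the point is simply that a structured, labeled format is easy for another service, or the user’s own tools, to read.

    # Hypothetical sketch of a machine-parsable data export.
    # The schema is illustrative only; no statute or platform defines it.
    import json
    from datetime import datetime, timezone

    def export_user_data(user_id, posts, contacts):
        export = {
            "format_version": "1.0",
            "exported_at": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,
            "posts": posts,        # e.g. [{"created": "...", "text": "..."}]
            "contacts": contacts,  # identifiers the user chose to export
        }
        # sort_keys and indent keep the output stable and human-readable
        return json.dumps(export, indent=2, sort_keys=True)

    if __name__ == "__main__":
        print(export_user_data(
            "user-123",
            posts=[{"created": "2019-06-01T12:00:00Z", "text": "Hello"}],
            contacts=["friend@example.com"],
        ))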

Information fiduciary rules

One tool in the data privacy legislation toolbox is “information fiduciary” rules. The basic idea is this: When you give your personal information to an online company in order to get a service, that company should have a duty to exercise loyalty and care in how it uses that information.

Professions that already follow fiduciary rules—such as doctors, lawyers, and accountants—have much in common with the online businesses that collect and monetize users’ personal data. Both have a direct relationship with customers; both collect information that could be used against those customers; and both have one-sided power over their customers.

Accordingly, several law professors have proposed adapting these venerable fiduciary rules to apply to online companies that collect personal data from their customers. New laws would define such companies as “information fiduciaries.” However, such rules should not be a replacement for the other fundamental privacy protections discussed in this post.

Data broker registration

Data brokers harvest and monetize our personal information without our knowledge or consent. Worse, many data brokers fail to securely store this sensitive information, predictably leading to data breaches (like Equifax) that put millions of people at risk of identity theft, stalking, and other harms for years to come.

Legislators should take a page from Vermont’s new data privacy law, which requires data brokers to register annually with the government (among other significant reforms). When data broker registration and the right-to-know are put together, the whole is greater than the sum of the parts. Consumers might want to learn what information data brokers have collected about them, but have no idea who those data brokers are or how to contact them. Consumers can use the data broker registry to help decide where to send their right-to-know requests.

Data breach protection and notification

Given the massive amounts of personal information about millions of people collected and stored by myriad companies, the inherent risk of data theft and misuse is substantial. Data privacy legislation must address this risk. Three tools deserve emphasis.

First, data brokers and other companies that gather large amounts of sensitive information must promptly notify consumers when their data is leaked, misused, or stolen.

Second, it must be simple, fast, and free for consumers to freeze their credit. When a consumer seeks credit from a company, that company runs a credit check with one of the major credit agencies. When a consumer places a credit freeze with these credit agencies, an identity thief cannot use the consumer’s stolen personal information to borrow money in the consumer’s name.

Third, companies must have a legal duty to securely store consumers’ personal information. Also, where a company fails to meet this duty, it should be easier for people harmed by data breaches—including those suffering non-financial harms—to take those companies to court.

Some Things To Avoid

Data privacy laws should not expand the scope or penalties of computer crime laws. Existing computer crime laws are already far too broad.

Any new regulations must be judicious and narrowly tailored, avoiding tech mandates.

Policymakers must take care that the above requirements don’t create an unfair burden for smaller companies, nonprofits, open source projects, and the like. To avoid one-size-fits-all rules, they should tailor new obligations based on the size of the service in question. For example, policymakers might take account of the entity’s revenue, or the number of people whose data the entity collects.

Too often, users gain new rights only to effectively lose them when they “agree” to terms of service and end user license agreements that they haven’t read and aren’t expected to read. Policymakers should consider the effect such waivers have on the rights and obligations they create, and be especially wary of mandatory arbitration requirements.

Next Steps

There is a daily drip-drip of bad news about how big tech companies are intruding on our privacy. It is long past time to enact new laws to protect consumer data privacy. We are pleased to see legislators across the country considering bills to do so, and we hope they will consider the principles above. 

Categories: Privacy

Congress Should Pass the Protecting Data at the Border Act

Fri, 06/14/2019 - 16:24

Under the bipartisan Protecting Data at the Border Act, border officers would be required to get a warrant before searching a traveler’s electronic device. Last month, the bill was re-introduced into the U.S. Senate by Sen. Ron Wyden (D-Ore.) and Sen. Rand Paul (R-Ky.). It is co-sponsored by Sen. Ed Markey (D-Mass.) and Sen. Jeff Merkley (D-Ore.), and the House companion bill is co-sponsored by Rep. Ted Lieu (D-Cal.).

The rights guaranteed by the U.S. Constitution don’t fade away at the border. And yet the Department of Homeland Security (DHS) asserts the power to freely search the electronic devices of travelers before allowing them entrance into, or exit from, the United States. This practice will end if Congress passes the Protecting Data at the Border Act.

Think about all of the things your cell phone or laptop computer could tell a stranger about you. Modern electronic devices can reveal your romantic and familial connections, daily routines, and financial standing. Ordinarily, law enforcement cannot obtain this sensitive information absent a signed warrant from a judge based on probable cause. But DHS claims it needs no suspicion at all to search and seize this information at the border.

The bill does much more to protect digital liberty at the border. It would protect free speech by preventing federal agents from requiring a person to reveal their social media handles, usernames, or passwords. No one crossing the U.S. border should fear that a tweet critical of ICE or CBP will complicate their travel plans.

The bill also blocks agents from denying entry or exit from the United States to any U.S. person who refuses to disclose digital account information or the contents of social media accounts, or to provide access to electronic equipment. Further, the bill would prevent border agencies from holding any lawful U.S. person for over four hours in pursuit of consensual access to online accounts or the information on electronic equipment. It would also prevent the retention of travelers’ private information absent probable cause—a protection that is increasingly important after CBP admitted this week that photographs of almost 100,000 travelers’ faces and license plates were stolen from a federal subcontractor. Can we really trust this agency to securely retain our text messages and phone camera rolls?

The bill has teeth. It forbids the use of any materials gathered in violation of the Act from being used as evidence in court, including any immigration hearings.

More than ever before, our devices hold all sorts of personal and sensitive information about us, and this bill would be an important step forward in recognizing that reality and protecting us and our devices. Congress should pass the Protecting Data at the Border Act.

To learn more, check out EFF’s pages on how you can protect your privacy when you travel, on our lawsuit challenging warrantless border searches of travelers’ devices, and on our support for the original version of this bill.

Related Cases: Alasaad v. Nielsen
Categories: Privacy

Details of Justice Department Efforts To Break Encryption of Facebook Messenger Must Be Made Public, EFF Tells Court

Wed, 06/12/2019 - 19:54
Ruling Blocking DOJ Should Be Unsealed To Keep Public Informed About Anti-Encryption Tactics

San Francisco—The Electronic Frontier Foundation, ACLU and Stanford cybersecurity scholar Riana Pfefferkorn asked a federal appeals court today to make public a ruling that reportedly forbade the Justice Department from forcing Facebook to break the encryption of a communications service for users.

Media widely reported last fall that a federal court in Fresno, California denied the government’s effort to compromise the security and privacy promised to users of Facebook’s Messenger application. But the court’s order and details about the legal dispute have been kept secret, preventing people from learning about how DOJ sought to break encryption, and why a federal judge rejected those efforts.

EFF, ACLU and Pfefferkorn told the appeals court in a filing today that the public has First Amendment and common law rights to access judicial opinions and court records about the laws that govern us. Unsealing documents in the Facebook Messenger case is especially important because the public deserves to know when law enforcement tries to compel a company that hosts massive amounts of private communications to circumvent its own security features and hand over users’ private data, EFF, ACLU and Pfefferkorn said in a filing  to the U.S. Court of Appeals for the Ninth Circuit. ACLU and Pfefferkorn, Associate Director of Surveillance and Cybersecurity at Stanford University’s Center for Internet and Society, joined EFF’s request to unseal. A federal judge in Fresno denied a motion to unseal the documents, leading to this appeal.

Media reports last year revealed DOJ’s attempt to get Facebook to turn over customer data and unencrypted Messenger voice calls based on a wiretap order in an investigation of suspected MS-13 gang activity. Facebook refused the government’s request, leading DOJ to try to hold the company in contempt. Because the judge’s ruling denying the government’s request is entirely under seal, the public has no way of knowing how the government tried to justify its request or why the judge turned it down—both of which could impact users’ ability to protect their communications from prying eyes.

“The ruling likely interprets the scope of the Wiretap Act, which impacts the privacy and security of Americans’ communications, and it involves an application used by hundreds of millions of people around the world,” said EFF Senior Staff Attorney Andrew Crocker. “Unsealing the court records could help us understand how this case fits into the government’s larger campaign to make sure it can access any encrypted communication.”

In 2016 the FBI attempted to force Apple to disable security features of its mobile operating system to allow access to a locked iPhone belonging to one of the shooters alleged to have killed 14 people in San Bernardino, California. Apple fought the order, and EFF supported the company’s efforts. Eventually the FBI announced that it had received a third-party tip with a method to unlock the phone without Apple's assistance. We believed that the FBI’s intention with the litigation was to obtain legal precedent that it could compel Apple to sabotage its own security mechanisms.  

“The government should not be able to rely on a secret body of law for accessing encrypted communications and surveilling Americans,” said EFF Staff Attorney Aaron Mackey. “We are asking the court to rule that every American has a right to know about the rules governing who can access their private conversations.”

For the motion:
https://www.eff.org/files/2019/06/12/e.c.f._9th_cir._19-15472_dckt_000_filed_2019-06-12.pdf


Contact: Andrew Crocker, Senior Staff Attorney, andrew@eff.org; Aaron Mackey, Staff Attorney, amackey@eff.org
Categories: Privacy

Experts Warn Congress: Proposed Changes to Patent Law Would Thwart Innovation

Wed, 06/12/2019 - 18:35

It should be clear now that messing around with Section 101 of the Patent Act is a bad idea. A Senate subcommittee has just finished hearing testimony about a bill that would wreak havoc on the patent system. Dozens of witnesses have testified, including EFF Staff Attorney Alex Moss. Alex’s testimony [PDF] emphasized EFF’s success in protecting individuals and small businesses from threats of meritless patent litigation, thanks to Section 101.

Section 101 is one of the most powerful tools patent law provides for defending against patents that never should have been issued in the first place. We’ve written many times about small businesses that were saved because the patents being used to sue them were thrown out under Section 101, especially following the Supreme Court’s Alice v. CLS Bank decision. Now the Senate IP subcommittee is considering a proposal that would eviscerate Section 101, opening the door to more stupid patents, more aggressive patent licensing demands, and more litigation threats from patent trolls.

Three days of testimony has made it clear that we’re far from alone in seeing the problems in this bill. Patents that would fail today’s Section 101 aren’t necessary to promote innovation. We’ve written about how the proposal, by Senators Thom Tillis and Chris Coons, would create a field day for patent trolls with abstract software patents. Here, we’ll take a look at a few of the other potential effects of the proposal, none of them good.

Private Companies Could Patent Human Genes

The ACLU, together with 169 other civil rights, medical, and scientific groups, has sent a letter to the Senate Judiciary Committee explaining that the draft bill would open the door to patents on human genes.

The bill sponsors have said they don’t intend to allow for patents on the human genome. But as currently written, the draft bill would do just that. The bill explicitly overrules recent Supreme Court rulings that prevent patents on things that occur in nature, like cells in the human body. Those protections were made explicit in the 2013 Myriad decision, which held that Section 101 bars patents on genes as they occur in the human body. A Utah company called Myriad Genetics had monopolized tests on the BRCA1 and BRCA2 genes, which can be used to determine a person's likelihood of developing breast or ovarian cancer. Myriad said that because its scientists had identified and isolated the genes from the rest of the human genome, it had invented something that warranted a patent. The Supreme Court disagreed, holding that DNA is a product of nature and “is not patent eligible merely because it has been isolated.”

Once Myriad couldn’t enforce its patents, competitors offering diagnostic screening for breast and ovarian cancer could, and did, enter the market immediately, charging just a fraction of what Myriad’s test cost. Myriad’s patent did not claim to invent any of the technology actually used to perform the DNA analysis or isolation, which was available before and apart from Myriad’s gene patents.

It’s just one example of how Section 101 protects innovation and enhances access to medicine, by prohibiting monopolies on things no person could have invented.

Alice Versus the Patent Trolls

Starting around the late 1990s, the Federal Circuit opened the door to broad patenting of software. 

“The problem of patent trolls grew to epic proportions,” Stanford Law Professor Mark Lemley told the Senate subcommittee last week. “One of the things that brought it under control was the Alice case and Section 101.”

A representative of the National Retail Federation (NRF) explained how, before Alice, small Main Street businesses were subject to constant litigation brought by “non-practicing entities,” also known as patent trolls. Patent trolls are not a thing of the past—even after Alice, the majority of patent lawsuits continue to be filed by non-practicing entities.

“Our members are a target-rich environment for those with loose patent claims,” NRF’s Stephanie Martz told the subcommittee.

She went on to give examples of patents that were rightfully invalidated under Section 101, like a patent for posting nutrition information and picture menus online, which was used to sue Whataburger, Dairy Queen, and other chain restaurants—more than 60 cases in all. A patent for an online shopping cart was used to sue candy shops and 1-800-Flowers. And a patent for online maps showing properties in a particular area was used to sue Realtors and homeowners [PDF], leading to decades of litigation.

The Alice decision didn’t end such cases, but it did make it much easier to fight back. As Martz explained, since Alice, the cost of litigation has gone down between 40 and 45 percent.

The sponsors of the draft legislation have made it clear they intend to overturn Alice. That would take us back to a time not so long ago, when small businesses had to pay unjustified licensing fees to patent trolls, or face the possibility of multimillion-dollar legal bills to fight off wrongly issued patents.

More Litigation, Less Research

The High Tech Inventors Alliance (HTIA), a group of large technology companies, also spoke against the current draft proposal.

The proposal “would allow patenting of business methods, fundamental scientific principles, and mathematical equations, as long as they were performed on a computer,” said David Jones, representing HTIA. “A more stringent test is needed, and perhaps even required by the Constitution.”

Jones also cited recent research showing that the availability of business method patents actually lowered R&D among firms that sought those patents. After Alice limited their availability, the same companies that had been seeking those patents stopped doing so, and increased their research and development budgets.

The current legal test for patents is not arbitrary or harmful to innovation, Jones argued. On the contrary, the Alice-Mayo framework “has improved patent clarity and decreased spurious litigation.”

EFF’s Alex Moss also disagreed that the current case law was “a mess” or “confusing.” Rather than throw out decades of case law, she urged Congress to look to history to consider changes that could actually point the patent system towards promoting progress. 

“In the 19th century, when patent owners wanted to get a term extension, they would come to Congress and bring their accounting papers, and say—look how much we invested,” Moss explained. “I’d like to see that practical element, to make sure our patent system is promoting innovation—which is its job under the Constitution—and not just a proliferation of patents.”

At the conclusion of testimony, Sen. Tillis said that he and Sen. Coons will take this testimony into account as they work towards a bill that could be introduced as early as next month. We hope the Senators will begin to consider proposals that could improve the patent system, rather than open the door to the worst kinds of patents. In the meantime, please tell your members of Congress that the proposed bill is not the right solution.

TAKE ACTION

TELL CONGRESS WE DON'T NEED MORE BAD PATENTS

Categories: Privacy

Social Media Platforms Increase Transparency About Content Removal Requests, But Many Keep Users in the Dark When Their Speech Is Censored, EFF Report Shows

Wed, 06/12/2019 - 10:03
Who Has Your Back Spotlights Good, and Not So Good, Content Moderation Policies

San Francisco and Tunis, Tunisia—While social media platforms are increasingly giving users the opportunity to appeal decisions to censor their posts, very few platforms comprehensively commit to notifying users that their content has been removed in the first place, raising questions about their accountability and transparency, the Electronic Frontier Foundation (EFF) said today in a new report.

How users are supposed to challenge content removals that they’ve never been told about is among the key issues illuminated by EFF in the second installment of its Who Has Your Back: Censorship Edition report. The paper comes amid a wave of new government regulations and actions around the world meant to rid platforms of extremist content. But in response to calls to remove objectionable content, social media companies and platforms have all too often censored valuable speech.

EFF examined the content moderation policies of 16 platforms and app stores, including Facebook, Twitter, the Apple App Store, and Instagram. Only four companies—Facebook, Reddit, Apple, and GitHub—commit to notifying users when any content is censored and specifying the legal request or community guideline violation that led to the removal. While Twitter notifies users when tweets are removed, it carves out an exception for tweets related to “terrorism,” a class of content that is difficult to accurately identify and can include counter-speech or documentation of war crimes. Notably, Facebook and GitHub were found to have more comprehensive notice policies than their peers.

“Providing an appeals process is great for users, but its utility is undermined by the fact that users can’t count on companies to tell them when or why their content is taken down,” said Gennie Gebhart, EFF associate director of research, who co-authored the report. “Notifying people when their content has been removed or censored is a challenge when your users number in the millions or billions, but social media platforms should be making investments to provide meaningful notice.”

In the report, EFF awarded stars in six categories, including transparency reporting of government takedown requests, providing meaningful notice to users when content or accounts are removed, allowing users to appeal removal decisions, and public support of the Santa Clara Principles, a set of guidelines for speech moderation based on a human rights framework. The report was released today at the RightsCon summit on human rights in the digital age, held in Tunis, Tunisia.

Reddit leads the pack with six stars, followed by Apple’s App Store and GitHub with five stars, and Medium, Google Play, and YouTube with four stars. Facebook, Reddit, Pinterest and Snap each improved their scores over the past year since our inaugural censorship edition of Who Has Your Back in 2018. Nine companies meet our criteria for transparency reporting of takedown requests from governments, and 11 have appeals policies, but only one—Reddit—discloses the number of appeals it receives. Reddit also takes the extra step of disclosing the percentage of appeals resolved in favor of or against the appeal.

Importantly, 12 companies are publicly supporting the Santa Clara Principles, which outline a set of minimum content moderation policy standards in three areas: transparency, notice, and appeals.

“Our goal in publishing Who Has Your Back is to inform users about how transparent social media companies are about content removal and encourage improved content moderation practices across the industry,” said EFF Director of International Free Expression Jillian York. “People around the world rely heavily on social media platforms to communicate and share ideas, including activists, dissidents, journalists, and struggling communities. So it’s important for tech companies to disclose the extent to which governments censor speech, and which governments are doing it.”

For the report:
https://www.eff.org/wp/who-has-your-back-2019

For more on platform censorship:
https://www.eff.org/deeplinks/2019/05/christchurch-call-good-not-so-good-and-ugly

Contact: Gennie Gebhart, Associate Director of Research, gennie@eff.org; Andrew Crocker, Senior Staff Attorney, andrew@eff.org
Categories: Privacy

EFF to U.N.: Ola Bini's Case Highlights The Dangers of Vague Cybercrime Law

Tue, 06/11/2019 - 18:15

For decades, journalists, activists and lawyers who work on human rights issues around the world have been harassed, and even detained, by repressive and authoritarian regimes seeking to halt any assistance they provide to human rights defenders. Digital communication technology and privacy-protective tools like end-to-end encryption have made this work safer, in part by making it harder for governments to target those doing the work. But that has led to technologists building those tools being increasingly targeted for the same harassment and arrest, most commonly under overbroad cybercrime laws that cast suspicion on even the most innocent online activities.  

Right now, that combination of misplaced suspicion and arbitrary detention under cybersecurity regulations is playing out in Ecuador. Ola Bini, a Swedish security researcher, is being detained in that country under unsubstantiated accusations, based on an overbroad reading of the country’s cybercrime law. This week, we submitted comments to the Office of the U.N. High Commissioner for Human Rights (OHCHR) and the Inter-American Commission on Human Rights (IACHR) for their upcoming 2019 joint report on the situation of human rights defenders in the Americas. Our comments focus on how Ola Bini’s detention is a flagship case of the targeting of technologists, and on the dangers of vague cybercrime laws.

While the pattern of demonizing benign uses of technology is global, EFF has noted its rise in the Americas in particular. Our 2018 report, “Protecting Security Researchers' Rights in the Americas,” was created in part to push back against ill-defined, broadly interpreted cybercrime laws. It also promotes standards that lawmakers, judges, and most particularly the Inter-American Commission on Human Rights might use to protect the fundamental rights of security researchers, and ensure the safe and secure development of the Internet and digital technology in the Americas and across the world.

We noted that these laws fail in several ways. First, they don’t meet the requirements that Inter-American human rights standards impose on any restriction of a right through the use of criminal law: vague and ambiguous criminal laws are an impermissible basis for restricting a person’s rights.

These criminal provisions also fail to clearly define malicious intent (mens rea) or actual damage, turning general behaviors into strict liability crimes. That means they can chill the free expression of security researchers, since prosecutors seeking to target individuals can interpret them broadly.

For instance, Ola Bini is currently being charged under Article 232 of the Ecuadorian Criminal Code:

Any person who destroys, damages, erases, deteriorates, alters, suspends, blocks, causes malfunctions, unwanted behavior or deletes computer data, e-mails, information processing systems, telematics or telecommunications from all or parts of its governing logical components shall be liable to a term of imprisonment of three to five years, or:

Designs, develops, programs, acquires, sends, introduces, executes, sells or distributes in any way, devices or malicious computer programs or programs destined to cause the effects indicated in the first paragraph of this article, or:

Destroys or alters, without the authorization of its owner, the technological infrastructure necessary for the transmission, reception or processing of information in general.

If the offense is committed on computer goods intended for the provision of a public service or linked to public safety, the penalty shall be five to seven years' deprivation of liberty.

Bini’s case highlights two consistent problems with cybercrime laws: the statute can be interpreted in such a way that any software that could be misused creates criminal liability for its creator; indeed, potentially more liability than on those who conduct malicious acts. This allows misguided prosecutions against human rights defenders to proceed on the basis that the code created by technologists might possibly be used for malicious purposes.

Additionally, we point the OHCHR-IACHR to the chain of events associated with Ola Bini’s arrest. Bini is a free software developer, who works to improve the security and privacy of the Internet for all its users. He has contributed to several key open source projects used to maintain the infrastructure of public Internet services, including JRuby, several Ruby libraries, as well as multiple implementations of the secure and open communication protocol OTR. Ola’s team at ThoughtWorks contributed to Certbot, the EFF-managed tool that has provided strong encryption for millions of websites around the world.

His arrest and detention were full of irregularities: his warrant was for a “Russian hacker” (Bini is neither Russian nor a hacker); he was not read his rights, nor allowed to contact his lawyer, nor offered a translator. The arrest was preceded by a press conference, and framed as part of a process of defending Ecuador from retaliation by associates of Wikileaks. During the press conference, Ecuador’s Interior Minister announced that the government was about to apprehend individuals who were supposedly involved in trying to establish a “piracy center” in Ecuador, including two Russian hackers, a Wikileaks collaborator, and a person close to Julian Assange. She stated: “We are not going to allow Ecuador to become a hacking center, and we cannot allow illegal activities to take place in the country, either to harm Ecuadorian citizens or those from other countries or any government.”

Neither she nor any investigative authority has provided any evidence to back these claims.

As we wrote in our comments, prosecutions of technologists working in this space should be treated in the same way as the prosecution of journalists, lawyers, and other human rights defenders — with extreme caution, and with regard to the risk of politicization and misuse of such prosecutions. Unfortunately, Bini’s arrest is typical of the treatment of security researchers conducting human rights work.

We hope that the OHCHR and IACHR carefully consider our comments, and recognize how broad cybercrime laws, and their misuse by political actors, can directly challenge human rights defenders. Ola Bini’s case—and the other examples we’ve given—present clear evidence for why we must treat cybercrime law as connected to human rights considerations.

Categories: Privacy

How LBS Innovations Keeps Trying to Monopolize Online Maps

Tue, 06/11/2019 - 15:11
Stupid Patent of the Month

For years, the Eastern District of Texas (EDTX) has been a magnet for lawsuits filed by patent trolls—companies who make money with patent threats, rather than selling products or services. Technology companies large and small were sued in EDTX every week. We’ve written about how that district’s unfair and irregular procedures made it a haven for patent trolls.

In 2017, the Supreme Court put limits on this venue abuse with its TC Heartland decision. The court ruled that companies can only be sued in a particular venue if they are incorporated there, or have a “regular and established” place of business.

That was great for tech companies that had no connection to EDTX, but it left brick-and-mortar retailers exposed. In February, Apple, a company that has been sued hundreds of times in EDTX, closed its only two stores that were in the district, located in Richardson and Plano. With no stores located in EDTX, Apple will be able to ask for a transfer in any future patent cases.

In the last few days those stores were open, Apple was sued for patent infringement four times, as patent trolls took what is likely their last chance to sue Apple in EDTX.

This month, as part of our Stupid Patent of the Month series, we’re taking a closer look at one of these last-minute lawsuits against Apple. On April 12, the last day the store was open, Apple was sued by LBS Innovations, LLC, a patent-licensing company owned by two New York patent lawyers, Daniel Mitry and Timothy Salmon. Since it was formed in 2011, LBS has sued more than 60 companies, all in the Eastern District of Texas. Those defendants include some companies that make their own technology, like Yahoo, Waze, and Microsoft, but they’re mostly retailers that use software made by others. LBS has sued tire stores, pizza shops, pet-food stores, and many others, all for using internet-based maps and “store location” features. LBS has sued retailers that use software made by Microsoft, others that use Mapquest, some that use Google, as well as those that use the open-source provider OpenStreetMap.

Early Internet Maps

LBS’ lawsuits accuse retailers of infringing one or more claims of U.S. Patent No. 6,091,956, titled “Situation Information System.” The most relevant claim, which is specifically cited in many lawsuits, is claim 11, which describes a method of showing “transmittable mappable hypertext items” to a user. The claim language describes “buildings, roads, vehicles, and signs” as possible examples of those items. It also describes providing “timely situation information” on the hypertext map.
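
To make concrete how generic that claim language reads, here is a minimal Python sketch (our own illustration, not text from the patent or any court filing) that emits a page of clickable map items plus a timestamped traffic note. The URLs and the traffic message are invented for the example.

    # Illustrative only: invented URLs and traffic text. A handful of lines of
    # ordinary web-page generation yields clickable map items ("buildings" and
    # "roads") plus a "timely" situation note -- the kind of functionality the
    # article argues was already routine when the '956 patent was filed.
    from datetime import datetime

    items = [
        ("City Hall (building)", "https://example.com/city-hall"),
        ("Main St (road)", "https://example.com/main-st"),
    ]
    traffic_note = f"Traffic as of {datetime.now():%H:%M}: Main St congested"

    page = "<html><body><h1>Downtown map</h1>\n"
    for label, url in items:
        page += f'<a href="{url}">{label}</a><br>\n'  # each mapped item is a hyperlink
    page += f"<p>{traffic_note}</p></body></html>"
    print(page)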

There’s a big problem with the ’956 patent and its owners’ broad claim to have invented Internet mapping. The patent application was filed on June 12, 1997—but electronic maps, and specifically Internet-based maps, were well known by then. Not only that, but those maps were already adding what one would think of as “timely situation information,” such as weather and traffic updates.

Mapquest, the first commercial internet mapping service, is one example. Mapquest launched in 1996—before this patent’s 1997 priority date—and by July of that year, it was offering not just driving directions but personalized maps of cities that included favorite destinations.

And Mapquest wasn’t the first. Xerox Parc’s free interactive map was online as far back as 1993. By January 1997, it was getting more than 80,000 mapping requests per day. Michigan State University was getting 159,000 daily requests [PDF] for its weather map, which was updated regularly, in March 1997. Some cities, such as Houston, had online traffic maps available in that time period, which also got timely updates.

In 1997, any Internet user, let alone anyone actually developing online maps, would have been aware of these very public examples.

As technology advanced, and Internet use became widespread, the information available on the electronic maps we all use became richer and more frequently updated. This was no surprise. What’s described in the ‘956 patent added nothing to this clear and well-known path.

The Trouble With Prior Art

How has the LBS Innovations patent held up in court? Although these examples of earlier Internet maps can be found online fairly easily, that doesn’t mean it’s easy to get rid of a patent like the ’956 patent in court. The process of invalidating patents using prior art—patent law’s term for relevant knowledge about earlier inventions—is difficult and expensive. It requires hiring high-priced experts, filing long reports, and months or years of litigation. And it often requires taking the substantial risk of a jury trial, since it’s difficult to get an early ruling on prior art defenses.

Because of that drawn-out process, LBS has been able to extract settlements from dozens of defendants. It’s also reached settlements with companies like Microsoft and Google, which intervened after users of their respective mapping software were sued. In one case that got close to trial, after having settled with several other defendants, LBS simply dropped its lawsuit against the final company willing to fight, avoiding an invalidity judgment against its patent.

LBS never should have been issued this patent in the first place. But patent examiners are given less than 20 hours, on average, to examine an application. Faced with far-reaching claims by an ambitious applicant, but little time to scrutinize them, examiners don’t have many options—especially since applicants can return again and again. That means the only way examiners can get applications off their desks for good is by approving them. Given that incentive, it’s no surprise judges and juries often find issued patents invalid.

For software, it can be extremely difficult to find prior art that can invalidate the patent. Software was generally not patentable until the mid-1990s, when a Federal Circuit decision called State Street Bank opened the door. That means patents aren’t good prior art for the vast majority of 20th century advances in computer science. Also, software is often protected by copyright or trade secret, and therefore not published or otherwise made public.

Often, published information may not precisely match the limitations of each patent claim. Did the earlier maps search “unique mappable information code sequences,” where each code sequence represented the mapped items, “copied from the memory of said computer”? They may well have done so—but published papers on internet mapping wouldn’t bother specifying inane steps that just recite basic computer technology.

The success of a litigation campaign like the one pushed by LBS Innovations shows why we can’t rely on the parts of the Patent Act that cover prior art to weed out bad patents. That’s where Section 101 comes in: it allows courts to find patents ineligible on their face and early in a case. That saves defendants the staggering costs of litigation or an unnecessary settlement. Since the Alice v. CLS Bank decision, Section 101 has been used to dispose of hundreds of abstract software patents before trial.

Right now, key U.S. Senators are crafting a bill that would weaken Section 101. That will greatly increase the leverage of patent trolls like LBS Innovations, and their claims to own widespread Internet technology.

Proponents of the Tillis-Coons patent bill argue that there’s little need to worry about bad patents slipping through Section 101, because other sections of the patent law—the sections which allow for patents to be invalidated because of earlier inventions—will ensure that wrongly granted patents don’t win in court. But patent trolls simply aren’t afraid of those sections of law, because their effects are so limited. For many defendants, the costs of attempting to prove a patent invalid under these sections makes them unusable. Faced with legal bills of hundreds of thousands of dollars, if not millions, many defendants will have little choice but to settle.

We all lose when small businesses and independent programmers lose their most powerful means of fighting against bad patents. That’s why we’re asking EFF supporters to contact their representatives in Congress, and ask them to reject the Tillis-Coons patent proposal.

Categories: Privacy

EFF’s Newest Advisory Board Member: Michael R. Nelson

Tue, 06/11/2019 - 14:31

EFF is proud to announce the newest member of our already star-studded advisory board: Michael R. Nelson. Michael has worked on Internet-related global public policy issues for more than 30 years, including working on technology policy in the U.S. Senate and the Clinton White House.

Michael’s broad expertise in many different aspects of technology will be invaluable to the work we do at EFF. His experience includes launching the Washington, D.C. policy office for Cloudflare, and working as a Principal Technology Policy Strategist in Microsoft’s Technology Policy Group, a Senior Technology and Telecommunications Analyst with Bloomberg Government, and the Director of Internet Technology and Strategy at IBM. In addition, Michael has been affiliated with the CCT Program at Georgetown University for more than ten years, teaching courses and doing research on the future of the Internet, cyber-policy, technology policy, innovation policy, and e-government.

In the 1990s, Michael was Director for Technology Policy at the Federal Communications Commission and Special Assistant for Information Technology at the White House Office of Science and Technology Policy. There, he worked with Vice President Al Gore and the President's Science Advisor on issues relating to telecommunications policy, information technology, encryption, electronic commerce, and information policy. He also served as a professional staff member for the Senate's Subcommittee on Science, Technology, and Space, chaired by then-Senator Gore, and was the lead Senate staffer for the High-Performance Computing Act. He has a B.S. from Caltech and a Ph.D. from MIT. Welcome, Michael!

Categories: Privacy

California: No Face Recognition on Body-Worn Cameras

Mon, 06/10/2019 - 18:43

EFF has joined a coalition of civil rights and civil liberties organizations to support a California bill that would prohibit law enforcement from applying face recognition and other biometric surveillance technologies to footage collected by body-worn cameras.

About five years ago, body cameras began to flood into police and sheriff departments across the country. In California alone, the Bureau of Justice Assistance provided more than $7.4 million in grants for these cameras to 31 agencies. The technology was pitched to the public as a means to ensure police accountability and document police misconduct. However, if enough cops have cameras, a police force can become a roving surveillance network, and the thousands of hours of footage they log can be algorithmically analyzed, converted into metadata, and stored in searchable databases.

Today, we stand at a crossroads as face recognition technology can now be interfaced with body-worn cameras in real time. Recognizing the impending threat to our fundamental rights, California Assemblymember Phil Ting introduced A.B. 1215 to prohibit the use of face recognition, or other forms of biometric technology, such as gait recognition or tattoo recognition, on a camera worn or carried by a police officer.

“The use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights,” the lawmaker writes in the introduction to the bill. “This technology also allows people to be tracked without consent. It would also generate massive databases about law-abiding Californians, and may chill the exercise of free speech in public places.”

Ting’s bill has the wind in its sails. The Assembly passed the bill with a 45-17 vote on May 9, and only a few days later the San Francisco Board of Supervisors made history by banning government use of face recognition. Meanwhile, law enforcement face recognition has come under heavy criticism at the federal level by the House Oversight Committee and the Government Accountability Office.

The bill is now before the California Senate, where it will be heard by the Public Safety Committee on Tuesday, June 11.

EFF, along with a coalition of civil liberties organizations including the ACLU, Advancing Justice - Asian Law Caucus, CAIR California, Data for Black Lives, and a number of our Electronic Frontier Alliance allies, has joined forces in supporting this critical legislation.

Face recognition technology has disproportionately high error rates for women and people of color. Making matters worse, law enforcement agencies conducting face surveillance often rely on images pulled from mugshot databases, which include a disproportionate number of people of color due to racial discrimination in our criminal justice system. So face surveillance will exacerbate historical biases born of, and contributing to, unfair policing practices in Black and Latinx neighborhoods.

Polling commissioned by the ACLU of Northern California in March of this year shows the people of California, across party lines, support these important limitations. The ACLU's polling found that 62% of respondents agreed that body cameras should be used solely to record how police treat people, and as a tool for public oversight and accountability, rather than to give law enforcement a means to identify and track people. In the same poll, 82% of respondents said they disagree with the government being able to monitor and track a person using their biometric information.

Last month, Reuters reported that Microsoft rejected an unidentified California law enforcement agency’s request to apply face recognition to body cameras due to human rights concerns.

“Anytime they pulled anyone over, they wanted to run a face scan,” Microsoft President Brad Smith said. “We said this technology is not your answer.”

We agree that ubiquitous face surveillance is a mistake, but we shouldn’t have to rely on the ethical standards of tech giants to address this problem. Lawmakers in Sacramento must use this opportunity to prevent the threat of mass biometric surveillance from becoming the new normal. We urge the California Senate to pass A.B. 1215.

Categories: Privacy

Five California Cities Are Trying to Kill an Important Location Privacy Bill

Mon, 06/10/2019 - 18:36

If you rely on shared bikes or scooters, your location privacy is at risk. Cities across the United States are currently pushing companies that operate shared mobility services like Jump, Lime, and Bird to share individual trip data for any and all trips taken within their boundaries, including where and when trips start and stop and granular details about the specific routes taken. This data is extremely sensitive, as it can be used to reidentify riders—particularly for habitual trips—and to track movements and patterns over time. While it is beneficial for cities to have access to aggregate data about shared mobility devices to ensure that they are deployed safely, efficiently, and equitably, cities should not be allowed to force operators to turn over sensitive, personally identifiable information about riders.

As these programs become more common, the California Legislature is considering a bill, A.B. 1112, that would ensure that local authorities receive only aggregated or non-identifiable trip data from shared mobility providers. EFF supports A.B. 1112, authored by Assemblymember Laura Friedman, which strikes the appropriate balance between protecting individual privacy and ensuring that local authorities have enough information to regulate our public streets so that they work for all Californians. The bill makes sure that local authorities will have the ability to impose deployment requirements in low-income areas to ensure equitable access, fleet caps to decrease congestion, and limits on device speed to ensure safety. And importantly, the bill clarifies that CalECPA—California’s landmark electronic privacy law—applies to data generated by shared mobility devices, just as it applies to data generated by any other electronic device.

Five California cities, however, are opposing this privacy-protective legislation. At least four of these cities—Los Angeles, Santa Monica, San Francisco, and Oakland—have pilot programs underway that require shared mobility companies to turn over sensitive individual trip data as a condition to receiving a permit. Currently, any company that does not comply cannot operate in the city. The cities want continued access to individual trip data and argue that removing “customer identifiers” like names from this data should be enough to protect rider privacy.

The problem? Even with names stripped out, location information is notoriously easy to reidentify, particularly for habitual trips. This is especially true when location information is aggregated over time. And the data shows that riders are, in fact, using dockless mobility vehicles for their regular commutes. For example, as documented in Lime’s Year End Report for 2018, 40 percent of Lime riders reported commuting to or from work or school during their most recent trip. And remember, in the case of dockless scooters and bikes, these devices may be parked directly outside a rider’s home or work. If a rider used the same shared scooter or bike service every day to commute between their work and home, it’s not hard to imagine how easy it might be to reidentify them—even if their name was not explicitly connected to their trip data. Time-stamped geolocation data could also reveal trips to medical specialists, specific places of worship, and particular neighborhoods or bars. Patterns in the data could reveal social relationships, and potentially even extramarital affairs, as well as personal habits, such as when people typically leave the house in the morning, go to the gym or run errands, how often they go out on evenings and weekends, and where they like to go.
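
As a rough illustration, using invented coordinates and a hypothetical trip log rather than any operator's actual data format, the sketch below shows how little work it takes to turn "de-identified" habitual trips back into a guess at a rider's home block:

    # Hypothetical, de-identified trip log for one device over a week:
    # (start_lat, start_lon, end_lat, end_lon, hour_of_day). No names anywhere.
    from collections import Counter

    trips = [
        (37.7689, -122.4330, 37.7793, -122.4193, 8),   # weekday mornings
        (37.7793, -122.4193, 37.7689, -122.4330, 18),  # weekday evenings
        (37.7689, -122.4330, 37.7793, -122.4193, 8),
        (37.7793, -122.4193, 37.7689, -122.4330, 18),
        (37.7689, -122.4330, 37.7599, -122.4148, 20),  # a weekend outing
    ]

    # The most common early-morning start point is a strong guess at the
    # rider's home block; one cross-reference against another data set can
    # then put a name to the trips, no "customer identifier" required.
    morning_starts = Counter(
        (round(t[0], 3), round(t[1], 3)) for t in trips if t[4] < 10
    )
    print("Likely home block:", morning_starts.most_common(1)[0][0])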

The cities claim that they will institute “technical safeguards” and “business processes” to prohibit reidentification of individual consumers, but so long as the cities have the individual trip data, reidentification will be possible—by city transportation agencies, law enforcement, ICE, or any other third parties that receive data from cities.

The cities’ promises to keep the data confidential and make sure the records are exempt from disclosure under public records laws also fall flat. One big issue is that the cities have not outlined and limited the specific purposes for which they plan to use the geolocation data they are demanding. They also have not delineated how they will minimize their collection of personal information (including trip data) to data necessary to achieve those objectives. This violates both the letter and the spirit of the California Constitution’s right to privacy, which explicitly lists privacy as an inalienable right of all people and, in the words of the California Supreme Court, “prevents government and business interests from collecting and stockpiling unnecessary information about us” or “misusing information gathered for one purpose in order to serve other purposes[.]”

The biggest mistake local jurisdictions could make would be to collect data first and think about what to do with it later—after consumers’ privacy has been put at risk. That’s unfortunately what cities are doing now, and A.B. 1112 will put a stop to it.

The time is ripe for thoughtful state regulation reining in local demands for individual trip data. As we’ve told the California legislature, bike- and scooter-sharing services are proliferating in cities across the United States, and local authorities should have the right to regulate their use. But those efforts should not come at the cost of riders’ privacy.

We urge the California legislature to pass A.B. 1112 and protect the privacy of all Californians who rely on shared mobility devices for their transportation needs. And we urge cities in California and across the United States to start respecting the privacy of riders. Cities should start working with regulators and the public to strike the right balance between their need to obtain data for city planning purposes and the need to protect individual privacy—and they should stop working to undermine rider privacy.

Categories: Privacy

EFF and Open Rights Group Defend the Right to Publish Open Source Software to the UK Government

Mon, 06/10/2019 - 15:57

EFF and Open Rights Group today submitted formal comments to the British Treasury, urging restraint in applying anti-money-laundering regulations to the publication of open-source software.

The UK government sought public feedback on proposals to update its financial regulations pertaining to money laundering and terrorism in alignment with a larger European directive. The consultation asked for feedback on applying onerous customer due diligence regulations to the cryptocurrency space as well as what approach the government should take in addressing “privacy coins” like Zcash and Monero. Most worrisome, the government also asked “whether the publication of open-source software should be subject to [customer due diligence] requirements.”

We’ve seen these kinds of attacks on the publication of open source software before, in fights dating back to the 90s, when the Clinton administration attempted to require that anyone merely publishing cryptography source code obtain a government-issued license as an arms dealer. Attempting to force today’s open-source software publishers to follow financial regulations designed to go after those engaged in money laundering is equally obtuse.

In our comments, we describe the breadth of free, libre, and open source software (FLOSS) that benefits the world today across industries and government institutions. We discuss how these regulatory proposals could have large and unpredictable consequences not only for the emerging technology of the blockchain ecosystem, but also for the FLOSS software ecosystem at large. As we stated in our comments:

If the UK government was to determine that open source software publication should be regulated under money-laundering regulations, it would be unclear how this would be enforced, or how the limits of those falling under the regulation would be determined. Software that could, in theory, provide the ability to enable cryptocurrency transactions, could be modified before release to remove these features. Software that lacked this capability could be quickly adapted to provide it. The core cryptographic algorithms that underlie various blockchain implementations, smart contract construction and execution, and secure communications are publicly known and relatively trivial to express and implement. They are published, examined and improved by academics, enthusiasts, and professionals alike…

The level of uncertainty this would provide to FLOSS use and provision within the United Kingdom would be considerable. Such regulations would burden multiple industries to attempt to guarantee that their software could not be considered part of the infrastructure of a cryptographic money-laundering scheme.

Moreover, source code is a form of written creative expression, and open source code is a form of public discourse. Regulating its publication under anti-money-laundering provisions fails to honor the free expression rights of software creators in the United Kingdom, and their collaborators and users in the rest of the world.

EFF is monitoring the regulatory and legislative reactions to new blockchain technologies, and we’ve recently spoken out about misguided ideas for banning cryptocurrencies and overbroad regulatory responses to decentralized exchanges. Increasingly, the regulatory backlash against cryptocurrencies is being tied to overbroad proposals that would censor the publication of open-source software, and restrict researchers’ ability to investigate, critique and communicate about the opportunities and risks of cryptocurrency.

This issue transcends controversies surrounding blockchain tech and could have significant implications for technological innovation, academic research, and freedom of expression. We’ll continue to watch the proceedings with HM Treasury, but fear similar anti-FLOSS proposals could emerge—particularly as other member states of the European Union transpose the same Anti-Money Laundering Directive into their own laws.

Read our full comments.

Thanks to Marta Belcher, who assisted with the comments. 

Categories: Privacy

Hearing Tuesday: EFF Will Voice Support For California Bill Reining In Law Enforcement Use of Facial Recognition

Mon, 06/10/2019 - 15:20
Assembly Bill 1215 Would Bar Police From Adding Facial Scanning to Body-Worn Cameras

Sacramento, California—On Tuesday, June 11, at 8:30 am, EFF Grassroots Advocacy Organizer Nathan Sheard will testify before the California Senate Public Safety Committee in support of a measure to prohibit law enforcement from using facial recognition in body cams.

Following San Francisco’s historic ban on police use of the technology—which can invade privacy, chill free speech and disproportionately harm already marginalized communities—California lawmakers are considering AB 1215, proposed legislation that would extend the ban across the state.

Face recognition technology has been shown to have disproportionately high error rates for women, the elderly, and people of color. Making matters worse, law enforcement agencies often rely on images pulled from mugshot databases. This exacerbates historical biases born of, and contributing to, over-policing in Black and Latinx neighborhoods. The San Francisco Board of Supervisors and other Bay Area communities have decided that police should be stopped from using the technology on the public.

The utilization of face recognition technology in connection with police body cameras would force Californians to decide between actively avoiding interaction and cooperation with law enforcement, or having their images collected, analyzed, and stored as perpetual candidates for suspicion, Sheard will tell lawmakers.

WHAT:
Hearing before the California Senate Public Safety Committee on AB 1215

WHO:
EFF Grassroots Advocacy Organizer Nathan Sheard

WHEN:
Tuesday, June 11, 8:30 am

WHERE:
California State Capitol
10th and L Streets
Room 3191
Sacramento, CA  95814

Contact: Nathan 'nash' Sheard, Grassroots Advocacy Organizer, nash@eff.org
Categories: Privacy

Adversarial Interoperability: Reviving an Elegant Weapon From a More Civilized Age to Slay Today's Monopolies

Fri, 06/07/2019 - 14:24

Today, Apple is one of the largest, most profitable companies on Earth, but in the early 2000s, the company was fighting for its life. Microsoft's Windows operating system was ascendant, and Microsoft leveraged its dominance to ensure that every Windows user relied on its Microsoft Office suite (Word, Excel, Powerpoint, etc). Apple users—a small minority of computer users—who wanted to exchange documents with the much larger world of Windows users were dependent on Microsoft's Office for the Macintosh operating system (which worked inconsistently with Windows Office documents, with unexpected behaviors like corrupting documents so they were no longer readable, or partially/incorrectly displaying parts of exchanged documents). Alternatively, Apple users could ask Windows users to export their Office documents to an "interoperable" file format like Rich Text Format (for text), or Comma-Separated Values (for spreadsheets). These, too, were inconsistent and error-prone, interpreted in different ways by different programs on both Mac and Windows systems.

Apple could have begged Microsoft to improve its Macintosh offerings, or they could have begged the company to standardize its flagship products at a standards body like OASIS or ISO. But Microsoft had little motive to do such a thing: its Office products were a tremendous competitive advantage, and despite the fact that Apple was too small to be a real threat, Microsoft had a well-deserved reputation for going to enormous lengths to snuff out potential competitors, including both Macintosh computers and computers running the GNU/Linux operating system.

Apple did not rely on Microsoft's goodwill and generosity: instead, it relied on reverse-engineering. After its 2002 "Switch" ad campaign—which begged potential Apple customers to ignore the "myths" about how hard it was to integrate Macs into Windows workflows—it intensified work on its iWork productivity suite, which launched in 2005, incorporating a word-processor (Pages), a spreadsheet (Numbers) and a presentation program (Keynote). These were feature-rich applications in their own right, with many innovations that leapfrogged the incumbent Microsoft tools, but this superiority would still not have been sufficient to ensure the adoption of iWork, because the world's greatest spreadsheets are of no use if everyone you need to work with can't open them.

What made iWork a success—and helped re-launch Apple—was the fact that Pages could open and save most Word files; Numbers could open and save most Excel files; and Keynote could open and save most PowerPoint presentations. Apple did not attain this compatibility through Microsoft's cooperation: it attained it despite Microsoft's noncooperation. Apple didn't just make an "interoperable" product that worked with an existing product in the market: they made an adversarially interoperable product whose compatibility was wrested from the incumbent, through diligent reverse-engineering and reimplementation. What's more, Apple committed to maintaining that interoperability, even though Microsoft continued to update its products in ways that temporarily undermined the ability of Apple customers to exchange documents with Microsoft customers, paying engineers to unbreak everything that Microsoft's maneuvers broke. Apple's persistence paid off: over time, Microsoft's customers became dependent on compatibility with Apple customers, and they would complain if Microsoft changed its Office products in ways that broke their cross-platform workflow.

Since Pages' launch, document interoperability has stabilized, with multiple parties entering the market, including Google's cloud-based Docs offerings, and the free/open alternatives from LibreOffice. The convergence on this standard was not undertaken with the blessing of the dominant player: rather, it came about despite Microsoft's opposition. These products are not just interoperable, they're adversarially interoperable: each has its own file format, but each can read Microsoft's file format.

The document wars are just one of many key junctures in which adversarial interoperability made a dominant player vulnerable to new entrants:

Scratch the surface of most Big Tech giants and you'll find an adversarial interoperability story: Facebook grew by making a tool that let its users stay in touch with MySpace users; Google products from search to Docs and beyond depend on adversarial interoperability layers; Amazon's cloud is full of virtual machines pretending to be discrete CPUs, impersonating real computers so well that the programs running within them have no idea that they're trapped in the Matrix.

Adversarial interoperability converts market dominance from an unassailable asset to a liability. Once Facebook could give new users the ability to stay in touch with MySpace friends, then every message those Facebook users sent back to MySpace—with a footer advertising Facebook's superiority—became a recruiting tool for more Facebook users. MySpace served Facebook as a reservoir of conveniently organized potential users that could be easily reached with a compelling pitch about why they should switch.

Today, Facebook is posting 30-54% year-on-year revenue growth and boasts 2.3 billion users, many of whom are deeply unhappy with the service, but who are stuck within its confines because their friends are there (and vice-versa).

A company making billions and growing by double-digits with 2.3 billion unhappy customers should be every investor's white whale, but instead, Facebook and its associated businesses are known as "the kill zone" in investment circles.

Facebook's advantage is in "network effects": the idea that Facebook increases in value with every user who joins it (because more users increase the likelihood that the person you're looking for is on Facebook). But adversarial interoperability could allow new market entrants to arrogate those network effects to themselves, by allowing their users to remain in contact with Facebook friends even after they've left Facebook.

This kind of adversarial interoperability goes beyond the sort of thing envisioned by "data portability," which usually refers to tools that allow users to make a one-off export of all their data, which they can take with them to rival services. Data portability is important, but it is no substitute for the ability to have ongoing access to a service that you're in the process of migrating away from.

Big Tech platforms leverage both their users' behavioral data and the ability to lock their users into "walled gardens" to drive incredible growth and profits. The customers for these systems are treated as though they have entered into a negotiated contract with the companies, trading privacy for service, or vendor lock-in for some kind of subsidy or convenience. And when Big Tech lobbies against privacy regulations and anti-walled-garden measures like Right to Repair legislation, they say that their customers negotiated a deal in which they surrendered their personal information to be plundered and sold, or their freedom to buy service and parts on the open market.

But it's obvious that no such negotiation has taken place. Your browser invisibly and silently hemorrhages your personal information as you move about the web; you paid for your phone or printer and should have the right to decide whose ink or apps go into them.

Adversarial interoperability is the consumer's bargaining chip in these coercive "negotiations." More than a quarter of Internet users have installed ad-blockers, making ad blocking the biggest consumer revolt in human history. These users are making counteroffers: the platforms say, "We want all of your data in exchange for this service," and their users say, "How about none?" Now we have a negotiation!

Or think of the iPhone owners who patronize independent service centers instead of using Apple's service: Apple's opening bid is "You only ever get your stuff fixed from us, at a price we set," and the owners of Apple devices say, "Hard pass." Now it's up to Apple to make a counteroffer. We'll know it's a fair one if iPhone owners decide to patronize Apple's service centers.

This is what a competitive market looks like. In the absence of competitive offerings from rival firms, consumers make counteroffers by other means.

There is good reason to want to see a reinvigorated approach to competition in America, but it's important to remember that competition is enabled or constrained not just by mergers and acquisitions. Companies can use a whole package of laws to attain and maintain dominance, to the detriment of the public interest.

Today, consumers and toolsmiths confront a thicket of laws and rules that stand between them and technological self-determination. To change that, we need to reform the Computer Fraud and Abuse Act, Section 1201 of the Digital Millennium Copyright Act, patent law, and other rules and laws. Adversarial interoperability is in the history of every tech giant that rules today, and if it was good enough for them in the past, it's good enough for the companies that will topple them in the future.

Categories: Privacy

Same Problem, Different Day: Government Accountability Office Updates Its Review of FBI’s Use of Face Recognition—and It’s Still Terrible

Thu, 06/06/2019 - 18:33

This week the federal Government Accountability Office (GAO) issued an update to its 2016 report on the FBI’s use of face recognition. The takeaway, which they also shared during a Congressional House Oversight Committee hearing: the FBI now has access to 641 million photos—including driver’s license and ID photos—but it still refuses to assess the accuracy of its systems.

According to the latest GAO Report, FBI’s Facial Analysis, Comparison, and Evaluation (FACE) Services unit not only has access to FBI’s Next Generation Identification (NGI) face recognition database of nearly 30 million civil and criminal mug shot photos, it also has access to the State Department’s Visa and Passport databases, the Defense Department’s biometric database, and the driver’s license databases of at least 21 states. Totaling 641 million images—an increase of 230 million images since GAO’s 2016 report—this is an unprecedented number of photographs, most of which are of Americans and foreigners who have committed no crimes.

The FBI Still Hasn’t Properly Tested the Accuracy of Its Internal or External Searches

Although GAO criticized FBI in 2016 for failing to conduct accuracy assessments of either its internal NGI database or the searches it conducts on its state and federal partners’ databases, the FBI has done little in the last three years to make sure that its search results are accurate, according to the new report. As of 2016, the FBI had conducted only very limited testing to assess the accuracy of NGI's face recognition capabilities. These tests only assessed the ability of the system to detect a match—not whether that detection was accurate, and as GAO notes, “reporting a detection rate of 86 percent without reporting the accompanying false positive rate presents an incomplete view of the system’s accuracy.”
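
To see why a detection rate alone is misleading, consider a back-of-the-envelope example with made-up numbers; the FBI has not published a false positive rate, so the 50 percent figure below is purely hypothetical:

    # Hypothetical numbers, for illustration only.
    searches = 1_000            # face searches run against the database
    truly_enrolled = 100        # searches where the person really is in the database

    detection_rate = 0.86       # the figure the FBI reported
    false_positive_rate = 0.50  # never reported by the FBI; assumed here

    true_hits = detection_rate * truly_enrolled                     # 86 correct matches
    false_hits = false_positive_rate * (searches - truly_enrolled)  # 450 innocent "matches"

    share_correct = true_hits / (true_hits + false_hits)
    print(f"Share of returned matches that point at the right person: {share_correct:.0%}")
    # Roughly 16% -- an 86% detection rate can coexist with mostly-wrong results.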

As we know from previous research, face recognition is notoriously inaccurate across the board and may also misidentify African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively. By failing to assess the accuracy of its internal systems, GAO writes—and we agree—that the FBI is also failing to ensure it is “sufficiently protecting the privacy and civil liberties of U.S. citizens enrolled in the database.” This is especially concerning given that, according to the FBI, they’ve run a massive 152,500 searches between fiscal year 2017 and April 2019—since the original report came out.

The FBI also has not taken any steps to determine whether the face recognition systems of its external partners—states and other federal agencies—are sufficiently accurate to prevent innocent people from being identified as criminal suspects. These databases, which are accessible to the FACE services unit, are mostly made up of images taken for identification, certification, or other non-criminal purposes. Extending their use to FBI investigations exacerbates concerns of accuracy, not least of which because, as GAO notes, the “FBI’s accuracy requirements for criminal investigative purposes may be different than a state’s accuracy requirements for preventing driver’s license fraud.” The FBI claims that it has no authority to set or enforce accuracy standards outside the agency. GAO disagrees: because the FBI is using these outside databases as a component of its routine operations, it is responsible for ensuring the systems are accurate, and given the lack of testing, it is unclear “whether photos of innocent people are unnecessarily included as investigative leads.”

[Map: Many of the 641 million face images to which the FBI has access are through 21 states’ driver’s license databases. 10 more states are in negotiations to provide similar access.]

As the report points out, most of the 641 million face images to which the FBI has access—like driver’s license and passport and visa photos—were never collected for criminal or national security purposes. And yet, under agreements and “Memorandums of Understanding” we’ve never seen between the FBI and its state and federal partners, the FBI may search these civil photos whenever it’s trying to find a suspect in a crime. As the map above shows, 10 more states are in negotiations with the FBI to provide similar access to their driver’s license databases.

Images from the states’ databases aren’t only available through external searches. The states have also been very involved in the development of the FBI’s own NGI database, which includes nearly 30 million of the 641 million face images accessible to the Bureau (we’ve written extensively about NGI in the past). As of 2016, NGI included more than 20 million civil and criminal images received directly from at least six states, including California, Louisiana, Michigan, New York, Texas, and Virginia. And it’s not a one-way street: it appears that five additional states—Florida, Maryland, Maine, New Mexico, and Arkansas—could send their own search requests directly to the NGI database. As of December 2015, the FBI was working with eight more states to grant them access to NGI, and an additional 24 states were also interested.

New Report, Same Criticisms

The original GAO report heavily criticized the FBI for rolling out these massive face recognition capabilities without ever explaining the privacy implications of its actions to the public, and the current report reiterates those criticisms. Federal law and Department of Justice policies require the FBI to complete a Privacy Impact Assessment (PIA) of all programs that collect data on Americans, both at the beginning of development and any time there’s a significant change to the program. While the FBI produced a PIA in 2008, when it first started planning out the face recognition component of NGI, it didn’t update that PIA until late 2015—seven years later and well after it began making the changes. It also failed to produce a PIA for the FACE Services unit until May 2015—three years after FACE began supporting FBI with face recognition searches.

Federal law and regulations also require agencies to publish a “System of Records Notice” (SORN) in the Federal Register, which announces any new federal system designed to collect and use information on Americans. SORNs are important to inform the public of the existence of systems of records; the kinds of information maintained; the kinds of individuals on whom information is maintained; the purposes for which they are used; and how individuals can exercise their rights under the Privacy Act. Although agencies are required to do this before they start operating their systems, FBI failed to issue one until May 2016—five years after it started collecting personal information on Americans. As GAO noted, the whole point of PIAs and SORNs is to give the public notice of the privacy implications of data collection programs and to ensure that privacy protections are built into systems from the start. The FBI failed at this.

This latest GAO report couldn’t come at a more important time. There is a growing mountain of evidence that face recognition used by law enforcement is dangerously inaccurate, from our white paper, “Face Off,” to two Georgetown studies released just last month which show that law enforcement agencies in some cities are implementing real-time face recognition systems and others are using the systems on flawed data.

Two years ago, EFF testified before the House Oversight Committee on the subject, pointing out the FBI's efforts to build up and link together these massive facial recognition databases that may be used to track innocent people as they go about their daily lives. The committee held two more hearings in the last month, which saw bipartisan agreement over the need to rein in law enforcement’s use of this technology, and during which GAO pointed out many of the issues raised by this report. At least one more hearing is planned. As the committee continues to assess law enforcement use of face recognition databases, and as more and more cities work to incorporate flawed and untested face recognition technology into their police and government-maintained cameras, we need all the information we can get on how law enforcement agencies like the FBI are currently using face recognition and how they plan to use it in the future. Armed with that knowledge, we can push cities, states, and possibly even the federal government to pass moratoria or bans on the use of face recognition.

Categories: Privacy

30 Years Since Tiananmen Square: The State of Chinese Censorship and Digital Surveillance

Tue, 06/04/2019 - 18:29

Thirty years ago today, the Chinese Communist Party used military force to suppress a peaceful pro-democracy demonstration by thousands of university students. Hundreds (some estimates go as high as thousands) of innocent protesters were killed. Every year, people around the world come together to mourn and commemorate the fallen; within China, however, things are oddly silent.

The Tiananmen Square protest is one of the most tightly censored topics in China. The Chinese government’s network and social media censorship is more than just pervasive; it’s sloppy, overbroad, inaccurate, and always errs on the side of more takedowns. Every year, the Chinese government ramps up VPN shutdowns, activist arrests, digital surveillance, and social media censorship in anticipation of the anniversary of the Tiananmen Square protests. This year is no different; and to mark the thirtieth anniversary, the controls have never been tighter.

Keyword filtering on social media and messaging platforms

It’s a fact of life for many Chinese that social media and messaging platforms perform silent content takedowns via regular keyword filtering and more recently, image matching. In June 2013, Citizen Lab documented a list of words censored from social media related to the anniversary of the protests, which included words like “today” and “tomorrow.”
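
A minimal sketch of what that kind of blunt keyword filtering amounts to (our illustration, not any platform's actual code) shows why innocuous posts get swept up around the anniversary:

    # Illustrative blocklist: "today" and "tomorrow" come from the Citizen Lab
    # findings cited above; the other entries are commonly reported censored terms.
    BLOCKED_TERMS = {"today", "tomorrow", "tank man", "六四"}

    def visible_to_others(post: str) -> bool:
        """Return False if the post would be silently withheld from other users."""
        lowered = post.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    print(visible_to_others("See you tomorrow at the park"))  # False: collateral damage
    print(visible_to_others("Nice weather this week"))        # True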

Since then, researchers at the University of Hong Kong have developed real-time censorship monitoring and transparency projects—“WeiboScope” and “WechatScope”—to document the scope and history of censorship on Weibo and Wechat. A couple of months ago, Dr. Fu King-wa, who works on these transparency projects, released an archive of over 1200 censored Weibo image posts relating to the Tiananmen anniversary since 2012. Net Alert has released a similar archive of historically censored images.

Simultaneous service disruptions for “system maintenance” across social media platforms

This year, there has been a sweep of simultaneous social media shutdowns a week prior to the anniversary, calling back to similar “Internet maintenance” shutdowns that happened during the twentieth anniversary of the Tiananmen Square protests. Five popular video and livestreaming platforms are suspending all comments until June 6th, citing the need for “system upgrades and maintenance.” Douban, a Chinese social networking service, is locking some of its larger news groups from any discussion until June 29th, also for “system maintenance.” And popular messaging service WeChat recently blocked users from changing their status messages, profile pictures, and nicknames for the same reason.

Apple censors music and applications alike

Since 2017, Apple has removed VPNs from its mainland Chinese app store. These application bans have continued and worsened over time. A censorship transparency project by GreatFire, AppleCensorship.com, allows users to look up which applications are available in the US but not in China. Apart from VPNs, the Chinese Apple app store has also censored applications from news organizations, including the New York Times, Radio Free Asia, Tibetan News, Voice of Tibet, and other Chinese-language human rights publications. They have also taken down other censorship circumvention tools like Tor and Psiphon.

Leading up to this year’s 30-year Tiananmen anniversary, Apple Music has been removing songs from its Chinese streaming service. A 1990 song by Hong Kong’s Jacky Cheung that references Tiananmen Square was removed, as were songs by pro-democracy activists from Hong Kong’s Umbrella Movement protest.

Activist accounts caught in Twitter sweep

On May 31st, a slew of China-related Twitter accounts were suspended, including prominent activists, human rights lawyers, journalists, and other dissidents. Activists feared this action was in preparation for further June 4th related censorship. Since then, some of the more prominent accounts have been restored, but many remain suspended. An announcement from Twitter claimed that these accounts weren’t reported by Chinese authorities, but were just caught up in a large anti-spam sweep.

The lack of transparency, poor timing, and huge number of false positives on Twitter’s part has led to real fear and uncertainty in Chinese-language activism circles.

Beyond Tiananmen Square: Chinese Censorship and Surveillance in 2019

Xinjiang, China’s ground zero for pervasive surveillance and social control

Thanks to work by Human Rights Watch, security researchers, and many brave investigators and journalists, a lot has come to light about China’s terrifying acceleration of social and digital controls in Xinjiang in the past two years. And the chilling effect is real. As we approach the end of Ramadan, a holiday that Party members and public school students are discouraged or outright banned from observing, mosques remain empty. Uighur students and other expatriates abroad fear returning home, as many of their families have already been detained for no cause.

China’s extensive reliance on surveillance technology in Xinjiang is a human rights nightmare, and according to the New York Times, “the first known example of a government intentionally using artificial intelligence for racial profiling.” Researchers have noticed that more and more computer vision systems described in papers coming out of China are trained specifically to perform facial recognition on Uighurs.

China has long been a master of security theater, overstating and over-performing its own surveillance capabilities in order to spread a “chilling effect” over digital and social behavior. Something similar is happening here, albeit at a much larger scale than we’ve ever seen before. Despite the government’s claims of fully automated and efficient systems, even the best automated facial recognition systems they use are accurate in less than 20 percent of cases, leading to mistakes and the need for hundreds of workers to monitor cameras and confirm the results. These smoke-and-mirrors “pseudo-AI” systems are all too common in the AI startup industry. For a lot of “automated” technologies, we just aren’t quite there yet.

Resource or technical limitations aren’t going to stop the Chinese government. Security spending since 2017 shows that Chinese officials are serious about building a panopticon, no matter the cost. The development of the surveillance apparatus in Xinjiang shows us just how expensive building pervasive surveillance can be; local governments in Xinjiang have accrued hundreds of millions (in USD) of “invisible debt” as they continue to ramp up investment in their surveillance state. A large portion of that cost is labor. “We risk understating the extent to which this high-tech police state continues to require a lot of manpower,” says Adrian Zenz for the New York Times.

Client-side blocking of labor movements on Github

996 is a recent labor movement in China by white-collar tech workers who demand regular 40-hour work weeks and an explicit ban on the draconian but standard “996” schedule; that is, 9 am to 9 pm, six days a week. The movement, like other labor-organizing movements, has been subject to keyword censorship on social media platforms, but individuals have been able to continue organizing on Github.

Github itself has remained relatively immune to Chinese censorship efforts. Thanks to widespread deployment of HTTPS, Chinese network operators must either block the entire website or nothing at all. Github was briefly blocked in 2013, but the backlash from developers was too great, and the site was unblocked shortly thereafter. China’s tech sector, like the rest of the world, relies on open-source projects hosted on the website. But although Github is no longer censored at the network level, Chinese-built browsers and Wechat’s web viewer have started blocking access to specific URLs, including the 996 Github repository.
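
A short sketch of why censors face that all-or-nothing choice: with HTTPS, an on-path observer can see which host a user is contacting (via DNS and the TLS SNI field) but not which page they request. The example below uses only Python's standard library and the public 996.ICU repository path.

    import socket
    import ssl

    # A censor sitting between the client and github.com can see the TCP
    # connection and the server name in the TLS ClientHello (SNI), but the HTTP
    # request line -- including the repository path -- travels only inside the
    # encrypted channel established below.
    hostname = "github.com"
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as raw_sock:
        # "github.com" below is sent in cleartext as the SNI field.
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            # Everything after the handshake is encrypted, so a censor cannot
            # tell a request for /996icu/996.ICU apart from any other GitHub page.
            request = (
                "GET /996icu/996.ICU HTTP/1.1\r\n"
                f"Host: {hostname}\r\n"
                "Connection: close\r\n\r\n"
            )
            tls_sock.sendall(request.encode())
            print(tls_sock.recv(200).decode(errors="replace"))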

Google’s sleeping Dragonfly

Late last year, we stood in solidarity with over 70 human rights groups led by Human Rights Watch and Amnesty International, calling on Google to end their secret internal project to architect a censored Chinese search engine codenamed Dragonfly. Google employees wrote their own letter protesting the project, some resigning in protest, demanding transparency at the very least.

In March, some Google employees found that changes were still being committed to the Dragonfly codebase. Google has yet to publicly commit to ending the project, leading many to believe the project could just be on the back burner for now.

How are people fighting back?

Relatively little news gets out of Xinjiang to the rest of the world, and China wants to keep it that way—journalists are denied visas, their relatives are detained, and reporters on the ground are arrested. Any work by groups that help shed light on the situation is extremely valuable. Earlier this year, we wrote about the amazing work by Human Rights Watch, Amnesty International, other human rights groups, and other independent researchers and journalists in helping uncover the inner workings of China’s surveillance state.

Censorship transparency projects like WechatScope, WeiboScope, Tor’s OONI, and GreatFire's AppleCensorship, as well as ongoing censorship research by academic centers like The Citizen Lab and organizations like GreatFire continue to shed light on the methods and intentions of broader Chinese censorship efforts.

And of course, we have to take a look at the individuals and activists within and outside China who continue to fight to have their voices heard. Despite continued crackdowns on VPNs, VPN usage among Chinese web users continues to rise. In the first quarter of 2019, 35% of web users used VPNs, not just for accessing better music and TV shows, but also for accessing blocked social networks and news sites.

Human rights groups, security researchers, investigators, journalists, and activists on the ground continue to make tremendous sacrifices in fighting for a more free China.

Categories: Privacy

EFF Tells Congress: Don’t Throw Out Good Patent Laws

Tue, 06/04/2019 - 15:42

At a Senate hearing today, EFF Staff Attorney Alex Moss gave formal testimony [PDF] about how to make sure our patent laws promote innovation, not abusive litigation.

Moss described how Section 101 of the U.S. patent laws serves a crucial role in protecting the public. She urged the Senate IP Subcommittee, which is considering radical changes to Section 101, to preserve the law to protect users, developers, and small businesses.

Since the Supreme Court’s decision in Alice v. CLS Bank, courts have been empowered to quickly dismiss lawsuits based on abstract patents. That has allowed many small businesses to fight back against meritless patent demands, which are often brought by "patent assertion entities," also known as patent trolls.

At EFF, we often hear from businesses or individuals that are being harassed or threatened by ridiculous patents. Moss told the Senate IP Subcommittee the story of Ruth Taylor, who was sued for infringement over a patent that claimed the idea of holding a contest with an audience voting for the winner but simply added generic computer language. The patent owner wanted Ruth to pay $50,000. Because of today’s Section 101, EFF was able to help Ruth pro bono, and ask the court to dismiss the case under Alice. The patent owner dropped the lawsuit days before the hearing.

We hope the Senate takes our testimony to heart and reconsiders the proposal by Senators Thom Tillis and Chris Coons, which would dismantle Section 101 as we know it. This would lead to a free-for-all for patent trolls, but huge costs and headaches for those who actually work in technology. 

We need your help. Contact your representatives in Congress today, and tell them to reject the Tillis-Coons patent proposal.

TAKE ACTION

TELL CONGRESS WE DON'T NEED MORE BAD PATENTS

Categories: Privacy