Deep Links

EFF's Deeplinks Blog: Noteworthy news from around the internet

U.S. IP Policy Spins Out of Control in the 2018 Special 301 Report

Tue, 05/01/2018 - 12:26

Certain reports and publications from U.S. government agencies, such as those of the Congressional Research Service, have become important reference works due to their reputation for being relatively in-depth, up to date, and factual. The United States Trade Representative's (USTR) Special 301 Report [PDF], the latest annual edition of which was released last week, is not such a report.

The report claims to "call out foreign countries and expose the laws, policies, and practices that fail to provide adequate and effective IP protection and enforcement for U.S. inventors, creators, brands, manufacturers, and service providers." But it has no consistent methodology for assessing what is "adequate and effective." Instead of relying on rigorous analysis to quantify the differences in standards of protection and enforcement among U.S. trading partners, it is driven by anecdotes, with a bias towards those contributed by IP lobbyists such as the International Intellectual Property Alliance (IIPA) and ACTION for Trade. This is a document so heavy on spin that one gets dizzy from reading it.

Canada Joins the "Naughty List" This Year

Due to the lack of a consistent methodology for preparation of the report and its heavy reliance on submissions rather than primary sources, the countries called out in the report and the misdeeds for which they are called out change with the winds of U.S. foreign policy. This provides a good explanation for the inclusion of Canada on this year's Priority Watch List, which is reserved for the most egregious offenders (China and Russia are also among the dozen countries receiving that honor).

As Canadian law professor Michael Geist has explained, Canada's inclusion on the Priority Watch List is likely a tactic intended to bring pressure on Canada to cave in to U.S. demands in the current negotiations over a modernized North American Free Trade Agreement (NAFTA). As Professor Geist points out, Canada has long recognized the Special 301 Report for the public relations exercise that it is, correctly observing that it "fails to employ a clear methodology and the findings tend to rely on industry allegations rather than empirical evidence and objective analysis."

Unfortunately, however, some other countries give the Special 301 Report more credence, and this can lead them to make unwarranted changes to their laws in order to placate the USTR. Earlier this year, for example, Switzerland responded specifically [PDF] to U.S. criticisms of its copyright system by pointing to its introduction of a "stay down" obligation (a synonym for mandatory copyright upload filtering) and its loosening of personal data protection for alleged copyright infringers.

Neither of these changes was required for Switzerland to fulfill its international obligations, and they will likely result in user-generated content platforms abandoning Switzerland for jurisdictions where the regulatory environment is more favorable. Yet despite these unnecessary efforts, Switzerland remains on the Watch List for the third year running.

A Tired, Repetitive Report

This year's Special 301 Report is a staid, by-the-numbers affair that will satisfy IP maximalist lobbyists, but will disappoint anyone who was expecting a balanced or nuanced look at the differences between U.S. and foreign IP laws and policies, and the reasons for those differences. The report maintains the line that there is only one "adequate and effective" level of IP protection and enforcement that every country should adhere to, regardless of its social and economic circumstances or its international legal obligations. The allegations that it repeats are tired and familiar, such as:

  • Countries like Brazil, Ecuador, Peru, and Taiwan do not effectively criminalize unauthorized camcording in theaters. (They are not required to do so; there is no international obligation for them to recognize this particular method of copyright infringement as a crime.)
  • Countries like Argentina, Brazil, Chile, China, Hong Kong, Indonesia, Mexico, Peru, Singapore, Taiwan, and Vietnam are accused of allowing trade in "Illicit Streaming Devices" (a.k.a. general-purpose computers) that can be used to access copyright-infringing media streams.
  • Some country code domain name registries are accused of failing to "require the registrant to provide true and complete contact information; and make such registration information publicly available." The USTR neglects to point out that in many cases this is a deliberate policy decision due to the application of local data protection law.

An Alternative Approach to the Special 301

In EFF's submission to the USTR during its consultation over this year's report, we pointed out that the report is unbalanced: it focuses only on how (some) U.S. businesses benefit from strict levels of IP protection and enforcement, without considering how (many more) U.S. businesses also benefit from the flexibilities in U.S. intellectual property law, such as the fair use right. We wrote:

Some of our trading partners do not have a fair use right in their copyright law, and this makes it harder for U.S. companies to conduct business overseas. They may run the risk of committing copyright infringement for activities that create economic and social value, and would be fully legal in the United States. For example, basic technical processes such as indexing, linking, and temporary copying may be found to infringe copyright in countries that lack a fair use doctrine.

We also suggested that the Special 301 process could be used to address the issue of foreign governments attempting to enforce their intellectual property laws on U.S. companies extraterritorially, as occurred in the Equustek case. Unfortunately, neither of our suggestions had any influence on the 2018 Special 301 Report. On the contrary, the USTR goes so far as to criticize Canada for the breadth of its "fair dealing" right in copyright law, which is similar to the U.S. fair use right. No criticism is made of countries such as Mexico, which lack any close equivalent to fair use at all.

Impartiality isn't the goal of the Special 301 Report; its goal is to influence the attitudes and behaviors of U.S. trading partners to bring them into alignment with U.S. foreign policy objectives on intellectual property, regardless of whether those objectives reflect our partners' obligations under international law. As such, it continues to serve well the interests of the IP maximalist lobbyists with whom the USTR has a very close relationship. But for those who are looking for a more balanced report, the 2018 Special 301 Report has nothing to offer, and its recommendations carry no weight.

Categories: Privacy

Catalog of Missing Devices: Fonts on e-readers

Mon, 04/30/2018 - 20:18

In today's world, your ability to choose something as everyday as a typeface depends on the permission of the company that made your device and the software that runs on it. Choosing your typeface may seem like a novelty, but type design can have far-ranging implications for accessibility (some fonts are optimized for people with dyslexia and other cognitive print disabilities), clarity (other fonts are optimized to minimize the chance of mistaking one character for another, critical for technical applications), and even culture (the right to choose a script that matches the language you're reading can make all the difference).

Categories: Privacy

The Fate of Text and Data Mining in the European Copyright Overhaul

Fri, 04/27/2018 - 12:19

The current European Digital Single Market copyright negotiations involve more than just the terrible upload filter and link tax proposals that have caused so much concern—and not all of the other provisions under negotiation are harmful. We haven't said much about the text and data mining provisions that form part of this ambitious legislative agenda, but as the finalization of the deal is fast approaching, the form of these provisions is now taking shape. The next few weeks will provide Europeans with their last opportunity to guide the text and data mining provisions to support coders' rights, open access, and innovation.

Text and data mining, which is the automated processing and analysis of large amounts of published data to create useful new outputs, necessarily involves copying at least some of the original data. Often, that data isn't subject to copyright in the first place, but even when it is, copies made in the course of processing generally fall within the scope of the fair use right in the United States.
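
To see why copying is unavoidable, consider a minimal, hypothetical sketch of a text and data mining step, written here in Python. It reads (that is, copies into memory) a small set of documents and computes word frequencies across them; the "corpus" directory and the simple word-counting approach are illustrative assumptions rather than any particular research workflow.

    # Minimal, hypothetical text and data mining sketch: counting word
    # frequencies across a small corpus. Merely reading each document
    # copies its text into memory, which is why TDM implicates copyright
    # even though the output (a frequency table) looks nothing like the
    # original works.
    import re
    from collections import Counter
    from pathlib import Path

    def mine_word_frequencies(corpus_dir: str) -> Counter:
        counts = Counter()
        for path in Path(corpus_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8")  # a copy of the work is made here
            words = re.findall(r"[a-z']+", text.lower())
            counts.update(words)
        return counts

    if __name__ == "__main__":
        # "corpus" is an assumed directory of plain-text documents.
        frequencies = mine_word_frequencies("corpus")
        for word, count in frequencies.most_common(10):
            print(f"{word}\t{count}")

The point of the sketch is simply that the intermediate copies exist only to produce an aggregate output; nothing resembling the original works is distributed.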

But European countries have no such fair use right in their copyright law. Instead, they have a patchwork of narrower user rights, which vary from one country to another. Although some states have introduced rights to conduct text and data mining, there is little consistency between them. As such, the legality of text and data mining conducted in Europe is questionable, even though it doesn't result in the creation of anything that resembles the original input data set. Worse still, Europe also has a separate copyright-like regime of protection for databases, which has no equivalent in the United States. Text and data mining activities could also run afoul of these database rights.

Recognizing the usefulness of text and data mining to scientific research, the European Commission proposed to clarify its legality by adding a new optional text and data mining right to European copyright law. Provided that those exercising the right had lawful access to the input data in the first place, they would not have to acquire any additional license to perform text and data mining on such data, for either commercial or non-commercial purposes—and, importantly, the copyright owner would not be able to prohibit them from doing so by contract.

However, the Commission's proposal also contained a number of limitations that made it less useful than it ought to have been. Its three biggest limitations were that:

  • It only allowed research organizations to conduct text and data mining activities, excluding independent researchers, small businesses, libraries and archives, and others who might otherwise wish to make use of the exception.
  • Text and data mining could only be conducted for the purpose of scientific research, excluding other purposes such as education, archival, or literary criticism.
  • It would do nothing to prevent copyright holders from using DRM (digital locks with legal reinforcement) to make the exercise of the right practically impossible.

Proposals to Strengthen or Weaken the Commission's Proposal

In February 2018 an in-depth analysis [PDF] of the provisions was published for the Legal Affairs (JURI) Committee, which has leadership of the Digital Single Market dossier within the European Parliament. This analysis identifies the limitations mentioned above, and provides recommendations for addressing some of them; perhaps most notably, "clearly spelling out that both Technological Protection Measures (TPMs) and network security and integrity measures should not undermine the effective application of the exception."

Following up on this, in late March 2018 a coalition of 28 groups including EIFL (Electronic Information for Libraries), the European University Association (EUA), and Science Europe sent a letter to the Legal Affairs (JURI) Committee making four concrete recommendations that would strengthen the Commission's proposal by:

  • Broadening it to include any person (natural or legal) that has lawful access to content, provided that reproduction or extraction is used for the sole purpose of text and data mining.
  • Affirming that contractual terms restricting the use of the right should be unenforceable.
  • Clarifying that DRM cannot be used to unreasonably restrict the exercise of the right.
  • Allowing datasets created for the purpose of text and data mining to be stored on secured servers for future verification.

But countering these recommendations, some member states would like to weaken the text and data mining right, rather than strengthening it. Last week the Bulgarian Presidency of the Council of the European Union asked member states, [PDF] “Should the scope of the optional exception for text and data mining provided for in Article 3a be limited and to what extent, for example to temporary copies of works and other subject matter which have been made freely available to the public online?” Their answer, expected to be given at Friday's meeting of the Committee of the Permanent Representatives of the Governments of the Member States to the European Union (COREPER), may determine the version of the proposal that goes to a vote.

We are encouraging all our European members to contact their representatives about an upcoming vote on the European copyright proposals in the JURI Committee. Along with the most serious problems with the proposal—the link tax in favor of news publishers (Article 11) and the upload filtering mandate on Internet platforms (Article 13)—the Article 3a text and data mining right is also included in the upcoming vote. When you contact your representatives about the sweeping and dangerous copyright proposals, tell them your thoughts about the importance of protecting text and data mining too. Although the details are complex, you can keep to one simple message—that Articles 11 and 13 should be eliminated, and that Article 3a should be kept and strengthened.

Take Action

Demand fair copyright policies

Categories: Privacy

Defenders of Copyright Troll Victims Urge Congress to Reject the "Small Claims" Bill

Thu, 04/26/2018 - 20:26

A dedicated group of attorneys and technologists from around the U.S. defend Internet users against abuse by copyright trolls. Today, they wrote to the House Judiciary Committee with a warning about the CASE Act, a bill that would create a powerful new “small claims” tribunal at the U.S. Copyright Office in Washington D.C. The CASE Act would give copyright trolls a faster, cheaper way of coercing Internet users to fork over cash “settlements,” bypassing the safeguards against abuse that federal judges have labored to create.

Copyright trolls are companies that turn threats of copyright litigation into profit by accusing Internet users of infringement—typically of pornographic films or independent films that flopped at the box office. Wielding boilerplate legal papers, dubious investigators, and the threat of massive, unpredictable copyright damages, these companies try to coerce Internet users into paying “settlements” of several thousand dollars to avoid litigation. Because their business is built around litigation threats, not the creative work itself, copyright trolls aren’t very careful about making sure the people they accuse actually infringed a copyright. In fact, since profitable copyright trolling depends on targeting thousands of Internet users, trolls have an incentive not to investigate their claims carefully before filing suit.

Trolling is a massive problem. Between 2014 and 2016, copyright troll lawsuits constituted just under 50% of all copyright cases on the federal dockets. Overall, since 2010, researchers have estimated the number of Internet users targeted at over 170,000 - and that’s probably a low estimate.

These schemes have a human cost. Targets have included many elderly retirees who don’t use the Internet, who are often coerced into paying settlements. Others are documented immigrants with a green card or work visa, who must pay to avoid litigation that could imperil their immigration status.

Still others are homeowners, apartment managers, and leaseholders—whoever’s name is on the ISP bill. Copyright trolls force them to choose between paying a cash settlement or becoming part of the shakedown by interrogating their tenants, family members, roommates, or houseguests about their Internet use, despite having no legal responsibility to police that use.

The federal courts have cracked down on copyright trolling, by requiring copyright holders to present solid evidence of infringement before the courts will issue a subpoena to unmask an anonymous Internet user. Some courts have even begun to review settlement demand letters to ensure that they don’t use abusive methods.

The CASE Act, H.R. 3945, would reverse this progress by giving copyright trolls a whole new, and more favorable, legal forum. In particular, the bill would:

  • allow the Copyright Office to issue subpoenas for the identity of an Internet user, who can then be targeted for harassment and threats;
  • do away with the requirement that copyright holders register their works before infringement begins in order to recover automatic statutory damages, which weeds out frivolous claims;
  • allow the Copyright Office to issue $5,000 copyright “parking tickets” through a truncated process, with no true right of appeal.

The opponents of copyright trolling who signed the letter are concerned about giving the Copyright Office these powers, bypassing the federal courts. Given that the Copyright Office calls rightsholders its “customers,” and often favors rightsholders’ interests over those of the broader public, we don’t trust a Copyright Office panel to give careful protection to the accused.

The House is considering a few copyright bills this spring. This week, it voted to approve three of them: the Music Modernization Act, the CLASSICS Act, and the AMP Act. Wisely, it left the CASE Act off the schedule—perhaps because of its controversial provisions—but the bill could still come up for a vote.

The CASE Act is supported by photographers who want a faster, cheaper way to bring infringement claims. But creating a new federal administrative tribunal with the power to issue fines against ordinary Internet users is dangerous. Aid to photographers can’t come at the expense of inviting more copyright troll abuse. Legislators should heed the words of professionals who defend the public against this form of abuse. They should reject the CASE Act.

Categories: Privacy

Oakland Should Ensure Community Control of Surveillance Technology

Thu, 04/26/2018 - 18:33

The Northern California cities of Berkeley and Davis began the year with successful community efforts to demand transparency and oversight in their community’s acquisition of surveillance technology. With tax season just days behind us, U.S. communities continue to focus on gaining control and transparency over whether their hard-earned tax dollars are used to acquire surveillance technologies that threaten our fundamental privacy, disparately burden people of color, and threaten immigrant communities.

Community organizers in the East Bay—having already successfully defeated plans to have the Port of Oakland’s Domain Awareness Center expand into a city-wide surveillance apparatus—are well-poised to make Oakland the next U.S. city to adopt a law that would ensure substantial community controls over law enforcement acquisition and use of surveillance technology.

The power to decide whether these tools are acquired, and how they are used, should not stand unilaterally with agency executives. Instead, elected City Council members should be empowered with the authority to decide whether to approve or reject surveillance technology. Most importantly, all residents must be provided an opportunity to comment on proposed surveillance technologies, and the policies guiding their use, before representatives decide whether to adopt them.

Oakland’s Surveillance and Community Safety Ordinance enshrines these rights by requiring that city agencies submit use policies to the City Council for approval before acquiring surveillance technology, and that the City Council provide notice and an opportunity for public comment before approving these requests. To assure compliance, and that any approved equipment does indeed serve its stated purpose, the law would additionally require annual use reports including any violations of the existing policy.

 

TAKE ACTION

SUPPORT THE SURVEILLANCE AND COMMUNITY SAFETY ORDINANCE

In many cities across the country, local law enforcement and other city agencies acquire surveillance technology—such as cell-site simulators, automated license plate readers (ALPR), and face recognition equipment—with little or no oversight or public input. In some cases, manufacturers require city agencies to sign non-disclosure agreements prohibiting the sharing of basic information about the types of equipment, the equipment’s capabilities, how the equipment is used, and how much it cost. Compounding this problem, many agencies lack use policies outlining how and under what circumstances the equipment may be used, or with what outside entities information collected by the technology may be shared.

Many communities are increasingly worried that surveillance technologies are a threat to immigrant communities. For example, the City of Alameda recently sidelined a proposal to expand its ALPR system, because of resident concerns that the resulting ALPR data might be used for immigration enforcement against their neighbors.

Since the early days of the fight to rein in the expansion of Oakland’s Domain Awareness Center, we have worked alongside local and national partners, including our Electronic Frontier Alliance ally Oakland Privacy, on empowering communities to take control of surveillance equipment policy and acquisition. These coalitions have supported cities across the United States in proposing ordinances that would provide transparency, accountability, and oversight measures.

In April, the City of Oakland’s Public Safety Committee voted unanimously to approve the proposed Surveillance and Community Safety Ordinance. With this strong show of support from the committee and the community, the ordinance is expected to go before the full City Council as soon as Tuesday, May 1.

As we wrote in the letter of support we submitted along with the Freedom of the Press Foundation in May 2017:

Public safety requires trust between government and the community served. To ensure that trust, Oakland needs a participatory process for deciding whether or not to adopt new government surveillance technologies, and ongoing transparency and oversight of any adopted technologies.

As federal agencies continue to erode our privacy, and target our Muslim and immigrant neighbors, we must insist that state and local elected officials take every opportunity to protect our most basic civil rights and civil liberties.

Oakland residents should contact their city council representative, and urge them to vote to pass the Surveillance and Community Safety Ordinance. Across the U.S., Electronic Frontier Alliance allies are building support for similar transparency and oversight measures within their own cities and towns. To join an Electronic Frontier Alliance member organization in your community, or to find out how your group can become a member, visit eff.org/fight.

Categories: Privacy

Axon’s Ethics Board Must Keep the Company in Check

Thu, 04/26/2018 - 15:38

EFF, together with 41 national, state, and local civil rights and civil liberties groups, sent a letter today urging the ethics board of police technology and weapons developer Axon to hold the company accountable to the communities its products impact—and to itself.

Axon, based in Scottsdale, Arizona, is responsible for making and selling some of the most used police products in the United States, including tasers and body-worn cameras. Over the years, the company has taken significant heat for how those tools have been used in police interactions with the public, especially given law enforcement’s documented history of racial discrimination. Axon is now considering developing and incorporating into existing products new technologies like face recognition and artificial intelligence. It set up an “AI Ethics Board” made up of outside advisors and says it wants to confront the privacy and civil liberties issues associated with police use of these invasive technologies.

As we noted in the letter, “Axon has a responsibility to ensure that its present and future products, including AI-based products, don’t drive unfair or unethical outcomes or amplify racial inequities in policing.” Given this, our organizations called on the Axon Ethics Board to adhere to the following principles at the outset of its work:

  • Certain products are categorically unethical to deploy.
  • Robust ethical review requires centering the voices and perspective of those most impacted by Axon’s technologies.
  • Axon must pursue all possible avenues to limit unethical downstream uses of its technologies.
  • All of Axon’s digital technologies require ethical review.

With these guidelines, we urge Axon's Ethics Board to steer the company in the right direction for all its current and future products. For example, the Ethics Board must advise Axon against pairing real-time face recognition analysis technology with the live video captured by body-worn cameras:

Real-time face recognition would chill the constitutional freedoms of speech and association, especially at political protests. In addition, research indicates that face recognition technology will never be perfectly accurate and reliable, and that accuracy rates are likely to differ based on subjects’ race and gender. Real-time face recognition therefore would inevitably misidentify some innocent civilians as suspects. These errors could have fatal consequences—consequences that fall disproportionately on certain populations.

For these reasons, we believe “no policy or safeguard can mitigate these risks sufficiently well for real-time face recognition ever to be marketable.”

Similarly, we urge Axon’s ethical review process to include the voices of those most impacted by its technologies:

The Board must invite, consult, and ultimately center in its deliberations the voices of affected individuals and those that directly represent affected communities. In particular, survivors of mass incarceration, survivors of law enforcement harm and violence, and community members who live closely among both populations must be included.

Finally, we believe that all of Axon’s products, both hardware and software, require ethical review. The Ethics Board has a large responsibility for the future of Axon. We hope its members will listen to our requests and hold Axon accountable for its products.[1]

Letter signatories include Color of Change, UnidosUS, South Asian Americans Leading Together, Detroit Community Technology Project, Algorithmic Justice League, Data for Black Lives, NAACP, NC Statewide Police Accountability Project, Urbana-Champaign Independent Media Center, and many more. All are concerned about the misuse of technology to entrench or expand harassment, prejudice, and bias against the public.

You can read the full letter here.

 

[1] EFF’s Technology Policy Director, Jeremy Gillula, has chosen to join Axon’s Ethics Board in his personal capacity. He has recused himself from writing or reviewing this blog post, or the letter, and his participation on the board should not be attributed to EFF.

Categories: Privacy

Facebook Inches Toward More Transparency and Accountability

Thu, 04/26/2018 - 14:03

Facebook took a step toward greater accountability this week, expanding the text of its community standards and announcing the rollout of a new system of appeals. Digital rights advocates have been pushing the company to be more transparent for nearly a decade, and many welcomed the announcements as a positive move for the social media giant.

The changes are certainly a step in the right direction. Over the past year, following a series of controversial decisions about user expression, the company has begun to offer more transparency around its content policies and moderation practices, such as the “Hard Questions” series of blog posts offering insight into how the company makes decisions about different types of speech.

The expanded community standards released on Tuesday offer a much greater level of detail about what’s verboten and why. Broken down into six overarching categories—violence and criminal behavior, safety, objectionable content, integrity and authenticity, respecting intellectual property, and content-related requests—each section comes with a “policy rationale” and bulleted lists of “do not post” items.

But as Sarah Jeong writes, the guidelines “might make you feel sorry for the moderator who’s trying to apply them.” Many of the items on the “do not post” lists are incredibly specific—just take a look at the list contained in the section entitled “Nudity and Adult Sexual Activity”—and the carved-out exceptions are often without rationale.

And don’t be fooled: The new community standards do nothing to increase users’ freedom of expression; rather, they will hopefully provide users with greater clarity as to what might run afoul of the platform’s censors.

Facebook’s other announcement—that of expanded appeals—has received less media attention, but for many users, it's a vital development. In the platform’s early days, content moderation decisions were final and could not be appealed. Then, in 2011, Facebook instituted a process through which users whose accounts had been suspended could apply to regain access. That process remained in place until this week.

Through Onlinecensorship.org, we often hear from users of Facebook who believe that their content was erroneously taken down and are frustrated with the lack of due process on the platform. In its latest announcement, VP of Global Policy Management Monika Bickert explains that over the coming year, Facebook will be building the ability for people to appeal content decisions, starting with posts removed for nudity/sexual activity, hate speech, or graphic violence—presumably areas in which moderation errors occur more frequently.

Some questions about the process remain (will users be able to appeal content decisions while under temporary suspension? Will the process be expanded to cover all categories of speech?), but we congratulate Facebook on finally instituting a process for appealing content takedowns, and encourage the company to expand the process quickly to include all types of removals.

Categories: Privacy

Platform Censorship Won't Fix the Internet

Wed, 04/25/2018 - 21:31

The House Judiciary Committee will hold a hearing on “The Filtering Practices of Social Media Platforms” on April 26. Public attention to this issue is important: calls for online platform owners to police their members’ speech more heavily inevitably lead to legitimate voices being silenced online. Here’s a quick summary of a written statement EFF submitted to the Judiciary Committee in advance of the hearing.

Our starting principle is simple: Under the First Amendment, social media platforms and other online intermediaries have the right to decide what kinds of expression they will carry. But just because companies can act as judge and jury doesn’t mean they should.

We all want an Internet where we are free to meet, create, organize, share, associate, debate and learn. We want to make our voices heard in the way that technology now makes possible. No one likes being lied to or misled, or seeing hateful messages directed against them or flooded across our newsfeeds. We want our elections free from manipulation and for the speech of women and marginalized communities not to be silenced by harassment.

But we won’t make the Internet fairer or safer by pushing platforms into ever more aggressive efforts to police online speech. When social media platforms adopt heavy-handed moderation policies, the unintended consequences can be hard to predict. For example, Twitter’s policies on sexual material have resulted in posts on sexual health and condoms being taken down. YouTube’s bans on violent content have resulted in journalism on the Syrian war being pulled from the site. It can be tempting to attempt to “fix” certain attitudes and behaviors online by placing increased restrictions on users’ speech, but in practice, web platforms have had more success at silencing innocent people than at making online communities healthier.

Indeed, for every high profile case of despicable content being taken down, there are many, many more stories of people in marginalized communities who are targets of persecution and violence. The powerless struggle to be heard in the first place; social media can and should help change that reality, not reinforce it.

That’s why we must remain vigilant when platforms decide to filter content. We are worried about how platforms are responding to new pressures to filter the content on their services. Not because there’s a slippery slope from judicious moderation to active censorship, but because we are already far down that slope.

To avoid slipping further, and maybe even reverse course, we’ve outlined steps platforms can take to help protect and nurture online free speech. They include:

  • Better transparency
  • Greater innovation and competition, e.g., through interoperability
  • Clear notice and consent procedures
  • Robust appeal processes
  • More user control
  • Protection for anonymity

You can read our statement here for more details.

For its part, rather than instituting more mandates for filtering or speech removal, Congress should defend safe harbors, protect anonymous speech, encourage platforms to be open about their takedown rules and to follow a consistent, fair, and transparent process, and avoid promulgating any new intermediary requirements that might have unintended consequences for online speech.

EFF was invited to participate in this hearing and we were initially interested. However, before we confirmed our participation, the hearing shifted in a different direction. We look forward to engaging in further discussions with policymakers and the platforms themselves.

Categories: Privacy

California Can Build Trust Between Police and Communities By Requiring Agencies to Publish Their Policies Online

Wed, 04/25/2018 - 19:30

If we as citizens are better informed about police policies and procedures, and can easily access and study those materials online, the result will be greater accountability and better relations between our communities and the police departments that serve us. EFF supports a bill in the California legislature that aims to do exactly that.

S.B. 978, introduced by Sen. Steven Bradford, will require law enforcement agencies to post online their current standards, practices, policies, operating procedures, and education and training materials. As we say in our letter of support:

[The bill] will help address the increased public interest and concern about police policies in recent years, including around the issues of use of force, less-lethal weapons, body-worn cameras, anti-bias training, biometric identification and collection, and surveillance (such as social media analysis, automated license plate recognition, cell-site simulators, and drones).

Additionally, policies governing police activities should be readily available for review and scrutiny by the public, policymakers, and advocacy groups. Not only will this transparency measure result in well-informed policy decisions, but it will also provide the public with a clearer understanding of what to expect and how to behave during police encounters.

Last year, Gov. Jerry Brown vetoed a previous version of this bill, which had broad support from both civil liberties groups and law enforcement associations. The new bill is meant to address his concerns about the bill's scope, and removes a few state law enforcement agencies from the law's purview, such as the Department of Alcoholic Beverage Control and the California Highway Patrol.

We hope that the legislature will once again pass this important bill, and that Gov. Brown will support transparency and accountability between law enforcement and Californians.

Categories: Privacy

A Tale of Two Poorly Designed Cross-Border Data Access Regimes

Wed, 04/25/2018 - 14:08

On Tuesday, the European Commission published two legislative proposals that could further cement an unfortunate trend towards privacy erosion in cross-border state investigations. Building on a foundation first established by the recently enacted U.S. CLOUD Act, these proposals compel tech companies and service providers to ignore critical privacy obligations in order to facilitate easy access when facing data requests from foreign governments. These initiatives collectively signal the increasing willingness of states to sacrifice privacy as a way of addressing pragmatic challenges in cross-border access that could be better solved with more training and streamlined processes.

The EU proposals (which consist of a Regulation and a Directive) apply to a broad range of companies [1] that offer services in the Union and that have a “substantial connection” to one or more Member States. [2] Practically, that means companies like Facebook, Twitter, and Google, though not based in the EU, would still be affected by these proposals. The proposals create a number of new data disclosure powers and obligations, including:

  • European court orders that compel internet companies and service providers to preserve data they already stored at the time the order is received (European preservation orders);
  • European court orders for content and ‘transactional’ data [3] for investigation of a crime that carries a custodial sentence of at least three years (European production orders for content data);
  • European orders for some metadata defined as “access data” (IP addresses, service access times) and customer identification data (including name, date of birth, billing data, and email addresses) that could be issued for any criminal offense (European production orders for access and subscriber data); [4]
  • An obligation for some service providers to appoint an EU legal representative who will be responsible for complying with data access demands from any EU Member State.

The package of proposals does not address real-time access to communications (in contrast to the CLOUD Act).

Who Is Affected and How?

Such orders would affect Google, Facebook, Microsoft, Twitter, instant messaging services, voice over IP, apps, Internet Service Providers, and e-mail services, as well as cloud technology providers, domain name registries, registrars, privacy and proxy service providers, and digital marketplaces.

Moreover, tech companies and service providers would have to comply with law enforcement orders for data preservation and delivery within 10 days or, in the case of an imminent threat to life or physical integrity of a person or to a critical infrastructure, within just six hours. Complying with these orders would be costly and time-consuming.

Alarmingly, the EU proposals would compel affected companies (which include diverse entities ranging from small ISPs and burgeoning startups to multibillion dollar global corporations) to develop extensive resources and expertise in the nuances of many EU data access regimes. A small regional German ISP  will need the capacity to process demands from France, Estonia, Poland, or any other EU member state in a manner that minimizes legal risks. Ironically, the EU proposals are presented as beneficial to businesses and service providers on the basis that they provide ‘legal certainty and clarity’. In reality, they do the opposite, forcing these entities to devote resources to understanding the law of each member state. Even worse, the proposal would immunize businesses from liability in situations where good faith compliance with a data request might conflict with EU data protection laws. This creates a powerful incentive to err on the side of compliance with a data demand at cost to privacy. There is no comparable immunity from the heavy fines that could be levied for ignoring a data access request on the basis of good-faith compliance with EU data protection rules.

No such liability limitation is available at all to companies and service providers subject to non-EU privacy protections. In some instances, the companies would be forced to choose between complying with EU data demands issued under EU standards and complying with legal restrictions on data exposure imposed by other jurisdictions. For example, mechanisms requiring service providers to disclose customer identification data on the basis of a prosecutorial demand could conflict with Canada’s data protection regime. The Personal Information Protection and Electronic Documents Act (PIPEDA), a Canadian privacy law, has been held to prevent service providers from identifying customers associated with anonymous online activity in the absence of a court order. As the European proposals purport to apply to domain name registries as well, these mechanisms could also interfere with efforts at ICANN to protect anonymity in website registration by shielding customer registration information.

The EU package could also compel U.S.-based providers to violate the Stored Communications Act (SCA), which prevents the disclosure of stored communications content in the absence of a court order. [5] The recent U.S. CLOUD Act created a new mechanism for bypassing these safeguards—allowing certain foreign nations (if the United States enters into an “executive agreement” with them under the CLOUD Act) to compel data production from U.S.-based providers without following U.S. law or getting an order from a U.S. judge. However, the United States has not entered into any such agreement with the EU or any EU member states at this stage, and the European package would require compliance even in the absence of one.

No Political Will to Fix the MLAT Process

The unfortunate backdrop to this race to subvert other states’ privacy standards is a regime that already exists for navigating cross-border data access. The Mutual Legal Assistance Treaty (MLAT) system creates global mechanisms by which one state can access data hosted in another while still complying with privacy safeguards in both jurisdictions. The MLAT system is in need of reform, as the volume of cross-border requests in modern times has strained some of its procedural mechanisms to the point where delays in responses can be significant. However, the fundamental basis of the MLAT regime remains sound and the pragmatic flaws in its implementation are far from insurmountable. Instead of reforming the MLAT regime in a way that would retain the current safeguards it respects, the European Commission and the United States seem to prefer to jettison these safeguards.

Perhaps ironically, much of the delay within the MLAT system arises from a lack of expertise among state agencies and officials in the data access laws of foreign states. Developing such expertise would allow state agencies to formulate foreign data access requests faster and more efficiently. It would also allow state officials to process incoming requests with greater speed. The EU proposals seek to bypass this requirement by effectively privatizing the legal assessment process, which means losing a real judge making real judgments. Service providers would need to decide whether foreign requests are properly formulated under foreign laws. Yet judicial authorities and state agencies are far better placed to make these assessments—not only from a resource management perspective, but also from a legitimacy perspective.

Contrary to this trend, European courts have continued to assert their own domestic privacy standards when protecting EU individuals’ data from access by foreign state agencies. Late last week, an Irish court questioned whether U.S. state agencies (particularly the NSA and FBI, which are granted broad powers under the U.S. Foreign Intelligence Surveillance Court) are sufficiently restrained in their ability to access EU individuals’ data. The matter was referred to the EU’s highest court, and an adverse finding could prevent global communications platforms from exporting EU individuals’ data to the U.S. Such a finding could even prevent those same platforms from complying with some U.S. data demands regarding EU individuals’ data if additional privacy safeguards and remedies are not added. It is not yet clear what role such restrictions might ultimately play in any EU-U.S. agreement that might be negotiated under the U.S. CLOUD Act.

Ultimately, both the U.S. CLOUD Act and the EU proposal are a missed opportunity to work towards a cross-border data access regime that facilitates efficient law enforcement access and respects privacy, due process, and freedom of expression.

Conclusion

Unlike the last-minute rush to approve the U.S. CLOUD Act, there is still a long way to go before finalizing the EU proposals. Both documents need to be reviewed by the European Parliament and the Council of the European Union, and be subject to amendments. Once approved by both institutions, the regulation will become immediately enforceable as law in all Member States simultaneously, and it will override all national laws dealing with the same subject matter. The directive, however, will need to be transposed into national law.

We call on EU policy-makers to avoid the privatization of law enforcement and work instead to enhance judicial cooperation within and outside the European Union.

  • 1. Specifically listed are: providers of electronic communications service, social networks, online marketplaces, hosting service providers, and Internet infrastructure providers such as IP address and domain name registries. See Article 2, Definitions.
  • 2. A substantial connection is defined in the regulation as having an establishment in one or more Member States. In the absence of an establishment in the Union, a substantial connection will be the existence of a significant number of users in one or more Member States, or the targeting of activities towards one or more Member States (as evidenced by factors such as the use of a language or a currency generally used in a Member State, the availability of an app in the relevant national app store, the provision of local advertising or advertising in the language used in a Member State, or the use of any information originating from persons in Member States in the course of its activities, among others). See Article 3, Scope of the Regulation.
  • 3. Transactional data is “generally pursued to obtain information about the contacts and whereabouts of the user and may serve to establish a profile of an individual concerned”. The regulation describes transactional data as “the source and destination of a message or another type of interaction, data on the location of the device, date, time, duration, size, route, format, the protocol used and the type of compression, unless such data constitutes access data.”
  • 4. The draft regulation states that access data is “typically recorded as part of a record of events (in other words a server log) to indicate the commencement and termination of a user access session to a service. It is often an individual IP address (static or dynamic) or other identifier that singles out the network interface used during the access session.”
  • 5. Most large U.S. providers insist on a warrant based on probable cause to disclose content, although the SCA allows disclosure on a weaker standard in some cases.

Categories: Privacy

Supreme Court Upholds Patent Office Power to Invalidate Bad Patents

Tue, 04/24/2018 - 19:19

In one of the most important patent decisions in years, the Supreme Court has upheld the power of the Patent Office to review and cancel issued patents. This power to take a “second look” is important because, compared to courts, administrative avenues provide a much faster and more efficient means for challenging bad patents. If the court had ruled the other way, the ruling would have struck down various patent office procedures and might even have resurrected many bad patents. Today’s decision [PDF] in Oil States Energy Services, LLC v. Greene’s Energy Group, LLC is a big win for those that want a more sensible patent system.

Oil States challenged the inter partes review (IPR) procedure before the Patent Trial and Appeal Board (PTAB). The PTAB is a part of the Patent Office and is staffed by administrative patent judges. Oil States argued that the IPR procedure is unconstitutional because it allows an administrative agency to decide a patent’s validity, rather than a federal judge and jury.

Together with Public Knowledge, Engine Advocacy, and the R Street Institute, EFF filed an amicus brief [PDF] in the Oil States case in support of IPRs. Our brief discussed the history of patents being used as a public policy tool, and how Congress has long controlled how and when patents can be canceled. We explained how the Constitution sets limits on granting patents, and how IPR is a legitimate exercise of Congress’s power to enforce those limits.

Our amicus brief also explained why IPRs were created in the first place. The Patent Office often does a cursory job reviewing patent applications, with examiners spending an average of about 18 hours per application before granting 20-year monopolies. IPRs allow the Patent Office to make sure it didn’t make a mistake in issuing a patent. The process also allows public interest groups to challenge patents that harm the public, like EFF’s successful challenge to Personal Audio’s podcasting patent. (Personal Audio has filed a petition for certiorari asking the Supreme Court to reverse, raising some of the same grounds argued by Oil States. That petition will likely be decided in May.)

The Supreme Court upheld the IPR process in a 7-2 decision. Writing for the majority, Justice Thomas explained:

Inter partes review falls squarely within the public rights doctrine. This Court has recognized, and the parties do not dispute, that the decision to grant a patent is a matter involving public rights—specifically, the grant of a public franchise. Inter partes review is simply a reconsideration of that grant, and Congress has permissibly reserved the PTO’s authority to conduct that reconsideration. Thus, the PTO can do so without violating Article III.

Justice Thomas noted that IPRs essentially serve the same interest as initial examination: ensuring that patents stay within their proper bounds.

Justice Gorsuch, joined by Chief Justice Roberts, dissented. He argued that only Article III courts should have the authority to cancel patents. If that view had prevailed, it likely would have struck down IPRs, as well as other proceedings before the Patent Office, such as covered business method review and post-grant review. It would also have left the courts with difficult questions regarding the status of patents already found invalid in IPRs. 

In a separate decision [PDF], in SAS Institute v. Iancu, the Supreme Court ruled that, if the PTAB institutes an IPR, it must decide the validity of all challenged claims. EFF did not file a brief in that case. While the petitioner had tenable arguments under the statute (indeed, it won), the result seems to make the PTAB’s job harder and creates a variety of problems (what is supposed to happen with partially-instituted IPRs currently in progress?). Since it is a statutory decision, Congress could amend the law. But don’t hold your breath for a quick fix.

Now that IPRs have been upheld, we may see a renewed push from Senator Coons and others to gut the PTAB’s review power. That would be a huge step backwards. As Justice Thomas explained, IPRs protect the public’s “paramount interest in seeing that patent monopolies are kept within their legitimate scope.” We will defend the PTAB’s role serving the public interest.

Categories: Privacy

Stop Egypt’s Sweeping Ridesharing Surveillance Bill

Tue, 04/24/2018 - 18:11

The Egyptian government is currently debating a bill that would compel all ride-sharing companies to store any Egyptian user data within Egypt. It would also create a system that would let the authorities have real-time access to their passenger and trip information. If passed, companies such as Uber and its Dubai-based competitor Careem would be forced to grant unfettered direct access to their databases to unspecified security authorities. Such a sweeping surveillance measure is particularly ripe for abuse in a country known for its human rights violations, including attempts to use surveillance against civil society. The bill is expected to pass a final vote before Egypt’s House on May 14th or 15th.

Article 10 of the bill requires companies to relocate their servers containing all Egyptian users’ information to within the borders of the Arab Republic of Egypt. Compelled data localization has frequently served as an excuse for enhancing a state’s ability to spy on its citizens.  

Even more troubling, article 9 of the bill forces these same ride-sharing companies to electronically link their local servers directly to unspecified authorities, from police to intelligence agencies. Direct access to a server would provide the Egyptian government unrestricted, real-time access to data on all riders, drivers, and trips. Under this provision, the companies themselves would have no ability to monitor the government’s use of their network data.

Effective computer security is hard, and no system will be free of bugs and errors.  As the volume of ride-sharing usage increases, risks to the security and privacy of ridesharing databases increase as well. Careem just admitted on April 23rd that its databases had been breached earlier this year. The bill’s demand to grant the Egyptian government unrestricted server access greatly increases the risk of accidental catastrophic data breaches, which would compromise the personal data of millions of innocent individuals. Careem and Uber must focus on strengthening the security of their databases instead of granting external authorities unfettered access to their servers.

Direct access to the databases of any company without adequate legal safeguards undermines the privacy and security of innocent individuals, and is therefore incompatible with international human rights obligations. For any surveillance measure to be legal under international human rights standards, it must be prescribed by law. It must be “necessary” to achieve a legitimate aim and “proportionate” to the desired aim. These requirements are vital in ensuring that the government does not adopt surveillance measures which threaten the foundations of a democratic society.

The European Court of Human Rights, in Zakharov v. Russia, made clear that direct access to servers is prone to abuse:

“...a system which enables the secret services and the police to intercept directly the communications of each and every citizen without requiring them to show an interception authorisation to the communications service provider, or to anyone else, is particularly prone to abuse.”                                                                                             

Moreover, the Court of Justice of the European Union (CJEU) has also discussed the importance of independent authorization prior to government access to electronic data. In Tele2 Sverige AB v. Post- och telestyrelsen, it held:

“it is essential that access of the competent national authorities to retained data should, as a general rule, (...) be subject to a prior review carried out either by a court or by an independent administrative body, and that the decision of that court or body should be made following a reasoned request by those authorities submitted...”.

Unrestricted direct access to the data of innocent individuals using ridesharing apps, by its very nature, eradicates any consideration of proportionality and due process. Egypt must turn back from the dead-end path of unrestricted access, and uphold its international human rights obligations. Sensitive data demands strong legal protections, not an all-access pass. Hailing a rideshare should never include blanket access for your government to follow you. We hope Egypt’s House of Representatives rejects the bill.

Categories: Privacy

California Bill Would Guarantee Free Credit Freezes in 15 Minutes

Tue, 04/24/2018 - 15:09

 

After the shocking news of the massive Equifax data breach, which has now ballooned to jeopardize the privacy of nearly 148 million people, many Americans are rightfully scared and struggling to figure out how to protect themselves from the misuse of their personal information.

To protect against credit fraud, many consumer rights and privacy organizations recommend placing a ‘credit freeze’ with the credit bureaus. When criminals seek to use breached data to borrow money in the name of a breach victim, the potential lender normally runs a credit check with a credit bureau. If there’s a credit freeze in place, then it’s harder to obtain the loan.

But placing a credit freeze can be cumbersome, time-consuming, and costly. The process can also vary across states. It can be an expensive time-suck if a consumer wants to place a freeze across all credit bureaus and for all family members.

Fortunately, California now has an opportunity to dramatically streamline the credit freeze process for its residents, thanks to a state bill introduced by Sen. Jerry Hill, S.B. 823. EFF is proud to support it.

The bill would allow Californians to place, temporarily lift, and remove credit freezes easily and at no charge. Credit reporting agencies would be required to carry out such a request in 15 minutes or less if the consumer uses the company’s website or mobile app.

The response time for written requests would also be shortened, from three days to just 24 hours. Additionally, credit reporting agencies would have to offer consumers the option of passing a credit freeze request along to the other credit reporting agencies, saving Californians time and reducing the likelihood that their information will be misused.

You can read our support letter for the bill here.

Free and convenient credit freezes are becoming even more important as many consumer credit reporting agencies push their inferior “credit lock” products. These products don’t offer the same protections that the law builds into credit freezes, and to use some of them, consumers have to agree to let their personal information be used for targeted ads.

The bill has passed the California Senate and will soon be heading to the Assembly for a vote. EFF endorses this effort to empower consumers to protect their sensitive information.

Categories: Privacy

Net Neutrality Did Not Die Today

Mon, 04/23/2018 - 17:01

When the FCC’s “Restoring Internet Freedom Order,” which repealed the net neutrality protections the FCC had previously issued, was published on February 22, many interpreted that to mean the repeal would go into effect on April 23. That’s not true, and we still don’t know when the previous net neutrality protections will end.

On the Federal Register’s website, the official daily journal of the United States federal government where all proposed and adopted rules are published, the so-called “Restoring Internet Freedom Order” has an “effective date” of April 23. But that date applies only to a few cosmetic changes. The majority of the rules governing the Internet, including the prohibitions on blocking, throttling, and paid prioritization, remain in place for now.

Before the FCC’s repeal of those protections can take effect, the Office of Management and Budget has to approve the new order, which it hasn’t done yet. Once that happens, we’ll get another notice in the Federal Register, and that’s when we’ll know for sure when ISPs will legally be able to change their practices.

If your Internet experience hasn’t changed today, don’t take that as a sign that ISPs aren’t going to start acting differently once the rule actually does take effect;  for example, Comcast changed the wording on its net neutrality pledge almost immediately after last year’s FCC vote.

Net neutrality protections didn’t end today, and you can help make sure they never do. Congress can still stop the repeal from going into effect by using the Congressional Review Act (CRA) to overturn the FCC’s action. All it takes is a simple majority vote held within 60 legislative working days of the rule being published. The Senate is only one vote short of the 51 votes necessary to stop the rule change, but there is a lot more work to be done in the House of Representatives. See where your members of Congress stand and voice your support for the CRA here.

Take Action

Save the net neutrality rules

Categories: Privacy

Stupid Patent of the Month: Suggesting Reading Material

Mon, 04/23/2018 - 16:49

Online businesses, like businesses everywhere, are full of suggestions. If you order a burger, you might want fries with that. If you read Popular Science, you might like reading Popular Mechanics. Suggestions of this kind are a very old part of commerce, and no one would seriously think of them as a patentable technology.

Except, apparently, for Red River Innovations LLC, a patent troll that believes its patents cover the idea of suggesting what people should read next. Red River filed a half-dozen lawsuits in East Texas throughout 2015 and 2016. Some of those lawsuits were against retailers like home improvement chain Menards, clothier Zumiez, and cookie retailer Ms. Fields. Those stores all got sued because they have search bars on their websites.

In some lawsuits, Red River claimed the use of a search bar infringed US Patent No. 7,958,138. For example, in a lawsuit against Zumiez, Red River claimed [PDF] that “after a request for electronic text through the search box located at www.zumiez.com, the Zumiez system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text, as described and claimed in the ’138 Patent.” In that case, the “reading material” is text like product listings for jackets or skateboard decks.

In another lawsuit, Red River asserted a related patent, US Patent No. 7,526,477, which is our winner this month. The ’477 patent describes a system of electronic text searching, where the user is presented with “related concepts” to the text they’re already reading. The examples shown in the patent display a kind of live index, shown to the right of a block of electronic text. In a lawsuit against Infolinks, Red River alleged [PDF] infringement because “after a request for electronic text, the InText system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text.”   

Suggesting and providing reading material isn’t an invention, but rather an abstract idea. The final paragraph of the ’477 patent’s specification makes it clear that the claimed method could be practiced on just about any computer. Under the Supreme Court’s decision in Alice v. CLS Bank, an abstract idea doesn’t become eligible for a patent merely because you suggest performing it with a computer. But hiring lawyers to make this argument is an expensive task, and it can be daunting to do so in a faraway locale, like the East Texas district where Red River has filed its lawsuits so far. That venue has historically attracted “patent troll” entities that see it as favorable to their cases.

The ’477 patent is another of the patents featured in Unified Patents’ prior art crowdsourcing project Patroll. If you know of any prior art for the ’477 patent, you can submit it (before April 30) to Unified Patents for a possible $2,000 prize.

The good news for anyone targeted by Red River today is that it will no longer be so easy to drag businesses from all over the country into a court of the patent owner’s choosing. The Supreme Court’s TC Heartland decision, combined with a Federal Circuit case called In re Cray, means that patent owners generally have to sue in a venue where the defendant actually does business.

This case is also a good example of why fee-shifting in patent cases, and upholding the case law of the Alice decision, are so important. Small companies using basic web technologies shouldn’t have to go through a multi-million-dollar jury trial to get the chance to prove that a patent like the ’477 patent is abstract and obvious.

Categories: Privacy

We’re in the Uncanny Valley of Targeted Advertising

Fri, 04/20/2018 - 14:22

Mark Zuckerberg, Facebook’s founder and CEO, thinks people want targeted advertising. The “overwhelming feedback,” he said multiple times during his congressional testimony, was that people want to see “good and relevant” ads. Why, then, are so many Facebook users, including lawmakers in the U.S. Senate and House, so fed up and creeped out by the uncannily on-the-nose ads? Targeted advertising on Facebook has gotten to the point where it’s so “good” that it’s bad: bad for users, who feel surveilled by the platform, and bad for Facebook, which is rapidly losing its users’ trust. But there’s a solution, and Facebook must prioritize it: stop collecting data from users without their knowledge or explicit, affirmative consent.

It should never be the user’s responsibility to have to guess what’s happening behind the curtain.

Right now, most users don’t have a clear understanding of all the types of data that Facebook collects or how that data is analyzed and used for targeting (or for anything else). While the company has heaps of information about its users to comb through, if you as a user want to know why you’re being targeted for a particular ad, you’re mostly out of luck. Sure, there’s a “why was I shown this” option on each individual ad, but it generally reveals only bland categories like “Over 18 and living in California.” To get even a semi-accurate picture of all the ways you can be targeted, you’d have to click through various sections, one at a time, on your “Ad Preferences” page.

Text from Facebook explaining why an ad has been shown to the user

Even more opaque are the targeting categories called “lookalike audiences.” Because Facebook has so many users (over 2 billion per month), it can take a list of people supplied by an advertiser, such as current customers or people who like a Facebook page, and then do behind-the-scenes magic to create a new audience of similar users to beam ads at.

Facebook does this by identifying “the common qualities” of the people on the uploaded list, such as related demographic information or interests, and finding people who are similar to (or “look like”) them, creating an all-new list. But those comparisons are made behind the curtain, so it’s impossible to know what data, specifically, Facebook is using to decide that you look like another group of users. And to top it off, much of what’s used for targeting isn’t information that users have explicitly shared; it’s information that has been actively, and silently, taken from them.
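
To make the mechanics less abstract, here is a minimal sketch of how a “lookalike” match could work in principle, assuming a simplified model in which each user is reduced to a short vector of interest scores. The feature names, the cosine-similarity comparison, and the threshold below are illustrative assumptions, not a description of Facebook’s actual system.

    # Illustrative sketch only: a toy "lookalike audience" built from interest vectors.
    # The features, similarity measure, and threshold are assumptions for explanation.
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm if norm else 0.0

    # Each user is a vector of interest scores, e.g. [sports, cooking, travel, gaming].
    seed_audience = {               # the list uploaded by the advertiser
        "alice": [0.9, 0.1, 0.7, 0.0],
        "bob":   [0.8, 0.2, 0.6, 0.1],
    }
    candidates = {                  # everyone else on the platform
        "carol": [0.85, 0.15, 0.65, 0.05],
        "dave":  [0.05, 0.9, 0.1, 0.8],
    }

    # The "common qualities" of the seed list, reduced to an average profile.
    dims = len(next(iter(seed_audience.values())))
    centroid = [sum(v[i] for v in seed_audience.values()) / len(seed_audience)
                for i in range(dims)]

    # Anyone whose profile is close enough to that average joins the lookalike audience.
    THRESHOLD = 0.95
    lookalike = [name for name, vec in candidates.items()
                 if cosine(vec, centroid) >= THRESHOLD]
    print(lookalike)  # ['carol']

The point of the sketch is that the seed list and the similarity rule, both invisible to the people being matched, are all it takes to sweep someone into a brand-new advertising audience.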

Telling the user that targeting data is provided by a third party like Acxiom doesn’t give any useful information about the data itself, instead bringing up more unanswerable questions about how data is collected

Just as vague is targeting based on data provided by third-party “data brokers.” In March, Facebook announced it would discontinue one aspect of this data sharing, known as “partner categories,” in which brokers like Acxiom and Experian combine their own massive datasets with Facebook’s to target users. Facebook has touted changes like this as helping to “improve people’s privacy,” but they won’t have a meaningful impact on our knowledge of how data is collected and used.

As a result, the ads we see on Facebook, and in other places online where behaviors are tracked to target users, creep us out. Whether they’re for shoes we’ve been considering buying to replace ours, for restaurants we happened to visit once, or even for toys our children have mentioned, the ads can reflect a knowledge of our private lives that the company has consistently failed to admit to having. Much of that knowledge is supplied by Facebook’s AI, which makes inferences about people, such as their political affiliation and race, that are clearly outside many users’ comfort zones. This AI-based ad targeting on Facebook is so obscured in its functioning that even Zuckerberg thinks it’s a problem. “Right now, a lot of our AI systems make decisions in ways that people don't really understand,” he told Congress during his testimony. “And I don't think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don't understand how they're making decisions.”

But we don’t have 10 or 20 years. We’ve entered an uncanny valley of opaque algorithms spinning up targeted ads that feel so personal and invasive that members of both the House and the Senate brought up the spreading myth that the company wiretaps its users’ phones. It’s understandable that users reach conclusions like this to explain the creeped-out feelings they rightfully experience. The concern that you’re being surveilled persists, essentially, because you are being surveilled, just not via your microphone. Facebook seems to possess an almost human understanding of us. Like the unease and discomfort people sometimes experience when interacting with a not-quite-human-like robot, being targeted with uncanny accuracy by machines, based on private, behavioral information we never actively gave out, feels creepy, uncomfortable, and unsettling.

The trouble isn’t that personalization is itself creepy. When AI is effective, it can produce amazing results that feel personalized in a delightful way, but only when we actively participate in teaching the system what we like and don’t like. AI-generated playlists, movie recommendations, and other algorithm-powered suggestions work to benefit users because the inputs are transparent and based on information we knowingly give those platforms, like the songs and television shows we like. AI that feels accurate, transparent, and friendly can bring users out of the uncanny valley to a place where they no longer feel unsettled but, instead, assisted.

But apply a similar level of technological prowess to other parts of our heavily surveilled, AI-infused lives, and we arrive in a world where platforms like Facebook creepily, uncannily show us advertisements for products we only vaguely remember considering, or surface people we met just once or merely thought about recently, all because the amount of data being hoovered up and churned through obscure algorithms is completely unknown to us.

Unlike the feeling that a friend put together a music playlist just for us, Facebook’s hyper-personalized advertising—and other AI that presents us with surprising, frighteningly accurate information specifically relevant to us—leaves us feeling surveilled, but not known. Instead of feeling wonder at how accurate the content is, we feel like we’ve been tricked.

To keep us out of the uncanny valley, advertisers and platforms like Facebook must stop compiling data about users without their knowledge or explicit consent. Zuckerberg told Congress multiple times that “an ad-supported service is the most aligned with [Facebook’s] mission of trying to help connect everyone in the world.” As long as Facebook’s business model is built around surveillance and around offering advertisers access to users’ private data for targeting, it’s unlikely we’ll escape the discomfort we feel when we’re targeted on the site. Steps such as being more transparent about what is collected, though helpful, aren’t enough. Even if users know what Facebook collects and how it’s used, having no way to control data collection, and more importantly no say in whether it happens in the first place, will still leave us stuck in the uncanny valley.

Even Facebook’s “helpful” features, such as reminding us of birthdays we had forgotten, showing pictures of relatives we’d just been thinking of (as one senator mentioned), or displaying upcoming event information we might be interested in, will continue to occasionally make us feel like someone is watching. We'll only be amazed (and not repulsed) by targeted advertising—and by features like this—if we feel we have a hand in shaping what is targeted at us. But it should never be the user’s responsibility to have to guess what’s happening behind the curtain.

While advertisers must be ethical in how they use tracking and targeting, a more structural change needs to occur. For the sake of the products, platforms, and applications of the present and future, developers must not only be more transparent about what they’re tracking, how they’re using those inputs, and how AI is making inferences about private data. They must also stop collecting data from users without their explicit consent. With transparency, users might be able to make their way out of the uncanny valley—but only to reach an uncanny plateau. Only through explicit affirmative consent—where users not only know but have a hand in deciding the inputs and the algorithms that are used to personalize content and ads—can we enjoy the “future that we all want to build,” as Zuckerberg put it.

Arthur C. Clarke famously said that “any sufficiently advanced technology is indistinguishable from magic,” and we should insist that the magic makes us feel wonder, not revulsion. Otherwise, we may end up stuck on the uncanny plateau, growing increasingly distrustful of AI in general and fearing its unsettling, not-quite-human understanding instead of enjoying its benefits.

Categories: Privacy

Minnesota Supreme Court Ruling Will Help Shed Light on Police Use of Biometric Technology

Fri, 04/20/2018 - 12:43

A decision issued Wednesday by the Minnesota Supreme Court will help the public learn more about how law enforcement uses privacy-invasive biometric technology.

The decision in Webster v. Hennepin County is mostly good news for the requester in the case, who sought the public records as part of a 2015 EFF and MuckRock campaign to track mobile biometric technology use by law enforcement across the country. EFF filed a brief in support of Tony Webster, arguing that the public needed to know more about how officials use these technologies.

Across the country, law enforcement agencies have been adopting technologies that allow cops to identify subjects by matching their distinguishing physical characteristics to giant repositories of biometric data. This could include images of faces, fingerprints, irises, or even tattoos. In many cases, police use mobile devices in the field to scan and identify people during stops. However, police may also use this technology when a subject isn’t present, such as grabbing images from social media, CCTV, or even lifting biological traces from seats or drinking glasses.

Webster’s request to Hennepin County officials sought a variety of records, and included a request for the agencies to search officials’ email messages for keywords related to biometric technology, such as “face recognition” and “iris scan.”

Officials largely ignored the request, and when Webster brought a legal challenge, they claimed that searching their email for keywords would be burdensome and that the request was improper under the state’s public records law, the Minnesota Government Data Practices Act.

Webster initially prevailed before an administrative law judge, who ruled that the agencies had failed to comply with the Data Practices Act in several respects. The judge also ruled that requesting a keyword search of email records was proper under the law and not burdensome.

County officials appealed that decision to a state appellate court. That court agreed that Webster’s request was proper and not burdensome. But it disagreed that the agencies had violated the Data Practices Act by not responding to Webster’s request or that they had failed to set up their records so that they could be easily searched in response to records requests.

Webster appealed to the Minnesota Supreme Court, which on Wednesday agreed with him that the agencies had failed to comply with the Data Practices Act by not responding to his request. The court, however, agreed with the lower appellate court that county officials did not violate the law in how they had configured their email service or arranged their records systems.

In a missed opportunity, however, the court declined to rule on whether searching for emails by keywords was appropriate under the Data Practices Act and not burdensome. The court claimed that it didn’t have the ability to review that issue because Webster had prevailed in the lower court and county officials failed to properly raise the issue.

Although this means that the lower appellate court’s decision affirming that email keyword searches are proper and not burdensome still stands, it would have been nice if the state’s highest court weighed in on the issue.

EFF is nonetheless pleased with the court’s decision as it means Webster can finally access records that document county law enforcement’s use of biometric technology. We would like to thank attorneys Timothy Griffin and Thomas Burman of Stinson Leonard Street LLP for drafting the brief and serving as local counsel.

For more on biometric identification, such as face recognition, check out EFF’s Street-Level Surveillance project.

Categories: Privacy

Dear Canada: Accessing Publicly Available Information on the Internet Is Not a Crime

Thu, 04/19/2018 - 23:00

Canadian authorities should drop charges against a 19-year-old Canadian accused of “unauthorized use of a computer service” for downloading thousands of public records hosted and available to all on a government website. The whole episode is an embarrassing overreach that chills the right of access to public records and threatens important security research.

At the heart of the incident, as reported by CBC News this week, is the Nova Scotian government’s embarrassment over its own failure to protect the sensitive data of 250 people who used the province’s Freedom of Information Act (FOIA) to request their own government files. These documents were hosted on the same government web server that hosted public records containing no personal information. Every response on the server had a nearly identical URL, differing only in a single document ID number at the end. The teenager took a known ID number and then, by changing that number in the URL, retrieved and stored all of the FOIA documents available on the Nova Scotia FOIA website.
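
As a rough illustration of how little “hacking” this involves, here is a minimal sketch of that kind of sequential download, assuming a purely hypothetical URL pattern; the domain, path, parameter name, and ID range below are placeholders, not the actual Nova Scotia site.

    # Illustrative sketch only: downloading a run of sequentially numbered public documents.
    # The URL pattern and ID range are hypothetical placeholders.
    import urllib.error
    import urllib.request

    BASE_URL = "https://records.example.ca/foia/Response?documentId={}"  # hypothetical

    def fetch_documents(start_id, count):
        for doc_id in range(start_id, start_id + count):
            url = BASE_URL.format(doc_id)
            try:
                with urllib.request.urlopen(url) as resp:
                    with open("foia_{}.html".format(doc_id), "wb") as out:
                        out.write(resp.read())  # save whatever the server publicly serves
            except urllib.error.URLError:
                continue  # skip IDs that don't resolve to a published document

    fetch_documents(1, 50)

Nothing in a loop like this defeats a password, a session token, or any other access control; it simply requests a series of pages that the server is configured to hand to anyone who asks.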

Beyond the absurdity of charging someone with downloading public records that were available to anyone with an Internet connection, if anyone is to blame for this mess, it’s Nova Scotia officials. They set up their public records server insecurely, permitting public access to other people’s private information, and they should accept responsibility for failing to secure such sensitive data rather than ginning up a prosecution. The fact that the government published documents containing sensitive data on a public website, without any passwords or access controls, demonstrates its own failure to protect the private information of individuals. Moreover, it does not appear that the site deployed even minimal technical safeguards to exclude widely known indexing tools such as Google search and the Internet Archive from archiving the records published on the site; both appear to have cached some of the documents.

The lack of any technical safeguards shielding the Freedom of Information responses from public access would make it difficult for anyone to know that they were downloading material containing private information, much less provide any indication that such activity was “without authorization” under the criminal statute. According to the report, more than 95% of the 7,000 Freedom of Information responses in question included redactions for any information properly excluded from disclosure under Nova Scotia’s FOI law. Freedom of Information laws are about furthering public transparency, and information released through the FOI process is typically considered to be public to everyone.

But beyond the details of this case, automating access to publicly available freedom of information requests is not conduct that should be criminalized: Canadian law criminalizes unauthorized use of  computer systems, but these provisions are only intended to be applied when the use of the service is both unauthorized and carried out with fraudulent intent. Neither element should be stretched to meet the specifics in this case. The teenager in question believed he was carrying out a research and archiving role, preserving the results of freedom of information requests. And given the setup of the site, he likely wasn’t aware that a few of the documents contained personal information. If true, he would not have had any fraudulent intent.

“The prosecution of this individual highlights a serious problem with Canada’s unauthorized intrusion regime,”  Tamir Israel, Staff Lawyer at CIPPIC, told us. “Even if he is ultimately found innocent, the fact that these provisions are sufficiently ambiguous to lay charges can have a serious chilling effect on innovation, free expression and legitimate security research.”

The deeper problem with this case is that it highlights how concerns about computer crime can lead to absurd prosecutions. The law the Canadian police are using to prosecute the teen was implemented after Canada signed the Budapest Cybercrime Convention. The convention’s original intent was to punish those who break into protected computers to steal data or cause damage.

Criminalizing access to publicly available data over the Internet twists the Cybercrime Convention’s purpose. Laws that offer the possibility of imposing criminal liability on someone simply for engaging with freely available information on the web pose a continuing threat to the openness and innovation of the Internet. They also threaten legitimate security research. As technology law professor Orin Kerr describes it, publicly posting information on the web and then telling someone they are not authorized to access it is “like publishing a newspaper but then forbidding someone to read it.”

Canada should follow the lead of the United States federal court’s decision in Sandvig v. Sessions, which made clear that using automated tools to access freely available information is not a computer crime. As the court wrote:

"Scraping is merely a technological advance that makes information collection easier; it is not meaningfully different from using a tape recorder instead of taking written notes, or using the panorama function on a smartphone instead of taking a series of photos from different positions.”

The same is true in the case of the Canadian teen.

We’ve long defended the use of “automated scraping,” the process of using web crawlers or bots (applications that run automated tasks over the Internet) to extract content and data from websites. Scraping powers a wide range of valuable tools and services that Internet users, programmers, journalists, and researchers around the world rely on every day, to the benefit of the broader public.
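
For readers unfamiliar with the term, here is a minimal sketch of what a scraper does, using only Python’s standard library; the URL is a placeholder, and a real crawler would add politeness measures such as rate limiting and robots.txt checks.

    # Toy scraper: fetch one page and collect the text of the links it contains.
    # The URL is a placeholder; real crawlers rate-limit themselves and honor robots.txt.
    from html.parser import HTMLParser
    import urllib.request

    class LinkTextParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_link = False
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.in_link = True

        def handle_endtag(self, tag):
            if tag == "a":
                self.in_link = False

        def handle_data(self, data):
            if self.in_link and data.strip():
                self.links.append(data.strip())

    parser = LinkTextParser()
    with urllib.request.urlopen("https://example.com/") as resp:
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    print(parser.links)

The same basic pattern, repeated across many pages and combined with storage and indexing, is what the archiving and aggregation tools described below build on.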

The value of automated scraping goes well beyond curious teenagers seeking access to freedom of information requests. The Internet Archive has long been scraping public portions of the world wide web and preserving them for future researchers. News aggregation tools, including Google’s Crisis Map, which aggregated critical information about California’s October 2016 wildfires, involve scraping. ProPublica journalists used automated scrapers to investigate Amazon’s algorithm for ranking products by price and uncovered that the algorithm was hiding the best deals from many of its customers. The researchers who studied racial discrimination on Airbnb also used bots, and found that guests with distinctively African American names were 16 percent less likely to be accepted than identical guests with distinctively white names.

Charging the Canadian teen with a computer crime for what amounts to his scraping publicly available online content has severe consequences for him and the broader public. As a result of the charges against him, the teen is banned from using the Internet and is concerned he may not be able to complete his education.

More broadly, the prosecution is a significant deterrent to anyone who wanted to use common tools such as scraping to collect public government records from websites, as the government’s own failure to adequately protect private information can now be leveraged into criminal charges against journalists, activists, or anyone else seeking to access public records.

Even if the teen is ultimately vindicated in court, this incident calls for a re-examination of Canada’s unauthorized intrusion regime and law enforcement’s use of it. The law was not intended for cases like this, and should never have been raised against an innocent Internet user.

Categories: Privacy

A Little Help for Our Friends

Thu, 04/19/2018 - 21:01

In periods like this one, when governments seem to ignore the will of the people as easily as companies violate their users’ trust, it’s important to draw strength from your friends. EFF is glad to have allies in the online freedom movement like the Internet Archive. Right now, donations to the Archive will be matched automatically by the Pineapple Fund.

Founded 21 years ago by Brewster Kahle, the Internet Archive has a mission to provide free and universal access to knowledge through its vast digital library. Its work has helped capture the massive, yet too often ephemeral, proliferation of human creativity and knowledge online. Popular tools like the Wayback Machine have allowed people to view deleted and altered webpages and to recover public statements in order to hold officials accountable.

EFF and the Internet Archive have stood together in a number of digital civil liberties cases. We fought back when the Archive became the recipient of a National Security Letter, a tool often used by the FBI to force Internet providers and telecommunications companies to turn over the names, addresses, and other records about their customers, and frequently accompanied by a gag order. EFF and the Archive have worked together to fight threats to free expression, online innovation, and the free flow of information on the Internet on numerous occasions. We have even collaborated on community gatherings like EFF’s own Pwning Tomorrow speculative fiction launch and the recent Barlow Symposium exploring EFF co-founder John Perry Barlow’s philosophy of the Internet.

EFF co-founder John Perry Barlow with the Internet Archive’s Brewster Kahle.

This month, the Bitcoin philanthropist behind the Pineapple Fund is challenging the world to support the Internet Archive and the movement for online freedom. The Pineapple Fund will match up to $1 million in donations to the Archive through April 30. (EFF was also the grateful recipient of a $1 million Pineapple Fund grant in January of this year.) If you would like to support the future of libraries and preserve online knowledge for generations to come, consider giving to the Internet Archive today. We salute the Internet Archive for supporting privacy, free expression, and the open web.

Categories: Privacy

Patent Office Throws Out GEMSA’s Stupid Patent on a GUI For Storage

Thu, 04/19/2018 - 18:14

The Patent Trial and Appeal Board has issued a ruling [PDF] invalidating claims from US Patent No. 6,690,400, which had been the subject of the June 2016 entry in our Stupid Patent of the Month blog series. The patent owner, Global Equity Management (SA) Pty Ltd. (GEMSA), responded to that post by suing EFF in Australia. Eventually, a U.S. court ruled that EFF’s speech was protected by the First Amendment. Now the Patent Office has found key claims from the ’400 patent invalid.

The ’400 patent described its “invention” as “a Graphic User Interface (GUI) that enables a user to virtualize the system and to define secondary storage physical devices through the graphical depiction of cabinets.” In other words, virtual storage cabinets on a computer. eBay, Alibaba, and Booking.com filed a petition for inter partes review arguing that claims from the ’400 patent were obvious in light of the Partition Magic 3.0 User Guide (1997) from PowerQuest Corporation. Three administrative patent judges from the Patent Trial and Appeal Board (PTAB) agreed.

The PTAB opinion notes that Partition Magic’s user guide teaches each part of the patent’s Claim 1, including the portrayal of a “cabinet selection button bar,” a “secondary storage partitions window,” and a “cabinet visible partition window.” This may be better understood through diagrams from the opinion. The first diagram below reproduces a figure from the patent labeled with claim elements. The second is a figure from Partition Magic, labeled with the same claim elements.

GEMSA argued that the ’400 patent was non-obvious because the first owner of the patent, a company called Flash Vos, Inc., “moved the computer industry a quantum leap forward in the late 90’s when it invented Systems Virtualization.” But the PTAB found that “Patent Owner’s argument fails because [it] has put forth no evidence that Flash Vos or GEMSA actually had any commercial success.”

The constitutionality of inter partes review is being challenged in the Supreme Court in the Oil States case. (EFF filed an amicus brief in that case in support of the process.) A decision is expected in Oil States before the end of June. The successful challenge to GEMSA’s patent shows the importance of inter partes review. GEMSA had sued dozens of companies alleging infringement of the ’400 patent. GEMSA can still appeal the PTAB’s ruling. If the ruling stands, however, it should end those suits as to this patent.

Related Cases: EFF v. Global Equity Management (SA) Pty Ltd
Categories: Privacy