In an opinion issued today (January 25, 2019), the Illinois Supreme Court held that plaintiff Stacy Rosenbach can claim a violation of the state’s biometric privacy law based on Six Flags’ collection of her son’s thumbprint without permission, even without alleging any actual harm.  This is an important ruling that could affect hundreds of similar pending cases.

In a unanimous decision, the court wrote that Rosenbach’s son can be considered an “aggrieved person” under the state’s Biometric Information Privacy Act (“BIPA”) based on a technical violation of the statute, without alleging that his data was stolen or misused.

Under the statute, “aggrieved” persons have a right of action and may recover, for each violation, the greater of $1,000 in liquidated damages or actual damages, plus reasonable attorney fees and costs, and any other relief, including an injunction, that the court deems appropriate.  The central issue was whether one qualifies as an “aggrieved person” without alleging some actual injury or adverse effect beyond a violation of his or her rights under the statute.  In the lower appellate court’s view, “a plaintiff who alleges only a technical violation of the statute without alleging some injury or adverse effect is not an aggrieved person.” 2017 IL App (2d) 170317, ¶ 23. Today, the Illinois Supreme Court reversed the appellate court’s judgment and remanded for further proceedings.

The Six Flags fingerprinting system involved two steps. First, the pass holder went to a security checkpoint, where he was asked to scan his thumb into the biometric data capture system. After that, he was directed to a nearby administrative building, where he obtained a season pass card.  The card and his thumbprint, when used together, enabled him to gain access as a season pass holder.  Upon returning home, the son was asked by plaintiff Rosenbach for the booklet or paperwork he had been given in connection with his new season pass. The son responded that Six Flags did “it all by fingerprint now” and that no paperwork had been provided.  The complaint alleged that neither the son, who was 14 years old and thus a minor, nor his mother, plaintiff Rosenbach, was informed in writing or in any other way of the specific purpose and length of term for which his fingerprint had been collected, nor was either asked to sign a written release regarding the taking of the fingerprint. Moreover, neither of them consented in writing “to the collection, storage, use, sale, lease, dissemination, disclosure, redisclosure, or trade of, or for [defendants] to otherwise profit from, [son’s] thumbprint or associated biometric identifiers or information.”

The defendants sought dismissal on the ground, among others, that the plaintiff had suffered no actual or threatened injury and therefore lacked standing to sue.  In rejecting this position, the court noted that, “[w]hen the General Assembly has wanted to impose such a requirement in other situations, it has made that intention clear,” citing Illinois’s Consumer Fraud and Deceptive Business Practices Act, which requires actual damage to bring a private right of action. See 815 ILCS 505/10a(a) (action for actual damages).  In contrast, Illinois’s AIDS Confidentiality Act (410 ILCS 305/1) does not require proof of actual damages in order to recover. The court noted that Section 20 of the Act in question followed the latter model, providing simply that “[a]ny person aggrieved by a violation of this Act shall have a right of action in a State circuit court or as a supplemental claim in federal district court against an offending party.”

The court then discussed the historical and popular use of the term “aggrieved”, concluding that it is sufficient that the plaintiff’s legal rights were adversely affected. Specifically, the Act codifies that individuals possess a right to privacy in and control over their biometric identifiers, and when a private entity fails to comply with one of the statute’s requirements, that violation “constitutes an invasion, impairment, or denial of the statutory rights of any person or customer whose biometric identifier or biometric information is subject to the breach.” Such a person or customer would therefore “clearly be ‘aggrieved’ within the meaning of Section 20 of the Act” and entitled to seek recovery.  The court added that the appellate court’s characterization of the violation as merely technical in nature “misapprehends the nature of the harm our legislature is attempting to combat through this legislation”, noting that these procedural protections “are particularly crucial in our digital world because technology now permits the wholesale collection and storage of an individual’s unique biometric identifiers – identifiers that cannot be changed if compromised or misused.”  When a private entity fails to adhere to these statutory procedures, “the right of the individual to maintain [his or] her biometric privacy vanishes into thin air. The precise harm the Illinois legislature sought to prevent is then realized.”  For these reasons, the court stated, the procedural injury is “real and significant”.

To view the court’s opinion, click here.

On January 21, 2019, the French Data Protection Authority, the Commission Nationale de l’Informatique et des Libertés (“CNIL”), announced a sanction of 50 million euros against Google.  On May 25 and 28, 2018, the CNIL received complaints from two different associations asserting that Google did not have a valid legal basis for processing the personal data of the users of its services, particularly with respect to ad personalization.  The complaints were brought by “None of Your Business”, a nonprofit organization chaired by Max Schrems (yes, that Max Schrems), and “La Quadrature du Net”, a French digital rights advocacy group. The decision is significant for at least two reasons: (1) it reveals the CNIL’s analysis of why it was competent to issue the decision and sanction even though Google’s European headquarters is in Ireland, and (2) it is the first time the CNIL has leveraged its new powers under the GDPR to issue a sanction far exceeding its pre-GDPR limits.

Coordination of Enforcement

The GDPR establishes a “one-stop-shop” mechanism, providing that an organization with a main establishment in the European Union shall have only one interlocutor, the Data Protection Authority (“DPA”) in the country where its main establishment is located, which serves as the “lead authority”.  In Google’s case, its European headquarters is in Ireland.  The lead authority must coordinate cooperation among the other DPAs before taking any decision about cross-border processing carried out by the company. The CNIL cited the definition of “main establishment” in Article 4(16)(a):  “as regards a controller with establishments in more than one Member State, the place of its central administration in the Union, unless the decisions on the purposes and means of the processing of personal data are taken in another establishment of the controller in the Union and the latter establishment has the power to have such decisions implemented, in which case the establishment having taken such decisions is to be considered to be the main establishment …”.  It then discussed several aspects of Google’s European headquarters in Ireland.

After lengthy discussion, the CNIL’s restricted committee concluded that Google’s European headquarters could not be considered a main establishment within the meaning of Article 4(16), because it was not established that the Irish headquarters had decision-making power over the privacy policies presented to the user during the creation of an account in the course of configuring an Android mobile phone.  In the absence of a main establishment, the CNIL was competent to initiate this procedure and to exercise its powers. The CNIL therefore asserted authority to make decisions regarding Google’s processing operations, applying the new European framework as interpreted by the European authorities in the EDPB’s guidelines.

The CNIL carried out online inspections in September 2018 to verify the compliance of the processing operations implemented by Google with the French Data Protection Act and the GDPR, analyzing the browsing pattern of a user and the documents he or she can access when creating a Google account during the configuration of an Android mobile device. On the basis of these inspections, the CNIL’s restricted committee observed two types of breaches of the GDPR.

Violation of Transparency and Information.

First, the CNIL noticed that the information provided by Google was not easily accessible for users:

“Essential information, such as the data processing purposes, the data storage periods or the categories of personal data used for the ads personalization, are excessively disseminated across several documents, with buttons and links on which it is required to click to access complementary information. The relevant information is accessible after several steps only, implying sometimes up to 5 or 6 actions. For instance, this is the case when a user wants to have complete information on his or her data collected for the personalization purposes or for the geo-tracking service.”

The restricted committee also observed that some information is not always clear or comprehensive:

“Users are not able to fully understand the extent of the processing operations carried out by Google. But the processing operations are particularly massive and intrusive because of the number of services offered (about twenty), and the amount and the nature of the data processed and combined. The restricted committee observe[d] in particular that the purposes of processing are described in a too generic and vague manner, and so are the categories of data processed for these various purposes. Similarly, the information communicated is not clear enough so that the user can understand that the legal basis of processing operations for the ads personalization is the consent, and not the legitimate interest of the company. Finally, the restricted committee notices that the information about the retention period is not provided for some data.”

Violation of the Obligation to Have a Legal Basis for Ads Personalization Processing.

Although Google stated that it obtained user consent to process data for ads personalization purposes, the committee considered that the consent was not validly obtained for two reasons:

“First, the restricted committee observed that the users’ consent was not sufficiently informed.   The information on processing operations for the ads personalization is diluted in several documents and does not enable the user to be aware of their extent. For example, in the section “Ads Personalization”, it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, You tube, Google home, Google maps, Playstore, Google pictures…) and therefore of the amount of data processed and combined.”

Second, the committee observed that the consent collected by Google was neither “specific” nor “unambiguous”.  Admittedly, when a user creates an account he or she can modify some account options by clicking on the button “More options”, accessible above the button “Create Account”.  It is notably possible to configure the display of personalized ads.  However, the user not only has to click on “More options” to access the configuration, but the display of personalized ads is also pre-checked, whereas the GDPR provides that consent is “unambiguous” only with a clear affirmative action from the user (e.g., opting in by ticking a box that is not pre-ticked, as opposed to opting out by clearing a pre-ticked box). Finally, before creating an account, the user is asked to tick the boxes “I agree to Google’s Terms of Service” and “I agree to the processing of my information as described above and further explained in the Privacy Policy” in order to create the account.  In other words, the user gives his or her consent in full for all of the processing operations purposes carried out by Google on the basis of this consent (e.g., ads personalization, speech recognition, etc.), whereas the GDPR provides that consent is “specific” only if it is given distinctly for each purpose.

Sanctions.

As a result of its findings, the restricted committee imposed a public financial penalty of 50 million euros on Google, the first time the CNIL has applied the new sanction limits provided by the GDPR.  The CNIL stated that the amount and the publicity of the sanction were “justified by the severity of the infringements observed regarding the essential principles of the GDPR:  transparency, information, and consent.”

Despite the measures implemented by Google (documentation and configuration tools), the CNIL stated that the infringements observed “deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services, and almost unlimited possible combinations.”  The committee recalled that the extent of the processing operations in question “imposes to enable the users to control their data and therefore to sufficiently inform them and allow them to validly consent.”  Moreover, the committee stated, the violations were continuous breaches of the regulation, still observed to date; this is not a one-off, time-limited infringement.  The CNIL also noted the important place the Android operating system holds in the French market, with thousands of French citizens creating Google accounts every day when using their smartphones. Finally, the restricted committee pointed out that the economic model of the company is partly based on ads personalization.

Google Response.

In a statement obtained by ABC News, a Google spokesperson said the company is “studying the decision” to determine its next steps:

“People expect high standards of transparency and control from us. We’re deeply committed to meeting those expectations and the consent requirements of the GDPR. We’re studying the decision to determine our next steps.”

To view the CNIL press release, click here.

To view the CNIL decision (in French), click here.

The Federal Financial Institutions Examination Council (FFIEC) has issued a joint statement providing guidance for financial institutions about the role of cyber insurance in risk management of information technology systems. The FFIEC comprises the principals of the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the National Credit Union Administration, the Office of the Comptroller of the Currency, the Consumer Financial Protection Bureau, and the State Liaison Committee.

On April 10, 2018, the FDIC, as a member of the FFIEC, issued statement FIL-16-2018, applicable to all FDIC-supervised institutions. On April 11, 2018, the Office of the Comptroller of the Currency (OCC) issued a similar bulletin (OCC Bulletin 2018-8) on the FFIEC’s joint statement, noting that it applies to all institutions supervised by the OCC.  The joint statement and the associated FDIC letter and OCC bulletin include the following highlights:

  • FDIC-supervised institutions are not required to maintain cyber insurance. However, cyber insurance could offset financial losses from a variety of exposures—including data breaches resulting in the loss of confidential information—that may not be covered by more traditional insurance policies.
  • Traditional general liability insurance policies may not provide effective coverage for all potential exposures caused by cyber events.
  • Cyber insurance does not replace a sound and effective risk management program.
  • Cyber attacks are increasing in volume and sophistication, and traditional general liability insurance policies may not provide effective coverage for potential exposures caused by cyber events.
  • Cyber insurance may help reduce financial losses from a variety of exposures, such as data breaches resulting in the loss of sensitive customer information.
  • Cyber insurance does not diminish the importance of a sound control environment; rather, cyber insurance may be a component of a broader risk management strategy.
  • As institutions weigh the benefits and costs of cyber insurance, considerations may include: (a) involving multiple stakeholders in the cyber insurance decision; (b) performing proper due diligence to understand available cyber insurance coverage; and (c) evaluating cyber insurance in the annual insurance review and budgeting process.

The FFIEC’s statement is not intended to contain new regulatory expectations, but instead to provide awareness of the potential role of cyber insurance in financial institutions’ risk management programs.  Financial institutions ultimately remain responsible for maintaining a control environment consistent with the guidance outlined in the FFIEC IT Examination Handbook.

Click here to see the FFIEC press release.

Click here to see the full 3-page joint statement.

Over a dozen lawsuits have been filed by users and investors against Facebook after it was revealed last month that Cambridge Analytica, a political research firm, obtained personal information on millions of Facebook users. Cambridge Analytica obtained the data through a personality test app linked to Facebook accounts. Many of the lawsuits claim the information was used to create profiles and target audiences for purposes of categorizing voters in the 2016 presidential election. Most of the lawsuits accuse Facebook of failing to protect users’ personal information despite stating in its privacy policy that Facebook users own and control personal information posted on Facebook. Some of the lawsuits go beyond allegations of privacy violations and accuse Facebook of negligence, consumer fraud, unfair competition, securities fraud and racketeering. On March 16, Facebook announced that it was suspending Cambridge Analytica for violating Facebook’s policies on data gathering.

Starting April 9, Facebook will begin alerting users whose data may have been harvested by Cambridge Analytica. As part of this process, the company plans to post a link at the top of users’ news feeds that will allow them to see which apps are connected to their Facebook accounts and what information those apps are permitted to see. Additionally, Facebook CEO Mark Zuckerberg is scheduled to testify before the U.S. Congress on April 10 and April 11. Zuckerberg will appear before the Senate Judiciary and Commerce committees on April 10 and the House Energy and Commerce Committee on the morning of April 11. Zuckerberg’s testimony will hopefully shed more light on how this alleged violation occurred and its broader implications for data privacy.

On Wednesday, March 28, 2018, the Alabama Data Breach Notification Act of 2018 (SB318) was signed into law by the Governor, making Alabama the last of the 50 states to enact a data breach notification law.  (South Dakota’s data breach notification law was signed by its governor on March 21, 2018, making it the 49th state.)  The new law becomes effective on June 1, 2018.  Below is a more detailed summary of the Alabama law:

Definitions.

The Alabama law defines a security breach as the “unauthorized acquisition of data in electronic form containing Sensitive Personally Identifying Information” (“Sensitive PII”).  As is typical, a breach does not include: (a) good faith acquisitions by employees or agents, unless the information is used for unrelated purposes; (b) the release of public records not otherwise subject to confidentiality or nondisclosure requirements; or (c) lawful investigative, protection, or intelligence activities by a state law enforcement or intelligence agency.

“Sensitive PII” is defined as an Alabama resident’s first name or first initial and last name in combination with one or more of the following regarding the same resident:

  • A non-truncated Social Security number or tax identification number;
  • A non-truncated driver’s license number, state ID number, passport number, military ID number, or other unique identification number issued on a government document;
  • A financial account number, including a bank account, credit card, or debit card number, in combination with any security code, access code, password, expiration date, or PIN necessary to access the financial account or to conduct a transaction that will credit or debit the financial account;
  • Any information regarding an individual’s medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional;
  • An individual’s health insurance policy number or subscriber identification number and any unique identifier used by a health insurer to identify the individual; or
  • A user name or email address, in combination with a password or security question and answer, that would permit access to an online account affiliated with the covered entity that is reasonably likely to contain or is used to obtain Sensitive PII.

Notification Requirements.

  • Notification to Individuals. If a covered entity determines that an unauthorized acquisition of Sensitive PII has occurred or is reasonably believed to have occurred, and is reasonably likely to cause substantial harm, it shall notify affected individuals as expeditiously as possible and without unreasonable delay, but no later than 45 days after the determination of both a breach and a likelihood of substantial harm. A federal or state law enforcement agency may request that notification be delayed if it would interfere with an investigation.  If an entity determines that notice is not required, it shall document that determination and maintain the documentation for at least 5 years.
    • Format and Content. Written notice can be by mail or email, and must include: (1) the estimated date or date range of the breach; (2) a description of the Sensitive PII acquired; (3) a general description of actions taken to restore the security and confidentiality of the personal information; (4) steps an affected individual can take to protect himself or herself from identity theft; and (5) contact information for the covered entity in case of inquiries.
    • Substitute Notice. Substitute notice can be provided if direct notice would cause excessive cost relative to the covered entity’s resources, if the affected individuals exceed 100,000 persons, or if there is insufficient contact information for the individuals required to be notified.  Costs exceeding $500,000 are automatically deemed excessive.  Substitute notice may include both posting on the covered entity’s website for 30 days and using print or broadcast media in the major urban and rural areas where the affected individuals reside.   An alternative form of substitute notice may be approved by the Attorney General.
  • Notification to Attorney General. If the affected individuals exceed 1,000, the entity must notify the Attorney General as expeditiously as possible and without unreasonable delay, but no more than 45 days after receiving notice of a breach from a third-party agent or after determining that a breach and a substantial likelihood of harm have occurred. Notice must include: (1) an event synopsis; (2) the approximate number of affected individuals in Alabama; (3) any free services being offered by the covered entity to individuals and instructions on how to use them; and (4) contact information for additional inquiries.  The covered entity may provide supplemental or updated information at any time, and information marked as confidential is not subject to any open records or freedom of information laws.
  • Notification to Consumer Reporting Agencies. If the covered entity discovers notice is required to more than 1,000 individuals at a single time, it shall also notify, without unreasonable delay, all consumer reporting agencies.
  • Third Party Notification. Third-party agents experiencing a breach of a system maintained on behalf of a covered entity shall notify the covered entity as expeditiously as possible and without unreasonable delay, but no later than 10 days following the determination (or reason to believe) that a breach has occurred.

Enforcement.

  • Enforcement Authority. Violating the notification provisions is an unlawful trade practice under the Alabama Deceptive Trade Practices Act (ADTPA), and the Attorney General has exclusive authority to bring an action for penalties. There is no private cause of action.  The Attorney General also has exclusive authority to bring a class action for damages, but recovery is limited to actual damages plus reasonable attorney’s fees and costs.  The Attorney General must submit an annual report.
  • Penalties. Any entity knowingly violating the notification provisions is subject to ADTPA penalties of up to $2,000 per day, capped at $500,000 per breach.   (“Knowing” means willfully or with reckless disregard.)  In addition to these penalties, a covered entity violating the notification provisions shall be liable for a penalty of up to $5,000 per day for each day it fails to take reasonable action to comply with the notice provisions (a sketch of how these ceilings combine appears after this list). Government entities are subject to the notice requirements but exempt from penalties, although the Attorney General may bring an action to compel performance or enjoin certain acts.
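
To make the arithmetic of these penalty ceilings concrete, here is a minimal sketch in Python (illustrative only, not legal advice; the daily rates and caps are the figures summarized above, and the functions compute maximum exposure, not what a court would actually award):

```python
def knowing_violation_ceiling(days: int) -> int:
    """Maximum ADTPA penalty for a knowing violation of the notice
    provisions: up to $2,000/day, capped at $500,000 per breach."""
    return min(2_000 * days, 500_000)

def unreasonable_delay_ceiling(days: int) -> int:
    """Additional penalty of up to $5,000/day for each day the entity
    fails to take reasonable action to comply with the notice provisions."""
    return 5_000 * days

# The $500,000 cap binds once a knowing violation passes 250 days:
print(knowing_violation_ceiling(300))    # 500000
print(unreasonable_delay_ceiling(10))    # 50000
```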

Other Requirements.

  • While enforcement authority is limited to notification violations, the statute also instructs entities to take “reasonable security measures”, provides guidance on conducting a “good faith and prompt investigation” of a breach, and requires covered entities to take reasonable measures to dispose of Sensitive PII. It is unclear how these provisions might be enforced, except potentially to determine whether a notification violation was willful or with reckless disregard.
    • “Reasonable Security Measures.” Covered entities and third-party agents must implement and maintain reasonable security measures to protect Sensitive PII, and the law provides guidance on what elements to include.  It also provides guidance on what an assessment of a covered entity’s security measures might consider and emphasize.
    • Breach Investigation. A covered entity shall conduct a “good faith and prompt investigation”, and the law lists considerations to include in the investigation.
    • Records Disposal. A covered entity or third-party agent must take reasonable measures to dispose of or arrange for the disposal of records containing Sensitive PII when they are no longer to be retained, and the law includes examples of such disposal methods.

A Berlin regional court recently ruled that Facebook’s use of personal data was illegal because the social media platform did not adequately secure the informed consent of its users. A German consumer rights group, the Federation of German Consumer Organisations (vzbv), said that Facebook’s default settings and some of its terms of service were in breach of consumer law, and that the court had found parts of the consent to data usage to be invalid. One concern highlighted by the consumer rights group was that, in Facebook’s app for smartphones, a service was pre-activated that revealed the user’s location to the person they were chatting with.  Also, in the privacy settings, ticks were already placed in boxes that allowed search engines to link to the user’s timeline, meaning that anyone could quickly and easily find a user’s profile.

A week after the ruling, Facebook promised to radically overhaul its privacy settings, saying the work would prepare it for the introduction of the upcoming General Data Protection Regulation (GDPR).  Facebook has faced repeated attacks from Germany and other European regulators over issues ranging from perceived anti-competitive practices to alleged misuse of customer data. In October, the Article 29 Working Party (WP29) launched a task force to examine the sharing of user data between WhatsApp and Facebook, which it says does not have sufficient user consent.  “Whilst the WP29 notes there is a balance to be struck between presenting the user with too much information and not enough, the initial screen made no mention at all of the key information users needed to make an informed choice, namely that clicking the agree button would result in their personal data being shared with the Facebook family of companies,” the group told WhatsApp in October.

Similarly, a Belgian court earlier this month ordered Facebook to stop collecting data on users or face fines of €250,000 a day, up to a maximum of €100 million.  The court ruled that Facebook had broken privacy laws by tracking people on third-party sites. “Facebook informs us insufficiently about gathering information about us, the kind of data it collects, what it does with that data and how long it stores it,” the court said. “It also does not gain our consent to collect and store all this information.”  The court ordered Facebook to delete all data it had gathered illegally on Belgian citizens, including people who were not users of the social network.

With regard to the German suit, Facebook said it would appeal, releasing a statement that it had already made significant changes to its terms of service and data protection guidelines since the case was first brought in 2015. In the meantime, Facebook stated it would update its data protection guidelines and terms of service so that they comply with the new EU-wide GDPR rules.

On January 28, 2018, as part of Data Privacy Day, Facebook shared its data privacy principles for the first time. In a blog post drafted by Erin Egan, Facebook’s Chief Privacy Officer, Facebook posted these principles to help users understand how data is used and managed on the site. Among other things, Facebook’s data privacy principles stress user control of privacy, the goal of protecting users’ accounts and implementing security tools (like two-factor authentication), and user ownership of information shared. Facebook also announced the launch of a new education campaign to help users understand how data privacy is handled by the company. As part of this effort, Facebook is preparing to roll out a “Privacy Center” that features important privacy settings in a single place.

This publication comes ahead of the European Union’s (EU) General Data Protection Regulation (GDPR), which will be implemented on May 25, 2018. The GDPR will set stringent data privacy requirements for companies operating in the EU.  In recent years, Facebook has faced scrutiny from EU regulators over its handling of user data. Facebook hopes to embrace a more transparent data privacy approach to meet all GDPR obligations.

To view Facebook’s Privacy Principles, click here.

With the May 25, 2018 deadline quickly approaching, many businesses are scrambling to prepare for compliance with the EU’s General Data Protection Regulation (GDPR), and questions and conversations are heating up.  Others are still trying to wrap their arms around what the GDPR is and what it means for U.S. businesses.  For those in the latter camp, below are a few basics to help familiarize yourself with the regulation and its relevance to you.

  1. I’m a U.S. business. Why does GDPR matter to me?

The GDPR’s reach extends not only to European-based businesses, but also to all companies that do business, have customers, or collect data from people in the EU.  If you have a website that could collect data from someone visiting the site from the EU, your business could be affected. No matter where your business resides, if you intentionally offer goods or services to individuals in the European Union, or monitor the behavior of individuals within the EU, the GDPR could be applicable.

  2. What’s the risk?

In addition to the PR or brand risk of being associated with noncompliance, the GDPR provides for some significant monetary penalties.  Some violations are subject to fines of up to 10 million euros or up to 2% of global annual turnover, whichever is greater.  For other violations, the ceiling doubles: up to 20 million euros or 4% of global annual turnover, whichever is greater.  For large businesses, this could be a substantial amount.
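
To make the two fine tiers concrete, here is a minimal sketch in Python (the tier amounts are those described above; the turnover figure in the example is invented):

```python
def gdpr_fine_ceiling(global_annual_turnover_eur: float, upper_tier: bool) -> float:
    """Maximum possible fine under the GDPR's two-tier scheme.

    Lower tier: up to EUR 10M or 2% of global annual turnover, whichever is greater.
    Upper tier: up to EUR 20M or 4% of global annual turnover, whichever is greater.
    """
    if upper_tier:
        return max(20_000_000, 0.04 * global_annual_turnover_eur)
    return max(10_000_000, 0.02 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover:
print(gdpr_fine_ceiling(2_000_000_000, upper_tier=True))   # 80000000.0 (4% exceeds the EUR 20M floor)
print(gdpr_fine_ceiling(2_000_000_000, upper_tier=False))  # 40000000.0
```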

  3. What should I be doing?

First, talk with your general counsel or outside law firm.  They can help you interpret the law, review contractual obligations, and assess the company’s overall privacy policies to help guide your compliance strategy going forward.  They can also help create defensible interpretations of certain ambiguous language in the regulation (e.g., what is “personal data” for purposes of the GDPR?).  The Article 29 Working Party, made up of the data protection authorities (DPAs) from all EU member states, has published guidance clarifying certain provisions, which can be helpful during this process.

Second, create a cross-functional team spanning areas including (but not limited to) communications/PR, IT, customer experience, digital, legal, and operations.  This may look much like any cross-functional team you may have (and hopefully have) already established to prepare for data breaches.  This team can begin designing and implementing a compliance strategy.  Under certain conditions, your business may need to appoint a Data Protection Officer (DPO) (see Article 37).

  4. What are some key points of the GDPR?

The GDPR is a data privacy regulation in the EU aimed at protecting users’ rights and privacy online.  It requires businesses to assess what kinds of data they’re collecting and to make that data accessible to users.  The regulation is long and complex with several moving parts, but a few key points are worth noting.

Key Definitions:  You will see several references to controllers, data subjects, personal data, and processing.  This vocabulary may be unfamiliar in relation to U.S. law, but here is how these key terms are defined. As a business subject to the GDPR, you may be a “controller” or you may be a “processor”; the individual is the “data subject”:

  • “Controller” = “the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data; where the purposes and means of such processing are determined by Union or Member State law, the controller or the specific criteria for its nomination may be provided for by Union or Member State law.”
  • “Processor” = “a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller.”
  • “Data subject” = “an identified or identifiable natural person” (see the definition of “personal data” below).
  • “Personal data” = “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.”
  • “Processing” = “any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.”
  5. Some Key Articles/Provisions:

Article 12. Transparent information, communication and modalities for the exercise of the rights of the data subject.

This article creates rules around how users give consent to the recording of their data.  The data subject must be provided with accurate information on all relevant issues, such as the kind of data to be collected or processed, and for what purposes. For some particularly sensitive data (e.g., political opinions, religion, biometric data (including photographs), health data, etc.), consent must be “explicit”.   Consent must be “freely given”, meaning that the user has a “genuine” choice and is able to withdraw consent “without detriment”.  The data subject cannot be obliged to consent to data processing that is not necessary to provide the service he or she has requested.

For these reasons, traditional “notice and consent” may not be sufficient, and actionable forms or buttons may be necessary.  Recital 32 of the GDPR notes that an affirmative action signaling consent may include ticking a box on a website, “choosing technical settings for information society services”, or “another statement or conduct” that clearly indicates assent to the processing.  “Silence, pre-ticked boxes or inactivity,” however, is presumed inadequate to confer consent.  For those reaching European citizens digitally, working with IT or UX experts may prove important to create a seamless, but compliant, experience.
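
For illustration, here is a minimal sketch of what purpose-specific, affirmatively granted consent might look like in application code (the purposes and field names are hypothetical, not taken from the regulation):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

# Hypothetical purposes; under the GDPR, consent must be captured
# separately for each purpose, never bundled into one "I agree" flag.
PURPOSES = {"ads_personalization", "speech_recognition", "analytics"}

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str
    granted: bool = False                 # defaults to False: no pre-ticked boxes
    timestamp: Optional[datetime] = None

    def grant(self) -> None:
        """Record the user's clear affirmative action for this one purpose."""
        self.granted = True
        self.timestamp = datetime.utcnow()

def may_process(records: List[ConsentRecord], user_id: str, purpose: str) -> bool:
    """Allow processing for a purpose only if the user affirmatively consented to it."""
    return any(
        r.user_id == user_id and r.purpose == purpose and r.granted
        for r in records
    )
```

Defaulting `granted` to False, with a separate record per purpose, mirrors the two requirements discussed above: no pre-ticked boxes, and consent given distinctly for each purpose.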

Article 17. Right to erasure.

The “right to be forgotten” means that businesses must be able to remove a user’s data at his or her request “without undue delay”.  Further, businesses have an obligation to erase personal data “without undue delay” under certain additional circumstances.

Article 20. Right to data portability.

Users have the right to receive any data that a business holds on them, and the business must provide such data in a “structured, commonly used and machine-readable format”.  Further, the data subject has the right to transmit such data to another business without being hindered by the business that provided the data, where the processing is either (a) based on certain consents or (b) carried out by automated means.  Where technically feasible, the data subject also has the right to have the personal data transmitted directly from one controller to another.
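
A minimal sketch of what a portability export might look like (the record fields are hypothetical; JSON is one common example of a “structured, commonly used and machine-readable format”):

```python
import json

# Hypothetical user record held by a controller.
user_record = {
    "user_id": "12345",
    "email": "user@example.com",
    "preferences": {"newsletter": True},
    "order_history": [{"order_id": "A-1", "total_eur": 19.99}],
}

def export_user_data(record: dict) -> str:
    """Serialize a user's data for transmission to the data subject,
    or directly to another controller where technically feasible."""
    return json.dumps(record, indent=2, sort_keys=True)

print(export_user_data(user_record))
```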

Article 8. Conditions applicable to child’s consent in relation to information society services.

Article 8 limits the ability of children to consent to data processing without parental authorization.  Previous drafts of the GDPR had set the age of consent at 13 years old, which would have been consistent with the age set by the United States’ Children’s Online Privacy Protection Act (“COPPA”). A last-minute proposal raised the age of consent to 16 years old.  In the final draft, the age of consent is set at 16 unless a member state sets a lower age, which may be no lower than 13 years.  Thus, unless otherwise provided by member state law, controllers must obtain parental consent when processing the personal data of a child under the age of 16. With the U.S. age of consent under COPPA set at 13 and the European age of consent under the GDPR set at 16 (unless lowered by a member state), this could present some challenges for U.S. businesses offering international services.

Article 32.  Security of Processing.

Firms must follow security best practices across the board when collecting and protecting data. This may include, but isn’t limited to, specific password policies, information security frameworks (e.g., NIST, ISO, COBIT/ISACA, SSAE, etc.), and data encryption.
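
As one small illustration of the encryption element, here is a sketch using the Python cryptography package (one common choice; Article 32 treats encryption as one appropriate technical measure but does not mandate any particular library or scheme):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key would live in a key management service,
# never stored alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a piece of personal data before storing it at rest.
ciphertext = fernet.encrypt(b"jane.doe@example.com")

# Decrypt only when an authorized process needs the value back.
plaintext = fernet.decrypt(ciphertext)
assert plaintext == b"jane.doe@example.com"
```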

  6. What Else Should I Know?

If you believe your business might be affected, you should already be familiarizing yourself with the GDPR and be well into your compliance plan.  The above summary is a sampling of key points, not the comprehensive analysis you should undertake to better understand your compliance obligations.  You should also be aware of the ePrivacy Regulation, which will follow on the heels of the GDPR.

The GDPR covers the right to protection of personal data, while the ePrivacy Regulation encompasses a person’s right to a private life, including confidentiality.  There is some obvious overlap here, but the ePrivacy Regulation is intended to particularize the GDPR for electronic communications: devices, processing techniques, storage, browsers, and so on.  The two laws are intended to be in sync, but the ePrivacy Regulation is still up in the air, optimistically forecast to be finally approved by the end of 2018, with its implementation date yet to be seen.  In sum, GDPR compliance is what you can focus on right now, and it should position your business well for any additional compliance obligations that arise from the finalized ePrivacy Regulation.

Today, the FTC issued its National Do Not Call Registry Data Book for Fiscal Year 2017 (October 1, 2016 to September 30, 2017).

The National Do Not Call Registry Data Book contains statistical data about phone numbers on the Registry, telemarketers and sellers accessing phone numbers on the Registry, and complaints consumers submit to the FTC about telemarketers allegedly violating the Do Not Call rules. Statistical data on Do Not Call (DNC) complaints is based on unverified complaints reported by consumers, not on a consumer survey. This year’s Data Book has been redesigned to provide more information on robocall complaints, new information about the types of calls consumers reported to the FTC, and a complete state-by-state analysis.  In addition, the FTC has developed a mini site to make the information in the FY 2017 Data Book more accessible to the public, including a webpage for each state. For the first time, the data behind the report will be available in CSV (.csv) data files.  New Jersey led the nation in complaints per 100,000 population, with Puerto Rico in last place.
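
For readers planning to work with the new .csv files, here is a minimal sketch of the kind of per-capita ranking the Data Book reports (the file name and column names are hypothetical; the FTC’s actual layout may differ):

```python
import pandas as pd

# Hypothetical file with columns: state, complaints, population.
df = pd.read_csv("dnc_complaints_fy2017.csv")

# Rank states by complaints per 100,000 population, as the Data Book does.
df["per_100k"] = df["complaints"] / df["population"] * 100_000
ranking = df.sort_values("per_100k", ascending=False).reset_index(drop=True)

print(ranking[["state", "complaints", "per_100k"]].head(10))
```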

Here are some statistics from our firm’s geographic footprint states (rankings are based on complaints per 100,000 population).

Alabama:

  • Complaints: Ranked #22 (108,003) – up from 66,812 in 2016.
  • Active registrations: Ranked #31 (3,393,619) – up from 3.33M in 2016
  • Complaints by call type:
  1. Robocall: 71,627
  2. Live caller: 34,896
  3. Type not reported: 1,480
  • Top 10 Complaints by topics (not all complaints report a topic):
  1. Reducing debt: 13,673
  2. Warranties and Protection Plans: 5,036
  3. Home Security and Alarms: 3,973
  4. Imposters: 3,390
  5. Medical and Prescriptions: 3,305
  6. Vacation and timeshares: 1,783
  7. Computer & Technical Support: 1,386
  8. Lotteries, prizes and sweepstakes: 1,110
  9. Work from home: 584
  10. Home improvement and cleaning: 253

District of Columbia (Washington, D.C.):

  • Complaints: Not ranked (24,303) – up from 18,304 in 2016.
  • Active registrations: Not ranked (620,154) – up from 605,725 in 2016
  • Complaints by call type:
  1. Robocall: 16,724
  2. Live caller: 7,383
  3. Type not reported: 196
  • Top 10 Complaints by topics (not all complaints report a topic):
  1. Reducing debt: 4,103
  2. Vacation and timeshares: 1,628
  3. Imposters: 1,262
  4. Warranties and protection plans: 665
  5. Medical and prescriptions: 459
  6. Computer and technical support: 419
  7. Lotteries, prizes and sweepstakes: 413
  8. Energy, solar, and utilities: 196
  9. Home improvement and cleaning: 188
  10. Home security and alarms: 178

Florida:

  • Complaints: Ranked #3 (588,021) – up from 385,490 in 2016.
  • Active Registrations: Ranked #29 (14,605,866) – up from 14.39M in 2016
  • Complaints by call type:
  1. Robocall: 363,801
  2. Live caller: 219,301
  3. Type not reported: 4,919
  • Top 10 Complaints by topics (not all complaints report a topic):
  1. Reducing debt: 63,231
  2. Vacation and timeshare: 19,058
  3. Warranties and protection plans: 18,939
  4. Imposters: 18,262
  5. Medical and prescriptions: 16,997
  6. Home security and alarms: 10,906
  7. Computer and technical support: 8,977
  8. Lotteries, prizes and sweepstakes: 6,270
  9. Energy, solar and utilities: 5,108
  10. Work from home: 3,370

Georgia:

  • Complaints: Ranked #15 (242,242) – up from 168,478 in 2016.
  • Active registrations: Ranked #34 (7,095,159) – up from 6.97M in 2016
  • Complaints by call type:
  1. Robocall: 153,542
  2. Live caller: 86,095
  3. Type not reported: 2,605
  • Top 10 Complaints by topics (not all complaints report a topic):
  1. Reducing debt: 28,121
  2. Vacation and timeshares: 11,711
  3. Warranties and protection plans: 10,658
  4. Medical and prescriptions: 7,913
  5. Imposters: 7,817
  6. Home security and alarms: 5,789
  7. Computer and technical support: 3,091
  8. Lotteries, prizes and sweepstakes: 2,840
  9. Work from home: 1,467
  10. Home improvement and cleaning: 1,237

Mississippi:

  • Complaints: Ranked #46 (39,969) – up from 25,221 in 2016.
  • Active registrations: Ranked #48 (1,636,395) – up from 1.60M in 2016
  • Complaints by call type:
  1. Robocall: 25,674
  2. Live caller: 13,567
  3. Type not reported: 728
  • Top 10 Complaints by topics (not all complaints report a topic):
  1. Reducing debt: 6,111
  2. Warranties and protection plans: 2,252
  3. Imposters: 1,413
  4. Vacation and timeshares: 1,261
  5. Home security and alarms: 1,075
  6. Medical and prescriptions: 1,068
  7. Computer and technical support: 554
  8. Lotteries, prizes and sweepstakes: 528
  9. Work from home: 254
  10. Home improvement and cleaning: 59

Today, the FCC voted to pass the Restoring Internet Freedom order, which repeals the 2015 “net neutrality” rules and reverts to the “light touch” regulatory approach the FCC previously applied to internet service providers (“ISPs”).  Of primary importance, the FCC restored the classification of Broadband Internet Access Service as an “information service” under Title I of the Communications Act rather than as a telecommunications service under Title II.  For purposes of data privacy and security, this reclassification (more specifically, the reversal of the 2015 reclassification) restores the jurisdiction of the Federal Trade Commission to act when broadband providers engage in anticompetitive, unfair, or deceptive acts or practices related to the security and privacy of online consumers.  The FTC had such jurisdiction prior to the 2015 net neutrality order, but it is prohibited from regulating common carriers, so today’s order restores that jurisdiction.  Although the final order has not yet been published, the FCC’s press release outlines that the declaratory ruling, report and order, and order will do the following:

Declaratory Ruling:

  • Restores the classification of Broadband Internet Access Service as an “information service” under Title I of the Communications Act – the classification affirmed by the Supreme Court in the 2005 Brand X case.
  • Reinstates the classification of mobile broadband internet access service as a private mobile service.
  • Finds that the regulatory uncertainty created by utility-style Title II regulations has reduced Internet service provider (ISP) investments in networks, as well as hampered innovation, particularly among small ISPs serving rural consumers.
  • Finds that public policy, in addition to legal analysis, supports the information service classification, because it is more likely to encourage broadband investment and innovation, thereby furthering the goal of closing the digital divide and benefitting the entire Internet ecosystem.
  • Restores broadband consumer protection authority to the Federal Trade Commission (FTC), enabling it to apply its extensive expertise to provide uniform online protections against unfair, deceptive, and anticompetitive practices.

Report and Order

  • Requires that ISPs disclose information about their practices to consumers, entrepreneurs, and the Commission, including any blocking, throttling, paid prioritization, or affiliate prioritization.
  • Finds that transparency, combined with market forces as well as antitrust and consumer protection laws, achieves benefits comparable to those of the 2015 “bright line” rules at lower cost.
  • Eliminates the vague and expansive Internet Conduct Standard, under which the FCC could micromanage innovative business models.

Order

  • Finds that the public interest is not served by adding to the already-voluminous record in this proceeding additional materials, including confidential materials submitted in other proceedings.

The order was approved by Chairman Pai and Commissioners O’Rielly and Carr, with dissents from Commissioners Clyburn and Rosenworcel.  Chairman Pai and Commissioners Clyburn, O’Rielly, Carr, and Rosenworcel each issued separate statements.

A link to the press release is available here.

The draft order, issued in November, is available here.