On March 18, 2020, the Federal Energy Regulatory Commission (FERC) and the North American Electric Reliability Corporation (NERC) announced steps to ensure that operators of the bulk electric system can focus resources on safety and reliability during the COVID-19 emergency. FERC and NERC are advising all registered entities that they will consider the impact of the coronavirus outbreak with regard to NERC compliance as follows:

  • The effects of the coronavirus will be considered an acceptable basis for non-compliance with the personnel certification requirements of Reliability Standard PER-003-2 from March 1, 2020 to December 31, 2020. Registered entities should notify their Regional Entities and Reliability Coordinators when using system operator personnel that are not NERC-certified.
  • The effects of the coronavirus will also be considered an acceptable reason for case-by-case non-compliance with NERC requirements involving periodic actions that would have been taken between March 1, 2020 and July 31, 2020. Registered entities should notify their Regional Entities of any periodic compliance actions that will be missed during this period.
  • Finally, on-site audits, certifications and other on-site activities by Regional Entities will be postponed at least until July 31, 2020. Registered entities should communicate any resource impacts associated with remote activities to their Regional Entities.

FERC and NERC will continue to evaluate the situation to determine whether the above dates should be extended.

To view the FERC-NERC joint announcement, click here.

As more and more businesses send their employees home to self-quarantine and work remotely as part of their COVID-19 mitigation measures, it is important to remember that working remotely carries unique data privacy and security concerns of which everyone should be aware. The following are a few tips for employers and employees during these times:

  • Security of VPN/Remote Connections. As employees shift to logging into VPNs and other remote connections, IT professionals should be assessing their resources to ensure both (a) adequate capacity and (b) proper security of these connections. According to a study conducted by OpenVPN in 2019, 24% of companies had not updated their remote work security policy in over a year, and 44% said their IT department did not lead the remote work security policy plan. Questions employers should be asking include the following (a rough capacity estimate is sketched after this list):
    • What’s the maximum number of users who need remote access?
    • How does this translate into additional bandwidth needed?
    • How soon will we need additional bandwidth and how quickly can it be provided?
    • What technologies can we use to boost bandwidth cost-effectively?
    • How quickly can we obtain additional licenses and other resources to support the demand?
    • How much will additional bandwidth and network components cost?
    • How will we handle cybersecurity threats?
    • How do we secure and protect the increased amount of data traffic?
    • How quickly and cost-effectively can we scale back resources once the demand for remote access has subsided to normal levels?
    • What resources are available from local access carriers, WAN carriers and Internet service providers (ISPs)?

When choosing a VPN, consider whether it allows for multi-factor authentication, provides access control, and provides endpoint security (i.e., securing the various endpoints that connect to a network such as mobile devices, laptops, and desktops), as these issues will be critical to both availability and security of remote connections.
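
As a rough illustration of the capacity questions above, the short sketch below estimates the bandwidth a VPN concentrator and internet link might need for a given number of concurrent remote users. All figures are illustrative assumptions rather than recommendations; substitute your organization’s own numbers and work with your carriers and ISPs on actual sizing.

```python
# Back-of-the-envelope VPN capacity estimate.
# All figures below are illustrative assumptions, not recommendations.
concurrent_users = 500    # assumed peak number of simultaneous remote users
per_user_mbps = 2.0       # assumed average bandwidth per user (video conferencing needs more)
overhead_factor = 1.3     # assumed ~30% headroom for encryption overhead and traffic bursts

required_mbps = concurrent_users * per_user_mbps * overhead_factor
print(f"Estimated aggregate capacity needed: {required_mbps:.0f} Mbps")
```

Even a simple estimate like this can help frame conversations with local access carriers, WAN carriers, and ISPs about how quickly additional bandwidth can be provisioned and how it can be scaled back once demand returns to normal levels.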

  • Keep a proper balance between employee/customer health and privacy rights. Collecting and sharing information is necessary, but must be done with employees’ privacy in mind. Many businesses want to know what they can ask employees without violating any privacy laws.
    • For example, can businesses take temperatures at work? This is typically considered a medical exam and normally would be prohibited under the Americans with Disabilities Act (ADA). However, according to new guidance issued by the Equal Employment Opportunity Commission (EEOC) on March 18, 2020, employers may measure employees’ body temperatures in light of CDC and local health authorities’ precautions. https://www.eeoc.gov/eeoc/newsroom/wysk/wysk_ada_rehabilitaion_act_coronavirus.cfm
    • The new EEOC Guidance also states that if an employee calls in sick, the employer may ask if the employee is experiencing symptoms of the pandemic virus, which for COVID-19 include fever, chills, cough, shortness of breath, or sore throat.
      • The employer may also ask other employees if they too have “the same symptoms” and “encourage them to report that they may be a high risk for COVID-19.”  The CDC states that employees who fall ill with flu-like symptoms during a pandemic should leave the workplace, and so this information is necessary to comply with that guidance.
    • Both temperature readings and information an employee provides about symptoms should be considered confidential medical information.  The employer should maintain all such information about employee illness as a confidential medical record in compliance with the ADA.
      • The EEOC has directed employers to review the EEOC publication Pandemic Preparedness in the Workplace and the Americans With Disabilities Act.
      • Educational institutions should also be cautious about how they handle the health concerns and privacy rights of students under the Family Educational Rights and Privacy Act (FERPA). FERPA prohibits an educational agency or institution from disclosing personally identifiable information (PII) from a student’s education record without the prior written consent of a parent or non-minor student unless an exception applies. One exception is the “health or safety emergency,” which allows disclosure in an emergency to public health agencies, medical personnel, law enforcement officials or even parents if such disclosure is necessary to protect the health and safety of other students or individuals. There must be an actual emergency, not a future or unknown one. In areas where COVID-19 has been declared a public health emergency, this exception would arguably be met. However, the Department of Education notes that public health departments typically can have education records disclosed under this exception even in the absence of a formally declared health emergency. For more, see the U.S. Department of Education’s Frequently Asked Questions regarding student privacy and coronavirus.
    • Consider security and confidentiality of client data. For employees who are attorneys, healthcare workers, accountants, government contractors, and some consultants, consider how you plan to keep client information appropriately confidential and proprietary, and in compliance with any applicable privacy laws, while working in a home environment. This is especially important if you are part of a dual-income family where both spouses are working from home. Consider the following:
      • Find out if your organization has rules or policies for telework; if so, make sure you read and comply with them. For example, they may allow you to use your own computer for reading company email, but not for accessing or storing sensitive customer data.
      • If you use Wi-Fi at home, make sure your network is set up securely. Look to see if it is using “WPA2” or “WPA3” security and make sure your password is hard to guess.
      • If working from a home computer or mobile device, make sure it is patched and updated.
      • Do you and your spouse share a computer? If so, do you have separate login profiles where electronic data can be segregated, or do you share the same drives, servers, and folders? Can you store client data separately on the cloud instead of locally on the hard drive?
      • Where do you keep physical files? Do you have a file cabinet at home? If not, can you designate a separate workspace?
      • How can you ensure privacy during phone calls and teleconferences? As you engage in client phone calls and teleconferences/videoconferences, can you isolate yourself within the house in a separate room? Can you be mindful of the information you disclose verbally so that you communicate effectively without revealing identities or other confidential information (e.g., say “the client/patient” instead of “John Smith”)?
    • Continue to be vigilant and educate employees regarding phishing and other social engineering attempts.
      • As always, there will continue to be bad actors who wish to capitalize on a national tragedy or vulnerability. Already, the Department of Health and Human Services experienced a cyber attack intended to slow its coronavirus response.
      • It is entirely expected that a new onslaught of phishing attempts will flood inboxes related to the coronavirus pandemic – pretending to offer information, provide education or services, or solicit donations. With increased information exchange taking place over the phone or through email, you can also expect to see more “spearphishing” attempts where an employee receives an email from a sender purporting to be another employee within the organization (up to and including executive management) requesting the recipient to click on a link, open an attachment, or process or wire funds.
      • It is therefore important that employees – particularly those unaccustomed to working remotely or via email – be on the lookout for social engineering attempts such as phishing emails or phone scams related to telework. Be wary of emails from unknown accounts with strange file attachments, calls from people claiming to be technical staff asking for passwords or requesting that you allow them to ‘scan’ your computer, or unusual web meeting requests—don’t hesitate to ask questions and verify things by phone or other means before proceeding. Employers should consider updating firm directories or creating phone trees that would allow an employee to pick up the phone and verify such attempts “offline” before proceeding (a simple automated sender check is also sketched after this list).
      • As always, judgment is key. If something seems slightly off, or if the stakes are large (e.g., large payments), take the extra time to double check “offline” through independent means before proceeding with granting access to a computer, clicking a link or opening any attachments, or processing any payments.
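
For organizations with technically minded staff, the minimal sketch below illustrates one naive heuristic for the spearphishing pattern described above: flagging messages whose “From” display name matches a known executive or colleague but whose actual sending domain is not the organization’s own. The domain, names, and sample header are placeholders, and a check like this is not a substitute for proper email security controls (SPF/DKIM/DMARC, filtering) or offline verification.

```python
# Illustrative only: flag emails whose "From" display name looks like a known
# colleague or executive but whose sending domain is not the organization's own.
# "example.com", the names, and the sample header below are placeholders.
from email.utils import parseaddr

ORG_DOMAIN = "example.com"  # assumed organizational email domain


def looks_like_spoof(from_header: str, known_names: set) -> bool:
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    # A familiar display name paired with an external domain is a red flag
    # that warrants offline verification before clicking links or sending funds.
    return display_name.strip() in known_names and domain != ORG_DOMAIN


if __name__ == "__main__":
    executives = {"Jane Doe"}
    sample = "Jane Doe <jane.doe@payments-example.net>"  # hypothetical external sender
    print(looks_like_spoof(sample, executives))  # prints True: verify by phone first
```

A heuristic like this is only a prompt for human judgment; the safest course remains verifying unusual requests through a known phone number or firm directory before proceeding.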

For more resources about addressing legal and business challenges associated with COVID-19, please visit Balch’s COVID-19 Resource Center.

According to a Bloomberg article posted earlier this morning, the U.S. Health and Human Services Department (“HHS”) suffered a cyber attack on its computer systems Sunday night.  The attack appears to have been intended to slow the agency’s systems, but was unable to do so in any meaningful way.   Just before midnight, the National Security Council also tweeted: “Text message rumors of a national #quarantine are FAKE.  There is no national lockdown.  @CDCGov has and will continue to post the latest guidance on #COVID19.”  The tweet was related to the hacking and release of disinformation. The government realized Sunday that there had been a cyberintrusion and that false information was circulating.

The hacking involved multiple incidents, and the tweet was meant in part to address the hacking.  It does not appear the hackers took data from the systems.  The administration has not yet confirmed who was behind the attack, which involved overloading the HHS servers with millions of hits over several hours. As of the posting of this blog, neither HHS, the White House, nor the National Security Council had responded to Bloomberg’s requests for comment.

To view Bloomberg’s article, click here.

On December 6, 2019, the FTC issued an opinion finding that Cambridge Analytica had engaged in deceptive practices to collect personal information from Facebook users for purposes of voter profiling and targeting.  In addition, the Commission found that Cambridge Analytica had engaged in deceptive practices regarding its participation in the EU-US Privacy Shield framework. According to the administrative complaint’s allegations, an app developer worked with Cambridge Analytica’s then-CEO to enable the developer’s GSRApp to collect Facebook data from app users and their Facebook friends. The complaint alleged that app users were falsely told the app would not collect users’ names or other identifiable information. The GSRApp, however, collected users’ Facebook User IDs, which connect individuals to their Facebook profiles.

The FTC issued a final order which would prohibit Cambridge Analytica from misrepresenting the extent to which it protects the privacy and confidentiality of personal information, and its participation in the Privacy Shield and other similar regulatory or standard-setting organizations. Moreover, the Final Order instructs Cambridge Analytica to continue to apply the Privacy Shield’s protections to personal information collected while it participated in the Privacy Shield, or to provide other protections authorized by law, or to return or delete the information. It must also delete the personal information it collected through the GSRApp.

To view the Final Order, click here.

To view the FTC’s Opinion, click here.

To read the press release, click here.

Yesterday (November 26, 2019), a comprehensive federal privacy bill, the Consumer Online Privacy Rights Act (“COPRA”), was introduced that would grant individuals broad rights with respect to their data, impose new obligations on data processors, and expand the Federal Trade Commission’s enforcement authority with respect to privacy, as well as allowing for state attorney general enforcement and individual rights of action. The bill was sponsored by Senators Maria Cantwell (D-WA), Brian Schatz (D-HI), Amy Klobuchar (D-MN), and Ed Markey (D-MA).

Some key elements of the bill include:

  • Broad definitions of covered data. Covered data is broadly defined, including all information “that identifies, or is linked or reasonably linkable to an individual or consumer device, including derived data”.
  • Broad scope of covered entities. With certain exceptions, covered entities would include all of those that are subject to the FTC Act and those that process or transfer covered data.
  • Preemption. The bill would preempt directly conflicting state laws, but not those that provide greater protections.
  • Individual Privacy Rights. Much like the GDPR and CCPA, individuals would have rights of access, deletion, correction, and portability over covered data. The individual also has the right to object to the transfer of data to a third party.
  • Consent and Data Minimization Obligations.  The bill would impose a general duty not to engage in deceptive or harmful data practices. Covered entities must also adhere to the privacy principle of data minimization, not processing or transferring covered data “beyond what is reasonably necessary, proportionate, and limited.”  Specifically, an entity must have the “prior, affirmative express” consent of the individual to transfer or process “sensitive” covered data (e.g., sensitive images, geolocation information, and other information as defined).
  • Reasonable Data Security and Other Obligations. An entity must implement “reasonable” data security practices, including vulnerability assessments, employee training, and secure data retention and disposal.  The entity must also designate privacy and data security officers in charge of ensuring compliance.  Entities transferring or processing data for a significant number of individuals must annually certify to the FTC that adequate internal controls exist.
  • Civil Rights. The bill would prohibit the use of data based on certain classifications (e.g., gender and familial status).  Entities engaged in algorithmic decision-making for certain purposes (e.g., credit eligibility) must conduct privacy impact assessments.
  • FTC Authority. The bill directs the FTC to establish a new bureau focused on privacy and data security, and grants the FTC, along with state attorneys general, the authority to enforce COPRA (individuals would also have a private right of action, as discussed below).  The FTC and state attorneys general would deposit recovered funds in the Data Privacy and Security Relief Fund, which would be used to compensate individuals. COPRA also directs the FTC to issue implementing regulations to refine definitions and establish a process for objecting to transfers of covered data.
  • Private Rights of Action. COPRA provides a private right of action for individuals, with damages ranging from $100 – $1000 per violation per day. Arbitration agreements and class action waivers are invalid with respect to disputes arising under COPRA.

We will be tracking the progress of this bill as it evolves.  To view the text of the draft bill, click here.

Last Friday, October 11, 2019, one day after the California Attorney General issued proposed regulations to implement the California Consumer Privacy Act of 2018 (“CCPA”), the California Governor, Gavin Newsom, announced that he signed all five of the September 2019 legislative amendments to the CCPA into law.  Those amendments include AB-25, AB-874, AB-1146, AB-1355, and AB-1564.  The governor had until Sunday, October 13 to either sign or veto the bills.

Among other changes to the CCPA, the amendments make the following notable changes:

  • Create a one-year exemption for HR data which sunsets Jan 1, 2021 (AB-25)
  • Create a one-year exemption from applicability for business-to-business customer representative personnel data, which sunsets Jan 1, 2021 (AB-1355)
  • Make various changes to the definitions of
    • “personal information” (AB-874 and AB-1355), adding a reasonableness qualifier to the capability of being associated with a particular consumer or household and clarifying that personal information does not include de-identified or aggregate consumer information;
    • “publicly available” information (AB-874); and
    • “verifiable consumer request” (AB-1355);
  • Create revisions to the private right of action (AB-1355) to clarify that class action lawsuits may only be brought for breaches pursuant to California’s data breach notification law when the personal information is “nonencrypted and nonredacted”.
  • Create limited exemptions for personal information necessary to fulfill a product warranty or recall, or a vehicle repair covered by a vehicle warranty or recall (AB-1146)
  • Clarify that a business does not need to retain or collect information in addition to what it would otherwise collect in the ordinary course of business (AB-1355)
  • Revise the anti-discrimination right (AB-1355); and
  • Clarify that a business operating exclusively online need only provide an email address as a designated consumer request method (AB-1564).

To view the various amendments, click on the following links: AB-25, AB-874, AB-1146, AB-1355, AB-1564.

Today, on October 10, 2019, the California Attorney General (“AG”) issued long-awaited proposed regulations implementing the California Consumer Privacy Act of 2018 (“CCPA”).  The AG also issued a notice of proposed rulemaking action and an initial statement of reasons elaborating on the purposes of the proposed regulations. The proposed regulations are intended to “establish procedures to facilitate consumers’ new rights under the CCPA and provide guidance to businesses for how to comply.”

While the CCPA’s statutory compliance date is January 1, 2020, the AG stated in a related press conference that July 1, 2020 is the expected date of final regulations and enforcement.

The deadline for comments on the proposed regulations is 5:00pm local (Pacific) time on December 6, 2019.  Interested parties may also attend and provide comment at four scheduled public hearings on December 2 (Sacramento), December 3 (Los Angeles), December 4 (San Francisco), or December 5 (Fresno).

To access the notice of proposed rulemaking, click here.
To access the text of the proposed regulations, click here.
To access the initial statement of reasons (ISOR), click here.

Today, the FTC announced that Equifax, Inc. will pay at least $575 million (and potentially up to $700 million) as part of a proposed global settlement with the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and 50 U.S. states and territories. The complaint alleges that Equifax failed to take reasonable steps to secure its network, leading to a 2017 data breach affecting approximately 147 million people. The proposed settlement will be filed along with the complaint today in the U.S. District Court for the Northern District of Georgia.

As part of the proposed settlement, Equifax will pay $300 million to a fund which will provide affected consumers with credit monitoring. It will also compensate consumers who bought credit or identity monitoring services from Equifax and paid other out-of-pocket expenses as a result of the breach.  Equifax will add up to $125 million if the initial payment proves insufficient. The company also has agreed to pay $175 million to 48 states, the District of Columbia and Puerto Rico, as well as $100 million to the CFPB in civil penalties.

In addition to the monetary penalties, beginning in January 2020, Equifax will provide all U.S. consumers with six free credit reports each year for seven years—in addition to the one free annual credit report that Equifax and the two other nationwide credit reporting agencies currently provide.

Equifax must also implement a comprehensive information security program under which it will be required to, among other things, implement the following:

  • Designate an employee to oversee the information security program;
  • Conduct annual assessments of internal and external security risks and implement safeguards to address potential risks, such as patch management and security remediation policies, network intrusion mechanisms, and other protections;
  • Obtain annual certifications from the Equifax board of directors or relevant subcommittee attesting that the company has complied with the order, including its information security requirements;
  • Test and monitor the effectiveness of the security safeguards; and
  • Ensure service providers that access personal information stored by Equifax also implement adequate safeguards to protect such data.

Under the proposed settlement, Equifax must obtain third-party assessments of its information security program every two years. The assessments must specify evidence supporting its conclusions and must include independent sampling, employee interviews, and document reviews. The order grants the FTC authority to approve the third-party assessor for each two-year assessment period. Equifax must also provide an annual update to the FTC about the status of the consumer claims process.

The Commission authorized the filing of the complaint and proposed order in a 5-0 vote.

This last week saw significant compliance and enforcement activity with respect to both GDPR and the FTC.  Specifically, we saw two significant proposed GDPR fines announced by the UK Information Commissioner’s Office (ICO) against British Airways (approx. $230 million) and Marriott International (approx. $130 million).  In addition, Facebook settled with the FTC for the largest privacy-related penalty ever at $5 billion. Discussed in more detail below, these developments provide valuable insight into the evolving landscape of data privacy governance and compliance.

Marriott International

On July 9, 2019, the ICO issued a notice of its intention to fine Marriott International £99,200,396 for violating the EU’s General Data Protection Regulation (GDPR).  The fine relates to an incident that Marriott brought to the ICO’s attention in November 2018. Specifically, a variety of personal data contained in approximately 339 million guest records was exposed by the incident.  Approximately 30 million records were thought to relate to residents of 31 countries in the European Economic Area (EEA), with 7 million related to UK residents. It is believed the vulnerability began with the systems of the Starwood hotel group, which were compromised in 2014. Marriott acquired Starwood in 2016, but the exposure was not discovered until 2018.

The ICO found that Marriott failed to undertake sufficient due diligence when it bought Starwood. It also found that Marriott should have done more to secure its systems, specifically “putting in place proper accountability measures to assess not only what personal data has been acquired, but also how it is protected,” according to Information Commissioner Elizabeth Denham.

The ICO further stated that Marriott has cooperated with the ICO investigation and made improvements to its security arrangements since discovery of the events in question.  As allowed under the GDPR’s “one stop shop” provisions, the ICO has been investigating the case as lead supervisory authority on behalf of other EU Member State data protection authorities, who will have an opportunity, along with Marriott, to comment on the ICO’s findings. The ICO states it will carefully consider the representations of both the company as well as other data protection authorities before making a final decision.

In a statement filed the same day with the U.S. Securities and Exchange Commission announcing the ICO’s proposed fine, Marriott stated that it “intends to respond and vigorously defend its position.”  Marriott CEO Arne Sorenson stated, “We are disappointed with this notice of intent from the ICO, which we will contest. Marriott has been cooperating with the ICO throughout its investigation into the incident, which involved a criminal attack against the Starwood guest reservation database. We deeply regret this incident happened. We take the privacy and security of guest information very seriously and continue to work hard to meet the standard of excellence that our guests expect from Marriott.”  Marriott also stated that the Starwood database that was attacked is no longer used for business operations.

British Airways

Similarly, the day before (July 8, 2019), the ICO issued notice of its intent to fine British Airways £183.39 million for GDPR infringements.  British Airways notified the ICO of the incident in September 2018, which involved user traffic to the British Airways website being diverted to a fraudulent site.  The attackers harvested customer details through this fraudulent site, compromising approximately 500,000 customers. The ICO found that the company had poor security arrangements that compromised a variety of data, including login, payment card, and travel booking details, as well as name and address information.

In her statement, Commissioner Denham stated that “[w]hen an organization fails to protect [personal data] from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The ICO noted that British Airways cooperated with the investigation and has since made improvements to its security arrangements. As with Marriott, the ICO was investigating the case on behalf of other EU Member State data protection authorities as the “lead supervisory authority”. Both British Airways and those data protection authorities will be given an opportunity to comment on the ICO’s findings, which it will consider carefully before making a final decision.

In its announcement to the London Stock Exchange regarding the ICO’s proposed sanctions, British Airways chairman and chief executive Alex Cruz stated: “We are surprised and disappointed in this initial finding from the ICO.  British Airways responded quickly to a criminal act to steal customers’ data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft.  We apologize to our customers for any inconvenience this event caused.”  Willie Walsh, International Airlines Group chief executive, also stated that the group intends “to take all appropriate steps to defend the airline’s position vigorously, including making any necessary appeals.”

Facebook Settlement

The FTC also voted this past week to approve a record settlement with Facebook over the company’s 2018 Cambridge Analytica scandal.  The settlement of $5 billion represents the largest fine ever approved by the FTC against a technology company – over 200 times larger than the previous largest fine.

The settlement was adopted along party lines, with the three Republican commissioners supporting it and the two Democrats against it, and signals the end of a wide-ranging probe into Facebook’s mishandling of personal information that began more than a year ago. From here, the Department of Justice must approve and finalize the FTC settlement, which it typically does.

The two Democratic votes against the settlement suggest a desire for stronger executive accountability and internal-process requirements, and concerns that even such a large fine may not be sufficient to incentivize change.   Despite the record fine, critics have assailed the FTC for approving a fine that is small in comparison to Facebook’s massive revenues (approximately 9% of annual revenue), calling the agency’s efforts a “slap on the wrist.”  The real test of the agency’s work will depend on the final terms and conditions of the settlement agreement, which have not yet been disclosed. Facebook’s stock closed nearly 2% higher after the news broke.  In April, Facebook had warned Wall Street that it could face a fine as high as $5 billion, and had set aside a $3 billion charge during its first-quarter earnings report, when it announced it earned $15 billion in quarterly revenue.

The FTC’s investigation began in March 2018, in response to reports that political consulting firm Cambridge Analytica had improperly accessed the personal data of 87 million users, which critics charged violated a previous settlement agreement between Facebook and the FTC from 2011 to protect users’ privacy. Cambridge Analytica’s quiz app had harvested information on both users installing the app and their Facebook friends, a form of data collection that Facebook had previously allowed under an earlier version of its privacy policy. This harvested information may have helped Cambridge Analytica create profiles of users that clients could target with political messages.  The FTC investigation then expanded beyond Cambridge Analytica to cover other privacy and security practices at Facebook, including a discovery that Facebook had provided popular websites and the makers of some smartphones and other devices with access to users’ social data without adequately notifying them.

Under the FTC’s new settlement, Facebook could have to document every decision it makes about data before offering new products, closely monitor third-party applications that collect users’ information, and require Facebook CEO Mark Zuckerberg and other top executives to attest that the company has adequate privacy protections.  Facebook had agreed to broad versions of these terms as part of the confidential settlement talks with the FTC, according to the Washington Post. These provisions are broader than the 2011 settlement agreement, which had required Facebook to give users greater notification about what happens to their data and how their information is used. It also required Facebook to submit to 20 years of regular privacy checkups from outside watchdogs, even though those reviewers had not flagged any major mishaps at the company.

Takeaways

As more frequent and significant fines continue to emanate from both Europe and the U.S., heightened responsibilities for companies and their management translate into larger budgets for privacy programs and data governance, which require systems and technologies that can be managed at scale. It is becoming increasingly apparent that GDPR compliance is real (and not just for the tech industry, as the Marriott and British Airways proposed sanctions make clear), and that California Consumer Privacy Act (CCPA) compliance is around the corner. (Often compared to GDPR, the law becomes effective Jan. 1, 2020.) Although some criticize the significant FTC fine as a “slap on the wrist,” such fines could be crippling to companies with less revenue, and the non-monetary terms of the settlement, once revealed, could forecast more about what to expect in terms of the new standards of “reasonableness.” Other states, such as Nevada and New York, are also passing more stringent laws.  Data privacy as an enterprise-wide risk management issue is clearly here to stay, and will require cross-functional collaboration across multiple departments and business units. Compliance and best practices, both abroad and at home, should be a top priority for companies in all industries.

*written with assistance from co-author and W&L law student, Isabella Gray.

On May 28, 2019, the Cyberspace Administration of China (the “Cyberspace Administration”) released a set of draft Measures for Data Security Management (the “Draft Measures”).  The Draft Measures provide articles governing how network operators, defined as those who own or administer a network or provide network services, can collect, use, and store different types of data.

The articles contained in the Draft Measures were developed to expand China’s existing Cybersecurity Law and to safeguard national security and the public interest while also protecting the rights of Chinese citizens, legal persons, and other organizations in cyberspace. The Draft Measures apply to the collection, storage, transmission, processing, and use of data, as well as the protection, supervision, and administration of cybersecurity within the territory of China.

Required Consent for Data Collection or Use

Under the Draft Measures, each network operator must develop and disclose separate rules for the collection of data and its subsequent use. The rules for collection and use may both be presented in a privacy policy but must be specific, easy to understand, and presented in a clear and obvious way that encourages reading. Additionally, the rules must highlight:

  • general information about the network operator;
  • the name and contact information for the network operator’s main person responsible for data security;
  • how data is collected and used;
  • how data is stored;
  • a summary of the rules the network operator must comply with when disclosing collected data to others;
  • how the collected data is protected by the network operator;
  • how users can withdraw consent to collection and can access or delete collected personal information;
  • how users can file complaints; and
  • any additional information required by other laws or regulations.

A network operator may only collect data after a user acknowledges the rules for collection and use and gives express consent to those actions. If a user is under the age of 14, consent from a parent or guardian is required prior to collecting data. Additionally, network operators cannot mislead users into consenting to data collection or discriminate against users who do not consent.

Means of Collecting Data

The Draft Measures further prescribe what network operators must do after collecting two types of data: important data and personal information. “Important data” is defined as the kind of data that, if divulged, may directly affect national security, economic security, social stability, or public health and security. “Personal information” is defined as data which could be used to identify a person specifically, such as their name, date of birth, or telephone number.

If a network operator is collecting important data or personal information, the network operator must file information about its collection and use of such data with the Cyberspace Administration.  The network operator must describe the purpose of its data collection, the scope and type of data collected, and how long it will retain the data.

Additionally, a network operator collecting important or personal data must specify the person responsible for data security. Such designated person must:

  • create data protection plans and ensure proper implementation of such plans;
  • conduct data security risk assessments and rectify potential risks;
  • report data security incidents to the Cyberspace Administration; and
  • oversee the resolution of complaints and reports from users.

In addition to cooperating with data users, the Draft Measures require that network operators cooperate with website owners.  If a network operator uses automatic means to collect website data, the means must not interfere with the normal operation of the websites. If a website owner requests that the network operator stop collecting data from its site, the network operator must stop.

Use and Storage of Data

Data collected by network operators can be used for a variety of purposes, such as displaying advertisements more effectively. In some instances, network operators must tell users how they are using certain data. For example, when conducting targeted pushes of information, network operators must clearly identify that the information presented to a user is a “targeted push” and give the user the option to reject the targeted push information.  Additionally, network operators must identify when they are synthesizing information.

Under the Draft Measures, network operators are permitted to publish, share, and sell data after assessing potential security risks. Approval from the Cyberspace Administration is required to publish, share, or sell data internationally. Uses of data prohibited by the Draft Measures include publishing market predictions, statistics, credit information, or any other information that would endanger national security or damage the lawful rights and interests of any person.

A network operator generally needs consent from a user to share collected data with a third party. However, consent is not required for a network operator to share data where:

  • the data was collected from legal public channels;
  • the data was voluntarily disclosed by the user;
  • the data was anonymized so that it could not be traced back to any specific user;
  • sharing the data is necessary for compliance with law enforcement agencies in accordance with the law; or
  • sharing the data is necessary for safeguarding national security, public interest, or the life of the user.

Under the Draft Measures, a network operator may only keep data for the retention period specified in its filing with the Cyberspace Administration. Should a user request that its data be deleted prior to the end of the retention period, the network operator must comply. Network operators must also take steps to urge users to be responsible with their network behavior and encourage self-regulation.

Finally, security is a significant concern in data collection, use, and storage. To address this, the Draft Measures require that network operators categorize, back up, and encrypt data to strengthen its protection. In the event of a security incident where data is divulged, a network operator must immediately take remedial measures, inform users about the incident, and report the incident to the Cyberspace Administration.

Penalty for Noncompliance

A network operator’s violation of any of the Draft Measures could result in disciplinary actions, such as confiscating income received as a result of the violation, suspending the network operator’s business operations, or revoking the network operator’s business permit. If the violation amounts to a crime, the network operator could be subject to applicable criminal punishments.

The Draft Measures will remain open for comment until June 28, 2019.