On Friday, August 14, 2020, the California Attorney General released the final CCPA regulations issued under the California Consumer Privacy Act of 2018 (“CCPA”), as approved by the California Office of Administrative Law (“OAL”), and filed them with the California Secretary of State. During its review, the OAL made additional revisions to the regulations, which it stated were “non-substantive” and primarily for accuracy, consistency, grammar, and clarity, and to eliminate unnecessary or duplicative provisions. Per the Attorney General’s request, the regulations became effective on August 14, 2020, the day they were filed with the Secretary of State.

To view the final CCPA regulations, click here.

To view a redline version of the additional changes made by the OAL to the Attorney General’s proposed regulations, click here.

To view OAL’s statement of reasons detailing its additional changes, click here.

On August 10, 2020, the European Commission (“Commission”) and the Department of Commerce (“DoC”) issued a joint statement announcing they are beginning discussions to evaluate potential enhancements to the EU-U.S. Privacy Shield framework. These discussions have begun to address compliance with the recent Schrems II decision by the Court of Justice of the European Union (“CJEU”). Both entities recognized the importance of data protection in the EU and the U.S., as well as the significance of cross-border data transfers to the “nearly 800 million citizens on both sides of the Atlantic.” They also noted their shared commitment to privacy and the rule of law, as well as to further deepening their economic relationship.

To view the joint statement on the Commission’s website, click here.

To view the joint statement on the DoC’s website, click here.

Vermont Amends Data Breach Notification Law

On July 1, 2020, amendments to Vermont’s Security Breach Notice Act, 9 V.S.A. §§ 2330 & 2335, took effect along with a new “Student Online Personal Information Protection Act.”

Key amendments to the security breach act include:

  • An expanded definition of Personally Identifiable Information (“PII”). The definition now adds various ID numbers, unique biometric data, genetic information, and certain health or wellness records.
  • An expanded definition of security breach that includes “login credentials.” The amendment defines login credentials as “a consumer’s user name or email address in combination with a password or an answer to a security question that together permit access to an online account.” Businesses should treat login credentials the same as PII when determining whether a breach occurred and whether a general duty to notify applies, but login credentials differ from PII in how and to whom notice must be provided.
    • If only login credentials are breached (without breach of actual PII), a data collector is only required to notify the Vermont Attorney General (or the Department of Finance, as applicable) if the login credentials were acquired directly from the data collector or its agent. The law specifies different notification requirements depending on whether the breached login credential would permit access to an email account.
  • A narrower basis for substitute notice. Previously, substitute notice was permitted when the class of affected consumers to be provided written or telephonic notice exceeded 5,000, the cost of direct notice would exceed $5,000, or the data collector did not have sufficient contact information. Now, substitute notice is permitted only where the lowest cost of providing written, email, or telephonic notice to affected consumers would exceed $10,000. This revision added email as a permissible form of notice and eliminated the 5,000-consumer threshold as a basis for substitute notice. Because email allows companies to provide mass notice to affected consumers in a timely manner at low cost, it will be more difficult for data collectors to reach the $10,000 minimum.
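The amended substitute-notice test reduces to a lowest-cost comparison across the permitted direct methods. A minimal sketch of that rule as described above (purely illustrative, not legal advice; the function name and inputs are invented for this example):

```python
def substitute_notice_permitted(cost_written: float, cost_email: float,
                                cost_telephonic: float,
                                threshold: float = 10_000.0) -> bool:
    # Under the amended Vermont rule as summarized above, substitute notice
    # is permitted only when the *lowest-cost* direct method (written,
    # email, or telephonic) would still exceed the $10,000 threshold.
    return min(cost_written, cost_email, cost_telephonic) > threshold

# Email notice is typically cheap, so the threshold is rarely met:
print(substitute_notice_permitted(50_000.0, 800.0, 30_000.0))  # False
```

Because the cheapest method controls, inexpensive mass email effectively forecloses substitute notice in most scenarios, which is the practical point of the amendment.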

Vermont Enacts New Student Privacy Law

Vermont’s new Student Online Personal Information Protection Act updates the state’s privacy law to include regulations specifically concerning the data of pre-K to 12th grade students. The law applies to operators of websites, online services, and mobile applications that are designed for and marketed to, and used primarily by, pre-K to 12th grade schools.

Under the new law, enforceable by the Vermont Attorney General, operators are generally prohibited from:

  • Engaging in targeted advertising based on any information the operator has acquired because of the use of its site, service, or application for PreK-12 purposes;
  • Using information that is created or gathered by the operator’s site, service, or application to amass a profile about a student, except for PreK-12 purposes;
  • Selling, bartering, or renting a student’s information; or
  • Disclosing covered information to a third party, unless a specific exception applies (including certain disclosures for educational purposes).

Operators are also required to: (a) implement and maintain reasonable security procedures and practices; (b) delete a student’s covered information within a reasonable time period if the school or school district requests it; and (c) publicly disclose and provide the school with material information about the operator’s collection, use, and disclosure of covered information, including by publishing terms of service, a privacy policy, or a similar document.

Operators may use or disclose covered information as required by law. Operators may also use covered information for legitimate research purposes in certain circumstances, and may disclose the information to a state or local education agency for PreK-12 purposes, as permitted by state or federal law. Operators are also not prohibited from using covered information in the following scenarios, so long as the information is not associated with an identified student within the operator’s sites, services, applications, products, or marketing:

  • Improving educational products;
  • Demonstrating the effectiveness of the operator’s products or services, including in their marketing;
  • Developing or improving educational sites, services, or applications;
  • Using recommendation engines to recommend to a student (1) additional content or (2) additional services, where both relate to an educational, other learning, or employment opportunity purpose, so long as the recommendation is not determined in whole or in part by payment or other consideration from a third party; or
  • Responding to a student’s request for information or feedback, so long as the response is not determined by payment or other consideration.

This subchapter does not:

  • Limit the authority of law enforcement to lawfully obtain content or information;
  • Limit the ability of an operator to use student data for adaptive or customized student learning purposes;
  • Apply to general audience websites, online services, online applications, or mobile applications;
  • Limit service providers from providing Internet connectivity to schools, students, or their families;
  • Prohibit an operator from marketing educational products directly to parents;
  • Impose a duty upon a provider of an electronic store, gateway, marketplace, or other means of purchasing or downloading software to review or enforce compliance with this law;
  • Impose a duty upon a provider of an interactive computer service to review or enforce compliance with this law;
  • Prohibit students from downloading, exporting, transferring, saving, or maintaining their own student-created data or documents; or
  • Supersede the federal Family Educational Rights and Privacy Act (FERPA) or rules adopted pursuant to the Act.

Finally, the law requires the Vermont Attorney General, in consultation with the Vermont Agency of Education, to examine the issue of student data privacy as it relates to FERPA and access to student data by data brokers, and determine whether to make any recommendations.


This post was co-authored with Kaylee Rose, a first-year law student at Cumberland School of Law.

On July 21, 2020, the New York State Department of Financial Services (NYDFS) filed charges against First American Title Insurance Company (First American) for breach of state cybersecurity regulations. Specifically, NYDFS alleges that First American exposed tens of millions of documents containing consumers’ sensitive personal information, including bank account numbers and statements, mortgage and tax records, social security numbers, wire transaction receipts, and driver’s license images. The statement of charges against First American states that from October 2014 through May 2019, a vulnerability in First American’s website exposed customers’ personal data. The statement of charges also claims that First American failed to adequately remedy the vulnerability when it was eventually discovered.

First American is charged with violating multiple provisions of the NYDFS’s cybersecurity regulations. These regulations require regulated entities, like insurance providers, to establish and maintain an adequate cybersecurity program and procedures. First American is the first entity to be charged under these regulations, which came into effect in 2017.

To read the Notice of Charges, click here.



The National Security Agency (NSA) and the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) have issued a joint alert warning that, over recent months, hackers have been attempting to target Critical Infrastructure (CI) by exploiting Internet-accessible Operational Technology (OT) assets. The alert notes recently observed tactics from the hackers, including spear phishing and ransomware, and recommends that owners and operators of CI take immediate steps to ensure the safety of their systems. Specifically, it recommends that owners and operators: (1) have a resilience plan for OT; (2) exercise an incident response plan; (3) harden their networks; (4) create an accurate “as-operated” OT network map; (5) understand and evaluate cyber risk on “as-operated” OT assets; and (6) implement a continuous and vigilant system monitoring program.

To view the Joint Alert, click here.


We previously posted on yesterday’s Schrems II decision issued by the Court of Justice of the European Union (CJEU). Today (July 17, 2020), the Berlin data protection authority (Berlin DPA) went even further than the CJEU opinion, issuing a statement on the Schrems II case calling for Berlin-based data controllers storing personal data in the US to transfer that data back to Europe. The DPA stated that data should not be transferred to the US until that country’s legal framework is reformed. In addition, regarding the SCCs that were cautiously validated by the CJEU, the Berlin DPA stated that European data exporters and third country data importers must check, prior to transferring data, whether the third country permits state access to the data that exceeds what is allowed under European law. If such access rights exist, the Berlin DPA stated, the SCCs cannot justify the data transfer to that third country. The Berlin DPA thus requested all data controllers to observe and comply with the CJEU’s judgment. In practice, the Berlin Commissioner stated that data controllers transferring data to the US, especially when using cloud service providers, are now required to use service providers based in the EU or in a country with an adequate level of protection.

This could impact the ability of Berlin-based companies to transfer personal data to their US subsidiaries or other US-based vendors or business partners.

To read the press release (currently available only in German), click here.

On July 16, 2020, the Court of Justice of the European Union (“CJEU” or “Court”) issued a significant judgment in Case C-311/18 (the “Schrems II decision”) on the adequacy of the protection provided by the EU-U.S. Privacy Shield. The Court concluded that the Standard Contractual Clauses (“SCCs”) issued by the European Commission for the transfer of personal data to processors outside of the EU continue to be valid. However, the Court also invalidated the EU-U.S. Privacy Shield framework. In our post below, we: (I) provide some background on the events leading up to today’s decision; (II) summarize today’s decision; and (III) provide some reflection on what it means for U.S. organizations that transfer personal data from Europe.

I. Context/Background.

The Schrems II decision is the latest in a series of decisions regarding privacy advocate Maximilian Schrems (“Max Schrems”), who filed a complaint in 2015 with the Irish Data Protection Commissioner challenging Facebook Ireland’s reliance on standard contractual clauses (“SCCs”) as a legal basis for transferring personal data to Facebook Inc. in the United States. Facebook turned to the SCCs after the CJEU had invalidated the US-EU Safe Harbor framework in 2015 upon Max Schrems’ earlier complaint.

The General Data Protection Regulation (‘the GDPR’) provides that the transfer of personal data to a third country may, in principle, take place only if the third country in question ensures an “adequate level of data protection”. According to the GDPR, the Commission may find that a third country ensures, by reason of its domestic law or its international commitments, an adequate level of protection. In the absence of an adequacy decision, such transfer may take place only if the personal data exporter established in the EU has provided appropriate safeguards, which may arise, in particular, from standard contractual clauses adopted by the Commission, and if data subjects have enforceable rights and effective legal remedies. Furthermore, the GDPR details the conditions under which such a transfer may take place in the absence of an adequacy decision or appropriate safeguards.

Max Schrems, an Austrian national residing in Austria, has used Facebook since 2008. The personal data of Mr. Schrems (and other European nationals) is transferred by Facebook Ireland to Facebook servers located in the United States, where it is processed. Mr. Schrems lodged a complaint with the Irish data protection authority (“DPA”) seeking to prohibit those transfers. He claimed that U.S. laws and practices do not sufficiently protect against access to the transferred data by U.S. public authorities. Specifically, he was concerned that EU personal data might be at risk of being accessed and processed by the U.S. government once transferred, in a manner inconsistent with privacy rights guaranteed in the EU, and that there is no remedy available to EU citizens to ensure protection of their personal data after it is transferred to the U.S. Mr. Schrems’ complaint was rejected at the time on the basis, among other reasons, that the Commission had already found that the U.S. Safe Harbor Framework ensured an adequate level of protection in Decision 2000/520 (“the Safe Harbour Decision”).

Mr. Schrems challenged the rejection of his complaint before the Irish High Court, which referred questions to the CJEU for a preliminary ruling. On October 6, 2015, the CJEU declared the Safe Harbour Decision invalid (“the Schrems I judgment”), thus invalidating the EU-US Safe Harbor Framework and annulling the rejection of Max Schrems’ complaint.

In light of the Schrems I judgment, the Irish DPA then asked Mr. Schrems to amend his complaint. In his amended complaint, Mr. Schrems claimed that the U.S. still does not sufficiently protect data transferred to that country, and sought to suspend or prohibit future transfers of his personal data from the EU to the United States. Meanwhile, Facebook Ireland had begun carrying out data transfers pursuant to the standard contractual clauses set out in the Annex to Decision 2010/87 (the “SCC Decision”), which could be used for data transfers to countries that had not been deemed adequate.

Since the outcome of Mr. Schrems’ amended complaint hinged upon the validity of the SCC Decision, the Irish DPA brought proceedings before the High Court in order for it to refer questions to the Court of Justice for a preliminary ruling. After the initiation of those proceedings, the Commission adopted Decision 2016/1250 on the adequacy of the protection provided by the E.U.-U.S. Privacy Shield (‘the Privacy Shield Decision’).

In the proceedings leading to today’s decision, the Irish High Court asked the CJEU: (1) whether the GDPR applies to transfers of personal data pursuant to the SCCs from Decision 2010/87, and what level of protection the GDPR requires in connection with such a transfer; and (2) what obligations are incumbent on supervisory authorities in those circumstances. The High Court also raised the question of the validity of both (3) the 2010 SCC Decision and (4) the 2016 Privacy Shield Decision.

II. Summary of Today’s CJEU Decision

In today’s decision, the Court stated that:

(1) GDPR Applies to Data Transfers. EU law, and the GDPR in particular, applies to the transfer of personal data for commercial purposes by an economic operator established in a Member State to another economic operator established in a third country, even if, at the time of that transfer or thereafter, the data may be processed by the authorities of the third country for purposes of public security, defense, and state security. The Court added that this type of data processing by the authorities of a third country cannot remove such a transfer from the scope of the GDPR. The requirements of the GDPR concerning appropriate safeguards, enforceable rights, and effective legal remedies must be interpreted as meaning that data subjects whose personal data are transferred to a third country pursuant to SCCs must be afforded a level of protection essentially equivalent to that guaranteed within the EU by the GDPR, read in light of the Charter. The assessment of that level of protection must take into consideration both: (a) the contractual clauses agreed between the data exporter established in the EU and the data importer established in the third country concerned, and (b) the relevant aspects of the third country’s legal system regarding access to the data by the public authorities of that third country.

(2) Obligations of Supervisory Authorities. Regarding the obligations of supervisory authorities (such as the Irish DPA) in connection with such a transfer, the CJEU held that, unless there is a valid Commission adequacy decision, a competent supervisory authority is required to suspend or prohibit a transfer of personal data to a third country where it takes the view, in light of all the circumstances of the transfer, that the SCCs are not or cannot be complied with in that country and that the protection of the transferred data required by EU law cannot be ensured by other means, if the data exporter established in the EU has not itself suspended or ended the transfer.

(3) Validation of SCC Decision. The Court found that Decision 2010/87 (the SCC Decision) establishes effective mechanisms that make it possible, in practice, to ensure compliance with the level of protection required by EU law and to ensure that transfers of personal data pursuant to such clauses are suspended or prohibited in the event of a breach of the clauses or an inability to honor them. Specifically, the Court pointed out that the decision imposes an obligation on the data exporter and the recipient of the data to verify, prior to any transfer, whether that level of protection is respected in the third country concerned, and that the decision requires the recipient to inform the data exporter of any inability to comply with the standard data protection clauses, the exporter then being obliged, in turn, to suspend the transfer of data and/or to terminate the contract. The Court also emphasized that EU organizations relying on SCCs must take a proactive role in evaluating, prior to any transfer, whether there is in fact an “adequate level of protection” for personal data in the data importer’s jurisdiction. The Court stated that organizations may implement additional safeguards to ensure an “adequate level of protection” for personal data transfers, although it was not specific on what those additional safeguards might be. Further, non-EU organizations importing data from the EU based on SCCs must inform data exporters in the EU of any inability to comply with the SCCs. When non-EU data importers are unable to comply with the SCCs, and no additional safeguards are in place to ensure an “adequate level of protection,” the EU data exporter must suspend the transfer of data and/or terminate the contract.

(4) Invalidity of Privacy Shield Decision. Finally, the CJEU decided, unexpectedly, to examine and rule on the validity of the EU-U.S. Privacy Shield framework. In invalidating the Privacy Shield, the Court took the view that “the limitations on the protection of personal data arising from the domestic law of the United States on the access and use by U.S. public authorities of such data transferred from the European Union to the United States, which the Commission assessed in the Privacy Shield Decision, are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required under EU law…” Specifically, the CJEU found, the Privacy Shield and its Ombudsperson mechanism “does not provide data subjects with any cause of action before a body which offers guarantees substantially equivalent to those required by EU law, such as to ensure both the independence of the Ombudsperson provided for by that mechanism and the existence of rules empowering the Ombudsperson to adopt decisions that are binding on the US intelligence services.” For these reasons, the Court declared the Privacy Shield Decision to be invalid.

III. What this Means for U.S. Organizations

Therefore, while the SCCs remain valid under today’s decision, organizations that currently rely on SCCs will need to consider whether there is still an “adequate level of protection” for the personal data as required by EU law, taking into account the nature of the personal data, the purposes and context of the processing, and the country of destination. Where that is not the case, organizations should consider what additional safeguards may be implemented to ensure there is in fact an “adequate level of protection.”

Further, organizations that currently rely on the EU-U.S. Privacy Shield framework will need to urgently identify an alternative data transfer mechanism to continue transfers of personal data to the U.S. These may include the SCCs, which remain valid (along with any additional safeguards as necessary). Alternatives may also include derogations provided in the GDPR for certain transfers (such as when the transfer is necessary to perform a contract), or Binding Corporate Rules (“BCRs”) as set forth in the GDPR.

To read the CJEU decision, click here.

To read the CJEU press release, click here.

On June 1, 2020, California Attorney General Xavier Becerra submitted a finalized package of CCPA regulations to the California Office of Administrative Law (OAL).   The package included not only the final text of the regulations, but also the final statement of reasons for amendments to the previous drafts. There have been multiple rounds of drafts of the regulations, along with corresponding comment periods and workshops.  The first comment period received over 1,700 comments, leading to modifications published on February 7, 2020.  A second set of modifications released on March 11, 2020 eliminated the opt-out button and clarified procedures for consumer requests.  This finalized version of the regulations appears nearly identical to the March version.

Attorney General Becerra has stood firm in his insistence on the scheduled enforcement deadline of July 1, 2020, notwithstanding pleas to delay enforcement due to the COVID-19 pandemic. However, the regulations must first be approved by the OAL, which has 90 days to make its decision. Specifically, the OAL has 30 working days, plus an additional 60 calendar days under Executive Order N-40-20 related to the COVID-19 pandemic, to review the package for procedural compliance. However, the AG submitted a written justification requesting an expedited review by the OAL, to be completed within 30 business days, and that the final regulations become effective upon filing with the Secretary of State. Notably, the statute has been in effect since January 1, 2020, and includes a 12-month “look-back” requirement allowing consumers to request their records dating back one year from when the request was made.

While these CCPA regulations move toward finalization, we have previously written about a new ballot initiative scheduled for November 2020 for the California Privacy Rights Act (CPRA), commonly referred to as CCPA 2.0, which would significantly strengthen the CCPA’s requirements and enforcement tools, would weaken defenses to private rights of action under the CCPA, and would establish the California Privacy Protection Agency to enforce privacy rights in California.

To view the final CCPA regulations, click here.

To view the California Attorney General’s press release, click here.

To view the California Attorney General’s supporting documentation, click here.

  1. Details about Apple/Google Launch

Yesterday (May 20, 2020), Apple and Google launched software that will allow public health authorities to create mobile applications that notify people when they may have come in contact with someone who has a confirmed case of COVID-19, while purportedly preserving privacy around identifying information and location data. People who have updated their phones with the latest software will be able to share their Bluetooth signal, logging when the radio recognizes other devices whose owners have downloaded an app that uses the software.

Their public launch means that health agencies can now use the API in applications released to the general public. To date, Apple and Google had only released beta versions to help with the development process. (To be clear, Apple and Google are not themselves creating an exposure notification or contact tracing application – but the launch means that developers working on behalf of public health agencies can do so.) This “exposure notification” tool uses the Bluetooth radios within smartphones, and will be part of a new software update the companies will be pushing out Wednesday. State and federal governments can use it to create contact tracing applications that citizens can download via the Apple App Store or Google Play store.

Many U.S. states and 22 countries across five continents have already asked for, and been provided access to, the API to support their development efforts, and the companies anticipate more being added going forward. So far, Apple and Google say they have conducted more than 24 briefings and tech talks for public health officials, epidemiologists, and app developers working on their behalf.

The exposure notification API uses a decentralized identifier system with randomly generated temporary keys created on a user’s device (but not specifically tied to personally identifiable information). Public health agencies can define parameters around exposure time and distance, and can tweak transmission risk and other factors according to their own standards.
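The decentralized flow of random daily keys and short-lived broadcast identifiers can be sketched roughly as follows. This is a simplified illustration only: the actual Exposure Notification cryptography uses HKDF and AES as set out in the companies' published specifications, and the function names and HMAC-based derivation here are assumptions made for readability.

```python
import hashlib
import hmac
import os

INTERVALS_PER_DAY = 144  # one identifier per 10-minute broadcast window

def new_temporary_exposure_key() -> bytes:
    # A fresh random key is generated on the user's device each day;
    # it is never tied to the user's identity or location.
    return os.urandom(16)

def rolling_proximity_identifier(tek: bytes, interval: int) -> bytes:
    # Derive a short-lived identifier for one broadcast window.
    # Simplified HMAC derivation for illustration; the real scheme
    # uses HKDF and AES per the published cryptography specification.
    msg = b"EN-RPI" + interval.to_bytes(4, "little")
    return hmac.new(tek, msg, hashlib.sha256).digest()[:16]

tek = new_temporary_exposure_key()
identifiers = [rolling_proximity_identifier(tek, i)
               for i in range(INTERVALS_PER_DAY)]
# The broadcast identifier changes every interval, so passive observers
# cannot link a device's broadcasts over time without the daily key.
```

The design choice this illustrates: because matching happens on-device by comparing downloaded keys of diagnosed users against locally recorded identifiers, no central database of contacts or locations is required.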

The applications are allowed to combine the API and voluntarily submitted user data provided through individual apps to enable public health authorities to contact exposed users directly to make them aware of what steps they should take.

Apple and Google have incorporated various privacy protections, including: (a) encryption of all device-specific Bluetooth metadata (e.g., signal strength, specific transmitting power), and (b) explicitly barring use of the API in any apps that also seek geolocation information permission from users.  Because many public health authorities developing contact tracing were considering using geolocation data, this privacy measure has prompted some to reconsider their approach.

Apple and Google provided the following joint statement about the API and how it will support contact-tracing efforts undertaken by public health officials and agencies:

One of the most effective techniques that public health officials have used during outbreaks is called contact tracing. Through this approach, public health officials contact, test, treat and advise people who may have been exposed to an affected person. One new element of contact tracing is Exposure Notifications: using privacy-preserving digital technology to tell someone they may have been exposed to the virus. Exposure Notification has the specific goal of rapid notification, which is especially important to slowing the spread of the disease with a virus that can be spread asymptomatically.

To help, Apple and Google cooperated to build Exposure Notifications technology that will enable apps created by public health agencies to work more accurately, reliably and effectively across both Android phones and iPhones. Over the last several weeks, our two companies have worked together, reaching out to public health officials, scientists, privacy groups and government leaders all over the world to get their input and guidance.

Starting today, our Exposure Notifications technology is available to public health agencies on both iOS and Android. What we’ve built is not an app — rather public health agencies will incorporate the API into their own apps that people install. Our technology is designed to make these apps work better. Each user gets to decide whether or not to opt-in to Exposure Notifications; the system does not collect or use location from the device; and if a person is diagnosed with COVID-19, it is up to them whether or not to report that in the public health app. User adoption is key to success and we believe that these strong privacy protections are also the best way to encourage use of these apps.

Today, this technology is in the hands of public health agencies across the world who will take the lead and we will continue to support their efforts.

Google and Apple are also releasing draft technical documentation, including Bluetooth and cryptography specifications and framework documentation.

2. Privacy Reactions and Concerns Regarding Contact Tracing Applications

Many within the privacy community are focused on whether these types of applications meet the principles of “Privacy by Design,” with much emphasis being placed on using decentralized tracing rather than location data stored in central databases. The UK data protection authority (UK ICO) concluded on April 17, 2020 that proposals for the contact tracing framework itself “appear aligned with the principles of data protection by design and by default,” based on certain assumptions. At the same time, France asked Apple to remove a limitation in Apple’s operating system that prevents contact tracing apps using its Bluetooth technology from running constantly in the background if the data is going to be moved off the device – a limit designed to protect users’ privacy, but one that France said was standing in the way of the type of app it wanted to build.

It is important to recognize that technology (in the form of a contact tracing application) is only a part of the solution, and that many security and privacy issues arise not only from the technology itself, but in the purpose, process, and manner in which it is used. For employers considering the use of contact tracing technologies or applications leveraging the Apple/Google APIs, a number of questions need to be addressed. For example:

  • Will you require employee consent and on what conditions? Will the contact tracing app continue to monitor after work hours are over?
  • How will you handle external requests (e.g., law enforcement, state/local government, hospitals, health authorities, nonprofits, etc.)?
  • Will your process be forward- or backward-looking? Will you penalize those who violate social distancing requirements based on this information to prevent infection? Or will you simply wait until testing results indicate a positive infection, and then look back at contact history to notify those who were in contact with the infected individual? The Apple and Google API appears to favor a more backward-looking approach.
  • How will you ensure confidentiality among colleagues? South Korea’s and Israel’s approaches were more publicly accessible, which led in some cases to protests, vigilante reactions, and social stigma. Think through how to avoid social stigma toward any employee – even one who has not tested positive may face questions or rumors prompted by employer efforts to preserve confidentiality. Policies should be clearly explained and emphasized to mitigate such misunderstandings and disproportionate reactions.
  • Also consider how to handle false positives and false negatives. If you lift a lockdown on the theory that an app can control the infection, you could create a false sense of security that, once compromised, eventually gives way to ignoring the technology itself. One example is Singapore, which, despite using a widely credited tracing app, still had to return to lockdown. Its app examined whether an individual had been within two meters of someone with COVID-19 for at least 30 minutes. If so, the individual receives a signal that they are possibly infected as well. This is both over-inclusive (Bluetooth signals pass through glass walls and windows) and under-inclusive (viral transmission can occur through kissing or intimate contact lasting less than 30 minutes).
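The over- and under-inclusiveness of a distance-and-duration threshold like the one described in the last bullet can be seen in a small sketch. The two-meter/30-minute parameters follow the description above; the type and function names are hypothetical, and real apps estimate distance noisily from Bluetooth signal strength.

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    distance_m: float     # estimated from Bluetooth signal strength
    duration_min: float

def flags_exposure(e: Encounter, max_distance_m: float = 2.0,
                   min_duration_min: float = 30.0) -> bool:
    # Threshold rule: flag any contact within two meters
    # that lasts at least 30 minutes.
    return e.distance_m <= max_distance_m and e.duration_min >= min_duration_min

# Over-inclusive: Bluetooth cannot see a glass wall between two phones,
# so this encounter is flagged even though no transmission was possible.
through_a_wall = Encounter(distance_m=1.0, duration_min=45.0)

# Under-inclusive: brief close contact falls below the duration threshold.
brief_close_contact = Encounter(distance_m=0.2, duration_min=10.0)

print(flags_exposure(through_a_wall))       # True
print(flags_exposure(brief_close_contact))  # False
```

The point is not the specific numbers but that any fixed threshold encodes a policy trade-off between missed exposures and false alarms, which employers should weigh before acting on app signals.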

Many of these applications attempt to address these privacy concerns by simply notifying the app users themselves (instead of the employer or public health agency), to encourage responsible behavior.

Much remains to be seen about how our society will balance the tension between privacy rights and public health and safety needs as it pertains to application of contact tracing technologies. Nonetheless, yesterday’s release marks a significant event in this continuing conversation.