A Deep-Dive Into Two Of The Most Popular Senate Privacy Bills

A federal privacy law, which many have hoped for and others dread, may actually materialize over the next several months. The concept of a federal privacy law is now getting strong hints of bipartisan support, and a few Senate bills and a House bill are starting to show us the general shape any such law might take.
Supporters are hoping that one federal privacy law will make compliance easier than dozens of slightly varying state privacy laws. The problem is that it’s not clear whether the final version will explicitly supersede state rules; if it doesn’t, compliance could become yet more difficult.
To give a hint of where the discussion is, we deep-dived into the two most popular Senate bills: a proposal from Sen. Maria Cantwell (D-WA) and a proposal from Sen. Roger Wicker (R-MS), chairman of the Senate Commerce Committee.
Let’s take a look at some of the more interesting and potentially problematic references in both. First, from the Wicker bill:
“AGGREGATED DATA.—For purposes of subparagraph (C), the term “aggregated data” means information that relates to a group or category of individuals or devices that does not identify and is not linked or reasonably linkable to any individual.”
This sidesteps a key issue about aggregated data. Is the bill describing a mechanism that would collect only aggregated data and never any PII? Or an approach that collects a lot of PII, which is then scrubbed clean before it’s used? That’s a massive difference that the bill opted not to address.
“(G) PUBLICLY AVAILABLE INFORMATION.— (i) IN GENERAL.—For the purposes of subparagraph (C), the term “publicly available information” means any information that— (I) has been lawfully made available to the general public from Federal, State, or local government records; or (II) is widely available to the general public, including information from— (aa) a telephone book or online directory; (bb) a television, internet, or radio program; or (cc) the news media or a website that is available to the general public on an unrestricted basis (for purposes of this subclause, a website is not restricted solely because there is a fee or log-in requirement associated with accessing the website). (ii) EXCLUSIONS.—Such term does not include— (I) an obscene visual depiction (as defined for purposes of section 1460 of title 18, United States Code); or (II) a disclosure to the general public that is made voluntarily by an individual or is required to be made by the individual under Federal, State or local law.”
There is very much to unpack here. First of all, a telephone book? Really? In 2020, they are citing the phone book as something widely available?
More to the point, under this definition, a ludicrous unsubstantiated rumor would qualify because it appeared on a website. Then there’s this mysterious phrasing: “a website that is available to the general public on an unrestricted basis,” which seems clear until you see this: “a website is not restricted solely because there is a fee or log-in requirement associated with accessing the website.” Actually, that is exactly what a restriction is. If a fee or log-in requirement doesn’t prevent a site from being considered “available to the general public on an unrestricted basis,” it’s not clear what would. And what is the logic behind saying something is not publicly available information if the information comes from “a disclosure to the general public that is made voluntarily by an individual or is required to be made by the individual under Federal, State or local law”?
“The term ‘delete’ means to remove or destroy information such that it is not maintained in retrievable form and cannot be retrieved in the normal course of business.”
That may be the ideal definition, but as a practical matter, it’s the cyberthief’s definition. Unless the company routinely uses high-security wiping methods for everyday PII, something like the Pentagon’s DoD 5220.22-M, with its three-pass overwriting method (overwrite all addressable locations with binary zeroes, then with binary ones, then with a random bit pattern, and verify the final overwrite pass), the CISO can’t say that the data “cannot be retrieved in the normal course of business.” Simply deleting the file once won’t do it, and the Senate bill doesn’t specify this.
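To make the gap concrete, here is a minimal sketch of what a three-pass overwrite in the spirit of DoD 5220.22-M looks like in code. This is an illustration, not a certified sanitization tool; the function name is ours, and on journaling filesystems or SSDs with wear-leveling, overwriting a file in place does not guarantee the old bytes are gone.

```python
import os
import secrets


def overwrite_and_delete(path: str) -> None:
    """Hypothetical three-pass wipe: zeroes, ones, then random bytes,
    verifying the final pass before unlinking the file. Note that this
    offers no guarantees on SSDs or copy-on-write filesystems."""
    size = os.path.getsize(path)
    patterns = [b"\x00", b"\xff", None]  # None signals the random final pass
    with open(path, "r+b") as f:
        for pattern in patterns:
            data = secrets.token_bytes(size) if pattern is None else pattern * size
            f.seek(0)
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device, not just the page cache
        # Verify the final (random) pass actually landed.
        f.seek(0)
        if f.read(size) != data:
            raise IOError("final overwrite pass failed verification")
    os.remove(path)
```

The point of the sketch is that even this effort falls short of proving data “cannot be retrieved,” which is why the bill’s silence on method matters.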
“The term ‘large data holder’ means a covered entity that in the most recent calendar year— (A) processed or transferred the covered data of more than 5,000,000 individuals or devices that are linked or reasonably linkable to such individuals; or (B) processed or transferred the sensitive covered data of more than 100,000 individuals or devices that are linked or reasonably linkable to such individuals (excluding any instance where the covered entity processes the log-in information of an individual or device to allow the individual or device to log in to an account administered by the covered entity).”
There’s nothing unusual here, but it’s useful to see how Wicker’s bill defines a large data holder.
“Prohibition on the Denial of Goods and Service.—A covered entity shall not deny goods or services to an individual because the individual exercised any of the rights established under this title. (b) No Waiver of Individual Controls.—The rights and obligations created under section 103 may not be waived in an agreement between a covered entity and an individual.”
This is significant because it makes explicit that the days of “accept these terms or you can’t download the software/enter the Web site” are over.
“A covered entity— (A) shall not comply with a request to exercise the rights described in paragraph (1) if the covered entity cannot verify that the individual making the request is the individual to whom the covered data that is the subject of the request relates; and (B) may decline to comply with a request that would— (i) require the entity to retain any covered data for the sole purpose of fulfilling the request; (ii) be impossible or demonstrably impracticable to comply with; or (iii) require the covered entity to reidentify covered data that has been deidentified.”
This is an especially interesting passage. It would place a new burden on CISOs and/or CIOs to create a verification mechanism that would satisfy the U.S. government. Would that verification record have to be preserved for XX number of months, in case someone later wants to challenge it? That would presumably happen if the enterprise is later accused of having taken delete instructions from an impostor. But given that the bill makes no reference to any retention obligation, what choice should an enterprise make? More to the point: Wouldn’t preserving the verification particulars itself create new PII that would need to be deleted? This forces the mind-numbing question of whether following this new rule could place an enterprise in violation of it.
The writers of this Senate bill seem to have anticipated that issue, as the bill says an enterprise may “decline to comply with a request that would require the entity to retain any covered data for the sole purpose of fulfilling the request…or require the covered entity to reidentify covered data that has been deidentified.” But that phrasing is not as helpful as it might be. It doesn’t suggest a way around this dilemma, which might have companies declining all such requests for that reason.
Why? Because verifying that someone is who they claim to be typically uses the enterprise’s own information on the individual or third-party data (“which of the following is one of your prior addresses?”). It’s using PII data to verify your request to wipe out all PII data.
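A toy sketch makes the circularity visible. Everything here is hypothetical (the record fields and function names are ours, not the bill’s): the only way to verify the requester is to consult PII the enterprise already holds, and the answered challenge itself becomes a new record worth retaining.

```python
from dataclasses import dataclass


@dataclass
class CustomerRecord:
    # The enterprise's own PII about the individual, which is exactly
    # what a knowledge-based verification check has to consult.
    email: str
    prior_addresses: list


def verify_deletion_request(record: CustomerRecord,
                            claimed_email: str,
                            answered_address: str) -> bool:
    """Hypothetical knowledge-based check: the requester must match the
    account email and name one prior address on file. Passing this check
    uses PII to authorize the deletion of that same PII."""
    return (claimed_email.lower() == record.email.lower()
            and answered_address in record.prior_addresses)
```

Whether the enterprise may, or must, keep a transcript of that exchange as evidence is precisely what the bill leaves open.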
Then there’s the out for requests that are “impossible or demonstrably impracticable to comply with.” Would the lack of specifics in the bill itself allow this clause to be used to deny all such requests? What about staffing? Can an enterprise argue that the team at issue is already way overburdened so it’s always going to be “demonstrably impracticable” to place more burdens on that team without generating new revenue so that more team members can be hired?
Turning to Cantwell’s bill, we find more interesting elements.
“The term ‘affirmative express consent’ means an affirmative act by an individual that clearly communicates the individual’s authorization for an act or practice, in response to a specific request that meets the requirements of subparagraph (B). EXPRESS CONSENT REQUIRED.—An entity shall not infer that an individual has provided affirmative express consent to an act or practice from the inaction of the individual or the individual’s continued use of a service or product provided by the entity.”
What this provision gets at is the ending of practices such as telling customers “By using this software/site, you are accepting all of our terms and conditions.” The only consent is an affirmative consent, not a consent through doing nothing or using a service.
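For compliance teams, the provision reduces to a simple rule that can be sketched in code. This is our illustration under assumed field names, not anything the bill prescribes: only a recorded affirmative act tied to a specific request counts; absence of a record, or mere continued use, never does.

```python
from typing import Optional


def has_affirmative_express_consent(consent_event: Optional[dict]) -> bool:
    """Hypothetical check mirroring the Cantwell language: consent exists
    only if the user took an affirmative act in response to a specific
    request. Inaction or continued use of the service never qualifies."""
    if consent_event is None:
        return False  # no recorded act means no consent
    return (consent_event.get("action") == "explicit_opt_in"
            and consent_event.get("in_response_to_specific_request", False))
```

Under this rule, a “by using this site you agree” banner produces no consent event at all, so the check fails.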
“The term ‘biometric information’ means any covered data generated from the measurement or specific technological processing of an individual’s biological, physical or physiological characteristics, including— (i) fingerprints; (ii) voice prints; (iii) iris or retina scans; (iv) facial scans or templates; (v) deoxyribonucleic acid (DNA) information; and (vi) gait. (B) EXCLUSIONS.—Such term does not include writing samples, written signatures, photographs, voice recordings, demographic data or physical characteristics such as height, weight, hair color, or eye color, provided that such data is not used for the purpose of identifying an individual’s unique biological, physical or physiological characteristics.”
This is an intriguing list, based on some of the choices made. Within biometrics, they chose to list DNA (which no major site is using nor seriously considering using any time soon) and to not list vein pattern, which is far more likely to be considered than DNA as it’s less controversial to deploy.
The exclusions section is oddly phrased. It seems as though it’s simply saying “As long as you are not using this data for authentication/identification, it’s excluded,” but why then offer those specific examples? Many of those details appear to overlap with the biometrics section above. It’s just confusing and distracting. Our best guess is that they mean “If a voice recording isn’t used for authentication, such as perhaps a system that records and stores voicemail messages, then it’s fine.”
“DE-IDENTIFIED DATA.—The term ‘de-identified data’ means information that cannot reasonably be used to infer information about, or otherwise be linked to, an individual, a household, or a device used by an individual or household, provided that the entity— (A) takes reasonable measures to ensure that the information cannot be reidentified, or associated with, an individual, a household, or a device used by an individual or household; (B) publicly commits in a conspicuous manner— (i) to process and transfer the information in a de-identified form; and (ii) not to attempt to reidentify or associate the information with any individual, household, or device used by an individual or household; and (C) contractually obligates any person or entity that receives the information from the covered entity to comply with all of the provisions of this paragraph.”
This raises a similar issue to the data aggregation issue from the Wicker bill. What de-identification method do they prefer?
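The ambiguity matters because common de-identification methods behave very differently. The sketch below contrasts two of them (field names and function names are our own invention): dropping direct identifiers outright versus pseudonymizing them with a salted hash, which preserves linkability across records and is arguably re-identifiable if the salt ever leaks.

```python
import hashlib
import secrets

# Assumed set of direct identifiers, for illustration only.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}


def deidentify_drop(record: dict) -> dict:
    """Method 1: remove direct identifiers entirely. Irreversible, but
    records can no longer be linked to one another."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}


SALT = secrets.token_hex(16)  # secret; if it leaks, hashes may be reversible


def deidentify_pseudonymize(record: dict) -> dict:
    """Method 2: replace identifiers with salted hashes. Linkability is
    preserved, which is useful for analytics but weaker as privacy."""
    out = dict(record)
    for k in DIRECT_IDENTIFIERS & record.keys():
        out[k] = hashlib.sha256((SALT + str(record[k])).encode()).hexdigest()
    return out
```

Both could plausibly satisfy the bill’s “reasonable measures” language, which is exactly the problem: the definition doesn’t say which level of irreversibility Congress has in mind.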
“LARGE DATA HOLDER.—The term ‘large data holder’ means a covered entity that, in the most recent calendar year— (A) processed or transferred the covered data of more than 5,000,000 individuals, devices used by individuals or households, or households; or (B) processed or transferred the sensitive covered data of more than 100,000 individuals, devices used by individuals or households, or households.”
The critical word in this description of what constitutes an enterprise being a “large data holder” is the word “sensitive.” If it’s merely data, the trigger is 5 million users or devices or households. If the data is sensitive, the threshold plunges to 100,000 users or devices or households. What did they forget to do? Define, for the purposes of this definition, what the Senate considers to be sensitive. Sensitive is far more vague than PII. Also, another key question is: Sensitive to whom? The customer/employee/contractor/partner or the enterprise? Are shipment details sensitive for a retailer? For that retailer’s customer? On a dating site, is there anything that is not sensitive?
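The threshold logic itself is trivial to express, which highlights that the hard part is classification, not counting. A minimal sketch of the Cantwell thresholds (the function name is ours):

```python
def is_large_data_holder(covered_count: int, sensitive_count: int) -> bool:
    """Cantwell-bill thresholds: more than 5,000,000 individuals, devices,
    or households for covered data, but only 100,000 once the data is
    'sensitive', a term the bill never defines. Deciding which records
    count toward sensitive_count is the unanswered question."""
    return covered_count > 5_000_000 or sensitive_count > 100_000
```

An enterprise with 200,000 dating-profile records is either comfortably under the limit or double it, depending entirely on an undefined word.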
“PREEMPTION OF DIRECTLY CONFLICTING STATE LAWS.—Except as provided in subsections (b) and (d), this Act shall supersede any State law to the extent such law directly conflicts with the provisions of this Act, or a standard, rule, or regulation promulgated under this Act, and then only to the extent of such direct conflict. Any State law, rule, or regulation shall not be considered in direct conflict if it affords a greater level of protection to individuals protected under this Act.”
The most widely articulated hope among people who want a federal privacy law is that it will clean up the regulatory landscape. In short, that it will provide one nationwide set of privacy rules and remove the need to worry about conflicting state laws. The Cantwell bill explicitly does not do that. It supersedes a state law only if that law is weaker than the federal version. If the state law is considered to provide “a greater level of protection,” it prevails. That’s hardly the legislative regulatory cleanup that supporters had sought.