OnStar’s New Privacy Policy

September 21, 2011

OnStar recently updated their privacy policy. The new policy

  • allows OnStar to continue collecting information from its links even after a customer cancels service, unless the customer specifically requests OnStar not to; and
  • removes language that said that OnStar would not share customer data with third-party marketers without explicit customer consent.

Changes to privacy policies aren’t usually notable. But I think there are some interesting things going on here.

First, although it has been claimed that the new policy allows OnStar to share anonymized information, including GPS (speed and location) information, that does not appear to be a recent addition. The current privacy policy already allows OnStar to “share or sell any anonymized data (including location, speed, and safety belt usage) with third parties for any purpose.” That conflicts to some extent with the language claiming that OnStar would not sell information to third-party marketers without consent, so perhaps the removal of that language allows OnStar to share the data with marketers. On the other hand, that limiting language does not appear in the actual policy, only in the summary information at the top of the “Our Privacy Practices” page. Because it’s unclear what the old policy allowed, it’s hard to tell what the new policy added.

Second, the change highlights how much people read into privacy policies. For example, CNET suggests that language allowing OnStar to transfer data in the event that part of its business is spun off (so that the new business has the data) could be read as indicating plans to spin off part of the business. I’m not even sure if that language was new (I can’t find a copy of the 2010 policy to compare it to).

Finally, GPS data is really sensitive, and people are justifiably worried about tracking—in the literal sense. Thus, any policy change that seems to allow greater use of that information, even if intended to clarify existing practice, is going to set off alarms.

Personally, I think the biggest area of concern in OnStar’s privacy policy is that it doesn’t really define how data is “anonymized.” There are at least two possibilities. “Anonymizing” could mean removing traditional personal information (name, address, VIN, etc.) from GPS data, but leaving other information intact, such as the information needed to track a vehicle’s movements from place to place. That wouldn’t be much protection, because if you can watch a car go to a house and park there overnight every night, you have a pretty good idea who that car belongs to. The better approach would be to “anonymize” the data so that not only is the car not directly associated with a person, but any individual car’s movements cannot be tracked. Thus, I don’t think it is true, as has been suggested, that it is impossible to anonymize GPS data. But it has to be done right, and the new privacy policy doesn’t indicate whether OnStar will do it right.
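To make the overnight-parking point concrete, here is a minimal sketch (Python, with invented coordinates and a hypothetical pseudonym scheme, not anything OnStar is known to use) of how a “de-identified” GPS trace can be linked back to a home:

```python
from collections import Counter
from datetime import datetime

# Hypothetical "anonymized" trace: name and VIN removed, but each car
# still has a stable pseudonym and a full GPS history.
points = [
    ("car-42", "2011-09-01 23:10", (44.9537, -93.0900)),
    ("car-42", "2011-09-02 23:45", (44.9537, -93.0900)),
    ("car-42", "2011-09-03 08:15", (44.9778, -93.2650)),
    ("car-42", "2011-09-03 23:30", (44.9537, -93.0900)),
]

def likely_home(trace, night=(22, 6)):
    """Guess a car's home as the spot where it sits most often overnight."""
    start, end = night
    overnight = Counter()
    for _, ts, loc in trace:
        hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
        if hour >= start or hour < end:
            overnight[loc] += 1
    return overnight.most_common(1)[0][0] if overnight else None

trace = [p for p in points if p[0] == "car-42"]
print(likely_home(trace))  # the repeated overnight parking spot
```

Once you know where a car sleeps, a property record or phone book does the rest, which is why stripping names alone is not real anonymization.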

“Password” as a Password

December 14, 2010

Gawker Media (which includes Gizmodo, Lifehacker, Consumerist, and others) got hacked. Hackers obtained source code for the site and—the part that really grabs media attention—the usernames, e-mail addresses, and passwords of about a million users. The three most popular passwords? 123456, password, and 12345678 (kudos to those who picked 12345678, no doubt security-savvy users heeding warnings that six-character passwords weren’t long enough). The media reaction, in a nutshell: “OMG! Pick better passwords! And use different passwords for every single account!”

Well, yes. If you’re using “12345678” as the password for your online banking account, perhaps you should reconsider that choice. And if your online banking account has the same password as your account for a message board, we should have a serious chat. But not all accounts are created equal, and no human can remember separate passwords for every single site that demands account registration—nor should they have to. Password vaults and Bruce Schneier’s idea of writing down important passwords and putting them in your wallet, while useful, have practical limits. Does anyone really want to look up every single password? And I can’t fit a phone book in my wallet.

The answer, in my opinion, is to know which passwords are important and which are not. Here’s my own personal hierarchy of password importance:

1. Passwords for sites that want you to create an account for their convenience, not yours.
For example, if a news site wants readers to create accounts solely to track what they’re doing, and that account does not carry any special privileges (commenting on posts under a name, or being able to buy things), the account is for the site’s convenience, not the user’s, and there’s no reason a user should choose a particularly secure password. E-commerce sites that require people to create accounts before they can buy anything fall into this category if the customer either can be sure that the site won’t store payment information (good luck with that) or has a “disposable” payment mechanism available.

When I’m shopping online and run into a store that won’t let me buy something without creating an account, I generate a unique e-mail address, generate a unique credit card number that can only be used by that merchant, buy whatever I need, then forget that the account ever existed. If the account is hacked, the hacker gets an e-mail address I never use (and can easily turn off) and a credit card number that’s no good to anyone but the merchant who first placed a charge on it. I can even set the charge limit on that temporary credit card number to roughly the amount of the purchase. For that sort of account, I could easily use the password “Password” and lose nothing.

2. Passwords for message boards.
These passwords prevent anyone else from impersonating me on the message board. I generally use the same password on all these sites. If someone pretends to be me on GeekyLawChat.com (name available for registration!), well, that’s annoying, and the fact that I use the same password on NerdsOfTheLaw.net (also available!) means they could pretend to be me there, too. The worst-case scenario is that I might have an interesting time defending a defamation charge. When the alternative is to remember umpteen relatively unimportant passwords (or spend less time on Internet message boards—some of which require registration just to search the message boards), I’ll take the risk.

3. Accounts where money is at stake.
Bank accounts. Amazon (and the like), where you use the account regularly. To some extent, iTunes. If someone guessing your password means they can spend your money (or make it hard for you to get at your money), use a good password. Use a really good password. But where you want to use an even better password is for…

4. E-mail.
Wait—an e-mail password is more important than the password for your online banking account? Maybe so. Think about how many web sites think that control of your e-mail account means you are who you say you are. Think of the number of sites with “e-mail me my password” links. Think of the number of passwords you probably have sitting in your e-mail right now. Having hackers drain your bank account would be very, very bad, but there’s a chance you could get that money back. Try proving you’re who you are on the Internet when someone else has control of your e-mail. And that’s not even getting to the content of the e-mails in your account. Ask Sarah Palin how annoying that can be.


In the case of Gawker, I might rather have had a “password” password than something real. A real password might have been used on an account I care about. “Password” is the next best thing to no password at all, and sometimes that’s just about the right level of security. “Password” is a perfectly fine password for an account you don’t care about.
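As a footnote on why the popular passwords surfaced so quickly: attackers who steal a database of hashed passwords don’t guess blindly; they hash a short wordlist of popular choices and match the results against the stolen hashes. A rough sketch, with invented users and unsalted SHA-256 for simplicity (not the hashing scheme Gawker actually used):

```python
import hashlib

def h(pw):
    """Unsalted SHA-256, hex-encoded (illustrative only)."""
    return hashlib.sha256(pw.encode()).hexdigest()

# Pretend these hashes were pulled from a breached database.
stolen = {
    "alice": h("123456"),
    "bob": h("password"),
    "carol": h("xK9#mQ2!vL"),
}

# A tiny dictionary of popular passwords cracks most accounts instantly.
wordlist = ["123456", "password", "12345678", "qwerty", "letmein"]
cracked = {user: pw for user, hashed in stolen.items()
           for pw in wordlist if h(pw) == hashed}
print(cracked)  # alice and bob fall; carol's random password survives
```

The accounts using “password” fall in the first second of the attack, which is exactly why it only belongs on accounts whose compromise costs you nothing.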

South Carolina’s Democratic Primary: The Result of an E-Voting Malfunction?

June 15, 2010

Alvin Greene’s recent victory in South Carolina’s Democratic Senate primary has lots of people wondering how a relative unknown who did not campaign could win sixty percent of the vote against a four-term state senator. Although plenty of theories have surfaced—that it was because his name was first on the ballot, that his name reminded people of soul legend Al Green, that it was all a Republican plot—one possibility is harder to refute than it ought to be: problems with the electronic voting machines.

Greene’s primary opponent, Vic Rawl, has now publicly pointed a finger at the voting machines (by the way, if Alvin Greene got the Al Green votes, why didn’t Vic Rawl get the Lou Rawls votes? Someone needs to investigate this soul singer gap). Columbia’s WTLX.com quotes Rawl as saying, “It appears to me that we have some sort of either machine malfunction or software malfunction.” Rawl also said he had no idea whether the malfunction was accidental or intentional. South Carolina’s election commission responded that it was “confident in the accuracy and reliability” of the voting machines.

It’s hard to know if that confidence is well-placed, however. South Carolina uses ES&S iVotronic voting machines, which have a history of accuracy and reliability issues. Newer versions support a voter-verified paper audit trail, but it’s unclear whether South Carolina uses that feature. The elections commission said that every vote was recorded and left a paper trail, but its web page describing the process of voting with the machines says nothing about the voter verifying his or her vote against a paper record. The “paper trail” the commission talks about could be a paper record of every vote that was cast, verified by each voter; or it could be summary totals. It’s hard to tell from news reports.

If there is a good, voter-verified paper trail, machine malfunction (or tampering) is relatively easy to detect and correct. Just count the paper ballots. If there is no paper trail, or if the “paper trail” is merely a set of summary statistics, it’s impossible to know if the result is accurate.
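With a real voter-verified paper trail, the check is conceptually simple: tally the paper and compare it to the machine totals. A toy sketch with invented precinct numbers (not actual South Carolina results) shows how little machinery the audit itself requires:

```python
from collections import Counter

# Hypothetical precinct: totals reported by the machines versus
# a hand count of the voter-verified paper ballots.
machine_totals = {"Greene": 605, "Rawl": 395}
paper_ballots = ["Greene"] * 402 + ["Rawl"] * 598  # what voters verified

def audit(machine, ballots):
    """Flag any candidate whose machine total disagrees with the paper count."""
    hand = Counter(ballots)
    return {c: (machine.get(c, 0), hand.get(c, 0))
            for c in set(machine) | set(hand)
            if machine.get(c, 0) != hand.get(c, 0)}

print(audit(machine_totals, paper_ballots))  # mismatches, if any
```

If the “paper trail” is only a printout of summary totals, there is nothing independent to hand-count, and this comparison is impossible.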

Minnesota is one of twenty-two states that require voter-verifiable paper ballots. South Carolina is not. Minnesota law requires more than just a paper trail, however. By statute, all voting systems purchased after 2005 must be paper-based. According to that statute, voting machines must either scan marked paper ballots, or assist voters in marking those paper ballots.

If the Greene-Rawl primary had been held in Minnesota, the voting machines could quickly be eliminated as a source of the unexpected result. As it is, it may be impossible to know if Mr. Greene’s primary victory was influenced by voting machine irregularities.

Privacy Seal Provider ControlScan Settles FTC Charges

February 27, 2010

The FTC announced on Thursday that it had reached a settlement with ControlScan, a provider of so-called “privacy seals”—those small-ish images certifying a website’s security or privacy practices.

The FTC charged that ControlScan had “misled consumers about how often it monitored the sites and the steps it took to verify their privacy and security practices.” Although the seals claimed that ControlScan had verified the site’s privacy practices, ControlScan did “little or no verification” of those practices, according to the FTC. The FTC also took issue with the fact that the seals had current date stamps even though ControlScan did no daily reviews.

The settlement agreement required ControlScan’s former CEO to give up $102,000 in profits. It also imposed a $750,000 penalty against the company, suspended based on the company’s inability to pay.

It’s uncertain whether privacy or security seals mean much. Even when providers scan daily, how much assurance can one expect for $71.50 per month? McAfee, the big player in the market after it bought (and renamed) the “HackerSafe” seal, had its own bit of bad press a couple of years ago when it turned out that several “Hacker Safe” sites were vulnerable to cross-site scripting attacks.

Even though ControlScan appears to have been in a different category than legitimate privacy seal vendors, the FTC settlement highlights a classic reputation problem with these seals. The seals look like they mean something, but the only way to know for sure is to check the seal provider’s practices—which undermines the point of the badge in the first place.

U.S. Supreme Court to Hear Government Employer Privacy Case

December 15, 2009

The U.S. Supreme Court has granted certiorari in City of Ontario v. Quon. That’s the new name for Quon v. Arch Wireless Operating Company, the Ninth Circuit case that found that a police officer had a reasonable expectation of privacy in his text pager messages.

This should be an interesting case to watch. For a discussion of how this case might affect privacy for government employees, see Orin Kerr’s post over at the Volokh Conspiracy.

Cost of Disclosing 179 Social Security Numbers in a Court Filing: $5000

October 23, 2009

Here’s a new way of holding someone directly liable for a data breach. A Minnesota attorney was fined $5000 for filing a federal court document containing the social security numbers and birth dates of 179 people. Court filings are public, which is why Federal Rule of Civil Procedure 5.2(a) says that a court filing may only contain the year of birth or last four digits of a social security number. As Judge Davis wrote in his order:

The Court is deeply concerned with the harmful and widespread ramifications associated with negligent and inattentive electronic filing of court documents. Although electronic filing significantly improves the efficiency and accessibility of our court system, it also elevates the likelihood of identity theft and damage to personal privacy when lawyers fail to follow federal and local rules.

Ninth Circuit Adopts Plain-Language View of “Authorization” in CFAA Decision

September 30, 2009

The Computer Fraud and Abuse Act (CFAA) creates criminal penalties for doing various bad things by intentionally accessing a computer without authorization or by exceeding authorized access. There’s been some debate recently over just what “authorization” means. For example, one of the issues in the Lori Drew case was whether Drew had exceeded authorized access, and thus committed a federal crime, by violating MySpace’s terms of service. Another frequent issue comes up in employment contexts: is it unauthorized access to use company computers for purposes other than those intended?

For example, suppose an employee has access to an employer’s computers for regular business purposes, but e-mails confidential data to an outside account. Later, he leaves the company and uses that confidential data to set up a competing business. Did the employee access that confidential data without authorization? The simple answer would be “no”: he had an account, he was allowed to use it, that permission had not been revoked, so any access was authorized.

The Ninth Circuit Court of Appeals recently adopted essentially this definition. LVRC Holdings, LLC v. Brekka said that such conduct is not unauthorized for purposes of the CFAA. The court looked at the language of the statute and a dictionary, and held that an employee has authorization to access a computer when the employer has given permission to use it. Because Brekka’s permission to use the computer had not been revoked when he accessed and mailed data to an outside account, the court held that his access was not unauthorized.

The Ninth Circuit rejected the agency-law analysis from a 2006 Seventh Circuit decision, International Airport Centers, LLC v. Citrin. That case had held that an employee’s authorization to access a computer ended the moment he breached his duty of loyalty to his employer—in that case, by wiping data from a laptop to hide evidence of misconduct. But in LVRC, the Ninth Circuit stuck to the text of the CFAA, noting that the CFAA is a criminal statute and should be construed narrowly under the rule of lenity. Because the Ninth Circuit could find no agency law principles in the text of the CFAA, it held that a person uses a computer without authorization “when the person has not received permission to use the computer for any purpose . . . or when the employer has rescinded permission to access the computer and the defendant uses the computer anyway.”

An aspect of this case that might be of interest to employers is that Brekka did not have a written employment agreement and LVRC had no policies against e-mailing documents to outside accounts. Such a policy would presumably have made Brekka’s actions unauthorized. But it’s hard to write policies that cover every single thing an employee is not allowed to do. If a company wrote a policy that “employees are only authorized to use company computers to the extent that such use is consistent with company interests,” would that create the Seventh Circuit agency-law definition of unauthorized access? It seems like it might, but, as always, This Is Not Legal Advice.
