The Atlantic: “…Security measures fail in one of two ways: Either they can’t stop a bad actor from doing a bad thing, or they block an innocent person from doing an innocuous thing. Sometimes one is more important than the other. When it comes to attacks that have catastrophic effects—say, launching nuclear missiles—we want the security to stop all bad actors, even at the expense of usability. But when we’re talking about milder attacks, the balance is less obvious. Sure, banks want credit cards to be impervious to fraud, but if the security measures also regularly prevent us from using our own credit cards, we would rebel and banks would lose money. So banks often put ease of use ahead of security.

That’s how we should think about COVID-vaccine cards and test documentation. We’re not looking for perfection. If most everyone follows the rules and doesn’t cheat, we win. Making these systems easy to use is the priority. The alternative just isn’t worth it.

I design computer security systems for a living. Given the challenge, I could design a system of vaccine and test verification that makes cheating very hard. I could issue cards that are as unforgeable as passports, or create phone apps that are linked to highly secure centralized databases. I could build a massive surveillance apparatus and enforce the sorts of strict containment measures used in China’s zero-COVID policy. But the costs—in money, in liberty, in privacy—are too high. We can get most of the benefits with some pieces of paper and broad, but not universal, compliance with the rules…”