EFF: “State, federal, and international regulators are increasingly concerned about the harms they believe the internet and new technology are causing. The list is long, implicating child safety, journalism, access to healthcare data, digital justice, competition, artificial intelligence, and government surveillance, just to name a few. The stories behind them are important: no one wants to live in a world where children are preyed upon, we lose access to news, or we face turbocharged discrimination or monopoly power. This concern about the impact of technology on our values is also not new—both serious concerns and outsized moral panics have accompanied many technological developments. The printing press, the automobile, the Victrola, the television, and the VCR all prompted calls for new laws and regulations. The trouble is, our lawmakers seem to be missing the forest for the trees, promoting scattered and disconnected proposals addressing whichever perceived harm is causing the loudest public anxiety in any given moment. Too often, those proposals do not carefully consider the likely unintended consequences or even whether the law will actually reduce the harms it’s supposed to target…
The truth is that many of the ills of today’s internet have a single thing in common: they are built on a system of corporate surveillance. Multiple companies, large and small, collect data about where we go, what we do, what we read, who we communicate with, and so on. They use this data in multiple ways and, if it suits their business model, may sell it to anyone who wants it—including law enforcement. Addressing this shared reality will promote human rights and civil liberties, while simultaneously holding space for free expression, creativity, and innovation, better than many of the issue-specific bills we’ve seen over the past decade. In other words, whatever online harms you want to alleviate, you can do it better, with a broader impact, if you do privacy first…”