
How can we make technology that frees us, rather than enslaves us?

Written by Cory Doctorow

In his Robot stories, Isaac Asimov posited three laws to protect humans from robots. As our own technology advances exponentially, how can we make technology that frees us rather than enslaves us?

Let us begin by cleaving this problem into two pieces, only one of which I am qualified to address:

  1. How can we make technology that works well?
  2. How can we make technology that fails well?

I only know about #2.

The Second Law of Thermodynamics is a thing: nothing stays in good order without continuous work. Security—like all forms of experimentally derived knowledge—is a process, not a product. Computers with no known flaws are not flawless: their flaws just have not been discovered and reported yet.

Computers have metastasized. Software is eating the world. Your toaster, pacemaker, car, tractor, insulin pump, and thermostat are (or soon will be) computers in fancy cases with the power to inflict enormous pain and harm on your person and your life. It is correct to view software as a nexus of control, a place where problems get solved. When books become digital objects, publishers attempt to solve their problems by controlling both the code embedded in the ebooks themselves and the devices that play them back.

But those problems aren’t your problems. The fact that some publishers don’t like the used book market and perceive an opportunity to kill it by using software to keep people from giving away, selling, or lending digital books doesn’t mean you benefit when they attempt it. Their security from used books is your insecurity: you don’t get to read used books.

What the entertainment companies started, the rest of the world has cottoned onto. Today, a startling variety of technologies use digital countermeasures to control their owners: insulin pumps stop you from reading your own telemetry unless you go through manufacturer-authorized doctors with paid-up software licenses. GM stops you from visiting independent mechanics who diagnose your engine with unauthorized tools and repair it with third-party replacement parts. Voting machine vendors stop independent researchers from validating their products.

This only works if you can’t replace the software the manufacturer specifies with software from someone else—say, a competitor of the manufacturer—that gives you back the freedom the software has taken away. That’s because the computer the software is running on is a general-purpose computer: the only kind of computer we know how to build, and one that can run any program that can be expressed in symbolic language.

A computer that won’t obey you—a DVD player that won’t play an out-of-region disc; a phone that won’t accept apps that come from third-party app stores—isn’t a computer that’s incapable of obeying you. That computer can readily do all the things on the forbidden list. It just refuses to do them.
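To make that concrete, here is a minimal, purely hypothetical sketch in Python of what such a refusal looks like. Every name in it is invented for illustration; no real player works exactly like this. The point is that nothing about the hardware prevents the forbidden operation; a single software check does:

```python
# A toy, hypothetical "disobedient" player. All names are invented for
# illustration; this is not how any real DVD player is implemented.

PLAYER_REGION = 1  # chosen by the vendor at the factory, not by the owner


def decode_and_display(stream: bytes) -> None:
    """The general-purpose part: the machine can decode any disc."""
    print(f"Playing {len(stream)} bytes of video...")


def play(disc_region: int, video_stream: bytes) -> None:
    # The "incapacity" is nothing but this check. Delete it, and the
    # machine happily does everything on the forbidden list.
    if disc_region != PLAYER_REGION:
        raise PermissionError("Disc not authorized for this player's region.")
    decode_and_display(video_stream)


play(disc_region=1, video_stream=b"\x00" * 1024)    # obeys: plays the disc
# play(disc_region=2, video_stream=b"\x00" * 1024)  # refuses, though it could
```

The refusal and the capability live in the same machine; the only question is whose policy the check enforces.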

This is what controlling people with their computers means: designing disobedient computers that treat their owners as adversaries, that obfuscate their operations from those owners, and that prefer the orders of distant third parties to the policies set by the person who holds the computer and paid for it.

It’s hard to keep people from changing the software on computers they own—even software that’s designed to hide from its owner and refuse to shut down can eventually be located and neutralized. Let skilled adversaries play with a computer whose software skulks in the operating system’s shadows, and they will eventually find its spider hole, flush it out, and kill it with extreme prejudice. Then they will tell everyone else how to do the same with their own computers.

So it was that in 1998, the US Congress enacted the Digital Millennium Copyright Act (DMCA), whose Section 1201 makes it a serious crime to figure out how the computers you own work and to tell other people what you’ve learned. Under DMCA 1201, it’s a potential felony (punishable by a five-year sentence and a $500,000 fine for a first offense) to weaken or bypass a system that restricts access to a copyrighted work.

Every device with software in it has a copyrighted work in it—software is a copyrighted work. Manufacturers who want to force their customers to use their property in ways beneficial to the manufacturer (and not the device’s owner) can configure those devices so that using them in any other way involves tampering with a copyright lock, which makes using your computer in the way you want into a potential felony.

That’s why John Deere tractors are designed so that getting them fixed by non-authorized repair people requires breaking a copyright lock; thus Deere can force farmers to pay $230, plus $130/hour, for simple service calls. The farmers are just the start: add a vision system to a toaster and it can stop you from using third-party bread, and make disabling the bread-enforcement system a felony.

As software metastasizes into every category of goods, an entertainment-industry law from the late twentieth century is turning into an existential threat to human liberty: we are being Huxleyed into the Full Orwell.

That’s for starters. But security is a process, not a product. You can only make a device secure by continuously prodding at it, looking for its defects, and repairing them before they are exploited by your adversary.

DMCA 1201 is now the leading reason that security researchers fail to disclose the vulnerabilities they discover. Once a device has a copyright-protecting lock on it, reporting its defects exposes you to bowel-watering criminal and civil penalties. In 2015, security researchers told the US Copyright Office that they were sitting on potentially lethal bugs in insulin pumps and cars, on bugs in thermostats and voting machines, and on bugs in entertainment consoles whose unblinking eyes and ever-listening ears witness our most intimate moments.

By providing an incentive to companies to add copyright locks to their systems, we’ve also given them a veto over who can reveal that they have sold us defective and dangerous products. Companies don’t view this as a bug in their digital monopolization strategy: it is a feature.

Isaac Asimov started from the presumption that we’d make positronic brains with a fixed set of characteristics, and that this design would be inviolable for millennia, and then wrote several books’ worth of stories about which unchanging rules those positronic brains should follow. He was wrong.

Designing computers to treat their owners as untrustworthy adversaries, unfit to reconfigure them or know their defects, is a far more dangerous proposition than merely having computers with bad software. Asimov was interested in how computers work. He should have been paying attention to how they fail.

The failure mode of prohibiting computer owners from changing which programs their computers run, and from knowing whether those computers are secure, is that those computers end up designed to control their owners, rather than being controlled by them.

This is the key difference between computers that liberate and computers that enslave.

Asimov had three laws. I propose two:

  1. Computers should obey their owners.
  2. It should always be legal to tell the truth about computers and their security.

Neither of these laws is without potential for mischief. I could write a hundred stories about how they could go wrong. But the harms of following these rules are far smaller than the harms of deliberately setting computers to control the people they are meant to serve.

I charge you to be hard-liners for these rules. If no one is calling you unreasonable, a puritan, a fanatic about these rules, you’re not trying hard enough.

The future is riding on it.

Order Your Copy

Amazon | Barnes & Noble | Books-A-Million | iBooks | IndieBound

Follow Cory Doctorow on Twitter, on his website, and on the blog Boing Boing.

This is a rerun of a post originally published on April 3, 2017.
