Cyber security is not merely about weeding out technological flaws to prevent crime. We also have to change the practices that allowed such flaws to proliferate. What we need is an open, uncensored discussion about security systems, realistic analyses of the true costs of data breaches, and real liability for the firms and people who are entrusted with sensitive data. Stockpiling PII needs to become unattractive.
Interview with Cory Doctorow
By Simone Achermann and Stephan Sigrist
The concern about cyber security is ubiquitous; people and industries have developed more awareness of the risks of the digital age. In your opinion, is this just an overreaction to change?
On the contrary, we have to take cyber security much more seriously and broaden the discussion. So far, the focus has been exclusively on technological problems. Of course, there is a need to optimise the technology. But in order to get there, we have to change policy. In recent years, poor decisions about security have allowed companies to adopt practices that are irresponsible and that even inhibit attempts to improve security systems. Let me give you an example: companies that provide policing services to cities, such as automatic licence plate readers or facial recognition programs, require these cities to sign a nondisclosure agreement. So if these systems perform badly, for example if the facial recognition AI is more likely to produce a false positive for an African-American person than for a Caucasian one, the police are not allowed to tell anyone. As a consequence, such systems proliferate, gain stakeholders, and eventually everything is done to protect the malfunctioning technology. These structural barriers need to be addressed along with the actual security defects in the products.
Using your example of facial recognition: Is this not also the result of an overestimation of the benefits of artificial intelligence?
There is an important distinction to be made here. AI is in fact overtrusted. On the one hand, because there is so much lobbying and money invested in it. On the other hand, because when AI performs badly in the field of security, no one finds out. If the only problem were that AI is not working properly, we would ‘only’ have to build better assessment tools. But if it is underperforming and that underperformance is protected by confidentiality agreements, then we will never start making the systems better. People do not want to go to jail for telling the truth.
Speaking of the risks of connected devices like cars, houses or speakers such as Alexa: how vulnerable are we?
Our privacy is increasingly at risk. Today we can be spied on by our computers, phones, televisions, cars, even by our own houses. But again, we are not merely facing a problem of poor system design, but of structural impediments to fixing it. Manufacturers of IoT devices are allowed to deploy systems in their products that limit the power of their competitors and their critics. Under European and US law, if you have a system restricting access to a copyrighted work, publishing information that might help or encourage someone to bypass that system is a felony, punishable by a five-year prison sentence. That means you cannot post a truthful critique of such a technical system without risking prosecution. In 2017, the W3C (World Wide Web Consortium), the main international standards organisation for the World Wide Web, was asked by big US entertainment companies like Netflix to standardise DRM for browsers to prevent people from copying streaming video. The Electronic Frontier Foundation (EFF) approached them, asking them to promise not to sue people who publish truthful information about flaws in these systems. They categorically refused!
In the long run, will this not lead to unhappy customers who will stop buying products or using services from these companies?
As more and more people are affected by bad security, we will arrive at a moment when a critical mass of people becomes aware of the risks they are facing. I call this the moment of ‘peak indifference’, which I use to describe the tipping point in the perception of a problem: previously seen as far-off, a problem suddenly becomes so obvious that the number of people alarmed begins to grow of its own accord. However, the deeper these bad system designs are integrated into our lives, the harder it will be to remediate them. By the time enough people care about security and demand action, it may be too late to reverse the large-scale negative outcomes. As a consequence, nihilism could become the dominant motif in the security discussion. Many people will say, ‘it is just too late, we should give up’. It is the same with climate change: by the time people understand the vastness of the problem, it may be too late to stop the world’s temperature rising by two degrees. There is a real risk that the critical mass of activist energy will arrive too late to convince people to use that energy to make a difference, in cyber security as in climate change.
A hopeless scenario. How can we prevent this from happening?
We have to start taking action now. First, we need to stop censoring criticism of technological flaws. That is a categorical imperative. A simple way to get there would be a liability change: companies that take measures to suppress information about defects in their products should face higher liability than other firms. Second, we need to create liability regimes that reflect the full cost of data breaches. Today, firms treat the cost of stolen data as being limited to the immediate visible costs, even though breaches can have severe later consequences for the victims. A few years ago in New York and London, criminals used data stolen in several breaches to forge duplicate keys and rob people’s houses while they were out of town. Who pays for these costs? Take the Home Depot data breach with 80 million stolen credit card details: the fine they faced was about 30 cents per customer plus a six-month voucher for credit card monitoring services. In the future, a company should be responsible for at least half a per cent of the value of all the property owned by everyone affected by a breach; anything less is just a way of externalising cost. If the liability is set realistically high, companies will become extremely cautious about what data they collect and how they keep it safe. Not least because insurance and reinsurance companies would start refusing to write policies for firms that stockpile too much PII (personally identifiable information).
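To make the difference in magnitude concrete, here is a minimal back-of-the-envelope sketch in Python. The record count and the per-customer fine are the figures quoted above; the average property value per affected person is purely an illustrative assumption, not a figure from the interview:

```python
# Back-of-the-envelope comparison of breach penalties (illustrative assumptions only).

RECORDS_EXPOSED = 80_000_000      # stolen credit card details (figure quoted in the interview)
FINE_PER_CUSTOMER = 0.30          # approximate fine per customer in dollars (figure quoted in the interview)

# Assumption for illustration only: average value of property owned per affected person.
ASSUMED_PROPERTY_VALUE_PER_PERSON = 50_000.0

actual_fine = RECORDS_EXPOSED * FINE_PER_CUSTOMER
proposed_liability = 0.005 * RECORDS_EXPOSED * ASSUMED_PROPERTY_VALUE_PER_PERSON  # 0.5% of all affected property

print(f"Actual fine:             ~${actual_fine:,.0f}")         # ~$24,000,000
print(f"Proposed 0.5% liability: ~${proposed_liability:,.0f}")  # ~$20,000,000,000 under this assumption
```

Under these assumed numbers the proposed liability is roughly three orders of magnitude larger than the actual fine, which is the point of the argument: the penalty would finally approach the real cost borne by the victims.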
If harvesting personal data is to be punished: what is the value of data in the future?
People claim data is like oil, but you could also argue data is like pollution. If you run a paint factory, you need to make sure no toxic substances get into the water supply. If you fail, you will be sued, and the penalties you face will make you do everything you can to prevent this from happening, such as continuously monitoring your systems and the water for leaks. It should be the same with companies and data: they must do whatever they can to prevent leakage and act immediately if it happens nevertheless. If they do not detect it and go on ‘poisoning’ people, their firm should be dissolved and the money given to the people affected.
And what about the responsibility of the employees?
I suggest that people who make important decisions about securing data should be personally liable, even after a firm dissolves. The necessity of personal liability becomes evident if we look at the leak at CloudPets, a company that produced internet-connected plush toys. The personal information of more than half a million people was compromised: email addresses and passwords, profile pictures and voice recordings of the children and adults who had used these toys. The toy maker was notified several times by data breach monitoring companies that its customer data was online and available for anyone to get their hands on, yet the data remained up for almost a week. The reason for this lack of action was that the firm was about to be shut down and its staff at that point consisted of a single person, a bookkeeper dealing with the liquidation. In my opinion, people who fail to protect the data they were entrusted with should be treated like people who steal.
How can we improve the protection of privacy and establish a data-based economy?
We need new laws and ethics for handling data. One possibility would be the formation of information fiduciaries. A quick explanation: the law of fiduciaries arises from economic relationships based on asymmetrical power, such as people giving their personal information to skilled professionals like doctors, lawyers or accountants. In exchange for this trust, the professionals owe these people a duty of loyalty and of care, which means they cannot use their clients’ information against their clients’ interests, and they must act competently and diligently to avoid harm.[1] Now the idea is that the same fiduciary rules should apply to online companies that collect and monetise their customers’ personal data, as they also hold one-sided power over them: they can monitor their customers’ activities, but those customers have no reciprocal power. New laws would define such companies as information fiduciaries. That way, companies handling data would be legally and ethically bound to put your interests ahead of their own.
What is your advice for all of us in handling cyber risk?
We have to become more aware of the risks and stop giving up our privacy so frivolously. Parents have to become more aware of security issues when buying toys like CloudPets. They trust the product because they believe no one would have invested in it unless the investors had done their due diligence. In an ideal future, free of all structural impediments to better cyber security and with full acknowledgement of the real costs of breaches, this might be the case and we could trust our devices. Until then, more caution is needed. This is best achieved by starting to value our privacy more highly. We need to teach our children not to give away too much information on social media, for example, and we have to teach them how to avoid being spied on. For us parents, the logical consequence is that we should not spy on our children either, whether by monitoring their media use or tracking their whereabouts.
[1]https://www.eff.org/de/deeplinks/2018/10/information-fiduciaries-must-protect-your-data-privacy
Cory Doctorow (craphound.com) is a science fiction author, activist, journalist and blogger: the co-editor of Boing Boing (boingboing.net) and the author of RADICALIZED and WALKAWAY, science fiction for adults; a YA graphic novel called IN REAL LIFE; the nonfiction business book INFORMATION DOESN’T WANT TO BE FREE; young adult novels like HOMELAND, PIRATE CINEMA and LITTLE BROTHER; and novels for adults like RAPTURE OF THE NERDS and MAKERS. He works for the Electronic Frontier Foundation, is an MIT Media Lab Research Affiliate, a Visiting Professor of Computer Science at the Open University and a Visiting Professor of Practice at the University of South Carolina’s School of Library and Information Science, and co-founded the UK Open Rights Group. Born in Toronto, Canada, he now lives in Los Angeles.
[Web for Interdisciplinary Research & Expertise]
Think tank for business, science and society
W.I.R.E. Zürich | Hallwylstrasse 22 | CH-8004 Zürich | Switzerland
W.I.R.E. London | 34 Albert Street | London NW1 7NU | United Kingdom
+41 43 243 90 56 | info@thewire.ch | www.thewire.ch