The Fear of Encryption

Right now, if I send a text to my friend on iMessage, and we are both using Apple's iOS software, that text is encrypted end to end, meaning only my friend and I can read it. Apple, the government, and other malicious actors cannot read the message while it is in transit. The message may not be protected while at rest (that is, when it is not in transit), depending on whether our phones require a passcode and whether they are backed up to Apple's iCloud servers. If the messages are saved to iCloud, they could potentially be read by the company or by law enforcement (with a warrant). 
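To make that distinction concrete, here is a minimal sketch of encryption in transit using Python's cryptography library. It only illustrates the principle that no one without the key can read the message; it is not Apple's actual iMessage protocol, which uses per-device public-key encryption rather than a single shared key.

```python
# Minimal illustration: only holders of the key can read the message.
# This is a sketch with the `cryptography` library's Fernet recipe
# (symmetric encryption), NOT Apple's iMessage protocol.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the shared secret
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"See you at 7?")   # what travels over the network
print(ciphertext)                  # unreadable to anyone without the key

plaintext = Fernet(key).decrypt(ciphertext)     # only a key holder recovers this
print(plaintext.decode())          # "See you at 7?"
```

The "at rest" question is separate: if a copy of that message is later backed up to a server in a form the provider can decrypt, the stored copy is only as private as the provider's policies allow.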

In 2015, the San Bernardino shooting called into question whether the security and privacy of everyone running the same iOS version as the shooter should remain protected. In keeping with a warrant, Apple turned over to the FBI all of the iCloud data stored from the phone in the month prior to the shooting. Apple refused to go further, because building a backdoor into the shooter's phone would hand law enforcement a key to everyone else's phone running the same iOS, and such a deliberate weakness could also make future versions of iOS more vulnerable. The shooter's phone was locked, sure, but the real obstacle was the ten-attempt lockout feature: after ten failed passcode attempts, the phone erases its data. In this context, Apple was willing to take a position unpopular with law enforcement and White House policy in order to demonstrate a broader commitment to protecting user data, positioning itself as more secure than its rival Google. This was, of course, partly a publicity stunt aimed at Google, and it was also misleading. While Apple stands strong against US government inquiry, the company capitulates to more authoritarian governments, like China and Qatar, where it does not enjoy the same user support (and thus market share) it has in the US. 
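For readers unfamiliar with that lockout feature, the sketch below is a hypothetical illustration of the policy at issue, not Apple's implementation: once ten wrong passcodes have been entered, the device wipes its own data, which is what makes brute-force guessing useless without the manufacturer's help.

```python
# Hypothetical sketch of an auto-erase policy like the one at issue in the
# San Bernardino case: after ten failed passcode attempts, the device erases
# its data. An illustration only, not Apple's implementation.
MAX_ATTEMPTS = 10

class Device:
    def __init__(self, passcode: str):
        self._passcode = passcode
        self._failed_attempts = 0
        self.data = {"messages": ["..."], "photos": ["..."]}

    def unlock(self, guess: str) -> bool:
        if guess == self._passcode:
            self._failed_attempts = 0
            return True
        self._failed_attempts += 1
        if self._failed_attempts >= MAX_ATTEMPTS:
            self.data.clear()   # auto-erase: brute-forcing destroys the data
        return False
```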

The New York Times podcast The Daily recently examined privacy and encryption through the lens of child pornography and the safety of survivors. While listening, I was troubled by the way the episode tied encryption to Facebook's lack of accountability and unwillingness to flag (and eliminate) child pornography, particularly since Facebook is currently in the process of encrypting its Messenger service. Facebook, however, accounts for the majority of child pornography flagging, responsible for more than 90% of the reports in 2018. Facebook's former Chief Security Officer, Alex Stamos, has said Facebook is able to find, and then report, so much child pornography precisely because its messages lack the encryption that creates barriers to finding the abuse and prosecuting the abuser. I was intrigued by this in part because of a class I took on the Internet with Dave Walsh. A few days after listening to the podcast, I spoke with Dave to discuss the truth, history, and legal ramifications of the real injury resulting from child pornography, in contrast to the propaganda-based fear tactics behind the anti-encryption movement. 

When I spoke with Dave, he first walked through the history of encryption in the United States, a history built on invoking fear, with messages like "Protect the (white) children" and "Terrorists (non-white people) will get us all." In the 1950s, the government bugged the landlines of people it labeled "black radicals" and "communists," and it later wiretapped cellphones under the banner of the "War on Drugs," targeting and undermining the privacy of these individuals. The federal government resisted adopting a positive stance toward digital encryption from the 1970s to the 1990s (and again in 2001 and 2015, after 9/11 and the San Bernardino shooting), on the theory that more information meant more ability to catch perpetrators of violence. The government used a variety of pressures and legislation (like the 2001 Patriot Act) to ensure that government agencies, law enforcement, and the military would not lose access to content and to bad actors. These pressures centered on fear, such as the fear for children's safety, and on the claim that strong encryption would let criminals and terrorists hide their activity from intelligence-gathering efforts by government agencies and law enforcement. The logic that more information means catching more people is flawed, because both encryption and catching criminals are far more complex than that. The vast majority of Americans are against child pornography, but using it as a fear tactic is an ethically dubious move, and one with a long history among these companies and the government.  

Recently, the conversation about child sexual abuse has gained prominence because of President Trump's attention to the matter. Trump stated that his "administration is putting unprecedented pressure on traffickers at home and abroad," in part because his daughter, Ivanka Trump, has made child sexual abuse her signature issue. While combating child sexual abuse is an important cause, it is vital that citizens see through Trump's fear tactics, which cast specific, typically non-white individuals as the perpetrators. Trump is exploiting the cause for his own political gain without producing any real change.

Of all encrypted messages in the world, child pornography makes up a small proportion. Yet as platforms like Facebook move to encrypt their messages, advocates who seek to protect victims of child pornography trafficking oppose the change, because they see encryption as inhibiting the ability of police to catch perpetrators. The same argument is made about the dark web, which is not necessarily a bad thing even though it can be used for malicious purposes. Using Tor, which gives users access to the "dark web" (sites that cannot be reached through the ordinary web), is not illegal; it is simply an alternative, distinct internet ecosystem. Journalists and whistleblowers, like Edward Snowden, use it to protect their privacy and to communicate with sources whose lives could be on the line if they were discovered. 

How do we stop child pornography from being generated, shared, and collected online? Some suggest that technology should fix it, by creating an algorithm that can detect child pornography. There are multiple problems with this. First, the algorithm would have to be fed massive amounts of data, tens of thousands of victimized images, to be taught what defines pornography and how children can be identified. Second, the data used would most likely depict white children, even though children of color make up a significant share of survivors of child abuse, specifically pornographic abuse (this would depend on the images made available for the algorithm to digest). Finally, algorithms cannot predict future behavior, which is why someone like the Christchurch shooter in New Zealand could post the massacre to Facebook Live. Algorithms cannot stop what may happen; they can only intervene after the fact. In the case of Christchurch, the video was pulled fairly quickly after it was posted, but because Facebook is a network, people could easily download it and reupload it elsewhere. Currently, the people detecting child pornography are contract workers these companies outsource for content moderation. They are based in places like the Philippines, where the companies can get away with giving them minimal resources and support. These content moderators have jobs that are incredibly emotionally and mentally taxing: they must meet a quota, sifting through posts to flag inappropriate content such as murders, sexual assault, animal abuse, and child pornography. 
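It is worth noting that much of today's automated flagging does not try to predict anything; as I understand it, it largely works by matching uploads against hashes of images that human reviewers have already identified. The sketch below is a simplified, hypothetical version of that idea using exact SHA-256 hashes; real systems use perceptual hashes that tolerate resizing and re-encoding. The point it illustrates is the same one above: matching can only catch material that has already been seen, never new content.

```python
# Hypothetical sketch: flag uploads by matching them against hashes of
# already-identified images. Real deployments use perceptual hashes robust
# to resizing/re-encoding; exact SHA-256 is used here only to show the idea.
import hashlib

# Hashes of images human reviewers have already identified (placeholder value).
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_upload(image_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a known image."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

# A brand-new image produces a hash no one has seen before, so it is not flagged:
print(flag_upload(b"never-seen-before image bytes"))  # False
```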

Encryption aside, when a person is caught for child pornography, the files they uploaded or shared are not always deleted by law enforcement or by the companies. We need state-level pressure, because Facebook and other platforms are treated as interstate commerce; states need to criminalize keeping child pornography on servers like Facebook's. Survivors are often re-traumatized when legal authorities, who are mandated to notify them (or their parents, if they are under 18), report each time their specific files are flagged as shared or re-uploaded, even if the original perpetrator has been caught. 

Finally, companies are not held responsible for the content uploaded to their platforms. Section 230 of the Communications Decency Act states that internet platforms like Twitter, Facebook, and Google are no more responsible for illegal content, from child pornography to hate speech, than the individuals posting it, because the platforms are merely hosts. Tech companies almost always comply with law enforcement, but there seems to be little pressure on them to report, take down, and then keep track of malicious users or harmful material. 

We have seen the government's fear of encryption used over the past half-century to curtail the rights of its citizens. Encryption is vital to the lives of many, from people who speak out to journalists through protected services to survivors of cyberstalking who need this protection to avoid contact with their perpetrators, and so it must itself be protected. We cannot bend to the government's propaganda of fear and compromise our privacy. Combating child pornography is important, but encryption does not compromise finding, flagging, and stopping its spread. By giving more people access to my data, I make it less secure. To protect the most vulnerable people in the internet ecosystem, we need more protection, not less. 
