A recent survey from UNSW provides a cautionary illustration of what can happen when a student is not given the support they need from their supervisors to design good research. If you’re not familiar with how universities and student/supervisor relationships work (many people outside academia are not), keep this in mind as we discuss this survey: it is ultimately the responsibility of the student’s supervisor(s) to ensure that student work is of sufficient quality to be worth the time of the student, any participants they recruit, and the eventual examiners. The student is, by definition, a student and still learning what good research is; it is the function of supervision to support and guide that learning.

Designing good research studies is hard. Students need to learn how to test theories rigorously without their pre-existing guesses and assumptions getting in the way. In the social sciences, that means learning how to frame a survey as impartially as you can, so that the outcome reflects what the respondents actually chose rather than the direction you wanted. Research should advance genuine inquiry, not push a predetermined perspective; the latter is propaganda, not research, and the opposite of what a student should be learning. Nobody does this perfectly. Avoiding personal bias is an ideal that scientists should strive for, not a perfect state we attain by the time we graduate.

Sadly, neither poor supervision nor bad survey design is rare in academia, so why does EFA care about this particular survey? The survey in question purports to examine community expectations regarding the responsibilities of online service providers for child safety on their platforms. In reality, it pushes a particular anti-encryption, carceral surveillance approach to the Internet that has nothing to do with safety and everything to do with authoritarian control over people.

Have you ever seen a political party or the government advance a claim that a particular surveillance law or expanded power is good, actually, because “research shows” that “a majority of Australians support…” whatever it is? Where that research comes from, and how that support was acquired, are worth digging into a bit.

Let’s get started.

Learning From Mistakes

Mistakes provide a good opportunity for learning. Let’s use this survey to examine how survey design can be used to manipulate people into answering in a particular, biased way. A word of caution: once you learn how to design a push poll, you’ll start to see just how often they’re used everywhere: in newspapers, by lobby groups, and yes, even in academia.

The survey is here if you’d like to follow along: https://unsw.au1.qualtrics.com/jfe/form/SV_1X4K65UYCLsPTim

We’ll step through each question, but our main concern is not the individual questions per se but the framing of the survey and the absence of context or alternative solutions that respondents might have opted for instead.

Question 1

"Children younger than 16 should not be able to access adult pornography sites."

Note that we haven't had any discussion of what pornography is, who decides what counts as pornography, or even how "adult pornography" differs from other kinds of pornography. Is there a whole genre of children's pornography? Pornography for pets, perhaps?

Question 2

"Adult pornography sites should check the age of their users to stop children accessing pornography."

Note how this question follows on immediately from the previous one, to which the vast majority of respondents have presumably answered yes, and leads them to consider only one solution. Kids shouldn't access porn? Well then the porn sites should verify age, obviously! Presented this way, it seems like the only logical solution.

But what if a series of alternative solutions had been presented instead? "To discourage children from accessing porn, which of the following techniques do you think would be most effective:

  • age verification by the website
  • education programs in schools
  • better access to accurate and respectful age-appropriate information about sex
  • an anonymous government-organised age-attestation system that didn't give ID information to the porn site
  • etc."

A survey that genuinely wanted to elicit people's opinions would have offered them a list of possibilities, including an "other (please specify)" option, and would have asked them to think about which, if any, of the options were going to be effective.

Question 3

"Many phones currently scan for viruses and spam. I think they should also scan for child sexual abuse material."

Again, consider the framing. One could instead have framed the same question by saying "Many authoritarian states routinely scan the private photographs and messages of their citizens. I think Australia should do so too, but only for child sexual abuse material." Same question, but the different framing would elicit a very different set of responses.

A well-designed survey would have deliberately avoided framing the question in a way that tilted it one way or the other. (Incidentally, one of EFA’s board members co-wrote a whole paper about why the virus-scanning-as-CSAM-scanning argument is a false equivalence; it's all about whose interest the scanner runs in: https://arxiv.org/abs/2110.07450)
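To make “whose interest the scanner runs in” concrete, here is a toy sketch in Python. Every name and value in it is invented for illustration; it comes from neither the paper nor any real scanner:

    # Toy illustration only: all names and data below are invented.
    MALWARE_SIGNATURES = {b"EVIL_PAYLOAD"}  # stand-in for an antivirus database
    SECRET_HASH_LIST = {"9f3a1c"}           # stand-in for a CSAM hash database

    def virus_scan(data: bytes, owner: str) -> None:
        """Runs in the device owner's interest: detect, protect, inform."""
        if any(sig in data for sig in MALWARE_SIGNATURES):
            print(f"Quarantined the file and notified {owner}.")  # owner stays in control

    def csam_scan(file_hash: str, owner: str) -> None:
        """Runs in a third party's interest: detect, then report the owner."""
        if file_hash in SECRET_HASH_LIST:   # a list the owner can never inspect or audit
            print(f"Reported {owner} to an external authority.")  # owner may never be told

Same mechanism, opposite loyalties: one scanner answers to the person holding the phone, while the other reports on the person holding the phone to someone else.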

Question 4

"When I am using a website or an app, it is important to me that my data can never be accessed by my government or the police."

Again, obviously, framing. Some Australians greatly distrust their own government(s), but many don't. If the question instead had asked "When I am using a website or app, it is important to me that my data can never be sold to advertisers or accessed by a foreign government" we'd get different answers. A neutral framing would simply have said "never be accessed by anyone other than the intended recipient." But no effort at a neutral framing has been made.

Question 5

"It is important to me that the websites and apps that I use are not being used by child sexual abusers to harm children."

Are we supposed to be less concerned about children being abused if it’s done with apps we don’t happen to use? It’s somehow more abhorrent if it’s done using the good, clean apps that normal, decent, law-abiding people use?

This question attempts to paint child abuse as something that only ever happens somewhere else, committed by strange people we never meet who are nothing like ourselves, when the reality is that it’s often committed by close family friends of the victim-survivors.

Question 6

"I am comfortable with government or police accessing my data, if it helps websites to be safer for children."

This is a particularly sneaky question. It tacitly asks everyone to constantly prove their innocence to the government. Constant authoritarian surveillance is not a feature of a liberal democracy. Asking “I am comfortable with full-blown fascism if it helps websites be safer for children” would be ludicrous, but we’re expected to accept that this question is somehow less absurd.

What if the question were "I am comfortable with the government or the police accessing all of my photos if it helps websites to be safer for children”? All of them, including the ones parents take of their children in the bath because they pulled a hilarious face. The intimate ones shared with a partner with full consent and enthusiastic participation. Because that’s what “my data” implies: full access to everything on your phone.

Question 7

"If scanning for child sexual abuse material is done, I would be worried that this information might be used later for different purposes."

This is a better question. It acknowledges that there are risks, not just benefits, to the proposals. An even better question would seek to discover what other risks might exist, and attempt to gauge how worried people are about them. For an example of how this could be done, see the OAIC’s Australian Community Attitudes to Privacy Survey 2020.

Question 8

"Some online messenger apps are considering using “encryption” (messages are locked away from everyone except the sender and receiver, and even the tech company and the police can’t see them). 

This would make it very difficult to detect child abusers grooming children online. 

Do you agree encryption should be used for online messaging?"

Leaving aside the failure to distinguish end-to-end encryption from ordinary client-server encryption, the issue again is framing. The survey could instead have asked "There have been incidents in Australia of intermediaries harvesting health information and using it for financial gain. How important is it to ensure that online messaging is encrypted so that only the intended receiver can decode the information?" or even "Some teens, and many adults, voluntarily send sexualised images to a partner. How important is it that those images cannot be viewed by intermediaries on the Internet?"
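For readers unfamiliar with that distinction: under ordinary client-server encryption the provider decrypts every message in transit, while under end-to-end encryption the private keys never leave the users' devices, so the provider only ever handles ciphertext. Here is a minimal sketch of the end-to-end case using the PyNaCl library (our illustration, nothing to do with the survey):

    # End-to-end encryption sketch using PyNaCl (pip install pynacl).
    from nacl.public import PrivateKey, Box

    alice = PrivateKey.generate()  # created on Alice's device, never uploaded
    bob = PrivateKey.generate()    # created on Bob's device, never uploaded

    # Alice encrypts directly to Bob's public key.
    ciphertext = Box(alice, bob.public_key).encrypt(b"see you at 6")

    # The provider stores and forwards `ciphertext`, but it holds neither
    # private key, so it cannot construct the Box needed to decrypt.

    # Bob decrypts on his own device.
    assert Box(bob, alice.public_key).decrypt(ciphertext) == b"see you at 6"

    # Under client-server encryption, by contrast, the provider terminates
    # the encryption and can read every message in plaintext.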

The truth is that there are security risks associated with exposure, too. Encryption also makes it difficult for abusers to stalk their victims when they’re trying to escape.

The survey should have asked about risks, or at least refrained from framing the question only in terms of one example of why security would be bad. Security is good!

Question 9

"If it helped reduce online child sexual abuse, I would be comfortable with proving my identity to the social media sites that I use, such as by providing a driver’s license number (or similar)."

Again the same question: why would handing your ID to Twitter do anything to deter child sexual abuse? It's not just that this question completely fails to examine any of the downsides (for example, the risk of identity theft, particularly on low-budget, insecure, or untrustworthy platforms); it asserts, without asking the respondent to examine it, a completely false link between undermining the privacy of the survey respondent and making children safer.

Question 10

"Social media and other companies know that some adults groom children online.

I would support social media companies scanning online messaging for signs of grooming, and if detected, warning children who are being groomed online."

Telephone companies know that some adults groom children on the phone. Should phone companies listen in to everyone’s conversations for signs of grooming? Car companies also know that some adults groom children in cars. Should car companies scan conversations in private vehicles for signs of grooming?

The framing of this question assumes that there are no other ways to address grooming of children, implies benefits, and ignores any potential risks from online message scanning.

We could instead have asked: “Social media companies know that children use online communities to explore identities that abusive and controlling adults in their lives object to, often violently. Sometimes, those abusive individuals and communities call learning about other ways of living 'grooming'.

Should social media companies alert parents if their children are viewing queer or trans educational material?”

Question 11

"What do you expect social media and technology companies to do to make sure that their service is free from child sexual abuse material (CSAM)? Select as many as you want..." There follows a list of options, including "other (please specify)"

This is the first question that comes across as a genuine effort to find out what people think, but it is not without its challenges. It implies that making a service completely free of child abuse material is possible in a society that creates CSAM.

We are not asked “what do you expect society to do to ensure it is free from child sexual abuse material?” We are not asked what we expect of other companies. The question frames social media and technology companies as uniquely responsible for CSAM in ways that are unreasonable.

It’s also unclear whether it’s even possible for companies to take action of this nature on their platforms without significant undesirable consequences. Yet again we’re asked to assess options with only the benefits presented to us, and none of the risks. That’s an unfair trade-off to expect us to make, and it biases the research significantly.

Question 12

"Do you think that the Australian government should pass laws making tech companies take action on their platforms against child sexual abuse material?"

Who could say no? This is a “when did you stop embezzling money” question. It’s a trap.

The question also ignores that Australia already has laws that require companies of all kinds to respond to evidence of child sexual abuse material. No effort is made to assess what legislative gaps exist that necessitate new laws. Are the existing laws not being enforced?

A better question would have included a list of options, or even a specific proposed law. As it is, this question provides no useful information, but it does push a particular, narrow perspective: Something must be done. This is something. Therefore we must do this.

Question 13

"What do you think should happen to an internet company that repeatedly allows child sexual abuse material to be shared on its platform?

Select as many as you want:" 

What a surprise: when we're not being asked about Australian laws, we're allowed to select the options we want.

Summary

Overall, this survey asked us how much personal privacy and security we were willing to sacrifice in return for an ill-defined promise to “make children safer”. It didn't ask us to examine whether any of this sacrifice would actually make children safer, and it didn't ask whether exposing children's personal messages or adults’ identity documents might actually make them a lot less safe.

When it asked for our support for new laws, it didn't even tell us what those laws would be. It didn't ask whether police or ASIO officers should need a warrant, whether powers to "make children safer" would require judicial oversight, whether those powers would have to be necessary and proportionate, or whether they would actually make the slightest bit of difference.

The occasional real question merely serves to underline the coercive and misleading nature of the others.

This really isn't the student's fault; it's the supervisor's fault for not explaining what scholarship is supposed to be. The trouble is that it deprives Australians of the trustworthy scientific contributions that public debate depends upon, and undermines trust in scientific research more generally.

Now you know a bit more about how poorly designed (or actively malicious) surveys can be used to manipulate research outcomes and public opinion. Keep a careful eye out for more, and let us know if you spot one!
