A couple of weeks back (with the help of Electronic Frontiers Australia & conference organisers), I attended the inaugural Safety on the Edge Conference hosted by the Office of the eSafety Commissioner. The eSafety Office falls under the Communications and Arts portfolio as it deals with regulation of internet content - an historically contentious topic for civil liberties groups.
The eSafety Office's original remit focused on the welfare of children online; however, after noting the prevalence of online abuse faced by adults, the agency was recently given funding to widen its scope.
This article is by and is the copyright of Rosie Williams, a citizen journalist who works on a range of issues, including data ethics and online safety. It was originally published on her The Little Bird blog and is republished here with permission. Rosie is also very active on Twitter @Info_Aus.
Research used by the eSafety Office found image-based abuse has become a major issue facing internet users:
The research shows victims’ intimate images were most commonly shared without consent on popular social media sites. Facebook/Messenger accounted for 53%, followed by Snapchat at 11% and then Instagram at 4%. Text messaging and MMS were other common channels for distribution.
Earlier in the year, the office launched its online portal for reporting image-based abuse, but used the more recent conference to announce the rollout of an additional tool aimed at pre-empting abuse.
The additional functionality is the result of a pilot partnership between Facebook and the eSafety Office, with Australia the first jurisdiction to trial the technology. It offers assistance to people worried that someone may be about to share their intimate images against their wishes.
In order to trigger the functionality, potential victims must first make a report through the eSafety Office portal. They must then send the images they are worried will be shared to themselves via Facebook Messenger. Facebook will create a special code (called a hash) unique to each image, which will be used to detect attempts to share it on Facebook and prevent unauthorised distribution.
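The reporting-and-matching flow described above can be sketched roughly as follows. All names here are illustrative, not Facebook's actual API, and a plain cryptographic hash stands in for the real fingerprinting step purely for simplicity:

```python
import hashlib

# Hypothetical sketch: the platform derives a fingerprint ("hash") from each
# reported image and checks every attempted share against the stored set.
blocked_hashes = set()

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for the real fingerprinting step; a production system would
    # use a perceptual algorithm, not a plain cryptographic hash like this.
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    # Called when a potential victim reports an image via the portal.
    blocked_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    # Called on every attempted share; exact matches are blocked.
    return fingerprint(image_bytes) not in blocked_hashes

report_image(b"reported image bytes")
print(allow_upload(b"reported image bytes"))  # False: blocked
print(allow_upload(b"some other image"))      # True: allowed
```

Note that with an exact-match hash like this, any alteration to the image would defeat the check, which is precisely the limitation discussed below.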
Clarifying Facebook's revenge porn pilot after speaking to them:
- A Facebook worker will see the full, uncensored nude images
- Images stored for a period
- But how else could you do it; it's about risk https://t.co/t3FaaxxR2T
— Joseph Cox (@josephfcox) November 8, 2017
The tool received a round of applause from the sold-out conference room but has received a very mixed response from the media (and among my network of technical experts). The issues raised by concerned community members are elaborated well in this article in The Conversation.
The most obvious concerns question the invitation to share nude photos as a measure aimed at securing one’s privacy. TechCrunch suggests it would make more sense to provide a way for users to hash the image themselves rather than have them upload it and have Facebook do it on their behalf.
The main technical questions revolve around the limitations of the hashing function given that changing an image also changes the hash. The worry is that all an abuser would have to do is make relatively minor changes to the image/s and be free to go on sharing as they please.
Of the two forms of hashing available, comments by Alex Stamos suggest the more robust PhotoDNA is being used, which is resistant to simple changes, rather than cryptographic hashing, which would fail to match if even a single pixel was changed.
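The fragility of the cryptographic option is easy to demonstrate: flipping a single byte (think of it as one pixel) yields a completely different digest, so a trivially edited image would no longer match.

```python
import hashlib

# Model an image as raw bytes and change a single byte ("pixel").
image = bytes(range(256))
tampered = bytes([image[0] ^ 1]) + image[1:]  # flip one bit of one byte

h1 = hashlib.sha256(image).hexdigest()
h2 = hashlib.sha256(tampered).hexdigest()

print(h1 == h2)  # False: the two digests are entirely different
```

This is why an exact-match hash alone cannot stop an abuser who is willing to crop, resize, or re-encode an image before sharing it.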
Facebook Chief Security Officer Alex Stamos used his personal Twitter account to discuss the limitations of the technology in this thread.
There are algorithms that can be used to create a fingerprint of a photo/video that is resilient to simple transforms like resizing. I hate the term "hash" because it implies cryptographic properties that are orthogonal to how these fingerprints work.
— Alex Stamos (@alexstamos) November 8, 2017
It may be the case that the use of PhotoDNA (as opposed to cryptographic hashing) is the reason why the hashing needs to be done at Facebook's end and not by the potential victim. Alex Stamos (and the Wikipedia explanation) make clear there is some flexibility in the tool to cope with small changes, but it would be good to hear more detail on exactly what kinds of image alterations the tool can deal with and which it cannot.
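To illustrate the kind of resilience Stamos describes, here is a minimal "average hash" sketch. This is not PhotoDNA (whose algorithm is not public); it only shows the general idea of a perceptual fingerprint that survives small edits while still distinguishing unrelated images. Images are modeled as 8x8 grids of grayscale values (0-255):

```python
# Perceptual-fingerprint sketch: each bit records whether a pixel is
# brighter than the image's mean, so uniform edits barely move the bits.

def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    # Number of differing bits; a small distance suggests the same image.
    return sum(x != y for x, y in zip(a, b))

image = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]  # a gradient
brighter = [[min(255, p + 10) for p in row] for row in image]    # small edit
inverted = [[252 - p for p in row] for row in image]             # different image

print(hamming(average_hash(image), average_hash(brighter)))  # 0: still matches
print(hamming(average_hash(image), average_hash(inverted)))  # 64: no match
```

A cryptographic hash of the brightened image would bear no resemblance to the original's, whereas the perceptual fingerprint above is unchanged, which may be part of why the fingerprinting is done on Facebook's side with specialised tooling rather than left to the user.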
Most of the articles on the tool to date have come from mainstream channels, so it would be helpful to hear more expert opinion that gives potential victims and their advocates a solid basis for judging how much confidence to place in using or recommending the tool.
I look forward to more information.