Privacy – Electronic Frontiers Australia
https://www.efa.org.au
Promoting and protecting digital rights in Australia since 1994.

Turnbull Hates your Privacy Online
13 April 2018 | https://www.efa.org.au/2018/04/13/turnbull-hates-your-privacy-online/

We see today that the Australian government is still pushing for an encryption backdoor, and isn't engaging with the Australian public about what it intends to do. The bill is apparently in "advanced stages", yet there is no transparency about its impact on your privacy and security.

There are signs Labor may oppose the legislation, so all is not yet lost.

“Labor, which has typically supported the government in matters of national security, remains sceptical additional legislation of this nature is practical.”

Meanwhile, the Department of Home Affairs is still building its Panopticon-like Facial Identification Service and ignoring the concerns of the Australian public. It sees concerns about letting private companies access the system as "overblown".

“Home Affairs said that there should not be a warrant required in order to access the Face ID system.”

Your digital rights are being attacked by powerful and well-funded adversaries. We only stand a chance if we stand together to oppose them.

Donate now to help fund our important work. We are an entirely volunteer-run organisation, so every dollar goes towards supporting the work of the EFA.

Join us as an individual, concession or organisational member as we fight to preserve your digital rights - and tell your family and friends why they should care too!

Australian Digital Rights Organisations Call For Politicians To Clarify Their Dealings With Cambridge Analytica
19 March 2018 | https://www.efa.org.au/2018/03/19/australian-digital-rights-organisations-call-for-politicians-to-clarify-their-dealings-with-cambridge-analytica/

JOINT MEDIA RELEASE BY ELECTRONIC FRONTIERS AUSTRALIA, FUTURE WISE, AUSTRALIAN PRIVACY FOUNDATION, AND DIGITAL RIGHTS WATCH

In light of the revelations that Cambridge Analytica has reportedly misused the data of over 50 million people on Facebook, Australia’s leading digital and civil rights advocates call on all Australian governments and political parties to categorically answer the following questions:

  • Have you, at any time, engaged the services of Cambridge Analytica or its parent company Strategic Communication Laboratories?
  • Have you, at any time, been provided with data on Australian citizens by Cambridge Analytica or its parent company Strategic Communication Laboratories?
  • Have you ever provided any Government data such as voter rolls to Cambridge Analytica or its parent company Strategic Communication Laboratories?
  • Do you believe that the linkage of this sort of data to generate sensitive political data meets the definition of consent required by Australian law?

Many Australian political parties and ministers have reportedly met with Cambridge Analytica over the past few years. We must know who in the Australian political sphere believes in informed consent, and who does not.

The New York Times and The Guardian have reported that Cambridge Analytica accessed the Facebook profiles of 50 million people without their informed consent. These are people who trusted Facebook to keep their private information private; people who did not give informed consent for their data to be shared in this way. Facebook has since suspended Cambridge Analytica from its platform, as well as its parent company Strategic Communication Laboratories.

Australian governments are pushing to collect more and more data on Australians, and to link it with larger and larger datasets. Australians must be confident that the custodians of our data will look after our best interests, proactively, and with due care and skill. We must know that our data is not being collected merely for narrow, self-interested reasons. We must be sure that this data is not being shared without our informed consent.

These should be simple questions for any government or political party to answer. We look forward to seeing how trustworthy they really are.

MEDIA CONTACTS
For Electronic Frontiers Australia
Justin Warren
Phone: 0412 668 526
Email: media@efa.org.au

For Future Wise
Dr Trent Yarwood
Phone: 0403 819 234
Email: trent@futurewise.org.au

For Australian Privacy Foundation
Liam Pomfret
Email: liam.pomfret@privacy.org.au

For Digital Rights Watch
Tim Singleton Norton
Email: info@digitalrightswatch.org.au

FOI uncovers a three month window to opt out of the My Health Record Scheme - then what?
13 February 2018 | https://www.efa.org.au/2018/02/13/foi-uncovers-a-three-month-window-to-opt-out-of-the-my-health-record-scheme-then-what/

More concerns than answers have been raised over the Government's My Health Record scheme after EFA board member Justin Warren requested access to the scheme's "Operational Blueprint" through the Right to Know FOI platform. In its response to the request, the Government made the information public.

Unfortunately for your privacy, the detail is light and the information raises more questions than it answers.

Justin Warren said "EFA are concerned that we are less than four months from the start of the compulsory My Health Record scheme and we still don't have any information on how to opt out of the system. A system that was originally supposed to be opt-in and was changed to opt-out in complete disregard for the advice the government received from multiple parties."

"According to this information, the opt-out period will end a mere three months after starting. What then? How do people opt-out of the system after this period?"

"Agencies are quick to trot out the 'We take your privacy seriously' line at every opportunity. We say talk is cheap. Prove it."

EFA members also expressed frustration that the Department of Human Services has no information about an opt-out trial completed in 2016. And members have questions, a whole bunch of them, that really need answering if the Government is serious about the health information of an entire country.

Member questions include (and are certainly not limited to):

Is the opt-out procedure going to be given any prominence beyond a buried government website? Will there be any consumer advisories about this option? Will health practitioners be given as much information about opting out as about the faux benefits (lies) of the "service" (sic)? What is the budget for 'fair and balanced' public information about this option?

Is opting out like death -- forever? Or if someone wants to participate in the future, will that also be an option? What is the truth about a person's relationship to this faux "service" (sic)?

How can a system only be opt-out for three months when people are joining the health system every day? This window assumes a static population, and we don't have one. You know. Migrants. New citizens. Real people who will arrive here after that ridiculous "window".

Have more questions? Comment below or on our twitter or facebook posts on the topic.

From the Department of Human Services website: "Historical opt-out trial information: An opt-out model of participation for the My Health Record system was trialled in the North Queensland and Nepean Blue Mountains regions. The trial was completed in 2016. We do not have information about the outcomes of the trial."

Read the article from 12/2/18: "How to opt out of Australia's e-health record scheme", published on iTnews.

Read: Opt-out for My Health Record 011-04150000 information page

ABC Radio Interview: Does technology know you better than your family or friends?
12 February 2018 | https://www.efa.org.au/2018/02/12/abc-radio-interview-does-technology-knows-you-better-than-your-family-or-friends/

EFA's Chair Lyndsey Jackson spoke with ABC Radio's Rod Quin and callers early on Saturday morning (10/2/18) about the data we share and what it can reveal about us.

Increasingly, people are discovering the impact and value of the information we give out. Callers expressed caution and distrust about the way data mining builds profiles of our lives and behaviour.

For EFA, the radio interview was a great opportunity to talk with people from a broad range of backgrounds. Ms Jackson reflected, “people are aware that private data has a value to business, and it’s an area that they want to know more about. We may not all understand all of the detail in how data matching and technologies work, but people are confident in talking about experiences where they see it touch their own lives and the stories and patterns people share”.

Listen to the ABC Overnights radio interview: Does technology know you better than your family or friends?

FBI says device encryption is 'evil' and a threat to public safety
14 January 2018 | https://www.efa.org.au/2018/01/14/fbi-encryption-evil/

The FBI continues its anti-encryption push. It's now expanded past Director Christopher Wray to include statements by other FBI personnel. Not that Chris Wray isn't taking every opportunity he can to portray personal security as a threat to the security of the American public. He still is. But he's no longer the only FBI employee willing to speak up on the issue.

This post is by Tim Cushing and was originally published on Techdirt.com. See the original article.

Wray expanded his anti-encryption rhetoric last week at a cybersecurity conference in New York. In short, encryption is inherently dangerous. And the FBI boss will apparently continue to complain about encryption without offering any solutions.

The Federal Bureau of Investigation was unable to access data from nearly 7,800 devices in the fiscal year that ended Sept. 30 with technical tools despite possessing proper legal authority to pry them open, a growing figure that impacts every area of the agency's work, Wray said during a speech at a cyber security conference in New York.

The FBI has been unable to access data in more than half of the devices that it tried to unlock due to encryption, Wray added.

"This is an urgent public safety issue," Wray added, while saying that a solution is "not so clear cut."

The solution is clear cut, even if it's not workable. What Wray wants is breakable encryption. And he wants companies to do the work and shoulder the blame. Wray wants to be able to show up at Apple's door with a warrant and walk away with the contents of someone's phone. How that's accomplished isn't really his problem. And he's not intellectually honest enough to own the collateral damage backdoored encryption would cause. But that's how Wray operates. He disparages companies, claiming encryption is all about profit and the government is all about caring deeply for public safety. Both statements are dishonest.

But Wray isn't the only FBI employee taking the move to default encryption personally. And the others commenting are taking the rhetoric even further, moving towards personal attacks.

On Wednesday, at the International Conference on Cyber Security in Manhattan, FBI forensic expert Stephen Flatley lashed out at Apple, calling the company “jerks,” and “evil geniuses” for making his and his colleagues' investigative work harder. For example, Flatley complained that Apple recently made password guesses slower, changing the hash iterations from 10,000 to 10,000,000.

That means, he explained, that “password attempts speed went from 45 passwords a second to one every 18 seconds,” referring to the difficulty of cracking a password using a “brute force” method in which every possible permutation is tried.

[...]

“At what point is it just trying to one up things and at what point is it to thwart law enforcement?" he added. "Apple is pretty good at evil genius stuff."
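
The quoted figures are roughly consistent: going from 10,000 to 10,000,000 iterations is a 1,000-fold increase in work per password guess, which is what drags an attacker from dozens of guesses a second down to one every many seconds. Here is a minimal Python sketch that times guesses at both iteration counts, using PBKDF2 from the standard library as a stand-in (the quote doesn't specify Apple's actual key-derivation function, so treat this as an analogy, not Apple's implementation):

    import hashlib
    import time

    def guesses_per_second(iterations: int, trials: int = 3) -> float:
        """Time how many key derivations (i.e. password guesses) fit in a second."""
        start = time.perf_counter()
        for i in range(trials):
            candidate = b"password%d" % i  # a brute-forcer's candidate guess
            hashlib.pbkdf2_hmac("sha256", candidate, b"fixed-salt", iterations)
        elapsed = time.perf_counter() - start
        return trials / elapsed

    for iters in (10_000, 10_000_000):
        # The second case takes a long time to run -- which is the point.
        print(f"{iters:>10,} iterations: ~{guesses_per_second(iters):.3f} guesses/sec")

Absolute numbers will differ on real cracking hardware, but the thousand-fold ratio between the two settings carries over regardless.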

This is great. Apple is now an "evil genius" because it made stolen iPhones pretty much useless to thieves. Sure, the device can be sold but no one's going to be able to drain a bank account or harvest a wealth of personal information. This was arguably in response to law enforcement (like the FBI!) complaining cellphone makers like Apple were assholes because they did so little to protect users from device theft. And why should they, these greedy bastards? Someone's phone gets stolen and the phone manufacturer now has a repeat customer.

Encryption gets better and better, limiting the usefulness of stolen devices and now Apple is an "evil genius" engaged in little more than playing keepaway with device contents. Go figure.

The FBI's phone hacker did have some praise for at least one tech company: Cellebrite. The Israeli hackers were rumored to have helped the FBI get into San Bernardino shooter Syed Farook's phone after a failed courtroom showdown with Apple. The FBI ended up with nothing -- no evidence on the phone and no court precedent forcing companies to hack away at their own devices anytime the government cites the 1789 All Writs Act.

Now we're supposed to believe device makers are the villains and the nation's top law enforcement agency is filled with unsung heroes just trying to protect the public from greedy phone profiteers. I don't think anyone believes that narrative, possibly not even those trying to push it.
Protecting Sources and Whistleblowers in the Digital Age
27 November 2017 | https://www.efa.org.au/2017/11/27/protecting-sources-whistleblowers/

Image: Matthew Da Silva

I recently had the pleasure of participating in the Walkley Foundation panel discussion on "Protecting Sources and Whistleblowers in the Digital Age". My co-panellists were Paul Farrell (BuzzFeed, ex-Guardian) and Elise Worthington (ABC), with Julie Posetti (ex-Fairfax/ABC) as compere. As well as providing a forum for the panel discussion, the event served as the official release of Julie’s UNESCO study “Protecting Journalism Sources in the Digital Age”.

As the only non-journalist on the panel, I offered a technical perspective on the challenges that journalists face. Over the past few years, I have provided practical technical solutions for several journalists and Australian news organisations to protect their sources and themselves.

This article is by Peter Tonoli and was originally published on Peter's blog. It is republished here under a Creative Commons Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) licence. See the original article. Peter is an EFA Board member and tweets @peter_tonoli.

Julie Posetti opened the event with a brief summary of her UNESCO study and invited panelists to share their initial thoughts on her report. For me, the report underscored the bleak situation journalists face in attempting to protect themselves and their sources. Mass surveillance of Australian journalists and citizens is multifaceted. Among the most prominent forms of Australian mass surveillance are:

  • Governmental – mandatory data retention: where all telecommunications metadata is being stored for two years;
  • Governmental – the “5 Eyes” intelligence alliance, between the US, UK, Canada, Australia and New Zealand, where governments outsource surveillance of their citizens through alliance members, and share that surveillance with their counterparts;
  • Corporate – such as Facebook, Google and Twitter: organisations with a voracious appetite for hoovering up even the smallest details about their users.

Governmental surveillance is increasing each year; the tightening of national security and anti-terrorism legislation is continually used as justification to erode citizens’ rights. Prima facie, governmental surveillance breaches Article 17 of the International Covenant on Civil and Political Rights, guaranteeing privacy. Article 17 specifies, “individuals have the right to share information and ideas with one another without interference by the State, secure in the knowledge that their communication will reach and be read by the intended recipients alone.”

Corporate entities make whistleblowing difficult by disincentivising anonymity. Facebook has a ‘real name’ policy, where ‘pretending to be anything or anyone’ is not allowed. Twitter only gives accounts ‘verified’ status if they have provided a verified email address, phone number, and birth date. These corporate policies foster suspicion, prompting members of society to shun or question those who use anonymity. Beyond these overt effects, corporates often insinuate that those who use anonymising networks, such as Tor, are up to no good, simply because they choose not to reveal their true identity.

Society has stigmatised the protection of privacy, such that those who are pseudonymous, or who use privacy-protection tools such as Tor, are labelled pejoratively as ‘paranoid’, at the very least. The corollary is that such people could only fairly be called paranoid if they were not actually being surveilled. With government metadata retention, and the wholesale capture of data under the 5 Eyes agreement, all citizens and journalists are rightly justified in protecting their identity and using anonymity systems.

In Australia, journalists are theoretically protected by ‘shield laws’, which protect them from government interference that would force them to reveal confidential sources. Ideally, shield laws also protect whistleblowers by proxy; however, shield laws have lost their efficacy in today’s environment of mass surveillance. Mass surveillance makes it easy to access a suspected whistleblower’s metadata and examine it for interactions with the publishing journalist, allowing whistleblowers to be outed. Current mass surveillance practices do not create an exception whereby communications to or from a journalist are expunged. For example, the Australian Federal Police accessed Paul Farrell’s metadata, without a warrant, to seek his sources.
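
To illustrate how little work that outing takes, consider a hypothetical sketch in Python (the numbers and records below are entirely invented): given retained call records, finding everyone who ever contacted a journalist is a one-line filter.

    # Hypothetical retained metadata: (caller, callee, timestamp) tuples.
    call_records = [
        ("0400 111 222", "0400 999 888", "2017-11-01T08:13:00"),
        ("0400 333 444", "0400 555 666", "2017-11-02T22:41:00"),
        ("0400 777 000", "0400 999 888", "2017-11-03T07:02:00"),
    ]

    JOURNALIST = "0400 999 888"  # the publishing journalist's (invented) number

    # No warrant, no content, no encryption to break: just a filter.
    suspects = [(caller, when) for caller, callee, when in call_records
                if callee == JOURNALIST]
    print(suspects)  # every number that ever called the journalist, and when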

The current narrow legal definitions of the term ‘journalist’ further diminish the effectiveness of shield laws. In the past ten years, the journalism industry has been disrupted, with a massive increase in the number of journalists who freelance, not to mention the fine line that has appeared between professional journalists and bloggers/tweeters like Behrouz Boochani.

Julie Posetti asked how I would respond to a potential whistleblower wanting to maximise their chances of remaining protected from exposure. While each scenario is different, ranging from a worker blowing the whistle on poor governance within council, to explosive releases such as those released by Edward Snowden, there are a few tips:

  • Whistleblow to a journalist who has a proven history of protecting sources, such as Paul Farrell, or the ABC Four Corners team. At the very least, contact journalists who provide secure channels for initial contact; I notice that, increasingly, journalists on Twitter have added Signal or Wickr contact details to their bios.
  • Minimise your digital footprint. Try to use analogue methods of communication, such as dead drops, transmission of material through the post, or meeting in person (without electronic devices/phones being present).

Citizens and journalists need to provide ‘herd immunity’, by using anonymising and privacy enhancing technologies all or most of the time, not just when requiring privacy. Increasing use of these technologies also results in:

  • Normalising these technologies, resulting in a reduction, and hopefully the removal, of the stigma that only ne’er-do-wells use them.
  • Ensuring that journalists and citizens can use these technologies with a high degree of confidence; with that confidence comes improved productivity.
  • Increasing expertise throughout the journalistic profession. This expertise will help journalists teach their peers, filling an ever-increasing hole in media organisations’ training capabilities caused by their ever-diminishing income.

Other aspects of using technology to protect sources that were mentioned by the panel were:

  • Tails - The Amnesic Incognito Live System. Tails is an operating system designed from the ground up for anonymity and privacy. It runs on most PCs from a USB stick, meaning users don’t have to reformat their computer to use it. Tails includes the Tor anonymising browser, encryption utilities, and utilities for cleaning and working on sensitive documents.
  • Journalists keeping a dedicated Signal phone, without a SIM card, in the office, ready for contact from whistleblowers.
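
As a concrete example of using these tools with confidence, here is a minimal Python sketch that verifies traffic is actually leaving via the Tor network before any sensitive contact is made. It assumes the third-party requests package with SOCKS support (pip install requests[socks]) and a local Tor client listening on its default port, 9050:

    import requests

    # Route both DNS resolution and traffic through the local Tor client.
    proxies = {
        "http": "socks5h://127.0.0.1:9050",   # "socks5h" resolves hostnames via Tor
        "https": "socks5h://127.0.0.1:9050",
    }

    # The Tor Project's check service reports whether the request arrived
    # from a Tor exit node.
    resp = requests.get("https://check.torproject.org/api/ip",
                        proxies=proxies, timeout=30)
    print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit node address>"}

If "IsTor" comes back false, nothing sensitive should be sent over that connection.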

On a final note, during the panel discussion Paul mentioned that the privacy of Australian journalists is less compromised than in other jurisdictions. To some extent I agree with him; however, journalists and citizens must remain vigilant to ensure the situation in Australia does not descend to the poor standards faced elsewhere.
Will Facebook’s new image abuse tool really work?
21 November 2017 | https://www.efa.org.au/2017/11/21/fb-image-abuse-tool/

A couple of weeks back (with the help of Electronic Frontiers Australia and the conference organisers), I attended the inaugural Safety on the Edge conference hosted by the Office of the eSafety Commissioner. The eSafety Office falls under the Communications and Arts portfolio as it deals with regulation of internet content - an historically contentious topic for civil liberties groups.

The eSafety Office's original scope focused on the welfare of children online; however, after noting the prevalence of online abuse faced by adults, the agency was recently given funding to widen that scope.

This article is by, and is the copyright of, Rosie Williams, a citizen journalist who works on a range of issues, including data ethics and online safety. It was originally published on her blog, The Little Bird, and is republished here with permission. See the original article. Rosie is also very active on Twitter @Info_Aus.

Research used by the eSafety Office found image-based abuse has become a major issue facing internet users:

The research shows victims’ intimate images were most commonly shared without consent on popular social media sites. Facebook/Messenger accounted for 53%, followed by Snapchat at 11% and then Instagram at 4%. Text messaging and MMS were other common channels for distribution.

Earlier in the year, the office launched its online portal for reporting image-based abuse, but used the more recent conference to announce the rollout of an additional tool aimed at pre-empting abuse.

The additional functionality is the result of a pilot partnership between Facebook and the eSafety Office, with Australia the first jurisdiction to trial the technology, which offers assistance to people worried that someone may be about to share their intimate images against their wishes.

In order to trigger the functionality, potential victims must first make a report through the eSafety Office portal. The potential victim must then send the images they are worried about to themselves via Facebook Messenger; Facebook will create a special code (called a hash) unique to each image, which will be used to detect attempts to send it on Facebook and prevent unauthorised sharing.

The tool drew a round of applause from the sold-out conference room but has received a very mixed response from the media (and among my network of technical experts). The issues raised by concerned community members are elaborated well in this article in The Conversation.

The most obvious concerns question the invitation to share nude photos as a measure aimed at securing one’s privacy. TechCrunch suggests it would make more sense to provide a way for users to hash the image themselves rather than have them upload it and have Facebook do it on their behalf.

The main technical questions revolve around the limitations of the hashing function given that changing an image also changes the hash. The worry is that all an abuser would have to do is make relatively minor changes to the image/s and be free to go on sharing as they please.

Of the two forms of hashing available, it seems, based on comments by Alex Stamos, that the more robust PhotoDNA is being used. PhotoDNA is resistant to simple changes, whereas cryptographic hashing would fail to match if even a single pixel were changed.
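
That distinction is easy to demonstrate. The sketch below is illustrative only: PhotoDNA's algorithm is proprietary, so a toy "average hash" stands in for it here, with an image represented as a plain list of greyscale pixel values:

    import hashlib

    def average_hash(pixels: list[int]) -> int:
        """Toy perceptual hash: one bit per pixel, set if the pixel exceeds the mean."""
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    image = [10, 200, 34, 90, 180, 25, 250, 60]  # stand-in for greyscale pixels
    tweaked = image.copy()
    tweaked[3] += 1                              # a single-pixel change

    # Cryptographic hash: any change yields a completely different digest.
    print(hashlib.sha256(bytes(image)).hexdigest()[:16])
    print(hashlib.sha256(bytes(tweaked)).hexdigest()[:16])

    # Perceptual hash: a small change usually leaves the hash identical or close.
    print(average_hash(image) == average_hash(tweaked))  # True for this example

Real perceptual hashes work on resized, normalised versions of the image and compare hashes by bit-distance rather than strict equality, but the contrast with the cryptographic digest is the point: minor edits defeat one and not the other.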

Facebook’s Chief Security Officer Alex Stamos used his personal Twitter account to discuss the limitations of the technology in this thread.

It may be the case that the use of PhotoDNA (as opposed to cryptographic hashing) is the reason the hashing needs to be done at Facebook’s end rather than by the potential victim. Alex Stamos (and the Wikipedia explanation) make clear there is some flexibility in the tool to cope with small changes, but it would be good to hear more detail on exactly which kinds of image alterations the tool can and cannot deal with.

Most of the articles on the tool to date have come from mainstream channels, so more expert opinion would be helpful: it would give potential victims and their advocates a solid basis for deciding how much confidence to place in using or recommending the tool.

I look forward to more information.
You may be sick of worrying about online privacy, but 'surveillance apathy' is also a problem
10 November 2017 | https://www.efa.org.au/2017/11/10/surveillance-apathy/

Do you care if your data is being used by third parties? Image: shutterstock.com

We all seem worried about privacy. Though it’s not only privacy itself we should be concerned about: it’s also our attitudes towards privacy that are important.

When we stop caring about our digital privacy, we witness surveillance apathy.

And it’s something that may be particularly significant for marginalised communities, who feel they hold no power to navigate or negotiate fair use of digital technologies.

This article is by Siobhan Lyons, from Macquarie University and was originally published on The Conversation. It is republished here under a Creative Commons CC-BY-SA licence. See the original article.

In the wake of the NSA leaks led by Edward Snowden in 2013, we are more aware of the machinations of online companies such as Facebook and Google. Yet research shows some of us are apathetic when it comes to online surveillance.

Privacy and surveillance

Attitudes to privacy and surveillance in Australia are complex.

According to a major 2017 privacy survey, around 70% of us are more concerned about privacy than we were five years ago.

Snapshot of Australian community attitudes to privacy 2017. Office of the Australian Information Commissioner


And yet we still increasingly embrace online activities. A 2017 report on social media conducted by search marketing firm Sensis showed that almost 80% of internet users in Australia now have a social media profile, an increase of around ten points from 2016. The data also showed that Australians are on their accounts more frequently than ever before.

Also, most Australians appear not to be concerned about recently proposed implementation of facial recognition technology. Only around one in three (32% of 1,486) respondents to a Roy Morgan study expressed worries about having their faces available on a mass database.

A recent ANU poll revealed a similar sentiment, with recent data retention laws supported by two thirds of Australians.

So while we’re aware of the issues with surveillance, we aren’t necessarily doing anything about it, or we’re prepared to make compromises when we perceive our safety is at stake.

Across the world, attitudes to surveillance vary. Around half of Americans polled in 2013 found mass surveillance acceptable. France, Britain and the Philippines appeared more tolerant of mass surveillance compared to Sweden, Spain, and Germany, according to 2015 Amnesty International data.

Apathy and marginalisation

In 2015, philosopher Slavoj Žižek proclaimed that he did not care about surveillance (though admittedly suggesting that “perhaps here I preach arrogance”).

This position cannot be assumed by all members of society. Australian academic Kate Crawford argues the impact of data mining and surveillance is more significant for marginalised communities, including people of different races, genders and socioeconomic backgrounds. American academics Shoshana Magnet and Kelley Gates agree, writing:

[…] new surveillance technologies are regularly tested on marginalised communities that are unable to resist their intrusion.

A 2015 White House report found that big data can be used to perpetuate price discrimination among people of different backgrounds. It showed how data surveillance “could be used to hide more explicit forms of discrimination”.

According to Ira Rubinstein, a senior fellow at New York University’s Information Law Institute, ignorance and cynicism are often behind surveillance apathy. Users are either ignorant of the complex infrastructure of surveillance, or they believe they are simply unable to avoid it.

As the White House report stated, consumers “have very little knowledge” about how data is used in conjunction with differential pricing.

So in contrast to the oppressive panopticon (a circular prison with a central watchtower) as envisioned by philosopher Jeremy Bentham, we have what Siva Vaidhyanathan calls the “cryptopticon”. The cryptopticon is “not supposed to be intrusive or obvious. Its scale, its ubiquity, even its very existence, are supposed to go unnoticed”.

But Melanie Taylor, lead artist of the computer game Orwell (which puts players in the role of surveillance) noted that many simply remain indifferent despite heightened awareness:

That’s the really scary part: that Snowden revealed all this, and maybe nobody really cared.

The Facebook trap

Surveillance apathy can be linked to people’s dependence on “the system”. As one of my media students pointed out, no matter how much awareness users have regarding their social media surveillance, invariably people will continue using these platforms. This is because they are convenient, practical, and “we are creatures of habit”.

Are you prepared to give up the red social notifications from Facebook? nevodka/shutterstock


As University of Melbourne scholar Suelette Dreyfus noted in a Four Corners report on Facebook:

Facebook has very cleverly figured out how to wrap itself around our lives. It’s the family photo album. It’s your messaging to your friends. It’s your daily diary. It’s your contact list.

This, along with the complex algorithms Facebook and Google use to collect data and produce “filter bubbles” or “you loops”, is another issue.

Protecting privacy

While some people are attempting to delete themselves from the network, others have come up with ways to avoid being tracked online.

The DuckDuckGo search engine and the Tor Browser allow users to search and browse without being tracked. Lightbeam, meanwhile, allows users to see how their information is being tracked by third-party companies. And MIT devised a system called Immersion to show people the metadata of their emails.
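
To make “metadata” concrete: even if a message’s body is never read, its headers alone reveal who contacted whom, and when, which is exactly what tools like Immersion surface and what retention schemes collect. A small illustration using Python’s standard email module (the addresses are invented):

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.org"   # invented addresses for illustration
    msg["To"] = "bob@example.net"
    msg["Date"] = "Fri, 10 Nov 2017 09:00:00 +1100"
    msg["Subject"] = "Coffee?"
    msg.set_content("See you at the usual place.")

    # Discard or encrypt the body; the headers alone still show who talked
    # to whom, and when -- the "metadata" that surveillance schemes keep.
    for header in ("From", "To", "Date"):
        print(f"{header}: {msg[header]}")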

Surveillance apathy is more disconcerting than surveillance itself. Our attitudes towards privacy will inform the structure of surveillance, so caring about it is paramount.
Discussing the national facial recognition database
11 October 2017 | https://www.efa.org.au/2017/10/11/abcbrisbane-facial-recognition/

On Monday, 9th October, our Executive Officer Jon Lawrence spoke with Steve Austin on ABC Radio Brisbane about the decision by COAG to implement a comprehensive national facial recognition database, including photos from all Australian driver's licences, and the threats this 'Capability' poses to civil liberties.

Listen to the interview here, from 39:14.
Let's face it, we'll be no safer with a national facial recognition database
7 October 2017 | https://www.efa.org.au/2017/10/07/no-safer-with-facial-recognition/

Image: Mirko Tobias Schäfer (CC-BY).

A commitment to share the biometric data of most Australians – including your driving licence photo – agreed at Thursday’s Council of Australian Governments (COAG) meeting will result in a further erosion of our privacy.

That sharing is not necessary. It will be costly. But will it save us from terrorism? Not at all, although it will give people a false sense of comfort.

Importantly, it will allow politicians and officials to show that they are doing something, in a climate where a hunt for headlines demands the appearance of action.

This article is by Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra and a Board member of the Australian Privacy Foundation. It was originally published on The Conversation and is republished here under a Creative Commons CC-BY-SA licence. See the original article.

Your biometric data

Biometric data used in fingerprint and facial recognition systems is indelible. It can be used in authoritative identity registers, featured on identity documents such as passports and driver licences.

It can be automatically matched with data collected from devices located in airports, bus and train stations, retail malls, court buildings, prisons, sports facilities and anywhere else we could park a networked camera.

Australia’s state and territory governments have built large biometric databases through registration of people as drivers – every licence has a photograph of the driver. The national government has built large databases through registration for passports, aviation/maritime security and other purposes.

Irrespective of your consent to uses beyond those for which the picture was taken, the governments now have a biometric image of most Australians, and the ability to search the images.

COAG announced that the governments will share that data in the name of security.

Sharing data with who?

Details of the sharing are very unclear. This means we cannot evaluate indications that images will be captured in both public and private places. For example, in retail malls and libraries or art galleries – soft targets for terrorism – rather than in streets and secure buildings such as Parliament House.

Prime Minister Malcolm Turnbull has responded to initial criticism by clarifying that matching will not involve “live” CCTV.

But the history of Australian surveillance law has been a matter of creep, with step-by-step expansion of what might initially have been an innocuous development. When will law enforcement agencies persuade their ministers to include live public or private CCTV for image matching?

We cannot tell which officials will be accessing the data and what safeguards will be established to prevent misuse. Uncertainty about safeguards is worrying, given the history of police and other officials inappropriately accessing law enforcement databases on behalf of criminals or to stalk a former partner.

The sharing occurs in a nation where Commonwealth, state and territory privacy law is inconsistent. That law is weakly enforced, in part because watchdogs such as the Office of the Australian Information Commissioner (OAIC) are under-resourced, threatened with closure or have clashed with senior politicians.

Australia does not have a coherent enforceable right to privacy. Instead we have a threadbare patchwork of law (including an absence of a discrete privacy statute in several jurisdictions).

The new arrangement has been foreshadowed by governments over several years. It can be expected to creep, further eroding privacy and treating all citizens as suspects.

Software and hardware providers will be delighted: there’s money to be made by catering to our fears. But we should be asking some hard questions about the regime and questioning COAG’s statement.

Let’s avoid a privacy car crash

Will sharing and expansion of the biometric network – a camera near every important building, many cameras on every important road – save us from terrorism? The answer is a resounding no. Biometrics, for example, seems unlikely to have saved people from the Las Vegas shooter.

Will sharing be cost effective? None of the governments have a great track record with major systems integration. The landscape is littered with projects that went over budget, didn’t arrive on time or were quietly killed off.

Think the recent Census and Centrelink problems, and the billion dollar bust up known as the Personally Controlled Electronic Health Record.

It won’t be improved by a new national ID card to fix the Medicare problem.

Is the sharing proportionate? One answer is to look at experience in India, where the Supreme Court has comprehensively damned that nation’s ambitious Aadhaar biometric scheme that was meant to solve security, welfare and other problems.

The Court – consistent with decisions in other parts of the world – condemned the scheme as grossly disproportionate: a disregard of privacy and of the dignity of every citizen.

Is sharing likely to result in harms, particularly as the biometric network grows and grows? The answer again is yes. One harm, disregarded by our opportunistic politicians, is that all Australians and all visitors will be regarded as suspects.

Much of the data for matching will be muddy – some street cameras, for example, are fine resting places for pigeons – and of little value.

As with the mandatory metadata retention scheme, the more data (and the more cameras) we have, the bigger the trove of indelible information for hackers. Do not expect the OAIC or weak state privacy watchdogs (which in some jurisdictions do not exist) to come to the rescue.

As a society we should demand meaningful consultation about official schemes that erode our rights. We should engage in critical thinking rather than relying on headlines that reflect political opportunism and institutional self-interest.

The incoherent explanation and clarifications should concern everyone, irrespective of whether they have chosen to be on Facebook – and even if they have nothing to hide and will never be mistaken for someone else.