The Appreciation for Data Mining.

By Emma Chu

‘Data mining has come to prominence over the last two decades as a discipline in its own right…’[1]

This post follows on from my initial post, which questioned whether we can truly consider machinery, technology and similar devices, particularly in the future, as ‘emotionless’.

To put it briefly, my subjective answer would be ‘no’. As Coenen observes, the ‘ever increasing ability of institutions to collect electronic data, facilitated by advanced computer processing, means that the desire to “mine” data is likely to expand’ [1]. Technology is taking on an increasingly human quality as it learns from everyone who has an online presence. With a well-established set of techniques already in place, processed data is usually presented in the form of links, information, knowledge and infographics, all of which, when mistreated or placed in the wrong hands, can misinform, deceive and undermine the public’s ability to trust its sources.

A good example is the origins of Wikipedia, whose initial intention was genuinely to spread and share information; however, once people could edit that information freely, this anonymous free will created a platform for misinformation within a public archive. This supports the future scenario in our Project 2, in which people act deceptively simply because they have the ability to.

With the two modes of being anonymous online versus physically identified, generations are becoming more and more receptive to the idea of multiple personas: one that acts according to society’s rules and regulations, and one that actively attempts to defy them. This raises further concern about the future of our social behaviours and what is considered both acceptable and normal, another issue investigated throughout Project 2 and Project 3 of Interdisciplinary Lab A.

It is with an almost sad resignation that I note the future of data mining will hang in the balance of the attitudes of its users. The environment and upbringing of those users can decisively shape the behaviour of their online personas, as the case of Wikipedia demonstrates. Data mining should be appreciated as part of our ever-present quest for knowledge, rather than used as a means to inhibit society.

[1] Coenen, F. 2011, “Data mining: past, present and future”, The Knowledge Engineering Review, vol. 26, no. 1, pp. 25-29.

 

Games of control and misinformation

by Laura Wallace

I would hope that most of us, being well accustomed to personal, wi-fi-connected devices, understand that we are effectively being watched all the time. We carry devices that allow data to be collected on a large, pervasive scale, and online data collected from users is a major player in how information about us is monitored and gathered.

Through massive amounts of “big data”, housed in facilities that take up entire buildings, eerily intimate portraits can be built from our online footprints, so it’s no wonder we’re uncomfortable.

Theorists such as the photographer Curtis Wallen argue that the concept itself is enough to warrant an aggressive battle against forms of online tracking [3]. Wallen claims this is difficult, as we haven’t figured out “where the line is” in the first place, but do we really understand the practicalities of our concerns well enough to do so?

A side effect of the efficiency of data retrieval is that data now acts as a form of currency for commercial “third parties”. Information including location, search habits, likes and key lines from online conversations is commonly sold by social media entities (such as Facebook) under the guise of an “enhanced user experience”. That can be a subjective term, since some of us pay no mind to personalised ads regardless of how invasive they seem in theory, but there is no debate that this is a completely one-way street. Using the now commonplace practice of dataveillance, the term writer Rita Raley uses for the tracking and harvesting of user information, sites like Facebook sell that information on to companies, who in turn use it to increase profit [1].
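As a rough illustration of how that one-way transaction can work, the sketch below (in Python) shows how a handful of tracked signals might be aggregated into an interest profile and matched against advertiser categories. Every event type, topic and category name here is invented for the example; this is not a description of how Facebook or any real advertising platform actually operates.

```python
# A minimal, hypothetical sketch of turning tracked signals into an ad profile.
# All data and category names are invented for illustration only.
from collections import Counter

# Hypothetical signals harvested from one user's activity.
tracked_events = [
    {"type": "like", "topic": "hiking"},
    {"type": "search", "topic": "tents"},
    {"type": "page_view", "topic": "hiking"},
    {"type": "search", "topic": "headphones"},
]

# Crude interest profile: count how often each topic appears.
interest_profile = Counter(event["topic"] for event in tracked_events)

# Hypothetical advertiser categories and the topics they want to reach.
ad_categories = {
    "outdoor_gear": {"hiking", "tents", "camping"},
    "electronics": {"headphones", "laptops"},
}

# Rank categories by how strongly they overlap with the user's profile.
scores = {
    category: sum(interest_profile[topic] for topic in topics)
    for category, topics in ad_categories.items()
}
for category, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(category, score)  # outdoor_gear 3, electronics 1
```

Even in this toy version, the user contributes all of the value and receives none of it back, which is the asymmetry at the heart of the complaint.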

It is not hard for most people to work out the source of the unwittingly self-tailored ads that appear in their news feeds every few scrolls, even if they are unfamiliar with how deep web surveillance runs.

The real issue is the lack of autonomy users have, not just because of the system in which this information is transacted but because they are simply not informed enough, partly because transparency never seems to be the brightest policy.

Although Facebook outlines the ways in which it will gather data in its terms and conditions the moment a user signs up, this information is buried in sections that hardly anyone would have considered reading in the first place, and it is impossible that the company writing these terms does not know that. One of the most common lies we tell these days comes not in the form of words but of a click: “I have read and agree to the terms and conditions”.

This is a policy that needs to change if we expect dataveillance to be part of our everyday functioning in a way that is not even more morally skewed than it already is. Companies have little to no incentive to change their methods, or even to make them obvious, because how would that aid their sole purpose of profit? At present, using the information we do have to enlighten users seems the most realistic (if often more difficult in practice) place to start.

If you feel like you may have been unclear on these issues before, I hope you continue searching, aware that knowledge is, at the very least, a start towards power.

[Screenshot: part of Facebook’s current terms on advertising and privacy.]

 

Sources:

[1] Raley, R. 2013, “Dataveillance and Countervailance”, in Gitelman, L. (ed.) “Raw Data” Is an Oxymoron, Ch. 7, The MIT Press, Cambridge, Massachusetts.

[2] Facebook, Statement of Rights and Responsibilities, US, 20th August 2014. <https://www.facebook.com/legal/terms>

[3] Future Tense, Social media, data and property rights, 2014, radio interview, ABC, Sydney, 20th August 2014. <http://www.abc.net.au/radionational/programs/futuretense/social-media-data-and-property-rights/5312518#transcript>

Is Our Data Really Safe? It Depends.

by Mitchell Anson

To begin with, I’d like to discuss the concept of Big Data. It is a concept that has emerged alongside our culture’s increasing reliance on technology and the growing place of technology in our lives. The idea is that the way we interact with our technology, and with the technology around us, provides information about how we think and behave, which is then compared, combined and contrasted with the same information from the people around us, forming a “cloud of knowledge”. It is the lifeblood of corporate giants such as Google and Facebook, and information about people also feeds organisations that both depend on people and exist to serve them, such as public transport networks and governments.

Over the course of the last ten years, social networking has grown from non-existence into a global culture in its own right, with its own sets of rules and conventions. Some conventions, such as hashtagging, can be used by companies to mine data about the people who are talking about a certain topic, or to find out what the general public thinks of them. Companies also use social media to advertise subtly, for example by compiling interesting articles purely to draw users to their website and increase traffic. These traffic figures are, of course, used to measure the value of the company’s website; in this sense, even figures made up of raw data can affect the profitability of a global giant.
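To make the hashtag example concrete, here is a minimal sketch (in Python) of the kind of counting that sits underneath this sort of mining. The posts and tags are invented; a real system would layer timestamps, locations and sentiment analysis on top of a simple tally like this.

```python
# A minimal, hypothetical sketch of hashtag mining: count which tags people
# attach to their posts to gauge what the public is saying about a brand.
# The posts below are invented examples.
from collections import Counter
import re

posts = [
    "Loving the new phone! #TechBrand #upgrade",
    "Battery died after two hours... #TechBrand #fail",
    "Is #TechBrand worth the price? #upgrade",
]

# Pull every hashtag out of every post (lower-cased for a case-insensitive tally).
hashtags = [
    tag.lower()
    for post in posts
    for tag in re.findall(r"#(\w+)", post)
]

# Count how often each tag appears, most common first.
for tag, count in Counter(hashtags).most_common():
    print(tag, count)  # techbrand 3, upgrade 2, fail 1
```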

Both of the readings from week 3 addressed the topic of Big Data, and both were written from the standpoint that it is impossible to know how our information can remain safe when the boundaries of how it can be accessed are ever expanding. This article was interesting because it seems to have been written by a “non-technophile”. “Technophile” is a relatively new term for someone who openly embraces technological advances without resistance and, furthermore, sees them as the vision for the future; its antonym would be “technophobe”.

My point here is that someone who knows exactly what is allowed to happen to their information online shouldn’t see a problem with sites having access to it, provided they supplied that information manually and signed a contract stating what the website host (e.g. Facebook) can use it for. Of course, the internet doesn’t know anything about you that you, or someone else, hasn’t told it. So, provided you don’t type your home address into Twitter, and nobody else you know has, nobody without authorisation will know it. The government isn’t allowed to disclose such information.

Overall, the fear is that Big Data is an uncontrollable force of knowledge, when instead it should be seen as a positive source of knowledge. Here we have the ability to identify trends in human society and, using this information, make the world a better place. The important overarching fact is this: raw data cannot bring down a society any more than it can build one from the ground up. The force of change comes from what those who have the ability to make change choose to do with it.

 

Raley, R. 2013, “Dataveillance and Countervailance”, in Gitelman, L. (ed.) “Raw Data” Is an Oxymoron, The MIT Press, Cambridge, Massachusetts, pp. 131-9.

Do we really have a choice?

by Matilda Clarke

This blog post. These words that I have willingly placed online for the world to view. Are they still mine, or do they now belong to the public? To people whom I have never met, in positions whose occupants I would never have expected to spare the time to read them. Of what interest am I, a 21-year-old student studying product design at a university in Australia, to the security agencies? That’s the question on the lips of the majority of the country’s population, who are just going about their daily lives and who believe they have nothing to hide.

There are many arguments surrounding the subject of online data, digital footprints and digital profiles. The most visible, particularly in the media, is the debate over whether the collection of metadata is for public protection or public snooping. In reality, the topic extends across the vast expanse of the digital world, attracting the attention of a myriad of different people.

So why does the thought of “metadata collection” give people the creeps? A large part of this discomfort comes from the knowledge that some unknown person, somewhere far away, could be looking at and analysing what an individual is doing online while they sit at home with a cup of tea, and from the unintended mystery that surrounds the practice. As Curtis Wallen states in his interview for Future Tense [2], there is a “lack of transparency” around metadata collection that appears to fill the public with unease. This is exacerbated by government officials tying themselves in knots trying to explain the nature of the bill they are working so hard to sell to the public. This lack of understanding, and the inability of the country’s leaders to explain it, has left the general public with little confidence in the matter at hand. Another contributor could be our inherent desire for, and perceived right to, privacy, and a feeling of a lack of control. This builds on the idea that those with an intimate knowledge of technology and how it works are taking advantage of the ignorant, leading to what Kevin Robins and Frank Webster call “cybernetic capitalism” [1].
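A small, invented example may help show why “just metadata” can still feel intrusive. Even without reading any content, records of who contacted whom, when and for how long reveal patterns in a person’s life; the call records below (sketched in Python) are entirely hypothetical.

```python
# A hypothetical illustration of what metadata alone can reveal: no message
# content is read, yet patterns in a person's life still emerge.
from collections import Counter

# Invented call records: (contact, hour_of_day, duration_minutes).
call_records = [
    ("clinic",   9, 12),
    ("clinic",  10,  8),
    ("lawyer",  18, 25),
    ("partner", 22,  5),
    ("partner", 23,  4),
]

# Who does this person contact most often, and at what times of day?
contact_counts = Counter(contact for contact, _, _ in call_records)
late_night_calls = [record for record in call_records if record[1] >= 22]

print(contact_counts.most_common())  # [('clinic', 2), ('partner', 2), ('lawyer', 1)]
print(late_night_calls)              # the 10 pm and 11 pm calls to 'partner'
```

It is precisely this kind of inference, made without anyone ever “reading” a message, that makes the lack of transparency so unsettling.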

This sense of a lack of control over the data an individual creates has given some people the urge to retaliate, and that urge has emerged in different forms. Michael Fraser, professor of law at UTS, has called for online property rights to give individuals back some control over their digital footprint and their digital profile. Pushed further, this could extend to individuals sharing in the economy founded on this digital platform, in which corporations are making huge profits from the analysis of the data collected. Would this apparent turning of the tables make the public more inclined to take a positive view of the situation? Could knowing that it was a reciprocal agreement dispel the sense of being taken advantage of, or create a sense of control from knowing they have the power to decide what they contribute, rather than the feeling of being lost in a sea of sheep led by shepherds who don’t really know what they’re doing?

Where there are rules, especially displeasing ones, there are also those keen to break them. This is seen in ‘“Raw Data” Is an Oxymoron’, where Rita Raley discusses ideas of “dataveillance” and “countervailance” [1]. She outlines a number of ways that have arisen to minimise an individual’s data footprint, to become invisible on the internet, or to disappear altogether, but as Curtis Wallen states, it is “impossible to disappear” [2].

So, do we really have a choice? Do we really have any power to influence the situation? Or should we all just relax; after all, it’s for our own protection…

References:

[1] Raley, R. 2013, “Dataveillance and Countervailance”, in Gitelman, L. (ed.) “Raw Data” Is an Oxymoron, The MIT Press, Cambridge, Massachusetts, pp. 131-9.

[2] Future Tense, Social media, data and property rights, 2014, radio interview, ABC, Sydney, 16 March 2014.

Image:

Reuters 2009, Financial Review, viewed 23 August 2014, <http://www.afr.com/p/technology/metadata_collection_critics_soften_mR1Pla5rfZXwaPVp6yQVUJ>.