Social Machines and the Internet: What Went Wrong?

 



David Casacuberta

Department of Philosophy
Universitat Autònoma de Barcelona

 
 

 

In his famous essay of 1996, A Declaration of the Independence of Cyberspace, John Perry Barlow imagined a virtual utopia in which real-world events and problems, including governments, had become irrelevant. Today, driven by nostalgia, some people wonder what happened to this internet republic in which they had a new cyberspace identity and nobody knew their real-life persona. Personally, my concerns lie elsewhere. The craze for the virtual world that obsessed intellectuals and activists at the end of the 20th century was based primarily on a misunderstanding of the role of digital technologies, which, after all, is to facilitate cooperation and make an impact on the real world, rather than to create an alternative to it in the form of escapist fantasies.

I am much more interested in the ideal posed three years after A Declaration by Tim Berners-Lee in his book Weaving the Web, and his idea of social machines. For Berners-Lee, social machines should be “processes in which the people do the creative work and the machine does the administration [...]. The stage is set for an evolutionary growth of new social engines. The ability to create new forms of social process would be given to the world at large, and development would be rapid.” (Berners-Lee; Fischetti, 1999, p. 172–175). Berners-Lee imagined these social machines as systems that could variously be used to improve the transmission of information, facilitate administrative processes or enhance people’s creative capacity, in line with the concept of Web 2.0, which Tim O’Reilly would coin a few years later, in 2005.

There is little doubt that social machines have arrived. Digital social networks such as Facebook and Twitter are consistent with the description provided in Weaving the Web, as is the Google search engine, which administers its users’ daily work by linking to the pages they consider relevant. Unfortunately, today’s social machines are a far cry from Berners-Lee’s ideal. Their technology has brought us the concept of the filter bubble, in which we only receive information based on our preferences, thereby resulting in a highly biased picture of what our fellow citizens think and care about. Filter bubbles seriously compound the spread of false and partial information, while increasingly eroding basic human rights such as freedom of expression and privacy and allowing the aspirations of cooperative human beings to be drowned out by armies of trolls interested only in picking fights.

What went wrong along the way? How did this dream of hyperconnectivity turn into a nightmare? Most of today’s critics of the web agree that the main culprit is the attention economy. If the service is free, we are the product. Thus, digital social networks are forced to deploy addictive mechanisms to ensure we don’t abandon their sites. We continue to be bombarded with ads, content producers rely increasingly on “clickbait”, and countless buyers keep popping up to obtain more and more personal data about us.

Although this economy, based on providing free services in exchange for targeted ads, is clearly the main culprit behind the decline, it is not the only one. Another key factor is the shift from symbolic artificial intelligence to machine learning.

In a later article, Jim Hendler and Tim Berners-Lee (2009) described the mechanisms that would allow a new generation of social machines to emerge. This aspect of the debate seems particularly relevant to me: “Extending the current web infrastructure to provide mechanisms that make the social properties of information sharing explicit and that guarantee that the uses of this information conform to the relevant social policy expectations of the users.” (Hendler; Berners-Lee, 2009, p. 2).

Berners-Lee’s social machine proposal was associated with what was then known as the Semantic Web, or Web 3.0, i.e. an internet that relies on a myriad of XML tags to facilitate the location and classification of information. Thus, these processes are based on freely accessible information, provided directly by the creator, and algorithms based on open-source software developed by humans. In other words, information provided voluntarily and explicitly, processed by open, transparent algorithms.
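The kind of processing the Semantic Web envisaged can be sketched in a few lines: documents carry explicit, author-provided metadata tags, and an open, inspectable algorithm decides what to return. This is only an illustration under invented assumptions; the tag names and catalogue here are hypothetical, not taken from any real Semantic Web vocabulary.

```python
# Minimal sketch of Semantic Web-style processing: authors supply explicit
# metadata tags, and a fully transparent algorithm filters on them.
# All tag and element names here are invented for the example.
import xml.etree.ElementTree as ET

CATALOGUE = """
<catalogue>
  <document>
    <title>Weaving the Web</title>
    <subject>social machines</subject>
    <access>public</access>
  </document>
  <document>
    <title>Patient record 0042</title>
    <subject>health</subject>
    <access>restricted</access>
  </document>
</catalogue>
"""

def find_by_subject(xml_text, subject):
    """Return the titles of publicly accessible documents on a given subject."""
    root = ET.fromstring(xml_text)
    return [
        doc.findtext("title")
        for doc in root.findall("document")
        if doc.findtext("subject") == subject
        and doc.findtext("access") == "public"
    ]

print(find_by_subject(CATALOGUE, "social machines"))  # ['Weaving the Web']
```

The point of the sketch is not the code itself but the property it exhibits: the information was provided voluntarily and explicitly, and anyone can read the algorithm and see exactly why a document was or was not returned.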

This way of presenting and processing information is crucial for developing context-based mechanisms for allowing or denying access to information. Who can access my health data, and when? When can we say that users of a digital social network have crossed the line and that their posts are inappropriate? Today’s social machines have plenty of mechanisms to determine this, but they process information that many users are unaware of providing, and they do so with “black box” algorithms whose workings are highly complex, if not impossible, to establish.
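A context-based access rule of the kind the questions above call for can be stated as an explicit, human-readable policy rather than a black box. The following is a hypothetical sketch; the policy table, roles and time windows are all invented for the example.

```python
# Hypothetical sketch of context-based access control: an explicit policy
# table states who may see which category of data and when, so any denial
# can be audited. All names and rules here are invented for the example.
from datetime import time

POLICY = {
    # data category -> (roles allowed, (earliest, latest) access hours)
    "health": ({"doctor", "patient"}, (time(8, 0), time(20, 0))),
    "public_post": ({"anyone"}, (time(0, 0), time(23, 59))),
}

def may_access(category, role, at):
    """Grant access only if both the role and the time match the policy."""
    roles, (start, end) = POLICY[category]
    return ("anyone" in roles or role in roles) and start <= at <= end

print(may_access("health", "doctor", time(9, 30)))      # True
print(may_access("health", "advertiser", time(9, 30)))  # False
```

Because every rule sits in plain view in the policy table, a user (or an auditor) can establish exactly why a request was granted or denied — precisely the transparency that today’s opaque systems lack.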

How can we remedy this situation? The first step is simply to stress that the situation is not beyond hope. When John Perry Barlow published his manifesto, he described an internet whose nature shielded it from government or corporate controls. Much of the criticism levelled at the way in which we interact with the web these days is underpinned by the idea that the current infrastructure cannot be subverted and that the only alternative to today’s disgraceful social machines is to completely abandon the digital sphere and return to paper and pen.

However, as Lawrence Lessig (1999) argues in his book Code and Other Laws of Cyberspace, the internet, or cyberspace, has no nature. It only has code, and code can be modified. In the book, Lessig compares internet communication protocols with a constitution. To me this is a key perspective. A constitution is something that we citizens give ourselves and that can be amended when there is a significant change in our political, cultural or social environment.

It’s time to change our digital constitution and give ourselves the social machines we really deserve. A first key step is to demand transparency in algorithmic processes: to ensure that the companies that process our data undergo serious audits conducted by impartial actors; that they guarantee the integrity and veracity of the data they collect; and, above all, that they process the data ethically and fairly, respecting our basic rights.

To achieve this, we need to rethink our relationship with algorithms. The current boom in machine learning based on neural networks is not the result of some deep insight into how the mind works. Essentially, researchers have discovered that learning algorithms based on the identification of statistical regularities can be applied to many more fields than originally thought, but that does not mean we have made significant theoretical progress towards genuinely intelligent artificial objects. The fact that these algorithms work is no excuse for using them if their lack of transparency makes it impossible to identify the kinds of decision they make and whether those decisions are truly fair. Software development needs to be guided by logics other than efficiency alone.

At the same time, a shift is required in our attitude regarding what data should be public and why. Academics and the media need to make huge efforts to communicate the new big data reality to the public. It is paradoxical that many users continue to use Twitter as if it were a private channel for chatting with friends, when in fact it is a public space where their views are visible to anyone with a computer and an internet connection. It is also absurd that many people scream blue murder when they find out their medical data is to be used in population studies (with personal information anonymized to ensure that they cannot be identified), but have no problem providing companies such as Google and Facebook with intimate personal details in exchange for a free email address or the amusement of reading what their friends have posted on social media.

 

Bibliography

Barlow, John Perry (1996). A Declaration of the Independence of Cyberspace. <https://www.eff.org/cyberspace-independence>. [Consulted: 10/09/2018].

Berners-Lee, Tim; Fischetti, Mark (1999). Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. New York: Harper Collins.

Hendler, Jim; Berners-Lee, Tim (2009). “From the Semantic Web to social machines: A research challenge for AI on the World Wide Web”. Artificial Intelligence, vol. 174, no. 2 (February 2010), p. 156–161.

Lessig, Lawrence (1999). Code and Other Laws of Cyberspace. New York: Basic Books.

O’Reilly, Tim (2005). What is Web 2.0?
<http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html>. [Consulted: 10/09/2018].

 
