Defining cyberspace is really a matter of context. For example, John Perry Barlow, a founder of the Electronic Frontier Foundation, famously defined it as “... where you are when you are talking on the telephone.” Others have defined it more narrowly, as the interactive space used for all computer-mediated communication, or even the space that the web resides in. William Gibson, who coined the term in his science-fiction novel Neuromancer (1984), suggested it was a “consensual hallucination”. Here, we'll take the broadest of views.
A common factor in almost all definitions of cyberspace is the sense of place that they convey - cyberspace is most definitely a place where you chat, explore, research and play.
How can we discover the geography of the Internet? How does this geography alter our perception of the real and virtual worlds?
Since the Internet is really just a set of interconnected nodes, it is readily examined using the formal techniques of topology and graph theory.
A common technique is to use internet traffic diagnostics, such as the ping and traceroute commands, to explore the structure of the nodes and links that form the internet. Web crawlers (as used by search engines) can also explore and discover the underlying network.
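As a rough illustration of the traceroute approach, the sketch below builds a crude node-and-link graph from the hops reported along a few routes. It is a minimal sketch, assuming a Unix-like system with the traceroute command available; the target hostnames are purely illustrative.

```python
# A minimal sketch, assuming a Unix-like system where the `traceroute`
# command is available; target hostnames are illustrative only.
import re
import subprocess

def trace_hops(host):
    """Run traceroute and return the IP address seen at each hop."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    # Each hop line begins with a hop number followed by an IP address;
    # hops that only print '*' (timeouts) are skipped by this pattern.
    return re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, re.MULTILINE)

# Consecutive hops along each route form the links of a crude graph.
edges = set()
for target in ["example.com", "example.org"]:
    hops = trace_hops(target)
    edges.update(zip(hops, hops[1:]))

print(len(edges), "links discovered")
```

Running many such traces from many vantage points, and merging the edge sets, is essentially how node-and-link pictures of the internet are assembled.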
Further reading: Opte
One of the most attractive methods of examining the structure of the Internet comes from the field of traditional cartography. Maps of the Internet can be made in many ways, such as overlaying a physical map of the world with nodes and links, or taking a central node and drawing network connections radiating out from it.
Further reading: CyberGeography
The implementation of the internet on a global scale has been a fundamental part of the globalisation of industry and commerce. A similar revolution occurred with the first intercontinental telegraph links, which suddenly made it possible to send information almost instantaneously in a world where transatlantic travel still took weeks.
What we are beginning to realise is that the world (and in particular, human society) is a form of network in itself, and one in which the degree of connectedness is increasing rapidly with the advent of the internet.
For further reading, examine Milgram's “Small World Problem” (1967), which popularised the idea of “six degrees of separation” and was more recently tested by Duncan J. Watts in Six Degrees. The BBC has also featured this topic; see Whitehouse (1999a).
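To make the “small world” idea concrete, here is a toy sketch (the sizes and counts are arbitrary choices, not figures from Milgram or Watts): a ring of 1,000 nodes, each knowing only its four nearest neighbours, takes hundreds of steps to cross, yet adding a few dozen random long-range links sharply reduces the average separation.

```python
# A toy sketch of the small-world effect; all parameters are arbitrary.
import random
from collections import deque

N = 1000
# Ring lattice: each node starts out linked to its four nearest neighbours.
graph = {i: {(i - 1) % N, (i + 1) % N, (i - 2) % N, (i + 2) % N}
         for i in range(N)}
# A few random long-range "acquaintance" links act as shortcuts.
for _ in range(50):
    a, b = random.sample(range(N), 2)
    graph[a].add(b)
    graph[b].add(a)

def separation(start, goal):
    """Breadth-first search: minimum number of links between two nodes."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))

pairs = [separation(random.randrange(N), random.randrange(N))
         for _ in range(100)]
print("average separation:", sum(pairs) / len(pairs))
```

Deleting the shortcut loop and re-running shows the contrast: the same ring without shortcuts averages well over a hundred links between random pairs.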
Since cyberspace is not bound by physical laws and real-world constraints, we can be more imaginative in our use of spatial metaphors for online worlds. To some extent this has already been seen in computer games, where multidimensional (i.e. more than three dimensions) worlds, hyperspace jumps and wormholes become possible. Kant postulated that we are bound to a three-dimensional world only by experience, and that there are no theoretical problems with the existence of spaces of four, five, or more dimensions (Browne Garnett Jr., 1965).
Cyberspace also allows us to experience something close to a perfect world - virtual reality can model the world exactly, and create a place which is mathematically perfect, an idea which looks back to the philosophy of Plato and his ideal forms (Heim 1993, p.8).
There has long been discussion on the nature of knowledge, and on the differences between data, information, knowledge and wisdom. As we move into the information age, these questions will become more important. The reliability and usefulness of the information around us, and the techniques we will need to master to judge that reliability and to filter the signal from the noise, are all part of our philosophy of information.
As an example, let's look at the hyperlink. Hypertext links are both simple and complex. On the surface, they merely provide a reference to another online resource, which a browser can use to access that resource. But links can be used in many more ways than this simplicity would suggest. They can point to small or large amounts of information (a glossary entry, or a whole encyclopedia). They can reference local or remote resources. They can even be used to add irony (the e-zine Suck was the exemplar of this), or to add meaning to an otherwise simple phrase. In effect, the link has become “a rhetorical device loaded with meaning.” (Shields 2000).
But the way the link appears does not necessarily indicate any of this - you have to click to discover the role and meaning of the link.
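To illustrate how little the surface of a link reveals, the sketch below uses Python's standard html.parser to separate a link's visible text from its target; the markup and URL are invented for illustration.

```python
# A small sketch: the visible text of a link and its target are
# independent pieces of data. The example anchor below is invented.
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collect (visible text, target) pairs for each anchor tag."""
    def __init__(self):
        super().__init__()
        self.links, self._href, self._text = [], None, []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href, self._text = dict(attrs).get("href"), []
    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text), self._href))
            self._href = None

p = LinkParser()
p.feed('<a href="https://example.org/whole-encyclopedia/">a simple phrase</a>')
print(p.links)  # the visible phrase gives no hint of what lies behind it
```

The same anchor text could just as easily point to a glossary entry, an entire encyclopedia, or an ironic aside; nothing in the phrase itself tells the reader which.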
Dreyfus argues that this presents a paradox. The hyperlink was created “to use the speed and processing power of computers to relate a vast amount of information without needing to understand it or impose any authoritarian or even generally accepted structure on it” (2001, p.9), rather than for any purpose related to meaning or understanding.
Information in cyberspace tends to be interactive, dynamic and temporary. Most websites are restructured regularly, and their content often changes on a daily basis, which means that whilst the information is fresh and up to date, it can be difficult to return to and rediscover later.
Despite this, web information can also be surprisingly persistent. Controversial documents that have been taken down from one website often reappear in multiple mirrored locations. The chances of a resource surviving often depend primarily on the number of people interested in it, a factor largely outside the control of the owner or webmaster.
There have been attempts at ensuring that the core material of the web does not disappear unnoticed. For example, the Wayback Machine attempts to archive a significant proportion of web information for future generations. But the most successful and most widely used archive is the cache that some search engines store of the pages they have catalogued. Coupled with the power of the search engine, cached copies of websites can be startlingly persistent, again showing that information can live well beyond its owner's control.
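As a sketch of how such an archive can be consulted, the snippet below queries the Internet Archive's public availability endpoint for the closest archived copy of a page, assuming the service responds in its documented JSON shape; the example address is illustrative.

```python
# A minimal sketch, assuming the Internet Archive's public availability
# API responds in its documented JSON shape.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url):
    """Return the address of the closest archived copy of `url`, or None."""
    query = urllib.parse.urlencode({"url": url})
    with urllib.request.urlopen(
            "https://archive.org/wayback/available?" + query) as response:
        data = json.load(response)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot and snapshot.get("available") else None

# e.g. look for an archived copy of the long-defunct e-zine mentioned above
print(closest_snapshot("suck.com"))
```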
Further reading: Seely Brown & Duguid (2000)
Another observation on the use of knowledge arises from the power of Google and other search engines to find almost everything we need at the click of a button. There is no longer a need to remember facts; we can just google them. We are only limited by the skill with which we use the search engine, and by how specific and trustworthy we need the answer to be.
It's clear that developing this type of “informational intelligence” is a much better long-term strategy than memorising facts (especially if your memory retention is as poor as mine).
Further reading: Morville (2005)
One of the casualties, perhaps, of this early stage in the information age is the canonical work: the authoritative, quotable, stable text that print publication inevitably tends to produce. How do we develop systems of classification, authority and structure that allow us to reference online materials? And how do we cope, when referencing them, with the transience we discussed earlier?
Again, search engines can help here, with Google keeping cached copies of many pages that have long since disappeared, but there is little systematic archiving of useful material outside of the organisations that are generating it.
One attempt at imposing structure and metadata onto the web as it exists today involves adding extra, meaningful, but invisible information to each web resource, so that information-processing applications such as Google can more effectively analyse and catalogue web information. The Semantic Web, organised by the World Wide Web Consortium, is the group of technologies and individuals attempting to do this.
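As a rough sketch of the idea, the snippet below uses the third-party rdflib library (an assumption, not something named in the original) to attach machine-readable Dublin Core statements to a hypothetical page, then prints them in Turtle, one of the Semantic Web's standard notations.

```python
# A minimal sketch using the third-party rdflib library; the resource,
# title and author below are hypothetical.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, FOAF, RDF

g = Graph()
page = URIRef("http://example.org/essay")  # hypothetical web resource

# Statements invisible to a human reader but explicit to a machine.
g.add((page, RDF.type, FOAF.Document))
g.add((page, DC.title, Literal("Philosophy of Cyberspace")))
g.add((page, DC.creator, Literal("A. N. Author")))  # hypothetical author

# In rdflib 6+, serialize() returns the Turtle text as a string.
print(g.serialize(format="turtle"))
```

The point is not the particular syntax but the separation of concerns: the statements say nothing about how the page looks, only what it is, who made it, and how it relates to other resources.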
Being a Good Cybercitizen
Freedom of Speech
Internet as Anarchy
Anonymity and Trust
Human Rights and Information Access
Key reading: Graham (1999).