Most web search engines use robots (also known as spiders or web crawlers) to gather data for their databases. Robots have other uses too; in fact, you can download one and let it do your searching for you.
Robots can cause damage by quickly and repeatedly accessing web files, overloading the server. They can also become confused when accessing dynamically generated web pages, such as databases, product catalogues and even web logs. The Robot Exclusion Protocol allows sites to offer guidance to robots, indicating what is "acceptable behaviour" on that site. The site administrator provides hints (in a file called robots.txt) that show where robots will have problems gathering data. Robots cannot be forced to follow certain behaviour patterns, but it's good manners to program your bot to follow these protocols.
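As a minimal sketch of how a well-mannered bot might honour these hints, Python's standard library includes `urllib.robotparser`, which parses robots.txt rules and answers "may I fetch this URL?" queries. The site name, paths and user-agent string below are purely illustrative:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for an illustrative site: the
# administrator asks all robots to stay out of the dynamically
# generated product catalogue.
robots_txt_lines = [
    "User-agent: *",
    "Disallow: /catalogue/",
    "Disallow: /cgi-bin/",
]

parser = RobotFileParser()
parser.parse(robots_txt_lines)

# A polite bot checks before each request.
print(parser.can_fetch("MyBot", "http://example.com/catalogue/item1"))  # False
print(parser.can_fetch("MyBot", "http://example.com/index.html"))       # True
```

In practice a crawler would fetch the live robots.txt with `parser.set_url(...)` and `parser.read()` before crawling, but the check itself is the same `can_fetch` call.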
In the future, the Internet will become pervasive and ubiquitous, with even the smallest devices connected together. Computers are already embedded into many objects, and as processing power increases and size decreases, these objects will be more intelligent and more connected.
Wearable computers and intelligent clothing will talk to the devices you come into contact with, and even set the correct programme on the washing machine! Rather than using your PC or laptop to access the Internet, you'll use "Internet Appliances" that each serve a single purpose: a dedicated emailer (built into the collar of your jacket, perhaps?), an online shopping appliance in the kitchen for the groceries, a fully interactive TV with video on demand, a car that avoids traffic jams and emails the garage when it needs a repair or service.
Of course, this has a downside: we will always be connected, always locatable and never "offline". Will our privacy suffer?
Marshall McLuhan was the first to examine the idea of technology as an extension of our senses, in Understanding Media: The Extensions of Man ([McLuhan1964]).