"We will never have LCD screens - they will need too many connectors"
"Vector graphics are the future; raster graphics need too much memory"
"Full audio on computers will need too much bandwidth"
"Digital photography will never replace film"
"Moore's Law hasn't got much longer to go" (1977, 1985, 1995, 2005)
We all know this one. But often people don't understand its true effects.
Take a piece of paper, divide it in two, and write this year's date in one half:
Now divide the other half in two vertically, and write the date 18 months ago in one half:
Now divide the remaining space in half, and write the date 18 months earlier (or in other words 3 years ago) in one half:
Repeat until your pen is thicker than the space you have to divide in two:
This demonstrates that your current computer is more powerful than all the computers you have previously owned put together (and remember that the original Macintosh (1984) had only a tiny amount of computing power available.)
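The paper-folding demonstration can be sketched in a few lines of code. Taking the 1984 Macintosh as an arbitrary unit of power (an illustrative assumption, not a measurement), and doubling every 18 months, the sum of all previous generations always comes out just below the current one:

```python
# Moore's Law sketch: power doubles every 18 months.
# "1 unit" is an arbitrary baseline machine (illustrative only).
power = 1.0           # power of the current generation's machine
total_previous = 0.0  # combined power of all earlier machines
for generation in range(16):  # 16 x 18 months = 24 years
    total_previous += power
    power *= 2
# 'power' is today's machine; 'total_previous' is everything before it
print(power, total_previous, power > total_previous)
```

The geometric series makes the result inevitable: each machine is worth one more than the sum of all its predecessors.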
In the 1980s the most powerful machines were Crays.
And people used to say "One day we will all have a Cray on our desks!"
Sure: in fact current workstations are about 120 Craysworth.
Even my previous mobile phone was 35 Craysworth...
Just as a side issue, LEDs are semiconductor devices too, and also follow Moore's Law: lumens are increasing exponentially, and prices are dropping.
That's why we have those tiny, dirt cheap, bike lights now.
One day, soonish, all lighting will be using LEDs... (This is a good example of a disruptive technology)
And have you noticed how LCD screens have almost entirely replaced tube TVs?
(This is also a good example of disruptive technology)
LCD screens also contain transistors, so you can predict that screens are going to get higher-density and cheaper.
What is less well-known is that bandwidth is also growing exponentially at constant cost, but the doubling time is 1 year!
(Actually 10½ months, according to a recent statement by an executive of one of the larger suppliers.)
Put another way, in 7 years we could have 1 Gigabit connections to the home.
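The arithmetic behind that claim is simple compounding. Assuming a present-day home connection of around 8 Mbit/s (an illustrative starting point, not a figure from the talk) and a doubling time of one year:

```python
# Bandwidth sketch: doubling every year at constant cost.
start_mbit = 8                 # assumed current home bandwidth (illustrative)
years = 7
future_mbit = start_mbit * 2 ** years  # 7 doublings
print(future_mbit)             # just over 1000 Mbit/s: about 1 Gigabit
```

Seven doublings is a factor of 128, which carries a modest broadband line into Gigabit territory.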
Metcalfe's law proposes that the value of a network is proportional to the square of the number of nodes:
v(n) ∝ n²
Simple maths shows that if you split a network into two, it halves the total value:
(n/2)² + (n/2)² = n²/4 + n²/4 = n²/2
This is why it is good that there is only one email network, and bad that there are so many Instant Messenger networks. It is why it is good that there is only one World Wide Web.
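The halving effect is easy to check numerically. Taking the constant of proportionality as 1 (a simplifying assumption), splitting a 1000-node network into two 500-node networks loses half the value:

```python
# Metcalfe's law sketch: value of an n-node network taken as n squared.
def value(n: int) -> int:
    return n * n

n = 1000
whole = value(n)                       # one network of n nodes
split = value(n // 2) + value(n // 2)  # the same nodes in two halves
print(whole, split)                    # the split network is worth half
```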
Proposed in an article in Emerce as the result of an interview with me:
Every 12½ years computers become powerful enough to allow the use of a new generation of programming languages that give an order of magnitude more productivity to the programmer.
(In other words, what used to take you a week, would now take a half day).
The term Web 2.0 was invented by a book publisher (O'Reilly) as a term to build a series of conferences around.
It conceptualises the idea of Web sites that gain value by their users adding data to them, such as Wikipedia, Facebook, Flickr, ...
But the concept existed before the term: Ebay was already Web 2.0 in the era of Web 1.0.
By putting a lot of work into a website, you commit yourself to it, and lock yourself into their data formats too.
This is similar to data lock-in when you use a proprietary program. You commit yourself and lock yourself in. Moving comes at great cost. Try installing a new server, or different Wiki software.
This was one of the justifications for creating XML: it reduces the possibility of data lock-in, and having a standard representation for data helps using the same data in different ways too.
As an example, if you commit to a particular photo-sharing website, you upload thousands of photos, tagging extensively, and then a better site comes along. What do you do?
How about if the site you have chosen closes down (as has happened with some Web 2.0 music sites): all your work is lost.
How do you decide which social networking site to join? Do you join several and repeat the work? I am currently being bombarded by emails from networking sites (LinkedIn, Dopplr, Plaxo, Facebook, MySpace, Hyves, Spock...) telling me that someone wants to be my friend, or business contact.
How about genealogy sites? You choose one and spend months creating your family tree. The site then spots similar people in your tree on other trees, and suggests you get together. But suppose a really important tree is on another site?
These are all examples of Metcalfe's law.
Web 2.0 partitions the Web into a number of topical sub-Webs, and locks you in, thereby reducing the value of the network as a whole.
What should really happen is that you have a personal Website, with your photos, your family tree, your business details, and aggregators then turn this into added value by finding the links across the whole web.
Firstly and principally, machine readable Web pages.
When an aggregator comes to your Website, it should be able to see that this page represents (a part of) your family tree, and so on.
One of the technologies that can make this happen has the catchy name of RDFa
You could describe it as a CSS for meaning: it allows you to add a small layer of markup to your page that adds machine-readable semantics.
It allows you to say "This is a date", "This is a place", "This is a person", and uniquely identify them on your web page.
Comparable to microformats, but done right.
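As a sketch of what this looks like in practice (the vocabulary and the personal details here are made-up examples, not taken from the talk), RDFa adds a thin layer of attributes to ordinary HTML:

```html
<!-- Illustrative RDFa markup: vocabulary and values are examples only -->
<p vocab="http://schema.org/" typeof="Person">
  <span property="name">Jan Jansen</span> was born on
  <time property="birthDate" datetime="1950-06-15">15 June 1950</time>
  in <span property="birthPlace">Amsterdam</span>.
</p>
```

A human reader sees an ordinary sentence; an aggregator sees a person, a date, and a place, each explicitly typed.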
If a page has machine-understandable semantics, you can do lots more with it.
So rather than putting all your data on someone else's website, and the fact that it is there implying a certain semantics, you should put your own data on your own website with explicit semantics.
Then you get the true web effect, with its full Metcalfe value.
It doesn't really matter, because on the whole Websites are interoperable.
I am particularly charmed by this sort of device:
It is a wireless router containing network storage and a music server for use inside your house, while offering FTP and a Web server to the outside, plus a BitTorrent server. So you can switch off all your machines and still serve Web pages to the outside world.
Web 2.0 is damaging to the Web by dividing it into topical sub-webs.
With machine-readable pages, we don't need those separate websites, but can reclaim our data, and still get the value.