
Revenue Flows

It's impossible for the average user's computer to keep up with an exponential increase in websites, so more and more of the burden of weblinkage must inevitably fall on the commercial search engines--they have the bandwidth and the horsepower to do continuous and extensive searching and mapping. Unfortunately, there is at present no motivation for the commercial searchers to give discovered linkage information directly to users. But that may change after the web grows for another few years and the current search strategy collapses entirely. When users pose queries and routinely get a million responses, they are unlikely to find search engines of much use.

The web is likely to keep growing both in size and in volatility until no amount of snapshot web analysis is sufficient. The average user will be further hampered by a severe bandwidth crunch relative to commercial sites that can afford high-bandwidth lines. Further, as the web grows ever more commercial, and as its population continues to explode, ever more users and user-created webagents will be roaming the net, adding to the bandwidth crunch. Eventually, users are likely to be asked to pay not just for phone line rental, but also for server time (perhaps measured as the megabytes accessed times the cycles needed to process and deliver those megabytes).
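
Purely as an illustration of that parenthetical pricing idea (the rate constant and the numbers below are invented for this sketch, not anything proposed in the text), such metering reduces to a simple product:

    # Hypothetical metering sketch: the charge grows with both the megabytes
    # accessed and the cycles needed to process and deliver those megabytes.
    # The rate constant and the example numbers are invented for illustration.

    def server_time_charge(megabytes_accessed, cycles_used, rate=1e-9):
        """Charge proportional to megabytes accessed times cycles used."""
        return megabytes_accessed * cycles_used * rate

    # Example: a 5 MB request that takes 10 million cycles to serve.
    print(server_time_charge(5, 10_000_000))   # -> 0.05 (currency units)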

Presumably, some servers will remain free to all users, supported by advertising dollars, but at peak times such free sites will always have many more requests than they can service, particularly if they are popular. (The web is self-limiting in that regard; the more popular a site is, the less likely it is that anyone will ever actually see it.) Finally, as the web grows, the webmaps will grow with it, making the creation of even partial neighborhood maps impractical for normal users. In sum, by the turn of the century all significant webmapping will likely have to be done by commercial sites that do it for profit.

The next question is how money will flow from the users of a mapping service to the mappers. If a mapper's revenue comes solely from advertising (as is true for the search engine companies today), there is no profit in allowing automated searchers to hit its site. For one thing, the download volume goes way up, and for another, automated searchers don't read ads. Eventually, the search engine companies will probably disallow automated searchers. Since the web will eventually grow too large for any user-created search engine to do an adequate job, mapping companies will likely arise that charge users subscription fees (and charge major sites advertising dollars to find some connection--however tenuous--between their sites and popular sites).

Users of a particular mapping service should be able to download simple default neighborhood maps for common search categories (say: car dealers, food, books, entertainment) and prune that linkage map down to the places they find most interesting. Once they have a core set of neighborhoods, they can instruct their webagent to search the web for sites like those currently in their neighborhood map. Users can thus incrementally build up a linkage map of everything on the web that they might be interested in.
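
To make that prune-then-grow cycle concrete, here is a rough sketch only; the text specifies no format for these maps, and the category names, site names, and similarity test below are all invented:

    # Illustrative sketch only: a "neighborhood map" as a dictionary mapping
    # a search category to the set of sites the user wants to keep nearby.
    # The categories, site names, and similarity test are invented examples.

    default_map = {
        "books":         {"site-a.example", "site-b.example"},
        "entertainment": {"site-c.example", "site-d.example"},
    }

    def prune(neighborhood_map, keep):
        """Keep only the categories and sites the user finds interesting."""
        return {category: sites & keep[category]
                for category, sites in neighborhood_map.items()
                if category in keep}

    def expand(neighborhood_map, discovered, is_similar):
        """Let a webagent add newly discovered sites that resemble sites
        already in the map, incrementally growing the user's linkage map."""
        for sites in neighborhood_map.values():
            for candidate in discovered:
                if any(is_similar(candidate, known) for known in sites):
                    sites.add(candidate)
        return neighborhood_map

    # The user trims the default map, then the agent grows it over time.
    core = prune(default_map, keep={"books": {"site-a.example"}})
    core = expand(core, ["site-f.example"], is_similar=lambda a, b: True)
    print(sorted(core["books"]))   # -> ['site-a.example', 'site-f.example']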

