From the introduction of *Webbots, Spiders and Screen Scrapers*:
> The basic problem with browsers is that they’re manual tools. Your browser only downloads and renders websites: You still need to decide if the web page is relevant, if you’ve already seen the information it contains, or if you need to follow a link to another web page. What’s worse, your browser can’t think for itself. It can’t notify you when something important happens online, and it certainly won’t anticipate your actions, automatically complete forms, make purchases, or download files for you. To do these things, you’ll need the automation and intelligence only available with a webbot, or a web robot. Once you start thinking about the inherent limitations of browsers, you start to see the endless opportunities that wait around the corner for webbot developers.
We realized this, and built services that track pages and aggregate and filter information. But this post isn’t about those either.
Can browsers get smarter? Here are a few thoughts:
- What if we took the most frequently used functions and made them into verbs? Some imaginary statements:
  - Repeat my daily searches and show me new stuff (programming speak: take the three searches I saved, run them in the browser, get the top 100 results, compare them with yesterday’s results, and show me only the new ones).
  - Go to Amazon and get me the new deals on gadgets (programming speak: go to the Amazon site, pull the daily deals, filter them by my interests, or against my wishlist, and let me know if there are any worth a look).
  - Find recommended books on subject X (go to Amazon, search for books on subject X, find similar books, rank them by popularity and review sentiment, and show me a list).
  - Find me the best deal on the most popular Android devices mentioned in my favorite review sites, across Amazon, eBay, and the other deal sites on my list.
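The first of these verbs is essentially a diff over search results. A minimal sketch of that idea in Python, where `run_search` is a hypothetical stand-in for whatever would actually drive the browser or query a search engine (here it just returns canned URLs so the logic is runnable):

```python
def run_search(query):
    # Placeholder: a real webbot would fetch the top 100 results for
    # `query` from a search engine; canned data keeps the sketch runnable.
    canned = {
        "python webbots": ["example.com/a", "example.com/b", "example.com/c"],
    }
    return canned.get(query, [])

def new_results(saved_queries, yesterdays_results):
    """For each saved query, return only the results not seen yesterday."""
    fresh = {}
    for query in saved_queries:
        todays = run_search(query)
        seen = set(yesterdays_results.get(query, []))
        fresh[query] = [url for url in todays if url not in seen]
    return fresh

yesterday = {"python webbots": ["example.com/a"]}
print(new_results(["python webbots"], yesterday))
# {'python webbots': ['example.com/b', 'example.com/c']}
```

The interesting part isn’t the set difference, of course; it’s that the browser could run this loop for you on a schedule, so “repeat my daily searches” becomes a single verb rather than a morning ritual.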