Do You Want To Build a Search Engine?

My reaction to the Common Crawl was one big WOW!

Common Crawl produces and maintains a repository of web crawl data that is openly accessible to everyone. The crawl currently covers 5 billion pages, and the repository includes valuable metadata. The crawl data is stored on Amazon’s S3 service, so it can be bulk downloaded or accessed directly for map-reduce processing in EC2. This makes wholesale extraction, transformation, and analysis of web data cheap and easy. Small startups, or even individuals, can now access high-quality crawl data that was previously available only to large search engine corporations.
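To give a flavor of what map-reduce over the crawl looks like, here is a minimal sketch of a map step that counts outbound link targets in a crawled page. Everything here is illustrative: the function names are my own, the record parsing is reduced to plain HTML, and the reduce side (summing the counts) is omitted.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collect the hostnames of outbound links found in an HTML page."""

    def __init__(self):
        super().__init__()
        self.hosts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    host = urlparse(value).netloc
                    if host:  # skip relative links with no hostname
                        self.hosts.append(host)


def map_page(html):
    """Map step: emit a (hostname, 1) pair for every outbound link."""
    parser = LinkExtractor()
    parser.feed(html)
    return [(host, 1) for host in parser.hosts]


if __name__ == "__main__":
    page = '<a href="http://example.com/a">x</a><a href="/local">y</a>'
    for host, count in map_page(page):
        print(f"{host}\t{count}")
```

Run over the full crawl, the reduce step would sum the counts per hostname, giving a crude link-popularity ranking of the web.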

With this data, plus Solr and Hadoop, you have the building blocks for your own custom search engine: Hadoop to process the crawl at scale, and Solr to index and serve the results.
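The last step of that pipeline, feeding processed pages into Solr, can be sketched as follows. This is an assumption-laden example: the field names (`id`, `title`, `content`) and the core name in the URL are placeholders that would have to match your own Solr schema.

```python
import json

# A hypothetical document extracted from a crawl record; the field
# names are illustrative and must match the fields your Solr core defines.
doc = {
    "id": "http://example.com/page",
    "title": "Example page",
    "content": "Body text extracted from the crawl record.",
}

# Solr's JSON update handler accepts a list of documents.
payload = json.dumps([doc])

# The payload would then be POSTed to the core's update endpoint, e.g.:
#   curl -X POST -H 'Content-Type: application/json' \
#        'http://localhost:8983/solr/crawl/update?commit=true' \
#        -d @docs.json
print(payload)
```

Once indexed, the same core answers queries through Solr's standard search handler, which is the "engine" part of the search engine.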
