What is Intelligence? There are several definitions. Here are a couple of them.
Intelligence is the ability to comprehend; to understand and profit from experience.
Intelligence is effectively perceiving, interpreting and responding to the environment.
I would suggest a slight modification to the first definition. Change "experience" to "information and experience".
The definition page (a Google search result) also provides various related links on intelligence. The one that caught my attention was fluid intelligence.
The first step is gathering intelligence. There are several ways to do this. Fuld & Company defines an Index for Internet Intelligence. In addition, they list a variety of sources to obtain the information needed.
The next step is to tag, categorize and map the information. You can do this using a variety of tools. You can use Google Bookmarks or del.icio.us to keep track of web resources and tag them, and a variety of mind-mapping tools to map the relationships between different items of information.
The final step is to correlate the information and derive the intelligence you need. The best tool for this is the human mind.
Once you have identified your sources of information, mapped the relationships, and obtained the intelligence you need, you can use simple tools to keep track of the information. The speed with which information changes, and the type of changes, may provide additional clues. This in turn may yield additional intelligence.
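The tag-and-map step above can be sketched in a few lines of code. This is a minimal illustration, not any particular product's API: it files each resource under every tag assigned to it, so related items can be pulled together later. The resource names and tags below are made up.

```python
from collections import defaultdict

class TagStore:
    """A toy tag index: map each tag to the set of resources filed under it."""

    def __init__(self):
        self._by_tag = defaultdict(set)

    def add(self, resource, tags):
        """Record a resource under one or more tags."""
        for tag in tags:
            self._by_tag[tag].add(resource)

    def related(self, tag):
        """Return every resource filed under a tag, in sorted order."""
        return sorted(self._by_tag[tag])

store = TagStore()
store.add("fuld.com/internet-intelligence", ["intelligence", "sources"])
store.add("del.icio.us/popular", ["tagging", "sources"])

print(store.related("sources"))
```

Clicking a tag in del.icio.us does something similar on the server side; the correlation step, as noted above, is still best done by the human mind.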
I can't believe that Python has 0 bugs in this static analysis. Take a look at this table. Coverity, along with Stanford, is focused on identifying bugs in some of the popular open source projects.
The pieces are falling in place one by one. An online database, a portal, a page creator, a blog hosting service. What will be next? A wiki? Backed by a set of APIs the elements required to build online applications, websites and portals are taking shape at Google.
I had some difficulty creating a page with an image. Hopefully it will be fixed soon. Here is my page (I spent a couple of minutes on it).
Search Engine Watch has an article on the product here. TechCrunch has a mini-review.
I love this name. Whoever came up with it is a genius. Here is a brief description of the Semantic Web Brainlet:
This brainlet contains information about papers, events, conferences and ideas surrounding the Semantic Web initiative. If something is missing, maybe your totally interesting paper that was nevertheless rejected, just add it and share it on the P2P network. Don't forget to make sure you have properly connected it with the relevant topics, people, publications and whatever else. And don't be surprised if you receive reviews and comparisons with other papers from people who were interested specifically in that topic. The identity of whoever provided the metadata (you) will always be verifiable thanks to the built-in digital signature infrastructure.
Got this link from Danny’s post on PlanetRDF. Brainlets seem to be pluggable components for DBin, a project to create peer-to-peer discussion groups with a difference: instead of interaction through messages, DBin allows you to share documents and annotate them. It uses RDF (I wonder whether it uses Annotea) for annotations. When we built a prototype of Hyperscope, we used Annotea with the Annozilla client. That was pretty cool.
DBin integrates digital identity, so you will know who is annotating. I plan to check out DBin; I think we can use it on some of the projects we are working on.
I talked about An Amazing Search Engine, Contextual Search and More Contextual Search in my previous blogs. In all these cases, the search engine takes on the burden of deriving the context from the hints you provide. How about the searchers doing some of the work themselves?
What if I can do searches like this:
person: Tim B
book: Mozart’s brain
I am not fussy about the syntax. I can provide the hint in any form the search engine wants. But it would be nice to tell the search engine the broad space I am looking for.
Let us say that we identify the type of resource we are searching for. How do we come up with a uniform standard for resource names? There are several options.
1. We can use popular tags (people are already familiar with this thanks to del.icio.us, technorati, flickr etc.)
2. We can use terms from a dictionary
3. We can use common taxonomies or ontologies if they exist
The search engines can use common synonyms and equivalent terms to make this even better or easier. Let me know if you know of any product that does something similar.
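The typed-hint idea above can be sketched with a tiny query parser. This is a hypothetical illustration, not any real engine's syntax: it splits a query like "person: Tim B" into a resource type and the search terms, folding synonymous hint words onto one canonical type.

```python
import re

# Match an optional "type:" prefix followed by the search terms.
HINT = re.compile(r"^\s*(\w+)\s*:\s*(.+?)\s*$")

# Fold equivalent hint terms onto one canonical resource type.
# This table is an assumption for illustration only.
SYNONYMS = {"person": "person", "people": "person",
            "book": "book", "title": "book"}

def parse_query(query):
    """Split a query into (resource_type, terms); default to 'any'."""
    m = HINT.match(query)
    if m and m.group(1).lower() in SYNONYMS:
        return SYNONYMS[m.group(1).lower()], m.group(2)
    return "any", query.strip()

print(parse_query("person: Tim B"))   # ('person', 'Tim B')
print(parse_query("Mozart's brain"))  # ('any', "Mozart's brain")
```

A query with an unrecognized prefix falls through to an ordinary untyped search, so the hint syntax stays optional.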
Today I came across an announcement from DocSoft.
“Element” unleashes the power of structured data within a company’s repository. With Element as a plug-n-play addition to a company’s network, users will be able to search for stored data across the network “smarter,” thus more efficiently. Element indexes XML-originated documents according to each tag or “element,” which provides what the company calls “context searching.” Element can even search metadata embedded into virtually any file format using Adobe’s eXtensible Metadata Platform (XMP).
Search based on XML data will definitely provide better results. It is nice to know that search engines are showing up to take advantage of the embedded metadata in XML documents.
Element allows the appliance to index 26 different file formats. These formats include XML, HTML, PDF, SVG, CGM, Microsoft Office formats, some common audio and video files, and any file in which XMP can be used to embed metadata.
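The core of "context searching" over XML can be shown in a short sketch. This is not how Element works internally (that is not public), just an illustration of the idea: index each piece of text by the element it appears in, so a query can be scoped to a tag. The sample document and index layout are made up.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def index_elements(xml_text, index=None):
    """Index the text content of an XML document by element (tag) name."""
    index = index if index is not None else defaultdict(list)
    for elem in ET.fromstring(xml_text).iter():
        if elem.text and elem.text.strip():
            index[elem.tag].append(elem.text.strip())
    return index

doc = """<paper>
  <title>Context Search over XML</title>
  <author>A. Writer</author>
  <abstract>Scoped search using element names.</abstract>
</paper>"""

idx = index_elements(doc)
print(idx["author"])  # ['A. Writer']
```

A search scoped to `author` then never has to wade through hits in titles or abstracts, which is exactly the advantage over plain full-text indexing.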
This is good news. As we get better search engines to leverage metadata, hopefully more content with embedded metadata will show up.
Other context search engines I have come across:
Swoogle – a semantic web search engine
Python Search – a Python-programming-specific search engine (searches the internet for results relevant to Python programming)
Krugle – A search engine for source code and other highly relevant technical information
In his article on Semantic Screenscraping, Jon Udell talks about how to integrate information from the web using XPath as a tool for scraping web pages and integrating them with open APIs like the Google Map API.
Regular expressions once dominated my screenscraping code. Now XPath expressions do. Screenscraping is becoming more declarative, more query-like.
Jon outlines the developments that make obtaining data from the web easier.
1. HTML is readily convertible to XHTML
2. The resulting XHTML is semantically richer
3. XPath and XQuery are maturing to the point where they are very useful in extracting information
This topic is very interesting to me. About 5 years ago, we attempted a product called Information Integrator. The goal was to interactively step through web pages, mark the portions you are interested in, convert them to XML and integrate them into a single page. So your home page would be a set of transclusions. After a few attempts working with Tidy and a mapper UI, we gave it up in favor of our current InfoMinder product. I think a variation of that idea still has some merit.
Of late, my use of del.icio.us is on the increase. I kept wondering why I use it more now than I did before. Here are seven reasons (among many others).
1. As a bookmark service
del.icio.us is integrated well into the Firefox browser. A single click lets me add a bookmark. It is very easy to use.
2. As a tool to jog my memory
I often forget why I bookmarked a resource. When I bookmark with del.icio.us, however, I can mark a paragraph on the page and it automatically becomes the description. Or I can type my own description to jog my memory.
3. Discovering other resources
Tagging is a quick way to remember and categorize your bookmarks. I use some of the popular tags available in del.icio.us when possible. Otherwise, I make up my own. This way when I click on a tag, del.icio.us not only lists my bookmarks but others as well.
4. Finding popular tech topics
I use the service as a way of finding popular topics related to technology. I also try other services like memeorandum, technorati (for blogs), digg etc. But del.icio.us is one of my favorites.
5. Trend Spotting
The popular bookmarks on del.icio.us and the popularity of certain tags give me a broad sense of tech trends.
6. Reading List
When I read a blog or an article on the web, I find interesting links for further reading. I simply add the links to del.icio.us and get back to the articles when I have more time.
7. Blog material
Since I started blogging more regularly, I am constantly looking for topics in my areas of interest. Del.icio.us is one of my sources. Others include alerts from InfoMinder (our own product), some of my favorite blogs (I use Bloglines), material from mailing lists, etc.
You probably have a lot of creative ways of using the service. I would be interested to know how you use this or a similar service.