User Ratings of Ontologies: Who Will Rate the Raters?

AAAI Spring Symposium: Knowledge Collection from Volunteer Contributors (2005)

Abstract
The number of ontologies and knowledge bases covering different domains and available on the World-Wide Web is steadily growing. As more ontologies become available, it is becoming harder, not easier, for users to find the ontologies they need. How do they evaluate whether a particular ontology is appropriate for their task? How do they choose among many ontologies for the same domain? We argue that allowing users on the Web to annotate and review ontologies is an important step in facilitating ontology evaluation and reuse for others. However, opening the system to everyone on the Web poses a problem of trust: users must be able to identify the reviews and annotations that are useful for them. We discuss the kinds of metadata that we can collect from users and authors of ontologies in the form of annotations and reviews, explore the use of an Open Rating System for evaluating ontologies and knowledge sources, and present a brief overview of a Web-based browser for Protege ontologies that enables users to annotate information in ontologies.

Copyright © 2005, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.

Ontologies On The Web Scale

The number of ontologies and knowledge bases covering different domains and available on the World-Wide Web is steadily growing. Ontologies constitute the backbone of the Semantic Web, and their number is steadily growing. The Swoogle crawler (http://www.swoogle.org), for example, indexes more than 4,000 ontologies at the time of this writing. It is commonly agreed that one of the reasons ontologies became popular is that they hold the promise of facilitating interoperation between software resources by virtue of being shared, agreed-upon descriptions of domains used by different agents. Such interoperation is, for example, a key requirement for the Semantic Web to succeed. Suppose we are developing a Semantic Web service that uses an ontology. If we choose to reuse an existing ontology to support our service rather than create a new one, we get interoperation with others using the same ontology "for free." In addition, we save the time and money required to develop an ontology, and we get the benefit of using an ontology that has already been tested by others.

However, as more ontologies become available, it becomes harder, rather than easier, to find an ontology to reuse for a particular application or task. Even today (and the situation will only get worse), it is often easier to develop a new ontology from scratch than to reuse someone else's ontology that is already available. First, ontologies and other knowledge sources vary widely in quality, coverage, level of detail, and so on. Second, in general, there are very few, if any, objective and computable measures of the quality of an ontology. Deciding whether an ontology is appropriate for a particular use is a subjective task. We can often agree on what a bad ontology is, but most people would find it hard to agree on a universally "good" ontology: an ontology that is good for one task may not be appropriate for another. Third, while it would be helpful to know how a particular ontology was used and which applications found it appropriate, this information is almost never available.

We believe that having a large number of reviews and annotations generated both by ontology authors and users is the crucial component in enabling reuse of ontologies and other knowledge sources that have to be evaluated subjectively. The idea is not unlike rating consumer products or books: there is no perfect coffeemaker or perfect book for everyone, there is no uniform "best" measure in either category, and therefore we rely on reviews by others to help us decide what to buy.

As with existing review and rating systems, such as Epinions (http://www.epinions.com) and Amazon (http://www.amazon.com), the scale of the Web helps by providing a huge number of potential reviewers, but it also poses new challenges: it is inevitable that some significant portion of such reviews and annotations will be of low quality. Furthermore, evaluation and review of ontologies pose an additional challenge: if our application needs a simple hierarchy of classes in a particular domain, an excellent review from a trusted user who prizes the quality and number of formal axioms over coverage or simplicity would probably still count as an "unhelpful" review for us. Therefore, when deciding whom to trust, we must take into account which dimensions of evaluation we are interested in and whom we trust to assess each particular dimension.

In this paper, we discuss the kinds of metadata that we can collect from users and authors of ontologies, we explore the use of an Open Rating System for evaluating ontologies and knowledge sources, and we present a brief overview of a Web-based browser for Protege ontologies that enables users to annotate information in ontologies. The prototype itself is very preliminary, and the main contributions of the paper are the identification of the types of metainformation that can be collected from users, and the concept of using an Open Rating System and a Web of Trust for ontology evaluation.

Metadata for Describing Ontologies and
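The idea of combining ratings with dimension-specific trust can be sketched as a simple weighted aggregation. The sketch below is illustrative only, not the paper's implementation: the dimension names, trust weights, and function names are all hypothetical, chosen to mirror the coverage-versus-axioms example above.

```python
# Hypothetical sketch: aggregate per-dimension ontology ratings, weighting
# each review by how much *this* user trusts that reviewer on that dimension.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    scores: dict  # dimension name -> rating, e.g. on a 1-5 scale

def aggregate(reviews, trust):
    """trust maps (reviewer, dimension) -> weight in [0, 1].
    Reviewers with no trust assignment contribute nothing."""
    totals, weights = {}, {}
    for r in reviews:
        for dim, score in r.scores.items():
            w = trust.get((r.reviewer, dim), 0.0)
            totals[dim] = totals.get(dim, 0.0) + w * score
            weights[dim] = weights.get(dim, 0.0) + w
    return {d: totals[d] / weights[d] for d in totals if weights[d] > 0}

reviews = [
    Review("alice", {"coverage": 5, "axioms": 2}),
    Review("bob",   {"coverage": 2, "axioms": 5}),
]
# A user building a simple class hierarchy trusts alice on coverage
# and bob on formal axiomatization:
trust = {("alice", "coverage"): 1.0, ("bob", "coverage"): 0.2,
         ("alice", "axioms"): 0.1,  ("bob", "axioms"): 1.0}
print(aggregate(reviews, trust))
```

The point of the sketch is that the same set of reviews yields different aggregate scores for different users, because each user supplies their own per-dimension trust assignments rather than relying on a single global average.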
Keywords
ontologies, raters