How do we ascertain truth on the web? That’s a question being pursued by researchers at Google, who have articulated a flow of data that generates discrete statements of fact from countless web sources, relates those statements to previously assembled stores of knowledge, and fuses these mathematically to identify which statements may be more “truthful” than others. They describe this assembly of scored statements as a “Knowledge Vault.” As OCLC works with data from library, archive, and museum sources, we grapple with the same question and with similarly varied data. Though the number of statements is smaller and there may be fewer conflicts, we benefit from taking a closer look at the Google Knowledge Vault idea to see how it applies to a vault of library knowledge. In this webinar (view webinar slides on SlideShare [link]; download webinar slides (.pptx) [link]), Jeff Mixter and Bruce Washburn provided an update on how we’re evaluating this idea, including:
- extracting simple statements about entities and their relationships from bibliographic and authority records (see the first sketch after this list),
- establishing a relevance score for similar statements provided by different sources (see the second sketch after this list),
- viewing the Library Knowledge Vault data using a prototype application,
- and testing how statements contributed by users of that prototype can find their way back to the Vault.
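
To make the first of those steps concrete, here is a minimal sketch in Python of how simple statements might be pulled out of a single bibliographic record. The record shape, field names, and identifiers are hypothetical stand-ins for the MARC bibliographic and authority data the actual extraction pipeline reads.

```python
from typing import Iterator, NamedTuple


class Statement(NamedTuple):
    subject: str    # the entity the statement is about (e.g., a work)
    predicate: str  # the relationship being asserted
    obj: str        # the related entity or literal value
    source: str     # which record asserted the statement


def extract_statements(record: dict) -> Iterator[Statement]:
    """Yield one simple statement per entity relationship found in the record."""
    work = record["work_id"]
    src = record["record_id"]
    for author in record.get("authors", []):
        yield Statement(work, "createdBy", author, src)
    for subject_heading in record.get("subjects", []):
        yield Statement(work, "about", subject_heading, src)
    if "publication_year" in record:
        yield Statement(work, "publishedIn", str(record["publication_year"]), src)


# A toy record, loosely modeled on a bibliographic description.
record = {
    "record_id": "rec-001",
    "work_id": "work:moby-dick",
    "authors": ["person:herman-melville"],
    "subjects": ["topic:whaling", "topic:sea-stories"],
    "publication_year": 1851,
}

for stmt in extract_statements(record):
    print(stmt)
```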
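
And for the scoring step, a sketch of one way similar statements provided by different sources could be fused into a single score. Treating each source as having an assumed reliability and combining repeated assertions with a noisy-OR is an illustrative choice here, not the model described in the Google paper or used in the Library Knowledge Vault prototype.

```python
from collections import defaultdict

# Assumed reliability per source type: the probability that a statement
# asserted by that source is true. These numbers are purely illustrative.
SOURCE_RELIABILITY = {
    "authority-file": 0.95,
    "bib-record": 0.80,
    "user-contribution": 0.60,
}


def fuse(assertions):
    """Fuse (statement, source) pairs into one score per distinct statement.

    Returns a dict mapping each statement to a score in [0, 1].
    """
    by_statement = defaultdict(list)
    for statement, source in assertions:
        by_statement[statement].append(SOURCE_RELIABILITY.get(source, 0.5))

    scores = {}
    for statement, reliabilities in by_statement.items():
        # Noisy-OR: the statement is false only if every source asserting it is wrong.
        p_false = 1.0
        for r in reliabilities:
            p_false *= (1.0 - r)
        scores[statement] = 1.0 - p_false
    return scores


assertions = [
    (("work:moby-dick", "createdBy", "person:herman-melville"), "authority-file"),
    (("work:moby-dick", "createdBy", "person:herman-melville"), "bib-record"),
    (("work:moby-dick", "publishedIn", "1850"), "user-contribution"),
]

for statement, score in fuse(assertions).items():
    print(f"{score:.2f}  {statement}")
```

A statement repeated by an authority file and a bibliographic record scores higher than one asserted only by a single user contribution, which is the intuition behind fusing the vault's scored statements.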