
LegalRuleML Core Specification Version 1.0

OASIS Open: “…Legal texts, e.g., legislation, regulations, contracts, and case law, are the source of norms, guidelines, and rules. As text, it is difficult to exchange specific information content contained in the texts between parties, to search for and extract structured content from the texts, or to automatically process it further. Legislators, legal practitioners, and business managers are, therefore, impeded from comparing, contrasting, integrating, and reusing the contents of the texts, since any such activities are manual. In the current web-enabled context, where innovative eGovernment and eCommerce applications are increasingly deployed, it has become essential to provide machine-readable forms (generally in XML) of the contents of the text. In providing such forms, the general norms and specific procedural rules in legislative documents, the conditions of services and business rules in contracts, and the information about arguments and interpretation of norms in the judgments for case-law would be amenable to such applications. The ability to have proper and expressive conceptual, machine-readable models of the various and multifaceted aspects of norms, guidelines, and general legal knowledge is a key factor for the development and deployment of successful applications.

The LegalRuleML TC, set up inside of OASIS (www.oasis-open.org), aims to produce a rule interchange language for the legal domain. Using the representation tools, the contents of the legal texts can be structured in a machine-readable format, which then feeds further processes of interchange, comparison, evaluation, and reasoning.

The Artificial Intelligence (AI) and Law communities have converged in the last twenty years on modeling legal norms and guidelines using logic and other formal techniques [6]. Existing methods begin with the analysis of a legal text by a Legal Knowledge Engineer, who scopes the analysis, extracts the norms and guidelines, applies models and a theory within a logical framework, and finally represents the norms using a particular formalism. In the last decade, several Legal XML standards have been proposed to represent legal texts [30] with XML-based rules (RuleML, SWRL, RIF, LKIF, etc.). At the same time, the Semantic Web, in particular Legal Ontology research combined with semantic norm extraction based on Natural Language Processing (NLP), has given a strong impetus to the modeling of legal concepts. Based on this, the work of the LegalRuleML Technical Committee will focus on three specific needs:

1. To close the gap between legal texts, which are expressed in natural language, and semantic norm modeling. This is necessary in order to provide integrated and self-contained representations of legal resources that can be made available on the Web as XML representations and so foster Semantic Web technologies such as: NLP, Information Retrieval and Extraction (IR/IE), graphical representation, as well as Web ontologies and rules.
2. To provide an expressive XML standard for modeling normative rules that satisfies legal domain requirements. This will enable use of a legal reasoning layer on top of the ontological layer, aligning with the W3C envisioned Semantic Web stack.
3. To apply the Linked Open Data approach to model raw data in the law (acts, contracts, court files, judgments, etc.) and to extend it to legal concepts and rules along with their functionality and usage. Without rules that apply to legal concepts, legal concepts constitute just a taxonomy…”
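As a rough, non-normative illustration of what the quoted text means by structuring the contents of legal texts in a machine-readable format, the Python sketch below serializes a single norm ("whoever processes personal data must obtain consent") as a simple if/then rule in XML. The element and attribute names are invented for this example and do not reproduce the actual LegalRuleML schema.

# Illustrative sketch only: a toy if/then rule for the norm
# "whoever processes personal data must obtain consent".
# Element names are hypothetical and do NOT follow the normative
# LegalRuleML schema; they only show the general shape of a
# machine-readable rule extracted from a legal text.
import xml.etree.ElementTree as ET

rule = ET.Element("Rule", attrib={"key": "rule1"})

condition = ET.SubElement(rule, "if")
ET.SubElement(condition, "Atom",
              attrib={"predicate": "processesPersonalData", "var": "X"})

conclusion = ET.SubElement(rule, "then")
obligation = ET.SubElement(conclusion, "Obligation")  # deontic wrapper: the conclusion is an obligation
ET.SubElement(obligation, "Atom",
              attrib={"predicate": "obtainsConsent", "var": "X"})

# The serialized rule can then be exchanged, compared, or fed to a rule engine.
print(ET.tostring(rule, encoding="unicode"))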
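The third point, on Linked Open Data, can likewise be sketched in Python with the rdflib library: a legislative act and one rule derived from it are published as RDF triples and linked to each other. The namespace, URIs, and property names below are hypothetical placeholders, not identifiers defined by LegalRuleML.

# Illustrative Linked Open Data sketch: a legislative act and one of its
# rules expressed as RDF triples. All URIs and property names are
# hypothetical examples, not LegalRuleML vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/legal/")  # placeholder namespace

g = Graph()
act = EX["act/2012/consumer-protection"]
rule = EX["rule/2012/consumer-protection/art5"]

g.add((act, RDF.type, EX.LegislativeAct))
g.add((act, RDFS.label, Literal("Consumer Protection Act 2012")))
g.add((rule, RDF.type, EX.PrescriptiveRule))
g.add((rule, EX.isDerivedFrom, act))  # link the rule back to its source text

print(g.serialize(format="turtle"))  # Turtle output, ready to publish as linked data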
