OWL-RL-7.1.4/.gitignore
# Jetbrains editors
.idea/
# Python
*.pyc
.pytest_cache/
# 2to3
*.bak
/venv/
docs/build/
build/
dist/
deb_dist/
sdist/
*.egg-info/
owlrl*.tar.gz
# Mac
.DS_Store
OWL-RL-7.1.4/CHANGELOG.rst
Changelog
---------
v.7.1.4 - July, 2025
~~~~~~~~~~~~~~~~~~~~~~~
Changes:
* dependency updates to meet RDFLib 7.1.4
* Python 3.9+ only, to match RDFLib 7.1.4
* background dependencies updated and tested to confirm they still work
v.7.1.3 - January, 2025
~~~~~~~~~~~~~~~~~~~~~~~
Changes:
* dependency updates only, to meet RDFLib 7.1.3
* ensured development dependencies allow for Python 3.8.1+
* updated README with contact info and installation instructions
v.7.1.2 - October, 2024
~~~~~~~~~~~~~~~~~~~~~~~
Changes:
* Python dependency changed to > 3.8 (not > 3.10) to be compatible with other RDFLib packages
v.7.1.1 - October, 2024
~~~~~~~~~~~~~~~~~~~~~~~
Changes:
* works with rdflib >= 7.1.1
This version of OWL-RL has been released primarily to update the RDFLib dependency version and to further reduce upstream dependencies. Additionally, several major functional updates are included:
* [Better handling of detecting identical literals for RDFS sameAs rules](https://github.com/RDFLib/OWL-RL/pull/68)
* [Allow OWL-RL Closures to be run on rdflib Dataset instances](https://github.com/RDFLib/OWL-RL/pull/69)
* [Add inferred triples to a separate named graph](https://github.com/RDFLib/OWL-RL/pull/70)
The main result of the changes above is to allow OWL-RL to be run on a Graph within a Dataset and to store the inferred triples in a separate Graph, which allows asserted and inferred data to be kept cleanly side by side, used together, and managed independently.
This update is of particular importance to tools such as [pySHACL](https://github.com/RDFLib/pySHACL)...
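A minimal sketch of the new capability (the graph identifier and file name below are illustrative; the option that routes inferred triples into a separate named graph is selected through additional, optional arguments introduced by the PRs above and is not shown)::

    from rdflib import Dataset, URIRef
    from owlrl import DeductiveClosure, OWLRL_Semantics

    ds = Dataset()
    data = ds.graph(URIRef("urn:example:data"))  # named graph holding the asserted triples
    data.parse("data.ttl", format="turtle")      # illustrative input file

    # Compute the OWL 2 RL closure of that single named graph in place.
    DeductiveClosure(OWLRL_Semantics).expand(data)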
6.0.2 - October, 2021
~~~~~~~~~~~~~~~~~~~~~~~
Changes:
* works with rdflib >= 6.2.0
* replaces owl namespace element lists with those of rdflib (OWL, RDF, RDFS, XSD)
v5.2.3
~~~~~~
Changes:
* fix README image display
* fix version acquisition in setup.py
v5.2.2
~~~~~~
Changes:
* Depend on RDFLib v5.0+
* Removed rdflib_jsonld as a requirement. It is included in rdflib 6.0
* Detect if rdflib 6.0 is in use, and do not import the jsonld module
v5.2.1
~~~~~~
Changes:
* Added stdeb.cfg file, and notes in requirements-dev.txt for building a debian package
* Removed the .py extension from the files in the scripts directory
* Same change as above but for the RDFConvertService script
* Fix Qualified Max Cardinality 0 Bug
* Fix output printing of owlrl commandline script
v5.2.0
~~~~~~
Changes:
* Dropped LiteralProxies finally. Processing of literals is no longer done through a LiteralProxy class (thanks :code:`@wrobell`)
* As a consequence:
* The change improves performance of the library by about 14%.
* Related blocks of code, which swallowed exceptions, were removed.
* Unit tests for the following OWL 2 RL rules are implemented
cax-dw,
cls-avf,
cls-maxc1,
cls-maxc2,
cls-maxqc1,
cls-maxqc2,
cls-maxqc3,
cls-maxqc4
* Unit tests for RDFS closure implemented
adding datatype axioms,
one time rules
* Unit tests for OWL 2 RL extras closure implemented
one time rules
v5.1.1
~~~~~~
Changes:
* Renamed script: :code:`closure.py` to :code:`owlrl.py`
* Fixed a deployment bug which caused the shebang to be rewritten incorrectly in the owlrl.py script.
v5.1.0
~~~~~~
Changes:
* Rename module from RDFClosure to owlrl
* Published on PyPI!
* Fixed bugs caused by python3 automatic conversion (like :code:`range` being a variable, not a function)
* Added some basic tests (more tests coming!)
* Started foundational work to remove :code:`LiteralProxies` in the future (thanks :code:`@wrobell`)
* Simplified some sections of the code (thanks :code:`@wrobell`)
Version 5.0.0
~~~~~~~~~~~~~
Changes:
* Port to Python3. Minimum recommended version is now Python v3.5.
* Fixed a bug where the inferencing process would crash if the engine encountered a literal node that has a datatype for which it does not have a hardcoded converter.
Version 4/5
~~~~~~~~~~~
This is a major release: the package has been updated to Python 2.7 and RDFLib 4 (and to Python 3.5 in v5.0.0).
Some important changes:
* The local parser and serializer implementations have been removed; the package relies fully on RDFLib.
* If the extra JSON-LD parser and serializer are available, that format may also be used both for input and output.
* RDFa as a possible input format has been added.
* The datatype part has been reworked to adapt itself to the way RDFLib handles datatypes.
* The :code:`Literal` class has been adapted to the latest versions of RDFLib's :code:`Literal` (there is no :code:`cmp_value` any more, only value)
* Python 2.7 includes an implementation for rational numbers (under the name :code:`Fraction`), so the separate module became moot.
* The :code:`script` directory has been moved to the top level of the distribution.
* The RDF 1.1 specific datatypes (:code:`LangString` and :code:`HTML`) have been added, although :code:`HTML` is simply treated as text (a reliance on the HTML5 library may be too much for what this is worth…)
* The :code:`closure` script has now an extra flag (:code:`-m`) to use the "maximal" entailment, i.e., extended OWLRL+RDF with extra trimmings.
Version 4.2.1
~~~~~~~~~~~~~
Changes:
* Per an error report from Michael Schneider: if a class name is a blank node, the XML serialization went wrong. In case of an exception, the fallback is to use plain XML rather than pretty XML; that works. A 'trimming' argument was also missing in the case of a pure format conversion, which led to an exception; that is handled, too.
Version 4.2
~~~~~~~~~~~
Changes:
* I exchanged rdflib Graph usage for rdflib ConjunctiveGraph. This avoids issues around deprecation and is also a possible entry point for named graphs.
* Added an extra check in the allValuesFrom handling for datatype restrictions. This does not affect pure OWLRL but is used by the extras that implement facets.
* The RestrictedDatatype class now has a 'Core' superclass; this can be used by other restricted datatypes that are not necessarily defined in OWL 2
Version 4.1
~~~~~~~~~~~
Changes:
* On advice from Dominique, the error message in the CGI script uses cgi.escape on the text input before displaying it.
* 'Trimming' has been added to the command line options
* Adaptation to rdflib 2.4.2 (or even 2.4.1?): the :code:`Literal._PythonToXSD` changed its structure from a dictionary to a list of tuples; :code:`DatatypeHandling.use_Alt_lexical_conversions()` had to change.
Version 4.0
~~~~~~~~~~~
Changes:
* The top level :code:`__init__` file has been reorganized, so that the package can be used as a module for applications in RDFLib. There is a top level class (:code:`DeductiveClosure`) that can be invoked from an RDFLib application and the old entry point (:code:`convert_graph`) relies on that.
* New classes have been added to cover a combined RDFS + OWL 2 RL closure (Michael Schneider's idea).
* An extension mechanism has been built in from the bottom up; users can define their own rules via an extension class that is given as a parameter to the core closure class.
* Using the extension mechanism a separate OWLRLExtras module has been added to implement, eg, self restriction, rational datatype.
* In the closure class the array of temporarily stored tuples has been exchanged for a set; in other words, checking whether the tuple is already stored is now done by the built-in set operation. It became much faster...
* The input argument has changed from 'source' to 'sources'; ie, several input files can be given to the service at the same time (eg, a separate URI for the data and the ontology, respectively).
* Added the implementation of owl:imports.
* Added an implementation for the datatype restrictions.
* Bugs:
* there was an optimization in the datatype handling of OWLRL that excluded subsumptions for 'implicit' literals, ie, literals that are given datatypes via the ^^ formalism (and not via sameAs and explicit datatype definitions). But this excluded proper inferences for existential restrictions...:-(
* handler for the :code:`xsd:normalizedString` datatype was missing.
Version 3.2
~~~~~~~~~~~
Note: this version passes the full batch of official OWL Full/RL tests uploaded by Michael Schneider to the OWL Working Group site. The difference, in this respect, between this version and version 3.1 is the handling of datatypes (which was only rudimentary in 3.1)
* Bugs:
* the rules on dt-diff/dt-eq were missing in the implementation. (My mistake: I did not realize that ( owl:sameAs "adfa") was a possible setup whereby those rules do come into play even in practice, so I did not implement them, thinking that the results would not appear in the final graph anyway due to a literal appearing in a subject position. Clearly an error in judgement.)
* :code:`PlainLiteral` was in a wrong namespace in the OWLRL file:-(
* Added explicit handling for virtually all datatypes, to check the lexical values. (This is, in fact, an RDFLib deficiency for most cases, except those that came in via OWL, like PlainLiteral...)
* Added a note referring to a Turtle parser bug...
Version 3.1
~~~~~~~~~~~
Note: this version passes the first, basic batch of official OWL Full/RL tests uploaded by Michael Schneider to the OWL Working Group site.
* Bugs:
* if the URI of a predicate did not correspond to a defined namespace, the extra namespace declaration did not appear in the pretty XML output. Typical situation: the user defines a namespace without a trailing '#' or '/', but uses the prefix nevertheless; this results in a URI for, say, a predicate or a type that cannot be represented in XML. The proper approach is then to add a new prefix with 'http://' and use that in the output.
The original XML serialization of RDFLib does that; the PrettyXMLSerialization did not. The pretty XML serialization is based on the one in RDFLib, and had therefore inherited this bug.
* the axiomatic expression for (byte subclass short) was misspelled to (byte subclass byte)
* the axiomatic triples added automatically should say (Thing type :code:`owl:Class`) (and not :code:`rdfs:Class` as before). Also, (Nothing type :code:`owl:Class`) was missing there.
* :code:`rdf:text` changed to :code:`rdf:PlainLiteral` (in the axiomatic triples), as a result of the OWL WG on changing the name.
* missing subclass relationship for dateTimeStamp vs dateTime.
* there was an optimization that added Datatype triples only for those datatypes that appeared as part of a literal in the input graph. However, the rule set requires those triples to be added no matter what. At the moment, this is pending (there are discussions in the group on this).
* the set of triples declaring annotation properties were missing
* error message for asymmetric properties was bogus (had :code:`%p` instead of :code:`%s` in the text).
* there was a leftover error message via exceptions for :code:`owl:Nothing` check.
* rule :code:`scm-eqc2` was missing :-(
* New Features:
* added some support for booleans; essentially introducing a stronger check (according to XSD, :code:`"111"^^xsd:boolean` is not a valid boolean value, though RDFLib accepts it as such...).
* triples with a bnode predicate were systematically filtered out when added to a graph. However, incoming ontologies may include statements like '[ owl:inverseOf P]', and processing those through the rule set requires allowing such triples during deduction. Luckily, RDFLib is relaxed on that. So such 'generalized' triples are now allowed during the forward chaining and are filtered out only once, right before serialization.
* some improvements on the datatype handling:
* adding type relationships to super(data)types. For example, if the original graph includes (:code:` rdf:type xsd:short`), then the triple (:code:` rdf:type xsd:integer`), etc, is also added. As an optimization the (:code:`xsd:short rdfs:subClassOf xsd:integer`) triples are not added, but the direct datatyping is done instead.
* adding disjointness information on datatypes on top of the hierarchy. This means that inconsistencies of the sort :code:` ex:prop 123 . ex:prop "1"^^xsd:boolean` will be detected (integers and booleans must be disjoint per XSD; the explicit type relationships and the disjointness of some datatypes will trigger the necessary rules).
Note that the first rule, in particular, is really useful when generic nodes are used as datatypes, as opposed to explicit literals.
* added the possibility to set the input format explicitly, and changed the RDFConvert script accordingly (the service is not yet changed...).
* added the possibility to consume standard input.
OWL-RL-7.1.4/Doc_OLD/RDFClosure-module.html
RDFClosure
This module is a brute force implementation of the 'finite' version of
RDFS semantics
and of OWL 2 RL on top of RDFLib (with some caveats, see
below). Some extensions to these are also implemented. Brute force means
that, in all cases, simple forward chaining rules are used to extend
(recursively) the incoming graph with all triples that the rule sets
permit (ie, the "deductive closure" of the graph is computed).
There is an extra option controlling whether the axiomatic triples are added to the
graph (prior to the forward chaining step). These typically set the
domain and range for properties or define some core classes. In the case
of RDFS, the implementation uses a 'finite' version of the axiomatic
triples only (as proposed, for example, by Herman ter Horst). This means
that it adds only those rdf:_i type predicates that do
appear in the original graph, thereby keeping this step finite. For OWL 2
RL, OWL 2 does not define axiomatic triples formally; but they can be
deduced from the OWL 2 RDF Based Semantics document and are listed in
Appendix 6 (though informally). Note, however, that this implementation
adds only those triples that refer to OWL terms that are meaningful for
the OWL 2 RL case.
Package Entry Points
The main entry point to the package is via the DeductiveClosure class. This class should be
initialized to control the parameters of the deductive closure; the
forward chaining is done via the expand method. The simplest way to use the package
from an RDFLib application is as follows:
graph = Graph() # creation of an RDFLib graph
...
... # normal RDFLib application, eg, parsing RDF data
...
DeductiveClosure(OWLRL_Semantics).expand(graph) # calculate an OWL 2 RL deductive closure of graph
# without axiomatic triples
The first argument of the DeductiveClosure
initialization can be replaced by other classes, providing different
types of deductive closure; other arguments are also possible. For
example:
DeductiveClosure(OWLRL_Extension, rdfs_closure = True, axiomatic_triples = True, datatype_axioms = True).expand(graph)
will calculate the deductive closure including RDFS and some
extensions to OWL 2 RL, and with all possible axiomatic triples added
to the graph (this is about the maximum the package can do…)
The same instance of DeductiveClosure can be used for several graph
expansions. In other words, the expand function does not change any
state.
For convenience, a second entry point to the package is provided in
the form of a function called convert_graph, which expects an object carrying various
options as attributes, including a file name. The function parses the file, creates
the expanded graph, and serializes the result into RDF/XML or Turtle.
This function is particularly useful as an entry point for a CGI call
(where the HTML form parameters are mapped onto such an options object) and is easy to use
with a command line interface. The package distribution contains an
example for both.
There are three major closure types (ie, semantic closure possibilities);
these can be controlled through the appropriate parameters of the DeductiveClosure class:
using the RDFS_Semantics class, implementing the RDFS semantics
using the OWLRL_Semantics class, implementing OWL 2 RL
using the RDFS_OWLRL_Semantics class, implementing a combined semantics of RDFS and OWL 2 RL
In all three cases there are other dimensions that can control the
exact closure being generated:
for convenience, the so called axiomatic triples (see, eg, the axiomatic triples in RDFS) are, by default,
not added to the graph closure to reduce the number of
generated triples. These can be controlled through a separate
initialization argument
similarly, the axiomatic triples for D-entailment are separated
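As a concrete illustration of these choices, here is a minimal sketch; the input file name is illustrative, and the classes are imported from the current owlrl module (the renamed RDFClosure package this page documents):

    from rdflib import Graph
    from owlrl import DeductiveClosure, RDFS_OWLRL_Semantics

    graph = Graph()
    graph.parse("ontology.ttl", format="turtle")   # illustrative input

    # Combined RDFS + OWL 2 RL closure, with the (non-datatype) axiomatic
    # triples added before forward chaining; the datatype axioms are left out.
    DeductiveClosure(
        RDFS_OWLRL_Semantics,
        rdfs_closure=True,
        axiomatic_triples=True,
        datatype_axioms=False,
    ).expand(graph)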
Some Technical/implementation aspects
The core processing is done in the Core class,
which is subclassed by the RDFS and the OWL 2
RL classes (these two are, in turn, subclassed by the RDFS + OWL 2 RL Semantics class). The core implements
the functionality of cycling through the rules, whereas the rules
themselves are defined and implemented in the subclasses. There are
also methods that are executed only once either at the beginning or at
the end of the full processing cycle. Adding axiomatic triples is
handled separately, which allows a finer user control over these
features.
Literals must be handled separately. Indeed, the functionality
relies on 'extended' RDF graphs, which allow literals to appear in a
subject position, too. Because RDFLib does not allow that, processing
begins by exchanging all literals in the graph for bnodes (identical
literals get the same associated bnode). Processing occurs on these
bnodes; at the end of the process all these bnodes are replaced by
their corresponding literals if possible (if the bnode occurs in a
subject position, that triple is removed from the resulting graph).
Details of this processing are handled in the separate Literals Proxies class.
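The exchange can be pictured with the following conceptual sketch; it is not the library's LiteralProxies code, only an illustration of the proxying idea described above:

    from rdflib import BNode, Literal

    def proxy_literals(graph):
        # Replace literal objects with proxy bnodes; identical literals share one proxy.
        lit_to_bnode = {}
        for s, p, o in list(graph):
            if isinstance(o, Literal):
                proxy = lit_to_bnode.setdefault(o, BNode())
                graph.remove((s, p, o))
                graph.add((s, p, proxy))
        return lit_to_bnode

    def restore_literals(graph, lit_to_bnode):
        # Put the literals back; drop triples where a proxy ended up in subject
        # position, since RDFLib cannot represent a literal subject.
        bnode_to_lit = {b: l for l, b in lit_to_bnode.items()}
        for s, p, o in list(graph):
            if s in bnode_to_lit:
                graph.remove((s, p, o))
            elif o in bnode_to_lit:
                graph.remove((s, p, o))
                graph.add((s, p, bnode_to_lit[o]))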
The OWL specification includes references to datatypes that are not
in the core RDFS specification, consequently not directly implemented
by RDFLib. These are added in a separate module of the package.
Problems with Literals with datatypes
The current distribution of RDFLib is fairly poor in handling
datatypes, particularly in checking whether a lexical form of a
literal is "proper" for its declared datatype. A typical
example is:
"-1234"^^xsd:nonNegativeInteger
which should not be accepted as a valid literal. Because the
requirements of OWL 2 RL are much stricter in this respect, an
alternative set of datatype handling (essentially, conversions) had
to be implemented (see the XsdDatatypes module).
The DeductiveClosure class has an additional instance
variable controlling whether the default RDFLib conversion routines should be
exchanged for the new ones. If this flag is set to True at
instance creation (this is the default), then the conversion routines
are set back to the originals once the expansion is complete, thereby
avoiding influencing older applications that may not work properly
with the new set of conversion routines.
If the user wants to use these alternative lexical conversions
everywhere in the application, then the use_improved_datatypes_conversions method can be
invoked. That method changes the conversion routines and, from that
point on, all usage of DeductiveClosure instances will use the improved
conversion methods without resetting them. Ie, the code structure can
be something like:
DeductiveClosure().use_improved_datatypes_conversions()
... RDFLib application
DeductiveClosure().expand(graph)
...
The default situation can be set back using the use_rdflib_datatypes_conversions call.
It is, however, not required to use these methods at all. Ie, the user can use:
DeductiveClosure(improved_datatypes=False).expand(graph)
which will result in a proper graph expansion except for the
datatype specific comparisons which will be incomplete.
RDFClosure.RDFSClosure: This module is a brute force implementation of the RDFS semantics on
top of RDFLib (with some caveats, see the introductory
text).
return_closure_class(owl_closure,
rdfs_closure,
owl_extras,
trimming=False)
Return the right semantic extension class based on three possible
choices (this method is here to help potential users, the result can
be fed into a DeductiveClosure instance at initialization)
convert_graph(options,
closureClass=None)
Entry point for external scripts (CGI or command line) to parse
RDF file(s), possibly execute OWL and/or RDFS closures, and serialize
back the result in some format.
Parse the input into the graph, possibly checking the suffix for the
format.
Parameters:
iformat - input format; can be one of AUTO, TURTLE, or RDFXML. AUTO means
that the suffix of the file name or URI will decide: '.ttl' means
Turtle, RDF/XML otherwise.
inp - input file; anything that RDFLib accepts in that position (URI,
file name, file object). If '-', standard input is used.
Interpret the owl import statements. Essentially, recursively merge
with all the objects in the owl import statement, and remove the
corresponding triples from the graph.
This method can be used by an application prior to expansion. It is
not done by the DeductiveClosure class.
Parameters:
iformat - input format; can be one of AUTO, TURTLE, or RDFXML. AUTO means
that the suffix of the file name or URI will decide: '.ttl' means
Turtle, RDF/XML otherwise.
Return the right semantic extension class based on three possible
choices (this method is here to help potential users, the result can be
fed into a DeductiveClosure instance at initialization)
Parameters:
owl_closure (boolean) - whether OWL 2 RL deductive closure should be calculated
rdfs_closure (boolean) - whether RDFS deductive closure should be calculated. In case
owl_closure==True, this parameter should also be
used in the initialization of DeductiveClosure
owl_extras - whether the extra possibilities (rational datatype, etc) should
be added to an OWL 2 RL deductive closure. This parameter has no
effect in case owl_closure==False.
trimming - whether extra trimming is done on the OWL RL + Extension output
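A short usage sketch follows; it assumes the function is imported from the current owlrl module (the renamed RDFClosure package documented here) and uses an illustrative file name:

    from rdflib import Graph
    from owlrl import DeductiveClosure, return_closure_class

    graph = Graph()
    graph.parse("ontology.ttl", format="turtle")   # illustrative input

    # OWL 2 RL combined with RDFS, without the extra (rational datatype, etc.) rules.
    closure_class = return_closure_class(
        owl_closure=True, rdfs_closure=True, owl_extras=False, trimming=False
    )
    if closure_class is not None:                  # None means no entailment was requested
        DeductiveClosure(closure_class, rdfs_closure=True).expand(graph)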
Entry point for external scripts (CGI or command line) to parse RDF
file(s), possibly execute OWL and/or RDFS closures, and serialize back
the result in some format. Note that this entry point can be used
requiring no entailment at all; because both the input and the output
format for the package can be RDF/XML or Turtle, such usage would simply
mean a format conversion.
If OWL 2 RL processing is required, that also means that the
owl:imports statements are interpreted. Ie, ontologies can be spread over
several files. Note, however, that the output of the process would then
include all imported ontologies, too.
Parameters:
options - object with specific attributes, namely:
options.sources: list of uris or file names for the source
data; for each one if the name ends with 'ttl', it is
considered to be turtle, RDF/XML otherwise (this can be
overwritten by the options.iformat, though)
options.text: direct Turtle encoding of a graph as a text
string (useful, eg, for a CGI call using a text field)
options.owlClosure: can be yes or no
options.rdfsClosure: can be yes or no
options.owlExtras: can be yes or no; whether the extra rules
beyond OWL 2 RL are used or not.
options.axioms: whether relevant axiomatic triples are added
before chaining (can be a boolean, or the strings
"yes" or "no")
options.daxioms: further datatype axiomatic triples are added
to the output (can be a boolean, or the strings
"yes" or "no")
options.format: output format, can be "turtle" or
"rdfxml"
options.iformat: input format, can be "turtle",
"rdfa", "json", "rdfxml", or
"auto". "auto" means that the suffix of
the file is considered: '.ttl', '.html', '.json' or '.jsonld'
respectively, with 'xml' as a fallback
options.trimming: whether the extension to OWLRL should also
include trimming
closureClass - explicit class reference. If set, this overrides the various
different other options to be used as an extension.
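The options value can be any object exposing the attributes listed above; the sketch below builds one with types.SimpleNamespace purely for illustration and assumes that the ported owlrl.convert_graph still accepts the attribute names documented on this legacy page:

    from types import SimpleNamespace
    from owlrl import convert_graph

    options = SimpleNamespace(
        sources=["ontology.ttl"],   # illustrative input file
        text=None,                  # no inline Turtle text
        owlClosure="yes",
        rdfsClosure="no",
        owlExtras="no",
        axioms=False,
        daxioms=False,
        format="turtle",
        iformat="auto",
        trimming="no",
    )

    print(convert_graph(options))   # serialized, expanded graph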
Source listing (RDFClosure/__init__.py); the module docstring repeats the description rendered above.
"""
@requires: U{RDFLib<https://github.com/RDFLib/rdflib>}, 4.0.0 and higher
@requires: U{rdflib_jsonld<https://github.com/RDFLib/rdflib-jsonld>}
@license: This software is available for use under the U{W3C Software License<http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231>}
@organization: U{World Wide Web Consortium<http://www.w3.org>}
@author: U{Ivan Herman<a href="http://www.w3.org/People/Ivan/">}
"""

__version__ = "5.0"
__author__ = 'Ivan Herman'
__contact__ = 'Ivan Herman, ivan@w3.org'
__license__ = u'W3C® SOFTWARE NOTICE AND LICENSE, http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231'

import StringIO
from types import *

import rdflib
from rdflib import Literal as rdflibLiteral
from rdflib import Graph

import DatatypeHandling, Closure
from OWLRLExtras import OWLRL_Extension, OWLRL_Extension_Trimming
from OWLRL import OWLRL_Semantics
from RDFSClosure import RDFS_Semantics
from CombinedClosure import RDFS_OWLRL_Semantics
from OWL import imports

RDFXML = "xml"
TURTLE = "turtle"
JSON = "json"
AUTO = "auto"
RDFA = "rdfa"

NONE = "none"
RDF = "rdf"
RDFS = "rdfs"
OWL = "owl"
FULL = "full"

try:
    from rdflib_jsonld.parser import JsonLDParser
    from rdflib_jsonld.serializer import JsonLDSerializer
    from rdflib.plugin import register, Serializer, Parser
    register('json-ld', Parser, 'rdflib_jsonld.parser', 'JsonLDParser')
    register('json-ld', Serializer, 'rdflib_jsonld.serializer', 'JsonLDSerializer')
    json_ld_available = True
except:
    json_ld_available = False
# The def line below was not part of this listing; its signature follows the parameter documentation above.
def __parse_input(iformat, inp, graph):
    """Parse the input into the graph, possibly checking the suffix for the format.

    @param iformat: input format; can be one of L{AUTO}, L{TURTLE}, or L{RDFXML}. L{AUTO} means that the suffix of the file name or URI will decide: '.ttl' means Turtle, RDF/XML otherwise.
    @param inp: input file; anything that RDFLib accepts in that position (URI, file name, file object). If '-', standard input is used.
    @param graph: the RDFLib Graph instance to parse into.
    """
    if iformat == AUTO:
        if inp == "-":
            format = "turtle"
        else:
            if inp.endswith('.ttl') or inp.endswith('.n3'):
                format = "turtle"
            elif json_ld_available and (inp.endswith('.json') or inp.endswith('.jsonld')):
                format = "json-ld"
            elif inp.endswith('.html'):
                format = "rdfa1.1"
            else:
                format = "xml"
    elif iformat == TURTLE:
        format = "n3"
    elif iformat == RDFA:
        format = "rdfa1.1"
    elif iformat == RDFXML:
        format = "xml"
    elif iformat == JSON:
        if json_ld_available:
            format = "json-ld"
        else:
            raise Exception("JSON-LD parser is not available")
    else:
        raise Exception("Unknown input syntax")

    if inp == "-":
        # standard input is used
        import sys
        source = sys.stdin
    else:
        source = inp
    graph.parse(source, format=format)
# The def line below was not part of this listing; its signature follows the documentation and calls above.
def interpret_owl_imports(iformat, graph):
    """Interpret the owl import statements. Essentially, recursively merge with all the objects in the owl import statement, and remove the corresponding
    triples from the graph.

    This method can be used by an application prior to expansion. It is I{not} done by the L{DeductiveClosure} class.

    @param iformat: input format; can be one of L{AUTO}, L{TURTLE}, or L{RDFXML}. L{AUTO} means that the suffix of the file name or URI will decide: '.ttl' means Turtle, RDF/XML otherwise.
    @param graph: the RDFLib Graph instance to parse into.
    """
    while True:
        # 1. collect the import statements:
        all_imports = [t for t in graph.triples((None, imports, None))]
        if len(all_imports) == 0:
            # no import statement whatsoever, we can go on...
            return
        # 2. remove all the import statements from the graph
        for t in all_imports:
            graph.remove(t)
        # 3. get all the imported vocabularies and import them
        for (s, p, uri) in all_imports:
            # this is not 100% kosher. The expected object for an import statement is a URI. However,
            # on local usage, a string would also make sense, so I do that one, too
            if isinstance(uri, rdflibLiteral):
                __parse_input(iformat, str(uri), graph)
            else:
                __parse_input(iformat, uri, graph)
        # 4. start all over again to see if import statements have been imported
def return_closure_class(owl_closure, rdfs_closure, owl_extras, trimming=False):
    """
    Return the right semantic extension class based on three possible choices (this method is here to help potential users, the result can be
    fed into a L{DeductiveClosure} instance at initialization)
    @param owl_closure: whether OWL 2 RL deductive closure should be calculated
    @type owl_closure: boolean
    @param rdfs_closure: whether RDFS deductive closure should be calculated. In case C{owl_closure==True}, this parameter should also be used in the initialization of L{DeductiveClosure}
    @type rdfs_closure: boolean
    @param owl_extras: whether the extra possibilities (rational datatype, etc) should be added to an OWL 2 RL deductive closure. This parameter has no effect in case C{owl_closure==False}.
    @param trimming: whether extra trimming is done on the OWL RL + Extension output
    @return: deductive class reference or None
    @rtype: Class type
    """
    if owl_closure:
        if owl_extras:
            if trimming:
                return OWLRL_Extension_Trimming
            else:
                return OWLRL_Extension
        else:
            if rdfs_closure:
                return RDFS_OWLRL_Semantics
            else:
                return OWLRL_Semantics
    elif rdfs_closure:
        return RDFS_Semantics
    else:
        return None
# The class statement and the __init__ signature were not part of this listing;
# the parameter names and defaults follow the docstrings and the call in convert_graph.
class DeductiveClosure:
    """
    Entry point to generate the deductive closure of a graph. The exact choice of deductive
    closure is controlled by a class reference. The important initialization parameter is the C{closure_class}: a Class object referring to a
    subclass of L{Closure.Core}. Although this package includes a number of
    such subclasses (L{OWLRL_Semantics}, L{RDFS_Semantics}, L{RDFS_OWLRL_Semantics}, and L{OWLRL_Extension}), the user can use his/her own if additional rules are
    implemented.

    Note that owl:imports statements are I{not} interpreted in this class; that has to be done beforehand on the graph that is to be expanded.

    @ivar rdfs_closure: Whether the RDFS closure should also be executed. Default: False.
    @ivar axiomatic_triples: Whether relevant axiomatic triples are added before chaining, except for datatype axiomatic triples. Default: False.
    @ivar datatype_axioms: Whether further datatype axiomatic triples are added to the output. Default: False.
    @ivar closure_class: the class instance used to expand the graph
    @cvar improved_datatype_generic: Whether the improved set of lexical-to-Python conversions should be used for datatype handling I{in general}, ie, not only for a particular instance and not only for inference purposes. Default: False.
    """
    improved_datatype_generic = False

    def __init__(self, closure_class=None, improved_datatypes=True, rdfs_closure=False,
                 axiomatic_triples=False, datatype_axioms=False):
        """
        @param closure_class: a closure class reference (subclass of L{Closure.Core})
        @param rdfs_closure: whether RDFS rules are executed or not
        @param axiomatic_triples: whether relevant axiomatic triples are added before chaining, except for datatype axiomatic triples. Default: False.
        @param datatype_axioms: whether further datatype axiomatic triples are added to the output. Default: False.
        @param improved_datatypes: whether the improved set of lexical-to-Python conversions should be used for datatype handling. See the introduction for more details. Default: True.
        """
        if closure_class is None:
            self.closure_class = None
        else:
            if not isinstance(closure_class, ClassType):
                raise ValueError("The closure type argument must be a class reference")
            else:
                self.closure_class = closure_class
        self.axiomatic_triples = axiomatic_triples
        self.datatype_axioms = datatype_axioms
        self.rdfs_closure = rdfs_closure
        self.improved_datatypes = improved_datatypes
Source listing: the convert_graph entry point. Its docstring repeats the parameter description above; the body checks the yes/no options, parses all the sources (plus any inline Turtle text), runs interpret_owl_imports when an OWL closure is requested, selects the closure class via return_closure_class unless an explicit closureClass is given, expands the graph with DeductiveClosure, and serializes the result as Turtle, JSON-LD, or pretty RDF/XML.
Source listing: the Core class of RDFClosure/Closure.py. Its constructor computes the maximal rdf:_i index and stores the graph together with the axioms, daxioms and rdfs flags; pre_process, post_process, rules, add_axioms, add_d_axioms and one_time_rules are hooks overridden in the subclasses; store_triple and flush_stored_triples manage the added_triples set; and closure() runs the cycle described below, adding the axiomatic triples and the literal proxies up front and appending any error messages to the graph as BNodes at the end. The docstrings repeat the class description rendered below.
Core of the semantics management, dealing with the RDFS and other
Semantic triples. The only reason to have it in a separate class is for
easier maintainability.
This is a common superclass only. In the present module, it is
subclassed by an RDFS
Closure class and an OWL RL
Closure class. There are some methods that are implemented in the
subclasses only, ie, this class cannot be used by itself!
rules(self,
t,
cycle_num)
The core processing cycles through every tuple in the graph and
dispatches it to the various methods implementing a specific group of
rules.
Do some pre-processing step. This method is called before anything else
in the closure generation. By default, this method is empty; subclasses
can add content to it by overriding it.
Do some post-processing step. This method is called when all processing
is done, but before possible errors are handled (ie, the method can add
its own error messages). By default, this method is empty; subclasses
can add content to it by overriding it.
The core processing cycles through every tuple in the graph and
dispatches it to the various methods implementing a specific group of
rules. By default, this method raises an exception; subclasses
must add content to it by overriding it.
Parameters:
t (tuple) - one triple on which to apply the rules
cycle_num - which cycle are we in, starting with 1. This value is forwarded
to all local rules; it is also used locally to collect the bnodes
in the graph.
This is only a placeholder; subclasses should fill this with real
content. By default, it is just an empty call. This set of rules is
invoked only once and not in a cycle.
In contrast to its name, this does not yet add anything to the graph
itself, it just stores the tuple in an internal set. (It is important for this to be a set:
some of the rules in the various closures may generate the same tuples
several times.) Before adding the tuple to the set, the method checks
whether the tuple is in the final graph already (if yes, it is not added
to the set).
The set itself is emptied at the start of every processing cycle; the
triples are then effectively added to the graph at the end of such a
cycle. If the set is actually empty at that point, this means that the
cycle has not added any new triple, and the full processing can stop.
Parameters:
t (a 3-element tuple of (s,p,o)) - the triple to be added to the graph, unless it is already there
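The following is a minimal sketch of the buffering pattern described above; the class and attribute names are illustrative, not the library's actual internals, although they mirror the names used in the listings in this section.

from rdflib import Graph, Literal

class TripleBuffer:
    def __init__(self, graph):
        self.graph = graph
        self.added_triples = set()          # a set, so duplicate deductions collapse

    def store_triple(self, t):
        s, p, o = t
        # literals may not appear in subject or predicate position, and triples
        # already asserted in the graph are not buffered again
        if not (isinstance(s, Literal) or isinstance(p, Literal)) and t not in self.graph:
            self.added_triples.add(t)

    def flush(self):
        # add the buffered triples to the graph; report whether anything was new
        for t in self.added_triples:
            self.graph.add(t)
        new_triples = len(self.added_triples) > 0
        self.added_triples = set()
        return new_triples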
Generate the closure of the graph. This is the real 'core'.
The processing rules store new triples via the separate method which stores them in the added_triples set. If that set is empty at the end
of a cycle, it means that the whole process can be stopped.
If required, the relevant axiomatic triples are added to the graph
before processing in cycles. Similarly the exchange of literals against
bnodes is also done in this step (and restored after all cycles are
over).
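A sketch of the resulting fixed-point cycle, reusing the TripleBuffer sketch above (the names are again illustrative; the real implementation is the closure() method reproduced in the listing above):

def run_to_fixpoint(buffer, rule_functions):
    cycle_num = 0
    while True:
        cycle_num += 1
        for t in list(buffer.graph):
            for rule in rule_functions:
                rule(t, cycle_num)      # rules call buffer.store_triple(...)
        if not buffer.flush():          # a cycle that adds nothing ends the process
            break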
error_messages - the error messages (typically inconsistency messages in OWL 2 RL) found during
processing. These are added to the final graph at the very end as
separate BNodes carrying the error text.
The combined closure: performing both the OWL 2 RL and RDFS
closures. The two are very close but there are some rules in RDFS that
are not in OWL 2 RL (eg, the axiomatic triples concerning the container
membership properties). Using this closure class the OWL 2 RL
implementation becomes a full extension of RDFS.
#!/d/Bin/Python/python.exe
# -*- coding: utf-8 -*-
#
"""
The combined closure: performing I{both} the OWL 2 RL and RDFS closures.
The two are very close but there are some rules in RDFS that are not in OWL 2 RL (eg, the axiomatic
triples concerning the container membership properties). Using this closure class the
OWL 2 RL implementation becomes a full extension of RDFS.

@requires: U{RDFLib<https://github.com/RDFLib/rdflib>}, 4.0.0 and higher
@license: This software is available for use under the U{W3C Software License<http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231>}
@organization: U{World Wide Web Consortium<http://www.w3.org>}
@author: U{Ivan Herman<a href="http://www.w3.org/People/Ivan/">}

"""

__author__ = 'Ivan Herman'
__contact__ = 'Ivan Herman, ivan@w3.org'
__license__ = u'W3C® SOFTWARE NOTICE AND LICENSE, http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231'

from RDFClosure.RDFS import Resource, Class, Datatype
from RDFClosure.OWL import OWLClass, Thing, equivalentClass, DataRange

from RDFClosure.RDFSClosure import RDFS_Semantics
from RDFClosure.OWLRL import OWLRL_Semantics
32"""Common subclass of the RDFS and OWL 2 RL semantic classes. All methods simply call back 33 to the functions in the superclasses. This may lead to some unnecessary duplication of terms 34 and rules, but it it not so bad. Also, the additional identification defined for OWL Full, 35 ie, Resource being the same as Thing and OWL and RDFS classes being identical are added to the 36 triple store. 37 38 Note that this class is also a possible user extension point: subclasses can be created that 39 extend the standard functionality by extending this class. This class I{always} performs RDFS inferences. 40 Subclasses have to set the C{self.rdfs} flag explicitly to the requested value if that is to be controlled. 41 42 @ivar full_binding_triples: additional axiom type triples that are added to the combined semantics; these 'bind' the RDFS and the OWL worlds together 43 @ivar rdfs: whether RDFS inference is to be performed or not. In this class instance the value is I{always} C{True}, subclasses may explicitly change it at initialization time. 44 @type rdfs: boolean 45 """ 46full_binding_triples=[ 47(Thing,equivalentClass,Resource), 48(Class,equivalentClass,OWLClass), 49(DataRange,equivalentClass,Datatype) 50] 51
53""" 54 @param graph: the RDF graph to be extended 55 @type graph: rdflib.Graph 56 @param axioms: whether (non-datatype) axiomatic triples should be added or not 57 @type axioms: bool 58 @param daxioms: whether datatype axiomatic triples should be added or not 59 @type daxioms: bool 60 @param rdfs: placeholder flag (used in subclassed only, it is always defaulted to True in this class) 61 @type rdfs: boolean 62 """ 63OWLRL_Semantics.__init__(self,graph,axioms,daxioms,rdfs) 64RDFS_Semantics.__init__(self,graph,axioms,daxioms,rdfs) 65self.rdfs=True
70"""If an extension wants to add new datatypes, this method should be invoked at initialization time. 71 72 @param uri : URI for the new datatypes, like owl_ns["Rational"] 73 @param conversion_function : a function converting the lexical representation of the datatype to a Python value, 74 possibly raising an exception in case of unsuitable lexical form 75 @param datatype_list : list of datatypes already in use that has to be checked 76 @param subsumption_dict : dictionary of subsumption hierarchies (indexed by the datatype URI-s) 77 @param subsumption_key : key in the dictionary, if None, the uri parameter is used 78 @param subsumption_list : list of subsumptions associated to a subsumption key (ie, all datatypes that are superclasses of the new datatype) 79 """ 80fromDatatypeHandlingimportAltXSDToPYTHON,use_Alt_lexical_conversions 81 82ifdatatype_list: 83datatype_list.append(uri) 84 85ifsubsumption_dictandsubsumption_list: 86ifsubsumption_key: 87subsumption_dict[subsumption_key]=subsumption_list 88else: 89subsumption_dict[uri]=subsumption_list 90 91AltXSDToPYTHON[uri]=conversion_function 92use_Alt_lexical_conversions()
95"""Do some post-processing step. This method when all processing is done, but before handling possible 96 errors (ie, the method can add its own error messages). By default, this method is empty, subclasses 97 can add content to it by overriding it. 98 """ 99OWLRL_Semantics.post_process(self)
102"""103 @param t: a triple (in the form of a tuple)104 @param cycle_num: which cycle are we in, starting with 1. This value is forwarded to all local rules; it is also used105 locally to collect the bnodes in the graph.106 """107OWLRL_Semantics.rules(self,t,cycle_num)108ifself.rdfs:109RDFS_Semantics.rules(self,t,cycle_num)
122"""Adds some extra axioms and calls for the d_axiom part of the OWL Semantics."""123fortinself.full_binding_triples:124self.store_triple(t)125126# Note that the RL one time rules include the management of datatype which is a true superset127# of the rules in RDFS. It is therefore unnecessary to add those even self.rdfs is True.128OWLRL_Semantics.one_time_rules(self)
Common subclass of the RDFS and OWL 2 RL semantic classes. All methods
simply call back to the functions in the superclasses. This may lead to
some unnecessary duplication of terms and rules, but it is not so bad.
Also, the additional identifications defined for OWL Full, ie, Resource
being the same as Thing and the OWL and RDFS class notions being
identical, are added to the triple store.
Note that this class is also a possible user extension point:
subclasses can be created that extend the standard functionality by
extending this class. This class always performs RDFS inferences.
Subclasses have to set the self.rdfs flag explicitly to the
requested value if that is to be controlled.
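A short sketch of that extension point, assuming the module layout used in these listings (RDFClosure; adjust the import to the installed package name). The extra predicate is invented purely for illustration:

from rdflib import URIRef
from RDFClosure.CombinedClosure import RDFS_OWLRL_Semantics

MY_PREDICATE = URIRef("http://example.org/ns#linkedTo")   # hypothetical application predicate

class MySemantics(RDFS_OWLRL_Semantics):
    def rules(self, t, cycle_num):
        # keep all the standard RDFS + OWL 2 RL rules...
        RDFS_OWLRL_Semantics.rules(self, t, cycle_num)
        # ...and add a custom, symmetric-style rule for the extra predicate
        s, p, o = t
        if p == MY_PREDICATE:
            self.store_triple((o, MY_PREDICATE, s))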
add_new_datatype(uri,
conversion_function,
datatype_list,
subsumption_dict=None,
subsumption_key=None,
subsumption_list=None)
If an extension wants to add new datatypes, this method should be
invoked at initialization time.
full_binding_triples = [(rdflib.term.URIRef(u'http://www.w3.or...
    additional axiom type triples that are added to the combined
    semantics; these 'bind' the RDFS and the OWL worlds together
rdfs (boolean)
    whether RDFS inference is also done (used in subclasses only)
If an extension wants to add new datatypes, this method should be
invoked at initialization time (a usage sketch follows the parameter list below).
Parameters:
uri - URI for the new datatypes, like owl_ns["Rational"]
conversion_function - a function converting the lexical representation of the datatype
to a Python value, possibly raising an exception in case of
unsuitable lexical form
datatype_list - list of datatypes already in use that has to be checked
subsumption_dict - dictionary of subsumption hierarchies (indexed by the datatype URIs)
subsumption_key - key in the dictionary, if None, the uri parameter is used
subsumption_list - list of subsumptions associated to a subsumption key (ie, all
datatypes that are superclasses of the new datatype)
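The usage sketch announced above; the rational-number datatype, its converter and the chosen subsumption are invented for illustration, and the module paths follow the RDFClosure layout used in these listings:

from fractions import Fraction
from rdflib import Namespace
from RDFClosure.CombinedClosure import RDFS_OWLRL_Semantics
from RDFClosure.XsdDatatypes import OWL_RL_Datatypes, OWL_Datatype_Subsumptions

OWL_NS = Namespace("http://www.w3.org/2002/07/owl#")

def _str_to_rational(v):
    # must raise ValueError (or similar) on an unsuitable lexical form
    return Fraction(v)

class RationalAwareSemantics(RDFS_OWLRL_Semantics):
    def __init__(self, graph, axioms, daxioms, rdfs=True):
        RDFS_OWLRL_Semantics.__init__(self, graph, axioms, daxioms, rdfs)
        self.add_new_datatype(OWL_NS["rational"], _str_to_rational, OWL_RL_Datatypes,
                              subsumption_dict=OWL_Datatype_Subsumptions,
                              subsumption_list=[OWL_NS["real"]])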
Do some post-processing step. This method is called when all processing
is done, but before possible errors are handled (ie, the method can add
its own error messages). By default, this method is empty; subclasses
can add content to it by overriding it.
Go through the RDFS entailment rules rdf1, rdfs4-rdfs12, by extending
the graph.
Parameters:
t - a triple (in the form of a tuple)
cycle_num - which cycle are we in, starting with 1. This value is forwarded
to all local rules; it is also used locally to collect the bnodes
in the graph.
Most of the XSD datatypes are handled directly by RDFLib. However, in
some cases that is not good enough. There are two major reasons for
this:
1. Some datatypes are missing from RDFLib and required by OWL 2 RL
and/or RDFS.
2. In other cases, though the datatype is present, RDFLib is fairly lax
in checking the lexical value of those datatypes. A typical case is
boolean.
Some of these deficiencies are handled by this module. All the
functions convert the lexical value into a python datatype (or return the
original string if this is not possible) which will be used, eg, for
comparisons (equalities). If the lexical value constraints are not met,
exceptions are raised.
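A brief usage sketch of the strict conversions, assuming the AltXSDToPYTHON table exported by this module (shown in the source listing below):

from rdflib.namespace import XSD
from RDFClosure.DatatypeHandling import AltXSDToPYTHON

print(AltXSDToPYTHON[XSD.boolean]("true"))      # -> True
print(AltXSDToPYTHON[XSD.decimal]("150.0"))     # -> Decimal('150.0')
try:
    AltXSDToPYTHON[XSD.decimal]("1.5E2")        # xsd:decimal has no exponent form
except ValueError as e:
    print(e)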
_strToDecimal(v)
The built-in datatype handling for RDFLib maps a decimal number to
float, but the Python version 2.4 and upwards also has a Decimal
number.
Almost all time/date related methods require the extraction of an
optional time zone information.
Parameters:
incoming_v - the time/date string
Returns: a (v, timezone) tuple; 'v' is the input string with the timezone
info cut off, 'timezone' is a _namelessTZ instance or None
The built-in datatype handling for RDFLib maps a decimal number to
float, but the Python version 2.4 and upwards also has a Decimal number.
Better make use of that to use very high numbers. However, there is also
a big difference between Python's decimal and XSD's decimal, because the
latter does not allow for an exponential normal form (why???). This must
be filtered out.
Parameters:
v - the literal string defined as decimal
Returns: Decimal
Rudimentary test for the AnyURI value. If it is a relative URI, then
some tests are done to filter out mistakes. I am not sure this is the
full implementation of the RFC, though, may have to be checked at some
point later.
Parameters:
v - the literal string defined as a URI
Returns: the incoming value
Rudimentary test for the base64Binary value. The problem is that the
built-in b64 module functions ignore the fact that only a certain family
of characters are allowed to appear in the lexical value, so this is
checked first.
Parameters:
v - the literal string defined as a base64-encoded string
Returns: the decoded (binary) content
Test and convert a double value into a Decimal or float. Raises an
exception if the number is outside the permitted range, ie, 1.0E+310 and
1.0E-330. To be on the safe side (python does not have double!) Decimals
are used if possible. Upper and lower values, as required by xsd, are
checked (and these fixed values are the reasons why Decimal is used!)
Parameters:
v - the literal string defined as a double
Returns: Decimal
Test and convert a float value into Decimal or (python) float. Raises
an exception if the number is outside the permitted range, ie, 1.0E+40
and 1.0E-50. (And these fixed values are the reasons why Decimal is
used!)
Parameters:
v - the literal string defined as a float
Returns: Decimal if the local python version is >= 2.4, float otherwise
Registering the datatype items for RDFLib, ie, binding the dictionary
values. The 'bind' method of RDFLib adds extra datatypes to the
registered ones in RDFLib, though the table used here (ie, AltXSDToPYTHON) actually overrides all of the default
conversion routines. The method also adds a Decimal entry to the
PythonToXSD array of RDFLib.
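The module below achieves this by updating RDFLib's internal _toPythonMapping table. On newer RDFLib versions a comparable effect for a single datatype can be obtained through the public rdflib.term.bind() hook; the following is a hedged sketch of that alternative, not what the module itself does:

from decimal import Decimal
from rdflib import term
from rdflib.namespace import XSD

def strict_decimal(lexical):
    # xsd:decimal does not allow an exponent form, unlike Python's Decimal
    if "e" in lexical.lower():
        raise ValueError("Invalid decimal literal value %s" % lexical)
    return Decimal(lexical)

term.bind(XSD.decimal, Decimal, constructor=strict_decimal)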
1#!/d/Bin/Python/python.exe 2# -*- coding: utf-8 -*- 3# 4""" 5Most of the XSD datatypes are handled directly by RDFLib. However, in some cases, that is not good enough. There are two 6major reasons for this: 7 8 1. Some datatypes are missing from RDFLib and required by OWL 2 RL and/or RDFS 9 2. In other cases, though the datatype is present, RDFLib is fairly lax in checking the lexical value of those datatypes. Typical case is boolean. 10 11Some of these deficiencies are handled by this module. All the functions convert the lexical value into a 12python datatype (or return the original string if this is not possible) which will be used, eg, 13for comparisons (equalities). If the lexical value constraints are not met, exceptions are raised. 14 15@requires: U{RDFLib<https://github.com/RDFLib/rdflib>}, 4.0.0 and higher 16@license: This software is available for use under the U{W3C Software License<http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231>} 17@organization: U{World Wide Web Consortium<http://www.w3.org>} 18@author: U{Ivan Herman<a href="http://www.w3.org/People/Ivan/">} 19""" 20 21__author__='Ivan Herman' 22__contact__='Ivan Herman, ivan@w3.org' 23__license__=u'W3C® SOFTWARE NOTICE AND LICENSE, http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231' 24 25# noinspection PyPep8Naming 26fromRDFClosure.RDFSimportRDFNSasns_rdf 27 28fromrdflib.termimportXSDToPython,Literal,_toPythonMapping 29# noinspection PyPep8Naming 30fromrdflib.namespaceimportXSDasns_xsd 31 32importdatetime,time,re 33fromdecimalimportDecimal 34 35# noinspection PyMissingConstructor,PyPep8Naming
37"""(Nameless) timezone object. The python datetime object requires timezones as 38 a specific object added to the conversion, rather than the explicit hour and minute 39 difference used by XSD. This class is used to wrap around the hour/minute values. 40 """
60"""Almost all time/date related methods require the extraction of an optional time zone information. 61 @param incoming_v: the time/date string 62 @return (v,timezone) tuple; 'v' is the input string with the timezone info cut off, 'timezone' is a L{_namelessTZ} instance or None 63 """ 64ifincoming_v[-1]=='Z': 65v=incoming_v[:-1] 66tzone=_namelessTZ(0,0) 67else: 68pattern=".*(\+|-)([0-9][0-9]):([0-9][0-9])" 69match=re.match(pattern,incoming_v) 70ifmatchisNone: 71v=incoming_v 72tzone=None 73else: 74hours=int(match.groups()[1]) 75ifmatch.groups()[0]=='-': 76hours=-hours-1 77minutes=int(match.groups()[2]) 78v=incoming_v[:-6] 79tzone=_namelessTZ(hours,minutes) 80returnv,tzone
86"""The built-in conversion to boolean is way too lax. The xsd specification requires that only true, false, 1 or 0 should be used... 87 @param v: the literal string defined as boolean 88 @return corresponding boolean value 89 @raise ValueError: invalid boolean values 90 """ 91ifv.lower()=="true"orv.lower()=="1": 92returnTrue 93elifv.lower()=="false"orv.lower()=="0": 94returnFalse 95else: 96raiseValueError("Invalid boolean literal value %s"%v)
102"""The built in datatype handling for RDFLib maps a decimal number to float, but the python version 2.4 and upwards also103 has a Decimal number. Better make use of that to use very high numbers.104 However, there is also a big difference between Python's decimal and XSD's decimal, because the latter does not allow105 for an exponential normal form (why???). This must be filtered out.106 @param v: the literal string defined as decimal107 @return Decimal108 @raise ValueError: invalid decimal value109 """110# check whether the lexical form of 'v' is o.k.111ifv.find('E')!=-1orv.find('e')!=-1:112# this is an invalid lexical form, though would be accepted by Python113raiseValueError("Invalid decimal literal value %s"%v)114else:115returnDecimal(v)
#################################### ANY URIS ##################################################
#: set of characters allowed in a hexadecimal number
_hexc = ['A', 'B', 'C', 'D', 'E', 'F', 'a', 'b', 'c', 'd', 'e', 'f']
#: set of numerals
_numb = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '0']
# noinspection PyPep8Naming
124"""Rudimentary test for the AnyURI value. If it is a relative URI, then some tests are done to filter out125 mistakes. I am not sure this is the full implementation of the RFC, though, may have to be checked at some point126 later.127 @param v: the literal string defined as a URI128 @return the incoming value129 @raise ValueError: invalid URI value130 """131importurlparse132iflen(v)==0:returnv133ifurlparse.urlsplit(v)[0]!="":134# this means that there is a proper scheme, the URI should be kosher135returnv136else:137# this is meant to be a relative URI.138# If I am correct, that cannot begin with one or more "?" or ":" characters139# all others are o.k.140# if it begins with a % then it should be followed by two hexa characters,141# otherwise it is also a bug142ifv[0]=='%':143iflen(v)>=3and(v[1]in_hexcorv[1]in_numb)and(v[2]in_hexcorv[2]in_numb):144returnv145else:146raiseValueError("Invalid IRI %s"%v)147elifv[0]=='?'orv[0]==':':148raiseValueError("Invalid IRI %s"%v)149else:150returnv
155"""Rudimentary test for the base64Binary value. The problem is that the built-in b64 module functions ignore the156 fact that only a certain family of characters are allowed to appear in the lexical value, so this is checked first.157 @param v: the literal string defined as a base64encoded string158 @return the decoded (binary) content159 @raise ValueError: invalid base 64 binary value160 """161importbase64162ifv.replace('=','x').replace('+','y').replace('/','z').isalnum():163try:164returnbase64.standard_b64decode(v)165except:166raiseValueError("Invalid Base64Binary %s"%v)167else:168raiseValueError("Invalid Base64Binary %s"%v)
#################################### Numerical types ##################################################
#: limits for unsigned bytes
_limits_unsignedByte = [-1, 256]

#: limits for bytes
_limits_byte = [-129, 128]

#: limits for unsigned int
_limits_unsignedInt = [-1, 4294967296]

#: limits for int
_limits_int = [-2147483649, 2147483648]

#: limits for unsigned short
_limits_unsignedShort = [-1, 65536]

#: limits for short
_limits_short = [-32769, 32768]

#: limits for unsigned long
_limits_unsignedLong = [-1, 18446744073709551616]

#: limits for long
_limits_long = [-9223372036854775809, 9223372036854775808]

#: limits for positive integer
_limits_positiveInteger = [0, None]

#: limits for non positive integer
_limits_nonPositiveInteger = [None, 1]

#: limits for non negative integer
_limits_nonNegativeInteger = [-1, None]

#: limits for negative integer
_limits_negativeInteger = [None, 0]

# noinspection PyPep8Naming,PyBroadException
209"""Test (and convert) a generic numerical type, with a check against a lower and upper limit.210 @param v: the literal string to be converted211 @param interval: lower and upper bounds (non inclusive). If the value is None, no comparison should be done212 @param conversion: conversion function, ie, int, long, etc213 @raise ValueError: invalid value214 """215try:216i=conversion(v)217if(interval[0]isNoneorinterval[0]<i)and(interval[1]isNoneori<interval[1]):218returni219except:220pass221raiseValueError("Invalid numerical value %s"%v)
#################################### Double and float ##################################################
# noinspection PyPep8Naming
226"""Test and convert a double value into a Decimal or float. Raises an exception if the number is outside the permitted227 range, ie, 1.0E+310 and 1.0E-330. To be on the safe side (python does not have double!) Decimals are used228 if possible. Upper and lower values, as required by xsd, are checked (and these fixed values are the reasons229 why Decimal is used!)230231 @param v: the literal string defined as a double232 @return Decimal233 @raise ValueError: invalid value234 """235try:236value=Decimal(v)237upper=Decimal("1.0E+310")238lower=Decimal("1.0E-330")239iflower<abs(value)<upper:240# bingo241returnvalue242else:243raiseValueError("Invalid double %s"%v)244except:245# there was a problem in creating a decimal...246raiseValueError("Invalid double %s"%v)
250"""Test and convert a float value into Decimal or (python) float. Raises an exception if the number is outside the251 permitted range, ie, 1.0E+40 and 1.0E-50. (And these fixed values are the reasons why Decimal is used!)252253 @param v: the literal string defined as a float254 @return Decimal if the local python version is >= 2.4, float otherwise255 @raise ValueError: invalid value256 """257try:258value=Decimal(v)259upper=Decimal("1.0E+40")260lower=Decimal("1.0E-50")261iflower<abs(value)<upper:262# bingo263returnvalue264else:265raiseValueError("Invalid float %s"%v)266except:267# there was a problem in creating a decimal...268raiseValueError("Invalid float %s"%v)
273"""Test (and convert) hexa integer values. The number of characters should be even.274 @param v: the literal string defined as a hexa number275 @return long value276 @raise ValueError: invalid value277 """278# first of all, the number of characters must be even according to the xsd spec:279length=len(v)280if(length/2)*2!=length:281raiseValueError("Invalid hex binary number %s"%v)282returnlong(v,16)
#################################### Datetime, date timestamp, etc ################################

# noinspection PyPep8Naming
288"""Test (and convert) datetime and date timestamp values.289 @param incoming_v: the literal string defined as the date and time290 @param timezone_required: whether the timezone is required (ie, for date timestamp) or not291 @return datetime292 @rtype: datetime.datetime293 @raise ValueError: invalid datetime or date timestamp294 """295296# First, handle the timezone portion, if there is any297(v,tzone)=_returnTimeZone(incoming_v)298299# Check on the timezone. For time date stamp object it is required300iftimezone_requiredandtzoneisNone:301raiseValueError("Invalid datetime %s"%incoming_v)302303# The microseconds should be handled here...304final_v=v305milliseconds=0306milpattern="(.*)(\.)([0-9]*)"307match=re.match(milpattern,v)308ifmatchisnotNone:309# we have a millisecond portion...310try:311final_v=match.groups()[0]312milliseconds=int(match.groups()[2])313except:314raiseValueError("Invalid datetime %s"%incoming_v)315#316# By now, the pattern should be clear317# This may raise an exception...318try:319tstr=time.strptime(final_v,"%Y-%m-%dT%H:%M:%S")320iftzoneisnotNone:321returndatetime.datetime(tstr.tm_year,tstr.tm_mon,tstr.tm_mday,tstr.tm_hour,tstr.tm_min,tstr.tm_sec,milliseconds,tzone)322else:323returndatetime.datetime(tstr.tm_year,tstr.tm_mon,tstr.tm_mday,tstr.tm_hour,tstr.tm_min,tstr.tm_sec,milliseconds)324except:325raiseValueError("Invalid datetime %s"%incoming_v)
329"""Test (and convert) time values.330 @param incoming_v: the literal string defined as time value331 @return time332 @rtype datetime.time333 @raise ValueError: invalid datetime or date timestamp334 """335336# First, handle the timezone portion, if there is any337(v,tzone)=_returnTimeZone(incoming_v)338339# The microseconds should be handled here...340final_v=v341milliseconds=0342milpattern="(.*)(\.)([0-9]*)"343match=re.match(milpattern,v)344ifmatchisnotNone:345# we have a millisecond portion...346try:347final_v=match.groups()[0]348milliseconds=int(match.groups()[2])349except:350raiseValueError("Invalid datetime %s"%incoming_v)351#352# By now, the pattern should be clear353# This may raise an exception...354try:355tstr=time.strptime(final_v,"%H:%M:%S")356iftzoneisnotNone:357returndatetime.time(tstr.tm_hour,tstr.tm_min,tstr.tm_sec,milliseconds,tzone)358else:359returndatetime.time(tstr.tm_hour,tstr.tm_min,tstr.tm_sec,milliseconds)360except:361raiseValueError("Invalid time %s"%incoming_v)
365"""Test (and convert) date values.366 @param incoming_v: the literal string defined as date (in iso format)367 @return date368 @return datetime.date369 @raise ValueError: invalid datetime or date timestamp370 """371372# First, handle the timezone portion, if there is any373(final_v,tzone)=_returnTimeZone(incoming_v)374375# This may raise an exception...376try:377tstr=time.strptime(final_v,"%Y-%m-%d")378returndatetime.date(tstr.tm_year,tstr.tm_mon,tstr.tm_mday)379except:380raiseValueError("Invalid date %s"%incoming_v)
#################################### The 'g' series for dates ############################
# The 'g' datatypes (eg, gYear) cannot be directly represented as a python datatype;
# the series of methods below simply check whether the incoming string is o.k., but the
# returned value is the same as the original
# noinspection PyPep8Naming
454"""Test (and convert) XML Literal values.455 @param v: the literal string defined as an xml literal456 @return the canonical version of the same xml text457 @raise ValueError: incorrect xml string458 """459importxml.dom.minidom460try:461dom=xml.dom.minidom.parseString(v)462returndom.toxml()463except:464raiseValueError("Invalid XML Literal %s"%v)
#################################### language, NMTOKEN, NAME, etc #########################
#: regular expression for a 'language' datatype
_re_language = "[a-zA-Z]{1,8}(-[a-zA-Z0-9]{1,8})*"

#: regexp for NMTOKEN. It must be used with a re.U flag (the '(?U' regexp form did not work. It may depend on the locale...)
_re_NMTOKEN = "[\w:_.\-]+"

#: characters not permitted at a starting position for Name (otherwise Name is like NMTOKEN)
_re_Name_ex = ['.', '-'] + _numb

#: regexp for NCName. It must be used with a re.U flag (the '(?U' regexp form did not work. It may depend on the locale...)
_re_NCName = "[\w_.\-]+"

#: characters not permitted at a starting position for NCName
_re_NCName_ex = ['.', '-'] + _numb

# noinspection PyDefaultArgument,PyPep8Naming,PyPep8Naming
484"""Test (and convert) a generic string type, with a check against a regular expression.485 @param v: the literal string to be converted486 @param regexp: the regular expression to check against487 @param flag: flags to be used in the regular expression488 @param excludeStart: array of characters disallowed in the first position489 @return original string490 @raise ValueError: invalid value491 """492match=re.match(regexp,v,flag)493ifmatchisNoneormatch.end()!=len(v):494raiseValueError("Invalid literal %s"%v)495else:496iflen(excludeStart)>0andv[0]inexcludeStart:497raiseValueError("Invalid literal %s"%v)498returnv
#: Disallowed characters in a token or a normalized string, as a regexp
_re_token = "[^\n\t\r]+"

# noinspection PyPep8Naming
505"""Test (and convert) a string to a token.506 @param v: the literal string to be converted507 @return original string508 @raise ValueError: invalid value509 """510iflen(v)==0:511returnv512# filter out the case when there are new lines and similar (if there is a problem, an exception is raised)513_strToVal_Regexp(v,_re_token)514v1=' '.join(v.strip().split())515# normalize the string, and see if the result is the same:516iflen(v1)==len(v):517# no characters lost, ie, no unnecessary spaces518returnv519else:520raiseValueError("Invalid literal %s"%v)
525"""Test (and convert) a plain literal526 @param v: the literal to be converted527 @return a new RDFLib Literal with language tag528 @raise ValueError: invalid value529 """530reg="(.*)@([^@]*)"531# a plain literal must match this regexp!532match=re.match(reg,v)533ifmatchisNone:534raiseValueError("Invalid plain literal %s"%v)535else:536lit=match.groups()[0]537iflen(match.groups())==1ormatch.groups()[1]=="":538# no language tag539returnLiteral(lit)540else:541lang=match.groups()[1]542# check if this is a correct language tag. Note that can raise an exception!543try:544lang=_strToVal_Regexp(lang,_re_language)545returnLiteral(lit,lang=lang.lower())546except:547raiseValueError("Invalid plain literal %s"%v)
597"""Registering the datatypes item for RDFLib, ie, bind the dictionary values. The 'bind' method of RDFLib adds598 extra datatypes to the registered ones in RDFLib, though the table used here (ie, L{AltXSDToPYTHON}) actually overrides599 all of the default conversion routines. The method also add a Decimal entry to the PythonToXSD array of RDFLib.600 """601_toPythonMapping.update(AltXSDToPYTHON)
605"""Restore the original (ie, RDFLib) set of lexical conversion routines.606 """607_toPythonMapping.update(XSDToPython)
#######################################################################################
# This module can pretty much be tested individually...
if __name__ == '__main__':
    import sys
    dtype = sys.argv[1]
    string = sys.argv[2]
    datatype = ns_xsd[dtype]
    result = AltXSDToPYTHON[datatype](string)
    print type(result)
    print result
(Nameless) timezone object. The Python datetime object requires
time zones to be given as a specific object passed to the conversion,
rather than as the explicit hour and minute difference used by XSD.
This class is used to wrap around the hour/minute values.
Entry point to generate the deductive closure of a graph. The exact
choice of deductive closure is controlled by a class reference. The
important initialization parameter is the closure_class: a
Class object referring to a subclass of Closure.Core.
Although this package includes a number of such subclasses (OWLRL_Semantics, RDFS_Semantics, RDFS_OWLRL_Semantics, and OWLRL_Extension), the user can use his/her own if
additional rules are implemented.
Note that owl:imports statements are not interpreted in this
class; that has to be done beforehand on the graph that is to be
expanded.
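A typical invocation of this entry point, using the class names listed in this documentation (the module path may be RDFClosure or owlrl depending on the installed version, and the input file name is hypothetical):

from rdflib import Graph
from RDFClosure import DeductiveClosure
from RDFClosure.CombinedClosure import RDFS_OWLRL_Semantics

g = Graph()
g.parse("data.ttl", format="turtle")
DeductiveClosure(RDFS_OWLRL_Semantics, axiomatic_triples=False, datatype_axioms=False).expand(g)
# g now contains the original triples plus everything inferred by the combined closure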
improved_datatype_generic = False
Whether the improved set of lexical-to-Python conversions should be
used for datatype handling in general, ie, not only for a
particular instance and not only for inference purposes.
closure_class (subclass of Closure.Core) - a closure class reference.
rdfs_closure (boolean) - whether RDFS rules are executed or not
axiomatic_triples (boolean) - Whether relevant axiomatic triples are added before chaining,
except for datatype axiomatic triples. Default: False.
datatype_axioms (boolean) - Whether further datatype axiomatic triples are added to the
output. Default: False.
improved_datatypes (boolean) - Whether the improved set of lexical-to-Python conversions should
be used for datatype handling. See the introduction for more
details. Default: True.
Whether the improved set of lexical-to-Python conversions should be used
for datatype handling in general, ie, not only for a particular
instance and not only for inference purposes. Default: False.
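A hedged sketch of flipping this class-level switch so that the improved conversions become the general default; whether a plain assignment is the intended interface may depend on the version in use:

from RDFClosure import DeductiveClosure

DeductiveClosure.improved_datatype_generic = True   # affects all subsequent processing, not a single instance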
The issue is that pure literals cannot appear in subject position
according to the current rules on RDF. That means that some types of
conclusions cannot properly finish. The present trick tries to get
around the problem as follows:
1. For all literals in the graph a bnode is created. The module does not
do a full D entailment but just relies on RDFLib's ability to
recognize identical literals.
2. All those bnodes get a type Literal.
3. All triples with literals are exchanged against a triple with the
associated bnode.
The inferences are then calculated with the modified graph. At the end
of the process, the above steps are done backwards: for all triples where
a bnode representing a literal appears in object position, a triple is
generated; however, all triples where the bnode appears in subject
position are removed from the final graph.
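A minimal sketch of that exchange-and-restore idea (the real implementation is the LiteralProxies class shown below; the helper names here are illustrative only):

from rdflib import BNode, Literal

def proxy_literals(graph):
    lit_to_bnode = {}
    for s, p, o in list(graph):
        if isinstance(o, Literal):
            bn = lit_to_bnode.setdefault(o, BNode())
            graph.remove((s, p, o))
            graph.add((s, p, bn))               # the bnode now stands in for the literal
    return lit_to_bnode

def restore_literals(graph, lit_to_bnode):
    bnode_to_lit = dict((bn, lit) for lit, bn in lit_to_bnode.items())
    for s, p, o in list(graph):
        if s in bnode_to_lit:
            graph.remove((s, p, o))             # proxies may not stay in subject position
        elif o in bnode_to_lit:
            graph.remove((s, p, o))
            graph.add((s, p, bnode_to_lit[o]))  # put the real literal back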
_LiteralStructure
This class serves as a wrapper around rdflib's Literal, by changing
the equality function to a strict identity of datatypes and lexical
values.
1# -*- coding: utf-8 -*- 2# 3""" 4Separate module to handle literals. 5 6The issue is that pure literals cannot appear in subject position according to the current rules on RDF. That means that 7different types of conclusions cannot properly finish. The present trick is trying to get around the problem as follows: 8 9 1. For all literals in the graph a bnode is created. The module does not do a full D entailment but just relies on RDFLib's ability to recognize identical literals 10 2. All those bnodes get a type Literal 11 3. All triples with literals are exchanged against a triple with the associated bnode 12 13The inferences are then calculated with the modified Graph. At the end of the process, the above steps are done backwards: for all triples where 14a bnode representing a literal appear in object position, a triple is generated; however, all triples where the bnode appears in a 15subject position are removed from the final graph. 16 17 18@requires: U{RDFLib<https://github.com/RDFLib/rdflib>}, 4.0.0 and higher 19@license: This software is available for use under the U{W3C Software License<http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231>} 20@organization: U{World Wide Web Consortium<http://www.w3.org>} 21@author: U{Ivan Herman<a href="http://www.w3.org/People/Ivan/">} 22 23""" 24 25__author__='Ivan Herman' 26__contact__='Ivan Herman, ivan@w3.org' 27__license__=u'W3C® SOFTWARE NOTICE AND LICENSE, http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231' 28 29fromrdflibimportBNode 30fromrdflibimportLiteralasrdflibLiteral 31fromrdflib.namespaceimportXSDasns_xsd 32 33from.RDFSimporttype 34from.RDFSimportLiteral 35from.DatatypeHandlingimportAltXSDToPYTHON 36 37
39"""This class serves as a wrapper around rdflib's Literal, by changing the equality function to a strict 40 identity of datatypes and lexical values. 41 42 On the other hand, to implement, eg, OWL RL's datatype rules, a should be able to generate 43 an 'a sameAs b' triple, ie, the distinction should be kept. Hence this class that overrides the equality, 44 and then can be used as a key in a Python dictionary. 45 """ 46 47# noinspection PyPep8
77"""Compare to literal structure instances for equality. Here equality means in the sense of datatype values 78 @return: comparison result 79 @rtype: boolean 80 """ 81try: 82returnself.lit==other.lit 83except: 84# There might be conversion problems... 85returnFalse
113"""114 @param graph: the graph to be modified115 """116self.lit_to_bnode={}117self.bnode_to_lit={}118self.graph=graph119120to_be_removed=[]121to_be_added=[]122fortinself.graph:123(subj,pred,obj)=t124# This is supposed to be a "proper" graph, so only the obj may be a literal125ifisinstance(obj,rdflibLiteral):126# Test the validity of the datatype127ifobj.datatype:128try:129AltXSDToPYTHON[obj.datatype](str(obj))130exceptValueError:131closure.add_error("Lexical value of the literal '%s' does not match its datatype (%s)"%(str(obj),obj.datatype))132133# In any case, this should be removed:134iftnotinto_be_removed:135to_be_removed.append(t)136# Check if a BNode has already been associated with that literal137obj_st=_LiteralStructure(obj)138found=False139forlinself.lit_to_bnode.keys():140ifobj_st.lex==l.lexandobj_st.dt==l.dtandobj_st.lang==l.lang:141t1=(subj,pred,self.lit_to_bnode[l])142to_be_added.append(t1)143found=True144break145ifnotfound:146# the bnode has to be created147bn=BNode()148# store this in the internal administration149self.lit_to_bnode[obj_st]=bn150self.bnode_to_lit[bn]=obj_st151# modify the graph152to_be_added.append((subj,pred,bn))153to_be_added.append((bn,type,Literal))154# Furthermore: a plain literal should be identified with a corresponding xsd:string and vice versa, 155# cf, RDFS Semantics document156ifobj_st.dtisNoneandobj_st.langisNone:157newLit=rdflibLiteral(obj_st.lex,datatype=ns_xsd["string"])158new_obj_st=_LiteralStructure(newLit)159new_obj_st.dt=ns_xsd["string"]160bn2=BNode()161self.lit_to_bnode[new_obj_st]=bn2162self.bnode_to_lit[bn2]=new_obj_st163to_be_added.append((subj,pred,bn2))164to_be_added.append((bn2,type,Literal))165elifobj_st.dt==ns_xsd["string"]:166newLit=rdflibLiteral(obj_st.lex,datatype=None)167new_obj_st=_LiteralStructure(newLit)168# new_obj_st = _LiteralStructure(obj) # Was this the correct one, or was this an old bug?169new_obj_st.dt=None170bn2=BNode()171self.lit_to_bnode[new_obj_st]=bn2172self.bnode_to_lit[bn2]=new_obj_st173to_be_added.append((subj,pred,bn2))174to_be_added.append((bn2,type,Literal))175176# Do the real modifications177self._massageGraph(to_be_removed,to_be_added)
180"""181 This method is to be invoked at the end of the forward chain processing. It restores literals (whenever possible)182 to their original self...183 """184to_be_removed=[]185to_be_added=[]186fortinself.graph:187(subj,pred,obj)=t188# The two cases, namely when the literal appears in subject or object positions, should be treated differently189ifsubjinself.bnode_to_lit:190# well... there may be to cases here: either this is the original tuple stating that191# this bnode is a literal, or it is the result of an inference. In both cases, the tuple must192# be removed from the result without any further action193iftnotinto_be_removed:194to_be_removed.append(t)195elifobjinself.bnode_to_lit:196# This is where the exchange should take place: put back the real literal into the graph, removing the proxy one197iftnotinto_be_removed:198to_be_removed.append(t)199# This is an additional thing due to the latest change of literal handling in RDF concepts.200# If a literal is an xsd:string then a plain literal is put in its place for the purpose of serialization...201lit=self.bnode_to_lit[obj].lit202iflit.datatypeisnotNoneandlit.datatype==ns_xsd["string"]:203lit=rdflibLiteral(str(lit))204to_be_added.append((subj,pred,lit))205206# Do the real modifications207self._massageGraph(to_be_removed,to_be_added)
210"""211 Perform the removal and addition actions on the graph212 @param to_be_removed: list of tuples to be removed213 @param to_be_added : list of tuples to be added214 """215fortinto_be_removed:216self.graph.remove(t)217fortinto_be_added:218self.graph.add(t)
This class serves as a wrapper around rdflib's Literal, by changing
the equality function to a strict identity of datatypes and lexical
values.
On the other hand, to implement, eg, OWL RL's datatype rules, one should
be able to generate an 'a sameAs b' triple, ie, the distinction between
the literals should be kept. Hence this class, which overrides the
equality and can then be used as a key in a Python dictionary.
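A sketch of the wrapper idea: equality (and hashing) based strictly on lexical form, datatype and language tag, so that instances can serve as dictionary keys; the attribute names mirror those used in the listing above:

from rdflib import Literal

class LiteralKey(object):
    def __init__(self, lit):
        self.lit = lit
        self.lex = str(lit)
        self.dt = lit.datatype
        self.lang = lit.language

    def __eq__(self, other):
        return (self.lex, self.dt, self.lang) == (other.lex, other.dt, other.lang)

    def __hash__(self):
        return hash((self.lex, self.dt, self.lang))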
OWL and OWL2 terms. Note that the set of terms is complete, ie,
it includes all OWL 2 terms, regardless of whether the term is
used in OWL 2 RL or not.
This module is a brute force implementation of the OWL 2
RL profile.
RDFLib works with 'generalized' RDF, meaning that triples with a BNode
predicate are allowed. This is good because, eg, some of the
triples generated for RDF from an OWL 2 functional syntax might look like
'[ owl:inverseOf p]', and the RL rules would then operate on such
generalized triples. However, as a last post-processing step, these
triples are removed from the graph before serialization to produce
'standard' RDF (which is o.k. for RL, too, because the consequent triples
are all right; generalized triples might have had a role in the deduction
steps only).
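A small sketch of that clean-up step: drop 'generalized' triples, ie, those whose predicate is a blank node, before serialization (the library's own version is the post_process method shown further below):

from rdflib import BNode

def drop_generalized_triples(graph):
    for t in list(graph.triples((None, None, None))):
        if isinstance(t[1], BNode):
            graph.remove(t)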
1#!/d/Bin/Python/python.exe 2# -*- coding: utf-8 -*- 3# 4""" 5This module is a C{brute force} implementation of the OWL 2 RL profile. 6 7RDFLib works with 'generalized' RDF, meaning that triples with a BNode predicate are I{allowed}. This is good because, eg, some of the 8triples generated for RDF from an OWL 2 functional syntax might look like '[ owl:inverseOf p]', and the RL rules would then operate 9on such generalized triple. However, as a last, post processing steps, these triples are removed from the graph before serialization 10to produce 'standard' RDF (which is o.k. for RL, too, because the consequent triples are all right, generalized triples might 11have had a role in the deduction steps only). 12 13@requires: U{RDFLib<https://github.com/RDFLib/rdflib>}, 4.0.0 and higher 14@license: This software is available for use under the U{W3C Software License<http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231>} 15@organization: U{World Wide Web Consortium<http://www.w3.org>} 16@author: U{Ivan Herman<a href="http://www.w3.org/People/Ivan/">} 17 18""" 19 20__author__='Ivan Herman' 21__contact__='Ivan Herman, ivan@w3.org' 22__license__=u'W3C® SOFTWARE NOTICE AND LICENSE, http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231' 23 24importrdflib 25fromrdflibimportBNode 26 27fromRDFClosure.RDFSimportProperty,type 28fromRDFClosure.RDFSimportsubClassOf,subPropertyOf,comment,label,domain,range 29fromRDFClosure.RDFSimportseeAlso,isDefinedBy,Datatype 30 31fromRDFClosure.OWLimport* 32fromRDFClosure.ClosureimportCore 33fromRDFClosure.AxiomaticTriplesimportOWLRL_Axiomatic_Triples,OWLRL_D_Axiomatic_Triples 34fromRDFClosure.AxiomaticTriplesimportOWLRL_Datatypes_Disjointness 35 36OWLRL_Annotation_properties=[label,comment,seeAlso,isDefinedBy,deprecated,versionInfo,priorVersion,backwardCompatibleWith,incompatibleWith] 37 38from.XsdDatatypesimportOWL_RL_Datatypes,OWL_Datatype_Subsumptions 39 40########################################################################################################################### 41 42 43## OWL-R Semantics class 44# 45# 46# As an editing help: each rule is prefixed by RULE XXXX where XXXX is the acronym given in the profile document. 47# This helps in referring back to the spec... 48# noinspection PyPep8Naming,PyPep8Naming,PyBroadException
50"""OWL 2 RL Semantics class, ie, implementation of the OWL 2 RL closure graph. 51 52 Note that the module does I{not} implement the so called Datatype entailment rules, simply because the underlying RDFLib does 53 not implement the datatypes (ie, RDFLib will not make the literal "1.00" and "1.00000" identical, although 54 even with all the ambiguities on datatypes, this I{should} be made equal...). Also, the so-called extensional entailment rules 55 (Section 7.3.1 in the RDF Semantics document) have not been implemented either. 56 57 The comments and references to the various rule follow the names as used in the U{OWL 2 RL document<http://www.w3.org/TR/owl2-profiles/#Reasoning_in_OWL_2_RL_and_RDF_Graphs_using_Rules>}. 58 """
60""" 61 @param graph: the RDF graph to be extended 62 @type graph: rdflib.Graph 63 @param axioms: whether (non-datatype) axiomatic triples should be added or not 64 @type axioms: bool 65 @param daxioms: whether datatype axiomatic triples should be added or not 66 @type daxioms: bool 67 @param rdfs: whether RDFS inference is also done (used in subclassed only) 68 @type rdfs: boolean 69 """ 70Core.__init__(self,graph,axioms,daxioms,rdfs) 71self.bnodes=[]
74""" 75 Shorthand to get a list of values (ie, from an rdf:List structure) starting at a head 76 77 @param l: RDFLib resource, should be the head of an rdf:List 78 @return: array of resources 79 """ 80return[chforchinself.graph.items(l)]
89""" 90 Remove triples with bnode predicates. The Bnodes in the graph are collected in the first cycle run. 91 """ 92to_be_removed=[] 93forbinself.bnodes: 94fortinself.graph.triples((None,b,None)): 95iftnotinto_be_removed: 96to_be_removed.append(t) 97fortinto_be_removed: 98self.graph.remove(t)
115"""Helping method to check whether a type statement is in line with a possible116 restriction. This method is invoked by rule "cls-avf" before setting a type117 on an allValuesFrom restriction.118119 The method is a placeholder at this level. It is typically implemented by subclasses for120 extra checks, eg, for datatype facet checks.121 @param v: the resource that is to be 'typed'122 @param t: the targeted type (ie, Class)123 @return: boolean124 """125returnTrue
128"""129 Some of the rules in the rule set are axiomatic in nature, meaning that they really have to be added only130 once, there is no reason to add these in a cycle. These are performed by this method that is invoked only once131 at the beginning of the process.132133 These are: cls-thing, cls-nothing1, prp-ap, dt-types1, dt-types2, dt-eq, dt-diff.134135 Note, however, that the dt-* are executed only partially, limited by the possibilities offered by RDFLib. These may not136 cover all the edge cases of OWL RL. Especially, dt-not-type has not (yet?) been implemented (I wonder whether RDFLib should not raise137 exception for those anyway...138 """139# noinspection PyShadowingNames140def_add_to_explicit(s,o):141ifsnotinexplicit:142explicit[s]=[]143ifonotinexplicit[s]:144explicit[s].append(o)
153154def_add_to_used_datatypes(d):155used_datatypes.add(d)156157# noinspection PyShadowingNames158def_handle_subsumptions(r,dt):159ifdtinOWL_Datatype_Subsumptions:160fornew_dtinOWL_Datatype_Subsumptions[dt]:161self.store_triple((r,type,new_dt))162self.store_triple((new_dt,type,Datatype))163_add_to_used_datatypes(new_dt)164165166# For processing later:167# implicit object->datatype relationships: these come from real literals which are represented by168# an internal bnode169implicit={}170171# explicit object->datatype relationships: those that came from an object being typed as a datatype172# or a sameAs. The values are arrays of datatypes to which the resource belong173explicit={}174175# datatypes in use by the graph (directly or indirectly). This will be used at the end to add the176# necessary disjointness statements (but not more177used_datatypes=set()178179# the real literals from the original graph:180# literals = self.literal_proxies.lit_to_bnode.keys()181182# RULE dt-type2: for all explicit literals the corresponding bnode should get the right type183# definition. The 'implicit' dictionary is also filled on the fly184# RULE dt-not-type: see whether an explicit literal is valid in terms of the defined datatype185forltinself.literal_proxies.lit_to_bnode.keys():186# note that all non-RL datatypes are ignored187iflt.dtisnotNoneandlt.dtinOWL_RL_Datatypes:188bn=self.literal_proxies.lit_to_bnode[lt]189# add the explicit typing triple190self.store_triple((bn,type,lt.dt))191ifbnnotinimplicit:192implicit[bn]=lt.dt193_add_to_used_datatypes(lt.dt)194195# for dt-not-type196# This is a dirty trick: rdflib's Literal includes a method that raises an exception if the197# lexical value cannot be mapped on the value space.198try:199val=lt.lit.toPython()200except:201self.add_error("Literal's lexical value and datatype do not match: (%s,%s)"%(lt.lex,lt.dt))202203# RULE dt-diff204# RULE dt-eq205# Try to compare literals whether they are different or not. If they are different, then an explicit206# different from statement should be added, if they are identical, then an equality should be added207forlt1inself.literal_proxies.lit_to_bnode.keys():208forlt2inself.literal_proxies.lit_to_bnode.keys():209iflt1!=lt2:210try:211lt1_d=lt1.lit.toPython()212lt2_d=lt2.lit.toPython()213#if lt1_d != lt2_d :214# self.store_triple((self.literal_proxies.lit_to_bnode[lt1], differentFrom, self.literal_proxies.lit_to_bnode[lt2]))215#else :216# self.store_triple((self.literal_proxies.lit_to_bnode[lt1], sameAs, self.literal_proxies.lit_to_bnode[lt2]))217except:218# there may be a problem with one of the python conversion, but that should have been taken219# care of already220pass221222# Other datatype definitions can come from explicitly defining some nodes as datatypes (though rarely used,223# it is perfectly possible...224# there may be explicit relationships set in the graph, too!225for(s,p,o)inself.graph.triples((None,type,None)):226ifoinOWL_RL_Datatypes:227_add_to_used_datatypes(o)228ifsnotinimplicit:229_add_to_explicit(s,o)230231# Finally, there may be sameAs statements that bind nodes to some of the existing ones. 
This does not introduce232# new datatypes, so the used_datatypes array does not get extended233for(s,p,o)inself.graph.triples((None,sameAs,None)):234ifoinimplicit:235_add_to_explicit(s,implicit[o])236# note that s in implicit would mean that the original graph has237# a literal in subject position which is not allowed at the moment, so I do not bother238ifoinexplicit:239_append_to_explicit(s,o)240ifsinexplicit:241_append_to_explicit(o,s)242243# what we have now:244# explicit+implicit contains all the resources of type datatype;245# implicit contains those that are given by an explicit literal246# explicit contains those that are given by general resources, and there can be a whole array for each entry247248# RULE dt-type1: add a Datatype typing for all those249# Note: the strict interpretation of OWL RL is to do that for all allowed datatypes, but this is250# under discussion right now. The optimized version uses only what is really in use251fordtinOWL_RL_Datatypes:252self.store_triple((dt,type,Datatype))253fordtsinexplicit.values():254fordtindts:255self.store_triple((dt,type,Datatype))256257# Datatype reasoning means that certain datatypes are subtypes of one another.258# I could simply generate the extra subclass relationships into the graph and let the generic259# process work its way, but it seems to be an overkill. Instead, I prefer to add the explicit typing260# into the graph 'manually'261forrinexplicit:262# these are the datatypes that this resource has263dtypes=explicit[r]264fordtindtypes:265_handle_subsumptions(r,dt)266267forrinimplicit:268dt=implicit[r]269_handle_subsumptions(r,dt)270271# Last step: add the datatype disjointness relationships. This is done only for272# - 'top' level datatypes273# - used in the graph274fortinOWLRL_Datatypes_Disjointness:275(l,pred,r)=t276iflinused_datatypesandrinused_datatypes:277self.store_triple(t)278
294"""295 Some of the rules in the rule set are axiomatic in nature, meaning that they really have to be added only296 once, there is no reason to add these in a cycle. These are performed by this method that is invoked only once297 at the beginning of the process.298299 These are: cls-thing, cls-nothing1, prp-ap, dt-types1, dt-types2, dt-eq, dt-diff.300 """301self._one_time_rules_misc()302self._one_time_rules_datatypes()
305"""306 Go through the various rule groups, as defined in the OWL-RL profile text and implemented via307 local methods. (The calling cycle takes every tuple in the graph.)308 @param t: a triple (in the form of a tuple)309 @param cycle_num: which cycle are we in, starting with 1. This value is forwarded to all local rules; it is also used310 locally to collect the bnodes in the graph.311 """312ifcycle_num==1:313forrint:314ifisinstance(r,BNode)andrnotinself.bnodes:315self.bnodes.append(r)316317self._properties(t,cycle_num)318self._equality(t,cycle_num)319self._classes(t,cycle_num)320self._class_axioms(t,cycle_num)321self._schema_vocabulary(t,cycle_num)
324"""325 Implementation of the property chain axiom, invoked from inside the property axiom handler. This is the326 implementation of rule prp-spo2, taken aside for an easier readability of the code. """327chain=self._list(x)328# The complication is that, at each step of the chain, there may be spawns, leading to a multitude329# of 'sub' chains:-(330iflen(chain)>0:331for(u1,_y,_z)inself.graph.triples((None,chain[0],None)):332# At least the chain can be started, because the leftmost property has at least333# one element in its extension334finalList=[(u1,_z)]335chainExists=True336forpiinchain[1:]:337newList=[]338for(_u,ui)infinalList:339foruinself.graph.objects(ui,pi):340# what is stored is only last entry with u1, the intermediate results341# are not of interest342newList.append((u1,u))343# I have now, in new list, all the intermediate results344# until pi345# if new list is empty, that means that is a blind alley346iflen(newList)==0:347chainExists=False348break349else:350finalList=newList351ifchainExists:352for(_u,un)infinalList:353self.store_triple((u1,p,un))
356"""357 Table 4: Semantics of equality. Essentially, the eq-* rules.358 @param triple: triple to work on359 @param cycle_num: which cycle are we in, starting with 1. Can be used for some optimization.360 """361# In many of the 'if' branches, corresponding to rules in the document,362# the branch begins by a renaming of variables (eg, pp,c = s,o).363# There is no programming reasons for doing that, but by renaming the364# variables it becomes easier to compare the declarative rules365# in the document with the implementation366s,p,o=triple367# RULE eq-ref368self.store_triple((s,sameAs,s))369self.store_triple((o,sameAs,o))370self.store_triple((p,sameAs,p))371ifp==sameAs:372x,y=s,o373# RULE eq-sym374self.store_triple((y,sameAs,x))375# RULE eq-trans376forzinself.graph.objects(y,sameAs):377self.store_triple((x,sameAs,z))378# RULE eq-rep-s379forpp,ooinself.graph.predicate_objects(s):380self.store_triple((o,pp,oo))381# RULE eq-rep-p382forss,ooinself.graph.subject_objects(s):383self.store_triple((ss,o,oo))384# RULE eq-rep-o385forss,ppinself.graph.subject_predicates(o):386self.store_triple((ss,pp,s))387# RULE eq-diff1388if(s,differentFrom,o)inself.graphor(o,differentFrom,s)inself.graph:389s_e=self._get_resource_or_literal(s)390o_e=self._get_resource_or_literal(o)391self.add_error("'sameAs' and 'differentFrom' cannot be used on the same subject-object pair: (%s, %s)"%(s_e,o_e))392393# RULES eq-diff2 and eq-diff3394ifp==typeando==AllDifferent:395x=s396# the objects method are generators, we cannot simply concatenate them. So we turn the results397# into lists first. (Otherwise the body of the for loops should be repeated verbatim, which398# is silly and error prone...399m1=[iforiinself.graph.objects(x,members)]400m2=[iforiinself.graph.objects(x,distinctMembers)]401foryinm1+m2:402zis=self._list(y)403foriinxrange(0,len(zis)-1):404zi=zis[i]405forjinxrange(i+1,len(zis)-1):406zj=zis[j]407if((zi,sameAs,zj)inself.graphor(zj,sameAs,zi)inself.graph)andzi!=zj:408self.add_error("'sameAs' and 'AllDifferent' cannot be used on the same subject-object pair: (%s, %s)"%(zi,zj))
411"""412 Table 5: The Semantics of Axioms about Properties. Essentially, the prp-* rules.413 @param triple: triple to work on414 @param cycle_num: which cycle are we in, starting with 1. Can be used for some optimization.415 """416# In many of the 'if' branches, corresponding to rules in the document,417# the branch begins by a renaming of variables (eg, pp,c = s,o).418# There is no programming reasons for doing that, but by renaming the419# variables it becomes easier to compare the declarative rules420# in the document with the implementation421p,t,o=triple422423# RULE prp-ap424ifcycle_num==1andtinOWLRL_Annotation_properties:425self.store_triple((t,type,AnnotationProperty))426427# RULE prp-dom428ift==domain:429forx,yinself.graph.subject_objects(p):430self.store_triple((x,type,o))431432# RULE prp-rng433elift==range:434forx,yinself.graph.subject_objects(p):435self.store_triple((y,type,o))436437elift==type:438# RULE prp-fp439ifo==FunctionalProperty:440# Property axiom #3441forx,y1inself.graph.subject_objects(p):442fory2inself.graph.objects(x,p):443# Optimization: if the two resources are identical, the samAs is already444# taken place somewhere else, unnecessary to add it here445ify1!=y2:446self.store_triple((y1,sameAs,y2))447448# RULE prp-ifp449elifo==InverseFunctionalProperty:450forx1,yinself.graph.subject_objects(p):451forx2inself.graph.subjects(p,y):452# Optimization: if the two resources are identical, the samAs is already453# taken place somewhere else, unnecessary to add it here454ifx1!=x2:455self.store_triple((x1,sameAs,x2))456457# RULE prp-irp458elifo==IrreflexiveProperty:459forx,yinself.graph.subject_objects(p):460ifx==y:461self.add_error("Irreflexive property used on %s with %s"%(x,p))462463# RULE prp-symp464elifo==SymmetricProperty:465forx,yinself.graph.subject_objects(p):466self.store_triple((y,p,x))467468# RULE prp-asyp469elifo==AsymmetricProperty:470forx,yinself.graph.subject_objects(p):471if(y,p,x)inself.graph:472self.add_error("Erroneous usage of asymmetric property %s on %s and %s"%(p,x,y))473474# RULE prp-trp475elifo==TransitiveProperty:476forx,yinself.graph.subject_objects(p):477forzinself.graph.objects(y,p):478self.store_triple((x,p,z))479480#481# Breaking the order here, I take some additional rules here to the branch checking the type...482#483# RULE prp-adp484elifo==AllDisjointProperties:485x=p486foryinself.graph.objects(x,members):487pis=self._list(y)488foriinxrange(0,len(pis)-1):489pi=pis[i]490forjinxrange(i+1,len(pis)-1):491pj=pis[j]492forx,yinself.graph.subject_objects(pi):493if(x,pj,y)inself.graph:494self.add_error("Disjoint properties in an 'AllDisjointProperties' are not really disjoint: (%s, %s,%s) and (%s,%s,%s)"%(x,pi,y,x,pj,y))495496# RULE prp-spo1497elift==subPropertyOf:498p1,p2=p,o499forx,yinself.graph.subject_objects(p1):500self.store_triple((x,p2,y))501502# RULE prp-spo2503elift==propertyChainAxiom:504self._property_chain(p,o)505506# RULES prp-eqp1 and prp-eqp2507elift==equivalentProperty:508p1,p2=p,o509# Optimization: it clearly does not make sense to run these510# if the two properties are identical (a separate axiom511# does create an equivalent property relations among identical512# properties, too...)513ifp1!=p2:514# RULE prp-eqp1515forx,yinself.graph.subject_objects(p1):516self.store_triple((x,p2,y))517# RULE prp-eqp2518forx,yinself.graph.subject_objects(p2):519self.store_triple((x,p1,y))520521# RULE prp-pdw522elift==propertyDisjointWith:523p1,p2=p,o524forx,yinself.graph.subject_objects(p1):525if(x,p2,y)inself.graph:526self.add_error("Erroneous 
usage of disjoint properties %s and %s on %s and %s"%(p1,p2,x,y))527528529# RULES prp-inv1 and prp-inv2530elift==inverseOf:531p1,p2=p,o532# RULE prp-inv1533forx,yinself.graph.subject_objects(p1):534self.store_triple((y,p2,x))535# RULE prp-inv2536forx,yinself.graph.subject_objects(p2):537self.store_triple((y,p1,x))538539# RULE prp-key540elift==hasKey:541c,u=p,o542pis=self._list(u)543iflen(pis)>0:544forxinself.graph.subjects(type,c):545# "Calculate" the keys for 'x'. The complication is that there can be various combinations546# of the keys, and that is the structure one has to build up here...547#548# The final list will be a list of lists, with each constituents being the possible combinations of the549# key values.550# startup the list551finalList=[[zi]forziinself.graph.objects(x,pis[0])]552forpiinpis[1:]:553newList=[]554forziinself.graph.objects(x,pi):555newList=newList+[l+[zi]forlinfinalList]556finalList=newList557558# I am not sure this can happen, but better safe then sorry... ruling out559# the value lists whose length are not kosher560# (To be checked whether this is necessary in the first place)561valueList=[lforlinfinalListiflen(l)==len(pis)]562563# Now we can look for the y-s, to see if they have the same key values564foryinself.graph.subjects(type,c):565# rule out the existing equivalences566ifnot(y==xor(y,sameAs,x)inself.graphor(x,sameAs,y)inself.graph):567# 'calculate' the keys for the y values and see if there is a match568forvalsinvalueList:569same=True570foriinxrange(0,len(pis)-1):571if(y,pis[i],vals[i])notinself.graph:572same=False573# No use going with this property line574break575ifsame:576self.store_triple((x,sameAs,y))577# Look for the next 'y', this branch is finished, no reason to continue578break579580# RULES prp-npa1 and prp-npa2581elift==sourceIndividual:582x,i1=p,o583forp1inself.graph.objects(x,assertionProperty):584fori2inself.graph.objects(x,targetIndividual):585if(i1,p1,i2)inself.graph:586self.add_error("Negative (object) property assertion violated for: (%s, %s, %s)"%(i1,p1,i2))587fori2inself.graph.objects(x,targetValue):588if(i1,p1,i2)inself.graph:589self.add_error("Negative (datatype) property assertion violated for: (%s, %s, %s)"%(i1,p1,self.get_literal_value(i2)))
592"""593 Table 6: The Semantics of Classes. Essentially, the cls-* rules594 @param triple: triple to work on595 @param cycle_num: which cycle are we in, starting with 1. Can be used for some optimization.596 """597# In many of the 'if' branches, corresponding to rules in the document,598# the branch begins by a renaming of variables (eg, pp,c = s,o).599# There is no programming reasons for doing that, but by renaming the600# variables it becomes easier to compare the declarative rules601# in the document with the implementation602c,p,x=triple603604# RULE cls-nothing2605ifp==typeandx==Nothing:606self.add_error("%s is defined of type 'Nothing'"%c)607608# RULES cls-int1 and cls-int2609ifp==intersectionOf:610classes=self._list(x)611# RULE cls-int1612# Optimization: by looking at the members of class[0] right away one613# reduces the search spaces a bit. Individuals not in that class614# are without interest anyway615# I am not sure how empty lists are sanctioned, so having an extra check616# on that does not hurt..617iflen(classes)>0:618foryinself.graph.subjects(type,classes[0]):619ifFalsenotin[(y,type,cl)inself.graphforclinclasses[1:]]:620self.store_triple((y,type,c))621# RULE cls-int2622foryinself.graph.subjects(type,c):623forclinclasses:624self.store_triple((y,type,cl))625626# RULE cls-uni627elifp==unionOf:628forclinself._list(x):629foryinself.graph.subjects(type,cl):630self.store_triple((y,type,c))631632# RULE cls-comm633elifp==complementOf:634c1,c2=c,x635forx1inself.graph.subjects(type,c1):636if(x1,type,c2)inself.graph:637self.add_error("Violation of complementarity for classes %s and %s on element %s"%(c1,c2,x))638639# RULES cls-svf1 and cls=svf2640elifp==someValuesFrom:641xx,y=c,x642# RULE cls-svf1643# RULE cls-svf2644forppinself.graph.objects(xx,onProperty):645foru,vinself.graph.subject_objects(pp):646ify==Thingor(v,type,y)inself.graph:647self.store_triple((u,type,xx))648649# RULE cls-avf650elifp==allValuesFrom:651xx,y=c,x652forppinself.graph.objects(xx,onProperty):653foruinself.graph.subjects(type,xx):654forvinself.graph.objects(u,pp):655ifself.restriction_typing_check(v,y):656self.store_triple((v,type,y))657else:658self.add_error("Violation of type restriction for allValuesFrom in %s for datatype %s on value %s"%(pp,y,self._get_resource_or_literal(v)))659660# RULES cls-hv1 and cls-hv2661elifp==hasValue:662xx,y=c,x663forppinself.graph.objects(xx,onProperty):664# RULE cls-hv1665foruinself.graph.subjects(type,xx):666self.store_triple((u,pp,y))667# RULE cls-hv2668foruinself.graph.subjects(pp,y):669self.store_triple((u,type,xx))670671# RULES cls-maxc1 and cls-maxc1672elifp==maxCardinality:673# This one is a bit complicated, because the literals have been674# exchanged against bnodes...675#676# The construct should lead to an integer. 
Something may go wrong along the line677# leading to an exception...678val=-1679try:680val=int(self.literal_proxies.bnode_to_lit[x].lit)681except:682pass683xx=c684ifval==0:685# RULE cls-maxc1686forppinself.graph.objects(xx,onProperty):687foru,yinself.graph.subject_objects(pp):688# This should not occur:689if(u,type,xx)inself.graph:690self.add_error("Erroneous usage of maximum cardinality with %s, %s"%(xx,y))691elifval==1:692# RULE cls-maxc2693forppinself.graph.objects(xx,onProperty):694foru,y1inself.graph.subject_objects(pp):695if(u,type,xx)inself.graph:696fory2inself.graph.objects(u,pp):697ify1!=y2:698self.store_triple((y1,sameAs,y2))699700# RULES cls-maxqc1, cls-maxqc2, cls-maxqc3, cls-maxqc4701elifp==maxQualifiedCardinality:702# This one is a bit complicated, because the literals have been703# exchanged against bnodes...704#705# The construct should lead to an integer. Something may go wrong along the line706# leading to an exception...707val=-1708try:709val=int(self.literal_proxies.bnode_to_lit[x].lit)710except:711pass712xx=c713ifval==0:714# RULES cls-maxqc1 and cls-maxqc2 folded in one715forppinself.graph.objects(xx,onProperty):716forccinself.graph.objects(xx,onClass):717foru,yinself.graph.subject_objects(pp):718# This should not occur:719if(u,type,xx)inself.graphand(cc==Thingor(y,type,cc)inself.graph):720self.add_error("Erroneous usage of maximum qualified cardinality with %s, %s, and %s"%(xx,cc,y))721elifval==1:722# RULE cls-maxqc3 and cls-maxqc4 folded in one723forppinself.graph.objects(xx,onProperty):724forccinself.graph.objects(xx,onClass):725foru,y1inself.graph.subject_objects(pp):726if(u,type,xx)inself.graph:727ifcc==Thing:728fory2inself.graph.objects(u,pp):729ify1!=y2:730self.store_triple((y1,sameAs,y2))731else:732if(y1,type,cc)inself.graph:733fory2inself.graph.objects(u,pp):734ify1!=y2and(y2,type,cc)inself.graph:735self.store_triple((y1,sameAs,y2))736737# RULE cls-oo738elifp==oneOf:739foryinself._list(x):740self.store_triple((y,type,c))
743"""744 Table 7: Class Axioms. Essentially, the cax-* rules.745 @param triple: triple to work on746 @param cycle_num: which cycle are we in, starting with 1. Can be used for some optimization.747 """748# In many of the 'if' branches, corresponding to rules in the document,749# the branch begins by a renaming of variables (eg, pp,c = s,o).750# There is no programming reasons for doing that, but by renaming the751# variables it becomes easier to compare the declarative rules752# in the document with the implementation753c1,p,c2=triple754# RULE cax-sco755ifp==subClassOf:756# Other axioms sets classes to be subclasses of themselves, to one can optimize the trivial case757ifc1!=c2:758forxinself.graph.subjects(type,c1):759self.store_triple((x,type,c2))760761# RULES cax-eqc1 and cax-eqc1762# Other axioms set classes to be equivalent to themselves, one can optimize the trivial case763elifp==equivalentClassandc1!=c2:764# RULE cax-eqc1765forxinself.graph.subjects(type,c1):766self.store_triple((x,type,c2))767# RULE cax-eqc1768forxinself.graph.subjects(type,c2):769self.store_triple((x,type,c1))770771# RULE cax-dw772elifp==disjointWith:773forxinself.graph.subjects(type,c1):774if(x,type,c2)inself.graph:775self.add_error("Disjoint classes %s and %s have a common individual %s"%(c1,c2,self._get_resource_or_literal(x)))776777# RULE cax-adc778elifp==typeandc2==AllDisjointClasses:779x=c1780foryinself.graph.objects(x,members):781classes=self._list(y)782iflen(classes)>0:783foriinxrange(0,len(classes)-1):784cl1=classes[i]785forzinself.graph.subjects(type,cl1):786forcl2inclasses[(i+1):]:787if(z,type,cl2)inself.graph:788self.add_error("Disjoint classes %s and %s have a common individual %s"%(cl1,cl2,z))
791"""792 Table 9: The Semantics of Schema Vocabulary. Essentially, the scm-* rules793 @param triple: triple to work on794 @param cycle_num: which cycle are we in, starting with 1. Can be used for some optimization.795 """796# In many of the 'if' branches, corresponding to rules in the document,797# the branch begins by a renaming of variables (eg, pp,c = s,o).798# There is no programming reasons for doing that, but by renaming the799# variables it becomes easier to compare the declarative rules800# in the document with the implementation801s,p,o=triple802803# RULE scm-cls804ifp==typeando==OWLClass:805c=s806self.store_triple((c,subClassOf,c))807self.store_triple((c,equivalentClass,c))808self.store_triple((c,subClassOf,Thing))809self.store_triple((Nothing,subClassOf,c))810811# RULE scm-sco812# Rule scm-eqc2813elifp==subClassOf:814c1,c2=s,o815# RULE scm-sco816# Optimize out the trivial identity case (set elsewhere already)817ifc1!=c2:818forc3inself.graph.objects(c2,subClassOf):819# Another axiom already sets that...820ifc1!=c3:self.store_triple((c1,subClassOf,c3))821# RULE scm-eqc2822if(c2,subClassOf,c1)inself.graph:823self.store_triple((c1,equivalentClass,c2))824825# RULE scm-eqc826elifp==equivalentClassands!=o:827c1,c2=s,o828self.store_triple((c1,subClassOf,c2))829self.store_triple((c2,subClassOf,c1))830831# RULE scm-op and RULE scm-dp folded together832# There is a bit of a cheating here: 'Property' is not, strictly speaking, in the rule set!833elifp==typeand(o==ObjectPropertyoro==DatatypePropertyoro==Property):834pp=s835self.store_triple((pp,subPropertyOf,pp))836self.store_triple((pp,equivalentProperty,pp))837838# RULE scm-spo839# RULE scm-eqp2840elifp==subPropertyOfands!=o:841p1,p2=s,o842# Optimize out the trivial identity case (set elsewhere already)843# RULE scm-spo844ifp1!=p2:845forp3inself.graph.objects(p2,subPropertyOf):846ifp1!=p3:847self.store_triple((p1,subPropertyOf,p3))848849#RULE scm-eqp2850if(p2,subPropertyOf,p1)inself.graph:851self.store_triple((p1,equivalentProperty,p2))852853# RULE scm-eqp854# Optimize out the trivial identity case (set elsewhere already)855elifp==equivalentPropertyands!=o:856p1,p2=s,o857self.store_triple((p1,subPropertyOf,p2))858self.store_triple((p2,subPropertyOf,p1))859860# RULES scm-dom1 and scm-dom2861elifp==domain:862# RULE scm-dom1863pp,c1=s,o864for(_x,_y,c2)inself.graph.triples((c1,subClassOf,None)):865ifc1!=c2:866self.store_triple((pp,domain,c2))867# RULE scm-dom1868p2,c=s,o869for(p1,_x,_y)inself.graph.triples((None,subPropertyOf,p2)):870ifp1!=p2:871self.store_triple((p1,domain,c))872873# RULES scm-rng1 and scm-rng2874elifp==range:875# RULE scm-rng1876pp,c1=s,o877for(_x,_y,c2)inself.graph.triples((c1,subClassOf,None)):878ifc1!=c2:self.store_triple((pp,range,c2))879# RULE scm-rng1880p2,c=s,o881for(p1,_x,_y)inself.graph.triples((None,subPropertyOf,p2)):882ifp1!=p2:self.store_triple((p1,range,c))883884# RULE scm-hv885elifp==hasValue:886c1,i=s,o887forp1inself.graph.objects(c1,onProperty):888forc2inself.graph.subjects(hasValue,i):889forp2inself.graph.objects(c2,onProperty):890if(p1,subPropertyOf,p2)inself.graph:891self.store_triple((c1,subClassOf,c2))892893# RULES scm-svf1 and scm-svf2894elifp==someValuesFrom:895# RULE scm-svf1896c1,y1=s,o897forppinself.graph.objects(c1,onProperty):898forc2inself.graph.subjects(onProperty,pp):899fory2inself.graph.objects(c2,someValuesFrom):900if(y1,subClassOf,y2)inself.graph:901self.store_triple((c1,subClassOf,c2))902903# RULE 
scm-svf2904c1,y=s,o905forp1inself.graph.objects(c1,onProperty):906forc2inself.graph.subjects(someValuesFrom,y):907forp2inself.graph.objects(c2,onProperty):908if(p1,subPropertyOf,p2)inself.graph:909self.store_triple((c1,subClassOf,c2))910911# RULES scm-avf1 and scm-avf2912elifp==allValuesFrom:913# RULE scm-avf1914c1,y1=s,o915forppinself.graph.objects(c1,onProperty):916forc2inself.graph.subjects(onProperty,pp):917fory2inself.graph.objects(c2,allValuesFrom):918if(y1,subClassOf,y2)inself.graph:919self.store_triple((c1,subClassOf,c2))920921# RULE scm-avf2922c1,y=s,o923forp1inself.graph.objects(c1,onProperty):924forc2inself.graph.subjects(allValuesFrom,y):925forp2inself.graph.objects(c2,onProperty):926if(p1,subPropertyOf,p2)inself.graph:927self.store_triple((c2,subClassOf,c1))928929# RULE scm-int930elifp==intersectionOf:931c,x=s,o932forciinself._list(x):933self.store_triple((c,subClassOf,ci))934935# RULE scm-uni936elifp==unionOf:937c,x=s,o938forciinself._list(x):939self.store_triple((ci,subClassOf,c))
OWL 2 RL Semantics class, ie, implementation of the OWL 2 RL closure
graph.
Note that the module does not implement the so-called Datatype
entailment rules, simply because the underlying RDFLib does not implement
the datatypes (ie, RDFLib will not treat the literals "1.00" and
"1.00000" as identical, although, even with all the ambiguities around
datatypes, they should be considered equal). Also, the so-called
extensional entailment rules (Section 7.3.1 in the RDF Semantics
document) have not been implemented either.
The comments and references to the various rules follow the names as
used in the OWL 2 RL document.
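For orientation, this is how the closure implemented by this class is typically driven from user code (a minimal sketch, assuming the modern 'owlrl' package layout; older releases expose the same classes under the RDFClosure name, and "data.ttl" is a hypothetical input file):

    import owlrl
    from rdflib import Graph

    g = Graph()
    g.parse("data.ttl", format="turtle")   # hypothetical input file

    # expand the graph in place with the OWL 2 RL deductive closure
    owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)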
_one_time_rules_datatypes(self)
Some of the rules in the rule set are axiomatic in nature, meaning
that they really have to be added only once, there is no reason to
add these in a cycle.
one_time_rules(self)
Some of the rules in the rule set are axiomatic in nature, meaning
that they really have to be added only once, there is no reason to
add these in a cycle.
Helping method to check whether a type statement is in line with a
possible restriction. This method is invoked by rule "cls-avf"
before setting a type on an allValuesFrom restriction.
The method is a placeholder at this level. It is typically implemented
by subclasses for extra checks, eg, for datatype facet checks.
Some of the rules in the rule set are axiomatic in nature, meaning
that they really have to be added only once, there is no reason to add
these in a cycle. These are performed by this method that is invoked only
once at the beginning of the process.
These are: cls-thing, cls-nothing1, prp-ap, dt-types1, dt-types2,
dt-eq, dt-diff.
Note, however, that the dt-* are executed only partially, limited by
the possibilities offered by RDFLib. These may not cover all the edge
cases of OWL RL. In particular, dt-not-type has not (yet?) been implemented
(arguably, RDFLib itself should raise an exception for those
anyway).
Some of the rules in the rule set are axiomatic in nature, meaning
that they really have to be added only once, there is no reason to add
these in a cycle. These are performed by this method that is invoked only
once at the beginning of the process.
These are: cls-thing, cls-nothing1, prp-ap, dt-types1, dt-types2,
dt-eq, dt-diff.
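Whether the full sets of axiomatic and datatype axiomatic triples are added to the graph is controlled by flags when the closure is invoked; from user code this is typically done through DeductiveClosure (a sketch; the keyword names follow recent owlrl releases and should be checked against the installed version, "data.ttl" is a hypothetical input file):

    import owlrl
    from rdflib import Graph

    g = Graph().parse("data.ttl", format="turtle")   # hypothetical input file
    owlrl.DeductiveClosure(
        owlrl.OWLRL_Semantics,
        axiomatic_triples=True,   # add the full set of axiomatic triples
        datatype_axioms=True,     # add the (partial) datatype axioms discussed above
    ).expand(g)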
Go through the various rule groups, as defined in the OWL-RL profile
text and implemented via local methods. (The calling cycle takes every
tuple in the graph.)
Parameters:
t - a triple (in the form of a tuple)
cycle_num - which cycle are we in, starting with 1. This value is forwarded
to all local rules; it is also used locally to collect the bnodes
in the graph.
Implementation of the property chain axiom, invoked from inside the
property axiom handler. This is the implementation of rule prp-spo2,
taken aside for an easier readability of the code.
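As an illustration of what prp-spo2 produces, the following self-contained sketch (using rdflib and the owlrl package; the example data and namespace are made up) infers a grandparent relation from a two-step property chain:

    import owlrl
    from rdflib import Graph, URIRef

    data = """
    @prefix ex:  <http://example.org/> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .

    ex:hasGrandparent owl:propertyChainAxiom ( ex:hasParent ex:hasParent ) .
    ex:alice ex:hasParent ex:bob .
    ex:bob   ex:hasParent ex:carol .
    """

    g = Graph()
    g.parse(data=data, format="turtle")
    owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

    EX = "http://example.org/"
    # prp-spo2 should have added the chained property assertion
    assert (URIRef(EX + "alice"), URIRef(EX + "hasGrandparent"), URIRef(EX + "carol")) in g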
Extension to OWL 2 RL, ie, some additional rules added to the system
from OWL 2 Full. It is implemented through the OWLRL_Extension class, whose reference has to be passed
to the relevant semantic class (ie, either the OWL 2 RL or the combined
closure class) as an 'extension'.
According to the OWL spec: numerator must be an integer, denominator a
positive integer (ie, xsd['integer'] type), and the denominator should
not have a '+' sign.
Parameters:
v - the literal string holding the rational value
Returns: the corresponding Rational value
61"""Converting a string to a rational. 62 63 According to the OWL spec: numerator must be an integer, denominator a positive integer (ie, xsd['integer'] type), and the denominator 64 should not have a '+' sign. 65 66 @param v: the literal string defined as boolean 67 @return corresponding Rational value 68 @rtype: Rational 69 @raise ValueError: invalid rational string literal 70 """ 71try: 72r=v.split('/') 73iflen(r)==2: 74n_str=r[0] 75d_str=r[1] 76else: 77n_str=r[0] 78d_str="1" 79ifd_str.strip()[0]=='+': 80raiseValueError("Invalid Rational literal value %s"%v) 81else: 82returnRational(AltXSDToPYTHON[ns_xsd["integer"]](n_str),AltXSDToPYTHON[ns_xsd["positiveInteger"]](d_str)) 83except: 84raiseValueError("Invalid Rational literal value %s"%v)
91""" 92 Additional rules to OWL 2 RL. The initialization method also adds the C{owl:rational} datatype to the set of allowed 93 datatypes with the L{_strToRational} function as a conversion between the literal form and a Rational. The C{xsd:decimal} datatype 94 is also set to be a subclass of C{owl:rational}. Furthermore, the restricted datatypes are extracted from the graph 95 using a L{separate method in a different module<RestrictedDatatype.extract_faceted_datatypes>}, and all those datatypes are also 96 added to the set of allowed datatypes. In the case of the restricted datatypes and extra subsumption relationship is set up 97 between the restricted and the base datatypes. 98 99 @cvar extra_axioms: additional axioms that have to be added to the deductive closure (in case the axiomatic triples are required)100 @ivar restricted_datatypes : list of the datatype restriction101 @type restricted_datatypes : array of L{restricted datatype<RestrictedDatatype.RestrictedDatatype>} instances102 """103extra_axioms=[104(hasSelf,rdfType,Property),105(hasSelf,domain,Property),106]
134"""135 A one-time-rule: all the literals are checked whether they are (a) of type restricted by a136 faceted (restricted) datatype and (b) whether137 the corresponding value abides to the restrictions. If true, then the literal gets an extra138 tag as being of type of the restricted datatype, too.139 """140forrtinself.restricted_datatypes:141# This is a recorded restriction. The base type is:142base_type=rt.base_type143# Look through all the literals; more precisely, through the144# proxy bnodes:145forbninself.literal_proxies.bnode_to_lit:146# check if the type of that proxy matches. Note that this also takes147# into account the subsumption datatypes, that have been taken148# care of by the 'regular' OWL RL process149if(bn,rdfType,base_type)inself.graph:150# yep, that is a good candidate!151lt=self.literal_proxies.bnode_to_lit[bn]152try:153# the conversion or the check may go wrong and raise an exception; then simply move on154value=lt.lit.toPython()155ifrt.checkValue(value):156# yep, this is also of type 'rt'157self.store_triple((bn,rdfType,rt.datatype))158except:159continue
162"""Helping method to check whether a type statement is in line with a possible163 restriction. This method is invoked by rule "cls-avf" before setting a type164 on an allValuesFrom restriction.165166 The method is a placeholder at this level. It is typically implemented by subclasses for167 extra checks, eg, for datatype facet checks.168 @param v: the resource that is to be 'typed'169 @param t: the targeted type (ie, Class)170 @return: boolean171 """172# Look through the restricted datatypes to see if 't' corresponds to one of those...173# There are a bunch of possible exceptions here with datatypes, but they can all174# be ignored...175try:176forrtinself.restricted_datatypes:177ifrt.datatype==t:178# bingo179ifvinself.literal_proxies.bnode_to_lit:180returnrt.checkValue(self.literal_proxies.bnode_to_lit[v].lit.toPython())181else:182returnTrue183# if we got here, no restriction applies184returnTrue185except:186returnTrue
189"""190 This method is invoked only once at the beginning, and prior of, the forward chaining process.191192 At present, only the L{subsumption} of restricted datatypes<_subsume_restricted_datatypes>} is performed.193 """194RDFS_OWLRL_Semantics.one_time_rules(self)195# it is important to flush the triples at this point, because196# the handling of the restriction datatypes rely on the datatype197# subsumption triples added by the superclass198self.flush_stored_triples()199self._subsume_restricted_datatypes()
202"""203 Add the L{extra axioms<OWLRL_Extension.extra_axioms>}, related to the self restrictions.204 """205RDFS_OWLRL_Semantics.add_axioms(self)206fortinself.extra_axioms:self.graph.add(t)
209"""210 Go through the additional rules implemented by this module.211 @param t: a triple (in the form of a tuple)212 @param cycle_num: which cycle are we in, starting with 1. This value is forwarded to all local rules; it is also used213 locally to collect the bnodes in the graph.214 """215RDFS_OWLRL_Semantics.rules(self,t,cycle_num)216z,q,x=t217ifq==hasSelf:218forpinself.graph.objects(z,onProperty):219foryinself.graph.subjects(rdfType,z):220self.store_triple((y,p,y))221fory1,y2inself.graph.subject_objects(p):222ify1==y2:223self.store_triple((y1,rdfType,z))
228"""229 This Class adds only one feature to L{OWLRL_Extension}: to initialize with a trimming flag set to C{True} by default. 230231 This is pretty experimental and probably contentious: this class I{removes} a number of triples from the Graph at the very end of the processing steps.232 These triples are either the by-products of the deductive closure calculation or are axiom like triples that are added following the rules of OWL 2 RL.233 While these triples I{are necessary} for the correct inference of really 'useful' triples, they may not be of interest for the application234 for the end result. The triples that are removed are of the form (following a SPARQL-like notation):235236 - C{?x owl:sameAs ?x}, C{?x rdfs:subClassOf ?x}, C{?x rdfs:subPropertyOf ?x}, C{?x owl:equivalentClass ?x} type triples237 - C{?x rdfs:subClassOf rdf:Resource}, C{?x rdfs:subClassOf owl:Thing}, C{?x rdf:type rdf:Resource}, C{owl:Nothing rdfs:subClassOf ?x} type triples238 - For a datatype that does I{not} appear explicitly in a type assignments (ie, in a C{?x rdf:type dt}) the corresponding C{dt rdf:type owl:Datatype} and C{dt rdf:type owl:DataRange} triples, as well as the disjointness statements with other datatypes239 - annotation property axioms240 - a number of axiomatic triples on C{owl:Thing}, C{owl:Nothing} and C{rdf:Resource} (eg, C{owl:Nothing rdf:type owl:Class}, C{owl:Thing owl:equivalentClass rdf:Resource}, etc.)241242 Trimming is the only feature of this class, done in the L{post_process} step. If users extend L{OWLRL_Extension}, this class can be safely mixed in via multiple243 inheritance.244 """
Additional rules to OWL 2 RL. The initialization method also adds the
owl:rational datatype to the set of allowed datatypes with
the _strToRational function as a
conversion between the literal form and a Rational. The
xsd:decimal datatype is also set to be a subclass of
owl:rational. Furthermore, the restricted datatypes are
extracted from the graph using a separate method in a different module, and all those
datatypes are also added to the set of allowed datatypes. In the case of
the restricted datatypes, an extra subsumption relationship is set up
between the restricted and the base datatypes.
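A minimal usage sketch for this extension class (assuming OWLRL_Extension is re-exported at the package level, as in recent owlrl releases; otherwise it can be imported from owlrl.OWLRLExtras, and "data.ttl" is a hypothetical input file):

    import owlrl
    from rdflib import Graph

    g = Graph().parse("data.ttl", format="turtle")   # hypothetical input file
    # OWL 2 RL closure plus the extra rules (owl:rational, faceted datatypes, owl:hasSelf, ...)
    owlrl.DeductiveClosure(owlrl.OWLRL_Extension).expand(g)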
_subsume_restricted_datatypes(self)
A one-time-rule: all the literals are checked whether they are (a) of
type restricted by a faceted (restricted) datatype and (b) whether
the corresponding value abides to the restrictions.
extra_axioms = [(rdflib.term.URIRef(u'http://www.w3.org/2002/0...
additional axioms that have to be added to the deductive closure (in
case the axiomatic triples are required)
A one-time-rule: all the literals are checked whether they are (a) of
type restricted by a faceted (restricted) datatype and (b) whether the
corresponding value abides to the restrictions. If true, then the literal
gets an extra tag as being of type of the restricted datatype, too.
This method is invoked only once at the beginning, and prior of, the forward chaining process.
At present, only the subsumption of restricted datatypes (_subsume_restricted_datatypes) is performed.
Helping method to check whether a type statement is in line with a
possible restriction. This method is invoked by rule "cls-avf"
before setting a type on an allValuesFrom restriction.
The method is a placeholder at this level. It is typically implemented
by subclasses for extra checks, eg, for datatype facet checks.
Go through the additional rules implemented by this module.
Parameters:
t - a triple (in the form of a tuple)
cycle_num - which cycle are we in, starting with 1. This value is forwarded
to all local rules; it is also used locally to collect the bnodes
in the graph.
This class adds only one feature to OWLRL_Extension: to initialize with a trimming flag set
to True by default.
This is pretty experimental and probably contentious: this class
removes a number of triples from the Graph at the very end of the
processing steps. These triples are either the by-products of the
deductive closure calculation or are axiom-like triples that are added
following the rules of OWL 2 RL. While these triples are necessary
for the correct inference of really 'useful' triples, they may not be of
interest to the application in the end result. The triples that are
removed are of the form (following a SPARQL-like notation):
- For a datatype that does not appear explicitly in a type assignment
  (ie, in a ?x rdf:type dt): the corresponding dt rdf:type owl:Datatype
  and dt rdf:type owl:DataRange triples, as well as the disjointness
  statements with other datatypes
- annotation property axioms
- a number of axiomatic triples on owl:Thing, owl:Nothing and
  rdf:Resource (eg, owl:Nothing rdf:type owl:Class, owl:Thing
  owl:equivalentClass rdf:Resource, etc.)
Trimming is the only feature of this class, done in the post_process step. If users extend OWLRL_Extension, this class can be safely mixed in via
multiple inheritance.
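It is used the same way as the base extension class; the only difference is that the bookkeeping triples listed above are removed at the end (a sketch, again assuming the class is re-exported at the package level and "data.ttl" is a hypothetical input file):

    import owlrl
    from rdflib import Graph

    g = Graph().parse("data.ttl", format="turtle")   # hypothetical input file
    owlrl.DeductiveClosure(owlrl.OWLRL_Extension_Trimming).expand(g)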
#!/d/Bin/Python/python.exe
# -*- coding: utf-8 -*-
#
"""
This module is a brute force implementation of the RDFS semantics on top of RDFLib (with some caveats, see in the introductory text).


@requires: U{RDFLib<https://github.com/RDFLib/rdflib>}, 4.0.0 and higher
@license: This software is available for use under the U{W3C Software License<http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231>}
@organization: U{World Wide Web Consortium<http://www.w3.org>}
@author: U{Ivan Herman<a href="http://www.w3.org/People/Ivan/">}

"""

__author__ = 'Ivan Herman'
__contact__ = 'Ivan Herman, ivan@w3.org'
__license__ = u'W3C® SOFTWARE NOTICE AND LICENSE, http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231'

from RDFClosure.RDFS import Property, type
from RDFClosure.RDFS import Resource, Class, subClassOf, subPropertyOf, domain, range
from RDFClosure.RDFS import Literal, ContainerMembershipProperty, member, Datatype
# noinspection PyPep8Naming
from RDFClosure.RDFS import RDFNS as ns_rdf

from RDFClosure.Closure import Core
from RDFClosure.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples


######################################################################################################

## RDFS Semantics class
# noinspection PyPep8Naming
34"""RDFS Semantics class, ie, implementation of the RDFS closure graph. 35 36 Note that the module does I{not} implement the so called Datatype entailment rules, simply because the underlying RDFLib does 37 not implement the datatypes (ie, RDFLib will not make the literal "1.00" and "1.00000" identical, although 38 even with all the ambiguities on datatypes, this I{should} be made equal...). Also, the so-called extensional entailment rules 39 (Section 7.3.1 in the RDF Semantics document) have not been implemented either. 40 41 The comments and references to the various rule follow the names as used in the U{RDF Semantics document<http://www.w3.org/TR/rdf-mt/>}. 42 """
44""" 45 @param graph: the RDF graph to be extended 46 @type graph: rdflib.Graph 47 @param axioms: whether (non-datatype) axiomatic triples should be added or not 48 @type axioms: bool 49 @param daxioms: whether datatype axiomatic triples should be added or not 50 @type daxioms: bool 51 @param rdfs: whether RDFS inference is also done (used in subclassed only) 52 @type rdfs: boolean 53 """ 54Core.__init__(self,graph,axioms,daxioms,rdfs)
69"""This is not really complete, because it just uses the comparison possibilities that rdflib provides.""" 70literals=self.literal_proxies.lit_to_bnode.keys() 71# #1 72forltinliterals: 73iflt.dtisnotNone: 74self.graph.add((self.literal_proxies.lit_to_bnode[lt],type,lt.dt)) 75 76fortinRDFS_D_Axiomatic_Triples: 77self.graph.add(t)
81"""Some of the rules in the rule set are axiomatic in nature, meaning that they really have to be added only 82 once, there is no reason to add these in a cycle. These are performed by this method that is invoked only once 83 at the beginning of the process. 84 85 In this case this is related to a 'hidden' same as rules on literals with identical values (though different lexical values) 86 """ 87# There is also a hidden sameAs rule in RDF Semantics: if a literal appears in a triple, and another one has the same value, 88# then the triple should be duplicated with the other value. 89forlt1inself.literal_proxies.lit_to_bnode.keys(): 90forlt2inself.literal_proxies.lit_to_bnode.keys(): 91iflt1!=lt2: 92try: 93lt1_d=lt1.lit.toPython() 94lt2_d=lt2.lit.toPython() 95iflt1_d==lt2_d: 96# In OWL, this line is simply stating a sameAs for the corresponding BNodes, and then let 97# the usual rules take effect. In RDFS this is not possible, so the sameAs rule is, essentially 98# replicated... 99bn1=self.literal_proxies.lit_to_bnode[lt1]100bn2=self.literal_proxies.lit_to_bnode[lt2]101for(s,p,o)inself.graph.triples((None,None,bn1)):102self.graph.add((s,p,bn2))103except:104# there may be a problem with one of the python conversions; the rule is imply ignored105#raise e106pass
RDFS Semantics class, ie, implementation of the RDFS closure
graph.
Note that the module does not implement the so-called Datatype
entailment rules, simply because the underlying RDFLib does not implement
the datatypes (ie, RDFLib will not treat the literals "1.00" and
"1.00000" as identical, although, even with all the ambiguities around
datatypes, they should be considered equal). Also, the so-called
extensional entailment rules (Section 7.3.1 in the RDF Semantics
document) have not been implemented either.
The comments and references to the various rules follow the names as
used in the RDF
Semantics document.
one_time_rules(self)
Some of the rules in the rule set are axiomatic in nature, meaning
that they really have to be added only once, there is no reason to
add these in a cycle.
Some of the rules in the rule set are axiomatic in nature, meaning
that they really have to be added only once, there is no reason to add
these in a cycle. These are performed by this method that is invoked only
once at the beginning of the process.
In this case this is related to a 'hidden' sameAs rule on literals
with identical values (though different lexical forms).
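For completeness, the RDFS-only closure described here is invoked the same way as the OWL 2 RL one, just with a different semantics class (a sketch; the keyword name follows recent owlrl releases, "data.ttl" is a hypothetical input file, and whether value-equal literals are actually cross-asserted depends on the installed owlrl version):

    import owlrl
    from rdflib import Graph

    g = Graph().parse("data.ttl", format="turtle")   # hypothetical input file
    owlrl.DeductiveClosure(owlrl.RDFS_Semantics, datatype_axioms=True).expand(g)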
In the case above the system can then infer that ex:q is
also of type ex:RE.
Datatype restrictions are used by the OWL
RL Extensions extension class.
The implementation is not 100% complete. Some things that an ideal
implementation should do are not done yet like:
checking whether a facet is of a datatype that is allowed for that
facet
handling of non-literals in the facets (ie, if the resource is
defined to be of type literal, but whose value is defined via a
separate 'owl:sameAs' somewhere else)
_lang_range_check(range,
lang)
Implementation of the extended filtering algorithm, as defined in
point 3.3.2, of RFC 4647, on matching language ranges and language
tags.
This method is used to convert a string to a value with facet
checking. RDF Literals are converted to Python values using this method;
if there is a problem, an exception is raised (and caught higher up to
generate an error message).
The method is the equivalent of all the methods in the DatatypeHandling module, and is registered to the system
run time, as new restricted datatypes are discovered.
(Technically, the registration is done via a lambda v:
_lit_to_value(self,v) setting from within a RestrictedDatatype instance)
Implementation of the extended filtering algorithm, as defined in
point 3.3.2, of RFC 4647, on matching language ranges and language
tags. Needed to handle the rdf:PlainLiteral datatype.
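For reference, here is a minimal standalone sketch of the RFC 4647, section 3.3.2, extended filtering algorithm that the check above refers to. It is an illustration only, not the module's own _lang_range_check, and the function name is made up:

    def extended_filtering_match(language_range, language_tag):
        """Illustrative sketch of RFC 4647, section 3.3.2 (extended filtering)."""
        range_subtags = language_range.lower().split("-")
        tag_subtags = language_tag.lower().split("-")

        # the first subtag of the range must be '*' or equal to the tag's first subtag
        if range_subtags[0] not in ("*", tag_subtags[0]):
            return False

        i, j = 1, 1
        while i < len(range_subtags):
            r = range_subtags[i]
            if r == "*":                       # a wildcard matches any number of tag subtags
                i += 1
            elif j >= len(tag_subtags):
                return False                   # range subtags left over, tag exhausted
            elif r == tag_subtags[j]:
                i, j = i + 1, j + 1
            elif len(tag_subtags[j]) == 1:
                return False                   # a singleton in the tag must be matched explicitly
            else:
                j += 1                         # skip a non-matching, non-singleton tag subtag
        return True

    # e.g. extended_filtering_match("de-*-DE", "de-Latn-DE")  -> True
    #      extended_filtering_match("de-*-DE", "fr-Latn-DE")  -> False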
# -*- coding: utf-8 -*-
#
"""
Module for datatype restrictions, ie, data ranges.

The module implements the following aspects of datatype restrictions:

  - a new datatype is created run-time and added to the allowed and accepted datatypes; literals are checked whether they abide to the restrictions
  - the new datatype is defined to be a 'subClass' of the restricted datatype
  - literals of the restricted datatype and that abide to the restrictions defined by the facets are also assigned to be of the new type

The last item is important to handle the following structures::

    ex:RE a owl:Restriction ;
        owl:onProperty ex:p ;
        owl:someValuesFrom [
            a rdfs:Datatype ;
            owl:onDatatype xsd:string ;
            owl:withRestrictions (
                [ xsd:minLength "3"^^xsd:integer ]
                [ xsd:maxLength "6"^^xsd:integer ]
            )
        ]
    .
    ex:q ex:p "abcd"^^xsd:string.

In the case above the system can then infer that C{ex:q} is also of type C{ex:RE}.

Datatype restrictions are used by the L{OWL RL Extensions<OWLRLExtras.OWLRL_Extension>} extension class.

The implementation is not 100% complete. Some things that an ideal implementation should do are not done yet, like:

  - checking whether a facet is of a datatype that is allowed for that facet
  - handling of non-literals in the facets (ie, if the resource is defined to be of type literal, but whose value
    is defined via a separate 'owl:sameAs' somewhere else)

@requires: U{RDFLib<https://github.com/RDFLib/rdflib>}, 4.0.0 and higher
@license: This software is available for use under the U{W3C Software License<http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231>}
@organization: U{World Wide Web Consortium<http://www.w3.org>}
@author: U{Ivan Herman<a href="http://www.w3.org/People/Ivan/">}

"""

__author__ = 'Ivan Herman'
__contact__ = 'Ivan Herman, ivan@w3.org'
__license__ = u'W3C® SOFTWARE NOTICE AND LICENSE, http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231'

import re

from OWL import *
# noinspection PyPep8Naming,PyPep8Naming
from OWL import OWLNS as ns_owl
from RDFClosure.RDFS import Datatype
from RDFClosure.RDFS import type
# noinspection PyPep8Naming
from RDFClosure.RDFS import RDFNS as ns_rdf

from rdflib import Literal as rdflibLiteral
# noinspection PyPep8Naming
from rdflib.namespace import XSD as ns_xsd

from DatatypeHandling import AltXSDToPYTHON

#: Constant for datatypes using min, max (inclusive and exclusive):
MIN_MAX = 0
#: Constant for datatypes using length, minLength, and maxLength (and nothing else)
LENGTH = 1
#: Constant for datatypes using length, minLength, maxLength, and pattern
LENGTH_AND_PATTERN = 2
#: Constant for datatypes using length, minLength, maxLength, pattern, and lang range
LENGTH_PATTERN_LRANGE = 3

#: Dictionary of all the datatypes, keyed by category
Datatypes_per_facets = {
    MIN_MAX: [ns_owl["rational"], ns_xsd["decimal"], ns_xsd["integer"],
              ns_xsd["nonNegativeInteger"], ns_xsd["nonPositiveInteger"],
              ns_xsd["positiveInteger"], ns_xsd["negativeInteger"],
              ns_xsd["long"], ns_xsd["short"], ns_xsd["byte"],
              ns_xsd["unsignedLong"], ns_xsd["unsignedInt"], ns_xsd["unsignedShort"], ns_xsd["unsignedByte"],
              ns_xsd["double"], ns_xsd["float"],
              ns_xsd["dateTime"], ns_xsd["dateTimeStamp"], ns_xsd["time"], ns_xsd["date"]
              ],
    LENGTH: [ns_xsd["hexBinary"], ns_xsd["base64Binary"]],
    LENGTH_AND_PATTERN: [ns_xsd["anyURI"], ns_xsd["string"], ns_xsd["NMTOKEN"], ns_xsd["Name"], ns_xsd["NCName"],
                         ns_xsd["language"], ns_xsd["normalizedString"]
                         ],
    LENGTH_PATTERN_LRANGE: [ns_rdf["plainLiteral"]]
}

#: a simple list containing C{all} datatypes that may have a facet
facetable_datatypes = reduce(lambda x, y: x + y, Datatypes_per_facets.values())

#######################################################################################################
94""" 95 This method is used to convert a string to a value with facet checking. RDF Literals are converted to 96 Python values using this method; if there is a problem, an exception is raised (and caught higher 97 up to generate an error message). 98 99 The method is the equivalent of all the methods in the L{DatatypeHandling} module, and is registered100 to the system run time, as new restricted datatypes are discovered.101102 (Technically, the registration is done via a C{lambda v: _lit_to_value(self,v)} setting from within a103 L{RestrictedDatatype} instance)104 @param dt: faceted datatype105 @type dt: L{RestrictedDatatype}106 @param v: literal to be converted and checked107 @raise ValueError: invalid literal value108 """109# This may raise an exception...110value=dt.converter(v)111112# look at the different facet categories and try to find which is113# is, if any, the one that is of relevant for this literal114forcatinDatatypes_per_facets.keys():115ifdt.base_typeinDatatypes_per_facets[cat]:116# yep, this is to be checked.117ifnotdt.checkValue(value):118raiseValueError("Literal value %s does not fit the faceted datatype %s"%(v,dt))119# got here, everything should be fine120returnvalue
124"""125 Implementation of the extended filtering algorithm, as defined in point 3.3.2,126 of U{RFC 4647<http://www.rfc-editor.org/rfc/rfc4647.txt>}, on matching language ranges and language tags.127 Needed to handle the C{rdf:PlainLiteral} datatype.128 @param range: language range129 @param lang: language tag130 @rtype: boolean131 """132def_match(r,l):133"""Matching of a range and language item: either range is a wildcard or the two are equal134 @param r: language range item135 @param l: language tag item136 @rtype: boolean137 """138returnr=='*'orr==l
167"""168 Extractions of restricted (ie, faceted) datatypes from the graph.169 @param core: the core closure instance that is being handled170 @type core: L{Closure.Core}171 @param graph: RDFLib graph172 @return: array of L{RestrictedDatatype} instances173 """174retval=[]175fordtypeingraph.subjects(type,Datatype):176base_type=None177facets=[]178try:179base_types=[xforxingraph.objects(dtype,onDatatype)]180iflen(base_types)>0:181iflen(base_types)>1:182raiseException("Several base datatype for the same restriction %s"%dtype)183else:184base_type=base_types[0]185ifbase_typeinfacetable_datatypes:186rlists=[xforxingraph.objects(dtype,withRestrictions)]187iflen(rlists)>1:188raiseException("More than one facet lists for the same restriction %s"%dtype)189eliflen(rlists)>0:190final_facets=[]191forringraph.items(rlists[0]):192for(facet,lit)ingraph.predicate_objects(r):193ifisinstance(lit,rdflibLiteral):194# the python value of the literal should be extracted195# note that this call may lead to an exception, but that is fine,196# it is caught some lines below anyway...197try:198iflit.datatypeisNoneorlit.datatype==ns_xsd["string"]:199final_facets.append((facet,str(lit)))200else:201final_facets.append((facet,AltXSDToPYTHON[lit.datatype](str(lit))))202exceptException,msg:203core.add_error(msg)204continue205# We do have everything we need:206new_datatype=RestrictedDatatype(dtype,base_type,final_facets)207retval.append(new_datatype)208exceptException,msg:209#import sys210#print sys.exc_info()211#print sys.exc_type212#print sys.exc_value213#print sys.exc_traceback214core.add_error(msg)215continue216returnretval
221"""An 'abstract' superclass for datatype restrictions. The instance variables listed here are222 used in general, without the specificities of the concrete restricted datatype.223224 This module defines the L{RestrictedDatatype} class that corresponds to the datatypes and their restrictions225 defined in the OWL 2 standard. Other modules may subclass this class to define new datatypes with restrictions.226 @ivar type_uri : the URI for this datatype227 @ivar base_type : URI of the datatype that is restricted228 @ivar toPython : function to convert a Literal of the specified type to a Python value. 229 """
236"""237 Check whether the (python) value abides to the constraints defined by the current facets.238 @param value: the value to be checked239 @rtype: boolean240 """241raiseException("This class should not be used by itself, only via its subclasses!")
245"""246 Implementation of a datatype with facets, ie, datatype with restrictions.247248 @ivar datatype : the URI for this datatype249 @ivar base_type : URI of the datatype that is restricted250 @ivar converter : method to convert a literal of the base type to a Python value (drawn from L{DatatypeHandling.AltXSDToPYTHON})251 @ivar minExclusive : value for the C{xsd:minExclusive} facet, initialized to C{None} and set to the right value if a facet is around252 @ivar minInclusive : value for the C{xsd:minInclusive} facet, initialized to C{None} and set to the right value if a facet is around253 @ivar maxExclusive : value for the C{xsd:maxExclusive} facet, initialized to C{None} and set to the right value if a facet is around254 @ivar maxInclusive : value for the C{xsd:maxInclusive} facet, initialized to C{None} and set to the right value if a facet is around255 @ivar minLength : value for the C{xsd:minLength} facet, initialized to C{None} and set to the right value if a facet is around256 @ivar maxLength : value for the C{xsd:maxLength} facet, initialized to C{None} and set to the right value if a facet is around257 @ivar length : value for the C{xsd:length} facet, initialized to C{None} and set to the right value if a facet is around258 @ivar pattern : array of patterns for the C{xsd:pattern} facet, initialized to C{[]} and set to the right value if a facet is around259 @ivar langRange : array of language ranges for the C{rdf:langRange} facet, initialized to C{[]} and set to the right value if a facet is around260 @ivar check_methods : list of class methods that are relevant for the given C{base_type}261 @ivar toPython : function to convert a Literal of the specified type to a Python value. Is defined by C{lambda v : _lit_to_value(self, v)}, see L{_lit_to_value} 262 """263
323"""324 Check whether the (python) value abides to the constraints defined by the current facets.325 @param value: the value to be checked326 @rtype: boolean327 """328formethodinself.check_methods:329ifnotmethod(self,value):330returnFalse331returnTrue
334"""335 Check the (python) value against min exclusive facet.336 @param value: the value to be checked337 @rtype: boolean338 """339ifself.minExclusiveisnotNone:340returnself.minExclusive<value341else:342returnTrue
345"""346 Check the (python) value against min inclusive facet.347 @param value: the value to be checked348 @rtype: boolean349 """350ifself.minInclusiveisnotNone:351returnself.minInclusive<=value352else:353returnTrue
356"""357 Check the (python) value against max exclusive facet.358 @param value: the value to be checked359 @rtype: boolean360 """361ifself.maxExclusiveisnotNone:362returnvalue<self.maxExclusive363else:364returnTrue
367"""368 Check the (python) value against max inclusive facet.369 @param value: the value to be checked370 @rtype: boolean371 """372ifself.maxInclusiveisnotNone:373returnvalue<=self.maxInclusive374else:375returnTrue
378"""379 Check the (python) value against minimum length facet.380 @param value: the value to be checked381 @rtype: boolean382 """383ifisinstance(value,rdflibLiteral):384val=str(value)385else:386val=value387ifself.minLengthisnotNone:388returnself.minLength<=len(val)389else:390returnTrue
393"""394 Check the (python) value against maximum length facet.395 @param value: the value to be checked396 @rtype: boolean397 """398ifisinstance(value,rdflibLiteral):399val=str(value)400else:401val=value402ifself.maxLengthisnotNone:403returnself.maxLength>=len(val)404else:405returnTrue
408"""409 Check the (python) value against exact length facet.410 @param value: the value to be checked411 @rtype: boolean412 """413ifisinstance(value,rdflibLiteral):414val=str(value)415else:416val=value417ifself.lengthisnotNone:418returnself.length==len(val)419else:420returnTrue
423"""424 Check the (python) value against array of regular expressions.425 @param value: the value to be checked426 @rtype: boolean427 """428ifisinstance(value,rdflibLiteral):429val=str(value)430else:431val=value432forpinself.pattern:433ifp.match(val)isNone:434returnFalse435returnTrue
438"""439 Check the (python) value against array of language ranges.440 @param value: the value to be checked441 @rtype: boolean442 """443ifisinstance(value,rdflibLiteral):444lang=value.language445else:446returnFalse447forrinself.langRange:448ifnot_lang_range_check(r,lang):449returnFalse450returnTrue
An 'abstract' superclass for datatype restrictions. The instance
variables listed here are used in general, without the specificities of
the concrete restricted datatype.
This module defines the RestrictedDatatype class that corresponds to the
datatypes and their restrictions defined in the OWL 2 standard. Other
modules may subclass this class to define new datatypes with
restrictions.
RDFClosure.Literals._LiteralStructure:
This class serves as a wrapper around rdflib's Literal, by changing
the equality function to a strict identity of datatypes and lexical
values.
OWL-RL-7.1.4/Doc_OLD/crarr.png  [binary PNG image data omitted]
OWL-RL-7.1.4/Doc_OLD/epydoc.css
/* Epydoc CSS Stylesheet
*
* This stylesheet can be used to customize the appearance of epydoc's
* HTML output.
*
*/
/* Default Colors & Styles
* - Set the default foreground & background color with 'body'; and
* link colors with 'a:link' and 'a:visited'.
* - Use bold for decision list terms.
* - The heading styles defined here are used for headings *within*
* docstring descriptions. All headings used by epydoc itself use
* either class='epydoc' or class='toc' (CSS styles for both
* defined below).
*/
body { background: #ffffff; color: #000000; }
a:link { color: #0000ff; }
a:visited { color: #204080; }
dt { font-weight: bold; }
h1 { font-size: +140%;
font-style: italic;
font-weight: bold;
background: #cccc99; /* #EFEBCE; #005a9c; */
color: white;
margin-bottom: 0.5em;
}
h2 { font-size: +125%; font-style: italic;
font-weight: bold; }
h3 { font-size: +110%; font-style: italic;
font-weight: normal; }
code { font-size: 100%; }
/* Page Header & Footer
* - The standard page header consists of a navigation bar (with
* pointers to standard pages such as 'home' and 'trees'); a
* breadcrumbs list, which can be used to navigate to containing
* classes or modules; options links, to show/hide private
 * variables and to show/hide frames; and a page title (using
 * <h1>). The page title may be followed by a link to the
* corresponding source code (using 'span.codelink').
* - The footer consists of a navigation bar, a timestamp, and a
* pointer to epydoc's homepage.
*/
h1.epydoc { margin: 0; font-size: +140%; font-weight: bold; }
h2.epydoc { font-size: +130%; font-weight: bold; }
h3.epydoc { font-size: +115%; font-weight: bold; }
td h3.epydoc { font-size: +115%; font-weight: bold;
margin-bottom: 0; }
table.navbar { background: #cccc99; color: #000000;
border: 2px groove #c0d0d0; }
table.navbar table { color: #000000; }
th.navbar-select { background: #70b0ff;
color: #000000; }
table.navbar a { text-decoration: none; }
table.navbar a:link { color: #0000ff; }
table.navbar a:visited { color: #204080; }
span.breadcrumbs { font-size: 85%; font-weight: bold; }
span.options { font-size: 70%; }
span.codelink { font-size: 85%; }
td.footer { font-size: 85%; }
/* Table Headers
* - Each summary table and details section begins with a 'header'
* row. This row contains a section title (marked by
* 'span.table-header') as well as a show/hide private link
* (marked by 'span.options', defined above).
* - Summary tables that contain user-defined groups mark those
* groups using 'group header' rows.
*/
td.table-header { background: #EFEBCE; color: #000000;
border: 1px solid #608090; }
td.table-header table { color: #000000; }
td.table-header table a:link { color: #0000ff; }
td.table-header table a:visited { color: #204080; }
span.table-header { font-size: 120%; font-weight: bold; }
th.group-header { background: #c0e0f8; color: #000000;
text-align: left; font-style: italic;
font-size: 115%;
border: 1px solid #608090; }
/* Summary Tables (functions, variables, etc)
* - Each object is described by a single row of the table with
* two cells. The left cell gives the object's type, and is
* marked with 'code.summary-type'. The right cell gives the
* object's name and a summary description.
* - CSS styles for the table's header and group headers are
* defined above, under 'Table Headers'
*/
table.summary { border-collapse: collapse;
background: #e8f0f8; color: #000000;
border: 1px solid #608090;
margin-bottom: 0.5em; }
td.summary { border: 1px solid #608090; }
code.summary-type { font-size: 85%; }
table.summary a:link { color: #0000ff; }
table.summary a:visited { color: #204080; }
/* Details Tables (functions, variables, etc)
* - Each object is described in its own div.
* - A single-row summary table w/ table-header is used as
* a header for each details section (CSS style for table-header
* is defined above, under 'Table Headers').
*/
table.details { border-collapse: collapse;
background: #e8f0f8; color: #000000;
border: 1px solid #608090;
margin: .2em 0 0 0; }
table.details table { color: #000000; }
table.details a:link { color: #0000ff; }
table.details a:visited { color: #204080; }
/* Fields */
dl.fields { margin-left: 2em; margin-top: 1em;
margin-bottom: 1em; }
dl.fields dd ul { margin-left: 0em; padding-left: 0em; }
div.fields { margin-left: 2em; }
div.fields p { margin-bottom: 0.5em; }
/* Index tables (identifier index, term index, etc)
* - link-index is used for indices containing lists of links
* (namely, the identifier index & term index).
* - index-where is used in link indices for the text indicating
* the container/source for each link.
* - metadata-index is used for indices containing metadata
* extracted from fields (namely, the bug index & todo index).
*/
table.link-index { border-collapse: collapse;
background: #e8f0f8; color: #000000;
border: 1px solid #608090; }
td.link-index { border-width: 0px; }
table.link-index a:link { color: #0000ff; }
table.link-index a:visited { color: #204080; }
span.index-where { font-size: 70%; }
table.metadata-index { border-collapse: collapse;
background: #e8f0f8; color: #000000;
border: 1px solid #608090;
margin: .2em 0 0 0; }
td.metadata-index { border-width: 1px; border-style: solid; }
table.metadata-index a:link { color: #0000ff; }
table.metadata-index a:visited { color: #204080; }
/* Function signatures
* - sig* is used for the signature in the details section.
* - .summary-sig* is used for the signature in the summary
* table, and when listing property accessor functions.
* */
.sig-name { color: #006080; }
.sig-arg { color: #008060; }
.sig-default { color: #602000; }
.summary-sig { font-family: monospace; }
.summary-sig-name { color: #006080; font-weight: bold; }
table.summary a.summary-sig-name:link
{ color: #006080; font-weight: bold; }
table.summary a.summary-sig-name:visited
{ color: #006080; font-weight: bold; }
.summary-sig-arg { color: #006040; }
.summary-sig-default { color: #501800; }
/* To render variables, classes etc. like functions */
table.summary .summary-name { color: #006080; font-weight: bold;
font-family: monospace; }
table.summary
a.summary-name:link { color: #006080; font-weight: bold;
font-family: monospace; }
table.summary
a.summary-name:visited { color: #006080; font-weight: bold;
font-family: monospace; }
/* Variable values
* - In the 'variable details' sections, each varaible's value is
* listed in a 'pre.variable' box. The width of this box is
* restricted to 80 chars; if the value's repr is longer than
* this it will be wrapped, using a backslash marked with
* class 'variable-linewrap'. If the value's repr is longer
* than 3 lines, the rest will be ellided; and an ellipsis
* marker ('...' marked with 'variable-ellipsis') will be used.
* - If the value is a string, its quote marks will be marked
* with 'variable-quote'.
* - If the variable is a regexp, it is syntax-highlighted using
* the re* CSS classes.
*/
pre.variable { padding: .5em; margin: 0;
background: #dce4ec; color: #000000;
border: 1px solid #708890; }
.variable-linewrap { color: #604000; font-weight: bold; }
.variable-ellipsis { color: #604000; font-weight: bold; }
.variable-quote { color: #604000; font-weight: bold; }
.variable-group { color: #008000; font-weight: bold; }
.variable-op { color: #604000; font-weight: bold; }
.variable-string { color: #006030; }
.variable-unknown { color: #a00000; font-weight: bold; }
.re { color: #000000; }
.re-char { color: #006030; }
.re-op { color: #600000; }
.re-group { color: #003060; }
.re-ref { color: #404040; }
/* Base tree
* - Used by class pages to display the base class hierarchy.
*/
pre.base-tree { font-size: 80%; margin: 0; }
/* Frames-based table of contents headers
* - Consists of two frames: one for selecting modules; and
* the other listing the contents of the selected module.
* - h1.toc is used for each frame's heading
* - h2.toc is used for subheadings within each frame.
*/
h1.toc { text-align: center; font-size: 105%;
margin: 0; font-weight: bold;
padding: 0; }
h2.toc { font-size: 100%; font-weight: bold;
margin: 0.5em 0 0 -0.3em; }
/* Syntax Highlighting for Source Code
* - doctest examples are displayed in a 'pre.py-doctest' block.
* If the example is in a details table entry, then it will use
* the colors specified by the 'table pre.py-doctest' line.
* - Source code listings are displayed in a 'pre.py-src' block.
* Each line is marked with 'span.py-line' (used to draw a line
* down the left margin, separating the code from the line
* numbers). Line numbers are displayed with 'span.py-lineno'.
* The expand/collapse block toggle button is displayed with
* 'a.py-toggle' (Note: the CSS style for 'a.py-toggle' should not
* modify the font size of the text.)
* - If a source code page is opened with an anchor, then the
* corresponding code block will be highlighted. The code
* block's header is highlighted with 'py-highlight-hdr'; and
* the code block's body is highlighted with 'py-highlight'.
* - The remaining py-* classes are used to perform syntax
* highlighting (py-string for string literals, py-name for names,
* etc.)
*/
pre.py-doctest { padding: .5em; margin: 1em;
background: #e8f0f8; color: #000000;
border: 1px solid #708890; }
table pre.py-doctest { background: #dce4ec;
color: #000000; }
pre.py-src { border: 2px solid #000000;
background: #f0f0f0; color: #000000; }
.py-line { border-left: 2px solid #000000;
margin-left: .2em; padding-left: .4em; }
.py-lineno { font-style: italic; font-size: 90%;
padding-left: .5em; }
a.py-toggle { text-decoration: none; }
div.py-highlight-hdr { border-top: 2px solid #000000;
border-bottom: 2px solid #000000;
background: #d8e8e8; }
div.py-highlight { border-bottom: 2px solid #000000;
background: #d0e0e0; }
.py-prompt { color: #005050; font-weight: bold;}
.py-more { color: #005050; font-weight: bold;}
.py-string { color: #006030; }
.py-comment { color: #003060; }
.py-keyword { color: #600000; }
.py-output { color: #404040; }
.py-name { color: #000050; }
.py-name:link { color: #000050 !important; }
.py-name:visited { color: #000050 !important; }
.py-number { color: #005000; }
.py-defname { color: #000060; font-weight: bold; }
.py-def-name { color: #000060; font-weight: bold; }
.py-base-class { color: #000060; }
.py-param { color: #000060; }
.py-docstring { color: #006030; }
.py-decorator { color: #804020; }
/* Use this if you don't want links to names underlined: */
/*a.py-name { text-decoration: none; }*/
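/* Illustrative sketch only (assumed markup, not emitted by this stylesheet):
 * a doctest line is expected to be marked up roughly as
 *   <pre class="py-doctest"><span class="py-prompt">&gt;&gt;&gt; </span>
 *   <span class="py-keyword">print</span> <span class="py-string">'text'</span></pre>
 * so that the py-* rules above colour the prompt, keyword and string parts.
 */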
/* Graphs & Diagrams
* - These CSS styles are used for graphs & diagrams generated using
* Graphviz dot. 'img.graph-without-title' is used for bare
* diagrams (to remove the border created by making the image
* clickable).
*/
img.graph-without-title { border: none; }
img.graph-with-title { border: 1px solid #000000; }
span.graph-title { font-weight: bold; }
span.graph-caption { }
/* General-purpose classes
* - 'p.indent-wrapped-lines' defines a paragraph whose first line
* is not indented, but whose subsequent lines are.
* - The 'nomargin-top' class is used to remove the top margin (e.g.
* from lists). The 'nomargin' class is used to remove both the
* top and bottom margin (but not the left or right margin --
* for lists, that would cause the bullets to disappear.)
*/
p.indent-wrapped-lines { padding: 0 0 0 7em; text-indent: -7em;
margin: 0; }
.nomargin-top { margin-top: 0; }
.nomargin { margin-top: 0; margin-bottom: 0; }
/* HTML Log */
div.log-block { padding: 0; margin: .5em 0 .5em 0;
background: #e8f0f8; color: #000000;
border: 1px solid #000000; }
div.log-error { padding: .1em .3em .1em .3em; margin: 4px;
background: #ffb0b0; color: #000000;
border: 1px solid #000000; }
div.log-warning { padding: .1em .3em .1em .3em; margin: 4px;
background: #ffffb0; color: #000000;
border: 1px solid #000000; }
div.log-info { padding: .1em .3em .1em .3em; margin: 4px;
background: #b0ffb0; color: #000000;
border: 1px solid #000000; }
h2.log-hdr { background: #70b0ff; color: #000000;
margin: 0; padding: 0em 0.5em 0em 0.5em;
border-bottom: 1px solid #000000; font-size: 110%; }
p.log { font-weight: bold; margin: .5em 0 .5em 0; }
tr.opt-changed { color: #000000; font-weight: bold; }
tr.opt-default { color: #606060; }
pre.log { margin: 0; padding: 0; padding-left: 1em; }
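/* Illustrative sketch only (assumed markup, not defined in this file): an
 * HTML log section is expected to nest roughly as
 *   <div class="log-block">
 *     <h2 class="log-hdr">Warnings</h2>
 *     <div class="log-warning"><p class="log">message...</p></div>
 *   </div>
 * so the block, header and per-entry rules above compose as intended.
 */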
OWL-RL-7.1.4/Doc_OLD/epydoc.js 0000664 0000000 0000000 00000024525 15042011661 0015527 0 ustar 00root root 0000000 0000000 function toggle_private() {
// Search for any private/public links on this page. Store
// their old text in "cmd," so we will know what action to
// take; and change their text to the opposite action.
var cmd = "?";
var elts = document.getElementsByTagName("a");
for(var i=0; i";
s += " ";
for (var i=0; i... ";
elt.innerHTML = s;
}
}
function toggle(id) {
elt = document.getElementById(id+"-toggle");
if (elt.innerHTML == "-")
collapse(id);
else
expand(id);
return false;
}
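// Illustrative only (assumed markup, not generated by this file): each code
// block 'foo' is expected to have companion elements whose ids are
// 'foo-toggle', 'foo-expanded' and 'foo-collapsed', e.g.
//   <a id="foo-toggle" href="#" class="py-toggle" onclick="return toggle('foo');">-</a>
// so that toggle(), expand() and collapse() can look them up by id.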
function highlight(id) {
var elt = document.getElementById(id+"-def");
if (elt) elt.className = "py-highlight-hdr";
var elt = document.getElementById(id+"-expanded");
if (elt) elt.className = "py-highlight";
var elt = document.getElementById(id+"-collapsed");
if (elt) elt.className = "py-highlight";
}
function num_lines(s) {
var n = 1;
var pos = s.indexOf("\n");
while ( pos > 0) {
n += 1;
pos = s.indexOf("\n", pos+1);
}
return n;
}
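// For example, num_lines("a\nb\nc") returns 3, counting one line per
// newline-separated segment of the string.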
// Collapse all blocks that have more than `min_lines` lines.
function collapse_all(min_lines) {
var elts = document.getElementsByTagName("div");
for (var i=0; i<elts.length; i++) {
    var elt = elts[i];
    var split = elt.id.indexOf("-");
    if (split > 0)
if (elt.id.substring(split, elt.id.length) == "-expanded")
if (num_lines(elt.innerHTML) > min_lines)
collapse(elt.id.substring(0, split));
}
}
function expandto(href) {
var start = href.indexOf("#")+1;
if (start != 0 && start != href.length) {
if (href.substring(start, href.length) != "-") {
collapse_all(4);
pos = href.indexOf(".", start);
while (pos != -1) {
var id = href.substring(start, pos);
expand(id);
pos = href.indexOf(".", pos+1);
}
var id = href.substring(start, href.length);
expand(id);
highlight(id);
}
}
}
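// Illustrative usage (an assumption, not part of this file): a generated
// source page could call this from its onload handler, e.g.
//   <body onload="expandto(location.href);">
// so that the block named in the URL fragment is expanded and highlighted.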
function kill_doclink(id) {
var parent = document.getElementById(id);
parent.removeChild(parent.childNodes.item(0));
}
function auto_kill_doclink(ev) {
if (!ev) var ev = window.event;
if (!this.contains(ev.toElement)) {
var parent = document.getElementById(this.parentID);
parent.removeChild(parent.childNodes.item(0));
}
}
function doclink(id, name, targets_id) {
var elt = document.getElementById(id);
// If we already opened the box, then destroy it.
// (This case should never occur, but leave it in just in case.)
if (elt.childNodes.length > 1) {
elt.removeChild(elt.childNodes.item(0));
}
else {
// The outer box: relative + inline positioning.
var box1 = document.createElement("div");
box1.style.position = "relative";
box1.style.display = "inline";
box1.style.top = 0;
box1.style.left = 0;
// A shadow for fun
var shadow = document.createElement("div");
shadow.style.position = "absolute";
shadow.style.left = "-1.3em";
shadow.style.top = "-1.3em";
shadow.style.background = "#404040";
// The inner box: absolute positioning.
var box2 = document.createElement("div");
box2.style.position = "relative";
box2.style.border = "1px solid #a0a0a0";
box2.style.left = "-.2em";
box2.style.top = "-.2em";
box2.style.background = "white";
box2.style.padding = ".3em .4em .3em .4em";
box2.style.fontStyle = "normal";
box2.onmouseout=auto_kill_doclink;
box2.parentID = id;
// Get the targets
var targets_elt = document.getElementById(targets_id);
var targets = targets_elt.getAttribute("targets");
var links = "";
target_list = targets.split(",");
for (var i=0; i<target_list.length; i++) {
    var target = target_list[i].split("=");
    links += "<li><a href='" + target[1] + "'>" +
             target[0] + "</a></li>";
  }
// Put it all together.
elt.insertBefore(box1, elt.childNodes.item(0));
//box1.appendChild(box2);
box1.appendChild(shadow);
shadow.appendChild(box2);
box2.innerHTML =
"Which "+name+" do you want to see documentation for?" +
"
";
}
return false;
}
function get_anchor() {
var href = location.href;
var start = href.indexOf("#")+1;
if ((start != 0) && (start != href.length))
return href.substring(start, href.length);
}
function redirect_url(dottedName) {
// Scan through each element of the "pages" list, and check
// if "name" matches with any of them.
for (var i=0; i<pages.length; i++) {
    // Each page has the form "<pagename>-m" or "<pagename>-c";
    // extract the <pagename> portion & compare it to dottedName.
var pagename = pages[i].substring(0, pages[i].length-2);
if (pagename == dottedName.substring(0,pagename.length)) {
// We've found a page that matches `dottedName`;
// construct its URL, using leftover `dottedName`
// content to form an anchor.
var pagetype = pages[i].charAt(pages[i].length-1);
var url = pagename + ((pagetype=="m")?"-module.html":
"-class.html");
if (dottedName.length > pagename.length)
url += "#" + dottedName.substring(pagename.length+1,
dottedName.length);
return url;
}
}
}
OWL-RL-7.1.4/Doc_OLD/frames.html 0000664 0000000 0000000 00000001120 15042011661 0016033 0 ustar 00root root 0000000 0000000
API Documentation
OWL-RL-7.1.4/Doc_OLD/help.html 0000664 0000000 0000000 00000024737 15042011661 0015531 0 ustar 00root root 0000000 0000000
Help
This document contains the API (Application Programming Interface)
documentation for this project. Documentation for the Python
objects defined by the project is divided into separate pages for each
package, module, and class. The API documentation also includes two
pages containing information about the project as a whole: a trees
page, and an index page.
Object Documentation
Each Package Documentation page contains:
A description of the package.
A list of the modules and sub-packages contained by the
package.
A summary of the classes defined by the package.
A summary of the functions defined by the package.
A summary of the variables defined by the package.
A detailed description of each function defined by the
package.
A detailed description of each variable defined by the
package.
Each Module Documentation page contains:
A description of the module.
A summary of the classes defined by the module.
A summary of the functions defined by the module.
A summary of the variables defined by the module.
A detailed description of each function defined by the
module.
A detailed description of each variable defined by the
module.
Each Class Documentation page contains:
A class inheritance diagram.
A list of known subclasses.
A description of the class.
A summary of the methods defined by the class.
A summary of the instance variables defined by the class.
A summary of the class (static) variables defined by the
class.
A detailed description of each method defined by the
class.
A detailed description of each instance variable defined by the
class.
A detailed description of each class (static) variable defined
by the class.
Project Documentation
The Trees page contains the module and class hierarchies:
The module hierarchy lists every package and module, with
modules grouped into packages. At the top level, and within each
package, modules and sub-packages are listed alphabetically.
The class hierarchy lists every class, grouped by base
class. If a class has more than one base class, then it will be
listed under each base class. At the top level, and under each base
class, classes are listed alphabetically.
The Index page contains indices of terms and
identifiers:
The term index lists every term indexed by any object's
documentation. For each term, the index provides links to each
place where the term is indexed.
The identifier index lists the (short) name of every package,
module, class, method, function, variable, and parameter. For each
identifier, the index provides a short description, and a link to
its documentation.
The Table of Contents
The table of contents occupies the two frames on the left side of
the window. The upper-left frame displays the project
contents, and the lower-left frame displays the module
contents:
[Frame layout: Project Contents (upper left) and Module Contents (lower left) beside the API Documentation Frame.]
The project contents frame contains a list of all packages
and modules that are defined by the project. Clicking on an entry
will display its contents in the module contents frame. Clicking on a
special entry, labeled "Everything," will display the contents of
the entire project.
The module contents frame contains a list of every
submodule, class, type, exception, function, and variable defined by a
module or package. Clicking on an entry will display its
documentation in the API documentation frame. Clicking on the name of
the module, at the top of the frame, will display the documentation
for the module itself.
The "frames" and "no frames" buttons below the top
navigation bar can be used to control whether the table of contents is
displayed or not.
The Navigation Bar
A navigation bar is located at the top and bottom of every page.
It indicates what type of page you are currently viewing, and allows
you to go to related pages. The following table describes the labels
on the navigation bar. Note that some labels (such as
[Parent]) are not displayed on all pages.
Label
Highlighted when...
Links to...
[Parent]
(never highlighted)
the parent of the current package
[Package]
viewing a package
the package containing the current object
[Module]
viewing a module
the module containing the current object
[Class]
viewing a class
the class containing the current object
[Trees]
viewing the trees page
the trees page
[Index]
viewing the index page
the index page
[Help]
viewing the help page
the help page
The "show private" and "hide private" buttons below
the top navigation bar can be used to control whether documentation
for private objects is displayed. Private objects are usually defined
as objects whose (short) names begin with a single underscore, but do
not end with an underscore. For example, "_x",
"__pprint", and "epydoc.epytext._tokenize"
are private objects; but "re.sub",
"__init__", and "type_" are not. However,
if a module defines the "__all__" variable, then its
contents are used to decide which objects are private.
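As a rough sketch only (this helper is not part of epydoc; it merely restates
the naming rule above in JavaScript):
function looksPrivate(shortName) {
    // begins with an underscore but does not end with one
    return /^_/.test(shortName) && !/_$/.test(shortName);
}
// looksPrivate("_x") and looksPrivate("__pprint") are true;
// looksPrivate("__init__"), looksPrivate("type_") and looksPrivate("sub") are false.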
A timestamp below the bottom navigation bar indicates when each
page was last updated.
RDFClosure: This module is a brute force implementation of the 'finite' version
of the RDFS semantics and of OWL 2 RL on top of RDFLib (with some caveats,
see below).
RDFClosure.RDFSClosure: This module is a brute force implementation of the RDFS
semantics on top of RDFLib (with some caveats, see the introductory text).
When JavaScript is enabled, this page will redirect URLs of
the form redirect.html#dotted.name to the
documentation for the object with the given fully-qualified
dotted name.
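As an illustration only (the page list below is hypothetical; real entries are
generated by epydoc), the redirect_url() helper in Doc_OLD/epydoc.js maps a
dotted name to a page URL roughly like this:
var pages = ["RDFClosure.RDFSClosure-m"];
// redirect_url("RDFClosure.RDFSClosure.RDFS_Semantics")
//   -> "RDFClosure.RDFSClosure-module.html#RDFS_Semantics"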