MiTO, the Mention Typing Ontology, is an OWL 2 DL ontology designed to formalize the concept of mentions. Mentions can be explicit (e.g., directly stated references in text) or implicit (e.g., indirect references to entities or concepts). The ontology aims to enhance interoperability with the other SPAR ontologies by providing a flexible framework for describing mentions and their characteristics. Mentions are crucial for bibliometric analysis, as they allow researchers to track references to entities (e.g., software) in scholarly communication.

MiTO introduces the following key concepts:

* **Mention**: represents the act of referring to or introducing the name of a person, entity, or concept in discourse. This class is defined with minimal restrictions to maximize usability across different contexts.
* **MentionType**: reifies the implicit or explicit nature of a mention, enabling users to classify mentions based on their characteristics.

MiTO also defines several object properties to describe the relationships between mentions and entities:

* `mito:mentions` and its inverse `mito:isMentionedBy`: connect the two entities involved in the act of mentioning.
* `mito:hasMentionedEntity`: relates a mention to the entity being mentioned.
* `mito:hasMentioningEntity`: relates a mention to the entity performing the act of mentioning.
* `mito:hasMentionType`: defines the implicit or explicit character of the mention.

MiTO is designed to integrate seamlessly with the other SPAR ontologies, which together provide a modular framework for describing crucial aspects of bibliographic entities and their metadata. The ontology is distributed under a Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/legalcode).

<img class="img-responsive center-block" src="/static/img/spar/mito-core-diagram.png" alt="A Graffoo diagram introducing the Mention Typing Ontology." />

MiTO is a valuable tool for researchers, publishers, and institutions seeking to analyze and understand the role of mentions in scholarly discourse.
**MiTO** lets you connect two resources where one (the *mentioning entity*, e.g., a paper) mentions another (the *mentioned entity*, e.g., a piece of software, a dataset, or a method), optionally qualifying the mention with a **mention type** (e.g., explicit or implicit). As with CiTO, you can use either a **direct** triple (`mito:mentions`) *or* a **reified** mention (an instance of `mito:Mention`) to which you can attach further metadata, such as the mention type.
@prefix : <http://www.sparontologies.net/example/> .
@prefix mito: <http://purl.org/spar/mito/> .
@prefix fabio: <http://purl.org/spar/fabio/> .
@prefix oa: <http://www.w3.org/ns/oa#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix cnt: <http://www.w3.org/2011/content#> .
@prefix c4o: <http://purl.org/spar/c4o/> .
@prefix per: <http://data.semanticweb.org/person/> .
# The research paper
:ml-paper-2024 a fabio:JournalArticle ;
dcterms:title "Deep Learning Approaches for Medical Image Segmentation" ;
mito:mentions :tensorflow .
# The mentioned software
:tensorflow a fabio:ComputerProgram ;
foaf:name "TensorFlow" ;
mito:isMentionedBy :ml-paper-2024 .
# Reified mention with explicit type
:mention-tensorflow a mito:Mention ;
mito:hasMentioningEntity :ml-paper-2024 ;
mito:hasMentionedEntity :tensorflow ;
mito:hasMentionType mito:ExplicitMention .
# The research entities
:vision-paper a fabio:JournalArticle ;
dcterms:title "Novel Architectures for Biomedical Image Classification" .
:imagenet a fabio:Dataset ;
dcterms:title "ImageNet Large Scale Visual Recognition Challenge Dataset" .
# Direct mention relationship
:vision-paper mito:mentions :imagenet .
:imagenet mito:isMentionedBy :vision-paper .
# Reified mention for detailed characterization
:dataset-mention a mito:Mention ;
mito:hasMentioningEntity :vision-paper ;
mito:hasMentionedEntity :imagenet ;
mito:hasMentionType mito:ExplicitMention .
# Open Annotation providing context about the mention
:mention-annotation a oa:Annotation ;
oa:motivatedBy oa:commenting ;
oa:hasTarget :dataset-mention ;
oa:hasBody :usage-context .
:usage-context a cnt:ContentAsText ;
cnt:chars "ImageNet is used exclusively for pre-training the backbone network, not for final model evaluation." .
# Research entities
:nlp-paper a fabio:JournalArticle ;
dcterms:title "Contextual Embeddings for Scientific Text Classification" .
:bert-method a fabio:ResearchPaper ;
dcterms:title "Bidirectional Encoder Representations from Transformers" .
# Reified mention
:bert-mention a mito:Mention ;
mito:hasMentioningEntity :nlp-paper ;
mito:hasMentionedEntity :bert-method ;
mito:hasMentionType mito:ExplicitMention .
# In-text pointer in the methods section
:bert-pointer a c4o:InTextReferencePointer ;
c4o:hasContent "[BERT]" .
# Annotation linking pointer to mention
:pointer-annotation a oa:Annotation ;
oa:hasTarget :bert-pointer ;
oa:hasBody :bert-mention ;
oa:annotatedBy per:research-team .
# Direct relationships for convenience
:nlp-paper mito:mentions :bert-method .
:bert-method mito:isMentionedBy :nlp-paper .
# Research entities
:stats-paper a fabio:JournalArticle ;
dcterms:title "Bayesian Approaches to Uncertainty Quantification in Deep Learning" .
:bayesian-theory a fabio:ResearchPaper ;
dcterms:title "Bayesian Statistical Inference" .
# Direct mention (implicit)
:stats-paper mito:mentions :bayesian-theory .
:bayesian-theory mito:isMentionedBy :stats-paper .
# Reified implicit mention
:implicit-theory-mention a mito:Mention ;
mito:hasMentioningEntity :stats-paper ;
mito:hasMentionedEntity :bayesian-theory ;
mito:hasMentionType mito:ImplicitMention .
# Annotation explaining the implicit nature
:implicit-annotation a oa:Annotation ;
oa:motivatedBy oa:commenting ;
oa:hasTarget :implicit-theory-mention ;
oa:hasBody :implicit-explanation .
:implicit-explanation a cnt:ContentAsText ;
cnt:chars "Bayesian theory is applied throughout the methodology but never explicitly referenced by name." .
Please cite the source of the examples above with the following reference: a formal reference will be made available soon.