All Classes and Interfaces

Class
Description
 
Adds the provided extensions to the alignment when the matcher is executed.
This is a simple matcher that adds a given alignment to the inputAlignment.
Adds an additional confidence via a user-chosen function which gets an OntResource and has to return a double.
It filters based on the additional confidence.
An AddNegatives matcher requires an ideally correct alignment as input to its match function (input alignment).
Abstract class which is the base class for all AddNegatives which are based on random sampling.
This component adds negative samples to the alignment.
This component adds negative samples to the alignment.
This component adds negative samples to the alignment.
This component adds negative correspondences to the input alignment via an alignment (generated by a recall optimized matcher).
This component adds negative correspondences to the input alignment via a recall optimized matcher.
This matcher will detect the test case given in the input and sample from its reference (gold standard) with the given rate; the sampled correspondences are added to the input alignment.
Class for computing hierarchy ranks in directed graphs (with cycles). This is an implementation of the paper "Faster way to agony - Discovering hierarchies in directed graphs" by Nikolaj Tatti, which is an improved version of the paper "Hierarchies in directed networks" by Nikolaj Tatti. Code is available at https://users.ics.aalto.fi/ntatti/software.shtml.
Helper class for Agony.
Helper class for Agony.
Helper class for Agony.
Helper class for Agony.
Helper class for Agony.
Helper class for Agony.
Deprecated.
use DotGraphUtil instead.
 
Filter which makes an alignment coherent.
Data structure to represent an Alignment.
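For orientation, a minimal sketch of how such an Alignment might be created and filled (the package path and the add overloads are taken from the YAAA API as far as known and may differ in detail; the URIs are purely illustrative):

import de.uni_mannheim.informatik.dws.melt.yet_another_alignment_api.Alignment;
import de.uni_mannheim.informatik.dws.melt.yet_another_alignment_api.Correspondence;

public class AlignmentSketch {
    public static void main(String[] args) {
        Alignment alignment = new Alignment();
        // correspondence with default confidence and relation (assumed convenience overload)
        alignment.add("http://example.org/one#Car", "http://example.org/two#Automobile");
        // correspondence with an explicit confidence (assumed convenience overload)
        alignment.add("http://example.org/one#Person", "http://example.org/two#Human", 0.85);
        for (Correspondence c : alignment) {
            System.out.println(c.getEntityOne() + " - " + c.getEntityTwo() + " : " + c.getConfidence());
        }
    }
}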
Converts Alignment to Set of MappingObjectStr.
 
The AlignmentAnalyzerMetric is capable of calculating statistics about a finished alignment.
The AlignmentAnalyzerResult is the output of the AlignmentAnalyzerMetric.
Because a matcher can only return one value, but the alignment and the parameters can both be changed, an extra object is necessary.
This refiner will create the closure of the system and reference alignment.
The AlignmentHandler manages the parsing of alignment files.
The AlignmentParser can parse XML files following the convention described in the Alignment Format.
Just saves the ontologies in a specific format.
Analytical Store for alignments.
The AlignmentSerializer writes an Alignment to a file.
Class which helps to repair alignment files which are for example not correctly encoded.
 
This data structure can be used to store all kind of analytical information about one alignment.
Data structure for some often used features.
This filter removes correspondences where the source or target is a blank node.
An argument scope consists of multiple arguments; if one argument cannot be substituted, the whole argument scope will be empty.
 
Arrays2
Class holding the authorization configuration.
 
A dictionary that will use BabelNet offline indices.
Links concepts to BabelNet (using the BabelNet indices).
Links concepts to BabelNet (using the RDF dataset - NOT the indices).
Template matcher where the background knowledge and the exploitation strategy (represented as ImplementedBackgroundMatchingStrategies) can be plugged in.
Matcher which applies string matching and then matches with the provided background knowledge source and strategy.
A tools class containing static functionality for string-based matching.
This filter removes correspondences where the source or target does not have the same host as the OntModels.
Data structure which keeps the notion of the original ordering, but whose equals method will ignore the ordering.
Filters individual/instance mappings by comparing literals.
Basic filter for instances which compares sets like neighbours or properties.
A very basic string matcher that can be used as baseline for matchers.
This class considers two Strings to be equal when they contain the same tokens.
An enum which describes strategies for optimizing the batch size.
 
Structure-based matcher which finds matches in hierarchies that lie between two already matched entities.
Configuration object for the BoundedPathMatching class.
Example: Input: "hello" Output: "Hello"
 
Deprecated.
This StringModifier removes all characters that are not a letter and then applies the TokenizeSpaceSeparateLowercaseModifier.
The distance measure to use for clustering.
An interface to choose between different implementations of clusterers like SMILE library or ELKI.
Clusterer based on the ELKI library, always using the Anderberg algorithm.
Clusterer based on the SMILE library.
Clustering
The clustering linkage.
Helper class for adding and filtering correspondences based on cluster assignments.
 
 
50% Jaccard, 50% Overlap Coefficient.
Filter which deletes instance mappings if they have no matched properties in common.
This class can compute two things:
1) Communities detected by the Louvain algorithm.
This enum represents different resource types that may occur in an ontology.
Combines the additional confidences and sets the overall correspondence confidence to be the mean of the selected confidences.
Filters the alignment by computing the inflection point of the sorted confidences.
This filter returns only alignments with a confidence greater than or equal to a specific threshold.
This class offers static functionality to analyze and optimize matchers in terms of their confidences (and confidence thresholds).
 
Data Structure for an individual confusion matrix.
 
Data Structure for an individual confusion matrix.
Confusion Matrix Metric.
Defines different modes for how a model should be copied during incremental merge.
A Correspondence contains a relation that holds between two elements from two different ontologies.
Comparator for Correspondence.
Enumeration for the relations used in a Correspondence such as "equivalence"/"=".
Deprecated.
use the counter in matching-jena-matchers.
A Counter is for counting arbitrary objects.
This class allows analyzing the concept coverage given a data source.
The result object of Coverage.
 
This class removes cycles based on the Agony algorithm.
Generates a dashboard with dc.js components based on the generated CSV file.
Extracts from a URI the corresponding source / dataset identifier (which needs to be included in the URI, e.g. as a specific domain).
Extracts the dataset id from a whole model based on sampling some resources.
Extracts the dataset id given a URL pattern which is currently a prefix and infix.
Extracts the dataset id given a map of URL prefixes and corresponding dataset ID.
Small utilities for dataset id extraction.
Store accessible to all matchers in which variables and results can be persisted.
Links DBpedia embeddings using the "normal" DBpedia linker (DBpediaLinker).
DBpedia knowledge source.
 
 
This class is only a helper class.
Default vocabulary as given by http://alignapi.gforge.inria.fr/labels.html.
Alignment server extensions.
Argumentation Extensions
Dublin Core Extensions
Linkkey Extensions
Additional vocabulary introduced with the MELT framework.
The Ontology Metadata Vocabulary, being a metadata ontology, introduces many different labels that can be used in alignments and correspondences, but also defines its own sorts of objects that can be annotated.
OMWG Extensions
Standard API extensions by the AlignmentAPI.
 
 
A helper class which contains some static functions which are often used in dispatchers.
The job/callable to compute the distance matrix in parallel.
A result of the distance matrix computation.
Updates the confidence of already matched resources.
ResultCallback logging directly using the SLF4J logger.
An exception when docker is not running.
ResultCallback collecting the full log and returning it as a single string for further processing.
A base class for all matchers which write a CSV file where every line represents a resource, with the first cell being an identifier (like the URI) and the second cell the corresponding tokens (whitespace separated).
Util to write Dot graphs.
 
This class considers two Strings to be equal when they contain the same tokens with stopwords removed.
Removes stopwords before comparing strings.
This class considers two Strings to be equal when they contain the same tokens with stopwords removed.
An Error handler that does nothing except for throwing an Exception in fatal cases.
 
Abstract class for all default evaluators.
Evaluates the alignments (min/max confidence, type of relations, correct positions of URIs, etc.) and writes the output to the results folder.
A basic evaluator that is easy on Memory and prints the performance results per test case in CSV format.
This evaluator simply writes the system alignments of individual ExecutionResult instances to a file in the results folder.
This evaluator is capable of persisting the results of the matching process in a CSV file (which can be consumed in Excel, for example).
Implementation of a significance test according to information specified in: Mohammadi, Majid; Atashin, Amir Ahooye; Hofman, Wout; Tan, Yaohua.
Abstract class for all multisource evaluators.
 
A basic evaluator that is easy on Memory and prints the performance results per test case in CSV format.
A rank evaluator which writes a file resultsRanking.csv.
An evaluator that calculates rank metrics on a per-element basis for each element of a specified source ontology.
 
A class offering multiple services to evaluators (building blocks for quick evaluator development).
Matcher which creates correspondences based on exact string match.
This class represents the result of a matcher execution.
This class represents the result of a multi source matcher execution.
A collection of individual ExecutionResult instances that are typically returned by an Executor.
A collection of individual ExecutionResultMultiSource instances that are typically returned by an ExecutorMultiSource.
Data structure to hold two ExecutionResult instances.
Individual execution object for parallel execution.
A helper class which stores the information for one multi source matching task.
The Executor runs a matcher or a list of matchers on a single test case or a list of test cases.
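To illustrate the typical workflow, a minimal sketch of running a matcher on a track and evaluating the result (package paths follow the usual MELT module layout and are assumptions; MyMatcher stands in for any matcher implementation of your own):

import de.uni_mannheim.informatik.dws.melt.matching_data.TrackRepository;
import de.uni_mannheim.informatik.dws.melt.matching_eval.ExecutionResultSet;
import de.uni_mannheim.informatik.dws.melt.matching_eval.Executor;
import de.uni_mannheim.informatik.dws.melt.matching_eval.evaluator.EvaluatorCSV;

public class RunAndEvaluate {
    public static void main(String[] args) {
        // run one matcher on all test cases of the Anatomy track
        ExecutionResultSet results = Executor.run(TrackRepository.Anatomy.Default, new MyMatcher());
        // write precision/recall/F-measure statistics as CSV files into the default results folder
        new EvaluatorCSV(results).writeToDirectory();
    }
}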
 
Executes the multi source task in parallel.
Executor to run matchers in parallel.
This executor can run SEALS matchers.
A simple IExplainerResource which is capable of retrieving properties for given resources.
A simple IExplainerResource which returns the type of the resource.
 
 
Handles everything concerning an external process. When no ProcessOutputConsumer is added, the default is to discard the output.
An external resource carries a name, has a linker, and can be asked whether it holds a representation for a specified word.
 
 
A filter for multi source matching.
File cache which can be used to store a java object in a file and load it from that file if the program runs a second time.
 
 
Just saves the ontologies in a specific format.
Helper for creating files etc.
Interface for filters.
This filter of correspondences is based on the community structure of the correspondences.
This is a simple matcher that always forwards a given alignment (even if the input alignment is available).
This is a simple matcher that forwards a given alignment if the input alignment is not available.
This matcher caller expects some matcher object and all other parameters as objects as well, and calls it with appropriate type transformers such that the call can actually happen.
This matcher caller expects some matcher object and all other parameters as objects as well, and calls it with appropriate type transformers such that the call can actually happen.
This class represents a single gensim embedding model.
Defines how complete a gold standard is.
GridSearch for ontology matching with an arbitrary number of parameters and values to optimize.
A high-precision matcher which focuses on the URI fragment and label (element-based string comparison only).
Provides a way to interact with HOBBIT.
Wraps the interface of HOBBIT platform and maps it to calls similar to SEALS.
Converts bytes into human-readable values like 12.12 GB or 5.23 MB.
Helper Class for HungarianExtractor.
This implementation uses the Hungarian algorithm to find a one-to-one mapping.
 
Interface for classes that are able to generate explanatory statements about individual mappings.
Interface for classes that are able to generate explanatory statements about individual resources.
Class capable of explaining resources using an ontology that can be set.
 
Generic matcher interface which just implements one method called match.
A matcher interface which allows the matcher to call other matchers as well.
Generic matcher interface for matching multiple ontologies / knowledge graphs.
Generic matcher interface for matching multiple ontologies / knowledge graphs which calls other matchers itself.
Strategies that are supported by BackgroundMatcher.
This is a very special interface for matchers which have an index for the source / left ontology.
Providing basic input/output operations.
 
 
Jena implementation of OntologyValidationService.
A helper class for jena transformers.
Calls the KGvec2go service.
The available datasets on KGvec2go.
Response entity for /get-vector call.
Some datasets (e.g.
General interface for all label-to-concept linkers.
 
Linker able to combine different embedding approaches.
LabelToConceptLinker with some additional functions required for embedding approaches.
Universal language enum for all background knowledge data sets.
 
 
A left-to-right tokenizer runs over the array from left to right.
A small helper class which contains all dependencies as a list and the full file name of the base version.
Service class writing the links of a test case / track to a file.
Given a resource (from Jena, which has the model behind it and thus allows traversing the whole graph), this interface extracts the literals which are usually helpful for matching.
All annotation properties are followed (recursively).
This extractor uses all literals of the resource.
This extractor uses all literals which are also strings e.g.
Extracts all values from a specific property as long as it is a literal.
This extractor is a composer and uses the given extractors in the given order until one extractor yields a result.
Given a resource (from Jena, which has the model behind it and thus allows traversing the whole graph), this interface extracts the literals which are usually helpful for matching.
Extracts the fragment of the URL, e.g.
Extracts the local name from the URI.
This filter asks an LLM which entity of the source fits best to an entity of the target.
This filter asks an LLM if a given correspondence is correct or not.
This filter asks the LLM, given a source entity, which is the best target entity (out of the ones in the alignment).
A track that does not exist on the SEALS repository.
 
Converts Set of MappingObjectStr to Alignment.
This is the LogMap repair filter.
Transforms to lower case.
This filter learns and applies a classifier given a training sample and an existing alignment.
Non functional code.
 
This class is used as wrapper for the SEALS external matcher build process.
 
 
Extracts the main matcher class as a string from a file.
This class allows to manually inspect the output of a TextExtractor by writing the results to a file or stdout.
A matcher which matches classes based on already instance matches.
Matcher for running external matchers (requires the subclass to create a command to execute).
Read the file "external/external_command.txt" and start an external process.
Combines multiple matchers.
This matcher creates a docker container based on a given docker image name.
For this matcher the results file that shall be written can be specified.
This class wraps a matcher service.
Multi source matcher which expects URLs as parameters.
 
Executes all matchers one after the other.
A matcher template for matchers that are based on YAAA.
Better use MatcherYAAAPipeline because it can combine matchers which use different APIs like Jena and OWLAPI etc.
Better use MatcherPipelineYAAA because it can combine matchers which use different APIs like Jena and OWLAPI etc.
Better use MatcherYAAAPipeline because it can combine matchers which use different APIs like Jena and OWLAPI etc.
This matcher wraps the SEALS client such that a SEALS zip file or folder can be executed.
 
Resulting object of the MatcherSimilarityMetric calculation.
This writer can persist MatcherSimilarity objects in a LaTeX graph from the perspective of one particular matcher.
This writer can persist MatcherSimilarity objects in a LaTeX heat map.
This writer can persist MatcherSimilarity objects in a LaTeX graph.
This metric allows comparing system result similarity by calculating the Jaccard overlap between alignment results.
Indicator on whether Micro or Macro average shall be used for aggregation operations.
 
RawMatcher which implements the minimal interface for being executed under the SEALS platform.
A matcher template for matchers that are based on the YAAA Framework.
A matcher template for matchers that are based on the YAAA Framework.
A matcher template for matchers that are based on Apache Jena.
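As an illustration of this template, a minimal Jena-based matcher could look like the following sketch (the match signature follows the MatcherYAAAJena description; package paths and the Alignment.add overload are assumptions):

import java.util.Properties;
import org.apache.jena.ontology.OntModel;
import de.uni_mannheim.informatik.dws.melt.matching_jena.MatcherYAAAJena;
import de.uni_mannheim.informatik.dws.melt.yet_another_alignment_api.Alignment;

public class SameUriClassMatcher extends MatcherYAAAJena {
    @Override
    public Alignment match(OntModel source, OntModel target, Alignment inputAlignment, Properties properties) throws Exception {
        Alignment alignment = new Alignment();
        // trivial strategy: match classes whose URIs occur in both models
        source.listClasses().forEachRemaining(sourceClass -> {
            if (sourceClass.isURIResource() && target.getOntClass(sourceClass.getURI()) != null) {
                alignment.add(sourceClass.getURI(), sourceClass.getURI());
            }
        });
        return alignment;
    }
}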
 
An exception which can be thrown by a matcher in case something goes wrong.
Graph-based matcher: checks all matched classes and also matches the properties between them (domain and range) with the mean value of both classes.
Matches properties based on same subject and object and the distribution.
A helper class for mathematical and statistical operations.
This tokenizer is able to assist in linking labels that consist of multiple concepts to the most specific concept possible.
Faster implementation than HungarianExtractor for generating a one-to-one alignment.
Local data structure.
Local data structure.
Local data structure.
How the curvature should be determined.
Util methods for MELT.
Class with helper methods to profile memory usage.
The class which actually runs a merge in MultiSourceDispatcherIncrementalMerge.
The information about how KGs should be merged together (in which order).
Result of the MergeTask.
A task which contains the two knowledge graphs to be merged and the new position of the merged kg.
This class is a utility class which represents the merge tree in a different way (in which it is easier to calculate the height of the tree).
Abstract class which represents a metric.
 
 
Asserts a homogeneous alignment (i.e.
 
The model (ontology / knowledge graph) and the corresponding index in the list of a multisource matching task.
Algorithm for modularity optimization used in ComputeErrDegree.
A multi concept linker may map one link to multiple concepts in the background knowledge graph.
Replace multiple texts at once.
Replace multiple texts at once.
An interface which indicates that this multisource matcher delegates the task of matching multiple ontologies/knowledge graphs to a one to one matcher.
 
Matches multiple ontologies / knowledge graphs with an incremental merge approach.
Matches multiple ontologies / knowledge graphs with an incremental merge approach.
Matches multiple ontologies / knowledge graphs with an incremental merge approach.
Matches multiple ontologies / knowledge graphs with an incremental merge approach.
This dispatcher will match multiple ontologies by selecting a few pairs.
This dispatcher will compare the texts in a model and match the ones which are textually the closest, such that a connection between all ontologies exists.
 
Executes all multi source matchers one after the other.
Helper class for MaxWeightBipartiteExtractor.
Initialization heuristic for MaxWeightBipartiteExtractor.
Helper class for MaxWeightBipartiteExtractor.
Naive ascending extraction as shown in "Analyzing Mapping Extraction Approaches" (C.
Naive descending extraction as shown in "Analyzing Mapping Extraction Approaches" (C.
Internal data structure which represents a tuple of the form (String name, Property property).
 
Network
DEV REMARK: Be aware that refactoring the name leads to hardcoded String changes in the LabelToConcept Linker package.
Creates regular n-grams.
 
Matcher which does nothing but return a valid empty alignment.
 
 
A filter which removes correspondences where source or target is matched to more than one entity.
Converts numbers to words, like 1 to one and 3 to three.
Deprecated.
use parameters file instead.
 
 
 
 
Data structure storing further information about an ontology.
Cache and reader for Jena ontologies.
Cache for ontologies for the OWL Api.
An OntologyValidationService allows to validate a single ontology, i.e., make sure that the ontology is parseable.
 
This matching module uses the OpenEA library to match entities.
 
 
OWL API implementation of OntologyValidationService.
Lists all the keys (URLs) which can be used as matching parameters.
This is a wrapper for the PARIS matching system by Fabian Suchanek et al.
 
Interface which gets a track and URIs.
 
 
A simple persistence service offering stripped-down database operations to other applications.
Enum with the preconfigured database persistences.
 
DEV REMARK: Be aware that refactoring the name leads to hardcoded String changes in the LabelToConcept Linker package.
PorterStemmer, implementing the Porter Stemming Algorithm. The PorterStemmer class transforms a word into its root form.
This class represents a lookup service for Semantic Web prefixes.
This Mojo will: 1.
 
A collector which searches for an alignment URL or creates a file with the content of the lines and returns the url of this file.
 
 
Transforms a URI to java.util.Properties.
 
 
Defines some properties which are used with similar semantics.
 
 
A client class to communicate with python libraries such as gensim.
A python server exception in case something goes wrong or the server is not started or returned no result etc.
 
This helper class is used to randomly sample resources from an OntModel.
A helper class to randomly sample elements from an initial set.
A metric which computes multiple rank metrics such as the NDCG and average precision for an execution result.
 
Result of the RankingMetric.
A matcher which tries to detect the testcase and return the reference alignment.
A refinement operation.
Removes all reflexive edges (which map A to A) from an alignment.
This matcher predicts the relation type given a transformer model.
 
This matcher predicts the relation type given a transformer model.
The relation type refiner refines all execution results in such a way that only the specified relation type is used.
 
This refiner is capable of refining an ExecutionResult according to trivial and nontrivial matches.
An interface which extracts resources of a given OntModel.
Extracts classes from a given OntModel.
Extracts datatype properties from a given OntModel.
Class for listing default extractors.
Extracts instances from a given OntModel.
Extracts object properties from a given OntModel.
Extracts RDF properties from a given OntModel.
Deprecated.
better use ConceptType which has the same functionality.
Results page generator for HTML.
Results page generator in LaTeX.
 
Enum for different sorting options in case correspondences have the same confidence.
A quick helper program for track organizers and MELT administrators.
Matcher which uses different string matching approaches (stored in PropertySpecificStringProcessing) with a specific confidence.
Scales the additional correspondence confidence values (that were produced by other filters/matchers) linearly to a given interval (by default [0,1]).
Scales the correspondence confidence values linearly to a given interval (by default [0,1]).
 
 
 
Track on the SEALS platform.
This class implements the SEALS interface (via MatcherURL) and calls the provided matcher class (the matcher class is provided via a file in the SEALS package in folder /conf/external/main_class.txt).
 
 
Abstract class for dictionary access.
This matcher uses the Sentence Transformers library to build an embedding space for each resource given a textual representation of it.
Enum which lists all possible types of loss for training sentence transformers.
This matcher uses the Sentence Transformers library to build an embedding space for each resource given a textual representation of it.
 
 
Different methods for set comparison like Overlap coefficient, Jaccard or Sørensen–Dice_coefficient (DSC).
Interface for similarity measures of sets.
Enumeration for Significance.
Simple count for significance statistics.
Check if already matched individuals have a similar hierarchy (class hierarchy).
Enum for choosing the approach in SimilarHierarchyFilter.
Checks for each instance mapping, how many already matched neighbours it has.
Checks for each instance mapping, how many already matched types it has in common.
Similarity metric which can be used for MatchClassBasedOnInstances.
A relatively simple matcher that can be used before running BackgroundMatcher to filter out simple matches.
A simple helper class for assigning URIs to testcases (and whether a URI appears as source or target in the testcase).
This class provides static functionality for SPARQL calls.
 
Exception representing an error when data does not fit the SSSOM schema.
 
The SSSOMParser can parse SSSOM files following the convention described in the SSSOM GitHub repository and user guide.
 
 
The SSSOMSerializer can serialize to SSSOM files following the convention described in the SSSOM GitHub repository and user guide.
Extracts corpus dependent stopwords from instances, classes and properties.
 
An interface for classes which define under what conditions two Strings are considered equal.
 
A simple interface for classes that can modify Strings.
A helper class for string operations.
Enum which indicates how shortcuts in camel case are handled.
 
Simple data structure representing a String Tuple.
A collection of useful String operations that can be used for matcher development.
Interface for external resources that are capable of determining whether two concepts are synonymous, given that the concepts are already linked.
Synonymy can be determined on a continuous scale.
Matches resource A (source) to B (target) iff they have at least one label in the same synset.
TDB util for generating and inspecting TDB datasets.
 
This class reads lines of documents from a directory and saves them in a HashSet.
A TestCase is an individual matching task that may be a component of a Track.
POJO which represents a testcase and the information whether an entity corresponds to the source or the target.
Enumerations of the three files that make up a test case.
This class analyzes a test case.
A class which provides supporting functionalities mainly for writing unit tests.
The supported test types for McNemar significance tests.
Given a Jena resource, a ValueExtractor can derive zero or more String representations.
All annotation properties are followed (recursively).
This extractor uses all literals of the resource.
This extractor uses all literals which are also strings e.g.
This extractor is a composer and uses the given extractors in the given order until one extractor yields a result.
A TextExtractor which extracts texts from a resource which can be used by transformer based matchers like TransformersFilter or TransformersFineTuner.
Extracts a label for the given resource and also creates a text for the superclass such that more context is provided.
Given a Jena resource, a ValueExtractor can derive zero or more String representations.
A TextExtractor which extracts texts from a resource which can be used by transformer based matchers like TransformersFilter or TransformersFineTuner.
Extracts all values from specific properties as long as they are literals.
Extracts only one speaking label (language can be set in constructor) which can be (in decreasing importance): skos:prefLabel, rdfs:label, fragment (only if more than 50 percent are not numbers), skos:altLabel, skos:hiddenLabel.
Extracts all values from a specific property as long as it is a literal.
This TextExtractor is a base class for all extractors which list all statements about a resource.
 
A TextExtractor which extracts texts from a resource which can be used by transformer based matchers like TransformersFilter or TransformersFineTuner.
A TextExtractor which extracts texts from a resource which can be used by transformer based matchers like TransformersFilter or TransformersFineTuner.
Extracts the fragment of the URL, e.g.
Extracts the local name from the URI.
A text extractor which extracts texts from a resource which can be used by transformer based matchers like TransformersFilter or TransformersFilterFineTuner.
This extractor creates only one text per resource which describes it by verbalizing each statement where the resource is in the subject position.
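To make the role of these text extractors concrete, a sketch of a custom extractor that only collects rdfs:label values might look as follows (the TextExtractor interface with a single extract method is assumed from the descriptions above; the Jena calls are standard):

import java.util.HashSet;
import java.util.Set;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.vocabulary.RDFS;
import de.uni_mannheim.informatik.dws.melt.matching_jena.TextExtractor;

public class LabelOnlyTextExtractor implements TextExtractor {
    @Override
    public Set<String> extract(Resource resource) {
        Set<String> texts = new HashSet<>();
        // collect the lexical forms of all rdfs:label literals attached to the resource
        for (Statement statement : resource.listProperties(RDFS.label).toList()) {
            if (statement.getObject().isLiteral()) {
                texts.add(statement.getObject().asLiteral().getLexicalForm());
            }
        }
        return texts;
    }
}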
 
 
 
 
 
 
 
 
 
 
 
 
This modifier tokenizes the String to be modified and separates the tokens with spaces.
This filter keeps only the top X correspondences according to confidence.
Filter mode.
This class represents a track from OAEI like anatomy, conference or multifarm etc.
 
Track repository which lists all different tracks in possibly multiple versions.
Anatomy track.
Biodiv track.
Bio-ML: A ML-friendly Biomedical track for Equivalence and Subsumption Matching.
This track presents a unified evaluation framework suitable for both ML-based and non-ML-based OM systems.
The 2022 edition involves the following ontologies: OMIM (Online Mendelian Inheritance in Man), ORDO (Orphanet Rare Disease Ontology), NCIT (National Cancer Institute Thesaurus), DOID (Human Disease Ontology), FMA (Foundational Model of Anatomy), and SNOMED CT. See also https://doi.org/10.5281/zenodo.6510086.
This track evaluates the ability of matching systems to map the schema (classes) of large common knowledge graphs such as DBpedia, YAGO and NELL.
Complex track.
Conference track.
Doremus track.
Food Nutritional Composition track.
IIMB track.
Instance Matching
Knowledgegraph track.
Laboratory Analytics Domain track.
Large Biomedical Ontologies.
2015 version of Large Biomedical Ontologies.
2016 version of Large Biomedical Ontologies - these are also used for 2017 and 2018.
HOBBIT Link Discovery.
ML Datasets
Material Sciences and Engineering track.
Multifarm track.
Disease and Phenotype track.
2016 version of the DiseasePhenotype Track.
2017 version of the DiseasePhenotype Track (HP_MP and DOID_ORDO also used in 2018).
Process Matching Track (last run in 2017). For more info see the Process Matching Track website.
HOBBIT Spimbench. The goal of this track is to determine when two OWL instances describe the same Creative Work.
 
 
 
 
 
This class analyzes a track (i.e., its individual test cases).
Deprecated.
replacement is to use AddNegativesViaMatcher which has the same functionality (but it really uses only correspondences with equivalence relation as positive correspondences).
A class which can do a train test split for arbitrary data items.
A class which can do a train test split for arbitrary data items.
 
 
This class represents the arguments for the transformers library.
This is a base class for all Transformers.
This is a base class for all Transformers fine tuners.
This filter extracts the corresponding text for a resource (with the specified and customizable extractor) given all correspondences in the input alignment.
This class is used to fine-tune a transformer model based on a generated dataset.
 
This class represents the search space for hyper parameters.
The transformers library may not free all memory from the GPU.
The enum which represents the possible measure for evaluating a model during hyperparameter search.
Computes a transitive closure in RAM.
Filters only class, instance or property matches.
The type refiner is capable of refining an ExecutionResult according to types.
The exception which is thrown if a transformation does not work.
TypeTransformer interface.
Helper functions for type transformation to URL.
Implement this interface to register multiple TypeTransformers.
The TypeTransformerRegistry is a registry for TypeTransformers which can transform an object of one type/class to another.
This matcher implements the URL interface and wraps a MatcherYAAA.
Provides utility functions for URIs such as getting the fragment.
 
 
Converts the URL to the OWL API OWLOntology.
Transforms a URI to java.util.Properties.
Interface for classes that are able to calculate vector distances.
Some utility provided in the form of static methods for vector operations.
Updates the confidence of already matched resources.
 
WebIsAlod Knowledge source.
This linker can link strings to dictionary entries.
Linker for WebIsALOD embeddings.
This class performs SPARQL queries for the WebIsALOD data set.
Enumeration of the two available endpoints.
 
 
Links to Wikidata embeddings using the "normal" Wikidata linker (WikidataLinker).
 
This linker links strings to Wikidata concepts.
 
Class utilizing DBnary, a SPARQL endpoint for Wiktionary.
This linker can link strings to dictionary entries.
Generates a CSV (first element is source - all others are synonyms) based on a DBnary dump file.
The configuration for the word2vec calculation.
Type of word2vec model/approach like CBOW or SG.
For this linker, you need (1) a WordNet embedding, (2) a text file of the vocabulary.
API for WordNet requests.
This class is capable of linking words to concepts in WordNet.
 
Writes array-like objects to a file which can be read in Python with numpy.