Relevance Scores for Triples from Type-Like Relations
Reproducibility Material
Download everything needed to reproduce our experimental results.
Files included:
- benchmark_profession.txt Tab-separated benchmark
for the profession relation. Columns: Person, Freebase-mid,
Profession, Score between 0 and 7 (obtained from the crowdsourcing
experiment).
- benchmark_nationality.txt Tab-separated benchmark
for the nationality relation. Columns: Person, Freebase-mid,
Nationality, Score between 0 and 7 (obtained from the crowdsourcing
experiment).
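The benchmark files can be read with a few lines of Python. The column order (person, Freebase-mid, profession/nationality, score between 0 and 7) is taken from the descriptions above; the sample line and the assumption that the files have no header row are ours.

```python
import csv
import io

def read_benchmark(f):
    """Yield (person, mid, value, score) tuples from a tab-separated
    benchmark file, e.g. benchmark_profession.txt."""
    for person, mid, value, score in csv.reader(f, delimiter="\t"):
        yield person, mid, value, int(score)

# Made-up example line illustrating the assumed format.
sample = "Albert Einstein\t/m/012345\tPhysicist\t7\n"
rows = list(read_benchmark(io.StringIO(sample)))
```

In real use, pass an open file handle instead of the `io.StringIO` stand-in.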
- README.txt A step-by-step explanation of how to
reproduce our results locally, including the required libraries and disk
space for each approach.
- entity_contexts.txt The associated text for each
entity. One line per entity, semantic contexts separated by tabs. 3.6 GB
uncompressed.
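Since the uncompressed file is 3.6 GB, it is best processed line by line rather than loaded into memory. The sketch below assumes that the first tab-separated field on each line is the entity name and the remaining fields are its contexts; this field layout is our assumption, not stated explicitly above.

```python
import io

def iter_contexts(f):
    """Stream (entity, [contexts]) pairs from entity_contexts.txt,
    assuming one entity per line with tab-separated contexts."""
    for line in f:
        entity, *contexts = line.rstrip("\n").split("\t")
        yield entity, contexts

# Made-up example line illustrating the assumed format.
sample = "Ada Lovelace\tcontext one\tcontext two\n"
pairs = list(iter_contexts(io.StringIO(sample)))
```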
- entity_tf_file Word occurrence counts per entity.
1.1 GB uncompressed.
- freebase_descriptions.txt All entity
descriptions from Freebase. Needed to reproduce the "first" baseline.
2.2 GB uncompressed.
- mturk_judgments_profession The human judgments obtained
from the crowdsourcing task for the profession relation.
- mturk_judgments_nationality The human judgments obtained
from the crowdsourcing task for the nationality relation.
- entity_mid_map A map from entities with readable
names (used everywhere) to their original mids in Freebase.
- person-profession-freebase_full Legacy input format
needed by the words_regression approach.
- entity_features_normalized Normalized feature
values for the words_regression approach.
- Makefile Controls dependencies and provides targets
to clean and produce result files and to print tables for evaluation.
- All result files One file per approach with the
final scores assigned to all triples. This allows results to be
examined and tables to be printed without long running times. To
reproduce a result from scratch, use the associated target in the
Makefile.
- 20 python scripts All scripts print usage. To simply
reproduce the experiments, only calls via the Makefile are needed.
- Several intermediate or trivial files These files
are either trivial to derive from the input files (e.g., a list of
persons to classify can easily be derived from the list of human
judgments) or are removed by the corresponding clean target when an
approach is to be reproduced from scratch (e.g., a list of all
word probabilities by profession). Trivial files are included because
our approaches evolved over time and so did the input formats.