Data Legend Tools

If you want to link your own dataset, you first need to TRANSPOSE it to RDF. You can do this in two ways: within the DRUID environment, or with COW or Cattle. Both methods give you an easy way to transpose your CSV or Excel file to RDF. If you are more accustomed to command-line tools or editing scripts, we recommend COW, a CSV on the Web converter. If you prefer a webpage with a graphical interface, you will want to use Cattle. Both services generate a basic script to transpose your data into RDF. By enhancing that script (either manually or with the Ruminator tool) you can create more meaningful RDF.
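To give a feel for what this transposition produces, here is a minimal sketch (not COW or Cattle itself) that turns CSV rows into RDF triples in N-Triples form. The namespace URIs are placeholders; in COW or Cattle you would configure real vocabulary URIs through the metadata file.

```python
import csv
import io

# Placeholder namespaces for the example; a real conversion would use
# shared vocabulary URIs configured in the metadata file.
BASE = "https://example.org/resource/"
PROP = "https://example.org/vocab/"

def csv_to_ntriples(csv_text):
    """Turn each CSV row into RDF triples (one per column), in N-Triples form.

    Literal values are not escaped or typed here; real converters
    handle datatypes, escaping, and vocabulary mappings.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    triples = []
    for i, row in enumerate(reader):
        subject = f"<{BASE}row/{i}>"
        for column, value in row.items():
            triples.append(f'{subject} <{PROP}{column}> "{value}" .')
    return triples

data = "name,birth_year\nAlice,1901\nBob,1876"
for triple in csv_to_ntriples(data):
    print(triple)
```

Each of the two data rows yields one triple per column, so the two-column example above produces four triples in total.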


To help you get going, we provide several vocabularies for the domain of socio-economic history for you to REUSE. By reusing vocabularies, similar data items will be described in the same terms, making it easier to find and link data. Moreover, LSD Dimensions allows you to search a wide range of other vocabularies on the semantic web.


An essential component of linked data as a concept is the "linked" part. How are you supposed to combine your data with that in other datasets if they are not linked? To do so, you write a SPARQL QUERY, defining the common elements of interest across different datasets. We recommend DRUID as a helpful tool for writing such queries. However, writing SPARQL is not an innate ability for most of us. Therefore, we provide GRLC, a tool that lets you share queries across the web, so that anyone can execute predefined queries. Moreover, parts of a SPARQL query are easy to read (for example, which time period, country, or language it covers), which allows even non-initiates to adapt existing queries.
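The idea that the readable parts of a query can be adapted by non-initiates can be sketched as a query template whose time period and country are parameters. The query and its vocabulary URIs below are illustrative placeholders, not a real dataset's schema.

```python
from string import Template

# A SPARQL query template: the easy-to-read parts (country, time period)
# are parameters, so the query can be adapted without rewriting it.
QUERY = Template("""\
SELECT ?person ?birthYear WHERE {
  ?person <https://example.org/vocab/country> "$country" ;
          <https://example.org/vocab/birth_year> ?birthYear .
  FILTER (?birthYear >= $from_year && ?birthYear <= $to_year)
}""")

query = QUERY.substitute(country="Netherlands", from_year=1850, to_year=1900)
print(query)
```

Swapping in a different country or period is a one-line change, which is exactly the kind of adaptation a predefined, shared query makes possible.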



We recognise that each user has a different background and therefore a different level of familiarity with linked data and its accompanying technologies. While we aim for every user interested in linked data to use our tools, some are easier to learn (and master) than others. To make it easier for you to pick the tool that best suits your experience or preference, the following list details the (approximate) difficulty of each tool and the recommended experience.

COW: Easy to intermediate. If your CSV does not need adjusting (most don't), the installation of COW is easy and straightforward. However, if you need to customize the transposing of your CSV file, this tool might require some experience with the command line (Terminal or Command Prompt).

Cattle: Very easy. The webpage transposes most CSV files without any problems. However, with bigger datasets we recommend you use COW. If you’re having trouble deciding, imagine Cattle as the streamlined tool and COW as the more versatile tool.

Metadata (manual or Ruminator): Easy to intermediate. Before COW or Cattle can transpose your CSV to linked data, both tools need a metadata file. This file contains all the necessary information about the contents of your dataset (what kind of data each column holds, who the creator is, and more). These metadata files can (and should) be edited to enrich your data, either with a text editor (WordPad, Sublime, or Visual Studio Code, for example) or with Ruminator. Depending on the nature of your data this can be a very easy process or one that requires some experience with vocabularies and the JSON format. Ruminator simplifies the process with a graphical interface and pre-baked drop-down lists, whereas editing the file in a text editor is completely manual.
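What "enriching the metadata" looks like in practice can be sketched as follows. The structure below is a simplified illustration loosely modelled on CSV on the Web (CSVW) metadata; the field names and vocabulary URI are placeholders, and real metadata files generated by COW or Cattle contain more fields.

```python
import json

# A simplified, illustrative metadata file: who made the dataset and
# what each column contains.
metadata = {
    "dc:creator": "Jane Doe",
    "tableSchema": {
        "columns": [
            {"name": "birth_year", "datatype": "string"},
        ]
    },
}

# Manual enrichment: declare that the column holds integers and map it
# to a (hypothetical) shared vocabulary term, so similar data in other
# datasets becomes easier to find and link.
column = metadata["tableSchema"]["columns"][0]
column["datatype"] = "integer"
column["propertyUrl"] = "https://example.org/vocab/birthYear"

print(json.dumps(metadata, indent=2))
```

Ruminator offers the same kind of edits through drop-down lists instead of hand-typed JSON.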

DRUID: Easy. Although the many options may be daunting at first, DRUID offers very powerful tools in an easy-to-use package. In a completely graphical and online environment, you can upload, transpose, edit, query, and present your datasets. Knowledge of SPARQL is very useful, but not required.

GRLC: Very easy. With this tool, you can generate an API for your SPARQL queries. It is very straightforward to implement and use, especially once you have completed the prerequisites (that is, the rest of the pipeline).
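GRLC reads SPARQL query files annotated with comment "decorators" (lines starting with `#+`) and exposes each query as an API endpoint. The sketch below shows what such decorators look like and how one might extract them; the query itself and its URIs are illustrative placeholders, not GRLC's internal code.

```python
# An illustrative annotated query file: the "#+" lines describe the API
# operation GRLC would generate for this query.
EXAMPLE_QUERY = """\
#+ summary: People born in a given country
#+ method: GET
SELECT ?person WHERE {
  ?person <https://example.org/vocab/country> ?_country .
}"""

def read_decorators(query_text):
    """Collect the '#+ key: value' decorator lines from a query file."""
    decorators = {}
    for line in query_text.splitlines():
        if line.startswith("#+"):
            key, _, value = line[2:].partition(":")
            decorators[key.strip()] = value.strip()
    return decorators

print(read_decorators(EXAMPLE_QUERY))
```

Because the decorators live in ordinary comments, the query remains valid SPARQL and can be shared, versioned, and executed as-is.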

Older Tools

Throughout the existence of Datalegend we have developed various tools to create, inspect, and manage Linked Data. Although the following tools are obsolete and no longer part of our pipeline, they still deserve an honourable mention.

Qber: a visual interface and tool for automatically converting CSV files to Linked Data.

A straightforward browser for Linked Data files.

The Inspector was used within Qber to visualize datasets, dimensions, and users as an interactive graph.