diff --git a/src/doc/converters.rst b/src/doc/converters.rst
index cf0d26bf79719ae87c824f6ba148b55e14e96c26..de4e1115597afb54868c746d80c1c85e2c58a852 100644
--- a/src/doc/converters.rst
+++ b/src/doc/converters.rst
@@ -25,7 +25,7 @@ The yaml definition looks like the following:
 TODO: outdated, see cfood-schema.yml
 
 .. code-block:: yaml
-                
+
     <NodeName>:
         type: <ConverterName>
         match: ".*"
@@ -41,7 +41,7 @@ TODO: outdated, see cfood-schema.yml
                 - Experiment
         subtree:
             (...)
-     
+
 The **<NodeName>** is a description of what it represents (e.g.
 'experiment-folder') and is used as identifier.
 
@@ -58,13 +58,13 @@ described here.
 
 Transform Functions
 +++++++++++++++++++
-Often the situation arises, that you cannot use a value as it is found. Maybe a value should be 
+Often the situation arises, that you cannot use a value as it is found. Maybe a value should be
 increased by an offset or a string should be split into a list of pieces. In order to allow such
 simple conversions, transform functions can be named in the converter definition that are then
 applied to the respective variables when the converter is executed.
 
 .. code-block:: yaml
-                
+
     <NodeName>:
         type: <ConverterName>
         match: ".*"
@@ -191,7 +191,7 @@ column names to values of the respective cell.
 Example:
 
 .. code-block:: yaml
-                
+
    subtree:
      TABLE:
        type: CSVTableConverter
@@ -272,6 +272,15 @@ need to be reflected in the datamodel.  Using the default names, you would need
 although the names of both property and record type can be configured within the
 cfood definition.
 
+A simple example of a cfood definition for HDF5 files can be found in the `unit
+tests
+<https://gitlab.com/linkahead/linkahead-crawler/-/blob/main/unittests/h5_cfood.yml?ref_type=heads>`_
+and shows how the individual converters are used to crawl a `simple
+example file
+<https://gitlab.com/linkahead/linkahead-crawler/-/blob/main/unittests/hdf5_dummy_file.hdf5?ref_type=heads>`_
+containing groups, subgroups, and datasets, together with their respective
+attributes.
+
 H5FileConverter
 ---------------
 
@@ -340,10 +349,10 @@ The following methods are abstract and need to be overwritten by your custom con
 - :py:meth:`~caoscrawler.converters.Converter.match`
 - :py:meth:`~caoscrawler.converters.Converter.typecheck`
 
-  
+
 Example
 =======
-  
+
 In the following, we will explain the process of adding a custom converter to a yaml file using
 a SourceResolver that is able to attach a source element to another entity.
 
@@ -374,50 +383,50 @@ Furthermore we will customize the method :py:meth:`~caoscrawler.converters.Conve
 number of records can be generated by the yaml definition. So for any applications - like here - that require an arbitrary number of records to be created, a customized implementation of :py:meth:`~caoscrawler.converters.Converter.create_records` is recommended.
 In this context it is recommended to make use of the function :func:`caoscrawler.converters.create_records` that implements creation of record objects from python dictionaries of the same structure
 that would be given using a yaml definition (see next section below).
-     
+
 .. code-block:: python
 
     import re
     from caoscrawler.stores import GeneralStore, RecordStore
     from caoscrawler.converters import TextElementConverter, create_records
     from caoscrawler.structure_elements import StructureElement, TextElement
-    
+
 
     class SourceResolver(TextElementConverter):
       """
       This resolver uses a source list element (e.g. from the markdown readme file)
       to link sources correctly.
       """
-       
+
       def __init__(self, definition: dict, name: str,
                    converter_registry: dict):
           """
           Initialize a new directory converter.
           """
           super().__init__(definition, name, converter_registry)
-       
+
       def create_children(self, generalStore: GeneralStore,
                                 element: StructureElement):
-                                
+
           # The source resolver does not create children:
-          
+
           return []
-       
+
       def create_records(self, values: GeneralStore,
                          records: RecordStore,
                          element: StructureElement,
                          file_path_prefix):
           if not isinstance(element, TextElement):
               raise RuntimeError()
-       
+
           # This function must return a list containing tuples, each one for a modified
           # property: (name_of_entity, name_of_property)
           keys_modified = []
-       
+
           # This is the name of the entity where the source is going to be attached:
           attach_to_scientific_activity = self.definition["scientific_activity"]
           rec = records[attach_to_scientific_activity]
-       
+
           # The "source" is a path to a source project, so it should have the form:
           # /<Category>/<project>/<scientific_activity>/
           # obtain these information from the structure element:
@@ -425,18 +434,18 @@ that would be given using a yaml definition (see next section below).
           regexp = (r'/(?P<category>(SimulationData)|(ExperimentalData)|(DataAnalysis))'
                     '/(?P<project_date>.*?)_(?P<project_identifier>.*)'
                     '/(?P<date>[0-9]{4,4}-[0-9]{2,2}-[0-9]{2,2})(_(?P<identifier>.*))?/')
-       
+
           res = re.match(regexp, val)
           if res is None:
               raise RuntimeError("Source cannot be parsed correctly.")
-       
+
           # Mapping of categories on the file system to corresponding record types in CaosDB:
           cat_map = {
               "SimulationData": "Simulation",
               "ExperimentalData": "Experiment",
               "DataAnalysis": "DataAnalysis"}
           linkrt = cat_map[res.group("category")]
-       
+
           keys_modified.extend(create_records(values, records, {
               "Project": {
                   "date": res.group("project_date"),
@@ -450,7 +459,7 @@ that would be given using a yaml definition (see next section below).
               attach_to_scientific_activity: {
                   "sources": "+$" + linkrt
               }}, file_path_prefix))
-       
+
           # Process the records section of the yaml definition:
           keys_modified.extend(
               super().create_records(values, records, element, file_path_prefix))
@@ -463,7 +472,7 @@ that would be given using a yaml definition (see next section below).
 If the recommended (python) package structure is used, the package containing the converter
 definition can just be installed using `pip install .` or `pip install -e .` from the
 `scifolder_package` directory.
-          
+
 The following yaml block will register the converter in a yaml file:
 
 .. code-block:: yaml
@@ -473,7 +482,7 @@ The following yaml block will register the converter in a yaml file:
        package: scifolder.converters.sources
        converter: SourceResolver
 
-       
+
 Using the `create_records` API function
 =======================================
 
@@ -511,7 +520,7 @@ Let's formulate that using `create_records`:
 .. code-block:: python
 
   dir_name = "directory name"
-  
+
   record_def = {
     "Experiment": {
       "identifier": dir_name
@@ -587,7 +596,7 @@ Let's have a look at a more complex examples, defining multiple records:
         Project: $Project
       ProjectGroup:
         projects: +$Project
-      
+
 
 This block will create two new Records:
 
@@ -603,7 +612,7 @@ Let's formulate that using `create_records` (again, `dir_name` is constant here)
 .. code-block:: python
 
   dir_name = "directory name"
-  
+
   record_def = {
     "Project": {
       "identifier": "project_name",
@@ -615,7 +624,7 @@ Let's formulate that using `create_records` (again, `dir_name` is constant here)
     "ProjectGroup": {
       "projects": "+$Project",
     }
-    
+
   }
 
   keys_modified = create_records(values, records,