Commit 7a43891c authored by florian

Merge branch 'dev' into f-styling

parents 93c11929 580f3122
2 merge requests: !105 REL: v0.4.0, !99 STY: comments, styling and renaming
Pipeline #34108 passed
Showing 325 additions and 158 deletions
......@@ -8,20 +8,35 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased] ##
### Added ###
- DateElementConverter: allows interpreting text as a date object
- the restricted_path argument allows crawling only a subtree
- logging that provides a summary of what is inserted and updated
- You can now access the file system path of a structure element (if it has one) using the variable
name ``<converter name>.path``
- ``add_prefix`` and ``remove_prefix`` arguments for the command line interface
and the ``crawler_main`` function for the adding/removal of path prefixes when
creating file entities.
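  For example (the console entry point name and the remaining arguments are placeholders):

  ```sh
  caosdb-crawler --remove-prefix /local/mount --add-prefix /extroot ...
  ```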
### Changed ###
- The definitions for the default converters were removed from crawl.py and placed into
a separate yaml file called `default_converters.yml`. There is a new test testing for
the correct loading behavior of that file.
- JSONFileConverter, YAMLFileConverter and MarkdownFileConverter now inherit from
SimpleFileConverter. Behavior is unchanged, except that the MarkdownFileConverter now raises a
ConverterValidationError when the YAML header cannot be read instead of silently not matching.
### Deprecated ###
- The ``prefix`` argument of `crawler_main` is deprecated. Use the new argument
``remove_prefix`` instead.
### Removed ###
- The command line argument ``--prefix``. Use the new argument ``--remove-prefix`` instead.
### Fixed ###
- An empty string as name is now treated as no name (as the server does). This fixes
queries for identifiables, since they would otherwise contain "WITH name=''",
which is an impossible condition. If your cfoods contained this case, they were ill-defined.
......
# Installation #
## Linux ##
Make sure that Python (at least version 3.8) and pip are installed, using your system tools and
documentation.
Then open a terminal and continue in the [Generic installation](#generic-installation) section.
## Windows ##
If a Python distribution is not yet installed, we recommend Anaconda Python, which you can download
for free from [https://www.anaconda.com](https://www.anaconda.com). The "Anaconda Individual Edition" provides most of the
packages you will ever need out of the box. If you prefer, you may also install the leaner
"Miniconda" installer, which allows you to install packages as you need them.
After installation, open an Anaconda prompt from the Windows menu and continue in the [Generic
installation](#generic-installation) section.
## MacOS ##
If there is no Python 3 installed yet, there are two main ways to
obtain it: Either get the binary package from
[python.org](https://www.python.org/downloads/) or, for advanced
users, install via [Homebrew](https://brew.sh/). After installation
from python.org, it is recommended to also update the TLS certificates
for Python (this requires administrator rights for your user):
```sh
# Replace this with your Python version number:
cd /Applications/Python\ 3.9/
# This needs administrator rights:
sudo ./Install\ Certificates.command
```
After these steps, you may continue with the [Generic
installation](#generic-installation).
## Generic installation ##
The CaosDB crawler is available as a [PyPI
package](https://pypi.org/project/caoscrawler/) and can simply be installed with
```sh
pip3 install caoscrawler
```
Alternatively, obtain the sources from GitLab and install from there (`git` must
be installed for this option):
```sh
git clone https://gitlab.com/caosdb/caosdb-crawler
cd caosdb-crawler
pip3 install --user .
```
......@@ -153,6 +153,13 @@ Data:
metadata_json: &metadata_json_template
type: JSONFile
match: metadata.json
records:
JSONFile:
parents:
- JSONFile
role: File
path: ${metadata_json.path}
file: ${metadata_json.path}
validate: schema/dataset.schema.json
subtree:
jsondict:
......
......@@ -9,6 +9,7 @@
"minimum": 20000
},
"archived": { "type": "boolean" },
"JSONFile": { "type": "object" },
"url": {
"type": "string",
"description": "link to folder on file system (CaosDB or cloud folder)"
......
......@@ -25,8 +25,8 @@ extroot:
parents:
- mdfile
role: File
path: $DataFile
file: $DataFile
path: ${DataFile.path}
file: ${DataFile.path}
Experiment:
mdfile: $mdfile
......@@ -68,8 +68,8 @@ extroot:
parents:
- mdfile
role: File
path: $DataFile
file: $DataFile
path: ${DataFile.path}
file: ${DataFile.path}
Experiment: {}
......
......@@ -25,6 +25,7 @@
an integration test module that runs a test against a (close to) real world example
"""
from caosdb.utils.register_tests import clear_database, set_test_key
import logging
import json
import os
......@@ -35,6 +36,7 @@ from caoscrawler.identifiable_adapters import CaosDBIdentifiableAdapter
from caoscrawler.structure_elements import Directory
import pytest
from caosadvancedtools.models.parser import parse_model_from_json_schema, parse_model_from_yaml
from caosadvancedtools.loadFiles import loadpath
import sys
......@@ -52,6 +54,17 @@ def rfp(*pathcomponents):
DATADIR = rfp("test_data", "extroot", "realworld_example")
@pytest.fixture
def addfiles():
loadpath(path='/opt/caosdb/mnt/extroot/',
include=None,
exclude=None,
prefix="",
dryrun=False,
forceAllowSymlinks=True,
)
@pytest.fixture
def usemodel():
# First load dataspace data model
......@@ -85,22 +98,21 @@ def create_identifiable_adapter():
return ident
def test_dataset(clear_database, usemodel):
ident = create_identifiable_adapter()
crawler = Crawler(identifiableAdapter=ident)
crawler_definition = crawler.load_definition(
os.path.join(DATADIR, "dataset_cfoods.yml"))
# print(json.dumps(crawler_definition, indent=3))
# Load and register converter packages:
converter_registry = crawler.load_converters(crawler_definition)
# print("DictIntegerElement" in converter_registry)
records = crawler.start_crawling(
Directory("data", os.path.join(DATADIR, 'data')),
crawler_definition,
converter_registry
def test_dataset(clear_database, usemodel, addfiles, caplog):
caplog.set_level(logging.DEBUG, logger="caoscrawler")
identifiable_path = os.path.join(DATADIR, "identifiables.yml")
crawler_definition_path = os.path.join(DATADIR, "dataset_cfoods.yml")
crawler_main(
os.path.join(DATADIR, 'data'),
crawler_definition_path,
identifiable_path,
True,
os.path.join(DATADIR, "provenance.yml"),
False,
remove_prefix=DATADIR,
# this test will fail without this prefix since the crawler would try to create new files
add_prefix="/extroot/realworld_example"
)
crawler.synchronize()
dataspace = db.execute_query("FIND RECORD Dataspace WITH name=35 AND dataspace_id=20002 AND "
"archived=FALSE AND url='https://datacloud.de/index.php/f/7679'"
......@@ -119,13 +131,17 @@ def test_dataset(clear_database, usemodel):
"start_datetime='2022-02-10T16:36:48+01:00'") == 1
assert db.execute_query(f"FIND Event WITH latitude=53", unique=True)
# test logging
assert "Executed inserts" in caplog.text
assert "Going to insert" in caplog.text
assert "Executed updates" in caplog.text
def test_event_update(clear_database, usemodel):
def test_event_update(clear_database, usemodel, addfiles):
identifiable_path = os.path.join(DATADIR, "identifiables.yml")
crawler_definition_path = os.path.join(DATADIR, "dataset_cfoods.yml")
# TODO(fspreck): Use crawler_main
crawler_main(
os.path.join(DATADIR, 'data'),
crawler_definition_path,
......@@ -133,7 +149,9 @@ def test_event_update(clear_database, usemodel):
True,
os.path.join(DATADIR, "provenance.yml"),
False,
""
remove_prefix=DATADIR,
# this test will fail without this prefix since the crawler would try to create new files
add_prefix="/extroot/realworld_example"
)
old_dataset_rec = db.execute_query(
......
......@@ -38,9 +38,7 @@ DATADIR = os.path.join(os.path.dirname(__file__), "test_data",
"extroot", "use_case_simple_presentation")
def test_complete_crawler(
clear_database
):
def test_complete_crawler(clear_database):
# Setup the data model:
model = parser.parse_model_from_yaml(os.path.join(DATADIR, "model.yml"))
model.sync_data_model(noquestion=True, verbose=False)
......@@ -57,13 +55,24 @@ def test_complete_crawler(
dryrun=False,
forceAllowSymlinks=False)
# test that a bad value for "remove_prefix" leads to runtime error
with pytest.raises(RuntimeError) as re:
crawler_main(DATADIR,
os.path.join(DATADIR, "cfood.yml"),
os.path.join(DATADIR, "identifiables.yml"),
True,
os.path.join(DATADIR, "provenance.yml"),
False,
remove_prefix="sldkfjsldf")
assert "path does not start with the prefix" in str(re.value)
crawler_main(DATADIR,
os.path.join(DATADIR, "cfood.yml"),
os.path.join(DATADIR, "identifiables.yml"),
True,
os.path.join(DATADIR, "provenance.yml"),
False,
"/use_case_simple_presentation")
remove_prefix=os.path.abspath(DATADIR))
res = db.execute_query("FIND Record Experiment")
assert len(res) == 1
......
......@@ -56,6 +56,10 @@ SPECIAL_PROPERTIES = ("description", "name", "id", "path",
logger = logging.getLogger(__name__)
class CrawlerTemplate(Template):
braceidpattern = r"(?a:[_a-z][_\.a-z0-9]*)"
def _only_max(children_with_keys):
return [max(children_with_keys, key=lambda x: x[1])[0]]
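The custom ``braceidpattern`` extends Python's ``string.Template`` placeholder syntax so that braced variable names may contain dots, which is what makes substitutions like ``${DataFile.path}`` possible. A minimal sketch of the behavior (values are illustrative):

```python
from string import Template

class CrawlerTemplate(Template):
    # allow dots inside ${...} placeholders
    braceidpattern = r"(?a:[_a-z][_\.a-z0-9]*)"

values = {"DataFile.path": "/extroot/data/file.dat"}
# A plain Template would leave ${DataFile.path} untouched; CrawlerTemplate
# substitutes it:
print(CrawlerTemplate("path: ${DataFile.path}").safe_substitute(values))
# -> path: /extroot/data/file.dat
```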
......@@ -110,6 +114,19 @@ class ConverterValidationError(Exception):
self.message = msg
def create_path_value(func):
"""decorator for create_values functions that adds a value containing the path
should be used for StructureElement that are associated with file system objects that have a
path, like File or Directory.
"""
def inner(self, values: GeneralStore, element: StructureElement):
func(self, values=values, element=element)
values.update({self.name + ".path": element.path})
return inner
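Sketched in isolation, the decorator's effect looks like this (a plain dict stands in for ``GeneralStore``; the converter and element are minimal stand-ins):

```python
def create_path_value(func):  # mirrors the definition above
    def inner(self, values, element):
        func(self, values=values, element=element)
        values.update({self.name + ".path": element.path})
    return inner

class FakeElement:
    path = "/extroot/data"

class FakeConverter:
    name = "datadir"

    @create_path_value
    def create_values(self, values, element):
        pass  # the real method also fills in match results etc.

store = {}
FakeConverter().create_values(store, FakeElement())
assert store == {"datadir.path": "/extroot/data"}
```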
def replace_variables(propvalue, values: GeneralStore):
"""
This function replaces variables in property values (and possibly other locations,
......@@ -133,7 +150,7 @@ def replace_variables(propvalue, values: GeneralStore):
if isinstance(values[varname], db.Entity):
return values[varname]
propvalue_template = Template(propvalue)
propvalue_template = CrawlerTemplate(propvalue)
return propvalue_template.safe_substitute(**values.get_storage())
......@@ -241,7 +258,7 @@ def create_records(values: GeneralStore, records: RecordStore, def_records: dict
continue
# Allow replacing variables in keys / names of properties:
key_template = Template(key)
key_template = CrawlerTemplate(key)
key = key_template.safe_substitute(**values.get_storage())
keys_modified.append((name, key))
......@@ -477,6 +494,10 @@ class DirectoryConverter(Converter):
return children
@create_path_value
def create_values(self, values: GeneralStore, element: StructureElement):
super().create_values(values=values, element=element)
def typecheck(self, element: StructureElement):
return isinstance(element, Directory)
......@@ -524,6 +545,10 @@ class SimpleFileConverter(Converter):
def create_children(self, generalStore: GeneralStore, element: StructureElement):
return list()
@create_path_value
def create_values(self, values: GeneralStore, element: StructureElement):
super().create_values(values=values, element=element)
@Converter.debug_matching("name")
def match(self, element: StructureElement):
# TODO: See comment on types and inheritance
......@@ -542,7 +567,7 @@ class FileConverter(SimpleFileConverter):
super().__init__(*args, **kwargs)
class MarkdownFileConverter(Converter):
class MarkdownFileConverter(SimpleFileConverter):
"""
reads the yaml header of markdown files (if such a header exists).
"""
......@@ -552,8 +577,18 @@ class MarkdownFileConverter(Converter):
if not isinstance(element, File):
raise RuntimeError("A markdown file is needed to create children.")
header = yaml_header_tools.get_header_from_file(
element.path, clean=False)
try:
header = yaml_header_tools.get_header_from_file(
element.path, clean=False)
except yaml_header_tools.NoValidHeader:
if generalStore is not None and self.name in generalStore:
path = generalStore[self.name]
else:
path = "<path not set>"
raise ConverterValidationError(
"Error during the validation (yaml header cannot be read) of the markdown file "
"located at the following node in the data structure:\n"
f"{path}")
children: List[StructureElement] = []
for name, entry in header.items():
......@@ -566,25 +601,6 @@ class MarkdownFileConverter(Converter):
"Header entry {} has incompatible type.".format(name))
return children
def typecheck(self, element: StructureElement):
return isinstance(element, File)
@Converter.debug_matching("name")
def match(self, element: StructureElement):
# TODO: See comment on types and inheritance
if not isinstance(element, File):
raise RuntimeError("Element must be a file.")
m = re.match(self.definition["match"], element.name)
if m is None:
return None
try:
yaml_header_tools.get_header_from_file(element.path)
except yaml_header_tools.NoValidHeader:
# TODO(salexan): Raise a validation error instead of just not
# matching silently.
return None
return m.groupdict()
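For reference, a markdown file with the kind of yaml header that ``yaml_header_tools`` parses (field names are illustrative; shown with the common ``---``/``...`` delimiter pair):

```markdown
---
responsible: Some Person
description: A data set with a machine-readable header
...

# The normal markdown content starts here
```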
def convert_basic_element(element: Union[list, dict, bool, int, float, str, None], name=None,
msg_prefix=""):
......@@ -691,20 +707,7 @@ class DictDictElementConverter(DictElementConverter):
super().__init__(*args, **kwargs)
class JSONFileConverter(Converter):
def typecheck(self, element: StructureElement):
return isinstance(element, File)
@Converter.debug_matching("name")
def match(self, element: StructureElement):
# TODO: See comment on types and inheritance
if not self.typecheck(element):
raise RuntimeError("Element must be a file")
m = re.match(self.definition["match"], element.name)
if m is None:
return None
return m.groupdict()
class JSONFileConverter(SimpleFileConverter):
def create_children(self, generalStore: GeneralStore, element: StructureElement):
# TODO: See comment on types and inheritance
if not isinstance(element, File):
......@@ -726,20 +729,7 @@ class JSONFileConverter(Converter):
return [structure_element]
class YAMLFileConverter(Converter):
def typecheck(self, element: StructureElement):
return isinstance(element, File)
@Converter.debug_matching("name")
def match(self, element: StructureElement):
# TODO: See comment on types and inheritance
if not self.typecheck(element):
raise RuntimeError("Element must be a file")
m = re.match(self.definition["match"], element.name)
if m is None:
return None
return m.groupdict()
class YAMLFileConverter(SimpleFileConverter):
def create_children(self, generalStore: GeneralStore, element: StructureElement):
# TODO: See comment on types and inheritance
if not isinstance(element, File):
......
......@@ -49,6 +49,7 @@ from typing import Any, Optional, Type, Union
import caosdb as db
from caosadvancedtools.utils import create_entity_link
from caosadvancedtools.cache import UpdateCache, Cache
from caosadvancedtools.crawler import Crawler as OldCrawler
from caosdb.apiutils import (compare_entities, EntityMergeConflictError,
......@@ -1016,20 +1017,25 @@ class Crawler(object):
referencing_entities)
for record in to_be_updated]
# Merge with existing data to prevent unwanted overwrites
to_be_updated = self._merge_properties_from_remote(to_be_updated,
identified_records)
to_be_updated = self._merge_properties_from_remote(to_be_updated, identified_records)
# remove unnecessary updates from list by comparing the target records
# to the existing ones
to_be_updated = self.remove_unnecessary_updates(
to_be_updated, identified_records)
to_be_updated = self.remove_unnecessary_updates(to_be_updated, identified_records)
logger.info(f"Going to insert {len(to_be_inserted)} Entities and update "
f"{len(to_be_inserted)} Entities.")
if commit_changes:
self.execute_parent_updates_in_list(to_be_updated, securityMode=self.securityMode,
run_id=self.run_id, unique_names=unique_names)
logger.info(f"Added parent RecordTypes where necessary.")
self.execute_inserts_in_list(
to_be_inserted, self.securityMode, self.run_id, unique_names=unique_names)
logger.info(f"Executed inserts:\n"
+ self.create_entity_summary(to_be_inserted))
self.execute_updates_in_list(
to_be_updated, self.securityMode, self.run_id, unique_names=unique_names)
logger.info(f"Executed updates:\n"
+ self.create_entity_summary(to_be_updated))
update_cache = UpdateCache()
pending_inserts = update_cache.get_inserts(self.run_id)
......@@ -1044,6 +1050,25 @@ class Crawler(object):
return (to_be_inserted, to_be_updated)
@staticmethod
def create_entity_summary(entities: list[db.Entity]):
""" Creates a summary string reprensentation of a list of entities."""
parents = {}
for el in entities:
for pp in el.parents:
if pp.name not in parents:
parents[pp.name] = [el]
else:
parents[pp.name].append(el)
output = ""
for key, value in parents.items():
output += f"{key}:\n"
for el in value:
output += create_entity_link(el) + ", "
output = output[:-2] + "\n"
return output
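A sketch of how the new summary helper can be exercised (record names are made up; the link markup comes from ``caosadvancedtools``' ``create_entity_link``):

```python
import caosdb as db
from caoscrawler.crawl import Crawler

records = [db.Record(name="a").add_parent(name="Person"),
           db.Record(name="b").add_parent(name="Person")]
print(Crawler.create_entity_summary(records))
# Person:
# <link to a>, <link to b>
```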
@staticmethod
def inform_about_pending_changes(pending_changes, run_id, path, inserts=False):
# Sending an Email with a link to a form to authorize updates is
......@@ -1228,7 +1253,9 @@ def crawler_main(crawled_directory_path: str,
prefix: str = "",
securityMode: SecurityMode = SecurityMode.UPDATE,
unique_names=True,
restricted_path: Optional[list[str]] = None
restricted_path: Optional[list[str]] = None,
remove_prefix: Optional[str] = None,
add_prefix: Optional[str] = None,
):
"""
......@@ -1247,7 +1274,7 @@ def crawler_main(crawled_directory_path: str,
dry_run : bool
do not commit any changes to the server
prefix : str
remove the given prefix from file paths
DEPRECATED, remove the given prefix from file paths
securityMode : int
securityMode of Crawler
unique_names : bool
......@@ -1255,6 +1282,10 @@ def crawler_main(crawled_directory_path: str,
restricted_path: optional, list of strings
Traverse the data tree only along the given path. When the end of the given path
is reached, traverse the full tree as normal.
remove_prefix : Optional[str]
remove the given prefix from file paths
add_prefix : Optional[str]
add the given prefix to file paths
Returns
-------
......@@ -1271,11 +1302,19 @@ def crawler_main(crawled_directory_path: str,
crawler.save_debug_data(provenance_file)
if identifiables_definition_file is not None:
ident = CaosDBIdentifiableAdapter()
ident.load_from_yaml_definition(identifiables_definition_file)
crawler.identifiableAdapter = ident
if prefix != "":
warnings.warn(DeprecationWarning("The prefix argument is deprecated and will be removed "
"in the future. Please use `remove_prefix` instead."))
if remove_prefix is not None:
raise ValueError("Please do not supply the (deprecated) `prefix` and the "
"`remove_prefix` argument at the same time. Only use "
"`remove_prefix` instead.")
remove_prefix = prefix
if dry_run:
ins, upd = crawler.synchronize(commit_changes=False)
inserts = [str(i) for i in ins]
......@@ -1290,11 +1329,15 @@ def crawler_main(crawled_directory_path: str,
if isinstance(elem, db.File):
# correct the file path:
# elem.file = os.path.join(args.path, elem.file)
if prefix is None:
raise RuntimeError(
"No prefix set. Prefix must be set if files are used.")
if elem.path.startswith(prefix):
elem.path = elem.path[len(prefix):]
if remove_prefix:
if elem.path.startswith(remove_prefix):
elem.path = elem.path[len(remove_prefix):]
else:
raise RuntimeError("Prefix shall be removed from file path but the path "
"does not start with the prefix:"
f"\n{remove_prefix}\n{elem.path}")
if add_prefix:
elem.path = add_prefix + elem.path
elem.file = None
# TODO: as long as the new file backend is not finished
# we are using the loadFiles function to insert symlinks.
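The combined effect of the two options on a file path, sketched with made-up values:

```python
# remove_prefix strips the local mount point, add_prefix prepends the
# server-side location (paths are illustrative):
remove_prefix, add_prefix = "/local/mount", "/extroot"
path = "/local/mount/data/f.dat"
if path.startswith(remove_prefix):
    path = path[len(remove_prefix):]   # -> "/data/f.dat"
if add_prefix:
    path = add_prefix + path           # -> "/extroot/data/f.dat"
```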
......@@ -1362,8 +1405,12 @@ def parse_args():
parser.add_argument("-u", "--unique-names",
help="Insert or updates entities even if name conflicts exist.")
parser.add_argument("-p", "--prefix",
help="Remove the given prefix from the paths "
"of all file objects.")
help="DEPRECATED, use --remove-prefix instead. Remove the given prefix "
"from the paths of all file objects.")
parser.add_argument("--remove-prefix",
help="Remove the given prefix from the paths of all file objects.")
parser.add_argument("--add-prefix",
help="Add the given prefix to the paths of all file objects.")
return parser.parse_args()
......@@ -1383,6 +1430,10 @@ def main():
conlogger = logging.getLogger("connection")
conlogger.setLevel(level=logging.ERROR)
if args.prefix:
print("Please use '--remove-prefix' option instead of '--prefix' or '-p'.")
return -1
# logging config for local execution
logger.addHandler(logging.StreamHandler(sys.stdout))
if args.debug:
......@@ -1405,12 +1456,13 @@ def main():
debug=args.debug,
provenance_file=args.provenance,
dry_run=args.dry_run,
prefix=args.prefix,
securityMode={"retrieve": SecurityMode.RETRIEVE,
"insert": SecurityMode.INSERT,
"update": SecurityMode.UPDATE}[args.security_mode],
unique_names=args.unique_names,
restricted_path=restricted_path
restricted_path=restricted_path,
remove_prefix=args.remove_prefix,
add_prefix=args.add_prefix,
))
......
# Getting started with the CaosDB Crawler #
## Installation ##
### How to install ###
#### Linux ####
Make sure that Python (at least version 3.8) and pip are installed, using your system tools and
documentation.
Then open a terminal and continue in the [Generic installation](#generic-installation) section.
#### Windows ####
If a Python distribution is not yet installed, we recommend Anaconda Python, which you can download
for free from [https://www.anaconda.com](https://www.anaconda.com). The "Anaconda Individual Edition" provides most of the
packages you will ever need out of the box. If you prefer, you may also install the leaner
"Miniconda" installer, which allows you to install packages as you need them.
After installation, open an Anaconda prompt from the Windows menu and continue in the [Generic
installation](#generic-installation) section.
#### MacOS ####
If there is no Python 3 installed yet, there are two main ways to
obtain it: Either get the binary package from
[python.org](https://www.python.org/downloads/) or, for advanced
users, install via [Homebrew](https://brew.sh/). After installation
from python.org, it is recommended to also update the TLS certificates
for Python (this requires administrator rights for your user):
```sh
# Replace this with your Python version number:
cd /Applications/Python\ 3.9/
# This needs administrator rights:
sudo ./Install\ Certificates.command
```
After these steps, you may continue with the [Generic
installation](#generic-installation).
#### Generic installation ####
---
Obtain the sources from GitLab and install from there (`git` must be installed for
this option):
```sh
git clone https://gitlab.com/caosdb/caosdb-crawler
cd caosdb-crawler
pip3 install --user .
```
**Note**: In the near future, this package will also be made available on PyPI.
## Installation
see INSTALL.md
## Run Unit Tests
Run `pytest unittests`.
## Documentation ##
We use sphinx to create the documentation. Docstrings in the code should comply
......
......@@ -149,6 +149,44 @@ create lists or multi properties instead of single values:
.. code-block:: yaml
Experiment1:
Measurement: +Measurement <- Element in List (list is cleared before run)
*Measurement <- Multi Property (properties are removed before run)
Measurement <- Overwrite
Measurement: +Measurement # Element in List (list is cleared before run)
*Measurement # Multi Property (properties are removed before run)
Measurement # Overwrite
File Entities
-------------
In order to use File Entities, you must set the appropriate ``role: File``.
Additionally, the ``path`` and ``file`` keys have to be given, with values that set the
paths remotely and locally, respectively. You can use the variable
``<converter name>.path`` that is automatically created by converters that deal
with file system related StructureElements. The file object itself is stored
in a variable with the same name (as is the case for other Records).
.. code-block:: yaml
somefile:
  type: SimpleFile
  match: ^params.*$ # match any file whose name starts with "params"
  records:
    fileEntity:
      role: File # necessary to create a File Entity
      path: ${somefile.path} # defines the path in CaosDB
      file: ${somefile.path} # path where the file is found locally
    SomeRecord:
      ParameterFile: $fileEntity # creates a reference to the file
Automatically generated keys
++++++++++++++++++++++++++++
Some variable names are automatically generated and can be used using the
``$<variable name>`` syntax. Those include:
- ``<converter name>``: access the path of converter names to the current converter
- ``<converter name>.path``: the file system path to the structure element
(file system related converters only; you need curly brackets to use them:
``${<converter name>.path}``)
- ``<Record key>``: all entities that are created in the ``records`` section
are available under the same key
Concepts
))))))))
The CaosDB Crawler can handle any kind of hierarchical data structure. The typical use case is a
directory tree that is traversed. We use the following terms/concepts to describe how the CaosDB
Crawler works.
Structure Elements
++++++++++++++++++
......
../../../INSTALL.md
\ No newline at end of file
Prerequisites
)))))))))))))
TODO Describe the smallest possible crawler run
Getting Started
+++++++++++++++
.. toctree::
:maxdepth: 2
:caption: Contents:
:hidden:
Installation<INSTALL>
prerequisites
helloworld
This section will help you get going! From the first installation steps to the first simple crawl.
Let's go!
Prerequisites
)))))))))))))
TODO Describe what you need to actually do a crawler run: data, CaosDB, ...
......@@ -7,12 +7,12 @@ CaosDB-Crawler Documentation
:caption: Contents:
:hidden:
Getting started<README_SETUP>
Getting started<getting_started/index>
Tutorials<tutorials/index>
Concepts<concepts>
Converters<converters>
CFoods (Crawler Definitions)<cfood>
Macros<macros>
Tutorials<tutorials/index>
How to upgrade<how-to-upgrade>
API documentation<_apidoc/modules>
......
......@@ -195,7 +195,7 @@ The example will be expanded to:
Limitation
----------
==========
Currently it is not possible to use the same macro twice in the same yaml node, but in different
positions. Consider:
......
Tutorials
+++++++++
This chapter contains a collection of tutorials.
.. toctree::
:maxdepth: 2
:caption: Contents:
:hidden:
Example CFood<example>
......@@ -22,7 +22,7 @@ Data: # name of the converter
parents:
- Project # not needed as the name is equivalent
date: $date
identifier: $identifier
identifier: ${identifier}
subtree:
measurement: # new name for folders on the 3rd level
......