
F doc workflow

Merged: Henrik tom Wörden requested to merge ``f-doc-workflow`` into ``dev``
1 file  +10  -5
Crawler Workflow
================
The LinkAhead crawler aims to provide a very flexible framework for synchronizing
data on file systems (or potentially other sources of information) with a
running LinkAhead instance. The workflow that is used in the scientific environment
should be chosen according to the user's needs. It is also possible to combine
multiple workflows or use them in parallel.
In this document we describe several workflows for crawler operation.
Local Crawler Operation
-----------------------
A very simple setup that can also reliably be used for testing
sets up the crawler on a local computer. The files that
are being crawled need to be visible to both the locally running crawler and
the LinkAhead server.
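
One way to achieve this shared visibility, if the LinkAhead server runs in
Docker, is a bind mount of the data directory into the container. A minimal
sketch, assuming a hypothetical image name and assuming the container expects
external files under ``/opt/caosdb/mnt/extroot`` (the exact in-container path
depends on your deployment)::

    # Make ./extroot visible to the server inside the container (read-only).
    # The locally running crawler reads the same files directly from ./extroot;
    # only the path prefix differs between the two views.
    docker run -d --name linkahead \
        -v "$(pwd)/extroot:/opt/caosdb/mnt/extroot:ro" \
        linkahead-server-image  # hypothetical image name; use your deployment's image

With such a layout, the crawler and the server see the same files under
different path prefixes; the ``--prefix`` option shown below is used to
account for that difference.
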
Prerequisites
+++++++++++++
Running the crawler
+++++++++++++++++++
The following command line assumes that the extroot folder visible in the
LinkAhead docker container is located in ``../extroot``::

    caosdb-crawler -i identifiables.yml --prefix /extroot --debug --provenance=provenance.yml -s update cfood.yml ../extroot/ExperimentalData/
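
For readability, the same invocation split over several lines, with a short
gloss of each argument. The glosses are our reading of the options, not the
authoritative descriptions; consult ``caosdb-crawler --help`` for those::

    # -i identifiables.yml          definitions used to recognize already existing records
    # --prefix /extroot             account for the path prefix difference between the
    #                               local view and the server's extroot view
    # --debug                       verbose output for troubleshooting
    # --provenance=provenance.yml   write provenance information about this run
    # -s update                     security mode: permit updates of existing records
    # cfood.yml                     the CFood defining how crawled files map to records
    # ../extroot/ExperimentalData/  the directory tree to crawl
    caosdb-crawler -i identifiables.yml --prefix /extroot --debug \
        --provenance=provenance.yml -s update \
        cfood.yml ../extroot/ExperimentalData/
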
Server Side Crawler Operation
-----------------------------
To be filled.