From b3669164a7ca8b55220bc83c5335284022223d2e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Henrik=20tom=20W=C3=B6rden?= <h.tomwoerden@indiscale.com>
Date: Fri, 21 Mar 2025 09:22:53 +0100
Subject: [PATCH] DOC: minor rephrasing

---
 src/doc/workflow.rst | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/src/doc/workflow.rst b/src/doc/workflow.rst
index 9116f85..b8d48f1 100644
--- a/src/doc/workflow.rst
+++ b/src/doc/workflow.rst
@@ -4,17 +4,18 @@ Crawler Workflow
 The LinkAhead crawler aims to provide a very flexible framework for synchronizing
 data on file systems (or potentially other sources of information) with a
 running LinkAhead instance. The workflow that is used in the scientific environment
-should be choosen according to the users needs. It is also possible to combine multiple workflow or use them in parallel.
+should be chosen according to the users' needs. It is also possible to combine
+multiple workflows or use them in parallel.
 
 In this document we will describe several workflows for crawler operation.
 
 Local Crawler Operation
 -----------------------
 
-A very simple setup that can also reliably used for testing (e.g. in local
-docker containers) sets up the crawler on a local computer. The files that
-are being crawled need to be visible to both, the local computer and the
-machine, running the LinkAhead.
+In a very simple setup, which can also reliably be used for testing,
+the crawler runs on a local computer. The files that are being
+crawled need to be visible to both the locally running crawler
+and the LinkAhead server.
 
 Prerequisites
 +++++++++++++
@@ -58,3 +59,7 @@ Running the crawler
 The following command line assumes that the extroot folder visible in the LinkAhead docker container is located in "../extroot":
 
 caosdb-crawler -i identifiables.yml --prefix /extroot --debug --provenance=provenance.yml -s update cfood.yml ../extroot/ExperimentalData/
+
+Server Side Crawler Operation
+-----------------------------
+To be filled.
-- 
GitLab