diff --git a/src/doc/workflow.rst b/src/doc/workflow.rst
index 9116f85e06a2a3b2a78aa312991bfd427af8710a..b8d48f1ae299431e6aeaf8a173a9e9ffbc0388f2 100644
--- a/src/doc/workflow.rst
+++ b/src/doc/workflow.rst
@@ -4,17 +4,30 @@ Crawler Workflow
 The LinkAhead crawler aims to provide a very flexible framework for synchronizing
 data on file systems (or potentially other sources of information) with a
 running LinkAhead instance. The workflow that is used in the scientific environment
-should be choosen according to the users needs. It is also possible to combine multiple workflow or use them in parallel.
+should be chosen according to the user's needs. It is also possible to combine
+multiple workflows or use them in parallel.
 
 In this document we will describe several workflows for crawler operation.
 
 Local Crawler Operation
 -----------------------
 
-A very simple setup that can also reliably used for testing (e.g. in local
-docker containers) sets up the crawler on a local computer. The files that
-are being crawled need to be visible to both, the local computer and the
-machine, running the LinkAhead.
+A very simple setup that can also reliably be used for testing runs the
+crawler on a local computer. The files that are being crawled need to be
+visible to both the locally running crawler and the LinkAhead server.
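+
+If the LinkAhead server runs in a local docker container, a bind mount is one
+way to make the same directory visible on both sides. The following compose
+snippet is only an illustrative sketch; the image name and the container-side
+mount point are assumptions that depend on your installation::
+
+   services:
+     linkahead:
+       image: linkahead/linkahead:latest  # assumed image name
+       volumes:
+         # ./extroot is crawled locally and served as extroot
+         # inside the container (mount point assumed here)
+         - ./extroot:/opt/caosdb/mnt/extroot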
 
 Prerequisites
 +++++++++++++
@@ -58,3 +59,20 @@ Running the crawler
-The following command line assumes that the extroot folder visible in the LinkAhead docker container is located in "../extroot":
+The following command line assumes that the extroot folder visible in the
+LinkAhead docker container is located in ``../extroot``::
 
-caosdb-crawler -i identifiables.yml --prefix /extroot --debug --provenance=provenance.yml -s update cfood.yml ../extroot/ExperimentalData/
+   caosdb-crawler -i identifiables.yml --prefix /extroot --debug --provenance=provenance.yml -s update cfood.yml ../extroot/ExperimentalData/
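+
+For orientation, the identifiable definitions passed via ``-i`` map each
+record type to the properties that identify a record of that type. The record
+type and property names below are made up for illustration; consult the
+crawler documentation for the authoritative syntax::
+
+   # Hypothetical identifiable definitions: an Experiment record is
+   # identified by its date and the responsible person.
+   Experiment:
+     - date
+     - responsible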
+
+Server-Side Crawler Operation
+-----------------------------
+
+To be filled.