From c087d08a120aefa3294025bd9a971196bdca572c Mon Sep 17 00:00:00 2001
From: Daniel <d.hornung@indiscale.com>
Date: Wed, 22 May 2024 10:16:16 +0200
Subject: [PATCH] DOC: Small changes.

---
 src/doc/concepts.rst | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/src/doc/concepts.rst b/src/doc/concepts.rst
index 7100bcd1..77073185 100644
--- a/src/doc/concepts.rst
+++ b/src/doc/concepts.rst
@@ -1,3 +1,4 @@
+========
 Concepts
 ========
 
@@ -5,6 +6,10 @@ The CaosDB Crawler can handle any kind of hierarchical data structure. The typic
 directory tree that is traversed. We use the following terms/concepts to describe
 how the CaosDB Crawler works.
 
+Basics
+======
+
+
 Structure Elements
 ++++++++++++++++++
 
@@ -29,7 +34,7 @@ existing StructureElements, Converters create a tree of StructureElements.
 .. image:: img/converter.png
    :height: 170
 
-See :std:doc:`converters<converters>` for details.
+See the chapter :std:doc:`Converters<converters>` for details.
 
 Relevant sources in:
 
@@ -183,8 +188,7 @@ TODO
 Caching
 +++++++
 
-The Crawler uses the cached library function ``cached_get_entity_by``. The
-cache is cleared automatically, when the Crawler does updates, but if you would
-run the same Python process indefinetely the Crawler would not see changes due
-to the Cache. Thus, please make sure to clear the cache if you create long
-running Python processes.
+The Crawler uses the cached library function ``cached_get_entity_by``. The cache is cleared
+automatically when the Crawler does updates, but if you ran the same Python process indefinitely,
+the Crawler would not see changes in LinkAhead due to the cache. Thus, please make sure to clear the
+cache if you create long running Python processes.
-- 
GitLab
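
The caching advice in the last hunk can be illustrated with a short sketch. This is not part of
the patch or the project's own code; it assumes that the cache helpers are importable from
``linkahead.cached`` and that a ``cache_clear()`` function exists there alongside
``cached_get_entity_by``. The RecordType names and the sleep interval are hypothetical
placeholders.

.. code-block:: python

   # Minimal sketch of a long-running process that uses the cached lookups.
   # Assumption: the helpers live in ``linkahead.cached``; adjust the import
   # if your library version places them elsewhere.
   import time

   from linkahead.cached import cached_get_entity_by, cache_clear


   def handle_batch(entity_names):
       """Resolve entities by name through the shared lookup cache."""
       for name in entity_names:
           entity = cached_get_entity_by(name=name)
           print(entity.id, entity.name)


   while True:
       handle_batch(["DataAnalysis", "Project"])  # hypothetical RecordType names
       # Clear the cache between passes. Without this, changes made in
       # LinkAhead by other clients would stay invisible to this process,
       # because the cache is only cleared when the Crawler itself updates.
       cache_clear()
       time.sleep(600)  # wait ten minutes before the next pass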