diff --git a/src/doc/concepts.rst b/src/doc/concepts.rst
index 7100bcd1790edb3e040a1a90663a32a09b7c8eaf..770731857112b93205f0e80d623fa9183c4aa885 100644
--- a/src/doc/concepts.rst
+++ b/src/doc/concepts.rst
@@ -1,3 +1,4 @@
+========
 Concepts
 ========
 
@@ -5,6 +6,10 @@ The CaosDB Crawler can handle any kind of hierarchical data structure. The typic
 directory tree that is traversed. We use the following terms/concepts to describe how the CaosDB
 Crawler works.
 
+Basics
+======
+
+
 Structure Elements
 ++++++++++++++++++
 
@@ -29,7 +34,7 @@ existing StructureElements, Converters create a tree of StructureElements.
 .. image:: img/converter.png
   :height: 170
 
-See :std:doc:`converters<converters>` for details.
+See the chapter :std:doc:`Converters<converters>` for details.
 
 Relevant sources in:
 
@@ -183,8 +188,7 @@ TODO
 Caching
 +++++++
 
-The Crawler uses the cached library function ``cached_get_entity_by``. The
-cache is cleared automatically, when the Crawler does updates, but if you would
-run the same Python process indefinetely the Crawler would not see changes due
-to the Cache. Thus, please make sure to clear the cache if you create long
-running Python processes.
+The Crawler uses the caching function ``cached_get_entity_by`` from the ``cached`` library. The
+cache is cleared automatically when the Crawler performs updates, but if you ran the same Python
+process indefinitely, the Crawler would not see changes in LinkAhead due to the cache. Thus, please
+make sure to clear the cache in long-running Python processes.
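
The staleness problem described above can be illustrated with Python's standard
``functools.lru_cache``. This is only a self-contained analogy, not the Crawler's actual
implementation: ``DATABASE`` and ``cached_lookup`` are hypothetical names standing in for the
server and for ``cached_get_entity_by``.

```python
from functools import lru_cache

# Hypothetical stand-in for the server state; in the real Crawler, LinkAhead
# answers these lookups (assumption made purely for illustration).
DATABASE = {"Experiment": 1}

@lru_cache(maxsize=None)
def cached_lookup(name):
    # Memoized: later changes to DATABASE stay invisible until the cache is cleared.
    return DATABASE[name]

v1 = cached_lookup("Experiment")   # first lookup, cache is filled
DATABASE["Experiment"] = 2         # simulate a change "on the server"
v2 = cached_lookup("Experiment")   # stale: still returns the old value
cached_lookup.cache_clear()        # what a long-running process must do
v3 = cached_lookup("Experiment")   # fresh lookup sees the change
```

The same pattern applies to the Crawler's cache: without an explicit clear, `v2` keeps the old
value even though the underlying data has changed.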