diff --git a/CHANGELOG.md b/CHANGELOG.md
index 649874f61a38b82b865a47d1653ad56f5fd47115..c66132c38d8318406b584949cfdda8635c8a1774 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -56,6 +56,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Nested queries.
 - Global entity permissions.
 - DOC: Data model tutorial.
+- Removed old documentation directory `/doc/`, migrated non-duplicate content to `/src/doc/`.
 
 ## [0.9.0] - 2023-01-19
 
diff --git a/doc/Authentication.md b/doc/Authentication.md
deleted file mode 100644
index a7e424b4c321156a009d8a7d9631f32dd296ce1a..0000000000000000000000000000000000000000
--- a/doc/Authentication.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
-Author: Timm Fitschen
-
-Email: timm.fitschen@ds.mpg.de
-
-Date: Older than 2016
-
- Some features of CaosDB are available to registered users only. Making any changes to the data stock via HTTP requires authentication by `username` _plus_ `password`. They are to be sent as HTTP headers, while the password is to be hashed with the sha512 algorithm:
-
-| `username:` | `$username`          |
-|-------------|----------------------|
-| `password:` | `$SHA512ed_password` |
-
-
-# Sessions
-
-## Login
-
-### Request Challenge
-
- * `GET http://host:port/login?username=$username`
- * `GET http://host:port/login` with `username` header
-
-*no password required to be sent over http*
-
-The request returns an AuthToken with a login challenge as a cookie. The AuthToken is a dictionary of the following form:
-
-
-        {scope=$scope;
-        mode=LOGIN;
-        offerer=$offerer;
-        auth=$auth
-        expires=$expires;
-        date=$date;
-        hash=$hash;
-        session=$session;
-        }
-
- $scope:: A uri pattern string. Example: ` {**/*} `
- $mode:: `ONETIME`, `SESSION`, or `LOGIN`
- $offerer:: A valid username
- $auth:: A valid username
- $expires:: A `YYYY-MM-DD HH:mm:ss[.nnnn]` date string
- $date:: A `YYYY-MM-DD HH:mm:ss[.nnnn]` date string
- $hash:: A string
- $session:: A string
-
-The challenge is solved by concatenating the `$hash` string with the sha512 hash of the user's `$password` string and calculating the sha512 hash of the concatenation. Pseudo code:
-
-
-        $solution = sha512($hash + sha512($password))
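The pseudo code can be sketched in Python. Note that the encoding details are assumptions not spelled out in this document (hex-encoded digests, UTF-8 input); `solve_challenge` is a hypothetical client-side helper:

```python
import hashlib

def solve_challenge(challenge_hash: str, password: str) -> str:
    """Compute sha512($hash + sha512($password)), assuming hex-encoded digests."""
    # Inner digest: sha512 of the password.
    inner = hashlib.sha512(password.encode("utf-8")).hexdigest()
    # Outer digest: challenge hash concatenated with the inner digest.
    return hashlib.sha512((challenge_hash + inner).encode("utf-8")).hexdigest()
```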
-
-### Send Solution
-
-The old $hash string in the cookie has to be replaced by $solution and the cookie is to be sent with the next request:
-
-`PUT http://host:port/login`
-
-The server will return the user's entity in the HTTP body, e.g.
-
-
-        <Response ...>
-          <User name="$username" ...>
-            ...
-          </User>
-        </Response>
-
-and a new AuthToken with `$mode=SESSION`, a new expiration date, and so on. This AuthToken cookie is to be sent with every request.
-
-### Logout
-
-Send 
-
-`PUT http://host:port/logout`
-
-with a valid AuthToken cookie. No new AuthToken will be returned and no AuthToken with that `$session` will be accepted anymore.
-
-
-
-
diff --git a/doc/Datatype.md b/doc/Datatype.md
deleted file mode 100644
index 246f52e103eeb57e1bee355788260508a3df8619..0000000000000000000000000000000000000000
--- a/doc/Datatype.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
-# TEXT
-* Description: TEXT stores any text value.
-* Range: Any [utf-8](https://en.wikipedia.org/wiki/UTF-8) encodable sequence of characters with at most 65,535 bytes. (Simply put: in most cases, any text with fewer than 65,535 letters and spaces will work. But if you use special characters like `à`, `€` or non-Latin letters, the number of bytes needed to store them increases, and the effective maximal length becomes smaller than 65,535. A worst-case scenario would be a text in Chinese: Chinese characters need about three times the space of letters from the Latin alphabet, so only about 21,845 Chinese characters can be stored within this datatype. Which is still quite a lot.)
-* Examples: 
-  * `Am Faßberg 17, D-37077 Göttingen, Germany`
-  * `Experiment went well until the problem with the voltmeter occurred. Don't use the results after that.`
-  * `someone@email.org`
-  * `Abstract: bla bla bla ...`
-  * `Head of Group`
-  * `http://www.bmp.ds.mpg.de`
-  * 
-
-        A. Schlemmer, S. Berg, TK Shajahan, S. Luther, U. Parlitz,
-           Quantifying Spatiotemporal Complexity of Cardiac Dynamics using Ordinal Patterns,
-           37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015, doi: 10.1109/EMBC.2015.7319283
-
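
Since the limit is counted in bytes of the UTF-8 encoding rather than in characters, a client-side pre-check might look like this (`fits_in_text` is a hypothetical helper, not part of CaosDB):

```python
def fits_in_text(value: str, max_bytes: int = 65535) -> bool:
    # The TEXT limit applies to the UTF-8 byte length, not the character count.
    return len(value.encode("utf-8")) <= max_bytes
```

ASCII letters take one byte each, while characters such as `€` take three, which is why the effective character limit shrinks for non-Latin text.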
-----
-
-# BOOLEAN
-* Description: BOOLEAN stores boolean `TRUE` or `FALSE`. It is therefore suitable for any variable that represents that something is the case or not.
-* Accepted Values: `TRUE` or `FALSE`, case insensitive (i.e. it doesn't matter if you use capitals or small letters).
-* Note: You could also use a TEXT datatype to represent booleans (or even INTEGER or DOUBLE). But it makes a lot of sense to use this special datatype as it ensures that only the two possible values, `TRUE` or `FALSE` are inserted into the database. Every other input would be rejected. This helps to keep the database understandable and to avoid mistakes.
-
-----
-
-# INTEGER
-* Description: INTEGER stores integer numbers. If you need floating point variables, take a look at DOUBLE.
-* Range: `-2147483648` to `2147483647`, `-0` is interpreted and stored as `0`.
-* Note: This rather limited range is just provisional. It can be extended with low effort as soon as requested.
-
-----
-
-# DOUBLE
-* Description: DOUBLE stores floating point numbers with a double precision as defined by [IEEE 754](https://en.wikipedia.org/wiki/IEEE_floating_point).
-* Range: 
-  * From `2.2250738585072014E-308` to `1.7976931348623157E308` (negative and positive) with a precision of 15 decimals. 
-  * Any other decimal number _might work_ but it is not guaranteed.
-  * `-0`, `0`, `NaN`, `-inf` and `inf`
-* Note: The server generates a warning when the precision of the submitted DOUBLE value is too high to be preserved.
-
-----
-
-# DATETIME
-The DATETIME data type currently exists in three flavors which are chosen dynamically during parsing on the server side. The flavors have different ranges, time zone support, and intended use cases. Only the first two flavors are implemented for both storage and queries; the third one is implemented for queries exclusively.
-
-## UTCDateTime
-* Description: This DATETIME flavor stores values which represent a single point of time according to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time) with the format specified by [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) (Combined date and time). It does support [UTC Leap Seconds](https://en.wikipedia.org/wiki/Leap_second) and time zones.
-* Range: From `-9999-01-01T00:00:00.0UTC` to `9999-12-31T23:59:59.999999999UTC` with nanosecond precision.
-* Examples:
-  * `2016-01-01T13:23:00.0CEST` which means _January 1, 2016,  1:23 PM, Central European Summer Time_.
-  * `-800-01-01T13:23:00.0` which means _January 1, 800 BC,  1:23 PM, UTC_.
-* Note:
-  * It is allowed to omit the nanosecond part of a UTCDateTime (`2016-01-01T13:23:00CEST`). This indicates a precision of seconds for a UTCDateTime value.
-
-## Date
-* Description: This DATETIME flavor stores values which represent a single date, month or year according to the [Gregorian calendar](https://en.wikipedia.org/wiki/Gregorian_Calendar). A month/year is conceived as a single date with the precision of a month/year. This concept is useful if you try to understand the query semantics which are explained [elsewhere](./QueryLanguage#POVDateTime).
-* Format: `Y[YYY][-MM[-dd]]` (where square brackets mean that the expression is optional).
-* Range: Any valid date according to the Gregorian calendar from `-9999-01-01` to `9999-12-31` (and respective dates with lower precision, e.g. the year `-9999`). There is no year `0`.
-* Note: Date is a specialization of [SemiCompleteDateTime](#semicompletedatetime).
-
-## SemiCompleteDateTime
-* Description: A generalization of the _Date_ and _UTCDateTime_ flavors. In general, there is no time zone support. Although this flavor is not yet storable in general, it is already implemented for search queries. I.e. you can already search for `FIND ... date>2015-04-03T20:15`.
-* Format: `Y[YYY][-MM[-dd[Thh[:mm[:ss[.ns]]]]]]`.
-* Special Properties: For every SemiCompleteDateTime _d_ there exists an _Inclusive Lower Bound_ (`d.ILB`) and an _Exclusive Upper Bound_ (`d.EUB`). That means a SemiCompleteDateTime can be interpreted as an interval of time. E.g. `2015-01` is the half-open interval `[2015-01-01T00:00:00.0, 2015-02-01T00:00:00.0)`. ILB and EUB are UTCDateTimes, respectively. These properties are important for the semantics of the query language, especially the [operators](./QueryLanguage#POVDateTime).
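
Under this interval reading, ILB and EUB follow mechanically from the precision of the value: the EUB of a value is the first instant of the next year, month, or day, respectively (an assumption derived from the half-open-interval semantics above). A minimal sketch for positive years only, with time-of-day fragments omitted; `bounds` is a hypothetical helper:

```python
from datetime import datetime, timedelta

def bounds(value: str):
    """Return the half-open interval (ILB, EUB) for 'Y', 'Y-MM' or 'Y-MM-dd'."""
    parts = [int(p) for p in value.split("-")]
    if len(parts) == 1:        # year precision, e.g. "2015"
        ilb = datetime(parts[0], 1, 1)
        eub = datetime(parts[0] + 1, 1, 1)
    elif len(parts) == 2:      # month precision, e.g. "2015-01"
        y, m = parts
        ilb = datetime(y, m, 1)
        eub = datetime(y + 1, 1, 1) if m == 12 else datetime(y, m + 1, 1)
    else:                      # day precision, e.g. "2015-01-03"
        ilb = datetime(*parts)
        eub = ilb + timedelta(days=1)
    return ilb, eub
```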
-
-## Future Flavors
-Please file a new feature request as soon as you need them.
-* Time: For a time of the day (without the date). Supports time zones.
-* FragmentaryDateTime: For any fragmentary DateTime, i.e. an arbitrary combination of year, month, day of week, day of month, day of year, hour of day, minute, seconds (and nanoseconds). This flavor is useful for recurrent events like a bus schedule (_Saturday, 7:30_) or the time of a standing order for money transfer (_third day of the month_).
-
-----
-
-# REFERENCE
-* Description: REFERENCE values store the [Valid ID](./Glossary#valid-id) of an existing entity. They are useful to establish links between two entities.
-* Accepted Values: Any [Valid ID](./Glossary#valid-id) or [Valid Unique Existing Name](./Glossary#valid-unique-existing-name) or [Valid Unique Temporary ID](./Glossary#valid-unique-temporary-id) or [Valid Unique Prospective Name](./Glossary#valid-unique-prospective-name).
-* Note:
-  * After being processed successfully by the server, the REFERENCE value is normalized to a [Valid ID](./Glossary#valid-id). I.e. it is guaranteed that a REFERENCE value of a valid property is a positive integer.
-
-## FILE
-* Description: A FILE is a special REFERENCE. It only allows entity IDs which belong to a File.
-
-## RecordType as a data type
-* Furthermore, any RecordType can be used as a data type. This is a variant of the REFERENCE data type where any entity which is a child of the RecordType in question is a valid value.
-* Example:
-  * Let `Person` be a RecordType and `Bertrand Russell` be a child of `Person`. Then `Bertrand Russell` is a valid value for a property with the `Person` data type.
-
-# LIST
-* Description: A LIST is always a list of values of another data type, e.g. a LIST of TEXT values, a LIST of REFERENCE values, etc. Here we call TEXT resp. REFERENCE the **Element Data Type**. The LIST data type allows you to store an arbitrary (possibly empty) ordered collection of values of the *same* data type, duplicates included, in one property. Each value must be a valid value of the Element Data Type.
-* Example:
-  * LIST of INTEGER: ```[0, 2, 4, 5, 8, 2, 3, 6, 7]```
-  * LIST of Person, where `Person` is a RecordType: ```['Bertrand Russell', 'Mahatma Gandhi', 'Mother Teresa']```
-
-
diff --git a/doc/Entity.md b/doc/Entity.md
deleted file mode 100644
index c2ff4acc733638f964f6d6343ef4dfd4db05ebc1..0000000000000000000000000000000000000000
--- a/doc/Entity.md
+++ /dev/null
@@ -1,190 +0,0 @@
-Version: 0.1.0r1
-
-Author: Timm Fitschen
-
-Email: timm.fitschen@ds.mpg.de
-
-Date: 2017-12-17
-
-# Introduction
-
-CaosDB is a database management system that stores its data in `Entities`. An `Entity` can be thought of as the equivalent of the tables, rows, columns and tuples that fill the tables of a traditional RDBMS. Entities are not only used to store the data; they also define the structure of the data.
-
-# Formal Definition
-
-An `Entity` may have 
-
-* a `domain`
-* an `id`
-* a `role`
-* a `name`
-* a `data type`
-* a `Set of Values`
-* a `Set of Properties`
-* a `Set of Parents`
-
-A `domain` contains an `Entity`.
-
-An `id` is an arbitrary string. 
-
-A `role` is an arbitrary string. Especially, it may be one of the following strings:
-
-* `RecordType`
-* `Record`
-* `Relation`
-* `Property`
-* `File`
-* `QueryTemplate`
-* `Domain`
-* `Unit`
-* `Rule`
-* `DataType`
-* `Remote`
-
-A `name` is an arbitrary string.
-
-A `data type` contains an `Entity`. Note: this is not necessarily a `Data Type`.
-
-## Set of Values
-
-A `Set of Values` is a mapping from `indices` to a finite set of `Values`.
-
-An `index` is an interval of non-negative integers starting with zero.
-
-### Value
-
-A `Value` may have a `data type` and/or a `unit`.
-
-A `data type` is an `Entity`. Note: this is not necessarily a `Data Type`.
-
-A `unit` is an arbitrary string.
-
-## Data Type
-
-A `Data Type` is an `Entity` with role `DataType`.
-
-### Reference Data Type
-
-A `Reference Data Type` is a `Data Type`. It may have a `scope`. 
-
-A `scope` contains an `Entity`.
-
-### Collection Data Type
-
-A `Collection Data Type` is a `Data Type`. It may have an ordered set of `elements`.
-
-## Record Type
-
-A `Record Type` is an `Entity` with role `RecordType`.
-
-## Record
-
-A `Record` is an `Entity` with role `Record`.
-
-## Relation
-
-A `Relation` is an `Entity` with role `Relation`.
-
-## Property
-
-A `Property` is an `Entity` with role `Property`. It is also referred to as `Abstract Property`.
-
-## File
-
-A `File` is an `Entity` with role `File`.
-
-A `File` may have 
-
-* a `path`
-* a `size`
-* a `checksum`
-
-A `path` is an arbitrary string.
-
-A `size` is a non-negative integer.
-
-A `checksum` is an ordered pair (`method`,`result`).
-
-A `method` is an arbitrary string. 
-
-A `result` is an arbitrary string.
-
-## QueryTemplate
-
-A `QueryTemplate` is an `Entity` with role `QueryTemplate`.
-
-## Domain
-
-A `Domain` is an `Entity` with role `Domain`.
-
-## Unit
-
-A `Unit` is an `Entity` with role `Unit`.
-
-## Rule
-
-A `Rule` is an `Entity` with role `Rule`.
-
-## Remote
-
-A `Remote` is an `Entity` with role `Remote`.
-
-## Set of Parents
-
-A `Set of Parents` is a set of `Parents`.
-
-### Parent
-
-A `Parent` may contain another `Entity`. 
-
-A `Parent` may have an `affiliation`.
-
-An `affiliation` may be one of the following strings:
-
-* `subtyping`
-* `instantiation`
-* `membership`
-* `parthood`
-* `realization`
-
-## Set of Properties
-
-A `Set of Properties` is a triple (`index`, set of `Implemented Properties`, `Phrases`).
-
-An `index` is a bijective mapping from an interval of non-negative integer numbers starting with zero to the set of `Implemented Properties`.
-
-### Implemented Property
-
-An `Implemented Property` contains another `Entity`.
-
-An `Implemented Property` may have an `importance`.
-
-An `Implemented Property` may have a `maximum cardinality`.
-
-An `Implemented Property` may have a `minimum cardinality`.
-
-An `Implemented Property` may have an `import`.
-
-An `importance` is an arbitrary string. It may be one of the following strings:
-
-* `obligatory`
-* `recommended`
-* `suggested`
-* `fix`
-
-A `maximum cardinality` is a non-negative integer.
-
-A `minimum cardinality` is a non-negative integer.
-
-An `import` is an arbitrary string. It may be one of the following strings:
-
-* `fix`
-* `none`
-
-### Phrases
-
-`Phrases` are a mapping from the Cartesian product of the `index` with itself to a `predicate`.
-
-A `predicate` is an arbitrary string.
-
-
diff --git a/doc/FileServer.md b/doc/FileServer.md
deleted file mode 100644
index 64be8ef910d76ba6c1a304ddbdd64e4e25d31d56..0000000000000000000000000000000000000000
--- a/doc/FileServer.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-Author: Timm Fitschen
-
-Email: timm.fitschen@ds.mpg.de
-
-Date: 2014-06-17
-
-
-# Info
-There are several ways to utilize the file server component of CaosDB. It is possible to upload a file or a whole folder including subfolders via HTTP and the _drop off box_. It is possible to download a file via HTTP, identified by its ID or by its path in the internal file system. Furthermore, it is possible to get a file's metadata via HTTP as XML.
-
-# File upload
-## Drop off box
-
-The drop off box is a directory on the CaosDB server's local file system, specified in the `server.conf` file in the server's basepath (something like `~/CaosDB/server/server.conf`). The key in the `server.conf` is called `dropoffbox`. Since the drop off box directory is writable for everyone, users can push their files or complete folders via `mv` or `cp` (recommended!) into that folder. The server deletes files older than their maximum lifetime (24 hours by default, specified in `server.conf`). But within their lifetime a user can prompt the server to pick up the file (or folder) from the drop off box in order to transfer it to the internal file system.
-
-Now, the user may send a pick up request to `POST http://host:port/FilesDropOff` with a body of the following form:
-
-        <Post>
-          <File pickup="$path_dropoffbox" destination="$path_filesystem" description="$description" generator="$generator"/>
-          ...
-        </Post>
-
-whereby 
-* $path_dropoffbox is the actual relative path of the dropped file or folder in the DropOffBox,
-* $path_filesystem is the designated relative path of that object in the internal file system,
-* $description is a description of the file to be uploaded,
-* $generator is the tool or client used for pushing this file.
-  
-After a successful pick up the server will return:
-
-        <Response>
-          <File description="$description" path="$path" id="$id" checksum="$checksum" size="$size" />
-          ...
-        </Response>
-
-whereby 
-* $id is the newly generated ID of that file and 
-* $path is the path of the submitted file or folder relative to the file system's root.
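
A client could assemble the pick up request body along these lines. This is a sketch using Python's standard library only; the example values and the actual HTTP POST to `http://host:port/FilesDropOff` (including authentication) are left to the client and are not specified here:

```python
import xml.etree.ElementTree as ET

def pickup_request(pickup_path: str, destination: str,
                   description: str = "", generator: str = "") -> bytes:
    """Build the <Post><File pickup=... destination=.../></Post> body."""
    post = ET.Element("Post")
    ET.SubElement(post, "File", pickup=pickup_path, destination=destination,
                  description=description, generator=generator)
    return ET.tostring(post, encoding="utf-8")
```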
-
-## HTTP upload stream
-### Files
-
-File upload via HTTP is implemented in an [rfc1867](http://www.ietf.org/rfc/rfc1867.txt)-consistent way. This is a de-facto standard that defines a file upload as part of an HTML form submission. The concept is not elaborated here. But note that this protocol is not designed for uploads of complete structured folders; therefore the CaosDB file components have to impose that structure on the upload protocol.
-
-CaosDB's file upload resource does exclusively accept POST requests of MIME media type `multipart/form-data`. The first part of each POST body is expected to be a form-data text field, containing information about the files to be uploaded. It has to meet the following requirements:
-* `Content-type: text/plain; charset=UTF-8`
-* `Content-disposition: form-data; name="FileRepresentation"`
-
-If the content type of the first part is not `text/plain; charset=UTF-8`, the server will return error 418. If the body is not actually encoded in UTF-8, the server's behaviour is undefined. If the field name of the first part is not `FileRepresentation`, the server will return error 419.
-
-The body of that first part is to be an xml document of the following form: 
-
-
-        <Post>
-          <File upload="$temporary_identifier" destination="$path_filesystem" description="$description" checksum="$checksum" size="$size"/>
-          ...
-        </Post>
-
-whereby 
-* $temporary_identifier is simply an arbitrary name, which will be used to identify this `<File>` tag with an uploaded file in the other form-data parts.
-* $path_filesystem is the designated relative path of that object in the internal file system,
-* $description is a description of the file to be uploaded,
-* $size is the file's size in bytes,
-* $checksum is a SHA-512 Hash of the file.
-
-The other parts (of which there must be at least one) may have any appropriate media type. `application/octet-stream` is a good choice, for it is the default for any uploaded file according to [rfc1867](http://www.ietf.org/rfc/rfc1867.txt). Their field names may be any names meeting the requirements of [rfc1867](http://www.ietf.org/rfc/rfc1867.txt) (most notably they must be unique within this POST). But in order to identify the corresponding xml file representation of each file, the `filename` parameter of the content-disposition header has to be set to the proper $temporary_identifier. The Content-disposition type must be `form-data`:
-* `Content-disposition: form-data; name="$any_name"; filename="$temporary_identifier"`
-
-Finally, the body of each of these parts has to contain the file encoded in the proper `Content-Transfer-Encoding`.
-
-
-If a file part has a `filename` parameter which doesn't occur in the xml file representation the server will return error 420. The file will not be stored anywhere. If an xml file representation has no corresponding file to be uploaded (i.e. there is no part with the same `filename`) the server will return error 421. Some other error might occur if the checksum, the size, the destination etc. are somehow corrupted. 
-
-### Folders
-
-Uploading folders works in a similar way. The first part of the `multipart/form-data` document is to be the representation of the folders:
-
-
-        <Post>
-          <File upload="$temporary_identifier" destination="$path_filesystem" description="$description" checksum="$checksum" size="$size"/>
-          ...
-        </Post>
-
-The root folder is represented by a part which has a header of the form:
-* `Content-disposition: form-data; name="$any_name"; filename="$temporary_identifier/"`
-
-The slash at the end of the `filename` indicates that this is a folder, not a file. Consequently, the body of this part will be ignored and should be empty.
-
-Any file with the name `$filename` in the root folder is represented by a part which has a header of the form:
-* `Content-disposition: form-data; name="$any_name"; filename="$temporary_identifier/$filename"`
-
-Any sub folder with the name `$subfolder` is represented by a part which has a header of the form:
-* `Content-disposition: form-data; name="$any_name"; filename="$temporary_identifier/$subfolder/"`
-
-Likewise, a complete directory tree can be transferred by appending the structure to the `filename` header field.
-
-**Example**:
-Given the structure
-
-        rootfolder/
-        rootfolder/file1
-        rootfolder/subfolder/
-        rootfolder/subfolder/file2
-
-an upload document would have the following form:
-
-        ... (HTTP Header)
-        Content-type: multipart/form-data, boundary=AaB03x
-        
-        --AaB03x
-        content-disposition: form-data; name="FileRepresentation"
-        
-        <Post>
-          <File upload="temp1234" destination="$path_filesystem" description="$description" checksum="$checksum" size="$size"/>
-        </Post>
-        
-        --AaB03x
-        content-disposition: form-data; name="random_name1"; filename="temp1234/"
-        
-        --AaB03x
-        content-disposition: form-data; name="random_name1"; filename="temp1234/file1"
-        
-        Hello, world! This is file1.
-        
-        --AaB03x
-        content-disposition: form-data; name="random_name1"; filename="temp1234/subfolder/"
-        
-        --AaB03x
-        content-disposition: form-data; name="random_name1"; filename="temp1234/subfolder/file2"
-        
-        Hello, world! This is file2.
-        
-        --AaB03x--
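
The document layout above can be assembled mechanically. The following is a minimal sketch (`multipart_body` is a hypothetical helper, not a CaosDB client API); it only builds the body string and ignores Content-Transfer-Encoding concerns:

```python
def multipart_body(file_representation: str, parts, boundary: str = "AaB03x") -> str:
    """Assemble a multipart/form-data body as in the example above.

    `parts` is a list of (field_name, filename, content) tuples; a trailing
    slash in `filename` marks a folder, whose content stays empty.
    """
    lines = ["--" + boundary,
             'content-disposition: form-data; name="FileRepresentation"',
             "", file_representation]
    for name, filename, content in parts:
        lines += ["--" + boundary,
                  f'content-disposition: form-data; name="{name}"; filename="{filename}"',
                  "", content]
    lines.append("--" + boundary + "--")
    return "\r\n".join(lines)
```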
-
diff --git a/doc/Message.md b/doc/Message.md
deleted file mode 100644
index 5c5198d80974a6114d32c37224b22ff8ffb47b47..0000000000000000000000000000000000000000
--- a/doc/Message.md
+++ /dev/null
@@ -1,166 +0,0 @@
-# Introduction
-
-API Version 0.1.0
-
-A Message is a means of communication between the server and a client. The main purpose is to inform the clients about errors which occurred during transactions, issue warnings when entities have a certain state, or just explicitly confirm that a transaction was successful. Messages represent information that is not persistent or just the reproducible outcome of a transaction. Messages are not stored aside from logging.
-
-# Message Classes And Their Properties
-
-## Message (generic super class)
-
-A `Message` must be either a `Server Message` or a `Client Message`. 
-
-A `Message` must have a `description`. A `description` is a string and a human-readable explanation of the meaning and/or purpose of the message. The description must not have leading or trailing whitespaces. The description should be kept in English. For the time being there is no mechanism to indicate that the description is written in other languages. This could be changed in later versions of this API. 
-
-## Server Message
-
-A `Server Message` is a Message issued by the server. It must not be issued by clients. 
-
-A `Server Message` may be either a `Standard Server Message` or a `Non-Standard Server Message`.
-
-### Standard Server Message
-
-A `Standard Server Message` is one of a set of predefined messages with a certain meaning. The set of these `Standard Server Messages` is maintained and documented in the Java code of the server. There should be a server resource for these definitions in order to have an always up-to-date documentation of the messages on every server.
-
-A `Standard Server Message` must have an `id`. An `id` is a non-empty string that uniquely identifies a standard server message. An id should consist only of ASCII compliant upper-case Latin alphabetic letters from `A` to `Z` and the underscore character `_`.
-An `id` of a `Standard Server Message` must not start with the string `NSSM_`.
-
-A `Standard Server Message` must have a `type`. A `type` is one these strings: `Info`, `Warning`, `Error`, or `Success`.
-
-#### Error Message
-
-A `Server Message` with type `Error` is also called `Error Message` and sometimes just `Error`. An `Error Message` indicates that a request has *failed*. It informs about the reasons for that failure or the nature of the problems which occurred. The description of each error message should explain the error and indicate if and how the client can remedy the problems with her request.
-
-#### Warning Message
-
-A `Server Message` with type `Warning` is also called `Warning Message` and sometimes just `Warning`. A `Warning Message` indicates that certain *irregularities* occurred during the processing of the request or that the client requested something that is *not recommended but not strictly forbidden*.
-
-#### Info Message
-
-A `Server Message` with type `Info` is also called `Info Message` and sometimes just `Info`. An `Info Message` is a means to inform the client about *arbitrary events* which occurred during the processing of the request and which are *not* to be considered *erroneous* or *non-recommended*. These info messages are primarily intended to make the processing of the request more understandable for the client. Info messages are not meant to be used for debugging.
-
-#### Success Message
-
-A `Server Message` with type `Success` is also called a `Success Message`. A `Success Message` indicates a successful *state change* due to portions of a request or the whole request. A success message must not be issued if the request fails.
-
-### Non-Standard Server Message
-
-A `Non-Standard Server Message` may be issued by any non-standard server plugin or extension. It is a placeholder for extensions to the Message API. 
-
-A `Non-Standard Server Message` may have an `id`. An `id` is a non-empty string. It should consist only of ASCII compliant upper-case Latin alphabetic letters from `A` to `Z` and the underscore character `_`. However, the id should not be equal to any id from the set of predefined standard server messages. Furthermore, the id of a non-standard server message should start with the string `NSSM_`.
-
-A `Non-Standard Server Message` may have a `type`. A `type` is a non-empty string. It should consist only of ASCII compliant upper-case or lower-case Latin alphabetic letters from `a` to `z`, from `A` to `Z`, and the underscore character `_`. If the type is equal to one of the above-mentioned types, it must have the same meaning and the same effects on the request as the respective type from above. Especially, a message with type `Error` must not be issued unless the request actually fails. Likewise a `Success` must not be issued unless the request actually caused a *state change* of the server.
-
-## Client Message
-
-A `Client Message` may have an `ignore` flag. The `ignore` flag can have one of these values: `no`, `yes`, `warn`, `silent`
-
-A `Client Message` is a message issued by a client. It should not be issued by the server. A `Client Message` may be completely ignored by clients. A client message must not be ignored by the server. A `Client Message` which cannot be understood by the server must result in an error, unless the `ignore` flag states otherwise. 
-
-### Ignore Flag
-
-If the `ignore` flag is set to `no` the server must not ignore the client message. If the server cannot understand the client message an error must be issued. This will cause the transaction to fail.
-
-If the `ignore` flag is set to `yes` the server must ignore the client message.
-
-If the `ignore` flag is set to `warn` the server should not ignore the message. If the server cannot understand the client message, a warning must be issued. The transaction will not fail due to this warning.
-
-## Message Parameters
-
-A `Message` may have zero or more parameters. A `Message Parameter` is a pair of a `key` and a `value`. It is intended to facilitate the processing and representation of messages by clients and the server. For example, consider an `Error Message` which states that a certain server state cannot be reached and the reason is that there is an entity with certain features. Then it is useful to refer to the entity via the
-parameters. A client can now resolve the entity and just show it or generate a URI for this entity.
-
-A `key` is a non-empty string which should consist only of ASCII compliant lower-case Latin alphabetic letters from `a` to `z` and the minus character `-`. A `key` must be unique among the keys of the message parameters. 
-
-A `value` is a possibly empty, arbitrary string which must not have leading or trailing white spaces.
-
-A `Message Parameter` may have a `type`. The `type` of a `Message Parameter` is also called a `Message Parameter Type`. A `Message Parameter Type` is a non-empty string which should consist only of ASCII compliant lower-case Latin alphabetic letters from `a` to `z` and the minus character `-`. A message parameter type may be one of these strings: `entity-id`, `entity-name`, `entity-cuid`, `property-index`, `parent-id`, `parent-name`.
-
-A `Message Parameter` with a type which begins with `entity-` is also called an `Entity Message Parameter`. The value of an `Entity Message Parameter` must refer to an entity—via its id, name, or cuid, respectively.
-
-A `Message Parameter` with a type which begins with `property-` is also called a `Property Message Parameter`. The value of such a parameter must refer to an entity's property. In the case of the `property-index` type the value refers to a property via a zero-based index (among the list of properties of that entity). The list of properties in question must belong to the `Message Bearer` which must in turn be an `Entity`.
-
-A `Message Parameter` with a type which begins with `parent-` is also called a `Parent Message Parameter`. The value of such a parameter must refer to an entity's parent via its id or name, respectively.
-
-## Message Bearer
-
-A `Message` must have a single `Message Bearer`, or, equivalently, a `Message` `belongs to` a single `Message Bearer`. The message is usually considered to carry information about the message bearer if not stated otherwise. The message's subject should be the message bearer itself, so to speak. However, as possibly indicated by a `Message Parameter`, the message may additionally or solely be concerned with things other than the message bearer. Please note: The message bearer may also indicate the context of evaluation of the message parameters, e.g. when the type of the message parameter is `property-index`.
-
-A `Message Bearer` may be an `Entity`, a `Property`, a `Container`, a `Request`, a `Response`, or a `Transaction`.
-
-# Representation and Serialization
-
-Messages can be serialized to and deserialized from XML.
-
-## XML Representation
-
-A `Message` is serialized into a single XML Element Node (hereafter the *root element*) with zero or more Child Nodes.
-
-#### Root Element Tag
-
-The root element's tag of a `Server Message` must be equal to the message's `type` if and only if that type is one of the allowed types of a `Standard Server Message` (this also applies to a Non-Standard Server Message whose type happens to match one of those types). Otherwise the root tag is just 'ServerMessage'.
-
-```xml
-<Error/><!--an Error Message-->
-<Warning/><!--a Warning Message-->
-<Info/><!--an Info Message-->
-<Success/><!--a Success Message-->
-<ServerMessage/> <!--a Non-Standard Server Message with a non-standard type-->
-```
-
-The root element's tag of a `Client Message` must be 'ClientMessage'. E.g.
-```xml
-<ClientMessage/><!--a Client Message-->
-```
-
-#### Root Element Attributes
-
-The root element must have the attribute nodes `id` and/or `ignore` if and only if the message has the corresponding properties. The root element must have a 'type' attribute only if the message has a type property and the type is not equal to the root element's tag. The values of the attributes must equal the corresponding properties. E.g.
-
-```xml
-<Error id="ENTITY_DOES_NOT_EXIST" type="Error"/><!--this and the next element are equivalent-->
-<Error id="ENTITY_DOES_NOT_EXIST"/>
-<ServerMessage type="CustomType"/><!--has no id-->
-<ServerMessage id="NSSM_MY_ID"/><!--has no type-->
-```
-
-or
-
-```xml
-<ClientMessage id="CM_MY_ID" ignore="warn"/>
-```
-
-All other Attributes should be ignored.
-
-#### Description Element
-
-The root element must have exactly one Child Element Node with tag 'Description' if and only if the message has a `description` property. The string value of the message's description must be the first Child Text Node of the 'Description' Element. E.g.
-
-```xml
-<ServerMessage>
-  <Description>This is a description.</Description>
-</ServerMessage>
-```
-
-Please note: Any leading or trailing whitespace of the Text Node must be stripped during deserialization.
-
-All other Attributes and Child Nodes should be ignored.
-
-#### Parameters Element
-
-The root element must have exactly one Child Element Node with tag 'Parameters' if the message has at least one `parameter`. The 'Parameters' Element in turn must have a single Child Element Node for each parameter; these are called `Parameter Elements`.
-
-A `Parameter Element` must have a tag equal to the `key` of the parameter.
-It must have a `type` attribute equal to the parameter's `type` property if and only if the parameter has a type, and it must have a first Child Text Node which is equal to the parameter's `value`. E.g.
-
-```xml
-<ClientMessage>
-  <Parameters>
-    <param-one type="entity-name">Experiment</param-one><!--One parameter with key="param-one", value="Experiment", and type="entity-name"-->
-  </Parameters>
-</ClientMessage>
-```
-
-Please note: Any leading or trailing whitespace of the Text Node must be stripped during deserialization.
-
-All other Attributes and Child Nodes below the 'Parameters' Element should be ignored.
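Taken together, the serialization rules above can be sketched in code. The following Python snippet is illustrative only; the function name, the parameter layout, and the `{key: (value, type)}` convention are assumptions, not part of any CaosDB client:

```python
import xml.etree.ElementTree as ET

STANDARD_TYPES = {"Error", "Warning", "Info", "Success"}

def serialize_server_message(type_=None, id_=None, description=None,
                             parameters=None):
    # The root tag equals the type only for standard server message types.
    tag = type_ if type_ in STANDARD_TYPES else "ServerMessage"
    root = ET.Element(tag)
    if type_ is not None and type_ != tag:
        root.set("type", type_)   # 'type' attribute only if it differs from the tag
    if id_ is not None:
        root.set("id", id_)
    if description is not None:
        ET.SubElement(root, "Description").text = description
    if parameters:                # {key: (value, type-or-None)}
        params = ET.SubElement(root, "Parameters")
        for key, (value, ptype) in parameters.items():
            el = ET.SubElement(params, key)
            if ptype is not None:
                el.set("type", ptype)
            el.text = value
    return ET.tostring(root, encoding="unicode")

print(serialize_server_message(type_="Error", id_="ENTITY_DOES_NOT_EXIST",
                               description="No such entity."))
```

Note how the `type` attribute is emitted only for non-standard types, matching the attribute rules above.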
diff --git a/doc/Paging.md b/doc/Paging.md
deleted file mode 100644
index 608ebc0489d35e5e60ff9cb0c5c28846d729a1ca..0000000000000000000000000000000000000000
--- a/doc/Paging.md
+++ /dev/null
@@ -1,24 +0,0 @@
-The Paging flag splits the retrieval of a (possibly huge) number of entities into pages.
-
-# Syntax
-
-
-          flag   = name, [":", value];
-          name   = "P";
-          value  = [ index ], ["L", length];
-          index  =  ? any positive integer ?;
-          length =  ? any positive integer ?;
-
-# Semantics
-
-The `index` (starting with zero) denotes the index of the first entity to be retrieved. The `length` is the number of entities on that page. If `length` is omitted, the default number of entities is returned (as configured by a server constant called ...). If only the `name` is given, the paging behaves as if the `index` were zero.
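The grammar and semantics above can be exercised with a short parser sketch (a hypothetical helper, not server code; returning `None` for "server default length" is an assumption):

```python
import re

def parse_paging_flag(flag):
    """Parse a paging flag like 'P', 'P:24', 'P:L50', 'P:24L50'."""
    m = re.fullmatch(r"P(?::(\d*)(?:L(\d+))?)?", flag)
    if m is None:
        raise ValueError(f"not a paging flag: {flag!r}")
    index = int(m.group(1)) if m.group(1) else 0     # omitted index -> 0
    length = int(m.group(2)) if m.group(2) else None # None -> server default
    return index, length

print(parse_paging_flag("P:24L50"))  # (24, 50)
print(parse_paging_flag("P:L50"))    # (0, 50)
print(parse_paging_flag("P"))        # (0, None)
```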
-
-# Examples
-
-`https://caosdb/Entities/all?flags=P:24L50` returns 50 entities starting with the 25th entity which would be retrieved without paging.
-
-`https://caosdb/Entities/all?flags=P:24` returns the default number of entities starting with the 25th entity which would be retrieved without paging.
-
-`https://caosdb/Entities/all?flags=P:L50` returns 50 entities starting with the first entity which would be retrieved without paging.
-
-`https://caosdb/Entities/all?flags=P` returns the default number of entities starting with the first entity which would be retrieved without paging.
\ No newline at end of file
diff --git a/doc/Query.md b/doc/Query.md
deleted file mode 100644
index ca1499672d9f66ec59731443b48f46dff54cce15..0000000000000000000000000000000000000000
--- a/doc/Query.md
+++ /dev/null
@@ -1,439 +0,0 @@
-
-# Searching Data
-
-In this chapter, the CaosDB Query Language (CQL) is presented as a means of
-formulating search commands, commonly referred to as queries. It is highly
-recommended that you experiment with the examples provided, such as those found
-on https://demo.indiscale.com. An interactive tour is also available on this
-public instance, which includes a comprehensive overview of the query language.
-Therefore, it is suggested that you begin there and subsequently proceed with
-this more detailed explanation.
-
-## Introduction
-
-Queries typically start with the keyword `FIND`, followed by a description of
-what you want to find. For example, you can search for all musical instruments
-with `FIND MusicalInstrument`.
-
-*Note* that CQL is case-**in**sensitive. We will write CQL keywords in all
-caps to illustrate which parts belong to the language.
-
-The most common way is to provide a RecordType name after `FIND` (as in the
-example above). However, you can also use the name of some other entity:
-`FIND 'My first guitar'`.
-
-*Note* that we put the name in quotes here. Spaces serve as word separators
-in CQL. Thus, if something contains spaces, like the name here, it needs to be
-quoted.
-
-While queries like the last one are great to get an impression of the data,
-often we need to be more specific. Therefore, queries can include various
-conditions to restrict the result set.
-
-Example: `FIND MusicalAnalysis WITH quality_factor>0.5 AND date IN
-2019`. The keyword `WITH` signifies that for each Record of the type
-`MusicalAnalysis`, an assessment is made to determine whether it possesses a
-Property labelled `quality_factor` that exceeds 0.5, as well as another
-Property labelled `date` that may correspond to any day within the year 2019.
-
-In order to make CQL easier to learn and to remember we designed it to be close
-to natural spoken English language. For example, you can write
-`FIND Guitar WHICH HAS A PROPERTY price`. Here, "HAS A PROPERTY" is what we call
-syntactic sugar. It lets the query roll off the tongue more easily than
-`FIND Guitar WHICH price` but it is actually not needed and does not change
-the meaning of the query. In fact, you could also write `FIND Guitar WITH
-price`.
-
-If you are only interested in the number of Entities that match your query, you
-can replace `FIND` with `COUNT` and the query will only return the number of
-Entities in the result set.
-
-Sometimes the list of Records that you get using a `FIND` query is not what you
-need, especially if you want to export a subset of the data for analysis
-with some external tool.
-`SELECT` queries represent the query result in tabular form.
-
-If you replace the `FIND` keyword of a query with `SELECT x, y, z FROM`, then
-CaosDB will return the result as tabular data.
-
-For example, instead of `FIND Guitar`, try out
-`SELECT name, electric FROM Guitar`
-
-As you can see, those queries are designed to allow very specific data requests.
-If you do not want/need to be as specific you can omit the first keyword (`FIND`
-or `SELECT`) which creates a search for anything that has a text Property with
-something like your expression. For example, the query "John" will search for
-any Record that has a text property containing this string.
-
-With this, we conclude our introduction of CQL. You now know about the basic
-elements. The following will cover the various aspects in more detail and you
-will for example learn how you can use references among Records, or meta data
-like the creation time of a Record to restrict the query result set.
-
-## What am I searching for?
-
-We already learned that we can provide the name of a RecordType after the `FIND`
-keyword. Let's call this part of the query the "entity expression". In general,
-the entity expression identifies one or more entities via their name, CaosDB ID,
-or a pattern.
-- `FIND Guitar`
-- `FIND Guit*` ('*' represents none, one or more characters)
-- `FIND <<[gG]ui.*>>` (a regular expression surrounded by `<<` and `>>`; see below)
-- `FIND 110`
-
-The result set will contain Entities that are either identified by the entity
-expression directly (i.e. they have the given name or ID) or that have such an
-Entity as a parent.
-
-As you know, CaosDB distinguishes among different Entity roles:
-- Entity
-- Record
-- RecordType
-- Property
-- File
-
-You can provide the role directly after the `FIND` keyword and before the
-entity expression: `FIND RECORD Guitar`. The result set will then be restricted
-to Entities with that role.
-
-## Conditions / Filters
-
-### POV - Property-Operator-Value
-
-The following queries are equivalent and will restrict the result set to entities which have a property named _pname1_ that has a value _val1_.
-
-`FIND ename.pname1=val1`
-
-`FIND ename WITH pname1=val1`
-
-`FIND ename WHICH HAS A PROPERTY pname1=val1`
-
-`FIND ename WHICH HAS A pname1=val1`
-
-Again, the result set can be restricted to records:
-
-`FIND RECORD ename WHICH HAS A pname1=val1`
-
-_Currently known operators:_ `=, !=, <=, <, >=, >` (but see the next paragraphs!)
-
-#### Special Operator: LIKE
-
-The _LIKE_ can be used with wildcards. The `*` is a wildcard for any (possibly empty) sequence of characters. Examples:
-
-`FIND RECORD ename WHICH HAS A pname1 LIKE va*`
-
-`FIND RECORD ename WHICH HAS A pname1 LIKE va*1`
-
-`FIND RECORD ename WHICH HAS A pname1 LIKE *al1`
-
-_Note:_ The _LIKE_ operator will only produce predictable results with text properties.
-
-#### Special Case: References
-
-In general a reference can be addressed just like a POV filter. So
-
-`FIND ename1.pname1=ename2`
-
-will also return any entity named _ename1_ which references the entity with name or id _ename2_ via a reference property named _pname1_. However, it will also return any entity with a text property of that name with the string value _ename2_. In order to restrict the result set to reference properties one may make use of special reference operators:
-
-_reference operators:_ `->, REFERENCES, REFERENCE TO`
-
-
-The query looks like this:
-
-`FIND ename1 WHICH HAS A pname1 REFERENCE TO ename2`
-
-`FIND ename1 WHICH HAS A pname1->ename2`
-
-#### Special Case: DateTime
-
-_DateTime operators:_ `=, !=, <, >, IN, NOT IN`
-
-##### `d1=d2`: Equivalence relation.
-* *True* iff d1 and d2 are equal in every respect (same DateTime flavor, the same fields are defined/undefined, and all defined fields are equal).
-* *False* iff they have the same DateTime flavor but different fields defined or fields with differing values.
-* *Undefined* otherwise.
-
-Examples:
-* `2015-04-03=2015-04-03T00:00:00` is undefined.
-* `2015-04-03T00:00:00=2015-04-03T00:00:00.0` is undefined (second precision vs. nanosecond precision).
-* `2015-04-03T00:00:00.0=2015-04-03T00:00:00.0` is true.
-* `2015-04-03T00:00:00=2015-04-03T00:00:00` is true.
-* `2015-04=2015-05` is false.
-* `2015-04=2015-04` is true.
-
-##### `d1!=d2`: Intransitive, symmetric relation.
-* *True* iff `d1=d2` is false.
-* *False* iff `d1=d2` is true.
-* *Undefined* otherwise.
-
-Examples:
-* `2015-04-03!=2015-04-03T00:00:00` is undefined.
-* `2015-04-03T00:00:00!=2015-04-03T00:00:00.0` is undefined.
-* `2015-04-03T00:00:00.0!=2015-04-03T00:00:00.0` is false.
-* `2015-04-03T00:00:00!=2015-04-03T00:00:00` is false.
-* `2015-04!=2015-05` is true.
-* `2015-04!=2015-04` is false.
-
-##### `d1>d2`: Transitive, non-symmetric relation.
-Semantics depend on the flavors of d1 and d2. If both are...
-###### [UTCDateTime](Datatype#datetime)
-* *True* iff the time of d1 is after the time of d2 according to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time).
-* *False* otherwise.
-
-###### [SemiCompleteDateTime](Datatype#datetime)
-* *True* iff `d1.ILB>d2.EUB` is true or `d1.ILB=d2.EUB` is true.
-* *False* iff `d1.EUB<d2.ILB` is true or `d1.EUB=d2.ILB` is true.
-* *Undefined* otherwise.
-
-Examples:
-* `2015>2014` is true.
-* `2015-04>2014` is true.
-* `2015-01-01T20:15:00>2015-01-01T20:14` is true.
-* `2015-04>2015` is undefined.
-* `2015>2015-04` is undefined.
-* `2015-01-01T20:15>2015-01-01T20:15:15` is undefined.
-* `2014>2015` is false.
-* `2014-04>2015` is false.
-* `2014-01-01>2015-01-01T20:15:30` is false.
-
-##### `d1<d2`: Transitive, non-symmetric relation.
-Semantics depend on the flavors of d1 and d2. If both are...
-###### [UTCDateTime](Datatype#datetime)
-* *True* iff the time of d1 is before the time of d2 according to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time).
-* *False* otherwise.
-
-###### [SemiCompleteDateTime](Datatype#datetime)
-* *True* iff `d1.EUB<d2.ILB` is true or `d1.EUB=d2.ILB` is true.
-* *False* iff `d1.ILB>d2.EUB` is true or `d1.ILB=d2.EUB` is true.
-* *Undefined* otherwise.
-
-Examples:
-* `2014<2015` is true.
-* `2014-04<2015` is true.
-* `2014-01-01<2015-01-01T20:15:30` is true.
-* `2015-04<2015` is undefined.
-* `2015<2015-04` is undefined.
-* `2015-01-01T20:15<2015-01-01T20:15:15` is undefined.
-* `2015<2014` is false.
-* `2015-04<2014` is false.
-* `2015-01-01T20:15:00<2015-01-01T20:14` is false.
-
-##### `d1 IN d2`: Transitive, non-symmetric relation.
-Semantics depend on the flavors of d1 and d2. If both are...
-###### [SemiCompleteDateTime](Datatype#datetime)
-* *True* iff (`d1.ILB>d2.ILB` is true or `d1.ILB=d2.ILB` is true) and (`d1.EUB<d2.EUB` is true or `d1.EUB=d2.EUB` is true).
-* *False* otherwise.
-
-Examples:
-* `2015-01-01 IN 2015` is true.
-* `2015-01-01T20:15:30 IN 2015-01-01` is true.
-* `2015-01-01T20:15:30 IN 2015-01-01T20:15:30` is true.
-* `2015 IN 2015-01-01` is false.
-* `2015-01-01 IN 2015-01-01T20:15:30` is false.
-
-##### `d1 NOT IN d2`:  Transitive, non-symmetric relation.
-Semantics depend on the flavors of d1 and d2. If both are...
-###### [SemiCompleteDateTime](Datatype#datetime)
-* *True* iff `d1 IN d2` is false.
-* *False* otherwise.
-
-Examples:
-* `2015 NOT IN 2015-01-01` is true.
-* `2015-01-01 NOT IN 2015-01-01T20:15:30` is true.
-* `2015-01-01 NOT IN 2015` is false.
-* `2015-01-01T20:15:30 NOT IN 2015-01-01` is false.
-* `2015-01-01T20:15:30 NOT IN 2015-01-01T20:15:30` is false.
-
-##### Note
-These semantics follow a three-valued logic with *true*, *false* and *undefined* as truth values. Only *true* is truth-preserving, i.e. only those expressions which evaluate to *true* pass the POV filter. `FIND ... WHICH HAS A somedate=2015-01` only returns entities for which `somedate=2015-01` is true. On the other hand, `FIND ... WHICH DOESN'T HAVE A somedate=2015-01` returns entities for which `somedate=2015-01` is false or undefined. In short, `NOT d1=d2` is not equivalent to `d1!=d2`; the latter assertion is stronger.
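The three-valued semantics can be made concrete with a small sketch. Here a SemiCompleteDateTime is modelled as a half-open interval `[ILB, EUB)`; all names (`year`, `month`, `gt`) are illustrative and not part of the server:

```python
from datetime import datetime

def year(y):
    # A year covers [Jan 1 of y, Jan 1 of y+1).
    return (datetime(y, 1, 1), datetime(y + 1, 1, 1))

def month(y, m):
    nxt = (y, m + 1) if m < 12 else (y + 1, 1)
    return (datetime(y, m, 1), datetime(nxt[0], nxt[1], 1))

def gt(d1, d2):
    """d1 > d2 -> True, False or None (undefined)."""
    if d1[0] >= d2[1]:   # d1.ILB > d2.EUB or d1.ILB = d2.EUB
        return True
    if d1[1] <= d2[0]:   # d1.EUB < d2.ILB or d1.EUB = d2.ILB
        return False
    return None

print(gt(year(2015), year(2014)))      # True
print(gt(month(2015, 4), year(2015)))  # None (undefined)
print(gt(year(2014), year(2015)))      # False

# Only True passes a POV filter; a negated filter passes False *and* None:
passes_pov = gt(month(2015, 4), year(2015)) is True          # False
passes_negated = gt(month(2015, 4), year(2015)) is not True  # True
```

The last two lines illustrate why `NOT d1>d2` is weaker than an inverted comparison: the undefined case passes the negated filter but fails the positive one.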
-
-#### Omitting the Property or the Value
-
-One doesn't have to specify the property or the value at all. The following query filters the result set for entities which have any property with a value greater than _val1_.
-
-`FIND ename WHICH HAS A PROPERTY > val1`
-
-`FIND ename . > val1`
-
-`FIND ename.>val1`
-
-
-And for references...
-
-`FIND ename1 WHICH HAS A REFERENCE TO ename2`
-
-`FIND ename1 WHICH REFERENCES ename2`
-
-`FIND ename1 . -> ename2`
-
-`FIND ename1.->ename2`
-
-
-The following query returns entities which have a _pname1_ property with any value.
-
-`FIND ename WHICH HAS A PROPERTY pname1`
-
-`FIND ename WHICH HAS A pname1`
-
-`FIND ename WITH pname1`
-
-`FIND ename WITH A pname1`
-
-`FIND ename WITH A PROPERTY pname1`
-
-`FIND ename WITH PROPERTY pname1`
-
-`FIND ename . pname1`
-
-`FIND ename.pname1`
-
-### TransactionFilter
-
-*Definition*
- sugar:: `HAS BEEN` | `HAVE BEEN` | `HAD BEEN` | `WAS` | `IS`
- negated_sugar:: `HAS NOT BEEN` | `HASN'T BEEN` | `WAS NOT` | `WASN'T` | `IS NOT` | `ISN'T`  | `HAVN'T BEEN` | `HAVE NOT BEEN` | `HADN'T BEEN` | `HAD NOT BEEN`
- by_clause:: `BY (ME | username | SOMEONE ELSE (BUT ME)? | SOMEONE ELSE BUT username)`
- datetime:: A datetime string of the form `YYYY[-MM[-DD(T| )[hh[:mm[:ss[.nnn][(+|-)zzzz]]]]]]`
- time_clause:: `[AT|ON|IN|BEFORE|AFTER|UNTIL|SINCE] (datetime) `
-
-`FIND ename WHICH (sugar|negated_sugar)? (NOT)? (CREATED|INSERTED|UPDATED) (by_clause time_clause?| time_clause by_clause?)`
-
-*Examples*
-
-`FIND ename WHICH HAS BEEN CREATED BY ME ON 2014-12-24`
-
-`FIND ename WHICH HAS BEEN CREATED BY SOMEONE ELSE ON 2014-12-24`
-
-`FIND ename WHICH HAS BEEN CREATED BY erwin ON 2014-12-24`
-
-`FIND ename WHICH HAS BEEN CREATED BY SOMEONE ELSE BUT erwin ON 2014-12-24`
-
-`FIND ename WHICH HAS BEEN CREATED BY erwin`
-
-`FIND ename WHICH HAS BEEN INSERTED SINCE 2021-04`
-
-Note that `SINCE` and `UNTIL` are inclusive, while `BEFORE` and `AFTER` are not.
-
-
-### File Location
-
-Search for file objects by their location:
-
-`FIND FILE WHICH IS STORED AT a/certain/path/`
-
-#### Wildcards
-
-_STORED AT_ can be used with wildcards similar to Unix shell wildcards.
- * `*` matches any characters or none at all, but not the directory separator `/`.
- * `**` matches any characters or none at all.
- * A leading `*` is a shortcut for `/**`.
- * A star directly between two other stars is ignored: `***` is the same as `**`.
- * Escape character: `\` (E.g. `\\` is a literal backslash. `\*` is a literal star. But `\\*` is a literal backslash followed by a wildcard.)
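A rough translation of these rules into a regular expression might look as follows. This is an illustrative sketch only; the server's actual matching, in particular the interaction of escapes with star collapsing, may differ:

```python
import re

def wildcard_to_regex(pattern):
    """Translate a STORED AT wildcard pattern into a compiled regex."""
    if pattern.startswith("*"):                  # leading * is shorthand for /**
        pattern = "/**" + pattern[1:]
    pattern = re.sub(r"\*{3,}", "**", pattern)   # *** is the same as **
    out, i = "", 0
    while i < len(pattern):
        c = pattern[i]
        if c == "\\" and i + 1 < len(pattern):   # escaped character
            out += re.escape(pattern[i + 1]); i += 2
        elif pattern.startswith("**", i):        # ** crosses directories
            out += ".*"; i += 2
        elif c == "*":                           # * stops at '/'
            out += "[^/]*"; i += 1
        else:
            out += re.escape(c); i += 1
    return re.compile(out + "$")

print(bool(wildcard_to_regex("/data/*.acq").match("/data/run1.acq")))      # True
print(bool(wildcard_to_regex("/data/*.acq").match("/data/sub/run1.acq")))  # False
print(bool(wildcard_to_regex("*.acq").match("/data/run1.acq")))            # True
```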
-
-Examples:
-
-Find any files ending with `.acq`:
-`FIND FILE WHICH IS STORED AT *.acq` or
-`FIND FILE WHICH IS STORED AT **.acq` or
-`FIND FILE WHICH IS STORED AT /**.acq`
-
-Find files stored one directory below `/data/`, ending with `.acq`:
-`FIND FILE WHICH IS STORED AT /data/*/*.acq`
-
-Find files stored in `/data/`, ending with `.acq`:
-`FIND FILE WHICH IS STORED AT /data/*.acq`
-
-Find files stored in a directory at any depth in the tree below `/data/`, ending with `.acq`:
-`FIND FILE WHICH IS STORED AT /data/**.acq`
-
-Find any file in a directory which begins with `2016-02`:
-`FIND FILE WHICH IS STORED AT */2016-02*/*`
-
-
-### Back References
-
-A back reference filters for entities that are referenced by another entity. The following query returns entities of the type _ename1_ which are referenced by _ename2_ entities via the reference property _pname1_.
-
-* `FIND ename1 WHICH IS REFERENCED BY ename2 AS A pname1`
-* `FIND ename1 WITH @ ename2 / pname1`
-* `FIND ename1 . @ ename2 / pname1`
-
-One may omit the property specification:
-
-* `FIND ename1 WHICH IS REFERENCED BY ename2`
-* `FIND ename1 WHICH HAS A PROPERTY @ ename2`
-* `FIND ename1 WITH @ ename2`
-* `FIND ename1 . @ ename2`
-
-### Combining Filters with Propositional Logic
-
-Any result set can be filtered by logically combining POV filters or back reference filters:
-
-#### Conjunction (AND)
-As we saw above, we can combine conditions: 
-`FIND MusicalAnalysis WHICH HAS quality_factor>0.5 AND date IN 2019`
-
-In general, the conjunction takes the form
-`FIND <eexpr> WHICH <filter1> AND <filter2>`. You can also chain more conditions
-with `AND`. If you mix conjunctions with disjunctions, you need to add brackets
-to define the priority. For example:
-`FIND <eexpr> WHICH (<filter1> AND <filter2>) OR <filter3>`.
-
-`FIND Guitar WHICH REFERENCES Manufacturer AND price` is a combination of
-a reference filter and a POV filter. For readability, you can also write
-`FIND Guitar WHICH REFERENCES Manufacturer AND WHICH HAS A price`. However,
-the additional "WHICH HAS A" is purely cosmetic (syntactic sugar).
-
-
-#### Disjunction (OR)
-The rules for disjunctions are the same as for conjunctions. See above.
-
-* `FIND ename1 WHICH HAS A PROPERTY pname1=val1 OR A PROPERTY pname2=val2 OR A PROPERTY ...`
-* `FIND ename1 WHICH HAS A PROPERTY pname1=val1 OR A pname2=val2 OR ...`
-* `FIND ename1 . pname1=val1 | pname2=val2 | ...`
-
-#### Negation (NOT)
-You can negate any filter by prefixing the filter with 'NOT' or '!':
-`FIND <eexpr> WHICH NOT <filter1>`.
-There are many syntactic sugar alternatives which are treated the same as "NOT":
-- `DOES NOT HAVE`
-- `ISN'T`
-- and many more
-
-#### Parentheses
-Basically, you can put parentheses around filter expressions and con- or
-disjunctions.
-- `FIND Guitar WHICH (REFERENCES Manufacturer AND WHICH HAS A price)`.
-- `FIND Guitar WHICH (REFERENCES Manufacturer) AND (WHICH HAS A price)`.
-
-For better readability, the above query can be written as:
-- `FIND Guitar WHICH (REFERENCES Manufacturer AND HAS A price)`.
-Note that this query without syntactic sugar looks like:
-- `FIND Guitar WHICH (REFERENCES Manufacturer AND price)`.
-
-* `FIND ename1 WHICH HAS A pname1=val1 AND DOESN'T HAVE A pname2<val2 AND ((WHICH HAS A pname3=val3 AND A pname4=val4) OR DOES NOT HAVE A (pname5=val5 AND pname6=val6))`
-* `FIND ename1 . pname1=val1 & !pname2<val2 & ((pname3=val3 & pname4=val4) | !(pname5=val5 & pname6=val6))`
-* `FIND ename1.pname1=val1&!pname2<val2&((pname3=val3&pname4=val4)|!(pname5=val5&pname6=val6))`
-
-### A Few Important Expressions
-
-*  A:: The indefinite article. This is only syntactic sugar. Equivalent expressions: `A, AN`
-*  AND:: The logical _and_. Equivalent expressions: `AND, &`
-*  FIND:: The beginning of the query.
-*  NOT:: The logical negation. Equivalent expressions: `NOT, DOESN'T HAVE A PROPERTY, DOES NOT HAVE A PROPERTY, DOESN'T HAVE A, DOES NOT HAVE A, DOES NOT, DOESN'T, IS NOT, ISN'T, !`
-*  OR:: The logical _or_. Equivalent expressions: `OR, |`
-*  RECORD,RECORDTYPE,FILE,PROPERTY:: Role expression for restricting the result set to a specific role.
-*  WHICH:: The marker for the beginning of the filters. Equivalent expressions: `WHICH, WHICH HAS A, WHICH HAS A PROPERTY, WHERE, WITH (A), .`
-*  REFERENCE:: This one is tricky: `REFERENCE TO` expresses the state of _having_ a reference property. `REFERENCED BY` expresses the state of _being_ referenced by another entity.
-*  COUNT:: `COUNT` works like `FIND` but returns only the number of entities in the result set, not the entities themselves.
-
-# Future
-
- * *Sub Queries* (or *Sub Properties*): `FIND ename WHICH HAS A pname WHICH HAS A subpname=val`. This is like: `FIND AN experiment WHICH HAS A camera WHICH HAS A 'serial number'= 1234567890`
- * *More Logic*, especially `ANY`, `ALL`, `NONE`, and `SUCH THAT` key words (and equivalents) for logical quantification: `FIND ename1 SUCH THAT ALL ename2 WHICH HAVE A REFERENCE TO ename1 HAVE A pname=val`. This is like `FIND experiment SUCH THAT ALL person WHICH ARE REFERENCED BY THIS experiment AS conductor HAVE AN 'academic title'=professor.`
-
-
-## Text matching
-
-TODO: Describe escape sequences like `\\`, `\*`, `\<<` and `\>>`.
diff --git a/doc/User_Administration.md b/doc/User_Administration.md
deleted file mode 100644
index a51894cd87126b07f56addccba3f635e04872d71..0000000000000000000000000000000000000000
--- a/doc/User_Administration.md
+++ /dev/null
@@ -1,34 +0,0 @@
-Author: Timm Fitschen
-
-Email: timm.fitschen@ds.mpg.de
-
-Date: 2013-02-23
-
-# No Proposal
-http://caosdb/register
-
-# Proposal
-
-## Add User
-
-* POST Request is to be sent to `http://host:port/User`.
-* This requires authentication as user _admin_ (default password: _adminpw_).
-* Http body:
-
-
-        <Post>
-          <User name="${username}" password="${md5ed_password}" />
-        </Post>
-
-## Delete User
-
-* DELETE Request
-* admin authentication required.
-* Http body:
-
-
-        <Delete>
-          <User name="${username}" />
-        </Delete>
-
-The user to be deleted may also be identified by its id (`id="${id}"`) instead of its name.
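For illustration, the proposed Add User body could be built like this in Python. The helper is hypothetical; only the payload layout and the MD5-hashed password come from the proposal above:

```python
import hashlib
from xml.sax.saxutils import quoteattr

def add_user_body(username, password):
    # The proposal transmits an MD5 hash of the password, not the password itself.
    md5ed = hashlib.md5(password.encode("utf-8")).hexdigest()
    return ("<Post>\n"
            f"  <User name={quoteattr(username)} password={quoteattr(md5ed)} />\n"
            "</Post>")

print(add_user_body("alice", "secret"))
```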
diff --git a/doc/devel/Benchmarking.md b/doc/devel/Benchmarking.md
deleted file mode 100644
index 8a3eff2addb927eab425f7e755e3a181a53b9d18..0000000000000000000000000000000000000000
--- a/doc/devel/Benchmarking.md
+++ /dev/null
@@ -1,316 +0,0 @@
-
-
-# Benchmarking CaosDB #
-
-Benchmarking CaosDB may encompass several distinct areas: How much time is spent in the server's
-Java code, how much time is spent inside the SQL backend, are the same costly methods called more
-than once?  This documentation tries to answer some questions connected with these benchmarking
-aspects and give you the tools to answer your own questions.
-
-
-## Before you start ##
-In order to obtain meaningful results, you should disable caching.
-
-### MariaDB
-Set the corresponding variable to 0: `SET GLOBAL query_cache_type = 0;`
-
-### Java Server
-In the config:
-```conf
-CACHE_DISABLE=true
-```
-
-
-## Tools for the benchmarking ##
-
-For averaging over many runs of comparable requests and for putting the database into a
-representative state, Python scripts are used.  The scripts can be found in the `caosdb-dev-tools`
-repository, located at [https://gitlab.indiscale.com/caosdb/src/caosdb-dev-tools](https://gitlab.indiscale.com/caosdb/src/caosdb-dev-tools) in the folder
-`benchmarking`:
-
-### Python Script `fill_database.py` ###
-
-This command-line script is meant to fill the database with enough data to represent an actual
-real-life case; it can easily create hundreds of thousands of Entities.
-
-The script inserts predefined amounts of randomized Entities (RecordTypes,
-Properties and Records) into the database.  Each Record has a random (but with defined average)
-number of Properties, some of which may be references to other Records which have been inserted
-before.  The actual insertion of the Entities into CaosDB is done in chunks of a defined size.
-
-Users can tell the script to store the times needed for the insertion of each chunk in a TSV file.
-
-### Python Script  `measure_execution_time.py` ###
-
-A somewhat outdated script which executes a given query a number of times and then saves statistics
-about the `TransactionBenchmark` readings (see below for more information about the transaction
-benchmarks) delivered by the server.
-
-
-### Python Script `sql_routine_measurement.py` ###
-
-
-
-Simply call `./sql_routine_measurement.py` in the scripts directory. An SQL
-file is executed automatically which enables the correct `performance_schema`
-tables. Note, however, that MariaDB's performance schema must already be enabled
-at server start-up: add `performance_schema=ON` to the MariaDB configuration file
-(see below).
-The script expects the MariaDB server to be accessible on 127.0.0.1 with the default caosdb user
-and password (caosdb;random1234).
-
-You might consider increasing `performance_schema_events_transactions_history_long_size`.
-```
-performance_schema_events_transactions_history_long_size=1000000
-```
-The performance schema must be enabled (see below).
-
-### MariaDB General Query Log ###
-
-MariaDB and MySQL have a feature to log SQL queries together with their execution timestamps.  This logging must be
-turned on in the SQL server as described in the [upstream documentation](https://mariadb.com/kb/en/general-query-log/):
-Add this to the MySQL configuration:
-```
-log_output=TABLE
-general_log
-```
-or by calling
-```sql
-SET GLOBAL log_output = 'TABLE';
-SET GLOBAL general_log = 'ON';
-```
-
-In the LinkAhead Docker environment, this can conveniently be
-done with `linkahead mysqllog {on,off,store}`.
-
-### MariaDB Slow Query Log ###
-See [slow query log docs](https://mariadb.com/kb/en/slow-query-log-overview/)
-
-### MariaDB Performance Schema ###
-The most detailed information on execution times can be acquired using the performance schema.
-
-To use it, the `performance_schema` setting must be enabled in the MariaDB server ([docs](https://mariadb.com/kb/en/performance-schema-overview/#enabling-the-performance-schema)), for example by setting
-this in the config files:
-```
-[mysqld]
-
-performance_schema=ON
-```
-
-The performance schema provides many different tables in the `performance_schema` database. You can instruct MariaDB to fill
-those tables by setting the appropriate `instrument` and `consumer` variables. E.g.
-```SQL
-update performance_schema.setup_instruments set enabled='YES', timed='YES' WHERE NAME LIKE '%statement%';
-update performance_schema.setup_consumers set enabled='YES' WHERE NAME LIKE '%statement%';
-```
-This can also be done via the configuration. 
-```
-[mysqld]
-
-performance_schema=ON
-performance-schema-instrument='statement/%=ON'
-performance-schema-consumer-events-statements-history=ON                        
-performance-schema-consumer-events-statements-history-long=ON
-```
-You may want to look at the result of the following commands:
-```sql
-
-select * from performance_schema.setup_consumers;
-select * from performance_schema.setup_instruments;
-```
-
-Note that `base_settings.sql` enables the appropriate instruments and consumers.
-
-Before you start a measurement, you will want to empty the tables. E.g.:
-```sql
-truncate table performance_schema.events_statements_history_long;
-```
-The procedure `reset_stats` in `base_settings.sql` clears the typically used ones.
-
-The tables contain many columns. An example to get an informative view is
-```sql
-select left(sql_text,50), left(digest_text,50), ms(timer_wait) from performance_schema.events_statements_history_long order by ms(timer_wait);
-```
-where the function `ms` is defined in `base_settings.sql`.
-Or a very useful one:
-```sql
-select  left(digest_text,100) as digest,ms(sum_timer_wait) as time_ms, count_star from performance_schema.events_statements_summary_by_digest order by time_ms;
-```
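The `ms()` helper itself is defined in `base_settings.sql` and is not reproduced here. Assuming it follows the usual performance-schema convention that timer columns count picoseconds, its effect corresponds to this Python sketch:

```python
def ms(timer_value_picoseconds):
    # performance_schema timers count picoseconds; 10**9 ps = 1 ms.
    return timer_value_picoseconds / 10**9

print(ms(2_500_000_000))  # 2.5
```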
-
-### Useful SQL configuration with docker
-In order to allow easy testing and debugging, the following is useful when using Docker.
-Change the docker-compose file to include the following for the mariadb service:
-```
-    networks:
-      # available on port 3306, host name 'sqldb'
-      - caosnet
-    ports:
-      - 3306:3306
-```
-Check it with `mysql -ucaosdb -prandom1234 -h127.0.0.1 caosdb`.
-Then add the appropriate changes (e.g. `performance_schema=ON`) to `profiles/empty/custom/mariadb.conf.d/mariadb.cnf` (or in the profile folder that you use).
-
-### Manual Java-side benchmarking ###
-
-Benchmarking can be done using the `TransactionBenchmark` class (in package
-`org.caosdb.server.database.misc`).
-
-- Single timings can be added to instances of that class via the
-  `addBenchmark(object, time)` method.  Multiple benchmarks for the same object
-  (typically just strings) can be averaged.
-- Benchmarks can be serialized into XML; `Container` and `Query` objects already
-  use this with their included benchmarks to output benchmarking results.
-- To work with the benchmarks of often used objects, use these methods:
-  - `Container.getTransactionBenchmark().addBenchmark()`
-  - `Query.addBenchmark()`
-
-
-To enable transaction benchmarks and disable caching in the server, set these
-server settings:
-```conf
-TRANSACTION_BENCHMARK_ENABLED=true
-CACHE_DISABLE=true
-```
-Additionally, the server should be started via `make run-debug` (instead of
-`make run-single`), otherwise the benchmarking will not be active.
-
-#### Notable benchmarks and where to find them ####
-
-| Name                                 | Where measured                               | What measured                 |
-|--------------------------------------|----------------------------------------------|-------------------------------|
-| `Retrieve.init`                      | transaction/Transaction.java#135             | transaction/Retrieve.java#48  |
-| `Retrieve.transaction`               | transaction/Transaction.java#174             | transaction/Retrieve.java#133 |
-| `Retrieve.post_transaction`          | transaction/Transaction.java#182             | transaction/Retrieve.java#77  |
-| `EntityResource.httpGetInChildClass` | resource/transaction/EntityResource.java#118 | all except XML generation     |
-| `ExecuteQuery`                       | ?                                            | ?                             |
-|                                      |                                              |                               |
-
-### External JVM profilers ###
-
-Additionally to the transaction benchmarks, it is possible to benchmark the server execution via
-external Java profilers.  For example, [VisualVM](https://visualvm.github.io/) can connect to JVMs running locally or remotely
-(e.g. in a Docker container).  To enable this in LinkAhead's Docker environment, set
-
-```yaml
-devel:
-  profiler: true
-```
-Alternatively, start the server (without docker) with the `run-debug-single` make target; it will expose
-the JMX interface, by default on port 9090.
-
-Most profilers, such as VisualVM, only gather cumulative data for call trees; they do not provide
-complete call graphs (as callgrind/kcachegrind would do).  They also do not differentiate between
-calls with different query strings, as long as the Java process flow is the same (for example, `FIND
-Record 1234` and `FIND Record A WHICH HAS A Property B WHICH HAS A Property C>100` would be handled
-equally).
-
-
-#### Example settings for VisualVM 
-
-In the sampler settings, you may want to add these expressions to the blocked
-packages: `org.restlet.**, com.mysql.**`.  Branches of the call tree which lie
-entirely inside the blacklist will become leaves.  Alternatively, specify a
-whitelist, for example with `org.caosdb.server.database.backend.implementation.**`,
-if you only want to see the time spent for certain MySQL calls.
-
-
-## How to set up a representative database ##
-For reproducible results, it makes sense to start off with an empty database and fill it using the
-`fill_database.py` script, for example like this:
-
-```sh
-./fill_database.py -t 500 -p 700 -r 10000 -s 100 --clean
-```
-
-The `--clean` argument is not strictly necessary when the database was empty before, but it may make
-sense when there have been previous runs of the command.  This example would create 500 RecordTypes,
-700 Properties and 10000 Records with randomized properties; everything is inserted in chunks of 100
-Entities.
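The chunked insertion can be sketched in a few lines of Python (an illustration only; `insert_chunk` below is a hypothetical stand-in for the actual CaosDB insertion call):

```python
def chunked(entities, chunk_size=100):
    """Yield successive chunks of at most chunk_size entities."""
    for i in range(0, len(entities), chunk_size):
        yield entities[i:i + chunk_size]

def insert_chunk(chunk):
    # Hypothetical placeholder for the actual insertion into CaosDB.
    pass

records = [f"Record_{i}" for i in range(10000)]
chunks = list(chunked(records))
for chunk in chunks:
    insert_chunk(chunk)
print(len(chunks))  # 100 chunks of 100 Records each
```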
-
-## How to measure request times ##
-
-If the execution of the Java components is of interest, the VisualVM profiler should be started and
-connected to the server before any requests to the server are started.
-
-When doing performance tests which are used for detailed analysis, it is important that
-
-1. CaosDB is in a reproducible state, which should be documented
-2. all measurements are repeated several times to account for inevitable variance in access (for
-   example file system caching, network variability etc.)
-
-### Filling the database ###
-
-By simply adding the option `-T logfile.tsv` to the `fill_database.py` command above, the times for
-inserting the records are stored in a tsv file and can be analyzed later.
-
-### Obtain statistics about a query ###
-
-To repeat single queries a number of times, `measure_execution_time.py` can be used, for example:
-
-```sh
-./measure_execution_time.py -n 120 -q "FIND MusicalInstrument WHICH IS REFERENCED BY Analysis"
-```
-
-This command executes the query 120 times, additional arguments could even plot the
-TransactionBenchmark results directly.
-
-## On method calling order and benchmarked events ##
-
-- `Transaction.execute()` :: Logs benchmarks for events like:
-  - `INIT` :: The transaction's `init()` method.
-  - `PRE_CHECK`
-  - `CHECK`
-  - `POST_CHECK`
-  - `PRE_TRANSACTION`
-  - `TRANSACTION` -> typically calls
-    `database.backend.transaction.[BackendTransaction].execute()`, which in turn
-    calls, some levels deeper, `backend.transaction.....execute(<k extends
-    BackendTransaction> t)` -> see next point
-  - ...
-- `backend.transaction.[...].execute(transaction)` :: This method is benchmarked
-  again (via parent class `BackendTransaction`), this is probably the deepest
-  level of benchmarking currently (Benchmark is logged as
-  e.g. `<RetrieveFullEntity>...</>`).  It finally calls
-  `[MySQLTransaction].execute()`.
-- `[MySQLTransaction].execute()` :: This is the deepest backend implementation
-  part, it typically creates a prepared statement and executes it.
-- Currently not benchmarked separately:
-  - Getting the actual implementation (probably fast?)
-  - Preparing the SQL statement
-  - Executing the SQL statement
-  - Java-side caching
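Client-side, such serialized benchmarks could be inspected like this (the XML structure below is a simplified assumption for illustration, not the server's exact output format):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified benchmark XML as it might appear in a response.
xml_snippet = """
<TransactionBenchmark>
  <RetrieveFullEntity avg="12.5" count="4"/>
  <MySQLRetrieveEntity avg="3.1" count="4"/>
</TransactionBenchmark>
"""

root = ET.fromstring(xml_snippet)
timings = {child.tag: float(child.get("avg")) for child in root}
print(timings)  # {'RetrieveFullEntity': 12.5, 'MySQLRetrieveEntity': 3.1}
```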
-
-## What is measured ##
-
-For a consistent interpretation, the exact definitions of the measured times are as follows:
-
-### SQL logs ###
-
-As per https://mariadb.com/kb/en/general-query-log, the logs store only the time at which the SQL
-server received a query, not the duration of the query.
-
-#### Possible future enhancements ####
-
-- The `query_response_time` plugin may be additionally used in the future, see
-  https://mariadb.com/kb/en/query-response-time-plugin
-
-### Transaction benchmarks ###
-
-Transaction benchmarking manually collects timing information for each transaction.  At defined
-points, different measurements can be made and accumulated; the results are finally returned to the
-client.  Benchmark objects may consist of sub benchmarks and have a number of measurement objects,
-which contain the actual statistics.
-
-Because transaction benchmarks must be manually added to the server code, they only monitor those
-code paths where they are added.  On the other hand, their manual nature allows for a more
-abstracted analysis of performance bottlenecks.
-
-### Java profiler ###
-
-VisualVM records the call tree for each thread, specifically which methods were called how often and
-how much time was spent inside these methods.
-
-### Global requests ###
-
-Python scripts may measure the global time needed for the execution of each request.
-`fill_database.py` obtains its numbers this way.
diff --git a/doc/devel/Development.md b/doc/devel/Development.md
deleted file mode 100644
index 246f80db14eeadfd1b95e062f0b43636d5f6960c..0000000000000000000000000000000000000000
--- a/doc/devel/Development.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: Development
-author: Daniel Hornung
-...
-
-
-# Developing the CaosDB server #
-This file contains information about server development. It is aimed at those
-who want to debug, understand or enhance the CaosDB server.
-
-## Testing ##
-Whether developing new features, refactoring code or fixing bugs, the server
-code should be thoroughly tested for correct and incorrect behaviour, on correct
-and incorrect input.
-
-### Writing tests ###
-Tests go into `src/test/java/caosdb/`; the files there can serve as examples for
-writing tests.
-
-### Running tests with Maven ###
-- Automatic testing can be done with `make test` or, after compilation, `mvn
-  test`.
-- Tests of single modules can be started with `mvn test -Dtest=TestClass`
-- Test of a single method `footest`: `mvn test -Dtest=TestClass#footest`
diff --git a/doc/devel/Logging.md b/doc/devel/Logging.md
deleted file mode 100644
index f9b1680e61b207379ff513e4828befca5df8ec31..0000000000000000000000000000000000000000
--- a/doc/devel/Logging.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# Logging
-
-## Framework
-
-We use the SLF4J API with a log4j2 backend for all of our code. Please do not use log4j2 directly or any other logging API.
-
-Note that some libraries on the classpath use the `java.util.logging` API and the log4j1 logging framework instead. These loggers cannot yet be configured as described in this README.
-
-## Configuration
-
-The configuration of the log4j2 backend is done via `properties` files which
-comply with the [log4j2
-specifications](https://logging.apache.org/log4j/2.x/manual/configuration.html#Properties).
-XML, YAML, or JSON files are not supported.  The usual mechanisms for automatic
-configuration with such files are disabled.  Instead, files have to be placed
-into the `conf` subdirs, as follows:
-
-### Default and Debug Logging
-
-The default configuration is located at `conf/core/log4j2-default.properties`. For the debug mode, the configuration from `conf/core/log4j2-debug.properties` is merged with the default configuration. These files should not be changed by the user.
-
-### User Defined Logging
-
-The default and debug configuration can be overridden by the user with `conf/ext/log4j2.properties` and any file in the directory `conf/ext/log4j2.properties.d/` which is suffixed by `.properties`. All logging configuration files are merged using the standard merge strategy of log4j2:
-
-> # Composite Configuration
-> Log4j allows multiple configuration files to be used by specifying them as a list of comma separated file paths on log4j.configurationFile. The merge logic can be controlled by specifying a class that implements the MergeStrategy interface on the log4j.mergeStrategy property. The default merge strategy will merge the files using the following rules:
-> 1. The global configuration attributes are aggregated with those in later configurations replacing those in previous configurations, with the exception that the highest status level and the lowest monitorInterval greater than 0 will be used.
-> 2. Properties from all configurations are aggregated. Duplicate properties replace those in previous configurations.
-> 3. Filters are aggregated under a CompositeFilter if more than one Filter is defined. Since Filters are not named duplicates may be present.
-> 4. Scripts and ScriptFile references are aggregated. Duplicate definitions replace those in previous configurations.
-> 5. Appenders are aggregated. Appenders with the same name are replaced by those in later configurations, including all of the Appender's subcomponents.
-> 6. Loggers are all aggregated. Logger attributes are individually merged with duplicates being replaced by those in later configurations. Appender references on a Logger are aggregated with duplicates being replaced by those in later configurations. Filters on a Logger are aggregated under a CompositeFilter if more than one Filter is defined. Since Filters are not named duplicates may be present. Filters under Appender references included or discarded depending on whether their parent Appender reference is kept or discarded.
-
-[2](https://logging.apache.org/log4j/2.x/manual/configuration.html#CompositeConfiguration)
-
-## Some Details and Examples
-
-### Make Verbose
-
-To make the server logs on the console more verbose, insert `rootLogger.level = DEBUG` or even `rootLogger.level = TRACE` into a properties file in the `conf/ext/log4j2.properties.d/` directory or the `conf/ext/log4j2.properties` file.
-
-### Log Directory
-
-By default, log files go to `./log/`, e.g. `./log/request_errors/current.log`. The log directory in `DEBUG_MODE` is located at `./testlog/`.
-
-To change that, insert `property.LOG_DIR = /path/to/my/logs` into a properties file in the `conf/ext/log4j2.properties.d/` directory or the `conf/ext/log4j2.properties` file.
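For example, a drop-in file `conf/ext/log4j2.properties.d/logdir.properties` (the file name is arbitrary, as long as it ends in `.properties`) could contain:

```properties
property.LOG_DIR = /path/to/my/logs
```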
-
-### Special loggers
-
-* `REQUEST_ERRORS_LOGGER` for logging server errors with SRID, full request and full response. WARNING: This logger stores the unencrypted content of requests with possibly confidential content.
-* `REQUEST_TIME_LOGGER` for timing the requests.
-
-These loggers are defined in the `conf/core/log4j2-default.properties` file.
-
-#### Enable Request Time Logger
-
-The `REQUEST_TIME_LOGGER` is disabled by default, its log level is set to `OFF`. To enable it and write logs to the directory denoted by `property.LOG_DIR`, create a `properties` file under `conf/ext/log4j2.properties.d/` which contains at least
-
-```properties
-property.REQUEST_TIME_LOGGER_LEVEL = TRACE
-```
-
-### debug.log
-
-When in `DEBUG_MODE`, e.g. when started with `make run-debug`, the server also writes all logs to `debug.log` in the log directory.
-
diff --git a/doc/faq.md b/doc/faq.md
deleted file mode 100644
index 09e8adb7f3072ea94042e43d962c33345c2482d6..0000000000000000000000000000000000000000
--- a/doc/faq.md
+++ /dev/null
@@ -1,43 +0,0 @@
-
-
-# How do I declare a LIST property?
-
-Use the datatype parameter (available with Property constructors and with the ```add_property``` method) and the ```LIST``` function.
-
-```python
-#with constructor
-p = caosdb.Property(name="ListOfDoubles", datatype=caosdb.LIST(caosdb.DOUBLE))
-
-# with add_property method
-my_entity.add_property(name="ListOfIntegers", datatype=caosdb.LIST(caosdb.INTEGER))
-my_entity.add_property(name="ListOfThings", datatype=caosdb.LIST("Thing"))
-my_entity.add_property(name="ListOfThings", datatype=caosdb.LIST(caosdb.RecordType('Thing')))
-```
-
-# Which data types are there?
-
-There are 7 basic data types:
-
-* `INTEGER`
-* `DOUBLE`
-* `DATETIME`
-* `TEXT`
-* `BOOLEAN`
-* `FILE`
-* `REFERENCE`
-
-There is (so far) 1 data type for collections:
-
-* `LIST` (Well, LIST-of-another-data-type, e.g. `LIST(INTEGER)`)
-
-And furthermore,... 
-
-* Any RecordType can be used as a `REFERENCE` data type with a limited scope. That is, a property
-
-    ```python
-    p = caosdb.Property(name="Conductor", datatype="Person")
-    ```
-
-    will only accept those Entities as value which have a "Person" RecordType as a direct or indirect parent.
-
-See also: [Datatype](Datatype)
diff --git a/src/doc/CaosDB-Query-Language.md b/src/doc/CaosDB-Query-Language.md
index a34191d051fec9907a6cbb47f11f97d60ac972ac..6cbe64f1582b53f9e5c6dc60e21d988fd193d476 100644
--- a/src/doc/CaosDB-Query-Language.md
+++ b/src/doc/CaosDB-Query-Language.md
@@ -2,61 +2,102 @@
 
 See syntax specification in [CaosDB Query Language Syntax](query-syntax).
 
-## Simple FIND Query
-
-The following query will return any record which has the name _somename_ and all
-record children of any entity with that name.
-
-`FIND somename`
-
-On server in the default configuration, the following queries are equivalent to this one.
-
-`FIND RECORD somename`
-
-`FIND RECORDS somename`
-
-Of course, the returned set of entities (henceforth referred to as _resultset_) can also be
-restricted to RecordTypes (`FIND RECORDTYPE ...`), Properties (`FIND PROPERTY ...`) and Files (`FIND
-FILE ...`).
-
-You can include all entities (Records, RecordTypes, Properties, ...) into the results by using the
-`ENTITY` keyword:
-
-`FIND ENTITY somename`
-
-Wildcards use `*` for any characters or none at all. Wildcards for single characters (like the `_` wildcard from mysql) are not implemented yet.
-
-`FIND en*` returns any record which has a name beginning with _en_.
-
-Regular expressions must be surrounded by _<<_ and _>>_:
-
-`FIND <<e[aemn]{2,5}>>`
-
-`FIND <<[cC]amera_[0-9]*>>`
-
-*TODO*:
-Describe escape sequences like `\\\\ `, `\*`, `\<<` and `\>>`.
-
-Currently, wildcards and regular expressions are only available for the _simple-find-part_ of the
-query, i.e. not for property-operator-value filters (see below).
-
-## Simple COUNT Query
-
-COUNT queries count entities which have certain properties.
-
-`COUNT ... rname ...`
-
-will return the number of records which have the name _rname_ and all record
-children of any entity with that name.
-
-The syntax of the COUNT queries is equivalent to the FIND queries in any
-respect (this also applies to wildcards and regular expressions) but one: The
-prefix is to be `COUNT` instead of `FIND`.
-
-Unlike the FIND queries, the COUNT queries do not return any entities. The result of the query is
-the number of entities which _would be_ returned if the query was a FIND query.
-
-## Filters
+In this chapter, the CaosDB Query Language (CQL) is presented as a means of
+formulating search commands, commonly referred to as queries. It is highly
+recommended that you experiment with the examples provided, such as those found
+on https://demo.indiscale.com. An interactive tour is also available on this
+public instance, which includes a comprehensive overview of the query language.
+Therefore, it is suggested that you begin there and subsequently proceed with
+this more detailed explanation.
+
+## Introduction
+
+Queries typically start with the keyword `FIND`, followed by a description of
+what you want to find. For example, you can search for all musical instruments
+with `FIND MusicalInstrument`.
+
+*Note*: CQL is case **in**sensitive. We will write CQL keywords in all
+caps to indicate which parts belong to the language.
+
+The most common way is to provide a RecordType name after `FIND` (as in the
+example above). However, you can also use the name of some other entity:
+`FIND 'My first guitar'`.
+
+*Note* that we put the name in quotes here. In CQL, spaces separate words.
+Thus, if a name contains spaces, like the one here, it needs to be
+quoted.
+
+While queries like the last one are great to get an impression of the data,
+often we need to be more specific. Therefore, queries can include various
+conditions to restrict the result set.
+
+Example: `FIND MusicalAnalysis WITH quality_factor>0.5 AND date IN
+2019`. The keyword `WITH` signifies that for each Record of the type
+`MusicalAnalysis`, an assessment is made to determine whether it possesses a
+Property labelled `quality_factor` that exceeds 0.5, as well as another
+Property labelled `date` that may correspond to any day within the year 2019.
+
+In order to make CQL easier to learn and to remember, we designed it to be close
+to naturally spoken English. For example, you can write
+`FIND Guitar WHICH HAS A PROPERTY price`. Here, "HAS A PROPERTY" is what we call
+syntactic sugar. It lets the query roll off the tongue more easily than
+`FIND Guitar WHICH price` but it is actually not needed and does not change
+the meaning of the query. In fact, you could also write `FIND Guitar WITH
+price`.
+
+If you are only interested in the number of Entities that match your query, you
+can replace `FIND` with `COUNT` and the query will only return the number of
+Entities in the result set.
+
+Sometimes the list of Records that you get using a `FIND` query is not what you
+need, especially if you want to export a subset of the data for analysis
+with some external tool.
+`SELECT` queries represent the query result in tabular form instead.
+
+If you replace the `FIND` keyword of a query with `SELECT x, y, z FROM`, then
+CaosDB will return the result as tabular data.
+
+For example, instead of `FIND Guitar`, try out
+`SELECT name, electric FROM Guitar`
+
+As you can see, those queries are designed to allow very specific data requests.
+If you do not want or need to be as specific, you can omit the first keyword (`FIND`
+or `SELECT`), which creates a search for anything that has a text Property with
+something like your expression. For example, the query "John" will search for
+any Record that has a text Property containing this string.
+
+With this, we conclude our introduction of CQL. You now know about the basic
+elements. The following sections cover the various aspects in more detail; you
+will, for example, learn how to use references among Records, or metadata
+like the creation time of a Record, to restrict the query result set.
+
+## What am I searching for?
+
+We already learned that we can provide the name of a RecordType after the `FIND`
+keyword. Let's call this part of the query the "entity expression". In general, the
+entity expression needs to identify one or more entities via their name, CaosDB ID
+or a pattern:
+- `FIND Guitar`
+- `FIND Guit*` (`*` represents none, one or more characters)
+- `FIND <<[gG]ui.*>>` (a regular expression surrounded by `<<` and `>>`; see below)
+- `FIND 110`
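The matching behaviour of the `*` wildcard can be illustrated by translating it into an equivalent regular expression (a client-side sketch of the semantics, not the server's implementation):

```python
import re

def cql_wildcard_to_regex(pattern):
    """Translate a CQL wildcard pattern ('*' matches none, one or more
    characters) into an anchored regular expression."""
    return "^" + ".*".join(re.escape(part) for part in pattern.split("*")) + "$"

names = ["Guitar", "Guitarist", "Flute"]
regex = re.compile(cql_wildcard_to_regex("Guit*"))
print([name for name in names if regex.match(name)])  # ['Guitar', 'Guitarist']
```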
+
+The result set will contain Entities that are either identified by the entity expression
+directly (i.e. they have the given name or ID) or that have such an Entity as
+parent.
+
+As you know, CaosDB distinguishes among different Entity roles:
+- Entity
+- Record
+- RecordType
+- Property
+- File
+
+You can provide the role directly after the `FIND` keyword and before the
+entity expression: `FIND RECORD Guitar`. The result set will then be restricted
+to Entities with that role.
+
+## Conditions / Filters
 
 ### POV - Property-Operator-Value
 
@@ -135,15 +176,19 @@ Examples:
 * `2015-04-03T00:00:00.0!=2015-04-03T00:00:00.0` is false.
 * `2015-04-03T00:00:00!=2015-04-03T00:00:00` is false.
 * `2015-04!=2015-05` is true.
-* `2015-04!=2015-04` is false
+* `2015-04!=2015-04` is false.
 
 ##### `d1>d2`: Transitive, non-symmetric relation.
-Semantics depend on the flavors of d1 and d2. If both are... 
-###### [UTCDateTime](specification/Datatype.html#datetime) 
+
+Semantics depend on the flavors of d1 and d2. If both are...
+
+###### [UTCDateTime](specification/Datatype.html#datetime)
+
 * ''True'' iff the time of d1 is after the time of d2 according to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time).
 * ''False'' otherwise.
 
 ###### [SemiCompleteDateTime](specification/Datatype.html#datetime)
+
 * ''True'' iff `d1.ILB>d2.EUB` is true or `d1.ILB=d2.EUB` is true.
 * ''False'' iff `d1.EUB<d2.ILB` is true or `d1.EUB=d2.ILB` is true.
 * ''Undefined'' otherwise.
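These rules can be sketched in Python, modelling a SemiCompleteDateTime simply by its pair (ILB, EUB), i.e. inclusive lower bound and exclusive upper bound, with `None` standing for ''undefined'' (an illustration of the semantics only, not server code):

```python
from datetime import datetime

def dt_greater(d1, d2):
    """Three-valued '>' on (ILB, EUB) pairs: returns True, False or None."""
    (ilb1, eub1), (ilb2, eub2) = d1, d2
    if ilb1 >= eub2:   # d1.ILB > d2.EUB  or  d1.ILB = d2.EUB
        return True
    if eub1 <= ilb2:   # d1.EUB < d2.ILB  or  d1.EUB = d2.ILB
        return False
    return None        # otherwise: undefined

year_2014 = (datetime(2014, 1, 1), datetime(2015, 1, 1))
year_2015 = (datetime(2015, 1, 1), datetime(2016, 1, 1))
print(dt_greater(year_2015, year_2014))  # True
print(dt_greater(year_2015, year_2015))  # None (undefined)
```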
@@ -160,12 +205,16 @@ Examples:
 * `2014-01-01>2015-01-01T20:15:30` is false.
 
 ##### `d1<d2`: Transitive, non-symmetric relation.
-Semantics depend on the flavors of d1 and d2. If both are... 
+
+Semantics depend on the flavors of d1 and d2. If both are...
+
 ###### [UTCDateTime](specification/Datatype.html#datetime)
+
 * ''True'' iff the time of d1 is before the time of d2 according to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time).
 * ''False'' otherwise.
 
 ###### [SemiCompleteDateTime](specification/Datatype.html#datetime)
+
 * ''True'' iff `d1.EUB<d2.ILB` is true or `d1.EUB=d2.ILB` is true.
 * ''False'' iff `d1.ILB>d2.EUB` is true or `d1.ILB=d2.EUB` is true.
 * ''Undefined'' otherwise.
@@ -182,8 +231,11 @@ Examples:
 * `2015-01-01T20:15.00<2015-01-01T20:14` is false.
 
 ##### `d1 IN d2`: Transitive, non-symmetric relation.
+
 Semantics depend on the flavors of d1 and d2. If both are... 
+
 ###### [SemiCompleteDateTime](specification/Datatype.html#datetime)
+
 * ''True'' iff (`d1.ILB>d2.ILB` is true or `d1.ILB=d2.ILB` is true) and (`d1.EUB<d2.EUB` is true or `d1.EUB=d2.EUB` is true).
 * ''False'' otherwise.
 
@@ -195,8 +247,11 @@ Examples:
 * `2015-01-01 IN 2015-01-01T20:15:30` is false.
 
 ##### `d1 NOT IN d2`: Transitive, non-symmetric relation.
+
 Semantics depend on the flavors of d1 and d2. If both are... 
+
 ###### [SemiCompleteDateTime](specification/Datatype.html#datetime)
+
 * ''True'' iff `d1.ILB IN d2.ILB` is false.
 * ''False'' otherwise.
 
@@ -208,7 +263,15 @@ Examples:
 * `2015-01-01T20:15:30 NOT IN 2015-01-01T20:15:30` is false.
 
 ##### Note
-These semantics follow a three-valued logic with ''true'', ''false'' and ''undefined'' as truth values. Only ''true'' is truth preserving. I.e. only those expressions which evaluate to ''true'' pass the POV filter. `FIND ... WHICH HAS A somedate=2015-01` only returns entities for which `somedate=2015-01` is true. On the other hand, `FIND ... WHICH DOESN'T HAVE A somedate=2015-01` returns entities for which `somedate=2015-01` is false or undefined. Shortly put, `NOT d1=d2` is not equivalent to `d1!=d2`. The latter assertion is stronger.
+
+These semantics follow a three-valued logic with `true`, `false` and `undefined` as truth
+values. Only `true` is truth preserving. I.e. only those expressions which evaluate to `true`
+pass the POV filter.
+
+`FIND ... WHICH HAS A somedate=2015-01` only returns entities for which `somedate=2015-01` is
+true. On the other hand, `FIND ... WHICH DOESN'T HAVE A somedate=2015-01` returns entities for which
+`somedate=2015-01` is false or undefined. Shortly put, `NOT d1=d2` is not equivalent to
+`d1!=d2`. The latter assertion is stronger.
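The difference between the two filters can be made concrete with a sketch of the three-valued `=` (intervals given as hypothetical (ILB, EUB) number pairs, `None` meaning ''undefined''):

```python
def dt_equal(d1, d2):
    """Three-valued '=' on (ILB, EUB) pairs: identical bounds -> True,
    disjoint intervals -> False, partial overlap -> None (undefined)."""
    if d1 == d2:
        return True
    (ilb1, eub1), (ilb2, eub2) = d1, d2
    if eub1 <= ilb2 or eub2 <= ilb1:
        return False
    return None

jan_2015 = (0, 31)   # stands for 2015-01
jan_15 = (14, 15)    # stands for 2015-01-15, which lies inside 2015-01
feb_2015 = (31, 59)  # stands for 2015-02

# 'WHICH HAS A somedate=2015-01' keeps only entities where '=' is True:
print(dt_equal(jan_15, jan_2015))    # None -> not kept
# "WHICH DOESN'T HAVE A somedate=2015-01" keeps False *and* undefined:
print(dt_equal(feb_2015, jan_2015))  # False -> kept
```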
 
 #### Omitting the Property or the Value
 
@@ -240,6 +303,12 @@ The following query returns records which have a _pname1_ property with any valu
 
 `FIND ename WITH pname1`
 
+`FIND ename WITH A pname1`
+
+`FIND ename WITH A PROPERTY pname1`
+
+`FIND ename WITH PROPERTY pname1`
+
 `FIND ename . pname1`
 
 `FIND ename.pname1`
@@ -338,24 +407,53 @@ Any result set can be filtered by logically combining POV filters or back refere
 
 #### Conjunction (AND)
 
-* `FIND ename1 WHICH HAS A PROPERTY pname1=val1 AND A PROPERTY pname2=val2 AND A PROPERTY...`
-* `FIND ename1 WHICH HAS A PROPERTY pname1=val1 AND A pname2=val2 AND ...`
-* `FIND ename1 . pname1=val1 & pname2=val2 & ...`
+As we saw above, we can combine conditions:
+
+`FIND MusicalAnalysis WHICH HAS quality_factor>0.5 AND date IN 2019`
+
+In general, the conjunction takes the form `FIND <eexpr> WHICH <filter1> AND <filter2>`. You can
+also use `&` instead of `AND` or chain more than two conditions.  If you mix conjunctions with
+disjunctions, you need to add brackets to define the priority. For example: `FIND <eexpr> WHICH
+(<filter1> AND <filter2>) OR <filter3>`.
+
+`FIND Guitar WHICH REFERENCES Manufacturer AND price` is a combination of a reference filter and a
+POV filter. For readability, you can also write  
+`FIND Guitar WHICH REFERENCES Manufacturer AND WHICH HAS A price`. However, the additional "WHICH
+HAS A" is purely cosmetic (syntactic sugar).
 
 #### Disjunction (OR)
 
+The rules for disjunctions (`OR` or `|`) are the same as for conjunctions, see above.
+
 * `FIND ename1 WHICH HAS A PROPERTY pname1=val1 OR A PROPERTY pname2=val2 OR A PROPERTY...`
 * `FIND ename1 WHICH HAS A PROPERTY pname1=val1 OR A pname2=val2 OR ...`
 * `FIND ename1 . pname1=val1 | pname2=val2 | ...`
 
 #### Negation (NOT)
 
+You can negate any filter by prefixing the filter with `NOT` or `!`:
+`FIND <eexpr> WHICH NOT <filter1>`.
+
+There are many syntactic sugar alternatives which are treated the same as "NOT":
+- `DOES NOT HAVE`
+- `ISN'T`
+- and many more
+
 * `FIND ename1 WHICH DOES NOT HAVE A PROPERTY pname1=val1`
 * `FIND ename1 WHICH DOESN'T HAVE A pname1=val1`
 * `FIND ename1 . NOT pname2=val2`
 * `FIND ename1 . !pname2=val2`
 
-#### ... and combinations with parentheses
+#### Parentheses
+
+Basically, you can put parentheses around filter expressions and around
+conjunctions or disjunctions.
+- `FIND Guitar WHICH (REFERENCES Manufacturer AND WHICH HAS A price)`.
+- `FIND Guitar WHICH (REFERENCES Manufacturer) AND (WHICH HAS A price)`.
+
+For better readability, the above query can be written as:
+- `FIND Guitar WHICH (REFERENCES Manufacturer AND HAS A price)`.
+
+Note that without syntactic sugar this query looks like:
+- `FIND Guitar WHICH (REFERENCES Manufacturer AND price)`.
 
 * `FIND ename1 WHICH HAS A pname1=val1 AND DOESN'T HAVE A pname2<val2 AND ((WHICH HAS A pname3=val3 AND A pname4=val4) OR DOES NOT HAVE A (pname5=val5 AND pname6=val6))`
 * `FIND ename1 . pname1=val1 & !pname2<val2 & ((pname3=val3 & pname4=val4) | !(pname5=val5 & pname6=val6))`
@@ -369,7 +467,7 @@ Any result set can be filtered by logically combining POV filters or back refere
 *  NOT:: The logical negation. Equivalent expressions: `NOT, DOESN'T HAVE A PROPERTY, DOES NOT HAVE A PROPERTY, DOESN'T HAVE A, DOES NOT HAVE A, DOES NOT, DOESN'T, IS NOT, ISN'T, !`
 *  OR:: The logical _or_. Equivalent expressions: `OR, |`
 *  RECORD,RECORDTYPE,FILE,PROPERTY:: Role expression for restricting the result set to a specific role.
-*  WHICH:: The marker for the beginning of the filters. Equivalent expressions: `WHICH, WHICH HAS A, WHICH HAS A PROPERTY, WHERE, WITH, .`
+*  WHICH:: The marker for the beginning of the filters. Equivalent expressions: `WHICH, WHICH HAS A, WHICH HAS A PROPERTY, WHERE, WITH (A), .`
 *  REFERENCE:: This one is tricky: `REFERENCE TO` expresses the state of _having_ a reference property. `REFERENCED BY` expresses the state of _being_ referenced by another entity.
 *  COUNT:: `COUNT` works like `FIND` but doesn't return the entities.
 
@@ -420,7 +518,7 @@ would return any entity with that name and all children, regardless of the
 entity's role. Basically, `FIND ename` *was* equivalent to `FIND ENTITY ename`.
 Since 0.9.0 the default behavior has changed and now `FIND ename` is equivalent
 to `FIND RECORD ename`. This default is, however, configurable via the
-`FIND_QUERY_DEFAULT_ROLE` server property. See [Server Configuration](./administration/configuration.rst).
+`FIND_QUERY_DEFAULT_ROLE` server property. See [Server Configuration](./administration/configuration).
 
 ## Future
 
diff --git a/src/doc/FAQ.rst b/src/doc/FAQ.rst
new file mode 100644
index 0000000000000000000000000000000000000000..54bfb296fa96e645034504e94368b34e76448b75
--- /dev/null
+++ b/src/doc/FAQ.rst
@@ -0,0 +1,57 @@
+====
+FAQs
+====
+
+These FAQs (frequently asked questions) can be extended if *you* help us.  Please `submit an issue
+<https://gitlab.com/caosdb/caosdb-server/issues/new>`__ if you have a question that should be
+answered here.
+
+.. contents:: Select your question:
+   :local:
+
+How do I declare a LIST property?
+=================================
+
+Use the ``datatype`` parameter (available with Property constructors and
+with the ``add_property`` method) and the ``LIST`` function.
+
+.. code:: python
+
+   # with constructor
+   p = caosdb.Property(name="ListOfDoubles", datatype=caosdb.LIST(caosdb.DOUBLE))
+
+   # with add_property method
+   my_entity.add_property(name="ListOfIntegers", datatype=caosdb.LIST(caosdb.INTEGER))
+   my_entity.add_property(name="ListOfThings", datatype=caosdb.LIST("Thing"))
+   my_entity.add_property(name="ListOfThings", datatype=caosdb.LIST(caosdb.RecordType('Thing')))
+
+Which data types are there?
+===========================
+
+There are 7 basic data types:
+
+-  ``INTEGER``
+-  ``DOUBLE``
+-  ``DATETIME``
+-  ``TEXT``
+-  ``BOOLEAN``
+-  ``FILE``
+-  ``REFERENCE``
+
+There is (so far) 1 data type for collections:
+
+-  ``LIST`` (Actually, LIST-of-another-data-type, e.g. ``LIST(INTEGER)``)
+
+And furthermore,…
+
+-  Any RecordType can be used as a ``REFERENCE`` data type with a
+   limited scope. That is, a property
+
+   .. code:: python
+
+      p = caosdb.Property(name="Conductor", datatype="Person")
+
+   will only accept those Entities as value which have a “Person”
+   RecordType as a direct or indirect parent.
+
+See also: :any:`Datatype<specification/Datatype>`.
diff --git a/src/doc/conf.py b/src/doc/conf.py
index 43f97aee968197c73771a3a8c1a617c987c15790..a8fbcec5c0357b7db958a189d55d120201c5782b 100644
--- a/src/doc/conf.py
+++ b/src/doc/conf.py
@@ -22,7 +22,7 @@ from os.path import dirname, abspath
 # -- Project information -----------------------------------------------------
 
 project = 'caosdb-server'
-copyright = '2022, IndiScale GmbH'
+copyright = '2023, IndiScale GmbH'
 author = 'Daniel Hornung, Timm Fitschen'
 
 # The short X.Y version
diff --git a/src/doc/development/benchmarking.md b/src/doc/development/benchmarking.md
deleted file mode 100644
index 0be781453a6f85577dd95f89844fd95f2b4141ba..0000000000000000000000000000000000000000
--- a/src/doc/development/benchmarking.md
+++ /dev/null
@@ -1,4 +0,0 @@
-# Benchmarking CaosDB #
-
-Please refer to the file `doc/devel/Benchmarking.md` in the CaosDB sources for developer resources
-how to do benchmarking and profiling of CaosDB.
diff --git a/src/doc/development/benchmarking.rst b/src/doc/development/benchmarking.rst
new file mode 100644
index 0000000000000000000000000000000000000000..60d36d125b3749cff3538f837c8bd4f299233602
--- /dev/null
+++ b/src/doc/development/benchmarking.rst
@@ -0,0 +1,422 @@
+Benchmarking CaosDB
+===================
+
+Benchmarking CaosDB may encompass several distinct areas: How much time
+is spent in the server’s Java code, how much time is spent inside the
+SQL backend, are the same costly methods called more than once? This
+documentation tries to answer some questions connected with these
+benchmarking aspects and give you the tools to answer your own
+questions.
+
+Before you start
+----------------
+
+In order to obtain meaningful results, you should disable caching.
+
+MariaDB
+~~~~~~~
+
+Set the corresponding variable to 0: ``SET GLOBAL query_cache_type = 0;``
+
+Java Server
+~~~~~~~~~~~
+
+In the config:
+
+.. code:: cfg
+
+   CACHE_DISABLE=true
+
+Tools for the benchmarking
+--------------------------
+
+For averaging over many runs of comparable requests and for putting the
+database into a representative state, Python scripts are used. The
+scripts can be found in the ``caosdb-dev-tools`` repository, located at
+https://gitlab.indiscale.com/caosdb/src/caosdb-dev-tools in the folder
+``benchmarking``:
+
+Python Script ``fill_database.py``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This commandline script is meant for filling the database with enough
+data to represent an actual real-life case; it can easily create
+hundreds of thousands of Entities.
+
+The script inserts predefined amounts of randomized Entities into the
+database: RecordTypes, Properties and Records. Each Record has a random
+(but with defined average) number of Properties, some of which may be
+references to other Records which have been inserted before. Actual
+insertion of the Entities into CaosDB is done in chunks of a defined
+size.
+
+Users can tell the script to store times needed for the insertion of
+each chunk into a tsv file.
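+
+Such a tsv file can then be summarized with a few lines of Python. This
+is only a sketch; the column name ``time`` is a hypothetical assumption,
+adapt it to the actual layout of the file:
+
+.. code:: python
+
+   import csv
+   import statistics
+
+   with open("logfile.tsv") as f:
+       # Column name "time" is an assumption, check the actual tsv header.
+       times = [float(row["time"]) for row in csv.DictReader(f, delimiter="\t")]
+   print(f"chunks: {len(times)}, mean: {statistics.mean(times):.3f} s, "
+         f"max: {max(times):.3f} s")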
+
+Python Script ``measure_execution_time.py``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A somewhat outdated script which executes a given query a number of
+times and then saves statistics about the ``TransactionBenchmark``
+readings (see below for more information about the transaction
+benchmarks) delivered by the server.
+
+Python Script ``sql_routine_measurement.py``
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Simply call ``./sql_routine_measurement.py`` in the scripts directory.
+An SQL file is automatically executed which enables the correct
+``performance_schema`` tables. Note, however, that the performance_schema
+of MariaDB must already be enabled at start-up; add
+``performance_schema=ON`` to the configuration file of MariaDB.
+This script expects the MariaDB server to be accessible on 127.0.0.1
+with the default caosdb user and password (caosdb;random1234).
+
+You might consider increasing
+``performance_schema_events_transactions_history_long_size``.
+
+::
+
+   performance_schema_events_transactions_history_long_size=1000000
+
+The performance schema must be enabled (see below).
+
+MariaDB General Query Log
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+MariaDB and MySQL have a feature to log all SQL queries, together with
+the times at which they were received. This logging must be turned on in
+the SQL server as described in the `upstream
+documentation <https://mariadb.com/kb/en/general-query-log/>`__: Add to
+the mysql configuration:
+
+::
+
+   log_output=TABLE
+   general_log
+
+or by calling
+
+.. code:: sql
+
+   SET GLOBAL log_output = 'TABLE';
+   SET GLOBAL general_log = 'ON';
+
+In the Docker environment LinkAhead, this can conveniently be done with
+``linkahead mysqllog {on,off,store}``.
+
+MariaDB Slow Query Log
+~~~~~~~~~~~~~~~~~~~~~~
+
+See `slow query log
+docs <https://mariadb.com/kb/en/slow-query-log-overview/>`__
+
+MariaDB Performance Schema
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The most detailed information on execution times can be acquired using
+the performance schema.
+
+To use it, the ``performance_schema`` setting in the MariaDB server must
+be enabled
+(`docs <https://mariadb.com/kb/en/performance-schema-overview/#enabling-the-performance-schema>`__),
+for example by setting this in the config files:
+
+::
+
+   [mysqld]
+
+   performance_schema=ON
+
+The performance schema provides many different tables in the
+``performance_schema`` database. You can instruct MariaDB to populate
+those tables by setting the appropriate ``instrument`` and ``consumer``
+variables, e.g.:
+
+.. code:: sql
+
+   update performance_schema.setup_instruments set enabled='YES', timed='YES' WHERE NAME LIKE '%statement%';
+   update performance_schema.setup_consumers set enabled='YES' WHERE NAME LIKE '%statement%';
+
+This can also be done via the configuration.
+
+::
+
+   [mysqld]
+
+   performance_schema=ON
+   performance-schema-instrument='statement/%=ON'
+   performance-schema-consumer-events-statements-history=ON
+   performance-schema-consumer-events-statements-history-long=ON
+
+You may want to look at the result of the following commands:
+
+.. code:: sql
+
+
+   select * from performance_schema.setup_consumers;
+   select * from performance_schema.setup_instruments;
+
+Note that the ``base_settings.sql`` script enables appropriate instruments and
+consumers.
+
+Before you start a measurement, you will want to empty the tables. E.g.:
+
+.. code:: sql
+
+   truncate table performance_schema.events_statements_history_long;
+
+The procedure ``reset_stats`` in ``base_settings.sql`` clears the
+typically used ones.
+
+The tables contain many columns. An example to get an informative view
+is
+
+.. code:: sql
+
+   select left(sql_text,50), left(digest_text,50), ms(timer_wait) from performance_schema.events_statements_history_long order by ms(timer_wait);
+
+where the function ``ms`` is defined in ``base_settings.sql``. Or a very
+useful one:
+
+.. code:: sql
+
+   select  left(digest_text,100) as digest,ms(sum_timer_wait) as time_ms, count_star from performance_schema.events_statements_summary_by_digest order by time_ms;
+
+Useful SQL configuration with docker
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In order to allow easy testing and debugging, the following settings are
+useful when using Docker. Change the docker-compose file to include the
+following for the mariadb service:
+
+::
+
+       networks:
+         # available on port 3306, host name 'sqldb'
+         - caosnet
+       ports:
+         - 3306:3306
+
+Check it with ``mysql -ucaosdb -prandom1234 -h127.0.0.1 caosdb``. Add the
+appropriate changes (e.g. ``performance_schema=ON``) to
+``profiles/empty/custom/mariadb.conf.d/mariadb.cnf`` (or in the profile
+folder that you use).
+
+Manual Java-side benchmarking
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Benchmarking can be done using the ``TransactionBenchmark`` class (in
+package ``org.caosdb.server.database.misc``).
+
+-  Single timings can be added to instances of that class via the
+   ``addBenchmark(object, time)`` method. Multiple benchmarks for the
+   same object (typically just strings) can be averaged.
+-  Benchmarks can be serialized into XML, ``Container`` and ``Query``
+   objects already use this with their included benchmarks to output
+   benchmarking results.
+-  To work with the benchmarks of often used objects, use these methods:
+
+   -  ``Container.getTransactionBenchmark().addBenchmark()``
+   -  ``Query.addBenchmark()``
+
+To enable transaction benchmarks and disable caching in the server, set
+these server settings:
+
+.. code:: cfg
+
+   TRANSACTION_BENCHMARK_ENABLED=true
+   CACHE_DISABLE=true
+
+Additionally, the server should be started via ``make run-debug``
+(instead of ``make run-single``), otherwise the benchmarking will not be
+active.
+
+Notable benchmarks and where to find them
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+====================================== ============================================ =============================
+Name                                   Where measured                               What measured
+====================================== ============================================ =============================
+``Retrieve.init``                      transaction/Transaction.java#135             transaction/Retrieve.java#48
+``Retrieve.transaction``               transaction/Transaction.java#174             transaction/Retrieve.java#133
+``Retrieve.post_transaction``          transaction/Transaction.java#182             transaction/Retrieve.java#77
+``EntityResource.httpGetInChildClass`` resource/transaction/EntityResource.java#118 all except XML generation
+``ExecuteQuery``                       ?                                            ?
+====================================== ============================================ =============================
+
+External JVM profilers
+~~~~~~~~~~~~~~~~~~~~~~
+
+Additionally to the transaction benchmarks, it is possible to benchmark
+the server execution via external Java profilers. For example,
+`VisualVM <https://visualvm.github.io/>`__ can connect to JVMs running
+locally or remotely (e.g. in a Docker container). To enable this in
+LinkAhead’s Docker environment, set
+
+.. code:: yaml
+
+   devel:
+     profiler: true
+
+Alternatively, start the server (without Docker) with the
+``run-debug-single`` make target; it will expose the JMX interface, by
+default on port 9090.
+
+Most profilers, such as VisualVM, only gather cumulative data for call
+trees; they do not provide complete call graphs (as
+callgrind/kcachegrind would do). They also do not differentiate between
+calls with different query strings, as long as the Java process flow is
+the same (for example, ``FIND Record 1234`` and
+``FIND Record A WHICH HAS A Property B WHICH HAS A Property C>100``
+would be handled equally).
+
+Example settings for VisualVM
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In the sampler settings, you may want to add these expressions to the
+blocked packages: ``org.restlet.**, com.mysql.**``. Branches of the call
+tree which lie entirely inside the blacklist will become leaves.
+Alternatively, specify a whitelist, for example
+``org.caosdb.server.database.backend.implementation.**``, if you only
+want to see the time spent for certain MySQL calls.
+
+How to set up a representative database
+---------------------------------------
+
+For reproducible results, it makes sense to start off with an empty
+database and fill it using the ``fill_database.py`` script, for example
+like this:
+
+.. code:: sh
+
+   ./fill_database.py -t 500 -p 700 -r 10000 -s 100 --clean
+
+The ``--clean`` argument is not strictly necessary when the database was
+empty before, but it may make sense when there have been previous runs
+of the command. This example would create 500 RecordTypes, 700
+Properties and 10000 Records with randomized properties; everything is
+inserted in chunks of 100 Entities.
+
+How to measure request times
+----------------------------
+
+If the execution of the Java components is of interest, the VisualVM
+profiler should be started and connected to the server before any
+requests to the server are started.
+
+When doing performance tests which are used for detailed analysis, it is
+important that
+
+1. CaosDB is in a reproducible state, which should be documented
+2. all measurements are repeated several times to account for inevitable
+   variance in access (for example file system caching, network
+   variability etc.)
+
+Filling the database
+~~~~~~~~~~~~~~~~~~~~
+
+By simply adding the option ``-T logfile.tsv`` to the
+``fill_database.py`` command above, the times for inserting the records
+are stored in a tsv file and can be analyzed later.
+
+Obtain statistics about a query
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To repeat single queries a number of times,
+``measure_execution_time.py`` can be used, for example:
+
+.. code:: sh
+
+   ./measure_execution_time.py -n 120 -q "FIND MusicalInstrument WHICH IS REFERENCED BY Analysis"
+
+This command executes the query 120 times; additional arguments could
+even plot the TransactionBenchmark results directly.
+
+On method calling order and benchmarked events
+----------------------------------------------
+
+-  ``Transaction.execute()`` :: Logs benchmarks for events like:
+
+   -  ``INIT`` :: The transaction’s ``init()`` method.
+   -  ``PRE_CHECK``
+   -  ``CHECK``
+   -  ``POST_CHECK``
+   -  ``PRE_TRANSACTION``
+   -  ``TRANSACTION`` -> typically calls
+      ``database.backend.transaction.[BackendTransaction].execute()``,
+      which in turn calls, some levels deeper,
+      ``backend.transaction.....execute(<k extends BackendTransaction> t)``
+      -> see next point
+   -  …
+
+-  ``backend.transaction.[...].execute(transaction)`` :: This method is
+   benchmarked again (via parent class ``BackendTransaction``), this is
+   probably the deepest level of benchmarking currently (Benchmark is
+   logged as e.g. ``<RetrieveFullEntity>...</>``). It finally calls
+   ``[MySQLTransaction].execute()``.
+-  ``[MySQLTransaction].execute()`` :: This is the deepest backend
+   implementation part, it typically creates a prepared statement and
+   executes it.
+-  Currently not benchmarked separately:
+
+   -  Getting the actual implementation (probably fast?)
+   -  Preparing the SQL statement
+   -  Executing the SQL statement
+   -  Java-side caching
+
+What is measured
+----------------
+
+For a consistent interpretation, the exact definitions of the measured
+times are as follows:
+
+SQL logs
+~~~~~~~~
+
+As per https://mariadb.com/kb/en/general-query-log, the logs store only
+the time at which the SQL server received a query, not the duration of
+the query.
+
+Possible future enhancements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  The ``query_response_time`` plugin may be additionally used in the
+   future, see https://mariadb.com/kb/en/query-response-time-plugin
+
+Transaction benchmarks
+~~~~~~~~~~~~~~~~~~~~~~
+
+Transaction benchmarking manually collects timing information for each
+transaction. At defined points, different measurements can be made,
+accumulated and will finally be returned to the client. Benchmark
+objects may consist of sub benchmarks and have a number of measurement
+objects, which contain the actual statistics.
+
+Because transaction benchmarks must be manually added to the server
+code, they only monitor those code paths where they are added. On the
+other hand, their manual nature allows for a more abstracted analysis of
+performance bottlenecks.
+
+Java profiler
+~~~~~~~~~~~~~
+
+VisualVM records for each thread the call tree, specifically which
+methods were called how often and how much time was spent inside these
+methods.
+
+Global requests
+~~~~~~~~~~~~~~~
+
+Python scripts may measure the global time needed for the execution of
+each request. ``fill_database.py`` obtains its numbers this way.
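+
+A minimal sketch of such a global measurement with Python's standard
+library (the URL is a placeholder; adapt server, port and entity id, and
+note that a real script would also handle TLS and authentication):
+
+.. code:: python
+
+   import time
+   import urllib.request
+
+   start = time.perf_counter()
+   with urllib.request.urlopen("https://<SERVER>/Entity/123") as response:
+       response.read()
+   print(f"request took {time.perf_counter() - start:.3f} s")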
diff --git a/src/doc/development/devel.rst b/src/doc/development/devel.rst
index 141b045594fe6f8a58c86f758734e160e5a8c2fe..cf0ba37e75c967b755bc2951e803110b15f23baf 100644
--- a/src/doc/development/devel.rst
+++ b/src/doc/development/devel.rst
@@ -6,6 +6,8 @@ Developing CaosDB
    :maxdepth: 2
 
    Structure of the Java code <structure>
+   Testing the server code <testing>
+   Logging server output <logging>
    Benchmarking CaosDB <benchmarking>
 
 CaosDB is an Open-Source project, so anyone may modify the source as they like. These pages aim to
diff --git a/src/doc/development/logging.rst b/src/doc/development/logging.rst
new file mode 100644
index 0000000000000000000000000000000000000000..1b0a94935a0c701fd9d8ba28a604f11b357d1a89
--- /dev/null
+++ b/src/doc/development/logging.rst
@@ -0,0 +1,126 @@
+Logging
+=======
+
+Framework
+---------
+
+We use the SLF4J API with a log4j2 backend for all of our code. Please
+do not use log4j2 directly, or any other logging API.
+
+Note that some libraries on the classpath use the ``java.util.logging``
+API and the log4j1 logging framework instead. These loggers cannot
+currently be configured as described in this document.
+
+Configuration
+-------------
+
+The configuration of the log4j2 backend is done via ``properties`` files which comply with the
+`log4j2 specifications
+<https://logging.apache.org/log4j/2.x/manual/configuration.html#Properties>`__.
+XML, YAML, or JSON files are not supported. The usual mechanism for
+automatic configuration with such files is disabled. Instead, files have
+to be placed into the ``conf`` subdirectories, as follows:
+
+Default and Debug Logging
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The default configuration is located at
+``conf/core/log4j2-default.properties``. For the debug mode, the
+configuration from ``conf/core/log4j2-debug.properties`` is merged with
+the default configuration. These files should not be changed by the
+user.
+
+User Defined Logging
+~~~~~~~~~~~~~~~~~~~~
+
+The default and debug configuration can be overridden by the user with
+``conf/ext/log4j2.properties`` and any file in the directory
+``conf/ext/log4j2.properties.d/`` which is suffixed by ``.properties``.
+All logging configuration files are merged using the standard merge
+strategy of log4j2:
+
+   .. rubric:: Composite Configuration
+      :name: composite-configuration
+
+   Log4j allows multiple configuration files to be used by specifying
+   them as a list of comma separated file paths on
+   log4j.configurationFile. The merge logic can be controlled by
+   specifying a class that implements the MergeStrategy interface on the
+   log4j.mergeStrategy property. The default merge strategy will merge
+   the files using the following rules:
+
+   1. The global configuration attributes are aggregated with those in later configurations
+      replacing those in previous configurations, with the exception that the highest status level
+      and the lowest monitorInterval greater than 0 will be used.
+   
+   2. Properties from all configurations are aggregated.  Duplicate properties replace those in
+      previous configurations.
+
+   3. Filters are aggregated under a CompositeFilter if more than one Filter is defined. Since
+      Filters are not named duplicates may be present.
+
+   4. Scripts and ScriptFile references are aggregated.  Duplicate definitions replace those in
+      previous configurations.
+
+   5. Appenders are aggregated. Appenders with the same name are replaced by those in later
+      configurations, including all of the Appender’s subcomponents.
+
+   6. Loggers are all aggregated. Logger attributes are individually merged with duplicates being
+      replaced by those in later configurations. Appender references on a Logger are aggregated with
+      duplicates being replaced by those in later configurations. Filters on a Logger are aggregated
+      under a CompositeFilter if more than one Filter is defined. Since Filters are not named
+      duplicates may be present. Filters under Appender references included or discarded depending
+      on whether their parent Appender reference is kept or discarded.
+
+`Source <https://logging.apache.org/log4j/2.x/manual/configuration.html#CompositeConfiguration>`__
+
+Some Details and Examples
+-------------------------
+
+Make Verbose
+~~~~~~~~~~~~
+
+To make the server logs on the console more verbose, insert
+``rootLogger.level = DEBUG`` or even ``rootLogger.level = TRACE`` into a
+properties file in the ``conf/ext/log4j2.properties.d/`` directory or
+the ``conf/ext/log4j2.properties`` file.
+
+Log Directory
+~~~~~~~~~~~~~
+
+By default, log files go to ``./log/``,
+e.g. ``./log/request_errors/current.log``. The log directory in
+``DEBUG_MODE`` is located at ``./testlog/``.
+
+To change that, insert ``property.LOG_DIR = /path/to/my/logs`` into a
+properties file in the ``conf/ext/log4j2.properties.d/`` directory or
+the ``conf/ext/log4j2.properties`` file.
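+
+For example, a single user-defined file could combine both of the above
+settings (the file name ``50-custom.properties`` is only an example):
+
+.. code:: properties
+
+   # conf/ext/log4j2.properties.d/50-custom.properties
+   rootLogger.level = DEBUG
+   property.LOG_DIR = /path/to/my/logs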
+
+Special loggers
+~~~~~~~~~~~~~~~
+
+-  ``REQUEST_ERRORS_LOGGER`` for logging server errors with SRID, full
+   request and full response. WARNING: This logger stores the unencrypted
+   content of requests, possibly including confidential information.
+-  ``REQUEST_TIME_LOGGER`` for timing the requests.
+
+These loggers are defined in the ``conf/core/log4j2-default.properties``
+file.
+
+Enable Request Time Logger
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``REQUEST_TIME_LOGGER`` is disabled by default, its log level is set
+to ``OFF``. To enable it and write logs to the directory denoted by
+``property.LOG_DIR``, create a ``properties`` file under
+``conf/ext/log4j2.properties.d/`` which contains at least
+
+.. code:: properties
+
+   property.REQUEST_TIME_LOGGER_LEVEL = TRACE
+
+debug.log
+~~~~~~~~~
+
+When in ``DEBUG_MODE``, e.g. when started with ``make run-debug``, the
+server also writes all logs to ``debug.log`` in the log directory.
diff --git a/src/doc/development/testing.rst b/src/doc/development/testing.rst
new file mode 100644
index 0000000000000000000000000000000000000000..2587695d67684d132e942b8eb4f25cd1038c9907
--- /dev/null
+++ b/src/doc/development/testing.rst
@@ -0,0 +1,22 @@
+Testing the server code
+-----------------------
+
+Whether developing new features, refactoring code or fixing bugs, the server
+code should be thoroughly tested for correct and incorrect behaviour, on correct
+and incorrect input.
+
+Writing tests
+~~~~~~~~~~~~~
+
+Tests go into ``src/test/java/caosdb/``; the files there can serve as examples
+for writing tests.
+
+Running tests with Maven
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Automatic testing can be done with ``make test`` or, after compilation, ``mvn test``.
+- Tests of single modules can be started with ``mvn test -Dtest=TestClass``.
+- Test of a single method ``footest``: ``mvn test -Dtest=TestClass#footest``
+
+
+
diff --git a/src/doc/index.rst b/src/doc/index.rst
index 1a5e4134ef7f934ff5018cc8847603f1165ab16e..e34afd382f0c4a1a5520b94a4fc00e1c0a67427b 100644
--- a/src/doc/index.rst
+++ b/src/doc/index.rst
@@ -11,6 +11,7 @@ Welcome to caosdb-server's documentation!
    Getting started <README_SETUP>
    Concepts <concepts>
    tutorials
+   FAQ
    Query Language <CaosDB-Query-Language>
    administration
    Development <development/devel>
diff --git a/src/doc/specification/AbstractProperty.md b/src/doc/specification/AbstractProperty.md
index 3a2fe7583480c7467bccb0a2bf94850ea30d7be4..062ef205e048aef4142c683c9c4c1820fd89fe36 100644
--- a/src/doc/specification/AbstractProperty.md
+++ b/src/doc/specification/AbstractProperty.md
@@ -1,11 +1,12 @@
-# Note #
+# AbstractProperty Specification
 
->   This document has not been updated for a long time. Although it is concerned with the mostly
->   stable API, its content may no longer reflect the actual CaosDB behavior.
+**Warning:** This specification is outdated. It is included to serve as a starting point for a more
+up-to-date description of the `Property` entity.
 
-# AbstractProperty Specification
+## Note ##
 
-**Warning:** This specification is outdated. It is included to serve as a starting point for a more up-to-date description of the `Property` entity.
+>   This document has not been updated for a long time. Although it is concerned with the mostly
+>   stable API, its content may no longer reflect the actual CaosDB behavior.
 
 ## Introduction
 An `AbstractProperty` is one of the basal objects of CaosDB.
diff --git a/src/doc/specification/Authentication.rst b/src/doc/specification/Authentication.rst
index 93d68c20171e55dad663ece719a78008793a4191..3fcd25dad0d7fb9e591e1d4a2d845b3d353fff8b 100644
--- a/src/doc/specification/Authentication.rst
+++ b/src/doc/specification/Authentication.rst
@@ -1,98 +1,48 @@
+==============
 Authentication
 ==============
 
-Some features of CaosDB are available to registered users only. Making any
-changes to the data stock via HTTP requires authentication by ``username`` **plus**
-``password``. They are to be send as a HTTP header, while the password is to be
-hashed by the sha512 algorithm:
-
-============= ======================
-username:     password:
-============= ======================
-``$username`` ``$SHA512ed_password``
-============= ======================
+Some features of CaosDB are available to registered users only. Making any changes to the data stock
+via HTTP requires authentication.
 
 Sessions
---------
+========
 
 Login
-^^^^^
-
-Request Challenge
-^^^^^^^^^^^^^^^^^
-
-* ``GET http://host:port/mpidsserver/login?username=$username``
-* ``GET http://host:port/mpidsserver/login`` with ``username`` header
-
-**No password is required to be sent over http.**
-
-The request returns an AuthToken with a login challenge as a cookie.
-The AuthToken is a dictionary of the following form:
-
-.. code-block::
-
-   {scope=$scope;
-    mode=LOGIN;
-    offerer=$offerer;
-    auth=$auth
-    expires=$expires;
-    date=$date;
-    hash=$hash;
-    session=$session;
-   }
-
-where
-
-* ``$scope`` :: A uri pattern string. Example: ``{ **/* }``
-* ``$mode`` :: ``ONETIME``, ``SESSION``, or ``LOGIN``
-* ``$offerer`` :: A valid username
-* ``$auth`` :: A valid username
-* ``$expires`` :: A ``YYYY-MM-DD HH:mm:ss[.nnnn]`` date string
-* ``$date`` :: A ``YYYY-MM-DD HH:mm:ss[.nnnn]`` date string
-* ``$hash`` :: A string
-* ``$session`` :: A string
-
-The challenge is solved by concatenating the ``$hash`` string and
-the user's ``$password`` string and calculating the sha512 hash of both.
-Pseudo code:
-
-.. code-block::
+-----
 
-   $solution = sha512($hash + sha512($password))
+Authentication is done by ``username`` and ``password``. They must be sent as form data with a POST
+request to the `/login/` resource:
 
-Send Solution
-^^^^^^^^^^^^^
+username:
+  The user name, for example ``admin`` (on demo.indiscale.com).
 
-The old ``$hash`` string in the cookie has to be replaces by ``$solution`` and
- the cookie is to be send with the next request:
+password:
+  The password, for example ``caosdb`` (on demo.indiscale.com).
 
-``PUT http://host:port/mpidsserver/login``
-
-The server will return the user's entity in the HTTP body, e.g.
+Logout
+------
 
-.. code-block::
+The server does not invalidate AuthTokens. They become invalid when they
+expire or when the server is restarted. Clients should simply delete their
+AuthToken to 'log out'.
 
-   <Response ...>
-     <User name="$username" ...>
-      ...
-     </User>
-   </Response>
+However, in order to remove the AuthToken cookie from the browser, there is a
+convenience resource which will invalidate the cookie (not the AuthToken).
 
-and a new AuthToken with ``$mode=SESSION`` and a new expiration date and so
-on. This AuthToken cookie is to be send with every request.
+Send
 
-Logout
-^^^^^^
+``GET http://host:port/logout``
 
-Send
+and the server will return an empty AuthToken cookie which immediately expires.
 
-``PUT http://host:port/mpidsserver/logout``
+Example using ``curl``
+----------------------
 
-with a valid AuthToken cookie. No new AuthToken will be returned and no
-AuthToken with that ``$session`` will be accepted anymore.
+.. _curl-login:
 
-Commandline solution with ``curl``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Login
+~~~~~
 
 To use curl for talking with the server, first save your password into a
 variable: ``PW=$(cat)``
@@ -102,10 +52,22 @@ password visible for a short time to everyone on your system:
 
 .. code-block:: sh
 
-   curl -X POST -c cookie.txt -D head.txt -H "Content-Type: application/x-www-form-urlencoded" -d username=<USERNAME> -d password="$PW" --insecure "https://<SERVER>/login
+   curl -X POST -c cookie.txt -D head.txt -d username=<USERNAME> -d password="$PW" --insecure "https://<SERVER>/login"
+
+Now ``cookie.txt`` contains the required authentication token information in the ``SessionToken``
+cookie (url-encoded json).
+
+.. rubric:: Example token content
+
+.. code-block:: json
+
+   ["S","PAM","admin",[],[],1682509668825,3600000,"Z6J4B[...]-OQ","31d3a[...]ab2c10"]
+
+Using the token
+~~~~~~~~~~~~~~~
 
 To use the cookie, pass it on with later requests:
 
 .. code-block:: sh
 
-   curl -X GET -b cookie.txt --insecure "https://<SERVER>/Entity/12345"
+   curl -X GET -b cookie.txt --insecure "https://<SERVER>/Entity/123"
diff --git a/src/doc/specification/Datatype.md b/src/doc/specification/Datatype.md
index 6a169042dce2be2e6dc939d0935f3336de264308..6354d6f2cfdb5215d94836bfe263e2d013bd71a8 100644
--- a/src/doc/specification/Datatype.md
+++ b/src/doc/specification/Datatype.md
@@ -73,7 +73,7 @@ Please file a new feature request as soon as you need them.
 ----
 
 ## REFERENCE
-* Description: REFERENCE values store the [Valid ID](../Glossary#valid-id) of an existing entity. The are useful to establish links between two entities. 
+* Description: REFERENCE values store the [Valid ID](../Glossary.html#valid-id) of an existing entity. They are useful to establish links between two entities.
 * Accepted Values: Any [Valid ID](./Glossary#valid-id) or [Valid Unique Existing Name](./Glossary#valid-unique-existing-name) or [Valid Unique Temporary ID](./Glossary#valid-unique-temporary-id) or [Valid Unique Prospective Name](./Glossary#valid-unique-prospective-pame).
 * Note:
   * After beeing processed successfully by the server the REFERENCE value is normalized to a [Valid ID](./Glossary#valid-id). I.e. it is guaranteed that a REFERENCE value of a valid property is a positive integer.
diff --git a/src/doc/specification/Specification-of-the-Message-API.md b/src/doc/specification/Message-API.md
similarity index 99%
rename from src/doc/specification/Specification-of-the-Message-API.md
rename to src/doc/specification/Message-API.md
index fc08b343f67ca8c052395cb0979b02ab455e98fa..7a5d137bd277d9dea69df9d90ce76869d1d404c1 100644
--- a/src/doc/specification/Specification-of-the-Message-API.md
+++ b/src/doc/specification/Message-API.md
@@ -1,4 +1,4 @@
-# Specification of the Message API
+# Message API
 ## Introduction
 
 API Version 0.1.0
diff --git a/src/doc/specification/Paging.md b/src/doc/specification/Paging.md
index fb994639204704618af2b1c7a8ccb32301013af3..229e9c3bc254de5dcbe4455f4fc59b7538f811de 100644
--- a/src/doc/specification/Paging.md
+++ b/src/doc/specification/Paging.md
@@ -12,14 +12,14 @@ The Paging flag splits the retrieval of a (possibly huge) number entities into p
 
 ## Semantics
 
-The `index` (starting with zero) denotes the index of the first entity to be retrieved. The `length` is the number of entities on that page. If `length` is omitted, the default number of entities is returned (as configured by a server contant called ...). If only the `name` is given the paging behaves as if the `index` has been zero.
+The `index` (starting with zero) denotes the index of the first entity to be retrieved. The `length` is the number of entities on that page. If `length` is omitted, the default number of entities is returned (as configured by a server constant called ...). If only the `length` is given, the paging behaves as if the `index` had been zero.
 
 ## Examples
 
-`http://localhost:8123/mpidsserver/Entities/all?flags=P:24L50` returns 50 entities starting with the 25th entity which would be retrieved without paging.
+`http://localhost:10080/Entities/all?flags=P:24L50` returns 50 entities starting with the 25th entity which would be retrieved without paging.
 
-`http://localhost:8123/mpidsserver/Entities/all?flags=P:24` returns the default number of entities starting with the 25th entity which would be retrieved without paging.
+`http://localhost:10080/Entities/all?flags=P:24` returns the default number of entities starting with the 25th entity which would be retrieved without paging.
 
-`http://localhost:8123/mpidsserver/Entities/all?flags=P:L50` returns 50 entities starting with the first entity which would be retrieved without paging.
+`http://localhost:10080/Entities/all?flags=P:L50` returns 50 entities starting with the first entity which would be retrieved without paging.
 
-`http://localhost:8123/mpidsserver/Entities/all?flags=P` returns the default number of entities starting with the first entity which would be retrieved without paging.
+`http://localhost:10080/Entities/all?flags=P` returns the default number of entities starting with the first entity which would be retrieved without paging.
diff --git a/src/doc/specification/index.rst b/src/doc/specification/index.rst
index 69677277fc793d513aa21f506295567340d462e1..51b0e070d5198603877ed336b353177b03a6e9f8 100644
--- a/src/doc/specification/index.rst
+++ b/src/doc/specification/index.rst
@@ -1,10 +1,11 @@
-Specification
-=============
+Specifications
+==============
+
+Specifications of assorted topics
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
    :caption: Contents:
-   :hidden:
 
    AbstractProperty
    Fileserver
@@ -13,6 +14,7 @@ Specification
    Datatype
    Paging
    RecordType
+   Query syntax <../query-syntax>
    Server side scripting <Server-side-scripting>
-   Specification of the Message API <Specification-of-the-Message-API>
-   Specification of the Entity API <entity_api>
+   Message API <Message-API>
+   Entity API <entity_api>