Verified Commit f0bbb674 authored by Timm Fitschen

Merge branch 'f-filesystem-cleanup' into f-filesystem-core

parents 647dc4b9 2baeefa6
Merge request !75 (Draft): ENH: file system: core
Pipeline #47480 failed
---
title: Development
author: Daniel Hornung
...
# Developing the CaosDB server #
This file contains information about server development. It is aimed at those
who want to debug, understand, or enhance the CaosDB server.
## Testing ##
Whether developing new features, refactoring code or fixing bugs, the server
code should be thoroughly tested for correct and incorrect behaviour, on correct
and incorrect input.
### Writing tests ###
Tests go into `src/test/java/caosdb/`, the files there can serve as examples for
writing tests.
### Running tests with Maven ###
- Automatic testing can be done with `make test` or, after compilation, `mvn
test`.
- Tests of single modules can be started with `mvn test -Dtest=TestClass`
- Test of a single method `footest`: `mvn test -Dtest=TestClass#footest`
# Logging
## Framework
We use the SLF4J API with a log4j2 backend for all of our code. Please do not use log4j2 directly or any other logging API.
Note that some libraries on the classpath use the `java.util.logging` API and the log4j1 logging framework instead. These loggers cannot yet be configured as described in this README.
## Configuration
The configuration of the log4j2 backend is done via `properties` files which
comply with the [log4j2
specifications](https://logging.apache.org/log4j/2.x/manual/configuration.html#Properties).
XML, YAML, or JSON files are not supported. The usual mechanisms for automatic
configuration with such files are disabled. Instead, files have to be placed
into the `conf` subdirectories, as follows:
### Default and Debug Logging
The default configuration is located at `conf/core/log4j2-default.properties`. For the debug mode, the configuration from `conf/core/log4j2-debug.properties` is merged with the default configuration. These files should not be changed by the user.
### User Defined Logging
The default and debug configuration can be overridden by the user with `conf/ext/log4j2.properties` and any file in the directory `conf/ext/log4j2.properties.d/` which is suffixed by `.properties`. All logging configuration files are merged using the standard merge strategy of log4j2:
> # Composite Configuration
> Log4j allows multiple configuration files to be used by specifying them as a list of comma separated file paths on log4j.configurationFile. The merge logic can be controlled by specifying a class that implements the MergeStrategy interface on the log4j.mergeStrategy property. The default merge strategy will merge the files using the following rules:
> 1. The global configuration attributes are aggregated with those in later configurations replacing those in previous configurations, with the exception that the highest status level and the lowest monitorInterval greater than 0 will be used.
> 2. Properties from all configurations are aggregated. Duplicate properties replace those in previous configurations.
> 3. Filters are aggregated under a CompositeFilter if more than one Filter is defined. Since Filters are not named duplicates may be present.
> 4. Scripts and ScriptFile references are aggregated. Duplicate definitions replace those in previous configurations.
> 5. Appenders are aggregated. Appenders with the same name are replaced by those in later configurations, including all of the Appender's subcomponents.
> 6. Loggers are all aggregated. Logger attributes are individually merged with duplicates being replaced by those in later configurations. Appender references on a Logger are aggregated with duplicates being replaced by those in later configurations. Filters on a Logger are aggregated under a CompositeFilter if more than one Filter is defined. Since Filters are not named duplicates may be present. Filters under Appender references included or discarded depending on whether their parent Appender reference is kept or discarded.
[2](https://logging.apache.org/log4j/2.x/manual/configuration.html#CompositeConfiguration)
## Some Details and Examples
### Make Verbose
To make the server logs on the console more verbose, insert `rootLogger.level = DEBUG` or even `rootLogger.level = TRACE` into a properties file in the `conf/ext/log4j2.properties.d/` directory or the `conf/ext/log4j2.properties` file.
### Log Directory
By default, log files go to `./log/`, e.g. `./log/request_errors/current.log`. The log directory in `DEBUG_MODE` is located at `./testlog/`.
To change that, insert `property.LOG_DIR = /path/to/my/logs` into a properties file in the `conf/ext/log4j2.properties.d/` directory or the `conf/ext/log4j2.properties` file.
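For example, a hypothetical file `conf/ext/log4j2.properties.d/my-overrides.properties` (the filename is arbitrary; only the `.properties` suffix matters) combining both overrides could look like:

```properties
# Increase console verbosity
rootLogger.level = DEBUG
# Write log files to a custom location
property.LOG_DIR = /var/log/caosdb
```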
### Special loggers
* `REQUEST_ERRORS_LOGGER` for logging server errors with SRID, full request and full response. WARNING: This logger stores the unencrypted content of requests, possibly including confidential data.
* `REQUEST_TIME_LOGGER` for timing the requests.
These loggers are defined in the `conf/core/log4j2-default.properties` file.
#### Enable Request Time Logger
The `REQUEST_TIME_LOGGER` is disabled by default; its log level is set to `OFF`. To enable it and write logs to the directory denoted by `property.LOG_DIR`, create a `properties` file under `conf/ext/log4j2.properties.d/` which contains at least
```properties
property.REQUEST_TIME_LOGGER_LEVEL = TRACE
```
### debug.log
When in `DEBUG_MODE`, e.g. when started with `make run-debug`, the server also writes all logs to `debug.log` in the log directory.
# How do I declare a LIST property?
Use the `datatype` parameter (available with the Property constructor and with the `add_property` method) together with the `LIST` function.
```python
# with constructor
p = caosdb.Property(name="ListOfDoubles", datatype=caosdb.LIST(caosdb.DOUBLE))
# with add_property method
my_entity.add_property(name="ListOfIntegers", datatype=caosdb.LIST(caosdb.INTEGER))
my_entity.add_property(name="ListOfThings", datatype=caosdb.LIST("Thing"))
my_entity.add_property(name="ListOfThings", datatype=caosdb.LIST(caosdb.RecordType('Thing')))
```
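The `LIST(...)` helper simply produces the datatype string used on the wire, of the form `LIST<...>`. A minimal stand-in (an illustration under that assumption, not the actual library code) could look like:

```python
def LIST(datatype):
    # Illustrative stand-in for caosdb.LIST: RecordType objects contribute
    # their name; plain strings are used as-is.
    name = getattr(datatype, "name", datatype)
    return "LIST<" + str(name) + ">"

assert LIST("DOUBLE") == "LIST<DOUBLE>"
assert LIST("Thing") == "LIST<Thing>"
```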
# Which data types are there?
There are 7 basic data types:
* `INTEGER`
* `DOUBLE`
* `DATETIME`
* `TEXT`
* `BOOLEAN`
* `FILE`
* `REFERENCE`
There is (so far) 1 data type for collections:
* `LIST` (Well, LIST-of-another-data-type, e.g. `LIST(INTEGER)`)
And furthermore,...
* Any RecordType can be used as a `REFERENCE` data type with a limited scope. That is, a property
```python
p = caosdb.Property(name="Conductor", datatype="Person")
```
will only accept those Entities as value which have a "Person" RecordType as a direct or indirect parent.
See also: [Datatype](Datatype)
@@ -25,7 +25,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.caosdb</groupId>
<artifactId>caosdb-server</artifactId>
-<version>0.9.1-SNAPSHOT</version>
+<version>0.13.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>CaosDB Server</name>
<scm>
@@ -381,9 +381,9 @@
</executions>
</plugin>
<plugin>
-<groupId>com.coveo</groupId>
+<groupId>com.spotify.fmt</groupId>
<artifactId>fmt-maven-plugin</artifactId>
-<version>2.5.1</version>
+<version>2.21.1</version>
<configuration>
<skip>
<!-- Set skip to `true` to prevent auto-formatting while coding. -->
......
sphinx-rtd-theme
sphinxcontrib-plantuml
javasphinx
sphinx-a4doc
@@ -2,61 +2,102 @@
See syntax specification in [CaosDB Query Language Syntax](query-syntax).
## Simple FIND Query
The following query will return any record which has the name _somename_ and all
record children of any entity with that name.
`FIND somename`
On a server with the default configuration, the following queries are equivalent to this one.
`FIND RECORD somename`
`FIND RECORDS somename`
Of course, the returned set of entities (henceforth referred to as _resultset_) can also be
restricted to RecordTypes (`FIND RECORDTYPE ...`), Properties (`FIND PROPERTY ...`) and Files (`FIND
FILE ...`).
You can include all entities (Records, RecordTypes, Properties, ...) into the results by using the
`ENTITY` keyword:
`FIND ENTITY somename`
The wildcard `*` matches any number of characters, including none. Wildcards for single characters (like the `_` wildcard from MySQL) are not implemented yet.
`FIND en*` returns any record which has a name beginning with _en_.
Regular expressions must be surrounded by _<<_ and _>>_:
`FIND <<e[aemn]{2,5}>>`
`FIND <<[cC]amera_[0-9]*>>`
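As a rough illustration of the pattern semantics (the actual matching happens server-side; this is only an analogy), the `*` wildcard corresponds to the regular expression `.*`:

```python
import re

def wildcard_to_regex(pattern):
    # Escape regex metacharacters, then turn the CQL '*' wildcard
    # (zero or more characters) into the regex '.*'.
    return re.escape(pattern).replace(r"\*", ".*")

# 'FIND en*' matches names beginning with "en"
assert re.fullmatch(wildcard_to_regex("en*"), "entity")
# 'FIND <<[cC]amera_[0-9]*>>' is a plain regular expression
assert re.fullmatch("[cC]amera_[0-9]*", "Camera_42")
```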
*TODO*:
Describe escape sequences like `\\\\ `, `\*`, `\<<` and `\>>`.
Currently, wildcards and regular expressions are only available for the _simple-find-part_ of the
query, i.e. not for property-operator-value filters (see below).
## Simple COUNT Query
COUNT queries count entities which have certain properties.
`COUNT ... rname ...`
will return the number of records which have the name _rname_ and all record
children of any entity with that name.
The syntax of COUNT queries is equivalent to that of FIND queries in every
respect (this also applies to wildcards and regular expressions) but one: the
prefix is `COUNT` instead of `FIND`.
Unlike the FIND queries, the COUNT queries do not return any entities. The result of the query is
the number of entities which _would be_ returned if the query was a FIND query.
## Filters
In this chapter, the CaosDB Query Language (CQL) is presented as a means of
formulating search commands, commonly referred to as queries. It is highly
recommended that you experiment with the examples provided, such as those found
on https://demo.indiscale.com. An interactive tour is also available on this
public instance, which includes a comprehensive overview of the query language.
Therefore, it is suggested that you begin there and subsequently proceed with
this more detailed explanation.
## Introduction
Queries typically start with the keyword `FIND`, followed by a description of
what you want to find. For example, you can search for all musical instruments
with `FIND MusicalInstrument`.
*Note*: CQL is case **in**sensitive. We will write keywords of CQL in all
caps to illustrate which parts are part of the language.
The most common way is to provide a RecordType name after `FIND` (as in the
example above). However, you can also use the name of some other entity:
`FIND 'My first guitar'`.
*Note* that we put the name here in quotes. Spaces are used in CQL as separators
of words. Thus, if something contains spaces, like the name here, it needs to be
quoted.
While queries like the last one are great to get an impression of the data,
often we need to be more specific. Therefore, queries can include various
conditions to restrict the result set.
Example: `FIND MusicalAnalysis WITH quality_factor>0.5 AND date IN
2019`. The keyword `WITH` signifies that for each Record of the type
`MusicalAnalysis`, an assessment is made to determine whether it possesses a
Property labelled `quality_factor` that exceeds 0.5, as well as another
Property labelled `date` that may correspond to any day within the year 2019.
In order to make CQL easier to learn and to remember, we designed it to be close
to naturally spoken English. For example, you can write
`FIND Guitar WHICH HAS A PROPERTY price`. Here, "HAS A PROPERTY" is what we call
syntactic sugar. It lets the query roll off the tongue more easily than
`FIND Guitar WHICH price`, but it is actually not needed and does not change
the meaning of the query. In fact, you could also write `FIND Guitar WITH
price`.
If you are only interested in the number of Entities that match your query, you
can replace `FIND` with `COUNT` and the query will only return the number of
Entities in the result set.
Sometimes the list of Records that you get using a `FIND` query is not what you
need; especially if you want to export a subset of the data for the analysis
with some external tool.
`SELECT` queries present the query result in tabular form.
If you replace the `FIND` keyword of a query with `SELECT x, y, z FROM`, then
CaosDB will return the result as tabular data.
For example, instead of `FIND Guitar`, try out
`SELECT name, electric FROM Guitar`
As you can see, those queries are designed to allow very specific data requests.
If you do not want or need to be that specific, you can omit the first keyword
(`FIND` or `SELECT`), which creates a search for anything that has a text
Property with something like your expression. For example, the query "John" will
search for any Record that has a text Property containing this string.
With this, we conclude our introduction to CQL. You now know about the basic
elements. The following sections cover the various aspects in more detail; you
will, for example, learn how to use references among Records, or metadata like
the creation time of a Record, to restrict the query result set.
## What am I searching for?
We already learned that we can provide the name of a RecordType after the `FIND`
keyword. Let's call this part of the query the "entity expression". In general,
the entity expression needs to identify one or more entities via their name,
CaosDB ID, or a pattern.
- `FIND Guitar`
- `FIND Guit*` ('*' represents none, one or more characters)
- `FIND <<[gG]ui.*>>` (a regular expression surrounded by `<<` and `>>`; see below)
- `FIND 110`
The result set will contain Entities that are either identified by the entity
expression directly (i.e. they have the name or the given ID) or have such an
Entity as a parent.
As you know, CaosDB distinguishes among different Entity roles:
- Entity
- Record
- RecordType
- Property
- File
You can provide the role directly after the `FIND` keyword and before the
entity expression: `FIND RECORD Guitar`. The result set will then be restricted
to Entities with that role.
## Conditions / Filters
### POV - Property-Operator-Value
@@ -135,15 +176,19 @@
Examples:
* `2015-04-03T00:00:00.0!=2015-04-03T00:00:00.0` is false.
* `2015-04-03T00:00:00!=2015-04-03T00:00:00` is false.
* `2015-04!=2015-05` is true.
* `2015-04!=2015-04` is false.
##### `d1>d2`: Transitive, non-symmetric relation.
Semantics depend on the flavors of d1 and d2. If both are...
###### [UTCDateTime](specification/Datatype.html#datetime)
* ''True'' iff the time of d1 is after the time of d2 according to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time).
* ''False'' otherwise.
###### [SemiCompleteDateTime](specification/Datatype.html#datetime)
* ''True'' iff `d1.ILB>d2.EUB` is true or `d1.ILB=d2.EUB` is true.
* ''False'' iff `d1.EUB<d2.ILB` is true or `d1.EUB=d2.ILB` is true.
* ''Undefined'' otherwise.
@@ -160,12 +205,16 @@
Examples:
* `2014-01-01>2015-01-01T20:15:30` is false.
##### `d1<d2`: Transitive, non-symmetric relation.
Semantics depend on the flavors of d1 and d2. If both are...
###### [UTCDateTime](specification/Datatype.html#datetime)
* ''True'' iff the time of d1 is before the time of d2 according to [UTC](https://en.wikipedia.org/wiki/Coordinated_Universal_Time).
* ''False'' otherwise.
###### [SemiCompleteDateTime](specification/Datatype.html#datetime)
* ''True'' iff `d1.EUB<d2.ILB` is true or `d1.EUB=d2.ILB` is true.
* ''False'' iff `d1.ILB>d2.EUB` is true or `d1.ILB=d2.EUB` is true.
* ''Undefined'' otherwise.
@@ -182,8 +231,11 @@
Examples:
* `2015-01-01T20:15.00<2015-01-01T20:14` is false.
##### `d1 IN d2`: Transitive, non-symmetric relation.
Semantics depend on the flavors of d1 and d2. If both are...
###### [SemiCompleteDateTime](specification/Datatype.html#datetime)
* ''True'' iff (`d1.ILB>d2.ILB` is true or `d1.ILB=d2.ILB` is true) and (`d1.EUB<d2.EUB` is true or `d1.EUB=d2.EUB` is true).
* ''False'' otherwise.
@@ -195,8 +247,11 @@
Examples:
* `2015-01-01 IN 2015-01-01T20:15:30` is false.
##### `d1 NOT IN d2`: Transitive, non-symmetric relation.
Semantics depend on the flavors of d1 and d2. If both are...
###### [SemiCompleteDateTime](specification/Datatype.html#datetime)
* ''True'' iff `d1.ILB IN d2.ILB` is false.
* ''False'' otherwise.
@@ -208,7 +263,15 @@
Examples:
* `2015-01-01T20:15:30 NOT IN 2015-01-01T20:15:30` is false.
##### Note
These semantics follow a three-valued logic with `true`, `false` and `undefined` as truth
values. Only `true` is truth preserving. I.e. only those expressions which evaluate to `true`
pass the POV filter.
`FIND ... WHICH HAS A somedate=2015-01` only returns entities for which `somedate=2015-01` is
true. On the other hand, `FIND ... WHICH DOESN'T HAVE A somedate=2015-01` returns entities for which
`somedate=2015-01` is false or undefined. Shortly put, `NOT d1=d2` is not equivalent to
`d1!=d2`. The latter assertion is stronger.
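Following the ILB (inclusive lower bound) / EUB (exclusive upper bound) rules above, the three-valued `>` relation can be sketched in a few lines (a hypothetical helper, not server code; partial dates are modelled as `(ILB, EUB)` pairs):

```python
from datetime import datetime

def dt_greater(d1, d2):
    """Three-valued d1 > d2: returns True, False, or None (undefined)."""
    ilb1, eub1 = d1
    ilb2, eub2 = d2
    if ilb1 >= eub2:   # d1.ILB > d2.EUB or d1.ILB = d2.EUB
        return True
    if eub1 <= ilb2:   # d1.EUB < d2.ILB or d1.EUB = d2.ILB
        return False
    return None        # overlapping intervals: undefined

year_2015  = (datetime(2015, 1, 1), datetime(2016, 1, 1))
april_2015 = (datetime(2015, 4, 1), datetime(2015, 5, 1))
may_2015   = (datetime(2015, 5, 1), datetime(2015, 6, 1))

assert dt_greater(may_2015, april_2015) is True
assert dt_greater(april_2015, may_2015) is False
assert dt_greater(april_2015, year_2015) is None  # undefined: neither filter matches
```

The `None` case illustrates why `NOT d1=d2` and `d1!=d2` differ: an undefined comparison fails the positive filter without making the negated assertion true.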
#### Omitting the Property or the Value
@@ -240,6 +303,12 @@
The following query returns records which have a _pname1_ property with any value:
`FIND ename WITH pname1`
`FIND ename WITH A pname1`
`FIND ename WITH A PROPERTY pname1`
`FIND ename WITH PROPERTY pname1`
`FIND ename . pname1`
`FIND ename.pname1`
@@ -338,24 +407,53 @@
Any result set can be filtered by logically combining POV filters or back references.
#### Conjunction (AND)
* `FIND ename1 WHICH HAS A PROPERTY pname1=val1 AND A PROPERTY pname2=val2 AND A PROPERTY...`
* `FIND ename1 WHICH HAS A PROPERTY pname1=val1 AND A pname2=val2 AND ...`
* `FIND ename1 . pname1=val1 & pname2=val2 & ...`
As we saw above, we can combine conditions:
`FIND MusicalAnalysis WHICH HAS quality_factor>0.5 AND date IN 2019`
In general, the conjunction takes the form `FIND <eexpr> WHICH <filter1> AND <filter2>`. You can
also use `&` instead of `AND` or chain more than two conditions. If you mix conjunctions with
disjunctions, you need to add brackets to define the priority. For example: `FIND <eexpr> WHICH
(<filter1> AND <filter2>) OR <filter3>`.
`FIND Guitar WHICH REFERENCES Manufacturer AND price` is a combination of a reference filter and a
POV filter. For readability, you can also write
`FIND Guitar WHICH REFERENCES Manufacturer AND WHICH HAS A price`. However, the additional "WHICH
HAS A" is purely cosmetic (syntactic sugar).
#### Disjunction (OR)
The rules for disjunctions (`OR` or `|`) are the same as for conjunctions, see above.
* `FIND ename1 WHICH HAS A PROPERTY pname1=val1 OR A PROPERTY pname2=val2 OR A PROPERTY...`
* `FIND ename1 WHICH HAS A PROPERTY pname1=val1 OR A pname2=val2 OR ...`
* `FIND ename1 . pname1=val1 | pname2=val2 | ...`
#### Negation (NOT)
You can negate any filter by prefixing the filter with `NOT` or `!`:
`FIND <eexpr> WHICH NOT <filter1>`.
There are many syntactic sugar alternatives which are treated the same as "NOT":
- `DOES NOT HAVE`
- `ISN'T`
- and many more
* `FIND ename1 WHICH DOES NOT HAVE A PROPERTY pname1=val1`
* `FIND ename1 WHICH DOESN'T HAVE A pname1=val1`
* `FIND ename1 . NOT pname2=val2`
* `FIND ename1 . !pname2=val2`
#### Parentheses
Basically, you can put parentheses around filter expressions and con- or
disjunctions.
- `FIND Guitar WHICH (REFERENCES Manufacturer AND WHICH HAS A price)`.
- `FIND Guitar WHICH (REFERENCES Manufacturer) AND (WHICH HAS A price)`.
For better readability, the above query can be written as:
- `FIND Guitar WHICH (REFERENCES Manufacturer AND HAS A price)`.
Note that without syntactic sugar this query looks like:
- `FIND Guitar WHICH (REFERENCES Manufacturer AND price)`.
* `FIND ename1 WHICH HAS A pname1=val1 AND DOESN'T HAVE A pname2<val2 AND ((WHICH HAS A pname3=val3 AND A pname4=val4) OR DOES NOT HAVE A (pname5=val5 AND pname6=val6))`
* `FIND ename1 . pname1=val1 & !pname2<val2 & ((pname3=val3 & pname4=val4) | !(pname5=val5 & pname6=val6))`
@@ -369,7 +467,7 @@
Any result set can be filtered by logically combining POV filters or back references.
* NOT:: The logical negation. Equivalent expressions: `NOT, DOESN'T HAVE A PROPERTY, DOES NOT HAVE A PROPERTY, DOESN'T HAVE A, DOES NOT HAVE A, DOES NOT, DOESN'T, IS NOT, ISN'T, !`
* OR:: The logical _or_. Equivalent expressions: `OR, |`
* RECORD,RECORDTYPE,FILE,PROPERTY:: Role expression for restricting the result set to a specific role.
* WHICH:: The marker for the beginning of the filters. Equivalent expressions: `WHICH, WHICH HAS A, WHICH HAS A PROPERTY, WHERE, WITH (A), .`
* REFERENCE:: This one is tricky: `REFERENCE TO` expresses the state of _having_ a reference property. `REFERENCED BY` expresses the state of _being_ referenced by another entity.
* COUNT:: `COUNT` works like `FIND` but doesn't return the entities.
@@ -420,7 +518,7 @@
would return any entity with that name and all children, regardless of the
entity's role. Basically, `FIND ename` *was* equivalent to `FIND ENTITY ename`.
Since 0.9.0 the default behavior has changed and now `FIND ename` is equivalent
to `FIND RECORD ename`. This default is, however, configurable via the
`FIND_QUERY_DEFAULT_ROLE` server property. See [Server Configuration](./administration/configuration).
## Future
......
@@ -15,11 +15,10 @@
contain information about the Record. The following is a more detailed
explanation (also see this
[paper](https://www.mdpi.com/2306-5729/4/2/83)).
> Record Types and Abstract Properties are used to define the ontology for a particular domain in
> which the RDMS (research data management system) is used. Records are used to store the actual data and
> therefore represent individuals or particular things, e.g. a particular experiment, a particular
> time series, etc.
## Record Types
......
====
FAQs
====
These FAQs (frequently asked questions) can be extended, if *you* help us. Please `submit an issue
<https://gitlab.com/caosdb/caosdb-server/issues/new>`__ if you have a question that should be
answered here.
.. contents:: Select your question:
   :local:
How do I declare a LIST property?
=================================
Use the ``datatype`` parameter (available with the Property constructor and
with the ``add_property`` method) together with the ``LIST`` function.
.. code:: python

   # with constructor
   p = caosdb.Property(name="ListOfDoubles", datatype=caosdb.LIST(caosdb.DOUBLE))
   # with add_property method
   my_entity.add_property(name="ListOfIntegers", datatype=caosdb.LIST(caosdb.INTEGER))
   my_entity.add_property(name="ListOfThings", datatype=caosdb.LIST("Thing"))
   my_entity.add_property(name="ListOfThings", datatype=caosdb.LIST(caosdb.RecordType('Thing')))
Which data types are there?
===========================
There are 7 basic data types:
- ``INTEGER``
- ``DOUBLE``
- ``DATETIME``
- ``TEXT``
- ``BOOLEAN``
- ``FILE``
- ``REFERENCE``
There is (so far) 1 data type for collections:
- ``LIST`` (Actually, LIST-of-another-data-type, e.g. ``LIST(INTEGER)``)
And furthermore,…
- Any RecordType can be used as a ``REFERENCE`` data type with a
limited scope. That is, a property
.. code:: python

   p = caosdb.Property(name="Conductor", datatype="Person")
will only accept those Entities as value which have a “Person”
RecordType as a direct or indirect parent.
See also: :any:`Datatype<specification/Datatype>`.
@@ -83,3 +83,44 @@
An invocation via a button in JavaScript could look like:
For more information see the :doc:`specification of the API <../specification/Server-side-scripting>`
Calling from the webui
----------------------
Refer to the `webui documentation <https://docs.indiscale.com//caosdb-webui/extension/forms.html#calling-a-server-side-script>`_ to learn how to set up the webui side of this interaction.
The following example assumes that the form in the webui has only one field,
a file upload named ``csvfile``.
.. code-block:: python

   import json
   import os

   import linkahead as db
   from caosadvancedtools.serverside import helper
   from caosadvancedtools.serverside.logging import configure_server_side_logging


   def main():
       parser = helper.get_argument_parser()
       args = parser.parse_args()
       db.configure_connection(auth_token=args.auth_token)
       # set up logging and reporting for server-side execution
       userlog_public, htmluserlog_public, debuglog_public = configure_server_side_logging()

       if not hasattr(args, "filename") or not args.filename:
           raise RuntimeError("No file with form data provided!")

       # Read the input from the form (form.json)
       with open(args.filename) as form_json:
           form_data = json.load(form_json)

       # Files are uploaded to this directory
       upload_dir = os.path.dirname(args.filename)

       # Read the content of the uploaded file
       csv_file_path = os.path.join(upload_dir, form_data["csvfile"])

       # Do something with the uploaded csv file ...


   if __name__ == "__main__":
       main()
@@ -22,13 +22,13 @@
from os.path import dirname, abspath
# -- Project information -----------------------------------------------------
project = 'caosdb-server'
-copyright = '2022, IndiScale GmbH'
+copyright = '2023, IndiScale GmbH'
author = 'Daniel Hornung, Timm Fitschen'
# The short X.Y version
-version = '0.9.1'
+version = '0.13.0'
# The full version, including alpha/beta/rc tags
-release = '0.9.1-SNAPSHOT'
+release = '0.13.0-dev'
# -- General configuration ---------------------------------------------------
@@ -49,7 +49,7 @@
extensions = [
"sphinx.ext.autosectionlabel", # Allow reference sections using its title
"sphinx_rtd_theme",
"sphinxcontrib.plantuml", # PlantUML diagrams
-"sphinx_a4doc", # antl4
+"sphinx_a4doc", # antlr4
]
# Add any paths that contain templates here, relative to this directory.
......
# Benchmarking CaosDB #
Please refer to the file `doc/devel/Benchmarking.md` in the CaosDB sources for developer resources
on how to do benchmarking and profiling of CaosDB.
Benchmarking CaosDB
===================
Benchmarking CaosDB may encompass several distinct areas: How much time
is spent in the server’s Java code, how much time is spent inside the
SQL backend, are the same costly methods called more than once? This
documentation tries to answer some questions connected with these
benchmarking aspects and give you the tools to answer your own
questions.
Before you start
----------------
In order to obtain meaningful results, you should disable caching.
MariaDB
~~~~~~~
Set the corresponding variable to 0: ``SET GLOBAL query_cache_type = 0;``
Java Server
~~~~~~~~~~~
In the config:
.. code:: cfg

   CACHE_DISABLE=true
Tools for the benchmarking
--------------------------
For averaging over many runs of comparable requests and for putting the
database into a representative state, Python scripts are used. The
scripts can be found in the ``caosdb-dev-tools`` repository, located at
https://gitlab.indiscale.com/caosdb/src/caosdb-dev-tools in the folder
``benchmarking``:
Python Script ``fill_database.py``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This commandline script is meant for filling the database with enough
data to represent an actual real-life case; it can easily create
hundreds of thousands of Entities.
The script inserts predefined amounts of randomized Entities into the
database, RecordTypes, Properties and Records. Each Record has a random
(but with defined average) number of Properties, some of which may be
references to other Records which have been inserted before. Actual
insertion of the Entities into CaosDB is done in chunks of a defined
size.
Users can tell the script to store times needed for the insertion of
each chunk into a tsv file.
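The per-chunk timing could be sketched as follows (a hypothetical helper under assumed names; the actual script may differ):

.. code:: python

   import csv
   import time

   def insert_in_chunks(chunks, insert, tsv_path):
       """Insert each chunk and record the elapsed wall-clock time in a TSV file."""
       with open(tsv_path, "w", newline="") as f:
           writer = csv.writer(f, delimiter="\t")
           writer.writerow(["chunk", "seconds"])
           for number, chunk in enumerate(chunks):
               start = time.monotonic()
               insert(chunk)  # e.g. a function wrapping the actual CaosDB insertion
               writer.writerow([number, time.monotonic() - start])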
Python Script ``measure_execution_time.py``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A somewhat outdated script which executes a given query a number of
times and then saves statistics about the ``TransactionBenchmark``
readings (see below for more information about the transaction
benchmarks) delivered by the server.
Python Script ``sql_routine_measurement.py``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Simply call ``./sql_routine_measurement.py`` in the scripts directory.
An SQL file is automatically executed which enables the correct
``performance_schema`` tables. However, the performance schema of
MariaDB must already be enabled at server start-up: add
``performance_schema=ON`` to the MariaDB configuration file.
This script expects the MariaDB server to be accessible on 127.0.0.1
with the default caosdb user and password (caosdb;random1234).
You might consider increasing
``performance_schema_events_transactions_history_long_size``.
::

   performance_schema_events_transactions_history_long_size=1000000
The performance schema must be enabled (see below).
MariaDB General Query Log
~~~~~~~~~~~~~~~~~~~~~~~~~
MariaDB and MySQL have a feature to log the execution times of SQL
queries. This logging must be turned on in the SQL server as described in
the `upstream
documentation <https://mariadb.com/kb/en/general-query-log/>`__: add to
the MySQL configuration:
::

   log_output=TABLE
   general_log
or by calling
.. code:: sql

   SET GLOBAL log_output = 'TABLE';
   SET GLOBAL general_log = 'ON';
In the Docker environment LinkAhead, this can conveniently be done with
``linkahead mysqllog {on,off,store}``.
MariaDB Slow Query Log
~~~~~~~~~~~~~~~~~~~~~~
See `slow query log
docs <https://mariadb.com/kb/en/slow-query-log-overview/>`__
MariaDB Performance Schema
~~~~~~~~~~~~~~~~~~~~~~~~~~
The most detailed information on execution times can be acquired using
the performance schema.
To use it, the ``performance_schema`` setting in the MariaDB server must
be enabled
(`docs <https://mariadb.com/kb/en/performance-schema-overview/#enabling-the-performance-schema>`__),
for example by setting this in the config files:
::

   [mysqld]
   performance_schema=ON
The performance schema provides many different tables in the
``performance_schema`` database. You can instruct MariaDB to populate those
tables by setting the appropriate ``instrument`` and ``consumer`` variables.
E.g.
.. code:: sql

   update performance_schema.setup_instruments set enabled='YES', timed='YES' WHERE NAME LIKE '%statement%';
   update performance_schema.setup_consumers set enabled='YES' WHERE NAME LIKE '%statement%';
This can also be done via the configuration.
::

   [mysqld]
   performance_schema=ON
   performance-schema-instrument='statement/%=ON'
   performance-schema-consumer-events-statements-history=ON
   performance-schema-consumer-events-statements-history-long=ON
You may want to look at the result of the following commands:
.. code:: sql

   select * from performance_schema.setup_consumers;
   select * from performance_schema.setup_instruments;
Note that ``base_settings.sql`` enables appropriate instruments and
consumers.
Before you start a measurement, you will want to empty the tables. E.g.:
.. code:: sql

   truncate table performance_schema.events_statements_history_long;
The procedure ``reset_stats`` in ``base_settings.sql`` clears the
typically used ones.
The tables contain many columns. An example to get an informative view
is
.. code:: sql

   select left(sql_text,50), left(digest_text,50), ms(timer_wait) from performance_schema.events_statements_history_long order by ms(timer_wait);
where the function ``ms`` is defined in ``base_settings.sql``. Or a very
useful one:
.. code:: sql

   select left(digest_text,100) as digest, ms(sum_timer_wait) as time_ms, count_star from performance_schema.events_statements_summary_by_digest order by time_ms;
Useful SQL configuration with docker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to allow easy testing and debugging, the following is useful
when using docker. Change the docker-compose file to include the
following for the mariadb service:

::

   networks:
     # available on port 3306, host name 'sqldb'
     - caosnet
   ports:
     - 3306:3306
Check it with ``mysql -ucaosdb -prandom1234 -h127.0.0.1 caosdb``. Then
add the appropriate changes (e.g. ``performance_schema=ON``) to
``profiles/empty/custom/mariadb.conf.d/mariadb.cnf`` (or in the profile
folder that you use).
Manual Java-side benchmarking
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Benchmarking can be done using the ``TransactionBenchmark`` class (in
package ``org.caosdb.server.database.misc``).
- Single timings can be added to instances of that class via the
``addBenchmark(object, time)`` method. Multiple benchmarks for the
same object (typically just strings) can be averaged.
- Benchmarks can be serialized into XML, ``Container`` and ``Query``
objects already use this with their included benchmarks to output
benchmarking results.
- To work with the benchmarks of often used objects, use these methods:
- ``Container.getTransactionBenchmark().addBenchmark()``
- ``Query.addBenchmark()``
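The accumulate-and-average behaviour of ``addBenchmark`` can be pictured with a minimal Python sketch; these are not the server's actual Java classes, only an illustration of the idea:

```python
from collections import defaultdict


class Benchmark:
    """Toy accumulator: multiple timings for the same name are averaged."""

    def __init__(self):
        self._times = defaultdict(list)

    def add_benchmark(self, name, time_ms):
        """Record one timing (in ms) for the given object name."""
        self._times[name].append(time_ms)

    def average(self, name):
        """Average of all timings recorded under this name."""
        times = self._times[name]
        return sum(times) / len(times)


b = Benchmark()
b.add_benchmark("Retrieve.init", 12.0)
b.add_benchmark("Retrieve.init", 8.0)
print(b.average("Retrieve.init"))  # 10.0
```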
To enable transaction benchmarks and disable caching in the server, set
these server settings:
.. code:: cfg

   TRANSACTION_BENCHMARK_ENABLED=true
   CACHE_DISABLE=true
Additionally, the server should be started via ``make run-debug``
(instead of ``make run-single``); otherwise the benchmarking will not be
active.
Notable benchmarks and where to find them
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+----------------------------------------+-----------------------------------------------+--------------------------------+
| Name                                   | Where measured                                | What measured                  |
+========================================+===============================================+================================+
| ``Retrieve.init``                      | transaction/Transaction.java#135              | transaction/Retrieve.java#48   |
+----------------------------------------+-----------------------------------------------+--------------------------------+
| ``Retrieve.transaction``               | transaction/Transaction.java#174              | transaction/Retrieve.java#133  |
+----------------------------------------+-----------------------------------------------+--------------------------------+
| ``Retrieve.post_transaction``          | transaction/Transaction.java#182              | transaction/Retrieve.java#77   |
+----------------------------------------+-----------------------------------------------+--------------------------------+
| ``EntityResource.httpGetInChildClass`` | resource/transaction/EntityResource.java#118  | all except XML generation      |
+----------------------------------------+-----------------------------------------------+--------------------------------+
| ``ExecuteQuery``                       | ?                                             | ?                              |
+----------------------------------------+-----------------------------------------------+--------------------------------+
External JVM profilers
~~~~~~~~~~~~~~~~~~~~~~
In addition to the transaction benchmarks, it is possible to profile the
server execution with external Java profilers. For example,
`VisualVM <https://visualvm.github.io/>`__ can connect to JVMs running
locally or remotely (e.g. in a Docker container). To enable this in
LinkAhead’s Docker environment, set
.. code:: yaml

   devel:
     profiler: true
Alternatively, start the server (without docker) with the
``run-debug-single`` make target; it will expose the JMX interface, by
default on port 9090.
Most profilers, such as VisualVM, only gather cumulative data for call
trees; they do not provide complete call graphs (as
callgrind/kcachegrind would do). They also do not differentiate between
calls with different query strings as long as the Java process flow is
the same (for example, ``FIND Record 1234`` and
``FIND Record A WHICH HAS A Property B WHICH HAS A Property C>100``
would be handled equally).
Example settings for VisualVM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the sampler settings, you may want to add these expressions to the
blocked packages: ``org.restlet.**, com.mysql.**``. Branches of the call
tree which lie entirely inside the blacklist will become leaves.
Alternatively, specify a whitelist, for example with
``org.caosdb.server.database.backend.implementation.**``, if you only
want to see the time spent for certain MySQL calls.
How to set up a representative database
---------------------------------------
For reproducible results, it makes sense to start off with an empty
database and fill it using the ``fill_database.py`` script, for example
like this:
.. code:: sh

   ./fill_database.py -t 500 -p 700 -r 10000 -s 100 --clean
The ``--clean`` argument is not strictly necessary when the database was
empty before, but it may make sense when there have been previous runs
of the command. This example would create 500 RecordTypes, 700
Properties and 10000 Records with randomized properties; everything is
inserted in chunks of 100 Entities.
How to measure request times
----------------------------
If the execution of the Java components is of interest, the VisualVM
profiler should be started and connected to the server before any
requests are sent to the server.
When doing performance tests which are used for detailed analysis, it is
important that

1. CaosDB is in a reproducible state, which should be documented, and
2. all measurements are repeated several times to account for inevitable
   variance in access (for example file system caching, network
   variability etc.).
Filling the database
~~~~~~~~~~~~~~~~~~~~
By simply adding the option ``-T logfile.tsv`` to the
``fill_database.py`` command above, the times for inserting the records
are stored in a tsv file and can be analyzed later.
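A later analysis of such a log could be sketched in a few lines of Python. The exact column layout of ``logfile.tsv`` is an assumption here, so the commented parsing line may need adjustment:

```python
import statistics


def summarize(times_ms):
    """Basic statistics for a list of request durations in milliseconds."""
    return {
        "n": len(times_ms),
        "mean": statistics.mean(times_ms),
        "median": statistics.median(times_ms),
        "stdev": statistics.stdev(times_ms) if len(times_ms) > 1 else 0.0,
    }


# Real durations would come from the TSV log, for example:
#   times = [float(line.split("\t")[1]) for line in open("logfile.tsv")]
# (assuming the duration is in the second column).  Made-up values here:
times = [120.0, 115.5, 130.2, 118.7, 125.1]
stats = summarize(times)
print("n=%d mean=%.1f median=%.1f" % (stats["n"], stats["mean"], stats["median"]))
```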
Obtain statistics about a query
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To repeat single queries a number of times,
``measure_execution_time.py`` can be used, for example:

.. code:: sh

   ./measure_execution_time.py -n 120 -q "FIND MusicalInstrument WHICH IS REFERENCED BY Analysis"
This command executes the query 120 times, additional arguments could
even plot the TransactionBenchmark results directly.
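The core of such a measurement loop can be pictured with a short Python sketch; ``run_query`` stands in for whatever actually sends the query to the server:

```python
import time


def measure(run_query, n=120):
    """Call run_query() n times and return each wall-clock duration in seconds."""
    durations = []
    for _ in range(n):
        start = time.perf_counter()
        run_query()
        durations.append(time.perf_counter() - start)
    return durations


# Hypothetical stand-in for an actual server request:
durations = measure(lambda: sum(range(10000)), n=5)
print(len(durations))
```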
On method calling order and benchmarked events
----------------------------------------------
- ``Transaction.execute()`` :: Logs benchmarks for events like:
- ``INIT`` :: The transaction’s ``init()`` method.
- ``PRE_CHECK``
- ``CHECK``
- ``POST_CHECK``
- ``PRE_TRANSACTION``
- ``TRANSACTION`` -> typically calls
``database.backend.transaction.[BackendTransaction].execute()``,
which in turn calls, some levels deeper,
``backend.transaction.....execute(<k extends BackendTransaction> t)``
-> see next point
- …
- ``backend.transaction.[...].execute(transaction)`` :: This method is
benchmarked again (via parent class ``BackendTransaction``), this is
probably the deepest level of benchmarking currently (Benchmark is
logged as e.g. ``<RetrieveFullEntity>...</>``). It finally calls
``[MySQLTransaction].execute()``.
- ``[MySQLTransaction].execute()`` :: This is the deepest backend
implementation part, it typically creates a prepared statement and
executes it.
- Currently not benchmarked separately:
- Getting the actual implementation (probably fast?)
- Preparing the SQL statement
- Executing the SQL statement
- Java-side caching
What is measured
----------------
For a consistent interpretation, the exact definitions of the measured
times are as follows:
SQL logs
~~~~~~~~
As per https://mariadb.com/kb/en/general-query-log, the logs store only
the time at which the SQL server received a query, not the duration of
the query.
Possible future enhancements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- The ``query_response_time`` plugin may be additionally used in the
future, see https://mariadb.com/kb/en/query-response-time-plugin
Transaction benchmarks
~~~~~~~~~~~~~~~~~~~~~~
Transaction benchmarking manually collects timing information for each
transaction. At defined points, different measurements can be made,
accumulated and will finally be returned to the client. Benchmark
objects may consist of sub benchmarks and have a number of measurement
objects, which contain the actual statistics.
Because transaction benchmarks must be manually added to the server
code, they only monitor those code paths where they are added. On the
other hand, their manual nature allows for a more abstracted analysis of
performance bottlenecks.
Java profiler
~~~~~~~~~~~~~
VisualVM records for each thread the call tree, specifically which
methods were called how often and how much time was spent inside these
methods.
Global requests
~~~~~~~~~~~~~~~
Python scripts may measure the global time needed for the execution of
each request. ``fill_database.py`` obtains its numbers this way.
Developing CaosDB
=================

.. toctree::
   :maxdepth: 2

   Structure of the Java code <structure>
   Testing the server code <testing>
   Logging server output <logging>
   Benchmarking CaosDB <benchmarking>
CaosDB is an Open-Source project, so anyone may modify the source as they like.
Logging
=======
Framework
---------
We use the SLF4J API with a log4j2 backend for all of our code. Please
do not use log4j2 directly or any other logging API.

Note that some libraries on the classpath use the ``java.util.logging``
API or the log4j1 logging framework instead. These loggers cannot yet be
configured by the means described here.
Configuration
-------------
The configuration of the log4j2 backend is done via ``properties`` files which comply with the
`log4j2 specifications
<https://logging.apache.org/log4j/2.x/manual/configuration.html#Properties>`__.
XML, YAML, or JSON files are not supported. The usual mechanism for
automatic configuration with such files is disabled. Instead, files have
to be placed into the ``conf`` subdirectories, as follows:
Default and Debug Logging
~~~~~~~~~~~~~~~~~~~~~~~~~
The default configuration is located at
``conf/core/log4j2-default.properties``. For the debug mode, the
configuration from ``conf/core/log4j2-debug.properties`` is merged with
the default configuration. These files should not be changed by the
user.
User Defined Logging
~~~~~~~~~~~~~~~~~~~~
The default and debug configuration can be overridden by the user with
``conf/ext/log4j2.properties`` and any file in the directory
``conf/ext/log4j2.properties.d/`` which is suffixed by ``.properties``.
All logging configuration files are merged using the standard merge
strategy of log4j2:
.. rubric:: Composite Configuration
:name: composite-configuration
Log4j allows multiple configuration files to be used by specifying
them as a list of comma separated file paths on
log4j.configurationFile. The merge logic can be controlled by
specifying a class that implements the MergeStrategy interface on the
log4j.mergeStrategy property. The default merge strategy will merge
the files using the following rules:
1. The global configuration attributes are aggregated with those in later configurations
replacing those in previous configurations, with the exception that the highest status level
and the lowest monitorInterval greater than 0 will be used.
2. Properties from all configurations are aggregated. Duplicate properties replace those in
previous configurations.
3. Filters are aggregated under a CompositeFilter if more than one Filter is defined. Since
Filters are not named duplicates may be present.
4. Scripts and ScriptFile references are aggregated. Duplicate definiations replace those in
previous configurations.
5. Appenders are aggregated. Appenders with the same name are replaced by those in later
configurations, including all of the Appender’s subcomponents.
6. Loggers are all aggregated. Logger attributes are individually merged with duplicates being
replaced by those in later configurations. Appender references on a Logger are aggregated with
duplicates being replaced by those in later configurations. Filters on a Logger are aggregated
under a CompositeFilter if more than one Filter is defined. Since Filters are not named
duplicates may be present. Filters under Appender references included or discarded depending
on whether their parent Appender reference is kept or discarded.
`Source <https://logging.apache.org/log4j/2.x/manual/configuration.html#CompositeConfiguration>`__
Some Details and Examples
-------------------------
Make Verbose
~~~~~~~~~~~~
To make the server logs on the console more verbose, insert
``rootLogger.level = DEBUG`` or even ``rootLogger.level = TRACE`` into a
properties file in the ``conf/ext/log4j2.properties.d/`` directory or
the ``conf/ext/log4j2.properties`` file.
Log Directory
~~~~~~~~~~~~~
By default, log files go to ``./log/``,
e.g. ``./log/request_errors/current.log``. The log directory in
``DEBUG_MODE`` is located at ``./testlog/``.
To change that, insert ``property.LOG_DIR = /path/to/my/logs`` into a
properties file in the ``conf/ext/log4j2.properties.d/`` directory or
into the ``conf/ext/log4j2.properties`` file.
Special loggers
~~~~~~~~~~~~~~~
- ``REQUEST_ERRORS_LOGGER`` for logging server errors with SRID, full
  request and full response. WARNING: This logger stores the unencrypted
  content of requests, possibly including confidential data.
- ``REQUEST_TIME_LOGGER`` for timing the requests.
These loggers are defined in the ``conf/core/log4j2-default.properties``
file.
Enable Request Time Logger
^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``REQUEST_TIME_LOGGER`` is disabled by default; its log level is set
to ``OFF``. To enable it and write logs to the directory denoted by
``property.LOG_DIR``, create a ``properties`` file under
``conf/ext/log4j2.properties.d/`` which contains at least

.. code:: properties

   property.REQUEST_TIME_LOGGER_LEVEL = TRACE
debug.log
~~~~~~~~~
When in ``DEBUG_MODE``, e.g. when started with ``make run-debug``, the
server also writes all logs to ``debug.log`` in the log directory.
Testing the server code
-----------------------
Whether developing new features, refactoring code or fixing bugs, the server
code should be thoroughly tested for correct and incorrect behaviour, on
correct and incorrect input.
Writing tests
~~~~~~~~~~~~~
Tests go into ``src/test/java/caosdb/``; the files there can serve as examples
for writing tests.
Running tests with Maven
~~~~~~~~~~~~~~~~~~~~~~~~
- Automatic testing can be done with ``make test`` or, after compilation, ``mvn test``.
- Tests of single modules can be started with ``mvn test -Dtest=TestClass``.
- Test of a single method ``footest``: ``mvn test -Dtest=TestClass#footest``
Welcome to caosdb-server's documentation!
=========================================

.. toctree::
   :maxdepth: 2

   Getting started <README_SETUP>
   Concepts <concepts>
   tutorials
   FAQ
   Query Language <CaosDB-Query-Language>
   administration
   Development <development/devel>
# AbstractProperty Specification

## Note ##

**Warning:** This specification is outdated. It is included to serve as a starting point for a more
up-to-date description of the `Property` entity.

> This document has not been updated for a long time. Although it is concerned with the mostly
> stable API, its content may no longer reflect the actual CaosDB behavior.
## Introduction

An `AbstractProperty` is one of the fundamental objects of CaosDB.
==============
Authentication
==============

Some features of CaosDB are available to registered users only. Making any changes to the data stock
via HTTP requires authentication.

Sessions
========

Login
-----

Authentication is done by ``username`` and ``password``. They must be sent as form data with a POST
request to the ``/login/`` resource:

username:
  The user name, for example ``admin`` (on demo.indiscale.com).

password:
  The password, for example ``caosdb`` (on demo.indiscale.com).

Logout
------

The server does not invalidate AuthTokens. They invalidate after they expire or
when the server is being restarted. Clients should just delete their AuthToken
to 'logout'.

However, in order to remove the AuthToken cookie from the browsers there is a
convenient resource which will invalidate the cookie (not the AuthToken):

``GET http://host:port/logout``

The server will return an empty AuthToken cookie which immediately expires.

Example using ``curl``
----------------------

.. _curl-login:

Login
~~~~~

To use curl for talking with the server, first save your password into a
variable: ``PW=$(cat)``
…password visible for a short time to everyone on your system:
.. code-block:: sh

   curl -X POST -c cookie.txt -D head.txt -d username=<USERNAME> -d password="$PW" --insecure "https://<SERVER>/login"
Now ``cookie.txt`` contains the required authentication token information in the ``SessionToken``
cookie (url-encoded json).
.. rubric:: Example token content
.. code-block:: json

   ["S","PAM","admin",[],[],1682509668825,3600000,"Z6J4B[...]-OQ","31d3a[...]ab2c10"]
Using the token
~~~~~~~~~~~~~~~
To use the cookie, pass it on with later requests:
.. code-block:: sh

   curl -X GET -b cookie.txt --insecure "https://<SERVER>/Entity/123"
Please file a new feature request as soon as you need them.
----
## REFERENCE
* Description: REFERENCE values store the [Valid ID](../Glossary.html#valid-id) of an existing entity. They are useful to establish links between two entities.
* Accepted Values: Any [Valid ID](./Glossary#valid-id) or [Valid Unique Existing Name](./Glossary#valid-unique-existing-name) or [Valid Unique Temporary ID](./Glossary#valid-unique-temporary-id) or [Valid Unique Prospective Name](./Glossary#valid-unique-prospective-name).
* Note:
  * After being processed successfully by the server, the REFERENCE value is normalized to a [Valid ID](./Glossary#valid-id). I.e. it is guaranteed that a REFERENCE value of a valid property is a positive integer.
# Message API
## Introduction
API Version 0.1.0