Compare revisions
...@@ -20,6 +20,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
to determine whether the server's state has changed between queries.
* Basic caching for queries. The caching is enabled by default and can be
controlled by the usual "cache" flag.
* Add `BEFORE`, `AFTER`, `UNTIL`, `SINCE` keywords for query transaction
filters.
### Changed
......
...@@ -79,7 +79,7 @@ server:
Replace `localhost` with your host name, if you want.
- `keytool -importkeystore -srckeystore caosdb.jks -destkeystore caosdb.p12 -deststoretype PKCS12 -srcalias selfsigned`
- Export the public part only: `openssl pkcs12 -in caosdb.p12 -nokeys -out cert.pem`.
The resulting `cert.pem` can safely be given to users to allow SSL verification.
- You can check the content of the certificate with `openssl x509 -in cert.pem -text`
Alternatively, you can create a keystore from certificate files that you already have:
......
...@@ -226,14 +226,13 @@ The following query returns entities which have a _pname1_ property with any val
### TransactionFilter
*Definition*
sugar:: `HAS BEEN` | `HAVE BEEN` | `HAD BEEN` | `WAS` | `IS`
negated_sugar:: `HAS NOT BEEN` | `HASN'T BEEN` | `WAS NOT` | `WASN'T` | `IS NOT` | `ISN'T` | `HAVN'T BEEN` | `HAVE NOT BEEN` | `HADN'T BEEN` | `HAD NOT BEEN`
by_clause:: `BY (ME | username | SOMEONE ELSE (BUT ME)? | SOMEONE ELSE BUT username)`
datetime:: A datetime string of the form `YYYY[-MM[-DD(T| )[hh[:mm[:ss[.nnn][(+|-)zzzz]]]]]]`
time_clause:: `[AT|ON|IN|BEFORE|AFTER|UNTIL|SINCE] (datetime)`
There is plenty of room for more syntactic sugar, e.g. a `TODAY` keyword, and more functionality, e.g. ranges.
`FIND ename WHICH (sugar|negated_sugar)? (NOT)? (CREATED|INSERTED|UPDATED) (by_clause time_clause?| time_clause by_clause?)`
*Examples*
...@@ -247,8 +246,9 @@ The following query returns entities which have a _pname1_ property with any val
`FIND ename WHICH HAS BEEN CREATED BY erwin`
`FIND ename WHICH HAS BEEN INSERTED SINCE 2021-04`
Note that `SINCE` and `UNTIL` are inclusive, while `BEFORE` and `AFTER` are not.
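The boundary semantics of the four new keywords can be summarized as a keyword-to-operator mapping; the following is an illustrative sketch only, not the server's implementation:

```python
from datetime import date

# Transaction-time keywords mapped to comparison semantics:
# SINCE/UNTIL include the boundary instant, AFTER/BEFORE exclude it.
OPERATORS = {
    "AT": lambda t, ref: t == ref,
    "BEFORE": lambda t, ref: t < ref,   # exclusive
    "AFTER": lambda t, ref: t > ref,    # exclusive
    "UNTIL": lambda t, ref: t <= ref,   # inclusive
    "SINCE": lambda t, ref: t >= ref,   # inclusive
}

ref = date(2021, 4, 1)
assert OPERATORS["SINCE"](date(2021, 4, 1), ref)      # boundary matches
assert not OPERATORS["AFTER"](date(2021, 4, 1), ref)  # boundary excluded
```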
### File Location
......
# Benchmarking CaosDB #
Benchmarking CaosDB may encompass several distinct areas: how much time is spent in the server's
Java code, how much time is spent inside the SQL backend, are the same costly methods called more
than once? This documentation tries to answer some questions connected with these benchmarking
aspects and gives you the tools to answer your own questions.
## Before you start ##
In order to obtain meaningful results, you should disable caching.
### MariaDB
Set the corresponding variable to 0: `SET GLOBAL query_cache_type = 0;`
### Java Server
In the config:
```conf
CACHE_DISABLE=true
```
## Tools for the benchmarking ##
For averaging over many runs of comparable requests and for putting the database into a
representative state, Python scripts are used. The scripts can be found in the `caosdb-dev-tools`
repository, located at [https://gitlab.indiscale.com/caosdb/src/caosdb-dev-tools](https://gitlab.indiscale.com/caosdb/src/caosdb-dev-tools) in the folder
`benchmarking`:
### Python Script `fill_database.py` ###
This commandline script is meant for filling the database with enough data to represent an actual
real-life case; it can easily create hundreds of thousands of Entities.
The script inserts predefined amounts of randomized Entities into the database: RecordTypes,
Properties and Records. Each Record has a random (but with defined average) number of Properties,
some of which may be references to other Records which have been inserted before. Actual insertion
of the Entities into CaosDB is done in chunks of a defined size.
Users can tell the script to store times needed for the insertion of each chunk into a tsv file.
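The chunked-insertion-with-timing pattern described above can be sketched as follows (hypothetical helper names; the real script lives in the `caosdb-dev-tools` repository):

```python
import csv
import io
import time

def insert_in_chunks(entities, chunk_size, insert, log):
    """Insert `entities` in chunks, recording per-chunk wall time as TSV rows.

    `insert` is a stand-in for the actual CaosDB insertion call.
    """
    writer = csv.writer(log, delimiter="\t")
    for start in range(0, len(entities), chunk_size):
        chunk = entities[start:start + chunk_size]
        t0 = time.monotonic()
        insert(chunk)
        writer.writerow([start, len(chunk), time.monotonic() - t0])

log = io.StringIO()
insert_in_chunks(list(range(250)), 100, lambda chunk: None, log)
# 250 entities in chunks of 100 -> 3 chunks -> 3 TSV rows
assert len(log.getvalue().splitlines()) == 3
```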
### Python Script `measure_execution_time.py` ###
A somewhat outdated script which executes a given query a number of times and then saves statistics
about the `TransactionBenchmark` readings (see below for more information about the transaction
benchmarks) delivered by the server.
### Python Script `sql_routine_measurement.py`
Simply call `./sql_routine_measurement.py` in the scripts directory. It automatically executes an
SQL file which enables the correct `performance_schema` tables. Note that MariaDB's
`performance_schema` itself must be enabled at start-up: add `performance_schema=ON` to the MariaDB
configuration file (see below).
The script expects the MariaDB server to be accessible on 127.0.0.1 with the default caosdb user
and password (caosdb;random1234).
### MariaDB General Query Log ###
MariaDB and MySQL have a feature to enable the logging of SQL queries' times. This logging must be
turned on on the SQL server as described in the [upstream documentation](https://mariadb.com/kb/en/general-query-log/):
Add to the mysql configuration:
```
log_output=TABLE
general_log
```
or by calling
```sql
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
```
In the Docker environment LinkAhead, this can conveniently be
done with `linkahead mysqllog {on,off,store}`.
### MariaDB Slow Query Log ###
See [slow query log docs](https://mariadb.com/kb/en/slow-query-log-overview/)
### MariaDB Performance Schema ###
The most detailed information on execution times can be acquired using the performance schema.
To use it, the `performance_schema` setting must be enabled in the MariaDB server ([docs](https://mariadb.com/kb/en/performance-schema-overview/#enabling-the-performance-schema)), for example by setting
this in the config files:
```
[mysqld]
performance_schema=ON
```
The performance schema provides many different tables in the `performance_schema` database. You can instruct MariaDB to populate
those tables by setting the appropriate `instrument` and `consumer` variables, e.g.
```SQL
update performance_schema.setup_instruments set enabled='YES', timed='YES' WHERE NAME LIKE '%statement%';
update performance_schema.setup_consumers set enabled='YES' WHERE NAME LIKE '%statement%';
```
This can also be done via the configuration.
```
[mysqld]
performance_schema=ON
performance-schema-instrument='statement/%=ON'
performance-schema-consumer-events-statements-history=ON
performance-schema-consumer-events-statements-history-long=ON
```
You may want to look at the result of the following commands:
```sql
select * from performance_schema.setup_consumers;
select * from performance_schema.setup_instruments;
```
Note that `base_settings.sql` enables appropriate instruments and consumers.
Before you start a measurement, you will want to empty the tables. E.g.:
```sql
truncate table performance_schema.events_statements_history_long ;
```
The procedure `reset_stats` in `base_settings.sql` clears the typically used ones.
The tables contain many columns. An example to get an informative view is
```sql
select left(sql_text,50), left(digest_text,50), ms(timer_wait) from performance_schema.events_statements_history_long order by ms(timer_wait);
```
where the function `ms` is defined in `base_settings.sql`.
Another very useful one:
```sql
select left(digest_text,100) as digest,ms(sum_timer_wait) as time_ms, count_star from performance_schema.events_statements_summary_by_digest order by time_ms;
```
### Useful SQL configuration with docker
In order to allow easy testing and debugging, the following is useful when using Docker.
Change the docker-compose file to include the following for the mariadb service:
```
networks:
# available on port 3306, host name 'sqldb'
- caosnet
ports:
- 3306:3306
```
Check it with `mysql -ucaosdb -prandom1234 -h127.0.0.1 caosdb`
Add the appropriate changes (e.g. `performance_schema=ON`) to `profiles/empty/custom/mariadb.conf.d/mariadb.cnf` (or in the profile folder that you use).
### Manual Java-side benchmarking ###
Benchmarking can be done using the `TransactionBenchmark` class (in package
`org.caosdb.server.database.misc`).
...@@ -26,9 +161,95 @@ Benchmarking can be done using the `TransactionBenchmark` class (in package
- `Container.getTransactionBenchmark().addBenchmark()`
- `Query.addBenchmark()`
To enable transaction benchmarks and disable caching in the server, set these
server settings:
```conf
TRANSACTION_BENCHMARK_ENABLED=true
CACHE_DISABLE=true
```
Additionally, the server should be started via `make run-debug` (instead of
`make run-single`), otherwise the benchmarking will not be active.
#### Notable benchmarks and where to find them ####
| Name | Where measured | What measured |
|--------------------------------------|----------------------------------------------|-------------------------------|
| `Retrieve.init` | transaction/Transaction.java#135 | transaction/Retrieve.java#48 |
| `Retrieve.transaction` | transaction/Transaction.java#174 | transaction/Retrieve.java#133 |
| `Retrieve.post_transaction` | transaction/Transaction.java#182 | transaction/Retrieve.java#77 |
| `EntityResource.httpGetInChildClass` | resource/transaction/EntityResource.java#118 | all except XML generation |
| `ExecuteQuery` | ? | ? |
| | | |
### External JVM profilers ###
Additionally to the transaction benchmarks, it is possible to benchmark the server execution via
external Java profilers. For example, [VisualVM](https://visualvm.github.io/) can connect to JVMs running locally or remotely
(e.g. in a Docker container). To enable this in LinkAhead's Docker environment, set
```yaml
devel:
profiler: true
```
Alternatively, start the server (without Docker) with the `run-debug-single` make target; it will expose
the JMX interface, by default on port 9090.
Most profilers, such as VisualVM, only gather cumulative data for call trees; they do not provide
complete call graphs (as callgrind/kcachegrind would do). They also do not differentiate between
calls with different query strings, as long as the Java process flow is the same (for example, `FIND
Record 1234` and `FIND Record A WHICH HAS A Property B WHICH HAS A Property C>100` would be handled
equally).
#### Example settings for VisualVM
In the sampler settings, you may want to add these expressions to the blocked
packages: `org.restlet.**, com.mysql.**`. Branches on the call tree which are
entirely inside the blacklist, will become leaves. Alternatively, specify a
whitelist, for example with `org.caosdb.server.database.backend.implementation.**`,
if you only want to see the time spent for certain MySQL calls.
## How to set up a representative database ##
For reproducible results, it makes sense to start off with an empty database and fill it using the
`fill_database.py` script, for example like this:
```sh
./fill_database.py -t 500 -p 700 -r 10000 -s 100 --clean
```
The `--clean` argument is not strictly necessary when the database was empty before, but it may make
sense when there have been previous runs of the command. This example would create 500 RecordTypes,
700 Properties and 10000 Records with randomized properties, everything is inserted in chunks of 100
Entities.
## How to measure request times ##
If the execution of the Java components is of interest, the VisualVM profiler should be started and
connected to the server before any requests to the server are started.
When doing performance tests which are used for detailed analysis, it is important that
1. CaosDB is in a reproducible state, which should be documented
2. all measurements are repeated several times to account for inevitable variance in access (for
example file system caching, network variability, etc.)
### Filling the database ###
By simply adding the option `-T logfile.tsv` to the `fill_database.py` command above, the times for
inserting the records are stored in a tsv file and can be analyzed later.
### Obtain statistics about a query ###
To repeat single queries a number of times, `measure_execution_time.py` can be used, for example:
```sh
./measure_execution_time.py -n 120 -q "FIND MusicalInstrument WHICH IS REFERENCED BY Analysis"
```
This command executes the query 120 times, additional arguments could even plot the
TransactionBenchmark results directly.
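The kind of summary such a repeated run yields can be sketched as follows (hypothetical; the actual script reports the server's `TransactionBenchmark` readings):

```python
from statistics import mean, stdev

def summarize(times_ms):
    """Summary statistics over repeated query execution times (in ms)."""
    return {
        "n": len(times_ms),
        "mean": mean(times_ms),
        "stdev": stdev(times_ms),
        "min": min(times_ms),
        "max": max(times_ms),
    }

s = summarize([10.0, 12.0, 11.0, 13.0])
assert s["n"] == 4 and s["mean"] == 11.5
```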
## On method calling order and benchmarked events ##
...@@ -56,29 +277,37 @@ Notes to self, details, etc.
- Executing the SQL statement
- Java-side caching
## What is measured ##
For a consistent interpretation, the exact definitions of the measured times are as follows:
### SQL logs ###
As per https://mariadb.com/kb/en/general-query-log, the logs store only the time at which the SQL
server received a query, not the duration of the query.
#### Possible future enhancements ####
- The `query_response_time` plugin may additionally be used in the future, see
https://mariadb.com/kb/en/query-response-time-plugin
### Transaction benchmarks ###
Transaction benchmarking manually collects timing information for each transaction. At defined
points, different measurements can be made, accumulated and will finally be returned to the client.
Benchmark objects may consist of sub benchmarks and have a number of measurement objects, which
contain the actual statistics.
Because transaction benchmarks must be manually added to the server code, they only monitor those
code paths where they are added. On the other hand, their manual nature allows for a more
abstracted analysis of performance bottlenecks.
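As a sketch of this structure (a hypothetical Python analogue, not the actual Java class in `org.caosdb.server.database.misc`):

```python
class Benchmark:
    """Benchmarks may hold sub-benchmarks and named, accumulated measurements."""

    def __init__(self, name):
        self.name = name
        self.measurements = {}  # name -> accumulated time in ms
        self.subs = {}          # name -> sub-Benchmark

    def add_measurement(self, what, duration_ms):
        # Accumulate: repeated measurements of the same thing are summed.
        self.measurements[what] = self.measurements.get(what, 0.0) + duration_ms

    def sub(self, name):
        return self.subs.setdefault(name, Benchmark(name))

root = Benchmark("Retrieve")
root.sub("init").add_measurement("acquire", 1.5)
root.sub("init").add_measurement("acquire", 0.5)
assert root.sub("init").measurements["acquire"] == 2.0
```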
### Java profiler ###
VisualVM records for each thread the call tree, specifically which methods were called how often and
how much time was spent inside these methods.
### Global requests ###
Python scripts may measure the global time needed for the execution of each request.
`fill_database.py` obtains its numbers this way.
...@@ -77,6 +77,22 @@ IN:
[Ii][Nn] WHITE_SPACE_f?
;
AFTER:
[Aa][Ff][Tt][Ee][Rr] WHITE_SPACE_f?
;
BEFORE:
[Bb][Ee][Ff][Oo][Rr][Ee] WHITE_SPACE_f?
;
UNTIL:
[Uu][Nn][Tt][Ii][Ll] WHITE_SPACE_f?
;
SINCE:
[Ss][Ii][Nn][Cc][Ee] WHITE_SPACE_f?
;
IS_STORED_AT:
(IS_f WHITE_SPACE_f?)? [Ss][Tt][Oo][Rr][Ee][Dd] (WHITE_SPACE_f? AT)? WHITE_SPACE_f?
;
......
...@@ -153,14 +153,15 @@ idfilter returns [IDFilter filter] locals [String o, String v, String a]
)?
;
transaction returns [TransactionFilter filter] locals [String type, TransactionFilter.Transactor user, String time, String time_op]
@init{
$time = null;
$user = null;
$type = null;
$time_op = null;
}
@after{
$filter = new TransactionFilter($type,$user,$time,$time_op);
}
:
(
...@@ -169,8 +170,8 @@ transaction returns [TransactionFilter filter] locals [String type, TransactionF
)
(
transactor (transaction_time {$time = $transaction_time.tqp; $time_op = $transaction_time.op;})? {$user = $transactor.t;}
| transaction_time (transactor {$user = $transactor.t;})? {$time = $transaction_time.tqp; $time_op = $transaction_time.op;}
)
;
...@@ -199,12 +200,25 @@ username returns [Query.Pattern ep] locals [int type]
( STAR {$type = Query.Pattern.TYPE_LIKE;} | ~(STAR | WHITE_SPACE) )+
;
transaction_time returns [String tqp, String op]
@init {
$op = "(";
}
:
(
AT {$op = "=";}
| (ON | IN)
| (
BEFORE {$op = "<";}
| UNTIL {$op = "<=";}
| AFTER {$op = ">";}
| SINCE {$op = ">=";}
)
)?
(
TODAY {$tqp = TransactionFilter.TODAY;}
| value {$tqp = $value.text;}
)
;
/*
......
...@@ -29,7 +29,6 @@ import java.sql.Types;
import org.caosdb.datetime.Date;
import org.caosdb.datetime.DateTimeFactory2;
import org.caosdb.datetime.Interval;
import org.caosdb.datetime.UTCDateTime;
import org.caosdb.server.accessControl.Principal;
import org.caosdb.server.accessControl.UserSources;
...@@ -96,14 +95,20 @@ public class TransactionFilter implements EntityFilterInterface {
}
}
public TransactionFilter(
final String type,
final Transactor transactor,
final String time,
final String timeOperator) {
this.transactor = transactor;
this.transactionTime = time;
this.transactionType = type;
this.transactionTimeOperator = timeOperator;
}
private final Transactor transactor;
private final String transactionTime;
private final String transactionTimeOperator;
private final String transactionType;
@Override
...@@ -123,7 +128,7 @@ public class TransactionFilter implements EntityFilterInterface {
} else {
try {
dt = (Interval) DateTimeFactory2.valueOf(this.transactionTime);
} catch (final ClassCastException e) {
throw new QueryException("Transaction time must be a SemiCompleteDateTime.");
} catch (final IllegalArgumentException e) {
...@@ -201,14 +206,13 @@ public class TransactionFilter implements EntityFilterInterface {
} else {
prepareCall.setNull(10, Types.INTEGER);
}
} else {
prepareCall.setNull(9, Types.BIGINT);
prepareCall.setNull(10, Types.INTEGER);
}
prepareCall.setString(11, transactionTimeOperator);
} else {
// ilb_sec, ilb_nanos, eub_sec, eub_nanos, operator_t
prepareCall.setNull(7, Types.BIGINT);
...@@ -251,6 +255,8 @@ public class TransactionFilter implements EntityFilterInterface {
return "TRANS("
+ this.transactionType
+ ","
+ this.transactionTimeOperator
+ ","
+ this.transactionTime
+ ","
+ this.transactor
......
...@@ -237,6 +237,8 @@ public class TestCQL {
String queryIssue31 = "FIND FILE WHICH IS STORED AT /data/in0.foo";
String queryIssue116 = "FIND *";
String queryIssue132a = "FIND ENTITY WHICH HAS BEEN INSERTED AFTER TODAY";
String queryIssue132b = "FIND ENTITY WHICH HAS BEEN CREATED TODAY BY ME";
// File paths ///////////////////////////////////////////////////////////////
String filepath_verb01 = "/foo/";
...@@ -5692,7 +5694,7 @@ public class TestCQL {
System.out.println(sfq.toStringTree(parser));
assertTrue(sfq.filter instanceof TransactionFilter);
assertEquals("TRANS(Insert,null,null,Transactor(some%,=))", sfq.filter.toString());
}
/** String ticket242 = "FIND RECORD WHICH HAS been created by some.user"; */
...@@ -5707,7 +5709,7 @@ public class TestCQL {
System.out.println(sfq.toStringTree(parser));
assertEquals("TRANS(Insert,null,null,Transactor(some.user,=))", sfq.filter.toString());
assertTrue(sfq.filter instanceof TransactionFilter);
}
...@@ -5781,7 +5783,7 @@ public class TestCQL {
assertEquals("@(null,null)", n.getFilter().toString());
assertTrue(f2.getLast() instanceof TransactionFilter);
assertEquals("TRANS(Insert,null,null,Transactor(null,=))", f2.getLast().toString());
}
/** String ticket262e = "COUNT FILE WHICH IS NOT REFERENCED AND WAS created by me"; */
@@ -5804,7 +5806,7 @@ public class TestCQL {
     assertEquals("@(null,null)", n.getFilter().toString());
     assertTrue(f2.getLast() instanceof TransactionFilter);
-    assertEquals("TRANS(Insert,null,Transactor(null,=))", f2.getLast().toString());
+    assertEquals("TRANS(Insert,null,null,Transactor(null,=))", f2.getLast().toString());
   }

   /** String ticket262f = "COUNT FILE WHICH IS NOT REFERENCED BY entity AND WAS created by me"; */
@@ -5827,7 +5829,7 @@ public class TestCQL {
     assertEquals("@(entity,null)", n.getFilter().toString());
     assertTrue(f2.getLast() instanceof TransactionFilter);
-    assertEquals("TRANS(Insert,null,Transactor(null,=))", f2.getLast().toString());
+    assertEquals("TRANS(Insert,null,null,Transactor(null,=))", f2.getLast().toString());
   }

   /**
@@ -5853,7 +5855,7 @@ public class TestCQL {
     assertEquals("@(entity,null)", n.getFilter().toString());
     assertTrue(f2.getLast() instanceof TransactionFilter);
-    assertEquals("TRANS(Insert,null,Transactor(null,=))", f2.getLast().toString());
+    assertEquals("TRANS(Insert,null,null,Transactor(null,=))", f2.getLast().toString());
   }

   /** String ticket262h = "COUNT FILE WHICH IS NOT REFERENCED BY entity WHICH WAS created by me"; */
@@ -5876,7 +5878,7 @@ public class TestCQL {
     assertNotNull(((Backreference) backref).getSubProperty());
     assertEquals(
-        "TRANS(Insert,null,Transactor(null,=))",
+        "TRANS(Insert,null,null,Transactor(null,=))",
         ((Backreference) backref).getSubProperty().getFilter().toString());
   }
@@ -5917,7 +5919,7 @@ public class TestCQL {
     assertNotNull(((Backreference) backref).getSubProperty());
     assertEquals(
-        "TRANS(Insert,null,Transactor(null,=))",
+        "TRANS(Insert,null,null,Transactor(null,=))",
         ((Backreference) backref).getSubProperty().getFilter().toString());
   }
@@ -6686,4 +6688,34 @@ public class TestCQL {
     assertEquals("POV(pname,=,with)", sfq.filter.toString());
     assertNull(((POV) sfq.filter).getSubProperty());
   }
+
+  @Test
+  /** String queryIssue132a = "FIND ENTITY WHICH HAS BEEN INSERTED AFTER TODAY"; */
+  public void testIssue132a() {
+    CQLLexer lexer;
+    lexer = new CQLLexer(CharStreams.fromString(this.queryIssue132a));
+    final CommonTokenStream tokens = new CommonTokenStream(lexer);
+    final CQLParser parser = new CQLParser(tokens);
+    final CqContext sfq = parser.cq();
+    System.out.println(sfq.toStringTree(parser));
+    assertEquals("TRANS(Insert,>,Today,null)", sfq.filter.toString());
+  }
+
+  @Test
+  /** String queryIssue132b = "FIND ENTITY WHICH HAS BEEN CREATED TODAY BY ME"; */
+  public void testIssue132b() {
+    CQLLexer lexer;
+    lexer = new CQLLexer(CharStreams.fromString(this.queryIssue132b));
+    final CommonTokenStream tokens = new CommonTokenStream(lexer);
+    final CQLParser parser = new CQLParser(tokens);
+    final CqContext sfq = parser.cq();
+    System.out.println(sfq.toStringTree(parser));
+    assertEquals("TRANS(Insert,(,Today,Transactor(null,=))", sfq.filter.toString());
+  }
 }