F perf test
Summary
Some performance tests.
Test Environment
As for the monitor, run the test as follows, with the testbed configured:
set -a && . ./secrets.sh && python performance-tests/test.py
The test automatically checks that memory usage does not exceed 1 GB and prints output on the run times.
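As a rough illustration of the kind of 1 GB memory check described above, a minimal sketch could use the standard-library resource module. This is a hypothetical reconstruction, not the actual code in performance-tests/test.py; the function name and limit constant are assumptions.

```python
import resource
import sys

# Hypothetical sketch of a peak-memory assertion like the one the
# performance test performs; the real implementation may differ.
MEMORY_LIMIT_BYTES = 1 * 1024 ** 3  # 1 GB

def check_memory_usage(limit_bytes: int = MEMORY_LIMIT_BYTES) -> int:
    """Return peak memory usage in bytes; raise if it exceeds the limit."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in kilobytes on Linux but in bytes on macOS.
    if sys.platform != "darwin":
        peak *= 1024
    if peak > limit_bytes:
        raise MemoryError(f"peak memory {peak} B exceeds limit {limit_bytes} B")
    return peak
```

Calling check_memory_usage() at the end of a run would turn an excessive memory footprint into a hard test failure rather than something to spot manually in the output.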
Check List for the Author
Please prepare your MR for review. Be sure to write a summary and a focus, and create GitLab comments for the reviewer. These should guide the reviewer through the changes, explain them, and point out open questions. For further good practices, have a look at our review guidelines.
- All automated tests pass
- Reference related issues
- Up-to-date CHANGELOG.md (or not necessary)
- Up-to-date JSON schema (or not necessary)
- Appropriate user and developer documentation (or not necessary)
  - How do I use the software? Assume "stupid" users.
  - How do I develop or debug the software? Assume novice developers.
- Annotations in code (GitLab comments)
  - Intent of new code
  - Problems with old code
  - Why this implementation?
Check List for the Reviewer
- I understand the intent of this MR
- All automated tests pass
- Up-to-date CHANGELOG.md (or not necessary)
- Appropriate user and developer documentation (or not necessary)
- The test environment setup works and the intended behavior is reproducible in the test environment
- In-code documentation and comments are up-to-date
- Check: Are there specifications? Are they satisfied?
For further good practices have a look at our review guidelines.
Merge request reports
Activity
assigned to @henrik
requested review from @salexan
added 15 commits
- 42a4f27e...82c62a95 - 14 commits from branch dev
- 94c56e49 - Merge branch 'dev' into f-perf-test
- Resolved by Henrik tom Wörden
@henrik How am I supposed to read the output? When running the tests, the terminal is flooded with debugging output, so it's not possible to read the actual performance test output (except for the last line). Of course I could implement something with grep myself, but can't we add a convenient way to interpret the output of the tests?
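One common way to address a problem like the one raised above is to route debug chatter to stderr and the measurement results to stdout, so the two streams can be separated in the shell. The sketch below is only a suggestion under that assumption; the logger name and report function are hypothetical, not part of the actual test script.

```python
import logging
import sys

# Hypothetical sketch: debug messages go to stderr, performance results
# to stdout, so `python performance-tests/test.py 2>/dev/null` would
# show only the results.
logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
log = logging.getLogger("perf")

def report(name: str, seconds: float) -> None:
    log.debug("finished measurement %s", name)  # written to stderr
    print(f"{name}: {seconds:.3f} s")           # written to stdout

report("insert_1000_records", 1.234)
```

With this separation, redirecting stderr (2>/dev/null) or piping stdout into a file keeps the performance figures readable without any grep post-processing.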
mentioned in commit d0e5c660