Commit 78b6a354 authored by Henrik tom Wörden

DOC: document logging and email

parent 6b569784
2 merge requests: !123 REL: Release v0.6.0, !107 ENH: setup logging and reporting for serverside execution
Pipeline #35041 passed
@@ -8,6 +8,13 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased] ##
### Added ###
- Standard logging for server-side execution
- Email notifications if the `pycaosdb.ini` contains a `[caoscrawler]` section
  with `send_crawler_notifications=True`.
- Creation of CrawlerRun Records that contain status information about the
  crawler's data integration if the `pycaosdb.ini` contains a `[caoscrawler]`
  section with `create_crawler_status_records=True`.
### Changed ###
@@ -9,6 +9,7 @@ Getting Started
Installation<INSTALL>
prerequisites
helloworld
optionalfeatures
This section will help you get going! From the first installation steps to the first simple crawl.
Optional Features
=================
Email notifications
-------------------
The crawler can send email notifications if it changed or inserted data.
This is (currently) only available if the crawler runs as a server-side
script of CaosDB. You need to add the following section to your
``.pycaosdb.ini``:
.. code:: ini

   [caoscrawler]
   send_crawler_notifications=True
   public_host_url=https://example.eu
   sendmail_to_address=someone@example.de
   sendmail_from_address=caosdb-no-reply@example.eu
This feature uses the ``sendmail`` functionality of
``caosadvancedtools``. Thus, it uses the setting
.. code:: ini

   [Misc]
   sendmail = /usr/sbin/sendmail
   #sendmail = /usr/local/bin/sendmail_to_file
to decide which tool is used for sending mails (use the first, uncommented
line if you actually want to send mails). See the ``sendmail`` configuration
in the LinkAhead docs.
Crawler Status Records
----------------------
The crawler can insert and update Records that contain essential
information about the data integration process. This is (currently) only
available if the crawler runs as a server-side script of CaosDB. To enable
this, add the following to your ``.pycaosdb.ini``:
.. code:: ini

   [caoscrawler]
   create_crawler_status_records=True
You also need to add the data model required for this, as described by
``crawler_run_model.yml``.
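
The required RecordTypes can, for example, be created with the YAML data
model parser from ``caosadvancedtools``. The following is a minimal sketch
under the assumption that ``parse_model_from_yaml`` and ``sync_data_model``
are available in your installation; check the ``caosadvancedtools``
documentation for the exact API:

.. code:: python

   # Sketch: create or update the data model described by
   # crawler_run_model.yml on the server, assuming caosadvancedtools'
   # YAML model parser.
   from caosadvancedtools.models.parser import parse_model_from_yaml

   model = parse_model_from_yaml("crawler_run_model.yml")
   model.sync_data_model()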