This document provides answers to frequently asked questions about deploying the Outpost24 Data Sovereign (DS) Agent.

DSAgent Deployment Requirements

Understanding the Requirements from a DSAgent Deployment Perspective

From a high-level standpoint, a DSAgent deployment looks as follows.

In addition to the intercommunication between the HIAB Scheduler and the DSAgent server, both must be able to communicate with:

  • The Object Storage deployed and used by the customer
  • OUTSCAN platform hosted by OUTPOST24

The installed Agents must be able to communicate with the DSAgent server, meaning that this server needs to be reachable from the internet if any Agent is deployed outside the customer premises.

DSAgent Installation Requirements

Do not forget that the DSAgent server also requires access to the Outpost24 GitHub repository to retrieve the Ansible scripts for installation and updates. We also highly recommend registering and saving your customizations in your own Git repository, meaning the DSAgent server needs to be able to access that repository as well.

Understanding the Steps for Deploying DSAgent

The DSAgent server deployment can be summarized in the following main steps:

  1. Deploy a CentOS 7 based server that will be used for installing the DSAgent software (please check all details in the DSAgent deployment documentation).
  2. Retrieve the Ansible scripts for deploying the DSAgent server from the Outpost24 public GitHub repository onto the DSAgent server.
  3. Update the Ansible variables according to your setup (IP address, password, etc.) and save your settings by adding the project to your own Git repository (using ansible-vault as well) or any other method of your choice.
  4. Run the Ansible scripts on the DSAgent server.
  5. Configure the HIAB Scheduler and reboot it.

Backup and Disaster Recovery

Properly back up the configuration of the Ansible scripts and the newly created variables (you can use Git), as without those specific settings it will be difficult or even impossible to recover from a disaster on the DSAgent server.
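
As a hedged illustration (paths and file names below are invented for the demo), the snippet snapshots an Ansible project directory into a dated tarball; in practice you would point it at your real hiab-ansible checkout or, preferably, commit the project to your own Git repository:

```shell
# Create a throwaway project layout standing in for the real hiab-ansible checkout.
mkdir -p /tmp/hiab-ansible/inventory/group_vars
echo 'example_variable: value' > /tmp/hiab-ansible/inventory/group_vars/demo.yml

# Archive the whole project, including inventory variables, into a dated tarball.
backup="/tmp/hiab-ansible-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$backup" -C /tmp hiab-ansible

# Verify the variables file made it into the archive.
tar -tzf "$backup" | grep -q 'inventory/group_vars/demo.yml' && echo "backup ok"
```

Store the resulting archive (or the Git remote) off the DSAgent server itself, otherwise the backup is lost together with the server.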

Do I Need a Specific Version of Ansible?

Yes. For example, the Ansible playbook cannot be run with the Ansible 2.7.7 version shipped with Debian 10 (buster). In that case you will get the following error:

ansible-playbook --ask-vault-pass -CKD hiab-ansible/playbooks/hiab.yml
SUDO password:
Vault password:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.

The error appears to have been in 'hiab-ansible/roles/teddy_salad/tasks/main.yml': line 74, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

    - name: Check if teddy-salad database is initialized
      ^ here

We recommend using the latest available Ansible version and installing it according to the Ansible documentation.
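
To check whether the installed Ansible is newer than the failing 2.7.7 release above, a small sort -V comparison can be used (the helper name is ours; the check is skipped when Ansible is not installed):

```shell
# version_ge A B: succeeds when version A >= version B (GNU sort -V ordering).
version_ge() { printf '%s\n%s\n' "$2" "$1" | sort -V -C; }

version_ge "2.9.27" "2.7.7" && echo "2.9.27 is new enough"

if command -v ansible > /dev/null; then
  installed=$(ansible --version | head -1 | grep -oE '[0-9]+(\.[0-9]+)+' | head -1)
  version_ge "$installed" "2.7.7" && echo "installed ansible ok" || echo "upgrade ansible"
fi
```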

Do I Need a Proper DNS Configuration or Can I Rely on IPs?

It is strongly recommended to use a proper DNS configuration with FQDNs, but you can also rely on IPv4 addresses and adjust the settings to reflect your configuration.

Agent Resource Usage


Note that it cannot be said exactly how many resources an agent will consume until it runs on the target. Usage depends mainly on how much there is to scan on the target and on the number of targets.


When completely idle, the agent consumes a very small amount of clock cycles since it is only waiting to be woken up by a timer when it should attempt a call-home or do a scan.

When not waiting, it does one of two things:

  • Run a scan
  • Perform a Call-home

Calling home may consume a fair amount of CPU, related to encrypting and sending traffic to the agent server.

Running a scan is very likely to consume 100% of a single core for a while. The duration depends on how much the scan finds. Our tools perform some enumeration tasks, and the more data is present, the more data needs to be processed, which increases both the duration and the number of clock cycles required. Currently this process is limited to a single core because of the utilities we are using.

The entire agent is configured to have low CPU priority, and unless other programs are configured to use low priority as well, the agent will not compete for clock cycles with other programs.
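
To verify the low priority on a host, you can inspect the niceness of the agent process (the o24-agent process name is taken from the systemd unit mentioned later in this document; the helper itself is our own, and the current shell is used below only so the example runs anywhere):

```shell
# Print the nice value of a process by PID (ps padding stripped).
niceness_of() { ps -o ni= -p "$1" | tr -d ' '; }

niceness_of $$   # niceness of the current shell, typically 0
# On an agent host: pgrep -x o24-agent | head -1 | xargs -r ps -o pid,ni,comm -p
```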

RAM & Disk

This depends heavily on the amount of data the agent finds when running a scan. Each unit of processing being done (this varies by task; some stream, some do not) must be stored in memory. Generally this is relatively little; for example, each registry key being extracted and analyzed is flushed to disk before the next is fetched, thus reducing RAM usage, but sometimes we need to read larger amounts of data into memory. The agent should not spike significantly in memory usage.

Similarly, the data extracted during scanning is stored on disk until the next call-home. The amount of data stored follows the same rules as RAM usage. However, only the latest scan data is stored, so data does not accumulate over time even if a call-home is missed.


Network

Network usage depends on the amount of data that needs to be sent during a call-home (roughly the size of the scan result) and how often there is data to send (depending on the configured schedules). Do keep in mind that if an agent is part of multiple schedules, it will scan once for each schedule and then upload the scan result independently for each of those schedules as well.

Configuring MinIO as Object Storage and not AWS S3

When using MinIO as Object Storage, we strongly recommend setting up HTTPS with a valid certificate on the MinIO server, using the latest TLS version with strong cipher suites, and defaulting to the standard port (443). That being said, in order to configure MinIO, the Ansible scripts can be adjusted during the configuration phase by changing the following parameters:

File: inventory/group_vars/teddy_salad/teddy-salad.yml

    endpoint: https://<MINIO_FQDN_OR_IP>:<MINIO_PORT>
    region: local
    certificate_authority: /<FULL_PATH_TO_MINIO_CA_FILE>

Once done with this configuration, you can run the Ansible deployment script.

Note: MinIO without HTTPS (NOT recommended)

If you are using MinIO without any certificate, you need to update the configuration by replacing https with http and removing the certificate_authority directive, as follows:

File: inventory/group_vars/teddy_salad/teddy-salad.yml

    endpoint: http://<MINIO_FQDN_OR_IP>:<MINIO_PORT>
    region: local

Note: Self-signed CA and Certificate support for on-premises object storage

Support for a self-signed CA or certificate for on-premises object storage was introduced in DSAgent version 1.10.0-1. The installed version can be checked using the following command line:

rpm -qa | grep teddy-salad

DSAgent Deployment Common Issues

Unable to Run the Ansible Playbook with a JSON Error in the Vault

If you encounter a JSON decoding error pointing at the vault file while running the Ansible playbook, double check that you have properly configured the vault file. One common mistake is not quoting a password that contains special characters. To avoid the problem, single- or double-quote all passwords.

Do not rely solely on the ansible-vault view command, which may display your vault parameters correctly even when the file is malformed.

Here is the error you may see while running the ansible-playbook command, even when the ansible-vault view command displays your parameters correctly.
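
As a minimal illustration (the file name and variable below are invented), an unquoted password containing a colon followed by a space changes the YAML structure, while quoting keeps it a single string:

```shell
# Write a vault-style variables file with the password safely quoted.
cat > /tmp/vault_vars_demo.yml <<'EOF'
# BAD (parsed as a nested mapping, breaks later decoding): db_password: s3cret: extra
# GOOD:
db_password: "s3cret: extra@chars#here"
EOF

grep -q '"s3cret: extra@chars#here"' /tmp/vault_vars_demo.yml && echo "quoted ok"
```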

ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: No JSON object could be decoded

Syntax Error while loading YAML.
  did not find expected node content

The error appears to be in 'vault_vars/hiab.yml': line 5, column 25, but may
be elsewhere in the file depending on the exact syntax problem.

Ansible Playbook Failed on "Grab certificates from hiabs" TASK

If you encounter a fatal issue while running the Ansible script, failing on "TASK [Grab certificates from hiabs]" with an error getting the certificate, this probably means you made a mistake when setting the HIAB Scheduler in the inventory/hosts.yml file. Edit the inventory/hosts.yml configuration file and adjust the scheduler variable to reflect your HIAB IP address or DNS name.


Once done adjusting your configuration file, re-run the Ansible script.

Ansible Playbook Failed on "Enroll libellum csr Using Outscan Account" TASK

If you encounter a fatal issue while running the Ansible script, failing on "TASK [libellum_client : Enroll libellum csr using outscan account]" with an error about not being logged in, this probably means you have MFA enabled on your account. Please consider temporarily disabling MFA on your account while installing DSAgent. Once the installation has completed successfully, you can re-enable MFA on your account.

Ansible Playbook Failed with [Errno 14] curl#6 - "Could not resolve host" Error While Installing Package

If you encounter an issue while running the Ansible script, failing on "TASK [clavem_server : Install Clavem and pre-reqs]" with an error about not being able to resolve the Outpost24 repository, this probably means your local DNS server is not properly configured.
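
A quick way to confirm whether the DSAgent server can resolve a host through the system resolver (the helper name below is ours; substitute the repository hostname from the failing error message):

```shell
# Report whether a name resolves via the system resolver (getent consults
# /etc/resolv.conf and /etc/nsswitch.conf, just as yum/curl would).
resolves() {
  if getent hosts "$1" > /dev/null; then
    echo "resolves: $1"
  else
    echo "cannot resolve: $1"
  fi
}

resolves localhost
# resolves <OUTPOST24_REPO_HOST>   # use the hostname from the curl#6 error
```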

Consider adding an external well-known DNS server, such as Google Public DNS (8.8.8.8), in your "/etc/resolv.conf" file by adding the following directive, for instance:

File: /etc/resolv.conf

    nameserver 8.8.8.8
Ansible Playbook Failed on "teddy_salad : Initialize teddy-salad database" TASK

If you encounter an issue while running the Ansible script, failing on "TASK [teddy_salad : Initialize teddy-salad database]" with a fatal error "first : file does not exist", this probably means that the teddy-salad configuration is erroneous and cannot find the database migration files.

In order to fix this, you can manually edit the Ansible scripts as follows:

  • Adjust the hiab-ansible/roles/teddy_salad/tasks/main.yml file to add two new argv entries to the Initialize teddy-salad database task, so that it looks as follows:

File: hiab-ansible/roles/teddy_salad/tasks/main.yml

- name: Initialize teddy-salad database
  when: teddy_salad.install is defined and teddy_salad.install and teddy_salad_db_initialized.rowcount is defined and teddy_salad_db_initialized.rowcount == 0
  run_once: true
  command:
    argv:
      - /usr/bin/teddy-salad
      - create-and-update-db
      - --stderr-log
      - --migrations
      - /usr/share/teddy-salad/migrations
  • Adjust the hiab-ansible/roles/teddy_salad/templates/etc/teddy-salad.conf.d/20-config.yml.j2 file to add one new directive to the configuration, so that it looks as follows:

File: hiab-ansible/roles/teddy_salad/templates/etc/teddy-salad.conf.d/20-config.yml.j2

# set the database to be used
  MigrationsDir: /usr/share/teddy-salad/migrations
  # max number of connections. the code works with as little as one but can be increased for performance
  maxopenconns: {{ teddy_salad.db.conns | default(10) }}

Then just re-run the Ansible script and the installation should succeed.

Unable to Get Findings When Scanning with Agents, with teddy-salad Logging Errors such as "Unable to upload file" or "Unable to open module results"

If you are using an on-premises object storage such as MinIO with a self-signed CA or certificate, you may have forgotten to specify the proper self-signed CA or certificate for the object storage during the deployment of your DSAgent server. It can also be that the CA or certificate has been renewed, or needs to be renewed because it has expired.
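
To check whether the CA or certificate used by the object storage has expired, openssl can inspect the file directly. The demo below generates a short-lived self-signed certificate so the example runs anywhere; point -in at your real CA or certificate file instead:

```shell
# Generate a throwaway self-signed certificate (stand-in for the MinIO CA).
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=minio-demo" \
        -keyout /tmp/demo-key.pem -out /tmp/demo-ca.pem 2> /dev/null

# Show the expiry date and test whether the certificate is already expired.
openssl x509 -in /tmp/demo-ca.pem -noout -enddate
openssl x509 -in /tmp/demo-ca.pem -noout -checkend 0 > /dev/null \
  && echo "certificate still valid" || echo "certificate expired"
```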

In your object storage server logs, you should see log entries similar to the following:

Object Storage logs

Jun 17 13:56:25 debian10-object-storage minio[2054]: http: TLS handshake error from XXX.XXX.XXX.XXX:45676: remote error: tls: bad certificate

In this case, you can fix your DSAgent server by manually editing the teddy-salad configuration file "/etc/teddy-salad.conf.d/20-config.yml" and adding the "rootCa" directive, or adjusting its path to point at the valid self-signed certificate or CA, as follows:

File: /etc/teddy-salad.conf.d/20-config.yml

    # store url
    endpoint: https://<OBJECT_STORAGE_FQDN_OR_IP>:<OBJECT_STORAGE_PORT>
    region: local
    rootCa: /<FULL_PATH_TO_SELF_SIGNED_CA_FILE>
    secretAccessKey: <OBJECT_STORAGE_PASSWORD>

DSAgent log error outputting "Unable to stat module settings"

If you see "Unable to stat module settings" in your journalctl logs on the DSAgent server, this indicates an issue with the date on the DSAgent server or the HIAB Scheduler linked to it.

You can run the following command to get more detailed logs that you can put in a file for further investigation:

sudo journalctl -u teddy-salad -o json | tee -a debug_teddy-salad.json

When examining the extracted logs, the following error should be displayed:

ERROR: Unable to stat module settings

{
  "__CURSOR" : ...,
  "__REALTIME_TIMESTAMP" : "1624257931743579",
  "__MONOTONIC_TIMESTAMP" : "312607562850",
  "PRIORITY" : "3", "MESSAGE" : "Unable to stat module settings",
  "ERROR" : "RequestTimeTooSkewed: The difference between the request time and the server's time is too large.\n\u0009status code: 403, request id: , host id: ",
  "_COMM" : "teddy-salad", "_EXE" : "/usr/bin/teddy-salad", "_CMDLINE" : "/usr/bin/teddy-salad",
  "_SOURCE_REALTIME_TIMESTAMP" : "1624257931742623"
}

In this case, ensure that the timezone and time on both the DSAgent server and the HIAB Scheduler are properly set up. A good and recommended approach is to enable time synchronization with an NTP server.
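
The RequestTimeTooSkewed error above is raised when the request timestamp and the server clock differ by more than a fixed window (15 minutes for AWS S3; MinIO applies a similar limit). A rough way to reason about the skew between two epoch timestamps (the helper is our own):

```shell
# Absolute difference, in seconds, between two Unix epoch timestamps.
skew_seconds() {
  if [ "$1" -ge "$2" ]; then echo $(($1 - $2)); else echo $(($2 - $1)); fi
}

skew_seconds 1624257931 1624257871   # prints 60
# Compare `date +%s` on the DSAgent server and the HIAB Scheduler; enabling
# NTP (e.g. `timedatectl set-ntp true`) keeps both clocks aligned.
```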

DSAgent Deployment Checks

How to know the DSAgent Server and the HIAB Scheduler are Communicating Properly

The HIAB Scheduler should be able to reach the DSAgent server on API port 5046 (the default port, which can be changed in the configuration). Connectivity between the two can be checked by issuing the following command on the HIAB Scheduler:

curl -v -k -i -I --tlsv1.2 https://DSAGENT_SERVER_IP_OR_FQDN:5046/rest/openapi.json
*   Trying XXX.XXX.XXX.XXX...
* Connected to DSAGENT_SERVER_IP_OR_FQDN (XXX.XXX.XXX.XXX) port 5046 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* ALPN/NPN, server did not agree to a protocol
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
*       subject: CN=317af421-3c35-4518-811b-c842fb3ce367,OU=317af421-3c35-4518-811b-c842fb3ce367,O=49c0d073-2801-4a47-864b-95c5c5c67d21
*       start date: Oct 26 10:17:51 2020 GMT
*       expire date: Apr 04 13:51:08 2021 GMT
*       common name: 317af421-3c35-4518-811b-c842fb3ce367
*       issuer: CN=49c0d073-2801-4a47-864b-95c5c5c67d21,OU=49c0d073-2801-4a47-864b-95c5c5c67d21
> HEAD /rest/openapi.json HTTP/1.1
> User-Agent: curl/7.53.1
> Accept: */*
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Accept-Ranges: bytes
Accept-Ranges: bytes
< Content-Length: 35513
Content-Length: 35513
< Content-Type: text/plain; charset=utf-8
Content-Type: text/plain; charset=utf-8
< Last-Modified: Tue, 20 Oct 2020 09:22:39 GMT
Last-Modified: Tue, 20 Oct 2020 09:22:39 GMT
< Date: Mon, 26 Oct 2020 16:38:55 GMT
Date: Mon, 26 Oct 2020 16:38:55 GMT

* Connection #0 to host DSAGENT_SERVER_IP_OR_FQDN left intact

How to know the DSAgent Server and the Object Storage are Communicating Properly

The DSAgent server should be able to communicate with the Object Storage. Connectivity between the two can be checked by issuing the following command on the DSAgent server:

curl -v -k -i -I https://OBJECT_STORAGE_FQDN:8443
* About to connect() to OBJECT_STORAGE_FQDN port 8443 (#0)
*   Trying XXX.XXX.XXX.XXX...
* Connected to OBJECT_STORAGE_FQDN (XXX.XXX.XXX.XXX) port 8443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
*       subject: CN=MY_OBJECT_STORAGE_FQDN
*       start date: Sep 08 00:00:00 2020 GMT
*       expire date: Oct 08 12:00:00 2021 GMT
*       common name: OBJECT_STORAGE_FQDN
*       issuer: CN=GeoTrust TLS DV RSA Mixed SHA256 2020 CA-1,O=DigiCert Inc,C=US
> HEAD / HTTP/1.1
> User-Agent: curl/7.29.0
> Accept: */*
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< x-amz-request-id: tx000000000000000b0af5e-005f96fd3f-5c6c1e-default
x-amz-request-id: tx000000000000000b0af5e-005f96fd3f-5c6c1e-default
< Content-Type: application/xml
Content-Type: application/xml
< Content-Length: 0
Content-Length: 0
< Date: Mon, 26 Oct 2020 16:45:51 GMT
Date: Mon, 26 Oct 2020 16:45:51 GMT

* Connection #0 to host OBJECT_STORAGE_FQDN left intact

The connectivity between DSAgent server and Object Storage can also be tested using command line tools such as s3cmd.

In order to do so for MinIO, you can create a configuration file with the following content:

File: .s3cfg

access_key = <USERNAME>
secret_key = <PASSWORD>
host_base = https://<MINIO_FQDN_OR_IP>:<MINIO_PORT>
host_bucket = %(bucket)s.<MINIO_FQDN_OR_IP>
check_ssl_certificate = False

And then run the following command:

s3cmd -c .s3cfg ls

After configuring the MinIO object storage in the HIAB Scheduler, you should be able to list a bucket with the following naming convention: s3://tenant-<UUID>


If you did not configure TLS on MinIO, replace https with http in the configuration file and use the --no-ssl option on the s3cmd command line.
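
To pick the tenant bucket out of the listing, the naming convention can be matched with a UUID pattern (sample input shown below, using a UUID borrowed from this document's certificate example; pipe the real s3cmd ls output instead):

```shell
# Keep only buckets following the s3://tenant-<UUID> convention.
printf 's3://tenant-49c0d073-2801-4a47-864b-95c5c5c67d21\ns3://unrelated-bucket\n' \
  | grep -E 's3://tenant-[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}'
```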

Agent Deployment

Where to find the Agent packages

The Agent packages can be downloaded from the HIAB Scheduler user interface linked to your DSAgent: go to Main Menu > Support and open the Agent Installers tab.

Simply select the Platform, Architecture and Package options, then click the Download button.

Can I Use the Same Agent Packages as the Ones on OUTSCAN?

It is not possible to reuse the Outpost24 Agent packages used for Agents enrolled on the OUTSCAN platform, as the packages contain configuration files that refer to the enrollment platform. For example, the DSAgent needs to report to your HIAB Scheduler and not to the OUTSCAN platform.

Troubleshooting Agent Deployment

The Agent is Stuck and Not Communicating with the HIAB Scheduler

The Agent packages must contain a valid configuration file for the Agent to be able to communicate properly with the DSAgent server. In some circumstances, the Agent configuration file may be incomplete and missing the recipients certificate list. In this case, the Agent is blocked trying to retrieve this list.

To resolve this issue, stop the Agent using the following command:

sudo systemctl stop o24-agent

Then update the agent configuration file /etc/o24-agent/agent-settings.yml by duplicating the rootCas block into the recipients block.
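
As a hedged sketch of the result (a demo path is used so the example is runnable, and the certificate bodies are elided), the recipients list simply mirrors the rootCas list:

```shell
# Write a demo settings file showing the duplicated block; on a real host you
# would edit /etc/o24-agent/agent-settings.yml in place instead.
cat > /tmp/agent-settings-demo.yml <<'EOF'
rootCas:
  - |
    -----BEGIN CERTIFICATE-----
    ...certificate body...
    -----END CERTIFICATE-----
recipients:
  - |
    -----BEGIN CERTIFICATE-----
    ...certificate body...
    -----END CERTIFICATE-----
EOF

grep -c 'BEGIN CERTIFICATE' /tmp/agent-settings-demo.yml   # prints 2
```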

Once the configuration is updated, simply restart the Outpost24 Agent with the following command:

sudo systemctl start o24-agent