

See the research on logs aggregation.

Each worker pod has a sidecar container running Fluentd, a data collector that receives logs from the worker via syslog and forwards them to Splunk.

We use our own fluentd-splunk-hec image, built via a workflow, because we don't want to use the third-party image.
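The actual sidecar configuration is not shown here, but a minimal Fluentd pipeline matching the description above might look like the sketch below. The port, tag, and HEC settings are illustrative assumptions, not the production values:

```
# Sketch of a Fluentd sidecar config: syslog in, Splunk HEC out.
# Port, tag, and HEC values below are assumptions for illustration.
<source>
  @type syslog
  port 5140          # the worker sends its logs here via syslog
  tag worker
</source>

<match worker.**>
  @type splunk_hec   # provided by the fluent-plugin-splunk-hec plugin
  hec_host "#{ENV['SPLUNK_HEC_HOST']}"
  hec_port 443
  hec_token "#{ENV['SPLUNK_HEC_TOKEN']}"
</match>
```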

Where do I find the logs?

First, you have to get access to Splunk (CMDB ID is 'PCKT-002').

Then go to https://rhcorporate.splunkcloud.com → Search & Reporting.

The more specific the search, the faster it will be. At a minimum, specify index, source and msgid. You can start with this search and tune it from there. For example:

  • change msgid=packit-prod to service instance you want to see logs from, e.g. to msgid=packit-stg or msgid=stream-prod
  • add | search message!="pidbox*" to remove the "pidbox received method" message which Celery pollutes the log with
  • add | reverse if you want to see the results from oldest to newest
  • add | fields _time, message | fields - _raw to leave only time and message fields
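Putting the tips above together, a full search might look like this. The index and source values are placeholders; substitute the real ones for your deployment:

```
index=<your_index> source=<your_source> msgid=packit-prod
| search message!="pidbox*"
| fields _time, message | fields - _raw
| reverse
```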

All of this combined in one URL here; now just export the results to CSV and you have almost the same log file as you'd get by exporting logs from a worker pod.

For more info, see (Red Hat internal):


To see the sidecar container logs, select a worker pod → Logs → fluentd-sidecar.

To manually send an event to Splunk, try this (get the host & token from Bitwarden):

$ curl -v "https://${SPLUNK_HEC_HOST}:443/services/collector/event" \
-H "Authorization: Splunk ${SPLUNK_HEC_TOKEN}" \
-d '{"event": "jpopelkastest"}'
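The same request can be made from Python using only the standard library. This is a sketch equivalent to the curl command above; the `build_hec_request` helper is something we made up for illustration, and the host and token again come from Bitwarden:

```python
import json
import os
import urllib.request


def build_hec_request(host: str, token: str, event: str) -> urllib.request.Request:
    """Build the same Splunk HEC event request as the curl command above."""
    return urllib.request.Request(
        url=f"https://{host}:443/services/collector/event",
        data=json.dumps({"event": event}).encode(),
        headers={"Authorization": f"Splunk {token}"},
        method="POST",
    )


if __name__ == "__main__":
    # Host and token come from Bitwarden, same as for the curl example.
    req = build_hec_request(
        os.environ["SPLUNK_HEC_HOST"],
        os.environ["SPLUNK_HEC_TOKEN"],
        "jpopelkastest",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)
```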