
Resource requirements

A usual Packit Service deployment consists of the following services with these resource requirements.

CPU requirements

| Deployment       | Requested (always assigned) | Limit |
| ---------------- | --------------------------- | ----- |
| postgres         | 30m                         | 1     |
| redict           | 10m                         | 10m   |
| flower           | 5m                          | 50m   |
| nginx            | 5m                          | 10m   |
| pushgateway      | 5m                          | 10m   |
| tokman           | 20m (prod, 5m otherwise)    | 50m   |
| dashboard        | 5m                          | 50m   |
| fedmsg           | 5m                          | 50m   |
| beat             | 5m                          | 50m   |
| service          | 10m                         | 200m  |
| worker (generic) | 100m                        | 400m  |
| worker (short)   | 80m                         | 400m  |
| worker (long)    | 100m                        | 600m  |

Memory requirements

| Deployment       | Requested (always assigned)  | Limit                          |
| ---------------- | ---------------------------- | ------------------------------ |
| postgres         | 1Gi (prod, 256Mi otherwise)  | 1536Mi (prod, 512Mi otherwise) |
| redict           | 128Mi                        | 256Mi                          |
| flower           | 80Mi                         | 128Mi                          |
| nginx            | 8Mi                          | 32Mi                           |
| pushgateway      | 16Mi                         | 32Mi                           |
| tokman           | 100Mi (prod, 88Mi otherwise) | 160Mi (prod, 128Mi otherwise)  |
| dashboard        | 128Mi                        | 256Mi                          |
| fedmsg           | 88Mi                         | 128Mi                          |
| beat             | 160Mi                        | 256Mi                          |
| service          | 320Mi                        | 1Gi (prod, 512Mi otherwise)    |
| worker (generic) | 384Mi                        | 1024Mi                         |
| worker (short)   | 320Mi                        | 640Mi                          |
| worker (long)    | 384Mi                        | 1024Mi                         |
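
To illustrate what the "Requested (always assigned)" and "Limit" columns mean in practice, here is a minimal sketch of how one row (worker (short), production values) maps onto a container's `resources` stanza. Using the Kubernetes Python client here is an assumption for illustration only; this is not an excerpt from the actual deployment templates.

```python
# Illustration only: how the "worker (short)" row above translates into a
# container resources spec. The values come from the tables; the client usage
# is an assumption, not an excerpt from the deployment templates.
from kubernetes.client import V1ResourceRequirements

short_worker_resources = V1ResourceRequirements(
    # "Requested (always assigned)" column: guaranteed to the pod
    requests={"cpu": "80m", "memory": "320Mi"},
    # "Limit" column: hard cap enforced at runtime
    limits={"cpu": "400m", "memory": "640Mi"},
)

print(short_worker_resources)
```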

Currently allowed requirements / limits

| Resource | Allowed to request | Limit |
| -------- | ------------------ | ----- |
| CPU      | 3                  | 12    |
| Memory   | 6Gi                | 8Gi   |

Total for production

| Deployment      | Memory request | Memory limit | CPU request | CPU limit |
| --------------- | -------------- | ------------ | ----------- | --------- |
| non-scalable¹   | 2052Mi         | 3808Mi       | 100m        | 1480m     |
| 2× short worker | 640Mi          | 1280Mi       | 160m        | 800m      |
| 2× long worker  | 768Mi          | 2048Mi       | 200m        | 1200m     |
| Σ               | 3460Mi         | 7136Mi       | 460m        | 3480m     |
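
The Σ row can be cross-checked against the per-deployment tables and the current quotas. A minimal sketch of the arithmetic (all values are copied from the tables above; Gi is converted to Mi and CPU cores to millicores):

```python
# Cross-check of the "Total for production" table against the current quotas.
# Per deployment: (memory request Mi, memory limit Mi, CPU request m, CPU limit m),
# production values copied from the tables above.
NON_SCALABLE = {  # one pod each
    "postgres":    (1024, 1536, 30, 1000),
    "redict":      (128, 256, 10, 10),
    "flower":      (80, 128, 5, 50),
    "nginx":       (8, 32, 5, 10),
    "pushgateway": (16, 32, 5, 10),
    "tokman":      (100, 160, 20, 50),
    "dashboard":   (128, 256, 5, 50),
    "fedmsg":      (88, 128, 5, 50),
    "beat":        (160, 256, 5, 50),
    "service":     (320, 1024, 10, 200),
}
WORKERS = {  # two replicas each
    "worker (short)": (320, 640, 80, 400),
    "worker (long)":  (384, 1024, 100, 600),
}

totals = [0, 0, 0, 0]
for row in NON_SCALABLE.values():
    totals = [t + v for t, v in zip(totals, row)]
for row in WORKERS.values():
    totals = [t + 2 * v for t, v in zip(totals, row)]

mem_req, mem_lim, cpu_req, cpu_lim = totals
print(f"memory: {mem_req}Mi requested / {mem_lim}Mi limit")  # 3460Mi / 7136Mi
print(f"cpu:    {cpu_req}m requested / {cpu_lim}m limit")    # 460m / 3480m

# Current quotas: 3 CPU / 6Gi allowed to request, 12 CPU / 8Gi limit.
print("fits request quota:", cpu_req <= 3000 and mem_req <= 6 * 1024)   # True
print("fits limit quota:  ", cpu_lim <= 12000 and mem_lim <= 8 * 1024)  # True
```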

Proposed changes

  1. Revert to the pre-MP+ resources (they were higher for service, workers, and postgres; lower values were used due to a hardcoded check in the templates).

    Pre-MP+ memory requirements/limits for production deployment:

    | Deployment | Requested | Limit |
    | ---------- | --------- | ----- |
    | postgres   | 2Gi       | 4Gi   |
    | service    | 320Mi     | 4Gi   |

    With the current setup (2× short-running and 2× long-running workers), we would need:

    | Resource | Request | Limit   |
    | -------- | ------- | ------- |
    | CPU      | 460m    | 3480m   |
    | Memory   | 4484Mi  | 12768Mi |

    Requesting the memory quotas to be multiplied by 3 leaves ~11Gi of memory limit headroom, which should be enough to scale up a few more workers if needed. This setup would also allow scaling up to 8 workers per queue (see the back-of-the-envelope check after this list).

  2. Request adjustments of the quotas such that we have some buffer (database migrations, higher load on the service, etc.), but can also permanently scale up the workers if we find the service to be more reliable that way.

    • Based on the calculations above, 2× the current memory quotas would be sufficient, but if we were to scale the workers up too (and account for possible adjustments, e.g., Redict), we should probably go for 3×.
  3. Migrate tokman to a different toolchain; it's a small self-contained app, so it would be easy to rewrite in either Rust or Go, which should leave a smaller footprint.

    • Opened an issue for testing out running without the Tokman deployment: https://github.com/packit/tokman/issues/72

    • Issue for migrating, in case we still need tokman: to be opened if the previous issue “fails” (i.e., tokman is still needed, or dropping it negatively affects the amount of requests made to GitHub)
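
As referenced in item 1, here is a rough back-of-the-envelope check of the 3× memory quota proposal. It assumes the pre-MP+ memory values for postgres (2Gi/4Gi) and service (320Mi/4Gi), keeps everything else as in the tables above, and treats "8 workers per queue" as 8 short-running plus 8 long-running workers:

```python
# Back-of-the-envelope check for the 3x memory quota proposal (item 1).
# Assumptions: pre-MP+ postgres (2Gi/4Gi) and service (320Mi/4Gi) memory values,
# all other deployments as in the tables above; "queue" = short/long worker queues.
GI = 1024  # Mi per Gi

# Non-scalable memory totals with the pre-MP+ values swapped in:
# requests: postgres 1Gi -> 2Gi; limits: postgres 1536Mi -> 4Gi, service 1Gi -> 4Gi.
non_scalable_req = 2052 - 1 * GI + 2 * GI                  # 3076Mi
non_scalable_lim = 3808 - 1536 + 4 * GI - 1 * GI + 4 * GI  # 9440Mi

SHORT = (320, 640)   # memory request/limit per short-running worker pod
LONG = (384, 1024)   # memory request/limit per long-running worker pod

def memory_totals(short: int, long: int) -> tuple[int, int]:
    """Total memory request/limit in Mi for the given worker counts."""
    req = non_scalable_req + short * SHORT[0] + long * LONG[0]
    lim = non_scalable_lim + short * SHORT[1] + long * LONG[1]
    return req, lim

# Current setup (2 short + 2 long workers) matches the table in item 1.
req, lim = memory_totals(2, 2)
print(req, lim)  # 4484 12768

# 3x the current memory quotas: 18Gi allowed to request, 24Gi limit.
print(f"limit headroom: {(24 * GI - lim) / GI:.1f}Gi")  # ~11.5Gi

# Scaling up to 8 workers per queue still fits within the 3x quotas.
req8, lim8 = memory_totals(8, 8)
print(req8 <= 18 * GI, lim8 <= 24 * GI)  # True True
```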


¹ Includes the non-scalable deployments, i.e., each runs just one pod (e.g., dashboard, redict, postgres).