Can I use the packit service as soon as I install it into my repository? #
Thanks for your interest in Packit Service! To start using the service, you need to be whitelisted, which is done by us. Once we put you on the whitelist, we’ll get in touch with you. We are currently on-boarding Fedora contributors (those with a Fedora Account System account).
Can I use packit service for any GitHub repository? #
Since Packit Service builds your pull requests in the Fedora COPR build service, your software needs to comply with the COPR rules. If any of these rules are violated, we’ll remove the builds and may put you on a blacklist, so you won’t be able to use the service again.
How can I contact you? #
Why do I have to maintain .packit.yaml and a spec file upstream? #
We are working on simplifying the .packit.yaml so it’s as small as possible. We will also handle all potentially backward incompatible changes of the configuration file. The spec file can be downloaded from Fedora Pagure (see the specific question below) instead of being included in the upstream repository.
But what are the benefits? #
Packit makes it trivial to run your project as part of an OS. It provides feedback to your project at the time when the changes are being developed so you can fix incompatible code when you are working on it, not when it’s already released. When you push commits to a pull request, you’ll get RPM build and test results right away.
Why Fedora? #
We’ve started with Fedora Linux because we work for Red Hat and we ❤ Fedora.
How is Packit different from other services? #
Can we use Packit with GitLab? #
Since GitLab doesn’t have an app mechanism for enabling integrations, you need to configure the webhook manually: go to Settings → Webhooks → Add Webhook and enter https://prod.packit.dev/api/webhooks/gitlab as the URL.
Please bear in mind that not many people use Packit Service via GitLab, so if anything doesn’t work as expected, please reach out to us.
How can I download RPM spec file if it is not part of upstream repository? #
If you do not want to keep the RPM spec file in your upstream repository, you can download it instead: add an actions section to your .packit.yaml configuration file and download the spec file in a hook. Packit Service allows only a limited set of commands in the sandbox, so please use one of the allowed commands, such as wget. The configuration file that downloads the RPM spec file then looks like this:
```yaml
specfile_path: packit.spec
synced_files:
  - packit.spec
  - .packit.yaml
upstream_package_name: packitos
downstream_package_name: packit
actions:
  post-upstream-clone: "wget https://src.fedoraproject.org/rpms/packit/raw/master/f/packit.spec -O packit.spec"
```
I have a template of a spec file in my repo: can packit work with it? #
The solution is, again, actions and hooks. Just render the spec after the upstream repo is cloned:
```yaml
specfile_path: my-project.spec
upstream_package_name: my-project-src
downstream_package_name: my-project
actions:
  post-upstream-clone: "make generate-spec"
```
Where the “generate-spec” make target could look like this:
```make
generate-spec:
	sed -e 's/@@VERSION@@/$(VERSION)/g' my-project.spec.template >my-project.spec
```
As a practical example, cockpit-podman project is using this functionality.
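To see what this substitution does outside of make, here is a minimal shell sketch; the template contents and the version value are made up for illustration (in the Makefile, $(VERSION) is a make variable, while here it is a shell variable):

```shell
# create a toy template with the @@VERSION@@ placeholder
printf 'Name: my-project\nVersion: @@VERSION@@\n' > my-project.spec.template

# render it the same way the make target does
VERSION=1.2.3
sed -e "s/@@VERSION@@/$VERSION/g" my-project.spec.template > my-project.spec

cat my-project.spec
# prints:
# Name: my-project
# Version: 1.2.3
```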
Can I use CentOS Stream with packit service? #
Yes, you can! It’s very simple, just add centos-stream-x86_64 as a target for the copr_build job:
```yaml
jobs:
  - job: copr_build
    trigger: pull_request
    metadata:
      targets:
        - centos-stream-x86_64
```
After adding tests I see error ‘No FMF metadata found.’ #
If you encounter this error when running tests via Testing Farm, it means you forgot to initialize the metadata tree with fmf init and to include the resulting .fmf directory in the pull request.
See Testing Farm documentation for more information.
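A minimal sketch of initializing the tree, assuming the fmf tool is installed (e.g. via dnf install fmf):

```shell
# initialize the fmf metadata tree in the root of your repository
fmf init            # creates the .fmf directory with a version file

# make sure the new directory becomes part of the pull request
git add .fmf
git commit -m "Initialize fmf metadata tree"
```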
Does packit work with rpmautospec? #
Good that you ask! It does, packit works with rpmautospec quite nicely.
Before you start, please make sure that you follow latest documentation for rpmautospec.
rpmautospec utilizes two RPM macros:
- %autorel — to populate the Release field
- %autochangelog — to generate the %changelog section from the git history
If you want your upstream spec file to also work well when rpmautospec-rpm-macros is not installed, set Release with a conditional construct that uses the %autorel macro if it’s defined, and if it’s not, sets the release to 1.
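Such a conditional Release line could look like the following sketch (reconstructed from the fallback behavior described above, so treat the exact macro spelling as an assumption):

```spec
Release: %{?autorel}%{!?autorel:1}%{?dist}
```

With rpmautospec-rpm-macros installed, %autorel computes the release automatically; without it, the %{!?autorel:1} part falls back to release 1.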
Since %autochangelog generates the %changelog section, you don’t need to include the changelog file upstream: you can have it downstream only, which makes sense, since a changelog is specific to a release.
How do I install dependencies for my commands in packit-service? #
We run all the commands you define in a locked-down sandbox. At the moment we don’t have a mechanism for you to declare the dependencies you need so that we can make them available to you.
In the meantime, we are solving these requests one by one, so please reach out to us.
A command failed in packit-service: how do I reproduce it locally? #
You can pull our production sandbox image and run the command inside it. As an example, this is how we were debugging build problems with anaconda:
1. Clone your upstream git repo.
2. Launch the container and bind-mount the upstream project inside:
```
$ docker run -ti --rm --memory 768MB -v $PWD:/src -w /src quay.io/packit/sandcastle:prod bash
```
3. Run commands of your choice:
```
[root@4af5dbd9c828 src]# ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether UID '0' is supported by ustar format... yes
checking whether GID '0' is supported by ustar format... yes
checking how to create a ustar tar archive... gnutar
checking whether make supports nested variables... (cached) yes
checking whether make supports the include directive... yes (GNU style)
checking for gcc... gcc
checking whether the C compiler works... yes
...
```
OpenShift invokes pods using arbitrary UIDs, while the command above runs as root, so a local container does not exactly match production packit-service. If running a local container didn’t help you reproduce the issue, you can try running it in OpenShift!
Here is a simple Python snippet showing how packit-service does it:
```python
from sandcastle import MappedDir, Sandcastle

# path to your local clone of the upstream project
git_repo_path: str = "fill-me"
# kubernetes namespace to use
k8s_namespace: str = "myproject"

command = ["your", "command", "of", "choice"]

# This is how your code gets copied (via rsync) into the openshift pod
m_dir = MappedDir(git_repo_path, "/sandcastle", with_interim_pvc=True)

o = Sandcastle(
    image_reference="docker.io/usercont/sandcastle:prod",
    k8s_namespace_name=k8s_namespace,
    mapped_dir=m_dir,
)
o.run()
try:
    output = o.exec(command=command)
    print(output)
finally:
    o.delete_pod()
```
This script requires:
- sandcastle installed
- being logged in to an OpenShift cluster (run oc whoami to confirm)
- the rsync binary available
If none of these helped you, please reach out to us and we’ll try to help you.