
· 6 min read

Have you ever wanted to get your pull request changes into a cloud image easily? Curious about how easy it can be? With Packit, it can be as simple as commenting on your pull request with /packit vm-image-build.

With the above command, Packit automates all the manual steps needed to create an RPM package with your pull request changes and asks the Image Builder to install it inside a brand new cloud image. Let's have a look at the prerequisites for this.

Join the Red Hat Developer Program

If you don't already have a business account, you can create a Red Hat Developer account at no cost here.

You need a subscription in order to use the Image Builder service and launch the built images in the AWS Management Console.

Prepare to upload AWS AMI images

Before uploading an AWS AMI image, you must configure your AWS account to receive it.

Prerequisites

Procedure

Follow these steps to satisfy the above prerequisites.

The manual steps

Are you wondering what the manual steps are for getting your pull request changes into a cloud image, and why you should automate them?

There are many ways to achieve this goal, but let's walk through the one closest to our automated solution. Below you can find a summary of all the manual steps needed; I am quite sure that after reading them, you will want to automate them with Packit!

  • Build an RPM package with your pull request changes through COPR; go to https://copr.fedorainfracloud.org

    1. Install copr-cli.
    2. Create your account and service token.
    3. Add your token to `~/.config/copr`.
    4. Create a new COPR project.
    5. Start a build with your local pull request changes using copr-cli.
    6. WAIT for the build to finish.
  • Create a new cloud image through the Image Builder console; go to https://console.redhat.com/insights/image-builder

    1. Login with your Red Hat developer account.
    2. Click on the Create Image button, choose AWS image type and follow the wizard.
    3. WAIT for the build to finish.
    4. Open the Launch link for the built image.
  • Launch and access the AWS image through the AWS Management Console; go to https://aws.amazon.com/console/

    1. The previous link will open an AWS console tab with the Launch an Instance wizard preset to use the built image. You need to log in to the AWS Management Console using an AWS Account ID allowed to access the AMI image you just created.
    2. Select a key pair, or create one if you don't have one already, to be able to SSH into the instance later.
    3. Click on Launch Instance.
    4. Connect to the instance using an SSH client.
    5. Add the previously created COPR repo to the list of available dnf repositories.
    6. Install the package you built earlier with COPR.
    7. Now you are ready to test your code in a real cloud image.

For every new pull request you want to test directly in a cloud image, you have to repeat steps 4-16, or automate them through Packit!

Automate the steps

Install Packit

Installing Packit is pretty straightforward.

  1. Create a valid Fedora Account System (FAS) account (if you don't already have one). Why do you need it? After these few steps, you will start building (and potentially shipping) Fedora packages through the COPR service, and we need you to agree to the Fedora license.
  2. Install our GitHub application on GitHub Marketplace, or configure a webhook on GitLab (depending on where your project lives).
  3. Make Packit approve your FAS username; on GitHub the approval process is automated, and for GitLab you have to contact us.

Now you are ready to automate the process as described below.

Setup Packit

Create a .packit.yaml configuration file in your pull request.

But just the first time! After your pull request has been merged, Packit will take the .packit.yaml file from the target main branch.

The configuration file will look like the following:

---

jobs:
- job: copr_build
  trigger: pull_request
  targets:
  - fedora-all

- job: vm_image_build
  trigger: pull_request
  image_request:
    architecture: x86_64
    image_type: aws
    upload_request:
      type: aws
      options:
        share_with_accounts:
        - < shared-aws-account-id >
  image_distribution: fedora-39
  copr_chroot: fedora-39-x86_64
  image_customizations:
    packages: [hello-world]

copr_build job

The first job tells the Packit service to build an RPM package with your pull request changes for the Fedora releases you want; in this example, all the active Fedora releases.

To further customize the COPR builds made by Packit, you may want to take a look at this guide.

vm_image_build job

The second job tells Packit how to configure the Image Builder service.

The first two lines of this job are still meant for Packit; they allow Packit to react to your pull request comment /packit vm-image-build. Packit does not build a VM image automatically, as it does when it builds a COPR package, to save you from unwanted costs.

- job: vm_image_build
  trigger: pull_request

The other lines are meant to customize the Image Builder behaviour.

You are asking to build an AWS image, with a fedora-39 distribution, for the x86_64 architecture and you want to share it with the listed AWS Account IDs.

image_request:
  architecture: x86_64
  image_type: aws
  upload_request:
    type: aws
    options:
      share_with_accounts:
      - < shared-aws-account-id >
image_distribution: fedora-39

You don't want to manually install the COPR package into the image; for this reason, you ask Image Builder to install it (hello-world) for you.

You tell Image Builder to take it from the COPR chroot fedora-39-x86_64, and you don't need to create or specify a COPR project because it has been automatically created by Packit for you.

copr_chroot: fedora-39-x86_64
image_customizations:
  packages: [hello-world]

Create, comment and test a pull request!

Create a pull request; mine will show you the word world in green 🌿.

You are ready to go; just comment on your pull request with

/packit vm-image-build

and the image will be built and customized for you.

Look for the check named vm-image-build-fedora-39-x86_64 and wait for it to finish.

Wait for check vm-image-build-fedora-39-x86_64 to finish

Open its details and you will find the link to the AWS image.

The check details have a link to the AWS image

Open the AWS link (you need to be already logged in) and see the details of your image ready to be launched.

The AWS image details

Launch your image instance and connect to it.

Connect to instance details

Test it!


· 4 min read

Have you ever wanted to make changes in an RPM spec file programmatically? The specfile library has been created for that very purpose. It is a pure Python library that allows you to conveniently edit different parts of a spec file while doing its best to keep the resulting changeset minimal (no unnecessary whitespace changes, etc.).

Installation

The library is packaged for Fedora, EPEL 9 and EPEL 8 and you can simply install it with dnf:

dnf install python3-specfile

On other systems, you can use pip (just note that it requires RPM Python bindings to be installed):

pip install specfile

Usage

Let's have a look at a few simple examples of how to use the library.

Bumping release

To bump release and add a new changelog entry, we could use the following code:

from specfile import Specfile

with Specfile("example.spec") as spec:
    spec.release = str(int(spec.expanded_release) + 1)
    spec.add_changelog_entry("- Bumped release for test purposes")

Let's take a look at what happens here:

We instantiate the Specfile class with a path to our spec file and use it as a context manager to automatically save all changes upon exiting the context.

We then use the expanded_release property to get the current value of the Release tag after macro expansion. We assume it is numeric, so we simply convert it to an integer, add 1, convert the result back to a string, and assign the new value to the release property.

tip

Note that the release/expanded_release properties exclude the dist tag (usually %{?dist}) - for convenience, it is ignored when reading and preserved unmodified when writing. If that's not what you want, you can use the raw_release/expanded_raw_release properties instead.
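To illustrate the difference, here is a minimal sketch using only the properties named above, assuming a spec file whose Release tag is 1%{?dist} (the values in the comments are indicative):

from specfile import Specfile

with Specfile("example.spec") as spec:
    # assuming the spec file contains: Release: 1%{?dist}
    print(spec.release)               # "1" - dist tag excluded
    print(spec.raw_release)           # "1%{?dist}" - dist tag preserved
    print(spec.expanded_raw_release)  # e.g. "1.fc39" after macro expansion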

Finally, we add a new changelog entry. We don't specify any arguments other than the content, so the author is determined automatically using the same procedure as rpmdev-packager uses, and the date is set to the current day.
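If you prefer to set the author and date yourself instead of having them detected, add_changelog_entry also accepts them as arguments. Treat the keyword argument names below as an assumption based on the library's API reference and verify them against your installed version:

import datetime

from specfile import Specfile

with Specfile("example.spec") as spec:
    spec.add_changelog_entry(
        "- Bumped release for test purposes",
        author="Jane Doe",                    # hypothetical packager name
        email="jane.doe@example.com",         # hypothetical e-mail address
        timestamp=datetime.date(2024, 1, 2),  # explicit date instead of today
    )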

Switching to %autochangelog

To make a switch from traditional changelog to %autochangelog, we could do the following:

import pathlib
from specfile import Specfile

spec = Specfile("example.spec", autosave=True)

with spec.sections() as sections:
    entries = sections.changelog[:]
    sections.changelog[:] = ["%autochangelog"]

pathlib.Path("changelog").write_text("\n".join(entries) + "\n")

Let's take a look at what happens here:

We instantiate the Specfile class with a path to our spec file, and we also set the autosave argument, which ensures that any changes are saved automatically as soon as possible.

specfile heavily relies on context managers. Here we are using the sections() method, which returns a context manager that we can use to manipulate spec file sections. Upon exiting the context, any modifications done are propagated to the internal representation stored in our Specfile instance, and since autosave is set, they are immediately saved to the spec file as well.

First, we store a copy of the content of the %changelog section. The content is represented as a list of lines.

Then we replace the content with a single line - "%autochangelog".

Finally, we save the stored content into a "changelog" file.

Iterating through tags

Contexts can be nested. Here is code that iterates through all package sections (including the first, implicitly named one, also known as the preamble) and prints the expanded value of all Requires tags:

spec = Specfile("example.spec")

with spec.sections() as sections:
    for section in sections:
        # normalized name of a section is lowercased
        if section.normalized_name != "package":
            continue
        with spec.tags(section) as tags:
            for tag in tags:
                # normalized name of a tag is capitalized
                if tag.normalized_name != "Requires":
                    continue
                print(f"Section: {section.id}, Tag: {tag.name}, Value: {tag.expanded_value}")

Let's take a look at what happens here:

We instantiate the Specfile class with a path to our spec file. This time we don't set autosave because we are not making any modifications (though we could still save any changes explicitly using the save() method).

Then we use the sections() context manager and iterate through the sections; we skip sections not called "package" (the initial % is omitted for convenience).

After that, we use the tags() context manager and pass the current section as an argument. This allows us to iterate through the tags in the current section. Without any argument, we would get the tags of the preamble, the very first section in a spec file. We skip tags not called "Requires" and finally print the values of Requires tags after macro expansion. We also print the tag names (not normalized) and section IDs - those are section names followed by options, e.g. "package -n alternative-name-for-example".
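As a side note, when autosave is not set and you do make modifications, nothing is written back until you call save() yourself. A minimal sketch, reusing only the release property and save() method shown above:

from specfile import Specfile

# no autosave and no context manager, so changes stay in memory
spec = Specfile("example.spec")
spec.release = "2"

# write the modified spec file back to disk explicitly
spec.save()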

Are you interested in more details, trying the library out or even contributing? You can find specfile source code on GitHub. See the README for more tips and usage examples. You can also check out the API reference.

· 5 min read

"How absurdly simple!" I cried.

"Quite so!" said he, a little nettled. "Every problem becomes very childish when once it is explained to you."

  • Arthur Conan Doyle, "The Adventure of the Dancing Men"

We have planned for a while to use Packit to generate packages on Copr on demand for our somewhat complicated Rust executable, stratisd. It looked like this was going to be challenging, and in a sense it was, but once the task was completed, it turned out to have been pretty straightforward.

· 2 min read

In the upcoming months, we plan to migrate our service to a new cluster. However, this may affect propose_downstream and pull_from_upstream jobs due to the new firewall rules. The problematic aspects could be:

  • commands you run in your actions during release syncing that involve interactions with external servers
  • downloading your sources from various hosting services (crates.io, npm, gems, etc.)

To smooth this transition, we kindly encourage you to enable one of these jobs on our already migrated staging instance. This recommendation is particularly important if you belong to one of the groups affected by the two points above. This proactive step will help us identify and address any issues promptly.

Both instances can run at the same time, and the behaviour can be configured via the packit_instances configuration key, which is set to ["prod"] by default. Picking just one instance is required only for the koji_build and bodhi_update jobs, since both instances work with the production instances of Fedora systems. To avoid too much noise in your dist-git PRs, you may enable the pull_from_upstream/propose_downstream job for only one target, resulting in only one additional PR being created.

Here's how you can enable one of the jobs on the staging instance:

  • pull-from-upstream: The only thing needed is to duplicate the job in your Packit config using the packit_instances configuration option. Example:
    - job: pull_from_upstream
      trigger: release
      packit_instances: ["stg"]
      dist_git_branches:
        - fedora-rawhide
  • propose-downstream: For this job, you first need to enable our staging GitHub app (you should already be approved automatically if you were previously approved for the production instance). After that, similarly to pull-from-upstream, you only need to duplicate the job in your Packit config using packit_instances. Example:
    - job: propose_downstream
      trigger: release
      packit_instances: ["stg"]
      dist_git_branches:
        - fedora-rawhide
info

When merging the PRs created by Packit, please don't forget to merge the PRs created by the production instance if you have a follow-up koji_build job enabled, to ensure your builds will not be skipped (or you can allow builds for the staging instance as well, see allowed_pr_authors).

We would be happy if you could then report any problems to us. We appreciate your collaboration in ensuring a seamless migration. Your Packit team!

· 3 min read

We are very happy to announce a major enhancement to Packit! We have now added support for monorepositories, enabling the integration of upstream repositories containing multiple downstream packages. If you have a repository in the monorepo format, Packit can now help you automate the integration to downstream distributions both from CLI and as a service.

· 4 min read

In the previous year, we automated the Fedora downstream release process in Packit. The first step of the release process, propagating the upstream release to Fedora, is covered by the propose_downstream job. This job updates the sources in Fedora, the spec file, and other needed files and creates pull requests with the changes in the dist-git repository.

The downside of this job is that for its execution, users need to install the Packit Service GitHub/GitLab app, since this job reacts only to GitHub/GitLab release webhooks. However, the person who maintains the package in Fedora may not be the upstream maintainer and may not have admin access to the upstream GitHub/GitLab repository.

To cover this case, we came up with a new job called pull_from_upstream, which aims to update Fedora dist-git similarly to propose_downstream, but is configured directly in the dist-git repository. Let's now look at how to set it up and how it works.