
· 4 min read

Have you ever wanted to make changes in an RPM spec file programmatically? The specfile library was created for that very purpose. It is a pure Python library that lets you conveniently edit different parts of a spec file while doing its best to keep the resulting changeset minimal (no unnecessary whitespace changes, for example).

Installation

The library is packaged for Fedora, EPEL 9, and EPEL 8, so you can simply install it with dnf:

dnf install python3-specfile

On other systems, you can use pip (just note that it requires RPM Python bindings to be installed):

pip install specfile

Usage

Let's have a look at a few simple examples of how to use the library.

Bumping release

To bump release and add a new changelog entry, we could use the following code:

from specfile import Specfile

with Specfile("example.spec") as spec:
    spec.release = str(int(spec.expanded_release) + 1)
    spec.add_changelog_entry("- Bumped release for test purposes")

Let's take a look at what happens here:

We instantiate the Specfile class with a path to our spec file and use it as a context manager to automatically save all changes upon exiting the context.

We then use the expanded_release property to get the current value of the Release tag after macro expansion. We assume it is numeric, so we simply convert it to an integer, add 1, convert the result back to a string, and assign the new value to the release property.

tip

Note that the release/expanded_release properties exclude the dist tag (usually %{?dist}); for convenience, it is ignored when reading and preserved unmodified when writing. If that's not what you want, you can use the raw_release/expanded_raw_release properties instead.
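As a quick illustration, here is a minimal sketch of the difference, assuming the spec file contains Release: 1%{?dist}:

from specfile import Specfile

with Specfile("example.spec") as spec:
    print(spec.release)      # "1" - the dist tag is excluded
    print(spec.raw_release)  # "1%{?dist}" - the dist tag is preserved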

Finally, we add a new changelog entry. We don't specify any arguments other than the content, so the author is determined automatically, using the same procedure as rpmdev-packager, and the date is set to the current day.
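If you need more control over the new entry, its metadata can be passed explicitly. The following sketch uses the author, email and timestamp keyword arguments; treat their exact names as an assumption and verify them against the API reference:

import datetime

from specfile import Specfile

with Specfile("example.spec") as spec:
    spec.add_changelog_entry(
        "- Bumped release for test purposes",
        author="John Doe",                    # hypothetical packager
        email="john@example.com",             # hypothetical address
        timestamp=datetime.date(2024, 1, 1),  # explicit entry date
    )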

Switching to %autochangelog

To switch from a traditional changelog to %autochangelog, we could do the following:

import pathlib
from specfile import Specfile

spec = Specfile("example.spec", autosave=True)

with spec.sections() as sections:
    entries = sections.changelog[:]
    sections.changelog[:] = ["%autochangelog"]

pathlib.Path("changelog").write_text("\n".join(entries) + "\n")

Let's take a look at what happens here:

We instantiate the Specfile class with a path to our spec file, and we also set the autosave argument, which ensures that any changes are saved automatically as soon as possible.

specfile heavily relies on context managers. Here we are using the sections() method, which returns a context manager that we can use to manipulate spec file sections. Upon exiting the context, any modifications are propagated to the internal representation stored in our Specfile instance, and since autosave is set, they are immediately saved to the spec file as well.

First, we store a copy of the content of the %changelog section. The content is represented as a list of lines.

Then we replace the content with a single line - "%autochangelog".

Finally, we save the stored content into a "changelog" file.

Iterating through tags

Contexts can be nested. Here is code that iterates through all package sections (including the first, implicitly named one, also known as the preamble) and prints the expanded value of all Requires tags:

spec = Specfile("example.spec")

with spec.sections() as sections:
    for section in sections:
        # normalized name of a section is lowercased
        if section.normalized_name != "package":
            continue
        with spec.tags(section) as tags:
            for tag in tags:
                # normalized name of a tag is capitalized
                if tag.normalized_name != "Requires":
                    continue
                print(f"Section: {section.id}, Tag: {tag.name}, Value: {tag.expanded_value}")

Let's take a look at what happens here:

We instantiate the Specfile class with a path to our spec file. This time we don't set autosave because we are not making any modifications (though we could still save any changes explicitly using the save() method).

Then we use the sections() context manager and iterate through sections; we skip sections not called "package" (the initial % is omitted for convenience).

After that, we use the tags() context manager and pass the current section as an argument. This allows us to iterate through the tags in the current section. Without any argument, we would get the tags of the preamble, the very first section in a spec file. We skip tags not called "Requires" and finally print the values of the Requires tags after macro expansion. We also print tag names (not normalized) and section IDs; those are section names followed by options, e.g. "package -n alternative-name-for-example".
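To tie these points together, here is a small sketch that reads a tag from the preamble and saves changes explicitly; the attribute-style access by normalized tag name (tags.version, tags.url) follows the library's convenience API, and the spec file contents are assumed:

from specfile import Specfile

spec = Specfile("example.spec")

# without a section argument, tags() operates on the preamble
with spec.tags() as tags:
    print(tags.version.expanded_value)      # assumes a Version tag is present
    tags.url.value = "https://example.com"  # assumes a Url tag is present

# autosave is not set, so persist the changes explicitly
spec.save()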

Are you interested in more details, in trying the library out, or even in contributing? You can find the specfile source code on GitHub. See the README for more tips and usage examples. You can also check out the API reference.

· 5 min read

"How absurdly simple!" I cried.

"Quite so!" said he, a little nettled. "Every problem becomes very childish when once it is explained to you."

  • Arthur Conan Doyle, "The Adventure of the Dancing Men"

We had been planning for a while to use Packit to generate packages on Copr on demand for our somewhat complicated Rust executable, stratisd. It looked like this was going to be challenging, and in a sense it was, but once the task was completed, it turned out to have been pretty straightforward.

· 2 min read

In the upcoming months, we plan to migrate our service to a new cluster. However, this may affect propose_downstream and pull_from_upstream jobs due to the new firewall rules. The problematic aspects could be:

  • commands you run in your actions during release syncing that involve interactions with external servers
  • downloading your sources from various hosting services (crates.io, npm, gems, etc.)

To smooth this transition, we kindly encourage you to enable one of these jobs on our already migrated staging instance. This is particularly important if you are affected by either of the points above. This proactive step will help us identify and address any issues promptly.

Both instances can run at the same time, and the behaviour can be configured via the packit_instances configuration key, which is set to ["prod"] by default. Picking just one instance is required only for koji_build and bodhi_update jobs, since both instances work with the production instances of Fedora systems. To avoid too much noise in your dist-git PRs, you may enable the pull_from_upstream/propose_downstream job for only one target, resulting in only one additional PR being created.

Here's how you can enable one of the jobs on the staging instance:

  • pull-from-upstream: The only thing needed is to duplicate the job in your Packit config using the packit_instances configuration option. Example:
- job: pull_from_upstream
  trigger: release
  packit_instances: ["stg"]
  dist_git_branches:
    - fedora-rawhide
  • propose-downstream: For this job, you first need to enable our staging GitHub app (you should already be automatically approved if you were previously approved for the production instance). After that, similarly to pull-from-upstream, you only need to duplicate the job in your Packit config using packit_instances. Example:
- job: propose_downstream
  trigger: release
  packit_instances: ["stg"]
  dist_git_branches:
    - fedora-rawhide
info

When merging the PRs created by Packit, please don't forget to merge the PRs created by the production instance if you have a follow-up koji_build job enabled, to ensure your builds will not be skipped (or you can allow builds for the staging instance as well, see allowed_pr_authors).
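For example, a configuration along these lines could allow builds from both instances' PRs; the packit-stg account name is our assumption here, so check the allowed_pr_authors documentation for the exact value:

allowed_pr_authors: ["packit", "packit-stg"]  # "packit-stg" is assumed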

We would be happy if you could then report any problems to us. We appreciate your collaboration in ensuring a seamless migration. Your Packit team!

· 3 min read

We are very happy to announce a major enhancement to Packit! We have now added support for monorepositories, enabling the integration of upstream repositories containing multiple downstream packages. If you have a repository in the monorepo format, Packit can now help you automate the integration into downstream distributions, both from the CLI and as a service.

· 4 min read

In the previous year, we automated the Fedora downstream release process in Packit. The first step of the release process, propagating the upstream release to Fedora, is covered by the propose_downstream job. This job updates the sources in Fedora, the spec file, and other needed files and creates pull requests with the changes in the dist-git repository.

The downside of this job is that, for it to run, users need to install the Packit Service GitHub/GitLab app, since the job reacts only to GitHub/GitLab release webhooks. However, the person who maintains the package in Fedora may not be the upstream maintainer and may not have admin access to the upstream GitHub/GitLab repository.

To cover this case, we came up with a new job called pull_from_upstream, which aims to update Fedora dist-git similarly to propose_downstream, but is configured directly in the dist-git repository. Let's now look at how to set it up and how it works.
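As a preview, a minimal dist-git configuration could mirror the staging example shown earlier, just without the packit_instances key; treat this as a sketch rather than a complete setup:

- job: pull_from_upstream
  trigger: release
  dist_git_branches:
    - fedora-rawhide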