
· 9 min read

Last month, we had the pleasure of engaging with a dynamic audience during our interactive talk on changelogs at DevConf.CZ in Brno, Czech Republic. In case you missed it, you can watch the recording here. Throughout the session, we explored various aspects of changelog usage, including their content, format, and the potential for automation. By asking the attendees a series of questions, we gathered insights and opinions that highlighted both common practices and divergent viewpoints within the community. In this follow-up article, we aim to summarise the key findings from our discussion, analyse the trends and preferences that emerged, and offer our reflections on the role of changelogs in software development.

Photo from the conference 1 Photo from the conference 2

Content

One of the first questions we posed to our audience during the talk was, "What do you, as a user, like to see in changelogs?" The elements the audience most wanted to see were breaking changes and new features. Interest in breaking changes indicates that users prioritise being informed about changes that might disrupt their current setup or workflow. As for new features, this shows a strong interest in understanding the latest enhancements and functionalities added to the software; users appreciate knowing what new capabilities they can leverage. On a similar note, information about deprecated functionality is also highly valuable for the audience, as it can help users plan migrations and avoid relying on obsolete features. The audience also expressed a clear desire to understand the purpose and context of a new release. Responses such as "why", "purpose", "reasons", "relations", and "changes motivation" highlight this need. Understanding the rationale behind changes helps users comprehend the development trajectory and decision-making process. Another interesting response was the desire to know "am I affected", indicating that users primarily care about whether they need to take action or can safely ignore the update. There was also a response "not a copy of commit msgs". But we will get into that later…

Formats

The question that followed was, "What format do you prefer for changelogs?" The most popular format by far was Markdown. Its simplicity, readability, and widespread use in the developer community make it highly appealing. Markdown's flexibility in formatting and ease of conversion to other formats also contribute to its popularity. This preference may also indicate its frequent use in blog posts or other articles, as Markdown can be easily rendered.

Plain text files were also a prominent choice. We assume this could be for their simplicity and universal compatibility, making them accessible across various platforms and tools.

Other notable formats included reStructuredText, LaTeX, YAML, and blog posts. These formats cater to specific needs, such as enhanced formatting capabilities, structured data representation, or providing more detailed explanations and context. Several unique and creative preferences also emerged, such as AsciiDoc, email, and PDFs. This variety of preferences highlights that the needs of different projects and their users vary significantly.

Tools

Following the format question, we tried to collect the tooling (if any) used by the audience to help with changelog management. People mentioned various text editors, IDEs and, of course, ChatGPT/AI. But let's take a look at one specific tool worth sharing:

towncrier

According to the project's homepage, towncrier "delivers the news which is convenient to those that hear it, not those that write it." During development, "news fragments" (small text files) are created, and when there is a new release, they are merged together. Being user-centric, storing the fragments in git, and allowing them to be reviewed make it a really interesting choice worth exploring. Sadly, pre-commit hooks can't remind you that you've forgotten to add a new news fragment. Luckily, there is help in the form of the Chronographer GitHub application created by Sviatoslav Sydorenko (@webknjaz), who was coincidentally also present at the talk.
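As a rough sketch of the flow (the newsfragments/ directory, the .feature fragment type and the version number are assumptions that depend on the project's towncrier configuration):

# during development: record a user-facing news fragment for issue 1234
echo "Added support for colourful changelogs." > newsfragments/1234.feature

# at release time: merge all fragments into the changelog for version 1.2.3
towncrier build --version 1.2.3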

Quite interestingly, 11% of the respondents mentioned that no automation is used; there is an opportunity for improvement! (But to be fair, it can also mean that someone from the project wants to prepare something really useful and does it all by hand.)

There were also some tools like coffee, potato, postal pigeons or beer that we weren’t able to find documentation for. If you find these, let us know so we can add some links...;-)

There were also various git-based solutions suggested which leads us to the next question:

Automation based on commit messages

This is a tricky one, right? It might seem like an obvious choice for getting the input for our changelogs. But… yes, there is a "but". There are two main reasons why one would want to avoid using commit messages for this:

  • Commit messages are meant to be read by developers.
  • A commit itself represents a change meaningful to developers, not users.

Based on these observations, we came up with the following rules in our team (and talk attendees mentioned the same):

  • The content of the changelog should be created for users, not developers.
  • A changelog entry should be created at the user-focused level of change, which in our case is the pull request.
  • A changelog entry should be written by the author of the change while the change is being developed.

Of course, one can still use commits for this, but we don't think it's a good idea to have two goals for one text. If you really want to go this way, there is the Conventional Commits project. If nothing else, it can bring more attention to commit messages and provide well-defined rules for the project contributors. (Talk attendees also mentioned git-cliff as a changelog generator for Conventional Commits.) You can also use this format independently of the user-facing changelog, or perhaps as base information for a human writing the changelog. Among the other responses, various git-log-based solutions were mentioned, including the functionality provided directly by git forges.

Conventional Commits: search for a user (nothing found)
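For illustration, a commit message following the Conventional Commits specification might look like this; the feat type, the ! marker and the BREAKING CHANGE: footer are defined by the specification and map naturally onto the "new features" and "breaking changes" changelog sections mentioned above:

feat(config)!: drop support for the legacy configuration format

BREAKING CHANGE: projects still using the legacy format must migrate to the new one.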

Packit blog-post generator

As a follow-up to the previous questions, we showed the solution we use in Packit. We researched and tried various solutions, but this is what finally works for us:

  1. When submitting a pull-request, you put your changelog into a pull-request description.

Packit workflow 1: pull-request

  2. If you forget, a GitHub action will mark the PR red to remind you. (You can also put “N/A” if there is no user-facing change and the PR should be skipped for this check.)

  3. There is a GitHub action that we manually trigger when a new release should be prepared; as a result, a new pull request is created with the aggregated changelog and the version updated. When this pull request is merged, the content of the changelog is also put into the GitHub release description and taken from there when preparing downstream (i.e. Fedora) updates.

Packit workflow 2: Manual triggering of Github workflow that prepares the changelog Packit workflow 3: Content of the pull request created by the workflow Packit workflow 4: Published release with the changelog from the pull request

  4. In Packit, most of our users do not install our packages manually but use our service. When doing a new deployment (by moving stable branches in our repositories), we collect the changelog snippets and prepare an update post that is published on our project page.

Packit workflow 5: Published blog post with changelog from the pull request

The important bit is that in both places where the changelog snippets are used, there is a review in place. So you can still revisit the text, combine multiple entries together, or remove an entry if it is not relevant to the user after all.

You don't have to use the same approach, but try to think about this and have a discussion within your team. The discussion itself can help you think more about your users.

Nice changelog examples

Thanks to our fellow attendees, we can share some examples to be inspired by:

Space for improvement?

In addition to preferences, we also sought feedback on potential improvements for managing changelogs. The responses highlighted several key areas where the audience sees opportunities for enhancement:

  • AI: A significant number of responses emphasized the need for AI integration in changelog management; specifically, using generative AI to write changelogs was mentioned.
  • Standardization and consistency: Several responses called for standardizing the format and content of changelogs. Consistency in how changelogs are written and maintained can improve readability and usability. Specific suggestions included using templates and setting ground rules, such as always including issue IDs in commits.
  • Automation and integration with development tools: Improving the automation tooling and integrating changelog generation with existing development tools and CI/CD pipelines were other common suggestions. This could streamline the process, ensuring that changelogs are automatically updated and maintained as part of the development workflow.
  • Improving quality: Improving the quality of changelog messages was a recurring theme. Responses suggested focusing on clear, concise, and meaningful wording and also highlighted the need for changelogs to be more user-oriented rather than developer-centric.

Several responses addressed specific needs, such as differentiating upstream and downstream changelogs, supporting all CI systems, and referencing the tickets associated with changes. Additionally, there were responses emphasizing the importance of keeping changelogs simple and easy to understand.

Conclusion

And now what?

What can you do now? Improve the changelog in your project. Get involved in a project you like and help with its changelogs. Read the changelogs.

What can we do together? Let's collaborate on the tools and share good practices!

And what about the standardisation? Let’s create a new standard! https://xkcd.com/927/

With that, let’s quote a response from one of our attendees: "Many more people read the changelog than write it, so it's worth it to put in the effort."


This post was also published at medium.com.

· 5 min read

The first part of June is usually quite busy for our team. Why? For the last couple of years, this has been the time of the DevConf.CZ conference. (The unpredictable January slot was changed to a more pleasant June.) Even though the conference itself is important, it is also used as an opportunity for various people from around the globe to come to Brno, and thanks to that, a lot happens during the surrounding days as well. For the Packit team, it’s a nice opportunity to have the whole team together in one place – we can do some fun teambuilding (like canoeing this year) but also discuss technical topics, meet our users and see who the real people behind all the nicknames are. This time we also prepared something for them:

Packit team at DevConf.CZ

Packit workshop

Before DevConf, we recognized a unique opportunity: numerous users and potential users of Packit would be visiting Brno for the conference. Therefore, we decided to organise an in-person workshop with a main focus on our release automation. We had previously organised multiple online runs, for which you can find the materials here, so we were mostly prepared. The workshop brought together both Red Hatters and non-Red Hatters, resulting in a rich exchange of ideas and great feedback. In the end, it served not only for learning about our release automation but also about Packit's upstream CI capabilities.

During the workshop, several key areas of interest emerged:

  • Building in sidetags: There was significant interest in building in sidetags. Participants provided valuable feedback on the workflow and configuration that Nikola Forro is currently developing, see the GitHub issue. One of the discussion points was the automatic resolution of dependencies as the next step for the current static configuration.
  • Common specfile manipulation tasks: Participants expressed a need for ways to handle common specfile manipulation tasks, which could be utilised in Packit's actions or for debugging purposes. Specific use cases included removing source/patch ranges for Copr builds, replacing sources within specfiles, or getting and setting specfile versions. Our specfile library already covers some of these, but other alternative solutions were also proposed, such as creating a CLI specifically for handling these tasks or adding Packit subcommands to facilitate these operations.

Besides those, in relation to the release automation, some existing issues were brought up, such as the one about Packit not creating divergent branches when syncing a release. Additionally, a bug/inconsistency was directly addressed and fixed during the workshop, see https://github.com/packit/packit/pull/2327.

Overall, the workshop was a success and we are grateful to our great users for coming! The insights gained will definitely influence the ongoing development and improvement of Packit. We celebrated the successful workshop with a nice lunch together with the participants.

Packit members' talks

Of all the various proposals prepared by our team (and there were a lot!), three were accepted and we were able to present some interesting topics to the audience.

The first session was an interactive one held by Laura and František about changelogs – we used the Mentimeter platform to interact with the audience. We could not only collect information about what people are interested in seeing in changelogs and what tooling they use, but also show charts from the research Laura did as part of her diploma thesis last year. As part of the session, we were able to show the changelog automation we use in Packit. There is also a blog post covering the talk and all the interesting findings.

For the next session, František took a bunch of happy (of course..) Packit users and organised a user showcase. In just half an hour, 8 people went on stage, provided an introduction to Packit, tmt and Testing Farm, and showed 4 interesting use cases. The recording can be found here. Interestingly, two of the use cases overlapped: Cockpit has introduced its test cases into its dependencies to notice issues early, and one of those dependencies is Podman. Both Podman and Cockpit were presented on stage.

During the third session, Laura and Tomáš showed our journey to team role rotation and how we do it these days. They used Mentimeter as well, just like in the changelog session, so it was great fun. Missed it? No worries, there is not only a recording but also a blog post series covering this topic. The last part helps you do the same, and as usual, we have this automated…

Laura and Tomáš presenting

Even though some of our talk proposals were not accepted this year, our users represented us very well. In addition to the previously mentioned user showcase, there were two other dedicated talks by Packit users:

Additionally, Siteshwar Vashisht presented about OpenScanHub, a service for static analysis of Linux distributions, where he also mentioned Packit and our plans for integration (https://pretalx.com/devconf-cz-2024/talk/7C38GJ/).


So, that was it. A DevConf week that we enjoy being part of as much as we enjoy it being over.

· 6 min read

Have you ever wanted to easily bring your pull request changes into a cloud image? Curious about how easy it can be? With Packit, it can be as simple as commenting on your pull request with /packit vm-image-build.

With the above command, Packit automates all the manual steps needed to create an RPM package with your pull request changes and asks the Image Builder to install it inside a brand new cloud image. Let's have a look at the prerequisites for this.

Join the Red Hat Developer Program

If you don't already have a business account you can create a Red Hat Developer account at no cost here.

You need a subscription in order to use the Image Builder service and launch the built images in the AWS management console.

Prepare to upload AWS AMI images

Before uploading an AWS AMI image, you must configure AWS to receive it.

Prerequisites

Procedure

Follow these steps to satisfy the above prerequisites.

The manual steps

Are you wondering what the manual steps for bringing your pull request changes into a cloud image are, and why you should automate them?

There may be many ways to achieve this goal, but let's look together at the one closest to our automated solution. Below you can find a summary of all the needed manual steps; I am quite sure that after reading them, you will want to automate them with Packit!

  • Build an RPM package with your pull request changes through COPR, go to https://copr.fedorainfracloud.org

    1. Install copr-cli.
    2. Create your account and service token.
    3. Add your token to ~/.config/copr.
    4. Create a new COPR project.
    5. Start a build with your local pull request changes using copr-cli.
    6. WAIT for the build to finish.
  • Create a new cloud image through the Image Builder console, go to https://console.redhat.com/insights/image-builder

    1. Login with your Red Hat developer account.
    2. Click on the Create Image button, choose AWS image type and follow the wizard.
    3. WAIT for the build to finish.
    4. Open the Launch link for the built image.
  • Launch and access the AWS image through the AWS management console, go to https://aws.amazon.com/console/

    1. The previous link will open an AWS console tab with the Launch an Instance wizard preset to use the built image. You need to log in to the AWS management console using an AWS Account ID allowed to access the AMI image you just created.
    2. Select a Key pair, or create one if you don't have one already, to be able to ssh into the instance later.
    3. Click on Launch Instance.
    4. Connect to the instance using an ssh client.
    5. Add the previously created COPR repo to the list of available dnf repositories.
    6. Install the package you have created at step number 4.
    7. Now you are ready to test your code in a real cloud image.

For every new pull request you want to test directly in a cloud image, you have to repeat steps 4-16, or automate them through Packit!
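To make the COPR part of the manual steps above more concrete, the flow might look roughly like this (a sketch only; the project name, chroot and SRPM path are placeholders):

# install the CLI, then store your API token in ~/.config/copr
sudo dnf install copr-cli

# create a COPR project and start a build from a locally built SRPM
copr-cli create my-project --chroot fedora-39-x86_64
copr-cli build my-project ./my-package-1.0-1.fc39.src.rpm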

Automate the steps

Install Packit

Installing Packit is pretty straightforward.

  1. Create a valid Fedora Account System (FAS) account (if you don't already have one). Why do you need it? After these few steps you will start building (and potentially shipping) Fedora packages through the COPR service and we need you to agree with the Fedora license.
  2. Install our GitHub application on GitHub Marketplace, or configure a webhook on GitLab (depending on where your project lives).
  3. Make Packit approve your FAS username; on GitHub the approval process is automated, and for GitLab you have to contact us.

Now you are ready to automate the process as described below.

Setup Packit

Create a .packit.yaml configuration file in your pull request.

But just the first time! After your pull request has been merged, Packit will take the .packit.yaml file from the target main branch.

The configuration file will look like the following:

---

jobs:
- job: copr_build
  trigger: pull_request
  targets:
  - fedora-all

- job: vm_image_build
  trigger: pull_request
  image_request:
    architecture: x86_64
    image_type: aws
    upload_request:
      type: aws
      options:
        share_with_accounts:
        - < shared-aws-account-id >
  image_distribution: fedora-39
  copr_chroot: fedora-39-x86_64
  image_customizations:
    packages: [hello-world]

copr_build job

The first job tells the Packit service to build an RPM package for the Fedora releases you want (in this example, all the active Fedora releases) and to add your pull request changes to the package.

To further customize the COPR builds made by Packit, you may want to take a look at this guide.

vm_image_build job

The second job tells Packit how to configure the Image Builder service.

The first two lines of this job are still meant for Packit; they allow Packit to react to your pull request comment /packit vm-image-build. Packit does not build a VM image automatically, as it does when it builds a COPR package, to save you from unwanted costs.

- job: vm_image_build
  trigger: pull_request

The other lines are meant to customize the Image Builder behaviour.

You are asking to build an AWS image, with a fedora-39 distribution, for the x86_64 architecture and you want to share it with the listed AWS Account IDs.

image_request:
  architecture: x86_64
  image_type: aws
  upload_request:
    type: aws
    options:
      share_with_accounts:
      - < shared-aws-account-id >
image_distribution: fedora-39

You don't want to manually install the COPR package into the image; for this reason, you ask the Image Builder to install it (hello-world).

You tell Image Builder to take it from the COPR chroot fedora-39-x86_64, and you don't need to create or specify a COPR project because it has been automatically created by Packit for you.

copr_chroot: fedora-39-x86_64
image_customizations:
  packages: [hello-world]

Create, comment and test a pull request!

Create a pull request; mine will show you the word "world" in green 🌿.

You are ready to go; just comment on your pull request with

/packit vm-image-build

and the image will be built and customized for you.

Look for the check named vm-image-build-fedora-39-x86_64 and wait for it to finish.

Wait for check vm-image-build-fedora-39-x86_64 to finish

Open its details and you will find the link to the AWS image.

The check details have a link to the AWS image

Open the AWS link (you need to be already logged in) and see the details of your image ready to be launched.

The AWS image details

Launch your image instance and connect to it.

Connect to instance details

Test it!

Test it!

· 4 min read

Have you ever wanted to make changes in an RPM spec file programmatically? The specfile library has been created for that very purpose. It is a pure Python library that allows you to conveniently edit different parts of a spec file while doing its best to keep the resulting changeset minimal (no unnecessary whitespace changes etc.).

Installation

The library is packaged for Fedora, EPEL 9 and EPEL 8 and you can simply install it with dnf:

dnf install python3-specfile

On other systems, you can use pip (just note that it requires RPM Python bindings to be installed):

pip install specfile

Usage

Let's have a look at a few simple examples of how to use the library.

Bumping release

To bump release and add a new changelog entry, we could use the following code:

from specfile import Specfile

with Specfile("example.spec") as spec:
    spec.release = str(int(spec.expanded_release) + 1)
    spec.add_changelog_entry("- Bumped release for test purposes")

Let's take a look at what happens here:

We instantiate the Specfile class with a path to our spec file and use it as a context manager to automatically save all changes upon exiting the context.

We then use the expanded_release property to get the current value of the Release tag after macro expansion. We assume it is numeric, so we simply convert it to an integer, add 1, convert the result back to a string and assign the new value to the release property.

tip

Note that the release/expanded_release properties exclude the dist tag (usually %{?dist}); for convenience, it is ignored when reading and preserved unmodified when writing. If that's not what you want, you can use the raw_release/expanded_raw_release properties instead.
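A tiny sketch of the difference (the values shown in the comments are just an example):

spec = Specfile("example.spec")
# release drops the dist tag, raw_release keeps it
print(spec.release)       # e.g. "1"
print(spec.raw_release)   # e.g. "1%{?dist}"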

Finally, we add a new changelog entry. We don't specify any arguments other than the content, so the author is determined automatically (using the same procedure as rpmdev-packager) and the date is set to the current day.
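If you want full control over the entry, add_changelog_entry can also take explicit author, email and timestamp arguments; a minimal sketch (the name, email and date are made up, and the exact keyword arguments are best double-checked against the specfile documentation):

import datetime

from specfile import Specfile

with Specfile("example.spec") as spec:
    # provide the author and date explicitly instead of relying on auto-detection
    spec.add_changelog_entry(
        "- Bumped release for test purposes",
        author="Jane Doe",
        email="jane@example.com",
        timestamp=datetime.date(2024, 7, 1),
    )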

Switching to %autochangelog

To make a switch from traditional changelog to %autochangelog, we could do the following:

import pathlib
from specfile import Specfile

spec = Specfile("example.spec", autosave=True)

with spec.sections() as sections:
    entries = sections.changelog[:]
    sections.changelog[:] = ["%autochangelog"]

pathlib.Path("changelog").write_text("\n".join(entries) + "\n")

Let's take a look at what happens here:

We instantiate the Specfile class with a path to our spec file and we also set the autosave argument, which ensures that any changes are saved automatically as soon as possible.

specfile heavily relies on context managers. Here we are using sections() method that returns a context manager that we can use to manipulate spec file sections. Upon exiting the context, any modifications done are propagated to the internal representation stored in our Specfile instance, and since autosave is set, they are immediately saved to the spec file as well.

First, we store a copy of the content of the %changelog section. The content is represented as a list of lines.

Then we replace the content with a single line - "%autochangelog".

Finally, we save the stored content into a "changelog" file.

Iterating through tags

Contexts can be nested. Here is code that iterates through all package sections (including the first, implicitly named one, also known as the preamble) and prints the expanded value of all Requires tags:

spec = Specfile("example.spec")

with spec.sections() as sections:
    for section in sections:
        # normalized name of a section is lowercased
        if section.normalized_name != "package":
            continue
        with spec.tags(section) as tags:
            for tag in tags:
                # normalized name of a tag is capitalized
                if tag.normalized_name != "Requires":
                    continue
                print(f"Section: {section.id}, Tag: {tag.name}, Value: {tag.expanded_value}")

Let's take a look at what happens here:

We instantiate the Specfile class with a path to our spec file. This time we don't set autosave because we are not doing any modifications (though we could still save any changes explicitly using the save() method).

Then we use the sections() context manager and iterate through sections; we skip sections not called "package" (the initial % is omitted for convenience).

After that we use the tags() context manager and pass the current section as an argument. This allows us to iterate through the tags in the current section. Without any argument, we would get a list of tags in the preamble, the very first section in a spec file. We skip tags not called "Requires" and finally print the values of Requires tags after macro expansion. We also print tag names (not normalized) and section IDs; those are section names followed by options, e.g. "package -n alternative-name-for-example".

Are you interested in more details, trying the library out or even contributing? You can find specfile source code on GitHub. See the README for more tips and usage examples. You can also check out the API reference.

· 5 min read

"How absurdly simple!" I cried.

"Quite so!" said he, a little nettled. "Every problem becomes very childish when once it is explained to you."

  • Arthur Conan Doyle, "The Adventure of the Dancing Men"

We have planned for a while to use Packit to generate packages on Copr on demand for our somewhat complicated Rust executable, stratisd. It looked like this was going to be challenging, and in a sense it was, but once the task was completed, it turned out to have been pretty straightforward.

· 2 min read

In the upcoming months, we plan to migrate our service to a new cluster. However, this may affect propose_downstream and pull_from_upstream jobs due to the new firewall rules. The problematic aspects could be:

  • commands you run in your actions during release syncing that involve interactions with external servers
  • downloading your sources from various hosting services (crates.io, npm, gems, etc.)

To smooth this transition, we kindly encourage you to enable one of these jobs on our already migrated staging instance. This recommendation is particularly important if you belong to one of the groups affected by the two previous points. This proactive step will help us identify and address any issues promptly.

Both instances can be run at the same time and the behaviour can be configured via the packit_instances configuration key, which is by default set to ["prod"]. Picking just one instance is required only for koji_build and bodhi_update jobs since both instances work with the production instances of Fedora systems. To avoid too much noise in your dist-git PRs, you may enable the pull_from_upstream/propose_downstream job for only one target, resulting in only one additional PR created.

Here's how you can enable one of the jobs on the staging instance:

  • pull-from-upstream: The only thing needed is to duplicate the job in your Packit config using the packit_instances configuration option. Example:
- job: pull_from_upstream
  trigger: release
  packit_instances: ["stg"]
  dist_git_branches:
  - fedora-rawhide
  • propose-downstream: For this job, you first need to enable our staging GitHub app (you should already be automatically approved if you had previously been approved for the production instance). After that, similarly to pull-from-upstream, you only need to duplicate the job in your Packit config using packit_instances. Example:
- job: propose_downstream
  trigger: release
  packit_instances: ["stg"]
  dist_git_branches:
  - fedora-rawhide
info

When merging the PRs created by Packit, please don't forget to merge the PRs created by the production instance if you have a follow-up koji_build job enabled, to ensure your builds will not be skipped (or you can allow builds for the staging instance as well, see allowed_pr_authors).
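As a rough sketch of that option (the staging bot's account name below is an assumption; please check the allowed_pr_authors documentation for the exact value to use):

- job: koji_build
  trigger: commit
  dist_git_branches:
  - fedora-rawhide
  allowed_pr_authors:
  - packit
  - packit-stg  # assumed name of the staging bot account; verify in the Packit docs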

We would be happy if you could then report any problems to us. We appreciate your collaboration in ensuring a seamless migration. Your Packit team!

· 3 min read

We are very happy to announce a major enhancement to Packit! We have now added support for monorepositories, enabling the integration of upstream repositories containing multiple downstream packages. If you have a repository in the monorepo format, Packit can now help you automate the integration to downstream distributions both from CLI and as a service.