Everything used to build these packages, except for the Vox Pupuli OpenVox private signing key, is available in public repos in the OpenVoxProject GitHub org.
Building on the work Jeff Clark did to improve Docker support in Vanagon, we added a bunch more improvements (and more since!) to make building these packages with containers work better. The container image chosen for each platform can be found in the platform default files. We also tried to standardize the platform defaults a bit, since they had grown rather divergent over the years. Using containerized builds also lets us build for different architectures without having to build on a host of that particular architecture.
Generally, you will be able to build all of the packages yourself, either locally with Rake tasks or with a GitHub Action set up for this purpose.
In some cases, Perforce uses its own internal build toolchain (pl-build-tools) for older platforms whose stock build tools are too old. Rather than do this, we've moved to using publicly available updated tools instead. These days, that mostly matters for el7.
There are three component repos required for building the agent:
- puppet-runtime - the vanagon repo containing the components packaged into the All-In-One (AIO) agent package
- pxp-agent-vanagon - the vanagon repo for pxp-agent, which is primarily used by Puppet Enterprise for orchestration, though some pieces are used by members of the community
- openvox-agent - the vanagon repo that creates the rpm/deb packages
Within these repos, you'll find the following rake tasks (example invocations are sketched after the list):
- `vox:tag['<tag>']` - This tags the repo and pushes the tag to origin.
- `vox:build['<project>','<platform>']` - This takes a project name (found in `configs/projects`) and a platform to build for (found in `configs/platforms`) and performs the build using vanagon's `docker` engine. The component will be built inside the container, and files will end up in the `output` directory of your repo clone.
- `vox:upload['<tag>','<platform>']` - This uploads the artifacts generated by the build to the OSL openvox-artifacts S3 bucket, or potentially a different S3 bucket if desired. You won't be able to use this without the AWS CLI set up with appropriate secrets.
- `vox:promote_runtime['<puppet-runtime tag>']` - Found in the pxp-agent-vanagon repo, this modifies the puppet-runtime.json file to point to a given tagged puppet-runtime version that exists in the OSL openvox-artifacts S3 bucket. You can modify this file manually to point to a directory on disk by changing `location` to `file:///path/to/puppet-runtime/output`.
- `vox:promote['<component>','<tag>']` - This more generic task is found in the openvox-agent repo and can be used for promoting tagged builds of both `puppet-runtime` and `pxp-agent`.
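For example, a local tag, build, and upload in one of these repos looks something like this. This is only a sketch: the tag, project, and platform values below are placeholders, and the real project and platform names live in each repo's `configs/projects` and `configs/platforms` directories.

```sh
# Run from a clone of one of the vanagon repos, with gems installed via `bundle install`.
# Tag, project, and platform values are placeholders.
bundle exec rake 'vox:tag[2025.1.0]'                          # tag the repo and push the tag to origin
bundle exec rake 'vox:build[agent-runtime-main,el-9-x86_64]'  # containerized build; artifacts land in output/
bundle exec rake 'vox:upload[2025.1.0,el-9-x86_64]'           # push artifacts to S3 (needs AWS CLI credentials)
```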
First, `puppet-runtime` is built and uploaded to the puppet-runtime artifacts directory. Then `pxp-agent`, which utilizes `puppet-runtime`, is built and uploaded to the pxp-agent artifacts directory. Finally, `openvox-agent`, which utilizes both, is built and uploaded to the openvox-agent artifacts directory. This last directory holds the rpm and deb agent packages, but these are unsigned.
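Between those builds, the promotion tasks tie the pieces together. A rough sketch, with placeholder tags:

```sh
# In pxp-agent-vanagon: point at an already-uploaded puppet-runtime tag (placeholder value),
# or edit puppet-runtime.json by hand and set location to file:///path/to/puppet-runtime/output
# to consume a local build instead.
bundle exec rake 'vox:promote_runtime[2025.1.0]'

# In openvox-agent: promote tagged builds of both components before building the packages.
bundle exec rake 'vox:promote[puppet-runtime,2025.1.0]'
bundle exec rake 'vox:promote[pxp-agent,2025.1.1]'
```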
The process for building the agent is now mostly in GitHub Actions. The component repos share a `build_vanagon.yml` workflow, which contains the full list of platforms that OpenVox currently supports for the agent. An example of how this shared workflow is used can be found in puppet-runtime. The shared workflow is also able to upload the resulting artifacts to the appropriate S3 bucket locations.
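For orientation, calling a shared workflow from one of the component repos generally looks like the sketch below. The repository path and input names here are hypothetical; see the actual caller in puppet-runtime for the real ones.

```yaml
# Hypothetical caller workflow; the shared workflow's real location and inputs
# are defined in the OpenVoxProject repos (puppet-runtime has a working example).
name: Build agent packages
on:
  workflow_dispatch:
    inputs:
      tag:
        description: 'Tag to build (hypothetical input name)'
        required: true
jobs:
  build:
    # Repository path below is an assumption, not the actual location.
    uses: OpenVoxProject/SHARED_WORKFLOWS_REPO/.github/workflows/build_vanagon.yml@main
    with:
      tag: ${{ inputs.tag }}
    secrets: inherit
```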
openvox-server and openvoxdb are built using our slightly tweaked version of ezbake, which allows us to change the name of the packages. Their repos contain similar rake tasks, but they are used slightly differently (an example invocation follows the list):
- `vox:tag['<tag>']` - First, this changes the version found in `project.clj` to the tag and commits that change. Then it tags the repo. Then it creates a new commit after the tag that increments the Z part of the version with `-SNAPSHOT`, following the current convention for these repos. Finally, it pushes the branch and the tag to origin.
- `vox:build['<tag>']` - Because the `vox:tag` task ends up creating a commit after the tag, this checks out the tag you want to build first. Then it creates a container to do the ezbake build and saves the artifacts to the `output` directory in your repo clone. Note that since these projects are fairly platform-agnostic, all of the packages can be built inside a single container. This container must be rpm-based, as `rpmbuild` is needed by `fpm` to create the rpms, but no special packages are needed to build the debs. The tasks have a default list of platforms to build for, but you can define the `DEB_PLATFORMS` and `RPM_PLATFORMS` environment variables. These are comma-separated lists of platforms with the architecture excluded (e.g. ubuntu-18.04,debian-12 or el-9,amazon-2023). They are used by the GitHub build action.
- `vox:upload['<tag>','<optional platform>']` - This uploads the artifacts generated by the build to the OSL openvox-artifacts S3 bucket or, potentially, a different S3 bucket if desired. You won't be able to use this without the AWS CLI set up with appropriate secrets.
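A local build of one of these projects might look something like this; the tag value is a placeholder, and the platform lists reuse the example values above.

```sh
# Run from a clone of openvox-server or openvoxdb; the tag is a placeholder.
bundle exec rake 'vox:tag[8.8.0]'        # bump project.clj, tag, add the -SNAPSHOT commit, push
DEB_PLATFORMS=ubuntu-18.04,debian-12 \
RPM_PLATFORMS=el-9,amazon-2023 \
  bundle exec rake 'vox:build[8.8.0]'    # checks out the tag, runs ezbake in an rpm-based container
bundle exec rake 'vox:upload[8.8.0]'     # push output/ to S3 (needs AWS CLI credentials)
```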
The process for building openvox-server and openvoxdb is now also mostly in GitHub Actions. They share a `build_ezbake.yml` workflow. The defaults for the aforementioned environment variables listing the platforms to build for are defined there. An example of how this shared workflow is used can be found in openvox-server. The shared workflow is able to upload these artifacts to the appropriate S3 bucket locations. Before running these actions, you currently need to run the `vox:tag` task locally.
To create the repository packages (i.e. the rpm files at https://yum.overlookinfratech.com/ that set up the repo on your machine), openvox-release is used. The packages this generates will place the public key in the right place and import it, and set up the appropriate apt/yum repo on your machine.
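For example, configuring the yum repo on an EL system looks roughly like this. The release-package filename below is an assumption; browse https://yum.overlookinfratech.com/ for the correct one for your platform and OpenVox series.

```sh
# Hypothetical filename; check the repo index for the real path for your platform.
sudo rpm -Uvh https://yum.overlookinfratech.com/openvox8-release-el-9.noarch.rpm
# The release package installs the GPG key and repo definition, so packages install normally afterwards:
sudo dnf install openvox-agent
```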
Signing is performed using the `sign_from_s3.rb` script run on an Overlook InfraTech GCP instance. You won't be able to use this yourself without the private signing key, but you can see the code used. It downloads the unsigned packages from the OSL openvox-artifacts S3 bucket, signs them, and then incorporates them into yum and apt repos, which are later synced to the S3 buckets. The apt repo is currently maintained with the aptly tool. At some point, we'll move this to a more automated and sustainable workflow.
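While the script itself isn't usable without the key, the general shape of that flow is roughly the following. This is an illustrative sketch only: the bucket paths, repo name, and distribution are assumptions, not the script's actual values.

```sh
# Illustrative outline only, not the actual sign_from_s3.rb logic; names are assumptions.
aws s3 sync s3://openvox-artifacts/openvox-agent/ ./unsigned/   # fetch unsigned packages
rpm --addsign ./unsigned/*.rpm                                  # sign rpms with the configured %_gpg_name key
createrepo_c --update ./yum/el/9/x86_64/                        # regenerate yum metadata (after copying signed rpms in)
aptly repo add openvox ./unsigned/*.deb                         # add debs to the aptly-managed repo
aptly publish update bookworm                                   # re-sign and republish the apt repo
aws s3 sync ./yum/ s3://openvox-yum/                            # sync the published repos back to S3
```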
The build machinery is all very new code, written to get things up and running as fast as possible. While we are fairly confident the packages should work as well as the last Perforce-built open source Puppet packages, we do not yet have the testing infrastructure that Perforce does. We'll be working on this soon!