Managing Complex Projects

A while ago I mentioned dev-pipeline, and I’ve kept hacking away at it and improving the software. I didn’t think about it too much since it met my needs (or it was easy to add whatever was missing), but it came up at work the other day and I decided to see just how complex the project I was managing actually was. Since dev-pipeline learned how to generate dependency graphs in dot format, this was easy.

dev-pipeline build-order --method=dot provided the following output:

[Image: dependency graph showing the build ordering from dev-pipeline]

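(If you want to see the graph yourself, the dot output can be rendered with Graphviz; assuming you redirect it to a file, say build-order.dot, something like dot -Tpng build-order.dot -o build-order.png produces the image. The filename here is just an example.)
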
This was when I realized dev-pipeline was suitable for projects outside my small world. Using a 96-line build.config and a 40-line profiles.conf, I have the above building in the following configurations:

  • x86_64-pc-linux-gnu-gcc-7.3.0 (this is what I’m using most of the time)
  • x86_64-pc-linux-gnu-gcc-6.4.0
  • armv7a-hardfloat-linux-gnueabi-gcc-6.4.0 (toolchain for my Raspberry Pi)
  • clang-6.0 with libc++

The last two bullet points are especially interesting to me, since building with libc++ requires building all dependencies with libc++, as does building for an entirely different architecture. There are also two things about this that are really cool:

  1. I have no sub-repositories anywhere, in any of the repositories. This means the dependencies (e.g., gtest) have to be built by dev-pipeline and passed along.
  2. None of the components have to know about dev-pipeline. Everything is built with cmake, and each component just uses standard cmake features like find_package, install, and export (sketched below).

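To give an idea of what that looks like, here is a generic sketch with made-up names (mylib is a placeholder, not code from any of the repositories above): a library’s CMakeLists.txt simply installs and exports its targets, and a consumer in a completely separate repository picks it up with find_package.

    # CMakeLists.txt for a hypothetical library "mylib"
    cmake_minimum_required(VERSION 3.5)
    project(mylib VERSION 1.0 LANGUAGES CXX)

    add_library(mylib src/mylib.cpp)
    target_include_directories(mylib PUBLIC
        $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
        $<INSTALL_INTERFACE:include>)

    # Install the library, its headers, and an export file other projects can find.
    install(TARGETS mylib EXPORT mylib-targets
        ARCHIVE DESTINATION lib
        LIBRARY DESTINATION lib
        RUNTIME DESTINATION bin)
    install(DIRECTORY include/ DESTINATION include)
    install(EXPORT mylib-targets
        FILE mylib-config.cmake
        NAMESPACE mylib::
        DESTINATION lib/cmake/mylib)

    # A consumer (in another repository entirely) only needs:
    #   find_package(mylib REQUIRED)
    #   target_link_libraries(consumer PRIVATE mylib::mylib)

With components structured like this, all an orchestration layer presumably has to “pass along” is where the dependency installs landed (e.g., via CMAKE_PREFIX_PATH), which is why nothing needs to know the orchestrator exists.
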
Since I come from the wonderful world of the Unix philosophy, where everything is small and designed to be composed, this makes me very happy. I can work on a library like houseguest or bureaucracy, then plug it into another project. For distro maintainers (I’m assuming, since I’m not one), this also avoids having to work around a project’s dependency requirements. One of my pet peeves is a repository that stores information about tooling unrelated to directly turning source into something usable (e.g., .travis.yml, Jenkinsfile, conanbuildinfo.cmake), so keeping repositories clean is a big win.

Now, will this work for massive projects like KDE with hundreds of repositories? I doubt it, but only because I have no confidence my dependency ordering is efficient enough (patches welcome). Outside of possible performance issues, however, I don’t see a reason this can’t scale.

Managing complex projects is hard, especially when they’re spread across repositories, but dev-pipeline looks like it’s solving the problem (at least for my use cases). I’ve also been able to convert an existing project with many sub-repositories, with the only real change being the removal of the sub-repositories themselves.

[Image: a graph showing the converted project’s sub-repository dependencies]

For reasons I won’t reiterate here, sub-repositories are full of problems, and this is proof that better alternatives can exist without adding significant overhead to a developer’s workflow.
