C projects with GNU Autotools in 2022

Lab Notes

I’ve been a user of the GNU Build System (aka GNU Autotools) nearly all my life, as the installation mechanism for countless open source software packages. Only recently have I tried setting up a software project that uses it. I ended up with a new project template and a lightweight C module management system, with unit testing and mocks.

In this article, I’ll give an overview of the template’s simple (possibly naïve) philosophy, and I’ll describe the Autotools features it tries to simplify. See the template README.md for the practical details on how to use it, and the template repo itself for examples. Hopefully this at least provides some hints to future Autotoolers, even if you don’t use the automation.

Update, August 20: I decided to challenge a premise of the first template to make a smaller simpler template. If you want modules with multiple source files, use the main template and the provided tool to generate Makefile.am. If you’re fine with one source file per module (common in small-to-medium C projects), this smaller template suggests a workflow with a hand-edited Makefile.am:

The Autotools we’ve known

Perhaps you’ve done this at a command line to download and install open source software for Linux or macOS:

wget https://example.com/downloads/coolsoftware-1.0.tar.gz
tar xzf coolsoftware-1.0.tar.gz
cd coolsoftware-1.0
./configure
make
sudo make install

This software was packaged with GNU Autotools, a powerful tool for distributing software that can run on multiple types of computers. Instead of downloading pre-built software for a specific operating system, Autotools expects you to download the source code, then use software building tools to generate the final program tailored to your specific computer. The Autotools magic is in the ./configure command, which scans your system for tools and libraries, and adapts the installation process to what it finds. You can even give ./configure customization options to further tweak the result.

This is mostly useful among Unix and Linux machines that might have many variations of components. Autotools assumes they all comply with the POSIX standard, a baseline of functionality that it can use to bootstrap the custom install process. This includes Unix, Linux, BSD, and macOS (which is based on BSD). Windows users can install a POSIX-compliant subsystem such as Cygwin, MinGW, or the Windows Subsystem for Linux.

Even if you don’t distribute a source package like coolsoftware-1.0.tar.gz, Autotools can make it easier to produce app downloads for different operating systems from a single set of source files. Autotools is so widely used as to be a de facto standard for some things, and so featureful that it’s difficult to get the same result any other way.

Why (I think) I want Autotools in 2022

I’m working on a command-line tool written in the C language. It’ll be somewhat large and take advantage of C code and libraries written by others. It’ll be released as open source, and some customers will prefer a GNU standard source distribution à la coolsoftware-1.0.tar.gz. Other customers will require or prefer pre-compiled Windows and Mac apps, which will probably be generated by a Continuous Integration (CI) system from a public source repository. It may someday be a Homebrew installation recipe.

I’ll admit up front that I have not yet taken a close look at CMake or Meson, two newer build systems with good support for C programs that are increasingly popular. The user and developer community to which I’m contributing this hasn’t adopted either, and while I want build logic more formalized than hand-written Makefiles, I didn’t want to introduce a new tool dependency. In practice, I’m probably asking them to install a bunch of dependencies anyway, so this may not be a major concern.

I want my tool to behave according to the expectations of seasoned command-line users, expectations heavily influenced by and implemented by prevalent GNU libraries. Established well-documented tool chains should make this easier to pull together.

If this were a professional environment, I would do a more thorough comparative analysis, and maybe lean toward more modern tooling. For a personal project, the established vintage nature of Autotools has its charms, like using the decades-old machines in a high school woodshop. It’s an old-fashioned project for an old-fashioned community, and it’ll be fun to do a project in C with Autotools.

Organizing a C project

I want the project root directory to only contain high-level documentation, Autotools files, and Git configuration. Application source code goes in a src/ directory, test code and data go in a tests/ directory, more documentation under docs/, and custom developer automation under scripts/.

The Unix-y way to depend on a third-party library is to inform the user through documentation that they must download and install the library on their system before building the application. If the library you want is open source in a way that’s compatible with your project, a useful alternative is to copy the library’s source code directly into your project, so the user has one less thing to worry about. Furthermore, if that library has a public Git repository, you can use Git Submodules to formalize the relationship. Either way, it belongs in a third-party/ directory to keep it separate from the project source.

Throw in some VSCode IDE project settings, and this seems like a sensible top-level file list:
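Based on the directories described above, such a top-level listing might look like this (a hypothetical example; the LICENSE filename and .vscode/ settings directory are assumptions):

```
.gitignore
.gitmodules
.vscode/
LICENSE
Makefile.am
README.md
configure.ac
docs/
scripts/
src/
tests/
third-party/
```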


Modules in C

Any large software project is organized into smaller self-contained units that refer to one another. Modern languages have facilities for defining these units built into the language, such as packages, classes, and namespaces. The C language doesn’t prescribe a way to do this, so C programmers adopt a discipline of best practices to accomplish the same effect. I’m confident that what I describe here is pretty close to common practice, at least in a simplistic sense, though different projects might do this differently because there is no real standard.

A module is a self-contained, well-understood set of functionality in our program. It can be tested independently of other modules, and it can depend on other modules explicitly. Ideally, a module’s internals are not visible to other modules, so they are easy to change without concern that an internal change in one module breaks another module.

C offers these features on a per-file basis. A single .c source file can define functions and storage for use by other files, and can also define static functions and storage visible only to the file. To share the non-static definitions to other source files, you declare them in a .h header file, and refer to the header file with an #include preprocessor directive in the source files that use them. Each .c compilation unit is compiled into an .o object file individually, and all of the object files are linked together by the names (symbols) of the declarations to form the final program.

Consider a module for parsing a configuration file into a data structure, which we’ll call cfgfile. The cfgfile.h header file might look like this:

#ifndef CFGFILE_H_
#define CFGFILE_H_

#include <stdbool.h>

/**
 * Configuration for the program.
 */
typedef struct {
    char *logfile_path;
    // ...
} config_t;

/**
 * Parses the text of a configuration file.
 */
bool parse(char *filetext, config_t **result);

#endif  // CFGFILE_H_

This might be implemented in a cfgfile.c source file like this:

#include "cfgfile.h"

#include <stdbool.h>

static char tempbuf[1024];

static void handle_token(char *token, int len) {
    // ...
}

bool parse(char *filetext, config_t **result) {
    // ...
}

This has the features we’re looking for: it defines a struct type named config_t that we intend callers of the module to know about, and a public function with the signature bool parse(char *, config_t **) for callers to use. Implementation details in cfgfile.c are hidden from the caller by the static declarations: callers can’t access tempbuf or handle_token, and could even declare their own tempbuf without conflicting with the module.

Unlike modern languages, the C language does not have a built-in notion of namespaces. Even though parse is declared in cfgfile.h, this fact is immaterial to the linking process. In the final program, there can be only one parse. If two source files try to provide a symbol named parse, the linker will fail because it doesn’t know which version to use when parse is called.
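The collision is easy to demonstrate with two throwaway files that both define parse (hypothetical filenames, assuming a POSIX cc on the path):

```shell
cat > a.c <<'EOF'
int parse(void) { return 1; }
EOF
cat > b.c <<'EOF'
int parse(void) { return 2; }
int main(void) { return 0; }
EOF

cc -c a.c -o a.o   # each unit compiles fine on its own...
cc -c b.c -o b.o
# ...but linking them together fails: two definitions of 'parse'
if cc a.o b.o -o clash 2>link.log; then
    echo "linked"
else
    echo "duplicate symbol error"
fi
```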

The solution is simple, but requires a bit of discipline: use the module name as a prefix for all non-static stuff.

typedef struct {
    // ...
} cfgfile_config_t;

bool cfgfile_parse(char *filetext, cfgfile_config_t **result);

This feels a bit unwieldy, but it makes some sense. There’s not much difference between cfgfile_parse and a (hypothetical) cfgfile::parse, at least when called from outside the (hypothetical) namespace.

Multi-file modules

Maybe it’s just habits I’ve formed from other languages, but I’d prefer to keep my source files shorter than what I have in mind for what would go in a module. I want each module to be a collection of source files in a subdirectory, and still retain the encapsulation benefits of a single C file as much as possible. For example:
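For the cfgfile module with a hypothetical cfgmap component (used as the running example below), that looks like:

```
src/
    cfgfile/
        cfgfile.c
        cfgfile.h
        cfgmap.c
        cfgmap.h
```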


As before, cfgfile.h declares our module exports, and cfgfile.c implements them. The module implementation spans across additional source files not intended for consumption outside of the module. cfgfile.c might #include "cfgmap.h", but nobody else should.

C doesn’t have a way to make symbols “static” across a set of files. The best we can do is use another naming convention: symbols shared across files within the module start with an underscore followed by the module name. Potential entries for cfgmap.h:


#ifndef _CFGFILE_CFGMAP_H_
#define _CFGFILE_CFGMAP_H_

typedef struct {
    // ...
} _cfgfile_cfgmap_map;

_cfgfile_cfgmap_map *_cfgfile_cfgmap_create_map(void);

#endif  // _CFGFILE_CFGMAP_H_


The underscore says “privacy, please,” but the module name is still needed in the prefix because these private symbols are still technically accessible across modules. Another module might have its own internal “create map” function, and we don’t want collisions. I also prefixed the header protection #define, to be thorough. Using both the module name and the filename in the prefix is probably overkill, but this is what keeps me from worrying about unintended consequences.

I stop short of adding a prefix to the header filename itself, because this is easily remedied another way: set the include path to src/, so all #includes must begin with the module directory name, such as #include "cfgfile/cfgfile.h". (We could also make a style choice to use all relative paths in #includes, but I think this is cleaner.) We’ll introduce the compiler setting for the include path later.

A module for main

This project is an application (as opposed to a library), so somewhere there must be a function with the signature int main(int argc, char **argv). Whether to put its source file in a submodule directory is an arbitrary choice because there’s little reason to treat main as if it’s in a module. Nevertheless:
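Under that convention, the layout is simply:

```
src/
    myapp/
        myapp.c
```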


When we discuss testing later, we’ll see that each test will be a small program with its own main routine. That makes myapp.c a difficult thing to test this way. This source file should probably contain very little code and export nothing. That makes it un-module-like. Locating this at src/myapp.c would also be a sensible choice.


Setting up Autoconf

Our source distribution has a local script named ./configure that knows everything our project needs. It consists of a bunch of code to gather user requests, tool locations, and dependencies, then generate a Makefile that can be executed by make.

We use a GNU Autotools tool, Autoconf, to generate the ./configure file from a specification, a file named configure.ac. The command autoreconf --install reads this file and generates ./configure (and a bunch of other temporary things).

Not everything goes in configure.ac. Much of the flexibility of Autotools comes from the fact that we still get to write Makefile rules when we need to. The most powerful rules are built in, and to use them we only need to define certain variables. These definitions and custom rules live in a file named Makefile.am. The generated ./configure command converts these into files named Makefile, which the make command uses in the end. This subsystem is known as Automake.

configure.ac lives in the project root. Let’s start with some canonical set-up:

AC_INIT([myapp], [0.1])
AM_INIT_AUTOMAKE([foreign -Wall -Werror -Wno-portability subdir-objects])
AC_PROG_CC
LT_INIT
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
See the Autotools tutorial for introductions to these directives. Briefly:

  • AC_INIT has the name of the application and a version number. These are used in various places, including the source distribution filename (myapp-0.1.tar.gz).
  • AM_INIT_AUTOMAKE sets up Automake with a few options.
  • AC_PROG_* check for common tools, such as a gcc-compatible C compiler.
  • LT_INIT sets up Libtool, which we will use in our module system.
  • AC_CONFIG_HEADERS generates a C header named config.h in the project root directory. This header contains #define preprocessor statements with values calculated by the configuration process.
  • AC_CONFIG_FILES must mention every Makefile we want in our project. In general, Autotools needs every file to be mentioned explicitly somewhere to be considered part of the project.
  • AC_OUTPUT triggers the generation of files described by previous configuration macros.


Each module will have its own build definitions and rules, and it’d be great if those rules could live in separate files, one per module. Autotools supports traversing subdirectories for Makefile.am files, as long as they are declared in parent Makefile.am files with a SUBDIRS = ... variable.
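In that style, each Makefile.am names its child directories, and each of those directories has its own Makefile.am (a sketch with hypothetical directory names):

```makefile
# Top-level Makefile.am, per-directory style (not what this project uses):
SUBDIRS = src tests

# src/Makefile.am would then declare its own children:
# SUBDIRS = cfgfile executor myapp
```

Every one of those Makefiles would also need an entry in AC_CONFIG_FILES in configure.ac.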

While this keeps Makefiles short, it has the serious disadvantage that Makefiles can’t see each other, and so can’t reliably depend on each others’ targets. I started out assuming I’d have one Makefile.am per directory, only to realize that this made the ordering of SUBDIRS significant: if a module wasn’t built by the time it was needed by another module, the build would fail. I had to manage the dependency tree manually with SUBDIRS, and I couldn’t build certain individual targets without building the entire project because the targets didn’t know about their dependencies in other files. (Not to mention I kept forgetting to add new Makefiles to configure.ac’s AC_CONFIG_FILES variable.)

So now I’m a fan of maintaining one large Makefile.am for the project. It’s easy to define common rules and variables, and keep descriptions of modules and module dependencies succinct.

Here’s a start:


ACLOCAL_AMFLAGS = -I m4

AM_CPPFLAGS = \
    -I$(top_srcdir) \
    -I$(top_srcdir)/src

This sets ACLOCAL_AMFLAGS as recommended by Libtool, and sets up that #include path. I put the project root in the include path also, so the program can reach the config.h file that ./configure generates. This is optional, but I like it.

Making programs

We want Automake to build a tool named myapp with at least src/myapp/myapp.c as a source file. Without any dependencies on other modules, this would look like this in Makefile.am:

bin_PROGRAMS = myapp

myapp_SOURCES = src/myapp/myapp.c

bin_PROGRAMS is a list of programs to build. For each program, we provide a xxx_SOURCES variable, where xxx is the name of the program. (If the program name contained weird characters like hyphens, the variable name would use underscores, e.g. some-tool would be some_tool_SOURCES.) Automake uses the SOURCES list to figure out that it needs to run the C compiler on the .c files to produce .o files, and to link all of those .o files together into a program named myapp. This is also how Automake figures out to include all SOURCES in the source distribution.
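For example, a hypothetical program with a hyphen in its name would be declared like this:

```makefile
bin_PROGRAMS = some-tool

# hyphens are canonicalized to underscores in variable names
some_tool_SOURCES = src/some-tool/main.c
```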

So how do we bring in the cfgfile module so that myapp.c can call it? One way would be to just add all of cfgfile’s source files to myapp_SOURCES. Not only would this become difficult to manage (Makefile variables might help), as our dependency tree grows, our build would slow down with unnecessary re-builds of all of these source files. There is a better way.

Making modules

We want each module to act like a library, a self-contained unit that encapsulates all of the module’s code and dependencies. It is built as needed and linked in everywhere it is used.

Just as Makefile.am defines PROGRAMS, it can also define LTLIBRARIES. “LT” stands for Libtool, a mechanism for packaging a shared library and its dependencies into a file whose name ends in .la. Autotools can build and install shared libraries into the user’s system library directory for use by other program builds.

We don’t actually want that in this case: our modules will only ever be used to build this program, and we don’t want to clutter the user’s system with unnecessary files. Instead, we can tell Autotools to build a module as a library, but then not install it. The noinst_LTLIBRARIES variable declares a list of LT libraries to build for the sole purpose of supporting other builds. These are known as convenience libraries because they just help us organize our code and build logic.

An LT library has a name beginning with lib and ending in .la, so our cfgfile module looks like this in Makefile.am:

noinst_LTLIBRARIES = \
    libcfgfile.la

libcfgfile_la_SOURCES = \
    src/cfgfile/cfgfile.c \
    src/cfgfile/cfgfile.h \
    src/cfgfile/cfgmap.c \
    src/cfgfile/cfgmap.h
To declare that cfgfile is a dependency of the myapp program, set myapp_LDADD like so:

myapp_LDADD = libcfgfile.la

To make one library module depend on another (library) module, use LIBADD instead of LDADD. For example, say a module named executor depends on cfgfile:

noinst_LTLIBRARIES = \
    libcfgfile.la \
    libexecutor.la

# libcfgfile_la_SOURCES = ...

libexecutor_la_SOURCES = \
    src/executor/executor.c \
    src/executor/executor.h
libexecutor_la_LIBADD = \
    libcfgfile.la
In some documentation and examples, you’ll see LIBRARIES that are not LTLIBRARIES, with names ending in .a instead of .la. The main difference is an LT library builds in all of the library’s dependencies. With plain libraries, each module would contain its own object code, but the program rule would need to explicitly depend on all of its dependencies’ dependencies. This leaks an implementation detail between modules. In practice, we could just list every module in myapp_LDADD and avoid LIBADD entirely, but I like not having to think about this.

You may wonder, as I did, whether there is an issue with two modules both depending on a common module, then both being dependencies for something else, aka a diamond dependency. Do the two copies of the common module conflict? Apparently not! The linker is smart enough to consolidate the results.
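Sketched in Makefile.am terms, with a hypothetical libutil.la at the bottom of the diamond:

```makefile
noinst_LTLIBRARIES = libutil.la libparser.la libexecutor.la

# both mid-level modules pull in libutil.la...
libparser_la_LIBADD = libutil.la
libexecutor_la_LIBADD = libutil.la

# ...and myapp links both; the linker consolidates libutil's symbols
myapp_LDADD = libparser.la libexecutor.la
```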

Trying out the build

The following commands use the files we just created to generate the configuration script, run it, then make the program:

autoreconf --install
./configure
make

The myapp program appears in the project root directory. If we were to make install, it would get copied to our system’s /usr/local/bin directory, available for people to use.

During development, when you change a source file, you only need to re-run make. When you change a Makefile.am, re-run ./configure then make. When you change configure.ac, re-run autoreconf --install, ./configure, then make.

The generated Makefile is usually smart enough to re-run ./configure automatically as needed, but this only works if the Makefile is functional. If something isn’t working, it is safe to re-run these commands to confirm Autotools configuration isn’t the issue.

Wrangling intermediate build files

Oh my gawd what are all those files doing in our project?? Yes, Autotools just made a big mess, spitting out helper scripts, log files, and intermediate forms of various things. None of these files should be committed to your source repo.

GitHub provides a default .gitignore file for Autotools projects that prevents build artifacts from getting committed. We definitely want all of those rules in our .gitignore file. I found a bunch more as I was setting up my project.

Is there a way to get Autotools to just… not do that? Like, put everything in a build/ subdirectory? Yes and no. Autoconf (the first autoreconf step) needs to produce files in the project root.

Assuming our Makefiles are written correctly, Automake is a little more flexible: you can run the configure script from any directory. That directory becomes the build tree, and all subsequent files are generated there. The location of the configure script is considered the source tree. Automake knows how to find what it needs in both locations, and avoids writing to the source tree. When we tell it to build the source distribution, it will test this by making the source files read-only while it does a build.

autoreconf --install
mkdir build
cd build
../configure
make

I say you can run the configure script, but what I really mean is the user can run the configure script. Nothing about the contents of the configure.ac and Makefile.am files themselves should assume the location of the build tree. You can write a utility script of your own to divert Automake output if you like. (Put it in scripts/.) Do not write rules that assume one way or the other.

OK, so if it’s so common to just let Autotools make a big mess, is there a way to clean it up? Yes and no:

  • make clean does a good job deleting files produced by make. Automake generates this rule for you in the Makefile it produces.
  • make distclean goes further and mops up the generated Makefile file and a few others. You can’t run make again until you re-run ./configure to re-generate Makefile.
  • There’s also a make maintainer-clean which is the same as distclean by default.

None of these rules undoes what autoreconf does. This is by design: these rules are available in the source distribution you send to users, and users are not expected to have Autotools installed, so they can’t (and shouldn’t) run autoreconf.

I settled on the following:

  1. Let Autoconf, Automake, and make spew files into my source tree.
  2. Block generated files from my repo with a thorough .gitignore, based on the GitHub starter, extending it as needed.
  3. As I add new rules that generate their own intermediate files, add these to .gitignore. Also add them to the CLEANFILES variable in the appropriate Makefile.am, which adds them to the list of files deleted by make clean.

Technically that’s sufficient if your Makefiles are written correctly. As I was setting up my first project, I discovered many cases where I had an error that generated a bad file, didn’t clean up automatically, and got stuck with bad behavior until I deleted it myself.

So I wrote a tool that “super-cleans” the project directory, deleting every file ignored by .gitignore. It also cleans up any files that were created in Git submodules, which in my case are always build artifacts because I am not editing submodules via the project directory. (We’re about to use submodules to bring in a testing library.)
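A rough shell equivalent of that “super-clean” (the author’s actual tool is a Python script; this sketch uses git clean, demonstrated in a hypothetical scratch repository):

```shell
# demo in a scratch repo: `git clean -dfX` deletes only ignored files
mkdir scratch-repo && cd scratch-repo
git init -q
echo '*.o' > .gitignore
touch main.c main.o
git add .gitignore main.c
git -c user.email=demo@example.com -c user.name=demo commit -qm 'initial'

git clean -dfX   # removes main.o (ignored), keeps main.c (tracked)
# for submodules, something like:
#   git submodule foreach --recursive git clean -dfx
ls               # → main.c
```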

python3 scripts/superclean.py

This gives me peace of mind that my project repo is buildable from scratch without any of the build artifacts giving me a false sense of security.

Installing Unity Test and CMock

I’m used to using automated unit tests as a regular part of my development workflow. I wouldn’t impress a Test-Driven Development aficionado, but I happily annoy my teammates with an insistence on high test coverage.

For a test framework, I went with Unity Test and CMock by ThrowTheSwitch.org. The C code is lean without dependencies on libraries or specific build systems. They use Ruby scripts for code generation, so this adds a dependency on Ruby, but only for running tests. Source distro users can still build and install with just the POSIX tools.

Unity Test and CMock are MIT licensed, which is compatible with my project’s license, so I can include them in my project and source distribution. I could have downloaded and copied CMock into my third-party/ directory, and you might want to do it that way for the security of freezing a local copy. I used Git submodules to formalize the relationship directly in my repo metadata. CMock brings in Unity Test (and their CException library) recursively, so:

mkdir third-party
cd third-party
git submodule add https://github.com/ThrowTheSwitch/CMock.git
git submodule update --init --recursive

This creates .gitmodules in the project root, and downloads CMock into third-party/CMock/. Unity Test is inside, under third-party/CMock/vendor/unity/.

Be sure to mention in your project’s developer documentation that contributors should git clone your project repo with the --recurse-submodules option, or run git submodule update --init --recursive after cloning.

Users building from the source distribution will not need Git in any form, as we’ll soon see.

A build tool might spit out some temporary files inside the submodule directory. I’m not contributing to these submodules by editing them locally, so this is OK, but it confuses tools trying to report status. Edit .gitmodules and add ignore = dirty, like so:

[submodule "third-party/CMock"]
  path = third-party/CMock
  url = https://github.com/ThrowTheSwitch/CMock
  ignore = dirty

Commands like git status now know to ignore anything that Autotools creates under third-party/CMock/.

Alas, VSCode’s Git support does not know to honor ignore = dirty for submodules, and will draw annoying colored dots in the file explorer for third-party/ after a test build. I try to ignore it, and when I really want it to go away I use my superclean.py script. 😅

For the source distribution to work, it needs the files from Unity Test and CMock that we depend on in the Makefile. Everything listed under a SOURCES rule is already included, but this does not include the Ruby scripts. To make sure they also get included, define EXTRA_DIST in Makefile.am. (Also include CMock’s LICENSE.txt because we’re distributing CMock sources.)

EXTRA_DIST = \
    README.md \
    third-party/CMock/LICENSE.txt \
    third-party/CMock/README.md \
    third-party/CMock/config \
    third-party/CMock/lib \
    third-party/CMock/scripts \
    third-party/CMock/vendor/unity/LICENSE.txt \
    third-party/CMock/vendor/unity/README.md \
    third-party/CMock/vendor/unity/auto

I tried just listing third-party/CMock here, but this can accidentally include some build artifacts in the final source distribution that cause the distribution verification build to fail.

Setting up Unity Test with Autotools

The CMock project repo has Meson build files in it, but thankfully we don’t need those. There are only a few C sources, and we can refer to them directly in our own build rules, or make them an LT library to avoid unnecessary rebuilds. We’ll also need new custom rules for running the Ruby scripts to generate source files.

I want to be extra supportive to my users regarding the Ruby dependency. I could just let the ruby command in my build rules fail if it’s not present, but it’s more polite to explicitly check for it and fail with a message if needed. It should not be required for the main build, and the test build should print a useful message and abort.

Let’s get Autoconf to help, because part of its job is to look for tools. In configure.ac:

AC_PATH_PROG([RUBY], [ruby])

This looks for ruby on the command path, and either sets the RUBY variable to the path to the tool if found, or sets it to the empty string if not found. Our Makefile rules that run Ruby scripts can start with this line to test this variable and bail with a nice error message:

    @test -n "$(RUBY)" || { echo "\nPlease install Ruby to run tests.\n"; exit 1; }

We could have used a different configure.ac macro to cause the ./configure step to fail if Ruby is not found, but again, I want the main build to work without Ruby.

We will put test source files under tests/, in a subdirectory named after the module under test. Test suites are C source files whose names start with test_.
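For the cfgfile module, that looks like:

```
tests/
    cfgfile/
        test_cfgfile.c
```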


A module test suite includes the header file of the module under test, and the unity.h header. Each test is a function with a name starting with test_. It can have an optional setUp() and tearDown(). Unity provides various assertions with useful error reporting.

#include "cfgfile/cfgfile.h"
#include "unity.h"

void setUp(void) {}

void tearDown(void) {}

void test_cfgfileFunc_ReturnsZero(void) {
  TEST_ASSERT_EQUAL_MESSAGE(0, cfgfile_func(999), "func returns 0");
}

See the Unity Test docs and more docs for details on how to write test suites. It’s good stuff.

Notice there’s no main() in the test suite. Unity Test generates a runner program with a main routine that does all the setting up, tearing down, and calling of test functions.

Let’s set up the test rules. Add the following lines to Makefile.am:

## Top of Makefile.am

check_LTLIBRARIES = libcmock.la
libcmock_la_SOURCES = \
    third-party/CMock/src/cmock.c \
    third-party/CMock/src/cmock.h \
    third-party/CMock/src/cmock_internals.h \
    third-party/CMock/vendor/unity/src/unity.c \
    third-party/CMock/vendor/unity/src/unity.h \
    third-party/CMock/vendor/unity/src/unity_internals.h
libcmock_la_CPPFLAGS = \
    -I$(top_srcdir)/third-party/CMock/vendor/unity/src \
    -I$(top_srcdir)/third-party/CMock/src

CLEANFILES = tests/runners/runner_test_*.c
check_PROGRAMS =

## cfgfile module

check_PROGRAMS += tests/runners/test_cfgfile

# Replace these indents with tabs!
tests/runners/runner_test_cfgfile.c: tests/cfgfile/test_cfgfile.c
    @test -n "$(RUBY)" || { echo "\nPlease install Ruby to run tests.\n"; exit 1; }
    mkdir -p tests/runners
    $(RUBY) $(top_srcdir)/third-party/CMock/vendor/unity/auto/generate_test_runner.rb $< $@

tests/cfgfile/tests_runners_test_cfgfile-test_cfgfile.$(OBJEXT): \
    tests/runners/runner_test_cfgfile.c \
    libcfgfile.la \
    libcmock.la

tests_runners_test_cfgfile_SOURCES = \
    tests/cfgfile/test_cfgfile.c \
    src/cfgfile/cfgfile.h
nodist_tests_runners_test_cfgfile_SOURCES = \
    tests/runners/runner_test_cfgfile.c
tests_runners_test_cfgfile_LDADD = \
    libcfgfile.la \
    libcmock.la
tests_runners_test_cfgfile_CPPFLAGS = \
    -I$(top_srcdir)/third-party/CMock/vendor/unity/src \
    -I$(top_srcdir)/third-party/CMock/src \
    $(AM_CPPFLAGS)

## Bottom of Makefile.am

TESTS = $(check_PROGRAMS)
Remember that in a Makefile.am, just like in a Makefile, shell commands in rule definitions need to be indented with tab characters. Copy-pasting from this article won’t do this correctly (thanks to my Markdown editor, sorry), so be extra careful if you’re following along. The commands in the runners rule must be indented with tabs.

And oh my gawd the error messages for incorrect indentation are unhelpful.

check_LTLIBRARIES builds Libtool libraries specifically for testing (which Autotools calls “checking”). We define such a library for CMock sources, so it only gets built once for the project then linked into every test runner. libcmock.la definitions only need to appear once for the entire file.

check_PROGRAMS is similar to bin_PROGRAMS, but this specifically declares programs that should be built for the purposes of testing. TESTS is the list of programs to run when testing, which in this case is identical to check_PROGRAMS so we just make them equal.

tests/runners/runner_test_cfgfile.c: is a Makefile rule (not an Automake definition) for generating the test runner. When this file is needed, the rule invokes Unity Test’s generator script with tests/cfgfile/test_cfgfile.c as an argument. The generated runner sources match the pattern in CLEANFILES, so make clean knows to delete them.

tests/cfgfile/tests_runners_test_cfgfile-test_cfgfile.$(OBJEXT): is another rule, though this rule has no commands, only dependencies. The rule name is the name of the object file for the first test source, tests/cfgfile/test_cfgfile.c. The tests_runners_test_cfgfile- prefix is the canonicalized name of the program that depends on it. $(OBJEXT) is the object filename extension for the target platform (such as, but not always, .o). This rule says, “Before you try to compile the test sources, make sure to trigger the rule that generates runner_test_cfgfile.c, and the rules for the related convenience libraries.”

The convenience libraries are listed as dependencies here in case any library build generates a header file that we need to compile this test. LDADD causes the dependencies to be built, but not necessarily before the test sources are compiled.

We declared a tests/runners/test_cfgfile program, so it has tests_runners_test_cfgfile_SOURCES, which lists our test source and the header for the module under test. nodist_tests_runners_test_cfgfile_SOURCES lists the generated runner source; the nodist_ prefix prevents the generated source from being included in the source distribution. LDADD links in the dependency libraries, and CPPFLAGS adds Unity Test, CMock, and the project src/ directory to the #include path.

The TESTS definition lists all programs that should be run when testing the project. For our purposes, it can be identical to check_PROGRAMS, so we equate them at the bottom of the file.

It’s a good practice to make sure generated sources are generated by the user’s build. Relying on Automake to generate sources and then including the generated results in the source distribution is fragile and not the intent of Automake’s design.

The Built Sources Examples section of the Automake manual proposes several ways to trigger the generation of sources. The technique used here, with the dependency list on the first source file’s object file, allows the test runner to be built in isolation from a clean build tree. Alternatively, we could have defined BUILT_SOURCES as a list of the generated files to be built before building any targets, but as described in the manual, this would only trigger for make all, make check, and make install. With this method, we can make tests/runners/runner_test_cfgfile directly.

We can now run unit tests. To build and run every test suite:

make check

To build every test suite (as needed) and run a specific test:

make check TESTS='tests/cfgfile/test_cfgfile'

Setting up CMock

CMock takes a module’s external header file (such as src/cfgfile/cfgfile.h) and generates source for a mock version of the module with all of the same functions. The mock version allows tests to declare expected inputs and outputs for the functions, for the purposes of testing.

Why would we want this? If we’re writing tests for a module, we want our tests to call the module’s actual code. But we don’t want it to call the actual code of a module on which the module under test depends. In an extreme case, using all real code requires setting up a test environment that can run the entire program. For this to be a unit test, it should only test the one module.

If executor is the module under test, and it depends on cfgfile, we can build the runner program for executor’s test suite so that it links with the mock version of cfgfile instead of the actual cfgfile. This ensures that only executor code is being tested, and avoids propagating test environment dependencies to every test suite.
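To make the mock semantics concrete, here is a hand-rolled sketch of what an ExpectAndReturn-style mock boils down to. This is not CMock’s generated code (which queues multiple expectations and verifies they were all consumed); the names mirror the cfgfile example:

```c
#include <assert.h>

/* State for a single armed expectation on cfgfile_func. */
static int expected_arg;
static int canned_retval;
static int expect_armed = 0;

/* The test calls this to declare the expected input and output. */
void cfgfile_func_ExpectAndReturn(int arg, int retval) {
    expected_arg = arg;
    canned_retval = retval;
    expect_armed = 1;
}

/* This definition stands in for the real cfgfile_func at link
 * time: it checks the argument and returns the canned value. */
int cfgfile_func(int arg) {
    assert(expect_armed && "cfgfile_func called without an expectation");
    assert(arg == expected_arg);
    expect_armed = 0;
    return canned_retval;
}
```

When the test runner links this object instead of the real cfgfile implementation, every call the module under test makes to cfgfile_func is verified against the expectation and answered with the canned value.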

In Makefile.am:

# Replace these indents with tabs!
tests/mocks/mock_cfgfile.c: src/cfgfile/cfgfile.h
    @test -n "$(RUBY)" || { echo "\nPlease install Ruby to run tests.\n"; exit 1; }
    mkdir -p tests/mocks
    CMOCK_DIR=$(top_srcdir)/third-party/CMock \
    MOCK_OUT=tests/mocks \
    $(RUBY) $(top_srcdir)/third-party/CMock/scripts/create_mock.rb $<

check_LTLIBRARIES += libcfgfile_mock.la
nodist_libcfgfile_mock_la_SOURCES = \
    tests/mocks/mock_cfgfile.c \
    tests/mocks/mock_cfgfile.h
libcfgfile_mock_la_CPPFLAGS = \
    -I$(top_srcdir)/third-party/CMock/vendor/unity/src \
    -I$(top_srcdir)/third-party/CMock/src \
    -I$(top_srcdir)/src \
    -I$(top_srcdir)/src/cfgfile
libcfgfile_mock_la_LIBADD = libcmock.la

CLEANFILES += tests/mocks/mock_cfgfile.c tests/mocks/mock_cfgfile.h

Remember to use tabs to indent the commands in the mock generation rule!

This should be familiar. The only tricky bit is we add src/cfgfile to the include path. CMock’s generated mock source attempts to #include the original module header without a leading path, so we add the module source root to the include path. (There’s an undocumented config option for CMock to add a header prefix, but I couldn’t see how to pass it on the command line, so I’m doing this instead.)

Earlier we imagined a module named executor that depends on cfgfile, so let’s continue that example. Set up unit tests for executor, linking in the libexecutor.la and libcfgfile_mock.la libraries.

check_PROGRAMS += tests/runners/test_executor

# Replace these indents with tabs!
tests/runners/runner_test_executor.c: tests/executor/test_executor.c
    @test -n "$(RUBY)" || { echo "\nPlease install Ruby to run tests.\n"; exit 1; }
    mkdir -p tests/runners
    $(RUBY) $(top_srcdir)/third-party/CMock/vendor/unity/auto/generate_test_runner.rb $< $@

tests/executor/runners_test_executor-test_executor.$(OBJEXT): \
    tests/runners/runner_test_executor.c \
    tests/mocks/mock_cfgfile.c \
    tests/mocks/mock_cfgfile.h \
    libcmock.la \
    libcfgfile_mock.la \
    libexecutor.la

tests_runners_test_executor_SOURCES = \
    tests/executor/test_executor.c
nodist_tests_runners_test_executor_SOURCES = \
    tests/runners/runner_test_executor.c
tests_runners_test_executor_LDADD = \
    libcmock.la \
    libcfgfile_mock.la \
    libexecutor.la
tests_runners_test_executor_CPPFLAGS = \
    -I$(top_srcdir)/third-party/CMock/vendor/unity/src \
    -I$(top_srcdir)/third-party/CMock/src \
    -I$(top_srcdir)/src \
    -Itests/mocks

Now our tests/executor/test_executor.c can engage the mock cfgfile before calling executor methods, like so:

#include "executor/executor.h"
#include "mock_cfgfile.h"
#include "unity.h"

void test_Square_UsesExampleTwo(void) {
  cfgfile_func_ExpectAndReturn(7, 49);
  int result = executor_doit(7);
  TEST_ASSERT_EQUAL_INT(49, result);
}

This assumes the executor module exports an int executor_doit(int) function that calls cfgfile_func. I’ll let you invent your own definition for the example. 😄
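If you’d rather not invent your own, here is one hypothetical pair of definitions that would satisfy the test: executor_doit simply delegates to cfgfile_func, which is exactly what lets the mock verify the argument and control the return value. The x * x body is arbitrary, since the mock replaces it during the executor tests anyway.

```c
/* Prototypes, normally from cfgfile/cfgfile.h and
 * executor/executor.h. */
int cfgfile_func(int x);
int executor_doit(int x);

/* src/cfgfile/cfgfile.c — a stand-in real implementation; in the
 * executor test runner it is replaced by the generated mock. */
int cfgfile_func(int x) {
    return x * x;
}

/* src/executor/executor.c — the module under test forwards to
 * its dependency. */
int executor_doit(int x) {
    return cfgfile_func(x);
}
```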

Re-run our build commands, then make check to see the new test in action:

make check

Too much boilerplate!

We walked through Makefile.am code for setting up test runners and mocks just to understand it better, but that’s too much boilerplate to manage for each module. I just want to be able to say the name of a module and the modules it depends on, and have everything just work.

I resisted writing a module management system, I really did. I researched best practices for a solid week, hoping to find a canonical C language workflow with all of the modern features I’m used to. Perhaps unsurprisingly, there isn’t one: not even GNU Autotools itself has exactly one way of doing things.

I had an early version that minimized repeated text using Makefile variables and pattern rules, which helped some. Plain Makefiles have a way to define variables to contain entire templated rules, which would be super-useful here and almost a complete solution. Unfortunately, while Automake can pass through some Makefile definitions to the final Makefile, it does not support multi-line variable definitions or full-text macros. Regardless, what I actually need is for those templates to expand to Automake definitions.

I concluded that it’s just easier to reason about modules, and easier to write corresponding templated Automake rules, if I use another layer of code generation. Here also I tried looking for prior art. Gnulib has a module system called gnulib-tool that takes module description files and generates configure.ac and Makefile.am. I was running out of patience with on-boarding at this point, and decided that a little bit of Python within the repo was better than adding another tool dependency.

Yes, this means Python itself is a new dependency. I could have used Ruby, which is already needed for CMock, or Perl, which is already used by Automake, or C, which is the language of choice for the project itself. I could have even used m4, the macro language on which Autoconf is based and is therefore included with Autotools. If you want to port my scripts for me, have at it. 😉

It’s tempting to rewrite at least the code generator rules as pattern rules, so they only need to appear once in the file. Unfortunately, the way we’ve written them would require that the % pattern substitution happen more than once (the directory name and the source name), which is not supported by pattern rules.

Luckily we’re about to introduce a Makefile generation tool, so we won’t need to worry about too much repetition.

I don’t really understand how Autotools can have an entire macro language in it and not make it more accessible to Automake. Maybe I’m fundamentally misunderstanding how Autotools is meant to be customized, but I didn’t see any good examples of macro-izing Makefile.am in books and manuals.

Generating Makefile.am

I took everything covered in this article and rewrote it as a tool that scans the src/ and tests/ directories and generates the Automake definitions for all of the library and program modules, along with a preamble and postamble. Each module has a module.cfg file that declares whether the module is a library or program, and declares which library modules it depends on. For example, here’s src/executor/module.cfg:

library = executor
deps = cfgfile

The script generates a complete Makefile.am:

python3 scripts/makemake.py

You run the script every time you would normally edit Makefile.am, such as after creating or deleting a source file, or modifying a module.cfg file.

This greatly simplifies the management of Makefile rules—assuming the project can be organized into modules as we’ve described them.

See the README for instructions on how to use makemake.py and module.cfg files. I have opted to commit the generated Makefile.am to the source repo, so you can browse the example Makefile.am. There are a few other fun features in there, too.

To completely reset the workspace:

python3 scripts/superclean.py
python3 scripts/makemake.py
autoreconf --install


It’s time to make the donuts!

Run this command:

make distcheck

This produces the source distribution, myapp-0.1.tar.gz. Give it to your friends.

As if that weren’t already kind of amazing, distcheck also tests the source distribution. It does the whole tar xzf and ./configure and make process off in a temporary directory to make sure all of the generated rules run in isolation from your project directory. It also runs make check to perform your test suite on the result.

Next steps

This C Autotools project starter template and associated tools provide a simple way to organize, build, test, and distribute applications written in C. I’ll probably be revising it as I realize its shortcomings over the course of a real project.

As usual, I’m eager for feedback, especially if I’m overlooking Autotools or C project best practices, or popular tool alternatives.