Managing Makefiles in git

Author: Mario Antonioletti
Posted: 7 Apr 2016 | 16:36

I have become a bit of a fan of the distributed revision control provided by git. In my day-to-day work at EPCC, I find myself developing and running code across multiple machines. Trying to keep a code base coherent across all these systems would be a bit of a nightmare were it not for git or another revision control system. Arguably, SVN would work as well, but I somewhat lost my faith in SVN after trying to commit files over a slow and unstable connection while travelling on a train.

I like git's ability to work offline: being able to commit locally when there is a need, and later push my changes to whatever canonical repository is being used. However, one of the annoyances with git lies in working with Makefiles.

If you do not know what a Makefile is, it is worth looking up, as it could save you a lot of time and effort when you have to compile a large number of files. There is a good tutorial on Makefiles on the Software Carpentry website.
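To give a flavour of what such a tutorial covers: a Makefile is a set of rules, each naming a target, the files it depends on, and the commands that build it. A minimal, purely illustrative example (the file names and program are hypothetical, not from any real project) might look like this:

```make
# Hypothetical two-file Fortran project: "make" builds prog,
# recompiling only the objects whose sources have changed.
FC      = gfortran
FCFLAGS = -Wall -c

prog: main.o utils.o
	$(FC) -o prog main.o utils.o

main.o: main.f90
	$(FC) $(FCFLAGS) main.f90

utils.o: utils.f90
	$(FC) $(FCFLAGS) utils.f90
```

Running make compares file timestamps and rebuilds only what is out of date, which is where the time saving on large projects comes from. (Note that the indented command lines must begin with a tab character.)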

Typically, at the top of a Makefile you will have a set of macros that specify which compilers you are using to compile your code, e.g.:

FC        = gfortran                     # Compiler to use
LIBS      = -L/somepath/ -lsomelibrary   # Libraries to link
FCFLAGS   = -Wall -c                     # Compiler flags
OPTFLAGS  = -O3 -ffast-math              # Optimisation flags
LINKFLAGS = $(OPTFLAGS) -static-libgcc   # Link flags

As I develop code across a number of different machines, which invariably have different compilers available, it is useful to record in the Makefile which compiler to use and any compiler-specific flags - one less thing to remember.

For instance, to compile a Fortran code on my laptop I would use gfortran. On ARCHER, the UK national supercomputing service based here in Edinburgh, Cray provides a compiler wrapper, ftn, which is the recommended way to compile Fortran codes; under the bonnet it could be invoking gfortran, the Intel Fortran compiler ifort, or Cray's very own Fortran compiler crayftn. Using the wrapper is good because it hides a lot of the library linking and other compiler flags that would otherwise be needed, but you still have to be aware of which compiler is being used under the hood if you are using compiler-specific flags. Another system has a bare, more up-to-date version of the Intel Fortran compiler, and the list goes on.

Keeping your code synchronised is one thing, but doing the same for a Makefile rapidly becomes an annoyance. You want to have the Makefile in your repository, but you do not want to continually edit it to reflect the change in compilers and the corresponding flags every time you move to a different system.

I managed to find a work-around for this: have the default set of macros at the top and then override them, based on the host name, according to which system you are logged in to. A simplified excerpt of the resulting Makefile, with explicit line numbers, looks like this:

 1  # Default flags
 2  FC        = mpif90
 3  LIBS      = -L/somepath/.. -lfftw3_threads -lfftw3
 4  FCFLAGS   = -Wall -c
 5  OPTFLAGS  = -O3 -ffast-math
 6  LINKFLAGS = $(OPTFLAGS) -static-libgcc
 8  # Identify the hostname.
 9  HOSTNAME:=$(shell hostname)
11  ifeq ($(HOSTNAME),ultra) # ultra (intel based)
13    FC        = mpif90
14    LIBS      = -L/somepath2/..  -lfftw3_threads -lfftw3
15    FCFLAGS   = -c -xHost
16    OPTFLAGS  = -O3 -no-ipo -no-prec-div -recursive -openmp
17    LINKFLAGS = $(OPTFLAGS) -static-libgcc
19  else ifeq ($(HOSTNAME),phi.hydra) # phi.hydra (intel based)
21    FC       = mpiifort
22    LIBS     = -L/somepath/... -lfftw3_threads -lfftw3
23    FCFLAGS  = -c
24    OPTFLAGS = -O3 -g -qopenmp
27  else ifeq (eslogin,$(findstring eslogin,$(HOSTNAME))) # ARCHER 
29    FC         = ftn
30    LIBS       =  -lfftw3_threads -lfftw3
31    FCFLAGS    = -c
32    INTELFLAGS = -no-ipo -no-prec-div -recursive -openmp
36  endif

Lines 1-6 specify a default set of compile options; in this case we would be compiling an MPI (message passing) code. On line 9 we find out which host we are running on, and we compare this against specific hosts on lines 11, 19 and 27. The comparison on line 27 is a little more complicated: when you log in to ARCHER you can end up on any of a number of front-end nodes (eslogin001, eslogin002, etc.), so instead of comparing the whole name we check whether HOSTNAME contains the string eslogin. findstring returns eslogin if it is present, so the ifeq matches on every front-end node.
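You can see the findstring trick from line 27 in isolation by feeding make a tiny makefile on standard input (the hostname here is hard-wired purely for demonstration; on a real system it would come from $(shell hostname) as above):

```shell
# Demonstrate the check from line 27: findstring returns "eslogin"
# when it occurs anywhere in HOSTNAME, so the ifeq succeeds for
# eslogin001, eslogin002, and so on.
make -s -f - <<'EOF'
HOSTNAME := eslogin002
ifeq (eslogin,$(findstring eslogin,$(HOSTNAME)))
all: ; @echo matched
else
all: ; @echo no-match
endif
EOF
```

This prints matched; changing HOSTNAME to anything without eslogin in it prints no-match instead.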

Using this mechanism one does not have to continually commit files to git or clobber the Makefile. I, at least, found this useful when developing, and it made the interaction between git and Makefiles less of an annoyance. There is still an issue when switching between debugging and optimised versions of the code. If someone knows a better way of managing Makefiles in git then please let me know.
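For the debug-versus-optimised switching, one common pattern (a sketch only, not part of my Makefile; DEBUG is a hypothetical variable name) is to pick the flags from the command line rather than editing the file, by running make DEBUG=1 for a debug build and plain make otherwise:

```make
# Hedged sketch: "make" gives an optimised build, "make DEBUG=1"
# a debugging one, with no edits to the committed Makefile.
ifdef DEBUG
  OPTFLAGS = -O0 -g
else
  OPTFLAGS = -O3 -ffast-math
endif
LINKFLAGS = $(OPTFLAGS) -static-libgcc
```

This composes with the hostname blocks above, since the conditional simply chooses which value OPTFLAGS takes before the rules are run.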


Nice post Mario! I think of Git as a more complicated way to do SVN for scenarios where you need to be able to do dev work out of contact with a network connection. It's a trade-off between the more complicated revision procedures and the convenience of not having to be on the network all the time. As I spend the majority of my time anchored to the network I find SVN easier and quicker to work with, but I can see the appeal of Git.


I never branched in SVN - I read somewhere that it was complicated and that merging back could be a problem. I branch all the time in git now, so it's not just the offline capabilities that have won me over ...

Matching on the host rather than on the compiler version itself seems like a somewhat fragile method, susceptible to breaking on system updates. Matching on the compiler version would probably also make it easier to move to a new machine.

In my experience, this sort of switching seems more commonly moved to e.g. Autotools, or CMake. Are you not using something like that to automatically determine qualities of the build environment? (I know they can be a pain but definitely worth it for release.)


I have used both before and indeed, both are wonderful when they work, but when they go wrong it's a mighty headache to get them fixed. I never quite got proficient in either - I find that autotools has a steep learning curve; cmake less so, but it can still be awkward when it goes wrong. Maybe I shall have a look at them again. Thanks for the suggestion.
