Wednesday, August 20, 2014

Continuations and Exceptions

Summary: In moving Shake to continuations, exceptions were the biggest headache. I figured out how to somewhat integrate continuations and exception handling.

The git repo for Shake now suspends inactive computations by capturing their continuation instead of blocking their thread, based on the continuations I described in a previous blog post. The most difficult part was managing exceptions. I needed to define a monad where I could capture continuations and work with exceptions, requiring the definitions:

data M a = ... deriving (Functor, Applicative, Monad, MonadIO)
throwM :: SomeException -> M a
catchM :: M a -> (SomeException -> M a) -> M a
captureM :: ((a -> IO ()) -> IO ()) -> M a

I'm using M as the name of the monad. I want equivalents of throwIO and catch for M, along with a function to capture continuations.

The first observation is that, since catchM must catch any exception (including those thrown by users calling error), throwM can be defined as:

throwM = liftIO . throwIO

Using throwIO gives better guarantees about when the exception is raised, compared to just throw.

The second observation is that sometimes I want to raise an exception on the continuation, rather than passing back a value. I can build that on top of captureM with:

captureM' :: ((Either SomeException a -> IO ()) -> IO ()) -> M a
captureM' k = either throwM return =<< captureM k

The third observation (which I only arrived at after a few weeks of trying to avoid it) is that the continuation may never be called, and that means you cannot implement a robust finallyM function. In particular, if the person who was intending to run the continuation themselves raises an exception, the continuation is likely to be lost. I originally tried to come up with schemes for constraining the function that receives the continuation, so as to guarantee the continuation was called, but it became messy very quickly.
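
To illustrate the failure mode, here is a sketch (my example, not code from Shake): the function holding the continuation fails before ever invoking it, so everything sequenced after the captureM - including any would-be finalizer - is lost.

-- The holder of k dies, so k is dropped, and anything sequenced
-- after this captureM (such as a finallyM cleanup) never runs.
lostContinuation :: M ()
lostContinuation = captureM $ \k ->
    throwIO (userError "the code holding k failed")  -- k is never called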

The properties we expect of the implementation, to a rough approximation, include:

  • catchM (x >> throwM e) (\_ -> y) >> z === x >> y >> z -- if you throw an exception inside a catchM, you must run the handler.
  • captureM (\k -> x) >>= y === x -- if you execute something not using the continuation inside captureM it must behave like it does outside captureM. In particular, if the captureM is inside a catchM, that catchM must not catch the exception.
  • captureM (\k -> k x) >>= y === x >>= y -- if you capture the continuation then continue that must be equivalent to not capturing the continuation.
  • captureM (\k -> k x >> k x) >>= y === (x >>= y) >> (x >>= y) -- if you run the continuation twice it must do the same IO actions each time. In particular, if the first gets its exceptions caught, the second must do also.

These properties are incomplete (there are other things you expect), and fuzzy (for example, the second property isn't type correct) - but hopefully they give an intuition.

The implementation was non-trivial and (sadly) non-elegant. I suspect a better implementation is known in the literature, and I'd welcome a pointer. My implementation defines M as:

type M a = ContT () (ReaderT (IORef (SomeException -> IO ())) IO) a

Here we have a continuation monad wrapping a reader monad. The reader contains an IORef which stores the exception handler. The basic idea is that whenever we start running anything in M we call the Haskell catch function, and the exception handler forwards to the IORef. We can define catchM as:

catchM :: M a -> (SomeException -> M a) -> M a
catchM m hdl = ContT $ \k -> ReaderT $ \s -> do
    old <- readIORef s
    writeIORef s $ \e -> do
        s <- newIORef old
        hdl e `runContT` k `runReaderT` s `E.catch`
            \e -> ($ e) =<< readIORef s
    flip runReaderT s $ m `runContT` \v -> do
        s <- ask
        liftIO $ writeIORef s old
        k v
  • We store the previous exception handler as old, and insert a new one. After the code has finished (we have left the catchM block) we restore the old exception handler. In effect, we have a stack of exception handlers.
  • When running the handler we pop off the current exception handler by restoring old, then since we have already used up our catch, we add a new catch to catch exceptions in the handler.

We then define captureM as:

captureM :: ((a -> IO ()) -> IO ()) -> M a
captureM f = ContT $ \k -> ReaderT $ \s -> do
    old <- readIORef s
    writeIORef s throwIO
    f $ \x -> do
        s <- newIORef old
        flip runReaderT s (k x) `E.catch`
            \e -> ($ e) =<< readIORef s
        writeIORef s throwIO
  • We make sure to switch the IORef back to throwIO before we start running the user's code, and again after we have finished running our code and are about to hand control back to the user's code. As a result, if the function that captures the continuation throws an exception, it will be raised as normal.
  • When running the continuation we create a new IORef for the handler, since the continuation might be called twice in parallel, and the separate IORef ensures they don't conflict with each other.

Finally, we need a way to run the computation. I've called that runM:

runM :: M a -> (Either SomeException a -> IO ()) -> IO ()
runM m k = do
    let mm = do
            captureM $ \k -> k ()
            catchM (Right <$> m) (return . Left)
    s <- newIORef throwIO
    mm `runContT` (liftIO . k) `runReaderT` s

The signature of runM ends up being the only signature that makes sense given the underlying mechanisms. We define mm using captureM to insert a catch, and catchM to ensure we never end up in an exception state from runM. The rest is just matching up the types.
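
As a usage sketch (my example, assuming the definitions above plus Control.Exception in scope), the pieces compose like this:

test :: IO ()
test = runM act print  -- prints: Right 3
    where
        act = do
            x <- catchM (throwM $ toException $ userError "boom")
                        (\_ -> return 1)
            y <- captureM' $ \k -> k (Right 2)  -- capture, then resume at once
            return (x + y :: Int)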

Stack depth could potentially become a problem with this solution. If you regularly do:

captureM (\k -> k ())

Then each time a catch will be wrapped around the function. You can avoid that by changing captureM to throw an exception:

captureM :: ((a -> IO ()) -> IO ()) -> M a
captureM f = ContT $ \k -> ReaderT $ \s -> do
    old <- readIORef s
    writeIORef s $ \_ ->
        f $ \x -> do
            s <- newIORef old
            flip runReaderT s (k x) `E.catch`
                \e -> ($ e) =<< readIORef s
    throwIO anyException  -- anyException is a stand-in for some known exception value

Here we unwind the catch by performing a throwIO, after installing an exception handler that actually passes the continuation onwards. It is a bit ugly, and I haven't checked whether the catch is actually a problem, or whether this solution solves it.

The implementation in Shake is a bit different to that described above. In Shake I know that captured continuations are never called more than once, so I can avoid creating a new IORef in captureM, and I can reuse the existing one. Since I never change the handler, I can use a slightly less powerful definition of M:

type M a = ReaderT (IORef (SomeException -> IO ())) (ContT () IO) a

The resulting code is Development.Shake.Monad, which implements the RAW monad, and also does a few extra things which are irrelevant to this post.

The cool thing about Haskell is that I've been able to completely replace the underlying Shake Action monad from StateT/IO, to ReaderT/IO, to ReaderT/ContT/IO, without ever breaking any users of Shake. Haskell allows me to produce effective and flexible abstractions.

Tuesday, August 12, 2014

Safe library - generalising functions

Summary: The Safe library now has exact versions of take/drop, with twelve functions implemented on top of a generalised splitAt.

The Safe library is a simple Haskell library that provides versions of standard Prelude and Data.List functions that usually throw errors (e.g. tail), but wrapped to provide better error messages (e.g. tailNote), default values (e.g. tailDef) and Maybe results (e.g. tailMay).

I recently released version 0.3.5, which provides a new module Safe.Exact containing crashing versions of functions such as zip/zipWith (which error if the lists are not equal length) and take/drop/splitAt (which error if there are not enough elements), then wraps them to provide safe variants. As an example, the library provides:

takeExact    :: Int -> [a] -> [a]
takeExactMay :: Int -> [a] -> Maybe [a]

These are like take, but if the Int is larger than the length of the list it will throw an error or return Nothing. Some sample evaluations:

takeExactMay 2 [1,2,3] == Just [1,2]
takeExact    2 [1,2,3] == [1,2]
takeExactMay 2 [1] == Nothing
takeExact    2 [1] ==
    1:error "Safe.Exact.takeExact, index too large, index=2, length=1"
take 1 (takeExact 2 [1]) == [1]

So takeExactMay computes up-front whether the whole computation will succeed, and returns a Nothing if it will fail. In contrast, takeExact produces elements while they are present, but if you demand an additional element that is missing it will result in an error. All the exceptions in the Safe library are designed to provide the maximum level of detail about what went wrong, here telling us the index we were after and the length of the list.

The library provides takeExact, dropExact and splitAtExact, plus Def/May/Note versions, resulting in twelve similar functions. While the implementation of any one function is reasonably short (although not that short, once proper error messages are provided), I didn't want to write the same code twelve times. However, generalising over functions that check up-front and those that check on-demand requires a bit of thought. In the end I settled for:

splitAtExact_ :: (String -> r) -> ([a] -> r) -> (a -> r -> r) -> Int -> [a] -> r
splitAtExact_ err nil cons o xs
    | o < 0 = err $ "index must not be negative, index=" ++ show o
    | otherwise = f o xs
    where
        f 0 xs = nil xs
        f i (x:xs) = x `cons` f (i-1) xs
        f i [] = err $
            "index too large, index=" ++ show o ++ ", length=" ++ show (o-i)

Here the splitAtExact_ function has a parameterised return type r, along with three functional arguments that construct and consume the r values. The functional arguments are:

  • err :: String -> r, says how to convert an error into a result value. For up-front checks this produces a Nothing, for on-demand checks this calls error.
  • nil :: [a] -> r, says what to do once we have consumed the full number of elements. For take we discard all the remaining elements, for drop we are only interested in the remaining elements.
  • cons :: a -> r -> r, says how to deal with one element before we reach the index. For take this will be (:), but for functions producing a Maybe we have to check the r parameter first.

With this generalisation, I was able to write all twelve variants. As a few examples:

addNote fun msg = error $ "Safe.Exact." ++ fun ++ ", " ++ msg

takeExact = splitAtExact_ (addNote "takeExact") (const []) (:)

dropExact = splitAtExact_ (addNote "dropExact") id (flip const)

takeExactMay = splitAtExact_ (const Nothing) (const $ Just []) (\a -> fmap (a:))

dropExactMay = splitAtExact_ (const Nothing) Just (flip const)

splitAtExact = splitAtExact_ (addNote "splitAtExact")
    (\x -> ([], x)) (\a b -> first (a:) b)

splitAtExactMay = splitAtExact_ (const Nothing)
    (\x -> Just ([], x)) (\a b -> fmap (first (a:)) b)

Normally I would have defined takeExact and dropExact in terms of fst/snd on top of splitAtExact. However, in the Safe library error messages are of paramount importance, so I go to additional effort to ensure the error says takeExact and not splitAtExact.
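
For completeness, the Def variants can be layered on top of the May variants - a sketch, assuming the Safe library convention of passing the default value first:

import Data.Maybe (fromMaybe)

takeExactDef :: [a] -> Int -> [a] -> [a]
takeExactDef def i xs = fromMaybe def $ takeExactMay i xs

dropExactDef :: [a] -> Int -> [a] -> [a]
dropExactDef def i xs = fromMaybe def $ dropExactMay i xs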

Saturday, July 26, 2014

Converting Make to Shake

Summary: I have converted over 10,000 lines from Make to Shake. Here are some tips I learnt along the way.

Make is the de facto build system for large projects - if no one made an active choice, your project is probably using Make. The Shake build system can be a better alternative, but how should you convert? The following tips are based on my experience converting a 10,000 line Make system to Shake.

Shake can do whatever Make can

Shake is more powerful than Make, so if Make could do something, Shake probably can too. As a first approximation, the Make snippet:

output: input1 input2
    shell command to run

Becomes:

"output" *> \out -> do
    need ["input1","input2"]
    cmd Shell "shell command to run"

In addition:

  • Variables in Make usually map to normal Haskell variables.
  • Definitions of rules and dependencies use the functions from Development.Shake. For example, .PHONY maps to the phony function.
  • Filepath manipulation uses the functions from Development.Shake.FilePath.
  • Dynamically generated include files can be handled with needMakefileDependencies from Development.Shake.Util.
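
Putting those pieces together, a minimal complete build file might look something like this (a sketch using the hypothetical file names above; shakeArgs and want are the standard Development.Shake entry points):

import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions $ do
    want ["output"]                      -- default target
    "output" *> \out -> do
        need ["input1","input2"]         -- declare dependencies
        cmd Shell "shell command to run" -- run the original command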

Preserve the file/directory structure

The existing Make system will generate object files with particular names in particular places. Often these locations aren't what you would pick if you wrote the build system afresh. However, resist the temptation to "clean up" these pieces during the conversion. Treat the file locations as a specification, which lets you focus on the conversion to Shake without simultaneously redesigning a large and complex build system.

Treat the Makefile as a black box

Often the existing Makefile will be hard to understand, and sometimes won't be worth reading at all. The most important information in the Makefile is what commands it runs, which can be determined by make clean && make -j1 > log.txt, which captures a complete list of the commands run. From the commands it is usually relatively easy to determine the inputs and outputs, from which you can write the Shake rules. However, the Makefile can be useful to figure out which commands to group into a single rule, and how to generalise rules to cover multiple files.

Split the metadata from the logic

Often the Makefiles combine metadata (these object files go into this executable) with logic (use gcc -O2 to build all executables). Shake is great for writing build logic, but metadata is often better placed in separate files (the Haskell syntax can be a little heavy). You can use the full power of Haskell to store whatever metadata you require, and addOracle from Shake can introduce granular dependencies on the information. The module Development.Shake.Config provides some helper functions that might serve as a suitable base.

To bootstrap the Shake system, often the metadata can be extracted from the existing Makefiles. You can write a temporary script to parse the Makefile and extract whatever you consider the metadata, clean it up, and write it to new configuration files. Initially the config files are generated, but once you delete the Make original, they become source files.
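
For example, the metadata/logic split might end up looking something like this (a sketch; build.cfg and main-objects are hypothetical names, while usingConfigFile and getConfig come from Development.Shake.Config):

import Development.Shake
import Development.Shake.Config

main :: IO ()
main = shakeArgs shakeOptions $ do
    usingConfigFile "build.cfg"  -- metadata, e.g.: main-objects = foo.o bar.o
    "main.exe" *> \out -> do
        objs <- fmap (maybe [] words) $ getConfig "main-objects"
        need objs
        cmd "gcc -o" [out] objs  -- logic stays in Haskell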

Focus on a single platform/configuration

Often a build system will be cross-platform (Linux/Mac/Windows), build multiple targets (binaries/distribution package/documentation) and build multiple configurations (release/debug/profile). To start the conversion, focus only on the most heavily developed platform/configuration - if the migration is successful, abstracting over the differences is far easier in Shake than Make. You may wish to start with a simple target to try out Shake (e.g. documentation), but after that work on the target developers use every day, so that the developers can make use of the improvements sooner, motivating the migration.

Convert bottom up

Shake insists that it built all the dependencies itself (it checks the modification time is equal to the one it remembered); in contrast, Make only requires that targets are newer than their dependencies. As a result, you should start converting the leaves of the build system to Shake, and work upwards. Provided you use the same file/directory structure, you can build what you have defined with Shake, then finish the build with Make, checking the result still works as expected.

Run Make and Shake in parallel

Once you have migrated enough of the build system to be useful (the usual targets in the most common configuration), you should encourage some developers to try Shake instead of Make. These developers will find things that don't work properly, hidden features in the Make system that no one knew about, etc. Expect to fix problems and iterate several times.

Hopefully the Shake system will be faster and more robust. Once these advantages have encouraged all the main developers to make the switch, you should delete/disable the Make system and expect it to bitrot quickly.

Refactor individual rules

As you are converting rules from Make to Shake you can translate them directly and refactor later, or convert straight into more idiomatic Shake. As an example, you might start with:

cmd Shell "ls >" out

The argument Shell tells Shake to use the system shell, meaning that > redirect works. Later on you may wish to switch to:

Stdout result <- cmd "ls"
writeFile' out result

Now you are invoking the ls command directly, capturing the output using Shake. Sometime later you may switch to:

getDirectoryFiles "." ["*"]

Which is the Shake tracked way of getting a list of files. Similarly, calling sed or for through Shell should probably be gradually converted to Shake/Haskell operations.

Refactor the whole

Once you have converted the whole build system, and disabled the original Make system, you may wish to refactor the build system - putting files in more appropriate places, rethinking file dependencies etc. In truth, I've never got round to this step, and I would be surprised if many people did. However, as the build system grows, hopefully the new bits with sensible decisions will gradually begin to outnumber the old bits with questionable design.

Ask if you get stuck

Build systems (even in Shake) are complex entities, with intricate coordination between files, which mostly run untyped external commands with many platform/version differences. As a result, build systems are often complex to write.

If you have a problem using Shake, just ask. If you can boil down the problem to something fairly standalone, ask on StackOverflow with the tag shake-build-system. If you are looking for more general advice, ask on the mailing list. If you succeed, write a blog post and tweet me.

Wednesday, July 23, 2014

Applicative vs Monadic build systems

Summary: Shake is a monadic build system, and monadic build systems are more powerful than applicative ones.

Several people have wondered if the dependencies in the Shake build system are monadic, and if Make dependencies are applicative. In this post I'll try and figure out what that means, and show that the claim is somewhat true.

Gergo recently wrote a good primer on the concepts of Applicative, Monads and Arrows (it is worth reading the first half if you are unfamiliar with monad or applicative). Using a similar idea, we can model a simple build system as a set of rules:

rules :: [(FilePath, Action String)]
rules = [("a+b", do a <- need "a"; b <- need "b"; return (a ++ b))
        ,("a"  , return "Hello ")
        ,("b"  , return "World")
        ]

Each rule is on a separate line, containing a pair of the file the rule produces (e.g. a for the second rule) and the action that produces the file's contents (e.g. return "Hello "). I've used need to allow a rule to use the contents of another file, so the rule for a+b depends on the files a and b, then concatenates their contents. We can run these rules to produce all the files. We've written these rules assuming Action is a Monad, using the do notation for monads. However, for the above build system, we can restrict ourselves to Applicative functions:

rules = [("a+b", (++) <$> need "a" <*> need "b")
        ,("a"  , pure "Hello ")
        ,("b"  , pure "World")
        ]

If Action is applicative but not monadic then we can statically (without running any code operating on file contents) produce a dependency graph. If Action is monadic we can't generate the graph upfront, but in exchange we gain power: some build systems cannot be expressed applicatively at all. In particular, using a monad we can write a "dereferencing" build system:

rules = [("!a", do a <- need "a"; need a)
        ,("a" , pure "b")
        ,("b" , pure "Goodbye")
        ]

To build the file !a we first require the file a (which produces the contents b), then we require the file b (which produces the contents Goodbye). Note that the first rule has changed b the content into b the file name. In general, moving information from file contents to file names requires a monad. Alternatively stated, a monad lets you choose future dependencies based on the results of previous dependencies.
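
To make the contrast concrete, here is a toy model (my own sketch, not Shake's actual types) of an applicative-only Action. Because the only primitives are pure values, application and need, the full dependency list can be computed without running anything - and there is no constructor that lets file contents choose a file name:

{-# LANGUAGE GADTs #-}

data Action a where
    Pure :: a -> Action a
    Ap   :: Action (b -> a) -> Action b -> Action a
    Need :: FilePath -> Action String

instance Functor Action where
    fmap f x = Ap (Pure f) x

instance Applicative Action where
    pure  = Pure
    (<*>) = Ap

need :: FilePath -> Action String
need = Need

-- Dependencies fall out by traversing the structure statically.
deps :: Action a -> [FilePath]
deps (Pure _)    = []
deps (Ap f x)    = deps f ++ deps x
deps (Need file) = [file]

-- deps ((++) <$> need "a" <*> need "b") == ["a","b"]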

One realistic example (from the original Shake paper) is building a .tar file from the list of files contained in a file. Using Shake we can write the Action:

contents <- readFileLines "list.txt"
need contents
cmd "tar -cf" [out] contents

The only build systems that I'm aware of that are monadic are redo, SCons and Shake-inspired build systems (including Shake itself, Jenga in OCaml, and several Haskell alternatives).

While it is the case that Shake is monadic, and that monadic build systems are more powerful than applicative ones, it is not the case that Make is applicative. In fact, almost no build systems are purely applicative. Looking at the build shootout, every build system tested can implement the !a example (provided the file a is not a build product), despite several systems being based on applicative dependencies.

Looking at Make specifically, it's clear that the output: input1 input2 formulation of dependencies is applicative in nature. However, there are at least two aspects I'm aware of that increase the power of Make:

  • Using $(shell cat list.txt) I can splice the contents of list.txt into the Makefile, reading the contents of list.txt before the dependencies are parsed.
  • Using -include file.d I can include additional rules that are themselves produced by the build system.

It seems every "applicative" build system contains some mechanism for extending its power. I believe some are strictly less powerful than monadic systems, while others may turn out to be an encoding of monadic rules. However, I think that an explicitly monadic definition provides a clearer foundation.

Sunday, June 29, 2014

Optimisation with Continuations

Summary: Continuations are confusing. Here we solve a simple problem (that is at the heart of the Shake build system) using continuations.

Imagine we are given two IO a computations, and want to run them both to completion, returning the first a value as soon as it is produced (let's ignore exceptions). Writing that in Haskell isn't too hard:

parallel :: IO a -> IO a -> IO a
parallel t1 t2 = do
    once <- newOnce
    var <- newEmptyMVar
    forkIO $ t1 >>= once . putMVar var
    forkIO $ t2 >>= once . putMVar var
    readMVar var

We create an empty variable var with newEmptyMVar, fire off two threads with forkIO to run the computations which write their results to var, and finish by reading as soon as a value is available with readMVar. We use a utility newOnce to ensure that only one of the threads calls putMVar, defined as:

newOnce :: IO (IO () -> IO ())
newOnce = do
    run <- newMVar True
    return $ \act -> do
        b <- modifyMVar run $ \b -> return (False, b)
        when b act

Calling newOnce produces a function that given an action will either run it (the first time) or ignore it (every time after). Using newOnce we only call putMVar for the first thread to complete.

This solution works, and Shake does something roughly equivalent (but much more complex) in its main scheduler. However, this solution has a drawback - it uses two additional threads. Can we use only one additional thread?

For the problem above, running the computations to completion without retrying, you can't avoid two additional threads. To use only one additional thread and run in parallel you must run one of the operations on the calling thread - but if whatever you run on the additional thread finishes first, there's no way to move the other computation off the calling thread and return immediately. However, we can define:

type C a = (a -> IO ()) -> IO ()

Comparing IO a to C a, instead of returning an a, we get given a function to pass the a to (known as a continuation). We still "give back" the a, but not as a return value; instead we pass it onwards to a function. We assume that the continuation is called exactly once. We can define parallel on C:

parallel :: C a -> C a -> C a
parallel t1 t2 k = do
    once <- newOnce
    forkIO $ t1 (once . k)
    t2 (once . k)

This definition takes the two computations to run (t1 and t2), plus the continuation k. We fork a separate thread to run t1, but run t2 on the calling thread, using only one additional thread. While the parallel function won't return until after t2 completes, subsequent processing using the a value will continue as soon as either finishes.

Looking at the transformers package, we see Control.Monad.Trans.Cont contains ContT, which is defined as:

newtype ContT r m a = ContT {runContT :: (a -> m r) -> m r}

If we use r for () and IO for m then we get the same type as C. We can redefine C as:

type C a = ContT () IO a

The changes to parallel just involve wrapping with ContT and unwrapping with runContT:

parallel :: C a -> C a -> C a
parallel t1 t2 = ContT $ \k -> do
    once <- newOnce
    forkIO $ runContT t1 (once . k)
    runContT t2 (once . k)

Now that we've defined our parallel function in terms of C, it is useful to be able to convert between C and IO:

toC :: IO a -> C a
toC = liftIO

fromC :: C a -> IO a
fromC c = do
    var <- newEmptyMVar
    forkIO $ runContT c $ putMVar var
    readMVar var

The toC function is already defined by ContT as liftIO. The fromC function needs to change from calling a callback on any thread, to returning a value on this thread, which we can do with a forkIO and MVar. Given parallel on IO takes two additional threads, and parallel on C takes only one, it's not too surprising that converting IO to C requires an additional thread.
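
Putting it together, a quick test might look like this (my example, assuming the definitions above plus Control.Concurrent in scope):

main :: IO ()
main = do
    r <- fromC $ parallel (toC $ do threadDelay 1000000; return "slow")
                          (toC $ return "fast")
    print r  -- almost always prints "fast", without waiting a full second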

Aren't threads cheap?

Threads in Haskell are very cheap, and many people won't care about one additional thread. However, each thread comes with a stack, which takes memory. The stack starts off small (1Kb) and grows/shrinks in 32Kb chunks, but if it ever exceeds 1Kb, it never goes below 32Kb. For certain tasks (e.g. Shake build rules), some operation will often take a little over 1Kb of stack. Since each active rule (started but not finished) needs to maintain a stack, and for huge build systems there can be 30K active rules, you can get over 1Gb of stack memory. While stacks and threads are cheap, they aren't free.

The plan for Shake

Shake currently has one thread per active rule, and blocks that thread until all dependencies have rebuilt. The plan is to switch to continuations, so that a thread is only needed for each rule actually executing in parallel. This change will not require any code changes to Shake-based build systems, and should simply reduce memory usage. Until then, huge build systems may wish to pass +RTS -kc8K, which can save several hundred Mb of memory.

Sunday, June 22, 2014

Announcing ghc-make

Summary: I've released ghc-make, which is an alternative to ghc --make.

I've just released v0.2 of ghc-make (on Hackage, on Github). This package provides an alternative to ghc --make which supports parallel compilation of modules and runs faster when nothing needs compiling. To unpack that:

  • Parallel compilation: Call ghc-make -j4 and your program will build by running up to four ghc -c programs simultaneously. You usually need a parallel factor of 2x-3x to match ghc --make on a single core, since ghc --make does a lot of caching that is unavailable to ghc-make. If you use -j1, or omit the -j flag, the compilation will be based on ghc --make and should take the same time to compile.
  • Faster when nothing needs rebuilding: If ghc --make is slow when there is nothing to rebuild, and most of your executions do no rebuilding, ghc-make will make things go faster. On Windows I have one project where ghc --make takes 23 seconds and ghc-make takes 0.2 seconds (more than 100x faster). Particularly useful for scripts that do ghc --make Main && ./Main.

See the README for full details.

How do I use it?

Install ghc-make (cabal update && cabal install ghc-make). Then replace your calls to ghc my -arguments with ghc-make my -arguments. Almost all arguments and flags supported by ghc are supported by ghc-make - it is intended as a drop-in replacement. Let me know about any bugs on the bug tracker.

To use ghc-make with Cabal, try cabal build --with-ghc=ghc-make --ghc-options=-j4. (This technique is due to the ghc-parmake project, which also does parallel ghc --make compiles.)

How is it implemented?

This program uses the Shake library for dependency tracking and ghc --make for building. The actual ghc-make project itself only contains 4 modules, and the largest of those is the test suite.

To pass options to the underlying Shake build system prefix them with --shake, for example --shake--report=- will write a profile report to stdout and --shake--help will list the available Shake options.

Tuesday, June 03, 2014

Shake file hashes/digests

Summary: Shake can now be configured to check file hashes/digests instead of modification times, which is great if you frequently switch git branches.

Build systems run actions on files, skipping the actions if the files have not changed. An important part of that process involves determining if a file has changed. The Make build system uses modification times to impose an ordering on files, but more modern build systems tend to use the modification time as a proxy for the file contents, where any change indicates the contents have changed (e.g. Shake, Ninja). The alternative approach is to compute a hash/digest of the file contents (e.g. SCons, Redo). As of version 0.13, Shake supports both methods, along with three combinations of them - in this post I'll go through the alternatives, and their advantages/disadvantages.

Modification times rely on the file-system updating a timestamp whenever the file contents are written. Modification time is cheap to query. Saving a file afresh will cause the modification time to change, even if the contents do not - as a result touch causes rebuilds. Unfortunately, working with git branches sometimes modifies a file but leaves it with the same contents, which can result in unnecessary rebuilds (see the bottom of this post for one problematic git workflow).

File digests are computed from the file contents, and accurately reflect if the file contents have changed. There is a remote risk that the file will change without its digest changing, but unless your build system users are actively hostile attackers, that is unlikely. The disadvantage of digests is that they are expensive to compute, requiring a full scan of the file. In particular, after every rule finishes it must scan the file it just built, and on startup the build system must scan all the files. Scanning all the files can cause empty rebuilds to take minutes. When using digests, Shake also records file sizes, since if a file size changes, we know the digest will not match - making most changed digests cheap to detect.

Modification time and file digests combines the two methods so that a file only rebuilds if both the modification time and digest have changed. The advantage is that for files that have not changed the modification time will cheaply detect that, without ever computing the file hash. If the file has changed modification time, then a digest check may save an expensive rebuild, but even if it doesn't, the cost is likely to be small compared to rerunning the rule.

Modification time and file digests on inputs takes the previous method, but only computes digests for input files. Generated files (e.g. compiled binaries) tend to be large (expensive to compute digests) and not edited (they rarely end up the same), so are poor candidates for digests. The file size check means this restriction is unlikely to make a difference when checking all files, but may have some limited impact when building.

Modification time or file digests combines the two methods so that a file rebuilds if either the modification time or the file digest has changed. I can't think of a sensible reason for using this setting, but maybe someone else can?

Suggestions for Shake users

All these options can be set with the shakeChange field of shakeOptions, or using command line flags such as --digest or --digest-and-input. Switching between some change modes will cause all files to rebuild, so I recommend finding a suitable mode and sticking to it.

  • If you can't trust the modification times to change, use ChangeDigest.
  • If you are using git and multiple branches, use ChangeModtimeAndDigestInput.
  • If you have generated files that rewrite themselves but do not change, I recommend using writeFileChanged when generating the file, but otherwise use ChangeModtimeAndDigest.
  • Otherwise, I currently recommend using ChangeModtime, but some users may prefer ChangeModtimeAndDigest.
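
In code, picking a mode looks something like this (a sketch; ChangeModtimeAndDigestInput is the constructor corresponding to --digest-and-input, and copyFile' is Shake's tracked copy):

import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions{shakeChange=ChangeModtimeAndDigestInput} $ do
    want ["out.txt"]
    "out.txt" *> \out -> do
        need ["in.txt"]
        copyFile' "in.txt" out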

Appendix: The git anti-build-system workflow

Certain common git workflows change files from the current version, to an old version, then back again - causing modification-time checks to run redundant rebuilds. As an example, imagine we have two branches foo and bar, based on remote branches origin/foo and origin/bar, both of which themselves are regularly synced to a common origin/master branch. The difference between origin/foo and origin/bar is likely to be small. To switch from an up-to-date bar to an up-to-date foo we can run git checkout foo && git pull. These commands switch to an out-of-date foo, then update it. As a result, any file that has changed since we last updated foo will change to an old version, then change to a new version, likely the same as it was before we started. This workflow requires build systems to support file digests.