A lot of my bash scripting experience has been, in one sense, relatively simple. I have several scripts that span several hundred lines and do fairly complex things across multiple systems, so from that perspective they aren't necessarily simple. However, it wasn't until recently that I had to really start thinking about managing when scripts run, and in particular about keeping them from "stepping all over each other" when multiple instances of the same script must be run… enter the topic of "Job Control" or "Controlled Execution."

A common scenario is that your bash script is written to access some shared resource. A few examples of such shared resources:
- An executable file that can only have one running instance at any given time
- A log file that must be written to in a certain order
- Sensitive system files (such as the interfaces file)

What happens if a bash script gets executed once, and then a second instance is fired off before the first one finishes? The short answer: typically unexpected, bad behavior that tends to break things.
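To make the problem concrete, here is a minimal sketch of the classic read-modify-write race. The script and file path are hypothetical, purely for illustration: two "instances" (background jobs here) each increment a shared counter a thousand times, but because reading and writing back are separate, non-atomic steps, updates get lost.

```bash
#!/bin/bash
# race-demo.sh -- hypothetical demonstration of two concurrent instances
# stepping all over a shared file with no coordination.
COUNTER_FILE=/tmp/counter   # illustrative path

echo 0 > "$COUNTER_FILE"

increment() {
    for i in $(seq 1 1000); do
        n=$(cat "$COUNTER_FILE")            # read...
        echo $((n + 1)) > "$COUNTER_FILE"   # ...then write back (not atomic!)
    done
}

increment &   # "first instance"
increment &   # "second instance"
wait

# With proper locking this would print 2000; without it, it almost
# always prints something smaller because increments were overwritten.
cat "$COUNTER_FILE"
```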

So the solution is to introduce some job control logic into your scripts. To that end, I want to talk about two methods of controlling job execution that I have started to employ heavily for one of my projects: simple lock files, and the more involved flock utility (part of util-linux) included in most newer Linux distributions. For reference, most of this article is based on a system running Debian Jessie.
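As a preview of the first method, here is a minimal lock-file sketch. The lock path is an assumption for illustration; the key detail is using `set -o noclobber` so that checking for the lock and creating it happen in one atomic step, avoiding a race between the check and the create.

```bash
#!/bin/bash
# Simple lock file sketch -- path is illustrative, adjust to taste.
LOCKFILE=/var/lock/myscript.lock

# With noclobber set, the redirection fails if the file already exists,
# so test-and-create is a single atomic operation.
if ( set -o noclobber; echo "$$" > "$LOCKFILE" ) 2>/dev/null; then
    trap 'rm -f "$LOCKFILE"' EXIT   # clean up even on error or interrupt
    # ... critical section: work with the shared resource here ...
else
    echo "Another instance (PID $(cat "$LOCKFILE")) is running; exiting." >&2
    exit 1
fi
```

And a minimal sketch of the second method using flock(1). The lock path and file descriptor number are illustrative choices; the kernel releases the lock automatically when the script exits, even if it is killed, which is a big part of flock's appeal over hand-rolled lock files.

```bash
#!/bin/bash
# flock(1) sketch -- flock ships with util-linux on Debian.
LOCKFILE=/var/lock/myscript.flock

exec 200>"$LOCKFILE"      # open the lock file on file descriptor 200
if ! flock -n 200; then   # -n: fail immediately instead of waiting
    echo "Another instance holds the lock; exiting." >&2
    exit 1
fi
# ... critical section: work with the shared resource here ...
# No explicit cleanup needed; the lock is released when fd 200 closes.
```

I'll walk through both approaches, and the trade-offs between them, below.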