Functional Slice Of Cake

Lately I’ve seen a lot of posts focussing on how to split stories, how they should be structured, and techniques for getting them into a certain state. What I haven’t seen are posts explaining the benefits: the reasons why stories planned as a complete slice through a system are a good idea for our development teams.

Firstly, I’d like to ask you a question: how often do your teams miss a sprint goal because of a front-to-back-end dependency? Front to back here can be UI to API, or API to DB. Do you hear the excuse: “Well, the API changed and we didn’t have time to implement those changes”? It’s an excuse that I have used, and that I’ve heard my teams use in the past.

The idea of slicing through a system is not new, but it is one that is difficult to achieve without good planning and patience from your Scrum Master and development team, which I was lucky enough to have in my last lead role.

I want you to think of your system like a piece of extravagant Victoria sponge with three or four layers of cake. Next, I’d like you to think about how it can be cut in different ways to be manageable. You could cut it layer by layer into small bite-sized pieces, and that would certainly be consumable; if you think you could fit six small pieces of sponge in one sprint, then it probably also meets the goal of small enough stories. But what if the aim is not to consume the sponge, but to reconstitute it, so that it all fits neatly together again? If you took it in small sections layer by layer, you would find it very hard to put it all back together so that it was neat and presentable. Imagine what this would look like if you deconstructed the entire sponge!

Now take a new sponge of the same extravagance. Take a very thin slice all the way through the sponge. You don’t need to try to fit it all together, as it came as a connected piece. This is how we need to get our teams thinking when it comes to our stories. A great question to ask our teams is: If this is the only story you complete this sprint, does it deliver something to our users?

This something may be just one button that they can click, which doesn’t necessarily do a lot for them on its own, but it fulfils the following criteria:

  1. It works
  2. It is useable
  3. It is displayed in the UI, and goes all the way down to the bottom of your system (typically a database)

The aim of the sprint is to deliver working software, and this you have achieved. You still may not always achieve your sprint goal, but you start to reduce the dependencies and risk between different layers of your system.

The one thing that really motivates a development team is delivering working software. Nothing feels better than deploying something and watching it work. I can tell you now, it’s a great feeling when you can deploy every story, watch every deployment start up successfully, and see people use the software you have just created.

One great way to ensure that your system works in its entirety, for every deployment, is to pair your developers across disciplines. If your front end developer pairs with your back end developer to create an API, and your back end developer pairs with your front end developer to create the UI, you can almost guarantee that the solution will work. Although they may not be familiar with each other’s tech stacks, the underlying principles are generally the same. It’s also a great opportunity to cross-pollinate a little knowledge about both sides of the same coin. If you can get your infrastructure engineers to pair with all of your developers, then each person knows how to deploy the system.

All of the above starts to de-risk the team’s reliance on a single point of failure, which in turn de-risks each and every delivery the team will make.

Of course, the biggest benefit here is that if there is a weak point in the architecture or design, it’s picked up at the earliest point, and can be rectified at very little cost.

As An Aside
I think it’s important here to mention “cross functional teams”. In a discussion I had recently, my position was that I have no problem with cross functional teams, but that does not mean that every person in the team is cross functional. Generally, in the world of development, each person in the team has a specialism, whether that is UI, backend, DB, infrastructure, etc. Whilst it is useful that each member of the team can fill in if a person is away, it needs to be understood that not everyone will have the same level of expertise as the expert in the team. T-shaped people are, in concept, a great idea, but we have to be careful not to push our team members too wide, or stretch their capabilities, and to remember what we hired them to do in the first place.

A Different Aspect On Things

Aspect-Oriented Programming has been around for a long time; it’s not a new invention. However, in an ever-changing world, new requirements and inventions come along, and sometimes we look to the old to give us inspiration for the new.

I’m not about to tell you all about the different ways you can surround your code with aspects; there are many posts out there that do that. Instead, I want to tell you about a few ways I think they are useful, to developers and to our customers alike.

The first is logging. How many times do you write a LOG.debug() statement at the beginning or end of a method? Maybe you’d like to log requests as they come in?
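
As a quick illustration – the names here are invented – this is the sort of boilerplate that creeps into every method when it’s done by hand:

public Customer findCustomer(final String id) {
    LOG.debug("findCustomer called with id {}", id);
    final Customer customer = customerRepository.findById(id);
    LOG.debug("findCustomer returning {}", customer);
    return customer;
}

Multiply that by every method on every endpoint, and the logging starts to drown out the logic. Aspects let us declare it once instead.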

Dependencies

For my example, I’ve written a simple Spring Boot Web API, and these are the dependencies I’ve used.

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <version>${h2.version}</version>
    </dependency>
</dependencies>

The Logging Aspect

I want to log information about every request: how long it took, what the arguments were, and which method ran. The execution Pointcut below picks up every public method, and the @within Pointcut picks up every RestController. By combining them in the @Around advice, I can wrap every method that matches. The ProceedingJoinPoint contains all the information required about the method that will be called, and enough to actually invoke it when you are ready. By making the aspect a Spring @Component, it is wired into the Spring ecosystem for you.

@Aspect
@Component
public class RequestLoggingAspect {

    private static final String MSG_FORMAT = "Method %s took %dms with args %s";

    private final RequestLogger requestLogger;

    public RequestLoggingAspect(final RequestLogger requestLogger) {
        this.requestLogger = requestLogger;
    }

    @Pointcut("execution(public * * (..))")
    public void getPublicMethods() {
    }

    @Pointcut("@within(org.springframework.web.bind.annotation.RestController)")
    public void getRestControllers() {
    }

    @Around("getPublicMethods() && getRestControllers()")
    public Object restControllerPublicMethods(final ProceedingJoinPoint pjp) throws Throwable {
        var sw = new StopWatch();
        var methodName = pjp.getSignature().toLongString();
        var args = Arrays.stream(pjp.getArgs())
                .map(String::valueOf)
                .collect(Collectors.joining(","));
        sw.start();
        try {
            return pjp.proceed();
        } finally {
            sw.stop();
            var msg = String.format(MSG_FORMAT, methodName, sw.getTotalTimeMillis(), args);
            requestLogger.log(msg);
        }
    }
}
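
The RequestLogger the aspect delegates to isn’t shown above – it’s a small collaborator of mine rather than a Spring class. A minimal sketch could simply wrap your logging framework:

@Component
public class RequestLogger {

    private static final Logger LOG = LoggerFactory.getLogger(RequestLogger.class);

    // Hiding the sink behind its own component keeps the aspect easy to test,
    // and easy to repoint at a metrics system later.
    public void log(final String message) {
        LOG.info(message);
    }
}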

The Storing Aspect

The above example logs according to your logging framework’s configuration. It may be that you need to do something more complicated. The following works in exactly the same way, but let’s have a look at an aspect that stores requests to a database.

@Aspect
@Component
public class RequestStoringAspect {

    private final RequestLogService requestLogService;
    private final MethodParser methodParser;
    private final ArgumentMatcher argumentMatcher;

    public RequestStoringAspect(final RequestLogService requestLogService,
                                final MethodParser methodParser,
                                final ArgumentMatcher argumentMatcher) {
        this.requestLogService = requestLogService;
        this.methodParser = methodParser;
        this.argumentMatcher = argumentMatcher;
    }

    @Pointcut("execution(public * * (..))")
    public void getPublicMethods() {
    }

    @Pointcut("@within(org.springframework.web.bind.annotation.RestController)")
    public void getRestControllers() {
    }

    @Around("getPublicMethods() && getRestControllers()")
    public Object restControllerPublicMethods(final ProceedingJoinPoint pjp) throws Throwable {
        var sw = new StopWatch();
        var methodSignature = pjp.getSignature().toLongString();
        var params = methodParser.parse(methodSignature);
        var args = Arrays.stream(pjp.getArgs())
                .map(String::valueOf)
                .filter(str -> !StringUtils.isNullOrEmpty(str))
                .collect(Collectors.toList());
        sw.start();
        try {
            return pjp.proceed();
        } finally {
            sw.stop();
            var arguments = argumentMatcher.match(params, args);
            var requestLog = new RequestLogDao();
            requestLog.setArguments(arguments);
            requestLog.setExecutionMillis(sw.getTotalTimeMillis());
            requestLog.setMethodName(methodSignature);
            requestLogService.saveRequestLog(requestLog);
        }
    }
}

Here I’ve started to store some more information about the methods called, and the arguments that were captured.
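
The RequestLogDao, MethodParser and ArgumentMatcher are small helpers of mine that aren’t shown here. As a rough sketch – the exact fields are assumptions – the entity being persisted might look like this:

@Entity
public class RequestLogDao {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String methodName;
    private long executionMillis;

    // the matched parameter names and argument values, flattened to a string
    private String arguments;

    public void setMethodName(final String methodName) {
        this.methodName = methodName;
    }

    public void setExecutionMillis(final long executionMillis) {
        this.executionMillis = executionMillis;
    }

    public void setArguments(final String arguments) {
        this.arguments = arguments;
    }
}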

This can be really useful for building up monitoring information. You could also start to capture audits of who has accessed different endpoints, and what exceptions were thrown. This starts to build up an historic picture of your application, its performance, and what your users are really doing. It can highlight bottlenecks in the application, and can help you prioritise where you focus your efforts.
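
To sketch the auditing idea – where the record ends up is up to you – an @AfterThrowing advice can capture any exception that escapes a controller:

@Aspect
@Component
public class ExceptionAuditAspect {

    private static final Logger LOG = LoggerFactory.getLogger(ExceptionAuditAspect.class);

    // Runs only when a matched method exits by throwing; the exception is
    // bound to the 'ex' parameter by name.
    @AfterThrowing(pointcut = "@within(org.springframework.web.bind.annotation.RestController)",
            throwing = "ex")
    public void auditException(final JoinPoint jp, final Throwable ex) {
        LOG.warn("{} threw {}", jp.getSignature().toShortString(), ex.toString());
    }
}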

Code is available here (You’ll need Java 10 or above).

User Story Mapping

Jeff Patton defines Story Mapping as: “Talk about the user’s journey through your product, building a simple model that tells your user’s story as you do.”

Sounds simple, right? But how often do you have a Project Management type who has already defined the stories, and just wants you to agree with the stories they have come up with? Sometimes this works, but more often than not, the development teams need to know the answers to a few questions:

  1. Who for?
  2. Why?
  3. What?
  4. How?

The answers to these questions are really important for several reasons.

It’s important for the development team to know who the users are. Firstly, it provides a point of contact in the business for complex discussions that should not go third hand. Secondly, knowing that there is someone who is actually going to use your solution provides a sense of pride in the product in question. Thirdly, you know that when you receive feedback from someone in a review, that they are an actual user, and not just a stakeholder with outside interests. Whilst these stakeholders are important, they generally want to provide non-functional requests, or understand the benefits to their teams, which can be communicated with your Product Owner in a separate session.

It’s important for the development team to know why a story is created and needs to be played. It helps with prioritisation, and even implementation. It could provide information that helps decide how robust, how scalable, and how secure the solution needs to be.

It’s important for the development team to know what the user actually wants. More often than not, a BA or PO will decide what the solution should be without actually consulting any developers. By involving the developers in the conversations with the users, the right product for the user can be developed. It may be that the user doesn’t know what they want, in which case a developer can suggest a simple prototype to get things started. It will promote a feedback cycle with the users, for example: “I really like the way the command line tool processed my files, but I only ever use the same parameters, can they be set as a default?”

It’s important for the development team to know how the product or solution should interact with other systems. Does it need to integrate with an antiquated database? Are there new technologies that the team need to learn? Can some work be outsourced to a specialist? Can we move away from any current techniques, and provide a more modern solution?

Have you ever sat around a screen, with a spreadsheet filled with stories, but absolutely no context? From a developer’s point of view there is nothing more frustrating than being handed stories to complete without knowing the who, why, what or how. Mapping the stories, with the users present, ensures that the whole development team starts a set of epics, features and stories from the same page, with the same understanding.

However, these meetings need to be well prepared and facilitated, as they can take all day. Sounds expensive, right? A team of developers, with the users, PO, Scrum Master and BAs, in a meeting for a whole day? I can guarantee that it will be cheaper to run this type of meeting four or five times a year than to just let a BA or PO create stories for a development team.

Here is my simple recipe for a story mapping session:

Understand The Current Process

By understanding the current process, everyone in the room can start to design a new solution to improve what your users currently do. Encourage the team to ask questions like:

  • Can you describe the difficulties or pain points?
  • Is that a time consuming part of the process?
  • How long does it typically take?
  • Is it expensive to do?
  • Does it involve a lot of people?

Start To Map Out The Problems

Get the team to map out problems on post-it notes: things that form part of the current process, and things that fall out of the questions the team has asked. If something is a big problem, make sure it’s highlighted on the notes.

Write Down Features That Could Solve The Problems

Go over the set of problems, noting features on post-it notes that could help to solve them. Don’t worry if you have more than one feature idea to solve a problem; that is a good place to be, as you can start to give the Product Owner varying solutions to choose from! Make sure that the features are associated with the problems.

Determine The Stories

Start to form high-level stories that could complete the features. Try to avoid implementation details, such as libraries or algorithms, but capture things like “Get sheep into field”. Think about how these stories could be accepted by the PO, and think about any non-functional requirements the users have asked for. Make sure the stories are associated with the features.

Estimate The Stories

These don’t have to be massively accurate; it’s probably better to bubble-sort them based on complexity, known effort and difficulty, any risks the team can describe, and how long similar work has taken, so that you can start to give the Product Owner an idea of how long it could take to deliver an initial cut of the product. You could even just T-shirt size the stories, so you have a starting point.

In Summary

You’ll find you’ve probably used four or five post-it note pads, and the team will definitely be exhausted at the end of the session, but by the time you leave, each and every person should be on the same page as to the difficulties the users are experiencing, what a possible solution may be, and how they may achieve it. This is one of the most valuable parts of the entire process. The stories you have created should then fall through your usual refinement sessions to be fleshed out and broken down.

Constructors & DI

I’m a big advocate of constructor injection, but a recent office debate caused me to take a step back and reevaluate my beliefs.

I thought I’d take a stab at some good and bad points of different ways of injecting dependencies into a class using Spring.

Field Injection

@Component
public class FieldInjectionExample {

    @Value("${url.value}")
    private String url;

    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    private ObjectMapper objectMapper;

    public String doSomething() throws IOException {
        final String object = restTemplate.getForObject(url, String.class);
        return String.valueOf(objectMapper.readValue(object, String.class));
    }
}

So here you can see that my class looks nice and clean; there are very few lines of code required to construct my object. Although the fields aren’t final, there are no setters, so you could only modify them using reflection. It’s fairly succinct, and you can see all the things the class needs. So, I think I probably need to write some tests for this class.

public class FieldInjectionExampleTest {
    
    private FieldInjectionExample fieldInjectionExample;
    
    @Before
    public void setup(){
        fieldInjectionExample = new FieldInjectionExample();
    }
    
    @Test
    public void testDoSomething() throws IOException {
        assertThat(fieldInjectionExample.doSomething()).isEqualTo("?");
    }
}

So, the first thing you can spot is that I am able to create my object under test without necessarily being aware of the dependencies required to make the class do its job, regardless of whether or not they are being mocked.

What constructor injection would give us here is an explicit contract, documenting exactly what the class needs to do its job – it will fail at start-up time, rather than at run time.

It also allows you to determine whether you need to create any beans in configuration (maybe for a non-Spring-managed component), as you can check the constructor.
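
For example, a RestTemplate isn’t a bean Spring Boot gives you out of the box, so a constructor that demands one makes it obvious that somewhere you need configuration along these lines (a minimal sketch):

@Configuration
public class AppConfig {

    // Without this bean the context cannot satisfy the constructor,
    // and the application fails at start-up rather than at run time.
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}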

Let’s assume that I have now run my test, and it has failed. I’ve realised (or remembered) that my class needs a few things to make it work.

@RunWith(MockitoJUnitRunner.class)
public class FieldInjectionExampleTest {

    @Mock
    private RestTemplate restTemplate;

    @Mock
    private ObjectMapper objectMapper;

    @InjectMocks
    private FieldInjectionExample fieldInjectionExample;

    @Test
    public void testDoSomething() throws IOException {
        final String returnedObject = "String";
        when(restTemplate.getForObject(anyString(), any(Class.class))).thenReturn(returnedObject);
        when(objectMapper.readValue(anyString(), any(Class.class))).thenReturn(returnedObject);
        final String something = fieldInjectionExample.doSomething();
        assertThat(something).isEqualTo("String");
    }

}

Ok, so here I’m pretty sure that I’ve captured everything I need, but I can never be certain until it fails at run time.

Setter Injection

It’s entirely possible to place @Autowired annotations on the setter methods of objects. This can be a neat way of instantiating a class without having all the dependencies available at construction time, but I believe it comes with downsides, such as how easy it becomes to create circular dependencies, and not spot them until it’s too late.
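
As a contrived sketch of that trap – the class names are invented – Spring will wire these two into each other quite happily, and nothing complains until the calls start looping at run time:

// Chicken.java
@Component
public class Chicken {

    private Egg egg;

    @Autowired
    public void setEgg(final Egg egg) {
        this.egg = egg;
    }
}

// Egg.java
@Component
public class Egg {

    private Chicken chicken;

    @Autowired
    public void setChicken(final Chicken chicken) {
        this.chicken = chicken;
    }
}

With that caveat noted, here is the same example class using setter injection: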

@Component
public class SetterInjectionExample {
    
    private String url;
    private RestTemplate restTemplate;
    private ObjectMapper objectMapper;
    
    public String doSomething() throws IOException {
        final String object = restTemplate.getForObject(url, String.class);
        return String.valueOf(objectMapper.readValue(object, String.class));
    }

    @Autowired
    public void setUrl(@Value("${url.value}") final String url) {
        this.url = url;
    }

    @Autowired
    public void setRestTemplate(final RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @Autowired
    public void setObjectMapper(final ObjectMapper objectMapper) {
        this.objectMapper = objectMapper;
    }
}

This comes with much more code, and the ability to change the object at run time.

public class SetterInjectionExampleTest {

    private SetterInjectionExample setterInjectionExample;

    @Before
    public void setup(){
        setterInjectionExample = new SetterInjectionExample();
    }

    @Test
    public void testDoSomething() throws IOException {
        assertThat(setterInjectionExample.doSomething()).isEqualTo("?");
    }
}

This still has the same problem – you cannot easily see what is required to create and use the class; there is simply an expectation that it will work. You can inject your mocks in the same way as you would if you were injecting fields.

@RunWith(MockitoJUnitRunner.class)
public class SetterInjectionExampleTest {

    @Mock
    private RestTemplate restTemplate;

    @Mock
    private ObjectMapper objectMapper;

    @InjectMocks
    private SetterInjectionExample setterInjectionExample;

    @Test
    public void testDoSomething() throws IOException {
        final String returnedObject = "String";
        when(restTemplate.getForObject(anyString(), any(Class.class))).thenReturn(returnedObject);
        when(objectMapper.readValue(anyString(), any(Class.class))).thenReturn(returnedObject);
        final String something = setterInjectionExample.doSomething();
        assertThat(something).isEqualTo("String");
    }
}

This still means that, although you have setters available, it isn’t clear what is needed in order for the class to do its job. Maybe it could be useful for optional parameters, though.
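
For instance, marking a setter with @Autowired(required = false) lets a class tolerate a collaborator that may not exist in every deployment – AuditSink here is an invented example:

@Component
public class ReportService {

    private AuditSink auditSink; // may legitimately remain null

    @Autowired(required = false)
    public void setAuditSink(final AuditSink auditSink) {
        this.auditSink = auditSink;
    }

    public void generateReport() {
        // ... build the report ...
        if (auditSink != null) {
            auditSink.record("report generated");
        }
    }
}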

Constructor Injection

Here is the same class, with dependencies injected into the constructor.

@Component
public class ConstructorInjectionExample {
    private final String url;
    private final RestTemplate restTemplate;
    private final ObjectMapper objectMapper;

    @Autowired
    public ConstructorInjectionExample(@Value("${url.value}") final String url,
                                       final RestTemplate restTemplate,
                                       final ObjectMapper objectMapper) {
        this.url = url;
        this.restTemplate = restTemplate;
        this.objectMapper = objectMapper;
    }
    
    public String doSomething() throws IOException {
        final String object = restTemplate.getForObject(url, String.class);
        return String.valueOf(objectMapper.readValue(object, String.class));
    }

}

It’s really clear to see what your class needs in order to do its job. I cannot instantiate this class without a URL, a RestTemplate and an ObjectMapper.

This doesn’t mean that you cannot inject your mocks if you so wish, but it does allow you to see what the class needs to do its job.

@RunWith(MockitoJUnitRunner.class)
public class ConstructorInjectionExampleTest {

    @Mock
    private RestTemplate restTemplate;

    @Mock
    private ObjectMapper objectMapper;

    @InjectMocks
    private ConstructorInjectionExample constructorInjectionExample;

    @Test
    public void testDoSomething() throws IOException {
        final String returnedObject = "String";
        when(restTemplate.getForObject(anyString(), any(Class.class))).thenReturn(returnedObject);
        when(objectMapper.readValue(anyString(), any(Class.class))).thenReturn(returnedObject);
        final String something = constructorInjectionExample.doSomething();
        assertThat(something).isEqualTo("String");
    }
}

To help understand the different ways of injecting properties, I’ve broken down the key points in a table. This should allow you to make an informed decision as to which one is appropriate for your work.

                                                  Field   Setter   Ctor
Cannot instantiate without necessary properties                      X
Cannot create circular dependencies                 X                X
Clearly defined requirements in one place                            X
Potential for easy readability                      X                X
Allow the use of the ‘final’ keyword                                 X
Obvious when breaking SRP                                            X
Useful for non-mandatory properties                          X

What Does Spring Say?

Finally, what does Spring say about the most appropriate form of property injection?

“The Spring team generally advocates constructor injection as it enables one to implement application components as immutable objects and to ensure that required dependencies are not null. Furthermore constructor-injected components are always returned to client (calling) code in a fully initialized state. As a side note, a large number of constructor arguments is a bad code smell, implying that the class likely has too many responsibilities and should be refactored to better address proper separation of concerns.”

Automating AWS Backups

As developers, we all know that things go wrong; machines break (for sometimes totally unpredictable reasons) and data can become corrupt. What better way of helping to mitigate these problems than taking backups and snapshots of your infrastructure and data?

In the office we use AWS extensively for our infrastructure. We have various tools running in EC2 instances that we use for our day to day development. You’d think that a suite of tools as sophisticated as AWS would have a way of automatically backing up volumes and instances and retaining them for periods of time!

We found a bash script that pretty much did what we were after, but there were licence implications with using it. Bash scripting can also be a bit of a black art, especially for sophisticated operations, so why not do it in a language we all understand – Java!

Spring Boot allows us to create a command-line Java application simply by implementing the CommandLineRunner interface:

@SpringBootApplication
public class BackupApplication implements CommandLineRunner {

  private final BackupService backupService;

  @Autowired
  public BackupApplication(final BackupService backupService) {
    this.backupService = backupService;
  }

  public static void main(String[] args) {
    SpringApplication.run(BackupApplication.class, args);
  }

  @Override
  public void run(String... args) throws Exception {
    backupService.backup();
    backupService.purge();
  }
}

Connecting to AWS using an already-created access key and secret is also relatively simple using the AWS SDK. You can pass the parameters below on the command line (e.g. -Daws.secret.key=KEY):

@Configuration
public class AwsConfiguration {

  @Bean
  public AWSCredentials credentials(@Value("${aws.secret.key}") final String secretKey,
                                    @Value("${aws.secret.pass}") final String secretPass) {
    // note: BasicAWSCredentials expects the access key id first, then the secret access key
    return new BasicAWSCredentials(secretKey, secretPass);
  }

  @Bean
  public AmazonEC2Client ec2Client(final AWSCredentials credentials) {
    return new AmazonEC2Client(credentials);
  }
}

In order to find the volumes I want to back up, they’ve been tagged with ‘mint-backup’ as the key, and the period of backup as the value (e.g. ‘nightly’, ‘weekly’). These are passed in on the command line as arguments and are used by the Backup Service:

@Autowired
public BackupServiceImpl(final AmazonEC2Client ec2Client,
  @Value("${tag.key}") final String tagKey,
  @Value("${tag.value}") final String tagValue,
  @Value("${retention.days}") final long days) {
  this.ec2Client = ec2Client;
  this.tagKey = tagKey;
  this.tagValue = tagValue;
  this.duration = Duration.ofDays(days);
}

The AWS SDK allows you to search on Tags:

private List<Volume> getVolumes() {
  final Filter filter = new Filter("tag:" + tagKey, Collections.singletonList(tagValue));
  final DescribeVolumesRequest request = new DescribeVolumesRequest();

  DescribeVolumesResult result = ec2Client.describeVolumes(request.withFilters(filter));
  String token;
  // collect the first page of results, then page through using the next token
  final List<Volume> volumes = new ArrayList<>(result.getVolumes());
  while ((token = result.getNextToken()) != null) {
    request.setNextToken(token);
    result = ec2Client.describeVolumes(request);
    volumes.addAll(result.getVolumes());
  }
  return volumes;
}

Now I have a list of Volumes, I can create a snapshot of each one:

private List<String> createSnapshots(final List<Volume> volumes) {
  final List<String> snapshotIds = new ArrayList<>();
  volumes.forEach(volume -> {
    final CreateSnapshotRequest createSnapshotRequest = new CreateSnapshotRequest(volume.getVolumeId(),
        "SNAPSHOT-" + DateTime.now().getMillis());
    final CreateSnapshotResult createSnapshotResult = ec2Client.createSnapshot(createSnapshotRequest);
    final Snapshot snapshot = createSnapshotResult.getSnapshot();
    snapshotIds.add(snapshot.getSnapshotId());
  });
  return snapshotIds;
}

Once they are all created they are then tagged with a purge date, so that another process can remove them once they have expired:

private void tagSnapshots(final List<String> snapshotIds) {
  // 'duration' is the retention period captured in the constructor
  final long purgeDate = DateTime.now().plus(duration.toMillis()).getMillis();
  final Tag purgeTag = new Tag("purge-date", String.valueOf(purgeDate));
  final CreateTagsRequest createTagsRequest = new CreateTagsRequest()
    .withTags(purgeTag)
    .withResources(snapshotIds);
  ec2Client.createTags(createTagsRequest);
}

Purging is a little easier. We can request all snapshots that have a ‘purge-date’ tag, then filter them on the value of the tag so that we only keep the ones dated before now, grab the ids, map them to delete requests and issue each one to the EC2 client:

@Override
public void purge() {
  final DescribeSnapshotsRequest request = new DescribeSnapshotsRequest();
  final Filter filter = new Filter("tag-key", Collections.singletonList("purge-date"));
  DescribeSnapshotsResult result = ec2Client.describeSnapshots(request.withFilters(filter));
  String token;
  final List<Snapshot> snapshots = new ArrayList<>(result.getSnapshots());
  while ((token = result.getNextToken()) != null) {
    request.setNextToken(token);
    result = ec2Client.describeSnapshots(request);
    snapshots.addAll(result.getSnapshots());
  }
  final DateTime now = DateTime.now();
  snapshots.stream()
    .filter(snapshot -> filterSnapshot(snapshot, now))
    .map(Snapshot::getSnapshotId)
    .map(DeleteSnapshotRequest::new)
    .forEach(ec2Client::deleteSnapshot);
}

private boolean filterSnapshot(final Snapshot snapshot, final DateTime now) {
  for (final Tag tag : snapshot.getTags()) {
    // match on the tag's key ("purge-date") – "tag-key" is only the name of the AWS filter above
    if (tag.getKey().equals("purge-date") && readyForDeletion(tag.getValue(), now)) {
      return true;
    }
  }
  return false;
}

private boolean readyForDeletion(final String tagValue, final DateTime now) {
  final long purgeTag = Long.parseLong(tagValue);
  final DateTime dateTime = new DateTime(purgeTag);
  return dateTime.isBefore(now);
}

This is packaged as an executable jar, so that it can be placed on our Jenkins instances and executed as needed (either by specifying a cron expression or by clicking Build Now). It can also be run from the command line on a developer’s PC if needed. It means that the process of taking snapshots of our EBS volumes is consistent, no matter who runs it or where, which should help avoid any problems or differences between backup executions.
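
For illustration, a run might look something like this – the jar name and the values are placeholders, and Spring Boot will resolve both -D system properties and --style arguments into the @Value placeholders above:

java -Daws.secret.key=ACCESS_KEY -Daws.secret.pass=SECRET_KEY \
  -jar aws-backup.jar \
  --tag.key=mint-backup --tag.value=nightly --retention.days=7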