From Cog to Contractor

As my first contract comes to an end, I thought I would take a few minutes to write about the journey from standard employee to Java Engineering Contractor.

There are many things that are appealing about being a contractor:

  1. The money is better
  2. You can pick and choose the work (to a degree)
  3. There are no disagreements about who is going to work Christmas
  4. You are hardly involved in the political conversations of an organisation
  5. If the work becomes less interesting, it is easier to move on

These are all the things that really gave me the motivation to move away from being a regular employee within an organisation to becoming a contractor.

There are some downsides that you also need to think about:

  1. You do not get paid for taking holidays
  2. You need to consider the tax implications of each client
  3. You may need to travel further for the type of work you want
  4. The political conversations of an organisation can revolve around you
  5. An organisation can get rid of you much more easily if things are not working out

These are the risks that you need to think about when considering your options. I decided that the pros far outweighed the cons, and I took the plunge.

The first step was to find a contract. I have often been contacted by recruiters who have tried to tempt me into contracting, and they have been unsuccessful. The contracts have often been for companies I have not heard of, have been too far away, or have been considered inside IR35 (I will come to that later). I used one of my many contacts to find a contract at a company where he had recently been placed. This gave me a lot of confidence, because I knew someone there and they had recommended the client. They had also recommended me to the client, which I think helped with my confidence for the first contract. There is nothing worse than imposter syndrome when you switch jobs, thinking “Should I really be doing this? Am I really good enough? Do I have the skills?”, so the recommendation was really welcome. I really feel that having a good network is vital to successfully finding clients as a contractor. Your network can include recruiters as well as people you have worked with previously, and it will prove useful to know many people so that you get a good range of work.

Contractor interviews tend to be much shorter and more efficient than regular employee interviews, and after a quick telephone interview, I was invited for a face to face interview with a technical test. My recommendation here is to make sure you leave plenty of time to get to a client. I left three and a half hours to get to Bristol for my face to face, for a journey that should take an hour and a half, and I was still late due to a combination of the M5 being shut at Taunton and really heavy summer holiday traffic. The technical test was also a short affair: after apologising profusely for being 15 minutes late, it was over within another 30 minutes. My experience in interviewing and hiring contractors helped, because you learn to listen out for key words and watch for modern techniques, to understand whether the person you are interviewing really does know what they are talking about.

Contract won, time to think about getting set up. There are a couple of ways of setting up as a contractor. The simplest way is to fall under an umbrella company. They take your pay, calculate your tax, and then pay you, all for a handsome fee on top. I chose to set up a limited company – Devinity – as I could not help but think that paying someone to do all the things I can do myself did not seem worth it. I asked the same colleague who helped me find the contract who he used as an accountant – and he recommended Crunch. He also pointed me at QDOS for public liability and professional indemnity insurance – essentials for any contractor. The accountants had an offer when I signed up: they would incorporate the limited company free of charge – a saving of £60. The insurance cost around £400, and I pay monthly for my accountants, but I get contract reviews to ensure I am working outside of IR35, and they file tax returns and self assessments for me. The accountants recommended I set up a business account with Metro Bank, as it integrated with their systems. All of these things took around five or six days to get set up, bar the bank, which took a little longer – but it is not necessary to have that set up until you get paid, so taking into account notice periods and the first few weeks of work, you have plenty of time.

IR35 is UK government policy around Off-Payroll working. There are a number of things that make up being a regular employee. Having your own desk, letting the company reward you in the same way they reward their permanent staff, being paid for sick leave or holidays, are just a few of them. Avoid these and you are a supplier rather than a worker.

The first day at the client’s site was just like the first day anywhere else. There was mandatory training to complete, to make sure I was unbribable and that I understood the client’s policies that applied to companies working for them. There was also the same IT chaos that I have come to expect following various employments and the hiring of other developers – if I ever run my own company where I employ others, I will make sure this is seamless, easy, and ready for their first day. After a quick tour around the team I had joined to help, I was thrown in with a brief overview of the architecture of the system and my first problem to solve. Needless to say, I was over the moon to be writing code so quickly.

I have learnt a lot in the last six months about running a business, and the complications that come with it: expenses, pay, tax, self assessments, needs and wants. More importantly, I have learnt a lot about myself in the last six months: my own ability to adapt my technical skills; my own ability to run my company; my own ability to not get caught up in office politics; my own ability to come home and switch off; my own ability to relax. This contract is coming to an end, and I am starting my next almost immediately. I am looking forward to new challenges, and seeing what else I can learn about myself.

Minimising Work Done vs Building A Quality Product

I can hear you now, screaming “These two things are totally opposite to each other; how can I build a quality product if I don’t do loads of work?” Don’t worry – I’ll give you some information, and you can decide whether or not it’s possible.

Let’s start with minimising work done, and what that doesn’t mean.

It doesn’t mean:

  • Not doing any work
  • Not providing any functionality
  • Not writing code

It does mean:

  • Writing the least amount of code required to fulfil a piece of functionality
  • Not spending all of your time designing before you start writing code

These are both controversial items, and they are even at odds with each other.

Sometimes we get great ideas, and feel that we must implement them immediately, otherwise they’ll never be done. If you feel this is happening to you, I’d ask you to ask yourself “Why won’t it get done?”. The answer is probably that it is not important to your users. Have you asked them if they want your idea? If they say yes, have you asked them whether it is more important than the current priorities?

On the flip side, you might decide to spend all of your time designing your solution, so you know exactly how it will achieve the desired outcome before you have implemented it. You might even spend two or three days designing a solution. Inevitably, during the implementation phase, you will come up against something you had not thought about. You may go back to the drawing board and start your design again, taking into consideration what you have just learnt. This is time consuming, and of little benefit to the overall design, as much of what you thought you knew is no longer relevant. How about taking a rough, initial, 15 minute diagram, giving it a go, and adapting the design on the fly whilst implementing it? You will probably stay roughly true to your initial 15 minute diagram, because you can adapt to what you learn as you implement the design. I also think it is important to go back and refactor, removing anything that is dubious or not needed. Deleting code is satisfying, because it is another headache you will not have in a few months’ time!

The things I want you to take away from Minimising Work Done are:

  1. Take risks and try a solution. Learn from it, and adapt as you go.
  2. Start the solution with a minimal design only. Learn from it, and adapt as you go.
  3. Use TDD/BDD to write just the code you need to fulfil the desired behaviour (see the sketch below).
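
To make the third point concrete, here is a minimal red-green sketch using JUnit 4 and AssertJ, the same test style used later in this blog. The Basket class and its API are invented purely for illustration.

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;

// Step 1: write a failing test that describes the behaviour you need – nothing more.
public class BasketTest {

    @Test
    public void totalIsTheSumOfItemPrices() {
        final Basket basket = new Basket();
        basket.add(250); // prices in pence
        basket.add(199);
        assertThat(basket.total()).isEqualTo(449);
    }
}

// Step 2: write just enough production code to make the test pass, and stop there.
class Basket {

    private int total;

    void add(final int pricePence) {
        total += pricePence;
    }

    int total() {
        return total;
    }
}

Anything beyond this – caching, discounts, currency conversion – waits until a test demands it.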

You will still feel like you are contributing, but it will feel even better, because you will be delivering exactly what your users are after, and there will be no wasted effort.

Now, building a quality product. Quality is hard; defining it for the product you’re working on can be highly subjective.

So start with asking: “What does quality mean to me?”

If the answer is “A product that does just about everything under the sun, or all these wonderful things I can think of.”, then we need words!

A common understanding is that a quality product is not just what you think is a good idea; it is also what your users or customers think is a good idea, and the things they consider a priority. For example, they may be able to work without ever logging into a system, so by forcing them to do so, you may be detracting from the usability of the product. If your application requires them to set up lots of settings in advance, but every user has the same settings, they may not even use your product at all.

Instead, turn quality into objective measures. Measuring quality is difficult, but here are some examples:

  1. Number of days the build is green
  2. Percentage up time
  3. Code coverage & testing status
  4. How many features do we deploy in a day?
  5. What is the turnaround time from inception to production?
  6. How long does my automated pipeline take to deploy to production?
  7. Is there an accessibility rating for my product?
  8. How many errors do I log in a day?
  9. How many known vulnerabilities are there in my product?
  10. How many have I fixed?

The above are objective, quantifiable measures of quality associated with a product, so you know you are building quality in from the start. There are other measures of quality, but they really are subjective. It is still important to gain feedback from your users on whether they like the product, whether it does what they need, and what they want next, but that forms Customer Satisfaction rather than quality.

So, back to my original question – “How can I minimise work done, and build a quality product?”.

By creating an initial high level design, learning and adapting, and thinking about quality as a series of objective measures, you really can minimise work done and build quality right from the word go.

Functional Slice Of Cake

Lately I’ve seen a lot of posts focussing on how to split stories, how they should be structured, and techniques for getting them into a certain state. What I haven’t seen are posts explaining the benefits, and the reasons why stories that are planned as a total slice through a system are a good idea for our development teams.

Firstly, I’d like to ask you a question: how often do your teams miss a sprint goal because of a front to back end dependency? Front to back here can be UI to API, or API to DB. Do you hear the excuse: “Well, the API changed and we didn’t have time to implement those changes”? It’s an excuse that I have used, and that I’ve heard my teams use in the past.

The idea of slices through a system is not new, but it is difficult to achieve without good planning and patience from your Scrum Master and development team, which I was lucky enough to have in my last lead role.

I want you to think of your system like a piece of extravagant Victoria sponge with three or four layers of cake. Next, I’d like you to think about how it can be cut in different ways to be manageable. You could cut it layer by layer into small bite-size pieces, and that would certainly be consumable; if you think you could fit six small pieces of sponge in one sprint, then it probably also fits the goal of small enough stories. But what if the aim is not to consume the sponge, but to reconstitute it, so that it all fits neatly together again? If you took it in small sections layer by layer, you would find it very hard to put it all back together so that it was neat and presentable. Imagine what this would look like if you deconstructed the entire sponge!

Now take a new sponge of the same extravagance. Take a very thin slice all the way through the sponge. You don’t need to try to fit it all together, as it came as a connected piece. This is how we need to get our teams thinking when it comes to our stories. A great question to ask our teams is: If this is the only story you complete this sprint, does it deliver something to our users?

This something may be just one button that they can click, which doesn’t necessarily do a lot for them on its own, but it fulfils the following criteria:

  1. It works
  2. It is usable
  3. It is displayed from the UI, and goes all the way to the bottom of your system (typically a database)

The aim of the sprint is to deliver working software, and this you have achieved. You still may not always achieve your sprint goal, but you start to reduce the dependencies and risk between different layers of your system.

The one thing that really motivates a development team is delivering working software. Nothing feels better than deploying something, and watching it work. I can tell you now, it’s a great feeling when you can deploy every story, and watch every deployment start up successfully, and for people to use the software you have just created.

One great way to ensure that your system entirely works, for every deployment, is to pair your developers across disciplines. If your front end developer pairs with your back end developer to create an API, and your back end developer pairs with your front end developer to create the UI, you can almost guarantee that the solution will work. Although they may not be familiar with each other’s tech stacks, the underlying principles are generally the same. It’s also a great opportunity to cross pollinate a little knowledge about both sides of the same coin. If you can get your infrastructure engineers to pair with all of your developers, then each person knows how to deploy the system.

All of the above starts to de-risk the team’s reliance on a single point of failure, which in turn de-risks each and every delivery the team will make.

Of course, the biggest benefit here is that if there is a weak point in the architecture or design, it’s picked up at the earliest point and can be rectified at very little cost.

As An Aside
I think it’s important here to mention “cross functional teams”. A point from a discussion I had recently: I have no problem with cross functional teams, but that does not mean that every person in the team is cross functional. Generally, in the world of development, each person in the team has a specialism, whether that is UI, backend, DB, infrastructure, etc. Whilst it is useful that each member of the team can fill in if a person is away, it needs to be understood that not everyone will have the same level of expertise as the expert in the team. T-shaped people are a great idea in concept, but we have to be careful not to stretch our team members too wide, or push them beyond their capabilities, and to remember what we hired them to do in the first place.

A Different Aspect On Things

Aspect Oriented Programming has been around for a long time; it’s not a new invention. However, in an ever changing world, new requirements and inventions come in, and sometimes we look to the old to give us inspiration for the new.

I’m not about to tell you all about the different ways you can surround your code with aspects; there are many posts out there that do that. I want to tell you about a few ways I think they are useful, to developers and to our customers alike.

The first is logging. How many times do you write a LOG.debug() statement at the beginning or end of a method? Maybe you’d like to log requests as they come in?

Dependencies

For my example, I’ve written a simple Spring Boot Web API, and these are the dependencies I’ve used.

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <version>${h2.version}</version>
    </dependency>
</dependencies>

The Logging Aspect

I want to log information about every request: how long it took, what the arguments were, and which method ran. The execution Pointcut below picks up every public method, and the @within Pointcut picks up every class annotated with @RestController. By combining them in the @Around advice, I can run code around every method that matches both. The ProceedingJoinPoint contains all the information required about the method that will be called, and enough to actually invoke it when you are ready. By making the aspect a Spring @Component, it is wired into the Spring ecosystem for you.

@Aspect
@Component
public class RequestLoggingAspect {

    private static final String MSG_FORMAT = "Method %s took %dms with args %s";

    private final RequestLogger requestLogger;

    public RequestLoggingAspect(final RequestLogger requestLogger) {
        this.requestLogger = requestLogger;
    }

    @Pointcut("execution(public * * (..))")
    public void getPublicMethods() {
    }

    @Pointcut("@within(org.springframework.web.bind.annotation.RestController)")
    public void getRestControllers() {
    }

    @Around("getPublicMethods() && getRestControllers()")
    public Object restControllerPublicMethods(final ProceedingJoinPoint pjp) throws Throwable {
        var sw = new StopWatch();
        var methodName = pjp.getSignature().toLongString();
        var args = Arrays.stream(pjp.getArgs())
                .map(String::valueOf)
                .collect(Collectors.joining(","));
        sw.start();
        try {
            return pjp.proceed();
        } finally {
            sw.stop();
            var msg = String.format(MSG_FORMAT, methodName, sw.getTotalTimeMillis(), args);
            requestLogger.log(msg);
        }
    }
}
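
The RequestLogger collaborator is not shown above; here is a minimal sketch, assuming it simply delegates to SLF4J (the real implementation could write anywhere you like):

@Component
public class RequestLogger {

    private static final Logger LOG = LoggerFactory.getLogger(RequestLogger.class);

    public void log(final String message) {
        // Delegates straight to the logging framework; swap in any other sink as needed.
        LOG.info(message);
    }
}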

The Storing Aspect

The above example logs according to your logging framework’s configuration. It may be that you need to do something more complicated. The next aspect works in exactly the same way, but let’s have a look at one that stores requests to a database.

@Aspect
@Component
public class RequestStoringAspect {

    private final RequestLogService requestLogService;
    private final MethodParser methodParser;
    private final ArgumentMatcher argumentMatcher;

    public RequestStoringAspect(final RequestLogService requestLogService,
                                final MethodParser methodParser,
                                final ArgumentMatcher argumentMatcher) {
        this.requestLogService = requestLogService;
        this.methodParser = methodParser;
        this.argumentMatcher = argumentMatcher;
    }

    @Pointcut("execution(public * * (..))")
    public void getPublicMethods() {
    }

    @Pointcut("@within(org.springframework.web.bind.annotation.RestController)")
    public void getRestControllers() {
    }

    @Around("getPublicMethods() && getRestControllers()")
    public Object restControllerPublicMethods(final ProceedingJoinPoint pjp) throws Throwable {
        var sw = new StopWatch();
        var methodSignature = pjp.getSignature().toLongString();
        var params = methodParser.parse(methodSignature);
        var args = Arrays.stream(pjp.getArgs())
                .map(String::valueOf)
                .filter(str -> !StringUtils.isNullOrEmpty(str))
                .collect(Collectors.toList());
        sw.start();
        try {
            return pjp.proceed();
        } finally {
            sw.stop();
            var arguments =  argumentMatcher.match(params, args);
            var requestLog = new RequestLogDao();
            requestLog.setArguments(arguments);
            requestLog.setExecutionMillis(sw.getTotalTimeMillis());
            requestLog.setMethodName(methodSignature);
            requestLogService.saveRequestLog(requestLog);
        }
    }
}

Here I’ve started to store some more information about the methods called, and the arguments that were captured.
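
For context, here is a plausible sketch of the RequestLogDao entity, inferred from the setters used above. The field types – particularly the flattened String for the arguments – are assumptions:

@Entity
public class RequestLogDao {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String methodName;
    private long executionMillis;

    // The matched parameter/argument pairs, flattened to a single string for simplicity.
    @Lob
    private String arguments;

    public void setMethodName(final String methodName) {
        this.methodName = methodName;
    }

    public void setExecutionMillis(final long executionMillis) {
        this.executionMillis = executionMillis;
    }

    public void setArguments(final String arguments) {
        this.arguments = arguments;
    }
}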

This can be really useful for building up monitoring information. You could also start to capture audits of who has accessed different endpoints, and what exceptions were thrown. Over time this builds up an historic picture of your application, its performance, and what your users are really doing. It can highlight bottlenecks in the application, and can help you prioritise where you focus your efforts.
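
As a taste of the exception-audit idea, here is a minimal sketch using @AfterThrowing against the same @RestController pointcut. The warn-level log call is just an illustration; it could equally be a service that persists an audit record:

@Aspect
@Component
public class ExceptionAuditAspect {

    private static final Logger LOG = LoggerFactory.getLogger(ExceptionAuditAspect.class);

    @Pointcut("@within(org.springframework.web.bind.annotation.RestController)")
    public void getRestControllers() {
    }

    // Runs only when a matched method exits by throwing an exception.
    @AfterThrowing(pointcut = "getRestControllers()", throwing = "ex")
    public void auditException(final JoinPoint jp, final Throwable ex) {
        LOG.warn("Method {} threw {}", jp.getSignature().toShortString(), ex.toString());
    }
}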

Code is available here (You’ll need Java 10 or above).

User Story Mapping

Jeff Patton defines Story Mapping as “Talk about user’s journey through your product building a simple model that tells your user’s story as you do.”

Sounds simple right? But, how often do you have a Project Management type who has already defined the stories, and just wants you to agree with the stories they have come up with? Sometimes, this works, but more often than not, the development teams need to know answers to a few questions:

  1. Who for?
  2. Why?
  3. What?
  4. How?

The answers to these questions are really important for several reasons.

It’s important for the development team to know who the users are. Firstly, it provides a point of contact in the business for complex discussions that should not go third hand. Secondly, knowing that there is someone who is actually going to use your solution provides a sense of pride in the product in question. Thirdly, you know that when you receive feedback from someone in a review, that they are an actual user, and not just a stakeholder with outside interests. Whilst these stakeholders are important, they generally want to provide non-functional requests, or understand the benefits to their teams, which can be communicated with your Product Owner in a separate session.

It’s important for the development team to know why a story is created and needs to be played. It helps with prioritisation, and even implementation. It could provide information that helps decide how robust the solution needs to be, how scalable it needs to be, how secure it needs to be.

It’s important for the development team to know what the user actually wants. More often than not, a BA or PO will decide what the solution should be without actually consulting any developers. By involving the developers in the conversations with the users, the right product for the user can be developed. It may be that the user doesn’t know what they want, in which case a developer can suggest a simple prototype to get things started. It will promote a feedback cycle with the users, for example: “I really like the way the command line tool processed my files, but I only ever use the same parameters – can they be set as a default?”

It’s important for the development team to know how the product or solution should interact with other systems. Does it need to integrate with an antiquated database? Are there new technologies that the team need to learn? Can some work be outsourced to a specialist? Can we move away from any current techniques, and provide a more modern solution?

Have you ever sat around a screen, with a spreadsheet filled with stories, but absolutely no context? From a developer’s point of view, there is nothing more frustrating than being given stories to complete without knowing the who, why, what or how. Mapping the stories with the users present ensures that the whole development team start a set of epics, features and stories from the same page, with the same understanding.

However, these meetings need to be well prepared and facilitated, as they can take all day. Sounds expensive, right? A team of developers, with the users, PO, Scrum Master and BAs, in a meeting for a whole day? I can guarantee that it will be cheaper to run this type of meeting four or five times a year than to just let a BA or PO create stories for a development team.

Here is my simple recipe for a story mapping session:

Understand The Current Process

By understanding the current process, everyone in the room can start to design a new solution to improve what your users currently do. Encourage the team to ask questions like:

  • Can you describe the difficulties or pain points?
  • Is that a time consuming part of the process?
  • How long typically does it take?
  • Is it expensive to do?
  • Does it involve a lot of people?

Start To Map Out The Problems

Get the team to map out problems on post it notes, things that form part of the current process, and things that fall out of the questions the team have asked. If something is a big problem make sure it’s highlighted on the notes.

Write Down Features That Could Solve The Problems

Go over the set of problems, noting features on post it notes that could help to solve them. Don’t worry if you have more than one feature idea for a problem – that is a good place to be, as you can start to give the Product Owner varying solutions to choose from! Make sure that the features are associated with the problems.

Determine The Stories

Start to form high level stories that could complete the features. Try to avoid implementation details, such as libraries or algorithms, but capture things like “Get sheep into field”. Think about how these stories could be accepted by the PO, and think about any non-functional requirements the users have asked for. Make sure the stories are associated with the features.

Estimate The Stories

These don’t have to be massively accurate; it’s probably better to bubble sort them based on complexity, known effort and difficulty, any risks the team can describe, and how long similar work has taken, so that you can start to give the Product Owner an idea of how long it could take to deliver an initial cut of the product. You could even just T-shirt size the stories, so you have a starting point.

In Summary

You’ll find you’ve probably used 4 or 5 post it note pads, and the team will definitely be exhausted at the end of the session, but by the time you leave, each and every person should be on the same page, as to the difficulties the users are experiencing, what a possible solution may be, and how they may achieve it. This is one of the most valuable parts of the entire process. The stories you have created should then fall through your usual refinement sessions to be fleshed out and broken down.

Constructors & DI

I’m a big advocate of constructor injection, but a recent office debate caused me to take a step back and reevaluate my beliefs.

I thought I’d take a stab at some good and bad points of different ways of injecting dependencies into a class using Spring.

Field Injection

@Component
public class FieldInjectionExample {

    @Value("${url.value}")
    private String url;

    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    private ObjectMapper objectMapper;

    public String doSomething() throws IOException {
        final String object = restTemplate.getForObject(url, String.class);
        return String.valueOf(objectMapper.readValue(object, String.class));
    }
}

So here you can see that my class looks nice and clean; there are very few lines of code required to construct my object. Although the fields aren’t final, there are no setters, so you could only modify them using reflection. It’s fairly succinct, and you can see all the things the class uses. So, I think I probably need to write some tests for this class.

public class FieldInjectionExampleTest {
    
    private FieldInjectionExample fieldInjectionExample;
    
    @Before
    public void setup(){
        fieldInjectionExample = new FieldInjectionExample();
    }
    
    @Test
    public void testDoSomething() throws IOException {
        assertThat(fieldInjectionExample.doSomething()).isEqualTo("?");
    }
}

So I think the first thing you can spot is that I am able to create my object under test without necessarily being aware of the dependencies required to make the class do its job, regardless of whether or not they are being mocked.

What constructor injection would give us here is an explicit contract, which documents exactly what is needed for the class to do its job – it will fail at start-up time rather than at run time.

It also allows you to determine whether you need to create any beans in configuration (maybe for a non-Spring-managed component), as you can check the constructor.

Assume that I have now run my test, and it has failed. I’ve realised (or remembered) that my class needs a few things to make it work.

@RunWith(MockitoJUnitRunner.class)
public class FieldInjectionExampleTest {

    @Mock
    private RestTemplate restTemplate;

    @Mock
    private ObjectMapper objectMapper;

    @InjectMocks
    private FieldInjectionExample fieldInjectionExample;

    @Test
    public void testDoSomething() throws IOException {
        final String returnedObject = "String";
        when(restTemplate.getForObject(anyString(), any(Class.class))).thenReturn(returnedObject);
        when(objectMapper.readValue(anyString(), any(Class.class))).thenReturn(returnedObject);
        final String something = fieldInjectionExample.doSomething();
        assertThat(something).isEqualTo("String");
    }

}

Ok, so here I’m pretty sure that I’ve captured everything that I need, but I will never be sure until it fails.

Setter Injection

It’s entirely possible to place @Autowired annotations on the setter methods of objects. This can be a neat way of instantiating a class without having all the dependencies available at construction time, but I believe it comes with downsides, such as how easy it becomes to create circular dependencies, and how hard it is to spot them until it’s too late.

@Component
public class SetterInjectionExample {
    
    private String url;
    private RestTemplate restTemplate;
    private ObjectMapper objectMapper;
    
    public String doSomething() throws IOException {
        final String object = restTemplate.getForObject(url, String.class);
        return String.valueOf(objectMapper.readValue(object, String.class));
    }

    @Autowired
    public void setUrl(@Value("${url.value}") final String url) {
        this.url = url;
    }

    @Autowired
    public void setRestTemplate(final RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @Autowired
    public void setObjectMapper(final ObjectMapper objectMapper) {
        this.objectMapper = objectMapper;
    }
}

This comes with much more code, and the ability to change the object at run time.

public class SetterInjectionExampleTest {

    private SetterInjectionExample setterInjectionExample;

    @Before
    public void setup(){
        setterInjectionExample = new SetterInjectionExample();
    }

    @Test
    public void testDoSomething() throws IOException {
        assertThat(setterInjectionExample.doSomething()).isEqualTo("?");
    }
}

This still has the same problem – you cannot easily see what is required to create and use the class; there is simply an expectation that it will work. You can inject your mocks in the same way as you would with field injection.

@RunWith(MockitoJUnitRunner.class)
public class SetterInjectionExampleTest {

    @Mock
    private RestTemplate restTemplate;

    @Mock
    private ObjectMapper objectMapper;

    @InjectMocks
    private SetterInjectionExample setterInjectionExample;

    @Test
    public void testDoSomething() throws IOException {
        final String returnedObject = "String";
        when(restTemplate.getForObject(anyString(), any(Class.class))).thenReturn(returnedObject);
        when(objectMapper.readValue(anyString(), any(Class.class))).thenReturn(returnedObject);
        final String something = setterInjectionExample.doSomething();
        assertThat(something).isEqualTo("String");
    }
}

Although you have setters available, it still isn’t clear what is needed for the class to do its job. Setter injection could, however, be useful for optional parameters.

Constructor Injection

Here is the same class, with dependencies injected into the constructor.

@Component
public class ConstructorInjectionExample {
    private final String url;
    private final RestTemplate restTemplate;
    private final ObjectMapper objectMapper;

    @Autowired
    public ConstructorInjectionExample(@Value("${url.value}") final String url,
                                       final RestTemplate restTemplate,
                                       final ObjectMapper objectMapper) {
        this.url = url;
        this.restTemplate = restTemplate;
        this.objectMapper = objectMapper;
    }
    
    public String doSomething() throws IOException {
        final String object = restTemplate.getForObject(url, String.class);
        return String.valueOf(objectMapper.readValue(object, String.class));
    }

}

It’s really clear to see what your class needs in order to do its job. I cannot instantiate this class without having a URL, a RestTemplate and an ObjectMapper.

This doesn’t mean that you cannot inject your mocks if you so wish, but it does let you see what the class needs to do its job.

@RunWith(MockitoJUnitRunner.class)
public class ConstructorInjectionExampleTest {

    @Mock
    private RestTemplate restTemplate;

    @Mock
    private ObjectMapper objectMapper;

    @InjectMocks
    private ConstructorInjectionExample constructorInjectionExample;

    @Test
    public void testDoSomething() throws IOException {
        final String returnedObject = "String";
        when(restTemplate.getForObject(anyString(), any(Class.class))).thenReturn(returnedObject);
        when(objectMapper.readValue(anyString(), any(Class.class))).thenReturn(returnedObject);
        final String something = constructorInjectionExample.doSomething();
        assertThat(something).isEqualTo("String");
    }
}
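
Because the dependencies are explicit, you do not even need @InjectMocks – a minimal sketch constructing the class directly with mocks (the URL is a placeholder):

@RunWith(MockitoJUnitRunner.class)
public class ConstructorInjectionDirectTest {

    @Mock
    private RestTemplate restTemplate;

    @Mock
    private ObjectMapper objectMapper;

    @Test
    public void testDoSomething() throws IOException {
        // The constructor documents – and enforces – exactly what the class needs.
        final ConstructorInjectionExample example =
                new ConstructorInjectionExample("http://example.com", restTemplate, objectMapper);
        when(restTemplate.getForObject(anyString(), any(Class.class))).thenReturn("String");
        when(objectMapper.readValue(anyString(), any(Class.class))).thenReturn("String");
        assertThat(example.doSomething()).isEqualTo("String");
    }
}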

In order to understand the different ways of injecting properties, I’ve broken down the key points in a table. This should allow you to make an informed decision as to which one is appropriate for your work.

                                                    Field   Setter   Ctor
Cannot instantiate without necessary properties                       X
Cannot create circular dependencies                   X                X
Clearly defined requirements in one place                             X
Potential for easy readability                        X                X
Allow the use of the ‘final’ keyword                                  X
Obvious when breaking SRP                                             X
Useful for non-mandatory properties                           X

What Does Spring Say?

Finally, what does Spring say about the most appropriate form of property injection?

The Spring team generally advocates constructor injection as it enables one to implement application components as immutable objects and to ensure that required dependencies are not null. Furthermore constructor-injected components are always returned to client (calling) code in a fully initialized state. As a side note, a large number of constructor arguments is a bad code smell, implying that the class likely has too many responsibilities and should be refactored to better address proper separation of concerns.

Automating AWS Backups

As developers, we all know that things go wrong; machines break (for sometimes totally unpredictable reasons) and data can become corrupt. What better way of helping to mitigate these problems than taking backups and snapshots of your infrastructure and data?

In the office we use AWS extensively for our infrastructure. We have various tools running in EC2 instances that we use for our day to day development. You’d think that a suite of tools as sophisticated as AWS would have a way of automatically backing up volumes and instances and retaining them for periods of time!

We found a bash script that pretty much did what we were after, but there were licence implications with using it. Bash scripting can be a bit of a black art, especially for sophisticated operations, so why not do it in a language we all understand – Java!

Spring Boot allows us to create a command line Java application, simply by implementing the CommandLineRunner interface:

@SpringBootApplication
public class BackupApplication implements CommandLineRunner {

  private final BackupService backupService;

  @Autowired
  public BackupApplication(final BackupService backupService) {
    this.backupService = backupService;
  }

  public static void main(String[] args) {
    SpringApplication.run(BackupApplication.class, args);
  }

  @Override
  public void run(String... args) throws Exception {
    backupService.backup();
    backupService.purge();
  }
}
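
The BackupService contract is implied by the run method above; a minimal sketch:

public interface BackupService {

  // Snapshot every volume carrying the configured tag.
  void backup();

  // Delete any snapshot whose purge date has passed.
  void purge();
}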

Connecting to AWS using an already created access key and secret is also relatively simple using the AWS SDK. You can pass the parameters below on the command line (-Daws.secret.key=KEY):

@Configuration
public class AwsConfiguration {

  @Bean
  public AWSCredentials credentials(@Value("${aws.secret.key}") final String secretKey, @Value("${aws.secret.pass}") final String secretPass) {
    return new BasicAWSCredentials(secretKey, secretPass);
  }

  @Bean
  public AmazonEC2Client ec2Client(final AWSCredentials credentials) {
    return new AmazonEC2Client(credentials);
  }
}

In order to find the instances I want to back up, they’ve been tagged with ‘mint-backup’ as the key, and the period of backup for the value (e.g. ‘nightly’, ‘weekly’). These are passed in on the command line as arguments and are used by the Backup Service:

@Autowired
public BackupServiceImpl(final AmazonEC2Client ec2Client,
  @Value("${tag.key}") final String tagKey,
  @Value("${tag.value}") final String tagValue,
  @Value("${retention.days}") final long days) {
  this.ec2Client = ec2Client;
  this.tagKey = tagKey;
  this.tagValue = tagValue;
  this.duration = Duration.ofDays(days);
}

The AWS SDK allows you to search on Tags:

private List<Volume> getVolumes() {
  final Filter filter = new Filter("tag:" + tagKey, Collections.singletonList(tagValue));
  final DescribeVolumesRequest request = new DescribeVolumesRequest();

  DescribeVolumesResult result = ec2Client.describeVolumes(request.withFilters(filter));
  String token;
  final List<Volume> volumes = new ArrayList<>(result.getVolumes());
  while ((token = result.getNextToken()) != null) {
    request.setNextToken(token);
    result = ec2Client.describeVolumes(request);
    volumes.addAll(result.getVolumes());
  }
  return volumes;
}

Now that I have a list of Volumes, I can create a snapshot of each one:

private List<String> createSnapshots(final List<Volume> volumes) {
  final List<String> snapshotIds = new ArrayList<>();
  volumes.forEach(volume -> {
    final CreateSnapshotRequest createSnapshotRequest = new CreateSnapshotRequest(volume.getVolumeId(),
        "SNAPSHOT-" + DateTime.now().getMillis());
    final CreateSnapshotResult createSnapshotResult = ec2Client.createSnapshot(createSnapshotRequest);
    final Snapshot snapshot = createSnapshotResult.getSnapshot();
    snapshotIds.add(snapshot.getSnapshotId());
  });
  return snapshotIds;
}

Once they are all created they are then tagged with a purge date, so that another process can remove them once they have expired:

private void tagSnapshots(final List<String> snapshotIds) {
  final long purgeDate = DateTime.now().plus(duration.toMillis()).getMillis();
  final Tag purgeTag = new Tag("purge-date", String.valueOf(purgeDate));
  final CreateTagsRequest createTagsRequest = new CreateTagsRequest()
    .withTags(purgeTag)
    .withResources(snapshotIds);
  ec2Client.createTags(createTagsRequest);
}

Purging is a little easier. We can request all snapshots that have a ‘purge-date’ tag, filter them based on the value of the tag so that we only keep the ones whose purge date is before now, grab the ids, create a delete request for each, and issue them to the EC2 client:

@Override
public void purge() {
  final DescribeSnapshotsRequest request = new DescribeSnapshotsRequest();
  final Filter filter = new Filter("tag-key", Collections.singletonList("purge-date"));
  DescribeSnapshotsResult result = ec2Client.describeSnapshots(request.withFilters(filter));
  String token;
  final List<Snapshot> snapshots = new ArrayList<>(result.getSnapshots());
  while ((token = result.getNextToken()) != null) {
    request.setNextToken(token);
    result = ec2Client.describeSnapshots(request);
    snapshots.addAll(result.getSnapshots());
  }
  final DateTime now = DateTime.now();
  snapshots.stream()
    .filter(snapshot -> filterSnapshot(snapshot, now))
    .map(Snapshot::getSnapshotId)
    .map(DeleteSnapshotRequest::new)
    .forEach(ec2Client::deleteSnapshot);
}

private boolean filterSnapshot(final Snapshot snapshot, final DateTime now) {
  for (final Tag tag : snapshot.getTags()) {
    // The snapshots were tagged with 'purge-date' earlier, so that is the key to check.
    if (tag.getKey().equals("purge-date") && readyForDeletion(tag.getValue(), now)) {
      return true;
    }
  }
  return false;
}

private boolean readyForDeletion(final String tagValue, final DateTime now) {
  final long purgeTag = Long.parseLong(tagValue);
  final DateTime dateTime = new DateTime(purgeTag);
  return dateTime.isBefore(now);
}

This is packaged as an executable Jar, so that it can be placed on our Jenkins instances and executed as needed (either by specifying a cron expression or clicking Build Now). It can also be run from the command line on a developer’s PC if needed. It means the process of taking snapshots of our EBS Volumes is consistent, no matter who executes it or where, which should help avoid any problems or differences between backup runs.
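
For reference, a hypothetical invocation from the command line, using the property names shown above (the jar name and key values are placeholders):

java -Daws.secret.key=AKIA... -Daws.secret.pass=... -jar backup-application.jar --tag.key=mint-backup --tag.value=nightly --retention.days=7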