In my first blog post I explained why technological diversity is a good thing. However, it's important to evaluate new technologies carefully so that experimenting with them is less risky.
Without a commitment to renewal, today’s modern technology can easily become tomorrow’s unsupportable legacy system. Over time, it becomes harder to hire and retain developers and more costly to update - which only makes the problem worse.
We believe that product teams should be encouraged to trial new technologies. We’ve published this guidance to help them make sensible choices and minimise risk.
Questions To Ask
When considering a new technology, ask the following questions (as a minimum):
Maturity
- Is it mature enough for production use?
- Can we identify at least one established organisation using it in production?
- Are they happy with it?
Availability
- Will it be easy to find available developers for this technology?
- Do we have developers in-house who are familiar with it already, or interested in learning it?
- Is there a decent-sized and/or growing pool of developers in the market, should we need to hire in future?
Applicability
- Does it solve a problem that we:
a) can’t solve more easily with a more established technology, and
b) expect to encounter on this product?
Learning Curve
Is it similar to an existing technology that we already use? For instance, it’s quite straightforward for an experienced Ruby developer to learn Python or Crystal. However, Erlang and Clojure are conceptually quite different, and will take a Ruby developer more time to learn.
If the answers are mostly yes, then the risk of the new technology is comparatively low. If the answers are mostly no, the risk is much higher, and the choice should be considered carefully.
Minimising Risk
Trials of new technology should be treated as experiments. They will always carry some risk - at the very least, the risk of some lost time if you decide to stop the experiment.
There are several ways to manage the risk:
| Approach | Details |
| --- | --- |
| Be clear that it’s an experiment | The purpose of an experiment is to test a hypothesis: to answer a question such as ‘is (new technology x) a suitable choice for (this component) at this time?’. A conclusive ‘no’ is no more a failed experiment than a ‘yes’ - either way the team’s knowledge has increased, the question is answered, and the team can move on. |
| Be small and well-defined in scope | Aim for something self-contained and rewritable in two weeks - a single microservice is ideal. |
| Ensure a full set of automated tests | In the event of a rewrite, this ensures that the insights into the problem gained during development are not lost, and that a replacement can be proven to be functionally identical (see the sketch below this table). |
| Collective learning & knowledge sharing | Solo experiments are riskier: they can introduce critical dependencies on single team members. Conclusions should be presented back to the team. |
| Clear criteria for continuing or discontinuing each experiment | For instance, a demo and a kill-or-continue team vote at the end of each sprint, or an improvement in some measurable metric. Criteria should be agreed within the team for each experiment. |
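To make the automated-tests row above concrete, here is a minimal sketch of a black-box test suite, written with Python’s standard unittest module. It is not from the original guidance: the SERVICE_URL environment variable, the /orders endpoints and the field names are hypothetical, standing in for whatever API the component under trial exposes. Because the tests exercise the service purely over HTTP, the same suite can be run against the existing implementation and the experimental rewrite to show that they behave identically.

```python
import json
import os
import unittest
import urllib.error
import urllib.request

# Point the suite at whichever implementation is under test, for example:
#   SERVICE_URL=http://localhost:3000 python -m unittest contract_tests
SERVICE_URL = os.environ.get("SERVICE_URL", "http://localhost:8080")


def get_json(path):
    """GET a path from the service under test and return (status code, decoded JSON body)."""
    with urllib.request.urlopen(f"{SERVICE_URL}{path}") as response:
        return response.status, json.loads(response.read().decode("utf-8"))


class OrderServiceContract(unittest.TestCase):
    """Behavioural expectations that any implementation of the (hypothetical) order service must satisfy."""

    def test_existing_order_is_returned_with_its_total(self):
        status, order = get_json("/orders/42")
        self.assertEqual(status, 200)
        self.assertEqual(order["id"], 42)
        self.assertIn("total_pence", order)

    def test_unknown_order_returns_not_found(self):
        with self.assertRaises(urllib.error.HTTPError) as raised:
            get_json("/orders/999999")
        self.assertEqual(raised.exception.code, 404)


if __name__ == "__main__":
    unittest.main()
```

Nothing in the suite depends on the implementation language, so the rewrite can be in whichever new technology is on trial.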