If it ain’t broke, don’t fix it. Or, what?

When I promised to write about how to reduce friction in developers’ work days, I am sure that most readers expected me to focus on what non-developers ought to do to improve the working lives of developers.

But let’s face it, the major cause of friction, crappy legacy code, is our own fault. Developers wrote that crappy code, after all. So let’s focus on what we can do about that.

Software developers have a natural tendency to keep their hands off code that already works. After all, conventional wisdom tells us that it is a bad idea to try to fix something that works.

Such conventional wisdom often makes a lot of sense, even in the world of software development; any change involves an element of risk, and you surely do not want to risk introducing bugs unless you need to.

On the other hand, we need to think about the risk of not changing code. This risk is often overlooked, so I was happy that Martin Fowler, in a recent article on the second edition of his Refactoring book, reminded us how impressed he was while working with Kent Beck:

[…]one thing that really stood out was they [sic] approach he took to continually reworking the code base to keep it healthy[…]

I strongly suggest that serious software developers dig into the wisdom of those two gurus (Martin Fowler and Kent Beck) in order to understand why code which is not continually trimmed will actually rot.

In my humble opinion, there are numerous reasons for constantly trimming code. For example, new requirements cause existing code to get in the way of changes, or as time goes by you get wiser about the underlying problem you are trying to solve. In short: when you get wiser, use that wisdom to continually improve your code.

If you don’t continually trim your code, if you postpone needed refactoring for too long, then you end up with unmanageable code that you wish you could rewrite from scratch. Rewriting from scratch might seem like the only feasible option, but you will probably have to focus your time elsewhere nevertheless.

Don’t get me wrong here. I have written code which still runs unchanged after many years, without ever needing a change. In the rare cases when this happens, don’t change the code just for the sake of a principle.

Still, in conclusion, even when code works, if the way the code is crafted is no longer good enough, you really should change it.

 

Frictionless Programming

It has been my plan for a while to start writing about all the things that make a software developer happy and active – or loyal and productive, if you prefer to see it from a manager’s perspective.

When I am asked about my opinion on the topic, I usually mention that I need the right challenges, good colleagues, a decent work/life balance – and a fair compensation.

But if you have all that and you still feel frustrated every work-day, it might be because of what I intend to write about in future blog posts – too much friction.

Friction to me is everything that gets in the way when I need to get my work done. Friction to me is hour-long check-in times, processes that slow me down rather than helping me to get things right, using mind-sets, frameworks and tools that simply do not belong in 2018, and so much more.

At the end of the day, all I want is to look back and see that I actually managed to implement functionality that will benefit my users.

In future posts I will give examples as well as my opinion on what to do in order to achieve true frictionless programming.

The Indorse Decentralized Professional Network

TL;DR

Indorse is the new LinkedIn, only this time users keep ownership of their data, and skills claims are more reliable. This is interesting for existing and potential LinkedIn users as well as for investors. For software developers, Indorse is an example of a very interesting trend.

A Decentralized Professional Network

When I first stumbled upon Indorse and read the white paper, I was immediately convinced that the idea has some merit, to put it mildly. I especially liked the idea of taking ownership of my own data.

Don’t get me wrong. I appreciate that the nice people at LinkedIn let me create a profile for free and that they provide me with a nice platform to connect to people that I relate to professionally. However, if the information I put in about myself, as well as information about others in the form of endorsements, is sold for a profit, why don’t I get my cut? And now that we are talking about endorsements, how valuable is it if I claim that I am an expert in, say, Docker, and then have 5 co-workers endorse me as such? Maybe they stand behind my claim, but maybe they simply endorse me because they pity my lack of Docker experience.

Indorse aims to solve both of these problems by creating a platform for a skills economy, in which skills are seen as commodities. The platform allows users to claim that they possess certain skills and allows other users to try to verify such claims and, if verified, to indorse them. The platform has a clever mechanism that ensures that it is in the interest of users to be honest – dishonesty will be punished.

BTW, endorse is the commonly used English term that you already know; indorse, on the other hand, is mostly used in American English and refers to a signature on a legal or financial document.

What’s in it for Developers

Indorse will be built on the Ethereum blockchain distributed app platform.

I have no doubt that blockchain technology will be an important part of the new, truly distributed internet, and so far it seems that Ethereum has by far the most momentum. Bitcoin may be more popular for now, but it is limited to currency.

In contrast, Ethereum also stores smart contracts in its blockchain, while its currency Ether (or ETH) is mainly used as fuel (gas) for transactions. Ethereum also allows easy creation of custom currencies (also known as custom tokens).

Smart contracts are programs, typically written in the Solidity programming language. For example, a smart contract can be used to create a new custom token with a few lines of code and a few easy steps.

IMHO, learning about blockchain technology and Ethereum must be on any serious software developer’s continuous learning TODO list, and it is great to have good examples like Indorse for inspiration.

What’s in it for Investors – and what is an ICO

If a good old company like, say, Saxo Bank goes public, we call it an IPO, an initial public offering, and Saxo Bank’s stock will be registered on a good old-fashioned exchange.

Not so with Indorse. They will do an ICO, an initial coin offering. Indorse is one of many recent ICOs.

Technically, this means that a custom token, called IND, will be created and the public will be offered the chance to buy it, thereby funding the Indorse project.

Any developer can create a new token, but doing a full ICO takes a bit more than that. Indorse has teamed up with TokenMarket, a company that helps with ICOs.

In addition to that, Indorse has registered with the Wings platform. Wings users can make educated guesses about how well the Indorse ICO will go.

Is IND a good investment? The short answer is that I really have no idea. You could win big time or you could lose all invested money.

But if you check my forecast on Wings you will see that I lean to the positive side. Well, at least I expect the offered IND tokens to sell out.

But even if all IND tokens sell out in the ICO, it does not necessarily mean that it is a good investment. First of all, the liquidity of IND, as well as the liquidity of any other ICO coin, is lower than the liquidity of money. Also, the ICO is essentially a way to take money from investors, nothing more, and investors must scrutinize the mechanisms for getting money back in case the business becomes successful. I am not saying that Indorse does not have a good plan for that, I am just saying that each investor should make up her own mind.

The serious investor will always do due diligence. Fortunately, a clever mind has already written about exactly that on SteemIt, and I strongly encourage potential investors to read this post. Also, the serious investor will think about her motivation for each investment – is it in line with her investment strategy, or is she simply afraid to miss out when everybody else is cashing in?

As I have already mentioned, I find the idea of Indorse to be very convincing after having read the white paper.

On the other hand, as a developer I dislike that I have access to neither the product nor the source code yet.

The product is supposed to be available in a pre-beta version, announced on Slack, sometime towards the end of July 2017.

It is currently being discussed internally in the Indorse team how the code – or some part of the code – can be made open source. At the time of writing there is nothing on GitHub.

So, will you join the IND token sale? It is open for pre-sale at the time of writing, but you need to invest at least 10 Bitcoin or 100 Ether in order to participate in the pre-sale. If that is too much for you, a general token sale will open shortly after.

Finally, a word of caution – there have been several attempts at phishing Ether from investors, so don’t send Ether to any address unless you are absolutely certain that it is the correct address. I suppose that statement is universally true, but in the case of ICOs there are bad guys out there who seriously want to trick you out of your cryptocurrency.

 

Conclusion

This post is quite different from what I usually write – stay tuned for more developer-centric posts on blockchain in general and Ethereum contract development in particular. If time permits, that is (I do have a day job). But for now …

As a current LinkedIn user I look forward to using Indorse.

As an investor I will closely watch IND.

As a developer I want to learn how to make such a distributed app on Ethereum.

My Ether addy is 0x6B1c08c8D867314045384Eb0c5a3799375eec6F9. (Addy means address in blockchain geek slang.)

I do not own IND (well, nobody does at the time of writing because the Ethereum contract has not been made yet), but I expect to receive some through the Indorse bounty program on BitCoinTalk.

Building a Safety Net for Continuous Delivery with Developer Tests

It is impossible to develop a software system with a certain level of complexity unless it is built on top of a smaller working system.

I wanted to credit Bjarne Stroustrup for expressing this point of view as early as 1985 in his book The C++ Programming Language, but after re-reading his Notes to the Reader, I see that the quote I remembered was about the importance of well-structured code (in which C++ excels over C), not correctness.

Still, I don’t think I am alone when I claim that we software developers find it natural to develop iteratively, thereby continuously building on top of the last iteration, the last working system.

The question is, how do we know that the system we build on is working?

The truth is that unless we have a very good verification process, we don’t know whether we are building on a working system.

In a good continuous delivery process, we will have waves of verification in the form of continuous integration builds and deployments, automatic and manual testing by testers etc.

Naturally, developers will also write unit tests as an integral part of developing code, thereby ensuring that each implemented responsibility behaves as expected in isolation.

But is this good enough? Will it ensure that each iteration builds on a working system?

I think it is not good enough because,

  • Testing is usually decoupled in time and space from the development process.
  • Unit testing only verifies tiny pieces of logic in isolation, but bugs typically show up when these pieces of logic are composed into higher level behaviour.

If you ask me, developers need to write what I call developer tests.

Developer Tests

A developer test is similar to a unit test, the difference being that we never mock any dependencies unless we absolutely must. For example, we mock external web services that our code calls, but we do not mock database access.

When we run a developer test, we run the exact same code as is run in the production system, which means that the behaviour of the test will closely match the behaviour of the production system. This means that the verification which is done by a developer test is very reliable.
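
To make the idea concrete, here is a minimal sketch of what a developer test could look like in C# with NUnit. All the class names – the handler, the reader, the fake web service and the test composition root – are invented for illustration and are not the actual TradingFloor.com code; the point is simply that only the external web service is faked, while everything else, including database access, is the real production code.

using NUnit.Framework;

[TestFixture]
public class PlaceOrderDeveloperTests
{
    [Test]
    public void PlacingAnOrder_PersistsTheOrderAndNotifiesTheUser()
    {
        // Arrange: build the real object graph, swapping in a fake only
        // for the external notification web service.
        var notifications = new FakeNotificationWebService();
        var container = TestCompositionRoot.Create(notifications);

        // Act: run the exact code path the production system runs,
        // including real database access.
        var handler = container.Resolve<PlaceOrderHandler>();
        handler.Handle(new PlaceOrder(userId: 42, instrument: "EURUSD", amount: 1000m));

        // Assert through a real reader and through the faked boundary.
        var orders = container.Resolve<OrdersReader>().ReadOrdersFor(42);
        Assert.That(orders.Count, Is.EqualTo(1));
        Assert.That(notifications.SentMessages.Count, Is.EqualTo(1));
    }
}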

When I develop new code, I always exercise the new behaviour through developer tests. This is typically much easier than setting up the production system with the relevant users, the relevant permissions and the relevant data to query and alter.

When my new developer tests turn green, I feel confident that my new behaviour works as intended, not only in isolation but also when run in context with huge parts of the existing functionality.

When I have verified that the existing developer tests are green I feel confident that I did not introduce regressions.

Then I check the code changes into the main branch and the new feature will be in the next release a short while after.

What Makes Developer Tests Work

I have developed the concept of developer tests over the last couple of years while working on TradingFloor.com. Since it is now second nature to use developer tests as an integral part of the software development process, it is difficult to remember why this seemed difficult, or impossible, to do just a couple of years ago.

A major part of the reason that developer tests work in TradingFloor.com is that the code is (largely) written with sensible principles in mind, and in this context one of the SOLID principles, the Dependency Inversion Principle (DIP), is essential. And furthermore, using Dependency Injection is practical.

This means that when I exercise my new behaviour through the method Foo on class Bar …

public class Bar
{
    public Bar(IMyDependency1 dep1, IMyDependency2 dep2) { /*…*/ }

    public void Foo() { /*… use dep1 and dep2 */ }
}

… then I also run the code of the two dependencies (and their dependencies, and their dependencies …), including any kind of logging, interception and whatnot. This is in contrast to a unit test in which I would mock the two dependencies.

In addition to DIP, our experience is that the Command Query Separation (CQS) principle is a great help for our code structure in general, and in particular this principle makes writing developer tests easy. I suppose you can imagine that a code base composed of queries (we call them readers) and command handlers is very handy when building up a test scenario and when asserting the outcome of a test.
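
As an illustration (with invented names rather than our actual code, and with Order and IOrderStore assumed to be existing domain types), readers and command handlers could be shaped like this:

using System.Collections.Generic;

// A query (reader): returns data and has no side effects.
public interface IOpenOrdersReader
{
    IReadOnlyList<Order> ReadOpenOrders(int userId);
}

// A command and its handler: causes side effects and returns nothing.
public class CancelOrder
{
    public int OrderId { get; set; }
}

public interface ICommandHandler<in TCommand>
{
    void Handle(TCommand command);
}

public class CancelOrderHandler : ICommandHandler<CancelOrder>
{
    private readonly IOrderStore _orders;

    public CancelOrderHandler(IOrderStore orders)
    {
        _orders = orders;
    }

    public void Handle(CancelOrder command)
    {
        _orders.Cancel(command.OrderId);
    }
}

In a test you can then send a few commands to build up the scenario and use a reader to assert on the outcome, without any test-specific back doors into the system.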

Why is the Entire World not Using Developer Tests?

Developer tests allow for faster development, they provide fast feedback on correctness during development and they provide a safety net for the future.

Yet, I have not seen other developers rush to get on board and start using developer tests. Why?

Here are some of the counter arguments I have heard so far,

  • It cannot be done.
    That argument is a couple of years old. Today we are doing it on a daily basis.
  • It is too slow.
    No, our 850+ tests run in one minute on a typical developer PC.
  • Developer tests are very brittle.
    No, it is the other way around. Unit tests are often very brittle because you need to redo your mocking when refactoring code. Developer tests don’t have this problem, and they are surprisingly robust to refactoring.
  • I cannot do it because my code is much more complex than your code.
    If your code is really complex, working without a safety net is not an option! You can do it.
  • I run a heavy SQL database, tests will be too slow and difficult to set up.
    Right, we run a NoSQL database, so building up an entire database per test is fast and easy (see the sketch below). Installing the database locally and on any build or test system is also easy and fast. All of that will be more of a hassle with some SQL databases, but not impossible. If you have to, you can isolate SQL access and mock it out, but I would prefer not to.
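
As a sketch only – the post does not show our actual setup, and TestDocumentDb below is an invented stand-in for whatever NoSQL client a project uses – building a fresh, uniquely named database per test can be as simple as this:

using System;
using NUnit.Framework;

public abstract class DeveloperTestBase
{
    protected TestDocumentDb Database { get; private set; }
    private string _databaseName;

    [SetUp]
    public void CreateFreshDatabase()
    {
        // Each test gets its own uniquely named database, so tests never
        // see each other's data and can run in parallel.
        _databaseName = "devtest-" + Guid.NewGuid().ToString("N");
        Database = TestDocumentDb.Create(_databaseName);
    }

    [TearDown]
    public void DropDatabase()
    {
        TestDocumentDb.Drop(_databaseName);
    }
}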

Where Are We Now

I would love to share more details, but I feel that I need to introduce developer tests to at least one more project before I can describe the approach in general terms without going into project-specific detail.

I will come back with more information once I have done that. In the meantime, if you would like me to elaborate on this or that, please ask.

Continuous Delivery – a Safety Net will make you Lazy

I sincerely believe that some kind of safety net is needed when coding.

In fact I believe that having a safety net is especially important when doing continuous delivery.

Before I managed to blog on my opinions regarding this, I read Scott Berkun’s book The Year Without Pants in which Scott makes quite the opposite argument based on his experience from Automattic (and contrary to his experience from Microsoft).

A Safety Net will make you Lazy

Essentially, Scott backs up Automattic’s belief in the philosophy: safeguards don’t make you safe; they make you lazy. This may to a certain extent be true in some cases, as some people drive faster when they get ABS brakes, and football players take more risks because of their padding. By the same token, if you find yourself in a high tower with no railing, you will be very cautious about every step you take, as a fall would kill you. And since you are very cautious, it is unlikely that you will be killed.

Does this philosophy work in software development? Should we skip manual and automatic testing as well as other kinds of verification before we deploy the latest changes to the Live system? Should we essentially skip the entire safety net and rely on developers being very cautious?

No!

Being cautious only takes you part of the way. And if you are too cautious, there will be much-needed changes to the code that you will never dare make. Besides, even cautiously made changes can have unexpected effects – regressions – in other parts of the code. If you are too cautious, your code will rot and eventually become unmanageable. (There is a brilliant description of code rot and how to avoid it in Robert C. Martin’s book Clean Code.)

Still, in a perfect world, coding without a safety net could actually work. In theory it’s simple, and I have already blogged about it. First of all, all changes to code must be small, additive increments – baby steps. Secondly, the code must be crafted by rigorously following the SOLID principles. With a perfect code base with low coupling and high cohesion, most baby-step changes would consist of adding new code that plugs in without changing existing code, or of a few simple changes to a single existing class; in either case the impact on the system would be restricted and well understood.

Alas, the world is not perfect, neither is the code base that most developers work on.

Besides, sometimes you need to do a refactoring that will impact quite a bit of functionality. Sometimes you change a single line or a few lines of code, but there is no simple way to fully understand the impact. In both cases the risk of regressions can be lowered with rigorous verification.

Building a Safety Net

I am quite happy that I read Scott’s book, as it made me think a bit deeper about building up a safety net. (And I can certainly recommend the book to anyone interested in the process of developing software.) Note that what I mention here is regarding the part of the safety net that developers must build and maintain.

Here is my opinion:

  1. Build and maintain automatic tests for non-trivial functionality.
  2. Do not build tests for trivial, unimportant or easily verified functionality.

The second bullet is based on my experience that huge numbers of tests are often written, but the maintenance burden is so high that the tests are not maintained, new tests are not written, and (unit) testing in general gets a bad reputation among developers. In such a case, needed refactoring is generally avoided and the code will rot. For these reasons, it is good practice to avoid tests that would only reveal bugs of low severity, many of which would be found anyway with a quick glance at the system.

So the trick is to have exactly the tests that make sense and ensure that they are maintainable.

Even then, a safety net could make a developer lazy. It is never an option to simply throw the code over the fence to the Testing Department, effectively making buggy code somebody else’s problem. Rather, developers must build up a safety net as an integral part of developing code.

Being cautious and having a safety net is the way to go.

Why do some Developers Prefer not to have a Safety Net?

As a final note, I have a possible explanation of why Automattic developers prefer working without a safety net.

Scott explains how he once went to India and climbed the stone tower of Jantar Mantar. There was no railing and a fall would kill anyone. But people were cautious because of the lack of safety measures.

I also climbed the stone tower of Jantar Mantar years ago when I was much younger. I clearly remember looking down at our hilarious guide on the ground, but I do not particularly remember the missing railing.

Could it be that focus on safety measures increases with age and experience?

Coding for Continuous Delivery – the SOLID Principles

I never thought that I would find myself blogging on the topic of the SOLID Principles. Not that I think it is unimportant – it certainly is an important topic – it is just that it is so easy to find books, blogs and other written material on the topic already.

On the other hand, people often ask me if it requires a special coding style to implement Continuous Delivery. And I often reply that no, you are not forced to use any specific coding style. But it goes without saying that you need to deliver your code continuously. And in the spirit of Continuous Delivery you need to minimize the risk of each delivery.

Which means that you need to do small (baby-step), additive and non-breaking deliveries.

If you are in the habit of checking in several times a day and you are confident that your code is delivered to the Live environment at the end of each day, or several times a day, then you are good to go and you may not need to read any further.

On the other hand, if you find it impossible to chop up your implementation into small, additive pieces, you should read on to learn my take on the SOLID principles.

And if you are then still not confident that following the SOLID principles will help you ensure that each of your check-ins can potentially be released any time of day, you will want to read my future blog posts on coding with a safety net and on patterns that fit nicely into all this.

The SOLID Principles

I will not go into details of the SOLID principles but rather sketch how I believe they interrelate in order to make my point. I do not strictly follow Uncle Bob’s original explanation regarding what principle(s) follow by rigorously applying what other principle(s), so you may find my points a bit controversial.

First of all, we need to think about responsibilities in all coding, hence the S – Single Responsibility (SR) – meaning that we want to put code that will change for the same reason together. This means that huge classes or methods are out of the question, as these will almost certainly contain multiple responsibilities. One point that many well-meaning developers overlook is that it is also poor practice to split the code into units that are too small. Splitting a class with a well-defined and coherent responsibility into several smaller classes will only give you more classes to manage – and the unit tests will be difficult to understand and maintain.

Secondly, we need to think hard about the way we allow our responsibilities to depend on each other. While it might seem obvious that high-level components must depend on lower-level components, this kind of code structure tends to create highly coupled systems that are difficult to maintain. The letter D in the SOLID Principles – Dependency Inversion (DI) – states that high-level modules must not depend on lower-level modules; rather, they should depend on abstractions which do not depend on details.

In day-to-day work I usually state that Dependency Inversion essentially means that a class with a given responsibility must assume that its dependencies are injected into it, without the class implementing any of the mechanics needed to get or dispose of the right instances at the right time. These mechanics must be implemented in a single place in the program, the Composition Root. This day-to-day view is not exactly Uncle Bob’s original thinking, and it also ties into Dependency Injection, which is not strictly necessary in order to follow the DI principle. But using Dependency Injection makes it, in my opinion, much easier to create a code structure which allows us to do Continuous Delivery.
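
As a minimal sketch (with class names invented for illustration), a hand-rolled composition root could look like this; in a real project an IoC container would typically do the wiring, but the point is that it happens in one place only:

public static class CompositionRoot
{
    public static OrderService CreateOrderService(string connectionString)
    {
        // All decisions about which concrete classes are used, and how they
        // are wired together, live in this one place. The rest of the code
        // base only ever sees the abstractions that are injected into it.
        IOrderStore orderStore = new DocumentOrderStore(connectionString);
        INotifier notifier = new EmailNotifier();
        return new OrderService(orderStore, notifier);
    }
}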

The impact and importance of Dependency Injection can only be fully understood and appreciated by spending time with it on real-world projects. Personally, I had a hunch before I started to use DI that it could be useful. Today, I find it hard to imagine how to get by without it.

If you can accept that DI is almost a corollary of SR, then you can probably also accept that the way to achieve the O of the SOLID Principles – Open/Closed (OC) – is through DI and SR. OC means that the code must be open for extension but closed for modification.

When I say that baby steps must be additive, i.e. that new code is added but existing code is left unmodified, I am really referring to OC. When developers respond that OC is hardly ever possible, I acknowledge that it is an ideal that cannot always be achieved, but if you pay close attention to the responsibilities of your code and inject all dependencies, it is surprisingly often possible to follow OC. I don’t blame developers who get OC explained and then fail to understand how to put it into practice. My advice is to focus on SR and DI for some time, then revisit OC – it will most likely make much more sense then.
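
As a sketch of what such an additive baby step can look like (names invented for illustration), new behaviour arrives as a new class implementing an existing abstraction, while the existing classes stay untouched:

// Existing extension point and existing implementation - left unmodified.
public interface IPriceAlert
{
    bool ShouldTrigger(decimal price);
}

public class UpperLimitAlert : IPriceAlert
{
    private readonly decimal _limit;
    public UpperLimitAlert(decimal limit) { _limit = limit; }
    public bool ShouldTrigger(decimal price) { return price >= _limit; }
}

// The baby step: a new requirement is met by adding a new class that
// plugs in through the same interface - no existing code is changed.
public class LowerLimitAlert : IPriceAlert
{
    private readonly decimal _limit;
    public LowerLimitAlert(decimal limit) { _limit = limit; }
    public bool ShouldTrigger(decimal price) { return price <= _limit; }
}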

The I of the SOLID Principles – Interface Segregation (IS) – means that you should generally depend on narrow, client-specific interfaces, not concrete implementations. That is almost a no-brainer: if your code depended on concrete instances, you could not inject dependencies and leave the IoC container with the responsibility of creating whatever concrete instance makes sense. And having bloated, non-client-specific interfaces would be a violation of the SR principle.

Depending on interfaces rather than concrete implementations means that you will have classes with interfaces, even though each of those interfaces only has a single implementation. Some developers dislike that. On the other hand, IS does not demand that all classes have an interface. For example, instances that are essentially values to be passed around can be newed up independently of the IoC container and generally do not need an interface.
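
For example (again with names invented for illustration), a narrow, client-specific interface is what gets injected, while a simple value object is just newed up and needs no interface at all:

// A narrow, client-specific interface - resolved through the IoC container.
public interface IQuoteReader
{
    Quote ReadLatestQuote(string symbol);
}

// A value that is simply passed around - newed up directly, no interface needed.
public class Quote
{
    public string Symbol { get; private set; }
    public decimal Price { get; private set; }

    public Quote(string symbol, decimal price)
    {
        Symbol = symbol;
        Price = price;
    }
}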

The final letter of the SOLID Principles is L – Liskov Substitution (LS). This principle means that you need to restrict yourself when using inheritance. To some developers, this is highly provocative since inheritance is core to object oriented programming and restricting its use seemingly means crippling the underlying principles of object oriented programming.

Personally, I learned early in my career how easy it is to get into trouble if you happily use inheritance without giving it a second thought. Deep inheritance hierarchies are probably a thing of the past that nobody would create today, but even with a single level of inheritance, even without multiple implementation inheritance, it is so easy to create unmanageable code. I actually abandoned inheritance completely for a few years, since I could not see anything but traps and pitfalls. I am happy that Liskov handled it more intelligently.

LS essentially means that if you inherit a class S from a class T used in your program, then you can use S in your program and it will still work according to requirements. In other words, if your program works with your composition root set up with T, it must also work with the composition root set up with S. Note that the principle goes beyond just plugging in S to see if it will crash and burn your current system – LS states that you must be cautious whenever you inherit, as otherwise you will see problems in the future. It is about maintainability.
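
As an illustration (the classic textbook example, not taken from the original post), the Square/Rectangle pair shows a violation: Square compiles and plugs in wherever Rectangle is expected, but it breaks the expectations callers of Rectangle rely on:

public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public int Area { get { return Width * Height; } }
}

public class Square : Rectangle
{
    // Keeping the square "square" silently changes both dimensions...
    public override int Width
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }

    public override int Height
    {
        get { return base.Height; }
        set { base.Width = value; base.Height = value; }
    }
}

// ...so code written against Rectangle no longer works according to requirements:
// Rectangle r = new Square();
// r.Width = 2; r.Height = 3;   // r.Area is now 9, not the 6 the caller expects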

Where are we now?

So where does all this leave us? If we follow the SOLID Principles, will we automagically have Continuous Delivery?

I am afraid not. Code structure is only one among several cogwheels in the larger machinery of Continuous Delivery.

However, I firmly believe that these principles are essentially common sense and that putting common sense in code will never harm.

In fact, it’s hard for me to imagine how we could have had Continuous Delivery work so well for us in TradingFloor.com if we had not focused on at least Single Responsibility and Dependency Inversion.

The ABC of Continuous Delivery

You probably know the triple L of real estate: location, location and location.

In the same way some would argue that the three most important focus areas, the triple A of continuous delivery, are automation, automation and automation. While I believe there is some truth to that, I am even more convinced that it’s not AAA, but rather ABC.

The A is obviously automation. Read the book, and you will see why it is so obvious.

The B is obviously baby steps. Read my last blog post, and you will see why it is so obvious. I will blog more about that later.

The C is not-so-obviously clean code. You didn’t see that one coming, did you? It could also mean consistency, which might seem to be a sub-set of clean code, but if you ask me it is much more than that and also much more important. I will blog more about that later.

Why is Clean Code so Important?

When I write about clean code in this context it is mainly in the Uncle Bob sense. Read the Clean Code book, and you will know what I mean.

You may wonder why it is so important to focus on the nitty-gritty details of code structure. Isn’t the larger picture of patterns and practices more important?

The answer to that is, yes, patterns and practices are absolutely infinitely more important. But keeping the code clean is important for the simple reason that you cannot write code unless you read existing code, so in order to efficiently write code, the code already written must be readable. And code that is structured strictly according to agreed upon rules is more readable than less consistent code.

While I am a fan of Uncle Bob, I am also sceptical about coding standards that cannot be automated. So, I must admit that the code guidelines I follow for my C# code are largely determined by close-to-100% default ReSharper settings plus a handful of StyleCop rules. These rules are sensible, they are automated and thus really easy to follow. They do not automate all aspects of clean code, but they go far in that direction. I also use a ReSharper plugin for spell checking, as it helps me to semi-automatically use meaningful identifiers.

For a team that consistently follows such agreed-upon and automated rules the benefit is obvious, since a developer will feel at home in any part of the code, regardless of who wrote it (well, honestly it takes a bit more effort to fully achieve that, but I will blog about that later). This is important for a team that needs to continuously deliver.

How does this Relate to Lean Manufacturing?

Some will argue that Lean Software Development and Lean Manufacturing differ in at least one important aspect.

For Lean Manufacturing, it is important that any kind of variation is minimized, since variation tends to create a ripple effect that will cause trouble in a physical factory. This effect is described in an easy to understand fashion in the two novels I mentioned in my last blog post.

But software is different, right? Minimizing variation would kill creativity and all software projects are so different and novel that it would be futile to fight variation, right?

If you ask me, we must fight unneeded variation in code. If you insist on placing curly brackets in your own way, and in general follow your own style instead of a style agreed upon by the team, then you create unneeded variation which will make your team mates less efficient.

Instead, spend your brain cycles on whatever makes your product valuable. If you don’t know what that might be, find yourself another job.

Notice how fighting unneeded variation goes hand in hand with automation. Another example of this is using an automation tool that you already use every day – the compiler – to write code without compiler warnings. Compiler warnings often make a lot of sense, and if you get used to ignoring some of them, you might one day accidentally ignore one of the really sensible ones.

On my team, we have set the warning level to the highest possible, and defined that warnings should be treated as errors.
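
For a .NET project this is a one-time change to the project file; the snippet below is a minimal sketch of the relevant MSBuild properties:

<PropertyGroup>
  <!-- Highest warning level, and any warning fails the build. -->
  <WarningLevel>4</WarningLevel>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>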

How to get There

If you write new code in a new project, it is a no-brainer to write clean code from day one.

If you work on legacy code, as most of us do most of the time, you need to think a bit about how to get there. Do you change all the code in one revolutionary check-in? Do you evolve the code by only following the guidelines for new code? Do you clean existing code only if you need to make significant changes to it?

If you ask me, you might as well clean up as much code as you can automatically, if you have a tool that you trust. Once you need to make manual changes, you might break otherwise working code. Then you need to think a bit more about the process and be cautious – especially if you don’t have a safety net of automated tests.

That’s It

All this is really a no-brainer. Just do it, gain the benefit – and be prepared for the next level of clean code in which you consistently apply agreed upon patterns and practices.

On the other hand, if you find this level of clean code really hard to achieve for a team and its source code, you should consider whether this team is truly prepared to venture into the world of continuous delivery.