You write your tests and you ship your code, but many teams stop there: "our tests are green, it's in production, let's get onto the next ticket!" But then alarms go off. "But our tests were green! It can't be OUR code... Can it?" Sure it can. Production is a sensitive, complicated place. There are loads of variables: load balancing, numerous servers of different kinds, network outages, transient slowdowns for no apparent reason, databases with more than a hundred things in them, data that shouldn't be possible, third-party dependencies (you know, those things you mocked out)... Oh, and customers - they do really strange things too. When it goes wrong, it costs money per minute. It's a high-stress, high-pressure environment, and continuous delivery can make it worse. I'm going to talk about some of the things you can do to tame the chaos that continuous delivery can enable:
• make your applications tell you whether they're working or not (see the sketch below)
• alerts demystified - they're just test automation that runs all the time in production
• configuration management - it's just build scripts for servers, not code
• how to kill environment drift with infrastructure as code
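To make the first two bullets concrete, here is a minimal sketch (mine, not from the talk): an application that reports whether it is working right now via a /healthcheck endpoint, plus a probe that treats that endpoint as a test which runs all the time in production and raises an alert when it fails. The check_database and page_someone functions are placeholder assumptions; wire them to your real database and paging system.

# Minimal sketch, Python standard library only. Assumptions: the /healthcheck
# path, the check_database dependency check, and the page_someone alert hook
# are all placeholders for whatever your platform actually uses.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


def check_database() -> bool:
    """Placeholder: replace with a real connectivity check against your database."""
    return True


class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The application tells you whether it is working: it runs its own
        # dependency checks and answers 200 (healthy) or 503 (unhealthy).
        if self.path != "/healthcheck":
            self.send_error(404)
            return
        checks = {"database": check_database()}
        healthy = all(checks.values())
        body = json.dumps({"healthy": healthy, "checks": checks}).encode()
        self.send_response(200 if healthy else 503)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


def page_someone(message: str) -> None:
    """Placeholder alert hook: wire this to your paging/alerting system."""
    print("ALERT:", message)


def probe(url: str = "http://localhost:8080/healthcheck") -> None:
    # An alert is just a test that runs all the time in production:
    # run this on a schedule and page someone when the assertion fails.
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            if resp.status != 200:
                page_someone(f"healthcheck unhealthy: HTTP {resp.status}")
    except Exception as exc:
        page_someone(f"healthcheck unreachable: {exc}")


if __name__ == "__main__":
    HTTPServer(("", 8080), HealthHandler).serve_forever()

In practice you would run the probe from a scheduler or monitoring system outside the application itself, so it also catches the cases where the whole process, host, or network path is down.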
Pete Mounce: How do you know your code is working RIGHT NOW?
I have worked in developer teams, an operations team, and been a generalist shipping software for JUST EAT since 2010 (with a few years doing that elsewhere first). I've lived through cultural and technological change and played a role in effecting it. I had a pivotal role in migrating our platform from a datacentre to the AWS cloud - consequently, we now do tens of deployments a week compared to tens a year. I've automated systems, tests, servers, and in some cases myself. I've touched many parts of the platform and delude myself that I do an OK job at that.