Automated database deployments are rather disruptive. But they are disruptive in a good way. Depending on the company, database deployments can be a "wild west" process where every team has its own way of deploying changes, and getting any sense of consistency is difficult. When I was working on this process, I learned the hard way that getting buy-in from key people is critical. In some ways it is more critical than the automated database deployment process itself, because without buy-in the process will wither and die very quickly.
There are many advantages to working for an enterprise. A lot of things, such as keeping the web servers up and running, maintaining the network, and making sure the SQL servers have enough disk space, are off-loaded onto other teams. This is great because it lets me focus on development. But introducing a new process, especially a highly disruptive one, is one of the areas where working in an enterprise is harder. This process affects a lot of people. It is just the way it is.
The key here is perseverance and attitude. Or as I like to call it, "don't be a dick." People are going to push back because this is "moving the cheese." This is another one of those times where I wish I had known then what I know now. I really thought I would help put the process together, everyone would see how great it was, and then adopt it quickly. I didn't think about getting buy-in because I assumed everyone would see the benefits right away. At one point, before the process was fully adopted, I got a lot of push-back from a couple of key people. I responded poorly. As in, I acted like a child who didn't get to play with their favorite toy. Attitude can make all the difference, and mine was piss poor. Thankfully it didn't delay the adoption too much, but I got very lucky. It could have been a disaster.
Here are the key job roles I (and the people helping me) focused on getting buy-in from while we were creating the process:
- Database administrators (DBAs)
- Database developers
- Application developers
- Leaders
At Farm Credit Services of America, we have over 10 development teams, and the DBAs have to support each one. Before automated database deployments there were 10 different ways teams asked the DBAs to deploy changes to production. Some teams put scripts in a folder in a specific order.
Other teams created a master deployment script and had the DBAs run that script from sqlcmd. It was very difficult to keep track of when a change went to production. A script might change a stored proc that is called by an SQL job, and if an SQL job fails, the DBAs are paged and have to investigate.
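For context, the "master script" hand-off looked roughly like this. This is a hypothetical sketch, not our actual tooling; the server, database, and script names are placeholders:

```python
import subprocess

# A hypothetical sketch of the old "master script" hand-off: a DBA runs one
# big deployment script through sqlcmd and hopes every change made it in.
# The server, database, and script names are placeholders.
result = subprocess.run(
    ["sqlcmd", "-S", "PRODSQL01", "-d", "AppDatabase", "-i", "master_deploy.sql"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    # A failure here often meant a page for the DBAs later that night.
    print("Deployment failed:", result.stderr)
```

Nothing in that hand-off records which changes went out or when, which is exactly the problem described above.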
From the DBAs' point of view, automated deployments solved the following problems:
- Every team had a different way of asking for changes to be deployed.
- It was very difficult to tell when a given change went to production.
- Failures, like a paged SQL job, were hard to trace back to the change that caused them.
Prior to automated deployments, getting changes promoted to production was tedious and prone to error. One of the databases my team is responsible for has over 6,000 objects in the schema (stored procs for all the CRUD operations add up fast). It is unfair to expect one person to keep track of every change made. Ask a developer, "What code did you change three weeks ago?" Why would we make a database developer remember the same thing? Why not ask them to memorize the complete works of Shakespeare while you're at it?
For database developers, automated database deployments solve exactly that problem: the tooling keeps track of every change, so no one has to rely on memory to know what still needs to go out.
Our automated database deployment process also has each developer run a local copy of the database. This basically made each developer a "lite" DBA. Their cheese moved a lot in this process.
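As a rough illustration of what "lite DBA" looks like day to day, here is a minimal sketch of the kind of sanity check a developer might run against their local copy. It assumes pyodbc and the Microsoft ODBC driver are installed, and the database name is hypothetical:

```python
import pyodbc  # assumes pyodbc and the Microsoft ODBC driver are installed

# Hypothetical sanity check for a developer's local database copy.
# The server and database names are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;"
    "DATABASE=AppDatabase;"
    "Trusted_Connection=yes;"
)
cursor = conn.cursor()
# Count user-created schema objects; the database described above has over 6,000.
cursor.execute("SELECT COUNT(*) FROM sys.objects WHERE is_ms_shipped = 0")
print("User objects in local copy:", cursor.fetchone()[0])
conn.close()
```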
Most application developers don't suffer from the same problems as database developers or DBAs. This is where I really had to change my approach. My key areas of focus were education and mentoring: application developers would now be running their own SQL server and making their own database changes.
This is a major mind-shift for most developers. That's okay; anything worth having is hard. This is a great area where having a good attitude really goes a long way. There was a lot of mentoring going on, because developers' skill sets for running and maintaining an SQL server, as well as for making database changes, vary greatly. The key was reaching critical mass: the point where the number of developers who knew the process could support teaching the number of developers who still needed to learn it.
My approach changed significantly when getting buy-in from leaders. They don't have to deal with the process down in the weeds like the other job roles do. My selling points boiled down to two things: cost and time.
Let's look at cost first. These tools cost money, and in some cases there is sticker shock. The Red Gate SQL Toolbelt costs $2,745 per developer at the time of this writing, with price breaks at 5 and 10 licenses. If you buy 10 or more licenses, the cost drops to $1,922 per developer. Octopus Deploy costs anywhere from free to $5,000. If you wanted to outfit a team of 10 developers, you are looking at roughly $20,000 added to the budget.
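To make the budget conversation concrete, here is a quick back-of-the-envelope calculation using the list prices above (the Octopus Deploy figure is the top of its range; your tier will vary):

```python
# Back-of-the-envelope tooling cost for a 10-developer team,
# using the list prices quoted above.
TEAM_SIZE = 10
TOOLBELT_PER_DEV = 1922     # volume price at 10+ licenses
OCTOPUS_TOP_TIER = 5000     # upper end of Octopus Deploy pricing

toolbelt_total = TEAM_SIZE * TOOLBELT_PER_DEV
print(f"SQL Toolbelt licenses: ${toolbelt_total:,}")                     # $19,220
print(f"Plus Octopus Deploy:   ${toolbelt_total + OCTOPUS_TOP_TIER:,}")  # $24,220
```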
My leader was great because she knew we would see an ROI on this process rather quickly, which was a big help in getting other leaders on board. Below are, in a nutshell, the selling points we used.
Each time we deployed the system with over 6,000 database objects, we inevitably had one or two changes that didn't go out, and each time that happened, production support tickets came in. It takes time to find out why a bug is happening; let's say it takes an average of two hours to find and fix one. Let's also low-ball a developer's salary and say their average pay is $30 an hour.
2 Bugs * 2 hours * $30 = $120 in fixes per deployment
But that is not all; depending on the bug, it can affect a number of people. We have 700 users, and on average anywhere from 100 to 200 of them are in the system at a time. Let's say the bugs only affect 10% of the users, or 10 at the low end, and that for two hours each of those users is unable to do their job properly. Valuing their time at the same $30 an hour:
2 Bugs * 2 hours * $30 * 10 users = $1200 in fixes per deployment
Now assume a deployment happens every two months, or six deployments a year. The math now looks like:
2 Bugs * 2 hours * $30 * 10 users * 6 deployments = $7200 a year in fixes.
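Putting the whole model in one place, here is a small sketch that reproduces the numbers above and makes each assumption an explicit, tweakable parameter:

```python
# Speculative cost model for bugs caused by missed database changes.
# Every value is an assumption from the text above - tweak to fit your shop.
bugs_per_deploy = 2     # changes that didn't make it out
hours_per_bug = 2       # average time to find and fix
hourly_rate = 30        # low-balled pay, applied to developer and user time alike
affected_users = 10     # roughly 10% of the 100+ users in the system
deploys_per_year = 6    # one deployment every two months

fix_cost = bugs_per_deploy * hours_per_bug * hourly_rate    # $120
user_impact = fix_cost * affected_users                     # $1,200
yearly_cost = user_impact * deploys_per_year                # $7,200
print(f"Per deployment: ${user_impact:,}; per year: ${yearly_cost:,}")
```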
I will caveat this: the math is speculative, and I can't give hard numbers (sorry). But it is easy to see how the costs add up quickly, and I have a feeling I am low-balling the estimate. And that is just for one application; most teams support more than one.
One other item to consider: when a developer is doing production support, they are not working on the new features that help meet deadlines.
The effort of getting buy-in is well worth the reward. In this case, it meant people actually use automated database deployments. Since my team adopted this process, we haven't had to do an emergency push to production because we forgot some random database change. It has freed us up to focus on more important work. That, to me, is worth all the effort it took.