Change Management Got Bigger By @DMacVittie | @CloudExpo #Cloud

Today, we have to have far more thorough change analysis in place than most organizations needed 20 or 10 or even 5 years ago

While we were busy throwing parts of our organizations into the cloud, and (for those who don't count it as cloud) SaaS, while we were moving parts of our organizations over to Python, or Node, or Swift, while we were looking into Software Defined Everything, and containers started sounding like the hosting spot for a humongous jigsaw puzzle, something was growing that we should have been paying more attention to.

The growth of these technologies, and many more, shares one thing: a dependence upon other technologies. And those dependencies cascade as we get into ever more layered systems. Think of the number of technologies required to make a simple (in terms of use) application work – like my own employer's Stacki automated datacenter provisioning software. Or, perhaps more fitting for those on the other end of the spectrum, think of all of the technologies required to make an Android app (and its dev tools) work.

And if any of those underlying technologies goes through a fundamental change – no matter how well-meaning or necessary the change – the cascade effect gets ugly quickly.

That means that today, we have to have far more thorough change analysis in place than most organizations needed 20 or 10 or even 5 years ago. Agile development, for all the good it brings in rolling out features at a steady, measurable rate, exacerbates this problem by making change more frequent, shortening the timeline before you have to worry about the changes in underlying technologies.

If you’re wondering what “fundamental changes” I’m talking about, here are three simple examples.

The first is easy. Write a web service in Python 2.x. Now access that web service from Python 3.x. It doesn't work as expected at all. The reason is valid: 2.x was not internationalized and 3.x is – or more to the point, Unicode support and the internal representations behind it are better. That was the impetus for 3.x to exist in the first place. But that doesn't change the fact that interoperability issues abound between two versions of the same language, performing actions that are supposed to work between any two languages.

The next is more subtle, but just as impactful. The change to Red Hat Enterprise Linux (RHEL) 7 caused some consternation in the Big Data world, while the changes needed to support containers natively forced other apps to adjust before offering upgrade paths.

And the third example I'll offer is the one that spurred this article in the first place. With Java 8, Oracle replaced all of the old Date/Time classes with the java.time package, to better support internationalization and, more importantly, to be thread safe. Unfortunately, those changes did not propagate to JDBC, so any enterprise use of Java has to go through a conversion process before upgrades can occur. JDBC still expects the old Date and Time objects, even though they have been superseded… So code must be (temporarily, one assumes) introduced to translate between them.
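To make that conversion step concrete, here is a minimal, hypothetical sketch: the application models a ship date with java.time.LocalDate, but PreparedStatement still wants java.sql.Date, so a translation sits at the JDBC boundary. The OrderDao class, table, and column names are invented for illustration.

```java
import java.sql.Connection;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.time.LocalDate;

public class OrderDao {
    // The application works in java.time; JDBC still expects java.sql.Date.
    public void insertOrder(Connection conn, long orderId, LocalDate shipDate)
            throws SQLException {
        String sql = "INSERT INTO orders (id, ship_date) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, orderId);
            // The translation step described above: LocalDate -> java.sql.Date
            ps.setDate(2, Date.valueOf(shipDate));
            ps.executeUpdate();
        }
    }
}
```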

No matter how you adapt to these changes, they slow down the process, introduce bugs, and potentially make your IT look bad, unless you have a plan and the time to execute it. In the agile world, time to execute is the killer. On top of the normal goals for a sprint, you now have to absorb changes coming from beneath your application.

This is where change management shines. While I'm not a fan of delaying adoption of newer versions of support/infrastructure, the goal of the enterprise must be to manage the volume of change – particularly temporary change – that underlying tools, OSes, databases, networks, etc., introduce. A thorough understanding of what those changes will mean to apps, servers, switches, etc., is almost mandatory. When you get to a massive technology like OpenStack or Hadoop, the problem gets worse, because the number of changes multiplies with the number of underlying technologies.

You see this in both SSL and DNS in the wild. Over the last few years there has been heavy pressure to upgrade both, and yet if you follow security people, you know that there is heavy inertia against upgrading. That is because organizations are uncertain what the impacts will be, or don't have the hours to adjust the entire infrastructure to get there. Since these are security changes – the kind with the clearest case for acting – they give us a gauge for the level of resistance to overall infrastructure change in most organizations. It's huge. Generally speaking, when the change must occur, it will.

Changes introduced in underlying technologies are normally aimed at improving the overall environment – be it general usability (Python above), thread safety (Java above), new feature implementation (Red Hat above), or security (DNS and SSL above).

Dealing with this type of issue is part of what Change Managers do; it has just gotten harder as the dependency tree of useful software has grown. How many apps out there are impacted by a change to underlying Java classes? How many purchased apps that your org doesn't have source for? And while we talked of one change in RHEL 7, there were a ton more; how do those impact a given org's servers?

We've gotten pretty good at the "dealing with" part. Tools like Stacki can reimage your servers; a thin wrapper can translate between LocalDate and the java.sql classes until JDBC catches up (LocalDate is final, so it can't literally be subclassed); any reputable Python group will tell you to pick 2.x or 3.x and run with it… The list goes on. A sketch of that Java stopgap follows.
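Here is a minimal sketch of such a stopgap, written as a plain utility class under the assumption that the conversions are centralized in one place; the class and method names are illustrative, and the whole thing is meant to be deleted once the JDBC drivers catch up.

```java
import java.sql.Date;
import java.sql.Timestamp;
import java.time.LocalDate;
import java.time.LocalDateTime;

// Temporary shim between java.time and the java.sql classes JDBC expects.
public final class JdbcTimeShim {
    private JdbcTimeShim() {}

    public static Date toSqlDate(LocalDate d) {
        return d == null ? null : Date.valueOf(d);
    }

    public static LocalDate fromSqlDate(Date d) {
        return d == null ? null : d.toLocalDate();
    }

    public static Timestamp toSqlTimestamp(LocalDateTime t) {
        return t == null ? null : Timestamp.valueOf(t);
    }

    public static LocalDateTime fromSqlTimestamp(Timestamp t) {
        return t == null ? null : t.toLocalDateTime();
    }
}
```

Centralizing the conversions in one class keeps the "temporary" code easy to find and remove later, rather than scattering Date.valueOf() calls throughout the codebase.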

What we're not as good at is analysis. It's tough, and lots of organizations don't have a change management function, but rather have individuals around the organization perform the task as part of their daily job. This is part of the reason for inertia. There are, of course, a lot of other reasons – like the productive hours spent changing because the ground shifted instead of because you're advancing business cases. But again, Stacki can upgrade servers – one or one thousand – in minutes or hours, so this is less an inertia issue than a cost/benefit one.

Give someone Change Management responsibility. You know your org, but I like putting them in PM, because changes cascade through apps and infrastructure internally too, so these are often cross-team projects. Let them determine everywhere that needs to change to adapt to a given underlying technology change, and then offer recommendations. The fact is that weak SSL should never be tolerated – particularly at large or international organizations – but it is. Knowing the real cost of upgrading versus the risk it mitigates would be a great first step toward setting a timeline for your upgrade process. And when the change is inevitable – "Either we use RHEL/CentOS 7, or container support is limited, and we need high-use containers" – this same individual can weigh the cost of point solutions (install this rack with RHEL 7) against upgrade solutions (we can move 89% of our RHEL servers to 7 with no impact). These assessments need to occur; the question is whether you have an individual with some (or all) of their time dedicated to figuring this stuff out, so you know before it becomes an "Oh. We have to upgrade X…" analysis. Those last-minute "we must" changes are the ones that'll get you. Every. Single. Time.

So think about it. Change is coming whether you want it or not, either from the side (cloud, etc.) or from beneath you (SSL, etc.). Be prepared. Know what it will take before you upgrade, and be consistent. A large org has enough trouble with multiple everything; don't multiply it into multiple versions of multiple everything unless there is a definitive benefit that makes sense for the larger org.


More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.