You Need #DevOps | @DevOpsSummit @DMacVittie #CD #APM #Monitoring

The problem is right in front of us; we’re confronting it every day, and yet a ton of us aren’t fixing it for our organizations

For those unfamiliar: as a developer working in marketing for an infrastructure automation company, I have tried to clarify the different flavors of DevOps by capitalizing the part that benefits in a given DevOps scenario. In this case we’re talking about operations improvements. While devs – particularly those involved in automation or DevOps – will find it interesting, this piece really speaks to the growing issues Operations is facing.

The problem is right in front of us; we’re confronting it every day, and yet a ton of us aren’t fixing it for our organizations. We’re merely kicking the ball down the road.

The problem? Complexity. Let’s face it, the IT world is growing more complex by the week. Sure, SaaS simplified a lot of complex apps that either weren’t central to the business we’re in or were vastly similar for the entire market, but once you get past those easy pickings, everything is getting more complex.

As I’ve mentioned in the past, we now have OpenStack on OpenStack. Yes, that is indeed a thing. But setting aside the irony of nesting complexity to solve complexity issues (which is the stated purpose of OoO), rolling out an enterprise NoSQL database – or, even worse, a Big Data installation – means standing up a complex set of multiple systems, some of which might be hosted in virtual machines or the cloud, adding yet another layer of configuration complexity. The same is true for nearly every “new” development going on. Want SDN? Be prepared to install a swath of systems to support it. The list goes on and on. In fact, what started this thought for me was digging into Kubernetes. Like most geeks, I started with the getting-started app – we have devolved to “try first, read later” in our industry, for good or bad. The Kubernetes Getting Started Guide is a good example of how bad our complexity has gotten. To make use of the guide you need Docker, GKE, and GCR, and then you need bash, Node, and a command line with an array of parameters whose purpose, because you’re just getting started, you don’t yet understand.
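To make that concrete, here is a minimal sketch of what “just deploy something” looks like once all of that prerequisite tooling is finally in place. It uses the official Kubernetes Python client against whatever cluster your kubeconfig points at; the deployment name and image are hypothetical, and it assumes the cluster, credentials, and client library are already installed and working – which is exactly the setup work the guide front-loads.

```python
# A minimal sketch, assuming a running cluster, a valid kubeconfig, and
# `pip install kubernetes`. The deployment name and image are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # reads the kubeconfig your cluster tooling set up

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="hello-web",
                        image="nginx:1.25",  # stand-in for your own app image
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment submitted; run `kubectl get pods` to watch it come up.")
```

Even this “simple” step presumes an image built and pushed somewhere (GCR in the guide’s case), a cluster provisioned (GKE in the guide’s case), and a kubeconfig wired up – several systems before the first line runs.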

We need time to get this stuff going, and time is something we have had less and less of over the last decade or so (at least). The amount and complexity of the gear Operations oversees has been increasing, and so has the number of instances – be they virtual or cloud – all at a faster rate than staff headcount at most organizations. And that’s a growing problem too.

One does not simply “deploy Kubernetes,” it appears. One has to work at it, just as one has to struggle with Big Data installs or UCE configuration, or even, in some orgs, Linux installations (which are still handled individually and done by hand in more places than makes sense to me – but I work for a company that sponsors an open source Linux install automation project, so perhaps my view is jaded by that experience).

To find the time to figure out and implement toolsets like Kubernetes and OoO, whose stated goal is to make your life easier in the long run, we need to remove the overhead of day-to-day operations. That’s where DevOPS comes in. If the man-hours to deploy a server or an app can be reduced to zero or near zero through automation tools and a strong DevOps focus, that recovered time can be reinvested in new tools that further improve operations. Yes, it’s circular – you need time to get time – but simple, easy-to-master tools can free time to tackle the more complex ones. Something like my employer’s Stacki project is a simple “drop in the ISO, answer questions about the network, install, then learn a simple command line.”

There are a lot of sophisticated tools out there that follow this type of install pattern and free up an impressive amount of time. Most application provisioning tools are relatively painless to set up these days (though that wasn’t always true) and can reap benefits quickly as well. My first run with Ansible, for example, had me deploying apps in a couple of hours. It would take longer to set it up and configure it to deploy complex datacenter apps, but most of us can find a few hours over the course of a couple of weeks, particularly if we convince management of the potential benefits beforehand. As an added benefit, application provisioning tools from most vendors increasingly include network provisioning, further reducing time spent on manual tasks (once again, after you figure it out).
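As a rough illustration of that pattern – and only a hedged sketch, not Ansible’s own tooling – here are a few lines of Python that shell out to ansible-playbook so the same app rollout runs against an entire inventory with one command. The playbook and inventory file names are hypothetical.

```python
# Illustrative sketch only: run a hypothetical Ansible playbook against an
# inventory file, so one scripted command replaces N manual server logins.
import subprocess
import sys


def deploy(playbook: str = "deploy_app.yml", inventory: str = "hosts.ini") -> int:
    """Invoke ansible-playbook and return its exit code."""
    result = subprocess.run(
        ["ansible-playbook", "-i", inventory, playbook],
        check=False,
    )
    return result.returncode


if __name__ == "__main__":
    sys.exit(deploy())
```

Script it once and the per-server manual procedure becomes a single repeatable command whose output you review instead of babysit.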

And that’s the real reason we need DevOPS. People talk about repeatability, predictability, reduced human errors… all true, but they come with their own trade-offs. The real reason is to free up time so we can focus on the more complex systems being rolled out and get them in place without interrupting our day for the standard maintenance work that consumes an inordinate amount of time.

In the end, isn’t that what we all would love to have – the repeated steps largely automated so that we can look into new tools that improve operations or help drive the organization forward? Take some time and invest in cleaning up ops so that you can free time to help move things forward. It’s worth the investment.

In the case of servers, the man-hours to get from nothing to hundreds of machines can be reduced from (hours per machine times hundreds of machines) to “tell it about the IPs and boot the machines to be configured.” That’s huge. Even if you sit and watch the installs to catch any problems, the faster server provisioning toolsets will be done with those hundreds of machines in an hour or two, which means that even after troubleshooting, you’re likely to be off doing something else the next day. Not a bad ROI for the little bit of time it takes to get started. Reinvest some of that savings in the next automation tool and compound the return. Soon you’re in nirvana, researching and implementing, while installs, reinstalls, and fixes to broken apps are handled by reviewing a report and telling the system in question (app or server provisioning) to fix it or install it.
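To picture the “tell it about the IPs” step, here is a small illustrative sketch – not any particular tool’s format – that enumerates a hypothetical management subnet and writes a plain host list a provisioning system could consume. The subnet, naming scheme, and output format are all assumptions.

```python
# Illustrative only: enumerate a hypothetical management subnet and emit a
# simple "hostname address" list for a provisioning tool to consume.
import ipaddress

SUBNET = "10.10.0.0/24"   # hypothetical management network
PREFIX = "node"           # hypothetical naming scheme


def build_inventory(subnet: str, prefix: str, count: int = 100) -> list:
    """Return 'hostname ip' lines for the first `count` usable addresses."""
    hosts = ipaddress.ip_network(subnet).hosts()
    return [f"{prefix}{i:03d} {next(hosts)}" for i in range(1, count + 1)]


if __name__ == "__main__":
    with open("inventory.txt", "w") as fh:
        fh.write("\n".join(build_inventory(SUBNET, PREFIX)) + "\n")
    print("Wrote inventory.txt; hand it to your provisioning tool and boot the machines.")
```

Real tools such as Stacki keep their own inventory and configuration formats; the point is simply how little per-site input the process needs once the IPs are known.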

It’s pretty clear that complexity will continue to increase, and tools to tame that complexity will continue to come along. It is definitely worthwhile to invest a little time in the automation tools so you can invest more in the new systems.

But that’s me; I’m a fan of exploring what’s possible, not doing the same stuff over and over. I assume most of IT feels the same way – if only they had the time. And we can have the time, so let’s do it.

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.