Why DevOps Matters: Things You Need to Know

DevOps creates an agile relationship between system development and operations departments, so the two collaborate to deliver results that are technically sound and work well for customers and users. This is an improvement over the traditional model, in which development hands over a complete design and then spends weeks, or even months, fixing client-side problems that should never have occurred.

Writing for Tech Radar, Nigel Wilson explains why it is important to roll out innovation quickly to gain competitive advantage. This implies a flexible organisation capable of thinking on its feet and forming matrix-based project teams, so that development is reliable and cost-effective.

Skirmishes in Boardrooms

This cooperative approach runs counter to traditional silo thinking, where Operations does not understand Development, and Development treats Operations as problem children. This is a natural outcome of team-centred psychology, and it is the reason different functions pull up the drawbridges at the entrance to their silos. The situation needs managing before it corrodes organisational effectiveness. DevOps aims to cut through this web of conflict and produce faster results.

The Seeds of Collaboration

Social and personal relationships work best when the strengths of each party compensate for the deficiencies of the other. In the case of development and operations, development lacks a full understanding of the daily practicalities operating staff face. Conversely, operations lacks, and should lack, detailed knowledge of the nuances of digital automation, for the simple reason that it is not their business.
DevOps straddles the gap between these silos by building bridges towards a cooperative way of thinking, in which matrix teams work together to define a problem, translate it into needs and specify the system that resolves them. It is more a culture than a method. Behavioural change naturally leads to continuous delivery and continuous deployment. Needless to say, only the very best should apply for the roles of client representative, functional tester and development lead.
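
To make the continuous-delivery idea concrete, here is a deliberately minimal sketch of a pipeline gate in Python: run the automated tests and only trigger deployment if they pass. The test command and deployment script are hypothetical placeholders, not a prescription for any particular toolchain.

```python
# Minimal continuous-delivery gate (illustrative only): deploy only when the
# automated test suite passes. "pytest" and "./deploy.sh" stand in for whatever
# commands the matrix team actually agrees on.

import subprocess
import sys

def run(command: list[str]) -> bool:
    """Run a command and report whether it exited successfully."""
    return subprocess.run(command).returncode == 0

def main() -> int:
    if not run(["pytest", "--quiet"]):        # automated functional tests
        print("Tests failed; deployment blocked.")
        return 1
    if not run(["./deploy.sh", "staging"]):   # hypothetical deployment script
        print("Deployment step failed.")
        return 1
    print("Build tested and deployed to staging.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```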

Is DevOps Worth the Pain of Change?

Breaking down silos encroaches on individual managers' turf. Automation should only be undertaken to improve quality and save money, and those savings often translate into organisational change, so the matrix team may find itself caught in the middle of a turf war. Despite the pain associated with resistance to change, DevOps more than pays its way in terms of the benefits gained. We close by considering what these advantages are.

An Agile Matrix Structure: Technical innovation is happening at a blistering rate. The IT industry can no longer afford to churn out inferior designs that take longer to fix than to create, and we cannot afford to let office politics stand in the way of progress. Silos and entrenched teams are custodians of routine, and that does not sit well with development.

An Integrated Organisation: DevOps not only delivers operational systems faster through continuous testing; it also creates an environment in which cross-functional teams work together towards a shared objective. When development understands the challenges that operations faces, and operations understands the technical constraints, a new perspective emerges of "we are in this together".

The Final Word: With an understanding of human dynamics in hand, a DevOps project may be easier to commission than you first think. The traditional way of doing development, with waterfall delivery at the end, is akin to a two-phase production line in which liaison is the weakest link and loss of quality is inevitable.

DevOps avoids this risk by having both parties work side by side. We need them both to produce the desired results, at least until robotics takes over and there is no longer a human element in play.

Check our similar posts

New Focus on Monitoring Soil

There is nothing new about monitoring soil in arid conditions; South Africa and Israel have been doing it for decades. However, climate change has increased its urgency as the world comes to terms with pressure on the food chain. Denizon decided to explore trends at the macro, first-world level and at the micro, third-world one.

In America, the Coordinated National Soil Moisture Network is going ahead with plans to create a database of federal and state monitoring networks and numerical modelling techniques, with an eye on soil-moisture database integration. This is a component of the National Drought Resilience Partnership, which slots into Barack Obama's Climate Action Plan.

This far-reaching program extends into every corner of American life to address the twin scourges of drought and inundation, and the agency director has called it "probably … one of the most innovative inter-agency tools on the planet". The pilot project, involving remote moisture sensing and satellite observation, targets Oklahoma, North Texas and surrounding areas.

Africa has similar needs but lacks America's financial muscle. Princeton University ecohydrologist Kelly Caylor is bridging the gap in Kenya and Zambia by using cell phone technology to transmit ecodata collected by low-cost "pulsepods".

He deploys the pods, each about the size of a smoke alarm, to measure plants and their environment. Aspects include soil moisture, to estimate how much water the plants are using, and sunlight, to approximate the rate of photosynthesis. Each pod holds seven to eight sensors, can operate on or above the ground, and transmits its data via SMS.
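
As an illustration of how such readings might be handled once they arrive, the short Python sketch below parses a comma-separated SMS payload into named fields. The message format, field names and units are assumptions made for the example; they are not the actual pulsepod protocol.

```python
# Hypothetical example: parsing a comma-separated SMS payload from a field pod.
# The message layout and field names are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class PodReading:
    pod_id: str
    soil_moisture_pct: float   # volumetric soil moisture, percent
    sunlight_w_m2: float       # incident sunlight, watts per square metre
    temperature_c: float       # air temperature, degrees Celsius

def parse_sms(payload: str) -> PodReading:
    """Parse a message such as 'POD42,23.5,610.0,28.1' into a structured reading."""
    pod_id, moisture, sunlight, temperature = payload.strip().split(",")
    return PodReading(
        pod_id=pod_id,
        soil_moisture_pct=float(moisture),
        sunlight_w_m2=float(sunlight),
        temperature_c=float(temperature),
    )

if __name__ == "__main__":
    print(parse_sms("POD42,23.5,610.0,28.1"))
```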

While the system is working well at an academic level, there is more to do before the information is useful to subsistence farmers living from hand to mouth. The raw data stream requires interpretation, and the analysis must come through trusted channels, most likely the government and tribal chiefs. Kelly Caylor cites the example of a sick child: the temperature reading is of no use until a trusted source interprets it.

He has a vision of climate-smart agriculture in which traditional practice adapts to global warming. He involves local farmers in his research by enrolling them when he places pods and asking them to send weekly weather reports by SMS, which he correlates with the sensor data. As trust builds, he hopes to help them choose more climate-friendly crops and learn how to reallocate labour as the seasons change.

Without Desktop Virtualisation, you can’t attain True Business Continuity

Even if you've invested in virtualisation, off-site backup, redundancy, data replication, and other related technologies, I'm willing to bet your BC/DR program still lacks an important ingredient. I bet you've forgotten about your end users and their desktops.

Picture this: a major disaster strikes your city and brings your entire main site down. No problem. You've got all your data backed up at another site. You just need to connect to it and, voila, you'll be back up and running in no time.

Really?

Do you have PCs ready for your employees to use? Do those machines already have the necessary applications for working on your data? If you still have to install them, then that’s going to take a lot of precious time. When your users get a hold of those machines, will they be facing exactly the same interface that they’ve been used to?

If not, more time will be wasted as they try to familiarise themselves. By the time you're able to declare "business as usual", you'll have lost customer confidence (or even customers themselves), missed business opportunities, and dropped potential earnings.

That’s not going to happen with desktop virtualisation.

The beauty of virtualisation

Virtualisation in general is a vital component in modern Business Continuity/Disaster Recovery strategies. For instance, by creating multiple copies of virtualised disks and implementing disk redundancy, your operations can continue even if a disk breaks down. Better yet, if you put copies on separate physical servers, then you can likewise continue even if a physical server breaks down.

You can take an even greater step by placing copies of those disks on an entirely separate geographical location so that if a disaster brings your entire main site down, you can still gain access to your data from the other site.

Because you’re essentially just dealing with files and not physical hardware, virtualisation makes the implementation of redundancy less costly, less tedious, greener, and more effective.
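
Because the virtual disks are, in the end, just files, even a very simple replication routine illustrates the idea. The sketch below copies a disk image to a secondary location and verifies the copy with a checksum; the paths are hypothetical, and a real deployment would rely on the hypervisor's own replication tooling rather than a hand-rolled script.

```python
# Minimal sketch: replicate a virtual disk image to a secondary location and
# verify the copy. Paths are hypothetical; real deployments would normally use
# the hypervisor's built-in replication features.

import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks to limit memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def replicate_disk(source: Path, replica_dir: Path) -> Path:
    """Copy a virtual disk image into replica_dir and confirm it arrived intact."""
    replica_dir.mkdir(parents=True, exist_ok=True)
    target = replica_dir / source.name
    shutil.copy2(source, target)
    if sha256(source) != sha256(target):
        raise IOError(f"Replica of {source} failed verification")
    return target

if __name__ == "__main__":
    # Hypothetical locations: a primary datastore and an off-site replica mount.
    replicate_disk(Path("/datastore/desktops/alice.qcow2"),
                   Path("/mnt/dr-site/desktops"))
```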

But virtualisation, when used for BC/DR, is mostly focused on the server side. As we pointed out earlier in the article, server-side BC/DR efforts are not enough: a significant share of business operations also depends on the client side.

Desktop virtualisation (DV) is very similar to server virtualisation, and it comes with nearly the same benefits. That means a virtualised desktop can be copied just like an ordinary file, and if you have a copy of a desktop, you can easily switch to it when the active copy is destroyed.

In fact, if the PC on which the desktop is running becomes incapacitated, you can simply move to another machine, stream or install a copy of the virtualised desktop there, and get back into the action right away. If all your PCs are incapacitated after a disaster, rapid provisioning of your desktops will keep customers and stakeholders from waiting.

In addition, DV lets your user interface look exactly like the one on your previous PC. This particular feature is very important to end users. Users normally have their own way of organising things on their desktops, and the moment you put them in front of a desktop that is not their own, even one with the same OS and the same set of applications, they'll feel disoriented and won't be able to perform optimally.
