Posts Tagged ‘virtualization’

3 Ways to Save Money and Increase Productivity

by Matthew Carmen on October 25, 2010

For most companies, Information Technology is not their core business.  IT is, however, one of the biggest factors by which a company can differentiate itself from its competitors.  By using technology wisely, companies can automate many of their processes, allowing their labor force to focus attention on new products and services for both external and internal clients.  Capital funding is required to accomplish this, and in this economy that means reallocating current funds, not adding new funds to a company's IT department.  Good investments in IT have repeatedly been shown to facilitate rapid company growth.

As with most other departments within a corporation, IT runs on human resources: labor.  Therefore, the last thing a company should do when looking to save or reallocate capital is cut its labor force.  Once the labor force is cut, that productivity is very hard to get back.  When this happens, most IT departments become very reactive; they end up fighting the many fires that crop up and have little time to implement new technologies that could benefit the entire company.

Decimating a labor force, though it provides significant short-term financial savings, inhibits corporate growth.  With this in mind, what areas within IT should companies consider to free up funds for growth initiatives?

  1. Managed sourcing options for routine activities.  By routine activities I mean regular processes such as batch computing, monitoring of the IT environment, network connectivity and maintenance of the physical environment.  Most of these are important but labor-intensive activities; within IT, the monitoring, network, and physical-management groups tend to have the largest staffs of any in the datacenter.  By considering managed services, a company can get high-quality service, and typically good pricing as well.  With the savings, labor can be added to groups doing application development and other innovative services that stimulate business growth.
  2. Implement tight asset management procedures.  By tracking a company’s software and hardware assets, existing assets can be utilized more efficiently.  On the software side, a company needs to know what it owns, what equipment it is running on, and how much of what it has purchased is actually being used.  An inventory system gives the user community access to all of this information and encourages the use of existing inventory instead of always buying more simply because a new project has budget money.  By implementing a rigorous asset management system, companies can cut waste on new purchases, and on the maintenance of those purchases, often by as much as 20%.  These savings can likewise be used for continued corporate growth.
  3. Going green can provide great cost savings.  One way companies can become “greener” is through server virtualization.  By virtualizing servers, companies can save much-needed capital on two fronts.  First, with fewer physical devices in the datacenter, less electricity is used, lowering the company’s electricity costs.  Second, many states and utility companies offer rebates and/or lower electricity rates to companies that virtualize their server farms; one utility in Wisconsin, for example, offers a rebate of $250 per virtualized server.  VMware or a similar product can, if engineered correctly, reduce the physical presence of a server farm by as much as 20 percent, and those savings continue well into the future.  (A rough savings calculation follows this list.)
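To put rough numbers on point 3, here is a back-of-the-envelope sketch in Python. Only the $250-per-server rebate comes from the Wisconsin example above; the server count, power draw, and electricity rate are hypothetical placeholders to swap for your own figures.

```python
# Back-of-the-envelope math for point 3. The $250 rebate is the Wisconsin
# utility example cited above; every other figure is a hypothetical
# placeholder to be replaced with your own numbers.

physical_servers = 200
footprint_reduction = 0.20      # "as much as 20 percent" fewer physical boxes
kwh_per_server_year = 4000      # assumed annual draw, incl. cooling overhead
rate_per_kwh = 0.10             # assumed utility rate in $/kWh
rebate_per_server = 250         # the cited Wisconsin utility rebate

servers_retired = int(physical_servers * footprint_reduction)
annual_power_savings = servers_retired * kwh_per_server_year * rate_per_kwh
one_time_rebate = servers_retired * rebate_per_server

print(f"Servers retired:            {servers_retired}")
print(f"Annual electricity savings: ${annual_power_savings:,.0f}")
print(f"One-time utility rebate:    ${one_time_rebate:,.0f}")
```

Note that the electricity savings recur every year, while the rebate is a one-time payment; both free up capital for the growth initiatives discussed above.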

All in all, there are several ways a company can save much-needed capital without reducing the workforce, then reallocate that capital to stimulate new growth.  Yes, certain positions may be deemed unnecessary and their workload shifted to managed service providers or the like, but overall the company will be able to devote more of its skilled teams to forward-thinking growth opportunities.  With careful planning and execution, such initiatives can be funded entirely by cutting wasteful spending.

Matthew Carmen launched Datacenter Trust along with Marc Watley in February 2010 and serves as Co-Founder & COO as well as Managing Partner of its Financial Intelligence practice. Datacenter Trust is a recently launched consulting and services delivery firm, providing outsourced server hosting, bandwidth, cloud services, and IT financial intelligence and analysis services to growing businesses. Follow Datacenter Trust on Twitter @datacentertrust

4 Steps to Effective Disaster Recovery Planning

by Marc Watley on August 23, 2010

Question: A wildfire 10 miles from your company headquarters is raging out of control. The fire captain has just ordered everyone in your building to evacuate. All staff have safely left the premises, and now you are likewise heading out, taking one final look at your datacenter, still humming away, unsuspecting. You have offsite data storage but no offsite server infrastructure, applications, etc.

What do you do?

I’m paraphrasing from a not-so-great movie here: Speed. (Keanu may have been good in The Matrix, but the predictable tête-à-tête between his character and Dennis Hopper’s in Speed still makes me chuckle.) IT executives today are, in fact, increasingly faced with the threat of disasters, whether natural (such as a wildfire) or man-made (e.g. some ding-dong crashing a vehicle into your datacenter). I may be taking a bit of creative license here, but this could not be a more serious issue. (Recall those horrible wildfires in the San Diego, California area a few years back? The example above was culled from situations experienced during that period.)

As organizations and their customers increasingly rely on databases, servers, and IP-connected applications and data sources, maintaining continuity of the business infrastructure and limiting costly downtime in the event of a disaster is paramount.

Though many organizations had active disaster recovery (DR) projects on the books a few years ago, the global financial crunch of the last 20 or so months has wreaked havoc on IT budgets everywhere; only now are many of these DR projects once again taking priority.

If you’re thinking that you can ‘wait it out’ and disaster won’t strike on your watch, think again. Apparently, some 93 percent of organizations have had to execute on their disaster recovery plans. Yep. That’s according to an annual DR survey from Symantec last year.  A few more points from that survey:

  • On average, it takes companies with active DR plans three hours to achieve skeleton operations after an outage, and four hours to be fully up and running
  • The average annual budget for DR initiatives is $50MM (including backup, recovery, clustering, archiving, spare servers, replication, tape, services, DR plan development and offsite costs)
  • Virtualization has caused 64 percent of organizations worldwide to reevaluate their DR plans

Whether your organization is a small, recently funded startup or well-entrenched in the Fortune 100, designing, implementing, and testing a DR plan is an endeavor that takes dedication, careful planning and time (the entire process can take weeks or even months). There are many excellent resources available that provide knowledge and detail on the individual steps of a DR planning initiative.  (Cisco’s DR Best Practices site or Disaster Recovery are great places to begin, by the way.)  What follows is a high-level, best-practices overview of the planning process:

Executive Sponsorship

This first step of a successful DR plan involves two key components: one is to secure plan sponsorship and engagement from senior company leadership (CEO, COO, CIO, etc.). The other is to establish a planning team representative of all functional units of the organization (sales, operations, finance, IT, etc.).  This step is the catalyst for a smooth planning initiative, and requires focus and patience.  (The ability to herd cats wouldn’t hurt, either.) It may also help to reduce the impact on internal resources by leveraging outside help from a consulting firm well-versed in DR planning.

Information Gathering

This portion of the planning process – information gathering, due diligence and assessment – is the most involved and most time-consuming, and a true test of teamwork across the organization.

The first step in this part of a DR planning initiative is performing a Business Impact Analysis (BIA), which helps to assess the overall risk to normal business operations (and revenue flow) should disaster strike right this second. The BIA typically comprises identifying and ranking all critical business systems, analyzing the impact of an interruption to those systems, and, most importantly, establishing the maximum length of time critical systems can remain unavailable without causing irreparable harm to the business. This length of time is also known as the Maximum Tolerable Downtime (MTD).  Working backwards from the MTD allows an acceptable Recovery Point Objective (RPO) and Recovery Time Objective (RTO) to be established. (A small illustration follows.)
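To make “working backwards from the MTD” concrete, here is a minimal sketch, with hypothetical systems and hours, of the sanity check a planning team might apply: the time to restore service (RTO) plus the window of lost data to reconstruct (RPO) should fit within the MTD. A real BIA weighs far more than this one inequality.

```python
# Hypothetical illustration of checking recovery objectives against the
# MTD established in the BIA; systems and hours are made-up examples.

from dataclasses import dataclass

@dataclass
class CriticalSystem:
    name: str
    mtd_hours: float   # Maximum Tolerable Downtime, from the BIA ranking
    rto_hours: float   # Recovery Time Objective: target time to restore service
    rpo_hours: float   # Recovery Point Objective: acceptable data-loss window

    def objectives_fit(self) -> bool:
        # Working backwards from the MTD: time to restore service plus time
        # lost to re-creating data in the RPO window must fit within the MTD.
        return self.rto_hours + self.rpo_hours <= self.mtd_hours

systems = [
    CriticalSystem("order-entry", mtd_hours=4,  rto_hours=2,  rpo_hours=1),
    CriticalSystem("email",       mtd_hours=24, rto_hours=8,  rpo_hours=4),
    CriticalSystem("reporting",   mtd_hours=72, rto_hours=48, rpo_hours=36),
]

for s in systems:
    verdict = "OK" if s.objectives_fit() else "REVISIT"
    print(f"{s.name}: MTD={s.mtd_hours}h RTO={s.rto_hours}h "
          f"RPO={s.rpo_hours}h -> {verdict}")
```

In this toy run, “reporting” fails the check (48 + 36 > 72), which is exactly the signal that would send a planning team back to tighten its recovery strategy for that system.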

With the BIA in hand, the next steps are conducting a risk assessment and developing the recovery strategy.  The risk assessment will help determine the probability of a critical system becoming severely disrupted, identify vulnerabilities, and document the acceptability of these risks to the organization.  Engagement from the entire planning team is necessary in order to accurately review and record details for critical records, systems, processing requirements, support teams, vendors, etc., all of which are needed to develop the recovery strategy.

Also important in the recovery strategy is identifying the recovery infrastructure and outsourcing options: ideally, alternate datacenter facilities from which critical systems and data can be recovered in the event of a serious interruption.  This, as they say, is the point at which the bacon hits the frying pan: many organizations are leveraging the power and abundance of Cloud-based IT resources to lower infrastructure costs, and Cloud is particularly applicable for DR.  In fact, more than a few services provide continuous data protection, typically accomplished via unobtrusive software agents residing on each server in a datacenter. These agents connect to a black box, also residing in the datacenter, that incrementally takes images of each server, de-duplicates the data, then replicates it via secure WAN to a remote data store, ultimately providing on-demand recovery (via secure web console) from the remote location at any time. Companies such as nScaled, iland, and Simply Continuous offer such services and can even help build a business case to illustrate the ROI.  Point is, do thy homework and explore whether Cloud services such as these might be a sound fit for your organization’s DR plan. (A simplified sketch of this flow follows.)
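For the curious, here is a deliberately simplified sketch of that agent-and-black-box flow (snapshot, de-duplicate, replicate). All names and structures here are hypothetical illustrations of the technique, not any vendor’s actual API.

```python
# Illustrative only: the continuous-data-protection flow described above,
# reduced to its essentials. Not any vendor's API.

import hashlib

class RemoteStore:
    """Stand-in for the offsite data store: keeps each unique block once."""
    def __init__(self):
        self.blocks = {}  # sha256 digest -> block bytes

    def replicate(self, block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in self.blocks:    # de-duplication: skip blocks already stored
            self.blocks[digest] = block  # a real service ships this over secure WAN
        return digest

def incremental_image(server: str, blocks, store: RemoteStore) -> dict:
    """One agent pass over a server: returns a point-in-time manifest of
    block digests, from which the server can later be recovered on demand."""
    return {"server": server, "manifest": [store.replicate(b) for b in blocks]}

store = RemoteStore()
image = incremental_image("db-01", [b"users", b"orders", b"users"], store)
print(len(store.blocks), "unique blocks stored for", image["server"])  # prints 2, not 3
```

The de-duplication step is what makes the economics work: only blocks the remote store has never seen cross the WAN, so successive images cost a fraction of a full copy.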

Planning and Testing

Armed with a full impact analysis, risk assessment, recovery goals, and outsourcing options, the actual DR plan can now be developed. The DR plan is a living document that identifies the criteria for invoking the plan, procedures for operating the business in contingency mode, steps for recovering lost data, and criteria and procedures for returning to normal business operations. A key activity in this step is to identify in the DR plan a recovery team (which should consist of both primary and alternate personnel from each business unit) and to document recovery processes and procedures at each business-unit level.  Also important is ensuring the DR plan itself is available offsite, both via the web and in permanent media form (print, CD-ROM, etc.). A skeletal outline of such a plan appears below.
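As a quick aid, the required sections named above can be captured in a simple checklist structure; the entries below are placeholders, not a complete plan.

```python
# Skeletal outline (placeholders only) of the sections a DR plan carries.

dr_plan = {
    "invocation_criteria":    ["declared disaster", "critical outage approaching MTD"],
    "contingency_operations": {"per_business_unit": "procedures for contingency mode"},
    "data_recovery_steps":    ["restore from offsite store", "verify data integrity"],
    "return_to_normal":       ["criteria for standing down", "cutover procedures"],
    "recovery_team":          {"primary": ["one per business unit"],
                               "alternate": ["one per business unit"]},
    "offsite_copies":         ["web", "print", "CD-ROM"],
}

for section, contents in dr_plan.items():
    print(f"{section}: {contents}")
```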

Equally important to having a DR plan is regular testing. This step includes designing disaster/disruption scenarios and developing and documenting action plans for each scenario. Conducting regular tests with full operational participation is key to success.

Ongoing Plan Evaluation

A DR plan is only effective if continually kept in lock-step with changes within the organization.  Such changes include infrastructure, technology, and procedures, all of which must be kept under constant review, with the DR plan updated accordingly.  DR plan testing should likewise be evaluated on a regular basis, and any adjustments made (systems, applications, vendors, established procedures, etc.).

So there you have it – four key building blocks to tailoring a DR plan for your organization.  Of course, if the ‘disaster’ arrives in the form of a city-sized asteroid hurtling towards Earth, needless to say any plan will likely not make much difference. Anything short of such a global catastrophe, however, and a well-developed and maintained DR plan will keep employees and customers connected and business moving forward, with minimum downtime.

Again, this is by no means a complete recipe for designing and implementing a DR plan; it is meant to serve as a high-level overview, offered as food for thought.  I encourage you to learn more, explore options, and ask for help if needed; whatever it takes to thoroughly prepare your organization for the worst, should the worst ever occur. To loosely paraphrase our man Keanu once again, from another of his, er, more questionable films from back in the day, Johnny Mnemonic: this is one topic where you absolutely, positively don’t want to “get caught in the 404”.

Written by Marc Watley, Co-Founder & CEO of Datacenter Trust and CMO at Reppify. Datacenter Trust is an IT consulting and services delivery firm, helping growing businesses make smart decisions from sound financial analysis and business intelligence. Reppify is a leading-edge technology company pioneering the use of social media data to drive better business decisions. Follow on Twitter: Datacenter Trust @datacentertrust and Reppify @reppify

Last November at the Business Technology Summit in Bangalore, India, I shared a few thoughts on how leaders of IT organizations can refresh technology, continue to deliver sound IT and meet customer needs, and, by taking advantage of on-demand infrastructure services, remain in the good graces of the company CFO.  I’ll attempt a bit of a recap here, adding recent data points and additional thoughts along the way.  Well then, shall we?

The US Department of Commerce reports (in its most recent Information & Communication Technology Survey, conducted in 2007) that capital expenditures (CAPEX) make up, on average, 63% of companies’ IT spending.

CIOs today are faced with decreased capital and operating IT budgets, as well as staffing reductions or hiring freezes.  All the while, the expectation to scale applications and systems, and to maintain performance and SLAs, remains.

In October 2009, Barclays Capital reported in its CIO survey that IT spending would increase 0-5% in 2010 versus 2009, with priority placed on datacenter expansions, virtualization and Cloud initiatives. In the near term, CIOs forecasted that their IT spend would focus on storage, networking, and servers.  A more recent CIO survey, conducted by Gartner this past January across some 1,500 CIOs, yielded a similarly bleak forecast: IT spending will largely remain flat, and if it does increase, it’ll do so by just over one percent.  (Which clearly isn’t great, but it also isn’t the elbow-drop of more than eight percent in IT budgets seen in 2009.)  Oh, and the CIOs in this latest survey posited that their IT organizations have about the same level of resources now as they had back when Kelly Clarkson’s “Since U Been Gone” ruled the airwaves; yep, welcome back to 2005. The lesson? Wring everything you can from what you’ve got.  Having fun yet?

Right, so how does one squeeze blood from this proverbial IT turnip? The first step is to dig into your IT spending a bit: gain as much understanding and insight as possible into what you have in your arsenal today, and the related costs. This sounds like a simple task, but you’d be surprised how many IT directors and CIOs don’t really know exactly what they are using and how much they’re paying. (Sidebar: if you haven’t yet read my partner Brian Superczynski’s article from last week, I’d encourage it; he offers good thinking and a few tools around this business insight.)

The CIOs surveyed by Gartner report that their top three business priorities for 2010 are:

  • Business process improvement
  • Reducing enterprise costs
  • Increasing the use of informatics/analytics

In addition, their top three technology priorities for this year are:

  • Virtualization
  • Cloud computing
  • Web 2.0 (read: Social Networks)

Taking a cue from this, the next step to a CAPEX-free world is to address how the data points above stack up against your own business and technology priorities, then explore ways to reduce capital costs by taking advantage of outsourced infrastructure and related services like virtualization.  Hang on now…don’t be afraid of that Cloud…embrace it. I’m not suggesting you entrust your most valuable corporate crown jewels to a multi-tenant (shared) Cloud service (Amazon Web Services/Elastic Compute Cloud and Rackspace Cloud largely fall into this category).  These services do have their place, and you may find they will play an important role for your organization at some point. What I’m referring to, however, is enterprise-class, private datacenter services, where you retain complete control over access to your infrastructure above the operating system; someone else manages everything else (hardware, upgrades, monitoring, inter-networking, bandwidth, power, etc.).  Think of this as an extension of your own datacenter that simply lives elsewhere…you still have the master key, and you pay for everything “as-a-service”, largely free of capital costs.

These as-a-service solutions take on many forms, each designed to address specific IT needs: compute Clouds for development or testing, storage arrays, backup datacenter/Disaster Recovery (DR) services, email, or simply a rack of dedicated servers with your name on it.  A few providers to consider in this area: private Cloud services like Terremark’s Enterprise Cloud or SoftLayer’s CloudLayer; datacenter replication/DR from nScaled or CA’s ArcServe; or dedicated, managed servers from Latisys, The Planet, Rackspace, and others.  The point is to spend the time. Sit in on a few webinars. Perform your due diligence, seek help if you need it, and I think you’ll find that utilizing tools like infrastructure-as-a-service for some applications or areas of your infrastructure makes sound business sense.  Also, if a technology refresh initiative is core to achieving your goals in 2010, these services typically deploy on the latest hardware…double bonus!

By the way, much of the data from the Gartner survey can be found on the author’s blog post here.

The Japanese pioneered a concept you are probably familiar with called Kaizen: calculated steps, taken daily, designed to achieve a larger goal. Applying this methodology is key to a sound migration path away from capital IT expenditures and toward infrastructure-as-a-service.  (I know, I could have just said “baby steps”, but this Kaizen thing just sounds better, no?) Here’s how you apply it:

  1. Start small
  2. Find a service to your liking, then deploy, perhaps with just one or two applications or projects
  3. Monitor performance and costs for a month or three, using the service as a litmus test to evaluate and design the migration for the rest (or most) of your infrastructure (a simple cost-comparison sketch follows this list).
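For step 3’s litmus test, the simplest possible comparison might look like the sketch below. All dollar figures are hypothetical, and a real evaluation would also weigh performance, SLAs, and migration effort.

```python
# Toy cost check for step 3; all dollar figures are hypothetical.
# Compare the pilot's as-a-service invoices against the fully loaded
# on-premise cost of running the same workload yourself.

on_prem_monthly = {
    "hardware_amortization": 1200.00,   # CAPEX spread over useful life
    "power_and_cooling":      300.00,
    "admin_labor":            900.00,
}

service_invoices = [2050.00, 1980.00, 2010.00]   # "a month or three" of bills

baseline = sum(on_prem_monthly.values())
service_avg = sum(service_invoices) / len(service_invoices)

print(f"On-premise baseline:  ${baseline:,.2f}/month")
print(f"As-a-service average: ${service_avg:,.2f}/month")
print("Pilot passes muster" if service_avg < baseline
      else "Revisit sizing or provider")
```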

If the pilot passes muster, simply add more fuel to this as-a-service fire, and before long you’ll find yourself on the receiving end of praise from the CFO, your own team, and your customers as well. Or maybe you’ll find yourself singing happily along to “Since U Been Gone.” If the latter, you just might want to keep that one to yourself.

Written by Marc Watley, Co-Founder & CEO of Datacenter Trust and CMO at Reppify. Datacenter Trust is an IT consulting and services delivery firm, helping growing businesses make smart decisions from sound financial analysis and business intelligence. Reppify is a leading-edge technology company pioneering the use of social media data to drive better business decisions. Follow on Twitter: Datacenter Trust @datacentertrust and Reppify @reppify