Posts Tagged ‘marc watley’

Week In Review – Aug 22 – Aug 28, 2010

by Magesh Tarala on August 29, 2010

4 steps to effective Disaster Recovery planning

by Marc Watley, Aug 23, 2010

IT executives today are, in fact, increasingly faced with the threat of disasters – whether natural or man-made. As organizations – and their customers – increasingly rely on database, server, and IP-connected applications and data sources, the importance of maintaining continuity of the business infrastructure and limiting costly downtime in the event of a disaster is paramount. Read this article for a high-level, best-practices overview of the DR planning process. more…

Character and Personality #8: Competency

by Gary Monti, Aug 24, 2010

A good leader is also a good politician, one who finds a way to thread through a situation to reveal a path that, when followed, benefits the common good. Competence pulls technology and sophistication together so that one person can meet another person’s needs – a connection grounded in the humanity of the stakeholders who need, and commit to finding, a solution that works. more…

Social Media and Tribes #9: The fear factor

by Deepika Bajaj, Aug 25, 2010

Even professionals who have weathered many technological innovations in the past don’t find the idea of being transparent and authentic over social media too appealing. They worry about identity theft and making a wrong impression on a potential employer, and above all feel overwhelmed by friend requests on Facebook. These are valid concerns, but not an excuse to avoid social media. more…

Flexible Focus #16: The decision trap

by William Reed, Aug 26, 2010

Ambiguity causes anxiety in those who are inflexible, and creates possibilities in the minds of the people who have flexible focus. Tolerance for ambiguity drops when you have to make a decision. Urgency adds pressure, and when the decision affects the core areas of your life, you can feel as if you are lost in a labyrinth of choices. Your decision sets the wheels in motion, whereas with indecision the wheel turns without you. Read about the Six Criteria for Decision Making to stay in motion and steer the wheel. more…

Investment Value

by Steve Popell, Aug 27, 2010

In a previous post, Business Valuation in Divorce is Different, Steve discussed why Investment Value is more appropriate in the context of family law. But this method is not just for divorcing couples. In any situation in which the party acquiring an interest (or a greater interest) in a company will become (or continue to be) part of the management team, Investment Value is often the most appropriate method. Read this article to find out why. more…

4 steps to effective Disaster Recovery planning

by Marc Watley on August 23, 2010

Question: A wildfire 10 miles from your company headquarters is raging out of control. The fire captain has just ordered everyone in your building to evacuate. All staff have safely left the premises, and now you are likewise heading out, taking one final look at your datacenter – still humming away, unsuspecting. You have offsite data storage but no offsite server infrastructure, applications, etc.

What do you do?

I’m paraphrasing from a not-so-great movie here – Speed (Keanu may have been good in The Matrix, but the predictable tête-à-tête between his and Dennis Hopper’s characters in Speed still makes me chuckle) – but IT executives today are, in fact, increasingly faced with the threat of disasters – whether natural (such as a wildfire) or man-made (e.g. some ding-dong crashing a vehicle into your datacenter). I may be taking a bit of creative license here, but this could not be a more serious issue. (Recall those horrible wildfires in the San Diego, California area a few years back? The example above was culled from situations experienced during that period.)

As organizations – and their customers – increasingly rely on database, server, and IP-connected applications and data sources, the importance of maintaining continuity of the business infrastructure and limiting costly downtime in the event of a disaster is paramount.

Though many organizations had active disaster recovery (DR) projects on the books a few years ago, the global financial crunch of the last 20 or so months has wreaked havoc on IT budgets everywhere; only now are many of these DR projects once again taking priority.

If you’re thinking that you can ‘wait it out’ and disaster won’t strike on your watch, think again. Apparently, some 93 percent of organizations have had to execute on their disaster recovery plans. Yep. That’s according to last year’s annual DR survey from Symantec. A few more points from that survey:

  • In general it takes companies [with active DR plans] on average three hours to achieve skeleton operations after an outage, and four hours to be up and running
  • The average annual budget for DR initiatives is $50MM (including backup, recovery, clustering, archiving, spare servers, replication, tape, services, DR plan development and offsite costs)
  • Virtualization has caused 64 percent of organizations worldwide to reevaluate their DR plans

Whether your organization is a small, recently funded startup or well-entrenched in the Fortune 100, designing, implementing, and testing a DR plan is an endeavor that takes dedication, careful planning, and time (the entire process can take weeks or even months). There are many excellent resources available which can provide knowledge and detail on the individual steps of a DR planning initiative. (Cisco’s DR Best Practices site or Disaster Recovery are great places to begin, by the way.) What follows is a high-level, best-practices overview of the planning process:

Executive Sponsorship

This first step of a successful DR plan involves two key components: One is to secure plan sponsorship and engagement from senior company leadership – CEO, COO, CIO, etc. The other is to establish a planning team that is representative of all functional units of the organization – sales, operations, finance, IT, etc.  This step is the catalyst to a smooth planning initiative, and requires focus and patience.  (The ability to herd cats wouldn’t hurt, either.) It may also be helpful to reduce the impact on internal resources by leveraging outside help from a consulting firm well-versed in DR planning.

Information Gathering

This portion of the planning process – information gathering, due diligence and assessment – is the most involved and most time-consuming, and a true test of teamwork across the organization.

The first step in this part of a DR planning initiative is performing a Business Impact Analysis (BIA), which helps to assess the overall risk to normal business operations (and revenue flow) should disaster strike right this second. The BIA typically comprises identifying and ranking all critical business systems, analyzing the impact of an interruption on those systems, and, most importantly, establishing the maximum length of time critical systems can remain unavailable without causing irreparable harm to the business. This length of time is also known as Maximum Tolerable Downtime (MTD). Working backwards from the MTD allows an acceptable Recovery Point Objective (RPO) and Recovery Time Objective (RTO) to be established.
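
To put some (entirely made-up) numbers behind that logic, here’s a back-of-the-napkin sketch, in Python, of how a planning team might rank systems by the hourly cost of downtime and work backwards from each MTD. The system names, dollar figures, and the 50-percent headroom convention are all hypothetical – your BIA will produce its own:

  # Hypothetical BIA worksheet: rank critical systems by the hourly cost
  # of downtime, then derive recovery objectives from each system's MTD.
  # Every name and figure below is invented for illustration.

  systems = [
      # (name, revenue impact per hour of outage, MTD in hours)
      ("order-processing", 25_000, 4),
      ("customer-portal",  10_000, 8),
      ("internal-wiki",        50, 72),
  ]

  # The most expensive outages get recovered first.
  for name, hourly_cost, mtd in sorted(systems, key=lambda s: -s[1]):
      rto = mtd * 0.5  # Recovery Time Objective: leave headroom below the MTD
      rpo = rto * 0.5  # Recovery Point Objective: max tolerable data-loss window
      print(f"{name:17} impact=${hourly_cost:>6}/hr  MTD={mtd}h  "
            f"RTO<={rto:g}h  RPO<={rpo:g}h")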

With BIA in hand, the next steps are conducting a risk assessment and developing the recovery strategy. The risk assessment will help to determine the probability of a critical system becoming severely disrupted, identify vulnerabilities, and document the acceptability of these risks to the organization. Engagement from the entire planning team is necessary in order to accurately review and record details for critical records, systems, processing requirements, support teams, vendors, etc. – all needed in order to develop the recovery strategy.

Also important in the recovery strategy is identifying the recovery infrastructure and outsourcing options – ideally alternate datacenter facilities from which critical systems and data can be recovered in the event of a serious interruption. This, as they say, is the point at which the bacon hits the frying pan: many organizations are leveraging the power and abundance of Cloud-based IT resources to lower infrastructure costs, and Cloud is particularly applicable to DR. In fact, there are more than a few services that provide continuous data protection, typically accomplished via unobtrusive software agents residing on each server in a datacenter. These agents connect to an appliance (a “black box”) also residing in the datacenter, which incrementally takes images of each server, de-duplicates the data, then replicates it via secure WAN to a remote data store, ultimately providing on-demand recovery (via secure web console) from the remote location at any time. Companies such as nScaled, iland, and Simply Continuous offer such services and can even help build a business case to illustrate the ROI. Point is, do thy homework and explore whether Cloud services such as these might be a sound fit for your organization’s DR plan.
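
For the curious, here’s a conceptual sketch (Python) of that snapshot–dedupe–replicate loop. To be clear, this is my illustration of the general technique, not how nScaled or any other vendor actually implements it:

  import hashlib

  def replicate_to_remote(block: bytes) -> None:
      """Stand-in for the secure WAN transfer to the remote data store."""
      pass

  seen = set()  # the appliance's de-duplication index

  def protect(server_image_blocks):
      """One incremental pass: image the server, de-dupe, replicate."""
      for block in server_image_blocks:
          digest = hashlib.sha256(block).hexdigest()  # fingerprint the block
          if digest in seen:
              continue                  # already stored offsite; skip it
          seen.add(digest)
          replicate_to_remote(block)    # only new/changed data crosses the WAN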

Planning and Testing

Armed with a full impact analysis, risk assessment, recovery goals, and outsourcing options, the actual DR plan can now be developed. The DR plan is a living document that identifies the criteria for invoking the plan, procedures for operating the business in contingency mode, steps for recovering lost data, and criteria and procedures for returning to normal business operations. A key activity in this step is to identify in the DR plan a recovery team (which should consist of both primary and alternate personnel from each business unit) and the recovery processes and procedures at each business unit level. Also important is ensuring the DR plan itself is available offsite – both via the web and in permanent media form (print, CD-ROM, etc.).
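
As a starting point (and only that – every organization’s plan will differ), the living document might be skeletoned out along these lines; the section names below simply mirror the components just described:

  # A hypothetical skeleton for the DR plan document itself; sections
  # mirror the components described above, not any formal standard.
  dr_plan = {
      "activation_criteria": ["who may invoke the plan", "trigger events"],
      "contingency_mode":    ["manual procedures", "alternate facilities"],
      "data_recovery":       ["restore order (by RTO)", "verification steps"],
      "return_to_normal":    ["criteria", "cutback procedures"],
      "recovery_team":       {"primary": ["name, unit, role"],
                              "alternates": ["name, unit, role"]},
      "offsite_copies":      ["web", "print", "CD-ROM"],
  }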

Equally important to having a DR plan is regular testing. This step includes designing disaster/disruption scenarios and developing and documenting action plans for each scenario. Conducting regular tests with full operational participation is key to success.

Ongoing Plan Evaluation

A DR plan is only a good plan if it is continually kept in lock-step with all changes within the organization. Such changes include infrastructure, technology, and procedures – all of which must be kept under constant review, with the DR plan updated accordingly. DR plan testing should likewise be evaluated on a regular basis, and any adjustments made (systems, applications, vendors, established procedures, etc.).

So there you have it – four key building blocks to tailoring a DR plan for your organization.  Of course, if the ‘disaster’ arrives in the form of a city-sized asteroid hurtling towards Earth, needless to say any plan will likely not make much difference. Anything short of such a global catastrophe, however, and a well-developed and maintained DR plan will keep employees and customers connected and business moving forward, with minimum downtime.

Again, this is by no means a complete recipe for designing and implementing a DR plan but instead is meant to serve as a high-level overview…offered as food for thought. I encourage you to learn more, explore options, ask for help if needed – whatever it takes to thoroughly prepare your organization for the worst, should the worst ever occur. To loosely paraphrase our man Keanu once again from another of his, er, more questionable films from back in the day – Johnny Mnemonic – this is one topic where you absolutely, positively don’t want to “get caught in the 404”.

What can Cloud do for you?

by Marc Watley on June 28, 2010

By now, you’re probably well aware of the AT&T/iPad security debacle earlier this month, yes?

Good.

AT&T’s security breach was cause for serious concern for iPad users and was first reported at Gawker.

Since this story broke, there have been scores of articles prattling on about the “vulnerability of the Cloud”, “Cloud failures”, etc. Sensational headlines pay bills, granted, and while it’s important that security issues receive attention, I’d much rather look at this from a more holistic angle:

Why is adopting Cloud solutions unavoidable for companies that want to remain competitive?

and also…

How can Cloud be introduced into IT environments in a secure and highly available fashion?

Let’s be Swarovski-crystal clear here: This incident was a good thing, friends! At fault in the iPad incident was a poorly-secured back-end on AT&T’s side of the fence.  As Gawker’s Ryan Tate accurately points out in his story,

“AT&T exposed a very large and valuable cache of email addresses, VIP and otherwise.”

That said, the pundits do have a point, which is that this incident has implications for security – in this particular instance, with the underlying AT&T infrastructure powering the iPad. Responsibility for security with Cloud services is critical and falls on all parties involved: the device manufacturers, application developers, and service/infrastructure providers must provide and maintain a secure environment and immediately resolve issues when discovered. The same goes for end users. Thankfully, in this case only one person outside of Goatse Security (who were evidently behind the “attack”) was provided the list of 114,000 email addresses, and only after the group had leaked the flaw to AT&T, where it allegedly went unaddressed for almost a day. That person was Ryan Tate at Gawker, who broke the story. While white-hat groups like these are sometimes criticized for their “alternative disclosure process”, they actually do more help than harm: the more ‘holes’ like this are found, the more secure the Cloud solutions available to all of us become in the long run. I say “hats off” (sorry, couldn’t help that one!) and keep up the good work.

So, should these security issues be taken seriously?

Hell yes!

Should you hold off moving any of your company’s IT infrastructure to the Cloud as a result of incidents such as “iPadgate”?

Absolutely not!

Consumers and small businesses alike have, en masse, placed their trust in Cloud-based solutions, so much so that services like GMail and GoToMeeting, for example, have become core to day-to-day life – in both personal and business settings. At the enterprise level, CIOs and CTOs worldwide are rapidly climbing aboard the Cloud train as well, deploying various solutions within their organizations. These highly scalable, on-demand solutions can help businesses deploy additional infrastructure quickly with reduced capital costs and refreshed technology – often helping to optimize operating costs as well. The rate of adoption in the business community is increasing rapidly: Gartner forecasts that by 2012 some one in five businesses will own no IT assets, and that by 2013 businesses will be spending some $150 billion on Cloud services.

The question is, how can businesses take advantage of high-level Cloud solutions right now and still retain some peace of mind relative to availability and security? Fairly easily, and in just a few steps. Whether your organization is a startup or an established enterprise, Cloud solutions can play a key role in your IT organization. Risks related to security, control, and availability with Cloud services are not dissimilar from those in any IT environment, and can be mitigated through careful provider selection and sound planning. Here are a few steps that might be helpful in your adoption of these services:

First: Strategize. Plan. Then plan some more.

Devising a sound strategy and planning effectively is a sure first step to approaching and taking advantage of Cloud solutions for your business. The one thing you can’t afford to do is get this stuff wrong. This is especially true if your company is itself a service provider of one form or another, as most businesses today are. It would take only one mishap – say, the inability to quickly test and release a patch to your popular online game, or physicians unable to access their patients’ electronic medical records – to realize the importance of effective planning and smart Cloud provider selection. If you need a hand with strategy or vetting the providers and options, don’t be afraid to ‘phone a friend’ – there are many IT consultants and brokerage firms out there fluent in Cloud who are objective and can be helpful from strategy through to implementation, often saving you both time and resources.

Planning for the deployment of Cloud services such as storage or rich content delivery is fairly straightforward, as the related services – Amazon’s S3 storage or EdgeCast’s Content Delivery Network (CDN) services, for example – are more or less plug-and-play and can be segregated from the rest of your infrastructure. Services that include compute functions (Cloud-based servers and related infrastructure), however, will take a bit more time and detail at the planning stage. Most businesses considering Cloud deployments of this type spend the necessary time to analyze existing and future needs around each of their main environments, which typically fall under:

  • Development
  • Testing
  • Quality Assurance (QA)
  • Production

Evaluating Cloud services by IT discipline is smart, since there are many available options for compute power (CPU), memory, storage, and networking – and the build requirements within each environment will likely vary. A good strategy includes a thorough understanding of the resources you currently have in place, gained by spending the necessary time to evaluate the needs of each of your IT environments.
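
One lightweight way to capture that evaluation is a simple per-environment requirements matrix. A sketch, with placeholder figures you would replace with your own findings:

  # Hypothetical per-environment build requirements, captured up front so
  # Cloud options can be compared discipline by discipline.
  # All figures are placeholders, not recommendations.
  environments = {
      "development": {"vcpus": 8,  "ram_gb": 16,  "storage_tb": 0.5, "mbps": 10},
      "testing":     {"vcpus": 16, "ram_gb": 32,  "storage_tb": 1.0, "mbps": 50},
      "qa":          {"vcpus": 16, "ram_gb": 32,  "storage_tb": 1.0, "mbps": 50},
      "production":  {"vcpus": 64, "ram_gb": 256, "storage_tb": 10,  "mbps": 500},
  }

  for env, needs in environments.items():
      print(f"{env:12}", needs)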

Understanding your existing financial exposure and budget for Cloud solutions during this stage is also important.  Some questions to consider:

  • Hardware: What is the current value of existing hardware per environment, and how close are you to needing hardware refresh? What are the capital costs associated with such a refresh?
  • Network: What are the current monthly costs for networking/IP bandwidth, per environment?
  • Labor: What are the current human resource costs associated with operating each environment (operations, monitoring, support, etc.)?
  • Roadmap: What are the hardware, infrastructure, performance, and human resource requirements, per environment, over the next 18-24 months, needed to support growth demands?
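
Back-of-the-envelope, the answers to those questions roll up into a monthly figure you can hold up against any Cloud quote. A sketch, with invented numbers:

  # Roll the budget questions above into a comparable monthly figure.
  # Every number here is invented for illustration.
  def monthly_cost(hw_value, refresh_years, network_monthly, labor_monthly):
      hw_monthly = hw_value / (refresh_years * 12)  # straight-line the refresh
      return hw_monthly + network_monthly + labor_monthly

  current = monthly_cost(hw_value=240_000, refresh_years=3,
                         network_monthly=4_000, labor_monthly=12_000)
  print(f"Current monthly run rate: ${current:,.0f}")
  # Hold this against a candidate provider's all-in monthly price, both
  # at today's scale and at the 18-24 month projected scale.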

From these and similar questions, you should be able to arrive at total monthly operating costs – both for your current environment and at scale. (Consultants can be helpful here as well, many times providing that second set of objective “eyes”.) With your Cloud approach now defined, you’ll likely see immediate capital and operating cost reductions, the ability to quickly scale infrastructure commensurate with usage and growth, and the ability to reallocate human resources to support more business-critical IT functions. Still with me? Alrighty then…on to finding and selecting the right providers.

Next: Thou shalt do thy homework.

There might be as many shops hawking Cloud services today as there were candy and toy shops selling Beanie Babies many years back…craziness! As you step through your due diligence with potential providers, beware the Cloud pricing that sounds too good to be true…because, as the adage dictates, it probably is. When you dig below that wow-this-provider-is-30%-cheaper-than-the-others! pricing, don’t be too surprised at what you’ll likely find. The provider in question might indeed have some of the shiny bells and whistles you’re after, but perhaps only one datacenter…so if the ground opens up during an earthquake and swallows it whole, or a tornado carries it away Dorothy-and-Toto-style, well…don’t say I didn’t warn you. Other low-cost leaders tend to lure you in with great pricing, but have limited resources on hand (meaning they’ll need to go out and buy those Bugatti Veyron-grade servers you’re after, and will charge huge setup fees accordingly). Also, ensure your provider-to-be is well-certified and maintains the regulatory compliance your organization requires (typically SAS-70 Type II at a minimum).

So, let’s move those “life on the B/C/D-list” providers right into the rubbish bin, shall we? Right. Now we’re left with the few real players – names you’ll likely recognize: Amazon, Google, Joyent, Microsoft, Rackspace, Rightscale, Terremark. (These are but a few of the many A-list providers available.) Spend time with each prospective provider; ask tough questions, ensuring the selected provider has ridiculously good support, lots of IP bandwidth options, and security features that exceed your own requirements. Take a good, hard look at each provider’s pricing, negotiating wherever possible and comparing against your existing cost structure. When it comes time for final selection, take a well-organized approach to that as well. My colleague Matt Carmen recently broke down in detail the process of selecting the right outsourced IT provider, which I would recommend checking out.
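
If it helps keep the comparison honest, a crude weighted scorecard beats gut feel. The criteria, weights, and 1-5 scores below are entirely yours to define – this is just the shape of the exercise:

  # A crude, hypothetical scorecard for shortlisted providers.
  weights = {"support": 0.3, "bandwidth": 0.2, "security": 0.3, "price": 0.2}

  providers = {
      "Provider A": {"support": 5, "bandwidth": 4, "security": 5, "price": 3},
      "Provider B": {"support": 3, "bandwidth": 5, "security": 4, "price": 5},
  }

  for name, scores in providers.items():
      total = sum(weights[c] * scores[c] for c in weights)
      print(f"{name}: {total:.2f} / 5")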

Finally: A phased approach.

Now that you’ve got a good head for the Cloud services and options that will best suit your business, it’s time to get tactical.   A best practice when approaching any Cloud solution is to pilot, evaluate, then implement in phases.  Select which of your IT environments will work best for a pilot.  The pilot should be brief, but long enough to provide a thorough evaluation…30-45 days is usually enough time.  We’re still on the leading edge of what Cloud can provide, and solutions are constantly evolving.  Providing feedback during a pilot period is key – try to break the solution and be as granular as possible with your providers as to how their service can be improved.  The good providers will work with you and incorporate your feedback into their services…and everyone wins.

Following a successful pilot, make the move into a full launch window with the same sure-footed project management aplomb as you might bring to releasing a new product to your customers. You’ll find here again that the provider(s) you’ve selected will work hand-in-hand with your team, ensuring a smooth transition and quickly addressing any issues.

Congratulations!

Now that wasn’t so bad, was it? Your organization is now among a rapidly growing number of businesses that have embraced Cloud solutions and whose IT organizations have realized increased availability, efficiency, and productivity as a result. Once you’ve got one environment successfully ported and humming right along, you’ll likely see the benefits of deploying the rest of your environments in similar fashion.

Stir. Repeat. Enjoy.


iPad: Faster than a speeding gurney

by Marc Watley on April 29, 2010

While at dinner in San Francisco recently, I asked a good friend – a nurse who works in the Sutter Health system – for his thoughts on the iPad. “Oh yeah…I want one…now!” he responded excitedly. (We were dining at Paxti’s, if you must know; good Chicago-style pizza, but nowhere near as yummo as the pies at Zachary’s across the Bay in Oakland.) “I think they could be the answer to replacing our patient charts” (you know, those old-school metal-clad clipboards dangling from the foot of patients’ adjustable beds). “The iPad would be a fantastic tool for the healthcare industry.”

My friend’s enthusiasm got me thinking: why wouldn’t healthcare COOs and/or CTOs (often doctors themselves) embrace the iPad for their Electronic Medical Record (EMR) and other patient care needs? The productivity gains alone would seem to more than pay for the device, connectivity, and implementation costs. Then there’s the excitement factor…when was the last time you saw end users this excited to use any device for work? Granted, the iPad may not be as indestructible as, say, a $1700 ruggedized WinTelMo device currently in use by some healthcare providers, but at less than a third the price, with a larger interface and, I mean, come on, a much better UI, it seems to be a no-brainer, right? True, the iPad is a spanking-new device that has been on the market all of three weeks and is as yet unproven in the enterprise. However, this does appear to be changing…and fast. Consider a couple of data points:

First, Citrix is already on board with the iPad, having recently released Receiver for iPad, which provides secure access to corporate applications and desktops – from Windows environments to Oracle databases to, well, 3D medical imaging applications.

Also, healthcare organizations are already beginning to show up among the early business adopters of the iPad. According to iMedicalApps, Kaweah Delta Health Care District near Fresno, California, has reportedly ordered 100 iPads for use in viewing radiology images, electrocardiogram (ECG) results, and other patient test results. They feel the iPad will be especially useful for their mobile healthcare workers, such as hospice and home health nurses. Kaweah Delta presents an interesting test case for the device…could it also serve as a sort of bellwether for iPad adoption in healthcare? Possibly.

The subject of the iPad and its potential – for healthcare and many other industries – continues to flood the Internet, and will surely be a hot topic at upcoming conferences including, I suspect, next month’s CIO Healthcare Summit.

Now then, picture if you will…let’s call him Dr. Jones.  Our good doctor is commuting home via train after a long shift at a large metropolitan hospital.

Phone rings:

“Doctor! Mr. Smith in 207 is irate and uncontrollable…help!”

Dr. Jones: “Is that right? One moment.”

Enter iPad: Secure login > Recovery Wing NE > Patients > Smith, Joseph R. – Rm. 207/A > [TAP] > Patient I.V. > Add > Sedatives > “Sleepia TZ” > [TAP] > Administer > CONFIRM > [TAP].

Patient: Zzzzzz.

Hospital staffer: “Bless you Doctor!”

A fictitious situation? Sure. I’m just sayin’…

Week In Review – Mar 21 – Mar 27, 2010

by Magesh Tarala on March 28, 2010

Everything is so amazing and nobody is happy

by Vijay Peduru, Mar 22, 2010

Have you taken a moment to reflect on how amazing it is to be living in this time and age? There are so many gadgets and tools that increase our capacity phenomenally, yet we often end up complaining about trivial things about them. Watch the short humorous video in this post; it will help you realize that we are very lucky indeed! more…

Leadership Cancers #2: The insanity of multitasking

by Gary Monti, Mar 23, 2010

The human brain is similar to a single-core microprocessor. Multitasking in either case involves context switching, which is expensive. But is it effective? Multitasking should not be confused with tasks we can perform simultaneously, like chewing gum and walking. In this post, Gary argues that high-value tasks, or tasks that have high impact when something goes wrong, are not conducive to multitasking. Don’t agree? Well, have you read about the impact of texting and driving? Or next time you go to a meeting, try working on your laptop and listening to the conversation at the same time.

One of our readers, Avi, commented that multitasking is related to picking up tasks in a “wait” state. While it is true that this enables efficient use of time, it does not mean that you can do multiple tasks at the same time. If task A takes 40 hours, you cannot expect task B to fit into those same 40 hours; rather, if task A hits a roadblock and cannot progress, task B gets worked on. Do read Gary’s response too. more…

Past is NO way to the Future

by Guy Ralfe, Mar 23, 2010

Ever dealt with a financial advisor or read an investment brochure? The standard disclaimer is that past performance is not an indication of future performance. While knowledge of the past is definitely valuable, we should be aware that the future will not mimic the past. Now apply this to your life and your actions: don’t let the past hinder your future performance. more…

CAPEX-Free IT: How to refresh your technology, deliver stellar IT, and keep your CFO happy

by Marc Watley, Mar 25, 2010

Money is tight everywhere. According to most surveys and reports, CAPEX spending in IT is going to increase only slightly this year at best, if not remain flat. Resources are down to 2005 levels. So, how do you do more with less? With the advent of virtualization and cloud computing, there are numerous options to pay as you go. When implementing this strategy, do it the Kaizen way. more…

Author’s Journey #14: How to get others to help you write your book

by Roger Parker, Mar 26, 2010

In this post Roger describes three basic approaches to getting others to help you write your book. They are:

1. Paying for Help
2. The Network Approach
3. Social Media Approach

Read the post to understand what they are and how to leverage existing tools. As always, your choice should be determined by your goals and your resources. more…

CAPEX-Free IT: How to refresh your technology, deliver stellar IT, and keep your CFO happy

by Marc Watley on March 25, 2010

Last November at the Business Technology Summit in Bangalore, India, I shared a few thoughts on how leaders of IT organizations can refresh technology, continue to deliver sound IT and meet customer needs, and, by taking advantage of on-demand infrastructure services, remain in the good graces of the company CFO. I’ll attempt a bit of a recap here, adding recent data points and additional thoughts along the way. Well then, shall we?

The US Department of Commerce reports (in its most recent Information & Communication Technology Survey, conducted in 2007) that capital expenditures (CAPEX) make up, on average, 63% of companies’ IT spending.

CIOs today are faced with decreased capital and operating IT budgets, as well as staffing reductions or hiring freezes. All the while, the expectation to scale applications and systems – and maintain performance and SLAs – remains.

In October 2009, Barclays Capital reported in its CIO survey that IT spending expectations would increase 0-5% in 2010 versus 2009, with priority placed on datacenter expansions, virtualization, and Cloud initiatives. In the near term, CIOs forecasted that their IT spend would be focused on storage, networking, and servers. Compare the Barclays results with a more recent CIO survey, conducted by Gartner this past January: the 1,500 or so CIOs surveyed yielded a similarly bleak forecast. IT spending will largely remain flat, and if it does increase, it’ll do so by just over one percent. (Which clearly isn’t great, but it also isn’t the elbow-drop of more than eight percent in IT budgets seen in 2009.) Oh, and the CIOs in this latest survey also posited that their IT organizations have about the same level of resources now as they had back when Kelly Clarkson’s “Since U Been Gone” ruled the airwaves; yep, welcome back to 2005. The lesson? Wring everything you can from what you’ve got. Having fun yet?

Right, so how does one squeeze blood from this proverbial IT turnip? The first step is to dig into your IT spending a bit – gain as much understanding and insight as possible into what you have in your arsenal today, and the related costs. This sounds like a simple task, but you’d be surprised how many IT directors and CIOs don’t really know exactly what they are using and how much they’re paying. (Sidebar: if you haven’t yet read my partner Brian Superczynski’s article from last week, I’d encourage it; he offers good thinking and a few tools around this business insight.)

The CIOs surveyed by Gartner report that their top three business priorities for 2010 are:

  • Business process improvement
  • Reducing enterprise costs
  • Increasing the use of informatics/analytics

In addition, their top three technology priorities for this year are:

  • Virtualization
  • Cloud computing
  • Web 2.0 (read: Social Networks)

Taking a cue from this, the next step to a CAPEX-free world is to first address how the data points above stack up against your own business and technology priorities, then explore ways in which you can reduce capital costs by taking advantage of outsourced infrastructure and related services like virtualization. Hang on now…don’t be afraid of that Cloud…embrace it. I’m not suggesting you entrust your most valuable corporate crown jewels to a multi-tenant (shared) Cloud service (Amazon Web Services/Elastic Compute Cloud and Rackspace Cloud largely fall into this category). These services do have their place, and you may find they will play an important role for your organization at some point. However, I’m referring to enterprise-class, private datacenter services, where you retain complete control over access to your infrastructure above the operating system; it’s just that someone else manages everything else – hardware/upgrades/monitoring, inter-networking, bandwidth, power, etc. are all taken care of. Think of this as an extension of your own datacenter that simply lives elsewhere…you still have the master key, and you pay for everything “as-a-service”, largely free of capital costs.

These as-a-service solutions take on many forms, each designed to address specific IT needs: Compute Clouds for development or testing, storage arrays, backup datacenters/Disaster Recovery (DR) services, email, or simply a rack of dedicated servers with your name on it. A few providers to consider in this area: private Cloud services like Terremark’s Enterprise Cloud or SoftLayer’s CloudLayer; datacenter replication/DR from nScaled or CA’s ArcServe; or dedicated, managed servers from Latisys, The Planet, Rackspace, and others. The point is to spend the time. Sit in on a few webinars. Perform your due diligence, seek help if you need it, and I think you’ll find that utilizing tools like infrastructure-as-a-service for some applications or areas of your infrastructure makes sound business sense. Also, if a technology refresh initiative is core to achieving your goals in 2010, these services typically deploy on the latest hardware…double bonus!

By the way, much of the data from the Gartner survey can be found in the author’s blog post here.

The Japanese pioneered a concept you are probably familiar with called Kaizen: calculated steps, taken daily, designed to achieve a larger goal. Applying this methodology is key to a sound migration path away from capital IT expenditures and toward infrastructure-as-a-service. (I know, I could have just said “baby steps”, but this Kaizen thing just sounds better, no?) Here’s how you apply it:

  1. Start small.
  2. Find a service to your liking, then deploy – perhaps just one or two applications or projects.
  3. Monitor performance and costs for a month or three, using the service as a litmus test to evaluate and design the migration for the rest (or most) of your infrastructure (a rough sketch of this evaluation follows below).
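
Here’s that step-3 evaluation sketched out, with invented thresholds; the point is to make the pass/fail call on numbers rather than vibes:

  # Hypothetical pilot scorecard for step 3: compare the pilot service
  # against your current baseline before migrating anything else.
  baseline = {"uptime_pct": 99.5, "monthly_cost": 16_000, "p95_latency_ms": 180}
  pilot    = {"uptime_pct": 99.9, "monthly_cost": 11_500, "p95_latency_ms": 140}

  passes = (pilot["uptime_pct"]     >= baseline["uptime_pct"] and
            pilot["monthly_cost"]   <= baseline["monthly_cost"] and
            pilot["p95_latency_ms"] <= baseline["p95_latency_ms"])

  print("Expand the migration" if passes else "Re-plan before scaling out")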

If they pass muster, simply add more fuel to this as-a-service fire, and before long you’ll find yourself on the receiving end of praise from the CFO, your own team, and your customers as well. Or maybe you’ll find yourself singing happily along to “Since U Been Gone.” If the latter, you just might want to keep that one to yourself.