Posts Tagged ‘cloud computing’

Does your company’s leadership think that a robust Business Intelligence function is only viable for large corporations? Think again. In today’s global world, with information shared in the blink of an eye, it is imperative that all companies know their numbers and manage by them. The information a Business Intelligence (BI) function can provide can mean the difference between growth and competitive decline. Utilizing BI has been proven to deliver significant competitive advantages for small companies and large corporations alike.

Business Intelligence Planning

Initiating a Business Intelligence function in your company does not need to be excessively expensive, but it does require careful planning. There are five key steps in developing a plan for a BI function that, if followed, increase your chance of success. The level of effort required for these steps depends on the size and complexity of your organization. Small companies can accelerate the process by engaging a consultant with BI experience to help design the program.

The five steps are:

  1. Evaluate the company’s strategic objectives for critical success factors;
  2. Design the Performance Measurement Blueprint;
  3. Perform a Gap Analysis;
  4. Develop Key Performance Indicators (KPIs); and
  5. Develop the high level plan for Reporting – Scorecards, Dashboards, Reports.

It is important to involve top management early in this process. Their support will be critical to getting funding for the BI program once the analysis has been done. Depending on the company culture, however, a draft proposal detailing the potential costs and benefits may be beneficial early on. A clear demonstration of the need for a BI program can facilitate its approval and funding.

It is always best to start the planning process with an evaluation of the company’s strategic objectives. To maximize the probability of success, any BI program should be aligned to the mission, vision, and the strategic objectives of the organization. Another critical success factor is documenting the benefits of a BI program up front in order to garner the support of top management or ownership.

Once you have determined how to align with the strategic objectives, look at the levels of performance expected in order to meet or exceed those objectives. Determine any dependencies between objectives in this review; this will help you identify where you can expect cost savings and cost avoidances. An initial draft of the intangible benefits should also be developed at this time. Common benefits include improving quality, improving customer retention, gaining market share, reducing costs, meeting regulatory requirements, and fostering continuous improvement and innovation.

The next two steps are to identify the high level requirements for data collection and to perform a gap analysis. The gap analysis will identify any gaps in current capabilities to measure, analyze, and present the elements of the performance plan. From there you can start to develop the KPIs needed to track performance. The last step of the planning process is to determine, at a high level, what reporting components will be needed. Generally, scorecards and reports will be needed for managers and staff, while dashboards will be needed for senior management.
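
To make the KPI and reporting steps a bit more concrete, here is a minimal sketch of how KPIs aligned to strategic objectives might be tracked against targets and rolled into a simple scorecard. The KPI names, targets, and values are hypothetical placeholders rather than figures from any particular program.

  # Illustrative scorecard sketch; all KPIs, targets, and actuals are hypothetical.
  from dataclasses import dataclass

  @dataclass
  class KPI:
      name: str
      objective: str            # strategic objective the KPI is aligned to
      target: float
      actual: float
      higher_is_better: bool = True

      def status(self):
          if self.higher_is_better:
              on_track = self.actual >= self.target
          else:
              on_track = self.actual <= self.target
          return "on track" if on_track else "off track"

  scorecard = [
      KPI("Customer retention rate (%)", "Improve customer retention", target=90.0, actual=87.5),
      KPI("Order defect rate (%)", "Improve quality", target=2.0, actual=1.4, higher_is_better=False),
      KPI("Market share (%)", "Gain market share", target=12.0, actual=12.6),
  ]

  for kpi in scorecard:
      print(f"{kpi.name}: target {kpi.target}, actual {kpi.actual} -> {kpi.status()} [{kpi.objective}]")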

Before you can complete the proposal for establishing a BI program you will need to determine the expected costs and benefits for presentation to management. This involves determining how the program will be designed. There are several options to consider in developing your capabilities for initiating a BI function. Each option will have different costs, timelines, and pros and cons associated with it.

The most popular options for launching a BI program are:

  1. Outsourcing a portion or the entire function;
  2. Purchasing a package through one of the many BI vendors; or
  3. Starting small with an in-house team.

The costs and the benefits of each of these options should be included in the BI program proposal. For a small company the third option is often the best initial choice due to its lower cost. However, the cost of outsourcing and vendor packages can often be competitive and can decrease the time to adoption.

With all aspects of the BI plan identified, the last step is to put them together in a proposal that clearly shows the costs and the benefits of having a BI program. The most compelling benefit in today’s increasingly competitive environment is the advantage that an analytically focused strategy can give to your company’s success, regardless of its size.

Written by Linda Williams, who is partnered with Datacenter Trust and also has a Business Intelligence consulting practice where she provides businesses with assistance in performance measurement, process improvement, and cost reduction.

Business Intelligence in a Wiki World!

by Linda Williams on March 28, 2011

The role of the Business Intelligence (BI) function within the organization has become critical to thriving in today’s evolving business environment. The ultimate purpose of Business Intelligence is to provide management with analytical insights that can be used to improve business performance and competitive position. Analytics provided by the BI department, while intended to focus the organization on its core operations and its progress toward strategic objectives, can increasingly be the impetus for transformational change.

A review of top companies in their industries clearly shows that they all manage their performance using some sort of BI techniques. The standard tools of BI are based upon gathering actionable metrics that can be used to increase the effectiveness and efficiency of operations. This data is analyzed and compiled into reports, including dashboards, scorecards, and predictive models. As an added service in more evolved companies, the BI team generally provides consulting on metrics, proposing ways to make better decisions about operations and suggesting improvement initiatives.

Often the development of these insights is closely guarded within the company to ensure at least a temporary advantage in the marketplace. The intent is that these analytical capabilities will give the company a first-mover edge as it develops new markets or approaches for its business.

The Problem

This advantage does not last for long in today’s connected world.

The basic analytical tools of BI, however, are well known and in the public domain. Implementing basic BI has become not a luxury but a standard cost of doing business. Books such as Competing on Analytics give many examples of the types of analytics that can be collected and analyzed. There is also a tremendous amount of open information on BI and Key Performance Indicators (KPIs) on the web. Companies can use this information to identify enhancements to their current analysis through their own review of wikis, blogs, and even competitors’ websites.

The Dilemma

The dilemma of what to hold close and what to open up is increasingly becoming a key decision point in a BI project’s lifecycle. The discussions weigh the pros and cons of when it is best to foster creativity by opening research up to collaboration and when Intellectual Property (IP) should be preserved. Often the decisions are not clear cut, and there may be lively discussions between the BI team and the executive team about the best approach for the situation. At the heart of these discussions is whether competitive advantage would be better served by keeping the company’s intent secret, at least in the short term, or whether, in the interest of speed and expertise, it would be better to tap into the wiki community.

Wikis

The overall purpose of wikis is to provide a place to share content, ideas, and links, and to collaborate on information, technical documentation, or the development of new ideas. The wiki world, in contrast to the traditional BI world, thrives on openness and transparency. Some of the key advantages of the wiki approach are:

  1. The potential to leverage the talents within the wider community;
  2. A reduction in the time to innovation; and
  3. The ability to incorporate social purposes that may go beyond the core competency of the company. An example is using external assistance in developing approaches to help the organization move toward being “green”.

Clearly there are compelling advantages to be gained by developing analytic dimensions with the help of the larger wiki community. Precedents for using this approach are also becoming more common. Some well-known examples of advances made by opening up IP include the development of Linux, Netflix’s contest to develop an algorithm for customer preferences, and Google’s opening up of application development for Android. In each case, the risk of opening up what may have been considered IP was outweighed by the benefits of collaboration.

Final Thoughts…

Secrecy in all areas of analytical review is no longer possible, or even preferable, in a world made increasingly transparent by the pervasive use of social media and by employees who are mobile, connected, and less likely than previous generations to remain in one job for long. There are significant advantages to a business in tapping into networked intelligence to speed up problem solving or make breakthroughs, and these benefits may in some cases outweigh the risk of the competition using the same information or approach. The final decision, however, cannot be rote; it must rest on the complexity of the need and on whether internal resources have the expertise to meet it.

Written by Linda Williams, who is partnered with Datacenter Trust and also has a Business Intelligence consulting practice where she provides businesses with assistance in performance measurement, process improvement, and cost reduction.

Cloud: A truly nebulous term

by Marc Watley on October 29, 2010

Yes, yes I know…ol’ Marc has subjected you to yet another bad pun. You’ve got to admit though that it fits the bill here. The term “cloud” is, in my book, one of the most over-used technology terms in recent memory, and it’s high time for change.

(Ridiculous sidebar: Anyone else watch Science Bob conjure that “cloud” on Jimmy Kimmel Live the other night? Hilarious!)

The thing is, almost all of what we use on the web today exists ‘in the cloud’ at some level or another. Think about it – your mail isn’t fed from a server sitting in your basement, is it? No, it’s typically served by a cluster of mail servers in the “cloud” – perhaps located within your company’s datacenter or provided by Yahoo!, Hotmail, Gmail, or the like. What about shopping? Our profiles, containing our shipping addresses, purchase preferences, and credit card numbers, likewise exist in the “cloud”. The social utilities we’ve come to depend on for business and fun – LinkedIn, Facebook, Salesforce, Twitter, Foursquare, and so on – are also services used almost entirely in the “cloud”. The technology that powers the various “cloud” solutions continues to advance rapidly. This, along with increased availability and reduced costs worldwide for high-speed Internet access, has allowed the service offerings to evolve as well.

The fact that both individuals and growing businesses can tailor solutions from the breadth of available “cloud” services is fantastic. The issue at hand is the term “cloud” itself: an umbrella term most often used to describe ‘hosted’ or remote services – services which have expanded rapidly during the last two years. The term “cloud” has simply reached a point of causing confusion. For example, though commonly referred to as “cloud computing”, it’s not always actually computing, is it? We can now select from solutions allowing us to compute, store/archive/recover data, manage content, send/receive mail, place calls, conference, and network with colleagues, friends, and prospects – all with the moniker of “cloud” attached. “Cloud” is descriptive in this sense, sure, but only mildly so.

My $0.02 is that the term “on demand infrastructure” – or simply “on-demand” – is more reflective of available solutions and less confusing than the term “cloud”. Adopting the “on demand” term virtually eliminates the need for wonder, fretting, or quarrel over the best flavor of the solution – public/multi-tenant (Amazon EC2), private (your own VMware or Terremark Enterprise Cloud instance), platform (Salesforce), or hybrid. Whatever the end solution, simply think of it as on-demand infrastructure; the levels of access, control, and security needed upon deployment are completely up to – and configurable by – the user.

I’ve noticed in the past several months that several technology companies including Oracle, F5, Servosity, and Rackspace have begun to use “on demand” (seemingly in place of “cloud”) to describe their services, features, and benefits. I think it’s a smart move, but who knows where this will end up; the term “on demand” might work best for everyone. Might not.

Anyhow, Cloud: you’ve served us pretty well…thanks. Now it’s time to bid adieu and bon voyage.  Oh, and when you reach wherever it is that you Internet buzzwords fade away to, please do say hello to our old friend “Web 2.0”, will you?

Written by Marc Watley, Co-Founder & CEO of Datacenter Trust and CMO at Reppify. Datacenter Trust is an IT consulting and services delivery firm, helping growing businesses make smart decisions from sound financial analysis and business intelligence. Reppify is a leading-edge technology company pioneering the use of social media data to drive better business decisions. Follow on Twitter: Datacenter Trust @datacentertrust and Reppify @reppify

4 steps to effective Disaster Recovery planning

by Marc Watley on August 23, 2010

Question: A wildfire 10 miles from your company headquarters is raging out of control. The fire captain has just ordered everyone in your building to evacuate. All staff have safely left the premises, and now you are likewise heading out, taking one final look at your datacenter – still humming away, unsuspecting. You have offsite data storage but no offsite server infrastructure, applications, etc.

What do you do?

I’m paraphrasing from a not-so-great movie here – Speed (Keanu may have been good in The Matrix, but the predictable tête-à-tête between his and Dennis Hopper’s characters in Speed still makes me chuckle) – but IT executives today are, in fact, increasingly faced with the threat of disasters, whether natural (such as a wildfire) or man-made (e.g. some ding-dong crashing a vehicle into your datacenter). I may be taking a bit of creative license here, but this could not be a more serious issue. (Recall those horrible wildfires in the San Diego, California area a few years back? The example above was culled from situations experienced during that period.)

As organizations – and their customers – increasingly rely on databases, servers, and IP-connected applications and data sources, the importance of maintaining continuity of the business infrastructure and limiting costly downtime in the event of a disaster is paramount.

Though many an organization had active disaster recovery (DR) projects on the books a few years ago, the global financial crunch of the last 20 or so months has wreaked havoc on IT budgets everywhere; only now are many of these DR projects once again taking priority.

If you’re thinking that you can ‘wait it out’ and disaster won’t strike on your watch, think again. Apparently, some 93 percent of organizations have had to execute on their disaster recovery plans. Yep. This according to an annual DR survey from Symantec last year.  A few more points from this survey:

  • In general it takes companies [with active DR plans] on average three hours to achieve skeleton operations after an outage, and four hours to be up and running
  • The average annual budget for DR initiatives is $50MM (including backup, recovery, clustering, archiving, spare servers, replication, tape, services, DR plan development and offsite costs)
  • Virtualization has caused 64 percent of organizations worldwide to reevaluate their DR plans

Whether your organization is a small recently funded startup or well-entrenched in the Fortune 100, designing, implementing, and testing a DR plan is an endeavor that takes dedication, careful planning and time (the entire process can take weeks or even months). There are many excellent resources available which can provide knowledge and detail as to the individual steps of a DR planning initiative.  (Cisco’s DR Best Practices site or Disaster Recovery are great places to begin, by the way.)  What follows is a high-level, best-practices overview of the planning process:

Executive Sponsorship

This first step of a successful DR plan involves two key components: One is to secure plan sponsorship and engagement from senior company leadership – CEO, COO, CIO, etc. The other is to establish a planning team that is representative of all functional units of the organization – sales, operations, finance, IT, etc.  This step is the catalyst to a smooth planning initiative, and requires focus and patience.  (The ability to herd cats wouldn’t hurt, either.) It may also be helpful to reduce the impact on internal resources by leveraging outside help from a consulting firm well-versed in DR planning.

Information Gathering

This portion of the planning process – information gathering, due diligence and assessment – is the most involved and most time-consuming, and a true test of teamwork across the organization.

The first step in this part of a DR planning initiative is performing a Business Impact Analysis (BIA), which helps to assess the overall risk to normal business operations (and revenue flow) should disaster strike right this second. The BIA typically consists of identifying and ranking all critical business systems, analyzing the impact of an interruption to those systems, and, most importantly, establishing the maximum length of time each critical system can remain unavailable without causing irreparable harm to the business. This length of time is also known as the Maximum Tolerable Downtime (MTD). Working backwards from the MTD allows an acceptable Recovery Point Objective (RPO) and Recovery Time Objective (RTO) to be established.
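
As a rough illustration of how these measures relate, here is a minimal sketch (with hypothetical systems and hour values, not figures from any real BIA) that checks whether each system’s recovery target (RTO) fits within its MTD and notes the backup interval implied by its RPO.

  # Illustrative sketch only: hypothetical systems and hour values.
  from dataclasses import dataclass

  @dataclass
  class CriticalSystem:
      name: str
      mtd_hours: float   # Maximum Tolerable Downtime, from the BIA
      rto_hours: float   # target time to restore the system
      rpo_hours: float   # maximum acceptable data loss window

  def check_targets(systems):
      for s in systems:
          if s.rto_hours > s.mtd_hours:
              print(f"{s.name}: RTO {s.rto_hours}h exceeds MTD {s.mtd_hours}h -- revisit the recovery strategy")
          else:
              print(f"{s.name}: OK -- RTO {s.rto_hours}h fits within MTD {s.mtd_hours}h; "
                    f"back up at least every {s.rpo_hours}h")

  check_targets([
      CriticalSystem("order processing", mtd_hours=4, rto_hours=2, rpo_hours=1),
      CriticalSystem("internal email", mtd_hours=24, rto_hours=8, rpo_hours=4),
  ])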

With the BIA in hand, the next steps are conducting a risk assessment and developing the recovery strategy. The risk assessment will help to determine the probability of a critical system becoming severely disrupted, identify vulnerabilities, and document the acceptability of these risks to the organization. Engagement from the entire planning team is necessary in order to accurately review and record details for critical records, systems, processing requirements, support teams, vendors, etc. – all needed in order to develop the recovery strategy.

Also important in the recovery strategy is identifying the recovery infrastructure and outsourcing options – ideally alternate datacenter facilities from which critical systems and data can be recovered in the event of a serious interruption. This, as they say, is the point at which the bacon hits the frying pan: many organizations are leveraging the power and abundance of Cloud-based IT resources to lower infrastructure costs, and Cloud is particularly applicable for DR. In fact, there are more than a few services that provide continuous data protection, typically accomplished via unobtrusive software agents residing on each server in a datacenter. These agents connect to a black box also residing in the datacenter, which incrementally takes images of each server, de-duplicates the data, then replicates that data via secure WAN to a remote data store, ultimately providing on-demand recovery (via secure web console) from the remote location at any time. Companies such as nScaled, iland, and Simply Continuous offer such services and can even help build a business case to illustrate the ROI. The point is, do thy homework and explore whether Cloud services such as these might be a sound fit for your organization’s DR plan.
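
For the curious, here is a minimal sketch of the incremental, de-duplicated replication idea described above. The block size, the sample data, and the dictionary standing in for a remote data store are all illustrative placeholders, not any vendor’s actual agent or API.

  # Illustrative sketch of block-level de-duplication and replication.
  import hashlib, os, tempfile

  BLOCK_SIZE = 4 * 1024 * 1024   # read the volume image in 4 MB blocks
  remote_store = {}              # hash -> block bytes (stands in for the offsite data store)

  def replicate_volume(path):
      """Ship only the blocks the remote store has not already seen."""
      manifest, shipped = [], 0
      with open(path, "rb") as f:
          while True:
              block = f.read(BLOCK_SIZE)
              if not block:
                  break
              digest = hashlib.sha256(block).hexdigest()
              if digest not in remote_store:     # de-duplication step
                  remote_store[digest] = block   # "replicate" over the (pretend) secure WAN
                  shipped += 1
              manifest.append(digest)            # ordered hashes allow point-in-time restore
      return manifest, shipped

  # Demo: write a small throwaway "volume image" so the sketch runs end to end.
  with tempfile.NamedTemporaryFile(delete=False) as tmp:
      tmp.write(os.urandom(10 * 1024 * 1024))
      sample_path = tmp.name

  manifest, sent = replicate_volume(sample_path)
  print(f"first pass: {sent} blocks shipped, {len(manifest)} blocks in manifest")
  manifest, sent = replicate_volume(sample_path)   # nothing changed, so nothing ships
  print(f"second pass: {sent} blocks shipped")
  os.unlink(sample_path)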

Planning and Testing

Armed with a full impact analysis, risk assessment, recovery goals, and outsourcing options, the actual DR plan can now be developed. The DR plan is a living document that identifies the criteria for invoking the plan, procedures for operating the business in contingency mode, steps for recovering lost data, and criteria and procedures for returning to normal business operations. A key activity in this step is to identify in the DR plan a recovery team (which should consist of both primary and alternate personnel from each business unit) and the recovery processes and procedures at each business unit level. Also important is ensuring the DR plan itself is available offsite – both via the web and in permanent media form (print, CD-ROM, etc.).

Equally important to having a DR plan is regular testing. This step includes designing disaster/disruption scenarios and developing and documenting action plans for each scenario. Conducting regular tests with full operational participation is key to success.

Ongoing Plan Evaluation

A DR plan is only effective if it is kept in lock-step with changes within the organization. Such changes include infrastructure, technology, and procedures – all of which must be kept under constant review, with the DR plan updated accordingly. DR plan testing should also be evaluated on a regular basis, and any adjustments made (systems, applications, vendors, established procedures, etc.).

So there you have it – four key building blocks to tailoring a DR plan for your organization.  Of course, if the ‘disaster’ arrives in the form of a city-sized asteroid hurtling towards Earth, needless to say any plan will likely not make much difference. Anything short of such a global catastrophe, however, and a well-developed and maintained DR plan will keep employees and customers connected and business moving forward, with minimum downtime.

Again, this is by no means a complete recipe for designing and implementing a DR plan but instead is meant to serve as a high-level overview…offered as food for thought. I encourage you to learn more, explore options, ask for help if needed – whatever it takes to thoroughly prepare your organization for the worst, should the worst ever occur. To loosely paraphrase our man Keanu once again from another of his, er, more questionable films from back in the day – Johnny Mnemonic – this is one topic where you absolutely, positively don’t want to “get caught in the 404”.

Written by Marc Watley, Co-Founder & CEO of Datacenter Trust and CMO at Reppify. Datacenter Trust is an IT consulting and services delivery firm, helping growing businesses make smart decisions from sound financial analysis and business intelligence. Reppify is a leading-edge technology company pioneering the use of social media data to drive better business decisions. Follow on Twitter: Datacenter Trust @datacentertrust and Reppify @reppify

Week In Review – Jun 27 – Jul 3, 2010

by Magesh Tarala on July 4, 2010

What can Cloud do for you?

by Marc Watley, Jun 28, 2010

The recent AT&T/iPad security debacle provided some sensational headlines. But that does not mean you should stay away from cloud computing. If you follow Marc’s recommendations in this post, you can adopt Cloud solutions to remain competitive and do so in a secure and highly available fashion. more…

Leadership and Mythology #8: Myth, Self-Discovery and Business

by Gary Monti, Jun 29, 2010

Tired of doing things you regret? Wonder why the behaviors continue even though they sabotage your position? Vacillate from submission to aggression when making business deals? Want to stop all this and just stay on your unique path? Wonder where the Hell that path is? Read this article to understand the three levels of truth and how they tie to your Myth. more…

Social Media and Tribes #4: Tribal leadership

by Deepika Bajaj, on Jun 30, 2010

The word “tribe” has become part of the popular lexicon. If you have wondered what constitutes a tribe and how tribes function, this article is for you. People who end up as tribal leaders are the ones who leave the tribe better than they found it. more…

Flexible Focus #8: Memory is a slippery slope

by William Reed, Jul 1, 2010

Just as there is a learning curve, there is a forgetting curve. Without periodic review we forget what we learn, and in a month’s time we retain only 20% of what we learned a month before. In this article William describes how to use the Mandala Chart to improve retention. more…

Author’s Journey #28: Creating a marketing plan for your book

by Roger Parker, Jul 2, 2010

During the past 10 weeks, Roger’s posts have covered different approaches to marketing your book, including list-building incentives, one sheets, and obtaining pre-publication quotes. This week’s article ties the previous 10 installments together and closes Part 3, Planning, by discussing the importance of creating a book marketing plan as early as possible. more…


What can Cloud do for you?

by Marc Watley on June 28, 2010

By now, you’re probably well aware of the AT&T/iPad security debacle earlier this month, yes?

Good.

AT&T’s security breach was cause for serious concern for iPad users and was first reported at Gawker.

Since this story broke, there have been scores of articles prattling on about the “vulnerability of the Cloud”, “Cloud failures”, and so on. Sensational headlines pay bills, granted, and while it’s important that security issues receive attention, I’d much rather look at this from a more holistic angle:

Why is adopting Cloud solutions unavoidable for companies that want to remain competitive?

and also…

How can Cloud be introduced into IT environments in a secure and highly available fashion?

Let’s be Swarovski-crystal clear here: This incident was a good thing, friends! At fault in the iPad incident was a poorly-secured back-end on AT&T’s side of the fence.  As Gawker’s Ryan Tate accurately points out in his story,

“AT&T exposed a very large and valuable cache of email addresses, VIP and otherwise.”

That said, the pundits do have a point, which is that this incident has implications for security – in this particular instance, with the underlying AT&T Cloud infrastructure powering the iPad. Responsibility for security with Cloud services is critical and falls on all parties involved: the device manufacturers, application developers, and service/infrastructure providers, who must provide and maintain a secure environment as well as immediately resolve all issues when discovered. The same goes for the end users. Thankfully in this case, only one person outside of Goatse Security (who were evidently behind the “attack”) was provided the list of 114,000 email addresses, after the flaw had been leaked to AT&T, where it allegedly went unaddressed for almost a day. That person was Ryan Tate at Gawker, who broke the story. While white hat groups like these are sometimes criticized for their “alternative disclosure process”, they actually do more help than harm. The more holes like this are found, the more secure Cloud solutions will become for all of us in the long run. I say “hats off” (sorry, couldn’t help that one!) and keep up the good work.

So, should these security issues be taken seriously?

Hell yes!

Should you hold off moving any of your company’s IT infrastructure to the Cloud as a result of incidents such as “iPadgate”?

Absolutely not!

Consumers and small businesses alike have, en masse, placed their trust in Cloud-based solutions, so much so that services like GMail and GoToMeeting, for example, have become core to day-to-day life in both personal and business settings. At the enterprise level, CIOs and CTOs worldwide are rapidly climbing aboard the Cloud train as well, deploying various solutions within their organizations. These highly scalable, on-demand solutions can help businesses deploy additional infrastructure quickly, with reduced capital costs and refreshed technology – often helping to optimize operating costs as well. The rate of adoption in the business community is increasing rapidly: Gartner forecasts that, by 2012, some one in five businesses will own no IT assets, and that by 2013 businesses will be spending some $150 billion on Cloud services.

The question is, how can businesses take advantage of high-level Cloud solutions right now and still retain some peace of mind relative to availability and security? Fairly easily, and in just a few steps. Whether your organization is a startup or an established enterprise, Cloud solutions can play a key role in your IT organization. Risks related to security, control, and availability with Cloud services are not dissimilar from those in any IT environment, and can be mitigated through careful provider selection and sound planning. Here are a few steps that might be helpful in your adoption of these services:

First: Strategize. Plan. Then plan some more.

Devising a sound strategy and planning effectively is a sure first step to approaching and taking advantage of Cloud solutions for your business. The one thing you can’t afford to do is get this stuff wrong. This is especially true if your company is itself a service provider of one form or another, as most businesses today are. It would only take one mishap – say, the inability to quickly test and release a patch to your popular online game, or physicians unable to access their patients’ electronic medical records – to realize the importance of effective planning and smart Cloud provider selection. If you need a hand with strategy or with vetting the providers and options, don’t be afraid to ‘phone a friend’ – there are many IT consultants and brokerage firms out there fluent in Cloud who are objective and can be helpful from strategy through to implementation, often saving you both time and resources.

Planning for the deployment of Cloud services such as storage or rich content delivery is fairly straightforward, as the related services – Amazon’s S3 storage or EdgeCast’s Content Delivery Network (CDN) services, for example – are more or less plug-and-play and can be segregated from the rest of your infrastructure. Services that include compute functions, however (Cloud-based servers and related infrastructure), will take a bit more time and detail at the planning stage. Most businesses considering Cloud deployments of this type spend the necessary time to analyze existing and future needs around each of their main environments, which typically fall under:

  • Development
  • Testing
  • Quality Assurance (QA)
  • Production

Evaluating Cloud services by IT discipline is smart, since there are many available options for compute power (CPU), memory, storage, and networking – and the build requirements within each environment will likely vary. A good strategy should be built on a thorough understanding of the resources you currently have in place, which means spending the necessary time to evaluate the needs of each of your IT environments.

Understanding your existing financial exposure and budget for Cloud solutions during this stage is also important.  Some questions to consider:

  • Hardware: What is the current value of existing hardware per environment, and how close are you to needing hardware refresh? What are the capital costs associated with such a refresh?
  • Network: What are the current monthly costs for networking/IP bandwidth, per environment?
  • Labor: What are the current human resource costs associated with operating each environment (operations, monitoring, support, etc.)?
  • Roadmap: What are the hardware, infrastructure, performance, and human resource requirements, per environment, over the next  18-24 months needed to support growth demands?

From these and similar questions, you should be able to arrive at total monthly operating costs – both for your current environment and at scale.  (Consultants can be helpful here as well, many times providing that second set of objective “eyes”.)   With your Cloud approach now defined, you’ll likely see immediate capital  and operating cost reductions, the ability to quickly scale infrastructure commensurate with usage and growth, and the ability to reallocate human resources to support more business-critical IT functions. Still with me?  Alrighty then…on to finding and selecting the right providers.
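
As a back-of-the-envelope illustration of that roll-up, here is a minimal sketch that totals hypothetical monthly costs per environment to produce a baseline for comparing Cloud quotes. All category names and dollar figures are placeholders, not benchmarks.

  # Illustrative roll-up of monthly operating costs per environment.
  # Category names and dollar figures are hypothetical placeholders.
  environments = {
      "development": {"hardware_amortized": 2500, "network": 600,  "labor": 4000},
      "testing":     {"hardware_amortized": 1800, "network": 400,  "labor": 2500},
      "qa":          {"hardware_amortized": 1800, "network": 400,  "labor": 2500},
      "production":  {"hardware_amortized": 9000, "network": 2200, "labor": 12000},
  }

  grand_total = 0
  for env, costs in environments.items():
      total = sum(costs.values())
      grand_total += total
      print(f"{env:<12} ${total:>8,.0f} / month")

  print(f"{'all':<12} ${grand_total:>8,.0f} / month  (baseline to compare against Cloud quotes)")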

Next: Thou shalt do thy homework.

There might be as many shops hawking Cloud services today as there were candy and toy shops selling Beanie Babies many years back…craziness! As you step through your due diligence with potential providers, beware the Cloud pricing that sounds too good to be true…because, as the adage dictates, it probably is. When you dig below that wow-this-provider-is-30%-cheaper-than-the-others! pricing, don’t be too surprised at what you’ll likely find. The provider in question might indeed have some of the shiny bells and whistles you’re after, but perhaps only one datacenter…so if the ground opens up during an earthquake and swallows it whole, or a tornado carries it away Dorothy-and-Toto-style, well…don’t say I didn’t warn you. Other low-cost leaders tend to lure you in with great pricing, but have limited resources on hand (meaning they’ll need to go out and buy those Bugatti Veyron-grade servers you’re after and will charge huge setup fees accordingly). Also, ensure your provider-to-be is well-certified and maintains the regulatory compliance relevant to your organization (typically SAS-70 Type II at a minimum). So, let’s move those “life on the B/C/D-list” providers right into the rubbish bin, shall we? Right. So now we’re left with the few real players – names you’ll likely recognize: Amazon, Google, Joyent, Microsoft, Rackspace, Rightscale, Terremark. (These are but a few of the many A-list providers available.) Spend time with each prospective provider; ask tough questions, ensuring the selected provider has ridiculously good support, lots of IP bandwidth options, and security features that exceed your own requirements. Take a good, hard look at the providers’ pricing, negotiating wherever possible and comparing it to your existing cost structure. When it comes time for final selection, take a well-organized approach to that as well. My colleague Matt Carmen recently broke down in detail the process of selecting the right outsourced IT provider, which I would recommend checking out.

Finally: A phased approach.

Now that you’ve got a good head for the Cloud services and options that will best suit your business, it’s time to get tactical.   A best practice when approaching any Cloud solution is to pilot, evaluate, then implement in phases.  Select which of your IT environments will work best for a pilot.  The pilot should be brief, but long enough to provide a thorough evaluation…30-45 days is usually enough time.  We’re still on the leading edge of what Cloud can provide, and solutions are constantly evolving.  Providing feedback during a pilot period is key – try to break the solution and be as granular as possible with your providers as to how their service can be improved.  The good providers will work with you and incorporate your feedback into their services…and everyone wins.

After a successful pilot, move into a full launch window with the same sure-footed project management aplomb you would bring to releasing a new product to your customers. You’ll find here again that the provider(s) you’ve selected will work hand-in-hand with your team, ensuring a smooth transition and quickly addressing any issues.

Congratulations!

Now that wasn’t so bad, was it? Your organization is now among a rapidly growing number of businesses who have embraced Cloud solutions and whose IT organizations have realized increased availability, efficiency, and productivity as a result. Once you’ve got one environment successfully ported and humming right along, you’ll likely see the benefits of deploying the rest of your environments in similar fashion.

Stir. Repeat. Enjoy.


Written by Marc Watley, Co-Founder & CEO of Datacenter Trust and CMO at Reppify. Datacenter Trust is an IT consulting and services delivery firm, helping growing businesses make smart decisions from sound financial analysis and business intelligence. Reppify is a leading-edge technology company pioneering the use of social media data to drive better business decisions. Follow on Twitter: Datacenter Trust @datacentertrust and Reppify @reppify

Last November at the Business Technology Summit in Bangalore, India, I shared a few thoughts on how leaders of IT organizations can refresh technology, continue to deliver sound IT and meet customer needs, and, by taking advantage of on-demand infrastructure services, remain in the good graces of the company CFO. I’ll attempt a bit of a recap here, adding recent data points and additional thoughts along the way. Well then, shall we?

The US Department of Commerce reports (most recent Information & Communication Technology Survey, which was in 2007) that capital expenditures (CAPEX) make up, on average, 63% of companies’ IT spending.

CIOs today are faced with decreased capital and operating IT budgets, as well as staffing reductions or hiring freezes. All the while, the expectation to scale applications and systems – and maintain performance and SLAs – remains.

In October 2009, Barclays Capital reported in its CIO survey that IT spending expectations would increase 0-5% in 2010 versus 2009, with priority placed on datacenter expansions, virtualization, and Cloud initiatives. In the near term, CIOs forecasted that their IT spend would be focused on storage, networking, and servers. Compare the Barclays results with a more recent CIO survey – this one conducted by Gartner this past January, covering some 1,500 CIOs – and the forecast is similarly bleak: IT spending will largely remain flat, and if it does increase, it’ll do so by just over one percent. (Which clearly isn’t great, but it also isn’t the elbow-drop of more than eight percent in IT budgets seen in 2009.) Oh, and the CIOs in this latest survey also posited that their IT organizations have about the same level of resources now as they had back when Kelly Clarkson’s “Since U Been Gone” ruled the airwaves; yep, welcome back to 2005. The lesson? Wring everything you can from what you’ve got. Having fun yet?

Right, so how does one squeeze blood from this proverbial IT turnip? The first step is to dig into your IT spending a bit – gain as much understanding and insight as possible into what you have in your arsenal today, and the related costs. This sounds like a simple task, but you’d be surprised how many IT directors and CIOs don’t really know exactly what they are using and how much they’re paying. (Sidebar: If you haven’t yet read my partner Brian Superczynski’s article from last week, I’d encourage it; he offers good thinking and a few tools around this business insight.)

The CIOs surveyed by Gartner report that their top three business priorities for 2010 are:

  • Business process improvement
  • Reducing enterprise costs
  • Increasing the use of informatics/analytics

In addition, their top three technology priorities for this year:

  • Virtualization
  • Cloud computing
  • Web 2.0 (read: Social Networks)

Taking a cue from this, the next step to a CAPEX-free world is to first address how the data points above stack up against your own business and technology priorities, then explore ways in which you can reduce capital costs by taking advantage of outsourced infrastructure and related services like virtualization. Hang on now…don’t be afraid of that Cloud…embrace it. I’m not suggesting you entrust your most valuable corporate crown jewels to a multi-tenant (shared) Cloud service (Amazon Web Services/Elastic Compute Cloud and Rackspace Cloud largely fall into this category). These services do have their place, and you may find they will play an important role for your organization at some point. Rather, I’m referring to enterprise-class, private datacenter services, where you retain complete control over access to your infrastructure above the operating system; it’s just that someone else manages everything else – hardware/upgrades/monitoring, inter-networking, bandwidth, power, etc. are all taken care of. Think of this as an extension of your own datacenter that simply lives elsewhere…you still have the master key, and you pay for everything “as-a-service”, largely free of capital costs.

These as-a-service solutions take on many forms, each designed to address specific IT needs: Compute Clouds for development or testing, storage arrays, backup datacenters/Disaster Recovery (DR) services, email, or simply a rack of dedicated servers with your name on it.  A few providers to consider in this area: private Cloud services like Terremark’s Enterprise Cloud, SoftLayer’s CloudLayer, or datacenter replication/DR from nScaled, CA’s ArcServe, or dedicated, managed servers from Latisys, The Planet, Rackspace, and others.  The point is to spend the time. Sit in on a few webinars. Perform your due diligence, seek help if you need it, and I think you’ll find that utilizing tools like infrastructure-as-a-service for some applications or areas of your infrastructure makes sound business sense.  Also, if you have a technology refresh initiative as core to achieving your goals in 2010, these services typically deploy on the latest hardware…double bonus!

By the way, much of the data from the Gartner survey can be found on the author’s blog post here.

The Japanese pioneered a concept you are probably familiar with called Kaizen: calculated steps, taken daily, designed to achieve a larger goal. Applying this methodology is key to a sound migration path away from capital IT expenditures and toward infrastructure-as-a-service. (I know, I could have just said “baby steps”, but this Kaizen thing just sounds better, no?) Here’s how you apply it:

  1. Start small
  2. Find a service to your liking, then deploy – perhaps just one or two applications or projects
  3. Monitor performance and costs for a month or three, using the service as a litmus test to evaluate and design the migration for the rest (or most) of your infrastructure (a minimal sketch of such an evaluation follows this list).
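
Here is that sketch: a simple pass/fail check for the pilot, using entirely hypothetical cost and uptime figures and thresholds. Your own litmus test will of course use the metrics that matter to your business.

  # Illustrative litmus test for a pilot; all figures and thresholds are hypothetical.
  baseline = {"monthly_cost": 6800, "uptime_pct": 99.5}   # current on-premise environment
  pilot    = {"monthly_cost": 4900, "uptime_pct": 99.9}   # as-a-service pilot after ~90 days

  def passes_muster(pilot, baseline, max_cost_ratio=1.0, min_uptime_pct=99.5):
      cheaper_or_equal = pilot["monthly_cost"] <= baseline["monthly_cost"] * max_cost_ratio
      reliable_enough = pilot["uptime_pct"] >= min_uptime_pct
      return cheaper_or_equal and reliable_enough

  if passes_muster(pilot, baseline):
      print("Pilot passes muster -- plan the migration of the next environment.")
  else:
      print("Hold off -- revisit the provider, configuration, or requirements.")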

If the service passes muster, simply add more fuel to this as-a-service fire, and before long you’ll find yourself on the receiving end of praise from the CFO, your own team, and your customers as well. Or maybe you’ll find yourself singing happily along to “Since U Been Gone.” If the latter, you just might want to keep that one to yourself.

Written by Marc Watley, Co-Founder & CEO of Datacenter Trust and CMO at Reppify. Datacenter Trust is an IT consulting and services delivery firm, helping growing businesses make smart decisions from sound financial analysis and business intelligence. Reppify is a leading-edge technology company pioneering the use of social media data to drive better business decisions. Follow on Twitter: Datacenter Trust @datacentertrust and Reppify @reppify