
Integration: Keys to the successful merger of companies

by Matthew Carmen on August 2, 2010

There is really only one reason for the merger of corporate entities: the creation of more shareholder value (whoever the shareholders are) from the combined company than the two had separately.  The additional profit comes from doing business more efficiently and effectively, which takes on many forms depending on where in the organization one resides.

When a corporate combination of any kind (merger, purchase of a business unit, hostile takeover, etc.) is announced, it is usually the first time that the vast majority of employees, on both sides of the transaction, have heard it is happening.  The announcement sets many things in motion: executive management, the “C level” and their support staff, will be working on the strategy of the combined entity; the purchased entity's employees will be worrying about their futures; and so on.  The real work, however, happens at the operational levels of both entities.

In most cases, the purchasing company is restricted from speaking with the company being acquired until the purchase is finalized.  The high-level executives on each side may have discussed the strategic issues of running the combined entity, but the nuts and bolts of daily operations typically have not been fleshed out.  Once final approval for “the deal” has happened, the clock starts on the time it will take to integrate the two organizations into one cohesive unit.  The faster this occurs, the sooner the goal of the merger, greater shareholder value, can be achieved.  According to CNBC, nine out of ten mergers do not fully meet the goals of the acquisition, largely due to a failure to integrate the companies properly.

This failure to integrate can be seen in the current Gulf of Mexico oil spill debacle.  British Petroleum (BP) never integrated its United States purchases, Atlantic Richfield Company (ARCO) and American Oil Company (Amoco).  These companies do not even share branding with their parent company, much less financial and IT systems.  A fully integrated entity might have been more proactive in its maintenance of Gulf oil platforms and wells, possibly stopping, or at least lessening, the Deepwater Horizon catastrophe.

During the period between when the corporate combination is announced and when it is approved by shareholders, government entities, etc., a lot takes place behind the scenes, often without discussion between the entities.  The main task in successful integrations is proper preparation within operational areas: putting together the processes and procedures that will be needed to reach the corporate goals.  Companies in merger mode, meaning they grow through regular acquisitions, can reuse many of these processes and procedures, and usually do.  For a company that does not acquire often, creating them can seem like a daunting task.  There are many consulting companies, from the likes of Accenture and Deloitte down to small specialty firms (such as my own, Datacenter Trust), that concentrate on the portions of the business where they have specific expertise and can help complete, or even manage, the integration process.

Once the acquisition has been approved, the integration process begins.  I participated in multiple integrations while employed at one of the largest entertainment companies in the world, where I was the lead financial representative to the Program Management Office (PMO).  One operational process that needed to be examined during each acquisition was entertainer royalties: the way actors, musicians, and others get paid on their current and past work.  It was the job of that department to compare the royalty application currently in use with the acquired entity's and choose the better one; if neither was best of breed, outside solutions might be considered.  “Better” is a very subjective idea, and in this case the judgment was left to the experts in each department making the choice.  Once a choice was made, a plan was put together that included cost analysis, equipment needs, software licensing needs, and so on.  Once all of the application consolidation plans, hundreds or even thousands of them, were finished, they were turned over to the PMO.

The job of the PMO was to look for efficiencies across all the plans.  These efficiencies were tied to greater purchasing power: economies of scale.  One company with 100,000 users will get better pricing than two companies with 50,000 users each, because the larger account carries more negotiating leverage and vendors will discount to win and keep it.  We were able to negotiate with vendors based on the size of the new entity, gaining pricing power in the areas of hardware (PCs, servers, etc.), software (applications and packages), and services (consulting, facilities, electricity, etc.).  In the finance department, we were able to leverage the major vendors in the space, for our budget & planning system and accounting system, to get pricing that was over 35% better than it had been in the past.  We also got all of our users on the same systems, making reporting easier than it had been before.  The integration, in total, saved the combined entity over $400M in annual spend, with the largest savings coming from software maintenance and facilities.
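As a back-of-the-envelope illustration of this economies-of-scale argument, a tiered-discount model shows how one 100,000-seat contract can beat two separate 50,000-seat contracts.  The tiers and per-seat prices below are invented for illustration; they are not real vendor pricing.

```python
# Hypothetical volume-discount tiers: (minimum seats, price per seat per year).
# All numbers are invented for illustration, not actual vendor pricing.
TIERS = [(100_000, 80.0), (50_000, 95.0), (10_000, 110.0), (0, 125.0)]

def annual_cost(seats: int) -> float:
    """Return total annual license cost for a given seat count."""
    for minimum, price_per_seat in TIERS:
        if seats >= minimum:
            return seats * price_per_seat
    return 0.0

separate = annual_cost(50_000) + annual_cost(50_000)  # two pre-merger contracts
combined = annual_cost(100_000)                       # one post-merger contract
savings = separate - combined
print(f"Separate: ${separate:,.0f}  Combined: ${combined:,.0f}  Savings: ${savings:,.0f}")
```

Under these made-up tiers the combined entity saves about 16% on this one contract alone; spread across hundreds of hardware, software, and services agreements, that is how the savings in a large integration add up.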

Datacenter consolidation is an area that must be examined closely for a successful integration of IT operations.  Returning to my example above, the combined entertainment giant decided that six global datacenters was the appropriate number; it could have been lower, but global political issues intervened.  Formerly, the two companies had a total of 18 datacenters, and in two geographic instances the companies had datacenters right down the street from one another.  By going to six global datacenters, the new company captured a great portion of the previously mentioned $400M.  Disaster recovery became an in-house activity, electricity usage was cut by over 33%, and the location of applications became a moot point where it had been a political battlefield before.
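The consolidation arithmetic is straightforward to sketch.  The per-site costs below are invented placeholders (real figures vary enormously by site size and location); only the site counts and the >33% electricity reduction come from the example above.

```python
# Invented per-site annual costs (in $M) to illustrate the consolidation math.
SITES_BEFORE, SITES_AFTER = 18, 6
FACILITY_PER_SITE = 10.0    # hypothetical lease + staff + maintenance per site
POWER_BEFORE_TOTAL = 60.0   # hypothetical total electricity across 18 sites

# Closing 12 sites eliminates their facility costs outright.
facility_savings = (SITES_BEFORE - SITES_AFTER) * FACILITY_PER_SITE
# The article cites an electricity reduction of over 33%.
power_savings = POWER_BEFORE_TOTAL * 0.33
total_savings = facility_savings + power_savings
print(f"Facility: ${facility_savings:.0f}M  Power: ${power_savings:.1f}M  Total: ${total_savings:.1f}M")
```

Even with placeholder numbers, the structure is clear: facility elimination dominates, which is consistent with facilities being one of the two largest savings areas in the real integration.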

Conclusion

Integration of operational areas between merging entities is crucial to meeting the overall goal of growing shareholder value.  When done properly, the measurable goals of an acquisition can be met; when botched, the acquisition looks like a bad idea two, five, and ten years out.  The last thing any management team wants is to fail in an acquisition: loss of employment, legal proceedings, and possibly acquisition by another entity are sure to follow, and no one wants to go through that.  Everyone associated with a company, executives, employees, and shareholders alike, benefits from a well-planned and well-managed integration.

Matthew Carmen launched Datacenter Trust along with Marc Watley in February 2010 and serves as Co-Founder & COO as well as Managing Partner of its Financial Intelligence practice. Datacenter Trust is a recently launched consulting and services delivery firm, providing outsourced server hosting, bandwidth, cloud services, and IT financial intelligence and analysis services to growing businesses. Follow Datacenter Trust on Twitter @datacentertrust

Last November at the Business Technology Summit in Bangalore, India I shared a few thoughts on how leaders of IT organizations can refresh technology, continue to deliver sound IT and meet customer needs, and, by taking advantage of on-demand infrastructure services, remain in the good graces of the company CFO.  I’ll attempt to do a bit of a recap here, adding recent data points and additional thoughts along the way.  Well then, shall we?

The US Department of Commerce reports (in its most recent Information & Communication Technology Survey, conducted in 2007) that capital expenditures (CAPEX) make up, on average, 63% of companies’ IT spending.

CIOs today are faced with decreased capital and operating IT budgets, as well as staffing reductions or hiring freezes.  All the while, the expectation to scale applications and systems – and maintain performance and SLAs – remains.

In October 2009, Barclays Capital reported in its CIO survey that IT spending was expected to increase 0-5% in 2010 versus 2009, with priority placed on datacenter expansion, virtualization, and Cloud initiatives. In the near term, CIOs forecast that their IT spend would be focused on storage, networking, and servers.  A more recent CIO survey – this one conducted by Gartner this past January, covering some 1,500 CIOs – yielded a similarly bleak forecast: IT spending will largely remain flat, and if it does increase, it’ll do so by just over one percent.  (Which clearly isn’t great, but it also isn’t the elbow-drop of more than eight percent in IT budgets seen in 2009.)  Oh, and the CIOs in this latest survey also reported that their IT organizations have about the same level of resources now as they had back when Kelly Clarkson’s “Since U Been Gone” ruled the airwaves; yep, welcome back to 2005. The lesson? Wring everything you can from what you’ve got.  Having fun yet?

Right, so how does one squeeze blood from this proverbial IT turnip? The first step is to dig into your IT spending a bit – gain as much understanding and insight as possible into what you have in your arsenal today, and the related costs. This sounds like a simple task, but you’d be surprised how many IT directors and CIOs don’t really know exactly what they are using and how much they’re paying. (Sidebar: If you haven’t yet read my partner Brian Superczynski’s article from last week, I’d encourage it; he offers good thinking and a few tools around this business insight.)
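The spend-insight step can start as something as simple as rolling contract and invoice line items up by category.  A minimal sketch, with invented line items and field names, might look like this:

```python
from collections import defaultdict

# Hypothetical line items pulled from contracts and invoices;
# the vendors, categories, and costs here are invented for illustration.
line_items = [
    {"category": "software", "vendor": "VendorA", "annual_cost": 240_000},
    {"category": "hardware", "vendor": "VendorB", "annual_cost": 180_000},
    {"category": "software", "vendor": "VendorC", "annual_cost": 95_000},
    {"category": "services", "vendor": "VendorD", "annual_cost": 130_000},
]

def spend_by_category(items):
    """Roll line items up into per-category annual totals."""
    totals = defaultdict(float)
    for item in items:
        totals[item["category"]] += item["annual_cost"]
    return dict(totals)

for category, total in sorted(spend_by_category(line_items).items()):
    print(f"{category:>9}: ${total:,.0f}")
```

A spreadsheet does the same job, of course; the point is simply to have every recurring cost in one place, categorized, before you start comparing against outsourced alternatives.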

The CIOs surveyed by Gartner report that their top three business priorities for 2010 are:

  • Business process improvement
  • Reducing enterprise costs
  • Increasing the use of informatics/analytics

In addition, their top three technology priorities for this year are:

  • Virtualization
  • Cloud computing
  • Web 2.0 (read: Social Networks)

Taking a cue from this, the next step toward a CAPEX-free world is to address how the data points above stack up against your own business and technology priorities, then explore ways to reduce capital costs by taking advantage of outsourced infrastructure and related services like virtualization.  Hang on now…don’t be afraid of that Cloud…embrace it. I’m not suggesting you entrust your most valuable corporate crown jewels to a multi-tenant (shared) Cloud service (Amazon Web Services/Elastic Compute Cloud and Rackspace Cloud largely fall into this category).  These services do have their place and you may find they will play an important role for your organization at some point. However, I’m referring to enterprise-class, private datacenter services, where you retain complete control over access to your infrastructure above the operating system; it’s just that someone else manages everything else – hardware/upgrades/monitoring, inter-networking, bandwidth, power, etc. are all taken care of.  Think of this as an extension of your own datacenter that simply lives elsewhere…you still have the master key and you pay for everything “as-a-service”, largely free of capital costs.

These as-a-service solutions take on many forms, each designed to address specific IT needs: Compute Clouds for development or testing, storage arrays, backup datacenters/Disaster Recovery (DR) services, email, or simply a rack of dedicated servers with your name on it.  A few providers to consider in this area: private Cloud services like Terremark’s Enterprise Cloud or SoftLayer’s CloudLayer; datacenter replication/DR from nScaled or CA’s ARCserve; or dedicated, managed servers from Latisys, The Planet, Rackspace, and others.  The point is to spend the time: sit in on a few webinars, perform your due diligence, and seek help if you need it, and I think you’ll find that utilizing tools like infrastructure-as-a-service for some applications or areas of your infrastructure makes sound business sense.  Also, if a technology refresh initiative is core to achieving your goals in 2010, these services typically deploy on the latest hardware…double bonus!

By the way, much of the data from the Gartner survey can be found on the author’s blog post here.

The Japanese pioneered a concept you are probably familiar with called Kaizen: calculated steps, taken daily, designed to achieve a larger goal. Applying this methodology is key to a sound migration path away from capital IT expenditures and toward infrastructure-as-a-service.  (I know, I could have just said “baby steps” but this Kaizen thing just sounds better, no?) Here’s how you apply it:

  1. Start small
  2. Find a service to your liking, then deploy – perhaps just one or two applications or projects
  3. Monitor performance and costs for a month or three, using the service as a litmus test to evaluate and design the migration for the rest (or most) of your infrastructure.

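Step 3, the litmus test, works best when the pass/fail criteria are written down before the pilot starts.  Here is a minimal sketch of that evaluation; the metrics, thresholds, and in-house cost figure are all invented for illustration:

```python
# Hypothetical monthly measurements from a small infrastructure-as-a-service
# pilot; all values and thresholds are invented for illustration.
pilot_months = [
    {"cost": 4_100, "uptime_pct": 99.95, "p95_latency_ms": 180},
    {"cost": 3_900, "uptime_pct": 99.97, "p95_latency_ms": 175},
    {"cost": 4_000, "uptime_pct": 99.99, "p95_latency_ms": 170},
]
IN_HOUSE_MONTHLY_COST = 5_500  # assumed fully loaded in-house cost, same workload

def passes_litmus(months, max_cost, min_uptime=99.9, max_latency_ms=250):
    """True only if every month beat the in-house cost AND met the SLA floors."""
    return all(
        m["cost"] < max_cost
        and m["uptime_pct"] >= min_uptime
        and m["p95_latency_ms"] <= max_latency_ms
        for m in months
    )

if passes_litmus(pilot_months, IN_HOUSE_MONTHLY_COST):
    print("Pilot passed; plan the wider migration.")
```

Requiring every month to pass, rather than the average, is deliberately conservative: one bad month in a three-month pilot is exactly the kind of signal you want surfaced before committing the rest of your infrastructure.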
If the results pass muster, simply add more fuel to this as-a-service fire, and before long you’ll find yourself on the receiving end of praise from the CFO, your own team, and your customers as well. Or maybe you’ll find yourself singing happily along to “Since U Been Gone.” If the latter, you just might want to keep that one to yourself.

Written by Marc Watley, Co-Founder & CEO of Datacenter Trust and CMO at Reppify. Datacenter Trust is an IT consulting and services delivery firm, helping growing businesses make smart decisions from sound financial analysis and business intelligence. Reppify is a leading-edge technology company pioneering the use of social media data to drive better business decisions. Follow on Twitter: Datacenter Trust @datacentertrust and Reppify @reppify