Posts Tagged ‘cloud’

The state of the global economy notwithstanding, companies everywhere seem to be experiencing some of the best growth seen in recent years. As the saying goes, however, mo’ money, mo’ problems. This couldn’t be truer when it comes to finding the best possible people to join your organization during the hockey-stick rise to prosperity. In the past, this meant running an ad on an online job board, chatting with a few interesting candidates by phone, conducting a handful of interviews, and you were generally in good shape. Today, it means online job postings on multiple sites (often in multiple geographies) and potentially hundreds of resumes. Oh, and thanks to the recent belt-tightening, you’re now likely wearing more of those proverbial ‘hats’ in your organization – which means a lot less time available for sifting through resumes/CVs. Here are a few tools that can help you navigate that shiny hiring canoe of yours through glassier waters:

Write Great Job Postings

Like everyone else, I check out job postings now and again to see who’s hiring – jobs tend to be a pretty good barometer of what’s happening in the marketplace, and the ever-fluid tech sector in particular. I’m sure you likewise receive emails or calls from headhunters with the latest and greatest gig they think you’d be perfect for. I’ve got to say that in general, these guys are pretty good at what they do, and on the whole their descriptions of whatever job they’re plugging are fairly detailed and written well enough to capture my attention – at least for a moment or two anyway.

Now, contrast this with the average job posting online (the kind you might find on sites like Indeed, CrunchBoard, or LinkedIn), and I’m continually amazed at the lack of detail – and, quite frankly, good writing – in the average posting. Little about the company and whether it’d be a fun, inspiring place to work or a draconian bore-fest. Sparse details of the actual job duties. Run-of-the-mill skills and experience lists. I mean, what caliber of candidates do companies expect to attract with such a mess of a posting? My point is this: when preparing your job posting, take your time. Put yourself in the shoes of the ideal person you want for the open position. What schools should they have attended? Where should they have worked prior? If an engineer, should they actively contribute to coding forums or blog about their accomplishments? Should they have patents? If it’s a business development or management position, whom should they know well or be close to? Are you looking for a thought leader or just a fantastic cold-caller? What competencies might indicate a top performer?

Also important is to be personal: try to write the posting in a conversational style, and be sure to include how great it is to be part of your company. People like working for fun companies. Spend an extra few minutes thinking about your posting and you’ll be surprised at the quality of the responses you receive as a result.

Use An Applicant Tracking System

Another surprising thing I find is the number of companies out there who apparently have only email as a means of receiving responses to job postings. Really? I get that loads of startups fit into this category, but what happens if you’re the next Zynga: your hot new product is taking off like a rocket and you’ve secured enough funding to scale. Now you need to hire – very quickly – perhaps 20 or so positions. Your postings are scattered about online and before you know it, you’ve got more than 500 resumes and cover letters to weed through, amidst your other 488 emails. Exactly: headache city.

The good news here is that there are several alternatives to email alone, whatever your organization’s size or budget. Applicant Tracking Systems/Talent Management Systems are readily available from The Resumator, Newton, Force.com apps (if you’re using Salesforce), Taleo, Peopleclick Authoria, Kenexa…and as you can imagine, the list goes on. Most of these services are available – yes, I’ll mention the dreaded word again just this once – in the cloud, so there’s no software or server to install. So spend a bit of time and investigate your options, but for goodness’ sake, move away from email-only. I repeat: step away from the email. The great thing about just about any of these solutions is that you can keep a database of job posting templates, publish/distribute to multiple job boards, screen incoming candidates, retain resumes for future consideration, and move candidates down the funnel to interview, offer letter, background check, and onboarding – all from a single environment. Look into it. You’ll thank me later.
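
For a concrete picture of that “single environment”, here’s a minimal sketch in Python – not modeled on any particular vendor’s product, with stage names and fields that are purely my own assumptions – of the kind of candidate funnel an ATS tracks for you:

  # A minimal, illustrative ATS-style candidate funnel; stage names and fields
  # are assumptions for the sake of the example, not any vendor's schema.
  from dataclasses import dataclass, field

  STAGES = ["applied", "screened", "interview", "offer", "background_check", "onboarding"]

  @dataclass
  class Candidate:
      name: str
      resume: str                        # resume text, or a link to the stored file
      stage: str = "applied"
      notes: list = field(default_factory=list)

      def advance(self, note: str = "") -> None:
          """Move the candidate one step further down the funnel."""
          i = STAGES.index(self.stage)
          if i < len(STAGES) - 1:
              self.stage = STAGES[i + 1]
          if note:
              self.notes.append(note)

  # One list, in one place - instead of 500 resumes scattered across your inbox.
  pipeline = [Candidate("Jane Doe", "jane_doe_resume.pdf")]
  pipeline[0].advance("Strong phone screen; bring in for an on-site interview.")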

Screening: Go Beyond The Resume

According to a 2010 survey of businesses across the US, UK, and EU by Cross-Tab, a market research provider, 85% of hiring managers feel that a positive online reputation influences their hiring decisions, and more than 70% of companies have a policy to screen all job candidates using – yep, you guessed it – social media. LinkedIn, Facebook, Twitter, and other sites offer a treasure trove of data about who the candidate is beyond his/her credentials and pedigree. If you aren’t screening candidates this way, you should be. That said, there is a bit of risk involved in screening this way, namely in the form of what the EEOC deems ‘protected class’ data (age, race, sexual orientation, political affiliation, etc.), which – if you didn’t know – is illegal to use when making a hiring decision in the US. (The UK and EU have similar privacy laws, by the by.) Three or four years ago, this wouldn’t have been much of an issue, as the average resume/CV hasn’t changed much over the years and typically doesn’t contain this kind of information. Visit a candidate’s LinkedIn or Facebook page, however, and you’ll invariably come across more than you should likely be seeing. Serious stuff, people. I’m not an attorney by any means, but I do know that lawsuits have been filed (and won) over this sort of thing by candidates who weren’t hired. The bottom line here is to move wisely; and most of all, don’t be creepy – no ‘friending’ candidates on Facebook just to get a deeper view into their personal lives, etcetera.

Here too, though, our good friend technology comes to the rescue. A growing number of social media-driven resources are available to help you get beyond the resume: LinkedIn offers some pretty good search tools; Klout analyzes data to determine an individual’s level of influence (and its scores apparently come up during candidate interviews now and again); and Reppify provides a web-based analysis of a candidate’s online presence across their social networks, measured against your hiring criteria. (Disclosure: I currently hold a senior management position at Reppify.) Services such as these can help you narrow that candidate funnel, identify the best candidates for your team, and mitigate discrimination liability risks.
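
And if you end up wiring any of this screening into your own tooling, the ‘protected class’ caution above is straightforward to encode. Here’s a hypothetical Python sketch – the field names are my assumptions, not Reppify’s (or anyone else’s) actual schema – of stripping that data out before a profile ever reaches whoever is scoring candidates:

  # Hypothetical example: remove EEOC protected-class fields from collected
  # profile data before it is used in any screening or scoring step.
  PROTECTED_FIELDS = {"age", "date_of_birth", "race", "religion", "gender",
                      "sexual_orientation", "political_affiliation", "disability"}

  def sanitized_profile(raw_profile: dict) -> dict:
      """Return a copy of the profile with protected-class data removed."""
      return {k: v for k, v in raw_profile.items() if k not in PROTECTED_FIELDS}

  raw = {"name": "J. Smith", "skills": ["python", "sales"], "age": 42,
         "political_affiliation": "independent"}
  print(sanitized_profile(raw))   # {'name': 'J. Smith', 'skills': ['python', 'sales']}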

Whatever your business, and however fast you may be growing, employing these three key strategies today should significantly help you to identify the candidates who best fit your organization, as well as save you loads of time and money (and probably a few grey hairs as well). Happy hiring!

Photo Credit: Woodleywonderworks

Cloud: A truly nebulous term

by Marc Watley on October 29, 2010

Yes, yes I know…ol’ Marc has subjected you to yet another bad pun. You’ve got to admit though that it fits the bill here. The term “cloud” is, in my book, one of the most over-used technology terms in recent memory, and it’s high time for change.

(Ridiculous sidebar: Anyone else watch Science Bob conjure that “cloud” on Jimmy Kimmel Live the other night? Hilarious!)

The thing is, almost all of what we use on the web today exists ‘in the cloud’ at some level or another. Think about it – your mail isn’t fed from a server sitting in your basement, is it? No, it’s typically served by a cluster of mail servers in the “cloud” – perhaps located within your company’s datacenter, or provided by Yahoo!, Hotmail, Gmail, or the like. What about shopping? Our profiles, containing our shipping addresses, purchase preferences, and credit card numbers, likewise exist in the “cloud”. The social utilities we’ve come to depend on for business and fun – LinkedIn, Facebook, Salesforce, Twitter, Foursquare, etcetera – are also services used almost entirely in the “cloud”. The technology that powers the various “cloud” solutions continues to advance rapidly. This, along with increased availability and reduced costs worldwide for high-speed Internet access, has allowed the service offerings to evolve as well.

The fact that both individuals and growing businesses can tailor solutions from the breadth of available “cloud” services is fantastic. The issue at hand is the term “cloud” itself: an umbrella term most often used to describe ‘hosted’ or remote services – services which have expanded rapidly over the last two years. The term “cloud” has simply reached a point of causing confusion. For example, though commonly referred to as “cloud computing”, it’s not always actually computing, is it? We can now select from solutions that allow us to compute, store/archive/recover data, manage content, send/receive mail, place calls, conference, and network with colleagues, friends, and prospects – all with the moniker “cloud” attached. “Cloud” is descriptive in this sense, sure, but only mildly so. My $0.02 is that the term “on-demand infrastructure” – or simply “on-demand” – is more reflective of the available solutions and less confusing than “cloud”. Adopting the “on-demand” term virtually eliminates the need for wonder, fretting, or quarreling over the best flavor of the solution – public/multi-tenant (Amazon EC2), private (your own VMware or Terremark Enterprise Cloud instance), platform (Salesforce), or some hybrid form. Whatever the end solution, simply think of it as on-demand infrastructure; the level of access, control, and security needed upon deployment is completely up to – and configurable by – the user.

I’ve noticed in the past several months that several technology companies including Oracle, F5, Servosity, and Rackspace have begun to use “on demand” (seemingly in place of “cloud”) to describe their services, features, and benefits. I think it’s a smart move, but who knows where this will end up; the term “on demand” might work best for everyone. Might not.

Anyhow, Cloud: you’ve served us pretty well…thanks. Now it’s time to bid adieu and bon voyage.  Oh, and when you reach wherever it is that you Internet buzzwords fade away to, please do say hello to our old friend “Web 2.0”, will you?

4 steps to effective Disaster Recovery planning

by Marc Watley on August 23, 2010

Question: A wildfire 10 miles from your company headquarters is raging out of control. The fire captain has just ordered everyone in your building to evacuate. All staff have safely left the premises, and now you are likewise heading out, taking one final look at your datacenter – still humming away, unsuspecting. You have offsite data storage but no offsite server infrastructure, applications, etc.

What do you do?

I’m paraphrasing from a not-so-great movie here – Speed (Keanu may have been good in The Matrix, but the predictable tête-à-tête between his and Dennis Hopper’s characters in Speed still makes me chuckle) – but IT executives today are, in fact, increasingly faced with the threat of disasters, whether natural (such as a wildfire) or man-made (e.g. some ding-dong crashing a vehicle into your datacenter). I may be taking a bit of creative license here, but this could not be a more serious issue. (Recall those horrible wildfires in the San Diego, California area a few years back? The example above was culled from situations experienced during that period.)

As organizations – and their customers – increasingly rely on databases, servers, and IP-connected applications and data sources, the responsibility of maintaining continuity of the business infrastructure and limiting costly downtime in the event of a disaster is paramount.

Though many organizations had active disaster recovery (DR) projects on the books a few years ago, the global financial crunch of the last 20 or so months has wreaked havoc on IT budgets everywhere; only now are many of these DR projects once again taking priority.

If you’re thinking that you can ‘wait it out’ and disaster won’t strike on your watch, think again. Apparently, some 93 percent of organizations have had to execute on their disaster recovery plans. Yep. This according to an annual DR survey from Symantec last year.  A few more points from this survey:

  • In general it takes companies [with active DR plans] on average three hours to achieve skeleton operations after an outage, and four hours to be up and running
  • The average annual budget for DR initiatives is $50MM (including backup, recovery, clustering, archiving, spare servers, replication, tape, services, DR plan development and offsite costs)
  • Virtualization has caused 64 percent of organizations worldwide to reevaluate their DR plans

Whether your organization is a small, recently funded startup or well entrenched in the Fortune 100, designing, implementing, and testing a DR plan is an endeavor that takes dedication, careful planning, and time (the entire process can take weeks or even months). There are many excellent resources available that provide knowledge and detail on the individual steps of a DR planning initiative. (Cisco’s DR Best Practices site or Disaster Recovery are great places to begin, by the way.) What follows is a high-level, best-practices overview of the planning process:

Executive Sponsorship

This first step of a successful DR plan involves two key components: One is to secure plan sponsorship and engagement from senior company leadership – CEO, COO, CIO, etc. The other is to establish a planning team that is representative of all functional units of the organization – sales, operations, finance, IT, etc.  This step is the catalyst to a smooth planning initiative, and requires focus and patience.  (The ability to herd cats wouldn’t hurt, either.) It may also be helpful to reduce the impact on internal resources by leveraging outside help from a consulting firm well-versed in DR planning.

Information Gathering

This portion of the planning process – information gathering, due diligence and assessment – is the most involved and most time-consuming, and a true test of teamwork across the organization.

The first step in this part of a DR planning initiative is performing a Business Impact Analysis (BIA), which helps assess the overall risk to normal business operations (and revenue flow) should disaster strike right this second. The BIA typically consists of identifying and ranking all critical business systems, analyzing the impact of an interruption to those systems, and, most importantly, establishing the maximum length of time critical systems can remain unavailable without causing irreparable harm to the business. This length of time is also known as the Maximum Tolerable Downtime (MTD). Working backwards from the MTD allows an acceptable Recovery Point Objective (RPO) and Recovery Time Objective (RTO) to be set.
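
To put some (entirely made-up) numbers behind those acronyms, here’s a back-of-the-envelope sketch of how the RTO and RPO have to fit inside the MTD; your BIA supplies the real figures:

  # Illustrative numbers only - pulled from thin air, not from any real BIA.
  MTD_HOURS = 8.0    # business can tolerate at most 8 hours of downtime
  RTO_HOURS = 6.0    # target time to have the system restored and running
  RPO_HOURS = 1.0    # acceptable data loss, i.e. back up/replicate at least hourly

  # The plan only holds together if recovery completes within the MTD.
  assert RTO_HOURS <= MTD_HOURS, "RTO must not exceed the Maximum Tolerable Downtime"
  print(f"Replicate or back up at least every {RPO_HOURS:g} hour(s); "
        f"recovery must finish within {RTO_HOURS:g} of the allowed {MTD_HOURS:g} hours.")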

With the BIA in hand, the next steps are conducting a risk assessment and developing the recovery strategy. The risk assessment will help determine the probability of a critical system becoming severely disrupted, identify vulnerabilities, and document the acceptability of these risks to the organization. Engagement from the entire planning team is necessary in order to accurately review and record details for critical records, systems, processing requirements, support teams, vendors, etc. – all needed in order to develop the recovery strategy.

Also important in the recovery strategy is identifying the recovery infrastructure and outsourcing options – ideally alternate datacenter facilities from which critical systems and data can be recovered in the event of a serious interruption. This, as they say, is the point at which the bacon hits the frying pan: many organizations are leveraging the power and abundance of Cloud-based IT resources to lower infrastructure costs, and Cloud is particularly applicable to DR. In fact, there are more than a few services that provide continuous data protection, typically accomplished via unobtrusive software agents residing on each server in a datacenter. These agents connect to an appliance (a ‘black box’ of sorts) also residing in the datacenter, which incrementally takes images of each server, de-duplicates the data, then replicates that data over a secure WAN link to a remote data store – ultimately providing on-demand recovery (via a secure web console) from the remote location at any time. Companies such as nScaled, iland, and Simply Continuous offer such services and can even help build a business case to illustrate the ROI. Point is, do thy homework and explore whether Cloud services such as these might be a sound fit for your organization’s DR plan.
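
For the curious, here’s a rough, simulated Python sketch of that snapshot/de-duplicate/replicate loop. It is not how nScaled, iland, or Simply Continuous actually implement their services – the block contents and ‘remote store’ below are stand-ins purely to show the idea:

  # Simulated continuous data protection: send only blocks the remote store
  # hasn't already seen (de-duplication), keyed by a content hash.
  import hashlib

  remote_store = {}                      # stand-in for the offsite data store

  def block_id(block: bytes) -> str:
      return hashlib.sha256(block).hexdigest()

  def replicate(server: str, image_blocks: list, seen: set) -> int:
      """Replicate an incremental server image, skipping duplicate blocks."""
      sent = 0
      for b in image_blocks:
          bid = block_id(b)
          if bid not in seen:
              seen.add(bid)
              remote_store.setdefault(server, []).append(b)   # "over the WAN"
              sent += 1
      return sent

  seen_blocks = set()
  # Two simulated incremental images; note the shared block is sent only once.
  print(replicate("db01", [b"orders-0001", b"orders-0002"], seen_blocks))    # 2
  print(replicate("web01", [b"orders-0001", b"static-assets"], seen_blocks)) # 1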

Planning and Testing

Armed with a full impact analysis, risk assessment, recovery goals, and outsourcing options, the actual DR plan can now be developed. The DR plan is a living document that identifies the criteria for invoking the plan, procedures for operating the business in contingency mode, steps for recovering lost data, and criteria and procedures for returning to normal business operations. A key activity in this step is to identify, within the DR plan, a recovery team (which should consist of both primary and alternate personnel from each business unit) and the recovery processes and procedures at each business-unit level. Also important is to ensure the DR plan itself is available offsite – both via the web and in permanent media form (print, CD-ROM, etc.).

Equally important to having a DR plan is regular testing. This step includes designing disaster/disruption scenarios and developing and documenting action plans for each scenario. Conducting these tests regularly, with full operational participation, is key to their success.

Ongoing Plan Evaluation

A DR plan is only effective if it is continually kept in lock-step with changes within the organization. Such changes include infrastructure, technology, and procedures – all of which must be kept under constant review, with the DR plan updated accordingly. DR plan testing should also be evaluated on a regular basis, and any necessary adjustments made (to systems, applications, vendors, established procedures, etc.).

So there you have it – four key building blocks to tailoring a DR plan for your organization.  Of course, if the ‘disaster’ arrives in the form of a city-sized asteroid hurtling towards Earth, needless to say any plan will likely not make much difference. Anything short of such a global catastrophe, however, and a well-developed and maintained DR plan will keep employees and customers connected and business moving forward, with minimum downtime.

Again, this is by no means a complete recipe for designing and implementing a DR plan but instead is meant to serve as a high-level overview…offered as food for thought. I encourage you to learn more, explore options, ask for help if needed – whatever it takes to thoroughly prepare your organization for the worst, should the worst ever occur. To loosely paraphrase our man Keanu once again from another of his, er, more questionable films from back in the day – Johnny Mnemonic – this is one topic where you absolutely, positively don’t want to “get caught in the 404”.