Acquiring New Technology? Build-versus-Buy is Dead

Still debating build-versus-buy for your organization's IT purchases? If so, you probably aren't getting the biggest bang for your IT dollar: build-versus-buy is dead. For better decision-making when acquiring IT systems, forget build-versus-buy and remember the Technology Acquisition Grid. You'll not only save money, you'll also make smarter long-term decisions for your organization, increasing your agility and speeding time-to-market.

In this article, I describe Software-as-a-Service (SaaS), application hosting, virtualization and cloud computing for the benefit of CEOs, CFOs, VPs and other organization leaders outside of IT who often need to weigh in on these key new technologies. I also describe how these new approaches have changed technology acquisition for the better, from the old build-versus-buy decision to the Technology Acquisition Grid. Along the way, you'll learn some of the factors that will help you decide among the various options, saving your organization time and money.

The Old Model: Build-versus-Buy

When I earned my MBA in Information Systems in the mid-1990s, more than one professor noted that the build-versus-buy decision was a critical one because it represented two often-costly and divergent paths. In that model, the decision to "build" a new system from scratch gave the advantage of controlling the destiny of the system, including every feature and function. In contrast, the "buy" decision to purchase a system created by a supplier (vendor) brought the benefit of reduced cost and faster delivery, because the supplier built the product in advance for many companies, then shared the development costs across multiple customers.

Back then, we thought of build versus buy as an either-or decision, like an on-off switch, something like this:

[Figure: build-versus-buy as an on-off switch]

In the end, the build-versus-buy decision was so critical because, for the most part, once you made the decision to build or buy, there was no turning back.  The costs of backpedaling were simply too high.

The Advent of Application Hosting, Virtualization, SaaS and Cloud Computing

During the 2000s, innovations like application hosting, virtualization, Software-as-a-Service (SaaS) and cloud computing changed IT purchasing entirely, from traditional build-versus-buy to a myriad of hosting and ownership options that reduce costs and speed time-to-market. Now, instead of resembling an on-off switch, the acquisition decision started to look more like a sliding dimmer switch on a light, like this:

 

[Figure: build-to-buy as a sliding dimmer switch]

Suddenly, there were more combinations of options, giving organizations better control of their budgets and the timeline for delivering new information systems.

What are these technologies, and how do they affect IT purchasing? Here's a brief description of each:

Application Hosting

During the dot-com era, a plethora of application service providers (ASPs) sprang up with a new business model. They would buy used software licenses, host the software at their own facilities and lease the licenses to their customers on a monthly basis. The customers of ASPs benefited from the lower cost of ownership and the reduced strain on IT staff to maintain yet another system, while the ASPs made money by pooling licenses across customers and putting often-idle software licenses to use.

While the dot-com bust put quite a few ASPs out of business, the application hosting model, where the software runs on hardware supported by a hosting company and customers pay monthly or yearly fees to use the software, still survives today.

Virtualization

One of the first technologies to change the build-versus-buy decision was virtualization. By separating the hardware from the software, virtualization separates the decision to buy hardware from the need for new software. First, computer hardware is purchased to support the organization's overall technology needs. Then, a self-contained version of a machine – a "virtual" machine – is installed on the hardware, along with application software, such as supply chain or human resources software, that the business needs at that point in time.

When the organization needs a new software application that is not compatible with the first one, because it runs on another operating system, it installs another virtual machine and another application on the same hardware. By doing this, the organization not only delivers software applications more quickly, because it doesn't need to buy, install and configure hardware for every application; it also spends less on hardware, because it can add virtual machines to take advantage of unused processing power.
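To make the capacity-sharing idea concrete, here is a minimal Python sketch. The host size and the virtual machine sizes are illustrative assumptions, not figures from any particular product:

```python
# Hypothetical sketch: packing virtual machines onto shared hardware.
# Host capacity and VM sizes below are illustrative assumptions.

HOST_CPU_CORES = 32
HOST_MEMORY_GB = 256

virtual_machines = [
    {"name": "supply-chain app (Linux)", "cores": 8,  "memory_gb": 64},
    {"name": "HR app (Windows)",         "cores": 4,  "memory_gb": 32},
    {"name": "reporting database",       "cores": 12, "memory_gb": 96},
]

used_cores = sum(vm["cores"] for vm in virtual_machines)
used_memory = sum(vm["memory_gb"] for vm in virtual_machines)

print(f"CPU in use:    {used_cores} of {HOST_CPU_CORES} cores")
print(f"Memory in use: {used_memory} of {HOST_MEMORY_GB} GB")
print(f"Room for another VM: {HOST_CPU_CORES - used_cores} cores, "
      f"{HOST_MEMORY_GB - used_memory} GB")
```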

Even better, virtual machines can be moved from one piece of hardware to another relatively easily, so like a hermit crab outgrowing its shell, applications can be moved to new hardware in hours or days instead of weeks or months.

Software-as-a-Service (SaaS)

Like virtualization, Software-as-a-Service, or SaaS, reduces the costs and time required to deliver new software applications. In the most common approach to SaaS, the customer pays a monthly subscription fee to the software supplier based on the number of users on the customer's staff during a given month. As an added twist, the supplier hosts the software at its facilities, providing hardware and technical support, all within the monthly fee. So, as long as a reliable Internet connection can be maintained between the customer and the SaaS supplier, the cost and effort to support and maintain the system are minimal. The customer spends few resources and worries little about the software (assuming the SaaS supplier holds up its side of the bargain), enabling the organization to focus on serving its own customers instead of on information technology.
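As a rough illustration of the per-user subscription model, here is a short Python sketch; the price per user and the monthly user counts are hypothetical numbers chosen for the example:

```python
# Hypothetical sketch: estimating a SaaS subscription bill.
# The per-user price and user counts are illustrative assumptions.

PRICE_PER_USER_PER_MONTH = 45.00  # set by the SaaS supplier's contract

def monthly_saas_cost(active_users: int) -> float:
    """Fee for one month; hosting and support are bundled into the price."""
    return active_users * PRICE_PER_USER_PER_MONTH

# Headcount varies by month, so the bill scales up and down with usage.
for month, users in [("January", 120), ("February", 135), ("March", 110)]:
    print(f"{month}: {users} users -> ${monthly_saas_cost(users):,.2f}")
```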

Cloud Computing

The most recent of these innovations, cloud computing brings together the best qualities of virtualization and SaaS. Like SaaS, with cloud computing both hardware and software are hosted by the supplier. However, where the SaaS model is limited to a single supplier's application, cloud computing uses virtual machines to host many different applications with one (or a few) suppliers. Using this approach, the software can be owned by the customer but hosted and maintained by the supplier. When the customer needs to accommodate more users, the supplier sells the customer more resources and more licenses "on demand". Depending upon the terms of the contract, either the customer's IT staff or the supplier maintains the hardware. In addition, in most cases, the customer can customize the software for its own needs, to better serve its own customers.
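To illustrate the "on demand" idea, here is a small, hypothetical Python sketch of how hosting costs might scale as users are added; the users-per-machine figure and the hourly rate are assumptions for the example, not any supplier's actual pricing:

```python
# Hypothetical sketch: buying cloud capacity "on demand" as users grow.
# Users-per-VM and the hourly rate are illustrative assumptions.

import math

USERS_PER_VM = 250          # users one virtual machine can serve
HOURLY_RATE_PER_VM = 0.40   # supplier's charge per VM-hour
HOURS_PER_MONTH = 730

def monthly_hosting_cost(users: int) -> float:
    vms_needed = math.ceil(users / USERS_PER_VM)
    return vms_needed * HOURLY_RATE_PER_VM * HOURS_PER_MONTH

for users in (500, 900, 2000):
    vms = math.ceil(users / USERS_PER_VM)
    print(f"{users:>5} users -> {vms} VMs, ${monthly_hosting_cost(users):,.2f}/month")
```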

Adding Application Hosting, Virtualization and Cloud-Computing to the Mix – The Technology Acquisition Grid

Remember the dimmer switch I showed a few moments ago? With the addition of application hosting, virtualization, SaaS and cloud computing to the mix, it's not only possible to choose who owns and controls the future of the software, it's also possible to decide who hosts the software and hardware – in-house or with a supplier – as well as how easily it can be transferred from one environment to another. That is, it's now a true grid, with build-to-buy on the left-right axis and in-house-to-hosted on the up-down axis. The diagram below shows the Technology Acquisition Grid, with the four main combinations of options to consider when acquiring technology.

[Figure: The Technology Acquisition Grid]

 

Here’s where application hosting, SaaS, virtualization and cloud computing fit into the Technology Acquisition Grid:

[Figure: The Technology Acquisition Grid with application hosting, SaaS, virtualization and cloud computing mapped onto it]

 

Making a Decision to Host, Virtualize, Go SaaS, or Seek the Cloud

If the rules of the game have now changed so much, how do we make the decision to use virtualization, application hosting, SaaS or cloud computing, as opposed to traditional build and buy?  There seem to be a few key factors that drive the decision.

At the most basic level, it comes down to how much control – and responsibility – your organization wants over the development of the software and the maintenance of the system. Choose an option in the top-left of the Technology Acquisition Grid, and you have greater control of everything; choose an option at the bottom-right, and you have far less control and far less responsibility for the system.

In my own experience advising clients during technology acquisition and leading technology initiatives, decision-makers tend to choose a "control everything" solution because it's the easiest to understand and appears to pose the least risk. While this may, in the end, be the best answer, organizations should weigh the other options as well. Certainly, more control usually sounds appealing, but it almost always comes with much higher costs and delays use of the system by months. Particularly for smaller organizations, which probably need those IT dollars to serve their own customers more effectively, a "control everything" answer is often the wrong decision.

Which should your organization choose? Start by making an effort to include software products that take advantage of hosting, virtualization, SaaS and cloud computing among your choices when you start your search. Then, weigh the benefits and downsides of each option and combination of options, choosing the one that balances cost and time-to-market with your own customers' needs and your tolerance for risk. A good consulting company like Cedar Point Consulting can help you do this, as can your organization's IT leadership. Using this approach, you're sure to free yourself from the old rules of build-versus-buy, delivering more for your own customers at a much lower cost.

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in technology strategy, project management and process improvement. Cedar Point Consulting can be found at https://cedarpointconsulting.com.

 


Measuring the Success of Business Strategy

It's so easy, right? A successful strategy means the business grows and is profitable. There might be other positive outcomes that occur because of a successful strategy, but growth and profitability are the only ones that really matter.

Right?

Well, what if you have a technology company that is never intended to become profitable, but instead is intended to attract attention as an acquisition? How about another company that doesn’t grow but still manages to outlast almost all of its competitors?

I could go on and on with a lot more “what if” scenarios, but then someone from the back of the room would pipe up with this: “A successful strategy creates shareholder value. Whether it’s improving employee morale, market growth, building a brand, better HR systems, a strict business-attire dress code, netting a profit, it doesn’t matter. All those things create shareholder value.”

Okay, I can roll with that, but how in the world do you really measure the impact improving employee morale has on shareholder value? It’s more or less voodoo, isn’t it? You can measure an increase in employee morale and say that a subsequent increase in the value of the company happened as a result of the boost in morale, but I defy you to prove that beyond a doubt.

So is there a soft side to business strategy, intangible results that you intuitively believe exist, but cannot be proven? Is the checklist regarding strategy success employed by companies and strategy consultants incomplete? Should the success of strategy be measured using a large, holistic dashboard (and free crunchy granola and delicious banana bread for all)? Are we deluding ourselves with all of these rigidly-defined paths of business strategy that lead to an unalterable destiny, measured by accepted models, numerological systems and systemic markers?

Holy smokes! My world has been torn asunder and my mind is reeling right now.

No, just kidding, I’m fine.

The answer is that measurement tools like KPIs, Strategy Maps, ROC, IRR, the Balanced Scorecard, ROA, Benefits Measurement, share price, etc. work very well for measuring the success of most business strategies. That's why they're used.

But strategy is dynamic and so are the successes attached to it. Sometimes, depending on the strategy, it's going to be a little more Art than Science. This is not going to happen very often, but sometimes you just have to let go and trust that it will pay off. Sometimes you do the right thing because it's the right thing to do, and sometimes you do the right thing because it's good business to do the right thing, and the positive results of the former are sometimes not as readily apparent as those of the latter.

Brendan Moore is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in marketing, sales, front-end operations, and strategy. Cedar Point Consulting can be found at https://cedarpointconsulting.com.


Product Development – Today’s Lesson Is From McDonald’s

You are probably aware that McDonald's is not just an American experience anymore; the company has restaurants all over the world and, like other American fast-food corporations (KFC, etc.), has found great success in the international marketplace.

The company didn’t achieve that success by ignoring local tastes, and if you’ve ever been in a McDonald’s in Japan or Germany or some other country, some of the menu will be unrecognizable. Even the food that looks familiar may have a very different name on it (for those Pulp Fiction fans, this is your cue to recite those lines about a “Royale with Cheese”).

There’s a reason I’m bringing this up. According to an article published in various newspapers, McDonald’s in France has just rolled out a new product named the McBaguette for a six-week market trial, and it’s a perfect example of great product development. I have no idea what it actually tastes like, but it is a superb product development concept.

The new sandwich exploits the fact that the French love their bread (their fromage, too, but we'll get to that in a moment) with an admirable passion. In fact, 98% of all French people eat bread every day. And one of the most popular types of pain is the baguette, a cylindrical loaf baked with a hard crust. The French love bread; they really love baguettes, and this emotional attachment runs deep. Around 65% of the two billion sandwiches sold in France every year are built with a baguette as their underpinning.

What better thing to put a couple of hamburger patties on, then? For the next six weeks, customers in France can plunk down four and a half euros on the counter at McDonald's and get a burger on a baguette (albeit a square one), covered with melted Emmental cheese (from France, naturellement!) and spicy mustard.

As I said, I have no idea how it tastes, or how it will taste to the average French man, woman or child, but it’s simple, yet brilliant product development and execution.

And that concludes today's lesson, mes amis.

Brendan Moore is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in marketing and strategy. Cedar Point Consulting can be found at https://cedarpointconsulting.com.


Experience and the Turnaround

While at a business conference earlier this year, a fellow I had just met mentioned to me in an offhand way that they (his business unit) had just hired a young guy out of a prestigious business school program to "fix" their business, which had been on the decline for the past three years. They hired this guy as a permanent employee, and the CEO had great hopes that he would bring the company back into the black by the end of this year using the latest business strategies. He reports directly to the CEO, not to the head of the business unit.

After all, he did have a perfect GPA at the B-School he attended.

I hope that works out for them, but I did give him my business card. "Just in case the task turns out to be a little bigger than you thought," I said.

Now for some business-style preaching:

When managing a business turnaround, there is a great deal of reliance on data, on forecasts, on costs, on efficiency, on quality, etc. And many of the things you do to fix the company are the things taught in most business schools and/or internal management training programs at large corporations.

So, any young over-achiever right out of business school with the most up-to-date academic knowledge can probably fix what’s ailing your business and get it turned around pretty quick, right?

I’m waiting…

Yeah, that’s what I thought. Good answer.

Not many business owners or business CEOs are going to trust a turnaround to someone who doesn’t have a fair amount of battle experience. And believe me, it will be a battle, and there will be casualties.

Whether that trust comes from the business owner's or CEO's own business experience, or from an intuitive belief that an experienced, steady hand is what's needed in this type of dire situation, that trust is not misplaced.

Experience is what enables you to manage a crisis, and keep a crisis from degenerating into chaos.

Experience is what lets you walk into a process shop and know immediately that there are problems there.

Experience gives you the ability to monitor several dozen calls in a call center and quickly figure out why average talk time has gone through the roof.

Experience allows you to sense the submerged hostility between the head of operations and the head of sales and marketing, and to realize they've been sabotaging each other for years, and that their mutual sabotage has now brought down the company. And it also allows you to realize that the COO won't do anything to stop it because he is extremely competent but completely non-confrontational.

Experience lets you ascertain almost immediately that the guy in charge of IT is much more interested in building “the perfect beast” in terms of the company server, the website platform, etc. than doing what is best for the company, even if that means he doesn’t get those new Sun server boxes until next year, or the website update gets put off until after the busy season is over. And the company president doesn’t know enough about technology to challenge him on it.

Experience is what gives you the ability to figure out pretty quickly that the CFO stopped really caring about her job some time ago and, for whatever reason, is now just basically going through the motions, and that there may be some gains available to the business by using cash flow more effectively or by more diligent oversight of expenditures – if someone would just do it.

It’s not that I want to portray experience as some black magic or some intangible “art” that is both mystical and inexplicable, but I also don’t want to discount its value when time is crucial and the health of the business is slowly ebbing away. If nothing else, many times it will prevent you from running down blind alleys at full speed for a couple of months, if only for the simple reason that you’ve already run down that blind alley before. It helps, it really does, and as long as you don’t get stuck in the “well, this is the way we’ve always done it before, and so this is the only way that will work” trap, experience is a big plus in turnaround efforts.

So, that concludes today’s sermon, congregants. I’m going to step down from my pulpit now, and I hope to shake your hand and wish you well as you exit. May Providence be with you.

Brendan Moore is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in rebirth and rejuvenation. 


Are You Planning to Crash?

Nearly every experienced project manager has been through it. You inherit a project with a difficult or near-impossible schedule and the order comes down to deliver on time.  When you mention how far the project is behind, you’re simply told to “crash the schedule”, or “make it happen.”

As a long-time project manager who now advises others on how best to manage projects and project portfolios, I still bristle at the term "schedule crashing." I picture a train wreck, not a well-designed product or service that's delivered on time, and for good reason. While schedule crashing sounds easy in theory, in practice it is a very risky undertaking that requires some serious evaluation to determine whether crashing will actually help or hurt.

In this article, I’ll explain the underlying premise behind schedule crashing and describe some of the typical risks involved in a schedule crashing effort.  Then, I’ll provide seven questions that can help you assess whether schedule crashing will really help your project.  Combined, the schedule crashing assessment and the risks can be brought to executive management when you advise them about how best to proceed with your project.

Schedule Crashing Defined

As defined by BusinessDictionary.com, schedule crashing is "Reducing the completion time of a project by sharply increasing manpower and/or other expenses," while the Quality Council of Indiana's Certified Six Sigma Black Belt Primer defines it as "…to apply more resources to complete an activity in a shorter time." (p. V-46). The Project Management Body of Knowledge (PMBOK), fourth edition, describes schedule crashing as a type of schedule compression, including overtime and paying for expedited delivery of goods or services as schedule crashing techniques (PMBOK, p. 156), though I generally think of overtime as another type of schedule compression – not crashing.

From a scheduling perspective, schedule crashing assumes a straight mathematical relationship between the number of laborers, the number of hours required to complete the task, and the calendar time required to complete it. Said simply, if a 40-hour task takes one person five days to complete (40 hours ÷ 8 hours per day = 5 days), then according to schedule crashing, assigning five resources would cut the duration to one day (40 hours ÷ (5 people × 8 hours per day) = 1 day).
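Expressed as a quick Python sketch, the naive crash arithmetic looks like this; the function and the numbers simply restate the example above:

```python
# The naive crash arithmetic described above: calendar days shrink in
# direct proportion to the number of people assigned. Real projects
# rarely behave this way, as the risks below explain.

HOURS_PER_DAY = 8

def crashed_duration_days(task_hours: float, people: int) -> float:
    return task_hours / (people * HOURS_PER_DAY)

print(crashed_duration_days(40, 1))  # 5.0 days with one person
print(crashed_duration_days(40, 5))  # 1.0 day with five people, in theory
```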

The Risks of Crashing

Frederick Brooks had much to say about the problems with schedule crashing in "The Mythical Man-Month". In this ground-breaking work about software engineering, Brooks explains that there are many factors that can make schedule crashing impractical, including the dependency of many work activities on their preceding activities and the increased cost of communication. This phenomenon is now referred to as Brooks' Law: adding resources to a late project actually slows the project down. I personally saw Brooks' Law in action on a large program led by a prestigious consulting firm, where the client requested that extra resources be added in the final two months of the program; because the existing resources were forced to train new staff instead of completing work, the program delivered in four more months instead of two.

Additional risks of crashing include increased project cost if the crashing attempt fails, delayed delivery if the crash adversely impacts team performance, additional conflict as new team members are folded into the current team to share responsibility, risks to product quality from uneven or poorly coordinated work, and safety risks from the addition of inexperienced resources.

In short, schedule crashing at its most extreme can be fraught with risks. Managers at all levels should be very cautious before recommending or pursuing a crashing strategy.

Making the Call to Crash

So, how can a project manager decide if crashing will help? Here are seven questions I ask myself when deciding if crashing is likely to succeed:

  1. Is the task (or group of tasks) in the critical path? Tasks in the critical path affect the overall duration and the delivery date of your project, while tasks outside the critical path do not affect your delivery date. Unless the task you're considering crashing is in the critical path, or will become a critical-path activity if it slips substantially, crashing the activity is a waste of resources.
  2. Is the task (or group of tasks) long? If the task is short and does not repeat over the course of the project, then it's unlikely you'll gain any benefit from crashing the activity. A long task or task group, however, is far more likely to benefit from the addition of a new resource, as are groups of tasks that require similar skills.
  3. Are appropriate resources available? Crashing is rarely useful when qualified resources are not available. Is there a qualified person on the bench who can be added to the project team to perform the work? If not, can someone be brought in quickly who has the needed skills? Recruiting skilled resources is a costly and time-consuming activity, so by the time the resource(s) are added to your team, the task may be complete and your recruiting efforts wasted.
  4. Is ramp-up time short? Some types of projects require a great deal of project-specific or industry-specific knowledge and it takes time to transfer that knowledge from the project team to the new team members. If the ramp-up time is too long, then it may not make sense to crash the schedule.
  5. Is the project far from completion? Often, people consider crashing when they're near the end of a project and it's become clear that the team will not meet its delivery date. Yet this may be the worst time to crash the schedule. Frederick Brooks told the story of his own schedule-crashing attempt in "The Mythical Man-Month", where he added resources to one of his projects at the tail end, which further delayed delivery. In most cases, schedule crashing is only a viable option when a project is less than half complete.
  6. Is the work modular? On many projects, the work being delivered is modular in nature. For example, in automotive engineering, it's possible for one part of the team to design the wiring for a new vehicle model while another part of the team designs the audio system that relies upon that wiring, as long as points of integration and dependencies are defined early. When work can be fast-tracked (completed in parallel), it also becomes beneficial to add resources, crashing the schedule.
  7. Will another pair of hands really help? All of us have heard that "too many cooks can spoil the broth," and this also applies to engineering, software development and construction. Consider where the new resources would sit, how they would integrate with the current team, and whether their introduction would cause an unnatural sharing of roles.

If you've answered these questions and responded "yes" to at least five of the seven, then you have a reasonably good project-crashing opportunity; a "yes" to three or four offers marginal benefit, while a "yes" to only one or two is almost certain to end badly. The short sketch below expresses this scoring rule in code.
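Here is that rule as a minimal Python sketch; the thresholds simply restate the guidance above:

```python
# A minimal sketch of the scoring rule above: count the "yes" answers
# to the seven questions and map the total to a recommendation.

def crash_recommendation(yes_answers: int) -> str:
    if not 0 <= yes_answers <= 7:
        raise ValueError("Expected a count between 0 and 7")
    if yes_answers >= 5:
        return "Reasonably good crashing opportunity"
    if yes_answers >= 3:
        return "Marginal benefit; weigh the risks carefully"
    return "Crashing is almost certain to end badly"

print(crash_recommendation(6))  # e.g., a long critical-path task with staff available
print(crash_recommendation(2))
```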

Alternatives to the Crash

Fortunately, there are alternatives to schedule crashing that may be more appropriate than the crash itself.

  1. Increase hours of current resources. For a limited time period and within reason, asking current team members to work overtime can help you reach your delivery date more quickly than schedule crashing. When considering overtime, it’s important to remember the caveats, “a limited time period” and “within reason”. Asking resources to work 50-60 hours a week for six months is unreasonable, as is asking resources to work 70 hours per week for a month for all but the most critical projects.
  2. Increase efficiency of the current team. Though it's surprisingly rare on projects, examining current work processes and adding new time-saving tools can improve the productivity of a team by 10% to 50% or more if a project is long. I once led a team that increased its productivity by roughly 30% simply by re-sequencing work activities and adding a single team member to speed up cycle time at a single step in the process.
  3. Accept the schedule. In some cases, it’s better to offset the downside effects of late delivery rather than attempt to crash the schedule. In some cases, this amounts to using a beta or prototype for training rather than a production-ready product.

A Final Caution About Crashing

Because it's rarely well understood by anyone other than project managers, schedule crashing is often recommended by co-workers who really don't understand the implications of the decision. While they see an opportunity to buy time, they almost never see the inherent risks.

As a result, it’s critical that project managers not only assess the likelihood of success when considering crashing as an option, they also educate their stakeholders, their sponsor and other decision-makers about the risks of a schedule-crashing approach.  Doing anything less perpetuates the myth that crashing is a panacea that cures all that ails a late project, creating much bigger problems for everyone down the road.

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in applying Lean and Agile to develop new products and services as well as improve organizational performance. Cedar Point Consulting can be found at https://cedarpointconsulting.com.


Business Blog Primer

If you’re a business these days, you’re supposed to have a blog to go along with your company website. The reasons why?

Well, it can keep your customers informed, for one. For another, it can provide a great platform for your customers to interact with the company. Third, it's a great way to keep talking about the company in a positive way. Fourth, it's a good way for the company viewpoint on issues to be delineated, if that is important to the business. Fifth, people may actually come to your site just to read your blog, or some other site may find something interesting on your blog and link to it, thereby driving potential customers to your site. Sixth, each new blog post (and each new comment, if you allow comments) is yet another reason for the search engine bots to crawl your site, thereby moving you up in the search engine rankings, which is always good for business.

Okay, so a lot of good reasons to have a company blog. The problem is, of course, just as with other things, the execution. Apropos of that execution, how do you get a blog, how do you get good, relevant content for the blog, and how do you keep it going?

If you determine that your business needs a blog, you should decide what the goals of the blog are. Do you want your blog for all the reasons laid out here, or just some of those reasons? There is also a practical technological consequence to the decision to have a blog – can your current website accommodate a blog platform or will you need to pay for website development work to add a blog module? If you have no website, and you want a business blog, then you should make sure whatever website platform/theme/template you get can support a full-feature blog. Just for the record, a full-feature blog will have the ability to customize frames, colors, pages, fonts, etc., and will enable you to offer video, podcasts, images, PDF files and more to your blog readers. It will also have SEO (search engine optimization) tools built into the blog architecture. That’s at a minimum. You may wish to have other, extra features like flash animation, etc.

There are free blog platforms that are good, like WordPress, and there are other good free blog platforms, like Drupal and Joomla, that can cost a fair amount of money in development. What's the difference? You generally get more features and more control over those features with the platforms that cost money. There are also custom development technologies like SQL Server, ASP.NET, Flash, PHP and JavaScript, which can get very expensive in a hurry, but these are almost always used for more robust websites and would be considered overkill just for a blog application. However, if your existing site is built on one of these more expensive technologies, you will either need to build the blog on that same platform or switch over to something else.

That brings us to the next stage of this decision process. Who will build the blog? Who will maintain the blog and put new posts, photos, video, etc. in every week or every month?

Good question, right?

The most obvious answer is someone at your company. For instance, a WordPress blog is free, is fairly easy to build out, and offers an intuitive CMS (Content Management System) utility. But, it looks generic (because it is). As noted, there are other blog platforms that require more (or much more) technological expertise, but offer more features and more customization potential. Still, you may have the in-house capabilities to build your blog, and, to manage the maintenance thereof.

Or, you may be the best company in the world and not have those skills in-house. In this instance, you can hire a web developer, who will design, develop and code your site for you, whether that site is in Drupal, Joomla or WordPress (all open-source platforms), or, one of the other site technologies. They will also provide the CMS services after the install and launch, if you wish, although I strongly suggest you have the web development company build a site for you that has an easy and intuitive CMS functionality. That way, someone at your company can control the look and feel of the site, as well as text, photos, etc. There are hundreds of website development companies that do this sort of work; in fact, one of these types of companies is one of our clients – they do wonderful, innovative work, do it worldwide, and their rates are quite reasonable. Clients really do love their work. But, there are many companies that do custom website development design and finding one shouldn’t be a problem.

Now, on to the challenge of content. Who will write the blog posts, who will embed the video files or the podcasts, who will edit the site, and so forth?

Again, the most obvious answer is someone at your company. If there are people at your company that can write well in an entertaining manner, and can produce content on a consistent basis, then you’re set. Of course, you’ll still need an editor.

Why is consistent output a requirement? Why is an editor of the blog a requirement?

It’s important to have consistent output because you don’t want a “ghost blog” where the blog has, say, 10 posts in the first two months, and then, no new posts for the last year. It reflects badly on the company, it makes the blog look forlorn and abandoned, and makes the company look careless for leaving it up. There is also no reason whatsoever for people to visit the blog if there is never any new content, or, the new posts are so infrequent that people get tired of waiting for something new.

It’s important to have an editor because you want the blog to be well-written, with excellent spelling, grammar and continuity in the text; you want the subject matter (including photo, videos, etc.) to be appropriate, you want the company to be well-represented by the content, and you want a single person responsible for the look and feel of the site, so that there is central authority concerning the blog.

What if you don’t have those resources in-house? What if you’re a successful SaaS company and the only thing anyone can write is code? Which also makes sourcing an editor in-house out of the question? Or, what if you can have some content produced in house, but it’s not enough? Or, you can produce enough content, but there is no one that can be an editor? Or, you have an editor, but no content?

Then the company will need to hire a writer (or writers), or an editor, or both. Many companies hire an editor (full-time, part-time, or contract employee), who also contributes as a writer, and coordinates the purchase of content from external freelancers, which seems to work out well. In fact, there are so many business-related blogs, that it is not out of the question for your editor to obtain some content for free. Some blog owners will allow free reprints of their content as long as you attach a blurb to the post about the author, the blog, and provide a live link back to the originating blog. The blurb usually looks something like the one at the bottom of this post. In fact, when I get a reprint request from a business blog after I publish this piece, the tag below will be the blurb they will put at the end of the post when they publish it on their site.

So, free is always good, but you can’t rely on getting enough good content for your blog for free; you’re going to have to either have employees do it, hire someone to write your content, or buy content by the piece from freelancers. And you need good content on a consistent, frequent basis or there isn’t much point in having a blog. You need fresh content, you need well-written content, and it certainly doesn’t hurt to have a variety of authors, so that different writing styles are offered to your readers. That is the way to keep your blog relevant and vital, and to keep your readers coming back for more.

Which segues nicely into the last question regarding company blogs, and one I get asked all the time when we’re helping our clients with their business blogs:

“Should we allow comments on the blog articles?”

That question, by the way, is usually asked with some trepidation. Businesses are wary of letting people comment on their blogs because there is always the risk of some unhappy customer with an axe to grind poisoning the well for other customers (or potential customers) with his or her vitriolic commentary about the company, company personnel or products. There is also the more general issue of incivility so rampant on the internet; people say the worst sort of things to each other in the comments section, and companies don’t want to be part of such a hostile environment. And, then, of course, there is spam.

The other side of the coin is:

Companies tend to learn things about their products, service levels and personnel pretty quickly through comments on their blogs. And, people like the interaction with other people through comments, and they like the perceived interaction with the company through comments, and that is a very positive thing and makes return visits to the blog more likely. Also, customers do say nice things about the company in the comments – it's not just negative. Lastly, all blog platforms allow moderation of comments before they are published, so you don't have to worry too much about profanity, spam and craziness slipping through.

That wraps up this primer about business blogs, and remember, you don’t have to figure all of this out by yourself, or, get it done by yourself. There are hundreds of vendors ready to help you with setting up and maintaining your business blog. The only hard part is shaking off the inertia and getting started.

Brendan Moore is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in marketing, sales, front-end operations, and strategy. Cedar Point Consulting can be found at https://cedarpointconsulting.com.


What’s Your “Hook”? If You Don’t Know, How Will Your Customers?

by Red Slice on October 7, 2010 – Reprinted here by express permission of Red Slice

Telling your brand story is sort of like a newspaper article: it's all about the lead. Some folks may call this the "lead offer." What does your business hang its hat on? When customers have a certain need or desire a certain experience, is it your company that comes to mind first?

Having a lead offer doesn't mean you can't have secondary messages. I often use the example of Nordstrom and Walmart. Nordstrom leads with a customer service and quality offer; Walmart with one about lowest prices. Does this mean Walmart is rude to their customers? Heck no. It just means that when you are looking for low prices, they want you to think of them. If you are thinking of a good customer service experience first, then maybe you should go elsewhere.

Recently, Delta announced they were going to start leading their brand story with "service," not "size." After acquiring Northwest Airlines, they became the largest player by traffic – until United recently merged with Continental. So now Delta is switching stories and focusing their budget on service: new flat-bed seats, video on demand and upgraded facilities in key markets.

United may decide to focus on size for a while in terms of the benefits it provides to customers: more routes, more convenience to get where you want to go, a larger network, etc. (Sidenote: big is only a relevant claim if it benefits a customer in some way and makes their life better, offers them more access, etc. Big for “big’s sake” just becomes chest-thumping.) We will have to see how the United-Continental brand story shakes out.

What do you lead with? Can you articulate the main offer you want to be known for? Service? Selection? Style? Convenient locations? Cutting-edge technology solutions? You can't be everything, so pick the main offer, the main place where you want to "fit" inside your customers' brains, and build up from there.

Maria Ross is the founder and chief strategist of Red Slice, a branding and marketing consultancy based in Seattle. Her passion is storytelling and she has advised start-ups, solopreneurs, non-profits and large enterprises on how to craft their brand story to engage, inform and delight customers. Maria is the author of Branding Basics for Small Business: How to Create an Irresistible Brand on Any Budget (2010, Norlights Press).


Intuitive to Whom? In Web Design, it Matters

During a recent Management Information Systems course I taught for the University of Phoenix, I posed the discussion question to students, “What do you think are the most important qualities that determine a well-designed user interface?” While responses were very good, nearly all of my students used the term “intuitive” in their response without providing a more detailed description, as though the term has some universal, unambiguous meaning to user interface (user experience) designers and web users alike.

I responded by asking, “Intuitive to whom?…Would a college-educated individual and a new-born infant both look at the same user interface and agree it is intuitive? Or, would the infant prefer a nipple providing warm milk to embedded-flash videos of news stories?”

Far from obvious, an “intuitive” user interface is extremely hard to define because “intuitive” means many different things to many different people. In this article, I challenge the assumption that “intuitive” is obvious and suggest how we can determine what intuitive “is”.

Nature and Nurture

Our exploration of intuitive user interfaces and user experience starts with “nature” and “nurture”, much like the “Nature versus Nurture” debate that occurs when explaining the talents and intelligence of human beings. For those of us who haven’t opened a genetics book in a few decades, if ever, “Nature” assumes that we have certain talents at birth, while “Nurture” proposes that we gain talents and abilities over time.

Certainly, "Nature" plays a role in an intuitive user interface. According to research by Anya Hurlbert and Yazhu Ling (http://ts-si.org/neuroscience/2464-sex-differences-and-favorite-color-preference), there's a great deal of evidence that we are born with color preferences and that color preferences naturally vary by gender. In addition, our responses to warning colors, such as the red on stop signs and the yellow on caution signs, are likely a matter of biology and genetics rather than something learned after we're born. So, an "intuitive" interface is partly determined by our genes.

"Nurture" also plays a big role in determining our preferences in a user interface. For example, link-underlining on web pages and word density preferences are highly dependent upon your cultural background, according to Piero Fraternali and Massimo Tisi in their research paper, "Identifying Cultural Markers for Web Application Design Targeted to a Multi-Cultural Audience." While research in personality and user interfaces is still in its infancy, there's a strong indication that CEOs have different color preferences from other individuals, as Del Jones describes in this USA Today article.

But, what about navigation techniques, like tabs and drop-down menus? In a recent conversation with Haiying Manning, a user experience designer with the College Board, I was told that "tabs are dead." This crushed me, quite frankly, because I still like tabs for effectively grouping information and have a great deal of respect for Haiying's skills and experience. As a Gen-Xer who spent much of his teen years sorting and organizing paper files on summer jobs, I'm also very comfortable with tabs in web interfaces, as are my baby-boomer friends. My Net-Gen (Millennial) friends seem to prefer a screen the size of a matchbox and a keyboard with keys the size of ladybugs, which I have trouble reading. (Nevertheless, Haiying is right.)

In the end, because of “Nature” and “Nurture”, the quest for an “intuitive” user interface is far more difficult than selection of a color scheme and navigation techniques everyone will like. What appeals to one gender, culture or generation is unlikely to appeal to others, so we need to dig further.

It’s all about the Audience

In looking back on successful projects past, the best user interface designers I’ve worked with have learned a great deal about their audience – not just through focus groups and JAD sessions, but through psychometric profiling and market research. This idea of segmenting audiences and appealing to each audience separately is far from new. Olga De Troyer called it “audience-driven web design” back in 2002, but the concept is still quite relevant today.

Once they better understood their target customers, these UI designers tailored the user interface to create a user experience that was most appealing to their user community. In some cases, they provided segment-targeted user interfaces – one for casual browsers and one for heavy users, for example. In other cases, they made personalization of the user interface easier, so that heavy users could tailor the interface based on their own preferences.

They also mapped out the common uses (use cases or user stories) for their web sites and gave highest priority to the most used (customer support) or most valuable (buying/shopping) uses, ensuring that they maximized value for their business and the customer. More importantly, the user interface designers didn’t rely upon the “the logo always goes at the top left” mind-set that drives most web site designs today.

Think about the Masai

In hopes of better defining what “intuitive” is, I spoke with Anna Martin, a Principal at August Interactive and an aficionado of web experience and web design. Evidently, “intuitive” is also a hot topic with Anna, because she lunged at the topic, responding:

Would you reach for a doorknob placed near the floorboard; or expect the red tube on the table to contain applesauce? Didn’t think so. But what’s intuitive depends largely on what you’re used to.  Seriously, talk to a Masai nomad about a doorknob – or ketchup for that matter – and see what you get. And good luck explaining applesauce. (Cinnamon anyone?). Clearly intuition is dependent on what comes NATURALLY to a user – no matter what the user is using.

So why would the web be any different? It’s not. Virtual though it may be, it’s still an environment that a PERSON needs to feel comfortable in in order to enjoy. Bottom line is this…if you wouldn’t invite your 6 year old niece or your 80 year old grandmother to a rage (did I just date myself?) then don’t expect that every website will appeal to every user.

Know your audience, understand what makes them comfortable; and most importantly try to define what ‘intuitive’ means specifically in regards to sorting, finding, moving, viewing, reading and generally experiencing anything in their generation.”

So, audience-driven web design has firmly embedded itself into the minds of great designers, who must constantly challenge the conventions to create truly creative interactive experiences on the web. Consequently, as the field of user design transitions into a world of user experience, it’s going to require second-guessing of many of the design conventions that are present on the web today. This not only means pushing the envelope with innovative design, it also means we need to have a good handle on what “intuitive” really is.

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in project management, process improvement, and small business strategy.  Cedar Point Consulting can be found at https://cedarpointconsulting.com.



Departing Waterfall – Next Stop Agile

It's been more than a year since I penned "Before Making the Leap to Agile", an article intended to guide everyone from C-level executives to IT project managers on the adoption of Agile. The goal was to offer up some of the lessons I learned through actual implementations, so that readers could avoid some of the pitfalls associated with Agile adoption. While a few saw it as an assault on Agile, many understood that my goal was to assist Agile adopters and thanked me for writing it.

Five-thousand-plus page views later, I've finally cleared my plate enough to address an equally important topic: why people and organizations are making the shift to Agile from the more typical Waterfall. After all, Agile is a revolutionary approach to software development and it continues to grow in popularity, so I think it's important for those who do not yet use Agile to understand why others have embraced it.

Why are people abandoning Waterfall and moving to Agile?

1. Agile is Adaptive. For the project team, as well as the business, Agile enables you to make quick changes in direction so that your software product and your business can respond to a rapidly changing business environment.

How? Agile teams use two-to-four-week iterations, often called sprints, in which to develop and then release a working product. At the end of each sprint, the team uses a retrospective to look back on the work completed and see how productivity can be improved; the team also works with the customer to determine which work should be accomplished during the next sprint. One technique enables continuous improvement; the other enables the business to re-prioritize work based on changes in the business climate. Together, they make Agile highly adaptive when compared to a Waterfall approach that effectively locks the team into both a process and a business strategy for a number of months.
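As a rough illustration of how a team might fill the next sprint from a re-prioritized backlog, here is a hypothetical Python sketch; the features, priorities and point estimates are invented for the example:

```python
# Hypothetical sketch: filling the next sprint from a re-prioritized
# backlog up to the team's capacity. Names and estimates are invented.

SPRINT_CAPACITY_POINTS = 20

backlog = [
    # (feature, priority set by the product owner, estimated points)
    ("Checkout bug fix",     1, 3),
    ("New payment provider", 2, 8),
    ("Customer dashboard",   3, 13),
    ("Email notifications",  4, 5),
]

next_sprint, used = [], 0
for feature, priority, points in sorted(backlog, key=lambda item: item[1]):
    if used + points <= SPRINT_CAPACITY_POINTS:
        next_sprint.append(feature)
        used += points

print(f"Next sprint ({used} of {SPRINT_CAPACITY_POINTS} points): {next_sprint}")
```

Because the backlog is re-sorted before every sprint, a change in business priorities shows up in the very next iteration rather than months later.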

2. WYSIWYG (What You See Is What You Get) Development. Many of us are familiar with the well-known cartoon that shows how projects really work — at least in a Waterfall world. Notice how there's an enormous disconnect between the first image, "what the customer asked for", and the last, "what the customer really needed".

Arguably, this happens because those of us in software development listen dutifully to what our customer says, document their words in the form of requirements, and then go off and build it, assuming all along that our customer knows not only what they want – but what their end customers want.  In reality, many of us have a rough idea of what we want and often less of an idea of what our customers want, particularly with software products that serve the masses (sure, focus groups and usability testing make a big difference, but still fall short in many instances).

Agile takes an entirely different approach to requirements gathering. Product features are identified for development and then the team works together with the business customer to build the features cooperatively. In many cases, user stories are written, screen mockups are drawn and simple business rules are written, but nothing too sophisticated. Instead, the Agile team relies upon heavy interaction with the customer or product owner to elicit requirements on-the-fly.

For example, not sure what the business customer wanted on a particular screen? Show them what you’ve got and see if it fits their expectations. Even if it is what they asked for, see if it’s going to serve their customer’s needs as they intended, or if it needs some refinement. Either way, if they want a change, change it. Using this nimble approach, there is little risk of misinterpretation of requirements and even less risk that the finished product misses the mark.

3. Shorter Time-to-Market. Let’s be honest here – who among us hasn’t reported to a C-level who has a great idea and wants something on the market – yesterday. (Heck, I’ve been guilty of this myself). Using a Waterfall approach, delivering anything to the marketplace takes months and sometimes years. But, by taking an Agile approach, the bare-bones features of a new product can be delivered in weeks, then, further enhanced to provide a truly robust solution. Again, the secret to shorter time-to-market lies in the use of iterations (sprints), with the end of each sprint another opportunity to deliver more features to the customer. Agile has this – Waterfall doesn’t.

4. Greater Employee Satisfaction. One of the oft-cited byproducts of Agile development is greater employee satisfaction – both by the project team and the line-of-business responsible for delivering the product. According to Steve Greene and Chris Fry, Salesforce.com reported an 89% employee satisfaction rating after adopting Agile when compared to only 40% before adoption.

In a similar vein, research by Grigori Melnick and Frank Maurer from the University of Calgary showed 82% of employees at Agile-adopting businesses were satisfied or very satisfied with their jobs, while only 41.2% were satisfied or very satisfied in non-Agile shops (2006, Comparative Analysis of Job Satisfaction in Agile and Non-Agile Software Development Teams).

5. Higher Quality. By most accounts, adopting Agile reduces defects and results in higher product quality. While I have personally seen Agile projects head in the wrong direction and suffer from higher defect rates initially, many sources have noted significantly higher quality on Agile projects. According to a 2008 survey by Version One on Agile adoption and its results, 68% of respondents reported improved product quality as one of the benefits (3rd Annual Survey on the State of Agile Development). Similarly, David Rico et al. report, on average, a 75% improvement in quality from adopting Agile (The Business Value of Agile Software Methods, 2009, J. Ross Publishing).

6. Higher ROI. If there's one single reason for the corner office to be sold on Agile, it has to be higher ROI. Because Agile reduces project overhead, delivers beneficial work more quickly and produces higher quality products, Agile also delivers a higher ROI to the businesses that adopt it. According to research that compiled data from multiple sources, including Microsoft, Version One and the University of Maryland, Agile projects average a 1,788% ROI, compared with Waterfall projects at 173% (The Business Value of Agile Software Methods, 2009, J. Ross Publishing). While these numbers appear to be skewed toward the low side for Waterfall because the comparison only included CMMI-adopting organizations, that hardly makes up for a ten-fold difference between the two.
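For readers who want the arithmetic behind figures like these, here is the standard ROI calculation as a short Python sketch; the cost and benefit amounts are illustrative assumptions, not data from the cited study:

```python
# Standard ROI arithmetic: net benefit divided by cost, as a percentage.
# The cost and benefit figures below are illustrative assumptions.

def roi_percent(total_benefit: float, total_cost: float) -> float:
    return (total_benefit - total_cost) / total_cost * 100

# A $200,000 project returning $3.8M of benefit:
print(f"{roi_percent(3_800_000, 200_000):.0f}% ROI")  # 1800%
# The same spend returning $550,000 of benefit:
print(f"{roi_percent(550_000, 200_000):.0f}% ROI")    # 175%
```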

With all of this evidence residing squarely in the corner “for” Agile adoption, it’s sometimes hard to understand why Waterfall is still practiced. But the truth is, adopting Agile takes a paradigm shift in thinking that is not easy for individuals, much less organizations, to make. It also takes experience not only in practicing Agile, but also in managing organizational change, two qualities critical in Agile consultants.

This is why the Cedar Point Consulting team tailors its Agile implementations to each organization, choosing the tools and techniques that best match your industry and needs so that you avoid many of the pitfalls and have a successful adoption. It’s also why I have personally put so much time and effort into making Agile even more robust, not only by exploring Agile at Scale, but also by off-setting some of Agile’s weaknesses with common-sense approaches that nearly every business can implement.

So, go ahead, make the leap to Agile. Just be certain you’re taking the right approach to Agile adoption for your organization before you begin.

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in project management, process improvement, and small business strategy. Cedar Point Consulting can be found at https://cedarpointconsulting.com.

 


 

 


Does Process Improvement Kill Creativity?

Early in my career, ISO-9000 was just coming into vogue and my employer, a Fortune 500 company, had earned the honor of being called ISO-9000 certified. To say the least, the ISO-9000 concept was a little irritating to a young creative type: processes are documented, standardized, and followed without deviation, because deviation yields an inconsistent outcome and inconsistent quality. Even worse, ISO-9000 principles were being applied not to manufacturing but to services, where the human factor is so important. While people certainly admire the fact that a Hershey bar has the same consistently delicious taste, would they feel the same if the service rep answered the phone in an identical manner every time, smiled at visitors in an identical manner and greeted them with the same Mr. or Ms. in the same robotic way? Somehow, ISO-9000 seemed to be forcing the soul out of services and driving creativity out of the American worker. This would not stand.

A big believer in creativity and diverse thinking, I know that the world's greatest innovations come from ignoring conventional wisdom and trying something a different way, so the question, "Does Process Improvement Kill Creativity?" is not trivial. However, there is a way to balance the roles of quality and creativity in your business, though the answer comes from two disparate figures: Geoffrey A. Moore and Kiichiro Toyoda.

For those of you who don't know Moore, he's a business geek's ultimate hero — the man behind the technology adoption lifecycle, Crossing the Chasm, and Dealing with Darwin. It is in Dealing with Darwin that Moore introduces the concept of reallocating business resources from context to core. Context is all the work done by employees that does NOT separate your business from its competitors. Core represents all work that is critical to delivering your products or services uniquely; core helps to separate you from your competitors and is the leading driver of innovation. According to Moore, businesses spend far too much of their time (80%) on context activities and far too little (20%) on the core.

Let’s apply this to process improvement and process standardization.  These exercises provide a window for innovation, then they lock down a process so that it yields consistent results.  They also reduce a business’ emphasis on context activities by removing unnecessary steps and automating once-manual processes.  So, more time can be spent on the core, where a business can differentiate itself, developing new products or services with the creative mind.

Kiichiro Toyoda had a similar mindset nearly fifty years earlier when he developed the Kaizen philosophy of continuous improvement and the lean manufacturing concept targeting the elimination of waste.   Founder of Toyota Motor Corp, Toyoda had a keen eye that focused human efforts on eliminating waste and improving processes rather than perpetually repeating them without question.  Combined, Kaizen and lean are key reasons why Toyota leads in sales and product quality and why Toyota employees are among the happiest in the industry.

So, considering Toyoda and Moore when reflecting upon my past sins in the areas of process improvement and standardization, I’ve developed a few principles to keep in mind as we standardize:

(1) Wherever possible and cost-effective, automate.  There’s no sense in having people do work that a machine or computer can do faster and more consistently, especially when this is sure to dull the human capacity for innovation.  Instead, people should monitor repetitive processes, not do them.

(2) Involve workers and end-users in innovation.  Your best ideas often come from the line-worker, the front desk staff or a computer system’s end-users.  This also gives them an opportunity to flex their mental muscles.

(3) Focus your employees on creative efforts inside the core.  If you have people who are spending their time trying to marginally improve legacy products or services, redirect them to activities that create new products or radically transform current ones — efforts that will benefit most from the human capacity toward innovation.

(4) Leave room for creativity and individuality.  Where product quality won’t suffer and humans are involved in production, leave room for creativity and individuality.   This one is the hardest to follow, because we know that a consistent product is best created by a consistent process.  But, avoiding excessive detail in a process leaves room for grass-roots innovation and keeps the human mind engaged.

(5) Build a World that is Human-Centric.  Human beings are inherently creative and intuitive:  We move beyond patterns to think of completely different ways to solve a problem, create art or experience life.  All of the products, services and processes that we create need to remain human-centric, recognizing that they exist for the benefit of humans and to add value to the human experience.

Looking back at the list, there’s no guarantee that following these recommendations will bring harmony between high quality, innovation and human creativity.  But having the list makes Cedar Point Consulting teams aware of the need for balance, while providing a set of principles to follow and measure our progress. As a result, we do a better job of allowing creativity to flourish in a high-quality environment.

Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he assists organizations in applying Lean and Agile to develop new products and services as well as improve organizational performance. Cedar Point Consulting can be found at https://cedarpointconsulting.com.