Still debating build-versus-buy for your organization’s IT purchases? If so, you probably aren’t getting the biggest bang for your IT dollar: build-versus-buy is dead. For better decision-making when acquiring IT systems, forget build-versus-buy and remember the Technology Acquisition Grid. You’ll not only save money, you’ll make smarter decisions for your organization long term, increasing your agility and speeding time-to-market.
In this article, I describe Software-as-a-Service (SaaS), application hosting, virtualization and cloud computing for the benefit of CEOs, CFOs, VPs and other organization leaders outside of IT who often need to weigh in on these key technologies. I also describe how these new approaches have changed technology acquisition for the better, from the old build-versus-buy decision to the Technology Acquisition Grid. Along the way, you’ll learn some of the factors that will help you decide among the various options, saving your organization time and money.
The Old Model: Build-versus-Buy
When I earned my MBA in Information Systems in the mid-1990s, more than one professor noted that the build-versus-buy decision was a critical one because it represented two often-costly and divergent paths. In that model, the decision to “build” a new system from scratch gave the advantage of controlling the destiny of the system, including every feature and function. In contrast, the “buy” decision to purchase a system created by a supplier (vendor) brought the benefit of reduced cost and faster delivery: the supplier built the product in advance for many companies, then spread the development costs across multiple customers.
Back then, we thought of build versus buy as an either-or decision, like an on-off switch, something like this:
In the end, the build-versus-buy decision was so critical because, for the most part, once you made the decision to build or buy, there was no turning back. The costs of backpedaling were simply too high.
The Advent of Application Hosting, Virtualization, SaaS and Cloud Computing
During the 2000s, innovations like application hosting, virtualization, Software-as-a-Service (SaaS) and cloud computing changed IT purchasing entirely, from traditional build-versus-buy to a myriad of hosting and ownership options that reduce costs and speed time-to-market. Now, instead of resembling an on-off switch, the acquisition decision started to look more like a sliding dimmer switch on a light, like this:
Suddenly, there were more combinations of options, giving organizations better control of their budgets and the timeline for delivering new information systems.
What are these technologies, and how do they affect IT purchasing? Here’s a brief description of each:
During the dot-com era, a plethora of application service providers (ASPs) sprang up with a new business model: they would buy used software licenses, host the software at their own facilities, and lease the licenses to their customers on a monthly basis. ASP customers benefited from a lower cost of ownership and reduced strain on IT staff to maintain yet another system, while the ASPs made money by pooling licenses across customers and putting often-idle software licenses to use.
While the dot-com bust put quite a few ASPs out of business, the application hosting model, where the software runs on hardware supported by a hosting company and customers pay monthly or yearly fees to use the software, still survives today.
One of the first technologies to change the build-versus-buy decision was virtualization. By separating the hardware from the software, virtualization separates the decision to buy from the need for new software. In virtualization, first, computer hardware is purchased to support the organization’s overall technology needs. Then, a self-contained version of a machine – a “virtual” machine – is installed on the hardware, along with application software, such as supply chain or human resources software, that the business needs at that point in time.
When the organization needs a new software application that is not compatible with the first, say, because it runs on another operating system, it installs another virtual machine and another application on the same hardware. By doing this, the organization delivers software applications more quickly, because it doesn’t need to buy, install and configure hardware for every application; it also spends less on hardware, because it can add virtual machines to take advantage of unused processing power.
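To make the hardware savings concrete, here is a minimal sketch, with entirely hypothetical core counts and workloads, of the consolidation effect described above: without virtualization each application needs its own server, while with virtualization several virtual machines share each physical host’s otherwise idle capacity.

```python
# Hypothetical illustration of server consolidation via virtualization.
# Each number is the CPU-core demand of one application's virtual machine.

def hosts_needed(vm_cpu_demands, host_cpu_capacity):
    """Greedy first-fit packing of VM CPU demands onto identical hosts."""
    hosts = []  # remaining free capacity on each host already in use
    for demand in sorted(vm_cpu_demands, reverse=True):
        for i, free in enumerate(hosts):
            if free >= demand:
                hosts[i] -= demand  # place this VM on an existing host
                break
        else:
            hosts.append(host_cpu_capacity - demand)  # start a new host
    return len(hosts)

# Six applications: one server each without virtualization (6 servers),
# but only two 16-core hosts when the VMs share hardware.
apps = [8, 6, 4, 4, 2, 2]
print(hosts_needed(apps, 16))
```

The point is not the packing algorithm (real hypervisors schedule far more dynamically) but the budget arithmetic: idle capacity that would be stranded on dedicated servers gets used.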
Even better, virtual machines can be moved from one piece of hardware to another relatively easily, so like a hermit crab outgrowing its shell, applications can be moved to new hardware in hours or days instead of weeks or months.
Like virtualization, Software-as-a-Service, or SaaS, reduces the costs and time required to deliver new software applications. In the most common approach to SaaS, the customer pays a monthly subscription fee to the software supplier based on the number of users on the customer’s staff during a given month. As an added twist, the supplier hosts the software at their facilities, providing hardware and technical support, all within the monthly fee. So, as long as a reliable Internet connection can be maintained between the customer and the SaaS supplier, the cost and effort to support and maintain the system are minimal. The customer spends few resources and worries little about the software (assuming the SaaS supplier holds up its side of the bargain), enabling the organization to focus on serving its own customers instead of on information technology.
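The per-user-per-month model described above can be sketched in a few lines. All figures below are hypothetical, and the simple up-front-license comparison is my own illustration, not a pricing formula from any particular supplier:

```python
# Hypothetical comparison: SaaS subscription vs. traditional licensing.

def saas_cost(users_per_month, fee_per_user):
    """Total subscription cost: pay only for users active each month."""
    return sum(users * fee_per_user for users in users_per_month)

def licensed_cost(peak_users, license_price, annual_support_rate, years):
    """Up-front licenses sized for peak headcount, plus yearly support."""
    upfront = peak_users * license_price
    return upfront + upfront * annual_support_rate * years

# One year with seasonal staffing: headcount varies between 40 and 60,
# so traditional licensing must be bought for the peak of 60.
monthly_users = [40, 40, 45, 50, 60, 60, 55, 50, 45, 40, 40, 40]
print(saas_cost(monthly_users, 30))       # pay-as-you-go subscription
print(licensed_cost(60, 400, 0.20, 1))    # buy licenses for the peak
```

The subscription tracks actual usage month by month, which is why SaaS tends to favor organizations with variable headcount or uncertain adoption.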
The most recent of these innovations, cloud computing brings together the best qualities of virtualization and SaaS. Like SaaS, with cloud computing both hardware and software are hosted by the supplier. However, where the SaaS model is limited to a single supplier’s application, cloud computing uses virtual machines to host many different applications with one (or a few) suppliers. Using this approach, the software can be owned by the customer but hosted and maintained by the supplier. When the customer needs to accommodate more users, the supplier sells the customer more resources and more licenses “on demand”. Depending upon the terms of the contract, either the customer’s IT staff or the supplier maintains the hardware. In addition, in most cases, the customer can customize the software for its own needs, to better serve its own customers.
Adding Application Hosting, Virtualization and Cloud-Computing to the Mix – The Technology Acquisition Grid
Remember the dimmer switch I showed a few moments ago? With the addition of application hosting, virtualization, SaaS and cloud computing to the mix, it’s not only possible to choose who owns and controls the future of the software; it’s also possible to decide who hosts the software and hardware (in-house or with a supplier), as well as how easily it can be transferred from one environment to another. That is, it’s now a true grid, with build-to-buy on the left-right axis and in-house-to-hosted on the up-down axis. The diagram below shows the Technology Acquisition Grid, with the four main combinations of options to consider when acquiring technology.
Here’s where application hosting, SaaS, virtualization and cloud computing fit into the Technology Acquisition Grid:
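The quadrant placements can also be summarized in text. The sketch below is my own reading of where each approach lands on the two axes the article defines (build-to-buy, in-house-to-hosted), not a reproduction of the article’s diagram:

```python
# One reading of the Technology Acquisition Grid's four quadrants,
# keyed by (ownership axis, hosting axis). Placements are my summary.

GRID = {
    ("build", "in-house"): "Custom software on your own hardware "
                           "(classic build, often on virtual machines)",
    ("build", "hosted"):   "Custom or customer-owned software run by a "
                           "supplier (cloud computing)",
    ("buy", "in-house"):   "Packaged software on your own hardware "
                           "(classic buy, often virtualized)",
    ("buy", "hosted"):     "Packaged software run by a supplier "
                           "(application hosting, SaaS)",
}

for (ownership, hosting), example in GRID.items():
    print(f"{ownership:5} / {hosting:8} -> {example}")
```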
Making the Decision to Host, Virtualize, Go SaaS or Seek the Cloud
If the rules of the game have now changed so much, how do we make the decision to use virtualization, application hosting, SaaS or cloud computing, as opposed to traditional build and buy? There seem to be a few key factors that drive the decision.
At the most basic level, it comes down to how much control, and how much responsibility, your organization wants over the development of the software and the maintenance of the system. Choose an option in the top-left of the Technology Acquisition Grid and you have greater control of everything; choose an option at the bottom-right and you have far less control, and far less responsibility, for the system.
In my own experience advising clients on technology acquisition and leading technology initiatives, decision-makers tend to choose a “control everything” solution because it’s the easiest to understand and appears to pose the least risk. While this may, in the end, be the best answer, organizations should weigh the other options as well. More control usually sounds appealing, but it almost always comes with much higher costs and delays use of the system by months. Particularly for smaller organizations, which probably need those IT dollars to serve their own customers more effectively, a “control everything” answer is often the wrong decision.
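One way to make that weighing explicit is a simple weighted-scoring sheet over the factors the article raises: control, cost, time-to-market and risk. The weights and scores below are purely illustrative assumptions of mine, not figures from the article; the point is the exercise, not these particular numbers.

```python
# Hypothetical weighted scoring of the four acquisition quadrants.
# Weights reflect one organization's priorities; scores run 1 (worst)
# to 5 (best) and are illustrative only.

CRITERIA_WEIGHTS = {"control": 0.2, "cost": 0.3,
                    "time_to_market": 0.3, "low_risk": 0.2}

OPTIONS = {
    "build in-house": {"control": 5, "cost": 1, "time_to_market": 1, "low_risk": 4},
    "buy in-house":   {"control": 3, "cost": 3, "time_to_market": 3, "low_risk": 4},
    "cloud (hosted)": {"control": 4, "cost": 4, "time_to_market": 4, "low_risk": 3},
    "SaaS":           {"control": 2, "cost": 5, "time_to_market": 5, "low_risk": 3},
}

def score(option):
    """Weighted sum of this option's criterion scores."""
    return sum(CRITERIA_WEIGHTS[c] * v for c, v in OPTIONS[option].items())

for name in OPTIONS:
    print(f"{name:15} {score(name):.2f}")
print("highest score:", max(OPTIONS, key=score))
```

With these particular weights the low-cost, fast-to-market options win; an organization that weights control and risk more heavily would, of course, get a different answer, which is exactly what the exercise is meant to surface.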
Which should your organization choose? Start by including software products that take advantage of hosting, virtualization, SaaS and cloud computing among your choices when you begin your search. Then, weigh the benefits and downsides of each option and combination of options, choosing the one that balances cost and time-to-market with your own customers’ needs and your tolerance for risk. A good consulting firm like Cedar Point Consulting can help you do this, as can your organization’s IT leadership. Using this approach, you’ll free yourself from the old rules of build-versus-buy, delivering more for your own customers at a much lower cost.
Donald Patti is a Principal Consultant with Cedar Point Consulting, a management consulting practice based in the Washington, DC area, where he advises businesses in technology strategy, project management and process improvement. Cedar Point Consulting can be found at https://cedarpointconsulting.com.