Ditch the Demo: Buying a Visibility Solution in the Blind

I have participated in at least twenty serious supply chain software selection events, both as a buyer and a seller. In every case the system demo was a critical part of the evaluation, and it often weighed heavily in the final selection or the price paid. As a buyer, the deep-dive demo with my own data loaded in the software was the apex of my interaction with the vendor. After the demo (which might be spread across several days on-site), the vendor mostly heard from me only to address follow-up questions or to negotiate pricing and rollout. When I have helped sell supply chain software, demo preparation and execution was likewise a major effort for the pursuit team: we invested a great deal of staff time in setting up, rehearsing, and polishing the demo. On both sides of the table, the unstated assumption is that a demo proves what the software is capable of doing.

But now I strongly advise buyers against software demos. They are intuitive and satisfying, but unproductive at best and misleading at worst. This article lays out my thinking on why supply chain software demos are a bad idea and what to replace them with to achieve better selection outcomes. We'll start with why the buying side almost always wants a demo, then look at why demos are a bad idea, and end with the activities that could replace them to better effect.

Why buyers ask for demos:

Buyers have three main drives behind their focus on the software demo: (1) gaining direct access to the product to avoid being oversold, (2) reducing their evaluation effort, and (3) familiarity, because it mimics how they buy personal software.

First, buying teams like demos because they feel like direct access to the product. Said another way, a demo makes it harder for them to be tricked or over-sold on the software's capabilities. It's easy for a vendor to say "we offer that function", but harder to demonstrate it if the function is really missing. Buyers think a software demo makes the selection process safer, and they are likely to have this idea reinforced by colleagues or their reporting line. A buyer who acquires software without having seen it in a demo is considered reckless, or suspected of having a too-friendly relationship with the vendor.

Second, a software demo is satisfying because it is graphic. Human beings are incredibly dependent on visual input, and a demo caters to this. Our natural fluency with visual input means we can engage in visual evaluations or tasks with less effort, and for longer, than abstract discussion or rigorous logical thinking. For example, we can skim or scan an image much faster than we can read text, and with less fatigue over long periods. Watching an eight-hour software demo (with occasional breaks) is fairly common. How many software selection teams sit in conference rooms reading a document together for eight hours? I've never heard of that happening.

Finally, buyers of supply chain software are acclimated to a selection and buying process based on their personal software buying behavior. Buying a TMS or WMS is nothing like buying apps for a mobile phone or new software for a personal laptop, but the approach to evaluating those products spills into the enterprise software space, and personal software is almost always selected after a trial or demo.

Why vendors give demos:

Vendors mostly provide software demos because buyers push for them, but they also propose demos for their own reasons. First, in any given market segment there will be one supply chain software vendor with the most attractive visual user interface or user experience. This is something they have invested in, perhaps expressly to win new business, and they will want to demo their software so that it influences the sales cycle. Even vendors who have not invested heavily in the user experience may want the buyer to receive demos, simply because the competition's system is even worse. Enterprise software is not known for nice GUIs or human factors, so vendors plant landmine questions or provocations with the buying side regarding a competitor's demo. As an example: "you might ask to see how a new supplier is set up; we've heard it's not possible without an IT super user and some code-level changes". In this regard, vendors also assume that a demo makes it harder to over-sell system capabilities. Finally, software vendors consider the demo the best way to address open questions or concerns about functionality, enabling them to move on to the other aspects of a potential deal. Within a vendor's pursuit team, the demo is a gating step that must be passed successfully before the salesmanship (on which the enterprise software business is so intimately built) can be brought to fruition.

Why demos fail to meet buyers' needs:

Software selection is all about identifying the fit between business needs and the software being purchased. No software will be perfectly suited to an existing or envisioned process, but it is the selection committee's job to buy the software with the best fit, where the gaps are known and addressable. I propose that supply chain software demonstrations fail to meet buyers' needs because they do not increase the chances of finding a good fit. There are three ways fitness is obscured by a demo: (1) over-confidence in the demo's veracity, (2) over-weighting the importance of look-and-feel, and (3) the inherent ambiguity about software capabilities that a demo leaves behind. Let's look at each in turn:

First, buyers are overly confident about their ability to discern software capabilities during a demo. In part, this is just human nature: we are fooled by visual illusions and cognitive biases all the time. But software vendors are no fools either; they have been playing this game as long as the buyers, and by now they can often finesse their demos. This is not to say that vendors cheat or lie, just that a demonstration is the tip of a process iceberg. Beneath the surface are many things the buyer cannot see and would never think to ask about. Demos are incomplete usage experiences by definition, and buyer-side staff often forget or under-estimate the extent to which the unseen parts are critical to real fitness for business purpose.

Second, buyers not only over-estimate how well they understand the software based on the demo, they are also heavily influenced by the look-and-feel of the demo experience. If the look-and-feel was great (fast response, clean workflow, pleasing graphics, no crashes or bugs), they tend to conflate the satisfying demo experience with software fitness. In reality, the two have almost nothing in common. A demo that is slow, crashes often, and looks ugly with an out-of-date color scheme may nonetheless represent software exactly suited to the business needs of the buyer's supply chain. This is because (1) graphic appeal is not a critical success factor for most supply chain software, and (2) a demo experience is not the same as the user experience.

There are simply few reasons why enterprise software needs to be graphically pleasing. Most personal software is graphically pleasing as a virality or customer-acquisition strategy. Would you download an app if the screenshots in the app store looked awful? Probably not. This drives app developers to invest in beautiful user interfaces, which in turn makes most people feel that all software should have pretty interfaces. It's time to get over that: the GUI aesthetic is irrelevant unless it impacts learnability or usability. For this reason, a monochrome screen with text at the bottom saying "press F1 to select shipment origin" is actually superior to a richly interactive zoomable map where the user right-clicks a location to mark it as an origin. The monochrome screen is dead-simple to use and self-teaching. The map is slower to use and requires that, at least once, the user be given additional instructions, since the screen itself doesn't tell them what to do.

But more important is the fact that a demo experience is not a user experience, however much we try to engineer it to mimic one. Users have skills, experience, environmental factors, and immediate task needs or priorities that are simply impossible to predict or simulate in a demonstration to a buying committee. A satisfying demo experience may have no correlation with the eventual user experience. Buying teams forget this, and often forget that their own satisfaction is irrelevant to the software's ultimate fitness or success. Things that annoy buyers, such as a vendor failing to follow the demo sequence they were asked to use, are of no consequence to the outcome of the selection process.

Lastly, demos are inherently ambiguous in terms of software capabilities. This is counter-intuitive, because most buyers feel that a demonstrated capability is more likely to be "really there". They feel that a picture is worth a thousand words, and a demo is worth far more than descriptions or PPT slides. The truth is the opposite: a demo leaves many functional questions open. If a demo includes a screen with a quality-assurance form, does that mean the vendor supports quality-assurance checks? If the software produced an email alert during the demo, does that mean it supports alerting based on business events? Buyers fool themselves (sometimes with the vendor's help) into accepting the demo as assurance of capabilities. The real assurance of capabilities is just that: a written, legally binding assurance of capabilities. Vendors really don't like providing such a document, and buyers should take note of that reluctance. If a vendor is unwilling to sign a contract addendum assuring certain functionality, the buyer should ask how they can feel confident in that functionality based on a demo alone.

Alternatives to Demos:

Let's assume you are willing to omit the software demo from your buying process: what should you add in its place? I have a few suggestions that should increase the probability of finding a strong software fit. They come down to three distinct tactics: (1) segregate and quantify the user experience evaluation, (2) enforce vendor commitment to functionalities, and (3) be self-aware about your company's tech adoption style.

First, supply chain software often has deep human touch points, and these naturally need to be evaluated for usability. For example, if a WMS will direct the work of 100+ DC staff, the user experience must be critiqued carefully. The most effective way to do so is to segregate the team doing that evaluation. Split off a dedicated team, staffed with as many end-users as possible. This team evaluates the vendors only in terms of user experience, chiefly on the dimensions of learnability and usability. The best approach is to decide in advance which user tasks will make up the majority of interactions with the new software, then ask the usability team to agree on three figures: (1) can the task be self-learned by a user with no previous experience, and if so, how much time is needed; (2) if a user needs direct training, how much time with an instructor is needed, and at what ratio of learners to instructor; and (3) how long does it take a user to execute the task once it is well learned. The usability team returns to the larger buying committee with these figures, which turn the usability question into a fairly straightforward running cost of the software. Make sure the usability group doesn't receive information irrelevant to its role, such as costs or non-usability feature differences. Likewise, the larger buying committee shouldn't be looking at screenshots in PowerPoint presentations: ask the vendor to take them all out. Having no view of the software sharpens the committee's focus on the other areas they should be evaluating.
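As a minimal sketch of the arithmetic (all wage rates, task volumes, and timings below are hypothetical inputs a usability team might report, not figures from any real evaluation), the three figures roll up into cost like this:

```python
# Sketch: converting the usability team's three figures into money.
# Every number here is a hypothetical placeholder.

def annual_task_cost(users, executions_per_user_per_day, seconds_per_execution,
                     workdays_per_year=250, hourly_rate=25.0):
    """Recurring labor cost of one well-learned task across the user population."""
    hours_per_year = (users * executions_per_user_per_day * seconds_per_execution
                      / 3600 * workdays_per_year)
    return hours_per_year * hourly_rate

def training_cost(users, training_hours, learners_per_instructor,
                  hourly_rate=25.0, instructor_rate=40.0):
    """One-time cost to train all users, including instructor time."""
    learner_cost = users * training_hours * hourly_rate
    instructor_cost = (users / learners_per_instructor) * training_hours * instructor_rate
    return learner_cost + instructor_cost

# Example: 100 DC staff confirm a pick 200 times a day at 8 seconds each,
# after a 4-hour instructor-led training at a 10:1 learner ratio.
recurring = annual_task_cost(users=100, executions_per_user_per_day=200,
                             seconds_per_execution=8)
one_time = training_cost(users=100, training_hours=4, learners_per_instructor=10)
print(f"recurring: ${recurring:,.0f}/year, training: ${one_time:,.0f} one-time")
```

With figures like these in hand, a vendor whose task takes 8 seconds versus a competitor's 12 seconds shows up as a concrete difference in annual running cost, not as a vague impression from a demo.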

Second, for functionalities that are critical and can be well defined, simply insist that the vendor legally commit to their existence and availability. This will be hard because (1) buyers often have vaguely defined functional needs, and (2) vendors loathe signing a functional summary document that could be used against them later. Buyers have to take the first step and clean up their functional needs to the point where a vendor can safely confirm or reject them. For example, "ability to create a purchase order via interface" is far too ambiguous. What data elements will appear, and in what format? Are there unique keys? Are there connections to other data objects? Do some elements need to be checked upon receipt? At what speed must it be processed? These and a dozen other key questions must be answered before a vendor could commit to the functionality. Obviously, an entire software solution cannot be locked down like this, but any point that was going to be "verified" by a demo can and should be handled this way instead. It forces cleaner thinking by the buyer, avoids the risk of over-selling by the vendor, and prevents ambiguity about what is in or out of the software's scope.
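To illustrate what "pinned down to the point of being confirmable" might look like, here is a sketch that turns the vague purchase-order requirement into a checkable specification. The field names, formats, and rules are entirely hypothetical, invented for illustration; the point is that each rule is precise enough for a vendor to confirm or reject in writing.

```python
# Sketch: a vague requirement ("create a purchase order via interface")
# restated as precise, testable rules. All fields and formats are hypothetical.

REQUIRED_FIELDS = {
    "po_number": str,    # unique key, format "PO-" followed by digits
    "supplier_id": str,  # must reference an existing supplier record
    "currency": str,     # ISO 4217 currency code
    "lines": list,       # at least one line item required
}

def validate_po(payload):
    """Return a list of spec violations; an empty list means the payload conforms."""
    errors = []
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            errors.append(f"wrong type for field: {field}")
    if isinstance(payload.get("po_number"), str) and not payload["po_number"].startswith("PO-"):
        errors.append("po_number must match format PO-<digits>")
    if isinstance(payload.get("lines"), list) and len(payload["lines"]) == 0:
        errors.append("at least one line item required")
    return errors

ok = validate_po({"po_number": "PO-00001234", "supplier_id": "S-77",
                  "currency": "EUR", "lines": [{"sku": "A1", "qty": 10}]})
print(ok)  # prints [] — the payload meets every rule in the spec
```

A requirement stated at this level of precision is something a vendor can sign off on in a contract addendum; "supports purchase orders" is not.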

Third, buyers should use self-awareness about their company's technology adoption style to better run their buying cycle. In other words, is your company a bleeding-edge early adopter, a middle-of-the-pack adopter, or a laggard? Each adoption style has an evaluation approach better suited to it. The classic RFP or RFQ process is best suited to middle-of-the-pack adopters, because they are buying software for a fairly well defined, industry-standard purpose, often as the second or later generation of customers. Their goal is to find and quantify the impact of the tech gaps, which they assume will exist but which will often not be mission critical. Middle-of-the-pack adopters rarely fail completely in their software selection. At worst they will over-pay or face more rollout issues than expected; it's quite rare that the system is completely ripped out and written off as a loss.

Early adopters should spend as little time as possible in pre-engagement evaluation and focus that time on two questions: (1) if this software worked, would it give me 10x benefits over the standard industry solution, and (2) what is the smallest and fastest way I could test or prototype this software in a real-world situation? At that level of tech maturity there is often no complete off-the-shelf product yet, so evaluating it in a slow RFP or RFQ cycle is ridiculous. Forget sending out an RFQ and waiting four weeks for responses, followed by on-site demos, followed by a final-round presentation. Instead, use those months to run fast prototyping or piloting with one or several promising bleeding-edge providers. For early adopters the point is not so much to evaluate fit prior to purchase as to make a series of rapid, sequentially larger investments via prototyping or piloting. This buying cycle doesn't involve a fixed large budget but rather a steady ramp-up of spending, with a ruthless commitment to cut spending on any software that fails to hit its prototyping goals.

For laggards the software evaluation process should be minimal. They can confirm vendor capabilities through references rather than through their own evaluation, and in any case their own teams are often not great at discerning the strengths and weaknesses of the market options (since they work at a laggard adopter). Rather than pretending their evaluation might uncover something the rest of the market missed, it's cheaper and faster to identify a shortlist of faster-adopting competitors and research what software they use. Invite those vendors to quote, then negotiate aggressively. In theory, the business requirements are so well known and mature that any vendor on the shortlist can handle the business. Make the decision on price or service level and move quickly to rollout.

Does tech adoption style really make a difference when buying supply chain software? My experience indicates that it absolutely does. When I was trying to sell bleeding-edge transport planning software I realized how important this was. Buyers who approached the evaluation with an RFP or RFQ process were frustrated and ultimately just slowed themselves down. New and highly innovative software doesn't have direct functional competitors; it competes for budget or attention. And cutting-edge software is often in a state of flux and will be adapted to the needs of its first adopters. A typical RFP or RFQ process is about verifying capabilities and finding the lowest-cost provider in the market. For cutting-edge supply chain software there is only one provider in the market, and they will build out capabilities based on their first customers' needs: the RFQ is a waste of time. On the other hand, while supporting software sales at Manhattan Associates I was occasionally involved in sales cycles with laggard tech adopters. They were replacing vintage systems with an industry standard like the WMOS warehouse management system, a system very widely used and rated a leader by analysts like Gartner. Yet their IT or logistics staff felt compelled to ask for demos and gave us questionnaires about basic functionality. These companies wasted their own time, and ours, by failing to be self-aware. All this is to say that a company's buying style really does need to match its technology adoption style.


Demos are an established ritual in the dance of buying and selling supply chain software. They are satisfying to the buyer because they appear to lower risk, give direct access to the product, and are easier to consume than dense written functionality summaries. Unfortunately, they increase the risk of a poor software fit by creating over-confidence in the buyer and distorting their sense of what makes the software good. They also leave critical points of functional scope ambiguous. The best thing a buyer or buying committee can do is remove system demos and screenshots from their vendor interactions. Set up a dedicated, structured evaluation team to assess the software user experience, primarily in terms of learnability and usability. The resulting assessment should be a quantitative estimate of the labor inputs needed to operate the system, making it a sub-item in the software's predicted running costs. For functionality that would normally be verified by a demo, ask the vendor to verify it in writing as an addendum to the purchasing agreement. This forces the buyer to be precise and the vendor to take responsibility for their functional scope. Finally, if you are either an early adopter or a laggard adopter of technology, use that knowledge to fine-tune the selection process to suit your needs. Early adopters should forget "evaluating before purchasing" and instead pursue any technology with 10x or greater improvement potential over industry-standard solutions, via fast and gradually larger prototypes or pilots. Laggard adopters should shorten their evaluation process and use a strategy of "cheap following", intentionally mimicking the software selections of shortlisted competitors. This lets them piggy-back on the sweat and occasional bloody mistakes of their competitors.
