What’s your customers’ single point of failure?

“A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire system from working. They are undesirable in any system with a goal of high availability or reliability, be it a business practice, software application, or other industrial system.”

Thinking about customer needs when trying to create a successful startup? Try putting yourself in the customer’s shoes and asking: what is their single point of failure? Which component or process, if it went down, would potentially take the whole business with it?
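To make the idea concrete in software terms, here is a minimal sketch of my own (not part of the original story): model the business as an undirected dependency graph, and a single point of failure is an articulation point, a node whose removal disconnects the graph. All the node names below, including the toy booking-flow example, are hypothetical.

```python
from collections import defaultdict

def articulation_points(graph):
    """Return the set of nodes whose removal disconnects an undirected graph."""
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(node, parent):
        disc[node] = low[node] = timer[0]
        timer[0] += 1
        children = 0
        for nxt in graph[node]:
            if nxt == parent:
                continue
            if nxt in disc:
                # Back edge: we can reach an earlier node without the parent.
                low[node] = min(low[node], disc[nxt])
            else:
                children += 1
                dfs(nxt, node)
                low[node] = min(low[node], low[nxt])
                # A non-root node is a cut point if some child subtree
                # cannot reach back above it except through it.
                if parent is not None and low[nxt] >= disc[node]:
                    points.add(node)
        # The root is a cut point only if it has two or more DFS children.
        if parent is None and children > 1:
            points.add(node)

    for node in list(graph):
        if node not in disc:
            dfs(node, None)
    return points

# Hypothetical booking flow: one agent, two booking channels, one supplier.
edges = [("customer", "agent"), ("agent", "online_booking"),
         ("agent", "phone_booking"), ("online_booking", "supplier"),
         ("phone_booking", "supplier"), ("supplier", "hotel")]
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

print(sorted(articulation_points(graph)))  # ['agent', 'supplier']
```

Notice that with two booking channels in the toy graph, neither channel alone is a single point of failure; the agent and the supplier, where everything funnels through one node, both are.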

Do I have an example? A few years ago, while working for an award-winning ski holiday company, I was presented with a ‘wicked problem’. Twin sisters I thought I had booked into a twin room were now told that they would have to share a double bed, as there were no twin rooms left. When I informed the pair, the sisters were not happy at the bed-sharing prospect.

So I went back to the holiday supplier and suggested to the operator that they stand by the confirmation of the twin-bed booking. They refused. They said that, as I had made the booking over the phone rather than through the preferred online system (which was ‘down’ at the time of booking), they would not honour it.

So I asked the company directors, who had many years of experience, for guidance on resolving the problem. They were also baffled, suggesting that I might pitch human rights law at the holiday company to get them to budge on the issue. While I am not averse to using a big concept to solve a small problem, it didn’t quite seem the right approach.

After sleeping on the issue, I came back the next morning and drafted a fax to the company. In it I simply asked whether their response meant they regarded telephone bookings and online bookings as two separate, distinct systems.

I waited. A few hours later they had a ‘surprise’ change of mind, and the twin sisters got their twin hotel room. It reminded me of the chirpy catchphrase we used in the office with travel customers: “we can request, but we can’t guarantee”.


Systemantics and online communities

OK, it’s a long list, but it’s a pretty useful one when thinking about designing online communities, for example! It comes from John Gall’s Systemantics. As a planning tool, how about considering where your own approach might fit into these, good or bad?

1. The Primal Scenario or Basic Datum of Experience: Systems in general work poorly or not at all. (Complicated systems seldom exceed five percent efficiency.)
2. The Fundamental Theorem: New systems generate new problems.
3. The Law of Conservation of Anergy [sic]: The total amount of anergy in the universe is constant. (“Anergy” = ‘human energy’)
4. Laws of Growth: Systems tend to grow, and as they grow, they encroach.
5. The Generalized Uncertainty Principle: Systems display antics. (Complicated systems produce unexpected outcomes. The total behavior of large systems cannot be predicted.)
6. Le Chatelier’s Principle: Complex systems tend to oppose their own proper function. As systems grow in complexity, they tend to oppose their stated function.
7. Functionary’s Falsity: People in systems do not actually do what the system says they are doing.
8. The Operational Fallacy: The system itself does not actually do what it says it is doing.
9. The Fundamental Law of Administrative Workings (F.L.A.W.): Things are what they are reported to be. The real world is what it is reported to be. (That is, the system takes as given that things are as reported, regardless of the true state of affairs.)
10. Systems attract systems-people. (For every human system, there is a type of person adapted to thrive on it or in it.) [eg: watch out for contributors who dominate your community]
11. The bigger the system, the narrower and more specialized the interface with individuals.
12. A complex system cannot be “made” to work. It either works or it doesn’t.
13. A simple system, designed from scratch, sometimes works.
14. Some complex systems actually work.
15. A complex system that works is invariably found to have evolved from a simple system that works.
16. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system.
17. The Functional Indeterminacy Theorem (F.I.T.): In complex systems, malfunction and even total non-function may not be detectable for long periods, if ever.
18. The Newtonian Law of Systems Inertia: A system that performs a certain way will continue to operate in that way regardless of the need or of changed conditions.
19. Systems develop goals of their own the instant they come into being.
20. Intrasystem [sic] goals come first.
21. The Fundamental Failure-Mode Theorem (F.F.T.): Complex systems usually operate in failure mode.
22. A complex system can fail in an infinite number of ways. (If anything can go wrong, it will.) (See Murphy’s law.)
23. The mode of failure of a complex system cannot ordinarily be predicted from its structure.
24. The crucial variables are discovered by accident.
25. The larger the system, the greater the probability of unexpected failure.
26. “Success” or “Function” in any system may be failure in the larger or smaller systems to which the system is connected.
27. The Fail-Safe Theorem: When a Fail-Safe system fails, it fails by failing to fail safe.
28. Complex systems tend to produce complex responses (not solutions) to problems.
29. Great advances are not produced by systems designed to produce great advances.
30. The Vector Theory of Systems: Systems run better when designed to run downhill.
31. Loose systems last longer and work better. (Efficient systems are dangerous to themselves and to others.)
32. As systems grow in size, they tend to lose basic functions.
33. The larger the system, the less the variety in the product.
34. Control of a system is exercised by the element with the greatest variety of behavioral responses.
35. Colossal systems foster colossal errors.
36. Choose your systems with care.