In Dutch we have a saying: Trust comes on foot and leaves on horseback. Trust is exceedingly fragile. The projects that I have seen fail almost always failed because of a lack of trust between domain experts and software engineers. Without trust we cannot be certain that the code we deliver does the right thing.
Measuring the soft side
In my earlier blog Can software teams deliver basic quality? I drew the conclusion that we need Domain Engineers to join projects to bridge the information gap between domain experts and software engineers. Developing software requires knowledge in two domains: the subject matter and engineering. Most members of a team are expert in one and not the other. How do you deal with issues like the transfer of knowledge between those domains, and with effective collaboration and communication between software engineers and subject-matter experts? In this blog I show what happens in projects without Domain Engineers.
In virtually all projects a subject-matter expert has essential knowledge that needs to be codified by a software engineer. It does not matter whether the subject-matter experts are academic scholars expert in the correspondence between Italian merchants in the early Renaissance, or safety specialists working on a project to build a new registration system for fire stations on oil rigs. In every instance you need to confirm that knowledge was transferred reliably, and in such a way that the entire team can trust that the software application implements it consistently. Through unit and integration tests we continuously assert that the software does things right. But how do we confirm that the code consistently does the right things, even after months and years of continued development?
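To make that distinction concrete, here is a minimal sketch in Python. The rule, the names and the thresholds are invented purely for illustration: the unit test can confirm that the code keeps doing the same thing, but it cannot confirm that what it keeps doing is what the safety specialist actually meant.

```python
import math

# Hypothetical example: a rule for fire stations on oil rigs, codified by a
# software engineer after a conversation with a safety specialist. Everything
# here (function name, threshold, minimum) is an assumption for illustration.

def extinguishers_required(deck_area_m2: float) -> int:
    """Number of extinguishers required for a deck.

    Encodes the engineer's *interpretation* of the rule:
    one extinguisher per 50 m2 of deck area, rounded up, with a minimum of two.
    """
    return max(2, math.ceil(deck_area_m2 / 50))


def test_extinguishers_required():
    # These assertions prove the code consistently does the same thing:
    # they will catch any regression against the rule as it was written down.
    assert extinguishers_required(40) == 2
    assert extinguishers_required(120) == 3

    # What no test in this file can prove is that "one per 50 m2, minimum two"
    # is what the safety specialist actually meant. If the real rule were one
    # per 25 m2, every test would still pass and the application would be wrong.
```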
Sound knowledge transfer is crucial for the delivery of software development projects. And we have no mechanisms to test it.
Hope versus Trust
The most successful projects I was part of were those in which an interested team of software engineers acquired domain knowledge. Projects in which the developer and domain expert almost became peers. Why? Because at that point they shared a language, and the developer could make reliable judgments about the consequences of changes in the codebase.
Sharing a language is key: in one instance I encountered a lead engineer who had learned 17th-century Latin in his spare time, purely in order to (quite literally) better understand the subject-matter experts in a historical archive. The project was a great success because of his personal interest. But learning Latin is not a very sustainable approach to software development. And without a shared language the subject-matter experts can only hope that the engineers understand them. They cannot trust it.
Symptoms of Hopitis
A major indication that a project is suffering from what I call Hopitis is the inclination to retest everything with every change. I don't mean automated testing (which is good) but user testing by the domain experts themselves. They insist on clicking and calculating their way through the entire application to confirm that a change has no unintended consequences. Many subject-matter experts feel that such an exercise is the only way to confirm their hope that everything is still fine. In a project in San Francisco I learned that this has a name: dogfooding. Every week, before the release of the latest version, the team would follow written scenarios and click through the entire application to confirm that everything was working as it should. Dogfooding works. It is extremely tedious, but it works.
Another symptom of Hopitis manifests itself after the project is done and team members have moved on. There is no trail of trust left behind. How can the next team of engineers, asked to add a new feature, do anything but hope that the transferred knowledge was reliably codified? How can they trust the existing codebase? Yes, there may be documentation or even automated unit tests, but those prove that the application consistently does the same thing right. They do not prove that the application does the right thing. It is very complicated to prove that all the changes and adaptations to the code, written by multiple engineers over months, correctly reflect the knowledge of the subject-matter expert. And suppose the experts are still around: how likely is it that they, without technical skills, can reliably confirm that their knowledge was properly translated?
Without a trail of trust a new team can hardly be expected to use old code. Still, we ask them to all the time. Software managers look despairingly at teams who don't, and complain that the software engineers suffer from the 'not-invented-here syndrome'. But they are not at fault: the preceding project suffered from Hopitis. We cannot blame a new team for not trusting that the code does what it is supposed to do. So before adding new features you either have to throw potentially solid code overboard, or go through an expensive and time-consuming assessment process, which more often than not concludes that it is 'complicated' and that, just to be on the safe side, a redesign and partial rewrite is recommended. All because we lack a trail of trust proving that the subject-matter experts reliably transferred their knowledge to the software developers.
Doing things right versus doing the right thing
We have many ways of proving that code does things right: unit tests, integration tests, QA teams, and documentation. Yet we often find it complicated to apply those consistently in every project and codebase that we build for a user. But there is no way to prove that the code does the right things. We need to fix that. Domain Engineers have a crucial role to play here.
In the last blog in this series on Domain Engineers, How do you Verify Trust, I explore how Domain Engineers can help to verify trust within the team in a quantifiable way.