In a recent article, I argued that the ability to automatically validate and evaluate the “English” language in technical documents not only lets us write better requirements, but also lets us establish clear, concise, and most importantly, verifiable goals. This translates directly into cost reductions during testing and validation, as the QA team creates test plans based on accurate, complete, and verifiable requirements.
The tool I introduced is Semios, from the French company Prometil. Initially targeted at formal requirements and used in the aerospace industry, Semios is put to a different test here: non-formal user stories.
Which user story?
To test Semios, I needed a user story, so I googled “user story example”. I ended up on Writing Great User Stories, where I discovered the following examples, organized by the author into three “quality categories”: Too Broad, Too Detailed, and Just Right.
To run them through Semios, I reformatted the stories in a Microsoft Word document (Semios can handle many formats, but it works best with a title associated with each requirement/story).
REQ.1. A team member can view iteration status.
REQ.2. A team member can view a table of stories with rank, name, size, package, owner, and status.
REQ.3. A team member can click a red button to expand the table to include detail, which lists all the tasks, with rank, name, estimate, owner, status.
REQ.4. A team member can view the iteration’s stories and their status with main fields.
REQ.5. A team member can view the current burn down chart on the status page, and can click it for a larger view.
REQ.6. A team member can view or hide the tasks under the stories.
REQ.7. A team member can edit a task from the iteration status page.
Let’s run Semios!
First, all the requirements were flagged as “Non-Mandatory”. According to IEEE standards and best practices in requirements writing, a requirement should be expressed using the modal “shall”. We will ignore those findings for now, as user stories are different and do not require “shall”. This check can be turned off through configuration.
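To make the idea concrete, here is a toy sketch of what a mandatory-modal check can look like. This is my own illustration, not Semios’s implementation; the requirement `REQ.X` is a hypothetical example added for contrast:

```python
import re

# Toy check, NOT Semios's implementation: flag requirements that do not
# use the mandatory modal "shall" (IEEE-style best practice).
def check_mandatory_modal(req_id: str, text: str) -> list:
    findings = []
    if not re.search(r"\bshall\b", text, flags=re.IGNORECASE):
        findings.append(f"{req_id}: Non-Mandatory (no 'shall' modal)")
    return findings

reqs = {
    "REQ.1": "A team member can view iteration status.",
    "REQ.X": "The system shall display the iteration status.",  # hypothetical
}
for rid, text in reqs.items():
    for finding in check_mandatory_modal(rid, text):
        print(finding)
# → REQ.1: Non-Mandatory (no 'shall' modal)
```

A real tool makes this configurable, as noted above, since user stories legitimately use “can” rather than “shall”.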
The output of Semios is as follows, where each finding is displayed as a “comment”:
A detailed review
Let’s explore Semios’ findings:
- REQ.1 is fine. The requirement states “A team member can view iteration status.” While it is broad (as the original author argued), it is a simple, easily verifiable requirement and is not flagged by Semios. Neither is REQ.2.
- REQ.3 was considered by the author to be “too detailed”. Semios states the opposite and flags it as having a verifiability issue because it refers to “all the tasks”. Semios checks whether the terms used in a requirement are clearly specified and correspond to a clear, specific concept. Expressions such as “all tasks”, “several methods”, or “every unit” should be replaced by “X and Y tasks” or “X and Y methods” to specify exactly which systems or methods the requirement concerns. In IEEE terms, these are vague and general terms and shall be avoided. In simpler terms, if you are a tester, how would you test that “all the tasks” are visible? Are you going to go one by one? Will you work from a list? Will your test case account for changes between each release?
- REQ.4 is flagged for increased ambiguity because of a pronoun with an unclear antecedent. When a personal pronoun is used in a requirement, it is difficult to identify exactly which element the pronoun refers to. When multiple elements are potential candidates for the reference, the use of pronouns compounds the ambiguity. In REQ.4, “their” may refer to the stories, the iteration, or the team member. We could assume that because “stories” is the only plural noun, “their” must refer to the stories, but then we must ask ourselves: is this the type of logic we want to rely on to clear up ambiguities? Writing “A team member can view the stories and the stories’ status with main fields.” would resolve it with no debate.
- REQ.5 is flagged for verifiability because it contains two requirements in one. If you are a tester, how would you go about validating REQ.5? What happens if only one of the two requirements can be validated? Is the requirement failed? Half-failed? Validated? The point is that packing two or more requirements into a single sentence is contrary to requirement-writing best practices. In that case, the validation tests for the related requirements become much harder to carry out, since each requirement in the sentence is difficult to distinguish from the other. If there are several requirements to express, several sentences are needed.
- REQ.5 is also flagged for the use of “it”, as “it” could refer to the “current burn down chart” but also to “the status page”. Which one should I click?
- Lastly, REQ.5 is flagged for the use of the term “larger”. This warning means that the requirement includes one or more terms with a fuzzy sense, and that the writer and the reader may not give them the same meaning. For a better understanding and execution of the requirement, it is advised to use unambiguous terms. Indeed, one of the characteristics of a well-formed requirement is that it is measurable. “Larger” could mean it has one extra pixel…
- REQ.6 is flagged for the use of the conjunction “or”. REQ.6 contains multiple requirements: “view the tasks” and “hide the tasks”. But what happens if the team member can view the tasks but not hide them? Is REQ.6 valid then? We can guess what the author meant, but not all cases are this obvious, and again, is this the type of logic we want to rely on? Using too many coordinating conjunctions in a requirement creates problems of readability, ambiguity, and complexity.
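The findings above boil down to recognizable patterns: vague quantifiers, ambiguous pronouns, fuzzy comparatives, and coordinating conjunctions hiding compound requirements. Here is a toy linter sketching those patterns. It is my own approximation, not Semios’s implementation, and the word lists are illustrative assumptions:

```python
import re

# Toy requirement linter, NOT Semios's implementation. The word lists are
# illustrative assumptions approximating the findings discussed above.
VAGUE_QUANTIFIERS = {"all", "several", "every"}
AMBIGUOUS_PRONOUNS = {"it", "they", "their", "them"}
FUZZY_TERMS = {"larger", "smaller", "faster", "better"}

def lint_requirement(req_id: str, text: str) -> list:
    findings = []
    words = re.findall(r"[a-z]+", text.lower())
    for word in words:
        if word in VAGUE_QUANTIFIERS:
            findings.append(f"{req_id}: Verifiability - vague quantifier '{word}'")
        if word in AMBIGUOUS_PRONOUNS:
            findings.append(f"{req_id}: Ambiguity - pronoun '{word}' may have an unclear antecedent")
        if word in FUZZY_TERMS:
            findings.append(f"{req_id}: Verifiability - fuzzy term '{word}' is not measurable")
    # Coordinating conjunctions often hide two requirements in one sentence
    # (a crude heuristic: field lists cause false positives).
    if re.search(r"\b(?:and|or)\b", text.lower()):
        findings.append(f"{req_id}: Verifiability - possible compound requirement ('and'/'or')")
    return findings

for line in lint_requirement("REQ.5", "A team member can view the current burn down chart "
                                      "on the status page, and can click it for a larger view."):
    print(line)
```

Run against REQ.5, this sketch produces three findings analogous to the comments discussed above (the pronoun “it”, the fuzzy term “larger”, and the compound “and”). A real tool needs genuine linguistic analysis, though: the naive conjunction heuristic would wrongly flag the field list in REQ.2.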
I hope you enjoyed those simple examples. So, does Semios for Requirements work on non-safety-critical user stories? It seems like it does. Would those findings help your testers and developers? Of course, we have only tested simple requirements, supposedly “Just Right”, and yet a few good points were made.
Still not convinced? Let me ask you this: why do we teach developers not to use undefined variables in their code? Why do we bother enforcing this best practice even in the simplest cases? Because, when complexity increases, the impact of undefined variables becomes hidden, and they can then create painful defects. If you look at requirements the same way, these simple cases are trivial, but they are an opportunity to develop a best practice that will eventually pay off when you reach the testing stage of a real project. Semios offers you a painless and unique opportunity to get started writing better requirements and stories.
Why not give it a try on something you recently wrote? Please contact Emenda or Stephane.Raynaud[AT]Emenda.com for a demo.
Semios can also provide statistics on the analysis, which can be used to evaluate external requirements you receive from or send to third parties. Internally, you can also track your team’s progress on quality, or set benchmarks to be met before developing a test plan or moving on to the next sprint.
Nb Reqs: 7
Nb Reqs with comments: 6
Nb Analysed Reqs: 7
Nb Not Analysed Reqs: 0
Nb Comments: 13
Nb High Alerts: 3
Nb Medium Alerts: 9
Nb Low Alerts: 1
Nb OK Reqs: 1
Nb Ignored Reqs: 0
Processing time: 0h00:25
Version : Semios for requirements 2.3 EN Default
Language : en
Context : Default (C:\Program Files\Semios\init\CoreConfig)
Configuration : REQ
Word Version : Word 2016
OS Version : Microsoft Windows 10 Pro (10.0.15063)