Bachelor's Thesis
Comparative evaluation of template-systems for requirements documentation
Francisco José Caballero Cerezo
September 12, 2016
Reviewers: Prof. Dr. Jan Jürjens
M. Sc. Katharina Großer
Prof. Dr. Jan Jürjens
Institut für Softwaretechnik
Institut für Informatik
Universität Koblenz
Universitätsstraße 1
56070 Koblenz
https://rgse.uni-koblenz.de
Francisco José Caballero Cerezo
francaballero@uni-koblenz.de
Matriculation number: 215202495
Degree programme: Bachelor Informatik
Examination regulations: PO2012
Topic: Comparative evaluation of template-systems for requirements documentation
Submitted: September 12, 2016
Supervisors: Prof. Dr. Jan Jürjens
M. Sc. Katharina Großer
Prof. Dr. Jan Jürjens
Institut für Softwaretechnik
Institut für Informatik
Universität Koblenz
Universitätsstraße 1
56070 Koblenz
Declaration of Authorship
I hereby declare that I have written this thesis independently; all thoughts taken directly or indirectly from other sources are marked as such.
This thesis has not been submitted to any other examination authority and has not been published yet.
Koblenz, September 12, 2016
Francisco José Caballero Cerezo
Abstract
This thesis deals with the evaluation of different template-systems for requirements specification.
The selection of the right template-system influences the success of any software project. As a first step, a theoretical evaluation was conducted to analyse the strengths and weaknesses of the template-systems. Second, in order to assist in the requirements documentation, a user interface for template-systems was implemented. Finally, the comparative evaluation was supported by an experiment performed with industry professionals and students.
The results suggest that template-systems are a good improvement strategy for requirements specification and that MASTeR is the most suitable one.
Acknowledgements
Firstly, I would like to express my sincere gratitude to Prof. Dr. Jan Jürjens for his support and for providing me with the opportunity to write this Bachelor's thesis. I would also like to thank M. Sc. Katharina Großer for her useful comments, remarks and patience. Her guidance has helped me throughout the research and writing of this thesis.
I extend my gratitude to everyone who, directly or indirectly, has supported me in this venture. In particular, I would like to thank everyone who participated in the experiment; without their help, the test could not have been conducted successfully.
I also take this opportunity to thank my parents and my friends for supporting me from a distance. Lastly, I am also grateful to my partner, who has backed me throughout the entire process by keeping me harmonious and encouraged.
This work is dedicated to my grandfather Paco, a person who has been my role model and example throughout my life and who has made me who I am today.
Contents

1 Introduction
  1.1 Motivation
  1.2 Targets of the Thesis
  1.3 Related Work
  1.4 Structure of the thesis
2 State of the Art in Requirements Specification
  2.1 Requirements Engineering
    2.1.1 Software Requirements
    2.1.2 Requirement Engineering Processes
  2.2 Requirements Specification
    2.2.1 MASTeR
    2.2.2 EARS
    2.2.3 Planguage
3 Comparative Evaluation
  3.1 Preparation
    3.1.1 First Approach
    3.1.2 Second Approach
    3.1.3 Formula variations
  3.2 Execution
  3.3 Results and Analysis
4 User Interface for Template-Systems
  4.1 Preparation
    4.1.1 Technologies
  4.2 Results
5 Users Experiment
  5.1 Experiment design
    5.1.1 Introduction section
    5.1.2 Reading section
    5.1.3 Writing section
  5.2 Test subjects
  5.3 Results and Analysis
    5.3.1 Reading requirements
    5.3.2 Writing requirements
    5.3.3 Conclusions
6 Summary
A Checklists
  A.1 Inspecting Quality
  A.2 Inspecting Formality
  A.3 Inspecting Participants' Requirements
B Experiment
  B.1 Introduction
  B.2 Reading
  B.3 Writing
Bibliography
List of Figures

2.1 Classification of requirements types, extracted from [Poh96].
2.2 RE four processes, adapted from [Poh96].
2.3 Users of a requirements document, extracted from [Som11].
2.4 Example of schema represented in Z notation, adapted from [WD96].
2.5 Requirements are specified by a combination of templates and attributes.
2.6 FunctionalMASTeR template, adapted from [RdS14b].
2.7 PropertyMASTeR template, adapted from [RdS14b].
2.8 EnvironmentMASTeR template, adapted from [RdS14b].
2.9 ProcessMASTeR template, adapted from [RdS14b].
2.10 ConditionMASTeR template, adapted from [RdS14b].
2.11 LogicMASTeR template, adapted from [RdS14b].
2.12 EventMASTeR template, adapted from [RdS14b].
2.13 TimeMASTeR template, adapted from [RdS14b].
3.1 Checklist-based inspection results (detailed).
3.2 Results of the formula application for first and second variations.
4.1 W3C HTML successful validation.
4.2 W3C CSS successful validation.
4.3 Main page, containing MASTeR and EARS introduction.
4.4 MASTeR view, with the control panel.
4.5 How requirements can be specified with the tool.
4.6 Glossary view, where terms can be defined.
4.7 Results can be exported at glossary view.
4.8 The responsive design adapts the page layout to different resolutions.
5.1 Component diagram of the web-based test.
5.2 The user interface developed in Chapter 4 was adapted to the web-based experiment.
5.3 Correctness of participants' answers in reading section, grouped by notation.
5.4 How the test subjects evaluated the reading facilities.
5.5 Quality of participants' requirements, grouped by notation.
5.6 How the test subjects evaluated the writing facilities.
5.7 Comparison between the comparative evaluation and the experiment results.
B.1 Questionnaire about participant's previous experience and occupation.
1 Introduction
This Bachelor's thesis belongs to the T-Reqs project, a project of the Software Engineering research group of the Institute for Software Technology (IST) at the University of Koblenz-Landau, in collaboration with the European Space Agency (ESA).
The T-Reqs project focuses on technologies that can improve the quality of requirements with regard to precision, accuracy and completeness through the use of ontologies. Ontologies are information models that make descriptions of concepts of the real world readable and accessible for humans and machines.
Template-systems are semi-formal notations for requirements specification that support this ontology approach. The goal of this thesis is to select and evaluate template-systems.
1.1 Motivation
In 1996, the ESPITI (European Software Process Improvement Training Initiative) project stated that some of the most important reasons for software project cancellations, cost overruns and delays were related to poor requirements specifications [Eak96].
Semi-formal notations, like template-systems, make it possible to construct requirements following a plan (a template), giving each requirement a similar structure. This approach helps to avoid mistakes from the early stages of the development process onwards by specifying high-quality requirements in a cost- and time-efficient way [RdS14a].
Notations for requirements specification have been studied for decades and
many semi-formal notations have been designed and developed. Some of
them are as follows:
• Template-systems, like:
– MASTeR (Mustergültige Anforderungen - die SOPHIST Templates für Requirements) [RdS14b].
– EARS (Easy Approach to Requirements Syntax) [MWHN09].
– Adv-EARS (Advanced-EARS) [MSKB11].
– Boilerplates [HJD05].
– Extensions to Boilerplates [Far12].
– RSL (Requirements Specification Language) of the CESAR project [DSP10].
– Gherkin [WH12].
– Spider (Specification Pattern Instantiation and Derivation Environment) [KC05].
• Keyword-driven languages, like Planguage [Gil05] or Volere [RR06].
• Controlled syntaxes, like the ISO/IEC/IEEE 29148 syntax recommendations [ISO11].
Each system has different outputs, strengths and weaknesses. Before starting a project, an analysis should be performed to determine the most appropriate notation. This analysis can become very complex depending on the number of notations considered, and its outcome greatly affects the results of the quality tests and the development process.
1.2 Targets of the Thesis
The main target of this thesis is to select and compare different template-systems by evaluating the strengths and weaknesses they present compared with other template-systems and with unrestricted natural language.
A theoretical evaluation of the selected semi-formal notations will be performed. Later on, it will be analysed and contrasted by conducting an experiment with users.
A second target is to develop a user interface for template-systems that can assist researchers and industry professionals in the usage of the notations.
The required steps to accomplish these goals are as follows:
1. Search and select a variety of different template notations by examining the existing literature.
2. Conduct a theoretical evaluation of the selected notations, based on
human readability, use of natural language, formality, expressiveness,
etc.
3. Choose a selection of the most promising template-systems for the
experiment, based on the results of the comparative evaluation.
4. Develop a User Interface for the selected systems.
5. Design, implement, execute and analyse the experiment.
6. Summarise the work and give an outlook on future work.
1.3 Related Work
In recent years, much research on template-systems has been conducted. Some exemplary works are those of Farfeleder [Far12] and Majumdar et al. [MSKB11].
The PhD thesis of Farfeleder contributed a framework for requirements specification and requirements analysis. In it, Farfeleder presented, among other things, an extension of the original Boilerplates set of Hull et al. [HJD05] that includes role and type attributes, allows several strings in one single attribute and defines a set of default attributes.
Majumdar et al., for their part, proposed a syntax called Adv-EARS, based on EARS [MWHN09]. That proposal focuses on identifying the elements of a use case diagram ("a UML diagram that shows actors, use cases, and their relationships" [ISO10]), from which the diagram can be generated automatically. They accomplished this by extending the existing requirement syntaxes of EARS and also introducing a new requirement type (the hybrid requirement).
However, little research has focused on comparing the different template-systems.
Johannessen, in his Master's thesis [Joh12], presented a comparison between natural language and the Boilerplates of Hull et al. by carrying out a test with university students. His experiment concluded without a clear answer, but the results suggested that errors in requirements are easier to find with Boilerplates.
In the paper presenting EARS [MWHN09], Mavin et al. expounded a case study that compared natural language requirements from the CS-E (Certification Specification for Engines) document with their EARS interpretation. Quantitative results for the case study were provided, which stated that EARS presents a number of advantages over the use of natural language.
Nevertheless, no publication has been found by the author that selects and evaluates different template-systems. As mentioned at the beginning of the chapter, the selection of the most suitable notation is essential for the future of a project.
To the author's knowledge, this thesis is the first study in this area. It will evaluate the strengths and weaknesses of different template-systems based on metrics such as reading and writing facilities, expressiveness, etc. This evaluation will be tested and contrasted with the results of an experiment performed with test subjects (industry professionals, scientific researchers and students).
1.4 Structure of the thesis
After the introduction described in the current chapter, the document has
the following structure:
Chapter 2 This chapter will explain the state of the art of requirements specification, from the basic concepts of Requirements Engineering to the description of template-systems for requirements specification.
Chapter 3 The comparative evaluation of the selected template-systems
will be detailed in this chapter.
Chapter 4 A user interface for the specification of requirements through
template-systems will be developed. This chapter explains the aim
and the results of this tool.
Chapter 5 This chapter will describe the experiment design, the test subjects and the analysis of the results of the test performed to evaluate the different template-systems.
Chapter 6 The final chapter will contain a summary of the work carried out and an outlook on future work.
2 State of the Art in Requirements Specification
In this chapter, the foundations of this thesis will be presented. First, the basics of Requirements Engineering will be briefly described. After that, requirements specification techniques will be explained in more detail and the selected template-systems will be expounded and described.
2.1 Requirements Engineering
Requirements Engineering (RE) refers to the process of discovering, analysing and verifying the requirements of a system [Som11].
The correct understanding (elicitation), documentation (specification) and validation of the customer's needs is essential to make the system meet the users' needs [Poh96]. Corrections to requirements after delivering the product are normally between 60 and 100 times more expensive than changes at early stages of the development [Dav95].
2.1.1 Software Requirements
A software requirement is a service or a constraint that has to be satisfied by a system. At a high abstraction level, it is a property that must be demonstrated by software in order to solve some problem in the real world [BF14].
According to ISO/IEC/IEEE 24765 Systems and software engineering - Vocabulary [ISO10], a requirement is:
"A condition or capability that must be met or possessed by a system, system component, product, or service to satisfy an agreement, standard, specification, or other formally imposed documents".
Figure 2.1: Classification of requirements types, extracted from [Poh96].
Many authors have proposed different classifications for requirements types.
In 1996, Pohl [Poh96] stated a classification of requirements types, shown
in Figure 2.1. According to his work, the different types of requirements
are as follows:
Functional Requirements Also known as capabilities or features, functional requirements represent the functions that the software has to provide (e.g., formatting text, modulating a signal...) [BF14]. They describe the services that the system should provide and how the system should react in particular situations. As Figure 2.1 shows, external interfaces for users, hardware and software express functionality.
Non-Functional Requirements Non-functional requirements, sometimes known as constraints or quality requirements, do not describe functionalities, but properties that the system has to fulfill. They constrain the solution [BF14].
According to the classification of Pohl (Figure 2.1), they can be classified as requirements of portability, maintainability, error situation, design constraints, error description, error handling, user supply, safety-security, test cases, performance, backup/recovery, cost constraints, time constraints and documentation [Poh96].
Requirements Quality Criteria
According to Wiegers [Wie99], a high-quality requirement should be:
Correct The requirement should accurately describe the functionality to
be delivered. In addition, there should not be any conflicts with other
requirements of the corresponding system.
A source of the requirement (e.g., a stakeholder or a higher-level specification) should be provided as a reference of correctness. A stakeholder is an "individual or organization having a right, share, claim, or interest in a system or in its possession of characteristics that meet their needs and expectations" [ISO10]. Regarding requirements inspections, representative users can determine the correctness of user requirements.
Feasible It must be possible to implement the requirement according to the capabilities and limitations of the system. In order to avoid infeasible features (e.g., requirements that are technologically overly complex or too costly), domain experts should participate during the elicitation and the negotiation process (Section 2.1.2).
Necessary The requirement should reflect something that the customer/user really needs or something that is required for conformance to an external requirement, an external interface or a standard. If a requirement is not traceable back to its source, it may not be really necessary.
Prioritized Each requirement should have an implementation priority. An agreement with the customer should be made in order to identify the most important requirements, so as to be prepared for changes during development, budget cuts, schedule overruns or any other project management risk.
An example of priority levels is the following:
• High priority: The requirement must be in the next product release.
• Medium priority: The requirement is necessary, but can be postponed to a later release.
• Low priority: The requirement is nice to have, but may be dropped if time or resources are insufficient.
Modal verbs, such as shall, should or will, are used to express liability (Section 2.2.1) and also provide priorities.
Unambiguous Requirements should have only one possible interpretation by the reader. Natural language is highly prone to ambiguity; for this reason, subjective words (such as user-friendly, easy, simple, rapid, efficient, several, improved, maximize, minimize...) should be avoided. The requirement should be written in the simple and straightforward language of the user domain.
Example: "The HTML Parser shall produce an HTML markup error report which allows quick resolution of errors when used by HTML novices." Here, the ambiguity is produced by the word quick.
Verifiable It must be possible to determine whether the requirement is properly implemented in the product. A requirement needs to be correct, feasible and unambiguous in order to be verifiable. In the literature, this property is also referred to as testable.
2.1.2 Requirement Engineering Processes
Many authors, like Bourque et al. [BF14], Sommerville [Som11] or Pohl [Poh96], have proposed different definitions of requirements engineering processes. In 1996, Pohl stated four tasks to be performed during the RE process: the elicitation, the negotiation, the specification, and the verification/validation of requirements. These four tasks, their interactions and their relationship with the stakeholders are presented in Figure 2.2.
Figure 2.2: RE four processes, adapted from [Poh96].
Requirements Elicitation
Requirements elicitation is the process of gathering information about the required system and pre-existing systems. This process uses the information obtained from stakeholders, documentation and specifications of similar systems to establish user and system requirements [Som11]. "System requirements are more detailed descriptions of the software system's functions, services, and operational constraints [...] It may be part of the contract between the system buyer and the software developers" [Som11].
The interaction with participants is developed through elicitation techniques. The most common are: interviews, scenarios, prototypes, facilitated meetings, observation and user stories [BF14].
As RE processes are iterative (Figure 2.2), the understanding of the requirements by the analyst team improves with every iteration of the cycle.
Requirements Negotiation
Also known as conflict resolution, requirements negotiation concerns resolving problems with requirements [BF14]. Some of the conflicts that can arise are related to:
• Requirements and resources;
• Functional and non-functional requirements;
• Disagreements between stakeholders.
It is important to make every decision traceable back to its source, in order to avoid the recurrence of these problems. During this negotiation process, trade-offs can be agreed upon as alternatives.
Requirements Specification & Documentation
Often called the "software requirements document", the Software Requirements Specification (SRS) is an official statement which specifies every requirement that has to be implemented in the system [Som11]. As Figure 2.3 shows, many different stakeholders, such as system customers, managers, system engineers, system test engineers and system maintenance engineers, can access and use this document.
While the requirements quality criteria (Section 2.1.1) are focused on individual requirements, the IEEE Std 830-1998 Recommended Practice for Software Requirements Specifications [ISO98] extends the quality criteria from individual requirements to the SRS level with some new properties:
Complete An SRS is complete if, and only if, it includes the following elements:
1. Every significant requirement relating to functionality, performance, design constraints, attributes, external interfaces and any external requirement imposed by a system specification.
2. A description of the software response to every valid or invalid class of input data in all realizable classes of situations.
Figure 2.3: Users of a requirements document, extracted from [Som11].
3. Labels and references to every figure, table and diagram, as well
as definitions of all terms and units of measure.
Consistent An SRS is consistent if, and only if, it agrees with higher-level documents and does not contradict itself. Furthermore, two or more requirements should not describe the same need.
Ranked for importance and/or stability Each requirement of the SRS
should have an identifier to indicate its importance or stability.
Modifiable An SRS is modifiable if, and only if, any changes to the requirements can be made easily, completely and consistently without changing the style or the structure.
Traceable The origin of each requirement should be clear and it should
facilitate the referencing of each requirement in future development
or enhancement documentation.
These quality criteria may vary according to the authors [Che13] [Ebe08], but each project should always state a systematic approach to follow in order to guarantee high-quality requirements.
Requirements Validation & Verification
According to Boehm [Boe79], an informal interpretation of verification and validation is:
Verification "Am I building the product right?"
Validation "Am I building the right product?"
This means that verification focuses on checking whether a system matches its specification, while validation checks whether the system presents the functionality expected by the customer/user.
During the validation process, different types of checks, such as validity, consistency, completeness, realism and verifiability checks, should be applied to the requirements document [Som11].
2.2 Requirements Specification
Many notations exist that could be used to write requirements.
Natural language is the most flexible notation. It allows software requirements to be written quickly, making them easy to read and understand. Nevertheless, some problems can arise that hamper compliance with the quality criteria (Section 2.1.1). According to Mavin et al. [MWHN09], some of these problems are:
Ambiguity Requirements could have more than one interpretation.
Untestable Requirements could be unverifiable once the system is implemented.
Inconsistency Several requirements could define the same need.
Vagueness Requirements could present a lack of precision, detail or structure.
Complexity Some requirements could contain complex sub-clauses and/or
several interrelated statements.
Omission Missing requirements, especially requirements to manage unwanted behaviour.
Wordiness Requirements could contain an unnecessary number of words.
Inappropriate implementation details Requirements could describe how the system should be built, rather than what it should do.
Requirements are often written in natural language; however, this may be
supplemented by formal or semiformal descriptions [Som11].
Graphical notations, like the Unified Modeling Language (UML) [Lav03], and formal notations, based upon elementary mathematics, can be used to specify precise, unambiguous and structured requirements. Some examples of formal notations are Event-B [Abr10], Alloy [Jac01] and Z [WD96].
Z is a mathematical language with a powerful structuring mechanism. An
example of usage is presented in Figure 2.4.
Figure 2.4: Example of schema represented in Z notation, adapted from [WD96].
In contrast to natural language, formal notations require a high level of
training by the development team and the customers/users in order to
write and understand requirements.
Semi-formal notations, like template-systems and keyword-driven languages, are approaches that facilitate satisfying the quality criteria (Sections 2.1.1 and 2.1.2) by specifying requirements as a combination of templates (syntax) and attributes (semantics), as shown in Figure 2.5.
Three promising notations have been selected from the list presented in Section 1.1: the MASTeR template-system, the EARS template-system and the Planguage keyword-driven language. Their widespread use in industry, as well as their complexity and their documentation support, make them valuable candidates for comparison and evaluation in this thesis. The selection is described in more detail in the following sections.
Figure 2.5: Requirements are specified by a combination of templates and attributes.
2.2.1 MASTeR
In 2014, Rupp et al. presented MASTeR (Mustergültige Anforderungen - die SOPHIST Templates für Requirements) [RdS14b], a template-system that is currently used by many well-known German companies (details can be consulted at https://www.sophist.de/referenzen/unsere-kunden/).
MASTeR allows writing requirement specifications through some basic types: functional and non-functional requirements, the latter divided into property, environment and process requirements.
Functional requirements
Functional requirements can be specified with the FunctionalMASTeR template, shown in Figure 2.6. The uppercase words represent the template's fixed values and the lowercase words in angle brackets represent the attributes that have to be filled in [RdS14b], as mentioned in [Gro15].
Figure 2.6: FunctionalMASTeR template, adapted from [RdS14b].
The elements are:
1. Condition: Optional conditions can be specified with the ConditionMASTeR, LogicMASTeR, EventMASTeR or TimeMASTeR templates (described below in this section).
2. System: Represents the system or component that should provide
the functionality.
3. Liability: Modal verbs are used to express liability:
(a) Shall: The statement is legally binding and mandatory.
(b) Should: The statement is desired, but not mandatory.
(c) Will: The statement is recommended, but not mandatory.
4. Activity type: The different types of system activities are the following [RdS14a]:
(a) Independent system action: (-) the system performs the process by itself.
(b) User interaction: (PROVIDE <actor> WITH THE ABILITY TO) the system provides the user with some process functionality.
(c) Interface requirement: (BE ABLE TO) the system performs the process dependent on a third factor.
5. Process verb: Represents the process: procedures (functionality) and actions to be provided by the system [RdS14a].
6. Object: Characterizes the process word. It's the answer to the question what? [RdS14a].
Example: The document editor shall provide the user with the ability to
create new documents.
Property requirements
Property requirements describe properties of the system. The PropertyMASTeR template (Figure 2.7) is proposed for this purpose.
The specific elements of the PropertyMASTeR template are:
1. Characteristic: Is the property of the subject matter.
2. Subject matter: Represents the system, sub-system, component,
function, etc.
3. Qualifying expression: Specifies the range of the value.
Figure 2.7: PropertyMASTeR template, adapted from [RdS14b].
4. Value: Is connected with the qualifying expression through the verb
to BE.
Example: The design of the website should be responsive.
Environment requirements
Technological requirements of the system's environment are described with the EnvironmentMASTeR template (Figure 2.8). The fixed values are designed in a way that makes the requirement belong to the system and not to the environment [RdS14b], as mentioned in [Gro15].
Figure 2.8: EnvironmentMASTeR template, adapted from [RdS14b].
Example: The charger of the device shall be designed in such a way that the system can be operated in a range of 100-240 V / 50-60 Hz.
Process requirements
Process requirements can be worded with the ProcessMASTeR template (Figure 2.9), which is based on the independent system action version of the FunctionalMASTeR template. Process requirements are related to activities or legal-contractual requirements, as well as non-functional requirements. In this template, the subject of the requirement is an actor and not the system [RdS14b], as mentioned in [Gro15].
Figure 2.9: ProcessMASTeR template, adapted from [RdS14b].
Example: The software developers should work according to the Personal
Software Process (PSP).
Conditions
In the case that the functionality is only given or provided under certain
logical or temporal conditions, the ConditionMASTeR template can be
applied (Figure 2.10) [RdS14a].
Figure 2.10: ConditionMASTeR template, adapted from [RdS14b].
Example: If the server could not find the requested document.
A more precise specification can be obtained by the use of LogicMASTeR,
EventMASTeR or TimeMASTeR templates.
LogicMASTeR (Figure 2.11) is used to specify logical conditions. The logical statement is made through a compared object, an actor or a system [RdS14b].
Figure 2.11: LogicMASTeR template, adapted from [RdS14b].
With EventMASTeR (Figure 2.12), requirements are initiated as soon as
the event condition is satisfied. The term event summarises the possible
events that may affect the system [RdS14b].
Figure 2.12: EventMASTeR template, adapted from [RdS14b].
TimeMASTeR (Figure 2.13) is used to specify a certain period of time in which a system or object may have temporary behaviours. Both conditions and requirements end at the same time [RdS14b].
Figure 2.13: TimeMASTeR template, adapted from [RdS14b].
2.2.2 EARS
EARS (Easy Approach to Requirements Syntax) is a template-system created by Mavin et al. [MWHN09] and currently used by companies like Intel Corporation and Rolls-Royce [Ter13].
The simplest structure of the EARS system is the generic requirement syntax (shown in Listing 2.1).
<optional preconditions> <optional trigger>
the <system name> shall <system response>
Listing 2.1: Generic requirement syntax.
According to Mavin et al. [MWHN09], the elements are:
1. Precondition: Necessary condition to invoke the requirement.
2. Trigger: Event that initiates the requirement.
3. System response: Represents the system behaviour, which is activated if, and only if, the preconditions are satisfied and the trigger occurs.
Mavin et al. stated that the generic requirement syntax is specialized into five types (Ubiquitous, Event-driven, Unwanted behaviours, State-driven and Optional features), described in the following subsections.
Ubiquitous requirements
A ubiquitous requirement defines a fundamental property of the system and has no preconditions or trigger. The format is shown in Listing 2.2.
The <system name> shall <system response>
Listing 2.2: Ubiquitous requirements format.
Example: ”The software shall be written in Java” [Ter13].
Event-driven requirements
An Event-driven requirement is activated only when a trigger occurs or is
detected. It uses the keyword WHEN. Listing 2.3 shows its format.
WHEN <optional preconditions> <trigger>
the <system name> shall <system response>
Listing 2.3: Event-driven requirements format.
Example: ”When a DVD is inserted into the DVD player, the OS shall spin
up the optical drive” [Ter13].
Unwanted behaviours
Requirements for unwanted behaviours (failures, error conditions, distur-
bances, deviations...) use IF and THEN keywords and have the format
shown in Listing 2.4.
IF <optional preconditions> <trigger>, THEN
the <system name> shall <system response>
Listing 2.4: Unwanted behaviours format.
Example: "If the memory checksum is invalid, then the software shall display an error message" [Ter13].
State-driven requirements
State-driven requirements are active while the system is in a specific state.
They use the keywords WHILE or DURING (Listing 2.5).
WHILE <in a specific state> the <system name>
shall <system response>
Listing 2.5: State-driven requirements format.
Example: "While the autopilot is engaged, the software shall display a visual indication to the pilot" [Ter13].
Optional requirements
Optional requirements are invoked only if the system includes a special fea-
ture. They use WHERE keyword, as shown in Listing 2.6.
WHERE <feature is included> the <system name>
shall <system response>
Listing 2.6: Optional requirements format.
Example: ”Where a HDMI port is present, the software shall allow the user
to select HD content for viewing” [Ter13].
Complex requirements
Requirements with complex conditional clauses can be defined with a com-
bination of the keywords WHEN, IF and THEN, WHILE and WHERE.
Example: ”When the landing gear button is depressed once, if the software
detects that the landing gear does not lock into position, then the software
shall sound an alarm” [Ter13].
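Following the style of the previous listings, the combined syntax underlying this example can be sketched as follows (an illustrative reconstruction, since [MWHN09] defines complex requirements by combining the keywords rather than by a single fixed template):
WHEN <trigger>, IF <optional preconditions>, THEN
the <system name> shall <system response>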
2.2.3 Planguage
Planguage is a keyword-driven language created by Gilb in 1998 [Gil05].
The name Planguage is a combination of the words planning and language.
Planguage provide a diverse set of keywords; the most common are shown
in Table 2.1.
In addition, Planguage is an extensible language. Sub-keywords can be used to add precision and specificity; an example is shown in Table 2.2. Qualifiers add richness, precision and utility by allowing the precise description of conditions and events.
Listing 2.7, adapted from [Sim01], shows an example of qualifiers usage.
PLAN [Q1 00]: 20,000 units sold
MUST [First year]: 120,000 units sold
WISH [First release, entrp. version]: 1 Dec. 2000
PLAN [US market, first 6 months of production]:
Defects Per Million < 1,000
METER [Prototype]: Survey of focus group
METER [Release Candidate]: Usability lab data
Listing 2.7: Example of qualifiers usage.
Keyword      Definition
TAG          Unique and persistent identifier
GIST         Statement's short and simple description
STAKEHOLDER  Person or organization affected by the requirement
SCALE        Scale of measurement used to quantify the statement
METER        Process or device used to measure using SCALE
MUST         Minimum level required to avoid failure
PLAN         Level at which the statement is considered successful
STRETCH      Goal if everything goes perfectly
WISH         Desirable level of achievement
PAST         Previous results, for comparison
TREND        Historical data or extrapolation of it
RECORD       Best-known achievement
DEFINED      Official definition of a term
AUTHORITY    Person, group or level of authorization
Table 2.1: Planguage Keywords, adapted from [Sim01].
Keyword     Definition
METHOD      Method for measuring, used to determine values on the SCALE
FREQUENCY   Frequency of measurements
SOURCE      Entity responsible for making the measurement
REPORT      Where and when the measure will be reported
Table 2.2: Sub-keywords for the METER Keyword, adapted from [Sim01].
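To illustrate how these keywords combine into a single requirement, the following is an illustrative Planguage statement assembled from the keywords of Table 2.1; the tag and all values are invented for this example.
TAG: Learnability
GIST: Time needed for a novice user to learn the main functions
SCALE: Minutes needed to complete a defined editing task
METER: Usability test with novice users
MUST: 60 minutes
PLAN: 30 minutes
WISH: 15 minutes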
3 Comparative Evaluation
In this chapter, the comparative evaluation of the selected template-systems,
from the preparation of the evaluation to the execution, results and analy-
sis is described.
3.1 Preparation
This evaluation compares different template-systems based on metrics. There exist many metrics that can be obtained from semi-formal notations. Some of them are measured by automated processes (e.g., the length of the requirements), while other metrics can be measured by human inspection of the notation's syntax.
Table 3.1 presents some of these metrics, a brief definition of each and the type of measurement.
3.1.1 First Approach
In the first place, this evaluation aims at a generic software project: a project that is not focused on any specific field and includes any type of requirement.
In order to find the notation that is most suitable for specifying requirements in such a project, a first approach is proposed which attempts to identify the notation that provides the highest-quality requirements. In this case, suitability is defined as:
Suitability = Quality
where the Quality attribute is measured by the satisfaction of the quality criteria (Sections 2.1.1 and 2.1.2). The template-system or keyword-driven language that provides the highest-quality requirements would be the most suitable for the project.
Metric                       Description                                                                                           Measurement
Number of words              Length of the final requirements.                                                                     Automated
Number of process verbs      Number of process verbs of the requirements.                                                          Automated
Number of keywords           Length of the keyword set.                                                                            Automated
Number of decision branches  Number of possible branches in a template.                                                            Automated
Expressiveness               Capability of the notation to cover the domain problem.                                               Inspection
Formality                    Level of structuring of the notation.                                                                 Inspection
Human readability            Capability of the requirements to be understood by humans.                                            Inspection
Computer readability         Capability of the requirements to be analyzed and processed by computer programs.                     Inspection
Quality                      Level of quality, according to the satisfaction of the quality criteria (Sections 2.1.1 and 2.1.2).   Inspection
Table 3.1: Notations' metrics.
In order to measure compliance with the quality criteria, a checklist-based inspection can take place. To accomplish this, a requirements-focused checklist [Mos97] has been adapted to create a checklist which makes it possible to measure the satisfaction of the quality criteria based on the notation's syntax. The answers to these checks can be subjective, because they depend on the judgment or perception of the evaluator; therefore, in order to obtain accurate results, several people should perform this inspection. In the current work, the author has performed the inspection twice. Both the checklist and the full results are available in Section A.1 of Appendix A.
The result of this inspection will be a number in the range between 0 and 100, indicating the level of quality that a notation provides according to its compliance with the quality criteria. To add some semantic meaning to this value, a scale has been designed; it is presented in Table 3.2.
Level of Quality  Quality total score
Excellent         80 to 100
Good              60 to 79
Fair              40 to 59
Poor              20 to 39
Bad               0 to 19
Table 3.2: Scale for quality values.
The results of the inspection carried out by the author are shown in Table 3.3. These results suggest that Planguage is the template-system which provides the highest-quality requirements, followed closely by MASTeR and, lastly, by EARS, which presents only fair quality. However, the subjectivity problems discussed above may have affected the results of the inspection, so no significant difference between the Planguage and MASTeR scores should be inferred.
Notation   Score  Level of Quality
MASTeR     81     Excellent
EARS       40     Fair
Planguage  84     Excellent
Table 3.3: Checklist-based inspection results.
For a proper analysis of the results, a deeper insight into the checklist evaluation is needed. The check results corresponding to the main quality properties have been collected and are displayed in Figure 3.1. These results indicate that the main difference between EARS, Planguage and MASTeR is set by the completeness property, which influences other properties like reliability. This could be attributed to the scope of EARS, which is basically focused on functional requirements. MASTeR and Planguage, for their part, present a more complete set of templates, which makes it possible to specify a greater variety of non-functional requirements.
3.1.2 Second Approach
The results of the first approach do not yet fulfill the aims of this evaluation, since they cannot resolve questions such as:
Can untrained stakeholders read and understand the specifications?
Can requirements engineers specify requirements without a significant amount of training hours?
Figure 3.1: Checklist-based inspection results (detailed).
In order to measure the reading and writing facilities, a second approach is needed. To accomplish that goal, some of the metrics presented at the beginning of the chapter have now been selected: Expressiveness, (Average) Number of Words and Formality.
Expressiveness Refers to how well the notation's syntax covers the domain phenomena [HAL+14]. The purpose of this metric is to measure the level of expressiveness, which affects the final suitability positively.
This metric is calculated from all the possible combinations that the notation can provide to cover the domain phenomena:
Expressiveness = N_combinations *
* Measuring the number of combinations can depend on the notation's syntax. This is the reason why the author has not provided a specific formula to calculate this value.
In the special case that a notation allows nesting (i.e., the ability to specify requirements by combining different templates or keywords), only one level of nesting is taken into account for the measurement. This is because higher levels of nesting represent an exponential growth of the number of combinations and, in the author's opinion, would not properly represent the value of expressiveness and would distort the final result.
The results of applying the formula are numeric, so, in order to get a better picture of them, a scale has been designed; it is presented in Table 3.4.
Expressiveness scale  Expressiveness value
Excellent             >300
Good                  200 to 300
Fair                  100 to 199
Poor                  0 to 99
Table 3.4: Scale for expressiveness values.
Average Number of Words The Average Number of Words (NoW) indicates the average length of the requirements. For untrained stakeholders, requirements that are too long could be unreadable, while requirements that are too short could be incomplete. High values affect the suitability negatively.
In this metric, an algorithm can be used to automate the counting of words, based on the following formula (a sketch of such an algorithm is given below, after Table 3.5):
NoW = ( ∑ req=1..n N_words(req) ) / n
Formality The purpose of this metric is to measure the level of formalization that a notation presents (e.g., mathematical notations could be identified as highly formalized, while natural language could be considered lowly formalized). High formality levels usually present a lack of flexibility and can hardly be understood by untrained stakeholders, so they affect the suitability negatively. A low level of formality, however, could imply specification problems (Section 2.2).
This metric will be measured by a checklist-based inspection. The checklist, partly based on [MR13] and [AYI04], and its application are available in Section A.2 of Appendix A.
The results of the inspection will be in the range between 1 and 6. For a better understanding of these results, the scale shown in Table 3.5 can be utilised.
Formality scale       Formality score
Over-Formalized       6
Extremely-Formalized  5
Highly-Formalized     4
Well-Formalized       3
Fair-Formalized       2
Under-Formalized      1
Table 3.5: Scale for formality values.
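To illustrate how the word count for the NoW metric can be automated, the following is a minimal sketch in TypeScript; the function name and the sample requirements (taken from the EARS examples in Section 2.2.2) are illustrative only, as the thesis does not publish the actual counting algorithm.
// Minimal sketch of the NoW metric: average word count over a set of
// requirements. Hypothetical helper, not the algorithm used in the thesis.
function averageNumberOfWords(requirements: string[]): number {
  if (requirements.length === 0) return 0;
  const totalWords = requirements
    .map(req => req.trim().split(/\s+/).filter(w => w.length > 0).length)
    .reduce((sum, count) => sum + count, 0);
  return totalWords / requirements.length;
}

// Example with two EARS requirements from Section 2.2.2:
const reqs = [
  "The software shall be written in Java",
  "While the autopilot is engaged, the software shall display a visual indication to the pilot",
];
console.log(averageNumberOfWords(reqs)); // (7 + 15) / 2 = 11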
For this second approach, some of the metrics presented at the beginning of the chapter have finally been discarded. This is the case for the number of process verbs, which can be obtained in notations like MASTeR, but is not obvious in notations like EARS or Planguage, due to their lack of atomicity. The number of decision branches, for its part, is subsumed in the calculation of the possible combinations. Finally, computer readability has remained out of the scope of this evaluation; it is a point that could be addressed in future research. As a first approach, it could be measured taking into account the analysed formality and the atomicity that the notations provide.
3.1.3 Formula variations
At this point of the evaluation, two positive metrics, Quality and Expressiveness, and two negative metrics, Average Number of Words and Formality, have been identified. They can be used to identify the most appropriate template-system according to the properties of each particular project.
However, a formula is still missing that can sort the notations according to their suitability for a generic project. In order to achieve this goal, two variations are proposed:
First variation
Suitability = (Quality × Expressiveness) / (NoW × Formality)
In this variation, positive properties appear in the numerator and negative properties in the denominator.
Second variation
Suitability = Quality + Expressiveness − NoW − Formality
In this second variation, positive properties are added and negative properties are subtracted.
Thus, the second approach is based on the idea that the template-system which provides requirements with higher quality and more expressiveness, using fewer words and less formality, would be the most appropriate one. Positive metrics contribute to the suitability, while negative metrics penalize it.
Both formula variations have been applied to the selected notations and the results are shown in Figure 3.2. Both provide the same order: MASTeR is the most suitable notation, followed by EARS and, lastly, Planguage.
Figure 3.2: Results of the formula application for first and second variations.
However, the current evaluation does not analyse sufficient data to support a decision on which variation to use. The evaluation will continue using the first variation, but, as mentioned, the second variation would also be valid according to the current data.
3.2 Execution
Summarising, the steps followed to run the formula have been:
Quality A checklist-based inspection has taken place to collect the quality score for each notation. To accomplish this, a checklist based on [Mos97] has been used. Both the checklist and the results of the inspection are available in Section A.1 of Appendix A.
Expressiveness All combinations that each notation can provide have been calculated. MASTeR's combinations have been calculated from the combinations of conditions and decision branches. For EARS, the combinations have been calculated according to all the decision branches and one level of nesting. For Planguage, the number of keywords, as well as the qualifiers and the number of keywords that can be nested, have been taken into account.
Average Number of Words An algorithm has been used to count the number of words of the requirements. Some of the requirements have been created from scratch and others have been based on examples found in [Mav10] and [Sim11].
Formality A checklist, partly based on [MR13] and [AYI04], has been created to measure this metric. Both the checklist and the results of the inspection are available in Section A.2 of Appendix A.
3.3 Results and Analysis
The results of the first formula variation, previously presented in Section 3.1.3, suggest that MASTeR is the most suitable template-system, followed by EARS and Planguage. Extended information about the formula application has been collected; it is shown in Table 3.6.
                MASTeR  EARS    Planguage  Avg
Quality         81      40      84         68.33
Expressiveness  324     196     126        215.33
NoW             24      18.27   94.33      45.53
Formality       3       2       5          3.33
Suitability     364.50  214.46  22.43      200.76
Table 3.6: Detailed results of the formula application.
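For instance, substituting the MASTeR column of Table 3.6 into the first variation reproduces its suitability score:
Suitability(MASTeR) = (81 × 324) / (24 × 3) = 26244 / 72 = 364.50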
These results suggest that MASTeR is the most suitable template-system, due to its excellent quality (a score of 81/100 points) combined with concise and well-structured requirements.
EARS, for its part, is the second notation in the resulting order. Its fair-formalized and concise specifications are compromised by a fair quality score (40 points).
Finally, the results indicate that Planguage is the least suitable notation. This notation provides excellent quality, but it is heavily penalized by its level of formality and the length of its specifications.
Thus, as the results indicate, MASTeR would be an ideal notation to specify requirements in a generic project, due to the great possibilities and expressiveness that this notation offers to cover the domain phenomena. On the other hand, if a project has mostly functional requirements, the results mark EARS as an ideal option, due to its reading and writing facilities. Finally, if users are working on a critical project that focuses on the quality of its requirements, leaving aside ease of reading and writing, Planguage is suggested as an ideal option.
As mentioned, the extended results of this evaluation can be utilised to select the most suitable notation according to the project's properties.
It is important to state that the judgement of the evaluator could have influenced a percentage of the measurements. Furthermore, not enough data has been analysed to validate the formula. For this reason, an experiment will be conducted to test the usability of the template-systems with users (Chapter 5).
4 User Interface for Template-Systems
In this chapter, the aim and the results of a software application developed for requirements specification are presented. This tool allows users to write requirements using the MASTeR and EARS template-systems.
4.1 Preparation
The main purpose of this user interface is to assist in the experiment, providing the test subjects with a tool in which they can specify requirements using template-systems. In addition, the tool is also designed for scientific researchers who investigate template-systems and semi-formal notations. Finally, the tool can help industry professionals who want to specify their requirements using template-systems.
The only publicly available MASTeR tool that has been found at the date of publication is the one developed by Eckey in his Master's thesis [Eck15]. The tool that he presents is focused on analyzing the consistency of requirements transformed from BPMN (Business Process Modeling Notation [GDW09]) models.
Regarding the EARS and Planguage systems, no publicly available tool has been found by the author that can support the specification process.
The formality that MASTeR and EARS presented in the comparative evaluation (Chapter 3) suggests that they are the easiest notations for untrained stakeholders to learn. For this reason, as well as the similarities between their complexity and format, MASTeR and EARS have been selected for evaluation in the experiment (Chapter 5). This user interface supports both notations.
4.1.1 Technologies
The user interface will be designed as a web-based application. The main advantage of this approach is accessibility: the tool will be platform independent, accessible anytime and anywhere from any device with Internet connectivity and a compatible web browser. This helps to improve the coverage of the tool, increasing the number of users that can make use of the developed software.
In order to accomplish this goal, the following technologies are used:
Markup language HTML5 as the markup language to code the views of
the application.
Presentation layer The presentation of the documents is developed with
LESS2, a CSS pre-processor. On the other hand, Bootstrap3, a CSS
framework, will help to develop a responsive design, making the tool
more user-friendly for a wider range of devices.
Client-side scripting The controller will be coded with AngularJS
(https://angularjs.org). In this JavaScript framework, developed by Google,
the models are specified in JSON format, which can help to improve
interoperability with other tools. In addition, AngularJS facilitates
asynchronous (AJAX-based) communication with the server, which improves
the user experience.
Storage HTML5 Web Storage will be used to store information in the
client's web browser. This allows all requirements the user is working on
to be kept locally, even if the user closes the browser or turns off the
device. A minimal sketch of this approach is given below.
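The sketch below is not the thesis' actual source code; it only illustrates how an AngularJS controller could hold the requirement models as plain JSON objects and persist them with HTML5 Web Storage. The module, controller and attribute names are illustrative assumptions.

```javascript
// Minimal sketch (not the actual tool source): an AngularJS 1.x controller
// that keeps the requirements being edited as JSON objects and persists them
// in HTML5 Web Storage, so they survive closing the browser.
angular.module('reqApp', [])
  .controller('RequirementsCtrl', function ($scope) {
    // Load previously stored requirements, or start with an empty list.
    $scope.requirements = JSON.parse(
      localStorage.getItem('requirements') || '[]');

    // Add a new requirement; the attribute names are illustrative only.
    $scope.addRequirement = function (condition, system, priority, object) {
      $scope.requirements.push({
        condition: condition,  // e.g. "If a notebook is corrupted"
        system: system,        // e.g. "the system"
        priority: priority,    // e.g. "shall"
        object: object         // e.g. "display an error message"
      });
      $scope.save();
    };

    // Persist the whole model as a JSON string in the client's Web Storage.
    $scope.save = function () {
      localStorage.setItem('requirements',
        JSON.stringify($scope.requirements));
    };
  });
```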
4.2 Results
The source code of the developed application is available on the enclosed
CD-ROM and on GitHub (https://github.com/francaballero/requirements_generator).
A public demo is also available online
(https://userpages.uni-koblenz.de/~francaballero/tool/ and
http://francaballero.net/requirements_generator/).
The markup's validity has been checked with the W3C (World Wide Web
Consortium) Markup Validation Service (https://validator.w3.org). Figure 4.1
shows the successful output of the validation.
Likewise, the CSS code has been validated with the W3C CSS Validation
Service (https://jigsaw.w3.org/css-validator/); the output of this validation
is shown in Figure 4.2.
Figure 4.1: W3C HTML successful validation.
Figure 4.2: W3C CSS successful validation.
Finally, JSHint (http://jshint.com), a static code analysis tool for
JavaScript, has been used to validate the JavaScript code.
The main page of the application is shown in Figure 4.3. In this view, users
are introduced to the MASTeR and EARS template-systems through an
explanation of their syntax and through examples.
Figure 4.3: Main page, containing MASTeR and EARS introduction.
Then, users are able to navigate through four different views: MASTeR,
EARS, Glossary and Results.
In the MASTeR and EARS views, the tool provides users with the ability
to specify requirements. A control panel is available on the right side of
the page, as shown in Figure 4.4, where users can create new requirements
(for MASTeR and EARS) and conditions (for MASTeR).
When the user clicks on one of these buttons, he/she is guided to the
view shown in Figure 4.5. By clicking on the attributes, the user is able to
Figure 4.4: MASTeR view, with the control panel.
select a term from the list of terms. The "New Term" button (highlighted
in green) is used to create new terms, which are shared between the MASTeR
and EARS views (an illustrative data model is sketched after Figure 4.5).
Figure 4.5: How requirements can be specified with the tool.
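To illustrate the data model behind these views, the following hypothetical JSON structure shows how a requirement's attributes could reference terms from the shared glossary. The field names are assumptions for illustration, not taken from the tool's source code.

```javascript
// Hypothetical data model: a MASTeR requirement whose attributes reference
// terms from a glossary shared by the MASTeR and EARS views.
var glossary = [
  { term: 'notebook', definition: 'An IPython Notebook (.ipynb) file.' }
];

var requirement = {
  notation: 'MASTeR',
  condition: 'If a notebook is corrupted',
  system: 'the system',
  priority: 'shall',
  processVerb: 'display',
  object: 'an error message',
  terms: ['notebook']  // links into the shared glossary by term name
};
```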
The glossary view is shown in Figure 4.6. In this view, users are able to
define their terms. On the right side, the control panel again allows the
creation of new terms.
The results view provides users with the ability to save their work by
printing or exporting a PDF document containing all their progress, as shown
in Figure 4.7. In addition, users can remove terms and requirements from the
local storage.
Furthermore, the responsive design of the webpage adapts the page layout
to different devices and resolutions. A sample of this functionality is shown
in Figure 4.8.
Figure 4.6: Glossary view, where terms can be defined.
Figure 4.7: Results can be exported at glossary view.
Figure 4.8: The responsive design adapts the page layout to different resolu-
tions.
This tool offers industry professionals and researchers the possibility of
documenting requirements using the MASTeR and EARS notations.
The tool could later be improved by adding JSON-based interoperability
with existing tools that automate the processing of requirements. Finally,
support for EARS's complex requirements could be improved by enabling
multiple nesting levels, as sketched below.
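A minimal sketch of what such multiple nesting levels could look like, assuming a recursive list of condition clauses; this is an illustration of the suggested improvement, not an implemented feature.

```javascript
// Illustrative structure only: a recursive list of clauses would allow an
// EARS requirement to nest several While/When conditions.
var complexRequirement = {
  notation: 'EARS',
  clauses: [{
    keyword: 'While',
    phrase: 'the system is in merge mode',
    // a nested clause, one level deeper
    clauses: [{ keyword: 'When', phrase: 'the user selects a cell' }]
  }],
  body: 'the system shall highlight the conflicting lines'
};
```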
5 Users Experiment
This chapter describes the design of the experiment, the selection of the
test subjects and the analysis of its results.
5.1 Experiment design
The aim of this experiment is to test and evaluate the results of the compar-
ative evaluation, described in Chapter 3, with real users. The experiment
shall measure the reading and writing facilities of the template-systems. To
accomplish this goal, the test subjects will be asked to solve tasks related
to requirements specification and provide some feedback about their expe-
rience.
The test group will be composed of industry professionals and students.
The experience of industry professionals makes them ideal test subjects for
this evaluation. However, it will not be possible to recruit a significant
number of participants.
This lack of participants will be compensated for with students in advanced
years of Software Engineering, Computer Engineering, Computer Science
and Web Science. This test group will presumably have less experience
in Requirements Engineering, so the expectations for their performance
should be adjusted. However, this may actually be considered an advantage
when measuring the template-systems' learnability. In addition, their lack
of experience may affect template-systems and natural language similarly,
making their results more reliable.
It is important to state that it will not be possible to perform the experiment
in a laboratory. In addition, the test group will not be homogeneous, because
the test subjects will have different backgrounds and experience. Lastly,
the test is not planned to be performed by a statistically significant number
of users. For all these reasons, the sample is not fully representative and
the results should be taken as a first approximation that can be improved in
the future.
In order to accomplish all these goals and to reach as many participants as
possible, the test is planned to be web-based. This approach will also allow
the use of algorithms to automate part of the analysis of the results.
The same technologies described in Section 4.1.1 will be used to implement
the experiment. Moreover, the user interface designed for MASTeR and
EARS (Chapter 4) will be integrated. However, a database is now needed
to store and query the results. For this purpose, OpenWS
(https://openws-app.herokuapp.com), a web service that provides a REST API
to store JSON objects, will be used. The component diagram in Figure 5.1
shows the physical structure of the application; a sketch of such a request
follows the figure.
Figure 5.1: Component diagram of the web-based test.
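The following sketch illustrates how the test could submit its collected results to such a JSON store. The endpoint route and payload fields are assumptions and do not reflect the documented OpenWS API.

```javascript
// Sketch of submitting test results as a JSON object; route and payload
// fields are assumptions, not the documented OpenWS API.
angular.module('testApp', [])
  .controller('ResultsCtrl', function ($http) {
    this.submit = function (answers, timings) {
      // AngularJS issues the POST asynchronously (AJAX).
      $http.post('https://openws-app.herokuapp.com/objects', { // assumed route
        answers: answers,  // answers given per question
        timings: timings   // response times in minutes
      }).then(function () {
        console.log('Results stored.');
      });
    };
  });
```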
The source code of the web-based experiment is available on the enclosed
CD-ROM and on GitHub
(https://github.com/francaballero/requirements_user_testing). A public demo
of the application is also available online
(https://userpages.uni-koblenz.de/~francaballero/ and
http://francaballero.net/requirements_user_testing/).
The experiment operates with the requirements of NBDiff, an open source
project that provides a means of viewing changes made to IPython Notebook
files and resolving merge conflicts. The requirements of the project
(http://nbdiff-docs.readthedocs.io/en/latest/SRS.html) have been changed
and adapted for use in the experiment.
The test consists of three sections: introduction, reading and writing. They
are described in the following subsections.
5.1.1 Introduction section
First of all, in the introduction section (Appendix B.1), test subjects will be
asked to complete a short questionnaire about their occupation and their
previous experience, used for statistical purposes. Next, if the test subject
has indicated that he/she has no previous experience with MASTeR or EARS,
he/she will be directed to a video that explains the aim and usage of the
template-systems. This explanation can be reviewed in text format or
extended with the referenced literature.
Afterwards, the NBDiff system is briefly described and the test subjects are
asked to send any questions to the author before proceeding to the reading
section.
5.1.2 Reading section
In the reading section (Appendix B.2), participants will be asked to answer
questions regarding requirements that they shall read. Two of the presented
requirements have been written using MASTeR, another two using EARS and
one has been expressed in natural language. The questions relate to quality
criteria properties (Sections 2.1.1 and 2.1.2) like completeness, ambiguity
and inconsistency. No definitions of the terms have been provided; this makes
it possible to observe and measure the need for a glossary.
At the end, participants will be asked to complete a questionnaire about
their experience with the template-systems' readability. For this, five
statements are presented, for each of which they have to decide between:
strongly disagree, disagree, agree or strongly agree. This four-option
questionnaire is designed so that the test subjects are forced to take a
position.
The test subjects can also learn from reading and evaluating these
requirements. This is another purpose of the reading section, as it prepares
them to perform the writing section.
5.1.3 Writing section
In this section of the test (Appendix B.3), the test subjects will be intro-
duced to the tool designed in Chapter 4 through a video tutorial.
Later on, they will be asked to specify requirements using the tool, which
is integrated into the test (Figure 5.2). The tasks are introduced through
four short descriptions of what the requirements should represent.
Participants will be able to use MASTeR, EARS or natural language, depending
on the restrictions of each task. Some terms are already predefined in order
to encourage the use of the template-systems. This part of the experiment is
designed in such a way that the subjects must use MASTeR and EARS at least
once.
Figure 5.2: The user interface developed in Chapter 4 was adapted to the
web-based experiment.
At the end, a questionnaire is displayed and the test subjects are asked to
complete it according to their experience.
5.2 Test subjects
The experiment was carried out between August 23 and September 9, 2016.
The link to the test (https://userpages.uni-koblenz.de/~francaballero/) was
spread in scientific groups, working groups of ESA and student groups.
A total of 20 participants completed the test: 5 industry professionals and
15 students. The background of the participants is shown in Table 5.1.
It is important to emphasise that the large majority of participants,
especially in the student group, were not native English speakers, which may
have influenced the reading and writing of requirements in English.
5.3 Results and Analysis
The data obtained from the experiment takes many forms. Some algorithms
can be applied to process part of it. However, the answers of the reading
and writing sections are not easily processable and have to be evaluated
manually to judge their correctness.
Group | Experience in RE | Experience in template-systems | Experience in MASTeR | Experience in EARS
Industry professionals | 60% | 40% | 0% | 0%
Students | 20% | 0% | 0% | 0%
Total average | 40% | 10% | 0% | 0%

Table 5.1: Information about the test subjects' background.
5.3.1 Reading requirements
In the reading section, test subjects answered five questions related to five
requirements that they read. The questions had multiple answers that could
be considered correct, so they were manually evaluated by the author of this
thesis.
The results of the evaluation have been grouped by notation and listed in
Figure 5.3. Participants completed 92% of the questions. Of the answers,
77% of those related to MASTeR requirements were correct, 88% of those
related to EARS and 55% of those related to natural language.
Figure 5.3: Correctness of participants’ answers in reading section, grouped by
notation.
At the same time, response times for every question were collected. The
average times have been calculated and grouped by notation; the results are
shown in Table 5.2, followed by a sketch of this grouping. MASTeR and
natural language answers took 1.37 times and 1.20 times longer,
respectively, than EARS answers. The results also show that industry
professionals took 2.42 times longer than students to answer the questions,
which could have strongly influenced their better performance in this
section of the test.
Group | MASTeR | EARS | Natural language
Industry professionals | 4.44 mins | 3.96 mins | 1.80 mins
Students | 1.42 mins | 0.808 mins | 1.96 mins
Total of participants | 2.18 mins | 1.59 mins | 1.92 mins

Table 5.2: Average response times to reading questions, grouped by notation.
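As an illustration of the automatable part of this analysis, the following sketch groups collected response times by notation and averages them; the data layout is an assumption.

```javascript
// Sketch: group collected response times by notation and average them.
function averageTimes(responses) {
  var sums = {}, counts = {};
  responses.forEach(function (r) {
    sums[r.notation] = (sums[r.notation] || 0) + r.minutes;
    counts[r.notation] = (counts[r.notation] || 0) + 1;
  });
  // One { notation, average } entry per notation seen in the data.
  return Object.keys(sums).map(function (n) {
    return { notation: n, average: sums[n] / counts[n] };
  });
}

// Example: averageTimes([{ notation: 'EARS', minutes: 0.8 },
//                        { notation: 'MASTeR', minutes: 1.4 }]);
```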
Finally, the participants gave feedback about their experience of reading
the requirements. The results are listed in Figure 5.4.
Figure 5.4: How the test subjects evaluated the reading facilities.
Thus, these results suggest that MASTeR and EARS provide better reading
facilities than natural language. This is supported by the participants'
performance and feedback. However, it should be considered only a tendency,
because the participants did not solve enough natural language tasks in this
test.
Regarding MASTeR and EARS, EARS achieved a better performance rate with
lower average times, but MASTeR received better feedback from the
participants. At this point of the analysis, both notations present a similar
readability rate and no significant differences can be identified between
them.
In addition, the manual evaluation of the answers supports the hypothesis
that the existence of a correct glossary would have significantly increased
the quality of the requirements.
5.3.2 Writing requirements
In the writing section, participants wrote requirements to specify four
functionalities of the NBDiff system. Due to the nature of the
template-systems, the analysis could have been automated. In this analysis,
however, the answers have been manually evaluated by the author. The
evaluation criteria were based on six quality properties: ambiguity,
priority, vagueness, completeness, absence of implementation details and
verifiability, as well as on the correct use of the notation's syntax. The
checklist used for this purpose is available in Section A.3 of Appendix A.
Participants completed 76% of the proposed tasks. Results of the evaluation,
based on 7 quality points, are shown in Figure 5.5, followed by a sketch of
the scoring. They show that the quality of MASTeR requirements was 1.48
times higher than that of EARS and 3.43 times higher than that of natural
language. In the tasks where participants could decide which notation to
use, 57.5% chose MASTeR, 10% chose EARS, 7.5% chose natural language and
25% did not solve the task.
Figure 5.5: Quality of participants’ requirements, grouped by notation.
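A minimal sketch of the 7-point scoring, assuming one point per satisfied criterion of the checklist in Table A.3; the criterion names are paraphrased, not taken from the evaluation scripts.

```javascript
// Sketch of the 7-point quality score: one point per satisfied criterion of
// the checklist in Table A.3 (criterion names are paraphrased assumptions).
function qualityScore(checks) {
  var criteria = ['unambiguous', 'complete', 'notVague', 'prioritised',
                  'verifiable', 'noImplementationDetails', 'correctSyntax'];
  // Count how many of the seven criteria the evaluated requirement satisfies.
  return criteria.filter(function (c) { return checks[c]; }).length;
}

// Example: qualityScore({ unambiguous: true, complete: true }) returns 2.
```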
The average writing times have been collected and grouped in Table 5.3.
It shows that writing requirements using MASTeR took participants 1.39 times
longer than using EARS and 1.5 times longer than using natural language.
Once again, the results indicate that industry professionals spent more time
(1.23 times more) to specify better requirements.
Group | MASTeR | EARS | Natural language
Industry professionals | 14.06 mins | 6.54 mins | 0.46 mins
Students | 8.02 mins | 2.83 mins | 6.35 mins
Total of participants | 8.84 mins | 6.35 mins | 5.89 mins

Table 5.3: Average writing times, grouped by notation.
Based on the experience gained during this section of the test, participants
gave the feedback collected in Figure 5.6.
Figure 5.6: How the test subjects evaluated the writing facilities.
The results indicate a significant difference between the writing facilities
of MASTeR and those of the other notations. Participants specified their
best requirements using MASTeR, which also received the best feedback rate.
However, test subjects took longer to specify requirements using this
notation. This suggests that some interest and training on the part of the
participants is necessary to specify high-quality requirements.
On the other hand, the results suggest that EARS also significantly
improves the specification of requirements.
However, the requirements written by students contained many mistakes
overall, and not all of them managed to write all the solutions. Part of
these results can be attributed to the complexity of the test and to their
lack of experience in specifying requirements. This again suggests that some
level of training is needed to specify high-quality requirements.
5.3.3 Conclusions
The results of the experiment have not indicated any significant difference
in the notations' readability. This is not the case for the ease of writing,
where MASTeR presented a significant improvement.
The collected information about answers, response times and feedback has
been combined and scaled to be comparable with the comparative evaluation
described in Chapter 3. This comparison is shown in Figure 5.7, followed by
a sketch of the scaling step.
Figure 5.7: Comparison between the comparative evaluation and the experi-
ment results.
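A minimal sketch of such a scaling step, assuming a simple min-max normalisation onto [0, 1]; the thesis does not state the exact method used, so this is an assumption.

```javascript
// Sketch: map a result set onto [0, 1] so both evaluations can be compared
// on a common scale (the normalisation method is an assumption).
function normalise(scores) {
  var min = Math.min.apply(null, scores);
  var max = Math.max.apply(null, scores);
  // Assumes the scores are not all equal (max > min).
  return scores.map(function (s) { return (s - min) / (max - min); });
}
```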
The scaled results of the comparative evaluation and the experiment show a
similar tendency: MASTeR is more suitable than EARS. As stated above, some
factors, such as a sample that is not significant enough and the lack of
experience of part of the test subjects, could have influenced the results
of the experiment. However, it is important to emphasise that the results
of the comparative evaluation and the experiment coincide in the same
tendency.
6 Summary
The purpose of this thesis was to evaluate template-systems for requirements
specification. The results of a theoretical evaluation and of an experiment
conducted with industry professionals and students suggest that MASTeR is
the notation that provides higher-quality specifications and the best
reading and writing facilities.
However, this should be considered a tendency. The theoretical evaluation
needs to be contrasted with more data, and the experiment was not performed
with a significant number of experienced users.
The current work is the first in this field and can be taken as a first
approximation that can be improved upon. More notations can be analysed, and
more experienced test subjects should perform the experiment.
A Checklists
This appendix expounds the checklists utilised during the comparative
evaluation.
A.1 Inspecting Quality
The following table shows the checklist used to evaluate the satisfaction of
the quality criteria. The cumulative scores for MASTeR, EARS and Planguage
are presented in columns 2, 3 and 4, respectively.
Question | MASTeR | EARS | Planguage

Clarity
Can all requirements be written in non-technical understandable language? | 1 | 1 | 1
Can any requirement have more than one interpretation? | 2 | 2 | 2
Can each characteristic of the final product be described with a unique terminology? | 3 | 3 | 3
Is there a glossary in which the specific meaning(s) of each term can be defined? | 4 | 4 | 4
Could the requirements be understood and implemented by an independent group? | 5 | 5 | 5

Completeness
Can all terms be defined? | 6 | 6 | 6
Can all units of measure be defined? | 7 | 7 | 6
Can the missing information be defined in the requirement? | 7 | 7 | 7
Should any requirement be specified in more detail? | 7 | 7 | 8
Should any requirement be specified in less detail? | 8 | 8 | 9
Can all of the requirements be included? | 8 | 8 | 9
Can all the requirements related to functionality be included? | 9 | 9 | 10
Could there be any requirement which makes you feel uneasy? | 10 | 10 | 11
Can all requirements related to performance be included? | 11 | 10 | 12
Can all requirements related to design constraints be included? | 12 | 12 | 13
Can all requirements related to attributes be included? | 12 | 12 | 14
Can all requirements related to external interfaces be included? | 13 | 12 | 15
Can all requirements related to databases be included? | 14 | 12 | 16
Can all requirements related to software be included? | 15 | 12 | 17
Can all requirements related to communications be included? | 16 | 12 | 18
Can all requirements related to hardware be included? | 17 | 12 | 19
Can all requirements related to inputs be included? | 18 | 12 | 20
Can all requirements related to outputs be included? | 19 | 12 | 21
Can all requirements related to security be included? | 20 | 12 | 22
Can all requirements related to maintainability be included? | 21 | 12 | 23
Can all requirements related to installation be included? | 21 | 12 | 24
Can all requirements related to criticality be included? | 22 | 12 | 25
Can all requirements related to the permanency limitations be included? | 23 | 12 | 26
Can all possible changes to the requirements be specified? | 23 | 12 | 27
Can the likelihood of change be specified for each requirement? | 24 | 13 | 28

Consistency
Could there be any requirements describing the same object that conflict with other requirements with respect to terminology? | 24 | 13 | 28
Could there be any requirements describing the same object that conflict with respect to characteristics? | 25 | 14 | 29
Could there be any requirements that describe two or more actions that conflict logically? | 26 | 15 | 30
Could there be any requirements that describe two or more actions that conflict temporally? | 27 | 16 | 31

Traceability
Can all requirements be traced back to a specific user need? | 27 | 16 | 32
Can all requirements be traced back to a specific source document or person? | 27 | 16 | 33
Can all requirements be traced forward to a specific design document? | 27 | 16 | 34
Can all requirements be traced forward to a specific software module? | 27 | 16 | 34

Verifiability
Could there be any requirements included which are impossible to implement? | 28 | 17 | 35
For each requirement, could there exist a process that can be executed by a human or a machine to verify the requirement? | 29 | 18 | 36

Modifiability
Could there be any redundancy in the requirements? | 30 | 19 | 37

Content: General
Can each requirement be relevant to the problem and its solution? | 31 | 20 | 38
Can any of the defined requirements contain design details? | 32 | 20 | 39
Can any of the defined requirements contain verification details? | 33 | 22 | 40
Can any of the defined requirements contain project management details? | 34 | 23 | 41

Content: Specific
Inputs
Can all input sources be specified? | 35 | 23 | 42
Can all input accuracy requirements be specified? | 36 | 23 | 43
Can all input range values be specified? | 37 | 23 | 44
Can all input frequencies be specified? | 38 | 23 | 45
Can all input formats be specified? | 39 | 23 | 46

Outputs
Can all output destinations be specified? | 40 | 24 | 47
Can all output accuracy requirements be specified? | 40 | 24 | 48
Can all output range values be specified? | 41 | 24 | 49
Can all output frequencies be specified? | 42 | 24 | 50
Can all output formats be specified? | 43 | 24 | 51

Functions
Can all software functions be specified? | 44 | 25 | 52
Can all system throughput rates be specified? | 45 | 24 | 53

Reliability
Can the consequences of software failure be specified for each requirement? | 46 | 25 | 53
Can the information to be protected from failure be specified? | 46 | 25 | 54
Can a strategy for error detection be specified? | 47 | 26 | 54
Can a strategy for correction be specified? | 48 | 27 | 54

Tradeoffs
Can acceptable tradeoffs be specified for competing attributes? | 48 | 28 | 54

Hardware
Can the minimum memory be specified? | 49 | 28 | 55
Can the minimum storage be specified? | 50 | 28 | 56
Can the maximum memory be specified? | 51 | 28 | 57
Can the maximum storage be specified? | 52 | 28 | 58

Software
Can the required software environments/OS's be specified? | 53 | 28 | 59
Can all of the required software utilities be specified? | 54 | 28 | 60
Can all purchased software products that are to be used with the system be specified? | 55 | 28 | 61

Communication
Can the target network be specified? | 56 | 29 | 62
Can the required network protocols be specified? | 57 | 30 | 63
Can the required network capacity be specified? | 58 | 31 | 64
Can the required/estimated network throughput rate be specified? | 59 | 31 | 64
Can the estimated number of network connections be specified? | 60 | 31 | 64
Can the minimum network performance requirements be specified? | 61 | 31 | 64
Can the maximum network performance requirements be specified? | 62 | 31 | 65
Can the optimal network performance requirements be specified? | 63 | 31 | 66
Can all aspects of the processing be specified for each function? | 64 | 32 | 67
Can all outputs be specified for each function? | 65 | 33 | 68
Can all performance requirements be specified for each function? | 66 | 34 | 69
Can all design constraints be specified for each function? | 67 | 35 | 69
Can all attributes be specified for each function? | 68 | 36 | 70
Can all security requirements be specified for each function? | 69 | 37 | 71
Can all maintainability requirements be specified for each function? | 70 | 37 | 72
Can all database requirements be specified for each function? | 71 | 37 | 73
Can all operational requirements be specified for each function? | 72 | 37 | 74
Can all installation requirements be specified for each function? | 72 | 37 | 75

External interfaces
Can all user interfaces be specified? | 73 | 38 | 75
Can all hardware interfaces be specified? | 73 | 38 | 76
Can all software interfaces be specified? | 74 | 38 | 77
Can all communications interfaces be specified? | 75 | 38 | 78
Can all interface design constraints be specified? | 76 | 38 | 79
Can all interface security requirements be specified? | 77 | 38 | 80
Can all interface maintainability requirements be specified? | 78 | 38 | 81
Can all human-computer interactions be specified for user interfaces? | 79 | 39 | 82

Timing
Can all expected processing times be specified? | 80 | 40 | 83
Can all data transfer rates be specified? | 81 | 40 | 84

Table A.1: Quality Criteria Checklist, based on [Mos97].
A.2 Inspecting Formality
Table A.2 shows the checklist utilised to measure the formality of the
notations in the comparative evaluation (Section 3.2). The results for the
MASTeR, EARS and Planguage templates are given in columns 2, 3 and 4,
respectively.

Question | MASTeR | EARS | Planguage
Does the notation present rules about the structure of the elements? (1) | 1 | 1 | 1
Does the notation present rules related to tokens (2)? | 1 | 1 | 2
Are the tokens represented in symbolic form (3)? | 1 | 1 | 2
Is it necessary to parse (4) the resulting requirement so a human can understand its semantics? | 1 | 1 | 3
Is the notation free of idioms? | 2 | 1 | 4
Is the notation free of poetry and metaphors? | 3 | 2 | 5

Table A.2: Formality Checklist, partly based on [MR13] and [AYI04].

(1) Any language presents structure in its elements, even natural languages like English or German.
(2) Tokens are the allowed elements of the language.
(3) I.e., the use of symbols and logical, arithmetic, etc. connectors.
(4) To parse means to analyse and understand the structure of the requirement.
A.3 Inspecting Participants’ Requirements
Table A.3 shows the checks utilised to measure the quality of the require-
ments written by the participants of the experiment (Section 5.3.2).
Checklist for participants’ requirements
Is the requirement free of ambiguities?
Is the requirement complete?
Is the requirement free of vagueness?
Is the requirement prioritised?
Is the requirement verifiable?
Is the requirement free of inappropriate implementation details?
Is the requirement free of error in the usage of template-systems?
Table A.3: Checklist utilised to test the quality of participants’ requirements.
B Experiment
The content of the web-based experiment is summarised in this appendix.
B.1 Introduction
Figure B.1 shows the questionnaire about the occupation and previous
experience of the participants. It is placed in the introduction of the test
(Section 5.1.1) and is used for statistical purposes.
Figure B.1: Questionnaire about participant’s previous experience and occu-
pation.
B.2 Reading
Table B.1 lists the requirements and questions that participants had to read
and answer in the reading section of the experiment (Section 5.3.1). Table
B.2 lists the feedback statements with which the participants had to agree
or disagree.
B.3 Writing
Table B.3 lists all the descriptions that the test subjects had to specify
in the writing section of the experiment (Section 5.3.2). Table B.4 shows
all the statements of the writing questionnaire.
Requirement | Question
Validate notebook file format. | Is it necessary to specify the file format so that the requirement is complete? Why?
The system shall provide users with the ability to resolve merge conflicts manually. | Is the requirement complete?
If a merge is not intentioned, the system should provide users with the ability to undo the merge. | Could the requirement have more than one interpretation? Why?
If an IPython Notebook file is corrupted, then the system shall display a message to the user. | Should the requirement be specified in more/less detail? Why?
While the system is requesting a notebook from remote servers, the system shall not accept parallels requests to the same notebook. | Is the requirement testable?

Table B.1: Questions to be answered at the reading section.
Statement | Do you agree or disagree?
Natural language requirements are easy to understand. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree
MASTeR requirements are easy to understand. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree
EARS requirements are easy to understand. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree
MASTeR requirements are more complete than natural language requirements. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree
EARS requirements are more complete than natural language requirements. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree

Table B.2: Content of the reading section's feedback questionnaire.
Description | Notation
This functionality allows the user to diff and display modifications made between two versions of the same IPython notebook, allowing the user to view the added, removed or modified elements. | MASTeR, EARS, Natural language
This functionality allows the user to generate a diff of a notebook from two local IPYNB files, provided by the user. | MASTeR
This functionality generates a diff of a notebook on a line based level for text based cells, indicating to the user when a cell has been modified and which lines were added or deleted. | EARS
This functionality generates a diff of a notebook on a line based level for text based cells, indicating to the user when a cell has been modified and which lines were added or deleted. | MASTeR, EARS, Natural language

Table B.3: Descriptions of the system functionalities to be specified in the writing section.
Statement | Do you agree or disagree?
Descriptions were easy to specify in MASTeR. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree
Descriptions were easy to specify in EARS. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree
Descriptions were easy to specify in natural language. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree
MASTeR contains a full set of templates. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree
EARS contains a full set of templates. | ◻ Strongly Disagree ◻ Disagree ◻ Agree ◻ Strongly Agree

Table B.4: Content of the writing section's feedback questionnaire.
Bibliography
[Abr10] Jean-Raymond Abrial. Modeling in Event-B: System and Software Engineering. Cambridge University Press, New York, NY, USA, 1st edition, 2010.

[AYI04] Yunus Adanur, Oktay Yagiz, and Ahmet Işik. Mathematics and language. Journal of the Korea Society of Mathematical Education Series, 8:31–27, Mar 2004.

[BF14] Pierre Bourque and R. E. Fairley. SWEBOK: Guide to the Software Engineering Body of Knowledge, volume 3. IEEE Computer Society, 2014.

[Boe79] Barry Boehm. Guidelines for verifying and validating software requirements and design specifications. Euro IFIP 79, pages 711–719, 1979.

[Che13] Lianping Chen. Characterizing architecturally significant requirements. IEEE Software, 30(2):38–45, Apr 2013.

[Dav95] A. M. Davis. Remontando: Una Necesidad Simple Descuidó. IEEE Software, 1995.

[DSP10] Definition and Exemplification of RSL and RMM, CESAR Project. Report, EADS Deutschland GmbH - IW, Germany, 2010.

[Eak96] Kevin Eakin. Implementing ESPITI on a European scale. Journal of Systems Architecture, 42(8):579–582, 1996.

[Ebe08] Christof Ebert. Systematisches Requirements-Engineering und Management: Anforderungen ermitteln, spezifizieren, analysieren und verwalten. dpunkt-Verlag, 2008.

[Eck15] Florian Eckey. Konzeption und Entwicklung einer Software zur Konsistenzhaltung zwischen BPMN Modellierung und textueller Anforderungsdokumentation, 2015.

[Far12] S. Farfeleder. Requirements Specification and Analysis for Embedded Systems. PhD thesis, Vienna University of Technology, 2012.

[GDW09] Alexander Grosskopf, Gero Decker, and Mathias Weske. The process: business process modeling using BPMN. Meghan-Kiffer Press, 2009.

[Gil05] Tom Gilb. Competitive engineering: a handbook for systems engineering, requirements engineering, and software engineering using Planguage. Butterworth-Heinemann, 2005.

[Gro15] Katharina Großer. Investigating the use of ontology techniques as a support for on-board software requirement engineering, 2015.

[HAL+14] Jennifer Horkoff, Fatma Basak Aydemir, Feng-Lin Li, Tong Li, and John Mylopoulos. Conceptual Modeling: Evaluating Modeling Languages, An Example from the Requirements Domain, pages 260–274. 33rd International Conference, 2014.

[HJD05] Elizabeth Hull, Ken Jackson, and Jeremy Dick. Requirements engineering. Springer, 2005.

[ISO98] IEEE recommended practice for software requirements specifications. Standard, International Organization for Standardization, Switzerland, 1998.

[ISO10] Systems and software engineering — vocabulary. Standard, International Organization for Standardization, Switzerland, 2010.

[ISO11] Systems and software engineering - life cycle processes - requirements engineering. Standard, International Organization for Standardization, Switzerland, 2011.

[Jac01] Daniel Jackson. Alloy: a lightweight object modelling notation. Laboratory of Computer Science, Massachusetts Institute of Technology, November 2001.

[Joh12] V. Johannessen. CESAR - text vs. boilerplates: What is more efficient - requirements written as free text or using boilerplates (templates), 2012.

[KC05] Sascha Konrad and Betty H. C. Cheng. Facilitating the construction of specification pattern-based properties. 13th IEEE International Requirements Engineering Conference, 2005.

[Lav03] L. Lavazza. Rigorous Description of Software Requirements with UML. 15th International Conference on Software Engineering and Knowledge Engineering (SEKE2003), July 2003.

[Mav10] Alistair Mavin. EARS Tutorial. Rolls-Royce, 2010.

[Mos97] Daniel J. Mosley. Software requirements specification walkthrough checklist. Technical report, CSST Technologies, Inc., 1997.

[MR13] Brad Miller and David Ranum. Formal and natural languages, 2013.

[MSKB11] D. Majumdar, S. Sengupta, A. Kanjilal, and S. Bhattacharya. Automated Requirements Modelling With Adv-EARS. International Journal of Information Technology Convergence and Services, 2011.

[MWHN09] A. Mavin, P. Wilkinson, A. Harwood, and M. Novak. EARS (Easy Approach to Requirements Syntax). 17th IEEE International Requirements Engineering Conference, 2009.

[Poh96] Klaus Pohl. Requirements engineering: an overview. RWTH, Fachgruppe Informatik, 1996.

[RdS14a] Chris Rupp and die SOPHISTen. Requirements Templates - The Blueprint of your Requirement. SOPHIST GmbH, 2014.

[RdS14b] Chris Rupp and die SOPHISTen. Schablonen für alle Fälle. SOPHIST GmbH, 2014.

[RR06] Suzanne Robertson and James Robertson. Mastering the requirements process. Addison-Wesley, 2nd edition, 2006.

[Sim01] Erik Simmons. Quantifying quality requirements using Planguage. Technical report, Intel Corporation, Hillsboro, 2001.

[Sim11] Erik Simmons. 21st Century Requirements Engineering: A Pragmatic Guide to Best Practices. PNSQC, 2011.

[Som11] Ian Sommerville. Software engineering. Pearson, 2011.

[Ter13] John Terzakis. EARS: the easy approach to requirements syntax, Jul 2013.

[WD96] Jim Woodcock and Jim Davies. Using Z: specification, refinement, and proof. Prentice Hall, 1996.

[WH12] Matt Wynne and Aslak Hellesoy. The Cucumber Book: Behaviour-Driven Development for Testers and Developers. The Pragmatic Bookshelf, 2012.

[Wie99] Karl E. Wiegers. Writing quality requirements. Software Development, May 1999.
Mais conteúdo relacionado

Mais procurados

Thesis Nha-Lan Nguyen - SOA
Thesis Nha-Lan Nguyen - SOAThesis Nha-Lan Nguyen - SOA
Thesis Nha-Lan Nguyen - SOA
Nha-Lan Nguyen
 
software-testing-framework 3
software-testing-framework 3software-testing-framework 3
software-testing-framework 3
pradeepcutz
 
NIC Project Final Report
NIC Project Final ReportNIC Project Final Report
NIC Project Final Report
Kay Karanjia
 
Concrete Design and TestingReport_Pennington_Melissa
Concrete Design and TestingReport_Pennington_MelissaConcrete Design and TestingReport_Pennington_Melissa
Concrete Design and TestingReport_Pennington_Melissa
Melissa Pennington
 
Managing sap upgrade_projects
Managing sap upgrade_projectsManaging sap upgrade_projects
Managing sap upgrade_projects
Kishore Kumar
 
Expert tm syllabus beta version 041511
Expert tm syllabus beta version 041511Expert tm syllabus beta version 041511
Expert tm syllabus beta version 041511
jicheng687
 
Trainee Tracking System
Trainee Tracking SystemTrainee Tracking System
Trainee Tracking System
Rajvi Parekh
 
Pragmatic+unit+testing+in+c%23+with+n unit%2 c+second+edition
Pragmatic+unit+testing+in+c%23+with+n unit%2 c+second+editionPragmatic+unit+testing+in+c%23+with+n unit%2 c+second+edition
Pragmatic+unit+testing+in+c%23+with+n unit%2 c+second+edition
cuipengfei
 

Mais procurados (17)

Thesis Nha-Lan Nguyen - SOA
Thesis Nha-Lan Nguyen - SOAThesis Nha-Lan Nguyen - SOA
Thesis Nha-Lan Nguyen - SOA
 
software-testing-framework 3
software-testing-framework 3software-testing-framework 3
software-testing-framework 3
 
NIC Project Final Report
NIC Project Final ReportNIC Project Final Report
NIC Project Final Report
 
Concrete Design and TestingReport_Pennington_Melissa
Concrete Design and TestingReport_Pennington_MelissaConcrete Design and TestingReport_Pennington_Melissa
Concrete Design and TestingReport_Pennington_Melissa
 
Report
ReportReport
Report
 
Researchproject
ResearchprojectResearchproject
Researchproject
 
Master Thesis
Master ThesisMaster Thesis
Master Thesis
 
Ibm tivoli ccmdb implementation recommendations
Ibm tivoli ccmdb implementation recommendationsIbm tivoli ccmdb implementation recommendations
Ibm tivoli ccmdb implementation recommendations
 
Managing sap upgrade_projects
Managing sap upgrade_projectsManaging sap upgrade_projects
Managing sap upgrade_projects
 
The Readiness Plan A Spotlight on Customer Success
The Readiness Plan A Spotlight on Customer SuccessThe Readiness Plan A Spotlight on Customer Success
The Readiness Plan A Spotlight on Customer Success
 
Expert tm syllabus beta version 041511
Expert tm syllabus beta version 041511Expert tm syllabus beta version 041511
Expert tm syllabus beta version 041511
 
Trainee Tracking System
Trainee Tracking SystemTrainee Tracking System
Trainee Tracking System
 
THE IMPACT OF SOCIALMEDIA ON ENTREPRENEURIAL NETWORKS
THE IMPACT OF SOCIALMEDIA ON ENTREPRENEURIAL NETWORKSTHE IMPACT OF SOCIALMEDIA ON ENTREPRENEURIAL NETWORKS
THE IMPACT OF SOCIALMEDIA ON ENTREPRENEURIAL NETWORKS
 
Final Thesis Paper
Final Thesis PaperFinal Thesis Paper
Final Thesis Paper
 
Solmanfocusedbuild
SolmanfocusedbuildSolmanfocusedbuild
Solmanfocusedbuild
 
Pragmatic+unit+testing+in+c%23+with+n unit%2 c+second+edition
Pragmatic+unit+testing+in+c%23+with+n unit%2 c+second+editionPragmatic+unit+testing+in+c%23+with+n unit%2 c+second+edition
Pragmatic+unit+testing+in+c%23+with+n unit%2 c+second+edition
 
Nato1968
Nato1968Nato1968
Nato1968
 

Destaque

6 клас творчий натюрморт чарівний світ предметів вабить наші очі
6 клас творчий натюрморт чарівний світ предметів вабить наші очі6 клас творчий натюрморт чарівний світ предметів вабить наші очі
6 клас творчий натюрморт чарівний світ предметів вабить наші очі
ирина ковальчук
 
Grade Plus International Catalogue 2016-2017
Grade Plus International Catalogue 2016-2017Grade Plus International Catalogue 2016-2017
Grade Plus International Catalogue 2016-2017
Sheikh Sultan
 
ROZA Sports Catalogue 2015-2016
ROZA Sports Catalogue 2015-2016ROZA Sports Catalogue 2015-2016
ROZA Sports Catalogue 2015-2016
Sheikh Sultan
 
Donhauser - 2012 - Jump Variation From High-Frequency Asset Returns
Donhauser - 2012 - Jump Variation From High-Frequency Asset ReturnsDonhauser - 2012 - Jump Variation From High-Frequency Asset Returns
Donhauser - 2012 - Jump Variation From High-Frequency Asset Returns
Brian Donhauser
 

Destaque (17)

Carlos Joaquín: Competitividad turística, base del desarrollo
Carlos Joaquín: Competitividad turística, base del desarrolloCarlos Joaquín: Competitividad turística, base del desarrollo
Carlos Joaquín: Competitividad turística, base del desarrollo
 
6 клас творчий натюрморт чарівний світ предметів вабить наші очі
6 клас творчий натюрморт чарівний світ предметів вабить наші очі6 клас творчий натюрморт чарівний світ предметів вабить наші очі
6 клас творчий натюрморт чарівний світ предметів вабить наші очі
 
Neumec Adagio Mulund West
Neumec Adagio Mulund WestNeumec Adagio Mulund West
Neumec Adagio Mulund West
 
Батькам на замітку
Батькам на заміткуБатькам на замітку
Батькам на замітку
 
Grade Plus International Catalogue 2016-2017
Grade Plus International Catalogue 2016-2017Grade Plus International Catalogue 2016-2017
Grade Plus International Catalogue 2016-2017
 
Muktzeh slide set 1
Muktzeh slide set 1Muktzeh slide set 1
Muktzeh slide set 1
 
North east india paradise unexplored
North east india paradise unexploredNorth east india paradise unexplored
North east india paradise unexplored
 
PTSA Coffee with Administration Slide Presentation (8/20/15)
PTSA Coffee with Administration Slide Presentation (8/20/15)PTSA Coffee with Administration Slide Presentation (8/20/15)
PTSA Coffee with Administration Slide Presentation (8/20/15)
 
ROZA Sports Catalogue 2015-2016
ROZA Sports Catalogue 2015-2016ROZA Sports Catalogue 2015-2016
ROZA Sports Catalogue 2015-2016
 
ベイズの識別規則
ベイズの識別規則ベイズの識別規則
ベイズの識別規則
 
Carp Andreea Georgiana1
Carp Andreea Georgiana1Carp Andreea Georgiana1
Carp Andreea Georgiana1
 
Donhauser - 2012 - Jump Variation From High-Frequency Asset Returns
Donhauser - 2012 - Jump Variation From High-Frequency Asset ReturnsDonhauser - 2012 - Jump Variation From High-Frequency Asset Returns
Donhauser - 2012 - Jump Variation From High-Frequency Asset Returns
 
Photoshoot 2
Photoshoot 2Photoshoot 2
Photoshoot 2
 
Ptsa meeting 1 15-15
Ptsa meeting 1 15-15Ptsa meeting 1 15-15
Ptsa meeting 1 15-15
 
Oberoi Eternia & Enigma
Oberoi Eternia & EnigmaOberoi Eternia & Enigma
Oberoi Eternia & Enigma
 
Viaje a paris
Viaje a parisViaje a paris
Viaje a paris
 
Youth Survey Results
Youth Survey ResultsYouth Survey Results
Youth Survey Results
 

Semelhante a BA_FCaballero

Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Jason Cheung
 
Thesis - Nora Szepes - Design and Implementation of an Educational Support Sy...
Thesis - Nora Szepes - Design and Implementation of an Educational Support Sy...Thesis - Nora Szepes - Design and Implementation of an Educational Support Sy...
Thesis - Nora Szepes - Design and Implementation of an Educational Support Sy...
Nóra Szepes
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_final
Gustavo Pabon
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_final
Gustavo Pabon
 
SocioTechnical-systems-sim
SocioTechnical-systems-simSocioTechnical-systems-sim
SocioTechnical-systems-sim
Rub Afonso
 
ImplementationOFDMFPGA
ImplementationOFDMFPGAImplementationOFDMFPGA
ImplementationOFDMFPGA
Nikita Pinto
 
Aspect_Category_Detection_Using_SVM
Aspect_Category_Detection_Using_SVMAspect_Category_Detection_Using_SVM
Aspect_Category_Detection_Using_SVM
Andrew Hagens
 

Semelhante a BA_FCaballero (20)

Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
 
Thesis - Nora Szepes - Design and Implementation of an Educational Support Sy...
Thesis - Nora Szepes - Design and Implementation of an Educational Support Sy...Thesis - Nora Szepes - Design and Implementation of an Educational Support Sy...
Thesis - Nora Szepes - Design and Implementation of an Educational Support Sy...
 
dissertation
dissertationdissertation
dissertation
 
Thesis_Report
Thesis_ReportThesis_Report
Thesis_Report
 
Thesis
ThesisThesis
Thesis
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_final
 
UCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_finalUCHILE_M_Sc_Thesis_final
UCHILE_M_Sc_Thesis_final
 
online examination management system
online examination management systemonline examination management system
online examination management system
 
SocioTechnical-systems-sim
SocioTechnical-systems-simSocioTechnical-systems-sim
SocioTechnical-systems-sim
 
An Optical Character Recognition Engine For Graphical Processing Units
An Optical Character Recognition Engine For Graphical Processing UnitsAn Optical Character Recognition Engine For Graphical Processing Units
An Optical Character Recognition Engine For Graphical Processing Units
 
diss
dissdiss
diss
 
thesis_online
thesis_onlinethesis_online
thesis_online
 
Work Measurement Application - Ghent Internship Report - Adel Belasker
Work Measurement Application - Ghent Internship Report - Adel BelaskerWork Measurement Application - Ghent Internship Report - Adel Belasker
Work Measurement Application - Ghent Internship Report - Adel Belasker
 
ImplementationOFDMFPGA
ImplementationOFDMFPGAImplementationOFDMFPGA
ImplementationOFDMFPGA
 
Aregay_Msc_EEMCS
Aregay_Msc_EEMCSAregay_Msc_EEMCS
Aregay_Msc_EEMCS
 
web_based_ide
web_based_ideweb_based_ide
web_based_ide
 
Aspect_Category_Detection_Using_SVM
Aspect_Category_Detection_Using_SVMAspect_Category_Detection_Using_SVM
Aspect_Category_Detection_Using_SVM
 
Fraser_William
Fraser_WilliamFraser_William
Fraser_William
 
Student database management system PROJECT
Student database management system PROJECTStudent database management system PROJECT
Student database management system PROJECT
 
document
documentdocument
document
 

BA_FCaballero

  • 1. Bachelorarbeit Comparative evaluation of template-systems for requirements documentation Francisco Jos´e Caballero Cerezo September 12, 2016 Gutachter: Prof. Dr. Jan J¨urjens M. Sc. Katharina Großer Prof. Dr. Jan J¨urjens Institut f¨ur Softwaretechnik Institut f¨ur Informatik Universtit¨at Koblenz Universit¨atsstraße 1 56070 Koblenz https://rgse.uni-koblenz.de
  • 2. Francisco Jos´e Caballero Cerezo francaballero@uni-koblenz.de Matrikelnummer: 215202495 Studiengang: Bachelor Informatik Pr¨ufungsordnung: PO2012 Thema: Comparative evaluation of template-systems for requirements doc- umentation Eingereicht: September 12, 2016 Betreuer: Prof. Dr. Jan J¨urjens M. Sc. Katharina Großer Prof. Dr. Jan J¨urjens Institut f¨ur Softwaretechnik Institut f¨ur Informatik Universtit¨at Koblenz Universit¨atsstraße 1 56070 Koblenz
  • 3. i
  • 4. ii Ehrenw¨ortliche Erkl¨arung Ich erkl¨are hiermit ehrenw¨ortlich, dass ich die vorliegende Arbeit selbst- st¨andig angefertigt habe; die aus fremden Quellen direkt oder indirekt ¨uber- nommenen Gedanken sind als solche kenntlich gemacht. Die Arbeit wurde bisher keiner anderen Pr¨ufungsbeh¨orde vorgelegt und auch noch nicht ver¨offentlicht. Koblenz, den September 12, 2016 Francisco Jos´e Caballero Cerezo
  • 5. iii Abstrakt Die vorliegende Bachelorarbeit umfasst die Auswertung verschiedener Tem- plate-Systeme f¨ur die Anforderungsdokumentation. Die Auswahl der richtigen Template-Systeme beeinflusst den Erfolg jedes Softwareprojektes. Zun¨achst wurde eine theoretische Evaluation durchge- f¨uhrt, um die St¨arken und Schw¨achen der Template-Systeme zu analysieren. Um an der Anforderungsdokumentation mitwirken zu k¨onnen, wurde an- schließend eine Benutzeroberfl¨ache f¨ur Template-Systeme implementiert. Die vergleichende Evaluation wurde schlussendlich von einem Experiment begleitet, welches mithilfe von Industriefachpersonal und Studenten durchge- f¨uhrt wurde. Die Ergebnisse weisen darauf hin, dass Template-Systeme eine gute Verbesse- rungsstrategie f¨ur die Anforderungsdokumentation sind und MASTeR das passendere darstellt. This thesis deals with the evaluation of different template-systems for re- quirements specification. The selection of the right template-system influences the success of any software project. As a first step, a theoretical evaluation was conducted to analyse the strengths and weaknesses of the template-systems. Second, in order to assist in the requirements documentation, a user interface for template-systems was implemented. Finally, the comparative evaluation was supported through an experiment performed with industrial profes- sionals and students. The results suggest that template-systems are a good improvement for re- quirements specification and MASTeR would be the more suitable one.
  • 6. iv Acknowledgements Firstly, I would like to express my sincere gratitude to Prof. Dr. Jan J¨ur- jens for his support and for providing me with the opportunity to write this Bacherlor’s thesis. I would also like to thank M. Sc. Katharina Großer for her useful comments, remarks and patience. Her guidance has helped me in all the time of research and writing of this thesis. My sense of gratitude to one and all, who directly or indirectly, have sup- ported me in this venture. Especially, I would like to thank everyone who has participated in the experiment. Without their help, the test could not have been successfully conducted. I also take this opportunity to thank my parents and my friends for sup- porting me from a distance. Lastly, I am also grateful to my partner who has backed me throughout the entire process by keeping me harmonious and encouraged. Este trabajo est´a dedicado a mi abuelo Paco, una persona que ha sido mi referente y ejemplo durante toda mi vida y que me ha hecho ser quien soy a d´ıa de hoy.
  • 7. CONTENTS v Contents 1 Introduction 1 1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Targets of the Thesis . . . . . . . . . . . . . . . . . . . . . . . 2 1.3 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.4 Structure of the thesis . . . . . . . . . . . . . . . . . . . . . . . 4 2 State of the Art in Requirements Specification 5 2.1 Requirements Engineering . . . . . . . . . . . . . . . . . . . . 5 2.1.1 Software Requirements . . . . . . . . . . . . . . . . . . 5 2.1.2 Requirement Engineering Processes . . . . . . . . . . 9 2.2 Requirements Specification . . . . . . . . . . . . . . . . . . . . 12 2.2.1 MASTeR . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.2.2 EARS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 2.2.3 Planguage . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3 Comparative Evaluation 23 3.1 Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.1.1 First Approach . . . . . . . . . . . . . . . . . . . . . . . 23 3.1.2 Second Approach . . . . . . . . . . . . . . . . . . . . . 25 3.1.3 Formula variations . . . . . . . . . . . . . . . . . . . . 28 3.2 Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 3.3 Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . 30 4 User Interface for Template-Systems 32 4.1 Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 4.1.1 Technologies . . . . . . . . . . . . . . . . . . . . . . . . 32 4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 5 Users Experiment 38 5.1 Experiment design . . . . . . . . . . . . . . . . . . . . . . . . . 38 5.1.1 Introduction section . . . . . . . . . . . . . . . . . . . . 39 5.1.2 Reading section . . . . . . . . . . . . . . . . . . . . . . 40 5.1.3 Writing section . . . . . . . . . . . . . . . . . . . . . . . 40 5.2 Test subjects . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 5.3 Results and Analysis . . . . . . . . . . . . . . . . . . . . . . . . 41 5.3.1 Reading requirements . . . . . . . . . . . . . . . . . . . 42 5.3.2 Writing requirements . . . . . . . . . . . . . . . . . . . 44
  • 8. CONTENTS vi 5.3.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 46 6 Summary 48 A Checklists 49 A.1 Inspecting Quality . . . . . . . . . . . . . . . . . . . . . . . . . 49 A.2 Inspecting Formality . . . . . . . . . . . . . . . . . . . . . . . . 54 A.3 Inspecting Participants’ Requirements . . . . . . . . . . . . . 54 B Experiment 55 B.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 B.2 Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 B.3 Writing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Bibliography 59
  • 9. LIST OF FIGURES vii List of Figures 2.1 Classification of requirements types, extracted from [Poh96]. 6 2.2 RE four processes, adapted from [Poh96]. . . . . . . . . . . . 9 2.3 Users of a requirements document, extracted from [Som11]. 11 2.4 Example of schema represented in Z notation, adapted from [WD96]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.5 Requirements are specified by a combination of templates and attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.6 FunctionalMASTeR template, adapted from [RdS14b]. . . . 14 2.7 PropertyMASTeR template, adapted from [RdS14b]. . . . . 16 2.8 EnvironmentMASTeR template, adapted from [RdS14b]. . 16 2.9 ProcessMASTeR template, adapted from [RdS14b]. . . . . . 17 2.10 ConditionMASTeR template, adapted from [RdS14b]. . . . 17 2.11 LogicMASTeR template, adapted from [RdS14b]. . . . . . . 18 2.12 EventMASTeR template, adapted from [RdS14b]. . . . . . . 18 2.13 TimeMASTeR template, adapted from [RdS14b]. . . . . . . 19 3.1 Checklist-based inspection results (detailed). . . . . . . . . . 26 3.2 Results of the formula application for first and second vari- ations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 4.1 W3C HTML successful validation. . . . . . . . . . . . . . . . 34 4.2 W3C CSS successful validation. . . . . . . . . . . . . . . . . . 34 4.3 Main page, containing MASTeR and EARS introduction. . 34 4.4 MASTeR view, with the control panel. . . . . . . . . . . . . 35 4.5 How requirements can be specified with the tool. . . . . . . . 35 4.6 Glossary view, where terms can be defined. . . . . . . . . . . 36 4.7 Results can be exported at glossary view. . . . . . . . . . . . 36 4.8 The responsive design adapts the page layout to different resolutions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 5.1 Component diagram of the web-based test. . . . . . . . . . . 39 5.2 The user interface developed in Chapter 4 was adapted to the web-based experiment. . . . . . . . . . . . . . . . . . . . . 41 5.3 Correctness of participants’ answers in reading section, grouped by notation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 5.4 How the test subjects evaluated the reading facilities. . . . . 43 5.5 Quality of participants’ requirements, grouped by notation. 44
  • 10. LIST OF FIGURES viii 5.6 How the test subjects evaluated the writing facilities. . . . . 45 5.7 Comparison between the comparative evaluation and the ex- periment results. . . . . . . . . . . . . . . . . . . . . . . . . . . 46 B.1 Questionnaire about participant’s previous experience and occupation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
1 Introduction

This Bachelor's thesis belongs to the T-Reqs project, a project of the Software Engineering research group of the Institute for Software Technology (IST) at the University of Koblenz-Landau, in collaboration with the European Space Agency (ESA). The T-Reqs project is focused on technologies that can improve the quality of requirements with respect to precision, accuracy and completeness through the use of ontologies. Ontologies are information models that make descriptions of concepts of the real world readable and accessible for humans and machines. Template-systems are semi-formal notations for requirements specification that support this ontology approach. The goal of this thesis is to select and evaluate template-systems.

1.1 Motivation

In 1996, the ESPITI (European Software Process Improvement Training Initiative) project stated that one of the most important reasons for software projects' cancellations, cost overruns or delays was related to poor requirements specifications [Eak96].

Semi-formal notations, like template-systems, make it possible to construct requirements following a plan (a template) and assigning a similar structure to each requirement. This approach helps to avoid mistakes from the early stages of the development process by specifying high-quality requirements in a cost- and time-efficient way [RdS14a].

Notations for requirements specification have been studied for decades and many semi-formal notations have been designed and developed. Some of them are as follows:

• Template-systems, like:
  – MASTeR (Mustergültige Anforderungen - die SOPHIST Templates für Requirements) [RdS14b].
  – EARS (Easy Approach to Requirements Syntax) [MWHN09].
  – Adv-EARS (Advanced-EARS) [MSKB11].
  – Boilerplates [HJD05].
  – Extensions to Boilerplates [Far12].
  – RLS (Requirements Specification Language) of the CESAR project [DSP10].
  – Gherkin [WH12].
  – Spider (Specification Pattern Instantiation and Derivation Environment) [KC05].
• Keyword-driven languages, like Planguage [Gil05] or Volatile [RR06].
• Controlled syntaxes, like the ISO/IEC/IEEE 29148 syntax recommendations [ISO11].

Every system has different strengths and weaknesses and produces different outputs. Before starting a project, an analysis to determine the most appropriate notation should be performed. This can become very complex depending on the number of notations analysed, and it will greatly affect the results of the quality tests and the development process.

1.2 Targets of the Thesis

The main target of this thesis is to select and compare different template-systems by evaluating which strengths and weaknesses they present over other templates and unrestricted natural language. A theoretical evaluation of the selected semi-formal notations will be performed. Later on, this will be analysed and contrasted by conducting an experiment on users. A second target is to develop a user interface for template-systems that can assist researchers and industry professionals in the usage of the notations. The required steps to accomplish these goals are as follows:

1. Search for and select a variety of different template notations by examining the existing literature.
2. Conduct a theoretical evaluation of the selected notations, based on human readability, use of natural language, formality, expressiveness, etc.
3. Choose a selection of the most promising template-systems for the experiment, based on the results of the comparative evaluation.
4. Develop a user interface for the selected systems.
5. Design, implement, execute and analyse the experiment.
6. Summarise the work and give an insight into future work.

1.3 Related Work

In recent years, much research about template-systems has been conducted. Some exemplary works are those of Farfeleder [Far12] and Majumdar et al. [MSKB11].

The PhD thesis of Farfeleder contributed a framework for requirements specification and requirements analysis. In it, Farfeleder presented, among other things, an extension of the original Boilerplates set of Hull et al. [HJD05] that includes role and type attributes, allows several strings in one single attribute and defines a set of default attributes.

Majumdar et al., for their part, proposed a syntax called Adv-EARS, based on EARS [MWHN09]. That proposal is focused on identifying the elements of a use case diagram¹ from which the latter would be generated automatically. They accomplished this by extending the existing requirement syntaxes of EARS and also introducing a new requirement type (the hybrid requirement).

However, little research has focused on comparing the different template-systems.

Johannessen, in his Master's thesis [Joh12], presented a comparison between natural language and the Boilerplates of Hull et al. by carrying out a test with university students. His experiment concluded without a clear answer, but the results suggested that requirements' errors are easier to find in Boilerplates.

In the presentation of EARS [MWHN09], Mavin et al. expounded a case study that compared natural language requirements from the CS-E (Certification Specification for Engines) document and their EARS interpretation. Quantitative results for the case study were provided, which stated that EARS presents a number of advantages over the use of natural language.

¹ "A UML diagram that shows actors, use cases, and their relationships" [ISO10].

Nevertheless, no publication has been found by the author that selects and evaluates different template-systems. As mentioned at the beginning of the chapter, the selection of the most suitable notation is essential for the future of a project. This thesis is the first study in this area. It will evaluate the strengths and weaknesses of different template-systems based on metrics such as reading and writing facilities, expressiveness, etc. This evaluation will be tested and contrasted with the results of an experiment performed with test subjects (industry professionals, scientific researchers and students).

1.4 Structure of the Thesis

After the introduction given in the current chapter, the document has the following structure:

Chapter 2 This chapter will explain the state of the art of requirements specification, from the basic concepts of Requirements Engineering to the description of template-systems for requirements specification.

Chapter 3 The comparative evaluation of the selected template-systems will be detailed in this chapter.

Chapter 4 A user interface for the specification of requirements through template-systems will be developed. This chapter explains the aim and the results of this tool.

Chapter 5 This chapter will describe the experiment design, the test subjects and the analysis of results of the test performed to evaluate the different template-systems.

Chapter 6 The final chapter will contain a summary of the work performed and an outlook on future work.
2 State of the Art in Requirements Specification

In this chapter, the foundations of this thesis will be presented. First, the basics of Requirements Engineering will be briefly described. After that, requirements specification techniques will be explained in more detail and the selected template-systems will be expounded and described.

2.1 Requirements Engineering

Requirements Engineering (RE) refers to the process of discovering, analysing and verifying the requirements of a system [Som11]. The correct understanding (elicitation), documentation (specification) and validation of the customer needs is essential to make the system meet the user needs [Poh96]. Corrections to requirements after delivering the product are normally between 60 and 100 times more expensive than changes at early stages of the development [Dav95].

2.1.1 Software Requirements

A software requirement is a service or a constraint that has to be satisfied by a system. At a high abstraction level, it is a property that must be demonstrated by software in order to solve some problem in the real world [BF14]. According to ISO/IEC/IEEE 24765 Systems and software engineering - Vocabulary [ISO10], a requirement is: "A condition or capability that must be met or possessed by a system, system component, product, or service to satisfy an agreement, standard, specification, or other formally imposed documents".
Many authors have proposed different classifications of requirements types. In 1996, Pohl [Poh96] stated a classification of requirements types, shown in Figure 2.1. According to his work, the different types of requirements are as follows:

Functional Requirements Also known as capabilities or features, functional requirements represent the functions that the software has to provide (e.g., formatting text, modulating a signal...) [BF14]. They describe the services that the system should provide and how the system should react in particular situations. As Figure 2.1 shows, external interfaces for users, hardware and software express functionality.

Non-Functional Requirements Non-functional requirements, sometimes known as constraints or quality requirements, do not describe functionalities, but properties that the system has to fulfill. They constrain the solution [BF14]. According to the classification of Pohl (Figure 2.1), they can be classified as requirements of portability, maintainability, error situation, design constraints, error description, error handling, user supply, safety-security, test cases, performance, backup/recovery, cost constraints, time constraints and documentation [Poh96].

Requirements Quality Criteria

According to Wiegers [Wie99], a high-quality requirement should be:

Correct The requirement should accurately describe the functionality to be delivered. In addition, there should not be any conflicts with other requirements of the corresponding system. A source of the requirement (e.g., a stakeholder¹ or a higher-level specification) should be provided as a reference of correctness. Regarding requirements inspections, representative users can determine the correctness of user requirements.

Feasible It must be possible to implement the requirement according to the capabilities and limitations of the system. In order to avoid infeasible features (e.g., requirements that are technologically overly complex or too costly), domain experts should participate during the elicitation and the negotiation process (Section 2.1.2).

¹ "Individual or organization having a right, share, claim, or interest in a system or in its possession of characteristics that meet their needs and expectations" [ISO10].

Necessary The requirement should reflect something that the customer/user really needs or something that is required for conformance to an external requirement, external interface, or standard. If a requirement is not traceable back to its source, it may not really be necessary.

Prioritized Each requirement should have an implementation priority. An agreement with the customer should be made in order to identify the most important requirements, being prepared for changes during development, budget cuts, schedule overruns or any other project management risk. An example of priority levels is the following:

• High priority: The requirement must be in the next product release.
• Medium priority: The requirement is necessary, but can be postponed to a later release.
• Low priority: The requirement is nice to have, but it may be dropped if there is insufficient time or resources.

Modal verbs, such as shall, should or will, are used to express liability (Section 2.2.1) and also provide priorities.

Unambiguous Requirements should have only one possible interpretation by the reader. Natural language is highly prone to ambiguity; for this reason, subjective words (such as user-friendly, easy, simple, rapid, efficient, several, improved, maximize, minimize...) should be avoided. The requirement should be written in a simple and straightforward language of the user domain. Example: "The HTML Parser shall produce an HTML markup error report which allows quick resolution of errors when used by HTML novices.". The ambiguity is produced by the word quick.

Verifiable It must be possible to determine whether the requirement is properly implemented in the product. A requirement needs to be correct, feasible, and unambiguous in order to be verifiable. This property can also be found in the literature under the name testable.
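These per-requirement criteria lend themselves to a simple checklist representation. The following TypeScript sketch is a hypothetical illustration by way of example, not part of the thesis' tooling or of Wiegers' work; it shows how a reviewer's verdicts on the six criteria could be recorded and turned into a coarse quality score:

```typescript
// Hypothetical checklist record for one requirement, following
// Wiegers' six quality criteria. Names are illustrative only.
interface QualityChecklist {
  correct: boolean;
  feasible: boolean;
  necessary: boolean;
  prioritized: boolean;
  unambiguous: boolean;
  verifiable: boolean;
}

// Fraction of satisfied criteria, scaled to 0-100.
function qualityScore(c: QualityChecklist): number {
  const checks = [c.correct, c.feasible, c.necessary,
                  c.prioritized, c.unambiguous, c.verifiable];
  const passed = checks.filter(Boolean).length;
  return Math.round((passed / checks.length) * 100);
}

// Example: a requirement that is clear and testable but has no priority.
const review: QualityChecklist = {
  correct: true, feasible: true, necessary: true,
  prioritized: false, unambiguous: true, verifiable: true,
};
console.log(qualityScore(review)); // 83
```

A checklist of this kind, aggregated over several evaluators, is also the idea behind the checklist-based inspection used in Chapter 3.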
2.1.2 Requirements Engineering Processes

Many authors, like Bourque et al. [BF14], Sommerville [Som11] or Pohl [Poh96], have proposed different definitions of the requirements engineering processes. In 1996, Pohl stated four tasks that are to be performed during the RE process: the elicitation, the negotiation, the specification, and the verification/validation of requirements. These four tasks, their interactions and the relationship with the stakeholders are presented in Figure 2.2.

Figure 2.2: RE four processes, adapted from [Poh96].

Requirements Elicitation

Requirements elicitation is the process of gathering information about the required system and the pre-existing systems. This process uses the information obtained from stakeholders, documentation and specifications of similar systems to establish user and system requirements² [Som11]. The interaction with participants is developed through elicitation techniques. The most common are: interviews, scenarios, prototypes, facilitated meetings, observation and user stories [BF14].

² "System requirements are more detailed descriptions of the software system's functions, services, and operational constraints [...] It may be part of the contract between the system buyer and the software developers" [Som11].
As RE processes are iterative (Figure 2.2), the understanding of the requirements by the analyst team gets better with every round of the cycle.

Requirements Negotiation

Also known as conflict resolution, requirements negotiation concerns resolving problems with requirements [BF14]. Some of the conflicts that can arise are related to:

• Requirements and resources;
• Functional and non-functional requirements;
• Disagreements between stakeholders.

It is important to make every decision traceable back to its source, in order to avoid the reoccurrence of these problems. During this negotiation process, tradeoffs can be agreed on as alternatives.

Requirements Specification & Documentation

Often called the "software requirements document", the Software Requirements Specification (or SRS) is an official statement which specifies every requirement that has to be implemented in the system [Som11]. As Figure 2.3 shows, many different stakeholders like system customers, managers, system engineers, system test engineers or system maintenance engineers can access and use this document.

While the requirements quality criteria (Section 2.1.1) are focused on individual requirements, the IEEE Std 830-1998 Recommended Practice for Software Requirements Specifications [ISO98] extends the quality criteria from individual requirements to the SRS level with some new properties:

Complete An SRS is complete if, and only if, it includes the following elements:

1. Every significant requirement relating to functionality, performance, design constraints, attributes, external interfaces and any external requirement imposed by a system specification.
2. A description of the software response to every valid or invalid class of input data in all realizable classes of situations.
3. Labels and references to every figure, table and diagram, as well as definitions of all terms and units of measure.

Figure 2.3: Users of a requirements document, extracted from [Som11].

Consistent An SRS is not consistent if it does not agree with some higher-level document or with itself. Furthermore, two or more requirements should not describe the same need.

Ranked for importance and/or stability Each requirement of the SRS should have an identifier to indicate its importance or stability.

Modifiable An SRS is modifiable if, and only if, any changes to the requirements can be made easily, completely, and consistently without changing the style or the structure.

Traceable The origin of each requirement should be clear and it should facilitate the referencing of each requirement in future development or enhancement documentation.

These quality criteria may vary according to the authors [Che13] [Ebe08], but each project should always state a systematic approach to follow in order to guarantee high-quality requirements.
Requirements Validation & Verification

According to Boehm [Boe79], an informal interpretation of validation and verification is:

Verification "Am I building the product right?"
Validation "Am I building the right product?"

This means that verification is focused on checking whether a system matches its specification, while validation checks whether the system presents the functionality expected by the customer/user. During the validation process, different types of checks, like validity, consistency, completeness, realism and verifiability checks, should be applied to the requirements document [Som11].

2.2 Requirements Specification

Many possible notations exist that could be used to write requirements.

Natural language is the most flexible notation. It allows software requirements to be written quickly, making them easy to read and understand by the reader. Nevertheless, some problems can arise that hamper compliance with the quality criteria (Section 2.1.1). According to Mavin et al. [MWHN09], some of these problems are:

Ambiguity Requirements could have more than one interpretation.
Untestable Requirements could be unverifiable once the system is implemented.
Inconsistency Several requirements could define the same need.
Vagueness Requirements could present a lack of precision, detail or structure.
Complexity Some requirements could contain complex sub-clauses and/or several interrelated statements.
Omission Requirements could be missing, especially requirements to manage unwanted behaviour.
Wordiness Requirements could contain an unnecessary number of words.
Inappropriate implementation details Requirements could describe how the system should be built, rather than what it should do.

Requirements are often written in natural language; however, this may be supplemented by formal or semi-formal descriptions [Som11]. Graphical notations, like the Unified Modeling Language (UML) [Lav03], and formal notations, based upon elementary mathematics, can be used to specify precise, unambiguous and structured requirements. Some examples of formal notations are Event-B [Abr10], Alloy [Jac01] and Z [WD96]. Z is a mathematical language with a powerful structuring mechanism. An example of usage is presented in Figure 2.4.

Figure 2.4: Example of schema represented in Z notation, adapted from [WD96].

In contrast to natural language, formal notations require a high level of training of the development team and the customers/users in order to write and understand requirements.

Semi-formal notations, like template-systems and keyword-driven languages, are approaches that facilitate the satisfaction of the quality criteria (Sections 2.1.1 and 2.1.2) by specifying requirements through a combination of templates (syntax) and attributes (semantics), as shown in Figure 2.5.

Three promising notations have been selected from the list presented in Section 1.1: the MASTeR template-system, the EARS template-system and the Planguage keyword-driven language. Their widespread use in industry, as well as their complexity and their documentation support, have made them valuable to be compared and evaluated in this thesis. This selection is described in more detail in the following sections.
Figure 2.5: Requirements are specified by a combination of templates and attributes.

2.2.1 MASTeR

In 2014, Rupp et al. presented MASTeR (Mustergültige Anforderungen - die SOPHIST Templates für Requirements) [RdS14b], a template-system that is currently used by many well-known German companies³. MASTeR allows writing requirements specifications through some basic types: functional and non-functional requirements, the latter divided into property, environment and process requirements.

Functional requirements

Functional requirements can be specified with the FunctionalMASTeR template, shown in Figure 2.6. The uppercase words represent the template's fixed values and the lowercase words in angle brackets represent the attributes that have to be filled in [RdS14b], as mentioned in [Gro15].

Figure 2.6: FunctionalMASTeR template, adapted from [RdS14b].

The elements are:

1. Condition: Optional conditions can be specified with the ConditionMASTeR, LogicMASTeR, EventMASTeR or TimeMASTeR templates (described below in this section).
2. System: Represents the system or component that should provide the functionality.
3. Liability: Modal verbs are used to express liability:
   (a) Shall: The statement is legally binding and mandatory.
   (b) Should: The statement is desired, but not mandatory.
   (c) Will: The statement is recommended, but not mandatory.
4. Activity type: The different types of system activities are the following [RdS14a]:
   (a) Independent system action: (-) the system performs the process by itself.
   (b) User interaction: (PROVIDE <actor> WITH THE ABILITY TO) the system provides the user with some process functionality.
   (c) Interface requirement: (BE ABLE TO) the system performs the process dependent on a third factor.
5. Process verb: Represents the process: procedures (functionality) and actions to be provided by the system [RdS14a].
6. Object: Characterizes the process verb. It is the answer to the question "what?" [RdS14a].

³ Details can be consulted at: https://www.sophist.de/referenzen/unsere-kunden/

Example: The document editor shall provide the user with the ability to create new documents.

Property requirements

Property requirements describe properties of the system. The PropertyMASTeR template (Figure 2.7) is proposed for them. The specific elements of the PropertyMASTeR template are:

1. Characteristic: The property of the subject matter.
2. Subject matter: Represents the system, sub-system, component, function, etc.
3. Qualifying expression: Specifies the range of the value.
Figure 2.7: PropertyMASTeR template, adapted from [RdS14b].

4. Value: Connected with the qualifying expression through the verb to BE.

Example: The design of the website should be responsive.

Environment requirements

Technological requirements of the system's environment are described with the EnvironmentMASTeR template (Figure 2.8). The fixed values are designed in a way that makes the requirement belong to the system and not to the environment [RdS14b], as mentioned in [Gro15].

Figure 2.8: EnvironmentMASTeR template, adapted from [RdS14b].

Example: The charger of the device shall be designed in a way that the system can be operated in a range of 100-240 V / 50-60 Hz.

Process requirements

Process requirements can be worded with the ProcessMASTeR template (Figure 2.9), which is based on the independent system action version of the FunctionalMASTeR template. Process requirements are related to activities or legal-contractual requirements, as well as non-functional requirements. In this template, the subject of the requirement is an actor and not the system [RdS14b], as mentioned in [Gro15].

Figure 2.9: ProcessMASTeR template, adapted from [RdS14b].

Example: The software developers should work according to the Personal Software Process (PSP).

Conditions

In the case that the functionality is only given or provided under certain logical or temporal conditions, the ConditionMASTeR template can be applied (Figure 2.10) [RdS14a].

Figure 2.10: ConditionMASTeR template, adapted from [RdS14b].

Example: If the server could not find the requested document.

A more precise specification can be obtained by the use of the LogicMASTeR, EventMASTeR or TimeMASTeR templates.
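Before turning to these condition templates, the mechanics of the FunctionalMASTeR template can be made concrete with a short sketch. The following TypeScript code is a hypothetical illustration, not an official SOPHIST implementation; the element names follow the list above and the fixed values follow Figure 2.6:

```typescript
// Hypothetical model of a FunctionalMASTeR requirement.
// Illustrative only - not from the SOPHIST material.
type Liability = "SHALL" | "SHOULD" | "WILL";
type ActivityType = "independent" | "userInteraction" | "interface";

interface FunctionalRequirement {
  condition?: string;   // e.g. a ConditionMASTeR clause
  system: string;
  liability: Liability;
  activityType: ActivityType;
  actor?: string;       // only used for user interactions
  processVerb: string;
  object: string;
}

function render(r: FunctionalRequirement): string {
  const activity =
    r.activityType === "userInteraction"
      ? `PROVIDE ${r.actor ?? "<actor>"} WITH THE ABILITY TO `
      : r.activityType === "interface"
      ? "BE ABLE TO "
      : ""; // independent system action has no fixed phrase
  const condition = r.condition ? `${r.condition} ` : "";
  return `${condition}${r.system} ${r.liability} ${activity}${r.processVerb} ${r.object}.`;
}

// Reproduces the example requirement from this section.
console.log(render({
  system: "The document editor",
  liability: "SHALL",
  activityType: "userInteraction",
  actor: "the user",
  processVerb: "create",
  object: "new documents",
}));
// -> "The document editor SHALL PROVIDE the user WITH THE ABILITY TO create new documents."
```

The fixed values and the attribute slots are cleanly separated here, which is exactly what makes template-systems amenable to tool support (Chapter 4).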
LogicMASTeR (Figure 2.11) is used to specify logical conditions. The logical statement is made through a compared object, an actor or a system [RdS14b].

Figure 2.11: LogicMASTeR template, adapted from [RdS14b].

With EventMASTeR (Figure 2.12), requirements are initiated as soon as the event condition is satisfied. The term event summarises the possible events that may affect the system [RdS14b].

Figure 2.12: EventMASTeR template, adapted from [RdS14b].

TimeMASTeR (Figure 2.13) is used to specify a certain period of time in which a system or object may have temporary behaviours. Both conditions and requirements end at the same time [RdS14b].

2.2.2 EARS

EARS (Easy Approach to Requirements Syntax) is a template-system created by Mavin et al. [MWHN09] and currently used by companies like Intel Corporation or Rolls-Royce [Ter13].
Figure 2.13: TimeMASTeR template, adapted from [RdS14b].

The simplest structure of the EARS system is the generic requirement syntax (shown in Listing 2.1).

<optional preconditions> <optional trigger> the <system name> shall <system response>
Listing 2.1: Generic requirement syntax.

According to Mavin et al. [MWHN09], the elements are:

1. Precondition: A necessary condition to invoke the requirement.
2. Trigger: The event that initiates the requirement.
3. System response: Represents the system behaviour, which is activated if, and only if, the preconditions and the trigger are activated.

Mavin et al. stated that the generic requirement syntax is specialized into five types (Ubiquitous, Event-driven, Unwanted behaviours, State-driven and Optional features), described in the following subsections.

Ubiquitous requirements

A ubiquitous requirement defines a fundamental property of the system and has no preconditions or trigger. The format is shown in Listing 2.2.

The <system name> shall <system response>
Listing 2.2: Ubiquitous requirements format.

Example: "The software shall be written in Java" [Ter13].
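The five specializations just listed differ only in the leading keyword of the requirement (introduced in the following subsections: WHEN, IF/THEN, WHILE/DURING, WHERE), which makes mechanical classification straightforward. The following TypeScript sketch is a hypothetical illustration, not from Mavin et al.; note that complex requirements combining several keywords would need more than this first-keyword check:

```typescript
// Hypothetical classifier for the five EARS requirement types,
// keyed on the leading keyword of each pattern. Illustrative only.
type EarsType =
  | "ubiquitous" | "event-driven" | "unwanted behaviour"
  | "state-driven" | "optional";

function classify(requirement: string): EarsType {
  const r = requirement.trim().toUpperCase();
  if (r.startsWith("WHEN "))  return "event-driven";
  if (r.startsWith("IF "))    return "unwanted behaviour";
  if (r.startsWith("WHILE ") || r.startsWith("DURING ")) return "state-driven";
  if (r.startsWith("WHERE ")) return "optional";
  return "ubiquitous"; // no precondition or trigger keyword
}

console.log(classify("The software shall be written in Java"));
// -> "ubiquitous"
console.log(classify("While the autopilot is engaged, the software shall display a visual indication to the pilot"));
// -> "state-driven"
```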
Event-driven requirements

An event-driven requirement is activated only when a trigger occurs or is detected. It uses the keyword WHEN. Listing 2.3 shows its format.

WHEN <optional preconditions> <trigger> the <system name> shall <system response>
Listing 2.3: Event-driven requirements format.

Example: "When a DVD is inserted into the DVD player, the OS shall spin up the optical drive" [Ter13].

Unwanted behaviours

Requirements for unwanted behaviours (failures, error conditions, disturbances, deviations...) use the IF and THEN keywords and have the format shown in Listing 2.4.

IF <optional preconditions> <trigger>, THEN the <system name> shall <system response>
Listing 2.4: Unwanted behaviours format.

Example: "If the memory checksum is invalid, then the software shall display an error message" [Ter13].

State-driven requirements

State-driven requirements are active while the system is in a specific state. They use the keywords WHILE or DURING (Listing 2.5).

WHILE <in a specific state> the <system name> shall <system response>
Listing 2.5: State-driven requirements format.

Example: "While the autopilot is engaged, the software shall display a visual indication to the pilot" [Ter13].

Optional requirements

Optional requirements are invoked only if the system includes a special feature. They use the WHERE keyword, as shown in Listing 2.6.
WHERE <feature is included> the <system name> shall <system response>
Listing 2.6: Optional requirements format.

Example: "Where a HDMI port is present, the software shall allow the user to select HD content for viewing" [Ter13].

Complex requirements

Requirements with complex conditional clauses can be defined with a combination of the keywords WHEN, IF and THEN, WHILE and WHERE.

Example: "When the landing gear button is depressed once, if the software detects that the landing gear does not lock into position, then the software shall sound an alarm" [Ter13].

2.2.3 Planguage

Planguage is a keyword-driven language created by Gilb in 1998 [Gil05]. The name Planguage is a combination of the words planning and language. Planguage provides a diverse set of keywords; the most common are shown in Table 2.1. In addition, Planguage is an extensible language. Sub-keywords can be used to add precision and specificity. An example is shown in Table 2.2. Qualifiers add richness, precision, and utility by allowing the precise description of conditions and events. Listing 2.7, adapted from [Sim01], shows an example of qualifier usage.

PLAN [Q1 00]: 20,000 units sold
MUST [First year]: 120,000 units sold
WISH [First release, entrp. version]: 1 Dec. 2000
PLAN [US market, first 6 months of production]: Defects Per Million < 1,000
METER [Prototype]: Survey of focus group
METER [Release Candidate]: Usability lab data
Listing 2.7: Example of qualifiers usage.
Keyword: Definition
TAG: Unique and persistent identifier
GIST: Statement's short and simple description
STAKEHOLDER: Person or organization affected by the requirement
SCALE: Scale of measurement used to quantify the statement
METER: Process or device used to measure using SCALE
MUST: Minimum level required to avoid failure
PLAN: Level at which the statement is considered successful
STRETCH: Goal if everything goes perfectly
WISH: Desirable level of achievement
PAST: Previous results, for comparison
TREND: Historical data or extrapolation of it
RECORD: Best-known achievement
DEFINED: Official definition of a term
AUTHORITY: Person, group or level of authorization

Table 2.1: Planguage keywords, adapted from [Sim01].

Keyword: Definition
METHOD: Method of measuring to determine a value on the SCALE
FREQUENCY: Frequency of measurements
SOURCE: Entity responsible for making the measurement
REPORT: Where and when the measure will be reported

Table 2.2: Sub-keywords for the METER keyword, adapted from [Sim01].
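Since a Planguage specification is essentially a set of keyword-value pairs with optional qualifiers, it maps naturally onto a keyed record. The following TypeScript sketch is a hypothetical data model assumed for illustration (the structure is not defined by Gilb or Simmons); it expresses part of Listing 2.7:

```typescript
// Hypothetical data model for a quantified Planguage statement.
// Keyword names follow Table 2.1; the record shape is invented.
interface PlanguageLevel {
  qualifiers: string[]; // e.g. ["US market", "first 6 months of production"]
  value: string;
}

interface PlanguageStatement {
  TAG: string;
  GIST?: string;
  SCALE?: string;
  METER?: PlanguageLevel[];
  MUST?: PlanguageLevel[];
  PLAN?: PlanguageLevel[];
  WISH?: PlanguageLevel[];
}

// Part of Listing 2.7 expressed in this model.
const sales: PlanguageStatement = {
  TAG: "Sales",
  SCALE: "units sold",
  PLAN: [{ qualifiers: ["Q1 00"], value: "20,000 units sold" }],
  MUST: [{ qualifiers: ["First year"], value: "120,000 units sold" }],
  METER: [{ qualifiers: ["Prototype"], value: "Survey of focus group" }],
};
console.log(sales.MUST?.[0].value); // "120,000 units sold"
```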
3 Comparative Evaluation

In this chapter, the comparative evaluation of the selected template-systems is described, from the preparation of the evaluation to its execution, results and analysis.

3.1 Preparation

This evaluation compares different template-systems based on metrics. There exist many metrics that can be obtained from semi-formal notations. Some of them are measured by automated processes (e.g., the length of the requirements), while other metrics can be measured by human revision of the notation's syntax. Table 3.1 presents some of these metrics, a brief definition of them and the type of measurement.

Metric: Description (Measurement)
Number of words: Length of the final requirements. (Automated)
Number of process verbs: Number of process verbs in the requirements. (Automated)
Number of keywords: Length of the keyword set. (Automated)
Number of decision branches: Number of possible branches in a template. (Automated)
Expressiveness: Capability of the notation to cover the problem domain. (Inspection)
Formality: Level of structuring of the notation. (Inspection)
Human readability: Capability of the requirements to be understood by humans. (Inspection)
Computer readability: Capability of the requirements to be analyzed and processed by computer programs. (Inspection)
Quality: Level of quality, according to the satisfaction of the quality criteria (Sections 2.1.1 and 2.1.2). (Inspection)

Table 3.1: Notations' metrics.

3.1.1 First Approach

In the first place, this evaluation aims to look at a generic software project: a project that is not focused on any specific field and includes any type of requirement. In order to find the notation that is most suitable to specify requirements in such a project, a first approach which attempts to identify the notation that provides the highest-quality requirements is proposed. In this case, suitability is defined as:

Suitability = Quality

where the Quality attribute is measured by the satisfaction of the quality criteria (Sections 2.1.1 and 2.1.2). The template-system or keyword-driven language that provides the highest-quality requirements would be the most suitable for the project.

In order to measure the quality criteria compliance, a checklist-based inspection can take place. To accomplish this, a requirements-focused checklist [Mos97] has been adapted to create a checklist which makes it possible to measure the satisfaction of the quality criteria based on the notation's syntax. The answers to these checks can be subjective, because they depend on the judgment or perception of the evaluator; so, in order to obtain accurate results, several people should perform this inspection. In the current work, the author has performed the inspection twice. Both the checklist and the full results are available in Section A.1 of Appendix A.

The result of this inspection is a number in the range between 0 and 100, which indicates the level of quality that a notation provides, according to its compliance with the quality criteria. To add some semantic meaning to this value, a scale has been designed and is presented in Table 3.2.

Level of Quality: Quality total score
Excellent: 80 to 100
Good: 60 to 79
Fair: 40 to 59
Poor: 20 to 39
Bad: 0 to 19

Table 3.2: Scale for quality values.

The results of the inspection carried out by the author are shown in Table 3.3. These results suggest that Planguage is the notation which provides the highest-quality requirements, followed closely by MASTeR and, lastly, by EARS, which presents a fair quality. However, the subjectivity problems previously discussed may have affected the results of the inspection, so no significant difference between the Planguage and MASTeR qualities should be read into them.

Notation: Score (Level of Quality)
MASTeR: 81 (Excellent)
EARS: 40 (Fair)
Planguage: 84 (Excellent)

Table 3.3: Checklist-based inspection results.

For a proper analysis of the results, a deeper insight into the checklist evaluation is needed. The check results corresponding to the main quality properties have been collected and displayed in Figure 3.1. These results indicate that the main difference between EARS, Planguage and MASTeR is set by the completeness property, which influences other properties like reliability. This can be attributed to the scope of EARS, which is basically focused on functional requirements. MASTeR and Planguage, for their part, present a more complete set of templates, which allows a wider variety of non-functional requirements to be specified.

3.1.2 Second Approach

The results of the first approach do not yet fulfill the aims of this evaluation, since they cannot resolve issues such as: Can untrained stakeholders read and understand the specifications? Can requirements engineers specify requirements without a significant amount of training hours?

In order to measure the reading and writing facilities, a second approach is needed. To accomplish that goal, some of the metrics presented at the beginning of the chapter have now been selected: Expressiveness, (Average) Number of Words and Formality.
Figure 3.1: Checklist-based inspection results (detailed).

Expressiveness Refers to how well the notation's syntax covers the domain phenomena [HAL+14]. The purpose of this metric is to measure the level of expressiveness, which affects the final suitability positively. This metric is calculated from all the possible combinations that the notation can provide to cover the domain phenomena:

Expressiveness = N_combinations *

* Measuring the number of combinations can depend on the notation's syntax. This is the reason why the author has not provided a specific formula to calculate this value.

In the special case that a notation allows nesting¹, only one level of nesting is taken into account for the measurement. This is because higher levels of nesting represent an exponential growth of the number of combinations and, in the author's opinion, would not properly represent the value of expressiveness and would distort the final result.

The results of applying the formula are numeric, so, in order to get a better picture of them, a scale has been designed and is presented in Table 3.4.

¹ The ability to specify requirements by combining different templates or keywords.

Expressiveness scale: Expressiveness value
Excellent: >300
Good: 200 to 300
Fair: 100 to 199
Poor: 0 to 99

Table 3.4: Scale for expressiveness values.

Average Number of Words The Average Number of Words (NoW) indicates the average length of the requirements. For untrained stakeholders, requirements that are too long can be unreadable, while requirements that are too short may not be complete. High values affect the suitability negatively. For this metric, an algorithm can be used to automate the counting of the number of words, based on the following formula:

NoW = (1/n) * Σ_{req=1}^{n} N_words(req)

Formality The purpose of this metric is to measure the level of formalization that a notation presents (e.g., mathematical notations could be identified as highly formalized, while natural language could be considered lowly formalized). High formality levels usually present a lack of flexibility and can hardly be understood by untrained stakeholders, so they affect the suitability negatively. A low level of formality, however, can imply specification problems (Section 2.2). This metric is measured by a checklist-based inspection. The checklist, partly based on [MR13] and [AYI04], and its application are available in Section A.2 of Appendix A. The results of the inspection are in the range between 1 and 6. For a better understanding of these results, the scale shown in Table 3.5 can be utilised.

Formality scale: Formality score
Over-Formalized: 6
Extremely-Formalized: 5
Highly-Formalized: 4
Well-Formalized: 3
Fair-Formalized: 2
Under-Formalized: 1

Table 3.5: Scale for formality values.

For this second approach, some of the metrics presented at the beginning of the chapter have finally been discarded. This is the case for the number of process verbs, which can be obtained in notations like MASTeR, but is not obvious in notations like EARS or Planguage, due to their lack of atomicity. The number of decision branches, for its part, is already used to calculate the possible combinations. Finally, computer readability has remained out of the scope of this evaluation. This is a point that could be added in future research. As a first approach, it could be measured taking into account the analysed formality and the atomicity that the notations provide.

3.1.3 Formula variations

At this point of the evaluation, two positive metrics, Quality and Expressiveness, and two negative metrics, Average Number of Words and Formality, have been identified. They can be used to identify the most appropriate template-system according to the properties of every particular project. However, there is still a lack of an ordering (a formula) that can sort the notations according to their suitability for a generic project. In order to achieve this goal, two variations are proposed:

First variation

Suitability = (Quality * Expressiveness) / (NoW * Formality)

In this variation, positive properties enter the numerator and negative properties the denominator.

Second variation

Suitability = Quality + Expressiveness - NoW - Formality

In this second variation, positive properties are added and negative properties are subtracted.

Thus, this second approach is based on the idea that the template-system which provides requirements with higher quality and more expressiveness, with a lower number of words and less formality, would be the most appropriate one. Positive metrics contribute to the suitability, while negative metrics penalize it. Both formula variations have been applied to the selected notations and the results are shown in Figure 3.2. Both provide the same order: MASTeR is the most suitable notation, followed by EARS and, lastly, Planguage.
Figure 3.2: Results of the formula application for the first and second variations.

However, not enough data has been analyzed in the current evaluation to support a decision on which variation to use. The evaluation will continue using the first variation, but, as mentioned, the second variation would also be valid according to the current data.

3.2 Execution

Summarising, the steps followed to apply the formula have been:

Quality A checklist-based inspection has taken place to collect the quality score for each notation. In order to accomplish it, a checklist based on [Mos97] has been used. Both the checklist and the results of the inspection are available in Section A.1 of Appendix A.

Expressiveness All combinations that each notation can provide have been calculated. MASTeR's combinations have been calculated from the combinations of conditions and decision branches. For EARS, combinations have been calculated according to all the decision branches and one level of nesting. For Planguage, the number of keywords, as well as the qualifiers and the number of keywords that can be nested, have been taken into account.

Average Number of Words An algorithm has been used to count the number of words of the requirements. Some of the requirements have been created from scratch and others have been based on examples found in [Mav10] and [Sim11].

Formality A checklist, partly based on [MR13] and [AYI04], has been created to measure this metric. Both the checklist and the results of the inspection are available in Section A.2 of Appendix A.

3.3 Results and Analysis

The results of the first formula variation, previously presented in Section 3.1.3, suggest that MASTeR is the most suitable template-system, followed by EARS and Planguage. Extended information about the formula application has been collected and is shown in Table 3.6.

Metric: MASTeR / EARS / Planguage / Avg
Quality: 81 / 40 / 84 / 68.33
Expressiveness: 324 / 196 / 126 / 215.33
NoW: 24 / 18.27 / 94.33 / 45.53
Formality: 3 / 2 / 5 / 3.33
Suitability: 364.50 / 214.46 / 22.43 / 200.76

Table 3.6: Detailed results of the formula application.

These results suggest that MASTeR is the most suitable template-system, due to its excellent quality (a score of 81/100 points) in concise and well-structured requirements. EARS, for its part, comes second in the resulting order: its fair-formalized and concise specifications are compromised by a fair quality (40 points). Finally, the results indicate that Planguage is the least suitable notation. It provides excellent quality, but it is highly penalized by its level of formality and the length of its specifications.

Thus, as the results indicate, MASTeR would be an ideal notation to specify requirements in a generic project, due to the great possibilities and expressiveness that this notation offers to cover the domain phenomena. On the other hand, if a project has mostly functional requirements, the results mark EARS as an ideal option, due to its reading and writing facilities. Finally, if users are working on a critical project that focuses on the quality of its requirements, leaving aside the ease of reading and writing, Planguage is suggested as an ideal option.
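The first formula variation can be reproduced directly from the values in Table 3.6. The following TypeScript sketch is an illustration of the computation added by the editor, not tooling from the thesis; small deviations from Table 3.6 can occur because the NoW values reported there are rounded:

```typescript
// Recomputes Suitability = (Quality * Expressiveness) / (NoW * Formality)
// from the values reported in Table 3.6. Illustrative only.
interface Metrics {
  quality: number;        // checklist score, 0-100
  expressiveness: number; // number of combinations
  now: number;            // average number of words
  formality: number;      // checklist score, 1-6
}

const notations: Record<string, Metrics> = {
  MASTeR:    { quality: 81, expressiveness: 324, now: 24,    formality: 3 },
  EARS:      { quality: 40, expressiveness: 196, now: 18.27, formality: 2 },
  Planguage: { quality: 84, expressiveness: 126, now: 94.33, formality: 5 },
};

const suitability = (m: Metrics): number =>
  (m.quality * m.expressiveness) / (m.now * m.formality);

for (const [name, m] of Object.entries(notations)) {
  console.log(`${name}: ${suitability(m).toFixed(2)}`);
}
// MASTeR: 364.50; EARS: 214.56 (Table 3.6 reports 214.46, presumably
// from an unrounded NoW); Planguage: 22.44
```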
As mentioned, the extended results of this evaluation can be utilised to select the most suitable notation according to a project's properties. In addition, it is important to state that the judgement of the evaluator could have influenced a percentage of the measurements. Furthermore, not enough data has been analysed to validate the formula. Due to this, an experiment will be conducted to test the template-systems' usability with users (Chapter 5).
4 User Interface for Template-Systems

In this chapter, the aim and the results of a software application developed for requirements specification are related. This tool allows users to write requirements using the MASTeR and EARS template-systems.

4.1 Preparation

The main purpose of this user interface is to assist in the experiment, providing the test subjects with a tool in which they can specify requirements using template-systems. In addition, this tool is also designed for scientific researchers who investigate template-systems and semi-formal notations. Finally, this tool can help industry professionals who want to specify their requirements by the use of template-systems.

The only publicly available MASTeR tool that has been found at the date of publication is the one developed by Eckey in his Master's thesis [Eck15]. The tool that he presents is focused on analyzing the consistency of requirements transformed from BPMN¹ models. Regarding the EARS and Planguage systems, no publicly available tool has been found by the author that can support the process of specification.

The formality that MASTeR and EARS presented in the comparative evaluation (Chapter 3) suggests that they are the easiest notations to learn for untrained stakeholders. For this reason, as well as the similarities between their complexity and format, MASTeR and EARS have been selected to be evaluated in the experiment (Chapter 5). This user interface will support both notations.

¹ Business Process Modeling Notation [GDW09].

4.1.1 Technologies

The user interface will be designed as a web-based application. The main advantage of this approach is accessibility: the tool will be platform independent, accessible anytime and anywhere from any device which has Internet connectivity and a compatible web browser. This helps to improve the coverage of the tool, increasing the number of users that can make use of the developed software. In order to accomplish this goal, the following technologies are used:

Markup language HTML5 as the markup language to code the views of the application.

Presentation layer The presentation of the documents is developed with LESS², a CSS pre-processor. On the other hand, Bootstrap³, a CSS framework, will help to develop a responsive design, making the tool more user-friendly for a wider range of devices.

Client-side scripting The controller will be coded with AngularJS⁴. In this JavaScript framework developed by Google, the models are specified in JSON format, which can help to improve the interoperability with other tools. In addition, AngularJS facilitates an asynchronous communication with the server (AJAX-based) that improves the user experience.

Storage HTML5 Web Storage will be used to store information in the client's web browser. This allows all requirements the user is working on to be kept in memory, even if they close the browser or turn off the device.

4.2 Results

The source code of the developed application is available on the CD-ROM and at GitHub⁵. A public demo is also available online⁶.

The markup's validity has been checked with the W3C (World Wide Web Consortium) Markup Validation Service⁷. Figure 4.1 shows the successful output of the validation. For its part, the CSS code has been validated with the W3C CSS Validation Service⁸ and the output of that validation is shown in Figure 4.2.

² http://lesscss.org
³ http://getbootstrap.com
⁴ https://angularjs.org
⁵ https://github.com/francaballero/requirements_generator
⁶ https://userpages.uni-koblenz.de/~francaballero/tool/ and also http://francaballero.net/requirements_generator/
⁷ https://validator.w3.org
⁸ https://jigsaw.w3.org/css-validator/
Figure 4.1: W3C HTML successful validation.

Figure 4.2: W3C CSS successful validation.

Finally, JSHint⁹, a static code analysis tool for JavaScript, has been used to validate the JavaScript code.

The main page of the application is shown in Figure 4.3. In this view, users are introduced to the MASTeR and EARS template-systems through an explanation of their syntax and examples.

Figure 4.3: Main page, containing MASTeR and EARS introduction.

Then, users are able to navigate through four different views: MASTeR, EARS, Glossary and Results. In the MASTeR and EARS views, the tool provides users with the ability to specify requirements. A control panel is available on the right side of the page, as shown in Figure 4.4, where users can create new requirements (for MASTeR and EARS) and conditions (for MASTeR).

Figure 4.4: MASTeR view, with the control panel.

When the user clicks on one of these buttons, he/she is guided to the view shown in Figure 4.5. By clicking on the attributes, the user is able to select a term from the list of terms. The "New Term" button (highlighted in green) is used to create new terms, which are shared between the MASTeR and EARS views.

⁹ http://jshint.com

Figure 4.5: How requirements can be specified with the tool.

The glossary view is shown in Figure 4.6. This view is where users are able to define their terms. On the right side, the control panel again allows the creation of new terms.

The results view provides users with the ability to save their work by printing or saving a PDF document containing all their progress. This is shown in Figure 4.7. In addition, users can remove terms and requirements from the local storage.

On the other hand, the responsive design of the webpage adapts the page layout to different devices and resolutions. A sample of this functionality is shown in Figure 4.8.
Figure 4.6: Glossary view, where terms can be defined.

Figure 4.7: Results can be exported at the glossary view.

Figure 4.8: The responsive design adapts the page layout to different resolutions.

This tool offers industry professionals and researchers the possibility of documenting requirements using the MASTeR and EARS notations. Furthermore, this tool could later be improved by integrating a JSON-based interoperability with existing tools that automate the processing of requirements. Finally, EARS's complex requirements could be better supported by enabling multiple nesting levels.
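The client-side persistence described in Section 4.1.1, and the JSON-based interoperability suggested above, can be sketched in a few lines. The following TypeScript code is a hypothetical reconstruction of the idea, not an excerpt from the actual tool; the storage key and the record shape are invented for illustration:

```typescript
// Hypothetical sketch of requirement persistence via HTML5 Web Storage,
// in the spirit of the tool's Storage layer. Not actual tool code.
interface StoredRequirement {
  notation: "MASTeR" | "EARS";
  text: string;
}

const STORAGE_KEY = "requirements"; // hypothetical key

function saveRequirements(reqs: StoredRequirement[]): void {
  // JSON serialization keeps the model interoperable, as noted above.
  localStorage.setItem(STORAGE_KEY, JSON.stringify(reqs));
}

function loadRequirements(): StoredRequirement[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as StoredRequirement[]) : [];
}

// Survives closing the browser: the data is read back on the next visit.
saveRequirements([
  {
    notation: "MASTeR",
    text: "The document editor shall provide the user with the ability to create new documents.",
  },
]);
console.log(loadRequirements().length); // 1
```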
  • 47. CHAPTER 4. USER INTERFACE FOR TEMPLATE-SYSTEMS 37 This tool offers industry professionals and researchers the possibility of documenting requirements by the usage of MASTeR and EARS notations. Furthermore, this tool could be later improved by integrating a JSON- based interoperability with existing tools that automate the processing of requirements. Finally, EARS’s complex requirements could be improved by enabling multiple nesting levels.
  • 48. CHAPTER 5. USERS EXPERIMENT 38 5 Users Experiment In this chapter, all the steps of the design, the test subjects selection and the analysis of results of the experiment will be described. 5.1 Experiment design The aim of this experiment is to test and evaluate the results of the compar- ative evaluation, described in Chapter 3, with real users. The experiment shall measure the reading and writing facilities of the template-systems. To accomplish this goal, the test subjects will be asked to solve tasks related to requirements specification and provide some feedback about their expe- rience. The test group will be composed of industry professionals and students. The experience of industry professionals makes them ideal test subjects for this evaluation. However, it will not be possible to recruit a significant number of participants. This lack of participants will be compensated with students of advanced years of Software Engineering, Computer Engineering, Computer Science and Web Science. This test group will have presumably less experience in Requirements Engineering, so the expectations for their performance should be adjusted. However, this may actually be considered as an ad- vantage at measuring the template-systems’ learnability. In addition, their lack of experience may affect similarly to both, template-systems and nat- ural language, making their results more reliable. It is important to state that it will not be possible to perform the experiment in a laboratory. In addition, the test group will not homogeneous, because the test subjects will have different backgrounds and experience. At last, the test is not planned to be performed by a really significant number of users. All these reasons make the sample not representative enough and it should be taken as a first approximation that can be improved in the future. In order to accomplish all these goals and reach as many participants as possible, the test is planned to be web-based. This approach will also allow
  • 49. CHAPTER 5. USERS EXPERIMENT 39 the use of algorithms to automate part of the analysis of the results. The same technologies described in Section 4.1.1 will be used to implement the experiment. Moreover, the user interface designed for MASTeR and EARS (Chapter 4) will be integrated. However, a database is now needed to store and consult the results. For this purpose, OpenWS1, a web-service that provides a REST API to store JSON objects, will be used. The compo- nent diagram of Figure 5.1 shows the physical structure of the application. Figure 5.1: Component diagram of the web-based test. The source code of the web-based experiment is available at the CD-ROM and at GitHub2. A public demo of the application is also available online3. The experiment operates with the requirements of NBDiff, an open source project that provides a means of viewing changes made to IPython Note- book files and resolving merge conflicts. The requirements of the project4 have been changed and adapted for the use of the experiment. The test consists of three sections, introduction, reading and writing, de- scribed in the following sections. 5.1.1 Introduction section First of all, at the introduction section (Appendix B.1), test subjects will be asked to complete a short questionnaire about their occupation and their 1 https://openws-app.herokuapp.com 2 https://github.com/francaballero/requirements_user_testing 3 https://userpages.uni-koblenz.de/~francaballero/ and also http://francaballero.net/requirements_user_testing/ 4 http://nbdiff-docs.readthedocs.io/en/latest/SRS.html
Figure 5.2: The user interface developed in Chapter 4 was adapted to the web-based experiment.

At the end, a questionnaire is displayed and the test subjects are asked to complete it according to their experience.

5.2 Test subjects

The experiment was carried out between August 23 and September 9, 2016. The link to the test⁵ was spread in scientific groups, working groups of ESA and student groups. A total of 20 participants completed the test: 5 industry professionals and 15 students. The background of the participants is shown in Table 5.1. It is important to emphasise that the large majority of participants, especially the students group, were not native English speakers, so this may have influenced the reading and writing of requirements in English.

5.3 Results and Analysis

The data obtained from the experiment takes many forms. Some algorithms can be applied to process part of the data. However, the answers of the reading and writing sections are not easily processable, and they have to be manually evaluated to judge their correctness.

⁵ https://userpages.uni-koblenz.de/~francaballero/
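To illustrate the automatable part of this analysis, the following TypeScript sketch (hypothetical, not the actual evaluation script; the field names are invented) groups answer records by notation and averages their response times, which is the kind of aggregation behind the tables reported below:

```typescript
// Hypothetical aggregation of experiment records by notation.
interface AnswerRecord {
  notation: "MASTeR" | "EARS" | "natural language";
  responseTimeMins: number;
  correct: boolean;
}

function averageTimes(records: AnswerRecord[]): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const r of records) {
    const s = sums.get(r.notation) ?? { total: 0, count: 0 };
    s.total += r.responseTimeMins;
    s.count += 1;
    sums.set(r.notation, s);
  }
  const averages = new Map<string, number>();
  for (const [notation, s] of sums) {
    averages.set(notation, s.total / s.count);
  }
  return averages;
}

// Example with three invented records:
console.log(averageTimes([
  { notation: "EARS", responseTimeMins: 1.2, correct: true },
  { notation: "EARS", responseTimeMins: 2.0, correct: false },
  { notation: "MASTeR", responseTimeMins: 2.5, correct: true },
]));
// Map { "EARS" => 1.6, "MASTeR" => 2.5 }
```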
  • 51. CHAPTER 5. USERS EXPERIMENT 41 EARS at least once. Figure 5.2: The user interface developed in Chapter 4 was adapted to the web-based experiment. At the end, a questionnaire is displayed and the test subjects are asked to complete it according to their experience. 5.2 Test subjects The experiment was carried out between between August 23 and Septem- ber 9, 2016. The link of the test5 was spread in scientific groups, working groups of ESA and students groups. A total of 20 participants completed the test: 5 industry professionals and 15 students. The background of the participants is shown in Table 5.1. It is important to emphasise that the large majority of participants, espe- cially the students group, were not native English speakers, so this may have influenced the reading and writing of requirements in English. 5.3 Results and Analysis The data obtained from the experiment has many forms. Some algorithms can be applied to process part of the data. However, the the answers of the reading and writing sections are not easily processable and they should be manually evaluated to judge the correctness. 5 https://userpages.uni-koblenz.de/~francaballero/
  • 52. CHAPTER 5. USERS EXPERIMENT 42 Experience in RE Experience in template systems Experience in MASTeR Experience in EARS Industry professionals 60% 40% 0% 0% Students 20% 0% 0% 0% Total average 40% 10% 0% 0% Table 5.1: Information about the test subjects’ background. 5.3.1 Reading requirements At the reading section, test subjects answered five questions related to five requirements that they read. The questions had multiple answers that could be considered as correct, so they had been manually evaluated by the author of this thesis. The results of the evaluation have been grouped by notation and listed in Figure 5.3. Participants completed 92% of the questions. The 77% of the answers related to MASTeR requirements were correct, the 88% related to EARS were correct and the 55% related to natural language were correct. Figure 5.3: Correctness of participants’ answers in reading section, grouped by notation. At the same time, response times to every question were collected. The average times have been calculated and grouped by notation. The results are shown at Table 5.2. MASTeR and natural language answers took 1.37 times and 1.20 times longer, respectively, than EARS. These results also show how industry professionals took 2.42 times longer than students to answer the questions, which could have highly influenced to their better
Notation                 MASTeR      EARS         Natural language
Industry professionals   4.44 mins   3.96 mins    1.80 mins
Students                 1.42 mins   0.808 mins   1.96 mins
Total of participants    2.18 mins   1.59 mins    1.92 mins

Table 5.2: Average response times to the reading questions, grouped by notation.

Finally, the participants gave feedback about their experience of reading the requirements. The results are listed in Figure 5.4.

Figure 5.4: How the test subjects evaluated the reading facilities.

Thus, these results suggest that MASTeR and EARS provide better reading facilities than natural language. This is supported by the participants' performance and feedback. However, it should be considered as no more than a tendency, because the participants did not solve enough natural language requirements tasks in this test.

Regarding MASTeR and EARS, EARS achieved a better performance rate with shorter average times, but MASTeR received better feedback from the participants. At this point of the analysis, both notations present a similar readability rate and no significant differences can be identified between them.
In addition, the manual evaluation of the answers supports the hypothesis that the existence of a correct glossary would have significantly increased the quality of the requirements.

5.3.2 Writing requirements

In the writing section, participants wrote requirements to specify four functionalities of the NBDiff system. Due to the nature of the template-systems, this analysis could have been automated (a sketch of such a syntax check is given after Figure 5.5). In this analysis, however, the answers have been evaluated manually by the author. The evaluation criteria were based on six quality properties: ambiguity, priority, vagueness, completeness, absence of implementation details and verifiability, as well as on the correct use of the notation's syntax. The checklist used for this purpose is available in Section A.3 of Appendix A.

76% of the proposed tasks were completed by the participants. The results of the evaluation, based on 7 quality points, are shown in Figure 5.5. It shows that the quality of MASTeR requirements was 1.48 times higher than that of EARS and 3.43 times higher than that of natural language. In the tasks where participants could decide which notation to use, 57.5% chose MASTeR, 10% chose EARS, 7.5% chose natural language and 25% did not solve the task.

Figure 5.5: Quality of participants' requirements, grouped by notation.
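The sketch below indicates how such an automated check could look for EARS, whose five basic sentence patterns [MWHN09] lend themselves to simple pattern matching. The regular expressions are deliberately permissive illustrations written for this example; they are not the checker used in this thesis, and a real implementation would also have to validate the building blocks themselves.

import re

# One permissive regular expression per basic EARS pattern [MWHN09].
EARS_PATTERNS = {
    "event-driven": re.compile(r"^When .+, the .+ shall .+\.$", re.IGNORECASE),
    "state-driven": re.compile(r"^While .+, the .+ shall .+\.$", re.IGNORECASE),
    "unwanted behaviour": re.compile(r"^If .+, then the .+ shall .+\.$", re.IGNORECASE),
    "optional feature": re.compile(r"^Where .+, the .+ shall .+\.$", re.IGNORECASE),
    "ubiquitous": re.compile(r"^The .+ shall .+\.$", re.IGNORECASE),
}

def classify_ears(requirement):
    """Return the name of the first EARS pattern the sentence matches, or None."""
    text = requirement.strip()
    for name, pattern in EARS_PATTERNS.items():
        if pattern.match(text):
            return name
    return None

print(classify_ears("If an IPython Notebook file is corrupted, "
                    "then the system shall display a message to the user."))
# -> 'unwanted behaviour'

A requirement that matches none of the patterns would simply be flagged for manual review, which corresponds to the syntax check of Table A.3.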
The average writing times have been collected and grouped in Table 5.3. They show that writing requirements using MASTeR took participants 1.39 times longer than using EARS and 1.5 times longer than using natural language. Once again, the results indicate that industry professionals spent more time (1.23 times more) to specify better requirements.

Notation                 MASTeR       EARS        Natural language
Industry professionals   14.06 mins   6.54 mins   0.46 mins
Students                 8.02 mins    2.83 mins   6.35 mins
Total of participants    8.84 mins    6.35 mins   5.89 mins

Table 5.3: Average writing times, grouped by notation.

Based on the experience that participants gained during this section of the test, they gave the feedback collected in Figure 5.6.

Figure 5.6: How the test subjects evaluated the writing facilities.

The results indicate a significant difference between the writing facilities of MASTeR and those the other notations provide. Participants specified their best requirements using MASTeR, which also received the best feedback rate. However, test subjects took longer to specify requirements using this notation. This suggests that some interest and training on the part of the participants is necessary to specify high-quality requirements.
On the other hand, the results suggest that EARS also significantly improves the specification of requirements.

However, the requirements written by students contained many mistakes overall, and not all of the students managed to write all the solutions. Part of these results can be attributed to the complexity of the test and to their lack of experience in specifying requirements. This again suggests that some level of training is needed to specify high-quality requirements.

5.3.3 Conclusions

The results of the experiment have not indicated any significant difference in the readability of the notations. This is not the case for the ease of writing, where MASTeR presented a significant improvement.

The collected information about answers, response times and feedback has been combined and scaled to be comparable with the comparative evaluation described in Chapter 3. This comparison is shown in Figure 5.7.

Figure 5.7: Comparison between the comparative evaluation and the experiment results.

The scaled results of the comparative evaluation and the experiment results show a similar tendency: MASTeR is more suitable than EARS. As mentioned, some factors, such as a sample that is not large enough and the lack of experience of some of the test subjects, could have influenced the results of the experiment. However, it is important to emphasise that the comparative evaluation and the experiment point to the same tendency.
6 Summary

The purpose of this thesis was to evaluate template-systems for requirements specification. The results of a theoretical evaluation and of an experiment conducted with industry professionals and students suggest that MASTeR is the notation that provides the highest-quality specifications and the best reading and writing facilities.

However, this should be considered as a tendency. The theoretical evaluation needs to be contrasted with more information, as the experiment was not performed with a significant number of experienced users. The current work is the first one in this field and can be taken as a first approximation that could be improved upon. More notations can be analysed and more experienced test subjects should perform the experiment.
A Checklists

In this chapter of the appendix, the checklists utilised during the comparative evaluation are presented.

A.1 Inspecting Quality

The following checklist was used to evaluate the satisfaction of the quality criteria. After each question, the results are given as three scores, representing MASTeR, EARS and Planguage, respectively.

Quality Checklist (scores: MASTeR, EARS, Planguage)

Clarity
Can all requirements be written in non-technical, understandable language?  1 1 1
Can any requirement have more than one interpretation?  2 2 2
Can each characteristic of the final product be described with a unique terminology?  3 3 3
Is there a glossary in which the specific meaning(s) of each term can be defined?  4 4 4
Could the requirements be understood and implemented by an independent group?  5 5 5

Completeness
Can all terms be defined?  6 6 6
Can all units of measure be defined?  7 7 6
Can the missing information be defined in the requirement?  7 7 7
Should any requirement be specified in more detail?  7 7 8
Should any requirement be specified in less detail?  8 8 9
Can all of the requirements be included?  8 8 9
Can all the requirements related to functionality be included?  9 9 10
Could there be any requirement which makes you feel uneasy?  10 10 11
Can all requirements related to performance be included?  11 10 12
Can all requirements related to design constraints be included?  12 12 13
Can all requirements related to attributes be included?  12 12 14
Can all requirements related to external interfaces be included?  13 12 15
Can all requirements related to databases be included?  14 12 16
Can all requirements related to software be included?  15 12 17
Can all requirements related to communications be included?  16 12 18
Can all requirements related to hardware be included?  17 12 19
Can all requirements related to inputs be included?  18 12 20
Can all requirements related to outputs be included?  19 12 21
Can all requirements related to security be included?  20 12 22
Can all requirements related to maintainability be included?  21 12 23
Can all requirements related to installation be included?  21 12 24
Can all requirements related to criticality be included?  22 12 25
Can all requirements related to the permanency limitations be included?  23 12 26
Can all possible changes to the requirements be specified?  23 12 27
Can the likelihood of change be specified for each requirement?  24 13 28

Consistency
Could there be any requirements describing the same object that conflict with other requirements with respect to terminology?  24 13 28
Could there be any requirements describing the same object that conflict with respect to characteristics?  25 14 29
Could there be any requirements that describe two or more actions that conflict logically?  26 15 30
Could there be any requirements that describe two or more actions that conflict temporally?  27 16 31
Traceability
Can all requirements be traced back to a specific user need?  27 16 32
Can all requirements be traced back to a specific source document or person?  27 16 33
Can all requirements be traced forward to a specific design document?  27 16 34
Can all requirements be traced forward to a specific software module?  27 16 34

Verifiability
Could there be any requirements included which are impossible to implement?  28 17 35
For each requirement, could there exist a process that can be executed by a human or a machine to verify it?  29 18 36

Modifiability
Could there be any redundancy in the requirements?  30 19 37

Content
General
Can each requirement be relevant to the problem and its solution?  31 20 38
Can any of the defined requirements contain design details?  32 20 39
Can any of the defined requirements contain verification details?  33 22 40
Can any of the defined requirements contain project management details?  34 23 41

Specific
Inputs
Can all input sources be specified?  35 23 42
Can all input accuracy requirements be specified?  36 23 43
Can all input range values be specified?  37 23 44
Can all input frequencies be specified?  38 23 45
Can all input formats be specified?  39 23 46

Outputs
Can all output destinations be specified?  40 24 47
Can all output accuracy requirements be specified?  40 24 48
Can all output range values be specified?  41 24 49
Can all output frequencies be specified?  42 24 50
Can all output formats be specified?  43 24 51

Functions
Can all software functions be specified?  44 25 52
Can all system throughput rates be specified?  45 25 53
Reliability
Can the consequences of software failure be specified for each requirement?  46 25 53
Can the information to be protected from failure be specified?  46 25 54
Can a strategy for error detection be specified?  47 26 54
Can a strategy for correction be specified?  48 27 54

Tradeoffs
Can acceptable tradeoffs be specified for competing attributes?  48 28 54

Hardware
Can the minimum memory be specified?  49 28 55
Can the minimum storage be specified?  50 28 56
Can the maximum memory be specified?  51 28 57
Can the maximum storage be specified?  52 28 58

Software
Can the required software environments/OSs be specified?  53 28 59
Can all of the required software utilities be specified?  54 28 60
Can all purchased software products that are to be used with the system be specified?  55 28 61

Communication
Can the target network be specified?  56 29 62
Can the required network protocols be specified?  57 30 63
Can the required network capacity be specified?  58 31 64
Can the required/estimated network throughput rate be specified?  59 31 64
Can the estimated number of network connections be specified?  60 31 64
Can the minimum network performance requirements be specified?  61 31 64
Can the maximum network performance requirements be specified?  62 31 65
Can the optimal network performance requirements be specified?  63 31 66
Can all aspects of the processing be specified for each function?  64 32 67
Can all outputs be specified for each function?  65 33 68
Can all performance requirements be specified for each function?  66 34 69
Can all design constraints be specified for each function?  67 35 69
Can all attributes be specified for each function?  68 36 70
Can all security requirements be specified for each function?  69 37 71
Can all maintainability requirements be specified for each function?  70 37 72
Can all database requirements be specified for each function?  71 37 73
Can all operational requirements be specified for each function?  72 37 74
Can all installation requirements be specified for each function?  72 37 75

External interfaces
Can all user interfaces be specified?  73 38 75
Can all hardware interfaces be specified?  73 38 76
Can all software interfaces be specified?  74 38 77
Can all communications interfaces be specified?  75 38 78
Can all interface design constraints be specified?  76 38 79
Can all interface security requirements be specified?  77 38 80
Can all interface maintainability requirements be specified?  78 38 81
Can all human-computer interactions be specified for user interfaces?  79 39 82

Timing
Can all expected processing times be specified?  80 40 83
Can all data transfer rates be specified?  81 40 84

Table A.1: Quality Criteria Checklist, based on [Mos97].
A.2 Inspecting Formality

Table A.2 shows the checklist utilised to measure the formality of the notations in the comparative evaluation (Section 3.2). After each question, the results of its application to the MASTeR, EARS and Planguage templates are given as three scores, respectively.

Formality Checklist (scores: MASTeR, EARS, Planguage)
Does the notation present rules about the structure of the elements (note 1)?  1 1 1
Does the notation present rules related to tokens (note 2)?  1 1 2
Are the tokens represented in symbolic form (note 3)?  1 1 2
Is it necessary to parse (note 4) the resulting requirement so that a human can understand its semantics?  1 1 3
Is the notation free of idioms?  2 1 4
Is the notation free of poetry and metaphors?  3 2 5

Table A.2: Formality Checklist, partly based on [MR13] and [AYI04].

Notes:
1. Any language will present a structure of its elements, even natural languages like English or German.
2. Allowed elements of the language.
3. Use of symbols and logical, arithmetic, ..., connectors.
4. Analyse and understand the structure of the requirement.

A.3 Inspecting Participants' Requirements

Table A.3 shows the checks utilised to measure the quality of the requirements written by the participants of the experiment (Section 5.3.2).

Checklist for participants' requirements
Is the requirement free of ambiguities?
Is the requirement complete?
Is the requirement free of vagueness?
Is the requirement prioritised?
Is the requirement verifiable?
Is the requirement free of inappropriate implementation details?
Is the requirement free of errors in the usage of template-systems?

Table A.3: Checklist utilised to test the quality of participants' requirements.
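The seven checks of Table A.3 appear to correspond to the 7 quality points used to score the participants' requirements in Section 5.3.2, one point per satisfied check. The following minimal Python sketch illustrates this scoring under that assumption; the boolean verdicts would come from the manual evaluation, and the example values are made up for illustration.

# The seven checks of Table A.3, one point each.
CHECKS = (
    "free of ambiguities",
    "complete",
    "free of vagueness",
    "prioritised",
    "verifiable",
    "free of inappropriate implementation details",
    "free of errors in the usage of template-systems",
)

def quality_score(verdicts):
    """One point per satisfied check; 7 points at most."""
    return sum(1 for check in CHECKS if verdicts.get(check, False))

# Hypothetical verdicts for a requirement that lacks only a priority.
example = {check: True for check in CHECKS}
example["prioritised"] = False
print(quality_score(example))  # -> 6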
B Experiment

The content of the web-based experiment is summarised in this chapter.

B.1 Introduction

Figure B.1 shows the questionnaire about the occupation and previous experience of the participants. It is placed at the introduction of the test (Section 5.1.1) and is used for statistical purposes.

Figure B.1: Questionnaire about participants' previous experience and occupation.

B.2 Reading

Table B.1 lists the requirements and questions that participants had to read and answer in the reading section of the experiment (Section 5.3.1). Table B.2 lists the feedback statements with which the participants had to agree or disagree.

B.3 Writing

Table B.3 lists all the descriptions that the test subjects had to specify in the writing section of the experiment (Section 5.3.2). Table B.4 shows all the statements of the writing questionnaire.
Requirement: Validate notebook file format.
Question: Is it necessary to specify the file format so that the requirement is complete? Why?

Requirement: The system shall provide users with the ability to resolve merge conflicts manually.
Question: Is the requirement complete?

Requirement: If a merge is not intentioned, the system should provide users with the ability to undo the merge.
Question: Could the requirement have more than one interpretation? Why?

Requirement: If an IPython Notebook file is corrupted, then the system shall display a message to the user.
Question: Should the requirement be specified in more/less detail? Why?

Requirement: While the system is requesting a notebook from remote servers, the system shall not accept parallels requests to the same notebook.
Question: Is the requirement testable?

Table B.1: Questions to be answered in the reading section.
Each of the following statements had to be rated as one of: ◻ Strongly Disagree, ◻ Disagree, ◻ Agree, ◻ Strongly Agree.

Natural language requirements are easy to understand.
MASTeR requirements are easy to understand.
EARS requirements are easy to understand.
MASTeR requirements are more complete than natural language requirements.
EARS requirements are more complete than natural language requirements.

Table B.2: Content of the reading section's feedback questionnaire.

Description: This functionality allows the user to diff and display modifications made between two versions of the same IPython notebook, allowing the user to view the added, removed or modified elements.
Notation: MASTeR, EARS or natural language

Description: This functionality allows the user to generate a diff of a notebook from two local IPYNB files, provided by the user.
Notation: MASTeR

Description: This functionality generates a diff of a notebook on a line based level for text based cells, indicating to the user when a cell has been modified and which lines were added or deleted.
Notation: EARS

Description: This functionality generates a diff of a notebook on a line based level for text based cells, indicating to the user when a cell has been modified and which lines were added or deleted.
Notation: MASTeR, EARS or natural language

Table B.3: Descriptions of the system functionalities to be specified in the writing section.
Each of the following statements had to be rated as one of: ◻ Strongly Disagree, ◻ Disagree, ◻ Agree, ◻ Strongly Agree.

Descriptions were easy to specify in MASTeR.
Descriptions were easy to specify in EARS.
Descriptions were easy to specify in natural language.
MASTeR contains a full set of templates.
EARS contains a full set of templates.

Table B.4: Content of the writing section's feedback questionnaire.
Bibliography

[Abr10] Jean-Raymond Abrial. Modeling in Event-B: System and Software Engineering. Cambridge University Press, New York, NY, USA, 1st edition, 2010.

[AYI04] Yunus Adanur, Oktay Yagiz, and Ahmet Işık. Mathematics and language. Journal of the Korea Society of Mathematical Education Series, 8:31–27, Mar 2004.

[BF14] Pierre Bourque and R. E. Fairley. SWEBOK: guide to the software engineering body of knowledge, volume 3. IEEE Computer Society, 2014.

[Boe79] Barry Boehm. Guidelines for verifying and validating software requirements and design specifications. Euro IFIP 79, pages 711–719, 1979.

[Che13] Lianping Chen. Characterizing architecturally significant requirements. IEEE Software, 30(2):38–45, Apr 2013.

[Dav95] A. M. Davis. Remontando: Una Necesidad Simple Descuidó. IEEE Software, 1995.

[DSP10] Definition and Exemplification of RSL and RMM, CESAR Project. Report, EADS Deutschland GmbH - IW, Germany, 2010.

[Eak96] Kevin Eakin. Implementing ESPITI on a European scale. Journal of Systems Architecture, 42(8):579–582, 1996.

[Ebe08] Christof Ebert. Systematisches Requirements-Engineering und Management: Anforderungen ermitteln, spezifizieren, analysieren und verwalten. dpunkt.verlag, 2008.

[Eck15] Florian Eckey. Konzeption und Entwicklung einer Software zur Konsistenzhaltung zwischen BPMN-Modellierung und textueller Anforderungsdokumentation, 2015.

[Far12] S. Farfeleder. Requirements Specification and Analysis for Embedded Systems. PhD thesis, Vienna University of Technology, 2012.
[GDW09] Alexander Grosskopf, Gero Decker, and Mathias Weske. The Process: Business Process Modeling using BPMN. Meghan-Kiffer Press, 2009.

[Gil05] Tom Gilb. Competitive engineering: a handbook for systems engineering, requirements engineering, and software engineering using Planguage. Butterworth-Heinemann, 2005.

[Gro15] Katharina Großer. Investigating the use of ontology techniques as a support for on-board software requirement engineering, 2015.

[HAL+14] Jennifer Horkoff, Fatma Basak Aydemir, Feng-Lin Li, Tong Li, and John Mylopoulos. Evaluating Modeling Languages: An Example from the Requirements Domain. In Conceptual Modeling: 33rd International Conference, pages 260–274, 2014.

[HJD05] Elizabeth Hull, Ken Jackson, and Jeremy Dick. Requirements engineering. Springer, 2005.

[ISO98] IEEE recommended practice for software requirements specifications. Standard, International Organization for Standardization, Switzerland, 1998.

[ISO10] Systems and software engineering — vocabulary. Standard, International Organization for Standardization, Switzerland, 2010.

[ISO11] Systems and software engineering - life cycle processes - requirements engineering. Standard, International Organization for Standardization, Switzerland, 2011.

[Jac01] Daniel Jackson. Alloy: A lightweight object modelling notation. Laboratory of Computer Science, Massachusetts Institute of Technology, November 2001.

[Joh12] V. Johannessen. CESAR - text vs. boilerplates: What is more efficient - requirements written as free text or using boilerplates (templates), 2012.

[KC05] Sascha Konrad and Betty HC Cheng. Facilitating the construction of specification pattern-based properties. 13th IEEE International Requirements Engineering Conference, 2005.

[Lav03] L. Lavazza. Rigorous Description of Software Requirements with UML. 15th International Conference on Software Engineering and Knowledge Engineering (SEKE2003), July 2003.

[Mav10] Alistair Mavin. EARS Tutorial. Rolls-Royce, 2010.
[Mos97] Daniel J. Mosley. Software requirements specification walkthrough checklist. Technical report, CSST Technologies, Inc., 1997.

[MR13] Brad Miller and David Ranum. Formal and natural languages, 2013.

[MSKB11] D. Majumdar, S. Sengupta, A. Kanjilal, and S. Bhattacharya. Automated Requirements Modelling With Adv-EARS. International Journal of Information Technology Convergence and Services, 2011.

[MWHN09] A. Mavin, P. Wilkinson, A. Harwood, and M. Novak. EARS (Easy Approach to Requirements Syntax). 17th IEEE International Requirements Engineering Conference, 2009.

[Poh96] Klaus Pohl. Requirements engineering: an overview. RWTH, Fachgruppe Informatik, 1996.

[RdS14a] Chris Rupp and die SOPHISTen. Requirements Templates - The Blueprint of your Requirement. SOPHIST GmbH, 2014.

[RdS14b] Chris Rupp and die SOPHISTen. Schablonen für alle Fälle. SOPHIST GmbH, 2014.

[RR06] Suzanne Robertson and James Robertson. Mastering the requirements process. Addison-Wesley, 2nd edition, 2006.

[Sim01] Erik Simmons. Quantifying quality requirements using Planguage. Technical report, Intel Corporation, Hillsboro, 2001.

[Sim11] Erik Simmons. 21st Century Requirements Engineering: A Pragmatic Guide to Best Practices. PNSQC, 2011.

[Som11] Ian Sommerville. Software engineering. Pearson, 2011.

[Ter13] John Terzakis. EARS: The easy approach to requirements syntax, Jul 2013.

[WD96] Jim Woodcock and Jim Davies. Using Z: specification, refinement, and proof. Prentice Hall, 1996.

[WH12] Matt Wynne and Aslak Hellesoy. The Cucumber Book: Behaviour-Driven Development for Testers and Developers. The Pragmatic Bookshelf, 2012.

[Wie99] Karl E. Wiegers. Writing quality requirements. Software Development, May 1999.