
Vice President and Editorial Director, ECS: Marcia J. Horton
Executive Editor: Tracy Dunkelberger
Assistant Editor: Melinda Haggerty
Director of Team-Based Project Management: Vince O'Brien
Senior Managing Editor: Scott Disanno
Production Liaison: Jane Bonnell
Production Editor: Pavithra Jayapaul, TexTech
Senior Operations Specialist: Alan Fischer
Operations Specialist: Lisa McDowell
Marketing Manager: Erin Davis
Marketing Assistant: Mack Patterson
Art Director: Kenny Beck
Cover Designer: Kristine Carney
Cover Image: [credit to come]
Art Editor: Greg Dulles
Media Editor: Daniel Sandin
Media Project Manager: John M. Cassar
Composition/Full-Service Project Management: TexTech International Pvt. Ltd.

Copyright © 2010, 2006, 2001, 1998 by Pearson Higher Education, Upper Saddle River, New Jersey 07458. All rights reserved. Manufactured in the United States of America. This publication is protected by Copyright and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. To obtain permission(s) to use materials from this work, please submit a written request to Pearson Higher Education, Permissions Department, 1 Lake Street, Upper Saddle River, NJ 07458.

The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Library of Congress Cataloging-in-Publication Data
Pfleeger, Shari Lawrence.

Software engineering: theory and practice / Shari Lawrence Pfleeger, Joanne M. Atlee. -4th ed.
p. cm.

Includes bibliographical references and index.
ISBN-13: 978-0-13-606169-4 (alk. paper)
ISBN-10: 0-13-606169-9 (alk. paper)

1. Software engineering. I. Atlee, Joanne M. II. Title.
QA76.758.P49 2010
005.1-dc22
2008051400

Prentice Hall is an imprint of

PEARSON
www.pearsonhighered.com

10 9 8 7 6 5 4 3 2 1

ISBN-13: 978-0-13-606169-4

ISBN-10: 0-13-606169-9

"From so much loving and journeying, books emerge." Pablo Neruda

To Florence Rogart for providing the spark; to Norma Mertz for helping to keep the flame burning.

S.L.P.

To John Gannon, posthumously, for his integrity, inspiration, encouragement, friendship, and the legacy he has left to all of us in software engineering.

J.M.A.

Preface

BRIDGING THE GAP BETWEEN RESEARCH AND PRACTICE

Software engineering has come a long way since 1968, when the term was first used at a NATO conference. And software itself has entered our lives in ways that few had anticipated, even a decade ago. So a firm grounding in software engineering theory and practice is essential for understanding how to build good software and for evaluating the risks and opportunities that software presents in our everyday lives. This text represents the blending of the two current software engineering worlds: that of the practitioner, whose main focus is to build high-quality products that perform useful functions, and that of the researcher, who strives to find ways to improve the quality of products and the productivity of those who build them. Edsger Dijkstra continually reminded us that rigor in research and practice tests our understanding of software engineering and helps us to improve our thinking, our approaches, and ultimately our products.

It is in this spirit that we have enhanced our book, building an underlying framework for this questioning and improvement. In particular, this fourth edition contains extensive material about how to abstract and model a problem, and how to use models, design principles, design patterns, and design strategies to create appropriate solutions. Software engineers are more than programmers following instructions, much as chefs are more than cooks following recipes. There is an art to building good software, and the art is embodied in understanding how to abstract and model the essential elements of a problem and then use those abstractions to design a solution. We often hear good developers talk about "elegant" solutions, meaning that the solution addresses the heart of the problem, such that not only does the software solve the problem in its current form but it can also be modified as the problem evolves over time. In this way, students learn to blend research with practice and art with science, to build solid software.

The science is always grounded in reality. Designed for an undergraduate software engineering curriculum, this book paints a pragmatic picture of software engineering research and practices so that students can apply what they learn directly to the real-world problems they are trying to solve. Examples speak to a student's limited experience but illustrate clearly how large software development projects progress from need to idea to reality. The examples represent the many situations that readers are likely to experience: large projects and small, "agile" methods and highly structured ones, object-oriented and procedural approaches, real-time and transaction processing, development and maintenance situations.

The book is also suitable for a graduate course offering an introduction to software engineering concepts and practices, or for practitioners wishing to expand their knowledge of the subject. In particular, Chapters 12, 13, and 14 present thought-provoking material designed to interest graduate students in current research topics.

KEY FEATURES

This text has many key features that distinguish it from other books.

• Unlike other software engineering books that consider measurement and modeling as separate issues, this book blends measurement and modeling with the more general discussion of software engineering. That is, measurement and modeling are considered as an integral part of software engineering strategies, rather than as separate disciplines. Thus, students learn how to abstract and model, and how to involve quantitative assessment and improvement in their daily activities. They can use their models to understand the important elements of the problems they are solving as well as the solution alternatives; they can use measurement to evaluate their progress on an individual, team, and project basis.

• Similarly, concepts such as reuse, risk management, and quality engineering are embedded in the software engineering activities that are affected by them, instead of being treated as separate issues.

• The current edition addresses the use of agile methods, including extreme programming. It describes the benefits and risks of giving developers more autonomy and contrasts this agility with more traditional approaches to software development.

• Each chapter applies its concepts to two common examples: one that represents a typical information system, and another that represents a real-time system. Both examples are based on actual projects. The information system example describes the software needed to determine the price of advertising time for a large British television company. The real-time system is the control software for the Ariane-5 rocket; we look at the problems reported, and explore how software engineering techniques could have helped to locate and avoid some of them. Students can follow the progress of two typical projects, seeing how the various practices described in the book are merged into the technologies used to build systems.

• At the end of every chapter, the results are expressed in three ways: what the content of the chapter means for development teams, what it means for individual developers, and what it means for researchers. The student can easily review the highlights of each chapter, and can see the chapter's relevance to both research and practice.

• The Companion Web site can be found at www.prenhall.com/pfleeger. It contains current examples from the literature and examples of real artifacts from real projects. It also includes links to Web pages for relevant tool and method vendors. It is here that students can find real requirements documents, designs, code, test plans, and more. Students seeking additional, in-depth information are pointed to reputable, accessible publications and Web sites. The Web pages are updated regularly to keep the material in the textbook current, and include a facility for feedback to the author and the publisher.

• A Student Study Guide is available from your local Pearson Sales Representative.


• PowerPoint slides and a full solutions manual are available on the Instructor Resource Center. Please contact your local Pearson Sales Representative for access information.

• The book is replete with case studies and examples from the literature. Many of the one-page case studies shown as sidebars in the book are expanded on the Web page. The student can see how the book's theoretical concepts are applied to real-life situations.

• Each chapter ends with thought-provoking questions about policy, legal, and ethical issues in software engineering. Students see software engineering in its social and political contexts. As with other sciences, software engineering decisions must be viewed in terms of the people their consequences will affect.

• Every chapter addresses both procedural and object-oriented development. In addition, Chapter 6 on design explains the steps of an object-oriented development process. We discuss several design principles and use object-oriented examples to show how designs can be improved to incorporate these principles.

• The book has an annotated bibliography that points to many of the seminal papers in software engineering. In addition, the Web page points to annotated bibliographies and discussion groups for specialized areas, such as software reliability, fault tolerance, computer security, and more.

• Each chapter includes a description of a term project, involving development of software for a mortgage processing system. The instructor may use this term project, or a variation of it, in class assignments.

• Each chapter ends with a list of key references for the concepts in the chapter, enabling students to find in-depth information about particular tools and methods discussed in the chapter.

• This edition includes examples highlighting computer security. In particular, we emphasize designing security in, instead of adding it during coding or testing.

CONTENTS AND ORGANIZATION

This text is organized in three parts. The first part motivates the reader, explaining why knowledge of software engineering is important to practitioners and researchers alike. Part I also discusses the need for understanding process issues, for making decisions about the degree of "agility" developers will have, and for doing careful project planning. Part II walks through the major steps of development and maintenance, regardless of the process model used to build the software: eliciting, modeling, and checking the requirements; designing a solution to the problem; writing and testing the code; and turning it over to the customer. Part III focuses on evaluation and improvement. It looks at how we can assess the quality of our processes and products, and how to take steps to improve them.

Chapter 1: Why Software Engineering?

In this chapter we address our track record, motivating the reader and highlighting where in later chapters certain key issues are examined. In particular, we look at Wasserman's key factors that help define software engineering: abstraction, analysis and design methods and notations, modularity and architecture, software life cycle and process, reuse, measurement, tools and integrated environments, and user interface and prototyping. We discuss the difference between computer science and software engineering, explaining some of the major types of problems that can be encountered, and laying the groundwork for the rest of the book. We also explore the need to take a systems approach to building software, and we introduce the two common examples that will be used in every chapter. It is here that we introduce the context for the term project.

Chapter 2: Modeling the Process and Life Cycle

In this chapter, we present an overview of different types of process and life-cycle models, including the waterfall model, the V-model, the spiral model, and various prototyping models. We address the need for agile methods, where developers are given a great deal of autonomy, and contrast them with more traditional software development processes. We also describe several modeling techniques and tools, including systems dynamics and other commonly used approaches. Each of the two common examples is modeled in part with some of the techniques introduced here.

Chapter 3: Planning and Managing the Project

Here, we look at project planning and scheduling. We introduce notions such as activities and milestones, work breakdown structure, activity graphs, risk management, and costs and cost estimation. Estimation models are used to estimate the cost and schedule of the two common examples. We focus on actual case studies, including management of software development for the F-16 airplane and for Digital's Alpha AXP programs.

Chapter 4: Capturing the Requirements

This chapter emphasizes the critical roles of abstraction and modeling in good software engineering. In particular, we use models to tease out misunderstandings and missing details in provided requirements, as well as to communicate requirements to others. We explore a number of different modeling paradigms, study example notations for each paradigm, discuss when to use each paradigm, and provide advice about how to make particular modeling and abstraction decisions. We discuss different sources and different types of requirements (functional requirements vs. quality requirements vs. design constraints), explain how to write testable requirements, and describe how to resolve conflicts. Other topics discussed include requirements elicitation, requirements documentation, requirements reviews, requirements quality and how to measure it, and an example of how to select a specification method. The chapter ends with application of some of the methods to the two common examples.

Chapter 5: Designing the Architecture

This chapter on software architecture has been completely revised for the fourth edition. It begins by describing the role of architecture in the software design process and in the larger development process. We examine the steps involved in producing the architecture, including modeling, analysis, documentation, and review, resulting in the creation of a Software Architecture Document that can be used by program designers in describing modules and interfaces. We discuss how to decompose a problem into parts, and how to use different views to examine the several aspects of the problem so as to find a suitable solution. Next, we focus on modeling the solution using one or more architectural styles, including pipe-and-filter, peer-to-peer, client-server, publish-subscribe, repositories, and layering. We look at combining styles and using them to achieve quality goals, such as modifiability, performance, security, reliability, robustness, and usability.

Once we have an initial architecture, we evaluate and refine it. In this chapter, we show how to measure design quality and to use evaluation techniques in safety analysis, security analysis, trade-off analysis, and cost-benefit analysis to select the best architecture for the customer's needs. We stress the importance of documenting the design rationale, validating and verifying that the design matches the requirements, and creating an architecture that suits the customer's product needs. Towards the end of the chapter, we examine how to build a product-line architecture that allows a software provider to reuse the design across a family of similar products. The chapter ends with an architectural analysis of our information system and real-time examples.

Chapter 6: Designing the Modules

Chapter 6, substantially revised in this edition, investigates how to move from a description of the system architecture to descriptions of the design's individual modules. We begin with a discussion of the design process and then introduce six key design principles to guide us in fashioning modules from the architecture: modularity, interfaces, information hiding, incremental development, abstraction, and generality. Next, we take an in-depth look at object-oriented design and how it supports our six principles. Using a variety of notations from the Unified Modeling Language, we show how to represent multiple aspects of module functionality and interaction, so that we can build a robust and maintainable design. We also describe a collection of design patterns, each with a particular purpose, and demonstrate how they can be used to reinforce the design principles. Next, we discuss global issues such as data management, exception handling, user interfaces, and frameworks; we see how consistency and clarity of approach can lead to more effective designs.

Taking a careful look at object-oriented measurement, we apply some of the common object-oriented metrics to a service station example. We note how changes in metrics values, due to changes in the design, can help us decide how to allocate resources and search for faults. Finally, we apply object-oriented concepts to our information systems and real-time examples.

Chapter 7: Writing the Programs

In this chapter, we address code-level design decisions and the issues involved in implementing a design to produce high-quality code. We discuss standards and procedures, and suggest some simple programming guidelines. Examples are provided in a variety of languages, including both object-oriented and procedural. We discuss the need for program documentation and an error-handling strategy. The chapter ends by applying some of the concepts to the two common examples.

Chapter 8: Testing the Programs

In this chapter, we explore several aspects of testing programs. We distinguish conventional testing approaches from the cleanroom method, and we look at how to test a variety of systems. We present definitions and categories of software problems, and we discuss how orthogonal defect classification can make data collection and analysis more effective. We then explain the difference between unit testing and integration testing. After introducing several automated test tools and techniques, we explain the need for a testing life cycle and how the tools can be integrated into it. Finally, the chapter applies these concepts to the two common examples.

Chapter 9: Testing the System

We begin with principles of system testing, including reuse of test suites and data, and the need for careful configuration management. Concepts introduced include function testing, performance testing, acceptance testing, and installation testing. We look at the special needs of testing object-oriented systems. Several test tools are described, and the roles of test team members are discussed. Next, we introduce the reader to software reliability modeling, and we explore issues of reliability, maintainability, and availability. The reader learns how to use the results of testing to estimate the likely characteristics of the delivered product. The several types of test documentation are introduced, too, and the chapter ends by describing test strategies for the two common examples.

Chapter 10: Delivering the System

This chapter discusses the need for training and documentation, and presents several examples of training and documents that could accompany the information system and real-time examples.

Chapter 11: Maintaining the System

In this chapter, we address the results of system change. We explain how changes can occur during the system's life cycle, and how system design, code, test process, and documentation must accommodate them. Typical maintenance problems are discussed, as well as the need for careful configuration management. There is a thorough discussion of the use of measurement to predict likely changes, and to evaluate the effects of change. We look at reengineering and restructuring in the overall context of rejuvenating legacy systems. Finally, the two common examples are evaluated in terms of the likelihood of change.

Chapter 12: Evaluating Products, Processes, and Resources

Since many software engineering decisions involve the incorporation and integration of existing components, this chapter addresses ways to evaluate processes and products. It discusses the need for empirical evaluation and gives several examples to show how measurement can be used to establish a baseline for quality and productivity. We look at several quality models, how to evaluate systems for reusability, how to perform postmortems, and how to understand return on investment in information technology. These concepts are applied to the two common examples.

Chapter 13: Improving Predictions, Products, Processes, and Resources

This chapter builds on Chapter 12 by showing how prediction, product, process, and resource improvement can be accomplished. It contains several in-depth case studies to show how prediction models, inspection techniques, and other aspects of software engineering can be understood and improved using a variety of investigative techniques. This chapter ends with a set of guidelines for evaluating current situations and identifying opportunities for improvement.

Chapter 14: The Future of Software Engineering

In this final chapter, we look at several open issues in software engineering. We revisit Wasserman's concepts to see how well we are doing as a discipline. We examine several issues in technology transfer and decision-making, to determine if we do a good job at moving important ideas from research to practice. Finally, we examine controversial issues, such as licensing of software engineers as professional engineers and the trend towards more domain-specific solutions and methods.

ACKNOWLEDGMENTS

Books are written as friends and family provide technical and emotional support. It is impossible to list here all those who helped to sustain us during the writing and revising, and we apologize in advance for any omissions. Many thanks to the readers of earlier editions, whose careful scrutiny of the text generated excellent suggestions for correction and clarification. As far as we know, all such suggestions have been incorporated into this edition. We continue to appreciate feedback from readers, positive or negative.

Carolyn Seaman (University of Maryland-Baltimore Campus) was a terrific reviewer of the first edition, suggesting ways to clarify and simplify, leading to a tighter, more understandable text. She also prepared most of the solutions to the exercises, and helped to set up an early version of the book's Web site. I am grateful for her friendship and assistance. Yiqing Liang and Carla Valle updated the Web site and added substantial new material for the second edition; Patsy Ann Zimmer (University of Waterloo) revised the Web site for the third edition, particularly with respect to modeling notations and agile methods.

We owe a huge thank-you to Forrest Shull (Fraunhofer Center-Maryland) and Roseanne Tesoriero (Washington College), who developed the initial study guide for this book; to Maria Vieira Nelson (Catholic University of Minas Gerais, Brazil), who revised the study guide and the solutions manual for the third edition; and to Eduardo S. Barrenechea (University of Waterloo) for updating the materials for the fourth edition. Thanks, too, to Hossein Saiedian (University of Kansas) for preparing the PowerPoint presentation for the fourth edition. We are also particularly indebted to Guilherme Travassos (Federal University of Rio de Janeiro) for the use of material that he developed with Pfleeger at the University of Maryland-College Park, and that he enriched and expanded considerably for use in subsequent classes.

Helpful and thoughtful reviewers for all four editions included Barbara Kitchenham (Keele University, UK), Bernard Woolfolk (Lucent Technologies), Ana Regina Cavalcanti da Rocha (Federal University of Rio de Janeiro), Frances Uku (University of California at Berkeley), Lee Scott Ehrhart (MITRE), Laurie Werth (University of Texas), Vickie Almstrum (University of Texas), Lionel Briand (Simula Research, Norway), Steve Thibaut (University of Florida), Lee Wittenberg (Kean College of New Jersey), Philip Johnson (University of Hawaii), Daniel Berry (University of Waterloo, Canada), Nancy Day (University of Waterloo), Jianwei Niu (University of Waterloo), Chris Gorringe (University of East Anglia, UK), Ivan Aaen (Aalborg University), Damla Turgut (University of Central Florida), Laurie Williams (North Carolina State University), Ernest Sibert (Syracuse University), Allen Holliday (California State University, Fullerton), David Rine (George Mason University), Anthony Sullivan (University of Texas, Dallas), David Chesney (University of Michigan, Ann Arbor), Ye Duan (Missouri University), Rammohan K. Ragade (Kentucky University), and several anonymous reviewers provided by Prentice Hall. Discussions with Greg Hislop (Drexel University), John Favaro (Intecs Sistemi, Italy), Filippo Lanubile (Universita di Bari, Italy), John d'Ambra (University of New South Wales, Australia), Chuck Howell (MITRE), Tim Vieregge (U.S. Army Computer Emergency Response Team), and James and Suzanne Robertson (Atlantic Systems Guild, UK) led to many improvements and enhancements.

Thanks to Toni Holm and Alan Apt, who made the third edition of the book's production interesting and relatively painless. Thanks, too, to James and Suzanne Robertson for the use of the Piccadilly example, and to Norman Fenton for the use of material from our software metrics book. We are grateful to Tracy Dunkelberger for encouraging us in producing this fourth edition; we appreciate both her patience and her professionalism. Thanks, too, to Jane Bonnell and Pavithra Jayapaul for seamless production.

Many thanks to the publishers of several of the figures and examples for granting permission to reproduce them here. The material from Complete Systems Analysis (Robertson and Robertson 1994) and Mastering the Requirements Process (Robertson and Robertson 1999) is drawn from and used with permission from Dorset House Publishing, at www.dorsethouse.com; all rights reserved. The article in Exercise 1.1 is reproduced from the Washington Post with permission from the Associated Press. Figures 2.15 and 2.16 are reproduced from Barghouti et al. (1995) by permission of John Wiley and Sons Limited. Figures 12.14 and 12.15 are reproduced from Rout (1995) by permission of John Wiley and Sons Limited.

Figures and tables in Chapters 2, 3, 4, 5, 9, 11, 12, and 14 that are noted with an IEEE copyright are reprinted with permission of the Institute of Electrical and Electronics Engineers. Similarly, the three tables in Chapter 14 that are noted with an ACM copyright are reprinted with permission of the Association of Computing Machinery. Table 2.1 and Figure 2.11 from Lai (1991) are reproduced with permission from the Software Productivity Consortium. Figures 8.16 and 8.17 from Graham (1996a) are reprinted with permission from Dorothy R. Graham. Figure 12.11 and Table 12.2 are adapted from Liebman (1994) with permission from the Center for Science in the Public Interest, 1875 Connecticut Avenue NW, Washington DC. Tables 8.2, 8.3, 8.5, and 8.6 are reproduced with permission of The McGraw-Hill Companies. Figures and examples from Shaw and Garlan (1996), Card and Glass (1990), Grady (1997), and Lee and Tepfenhart (1997) are reproduced with permission from Prentice Hall.

Tables 9.3, 9.4, 9.6, 9.7, 13.1, 13.2, 13.3, and 13.4, as well as Figures 1.15, 9.7, 9.8, 9.9, 9.14, 13.1, 13.2, 13.3, 13.4, 13.5, 13.6, and 13.7, are reproduced or adapted from Fenton and Pfleeger (1997) in whole or in part with permission from Norman Fenton. Figures 3.16, 5.19, and 5.20 are reproduced or adapted from Norman Fenton's course notes, with his kind permission.

We especially appreciate our employers, the RAND Corporation and the University of Waterloo, respectively, for their encouragement.¹ And we thank our friends and family, who offered their kindness, support, and patience as the book-writing stole time ordinarily spent with them. In particular, Shari Lawrence Pfleeger is grateful to Manny Lawrence, the manager of the real Royal Service Station, and to his bookkeeper, Bea Lawrence, not only for working with her and her students on the specification of the Royal system, but also for their affection and guidance in their other job: as her parents. Jo Atlee gives special thanks to her parents, Nancy and Gary Atlee, who have supported and encouraged her in everything she has done (and attempted); and to her colleagues and students, who graciously took on more than their share of work during the major writing periods. And, most especially, we thank Charles Pfleeger and Ken Salem, who were constant and much-appreciated sources of support, encouragement, and good humor.

Shari Lawrence Pfleeger
Joanne M. Atlee

¹Please note that this book is not a product of the RAND Corporation and has not undergone RAND's quality assurance process. The work represents us as authors, not as employees of our respective institutions.

About the Authors

Shari Lawrence Pfleeger (Ph.D., Information Technology and Engineering, George Mason University; M.S., Planning, Pennsylvania State University; M.A., Mathematics, Pennsylvania State University; B.A., Mathematics, Harpur College) is a senior information scientist at the RAND Corporation. Her current research focuses on policy and decision-making issues that help organizations and government agencies understand whether and how information technology supports their missions and goals. Her work at RAND has involved assisting clients in creating software measurement programs, supporting government agencies in defining information assurance policies, and supporting decisions about cyber security and homeland security.

Prior to joining RAND, she was the president of Systems/Software, Inc., a consultancy specializing in software engineering and technology. She has been a visiting professor at City University (London) and the University of Maryland and was the founder and director of Howard University's Center for Research in Evaluating Software Technology. The author of many textbooks on software engineering and computer security, Pfleeger is well known for her work in empirical studies of software engineering and for her multidisciplinary approach to solving information technology problems. She has been associate editor-in-chief of IEEE Software, associate editor of IEEE Transactions on Software Engineering, associate editor of IEEE Security and Privacy, and a member of the IEEE Computer Society Technical Council on Software Engineering. A frequent speaker at conferences and workshops, Pfleeger has been named repeatedly by the Journal of Systems and Software as one of the world's top software engineering researchers.

Joanne M. Atlee (Ph.D. and M.S., Computer Science, University of Maryland; B.S., Computer Science and Physics, College of William and Mary; P.Eng.) is an Associate Professor in the School of Computer Science at the University of Waterloo. Her research focuses on software modeling, documentation, and analysis. She is best known for her work on model checking software requirements specifications. Other research interests include model-based software engineering, modular software development, feature interactions, and cost-benefit analysis of formal software development techniques. Atlee serves on the editorial boards for IEEE Transactions on Software Engineering, Software and Systems Modeling, and the Requirements Engineering Journal and is Vice Chair of the International Federation for Information Processing (IFIP) Working Group 2.9, an international group of researchers working on advances in software requirements engineering. She is Program Co-Chair for the 31st International Conference on Software Engineering (ICSE'09).

Atlee also has strong interests in software engineering education. She was the founding Director of Waterloo's Bachelor's program in Software Engineering. She served as a member of the Steering Committee for the ACM/IEEE-CS Computing Curricula-Software Engineering (CCSE) volume, which provides curricular guidelines for undergraduate programs in software engineering. She also served on a Canadian Engineering Qualifications Board committee whose mandate is to set a software engineering syllabus and to offer guidance to provincial engineering associations on what constitutes acceptable academic qualifications for licensed Professional Engineers who practice software engineering.

1
WHY SOFTWARE ENGINEERING?

In this chapter, we look at
• what we mean by software engineering
• software engineering's track record
• what we mean by good software
• why a systems approach is important
• how software engineering has changed since the 1970s

Software pervades our world, and we sometimes take for granted its role in making our lives more comfortable, efficient, and effective. For example, consider the simple tasks involved in preparing toast for breakfast. The code in the toaster controls how brown the bread will get and when the finished product pops up. Programs control and regulate the delivery of electricity to the house, and software bills us for our energy usage. In fact, we may use automated programs to pay the electricity bill, to order more groceries, and even to buy a new toaster! Today, software is working both explicitly and behind the scenes in virtually all aspects of our lives, including the critical systems that affect our health and well-being. For this reason, software engineering is more important than ever. Good software engineering practices must ensure that software makes a positive contribution to how we lead our lives.

This book highlights the key issues in software engineering, describing what we know about techniques and tools, and how they affect the resulting products we build and use. We will look at both theory and practice: what we know and how it is applied in a typical software development or maintenance project. We will also examine what we do not yet know, but what would be helpful in making our products more reliable, safe, useful, and accessible.

We begin by looking at how we analyze problems and develop solutions. Then we investigate the differences between computer science problems and engineering ones. Our ultimate goal is to produce solutions incorporating high-quality software, and we consider characteristics that contribute to the quality.


We also look at how successful we have been as developers of software systems. By examining several examples of software failure, we see how far we have come and how much farther we must go in mastering the art of quality software development.

Next, we look at the people involved in software development. After describing the roles and responsibilities of customers, users, and developers, we turn to a study of the system itself. We see that a system can be viewed as a group of objects related to a set of activities and enclosed by a boundary. Alternatively, we look at a system with an engineer's eye; a system can be developed much as a house is built. Having defined the steps in building a system, we discuss the roles of the development team at each step.

Finally, we discuss some of the changes that have affected the way we practice software engineering. We present Wasserman's eight ideas to tie together our practices into a coherent whole.

1.1 WHAT IS SOFTWARE ENGINEERING?

As software engineers, we use our knowledge of computers and computing to help solve problems. Often the problem with which we are dealing is related to a computer or an existing computer system, but sometimes the difficulties underlying the problem have nothing to do with computers. Therefore, it is essential that we first understand the nature of the problem. In particular, we must be very careful not to impose computing machinery or techniques on every problem that comes our way. We must solve the problem first. Then, if need be, we can use technology as a tool to implement our solution. For the remainder of this book, we assume that our analysis has shown that some kind of computer system is necessary or desirable to solve a particular problem at hand.

Solving Problems

Most problems are large and sometimes tricky to handle, especially if they represent something new that has never been solved before. So we must begin investigating a problem by analyzing it, that is, by breaking it into pieces that we can understand and try to deal with. We can thus describe the larger problem as a collection of small problems and their interrelationships. Figure 1.1 illustrates how analysis works. It is important to remember that the relationships (the arrows in the figure, and the relative positions of the subproblems) are as essential as the subproblems themselves. Sometimes, it is the relationships that hold the clue to how to solve the larger problem, rather than simply the nature of the subproblems.

Once we have analyzed the problem, we must construct our solution from components that address the problem's various aspects. Figure 1.2 illustrates this reverse process: Synthesis is the putting together of a large structure from small building blocks. As with analysis, the composition of the individual solutions may be as challenging as the process of finding the solutions. To see why, consider the process of writing a novel. The dictionary contains all the words that you might want to use in your writing. But the most difficult part of writing is deciding how to organize and compose the words into sentences, and likewise the sentences into paragraphs and chapters to form the complete book. Thus, any problem-solving technique must have two parts: analyzing the problem to determine its nature, and then synthesizing a solution based on our analysis.


[Figure: a PROBLEM is decomposed into Subproblems 1-4, with arrows showing their interrelationships.]
FIGURE 1.1 The process of analysis.

[Figure: Solutions 1-4 are composed into the overall SOLUTION.]
FIGURE 1.2 The process of synthesis.
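To make analysis and synthesis concrete, consider a small sketch of our own (the subproblem breakdown, function names, and loan terms below are invented for illustration; they are not drawn from the book or its term project): quoting a fixed-rate mortgage payment can be analyzed into three subproblems, and the sub-solutions then synthesized into a solution to the whole problem.

    # Analysis: break "quote a monthly mortgage payment" into subproblems.

    def validate(principal, annual_rate_pct, years):
        # Subproblem 1: reject loan terms that make no sense.
        if principal <= 0 or annual_rate_pct < 0 or years <= 0:
            raise ValueError("invalid loan terms")

    def monthly_rate(annual_rate_pct):
        # Subproblem 2: convert a yearly percentage to a monthly fraction.
        return annual_rate_pct / 100 / 12

    def payment(principal, rate, n_months):
        # Subproblem 3: standard annuity formula for a fixed-rate loan.
        if rate == 0:
            return principal / n_months
        return principal * rate / (1 - (1 + rate) ** -n_months)

    # Synthesis: compose the sub-solutions to solve the original problem.
    def quote(principal, annual_rate_pct, years):
        validate(principal, annual_rate_pct, years)
        return payment(principal, monthly_rate(annual_rate_pct), years * 12)

    print(f"${quote(100_000, 6.0, 30):,.2f} per month")  # $599.55

Notice that, just as the text says, the relationships matter as much as the pieces: quote is little more than the wiring among the sub-solutions, and that wiring is where much of the design lives.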


To help us solve a problem, we employ a variety of methods, tools, procedures, and paradigms. A method or technique is a formal procedure for producing some result. For example, a chef may prepare a sauce using a sequence of ingredients combined in a carefully timed and ordered way so that the sauce thickens but does not curdle or separate. The procedure for preparing the sauce involves timing and ingredients but may not depend on the type of cooking equipment used.

A tool is an instrument or automated system for accomplishing something in a better way. This "better way" can mean that the tool makes us more accurate, more efficient, or more productive or that it enhances the quality of the resulting product. For example, we use a typewriter or a keyboard and printer to write letters because the resulting documents are easier to read than our handwriting. Or we use a pair of scissors as a tool because we can cut faster and straighter than if we were tearing a page. However, a tool is not always necessary for making something well. For example, a cooking technique can make a sauce better, not the pot or spoon used by the chef.

A procedure is like a recipe: a combination of tools and techniques that, in concert, produce a particular product. For instance, as we will see in later chapters, our test plans describe our test procedures; they tell us which tools will be used on which data sets under which circumstances so we can determine whether our software meets its requirements.

Finally, a paradigm is like a cooking style; it represents a particular approach or philosophy for building software. Just as we can distinguish French cooking from Chinese cooking, so too do we distinguish paradigms like object-oriented development from procedural ones. One is not better than another; each has its advantages and disadvantages, and there may be situations when one is more appropriate than another.

Software engineers use tools, techniques, procedures, and paradigms to enhance the quality of their software products. Their aim is to use efficient and productive approaches to generate effective solutions to problems. In the chapters that follow, we will highlight particular approaches that support the development and maintenance activities we describe. An up-to-date set of pointers to tools and techniques is listed in this book's associated home page on the World Wide Web.

Where Does the Software Engineer Fit In?

To understand how a software engineer fits into the computer science world, let us look to another discipline for an example. Consider the study of chemistry and its use to solve problems. The chemist investigates chemicals: their structure, their interactions, and the theory behind their behavior. Chemical engineers apply the results of the chemist's studies to a variety of problems. Chemistry as viewed by chemists is the object of study. On the other hand, for a chemical engineer, chemistry is a tool to be used to address a general problem (which may not even be "chemical" in nature).

We can view computing in a similar light. We can concentrate on the computers and programming languages, or we can view them as tools to be used in designing and implementing a solution to a problem. Software engineering takes the latter view, as shown in Figure 1.3. Instead of investigating hardware design or proving theorems about how algorithms work, a software engineer focuses on the computer as a problem-solving tool. We will see later in this chapter that a software engineer works with the functions of a computer as part of a general solution, rather than with the structure or theory of the computer itself.

[Figure: COMPUTER SCIENCE supplies theories and computer functions; SOFTWARE ENGINEERING applies tools and techniques to solve the CUSTOMER's problem.]
FIGURE 1.3 The relationship between computer science and software engineering.

1.2 HOW SUCCESSFUL HAVE WE BEEN?

Writing software is an art as well as a science, and it is important for you as a student of computer science to understand why. Computer scientists and software engineering researchers study computer mechanisms and theorize about how to make them more productive or efficient. However, they also design computer systems and write programs to perform tasks on those systems, a practice that involves a great deal of art, ingenuity, and skill. There may be many ways to perform a particular task on a particular system, but some are better than others. One way may be more efficient, more precise, easier to modify, easier to use, or easier to understand. Any hacker can write code to make something work, but it takes the skill and understanding of a professional software engineer to produce code that is robust, easy to understand and maintain, and does its job in the most efficient and effective way possible. Consequently, software engineering is about designing and developing high-quality software.

Before we examine what is needed to produce quality software systems, let us look back to see how successful we have been. Are users happy with their existing software systems? Yes and no. Software has enabled us to perform tasks more quickly and effectively than ever before. Consider life before word processing, spreadsheets, electronic mail, or sophisticated telephony, for example. And software has supported life-sustaining or life-saving advances in medicine, agriculture, transportation, and most other industries. In addition, software has enabled us to do things that were never imagined in the past: microsurgery, multimedia education, robotics, and more.


However, software is not without its problems. Often systems function, but not exactly as expected. We all have heard stories of systems that just barely work. And we all have written faulty programs: code that contains mistakes, but is good enough for a passing grade or for demonstrating the feasibility of an approach. Clearly, such behavior is not acceptable when developing a system for delivery to a customer.

There is an enormous difference between an error in a class project and one in a large software system. In fact, software faults and the difficulty in producing fault-free software are frequently discussed in the literature and in the hallways. Some faults are merely annoying; others cost a great deal of time and money. Still others are life-threatening. Sidebar 1.1 explains the relationships among faults, errors, and failures. Let us look at a few examples of failures to see what is going wrong and why.

SIDEBAR 1.1 TERMINOLOGY FOR DESCRIBING BUGS

Often, we talk about "bugs" in software, meaning many things that depend on the context. A "bug" can be a mistake in interpreting a requirement, a syntax error in a piece of code, or the (as-yet-unknown) cause of a system crash. The Institute of Electrical and Electronics Engineers (IEEE) has suggested a standard terminology (in IEEE Standard 729) for describing "bugs" in our software products (IEEE 1983).

A fault occurs when a human makes a mistake, called an error, in performing some software activity. For example, a designer may misunderstand a requirement and create a design that does not match the actual intent of the requirements analyst and the user. This design fault is an encoding of the error, and it can lead to other faults, such as incorrect code and an incorrect description in a user manual. Thus, a single error can generate many faults, and a fault can reside in any development or maintenance product.

A failure is a departure from the system's required behavior. It can be discovered before or after system delivery, during testing, or during operation and maintenance. As we will see in Chapter 4, the requirements documents can contain faults. So a failure may indicate that the system is not performing as required, even though it may be performing as specified.

Thus, a fault is an inside view of the system, as seen by the eyes of the developers, whereas a failure is an outside view: a problem that the user sees. Not every fault corresponds to a failure; for example, if faulty code is never executed or a particular state is never entered, then the fault will never cause the code to fail. Figure 1.4 shows the genesis of a failure.

[Figure: a human error can lead to a fault, which can lead to a failure.]
FIGURE 1.4 How human error causes a failure.
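To make the fault/failure distinction concrete, here is a minimal sketch of our own (not drawn from the IEEE standard or from the book): a function whose text contains a fault, and which fails only on executions that make the departure from required behavior visible.

    def average(values):
        """Return the arithmetic mean of a list of numbers."""
        total = 0
        for v in values:
            total += v
        # Fault: the developer's error (an off-by-one mistake) is encoded
        # here; the divisor should be len(values).
        return total / (len(values) - 1)

    print(average([2, 4, 6]))  # prints 6.0 instead of 4.0 -- a failure:
                               # the observed behavior departs from required behavior
    print(average([-3, 3]))    # prints 0.0, which happens to be correct:
                               # the fault executed, yet no failure was observed

The fault is a property of the program text; a failure exists only in executions that expose it, which is why testing alone can leave faults undiscovered.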


In the early 1980s, the United States Internal Revenue Service (IRS) hired Sperry Corporation to build an automated federal income tax form processing system. According to the Washington Post, the "system . . . proved inadequate to the workload, cost nearly twice what was expected and must be replaced soon" (Sawyer 1985). In 1985, an extra $90 million was needed to enhance the original $103 million worth of Sperry equipment. In addition, because the problem prevented the IRS from returning refunds to taxpayers by the deadline, the IRS was forced to pay $40.2 million in interest and $22.3 million in overtime wages for its employees who were trying to catch up. In 1996, the situation had not improved. The Los Angeles Times reported on March 29 that there was still no master plan for the modernization of IRS computers, only a 6000-page technical document. Congressman Jim Lightfoot called the project "a $4-billion fiasco that is floundering because of inadequate planning" (Vartabedian 1996).

Situations such as these still occur. In the United States, the Federal Bureau of Investigation's (FBI's) Trilogy project attempted to upgrade the FBI's computer systems. The results were devastating: "After more than four years of hard work and half a billion dollars spent, however, Trilogy has had little impact on the FBI's antiquated case-management system, which today remains a morass of mainframe green screens and vast stores of paper records" (Knorr 2005). Similarly, in the United Kingdom, the cost of overhauling the National Health Service's information systems was double the original estimate (Ballard 2006). We will see in Chapter 2 why project planning is essential to the production of quality software.

For many years, the public accepted the infusion of software in their daily lives with little question. But President Reagan's proposed Strategic Defense Initiative (SDI) heightened the public's awareness of the difficulty of producing a fault-free software system. Popular newspaper and magazine reports (such as Jacky 1985; Parnas 1985; Rensburger 1985) expressed skepticism in the computer science community. And now, years later, when the U.S. Congress is asked to allocate funds to build a similar system, many computer scientists and software engineers continue to believe there is no way to write and test the software to guarantee adequate reliability.

For example, many software engineers think that an antiballistic-missile system would require at least 10 million lines of code; some estimates range as high as one hundred million. By comparison, the software supporting the American space shuttle consists of 3 million lines of code, including computers on the ground controlling the launch and the flight; there were 100,000 lines of code in the shuttle itself in 1985 (Rensburger 1985). Thus, an antimissile software system would require the testing of an enormous amount of code. Moreover, the reliability constraints would be impossible to test. To see why, consider the notion of safety-critical software. Typically, we say that something that is safety-critical (i.e., something whose failure poses a threat to life or health) should have a reliability of at least 10⁻⁹. As we shall see in Chapter 9, this terminology means that the system can fail no more often than once in 10⁹ hours of operation. To observe this degree of reliability, we would have to run the system for at least 10⁹ hours to verify that it does not fail. But 10⁹ hours is over 114,000 years, far too long as a testing interval!
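As a quick sanity check of that arithmetic (a sketch of our own, assuming round-the-clock operation and a 365-day year):

    HOURS_PER_YEAR = 24 * 365       # 8,760 hours of continuous operation
    hours_to_observe = 10 ** 9      # at most one failure per 10^9 hours
    years = hours_to_observe / HOURS_PER_YEAR
    print(f"{years:,.0f} years")    # about 114,155 years

which confirms the figure quoted above: well over 114,000 years of observation.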

We will also see in Chapter 9 that helpful technology can become deadly when software is improperly designed or programmed. For example, the medical community was aghast when the Therac-25, a radiation therapy and X-ray machine, malfunctioned and killed several patients. The software designers had not anticipated the use of several arrow keys in nonstandard ways; as a consequence, the software retained its high settings and issued a highly concentrated dose of radiation when low levels were intended (Leveson and Turner 1993).

Similar examples of unanticipated use and its dangerous consequences are easy to find. For example, recent efforts to use off-the-shelf components (as a cost savings measure instead of custom-crafting of software) result in designs that use components in ways not intended by the original developers. Many licensing agreements explicitly point to the risks of unanticipated use: "Because each end-user system is customized and differs from utilized testing platforms and because a user or application designer may use the software in combination with other products in a manner not evaluated or contemplated by [the vendor] or its suppliers, the user or application designer is ultimately responsible for verifying and validating the [software]" (Lookout Direct n.d.).

Unanticipated use of the system should be considered throughout software design activities. These uses can be handled in at least two ways: by stretching your imagination to think of how the system can be abused (as well as used properly), and by assuming that the system will be abused and designing the software to handle the abuses. We discuss these approaches in Chapter 8.

Although many vendors strive for zero-defect software, in fact most software products are not fault-free. Market forces encourage software developers to deliver products quickly, with little time to test thoroughly. Typically, the test team will be able to test only those functions most likely to be used, or those that are most likely to endanger or irritate users. For this reason, many users are wary of installing the first version of code, knowing that the bugs will not be worked out until the second version. Furthermore, the modifications needed to fix known faults are sometimes so difficult to make that it is easier to rewrite a whole system than to change existing code. We will investigate the issues involved in software maintenance in Chapter 11.

In spite of some spectacular successes and the overall acceptance of software as a fact of life, there is still much room for improvement in the quality of the software we produce. For example, lack of quality can be costly; the longer a fault goes undetected, the more expensive it is to correct. In particular, the cost of correcting an error made during the initial analysis of a project is estimated to be only one-tenth the cost of correcting a similar error after the system has been turned over to the customer. Unfortunately, we do not catch most of the errors early on. Half of the cost of correcting faults found during testing and maintenance comes from errors made much earlier in the life of a system. In Chapters 12 and 13, we will look at ways to evaluate the effectiveness of our development activities and improve the processes to catch mistakes as early as possible.

One of the simple but powerful techniques we will propose is the use of review and inspection. Many students are accustomed to developing and testing software on their own. But their testing may be less effective than they think. For example, Fagan studied the way faults were detected. He discovered that testing a program by running it with test data revealed only about a fifth of the faults located during systems development. However, peer review, the process whereby colleagues examine and comment on each other's designs and code, uncovered the remaining four out of five faults found (Fagan 1986). Thus, the quality of your software can be increased dramatically just by having your colleagues review your work. We will learn more in later chapters about how the review and inspection processes can be used after each major development step to find and fix faults as early as possible. And we will see in Chapter 13 how to improve the inspection process itself.

1.3 WHAT IS GOOD SOFTWARE?

Just as manufacturers look for ways to ensure the quality of the products they produce, so too must software engineers find methods to ensure that their products are of acceptable quality and utility. Thus, good software engineering must always include a strategy for producing quality software. But before we can devise a strategy, we must understand what we mean by quality software. Sidebar 1.2 shows us how perspective influences what we mean by "quality." In this section, we examine what distinguishes good software from bad.

SIDEBAR 1.2 PERSPECTIVES ON QUALITY

Garvin (1984) discusses how different people perceive quality. He describes quality from five different perspectives:
• the transcendental view, where quality is something we can recognize but not define
• the user view, where quality is fitness for purpose
• the manufacturing view, where quality is conformance to specification
• the product view, where quality is tied to inherent product characteristics
• the value-based view, where quality depends on the amount the customer is willing to pay for it

The transcendental view is much like Plato's description of the ideal or Aristotle's con-

cept o f form. In other words, just as every actual table is an approximation of an ideal table,

we can think of software quality as an ideal toward which we strive; however, we may never

be able to implement it comple tely. The transcendental view is ethereal, in contrast to the more concrete view of the user.

We take a user view when we measure product characteristics, such as defect density or relia-

bility, in order to understand the overall product quality.

The ma nufacturing view looks at quality during production and after delivery. In partic-

ular, it examines whether the product was built right the first time, avoiding costly rework to

fix delivered faults. Thus, the manufacturing view is a process view, advocating conformance to good process. However, there is little evidence that conformance to process actually results

in products with fewer faults and failures; process may indeed lead to h igh-quality products,

but it may possibly institutionalize the production of mediocre products. We examine some of

these issues in Chapter 12.

The user and manufacturing views look at the product from the o utside, b ut the product

view peers inside and evaluates a product's inherent characte ristics. This view is the one oft.en

advocated by software metrics experts; they assume that good internal quality indicators will

lead to good external ones, such as reliability and maintainability. However, more research is

10 Chapter 1 Why Software Engineering?

needed to verify these assumptions and to determine which aspects of quality affect the prod-

uct's actual use. We may have to develop models that link the product view to the user view. Customers or marketers often take a user view of quality. Researchers sometimes hold a

product view, and the development team has a manufacturing view. if the differences in view-

points are not made explicit, then confusion and misunderstanding can lead to bad decisions and poor products. The value-based view can link these disparate pictures of quality. By

equating quality to what the customer is willing to pay, we can look at trade-offs between cost

and quality, and we can manage conflicts when they arise. Similarly, purchasers compare product costs with potential benefits, thinking of quality as value for money.

Kitchenham and Pfleeger (1996) investigated the answer to this question in their introduction to a special issue of IEEE Software on quality. They note that the context helps to determine the answer. Faults tolerated in word processing software may not be acceptable in safety-critical or mission-critical systems. Thus, we must consider quality in at least three ways: the quality of the product, the quality of the process that results in the product, and the quality of the product in the context of the business environment in which the product will be used.

The Quality of the Product

We can ask people to name the characteristics of software that contribute to its overall quality, but we are likely to get different answers from each person we ask. This difference occurs because the importance of the characteristics depends on who is analyzing the software. Users judge software to be of high quality if it does what they want in a way that is easy to learn and easy to use. However, sometimes quality and functionality are intertwined; if something is hard to learn or use but its functionality is worth the trouble, then it is still considered to have high quality.

We try to measure software quality so that we can compare one product with another. To do so, we identify those aspects of the system that contribute to its overall quality. Thus, when measuring software quality, users assess such external characteristics as the number of failures and type of failures. For example, they may define failures as minor, major, and catastrophic, and hope that any failures that occur are only minor ones.
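For instance, such a severity scheme might be tallied as in the following minimal sketch; the severity names come from the text, but the code and the data are illustrative:

```python
# A minimal sketch (illustrative, not from the text) of tallying observed
# failures by the severity classes named above.

from collections import Counter
from enum import Enum

class Severity(Enum):
    MINOR = 1
    MAJOR = 2
    CATASTROPHIC = 3

observed = [Severity.MINOR, Severity.MINOR, Severity.MAJOR]
tally = Counter(observed)
print(tally[Severity.MINOR], tally[Severity.CATASTROPHIC])   # 2 0
```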

The software must also be judged by those who are designing and writing the code and by those who must maintain the programs after they are written. These practitioners tend to look at internal characteristics of the products, sometimes even before the product is delivered to the user. In particular, practitioners often look at numbers and types of faults for evidence of a product's quality (or lack of it). For example, developers track the number of faults found in requirements, design, and code inspections and use them as indicators of the likely quality of the final product.

For this reason, we often build models to relate the user's external view to the developer's internal view of the software. Figure 1.5 is an example of an early quality model built by McCall and his colleagues to show how external quality factors (on the left-hand side) relate to product quality criteria (on the right-hand side). McCall associated each right-hand criterion with a measurement to indicate the degree to which an element of quality was addressed (McCall, Richards, and Walters 1977). We will examine several product quality models in Chapter 12.

[Figure: external quality factors (correctness, reliability, integrity, usability, maintainability, testability, interoperability), each linked to the product quality criteria that characterize it.]

FIGURE 1.5 McCall's quality model.
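To make the factor-to-criteria idea concrete, such a model can be represented as a simple lookup from external factors to measurable criteria. The sketch below is illustrative only; the criteria shown are a plausible subset, not McCall's complete mapping:

```python
# A sketch of a quality model as data: external factors (the user's view)
# map to internal criteria (the developer's view), each measurable on the
# product. The mapping below is an illustrative subset, not the full model.

QUALITY_MODEL = {
    "correctness": ["traceability", "completeness", "consistency"],
    "reliability": ["error tolerance", "consistency", "accuracy"],
    "usability":   ["operability", "training", "communicativeness"],
}

def criteria_for(factor):
    """Which measurable criteria contribute to an external quality factor?"""
    return QUALITY_MODEL[factor]

print(criteria_for("reliability"))   # ['error tolerance', 'consistency', 'accuracy']
```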

The Quality of the Process

There are many activities that affect the ultimate product quality; if any of the activities go awry, the product quality may suffer. For this reason, many software engineers feel that the quality of the development and maintenance process is as important as product quality. One of the advantages of modeling the process is that we can examine it and look for ways to improve it. For example, we can ask questions such as:

• Where and when are we likely to find a particular kind of fault?
• How can we find faults earlier in the development process?
• How can we build in fault tolerance so that we minimize the likelihood that a fault will become a failure?
• How can we design secure, high-quality systems?
• Are there alternative activities that can make our process more effective or efficient at ensuring quality?

These questions can be applied to the whole development process or to a subprocess, such as configuration management, reuse, or testing; we will investigate these processes in later chapters.

In the 1990s, there was a well-publicized focus on process modeling and process improvement in software engineering. Inspired by the work of Deming and Juran, and implemented by companies such as IBM, process guidelines such as the Capability Maturity Model (CMM), ISO 9000, and Software Process Improvement and Capability dEtermination (SPICE) suggested that by improving the software development process, we can improve the quality of the resulting products. In Chapter 2, we will see how to identify relevant process activities and model their effects on intermediate and final products. Chapters 12 and 13 will examine process models and improvement frameworks in depth.

Quality in the Context of the Business Environment

When the focus of quality assessment is on products and processes, we usually measure quality with mathematical expressions involving faults, failures, and timing. Rarely is the scope broadened to include a business perspective, where quality is viewed in terms of the products and services being provided by the business in which the software is embedded. That is, we look at the technical value of our products, rather than more broadly at their business value, and we make decisions based only on the resulting products' technical quality. In other words, we assume that improving technical quality will automatically translate into business value.

Several researchers have taken a close look at the relationships between business value and technical value. For example, Simmons interviewed many Australian businesses to determine how they make their information technology-related business decisions. She proposes a framework for understanding what companies mean by "business value" (Simmons 1996). In a report by Favaro and Pfleeger (1997), Steve Andriole, chief information officer for Cigna Corporation, a large U.S. insurance company, described how his company distinguishes technical value from business value:

We measure the quality [of our software] by the obvious metrics: up versus down time, maintenance costs, costs connected with modifications, and the like. In other words, we manage development based on operational performance within cost parameters. How the vendor provides cost-effective performance is less of a concern than the results of the effort. ... The issue of business versus technical value is near and dear to our heart ... and one [on] which we focus a great deal of attention. I guess I am surprised to learn that companies would contract with companies for their technical value, at the relative expense of business value. If anything, we err on the other side! If there is not clear (expected) business value (expressed quantitatively: number of claims processed, etc.) then we can't launch a systems project. We take very seriously the "purposeful" requirement phase of the project, when we ask: "why do we want this system?" and "why do we care?"

There have been several attempts to relate technical value and business value in a quantitative and meaningful way. For example, Humphrey, Snyder, and Willis (1991) note that by improving its development process according to the CMM "maturity" scale (to be discussed in Chapter 12), Hughes Aircraft improved its productivity by 4 to 1 and saved millions of dollars. Similarly, Dion (1993) reported that Raytheon's twofold increase in productivity was accompanied by a $7.70 return on every dollar invested in process improvement. And personnel at Tinker Air Force Base in Oklahoma noted a productivity improvement of 6.35 to 1 (Lipke and Butler 1992).

However, Brodman and Johnson (1995) took a closer look at the business value of process improvement. They surveyed 33 companies that performed some kind of process improvement activities, and examined several key issues. Among other things, Brodman and Johnson asked companies how they defined return on investment (ROI), a concept that is clearly defined in the business community. They note that the textbook definition of return on investment, derived from the financial community, describes the investment in terms of what is given up for other purposes. That is, the "investment must not only return the original capital but enough more to at least equal what the funds would have earned elsewhere, plus an allowance for risk" (Putnam and Myers 1992). Usually, the business community uses one of three models to assess ROI: a payback model, an accounting rate-of-return model, and a discounted cash flow model.
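To see how the three models differ, consider this minimal sketch; the functions and the cash-flow figures are illustrative, not drawn from Brodman and Johnson's survey:

```python
# A minimal sketch (illustrative figures, not survey data) of the three
# business ROI models named above.

def payback_period(investment, annual_savings):
    # Payback model: years until cumulative savings repay the investment.
    return investment / annual_savings

def accounting_rate_of_return(investment, annual_savings, years):
    # Accounting rate-of-return model: average annual profit as a
    # fraction of the initial investment.
    average_annual_profit = (annual_savings * years - investment) / years
    return average_annual_profit / investment

def net_present_value(investment, cash_flows, discount_rate):
    # Discounted cash flow model: future savings are worth less than
    # money in hand today.
    npv = -investment
    for year, cash in enumerate(cash_flows, start=1):
        npv += cash / (1 + discount_rate) ** year
    return npv

# Invest $200,000 in process improvement; save $80,000 per year for 5 years.
print(payback_period(200_000, 80_000))                 # 2.5 years
print(accounting_rate_of_return(200_000, 80_000, 5))   # 0.2, i.e., 20% per year
print(net_present_value(200_000, [80_000] * 5, 0.10))  # about 103,263 at 10%
```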

However, Brodman and Johnson (1995) found that the U.S. government and U.S. industry interpret ROI in very different ways, each different from the other, and both different from the standard business school approaches. The government views ROI in terms of dollars, looking at reducing operating costs, predicting dollar savings, and calculating the cost of employing new technologies. Government investments are also expressed in dollars, such as the cost of introducing new technologies or process improvement initiatives.

On the other hand, industry viewed investment in terms of effort, rather than cost or dollars. That is, companies were interested in saving time or using fewer people, and their definition of return on investment reflected this focus on decreasing effort. Among the companies surveyed, return on investment included such items as

• training
• schedule
• risk
• quality
• productivity
• process
• customer
• costs
• business

"The cost issues included in the definition involve meeting cost predictions, improving cost performance, and staying within budget, rather than reducing operating costs or streamlining the project or organization. Figure 1.6 summarizes the frequency with which many organizations included an investment item in their definition of RO I. For example, about 5 percent of those interviewed included a quality group's effort in the ROI effort calculation, and approximately 35 percent included soflware costs when considering numbe r of dollars invested.

The difference in views is disturbing, because it means that calculations of ROI cannot be compared across organizations. But there are good reasons for these differing views. Dollar savings from reduced schedule, higher quality, and increased productivity are returned to the government rather than the contractor. On the other hand, contractors are usually looking for a competitive edge and increased work capacity as well as greater profit; thus, the contractor's ROI is more effort- than cost-based. In particular, more accurate cost and schedule estimation can mean customer satisfaction and repeat business. And decreased time to market as well as improved product quality are perceived as offering business value, too.

[Figure: a bar chart showing the percentage of interviewees (0% to 70%) who included each investment item (facilities, software costs, hardware costs, materials, assessments, SCE costs, internal R&D, process, documentation, quality group effort, software process group effort, and general costs) in their definition of ROI.]

FIGURE 1.6 Terms included in industry definition of return on investment.

Even though the different ROI calculations can be justified for each organization, it is worrying that software technology return on investment is not the same as financial ROI. At some point, program success must be reported to higher levels of management, many of which are related not to software but to the main company business, such as telecommunications or banking. Much confusion will result from the use of the same terminology to mean vastly different things. Thus, our success criteria must make sense not only for software projects and processes, but also for the more general business practices they support. We will examine this issue in more detail in Chapter 12 and look at using several common measures of business value to choose among technology options.

1.4 WHO DOES SOFTWARE ENGINEERING?

A key component of software development is communication between customer and developer; if that fails, so too will the system. We must understand what the customer wants and needs before we can build a system to help solve the customer's problem. To do this, let us turn our attention to the people involved in software development.

The number of people working on software development depends on the project's size and degree of difficulty. However, no matter how many people are involved, the roles played throughout the life of the project can be distinguished. Thus, for a large project, one person or a group may be assigned to each of the roles identified; on a small project, one person or group may take on several roles at once.

Usually, the participants in a project fall into one of three categories: customer, user, or developer. The customer is the company, organization, or person who is paying for the software system to be developed. The developer is the company, organization, or person who is building the software system for the customer. This category includes any managers needed to coordinate and guide the programmers and testers. The user is the person or people who will actually use the system: the ones who sit at the terminal or submit the data or read the output. Although for some projects the customer, user, and developer are the same person or group, often these are different sets of people. Figure 1.7 shows the basic relationships among the three types of participants.

[Figure: the customer, who sponsors system development; the developer, who builds the software system to meet the customer's needs; and the user, who uses the delivered system.]

FIGURE 1.7 Participants in software development.

The customer, being in control of the funds, usually negotiates the contract and signs the acceptance papers. However, sometimes the customer is not a user. For example, suppose Wittenberg Water Works signs a contract with Gentle Systems, Inc., to build a computerized accounting system for the company. The president of Wittenberg may describe to the representatives of Gentle Systems exactly what is needed, and she will sign the contract. However, the president will not use the accounting system directly; the users will be the bookkeepers and accounting clerks. Thus, it is important that the developers understand exactly what both the customer and users want and need.

On the other hand, suppose Wittenberg Water Works is so large that it has its own computer systems development division. The division may decide that it needs an automated tool to keep track of its own project costs and schedules. By building the tool itself, the division is at the same time the user, customer, and developer.

In recent years, the simple distinctions among customer, user, and developer have become more complex. Customers and users have been involved in the development process in a variety of ways. The customer may decide to purchase Commercial Off-The-Shelf (COTS) software to be incorporated in the final product that the developer will supply and support. When this happens, the customer is involved in system architecture decisions, and there are many more constraints on development. Similarly, the developer may choose to use additional developers, called subcontractors, who build a subsystem and deliver it to the developers to be included in the final product. The subcontractors may work side by side with the primary developers, or they may work at a different site, coordinating their work with the primary developers and delivering the subsystem late in the development process. The subsystem may be a turnkey system, where the code is incorporated whole (without additional code for integration), or it may require a separate integration process for building the links from the major system to the subsystem(s).

Thus, the notion of "system" is important in software engineering, not only for understanding the problem analysis and solution synthesis, but also for organizing the development process and for assigning appropriate roles to the participants. In the next section, we look at the role of a systems approach in good software engineering practice.

1.5 A SYSTEMS APPROACH

The projects we develop do not exist in a vacuum. Often, the hardware and software we put together must interact with users, with other software tasks, with other pieces of hardware, with existing databases (i.e., with carefully defined sets of data and data relationships), or even with other computer systems. Therefore, it is important to provide a context for any project by knowing the boundaries of the project: what is included in the project and what is not. For example, suppose you are asked by your supervisor to write a program to print paychecks for the people in your office. You must know whether your program simply reads hours worked from another system and prints the results or whether you must also calculate the pay information. Similarly, you must know whether the program is to calculate taxes, pensions, and benefits or whether a report of these items is to be provided with each paycheck. What you are really asking is: Where does the project begin and end? The same question applies to any system. A system is a collection of objects and activities, plus a description of the relationships that tie the objects and activities together. Typically, our system definition includes, for each activity, a list of inputs required, actions taken, and outputs produced. Thus, to begin, we must know whether any object or activity is included in the system or not.

The Elements of a System

We describe a system by naming its parts and then identifying how the component parts are related to one another. This identification is the first step in analyzing the problem presented to us.

Activities and Objects. First, we distinguish between activities and objects. An activity is something that happens in a system. Usually described as an event initiated by a trigger, the activity transforms one thing to another by changing a characteristic. This transformation can mean that a data element is moved from one location to another, is changed from one value to another, or is combined with other data to supply input for yet another activity. For example, an item of data can be moved from one file to another. In this case, the characteristic changed is the location. Or the value of the data item can be incremented. Finally, the address of the data item can be included in a list of parameters with the addresses of several other data items so that another routine can be called to handle all the data at once.

The elements involved in the activities are called objects or entities. Usually, these objects are related to each other in some way. For instance, the objects can be arranged in a table or matrix. Often, objects are grouped as records, where each record is arranged in a prescribed format. An employee history record, for example, may contain objects (called fields) for each employee, such as the following:

First name
Middle name
Last name
Street address
City
State
Postal code
Salary per hour
Benefits per hour
Vacation hours accrued
Sick leave accrued

Not only is each field in the record defined, but the size and relationship of each field to the others are named. Thus, the record description states the data type of each field, the starting location in the record, and the length of the field. In turn, since there is a record for each employee, the records are combined into a file, and file characteristics (such as maximum number of records) may be specified.
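Such a record description translates naturally into code. The following sketch uses a few of the field names from the example above, with illustrative starting locations and lengths:

```python
# A minimal sketch of a fixed-format record description: each field has a
# data type, a starting location, and a length. The layout values here are
# illustrative, not from the text.

from dataclasses import dataclass

# (field name, starting location, length) for a few of the fields
EMPLOYEE_LAYOUT = [
    ("first_name", 0, 15),
    ("last_name", 15, 20),
    ("postal_code", 35, 10),
    ("salary_per_hour", 45, 8),
]

@dataclass
class EmployeeRecord:
    first_name: str
    last_name: str
    postal_code: str
    salary_per_hour: float

def parse_record(line: str) -> EmployeeRecord:
    """Slice one fixed-width line into typed fields."""
    raw = {name: line[start:start + length].strip()
           for name, start, length in EMPLOYEE_LAYOUT}
    raw["salary_per_hour"] = float(raw["salary_per_hour"])
    return EmployeeRecord(**raw)

# A file is then a sequence of such records, one per employee.
line = "Ada".ljust(15) + "Lovelace".ljust(20) + "02138".ljust(10) + "42.50".ljust(8)
print(parse_record(line))
```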

Sometimes, the objects are defined slightly differently. Instead of considering each item as a field in a larger record, the object is viewed as being independent. The object description contains a listing of the characteristics of each object, as well as a list of all the actions that can take place using the object or affecting the object. For example, consider the object "polygon." An object description may say that this object has characteristics such as number of sides and length of each side. The actions may include calculation of the area or of the perimeter. There may even be a characteristic called "polygon type," so that each instantiation of "polygon" is identified when it is a "rhombus" or "rectangle," for instance. A type may itself have an object description; "rectangle" may be composed of types "square" and "not square," for example. We will explore these concepts in Chapter 4 when we investigate requirements analysis, and in depth in Chapter 6 when we discuss object-oriented development.
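As a preview of that style, here is a minimal sketch of the "polygon" object description; the names mirror the example above, but the code itself is illustrative rather than the book's notation:

```python
# An object description in code: characteristics of the object, plus the
# actions that use it. Names follow the polygon example above.

class Polygon:
    """Characteristics: number of sides and length of each side."""
    def __init__(self, side_lengths):
        self.side_lengths = side_lengths
        self.number_of_sides = len(side_lengths)

    def perimeter(self):          # an action that uses the object
        return sum(self.side_lengths)

class Rectangle(Polygon):
    """A polygon type, itself composed of "square" and "not square"."""
    def __init__(self, width, height):
        super().__init__([width, height, width, height])
        self.width, self.height = width, height

    def area(self):               # another action
        return self.width * self.height

    @property
    def polygon_type(self):
        return "square" if self.width == self.height else "not square"

r = Rectangle(3, 4)
print(r.number_of_sides, r.perimeter(), r.area(), r.polygon_type)
# 4 14 12 not square
```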

Relationships and the System Boundary. Once entities and activities are defined, we match the entities with their activities. The relationships among entities and activities are clearly and carefully defined. An entity definition includes a description of where the entity originates. Some items reside in files that already exist; others are created during some activity. The entity's destination is important, too. Some items are used by only one activity, but others are destined to be input to other systems. That is, some items from one system are used by activities outside the scope of the system being examined. Thus, we can think of the system at which we are looking as having a border or boundary. Some items cross the boundary to enter our system, and others are products of our system and travel out for another system's use.

Using these concepts, we can define a system as a collection of things: a set of entities, a set of activities, a description of the relationships among entities and activities, and a definition of the boundary of the system. This definition of a system applies not only to computer systems but to anything in which objects interact in some way with other objects.
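A minimal sketch shows how such a definition might be captured as data, using the paycheck example from earlier in this section; the entity and activity names are illustrative:

```python
# A minimal sketch (illustrative names) of a system definition as data:
# entities, activities with inputs/actions/outputs, and a boundary saying
# what crosses in and out.

system = {
    "entities": {"pay information", "validated data", "paycheck"},
    "activities": {
        # name: (inputs, action, outputs)
        "validate": (["pay information"], "check hours and rates", ["validated data"]),
        "print": (["validated data"], "format and print checks", ["paycheck"]),
    },
    "boundary": {
        "crossing_in": ["pay information"],   # e.g., from a payroll system
        "crossing_out": ["paycheck"],         # e.g., to the mail room
    },
}

def in_system(item: str) -> bool:
    """To begin, we must know whether an object or activity is in the system."""
    return item in system["entities"] or item in system["activities"]

print(in_system("validate"), in_system("tax tables"))   # True False
```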

Examples of Systems. To see how system definition works, consider the parts of you that allow you to take in oxygen and excrete carbon dioxide and water: your respiratory system. You can define its boundary easily: If you name a particular organ of your body, you can say whether or not it is part of your respiratory system. Molecules of oxygen and carbon dioxide are entities or objects moving through the system in ways that are clearly defined. We can also describe the activities in the system in terms of the interactions of the entities. If necessary, we can illustrate the system by showing what enters and leaves it; we can also supply tables to describe all entities and the activities in which they are involved. Figure 1.8 illustrates the respiratory system. Note that each activity involves the entities and can be defined by describing which entities act as input, how they are processed, and what is produced (output).

FIGURE 1.8 Respiratory system. [Entities: particulate matter, oxygen, carbon dioxide, water, nitrogen, nose, mouth, trachea, bronchial tubes, lungs, alveoli. Activities: inhale gases, filter gases, transfer molecules to and from blood, exhale gases.]

We must describe our computer systems clearly, too. We work with prospective users to define the boundary of the system: Where does our work start and stop? In addition, we need to know what is on the boundary of the system and thus determine the origins of the input and destinations of the output. For example, in a system that prints paychecks, pay information may come from the company's payroll system. The system output may be a set of paychecks sent to the mail room to be delivered to the appropriate recipients. In the system shown in Figure 1.9, we can see the boundary and can understand the entities, the activities, and their relationships.

Interrelated Systems

The concept of boundary is important, because very few systems are independent of other systems. For example, the respiratory system must interact with the digestive system, the circulatory system, the nervous system, and others. The respiratory system could not function without the nervous system; neither could the circulatory system function without the respiratory system. The interdependencies may be complex. (Indeed, many of our environmental problems arise and are intensified because we do not appreciate the complexity of our ecosystem.) However, once the boundary of a system is described, it is easier for us to see what is within and without and what crosses the boundary.

[Figure: the system boundary encloses the computer and activities such as data validation and printing; pay information crosses the boundary into the system, and paychecks cross it out.]

FIGURE 1.9 System definition of paycheck production.

In turn, it is possible for one system to exist inside another system. When we describe a computer system, we often concentrate on a small piece of what is really a much larger system. Such a focus allows us to define and build a much less complex system than the enveloping one. If we are careful in documenting the interactions among and between systems affecting ours, we lose nothing by concentrating on this smaller piece of a larger system.

Let us look at an example of how this can be done. Suppose we are developing a water-monitoring system where data are gathered at many points throughout a river valley. At the collection sites, several calculations are done, and the results are communicated to a central location for comprehensive reporting. Such a system may be implemented with a computer at the central site communicating with several dozen smaller computers at the remote locations. Many system activities must be considered, including the way the water data are gathered, the calculations performed at the remote locations, the communication of information to the central site, the storage of the communicated data in a database or shared data file, and the creation of reports from the data. We can view this system as a collection of systems, each with a special purpose. In particular, we can consider only the communications aspect of the larger system and develop a communications system to transmit data from a set of remote sites to a central one. If we carefully define the boundary between the communications and the larger system, the design and development of the communications system can be done independently of the larger system.

The complexity of the entire water-monitoring system is much greater than the complexity of the communications system, so our treatment of separate, smaller pieces makes our job much simpler. If the boundary definitions are detailed and correct, building the larger system from the smaller ones is relatively easy. We can describe the building process by considering the larger system in layers, as illustrated in Figure 1.10 for our water-monitoring example. A layer is a system by itself, but each layer and those it contains also form a system. The circles of the figure represent the boundaries of the respective systems, and the entire set of circles incorporates the entire water-monitoring system.


FIGURE 1.10 Layers of a water-monitoring system.

Recognizing that one system contains another is important, because it reflects the fact that an object or activity in one system is part of every system represented by the outer layers. Since more complexity is introduced with each layer, understanding any one object or activity becomes more difficult with each more encompassing system. Thus, we maximize simplicity and our consequent understanding of the system by focusing on the smallest system possible at first.

We use this idea when building a system to replace an older version, either manual or automated. We want to understand as much as possible about how both the old and new systems work. Often, the greater the difference between the two systems, the more difficult the design and development. This difficulty occurs not only because people tend to resist change, but also because the difference makes learning difficult. In building or synthesizing our grand system, it helps dramatically to construct a new system as an incremental series of intermediate systems. Rather than going from system A to system B, we may be able to go from A to A' to A'' to B. For example, suppose A is a manual system consisting of three major functions, and B is to be an automated version of A. We can define system A' to be a new system with function 1 automated but functions 2 and 3 still manual. Then A'' has automated functions 1 and 2, but 3 is still manual. Finally, B has all three automated functions. By dividing the "distance" from A to B in thirds, we have a series of small problems that may be easier to handle than the whole.
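The progression can be made explicit, as in this minimal sketch (the three functions are hypothetical placeholders):

```python
# A minimal sketch of the A -> A' -> A'' -> B progression described above.

AUTOMATED, MANUAL = "automated", "manual"

STAGES = {
    "A": (MANUAL, MANUAL, MANUAL),            # the existing manual system
    "A'": (AUTOMATED, MANUAL, MANUAL),        # function 1 automated
    "A''": (AUTOMATED, AUTOMATED, MANUAL),    # functions 1 and 2 automated
    "B": (AUTOMATED, AUTOMATED, AUTOMATED),   # the target system
}

for name, functions in STAGES.items():
    done = sum(mode == AUTOMATED for mode in functions)
    print(f"{name}: {done} of 3 functions automated")
```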

In our example, the two systems are very similar; the functions are the same, but the style in which they are implemented differs. However, the target system is often vastly different from the existing one. In particular, it is usually desirable that the target be free of constraints imposed by existing hardware or software. An incremental development approach may incorporate a series of stages, each of which frees the previous system from another such constraint. For example, stage 1 may add a new piece of hardware, stage 2 may replace the software performing a particular set of functions, and so on. The system is slowly drawn away from old software and hardware until it reflects the new system design.

Thus, system development can first incorporate a set of changes to an actual system and then add a series of changes to generate a complete design scheme, rather than trying to jump from present to future in one move. With such an approach, we must view the system in two different ways simultaneously: statically and dynamically. The static view tells us how the system is working today, whereas the dynamic view shows us how the system is changing into what it will eventually become. One view is not complete without the other.

1.6 AN ENGINEERING APPROACH

Once we understand the system's nature, we are ready to begin its construction. At this point, the "engineering" part of software engineering becomes relevant and complements what we have done so far. Recall that we began this chapter by acknowledging that writing software is an art as well as a science. The art of producing systems involves the craft of software production. As artists, we develop techniques and tools that have proven helpful in producing useful, high-quality products. For instance, we may use an optimizing compiler as a tool to generate programs that run fast on the machines we are using. Or we can include special sort or search routines as techniques for saving time or space in our system. These software-based techniques are used just as techniques and tools are used in crafting a fine piece of furniture or in building a house. Indeed, a popular collection of programming tools is called the Programmer's Workbench, because programmers rely on them as a carpenter relies on a workbench.

Because building a system is similar to building a house, we can look to house building for other examples of why the "artistic" approach to software development is important.

Building a House

Suppose Chuck and Betsy Howell hire someone to build a house for them. Because of its size and complexity, a house usually requires more than one person on the construction team; consequently, the Howells hire McMullen Construction Company. The first event involved in the house building is a conference between the Howells and McMullen so the Howells can explain what they want. This conference explores not only what the Howells want the house to look like, but also what features are to be included. Then McMullen draws up floor plans and an architect's rendering of the house. After the Howells discuss the details with McMullen, changes are made. Once the Howells give their approval to McMullen, construction begins.

During the construction process, the Howells are likely to inspect the construction site, thinking of changes they would like. Several such changes may occur during construction, but eventually the house is completed. During construction and before the Howells move in, several components of the house are tested. For example, electricians test the wiring circuits, plumbers make sure that pipes do not leak, and carpenters adjust for variation in wood so that the floors are smooth and level. Finally, the Howells move in. If there is something that is not constructed properly, McMullen may be called in to fix it, but eventually the Howells become fully responsible for the house.

Let us look more closely at what is involved in this process. First, since many people are working on the house at the same time, documentation is essential. Not only are floor plans and the architect's drawings necessary, but details must be written down so that specialists such as plumbers and electricians can fit their products together as the house becomes a whole.

Second, it is unreasonable to expect the Howells to describe their house at the beginning of the process and walk away until the house is completed. Instead, the Howells may modify the house design several times during construction. These modifications may result from a number of situations:

• Materials that were specified initially are no longer available. For example, certain kinds of roof tiles may no longer be manufactured.

• The Howells may have new ideas as they see the house take shape. For example, the Howells might realize that they can add a skylight to the kitchen for little additional cost.

• Availability or financial constraints may require the Howells to change requirements in order to meet their schedule or budget. For example, the special windows that the Howells wanted to order will not be ready in time to complete the house by winter, so stock windows may be substituted.

• Items or designs initially thought possible might turn out to be infeasible. For example, soil percolation tests may reveal that the land surrounding the house cannot support the number of bathrooms that the Howells had originally requested.

McMullen may also recommend some changes after construction has begun, perhaps because of a better idea or because a key member of the construction crew is unavailable. And both McMullen and the Howells may change their minds about a feature of the house even after the feature is completed.

Third, McMullen must provide blueprints, wiring and plumbing diagrams, instruction manuals for the appliances, and any other documentation that would enable the Howells to make modifications or repairs after they move in.

We can summarize this construction process in the following way:

• determining and analyzing the requirements
• producing and documenting the overall design of the house
• producing detailed specifications of the house
• identifying and designing the components
• building each component of the house
• testing each component of the house
• integrating the components and making final modifications after the residents have moved in
• continuing maintenance by the residents of the house


We have seen how the participants must remain flexible and allow changes in the original specifications at various points during construction.

It is important to remember that the house is built within the context of the social, economic, and governmental structure in which it is to reside. Just as the water-monitoring system in Figure 1.10 depicted the dependencies of subsystems, we must think of the house as a subsystem in a larger scheme. For example, construction of a house is done in the context of the city or county building codes and regulations. The McMullen employees are licensed by the city or county, and they are expected to perform according to building standards. The construction site is visited by building inspectors, who make sure that the standards are being followed. And the building inspectors set standards for quality, with the inspections serving as quality assurance checkpoints for the building project. There may also be social or customary constraints that suggest common or acceptable behavior; for example, it is not customary to have the front door open directly to the kitchen or bedroom.

At the same time, we must recognize that we cannot prescribe the activities of building a house exactly; we must leave room for decisions based on experience, to deal with unexpected or nonstandard situations. For example, many houses are fashioned from preexisting components; doors are supplied already in the frame, bathrooms use pre-made shower stalls, and so on. But the standard house-building process may have to be altered to accommodate an unusual feature or request. Suppose that the framing is done, the drywall is up, the subfloor is laid, and the next step is putting down tile on the bathroom floor. The builders find, much to their dismay, that the walls and floor are not exactly square. This problem may not be the result of a poor process; houses are built from parts that have some natural or manufacturing variation, so problems of inexactitude can occur. The floor tile, being composed of small squares, will highlight the inexactitude if laid the standard way. It is here that art and expertise come into play. The builder is likely to remove the tiles from their backing, and lay them one at a time, making small adjustments with each one so that the overall variation is imperceptible to all but the most discerning eyes.

Thus, house building is a complex task with many opportunities for change in processes, products, or resources along the way, tempered by a healthy dose of art and expertise. The house-building process can be standardized, but there is always need for expert judgment and creativity.

Building a System

Software projects progress in a way similar to the house-building process. The Howells were the customers and users, and McMullen was the developer in our example. Had the Howells asked McMullen to build the house for Mr. Howell's parents to live in, the users, customers, and developer would have been distinct. In the same way, software development involves users, customers, and developers. If we are asked to develop a software system for a customer, the first step is meeting with the customer to determine the requirements. These requirements describe the system, as we saw before. Without knowing the boundary, the entities, and the activities, it is impossible to describe the software and how it will interact with its environment.

Once requirements are defined, we create a system design to meet the specified requirements. As we will see in Chapter 5, the system design shows the customer what the system will look like from the customer's perspective. Thus, just as the Howells looked at floor plans and architect's drawings, we present the customer with pictures of the video display screens that will be used, the reports that will be generated, and any other descriptions that will explain how users will interact with the completed system. If the system has manual backup or override procedures, those are described as well. At first, the Howells were interested only in the appearance and functionality of their house; it was not until later that they had to decide on such items as copper or plastic pipes. Likewise, the system design (also called architectural) phase of a software project describes only appearance and functionality.

The design is then reviewed by the customer. When approved, the overall system design is used to generate the designs of the individual programs involved. Note that it is not until this step that programs are mentioned. Until functionality and appearance are determined, it often makes no sense to consider coding. In our house example, we would now be ready to discuss types of pipe or quality of electrical wiring. We can decide on plastic or copper pipes because now we know where water needs to flow in the structure. Likewise, when the system design is approved by all, we are ready to discuss programs. The basis for our discussion is a well-defined description of the software project as a system; the system design includes a complete description of the functions and interactions involved.

When the programs have been written, they are tested as individual pieces of code before they can be linked together. This first phase of testing is called module or unit testing. Once we are convinced that the pieces work as desired, we put them together and make sure that they work properly when joined with others. This second testing phase is often referred to as integration testing, as we build our system by adding one piece to the next until the entire system is operational. The final testing phase, called system testing, involves a test of the whole system to make sure that the functions and interactions specified initially have been implemented properly. In this phase, the system is compared with the specified requirements; the developer, customer, and users check that the system serves its intended purpose.
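The three phases can be seen in miniature in the following sketch, which uses Python's unittest and a hypothetical pay-calculation module:

```python
# A minimal sketch of the three testing phases, using Python's unittest
# and a hypothetical pay-calculation module.

import unittest

def gross_pay(hours, rate):            # one unit (module) under test
    return hours * rate

def net_pay(hours, rate, tax_rate):    # integrates gross_pay with tax logic
    return gross_pay(hours, rate) * (1 - tax_rate)

class UnitTests(unittest.TestCase):
    def test_gross_pay_alone(self):
        # Module (unit) testing: one piece of code in isolation.
        self.assertEqual(gross_pay(40, 10.0), 400.0)

class IntegrationTests(unittest.TestCase):
    def test_pieces_joined(self):
        # Integration testing: the pieces work properly when combined.
        self.assertAlmostEqual(net_pay(40, 10.0, 0.25), 300.0)

class SystemTests(unittest.TestCase):
    def test_against_requirements(self):
        # System testing: compare behavior with a specified requirement,
        # e.g., "net pay is never negative."
        self.assertGreaterEqual(net_pay(0, 10.0, 0.25), 0.0)

if __name__ == "__main__":
    unittest.main()
```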

At last, the final product is delivered. As it is used, discrepancies and problems are uncovered. If ours is a turnkey system, the customer assumes responsibility for the system after delivery. Many systems are not turnkey systems, though, and the developer or other organization provides maintenance if anything goes wrong or if needs and requirements change.

Thus, development of software includes the following activities:

• requirements analysis and definition
• system design
• program design
• writing the programs (program implementation)
• unit testing
• integration testing
• system testing
• system delivery
• maintenance


In an ideal situation, the activities are performed one at a time; when you reach the end of the list, you have a completed software project. However, in reality, many of the steps are repeated. For example, in reviewing the system design, you and the customer may discover that some requirements have yet to be documented. You may work with the customer to add requirements and possibly redesign the system. Similarly, when writing and testing code, you may find that a device does not function as described by its documentation. You may have to redesign the code, reconsider the system design, or even return to a discussion with the customer about how to meet the requirements. For this reason, we define a software development process as any description of software development that contains some of the nine activities listed before, organized so that together they produce tested code. In Chapter 2, we will explore several of the different development processes that are used in building software. Subsequent chapters will examine each of the subprocesses and their activities, from requirements analysis through maintenance. But before we do, let us look at who develops software and how the challenge of software development has changed over the years.

1.7 MEMBERS OF THE DEVELOPMENT TEAM

Earlier in this chapter, we saw that customers, users, and developers play major roles in the definition and creation of the new product. The developers are software engineers, but each engineer may specialize in a particular aspect of development. Let us look in more detail at the role of the members of the development team.

The first step in any development process is finding out what the customer wants and documenting the requirements. As we have seen, analysis is the process of breaking things into their component parts so that we can understand them better. Thus, the development team includes one or more requirements analysts to work with the customer, breaking down what the customer wants into discrete requirements.

Once the requirements are known and documented, analysts work with designers to generate a system-level description of what the system is to do. In turn, the designers work with programmers to describe the system in such a way that programmers can write lines of code that implement what the requirements specify.

After the code is generated, it must be tested. Often, the first testing is done by the programmers themselves; sometimes, additional testers are also used to help catch faults that the programmers overlook. When units of code are integrated into functioning groups, a team of testers works with the implementation team to verify that as the system is built up by combining pieces, it works properly and according to specification.

When the development team is comfortable with the functionality and quality of the system, attention turns to the customer. The test team and customer work together to verify that the complete system is what the customer wants; they do this by comparing how the system works with the initial set of requirements. Then, trainers show users how to use the system.

For many software systems, acceptance by the customer does not mean the end of the developer's job. If faults are discovered after the system has been accepted, a maintenance team fixes them. Moreover, the customer's requirements may change as time passes, and corresponding changes to the system must be made. Thus, maintenance can involve analysts who determine what requirements are added or changed, designers to determine where in the system design the change should be made, programmers to implement the changes, testers to make sure that the changed system still runs properly, and trainers to explain to users how the change affects the use of the system. Figure 1.11 illustrates how the roles of the development team correspond to the steps of development.

Students often work by themselves or in small groups as a development team for class projects. The documentation requested by the instructor is minimal; students are usually not required to write a user manual or training documents. Moreover, the assignment is relatively stable; the requirements do not change over the life of the project. Finally, student-built systems are likely to be discarded at the end of the course; their purpose is to demonstrate ability but not necessarily to solve a problem for a real customer. Thus, program size, system complexity, need for documentation, and need for maintainability are relatively small for class projects.

However, for a real customer, the system size and complexity may be large and the need for documentation and maintainability great. For a project involving many thousands of lines of code and much interaction among members of the development team, control of the various aspects of the project may be difficult. To support everyone on the development team, several people may become involved with the system at the beginning of development and remain involved throughout.

[Figure: the development steps (requirements analysis and definition, system design, program design, program implementation, unit testing, integration testing, system testing, system delivery, and maintenance) annotated with the roles (analyst, designer, programmer, tester, trainer) responsible for each.]

FIGURE 1.11 The roles of the development team.

Librarians prepare and store documents that are used during the life of the system, including requirements specifications, design descriptions, program documentation, training manuals, test data, schedules, and more. Working with the librarians are the members of a configuration management team. Configuration management involves maintaining a correspondence among the requirements, the design, the implementation, and the tests. This cross-reference tells developers what program to alter if a change in requirements is needed, or what parts of a program will be affected if an alteration of some kind is proposed. Configuration management staff also coordinate the different versions of a system that may be built and supported. For example, a software system may be hosted on different platforms or may be delivered in a series of releases. Configuration management ensures that the functionality is consistent from one platform to another, and that it doesn't degrade with a new release.
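The cross-reference itself can be as simple as a table linking identifiers, as in this minimal sketch (all identifiers are hypothetical):

```python
# A minimal sketch (all identifiers hypothetical) of the cross-reference
# maintained by configuration management: requirements map to design
# elements, code, and tests.

TRACEABILITY = {
    "REQ-7": {"design": ["DES-3"], "code": ["paycheck.py"], "tests": ["T-12", "T-13"]},
    "REQ-8": {"design": ["DES-4"], "code": ["report.py"], "tests": ["T-14"]},
}

def affected_by(requirement):
    # What must be examined or altered if this requirement changes?
    links = TRACEABILITY[requirement]
    return links["design"] + links["code"] + links["tests"]

print(affected_by("REQ-7"))   # ['DES-3', 'paycheck.py', 'T-12', 'T-13']
```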

The development roles can be assumed by one person or several. For small projects, two or three people may share all roles. However, for larger projects, the development team is often separated into distinct groups based on their function in development. Sometimes, those who maintain the system are different from those who design or write the system initially. For a very large development project, the customer can even hire one company to do the initial development and another to do the maintenance. As we discuss the development and maintenance activities in later chapters, we will look at what skills are needed by each type of development role.

1.8 HOW HAS SOFTWARE ENGINEERING CHANGED?

We have compared the building of software to the building of a house. Each year, hundreds of houses are built across the country, and satisfied customers move in. Each year, hundreds of software products are built by developers, but customers are too often unhappy with the result. Why is there a difference? If it is so easy to enumerate the steps in the development of a system, why are we as software engineers having such a difficult time producing quality software?

Think back to our house-building example. During the building process, the Howells continually reviewed the plans. They also had many opportunities to change their minds about what they wanted. In the same way, software development allows the customer to review the plans at every step and to make changes in the design. After all, if the developer produces a marvelous product that does not meet the customer's needs, the resultant system will have wasted everyone's time and effort.

For this reason, it is essential that our software engineering tools and techniques be used with an eye toward flexibility. In the past, we as developers assumed that our customers knew from the start what they wanted. That stability is not usually the case. As the various stages of a project unfold, constraints arise that were not anticipated at the beginning. For instance, after having chosen hardware and software to use for a project, we may find that a change in the customer requirements makes it difficult to use a particular database management system to produce menus exactly as promised to the customer. Or we may find that another system with which ours is to interface has changed its procedure or the format of the expected data. We may even find that hardware or software does not work quite as the vendor's documentation had promised. Thus, we must remember that each project is unique and that tools and techniques must be chosen that reflect the constraints placed on the individual project.

We must also acknowledge that most systems do not stand by themselves. They interface with other systems, either to receive or to provide information. Developing such systems is complex simply because they require a great deal of coordination with the systems with which they communicate. This complexity is especially true of systems that are being developed concurrently. In the past, developers had difficulty ensuring the accuracy and completeness of the documentation of interfaces among systems. In subsequent chapters, we will address the issue of controlling the interface problem.

The Nature of the Change

These problems are among many that affect the success of our software development projects. Whatever approach we take, we must look both backward and forward. That is, we must look back at previous development projects to see what we have learned, not only about ensuring software quality, but also about the effectiveness of our techniques and tools. And we must look ahead to the way software development and the use of software products are likely to change our practices in the future. Wasserman (1995) points out that the changes since the 1970s have been dramatic. For example, early applications were intended to run on a single processor, usually a mainframe. The input was linear, usually a deck of cards or an input tape, and the output was alphanumeric. The system was designed in one of two basic ways: as a transformation, where input was converted to output, or as a transaction, where input determined which function would be performed. Today's software-based systems are far different and more complex. Typically, they run on multiple systems, sometimes configured in a client-server architecture with distributed functionality. Software performs not only the primary functions that the user needs, but also network control, security, user-interface presentation and processing, and data or object management. The traditional "waterfall" approach to development, which assumes a linear progression of development activities, where one begins only when its predecessor is complete (and which we will study in Chapter 2), is no longer flexible or suitable for today's systems.
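The two early design styles are easy to contrast in a minimal sketch (the functions are illustrative):

```python
# A minimal sketch (illustrative functions) contrasting the two early
# design styles: transformation versus transaction.

def transformation(input_records):
    # Transformation: input is converted to output.
    return [record.upper() for record in input_records]

def transaction(request):
    # Transaction: the input determines which function is performed.
    functions = {
        "deposit": lambda amount: f"deposited {amount}",
        "withdraw": lambda amount: f"withdrew {amount}",
    }
    kind, amount = request
    return functions[kind](amount)

print(transformation(["a12", "b34"]))   # ['A12', 'B34']
print(transaction(("deposit", 100)))    # deposited 100
```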

In his Stevens lecture, Wasserman (1996) summarized these changes by identifying seven key factors that have altered software engineering practice, illustrated in Figure 1.12:

1. criticality of time-to-market for commercial products
2. shifts in the economics of computing: lower hardware costs and greater development and maintenance costs
3. availability of powerful desktop computing
4. extensive local- and wide-area networking
5. availability and adoption of object-oriented technology
6. graphical user interfaces using windows, icons, menus, and pointers
7. unpredictability of the waterfall model of software development

For example, the pressures of the marketplace mean that businesses must ready their new products and services before their competitors do; otherwise, the viability of the business itself may be at stake. So traditional techniques for review and testing cannot be used if they require large investments of time that are not recouped as reduced fault or failure rates. Similarly, time previously spent in optimizing code to improve speed or


FIGURE 1.12 The key factors that have changed software development. (The figure shows the seven factors, among them problems with the waterfall model, time to market, shifts in economics, and user interfaces, surrounding "changes in software engineering.")

reduce space may no longer be a wise investment; an additional disk or memory card may be a far cheaper solution to the problem.

Moreover, desktop computing puts development power in the hands of users, who now use their systems to develop spreadsheet and database applications, small programs, and even specialized user interfaces and simulations. This shift of development responsibility means that we, as software engineers, are likely to be building more complex systems than before. Similarly, the vast networking capabilities available to most users and developers make it easier for users to find information without special applications. For instance, searching the World Wide Web is quick, easy, and effective; the user no longer needs to write a database application to find what he or she needs.

Developers now find their jobs enhanced, too. Object-oriented technology, coupled with networks and reuse repositories, makes available to developers a large collection of reusable modules for immediate and speedy inclusion in new applications. And graphical user interfaces, often developed with a specialized tool, help put a friendly face on complicated applications. Because we have become sophisticated in the way we analyze problems, we can now partition a system so we develop its subsystems in parallel, requiring a development process very different from the waterfall model. We will see in Chapter 2 that we have many choices for this process, including some that allow us to build prototypes (to verify with customers and users that the requirements are correct, and to assess the feasibility of designs) and iterate among activities. These steps help us to ensure that our requirements and designs are as fault-free as possible before we instantiate them in code.


Wasserman's Discipline of Software Engineering

Wasserman (1996) points out that any one of the seven technological changes would have a significant effect on the software development process. But taken together, they have transformed the way we work. In his presentations, DeMarco describes this radical shift by saying that we solved the easy problems first, which means that the set of problems left to be solved is much harder now than it was before. Wasserman addresses this challenge by suggesting that there are eight fundamental notions in software engineering that form the basis for an effective discipline of software engineering. We introduce them briefly here, and we return to them in later chapters to see where and how they apply to what we do.

Abstraction. Sometimes, looking at a problem in its "natural state" (i.e., as expressed by the customer or user) is a daunting task. We cannot see an obvious way to tackle the problem in an effective or even feasible way. An abstraction is a description of the problem at some level of generalization that allows us to concentrate on the key aspects of the problem without getting mired in the details. This notion is different from a transformation, where we translate the problem to another environment that we understand better; transformation is often used to move a problem from the real world to the mathematical world, so we can manipulate numbers to solve the problem.

Typically, abstraction involves identifying classes of objects that allow us to group items together; this way, we can deal with fewer things and concentrate on the commonalities of the items in each class. We can talk of the properties or attributes of the items in a class and examine the relationships among properties and classes. For example, suppose we are asked to build an environmental monitoring system for a large and complex river. The monitoring equipment may involve sensors for air quality, water quality, temperature, speed, and other characteristics of the environment. But, for our purposes, we may choose to define a class called "sensor"; each item in the class has certain properties, regardless of the characteristic it is monitoring: height, weight, electrical requirements, maintenance schedule, and so on. We can deal with the class, rather than its elements, in learning about the problem context, and in devising a solution. In this way, the classes help us to simplify the problem statement and focus on the essential elements or characteristics of the problem.

We can form hierarchies of abstractions, too. For instance, a sensor is a type of electrical device, and we may have two types of sensors: water sensors and air sensors.

Thus, we can form the simple hierarchy illustrated in Figure 1.13. By hiding some of the details, we can concentrate on the essential nature of the objects with which we must deal and derive solutions that are simple and elegant. We will take a closer look at abstraction and information hiding in Chapters 5, 6, and 7.
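To make the sensor abstraction concrete, here is a minimal sketch in Python of the hierarchy in Figure 1.13; the class and attribute names are illustrative assumptions, not taken from the text.

import abc

class ElectricalDevice:
    """The most general abstraction: properties shared by all electrical devices."""
    def __init__(self, weight, power_requirements, maintenance_schedule):
        self.weight = weight
        self.power_requirements = power_requirements
        self.maintenance_schedule = maintenance_schedule

class Sensor(ElectricalDevice, abc.ABC):
    """A sensor is a kind of electrical device that can take a reading."""
    @abc.abstractmethod
    def read(self):
        ...

class WaterSensor(Sensor):
    def read(self):
        return {"water_quality": 87.5}   # placeholder; real code would query hardware

class AirSensor(Sensor):
    def read(self):
        return {"air_quality": 42.0}     # placeholder value

Code that monitors the river can now treat every device uniformly through the Sensor class, which is exactly the simplification that the abstraction buys us.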

Analysis and Design Methods and Notations. When you design a program as a class assignment, you usually work on your own. The documentation that you produce is a formal description of your notes to yourself about why you chose a particular approach, what the variable names mean, and which algorithm you implemented. But when you work with a team, you must communicate with many other participants in the development process. Most engineers, no matter what kind of engineering they do,

FIGURE 1.13 Simple hierarchy for monitoring equipment: an electrical device is specialized into a sensor, which in turn is specialized into water sensors and air sensors.

use a standard notation to help them communicate, and to document decisions. For example, an architect draws a diagram or blueprint that any other architect can understand. More importantly, the common notation allows the building contractor to understand the architect's intent and ideas. As we will see in Chapters 4, 5, 6, and 7, there are few similar standards in software engineering, and the misinterpretation that results is one of the key problems of software engineering today.

Analysis and design methods offer us more than a communication medium. They allow us to build models and check them for completeness and consistency. Moreover, we can more readily reuse requirements and design components from previous projects, increasing our productivity and quality with relative ease.

But there are many open questions to be resolved before we can settle on a common set of methods and tools. As we will see in later chapters, different tools and techniques address different aspects of a problem, and we need to identify the modeling primitives that will allow us to capture all the important aspects of a problem with a single technique. Or we need to develop a representation technique that can be used with all methods, possibly tailored in some way.

User Interface Prototyping. Prototyping means building a small version of a system, usually with limited functionality, that can be used to

• help the user or customer identify the key requirements of a system
• demonstrate the feasibility of a design or approach

Often, the prototyping process is iterative: We build a prototype, evaluate it (with user and customer feedback), consider how changes might improve the product or design, and then build another prototype. The iteration ends when we and our customers think we have a satisfactory solution to the problem at hand.

Prototyping is often used to design a good user interface: the part of the system with which the user interacts. However, there are other opportunities for using prototypes, even in embedded systems (i.e., in systems where the software functions are not explicitly visible to the user). The prototype can show the user what functions will be available, regardless of whether they are implemented in software or hardware. Since the user interface is, in a sense, a bridge between the application domain and the software


development team, prototyping can bring to the surface issues and assumptions that may not have been clear using other approaches to requirements analysis. We will consider the role of user interface prototyping in Chapters 4 and 5.

Software Architecture. The overall architecture of a system is important not only to the ease of implementing and testing it, but also to the speed and effectiveness of maintaining and changing it. The quality of the architecture can make or break a system; indeed, Shaw and Garlan (1996) present architecture as a discipline on its own whose effects are felt throughout the entire development process. The architectural structure of a system should reflect the principles of good design that we will study in Chapters 5 and 7.

A system's architecture describes the system in terms of a set of architectural units, and a map of how the units relate to one another. The more independent the units, the more modular the architecture and the more easily we can design and develop the pieces separately. Wasserman (1996) points out that there are at least five ways that we can partition the system into units:

1. modular decomposition: based on assigning functions to modules
2. data-oriented decomposition: based on external data structures
3. event-oriented decomposition: based on events that the system must handle
4. outside-in design: based on user inputs to the system
5. object-oriented design: based on identifying classes of objects and their interrelationships

These approaches are not mutually exclusive. For example, we can design a user interface with event-oriented decomposition, while we design the database using object-oriented or data-oriented design. We will examine these techniques in further detail in later chapters. The importance of these approaches is their capture of our design experience, enabling us to capitalize on our past projects by reusing both what we have done and what we learned by doing it.
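As a rough illustration of how two decomposition styles can coexist in one system, consider this Python sketch; the event names and data structures are hypothetical, invented only to show the contrast.

# Event-oriented decomposition: the user-interface unit is organized
# around the events the system must handle.
def on_sensor_added(event):
    print("sensor registered:", event["id"])

def on_reading_received(event):
    print("new reading:", event["value"])

HANDLERS = {
    "sensor_added": on_sensor_added,
    "reading_received": on_reading_received,
}

def dispatch(event):
    HANDLERS[event["type"]](event)

# Data-oriented decomposition: the storage unit is organized around an
# external data structure (here, a table of readings).
class ReadingStore:
    def __init__(self):
        self.rows = []   # each row: (sensor_id, value)

    def insert(self, sensor_id, value):
        self.rows.append((sensor_id, value))

# The two units were partitioned by different principles but cooperate:
store = ReadingStore()
dispatch({"type": "sensor_added", "id": "w-1"})
store.insert("w-1", 87.5)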

Software Process. Since the late 1980s, many software engineers have paid careful attention to the process of developing software, as well as to the products that result. The organization and discipline in the activities have been acknowledged to contribute to the quality of the software and to the speed with which it is developed. However, Wasserman notes that

the great variations among application types and organizational cultures make it impossible to be prescriptive about the process itself. Thus, it appears that the software process is not fundamental to software engineering in the same way as are abstraction and modularization. (Wasserman 1996)

Instead, he suggests that different types of software need different processes. In particular, Wasserman suggests that enterprisewide applications need a great deal of control, whereas individual and departmental applications can take advantage of rapid application development, as we illustrate in Figure 1.14.

By using today's tools, many small and medium-sized systems can be built by one or two developers, each of whom must take on multiple roles. The tools may include a

FIGURE 1.14 Differences in development (Wasserman 1996). The figure contrasts controlled development of mission-critical, multiuser, multiplatform applications (2- to 3-tier development) with rapid application development of departmental applications (limited scope/vision, low/medium risk, single/multiplatform, 1- to 2-tier development) and single-user desktop productivity tools (packages/minimal development, low cost/low risk, single platform).

text editor, programming environment, testing support, and perhaps a small database to capture key data elements about the products and processes. Because the project's risk is relatively low, little management support or review is needed.

However, large, complex systems need more structure, checks, and balances. These systems often involve many customers and users, and development continues over a long period of time. Moreover, the developers do not always have control over the entire development, as some critical subsystems may be supplied by others or be implemented in hardware. This type of high-risk system requires analysis and design tools, project management, configuration management, more sophisticated testing tools, and a more rigorous system of review and causal analysis. In Chapter 2, we will take a careful look at several process alternatives to see how varying the process addresses different goals. Then, in Chapters 12 and 13, we evaluate the effectiveness of some processes and look at ways to improve them.

Reuse. In software development and maintenance, we often take advantage of the commonalities across applications by reusing items from previous development. For example, we use the same operating system or database management system from one development project to the next, rather than building a new one each time. Similarly, we reuse sets of requirements, parts of designs, and groups of test scripts or data when we build systems that are similar to but not the same as what we have done before. Barnes and Bollinger (1991) point out that reuse is not a new idea, and they provide many interesting examples of how we reuse much more than just code.

Prieto-Díaz (1991) introduced the notion of reusable components as a business asset. Companies and organizations invest in items that are reusable and then gain quantifiable benefit when those items are used again in subsequent projects. However,


establishing a long-term, effective reuse program can be difficult, because there are several barriers:

• It is sometimes faster to build a small component than to search for one in a repository of reusable components.
• It may take extra time to make a component general enough to be reusable easily by other developers in the future.
• It is difficult to document the degree of quality assurance and testing that have been done, so that a potential reuser can feel comfortable about the quality of the component.
• It is not clear who is responsible if a reused component fails or needs to be updated.
• It can be costly and time-consuming to understand and reuse a component written by someone else.
• There is often a conflict between generality and specificity.

We will look at reuse in more detail in Chapter 12, examining several examples of successful reuse.

Measurement. Improvement is a driving force in software engineering research: improving our processes, resources, and methods so that we produce and maintain better products. But sometimes we express improvement goals generally, with no quantitative description of where we are and where we would like to go. For this reason, software measurement has become a key aspect of good software engineering practice. By quantifying where we can and what we can, we describe our actions and their outcomes in a common mathematical language that allows us to evaluate our progress. In addition, a quantitative approach permits us to compare progress across disparate projects. For example, when John Young was CEO of Hewlett-Packard, he set goals of "10X," a tenfold improvement in quality and productivity, for every project at Hewlett-Packard, regardless of application type or domain (Grady and Caswell 1987).

At a lower level of abstraction, measurement can help to make specific characteristics of our processes and products more visible. It is often useful to transform our understanding of the real, empirical world to elements and relationships in the formal, mathematical world, where we can manipulate them to gain further understanding. As illustrated in Figure 1.15, we can use mathematics and statistics to solve a problem, look for trends, or characterize a situation (such as with means and standard deviations). This new information can then be mapped back to the real world and applied as part of a solution to the empirical problem we are trying to solve. Throughout this book, we will see examples of how measurement is used to support analysis and decision making.
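As a small example of the mapping in Figure 1.15, the following Python fragment characterizes a set of fault counts with a mean and standard deviation and then maps the numbers back to an empirical decision; the data and the decision rule are invented for illustration only.

import statistics

# Hypothetical fault counts per module, gathered from past inspections.
faults_per_module = [2, 0, 5, 1, 3, 8, 1, 2]

mean = statistics.mean(faults_per_module)      # characterize the situation
spread = statistics.stdev(faults_per_module)   # spread around the mean

# Map the numeric results back to the empirical world: flag unusually
# fault-prone modules as candidates for extra review.
suspects = [c for c in faults_per_module if c > mean + spread]
print(f"mean={mean:.2f}, stdev={spread:.2f}, suspect counts={suspects}")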

Tools and Integrated Environments. For many years, vendors touted CASE (Computer-Aided Software Engineering) tools, where standardized, integrated development environments would enhance software development. However, we have seen how different developers use different processes, methods, and resources, so a unifying approach is easier said than done.

On the other hand, researchers have proposed several frameworks that allow us to compare and contrast both existing and proposed environments. These frameworks


FIGURE 1.15 Using measurement to help find a solution. (Measurement maps an empirical relational system in the real, empirical world to a formal relational system in the mathematical world; mathematics and statistics yield numeric results, whose interpretation is implemented as a solution that produces empirical, relevant results.)

permit us to examine the services provided by each software engineering environment and to decide which environment is best for a given problem or application development.

One of the major difficulties in comparing tools is that vendors rarely address the entire development life cycle. Instead, they focus on a small set of activities, such as design or testing, and it is up to the user to integrate the selected tools into a complete development environment. Wasserman (1990) has identified five issues that must be addressed in any tool integration:

1. platform integration: the ability of tools to interoperate on a heterogeneous network
2. presentation integration: commonality of user interface
3. process integration: linkage between the tools and the development process
4. data integration: the way tools share data
5. control integration: the ability for one tool to notify and initiate action in another

In each of the subsequent chapters of this book, we will examine tools that support the activities and concepts we describe in the chapter.

You can think of the eight concepts described here as eight threads woven through the fabric of this book, tying together the disparate activities we call software engineering. As we learn more about software engineering, we will revisit these ideas to see how they unify and elevate software engineering as a scientific discipline.

1.9 INFORMATION SYSTEMS EXAMPLE

Throughout this book, we will end each chapter with two examples, one of an information system and the other of a real-time system. We will apply the concepts described in the chapter to some aspect of each example, so that you can see what the concepts mean in practice, not just in theory.


FIGURE 1.16 Piccadilly Television franchise area.

Our information system example is drawn (with permission) from Complete Systems Analysis: The Workbook, the Textbook, the Answers, by James and Suzanne Robertson (Robertson and Robertson 1994). It involves the development of a system to sell advertising time for Piccadilly Television, the holder of a regional British television franchise. Figure 1.16 illustrates the Piccadilly Television viewing area. As we shall see, the constraints on the price of television time are many and varied, so the problem is both interesting and difficult. In this book, we highlight aspects of the problem and its solution; the Robertsons' book shows you detailed methods for capturing and analyzing the system requirements.

In Britain, the broadcasting board issues an eight-year franchise to a commercial television company, giving it exclusive rights to broadcast its programs in a carefully defined region of the country. In return, the franchisee must broadcast a prescribed balance of drama, comedy, sports, children's and other programs. Moreover, there are restrictions on which programs can be broadcast at which times, as well as rules about the content of programs and commercial advertising.

A commercial advertiser has several choices to reach the Midlands audience: Piccadilly, the cable channels, and the satellite channels. However, Piccadilly attracts most of the audience. Thus, Piccadilly must set its rates to attract a portion of an advertiser's national budget. One of the ways to attract an advertiser's attention is with audience ratings that reflect the number and type of viewers at different times of the day. The ratings are reported in terms of program type, audience type, time of day, television company, and more. But the advertising rate depends on more than just the ratings. For example, the rate per hour may be cheaper if the advertiser buys a large number of hours. Moreover, there are restrictions on the type of advertising at certain times and for certain programs. For example,

• Advertisements for alcohol may be shown only after 9 P.M.
• If an actor is in a show, then an advertisement with that actor may not be broadcast within 45 minutes of the show.

FIGURE 1.17 Piccadilly context diagram showing system boundary (Robertson and Robertson 1994). (The diagram surrounds the Piccadilly system with external entities such as advertising agencies, production companies, program suppliers, Piccadilly management, and the broadcasting board, and shows flows such as spot requirements, selected spots, copy and transmission instructions, the program transmission schedule, commercial copy, and target revenue reports.)

• If an advertisement for a class of product (such as an automobile) is scheduled for a particular commercial break, then no other advertisement for something in that class may be shown during that break.
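Rules like these are natural candidates for encoding as explicit constraint checks. Here is a minimal Python sketch, assuming hypothetical data structures for advertisements and shows; the real system's rule set, explored in later chapters, is far richer.

from datetime import datetime, timedelta

def alcohol_rule_ok(ad, break_time):
    """Advertisements for alcohol may be shown only after 9 P.M."""
    return ad["category"] != "alcohol" or break_time.hour >= 21

def actor_rule_ok(ad, break_time, shows):
    """An ad featuring an actor may not air within 45 minutes of that
    actor's show (simplified: measured from the show's start and end)."""
    for show in shows:
        if ad.get("actor") in show["cast"]:
            if (abs(break_time - show["start"]) < timedelta(minutes=45) or
                    abs(break_time - show["end"]) < timedelta(minutes=45)):
                return False
    return True

ad = {"category": "alcohol", "actor": None}
print(alcohol_rule_ok(ad, datetime(2010, 1, 1, 21, 30)))      # True: after 9 P.M.
print(actor_rule_ok(ad, datetime(2010, 1, 1, 21, 30), []))    # True: no conflicting show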

As we explore this example in more detail, we will note the additional rules and regulations about advertising and its cost. The system context diagram in Figure 1.17 shows us the system boundary and how it relates to these rules. The shaded oval is the Piccadilly system that concerns us as our information system example; the system boundary is simply the perimeter of the oval. The arrows and boxes display the items that can affect the working of the Piccadilly system, but we consider them only as a collection of inputs and outputs, with their sources and destinations, respectively.

In later chapters, we will make visible the activities and elements inside the shaded oval (i.e., within the system boundary). We will examine the design and development of this system using the software engineering techniques that are described in each chapter.

1.10 REAL-TIME EXAMPLE

Our real-time example is based on the embedded software in the Ariane-5, a space rocket belonging to the European Space Agency (ESA). On June 4, 1996, on its maiden flight, the Ariane-5 was launched and performed perfectly for approximately 40 seconds. Then, it began to veer off course. At the direction of an Ariane ground controller, the rocket was destroyed by remote control. The destruction of the uninsured rocket was a loss not only of the rocket itself, but also of the four satellites it


contained; the total cost of the disaster was $500 million (Lions et al. 1996; Newsbytes home page 1996).

Software is involved in almost all aspects of the system, from the guidance of the rocket to the internal workings of its component parts. The failure of the rocket and its subsequent destruction raise many questions about software quality. As we will see in later chapters, the inquiry board that investigated the cause of the problem focused on software quality and its assurance. In this chapter, we look at quality in terms of the business value of the rocket.

There were many organizations with a stake in the success of Ariane-5: the ESA, the Centre National d'Études Spatiales (CNES, the French space agency in overall command of the Ariane program), and 12 other European countries. The rocket's loss was another in a series of delays and problems to affect the Ariane program, including a nitrogen leak during engine testing in 1995 that killed two engineers. However, the June 1996 incident was the first whose cause was directly attributed to software failure.

The business impact of the incident went well beyond the $500 million in equipment. In 1996, the Ariane-4 rocket and previous variants held more than half of the world's launch contracts, ahead of American, Russian, and Chinese launchers. Thus, the credibility of the program was at stake, as well as the potential business from future Ariane rockets.

The future business was based in part on the new rocket's ability to carry heavier payloads into orbit than previous launchers could. Ariane-5 was designed to carry a single satellite up to 6.8 tons or two satellites with a combined weight of 5.9 tons. Further development work hoped to add an extra ton to the launch capacity by 2002. This increased carrying capacity has clear business advantages; often, operators reduce their costs by sharing launches, so Ariane can offer to host several companies' payloads at the same time.

Consider what quality means in the context of this example. The destruction of Ariane-5 turned out to be the result of a requirement that was misspecified by the customer. In this case, the developer might claim that the system is still high quality; it was just built to the wrong specification. Indeed, the inquiry board formed to investigate the cause and cure of the disaster noted that

The Board's findings are based on thorough and open presentations from the Ariane-5 project teams, and on documentation which has demonstrated the high quality of the Ariane-5 programme as regards engineering work in general and completeness and traceability of documents. (Lions et al. 1996)

But from the user's and customer's point of view, the specification process should have been good enough to identify the specification flaw and force the customer to correct the specification before damage was done. The inquiry board acknowledged that

The supplier of the SRI [the subsystem in which the cause of the problem was eventually located] was only following the specification given to it, which stipulated that in the event of any detected exception the processor was to be stopped. The exception which occurred was not due to random failure but a design error. The exception was detected, but inappropriately handled because the view had been taken that software should be considered correct until it is shown to be at fault. The Board has reason to believe that this view is also accepted in other areas of Ariane-5 software design. The Board is in favour of the opposite view, that software should be assumed to be faulty until applying the currently accepted best practice methods can demonstrate that it is correct. (Lions et al. 1996)


In later chapters, we will investigate this example in more detail, looking at the design, testing, and maintenance implications of the developers' and customers' decisions. We will see how poor systems engineering at the beginning of development led to a series of poor decisions that led in turn to disaster. On the other hand, the openness of all concerned, including ESA and the inquiry board, coupled with high-quality documentation and an earnest desire to get at the truth quickly, resulted in quick resolution of the immediate problem and an effective plan to prevent such problems in the future.

A systems view allowed the inquiry board, in cooperation with the developers, to view the Ariane-5 as a collection of subsystems. This collection reflects the analysis of the problem, as we described in this chapter, so that different developers can work on separate subsystems with distinctly different functions. For example:

The attitude of the launcher and its movements in space are measured by an Inertial Reference System (SRI). It has its own internal computer, in which angles and velocities are calculated on the basis of information from a "strap-down" inertial platform, with laser gyros and accelerometers. The data from the SRI are transmitted through the databus to the On-Board Computer (OBC), which executes the flight program and controls the nozzles of the solid boosters and the Vulcain cryogenic engine, via servovalves and hydraulic actuators. (Lions et al. 1996)

But the synthesis of the solution must include an overview of all the component parts, where the parts are viewed together to determine if the "glue" that holds them together is sufficient and appropriate. In the case of Ariane-5, the inquiry board suggested that the customers and developers should have worked together to find the critical software and make sure that it could handle not only anticipated but also unanticipated behavior.

This means that critical software - in the sense that failure of the software puts the mission at risk - must be identified at a very detailed level, that exceptional behaviour must be confined, and that a reasonable back-up policy must take software failures into account. (Lions et al. 1996)

1.11 WHAT THIS CHAPTER MEANS FOR YOU

This chapter has introduced many concepts that are essential to good software engineering research and practice. You, as an individual software developer, can use these concepts in the following ways:

• When you are given a problem to solve (whether or not the solution involves software), you can analyze the problem by breaking it into its component parts, and the relationships among the parts. Then, you can synthesize a solution by solving the individual subproblems and merging them to form a unified whole.
• You must understand that the requirements may change, even as you are analyzing the problem and building a solution. So your solution should be well-documented and flexible, and you should document your assumptions and the algorithms you use (so that they are easy to change later).
• You must view quality from several different perspectives, understanding that technical quality and business quality may be very different.


• You can use abstraction and measurement to help identify the essential aspects of the problem and solution.
• You can keep the system boundary in mind, so that your solution does not overlap with the related systems that interact with the one you are building.

1.12 WHAT THIS CHAPTER MEANS FOR YOUR DEVELOPMENT TEAM

Much of your work will be done as a member of a larger development team. As we have seen in this chapter, development involves requirements analysis, design, implementation, testing, configuration management, quality assurance, and more. Some of the people on your team may wear multiple hats, as may you, and the success of the project depends in large measure on the communication and coordination among the team members. We have seen in this chapter that you can aid the success of your project by selecting

• a development process that is appropriate to your team size, risk level, and application domain
• tools that are well-integrated and support the type of communication your project demands
• measurements and supporting tools to give you as much visibility and understanding as possible

1.13 WHAT THIS CHAPTER MEANS FOR RESEARCHERS

Many of the issues discussed in this chapter are good subjects for further research. We have noted some of the open issues in software engineering, including the need to find

• the right levels of abstraction to make the problem easy to solve
• the right measurements to make the essential nature of the problem and solution visible and helpful
• an appropriate problem decomposition, where each subproblem is solvable
• a common framework or notation to allow easy and effective tool integration, and to maximize communication among project participants

In later chapters, we will describe many techniques. Some have been used and are well-proven software development practices, whereas others are proposed and have been demonstrated only on small, "toy," or student projects. We hope to show you how to improve what you are doing now and at the same time to inspire you to be creative and thoughtful about trying new techniques and processes in the future.

1.14 TERM PROJECT

It is impossible to learn software engineering without participating in developing a software project with your colleagues. For this reason, each chapter of this book will present information about a term project that you can perform with a team of classmates. The project, based on a real system in a real organization, will allow you to address some of


the very real challenges of analysis, design, implementation, testing, and maintenance. In addition, because you will be working with a team, you will deal with issues of team diversity and project management.

The term project involves the kinds of loans you might negotiate with a bank when you want to buy a house. Banks generate income in many ways, often by borrowing money from their depositors at a low interest rate and then lending that same money back at a higher interest rate in the form of bank loans. However, long-term property loans, such as mortgages, typically have terms of up to 15, 25, or even 30 years. That is, you have 15, 25, or 30 years to repay the loan: the principal (the money you originally borrowed) plus interest at the specified rate. Although the income from interest on these loans is lucrative, the loans tie up money for a long time, preventing the banks from using their money for other transactions. Consequently, the banks often sell their loans to consolidating organizations, taking less long-term profit in exchange for freeing the capital for use in other ways.

The application for your term project is called the Loan Arranger. It is fashioned on ways in which a (mythical) Financial Consolidation Organization (FCO) handles the loans it buys from banks. The consolidation organization makes money by purchasing loans from banks and selling them to investors. The bank sells the loan to FCO, getting the principal in return. Then, as we shall see, FCO sells the loan to investors who are willing to wait longer than the bank to get their return.

To see how the transactions work, consider how you get a loan (called a "mortgage") for a house. You may purchase a $150,000 house by paying $50,000 as an initial payment (called the "down payment") and taking a loan for the remaining $100,000. The "terms" of your loan from the First National Bank may be for 30 years at 5% interest. This terminology means that the First National Bank gives you 30 years (the term of the loan) to pay back the amount you borrowed (the "principal") plus interest on whatever you do not pay back right away. For example, you can pay the $100,000 by making a payment once a month for 30 years (that is, 360 "installments" or "monthly payments"), with interest on the unpaid balance. If the initial balance is $100,000, the bank calculates your monthly payment using the amount of principal, the interest rate, the amount of time you have to pay off the loan, and the assumption that all monthly payments should be the same amount.

For instance, suppose the bank tells you that your monthly payment is to be $536.82. The first month's interest is (1/12) x (.05) x ($100,000), or $416.67. The rest of the payment ($536.82 - $416.67) pays for reducing the principal: $120.15. For the second month, you now owe $100,000 minus the $120.15, so the interest is reduced to (1/12) x (.05) x ($100,000 - $120.15), or $416.17. Thus, during the second month, only $416.17 of the monthly payment is interest, and the remainder, $120.65, is applied to the remaining principal. Over time, you pay less interest and more toward reducing the remaining balance of principal, until you have paid off the entire principal and own your property free and clear of any encumbrance by the bank.
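This calculation is easy to reproduce in code. The following Python sketch uses the standard level-payment formula (which the text assumes but does not state explicitly) to derive the $536.82 installment and then traces the first two months of the schedule:

def monthly_payment(principal, annual_rate, years):
    """Level-payment formula: every installment is the same amount."""
    r = annual_rate / 12                  # monthly interest rate
    n = years * 12                        # number of installments
    return principal * r / (1 - (1 + r) ** -n)

payment = round(monthly_payment(100_000, 0.05, 30), 2)   # 536.82

balance = 100_000.0
for month in (1, 2):
    interest = balance * 0.05 / 12        # interest on the unpaid balance
    toward_principal = payment - interest
    balance -= toward_principal
    print(f"month {month}: interest={interest:.2f}, "
          f"principal={toward_principal:.2f}, balance={balance:.2f}")
# month 1: interest=416.67, principal=120.15, balance=99879.85
# month 2: interest=416.17, principal=120.65, balance=99759.19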

First National Bank may sell your loan to FCO some time during the period when you are making payments. First National negotiates a price with FCO. In turn, FCO may sell your loan to ABC Investment Corporation. You still make your mortgage payments each month, but your payment goes to ABC, not First National. Usually, FCO sells its loans in "bundles," not individual loans, so that an investor buys a collection of loans based on risk, principal involved, and expected rate of return. In other words, an


investor such as ABC can contact FCO and specify how much money it wishes to invest, for how long, how much risk it is willing to take (based on the history of the people or organizations paying back the loan), and how much profit is expected.

The Loan Arranger is an application that allows an FCO analyst to select a bundle of loans to match an investor's desired investment characteristics. The application accesses information about loans purchased by FCO from a variety of lending institutions. When an investor specifies investment criteria, the system selects the optimal bundle of loans that satisfies the criteria. While the system will allow some advanced optimizations, such as selecting the best bundle of loans from a subset of those available (for instance, from all loans in Massachusetts, rather than from all the loans available), the system will still allow an analyst to manually select loans in a bundle for the client. In addition to bundle selection, the system also automates information management activities, such as updating bank information, updating loan information, and adding new loans when banks provide that information each month.

We can summarize this information by saying that the Loan Arranger system allows a loan analyst to access information about mortgages (home loans, described here simply as "loans") purchased by FCO from multiple lending institutions with the intention of repackaging the loans to sell to other investors. The loans purchased by FCO for investment and resale are collectively known as the loan portfolio. The Loan Arranger system tracks these portfolio loans in its repository of loan information. The loan analyst may add, view, update, or delete loan information about lenders and the set of loans in the portfolio. Additionally, the system allows the loan analyst to create "bundles" of loans for sale to investors. A user of Loan Arranger is a loan analyst who tracks the mortgages purchased by FCO.
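To give a feel for the core operation, here is a deliberately naive Python sketch of bundle selection; the loan records, criteria, and greedy strategy are illustrative assumptions only, since the real requirements are developed in later chapters.

loans = [
    {"id": 1, "principal": 80_000,  "risk": "low",    "state": "MA"},
    {"id": 2, "principal": 120_000, "risk": "medium", "state": "MA"},
    {"id": 3, "principal": 60_000,  "risk": "low",    "state": "NY"},
]

RISK_LEVELS = {"low": 0, "medium": 1, "high": 2}

def select_bundle(loans, max_risk, target_amount, state=None):
    """Greedily pick qualifying loans until the target amount is reached."""
    candidates = [l for l in loans
                  if RISK_LEVELS[l["risk"]] <= RISK_LEVELS[max_risk]
                  and (state is None or l["state"] == state)]
    bundle, total = [], 0
    for loan in sorted(candidates, key=lambda l: l["principal"], reverse=True):
        if total >= target_amount:
            break
        bundle.append(loan)
        total += loan["principal"]
    return bundle, total

bundle, total = select_bundle(loans, "medium", 150_000, state="MA")
print([l["id"] for l in bundle], total)   # [2, 1] 200000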

In later chapters, we will explore the system's requirements in more depth. For now, if you need to brush up on your understanding of principal and interest, you can review your old math books or look at http://www.interest.com/hugh/calc/formula.html.

1.15 KEY REFERENCES

You can find out about software faults and failures by looking in the Risks Forum, moderated by Peter Neumann. A paper copy of some of the Risks is printed in each issue of Software Engineering Notes, published by the Association for Computing Machinery's Special Interest Group on Software Engineering (SIGSOFT). The Risks archives are available on ftp.sri.com, cd risks. The Risks Forum newsgroup is available online at comp.risks, or you can subscribe via the automated list server at [email protected]

You can find out more about the Ariane-5 project from the European Space Agency's Web site: http://www.esrin.esa.it/htdocs/esa/ariane. A copy of the joint ESA/CNES press release describing the mission failure (in English) is at http://www.esrin.esa.it/htdocs/tidc/Press/Press96/press19.html. A French version of the press release is at http://www.cnes.fr/Acces_EspaceNo1_50x.html. An electronic copy of the Ariane-5 Flight 501 Failure Report is at http://www.esrin.esa.it/htdocs/tidc/Press/Press96/ariane5rep.html.

Leveson and Turner (1993) describe the Therac software design and testing problems in careful detail.


The January 1996 issue of IEEE Software is devoted to software quality. In particular, the introductory article by Kitchenham and Pfleeger (1996) describes and critiques several quality frameworks, and the article by Dromey (1996) discusses how to define quality in a measurable way.

For more information about the Piccadilly Television example, you may consult (Robertson and Robertson 1994) or explore the Robertsons' approach to requirements at www.systemsguild.com.

1.16 EXERCISES

1. The following article appeared in the Washington Post (Associated Press 1996):

PILOT'S COMPUTER ERROR CITED IN PLANE CRASH. AMERICAN AIRLINES SAYS ONE-LETTER CODE WAS REASON

JET HIT MOUNTAIN IN COLOMBIA.

Dallas, Aug. 23 - The captain of an American Airlines jet that crashed in Colombia last December entered an incorrect one-letter computer command that sent the plane into a mountain, the airline said today.

The crash killed all but four of the 163 people aboard. American's investigators concluded that the captain of the Boeing 757 apparently

thought he had entered the coordinates for the intended destination, Cali. But on most South American aeronautical charts, the one-letter code for Cali is the same

as the one for Bogota, 132 miles in the opposite direction. The coordinates for Bogota directed the plane toward the mountain, according to a letter by Cecil Ewell, American's chief pilot and vice president for flight. The codes for Bogota and Cali are different in most computer databases, Ewell said.

American spokesman John Hotard confirmed that Ewell's letter, first reported in the Dallas Morning News, is being delivered this week to all of the airline's pilots to warn them of the coding problem.

American's discovery also prompted the Federal Aviation Administration to issue a bulletin to all airlines, warning them of inconsistencies between some computer databases and aeronautical charts, the newspaper said.

The computer error is not the final word on what caused the crash. The Colombian government is investigating and is expected to release its findings by October.

Pat Cariseo, spokesman for the National Transportation Safety Board, said Colombian investigators also are examining factors such as flight crew training and air traffic control.

The computer mistake was found by investigators for American when they compared data from the jet's navigation computer with information from the wreckage, Ewell said.

The data showed the mistake went undetected for 66 seconds while the crew scrambled to follow an air traffic controller's orders to take a more direct approach to the Cali airport.

Three minutes later, while the plane still was descending and the crew trying to figure out why the plane had turned, it crashed.


Ewell said the crash presented two important lessons for pilots. "First of all, no matter how many times you go to South America or any other place - the Rocky Mountains - you can never, never, never assume anything," he told the newspaper. Second, he said, pilots must understand they can't let automation take over responsibility for flying the airplane.

Is this article evidence that we have a software crisis? How is aviation better off because of software engineering? What issues should be addressed during software development so that problems like this will be prevented in the future?

2. Give an example of problem analysis where the problem components are relatively simple, but the difficulty in solving the problem lies in the interconnections among subproblem components.

3. Explain the difference between errors, faults, and failures. Give an example of an error that leads to a fault in the requirements; the design; the code. Give an example of a fault in the requirements that leads to a failure; a fault in the design that leads to a failure; a fault in the test data that leads to a failure.

4. Why can a count of faults be a misleading measure of product quality?

5. Many developers equate technical quality with overall product quality. Give an example of a product with high technical quality that is not considered high quality by the customer. Are there ethical issues involved in narrowing the view of quality to consider only technical quality? Use the Therac-25 example to illustrate your point.

6. Many organizations buy commercial software, thinking it is cheaper than developing and maintaining software in-house. Describe the pros and cons of using COTS software. For example, what happens if the COTS products are no longer supported by their vendors? What must the customer, user, and developer anticipate when designing a product that uses COTS software in a large system?

7. What are the legal and ethical implications of using COTS software? Of using subcontractors? For example, who is responsible for fixing the problem when the major system fails as a result of a fault in COTS software? Who is liable when such a failure causes harm to the users, directly (as when the automatic brakes fail in a car) or indirectly (as when the wrong information is supplied to another system, as we saw in Exercise 1)? What checks and balances are needed to ensure the quality of COTS software before it is integrated into a larger system?

8. The Piccadilly Television example, as illustrated in Figure 1.17, contains a great many rules and constraints. Discuss three of them and explain the pros and cons of keeping them outside the system boundary.

9. When the Ariane-5 rocket was destroyed, the news made headlines in France and elsewhere. Libération, a French newspaper, called it "A 37-billion-franc fireworks display" on the front page. In fact, the explosion was front-page news in almost all European newspapers and headed the main evening news bulletins on most European TV networks. By contrast, the invasion by a hacker of Panix, a New York-based Internet provider, forced the Panix system to close down for several hours. News of this event appeared only on the front page of the business section of the Washington Post. What is the responsibility of the press when reporting software-based incidents? How should the potential impact of software failures be assessed and reported?


2

Modeling the Process and Life Cycle

In this chapter, we look at
• what we mean by a "process"
• software development products, processes, and resources
• several models of the software development process
• tools and techniques for process modeling

We saw in Chapter 1 that engineering software is both a creative and a step-by-step process, often involving many people producing many different kinds of products. In this chapter, we examine the steps in more detail, looking at ways to organize our activities, so that we can coordinate what we do and when we do it. We begin the chapter by defining what we mean by a process, so that we understand what must be included when we model software development. Next, we examine several types of software process models. Once we know the type of model we wish to use, we take a close look at two types of modeling techniques: static and dynamic. Finally, we apply several of these techniques to our information systems and real-time examples.

2.1 THE MEANING OF PROCESS

When we provide a service or create a product, whether it be developing software, writing a report, or taking a business trip, we always follow a sequence of steps to accomplish a set of tasks. The tasks are usually performed in the same order each time; for example, you do not usually put up the drywall before the wiring for a house is installed or bake a cake before all the ingredients are mixed together. We can think of a set of ordered tasks as a process: a series of steps involving activities, constraints, and resources that produce an intended output of some kind.


A process usually involves a set of tools and techniques, as we defined them in Chapter 1. Any process has the following characteristics:

• The process prescribes all of the major process activities.
• The process uses resources, subject to a set of constraints (such as a schedule), and produces intermediate and final products.
• The process may be composed of subprocesses that are linked in some way. The process may be defined as a hierarchy of processes, organized so that each subprocess has its own process model.
• Each process activity has entry and exit criteria, so that we know when the activity begins and ends.
• The activities are organized in a sequence, so that it is clear when one activity is performed relative to the other activities.
• Every process has a set of guiding principles that explain the goals of each activity.
• Constraints or controls may apply to an activity, resource, or product. For example, the budget or schedule may constrain the length of time an activity may take or a tool may limit the way in which a resource may be used.
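These characteristics map naturally onto a simple data structure. The following Python sketch (an invented representation, not one the text prescribes) captures activities with entry and exit criteria, constraints, outputs, and an ordered sequence:

from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    entry_criteria: list                              # when the activity may begin
    exit_criteria: list                               # when the activity is complete
    constraints: list = field(default_factory=list)   # e.g., schedule, budget
    outputs: list = field(default_factory=list)       # intermediate or final products

@dataclass
class Process:
    name: str
    activities: list                                  # ordered relative to one another
    subprocesses: list = field(default_factory=list)  # hierarchy of processes

design = Activity(
    name="system design",
    entry_criteria=["requirements document approved"],
    exit_criteria=["design document reviewed"],
    outputs=["system design document"],
)
development = Process(name="development", activities=[design])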

When the process involves the building of some product, we sometimes refer to the process as a life cycle. Thus, the software development process is sometimes called the software life cycle, because it describes the life of a software product from its conception to its implementation, delivery, use, and maintenance.

Processes are important because they impose consistency and structure on a set of activities. These characteristics are useful when we know how to do something well and we want to ensure that others do it the same way. For example, if Sam is a good bricklayer, he may write down a description of the bricklaying process he uses so that Sara can learn how to do it as well. He may take into account the differences in the way people prefer to do things; for instance, he may write his instructions so that Sara can lay bricks whether she is right- or left-handed. Similarly, a software development process can be described in flexible ways that allow people to design and build software using preferred techniques and tools; a process model may require design to occur before coding, but may allow many different design techniques to be used. For this reason, the process helps to maintain a level of consistency and quality in products or services that are produced by many different people.

A process is more than a procedure. We saw in Chapter 1 that a procedure is like a recipe: a structured way of combining tools and techniques to produce a product. A process is a collection of procedures, organized so that we build products to satisfy a set of goals or standards. In fact, the process may suggest that we choose from several procedures, as long as the goal we are addressing is met. For instance, the process may require that we check our design components before coding begins. The checking can be done using informal reviews or formal inspections, each an activity with its own procedure, but both addressing the same goal.

The process structure guides our actions by allowing us to examine, understand, control, and improve the activities that comprise the process. To see how, consider the process of making chocolate cake with chocolate icing. The process may contain several procedures, such as buying the ingredients and finding the appropriate cooking utensils.


The recipe describes the procedure for actually mixing and baking the cake. The recipe contains activities (such as "beat the egg before mixing with other ingredients"), constraints (such as the temperature requirement in "heat the chocolate to the melting point before combining with the sugar"), and resources (such as sugar, flour, eggs, and chocolate). Suppose Chuck bakes a chocolate cake according to this recipe. When the cake is done, he tastes a sample and decides that the cake is too sweet. He looks at the recipe to see which ingredient contributes to the sweetness: sugar. Then, he bakes another cake, but this time he reduces the amount of sugar in the new recipe. Again he tastes the cake, but now it does not have enough chocolate flavor. He adds a measure of cocoa powder to his second revision and tries again. After several iterations, each time changing an ingredient or an activity (such as baking the cake longer, or letting the chocolate mixture cool before combining with the egg mixture), Chuck arrives at a cake to his liking. Without the recipe to document this part of the process, Chuck would not have been able to make changes easily and evaluate the results.

Processes are also important for enabling us to capture our experiences and pass them along to others. Just as master chefs pass on their favorite recipes to their colleagues and friends, master craftspeople can pass along documented processes and procedures. Indeed, the notions of apprenticeship and mentoring are based on the idea that we share our experience so we can pass down our skills from senior people to junior ones.

In the same way, we want to learn from our past development projects, document the practices that work best to produce high-quality software, and follow a software development process so we can understand, control, and improve what happens as we build products for our customers. We saw in Chapter 1 that software development usually involves the following stages:

• requirements analysis and definition
• system design
• program design
• writing the programs (program implementation)
• unit testing
• integration testing
• system testing
• system delivery
• maintenance

Each stage is itself a process (or collection of processes) that can be described as a set of activities. And each activity involves constraints, outputs, and resources. For example, the requirements analysis and definition stage needs as initial input a statement of desired functions and features that the user expresses in some way. The final output from this stage is a set of requirements, but there may be intermediate products as the dialog between user and developer results in changes and alternatives. We have constraints, too, such as a budget and schedule for producing the requirements document, and standards about the kinds of requirements to include and perhaps the notation used to express them.

Each of these stages is addressed in this book. For each one, we will take a close look at the processes, resources, activities, and outputs that are involved, and we will learn how


they contribute to the quality of the final product: useful software. There are many ways to address each stage of development; each configuration of activities, resources, and outputs constitutes a process, and a collection of processes describes what happens at each stage. For instance, design can involve a prototyping process, where many of the design decisions are explored so that developers can choose an appropriate approach, and a reuse process, where previously generated design components are included in the current design.

Each process can be described in a variety of ways, using text, pictures, or a combination. Software engineering researchers have suggested a variety of formats for such descriptions, usually organized as a model that contains key process features. For the remainder of this chapter, we examine a variety of software development process models, to see how organizing process activities can make development more effective.

2.2 SOFTWARE PROCESS MODELS

Many process models are described in the software engineering literature. Some are prescriptions for the way software development should progress, and others are descriptions of the way software development is done in actuality. In theory, the two kinds of models should be similar or the same, but in practice, they are not. Building a process model and discussing its subprocesses help the team understand the gap between what should be and what is.

There are several other reasons for modeling a process:

• When a group writes down a description of its development process, it forms a common understanding of the activities, resources, and constraints involved in software development.

• Creating a process model helps the development team find inconsistencies, redundancies, and omissions in the process and in its constituent parts. As these problems are noted and corrected, the process becomes more effective and focused on building the final product.

• The model should reflect the goals of development, such as building high-quality software, finding faults early in development, and meeting required budget and schedule constraints. As the model is built, the development team evaluates candidate activities for their appropriateness in addressing these goals. For example, the team may include requirements reviews, so that problems with the requirements can be found and fixed before design begins.

• Every process should be tailored for the special situation in which it will be used. Building a process model helps the development team understand where that tailoring is to occur.

Every software development process model includes system requirements as input and a delivered product as output. Many such models have been proposed over the years. Let us look at several of the most popular models to understand their commonalities and differences.

Waterfall Model

One of the first models to be proposed is the waterfall model, illustrated in Figure 2.1, where the stages are depicted as cascading from one to another (Royce 1970). As the figure implies, one development stage should be completed before the next begins. Thus, when all of the requirements are elicited from the customer, analyzed for completeness and consistency, and documented in a requirements document, then the development team can go on to system design activities. The waterfall model presents a very high-level view of what goes on during development, and it suggests to developers the sequence of events they should expect to encounter.

[Figure 2.1: The waterfall model. The stages cascade from one to the next: requirements analysis, system design, program design, coding, unit & integration testing, system testing, acceptance testing, and operation & maintenance.]

The waterfall model has been used to prescribe software development activities in a variety of contexts. For example, it was the basis for software development deliverables in U.S. Department of Defense contracts for many years, defined in Department of Defense Standard 2167-A. Associated with each process activity were milestones and deliverables, so that project managers could use the model to gauge how close the project was to completion at a given point in time. For instance, "unit and integration testing" in the waterfall ends with the milestone "code modules written, tested, and integrated"; the intermediate deliverable is a copy of the tested code. Next, the code can be turned over to the system testers so it can be merged with other system components (hardware or software) and tested as a larger whole.
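To see the bookkeeping this enables, here is a minimal sketch in Python (our own illustration, not part of the original text): the stage names follow Figure 2.1, while the milestone and deliverable labels are assumed for illustration.

    # Each waterfall stage paired with its milestone and deliverable.
    # Stage names follow Figure 2.1; milestone/deliverable labels are assumed.
    STAGES = [
        ("requirements analysis", "requirements documented", "requirements document"),
        ("system design", "design reviewed", "design specification"),
        ("program design", "module designs approved", "program design document"),
        ("coding", "modules written", "source code"),
        ("unit & integration testing", "modules tested and integrated", "tested code"),
        ("system testing", "system tests passed", "verified system"),
        ("acceptance testing", "customer sign-off", "accepted system"),
    ]

    def progress(completed_stages):
        # Gauge how close the project is to completion, as a manager might.
        done = sum(1 for stage, _, _ in STAGES if stage in completed_stages)
        return f"{done}/{len(STAGES)} stages complete"

    print(progress({"requirements analysis", "system design"}))  # 2/7 stages complete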

The waterfall model can be very useful in helping developers lay out what they need to do. Its simplicity makes it easy to explain to customers who are not familiar with software development; it makes explicit which intermediate products are necessary in order to begin the next stage of development. Many other, more complex models are really just embellishments of the waterfall, incorporating feedback loops and extra activities.

Many problems with the waterfall model have been discussed in the literature, and two of them are summarized in Sidebar 2.1. The biggest problem with the waterfall model is that it does not reflect the way code is really developed. Except for very well-understood problems, software is usually developed with a great deal of iteration. Often, software is used in a solution to a problem that has never before been solved or whose solution must be upgraded to reflect some change in business climate or operating environment. For example, an airplane manufacturer may require software for a new airframe that will be bigger or faster than existing models, so there are new challenges to address, even though the software developers have a great deal of experience in building aeronautical software. Neither the users nor the developers know all the key factors that affect the desired outcome, and much of the time spent during requirements analysis, as we will see in Chapter 4, may be devoted to understanding the items and processes affected by the system and its software, as well as the relationship between the system and the environment in which it will operate. Thus, the actual software development process, if uncontrolled, may look like Figure 2.2: developers may thrash from one activity to the next and then back again, as they strive to gather knowledge about the problem and how the proposed solution addresses it.

SIDEBAR 2.1 DRAWBACKS OF THE WATERFALL MODEL

Ever since the waterfall model was introduced, it has had many critics. For example, McCracken and Jackson (1981) pointed out that the model imposes a project management structure on system development. "To contend that any life cycle scheme, even with variations, can be applied to all system development is either to fly in the face of reality or to assume a life cycle so rudimentary as to be vacuous."

Notice that the waterfall model shows how each major phase of development terminates in the production of some artifact (such as requirements, design, or code). There is no insight into how each activity transforms one artifact to another, such as requirements to design. Thus, the model provides no guidance to managers and developers on how to handle changes to products and activities that are likely to occur during development. For instance, when requirements change during coding activities, the subsequent changes to design and code are not addressed by the waterfall model.

Curtis, Krasner, Shen, and Iscoe (1987) note that the waterfall model's major shortcoming is its failure to treat software as a problem-solving process. The waterfall model was derived from the hardware world, presenting a manufacturing view of software development. But manufacturing produces a particular item and reproduces it many times. Software is not developed like that; rather, it evolves as the problem becomes understood and the alternatives are evaluated. Thus, software is a creation process, not a manufacturing process. The waterfall model tells us nothing about the typical back-and-forth activities that lead to creating a final product. In particular, creation usually involves trying a little of this or that, developing and evaluating prototypes, assessing the feasibility of requirements, contrasting several designs, learning from failure, and eventually settling on a satisfactory solution to the problem at hand.

[Figure 2.2: The software development process in reality. Rather than proceeding in a strict sequence, development thrashes back and forth among activities such as system design, program design, system testing, and delivery.]


The software development process can help to control the thrashing by including activities and subprocesses that enhance understanding. Prototyping is such a subprocess; a prototype is a partially developed product that enables customers and developers to examine some aspect of the proposed system and decide if it is suitable or appropriate for the finished product. For example, developers may build a system to implement a small portion of some key requirements to ensure that the requirements are consistent, feasible, and practical; if not, revisions are made at the requirements stage rather than at the more costly testing stage. Similarly, parts of the design may be prototyped, as shown in Figure 2.3. Design prototyping helps developers assess alternative design strategies and decide which is best for a particular project. As we will see in Chapter 5, the designers may address the requirements with several radically different designs to see which has the best properties. For instance, a network may be built as a ring in one prototype and a star in another, and performance characteristics evaluated to see which structure is better at meeting performance goals or constraints.

[Figure 2.3: The waterfall model with prototyping. Prototyping runs alongside the waterfall stages; the requirements are validated against the customer's needs, and the design is verified against the requirements, before system testing, acceptance testing, and operation & maintenance.]

Often, the user interface is built and tested as a prototype, so that the users can understand what the new system will be like, and the designers get a better sense of how the users like to interact with the system. Thus, major kinks in the requirements are addressed and fixed well before the requirements are officially validated during system testing; validation ensures that the system has implemented all of the requirements, so that each system function can be traced back to a particular requirement in the specification. System testing also verifies the requirements; verification ensures that each function works correctly. That is, validation makes sure that the developer is building the right product (according to the specification), and verification checks the quality of the implementation. Prototyping is useful for verification and validation, but these activities can occur during other parts of the development process, as we will see in later chapters.
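The distinction between validation and verification can be made concrete with a small sketch (a hypothetical example in Python, with invented requirement and function names):

    # Validation: is every requirement realized by some system function
    # (building the right product)? Verification: does each function pass
    # its tests (building the product right)? All names are invented.
    requirements = {"R1": "print_report", "R2": "save_file"}
    test_results = {"print_report": True, "save_file": False}

    def validated():
        # Every requirement must trace to an implemented function.
        return all(func in test_results for func in requirements.values())

    def verified():
        # Every implemented function must pass its tests.
        return all(test_results.values())

    print(validated())  # True: every requirement traces to a function
    print(verified())   # False: save_file fails its tests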

V Model

The V model is a variation of the waterfall model that demonstrates how the testing activities are related to analysis and design (German Ministry of Defense 1992). As shown in Figure 2.4, coding forms the point of the V, with analysis and design on the left, testing and maintenance on the right. Unit and integration testing addresses the correctness of programs, as we shall see in later chapters. The V model suggests that unit and integration testing can also be used to verify the program design. That is, during unit and integration testing, the coders and test team members should ensure that all aspects of the program design have been implemented correctly in the code. Similarly, system testing should verify the system design, making sure that all system design aspects are correctly implemented. Acceptance testing, which is conducted by the customer rather than the developer, validates the requirements by associating a testing step with each element of the specification; this type of testing checks to see that all requirements have been fully implemented before the system is accepted and paid for.

[Figure 2.4: The V model. Requirements analysis, system design, and program design descend the left side to coding at the point of the V; unit & integration testing, system testing, acceptance testing, and operation & maintenance ascend the right side. Unit & integration testing verifies the program design, system testing verifies the system design, and acceptance testing validates the requirements.]

The model's linkage of the left side with the right side of the V implies that if problems are found during verification and validation, then the left side of the V can be reexecuted to fix and improve the requirements, design, and code before the testing steps on the right side are reenacted. In other words, the V model makes more explicit some of the iteration and rework that are hidden in the waterfall depiction. Whereas the focus of the waterfall is often documents and artifacts, the focus of the V model is activity and correctness.

Prototyping Model

We have seen how the waterfall model can be amended with prototyping activities to improve understanding. But prototyping need not be solely an adjunct of a waterfall; it can itself be the basis for an effective process model, shown in Figure 2.5.

[Figure 2.5: The prototyping model. Starting from system requirements (sometimes informal or incomplete), the developers loop through prototyping the requirements, the design, and the system; user/customer review of each prototype produces a list of revisions, and the loops repeat until the system is tested and delivered.]

Since the prototyping model allows all or part of a system to be constructed quickly to understand or clarify issues, it has the same objective as an engineering prototype, where requirements or design require repeated investigation to ensure that the developer, user, and customer have a common understanding both of what is needed and what is proposed. One or more of the loops for prototyping requirements, design, or the system may be eliminated, depending on the goals of the prototyping. However, the overall goal remains the same: reducing risk and uncertainty in development.

For example, system development may begin with a nominal set of requirements supplied by the customers and users. Then, alternatives are explored by having interested parties look at possible screens, tables, reports, and other system output that are used directly by the customers and users. As the users and customers decide on what they want, the requirements are revised. Once there is common agreement on what the requirements should be, the developers move on to design. Again, alternative designs are explored, often in consultation with customers and users.

The initial design is revised until the developers, users, and customers are happy with the result. Indeed, considering design alternatives sometimes reveals a problem with the requirements, and the developers drop back to the requirements activities to reconsider and change the requirements specification. Eventually, the system is coded and alternatives are discussed, with possible iteration through requirements and design again.

Operational Specification

For many systems, uncertainty about the requirements leads to changes and problems later in development. Zave (1984) suggests a process model that allows the developers and customers to examine the requirements and their implications early in the development process, where they can discuss and resolve some of the uncertainty. In the operational specification model, the system requirements are evaluated or executed in a way that demonstrates the behavior of the system. That is, once the requirements are specified, they can be enacted using a software package, so their implications can be assessed before design begins. For example, if the specification requires the proposed system to handle 24 users, an executable form of the specification can help analysts determine whether that number of users puts too much of a performance burden on the system.
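A minimal sketch of this idea, with invented numbers, shows how "executing" a specification can expose a performance problem before design begins (the service-time and response-time values are assumptions, not from the text):

    # Enact the specification: estimate response time for the required
    # number of users. All numbers are invented for illustration.
    MAX_USERS = 24                   # from the specification
    SERVICE_TIME_PER_REQUEST = 0.05  # seconds per request, assumed
    ACCEPTABLE_RESPONSE = 1.0        # seconds, assumed requirement

    def simulated_response_time(users):
        # Deliberately naive model: requests queue one behind another.
        return users * SERVICE_TIME_PER_REQUEST

    if simulated_response_time(MAX_USERS) > ACCEPTABLE_RESPONSE:
        print("Specification implies a performance burden; revisit requirements")
    else:
        print("Specification appears feasible for", MAX_USERS, "users")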

This type of process is very different from traditional models such as the waterfall model. The waterfall model separates the functionality of the system from the design (i.e., what the system is to do is separated from how the system does it), intending to keep the customer needs apart from the implementation. However, an operational specification allows the functionality and the design to be merged. Figure 2.6 illustrates how an operational specification works. Notice that the operational specification is similar to prototyping; the process enables user and developer to examine requirements early on.

[Figure 2.6: The operational specification model. System requirements (sometimes informal or incomplete) are captured in an operational, problem-oriented specification that is executed and revised; it is then transformed into an implementation-oriented specification, tested, and delivered.]

Transformational Model

Balzer's transformational model tries to reduce the opportunity for error by eliminating several major development steps. Using automated support, the transformational process applies a series of transformations to change a specification into a deliverable system (Balzer 1981a).


Sample transformations can include

• changing the data representations
• selecting algorithms
• optimizing
• compiling

Because many paths can be taken from the specification to the delivered system, the sequence of transformations and the decisions they reflect are kept as a formal development record.
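The idea of a recorded chain of transformations can be sketched in a few lines of Python; the transformation functions here are placeholders standing in for the automated steps, not Balzer's actual transformations:

    # Push a specification through a series of transformations, logging
    # each step in a formal development record.
    def choose_data_representations(spec):
        return spec + " [data representations chosen]"

    def select_algorithms(spec):
        return spec + " [algorithms selected]"

    def optimize(spec):
        return spec + " [optimized]"

    def transform(spec, steps):
        record = []  # the formal development record
        for step in steps:
            spec = step(spec)
            record.append((step.__name__, spec))  # decision kept with its result
        return spec, record

    system, record = transform(
        "formal specification",
        [choose_data_representations, select_algorithms, optimize])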

The transformational approach holds great promise. However, a major impediment to its use is the need for a formal specification expressed precisely so the transformations can operate on it, as shown in Figure 2.7. As formal specification methods become more popular, the transformational model may gain wider acceptance.

[Figure 2.7: The transformational model. System requirements (sometimes informal or incomplete) are captured in a formal specification, which a sequence of transformations (Transform 1 through Transform N) turns into a tested, delivered system. The sequence of transformations, plus the rationale for them, is kept in a formal development record, which is compared with the requirements and updated as needed.]

Phased Development: Increments and Iterations

In the early years of software development, customers were willing to wait a long time for software systems to be ready. Sometimes years would pass between the time the requirements documents were written and the time the system was delivered, called the cycle time. However, today's business environment no longer tolerates long delays. Software helps to distinguish products in the marketplace, and customers are always looking for new quality and functionality. For example, in 1996, 80 percent of Hewlett-Packard's revenues were derived from products introduced in the previous two years. Consequently, new process models were developed to help reduce cycle time.

One way to reduce cycle time is to use phased development, as shown in Figure 2.8. The system is designed so that it can be delivered in pieces, enabling the users to have some functionality while the rest is being developed. Thus, there are usually two systems functioning in parallel: the production system and the development system. The operational or production system is the one currently being used by the customer and user; the development system is the next version that is being prepared to replace the current production system. Often, we refer to the systems in terms of their release numbers: the developers build Release 1, test it, and turn it over to the users as the first operational release. Then, as the users use Release 1, the developers are building Release 2. Thus, the developers are always working on Release n + 1 while Release n is operational.

[Figure 2.8: The phased-development model. Over time, development systems (Build Release 1, Build Release 2, Build Release 3) run in parallel with production systems (Use Release 1, Use Release 2, Use Release 3).]

There are many ways for the developers to decide how to organize development into releases. The two most popular approaches are incremental development and iterative development. In incremental development, the system as specified in the requirements documents is partitioned into subsystems by functionality. The releases are defined by beginning with one small, functional subsystem and then adding functionality with each new release. The top of Figure 2.9 shows how incremental development slowly builds up to full functionality with each new release.

[Figure 2.9: The incremental and iterative models. Incremental development (top) adds functional subsystems with each release; iterative development (bottom) delivers a full system in the first release and improves its functionality in subsequent releases.]

However, iterative development delivers a full system at the very beginning and then changes the functionality of each subsystem with each new release. The bottom of Figure 2.9 illustrates three releases in an iterative development.

To understand the difference between incremental and iterative development, consider a word processing package. Suppose the package is to deliver three types of functionality: creating text, organizing text (i.e., cutting and pasting), and formatting text (such as using different type sizes and styles). To build such a system using incremental development, we might provide only the creation functions in Release 1, then both creation and organization in Release 2, and finally creation, organization, and formatting in Release 3. However, using iterative development, we would provide primitive forms of all three types of functionality in Release 1. For example, we can create text and then cut and paste it, but the cutting and pasting functions might be clumsy or slow. So in the next iteration, Release 2, we have the same functionality, but have enhanced the quality; now cutting and pasting are easy and quick. Each release improves on the previous ones in some way.
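The contrast can be summarized in a small sketch (the feature and quality labels are invented for the word processor example):

    # Incremental: functionality grows release by release.
    incremental = {
        1: {"create": "full"},
        2: {"create": "full", "organize": "full"},
        3: {"create": "full", "organize": "full", "format": "full"},
    }
    # Iterative: all functionality is present from Release 1 and improves.
    iterative = {
        1: {"create": "basic", "organize": "basic", "format": "basic"},
        2: {"create": "improved", "organize": "improved", "format": "improved"},
        3: {"create": "full", "organize": "full", "format": "full"},
    }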

In reality, many organizations use a combination of iterative and incremental development. A new release may include new functionality, but existing functionality from the current release may have been enhanced. These forms of phased development are desirable for several reasons:

1. Training can begin on an early release, even if some functions are missing. The training process allows developers to observe how certain functions are executed, suggesting enhancements for later releases. In this way, the developers can be very responsive to the users.

2. Markets can be created early for functionality that has never before been offered.

3. Frequent releases allow developers to fix unanticipated problems globally and quickly, as they are reported from the operational system.

4. The development team can focus on different areas of expertise with different releases. For instance, one release can change the system from a command-driven one to a point-and-click interface, using the expertise of the user-interface specialists; another release can focus on improving system performance.

Spiral Model

Boehm (1988) viewed the software development process in light of the risks involved, suggesting that a spiral model could combine development activities with risk management to minimize and control risk. The spiral model, shown in Figure 2.10, is in some sense like the iterative development shown in Figure 2.9. Beginning with the requirements and an initial plan for development (including a budget, constraints, and alternatives for staffing, design, and development environment), the process inserts a step to evaluate risks and prototype alternatives before a "concept of operations" document is produced to describe at a high level how the system should work. From that document, a set of requirements is specified and scrutinized to ensure that the requirements are as complete and consistent as possible. Thus, the concept of operations is the product of the first iteration, and the requirements are the principal product of the second. In the third iteration, system development produces the design, and the fourth enables testing.

[Figure 2.10: The spiral model. Each loop of the spiral determines goals, alternatives, and constraints; evaluates alternatives and risks; develops and tests; and plans the next iteration, ending with an implementation plan.]

With each iteration, the risk analysis weighs different alternatives in light of the requirements and constraints, and prototyping verifies feasibility or desirability before a particular alternative is chosen. When risks are identified, the project managers must decide how to eliminate or minimize the risk. For example, designers may not be sure whether users will prefer one type of interface over another. To minimize the risk of choosing an interface that will prevent productive use of the new system, the designers can prototype each interface and run tests to see which is preferred, or even choose to include two different interfaces in the design, so the users can select an interface when they log on. Constraints such as budget and schedule help to determine which risk-management strategy is chosen. We will discuss risk management in more detail in Chapter 3.

Agile Methods

Many of the software development processes proposed and used from the 1970s through the 1990s tried to impose some form of rigor on the way in which software is conceived, documented, developed, and tested. In the late 1990s, some developers who had resisted this rigor formulated their own principles, trying to highlight the roles that flexibility could play in producing software quickly and capably. They codified their thinking in an "agile manifesto" that focuses on four tenets of an alternative way of thinking about software development (Agile Alliance 2001):

• They value individuals and interactions over processes and tools. This philosophy includes supplying developers with the resources they need and then trusting them to do their jobs well. Teams organize themselves and communicate through face-to-face interaction rather than through documentation.

• They prefer to invest time in producing working software rather than in producing comprehensive documentation. That is, the primary measure of success is the degree to which the software works properly.

• They focus on customer collaboration rather than contract negotiation, thereby involving the customer in key aspects of the development process.

• They concentrate on responding to change rather than on creating a plan and then following it, because they believe that it is impossible to anticipate all requirements at the beginning of development.

The overall goal of agile development is to satisfy the customer by "early and continuous delivery of valuable software" (Agile Alliance 2001). Many customers have business needs that change over time, reflecting not only newly discovered needs but also the need to respond to changes in the marketplace. For example, as software is being designed and constructed, a competitor may release a new product that requires a change in the software's planned functionality. Similarly, a government agency or standards body may impose a regulation or standard that affects the software's design or requirements. It is thought that by building flexibility into the development process, agile methods can enable customers to add or change requirements late in the development cycle.

There are many examples of agile processes in the current literature. Each is based on a set of principles that implement the tenets of the agile manifesto. Examples include the following.

• Extreme programming (XP), described in detail below, is a set of techniques for leveraging the creativity of developers and minimizing the amount of administrative overhead.

• Crystal is a collection of approaches based on the notion that every project needs a different set of policies, conventions, and methodologies. Cockburn (2002), the creator of Crystal, believes that people have a major influence on software quality, and thus the quality of projects and processes improves as the quality of the people involved improves. Productivity increases through better communication and frequent delivery, because there is less need for intermediate work products.

• Scrum was created at Object Technology in 1994 and was subsequently commercialized by Schwaber and Beedle (2002). It uses iterative development, where each 30-day iteration is called a "sprint," to implement the product's backlog of prioritized requirements. Multiple self-organizing and autonomous teams implement product increments in parallel. Coordination is done at a brief daily status meeting called a "scrum" (as in rugby).

• Adaptive software development (ASD) has six basic principles. There is a mission that acts as a guideline, setting out the destination but not prescribing how to get there. Features are viewed as the crux of customer value, so the project is organized around building components to provide the features. Iteration is important, so redoing is as critical as doing; change is embraced, so that a change is viewed not as a correction but as an adjustment to the realities of software development. Fixed delivery times force developers to scope down the requirements essential for each version produced. At the same time, risk is embraced, so that the developers tackle the hardest problems first.

Often, the phrase "extreme programming" is used to describe the more general concept of agile methods. In fact, XP is a particular form of agile process, with guiding principles that reflect the more general tenets of the agile manifesto. Proponents of XP emphasize four characteristics of agility: communication, simplicity, courage, and feedback. Communication involves the continual interchange between customers and developers. Simplicity encourages developers to select the simplest design or implementation to address the needs of their customers. Courage is described by XP creators as commitment to delivering functionality early and often. Feedback loops are built into the various activities during the development process. For example, programmers work together to give each other feedback on the best way to implement a design, and customers work with developers to perform planning exercises.

These characteristics are embedded in what are known as the twelve facets of XP.

• The planning game: In this aspect of XP, the customer, who is on-site, defines what is meant by "value," so that each requirement can be assessed according to how much value is added by implementing it. The users write stories about how the system should work, and the developers then estimate the resources necessary to realize the stories. The stories describe the actors and actions involved, much like the use cases we define in more detail in Chapters 4 and 6. Each story relates one requirement; two or three sentences are all that is needed to explain the value of the requirement in sufficient detail for the developer to specify test cases and estimate resources for implementing the requirement. Once the stories are written, the prospective users prioritize requirements, splitting and merging them until consensus is reached on what is needed, what is testable, and what can be done with the resources available. The planners then generate a map of each release, documenting what the release includes and when it will be delivered.

• Small releases: The system is designed so that functionality can be delivered as soon as possible. Functions are decomposed into small parts, so that some functionality can be delivered early and then improved or expanded on in later releases. The small releases require a phased-development approach, with incremental or iterative cycles.

• Metaphor: The development team agrees on a common vision of how the system will operate. To support its vision, the team chooses common names and agrees on a common way of addressing key issues.

• Simple design: Design is kept simple by addressing only current needs. This approach reflects the philosophy that anticipating future needs can lead to unnecessary functionality. If a particular portion of a system is very complex, the team may build a spike (a quick and narrow implementation) to help it decide how to proceed.

• Writing tests first: To ensure that the customer's needs are the driving force behind development, test cases are written first, as a way of forcing customers to specify requirements that can be tested and verified once the software is built. Two kinds of tests are used in XP: functional tests that are specified by the customer and executed by both developers and users, and unit tests that are written and run by developers. In XP, functional tests are automated and, ideally, run daily. The functional tests are considered to be part of the system specification. Unit tests are written both before and after coding, to verify that each modular portion of the implementation works as designed. Both functional and unit testing are described in more detail in Chapter 8; a small test-first sketch appears after this list.

• Refactoring: As the system is built, it is likely that requirements will change. Because a major characteristic of XP philosophy is to design only to current requirements, it is often the case that new requirements force the developers to reconsider their existing design. Refactoring refers to revisiting the requirements and design, reformulating them to match new and existing needs. Sometimes refactoring addresses ways to restructure design and code without perturbing the system's external behavior. The refactoring is done in small steps, supported by unit tests and pair programming, with simplicity guiding the effort. We will discuss the difficulties of refactoring in Chapter 5.

• Pair programming: As noted in Chapter 1, there is a tension between viewing software engineering as an art and as a science. Pair programming attempts to address the artistic side of software development, acknowledging that the apprentice-master metaphor can be useful in teaching novice software developers how to develop the instincts of masters. Using one keyboard, two paired programmers develop a system from the specifications and design. One person has responsibility for finishing the code, but the pairing is flexible: a developer may have more than one partner on a given day. We will see in Chapter 7 how pair programming compares with the more traditional approach of individuals working separately until their modules have been unit-tested.

• Collective ownership: In XP, any developer can make a change to any part of the system as it is being developed. In Chapter 11, we will address the difficulties in managing change, including the errors introduced when two people try to change the same module simultaneously.

• Continuous integration: Delivering functionality quickly means that working systems can be promised to the customer daily and sometimes even hourly. The emphasis is on small increments or improvements rather than on grand leaps from one revision to the next.

• Sustainable pace: XP's emphasis on people includes acknowledging that fatigue can produce errors. So proponents of XP suggest a goal of 40 hours for each work week; pushing developers to devote heroic amounts of time to meeting deadlines is a signal that the deadlines are unreasonable or that there are insufficient resources for meeting them.

• On-site customer: Ideally, a customer should be present on-site, working with the developers to determine requirements and providing feedback about how to test them.

• Coding standards: Many observers think of XP and other agile methods as providing an unconstrained environment where anything goes. But in fact XP advocates clear definition of coding standards, to encourage teams to be able to understand and change each other's work. These standards support other practices, such as testing and refactoring. The result should be a body of code that appears to have been written by one person, and is consistent in its approach and expression.
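As promised above, here is a minimal test-first sketch using Python's unittest module; the word_count function and its expected behavior are invented for illustration. The tests are written first, capturing the customer's expectation, and the function is then written to make them pass:

    import unittest

    def word_count(text):
        # Written after the tests, just enough to make them pass.
        return len(text.split())

    class TestWordCount(unittest.TestCase):
        def test_counts_words(self):
            self.assertEqual(word_count("the quick brown fox"), 4)

        def test_empty_text(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()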

Extreme programming and agile methods are relatively new. The body of evidence for their effectiveness is small but growing. We will revisit many agile methods and concepts, and their empirical evaluation, in later chapters, as we discuss their related activities.

The process models presented in this chapter are only a few of those that are used or discussed. Other process models can be defined and tailored to the needs of the user, customer, and developer. As Sidebar 2.3 notes, we should really capture the development process as a collection of process models, rather than focusing on a single model or view.

SIDEBAR 2.2 WHEN IS EXTREME TOO EXTREME?

As with most software development approaches, agile methods are not without their critics. For example, Stephens and Rosenberg (2003) point out that many of extreme programming's practices are interdependent, a vulnerability if one of them is modified. To see why, suppose some people are uncomfortable with pair programming. More coordination and documentation may be required to address the shared vision that is missing when people work on their own. Similarly, many developers prefer to do some design before they write code. Scrum addresses this preference by organizing around monthly sprints. Elssamadissy and Schalliol (2002) note that, in extreme programming, requirements are expressed as a set of test cases that must be passed by the software. This approach may cause customer representatives to focus on the test cases instead of the requirements. Because the test cases are a detailed expression of the requirements and may be solution oriented, the emphasis on test cases can distract the representatives from the project's underlying goals and can lead to a situation where the system passes all the tests but is not what the customers thought they were paying for. As we will see in Chapter 5, refactoring may be the Achilles heel of agile methods; it is difficult to rework a software system without degrading its architecture.


SIDEBAR 2.3 COLLECTIONS OF PROCESS MODELS

We saw in Sidebar 2.1 that the development process is a problem-solving activity, but few of the popular process models include problem solving. Curtis, Krasner, and Iscoe (1988) performed a field study of 17 large projects, to determine which problem-solving factors should be captured in process models to aid our understanding of software development. In particular, they looked at the behavioral and organizational factors that affect project outcomes. Their results suggest a layered behavioral model of software development, including five key perspectives: the business milieu, the company, the project, the team, and the individual. The individual view provides information about cognition and motivation, and project and team views tell us about group dynamics. The company and business milieu provide information about organizational behavior that can affect both productivity and quality. This model does not replace traditional process models; rather, it is orthogonal, supplementing the traditional models with information on how behavior affects the creation and production activities.

As the developers and customers learn about the problem, they integrate their knowledge of domains, technology, and business to produce an appropriate solution. By viewing development as a collection of coordinating processes, we can see the effects of learning, technical communication, customer interaction, and requirements negotiation. Current models that prescribe a series of development tasks "provide no help in analyzing how much new information must be learned by the project staff, how discrepant requirements should be negotiated, how a design team can resolve architectural conflicts, and how these and similar factors contribute to a project's inherent uncertainty and risk" (Curtis, Krasner, and Iscoe 1988). However, when we include models of cognitive, social, and organizational processes, we begin to see the causes of bottlenecks and inefficiency. It is this insight that enables managers to understand and control the development process. And by aggregating behavior across layers of models, we can see how each model contributes to or compounds the effects of another model's factors.

No matter what process model is used, many activities are common to all. As we investigate software engineering in later chapters, we will examine each development activity to see what it involves and to find out what tools and techniques make us more effective and productive.

2.3 TOOLS AND TECHNIQUES FOR PROCESS MODELING

There are many choices for modeling tools and techniques, once you decide what you want to capture in your process model; we have seen several modeling approaches in our model depictions in the preceding section. The appropriate technique for you depends on your goals and your preferred work style. In particular, your choice of notation depends on what you want to capture in your model. The notations range from textual ones that express processes as functions, to graphical ones that depict processes as hierarchies of boxes and arrows, to combinations of pictures and text that link the graphical depiction to tables and functions elaborating on the high-level illustration. Many of the modeling notations can also be used for representing requirements and designs; we examine some of them in later chapters.

In this chapter, the notation is secondary to the type of model, and we focus on two major categories, static and dynamic. A static model depicts the process, showing how the inputs are transformed to outputs. A dynamic model enacts the process, so the user can see how intermediate and final products are transformed over time.

Static Modeling: Lai Notation

There are many ways to model a process statically. In the early 1990s, Lai (1991) developed a comprehensive process notation that is intended to enable someone to model any process at any level of detail. It builds on a paradigm where people perform roles while resources perform activities, leading to the production of artifacts. The process model shows the relationships among the roles, activities, and artifacts, and state tables show information about the completeness of each artifact at a given time.

In particular, the elements of a process are viewed in terms of seven types:

1. Activity: Something that will happen in a process. This element can be related to what happens before and after, what resources are needed, what triggers the activity's start, what rules govern the activity, how to describe the algorithms and lessons learned, and how to relate the activity to the project team.

2. Sequence: The order of activities. The sequence can be described using triggers, programming constructs, transformations, ordering, or satisfaction of conditions.

3. Process model: A view of interest about the system. Thus, parts of the process may be represented as a separate model, either to predict process behavior or to examine certain characteristics.

4. Resource: A necessary item, tool, or person. Resources can include equipment, time, office space, people, techniques, and so on. The process model identifies how much of each resource is needed for each activity.

5. Control: An external influence over process enactment. The controls may be manual or automatic, human or mechanical.

6. Policy: A guiding principle. This high-level process constraint influences process enactment. It may include a prescribed development process, a tool that must be used, or a mandatory management style.

7. Organization: The hierarchical structure of process agents, with physical grouping corresponding to logical grouping and related roles. The mapping from physical to logical grouping should be flexible enough to reflect changes in the physical environment.

The process description itself has several levels of abstraction, including the software development process that directs certain resources to be used in constructing specific modules, as well as generic models that may resemble the spiral or waterfall models. Lai's notation includes several templates, such as an Artifact Definition Template, which records information about particular artifacts.


Lai's approach can be applied to modeling software development processes; later in this chapter, we use it to model the risk involved in development. However, to demonstrate its use and its ability to capture many facets of a complex activity, we apply it to a relatively simple but familiar process, driving an automobile. Table 2.1 contains a description of the key resource in this process, a car.

Other templates define relations, process states, operations, analysis, actions, and roles. Graphical diagrams represent the relationships between elements, capturing the main relationships and secondary ones. For example, Figure 2.11 illustrates the process of starting a car. The "initiate" box represents the entrance conditions, and the "park" box represents an exit condition. The left-hand column of a condition box lists artifacts, and the right-hand column is the artifact state.

TABLE 2.1 Artifact Definition Form for Artifact "CAR" (Lai 1991)

Name: Car
Synopsis: This is the artifact that represents a class of cars.
Complexity type: Composite
Data type: (car_c, user-defined)

Artifact-state list:
  parked: ((state_of(car.engine) = off) (state_of(car.gear) = park)
    (state_of(car.speed) = stand))
    Car is not moving, and engine is not running.
  initiated: ((state_of(car.engine) = on) (state_of(car.keyhole) = has-key)
    (state_of(car-driver(car)) = in-car) (state_of(car.gear) = drive)
    (state_of(car.speed) = stand))
    Car is not moving, but the engine is running.
  moving: ((state_of(car.engine) = on) (state_of(car.keyhole) = has-key)
    (state_of(car-driver(car)) = driving)
    ((state_of(car.gear) = drive) or (state_of(car.gear) = reverse))
    ((state_of(car.speed) = stand) or (state_of(car.speed) = slow) or
     (state_of(car.speed) = medium) or (state_of(car.speed) = high)))
    Car is moving forward or backward.

Subartifact list:
  doors: The four doors of a car
  engine: The engine of a car
  keyhole: The ignition keyhole of a car
  gear: The gear of a car
  speed: The speed of a car

Relations list:
  car-key: This is the relation between a car and a key.
  car-driver: This is the relation between a car and a driver.

[Figure 2.11: The process of starting a car (Lai 1991). The diagram links condition boxes, whose left-hand columns list artifacts (such as car.engine, car.gear, car.speed, key, and driver) and whose right-hand columns give their states, through activities such as opening the door, getting in, putting the key in, turning the key, driving, parking, and locking the door, from the "initiate" entrance condition to the "park" exit condition.]

Transition diagrams supplement the process model by showing how the states are related to one another. For example, Figure 2.12 illustrates the transitions for a car.

Lai's notation is a good example of how multiple structures and strategies can be used to capture a great deal of information about the software development process. But it is also useful in organizing and depicting process information about user requirements, as the car example demonstrates.

[Figure 2.12: Transition diagram for a car (Lai 1991). Transitions such as initiate, go, stop, and get out connect the states PARKED, INITIATED, and MOVING.]
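The states and transitions in Table 2.1 and Figure 2.12 amount to a small state machine, which we can sketch directly in Python (the coding style is ours, not Lai's notation; the transition names follow the figure):

    # States and transitions for the CAR artifact.
    TRANSITIONS = {
        ("parked", "initiate"): "initiated",
        ("initiated", "go"): "moving",
        ("moving", "stop"): "initiated",
        ("initiated", "park"): "parked",
    }

    def enact(state, activity):
        # Fail loudly if the activity is not allowed in the current state.
        try:
            return TRANSITIONS[(state, activity)]
        except KeyError:
            raise ValueError(f"'{activity}' not allowed in state '{state}'")

    state = "parked"
    for activity in ("initiate", "go", "stop", "park"):
        state = enact(state, activity)
    print(state)  # parked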


Dynamic Modeling: System Dynamics

A desirable property of a process model is the ability to enact the process, so that we can watch what happens to resources and artifacts as activities occur. In other words, we want to describe a model of the process and then watch as software shows us how resources flow through activities to become outputs. This dynamic process view enables us to simulate the process and make changes before the resources are actually expended. For example, we can use a dynamic process model to help us decide how many testers we need or when we must initiate testing in order to finish on schedule. Similarly, we can include or exclude activities to see their effects on effort and schedule. For instance, we can add a code-review activity, making assumptions about how many faults we will find during the review, and determine whether reviewing shortens test time significantly.
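For instance, the code-review question can be posed as a tiny enactment with invented numbers (the fault counts and per-fault effort are assumptions chosen only to show the mechanics):

    # Compare fault-removal effort with and without a code review.
    FAULTS = 100                      # faults injected, assumed
    HOURS_PER_FAULT_IN_TEST = 5.0     # assumed
    HOURS_PER_FAULT_IN_REVIEW = 1.0   # assumed

    def fault_removal_hours(review_effectiveness):
        found_in_review = FAULTS * review_effectiveness
        left_for_testing = FAULTS - found_in_review
        return (found_in_review * HOURS_PER_FAULT_IN_REVIEW
                + left_for_testing * HOURS_PER_FAULT_IN_TEST)

    print(fault_removal_hours(0.0))  # 500.0 hours with no review
    print(fault_removal_hours(0.6))  # 260.0 hours if review finds 60% of faults

Under these assumptions, reviewing shortens fault-removal time dramatically; a real dynamic model would, of course, calibrate such numbers from project data.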

There are several ways to build dynamic process models. The systems dynamics approach, introduced by Forrester in the 1950s, has been useful for simulating diverse processes, including ecological, economic, and political systems (Forrester 1991). Abdel-Hamid and Madnick have applied system dynamics to software development, enabling project managers to "test out" their process choices before imposing them on developers (Abdel-Hamid 1989; Abdel-Hamid and Madnick 1991).

To see how system dynamics works, consider how the software development process affects productivity. We can build a descriptive model of the various activities that involve developers' time and then look at how changes in the model increase or decrease the time it takes to design, write, and test the code. First, we must determine which factors affect overall productivity. Figure 2.13 depicts Abdel-Hamid's understanding of these factors. The arrows indicate how changes in one factor affect changes in another. For example, if the fraction of experienced staff increases from one-quarter to one-half of the people assigned to the project, then we would expect the average potential productivity to increase, too. Similarly, the larger the staff (reflected in staff size), the more time is devoted to communication among project members (communication overhead).

[Figure 2.13: Model of factors contributing to productivity (Abdel-Hamid 1996). Factors such as the fraction of staff that is experienced, the nominal potential productivities of new and experienced staff, learning, schedule pressure, workload tolerances, staff size, motivation, and communication overhead combine to determine software development productivity.]

The figure shows us that average nominal potential productivity is affected by three things: the productivity of the experienced staff, the fraction of experienced staff, and the productivity of the new staff. At the same time, new staff must learn about the project; as more of the project is completed, the more the new staff must learn before they can become productive members of the team.

Other issues affect the overall development productivity. First, we must consider the fraction of each day that each developer can devote to the project. Schedule pressures affect this fraction, as do the developers' tolerances for workload. Staff size affects productivity, too, but the more staff, the more likely it is that time will be needed just to communicate information among team members. Communication and motivation, combined with the potential productivity represented in the upper half of Figure 2.13, suggest a general software development productivity relationship.

Thus, the first step in using system dynamics is to identify these relationships, based on a combination of empirical evidence, research reports, and intuition. The next step is to quantify the relationships. The quantification can involve direct relationships, such as that between staff size and communication. We know that if n people are assigned to a project, then there are n(n - 1)/2 potential pairs of people who must communicate and coordinate with one another. For some relationships, especially those that involve resources that change over time, we must assign distributions that describe the building up and diminishing of the resource. For example, it is rare for everyone on a project to begin work on the first day. The systems analysts begin, and coders join the project once the significant requirements and design components are documented. Thus, the distribution describes the rise and fall (or even the fluctuation, such as availability around holidays or summer vacations) of the resources.
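The staff-size relationship is easy to quantify exactly:

    # Potential communication pairs among n project members: n(n - 1)/2.
    def communication_pairs(n):
        return n * (n - 1) // 2

    for n in (4, 8, 16):
        print(n, "people ->", communication_pairs(n), "pairs")
    # 4 people -> 6 pairs; 8 people -> 28 pairs; 16 people -> 120 pairs

Doubling the staff roughly quadruples the potential communication paths, which is one reason added staff does not translate directly into added productivity.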

A system dynamics model can be extensive and complex. For example, Abdel-Hamid's software development model contains more than 100 causal links; Figure 2.14 shows an overview of the relationships he defined. He defined four major areas that affect productivity: software production, human resource management, planning, and control. Production includes issues of quality assurance, learning, and development rate. Human resources address hiring, turnover, and experience. Planning concerns schedules and the pressures they cause, and control addresses progress measurement and the effort required to finish the project.

[Figure 2.14: Structure of software development (Abdel-Hamid 1996). Software production, human resource management, planning, and control are linked by flows such as schedule pressure, workforce level, perceived progress, effort still needed, forecasted completion date, and adjustments to workforce and schedule.]

Because the number of links can be quite large, system dynamics models are supported by software that captures both the links and their quantitative descriptions and then simulates the overall process or some subprocess.

The power of system dynamics is impressive, but this method should be used with caution. The simulated results depend on the quantified relationships, which are often heuristic or vague, not clearly based on empirical research. However, as we will see in later chapters, a historical database of measurement information about the various aspects of development can help us gain confidence in our understanding of relationships, and thus in the results of dynamic models.


SIDEBAR 2.4 PROCESS PROGRAMMING


In the mid-1980s, Osterweil (1987) proposed that software engineering processes be specified using algorithmic descriptions. That is, if a process is well understood, we should be able to write a program to describe the process, and then run the program to enact the process. The goal of process programming is to eliminate uncertainty, both by having enough understanding to write software to capture its essence, and by turning the process into a deterministic solution to the problem.

Were process programming possible, we could have management visibility into all process activities, automate all activities, and coordinate and change all activities with ease. Thus, process programs could form the basis of an automated environment to produce software.

However, Curtis, Krasner, Shen, and Iscoe (1987) point out that Osterweil's analogy to computer programming does not capture the inherent variability of the underlying development process. When a computer program is written, the programmer assumes that the implementation environment works properly; the operating system, database manager, and hardware are reliable and correct, so there is little variability in the computer's response to an instruction. But when a process program issues an instruction to a member of the project team, there is great variability in the way the task is executed and in the results produced. As we will see in Chapter 3, differences in skill, experience, work habits, understanding of the customer's needs, and a host of other factors can increase variability dramatically. Curtis and his colleagues suggest that process programming be restricted only to those situations with minimal variability. Moreover, they point out that Osterweil's examples provide information only about the sequencing of tasks; the process program does not help to warn managers of impending problems. "The coordination of a web of creative intellectual tasks does not appear to be improved greatly by current implementations of process programming, because the most important source of coordination is to ensure that all of the interacting agents share the same mental model of how the system should operate" (Curtis et al. 1987).

2.4 PRACTICAL PROCESS MODELING

Process modeling has long been a focus of software engineering research. But how practical is it? Several researchers report that, used properly, process modeling offers great benefits for understanding processes and revealing inconsistencies. For example, Barghouti, Rosenblum, Belanger, and Alliegro (1995) conducted two case studies to determine the feasibility, utility, and limitations of using process models in large organizations. In this section, we examine what they did and what they found.

Marvel Case Studies

In both studies, the researchers used MSL, the Marvel Specification Language, to define the process, and then generated a Marvel process enactment environment for it (Kaiser, Feiler, and Popovich 1988; Barghouti and Kaiser 1991). MSL uses three main constructs (classes, rules, and tool envelopes) to produce a three-part process description:

1. a rule-based specification of process behavior

2. an object-oriented definition of the model's information

3. a set of envelopes to interface between Marvel and external software tools used to execute the process.

The first case study involved an AT&T call-processing network that carried phone calls, and a separate signaling network responsible for routing the calls and balancing the network's load. Marvel was used to describe the Signaling Fault Resolution process that is responsible for detecting, servicing, and resolving problems with the signaling network. Workcenter 1 monitored the network, detected faults, and referred the fault to one of the two other workcenters. Workcenter 2 handled software or human faults that required detailed analysis, and Workcenter 3 dealt with hardware failures. Figure 2.15 depicts this process. Double dashed lines indicate which activity uses the tool or database represented by an oval. A rectangle is a task or activity, and a diamond is a decision. Arrows indicate the flow of control. As you can see, the figure provides an overview but is not detailed enough to capture essential process elements.

FIGURE 2.15 Signaling Fault Resolution process (Barghouti et al. 1995). [Figure: a flowchart of the workcenter subprocesses. Workcenter 1 initiates the process, creates a trouble ticket, and refers faults either to Workcenter 2 (software or human faults) or to Workcenter 3 (equipment problems); each subprocess diagnoses its tickets and closes them when the fault is resolved. Rectangles are tasks, diamonds are decisions, ovals are tools or databases, and arrows show the flow of control.]

Consequently, each of the entities and workcenters is modeled using MSL. Figure 2.16 illustrates how that is done. The upper half of the figure defines the class TICKET, where a ticket represents the trouble ticket (or problem report) written whenever a failure occurs. As we will see in the chapters on testing, trouble tickets are used to track a problem from its occurrence to its resolution. The entire network was represented with 22 such MSL classes; all information created or required by a process was included.

FIGURE 2.16 Examples of Marvel commands (Barghouti et al. 1995). [Figure: MSL source text in two halves. The upper half is the class definition for trouble tickets, declaring TICKET as a subclass of ENTITY with attributes such as status, diagnostics, level, and description, and links to the workcenter, related tickets, and process instance. The lower half is the rule for diagnosing a ticket, which fires on open, undiagnosed tickets and sets the next task to refer the ticket to Workcenter 2 or Workcenter 3, depending on whether the diagnosis is terminal.]

Next, the model addressed behavioral aspects of the Signaling Fault Resolution process. The lower half of Figure 2.16 is an MSL rule that corresponds loosely to the box of Figure 2.15 labeled "Diagnose." Thus, the MSL describes the rule for diagnosing open problems; it is fired for each open ticket. When the process model was done, there were 21 MSL rules needed to describe the system.
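The following sketch, in Python rather than MSL, suggests how such rule-based enactment works: each rule pairs a condition over the process state with an effect, and the engine fires any rule whose condition holds. The ticket fields and task names loosely echo the diagnose rule, but the code is our illustration, not Marvel's implementation.

# A rough analogue (not MSL) of rule-based process enactment. A rule is a
# (condition, effect) pair; the engine repeatedly fires rules whose
# conditions hold until the process state stops changing.

tickets = [
    {"id": 1, "status": "open",   "diagnostics": "none", "next_task": None},
    {"id": 2, "status": "closed", "diagnostics": "none", "next_task": None},
]

def diagnose_applies(ticket):
    # Echoes the MSL rule: fire for each open, undiagnosed ticket.
    return ticket["status"] == "open" and ticket["diagnostics"] == "none"

def diagnose(ticket):
    # In Marvel, a tool envelope would invoke the real diagnostic tool here;
    # we simply pretend the tool reported a terminal (hardware) fault.
    result = "terminal"
    ticket["diagnostics"] = result
    ticket["next_task"] = "refer_to_WC3" if result == "terminal" else "refer_to_WC2"

rules = [(diagnose_applies, diagnose)]

changed = True
while changed:                     # enact rules until the state is stable
    changed = False
    for applies, fire in rules:
        for ticket in tickets:
            if applies(ticket):
                fire(ticket)
                changed = True

print(tickets)                     # ticket 1 is now referred to Workcenter 3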

The second case study addressed part of the software maintenance process for AT&T's 5ESS switching software. Unlike the first case study, where the goal was process improvement, the second study aimed only to document the process steps and interactions by capturing them in MSL. The model contained 25 classes and 26 rules.

For each model, the MSL process descriptions were used to generate "process enactment environments," resulting in a database populated with instances of the information model's classes. Then, researchers simulated several scenarios to verify that the models performed as expected. During the simulation, they collected timing and resource utilization data, providing the basis for analyzing likely process performance. By changing the rules and executing a scenario repeatedly, the timings were compared and contrasted, leading to significant process improvement without major investment in resources.

The modeling and simulation exercises were useful for early problem identification and resolution. For example, the software maintenance process definition uncovered three types of problems with the existing process documentation: missing task inputs and outputs, ambiguous input and output criteria, and inefficiency in the process definition. The signaling fault model simulation discovered inefficiencies in the separate descriptions of the workcenters.

Barghouti and his colleagues note the importance of dividing the process modeling problem into two pieces: modeling the information and modeling the behavior. By separating these concerns, the resulting model is clear and concise. They also point out that computer-intensive activities are more easily modeled than human-intensive ones, a lesson noted by Curtis and his colleagues, too.

Desirable Properties of Process Modeling Tools and Techniques

There are many process modeling tools and techniques, and researchers continue to work to determine which ones are most appropriate for a given situation. But there are some characteristics that are helpful, regardless of technique. Curtis, Kellner, and Over (1992) have identified five categories of desirable properties:

1. Facilitates human understanding and communication. The technique should be able to represent the process in a form that most customers and developers can understand, encouraging communication about the process and agreement on its form and improvements. The technique should include sufficient information to allow one or more people to actually perform the process. And the model or tool should form a basis for training.

2. Supports process improvement. The technique should identify the essential components of a development or maintenance process. It should allow reuse of processes or subprocesses on subsequent projects, compare alternatives, and estimate the impact of changes before the process is actually put into practice. Similarly, the technique should assist in selecting tools and techniques for the process, in encouraging organizational learning, and in supporting continuing evolution of the process.

3. Supports process management. The technique should allow the process to be project-specific. Then, developers and customers should be able to reason about attributes of software creation or evolution. The technique should also support planning and forecasting, monitoring and managing the process, and measuring key process characteristics.

4. Provides automated guidance in performing the process. The technique should define all or part of the software development environment, provide guidance and suggestions, and retain reusable process representations for later use.

5. Supports automated process execution. The technique should automate all or part of the process, support cooperative work, capture relevant measurement data, and enforce rules to ensure process integrity.

These characteristics can act as useful guidelines for selecting a process modeling technique for your development project. Item 4 is especially important if your organization is attempting to standardize its process; tools can help prompt developers about what to do next and provide gateways and checkpoints to assure that an artifact meets certain standards before the next steps are taken. For example, a tool can check a set of code components, evaluating their size and structure. If size or structure exceeds predefined limits, the developers can be notified before testing begins, and some components may be reviewed and perhaps redesigned.
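As a hedged sketch of such a gateway, the script below flags source files whose line counts exceed a predefined limit before testing begins; the limit, the file suffix, and the directory name are all assumptions for illustration.

# A sketch of an automated checkpoint: flag components whose size exceeds
# a predefined limit so they can be reviewed before testing begins.
from pathlib import Path

MAX_LINES = 400                    # hypothetical size limit per component

def oversized_components(src_dir):
    flagged = []
    for path in Path(src_dir).rglob("*.c"):       # assumed C components
        lines = sum(1 for _ in path.open(errors="ignore"))
        if lines > MAX_LINES:
            flagged.append((path, lines))
    return flagged

for path, lines in oversized_components("src"):
    print("Review before testing:", path, "has", lines, "lines")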

2.5 INFORMATION SYSTEMS EXAMPLE

Let us consider which development process to use for supporting our information system example, the Piccadilly Television advertising program. Recall that there are many constraints on what kinds of advertising can be sold when, and that the regulations may change with rulings by the Advertising Standards Authority and other regulatory bodies. Thus, we want to build a software system that is easily maintained and changed. There is even a possibility that the constraints may change as we are building the system.

The waterfall model may be too rigid for our system, since it permits little flexibility after the requirements analysis stage is complete. Prototyping may be useful for building the user interface, so we may want to include some kind of prototyping in our model. But most of the uncertainty lies in the advertising regulations and business constraints. We want to use a process model that can be used and reused as the system evolves. A variation of the spiral model may be a good candidate for building the Piccadilly system, because it encourages us to revisit our assumptions, analyze our risks, and prototype various system characteristics. The repeated evaluation of alternatives, shown in the upper-left-hand quadrant of the spiral, helps us build flexibility into our requirements and design.

Boehm's representation of the spiral is high-level, without enough detail to direct the actions of analysts, designers, coders, and testers. However, there are many techniques and tools for representing the process model at finer levels of detail. The choice of technique or tool depends in part on personal preference and experience, and in part on suitability for the type of process being represented. Let us see how Lai's notation might be used to represent part of the Piccadilly system's development process.

Because we want to use the spiral model to help us manage risk, we must include a characterization of "risk" in our process model. That is, risk is an artifact that we must describe, so we can measure and track risk in each iteration of our spiral. Each potential problem has an associated risk, and we can think of the risk in terms of two facets: probability and severity. Probability is the likelihood that a particular problem will occur, and severity is the impact it will have on the system. For example, suppose we are considering the problem of insufficient training in the development method being used to build the Piccadilly system. We may decide to use an object-oriented approach, but we may find that the developers assigned to the project have little or no experience in object orientation. This problem may have a low probability of occurring, since all new employees are sent to an intensive, four-week course on object-oriented development. On the other hand, should the problem actually occur, it would have a severe impact on the ability of the development team to finish the software within the assigned schedule. Thus, the probability of occurrence is low, but the severity is large.

We can represent these risk situations in a Lai artifact table, shown in Table 2.2. Here, risk is the artifact, with subartifacts probability and severity. For simplicity, we

TABLE 2.2 Artifact Definition Form for Artifact "Risk"

Name: Risk (ProblemX)

Synopsis: This is the artifact that represents the risk that problem X will occur and have a negative effect on some aspect of the development process.

Complexity type: Composite

Data type: (risk_s, user_defined)

Artifact-state list:

low: (state_of(probability.x) = low) and (state_of(severity.x) = small). Probability of problem is low, problem impact is small.

high-medium: (state_of(probability.x) = low) and (state_of(severity.x) = large). Probability of problem is low, problem impact is large.

low-medium: (state_of(probability.x) = high) and (state_of(severity.x) = small). Probability of problem is high, problem impact is small.

high: (state_of(probability.x) = high) and (state_of(severity.x) = large). Probability of problem is high, problem impact is large.

Subartifact list:

probability.x: The probability that problem X will occur.

severity.x: The severity of the impact on the project should problem X occur.



have chosen only two states for each subartifact: low and high for probability, and small and large for severity. In fact, each of the subartifacts can have a large range of states (such as extremely small, very small, somewhat small, medium, somewhat high, very high, extremely high), leading to many different states for the artifact itself.
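A small sketch shows how the artifact-state mapping of Table 2.2 might be encoded; the state names follow the table as reconstructed above, and the function name is ours.

# The state of the composite "risk" artifact is derived from the states of
# its subartifacts, probability.x and severity.x, as in Table 2.2.

RISK_STATES = {
    ("low",  "small"): "low",
    ("low",  "large"): "high-medium",
    ("high", "small"): "low-medium",
    ("high", "large"): "high",
}

def risk_state(probability, severity):
    return RISK_STATES[(probability, severity)]

# The training-course example: low probability, large severity.
print(risk_state("low", "large"))   # prints high-medium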

In the same way, we can define the other aspects of our development process and use diagrams to illustrate the activities and their interconnections. Modeling the process in this way has many advantages, not the least of which is building a common understanding of what development will entail. If users, customers, and developers participate in defining and depicting Piccadilly's development process, each will have expectations about what activities are involved, what they produce, and when each product can be expected. In particular, the combination of spiral model and risk table can be used to evaluate the risks periodically. With each revolution around the spiral, the probability and severity of each risk can be revisited and restated; when risks are unacceptably high, the process model can be revised to include risk mitigation and reduction techniques, as we will see in Chapter 3.

2.6 REAL-TIME EXAMPLE

The Ariane-5 software involved the reuse of software from Ariane-4. Reuse was intended to reduce risk, increase productivity, and increase quality. Thus, any process model for developing new Ariane software should include reuse activities. In particular, the process model must include activities to check the quality of reusable components, with safeguards to make sure that the reused software works properly within the context of the design of the new system.

Such a process model might look like the simplified model of Figure 2.17. The boxes in the model represent activities. The arrows entering the box from the left are resources, and those leaving on the right are outputs. Those entering from the top are controls or constraints, such as schedules, budgets, or standards. And those entering from below are mechanisms that assist in performing the activity, such as tools, databases, or techniques.

The Ariane-4 reuse process begins with the software's mission, namely, controlling a new rocket, as well as software from previous airframes, unmet needs, and other software components available from other sources (such as purchased software or reuse repositories from other projects). Based on the business strategy of the aerospace builder, the developers can identify reusable subprocesses, describe them (perhaps with annotations related to past experience), and place them in a library for consideration by the requirements analysts. The reusable processes will often involve reusable components (i.e., reusable requirements, design or code components, or even test cases, process descriptions, and other documents and artifacts).

FIGURE 2.17 Reuse process model for new airframe software. [Figure: an activity diagram driven by missions, software from previous airframes, unmet domain needs, and business strategies. Its activities are: identify reusable subprocesses, describe subprocesses, populate a domain library of potential reusable components, perform requirements analysis to produce revised requirements for the new airframe, design the software, evaluate and certify selected reusable components, and build or change the software; an experience base supports the early activities.]

Next, the requirements analysts examine the requirements for the new airframe and the reusable components that are available in the library. They produce a revised set of requirements, consisting of a mix of new and reused requirements. Then, the designers use those requirements to design the software. Once their design is complete, they evaluate all reused design components to certify that they are correct and consistent with the new parts of the design and the overall intention of the system as described in the requirements. Finally, the certified components are used to build or change the software and produce the final system. As we will see in later chapters, such a process might have prevented the destruction of Ariane-5.

2.7 WHAT THIS CHAPTER MEANS FOR YOU

In this chapter, we have seen that the software development process involves activities, resources, and products. A process model is useful for guiding your behavior when you are working with a group. Detailed process models tell you how to coordinate and collaborate with your colleagues as you design and build a system. We have also seen that process models include organizational, functional, behavioral, and other perspectives, so you can focus on particular aspects of the development process to enhance your understanding or guide your actions.

2.8 WHAT THIS CHAPTER MEANS FOR YOUR DEVELOPMENT TEAM

A process model has clear advantages for your development team, too. A good model shows each team member which activities occur when, and by whom, so that the division of duties is clear. In addition, the project manager can use process tools to enact the process, simulating activities and tracking resources to determine the best mix of people and activities in order to meet the project's budget and schedule. This simulation is done before resources are actually committed, so time and money are saved by not having to backtrack or correct mistakes. Indeed, iteration and incremental development can be included in the process model, so the team can learn from prototyping or react to evolving requirements and still meet the appropriate deadlines.

2.9 WHAT THIS CHAPTER MEANS FOR RESEARCHERS

Process modeling is a rich field of research interest in software engineering. Many software developers feel that, by using a good process, the quality of the products of development can be guaranteed. Thus, there are several areas into which researchers are looking:

• Process notations: How to write down the process in a way that is understandable to those who must carry it out

• Process models: How to depict the process, using an appropriate set of activities, resources, products, and tools

• Process modeling support tools: How to enact or simulate a process model, so that resource availability, usage, and performance can be assessed

• Process measurement and assessment: How to determine which activities, resources, subprocesses, and model types are best for producing high-quality products in a specified time or environment

Many of these efforts are coordinated with process improvement research, an area we will investigate in Chapter 13.

2.10 TERM PROJECT

It is early in the development process of the Loan Arranger system for FCO. You do not yet have a comprehensive set of requirements for the system. All you have is an overview of system functionality, and a sense of how the system will be used to support FCO's business. Many of the terms used in the overview are unfamiliar to you, so you have asked the customer representatives to prepare a glossary. They give you the description in Table 2.3.

This information clarifies some concepts for you, but you are still far from having a good set of requirements. Nevertheless, you can make some preliminary decisions about how the development should proceed. Review the processes presented in this chapter and determine which ones might be appropriate for developing the Loan Arranger. For each process, make a list of its advantages and disadvantages with respect to the Loan Arranger.

TABLE 2.3 Glossary of Terms for the Loan Arranger

Borrower: A borrower is the recipient of money from a lender. Borrowers may receive loans jointly; that is, each loan may have multiple borrowers. Each borrower has an associated name and a unique borrower identification number.

Borrower's risk: The risk factor associated with any borrower is based on the borrower's payment history. A borrower with no loans outstanding is assigned a nominal borrower's risk factor of 50. The risk factor decreases when the borrower makes payments on time but increases when a borrower makes a payment late or defaults on a loan. The borrower's risk is calculated using the following formula:

Risk = 50 - [10 x (number of years of loans in good standing)] + [20 x (number of years of loans in late standing)] + [30 x (number of years of loans in default standing)]

For example, a borrower may have three loans. The first loan was taken out two years ago, and all payments have been made on time. That loan is in good standing and has been so for two years. The second and third loans are four and five years old, respectively, and each one was in good standing until recently. Thus, each of the two late-standing loans has been in late standing only for one year. Thus, the risk is

50 - [10 x 2] + [20 x (1 + 1)] + [30 x 0] = 70.

The maximum risk value is 100, and the minimum risk value is 1.

Bundle: A bundle is a collection of loans that has been associated for sale as a single unit to an investor. Associated with each bundle is the total value of loans in the bundle, the period of time over which the loans in the bundle are active (i.e., for which borrowers are still making payments on the loans), an estimate of the risk involved in purchasing the bundle, and the profit to be made when all loans are paid back by the borrowers.

Bundle risk: The risk of a loan bundle is the weighted average of the risks of the loans in the bundle, with each loan's risk (see loan risk, below) weighted according to that loan's value. To calculate the weighted average over n loans, assume that each loan Li has remaining principal Pi and loan risk Ri. The weighted average is then

(P1R1 + P2R2 + ... + PnRn) / (P1 + P2 + ... + Pn)

Discount: The discount is the price at which FCO is willing to sell a loan to an investor. It is calculated according to the formula

Discount = (principal remaining) x [(interest rate) x (0.2 + (0.005 x (101 - (loan risk))))]

Interest rate type: An interest rate on a loan is either fixed or adjustable. A fixed-rate loan (called an FRM) has the same interest rate for the term of the mortgage. An adjustable rate loan (called an ARM) has a rate that changes each year, based on a government index supplied by the U.S. Department of the Treasury.

Investor: An investor is a person or organization that is interested in purchasing a bundle of loans from FCO.

Investment request: An investor makes an investment request, specifying a maximum degree of risk at which the investment will be made, the minimum amount of profit required in a bundle, and the maximum period of time over which the loans in the bundle must be paid.

Lender: A lender is an institution that makes loans to borrowers. A lender can have zero, one, or many loans.

Lender information: Lender information is descriptive data that are imported from outside the application. Lender information cannot be changed or deleted. The following information is associated with each lender: lender name (institution), lender contact (person at that institution), phone number for contact, and a unique lender identification number. Once added to the system, a lender entry can be edited but not removed.

Lending institution: A synonym for lender. See lender.

Loan: A loan is a set of information that describes a home loan and the borrower-identifying information associated with the loan. The following information is associated with each loan: loan amount, interest rate, interest rate type (adjustable or fixed), settlement date (the date the borrower originally borrowed the money from the lender), term (expressed as number of years), borrower, lender, loan type (jumbo or regular), and property (identified by the address of the property). A loan must have exactly one associated lender and exactly one associated borrower. In addition, each loan is identified with a loan risk and a loan status.

Loan analyst: The loan analyst is a professional employee of FCO who is trained in using the Loan Arranger system to manage and bundle loans. Loan analysts are familiar with the terminology of loans and lending, but they may not have all the relevant information at hand with which to evaluate a single loan or collection of loans.

Loan risk: Each loan is associated with a level of risk, indicated by an integer from 1 to 100. 1 represents the lowest-risk loan; that is, it is unlikely that the borrower will be late or default on this loan. 100 represents the highest risk; that is, it is almost certain that the borrower will default on this loan.

Loan status: A loan can have one of three status designations: good, late, or default. A loan is in good status if the borrower has made all payments up to the current time. A loan is in late status if the borrower's last payment was made but not by the payment due date. A loan is in default status if the borrower's last payment was not received within 10 days of the due date.

Loan type: A loan is either a jumbo mortgage, where the property is valued in excess of $275,000, or a regular mortgage, where the property value is $275,000 or less.

Portfolio: The collection of loans purchased by FCO and available for inclusion in a bundle. The repository maintained by the Loan Arranger contains information about all of the loans in the portfolio.
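To check your reading of the glossary, the sketch below encodes its three formulas (borrower's risk, bundle risk, and discount) directly; the function names are ours, and the example values come from the borrower's risk illustration above.

# Direct encodings of the Table 2.3 formulas. Function names are ours.

def borrower_risk(good_years, late_years, default_years):
    risk = 50 - 10 * good_years + 20 * late_years + 30 * default_years
    return max(1, min(100, risk))          # risk is bounded by 1 and 100

def bundle_risk(loans):
    # loans is a list of (remaining_principal, loan_risk) pairs; bundle risk
    # is the principal-weighted average of the individual loan risks.
    total_principal = sum(p for p, _ in loans)
    return sum(p * r for p, r in loans) / total_principal

def discount(principal_remaining, interest_rate, loan_risk):
    return principal_remaining * (interest_rate * (0.2 + 0.005 * (101 - loan_risk)))

# The glossary's example borrower: 2 good years and 1 + 1 late years.
print(borrower_risk(2, 2, 0))              # prints 70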

2.11 KEY REFERENCES

As a result of the Fifth International Software Process Workshop, a working group chaired by Kellner formulated a standard problem, to be used to evaluate and compare some of the more popular process modeling techniques. The problem was designed to be complex enough so that it would test a technique's ability to include each of the following:

• multiple levels of abstraction

• control flow, sequencing, and constraints on sequencing


• decision points

• iteration and feedback to earlier steps

• user creativity

• object and information management, as well as flow through the process

• object structure, attributes, and interrelationships

• organizational responsibility for specific tasks

• physical communication mechanisms for information transfer

• process measurements

• temporal aspects (both absolute and relative)

• tasks executed by humans

• professional judgment or discretion

• connection to narrative explanations

• tasks invoked or executed by a tool

• resource constraints and allocation, schedule determination

• process modification and improvement

• multiple levels of aggregation and parallelism


Eighteen different process modeling techniques were applied to the common problem, and varying degrees of satisfaction were found with each one. The results are reported in Kellner and Rombach (1990).

Curtis, Kellner, and Over (1992) present a comprehensive survey of process modeling techniques and tools. The paper also summarizes basic language types and constructs and gives examples of process modeling approaches that use those language types.

Krasner et al. (1992) describe lessons learned when implementing a software process modeling system in a commercial environment.

Several Web sites contain information about process modeling.

• The U.S. Software Engineering Institute (SEI) continues to investigate process modeling as part of its process improvement efforts. A list of its technical reports and activities can be found at http://www.sei.cmu.edu. The information at http://www.sei.cmu.edu/collaborating/spins/ describes Software Process Improvement Networks, geographically based groups of people interested in process improvement who often meet to hear speakers or discuss process-related issues.

• The European Community has long sponsored research in process modeling and a process model language. Descriptions of current research projects are available at http://cordis.europa.eu/fp7/projects_en.html.

• The Data and Analysis Centre for Software Engineering maintains a list of resources about software process at https://www.thedacs.com/databases/url/key/39.

More information about Lai notation is available in David Weiss and Robert Lai's book, Software Product Line Engineering: A Family-based Software Development Process (Weiss and Lai 1999).

The University of Southern California's Center for Software Engineering has developed a tool to assist you in selecting a process model suitable for your project's requirements and constraints. It can be ftp-ed from ftp://usc.edu/pub/soft_engineering/demos/prnsa.zip, and more information can be found on the Center's Web site: http://sunset.usc.edu.

Journals such as Software Process: Improvement and Practice have articles addressing the role of process modeling in software development and maintenance. They also report the highlights of relevant conferences, such as the International Software Process Workshop and the International Conference on Software Engineering. The July/August 2000 issue of IEEE Software focuses on process diversity and has several articles about the success of a process maturity approach to software development.

There are many resources available for learning about agile methods. The Agile Manifesto is posted at http://www.agilealliance.org. Kent Beck's (1999) is the seminal book on extreme programming, and Alistair Cockburn (2002) describes the Crystal family of methodologies. Martin Fowler (1999) explains refactoring, which is one of the most difficult steps of XP. Two excellent references on agile methods are Robert C. Martin's (2003) book on agile software development, and Daniel H. Steinberg and Daniel W. Palmer's (2004) book on extreme software engineering. Two Web sites providing additional information about extreme programming are http://www.xprogramming.com and http://www.extremeprogramming.org.



2.12 EXERCISES

1. How does the description of a system relate to the notion of process models? For example, how do you decide what the boundary should be for the system described by a process model?

2. For each of the process models described in this chapter, what are the benefits and drawbacks of using the model?

3. For each of the process models described in this chapter, how does the model handle a significant change in requirements late in development?

4. Draw a diagram to capture the process of buying an airplane ticket for a business trip.

5. Draw a Lai artifact table to define a module. Make sure that you include artifact states that show the module when it is untested, partially tested, and completely tested.

6. Using the notation of your choice, draw a process diagram of a software development process that prototypes three different designs and chooses the best from among them.

7. Examine the characteristics of good process models described in Section 2.4. Which characteristics are essential for processes to be used on projects where the problem and solution are not well understood?

8. In this chapter, we suggested that software development is a creation process, not a manufacturing process. Discuss the characteristics of manufacturing that apply to software development and explain which characteristics of software development are more like a creative endeavor.

9. Should a development organization adopt a single process model for all of its software development? Discuss the pros and cons.

10. Suppose your contract with a customer specifies that you use a particular software development process. How can the work be monitored to enforce the use of this process?

11. Consider the processes introduced in this chapter. Which ones give you the most flexibility to change in reaction to changing requirements?

12. Suppose Amalgamated, Inc., requires you to use a given process model when it contracts with you to build a system. You comply, building software using the prescribed activities, resources, and constraints. After the software is delivered and installed, your system experiences a catastrophic failure. When Amalgamated investigates the source of the failure, you are accused of not having done code reviews that would have found the source of the problem before delivery. You respond that code reviews were not in the required process. What are the legal and ethical issues involved in this dispute?

3 Planning and Managing the Project

In this chapter, we look at

• tracking project progress

• project personnel and organization

• effort and schedule estimation

• risk management

• using process modeling with project planning

As we saw in the previous chapters, the software development cycle includes many steps, some of which are repeated until the system is complete and the customers and users are satisfied. However, before committing funds for a software development or maintenance project, a customer usually wants an estimate of how much the project will cost and how long the project will take. This chapter examines the activities necessary to plan and manage a software development project.

3.1 TRACKING PROGRESS


Software is useful only if it performs a desired function or provides a needed service. Thus, a typical project begins when a customer approaches you to discuss a perceived need. For example, a large national bank may ask you for help in building an information system that allows the bank's clients to access their account information, no matter where in the world the clients are. Or you may be contacted by marine biologists who would like a system to connect with their water-monitoring equipment and perform statistical analyses of the data gathered. Usually, customers have several questions to be answered:

• Do you understand my problem and my needs?

• Can you design a system that will solve my problem or satisfy my needs?

• How long will it take you to develop such a system?

• How much will it cost to have you develop such a system?



Answering the last two questions requires a well-thought-out project schedule. A project schedule describes the software development cycle for a particular project by enumerating the phases or stages of the project and breaking each into discrete tasks or activities to be done. The schedule also portrays the interactions among these activities and estimates the time that each task or activity will take. Thus, the schedule is a timeline that shows when activities will begin and end, and when the related development products will be ready.

In Chapter 1, we learned that a systems approach involves both analysis and synthesis: breaking the problem into its component parts, devising a solution for each part, and then putting the pieces together to form a coherent whole. We can use this approach to determine the project schedule. We begin by working with customers and potential users to understand what they want and need. At the same time, we make sure that they are comfortable with our knowledge of their needs. We list all project deliverables, that is, the items that the customer expects to see during project development. Among the deliverables may be

• documents

• demonstrations of function

• demonstrations of subsystems

• demonstrations of accuracy

• demonstrations of reliability, security, or performance

Next, we determine what activities must take place in order to produce these deliverables. We may use some of the process modeling techniques we learned in Chapter 2, laying out exactly what must happen and which activities depend on other activities, products, or resources. Certain events are designated to be milestones, indicating to us and our customers that a measurable level of progress has been made. For example, when the requirements are documented, inspected for consistency and completeness, and turned over to the design team, the requirements specification may be a project milestone. Similarly, milestones may include the completion of the user's manual, the performance of a given set of calculations, or a demonstration of the system's ability to communicate with another system.

In our analysis of the project, we must distinguish clearly between milestones and activities. An activity is a part of the project that takes place over a period of time, whereas a milestone is the completion of an activity, a particular point in time. Thus, an activity has a beginning and an end, whereas a milestone is the end of a specially designated activity. For example, the customer may want the system to be accompanied by an online operator tutorial. The development of the tutorial and its associated programs is an activity; it culminates in the demonstration of those functions to the customer: the milestone.

By examining the project carefully in this way, we can separate development into a succession of phases. Each phase is composed of steps, and each step can be subdivided further if necessary, as shown in Figure 3.1.

FIGURE 3.1 Phases, steps, and activities in a project. [Figure: a tree whose root is the project, branching into phases 1 through n; each phase branches into steps, and each step branches into activities.]

To see how this analysis works, consider the phases, steps, and activities of Table 3.1, which describes the building of a house. First, we consider two phases: landscaping the lot and building the house itself. Then, we break each phase into smaller steps, such as clearing and grubbing, seeding the turf, and planting trees and shrubs. Where necessary, we can divide the steps into activities; for example, finishing the interior involves completing the interior plumbing, interior electrical work, wallboard, interior painting, floor covering, doors, and fixtures. Each activity is a measurable event and we have objective criteria to determine when the activity is complete. Thus, any activity's end can be a milestone, and Table 3.2 lists the milestones for phase 2.

This analytical breakdown gives us and our customers an idea of what is involved in constructing a house. Similarly, by analyzing a software development or maintenance project and identifying the phases, steps, and activities, both we and our customers gain a better grasp of what is involved in building and maintaining a system. We saw in Chapter 2 that a process model provides a high-level view of the phases and steps, so process modeling is a useful way to begin analyzing the project. In later chapters, we will see that the major phases, such as requirements engineering, implementation, or testing, involve many activities, each of which contributes to product or process quality.

Work Breakdown and Activity Graphs

Analysis of this kind is sometimes described as generating a work breakdown structure for a given project, because it depicts the project as a set of discrete pieces of work. Notice that the activities and milestones are items that both customer and developer can use to track development or maintenance. At any point in the process, the customer may want to follow our progress. We developers can point to activities, indicating what work is under way, and to milestones, indicating what work has been completed. However, a project's work breakdown structure gives no indication of the interdependence of the work units or of the parts of the project that can be developed concurrently.

We can describe each activity with four parameters: the precursor, duration, due date, and endpoint. A precursor is an event or set of events that must occur before the activity can begin; it describes the set of conditions that allows the activity to begin. The duration is the length of time needed to complete the activity. The due date is the date by which the activity must be completed, frequently determined by contractual deadlines. Signifying that the activity has ended, the endpoint is usually a milestone or deliverable.
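The four parameters translate naturally into a data structure. The sketch below is one possible encoding in Python; the field names mirror the text, dates are simplified to day numbers, and the example activity is excavation from the house project.

# One way to encode an activity's four parameters: precursor, duration,
# due date, and endpoint. Day numbers stand in for calendar dates.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    precursors: set = field(default_factory=set)  # milestones required first
    duration: int = 0                             # days of work needed
    due_date: int = 0                             # contractual deadline, if any
    endpoint: str = ""                            # milestone marking the end

def can_start(activity, reached_milestones):
    # An activity may begin once every precursor event has occurred.
    return activity.precursors <= reached_milestones

excavate = Activity("Excavate for the foundation",
                    precursors={"1.1", "1.2"}, duration=10, endpoint="1.3")
print(can_start(excavate, {"1.1"}))           # False: permits not yet issued
print(can_start(excavate, {"1.1", "1.2"}))    # True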



TABLE 3.1 Phases, Steps, and Activities of Building a House

Phase 1: Landscaping the Lot
  Step 1.1: Clearing and grubbing
    Activity 1.1.1: Remove trees
    Activity 1.1.2: Remove stumps
  Step 1.2: Seeding the turf
    Activity 1.2.1: Aerate the soil
    Activity 1.2.2: Disperse the seeds
    Activity 1.2.3: Water and weed
  Step 1.3: Planting shrubs and trees
    Activity 1.3.1: Obtain shrubs and trees
    Activity 1.3.2: Dig holes
    Activity 1.3.3: Plant shrubs and trees
    Activity 1.3.4: Anchor the trees and mulch around them

Phase 2: Building the House
  Step 2.1: Prepare the site
    Activity 2.1.1: Survey the land
    Activity 2.1.2: Request permits
    Activity 2.1.3: Excavate for the foundation
    Activity 2.1.4: Buy materials
  Step 2.2: Building the exterior
    Activity 2.2.1: Lay the foundation
    Activity 2.2.2: Build the outside walls
    Activity 2.2.3: Install exterior plumbing
    Activity 2.2.4: Exterior electrical work
    Activity 2.2.5: Exterior siding
    Activity 2.2.6: Paint the exterior
    Activity 2.2.7: Install doors and fixtures
    Activity 2.2.8: Install roof
  Step 2.3: Finishing the interior
    Activity 2.3.1: Install the interior plumbing
    Activity 2.3.2: Install interior electrical work
    Activity 2.3.3: Install wallboard
    Activity 2.3.4: Paint the interior
    Activity 2.3.5: Install floor covering
    Activity 2.3.6: Install doors and fixtures


TABLE 3.2 Milestones in Building a House

1.1. Survey complete
1.2. Permits issued
1.3. Excavation complete
1.4. Materials on hand
2.1. Foundation laid
2.2. Outside walls complete
2.3. Exterior plumbing complete
2.4. Exterior electrical work complete
2.5. Exterior siding complete
2.6. Exterior painting complete
2.7. Doors and fixtures mounted
2.8. Roof complete
3.1. Interior plumbing complete
3.2. Interior electrical work complete
3.3. Wallboard in place
3.4. Interior painting complete
3.5. Floor covering laid
3.6. Doors and fixtures mounted

We can illustrate the relationships among activities by using these parameters. In particular, we can draw an activity graph to depict the dependencies; the nodes of the graph are the project milestones, and the lines linking the nodes represent the activities involved. Figure 3.2 is an activity graph for the work described in phase 2 of Table 3.1.

FIGURE 3.2 Activity graph for building a house. [Figure: a graph whose nodes are the milestones of Table 3.2, connected by the activities of phase 2. Surveying and excavation lead through buying materials, laying the foundation, and building the outside walls; the graph then splits into an exterior branch (plumbing, electrical work, siding, painting, doors and fixtures, roof) and an interior branch (plumbing, electrical work, wallboard, painting, floor covering, doors and fixtures).]



Many important characteristics of the project are made visible by the activity graph. For example, it is clear from Figure 3.2 that neither of the two plumbing activities can start before milestone 2.2 is reached; that is, 2.2 is a precursor to both interior and exterior plumbing. Furthermore, the figure shows us that several things can be done simultaneously. For instance, some of the interior and exterior activities are independent (such as installing wallboard, connecting exterior electrical work, and others leading to milestones 2.6 and 3.3, respectively). The activities on the left-hand path do not depend on those on the right for their initiation, so they can be worked on concurrently. Notice that there is a dashed line from requesting permits (node 1.2) to surveying (node 1.1). This line indicates that these activities must be completed before excavation (the activity leading to milestone 1.3) can begin. However, since there is no real activity that occurs after reaching milestone 1.2 in order to get to milestone 1.1, the dashed line indicates a relationship without an accompanying activity.

It is important to realize that activity graphs depend on an understanding of the parallel nature of tasks. If work cannot be done in parallel, then the (mostly straight) graph is not useful in depicting how tasks will be coordinated. Moreover, the graphs must reflect a realistic depiction of the parallelism. In our house-building example, it is clear that some of the tasks, like plumbing, will be done by different people from those doing other tasks, like electrical work. But on software development projects, where some people have many skills, the theoretical parallelism may not reflect reality. A restricted number of people assigned to the project may result in the same person doing many things in series, even though they could be done in parallel by a larger development team.

Estimating Completion

We can make an activity graph more useful by adding to it information about the estimated time it will take to complete each activity. For a given activity, we label the corresponding edge of the graph with the estimate. For example, for the activities in phase 2 of Table 3.1, we can append to the activity graph of Figure 3.2 estimates of the number of days it will take to complete each activity. Table 3.3 contains the estimates for each activity.

The result is the graph shown in Figure 3.3. Notice that milestones 2.7, 2.8, 3.4, and 3.6 are precursors to the finish. That is, these milestones must all be reached in order to consider the project complete. The zeros on the links from those nodes to the finish show that no additional time is needed. There is also an implicit zero on the link from node 1.2 to 1.1, since no additional time is accrued on the dashed link.

This graphical depiction of the project tells us a lot about the project's schedule. For example, since we estimated that the first activity would take 3 days to complete, we cannot hope to reach milestone 1.1 before the end of day 3. Similarly, we cannot reach milestone 1.2 before the end of day 15. Because the beginning of excavation (activity 1.3) cannot begin until milestones 1.1 and 1.2 are both reached, excavation cannot begin until the beginning of day 16.

Analyzing the paths among the milestones of a project in this way is called the Critical Path Method (CPM). The paths can show us the minimum amount of time it will take to complete the project, given our estimates of each activity's duration. Moreover, CPM reveals those activities that are most critical to completing the project on time.


TABLE 3.3 Activities and Time Estimates

Activity                                          Time Estimate (in Days)

Step 1: Prepare the site
  Activity 1.1: Survey the land                   3
  Activity 1.2: Request permits                   15
  Activity 1.3: Excavate for the foundation       10
  Activity 1.4: Buy materials                     10

Step 2: Building the exterior
  Activity 2.1: Lay the foundation                15
  Activity 2.2: Build the outside walls           20
  Activity 2.3: Install exterior plumbing         10
  Activity 2.4: Install exterior electrical work  10
  Activity 2.5: Install exterior siding           8
  Activity 2.6: Paint the exterior                5
  Activity 2.7: Install doors and fixtures        6
  Activity 2.8: Install roof                      9

Step 3: Finishing the interior
  Activity 3.1: Install interior plumbing         12
  Activity 3.2: Install interior electrical work  15
  Activity 3.3: Install wallboard                 9
  Activity 3.4: Paint the interior                18
  Activity 3.5: Install floor covering            11
  Activity 3.6: Install doors and fixtures        7

To see how CPM works, consider again our house-building example. First, we notice that the activities leading to milestones 1.1 (surveying) and 1.2 (requesting permits) can occur concurrently. Since excavation (the activity culminating in milestone 1.3) cannot begin until day 16, surveying has 15 days in which to be completed, even though it is only 3 days in duration. Thus, surveying has 15 days of available time, but requires only 3 days of real time. In the same way, for each activity in our graph, we can compute a pair of times: real time and available time. The real time or actual time for an activity is the estimated amount of time required for the activity to be completed, and the available time is the amount of time available in the schedule for the activity's completion. Slack time or float for an activity is the difference between the available time and the real time for that activity:

Slack time = available time - real time


FIGURE 3.3 Activity graph with durations. [Figure: the activity graph of Figure 3.2, with each edge labeled by the estimated number of days from Table 3.3.]

Another way of looking at slack time is to compare the earliest time an activity may begin with the latest time the activity may begin without delaying the project. For example, surveying may begin on day 1, so the earliest start time is day 1. However, because it will take 15 days to request and receive permits, surveying can begin as late as day 13 and still not hold up the project schedule. Therefore,

Slack time = latest start time - earliest start time

Let us compute the slack for our example's activities to see what it tells us about the project schedule. We compute slack by examining all paths from the start to the finish. As we have seen, it must take 15 days to complete milestones 1.1 and 1.2. An additional 55 days are used in completing milestones 1.3, 1.4, 2.1, and 2.2. At this point, there are four possible paths to be taken:

1. Following milestones 2.3 through 2.7 on the graph requires 39 days.

2. Following milestones 2.3 through 2.8 on the graph requires 42 days.

3. Following milestones 3.1 through 3.4 on the graph requires 54 days.

4. Following milestones 3.1 through 3.6 on the graph requires 54 days.

Because milestones 2.7, 2.8, 3.4, and 3.6 must be met before the project is finished, our schedule is constrained by the longest path. As you can see from Figure 3.3 and our preceding calculations, the two paths on the right require 124 days to complete, and the two paths on the left require fewer days. To calculate the slack, we can work backward along the path to see how much slack time there is for each activity leading to a node. First, we note that there is zero slack on the longest path. Then, we examine each of the remaining nodes to calculate the slack for the activities leading to them. For example, 54 days are available to complete the activities leading to milestones 2.3, 2.4, 2.5, 2.6, and 2.8, but only 42 days are needed to complete these. Thus, this portion of the graph has 12 days of slack. Similarly, the portion of the graph for activities 2.3 through 2.7 requires only 39 days, so we have 15 days of slack along this route. By working forward through the graph in this way, we can compute the earliest start time and slack for each of the activities. Then, we compute the latest start time for each activity by moving from the finish back through each node to the start. Table 3.4 shows the results: the slack time for each activity in Figure 3.3. (At milestone 2.6, the path can branch to 2.7 or 2.8. The latest start times in Table 3.4 are calculated by using the route from 2.6 to 2.8, rather than from 2.6 to 2.7.)

The longest path has a slack of zero for each of its nodes, because it is the path that determines whether or not the project is on schedule. For this reason, it is called the critical path. Thus, the critical path is the one for which the slack at every node is zero. As you can see from our example, there may be more than one critical path.

TABLE 3.4 Slack Time for Project Activities

Activity   Earliest Start Time   Latest Start Time   Slack

1.1        1                     13                  12
1.2        1                     1                   0
1.3        16                    16                  0
1.4        26                    26                  0
2.1        36                    36                  0
2.2        51                    51                  0
2.3        71                    83                  12
2.4        81                    93                  12
2.5        91                    103                 12
2.6        99                    111                 12
2.7        104                   119                 15
2.8        104                   116                 12
3.1        71                    71                  0
3.2        83                    83                  0
3.3        98                    98                  0
3.4        107                   107                 0
3.5        107                   107                 0
3.6        118                   118                 0
Finish     124                   124                 0



Since the critical path has no slack, there is no margin for error when performing the activities along its route.

Notice what happens when an activity on the critical path begins late (i.e., later than its earliest start time). The late start pushes all subsequent critical path activities forward, forcing them to be late, too, if there is no slack. And for activities not on the critical path, the subsequent activities may also lose slack time. Thus, the activity graph helps us to understand the impact of any schedule slippage.

Consider what happens if the activity graph has several loops in it. Loops may occur when an activity must be repeated. For instance, in our house-building example, the building inspector may require the plumbing to be redone. In software development, a design inspection may require design or requirements to be respecified. The appearance of these loops may change the critical path as the loop activities are exercised more than once. In this case, the effects on the schedule are far less easy to evaluate.

Figure 3.4 is a bar chart that shows some software development project activities, including information about the early and late start dates; this chart is typical of those produced by automated project management tools. The horizontal bars represent the duration of each activity; those bars composed of asterisks indicate the critical path. Activities depicted by dashes and Fs are not on the critical path, and an F represents float or slack time.

FIGURE 3.4 CPM bar chart. [Figure: a chart listing test activities (defining test cases, writing and inspecting the test plan, integration, interface, system, performance, and configuration testing, and documenting results) with early and late dates in January and February 1998; rows of asterisks mark critical-path activities, and dashes with Fs mark activities with float.]

Critical path analysis of a project schedule tells us who must wait for what as the project is being developed. It also tells us which activities must be completed on schedule to avoid delay. This kind of analysis can be enhanced in many ways. For instance, our house-building example supposes that we know exactly how long each activity will take. Often, this is not the case. Instead, we have only an estimated duration for an activity, based on our knowledge of similar projects and events. Thus, to each activity, we can assign a probable duration according to some probability distribution, so that each activity has associated with it an expected value and a variance. In other words, instead of knowing an exact duration, we estimate a window or interval in which the actual time is likely to fall. The expected value is a point within the interval, and the variance describes the width of the interval. You may be familiar with a standard probability distribution called a normal distribution, whose graph is a bell-shaped curve. The Program Evaluation and Review Technique (PERT) is a popular critical path analysis technique that assumes a normal distribution. (See Hillier and Lieberman [2001] for more information about PERT.) PERT determines the probability that the earliest start time for an activity is close to the scheduled time for that activity. Using information such as probability distribution, latest and earliest start times, and the activity graph, a PERT program can calculate the critical path and identify those activities most likely to be bottlenecks. Many project managers use the CPM or PERT method to examine their projects. However, these methods are valuable only for stable projects in which several activities take place concurrently. If the project's activities are mostly sequential, then almost all activities are on the critical path and are candidates for bottlenecks. Moreover, if the project requires redesign or rework, the activity graph and critical path are likely to change during development.

Tools to Track Progress

There are many tools that can be used to keep track of a project's progress. Some are manual, others are simple spreadsheet applications, and still others are sophisticated tools with complex graphics. To see what kinds of tools may be useful on your projects, consider the work breakdown structure depicted in Figure 3.5. Here, the overall objective is to build a system involving communications software, and the project manager has described the work in terms of five steps: system planning, system design, coding, testing, and delivery. For simplicity, we concentrate on the first two steps. Step 1 is then partitioned into four activities: reviewing the specifications, reviewing the budget, reviewing the schedule, and developing a project plan. Similarly, the system design is

Build communications software
    1.0 System planning
        1.1 Review specification
        1.2 Review budget
        1.3 Review schedule
        1.4 Develop plan
    2.0 System design
        2.1 Top-level design
        2.2 Prototyping
        2.3 User interface
        2.4 Detailed design

FIGURE 3.5 Example work breakdown structure.


developed by doing a top-level design, prototyping, designing the user interface, and then creating a detailed design.

Many project management software systems draw a work breakdown structure and also assist the project manager in tracking progress by step and activity. For example, a project management package may draw a Gantt chart, a depiction of the project where the activities are shown in parallel, with the degree of completion indicated by a color or icon. The chart helps the project manager to understand which activities can be performed concurrently, and also to see which items are on the critical path.

Figure 3.6 is a Gantt chart for the work breakdown structure of Figure 3.5. The project began in January, and the dashed vertical line labeled "today" indicates that the project team is working during the middle of May. A horizontal bar shows progress on each activity, and the color of the bar denotes completion, duration, or criticality. A diamond icon shows us where there has been slippage, and the triangles designate an activity's start and finish. The Gantt chart is similar to the CPM chart of Figure 3.4, but it includes more information.

Simple charts and graphs can provide information about resources, too. For example, Figure 3.7 graphs the relationship between the people assigned to the project and those needed at each stage of development; it is typical of graphs produced by project management tools. It is easy to see that during January, February, and March,

[The chart shows each work breakdown activity (1.1 through 1.4 and 2.1 through 2.4) as a bar spanning its scheduled months, with triangles marking start and finish, diamonds marking slippage, notes marking approvals (specification, budget, schedule, plan, and design), and shading distinguishing completed work, duration, float, and critical tasks; a dashed vertical line marks "today" in mid-May.]

FIGURE 3.6 Gantt chart for example work breakdown structure.


FIGURE 3.7 Resource histogram. [The histogram plots staff load by month, January through December, with shading distinguishing load, overload, and underload.]

people are needed but no one is assigned. In April and May, some team members are working, but not enough to do the required job. On the other hand, the period during which there are too many team members is clearly shown: from the beginning of June to the end of September. The resource allocation for this project is clearly out of balance. By changing the graph's input data, you can change the resource allocation and try to reduce the overload, finding the best resource load for the schedule you have to meet.

Later in this chapter, we will see how to estimate the costs of development. Project management tools track actual costs against the estimates, so that budget progress can be assessed, too. Figure 3.8 shows an example of how expenditures can be

FIGURE 3.8 Tracking planned vs. actual expenditures.

[The graph plots planned expenditure (dashed line) and actual expenditure (solid line) in dollars, month by month from January through December.]


monitored. By combining budget tracking with personnel tracking, you can use project management tools to determine the best resources for your limited budget.

3.2 PROJECT PERSONNEL

To determine the project schedule and estimate the associated effort and costs, we need to know approximately how many people will be working on the project, what tasks they will perform, and what abilities and experience they must have so they can do their jobs effectively. In this section, we look at how to decide who does what and how the staff can be organized.

Staff Roles and Characteristics

In Chapter 2, we examined several software process models, each depicting the way in which the several activities of software development are related. No matter the model, there are certain activities necessary to any software project. For example, every project requires people to interact with the customers to determine what they want and by when they want it. Other project personnel design the system, and still others write or test the programs. Key project activities are likely to include

1. requirements analysis
2. system design
3. program design
4. program implementation
5. testing
6. training
7. maintenance
8. quality assurance

However, not every task is performed by the same person or group; the assignment of staff to tasks depends on project size, staff expertise, and staff experience. There is great advantage in assigning different responsibilities to different sets of people, offering "checks and balances" that can identify faults early in the development process. For example, suppose the test team is separate from those who design and code the system. Testing new or modified software involves a system test, where the developers demonstrate to the customer that the system works as specified. The test team must define and document the way in which this test will be conducted and the criteria for linking the demonstrated functionality and performance characteristics to the requirements specified by the customer. The test team can generate its test plan from the requirements documents without knowing how the internal pieces of the system are put together. Because the test team has no preconceptions about how the hardware and software will work, it can concentrate on system functionality. This approach makes it easier for the test team to catch errors and omissions made by the designers or programmers. It is in part for this reason that the cleanroom method is organized to use an independent test team, as we will see in later chapters (Mills, Dyer, and Linger 1987).

96 Chapter 3 Planning and Managing the Project

For similar reasons, it is useful for program designers to be different from system designers. Program designers become deeply involved with the details of the code, and they sometimes neglect the larger picture of how the system should work. We will see in later chapters that techniques such as walkthroughs, inspections, and reviews can bring the two types of designers together to double-check the design before it goes on to be coded, as well as to provide continuity in the development process.

We saw in Chapter 1 that there are many other roles for personnel on the development or maintenance team. As we study each of the major tasks of development in subsequent chapters, we will describe the project team members who perform those tasks.

Once we have decided on the roles of project team members, we must decide which kinds of people we need in each role. Project personnel may differ in many ways, and it is not enough to say that a project needs an analyst, two designers, and five programmers, for example. Two people with the same job title may differ in at least one of the following ways:

• ability to perform the work
• interest in the work
• experience with similar applications
• experience with similar tools or languages
• experience with similar techniques
• experience with similar development environments
• training
• ability to communicate with others
• ability to share responsibility with others
• management skills

Each of these characteristics can affect an individual's ability to perform productively. These variations help to explain why one programmer can write a particular routine in a day, whereas another requires a week. The differences can be critical, not only to schedule estimation, but also to the success of the project.

To understand each worker's performance, we must know his or her ability to perform the work at hand. Some are good at viewing "the big picture," but may not enjoy focusing on detail if asked to work on a small part of a large project. Such people may be better suited to system design or testing than to program design or coding. Sometimes, ability is related to comfort. In classes or on projects, you may have worked with people who are more comfortable programming in one language than another. Indeed, some developers feel more confident about their design abilities than their coding prowess. This feeling of comfort is important; people are usually more productive when they have confidence in their ability to perform.

Interest in the work can also determine someone's success on a project. Although very good at doing a particular job, an employee may be more interested in trying something new than in repeating something done many times before. Thus, the novelty of the work is sometimes a factor in generating interest in it. On the other hand, there are always people who prefer doing what they know and do best, rather than venturing

Openmirrors.com

Section 3.2 Project Personnel 97

into new territory. It is important that whoever is chosen for a task be excited about performing it, no matter what the reason.

Given equal ability and interest, two people may still differ in the amount of experience or training they have had with similar applications, tools, or techniques. The person who has already been successful at using C to write a communications controller is more likely to write another communications controller in C faster (but not necessarily more clearly or efficiently) than someone who has neither experience with C nor knowledge of what a communications controller does. Thus, selection of project personnel involves not only individual ability and skill, but also experience and training.

On every software development or maintenance project, members of the development team communicate with one another, with users, and with the customer. The project's progress is affected not only by the degree of communication, but also by the ability of individuals to communicate their ideas. Software failures can result from a breakdown in communication and understanding, so the number of people who need to communicate with one another can affect the quality of the resulting product. Figure 3.9 shows us how quickly the lines of communication can grow. Increasing a work team from two to three people triples the number of possible lines of communication. In general, if a project has n workers, then there are n(n − 1)/2 pairs of people who might need to communicate, and 2^n − 1 possible teams that can be created to work on smaller pieces of the project. Thus, a project involving only 10 people can use 45 lines of communication, and there are 1023 possible committees or teams that can be formed to handle subsystem development!
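Both formulas are easy to check directly; the short sketch below evaluates them for the team sizes discussed in the text.

```python
# Communication pairs and possible teams for an n-person project,
# following the formulas in the text.

def communication_pairs(n: int) -> int:
    return n * (n - 1) // 2       # pairs of people who might need to talk

def possible_teams(n: int) -> int:
    return 2 ** n - 1             # nonempty subsets of the project staff

for n in (2, 3, 4, 5, 10):
    print(n, communication_pairs(n), possible_teams(n))
# n = 10 yields 45 pairs and 1023 possible teams, as the text notes.
```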

Many projects involve several people who must share responsibility for completing one or more activities. Those working on one aspect of project development must trust other team members to do their parts. In classes, you are usually in total control of the projects you do. You begin with the requirements (usually prescribed by your instructor), design a solution to the problem, outline the code, write the actual lines of code, and test the resulting programs. However, when working in a team, either in school or for an employer or customer, you must be able to share the workload. Not only does this require verbal communication of ideas and results, but it also requires written documentation of what you plan to do and what you have done. You must

FIGURE 3.9 Communication paths on a project: two people share 1 line of communication, three people 3, four people 6, five people 10, and n people n(n − 1)/2.


accept the results of others without redoing their work. Many people have difficulty in sharing control in this way.

Control is an issue in managing the project, too. Some people are good at directing the work of others. This aspect of personnel interaction is also related to the comfort people feel with the jobs they have. Those who feel uncomfortable with the idea of pushing their colleagues to stay on schedule, to document their code, or to meet with the customer are not good candidates for development jobs involving the management of other workers.

Thus, several aspects of a worker's background can affect the quality of the project team. A project manager should know each person's interests and abilities when choosing who will work together. Sidebar 3.1 explains how meetings and their organization can enhance or impede project progress. As we will see later in this chapter, employee background and communication can also have dramatic effects on the project's cost and schedule.

SIDEBAR 3.1 MAKE MEETINGS ENHANCE PROJECT PROGRESS

Some of the communication on a software project takes place in meetings, either in person or as teleconferences or electronic conversations. However, meetings may take up a great deal of time without accomplishing much. Dressler (1995) tells us that "running bad meetings can be expensive ... a meeting of eight people who earn $40,000 a year could cost $320 an hour, including salary and benefit costs. That's nearly $6 a minute." Common complaints about meetings include

• The purpose of the meeting is unclear.

• The attendees are unprepared.

• Essential people are absent or late.

• The conversation veers away from its purpose.

• Some meeting participants do not discuss substantive issues. Instead, they argue, dominate the conversation, or do not participate.

• Decisions made at the meeting are never enacted afterward.

Good project management involves planning all software development activities, including meetings. There are several ways to ensure that a meeting is productive. First, the manager should make clear to others on the project team who should be at the meeting, when it will start and end, and what the meeting will accomplish. Second, every meeting should have a written agenda, distributed in advance if possible. Third, someone should take responsibility for keeping discussion on track and for resolving conflicts. Fourth, someone should be responsible for ensuring that each action item decided at the meeting is actually put into practice. Most importantly, minimize the number of meetings, as well as the number of people who must attend them.

Openmirrors.com

Section 3.2 Project Personnel 99

Work Styles

Different people have different preferred styles for interacting with others on the job and for understanding problems that arise in the course of their work. For example, you may prefer to do a detailed analysis of all possible information before making a decision, whereas your colleague may rely on "gut feeling" for most of his important decisions. You can think of your preferred work style in terms of two components: the way in which your thoughts are communicated and ideas gathered, and the degree to which your emotions affect decision making. When communicating ideas, some people tell others their thoughts, and some people ask for suggestions from others before forming an opinion. Jung (1959) calls the former extroverts and the latter introverts. Clearly, your communication style affects the way you interact with others on a project. Similarly, intuitive people base their decisions on feelings about and emotional reactions to a problem. Others are rational, deciding primarily by examining the facts and carefully considering all options.

We can describe the variety of work styles by considering the graph of Figure 3.10, where communication style forms the horizontal axis and decision style the vertical one. The more extroverted you are, the farther to the right your work style falls on the graph. Similarly, the more emotions play a part in your decisions, the higher up you go. Thus, we can define four basic work styles, corresponding to the four quadrants of the graph. The rational extroverts tend to assert their ideas and not let "gut feeling" affect their decision making. They tell their colleagues what they want them to know, but they rarely ask for more information before doing so. When reasoning, they rely on logic, not emotion. The rational introverts also avoid emotional decisions, but they are willing to take time to consider all possible courses of action. Rational introverts are information gatherers; they do not feel comfortable making a decision unless they are convinced that all the facts are at hand.

In contrast, intuitive extroverts base many decisions on emotional reactions, tending to want to tell others about them rather than ask for input. They use their intuition to be creative, and they often suggest unusual approaches to solving a problem. The intuitive introvert is creative, too, but applies creativity only after having gathered

FIGURE 3.10 Work styles. [The horizontal axis runs from introvert to extrovert, the vertical axis from rational to intuitive. Intuitive introvert: asks others, acknowledges feelings. Intuitive extrovert: tells others, acknowledges feelings. Rational introvert: asks others, decides logically. Rational extrovert: tells others, decides logically.]


sufficient information on which to base a decision. Winston Churchill was an intuitive introvert; when he wanted to learn about an issue, he read every bit of material available that addressed it. He often made his decisions based on how he felt about what he had learned (Manchester 1983).

To see how work styles affect interactions on a project, consider several typical staff profiles. Kai, a rational extrovert, judges her colleagues by the results they produce. When making a decision, her top priority is efficiency. Thus, she wants to know only the bottom line. She examines her options and their probable effects, but she does not need to see documents or hear explanations supporting each option. If her time is wasted or her efficiency is hampered in some way, she asserts her authority to regain control of the situation. Thus, Kai is good at making sound decisions quickly.

Marcel, a rational introvert, is very different from his colleague Kai. He judges his peers by how busy they are, and he has little tolerance for those who appear not to be working hard all the time. He is a good worker, admired for the energy he devotes to his work. His reputation as a good worker is very important to him, and he prides himself on being accurate and thorough. He does not like to make decisions without complete information. When asked to make a presentation, Marcel does so only after gathering all relevant information on the subject.

Marcel shares an office with David, an intuitive extrovert. Whereas Marcel will not make a decision without complete knowledge of the situation, David prefers to follow his feelings. Often, he will trust his intuition about a problem, basing his decision on professional judgment rather than a slow, careful analysis of the information at hand. Since he is assertive, David tends to tell the others on his project about his new ideas. He is creative, and he enjoys when others recognize his ideas. David likes to work in an environment where there is a great deal of interaction among the staff members.

Ying, an intuitive introvert, also thrives on her colleagues' attention. She is sensitive and aware of her emotional reactions to people and problems; it is very important that she be liked by her peers. Because she is a good listener, Ying is the project member to whom others turn to express their feelings. Ying takes a lot of time to make a decision, not only because she needs complete information, but also because she wants to make the right decision. She is sensitive to what others think about her ability and ideas. She analyzes situations much as Marcel does, but with a different focus; Marcel looks at all the facts and figures, but Ying examines relational dependencies and emotional involvements, too.

Clearly, not everyone fits neatly into one of the four categories. Different people have different tendencies, and we can use the framework of Figure 3.10 to describe those tendencies and preferences.

Communication is critical to project success, and work style determines communication style. For example, if you are responsible for a part of the project that is behind schedule, Kai and David are likely to tell you when your work must be ready. David may offer several ideas to get the work back on track, and Kai will give you a new schedule to follow. However, Marcel and Ying will probably ask when the results will be ready. Marcel, in analyzing his options, will want to know why it is not ready; Ying will ask if there is anything she can do to help.

Understanding work styles can help you to be flexible in your approach to other project team members and to customers and users. In particular, work styles give you


information about the priorities of others. If a colleague's priorities and interests are different from yours, you can present information to her in terms of what she deems important. For example, suppose Claude is your customer and you are preparing a presentation for him on the status of the project. If Claude is an introvert, you know that he prefers gathering information to giving it. Thus, you may organize your presentation so that it tells him a great deal about how the project is structured and how it is progressing. However, if Claude is an extrovert, you can include questions to allow him to tell you what he wants or needs. Similarly, if Claude is intuitive, you can take advantage of his creativity by soliciting new ideas from him; if he is rational, your presentation can include facts or figures rather than judgments or feelings. Thus, work styles affect interactions among customers, developers, and users.

Work styles can also involve choice of worker for a given task. For instance, intuitive employees may prefer design and development (requiring new ideas) to maintenance programming and design (requiring attention to detail and analysis of complex results).

Project Organization

Software development and maintenance project teams do not consist of people working independently or without coordination. Instead, team members are organized in ways that enhance the swift completion of quality products. The choice of an appropriate structure for your project depends on several things:

• the backgrounds and work styles of the team members
• the number of people on the team
• the management styles of the customers and developers

Good project managers are aware of these issues, and they seek team members who are flexible enough to interact with all players, regardless of work style.

One popular organizational structure is the chief programmer team, first used at IBM (Baker 1972). On a chief programmer team, one person is totally responsible for a system's design and development. All other team members report to the chief programmer, who has the final say on every decision. The chief programmer supervises all others, designs all programs, and assigns the code development to the other team members. Assisting the chief is an understudy, whose principal job is substituting for the chief programmer when necessary. A librarian assists the team, responsible for maintaining all project documentation. The librarian also compiles and links the code, and performs preliminary testing of all modules submitted to the library. This division of labor allows the programmers to focus on what they do best: programming.

The organization of the chief programmer team is illustrated in Figure 3.11. By placing all responsibility for all decisions with the chief programmer, the team structure minimizes the amount of communication needed during the project. Each team member must communicate often with the chief, but not necessarily with other team members. Thus, if the team consists of n − 1 programmers plus the chief, the team can establish only n − 1 paths of communication (one path for each team member's interaction with the chief) out of a potential n(n − 1)/2 paths. For example, rather than working out a problem themselves, the programmers can simply approach the chief for an answer. Similarly, the chief reviews all design and code, removing the need for peer reviews.


FIGURE 3.11 Chief programmer team organization.

[The chart places the chief programmer at the top, with the assistant chief programmer directly below; senior programmers, the librarian, administration, and the test team report to the chief, and junior programmers report to the senior programmers.]

Although a chief programmer team is a hierarchy, groups of workers may be formed to accomplish a specialized task. For instance, one or more team members may form an administrative group to provide a status report on the project's current cost and schedule.

Clearly, the chief programmer must be good at making decisions quickly, so the chief is likely to be an extrovert. However, if most of the team members are introverts, the chief programmer team may not be the best structure for the project. An alternative is based on the idea of "egoless" programming, as described by Weinberg (1971). Instead of a single point of responsibility, an egoless approach holds everyone equally responsible. Moreover, the process is separated from the individuals; criticism is made of the product or the result, not the people involved. The egoless team structure is democratic, and all team members vote on a decision, whether it concerns design considerations or testing techniques.

Of course, there are many other ways to organize a development or maintenance project, and the two described above represent extremes. Which structure is preferable? The more people on the project, the more need there is for a formal structure. Certainly, a development team with only three or four members does not always need an elaborate organizational structure. However, a team of several dozen workers must have a well-defined organization. In fact, your company or your customer may impose a structure on the development team, based on past success, on the need to track progress in a certain way, or on the desire to minimize points of contact. For example, your customer may insist that the test team be totally independent of program design and development.

Researchers continue to investigate how project team structure affects the resulting product and how to choose the most appropriate organization in a given situation. A National Science Foundation (1983) investigation found that projects with a high degree of certainty, stability, uniformity, and repetition can be accomplished more effectively by a hierarchical organizational structure such as the chief programmer team. These projects require little communication among project members, so they are well-suited to an organization that stresses rules, specialization, formality, and a clear definition of organizational hierarchy.


TABLE 3.5 Comparison of Organizational Structures

Highly Structured        Loosely Structured
High certainty           Uncertainty
Repetition               New techniques or technology
Large projects           Small projects

On the other hand, when there is much uncertainty involved in a project, a more democratic approach may be better. For example, if the requirements may change as development proceeds, the project has a degree of uncertainty. Likewise, suppose your customer is building a new piece of hardware to interface with a system; if the exact specification of the hardware is not yet known, then the level of uncertainty is high. Here, participation in decision making, a loosely defined hierarchy, and the encouragement of open communication can be effective.

Table 3.5 summarizes the characteristics of projects and the suggested organizational structure to address them. A large project with high certainty and repetition probably needs a highly structured organization, whereas a small project with new techniques and a high degree of uncertainty needs a looser structure. Sidebar 3.2 describes the need to balance structure with creativity.

SIDEBAR 3.2 STRUCTURE VS. CREATIVITY

Kunde (1997) reports the results of experiments by Sally Philipp, a developer of software training materials. When Philipp teaches a management seminar, she divides her class into two groups. Each group is assigned the same task: to build a hotel with construction paper and glue. Some teams are structured, and the team members have clearly defined responsibilities. Others are left alone, given no direction or structure other than to build the hotel. Philipp claims that the results are always the same. "The unstructured teams always do incredibly creative, multistoried Taj Mahals and never complete one on time. The structured teams do a Day's Inn [a bland but functional small hotel], but they're finished and putting chairs around the pool when I call time," she says.

One way she places structure on a team is by encouraging team members to set deadlines. The overall task is broken into small subtasks, and individual team members are responsible for time estimates. The deadlines help to prevent "scope creep," the injection of unnecessary functionality into a product.

The experts in Kunde's article claim that good project management means finding a balance between structure and creativity. Left to their own devices, the software developers will focus only on functionality and creativity, disregarding deadlines and the scope of the specification. Many software project management experts made similar claims. Unfortunately, much of this information is based on anecdote, not on solid empirical investigation.


The two types of organizational structure can be combined, where appropriate. For instance, programmers may be asked to develop a subsystem on their own, using an egoless approach within a hierarchical structure. Or the test team of a loosely structured project may impose a hierarchical structure on itself and designate one person to be responsible for all major testing decisions.

3.3 EFFORT ESTIMATION

One of the crucial aspects of project planning and management is understanding how much the project is likely to cost. Cost overruns can cause customers to cancel projects, and cost underestimates can force a project team to invest much of its time without financial compensation. As described in Sidebar 3.3, there are many reasons for inaccurate estimates. A good cost estimate early in the project's life helps the project manager to know how many developers will be required and to arrange for the appropriate staff to be available when they are needed.

The project budget pays for several types of costs: facilities, staff, methods, and tools. The facilities costs include hardware, space, furniture, telephones, modems, heating and air conditioning, cables, disks, paper, pens, photocopiers, and all other items that provide the physical environment in which the developers will work. For some projects, this environment may already exist, so the costs are well-understood and easy to estimate. But for other projects, the environment may have to be created. For example, a new project may require a security vault, a raised floor, temperature or humidity controls, or special furniture. Here, the costs can be estimated, but they may vary from initial estimates as the environment is built or changed. For instance, installing cabling in a building may seem straightforward until the builders discover that the building is of special historical significance, so that the cables must be routed around the walls instead of through them.

There are sometimes hidden costs that are not apparent to the managers and developers. For example, studies indicate that a programmer needs a minimum amount of space and quiet to be able to work effectively. McCue (1978) reported to his colleagues at IBM that the minimum standard for programmer work space should be 100 square feet of dedicated floor space with 30 square feet of horizontal work surface. The space also needs a floor-to-ceiling enclosure for noise protection. DeMarco and Lister's (1987) work suggests that programmers free from telephone calls and uninvited visitors are more efficient and produce a better product than those who are subject to repeated interruption.

Other project costs involve purchasing software and tools to support development efforts. In addition to tools for designing and coding the system, the project may buy software to capture requirements, organize documentation, test the code, keep track of changes, generate test data, support group meetings, and more. These tools, sometimes called Computer-Aided Software Engineering (or CASE) tools, are sometimes required by the customer or are part of a company's standard software development process.

For most projects, the biggest component of cost is effort. We must determine how many staff-days of effort will be required to complete the project. Effort is certainly


SIDEBAR 3.3 CAUSES OF INACCURATE ESTIMATES

Lederer and Prasad (1992) investigated the cost-estimation practices of 115 different organizations. Thirty-five percent of the managers surveyed on a five-point Likert scale indicated that their current estimates were "moderately unsatisfactory" or "very unsatisfactory."

The key causes identified by the respondents included

• frequent requests for changes by users

• overlooked tasks

• users' lack of understanding of their own requirements

• insufficient analysis when developing an estimate

• lack of coordination of systems development, technical services, operations, data administration, and other functions during development

• lack of an adequate method or guidelines for estimating

Several aspects of the project were noted as key influences on the estimate:

• complexity of the proposed application system

• required integration with existing systems

• complexity of the programs in the system

• size of the system expressed as number of functions or programs

• capabilities of the project team members

• project team's experience with the application

• anticipated frequency or extent of potential changes in user requirements

• project team's experience with the programming language

• database management system

• number of project team members

• extent of programming or documentation standards

• availability of tools such as application generators

• team's experience with the hardware

the cost component with the greatest degree of uncertainty. We have seen how work style, project organization, ability, interest, experience, training, and other employee characteristics can affect the time it takes to complete a task. Moreover, when a group of workers must communicate and consult with one another, the effort needed is increased by the time required for meetings, documentation, and training.

Cost, schedule, and effort estimation must be done as early as possible during the project's life cycle, since it affects resource allocation and project feasibility. (If it costs too much, the customer may cancel the project.) But estimation should be done repeatedly throughout the life cycle; as aspects of the project change, the estimate can


[The plot shows estimates ranging from 4 times the actual value down to one-quarter of it, converging toward the actual value as the project moves through feasibility, plans and requirements, product design, detailed design, and development and test; milestones along the way include the concept of operations, the requirements specification, the product design specification, the detailed design specification, and the accepted software. Stars mark size (SLOC) estimates and pluses mark cost ($) estimates.]

FIGURE 3.12 Changes in estimation accuracy as project progresses (Boehm et al. 1995).

be refined, based on more complete information about the project's characteristics. Figure 3.12 illustrates how uncertainty early in the project can affect the accuracy of cost and size estimates (Boehm et al. 1995).

The stars represent size estimates from actual projects, and the pluses are cost estimates. The funnel-shaped lines narrowing to the right represent Boehm's sense of how our estimates get more accurate as we learn more about a project. Notice that when the specifics of the project are not yet known, the estimate can differ from the eventual actual cost by a factor of 4. As decisions are made about the product and the process, the factor decreases. Many experts aim for estimates that are within 10 percent of the actual value, but Boehm's data indicate that such estimates typically occur only when the project is almost done, too late to be useful for project management.

To address the need for producing accurate estimates, software engineers have developed techniques for capturing the relationships among effort and staff characteristics, project requirements, and other factors that can affect the time, effort, and cost of developing a software system. For the rest of this chapter, we focus on effort-estimation techniques.

Expert Judgment

Many effort-estimation methods rely on expert judgment. Some are informal techniques, based on a manager's experience with similar projects. Thus, the accuracy of the prediction is based on the competence, experience, objectivity, and perception of the estimator. In its simplest form, such an estimate makes an educated guess about the effort needed to build an entire system or its subsystems. The complete estimate can be computed from either a top-down or a bottom-up analysis of what is needed.


Many times analogies are used to estimate effort. If we have already built a system much like the one proposed, then we can use the similarity as the basis for our estimates. For example, if system A is similar to system B, then the cost to produce system A should be very much like the cost to produce B. We can extend the analogy to say that if A is about half the size or complexity of B, then A should cost about half as much as B.

The analogy process can be formalized by asking several experts to make three predictions: a pessimistic one (x), an optimistic one (z), and a most likely guess (y). Then our estimate is the mean of the beta probability distribution determined by these numbers: (x + 4y + z)/6. By using this technique, we produce an estimate that "normalizes" the individual estimates.
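As a quick sketch of this calculation, the code below computes each expert's beta mean and averages them; the person-month figures for the three hypothetical experts are invented for illustration.

```python
# The three-point (beta distribution) estimate described above: given
# pessimistic (x), most likely (y), and optimistic (z) guesses, the mean
# is (x + 4y + z)/6.

def beta_estimate(x: float, y: float, z: float) -> float:
    return (x + 4 * y + z) / 6

experts = [(14, 10, 7), (16, 12, 9), (13, 11, 8)]   # (x, y, z) per expert
estimates = [beta_estimate(x, y, z) for x, y, z in experts]
print(sum(estimates) / len(estimates))   # average of the experts' estimates
```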

The Delphi technique makes use of expert judgment in a different way. Experts are asked to make individual predictions secretly, based on their expertise and using whatever process they choose. Then, the average estimate is calculated and presented to the group. Each expert has the opportunity to revise his or her estimate, if desired. The process is repeated until no expert wants to revise. Some users of the Delphi technique discuss the average before new estimates are made; at other times, the users allow no discussion. And in another variation, the justifications of each expert are circulated anonymously among the experts.

Wolverton (1974) built one of the first models of software development effort. His software cost matrix captures his experience with project cost at TRW, a U.S. software development company. As shown in Table 3.6, the row name represents the type of software, and the column designates its difficulty. Difficulty depends on two factors: whether the problem is old (O) or new (N) and whether it is easy (E), moderate (M), or hard (H). The matrix elements are the cost per line of code, as calibrated from historical data at TRW. To use the matrix, you partition the proposed software system into modules. Then, you estimate the size of each module in terms of lines of code. Using the matrix, you calculate the cost per module, and then sum over all the modules. For instance, suppose you have a system with three modules: one input/output module that is old and easy, one algorithm module that is new and hard, and one data management module that is old and moderate. If the modules are likely to have 100, 200, and 100 lines of code, respectively, then the Wolverton model estimates the cost to be (100 × 17) + (200 × 35) + (100 × 31) = $11,800.

TABLE 3.6 Wolverton Model Cost Matrix

                              Difficulty
Type of software      OE    OM    OH    NE    NM    NH
Control               21    27    30    33    40    49
Input/output          17    24    27    28    35    43
Pre/post processor    16    23    26    28    34    42
Algorithm             15    20    22    25    30    35
Data management       24    31    35    37    46    57
Time-critical         75    75    75    75    75    75
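Recasting the worked example above as code makes the calculation explicit; this sketch fills in only the three Table 3.6 entries the example needs.

```python
# The text's Wolverton example recast as code. The dollar-per-line values
# come from Table 3.6 (1974 dollars).

COST_PER_LINE = {("input/output", "OE"): 17,       # old, easy
                 ("algorithm", "NH"): 35,          # new, hard
                 ("data management", "OM"): 31}    # old, moderate

modules = [("input/output", "OE", 100),
           ("algorithm", "NH", 200),
           ("data management", "OM", 100)]         # (type, difficulty, LOC)

total = sum(COST_PER_LINE[(kind, diff)] * loc for kind, diff, loc in modules)
print(f"estimated cost: ${total:,}")               # prints $11,800
```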


Since the model is based on TRW data and uses 1974 dollars, it is not applicable to today's software development projects. But the technique is useful and can be transported easily to your own development or maintenance environment.

In general, experiential models, by relying mostly on expert judgment, are subject to all its inaccuracies. They rely on the expert's ability to determine which projects are similar and in what ways. However, projects that appear to be very similar can in fact be quite different. For example, fast runners today can run a mile in 4 minutes. A marathon race requires a runner to run 26 miles and 385 yards. If we extrapolate the 4-minute time, we might expect a runner to run a marathon in 1 hour and 45 minutes. Yet a marathon has never been run in under 2 hours. Consequently, there must be characteristics of running a marathon that are very different from those of running a mile. Likewise, there are often characteristics of one project that make it very different from another project, but the characteristics are not always apparent.

Even when we know how one project differs from another, we do not always know how the differences affect the cost. A proportional strategy is unreliable, because project costs are not always linear: Two people cannot produce code twice as fast as one. Extra time may be needed for communication and coordination, or to accommodate differences in interest, ability, and experience. Sackman, Erikson, and Grant (1968) found that the productivity ratio between best and worst programmers averaged 10 to 1, with no easily definable relationship between experience and performance. Likewise, a more recent study by Hughes (1996) found great variety in the way software is designed and developed, so a model that may work in one organization may not apply to another. Hughes also noted that past experience and knowledge of available resources are major factors in determining cost.

Expert judgment suffers not only from variability and subjectivity, but also from dependence on current data. The data on which an expert judgment model is based must reflect current practices, so they must be updated often. Moreover, most expert judgment techniques are simplistic, neglecting to incorporate a large number of factors that can affect the effort needed on a project. For this reason, practitioners and researchers have turned to algorithmic methods to estimate effort.

Algorithmic Methods

Researchers have created models that express the relationship between effort and the factors that influence it. The models are usually described using equations, where effort is the dependent variable, and several factors (such as experience, size, and application type) are the independent variables. Most of these models acknowledge that project size is the most influential factor in the equation by expressing effort as

E = (a + bS^c)m(X)

where S is the estimated size of the system, and a, b, and c are constants. X is a vector of cost factors, x1 through xn, and m is an adjustment multiplier based on these factors. In other words, the effort is determined mostly by the size of the proposed system, adjusted by the effects of several other project, process, product, or resource characteristics.


Walston and Felix (1977) developed one of the first models of this type, finding that IBM data from 60 projects yielded an equation of the form

E = 5.25 S^0.91

The projects that supplied data built systems with sizes ranging from 4000 to 467,000 lines of code, written in 28 different high-level languages on 66 computers, and representing from 12 to 11,758 person-months of effort. Size was measured as lines of code, including comments as long as they did not exceed 50 percent of the total lines in the program.

The basic equation was supplemented with a productivity index that reflected 29 factors that can affect productivity, shown in Table 3.7. Notice that the factors are tied to a very specific type of development, including two platforms: an operational computer and a development computer. The model reflects the particular development style of the IBM Federal Systems organizations that provided the data.

TABLE 3.7 Walston and Felix Model Productivity Factors

1. Customer interface complexity
2. User participation in requirements definition
3. Customer-originated program design changes
4. Customer experience with the application area
5. Overall personnel experience
6. Percentage of development programmers who participated in the design of functional specifications
7. Previous experience with the operational computer
8. Previous experience with the programming language
9. Previous experience with applications of similar size and complexity
10. Ratio of average staff size to project duration (people per month)
11. Hardware under concurrent development
12. Access to development computer open under special request
13. Access to development computer closed
14. Classified security environment for computer and at least 25% of programs and data
15. Use of structured programming
16. Use of design and code inspections
17. Use of top-down development
18. Use of a chief programmer team
19. Overall complexity of code
20. Complexity of application processing
21. Complexity of program flow
22. Overall constraints on program's design
23. Design constraints on the program's main storage
24. Design constraints on the program's timing
25. Code for real-time or interactive operation, or for execution under severe time constraints
26. Percentage of code for delivery
27. Code classified as nonmathematical application and input/output formatting programs
28. Number of classes of items in the database per 1000 lines of code
29. Number of pages of delivered documentation per 1000 lines of code


Each of the 29 factors was weighted by 1 if the factor increases productivity, 0 if it has no effect on productivity, and −1 if it decreases productivity. A weighted sum of the 29 factors was then used to generate an effort estimate from the basic equation.
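A sketch of the model's mechanics appears below. The units follow the original study (size in thousands of delivered lines of code, effort in person-months), the factor ratings are invented, and the mapping from the weighted sum onto an adjusted estimate, which followed tables in the original paper, is left out.

```python
# A sketch of the Walston-Felix base equation, with S in thousands of
# lines of code and E in person-months. The productivity index below is
# just the raw weighted sum of factor ratings; the published model maps
# that index onto an adjustment of the base estimate.

def walston_felix_base(size_kloc: float) -> float:
    return 5.25 * size_kloc ** 0.91

# Hypothetical ratings for the 29 factors of Table 3.7:
# +1 if a factor increases productivity, 0 if neutral, -1 if it decreases it.
ratings = [1, 0, -1, 1, 0, 0, 1, -1]      # ... and so on, up to 29 entries
productivity_index = sum(ratings)

print(walston_felix_base(20))             # base estimate for a 20-KLOC system
print(productivity_index)
```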

Bailey and Basili (1981) suggested a modeling technique, called a meta-model, for building an estimation equation that reflects your own organization's characteristics. They demonstrated their technique using a database of 18 scientific projects written in Fortran at NASA's Goddard Space Flight Center. First, they minimized the standard error estimate and produced an equation that was very accurate:

E = 5.5 + 0.73 S^1.16

Then, they adjusted this initial estimate based on the ratio of errors. If R is the ratio between the actual effort, E, and the predicted effort, E', then the effort adjustment is defined as

ERadj = R − 1        if R ≥ 1
ERadj = 1 − 1/R      if R < 1

They then adjusted the initial effort estimate E this way:

Eadj = (1 + ERadj)E      if R ≥ 1
Eadj = E/(1 + ERadj)     if R < 1

Finally, Bailey and Basili (1981) accounted for other factors that affect effort, shown in Table 3.8. For each entry in the table, the project is scored from 0 (not present) to 5 (very important), depending on the judgment of the project manager. Thus, the total

TABLE 3.8 Bailey-Basili Effort Modifiers

Total Methodology (METH): tree charts; top-down design; formal documentation; chief programmer teams; formal training; formal test plans; design formalisms; code reading; unit development folders

Cumulative Complexity (CPLX): customer interface complexity; application complexity; program flow complexity; internal communication complexity; database complexity; external communication complexity; customer-initiated program design changes

Cumulative Experience (EXP): programmer qualifications; programmer machine experience; programmer language experience; programmer application experience; team experience


score for METH can be as high as 45, for CPLX as high as 35, and for EXP as high as 25. Their model describes a procedure, based on multilinear least-square regression, for using these scores to further modify the effort estimate.
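The equations above are easy to express directly in code. This sketch assumes size in thousands of lines of code and effort in person-months, as in the NASA data, and treats R as a calibration ratio observed on completed projects.

```python
# A sketch of the Bailey-Basili meta-model equations above.

def base_effort(size_kloc: float) -> float:
    return 5.5 + 0.73 * size_kloc ** 1.16

def adjusted_effort(size_kloc: float, r: float) -> float:
    e = base_effort(size_kloc)
    er_adj = r - 1 if r >= 1 else 1 - 1 / r   # ERadj from the piecewise rule
    return e * (1 + er_adj) if r >= 1 else e / (1 + er_adj)

print(base_effort(20))            # about 29 person-months for a 20-KLOC system
print(adjusted_effort(20, 1.2))   # scaled up when past actuals ran 20% over
```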

Clearly, one of the problems with models of this type is their dependence on size as a key variable. Estimates are usually required early, well before accurate size information is available, and certainly before the system is expressed as lines of code. So the models simply translate the effort-estimation problem to a size-estimation problem. Boehm's Constructive Cost Model (COCOMO) acknowledges this problem and incorporates three sizing techniques in the latest version, COCOMO II.

Boehm (1981) developed the original COCOMO model in the 1970s, using an extensive database of information from projects at TRW, an American company that built software for many different clients. Considering software development from both an engineering and an economics viewpoint, Boehm used size as the primary determinant of cost and then adjusted the initial estimate using over a dozen cost drivers, including attributes of the staff, the project, the product, and the development environment. In the 1990s, Boehm updated the original COCOMO model, creating COCOMO II to reflect the ways in which software development had matured.

The COCOMO II estimation process reflects three major stages of any development project. Whereas the original COCOMO model used delivered source lines of code as its key input, the new model acknowledges that lines of code are impossible to know early in the development cycle. At stage 1, projects usually build prototypes to resolve high-risk issues involving user interfaces, software and system interaction, performance, or technological maturity. Here, little is known about the likely size of the final product under consideration, so COCOMO II estimates size in what its creators call application points. As we shall see, this technique captures size in terms of high-level effort generators, such as the number of screens and reports, and the number of third-generation language components.

At stage 2, the early design stage, a decision has been made to move forward with development, but the designers must explore alternative architectures and concepts of operation. Again, there is not enough information to support fine-grained effort and duration estimation, but far more is known than at stage 1. For stage 2, COCOMO II employs function points as a size measure. Function points, a technique explored in depth in IFPUG (1994a and b), estimate the functionality captured in the requirements, so they offer a richer system description than application points.

By stage 3, the postarchitecture stage, development has begun, and far more information is known. In this stage, sizing can be done in terms of function points or lines of code, and many cost factors can be estimated with some degree of comfort.

COCOMO II also includes models of reuse, takes into account maintenance and breakage (i.e., the change in requirements over time), and more. As with the original COCOMO, the model includes cost factors to adjust the initial effort estimate. A research group at the University of Southern California is assessing and improving its accuracy.

Let us look at COCOMO II in more detail. The basic model is of the form

E = bS^c m(X)

where the initial size-based estimate, bS^c, is adjusted by the vector of cost driver information, m(X). Table 3.9 describes the cost drivers at each stage, as well as the use of other models to modify the estimate.
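As a sketch of how the pieces fit together, the code below evaluates the basic form; the constant b, the scale exponent c, and the multiplier values are assumed for illustration, not calibrated COCOMO II values.

```python
# The basic COCOMO II form E = b * S**c * m(X) as a sketch. Size S is in
# thousands of lines of code; b, c, and the multipliers are assumed values.

from math import prod

def cocomo_effort(size_kloc, b, c, multipliers):
    return b * size_kloc ** c * prod(multipliers)

# Hypothetical project: 50 KLOC, assumed b = 2.94 and c = 1.10, nominal
# drivers (1.0) except a very capable analyst (0.71) and low tool use (1.17).
print(cocomo_effort(50, 2.94, 1.10, [0.71, 1.17]))
```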


TABLE 3.9 Three Stages of COCOMO II

Size
    Stage 1 (Application Composition): Application points
    Stage 2 (Early Design): Function points (FPs) and language
    Stage 3 (Postarchitecture): FP and language, or source lines of code (SLOC)

Reuse
    Stage 1: Implicit in model
    Stage 2: Equivalent SLOC as function of other variables
    Stage 3: Equivalent SLOC as function of other variables

Requirements change
    Stage 1: Implicit in model
    Stage 2: % change expressed as a cost factor
    Stage 3: % change expressed as a cost factor

Maintenance
    Stage 1: Application points, annual change traffic (ACT)
    Stage 2: Function of ACT, software understanding, unfamiliarity
    Stage 3: Function of ACT, software understanding, unfamiliarity

Scale (c) in nominal effort equation
    Stage 1: 1.0
    Stage 2: 0.91 to 1.23, depending on precedentedness, conformity, early architecture, risk resolution, team cohesion, and SEI process maturity
    Stage 3: 0.91 to 1.23, depending on precedentedness, conformity, early architecture, risk resolution, team cohesion, and SEI process maturity

Product cost drivers
    Stage 1: None
    Stage 2: Complexity, required reusability
    Stage 3: Reliability, database size, documentation needs, required reuse, and product complexity

Platform cost drivers
    Stage 1: None
    Stage 2: Platform difficulty
    Stage 3: Execution time constraints, main storage constraints, and virtual machine volatility

Personnel cost drivers
    Stage 1: None
    Stage 2: Personnel capability and experience
    Stage 3: Analyst capability, applications experience, programmer capability, programmer experience, language and tool experience, and personnel continuity

Project cost drivers
    Stage 1: None
    Stage 2: Required development schedule, development environment
    Stage 3: Use of software tools, required development schedule, and multisite development

At stage 1, application points supply the size measure. This size measure is an extension of the object-point approach suggested by Kauffman and Kumar (1993) and productivity data reported by Banker, Kauffman, and Kumar (1992). To compute application points, you first count the number of screens, reports, and third-generation language components that will be involved in the application. It is assumed that these elements are defined in a standard way as part of an integrated computer-aided software engineering environment. Next, you classify each application element as simple, medium, or difficult. Table 3.10 contains guidelines for this classification.


TABLE 3.10 Application Point Complexity Levels

For Screens:

Number of views     Number and source of data tables
contained           Total < 4             Total < 8             Total 8+
                    (< 2 servers,         (2-3 servers,         (> 3 servers,
                    < 3 clients)          3-5 clients)          > 5 clients)
< 3                 Simple                Simple                Medium
3-7                 Simple                Medium                Difficult
8+                  Medium                Difficult             Difficult

For Reports:

Number of sections  Number and source of data tables
contained           Total < 4             Total < 8             Total 8+
                    (< 2 servers,         (2-3 servers,         (> 3 servers,
                    < 3 clients)          3-5 clients)          > 5 clients)
0 or 1              Simple                Simple                Medium
2 or 3              Simple                Medium                Difficult
4+                  Medium                Difficult             Difficult

The number to be used for simple, medium, or difficult application points is a complexity weight found in Table 3.11. The weights reflect the relative effort required to implement a report or screen of that complexity level.

Then, you sum the weighted reports and screens to obtain a single application-point number. If r percent of the objects will be reused from previous projects, the number of new application points is calculated to be

New application points = (application points) x (100 - r)/100

To use this number for effort estimation, you use an adjustment factor, called a productivity rate, based on developer experience and capability, coupled with CASE maturity and capability. For example, if the developer experience and capability are rated low, and the CASE maturity and capability are rated low, then Table 3.12 tells us that the productivity factor is 7, so the number of person-months required is the number of new application points divided by 7. When the developers' experience is low but CASE maturity is high, the productivity estimate is the mean of the two values: 16. Likewise, when a team of developers has experience levels that vary, the productivity estimate can use the mean of the experience and capability weights.
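To make the stage 1 arithmetic concrete, here is a minimal Python sketch of the calculation just described; the weights come from Table 3.11 and the productivity factor from Table 3.12, while the function names are our own illustration rather than part of COCOMO II.

    # Complexity weights from Table 3.11; 3GL components are always difficult (10).
    SCREEN_WEIGHTS = {"simple": 1, "medium": 2, "difficult": 3}
    REPORT_WEIGHTS = {"simple": 2, "medium": 5, "difficult": 8}

    def new_application_points(screens, reports, n_3gl=0, reuse_pct=0):
        """Weighted application-point count, reduced when r% of objects are reused."""
        points = sum(SCREEN_WEIGHTS[s] for s in screens)
        points += sum(REPORT_WEIGHTS[r] for r in reports)
        points += 10 * n_3gl
        return points * (100 - reuse_pct) / 100

    # One simple screen, one medium screen, one difficult report, 50% reuse:
    # (1 + 2 + 8) * 0.5 = 5.5 new application points. With low experience and
    # low CASE maturity (productivity factor 7, Table 3.12), effort is 5.5 / 7.
    nop = new_application_points(["simple", "medium"], ["difficult"], reuse_pct=50)
    print(nop / 7)   # about 0.79 person-months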

At stage 1, the cost drivers are not applied to this effort estimate. However, at stage 2, the effort estimate, based on a function-point calculation, is adjusted for degree of reuse, requirements change, and maintenance. The scale (i.e., the value for c in the effort equation) had been set to 1.0 in stage 1; for stage 2, the scale ranges from 0.91 to 1.23, depending on the degree of novelty of the system, conformity, early architecture and risk resolution, team cohesion, and process maturity.

TABLE 3.11 Complexity Weights for Application Points

Element Type      Simple   Medium   Difficult
Screen            1        2        3
Report            2        5        8
3GL component     -        -        10


TABLE 3.12 Productivity Estimate Calculation

Developers' experience and capability:   Very low   Low   Nominal   High   Very high
CASE maturity and capability:            Very low   Low   Nominal   High   Very high
Productivity factor:                     4          7     13        25     50


The cost drivers in stages 2 and 3 are adjustment factors expressed as effort multipliers based on rating your project from "extra low" to "extra high," depending on its characteristics. For example, a development team's experience with an application type is considered to be

• extra low if it has fewer than 3 months of experience
• very low if it has at least 3 but fewer than 5 months of experience
• low if it has at least 5 but fewer than 9 months of experience
• nominal if it has at least 9 months but less than 1 year of experience
• high if it has at least 1 year but fewer than 2 years of experience
• very high if it has at least 2 years but fewer than 4 years of experience
• extra high if it has at least 4 years of experience

Similarly, analyst capability is measured on an ordinal scale based on percentile ranges. For instance, the rating is "very high" if the analyst is in the ninetieth percentile and "nominal" for the fifty-fifth percentile. Correspondingly, COCOMO II assigns an effort multiplier ranging from 1.42 for very low to 0.71 for very high. These multipliers reflect the notion that an analyst with very low capability expends 1.42 times as much effort as a nominal or average analyst, while one with very high capability needs about three-quarters the effort of an average analyst. Similarly, Table 3.13 lists the cost driver categories for tool use, and the multipliers range from 1.17 for very low to 0.78 for very high.

TABLE 3.13 Tool Use Categories

Category     Meaning
Very low     Edit, code, debug
Low          Simple front-end, back-end CASE, little integration
Nominal      Basic life-cycle tools, moderately integrated
High         Strong, mature life-cycle tools, moderately integrated
Very high    Strong, mature, proactive life-cycle tools, well-integrated with processes, methods, reuse



Notice that stage 2 of COCOMO II is intended for use during the early stages of design. The set of cost drivers in this stage is smaller than the set used in stage 3, reflecting lesser understanding of the project's parameters at stage 2.

The various components of the COCOMO model are intended to be tailored to fit the characteristics of your own organization. Tools are available that implement COCOMO II and compute the estimates from the project characteristics that you supply. Later in this chapter, we will apply COCOMO to our information system example.

Machine-Learning Methods

In the past, most effort- and cost-modeling techniques have relied on algorithmic methods. That is, researchers have examined data from past projects and generated equations from them that are used to predict effort and cost on future projects. However, some researchers are looking to machine learning for assistance in producing good estimates. For example, neural networks can represent a number of interconnected, interdependent units, so they are a promising tool for representing the various activities involved in producing a software product. In a neural network, each unit (called a neuron and represented by a network node) represents an activity; each activity has inputs and outputs. Each unit of the network has associated software that performs an accounting of its inputs, computing a weighted sum; if the sum exceeds a threshold value, the unit produces an output. The output, in turn, becomes input to other related units in the network, until a final output value is produced by the network. The neural network is, in a sense, an extension of the activity graphs we examined earlier in this chapter.

There are many ways for a neural network to produce its outputs. Some techniques involve looking back to what has happened at other nodes; these are called back-propagation techniques. They are similar to the method we used with activity graphs to look back and determine the slack on a path. Other techniques look forward, to anticipate what is about to happen.

Neural networks are developed by "training" them with data from past projects. Relevant data are supplied to the network, and the network uses forward and backward algorithms to "learn" by identifying patterns in the data. For example, historical data about past projects might contain information about developer experience; the network may identify relationships between level of experience and the amount of effort required to complete a project.

Figure 3.13 illustrates how Shepperd (1997) used a neural network to produce an effort estimate. There are three layers in the network, and the network has no cycles. The four inputs are factors that can affect effort on a project; the network uses them to produce effort as the single output. To begin, the network is initialized with random weights. Then, new weights, calculated as a "training set" of inputs and outputs based on past history, are fed to the network. The user of the model specifies a training algorithm that explains how the training data are to be used; this algorithm is also based on past history, and it commonly involves back-propagation. Once the network is trained (i.e., once the network values are adjusted to reflect past experience), it can then be used to estimate effort on new projects.

FIGURE 3.13 Shepperd's feed-forward neural network: four inputs (problem complexity, novelty of application, use of design tools, team size) feed intermediate layers, which produce a single output, effort.

Several researchers have used back-propagation algorithms on similar neural networks to predict development effort, including estimation for projects using fourth-generation languages (Wittig and Finnie 1994; Srinivasan and Fisher 1995; Samson, Ellison, and Dilgard 1997). Shepperd (1997) reports that the accuracy of this type of model seems to be sensitive to decisions about the topology of the neural network, the number of learning stages, and the initial random weights of the neurons within the network. The networks also seem to require large training sets in order to give good predictions. In other words, they must be based on a great deal of experience rather than a few representative projects. Data of this type are sometimes difficult to obtain, especially collected consistently and in large quantity, so the paucity of data limits this technique's usefulness. Moreover, users tend to have difficulty understanding neural networks. However, if the technique produces more accurate estimates, organizations may be more willing to collect data for the networks.
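To illustrate the mechanics (this is not Shepperd's actual network or weights), a single forward pass through a small network like the one in Figure 3.13 can be sketched as follows; every weight here is an invented placeholder that a back-propagation training algorithm would adjust from historical project data.

    import math

    def feed_forward(inputs, hidden_weights, output_weights):
        """One forward pass: project factors in, a single effort estimate out."""
        # Each hidden unit forms a weighted sum of the inputs and squashes it
        # with a sigmoid; the output unit combines the hidden activations.
        hidden = [1 / (1 + math.exp(-sum(w * x for w, x in zip(ws, inputs))))
                  for ws in hidden_weights]
        return sum(w * h for w, h in zip(output_weights, hidden))

    # Inputs as in Figure 3.13 (normalized): problem complexity, novelty of
    # application, use of design tools, team size. All weights are illustrative.
    factors = [0.7, 0.4, 0.6, 0.3]
    hidden_w = [[0.2, -0.1, 0.4, 0.3], [0.5, 0.2, -0.3, 0.1]]
    output_w = [30.0, 25.0]   # scales the hidden activations to person-months
    print(feed_forward(factors, hidden_w, output_w))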

In general, this "learning" approach has been tried in different ways by other researchers. Srinivasan and Fisher (1995) used Kemerer's data (Kemerer 1989) with a statistical technique called a regression tree; they produced predictions more accurate than those of the original COCOMO model and SLIM, a proprietary commercial model. However, their results were not as good as those produced by a neural network or a model based on function points. Briand, Basili, and Thomas (1992) obtained better results from using a tree induction technique, using the Kemerer and COCOMO datasets. Porter and Selby (1990) also used a tree-based approach; they constructed a decision tree that identifies which project, process, and product characteristics may be useful in predicting likely effort. They also used the technique to predict which modules are likely to be fault-prone.

A machine-learning technique called Case-Based Reasoning (CBR) can be applied to analogy-based estimates. Used by the artificial intelligence community, CBR builds a decision algorithm based on the several combinations of inputs that might be encountered on a project. Like the other techniques described here, CBR requires information about past projects. Shepperd (1997) points out that CBR offers two clear advantages over many of the other techniques. First, CBR deals only with events that actually occur, rather than with the much larger set of all possible occurrences. This same feature also allows CBR to deal with poorly understood domains.



Second, it is easier for users to understand particular cases than to depict events as chains of rules or as neural networks.

Estimation using CBR involves four steps:

1. The user identifies a new problem as a case.
2. The system retrieves similar cases from a repository of historical information.
3. The system reuses knowledge from previous cases.
4. The system suggests a solution for the new case.

The solution may be revised, depending on actual events, and the outcome is placed in the repository, building up the collection of completed cases. However, there are two big hurdles in creating a successful CBR system: characterizing cases and determining similarity.

Cases are characterized based on the information that happens to be available. Usually, experts are asked to supply a list of features that are significant in describing cases and, in particular, in determining when two cases are similar. In practice, similarity is usually measured using an n-dimensional vector of n features. Shepperd, Schofield, and Kitchenham (1996) found a CBR approach to be more accurate than traditional regression analysis-based algorithmic methods.
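As a sketch of the retrieval step, suppose each case is described by an n-dimensional feature vector; a simple similarity measure ranks stored cases by Euclidean distance to the new case. The feature choices and numbers below are invented for illustration, not taken from a published CBR tool.

    import math

    def distance(a, b):
        """Euclidean distance between two n-dimensional feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def retrieve_similar(new_case, repository, k=2):
        """Return the k past cases most similar to the new case."""
        return sorted(repository, key=lambda rec: distance(new_case, rec[0]))[:k]

    # Each record: (normalized features [size, team experience, tool maturity],
    # actual effort in person-months). All values are hypothetical.
    repository = [([0.8, 0.3, 0.5], 120), ([0.4, 0.7, 0.6], 45), ([0.9, 0.2, 0.4], 150)]
    print(retrieve_similar([0.85, 0.25, 0.45], repository))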

Finding the Model for Your Situation

There are many effort and cost models being used today: commercial tools based on past experience or intricate models of development, and home-grown tools that access databases of historical information about past projects. Validating these models (i.e., making sure the models reflect actual practice) is difficult, because a large amount of data is needed for the validation exercise. Moreover, if a model is to apply to a large and varied set of situations, the supporting database must include measures from a very large and varied set of development environments.

Even when you find models that are designed for your development environment, you must be able to evaluate which are the most accurate on your projects. There are two statistics that are often used to help you in assessing the accuracy, PRED and MMRE. PRED(x/100) is the percentage of projects for which the estimate is within x% of the actual value. For most effort, cost, and schedule models, managers evaluate PRED(0.25), that is, those models whose estimates are within 25% of the actual value; a model is considered to function well if PRED(0.25) is greater than 75%. MMRE is the mean magnitude of relative error, so we hope that the MMRE for a particular model is very small. Some researchers consider an MMRE of 0.25 to be fairly good, and Boehm (1981) suggests that MMRE should be 0.10 or less. Table 3.14 lists the best values for PRED and MMRE reported in the literature for a variety of models. As you can see, the statistics for most models are disappointing, indicating that no model appears to have captured the essential characteristics and their relationships for all types of development. However, the relationships among cost factors are not simple, and the models must be flexible enough to handle changing use of tools and methods.
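Both statistics are easy to compute from paired estimates and actuals; the helper functions below are our own sketch, with invented data, not taken from a published tool.

    def mmre(estimates, actuals):
        """Mean magnitude of relative error: mean of |actual - estimate| / actual."""
        return sum(abs(a - e) / a for e, a in zip(estimates, actuals)) / len(actuals)

    def pred(estimates, actuals, x=0.25):
        """Fraction of projects whose estimate falls within x of the actual value."""
        hits = sum(abs(a - e) / a <= x for e, a in zip(estimates, actuals))
        return hits / len(actuals)

    est = [100, 210, 45, 90]    # person-months, invented for illustration
    act = [120, 200, 70, 85]
    print(pred(est, act))            # 0.75: three of four estimates within 25%
    print(round(mmre(est, act), 2))  # about 0.16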


TABLE 3.14 Summary of Model Performance

Model                              PRED(0.25)   MMRE
Walston-Felix                      0.30         0.48
Basic COCOMO                       0.27         0.60
Intermediate COCOMO                0.63         0.22
Intermediate COCOMO (variation)    0.76         0.19
Bailey-Basili                      0.78         0.18
Pfleeger                           0.50         0.29
SLIM                               0.06-0.24    0.78-1.04
Jensen                             0.06-0.33    0.70-1.01
COPMO                              0.38-0.63    0.23-5.7
General COPMO                      0.78         0.25

Moreover, Kitchenham, MacDonell, Pickard, and Shepperd (2000) point out that the MMRE and PRED statistics are not direct measures of estimation accuracy. They suggest that you use the simple ratio of estimate to actual: estimate/actual. This measure has a distribution that directly reflects estimation accuracy. By contrast, MMRE and PRED are measures of the spread (standard deviation) and peakedness (kurtosis) of the ratio, so they tell us only characteristics of the distribution.

Even when estimation models produce reasonably accurate estimates, we must be able to understand which types of effort are needed during development. For example, designers may not be needed until the requirements analysts have finished developing the specification. Some effort and cost models use formulas based on past experience to apportion the effort across the software development life cycle. For instance, the original COCOMO model suggested effort required by development activity, based on percentages allotted to key process activities. But, as Figure 3.14 illustrates, researchers report conflicting values for these percentages (Brooks 1995; Yourdon 1982). Thus, when you are building your own database to support estimation in your organization, it is important to record not only how much effort is expended on a project, but also who is doing it and for what activity.

FIGURE 3.14 Different reports of effort distribution: Brooks and Yourdon allot different percentages of total effort to activities such as planning, coding, and testing.


3.4 RISK MANAGEMENT

As we have seen, many software project managers take steps to ensure that their projects are done on time and within effort and cost constraints. However, project management involves far more than tracking effort and schedule. Managers must determine whether any unwelcome events may occur during development or maintenance and make plans to avoid these events or, if they are inevitable, minimize their negative consequences. A risk is an unwanted event that has negative consequences. Project managers must engage in risk management to understand and control the risks on their projects.

What Is a Risk?

Many events occur during software development; Sidebar 3.4 lists Boehm's view of some of the riskiest ones. We distinguish risks from other project events by looking for three things (Rook 1993):

1. A loss associated with the event. The event must create a situation where something negative happens to the project: a loss of time, quality, money, control, understanding, and so on. For example, if requirements change dramatically after the design is done, then the project can suffer from loss of control and understanding if the new requirements are for functions or features with which the design team is unfamiliar. And a radical change in requirements is likely to lead to losses of time and money if the design is not flexible enough to be changed quickly and easily. The loss associated with a risk is called the risk impact.

SIDEBAR 3.4 BOEHM'S TOP TEN RISK ITEMS

Boehm (1991) identifies 10 risk items and recommends risk management techniques to address them.

1. Personnel shortfalls. Staffing with top talent; job matching; team building; morale building; cross-training; prescheduling key people.
2. Unrealistic schedules and budgets. Detailed multisource cost and schedule estimation; design to cost; incremental development; software reuse; requirements scrubbing.
3. Developing the wrong software functions. Organizational analysis; mission analysis; operational concept formulation; user surveys; prototyping; early user's manuals.
4. Developing the wrong user interface. Prototyping; scenarios; task analysis.
5. Gold plating. Requirements scrubbing; prototyping; cost-benefit analysis; design to cost.
6. Continuing stream of requirements changes. High change threshold; information hiding; incremental development (defer changes to later increments).
7. Shortfalls in externally performed tasks. Reference checking; preaward audits; award-fee contracts; competitive design or prototyping; team building.
8. Shortfalls in externally furnished components. Benchmarking; inspections; reference checking; compatibility analysis.
9. Real-time performance shortfalls. Simulation; benchmarking; modeling; prototyping; instrumentation; tuning.
10. Straining computer science capabilities. Technical analysis; cost-benefit analysis; prototyping; reference checking.



2. The likelihood that the event will occur. We must have some idea of the probability that the event will occur. For example, suppose a project is being developed on one machine and will be ported to another when the system is fully tested. If the second machine is a new model to be delivered by the vendor, we must estimate the likelihood that it will not be ready on time. The likelihood of the risk, measured from 0 (impossible) to 1 (certainty), is called the risk probability. When the risk probability is 1, then the risk is called a problem, since it is certain to happen.

3. The degree to which we can change the outcome. For each risk, we must determine what we can do to minimize or avoid the impact of the event. Risk control involves a set of actions taken to reduce or eliminate a risk. For example, if the requirements may change after design, we can minimize the impact of the change by creating a flexible design. If the second machine is not ready when the software is tested, we may be able to identify other models or brands that have the same functionality and performance and can run our new software until the new model is delivered.

We can quantify the effects of the risks we identify by multiplying the risk impact by the risk probability, to yield the risk exposure. For example, if the likelihood that the requirements will change after design is 0.3, and the cost to redesign to new requirements is $50,000, then the risk exposure is $15,000. Clearly, the risk probability can change over time, as can the impact, so part of a project manager's job is to track these values over time and plan for the events accordingly.
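In code, the quantification is a single multiplication; a minimal sketch using the numbers from this example:

    def risk_exposure(probability, impact):
        """Risk exposure = risk probability * risk impact."""
        return probability * impact

    # Requirements change after design: probability 0.3, redesign cost $50,000.
    print(risk_exposure(0.3, 50_000))   # 15000.0, i.e., $15,000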

There are two major sources of risk: generic risks and project-specific risks. Generic risks are those common to all software projects, such as misunderstanding the requirements, losing key personnel, or allowing insufficient time for testing. Project-specific risks are threats that result from the particular vulnerabilities of the given project. For example, a vendor may be promising network software by a particular date, but there is some risk that the network software will not be ready on time.

Risk Management Activities

Risk management involves several important steps, each of which is illustrated in Figure 3.15. First, we assess the risks on a project, so that we understand what may occur during the course of development or maintenance. The assessment consists of three activities: identifying the risks, analyzing them, and assigning priorities to each of them. To identify them, we may use many different techniques.

If the system we are building is similar in some way to a system we have built before, we may have a checklist of problems that may occur; we can review the checklist to determine if the new project is likely to be subject to the risks listed. For systems that are new in some way, we may augment the checklist with an analysis of each of the activities in the development cycle; by decomposing the process into small pieces, we may be able to anticipate problems that may arise. For example, we may decide that there is a risk of the chief designer leaving during the design process.


FIGURE 3.15 Steps in risk management (Rook 1993).

Risk management
  Risk assessment
    Risk identification: checklist, decomposition, assumption analysis, decision driver analysis
    Risk analysis: system dynamics, performance models, cost models, network analysis, decision analysis, quality risk factor analysis
    Risk prioritization: risk exposure, compound risk reduction
  Risk control
    Risk reduction: buying information, risk avoidance, risk transfer, risk reduction leverage, development process
    Risk management planning: risk element planning, risk plan integration
    Risk resolution: risk mitigation, risk monitoring and reporting, risk reassessment

Similarly, we may analyze the assumptions or decisions we make about how the project will be done, who will do it, and with what resources. Then, each assumption is assessed to determine the risks involved.

Finally, we analyze the risks we have identified, so that we can understand as much as possible about when, why, and where they might occur. There are many techniques we can use to enhance our understanding, including system dynamics models, cost models, performance models, network analysis, and more.

Once we have itemized all the risks, we use our understanding to assign priorities to them. A priority scheme enables us to devote our limited resources only to the most threatening risks. Usually, priorities are based on the risk exposure, which takes into account not only likely impact, but also the probability of occurrence.

The risk exposure is computed from the risk impact and the risk probability, so we must estimate each of these risk aspects. To see how the quantification is done, consider the analysis depicted in Figure 3.16. Suppose we have analyzed the system development process and we know we are working under tight deadlines for delivery. We have decided to build the system in a series of releases, where each release has more functionality than the one that preceded it. Because the system is designed so that functions are relatively independent, we consider testing only the new functions for a release, and we assume that the existing functions still work as they did before. However, we may worry that there are risks associated with not performing regression testing: the assurance that existing functionality still works correctly.

For each possible outcome, we estimate two quantities: the probability of an unwanted outcome, P(UO), and the loss associated with the unwanted outcome, L(UO).

FIGURE 3.16 Example of risk exposure calculation.

Perform regression testing:
  Find critical fault:        P(UO) = 0.75, L(UO) = $0.5M   risk exposure $0.375M
  Don't find critical fault:  P(UO) = 0.05, L(UO) = $30M    risk exposure $1.50M
  No critical fault:          P(UO) = 0.20, L(UO) = $0M     risk exposure $0M
  Combined risk exposure: $1.875M

Don't perform regression testing:
  Find critical fault:        P(UO) = 0.25, L(UO) = $0.5M   risk exposure $0.125M
  Don't find critical fault:  P(UO) = 0.55, L(UO) = $30M    risk exposure $16.50M
  No critical fault:          P(UO) = 0.20, L(UO) = $0M     risk exposure $0M
  Combined risk exposure: $16.625M

For instance, there are three possible consequences of performing regression testing: finding a critical fault if one exists, not finding the critical fault (even though it exists), or deciding (correctly) that there is no critical fault. As the figure illustrates, we have estimated the probability of the first case to be 0.75, of the second to be 0.05, and of the third to be 0.20. The loss associated with an unwanted outcome is estimated to be $500,000 if a critical fault is found, so that the risk exposure is $375,000. Similarly, we calculate the risk exposure for the other branches of this decision tree, and we find that our risk exposure if we perform regression testing is almost $2 million. However, the same kind of analysis shows us that the risk exposure if we do not perform regression testing is almost $17 million. Thus, we say (loosely) that more is at risk if we do not perform regression testing.
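The figure's arithmetic is easy to reproduce; this sketch simply multiplies each outcome's probability by its loss and sums over the branches of one alternative:

    def combined_risk_exposure(outcomes):
        """Sum of P(UO) * L(UO) over the possible outcomes of one alternative."""
        return sum(p * loss for p, loss in outcomes)

    # Outcomes from Figure 3.16; losses in millions of dollars.
    with_regression = [(0.75, 0.5), (0.05, 30.0), (0.20, 0.0)]
    without_regression = [(0.25, 0.5), (0.55, 30.0), (0.20, 0.0)]
    print(combined_risk_exposure(with_regression))      # 1.875  -> $1.875M
    print(combined_risk_exposure(without_regression))   # 16.625 -> $16.625M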

Risk exposure helps us to list the risks in priority order, with the risks of most concern given the highest priority. Next, we must take steps to control the risks. The notion of control acknowledges that we may not be able to eliminate all risks. Instead, we may be able to minimize the risk or mitigate it by taking action to handle the unwanted outcome in an acceptable way. Therefore, risk control involves risk reduction, risk planning, and risk resolution.

There are three strategies for risk reduction:

• avoiding the risk, by changing requirements for performance or functionality

• transferring the risk, by allocating risks to other systems or by buying insurance to cover any financial loss should the risk become a reality

• assuming the risk, by accepting it and controlling it with the project's resources



To aid decision making about risk reduction, we must take into account the cost of reducing the risk. We call risk leverage the difference in risk exposure divided by the cost of reducing the risk. In other words, risk reduction leverage is

(risk exposure before reduction - risk exposure after reduction) / (cost of risk reduction)

If the leverage value is not high enough to justify the action, then we can look for other less costly or more effective reduction techniques.
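The leverage calculation is a one-line function; the dollar amounts below are invented for illustration:

    def risk_reduction_leverage(exposure_before, exposure_after, reduction_cost):
        """(Risk exposure before - risk exposure after) / cost of risk reduction."""
        return (exposure_before - exposure_after) / reduction_cost

    # Spending $25,000 to cut exposure from $100,000 to $25,000 gives leverage 3.0,
    # so the action more than pays for itself.
    print(risk_reduction_leverage(100_000, 25_000, 25_000))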

In some cases, we can choose a development process to help reduce the risk. For example, we saw in Chapter 2 that prototyping can improve understanding of the requirements and design, so selecting a prototyping process can reduce many project risks.

It is useful to record decisions in a risk management plan, so that both customer and development team can review how problems are to be avoided, as well as how they are to be handled should they arise. Then, we should monitor the project as development progresses, periodically reevaluating the risks, their probability, and their likely impact.

3.5 THE PROJECT PLAN

To communicate risk analysis and management, project cost estimates, schedule, and organization to our customers, we usually write a document called a project plan. The plan puts in writing the customer's needs, as well as what we hope to do to meet them. The customer can refer to the plan for information about activities in the development process, making it easy to follow the project's progress during development. We can also use the plan to confirm with the customer any assumptions we are making, especially about cost and schedule.

A good project plan includes the following items:

1. project scope
2. project schedule
3. project team organization
4. technical description of the proposed system
5. project standards, procedures, and proposed techniques and tools
6. quality assurance plan
7. configuration management plan
8. documentation plan
9. data management plan
10. resource management plan
11. test plan
12. training plan
13. security plan
14. risk management plan
15. maintenance plan


The scope defines the system boundary, explaining what will be included in the system and what will not be included. It assures the customer that we understand what is wanted. The schedule can be expressed using a work breakdown structure, the deliverables, and a timeline to show what will be happening at each point during the project life cycle. A Gantt chart can be useful in illustrating the parallel nature of some of the development tasks.

The project plan also lists the people on the development team, how they are organized, and what they will be doing. As we have seen, not everyone is needed all the time during the project, so the plan usually contains a resource allocation chart to show staffing levels at different times.

Writing a technical description forces us to answer questions and address issues as we anticipate how development will proceed. This description lists hardware and software, including compilers, interfaces, and special-purpose equipment or software. Any special restrictions on cabling, execution time, response time, security, or other aspects of functionality or performance are documented in the plan. The plan also lists any standards or methods that must be used, such as

• algorithms
• tools
• review or inspection techniques
• design languages or representations
• coding languages
• testing techniques

For large projects, it may be appropriate to include a separate quality assurance plan, to describe how reviews, inspections, testing, and other techniques will help to evaluate quality and ensure that it meets the customer's needs. Similarly, large projects need a configuration management plan, especially when there are to be multiple versions and releases of the system. As we will see in Chapter 10, configuration management helps to control multiple copies of the software. The configuration management plan tells the customer how we will track changes to the requirements, design, code, test plans, and documents.

Many documents are produced during development, especially for large projects where information about the design must be made available to project team members. The project plan lists the documents that will be produced, explains who will write them and when, and, in concert with the configuration management plan, describes how documents will be changed.

Because every software system involves data for input, calculation, and output, the project plan must explain how data will be gathered, stored, manipulated, and archived. The plan should also explain how resources will be used. For example, if the hardware configuration includes removable disks, then the resource management part of the project plan should explain what data are on each disk and how the disk packs or diskettes will be allocated and backed up.

Testing requires a great deal of planning to be effective, and the project plan describes the project's overall approach to testing. In particular, the plan should state



how test data will be generated, how each program module will be tested (e.g., by testing all paths or all statements), how program modules will be integrated with each other and tested, how the entire system will be tested, and who will perform each type of testing. Sometimes, systems are produced in stages or phases, and the test plan should explain how each stage will be tested. When new functionality is added to a system in stages, as we saw in Chapter 2, then the test plan must address regression testing, ensuring that the existing functionality still works correctly.

Training classes and documents are usually prepared during development, rather than after the system is complete, so that training can begin as soon as the system is ready (and sometimes before). The project plan explains how training will occur, describing each class, supporting software and documents, and the expertise needed by each student.

When a system has security requirements, a separate security plan is sometimes needed. The security plan addresses the way that the system will protect data, users, and hardware. Since security involves confidentiality, availability, and integrity, the plan must explain how each facet of security affects system development. For example, if access to the system will be limited by using passwords, then the plan must describe who issues and maintains the passwords, who develops the password-handling software, and what the password encryption scheme will be.

Finally, if the project team will maintain the system after it is delivered to the user, the project plan should discuss responsibilities for changing the code, repairing the hardware, and updating supporting documentation and training materials.

3.6 PROCESS MODELS AND PROJECT MANAGEMENT

We have seen how different aspects of a project can affect the effort, cost, and schedule required, as well as the risks involved. Managers most successful at building quality products on time and within budget are those who tailor the project management techniques to the particular characteristics of the resources needed, the chosen process, and the people assigned.

To understand what to do on your next project, it is useful to examine project management techniques used by successful projects from the recent past. In this section, we look at two projects: Digital's Alpha AXP program and the F-16 aircraft software. We also investigate the merging of process and project management.

Enrollment Management

Digital Equipment Corporation spent many years developing its Alpha AXP system, a new system architecture and associated products that formed the largest project in Digital's history. The software portion of the effort involved four operating systems and 22 software engineering groups, whose roles included designing migration tools, network systems, compilers, databases, integration frameworks, and applications. Unlike many other development projects, the major problems with Alpha involved reaching milestones too early! Thus, it is instructive to look at how the project was managed and what effects the management process had on the final product.


During the course of development, the project managers developed a model that incorporated four tenets, called the Enrollment Management model:

1. establishing an appropriately large shared vision
2. delegating completely and eliciting specific commitments from participants
3. inspecting vigorously and providing supportive feedback
4. acknowledging every advance and learning as the program progressed (Conklin 1996)

Figure 3.17 illustrates the model. Vision was used to "enroll" the related programs, so they all shared common goals. Each group or subgroup of the project defined its own objectives in terms of the global ones stated for the project, including the company's business goals. Next, as managers developed plans, they delegated tasks to groups, soliciting comments and commitments about the content of each task and the schedule constraints imposed. Each required result was measurable and identified with a particular owner who was held accountable for delivery. The owner may not have been the person doing the actual work; rather, he or she was the person responsible for getting the work done.

Managers continually inspected the project to make sure that delivery would be on time. Project team members were asked to identify risks, and when a risk threatened to keep the team from meeting its commitments, the project manager declared the project to be a "cusp": a critical event. Such a declaration meant that team members were ready to make substantial changes to help move the project forward. For each project step, the managers acknowledged progress both personally and publicly. They recorded what had been learned, and they asked team members how things could be improved the next time.

Coordinating all the hardware and software groups was difficult, and managers realized that they had to oversee both technical and project events. That is, the technical focus involved technical design and strategy, whereas the project focus addressed commitments and deliverables. Figure 3.18 illustrates the organization that allowed both foci to contribute to the overall program.

FIGURE 3.17 Enrollment Management model (Conklin 1996): business goals and project objectives feed a shared vision; the vision drives enrollment; commitments are made accountable as task-owner-date triples; and progress is acknowledged with personal and public encouragement.


FIGURE 3.18 Alpha project organization (Conklin 1996): a system board of directors oversees both the project managers and the technical directors.


The simplicity of the model and organization does not mean that managing the Alpha program was simple. Several cusps threatened the project and were dealt with in a variety of ways. For example, management was unable to produce an overall plan, and project managers had difficulty coping. At the same time, technical leaders were generating unacceptably large design documents that were difficult to understand. To gain control, the Alpha program managers needed a programwide work plan that illustrated the order in which each contributing task was to be done and how it coordinated with the other tasks. They created a master plan based only on the critical program components: those things that were critical to business success. The plan was restricted to a single page, so that the participants could see the "big picture," without complexity or detail. Similarly, one-page descriptions of designs, schedules, and other key items enabled project participants to have a global picture of what to do, and when and how to do it.

Another cusp occurred when a critical task was announced to be several months behind schedule. The management addressed this problem by instituting regular operational inspections of progress so there would be no more surprises. The inspection involved presentation of a one-page report, itemizing key points about the project:

• schedule
• milestones
• critical path events in the past month
• activities along the critical path in the next month
• issues and dependencies resolved
• issues and dependencies not resolved (with ownership and due dates)

An important aspect of Alpha's success was the managers' realization that engineers are usually motivated more by recognition than by financial gain. Instead of rewarding participants with money, they focused on announcing progress and on making sure that the public knew how much the managers appreciated the engineers' work.


The result of Alpha's flexible and focused management was a program that met its schedule to the month, despite setbacks along the way. Enrollment management enabled small groups to recognize their potential problems early and take steps to handle them while the problems were small and localized. Constancy of purpose was combined with continual learning to produce an exceptional product. Alpha met its performance goals, and its quality was reported to be very high.

Accountability Modeling

The U.S. Air Force and Lockheed Martin formed an Integrated Product Development Team to build a modular software system designed to increase capacity, provide needed functionality, and reduce the cost and schedule of future software changes to the F-16 aircraft. The resulting software included more than four million lines of code, a quarter of which met real-time deadlines in flight. F-16 development also involved building device drivers, real-time extensions to the Ada run-time system, a software engineering workstation network, an Ada compiler for the modular mission computer, software build and configuration management tools, simulation and test software, and interfaces for loading software into the airplane (Parris 1996).

The flight software's capability requirements were well-understood and stable, even though about a million lines of code were expected to be needed from the 250 developers organized as eight product teams, a chief engineer, plus a program manager and staff. However, the familiar capabilities were to be implemented in an unfamiliar way: modular software using Ada and object-oriented design and analysis, plus a transition from mainframes to workstations. Project management constraints included rigid "need dates" and commitment to developing three releases of equal task size, called tapes. The approach was high risk, because the first tape included little time for learning the new methods and tools, including concurrent development (Parris 1996).

Pressure on the project increased because funding levels were cut and schedule deadlines were considered to be extremely unrealistic. In addition, the project was organized in a way unfamiliar to most of the engineers. The participants were used to working in a matrix organization, so that each engineer belonged to a functional unit based on a type of skill (such as the design group or the test group) but was assigned to one or more projects as that skill was needed. In other words, an employee could be identified by his or her place in a matrix, with functional skills as one dimension and project names as the other dimension. Decisions were made by the functional unit hierarchy in this traditional organization. However, the contract for the F-16 required the project to be organized as an integrated product development team: combining individuals from different functional groups into an interdisciplinary work unit empowered with separate channels of accountability.

To enable the project members to handle the culture change associated with the new organization, the F-16 project used the accountability model shown in Figure 3.19. In the model, a team is any collection of people responsible for producing a given result. A stakeholder is anyone affected by that result or the way in which the result is achieved. The process involves a continuing exchange of accountings (a report of what you have done, are doing, or plan to do) and consequences, with the goal of doing only what makes sense for both the team and the stakeholders.



FIGURE 3.19 Accountability model (Parris 1996): a team accounts for desired results to its stakeholders, who respond with consequences such as clarification or adjustment of expectations, assistance, direction, and reinforcement (positive or negative).

The model was applied to the design of management systems and to team operating procedures, replacing independent behaviors with interdependence, emphasizing "being good rather than looking good" (Parris 1996).

As a result, several practices were required, including a weekly, one-hour team status review. To reinforce the notions of responsibility and accountability, each personal action item had explicit closure criteria and was tracked to completion. An action item could be assigned to a team member or a stakeholder, and often involved clarifying issues or requirements, providing missing information, or reconciling conflicts.

Because the teams had multiple, overlapping activities, an activity map was used to illustrate progress on each activity in the overall context of the project. Figure 3.20 shows part of an activity map. You can see how each bar represents an activity, and each activity is assigned a method for reporting progress. The point on a bar indicates when detailed planning should be in place to guide activities. The "today" line shows current status, and an activity map was used during the weekly reviews as an overview of the progress to be discussed.

For each activity, progress was tracked using an appropriate evaluation or performance method. Sometimes the method included cost estimation, critical path analysis, or schedule tracking. Earned value was used as a common measure for comparing progress on different activities: a scheme for comparing activities determined how much of the project had been completed by each activity. The earned-value calculation included weights to represent what percent of the total process each step constituted, relative to overall effort. Similarly, each component was assigned a size value that represented its proportion of the total product, so that progress relative to the final size could be tracked, too.


FIGURE 3.20 Sample activity roadmap (adapted from Parris 1996). Horizontal bars track overlapping activities across late 1995 and 1996 (Tape 2 problem resolution, Tape 3 code, Tape 3 problem resolution, throughput and memory recovery, Tape U1 code, Tape U2 capability size estimates, Tape U1 problem resolution), each labeled with its reporting method (EVS = earned value statusing; PPL = prioritized problem list); a vertical "today" line marks current status.

Then, an earned-value summary chart, similar to Figure 3.21, was presented at each review meeting.
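A minimal sketch of this bookkeeping (our own simplification, not Parris's actual scheme): each process step carries a weight expressing its share of total effort, each component a size expressing its share of the total product, and finishing a step for a component earns their product.

    # Step weights (share of total effort) and component sizes (share of total
    # product) are both invented for illustration.
    STEP_WEIGHTS = {"design": 0.30, "code": 0.40, "test": 0.30}

    def earned_value(completed, component_sizes):
        """Sum of size * step weight over the (component, step) pairs finished."""
        return sum(component_sizes[comp] * STEP_WEIGHTS[step]
                   for comp, step in completed)

    sizes = {"navigation": 0.25, "display": 0.45, "comms": 0.30}
    done = [("navigation", "design"), ("navigation", "code"), ("display", "design")]
    print(earned_value(done, sizes))   # 0.075 + 0.100 + 0.135 = 0.31 of the project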

Once part of a product was completed, its progress was no longer tracked. Instead, its performance was tracked, and problems were recorded. Each problem was assigned a priority by the stakeholders, and a snapshot of the top five problems on each product team's list was presented at the weekly review meeting for discussion. The priority lists generated discussion about why the problems occurred, what work-arounds could be put in place, and how similar problems could be prevented in the future.

The project managers found a major problem with the accountability model: it told them nothing about coordination among different teams. As a result, they built software to catalog and track the hand-offs from one team to another, so that every team could understand who was waiting for action or products from them. A model of the hand-offs was used for planning, so that undesirable patterns or scenarios could be eliminated. Thus, an examination of the hand-off model became part of the review process.

It is easy to see how the accountability model, coupled with the hand-off model, addressed several aspects of project management. First, it provided a mechanism for communication and coordination. Second, it encouraged risk management, especially by forcing team members to examine problems in review meetings. And third, it integrated progress reporting with problem solving. Thus, the model actually prescribes a project management process that was followed on the F-16 project.



FIGURE 3.21 Example earned-value summary chart (Parris 1996): planned versus actual completions, in equivalent units of earned value, plotted week by week, with a vertical line marking today.

Anchoring Milestones

In Chapter 2, we examined many process models that described how the technical activities of software development should progress. Then, in this chapter, we looked at several methods to organize projects to perform those activities. The Alpha AXP and F-16 examples have shown us that project management must be tightly integrated with the development process, not just for tracking progress, but, more importantly, for effective planning and decision making to prevent major problems from derailing the project. Boehm (1996) has identified three milestones common to all software development processes that can serve as a basis for both technical process and project management:

• life-cycle objectives
• life-cycle architecture
• initial operational capability

We can examine each milestone in more detail. The purpose of the life-cycle objectives milestone is to make sure the stakeholders agree with the system's goals. The key stakeholders act as a team to determine the system boundary, the environment in which the system will operate, and the external systems with which the system must interact. Then, the stakeholders work through scenarios of how the system will be used. The scenarios can be expressed in terms of prototypes, screen layouts, data flows, or other representations, some of which we will learn about in later chapters. If the system is business- or safety-critical, the scenarios should also include instances where the system fails, so that designers can determine how the system is supposed to react to or even avoid a critical failure.


Similarly, other essential features of the system are derived and agreed upon. The result is an initial life-cycle plan that lays out (Boehm 1996):

• Objectives: Why is the system being developed?
• Milestones and schedules: What will be done by when?
• Responsibilities: Who is responsible for a function?
• Approach: How will the job be done, technically and managerially?
• Resources: How much of each resource is needed?
• Feasibility: Can this be done, and is there a good business reason for doing it?

The life-cycle architecture is coordinated with the life-cycle objectives. The purpose of the life-cycle architecture milestone is defining both the system and the software architectures, the components of which we will study in Chapters 5, 6, and 7. The architectural choices must address the project risks addressed by the risk management plan, focusing on system evolution in the long term as well as system requirements in the short term.

The key elements of the initial operational capability are the readiness of the software itself, the site at which the system will be used, and the selection and training of the team that will use it. Boehm notes that different processes can be used to implement the initial operational capability, and different estimating techniques can be applied at different stages.

To supplement these milestones, Boehm suggests using the Win-Win spiral model, illustrated in Figure 3.22 and intended to be an extension of the spiral model we examined in Chapter 2. The model encourages participants to converge on a common understanding of the system's next-level objectives, alternatives, and constraints.

Boehm applied Win-Win, called the Theory W approach, to the U.S. Department of Defense's STARS program, whose focus was developing a set of prototype software engineering environments.

FIGURE 3.22 Win-Win spiral model (Boehm 1996).

1. Identify next-level stakeholders.
2. Identify stakeholders' win conditions.
3. Reconcile win conditions. Establish next-level objectives, constraints, and alternatives.
4. Evaluate product and process alternatives. Resolve risks.
5. Define next level of product and process, including partitions.
6. Validate product and process definitions.
7. Review, commitment.



The project was a good candidate for Theory W, because there was a great mismatch between what the government was planning to build and what the potential users needed and wanted. The Win-Win model led to several key compromises, including negotiation of a set of common, open interface specifications to enable tool vendors to reach a larger marketplace at reduced cost, and the inclusion of three demonstration projects to reduce risk. Boehm reports that Air Force costs on the project were reduced from $140 to $57 per delivered line of code and that quality improved from 3 to 0.035 faults per thousand delivered lines of code. Several other projects report similar success. TRW developed over half a million lines of code for complex distributed software within budget and schedule using Boehm's milestones with five increments. The first increment included distributed kernel software as part of the life-cycle architecture milestone; the project was required to demonstrate its ability to meet projections that the number of requirements would grow over time (Royce 1990).

3.7 INFORMATION SYSTEMS EXAMPLE

Let us return to the Piccadilly Television airtime sales system to see how we might estimate the amount of effort required to build the software. Because we are in the preliminary stages of understanding just what the software is to do, we can use COCOMO II's initial effort model to suggest the number of person-months needed. A person-month is the amount of time one person spends working on a software development project for one month. The COCOMO model assumes that the number of person-months does not include holidays and vacations, nor time off at weekends. The number of person-months is not the same as the time needed to finish building the system. For instance, a system may require 100 person-months, but it can be finished in one month by having ten people work in parallel for one month, or in two months by having five people work in parallel (assuming that the tasks can be accomplished in that manner).

The first COCOMO II model, application composition, is designed to be used in the earliest stages of development. Here, we compute application points to help us determine the likely size of the project. The application point count is determined from three calculations: the number of server data tables used with a screen or report, the number of client data tables used with a screen or report, and the percentage of screens, reports, and modules reused from previous applications. Let us assume that we are not reusing any code in building the Piccadilly system. Then we must begin our estimation process by predicting how many screens and reports we will be using in this application. Suppose our initial estimate is that we need three screens and one report:

• a booking screen to record a new advertising sales booking
• a ratecard screen showing the advertising rates for each day and hour
• an availability screen showing which time slots are available
• a sales report showing total sales for the month and year, and comparing them with previous months and years

For each screen or report, we use the guidance in Table 3.10 and an estimate of the number of data tables needed to produce a description of the screen or report. For example, the booking screen may require the use of three data tables: a table of available


TABLE 3.15 Ratings for Piccadilly Screens and Reports

Name            Screen or Report   Complexity   Weight
Booking         Screen             Simple       1
Ratecard        Screen             Simple       1
Availability    Screen             Medium       2
Sales           Report             Medium       5

time slots, a table of past usage by this customer, and a table of the contact information for this customer (such as name, address, tax number, and sales representative handling the sale). Thus, the number of data tables is fewer than four, so we must decide whether we need more than eight views. Since we are likely to need fewer than eight views, we rate the booking screen as "simple" according to the application point table. Similarly, we may rate the ratecard screen as "simple," the availability screen as "medium," and the sales report as "medium." Next, we use Table 3.11 to assign a complexity rate of 1 to simple screens, 2 to medium screens, and 5 to medium reports; a summary of our ratings is shown in Table 3.15.

We add all the weights in the rightmost column to generate a count of new application points (NOP): 9. Suppose our developers have low experience and low CASE maturity. Table 3.12 tells us that the productivity rate for this circumstance is 7. Then the COCOMO model tells us that the estimated effort to build the Piccadilly system is NOP divided by the productivity rate, or 1.29 person-months.
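To double-check the arithmetic, a minimal Python sketch (our own illustration) reproduces the estimate:

    # Piccadilly stage 1 estimate: three screens (simple, simple, medium) and one
    # medium report, no reuse; productivity factor 7 (low experience, low CASE
    # maturity). Weights follow Table 3.11.
    weights = {"simple screen": 1, "medium screen": 2, "medium report": 5}
    nop = 2 * weights["simple screen"] + weights["medium screen"] + weights["medium report"]
    print(nop)        # 9 new application points
    print(nop / 7)    # about 1.29 person-months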

As we understand more about the requirements for Piccadilly, we can use the other parts of COCOMO: the early design model and the postarchitecture model, based on nominal effort estimates derived from lines of code or function points. These models use a scale exponent computed from the project's scale factors, listed in Table 3.16.

TABLE 3.16 Scale Factors for COCOMO II Early Design and Postarchitecture Models

Precedentedness: Very Low = thoroughly unprecedented; Low = largely unprecedented; Nominal = somewhat unprecedented; High = generally familiar; Very High = largely familiar; Extra High = thoroughly familiar.

Flexibility: Very Low = rigorous; Low = occasional relaxation; Nominal = some relaxation; High = general conformity; Very High = some conformity; Extra High = general goals.

Significant risks eliminated: Very Low = little (20%); Low = some (40%); Nominal = often (60%); High = generally (75%); Very High = mostly (90%); Extra High = full (100%).

Team interaction process: Very Low = very difficult interactions; Low = some difficult interactions; Nominal = basically cooperative interactions; High = largely cooperative; Very High = highly cooperative; Extra High = seamless interactions.

Process maturity: determined by questionnaire at every rating level.



"Extra high" is equivalent to a rating of zero, "very high" to 1, "high" to 2, "nomi- nal" to 3, " low" to 4, and "very low" to 5. Each of the scale factors 1s rated, and the sum of all ra tings is used to weight the initial effort estimate. For example, suppose we know that the type of application we are building for Piccadilly is generaLiy familiar to the development team; we can rate the first scale factor as ·'high." Similarly, we may rate flexibility as "very bigh," risk resolution as "nominal," team interaction as "high," and the maturity rating may turn out to be "low." We sum the ratings (2 + 1 + 3 + 2 + 4) to get a scale factor of 12. Then, we compute the scale exponent to be

1.01 + 0.01(12)

or 1.13. This scale exponent tells us that if our initial effort estimate is 100 person-months, then our new estimate, relative to the characteristics reflected in Table 3.16, is 100^1.13, or 182 person-months. In a similar way, the cost drivers adjust this estimate based on characteristics such as tool usage, analyst expertise, and reliability requirements. Once we calculate the adjustment factor, we multiply it by our 182-person-month estimate to yield an adjusted effort estimate.
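The following fragment (again an illustrative sketch, not part of the text) shows the scale-exponent arithmetic. The rating scores and the 100-person-month nominal estimate come from the discussion above; a full COCOMO II estimate would also multiply in the cost-driver adjustment factor.

```python
# Scale-exponent adjustment for the early design and postarchitecture models.
RATING_SCORE = {
    "extra high": 0, "very high": 1, "high": 2,
    "nominal": 3, "low": 4, "very low": 5,
}

piccadilly_ratings = {  # the illustrative ratings chosen in the text
    "precedentedness": "high",
    "flexibility": "very high",
    "significant risks eliminated": "nominal",
    "team interaction process": "high",
    "process maturity": "low",
}

scale_sum = sum(RATING_SCORE[r] for r in piccadilly_ratings.values())  # 12
exponent = 1.01 + 0.01 * scale_sum  # 1.01 + 0.01(12) = 1.13

nominal_effort = 100  # person-months
adjusted_effort = nominal_effort ** exponent  # about 182 person-months
print(f"exponent = {exponent:.2f}; "
      f"adjusted effort = {adjusted_effort:.0f} person-months")
```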

3.8 REAL-TIME EXAMPLE

The board investigating the Ariane-5 failure examined the software, the documentation, and the data captured before and during flight to determine what caused the failure (Lions et al. 1996). Its report notes that the launcher began to disintegrate 39 seconds after takeoff because the angle of attack exceeded 20 degrees, causing the boosters to separate from the main stage of the rocket; this separation triggered the launcher's self-destruction. The angle of attack was determined by software in the on-board computer on the basis of data transmitted by the active inertial reference system, SRI2. As the report notes, SRI2 was supposed to contain valid flight data, but instead it contained a diagnostic bit pattern that was interpreted erroneously as flight data. The erroneous data had been declared a failure, and the SRI2 had been shut off. Normally, the on-board computer would have switched to the other inertial reference system, SRI1, but that, too, had been shut down for the same reason.

The error occurred in a software module that computed meaningful results only before lift-off. As soon as the launcher lifted off, the function performed by this module served no useful purpose, so it was no longer needed by the rest of the system. However, the module continued its computations for approximately 40 seconds of flight based on a requirement for the Ariane-4 that was not needed for Ariane-5.

The internal events that led to the failure were reproduced by simulation calculations supported by memory readouts and examination of the software itself. Thus, the Ariane-5 destruction might have been prevented had the project managers developed a risk management plan, reviewed it, and developed risk avoidance or mitigation plans for each identified risk. To see how, consider again the steps of Figure 3.15. The first stage of risk assessment is risk identification. The possible problem with reuse of the Ariane-4 software might have been identified by a decomposition of the functions; someone might have recognized early on that the requirements for Ariane-5 were


different from Ariane-4. Or an assumption analysis might have revealed that the assumptions for the SRI in Ariane-4 were different from those for Ariane-5.

Once the risks were identified, the analysis phase might have included simulations, which probably would have highlighted the problem that eventually caused the rocket's destruction. And prioritization would have identified the risk exposure if the SRI did not work as planned; the high exposure might have prompted the project team to examine the SRI and its workings more carefully before implementation.

Risk control involves risk reduction, management planning, and risk resolution. Even if the risk assessment activities had missed the problems inherent in reusing the SRI from Ariane-4, risk reduction techniques including risk avoidance analysis might have noted that both SRIs could have been shut down for the same underlying cause. Risk avoidance might have involved using SRIs with two different designs, so that the design error would have shut down one but not the other. Or the fact that the SRI calculations were not needed after lift-off might have prompted the designers or implementers to shut down the SRI earlier, before it corrupted the data for the angle calculations. Similarly, risk resolution includes plans for mitigation and continual reassessment of risk. Even if the risk of SRI failure had not been caught earlier, a risk reassessment during design or even during unit testing might have revealed the problem in the middle of development. A redesign or development at that stage would have been costly, but not as costly as the complete loss of Ariane-5 on its maiden voyage.
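As a sketch of how such reasoning might be quantified, the fragment below ranks risks by risk exposure, the product of a risk's probability and the loss it would cause. The risks and numbers are hypothetical, invented for illustration.

```python
# Rank hypothetical risks by exposure = probability x loss.
risks = [
    # (description, probability, loss in millions of dollars)
    ("reused SRI software misbehaves after lift-off", 0.2, 500.0),
    ("both inertial reference systems shut down for the same cause", 0.1, 500.0),
    ("ground telemetry link drops during ascent", 0.3, 5.0),
]

# Mitigation effort goes first to the risks with the largest exposures.
for description, probability, loss in sorted(
        risks, key=lambda risk: risk[1] * risk[2], reverse=True):
    print(f"{description}: exposure = ${probability * loss:.1f}M")
```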

3.9 WHAT THIS CHAPTER MEANS FOR YOU

This chapter has introduced you to some of the key concepts in project management, including project planning, cost and schedule estimation, risk management, and team organization. You can make use of this information in many ways, even if you are not a manager. Project planning involves input from all team members, including you, and understanding the planning process and estimation techniques gives you a good idea of how your input will be used to make decisions for the whole team. Also, we have seen how the number of possible communication paths grows as the size of the team increases. You can take communication into account when you are planning your work and estimating the time it will take you to complete your next task.

We have also seen how communication styles differ and how they affect the way we interact with each other on the job. By understanding your teammates' styles, you can create reports and presentations for them that match their expectations and needs. You can prepare summary information for people with a bottom-line style and offer complete analytical information to those who are rational.

3.10 WHAT THIS CHAPTER MEANS FOR YOUR DEVELOPMENT TEAM

At the same time, you have learned how to organize a development team so that team interaction helps produce a better product. There are several choices for team structure, from a hierarchical chief programmer team to a loose, egoless approach. Each has its benefits, and each depends to some degree on the uncertainty and size of the project.



We have also seen how the team can work to anticipate and reduce risk from the project's beginning. Redundant functionality, team reviews, and other techniques can help us catch errors early, before they become embedded in the code as faults waiting to cause failures.

Similarly, cost estimation should be done early and often, including input from team members about progress in specifying, designing, coding, and testing the system. Cost estimation and risk management can work hand in hand; as cost estimates raise concerns about finishing on time and within budget, risk management techniques can be used to mitigate or even eliminate risks.

3.11 WHAT THIS CHAPTER MEANS FOR RESEARCHERS

This chapter has described many techniques that still require a great deal of research. Little is known about which team organizations work best in which situations. Likewise, cost- and schedule-estimation models are not as accurate as we would like them to be, and improvements can be made as we learn more about how project, process, product, and resource characteristics affect our efficiency and productivity. Some methods, such as machine learning, look promising but require a great deal of historical data to make them accurate. Researchers can help us to understand how to balance practicality with accuracy when using estimation techniques.

Similarly, a great deal of research is needed to make risk management techniques practical. The calculation of risk exposure is currently more an art than a science, and we need methods to help us make our risk calculations more relevant and our mitigation techniques more effective.

3.12 TERM PROJECT

Often, a company or organization must estimate the effort and time required to complete a project, even before detailed requirements are prepared. Using the approaches described in this chapter, or a tool of your choosing from other sources, estimate the effort required to build the Loan Arranger system. How many people will be required? What kinds of skills should they have? How much experience? What kinds of tools or techniques can you use to shorten the amount of time that development will take?

You may want to use more than one approach to generate your estimates. If you do, then compare and contrast the results. Examine each approach (its models and assumptions) to see what accounts for any substantial differences among estimates.

Once you have your estimates, evaluate them and their underlying assumptions to see how much uncertainty exists in them. Then, perform a risk analysis. Save your results; you can examine them at the end of the project to see which risks turned into real problems, and which ones were mitigated by your chosen risk strategies.

3.13 KEY REFERENCES

A great deal of information about COCOMO is available from the Center for Software Engineering at the University of Southern California. The Web site, http://sunset.usc.edu/csse/research/COCOMOII/cocomo_main.html, points to current research on COCOMO, including a Java implementation of COCOMO II. It is at this site that you can also find out about COCOMO user-group meetings and obtain a copy of the COCOMO II user's manual. Related information about function points is available from IFPUG, the International Function Point Users Group, in Westerville, Ohio.

The Center for Software Engineering also performs research on risk management. You can ftp a copy of its Software Risk Technical Advisor at ftp://usc.edu/pub/soft_engineering/demos/stra.tar.z and read about current research at http://sunset.usc.edu.

PC-based tools to support estimation are described and available from the Web site for Bournemouth University's Empirical Software Engineering Research Group: http://dec.bournemouth.ac.uk/ESERG.

Several companies producing commercial project management and cost-estimation tools have information available on their Web sites. Quantitative Software Management, producers of the SLIM cost-estimation package, is located at http://www.qsm.com. Likewise, Software Productivity Research offers a package called Checkpoint. Information can be found at http://www.spr.com. Computer Associates has developed a large suite of project management tools, including Estimacs for cost estimation and Planmacs for planning. A full description of its products is at http://www.cai.com/products.

The Software Technology Support Center at Hill Air Force Base in Ogden, Utah, produces a newsletter called CrossTalk that reports on method and tool evaluation. Its guidelines for successful acquisition and management can be found at http://stsc.hill.af.mil/stscdocs.html. The Center's Web pages also contain pointers to several technology areas, including project management and cost estimation; you can find the listing at http://stsc.hill.af.mil.

Team building and team interaction are essential on good software projects. Weinberg (1993) discusses work styles and their application to team building in the second volume of his series on software quality. Scholtes (1995) includes material on how to handle difficult team members.

Project management for small projects is necessarily different from that for large projects. The October 1999 issue of IEEE Computer addresses software engineering in the small, with articles about small projects, Internet time pressures, and extreme programming.

Project management for Web applications is somewhat different from more traditional software engineering. Mendes and Mosley (2006) explore the differences. In particular, they address estimation for Web applications in their book on "Web engineering."

3.14 EXERCISES

1. You are about to bake a two-layer birthday cake with icing. Describe the cake-baking project as a work breakdown structure. Generate an activity graph from that structure. What is the critical path?

2. Figure 3.23 is an activity graph for a software development project. The number corresponding to each edge of the graph indicates the number of days required to complete the activity represented by that branch. For example, it will take four days to complete the activity that ends in milestone E. For each activity, list its precursors and compute the earliest start time, the latest start time, and the slack. Then, identify the critical path.

FIGURE 3.23 Activity graph for Exercise 2.

3. Figure 3.24 is an activity graph. Find the critical path.

FIGURE 3.24 Activity graph for Exercise 3.

4. On a software development project, what kinds of activities can be performed in parallel? Explain why the activity graph sometimes hides the interdependencies of these activities.

5. Describe how adding personnel to a project that is behind schedule might make the project completion date even later.

6. A large government agency wants to contract with a software development firm for a project involving 20,000 lines of code. The Hardand Software Company uses Walston and Felix's estimating technique for determining the number of people required and the time needed to write that much code. How many person-months does Hardand estimate will be needed? If the government's estimate of size is 10% too low (i.e., 20,000 lines of code represent only 90% of the actual size), how many additional person-months will be needed? In general, if the government's size estimate is k% too low, by how much must the person-month estimate change?


7. Explain why it takes longer to develop a utility program than an applications program and longer still to develop a system program.

8. Manny's Manufacturing must decide whether to build or buy a software package to keep track of its inventory. Manny's computer experts estimate that it will cost $325,000 to buy the necessary programs. To build the programs in-house, programmers will cost $5000 each per month. What factors should Manny consider in making his decision? When is it better to build? To buy?

9. Brooks says that adding people to a late project makes it even later (Brooks 1975). Some schedule-estimation techniques seem to indicate that adding people to a project can shorten development time. Is this a contradiction? Why or why not?

10. Many studies indicate that two of the major reasons that a project is late are changing requirements (called requirements volatility or instability) and employee turnover. Review the cost models discussed in this chapter, plus any you may use on your job, and determine which models have cost factors that reflect the effects of these reasons.

11. Even on your student projects, there are significant risks to your finishing your project on time. Analyze a student software development project and list the risks. What is the risk exposure? What techniques can you use to mitigate each risk?

12. Many project managers plan their schedules based on programmer productivity on past projects. This productivity is often measured in terms of a unit of size per unit of time. For example, an organization may produce 300 lines of code per day or 1200 application points per month. Is it appropriate to measure productivity in this way? Discuss the measurement of productivity in terms of the following issues:

• Different languages can produce different numbers of lines of code for implementation of the same design.
• Productivity in lines of code cannot be measured until implementation begins.
• Programmers may structure code to meet productivity goals.


4

Capturing the Requirements

In this chapter, we look at
• eliciting requirements from our customers
• modeling requirements
• reviewing requirements to ensure their quality
• documenting requirements for use by the design and test teams

In earlier chapters, when looking at various process models, we noted several key steps for successful software development. In particular, each proposed model of the software-development process includes activities aimed at capturing requirements: understanding our customers' fundamental problems and goals. Thus, our understanding of system intent and function starts with an examination of requirements. In this chapter, we look at the various types of requirements and their different sources, and we discuss how to resolve conflicting requirements. We detail a variety of modeling notations and requirements-specification methods, with examples of both automated and manual techniques. These models help us understand the requirements and document the relationships among them. Once the requirements are well understood, we learn how to review them for correctness and completeness. At the end of the chapter, we learn how to choose a requirements-specification method that is appropriate to the project under consideration, based on the project's size, scope, and the criticality of its mission.

Analyzing requirements involves much more than merely writing down what the customer wants. As we shall see, we must find requirements on which both we and the customer can agree and on which we can build our test procedures. First, let us examine exactly what requirements are, why they are so important (see Sidebar 4.1), and how we work with users and customers to define and document them.



SIDEBAR 4.1 WHY ARE REQUIREMENTS IMPORTANT?

The hardest single part of building a software system is deciding precisely what to build. No other part of the conceptual work is as difficult as establishing the detailed technical requirements, including all the interfaces to people, to machines, and to other software systems. No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later.

(Brooks 1987)

In 1994, the Standish Group surveyed over 350 companies about their over 8000 software projects to find out how well they were faring. The results are sobering. Thirty-one percent of the software projects were canceled before they were completed. Moreover, in large companies, only 9% of the projects were delivered on time and within budget; 16% met those criteria in small companies (Standish 1994). Similar results have been reported since then; the bottom line is that developers have trouble delivering the right system on time and within budget.

To understand why, Standish (1995) asked the survey respondents to explain the causes of the failed projects. The top factors were reported to be

1. Incomplete requirements (13.1%)

2. Lack of user involvement (12.4%)

3. Lack of resources (10.6%)

4. Unrealistic expectations (9.9%)

5. Lack of executive support (9.3%)

6. Changing requirements and specifications (8.7%)

7. Lack of planning (8.1 % )

8. System no longer needed (7.5%)

Notice that some part of the requirements elicitation, definition, and management process is involved in almost all of these causes. Lack of care in understanding, documenting, and managing requirements can lead to a myriad of problems: building a system that solves the wrong problem, that doesn't function as expected, or that is difficult for the users to understand and use.

Furthermore, requirements errors can be expensive if they are not detected and fixed early in the development process. Boehm and Papaccio (1988) report that if it costs $1 to find and fix a requirements-based problem during the requirements definition process, it can cost $5 to repair it during design, $10 during coding, $20 during unit testing, and as much as $200 after delivery of the system! So it pays to take time to understand the problem and its context, and to get the requirements right the first time.

4.1 THE REQUIREMENTS PROCESS

A customer who asks us to build a new system has some notion of what the system should do. Often, the customer wants to automate a manual task, such as paying bills electronically rather than with handwritten checks. Sometimes, the customer wants to enhance or extend a current manual or automated system. For example, a telephone billing system that charged customers only for local telephone service and long-distance calls may be updated to bill for call forwarding, call waiting, and other new features. More and more frequently, a customer wants products that do things that have never been done before: tailoring electronic news to a user's interests, changing the shape of an airplane wing in mid-flight, or monitoring a diabetic's blood sugar and automatically controlling insulin dosage. No matter whether its functionality is old or new, a proposed software system has a purpose, usually expressed in terms of goals or desired behavior.

A requirement is an expression of desired behavior. A requirement deals with objects or entities, the states they can be in, and the functions that are performed to change states or object characteristics. For example, suppose we are building a system to generate paychecks for our customer's company. One requirement may be that the checks are to be issued every two weeks. Another may be that direct deposit of an employee's check is to be allowed for every employee at a certain salary level or higher. The customer may request access to the paycheck system from several different company locations. All of these requirements are specific descriptions of functions or characteristics that address the general purpose of the system: to generate paychecks. Thus, we look for requirements that identify key entities ("an employee is a person who is paid by the company"), limit entities ("an employee may be paid for no more than 40 hours per week"), or define relationships among entities ("employee X is supervised by employee Y if Y can authorize a change to X's salary").
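As an illustration (ours, not part of the requirements themselves), a test team might later encode such statements as checkable rules; the Employee class, field names, and sample data below are hypothetical.

```python
# Hypothetical encoding of two of the payroll requirements as checkable rules.
from dataclasses import dataclass
from typing import Optional

MAX_PAID_HOURS_PER_WEEK = 40  # "no more than 40 hours per week"

@dataclass
class Employee:
    name: str
    hours_worked: float
    supervisor: Optional[str] = None  # who can authorize a salary change

def payable_hours(employee: Employee) -> float:
    # Limiting an entity: pay is capped at the required maximum.
    return min(employee.hours_worked, MAX_PAID_HOURS_PER_WEEK)

def is_supervised_by(x: Employee, y_name: str) -> bool:
    # "Employee X is supervised by employee Y if Y can authorize
    # a change to X's salary."
    return x.supervisor == y_name

pat = Employee("Pat", hours_worked=45, supervisor="Lee")
print(payable_hours(pat))            # 40, not 45
print(is_supervised_by(pat, "Lee"))  # True
```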

Note that none of these requirements specify how the system is to be implemented. There is no mention of what database-management system to use, whether a client-server architecture will be employed, how much memory the computer is to have, or what programming language must be used to develop the system. These implementation-specific descriptions are not considered to be requirements (unless they are mandated by the customer). The goal of the requirements phase is to understand the customer's problems and needs. Thus, requirements focus on the customer and the problem, not on the solution or the implementation. We often say that requirements designate what behavior the customer wants, without saying how that behavior will be realized. Any discussion of a solution is premature until the problem is clearly defined.

It helps to describe requirements as interactions among real-world phenomena, without any reference to system phenomena. For example, billing requirements should refer to customers, services billed, billing periods, and amounts, without mentioning system data or procedures. We take this approach partly to get at the heart of the customer's needs, because sometimes the stated needs are not the real needs. Moreover, the customer's problem is usually most easily stated in terms of the customer's business. Another reason we take this approach is to give the designer maximum flexibility in deciding how to carry out the requirements. During the specification phase, we will decide which requirements will be fulfilled by our software system (as opposed to requirements that are addressed by special-purpose hardware devices, by other software systems, or by human operators or users); during the design phase, we will devise a plan for how the specified behavior will be implemented.

Figure 4.1 illustrates the process of determining the requirements for a proposed software system. The person performing these tasks usually goes by the title of requirements analyst or systems analyst. As a requirements analyst, we first work with our customers to elicit the requirements by asking questions, examining current behavior, or demonstrating similar systems. Next, we capture the requirements in a model or a prototype. This exercise helps us to better understand the required behavior, and usually raises additional questions about what the customer wants to happen in certain situations (e.g., what if an employee leaves the company in the middle of a pay period?). Once the requirements are well understood, we progress to the specification phase, in which we decide which parts of the required behavior will be implemented in software. During validation, we check that our specification matches what the customer expects to see in the final product. Analysis and validation activities may expose problems or omissions in our models or specification that cause us to revisit the customer and revise our models and specification. The eventual outcome of the requirements process is a Software Requirements Specification (SRS), which is used to communicate to other software developers (designers, testers, maintainers) how the final product ought to behave. Sidebar 4.2 discusses how the use of agile methods affects the requirements process and the resulting requirements documents. The remainder of this chapter explores the requirements process in more detail.

FIGURE 4.1 Process for capturing the requirements: elicitation (eliciting the requirements), analysis (understanding and modeling the desired behavior), specification (documenting the behavior of the proposed software system), and validation (checking that our specification matches the users' requirements), producing the Software Requirements Specification (SRS).

4.2 REQUIREMENTS ELICITATION

Requirements elicitation is an especially critical part of the process. We must use a variety of techniques to determine what the users and customers really want. Sometimes, we are automating a manual system, so it is easy to examine what the system already does. But often, we must work with users and customers to understand a completely new problem. This task is rarely as simple as asking the right questions to pluck the requirements from the customer's head. At the early stage of a project, requirements are ill-formed and ill-understood by everyone. Customers are not always good at describing exactly what they want or need, and we are not always good at understanding someone else's business concerns. The customers know their business, but they cannot always describe their business problems to outsiders; their descriptions are full of jargon and assumptions with which we may not be familiar.



SIDEBAR 4.2 AGILE REQUIREMENTS MODELING

As we noted in Chapter 2, requirements analysis plays a large role in deciding whether to use agile methods as the basis for software development. If the requirements are tightly coupled and complex, or if future requirements and enhancements are likely to cause major changes to the system's architecture, then we may be better off with a "heavy" process that emphasizes up-front modeling. In a heavy process, developers put off coding until the requirements have been modeled and analyzed, an architecture is proposed that reflects the requirements, and a detailed design has been chosen. Each of these steps requires a model, and the models are related and coordinated so that the design fully implements the requirements. This approach is most appropriate for large-team development, where the documentation helps the developers to coordinate their work, and for safety-critical systems, where a system's correctness and safety are more important than its release date.

However, for problems where the requirements are uncertain, it can be cumbersome to employ a heavy process and have to update the models with every change to the requirements. As an alternative approach, agile methods gather and implement the requirements in increments. The initial release implements the most essential requirements, as defined by the stakeholders' business goals. As new requirements emerge with use of the system or with better understanding of the problem, they are implemented in subsequent releases of the system. This incremental development allows for "early and continuous delivery of valuable software" (Beck et al. 2004) and accommodates emergent and late-breaking requirements.

Extreme programming (XP) takes agile requirements processes to the extreme, in that the system is built to the requirements that happen to be defined at the time, with no planning or designing for possible future requirements. Moreover, XP forgoes traditional requirements documentation, and instead encodes the requirements as test cases that the eventual implementation must pass. Berry (2002a) points out that the trade-off for agile methods' flexibility is the difficulty of making changes to the system as requirements are added, deleted, or changed. But there can be additional problems: XP uses test cases to specify requirements, so a poorly written test case can lead to the kinds of misunderstandings described in this chapter.

Likewise, we as developers know about computer solutions, but not always about how possible solutions will affect our customers' business activities. We, too, have our jargon and assumptions, and sometimes we think everyone is speaking the same language, when in fact people have different meanings for the same words. It is only by discussing the requirements with everyone who has a stake in the system, coalescing these different views into a coherent set of requirements, and reviewing these documents with the stakeholders that we all come to an agreement about what the requirements are. (See Sidebar 4.3 for an alternative viewpoint.) If we cannot agree on what the requirements are, then the project is doomed to fail.


SIDEBAR 4.3 USING VIEWPOINTS TO MANAGE INCONSISTENCY

Although most software engineers strive for consistent requirements, Easterbrook and Nuseibeh (1996) argue that it is often desirable to tolerate and even encourage inconsistency during the requirements process. They claim that because the stakeholders' understanding of the domain and their needs evolve over time, it is pointless to try to resolve inconsistencies early in the requirements process. Early resolutions are expensive and often unnecessary (and can occur naturally as stakeholders revise their views). They can also be counterproductive if the resolution process focuses attention on how to come to agreement rather than on the underlying causes of the inconsistency (e.g., stakeholders' misunderstanding of the domain).

Instead, Easterbrook and Nuseibeh propose that stakeholders' views be documented and maintained as separate Viewpoints (Nuseibeh et al. 1994) throughout the software development process. The requirements analyst defines consistency rules that should apply between Viewpoints (e.g., how objects, states, or transitions in one Viewpoint correspond to similar entities in another Viewpoint, or how one Viewpoint refines another Viewpoint), and the Viewpoints are analyzed (possibly automatically) to see if they conform to the consistency rules. If the rules are violated, the inconsistencies are recorded as part of the Viewpoints, so that other software developers do not mistakenly implement a view that is being contested. The recorded inconsistencies are rechecked whenever an associated Viewpoint is modified, to see if the Viewpoints are still inconsistent; and the consistency rules are checked periodically, to see if any have been broken by evolving Viewpoints.

The outcome of this approach is a requirements document that accommodates all stakeholders' views at all times. Inconsistencies are highlighted but not addressed until there is sufficient information to make an informed decision. This way, we avoid committing ourselves prematurely to requirements or design decisions.

So who are the stakeholders? It turns out that there are many people who have something to contribute to the requirements of a new system:

• Clients, who are the ones paying for the software to be developed: By paying for the development, the clients are, in some sense, the ultimate stakeholders, and have the final say about what the product does (Robertson and Robertson 1999).

• Customers, who buy the software after it is developed: Sometimes the customer and the user are the same; other times, the customer is a business manager who is interested in improving the productivity of her employees. We have to understand the customers' needs well enough to build a product that they will buy and find useful.

• Users, who are familiar with the current system and will use the future system: These are the experts on how the current system works, which features are the most useful, and which aspects of the system need improving. We may also want to consult with special-interest groups of users, such as users with disabilities, people who are unfamiliar with or uncomfortable using computers, expert users, and so on, to understand their particular needs.

• Domain experts, who are familiar with the problem that the software must automate: For example, we would consult a financial expert if we were building a financial package, or a meteorologist if our software were to model the weather. These people can contribute to the requirements, or will know about the kinds of environments to which the product will be exposed.

• Market researchers, who have conducted surveys to determine future trends and potential customers' needs: They may assume the role of the customer if our software is being developed for the mass market and no particular customer has been identified yet.

• Lawyers or auditors, who are familiar with government, safety, or legal requirements: For example, we might consult a tax expert to ensure that a payroll package adheres to the tax law. We may also consult with experts on standards that are relevant to the product's functions.

• Software engineers or other technology experts: These experts ensure that the product is technically and economically feasible. They can educate the customer about innovative hardware and software technologies, and can recommend new functionality that takes advantage of these technologies. They can also estimate the cost and development time of the product.

Each stakeholder has a particular view of the system and how it should work, and often these views conflict. One of the many skills of a requirements analyst is the ability to understand each view and capture the requirements in a way that reflects the concerns of each participant. For example, a customer may specify that a system perform a particular task, but the customer is not necessarily the user of the proposed system. The user may want the task to be performed in three modes: a learning mode, a novice mode, and an expert mode; this separation will allow the user to learn and master the system gradually. Some systems are implemented in this way, so that new users can adapt to the new system gradually. However, conflicts can arise when ease of use suggests a slower system than response-time requirements permit.

Also, different participants may expect differing levels of detail in the requirements documentation, in which case the requirements will need to be packaged in different ways for different people. In addition, users and developers may have preconceptions (right or wrong) about what the other group values and how it acts. Table 4.1 summarizes some of the common stereotypes. This table emphasizes the role that human interaction plays in the development of software systems; good requirements analysis requires excellent interpersonal skills as well as solid technical skills. The book's Web site contains suggestions for addressing each of these differences in perception.

TABLE 4.1 How Users and Developers View Each Other (Scharer 1990)

How Developers See Users
• Users don't know what they want.
• Users can't articulate what they want.
• Users are unable to provide a usable statement of needs.
• Users have too many needs that are politically motivated.
• Users want everything right now.
• Users can't remain on schedule.
• Users can't prioritize needs.
• Users are unwilling to compromise.
• Users refuse to take responsibility for the system.
• Users are not committed to development projects.

How Users See Developers
• Developers don't understand operational needs.
• Developers can't translate clearly stated needs into a successful system.
• Developers set unrealistic standards for requirements definition.
• Developers place too much emphasis on technicalities.
• Developers are always late.
• Developers can't respond quickly to legitimately changing needs.
• Developers are always over budget.
• Developers say "no" all the time.
• Developers try to tell us how to do our jobs.
• Developers ask users for time and effort, even to the detriment of the users' important primary duties.

In addition to interviewing stakeholders, other means of eliciting requirements include

• Reviewing available documentation, such as documented procedures of manual tasks, and specifications or user manuals of automated systems

• Observing the current system (if one exists), to gather objective information about how the users perform their tasks, and to better understand the system we are about to automate or to change; often, when a new computer system is developed, the old system continues to be used because it provides some critical function that the designers of the new system overlooked

• Apprenticing with users (Beyer and Holtzblatt 1995), to learn about users' tasks in more detail, as the user performs them

• Interviewing users or stakeholders in groups, so that they will be inspired by one another's ideas

• Using domain-specific strategies, such as Joint Application Design (Wood and Silver 1995) or PIECES (Wetherbe 1984) for information systems, to ensure that stakeholders consider specific types of requirements that are relevant to their particular situations

• Brainstorming with current and potential users about how to improve the proposed product

The Volere requirements process model (Robertson and Robertson 1999), as shown in Figure 4.2, suggests some additional sources for requirements, such as templates and libraries of requirements from related systems that we have developed.

FIGURE 4.2 Sources of possible requirements (Robertson and Robertson 1999): stakeholder wants, the current organization and systems, existing documents, domain models, and reusable requirements, feeding a model of the current situation.

4.3 TYPES OF REQUIREMENTS

When most people think about requirements, they think about required functionality: What services should be provided? What operations should be performed? What should be the reaction to certain stimuli? How does required behavior change over time and in response to the history of events? A functional requirement describes required behavior in terms of required activities, such as reactions to inputs, and the state of each entity before and after an activity occurs. For instance, for a payroll system, the functional requirements state how often paychecks are issued, what input is necessary for a paycheck to be printed, under what conditions the amount of pay can be changed, and what causes the removal of an employee from the payroll list.


The functional requirements define the boundaries of the solution space for our problem. The solution space is the set of possible ways that software can be designed to implement the requirements, and initially that set can be very large. However, in practice it is usually not enough for a software product to compute correct outputs; there are other types of requirements that also distinguish between acceptable and unacceptable products. A quality requirement, or nonfunctional requirement, describes some quality characteristic that the software solution must possess, such as fast response time, ease of use, high reliability, or low maintenance costs. A design constraint is a design decision, such as choice of platform or interface components, that has already been made and that restricts the set of solutions to our problem. A process constraint is a restriction on the techniques or resources that can be used to build the system. For example, customers may insist that we use agile methods, so that they can use early versions of the system while we continue to add features. Thus, quality requirements, design constraints, and process constraints further restrict our solution space by differentiating acceptable, well-liked solutions from unused products. Table 4.2 gives examples of each kind of requirement.

Quality requirements sometimes sound like "motherhood" characteristics that all products ought to possess. After all, who is going to ask for a slow, unfriendly, unreliable, unmaintainable software system? It is better to think of quality requirements as design criteria that can be optimized and can be used to choose among alternative implementations of functional requirements. Given this approach, the question to be answered by the requirements is: To what extent must a product satisfy these quality requirements to be acceptable? Sidebar 4.4 explains how to express quality requirements such that we can test whether they are met.


TABLE 4.2 Questions to Tease Out Different Types of Requirements

Functional Requirements

Functionality
• What will the system do?
• When will the system do it?
• Are there several modes of operation?
• What kinds of computations or data transformations must be performed?
• What are the appropriate reactions to possible stimuli?

Data
• For both input and output, what should be the format of the data?
• Must any data be retained for any period of time?

Design Constraints

Physical Environment
• Where is the equipment to be located?
• Is there one location or several?
• Are there any environmental restrictions, such as temperature, humidity, or magnetic interference?
• Are there any constraints on the size of the system?
• Are there any constraints on power, heating, or air conditioning?
• Are there constraints on the programming language because of existing software components?

Interfaces
• Is input coming from one or more other systems?
• Is output going to one or more other systems?
• Is there a prescribed way in which input/output data must be formatted?
• Is there a prescribed medium that the data must use?

Users
• Who will use the system?
• Will there be several types of users?
• What is the skill level of each user?

Process Constraints

Resources
• What materials, personnel, or other resources are needed to build the system?
• What skills must the developers have?

Documentation
• How much documentation is required?
• Should it be online, in book format, or both?
• To what audience should each type of documentation be addressed?

Standards

Quality Requirements

Performance
• Are there constraints on execution speed, response time, or throughput?
• What efficiency measures will apply to resource usage and response time?
• How much data will flow through the system?
• How often will data be received or sent?

Usability and Human Factors
• What kind of training will be required for each type of user?
• How easy should it be for a user to understand and use the system?
• How difficult should it be for a user to misuse the system?

Security
• Must access to the system or information be controlled?
• Should each user's data be isolated from the data of other users?
• Should user programs be isolated from other programs and from the operating system?
• Should precautions be taken against theft or vandalism?

Reliability and Availability
• Must the system detect and isolate faults?
• What is the prescribed mean time between failures?
• Is there a maximum time allowed for restarting the system after a failure?
• How often will the system be backed up?
• Must backup copies be stored at a different location?
• Should precautions be taken against fire or water damage?

Maintainability
• Will maintenance merely correct errors, or will it also include improving the system?
• When and in what ways might the system be changed in the future?
• How easy should it be to add features to the system?
• How easy should it be to port the system from one platform (computer, operating system) to another?

Precision and Accuracy
• How accurate must data calculations be?
• To what degree of precision must calculations be made?

Time to Delivery / Cost
• Is there a prescribed timetable for development?
• Is there a limit on the amount of money to be spent on development or on hardware or software?



SIDEBAR 4.4 MAKING REQUIREMENTS TESTABLE

In writing about good design, Alexander (1979a) encourages us to make our requirements testable. By this, he means that once a requirement is stated, we should be able to determine whether or not a proposed solution meets the requirement. This evaluation must be objective; that is, the conclusion as to whether the requirement is satisfied must not vary according to who is doing the evaluation.

Robertson and Robertson (1999) point out that testability (which they call "measurability") can be addressed when requirements are being elicited. The idea is to quantify the extent to which each requirement must be met. These fit criteria form objective standards for judging whether a proposed solution satisfies the requirements. When such criteria cannot be easily expressed, then the requirement is likely to be ambiguous, incomplete, or incorrect. For example, a customer may state a quality requirement this way:

Water quality information must be accessible immediately.

How do you test that your product meets this requirement? The customer probably has a clear idea about what "immediately" means, and that notion must be captured in the requirement. We can restate more precisely what we mean by "immediately":

Water quality records must be retrieved within 5 seconds of a request.

This second formulation of the requirement can be tested objectively: a series of requests is made, and we check that the system supplies the appropriate record within 5 seconds of each request.
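For example, the fit criterion might be automated as an acceptance test along the following lines; fetch_water_quality_record is a hypothetical stand-in for the system's real retrieval interface.

```python
# Sketch of an automated acceptance test for the 5-second fit criterion.
import time

MAX_RESPONSE_SECONDS = 5.0  # "within 5 seconds of a request"

def fetch_water_quality_record(request_id: str) -> dict:
    # Dummy retrieval so the sketch runs; the real system would be called here.
    return {"request_id": request_id, "quality": "acceptable"}

def meets_fit_criterion(request_ids) -> bool:
    for request_id in request_ids:
        start = time.monotonic()
        fetch_water_quality_record(request_id)
        elapsed = time.monotonic() - start
        if elapsed > MAX_RESPONSE_SECONDS:
            return False  # objective failure: the requirement is not met
    return True

print(meets_fit_criterion(["well-12", "well-37", "reservoir-3"]))
```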

It is relatively easy to determine fit criteria for quality requirements that are naturally quantitative (e.g., performance, size, precision, accuracy, time to delivery). What about more subjective quality requirements, like usability or maintainability? In these cases, developers use focus groups or metrics to evaluate fit criteria:

• 75% of users shall judge the new system to be as usable as the existing system.

• After training, 90% of users shall be able to process a new account within 5 minutes.

• A module will encapsulate the data representation of at most one data type.

• Computation errors shall be fixed within 3 weeks of being reported.

Fit criteria that cannot be evaluated before the final product is delivered are harder to assess:

• The system shall not be unavailable for more than a total maximum of 3 minutes each year.

• The mean-time-between-failures shall be no less than 1 year.

In these cases, we either estimate a system's quality attributes (e.g., there are techniques for estimating system reliability, and for estimating the number of faults per line of code) or evaluate the delivered system during its operation, and suffer some financial penalty if the system does not live up to its promise.

Interestingly, what gets measured gets done. That is, unless a fit criterion is unrealistic, it will probably be met. The key is to determine, with the customer, just how to demonstrate that a delivered system meets its requirements. The Robertsons suggest three ways to help make requirements testable:

• Specify a quantitative description for each adverb and adjective so that the meaning of qualifiers is clear and unambiguous.

• Replace pronouns with specific names of entities.

• Make sure that every noun is defined in exactly one place in the requirements documents.

An alternative approach, advocated by the Quality Function Deployment (QFD) school (Akao 1990), is to realize quality requirements as special-purpose functional requirements, and to test quality requirements by testing how well their associated functional requirements have been satisfied. This approach works better for some quality requirements than it does for others. For example, real-time requirements can be expressed as additional conditions or constraints on when required functionality occurs. Other quality requirements, such as security, maintainability, and performance, may be satisfied by adopting existing designs and protocols that have been developed specifically to optimize a particular quality requirement.

Resolving Conflicts

In trying to elicit all types of requirements from all of the relevant stakeholders, we are bound to encounter conflicting ideas of what the requirements ought to be. It usually helps to ask the customer to prioritize requirements. This task forces the customer to reflect on which of the requested services and features are most essential. A loose prioritization scheme might separate requirements into three categories:

1. Requirements that absolutely must be met (Essential)
2. Requirements that are highly desirable but not necessary (Desirable)
3. Requirements that are possible but could be eliminated (Optional)

For example, a credit card billing system must be able to list current charges, sum them, and request payment by a certain date; these are essential requirements. But the billing system may also separate the charges by purchase type, to assist the purchaser in understanding buying patterns. Such purchase-type analysis is a desirable but probably nonessential requirement. Finally, the system may print the credits in black and the debits in red, which would be useful but is optional. Prioritizing requirements by category is helpful to all parties in understanding what is really needed. It is also useful when a software-development project is constrained by time or resources; if the system as defined will cost too much or take too long to develop, optional requirements can be dropped, and desirable requirements can be analyzed for elimination or postponement to later versions.

Prioritization can be especially helpful in resolving conflicts among quality requirements; often, two quality attributes will conflict, so that it is impossible to optimize for both. For example, suppose a system is required to be maintainable and deliver responses quickly. A design that emphasizes maintainability through separation of concerns and encapsulation may slow the performance. Likewise, tweaking a system to perform especially well on one platform affects its portability to other platforms, and secure systems necessarily control access and restrict availability to some users. Emphasizing security, reliability, robustness, usability, or performance can all affect maintainability, in that realizing any of these characteristics increases the design's complexity and decreases its coherence. Prioritizing quality requirements forces the customer to choose those software-quality factors about which the customer cares most, which helps us to provide a reasonable, if not optimal, solution to the customer's quality requirements.

We can also avoid trying to optimize multiple conflicting quality requirements by identifying and aiming to achieve fit criteria, which establish clear-cut acceptance tests for these requirements (see Sidebar 4.4). But what if we cannot satisfy the fit criteria? Then it may be time to reevaluate the stakeholders' views and to employ negotiation. However, negotiation is not easy; it requires skill, patience, and experience in finding mutually acceptable solutions. Fortunately, stakeholders rarely disagree about the underlying problem that the software system is addressing. More likely, conflicts will pertain to possible approaches to, or design constraints on, solving the problem (e.g., stakeholders may insist on using different database systems, different encryption algorithms, different user interfaces, or different programming languages). More seriously, stakeholders may disagree over priorities of requirements, or about the business policies to be incorporated into the system. For example, a university's colleges or departments may want different policies for evaluating students in their respective programs, whereas university administrators may prefer consolidation and uniformity. Resolution requires determining exactly why each stakeholder is adamant about a particular approach, policy, or priority ranking (for example, they may be concerned about cost, security, speed, or quality), and then we need to work toward agreement on fundamental requirements. With effective negotiation, the stakeholders will come to understand and appreciate each other's fundamental needs, and will strive for a resolution that satisfies everyone; such resolutions are usually very different from any of the stakeholders' original views.

Two Kinds of Requirements Documents

In the end, the requirements are used by many different people and for different purposes. Requirements analysts and their clients use requirements to explain their understanding of how the system should behave. Designers treat requirements as constraints on what would be considered an acceptable solution. The test team derives from the requirements a suite of acceptance tests, which will be used to demonstrate to the customer that the system being delivered is indeed what was ordered. The maintenance team uses the requirements to help ensure that system enhancements (repairs and new features) do not interfere with the system's original intent. Sometimes a single document can serve all of these needs, leading to a common understanding among customers, requirements analysts, and developers. But often two documents are needed: a requirements definition that is aimed at a business audience, such as clients, customers, and users, and a requirements specification that is aimed at a technical audience, such as designers, testers, and project managers.

We illustrate the distinction using a small running example from Jackson and Zave (1995). Consider a software-controlled turnstile situated at the entrance to a zoo.


When the turnstile is fed a coin, it unlocks, allowing a visitor to push through the turnstile and enter the zoo. Once an unlocked turnstile has rotated enough to allow one entry, the turnstile locks again, to prevent another person from entering without payment.

A requirements definition is a complete listing of everything the customer wants to achieve. The document expresses requirements by describing the entities in the environment in which the proposed system will be installed, and by describing the desired constraints on, monitoring of, or transformations of those entities. The purpose of the proposed system is to realize these requirements (Zave and Jackson 1997). Thus, the requirements are written entirely in terms of the environment, describing how the environment will be affected by the proposed system. Our turnstile example has two requirements: (1) no one should enter the zoo without paying an entrance fee, and (2) for every entrance fee paid, the system should not prevent a corresponding entry.1 The requirements definition is typically written jointly by the client and the requirements analyst, and it represents a contract describing what functionality the developer promises to deliver to the client.

The requirements specification restates the requirements as a specification of how the proposed system shall behave. The specification also is written entirely in terms of the environment, except that it refers solely to environmental entities that are accessible to the system via its interface. That is, the system boundary makes explicit those environmental entities that can be monitored or controlled by the system. This distinction is depicted in Figure 4.3, with requirements defined anywhere within the environment's domain, including, possibly, the system's interface, and with the specification restricted only to the intersection between the environment and system domains. To see the distinction, consider the requirement that no one should enter the zoo without paying an entrance fee. If the turnstile has a coin slot and is able to detect when a valid coin is inserted, then it can determine when an entrance fee has been paid. In contrast, the concept of an entry event may be outside the scope of the system. Thus, the requirement must be rewritten to realize entry events using only events and states that the turnstile can detect and control, such as whether the turnstile is unlocked and whether it detects a visitor pushing the turnstile:

When a visitor applies a certain amount of force on an unlocked turnstile, the turnstile will automatically rotate a one-half turn, ending in a locked position.

In this way, the specification refines the original requirements definition.

The requirements specification is written by the requirements analyst and is used by the other software developers. The analyst must be especially careful that no information is lost or changed when refining the requirements into a specification. There must be a direct correspondence between each requirement in the definition document and those in the specification document.

1 A more intuitive expression of this second requirement, that anyone who pays should be allowed to enter the zoo, is not implementable. There is no way for the system to prevent external factors from keeping the paid visitor from entering the zoo: another visitor may push through the unlocked turnstile before the paid visitor, the zoo may close before the paid visitor enters the turnstile, the paid visitor may decide to leave, and so on (Jackson and Zave 1995).

FIGURE 4.3 Requirements vs. Specification.

4.4 CHARACTERISTICS OF REQUIREMENTS

To ensure that the eventual product is successful, it is important that the requirements be of high quality; what is not specified usually is not built. We discuss later in this chapter how to validate and verify requirements. In the meantime, we list below the desirable characteristics for which we should check.

1. Are the requirements correct? Both we and the customer should review the documented requirements, to ensure that they conform to our understanding of the requirements.

2. Are the requirements consistent? That is, are there any conflicting requirements? For example, if one requirement states that a maximum of 10 users can be using the system at one time, and another requirement says that in a certain situation there may be 20 simultaneous users, the two requirements are said to be inconsistent. In general, two requirements are inconsistent if it is impossible to satisfy both simultaneously.

3. Are the requirements unambiguous? The requirements are ambiguous if multiple readers of the requirements can walk away with different but valid interpretations. Suppose a customer for a satellite control system requires the accuracy to be sufficient to support mission planning. The requirement does not tell us what mission planning requires for support. The customer and the developers may have very different ideas as to what level of accuracy is needed. Further discussion of the meaning of "mission planning" may result in a more precise requirement: "In identifying the position of the satellite, position error shall be less than 50 feet along orbit, less than 30 feet off orbit." Given this more detailed requirement, we can test for position error and know exactly whether or not we have met the requirement.

4. Are the requirements complete? The set of requirements is complete if it specifies required behavior and output for all possible inputs in all possible states under all possible constraints. Thus, a payroll system should describe what happens when an employee takes a leave without pay, gets a raise, or needs an advance. We say that the requirements are externally complete if all states, state changes, inputs, products, and constraints are described by some requirement. A requirements description is internally complete if there are no undefined terms among the requirements.

5. Are the requirements feasible? That is, does a solution to the customer's needs even exist? For example, suppose the customer wants users to be able to access a main computer that is located several thousand miles away and have the response time for remote users be the same as for local users (whose workstations are connected directly to the main computer). Questions of feasibility often arise when the customer requires two or more quality requirements, such as a request for an inexpensive system that analyzes huge amounts of data and outputs the analysis results within seconds.

6. Is every requirement relevant? Sometimes a requirement restricts the developers unnecessarily, or includes functions that are not directly related to the customer's needs. For example, a general may decide that a tank's new software system should allow soldiers to send and receive electronic mail, even though the main purpose of the tank is to traverse uneven terrain. We should endeavor to keep this "feature explosion" under control, and to help keep stakeholders focused on their essential and desirable requirements.

7. Are the requirements testable? The requirements are testable if they suggest acceptance tests that would clearly demonstrate whether the eventual product meets the requirements. Consider how we might test the requirement that a system provide real-time response to queries. We do not know what "real-time response" is. However, if fit criteria were given, saying that the system shall respond to queries in not more than 2 seconds, then we know exactly how to test the system's reaction to queries (see the sketch following this list).

8. Are the requirements traceable? Are the requirements organized and uniquely labeled for easy reference? Does every entry in the requirements definition have corresponding entries in the requirements specification, and vice versa?
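To make item 7 concrete, a fit criterion such as "the system shall respond to queries in not more than 2 seconds" translates directly into an automated acceptance test. The Python sketch below is illustrative only: respond_to_query is a hypothetical stand-in for the real query interface, not part of any system described in this chapter.

import time

def respond_to_query(query):
    # Hypothetical stand-in for the system's real query interface.
    return "results for " + query

def meets_fit_criterion(query, limit_seconds=2.0):
    # Acceptance test derived from the fit criterion: the system shall
    # respond to queries in not more than 2 seconds.
    start = time.monotonic()
    respond_to_query(query)
    return time.monotonic() - start <= limit_seconds

assert meets_fit_criterion("current satellite position")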

We can think of these characteristics as the functional and quality requirements for a set of product requirements. These characteristics can help us to decide when we have collected enough information, and when we need to learn more about what a particular requirement means. As such, the degree to which we want to satisfy these characteristics will affect the type of information that we gather during requirements elicitation, and how comprehensive we want to be. It will also affect the specification languages we choose to express the requirements and the validation and verification checks that we eventually perform to assess the requirements.

4.5 MODELING NOTATIONS

One trait of an engineering discipline is that it has repeatable processes, such as the techniques presented in Chapter 2, for developing safe and successful products. A second trait is that there exist standard notations for modeling, documenting, and communicating decisions. Modeling can help us to understand the requirements thoroughly, by teasing out what questions we should be asking. Holes in our models reveal unknown or ambiguous behavior. Multiple, conflicting outputs to the same input reveal inconsistencies in the requirements. As the model develops, it becomes more and more obvious what we don't know and what the customer doesn't know. We cannot complete a model without understanding the subject of the model. Also, by restating the requirements in a completely different form from the customer's original requests, we force the customer to examine our models carefully in order to validate the model's accuracy.

If we look at the literature, we see that there is a seemingly infinite number of specification and design notations and methods, and that new notations are being introduced and marketed all the time. But if we step back and ignore the details, we see that many notations have a similar look and feel. Despite the number of individual notations, there are probably fewer than ten basic paradigms for expressing information about a problem's concepts, behavior, and properties.

This section focuses on seven basic notational paradigms that can be applied in several ways to steps in the development process. We begin our discussion of each by introducing the paradigm and the types of problems and descriptions for which it is particularly apt. Then we describe one or two concrete examples of notations from that paradigm. Once you are familiar with the paradigms, you can easily learn and use a new notation because you will understand how it relates to existing notations.

However, caution is advised. We need to be especially careful about the terminology we use when modeling requirements. Many of the requirements notations are based on successful design methods and notations, which means that most other references for the notations provide examples of designs rather than of requirements, and give advice about how to make design-oriented modeling decisions. Requirements decisions are made for different reasons, so the terminology is interpreted differently. For example, in requirements modeling we discuss decomposition, abstraction, and separation of concerns, all of which were originally design techniques for creating elegant modular designs. We decompose a requirements specification along separate concerns to simplify the resulting model and make it easier to read and to understand. In contrast, we decompose a design to improve the system's quality attributes (modularity, maintainability, performance, time to delivery, etc.); the requirements name and constrain those attributes, but decomposition plays no role in this aspect of specification. Thus, although we use the terms decomposition and modularity in both specification and design, the decomposition decisions we make at each stage are different because they have different goals.

Throughout this section, we illustrate notations by using them to model aspects of the turnstile problem introduced earlier (Jackson and Zave 1995) and a library problem. The library needs to track its texts and other materials, its loan records, and information about its patrons. Popular items are placed on reserve, meaning that their loan periods are shorter than those of other books and materials, and that the penalty for returning them late is higher than the late penalty for returning unreserved items.

Entity-Relationship Diagrams

Early in the requirements phase, it is convenient to build a conceptual model of the problem that identifies what objects or entities are involved, what they look like (by defining their attributes), and how they relate to one another. Such a model designates names for the basic elements of the problem. These elements are then reused in other descriptions of the requirements (possibly written in other notations) that specify how the objects, their attributes, and their relationships would change in the course of the proposed system's execution. Thus, the conceptual model helps to tie together multiple views and descriptions of the requirements.

The entity-relationship diagram (ER diagram) (Chen 1976) is a popular graphical notational paradigm for representing conceptual models. As we will see in Chapter 6, it forms the basis of most object-oriented requirements and design notations, where it is used to model the relationships among objects in a problem description or to model the structure of a software application. This notational paradigm is also popular for describing database schema (i.e., describing the logical structure of data stored in a database).

ER diagrams have three core constructs (entities, attributes, and relations) that are combined to specify a problem's elements and their interrelationships. Figure 4.4 is an ER diagram of the turnstile. An entity, depicted as a rectangle, represents a collection (sometimes called a class) of real-world objects that have common properties and behaviors. For example, the world contains many Coins, but for the purpose of modeling the turnstile problem, we treat all Coins as being equivalent to one another in all aspects (such as size, shape, and weight) except perhaps for their monetary value. A relationship is depicted as an edge between two entities, with a diamond in the middle of the edge specifying the type of relationship. An attribute is an annotation on an entity that describes data or properties associated with the entity. For example, in the turnstile problem, we are most interested in the Coins that are inserted into the turnstile's CoinSlot (a relationship), and how their monetary values compare to the price of admission into the zoo (comparison of attribute values). Variant ER notations introduce additional constructs, such as attributes on relationships, one-to-many relationships, many-to-many relationships, special relationships like inheritance, and class-based in addition to individual-entity-based attributes. For example, our turnstile model shows the cardinality (sometimes called the "arity") of the relationships, asserting that the turnstile is to admit multiple Visitors. More sophisticated notations have the concept of a mutable entity, whose membership or whose relations to members of other entities may change over time. For example, in an ER diagram depicting a family, family members and their interrelations change as they get married, have children, and die. By convention, the entities and relationships are laid out so that relationships are read from left to right, or from top to bottom.

FIGURE 4.4 Entity-relationship diagram of turnstile problem.


ER diagrams are popular because they provide an overview of the problem to be addressed (i.e., they depict all of the parties involved), and because this view is relatively stable when changes are made to the problem's requirements. A change in requirements is more likely to be a change in how one or more entities behave than to be a change in the set of participating entities. For these two reasons, an ER diagram is likely to be used to model a problem early in the requirements process.

The simplicity of ER notations is deceptive; in fact, it is quite difficult to use ER modeling notations well in practice. It is not always obvious at what level of detail to model a particular problem, even though there are only three major language constructs. For example, should the barrier and coin slot be modeled as entities, or should they be represented by a more abstract turnstile entity? Also, it can be difficult to decide what data are entities and what data are attributes. For example, should the lock be an entity? There are arguments for and against each of these choices. The primary criteria for making decisions are whether a choice results in a clearer description, and whether a choice unnecessarily constrains design decisions.
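Although ER diagrams are graphical, their three constructs map onto very simple data structures, which can make these level-of-detail decisions concrete. The Python sketch below encodes one possible reading of the turnstile model of Figure 4.4; the decision to treat the coin slot and barrier as entities, and the relationship names, are illustrative assumptions rather than part of the model.

# Entities with their attributes, and relationships between entity pairs.
entities = {
    "Coin": {"value": "monetary value"},
    "CoinSlot": {},
    "Barrier": {},
    "Visitor": {},
}
relationships = [
    ("Coin", "inserted into", "CoinSlot"),   # assumed relationship name
    ("Visitor", "pushes", "Barrier"),        # assumed relationship name
]

Modeling the lock as a Barrier attribute instead, say entities["Barrier"] = {"locked": "boolean"}, illustrates exactly the entity-versus-attribute judgment call discussed above.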

Example: UML Class Diagrams

An ER notation is often used by more complex approaches. For example, the Unified Modeling Language (UML) (OMG 2003) is a collection of notations used to document software specifications and designs. We will use UML extensively in Chapter 6 to describe object-oriented specifications and designs. Because UML was originally conceived for object-oriented systems, it represents systems in terms of objects and methods. Objects are akin to entities: they are organized in classes that have an inheritance hierarchy. Each object provides methods that perform actions on the object's variables. As objects execute, they send messages to invoke each other's methods, acknowledge actions, and transmit data.

The flagship model in any UML specification is the class diagram, a sophisticated ER diagram relating the classes (entities) in the specification. Although most UML texts treat class diagrams primarily as a design notation, it is possible and convenient to use UML class diagrams as a conceptual modeling notation, in which classes represent real-world entities in the problem to be modeled. It may be that a class in the conceptual model, such as a Customer class, corresponds to a program class in the implementation, such as a CustomerRecord, but this need not always be the case. It is the software designer's task to take a class-diagram specification and construct a suitable design model of the implementation's class structure.

In general, the kinds of real-world entities that we would want to represent in a class diagram include actors (e.g., patrons, operators, personnel); complex data to be stored, analyzed, transformed, or displayed; or records of transient events (e.g., business transactions, phone conversations). The entities in our library problem include people, like the patrons and librarians; the items in the library's inventory, like books and periodicals; and loan transactions.

Figure 4.5 depicts a simple UML class diagram for such a library. Each box is a class that represents a collection of similarly typed entities; for example, a single class represents all of the library's books. A class has a name; a set of attributes, which are simple data variables whose values can vary over time and among different entities of the class; and a set of operations on the class's attributes. By "simple data variable," we mean a variable whose values are too simple for the variable to be a class by itself. Thus, we model a Patron's address as an attribute, likely as one or more string values, whereas we would model a Patron's credit card information (credit institution, credit card number, expiration date, and billing address) as a separate class (not shown). Note that many attributes we might expect in the library class diagram are missing (e.g., records and films) or are imprecisely defined (e.g., periodical, which doesn't distinguish between newspapers and magazines), and operations are omitted (e.g., dealing with book repair or loss). This imprecision is typical in early conceptual diagrams. The idea is to provide enough attributes and operations, and in sufficient detail, that anyone who reads the specification can grasp what the class represents and what its responsibilities are.

FIGURE 4.5 UML class model of the library problem.

UML also allows the specifier to designate attributes and operations as being associated with the class rather than with instances of the class. A class-scope attribute, represented as an underlined attribute, is a data value that is shared by all instances of the class. In the library class diagram, the attributes reserve loan period and reserve fine rate are values that apply to all publications on reserve. Thus in this model, the Librarian can set and modify the loan duration for classes of items (e.g., books, periodicals, items on reserve) but not for individual items. Similarly, a class-scope operation, written as an underlined operation, is an operation performed by the abstract class, rather than by class instances, on a new instance or on the whole collection of instances; create(), search(), and delete() are common class-scope operations.

A line between two classes, called an association, indicates a relationship between the classes' entities. An association may represent interactions or events that involve objects in the associated classes, such as when a Patron borrows a Publication. Alternatively, an association might relate classes in which one class is a property or element of the other class, such as the relationship between a Patron and his Credit Card. Sometimes these latter types of associations are aggregate associations, or "has-a" relationships, as in our example. An aggregate association is drawn as an association with a white diamond on one end, where the class at the diamond end is the aggregate and it includes or owns instances of the class(es) at the other end(s) of the association. A composition association is a special type of aggregation, in which instances of the compound class are physically constructed from instances of the component classes (e.g., a bike consists of wheels, gears, pedals, a handlebar); it is represented as an aggregation with a black diamond. In our library model, each Periodical, such as a newspaper or magazine, is composed of Articles.

An association with a triangle on one end represents a generalization association, also called a sub-type relation or an "is-a" relation, where the class at the triangle end of the association is the parent class of the classes at the other ends of the association, called subclasses. A subclass inherits all of the parent class's attributes, operations, and associations. Thus, we do not need to specify explicitly that Patrons may borrow Books, because this association is inherited from the association between Patron and Publication. A subclass extends its inherited behavior with additional attributes, operations, and associations. In fact, a good clue as to whether we want to model an entity as a new subclass, as opposed to as an instance of an existing class, is whether we really need new attributes, operations, or associations to model the class variant. In many cases, we can model variants as class instances that have different attribute values. In our library problem, we represent whether an item is on reserve or on loan using Publication attributes2 rather than by creating Reserved and OnLoan subclasses.

Associations can have labels, usually verbs, that describe the relationship between associated entities. An end of an association can also be labeled, to describe the role that entity plays in the association. Such role names are useful for specifying the context of an entity with respect to a particular association. In the library example, we might keep track of which patrons are married, so that we can warn someone whose spouse has overdue books. Association ends can also be annotated with multiplicities, which specify constraints on the number of entities and the number of links between associated entities. Multiplicities can be expressed as specific numbers, ranges of numbers, or unlimited numbers (designated "*"). A multiplicity on one end of an association indicates how many instances of that class can be linked to one instance of the associated class. Thus at any point in time, a Patron may borrow zero or more Publications, but an individual Publication can be borrowed by at most one Patron.

2 In later examples, we model an item's loan state and reserve state as states in a state-machine model (Figure 4.9), and this information is included in the library's detailed class model (Figure 4.18).

The Loan class in the library model is an association class, which relates attributes and operations to an association. Association classes are used to collect information that cannot be attributed solely to one class or another. For example, the Loan attributes are not properties of the borrower or of the item borrowed, but rather of the loan transaction or contract. An association class has exactly one instantiation per link in the association, so our modeling Loan as an association class is correct only if we want to model snapshots of the library inventory (i.e., model only current loans). If we wanted instead to maintain a history of all loan transactions, then (because a patron might borrow an item multiple times) we would model Loan as a full-fledged class.
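To suggest how such a conceptual model might eventually correspond to program classes, the following Python sketch renders a fragment of the library model. It is one possible reading of Figure 4.5, not the diagram itself: the class names come from the model, but the particular attributes, types, and class-scope constants are illustrative assumptions.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Patron:
    name: str
    address: str
    fines: float = 0.0

@dataclass
class Publication:
    title: str
    value: float
    on_reserve: bool = False
    current_loan: Optional["Loan"] = None  # at most one borrower at a time
    # Class-scope attributes: shared by every publication on reserve.
    reserve_loan_period = timedelta(days=2)
    reserve_fine_rate = 1.00

@dataclass
class Loan:
    # Association class: these attributes describe the loan transaction
    # itself, not the Patron or the Publication alone.
    borrower: Patron
    item: Publication
    due_date: date
    num_renews: int = 0

Here the optional current_loan field mirrors the multiplicity constraint that an individual Publication can be borrowed by at most one Patron at a time.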

Event Traces

Although ER diagrams are helpful in providing an overall view of the problem being modeled, the view is mostly structural, showing which entities are related; the diagram says nothing about how the entities are to behave. We need other notational paradigms for describing a system's behavioral requirements.

An event trace is a graphical description of a sequence of events that are exchanged between real-world entities. Each vertical line represents the timeline for a distinct entity, whose name appears at the top of the line. Each horizontal line represents an event or interaction between the two entities bounding the line, usually conceived as a message passed from one entity to another. Time progresses from the top to the bottom of the trace, so if one event appears above another event, then the upper event occurs before the lower event. Each graph depicts a single trace, representing only one of several possible behaviors. Figure 4.6 shows two traces for the turnstile problem: the trace on the left represents typical behavior, whereas the trace on the right shows exceptional behavior of what happens when a Visitor tries to sneak into the zoo by inserting a valueless token (a slug) into the coin slot.

Event traces are popular among both developers and customers because traces have a semantics that is relatively precise, with the exception of timing issues, yet is simple and easy to understand. Much of the simplicity comes from decomposing requirements descriptions into scenarios, and considering (modeling, reading, understanding) each scenario separately as a distinct trace. But these very properties make event traces inefficient for documenting behavior. We would not want to use traces to provide a complete description of required behavior, because the number of scenarios we would have to draw can quickly become unwieldy. Instead, traces are best used at the start of a project, to come to consensus on key requirements and to help developers identify important entities in the problem being modeled.

FIGURE 4.6 Event traces in the turnstile problem.

Example: Message Sequence Chart

Message Sequence Charts (ITU 1996) are an enhanced event-trace notation, with facilities for creating and destroying entities, specifying actions and timers, and composing traces. Figure 4.7 displays an example Message Sequence Chart (MSC) for a loan transaction in our library problem. Each vertical line represents a participating entity, and a message is depicted as an arrow from the sending entity to the receiving entity; the arrow's label specifies the message name and data parameters, if any. A message arrow may slope downwards (e.g., message recall notice) to reflect the passage of time between when the message is sent and when it is received. Entities may come and go during the course of a trace; a dashed arrow, optionally annotated with data parameters, represents a create event that spawns a new entity, and a cross at the bottom of an entity line represents the end of that entity's execution. In contrast, a solid rectangle at the end of the line represents the end of an entity's specification without meaning the end of its execution. Actions, such as invoked operations or changes to variable values, are specified as labeled rectangles positioned on an entity's execution line, located at the point in the trace where the action occurs. Thus, in our MSC model of a library loan, loan requests are sent to the Publication to be borrowed, and the Publication entity is responsible for creating a Loan entity that manages loan-specific data, such as the due date. Reserving an item that is out on loan results in a recall of that item. Returning the borrowed item terminates the loan, but not before calculating the overdue fine, if any, for returning the item after the loan's due date.

FIGURE 4.7 Message Sequence Chart for library loan transaction.

There are facilities for composing and refining Message Sequence Charts. For example, important states in an entity's evolution can be specified as conditions, represented as labeled hexagons. We can then specify a small collection of subtraces between conditions, and derive a variety of traces by composing the charts at points where the entities' states are the same. For example, there are multiple scenarios between the condition publication on loan and the end of the loan transaction: the patron renews the loan once, the patron renews the loan twice, the patron returns the publication, or the patron reports the publication as being lost. Each of these subscenarios could be appended to a prefix trace of a patron successfully borrowing the publication. Such composition and refinement features help to reduce the number of MSCs one would need to write to specify a problem completely. However, these features do not completely address the trace-explosion problem, so Message Sequence Charts are usually used only to describe key scenarios rather than to specify entire problems.
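The composition idea can be made concrete by treating each trace as a sequence of (sender, receiver, message) events and appending alternative subtraces at a shared condition. The Python sketch below is an informal rendering of the library scenario; the message names are loosely based on Figure 4.7 and are otherwise assumptions.

# A trace is a list of (sender, receiver, message) events.
prefix = [
    ("Patron", "Publication", "borrow"),
    ("Publication", "Loan", "create"),
]  # ends in the condition "publication on loan"

# Alternative subtraces that may follow the shared condition.
renew_then_return = [
    ("Patron", "Loan", "renew"),
    ("Patron", "Publication", "return"),
]
report_lost = [("Patron", "Publication", "lost")]

# Composing the prefix with each subtrace yields distinct full scenarios.
scenarios = [prefix + suffix for suffix in (renew_then_return, report_lost)]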

State Machines

State-machine notations are used to represent collections of event traces in a single model. A state machine is a graphical description of all dialog between the system and its environment. Each node, called a state, represents a stable set of conditions that exists between event occurrences. Each edge, called a transition, represents a change in behavior or condition due to the occurrence of an event; each transition is labeled with the triggering event, and possibly with an output event, preceded by the symbol "/", that is generated when the transition occurs.

State machines are useful both for specifying dynamic behavior and for describing how behavior should change in response to the history of events that have already occurred. That is, they are particularly suited for modeling how the system's responses to the same input change over the course of the system's execution. For each state, the set of transitions emanating from that state designates both the set of events that can trigger a response and the corresponding responses to those events. Thus, when our turnstile (shown in Figure 4.8) is in the unlocked state, its behavior is different from

FIGURE 4.8 Finite-state-machine model of the turnstile problem.


when it is in state locked; in particular, it responds to different input events. If an unanticipated event occurs (e.g., if the user tries to push through the turnstile when the machine is in state locked), the event will be ignored and discarded. We could have specified this latter behavior explicitly as a transition from state locked to state locked, triggered by event push; however, the inclusion of such "no-effect" transitions can clutter the model. Thus, it is best to restrict the use of self-looping transitions to those that have an observable effect, such as an output event.

A path through the state machine, starting from the machine's initial state and following transitions from state to state, represents a trace of observable events in the environment. If the state machine is deterministic, meaning that for every state and event there is a unique response, then a path through the machine represents the event trace that will occur, given the sequence of input events that trigger the path's transitions. Example traces of our turnstile specification include

coin, push, rotated, coin, push, rotated, ...

slug, slug, slug, coin, push, rotated, ...

which correspond to the event traces in Figure 4.6.

You may have encountered state machines in some of your other computing courses. In theory-of-computing courses, finite-state machines are used as automata that recognize strings in regular languages. In some sense, state-machine specifications serve a similar purpose; they specify the sequences of input and output events that the proposed system is expected to realize. Thus, we view a state-machine specification as a compact representation of a set of desired, externally observable, event traces, just as a finite-state automaton is a compact representation of the set of strings that the automaton recognizes.
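Because the turnstile machine of Figure 4.8 is deterministic, it can be rendered directly as a transition table. The Python sketch below is one plausible encoding using the states and events named in the text; the table-based representation is an implementation choice, not part of the notation.

# Transition table: (state, event) -> next state. Pairs with no entry
# are ignored and discarded, as described in the text.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("unlocked", "push"): "rotating",
    ("rotating", "rotated"): "locked",
    ("locked", "slug"): "locked",  # slug rejected; turnstile stays locked
}

def run(events, state="locked"):
    # Replay a trace of input events, returning the sequence of states visited.
    visited = [state]
    for event in events:
        state = TRANSITIONS.get((state, event), state)
        visited.append(state)
    return visited

# The typical trace from Figure 4.6.
print(run(["coin", "push", "rotated", "coin", "push", "rotated"]))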

Example: UML Statechart Diagrams

A UML statechart diagram depicts the dynamic behavior of the objects in a UML class. A UML class diagram gives a static, big-picture view of a problem, in terms of the entities involved and their relationships; it says nothing about how the entities behave, or how their behaviors change in response to input events. A statechart diagram shows how a class's instances should change state and how their attributes should change value as the objects interact with each other. Statechart diagrams are a nice counterpart to Message Sequence Charts (MSCs). An MSC shows the events that pass between entities without saying much about each entity's behavior, whereas a statechart diagram shows how a single entity reacts to input events and generates output events.

A UML model is a collection of concurrently executing statecharts (one per instantiated object) that communicate with each other via message passing (OMG 2003). Every class in a UML class diagram has an associated statechart diagram that specifies the dynamic behavior of the objects of that class. Figure 4.9 shows the UML statechart diagram for the Publication class from our library class model.

UML statechart diagrams have a rich syntax, much of it borrowed from Harel's original conception of statecharts (Harel 1987), including state hierarchy, concurrency, and intermachine communication. State hierarchy is used to unclutter diagrams by collecting into superstates those states with common transitions. We can think of a superstate as a submachine, with its own set of states and transitions. A transition whose destination state is a superstate acts as a transition to the superstate's default initial state, designated by an arrow from the superstate's internal black circle. A transition whose source state is a superstate acts as a set of transitions, one from each of the superstate's internal states. For example, in the Publication state diagram, the transition triggered by event lose can be enabled from any of the superstate's internal states; this transition ends in a final state and designates the end of the object's life.

FIGURE 4.9 UML statechart diagram for the Publication class.

A superstate can actually comprise multiple concurrent submachines, separated by dashed lines. The UML statechart for Publication includes two submachines: one that indicates whether or not the publication is out on loan, and another that indicates whether or not the publication is on reserve. The submachines are said to operate concurrently, in that a Publication instance could at any time receive and respond to events of interest to either or both submachines. In general, concurrent submachines are used to model separate, unrelated subbehaviors, making it easier to understand and consider each subbehavior. An equivalent statechart for Publication in Figure 4.10 that does not make use of state hierarchy or concurrency is comparatively messy and repetitive. Note that this messy statechart has a state for each combination of states from Figure 4.9 (stacks = Publication is in library and not on reserve, onloan = Publication is on loan and not on reserve, etc.).3

3 The messy statechart also has a recall state that covers the case where a publication that is being put on reserve is on loan and needs to be recalled; this behavior cannot be modeled as a transition from on loan to reserveloan, because state reserveloan has a transition cancel (used to disallow a loan request if the Patron has outstanding fines) that would be inappropriate in this situation. This special case is modeled in Figure 4.9 by testing on entry (keyword entry is explained below) to state reserve whether the concurrent submachine is in state on loan and issuing a recall event if it is.

FIGURE 4.10 Messy UML statechart diagram for Publication class.

State transitions are labeled with their enabling events and conditions and with their side effects. Transition labels have the syntax

event(args) [condition] / action* ^Object.event(args)*

where the triggering event is a message that may carry parameters. The enabling condition, delimited by square brackets, is a predicate on the object's attribute values. If the transition is taken, its actions, each prefaced with a slash (/), specify assignments made to the object's attributes; the asterisk "*" indicates that a transition may have arbitrarily many actions. If the transition is taken, it may generate arbitrarily many output events, ^Object.event, each prefaced with a caret (^); an output event may carry parameters and is either designated for a target Object or is broadcast to all objects. For example, in the messy Publication statechart (Figure 4.10), the transition to state recall is enabled if the publication is in state onloan when a request to put the item on reserve is received. When the transition is taken, it sends an event to the Loan object, which in turn will notify the borrower that the item must be returned to the library sooner than the loan's due date. Each of the transition-label elements is optional. For example, a transition need not be enabled by an input event; it could be enabled only by a condition or by nothing, in which case the transition is always enabled.

The UML statechart diagram for the Loan association class in Figure 4.11 illustrates how states can be annotated with local variables (e.g., variable num renews), actions, and activities. Variables that are local to a state are declared and initialized in the center section of the state. The state's lower section lists actions and activities on the state's local variables as well as on the object's attributes. The distinction between actions and activities is subtle: an action is a computation that takes relatively no time to complete and that is uninterruptible, such as assigning an expression to a variable or sending a message. An action can be triggered by a transition entering or exiting the state, in which case it is designated by keyword entry or exit followed by arbitrarily many actions and generated events; or it can be triggered by the occurrence of an event, in which case it is designated by the event name followed by arbitrarily many actions and generated events. In our Loan statechart, variable num renews is incremented every time state item on loan is entered, that is, every time the loan is renewed; and a recallNotify event is sent to the Patron whenever a recall event is received in that state. In contrast to actions, an activity is a more complex computation that executes over a period of time and that may be interruptible, such as executing an operation. Activities are initiated on entry to the state. When a transition, including a looping transition like the one triggered by renew, executes, the order in which actions are applied is as follows: first, the exit actions of the transition's source state are applied, followed by the transition's own actions, followed by the entry actions and activities of the new state.

FIGURE 4.11 UML statechart diagram for Loan class.

The semantics of UML diagrams, and how the different diagrams fit together, are intentionally undefined, so that specifiers can ascribe semantics that best fit their problem. However, most practitioners view UML statecharts as communicating finite-state machines with first-in, first-out (FIFO) communication channels. Each object's state machine has an input queue that holds the messages sent to the object from other objects in the model or from the model's environment. Messages are stored in the input queue in the order in which they are received. In each execution step, the state machine reads the message at the head of its input queue, removing the message from the queue. The message either triggers a transition in the statechart or it does not, in which case the message is discarded; the step runs to completion, meaning that the machine continues to execute enabled transitions, including transitions that wait for operations to complete, until no more transitions can execute without new input from the input queue. Thus, the machine reacts to only one message at a time.
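This message-processing discipline (one message at a time, read from a FIFO queue) can be sketched in a few lines of Python. The interface below is an assumption made for illustration, not something prescribed by UML; for brevity, each accepted message triggers at most one transition rather than a full run-to-completion cascade of internal transitions.

from collections import deque

class StateMachineObject:
    def __init__(self, transitions, initial_state):
        self.transitions = transitions  # (state, message) -> next state
        self.state = initial_state
        self.inbox = deque()            # FIFO input queue

    def send(self, message):
        self.inbox.append(message)      # messages queue in arrival order

    def step(self):
        if not self.inbox:
            return
        message = self.inbox.popleft()  # read and remove the head message
        self.state = self.transitions.get((self.state, message),
                                          self.state)  # else discard message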

The hardest part of constructing a state-machine model is deciding how to decompose an object's behavior into states. Some ways of thinking about states include

• Equivalence classes of possible future behavior, as defined by sequences of input events accepted by the machine: for example, every iteration of event sequence coin, push, rotated leaves the turnstile in a locked position waiting for the next visitor

• Periods of time between consecutive events, such as the time between the start and the end of an operation

• Named control points in an object's evolution, during which the object is performing some computation (e.g., state calculate fine) or waiting for some input event (e.g., state item on loan)

• Partitions of an object's behavior: for example, a book is out on loan or is in the library stacks; an item is on reserve, meaning that it can be borrowed for only short periods, or it is not on reserve

Some object properties could be modeled either as an attribute (defined in the class diagram) or as a state (defined in the object's statechart diagram), and it is not obvious which representation is best. Certainly, if the set of possible property values is large (e.g., a Patron's library fines), then it is best to model the property as an attribute. Alternatively, if the events to which the object is ready to react depend on a property (e.g., whether a book is out on loan), then it is best to model the property as a state. Otherwise, choose the representation that results in the simplest model that is easiest to understand.

Example: Petri Nets

UML statechart diagrams nicely modularize a problem's dynamic behavior into the behaviors of individual class objects, with the effect that it may be easier to consider each class's behavior separately than it is to specify the whole problem in one diagram. This modularization makes it harder, though, to see how objects interact with each other. Looking at an individual statechart diagram, we can see when an object sends a message to another object. However, we have to examine the two objects' diagrams simultaneously to see that a message sent by one object can be received by the other. In fact, to be completely sure, we would have to search the possible executions (event traces) of the two machines, to confirm that whenever one object sends a message to the other, the target object is ready to receive and react to the message.

Petri nets (Peterson 1977) are a form of state-transition notation that is used to model concurrent activities and their interactions. Figure 4.12 shows a basic Petri net specifying the behavior of a book loan.

FIGURE 4.12 Petri net of book loan.

The circles in the net are places that represent

These features of concurrency and synchronization are especially useful for mod- eling events whose order of occurrence is not impo rtant. Consider the emergency room in a hospital. Before a pa tient can be treated, several events must occur. TI1e triage staff must attempt to find out the name and address of the patient and to determine the patient's blood type. Someone must see if the patient is breathing, and also examjne the patient for injuries. The events occur in no particular order, but all must occur before a team of doctors begins a more thorough examination. Once the treatment begins (i.e., once the transition is made from a preliminary examination to a thorough ooe), tbe doctors start new activities. The orthopedic doctors check for broken bones, while the bematologist runs blood tests and the surgeon puts stitches in a bleeding


wound. The doctors' activities are independent of one another, but none can occur until the transition from the preliminary examination takes place. A state-machine model of the emergency room might specify only a single order of events, thereby excluding several acceptable behaviors, or it might specify all possible sequences of events, resulting in an overly complex model for a relatively simple problem. A Petri net model of the same emergency room nicely avoids both of these problems.
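The token-game semantics described above (a transition is enabled when every input place holds enough tokens, and firing moves tokens from input places to output places) fits in a few lines of Python. The sketch below uses place and transition names suggested by Figure 4.12, but the encoding itself is generic and illustrative, not part of the notation.

# Each transition lists (place, weight) pairs for its inputs and outputs;
# the marking maps each place to its current token count.
marking = {"Loan Request": 1, "Avail": 1, "OnLoan": 0}

transitions = {
    "Borrow": {"in": [("Loan Request", 1), ("Avail", 1)],
               "out": [("OnLoan", 1)]},
    "Return": {"in": [("OnLoan", 1)],
               "out": [("Avail", 1)]},
}

def enabled(name):
    return all(marking[p] >= w for p, w in transitions[name]["in"])

def fire(name):
    assert enabled(name)
    for p, w in transitions[name]["in"]:
        marking[p] -= w   # remove tokens from each input place
    for p, w in transitions[name]["out"]:
        marking[p] += w   # insert tokens into each output place

fire("Borrow")  # Loan Request and Avail each lose a token; OnLoan gains one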

Basic Petri nets are fine for modeling how control flows through events or among concurrent entities. But if we want to model control that depends on the value of data (e.g., borrowing a particular book from a collection of books), then we need to use a high-level Petri net notation. A number of extensions to basic Petri nets have been proposed to improve the notation's expressibility, including inhibitor arcs, which enable a transition only if the input place is empty of tokens; priority among transitions; timing constraints; and structured tokens, which have values.

To model our library problem, which tracks information and events for multiple patrons and publications, we need a Petri net notation that supports structured tokens and transition actions (Ghezzi et al. 1991). A transition action constrains which tokens in the input places can enable the transition and specifies the values of the output tokens. Figure 4.13 is a high-level Petri net specification for the library problem. Each place stores tokens of a different data type. Avail stores a token for every library item that is not currently out on loan. A token in Fines is an n-tuple (i.e., an ordered set of n elements, sometimes called a tuple for short) that maps a patron to the value of his or her total outstanding library fines. A token in OnLoan is another type of tuple that maps a patron and a library item to a due date. A few of the transition predicates and actions are shown in Figure 4.13, such as the action on Process Payment, where the inputs are a payment and the patron's current fines, and the output is a new Fines tuple. The predicates not shown assert that if the token elements in a transition's input or output tokens have the same name, they must have the same value. So in transition Borrow, the patron making the Loan Request must match the patron with no outstanding Fines and match the patron who appears in the generated OnLoan tuple; at the same time, the item being borrowed must match an item in Avail. Otherwise, those tuples do not enable the transition. The net starts with an initial marking of item tuples in Avail and (patron, 0) tuples in Fines. As library users trigger input transitions Pay, Initiate Loan, and Initiate Return, new tokens are introduced to the system, which enable the library transitions, which in turn fire and update the Fines tokens and the Avail and OnLoan tokens, and so on.

FIGURE 4.13 Petri net of the library problem.

Data-Flow Diagrams

The notational paradigms discussed so far promote decomposing a problem by entity (ER diagram), by scenario (event trace), and by control state (i.e., equivalence classes of scenarios) (state machines). However, early requirements tend to be expressed as

• Tasks to be completed
• Functions to be computed
• Data to be analyzed, transformed, or recorded

Such requirements, when decomposed by entity, scenario, or state, devolve into collections of lower-level behaviors that are distributed among multiple entities and that must be coordinated. This modular structure makes it harder to see a model's high-level functionality. In our library example, none of the above modeling notations is effective in showing, in a single model, all of the steps, and their variants, that a patron must take to borrow a book. For this reason, notations that promote decomposition by functionality have always been popular.

A data-flow diagram (DFD) models functionality and the flow of data from one function to another. A bubble represents a process, or function, that transforms data. An arrow represents data flow, where an arrow into a bubble represents an input to the bubble's function, and an arrow out of a bubble represents one of the function's outputs. Figure 4.14 shows a high-level data-flow diagram for our library problem. The problem is broken down into steps, with the results of early steps flowing into later steps. Data that persist beyond their use in a single computation (e.g., information about patrons' outstanding fines) can be saved in a data store (a formal repository or database of information) that is represented by two parallel bars. Data sources or sinks, represented by rectangles, are actors: entities that provide input data or receive the output results. A bubble can be a high-level abstraction of another data-flow diagram that shows in more detail how the abstract function is computed. A lowest-level bubble is a function whose effects, such as pre-conditions, post-conditions, and exceptions, can be specified in another notation (e.g., text, mathematical functions, event traces) in a separately linked document.
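Operationally, a data-flow diagram can be read as a composition of functions, with data stores as shared repositories. The Python sketch below loosely mirrors the Return and fine-processing portion of Figure 4.14 for a single item; the function signature, the fine rate, and the store layouts are invented for illustration.

from datetime import date

loan_records = {}    # data store: item -> due date
patrons_fines = {}   # data store: patron -> outstanding fines

def return_item(item, patron, today):
    # The Return process: the due date flows in from the loan records,
    # and any overdue fine flows on to the patrons' fines store.
    due_date = loan_records.pop(item)
    overdue_days = max(0, (today - due_date).days)
    fine = overdue_days * 0.25  # illustrative fine rate
    patrons_fines[patron] = patrons_fines.get(patron, 0.0) + fine
    return fine

loan_records["Moby-Dick"] = date(2010, 3, 1)
print(return_item("Moby-Dick", "Lee", date(2010, 3, 6)))  # 5 days overdue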

FIGURE 4.14 Data-flow diagram of the library problem.

One of the strengths of data-flow diagrams is that they provide an intuitive model of a proposed system's high-level functionality and of the data dependencies among the various processes. Domain experts find them easy to read and understand. However, a data-flow diagram can be aggravatingly ambiguous to a software developer who is less familiar with the problem being modeled. In particular, there are multiple ways of interpreting a DFD process that has multiple input flows (e.g., process Borrow): are all inputs needed to compute the function, is only one of the inputs needed, or is some subset of the inputs needed? Similarly, there are multiple ways of interpreting a DFD process that has multiple output flows: are all outputs generated every time the process executes, is only one of the outputs generated, or is some subset generated? It is also not obvious that two data flows with the same annotation represent the same values: are the Items Returned that flow from Return to Loan Records the same as the Items Returned that flow from Return to Process Fines? For these reasons, DFDs are best used by users who are familiar with the application domain being modeled, and as early models of a problem, when details are less important.

Example: Use Cases

A UML use-case diagram (OMG 2003) is similar to a top-level data-flow diagram that depicts observable, user-initiated functionality in terms of interactions between the system and its environment. A large box represents the system boundary. Stick figures outside the box portray actors, both humans and systems, and each oval inside the box is a use case that represents some major required functionality and its variants. A line between an actor and a use case indicates that the actor participates in the use case. Use cases are not meant to model all the tasks that the system should provide. Rather, they are used to specify user views of essential system behavior. As such, they model only system functionality that can be initiated by some actor in the environment. For example, in Figure 4.15, key library uses include borrowing a book, returning a borrowed book, and paying a library fine.

FIGURE 4.15 Library use cases.

Each use case encompasses several possible scenarios, some successful and some not, but all related to some usage of the system. External to the use-case diagram, the use cases and their variants are detailed as textual event traces. Each use case identifies pre-conditions and alternative behavior if the pre-conditions are not met, such as looking for a lost book; post-conditions, which summarize the effects of the use case; and a normal, error-free scenario comprising a sequence of steps performed by actors or by the system. A completely detailed use case specifies all possible variations of each step in the normal scenario, including both valid behaviors and errors. It also describes the possible scenarios that stem from the valid variations and from recoverable failures. If there is a sequence of steps common to several use cases, the sequence can be extracted out to form a subcase that can be called by a base use case like a procedure call. In the use-case diagram, we draw a dashed arrow from a base case to each of its subcases and annotate these arrows with stereotype «include».4 A use case can also be appended with an extension subcase that adds functionality to the end of the use case. In the use-case diagram, we draw a dashed arrow from the extension subcase to the base use case and annotate the arrow with stereotype «extend». Examples of stereotypes are included in Figure 4.15.

4 In UML, a stereotype is a meta-language facility for extending a modeling notation, allowing the user to augment one of the notation's constructs with a new «keyword».



Functions and Relations

The notational paradigms discussed so far are representational and relational. They use annotated shapes, lines, and arrows to convey the entities, relationships, and characteristics involved in the problem being modeled. In contrast, the remaining three notational paradigms that we discuss are more strongly grounded in mathematics, and we use them to build mathematical models of the requirements. Mathematically based specification and design techniques, called formal methods or approaches, are encouraged by many software engineers who build safety-critical systems, that is, systems whose failure can affect the health and safety of people who use them or who are nearby. For example, Defence Standard 00-56, a draft British standard for building safety-critical systems, requires that formal specification and design be used to demonstrate required functionality, reliability, and safety. Advocates argue that mathematical models are more precise and less ambiguous than other models, and that mathematical models lend themselves to more systematic and sophisticated analysis and verification. In fact, many formal specifications can be checked automatically for consistency, completeness, nondeterminism, and reachable states, as well as for type correctness. Mathematical proofs have revealed significant problems in requirements specifications, where they are more easily fixed than if revealed during testing. For example, Pfleeger and Hatton (1997) report on software developers who used formal methods to specify and evaluate the complex communications requirements of an air-traffic-control support system. Early scrutiny of the formal specification revealed major problems that were fixed well before design began, thereby reducing risk as well as saving development time. At the end of this chapter, we see how formal specification might have caught problems with Ariane-5.

Some formal paradigms model requirements or software behavior as a collection of mathematical functions or relations that, when composed together, map system inputs to system outputs. Some functions specify the state of a system's execution, and other functions specify outputs. A relation is used instead of a function whenever an input value maps to more than one output value. For example, we can represent the turnstile problem using two functions: one function to keep track of the state of the turnstile, mapping from the current state and input event to the next state; and a second function to specify the turnstile's output, based on the current state and input event:

NextState(s, e) =
    unlocked    if s = locked AND e = coin
    rotating    if s = unlocked AND e = push
    locked      if (s = rotating AND e = rotated) OR (s = locked AND e = slug)

Output(s, e) =
    buzz        if s = locked AND e = slug
    <none>      otherwise

Together, the above functions are semantically equivalent to the graphical state-machine model of the turnstile shown in Figure 4.8.

Because it maps each input to a single output, a function is by definition consistent. If the function specifies an output for every distinct input, it is called a total function and is by definition complete. Thus, functional specifications lend themselves to systematic and straightforward tests for consistency and completeness.
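To make the functional view concrete, the following sketch (in Python, purely illustrative; the state and event names come from the definitions above, and the fallback behavior for unlisted state-event pairs is our assumption) implements NextState and Output and then checks that every state-event pair yields a legal state:

STATES = {"locked", "unlocked", "rotating"}
EVENTS = {"coin", "push", "rotated", "slug"}

def next_state(s, e):
    # The three cases of NextState, exactly as defined above.
    if s == "locked" and e == "coin":
        return "unlocked"
    if s == "unlocked" and e == "push":
        return "rotating"
    if (s == "rotating" and e == "rotated") or (s == "locked" and e == "slug"):
        return "locked"
    return s  # assumption: any other state-event pair leaves the state unchanged

def output(s, e):
    # The two cases of Output: buzz on a slug, otherwise no output.
    return "buzz" if (s == "locked" and e == "slug") else None

# A simple totality check: next_state yields a legal state for every input pair.
for s in STATES:
    for e in EVENTS:
        assert next_state(s, e) in STATES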


FIGURE 4.16 Decision table for library functions (input events: borrow, return, reserve, unreserve; conditions: item out on loan, item on reserve, patron's fines; actions: (re)calculate due date, put item in stacks, put item on reserve shelf, send recall notice, reject event).

Example: Decision Tables

A decision table (Hurley 1983) is a tabular representation of a functional specification that maps events and conditions to appropriate responses or actions. We say that the specification is informal because the inputs (events and conditions) and outputs (actions) may be expressed in natural language, as mathematical expressions, or both.

Figure 4.16 shows a decision table for the library functions borrow, return, reserve, and unreserve. All of the possible input events (i.e., function invocations), conditions, and actions are listed along the left side of the table, with the input events and conditions listed above the horizontal line and the actions listed below the line. Each column represents a rule that maps a set of conditions to its corresponding result(s). An entry of "T" in a cell means that the row's input condition is true, "F" means that the input condition is false, and a dash indicates that the value of the condition does not matter. An entry of "X" at the bottom of the table means that the row's action should be performed whenever its corresponding input conditions hold. Thus, column 1 represents the situation where a library patron wants to borrow a book, the book is not already out on loan, and the patron has no outstanding fine; in this situation, the loan is approved and a due date is calculated. Similarly, column 7 illustrates the case where there is a request to put a book on reserve but the book is currently out on loan; in this case, the book is recalled and the due date is recalculated to reflect the recall.

This kind of representation can result in very large tables, because the number of columns to consider is equal to the number of combinations of input conditions. That is, if there are n input conditions, there are 2ⁿ possible combinations of conditions. Fortunately, many combinations map to the same set of results and can be combined into a single column. Some combinations of conditions may be infeasible (e.g., an item cannot be borrowed and returned at the same time). By examining decision tables in this way, we can reduce their size and make them easier to understand.

What else can we tell about a requirements specification that is expressed as a decision table? We can easily check whether every combination of conditions has been considered, to determine if the specification is complete. We can examine the table for consistency, by identifying multiple instances of the same input conditions and eliminating any conflicting outputs. We can also search the table for patterns, to see how strongly individual input conditions correlate to individual actions. Such a search would be arduous on a specification modeled using a traditional textual notation for expressing mathematical functions.
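As an illustration of how mechanical these checks can be, the sketch below (Python; the rule contents are hypothetical and show only two of the table's columns) encodes each column as a rule mapping condition values to actions, and then searches for condition combinations that no column covers:

from itertools import product

# Conditions, in a fixed order; None in a rule means "don't care" (the table's dash).
CONDITIONS = ("borrow requested", "item out on loan", "patron has fines")

RULES = [
    # (condition values, actions): a sketch of two columns of Figure 4.16
    ((True, False, False), ["calculate due date"]),
    ((True, True, None), ["reject event"]),
]

def matches(rule_conds, situation):
    return all(r is None or r == s for r, s in zip(rule_conds, situation))

def actions_for(situation):
    return [acts for conds, acts in RULES if matches(conds, situation)]

# Completeness check: report any combination of conditions that no column covers.
for combo in product([True, False], repeat=len(CONDITIONS)):
    if not actions_for(combo):
        print("uncovered combination:", dict(zip(CONDITIONS, combo)))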



Example: Parnas Tables

Parnas tables (Parnas 1992) are tabular representations of mathematical functions or relations. Like decision tables, Parnas tables use rows and columns to separate a function's definition into its different cases. Each table entry either specifies an input condition that partially identifies some case, or it specifies the output value for some case. Unlike decision tables, the inputs and outputs of a Parnas table are purely mathematical expressions.

To see how Parnas tables work, consider Figure 4.17. The rows and columns define Calc due date, an operation in our library example. The information is represented as a Normal Table, which is a type of Parnas table. The column and row headers are predicates used to specify cases, and the internal table entries store the possible function results. Thus, each internal table entry represents a distinct case in the function's definition. For example, if the event is to renew a loan (column header), and the publication being borrowed is on reserve (column header), and the patron making the request has no outstanding fines (row header), then the due date is calculated to be publication.reserve loan period days from Today. A table entry of "X" indicates that the operation is invalid under the specified conditions; in other specifications, an entry of "X" could mean that the combination of conditions is infeasible. Notice how the column and row headers are structured to cover all possible combinations of conditions that can affect the calculation of a loaned item's due date. (The symbol ¬ means "not," so ¬publication.InState(reserve) means that the publication is not on reserve.)

The phrase Parnas tables actually refers to a collection of table types and abbreviation strategies for organizing and simplifying functional and relational expressions. Another table type is an Inverted Table, which looks more like a conventional decision table: case conditions are specified as expressions in the row headers and in the table entries, and the function results are listed in the column headers, at the top or the bottom of the table. In general, the specifier's goal is to choose or create a table format that results in a simple and compact representation for the function or relation being specified. The tabular structure of these representations makes it easy for reviewers to check that a specification is complete (i.e., there are no missing cases) and consistent (i.e., there are no duplicate cases). It is easier to review each function's definition case by case, rather than examining and reasoning about the whole specification at once.

A functional specification expressed using Parnas tables is best decomposed into a single function per output variable. For every input event and for every condition on entities or other variables, each function specifies the value of its corresponding output variable. The advantage of this model structure over a state-machine model is that the definition of each output variable is localized in a distinct table, rather than spread throughout the model as actions on state transitions.

FIGURE 4.17 (Normal) Parnas table for operation Calc due date (column headers distinguish event ∈ {borrow, renew} from event = recall, and publication.InState(reserve) from ¬publication.InState(reserve); row headers distinguish patron.fines = 0 from patron.fines > 0; entries give the due-date calculation, with X marking invalid combinations).


Logic

With the exception of ER diagrams, the notations we have considered so far have been model-based and are said to be operational. An operational notation is a notation used to describe a problem or a proposed software solution in terms of situational behavior: how a software system should respond to different input events, how a computation should flow from one step to another, and what a system should output under various conditions. The result is a model of case-based behavior that is particularly useful for answering questions about what the desired response should be to a particular situation: for example, what the next state or system output should be, given the current state, input event, process completion, and variable values. Such models also help the reader to visualize global behavior, in the form of paths representing allowable execution traces through the model.

Operational notations are less effective at expressing global properties or constraints. Suppose we were modeling a traffic light, and we wanted to assert that the lights controlling traffic in cross directions are never green at the same time, or that the lights in each direction are periodically green. We could build an operational model that exhibits these behaviors implicitly, in that all paths through the model satisfy these properties. However, unless the model is closed, meaning that the model expresses all of the desired behavior and that any implementation that performs additional functionality is incorrect, it is ambiguous as to whether these properties are requirements to be satisfied or simply accidental effects of the modeling decisions made.

Instead, global properties and constraints are better expressed using a descriptive notation, such as logic. A descriptive notation is a notation that describes a problem or a proposed solution in terms of its properties or its invariant behaviors. For example, ER diagrams are descriptive, in that they express relationship properties among entities. A logic consists of a language for expressing properties, plus a set of inference rules for deriving new, consequent properties from the stated properties. In mathematics, a logical expression,5 called a formula, evaluates to either true or false, depending on the values of the variables that appear in the formula. In contrast, when logic is used to express a property of a software problem or system, the property is an assertion about the problem or system that should be true. As such, a property specification represents only those values of the property's variables for which the property's expression evaluates to true.

There are multiple variants of logic that differ in how expressive their property notation is, or in what inference rules they provide. The logic commonly used to express properties of software requirements is first-order logic, comprising typed variables; constants; functions; predicates, like relational operators < and >; equality; logical connectives ∧ (and), ∨ (or), ¬ (not), ⇒ (implies), and ⇔ (logical equivalence); and quantifiers ∃ (there exists) and ∀ (for all). Consider the following variables of the turnstile problem, with their initial values:

5 You can think of a logic as a function that maps expressions to a set of possible values. An n-valued logic maps to a set of n values. Binary logic maps expressions to {true, false}, but n can in general be larger than 2. In this book, we assume that n is 2 unless otherwise stated.



num_coins : integer := 0;                  /* number of coins inserted */
num_entries : integer := 0;                /* number of half-rotations of turnstile */
barrier : {locked, unlocked} := locked;    /* whether barrier is locked */
may_enter : boolean := false;              /* whether anyone may enter */
insert_coin : boolean := false;            /* event of coin being inserted */
push : boolean := false;                   /* turnstile is pushed sufficiently hard
                                              to rotate it one-half rotation */

The following are examples of turnstile properties over these variables, expressed in first-order logic:

num_coins ≥ num_entries
(num_coins > num_entries) ⇒ (barrier = unlocked)
(barrier = locked) ⇒ ¬may_enter

Together, these formulae assert that the number of entries through the turnstile's barrier should never exceed the number of coins inserted into the turnstile, and that whenever the number of coins inserted exceeds the number of entries, the barrier is unlocked to allow another person to enter the gated area. Note that these properties say nothing about how variables change value, such as how the number of inserted coins increases. Presumably, another part of the specification describes this. The above properties simply ensure that however the variables' values change, the values always satisfy the formulae's constraints.
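Because each property is a boolean expression over the state variables, recorded states can be checked mechanically. A minimal sketch (Python; encoding each implication as not-p-or-q is standard, and everything else follows the declarations and formulae above):

def properties_hold(num_coins, num_entries, barrier, may_enter):
    # num_coins >= num_entries
    p1 = num_coins >= num_entries
    # (num_coins > num_entries) implies (barrier = unlocked)
    p2 = not (num_coins > num_entries) or barrier == "unlocked"
    # (barrier = locked) implies not may_enter
    p3 = not (barrier == "locked") or not may_enter
    return p1 and p2 and p3

# The initial state declared above satisfies all three properties.
assert properties_hold(num_coins=0, num_entries=0, barrier="locked", may_enter=False)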

Temporal logic introduces additional logical connectives for constraining how variables can change value over time, more precisely, over multiple points in an execution. For example, temporal-logic connectives can express imminent changes, like variable assignments (e.g., an insert_coin event results in variable num_coins being incremented by 1), or they can express future variable values (e.g., after an insert_coin event, variable may_enter remains true until a push event). We could model this behavior in first-order logic by adding a time parameter to each of the model's variables and asking about the value of a variable at a particular time. However, temporal logic, in allowing variable values to change and in introducing special temporal-logic connectives, represents varying behavior more succinctly.

As with logics in general, there are many variants of temporal logic, which differ in the connectives that they introduce. The following (linear-time) connectives constrain future variable values, over a single execution trace:

□f ≡ f is true now and throughout the rest of the execution
◇f ≡ f is true now or at some future point in the execution
○f ≡ f is true in the next point of the execution
f W g ≡ f is true until a point where g is true, but g may never be true

In the following, the temporal turnstile properties given above are expressed in temporal logic:

□(insert_coin ⇒ ○(may_enter W push))
□(∀n ((insert_coin ∧ num_coins = n) ⇒ ○(num_coins = n+1)))
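These connectives can be given a simple finite-trace reading, which is enough to check a recorded execution against a property. A sketch (Python; a trace is a list of states, each a dictionary of variable values; restricting the semantics to finite traces is our simplification):

def always(trace, f):        # box f: f holds at every point from now on
    return all(f(trace[i:]) for i in range(len(trace)))

def eventually(trace, f):    # diamond f: f holds now or at some later point
    return any(f(trace[i:]) for i in range(len(trace)))

def next_(trace, f):         # circle f: f holds at the next point (false at trace end)
    return len(trace) > 1 and f(trace[1:])

def weak_until(trace, f, g): # f W g: f holds until g does, or f holds forever
    for i in range(len(trace)):
        if g(trace[i:]):
            return True
        if not f(trace[i:]):
            return False
    return True

# The first turnstile property above, applied to a three-state trace.
prop = lambda t: always(t, lambda s: not s[0]["insert_coin"] or
                        next_(s, lambda u: weak_until(u,
                            lambda v: v[0]["may_enter"],
                            lambda v: v[0]["push"])))
trace = [
    {"insert_coin": True,  "may_enter": False, "push": False},
    {"insert_coin": False, "may_enter": True,  "push": False},
    {"insert_coin": False, "may_enter": True,  "push": True},
]
print(prop(trace))  # True for this trace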


Properties are often used to augment a model-based specification, either to impose constraints on the model's allowable behaviors or simply to express redundant but nonobvious global properties of the specification. In the first case, a property specifies behavior not expressed in the model, and the desired behavior is the conjunction of the model and the property. In the second case, the property does not alter the specified behavior, but may aid in understanding the model by explicating otherwise implicit behavior. Redundant properties also aid in requirements verification, by providing expected properties of the model for the reviewer to check.

Example: Object Constraint Language (OCL)

The Object Constraint Language (OCL) is an attempt to create a constraint language that is both mathematically precise and easy for nonmathematicians, like customers, to read, write, and understand. The language is specially designed for expressing constraints on object models (i.e., ER diagrams), and introduces language constructs for navigating from one object to another via association paths, for dealing with collections of objects, and for expressing queries on object type.

FIGURE 4.18 Library classes annotated with OCL properties (detailed Patron, Publication, and Loan classes, with class invariants and with pre- and post-conditions on the method borrow()).

A partial Library class model from Figure 4.5 appears in Figure 4.18, in which three classes have been detailed and annotated with OCL constraints. The leftmost constraint is an invariant on the Patron class, and specifies that no patron's fines may have a negative value (i.e., the library always makes change if a patron's payment exceeds his or her fines). The topmost constraint is an invariant on the Publication class, and specifies that call numbers are unique. This constraint introduces

• Construct allInstances, which returns all instances of the Publication class
• Symbol →, which applies the attribute or operation of its right operand to all of the objects in its left operand
• Constructs forAll, and, and implies, which correspond to the first-order connectives described above

Thus, the constraint literally says that for any two publications, p1 and p2, returned by allInstances, if p1 and p2 are not the same publication, then they have different call numbers. The third constraint, attached to the method borrow(), expresses the operation's pre- and post-conditions. One of the pre-conditions concerns an attribute in the Patron class, which is accessible via the borrow association; the patron object can be referenced either by its role name, borrower, or by the class name written in lowercase letters, if the association's far end has no role name. If the association's multiplicity were greater than 0..1, then navigating the association would return a collection of objects, and we would use → notation, rather than dot notation, to access the objects' attributes.
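The uniqueness invariant has a direct operational reading. Below is a rough Python analogue (the class and attribute names are ours) of allInstances and the forAll constraint on call numbers:

class Publication:
    all_instances = []                 # plays the role of OCL's allInstances

    def __init__(self, call_number):
        self.call_number = call_number
        Publication.all_instances.append(self)

def call_numbers_unique():
    # forAll(p1, p2 | p1 <> p2 implies p1.callNumber <> p2.callNumber)
    pubs = Publication.all_instances
    return all(p1.call_number != p2.call_number
               for i, p1 in enumerate(pubs)
               for p2 in pubs[i + 1:])

Publication("QA76.758")
Publication("QA76.9")
assert call_numbers_unique()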

Although not originally designed as part of UML, OCL is now tightly coupled with UML and is part of the UML standard. OCL can augment many of UML's models. For example, it can be used to express invariants, pre-conditions, and post-conditions in class diagrams, or invariants and transition conditions in statechart diagrams. It can also express conditions on events in Message Sequence Charts (Warmer and Kleppe 1999). OCL annotations of UML require a relatively detailed class model, complete with attribute types, operation signatures, role names, multiplicities, and state enumerations of the class's statechart diagram. OCL expressions can appear in UML diagrams as UML notes, or they can be listed in a supporting document.

Example: Z

Z (pronounced "zed") is a formal requirements-specification language that structures set-theoretic definitions of variables into a complete abstract-data-type model of a problem, and uses logic to express the pre- and post-conditions for each operation. Z uses software-engineering abstractions to decompose a specification into manageably sized modules, called schemas (Spivey 1992). Separate schemas specify

• The system state in terms of typed variables, and invariants on variables' values
• The system's initial state (i.e., initial variable values)
• Operations

Moreover, Z offers the precision of a mathematical notation and all of its benefits, such as being able to evaluate specifications using proofs or automated checks.

Figure 4.19 shows part of our library example specified in Z. Patron, Item, Date, and Duration are all basic types that correspond to their respective real-world designations. (See Sidebar 4.5 for more on designations.) The Library schema declares the problem to consist of a Catalogue and a set of items OnReserve, both declared as powersets (ℙ) of Items; these declarations mean that the values of Catalogue and OnReserve can change during execution to be any subset of Items. The schema also declares partial mappings (⇸) that record the Borrowers and DueDates for the subset of Items that are out on loan, and record Fines for the subset of Patrons who have outstanding fines. The domain (dom) of a partial mapping is the subset of entities currently being mapped; hence, we assert that the subset of items out on loan should be exactly the subset of items that have due dates. The InitLibrary schema initializes all of the variables to be empty sets and functions. All of the remaining schemas correspond to library operations.


FIGURE 4.19 Partial Z specification of the library problem (basic types Patron, Item, Date, and Duration; the Library state schema with Catalogue, OnReserve, Borrower, DueDate, and Fine; the InitLibrary schema; and operation schemas such as Buy, Get Due Date, and Return).

SIDEBAR 4.5 GROUND REQUIREMENTS IN THE REAL WORLD

Jackson's advice (Jackson 1995) to ground the requirements in the real world goes beyond the message about expressing the requirements and the specification in terms of the proposed system's environment. Jackson argues that any model of the requirements will include primitive terms that have no formal meaning (e.g., Patron, Publication, and Article in our library example), and that the only way to establish the meaning of a primitive term is to relate it to some phenomenon in the real world. He calls these descriptions designations, and he distinguishes designations from definitions and assertions. Definitions are formal meanings of terms based on the meanings of other terms used in the model (e.g., the definition of a book out on loan), whereas assertions describe constraints on terms (e.g., patrons can borrow only those items not currently out on loan). If a model is to have any meaning with respect to real-world behavior, its primitive terms must be clearly and precisely tied to the real world, and its designations "must be maintained as an essential part of the requirements documentation" (Zave and Jackson 1997).




The top section of an operation schema indicates whether the operation modifies (Δ) or simply queries (Ξ) the system state, and identifies the inputs (?) and outputs (!) of the operation. The bottom section of an operation schema specifies the operation's pre-conditions and post-conditions. In operations that modify the system state, unprimed variables represent variable values before the operation is performed, and primed variables represent values following the operation. For example, the input to operation Buy is a new library Item, and the pre-condition specifies that the Item not already be in the library Catalogue. The post-conditions update the Catalogue to include the new Item, and specify that the other library variables do not change value (e.g., the updated value Fine' equals the old value Fine). Operation Return is more complicated. It takes as input the library Item being returned, the Patron who borrowed the Item, and the Date of the return. The post-conditions remove the returned item from variables Borrowers and DueDates; these updates use the Z symbol ⩤, "domain subtraction," to return a submapping of their pre-operation values, excluding any element whose domain value is the Item being returned. The next two post-conditions are updates to variable Fines, conditioned on whether the Patron incurs an overdue fine for returning the Item later than the loan's due date. These updates use symbol ↦, which "maps" a domain value "to" a range value, and symbol ⊕, which "overrides" a function mapping, usually with a new "maps-to" element. Thus, if today's date is later than the returned Item's DueDate, then the Patron's Fine is overridden with a new value that reflects the old fine value plus the new overdue fine. The last two post-conditions specify that the values of variables Catalogue and OnReserve do not change.
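To see the post-conditions operationally, here is a rough Python analogue of the Return schema (illustrative only: the partial mappings become dictionaries, domain subtraction becomes key deletion, function override becomes a dictionary update, and the per-day fine rate is our assumption):

from datetime import date

def return_item(state, item, patron, today, daily_fine=0.50):
    # Pre-conditions: the item is on loan, and to this patron.
    assert item in state["borrower"] and state["borrower"][item] == patron
    due = state["due_date"][item]
    # Domain subtraction: remove the returned item from both partial mappings.
    del state["borrower"][item]
    del state["due_date"][item]
    # Override Fines if the item is overdue; Catalogue and OnReserve are untouched.
    if today > due:
        overdue = (today - due).days * daily_fine
        state["fines"][patron] = state["fines"].get(patron, 0.0) + overdue
    return state

state = {"borrower": {"item1": "pat1"}, "due_date": {"item1": date(2001, 7, 1)},
         "fines": {}}
print(return_item(state, "item1", "pat1", date(2001, 7, 3))["fines"])  # {'pat1': 1.0}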

Algebraic Specifications

With the exception of the logic and OCL notations, all of the notation paradigms we have considered so far tend to result in models that suggest particular implementations. For example,

• A UML class model suggests what classes ought to appear in the final (object-oriented) implementation.

• A data-flow specification suggests how an implementation ought to be decomposed into data-transforming modules.

• A state-machine model suggests how a reactive system should be decomposed into cases.

• A Z specification suggests how complex data types can be implemented in terms of sets, sequences, or functions.


Such implementation bias in a requirements specification can lead a software designer to produce a design that adheres to the specification's model, subconsciously disregarding possibly better designs that would satisfy the specified behavior. For example, as we will see in Chapter 6, the classes in a UML class diagram may be appropriate for expressing a problem simply and succinctly, but the same class decomposition in a design may result in an inefficient solution to the problem.

A completely different way of viewing a system is in terms of what happens when combinations of operations are performed. This multi-operational view is the main idea behind algebraic specifications: to specify the behavior of operations by specifying the interactions between pairs of operations rather than modeling individual operations. An execution trace is the sequence of operations that have been performed since the start of execution. For example, one execution of our turnstile problem, starting with a new turnstile, is the operation sequence

new().coin().push().rotated().coin().push().rotated() ...

or, in mathematical-function notation:

... (rotated(push(coin(rotated(push(coin(new()))))))) ...

Specification axioms specify the effects of applying pairs of operations on an arbitrary sequence of operations that have already executed (where SEQ is some prefix sequence of operations):

num_entries(coin(SEQ)) = num_entries(SEQ)
num_entries(push(SEQ)) = num_entries(SEQ)
num_entries(rotated(SEQ)) = 1 + num_entries(SEQ)
num_entries(new()) = 0

The first three axioms specify the behavior of operation num_entries when applied to sequences ending with operations coin, push, and rotated, respectively. For example, a rotated operation indicates that another visitor has entered the zoo, so num_entries applied to rotated(SEQ) should be one more than num_entries applied to SEQ. The fourth axiom specifies the base case of num_entries when applied to a new turnstile. Together, the four axioms indicate that operation num_entries returns, for a given sequence, the number of occurrences of the operation rotated, without saying anything about how that information may be stored or computed. Similar axioms would need to be written to specify the behaviors of other pairs of operations.
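Read operationally, the axioms define num_entries by recursion on the trace's outermost operation. A sketch in Python (representing a trace as a list of operation names, with the empty list standing for new()):

def num_entries(seq):
    if not seq:                      # num_entries(new()) = 0
        return 0
    head, rest = seq[-1], seq[:-1]   # outermost operation, and the prefix SEQ
    if head in ("coin", "push"):     # first two axioms: no change
        return num_entries(rest)
    if head == "rotated":            # third axiom: one more entry
        return 1 + num_entries(rest)
    raise ValueError("unknown operation: " + head)

assert num_entries(["coin", "push", "rotated", "coin", "push", "rotated"]) == 2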

Algebraic specification notations are not popular among software developers because, for a collection of operations, it can be tricky to construct a concise set of axioms that is complete and consistent (and correct!). Despite their complexity, algebraic notations have been added to several formal specification languages, to enable specifiers to define their own abstract data types for their specifications.

Example: SDL Data

SDL data definitions are used to create user-defined data types and parameterized data types in the Specification and Description Language (SDL) (ITU 2002). An SDL data type definition introduces the data type being specified, the signatures of all operations on that data type, and axioms that specify how pairs of operations interact. Figure 4.20 shows a partial SDL data specification for our library problem, where the library itself (the catalogue of publications, and each publication's loan and reserve status) is treated as a complex data type. NEWTYPE introduces the Library data type. The LITERALS section declares any constants of the new data type; in this case, New is the value of an empty library. The OPERATORS section declares all of the library operations, including each operator's parameter types and return type. The AXIOMS section specifies the behaviors of pairs of operations.

FIGURE 4.20 Partial SDL data specification for the library problem (NEWTYPE Library, with LITERALS, OPERATORS, and AXIOMS sections).

As mentioned above, the hardest part of constructing an algebraic specification is defining a set of axioms that is complete and consistent and that reflects the desired behavior. It is especially difficult to ensure that the axioms are consistent because they are so interrelated: each axiom contributes to the specification of two operations, and each operation is specified by a collection of axioms. As such, a change to the specification necessarily implicates multiple axioms. A heuristic that helps to reduce the number of axioms, thereby reducing the risk of inconsistency, is to separate the operations into

• Generators, which help to build canonical representations of the defined data type
• Manipulators, which return values of the defined data type, but are not generators
• Queries, which do not return values of the defined data type

The set of generator operations is a minimal set of operations needed to construct any value of the data type. That is, every sequence of operations can be reduced to some canonical sequence of only generator operations, such that the canonical sequence represents the same data value as the original sequence. In Figure 4.20, we select New, buy, borrow, and reserve as our generator operations, because these operations can represent any state of the library, with respect to the contents of the library's catalogue and the loan and reserve states of its publications. This decision leaves lose, return, unreserve, and renew as our manipulator operations, because they are the remaining operations that have return type Library, and leaves isInCatalogue, isOnLoan, and isOnReserve as our query operations.

The second part of the heuristic is to provide axioms that specify the effects of applying a nongenerator operation to a canonical sequence of operations. Because canonical sequences consist only of generator operations, this step means that we need to provide axioms only for pairs of operations, where each pair is a nongenerator operation that is applied to an application of a generator operation. Each axiom specifies how to reduce an operation sequence to its canonical form: applying a manipulator operation to a canonical sequence usually results in a smaller canonical sequence, because the manipulator often undoes the effects of an earlier generator operation, such as returning a borrowed book; applying a query operation, like checking whether a book is out on loan, returns some result without modifying the already-canonical system state.

The axioms for each nongenerator operation are recursively defined:

1. There is a base case that specifies the effect of each nongenerator operation on an empty, New library. In our library specification (Figure 4.20), losing a book from an empty Library is an ERROR.

2. There is a recursive case that specifies the effect of two operations on common parameters, such as buying and losing the same book. In general, such operations interact. In this case, the operations cancel each other out, and the result is the state of the library, minus the two operations. Looking at the case of losing a book that has been borrowed, we discard the borrow operation (because there is no need to keep any loan records for a lost book) and we apply the lose operation to the rest of the sequence.

3. There is a second recursive case that applies two operations to different parameters, such as buying and losing different books. Such operations do not interact, and the axiom specifies how to combine the effects of the inner operation with the result of recursively applying the outer operation to the rest of the system state. In the case of buying and losing different books, we keep the effect of buying one book, and we recursively apply the lose operation to the rest of the sequence of operations executed so far.

There is no need to specify axioms for pairs of nongenerator operations, because we can use the above axioms to reduce to canonical form the application of each nongenerator operation before considering the next nongenerator operation. We could write axioms for pairs of generator operations; for example, we could specify that consecutive loans of the same book are an ERROR. However, many combinations of generator operations, such as consecutive loans of different books, will not result in reduced canonical forms. Instead, we write axioms assuming that many of the operations have pre-conditions that constrain when operations can be applied. For example, we assume in our library specification (Figure 4.20) that it is invalid to borrow a book that is already out on loan. Given this assumption, the effect of returning a borrowed book

return(borrow(SEQ,i),i)

is that the two operations cancel each other out and the result is equivalent to SEQ. If we do not make this assumption, then we would write the axioms so that the return operation removes all corresponding borrow operations:

return(New, i) = ERROR;
return(buy(lib, i), i2) = if i = i2 then buy(lib, i)
                          else buy(return(lib, i2), i);
return(borrow(lib, i), i2) = if i = i2 then return(lib, i2)
                             else borrow(return(lib, i2), i);
return(reserve(lib, i), i2) = reserve(return(lib, i2), i);

Thus, the effect of returning a borrowed book is to discard the borrow operation and to reapply the return operation to the rest of the sequence, so that it can remove any extraneous matching borrow operations; this recursion terminates when the return operation is applied to the corresponding buy operation, which denotes the beginning of the book's existence, or to an empty library, an ERROR. Together, these axioms specify that operation return removes from the library state any trace of the item being borrowed, which is the desired behavior. A specification written in this style would nullify consecutive buying, borrowing, or reserving of the same item.
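These axioms can also be read as rewrite rules that reduce a trace to a canonical, generator-only sequence. A sketch in Python (representing the canonical state as a list of (generator, item) pairs, with the empty list standing for New; this encoding is ours):

def apply_return(canonical, item):
    if not canonical:                        # return(New, i) = ERROR
        raise ValueError("ERROR: item does not exist")
    (op, i), rest = canonical[-1], canonical[:-1]
    if op == "buy" and i == item:            # recursion stops at the matching buy
        return rest + [(op, i)]
    if op == "borrow" and i == item:         # discard the matching borrow and
        return apply_return(rest, item)      # keep removing any earlier ones
    # Non-interacting operation (different item, or a reserve): keep it and recurse.
    return apply_return(rest, item) + [(op, i)]

lib = [("buy", "b1"), ("borrow", "b1"), ("buy", "b2")]
print(apply_return(lib, "b1"))  # [('buy', 'b1'), ('buy', 'b2')]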

4.6 REQUIREMENTS AND SPECIFICATION LANGUAGES

At this point, you may be wondering how the software-engineering community could have developed so many types of software models, with none being the preferred or ideal notation. The situation is not unlike an architect working with a collection of blueprints: each blueprint maps a particular aspect of a building's design (e.g., structural support, heating conduits, electrical circuits, water pipes), and it is the collection of plans that enables the architect to visualize and communicate the building's whole design. Each of the notational paradigms described above models problems from a different perspective: entities and relationships, traces, execution states, functions, properties, data. As such, each is the paradigm of choice for modeling a particular view of a software problem. With practice and experience, you will learn to judge which viewpoints and notations are most appropriate for understanding or communicating a given software problem.

Because each paradigm has its own strengths, a complete specification may consist of several models, each of which illustrates a different aspect of the problem. For this reason, most practical requirements and specification languages are actually combinations of several notational paradigms. By understanding the relationships between specification languages and the notational paradigms they employ, you can start to recognize the similarities among different languages and to appreciate the essential ways in which specification languages differ. At the end of this chapter, we discuss criteria for evaluating and choosing a specification language.


Unified Modeling Language (UML)

The Unified Modeling Language (UML) (OMG 2003) is the language best known for combining multiple notation paradigms. Altogether, the UML standard comprises eight graphical modeling notations, plus the OCL constraint language. The UML notations that are used during requirements definition and specification include

• Use-case diagram (a high-level DFD): A use-case diagram is used at the start of a new project, to record the essential top-level functions that the to-be-developed product should provide. In the course of detailing the use cases' scenarios, we may identify important entities that play a role in the problem being modeled.

• Class diagram (an ER diagram): As mentioned previously, the class diagram is the flagship model of a UML specification, emphasizing the problem's entities and their interrelationships. The remaining UML specification models provide more detail about how the classes' objects behave and how they interact with one another. As we gain a better understanding of the problem being modeled, we detail the class diagram with additional attributes, attribute classes, operations, and signatures. Ideally, new insight into the problem is more likely to cause changes to these details than to affect the model's entities or relationships.

• Sequence diagram (an event trace): Sequence diagrams are early behavioral models that depict traces of messages passed between class instances. They are best used to document important scenarios that involve multiple objects. When creating sequence diagrams, we look for common subsequences that appear in several diagrams; these subsequences may help us to identify states (e.g., the start and end points of the subsequence) in the objects' local behaviors.

• Collaboration diagram (an event trace): A collaboration diagram illustrates one or more event traces, overlaid on the class diagram. As such, a collaboration diagram presents the same information as a sequence diagram. The difference is that the sequence diagram emphasizes a scenario's temporal ordering of messages, because it organizes messages along a timeline. The collaboration diagram, on the other hand, emphasizes the classes' relationships, and treats the messages as elaborations of those relationships by representing messages as arrows between classes in the class diagram.

• Statechart diagram (a state-machine model): A UML statechart diagram specifies how each instance of one class in the specification's class diagram behaves. Before writing a statechart for a class (more specifically, for a representative object of that class), we should identify the states in the object's life cycle, the events that this object sends to and receives from other objects, the order in which these states and events occur, and the operations that the object invokes. Because such information is fairly detailed, a statechart diagram should not be attempted until late in the requirements phase, when the problem's details are better understood.

• OCL properties (logic): OCL expressions are properties about a model's elements (e.g., objects, attributes, events, states, messages). OCL properties can be used in any of the above models, to explicate the model's implicit behavior or to impose constraints on the model's specified behavior.



Most of these notations were discussed in the preceding section, as examples of different notation paradigms. In Chapter 6, we will see more details about how UML works, by applying it to a real-world problem for both specification and design.

Specification and Description Language (SDL)

The Specification and Description Language (SDL) (ITU 2002) is a language standardized by the International Telecommunications Union for specifying precisely the behavior of real-time, concurrent, distributed processes that communicate with each other via unbounded message queues. SDL comprises three graphical diagrams, plus algebraic specifications for defining complex data types:

• SDL system diagram (a DFD): An SDL system diagram, shown in Figure 4.21(a), depicts the top-level blocks of the specification and the communication channels that connect the blocks. The channels are directional and are labeled with the types of signals that can flow in each direction. Message passing via channels is asynchronous, meaning that we cannot make any assumptions about when sent messages will be received; of course, messages sent along the same channel will be received in the order in which they were sent.

• SDL block diagram (a DFD): Each SDL block may model a lower-level collection of blocks and the message-delaying channels that interconnect them. Alternatively, it can model a collection of lowest-level processes that communicate via signal routes, shown in Figure 4.21(b). Signal routes pass messages synchronously, so messages sent between processes in the same block are received instantaneously. In fact, this difference in communication mechanisms is a factor when deciding how to decompose behavior: processes that need to synchronize with one another and that are highly coupled should reside in the same block.

FIGURE 4.21 SDL graphical notations: (a) an SDL system, or block of blocks; (b) an SDL block of processes; (c) an SDL process.


• SDL process diagram (a state-machine model): An SDL process, shown in Figure 4.21(c), is a state machine whose transitions are sequences of language constructs (inputs, decisions, tasks, outputs) that start and end at state constructs. In each execution step, the process removes a signal from the head of its input queue and compares the signal to the input constructs that follow the process's current state. If the signal matches one of the state's inputs, the process executes all of the constructs that follow the matching input, until the execution reaches the next state construct. (A small sketch of this execution loop appears after this list.)

• SDL data type (algebraic specification): An SDL process may declare local variables, and SDL data type definitions are used to declare complex, user-defined variable types.
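A rough sketch of this execution model (Python; the states, signals, and the choice to simply discard unmatched signals are our assumptions; real SDL can also save signals for later states):

from collections import deque

# Transition table: (state, signal) -> (tasks to execute, next state).
TRANSITIONS = {
    ("idle", "request"): ([lambda: print("processing request")], "busy"),
    ("busy", "done"):    ([lambda: print("request complete")], "idle"),
}

def run_process(initial_state, signals):
    state, queue = initial_state, deque(signals)
    while queue:
        signal = queue.popleft()          # remove the signal at the head of the queue
        if (state, signal) in TRANSITIONS:
            tasks, next_state = TRANSITIONS[(state, signal)]
            for task in tasks:            # execute the constructs after the input...
                task()
            state = next_state            # ...until the next state construct
        # else: the signal is discarded (an assumption; SDL can also save signals)
    return state

run_process("idle", ["request", "done"])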

In addition, an SDL specification is often accompanied by a set of Message Sequence Charts (MSC) (ITU 1996), each of which illustrates a single execution of the specification in terms of the messages passed between the specification's processes.

Software Cost Reduction (SCR)

Software Cost Reduction (SCR) (Heitmeyer 2002) is a collection of techniques that were designed to encourage software developers to employ good software-engineering design principles. An SCR specification models software requirements as a mathematical function, REQ, that maps monitored variables, which are environmental variables that are sensed by the system, to controlled variables, which are environmental variables that are set by the system. The function REQ is decomposed into a collection of tabular functions, similar to Parnas tables. Each of these functions is responsible for setting the value of one controlled variable or the value of a term, which is a macro-like variable that is referred to in other functions' definitions.

REQ is the result of composing these tabular functions into a network (a DFD, as shown in Figure 4.22), whose edges reflect the data dependencies among the functions. Every execution step starts with a change in the value of one monitored variable. This change is then propagated through the network in a single, synchronized step. The specification's functions are applied in a topological sort that adheres to the functions' data dependencies: any function that refers to updated values of variables must execute after the functions that update those values. Thus, an execution step resembles a wave of variable updates flowing through the network, starting with newly sensed monitored-variable values, followed by updates to term variables, followed by updates to controlled-variable values.

FIGURE 4.22 SCR specification as a network of tabular functions.



Other Features of Requirements Notations

There are many other requirements-modeling techniques. Some techniques include facilities for associating the degree of uncertainty or risk with each requirement. Other techniques have facilities for tracing requirements to other system documents, such as design or code, or to other systems, such as when requirements are reused. Most specification techniques have been automated to some degree, making it easy to draw diagrams, collect terms and designations into a data dictionary, and check for obvious inconsistencies. As tools continue to be developed to aid software-engineering activities, documenting and tracking requirements will become easier. However, the most difficult part of requirements analysis, understanding our customers' needs, is still a human endeavor.

4.7 PROTOTYPING REQUIREMENTS

When trying to determine requirements, we may find that our customers are uncertain of exactly what they want or need. Elicitation may yield only a "wish list" of what the customers would like to see, with few details or without being clear as to whether the list is complete. Beware! These same customers, who are indecisive in their requirements, have no trouble distinguishing between a delivered system that meets their needs and one that does not; they are known as "I'll know it when I see it" customers (Boehm 2000). In fact, most people find it easier to critique, in detail, an existing product than to imagine, in detail, a new product. As such, one way that we can elicit details is to build a prototype of the proposed system and to solicit feedback from potential users about what aspects they would like to see improved, which features are not so useful, or what functionality is missing. Building a prototype can also help us determine whether the customer's problem has a feasible solution, or assist us in exploring options for optimizing quality requirements.

To see how prototyping works, suppose we are building a tool to track how much a user exercises each day. Our customers are exercise physiologists and trainers, and their clients will be the users. The tool will help the physiologists and trainers to work with their clients and to track their clients' training progress. The tool's user interface is important, because the users may not be familiar with computers. For example, in entering information about their exercise routines, the users will need to enter the date for each routine. The trainers are not sure what this interface should look like, so we build a quick prototype to demonstrate the possibilities. Figure 4.23 shows a first prototype, in which the user must type the day, month, and year. A more interesting and sophisticated interface involves a calendar (see Figure 4.24), where the user uses a mouse to select the month and year, the system displays the chart for that month, and the user selects the appropriate day in the chart. A third alternative is depicted in Figure 4.25, in which, instead of a calendar, the user is presented with three slider bars. As the user then uses the mouse to slide each bar left or right, the box at the bottom of the screen changes to show the selected day, month, and year. This third interface may provide the fastest selection, even though it may be very different from what the users are accustomed to seeing. In this example, prototyping helps us to select the right "look and feel" for the user's interaction with the proposed system.


FIGURE 4.23 Keyboard-entry prototype.

FIGURE 4.24 Calendar-based prototype.

FIGURE 4.25 Slide-bar-based prototype.

The prototype interfaces would be difficult to describe in words or symbols, and they demonstrate how some types of requirements are better represented as pictures or prototypes.

There are two approaches to prototyping: throwaway and evolutionary. A throwaway prototype is software that is developed to learn more about a problem or about a proposed solution, and that is never intended to be part of the delivered software. This approach allows us to write "quick-and-dirty" software that is poorly structured, inefficient, and without error checking, software that, in fact, may be a facade that does not implement any of the desired functionality, but that gets quickly to the heart of questions we have about the problem or about a proposed solution. Once our questions are answered, we throw away the prototype software and start engineering the software that will be delivered. In contrast, an evolutionary prototype is software that is developed not only to help us answer questions but also to be incorporated into the final product. As such, we have to be much more careful in its development, because this software has to eventually exhibit the quality requirements (e.g., response rate, modularity) of the final product, and these qualities cannot be retrofitted.

Both techniques are sometimes called rapid prototyping, because they involve building software in order to answer questions about the requirements. The term "rapid" distinguishes software prototyping from that in other engineering disciplines, in which a prototype is typically a complete solution, like a prototype car or plane that is built manually according to an already approved design. The purpose of such a prototype is to test the design and product before automating or optimizing the manufacturing step for mass production. In contrast, a rapid prototype is a partial solution that is built to help us understand the requirements or to evaluate design alternatives.

Questions about the requirements can be explored via either modeling or prototyping. Whether one approach is better than the other depends on what our questions are, how well they can be expressed in models or in software, and how quickly the models or prototype software can be built. As we saw above, questions about user interfaces may be easier to answer using prototypes. A prototype that implements a number of proposed features would more effectively help users to prioritize these features, and possibly to identify some features that are unnecessary. On the other hand, questions about constraints on the order in which events should occur, or about the synchronization of activities, can be answered more quickly using models. In the end, we need to produce final requirements documentation for the testing and maintenance teams, and possibly for regulatory bodies, as well as final software to be delivered. So, whether it is better to model or to prototype depends on whether it is faster and easier to model and to develop the software from the refined models, or faster and easier to prototype and to develop documentation from the refined prototype.

4.8 REQUIREMENTS DOCUMENTATION

No matter what method we choose for defining requirements, we must keep a set of documents recording the result. We and our customers will refer to these documents throughout development and maintenance. Therefore, the requirements must be documented so that they are useful not only to the customers but also to the technical staff on our development team. For example, the requirements must be organized in such a way that they can be tracked throughout the system's development. Clear and precise illustrations and diagrams accompanying the documentation should be consistent with the text. Also, the level at which the requirements are written is important, as explained in Sidebar 4.6.

Requirements Definition

The requirements definition is a record of the requirements expressed in the customer's terms. Working with the customer, we document what the customer can expect of the delivered system:

1. First, we outline the general purpose and scope of the system, including relevant benefits, objectives, and goals. References to other related systems are included, and we list any terms, designations, and abbreviations that may be useful.


SIDEBAR 4.6 LEVEL OF SPECIFICATION

In 1995, the Australian Defence Science and Technology Organisation reported the results of a survey of problems with requirements specifications for Navy software (Gabb and Henderson 1995). One of the problems it highlighted was the uneven level of specifications. That is, some requirements had been specified at too high a level and others were too detailed. The unevenness was compounded by several situations:

• Requirements analysts used different writing styles, particularly in documenting different system areas.

• The difference in experience among analysts led to different levels of detail in the requirements.

• In attempting to reuse requirements from previous systems, analysts used different formats and writing styles.

• Requirements were often overspecified in that analysts identified particular types of computers and programming languages, assumed a particular solution, or mandated inappropriate processes and protocols. Analysts sometimes mixed requirements with partial solutions, leading to "serious problems in designing a cost-effective solution."

• Requirements were sometimes underspecified, especially when describing the operating environment, maintenance, simulation for training, administrative computing, and fault tolerance.

Most of those surveyed agreed that there is no universally correct level of specification.

Customers with extensive experience prefer high-level specifications, and those with less experience like more detail. The survey respondents made several recommendations, including:

• Write each clause so that it contains only one requirement.

• Avoid having one requirement refer to another requirement.

• Collect similar requirements together.

2. Next, we describe the background and the rationale behind the proposal for a new system. For example, if the system is to replace an existing approach, we explain why the existing approach is unsatisfactory. Current methods and procedures are outlined in enough detail so that we can separate those elements with which the customer is happy from those that are disappointing.

3. Once we record this overview of the problem, we describe the essential characteristics of an acceptable solution. This record includes brief descriptions of the product's core functionality, at the level of use cases. It also includes quality requirements, such as timing, accuracy, and responses to failures. Ideally, we would prioritize these requirements and identify those that can be put off to later versions of the system.


4. As part of the problem's context, we describe the environment in which the system will operate. We list any known hardware and software components with which the proposed system will have to interact. To help ensure that the user interface is appropriate, we sketch the general backgrounds and capabilities of the intended users, such as their educational background, experience, and technical expertise. For example, we would devise different user interfaces for knowledgeable users than we would for first-time users. In addition, we list any known constraints on the requirements or the design, such as applicable laws, hardware limitations, audit checks, regulatory policies, and so on.

5. If the customer has a proposal for solving the problem, we outline a description of the proposal. Remember, though, that the purpose of the requirements documents is to discuss the problem, not the solution. We need to evaluate the proposed solution carefully, to determine if it is a design constraint to be satisfied or if it is an overspecification that could exclude better solutions. In the end, if the customer places any constraints on the development or if there are any special assumptions to be made, they should be incorporated into the requirements definition.

6. Finally, we list any assumptions we make about how the environment behaves. In particular, we describe any environmental conditions that would cause the proposed system to fail, and any changes to the environment that would cause us to change our requirements. Sidebar 4.7 explains in more detail why it is important to document assumptions. The assumptions should be documented separately from the requirements, so that developers know which behaviors they are responsible for implementing.

Requirements Specification

The requirements specification covers exactly the same ground as the requirements definition, but from the perspective of the developers. Where the requirements definition is written in terms of the customer's vocabulary, referring to objects, states, events, and activities in the customer's world, the requirements specification is written in terms of the system's interface. We accomplish this by rewriting the requirements so that they refer only to those real-world objects (states, events, actions) that are sensed or actuated by the proposed system:

1. In documenting the system's interface, we describe all inputs and outputs in detail, including the sources of inputs, the destinations of outputs, the value ranges and data formats of input and output data, protocols governing the order in which certain inputs and outputs must be exchanged, window formats and organization, and any timing constraints. Note that the user interface is rarely the only system interface; the system may interact with other software components (e.g., a database), special-purpose hardware, the Internet, and so on.

2. Next, we restate the required functionality in terms of the interfaces' inputs and outputs. We may use a functional notation or data-flow diagrams to map inputs to outputs, or use logic to document functions' pre-conditions and post-conditions. We may use state machines or event traces to illustrate exact sequences of operations


SIDEBAR 4.7 HIDDEN ASSUMPTIONS

Zave and Jackson (1997) have looked carefully at problems in software requirements and specification, including undocumented assumptions about how the real world behaves. There are actually two types of environmental behavior of interest: desired behavior to be realized by the proposed system (i.e., the requirements) and existing behavior that is unchanged by the proposed system. The latter type of behavior is often called assumptions or domain knowledge. Most requirements writers consider assumptions to be simply the conditions under which the system is guaranteed to operate correctly. While necessary, these conditions are not the only assumptions. We also make assumptions about how the environment will behave in response to the system's outputs.

Consider a railroad-crossing gate at the intersection of a road and a set of railroad tracks. Our requirement is that trains and cars do not collide in the intersection. However, the trains and cars are outside the control of our system; all our system can do is lower the crossing gate upon the arrival of a train and lift the gate after the train passes. The only way our crossing gate will prevent collisions is if trains and cars follow certain rules. For one thing, we have to assume that the trains travel at some maximum speed, so that we know how early to lower the crossing gate to ensure that the gate is down well before a sensed train reaches the intersection. But we also have to make assumptions about how car drivers will react to the crossing gate being lowered: we have to assume that cars will not stay in or enter the intersection when the gate is down.

or exact orderings of inputs and outputs. We may use an entity-relationship diagram to collect related activities and operations into classes. In the end, the specification should be complete, meaning that it should specify an output for any feasible sequence of inputs. Thus, we include validity checks on inputs and system responses to exceptional situations, such as violated pre-conditions (a small sketch of such pre- and post-conditions follows this list).

3. Finally, we devise fit criteria for each of the customer's quality requirements, so that we can conclusively demonstrate whether our system meets these quality requirements.

The result is a description of what the developers are supposed to produce, written in sufficient detail to distinguish between acceptable and unacceptable solutions, but without saying how the proposed system should be designed or implemented.
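As a minimal sketch of item 2 above (the function, names, and fee value are invented for illustration, not taken from any standard), pre- and post-conditions can be recorded directly alongside an interface function, including the specified response to a violated pre-condition:

    # Hypothetical fragment of a specification for the zoo turnstile's coin slot.

    ENTRY_FEE = 100  # cents; assumed value

    def accept_coin(total_paid: int, coin_value: int) -> tuple[int, str]:
        """Map an input (a sensed coin) to outputs (new total, barrier command).

        Pre-condition:  0 <= total_paid < ENTRY_FEE and coin_value > 0
        Post-condition: returns (total_paid + coin_value, command), where the
                        command is 'unlock' iff the new total reaches ENTRY_FEE
        """
        if not (0 <= total_paid < ENTRY_FEE and coin_value > 0):
            raise ValueError("pre-condition violated")  # specified exceptional response
        new_total = total_paid + coin_value
        return new_total, ("unlock" if new_total >= ENTRY_FEE else "stay-locked")

Writing the conditions this way keeps the statement testable: each clause can be turned directly into a validity check or a test case.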

Several organizations, such as the Institute of Electrical and Electronics Engineers and the U.S. Department of Defense, have standards for the content and format of the requirements documents. For example, Figure 4.26 shows a template based on IEEE's recommendations for organizing a software requirements specification by classes or objects. The IEEE standard provides similar templates for organizing the requirements specification by mode of operation, function, feature, category of user, and so on. You may want to consult these standards in preparing documents for your own projects.

FIGURE 4.26 IEEE standard for Software Requirements Specification organized by object (IEEE 1998):

1. Introduction to the Document
   1.1 Purpose of the Product
   1.2 Scope of the Product
   1.3 Acronyms, Abbreviations, Definitions
   1.4 References
   1.5 Outline of the rest of the SRS
2. General Description of Product
   2.1 Context of Product
   2.2 Product Functions
   2.3 User Characteristics
   2.4 Constraints
   2.5 Assumptions and Dependencies
3. Specific Requirements
   3.1 External Interface Requirements
       3.1.1 User Interfaces
       3.1.2 Hardware Interfaces
       3.1.3 Software Interfaces
       3.1.4 Communications Interfaces
   3.2 Functional Requirements
       3.2.1 Class 1
       3.2.2 Class 2
   3.3 Performance Requirements
   3.4 Design Constraints
   3.5 Quality Requirements
   3.6 Other Requirements
4. Appendices

Process Management and Requirements Traceability

There must be a direct correspondence between the requirements in the definition document and those in the specification document. It is here that the process management methods used throughout the life cycle begin. Process management is a set of procedures that track

• The requirements that define what the system should do
• The design modules that are generated from the requirements
• The program code that implements the design
• The tests that verify the functionality of the system
• The documents that describe the system

In a sense, process management provides the threads that tie the system parts together, integrating documents and artifacts that have been developed separately. These threads allow us to coordinate the development activities, as shown by the horizontal "threads" among entities in Figure 4.27. In particular, during requirements activities, we are concerned about establishing a correspondence between elements of the requirements definition and those of the requirements specification, so that the customer's view is tied to the developer's view in an organized, traceable way. If we do not define these links, we have no way of designing test cases to determine whether the code meets the requirements. In later chapters, we will see how process management


FIGURE 4.27 Links between software-development entities: specification, implementation, and verification threads connecting requirements, design, code, and tests.

also allows us to determine the impact of changes, as well as to control the effects of parallel development.

To facilitate this correspondence, we establish a numbering scheme or data file for convenient tracking of requirements from one document to another. Often, the process management team sets up or extends this numbering scheme to tie requirements to other components and artifacts of the system. Numbering the requirements allows us to cross-reference them with the data dictionary and other supporting documents. If any changes are made to the requirements during the remaining phases of development, the changes can be tracked from the requirements document through the design process and all the way to the test procedures. Ideally, then, it should be possible to trace any feature or function of the system to its causal requirement, and vice versa.
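A minimal sketch of such a numbering scheme (the identifiers and link structure are invented for illustration) might record the horizontal threads of Figure 4.27 as cross-references that can be queried in either direction:

    # Hypothetical traceability records: each requirement number is linked to
    # the definition clause, specification clause, design modules, and tests
    # that realize it, so a change can be tracked forward and backward.

    links = {
        "REQ-4.1": {"definition": "DEF-12", "specification": "SPEC-31",
                    "design": ["MOD-7"], "tests": ["T-88", "T-89"]},
        "REQ-4.2": {"definition": "DEF-13", "specification": "SPEC-32",
                    "design": [], "tests": []},
    }

    # Forward trace: which tests exercise REQ-4.1?
    print(links["REQ-4.1"]["tests"])        # ['T-88', 'T-89']

    # Simple completeness check: requirements not yet realized by any design module.
    print([r for r, l in links.items() if not l["design"]])   # ['REQ-4.2']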

4.9 VALIDATION AND VERIFICATION

Remember that the requirements documents serve both as a contract between us and the customer, detailing what we are to deliver, and as guidelines for the designers, detailing what they are to build. Thus, before the requirements can be turned over to the designers, we and our customers must be absolutely sure that each knows the other's intent, and that our intents are captured in the requirements documents. To establish this certainty, we validate the requirements and verify the specification.

We have been using the terms "verify" and "validate" throughout this chapter without formally defining them. In requirements validation, we check that our requirements definition accurately reflects the customer's (actually, all of the stakeholders') needs. Validation is tricky because there are only a few documents that we can use as the basis for arguing that the requirements definitions are correct. In verification, we check that one document or artifact conforms to another. Thus, we verify that our code conforms to our design, and that our design conforms to our requirements specification; at the requirements level, we verify that our requirements specification conforms to the


requirements definition. To summarize, verification ensures that we build the system right, whereas validation ensures that we build the right system!

Requirements Validation

Our criteria for validating the requirements are the characteristics that we listed in Section 4.4:

• Correct
• Consistent
• Unambiguous
• Complete
• Relevant
• Testable
• Traceable

Depending on the definition techniques that we use, some of the above checks (e.g., that the requirements are consistent or are traceable) may be automated. Also, common errors can be recorded in a checklist, which reviewers can use to guide their search for errors. Lutz (1993a) reports on the success of using checklists in validating requirements at NASA's Jet Propulsion Laboratory. However, most validation checks (e.g., that a requirement is correct, relevant, or unambiguous; or that the requirements are complete) are subjective exercises in that they involve comparing the requirements definition against the stakeholders' mental model of what they expect the system to do. For these validation checks, our only recourse is to rely on the stakeholders' assessment of our documents.

Table 4.3 lists some of the techniques that can be used to validate the requirements. Validation can be as simple as reading the document and reporting errors.

TABLE 4.3 Validation and Verification Techniques

Validation: walkthroughs, reading, interviews, reviews, checklists, models to check functions and relationships, scenarios, prototypes, simulation, formal inspections

Verification: checking, cross-referencing, simulation, consistency checks, completeness checks, checks for unreachable states or transitions, model checking, mathematical proofs


We can ask the validation team to sign off on the document, thereby declaring that they have reviewed the document and that they approve it. By signing off, the stakeholders accept partial responsibility for errors that are subsequently found in the document. Alternatively, we can hold a walkthrough, in which one of the document's authors presents the requirements to the rest of the stakeholders, and asks for feedback. Walkthroughs work best when there are a large number of varied stakeholders, and it is unrealistic to ask them all to examine the document in detail. At the other extreme, validation can be as structured as a formal inspection, in which reviewers take on specific roles (e.g., presenter, moderator) and follow prescribed rules (e.g., rules on how to examine the requirements, when to meet, when to take breaks, whether to schedule a follow-up inspection).

More often, the requirements are validated in a requirements review. In a review, representatives from our staff and the customer's staff examine the requirements document individually and then meet to discuss identified problems. The customer's representatives include those who will be operating the system, those who will prepare the system's inputs, and those who will use the system's outputs; managers of these employees may also attend. We provide members of the design team, the test team, and the process team. By meeting as a group, we can do more than check that the requirements definition satisfies the validation criteria:

1. We review the stated goals and objectives of the system.

2. We compare the requirements with the goals and objectives, to make certain that all requirements are necessary.

3. We review the environment in which the system is to operate, examining the interfaces between our proposed system and all other systems and checking that their descriptions are complete and correct.

4. The customer's representatives review the information flow and the proposed functions, to confirm that they accurately reflect the customer's needs and intentions. Our representatives review the proposed functions and constraints, to confirm that they are realistic and within our development abilities. All requirements are checked again for omissions, incompleteness, and inconsistency.

5. If any risk is involved in the development or in the actual functioning of the system, we can assess and document this risk, discuss and compare alternatives, and come to some agreement on the approaches to be used.

6. We can talk about testing the system: how the requirements will be revalidated as the requirements grow and change; who will provide test data to the test team; which requirements will be tested in which phases, if the system is to be developed in phases.

Whenever a problem is identified, the problem is documented, its cause is determined, and the requirements analysts are charged with the task of fixing the problem. For example, validation may reveal that there is a great misunderstanding about the way in which a certain function will produce results. The customers may require data to be reported in miles, whereas the users want the data in kilometers. The customers may set a reliability or availability goal that developers deem impossible to meet. These conflicts need to be resolved before design can begin. To resolve a conflict, the developers


may need to construct simulations or prototypes to explore feasibility constraints and then work with the customer to agree on an acceptable requirement. Sidebar 4.8 discusses the nature and number of requirements-related problems you are likely to find.

Our choice of validation technique depends on the experience and preferences of the stakeholders and on the technique's appropriateness for the notations used in the requirements definition. Some notations have tool support for checking consistency and completeness. There are also tools that can help with the review process and with tracking problems and their resolutions. For example, some tools work with you and the customer to reduce the amount of uncertainty in the requirements. This book's Web page points to requirements-related tools.

SIDEBAR 4.8 NUMBER OF REQUIREMENTS FAULTS

How many development problems are created during the process of capturing requirements? There are varying claims. Boehm and Papaccio (1988), in a paper analyzing software at IBM and TRW, found that most errors are made during design, and there are usually three design faults for every two coding faults. They point out that the high number of faults attributed to the design stage could derive from requirements errors. In his book on software engineering economics, Boehm (1981) cites studies by Jones and Thayer and others that attribute

• 35% of the faults to design activities for projects of 30,000-35,000 delivered source instructions

• 10% of the faults to requirements activities and 55% of the faults to design activities for projects of 40,000-80,000 delivered source instructions

• 8% to 10% of the faults to requirements activities and 40% to 55% of the faults to design activities for projects of 65,000-85,000 delivered source instructions

Basili and Perricone (1984), in an empirical investigation of software errors, report that 48% of the faults observed in a medium-scale software project were "attributed to incorrect or misinterpreted functional specifications or requirements."

Beizer (1990) attributes 8.12% of the faults in his samples to problems in functional requirements. He includes in his count such problems as incorrect requirements; illogical or unreasonable requirements; ambiguous, incomplete, or overspecified requirements; unverifiable or untestable requirements; poorly presented requirements; and changed requirements. However, Beizer's taxonomy includes no design activities. He says, "Requirements, especially expressed in a specification (or often, as not expressed because there is no specification) are a major source of expensive bugs. The range is from a few percent to more than 50%, depending on application and environment. What hurts most about these bugs is that they're the earliest to invade the system and the last to leave. It's not unusual for a faulty requirement to get through all development testing, beta testing, and initial field use, only to be caught after hundreds of sites have been installed."


Other summary statistics abound. For example, Perry and Stieg (1993) conclude that 79.6% of interface faults and 20.4% of the implementation faults are due to incomplete or omitted requirements. Similarly, Computer Weekly Report (1994) discussed a study showing that 44.1% of all system faults occurred in the specification stage. Lutz (1993b) analyzed safety-related errors in two NASA spacecraft software systems, and found that "the primary cause of safety-related interface faults is misunderstood hardware interface specifications" (48% to 67% of such faults), and the "primary cause of safety-related functional faults is errors in recognizing (understanding) the requirements" (62% to 79% of such faults). What is the right number for your development environment? Only careful record keeping will tell you. These records can be used as a basis for measuring improvement as you institute new practices and use new tools.

Verification

In verification, we want to check that our requirements-specification document corresponds to our requirements-definition document. This verification makes sure that if we implement a system that meets the specification, then that system will satisfy the customer's requirements. Most often, this is simply a check of traceability, where we ensure that each requirement in the definition document is traceable to the specification.
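As a sketch of this simple traceability check (the tables are invented examples), verification can begin by confirming that every definition requirement maps to at least one specification clause:

    # Hypothetical mapping from specification clauses back to the
    # requirements-definition clauses they realize.
    definition = {"DEF-1", "DEF-2", "DEF-3"}
    traces = {"SPEC-1": "DEF-1", "SPEC-2": "DEF-1", "SPEC-3": "DEF-3"}

    untraced = definition - set(traces.values())
    print(untraced)   # {'DEF-2'}: a definition requirement with no specification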

However, for critical systems, we may want to do more, and actually demonstrate that the specification fulfills the requirements. This is a more substantial effort, in which we prove that the specification realizes every function, event, activity, and constraint in the requirements. The specification by itself is rarely enough to make this kind of argument, because the specification is written in terms of actions performed at the system's interface, such as force applied to an unlocked turnstile, and we may want to prove something about the environment away from the interface, such as about the number of entries into the zoo. To bridge this gap, we need to make use of our assumptions about how the environment behaves: assumptions about what inputs the system will receive, or about how the environment will react to outputs (e.g., that if an unlocked turnstile is pushed with sufficient force, it will rotate a half-turn, nudging the pusher into the zoo). Mathematically, the specification (S) plus our environmental assumptions (A) must be sufficient to prove that the requirements (R) hold:

S, A ⊢ R

For example, to show that a thermostat and furnace will control air temperature, we have to assume that air temperature changes continuously rather than abruptly, although the sensors may detect discrete value changes, and that an operating furnace will raise the air temperature. These assumptions may seem obvious, but if a building is sufficiently porous and the outside temperature is sufficiently cold, then our second assumption will not hold. In such a case, it would be prudent to set some boundaries on the requirement: as long as the outside temperature is above -100°C, the thermostat and furnace will control the air temperature.
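The thermostat argument can be written out in the S, A ⊢ R form. The following is a minimal sketch in our own shorthand predicates (not notation taken from the chapter):

\begin{align*}
S   &: T < T_{\text{set}} \Rightarrow \mathit{furnace\_on} && \text{(behavior at the interface)}\\
A_1 &: T \text{ varies continuously} && \text{(environmental assumption)}\\
A_2 &: \mathit{furnace\_on} \wedge T_{\text{out}} > -100\,^{\circ}\mathrm{C} \Rightarrow T \text{ increases} && \text{(bounded assumption)}\\
S, A_1, A_2 &\vdash R, \quad \text{where } R: T \text{ is eventually restored to } T_{\text{set}}
\end{align*}

Bounding A_2 by the outside temperature is exactly the kind of hedge the paragraph above recommends: the proof obligation makes explicit where the argument would break down.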


This use of environmental assumptions gets at the heart of why the documentation is so important: we rely on the environment to help us satisfy the customer's requirements, and if our assumptions about how the environment behaves are wrong, then our system may not work as the customer expects. If we cannot prove that our specification and our assumptions fulfill the customer's requirements, then we need either to change our specification, strengthen our assumptions about the environment, or weaken the requirements we are trying to achieve. Sidebar 4.9 discusses some techniques for automating these proofs.

SIDEBAR 4.9 COMPUTER-AIDED VERIFICATION

Model checking is an exhaustive search of a specification's execution space, to determine whether some temporal-logic property holds of the executions. The model checker computes and searches the specification's execution space, sometimes computing the execution space symbolically and sometimes computing it on-the-fly during the search. Thus, the verification, while completely automated, consumes significant computing resources. Sreemani and Atlee (1996) used the SMV model checker to verify five properties of an SCR specification of the A-7 naval aircraft. Their SMV model consisted of 1251 lines, most of them translated automatically from the SCR specification, and theoretically had an execution space of 1.3 × 10^22 states. In their model checking, the researchers found that one of the properties that was thought not to hold did indeed hold: it turned out that the conditions under which it would not hold were unreachable. They also discovered that a safety property did not hold: according to the specification, it was possible for the weapon delivery system to use stale data in tracking a target's location when the navigation sensors were deemed to be reporting unreasonable values.

A theorem prover uses a collection of built-in theories, inference rules, and decision procedures for determining whether a set of asserted facts logically entails some unasserted fact; most sophisticated theorem provers require human assistance in sketching out the proof strategy. Dutertre and Stavridou (1997) used the theorem prover PVS to verify some of the functional and safety requirements of an avionics system. For example, the assumption that relates the wing sweep angle WSPOS at time t + eps and the wing sweep command CMD at time t, in the case where none of the interlocks is active, is expressed in PVS as:

cmd_wings : AXIOM
  constant_in_interval (CMD, t, t + eps)
  and
  not wings_locked_in_interval (t, t + eps)
  implies
  CMD (t) = WSPOS (t + eps) or
  CMD (t) < WSPOS (t + eps) and
    WSPOS (t + eps) <= WSPOS (t) - eps * ws_min_rate or
  CMD (t) > WSPOS (t + eps) and
    WSPOS (t + eps) >= WSPOS (t) + eps * ws_min_rate


The entire PVS model, including specification and assumptions, consisted of about 4500 lines of PVS, including comments and blank lines. The verification of requirements involved two steps. Some theorems were proved to check that the PVS model was internally consistent and complete, and others were used directly to prove three main safety properties. The proof of safety was quite complex. In total, 385 proofs were performed, of which about 100 were discharged automatically by the theorem prover, and the rest required human guidance. It took approximately 6 person-months to write the PVS model and support libraries, and another 12 person-months to formalize the assumptions and carry out the verification; the verification of the three main safety properties took 9 of the 12 person-months.

When requirements validation and verification are complete, we and our customers should feel comfortable about the requirements specification. Understanding what the customer wants, we can proceed with the system design. Meanwhile, the customer has in hand a document describing exactly what the delivered system should do.

4.10 MEASURING REQUIREMENTS

There are many ways to measure characteristics of requirements such that the information collected tells us a lot about the requirements process and about the quality of the requirements themselves. Measurements usually focus on three areas: product, process, and resources (Fenton and Pfleeger 1997). The number of requirements in the requirements definition and specification can give us a sense of how large the developed system is likely to be. We saw in Chapter 3 that effort-estimation models require an estimate of product size, and requirements size can be used as input to such models. Moreover, requirements size and effort estimation can be tracked throughout development. As design and development lead to a deeper understanding of both problem and solution, new requirements may arise that were not apparent during the initial requirements-capture process.

Similarly, we can measure the number of changes to requirements. A large number of changes indicates some instability or uncertainty in our understanding of what the system should do or how it should behave, and suggests that we should take actions to try to lower the rate of changes. The tracking of changes also can continue throughout development; as the system requirements change, the impact of the changes can be assessed.

Where possible, requirements-size and change measurements should be recorded by requirements type. Such category-based metrics tell us whether change or uncertainty in requirements is product wide, or rests solely with certain kinds of requirements, such as user-interface or database requirements. This information helps us to determine if we need to focus our attention on particular types of requirements.
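As a minimal sketch (the change records and type names here are invented for illustration), such category-based change metrics might be computed like this:

    from collections import Counter

    # Hypothetical change log: (requirement id, requirement type) per change.
    change_log = [
        ("R12", "user-interface"), ("R12", "user-interface"),
        ("R31", "database"), ("R7", "user-interface"), ("R2", "performance"),
    ]

    changes_by_type = Counter(rtype for _, rtype in change_log)
    total = sum(changes_by_type.values())
    for rtype, n in changes_by_type.most_common():
        print(f"{rtype}: {n} changes ({100 * n / total:.0f}% of all changes)")
    # A type that dominates the change profile is a candidate for re-elicitation.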

Because the requirements are used by the designers and testers, we may want to devise measures that reflect their assessment of the requirements. For example, we can ask the designers to rate each requirement on a scale from 1 to 5:


1. You (the designer) understand this requirement completely, you have designed from similar requirements in the past, and you should have no trouble developing a design from this requirement.

2. There are elements of this requirement that are new to you, but they are not radically different from requirements you have successfully designed from in the past.

3. There are elements of this requirement that are very different from requirements you have designed from in the past, but you understand the requirement and think you can develop a good design from it.

4. There are parts of this requirement that you do not understand, and you are not sure that you can develop a good design.

5. You do not understand this requirement at all, and you cannot develop a design for it.

We can create a similar rating scheme that asks testers how well they understand each requirement and how confident they are about being able to devise a suitable test suite for each requirement. In both cases, the profiles of rankings can serve as a coarse indicator of whether the requirements are written at the appropriate level of detail. If the designers and testers yield profiles with mostly 1s and 2s, as shown in Figure 4.28(a), then the requirements are in good shape and can be passed on to the design team. However, if there are many 4s and 5s, as shown in Figure 4.28(b), then the requirements should be revised, and the revisions reassessed to have better profiles, before we proceed to design. Although the assessment is subjective, the general trends should be clear, and the scores can provide useful feedback to both us and our customers.
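A sketch of how such rating profiles might be summarized (the 10% threshold is our own illustrative choice, not one the chapter prescribes):

    def requirements_ready(ratings, max_troublesome_share=0.1):
        """Ratings are designer scores 1-5, one per requirement.
        Treat 4s and 5s as troublesome; pass the set on to design only
        if they are a small share of the whole."""
        troublesome = sum(1 for r in ratings if r >= 4)
        return troublesome / len(ratings) <= max_troublesome_share

    print(requirements_ready([1, 2, 1, 2, 3, 1]))      # True: mostly 1s and 2s
    print(requirements_ready([4, 5, 4, 2, 5, 4, 1]))   # False: revise and reassess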

We can also take note, for each requirement, of when it is reviewed, implemented as design, implemented as code, and tested. These measures tell us the progress we are making toward completion. Testers can also measure the thoroughness of their test cases with respect to the requirements, as we will see in Chapters 8 and 9. We can measure the number of requirements covered by each test case, and the number of requirements that have been tested (Wilson 1995).

FIGURE 4.28 Measuring requirements readiness: the number of requirements receiving each rating, with (a) a profile dominated by 1s and 2s and (b) a profile with many 4s and 5s.


4.11 CHOOSING A SPECIFICATION TECHNIQUE

This chapter has presented examples of several requirements-specification techniques, and many more are available for use on your projects. Each one has useful characteristics, but some are more appropriate for a given project than others. That is, no technique is best for all projects. Thus, it is important to have a set of criteria for deciding, for each project, which technique is most suitable.

Let us consider some of the issues that should be included in such a set of criteria. Suppose we are to build a computerized system for avoiding collisions among aircraft. Participating aircraft are to be fitted with radar sensors. A subsystem on each aircraft is to monitor other aircraft in its vicinity, detect when an aircraft is flying dangerously close, establish communications with that aircraft, negotiate evasive maneuvers to avoid a collision (after all, we wouldn't want both planes to independently choose maneuvers that would put them or keep them on a collision course!), and instruct its navigation system to execute the negotiated maneuvers. Each aircraft's subsystem performs its own data analysis and decision-making procedures on onboard computers, although it shares flight plans with other aircraft and transmits all data and final maneuvers to a central site for further analysis. One of the key characteristics of this collision-avoidance system is that it is a distributed, reactive system. That is, it is a reactive system in that each aircraft's subsystem is continuously monitoring and reacting to the positions of other aircraft. It is a distributed system in that the system's functions are distributed over several aircraft. The complexity of this system makes it essential that the requirements be specified exactly and completely. Interfaces must be well-defined, and communications must be coordinated, so that each aircraft's subsystem can make decisions in a timely manner. Some specification techniques may be more appropriate than others for this problem. For example, testing this system will be difficult because, for safety reasons, most testing cannot take place in the real environment. Moreover, it will be hard to detect and replicate transient errors. Thus, we might prefer a technique that offers simulation, or that facilitates exhaustive or automated verification of the specification. In particular, techniques that automatically check the specification or system for consistency and completeness may catch errors that are not easy to spot otherwise.

More generally, if a system has real-time requirements, we need a specification technique that supports the notion of time. Any need for phased development means that we will be tracking requirements through several intermediate systems, which not only complicates the requirements tracking, but also increases the likelihood that the requirements will change over the life of the system. As the users work with intermediate versions of the system, they may see the need for new features, or want to change existing features. Thus, we need a sophisticated method that can handle change easily. If we want our requirements to have all of the desirable characteristics listed early in the chapter, then we look for a method that helps us to revise the requirements, track the changes, cross-reference the data and functional items, and analyze the requirements for as many characteristics as possible.

Ardis and his colleagues (1996) have proposed a set of criteria for evaluating specification methods. They associate with each criterion a list of questions to help us to determine how well a particular method satisfies that criterion. These criteria were intended for evaluating techniques for specifying reactive systems, but as you will see, most of the criteria are quite general:

• Applicability: Can the technique describe real-world problems and solutions in a natural and realistic way? If the technique makes assumptions about the environment, are the assumptions reasonable? Is the technique compatible with the other techniques that will be used on the project?

• Implementability: Can the specification be refined or translated easily into an implementation? How difficult is the translation? Is it automated? If so, is the generated code efficient? Is the generated code in the same language as is used in the manually produced parts of the implementation? Is there a clean, well-defined interface between the code that is machine-generated and the code that is not?

• Testability/simulation: Can the specification be used to test the implementation? Is every statement in the specification testable by the implementation? Is it possible to execute the specification?

• Checkability: Are the specifications readable by nondevelopers, such as the customer? Can domain experts (i.e., experts on the problem being specified) check the specification for accuracy? Are there automated specification checkers?

• Maintainability: Will the specification be useful in making changes to the system? Is it easy to change the specification as the system evolves?

• Modularity: Does the method allow a large specification to be decomposed into smaller parts that are easier to write and to understand? Can changes be made to the smaller parts without rewriting the entire specification?

• Level of abstraction/expressibility: How closely and expressively do objects, states, and events in the language correspond to the actual objects, actions, and conditions in the problem domain? How concise and elegant is the resulting specification?

• Soundness: Does the language or do the tools facilitate checking for inconsistencies or ambiguities in the specification? Are the semantics of the specification language defined precisely?

• Verifiability: Can we demonstrate formally that the specification satisfies the requirements? Can the verification process be automated, and, if so, is the automation easy?

• Runtime safety: If code can be generated automatically from the specification, does the code degrade gracefully under unexpected runtime conditions, such as overflow?

• Tools maturity: If the specification technique has tool support, are the tools of high quality? Is there training available for learning how to use them? How large is the user base for the tools?

• Looseness: Can the specification be incomplete or admit nondeterminism?

• Learning curve: Can a new user learn quickly the technique's concepts, syntax, semantics, and heuristics?

• Technique maturity: Has the technique been certified or standardized? Is there a user group or large user base?

• Data modeling: Does the technique include data representation, relationships, or abstractions? Are the data-modeling facilities an integrated part of the technique?

• Discipline: Does the technique force its users to write well-structured, understandable, and well-behaved specifications?

The first step in choosing a specification technique is to determine for our particular problem which of the above criteria are especially important. Different problems place different priorities on the criteria. Ardis and his colleagues were interested in developing telephone switching systems, so they judged whether each of the criteria is helpful in developing reactive systems. They considered not only the criteria's effects on requirements activities, but their effects on other life-cycle activities as well. Table 4.4 shows the results of their evaluation. The second step in choosing a specification technique is to evaluate each of the candidate techniques with respect to the criteria. For example, Ardis and colleagues rated Z as strong in modularity, abstraction, verifiability, looseness, technique maturity, and data modeling; adequate in applicability, checkability, maintainability, soundness, tools maturity, learning curve, and discipline; and weak in implementability and testability/simulation. Some of their assessments of Z, such as that Z inherently supports modularity, hold for all problem types, whereas other assessments, such as applicability, are specific to the problem type. In the end, we choose a specification technique that best supports the criteria that are most important to our particular problem.
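A minimal sketch of this two-step selection follows. The weights are invented for illustration; the scores for Z follow the ratings quoted above, while those for SDL are illustrative guesses, not Ardis's published ratings.

    # Step 1: weight each criterion for our problem.
    # Step 2: score each candidate technique, then compare weighted totals.
    weights = {"verifiability": 3, "testability": 3, "tools maturity": 2, "learning curve": 1}
    scores = {                      # 2 = strong, 1 = adequate, 0 = weak
        "Z":   {"verifiability": 2, "testability": 0, "tools maturity": 1, "learning curve": 1},
        "SDL": {"verifiability": 1, "testability": 2, "tools maturity": 2, "learning curve": 1},
    }

    for method, rating in scores.items():
        total = sum(weights[c] * rating[c] for c in weights)
        print(method, total)   # Z 9, SDL 14: SDL better fits these invented priorities

With a different problem, and therefore different weights, the same scores could favor Z; the point is that the priorities, not the techniques alone, drive the choice.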

Since no one approach is universally applicable to all systems, it may be necessary to combine several approaches to define the requirements completely.

TABLE 4.4 Importance of Specification Criteria During Reactive-System Life Cycle (Ardis et al. 1996) (R = Requirements, D = Design, I = Implementation, T = Testing, M = Maintenance, O = Other) © 1996 IEEE

Phases marked   Criteria
+ +             Applicability
+ +             Implementability
+ + +           Testability/simulation
+ + +           Checkability
+               Maintainability
+ +             Modularity
+ +             Level of abstraction/expressibility
+ +             Soundness
+ + + + +       Verifiability
+ +             Runtime safety
+ + +           Tools maturity
+               Looseness
+               Learning curve
+               Technique maturity
+               Data modeling
+ + + +         Discipline


Some methods are better at capturing control flow and synchronization, whereas other methods are better at capturing data transformations. Some problems are more easily described in terms of events and actions, whereas other problems are better described in terms of control states that reflect stages of behavior. Thus, it may be useful to use one method for data requirements and another to describe processes or time-related activities. We may need to express changes in behavior as well as global invariants. Models that are adequate for designers may be difficult for the test team to use. Thus, the choice of a specification technique(s) is bound up in the characteristics of the individual project and the preferences of developers and customers.

4.12 INFORMATION SYSTEMS EXAMPLE

Recall that our Piccadilly example involves selling advertising time for the Piccadilly Television franchise area. We can use several specification notations to model the requirements related to buying and selling advertising time. Because this problem is an information system, we will use only notations that are data-oriented.

First, we can draw a use-case diagram to represent the key uses of the system, showing the expected users and the major functions that each user might initiate. A partial diagram might look like Figure 4.29. Notice that this high-level diagram captures the essential functionality of the system, but it shows nothing about the ways in which each of these use cases might succeed or fail; for example, a campaign request would fail if all of the commercial time was already sold.

FIGURE 4.29 Use case for the Piccadilly Television advertising system, with actors such as Piccadilly Management, advertising agencies, and the Audience Measurement Bureau (adapted from Robertson and Robertson 1994).

FIGURE 4.30 Message Sequence Chart for a successful request for an advertising campaign (adapted from Robertson and Robertson 1994).

The diagram also shows nothing about the type of information that the system might input, process, or output. We need more information about each of the uses, to better understand the problem.

As a next step, we can draw event traces, such as the one shown in Figure 4.30, that depict typical scenarios within a use case. For example, the request for a campaign involves

• Searching each relevant commercial break to see whether there is any unsold time and whether commercials during that break are likely to be seen by the campaign's intended target audience

• Computing the price of the campaign, based on the prices of the available commercial spots found

• Reserving available commercial spots for the campaign

Figure 4.30 uses UML-style Message Sequence Charts, in which entities whose names are underlined represent object instances, whereas entities whose names are not underlined represent abstract classes. Thus, the search for available commercial spots is done by first asking the class for the set C* of relevant Commercial Breaks, and then asking each of these Commercial Break instances whether it has available time that is suitable for the campaign. Boxes surround sequences of messages that are repeated for multiple instances: for example, reserving time in multiple Commercial Break instances, or creating multiple Commercial Spots.

FIGURE 4.31 Partial UML class diagram of the Piccadilly Television advertising system, showing classes such as Advertising Agency, Campaign, Commercial, Commercial Spot, Commercial Break, Program, and Rate Segment (adapted from Robertson and Robertson 1994).

The resulting campaign of commercial spots and the campaign's price are returned to the requesting Advertising Agency. Similar traces can be drawn for other possible responses to a campaign request and for other use cases. In drawing these traces, we start to identify key entities and relationships, which we can record in a UML class diagram, as in Figure 4.31.
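To make the scenario concrete, here is a minimal sketch of the reservation logic in the trace. The class and method names, prices, and break data are our own invention, loosely following Figures 4.30 and 4.31, not the Robertsons' specification.

    from dataclasses import dataclass, field

    @dataclass
    class CommercialBreak:
        start: str
        unsold_seconds: int
        price_per_second: float

        def reserve(self, seconds):
            """Reserve time in this break; return its price (0.0 if unavailable)."""
            if seconds > self.unsold_seconds:
                return 0.0
            self.unsold_seconds -= seconds
            return seconds * self.price_per_second

    @dataclass
    class Campaign:
        duration: int                      # seconds per commercial spot
        spots: list = field(default_factory=list)
        total_price: float = 0.0

    def request_campaign(breaks, duration):
        campaign = Campaign(duration)
        for brk in breaks:                 # search each relevant break (Figure 4.30)
            price = brk.reserve(duration)
            if price:                      # time available: create a commercial spot
                campaign.spots.append((brk.start, duration))
                campaign.total_price += price
        return campaign

    breaks = [CommercialBreak("Mon 19:58", 90, 55.0), CommercialBreak("Tue 20:58", 20, 60.0)]
    print(request_campaign(breaks, 30).total_price)   # 30s fits only the first break: 1650.0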

The complete specification for Piccadilly is quite long and involved, and the Robertsons' book provides many of the details. However, the examples here make it clear that different notations are suitable for representing different aspects of a problem's requirements; it is important to choose a combination of techniques that paints a complete picture of the problem, to be used in designing, implementing, and testing the system.

4.13 REAL-TIME EXAMPLE

Recall that the Ariane-5 explosion was caused by the reuse of a section of code from Ariane-4. Nuseibeh (1997) analyzes the problem from the point of view of requirements reuse. That is, many software engineers feel that great benefits can be had from reusing requirements specifications (and their related design, code, and test cases) from previously developed systems. Candidate specifications are identified by looking for functionality or behavioral requirements that are the same or similar, and then making modifications where necessary. In the case of Ariane-4, the inertial reference system (SRI) performed many of the functions needed by Ariane-5.


However, Nuseibeh notes that although the needed functionality was similar to that in Ariane-4, there were aspects of Ariane-5 that were significantly different. In particular, the SRI functionality that continued after liftoff in Ariane-4 was not needed after liftoff in Ariane-5. Had requirements validation been done properly, the analysts would have discovered that the functions active after liftoff could not be traced back to any Ariane-5 requirement in the requirements definition or specification. Thus, requirements validation could have played a crucial role in preventing the rocket's explosion.

Another preventive measure might have been to simulate the requirements. Simulation would have shown that the SRI continued to function after liftoff; then, Ariane-5's design could have been changed to reuse a modified version of the SRI code. Consider again the list of criteria proposed by Ardis and colleagues for selecting a specification language. This list includes two items that are especially important for specifying a system such as Ariane-5: testability/simulation and runtime safety. In Ardis's study, the team examined seven specification languages (Modechart, VFSM, Esterel, Lotos, Z, SDL, and C) for suitability against each of the criteria; only SDL was rated "strong" for testability/simulation and runtime safety. An SDL model consists of several concurrent communicating processes, like the coin-slot process in Figure 4.32.

To validate an SDL model, the system requirements can be written as temporal-logic invariants:

CLAIM;
  Barrier = locked IMPLIES (Barrier = locked)
    UNLESS (sum >= entryfee);
ENDCLAIM;

DCL val,                       /* value of coin */
    sum Integer;               /* sum of coins inserted */
DCL entryfee Integer := 100;   /* fee to enter zoo */

FIGURE 4.32 SDL process for the coin slot of the turnstile problem (declarations shown above).


SDL is a mature formal method that includes object-oriented concepts and powerful modeling features: for example, processes can be spawned dynamically, can be assigned identifiers, and can store persistent data; events can carry data parameters and can be directed to specific processes by referring to process identifiers; and timers can model real-time delays and deadlines. Commercial tools are available to support design, debugging, and maintenance of SDL specifications. Thus, one possible prevention technique might have been the use of a specification method like SDL, with accompanying tool support.

We will see in later chapters that preventive steps could also have been taken during design, implementation, or testing; however, measures taken during requirements analysis would have led to a greater understanding of the differences between Ariane-4 and Ariane-5, and to detecting the root cause of the error.

4.14 WHAT THIS CHAPTER MEANS FOR YOU

In this chapter, we have shown the best practices for developing quality software requirements. We have seen that requirements activities should not be performed by the software developer in isolation: definition and specification efforts require working closely with users, customers, testers, designers, and other team members. Still, there are several skills that are important for you to master on your own:

• It is essential that the requirements definition and specification documents describe the problem, leaving solution selection to the designers. The best way of ensuring that you do not stray into the solution space is to describe requirements and specifications in terms of environmental phenomena.

• There are a variety of sources and means for eliciting requirements. There are both functional and quality requirements to keep in mind. The functional requirements explain what the system will do, and the quality requirements constrain solutions in terms of safety, reliability, budget, schedule, and so on.

• There are many different types of definition and specification techniques. Some are descriptive, such as entity-relationship diagrams and logic, while others are behavioral, such as event traces, data-flow diagrams, and functions. Some have graphical notations, and some are based on mathematics. Each emphasizes a different view of the problem, and suggests different criteria for decomposing a problem into subproblems. It is often desirable to use a combination of techniques to specify the different aspects of a system.

• The specification techniques also differ in terms of their tool support, maturity, understandability, ease of use, and mathematical formality. Each one should be judged for the project at hand, as there is no best universal technique.

• Requirements questions can be answered using models or prototypes. In either case, the goal is to focus on the subproblem that is at the heart of the question, rather than necessarily modeling or prototyping the entire problem. If prototyping, you need to decide ahead of time whether the resulting software will be kept or thrown away.

• Requirements must be validated to ensure that they accurately reflect the customer's expectations. The requirements should also be checked for completeness, correctness, consistency, feasibility, and more, sometimes using techniques or tools that are associated with the specification methods you have chosen. Finally, you should verify that the specification fulfills the requirements.

4.15 WHAT THIS CHAPTER MEANS FOR YOUR DEVELOPMENT TEAM

Your development team must work together to elicit, understand, and document requirements. Often, different team members concentrate on separate aspects of the requirements: the networking expert may work on network requirements, the user-interface expert on screens and reports, the database expert on data capture and storage, and so on. Because the disparate requirements will be integrated into a comprehensive whole, requirements must be written in a way that allows them to be linked and controlled. For example, a change to one requirement may affect other, related requirements, and the methods and tools must support the changes to ensure that errors are caught early and quickly.

At the same time, the requirements part of your team must work closely with

• Customers and users, so that your team builds a product that serves their needs

• Designers, so that they construct a design that fulfills the requirements specification

• Testers, so that their test scripts adequately evaluate whether the implementation meets the requirements

• Documentation writers, so that they can write user manuals from the specifications

Your team must also pay attention to measurements that reflect requirements quality. The measures can suggest team activities, such as prototyping some requirements when indicators show that the requirements are not well-understood.

Finally, you must work as a team to review the requirements definition and specification documents, and to update those documents as the requirements change and grow during the development and maintenance processes.

4.16 WHAT THIS CHAPTER MEANS FOR RESEARCHERS

There are many research areas associated with requirements activities. Researchers can

• Investigate ways to reduce the amount of uncertainty and risk in requirements

• Develop specification techniques and tools that permit easier ways to prove assumptions and assertions, and to demonstrate consistency, completeness, and determinism

• Develop tools to allow traceability across the various intermediate and final products of software development. In particular, the tools can assess the impact of a proposed change on products, processes, and resources.

• Evaluate the many different ways to review requirements: tools, checklists, inspections, walkthroughs, and more. It is important to know which techniques are best for what situations.

• Create new techniques for simulating requirements behavior.

• Help us to understand what types of requirements are best for reuse in subsequent projects, and how to write requirements in a way that enhances their later reuse.


4.17 TERM PROJECT

Your clients at FCO have prepared the following set of English-language requirements for the Loan Arranger system. Like most sets of requirements, this set must be scrutinized in several ways to determine if it is correct, complete, and consistent. Using the requirements here and in supplementary material about the Loan Arranger in earlier chapters, evaluate and improve this set of requirements. Use many of the techniques presented in this chapter, including requirements measurement and Ardis's list. If necessary, express the requirements in a requirements language or modeling technique, to make sure that the static and dynamic properties of the system are expressed well.

Preconditions and Assumptions

• The Loan Arranger system assumes that there already exist lenders, borrowers, and loans from which to choose, and that investors exist who are interested in buying bundles of loans.

• The Loan Arranger system contains a repository of information about loans from a variety of lenders. This repository may be empty.

• At regular intervals, each lender provides reports listing the loans that it has made. Loans that have already been purchased by FCO will be indicated on these reports.

• Each loan in the Loan Arranger repository represents an investment to then be bundled and sold with other loans.

• The Loan Arranger system may be used by up to four loan analysts simultaneously.

High-Level Description of Functionality

1. The Loan Arranger system will receive monthly reports from each lender of new loans issued by that lender. The loans in the report recently purchased by FCO for its investment portfolio will be marked in the report. The Loan Arranger system will use the report information to update its repository of available loans.

2. The Loan Arranger system will receive monthly reports from each lender providing updates about the status of loans issued by that lender. The updated information will include the current interest rate for an adjustable rate mortgage, and the status of the borrower with respect to the loan (good, late, or default). For loans in the FCO portfolio, the Loan Arranger will update the data in the repository. Loans not in the FCO portfolio will also be examined in order to determine if a borrower's standing should be updated. FCO will provide each lender with the format for the reports, so that all reports will share a common format.

3. The loan analyst can change individual data records as described in Data Operations.

4. All new data must be validated before they are added to the repository (according to the rules described in Data Constraints).

5. The loan analyst can use the Loan Arranger to identify bundles of loans to sell to particular investors.


Functional Requirements

1. The loan analyst should be able to review all of the information in the repository for a particular lending institution, a particular loan, or a particular borrower.

2. The loan analyst can create, view, edit, or delete a loan from a portfolio or bundle.
3. A loan is added to the portfolio automatically, when the Loan Arranger reads the reports provided by the lenders. A report can be read by the Loan Arranger only after the associated lender has been specified.

4. The loan analyst can create a new lender.
5. The loan analyst can delete a lender only if there are no loans in the portfolio associated with this lender.
6. The loan analyst can change lender contact and phone number but not lender name and identification number.
7. The loan analyst cannot change borrower information.
8. The loan analyst can ask the system to sort, search, or organize loan information by certain criteria: amount, interest rate, settlement date, borrower, lender, type of loan, or whether it has been marked for inclusion in a certain bundle. The organizational criteria should include ranges, so that information will be included only if it is within two specified bounds (such as between January 1, 2005 and January 1, 2008). The organizational criteria can also be based on exclusion, such as all loans not marked, or all loans not between January 1, 2005 and January 1, 2008. (A sketch of these range and exclusion criteria appears after this list.)

9. The loan analyst should be able to request reports in each of three formats: in a file, on the screen, and as a printed report.

10. The loan analyst should be able to request the following information in a report: any attribute of loan, lender, or borrower, and summary statistics of the attributes (mean, standard deviation, scatter diagram, and histogram). The information in a report can be restricted to a subset of the total information, as described by the loan analyst's organizing criteria.

11. The loan analyst must be able to use the Loan Arranger to create bundles that meet the prescribed characteristics of an investment request. The loan analyst can identify these bundles in several ways:

• By manually identifying a subset of loans that must be included in the bundle, either by naming particular loans or by describing them using attributes or ranges

• By providing the Loan Arranger with the investment criteria, and allowing the Loan Arranger to run a loan bundle optimization request to select the best set of loans to meet those criteria

• By using a combination of the above, where a subset of loans is first chosen (manually or automatically), and the chosen subset is then optimized according to the investment criteria

12. Creating a bundle consists of two steps. First, the loan analyst works with the Loan Arranger to create a bundle according to the criteria as described above. Then the candidate bundle can be accepted, rejected, or modified. Modifying a bundle means that the analyst may accept some but not all of the loans suggested by the Loan Arranger for a bundle, and can add specific loans to the bundle before accepting it.

13. The loan analyst must be able to mark loans for possible inclusion in a loan bundle. Once a loan is so marked, it is not available for inclusion in any other bundle. If the loan analyst marks a loan and decides not to include it in the bundle, the marking must be removed and the loan made available for other bundling decisions.

14. When a candidate bundle is accepted, its loans are removed from consideration for use in other bundles.

15. All current transactions must be resolved before a loan analyst can exit the Loan Arranger system.

16. A loan analyst can access a repository of investment requests. This repository may be empty. For each investment request, the analyst uses the request constraints (on risk, profit, and term) to define the parameters of a bundle. Then, the Loan Arranger system identifies loans to be bundled to meet the request constraints.
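To make the range and exclusion criteria of requirement 8 concrete, here is a minimal sketch in Java; the Loan record, its fields, and the helper names are hypothetical, invented for illustration rather than taken from the FCO specification.

    import java.time.LocalDate;
    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    // Hypothetical loan record for illustrating requirement 8's organizing criteria.
    record Loan(double amount, double interestRate, LocalDate settlementDate, boolean marked) {}

    class LoanQueries {
        // Inclusion by range: keep loans whose settlement date lies within two bounds.
        static Predicate<Loan> settledBetween(LocalDate lo, LocalDate hi) {
            return loan -> !loan.settlementDate().isBefore(lo) && !loan.settlementDate().isAfter(hi);
        }

        // Exclusion: keep the loans that do NOT satisfy a criterion
        // (e.g., all loans not marked, or all loans not in a date range).
        static List<Loan> select(List<Loan> repository, Predicate<Loan> criterion, boolean exclude) {
            Predicate<Loan> effective = exclude ? criterion.negate() : criterion;
            return repository.stream().filter(effective).collect(Collectors.toList());
        }
    }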

Data Constraints

1. A single borrower may have more than one loan.

2. Every lender must have a unique identifier.
3. Every borrower must have a unique identifier.

4. Each loan must have at least one borrower.
5. Each loan must have a loan amount of at least $1000 but not more than $500,000.
6. There are two types of loans based on the amount of the loan: regular and jumbo. A regular loan is for any amount less than or equal to $275,000. A jumbo loan is for any amount over $275,000.

7. A borrower is considered to be in good standing if all loans to that borrower are in good standing. A borrower is considered to be in default standing if any of the loans to that borrower have default standing. A borrower is said to be in late standing if any of the loans to that borrower have late standing.

8. A loan or borrower can change from good to late, from good to default, from late to good, or from late to default. Once a loan or borrower is in default standing, it cannot be changed to another standing.

9. A loan can change from ARM to FM, and from FM to ARM.
10. The profit requested by an investor is a number from 0 to 500, where 0 represents no profit on a bundle. A nonzero profit represents the rate of return on the bundle; if the profit is x, then the investor expects to receive the original investment plus x percent of the original investment when the loans are paid off. Thus, if a bundle costs $1000, and the investor expects a rate of return of 40, then the investor hopes to have $1400 when all the loans in the bundle are paid off. (The payoff arithmetic is sketched after this list.)

11. No loan can appear in more than one bundle.
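As a hedged illustration of data constraints 6 and 10, the sketch below shows the classification boundary and the payoff arithmetic they imply; the class and method names are ours, not part of the specification.

    class BundleMath {
        // Constraint 6: a regular loan is at most $275,000; anything larger is jumbo.
        static String loanType(double amount) {
            return amount <= 275_000 ? "regular" : "jumbo";
        }

        // Constraint 10: a profit of x means the investor expects the original
        // investment plus x percent of it when the loans are paid off.
        static double expectedPayoff(double bundleCost, double profit) {
            return bundleCost * (1 + profit / 100.0);
        }
    }

For example, expectedPayoff(1000, 40) returns 1400, matching the worked example in constraint 10.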


Design and Interface Constraints

1. The Loan Arranger system should work on a Unix system.
2. The loan analyst should be able to look at information about more than one loan, lending institution, or borrower at a time.
3. The loan analyst must be able to move forward and backwards through the information presented on a screen. When the information is too voluminous to fit on a single screen, the user must be informed that more information can be viewed.

4. When the system displays the results of a search, the current organizing criteria must always be displayed along with the information.

5. A single record or line of output must never be broken in the middle of a field.
6. The user must be advised when a search request is inappropriate or illegal.
7. When an error is encountered, the system should return the user to the previous screen.

Quality Requirements

1. Up to four loan analysts can use the system at a given time.
2. If updates are made to any displayed information, the information is refreshed within five seconds of adding, updating, or deleting information.
3. The system must respond to a loan analyst's request for information in less than five seconds from submission of the request.
4. The system must be available for use by a loan analyst during 97% of the business day.

4.18 KEY REFERENCES

Michael Jackson's book Software Requirements and Specifications (1995) provides general advice on how to overcome common problems in understanding and formulating requirements. His ideas can be applied to any requirements technique. Donald Gause and Gerald Weinberg's book Exploring Requirements (1989) focuses on the human side of the requirements process: problems and techniques for working with customers and users and for devising new products.

A comprehensive requirements-definition template developed by James and Suzanne Robertson can be found at the Web site of the Atlantic Systems Guild: http://www.systemsguild.com. This template is accompanied by a description of the Volere process model, which is a complete process for eliciting and checking a set of requirements. Use of the template is described in their book Mastering the Requirements Process (1999).

Peter Coad and Edward Yourdon's book Object-Oriented Analysis (1991) is a classic text on object-oriented requirements analysis. The most thorough references on the Unified Modeling Language (UML) are the books by James Rumbaugh, Ivar Jacobson, and Grady Booch, especially the Unified Modeling Language Reference Manual, and the documents released by the Object Management Group; the latter can be downloaded from the organization's Web site: http://www.omg.org. Martin Fowler's book Analysis Patterns (1996) provides guidance on how to use UML to model common business problems.

Beginning in 1993, the IEEE Computer Society sponsored two conferences that were directly related to requirements and were held in alternate years: the International Conference on Requirements Engineering and the International Symposium on Requirements Engineering. These conferences merged in 2002 to form the International Requirements Engineering Conference, which is held every year. Information about upcoming conferences and about proceedings from past conferences can be found at the Computer Society's Web page: http://www.computer.org.

The Requirements Engineering Journal focuses exclusively on new results in eliciting, representing, and validating requirements, mostly with respect to software systems. IEEE Software had special issues on requirements engineering in March 1994, March 1996, March/April 1998, May/June 2000, January/February 2003, and March/April 2004. Other IEEE publications often have special issues on particular types of requirements analysis and specification methods. For example, the September 1990 issues of IEEE Computer, IEEE Software, and IEEE Transactions on Software Engineering focused on formal methods, as did the May 1997 and January 1998 issues of IEEE Transactions on Software Engineering and the April 1996 issue of IEEE Computer.

There are several standards related to software requirements. The U.S. Department of Defense has produced MilStd-498, Data Item Description for Software Requirements Specifications (SRS). The IEEE has produced IEEE Std 830-1998, which is a set of recommended practices and standards for formulating and structuring requirements specifications.

There are several tools that support requirements capture and traceability. DOORS/ERS (Telelogic), Analyst Pro (Goda Software), and RequisitePro (IBM Rational) are popular tools for managing requirements, tracing requirements in downstream artifacts, tracking changes, and assessing the impact of changes. Most modeling notations have tool support that at the least supports the creation and editing of models, usually supports some form of well-formedness checking and report generation, and at the best offers automated validation and verification. An independent survey of requirements tools is located at www.systemsguild.com.

There is an IFIP Working Group 2.9 on Software Requirements Engineering. Some of the presentations from their annual meetings are available from their Web site: http://www.cis.gsu.edu/~wrobinso/ifip2_9

4.19 EXERCISES

1. Developers work together with customers and users to define requirements and specify what the proposed system will do. If, once it is built, the system works according to specification but harms someone physically or financially, who is responsible?

2. Among the many nonfunctional requirements that can be included in a specification are those related to safety and reliability. How can we ensure that these requirements are testable, in the sense defined by the Robertsons? In particular, how can we demonstrate the reliability of a system that is required never to fail?


3. In an early meeting with your customer, the customer lists the following "requirements" for a system he wants you to build:

(a) The client daemon must be invisible to the user
(b) The system should provide automatic verification of corrupted links or outdated data
(c) An internal naming convention should ensure that records are unique
(d) Communication between the database and servers should be encrypted
(e) Relationships may exist between title groups (a type of record in the database)
(f) Files should be organizable into groups of file dependencies
(g) The system must interface with an Oracle database
(h) The system must handle 50,000 users concurrently

Classify each of the above as a functional requirement, a quality requirement, a design constraint, or a process constraint. Which of the above might be premature design decisions? Re-express each of these decisions as a requirement that the design decision was meant to achieve.

4. Write a decision table that specifies the rules for the game of checkers.
5. If a decision table has two identical columns, then the requirements specification is redundant. How can we tell if the specification is contradictory? What other characteristics of a decision table warn us of problems with the requirements?

6. Write a Parnas table that describes the output of the algorithm for finding the roots of a quadratic equation using the quadratic formula.

7. Write a state-machine specification to illustrate the requirements of an automatic banking machine (ABM).

8. A state-machine specification is complete if and only if there is a transition specified for every possible combination of state and input symbol. We can change an incomplete specification to a complete one by adding an extra state, called a trap state. Once a transition is made to the trap state, the system remains in the trap state, no matter the input. For example, if 0, 1, and 2 are the only possible inputs, the system depicted by Figure 4.33 can be completed by adding a trap state as shown in Figure 4.34. In the same manner, complete your state-machine specification from Exercise 7.
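To illustrate the trap-state construction with code, here is a small sketch; the states and transitions are invented (they are not the machine of Figure 4.33), and the point is only that every unspecified state/input pair is redirected to a trap state that absorbs all further input.

    enum State { S0, S1, TRAP }

    class TrapStateMachine {
        private State current = State.S0;

        // Inputs are 0, 1, or 2; any combination with no specified transition
        // falls through to the trap state.
        void step(int input) {
            switch (current) {
                case S0 -> current = (input == 0) ? State.S1 : State.TRAP;
                case S1 -> current = (input == 1) ? State.S0 : State.TRAP;
                case TRAP -> current = State.TRAP; // once trapped, always trapped
            }
        }

        State state() { return current; }
    }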

FIGURE 4.33 Original system for Exercise 7.

FIGURE 4.34 Complete system with trap state for Exercise 7.

9. A safety property is an invariant property that specifies that a particular bad behavior never happens; for example, a safety property of the turnstile problem is that the number of entries into the zoo is never more than the number of entry fees paid. A liveness property is a property that specifies that a particular behavior eventually happens; for example, a liveness property for the turnstile problem is that when an entry fee is paid, the turnstile becomes unlocked. Similarly, a liveness property for the library system is that every borrow request from a Patron who has no outstanding library fines succeeds. These three properties, when expressed in logic, look like the following:

□(num_coins ≥ num_entries)
□(insert_coin ⇒ ◇(barrier = unlocked))
□((borrow(Patron, Pub) ∧ Patron.fines = 0) ⇒ ◇∃Loan.[Loan.borrower = Patron ∧ Loan.Publication = Pub])

List safety and liveness properties for your automated banking machine specification from Exercise 7. Express these properties in temporal logic.

10. Prove that your safety and liveness properties from Exercise 9 hold for your state-machine model of your automated banking machine specification from Exercise 7. What assumptions do you have to make about the ABM's environment (e.g., that the machine has sufficient cash) for your proofs to succeed?

11. Sometimes part of a system may be built quickly to demonstrate feasibility or functionality to a customer. This prototype system is usually incomplete; the real system is constructed after the customer and developer evaluate the prototype. Should the system requirements document be written before or after a prototype is developed? Why?

12. Write a set of UML models (use-case diagram, MSC diagrams, class diagram) for an on-line telephone directory to replace the phone book that is provided to you by your phone company. The directory should be able to provide phone numbers when presented with a name; it should also list area codes for different parts of the country and generate emergency telephone numbers for your area.

13. Draw data-flow diagrams to illustrate the functions and data flow for the on-line telephone directory system specified in the previous problem.

14. What are the benefits of separating functional flow from data flow?

15. What special kinds of problems are presented when specifying the requirements of real-time systems?

16. Contrast the benefits of an object-oriented requirements specification with those of a functional decomposition.


17. Write a Z specification for a presentation scheduling system. The system keeps a record of which presenters are to give presentations on which dates. No presenter should be scheduled to give more than one presentation. No more than four presentations should be scheduled for any particular date. There should be operations to Add and Remove presentations from the schedule, to Swap the dates of two presentations, to List the presentations scheduled for a particular date, to List the date on which a particular presenter is scheduled to speak, and to send a Reminder message to each presenter on the date of his or her presentation. You may define any additional operations that help simplify the specification.

18. Complete the partial SDL data specification for the library problem in Figure 4.20. In particular, write axioms for nongenerator operations unreserve, isOnLoan, and isOnReserve. Modify your axioms for operation unreserve so that this operation assumes that multiple requests to put an item on reserve might occur between two requests to unreserve that item.

19. What kinds of problems should you look for when doing a requirements review? Make a checklist of these problems. Can the checklist be universally applicable or is it better to use a checklist that is specific to the application domain?

20. Is it ever possible to have the requirements definition document be the same as the requirements specification? What are the pros and cons of having two documents?

21. Pfleeger and Hatton (1997) examined the quality of a system that had been specified using formal methods. They found that the system was unusually well-structured and easy to test. They speculated that the high quality was due to the thoroughness of the specification, not necessarily its formality. How could you design a study to determine whether it is formality or thoroughness that leads to high quality?

22. Sometimes a customer requests a requirement that you know is impossible to implement. Should you agree to put the requirement in the definition and specification documents anyway, thinking that you might come up with a novel way of meeting it, or thinking that you will ask that the requirement be dropped later? Discuss the ethical implications of promising what you know you cannot deliver.

23. Find a set of natural-language requirements at your job or at this book's Web site. Review the requirements to determine if there are any problems. For example, are they consistent? Ambiguous? Conflicting? Do they contain any design or implementation decisions? Which representation techniques might help reveal and eliminate these problems? If the problems remain in the requirements, what is their likely impact as the system is designed and implemented?


5 Designing the Architecture

In this chapter, we look at
• views of software architecture
• common architectural patterns
• criteria for evaluating and comparing design alternatives
• software architecture documentation

In the last chapter, we learned how to work with our customers to determine what they want the proposed system to do. The result of the requirements process was two documents: a requirements document that captures the customers' needs and a requirements specification that describes how the proposed system should behave. The next step in development is to start designing how the system will be constructed. If we are building a relatively small system, we may be able to progress directly from the specification to the design of data structures and algorithms. However, if we are building a larger system, then we will want to decompose the system into units of manageable size, such as subsystems or modules, before we contemplate details about the data or code.

The software architecture is this decomposition. In this chapter, we examine different types of decomposition. Just as buildings are sometimes constructed of prefabricated sections based on commonly needed architectural constructs, some prefabricated software architectural styles can be used as guidelines for decomposing a new system. Often, there will be multiple ways to design the architecture, so we explore how to compare competing designs and choose the one that best suits our needs. We learn how to document our decisions in a software architecture document (SAD) as the architecture starts to stabilize, and how to verify that this architecture will meet the customer's requirements. The steps we lay out result in a software architecture that guides the rest of the system's development.

5.1 THE DESIGN PROCESS

At this point in the development process, we have a good understanding of our customer's problem, and we have a requirements specification that describes what an acceptable software solution would look like. If the specification was done well, it has focused on function, not form; that is, it gives few hints about how to build the proposed system. Design is the creative process of figuring out how to implement all of the customer's requirements; the resulting plan is also called the design.

Early design decisions address the system's architecture, explaining how to decompose the system into units, how the units relate to one another, and describing any externally visible properties of the units (Bass, Clements, and Kazman 2003). Later design decisions address how to implement the individual units. To see how architecture relates to both design and requirements, consider again the example in which Chuck and Betsy Howell want to build a new house. Their requirements include

• rooms for them and their three children to sleep
• a place for the children to play
• a kitchen and a large dining room that will hold an extendable table
• storage for bicycles, lawn mower, ladder, barbecue, patio furniture, and more
• a place for a piano
• heating and air conditioning

and so on. From the requirements, an architect produces preliminary designs for the Howells to consider. The architect may start by showing the Howells some generic plans based on different styles of houses, such as two-story colonials and bungalows, to get a better feel for what style the Howells would prefer. Within a particular architectural style, the architect may sketch out various design alternatives. For instance, in one design, the kitchen, dining room, and children's play space may share one large open area, whereas another design may locate the play space in a less public part of the house. One design may emphasize large bedrooms, and another may reduce bedroom size to make room for an additional bathroom. How the Howells choose from among the design alternatives will depend on their preferences for a design's distinguishing characteristics, such as the utility and character of the rooms' layouts, or the estimated cost of construction.

FIGURE 5.1 Architectural plans.

The resulting design is the house's architecture, as suggested by Figure 5.1. Most obviously, the architecture describes the skeletal structure of the house, the locations of walls and support beams, and the configuration of each room. It also includes plans for sufficient heating and cooling, and the layout of air ducts; maps of water pipes and their connections to the city's water mains and sewer lines; and maps of electrical circuits, locations of outlets, and the amperage of circuit breakers. Architectural decisions tend to be structural, systematic, and systemic, making sure that all essential elements of the requirements are addressed in ways that harmonize customer needs with the realities of materials, cost, and availability. They are the earliest design decisions to be made and, once implemented, are the hardest to change. In contrast, later design decisions, such as those regarding flooring, cabinetry, wall paint, or paneling, are relatively localized and easy to modify.

Difficult to change does not mean impossible. It is not unusual or unreasonable for the architecture or specifications to change as the house is being built. Modifications may not be proposed on a whim, but instead on a change in perception or need, or in reaction to new information. In the Howells' house, engineers may suggest changes to reduce costs, such as moving bathrooms or the location of the kitchen sink, so that they can share water pipes and drains. As the Howells think about how rooms will be used, they may ask that a heating duct be rerouted, so that it does not run along the wall where they plan to put their piano. If there are construction-cost overruns, they may scale back their plans to stay within their budget. It makes sense for the Howells to raise these issues and to change their specifications now, rather than be stuck with a house that displeases them, does not suit their needs, or costs more than they can afford. Indeed, a customer, in concert with the developers, will often modify requirements well after the initial requirements analysis is complete.

In many ways, designing software resembles the process of designing a new house. We are obligated to devise a solution that meets the customer's needs, as documented in the requirements specification. However, as with the Howells' house, there may not be a single "best" or "correct" architecture, and the number of possible solutions may be limitless. By gleaning ideas from past solutions and by seeking regular feedback from the customer, designers create a good architecture, one that is able to accommodate and adapt to change, that results in a product that will make the customer happy, and that is a useful reference and source of guidance throughout the product's lifetime.

Design Is a Creative Process

Designing software is an intellectually challenging task. It can be taxing to keep track of all the possible cases that the software system might encounter, including the exceptional cases (such as missing or incorrect information) that the system must accommodate. And this effort takes into account only the system's expected functionality. In addition, the system has nonfunctional design goals to fulfill, such as being easy to maintain and extend, being easy to use, or being easy to port to other platforms. These nonfunctional requirements not only constrain the set of acceptable solutions, but also may actually conflict with each other. For example, techniques for making a software system reliable or reusable are costly, and thus hinder goals to keep development costs within a specified budget. Furthermore, external factors can complicate the design task. For example, the software may have to adhere to preexisting hardware interface specifications, work with legacy software, or conform to standard data formats or government regulations.


There are no instructions or formulae that we can follow to guarantee a successful design. Creativity, intelligence, experience, and expert judgment are needed to devise a design that adequately satisfies all of the system's requirements. Design methods and techniques can guide us in making design decisions, but they are no substitute for creativity and judgment.

We can improve our design skills by studying examples of good design. Most design work is routine design (Shaw 1990), in which we solve a problem by reusing and adapting solutions from similar problems. Consider a chef who is asked to prepare dinner for a finicky patron who has particular tastes and dietary constraints. There may be few recipes that exactly fit the patron's tastes, and the chef is not likely to concoct brand new dishes for the occasion. Instead, the chef may seek inspiration from favorite recipes, will substitute ingredients, and will adapt cooking methods and times as appropriate. The recipes alone are not enough; there is considerable creativity in the making of this meal: in choosing the starting recipes, in adapting the ingredient list, and in modifying the cooking instructions to accentuate flavors, textures, and colors. What the chef gains by starting from proven recipes is efficiency in quickly settling on a plan for the meal and predictability in knowing that the resulting dishes should be similar in quality to dishes derived previously from the same recipes.

Similarly, experienced software developers rarely design new software from first principles. Instead, they borrow ideas from existing solutions and manipulate them into new, but not entirely original, designs. By doing so, developers can usually arrive at a suitable design more quickly, using the properties of the borrowed solutions to assess the properties of the proposed design. Figure 5.2 shows several sources from which developers can draw when looking for guidance in designing a new system.

FIGURE 5.2 Sources of design advice: experience, design principles, design conventions, similar systems, design patterns, reference models, and architectural styles.

There are many ways to leverage existing solutions. One extreme is cloning, whereby we borrow a whole design, and perhaps even the code, making minor adjustments to fit our particular problem. For example, a developer might clone an existing system to customize it for a new customer, though, as we will see, there are better ways of producing variations of the same product. Slightly less extreme is to base our design on a reference model: a standard generic architecture for a particular application domain. A reference model suggests only how we decompose our system into its major components and how those components interact with each other. The design of the individual components and the details of their interactions will depend on the specific application being developed. As an example, Figure 5.3 shows the reference model for a compiler; the specifics of the parser, semantic analyzer, optimizations, and data repositories will vary greatly with the compiler's programming language. There are existing or proposed reference models for a variety of application domains, including operating systems, interpreters, database management systems, process-control systems, integrated tool environments, communication networks, and Web services.

More typically, our problem does not have a reference model, and we create a design by combining and adapting generic design solutions. In your other courses, you have learned about generic, low-level design solutions such as data structures (e.g., lists, trees) and algorithm paradigms (e.g., divide-and-conquer, dynamic programming) that are useful in addressing entire classes of problems. Software architectures have generic solutions too, called architectural styles. Like reference models, architectural styles give advice about how to decompose a problem into software units and how those units should interact with each other. Unlike reference models, architectural styles are not optimized for specific application domains. Rather, they give generic advice about how to approach generic design problems (e.g., how to encapsulate data shared by all aspects of the system).

FIGURE 5.3 Reference model for a compiler (adapted from Shaw and Garlan 1996): components and data stores connected by data fetch/store and control-flow links, including a symbol table and an attributed parse tree.

Sometimes, improving one aspect of a software system has an adverse effect on another aspect. For this reason, creating a software system design can raise several orthogonal issues. Good software architectural design is about selecting, adapting, and integrating several architectural styles in ways that best produce the desired result. There are many tools at our disposal for understanding our options and evaluating the chosen architecture. Design patterns are generic solutions for making lower-level design decisions about individual software modules or small collections of modules; they will be discussed in Chapter 6. A design convention or idiom is a collection of design decisions and advice that, taken together, promotes certain design qualities. For example, abstract data types (ADTs) are a design convention that encapsulates data representation and supports reusability. As a design convention matures and becomes more widely used, it is packaged into a design pattern or architectural style, for easier reference and use; ultimately, it may be encoded as a programming-language construct. Objects, modules, exceptions, and templates are examples of design and programming conventions that are now supported by programming languages.
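As a minimal sketch of the ADT convention (our own example, not one from the text), the stack below hides its representation behind access operations, so the representation could later be changed, say to a linked list, without affecting clients.

    import java.util.ArrayList;
    import java.util.List;

    // An abstract data type: the representation (an ArrayList) is encapsulated,
    // so clients depend only on push, pop, and isEmpty.
    class IntStack {
        private final List<Integer> items = new ArrayList<>();

        public void push(int value) { items.add(value); }

        public int pop() {
            if (items.isEmpty()) throw new IllegalStateException("empty stack");
            return items.remove(items.size() - 1);
        }

        public boolean isEmpty() { return items.isEmpty(); }
    }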

Sometimes, existing solutions cannot solve a problem satisfactorily; we need a novel solution that requires innovative design (Shaw 1990). In contrast to routine design, the innovative design process is characterized by irregular bursts of progress that occur as we have flashes of insight. The only guidance we can use in innovative design is a set of basic design principles that are descriptive characteristics of good design, rather than prescriptive advice about how to design. As such, they are more useful when we are evaluating and comparing design alternatives than when we are designing. Nevertheless, we can use the principles during design to assess how well a particular design decision adheres to them. In the end, innovative design usually takes longer than routine design, because often there are stagnant periods between insights. Innovative designs must be more vigorously evaluated than routine designs, because they have no track record. Moreover, because such design evaluation is based more on expert judgment than on objective criteria, an innovative design should be examined by several senior designers before it is formally approved. In general, an innovative design should be superior to competing routine designs to justify the extra cost of its development and evaluation.

So should we always stick with tried and true approaches, rather than explore new ways of designing systems? As with other skilled disciplines, such as music or sports, it is only through continuous learning and practice that we improve our design skills. What distinguishes an experienced chef from a novice is her larger repertoire of recipes, her proficiency with a wide range of cooking techniques, her deep understanding of ingredients and how they change when cooked, and her ability to refashion a recipe or a meal to emphasize or enhance the desired experience. Similarly, as we gain more experience with software design, we understand better how to select from among generic design solutions, how to apply design principles when deviating from generic solutions, and how to combine partial solutions into a coherent design whose characteristics improve on past solutions to similar problems.

Design Process Model

Designing a software system is an iterative process, in which designers move back and forth among activities involving understanding the requirements, proposing possible solutions, testing aspects of a solution for feasibility, presenting possibilities to the customers, and documenting the design for the programmers. Figure 5.4 illustrates the process of converging on a software architecture for a proposed system. We start the process by analyzing the system's requirements specification and identifying any critical properties or constraints that the eventual design must exhibit or reflect. These properties can help us identify which architectural styles might be useful in our design. Different properties will suggest different architectural styles, so we will likely develop several architectural plans in parallel, each depicting a single facet or view of the architecture. The multiple views in software design are analogous to the blueprints that the Howells' architect produced for their house.

FIGURE 5.4 Process for developing a software architecture: modeling (experimenting with possible decompositions), analysis (assessing the preliminary architecture), documentation (recording architectural decisions), and review (checking that our architecture satisfies the requirements), yielding the software architecture document (SAD).

During the first part of the design phase, we iterate among three activities: drawing architectural plans, analyzing how well the proposed architecture promotes desired properties, and using the analysis results to improve and optimize the architectural plans. The types of analysis we perform at this stage focus on the system's quality attributes, such as performance, security, and reliability. Thus, our architectural models must include sufficient detail to support whatever analyses we are most interested in performing. However, we do not want our models to be too detailed. At the architectural stage, we focus on system-level decisions, such as communication, coordination, synchronization, and sharing; we defer more detailed design decisions, such as those that affect individual modules, to the detailed design phase. As the architecture starts to stabilize, we document our models. Each of our models is an architectural view, and the views are interconnected, so that a change to one view may have an impact on other views. Thus, we keep track of how the views are related and how they work together to form a coherent integrated design. Finally, once the architecture is documented, we conduct a formal design review, in which the project team checks that the architecture meets all of the system's requirements and is of high quality. If problems are identified during the design review, we may have to revise our design yet again to address these concerns.

The final outcome of the software architecture process is the SAD, used to communicate system-level design decisions to the rest of the development team. Because the SAD provides a high-level overview of the system's design, the document is also useful for quickly bringing new development team members up to speed, and for educating the maintenance team about how the system works. Project managers may use the SAD as the basis for organizing development teams and tracking the teams' progress.


Software architecture and architecture documents play a less clear role in agile development methods. There is an inherent conflict between software architecture, which documents the system's load-bearing, hard-to-change design decisions, and the agile goal of avoiding irreversible decisions. This conflict is discussed further in Sidebar 5.1.

SIDEBAR 5.1 AGILE ARCHITECTURES

As we noted in Chapter 4, it can sometimes be helpful to use an agile process when there is a great deal of uncertainty about requirements. In the same way, agility can be helpful when it is not yet clear what the best type of design might be.

Agile architectures are based on the four premises of agile methods as stated in the "agile manifesto" (see http://agilemanifesto.org):

• valuing individuals and interactions over processes and tools

• valuing working software over comprehensive documentation

• valuing customer collaboration over contract negotiation

• valuing response to change over following plans

Agile methods can be used to generate an initial design that describes essential requirements. As new requirements and design considerations emerge, agile methods can be applied to "refactor" the design so that it matures with the understanding of the problem and the customer's needs.

But architectural generation is particularly difficult using agile methods, because both complexity and change must be managed carefully. A developer adhering to agile methods is at the same time trying to minimize documentation and to lay out the variety of choices available to customers and coders. So agile architectures are based on models, but only small features are modeled, often one at a time, as different options and approaches are explored. Models are often discarded or rebuilt as the most appropriate solution becomes clear. As Ambler (2003) puts it, an agile model is "just barely good enough": it "meets its goals and no more."

Because agile methods employ iteration and exploration, they encourage programmers to write the code as the models are being produced. Such linkage may be a significant problem for agile architectures. As Ambler points out, although some agile methods advocates have high confidence in architectural tools (see Uhl 2003, for instance), others think the tools are not ready for prime time and may never be (see Ambler 2003).

A bigger problem with agile methods is the need for continuous refactoring. The inherent conflict between an architecture's representing a significant design decision and the need for continuous refactoring means that systems are not refactored as often as they should be. Thomas (2005) calls the refactoring of large, complex systems high-risk "wizard's work," particularly when there is a great deal of legacy code containing intricate dependencies.


5.2 MODELING ARCHITECTURES

In modeling an architecture, we try to represent some property of the architecture while hiding others. In this way, we can learn a great deal about the property without being distracted by other aspects of the system. Most importantly, the collection of models helps us to reason about whether the proposed architecture will meet the specified requirements. Garlan (2000) points out that there are six ways we can use the architectural models:

• to understand the system: what it will do and how it will do it
• to determine how much of the system will reuse elements of previously built systems and how much of the system will be reusable in the future
• to provide a blueprint for constructing the system, including where the "load-bearing" parts of the system may be (i.e., those design decisions that will be difficult to change later)
• to reason about how the system might evolve, including performance, cost, and prototyping concerns
• to analyze dependencies and select the most appropriate design, implementation, and testing techniques
• to support management decisions and understand risks inherent in implementation and maintenance

In Chapter 4, we described many techniques for modeling requirements. However, software architecture modeling is not so mature. Of the many ways to model architectures, your choice depends partly on the model's goal and partly on personal preference. Each has pros and cons, and there is no universal technique that works best in every situation. Some developers use the Unified Modeling Language (UML) class diagrams to depict an architecture, emphasizing subsystems rather than classes. More typically, software architectures are modeled using simple box-and-arrow diagrams, perhaps accompanied by a legend that explains the meaning of different types of boxes and arrows. We use this approach in our examples. As you build and evaluate real systems, you may use another modeling technique. But the principles expressed in boxes and arrows can easily be translated to other models.

5.3 DECOMPOSITION AND VIEWS

In the past, software designers used decomposition as their primary tool, making a large problem more tractable by decomposing it into smaller pieces whose goals were easier to address. We call this approach "top down," because we start with the big picture and decompose it into smaller, lower-level pieces. By contrast, many of today's designers work on the architecture from the bottom up, packaging together small modules and components into a larger whole. Some experts think that a bottom-up approach produces a system that is easier to maintain. We will look more carefully at these maintenance issues in Chapter 11. As architectural and design approaches change over time, and as we gather more evidence to support claims of maintainability and other quality characteristics, we will have a better understanding of the impact of each design approach.


Some design problems have no existing solutions or components with which to start. Here, we use decomposition, a traditional approach that helps the designers understand and isolate the key problems that the system is to solve. Because understanding decomposition can also shed light on the best ways to test, enhance, and maintain an existing system, in this section we explore and contrast several decomposition methods. Design by decomposition starts with a high-level description of the system's key elements. Then we iteratively refine the design by dividing each of the system's elements into its constituent pieces and describing their interfaces. We are done when further refinement results in pieces that have no interfaces. This process is depicted in Figure 5.5.

FIGURE 5.5 Levels of decomposition: a top-level unit is divided in a first level of decomposition, and those pieces are divided again in a second level of decomposition.

Here are brief descriptions of some popular design methods:

• Functional decomposition: This method partitions functions or requirements into modules. The designer begins with the functions that are listed in the requirements specification; these are system-level functions that exchange inputs and outputs with the system's environment. Lower-level designs divide these functions into subfunctions, which are then assigned to smaller modules. The design also describes which modules (subfunctions) call each other.

• Feature-oriented design: This method is a type of functional decomposition that assigns features to modules. The high-level design describes the system in terms of a service and a collection of features. Lower-level designs describe how each feature augments the service and identifies interactions among features.

• Data-oriented decomposition: This method focuses on how data will be partitioned into modules. The high-level design describes conceptual data structures, and lower-level designs provide detail as to how data are distributed among modules and how the distributed data realize the conceptual models.

• Process-oriented decomposition: This method partitions the system into concurrent processes. The high-level design (1) identifies the system's main tasks, which operate mostly independently of each other, (2) assigns tasks to runtime processes, and (3) explains how the tasks coordinate with each other. Lower-level designs describe the processes in more detail.

• Event-oriented decomposition: This method focuses on the events that the system must handle and assigns responsibility for events to different modules. The high-level design catalogues the system's expected input events, and lower-level designs decompose the system into states and describe how events trigger state transformations.

• Object-oriented design: This method assigns objects to modules. The high-level design identifies the system's object types and explains how objects are related to one another. Lower-level designs detail the objects' attributes and operations.

How we choose which design method to use depends on the system we are developing: Which aspects of the system's specification are most prominent (e.g., functions, objects, features)? How is the system's interface described (e.g., input events, data streams)? For many systems, it may be appropriate to view several decompositions, or to use different design methods at different levels of abstraction. Sometimes, the choice of design method is not so important and may be based on the designer's preferences.

To see how decomposition works, suppose we want to follow a data-oriented design method. We start with the conceptual data stores that were identified during requirements analysis. This analysis included externally visible operations on these data, such as creation, queries, modifications, and deletion. We design the system by clustering the conceptual data into data objects and operations on those objects. We further decompose complex data objects into aggregations of simpler objects, with the simplest objects storing one type of data. Each object type provides access operations for querying and manipulating its data. This way, other parts of the system can use the object's stored information without accessing the data directly. The resulting design is a hierarchical decomposition of objects. The design differs from the requirements specification because it includes information about how system data are to be distributed into objects and how objects manipulate data values, and not just information about what data will be manipulated by the system.
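A small sketch may make this concrete; the data object below is hypothetical, but it follows the pattern just described: a conceptual data store is wrapped in an object whose access operations for creation, queries, modifications, and deletions are the only way other parts of the system reach the data.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // Hypothetical data object from a data-oriented decomposition: borrower
    // standings are reachable only through these access operations, never by
    // touching the underlying map directly.
    class BorrowerStore {
        private final Map<String, String> standings = new HashMap<>(); // id -> standing

        public void create(String borrowerId) { standings.put(borrowerId, "good"); }

        public Optional<String> query(String borrowerId) {
            return Optional.ofNullable(standings.get(borrowerId));
        }

        public void modify(String borrowerId, String standing) {
            standings.replace(borrowerId, standing);
        }

        public void delete(String borrowerId) { standings.remove(borrowerId); }
    }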

No matter which design approach we use, the resulting design is likely to refer to several types of software units, such as component, subsystem, runtime process, module, class, package, library, or procedure. The different terms describe different aspects of a design. For example, we use the term module to refer to a structural unit of the software's code; a module could be an atomic unit, like a Java class, or it could be an aggregation of other modules. We use the term component to refer to an identifiable runtime element (e.g., the parser is a component in a compiler), although this term sometimes has a specific meaning, as explained in Sidebar 5.2. Some of the above terms designate software units at different levels of abstraction. For example, a system may be made up of subsystems, which may be made up of packages, which are in turn made up of classes. In other cases, the terms may overlap and present different views of the same entity (e.g., a parser might be both a component and a high-level module). We use the term software unit when we want to talk about a system's composite parts without being precise about what type of part.


SIDEBAR 5.2 COMPONENT-BASED SOFTWARE ENGINEERING

Component-based software engineering (CBSE) is a method of software development whereby systems are created by assembling together preexisting components. In this setting, a component is "a self-contained piece of software with a well-defined set of interfaces"

(Herzum and Sims 2000) that can be developed, bought, and sold as a distinct entity. The goal of CBSE is to support the rapid development of new systems, by reducing development to component integration, and to ease the maintenance of such systems by reducing maintenance to component replacement.

At this point, CBSE is still more of a goal than a reality. There are software components for sale, and part of software design is deciding which aspects of a system we should buy off the shelf and which we should build ourselves. But there is still considerable research being done on figuring out how to

• specify components, so that buyers can determine whether a particular component fits their needs

• certify that a component performs as claimed

• reason about the properties of a system (e.g., reliability) from the properties of its components

We say that a design is modular when each activity of the system is performed by exactly one software unit, and when the inputs and outputs of each software unit are well-defined. A software unit is well-defined if its interface accurately and precisely specifies the unit's externally visible behavior: each specified input is essential to the unit's function, and each specified output is a possible result of the unit's actions. In addition, "well-defined" means that the interface says nothing about any property or design detail that cannot be discerned outside the software unit. Chapter 6 includes a section on design principles that describes in more detail how to make design decisions that result in modular designs.
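Read as advice for writing interfaces, this definition suggests contracts like the following hypothetical sketch: every input named is essential, every outcome listed is a possible result, and nothing about the stored representation leaks through.

    // A well-defined software unit: the contract states what the unit needs and
    // what it can produce, and nothing about how balances are stored.
    interface AccountService {
        /**
         * Withdraws amount from the given account.
         * @param accountId identifies the account; essential to the operation
         * @param amount the amount to withdraw; must be positive
         * @return the remaining balance
         * @throws IllegalArgumentException if amount is not positive or exceeds the balance
         */
        double withdraw(String accountId, double amount);
    }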

Architectural Views

We want to decompose the system's design into its constituent programmable units, such as modules, objects, or procedures. However, this set of elements may not be the only decomposition we consider. If the proposed system is to be distributed over several computers, we may want a view of the design that shows the distribution of the system's components as well as how those components communicate with each other. Alternatively, we may want a view of the design that shows the various services that the system is to offer and how the services operate together, regardless of how the services are mapped to code modules.

Common types of architectural views include the following:

• Decomposition view: This traditional view of a system's decomposition portrays the system as programmable units. As depicted in Figure 5.5, this view is likely to be hierarchical and may be represented by multiple models. For example, a software unit in one model may be expanded in another model to show its constituent units.

• Dependencies view: This view shows dependencies among software units, such as when one unit calls another unit's procedures or when one unit relies on data produced by one or more other units. This view is useful in project planning, to identify which software units are dependency free and thus can be implemented and tested in isolation. It is also useful for assessing the impact of making a design change to some software unit.

• Generalization view: This view shows software units that are generalizations or specializations of one another. An obvious example is an inheritance hierarchy among object-oriented classes. In general, this view is useful when designing abstract or extendible software units: the general unit encapsulates common data and functionality, and we derive various specialized units by instantiating and extending the general unit.

• Execution view: This view is the traditional box-and-arrow diagram that software architects draw, showing the runtime structure of a system in terms of its components and connectors. Each component is a distinct executing entity, possibly with its own program stack. A connector is some intercomponent communication mechanism, such as a communication channel, shared data repository, or remote procedure call.

• Implementation view: This view maps code units, such as modules, objects, and procedures, to the source file that contains their implementation. This view helps programmers find the implementation of a software unit within a maze of source-code files.

• Deployment view: This view maps runtime entities, such as components and connectors, onto computer resources, such as processors, data stores, and communication networks. It helps the architect analyze the quality attributes of a design, such as performance, reliability, and security.

• Work-assignment view: This view decomposes the system's design into work tasks that can be assigned to project teams. It helps project managers plan and allocate project resources, as well as track each team's progress.

Each view is a model of some aspect of the system's structure, such as code structure, runtime structure, file structure, or project team structure. A system's architecture represents the system's overall design structure; thus, it is the full collection of these views. Normally, we do not attempt to combine views into a single integrated design, because such a description, comprising multiple overlays of different decompositions, would be too difficult to read and keep up-to-date. Later in this chapter, we discuss how to document a system's architecture as a collection of views. The documentation includes mappings among views so that we can understand the big picture.

5.4 ARCHITECTURAL STYLES AND STRATEGIES

Creating a software architectural design is not a straightforward task. The design progresses in bursts of activity, with the design team often alternating between top-down and bottom-up analysis. In top-down design, the team tries to partition the system's key functions into distinct modules that can be assigned to separate components. However, if the team recognizes that a known, previously implemented design solution might be useful, the team may switch to a bottom-up design approach, adapting a prepackaged solution.

Often, our approaches to solving some problems have common features, and we can take advantage of the commonality by applying generalized patterns. Software architectural styles are established, large-scale patterns of system structure. Analogous to architectural styles for buildings, software architectural styles have defining rules, elements, and techniques that result in designs with recognizable structures and well-understood properties. However, styles are not complete detailed solutions. Rather, they are loose templates that offer distinct solutions for coordinating a system's components. To be specific, architectural styles focus on the different ways that components might communicate, synchronize, or share data with one another. As such, their structures codify constrained interactions among components and offer mechanisms, such as protocols, for realizing those interactions. In the early stages of software development, architectural styles can be useful for exploring and exploiting known approaches to organizing and coordinating access to data and functionality. In general, by constraining intercomponent interactions, architectural styles can be used to help the resulting system achieve specific system properties, such as data security (by restricting data flow) and maintainability (by simplifying communication interfaces).

Researchers are continuously analyzing good software designs, looking for useful architectural styles that can be applied more generally. These styles are then collected in style catalogues that an architect can reference when considering the best architecture for a given set of requirements. A few of these catalogues are listed at the end of this chapter.

In the rest of this section, we examine six architectural styles commonly used in software development: pipe-and-filter, client-server, peer-to-peer, publish-subscribe, repositories, and layering. For each style, we describe the software elements comprising the style, the constraints on interactions among elements, and some properties (good and bad) of the resulting system.

Pipe-and-Filter

In a pipe-and-filter style, illustrated in Figure 5.6, system functionality is achieved by passing input data through a sequence of data-transforming components, called filters, to produce output data. Pipes are connectors that simply transmit data from one filter to the next without modifying the data. Each filter is an independent function that makes no assumptions about other filters that may be applied to the data. Thus, we can build our system by connecting together different filters to form a variety of configurations. If the format of the data is fixed, that is, if all of the filters and pipes assume a common representation of the data being transmitted, then we can join filters together in any configuration. Such systems have several important properties (Shaw and Garlan 1996):

• We can understand the system's transformation of input data to output data as the functional composition of the filters' data transformations.


FIGURE 5.6 Pipes and filters.

• Filters can be reused in any other pipe-and-filter style program that assumes the same format for input and output data. Examples of such filters and systems include image-processing systems and Unix shell programs.

• System evolution is relatively easy; we can simply introduce new filters into our system's configuration, or replace or remove existing filters, without having to modify other parts of the system.

• Because of filter independence, we can perform certain types of analyses, such as throughput analysis.

• There are performance penalties when using a pipe-and-filter architecture. To support a fixed data format during data transmission, each filter must parse input data before performing its computation and then convert its results back to the fixed data format for output. This repeated parsing and unparsing of data can hamper system performance. It can also make the construction of the individual filters more complex.

In some pipe-and-filter style systems, the filters are independent data-transforming functions, but the representation of data passed between filters is not fixed. For example, old-style compilers had pipe-and-filter architectures in which the output of each filter (e.g., the lexical analyzer or the parser) was fed directly into the next filter. Because the filters in such systems are independent and have precise input and output formats, it is easy to replace and improve filters but hard to introduce or remove filters. For example, to remove a filter, we may need to substitute a stub that converts the output from the previous filter into the input format expected by the next filter.
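To make the style concrete, here is a minimal Java sketch under the assumption of a fixed data format (plain strings); the particular filters are invented for illustration. Because every filter consumes and produces the same format, the filters can be joined in any configuration, and the system's behavior is the functional composition of its filters.

import java.util.List;
import java.util.function.Function;

/** A minimal pipe-and-filter sketch: each filter is an independent
    transformation on a common data format (String), and the "pipes"
    are simply function composition. */
public class Pipeline {

    // Filters know nothing about one another; they share only the data format.
    static final Function<String, String> stripPunct =
        s -> s.replaceAll("\\p{Punct}", "");
    static final Function<String, String> toLower =
        s -> s.toLowerCase();
    static final Function<String, String> squeezeSpaces =
        s -> s.trim().replaceAll("\\s+", " ");

    /** Joins filters in any order, because they agree on the format. */
    static Function<String, String> pipe(List<Function<String, String>> filters) {
        return filters.stream().reduce(Function.identity(), Function::andThen);
    }

    public static void main(String[] args) {
        Function<String, String> system =
            pipe(List.of(stripPunct, toLower, squeezeSpaces));
        System.out.println(system.apply("  Pipes, and   Filters!  "));
        // prints: pipes and filters
    }
}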

Client-Server

In a client-server architecture, the design is divided into two types of components: clients and servers. Server components offer services, and clients access them using a request/reply protocol. The components execute concurrently and are usually distributed across several computers. There may be one centralized server, several replicated servers distributed over several machines, or several distinct servers each offering a different set of services. The relationship between clients and servers is asymmetric: Clients know the identities of the servers from which they request information, but servers know nothing about which, or even how many, clients they serve.


SIDEBAR 5.3 THE WORLD CUP CLIENT-SERVER SYSTEM

In 1994, the World Cup soccer matches were held in the United States. Over a single month, 24 teams played 52 games, drawing huge television and in-person audiences. The games were played in nine different cities that spanned four time zones. As a team won a match, it often moved to another city for the next game. During this process, the results of each game were recorded and disseminated to the press and to the fans. At the same time, to reduce the likelihood of violence among the fans, the organizers issued and tracked over 20,000 identification passes.

This system required both central control and distributed functions. For example, the system accessed central information about all the players. After a key play, the system could present historical information (images, video, and text) about those players involved. Thus, a client-server architecture seemed appropriate.

The system that was built included a central database, located in Texas, for ticket management, security, news services, and Internet links. This server also calculated game statistics and provided historical information, security photographs, and clips of video action. The clients ran on 160 Sun workstations that were located in the same cities as the games and provided support to the administrative staff and the press (Dixon 1996).

Clients initiate communications by issuing a request, such as a message or a remote-procedure call, and servers respond by fulfilling the request and replying with a result. Normally, servers are passive components that simply react to clients' requests, but in some cases, a server may initiate actions on behalf of its clients. For example, a client may send the server an executable function, called a callback, which the server subsequently calls under specific circumstances. Sidebar 5.3 describes a system implemented using the client-server style.

Because this architectural style separates client code and server code into different components, it is possible to improve system performance by shuffling the components among computers and processes. For example, client code might execute locally on a user's personal computer or might execute remotely on a more powerful server computer. In a multi-tier system, like the example shown in Figure 5.7, servers are structured hierarchically into application-specific servers (the middle tiers) that in turn use servers offering more generic services (the bottom tier) (Clements et al. 2003). This architecture improves the system's modularity and gives designers more flexibility in assigning activities to processes. Moreover, the client-server style supports reuse, in that servers providing common services may be useful in multiple applications.

FIGURE 5.7 Three-tiered client-server architecture.
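The asymmetry of the request/reply protocol is easy to see in code. In this Java sketch, the port number and message format are arbitrary inventions: the server passively waits for requests from clients it knows nothing about, while the client must know the server's address to initiate a request.

import java.io.*;
import java.net.*;

/** A bare-bones client-server sketch; port 9090 is an arbitrary choice. */
public class EchoDemo {

    static void server() throws IOException {
        try (ServerSocket listener = new ServerSocket(9090)) {
            while (true) {                                    // passive: waits for requests
                try (Socket client = listener.accept();
                     BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    out.println("reply: " + in.readLine());   // fulfill the request and reply
                }
            }
        }
    }

    static void client(String host) throws IOException {
        try (Socket s = new Socket(host, 9090);               // client initiates contact
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(s.getInputStream()))) {
            out.println("balance query");                     // the request
            System.out.println(in.readLine());                // the reply
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(() -> {
            try { server(); } catch (IOException ignored) { }
        });
        t.setDaemon(true);   // let the JVM exit when the client is done
        t.start();
        Thread.sleep(200);   // crude wait for the server to start listening
        client("localhost");
    }
}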

Peer-to-Peer

Technically, a peer-to-peer (P2P) architecture is one in which each component executes as its own process and acts as both a client of and a server to other peer components. Each component has an interface that specifies not only the services it provides, but also the services that it requests from other peer components. Peers communicate by requesting services from each other. In this way, P2P communication is like the request/reply communication found in client-server architecture, but any component can initiate a request to any other peer component.

The best known P2P architectures are file-sharing networks, such as Napster and Freenet, in which the components provide similar services to each other. What differs among components are the data each component stores locally. Thus, the system's data are distributed among the components; whenever a component needs information not stored locally, it retrieves it from a peer component.

P2P networks are attractive because they scale up well. Although each added component increases demands on the system in the form of additional requests, it also increases the system's capabilities, in the form of new or replicated data and additional server capacity. P2P networks are also highly tolerant of component and network failures, because data are replicated and distributed over multiple peers. Sidebar 5.4 describes the pros and cons of Napster's P2P architecture.
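The defining feature, every component playing both the client and the server role, can be sketched in a few lines. This Java toy models peers in a single process, with invented names and no real network transport; a real P2P system adds discovery, routing, and communication protocols.

import java.util.*;

/** A toy in-process model of P2P file sharing: every peer both serves
    requests (server role) and issues them (client role). */
class Peer {
    private final Map<String, String> localFiles = new HashMap<>();
    private final List<Peer> neighbors = new ArrayList<>();

    void store(String name, String content) { localFiles.put(name, content); }
    void connect(Peer p) { neighbors.add(p); }

    /** Server role: answer a request from another peer. */
    String serve(String name) { return localFiles.get(name); }

    /** Client role: look locally first, then ask each neighbor. */
    String fetch(String name) {
        if (localFiles.containsKey(name)) return localFiles.get(name);
        for (Peer p : neighbors) {
            String found = p.serve(name);     // request/reply with a peer
            if (found != null) return found;
        }
        return null;                          // not in this neighborhood
    }

    public static void main(String[] args) {
        Peer a = new Peer(); Peer b = new Peer();
        a.connect(b); b.connect(a);           // symmetric: both client and server
        b.store("song.mp3", "...bytes...");
        System.out.println(a.fetch("song.mp3") != null);  // true
    }
}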

SIDEBAR 5.4 NAPSTER'S P2P ARCHITECTURE

Napster, the popular music-sharing system, uses a P2P architecture. Typically, the peers are users' desktop computer systems running general-purpose computing applications, such as electronic mail clients, word processors, Web browsers, and more. Many of these user systems do not have stable Internet protocol (IP) addresses, and they are not always available to the rest of the network. And most users are not sophisticated; they are more interested in content than in the network's configuration and protocols. Moreover, there is great variation in methods for accessing the network, from slow dial-up lines to fast broadband connections. Napster's sophistication comes instead from its servers, which organize requests and manage content. The actual content is provided by users, in the form of files that are shared from peer to peer, and the sharing goes to other (anonymous) users, not to a centralized file server.


This type of architecture works well when the files are static (i.e., their content does not change often or at all), when file content or quality do not matter, and when the speed and reliability of sharing are not important. But if the file content changes frequently (e.g., stock prices or evaluations), sharing speed is key (e.g., large files are needed quickly), file quality is critical (e.g., photographs or video), or one peer needs to be able to trust another (e.g., the content is protected or contains valuable corporate information), then a P2P architecture may not be the best choice; a centralized server architecture may be more appropriate.

Publish-Subscribe

In a publish-subscribe architecture, components interact by broadcasting and reacting to events. A component expresses interest in an event by subscribing to it. Then, when another component announces (publishes) that the event has taken place, the subscribing components are notified. The underlying publish-subscribe infrastructure is responsible both for registering event subscriptions and for delivering published events to the appropriate components. Implicit invocation is a common form of publish-subscribe architecture, in which a subscribing component associates one of its procedures with each event of interest (called registering the procedure). In this case, when the event occurs, the publish-subscribe infrastructure invokes all of the event's registered procedures. In contrast to client-server and P2P components, publish-subscribe components know nothing about each other. Instead, the publishing component simply announces events and then waits for interested components to react; each subscribing component simply reacts to event announcements, regardless of how they are published. In models of this kind of architecture, the underlying infrastructure is often represented as an event bus to which all publish-subscribe components are connected.

This is a common architectural style for integrating tools in a shared environment. For example, Reiss (1990) reports on an environment called Field, where tools such as editors register for events that might occur during a debugger's functioning. To see how, consider that the debugger processes code, one line at a time. When it recognizes that it has reached a set breakpoint, it announces the event "reached breakpoint"; then, the system forwards the event to all registered tools, including the editor, and the editor reacts to the event by automatically scrolling to the source-code line that corresponds to the breakpoint. Other events that the debugger might announce include entry and exit points of functions, runtime errors, and commands to clear or reset the program's execution. However, the debugger is not aware of which tools, if any, have registered for the different events, and it has no control over what the other tools will do in response to one of its events. For this reason, a publish-subscribe system will often include some explicit invocations (e.g., calls to access methods) when a component wants to enforce or confirm a specific reaction to a critical event.
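A minimal implicit-invocation sketch in Java follows. The event bus, the "reached breakpoint" event, and the editor's registered procedure echo the Field example, but the code is only an illustration of the style, not Field's actual design.

import java.util.*;
import java.util.function.Consumer;

/** Implicit invocation: subscribers register procedures for named events;
    publishers announce events without knowing who, if anyone, is listening. */
class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String event, Consumer<String> procedure) {
        subscribers.computeIfAbsent(event, e -> new ArrayList<>()).add(procedure);
    }

    void publish(String event, String data) {
        // The infrastructure, not the publisher, invokes the registered procedures.
        for (Consumer<String> p : subscribers.getOrDefault(event, List.of())) {
            p.accept(data);
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // The "editor" registers for an event the "debugger" will announce.
        bus.subscribe("reached breakpoint",
                      line -> System.out.println("editor: scroll to line " + line));
        bus.publish("reached breakpoint", "42");   // debugger announces; editor reacts
    }
}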

Publish-subscribe systems have several strengths and weaknesses (Shaw and Garlan 1996):

• Such systems provide strong support for system evolution and customization. Because all interactions are orchestrated using events, any publish-subscribe component can be added to the system and can register itself without affecting other components.

• For the same reason, we can easily reuse publish-subscribe components in other event-driven systems.

• Components can pass data at the time they announce events. But if components need to share persistent data, the system must include a shared repository to support that interaction. This sharing can diminish the system's extensibility and reusability.

• Publish-subscribe systems are difficult to test, because the behavior of a publishing component will depend on which subscribing components are monitoring its events. Thus, we cannot test the component in isolation and infer its correctness in an integrated system.

Repositories

A repository style of architecture consists of two types of components: a central data store and associated data-accessing components. Shared data are stockpiled in the data store, and the data accessors are computational units that store, retrieve, and update the information. It is challenging to design such a system, because we must decide how the two types of components will interact. In a traditional database, the data store acts like a server component, and the data-accessing clients request information from the data store, perform calculations, and request that results be written back to the data store. In such a system, the data-accessing components are active, in that they initiate the system's computations.

However, in the blackboard type of repository (illustrated in Figure 5.8), the data-accessing components are reactive: they execute in reaction to the current contents of the data store. Typically, the blackboard contains information about the current state of the system's execution that triggers the execution of individual data accessors, called knowledge sources. For example, the blackboard may store computation tasks, and an idle knowledge source checks a task out of the blackboard, performs the computation locally, and checks the result back into the blackboard. More commonly, the blackboard stores the current state of the system's computation, and knowledge sources detect pieces of the unsolved problem to tackle. For example, in a rule-based system, the current state of the solution is stored in the blackboard, and knowledge sources iteratively revise and improve the solution by applying rewriting rules. The style is analogous to a computation or proof that is written on a real-world blackboard, where people (knowledge sources) iteratively improve the write-up by walking up to the blackboard, erasing some part of the writing, and replacing it with new writing (Shaw and Garlan 1996).

FIGURE 5.8 Typical blackboard.

An important property of this style of architecture is the centralized management of the system's key data. In the data store, we can localize responsibility for storing persistent data, managing concurrent access to the data, enforcing security and privacy policies, and protecting the data against faults (e.g., via backups). A key architectural decision is whether to map data to more than one data store. Distributing or replicating data may improve system performance, but often there are costs: adding complexity, keeping data stores consistent, and reducing security.

Layering

Layered systems organize the system's software units into layers, each of which provides services to the layer above it and acts as a client to the layer below. In a "pure" layered system, a software unit in a given layer can access only the other units in the same layer and services offered by the interface to the layer immediately below it. To improve performance, this constraint may be relaxed in some cases, allowing a layer to access the services of layers below its lower neighbor; this is called layer bridging. However, if a design includes a lot of layer bridging, then it loses some of the portability and maintainability that the layering style offers. Under no circumstances does a layer access the services offered by a higher-level layer; the resulting architecture would no longer be called layered.

To see how this type of system works, consider Figure 5.9, which depicts the Open Systems Interconnection (OSI) reference model for network communications (International Telecommunication Union 1994). The bottom layer provides facilities for transferring data bits over a physical link, like a cable, possibly unsuccessfully. The next layer, the Data Link Layer, provides more complex facilities: it transmits fixed-sized data frames, routes data frames to local addressable machines, and recovers from simple transmission errors. The Data Link Layer uses the bottom Physical Layer's facilities to perform the actual transmission of bits between physically connected machines. The Network Layer adds the ability to transmit variable-sized data packets, by breaking packets into fixed-sized data frames which are then sent using the Data Link facilities; the Network Layer also expands the routing of data packets to nonlocal machines. The Transport Layer adds reliability, by recovering from routing errors, such as when data frames are lost or reordered as they are routed (along possibly different paths) through the network. The Session Layer uses the Transport Layer's reliable data-transfer services to provide long-term communication connections, over which lengthy data exchanges can take place. The Presentation Layer provides translation among different data representations, to support data exchanges among components that use different data formats. The Application Layer provides application-specific services, such as file transfers if the application is a file-transfer program.

FIGURE 5.9 Layered architecture of the OSI model for network communications.

In the OSI example, each layer raises the level of abstraction of the communication services that are available to the next layer, and hides all of the details about how those services are implemented. In general, the layering style is useful whenever we can decompose our system's functionality into steps, each of which builds on previous steps. The resulting design is easier to port to other platforms, if the lowest level encapsulates the software's interactions with the platform. Moreover, because a layer can interact only with its neighboring layers, layer modification is relatively easy; such changes should affect at most only the two adjacent layers.
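The discipline of a "pure" layered system, in which each layer calls only the layer immediately below it, can be shown in a few classes. The three layers in this Java sketch loosely mimic the physical, data-link, and transport roles, drastically simplified, with invented behavior.

/** A pure layered sketch: each layer offers a service to the layer above
    and uses only the services of the layer immediately below. */
class LayeredStack {

    // Bottom layer: moves raw bits; knows nothing about frames or messages.
    static class PhysicalLayer {
        String transmit(String bits) { return bits; }   // pretend wire
    }

    // Middle layer: frames data, using only the physical layer below.
    static class DataLinkLayer {
        private final PhysicalLayer below = new PhysicalLayer();
        String send(String frame) { return below.transmit("[" + frame + "]"); }
    }

    // Top layer: adds an acknowledgment convention, using only the data link.
    static class TransportLayer {
        private final DataLinkLayer below = new DataLinkLayer();
        String send(String message) { return below.send(message + "|ack-requested"); }
    }

    public static void main(String[] args) {
        // A client at the top never touches PhysicalLayer directly.
        System.out.println(new TransportLayer().send("hello"));
        // prints: [hello|ack-requested]
    }
}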

On the other hand, it is not always easy to structure a system into distinct layers of increasingly powerful services, especially when software units are seemingly interdependent. Also, it may appear that layers introduce a performance cost from the set of calls and data transfers between system layers. Fortunately, sophisticated compilers, linkers, and loaders can reduce this overhead (Clements et al. 2003).

Combining Architectural Styles

It is easiest to think about an architectural style and its associated properties when we consider the style in its purest form. However, actual software architectures are rarely based purely on a single style. Instead, we build our software architecture by combining different architectural styles, selecting and adapting aspects of styles to solve particular problems in our design.

Architectural styles can be combined in several ways:

• We might use different styles at different levels of our system's decomposition. For example, we might view our system's overall structure as a client-server architecture, but subsequently decompose the server component into a series of layers. Or a simple intercomponent connection at one level of abstraction may be refined to be a collection of components and connectors at a lower level of decomposition. For instance, we might decompose one architecture's publish-subscribe interactions to elaborate the components and protocols used to manage event subscriptions and to notify subscribing components of event announcements.

• Our architecture may use a mixture of architectural styles to model different components or different types of interactions among components, such as the architecture shown in Figure 5.10. In this example, several client components interact with each other using publish-subscribe communications. These same components use server components via request/reply protocols; in turn, the server components interact with a shared data repository. In this example, the architecture integrates different styles into a single model by allowing components to play multiple roles (e.g., client, publisher, and subscriber) and to engage in multiple types of interactions.

• Integration of different architectural styles is easier if the styles are compatible. For example, all the styles being combined might relate runtime components, or all might relate code units. Alternatively, we may create and maintain different views of the architecture, as building architects do (e.g., the electrical wiring view, the plumbing view, the heating and air conditioning view, and so on). This approach is appropriate if integrating the views would result in an overly complex model, such as when components interact with each other in multiple ways (e.g., if pairs of components interact using both implicit invocation and explicit method calls), or if the mapping between the views' components is messy (i.e., is a many-to-many relationship).

If the resulting architecture is expressed as a collection of models, we must document how the models relate to one another. If one model simply shows the decomposition of an element from a more abstract model, then this relationship is straightforward. If, instead, two models show different views of the system, and if there is no obvious mapping between the two views' software units, then documenting the views' correspondence is all the more important. Section 5.8 describes how to record the correspondences among views.

FIGURE 5.10 Combination of publish-subscribe, client-server, and repository architectural styles. (The figure shows client application and presentation components, server-side presentation and business-logic servers, application databases, and an enterprise information system, connected by publish/subscribe, request/reply, and database query/transaction links.)


5.5 ACHIEVING QUALITY ATTRIBUTES

In Chapter 4, we saw that software requirements are about more than the proposed system's functionality. Other attributes, such as performance, reliability, and usability, are carefully specified, reflecting the characteristics that users want to see in the products we build. As we design our system, we want to select architectural styles that promote the required quality attributes. However, architectural style offers only a coarse-grained solution with generally beneficial properties; there is no assurance that a specific quality attribute is optimized. To assure the support of particular attributes, we use tactics (Bass, Clements, and Kazman 2003): more fine-grained design decisions that can improve how well our design achieves specific quality goals.

Modifiability

Modifiability is the basis of most of the architectural styles presented in this chapter. Because more than half of the full life-cycle cost of a system, including development, problem fixing, enhancement, and evolution, is spent after the first version of the software is developed and released, modifiability is essential. That is, above all, we want our design to be easy to change. Different architectural styles address different aspects of modifiability, so we must know how to select the styles that address our specific modifiability goals.

Given a particular change to a system, Bass, Clements, and Kazman (2003) distinguish between the software units that are directly affected by the change and those that are indirectly affected. The directly affected units are those whose responsibilities change to accommodate a system modification. We can structure our code to minimize the number of units requiring change. The indirectly affected units are those whose responsibilities do not change, but whose implementations must be revised to accommodate changes in the directly affected units. The difference is subtle. Both kinds of tactics aim to reduce the number of altered software units, but the tactics associated with each are different.

Tactics for minimizing the number of software units directly affected by a software change focus on clustering anticipated changes in the design:

• Anticipate expected changes: Identify design decisions that are most likely to change, and encapsulate each in its own software unit. Anticipated changes are not limited to future features that the customer would like to see implemented. Any service, functionality, or internal check that the system performs is susceptible to future improvement or obsolescence, and as such is a candidate for future changes.

• Cohesion: We will see in Chapter 6 that a software unit is cohesive if its pieces, data, and functionality all contribute to the unit's purpose and responsibilities. By keeping our software units highly cohesive, we increase the chances that a change to the system's responsibilities is confined to the few units that are assigned those responsibilities.

• Generality: The more general our software units, the more likely we can accommodate change by modifying a unit's inputs rather than modifying the unit itself. This characteristic is particularly true of servers, which ought to be general enough to accommodate all variants of requests for their services. For example, an object that encapsulates a data structure should provide sufficient access methods to enable other objects to easily retrieve and update data values.

By contrast, tactics for minimizing the impact on indirectly affected software units focus on reducing dependencies among software units. The goal is to reduce the degree to which a change to a directly affected unit also affects the system's other units:

• Coupling: As we discuss in detail in Chapter 6, the level of coupling among software units is the degree to which the units depend on each other. By lowering coupling, we reduce the likelihood that a change to one unit will ripple to other units.

• Interfaces: A software unit's interface reveals the unit's public requisites and responsibilities, and hides the unit's private design decisions. If a unit interacts with other units only through their interfaces (e.g., calling public access methods), then changes to one unit will not spread beyond the unit's boundary unless its interface changes (e.g., if method signatures, preconditions, or postconditions change).

• Multiple interfaces: A unit modified to provide new data or services can offer them using a new interface to the unit without changing any of the unit's existing interfaces. This way, dependencies on the existing interfaces are unaffected by the change, as the sketch following this list illustrates.
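The following Java sketch shows the interface tactics at work; the account server and its method names are invented for illustration. The legacy client depends only on the original interface, so offering new services through a second interface cannot ripple into it.

interface AccountQueries {                 // existing interface: unchanged
    double balance(String accountId);
}

interface AccountAudit {                   // new interface for new services
    java.util.List<String> history(String accountId);
}

class AccountServer implements AccountQueries, AccountAudit {
    public double balance(String accountId) { return 100.0; }      // stub value
    public java.util.List<String> history(String accountId) {
        return java.util.List.of("opened", "deposit 100");          // stub value
    }
}

class LegacyClient {
    // Depends only on AccountQueries; adding AccountAudit cannot break it.
    double check(AccountQueries server) { return server.balance("a-1"); }
}

class Demo {
    public static void main(String[] args) {
        AccountServer s = new AccountServer();
        System.out.println(new LegacyClient().check(s));        // 100.0
        System.out.println(((AccountAudit) s).history("a-1"));  // [opened, deposit 100]
    }
}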

The above tactics apply when modifiability goals include reducing the cost of changing the design or implementation of our software. Another type of modifiability refers to our being able to modify a system after it is released, perhaps during set-up or on the fly during execution. For example, the process of installing Unix or Linux on a computer involves a complex configuration step in which many questions must be answered about the computer's hardware and peripheral devices to be supported, and which libraries, tools, application software, and versions of software are to be installed. Sidebar 5.5 describes a tactic, called self-managing, where the software changes on the fly in response to changes in its environment.

SIDEBAR 5.5 SELF-MANAGING SOFTWARE

In response to increasing demands that systems be able to operate optimally in different and sometimes changing environments, the software community is starting to experiment with self-managing software. It goes by several names (such as autonomic, adaptive, dynamic, self-configuring, self-optimizing, self-healing, context-aware), but the essential idea is the same: the software system monitors its environment or its own performance, and changes its behavior in response to changes that it detects. That is, as its environment changes, the system may change itself. Here are some examples:

• Change the input sensors used, such as avoiding vision-based sensors when sensing in the dark.

• Change its communication protocols and interprotocol gateways, such as when users, each with a communication device, join and leave an electronic meeting.


• Change the Web servers that are queried, based on the results and performance of past queries.

• Move running components to different processors to balance processor load or to recover from a processor failure.

Self-managing software sounds ideal, but it is not easy to build. Obstacles include:

• Few architectural styles: Because self-managing software so strongly emphasizes context-dependent behavior, its design tends to be application specific, which calls for significant innovative design. If there were more general architectural styles that supported self-managing software, it would be easier to develop such software more rapidly and reliably.

• Monitoring nonfunctional requirements: Autonomic goals tend to be highly related to nonfunctional requirements. Thus, assessing how well the system is achieving these goals means that we have to relate the goals to measurable characteristics of the system's execution, and then monitor and dynamically evaluate these characteristics.

• Decision making: The system may have to decide whether to adapt on the basis of incomplete information about itself or its environment. Also, the system should not enter a perpetual state of adaptation if its environment fluctuates around a threshold point.

Performance

Performance attributes describe constraints on system speed and capacity, including:

• Response time: How fast does our software respond to requests?

• Throughput: How many requests can it process per minute?

• Load: How many users can it support before response time and throughput start to suffer?

The obvious tactic for improving a system's performance is increasing computing resources. That is, we can buy faster computers, more memory, or additional communication bandwidth. However, as Bass, Clements, and Kazman (2003) explain, there are also software design tactics that can improve system performance.

One tactic is improving the utilization of resources. For example, we can make our software more concurrent, thereby increasing the number of requests that can be processed at the same time. This approach is effective if some resources are blocked or idle, waiting for other computations to finish. For instance, multiple ATMs can simultaneously gather information about customers' banking requests, authenticate the customers' identification information, and confirm the requested transactions with the customers, before forwarding the requests to one of the bank's servers. In this design, the server receives only authenticated and confirmed requests, which it can process without further interaction with the customer, thereby increasing the server's throughput. Another option is to replicate and distribute shared data, thereby reducing contention for the data. However, if we replicate data, then we must also introduce mechanisms to keep the distributed copies in synch. The overhead of keeping data copies consistent must be more than offset by the performance improvements we gain from reducing data contention.

A second tactic is to manage resource allocation more effectively. That is, we should decide carefully how competing requests for resources are granted access to them. Criteria for allocating resources include minimizing response time, maximizing throughput, maximizing resource utilization, favoring high-priority or urgent requests, and maximizing fairness (Bass, Clements, and Kazman 2003). Some common scheduling policies are given below.

• First-come/first-served: Requests are processed in the order in which they are received. This policy ensures that all requests are eventually processed. But it also means that a high-priority request could be stuck waiting for a lower-priority request to be serviced.

• Explicit priority: Requests are processed in order of their assigned priorities. This policy ensures that important requests are handled quickly. However, it is possible for a low-priority request to be delayed forever in favor of new high-priority requests. One remedy is to dynamically increase the priority of delayed requests, to ensure that they are eventually scheduled.

• Earliest deadline first: Requests are processed in order of their impending deadlines. This policy ensures that urgent requests are handled quickly, thereby helping the system meet its real-time deadlines.

The above policies assume that once a request is scheduled for service, it is processed to completion. Alternatively, the system can interrupt an executing request. For example, a lower-priority request can be preempted in favor of a higher-priority request, in which case the preempted request is rescheduled for completion. Or we can use round-robin scheduling that allocates resources to requests for fixed time intervals; if a request is not completely serviced within this time period, it is preempted and the uncompleted portion is rescheduled. How we choose from among these scheduling policies depends entirely on what performance property the customer wants to optimize.
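As a concrete taste of one policy, here is a minimal earliest-deadline-first scheduler in Java; the Request type, its deadlines, and the sample requests are invented for illustration. A priority queue ordered by deadline means the request whose deadline is most imminent is always served next.

import java.util.Comparator;
import java.util.PriorityQueue;

/** A sketch of the earliest-deadline-first policy using a priority queue. */
public class EdfScheduler {

    record Request(String name, long deadlineMillis) {}

    private final PriorityQueue<Request> queue =
        new PriorityQueue<>(Comparator.comparingLong(Request::deadlineMillis));

    void submit(Request r) { queue.add(r); }

    /** Always serves the request whose deadline is most imminent. */
    Request next() { return queue.poll(); }

    public static void main(String[] args) {
        EdfScheduler s = new EdfScheduler();
        s.submit(new Request("report", 5000));
        s.submit(new Request("alarm", 100));    // most urgent
        s.submit(new Request("backup", 9000));
        System.out.println(s.next().name());    // prints: alarm
    }
}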

A third tactic is to reduce demand for resources. At first glance, this approach may seem unhelpful, because we may have no control over inputs to our software. However, we can sometimes reduce demand on resources by making our code more efficient. Better yet, in some systems, we may be able to service only a fraction of the inputs we receive. For example, if some of the system inputs are sensor data rather than requests for service, our software may be able to sample inputs at a lower frequency without the system's missing important changes in the sensed environment.

Security

Our choice of architectural style has a significant impact on our ability to implement security requirements. Most security requirements describe what is to be protected and from whom; the protection needs are often discussed in a threat model, which casts each need in terms of the possible threats to the system and its resources. It is the architecture that describes how the protection should be achieved.

There are two key architectural characteristics that are particularly relevant to security: immunity and resilience. A system has high immunity if it is able to thwart an attempted attack. It has high resilience if it can recover quickly and easily from a successful attack. The architecture encourages immunity in several ways:

• ensuring that all security features are included in the design

• minimizing the security weaknesses that might be exploited by an attacker

Likewise, the architecture encourages resilience by

• segmenting the functionality so that the effects of an attack can be contained to a small part of the system

• enabling the system to restore functionality and performance in a short time

Thus, several more general quality characteristics, such as redundancy, contribute to the architecture's security.

A full discussion of security architectures is beyond the scope of this book; for a detailed discussion, see Pfleeger and Pfleeger (2006), where the architectural discussions are based on type of application, such as operating system, database, or user interface. No matter the application, some architectural styles, such as layering, are well-suited for any kind of security; they inherently ensure that some objects and processes cannot interact with other objects and processes. Other styles, such as P2P, are much more difficult to secure.

Johnson, McGuire, and Willey (2008) investigated just how insecure a P2P network can be. They point out that this type of architecture is at least 40 years old: the U.S. Department of Defense developed the Arpanet as a P2P system:

TCP/IP, introduced in 1973, cemented the notion of direct host-to-host communication, with the network handling the mechanics of guiding the packets to their destination. Most of the protocols created since then (HTTP, SMTP, DNS, etc.) build on the idea that a host that needs data connects directly to the host that has it, and that it is the network's task to enable this. The techniques used by P2P file-sharing networking systems are simply an evolution of these principles.

Although a P2P network has advantages such as replication and redundancy, the underlying design encourages data sharing even when the data are not intended for open view.

To see how, consider that a typical P2P network involves users who place shareable items in a designated folder. You may think that a careful user's files would be safe. But there are many ways that data are unintentionally shared:

• The user accidentally shares files or folders containing sensitive information.

• Files or data are misplaced.

• The user interface may be confusing, so the user does not realize that a file is being shared. Good and Krekelberg (2003) found the KaZaA system to have this problem.

• Files or data are poorly organized.

• The user relies on software to recognize file or data types and make them available, and the software mistakenly includes a file or data that should have been protected.

• Malicious software shares files or folders without the user's knowledge.

Indeed, Krebs (2008) describes how an investment firm employee used his company computer to participate in LimeWire, an online P2P file-sharing network for people trading in music and videos. In doing so, he inadvertently exposed his firm's private files. These files included the names, dates of birth, and Social Security numbers for 2000 of the firm's clients, among whom was a U.S. Supreme Court Justice! The head of Tiversa, the company hired to help contain the breach, said, "such breaches are hardly rare. About 40 to 60 percent of all data leaks take place outside of a company's secured network, usually as a result of employees or contractors installing file-sharing software on company computers" (Krebs 2008). Files containing confidential company plans or designs for new products are often among those leaked. So architectural considerations should address both conventional and unconventional uses of the system being developed.

Reliability

Sidebar 5.6 warns us that software safety should not be taken for granted. That is, we need to be diligent in our design work, anticipating faults and handling them in ways that minimize disruption and maximize safety. The goal is to make our software as fault-free as possible, by building fault prevention and fault recovery into our designs. A software system or unit is reliable if it correctly performs its required functions under assumed conditions (IEEE 1990). In contrast, a system or unit is robust if it is able to function correctly "in the presence of invalid inputs or stressful environment conditions" (IEEE 1990). That is, reliability has to do with whether our software is internally free of errors, and robustness has to do with how well our software is able to withstand errors or surprises from its environment. We discuss tactics for robustness in the next section.

SIDEBAR 5.6 THE NEED FOR SAFE DESIGN

How safe are the systems we are designing? The reports from the field are difficult to interpret. Some systems clearly benefit from having some of their functions implemented in software instead of hardware (or instead of leaving decisions to the judgment of the people who are controlling them). For example, the automobile and aviation industries claim that large numbers of accidents have been prevented as more and more software is introduced into control systems. However, other evidence is disturbing. For instance, from 1986 to 1997, there were over 450 reports filed with the U.S. Food and Drug Administration (FDA) detailing software defects in medical devices, 24 of which led to death or injury (Anthes 1997). Rockoff (2008) reports that the FDA established a software forensics unit in 2004 after it noticed that medical device makers were reporting more and more software-based recalls.

The reported numbers may represent just the tip of the iceberg. Because reports to the FDA must be filed within 15 days of an incident, manufacturers may not yet have discovered the true cause of a failure when they write their reports. For instance, one reported battery failure was ultimately traced to a software flaw that drained it. And Leveson and Turner (1993) describe in great detail the user-interface design problems that led to at least four deaths and several injuries from a malfunctioning radiation therapy machine.

The importance of software design is becoming evident to many organizations that were formerly unaware of software's role. Of course, design problems are not limited to medical devices; many developers take special precautions. The Canadian Nuclear Safety Commission recommends that all "level 1" safety-critical software running in nuclear power plants be specified and designed using formal (i.e., mathematical) notations, "so that the functional analysis can use mathematical methods and automated tools" (Atomic Energy Control Board 1999). And many groups at Hewlett-Packard use formal inspections and proofs to eliminate faults in the design before coding begins (Grady and van Slack 1994).

Anthes (1997) reports the suggestions of Alan Barbell, a project manager at Environmental Criminology Research, an institute that evaluates medical devices. Barbell notes that software designers must see directly how their products will be used, rather than rely on salespeople and marketers. Then the designers can build in preventative measures to make sure that their products are not misused.

How do faults occur? As we saw in Chapter 1, a fault in a software product is the result of some human error. For example, we might misunderstand a user-interface requirement and create a design that reflects our misunderstanding. The design fault can be propagated as incorrect code, incorrect instructions in the user manual, or incorrect test scripts. In this way, a single error can generate one or more faults, in one or more development products.

We distinguish faults from failures. A failure is an observable departure of the system from its required behavior. Failures can be discovered both before and after system delivery, because they can occur in testing as well as during operation. In some sense, faults and failures refer respectively to invisible and visible flaws. In other words, faults represent flaws that only developers see, whereas failures are problems that users or customers see.

It is important to realize that not every fault corresponds to a failure, since the conditions under which a fault manifests itself as an observable failure may never be met. For example, fault-containing code may never be executed or may not exceed the boundaries of correct behavior (as with Ariane-4).

We make our software more reliable by preventing or tolerating faults. That is, rather than waiting for the software to fail and then fixing the problem, we anticipate what might happen and construct the system to react in an acceptable way.

Active Fault Detection. When we design a system to wait until a failure occurs during execution, we are practicing passive fault detection. However, if we periodically check for symptoms of faults, or try to anticipate when failures will occur, we are performing active fault detection. A common method for detecting faults within a process is to identify known exceptions, that is, situations that cause the system to deviate from its desired behavior. Then, we include exception handling in our design, so that the system addresses each exception in a satisfactory way and returns the system to an acceptable state. Thus, for each service we want our system to provide, we identify ways it may fail and ways to detect that it has failed. Typical exceptions include

• failing to provide a service

• providing the wrong service

• corrupting data

• violating a system invariant (e.g., a security property)

• deadlocking

For example, we can detect data problems by identifying relationships or invariants that should hold among data values and checking regularly at runtime that these invariants still hold. Such checks can be embedded in the same code that manipulates the data. We say more in Chapter 6 about how to use exceptions and exception handling effectively.
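Below is a sketch of such an embedded invariant check in Java; the audited account, its invariant (the balance must equal the sum of the logged transactions), and the exception it raises are invented for illustration. Because the check runs on every update, a data-corrupting fault surfaces immediately instead of propagating silently.

import java.util.ArrayList;
import java.util.List;

/** Active fault detection via an invariant check embedded in the same
    code that manipulates the data. */
class AuditedAccount {
    private double balance = 0;
    private final List<Double> log = new ArrayList<>();

    void apply(double amount) {
        balance += amount;
        log.add(amount);
        checkInvariant();                       // active check, not passive waiting
    }

    private void checkInvariant() {
        double sum = log.stream().mapToDouble(Double::doubleValue).sum();
        if (Math.abs(sum - balance) > 1e-9) {
            // Signal the exception; a real system's handler would return
            // the system to an acceptable state.
            throw new IllegalStateException("balance diverged from transaction log");
        }
    }

    public static void main(String[] args) {
        AuditedAccount a = new AuditedAccount();
        a.apply(50.0);
        a.apply(-20.0);
        System.out.println("invariant holds");
    }
}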

Another approach to active fault detection is to use some form of redundancy and then to check that the two techniques agree. For example, a data structure can include both forward and backward pointers, and the program can check that paths through the data structure are consistent. Or an accounting program can add up all the rows and then all of the columns to verify that the totals are identical. Some systems go so far as to provide, and continuously compare, multiple versions of the whole system. The theory behind this approach, called n-version programming, is that if two functionally equivalent systems are designed by two different design teams at two different times using different techniques, the chance of the same fault occurring in both implementations is very small. Unfortunately, n-version programming has been shown to be less reliable than originally thought, because many designers learn to design in similar ways, using similar design patterns and principles (Knight and Leveson 1986).

In other systems, a second computer, running in parallel, is used to monitor the progress and health of the primary system. The second system interrogates the first, examining the system's data and processes, looking for signs that might indicate a problem. For instance, the second system may find a process that has not been scheduled for a long period of time. This symptom may indicate that the first system is "stuck" somewhere, looping through a process or waiting for input. Or the system may find a block of storage that was allocated and is no longer in use, but is not yet on the list of available blocks. Or the second system may discover a communication line that has not been released at the end of a transmission. If the second system cannot directly examine the first system's data or processes, it can instead initiate diagnostic transactions. This technique involves having the second system generate false but benign transactions in the first computer, to determine if the first system is working properly. For example, the second system can open a communication channel to the first, to ensure that the first system still responds to such requests.

Fault Recovery. A detected fault must be handled as soon as it is discovered, rather than waiting until processing is complete. Such immediate fault handling helps to limit the fault's damage, rather than allowing the fault to become a failure and create a trail of destruction. Fault-recovery tactics usually involve some overhead in keeping the system ready for recovery:

• Undoing transactions: The system manages a series of actions as a single transaction that either executes in its entirety, or whose partial effects are easily undone if a fault occurs midway through the transaction.

• Checkpoint/rollback: Periodically, or after a specific operation, the software records a checkpoint of its current state. If the system subsequently gets into trouble, it "rolls" its execution back to this recorded state, and reapplies logged transactions that occurred since the checkpoint (see the sketch after this list).

• Backup: The system automatically substitutes the faulty unit with a backup unit. In a safety-critical system, this backup unit can run in parallel with the active unit, processing events and transactions. This way, the backup is ready to take over for the active unit at a moment's notice. Alternatively, the backup unit is brought online only when a failure occurs, which means that the backup needs to be brought up to speed on the current state of the system, possibly by using checkpoints and logged transactions.

• Degraded service: The system returns to its previous state, perhaps using checkpoints and rollback, and then offers some degraded version of the service.

• Correct and continue: If the monitoring software detects a problem with data consistency or a stalled process, it may be easier to treat the symptoms rather than fix the fault. For example, the software may be able to use redundant information to infer how to fix data errors. As another example, the system may terminate and restart hung processes. Telecommunications systems operate this way, dropping bad connections with the expectation that the customer can reinitiate the call. In this manner, the integrity of the overall system takes precedence over any individual call that is placed.

• Report: The system returns to its previous state and reports the problem to an exception-handling unit. Alternatively, the system may simply note the existence of the failure, and record the state of the system at the time the failure occurred. It is up to the developers or maintainers to return to fix the problem later.
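The checkpoint/rollback tactic can be sketched as follows in Java; the key-value store, its redo log, and the method names are our own invention. The store snapshots its state, logs transactions applied after the snapshot, and on trouble restores the checkpoint and replays the log.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** A sketch of checkpoint/rollback for a simple key-value store. */
class CheckpointedStore {
    private Map<String, Integer> state = new HashMap<>();
    private Map<String, Integer> checkpoint = new HashMap<>();
    private final Deque<String[]> redoLog = new ArrayDeque<>();

    void put(String key, int value) {
        state.put(key, value);
        redoLog.add(new String[] { key, String.valueOf(value) });  // log the transaction
    }

    void takeCheckpoint() {
        checkpoint = new HashMap<>(state);      // record the current state
        redoLog.clear();                        // the log restarts at the checkpoint
    }

    void rollbackAndReplay() {
        state = new HashMap<>(checkpoint);      // "roll" back to the recorded state
        for (String[] tx : redoLog) {           // reapply logged transactions
            state.put(tx[0], Integer.parseInt(tx[1]));
        }
    }

    public static void main(String[] args) {
        CheckpointedStore s = new CheckpointedStore();
        s.put("x", 1);
        s.takeCheckpoint();
        s.put("x", 2);                          // logged after the checkpoint
        s.rollbackAndReplay();                  // recovers to x = 2
        System.out.println(s.state.get("x"));
    }
}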

The criticality of the system determines which tactic we choose. Sometimes it is desirable to stop system execution when a fault affects the system in some way (e.g., when a failure occurs). It is much easier to find the source of the problem if system processing ceases abruptly on detecting a fault than if the system continues executing; continuing may produce other effects that hide the underlying fault, or may overwrite critical data and program state information needed to locate the fault.

Other times, stopping system execution to correct a fault is too expensive, risky, or inconvenient. Such a tactic would be unthinkable for software in a medical device or aviation system. Instead, our software must minimize the damage done by the fault and then carry on with little disruption to the users. For example, suppose software controls several equivalent conveyor belts in an assembly line. If a fault is detected on one of the belts, the system may sound an alarm and reroute the materials to the other belts. When the defective belt is fixed, it can be put back into production. This approach is certainly preferable to stopping production completely until the defective belt is fixed. Similarly, a banking system may switch to a backup processor or make duplicate copies of data and transactions in case one process fails.

Some fault-recovery tactics rely on the ability to predict the location of faults and the timing of failures. To build workarounds in the system design, we must be able to guess what might go wrong. Some faults are easy to anticipate, but more complex systems are more difficult to analyze. At the same time, complex systems are more likely to have significant faults. To make matters worse, the code to implement fault detection and recovery may itself contain faults, whose presence may cause irreparable damage.


Thus, some fault-recovery strategies isolate areas of likely faults rather than predict actual faults.

Robustness

When we learn to drive a car, we are told to drive defensively. That is, we not only make sure that we follow the driving rules and laws, but we also take precautions to avoid accidents that might be caused by problems in our surroundings, such as road conditions and other vehicles. In the same way, we should design defensively, trying to anticipate external factors that might lead to problems in our software. Our system is said to be robust if it includes mechanisms for accommodating or recovering from problems in the environment or in other units.

Defensive designing is not easy. It requires diligence. For example, we may follow a policy of mutual suspicion, where each software unit assumes that the other units contain faults. In this mode, each unit checks its input for correctness and consistency, and tests that the input satisfies the unit's preconditions. Thus, a payroll program would ensure that hours_worked is nonnegative before calculating an employee's pay. Similarly, a checksum, guard bit, or parity bit included in a data stream can warn the system if input data are corrupted. In a distributed system, we can check the health of remote processes and the communication network by periodically issuing a "ping" and checking that the processes answer within an acceptable time frame. In some distributed systems, multiple computers perform the same calculations; the space shuttle operates this way, using five duplicate computers that vote to determine the next operation. This approach is different from n-version programming, in that all of the computers run the same software. Thus, this redundancy will not catch logic errors in the software, but it will overcome hardware failures and transient errors caused by radiation. As we will see later in this chapter, we can use fault-tree analysis and failure-mode analysis to help us identify potential hazards that our software must detect and recover from.
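
A minimal Python sketch of mutual suspicion (the function names and the checksum scheme are illustrative assumptions, not from the text): each unit validates its input before trusting it.

    def compute_pay(hours_worked, hourly_rate):
        # Mutual suspicion: check the caller's data against our preconditions.
        if hours_worked < 0:
            raise ValueError("precondition violated: hours_worked is negative")
        if hourly_rate <= 0:
            raise ValueError("precondition violated: hourly_rate is not positive")
        return hours_worked * hourly_rate

    def input_intact(data_bytes, checksum):
        # A simple checksum warns the unit if the incoming data are corrupted.
        return sum(data_bytes) % 256 == checksum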

Robustness tactics for detecting faults differ from reliability tactics because the source of problems is different. That is, the problems are in our software's environment rather than in our own software. However, the recovery tactics are similar: our software can roll the system back to a checkpoint state, abort a transaction, initiate a backup unit, provide reduced service, correct the symptoms and continue processing, or trigger an exception.

Usability

Usability attributes reflect the ease with which a user is able to operate the system. Most aspects of user-interface design are about how information is presented to and collected from the user. These design decisions tend not to be architectural, so we postpone a detailed discussion of this topic until the next chapter. However, there are a few user-interface decisions that do significantly affect the software's architecture, and they are worth mentioning here.

First and foremost, the user interface should reside in its own software unit, or possibly its own architectural layer. This separation makes it easier to customize the user interface for different audiences, such as users of different nationalities or different abilities.


Second, there are some user-initiated commands that require architectural support. These include generic commands such as cancel, undo, aggregate, and show multiple views (Bass, Clements, and Kazman 2003). At a minimum, the system needs a process that listens for these commands, because they could be generated at any time, unlike user commands that are input in response to a system prompt. In addition, for some of these commands, the system needs to prepare itself to receive and execute the command. For example, for the undo command, the system must maintain a chain of previous states to which to return. For the show multiple views command, the system must be able to present multiple displays and keep them up-to-date and consistent as data change. In general, the design should include facilities for detecting and responding to any expected user input.
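
For instance, architectural support for undo can be as simple as maintaining a stack of previous states, as in this Python sketch (hypothetical names, for illustration only):

    class UndoableDocument:
        """Keeps a chain of previous states so that a user-initiated
        undo command can be honored at any time."""

        def __init__(self, state):
            self.state = state
            self.history = []                # previous states, newest last

        def execute(self, new_state):
            self.history.append(self.state)  # save the state before changing it
            self.state = new_state

        def undo(self):
            if self.history:
                self.state = self.history.pop()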

Third, there are some system-initiated activities for which the system should maintain a model of its environment. The most obvious examples are time-activated activities that occur at defined time intervals or on specified dates. For example, a pacemaker can be configured to trigger a heartbeat 50, 60, or 70 times a minute, and an accounting system can be set up to generate monthly paychecks or bills automatically. The system must track the passage of time or days to be able to initiate such time-sensitive tasks. Similarly, a process-control system, such as a system that monitors and controls a chemical reaction, will maintain a model of the process being controlled, so that it can make informed decisions about how to react to particular sensor input. If we encapsulate the model, we will be better able to replace this software unit when new modeling technologies are invented, or to tailor the model for different applications or customers.

Business Goals

The system may have quality attributes that it is expected to exhibit. In addition, we or our customers may have associated business goals that are important to achieve. The most common of these goals is minimizing the cost of development and the time to market. Such goals can have major effects on our design decisions:

• Buy vs. build: It is becoming increasingly possible to buy major components. In addition to saving development time, we may actually save money by buying a component rather than paying our employees to build it. A purchased component may also be more reliable, especially if it has a history and has been tested out by other users. On the other hand, using a third-party or existing component places constraints on the rest of our design, with respect to how the architecture is decomposed and what its interfaces to the component look like. It can also make our system vulnerable to the availability of the component's supplier, and can be disastrous if the supplier goes out of business or evolves the component so that it is no longer compatible with our hardware, software, or needs.

• Initial development vs. maintenance costs: Many of the architectural styles and tactics that promote modifiability also increase a design's complexity, and thus increase the system's cost. Because more than half of a system's total development costs are incurred after the first version is released, we can save money by making our system more modifiable. However, increased complexity may delay the system's initial release. During this delay, we collect no payment for our product, we risk losing market share to our competitors, and we risk our reputation as a reliable


software supplier. Thus, for each system that we build, we must evaluate the trade-off between early delivery and easier maintenance.

• New vs. known technologies: New technologies, architectural styles, and components may require new expertise. Past examples of such technological breakthroughs include object-oriented programming, middleware technologies, and open-systems standards; a more recent example is the prevalence of cheap multiprocessors. Acquiring expertise costs money and delays product release, as we either learn how to use the new technology or hire new personnel who already have that knowledge. Eventually, we must develop the expertise ourselves. But for a given project, we must decide when and how to pay the costs and reap the benefits of applying new technology.

5.6 COLLABORATIVE DESIGN

Not all design questions are technical. Many are sociological, in that the design of software systems is usually performed by a team of developers, rather than by a single person. A design team works collaboratively, often by assigning different parts of the design to various people. Several issues must be addressed by the team, including who is best suited to design each aspect of the system, how to document the design so that each team member understands the designs of others, and how to coordinate the resulting software units so that they work well when integrated together. The design team must be aware of the causes of design breakdown (Sidebar 5.7) and use the team's strengths to address them.

SIDEBAR 5.7 THE CAUSES OF DESIGN BREAKDOWN

Guindon, Krasner, and Curtis (1987) studied the habits of designers on 19 projects to determine what causes the design process to break down. They found three classes of breakdown: lack of knowledge, cognitive limitations, and a combination of the two.

The main types of process breakdown were

• lack of specialized data schemas
• lack of a meta-schema about the design process, leading to poor allocation of resources to the various design activities
• poor prioritization of issues, leading to poor selection of alternative solutions
• difficulty in considering all the stated or inferred constraints in defining a solution
• difficulty in performing mental simulations with many steps or test cases
• difficulty in keeping track of and returning to subproblems whose solution has been postponed
• difficulty in expanding or merging solutions from individual subproblems to form a complete solution


One of the major problems in performing collaborative design is addressing differences in personal experience, understanding, and preference. Another is that people sometimes behave differently in groups from the way they behave individually. For example, Japanese software developers are less likely to express individual opinions when working in a group, because they value teamwork more than they value individual work. Harmony is very important, and junior personnel in Japan defer to the opinions of their more senior colleagues in meetings (Ishii 1990). Watson, Ho, and Raman (1994) found a similar situation when they compared groupware-supported meeting behavior in the United States to that in Singapore. Parallel communication and anonymous information exchange were important for the American groups, but were not as important for the Singaporean groups, who valued harmony. In cases such as these, it may be desirable to design using a groupware tool, where anonymity can be preserved. Indeed, Valacich et al. (1992) report that preserving anonymity in this way can enhance the group's overall performance. There is a trade-off, however. Anonymity in some groups can lead to diminished responsibilities for individuals, leading them to believe that they can get away with making fewer contributions. Thus, it is important to view the group interaction in its cultural and ethical contexts.

Outsourcing

As the software industry seeks to cut costs and improve productivity, more software development activities will be outsourced to other companies or divisions, some of which may be located in other countries. In such cases, a collaborative design team may be distributed around the world, and the importance of understanding group behavior will increase.

Yourdon (1994) identifies four stages in this kind of distributed development:

1. In the first stage, a project is performed at a single site with on-site developers from foreign countries.

2. In the second stage, on-site analysts determine the system's requirements. Then, the requirements are provided to off-site groups of developers and programmers to continue development.

3. In the third stage, off-site developers build generic products and components that are used worldwide.

4. In the fourth stage, the off-site developers build products that take advantage of their individual areas of expertise.

Notice that this model conflicts with advising designers to shuttle among requirements analysts, testers, and coders, in order to enhance everyone's understanding of the system to be developed. As a development team advances through the stages of Yourdon's model, it is likely to encounter problems at stage 2, where communication paths must remain open to support an iterative design process.

Time zone differences and unstable Internet connections are just some of the challenges that can make it difficult for a distributed design team to coordinate its efforts. Yourdon (2005) has studied the trends and effects of outsourcing knowledge-based work, and he reports that distributed teams often use different development processes. Outsourced subteams are more likely to be staffed with junior developers who


employ current best practices, whereas more mature subteams tend to use older methods that have proven effective on past projects. This mismatch of processes can be a source of contention. Also, outsourced subteams, especially those that work offshore (i.e., in another country), are less likely to know local business rules, customs, and laws.

Communication among distributed team members can be enhanced using notes, prototypes, graphics, and other aids. However, these explicit representations of the requirements and design must be unambiguous and capture all of the assumptions about how the system should work. Polanyi (1966) notes that intentions cannot be specified fully in any language; some nuances are not obvious. Thus, communication in a group may break down when an information recipient interprets information in terms of his or her understanding and context. For example, in person, we convey a great deal of information using gestures and facial expressions; this type of information is lost when we are collaborating electronically (Krauss and Fussell 1991).

This difficulty is compounded when we communicate in more than one language. For example, there are hundreds of words to describe pasta in Italian, and Arabic has over 40 words for camel. It is extremely difficult to translate the nuances embedded in these differences. Indeed, Winograd and Flores (1986) assert that complete translation from one natural language to another is impossible, because the semantics of a natural language cannot be defined formally and completely. Thus, a major challenge in producing a good software design is reaching a shared understanding among groups of people who may view the system and its environment in very different ways. This challenge derives not just from "the complexity of technical problems, but [also] because of the social interaction when users and system developers learn to create, develop and express their ideas and visions" (Greenbaum and Kyng 1991).

5.7 ARCHITECTURE EVALUATION AND REFINEMENT

Design is an iterative process: we propose some design decisions, assess whether they are the right decisions, perhaps make adjustments, and propose more decisions. In this section, we look at several ways to evaluate the design, to assess its quality and to gain insight into how to improve the design before we implement it. These techniques evaluate the design according to how well it achieves specific quality attributes.

Measuring Design Quality

Some researchers are developing metrics to assess certain key aspects of design quality. For example, Chidamber and Kemerer (1994) have proposed a general set of metrics to apply to object-oriented systems. Briand, Morasca, and Basili (1994) have proposed metrics for evaluating high-level design, including cohesion and coupling, and Briand, Devanbu, and Melo (1997) build on those ideas to propose ways to measure coupling.

To see how these measurements reveal information about the design, consider the latter group's coupling metrics. Briand et al. note that coupling in C++-like designs can be based on three different characteristics: relationships between classes (e.g., friendship, inheritance), types of interactions between classes (e.g., class-attribute interaction, class-method interaction, method-method interaction), and the loci of ripple effects due to design changes (i.e., whether a change flows toward or away from


a class). For each class in a design, they defined metrics that count the interactions between the class and other classes or methods. Then, using empirical information about the design for a real system and the resulting system's faults and failures, they analyzed the relationship between the type of coupling and the kinds of faults that were found. For example, they report that when a class depended on a large number of attributes that belonged to other classes that were not ancestors, descendants, or friends of that class, then the resulting code was more fault prone than usual. Similarly, when many methods belonging to friend classes depended on the methods of a particular class, then that class was more fault prone. In this way, design information can be used to predict which parts of the software are most likely to be problematic. We can take steps during the design stage to build in fault prevention or fault tolerance, and we can focus more of our initial testing efforts on the most fault-prone parts of the design.
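
The Python sketch below shows the flavor of such counting; the representation of the class model is an assumption made for illustration, not the metric suite's formal definition.

    def foreign_attribute_coupling(attribute_uses, related):
        """For each class, count attribute references whose owning class is
        neither the class itself nor among its ancestors, descendants, or
        friends; high counts flagged fault-prone classes in the study above."""
        counts = {}
        for cls, refs in attribute_uses.items():
            outside_ok = related.get(cls, set())
            counts[cls] = sum(1 for owner, _attr in refs
                              if owner != cls and owner not in outside_ok)
        return counts

    # Example: Invoice uses two attributes of the unrelated class Customer.
    uses = {"Invoice": [("Customer", "name"), ("Customer", "address")]}
    print(foreign_attribute_coupling(uses, {"Invoice": set()}))  # {'Invoice': 2}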

Safety Analysis

We learned earlier about the importance of fault identification, correction, and tolerance in creating reliable and robust designs. There are several techniques that can be used during design to identify possible faults or their likely locations. Fault-tree analysis, a method originally developed for the U.S. Minuteman missile program, helps us to examine a design and look for faults that might lead to failure. We build fault trees that trace backwards through a design, along the logical path from effect to cause. The trees are then used to help us decide which faults to correct or avoid, and which faults to tolerate.

We begin our analysis by identifying possible failures. Although our identification takes place during design, we consider failures that might be affected by design, operation, or even maintenance. We can use a set of guidewords to help us understand how the system might deviate from its intended behavior. Table 5.1 illustrates some of the guidewords that might be used; you can select your own guidewords or checklists, based on your system's application domain.

Next, we build an upside-down tree whose root node is some failure that we want to analyze, and whose other nodes are events or faults that realize or lead to the root node's failure. The edges of the graph indicate the relationships among nodes. Parent nodes are drawn as logic gate operators: an and-gate if both of the child nodes' events must occur for the parent node's event to occur; an or-gate if one child's event is sufficient to cause the parent's event. Sometimes, an edge is labeled n_of_m if the system

TABLE 5.1 Guidewords for Identifying Possible Failures

Guideword     Interpretation
no            No data or control signal was sent or received
more          The volume of data is too much or too fast
less          The volume of data is too low or too slow
part of       The data or control signal is incomplete
other than    The data or control signal has another component
early         The signal arrives too early for the clock
late          The signal arrives too late for the clock
before        The signal arrives too early in the expected sequence
after         The signal arrives too late in the expected sequence


FIGURE 5.11 Fault tree for a security breach. [Figure: the root failure, a security breach, sits under an or-gate, so either event below leads to the failure. One branch is the basic event that a previous logout goes unrecognized; the other is an and-gate whose two basic events, an exposed password and a password left unchanged, must both occur to cause the failure.]

includes m redundant components, where n faulty components lead to the designated failure. Each node represents an independent event; otherwise, the analysis results may be invalid, especially with respect to compound faults.

For example, consider the fault tree presented in Figure 5.11. The tree shows that a security breach could occur either if a previous logout is not recognized (leaving the previous user logged in) or an unauthorized user gains access to the system. For the latter to happen, both of two basic events must happen: a valid user's password is exposed, and the password is not changed between the time of the exposure and the time an unauthorized user attempts to use it.

The concepts we have described can be applied to any system's hardware or software. The challenge is to identify key possible failures, and then trace backwards through the design, looking for data and computations that could contribute to the failure. Data-flow and control-flow graphs can help us trace through a design. As we saw in Chapter 4, a data-flow graph depicts the transfer of data from one process to another. The same ideas can be applied to an architectural design, to show what kinds of data flow among the design's software units. In this way, if one of the failures we are analyzing is data related, we can trace backwards through the data-flow graph to find the software units that could affect the data and thereby cause the fault. Similarly, a control-flow graph depicts possible transfers of control among software units. When applied to a design, a control-flow graph can show how a control thread progresses from one unit to the next during execution. If we are analyzing a failure that is related to computation or to a quality attribute, we can trace backwards through the control-flow graph to find the software units involved in that computation.

Once the fault tree is constructed, we can search for design weaknesses. For example, we can derive another tree, called a cut-set tree, that reveals which event combinations can cause the failure. A cut-set tree is especially useful when the fault tree is


complex and difficult to analyze by eye. The rules for forming the cut-set tree are as follows:

1. Assign the top node of the cut-set tree to match the logic gate at the top of the fault tree.

2. Working from the top down, expand the cut-set tree as follows:

   • Expand an or-gate node to have two children, one for each or-gate child.
   • Expand an and-gate node to have a child composition node listing both of the and-gate children.
   • Expand a composition node by propagating the node to its children, but expanding one of the gates listed in the node.

3. Continue until all leaf nodes are basic events or composition nodes of basic events.

The cut-set is the set of leaf nodes in the cut-set tree. For example, consider the fault tree on the left side of Figure 5.12. G1 is the top logic gate in the fault tree, and its or condition leads us to expand its corresponding node in the cut-set tree to have two child nodes, G2 and G3. In turn, G2 in the fault tree is composed of both G4 and G5, so G2 in the cut-set tree is expanded to have a composition child node with label {G4, G5}. Continuing in this manner, we end up with the cut-set {A1, A3}, {A1, A4}, {A2, A3}, {A2, A4}, {A4, A5}. The cut-set represents the set of minimal event combinations that could lead to the failure listed at the top of the cut-set tree. Thus, if any member of the cut-set is a singleton event set, {Ai}, then the top failure could be caused by a single event, Ai. Similarly, the cut-set element {Ai, Aj} means that the top failure can occur if both events Ai and Aj occur, and cut-set element {Ai, Aj, ..., An} means that failure can occur only if all of the composite events occur. Thus, we have reasoned from a failure to all possible causes of it. (A short program that mechanizes these rules follows Figure 5.12.)

FIGURE 5.12 Cut-set tree generated from a fault tree. [Figure: on the left, a fault tree whose top or-gate G1 has children G2 (an and-gate over G4 and G5) and G3; on the right, the corresponding cut-set tree, whose composition nodes expand down to the leaves {A1, A3}, {A1, A4}, {A2, A3}, {A2, A4}, and {A4, A5}.]


Once we know the points of failure in our design, we can redesign to reduce the vulnerabilities. We have several choices when we find a fault in a design:

• Correct the fault.
• Add components or conditions to prevent the conditions that cause the fault.
• Add components that detect the fault or failure and recover from the damage.

Although the first option is preferable, it is not always possible.

We can also use fault trees to calculate the probability that a given failure will occur by estimating the probability of the basic events and propagating these computations up through the tree; the arithmetic involved is illustrated after this paragraph. But there are some drawbacks to fault-tree analysis. First, constructing the graphs can be time consuming. Second, many systems involve many dependencies, which means that analysis of the design's data-flow and control-flow graphs yields a large number of suspect software units to explore; it is difficult to focus only on the most critical parts of the design unless we have very low coupling. Moreover, the number and kinds of preconditions that are necessary for each failure are daunting and not always easy to spot, and there is no measurement to help us sort them out. However, researchers continue to seek ways to automate the tree building and analysis. In the United States and Canada, fault-tree analysis is used for critical aviation and nuclear applications, where the risk of failure is worth the intense and substantial effort to build and evaluate the fault trees.
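
For independent basic events, the propagation arithmetic is simple. Writing P(E) for the probability of event E:

    P(parent of an and-gate) = P(A) x P(B)
    P(parent of an or-gate) = 1 - (1 - P(A)) x (1 - P(B))

For instance, with illustrative probabilities that are not from the text: if a password exposure has probability 0.01 per year, and the probability that an exposed password goes unchanged long enough to be exploited is 0.5, then the and-gate for unauthorized access in Figure 5.11 yields 0.01 x 0.5 = 0.005 per year.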

Security Analysis

In Chapter 3, we saw how risk analysis is used to determine the likely threats to a project's cost and schedule. When designing the system's architecture, we must carry out a similar analysis, this time to address security risks. Allen et al. (2008) describe the six steps in performing a security analysis:

1. Software characterization. In the first step, we review the software requirements, business case, use cases, SAD, test plans, and other available documents to give us a complete understanding of what the system is supposed to do and why.

2. Threat analysis. Next, we look for threats to the system: Who might be interested in attacking the system, and when might those attacks occur? NIST (2002) lists many sources of threats, including hackers, computer criminals, terrorists, industrial espionage agents, and both naive and malicious insiders. The threat activities might include industrial espionage, blackmail, interception or disruption of communications, system tampering, and more.

3. Vulnerability assessment. Security problems occur when a threat exploits a vulnerability. The vulnerability may be a flaw in the software design or in the code itself. Examples include failure to authenticate a person or process before allowing access to data or a system, or use of a cryptographic algorithm that is easy to break. Vulnerabilities can arise not only from mistakes but also from ambiguity, dependency, or poor error handling. Antón et al. (2004) describe a complete methodology for finding a system's vulnerabilities.

4. Risk likelihood determination. Once the threats and vulnerabilities are documented, we examine the likelihood that each vulnerability will be exploited. We must consider four things: the motivation (i.e., why the person or system is threatening), the


ability of the threat to exploit a vulnerability, the impact of the exploitation (i.e., how much damage will be done, how long the effects will be felt, and by whom), and the degree to which current controls can prevent the exploitation.

5. Risk impact determination. Next, we look at the business consequences if the attack is successful. What assets are threatened? How long will functionality be impaired? How much will recognition and remediation cost? Pfleeger and Ciszek (2008) describe RAND's InfoSecure methodology, which provides guidelines for recognizing and ranking various threats and vulnerabilities in terms of business impact. The highest rank is business-ending, where an organization or business would not be able to recover from the attack. For example, the business may lose the designs for all of its new products. The next category is damaging: a temporary loss of business from which it may be difficult but not impossible to recover. For instance, the business may lose sensitive financial data that can eventually be retrieved from backup media. The next lower category is recoverable; here, the business may lose its corporate benefit policies (such as life and health insurance), which can easily be replaced by the insurance providers. The lowest category is nuisance, where assets such as nonsensitive email are deleted from the server; restoration may not even be necessary.

6. Risk mitigation planning. The final step involves planning to reduce the likelihood and consequences of the most severe risks. InfoSecure performs this planning by first having us devise projects to address each risk. The projects specify both staff impact and policy impact. The projects are prioritized according to the business impact, considering both capital and overhead costs. A final list of projects to implement is based on likelihood of risks, business impact of mitigations, and cash flow.

Although the six security analysis steps can be applied to evaluate how well an architectural design meets security needs, these steps can also be applied later in the development process. Even when the system is operational, it is useful to perform a security analysis; threats and vulnerabilities change over time, and the system should be updated to meet new security needs.

Trade-off Analysis

Often, there are several alternative designs to consider. In fact, as professionals, it is our duty to explore design alternatives and not simply implement the first design that comes to mind. For example, it may not be immediately obvious which architectural styles to use as the basis for a design. This is especially true if the design is expected to achieve quality attributes that conflict with one another. Alternatively, different members of our design team may promote competing designs, and it is our responsibility to decide which one to pursue. We need a measurement-based method for comparing design alternatives, so that we can make informed decisions and can justify our decisions to others.

One Specification, Many Designs. To see how different architecture styles can be used to solve the same problem, consider the problem posed by Parnas (1972):

The [key word in context] KWIC system accepts an ordered set of lines; each line is an ordered set of words, and each word is an ordered set of characters. Any line may be


"circularly shifted" by repeaiedly removing the first word and appending it at the end of the line. The KWIC index system outputs a list of all circular shifts of all lines in alphabeti- cal order.

Such systems are used to index text, supporting rapid searching for keywords. For example, KWIC is used in electronic library catalogues (e.g., find all titles that contain the name "Martin Luther King Jr.") and in online help systems (e.g., find all index entries that contain the word "customize").

Shaw and Garlan (1996) present four different architectural designs to implement KWIC: repository, data abstraction, implicit invocation (a type of publish-subscribe), and pipe-and-filter. The repository-based solution, shown in Figure 5.13, breaks the problem into its four primary functions: input, circular shift, alphabetize, and output. Thus, the system's functionality is decomposed and modularized. These four modules are coordinated by a master program that calls them in sequence. Because the data are localized in their own modules, and not replicated or passed among the computational modules, the design is efficient. However, as Parnas points out, this design is difficult to change. Because the computational modules access and manipulate the data directly, via read and write operations, any change to the data and data format will affect all modules. Also, none of the elements in the design are particularly reusable.

Figure 5.14 illustrates a second design that has a similar decomposition of functionality into sequentially called modules. However, in this design, the data computed by each computational module is stored in that module. For example, the circular-shift module maintains the index to keywords in the text, and the alphabetic-shift module maintains a sorted (alphabetized) version of this index. Each module's data are accessible via access methods, rather than directly. Thus, the modules form data abstractions. In a data abstraction, the methods' interfaces give no hint of the module's data or data representations, making it easier to modify data-related design decisions without affecting other modules. And because data-abstraction modules encompass both the

FIGURE 5.13 Shared-data solution for KWIC (Shaw and Garlan 1996). [Figure: a master control program makes subprogram calls to the input, circular shift, alphabetizer, and output modules in sequence; the modules read and write shared data via direct memory access, with system I/O at the input and output media.]

FIGURE 5.14 Data-module solution for KWIC (Shaw and Garlan 1996). [Figure: a master control program makes subprogram calls to input, circular shift, alphabetize, and output modules; each module stores its own data, accessible only through access methods, with system I/O at the input and output media.]

data to be maintained and the operations for maintaining the data, these modules are easier to reuse in other applications than modules from our first design. On the downside, changes to the system's functionality may not be so easy, because the functionality is so tightly coupled with the data. For example, omitting indices whose circular shifts start with noise words means either (1) enhancing an existing module, making it more complex, more context specific, and less reusable; or (2) inserting a new module to remove useless indices after they have been created, which is inefficient, and modifying existing modules to call the new module (Garlan, Kaiser, and Notkin 1992).

Instead, Garlan, Kaiser, and Notkin (1992) propose a third design, shown in Figure 5.15, in which the data are stored in ADTs that manage generic data types, such

FIGURE 5.15 ADT solution for KWIC (Shaw and Garlan 1996). [Figure: a master control program coordinates input and output; the computational modules are triggered by implicit invocation as well as subprogram calls, the data reside in generic ADTs, and system I/O connects the input and output media.]


as lines of text and sets of indices, rather than in KWIC-based data abstractions. Because ADTs are generic, they are even more reusable than data abstractions. Moreover, the data and operations on data are encapsulated in the ADT modules, so changes to data representation are confined to these modules. This design resembles the first design, in that the system's functionality is decomposed and modularized into computational modules. However, in this design, many of the computational modules are triggered by the occurrence of events rather than by explicit procedure invocation. For example, the circular-shift module is activated whenever a new line of text is input. The design can be easily extended, via new computational modules whose methods are triggered by system events; existing modules do not need to be modified to integrate the new modules into the system.

One complication with an implicit-invocation design is that multiple computational methods may be triggered by the same event. If that happens, all the triggered methods will execute, but in what order? If we do not impose an order, then the triggered methods may execute in any arbitrary order, which may not be desired (e.g., a method that expands macros in new lines of text should execute before a method that inserts a new line into the ADT). However, to control execution order, we must devise some generic strategy that applies to both current and future sets of methods triggered by the same event, and we cannot always predict what methods may be added to the system in the future.

This complication leads us to a fourth design, based on a pipe-and-filter architecture (Figure 5.16), where the sequence of processing is controlled by the sequence of the filter modules. This design is easily extended to include new features, in that we can simply insert additional filters into the sequence. Also, each filter is an independent entity that can be changed without affecting other filters. The design supports the reuse of filters in other pipe-and-filter applications, as long as a filter's input data is in the form that it expects (Shaw and Garlan 1996). The filters may execute in parallel, processing inputs as they are received (although the Alphabetizer cannot output its results until it has received and sorted all of the input lines); this concurrent processing can enhance performance. Unlike the other designs, a data item is no longer stored in any location, but rather flows from one filter to another, being transformed along the way. As such, the design is not conducive to changes that would require the storage of persistent data,

FIGURE 5.16 Pipe-and-filter solution for KWIC (Shaw and Garlan 1996). [Figure: data flow from the input medium through the Input, Circular Shift, Alphabetizer, and Output filters, connected by pipes, to the output medium.]


TABLE 5.2 Comparison of Proposed KWIC Solutions

Attribute                            Shared Data   Data Abstraction   Implicit Invocation   Pipe and Filter
Easy to change algorithm                  -               -                   +                   +
Easy to change data representation        -               +                   -                   -
Easy to add functionality                 +               -                   +                   +
Good performance                          +               -                   -                   +
Efficient data representation             +               +                   +                   -
Easy to reuse                             -               +                   +                   +

Source: Adapted from Shaw and Garlan (1996).

such as an undo operation. Also, there are some space inefficiencies: the circular shifts can no longer be represented as indices into the original text and instead are permuted copies of the original lines of text. Moreover, the data item is copied each time a filter outputs its results to the pipe leading to the next filter.
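
To make the pipe-and-filter variant concrete, here is a minimal Python sketch (an illustration, not Shaw and Garlan's code); each function plays the role of a filter, and ordinary function composition stands in for the pipes. Note how the shifts are permuted copies of the lines, which is exactly the space cost mentioned above.

    def circular_shifts(line):
        # Circular Shift filter: emit every rotation of the line's words.
        words = line.split()
        return [" ".join(words[i:] + words[:i]) for i in range(len(words))]

    def kwic(lines):
        # Input -> Circular Shift -> Alphabetizer -> Output.
        shifted = [s for line in lines for s in circular_shifts(line)]
        return sorted(shifted, key=str.lower)    # the Alphabetizer filter

    for entry in kwic(["Pattern Oriented Software Architecture"]):
        print(entry)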

We can see that each design has its positive and negative aspects. Thus, we need a method for comparing different designs that allows us to choose the best one for our purpose.

Comparison Tables. Shaw and Garlan (1996) compare the four designs according to how well they achieve important quality attributes and then organize this information as a table, shown as Table 5.2. Each row represents a quality attribute, and there is one column for each of the proposed designs. A minus in a cell means that the attribute represented by the row is not a property of the design for that column; a plus means that the design has the attribute. We can see from the table that the choice is still not clear; we must assign priorities to the attributes and form weighted scores if we want to select the best design for our particular needs.

We start by prioritizing quality attributes with respect to how important the attributes are to achieving our customer's requirements and our development strategy. For example, on a scale of 1 to 5, where 5 indicates that an attribute is most desirable, we may assign a "5" to reusability if the design is to be reused in several other products.

Next, we form a matrix, shown in Table 5.3, labeling the rows of the matrix with the attributes we value. The second column lists the priorities we have determined for

TABLE 5.3 Weighted Comparison of Proposed KWIC Solutions

Attribute                            Priority   Shared Data   Data Abstraction   Implicit Invocation   Pipe and Filter
Easy to change algorithm                 1           1              2                    4                   5
Easy to change data representation       4           1              5                    4                   1
Easy to add functionality                3           4              1                    3                   5
Good performance                         3           4              3                    3                   5
Efficient data representation            3           5              5                    5                   1
Easy to reuse                            5           1              4                    5                   4
Total                                               49             69                   78                  62


each of the attributes. In the remaining columns, we record how well each design achieves each attribute, on a scale of 1 (low achievement) to 5 (high achievement). Thus, the entry in the cell in the ith row and jth column rates the design represented by column j in terms of how well it satisfies the quality attribute represented by row i.

Finally, we compute a score for each design by multiplying the priority of each row by the design's score for that attribute, and summing the results. For example, the pipe-and-filter design score would be calculated as 1 x 5 + 4 x 1 + 3 x 5 + 3 x 5 + 3 x 1 + 5 x 4 = 62. The scores of the other designs are listed on the bottom row of Table 5.3.
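
The bookkeeping is easy to script. The following Python fragment, a sketch using the numbers from Table 5.3, reproduces all four totals:

    priorities = [1, 4, 3, 3, 3, 5]              # one per attribute, from Table 5.3
    ratings = {
        "shared data":         [1, 1, 4, 4, 5, 1],
        "data abstraction":    [2, 5, 1, 3, 5, 4],
        "implicit invocation": [4, 4, 3, 3, 5, 5],
        "pipe and filter":     [5, 1, 5, 5, 1, 4],
    }

    for design, scores in ratings.items():
        total = sum(p * r for p, r in zip(priorities, scores))
        print(design, total)     # 49, 69, 78, and 62, as in Table 5.3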

In this case, we would choose the implicit-invocation design. However, the priorities and ratings, as well as the choice of attributes, are subjective and depend on the needs of our customers and users, as well as our preferences for building and maintaining systems. Other attributes we might have considered include

• modularity
• testability
• security
• ease of use
• ease of understanding
• ease of integration

Including these attributes (in particular, ease of integration) in our analysis might have affected our final decision. Other evaluators are likely to score the designs differently and may reach different conclusions. As we learn more about measuring design attributes, we can remove some of the subjectivity from this rating approach. But design evaluation will always require some degree of subjectivity and expert judgment, since each of us has different perspectives and experiences.

Cost-Benefit Analysis

Trade-off analysis gives us a means for evaluating design alternatives, but it focuses only on the technical merits of designs, in terms of how well the designs achieve desired quality attributes. At least as important are the business merits of a design: Will the benefits of a system based on a particular design outweigh the costs of its implementation? If there are competing designs, which one will give us the highest return on investment?

Suppose that we are responsible for maintaining the catalogue for a national online video rental company. Customers use the catalogue to search for and request video rentals, to identify actors in videos, and to access video reviews. KWIC is used to index catalogue entries for fast lookup by keywords in video titles, actors' names, and so on. The number of users, as well as the size of the catalogue, has been growing steadily, and the response time for querying the catalogue has gotten noticeably longer, to the point that users have started to complain. We have some ideas about how to improve the system's response time. Figure 5.17 shows the part of the system architecture that is affected by our proposed changes, including the impacts of our three ideas:

1. We could eliminate entries in the KWIC index that start with noise words, such as articles ("a, the") and prepositions. This change reduces the number of indices to


FIGURE 5.17 Proposed changes to KWIC. [Figure: the implicit-invocation KWIC architecture, with master control, input, index, and query-processing modules; the proposed additions appear in dashed lines: a noise-word filter between the circular-shift and sorted-indices modules, a bin-based representation inside the Index module, and a Dispatcher feeding a second Query Processor.]

be searched when servicing a lookup query. It would involve adding a filter module, between the circular-shift and sorted-indices modules, that aborts any request to store an index to a noise word. (The design changes involve a new filter module, shown in dashed lines in Figure 5.17.)

2. We could change the representation of indices to be bins of indices, where each bin associates a keyword with the set of indices that point to lines that contain that word. This change reduces the time it takes to find subsequent instances of a keyword once the first instance (i.e., the bin) has been found. (The design changes the internal data representation within the Index module, shown in dashed lines in Figure 5.17.)

3. We could increase server capacity by adding another computer, Query Processor 2, that shares the task of processing queries. This change involves not only buying the server, but also changing the software architecture to include a Dispatcher module that assigns queries to servers. (The design changes shown involve a new Query Processor 2 and a Dispatcher, shown in dashed lines in Figure 5.17.)

All three proposals would improve the time it takes to look up catalogue entries, but which one would be the most effective?

Most companies define "effectiveness" of a system in terms of value and cost: how much value will a proposed change add to the system (or how much value will a new


system add to the company's business) compared with how much it will cost to implement the change. A cost-benefit analysis is a widely used business tool for estimating and comparing the costs and benefits of a proposed change.

Computing Benefits. A cost-benefit analysis usually contrasts financial benefits with financial costs. Thus, if the benefit of a design is in the extra features it provides, or in the degree to which it improves quality attributes, then we must express these benefits as a financial value. Costs are often one-time capital expenses, with perhaps some ongoing operating expenses, but benefits almost always accrue over time. Thus, we calculate benefits over a specific time period, or calculate the time it would take for benefits to pay for the costs.

For example, let us compute the benefits of the above design proposals, with respect to how well we expect them to improve the response time for querying the video catalogue. Suppose that the current catalogue contains 70,000 entries, and that the average size of an entry (i.e., the number of words per record, including the video's title, the director's name, the actors' names, and so on) is 70 words, for a total of almost five million circular shifts. On average, it currently takes the system 0.016 seconds to find and output all entries that contain two keywords, which means that the system accommodates about 60 such requests per second. However, at peak times, the system will receive up to 100 queries per second, and the system is eventually expected to handle 200 queries per second.

Table 5.4 summarizes the benefits of implementing the three designs. Eliminating noise words does not significantly improve performance, mainly because most of the words in an entry are names, and names do not have noise words. In contrast, adding a second server almost doubles the number of requests that the system can service per second. And restructuring the sorted-index data structure provides the greatest benefits, because it reduces the search space (to keywords rather than circular shifts), and because the result of the search returns all of the associated indices rather than just one.

Next, we compute the financial value of these improvements. The value of an improvement depends on how badly it is needed, so value might not increase proportionally with increases in quality. In some cases, a small improvement may be of little

TABLE 5.4 Cost-Benefit Analysis of Design Proposals

                               Eliminate Noise Words   Store Indices in Bins   Add Second Server
Benefits
  Search time                       0.015 sec               0.002 sec              0.008 sec
  Throughput                        72 requests/sec         500 requests/sec       115 requests/sec
  Added value                       $24,000/yr              $280,000/yr            $110,000/yr
Costs
  Hardware                                                                         $5,000
  Software                          $50,000                 $300,000               $200,000
  Business losses                   $28,000+/yr
  Total costs, first year           $78,000                 $300,000               $205,000


FIGURE 5.18 Value added by improving a quality attribute (Bass, Clements, and Kazman 2003). [Figure: three plots of value against the degree of a quality attribute, each marked at the current and improved levels; value may stay low until the improvement is large, grow steadily with the improvement, or level off early.]

value if it is not enough to address a problem; in other cases, small improvements are significant, and further improvements yield little additional value. Figure 5.18 shows several ways in which value may increase as a quality attribute improves. Given such a value function for a particular quality attribute for a particular system, the net value of an improvement is the area under the curve between the current and improved measures of the quality attribute.

For simplicity, suppose that every additional request per second that the system can process, up to 200 requests/second, would save the company $2000 per year, based on retained customers and reduced calls to technical support. Given this value function, eliminating noise words would save the company $24,000 per year, calculated as

(72 requests/second - 60 requests/second) x $2000/year = $24,000/year

Adding a second server would save the company $110,000 per year, calculated as

(115 requests/second - 60 requests/second) x $2000/year = $110,000/year

The second design option would improve the system's throughput beyond what will be needed (the system will receive at most 200 requests per second). Therefore, the value added by changing to bin-based indexing is the maximum possible value:

(200 requests/second - 60 requests/second) x $2000/year = $280,000/year

If there are multiple attributes to consider (e.g., the time it takes to update, reindex, and re-sort the catalogue), then a design's total financial added value is the sum of the added values ascribed to each of the attributes; some of these added values may be negative if a design improves one attribute at the expense of a conflicting attribute.

Computing Return on Investment (ROI). The return on investment of making one of these design changes is the ratio of the benefits gained from making the change to the cost of its implementation:

ROI = Benefits/Cost

These costs are estimated using the techniques described in Chapter 3. The estimated costs of the proposed design changes in our example are shown in Table 5.4. In general,


an ROI of 1 or greater means that the design's benefits outweigh its costs. The higher the ROI value, the more effective the design.

Another useful measure is the payback period: the length of time before accumulated benefits recover the costs of implementation. In our example, the payback period for restructuring the sorted-index module (design 2) is

$300,000/$280,000 = 1.07 years = approximately 13 months
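
As a worked illustration, the Python sketch below applies the first-year figures from Table 5.4, treating one year of added value as the benefit (an assumption of this example; a longer horizon would change the ratios):

    benefits_per_year = {"eliminate noise words": 24_000,
                         "store indices in bins": 280_000,
                         "add second server": 110_000}
    first_year_costs = {"eliminate noise words": 78_000,
                        "store indices in bins": 300_000,
                        "add second server": 205_000}

    for option, benefit in benefits_per_year.items():
        cost = first_year_costs[option]
        roi = benefit / cost                     # ROI = Benefits/Cost
        payback_months = 12 * cost / benefit     # months to recover the cost
        print(f"{option}: ROI {roi:.2f}, payback about {payback_months:.0f} months")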

We return to calculations such as these in Chapter 12, where we investigate the best techniques for evaluating the return on investing in software reuse.

Prototyping

Some design questions are best answered by prototyping. A prototype is an executable model of the system we are developing, built to answer specific questions we have about the system. For example, we may want to test that our protocols for signaling, synchronizing, coordinating, and sharing data among concurrent processes work as expected. We can develop a prototype that implements processes as stubs and that exercises our communication protocols, so that we can work out the kinks and gain confidence in our plans for how the processes will interact. We may also learn something about the limits (e.g., minimum response times) of our design.

Prototyping offers different advantages in the design stage from those gained during requirements analysis. As we saw in Chapter 4, prototyping is an effective tool for testing the feasibility of questionable requirements and for exploring user-interface alternatives. The process of developing the prototype encourages us to communicate with our customers and to explore areas of uncertainty that emerge as we think about the system's requirements. As long as our customers understand that the prototype is an exploratory model, not a beta version of the product, prototyping can be useful in helping both us and our customers understand what the system is to do.

In the design phase, prototyping is used to answer design questions, compare design alternatives, test complex interactions, or explore the effects of change requests. A prototype omits many of the details of functionality and performance that will be part of the real system, so that we can focus narrowly on particular aspects of the system. For example, a small team may generate a prototype for each design issue to be resolved: one prototype for modeling the user interface, one for certifying that a purchased component is compatible with our design, one for testing the network performance between remote processes, one for each security tactic being considered, and so on. The final design is then a synthesis of the answers obtained from the individual prototypes.

If a prototype is intended only to explore design issues, we may not give its development the same careful attention that we would give to a real system. For this reason, we frequently discard the prototype and build the actual system from scratch, rather than try to salvage parts from the prototype. In these cases, the throw-away prototype is meant to be discarded; its development is intended only to assess the feasibility of particular characteristics in a proposed design. In fact, Brooks (1995) recommends building a system, throwing it away, and building it again. The second version of the system benefits from the learning and the mistakes made in the process of building the first system.


Alternatively, we may attempt to reuse parts of the prototype in the actual system. By taking care in the design and development of the prototype's components, we can produce a prototype that answers questions about the design and at the same time provides building blocks for the final system. The challenge is to ensure that this style of prototyping is still fast. If the prototype cannot be built more quickly than the actual system, then it loses its value. Moreover, if too much effort is invested in developing a quality prototype, we may become too attached to the design decisions and other assets embedded in the prototype, and we may be less open to considering design alternatives.

An extreme version of this approach is called rapid prototyping, in which we progressively refine the prototype until it becomes the final system. We start with a prototype of the requirements, in the form of a preliminary user interface that simulates the system's responses to user input. In successive iterations of the prototype, we flesh out the system's design and implementation, providing the functionality promised by the initial user interface. In many ways, rapid prototyping resembles an agile development process, in that the system's development is iterative and there is continual feedback from the customer. The difference is that the initial prototype is a user-interface shell rather than the core of the operational system.

Boehm, Gray, and Seewaldt (1984) studied projects that were developed using rapid prototyping, and they report that such projects performed about as well as those developed using traditional design techniques. In addition, 45 percent less effort was expended and 40 percent fewer lines of code were generated by the developers who used prototypes. Also, the speed and efficiency of the systems developed with prototypes were almost the same as those of the traditionally developed systems. However, there are some risks in using rapid prototyping. The biggest risk is that by showing customers an operational prototype, we may mislead them into believing that the system is close to being finished. A related risk is that customers may expect the final system to exhibit the same performance characteristics as the prototype, which could be unrealistically fast due to omitted functionality, smaller scale, and communication delays. Also, because of the lack of documentation, prototyping as a development process is best suited for smaller projects involving smaller development teams.

5.8 DOCUMENTING SOFTWARE ARCHITECTURES

A system's architecture plays a vital role in its overall development: it is the basis on which most subsequent decisions about design, quality assurance, and project management are made. As such, it is crucial that the system's developers and stakeholders have a consistent understanding of the system's architecture. The SAD serves as a repository for information about the architectural design and helps to communicate this vision to the various members of the development team.

The SAD's contents depend heavily on how it will be used. That is, we try to anticipate what information will be sought by different types of SAD readers. Customers will be looking for a natural-language description of what the system will do. Designers will be looking for precise specifications of the software units to be developed. A performance analyst will want enough information about the software design, the computing platform, and the system's environment to carry out analyses of likely speed and load. Different team members read the SAD, each with a different purpose. For example, coders read the SAD to understand the overall design and make sure that each design feature or function is implemented somewhere in the code. Testers read the SAD to ensure that their tests exercise all aspects of the design. Maintainers use the SAD as a guide, so that architectural integrity is maintained as problems are fixed and new features implemented.

Given these uses, a SAD should include the following information:

• System overview: This section provides an introductory description of the system, in terms of its key functions and usages.

• Views: Each view conveys information about the system's overall design structure, as seen from a particular perspective. In addition to the views, we also document how the views relate to one another. Because the views are likely to be read by all SAD readers, each section is prefaced with a summary of the view and its main elements and interactions; technical details are addressed in separate subsections.

• Software units: We include a complete catalogue of the software units to be developed, including precise specifications of their interfaces. For each software unit, we indicate all of the views in which the unit appears as an element.

• Analysis data and results: This section contains enough details about the system's architecture, computing resources, and execution environment so that a design analyst can measure the design's quality attributes. The results of the analyses are also recorded.

• Design rationale: Design decisions must be explained and defended, and the rationale for the chosen design is recorded to ensure that project managers and future architects have no need to revisit design alternatives that were originally dismissed for good reason.

• Definitions, glossary, acronyms: These sections provide all readers with the same understanding of the technical and domain vocabulary used throughout the document.

In addition, the SAD is identified with a version number or date of issue, so that readers can easily confirm that they are working with the same version of the document and that the version is recent.

There are few guidelines, in the form of standards or recommended templates, for how to organize all of this information as a useful technical reference. For example, the IEEE recommendations for documenting software architectures, IEEE Standard 1471-2000, prescribe what information to include in an architectural document, but say little about how to structure or format the information. Thus, it makes sense to develop an in-house standard for organizing the SAD's contents, including guidance on the document's structure, contents, and the source of each type of information (e.g., whether the writer must collect or create the information). More importantly, a standard helps the reader know how to navigate through the document and find information quickly. Like other reference texts, such as dictionaries and encyclopedias, technical documentation such as the SAD is rarely read from cover to cover; most users consult the SAD for quick queries about design decisions that are described at a high level in one part of the document and in detail elsewhere. Thus, the SAD should be organized and indexed for easy reference.


As Bass, Clements, and Kazman (2003) note,

One of the most fundamental rules for technical documentation in general, and software architecture documentation in particular, is to write from the point of view of the reader. Documentation that was easy to write but is not easy to read will not be used, and "easy to read" is in the eye of the beholder - or in this case, the stakeholder.

Given the many uses of the SAD, we have many readers to satisfy with a single document. As such, we may choose to split the SAD into different but related documents, where each piece addresses a different type of reader. Alternatively, we can try to merge all information into a single document, with directions to guide different readers to their information of interest. For example, we may suggest that customers read the system overview plus summaries of each view. By contrast, developers would be expected to read details of the software units they are to implement, the uses view to see how the units relate to the rest of the system, any other view that uses the corresponding architectural elements, and the mappings among views to see if the units correspond to other architectural elements.

Mappings among Views

How many and which views to include in the SAD depends on the structure of the system being designed and the quality attributes that we want to measure. At the very least, the SAD should contain a decomposition view showing the design's constituent code units, plus an execution view showing the system's runtime structure. In addition, a deployment view that assigns software units to computing resources is essential if we want to reason about the system's performance. Alternatively, we may include multiple execution views, each based on a different architectural style, if each of the styles reflects useful ways of thinking about the system's structure and interactions. For example, if our design is based on a publish-subscribe style, in which components are triggered by events, we may also include a pipe-and-filter view that depicts the order in which components are to be invoked.

Because our design is documented as a collection of views, we should show how the views relate to one another. If one view details an element that is a software unit in a more abstract view, then this relationship is straightforward. However, if two views show different aspects of the same part of the design, and if there is no obvious correspondence between the elements in the two views, then mapping this correspondence is essential. For example, it is useful to record how runtime components and connectors in an execution view map to code-level units in a decomposition view. Such a mapping documents how the components and connectors will be implemented. Similarly, it is useful to record how elements in one decomposition view (e.g., a module view) map to elements in another decomposition view (e.g., a layer view). Such a mapping reveals all of the units needed to implement each layer.

Clements et al. (2003) describe how to document a mapping between two views as a table, indexed by the elements in one of the views. For each element in the first view, the table lists the corresponding element(s) in the second view and describes the nature of the correspondence. For example, the indexed element implements the other element(s), or is a generalization of the other element(s). Because it is possible that parts of elements in one view map to parts of elements in the other view, we should also indicate whether the correspondence is partial or complete.
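As a sketch of what such a mapping might record, consider the following Python fragment; the element and unit names are invented, and a real SAD would use the project's own views. Each entry is indexed by an element of the execution view and lists the corresponding decomposition-view units, the nature of the correspondence, and whether it is partial or complete:

    # Hypothetical mapping between an execution view and a decomposition view,
    # indexed by elements of the execution view.
    view_mapping = {
        "DataCollector (runtime component)": [
            ("sensor_io module", "is implemented by", "complete"),
            ("buffer_mgmt module", "is implemented by", "partial"),
        ],
        "EventBus (runtime connector)": [
            ("messaging layer", "is implemented by", "complete"),
        ],
    }

    for element, correspondences in view_mapping.items():
        for unit, nature, extent in correspondences:
            print(f"{element} {nature} {unit} ({extent})")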


Documenting Rationale

In addition to design decisions, we also document design rationale, outlining the critical issues and trade-offs that were considered in generating the design. This guiding philosophy helps the customers, project managers, and other developers (particularly maintainers) understand how and why certain parts of the design fit together. It also helps the architect remember the basis for certain decisions, thereby avoiding the need to revisit these decisions.

Rationale is usually expressed in terms of the system's requirements, such as design constraints that limit the solution space, or quality attributes to be optimized. This section of the SAD lists decision alternatives that were considered and rejected, along with a justification for why the chosen option is best; if several alternatives are equally good, then those should be described too. The design rationale may also include an evaluation of the potential costs, benefits, and ramifications of changing the decision.

Good practice dictates that we provide rationale for lower-level design decisions, such as details about software-unit interfaces or the structure of a view, as well as overall architectural decisions, such as choice of architectural style(s). But we need not justify every decision we make. Clements et al. (2003) offer good advice on when to document the rationale behind a decision:

• Significant time was spent on considering the options and arriving at a decision.

• The decision is critical to achieving a requirement.

• The decision is counterintuitive or raises questions.

• It would be costly to change the decision.
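For example, a rationale entry in the SAD might look like the following; the decision, requirement numbers, and alternatives are invented for illustration:

    Decision: Use a publish-subscribe style for event notification.
    Alternatives considered: Direct procedure calls (rejected: couples event
        producers to consumers); polling a shared repository (rejected: cannot
        meet response-time requirement R12).
    Rationale: Decouples feature additions from existing components; critical
        to meeting extensibility requirement R7.
    Cost of change: High, because most component interfaces assume
        asynchronous event delivery.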

5.9 ARCHITECTURE DESIGN REVIEW

Design review is an essential part of engineering practice, and we evaluate the quality of a SAD in two different ways. First, we make sure that the design satisfies all of the requirements specified by the customer. This procedure is known as validating the design. Then we address the quality of the design. Verification involves ensuring that the design adheres to good design principles, and that the design documentation fits the needs of its users. Thus, we validate the design to make sure that we are building what the customer wants (i.e., is this the right system?), and we verify our documentation to help ensure that the developers will be productive in their development tasks (i.e., are we building the system right?).

Validation

During validation, we make sure that all aspects of the requirements are addressed by our design. To do that, we invite several key people to the review:

• the analyst(s) who helped to define the system requirements

• the system architect(s)

• the program designer(s) for this project

• a system tester

• a system maintainer

• a moderator

• a recorder

• other interested system developers who are not otherwise involved in this project

The number of people actually at the review depends on the size and complexity of the system under development. Every member of the review team should have the authority to act as a representative of his or her organization and to make decisions and commitments. The total number should be kept small, so that discussion and decision making are not hampered.

The moderator leads the discussion but has no vested interest in the project itself. He or she encourages discussion, acts as an intermediary between opposing viewpoints, keeps the discussion moving, and maintains objectivity and balance in the process.

Because it is difficult to take part in the discussion and also record the main points and outcomes, another impartial participant is recruited to serve as recorder. The recorder does not get involved in the issues that arise; his or her sole job is to document what transpires. However, more than stenographic skills are required; the recorder must have enough technical knowledge to understand the proceedings and record relevant technical information.

Developers who are not involved with the project provide an outsider's perspective. They can be objective when commenting on the proposed design, because they have no personal stake in it. In fact, they may have fresh ideas and can offer a new slant on things. They also act as ad hoc verifiers, checking that the design is correct, is consistent, and conforms to good design practice. By participating in the review, they assume equal responsibility for the design with the designers themselves. This shared responsibility forces all in the review process to scrutinize every design detail.

During the review, we present the proposed architecture to our audience. In doing so, we demonstrate that the design has the required structure, function, and characteristics specified by the requirements documents. Together, we confirm that the proposed design includes the required hardware, interfaces with other systems, input, and output. We trace typical execution paths through the architecture, to convince ourselves that the communication and coordination mechanisms work properly. We also trace exceptional execution paths, to review the design measures we have taken to detect and recover from faults and bad inputs. To validate nonfunctional requirements, we review the results of analyses that have been performed to predict likely system behavior, and we examine any documented design rationale that pertains to quality attributes.

Any discrepancies found during the review are noted by the recorder and discussed by the group as a whole. We resolve minor issues as they appear. However, if major faults or misunderstandings arise, we may agree to revise the design. In this case, we schedule another design review to evaluate the new design. Just as the Howells would rather redo the blueprints of their house than tear out the foundation and walls later and start again, we too would rather redesign the system now, on paper, instead of later, as code.


Verification

Once we have convinced ourselves that the design will lead to a product with which the customer will be happy, we evaluate the quality of the design and the documentation. In particular, we examine the design to judge whether it adheres to good design principles:

• Is the architecture modular, well structured, and easy to understand?

• Can we improve the structure and understandability of the architecture?

• Is the architecture portable to other platforms?

• Are aspects of the architecture reusable?

• Does the architecture support ease of testing?

• Does the architecture maximize performance, where appropriate?

• Does the architecture incorporate appropriate techniques for handling faults and preventing failures?

• Can the architecture accommodate all of the expected design changes and extensions that have been documented?

The review team also ensures that the documentation is complete by checking that there is an interface specification for every referenced software unit and that these specifications are complete. The team also makes sure that the documentation describes alternative design strategies, with explanations of how and why major design decisions were made.

An active design review (Parnas and Weiss 1985) is a particularly effective method for evaluating the quality of the SAD and determining whether it contains the right information. In an active review, reviewers exercise the design document by using it in ways that developers will use the final document in practice. That is, rather than reading the documentation and looking for problems, which is characterized as a passive review process, the reviewers are given or devise questions that they must answer by looking up information in the SAD. Each reviewer represents a different class of reader and is asked questions suitable to his or her use of a SAD. Thus, a maintainer may be asked to determine which software units would be affected by an expected change to the system, whereas a program designer may be asked to explain why an interface's preconditions are necessary.

In general, the point of the design review is to detect faults rather than correct them. It is important to remember that those who participate in the review are investigating the integrity of the design, not of the designers. Thus, the review is valuable in emphasizing to all concerned that we are working toward the same goal. The criticism and discussion during the design review are egoless, because comments are directed at the process and the product, not at the participants. The review process encourages and enhances communication among the diverse members of the team.

Moreover, the process benefits everyone by finding faults and problems when they are easy and inexpensive to correct. It is far easier to change something in its abstract, conceptual stage than when it is already implemented. Much of the difficulty and expense of fixing faults late in development derive from tracking a fault to its source. If a fault is spotted in the design review, there is a good chance that the problem is located somewhere in the design. However, if a fault is not detected until the system is operational, the root of the problem may be in several places: the hardware, the software, the design, the implementation, or the documentation. The sooner we identify a problem, the fewer places in which we have to look to find its cause.

5.10 SOFTWARE PRODUCT LINES

Throughout this chapter, we have focused on the design and development of a single software system. But many software companies build and sell multiple products, often working with different kinds of customers. Some successful companies build their reputations and their set of clients by specializing in particular application domains, such as business support software or computer games. That is, they become known not only for providing quality software but also for their understanding of the special needs of a particular market. Many of these companies succeed by reusing their expertise and software assets across families of related products, thereby spreading the cost of development across products and reducing the time to market for each one.

The corporate strategy for designing and developing the related products is based on the reuse of elements of a common product line. The company plans upfront to manufacture and market several related products. Part of the planning process involves deciding how those products will share assets and resources. The products may appear to be quite different from one another, varying in size, quality, features, or price. But they have enough in common to allow the company to take advantage of economies of scope by sharing technologies (e.g., architecture, common parts, test suites, or environments), assembly facilities (e.g., workers, assembly plants), business plans (e.g., budgets, release schedules), marketing strategies and distribution channels, and so on. As a result, the cost and effort to develop the family of products is far less than the sum of the costs to develop the products individually. The product-line notion is not particular to software; it has been used for years in all types of manufacturing. For example, an automobile company offers multiple models of cars, each with its own specifications of passenger and cargo space, power, and fuel economy. Individual brands have their own distinct appearance, dashboard interfaces, feature packages, and luxury options; the brand is targeted at a particular market and sold at prices within a particular range. But many of the models are built on the same chassis, use common parts from the same suppliers, use common software, are assembled at the same manufacturing plant, and are sold at the same dealerships as other models.

A distinguishing feature of building a product line is the treatment of the derived products as a product family. Their simultaneous development is planned from the beginning. The family's commonalities are described as a collection of reusable assets (including requirements, designs, code, and test cases), all stored in a core asset base. When developing products in the family, we retrieve assets from the base as needed. As a result, development resembles an assembly line: many of the components can be adapted from components in the core asset base and then plugged together, rather than being developed from scratch. The design of the core asset base is planned carefully, and it evolves as the product family grows to include new products.

Because the products in the product family are related, the opportunities for reuse abound and extend well beyond the reuse of code units. Clements and Northrop (2002) describe why a number of candidate elements may belong in the core asset base:

• Requirements: Related products often have common functional requirements and quality attributes.

• Software architecture: Product lines are based on a common architecture that realizes the product family's shared requirements. Differences among family members can be isolated or parameterized as variations in the architecture; for example, features, user interfaces, computing platforms, and some quality attributes can be altered to address particular product needs.

• Models and analysis results: Models and analyses (e.g., performance analysis) of an individual product's architecture are likely to build on analyses of the product-line architecture. So it is important to get the product-line architecture right, because it affects the performance of so many associated products.

• Software units: The reuse of software units is more than just code reuse. It includes the reuse of significant design work, including interface specifications, relationships and interactions with other units, documentation, test cases, scaffolding code (i.e., code developed to support testing and analysis that is not delivered with the final product), and more.

• Testing: Reuse of testing includes test plans, test documentation, test data, and testing environments. It may also include the test results of reused software units.

• Project planning: Project budgets and delivery schedules of product-family members are likely to be more accurate than those of products developed from scratch, because we use our knowledge of the costs and schedules of past family members as guides in estimating the costs of subsequent members.

• Team organization: Because product-family members have similar design structures, we can reuse information from past decisions on how to decompose a product into work pieces, how to assign work pieces to teams, and what skill sets those teams need.

According to the Software Engineering Institute's Product Line Hall of Fame (at http://www.sei.cmu.edu/productlines/plp_hof.html), companies such as Nokia, Hewlett-Packard, Boeing, and Lucent report a three- to sevenfold improvement in development costs, time-to-market, and productivity from using a product-line approach to software development. Sidebar 5.8 describes one company's conversion to a software product line.

Strategic Scoping

Product lines are based not just on commonalities among products but also on the best way to exploit them. First, we employ strategic business planning to identify the family of products we want to build. We use knowledge and good judgment to forecast market trends and predict the demand for various products. Second, we scope our plans, so that we focus on products that have enough in common to warrant a product-line approach to development. That is, the cost of developing the (common) product line must be more than offset by the savings we expect to accrue from deriving family members from the product line.


SIDEBAR 5.8 PRODUCT-LINE PRODUCTIVITY

Brownsword and Clements (1996) report on the experiences of CelsiusTech AB, a Swedish naval defense contractor, in its transition from custom to product-line development. The transition was motivated by desperation. In 1985, the company, then Philips Elektronikindustrier AB, was awarded two major contracts simultaneously, one for the Swedish Navy and one for the Danish Navy. Because of the company's past experiences with similar but smaller systems, which resulted in cost overruns and scheduling delays, senior managers questioned whether they would be able to meet the demands of both contracts, particularly the promised (and fixed) schedules and budgets, using the company's current practices and technologies.

This situation provided the genesis of a new business strategy: recognizing the potential business opportunity of selling and building a series, or family, of related systems rather than some number of specific systems.... The more flexible and extendable the product line, the greater the business opportunities. These business drivers ... forged the technical strategy. (Brownsword and Clements 1996)

Development of the product line and the first system were initiated at the same time; development of the second system started six months later. The two systems plus the product line were completed using roughly the same amount of time and staff that was needed previously for a single product. Subsequent products had shorter development timelines. On average, 70-80 percent of the seven systems' software units were product-line units (re)used as is.


Product-line scoping is a challenging problem. If we strive for some notion of optimal commonality among the products (e.g., by insisting on reusing 80 percent of the code), we may exclude some interesting and profitable products that lie outside of the scope. On the other hand, if we try to include any product that looks related, we reduce the degree of commonality among the derived products, and consequently the amount of savings that can be achieved. A successful product line lies somewhere in the middle of these two extremes, with a core architecture that strongly supports the more promising products that we want to build.

In the end, a product line's success depends on both its inherent variability and the degree of overlap between its core asset base and the derived products. To obtain the desired productivity-improvement numbers, each derived product's architecture must start with the product-line architecture, incorporating a significant fraction of its software units from the product line's core asset base. Then the product design adds easily accommodated changes, such as component replacements or extensions and retractions of the architecture. The less the final derived product has in common with the product line, the more the derived product's development will resemble a completely new (i.e., nonderived) product.


Advantages of Product-Line Architecture

A product-line architecture promotes planned modifiability, where known differences among product-family members are isolated in the architecture to allow for easy adaptation. Examples of product-line variability are given below:

• Component replacements: A software unit can be realized by any implementation that satisfies the unit's interface specification. Thus, we can instantiate a new product-family member by changing the implementations of one or more software units. For example, we can use a layered architecture to isolate a product family's interfaces to users, communication networks, other software components, the computing platforms, input sensors, and so on. This way, we can instantiate new family members by reimplementing the interface layers to accommodate new interfaces. Similarly, we can improve quality attributes, such as performance, by reimplementing key software units.

• Component specializations: Specialization is a special case of component replacement that is most strongly supported by object-oriented designs. We can replace any class with a subclass whose methods augment or override the parent's methods. Thus, we can instantiate new family members by creating and using new subclasses, as illustrated in the sketch below.

• Product-line parameters: We can think of software units as parameters in the product-line architecture, so that varying the parameters results in a set of possible system configurations. For example, parameters could be used to specify feature combinations, which are then instantiated as component additions or replacements. If parameters are the only source of product-line variation, then we can automatically configure and generate products by setting parameter values. See Sidebar 5.9 for a description of other work on generating product-family members automatically.
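The following Python sketch (all class, function, and parameter names are invented) illustrates the last two mechanisms together: one family member is derived by substituting a subclass for a core software unit, and product-line parameters select the feature combination to be configured:

    # Hypothetical product-line sketch: family members are derived by component
    # specialization (subclassing a core unit) and by product-line parameters
    # (a configuration dictionary that selects feature combinations).
    class ReportGenerator:                          # unit in the core asset base
        def run(self):
            return "standard report"

    class DeluxeReportGenerator(ReportGenerator):   # specialization for one member
        def run(self):
            return "standard report + charts"

    def build_product(params):
        # Instantiate a family member from parameter values.
        generator = DeluxeReportGenerator() if params["deluxe"] else ReportGenerator()
        features = [name for name, on in params["features"].items() if on]
        return {"report": generator.run(), "features": features}

    basic = build_product({"deluxe": False, "features": {"export": True, "audit": False}})
    deluxe = build_product({"deluxe": True, "features": {"export": True, "audit": True}})
    print(basic)
    print(deluxe)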

SIDEBAR 5.9 GENERATIVE SOFTWARE DEVELOPMENT

Generative software development (Czarnecki 2005) is a form of product-line development that enables products to be generated automatically from specifications. It proceeds in two phases: first, domain engineers build the product line, including mechanisms for generating product instances, and then application engineers generate individual products.

The domain engineer defines a domain-specific language (DSL) that application engineers then use to specify products to be generated. A DSL can be as simple as a collection of parameters, with a set menu of supported parameter values, or it can be as complex as a special-purpose programming language. In the former case, the engine for deriving product instances is a collection of construction rules (for selecting and assembling prefabricated components) and optimization rules (for optimizing the code with respect to the combination of parameter values specified). In the latter case, the product line includes a compiler that transforms a DSL program into a product, making heavy use of the product-line architecture and prefabricated components.


Lucent developed several product lines and generative tools for customizing different aspects of its 5ESS telephone switch (Ardis and Green 1998):

• forms that service-provider operators and administrators use to enter and change switch-related data about customers (e.g., phone numbers, features subscribed)

• billing records that are generated for each call

• configuration-control software that monitors and records the status of the switch's hardware components and assists with component transitions

Lucent created GUI-based DSLs that could be used to specify custom data-entry forms, billing-record contents, and hardware-interface specifications. It built compilers and other tools to generate code and user documentation. This technology allowed Lucent to customize its telephone switches as its customer base evolved from internal sales within AT&T to external and international service providers; as hardware technology evolved; and as feature sets grew.

• Architecture extensions and retractions: Some architectural styles, such as publish-subscribe and client-server, allow for easy feature additions and removals; these styles are useful in product lines that have varying feature sets. More generally, we can use dependency graphs to evaluate derived architectures. For example, a viable subarchitecture of the product-line architecture corresponds to a subset of the product line's modules plus all of the modules' dependencies. We want to limit architectural extensions to those whose effects on the architecture's dependency graph are strictly additive. In other words, we seek only those extensions that augment the product line's dependency graph with new nodes such that all new dependencies originate from the new nodes.
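This additivity condition can be checked mechanically. The Python sketch below (with invented module names) represents each dependency graph as a mapping from modules to the modules they depend on, and accepts an extension only if every new dependency originates from a newly added node:

    # Verify that an architecture extension is strictly additive: every edge
    # added to the dependency graph must originate at a newly added node.
    def is_additive_extension(base, extended):
        new_nodes = set(extended) - set(base)
        for module, deps in extended.items():
            new_deps = deps - base.get(module, set())
            if new_deps and module not in new_nodes:
                return False        # an existing module gained a dependency
        return True

    base = {"ui": {"core"}, "core": set()}
    good = {"ui": {"core"}, "core": set(), "plugin": {"core"}}
    bad = {"ui": {"core", "plugin"}, "core": set(), "plugin": set()}
    print(is_additive_extension(base, good))   # True: new edges start at 'plugin'
    print(is_additive_extension(base, bad))    # False: old node 'ui' gained an edge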

Documenting a product-line architecture is different from documenting the architecture for a specific system, because the product line is not in itself a product. Rather, it is a means for rapidly deriving products. Thus, its documentation focuses on the range of products that can be derived from the product line, the points of variability in the product-line architecture, and the mechanisms for deriving family members. The documentation of a given product is then reduced to a description of how it differs from or instantiates the product-line architecture, in terms of specific feature sets, component instances, subclass definitions, parameter values, and more.

Product-Line Evolution

After studying several industrial examples, Clements (Bass, Clements, and Kazman 2003) concludes that the most important contributor to product-line success is having a product-line mindset. That is, the company's primary focus is on the development and evolution of the product-line assets, rather than on individual products. Product-line changes are made for the purpose of improving the capability to derive products, while remaining backwards compatible with previous products (i.e., previous products are still derivable). Thus, no product is developed or evolves separately from the product line. In this sense, a company with a product line is like the farmer with a goose that lays golden eggs. Instead of focusing on the eggs, the company nurtures the goose, so that it will continue to lay golden eggs for years to come.

5.11 INFORMATION SYSTEMS EXAMPLE

So, what might be a suitable software architecture for the Piccadilly system? Certainly a key component would be the repository of information that needs to be maintained about television programs, program scheduling, commercial spots, agreements, and so on. In addition, the system should be able to process multiple heterogeneous queries on this information, in parallel, so that the information can be kept up-to-date and can be used to make important decisions about future commercial campaigns.

A typical reference architecture for an information or business-processing system is an n-tiered client-server architecture (Morgan 2002). Such an architecture for our Piccadilly system is depicted in Figure 5.19. The bottom layer is a data server that simply maintains all the information that Piccadilly must track for its own business as well as information about its competitors. The application programming interface (API) for this layer is likely to consist of basic queries and updates on these data. The middle layer consists of application services that provide richer, more application-specific queries and updates on the lower-level data. An example of an application-specific query might be to find all television programs that air at the same time as some Piccadilly program. The top layer of the architecture consists of the user interfaces through which Piccadilly's managers, accountants, television-programming specialists, and sales force use the information system.
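A minimal sketch of how the tiers might be layered, in Python, with each tier talking only to the tier directly below it; all class, method, and data names here are invented for illustration, and the real design would be far richer:

    # Hypothetical sketch of the three tiers of the Piccadilly architecture.
    class DataServer:                       # bottom tier: basic queries and updates
        def __init__(self):
            self.programs = []
        def insert_program(self, name, date, start):
            self.programs.append({"name": name, "date": date, "start": start})
        def query_programs(self):
            return list(self.programs)

    class ApplicationServices:              # middle tier: application-specific queries
        def __init__(self, data_server):
            self.db = data_server
        def programs_airing_at(self, date, start):
            return [p for p in self.db.query_programs()
                    if p["date"] == date and p["start"] == start]

    class SalesClient:                      # top tier: one of several user interfaces
        def __init__(self, services):
            self.services = services
        def show_programs(self, date, start):
            for p in self.services.programs_airing_at(date, start):
                print(p["name"])

    db = DataServer()
    db.insert_program("Quiz Night", "2010-05-01", "20:00")
    SalesClient(ApplicationServices(db)).show_programs("2010-05-01", "20:00")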

In addition to the high-level architecture of the system, we also need to provide some details about each of the components. For example, we need to describe the data and relationships that the data server is to maintain, the application services that the application layer is to maintain, and the user interfaces that the presentation layer is to provide.

[Figure 5.19, a box-and-line diagram: a Client Presentation tier at the top, a Billing and Business Accounting Logic tier in the middle, and a Database Server tier at the bottom, connected by request/reply links. Most other labels are not recoverable from the source.]

FIGURE 5.19 N-tier architecture of the Piccadilly system


[Figure 5.20, a class diagram: legible entities include Agency, Advertising Campaign, Commercial, Program, Episode, Commercial Spot, Commercial Break, and Rate Segment; most attribute and operation labels are not recoverable from the source.]

FIGURE 5.20 Partial domain model of the Piccadilly system.

The domain model that we created in Chapter 4, as part of our elaboration of the system's requirements, can form the basis of the data model for the data server. The model is reproduced in Figure 5.20. We augment this model with additional concepts and relationships that arise as we work out the details of the application services. For example, a high-level description of the query to find the television programs that are aired at the same time as Piccadilly programs might look like the following:

Input: Episode
For each Opposition television company,
    For each Programming schedule,
        If Episode schedule date = Opposition transmission date
                AND Episode start time = Opposition transmission time
            Create instance of Opposition program
Output: List of Opposition programs
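Rendered as runnable Python in the application-services layer, the query might look like the following sketch; the data structures are invented for illustration:

    # Runnable rendering of the pseudocode above: create an opposition-program
    # record whenever an opposition schedule entry airs at the same date and
    # time as the given Piccadilly episode.
    def find_opposition_programs(episode, opposition_companies):
        results = []
        for company in opposition_companies:
            for entry in company["schedule"]:
                if (episode["date"] == entry["transmission_date"] and
                        episode["start_time"] == entry["transmission_time"]):
                    results.append({"company": company["name"],
                                    "program": entry["program_name"]})
        return results

    episode = {"date": "2010-05-01", "start_time": "20:00"}
    companies = [{"name": "Opposition TV",
                  "schedule": [{"program_name": "Big Quiz",
                                "transmission_date": "2010-05-01",
                                "transmission_time": "20:00"}]}]
    print(find_opposition_programs(episode, companies))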

This function suggests new entities and relationships to be added to our data model, such as the concept of an opposition company. Each opposition company broadcasts its own programs, and each program has a time and date. We include this information in the data model, so that our system is able to issue queries on programs that air on competing stations. These new concepts are shown in Figure 5.21.

[Figure 5.21, a class diagram: an Opposition class (name, address, phone number) is linked by a "1..* broadcasts *" association to an Opposition Program class (program name, station program, transmission date, transmission start time, transmission end time, transmission duration, predicted rating).]

FIGURE 5.21 Newly identified concept of Opposition programs.

In addition, we need to think about how well our architecture meets any nonfunctional requirements that have been specified. For example, if the specialists in charge of television programming want to use the system to explore how different broadcasting schedules affect predicted ratings and corresponding prices of commercial spots, then our architecture needs to support the ability to undo edits. As another example, security may be a concern. We want to ensure that the competition cannot break into the system and access Piccadilly's business plans or proposed prices.
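One common way to realize the undo requirement is to keep a stack of reversible edits; a minimal sketch follows, with invented names, illustrative only:

    # Undoable edits to a broadcast schedule: each edit records how to reverse
    # itself, and undo() restores the most recent previous state.
    class Schedule:
        def __init__(self):
            self.slots = {}            # time slot -> program name
            self._undo_stack = []

        def assign(self, slot, program):
            self._undo_stack.append((slot, self.slots.get(slot)))
            self.slots[slot] = program

        def undo(self):
            if self._undo_stack:
                slot, previous = self._undo_stack.pop()
                if previous is None:
                    del self.slots[slot]
                else:
                    self.slots[slot] = previous

    s = Schedule()
    s.assign("20:00", "Quiz Night")
    s.assign("20:00", "Film Premiere")
    s.undo()
    print(s.slots)                     # {'20:00': 'Quiz Night'}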

These models and descriptions are clearly high level. Architecture focuses on design decisions that require consideration of multiple components and their interactions. In Chapter 6, we narrow our focus to the more detailed design of individual components, each of which we are able to consider in isolation.

5.12 REAL-TIME EXAMPLE

One of the findings of the inquiry board that was set up to investigate the Ariane-5 accident was that the Ariane program in general had a "culture ... of only addressing random hardware failures" (Lions et al. 1996) and of assuming that seemingly correct software was in fact correct. The board came to this conclusion partly because of the way the Ariane-5's fault-recovery system was designed. The Ariane-5 included a number of redundant components in which both the hardware equipment and the associated software were identical. In most cases, one of the components was supposed to be active; the other was to stay in "hot standby" mode, ready to become the active component if the current active component failed. But this architecture is not a good choice for this kind of system, because hardware failures are very different from software failures. Hardware failures are independent: if one unit fails, the standby unit is usually unaffected and can take over as the active unit. By contrast, software faults tend to be logical errors, so all copies of a software component have the same set of faults. Moreover, even if the software is not the underlying cause of a failure, replicated software components are likely to exhibit the same bad behavior in response to bad input. For these reasons, the hot standby redundancy in Ariane-5 is likely to recover only from hardware failures.

To see why, consider Table 5.5's list of sources of software failures (NASA 2004). Some of the software faults are internal to the software itself, and some are caused by input errors or erroneous events in the environment.


TABLE 5.5 Causes of Safety-Related Software Failures (NASA 2004)

Software Faults: Data sampling rate; Data collisions; Illegal commands; Commands out of sequence; Time delays, deadlines; Multiple events; Safe modes

Failures in the Environment: Broken sensors; Memory overwritten; Missing parameters; Parameters out of range; Bad input; Power fluctuations; Gamma radiation

Regardless of the source, most types of faults would adversely affect all instances of replicated software components that are processing the same input at the same time (with the exception of gamma radiation, which is a serious concern in software on spacecraft). As a result, software failures among replicated components in hot standby mode are rarely independent. This was the situation with the failed inertial reference systems (SRIs) on the Ariane-5: both units suffered the same overflow error, simultaneously.

The manner in which SRI problems were handled is another factor that contributed to the Ariane-5 accident. A key design decision in any software architecture is determining which component is responsible for handling problems that occur at runtime. Sometimes the component in which the problem occurs has information about its cause and is best suited to address it. However, if a problem occurs in a service component, it is often the client component (which invoked the service) that is responsible for deciding how to recover. With this strategy, the client can use contextual information about what goals it is trying to achieve when it determines a recovery plan. In the Ariane-5, the planned exception-handling strategy for any SRI exception was to log the error and to shut down the SRI processor (Lions et al. 1996). As we saw earlier in this chapter, shutting down or rebooting is an extreme error-handling strategy that is not advisable in critical systems. Another recovery strategy, such as working with the maximum value that the affected data variable would allow, might have kept the software operating well enough for the rocket to have achieved orbit.
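To see the difference, here is a sketch of that alternative, saturating strategy; Python is used for illustration only (the failed SRI operation was a conversion of a 64-bit floating-point value to a 16-bit signed integer, in Ada):

    # Saturating conversion: instead of letting an overflow exception shut the
    # unit down, clamp the value to the largest magnitude the 16-bit signed
    # target can hold and keep operating in a degraded mode.
    INT16_MAX, INT16_MIN = 32767, -32768

    def to_int16_saturating(value):
        if value > INT16_MAX:
            return INT16_MAX           # degrade gracefully rather than shut down
        if value < INT16_MIN:
            return INT16_MIN
        return int(value)

    print(to_int16_saturating(65535.0))    # 32767, not an unhandled exception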

5.13 WHAT THIS CHAPTER MEANS FOR YOU

In this chapter, we have investigated what it means to design a system based on carefully expressed requirements. We have seen that design begins with a high-level architecture, where architectural decisions are based not only on system functionality and required constraints but also on desirable attributes and the long-term intended use of the system (including product lines, reuse, and likely modification). You should keep in mind several characteristics of good architecture as you go, including appropriate user interfaces, performance, modularity, security, and fault tolerance. You may want to build a prototype to evaluate several options or to demonstrate possibilities to your customers.

The goal is not to design the ideal software architecture for a system, because such an architecture might not even exist. Rather, the goal is to design an architecture that meets all of the customer's requirements while staying within the cost and schedule constraints that we discussed in Chapter 3.

5.14 WHAT THIS CHAPTER MEANS FOR YOUR DEVELOPMENT TEAM

There are many team activities involved in architecture and design. Because designs are usually expressed as collections of components, the interrelationships among components and data must be well documented. Part of the design process is to have frequent discussions with other team members, not only to coordinate how different components will interact but also to gain a better understanding of the requirements and of the implications of each design decision you make.

You must also work with users to decide how to design the system's interfaces. You may develop several prototypes to show users the possibilities, to determine what meets performance requirements, or to evaluate for yourself the best "look and feel."

Your choice of architectural strategy and documentation must be made in the context of who will read your designs and who must understand them. Mappings among views help explain which parts of the design affect which components and data. It is essential that you document your design clearly and completely, with discussions of the options you had and the choices you made.

As a team member, you will participate in architectural reviews, evaluating the architecture and making suggestions for improvement. Remember that you are criticizing the architecture, not the architect, and that software development works best when egos are left out of the discussion.

5.15 WHAT THIS CHAPTER MEANS FOR RESEARCHERS

The architectures in this chapter are depicted as simple box-and-line diagrams, and we note that the modeling techniques discussed in Chapters 4 and 6 may also be useful in modeling the system. However, there are several drawbacks to representing the system only with diagrams. Garlan (2000) points out that informal diagrams cannot easily be evaluated for consistency, correctness, and completeness, particularly when the system is large or complex. Neither can desired architectural properties be checked and enforced as the system evolves over time. Thus, many researchers are investigating the creation and use of formal languages for expressing and analyzing a software architecture. These Architectural Description Languages (ADLs) include three things: a framework, a notation, and a syntax for expressing a software architecture. Many also have associated tools for parsing, displaying, analyzing, compiling, or simulating an architecture.

The ADL is often specific to an application domain. For instance, Adage (Coglianese and Szymanski 1993) is intended for use in describing avionics navigation and guidance, and Darwin (Magee et al. 1995) supports distributed message-passing systems. Researchers are also investigating ways to integrate various architectural tools into higher-level architectural environments, some of which may be domain specific. And other researchers are mapping ADL concepts to object-based approaches, such as the UML (Medvidovic and Rosenblum 1999).

Another area ripe for research is bridging the gap across architectural styles. Sometimes systems are developed from pieces that are specified in different ways.


DeLine (2001) and others are examining ways to translate a collection of different pieces into a more coherent whole.

Finally, researchers are continually challenged by systems that are "network centric," having little or no centralized control, conforming to few standards, and varying widely in hardware and applications from one user to another. "Pervasive computing" adds complications, as users are employing diverse devices that were not designed to interoperate, and even moving around geographically as they use them. As Garlan (2000) points out, this situation presents the following four problems:

• Architectures must scale to the size and variability of the Internet. Traditionally, one "assumes that event delivery is reliable, that centralized routing of messages will be sufficient, and that it makes sense to define a common vocabulary of events that are understood by all of the components. In an Internet-based setting, all of these assumptions are questionable."

• Software must operate over "dynamically formed, task-specific coalitions of distributed autonomous resources." Many of the Internet's resources are "independently developed and independently supported; they may even be transient," but the coalitions may have no control over these independent resources. Indeed, "selection and composition of resources [are] likely to be done afresh for each task, as resources appear, change, and disappear." We will need new techniques for managing architectural models at runtime, and for evaluating the properties of the systems they describe.

• We will need flexible architectures that accommodate services provided by private industry, such as billing, security, and communications. These applications are likely to be composed from both local and remote computing capabilities and offered at each user's desktop, which in turn can be built from a wide variety of hardware and software.

• End users may want to compose systems themselves, tailoring available applications to their particular needs. These users may have very little experience in building systems, but they still want assurances that the composed systems will perform in expected ways.

We are designing systems that are larger and more complex than ever before. Northrop et al.'s report (2006) on ultra-large-scale systems explains our need to develop huge systems with thousands of sensors and decision nodes that are connected through heterogeneous and opportunistic networks and that adapt to unforeseen changes in their environment. These systems will need special architectural considerations, because current testing techniques will not work. Shaw (2002) discusses why it will be impossible for such systems to be absolutely correct, and why users and developers will need to soften their views and expectations about correctness. She suggests that we strive instead for sufficient correctness.

5.16 TERM PROJECT

Architecture is as much an artistic and creative endeavor as an engineering one. Different expert architects can take very different approaches to how they conceive and document their designs, with the results of each being solid and elegant. We can think of architects approaching their jobs along a continuum, from what is called task-centered to user-centered design. Task-centered design begins with thinking about what the system must accomplish. By contrast, user-centered design begins with the way in which a user interacts with the system to perform the required tasks. The two are not mutually exclusive and indeed can be complementary. However, one design philosophy often dominates the other.

As part of your term project, develop two different architectural approaches to the Loan Arranger: one that is task centered and one that is user centered. What architectural style(s) have you chosen for each? Compare and contrast the results. Which architecture is easier to change? To test? To configure as a product line?

5.17 KEY REFERENCES

There are many good books about software architecture. The first one you should read is Shaw and Garlan (1996), which provides a good foundation for how you learn about architecture and design. This and other books can act as architectural style catalogues, including Buschmann et al. (1996) and Schmidt et al. (2000). There are several books that address particular kinds of architectures: Gomaa (1995) for real-time systems, Hix and Hartson (1993) and Shneiderman (1997) for interface design, and Wiederhold (1988) for databases, for example.

More generally, Hofmeister, Nord, and Soni (1999) and Kazman, Asundi, and Klein (2001) discuss how to make architectural design decisions, and Clements et al. (2003) and Kruchten (1995) address the best way to document an architecture. In addition, the IEEE and other standards organizations publish various architectural standards.

You can read several product-line success stories at the Product Line Hall of Fame Web site (http://www.sei.cmu.edu/productlines/plp_hof.html), maintained by the Software Engineering Institute (SEI).

Scott Ambler has written extensively about the views of proponents of agile methods on architecture and agile modeling. See his Web site (http://www.agilemodeling.com) and his book on agile modeling (Ambler 2002).

5.18 EXERCISES

1. What type of architectural style is represented by the NIST/ECMA model (shown in Figure 5.22) for environment integration? (Chen and Norman 1992).

2. For each of the architectural styles described in this chapter, give an example of a real-world application whose software design might incorporate that style.

3. Review the four different architectural styles proposed by Shaw and Garlan (1996) to implement KWIC: repository, data abstraction, implicit invocation (a type of publish-subscribe), and pipe-and-filter. For each one, are the high-level components likely to have high or low cohesion and coupling?

4. Give an example of a system for which developing a prototype would not result in saving a significant amount of development time.

5. List the characteristics of a system for which prototyping is most appropriate.

FIGURE 5.22 NIST/ECMA model.

[Figure omitted: its legible labels include "repository services" and "user interface services"; the remainder is not recoverable from the source.]

6. Explain why modularity and application generators are inseparable concepts. Give an example of an application generator with which you have worked.

7. Explain why a shared data architecture is not easy to reuse.

8. List the characteristics you might include in an architecture evaluation table similar to Table 5.2. For each of the following systems, identify the weights you might use for each characteristic: an operating system, a word processing system, and a satellite tracking system.

9. Many of your class projects require you to develop your programs by yourself. Assemble a small group of students to perform an architectural review for one such project. Have several students play the roles of customers and users. Try to express all the requirements and system characteristics in nontechnical terms. List all the changes that are suggested by the review process. Compare the time required to make the changes at the architecture stage to that of changing your existing programs.

10. You have been hired by a consulting firm to develop an income tax calculation package for an accounting firm. You have designed a system according to the customer's requirements and presented your design at an architectural review. Which of the following questions might be asked at the review? Explain your answers.

(a) What computer will it run on?
(b) What will the input screens look like?
(c) What reports will be produced?
(d) How many concurrent users will there be?
(e) Will you use a multiuser operating system?
(f) What are the details of the depreciation algorithm?

11. For each of the systems described below, sketch an appropriate software architecture and explain how you would assign key functionalities to the design's components.


(a) a system of automated banking machines, acting as distributed kiosks that bank customers can use to deposit and withdraw cash from their accounts

(b) a news feeder that notifies each user of news bulletins on topics in which the user has expressed an interest

(c) image-processing software that allows users to apply various operations to modify their pictures (e.g., rotation, color tinting, cropping)

(d) a weather forecasting application that analyzes tens of thousands of data elements collected from various sensors; the sensors periodically transmit new data values

12. Propose a redesign of your software architecture for the system of automated banking machines from the previous exercise so that it improves performance. Propose an alternate redesign that improves security. Does your strategy to improve performance adversely affect security, or vice versa?

13. Suggest how the weather forecasting application in exercise 11(d) might detect faults in its data sensors.

14. Derive the cut-set tree for the fault tree given in Figure 5.11.

15. Table 5.4 shows a cost-benefit analysis for three competing design proposals. The computation of benefits is based on projections that the rate of queries could increase to a peak of 200 queries per second. Suppose that, due to increased competition by other on-line companies, more recent projections estimate that there will never be more than 150 queries per second. How does this new information affect the original cost-benefit analysis?

16. Your university wants to automate the task of checking that students who are scheduled to graduate have actually satisfied the degree requirements in their respective majors. A key challenge in automating this task is that every degree major has its own unique requirements. Study the degree requirements of three disciplines at your university; identify which graduation requirements they have in common and where they diverge. Describe how the variability might be generalized so that checking the degree requirements of each major can be derived from the same product line.

17. Design a simple full-screen editor on a video display terminal. The editor allows text to be inserted, deleted, and modified. Sections of text can be "cut" from one part of the file and "pasted" to another part of the file. The user can specify a text string, and the editor can find the next occurrence of that string. Through the editor, the user can specify margin settings, page length, and tab settings. Then, evaluate the quality of your design.


6

Designing the Modules

In this chapter, we look at
• design principles
• object-oriented design heuristics
• design patterns
• exceptions and exception handling
• documenting designs

In the last chapter, we looked at strategies and patterns for creating a high-level architectural design of our software system. This type of design identifies what the system's major components will be, and how the components will interact with each other and share information. The next step is to add more detail, deciding how the individual components will be designed at a modular level so that developers can write code that implements the design. Unlike architectural design, where we have architectural styles to guide our design work, the process of creating a more detailed design offers us fewer ready-made solutions for how to decompose a component into modules. Thus, module-level design is likely to involve more improvisation than architectural design does; it relies on both innovation and continual evaluation of our design decisions, and we pay careful attention to design principles and conventions as we proceed.

In this chapter, we summarize the abundance of module-level design advice that exists in the literature. We begin with design principles: general properties of good designs that can guide you as you create your own designs. Then, we present several design heuristics and patterns that are particularly useful for object-oriented (OO) designs. OO notations and programming languages were developed specifically to encode and promote good design principles, so it makes sense to look at how to use these technologies to their best advantage. We also look at documenting a module-level design in a sufficiently precise way that allows the design to be implemented easily by other developers.

Your experience with module-level design from your previous programming courses may help you to understand this chapter. The design advice we include is based on the collective experience of a wide variety of developers building many types of systems. We have chosen advice based on quality attributes that the design is to achieve, such as improving modularity or ensuring robustness. This chapter can help you see why certain design principles and conventions are applicable, and then assist you in deciding when you should apply them.

To illustrate design choices, concepts, and principles, we introduce an example: automating a business, the Royal Service Station, that services automobiles. This example enables us to see that designing modules must reflect not only technological options but also business constraints and developer experience. Some aspects of our example were originally developed by Professor Guilherme Travassos, of COPPE/Sistemas at the Federal University of Rio de Janeiro, Brazil. More details plus other examples are available at Prof. Travassos's Web site: http://www.cos.ufrj.br/~ght.

6.1 DESIGN METHODOLOGY

At this point in the development process, we have an abstract description of a solution to our customer's problem, in the form of a software architectural design. As such, we have a plan for decomposing the design into software units and allocating the system's functional requirements to them. The architecture also specifies any protocols that constrain how the units interact with each other, and specifies a precise interface for each unit. Moreover, the architectural design process has already resolved and documented any known issues about data sharing and about coordination and synchronization of concurrent components. Of course, some of these decisions may change when we learn more about designing the individual software units. But at this point, the system's design is complete enough to allow us to treat the designing of the various units as independent tasks.

In practice, there is no sharp boundary between the end of the architecture-design phase and the start of the module-design phase. In fact, many software architects argue that a software architecture is not complete until it is so detailed that it specifies all of the system's atomic modules and interfaces. However, for the purposes of project management, it is convenient to separate design tasks that require consideration of the entire system from design tasks that pertain to individual software units, because the latter can be partitioned into distinct work assignments and assigned to separate design teams. Thus, the more we can restrict ourselves during the architecture-design phase to identifying the major software units and their interfaces, the more we can parallelize the rest of the design work. This chapter focuses on the detailed design of a well-defined architectural unit, looking at how to decompose the unit into constituent modules.

Once again, the software architecture-design decisions correspond to meal preparation decisions: degree of formality, number of guests, number of courses, culinary theme (e.g., Italian or Mexican), and perhaps main ingredients (e.g., meat or fish in the main course, and which seasonal vegetables). These decisions help to scope the set of possible dishes that we could make. And, like architectural decisions, they are fundamental to the planning and preparation of the entire meal; they are difficult to change in the middle of cooking. Plenty of open design questions will remain, such as which specific dishes to prepare, cooking methods for the meats and vegetables, and complementary ingredients and spices. These secondary decisions tend to apply to specific dishes rather than to the whole meal; they can be made in isolation or delegated to other cooks. However, secondary decisions still require significant knowledge and expertise, so that the resulting dishes are tasty and are ready at the appropriate time.



Although there are many recipes and instructional videos to show you how to move from ingredients to a complete meal, there are no comparable design recipes for progressing from a software unit's specification to its modular design. Many design methods advocate top-down design, in which we recursively decompose design elements into smaller constituent elements. However, in reality, designers alternate among top-down, bottom-up, and outside-in design methods, sometimes focusing on parts of the design that are less well understood and at other times fleshing out details with which they are familiar. Krasner, Curtis, and Iscoe (1987) studied the habits of developers on 19 projects; they report, and other evidence confirms, that designers regularly move up and down a design's levels of abstraction as they understand more about the solution and its implications. For example, a design team may start off using a top-down method or an outside-in method that focuses first on the system's inputs and expected outputs. Alternatively, it may make sense to explore the hardest and most uncertain areas of the design first, because surprises that arise in clarifying an obscure problem may force changes to the overall design. If we are using agile methods, then the design progresses in vertical slices, as we iteratively design and implement subsets of features at a time. Whenever the design team recognizes that known design solutions might be useful, the team may switch to a bottom-up design approach in which it tries to tackle parts of the design by applying and adapting prepackaged solutions. Periodically, design decisions are revisited and revised, in an activity called refactoring, to simplify an overly complicated solution or to optimize the design for a particular quality attribute.

The process we use to work towards a final solution is not as important as the documentation we produce so that other designers can understand it. This understanding is crucial not only for the programmers who will implement the design but also for the maintainers who will change it, the testers and reviewers who will ensure that the design implements the requirements, and the specialists who will write user documentation describing how the system works. One way to achieve this understanding is by "faking the rational design process": writing the design documentation to reflect a top-down design process, even if this is not how we arrived at the design (Parnas and Clements 1986), as described in Sidebar 6.1. We discuss design documentation in more detail in Section 6.8.

6.2 DESIGN PRINCIPLES

With clear requirements and a high-level system architecture in hand, we are ready to add detail to our design. As we saw in Chapter 5, architectural design can be expressed in terms of architectural styles, each of which provides advice about decomposing the system into its major components. Architectural styles help us to solve generic problems of communication, synchronization, and data sharing. However, once we focus on decomposing individual components and software units into modules, we must address functionality and properties that are no longer generic; rather, they are specific to our design problem and therefore are less likely to have ready-made solutions.

Design principles are guidelines for decomposing our system's required functionality and behavior into modules. In particular, they identify the criteria that we should use in two ways: for decomposing a system and then for deciding what information to provide (and what to conceal) in the resulting modules.


SIDEBAR 6.1 "FAKING" A RATIONAL DESIGN PROCESS

In an ideal, methodical, and reasoned design process, the design of a software system would progress from high-level specification to solution, using a sequence of top-down, error-free design decisions resulting in a hierarchical collection of modules. For several reasons (e.g., poorly understood or changing requirements, refactoring, human error), design work rarely proceeds directly or smoothly from requirements to modules. Nonetheless, Parnas and Clements (1986) argue that we should behave as if we are following such a rational process:

• The process can provide guidance when we are unsure of how to proceed.

• We will come closer to a rational design if we attempt to follow a rational process.

• We can measure a project's progress against the process's expected deliverables.

Parnas and Clements suggest that we simulate this behavior by "writing the documentation that we would have produced if we had followed the ideal process." That is, we document design decisions according to a top-down process by (1) decomposing the software unit into modules, (2) defining the module interfaces, (3) describing the interdependencies among modules, and, finally, (4) documenting the internal designs of modules. As we take these steps, we insert placeholders for design decisions that we put off. Later, as details become known and deferred decisions are made, we replace the placeholders with the new information. At the same time, we update documents when problems are found or the design is revised. The result is a design document that reads as if the design process were purely top-down and linear.

The distinction between the actual design process and the ideal one is similar to the distinction between discovering the main steps of a new mathematical proof and later formulating it as a logical argument. "Mathematicians diligently polish their proofs, usually presenting a proof [in a published paper that is] very different from the first one that they discovered" (Parnas and Clements 1986).

Design principles are useful when creating innovative designs, but they have other uses too, especially in forming the basis for the design advice that is packaged as design conventions, design patterns, and architectural styles. Thus, to use styles and patterns effectively, we must understand and appreciate their underlying principles. Otherwise, when we define, modify, and extend patterns and styles to fit our needs, we are likely to violate the very principles that the conventions and patterns engender and promote.

The collection of software design principles grows as we "encode" our collective experience and observations in new design advice. For example, Davis (1995) proposes 201 principles of software development, many of which are design related. In this book, we restrict our discussion to six dominant principles: modularity, interfaces, information hiding, incremental development, abstraction, and generality. Each seems to have stood the test of time and is independent of style and methodology. Collectively, they can assist us in building effective, robust designs.


Section 6.2 Design Principles 297

Modularity

Modularity, also called separation of concerns, is the principle of keeping separate the various unrelated aspects of a system, so that each aspect can be studied in isolation (Dijkstra 1982). Concerns can be functions, data, features, tasks, qualities, or any aspect of the requirements or design that we want to define or understand in more detail. To build a modular design, we decompose our system by identifying the system's unrelated concerns and encapsulating each in its own module. If the principle is applied well, each resulting module will have a single purpose and will be relatively independent of the others; in this way, each module will be easy to understand and develop. Module independence also makes it easier to locate faults (because there are fewer suspect modules per fault) and to change the system (because a change to one module affects relatively few other modules).

To determine how well a design separates concerns, we use two concepts that measure module independence: coupling and cohesion (Yourdon and Constantine 1978).

Coupling. We say that two modules are tightly coupled when they depend a great deal on each other. Loosely coupled modules have some dependence, but their interconnections are weak. Uncoupled modules have no interconnections at all; they are completely unrelated, as shown in Figure 6.1.

There are many ways that modules can be dependent on each other:

• The references made from one module to another: Module A may invoke operations in module B, so module A depends on module B for completion of its function or process.

• The amount of data passed from one module to another: Module A may pass a parameter, the contents of an array, or a block of data to module B.

FIGURE 6.1 Module coupling. [Figure: three pairs of modules, showing uncoupled modules (no dependencies), loosely coupled modules (some dependencies), and tightly coupled modules (many dependencies).]


FIGURE 6.2 Types of coupling. [Figure: a spectrum ranging from tight coupling (content coupling, common coupling, control coupling) through loose coupling (stamp coupling, data coupling) to low coupling (uncoupled).]

• The amount of control that one module has over the other: Module A may pass a control flag to module B. The value of the flag tells module B the state of some resource or subsystem, which procedure to invoke, or whether to invoke a procedure at all.

We can measure coupling along a spectrum of dependence, ranging from complete dependence to complete independence (uncoupled), as shown in Figure 6.2.

In actuality, it is unlikely that a system would be built of completely uncoupled modules. Just as a table and chairs, although independent, can combine to form a dining-room set, context can indirectly couple seemingly uncoupled modules. For example, two unrelated features may interact in such a way that one feature disables the possible execution of the other feature (e.g., an authorization feature that prohibits an unauthorized user from accessing protected services). Thus, our goal is not necessarily to have complete independence among modules, but rather to keep the degree of their coupling as low as possible.

Some types of coupling are less desirable than others. The least desirable occurs when one module actually modifies another. In such a case, the modified module is completely dependent on the modifying one. We call this content coupling. Content coupling might occur when one module is imported into another module, modifies the code of another module, or branches into the middle of another module. In Figure 6.3, module B generates and then invokes module D. (This situation is possible in some programming languages, such as LISP and Scheme.) Although self-modifying code is an extremely powerful tool for implementing programs that can improve themselves or learn dynamically, we use it knowing the consequences: the resulting modules are tightly coupled and cannot be designed or modified independently.

FIGURE 6.3 Example of content coupling. [Figure: Module B generates module D and then calls D.]

We can reduce the amount of coupling somewhat by organizing our design so that data are accessible from a common data store. However, dependence still exists: making a change to the common data means that, to evaluate the effect of the change, we have to look at all modules that access those data. This kind of dependence is called common coupling. With common coupling, it can be difficult to determine which module is responsible for having set a variable to a particular value. Figure 6.4 shows how common coupling works.

When one module passes parameters or a return code to control the behavior of another module, we say that there is control coupling between the two. It is impossible for the controlled module to function without some direction from the controlling module. If we employ a design with control coupling, it helps if we limit each module to be responsible for only one function or one activity. This restriction minimizes the amount of information that is passed to a controlled module, and it simplifies the module's interface to a fixed and recognizable set of parameters and return values.

When complex data structures are passed between modules, we say there is stamp coupling between the modules; if only data values, and not structured data, are passed, then the modules are connected by data coupling. Stamp coupling represents a more complex interface between modules, because the modules have to agree on the data's format and organization. Thus, data coupling is simpler and less likely to be affected by changes in data representation. If coupling must exist between modules, data coupling is the most desirable; it is easiest to trace data through and to make changes to data-coupled modules.

FIGURE 6.4 Example of common coupling. [Figure: Modules X, Y, and Z all read and modify global variables (A1, A2, A3) and (V1, V2) held in a common data area.]
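To make these distinctions concrete, consider the following minimal Java sketch (an illustration added here, with hypothetical names; it is not drawn from the book's examples). The same tax computation is written three ways, exhibiting control coupling, stamp coupling, and data coupling in turn:

    // A hypothetical illustration of degrees of coupling.
    class CouplingDemo {
        // Control coupling: the caller passes a flag that selects the behavior.
        static double compute(boolean taxMode, double amount) {
            return taxMode ? amount * 0.05 : amount * 0.10;
        }

        // Stamp coupling: a structured Order is passed, so both modules must
        // agree on the Order's layout even though only one field is used.
        static class Order { double subtotal; int itemCount; }
        static double taxFor(Order order) { return order.subtotal * 0.05; }

        // Data coupling: only a simple value crosses the interface; this module
        // is unaffected by changes to the Order's representation.
        static double taxOn(double subtotal) { return subtotal * 0.05; }

        public static void main(String[] args) {
            Order order = new Order();
            order.subtotal = 100.0;
            System.out.println(compute(true, order.subtotal)); // control-coupled
            System.out.println(taxFor(order));                 // stamp-coupled
            System.out.println(taxOn(order.subtotal));         // data-coupled
        }
    }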

Objects in an OO design often have low coupling, since each object contains its own data and operations on those data. In fact, one of the objectives of the OO design methodology is to promote loosely coupled designs. However, basing our design on objects does not guarantee that all of the modules in the resulting design will have low coupling. For example, if we create an object that serves as a common data store that can be manipulated, via its methods, by several other objects, then these objects suffer from a form of common coupling.

Cohesion. In contrast to measuring the interdependence among multiple modules, cohesion refers to the dependence within and among a module's internal elements (e.g., data, functions, internal modules). The more cohesive a module, the more closely related its pieces are, both to each other and to the module's singular purpose. A module that serves multiple purposes is at greater risk of its elements needing to evolve in different ways or at different rates. For example, a module that encompasses both data and routines for displaying those data may change frequently and may grow in different directions, as new uses of the data require both new functions to manipulate data and new ways of visualizing them. Instead, our design goal is to make each module as cohesive as possible, so that each module is easier to understand and is less likely to change. Figure 6.5 shows the several types of cohesion.

The worst degree of cohesion, coincidental, is found in a module whose parts are unrelated to one another. In this case, unrelated functions, processes, or data are combined in the same module for reasons of convenience or serendipity. For example, it is not uncommon for a mediocre design to consist of several cohesive modules, with the rest of the system's functionality clumped together into modules MiscellaneousA and MiscellaneousB.

A module has logical cohesion if its parts are related only by the logic structure of its code. As an example, shown in Figure 6.6, consider a template module or procedure that performs very different operations depending on the values of its parameters. Although the different operations have some cohesion, in that they may share some program statements and code structure, this cohesion of code structure is weak compared to cohesion of data, function, or purpose. It is likely that the different operations will evolve in different ways over time, and that this evolution, plus the possible addition of new operations, will make the module increasingly difficult to understand and maintain.

FIGURE 6.5 Types of cohesion. [Figure: a spectrum ranging from low cohesion (coincidental, logical, temporal, procedural, communicational) to high cohesion (functional, informational).]

FIGURE 6.6 Example of logical cohesion. [Figure: Module X (param1, param2, ..., paramN) branches on its parameter values (e.g., if param1 = 1, else if param1 = 2, if param2 = 1) to select among blocks of parameterized code that share some common code.]

Sometimes a design is divided into modules that represent the different phases of execution: initialization, read input, compute, print output, and cleanup. The cohesion in these modules is temporal, in that a module's data and functions are related only because they are used at the same time in an execution. Such a design may lead to duplicate code, in which multiple modules perform similar operations on key data structures; in this case, a change to the data structure would mandate a change to all of the modules that access the data structure. Object constructors and destructors in OO programs help to avoid temporal cohesion in initialization and clean-up modules.

Often, functions must be performed in a certain order. For example, data must be entered before they can be checked and then manipulated. When functions are grouped together in a module to encapsulate the order of their execution, we say that the module is procedurally cohesive. Procedural cohesion is similar to temporal cohesion, with the added advantage that the functions pertain to some related action or purpose. However, such a module appears cohesive only in the context of its use. Without knowing the module's context, it is hard for us to understand how and why the module works, or to know how to modify the module.

Alternatively, we can associate certain functions because they operate on the same data set. For instance, unrelated data may be fetched together because the data are collected from the same input sensor or with a single disk access. Modules that are designed around data sets in this way are communicationally cohesive. The cure for communicational cohesion is placing each data element in its own module.

Our ideal is functional cohesion, where two conditions hold: all elements essential to a single function are contained in one module, and all of that module's elements are essential to the performance of that function. A functionally cohesive module performs only the function for which it is designed, and nothing else. The adaptation of functional cohesion to data abstraction and object-based design is called informational cohesion. The design goal is the same: to put data, actions, or objects together only when they have one common, sensible purpose. For example, we say that an OO design component is cohesive if all of the attributes, methods, and actions are strongly interdependent and essential to the object. Well-designed OO systems have highly cohesive designs because they encapsulate in each module a single, possibly complex, data type and all operations on that data type.
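As a small illustration (added here, with hypothetical names), the following Java class is informationally cohesive: it encapsulates one data abstraction, an account balance, and contains only the operations essential to that abstraction:

    // An informationally cohesive module: one data abstraction and only
    // the operations essential to it.
    class Account {
        private long balanceInCents;

        void deposit(long cents) { balanceInCents += cents; }

        void withdraw(long cents) {
            if (cents > balanceInCents)
                throw new IllegalArgumentException("insufficient funds");
            balanceInCents -= cents;
        }

        long balance() { return balanceInCents; }

        // A method that formatted balances for a monthly report would not
        // belong here: report display is a separate concern that may evolve
        // independently of the account data.
    }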

Interfaces

In Chapter 4, we saw that a software system has an external boundary and a corresponding interface through which it senses and controls its environment. Similarly, a software unit has a boundary that separates it from the rest of the system, and an interface through which the unit interacts with other software units. An interface defines what services the software unit provides to the rest of the system, and how other units can access those services. For example, the interface to an object is the collection of the object's public operations and the operations' signatures, which specify each operation's name, parameters, and possible return values. To be complete, an interface must also define what the unit requires, in terms of services or assumptions, for it to work correctly. For example, in the object interface described above, one of the operations may use program libraries, invoke external services, or make assumptions about the context in which it is invoked. If any of these requirements is absent or violated, the operation may fail in its attempt to offer its services. Thus, a software unit's interface describes what the unit requires of its environment, as well as what it provides to its environment. A software unit may have several interfaces that make different demands on its environment or that offer different levels of service, as shown in Figure 6.7. For example, the set of services provided may depend on the user privileges of the client code.

FIGURE 6.7 Example of interfaces. [Figure: a module implementation containing data and Operations 1-4; Interface A exposes Operation 1(), Operation 2(), and Operation 4(), while Interface B exposes Operation 2() and Operation 3().]


Interfaces are the design construct that allows us to encapsulate and hide a software unit's design and implementation details from other developers. For example, rather than manipulate a stack variable directly, we define an object called stack and methods to perform stack operations push and pop. We use the object and its methods, not the stack itself, to modify the contents of the stack. We can also define probes to give us information about the stack (whether it is full or empty and what element is on top) without changing the stack's state.
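In Java, for example, the stack might be sketched as follows (a rendering added here, not the authors'); client code sees only the interface, never the underlying representation:

    // The interface exposes operations and probes but hides the representation.
    interface Stack<E> {
        void push(E element);
        E pop();
        E top();           // probe: inspect the top element without changing state
        boolean isEmpty(); // probe: report on the stack's state
    }

    // One possible implementation; clients written against Stack need not
    // change if we later replace this array-backed class with a linked one.
    class ArrayStack<E> implements Stack<E> {
        private final java.util.ArrayList<E> items = new java.util.ArrayList<>();
        public void push(E element) { items.add(element); }
        public E pop()              { return items.remove(items.size() - 1); }
        public E top()              { return items.get(items.size() - 1); }
        public boolean isEmpty()    { return items.isEmpty(); }
    }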

The specification of a software unit's interface describes the externally visible properties of the software unit. Just as a requirements specification describes system behavior in terms of entities at the system's boundary, an interface specification's descriptions refer only to entities that exist at the unit's boundary: the unit's access functions, parameters, return values, and exceptions. An interface specification should communicate to other system developers everything that they need to know to use our software unit correctly. This information is not limited to the unit's access functions and their signatures:

• Purpose: We document the functionality of each access function, in enough detail that other developers can identify which access functions fit their needs.

• Preconditions: We list all assumptions, called preconditions, that our unit makes about its usage (e.g., values of input parameters, states of global resources, or presence of program libraries or other software units), so that other developers know under what conditions the unit is guaranteed to work correctly.

• Protocols: We include protocol information about the order in which access functions should be invoked, or the pattern in which two components should exchange messages. For example, a calling module may need to be authorized before accessing a shared resource.

• Postconditions: We document all visible effects, called postconditions, of each access function, including return values, raised exceptions, and changes to shared variables (e.g., output files), so that the calling code can react appropriately to the function's output.

• Quality attributes: We describe any quality attributes (e.g., performance, reliability) that are visible to developers or users. For example, a client of our software may want to know whether internal data structures have been optimized for data insertions or data retrievals. (Optimizing for one operation usually slows the performance of the other.)

Ideally, a unit's interface specification defines exactly the set of acceptable implementations. At least, the specification needs to be precise enough so that any implementation that satisfies the specification would be an acceptable implementation of the unit. For example, the specification of a Find operation that returns the index of an element in a list should say what happens if the element occurs multiple times in the list (e.g., returns the index of the first occurrence, or the index of an arbitrary occurrence), if the element is not found in the list, if the list is empty, and so on. In addition, the specification should not be so restrictive that it excludes several acceptable implementations. For example, the specification of the Find operation should not specify that the operation returns the first occurrence of an element when any occurrence would do.
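Such a specification can be recorded directly in the interface's documentation. The following Java sketch (added here; the method name and types are hypothetical) documents the Find operation's purpose, precondition, and postconditions while deliberately leaving room for several acceptable implementations:

    interface SearchableList<E> {
        /**
         * Purpose: locate an element in the list.
         * Precondition: none; the list may be empty.
         * Postcondition: if the element occurs in the list, returns the index
         * of some occurrence (intentionally not required to be the first, so
         * that multiple implementations remain acceptable); if the element
         * does not occur, returns -1.
         */
        int find(E element);
    }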


Interface specifications keep other developers from knowing about and exploiting our design decisions. At first glance, it may seem desirable to allow other developers to optimize their code based on knowledge about how our software is designed. However, such optimization is a form of coupling among the software units, and it reduces the maintainability of the software. If a developer writes code that depends on how our software is implemented, then the interface between that developer's code and our code has changed: the developer's code now requires more from our software than what is advertised in our software's interface specification. When we want to change our software, either we must adhere to this new interface or the other developer must change her code so that it is optimized with respect to the new implementation.

A software unit's interface can also suggest the nature of coupling. If an interface restricts all access to the software unit to a collection of access functions that can be invoked, then there is no content coupling. If some of the access functions have complex data parameters, then there may be stamp coupling. To promote low coupling, we want to keep a unit's interface as small and simple as possible. We also want to minimize the assumptions and requirements that the software unit makes of its environment, to reduce the chance that changes to other parts of the system will violate those assumptions.

Information Hiding

Information hiding (Parnas 1972) aims to make the software system easier to maintain. It is distinguished by its guidance for decomposing a system: each software unit encapsulates a separate design decision that could be changed in the future. Then we use the interfaces and interface specifications to describe each software unit in terms of its externally visible properties. The principle's name thus reflects the result: the unit's design decision is hidden.

The notion of a "design decision" is quite general. It could refer to many things, including a decision about data format or operations on data; the hardware devices or other components with which our software must interoperate; protocols of messages between components; or the choice of algorithms. Because the design process involves many kinds of decisions about the software, the resulting software units encapsulate different kinds of information. Decomposition by information hiding is different from the decomposition methodologies listed in Chapter 5 (e.g., functional decomposition, data-oriented decomposition), because the software units that result from the latter encapsulate only information of the same type (i.e., they all encapsulate functions, data types, or processes). See Sidebar 6.2 for a discussion on how well OO design methodologies implement information hiding.

Because we want to encapsulate changeable design decisions, we must ensure that our interfaces do not, themselves, refer to aspects of the design that are likely to change. For example, suppose that we encapsulate in a module the choice of sorting algorithm. The sorting module could be designed to transform input strings into sorted output strings. However, this approach results in stamp coupling (i.e., the data passed between the units are constrained to be strings). If changeability of data format is a design decision, the data format should not be exposed in the module's interface. A better design would encapsulate the data in a single, separate software unit; the sorting module could input and output a generic object type, and could retrieve and reorder the object's data values using access functions advertised in the data unit's interface.
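In Java, the better design might be sketched like this (an illustration added here; the interface name and its operations are hypothetical). The sorting module encapsulates the algorithm, the data unit encapsulates the representation, and only access functions cross the boundary:

    // The data unit hides its format behind access functions.
    interface Reorderable {
        int size();
        boolean inOrder(int i, int j); // true if elements i and j are in order
        void swap(int i, int j);       // interchange elements i and j
    }

    // The sorting module never sees the data's representation, so either side
    // can change independently. A simple exchange sort is used here.
    class Sorter {
        static void sort(Reorderable data) {
            for (int i = 0; i < data.size() - 1; i++)
                for (int j = i + 1; j < data.size(); j++)
                    if (!data.inOrder(i, j))
                        data.swap(i, j);
        }
    }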



SIDEBAR 6.2 INFORMATION HIDING IN OO DESIGNS

In OO design, we decompose a system into objects and their abstract types. That is, each object (module) is an instance of an abstract data type. In this sense, each object hides its data representation from other objects. The only access that other objects have to a given object's data is via a set of access functions that the object advertises in its interface. This information hiding makes it easy to change an object's data representation without perturbing the rest of the system.

However, data representation is not the only type of design decision we may want to hide. Thus, to create an OO design that exhibits information hiding, we may need to expand our notion of what an object is, to include types of information besides data types. For example, we could encapsulate an independent procedure, such as a sorting algorithm or an event dispatcher, in its own object.

Objects cannot be completely uncoupled from one another, because an object needs to know the identity of the other objects so that they can interact. In particular, one object must know the name of a second object to invoke its access functions. This dependence means that changing the name of an object, or the number of object instances, forces us also to change all units that invoke the object. Such dependence cannot be helped when accessing an object that has a distinct identity (e.g., a customer record), but it may be avoided when accessing an arbitrary object (e.g., an instance of a shared resource). In Section 6.5, we discuss some design patterns that help to break these types of dependencies.


By following the information-hiding principle, a design is likely to be composed of many small modules. Moreover, the modules may exhibit all kinds of cohesion. For example:

• A module that hides a data representation may be informationally cohesive.

• A module that hides an algorithm may be functionally cohesive.

• A module that hides the sequence in which tasks are performed may be procedurally cohesive.

Because each software unit hides exactly one design decision, all the units have high cohesion. Even with procedural cohesion, other software units hide the designs of the individual tasks. The resulting large number of modules may seem unwieldy, but we have ways to deal with this trade-off between number of modules and information hiding. Later in this chapter, we see how to use dependency graphs and abstraction to manage large collections of modules.

A big advantage of information hiding is that the resulting software units are loosely coupled. The interface to each unit lists the set of access functions that the unit offers, plus the set of other units' access functions that it uses. This feature makes the software units easier to understand and maintain, because each unit is relatively self-contained. And if we are correct in predicting which aspects of the design will change over time, then our software will be easier to maintain later on, because changes will be localized to particular software units.

Incremental Development

Given a design consisting of software units and their interfaces, we can use the information about the units' dependencies to devise an incremental schedule of development. We start by mapping out the units' uses relation (Parnas 1978b), which relates each software unit to the other software units on which it depends. Recall from our discussion about coupling that two software units, A and B, need not invoke each other in order to depend on each other; for example, unit A may depend on unit B to populate a data structure, stored in a separate unit C, that unit A subsequently queries. In general, we say that a software unit A "uses" a software unit B if A "requires the presence of a correct version of B" in order for A to complete its task, as specified in its interface (Parnas 1978b). Thus, a unit A uses a unit B if A does not work correctly unless B works. The above discussion assumes that we can determine a system's uses relation from its units' interface specifications. If the interface specifications do not completely describe the units' dependencies, then we will need to know enough about each unit's planned implementations to know which other units it will use.

Figure 6.8 depicts the uses relation of a system as a uses graph, in which nodes represent software units, and directed edges run from the using units, such as A, to the used units, such as B. Such a uses graph can help us to identify progressively larger subsets of our system that we can implement and test incrementally. A subset of our system is some useful subprogram together with all of the software units that it uses, and all of the software units that those units use, and so on. "Conceptually, we pluck a program P1 out from the uses graph, and then see what programs come dangling beneath it. This is our subset" (Clements et al. 2003). Thus, the degree to which our system can be constructed incrementally depends on the degree to which we can find useful small subsets that we can implement and test early.
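This "plucking" can be computed mechanically from the uses graph. The following Java sketch (added here; the representation is a hypothetical one) collects a chosen unit together with everything that dangles beneath it:

    import java.util.*;

    class UsesGraph {
        // Maps each unit to the units it directly uses.
        private final Map<String, List<String>> uses = new HashMap<>();

        void addUse(String from, String to) {
            uses.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
        }

        // Returns the subset: the unit plus all units it transitively uses.
        Set<String> subsetRootedAt(String unit) {
            Set<String> subset = new LinkedHashSet<>();
            Deque<String> work = new ArrayDeque<>(List.of(unit));
            while (!work.isEmpty()) {
                String u = work.pop();
                if (subset.add(u)) // visit each unit once, even if cycles exist
                    work.addAll(uses.getOrDefault(u, List.of()));
            }
            return subset;
        }
    }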

A uses graph can also help us to identify areas of the design that could be improved, with respect to enabling incremental development. For example, consider Designs 1 and 2 in Figure 6.8 as two possible designs of the same system. We use the term fan-in to refer to the number of units that use a particular software unit, and the term fan-out to refer to the number of units used by a software unit. Thus, unit A has a fan-out of three in Design 1 but a fan-out of five in Design 2. In general, we want to minimize the number of units with high fan-out. High fan-out usually indicates that the software unit is doing too much and probably ought to be decomposed into smaller, simpler units. Thus, Design 1 may be better than Design 2, because its components have lower fan-out. On the other hand, if several units perform similar functions, such as string searching, then we may prefer to combine these units into a single, more general-purpose unit that can be used in place of any of the original units. Such a utility unit is likely to have high fan-in. One of our goals in designing a system is to create software units with high fan-in and low fan-out.

FIGURE 6.8 Uses graphs for two designs. [Figure: two uses graphs, Design 1 and Design 2, of the same system; unit A uses three units in Design 1 and five in Design 2.]

FIGURE 6.9 Sandwiching, to break a cycle in a uses graph. [Figure: (a) a uses graph with a cycle between units A and B; (b) unit B decomposed into B1 and B2, breaking the cycle; (c) sandwiching applied twice, once to unit A and once to unit B, yielding two shorter dependency chains.]

Consider another example, shown as a uses graph in Figure 6.9(a). The cycle in this uses graph identifies a collection of units that are mutually dependent on each other. Such cycles are not necessarily bad. If the problem that the units are solving is naturally recursive, then it makes sense for the design to include modules that are mutually recursive. But large cycles limit the design's ability to support incremental development: none of the units in the cycle can be developed (i.e., implemented, tested, debugged) until all of the cycle's units are developed. Moreover, we cannot choose to build a subset of our system that incorporates only some of the cycle's units. We can try to break a cycle in the uses graph using a technique called sandwiching (Parnas 1978b). In sandwiching, one of the cycle's units is decomposed into two units, such that one of the new units has no dependencies (e.g., unit B2 in Figure 6.9(b)) and the other has no dependents (e.g., unit B1 in Figure 6.9(b)). Sandwiching can be applied more than once, to break either mutual dependencies in tightly coupled units or long dependency chains. Figure 6.9(c) shows the results of applying sandwiching twice, once to unit A and once to unit B, to transform a dependency loop into two shorter dependency chains. Of course, sandwiching works only if a unit's data and functionality can be cleanly partitioned, which is not always the case. In the next section, we explore a more sophisticated technique, called dependency inversion, that uses OO technologies to reverse the direction of dependency between two units, thereby breaking the cycle.
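As a brief preview, here is a hedged Java sketch (added here, with hypothetical names) of how an interface can reverse one direction of dependency: Scheduler uses Notifier, but Notifier, instead of using Scheduler back, depends only on a small interface that Scheduler implements, so the cycle between the concrete units disappears:

    // The interface breaks the cycle: Notifier depends on the abstraction only.
    interface StatusSource {
        String statusOf(int jobId);
    }

    class Notifier {
        private final StatusSource source;
        Notifier(StatusSource source) { this.source = source; }
        void notifyUser(int jobId) {
            System.out.println("Job " + jobId + ": " + source.statusOf(jobId));
        }
    }

    class Scheduler implements StatusSource {
        private final Notifier notifier = new Notifier(this);   // Scheduler may still use Notifier
        public String statusOf(int jobId) { return "running"; } // stub for illustration
    }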

The best uses graph has a tree structure or is a forest of tree structures. In such a structure, every subtree is a subset of our system, so we can incrementally develop our system one software unit at a time. Each completed unit is a correct implementation of part of our system. Each increment is easier to test and correct because faults are more likely to be found in the new code, not in the called units that have already been tested and deemed correct. In addition, we always have a working version of a system subset to demonstrate to our customer. Moreover, morale among developers is high, because they frequently make visible progress on the system (Brooks 1995). Contrast these advantages of incremental development to its alternative, in which nothing works until everything works.

Abstraction

An abstraction is a model or representation that omits some details so that it can focus on other details. The definition is vague about which details are left out of a model, because different abstractions, built for different purposes, omit different kinds of details. For this reason, it may be easier to understand the general notion of abstraction by reviewing the types of abstractions we have encountered already.

We discussed decomposition in Chapter 5. Figure 5.5 was an example of a decomposition hierarchy: a frequently used form of abstraction in which the system is divided into subsystems, which in turn are divided into sub-subsystems, and so on. The top level of the decomposition provides a system-level overview of the solution, hiding details that might otherwise distract us from the design functions and features we want to study and understand. As we move to lower levels of abstraction, we find more detail about each software unit, in terms of its major elements and the relations among those elements. In this way, each level of abstraction hides information about how its elements are further decomposed. Instead, each element is described by an interface specification, another type of abstraction that focuses on the element's external behaviors and avoids any reference to the element's internal design details; those details are revealed in models at the next level of decomposition.

As we saw in Chapter 5, there may not be a single decomposition of the system. Rather, we may create several decompositions that show different structures. For instance, one view may show the system's different runtime processes and their interconnections, and another view may show the system's decomposition into code units. Each of these views is an abstraction that highlights one aspect of the system's structural design (e.g., runtime processes) and ignores other structural information (e.g., code units) and nonstructural details.

A fourth type of abstraction is the virtual machine, such as that in a layered architecture. Each layer i in the architecture uses the services provided by the layer i - 1 below it, to create more powerful and reliable services, which it then offers to the layer i + 1 above it. Recall that in a true layered architecture, a layer can access only those services offered by the layer directly below it, and cannot access the services of lower-level layers (and certainly not the services of the higher-level layers). As such, layer i is a virtual machine that abstracts away the details of the lower-level layers and presents only its services to the next layer; the guiding design principle is that layer i's services are improvements over the lower layers' services and thus supersede them.

The key to writing good abstractions is determining, for a particular model, which details are extraneous and can therefore be ignored. The nature of the abstraction depends on why we are building the model in the first place: what information we want to communicate, or what analysis we want to perform. Sidebar 6.3 illustrates how we can model different abstractions of an algorithm for different purposes.

Generality

Recall from Chapter 1 that one of Wasserman's principles of software engineering is reusability: creating software units that may be used again in future software products. The goal is to amortize the cost of developing the unit by using it multiple times. (Amortization means that we consider the cost of a software unit in terms of its cost per use rather than associate the full cost with the project that developed the unit.) Generality is the design principle that makes a software unit as universally applicable as possible, to increase the chance that it will be useful in some future system.



SIDEBAR 6.3 USING ABSTRACTION

We can use abstraction to view different aspects of our design. Suppose that one of the system functions is to sort the elements of a list L. The initial description of the design is

Sort L in nondecreasing order

The next level of abstraction may be a particular algorithm:

DO WHILE I is between 1 and (length of L)-1:

Set LOW to index of smallest value in L(I), ... , L(length of L)

Interchange L(I) and L(LOW)

END DO

The algorithm provides a great deal of additional information. It tells us the procedure that will be used to perform the sort operation on L. However, it can be made even more detailed. The third and final algorithm describes exactly how the sorting operation will work:

DO WHILE I is between 1 and (length of L)-1
    Set LOW to current value of I
    DO WHILE J is between I+1 and (length of L)
        IF L(LOW) is greater than L(J)
            THEN set LOW to current value of J
        ENDIF
    END DO
    Set TEMP to L(LOW)
    Set L(LOW) to L(I)
    Set L(I) to TEMP
END DO

Each level of abstraction serves a purpose. If we care only about what L looks like before and after sorting, then the first abstraction provides all the information we need. If we

are concerned about the speed of the algorithm, then the second level of abstraction provides sufficient detail to analyze the algorithm's complexity. However, if we are writing code for the sorting operation, the third level of abstraction tells us exactly what is to happen; little additional information is needed.
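Indeed, a developer could transliterate the third description almost mechanically; one possible Java rendering (added here) is:

    class SelectionSort {
        // Selection sort of L into nondecreasing order, following the
        // sidebar's most detailed level of abstraction.
        static void sort(int[] L) {
            for (int i = 0; i < L.length - 1; i++) {
                int low = i;                       // index of smallest value so far
                for (int j = i + 1; j < L.length; j++)
                    if (L[low] > L[j])
                        low = j;
                int temp = L[low];                 // interchange L[i] and L[low]
                L[low] = L[i];
                L[i] = temp;
            }
        }
    }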

If we were presented only with the third level of abstraction, we might not discern immediately that the procedure describes a sorting algorithm; with the first level, the nature of the procedure is obvious, whereas the third level distracts us from the real nature of the procedure. In each case, abstraction keeps us focused on the purpose of the respective description.


We make a unit more general by increasing the number of contexts in which it can be used. There are several ways of doing this:

• Parameterizing context-specific information: We create a more general version of our software by making into parameters the data on which it operates.

• Removing preconditions: We remove preconditions by making our software work under conditions that we previously assumed would never happen.

• Simplifying postconditions: We reduce postconditions by splitting a complex software unit into multiple units that divide responsibility for providing the postconditions. The units can be used together to solve the original problem, or used separately when only a subset of the postconditions is needed.

For example, the following four procedure interfaces are listed in order of increasing generality:

PROCEDURE SUM: INTEGER;

POSTCONDITION: returns sum of 3 global variables

PROCEDURE SUM (a, b, c : INTEGER) : INTEGER;

POSTCONDITION: returns sum of parameters

PROCEDURE SUM (a [ ] : INTEGER; len: INTEGER) : INTEGER

PRECONDITION: 0 <= len <= size of array a

POSTCONDITION: returns sum of elements 1 .. len in array a

PROCEDURE SUM (a [ ] : INTEGER) : INTEGER

POSTCONDITION: returns sum of elements in array a

The first procedure works only in contexts where global variables have names that match the names used within the procedure body. The second procedure no longer needs to know the names of the actual variables being summed, but its use is restricted to summing exactly three variables. The third procedure can sum any number of variables, but the calling code must specify the number of elements to sum. The last procedure sums all of the elements in its array parameter. Thus, the more general the procedure, the more likely it is that we can reuse the procedure in a new context by modifying its input parameters rather than its implementation.

Although we would always like to create reusable units, other design goals sometimes conflict with this goal. We saw in Chapter 1 that software engineering differs from computer science in part by focusing on context-specific software solutions. That is, we tailor our solution for the specific needs of our customer. The system's requirements specification lists specific design criteria (e.g., performance, efficiency) to optimize in the design and code. Often, this customization decreases the software's generality, reflecting the trade-off between generality (and therefore reusability) and customization. There is no general rule to help us balance these competing design goals. The choice depends on the situation, the importance of the design criteria, and the utility of a more general version.

6.3 OO DESIGN

Design characteristics have significant effects on subsequent development, maintenance, and evolution. For this reason, new software engineering technologies are frequently



created to help developers adhere to the design principles we introduced in the last section. For example, design methodologies codify advice on how to use abstraction, separation of concerns, and interfaces to decompose a system into software units that are modular. OO methodologies are the most popular and sophisticated design methodologies. We call a design object oriented if it decomposes a system into a collection of runtime components called objects that encapsulate data and functionality. The following features distinguish objects from other types of components:

• Objects are uniquely identifiable runtime entities that can be designated as the target of a message or request.

• Objects can be composed, in that an object's data variables may themselves be objects, thereby encapsulating the implementations of the object's internal variables.

• The implementation of an object can be reused and extended via inheritance, to define the implementation of other objects.

• OO code can be polymorphic: written in generic code that works with objects of different but related types. Objects of related types respond to the same set of messages or requests, but each object's response to a request depends on the object's specific type.

In this section, we review these features and some of the design choices they pose, and we present heuristics for improving the quality of OO designs. By using OO features to their best advantage, we can create designs that respect design principles.

Terminology

The runtime structure of an OO system is a set of objects, each of which is a cohesive collection of data plus all operations for creating, reading, altering, and destroying those data. An object's data are called attributes, and its operations are called methods. Objects interact by sending messages to invoke each other's methods. On receiving a message, an object executes the associated method, which reads or modifies the object's data and perhaps issues messages to other objects; when the method terminates, the object sends the results back to the requesting object.

Objects are primarily runtime entities. As such, they are often not represented directly in software designs. Instead, an OO design comprises objects' classes and interfaces. An interface advertises the set of externally accessible attributes and methods. This information is typically limited to public methods, and includes the methods' signatures, preconditions, postconditions, protocol requirements, and visible quality attributes. Thus, like other interfaces, the interface of an object represents the object's public face, specifying all aspects of the object's externally observable behavior. Other system components that need to access the object's data must do so indirectly, by invoking the methods advertised in the object's interface. An object may have multiple interfaces, each offering a different level of access to the object's data and methods. Such interfaces are hierarchically related by type: if one interface offers a strict subset of the services that another interface offers, we say that the first interface is a subtype of the second interface (the supertype).

An object's implementation details are encapsulated in its class definition. To be precise, a class is a software module that partially or totally implements an abstract data type (Meyer 1997). It includes definitions of the attributes' data; declarations of the methods that operate on the data; and implementations of some or all of its methods. Thus, it is the class modules that contain the actual code that implements the objects' data representations and method procedures. If a class is missing implementations for some of its methods, we say that it is an abstract class. Some OO notations, including the Unified Modeling Language (UML), do not separate an object's interface from its class module; in such notations, the class definition distinguishes between public definitions (constituting the interface) and private definitions (constituting the class's implementation). In this chapter, we consider interfaces and classes to be distinct entities, because the set of objects satisfying an interface can be much larger than the set of objects instantiating a class definition. Graphically, we distinguish an interface or abstract class from other classes by italicizing its name and the names of its unimplemented methods.

Suppose we are designing a program that logs all sales transactions for a particular store. Figure 6.10 shows a partial design of a Sale class that defines attributes to store information associated with a sale (such as the list of items sold, their prices, and sales tax). The class implements a number of operations on transaction data (such as adding or removing an item from the transaction, computing the sales tax, or voiding the sale). Each Sale object in our program is an instance of this class: each object encapsulates a distinct copy of the class's data variables and pointers to the class's operations. Moreover, the class definition includes constructor methods that spawn new object instances. Thus, during execution, our program can instantiate new Sale objects to record the details of each sale as the sale occurs.
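A fragment of the Sale class might be coded in Java as follows (a sketch added here; the figure's Money type is simplified to double, the tax rate is assumed, and the method bodies are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    // Minimal stand-ins for the component types shown in Figure 6.10.
    record Date(int day, int month, int year) {}
    record Item(String productNo, String description, double price) {}

    class Sale {
        private final List<Item> items = new ArrayList<>();
        private final Date date;              // composition: each Sale holds a Date

        Sale(Date date) { this.date = date; } // constructor spawns a new instance

        void addItem(Item item) { items.add(item); }
        void removeItem(String productNo) {
            items.removeIf(item -> item.productNo().equals(productNo));
        }
        double computeSubtotal() {
            return items.stream().mapToDouble(Item::price).sum();
        }
        double computeTax()   { return computeSubtotal() * 0.05; } // assumed rate
        double computeTotal() { return computeSubtotal() + computeTax(); }
        void voidSale()       { items.clear(); }                   // illustrative
    }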

We also have instance variables, which are program variables whose values are references to objects. An object is a distinct value of its class type, just as '3' is a value of the INTEGER data type. Thus, an instance variable can refer to different object instances during program execution, in the same way that an integer variable can be assigned different integer values. However, there is a critical difference between instance variables and traditional program variables. An instance variable can be declared to have an interface type, rather than be a particular class type (assuming that interfaces and classes are distinct entities); in this case, an instance variable can refer to objects of any class that implements the variable's (interface) type.

Stla Dal•

. I dty: 1..31 subtotal : Monty nont~ : t .• 12 tu : Mouy ••le '"' total : Mouy

year : lnt19ar

tddlte11( I tan) ta111ovalta111lprodut1N o.) ltn 1 eo111put1S1bto11I()

~

. product No. eo11put1Tul) -

eo11puteT011l l) fttllll

voldSt la() duertptlH p11e1 : Monay

FIGURE 6.10 Partial design of a Sale class.

Openmirrors.com

Section 6.3 00 Design 313

of the sub typing relation among inte rfaces, an instance variable can also refer to objects of any class that implements some ancestor supertype of t!he variable's (inte rface) type. The variable can even refer to objects of diffe rent classes over the course of a pro- gram's execution. This fiexibiLity is called dynamic binding, because the objects to wh.ich the variables refer cannot be infe rred by examining the code. We write code tha t operates on instance variables accordfog to their inte rfaces, but the actual behavior of that code varies during program execution, depending on the types of objects on wh.ich the code is operating.
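A small, self-contained Java sketch of dynamic binding follows (all names are hypothetical): the variable's declared type is an interface, and the code's behavior depends on which class's object the variable refers to at that moment.

    interface Priced { long totalInCents(); }      // the variable's (interface) type

    class InStoreSale implements Priced {
        public long totalInCents() { return 500; }       // stub value for illustration
    }

    class OnlineSale implements Priced {
        public long totalInCents() { return 500 + 99; }  // adds an assumed shipping fee
    }

    class BindingDemo {
        public static void main(String[] args) {
            Priced p = new InStoreSale();          // refers to one class's object...
            System.out.println(p.totalInCents());  // 500: InStoreSale's method runs
            p = new OnlineSale();                  // ...and later to another's
            System.out.println(p.totalInCents());  // 599: OnlineSale's method runs
        }
    }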

Figure 6.11 shows these four OO constructs (classes, objects, interfaces, and instance variables) and how they are related. Directed arrows depict the relationships between constructs, and the adornments at the ends of each arrow indicate the multiplicity (sometimes called the "arity") of the relationship; the multiplicity tells us how many of an item may exist. For example, the relationship between instance variables and objects is many (*) to one (1), meaning that many instance variables may refer to the same object at any point in a program's execution. Some of the other relationships merit mention:

• Each class encapsulates the implementation details of one or more interfaces. A class that is declared to implement one interface also implicitly (via inheritance) implements all of the interface's supertypes.

• Each interface is implemented by one or more classes. For example, different class implementations may emphasize different quality attributes.

• Each object is an instance of one class, whose attribute and method definitions determine what data the object can hold and what method implementations the object executes.

• Multiple instance variables of different types may refer to the same object, as long as the object's class implements (directly or implicitly via supertypes) each variable's (interface) type.

• Each instance variable's type (i.e., interface) determines what data and methods can be accessed using that variable.

Both the separation of object instances from instance variables and the separation of interfaces from class definitions give us considerable flexibility in encapsulating design decisions and in modifying and reusing designs.

FIGURE 6.11 Meta-model of OO constructs. [Figure: objects are run-time entities, while classes, interfaces, and instance-variable declarations belong to the code. An instance variable references at most one object at a time, each object is an instance of one class, each class implements one or more interfaces, and interfaces are related by subtyping.]

Support for reuse is a key characteristic of OO design. For example, we can build new classes by combining component classes, much as children build structures from building blocks. Such construction is done by object composition, whereby we define a class's attributes to be instance variables of some interface type. For example, the Sale class defined in Figure 6.10 uses composition to maintain an aggregated record of the Items sold, and uses a component Date object to record the date of the sale. An advantage of object composition is its support of modularity; the composite class knows nothing about the implementations of its object-based attributes and can manipulate these attributes only by using their interfaces. As such, we can easily replace one class component with another, as long as the replacement complies with the same interface. This technique is much the same as replacing a red building block with a blue one, as long as the two blocks are the same size and shape.
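The sketch below illustrates the idea in Java (the Clock interface and class names are hypothetical stand-ins for the Date component): the composite holds its component behind an interface, so any same-shaped "block" can be substituted.

    interface Clock { String today(); }            // the component's advertised interface

    class SystemClock implements Clock {
        public String today() { return java.time.LocalDate.now().toString(); }
    }

    class LoggedSale {
        private final Clock clock;                 // composition: component held by interface

        LoggedSale(Clock clock) { this.clock = clock; }

        String receiptHeader() {                   // manipulates the component only
            return "Sale recorded on " + clock.today();   // through its interface
        }
    }

A fixed-date test double implementing Clock could replace SystemClock without touching LoggedSale, exactly the red-for-blue block swap described above.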

Alternatively, we can build new classes by extending or modifying definitions of existing classes. This kind of construction, called inheritance, defines a new class by directly reusing (and adding to) the definitions of an existing class. Inheritance is comparable to creating a new type of building block by drilling holes in an existing block. In an inheritance relation, the existing class is called the parent class. The new class, called a subclass, is said to "inherit" the parent class's data and function definitions. To see how inheritance works, suppose that we want to create a Bulk Sale class for recording large sales transactions in which a buyer qualifies for a discount. If we define Bulk Sale as an extension of our regular Sale class, as in Figure 6.12, then we need to provide only the definitions that distinguish Bulk Sale from its parent class, Sale. These definitions include new attributes to record the discount rates, and a revised method that applies the discount when totaling the cost of the sale. Any Bulk Sale object will comprise attributes and methods defined in the parent Sale class together with those defined in the Bulk Sale class.

FIGURE 6.12 Example of inheritance. [Figure: Bulk Sale extends the Sale class of Figure 6.10, adding component Discount records (threshold : Money, rate : Percentage) and the methods addDiscount(threshold, rate), removeDiscount(rate), computeDiscountedSubtotal(), computeDiscountedTax(), and computeDiscountedTotal().]
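In Java, building on the hypothetical Sale sketch shown earlier, the extension might look like this (a single discount rate is an assumed simplification of the figure's Discount records):

    // BulkSale reuses Sale's definitions and supplies only what differs:
    // a new attribute plus a revised total computation.
    class BulkSale extends Sale {
        private double discountRate;              // new attribute, e.g., 0.10 for 10%

        void addDiscount(double rate) { discountRate = rate; }

        @Override
        long computeTotal() {                     // revised method applies the discount
            return Math.round(super.computeTotal() * (1.0 - discountRate));
        }
    }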

Object orientation also supports polymorphism, whereby code is written in terms of interactions with an interface, but code behavior depends on the object associated with the interface at runtime and on the implementations of that object's methods. Objects of different types may react to the same message by producing type-specific responses. Designers and programmers need not know the exact types of the objects that the polymorphic code is manipulating. Rather, they need make sure only that the code adheres to the instance variables' interfaces; they rely on each object's class to specialize how objects of that class should respond to messages. In the Sales program, the code for finalizing a purchase can simply request the total cost from the appropriate Sale object. How the cost is calculated (i.e., which method is executed and whether a discount is applied) will depend on whether the object is an ordinary Sale or a Bulk Sale.
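Continuing the hypothetical sketches above, the checkout code can be written once against Sale; whether the discounted or undiscounted total is computed is decided by the receiving object at runtime.

    class Checkout {
        // Written against Sale's interface; the actual method executed depends
        // on the runtime class of the argument (Sale or BulkSale).
        static long amountDue(Sale sale) {
            return sale.computeTotal();
        }
    }
    // amountDue(new Sale()) runs Sale's computeTotal(); passing a BulkSale with
    // a discount added runs BulkSale's overriding version instead.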

Inheritance, object composition, and polymorphism are important features of an OO design that make the resulting system more useful in many ways. The next section discusses strategies for using these concepts effectively.

Inheritance vs. Object Composition

A key design decision is determining how best to structure and relate complex objects. In an OO system, there are two main techniques for constructing large objects: inheritance and composition. That is, a new class can be created by extending and overriding the behavior of an existing class, or it can be created by combining simpler classes to form a composite class. The distinction between these two approaches is exhibited by examples similar to those of Bertrand Meyer (1997), shown in Figure 6.13. On the left, a Software Engineer is defined as a subclass of Engineer and inherits its parent class's engineering capabilities. On the right, a Software Engineer is defined as a composite class that possesses engineering capabilities from its component Engineer object. Note that both approaches enable design reuse and extension. That is, in both approaches, the reused code is maintained as a separate class (i.e., the parent class or the component object), and the new class (i.e., the subclass or the composite object) extends this behavior by introducing new attributes and methods and not by modifying the reused code. Moreover, because the reused code remains encapsulated as a separate class, we can safely change its implementation and thereby indirectly update the behavior of the new class. Thus, changes to the Engineer class are automatically realized in the Software Engineer class, regardless of whether the Software Engineer class is constructed using inheritance or composition.

FIGURE 6.13 Class inheritance (left) vs. object construction (right). [Figure: on the left, Software Engineer is a subclass of Engineer; on the right, Software Engineer is a composite class whose engCapabilities attribute refers to a component Engineer object.]


Each construction paradigm has advantages and disadvantages. Composition is better than inheritance at preserving the encapsulation of the reused code, because a composite object accesses the component only through its advertised interface. In our example, a Software Engineer would access and update its engineering capabilities using calls to its component's methods. By contrast, a subclass may have direct access to its inherited attributes, depending on the design. The greatest advantage of composition is that it allows dynamic substitution of object components. The component object is an attribute variable of its composite object and, as with any variable, its value can be changed during program execution. Moreover, if the component is defined in terms of an interface, then it can be replaced with an object of a different but compatible type. In the case of the composite Software Engineer, we can change its engineering capabilities, including method implementations, by reassigning its engCapabilities attribute to another object. This degree of variability poses its own problems, though. Because composition-designed systems can be reconfigured at runtime, it can be harder to visualize and reason about a program's runtime structure simply by studying the code. It is not always clear which objects reference which other objects. A second disadvantage is that object composition introduces a level of indirection. Every access of a component's methods must first access the component object; this indirection may affect runtime performance.
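A Java sketch of this dynamic substitution follows, using hypothetical names patterned on Figure 6.13: because engCapabilities is typed by an interface, it can be rebound to a different implementation while the program runs.

    interface EngCapabilities { String design(); }

    class CivilSkills implements EngCapabilities {
        public String design() { return "bridge blueprint"; }
    }

    class SoftwareSkills implements EngCapabilities {
        public String design() { return "system architecture"; }
    }

    class SoftwareEngineer {
        private EngCapabilities engCapabilities = new CivilSkills();

        void retrain(EngCapabilities replacement) {  // runtime substitution point
            engCapabilities = replacement;
        }

        String produce() {                           // note the level of indirection:
            return engCapabilities.design();         // every call goes via the component
        }
    }

Calling retrain(new SoftwareSkills()) changes the object's behavior without altering the SoftwareEngineer class itself, which is exactly the flexibility, and the extra indirection, that the paragraph above describes.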

By contrast, using the inheritance approach, the subclass's implementation is determined at design time and is static. The resulting objects are less flexible than objects instantiated from composite classes because the methods they inherit from their parent class cannot be changed at runtime. Moreover, because the inherited properties of the parent class are usually visible to the subclass, if not