Software engineering academics have been exploring software metrics for several decades and have produced various measures that show promise for building a predictive theory of software: measures of cohesion and coupling, as well as cyclomatic complexity, are examples. While these were originally envisioned for code-level analysis, they have been applied at higher levels of abstraction with some success. This is a well-recognized area of research and one that will continue to develop; it belongs within the scope of education for a well-rounded software architect.
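Cyclomatic complexity, for instance, can be computed mechanically from a function's control flow: one plus the number of decision points. Here is a minimal sketch in Python; the AST-based counting is an illustrative approximation of McCabe's measure, not a complete implementation, and the `classify` snippet is a made-up example.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: 1 plus the number
    of decision points found in the parsed source."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        # Each branching construct adds one independent path.
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'and'/'or' chains add one decision per extra operand.
            decisions += len(node.values) - 1
    return decisions + 1

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "maybe prime"
"""
# Three if/elif branches plus one loop: 4 decisions, complexity 5.
print(cyclomatic_complexity(snippet))  # 5
```

A measure like this is attractive precisely because it can be applied automatically across an entire code base, which is what makes it a candidate building block for a predictive theory.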
In contrast to these quantitative metrics, a great deal of attention is paid to the methodologies used in the system development life cycle as they relate to the creation of a software architecture. While a good case can be made that the study of these methodologies belongs more properly to management information science, I believe it is a mistake to separate the methodologies completely from the more quantitative aspects of software engineering. A multi-disciplinary approach is required, since there should be no bright line between software architecture and the project roles through which a software architect may rise. The development of a talented software architect must balance the "harder" engineering knowledge with the "softer" management knowledge to achieve the synergy that produces the most capable practitioner.
Most software engineering education focuses on the lower-level aspects of design, specifically at the level of the object or module. This is necessary: without an understanding of the basic concepts of data structures, control structures, formal syntax and object-oriented methods, a true understanding of a software-oriented system is not possible. An understanding of the inherent limits of the tools of software engineering is needed, just as strength of materials is needed for a civil engineer or basic chemistry for a chemical engineer. These are the hard stops that can be encountered, and the engineer must understand them if successful designs are to be created.
However as the size and complexity of the software systems grow, the layers of abstraction must also grow if the resulting system is to remain comprehensible. The literature is rife with case studies of systems that were created by individuals over an extended period of time whose design is really only known by the creator and exists in no communicable artifact anywhere. With luck, there is always some underling who is prepared to fill the role of the designer if that person leaves the organization. This illustrates several different ways in which software architecture begins to separate from the lower levels of design.
First, note that if you envision a system small enough to be the product of a single person, that suggests the person single-handedly designed and implemented the system. There was no separation of roles between the designer and the builder. It should be self-evident that this organizational structure limits the size and complexity of the system to be built and the time frame in which it can be built. Some exceptional people have built sophisticated systems in short periods of time, but for the purposes of an engineering discipline we ignore these outliers, since such feats are difficult to repeat with any dependability. This is outside the realm of what we aspire to in the day-to-day world of engineering. We may all hope to rise to those ranks, but it should never be the expectation that a sole person will create something so extraordinary.
Once the problem becomes so large that it is not reasonable for a single individual to create the required system, it is natural that separation of concerns and focus on skill sets begin to create distinct roles, and that these roles are brought together into a team environment. Given the complexity of modern programming languages and the tools that surround them, the role of the coder was established long ago. It often provides an entry-level position for the software engineer, since the specification for a module can be so highly structured as to leave relatively little latitude for the coder, while still allowing him to learn the business architecture of his client and the existing architecture of the system under construction (or maintenance).
Defining the role of the coder immediately creates a new role: that of the person who writes the specification for the module to be coded. The practice once was that this responsibility fell to a business analyst, whose job was to gather requirements and specify the modules that needed to be created.
Since there would often be many coders, and often different levels of coders depending upon their capabilities, the project team would be sufficiently large that it required a team lead who would report to the client on management matters such as schedule and budget.
In many ways this naive team structure hasn't existed in exactly this form for a generation or more. It has been found wanting in the same way that the waterfall methodologies created in the formative stages of software engineering were inadequate to describe what was actually done, as opposed to serving as a helpful intellectual model of the supposed ideal. But before we explore the ways in which this structure was left behind, let's continue by looking at the legacy these early methodologies left.
Much systems thinking probably comes from work in the military-industrial complex. The large, mission-critical projects taken on by those organizations inherited from the military a culture of taking a very large effort and deconstructing it into a set of smaller tasks with the needed oversight and control. Whatever else is said about the waterfall model, it has been the reference for how a large work team should be organized. I'll assume for now that you understand that model and plow on.
As befits their origin, systems development methodologies reflected a mechanistic attitude toward the creation of a software system, with all the inputs, processes and outputs neatly laid out in a graph showing the predecessor tasks and artifacts of each process and specifying its outputs. If the inputs and processes are correct, the outputs will be sufficient for the successor tasks. The entire process would flow as smoothly as a well-oiled machine.
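That predecessor graph is just a directed acyclic graph, and the "flow" of the plan is a topological ordering of it. A minimal sketch in Python makes the mechanistic assumption concrete; the phase names here are illustrative, not drawn from any particular methodology.

```python
from graphlib import TopologicalSorter

# Hypothetical waterfall task graph: each phase maps to the set of
# predecessor phases whose outputs it consumes as inputs.
phases = {
    "requirements": set(),
    "design": {"requirements"},
    "implementation": {"design"},
    "verification": {"implementation"},
    "maintenance": {"verification"},
}

# A topological order is a schedule in which every task starts only
# after all of its predecessors have delivered their artifacts.
order = list(TopologicalSorter(phases).static_order())
print(order)
# ['requirements', 'design', 'implementation', 'verification', 'maintenance']
```

The fragility of the model is visible even in this toy: the schedule is only valid if every edge is known up front and no phase ever has to feed information backward.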
The most basic assumption of a strictly enforced waterfall project plan is that everything that is needed to make important decisions is known at the conclusion of the requirements phase of the project. Details may need to be worked out but the ability to see the structure to be built is sufficient to allow for prediction of the cost and effort.
This assumption has been more wrong than right in practice. Successful projects seem to be the product of shrewd negotiators with enough experience to argue for sufficient resources in the absence of hard data, to secure that funding from business managers, and to manage the project by limiting its scope to the money and time available, not to some theoretical document that adequately articulates all of the requirements for the product to be built.
This cynical discussion is tangential to my main point, but it is important for establishing the context in which software engineering takes place in the real world. To ignore the real world and embrace some model of perfectly logical business managers is about as realistic as an architect designing a building in which the wind will never exceed 10 mph; it is fantasy or art, not engineering. In engineering you substitute reality for desire and accept the limits, whether they come from logic or from the results of social science.
The second big fallacy of the well-developed waterfall methodologies of the past is the assumption that the future is like the past, that the system to be created is sufficiently like other systems that all the tasks and artifacts can be predicted. Many failures are attributable to the requirements gathering phase, and those errors are expensive to fix, if they can be fixed at all within the time and budget constraints. A sufficiently experienced team may know the needs of the client better than the client does. In those cases, the project can be steered toward success even when the requirements gathering has technically been incomplete, inconsistent or incorrect. In many captive development shops, this has been the state of affairs for many years. The success may be attributed to the methodology, but in reality it is due to the staff, not the tool.
This suggests another reason why the waterfall methodology is flawed. Business managers who must make hire and fire decisions must provide the workers to staff the project. Yet their ability to assess the capabilities of the untested workers is limited. Hence, just because someone fills a particular role on a project team is no guarantee they will do their job well. Some of this can be addressed by a good quality assurance program but often the organizations with many new workers, business managers with little experience staffing projects and a project with tight time and money constraints are the same organizations that have poor quality assurance programs. Again, the historical roots in the military-industrial complex are not carried over into a commercial environment since the imperative of the mission critical project means something wholly different in a military context than it does in most commerce.
There have been two very different reactions to the failure of these waterfall methodologies. One reaction is to impose a quality assurance program. By its nature, a quality assurance program requires artifacts against which verification and validation can be performed. For very large projects, these artifacts are complex and expensive to produce; yet without them no QA can be performed. The somewhat logical response of management to project failures was to strengthen the quality processes, placing greater emphasis on the artifacts, or adding to the artifacts that must be created, in an attempt to detect project problems earlier and mount corrective action. It should be clear that this can become a self-reinforcing feedback loop. After a few cycles the systems development life cycle becomes a bureaucratic morass of paperwork to be filled out and documents to be created, which go to committees for review and approval before work can officially progress.
The frustration software engineers felt when caught in this kind of environment led to the Agile Manifesto, a cri de coeur from the software engineers who saw the folly of this progression. Here is one version:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
Here is how I interpret this declaration and what it opposes. The opposition to processes (i.e., methodologies) is explicit. A methodology exists to promote the proper interactions of the individuals on the team, but it had become a straitjacket, forcing developers to ignore their instincts and shutting down debate rather than supporting it. This was particularly attractive to the Gen X crowd and suited the changing realities of software engineering decisions. The culture of top-down management and unidirectional communication no longer made sense in such a complex development environment. People needed to work more cooperatively, exert greater thought, and depend less on "tools", whether those were programming languages (which can lead to academic discussions among well-educated software engineers that are beside the point), rubrics (which were probably best adapted to the technology of a prior generation at best), or project management systems that sought to measure and categorize every hour expended on the development. There was an intuitive understanding that people needed to talk to each other and to develop a sense of shared commitment to success.
The second point of the manifesto was a reaction to the exceedingly long lag between project initiation and the delivery of a working product. Even in the best of circumstances, the business reality can fundamentally change in that period. Consumers, commerce and government wanted to be more nimble and to respond to changes more quickly. While human processes can be changed relatively quickly, automated processes were proving very difficult to change; once created, software was effectively unmodifiable. Even while the project was ongoing, responding to a change request was often contentious and difficult to assimilate without impact to the budget and schedule.
What the Agile Manifesto implicitly required was a form of iterative development in which the delivery of working software was accelerated, even when the product did not yet completely solve the problem. One of the stated reasons was that since requirements gathering and documentation were so fallible, why rely on them at all? Why not create a prototype that would embody what the developers believed was needed from conversations with the client, and then demonstrate it? Clients respond more favorably, and more constructively, to a working prototype than to an abstract document that they are never sure they completely understand. The cycle of development can become much shorter, ideally measured in weeks, with the ultimate product developed by continual iteration, getting closer to the final product with each pass.
The call for customer collaboration was a reaction to the inherent animosity that the unbridled waterfall methodology created between the client and the development organization. The model envisioned that the customer could collaborate on the creation of a requirements document that would act as a contract for development. Either explicitly or implicitly the client was asked to sign off on the requirements document. Inevitably developers would attempt to design the system against this document. Misunderstandings or errors of incompleteness, inaccuracy or inconsistency would eventually be found and the impact to the budget and schedule would lead to acrimonious discussions between the development organization and the client regarding the interpretation of the requirements document.
Agile sought to sidestep this unhelpful dynamic by stressing the continuing role of the client throughout the development process. If the client could not, or would not, commit the proper resources to answer questions as they arose, instead of depending upon a requirements document that was never complete enough, that failure became a major indicator that the project was already in trouble long before code was created. A sense of shared commitment and responsibility has always been needed for successful projects; the Manifesto reminded everyone of it.
Systems developed under the waterfall methodologies were often brittle and unmodifiable. Attempts to create systems that were modifiable often led to very complex designs as the developers attempted to make as much as possible easier to change. Inevitably their attempts failed as the cost of this complex design was difficult to deliver at an acceptable price and there always seemed to be one more inflection point in the design that had not been handled.
What the Agile Manifesto stressed was the inevitability of change and the need of everyone involved in the development effort to remain flexible, expect that change will happen and respond to it with a client-centered acceptance instead of a reactionary and defensive posture.
At this point the clash between at least what the SEI espouses for a development methodology and the Agile Manifesto is brought into the classroom. Students now are well indoctrinated in the Agile Manifesto and the need for iterative design. However, the creation of an architecture for a large software product requires a fair amount of Big Analysis Up Front (BAUF) in order to make the kinds of decisions that are needed for the first few decompositions. How can this be resolved?
So methodologies are an important part of the toolbox for large-system creation. It is unlikely that there are right and wrong methodologies in any absolute sense but rather drivers of the specific methodology that should be adopted by a specific project for a specific organization for the creation of a specific product. It must be driven by the risk factors of the effort, the organization and relationships between the developing organization and the client, and the novelty of the product to be created.
Besides these human process tools, there are more engineering oriented tools which assist with the technical aspects of design. They include tools to help in the task of architecture reconstruction, architecture presentation and documentation, and architecture design.
Procedural Tools for Architectural Analysis
- ATAM, CBAM [Bass2003]
- SAAM, an earlier version of ATAM [Kazman94]
- Quantified design space [Jum90][Hou91]
Tools for architecture reconstruction/recovery
- Dali [Bass2003]
- Sneed's reengineering workbench [Sneed98]
- The software renovation factories of Verhoef and associates [Band97]
- The rearchitecting tool suite by Philips Research [Krikhaar99]
- Rigi Standard Form [Mueller93][Wong94]
- [Bowman99] outlines a method similar to Dali for extracting architectural documentation from the code of an implemented system
- Harris and associates outline a framework for architecture reconstruction using a combined bottom-up and top-down approach [Harris95]
- [Guo99] outlines ARM, a semi-automatic architecture recovery method for systems that are designed and developed using patterns
Tools for Architectural Design
- Universal Connector Language, UniCon (http://www.cs.cmu.edu/afs/cs/project/tinker-arch/www/html/1997/lectures/24B.UniCon/base.004.html)
Architecture Language Tools
- Module Interconnection Language (MIL)
- Interface Definition Languages (IDL)
- WRIGHT architecture-specification language (http://www.cs.cmu.edu/~able/wright/) (http://www.iturls.com/English/SoftwareEngineering/SE_sb.asp)