Thursday, 31 December 2009

Rapid Application Development

Rapid Application Development - RAD

The perceived slowness and inflexibility of the Waterfall SDLC method led to the development of a method that reacted quickly to change and adapted to changing conditions: a normal state of affairs in the fast-moving ICT industry. Where the Waterfall model would slowly chug forward, the market or technologies would change under its feet: sometimes by the time software was released, it was already out of date, irrelevant or incompatible with new industry standards.

RAD works differently. It develops products in a sequence of small upgrades: each release has slightly more functionality than the previous one, and over time the product matures into a finished product. In the time a team would take to fully develop a product using the SDLC, a RAD team might have released a dozen incremental versions.

The drawback of RAD is its short-sightedness: you could tell what would be happening in a week's time; you could guess what you'd be doing in a month; but trying to predict the state of a project in a year's time would be like reading tea leaves.

Unlike the SDLC, RAD makes it easy for customers to redefine their basic needs and expectations from a product.

A possible drawback of RAD, however, is that it seems to encourage a "make it up as you go along" approach that can limit a product's scalability or future modification. The thorough planning and analysis of the SDLC would tend to create products that anticipated future needs and allowed expansion. A RAD product is more likely to be hammered together to cater for immediate needs, and to cope poorly if it has to be radically altered later.

I see SDLC like building a skyscraper, with the TV aerial on the penthouse fully designed before the foundations are even dug. I see RAD more like a shanty town that has new shacks thrown together as the need arises: quick, responsive, but not too elegant or enduring.

A classic example of rapid, successful development was the original IBM PC and its accompanying operating system - MS-DOS.

A classic example of development gone wrong is the game Duke Nukem Forever, which has been awaited since its announcement in 1997, with still no sign of release. It has won Wired News' "Vapourware Product of the Year" every year since 2001.

Agile methods

Agile methods - an adaptive approach

Reacting against the perceived strict regimentation of the Waterfall Model, the Agile model appeared in the 1990s. Its developers believed the Waterfall model was too slow and bureaucratic and did not comfortably accommodate the ways systems/software engineers actually work best. Agile, put simply, is where software is developed progressively, with each new release adding more capabilities.

It appeared under different names and flavours such as: Scrum (in management), Crystal Clear, Extreme Programming (XP), Adaptive Software Development, Feature Driven Development, and DSDM.

Agile aims to reduce risk by breaking projects into small, time-limited modules or timeboxes ("iterations"), with each iteration being approached like a small, self-contained mini-project, each lasting only a few weeks. Each iteration has its own self-contained stages of analysis, design, production, testing and documentation. In theory, a new software release could be done at the end of each iteration, but in practice the progress made in one iteration may not be worth a release, and it will be carried over and incorporated into the next iteration. The project's priorities, direction and progress are re-evaluated at the end of each iteration.

Agile teams tend to work as a team in a bullpen - an open floor-plan work area that makes face-to-face communications easy. Agile, however, has been criticised for its lack of formal documentation.

(One wonders how well Agile development techniques would work for Virtual Teams.)

Agile's aims and characteristics include:

  • Customer satisfaction by rapid, continuous delivery of useful software
  • Working software is delivered frequently (weeks rather than months)
  • Working software is the principal measure of progress
  • Even late changes in requirements are welcomed
  • Close, daily cooperation between developers and customers
  • Face-to-face conversation is the best form of communication
  • Projects are built around motivated individuals, who should be trusted (rather than micro-managed)
  • Continuous attention to technical excellence and good design
  • Simplicity
  • Self-organising teams
  • Regular adaptation to changing circumstances

Such flexibility is seen by some as a lack of discipline, but its ability to adapt quickly to change can make it a powerful method of tackling big projects - and ICT is a field where rapid and significant change is the rule rather than the exception! On the other hand, long-term (beyond a couple of months) planning is very hard to do with an Agile approach.

I guess an Agile project manager would be updating his Gantt chart daily :-)

Agile methods have features in common with RAD.


Waterfall SDLC model

Waterfall SDLC model - a predictive approach

The VCE IT model of the System Development Life Cycle (SDLC) contains 5 stages that flow from one to the next in order (hence the 'waterfall' imagery). As with a real waterfall, the progression from stage to stage is one-way only, and a stage, once completed, is not revisited.

The Waterfall model is popular because each stage can be compartmentalised (one stage is completely separate from the others, with no overlapping) and the project's deadlines can be set, monitored and managed tidily.

A disadvantage of the Waterfall model is that it is so linear and sequential. Once a phase begins, if the team discovers that a previous stage was not thought out properly, or that a vastly better method is possible, they must persevere with the existing plan: they cannot revisit the analysis phase, for example, to do more observation and better understand the nature of the problem.

There are alternatives to the strictness of the Waterfall model: you will need to be familiar with Rapid Application Development (RAD), but other models include Joint Application Development (JAD), Synch and Stabilise, Build and Fix, and the Spiral Model of the SDLC.

The Waterfall SDLC steps are:

  • Analyse - study the current system; determine if there is really a problem; determine if the problem can be fixed; determine if the problem is worth fixing; determine what the new or changed system should be able to achieve. Finish with a logical design.
  • Design - Consider alternative ways of solving the problem; plan what hardware, software, procedures and data need to be created, purchased or assembled. Design other key features such as documentation, training, testing, implementation and evaluation requirements. Finish with a physical design.
  • Develop - write the software, build the hardware, buy equipment, assemble components, formalise procedures of how the product should be used, perform ongoing informal component testing and integration testing. Write the documentation and training procedures. Finish with formal testing, including acceptance testing.
  • Implement - roll out the solution to its users using strategies such as direct, phased, parallel and/or pilot.
  • Evaluate - review the development process and the finished product to learn from mistakes and identify good practices. Ensure the finished product is performing as specified during the design phase.

System Development Life Cycle (SDLC)

System Development Life Cycle (SDLC)

Introduction

Software systems development is, from a historical perspective, a very young profession.
The first official programmer is probably Grace Hopper, working for the Navy in the mid-1940s. More realistically, commercial applications development did not really take off until the early 1960s. These initial efforts are marked by a craftsman-like approach based on what intuitively felt right. Unfortunately, too many programmers had poor intuition.

By the late 1960s it had become apparent that a more disciplined approach was required.
Software engineering techniques started coming into being. This finally brings us to the SDLC.
What evolved from these early activities in improving rigor is an understanding of the scope and complexity of the total development process. It became clear that the process of creating systems required a system to do systems. This is the SDLC. It is the system used to build and maintain software systems.


The System Development Life Cycle is the process of developing information systems through investigation, analysis, design, implementation, and maintenance.
The systems development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application.

SDLC Objectives

When we plan to develop, acquire or revise a system we must be absolutely clear on the objectives of that system. The objectives must be stated in terms of the expected benefits that the business expects from investing in that system. The objectives define the expected return on investment.


An SDLC has three primary business objectives:
- Ensure the delivery of high quality systems;
- Provide strong management controls;
- Maximize productivity.

In other words, the SDLC should ensure that we can produce more function, with higher quality, in less time, with fewer resources and in a predictable manner.

1. Ensure High Quality

Judging the quality of a wine or a meal is a subjective process. The results of the evaluation reflect the tastes and opinions of the taster. But we need a more rigorous, objective approach to evaluating the quality of systems. Therefore, before we can ensure that a system has high quality, we must know what quality is in a business context. The primary definition of quality in a business context is the return on investment (ROI) achieved by the system. The business could have taken the money spent on developing and running the system and spent it on advertising, product development, staff raises or many other things. However, someone made a decision that if that money was spent on the system it would provide the best return or at least a return justifying spending the money on it.

This ROI can be the result of such things as: operational cost savings or cost avoidance; improved product flexibility resulting in a larger market share; and/or improved decision support for strategic, tactical and operational planning. In each case the ROI should be expressed quantitatively, not qualitatively. Qualitative objectives are almost always poorly defined reflections of incompletely analyzed quantitative benefits.
The SDLC must ensure that these objectives are well defined for each project and used as the primary measure of success for the project and system. The business objectives provide the contextual definition of quality. There is also an intrinsic definition of quality. This definition of quality centers on the characteristics of the system itself: is it zero defect, is it well-structured, is it well-documented, is it functionally robust, etc. These characteristics are obviously directly linked to the system's ability to provide the best possible ROI. Therefore, the SDLC must ensure that these qualities are built into the system. However, how far you go in achieving intrinsic quality is tempered by the need to keep contextual quality (i.e., ROI) the number one priority. At times there are trade-offs to be made between the two. Within the constraints of the business objectives, the SDLC must ensure that the system has a high degree of intrinsic quality.

2. Provide Strong Management Control

The essence of strong management controls is predictability and feedback. Projects may last for many months or even years. Predictability is provided by being able to accurately estimate, as early as possible, how long a project will take, how many resources it will require and how much it will cost. This information is key to determining if the ROI will be achieved in a timely manner or at all. The SDLC must ensure that such planning estimates can be put together before there have been any significant expenditures of resources, time and money on the project. The feedback process tells us how well we are doing in meeting the plan and the project's objectives. If we are on target, we need that verified. If there are exceptions, these must be detected as early as possible so that corrective actions can be taken in a timely manner. The SDLC must ensure that management has timely, complete and accurate information on the status of the project and the system throughout the development process.

3. Maximize Productivity

There are two basic definitions of productivity. One centers on what you are building; the other is from the perspective of how many resources, how much time and how much money it takes to build it. The first definition of productivity is based on the return on investment (ROI) concept. What value is there in doing the wrong system twice as fast?
It would be like taking a trip to the wrong place in a plane that was twice as fast. You might have been able to simply walk to the correct destination. Therefore, the best way to measure a project team's or system department's productivity is to measure the net ROI of their efforts. The SDLC must not just ensure that the expected ROI for each project is well defined. It must ensure that the projects being done are those with the maximum possible ROI opportunities of all of the potential projects.
Even if every project in the queue has significant ROI benefits associated with it, there is a practical limit to how large and how fast the systems organization can grow. We need to make the available staff as productive as possible with regard to the time, money and resources required to deliver a given amount of function. The first issue we face is the degree to which the development process is labor intensive. Part of the solution lies in automation. The SDLC must be designed in such a way as to take maximum advantage of computer-aided software engineering (CASE) tools.
The complexity of the systems and the technology they use has required increased specialization. These specialized skills are often scarce. The SDLC must delineate the tasks and deliverables in such a way as to ensure that specialized resources can be brought to bear on the project in the most effective and efficient way possible.
One of the major wastes of resources on a project is having to do things over. Scrap and rework occur due to such things as errors and changes in scope. The SDLC must ensure that scrap and rework are minimized. Another activity that results in non-productive effort is the start-up time for new resources being added to the project. The SDLC must ensure that start-up time is minimized in any way possible. A final opportunity area for productivity improvements is the use of off-the-shelf components. Many applications contain functions identical to those in other applications. The SDLC should ensure that if useful components already exist, they can be re-used in many applications.
What we have identified so far are the primary business objectives of the SDLC and the areas of opportunity we should focus on in meeting these objectives. What we must now do is translate these objectives into a set of requirements and design points for the SDLC.


STAGES OF SDLC

Preliminary investigation is the first step in the system development life cycle. The preliminary investigation is a way of handling the user's request to change, improve or enhance an existing system. The objective is to determine whether the request is valid and feasible before any recommendation is made to do nothing, improve or modify the existing system, or build an altogether new one. It is not a design study, nor does it include the collection of details to completely describe the business system. The following objectives should be accomplished while working on the preliminary investigation. System investigation includes the following two sub-stages.
1. Problem Definition
2. Feasibility Study


1. PROBLEM DEFINITION:
Problem initiation includes defining the necessary input, output, storage, etc.; defining what the problem really is; and stating a goal to be achieved.
A problem initiation will describe:
• required input (what data has to be acquired to produce the output?)
• required output (i.e. what information is the system supposed to produce?)
Problem analysis breaks the problem down into its parts and describes them. Note that this step does not care what solution will be used to solve the problem. The analysis lays down the basic requirements that the eventual solution must achieve (a logical design).
During problem initiation, one of the first things to do is to define the problem correctly. If you get this wrong (or skip it completely) everything you do afterwards could be a complete waste of time and money.
The most important task in creating a software product is extracting the requirements. Customers typically know what they want as an end result, but not what the software should do; skilled and experienced software engineers can recognise incomplete, ambiguous or contradictory requirements. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.

Specification is the task of precisely describing the software to be written, possibly in a rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.

Here are some possible definitions of problems:
1. The existing system has a poor response time, i.e. it is slow.
2. It is unable to handle the workload.
3. The problem of cost, i.e. the existing system is not economical.
4. The problem of accuracy and reliability.
5. The requisite information is not produced by the existing system.
6. The problem of security.

Similarly, a system analyst should provide a rough estimate of the cost involved for the system development. This is again a very important question that too often is not asked until it is quite late in the system development process.

2. FEASIBILITY STUDY

The literal meaning of feasibility is viability. This study is undertaken to determine the likelihood of the system being useful to the organization. A feasibility study is basically a high-level capsule version of the entire process, intended to answer questions like: what is the problem? Is the problem even worth solving? As the name indicates, the feasibility study in a preliminary investigation should be relatively brief, as the objective at this stage is only to get an idea of the scope of the problem. The findings of this study should be formally presented to the user management. This presentation marks a crucial decision point in the life of the project. If the management approves the project, the feasibility study report represents an excellent model of the system analyst's understanding of the problem and provides a clear sense of direction for the subsequent development of the system.

The aim of the feasibility study is to assess alternative systems and to propose the most feasible and desirable system for development. Thus, the feasibility study provides an overview of the problem and acts as an important checkpoint that should be completed before committing more resources.

The feasibility of a proposed system can be assessed in terms of four major categories, as summarized below.
1. Organizational Feasibility: the extent to which a proposed information system supports the objectives of the organization's strategic plan for information systems determines the organizational feasibility of the system project. The information system must be seen as a sub-set of the whole organization.

2. Economic Feasibility: in this study, costs and returns are evaluated to know whether returns justify the investment in the system project. The economic questions raised by the analyst during the preliminary investigation are for the purpose of estimating the following:
(a) The cost of conducting a full system investigation.
(b) The cost of hardware and software for the class of application being considered.
(c) The benefits in the form of reduced costs, improved customer service, improved resource utilization or fewer costly errors.

3. Technical Feasibility: whether reliable hardware and software capable of meeting the needs of the proposed system can be acquired or developed by the organization in the required time is the major concern of technical feasibility. In other words, technical feasibility includes questions like:
(a) Does the necessary technology exist to do what is suggested and can it be acquired?
(b) Does the proposed equipment have the technical capacity to hold the data required to use the new system?
(c) Will the proposed system provide adequate responses to inquiries, regardless of the number of locations and users?
(d) Can the system be expanded?
(e) Is there any technical surety of accuracy, reliability, ease of access and data security?

4. Operational Feasibility: the willingness and the ability of the management, employees, customers, suppliers, etc., to operate, use and support a proposed system come under operational feasibility. In other words, the test of operational feasibility asks if the system will work when it is developed and installed. Are there major barriers to implementation? The following questions are asked in operational feasibility.
(a) Is there sufficient support from the management? From employees? From customers? From suppliers?
(b) Are current business methods acceptable to the users?
(c) Have the users been involved in the planning and development of the system project?

Operational feasibility would pass the test if the system is developed as per rules, regulations, laws, organizational culture, union agreements, etc., and above all with the active involvement of the users.
Besides these four main categories, the system should also be assessed in terms of legal feasibility and schedule feasibility. Whereas legal feasibility refers to the viability of the system from the legal point of view, i.e. it checks whether the system abides by all laws and regulations of the land, schedule feasibility evaluates the probability of completing the system in the time allowed for its development, since for the system to be useful, it must be finished well before the actual requirement of its usage.
To determine feasibility, a project proposal must pass all these tests; otherwise, it is not a feasible project. For example, a personnel record system that is economically feasible and operationally attractive is not feasible if the necessary technology does not exist. Infeasible projects are abandoned at this stage, unless they are reworked and resubmitted as new proposals.

3. Analysis
The primary objective of the Analysis phase is to understand the users' needs and develop requirements for software development.
It involves activities like:
• Gathering Information
• Define Software requirement.
• Prioritize requirements.
• Generate & Evaluate alternatives.
• Review recommendations with management.
It is important to study the existing system before embarking on major changes.
Consider the Analysis phase like a visit to the doctor. You would be pretty worried if you told the doctor you had a headache and the doctor immediately started merrily injecting you with various things before even looking at you or asking you any questions. Such behaviour is likely to cause more problems than it solves, so doctors always analyse their patients - observing, questioning, testing - before beginning any treatment.
So also do problem solvers study the system they intend to change, and the organisation it's in, before they decide what needs to be done. By thoroughly understanding a system, its operation, its context, its strengths and weaknesses, one can better decide how to start improving it.
There's not much good getting heavily into a project if the whole thing is a silly idea to start with. The preliminary investigation is an early test of whether the project should even be started.

4. Design:
The finished design of a solution should contain:
• data structure (e.g. field names, data types and lengths, filenaming, folder structure schemes etc).
• how the data is to be acquired (what procedures and equipment will be needed?)
• data input procedures and equipment (e.g. keyboard? barcode reader? ICR/OMR?)
• interfaces (e.g. what will a data entry screen look like? Will people need to leave the main screen to access functions? How will menus be organised into commands and submenus? What shortcut keys will be used? Will you use a text box, listbox, combo box or tickbox for a particular item of data entry? What colour scheme will be used? What navigation scheme will be used? What icons represent what meaning? Will the layout of the data entry form help users enter data in the required order and the required format?)
• control procedures - what validation rules will be used on what fields to check for data reasonableness, existence or format? What will different error messages say? How can output be checked for accuracy? (e.g. an average can be compared with the data items from which it was calculated.) How can procedural errors or problems be detected? (e.g. an order may be cross-checked against the stock database to ensure the ordered item is in stock, or whether it needs to be backordered and the potential customer notified of the delay.)
• what workloads and capacities the system must be capable of - e.g. storage capacities, number of transactions per hour, disaster-recovery abilities
• documentation and training requirements for different types of users
• validation and storage methods to be used
• how to produce the output (i.e. processing actions)
• procedures to be followed to use the solution
• backup requirements and procedures - what needs to be backed up, how often, how backups are stored, what backup scheme will be used?
• how the solution is to be tested to ensure it works properly - what needs to be tested? Functionality, presentation, usability, accessibility, communication of message. How will you test?
A data flow diagram is used to describe the flow of data through a complete data-processing system. Different graphic symbols represent the clerical operations involved and the different input, storage, and output equipment required. Although the flow chart may indicate the specific programs used, no details are given of how the programs process the data.
A Gantt chart is a detailed timeline of the events in a project. In short, it is a schedule of the software development life cycle.
A structure chart consists of a top-down description of a process and its sub-processes.
Data Dictionary - describes (for example) a database's fields, types, lengths, validation rules, formulae.
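The validation rules and error messages listed in the design bullets can be sketched in code. A minimal Python illustration (the 'age' field, its range and its messages are hypothetical, not taken from any particular system):

```python
# Hypothetical validation for an 'age' data-entry field, showing the
# existence, format and reasonableness checks a design would specify.

def validate_age(raw):
    """Return (ok, message) for a raw 'age' value."""
    text = "" if raw is None else str(raw).strip()
    if text == "":                      # existence check
        return False, "Age is required"
    if not text.isdigit():              # format check: whole number only
        return False, "Age must be a whole number"
    age = int(text)
    if not 0 <= age <= 120:             # reasonableness (range) check
        return False, "Age must be between 0 and 120"
    return True, "OK"
```

Note that each rule has its own specific message, answering the design question "What will different error messages say?"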

5. Development (Coding):
The design must be translated into a machine-readable form. The code generation step performs this task. If the design is performed in a detailed manner, code generation can be accomplished without much complication. Programming tools like compilers, interpreters and debuggers are used to generate the code. Different high-level programming languages such as C, C++, Pascal and Java are used for coding. With respect to the type of application, the right programming language is chosen.
Software coding standards are language-specific programming rules that greatly reduce the probability of introducing errors into your applications, regardless of which software development model is being used to create that application.
Languages used:
• Java
• C/C++
• Web-related scripting: HTML, JavaScript
• Database related: MS-SQL, Oracle.


6. Testing:
Software testing:
Software testing is the process of checking software, to verify that it satisfies its requirements and to detect errors.
Software testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding software bugs.
Testing can never completely establish the correctness of computer software. Instead, it furnishes a criticism or comparison that compares the state and behaviour of the product against a specification. A primary purpose for testing is to detect software failures so that defects may be uncovered and corrected.
Testing methods: Software testing methods are traditionally divided into black box testing and white box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.
Black box testing
Black box testing treats the software as a black-box without any understanding of internal behavior. It aims to test the functionality according to the requirements. Thus, the tester inputs data and only sees the output from the test object. This level of testing usually requires thorough test cases to be provided to the tester who then can simply verify that for a given input, the output value (or behavior), is the same as the expected value specified in the test case. Black box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrix etc.
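As a concrete sketch of two of these methods, suppose the specification says "orders of $100 or more receive a 10% discount". A black-box tester never reads the code: they derive cases from the two equivalence partitions (below $100, and $100 and above) and the boundary values around $100. The function below is invented for illustration:

```python
# Hypothetical function under test; the black-box tester sees only its spec.
def discounted_total(total):
    return total * 0.9 if total >= 100 else total

# Test cases derived purely from the specification, not the code:
cases = [
    (50.00, 50.00),    # partition: below the $100 threshold
    (99.99, 99.99),    # boundary: just below
    (100.00, 90.00),   # boundary: exactly at the threshold
    (200.00, 180.00),  # partition: above the threshold
]
for given, expected in cases:
    assert abs(discounted_total(given) - expected) < 1e-9
```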
White box testing
White box testing, however, is when the tester has access to the internal data structures and algorithms (and the code that implements them).
Types of white box testing
The following types of white box testing exist:
• code coverage - creating tests to satisfy some criteria of code coverage. For example, the test designer can create tests to cause all statements in the program to be executed at least once.
• mutation testing methods.
• fault injection methods.
• static testing - White box testing includes all static testing.
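A minimal sketch of the code-coverage idea in Python (the function is hypothetical): the test designer reads the code and chooses inputs so that every statement, and here every branch, executes at least once.

```python
# Hypothetical function; a white-box tester reads its branches directly.
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

# Three inputs are enough to execute all three return statements:
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
```

A coverage tool such as coverage.py could confirm that these three cases achieve 100% statement coverage of `classify`.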

7. Implementation:
In SDLC, implementation refers to post-development process of guiding a client to use the software or hardware that was purchased. This includes Requirements Analysis, Scope Analysis, Customizations, Systems Integrations, User Policies, User Training and Delivery.
Software implementations involve several professionals like Business Analysts, Technical Analysts, Solution Architects and Project Managers. Analysts in the implementation phase act as intermediaries between users and developers. The Implementation Phase includes:
• Hardware and software installation
• User training
• Documentation

Primary objectives of Implementation phase are to ensure that:
• Software is installed - The systems are placed and used by actual users.
• The users are all trained - Training is provided to the users of the system usually through workshops or online
• The business is benefiting


8. Maintenance and Support
Maintenance includes activities like keeping the system up to date with the changes in the organization and ensuring it meets the goals of the organization by:
• Building a help desk to support the system users – having a team available to help with technical difficulties and answer questions
• Implementing changes to the system when necessary.

System maintenance involves the monitoring, evaluating and modifying of a system to make desirable or necessary improvements. In other words, maintenance includes enhancements, modifications or any change from the original specifications. Therefore, the information analyst should take responsibility for change so as to keep the system functioning at an acceptable level.
Software needs to be maintained not because some of its modules or programs wear out and need to be replaced, but because there are often some residual errors remaining in the system which have to be removed as soon they are discovered. This is an on-going process, until the system stabilizes.
Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. Not only may it be necessary to add code that does not fit the original design, but just determining how the software works some time after it is completed may require significant effort by a software engineer. About ⅔ of all software engineering work is maintenance, but this statistic can be misleading: only a small part of that is fixing bugs. Most maintenance is extending systems to do new things, which in many ways can be considered new work.

Systems Development Life Cycle

Model of the Systems Development Life Cycle with the Maintenance bubble highlighted.

The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems.

In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system[1]: the software development process.

Overview

A Systems Development Life Cycle (SDLC) is any logical process used by a systems analyst to develop an information system, including requirements, validation, training, and user (stakeholder) ownership. Any SDLC should result in a high quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned Information Technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.[2]

Computer systems have become more complex and often (especially with the advent of Service-Oriented Architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of systems development life cycle (SDLC) models have been created: "waterfall"; "fountain"; "spiral"; "build and fix"; "rapid prototyping"; "incremental"; and "synchronize and stabilize".[citation needed]

SDLC models can be described along a spectrum of agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on light-weight processes which allow for rapid changes along the development cycle. Iterative methodologies, such as Rational Unified Process and Dynamic Systems Development Method, focus on limited project scopes and expanding or improving products by multiple iterations. Sequential or big-design-upfront (BDUF) models, such as Waterfall, focus on complete and correct planning to guide large projects and risks to successful and predictable results.[citation needed]

Some agile and iterative proponents confuse the term SDLC with sequential or "more traditional" processes; however, SDLC is an umbrella term for all methodologies for the design, implementation, and release of software.[3][4]

In project management a project can be defined both with a project life cycle (PLC) and an SDLC, during which slightly different activities occur. According to Taylor (2004) "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".[5]

History

The systems development lifecycle (SDLC) is a type of methodology used to describe the process for building information systems, intended to develop information systems in a very deliberate, structured and methodical way, reiterating each stage of the life cycle. The systems development life cycle, according to Elliott & Strachan & Radford (2004), "originated in the 1960s to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".[6]

Several systems development frameworks have been partly based on the SDLC, such as the Structured Systems Analysis and Design Method (SSADM) produced for the UK government Office of Government Commerce in the 1980s. Since then, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".[6]

Systems development phases

The Systems Development Life Cycle (SDLC) comprises phases that are essential for developers, such as planning, analysis, design, and implementation, which are explained in the sections below. Several Systems Development Life Cycle models exist. The oldest, originally regarded as "the Systems Development Life Cycle", is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next. These stages generally follow the same basic steps, but many different waterfall methodologies give the steps different names, and the number of steps seems to vary between 4 and 7. There is no definitively correct Systems Development Life Cycle model, but the steps can be characterized and divided into several phases.


The SDLC can be divided into ten phases during which defined IT work products are created or modified. The tenth phase occurs when the system is disposed of and the task performed is either eliminated or transferred to other systems. The tasks and work products for each phase are described in subsequent chapters. Not every project will require that the phases be sequentially executed. However, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.[7]

Initiation/planning

The purpose of this phase is to generate a high-level view of the intended project and to determine its goals. The feasibility study is sometimes used to present the project to upper management in an attempt to gain funding. Projects are typically evaluated in three areas of feasibility: economic, operational, and technical. Furthermore, the study is also used as a reference to keep the project on track and to evaluate the progress of the MIS team.[8]

Requirements gathering and analysis

The goal of systems analysis is to determine where the problem is in an attempt to fix the system. This step involves breaking the system down into different pieces, drawing diagrams to analyze the situation, analyzing project goals, identifying what needs to be created, and engaging users so that definite requirements can be defined. Requirements gathering sometimes requires individuals or teams from both the client and the service provider in order to produce detailed and accurate requirements.

Design

Strengths and weaknesses

Few people in the modern computing world would use a strict waterfall model for their Systems Development Life Cycle (SDLC), as many modern methodologies have superseded this thinking. Some will argue that the SDLC no longer applies to models like Agile computing, but the term is still widely used in technology circles. The SDLC practice has advantages in traditional models of software development that lend themselves to a structured environment. The disadvantage of the SDLC methodology appears when there is a need for iterative development (e.g., web development or e-commerce), where stakeholders need to review the software being designed on a regular basis. Instead of viewing the SDLC from a strength-or-weakness perspective, it is far more important to take the best practices from the SDLC model and apply them to whatever may be most appropriate for the software being designed.

A comparison of the strengths and weaknesses of the SDLC:[9]

Strengths:
• Control.
• Monitors large projects.
• Detailed steps.
• Evaluates costs and completion targets.
• Documentation.
• Well-defined user input.
• Ease of maintenance.
• Development and design standards.
• Tolerates changes in MIS staffing.

Weaknesses:
• Increased development time.
• Increased development cost.
• Systems must be defined up front.
• Rigidity.
• Hard to estimate costs; project overruns.
• User input is sometimes limited.

An alternative to the SDLC is Rapid Application Development, which combines prototyping, Joint Application Development and implementation of CASE tools. The advantages of RAD are speed, reduced development cost, and active user involvement in the development process.

It should not be assumed that, just because the waterfall model is the oldest SDLC model, it is the most efficient. At one time the model was beneficial mostly for automating activities assigned to clerks and accountants. However, technological evolution now demands systems with greater functionality to assist help desk technicians/administrators and information technology specialists/analysts.

Thursday, 3 December 2009

Internet and World Wide Web III

BUSINESS USES OF THE INTERNET

Although there are scores of specific Internet applications that benefit businesses, they can all be grouped under two broad categories: (1) information exchange and dissemination, and (2) facilitating e-commerce.

INFORMATION EXCHANGE.

The information exchange function is the broader of the two and includes such diverse applications as:

  • e-mail and other person-to-person communications, e.g., computer conferencing
  • online marketing and brand building
  • employee recruitment
  • investor and public relations information distribution
  • intranets for employee knowledge sharing and collaboration
  • extranets to enable outsourcing and supply-chain integration

The economic value of these applications is difficult to measure, but for large organizations they have the potential to save millions of dollars in costs, and depending on the application, to stimulate sales as well. Only a couple of the possibilities will be discussed here.

The internal information management and knowledge-sharing abilities of corporate intranets can be substantial. Intranets, which are corporate information networks based on Internet technology but are usually restricted access sites available only to select users, allow central storage and versatile dissemination of diverse information, including corporate handbooks and manuals, customer or marketing databases, employee databases, project discussion boards, and other internal documentation. Intranets are substantially more efficient than circulating paper copies of documents, both in terms of immediacy of information and, in most cases, maintenance costs. Because they rely on simple, Web-based client/server technology, they're also typically easier to implement and use than proprietary databases.

The extranet, which enables supply-chain integration and automation, is an especially powerful use of the Internet, and one that is increasingly being adopted by large corporations. Although there are many variations, supply-chain management is generally a hybrid of data exchange and electronic commerce that allows companies to better coordinate their procurement and distribution practices with those of their suppliers and clients. Based on the efficiency principles of electronic data interchange (EDI) and just-in-time inventory, this coordination can afford several benefits. The streamlining effects can include eliminating paperwork, reducing staff hours, and improving data accuracy. Web-based ordering systems likewise tend to be easier to use than their old-line counterparts, which also contributes to faster and more accurate results. Such automated systems can also provide management with more timely and detailed information about corporate purchasing habits and needs, allowing better resource planning and even providing a blueprint for cost control. Extranets can also be established to provide customer service and other external communications functions.

As an information source, the Web is also a particularly efficient means of comparison shopping for business procurement. With relative ease, a procurement officer can find price quotes from several vendors, some of which may not even be aware the other exists. The buyer can then use this information to either choose the low-cost vendor or to gain concessions from established vendors. The downside to this, of course, is when companies are on the receiving end of this informed negotiation, which usually leads to tighter profit margins.

E-COMMERCE.

There are also multiple facets to e-commerce, although they are much more closely integrated than information-exchange functions. Specifically, businesses may focus on one or more of these aspects of commerce:

  • preparing customers for the sale
  • facilitating the actual transaction
  • managing any follow-up to the sale

It may not be feasible or profitable to do all three in equal proportion, or even at all. For example, the most logical commerce-related application for Federal Express and similar companies is delivery tracking, which is done after the transaction is completed. It also makes sense to provide pre-transaction services, such as account set-up and drop-off center locators, on the site. However, in this example the transaction itself is more difficult to accomplish. It consists of two main parts, dropping off a package and arranging for payment. The latter could easily be done over the Internet, but it's less clear how the company would profitably obtain the packages for delivery. Federal Express offers pick-up services, but it's uncertain whether it would be profitable for the company to pick up the majority of the parcels it carries, which are traditionally dropped off at local retail centers.

In other trades, of course, the Internet may well be the ideal locus of transaction. The case is particularly compelling for products or services that can be delivered online, such as software applications via high speed connection, musical recordings, or databases. But strong—if not initially profitable—business models have also been adopted in many other fields, notably by booksellers such as Amazon.com and auction houses like eBay, not to mention vendors that electronically service the mundane but lucrative business-to-business supply chain.

Thus, while e-commerce is commonly the more celebrated business application, as noted above it isn't the appropriate model for all types of trade. Companies contemplating a new e-commerce initiative would do well to consider this maxim: all e-commerce is not the same. Internet-era mythology holds that (1) competitors big and small are all on equal footing on the Internet, and (2) anything and everything can be sold online.

Equal footing is only possible in the limited sense that minor players in some line of business can, assuming they have the funding (in 1998 the median development price for a mid-sized Web site was estimated at $100,000 and rising briskly) and technical resources, build Internet sites that are as good as—or better than—those of their major competitors, as measured by site convenience, functionality, marketing tactics, and so forth. However, this doesn't mean that the smaller company will be any more effective in the larger sense. The smaller company must also have a cost-efficient system for order fulfillment, a powerful marketing operation that ensures potential customers are reached, and many other supporting capabilities in order to succeed. For instance, say a small Internet-only start-up wants to sell high-end home appliances online. They may create the best site in the business, but will they be equipped to serve a national market? Probably not. For such large and expensive products, traditional retailers like Sears, Roebuck and Co. and the so-called appliance superstores have tremendous competitive advantages in the online arena as well, not the least of which is an established and already profitable physical distribution system. That's not to mention that the marketing model may be off target; Internet transactions may not appeal to potential buyers of expensive appliances, as these people might value the ability to see the product in the showroom and talk with sales staff. While that conclusion in this example is arguable, there are clearly some product and service transactions in which customers place a high value on in-person contact and immediate fulfillment (real life examples being health care, video rental, and grocery shopping), and these so far haven't been good candidates for an Internet-only approach.

The point here is not to deny the Internet's revolutionary implications for the business world, or to suggest that its entry barriers can't be significantly lower than in traditional industries. Instead, the lesson is that many traditional business principles and practices still apply to successful e-commerce. Market research, customer service before and after the sale, cost-benefit analysis, and return on investment all have essential roles in fashioning an Internet commerce strategy.

Internet and World Wide Web II

A SHORT HISTORY OF THE INTERNET

ARPANET.

The Internet originated as an experimental communication system funded by the U.S. Department of Defense and hosted by several universities. Its impetus was a defense experiment to create a cost-efficient, decentralized, widely distributed electronic communications network for linking research centers. This network was named Arpanet, after its sponsoring agency, the Advanced Research Projects Agency (ARPA, later renamed DARPA). Arpanet began operating in 1969, but it took several years before it became reliable, thanks to packet switching (breaking information into small manageable pieces that could each be routed separately and reassembled at the receiving computer), and acquired familiar functions like electronic mail. Arpanet's first international links were established in 1973, when hosts in Great Britain and Norway signed on.

EARLY COMMERCIAL NETWORKS.

The Internet's first commercial forebear was called Telenet and was run by Bolt, Beranek & Newman (BBN), a defense contractor with close ties to the Arpanet project. Introduced in 1974, Telenet enjoyed only a lukewarm reception and its founders couldn't keep up with the steep level of investment needed to make it truly commercially viable. Five years later BBN sold Telenet Communications Corp., by then a publicly traded company, to General Telephone & Electronics, better known as the telecommunications company GTE Corp. GTE would eventually spin Telenet off in a joint venture that formed US Sprint, the long-distance and networking giant, but Telenet never became a dominant player. More important were the originally closed (proprietary, non-Internet) networks of CompuServe, Prodigy, and America Online, which would provide the commercial model for consumer Internet service providers (ISPs) and Web content centers, and large commercial network backbone operators, which would give businesses fast access to the Internet and eventually take over the Internet's operation.

EMERGENCE OF THE MODERN INTERNET.

Despite its relative obscurity at the time, the 1980s were the Internet's most defining years. By the early 1980s Arpanet had adopted the TCP/IP communications standards that would become commonplace on the Internet, and more importantly, other interconnected research networks began to spring up, both within the United States and abroad. One of the most important was the National Science Foundation's NSFNET, which came online in the mid-1980s to link several supercomputing laboratories with U.S. universities. In this period the collective network was increasingly known as the Internet, although the generic term of internetworking, or connecting networks to other networks, had existed since at least the mid-1970s. Enjoying rapid growth and technical upgrading, the NSF's network became the official backbone of the Internet by the late 1980s, eclipsing Arpanet, which by that time was comparatively small, slow, and outmoded. From just 213 host computers on Arpanet in 1981, the Internet had burgeoned to include some 10,000 hosts by 1987, and topped 300,000 by 1990, the year Arpanet was officially decommissioned.

WORLD WIDE WEB.

The final major breakthrough of the 1980s—and one that would decidedly set the course for the 1990s and beyond—was a 1989 proposal at the Swiss physics lab CERN to create a World Wide Web. The idea came from Tim Berners-Lee, a British-born physicist working at CERN at the time. His plan, which was not well received initially, was to allow colleagues at laboratories around the world to share information through a simple hypertext system of linked documents. Eventually gaining CERN's approval, Berners-Lee and others at the research center began developing the now familiar standards for the Web: hypertext transfer protocol (HTTP) to delineate how servers and browsers would communicate; hypertext markup language (HTML) to encode documents with addressed links to other documents; and a uniform resource locator (URL) format for addressing Internet resources (e.g., http://www.cern.ch or mailto:webmaster@domain.com). By 1990, Berners-Lee had likewise created the first Web browser and server software to feed information to the browser.
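The URL format described above can be illustrated by decomposing an address into the parts the scheme defines. This sketch uses Python's standard urllib purely for illustration, and the address shown is a made-up example, not a real CERN page.

```python
# Decompose a URL into scheme, host, and path using the standard library.
from urllib.parse import urlparse

url = urlparse("http://www.cern.ch/welcome.html")

assert url.scheme == "http"          # the protocol used to fetch the resource
assert url.netloc == "www.cern.ch"   # the host serving the resource
assert url.path == "/welcome.html"   # the document on that host
```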

Although Berners-Lee's vision was for a collaborative, informal medium of information exchange, perhaps that typified in chat rooms and newsgroups and whiteboard applications, more commercially motivated Web innovations soon followed. Most important was Marc Andreessen's Mosaic browser, which he developed as an undergraduate employee at the National Center for Supercomputing Applications (NCSA) of the University of Illinois at Urbana-Champaign. Mosaic, which debuted in early 1993, was more graphical and user friendly than other Web applications up to that point. It was an instant success, albeit not a money maker because it was mostly distributed for free. Andreessen finished his degree in computer science later that year, and in early 1994 established Netscape Communications Corp. with Silicon Valley titan Jim Clark, founder of the high-end computer hardware maker Silicon Graphics. Netscape's Navigator quickly became the dominant browser on the Internet, at one point claiming 75 percent of all users. Curiously, the NCSA claimed rights to Mosaic and wrangled with Andreessen over the commercial use of the browser application code and its name; the NCSA would later license Mosaic to Microsoft Corp. to use in a competing browser, enabling the software giant to outmaneuver Netscape within a couple years with its Internet Explorer product. By this time the Web was nearly synonymous with the Internet.

As the browser wars fed on the phenomenal public interest in the Internet in the mid-1990s, the network became a predominantly commercial entity, as businesses set up Internet sites in droves and millions of new users—both private individuals and corporate users—began logging on. The NSF officially bowed out of running the Internet backbone in 1995, when commercial operators took over; however, the NSF continued its policies of funding research into advanced networking applications that could improve the Internet and newer high-speed research networks.

Internet and World Wide Web

By now it's almost cliché to take note of the Internet's vast potential as a business resource, but triteness doesn't diminish the fact that this once-obscure computer network has changed—and will continue to change—business and society profoundly. A number of estimates pegged the value of Internet commerce in 1998 around $100 billion for the United States, and more than one projection for the early 2000s foresaw worldwide e-commerce surpassing a trillion dollars within the first five years of the 21st century. Although much attention has been devoted to the vast consumer market accessible via the Internet (which is a multibillion-dollar franchise in its own right), business-to-business transactions make up the large majority of e-commerce sales in terms of value. Total U.S. 1998 economic activity surrounding the Internet, including computer hardware purchases, Web authoring services, commerce, and so forth, was estimated at more than $300 billion in sales and 1.2 million jobs. And these statistics don't even address the non-commerce efficiencies and savings that Internet-based technologies bestow on businesses in areas such as supply-chain management.

Whereas during the Internet's early commercialization companies were consumed with simply getting online, perhaps without much forethought about what to do once they got there, increasingly corporations are formulating exacting Internet strategies to capitalize on the network's strengths as well as to cope with its shortfalls. Despite the popular metaphor of a virtual store serving all the same functions as a physical store, conventional transaction-based commerce is not the appropriate Internet business model for all companies. Rather, businesses must evaluate the financial and competitive advantages of using the Internet as a primary vehicle for communication and exchange versus traditional and hybrid options. Some firms may find, for example, that it's more profitable to provide users with Internet tools to help make a purchasing decision than to try to facilitate the entire transaction electronically. Meanwhile, other types of companies will find that doing business exclusively over the Internet is the best approach. No blanket policy is likely to work across dissimilar business lines; the key to determining which model is best is intricately tied to the specific market being served, the logistics of delivering the product or service being offered, and what other non-Internet alternatives exist.

Computer Networks VI

NETWORK TOPOLOGY

The topology, or the physical layout, of the network is the concern of configuration management. The three main arrangements are the bus, ring, and star as shown in Figure 1 below. In the bus configuration, each node is connected to a common cable and detects messages addressed to it. Because it is reliable and uses the least amount of cabling, this layout is often used in offices. However, fiber-optic systems cannot usually be arranged this way.

In the ring layout, packets of information are retransmitted along adjacent nodes. It has the possibility of greater transmission distances and fiber-optic systems can use this layout. However, the components necessary can be more expensive. A popular implementation of ring topology is IBM's Token Ring configuration.

In the star arrangement, all traffic is routed through one central node. It offers the advantages of simplified monitoring and security. Also, unlike the other layouts, the failure of one node, unless it is the central one, does not cause the entire network to fail. The drawback of depending on a single central node is addressed in the clustered star layout, in which a number of star networks are linked together.
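As a rough sketch, the three topologies can be compared by modelling each as an adjacency list. The node names and helper functions below are hypothetical, for illustration only.

```python
# Model bus, ring, and star topologies as adjacency lists (who can
# reach whom directly). Illustrative only, not from any networking library.

def bus(nodes):
    # Every node hangs off one shared cable, so each node sees all others.
    return {n: [m for m in nodes if m != n] for n in nodes}

def ring(nodes):
    # Each node is connected only to its two adjacent neighbours.
    k = len(nodes)
    return {nodes[i]: [nodes[(i - 1) % k], nodes[(i + 1) % k]]
            for i in range(k)}

def star(nodes, hub):
    # All traffic passes through the central hub node.
    return {n: ([hub] if n != hub else [m for m in nodes if m != hub])
            for n in nodes}

topology = star(["A", "B", "C", "D"], "A")
assert topology["B"] == ["A"]              # leaf nodes talk only to the hub
assert topology["A"] == ["B", "C", "D"]    # the hub reaches every leaf
```

The star's single point of failure is visible in the model: remove a leaf and the rest stays connected; remove the hub and every remaining node's adjacency list is empty.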