Software Evolution and the Evolution of Software Architecture
Software has evolved in very interesting ways over the years. It has taken versatile shapes, addressing a wide variety of tasks, and software is now available to perform almost every commercial task. The architecture and design paradigms have also changed a great deal.
Not only this, the programming languages used to write these programs have become much easier and more user friendly, and costs are also becoming more favourable.
Evolution of Software Architecture
The evolution of software architecture has to go hand in hand with the evolution of hardware:
Mainframe Architecture:
In mainframe software architectures, all the intelligence lies with the central host computer. Users interact with the host through terminals that capture keystrokes and send that information to the host; these terminals are not intelligent. The main limitation of this type of architecture is that it supports neither a graphical user interface nor access to multiple databases from geographically dispersed sites.
File Sharing Architecture:
In this kind of architecture, the server downloads files from the shared location to the desktop environment, where the requested job then runs both its logic and its data. These architectures work well when shared usage and content updating are low and the volume of data to be transferred is small.
The main limitation of this architecture is that file sharing degrades as the number of online users increases: the full file has to be downloaded to the user's machine each time a file is requested, which inflates network traffic.
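To make this limitation concrete, the sketch below (the file name and its contents are purely illustrative) shows the file-sharing pattern: the client fetches the entire file and then runs the filtering logic locally, even when only a few lines are needed:

```python
import os
import tempfile

# Stand-in for the shared file on the file server; contents are illustrative.
shared = tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False)
shared.write("Asha,Sales\nBen,IT\nChen,Sales\nDev,HR\n")
shared.close()

# File-sharing pattern: the WHOLE file travels to the desktop environment ...
with open(shared.name) as f:
    whole_file = f.read()

# ... and both logic and data processing run on the client, even though
# only two lines were actually needed.
sales = [line for line in whole_file.splitlines() if line.endswith("Sales")]
print(sales)

os.unlink(shared.name)
```

Every request repeats the full transfer, which is why traffic grows so quickly with the number of online users.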
Client/Server Architecture:
This approach overcomes the limitation of the file server, where the full file content had to be transferred for each query. Here, user queries are answered directly by a relational database management system (RDBMS).
This reduces network traffic by sending the client only the relevant query response instead of the entire file. It also greatly improves multi-user updating through a GUI front end to a shared database. In this architecture, clients communicate with servers using remote procedure calls or Structured Query Language (SQL) statements.
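As a sketch of the difference, the following example uses Python's built-in sqlite3 module; the table, its columns and its rows are illustrative. The client sends only a query, and only the matching rows travel back:

```python
import sqlite3

# Small in-memory database standing in for the shared server database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [("Asha", "Sales"), ("Ben", "IT"), ("Chen", "Sales"), ("Dev", "HR")],
)

# The client sends an SQL statement; the server returns only the relevant
# rows, not the whole file -- this is what keeps network traffic low.
rows = conn.execute(
    "SELECT name FROM employees WHERE department = ?", ("Sales",)
).fetchall()
print(rows)
```

Compared with the file-server pattern, only the two matching rows are transferred rather than the entire data set.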
A server machine acts as the host and runs programs to share resources with the clients. Clients send requests to the server, which fulfils them. The client/server system may be of the following types: two-tiered, three-tiered or n-tiered.
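The request/response exchange between a client and a server can be sketched with Python's standard socket module. This is a minimal illustration, not a production server; the address, port handling and the "resource" being shared (a small lookup table) are all assumptions made for the example:

```python
import socket
import threading

# The shared "resource" the server offers; contents are illustrative.
RESOURCES = {b"time": b"12:00", b"name": b"server-1"}

def serve_once(server_sock):
    """Accept a single client, read its request, and fulfil it."""
    conn, _ = server_sock.accept()
    with conn:
        request = conn.recv(1024)                         # client's request
        conn.sendall(RESOURCES.get(request, b"unknown"))  # server fulfils it

# Server side: bind to a free local port and wait for one request.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side: connect, send a request, receive the response.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"time")
reply = client.recv(1024)
client.close()
server.close()
print(reply)
```

The same request/response shape underlies all the tiered variants described below; the tiers only change where the logic and data live.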
Two-tiered Architecture: With the advent of the RDBMS, it became possible to send a query and fetch only the required details instead of loading the whole file, as in the file-server approach. Further, the use of a graphical user interface made it easier to access the database. This system also helped reduce traffic, since only the necessary set of information is loaded. Here, the logic may reside with the server or with the client.
The problem with this type of system was that network congestion grew with the number of clients; the system did not scale well as clients increased. Though it handled congestion better than the file-server approach, because the full file was not transferred every time, a very large number of clients still congested the network.
Three-tiered Architecture: With further advancements, another tier appeared between the client and the server, and the architecture came to be called a 3-tier architecture. Here, the presentation (user interface), processing (business functionality) and data are separated into distinct entities, which helps improve performance.
The first tier, called the presentation layer, normally consists of a graphical user interface. The middle tier holds the application logic, and the third tier is the data layer. Through suitable software configuration, the three logical layers can even reside on the same machine. This separation increases both performance and flexibility.
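The three-tier separation can be sketched as three functions, one per tier. In a deployed system each tier could run on a different machine; here they share one process, and all names, prices and the discount rule are illustrative assumptions:

```python
def data_tier(product_id):
    """Third tier: data access (would normally query a database)."""
    prices = {"A1": 100.0, "B2": 250.0}   # illustrative data
    return prices[product_id]

def logic_tier(product_id, quantity):
    """Middle tier: business functionality, e.g. a bulk discount rule."""
    total = data_tier(product_id) * quantity
    if quantity >= 10:
        total *= 0.9                      # 10% bulk discount (assumed rule)
    return total

def presentation_tier(product_id, quantity):
    """First tier: formats the result for the user interface."""
    return f"Total: {logic_tier(product_id, quantity):.2f}"

print(presentation_tier("A1", 10))  # Total: 900.00
```

Because each tier talks only to the one below it, the data layer or the business rule can be replaced without touching the user interface, which is the flexibility the separation buys.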
The problem with this architecture concerns the reusability of program components across different situations; such reuse could help reduce the cost of software development, because no new software would have to be developed for each and every small task. Scalability is also a problem with this architecture.
N-tier Architecture: In the term “N-tier”, “N” implies any number, like 2-tier or 4-tier; basically, any number of distinct tiers is used in the architecture, with each level arranged above another and serving a distinct, separate task. This can increase reusability and reliability.
Internet Architecture:
The Internet is by definition a meta-network: a constantly changing collection of thousands of individual networks intercommunicating with a common protocol. This architecture is based on the very specification of the standard TCP/IP protocol, which is designed to connect any two networks that may differ in internal hardware, software and technical design.