Client/Server Computing

A client/server system is a networked computing model that distributes processing between clients, which request services, and servers, which supply the requested services. (McFadden, Hoffer, & Prescott, 1999) A client/server network connects many computers, called clients, to a main computer, called a server. A client can be defined as a networked information requester, usually a desktop computer or workstation, that can query databases and/or other information from a server. The client handles the presentation logic, the processing logic, and much of the storage logic. The client provides the graphical interface, while the server provides access to shared resources, typically a database. Objects break up the client and server sides of an application into smart components that can work across networks. (Orfali, Harkey, & Edwards, 1999)

A server can be defined as a device that manages application programs and is shared by each of the client computers attached to the local area network (LAN). The server is usually a high-powered workstation, a minicomputer, or a mainframe that stores information for use by networked clients. A file server is a computer that manages file operations and is shared by each of the client computers attached to the local area network. This connection allows the client computers to share the server computer's resources, such as printers, files, and programs. The server runs software that coordinates the information flow among the other computers, called clients. The file server acts like an additional hard drive for each of the attached computers. If most of the processing occurs on the client rather than on the server, the client is called a fat client. (McFadden et al., 1999; Perry & Schneider, 1999)

The major difference between the server and the client computers is that the server is ordinarily faster and has more storage space. The server generally performs most of the processing tasks. Some servers are dedicated to performing a specific task, such as printing or managing files. A thin server is intended for the home user and provides access to the Internet. A client/server network typically provides an efficient means to connect ten or more computers. Because of the size of a client/server network, most client/server networks have a network administrator who oversees the system. (Shelly, Cashman, Vermaat, & Walker, 1999)

In a file server environment, each client computer is authorized to use the database management system (DBMS) when a database application program runs on that computer. The primary characteristic of file server architecture is that all data manipulation is performed at the client computers, not at the file server. The file server acts solely as a shared data storage device. Software at the file server queues access requests, but it is up to the application program at each client computer to handle all data management functions. (McFadden et al., 1999)
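As a rough illustration of this division of labor, and not an example drawn from the sources cited, the following Java sketch shows a client program under file server architecture. The shared file path, record layout, and selection condition are all hypothetical; the point is simply that every record crosses the network and all filtering happens at the client.

// Minimal sketch of the file-server pattern: the server only exposes a shared
// file; every record travels across the network and is filtered by the
// client's own code. The UNC path and record layout (id, name, state) are
// hypothetical.
import java.io.BufferedReader;
import java.io.FileReader;

public class FileServerClient {
    public static void main(String[] args) throws Exception {
        // The database file lives on the file server's shared drive.
        try (BufferedReader db = new BufferedReader(
                new FileReader("//fileserver/shared/customers.csv"))) {
            String record;
            while ((record = db.readLine()) != null) {
                // All data manipulation happens here at the client.
                String[] fields = record.split(",");
                if ("NY".equals(fields[2])) {
                    System.out.println(fields[0] + " " + fields[1]);
                }
            }
        }
    }
}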

One of the most used buzzwords of the 1990s is client/server. Nearly all hardware and software vendors have something to say on the subject. New developments in distributed computing and object orientation together have brought about the creation of a new class of database systems. These systems use a client/server computing model to provide quick response times for users as well as support for complex, shared data in a distributed environment. Current relational DBMS products are based on a query-shipping approach in which most query processing is performed within the servers. The clients are mainly used to manage the user interface. Object-oriented database systems (OODBMSs), on the other hand, support data-shipping, which allows data request processing to be performed at the clients. (Franklin, Carey, & Livny, 1997)
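The contrast can be sketched with two hypothetical Java client fragments; the table, columns, and predicate below are invented for illustration and do not come from Franklin et al. Under query-shipping the selection predicate travels to the server, which returns only the matching rows, while under data-shipping the data travel to the client, which evaluates the predicate itself.

// Illustrative contrast only, not any particular product's API.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ShippingStyles {

    // Query-shipping (typical relational DBMS): the server does the filtering.
    static List<String> queryShipping(Connection conn) throws Exception {
        List<String> names = new ArrayList<>();
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT name FROM parts WHERE weight > 100")) {
            while (rs.next()) {
                names.add(rs.getString("name"));
            }
        }
        return names;
    }

    // Data-shipping (in the spirit of an OODBMS client cache): all parts are
    // brought to the client, and the filtering runs in client memory.
    static List<String> dataShipping(Connection conn) throws Exception {
        List<String> names = new ArrayList<>();
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name, weight FROM parts")) {
            while (rs.next()) {
                if (rs.getDouble("weight") > 100) {   // predicate evaluated locally
                    names.add(rs.getString("name"));
                }
            }
        }
        return names;
    }
}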

Client/server computing was created because computer managers needed to respond quickly to business demands, which they could not do easily with central, mainframe-based applications. Application development time was too slow, and the results could not be tailored to the special needs of each department. The personal computer environment allowed users to have computing power and data under their control. Unfortunately, this environment did not lend itself to collaboration between workers. There was a great need for a system that would allow each department to control its own formatting and data usage standards. This led to departmental-level client/server computing. (Stallings & Van Slyke, 1997)

The next step was a move to a two-tier client/server system. The only real change here was that a true DBMS was substituted for the file server. This database server is a computer that is responsible for database storage, access, and processing in a client/server environment. The client workstation is responsible for managing the user interface, including presentation logic, data processing logic, and business rules logic. This allowed for a reliable multi-user system, which made it a good solution for many different problems. Unfortunately, the problems with two-tier soon became obvious. A two-tier system does not scale well and is not suitable for enterprise computing. Management problems grow with the size of the system.
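A minimal sketch of such a two-tier client, assuming JDBC and an invented connection URL, table, and account, shows the division of responsibilities: the workstation holds the presentation and business logic and ships SQL directly to the database server, which handles storage, access, and query processing.

// Hypothetical two-tier client: presentation, processing, and business rules
// all live in this program; only storage and query execution happen at the
// database server. Connection details, table, and columns are illustrative.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TwoTierClient {
    public static void main(String[] args) throws Exception {
        // Each client workstation opens its own connection to the database server.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbserver/orders", "clerk01", "secret");
             Statement stmt = conn.createStatement();
             // The query is shipped to the server; only matching rows return.
             ResultSet rs = stmt.executeQuery(
                 "SELECT order_id, total FROM orders WHERE status = 'OPEN'")) {
            // Presentation logic runs here on the client.
            while (rs.next()) {
                System.out.printf("Order %d: %.2f%n",
                        rs.getInt("order_id"), rs.getDouble("total"));
            }
        }
    }
}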

This led to the development of the three-tier system. Adding an application server to handle business and data logic created the three-tier system. This improvement reduced the need for software on the client and added power and scalability at reduced support cost. In a three-tier architecture, the database is the top tier. It runs on a server, as in any client/server environment, waiting to process data requests from approved users.

The middle tier acts as a mediator, processing the requests that flow between the users and the database. It maintains a full-time connection to the database using native drivers, Open Database Connectivity (ODBC), or Java Database Connectivity (JDBC). The middle tier often has its own user login to make that connection, and all database interaction occurs at this tier. Significantly, in this model the database sees only a single user. Therefore, even when a client/server database such as Oracle, Sybase, or SQL Server is serving 500 people, the database itself can detect only the one user.
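A hedged sketch of such a middle-tier component, using JDBC, follows. The class name, connection URL, account, and SQL are hypothetical; the point is that the middle tier logs in with its own credentials and translates all client requests into database calls, so the DBMS sees one connection no matter how many end users are served.

// Hypothetical middle-tier service: it holds the single database login on
// behalf of all end users.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class OrderService {
    private final Connection dbConnection;

    public OrderService() throws Exception {
        // The middle tier connects with its own account; end users never
        // authenticate directly against the database.
        dbConnection = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:prod", "app_user", "app_password");
    }

    // A request from any of the (possibly hundreds of) clients is translated
    // into SQL here; all database interaction stays in this tier.
    public List<String> openOrdersFor(String customerId) throws Exception {
        List<String> results = new ArrayList<>();
        try (PreparedStatement ps = dbConnection.prepareStatement(
                "SELECT order_id FROM orders WHERE customer_id = ? AND status = 'OPEN'")) {
            ps.setString(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    results.add(rs.getString("order_id"));
                }
            }
        }
        return results;
    }
}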

At the bottom of the structure is a very thin client tier, typically written in Java or some Web-based technology that allows it to run within a browser. The connection from the client tier to the middle tier is carried out through technologies chosen to suit the hardware platform and the development environment.
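One possible form of such a thin client, sketched under the assumption of a simple HTTP connection to the middle tier (the host, port, and path below are invented), carries no database or business logic of its own and merely displays what the middle tier returns.

// Minimal thin-client sketch: presentation only.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ThinClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://appserver:8080/orders?customer=C1001");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            // Display whatever the middle tier sends back; no local processing.
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}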

This type of technology has several advantages. First, the part of the software that must be transmitted to the user's terminal can be quite small. Because a very thin client loads only a limited amount of data, start-up times are faster. In large businesses this kind of architecture provides a simple means of centralized configuration management. Because this layer only needs to present the results of the application, the thin client can easily handle a multi-platform environment. These improvements also came with some challenges. Within this type of system there are more potential points of failure, few tools are available, performance sometimes suffers, and upgrades become a significant task.

The Internet is an example of a successful adaptation of a three-tier client/server system; it uses an open set of standards, which allows various networks to interconnect. The introduction of a separate Web server expands the power and use of the system. A mainframe can be brought into the system, allowing existing legacy applications to be used in a new context. Microsoft, IBM, and Netscape are each promoting their own version of what the standard should be. (Anonymous, 1996; Vandersluis, 1999)

Client/server systems have become the computing architecture for many business organizations. Technically, a client/server system places application processing close to the user and thus increases performance. Due to recent improvements in the price and performance of workstations and in networking capabilities, the client/server architecture has become very popular for database systems. A client/server DBMS provides the management of a database within a client/server system. The database is stored on disks that can be accessed only by the servers. Copies of database items are cached in the global memory, which consists of the combined memory of all the computers connected to the system; this reduces disk access. This global memory design reduces data handling, creating less disk input/output during use of the database. The success of client/server computing in the marketplace is not just a matter of new jargon on top of old solutions. Client/server is responsive to, and creates the conditions for, new ways of organizing business.
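The caching idea can be illustrated, very loosely, with the following sketch of a client-side page cache; the Page size, method names, and network call are assumptions for illustration, not a description of any particular DBMS. Pages already held in a client's memory are reused instead of being read again from the server's disk.

// Hedged sketch of client-side caching in a client/server DBMS.
import java.util.HashMap;
import java.util.Map;

public class ClientPageCache {
    private final Map<Integer, byte[]> cachedPages = new HashMap<>();

    // Ask the local cache first; only go to the server (and ultimately its
    // disk) on a miss.
    public byte[] getPage(int pageId) {
        return cachedPages.computeIfAbsent(pageId, id -> fetchFromServer(id));
    }

    // Placeholder for the network round trip to the database server.
    private byte[] fetchFromServer(int pageId) {
        // ... request the page over the network ...
        return new byte[8192];
    }
}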

The client/server approach to computer database application design has been cited as a notable advancement in database technology. Client/server systems are considered by many large organizations to be a way to greatly enhance the delivery of services to customers. The number of users of client/server systems increased drastically between the early and mid-1990s. (Stallings & Van Slyke, 1997)

There are several advantages to using a client/server computing system. If one machine in the network goes down, the rest of the network of small, powerful machines will still function. The computers in this system provide the power to get things done without monopolizing resources. End users are empowered to work locally. Some workstations can be as powerful as mainframes yet cost far less, and this cost difference offers the flexibility to make other purchases or to increase profits. These open systems allow you to pick and choose hardware, software, and services from various vendors. Client/server systems grow easily, and it is easy to modernize the system as needs change. You can mix and match computer platforms to suit the needs of individual departments and users. (McFadden et al., 1999; Stallings & Van Slyke, 1997)

Disadvantages of client/server computing include maintenance challenges: when something goes wrong, there are several possible causes and solutions to sort through. With the client/server architecture, you must often locate or build support tools yourself. (Stallings & Van Slyke, 1997)

There are three limitations worth noting when using file servers on local area networks. First, considerable data movement is generated across the network while the server itself does very little work. The client is quite busy with extensive data manipulation, and the network must transfer large blocks of data; consequently, this client-based LAN approach places a heavy traffic load on the network and concentrates the processing load on the client. Second, each client workstation has to commit memory to a complete copy of the database management system, leaving much less memory for any application programs running on the client computer. Increasing the random access memory (RAM) in the client will improve performance here, however. Each client needs to be quite powerful to provide good response time, while the server needs little RAM or processing power because it does very little work itself. Third, and possibly most important, the DBMS copy on each workstation must manage the integrity of the shared database. Application programmers must be rather sophisticated to understand the various subtle conditions that can arise in a multiple-user database environment. They need to be aware of exactly how the application will interface with the DBMS, especially concerning recovery and security controls, and they must be able to program such controls effectively into their applications. (McFadden et al., 1999)

Anonymous. (1996, December 23). Database & Client/Server World. Paper presented at the Conference Analysis In-depth Reports on Leading IT Conferences, Chicago, IL.

Franklin, M. J., Carey, M. J., & Livny, M. (1997). Transactional client-server cache consistency: alternatives and performance. ACM Transactions on Database Systems, 22(3), 31-80.

McFadden, F. R., Hoffer, J. A., & Prescott, M. B. (1999). Modern Database Management (5th ed.). Addison-Wesley.

Orfali, R., Harkey, D., & Edwards, J. (1999). Client/Server Survival Guide (3rd ed.). John Wiley & Sons.

Perry, J. T., & Schneider, G. P. (1999). The Internet. Course Technology.

Shelly, G. B., Cashman, T. J., Vermaat, M. E., & Walker, T. J. (1999). Discovering Computers 2000: Concepts for a Connected World, Web and CNN Enhanced. Course Technology.

Stallings, W., & Van Slyke, R. (1997). Business Data Communications (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.

Vandersluis, C. (1999). Third-tier to end the rule of client/server. Computing Canada, 25(16), 17-18.