History of servers

This is part of the chapter "A brief history of IT Infrastructures" of my forthcoming book "Infrastructure Architecture". Please feel free to comment using my email address stated in the right column of this website.

The first computers  

Originally the word computer referred to a person who performed manual calculations (or computations). From the early 1900s onward, the word was also used for calculating machines, and the first computing machines were in fact mechanical calculators. However, those were not real computers. A real computer has two properties: it calculates and it is programmable. Programmable computers only became practical after the introduction of punched cards, which allowed a machine to be programmed and reprogrammed easily.

The first general-purpose electronic computer was the ENIAC. The Electronic Numerical Integrator And Computer (ENIAC) was designed in 1943 and financed by the United States Army in the midst of World War II. While its purpose was to calculate artillery firing tables for the Army's Ballistic Research Laboratory, it was actually first used to perform calculations for the hydrogen bomb.

The machine was finished and fully operational in 1946 (after the war) and remained in continuous operation until 1955. The ENIAC could perform no less than 5,000 operations per second, which was spectacular at the time.

The ENIAC was enormous in both size and complexity. It contained:

  • 17,468 vacuum tubes
  • 7,200 crystal diodes
  • 1,500 relays
  • 70,000 resistors
  • 10,000 capacitors
  • Around 5 million hand-soldered joints

The ENIAC weighed about 30 tons, was roughly 2.6 m × 0.9 m × 26 m, took up 63 m², and consumed 150 kW of power.

The ENIAC received its input via an IBM card reader, and IBM punched cards were used for output as well. In 1945 John von Neumann wrote a paper called "First Draft of a Report on the EDVAC". It described an architecture for a stored-program computer, which stores programs and data electronically instead of on punched cards. A few years later the first computers based on this von Neumann architecture became operational, making computers far more versatile.

In the 1960s computers started to be built with transistors instead of the large and heavy vacuum tubes used in earlier machines. Transistor-based machines were smaller, faster, cheaper to produce, required less power, and were more reliable.

The transistor-based machines were followed in the 1970s by machines based on integrated circuit (IC) technology. ICs are small chips containing large numbers of transistors. They provided standardized building blocks like AND gates, OR gates, counters, adders, and flip-flops; combining these building blocks yielded CPUs and memory circuits. The subsequent creation of microprocessors further decreased the size and cost of computers and increased their speed and reliability. In the 1980s microprocessors became so cheap that home computers and the personal computer became popular.
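To illustrate how such building blocks compose, here is a small Python sketch (my own example, modeling only the logic, not any specific IC family) that wires AND, OR, and XOR gates into a one-bit full adder and chains full adders into a simple ripple-carry adder:

    def and_gate(a, b):
        return a & b

    def or_gate(a, b):
        return a | b

    def xor_gate(a, b):
        return a ^ b

    def full_adder(a, b, carry_in):
        # A full adder built from two half adders plus an OR gate.
        s1 = xor_gate(a, b)
        c1 = and_gate(a, b)
        total = xor_gate(s1, carry_in)
        c2 = and_gate(s1, carry_in)
        return total, or_gate(c1, c2)

    def ripple_add(x, y):
        # Add two equally long little-endian bit lists by chaining full adders.
        carry, out = 0, []
        for a, b in zip(x, y):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out + [carry]

    print(ripple_add([1, 1], [1, 0]))  # 3 + 1 = 4, printed as [0, 0, 1]

Real ICs offered exactly these kinds of gates and adders as off-the-shelf chips; a CPU is, at its heart, an enormous network of such elements.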

Mainframes

Mainframes are powerful computers used mainly by large organizations for critical applications, typically performing bulk data processing.

The term mainframe originally referred to the large cabinets that housed the central processing unit and main memory of early computers. Later the term was used to distinguish high-end commercial machines from less powerful ones.

Many mainframe lines were originally designed in the 1960s and have evolved from those early systems into the very powerful machines still in use today.

Modern mainframe computers excel not only in very high computational speed (expressed in MIPS: Millions of Instructions Per Second) but also in their built-in redundancy and the resulting high reliability, their high security, extensive input/output facilities, strict backward compatibility with older software, and high utilization rates that support massive throughput. Mainframes often run for years without interruption, with repairs and hardware upgrades taking place during normal operation.

In the 1960s, most mainframes had no interactive interface. They accepted sets of punched cards, paper tape, and/or magnetic tape and operated solely in batch mode to support back-office functions such as customer billing. By the early 1970s, many mainframes had acquired interactive user interfaces and operated as time-sharing computers, supporting hundreds of users simultaneously alongside batch processing. Users gained access through specialized terminals or, later, from personal computers equipped with terminal emulation software. Most modern mainframes still use character-based terminals for their user interface, often running in separate windows on an ordinary PC.

Mainframes are designed to handle very high volumes of input and output (I/O). They include several subsidiary computers (called channels or peripheral processors) that manage the I/O devices, leaving the CPU free to deal only with high-speed memory.
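To make the channel idea concrete, here is a minimal Python sketch (an analogy only: a real channel is dedicated hardware, not a thread, and the file name output.log is made up for the example) in which a background worker performs all slow I/O while the main loop works exclusively on in-memory data:

    import queue
    import threading

    io_requests = queue.Queue()

    def channel_worker():
        # Plays the role of a channel: it alone touches the slow I/O device.
        while True:
            line = io_requests.get()
            with open("output.log", "a") as f:  # slow device access
                f.write(line + "\n")
            io_requests.task_done()

    threading.Thread(target=channel_worker, daemon=True).start()

    # The "CPU" keeps doing high-speed in-memory work and only queues I/O.
    total = 0
    for i in range(1_000):
        total += i * i
        if i % 100 == 0:
            io_requests.put(f"checkpoint at i={i}, total={total}")

    io_requests.join()  # wait until the "channel" has drained the queue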

Nowadays most mainframes can host multiple operating systems in virtual machines. Since mainframes typically have dozens of processors and hundreds of gigabytes of memory (the IBM System z9 Enterprise Class has up to 54 CPUs and 512 GB of memory), they can host an enormous number of virtual machines.

It is not uncommon for a mainframe to run both production and test servers on the same machine. This leads to an environment in which one mainframe can replace all the server racks in a datacenter, lowering the total cost of ownership (system administrators no longer need to manage many racks of hardware) and increasing flexibility.

Minicomputers

Minicomputers are multi-user computers in the middle range of the computing spectrum, between mainframes and the smaller microcomputer-based servers. They are therefore also known as midrange computers.

The term minicomputer arose in the 1960s to describe the small computers that became possible with the use of IC and core memory technologies. Small was relative, however; a minicomputer usually took up a few cabinets the size of a large refrigerator.

The first commercially successful minicomputer was Digital Equipment Corporation’s (DEC) PDP-8, launched in 1965. The PDP-8 sold for one-fifth the price of the smallest IBM System/360 mainframe. Its speed, small size, and reasonable price enabled the PDP-8 to find its way into thousands of manufacturing plants, small businesses, and scientific laboratories.

DEC created a complete range of PDP systems (of course starting with the PDP-1), but the PDP-8 and the PDP-11 were the most successful ones. Later DEC produced another very successful minicomputer series: the VAX.

DEC was the leading minicomputer manufacturer, at one time the second-largest computer company after IBM. DEC was sold to Compaq in 1998, which in turn became part of HP in 2002.

In the 1970s minicomputers ran the first Computer Aided Design (CAD) / Computer Aided Manufacturing (CAM) software. CAD/CAM was a revolution in manufacturing at the time, enabling faster design and production of end-user products.

During the 1970s minicomputers became powerful systems that ran full multi-user, multitasking operating systems like VMS and UNIX.

Halfway through the 1980s minicomputers became less popular, due to the lower cost of microprocessor-based hardware, the emergence of inexpensive and easily deployable local area networks, the arrival of ever more powerful microprocessors, and the desire of end users to be less dependent on inflexible minicomputer manufacturers and IT departments. However, in places where high availability, performance, and security are very important, minicomputers are still used.

Most minicomputers today run a flavor of the UNIX operating system. HP’s systems (the HP 9000 series and the Alpha series) run HP-UX, Tru64 UNIX, and OpenVMS. Sun produces SPARC processor-based systems running Solaris; these Sun systems are now also built with Intel processors.

IBM’s Power processor-based systems run AIX. IBM also has a second range of minicomputers: the AS/400 series, later renamed IBM System i, running the OS/400 operating system.

Supercomputers

A supercomputer is built around a specialized computer architecture designed to maximize calculation speed. Supercomputers are the fastest machines available at any given time; since computing speed increases continuously, they are superseded by new supercomputers every year. Supercomputers are used for weather forecasting, geology (for example, calculating where to drill for oil), military purposes (for instance, calculations for atomic bomb design), and more peaceful tasks such as rendering movies like Toy Story and Shrek.

Supercomputers were produced primarily by a company named Cray Research. The Cray-1 was a major success when it was released, faster than any other computer at the time. The first Cray-1 was installed at Los Alamos National Laboratory in 1976, and it went on to become one of the best-known and most successful supercomputers in history. The machine cost $8.86 million at the time.

Supercomputers used vector CPUs, specially designed for performing calculations on large sets of data. Together with deep pipelines and dedicated hardware for certain instructions (like multiply and divide), this boosted performance. Cray’s designers spent as much effort on the cooling system as on the rest of the mechanical design: liquid Freon running through stainless steel pipes drew the heat away from the circuit boards to the cooling unit below the machine.

The entire chassis was bent into a large C-shape. Speed-dependent portions of the system were placed on the "inside edge" of the chassis, where wire lengths were shorter.

By using vector instructions carefully and building useful instruction chains, the system could peak at 250 MFLOPS (million floating-point operations per second). In 1985 the very advanced Cray-2 was released, capable of 1.9 GFLOPS peak performance. (For comparison: as of 2010, the Intel Core i7-980X Extreme Edition CPU has a peak performance of 107 GFLOPS.)
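To get a feel for what vector processing means, the following Python sketch (purely illustrative; it assumes the NumPy library, whose whole-array operations stand in for vector instructions) multiplies two large arrays, first one element at a time and then as a single vector operation:

    import time
    import numpy as np

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    # Scalar style: one multiplication per loop iteration.
    t0 = time.perf_counter()
    c_scalar = [x * y for x, y in zip(a, b)]
    t1 = time.perf_counter()

    # Vector style: one operation applied to the whole array at once.
    c_vector = a * b
    t2 = time.perf_counter()

    print(f"scalar loop: {t1 - t0:.3f} s, vector operation: {t2 - t1:.3f} s")

On an ordinary PC the vector version is orders of magnitude faster, for the same reason vector CPUs outperformed scalar designs: a single instruction drives many data elements.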

Supercomputers as single machines started to disappear in the 1990s. Their work was taken over by clustered computers: a large number of simple, off-the-shelf PC-based computers connected by fast networks to form one large computer array. Nowadays high-performance computing is done mainly with large arrays of Linux systems; in 2010 the fastest computer array was a Linux cluster with more than 200,000 CPU cores.
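The principle of a cluster can be sketched in a few lines of Python (an analogy only: worker processes on one machine stand in for networked cluster nodes, which in practice communicate over something like MPI). A large computation is split into chunks and the partial results are combined:

    from multiprocessing import Pool

    def partial_sum(bounds):
        # Each "node" independently computes its share of the work.
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n, nodes = 10_000_000, 8
        step = n // nodes
        chunks = [(i * step, (i + 1) * step) for i in range(nodes)]
        with Pool(nodes) as pool:  # one worker process per "node"
            total = sum(pool.map(partial_sum, chunks))
        print(total)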

Supercomputers today are used for highly calculation-intensive tasks such as quantum physics problems, weather forecasting, and climate research.

x86 servers

Most servers in datacenters today are based on the x86 architecture. This x86 architecture (also known as the PC architecture) is based on the original IBM PC, whose history is described in more detail in the section on Workstations.

In the 1990s, servers based on the x86 architecture started to appear. These servers were basically PCs, but housed in 19” racks and without dedicated keyboards and monitors.

Over the years x86 servers became the de facto standard for servers. Their low cost and the fact that many manufacturers offer these systems (as opposed to, for instance, mainframes or minicomputers), all able to run standard operating systems like Microsoft Windows and Linux, made them extremely popular.

