Midrange systems architecture

The midrange platform is positioned between the mainframe platform and the x86 platform. The size and cost of the systems, their workload capacity, availability, performance, and the maturity of the platform are higher than those of the x86 platforms, but lower than those of a mainframe.

Today midrange systems are produced by three vendors:

  • IBM produces the Power Systems series of midrange servers (the former RS/6000, System p, AS/400, and System i series).
  • Hewlett-Packard produces the HP Integrity systems.
  • Oracle produces the SPARC servers originally developed by Sun Microsystems.

Midrange systems are typically built using parts from only one vendor, and run an operating system provided by that same vendor. This makes the platform relatively stable, leading to high availability and security.

The term minicomputer evolved in the 1960s to describe the small computers that became possible with the use of IC and core memory technologies. Small was relative, however; a single minicomputer typically was housed in a few cabinets the size of a 19” rack.

The first commercially successful minicomputer was the DEC PDP-8, launched in 1965. The PDP-8 sold for one-fifth the price of the smallest IBM 360 mainframe. This enabled manufacturing plants, small businesses, and scientific laboratories to have a computer of their own.

In the late 1970s, DEC produced another very successful minicomputer series called the VAX. VAX systems came in a wide range of models. They could easily be set up as a VAXcluster for high availability and performance.

DEC was the leading minicomputer manufacturer and the second largest computer company (after IBM). DEC was sold to Compaq in 1998, which in turn became part of HP some years later.

Minicomputers became powerful systems that ran full multi-user, multitasking operating systems like OpenVMS and UNIX. Halfway through the 1980s, minicomputers became less popular as a result of the lower cost of microprocessor-based PCs and the emergence of LANs. In places where high availability, performance, and security are very important, minicomputers (now better known as midrange systems) are still used.

Most midrange systems today run a flavor of the UNIX operating system, OpenVMS, or IBM i:

  • HP Integrity servers run HP-UX UNIX and OpenVMS.
  • Oracle/Sun’s SPARC servers run Solaris UNIX.
  • IBM's Power systems run AIX UNIX, Linux and IBM i.

Midrange systems architecture
Midrange systems used to be based on specialized Reduced Instruction Set Computer (RISC) CPUs. These CPUs were optimized for speed and simplicity, but many of the technologies originating from RISC are now implemented in general purpose CPUs. Some midrange systems are therefore moving from RISC-based CPUs to general purpose CPUs from Intel, AMD, or IBM.

Most midrange systems use multiple CPUs and are based on a shared memory architecture. In a shared memory architecture, all CPUs in the server can access all installed memory blocks. This means that changes made in memory by one CPU are immediately seen by all other CPUs. Each CPU operates independently from the others. To connect all CPUs with all memory blocks, an interconnection network is used, based on either a shared bus or a crossbar.

A shared bus connects all CPUs and all RAM, much like a network hub does. The available bandwidth is shared between all users of the shared bus. A crossbar is much like a network switch, in which every communication channel between one CPU and one memory block gets full bandwidth.

The I/O system is also connected to the interconnection network, connecting I/O devices like disks or PCI based expansion cards.

Since each CPU has its own cache, and memory can be changed by other CPUs, cache coherence is needed in midrange systems. Cache coherence means that if one CPU writes to a location in shared memory, all other CPUs must update their caches to reflect the changed data. Maintaining cache coherence introduces a significant overhead. Special-purpose hardware is used to communicate between cache controllers to keep a consistent memory image.
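From a programmer's point of view, cache coherent shared memory means that a value written by one process or thread, possibly running on another CPU, can simply be read back by another, without any explicit cache flushing by the application. The Python sketch below is a minimal illustration of that shared memory model (it uses the standard multiprocessing.shared_memory module and assumes Python 3.8 or later); the coherence protocol in hardware, not the program, keeps the caches consistent.

```python
from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to the existing shared block and write to it; this write may
    # well be executed on a different CPU than the reader below.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 42
    shm.close()

if __name__ == "__main__":
    # Create one byte of shared memory, accessible by all CPUs in the system.
    shm = shared_memory.SharedMemory(create=True, size=1)
    shm.buf[0] = 0

    p = Process(target=writer, args=(shm.name,))
    p.start()
    p.join()

    # The cache coherence hardware guarantees that this CPU sees the value
    # written by the other process, without explicit cache flushes.
    print(shm.buf[0])   # prints 42

    shm.close()
    shm.unlink()
```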

Shared memory architectures come in two flavors: Uniform Memory Access (UMA), and Non Uniform Memory Access (NUMA). Their cache coherent versions are known as ccUMA and ccNUMA.

The UMA architecture is one of the earliest styles of multi-CPU architectures, typically used in servers with no more than 8 CPUs. In a UMA system, the machine is organized into a series of nodes, each containing either a processor or a memory block. These nodes are interconnected, usually by a shared bus. Via the shared bus, each processor can access all memory blocks, creating a single system image.


UMA systems are also known as Symmetric Multi-Processor (SMP) systems. SMP is used in x86 servers as well as early midrange systems.

SMP technology is also used inside multi-core CPUs, in which the interconnect is implemented on-chip and a single path to the main memory is provided between the chip and the memory subsystem elsewhere in the system.


UMA is supported by all major operating systems and can be implemented using most of today’s CPUs.

In contrast to UMA, NUMA is a server architecture in which the machine is organized into a series of nodes, each containing processors and memory, that are interconnected, typically using a crossbar. NUMA is a newer architecture style than UMA and is better suited for systems with many processors.


A node can use memory on all other nodes, creating a single system image. But when a processor accesses memory not within its own node, the data must be transferred over the interconnect, which is slower than accessing local memory. Thus, memory access times are non-uniform, depending on the location of the memory, as the architecture’s name implies.
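The NUMA layout of a server can be inspected from user space. The sketch below is a minimal example, assuming a Linux system that exposes the standard /sys/devices/system/node interface; it lists every NUMA node together with the CPUs that access that node's memory locally.

```python
import glob
import os

# List the NUMA nodes the kernel exposes, with the CPUs local to each node
# (Linux only; assumes the standard /sys/devices/system/node interface).
for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    print(f"{os.path.basename(node)}: local CPUs {cpus}")
```

Memory belonging to any other node in this listing is reachable as well, but only over the interconnect, which is exactly what makes the access times non-uniform.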

Some of the current servers using NUMA architectures include systems based on AMD Opteron processors, Intel Itanium systems, and HP Integrity and Superdome systems. Most popular operating systems such as OpenVMS, AIX, HP-UX, Solaris, and Windows, and virtualization hypervisors like VMware fully support NUMA systems.

This entry was posted on Friday 25 September 2015

Mainframe Architecture

A mainframe is a high-performance computer made for high-volume, processor-intensive computing. Mainframes were the first commercially available computers. They were produced by vendors like IBM, Unisys, Hitachi, Bull, Fujitsu, and NEC, but IBM has always been the largest vendor; it still has a 90% market share in the mainframe market.

Mainframes used to have no interactive user interface. Instead, they ran batch processes, using punched cards, paper tape, and magnetic tape as input, and produced printed paper as output. In the early 1970s, most mainframes got terminal-based interactive user interfaces, serving hundreds of users simultaneously.

While the end of the mainframe has been predicted for decades now, mainframes are still widely used. Today's mainframes are still relatively large (the size of a few 19" racks), but they no longer fill up an entire room. They are expensive computers, mostly used for administrative processes, and optimized for handling high volumes of data.

The latest IBM z13 mainframe, introduced in 2015, can host up to 10TB of memory and 141 processors, running at a 5GHz clock speed. It has enough resources to run up to 8000 virtual servers simultaneously.


Mainframes are highly reliable, typically running for years without downtime. Much redundancy is built in, enabling hardware upgrades and repairs while the mainframe is operating, without downtime. Sometimes a separate system is added to the mainframe whose primary job is to continuously check the mainframe's health. When a hardware failure is detected, an IBM engineer is called automatically, sometimes without the systems managers even knowing it!

All IBM mainframes are backwards compatible with older mainframes. For instance, the 64-bit mainframes of today can still run the 24-bit System/360 code from the early days of mainframe computing. Much effort is spent on ensuring all software continues to work without modification.

Mainframe architecture
A mainframe consists of processing units (PUs), memory, I/O channels, control units, and devices, all placed in racks (frames). The architecture of a mainframe is shown below.


The various parts of the architecture are described below.

Processing Units
In the mainframe world the term PU (Processing Unit) is used instead of the more ambiguous term CPU. A mainframe has multiple PUs, so there is no central processing unit. The total of all PUs in a mainframe is called a Central Processor Complex (CPC).

The CPC resides in its own cage inside the mainframe, and consists of one to four so-called book packages. Each book package consists of processors, memory, and I/O connections, much like x86 system boards.

Mainframes use specialized PUs (like the quad core z10 mainframe processor) instead of off-the-shelf Intel or AMD supplied CPUs.

All processors in the CPC start as equivalent processor units (PUs). Each processor is characterized during installation or at a later time, based on the specific task it is configured to perform. Some examples of characterizations are:

  • Central Processor (CP) - the main processors of the system, used to run applications on the VM, z/OS, and ESA/390 operating systems.
  • CP Assist for Cryptographic Function (CPACF) - assists the CPs by handling workload associated with encryption and decryption.
  • Integrated Facility for Linux (IFL) - assists with Linux workloads; IFLs are regular PUs with a few specific instructions needed by Linux.
  • Integrated Coupling Facility (ICF) - executes licensed internal code to coordinate system tasks.
  • System Assist Processor (SAP) - assists the CPs with workload for the I/O subsystem, for instance by translating logical channel paths to physical paths.
  • IBM System z Application Assist Processor (zAAP) - used for Java code execution.
  • IBM System z Integrated Information Processor (zIIP) - used for processing certain database workloads.
  • Spares - used to replace any failing CP or SAP.

Main Storage
Each book package in the CPC cage contains from four to eight memory cards. For example, a fully loaded z9 mainframe has four book packages that can provide up to 512 GB of memory.

The memory cards are hot swappable, which means that you can add or remove a memory card without powering down the mainframe.

Channels, ESCON and FICON
A channel provides a data and control path between I/O devices and memory.

Today’s largest mainframes have 1024 channels. Channels connect to control units, either directly or via switches. Specific slots in the I/O cages are reserved for specific types of channels, which include the following:

  • Open Systems Adapter (OSA) - this adapter provides connectivity to various industry standard networking technologies, including Ethernet.
  • Fibre Connection (FICON) - this is the most flexible channel technology. With FICON, input/output devices can be located many kilometers from the mainframe to which they are attached.
  • Enterprise Systems Connection (ESCON) - this is an earlier type of fiber-optic channel technology. ESCON channels are slower than FICON channels and bridge shorter distances.

The FICON or ESCON switches may be connected to several mainframes, sharing the control units and I/O devices.

The channels are high speed – today’s FICON Express16S channels provide up to 320 links of 16 Gbit/s each.

Control units
A control unit is similar to an expansion card in an x86 or midrange system. It contains logic to work with a particular type of I/O device, like a printer or a tape drive.

Some control units can have multiple channel connections providing multiple paths to the control unit and its devices, increasing performance and availability.

Control units can be connected to multiple mainframes, creating shared I/O systems. Sharing devices, especially disk drives, is complicated, and hardware and software techniques are used by the operating system to prevent two independent systems from updating the same disk data at the same time.

Control units connect to devices, like disk drives, tape drives, and communication interfaces. Disks in mainframes are called DASDs (Direct Access Storage Devices); a DASD setup is comparable to a SAN (Storage Area Network) in a midrange or x86 environment.

This entry was posted on Friday 04 September 2015

Software Defined Data Center - SDDC

A Software Defined Data Center (SDDC), a term coined by VMware, also known as a Virtual Data Center (VDC), is a datacenter where all infrastructure components are delivered as virtual devices. Using software, the components' configurations are controlled and their deployment is automated. An SDDC typically includes Software Defined Computing (SDC), Software Defined Storage (SDS), and Software Defined Networking (SDN).


An SDDC is the basis for cloud computing. It enables (end) users and systems managers to create and deploy new infrastructures using user-friendly software. The software allows the user to select the needed infrastructure components, their sizing, and the required availability, and it automatically configures the SDDC components to deliver the requested infrastructure. The SDDC software also provides tools for costing, logging, reporting, scaling (up and down), and decommissioning of the infrastructure and its components.
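As an illustration of what such software-driven deployment can look like, the sketch below uses the OpenStack SDK for Python to request a new virtual server from an SDDC through its API. This is a hedged example: the cloud profile ("my-sddc"), image, flavor, and network names are hypothetical and depend on the actual environment.

```python
import openstack

# Connect to the SDDC's API; "my-sddc" is a hypothetical cloud profile
# defined in clouds.yaml (or credentials come from environment variables).
conn = openstack.connect(cloud="my-sddc")

# Look up the building blocks for the new server (names are examples).
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("app-tier-net")

# Ask the SDDC to deploy the server; the platform takes care of placing it
# on physical hardware and configuring compute, storage, and networking.
server = conn.compute.create_server(
    name="web-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```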

An SDDC is not the solution for all problems. There are a large number of applications and application stacks that need a much more custom-designed infrastructure than the standard building blocks an SDDC provides. Examples are SAP HANA, high-performance databases, OLTP systems, highly secure banking or stock trading transaction systems, and SCADA systems.

This entry was posted on Thursday 30 April 2015

The Virtualization Model

One model can be used for virtualization technologies, such as Software Defined Compute (SDC), Software Defined Networking (SDN), and Software Defined Storage (SDS), as shown in the picture below.


The model shows three layers: at the bottom, the physical devices; then a virtualization layer that creates abstract system resources; and at the top a number of virtual devices that are composed from the abstract system resources. It is a well-known way of implementing a virtual machine environment, but it also applies to networking and storage.

When the virtualization layer is programmable, using APIs, it can be controlled by (external, third party) software, to support software defined computing, software defined storage, or software defined networking.

The number of physical devices is typically independent of the number of virtual devices. The physical devices can be commodity hardware, or enterprise grade hardware, or a mixture. Because of the virtualization layer the physical hardware can be upgraded, replaced or phased out independent of the operation of the virtual devices.

The virtualization layer provides a resource pool that enables virtual devices to be configured. Ideally, the virtualization layer should decrease the performance delivered by the physical devices by no more than 10%.

The virtualization layer can provide advanced features, such as:

  • Storage (SDS): deduplication, RAID, Snapshots;
  • Compute (SDC): Live migration, virus scanning;
  • Networking (SDN): VLANs, filtering, IDS, firewalls, virus scanning;

and it provides APIs for scripting and orchestration, as used by software such as OpenStack.

This entry was posted on Thursday 23 April 2015

Software Defined Computing (SDC), Networking (SDN) and Storage (SDS)

Software Defined Computing

While virtualization has been around for many decades, it was mainly focused on the virtualization of computing power: the use of multiple virtual machines running on one physical machine. This allowed a better use of the physical computer's resources, as most physical machines ran at a fairly low CPU and memory utilization. A hypervisor is used as a layer between the physical and virtual machines. Apart from providing virtual machines, this hypervisor also allows for additional functionality, like managing virtual machines from one management console, adding virtual memory or CPU cores to a virtual machine, providing high availability by restarting failed virtual machines, and dynamically moving running virtual machines between physical machines to allow load balancing and maintenance. This extra functionality (that the virtual machines are not aware of) can be called Software Defined Computing (SDC), as the hypervisor is controlled by software. In addition, SDC provides open APIs to enable third-party software to monitor and control the SDC's hypervisor(s).
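One example of such an open hypervisor API is libvirt. The sketch below assumes a Linux host running the KVM/QEMU hypervisor with the libvirt Python bindings installed; it connects to the hypervisor and reports the state of its virtual machines, which is the kind of call that third-party monitoring and management tools build on. Features such as live migration are exposed through the same API.

```python
import libvirt

# Connect to the local KVM/QEMU hypervisor (read-only is enough for monitoring).
conn = libvirt.openReadOnly("qemu:///system")

# List all virtual machines known to the hypervisor and report their state.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: {state}, {dom.maxMemory() // 1024} MiB memory")

conn.close()
```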

Not only compute resources can be virtualized. Lately, virtualization of networking and storage resources has become more popular. This is not so much about better utilization of the hardware, which is not necessarily improved by this virtualization, but it does enable software defined storage (SDS) and software defined networking (SDN).

Software Defined Storage

With SDS, the physical storage pool is virtualized into virtual storage pools (LUNs). This is nothing new; it has been possible for many years. In addition, SDS provides extra functionality, like connecting heterogeneous storage devices and offering open APIs.

SDS enables the use of storage devices from multiple vendors, managing them as one storage pool, by using open APIs and by physically coupling storage devices together.
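As a sketch of what such an open API can look like, the example below uses the OpenStack SDK's block storage service, one possible SDS front end, to carve a new volume out of the virtualized storage pool. The connection profile and volume name are assumptions; the SDS layer decides on which physical devices the volume actually lands.

```python
import openstack

# Connect to the storage platform's API ("my-sddc" is a hypothetical profile).
conn = openstack.connect(cloud="my-sddc")

# Request a 100 GB volume from the virtual storage pool; placement on the
# underlying physical storage devices is handled by the SDS layer.
volume = conn.block_storage.create_volume(name="data-01", size=100)
print(volume.id, volume.status)
```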

Software Defined Networking

With SDN, a relatively simple physical network can be used to provide a complex virtual network. Technologies like VLANs have been around for a long time, but they require complex configurations on a number of devices to work properly. SDN provides one point of control to configure the network in a dynamic way.

In a SDN environment, the physical network is typically based on a spine and leaf topology, as shown below.


This topology has a number of benefits:

  • Each server is exactly four hops away from every server connected to another leaf switch (the sketch after this list verifies this)
  • The topology is simple to scale: just add spine or leaf switches
  • Since there are no interconnects between the spine switches, the design is highly scalable
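The fixed hop count can be checked with a small model of the topology. The sketch below uses the networkx library to build a hypothetical fabric of two spine switches, four leaf switches, and two servers per leaf, and prints the number of hops between every pair of servers.

```python
import itertools
import networkx as nx

SPINES, LEAVES, SERVERS_PER_LEAF = 2, 4, 2

fabric = nx.Graph()

# Every leaf switch connects to every spine switch (no spine-to-spine links).
for leaf in range(LEAVES):
    for spine in range(SPINES):
        fabric.add_edge(f"leaf{leaf}", f"spine{spine}")
    # Each server connects to exactly one leaf switch.
    for srv in range(SERVERS_PER_LEAF):
        fabric.add_edge(f"server{leaf}-{srv}", f"leaf{leaf}")

servers = [n for n in fabric if n.startswith("server")]
for a, b in itertools.combinations(servers, 2):
    hops = nx.shortest_path_length(fabric, a, b)
    print(f"{a} -> {b}: {hops} hops")
```

Servers that share a leaf switch are only two hops apart; any pair on different leaves is exactly four hops apart, regardless of how many leaves are added.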

Because of the relatively flat hierarchy and the fixed number of hops, the topology can easily be virtualized using VLANs. The virtual network can then have a hierarchical, complex, and secure virtual structure that can easily be changed without touching the physical switches. The network can be controlled from a single management console, and open APIs can be provided to manage the network using third-party software.

This entry was posted on Wednesday 15 April 2015
