In 2017, the third edition of my book on infrastructure architecture, "Infrastructure Architecture - Infrastructure Building Blocks and Concepts", was published.
IT infrastructure has been the foundation of successful application deployments for many decades. Yet general, up-to-date infrastructure knowledge is not widespread. Experience shows that software developers, system administrators, and project managers often have little knowledge of the big influence IT infrastructure has on the performance, availability, and security of software applications.
This book explains the concepts, history, and implementation of IT infrastructures. Although many books can be found on each individual infrastructure building block, this is the first book to describe all of them: datacenters, servers, networks, storage, operating systems, and end user devices.
The building blocks described in this book provide functionality, but they also provide the non-functional attributes of performance, availability, and security. These attributes are explained on a conceptual level in separate chapters, and in more detail in the chapters on each individual building block.
Whether you need an introduction to infrastructure technologies, a refresher course, or a study guide for a computer science class, you will find that the presented building blocks and concepts provide a solid foundation for understanding the complexity of today’s IT infrastructures.
This book can be used as a study book; it is used by a number of universities in the USA as part of their IT architecture courses based on the IS 2010.4 curriculum.
Download the Table of Contents.
A preview of the book can be downloaded here.
How to order
Hardcover ISBN 978-1-326-91297-0
eBook ISBN 978-1-326-92569-7
Hardcover: 446 pages
Note to the Third Edition
In the third edition of this book, a number of corrections were made, some terminology was explained in more detail, and several typos and syntax errors were fixed. In addition, the following changes were made:
- The infrastructure model was updated to reflect the Networking-Storage-Compute terminology used by most vendors today, and to emphasize the position of systems management.
- The chapter on infrastructure trends was removed; its text was blended into the other chapters.
- The amount of text on the historic context for each building block was reduced.
- The Virtualization chapter and Server chapter were combined and renamed to Compute.
- The storage chapter was reorganized to reflect the new storage building block model.
- The chapter on Security was rearranged and updated.
- Part IV on infrastructure management was added, with chapters on the infrastructure lifecycle, deployment options, assembling and testing, running the infrastructure, systems management processes, and decommissioning.
- In various parts of the book, new cloud technology concepts were added, like Software Defined Networking (SDN), Software Defined Storage (SDS), Software Defined Datacenters (SDDC), Infrastructure as a Service (IaaS), infrastructure as code, and container technology.
- A chapter was added explaining the infrastructure purchase process, as this is part of the IS 2010.4 curriculum.
- All footnotes were converted to endnotes.
- The index was renewed.
- Finally, as technology advanced in the past years, the book was updated to contain the most recent information.
The book is used in a number of universities in the USA, Australia, Chile, and Kuwait, as study material for their IT infrastructure courses. The book is especially suited for courses based on the IS 2010.4 curriculum. A reference matrix of the IS 2010.4 curriculum topics (as used in many universities in the USA) and the relevant sections in this book is provided in the appendix.
Based on requests from university professors, I created a set of course materials. It contains all pictures used in the book in both Visio and high-resolution PNG format, the list of abbreviations, and a PowerPoint slide deck for each chapter (work in progress).
The course materials can be downloaded here.
Previous Edition (Second Edition)
While the third edition is more up to date than the previous version, for those who want to keep using the second edition, it is still available from the following bookstores:
Hardcover ISBN 978-1-291-25079-5
eBook ISBN 978-1-291-25682-6
Some course material of the second edition can be found here.
This entry was posted on Tuesday 31 January 2017
DevOps is a contraction of the terms "developer" and "system operator". DevOps teams consist of developers, testers and application systems managers, and each team is responsible for developing and running one or more business applications or services.
The whole team is responsible for developing, testing, and running their application(s). In case of incidents with the applications under their responsibility, every team member of the DevOps team is responsible to help fix the problem. The DevOps philosophy is “If you built it, you run it”.
While DevOps is typically used for teams developing and running functional software, the same philosophy can be used to develop and run an infrastructure platform that functional DevOps teams can use. In an infrastructure Devops team, infrastructure developers design, test, and build the infrastructure platforms and manage their lifecycle; infrastructure operators keep the platform running smoothly, fix incidents, and apply small changes.
This entry was posted on Friday 06 January 2017
Infrastructure as a Service provides virtual machines, virtualized storage, virtualized networking and the systems management tools to manage them.
IaaS is typically based on cheap, white-label commodity hardware. The philosophy is to keep costs down by allowing the hardware to fail every now and then. Failed components are either replaced or simply removed from the pool of available resources.
IaaS provides simple, highly standardized building blocks to applications. It does not guarantee high availability, performance, or security levels. Consequently, applications running on IaaS should be robust enough to tolerate failing hardware, and should be horizontally scalable to increase performance.
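The effect of this "let it fail" philosophy can be illustrated with a small sketch. The class and node names below are invented for illustration; this is not the API of any real cloud provider.

```python
import random

class IaasPool:
    """Hypothetical sketch of an IaaS resource pool: failed hardware is
    simply removed from the pool instead of being repaired immediately."""

    def __init__(self, node_names):
        self.available = set(node_names)

    def mark_failed(self, node):
        # A failed component is simply removed from the pool of
        # available resources.
        self.available.discard(node)

    def provision_vm(self):
        # Place a new virtual machine on any surviving node; the caller
        # must tolerate the pool shrinking over time.
        if not self.available:
            raise RuntimeError("no capacity left")
        return random.choice(sorted(self.available))

pool = IaasPool(["node-01", "node-02", "node-03"])
pool.mark_failed("node-02")       # cheap hardware fails every now and then
vm_host = pool.provision_vm()     # the VM lands on one of the survivors
print(vm_host)                    # node-01 or node-03, never node-02
```

An application designed for IaaS treats the loss of a node as normal: it simply provisions a replacement instance elsewhere, which is why horizontal scalability matters more here than the reliability of any single server.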
To use IaaS, users create and start a new server, and then install an operating system and their applications. Since the cloud provider only provides basic services, like billing and monitoring, the user is responsible for patching and maintaining the operating system and application software.
Not all operating systems and applications can be used in an IaaS cloud; many software licenses prohibit the use of a fully scalable, virtual environment like IaaS, where it is impossible to know in advance on which machines software will run.
This entry was posted on Friday 28 October 2016
In a traditional infrastructure deployment, compute, storage and networking are deployed and managed independently, often based on components from multiple vendors. In a converged infrastructure, the compute, storage, and network components are designed, assembled, and delivered by one vendor and managed as one system, typically deployed in one or more racks. A converged infrastructure minimizes compatibility issues between servers, storage systems and network devices while also reducing costs for cabling, cooling, power and floor space.
The technology is usually difficult to expand on-demand, requiring the deployment of another rack of infrastructure to add new resources. The following picture shows an example of a converged system.
While in a converged infrastructure the components are deployed as individual systems in a rack, a hyperconverged infrastructure (HCI) brings the same components together within a single server node.
A hyperconverged infrastructure comprises a large number of identical physical servers from one vendor, each with direct attached storage, and special software that manages all servers, storage, and networking as one cluster running virtual machines.
The technology is easy to expand on-demand, by adding servers to the hyperconverged cluster. The following picture shows an example of a hyperconverged system.
Hyperconverged systems are an ideal candidate for deploying VDI environments (see section 12.3.3), because the storage is close to the compute (as it is in the same box) and the solution scales well with the rise of the number of users.
A big advantage of converged and hyperconverged infrastructures is having to deal with only one firmware and software vendor. Vendors of hyperconverged infrastructures provide all updates for compute, storage, and networking in one service pack, and deploying these patches is typically much easier than deploying upgrades to all individual components in a traditional infrastructure deployment.
Drawbacks of converged and hyperconverged infrastructures are:
- Vendor lock-in – the solution is only beneficial if all infrastructure is from the same vendor
- Scaling can only be done in fixed building blocks – if more storage is needed, compute must also be purchased. This can have a side effect: since some software licenses are based on the number of used CPUs or CPU cores, adding storage also means adding CPUs and hence leads to extra license costs.
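The license side effect in the last drawback is easy to quantify. The numbers below are invented purely for illustration: assume each HCI node has 24 CPU cores and a per-core license fee of 100 per year.

```python
# Illustrative calculation of the HCI license side effect: extra nodes
# bought only for their storage still add licensable CPU cores.
# All numbers are hypothetical.
cores_per_node = 24
license_cost_per_core = 100       # assumed yearly license fee per core
extra_nodes_for_storage = 2       # nodes bought only because storage ran out

extra_cores = extra_nodes_for_storage * cores_per_node
extra_license_cost = extra_cores * license_cost_per_core
print(extra_cores, extra_license_cost)   # → 48 4800
```

Even though the organization needed no extra compute, it now pays licenses for 48 additional cores, which is exactly the coupling that fixed building blocks impose.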
This entry was posted on Friday 21 October 2016
Object storage is a storage architecture that manages data as objects, where an object is defined as a file together with its metadata and a globally unique identifier called the object ID.
Examples of metadata are the filename, date and time stamps, the owner, access permissions, the level of data protection, and replication settings (for instance, replication to a different geographic region).
Object storage stores and retrieves data using a REST API over HTTP, served by a webserver, and is designed to be highly scalable.
Where a traditional file system provides a structure that simplifies locating files (for example, a log file is stored in /var/log/proxy/proxy.log), in object storage, a file’s object ID must be administered by the application using it. Using the object ID, the object can be found without knowing the physical location of the data. For example, an application has administered that its log file is stored in object ID 8932189023.
Using object IDs enables simplicity and massive scalability of the storage system, as the object ID is a link to an object that can be stored anywhere.
Data in object storage can’t be modified. Instead, if a file is modified, the original file must be deleted, and a new file must be created, leading to a new object ID. This makes object storage unsuitable for frequently changing data. But it is a good fit for data that doesn't change much, like backups, archives, video and audio files, and virtual machine images.
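The two properties described above (addressing by object ID rather than by path, and immutability) can be sketched in a few lines. The `ObjectStore` class below is an invented toy, not the API of any real object storage product.

```python
import uuid

class ObjectStore:
    """Minimal sketch of an object store: objects are immutable and are
    addressed only by their object ID, never by a path or a physical
    location."""

    def __init__(self):
        self._objects = {}

    def put(self, data, metadata=None):
        # Every stored object gets a new, globally unique object ID,
        # which the application must administer itself.
        object_id = str(uuid.uuid4())
        self._objects[object_id] = (bytes(data), dict(metadata or {}))
        return object_id

    def get(self, object_id):
        return self._objects[object_id][0]

    def modify(self, object_id, new_data):
        # Objects cannot be changed in place: delete the original and
        # store the new version under a fresh object ID.
        metadata = self._objects[object_id][1]
        del self._objects[object_id]
        return self.put(new_data, metadata)

store = ObjectStore()
oid = store.put(b"proxy log, day 1", {"owner": "proxy"})
new_oid = store.modify(oid, b"proxy log, day 2")
assert new_oid != oid             # the application must track the new ID
print(store.get(new_oid))         # b'proxy log, day 2'
```

Note that after the "modification" the old object ID is gone; an application that kept the old ID would get an error, which is why frequently changing data is a poor fit for this model.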
Object storage allows for high availability using commodity servers with direct attached disk drives. It can be set up to replicate objects across multiple servers and locations (typically, at least three copies of every file are stored in multiple geographic zones). If one or more servers or disks fail, data can still be made available, without impact to the application or the end user.
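A simple placement function shows why this replication scheme survives failures. The zone names and the placement rule below are invented for illustration; a real system would hash the object ID to spread load across zones.

```python
# Sketch of the replication described above: three copies of each object
# in three distinct zones, so a whole zone can fail without data loss.
zones = ["zone-a", "zone-b", "zone-c", "zone-d"]   # hypothetical zones

def place_replicas(object_id, zones, copies=3):
    # Derive a starting zone from the numeric object ID (a real system
    # would use a hash), then pick `copies` distinct zones round-robin.
    start = int(object_id) % len(zones)
    return [zones[(start + i) % len(zones)] for i in range(copies)]

replicas = place_replicas("8932189023", zones)
failed_zone = replicas[0]                      # an entire zone goes down
surviving = [z for z in replicas if z != failed_zone]
print(len(set(replicas)), len(surviving))      # → 3 2
```

Even after losing a full zone, two copies remain, and the storage system can re-replicate the object to a healthy zone in the background without the application noticing.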
While object storage was not designed to be used as a file system, some systems emulate a file system on top of object storage. For instance, Amazon’s S3FS creates a virtual filesystem, based on S3 object storage, that can be mounted to an operating system in the traditional way, although with significant performance degradation. A much better solution is to use object storage with applications designed for it.
This entry was posted on Friday 07 October 2016