Software Defined Networking (SDN) and Network Function Virtualization (NFV)

Software Defined Networking (SDN) is a relatively new concept. It allows networks to be defined and controlled using software external to the physical networking devices.

With SDN, a relatively simple physical network can be programmed to act as a complex virtual network. It can become a hierarchical, complex and secure virtual structure that can easily be changed without touching the physical network components.

An SDN can be controlled from a single management console and open APIs can be used to manage the network using third party software. This is particularly useful in a cloud environment, where networks change frequently as machines are added or removed from a tenant’s environment. With a single click of a button or a single API call, complex networks can be created within seconds.

SDN works by decoupling the control plane and the data plane from each other, such that the control plane resides centrally while the data plane (the physical switches) remains distributed, as shown in the next figure.

Software Defined Networking (SDN)

In a traditional switch or router, the network device dynamically learns its packet forwarding rules and stores them locally, for instance in MAC address tables or routing tables. In an SDN, the distributed data plane devices forward network packets based on rules that are loaded into them by SDN controllers in the central control plane. This allows the physical devices to be much simpler and more cost effective.
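As an illustration, the Python sketch below pushes a forwarding rule to the northbound REST API of an SDN controller. The controller URL, the /api/flows endpoint, the credentials and the JSON structure are assumptions made for this example; real controllers such as OpenDaylight or ONOS each define their own northbound API.

    import requests

    # Hypothetical northbound REST API of an SDN controller (URL and schema are assumptions).
    CONTROLLER = "https://sdn-controller.example.com:8443"

    # A simple flow rule: forward traffic destined for 10.0.1.0/24 out of port 2 on switch "sw1".
    flow_rule = {
        "switch": "sw1",
        "priority": 100,
        "match": {"ipv4_dst": "10.0.1.0/24"},
        "action": {"output_port": 2},
    }

    # The controller translates this into entries in the forwarding tables of the data plane devices.
    response = requests.post(
        f"{CONTROLLER}/api/flows",
        json=flow_rule,
        auth=("admin", "secret"),   # placeholder credentials
        timeout=10,
    )
    response.raise_for_status()
    print("Flow rule installed:", response.json())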


Network Function Virtualization

In addition to SDN, Network Function Virtualization (NFV) is a way to virtualize networking devices like firewalls, VPN gateways and load balancers. Instead of having hardware appliances for each network function, in NFV, these appliances are implemented by virtual machines running applications that perform the network functions.

Using APIs, NFV virtual appliances can be created and configured dynamically and on demand, leading to a flexible network configuration. It allows, for instance, a new firewall to be deployed as part of a script that creates a number of connected virtual machines in a cloud environment.
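To illustrate this, the Python sketch below deploys and configures a virtual firewall through a hypothetical NFV orchestration API; the endpoint, token and payloads are assumptions made for this example.

    import requests

    # Hypothetical NFV orchestration API (endpoint and payloads are assumptions for illustration).
    NFV_API = "https://nfv-orchestrator.example.com/api/v1"

    session = requests.Session()
    session.headers["Authorization"] = "Bearer <token>"   # placeholder token

    # 1. Instantiate a virtual firewall appliance in the tenant's network.
    fw = session.post(f"{NFV_API}/vnfs", json={
        "type": "firewall",
        "name": "web-tier-fw",
        "network": "tenant-net-1",
        "flavor": "small",
    }, timeout=30).json()

    # 2. Configure a rule on the new appliance: only allow HTTPS to the web servers.
    session.post(f"{NFV_API}/vnfs/{fw['id']}/rules", json={
        "action": "allow",
        "protocol": "tcp",
        "port": 443,
        "destination": "10.0.1.0/24",
    }, timeout=30)

    print("Virtual firewall", fw["name"], "deployed and configured")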


This entry was posted on Friday 09 September 2016

Software Defined Storage (SDS)

Software Defined Storage (SDS) abstracts data and storage capabilities (the control plane) from the underlying physical storage systems (the data plane). This allows data to be stored in a variety of storage systems while being presented and managed as one storage pool to the servers consuming the storage. The figure below shows the SDS model.

Software Defined Storage (SDS) model

Heterogeneous physical storage devices can be made part of the SDS system. SDS enables the use of standard commodity hardware, where storage is implemented as software running on commodity x86-based servers with direct attached disks. But the physical storage can also be a Storage Area Network, a Network Attached Storage system, or an Object storage system. SDS virtualizes this physical storage into one large shared virtual storage pool. From this storage pool, software provides data services like:

  • Deduplication
  • Compression
  • Caching
  • Snapshotting
  • Cloning
  • Replication
  • Tiering

SDS provides servers with virtualized data storage pools with the required performance, availability and security, delivered as block, file, or object storage, based on policies. As an example, a newly deployed database server can invoke an SDS policy that mounts storage configured to have its data striped across a number of disks, creates a daily snapshot, and has data stored on tier 1 disks.

APIs can be used to provision storage pools and set the availability, security and performance levels of the virtualized storage. In addition, using APIs, storage consumers can monitor and manage their own storage consumption.
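As an illustration, the Python sketch below requests a new block volume from the virtual storage pool, governed by a policy like the one described above. The API endpoint, credentials and policy name are assumptions made for this example.

    import requests

    # Hypothetical SDS management API (URL, payload and policy names are assumptions).
    SDS_API = "https://sds-manager.example.com/api"

    # Request a 500 GB block volume from the virtual pool, governed by a policy that
    # defines striping, daily snapshots and tier 1 placement (as in the example above).
    volume_request = {
        "name": "db01-data",
        "size_gb": 500,
        "type": "block",
        "policy": "tier1-striped-daily-snapshot",
    }

    response = requests.post(f"{SDS_API}/volumes", json=volume_request,
                             auth=("storage-admin", "secret"), timeout=30)
    response.raise_for_status()
    volume = response.json()
    print("Volume provisioned:", volume["name"])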


This entry was posted on Friday 09 September 2016

What's the point of using Docker containers?

Introduction

Originally, operating systems were designed to run a large number of independent processes. In practice, however, dependencies on specific versions of libraries and specific resource requirements for each application process led to using one operating system – and hence one server – per application. For instance, a database server typically only runs a database, while an application server is hosted on another machine.

Compute virtualization solves this problem, but at a price – each application needs a full operating system, leading to high license and systems management cost. And because even the smallest application needs a full operating system, much memory and many CPU cycles are wasted just to get isolation between applications. Container technology is a way to solve this issue.

Container isolation versus overhead

The figure above shows the relation between the isolation of applications and the overhead of running them. While running each application on a dedicated physical machine provides the highest isolation, the overhead is very high. Running all applications as processes in a single operating system, on the other hand, provides much less isolation, but at a very low overhead per application.

Container technology, also known as operating-system-level virtualization, is a server virtualization method in which the kernel of an operating system provides multiple isolated user-space instances, instead of just one. These containers look and feel like a real server from the point of view of their owners and users, but they share the same operating system kernel. This isolation enables the operating system to run multiple processes, where each process shares nothing but the kernel.

Container technology

Containers are not new – the first UNIX based containers, introduced in 1979, provided isolation of the root file system via the chroot operation. Solaris subsequently pioneered and explored many enhancements, and Linux control groups (cgroups) adopted many of these ideas.

Containers have been part of the Linux kernel since 2008. What is new is the use of containers to encapsulate all application components, such as dependencies and services. And when all dependencies are encapsulated, applications become portable.

Using containers has a number of benefits:

  • Isolation – applications or application components can be encapsulated in containers, each operating independently and isolated from each other.
  • Portability – since containers typically contain all components the embedded application or application component needs to function, including libraries and patches, they can be run on any infrastructure that is capable of running containers with the same kernel version.
  • Easy deployment – containers allow developers to quickly deploy new software versions, as the containers they define can be moved to production unaltered.

Container technology

Containers are based on three technologies that are all part of the Linux kernel:

  • Chroot (also known as a jail) - changes the apparent root directory for the current running process and its children and ensures that these processes cannot access files outside the designated directory tree. Chroot was available in Unix as early as 1979.
  • Cgroups (control groups) - limit and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. Cgroups have been part of the Linux kernel since 2008.
  • Namespaces - allow complete isolation of an application's view of the operating environment, including process trees, networking, user IDs and mounted file systems. Namespaces have been part of the Linux kernel since 2002.

Linux Containers (LXC), introduced in 2008, is a combination of chroot, cgroups, and namespaces, providing isolated environments, called containers.
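As a minimal illustration of the oldest of these building blocks, the Python sketch below confines a process to a directory tree using chroot. The jail directory is an assumed path that should contain a minimal root file system, and the script must be run as root; a real container combines this with cgroups and namespaces.

    import os

    # Assumed directory containing a minimal root file system (for instance busybox binaries).
    JAIL = "/srv/jail"

    pid = os.fork()
    if pid == 0:
        # Child: change the apparent root directory; from now on the child (and its
        # children) can only see files below /srv/jail.
        os.chroot(JAIL)
        os.chdir("/")
        print("Files visible inside the jail:", os.listdir("/"))
        os._exit(0)

    os.waitpid(pid, 0)   # parent waits for the jailed child to finish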

Docker can use LXC as one of its execution drivers. It adds Union File System (UFS) – a way of combining multiple directories into one that appears to contain their combined contents – to the containers, allowing multiple layers of software to be "stacked". Docker also automates deployment of applications inside containers.
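As an illustration, the sketch below starts a container from Python, assuming the Docker Engine and the Python docker SDK are installed; the image and command are arbitrary examples.

    import docker  # pip install docker

    client = docker.from_env()   # connect to the local Docker daemon

    # Run a container from a layered image; the image encapsulates the application
    # and all of its dependencies, so it runs unchanged on any Docker host.
    output = client.containers.run(
        "python:3.9-slim",           # example image (any image will do)
        ["python", "-c", "print('hello from an isolated container')"],
        remove=True,                  # clean up the container when it exits
    )
    print(output.decode())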

Containers and security

While containers provide some isolation, they still use the same underlying kernel and libraries. Isolation between containers on the same machine is therefore much weaker than isolation between virtual machines, which is enforced in hardware using specialized CPU instructions. However, some operating systems, such as Joyent's SmartOS, run containers on bare metal and provide them with hardware-based isolation using those same CPU instructions.

Since developers define the contents of containers, security officers lose control over them, which could lead to unnoticed vulnerabilities, multiple versions of tools, or unpatched, outdated, or unlicensed software. To solve this issue, a repository with predefined and approved container components and container hierarchies can be implemented.

Container orchestration

Where an operating system abstracts resources such as CPU, RAM, and network connectivity and provides services to applications, container orchestration, also known as a datacenter operating system, abstracts the resources of a cluster of machines and provides services to containers. A container orchestrator allows containers to be run anywhere on the cluster of machines – it schedules the containers to any machine that has resources available. It acts like a kernel for the combined resources of an entire datacenter instead of the resources of just a single computer.

Container orchestration

There are many frameworks for managing container images and orchestrating the container lifecycle. Some examples are:

  • Docker Swarm
  • Apache Mesos
  • Google's Kubernetes
  • Rancher
  • Pivotal CloudFoundry
  • Mesosphere DC/OS
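As a sketch of what such an orchestrator does, the example below uses the Docker Swarm command line (called from Python) to create and scale a small service on an existing Swarm cluster; the scheduler decides on which nodes the replicas run. The service name and image are arbitrary examples.

    import subprocess

    # Create a service with three replicas on an existing Docker Swarm cluster.
    # The Swarm scheduler places the replicas on whichever nodes have resources available.
    subprocess.run([
        "docker", "service", "create",
        "--name", "web",
        "--replicas", "3",
        "--publish", "80:80",
        "nginx",                      # example image
    ], check=True)

    # Scale the service up; the orchestrator schedules the extra replicas automatically.
    subprocess.run(["docker", "service", "scale", "web=5"], check=True)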

This entry was posted on Wednesday 22 June 2016

Identity and Access Management

Identity and Access Management (IAM) is the process of managing the identities of people or systems and their permissions on systems.

IAM is a three-step process. In an IAM solution, users or systems first announce who they are (identification – they provide their name), then the claimed identity is checked (authentication – they provide, for instance, a password, which is verified), and then the account is granted the permissions related to that identity and the groups it belongs to (authorization – they are allowed into the system).
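The following Python sketch shows these three steps with a toy user store; the data, the salting and the hashing scheme are simplified assumptions for illustration only.

    import hashlib

    # Toy user store: identity -> (salted password hash, groups). Purely illustrative;
    # a real system would use a per-user salt and a slow hash such as bcrypt or scrypt.
    USERS = {
        "alice": {"pw_hash": hashlib.sha256(b"salt" + b"s3cret").hexdigest(),
                  "groups": ["finance"]},
    }
    GROUP_PERMISSIONS = {"finance": {"read_reports", "create_invoice"}}

    def login(identity, password):
        user = USERS.get(identity)                             # identification
        if user is None:
            return None
        offered = hashlib.sha256(b"salt" + password.encode()).hexdigest()
        if offered != user["pw_hash"]:                         # authentication
            return None
        permissions = set()
        for group in user["groups"]:                           # authorization
            permissions |= GROUP_PERMISSIONS.get(group, set())
        return permissions

    print(login("alice", "s3cret"))   # -> {'read_reports', 'create_invoice'}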

Most systems have a way to connect identities to their permissions. For instance, the kernel of an operating system maintains a list of user accounts and their rights, describing which identities are allowed to read, write, modify, or delete files.

IAM is not only used on the operating system level, but also in applications, databases, or other systems. Often these systems have their own stand-alone IAM system, which leads to users logging in to each and every system they use. With Single sign-on (SSO), a user logs in once and is passed seamlessly, without an authentication prompt, to applications configured with it. SSO provides user friendliness, but does not necessarily enhance security – when the main login credentials are known, an attacker gains access to all systems. SSO is typically implemented using LDAP, Kerberos, or Microsoft Active Directory. 
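As an illustration of authenticating against such a central directory, the Python sketch below performs an LDAP bind, assuming the ldap3 library is installed; the server name and user DN are made up.

    from ldap3 import Server, Connection  # pip install ldap3

    # Hypothetical directory server and user DN, for illustration only.
    server = Server("directory.example.com", use_ssl=True)
    conn = Connection(server,
                      user="uid=jdoe,ou=people,dc=example,dc=com",
                      password="s3cret")

    if conn.bind():                      # authentication against the central directory
        print("Authenticated")
        conn.unbind()
    else:
        print("Invalid credentials")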

Federated identity management extends SSO above the enterprise level, creating a trusted authority for digital identities across multiple organizations. In a federated system, participating organizations share identity attributes based on agreed-upon standards, facilitating authentication from other members of the federation and granting appropriate access to systems.

Users can be authenticated in one of three ways:

  • Something you know, like a password or PIN
  • Something you have, like a bank card, a token or a smartphone
  • Something you are, like a fingerprint or an iris scan

Many systems only use a username/password combination (something you know), but more and more systems use multi-factor authentication, where at least two types of authentication are required. An example is an ATM, where a bank card is needed (something you have) as well as a PIN (something you know).
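As a sketch of the "something you have" factor implemented in software, the example below verifies a time-based one-time password, assuming the pyotp library is installed; the shared secret is a made-up example value, and the password check would be the first factor.

    import pyotp  # pip install pyotp

    # Shared secret provisioned to the user's smartphone app (made-up example value).
    totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")

    def second_factor_ok(code_from_user):
        # "Something you have": the phone generating time-based one-time passwords.
        return totp.verify(code_from_user)

    print("Current one-time code (normally shown only on the phone):", totp.now())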

Typically, users are members of one or more groups (usually named after their roles in the organization) and, instead of granting permissions to individual users, permissions are granted to these groups. And since groups can be nested (a group can be a member of another group), this so-called Role Based Access Control (RBAC) is very powerful.
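The following Python sketch shows how permissions can be resolved through nested groups; all group names and permissions are made up for illustration.

    # Toy RBAC data: each group has its own permissions and may be a member of other groups.
    GROUPS = {
        "employees":      {"permissions": {"read_intranet"},   "member_of": set()},
        "finance":        {"permissions": {"read_reports"},    "member_of": {"employees"}},
        "finance-admins": {"permissions": {"approve_invoice"}, "member_of": {"finance"}},
    }
    USER_GROUPS = {"alice": {"finance-admins"}}

    def permissions_for(user):
        """Collect permissions from the user's groups and every group they nest into."""
        permissions, to_visit, seen = set(), set(USER_GROUPS.get(user, set())), set()
        while to_visit:
            group = to_visit.pop()
            if group in seen:
                continue
            seen.add(group)
            permissions |= GROUPS[group]["permissions"]
            to_visit |= GROUPS[group]["member_of"]
        return permissions

    # alice is in finance-admins, which is a member of finance, which is a member of employees,
    # so she inherits the permissions of all three groups.
    print(permissions_for("alice"))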


This entry was posted on Friday 01 April 2016

Using user profiles to determine infrastructure load

To be able to predict the load a new software system will place on the infrastructure, and to create representative test scripts before the software is built, user profiling can be used.

In order to predict the load on the infrastructure, it is important to have a good indication of the future usage of the system. This can be done by defining a number of typical user groups of the new system (also known as personas) and by creating a list of the tasks they will perform on the new system.

First, a list of personas must be defined – preferably fewer than ten. Representatives of these persona groups must be interviewed to understand how they will use the new system. From these interviews, a list can be compiled of the main tasks they perform (like logging in, starting the application, opening a document, creating a report, etc.).

For each of these tasks, an estimate can be made of how, and how often, the personas will use the system's functionality to perform the task. Based on these estimates, and the number of users each persona represents, a calculation can be made of how often each system task is performed in a given time frame, and how these system tasks translate into infrastructure tasks. A very simplified example is given below:

Persona | Number of users per persona | System task | Infrastructure task | Frequency
Data entry officer | 100 | Start application | Read 100 MB data from SAN | Once a day
Data entry officer | 100 | Start application | Transport 100 MB data to workstation | Once a day
Data entry officer | 100 | Enter new data | Transport 50 KB data from workstation to server | 40 per hour
Data entry officer | 100 | Enter new data | Store 50 KB data to SAN | 40 per hour
Data entry officer | 100 | Change existing data | Read 50 KB data from SAN | 10 per hour
Data entry officer | 100 | Change existing data | Transport 50 KB data from server to workstation | 10 per hour
Data entry officer | 100 | Change existing data | Transport 50 KB data from workstation to server | 10 per hour
Data entry officer | 100 | Change existing data | Store 50 KB data to SAN | 10 per hour
Data entry officer | 100 | Close application | Transport 500 KB configuration data from workstation to server | Once a day
Data entry officer | 100 | Close application | Store 500 KB data to SAN | Once a day

This leads to the following profile for this persona group:

Infrastructure task | Per day (KB) | Per second (KB)
Data transport from server to workstation | 10,400,000 | 361.1
Data transport from workstation to server | 2,050,000 | 71.2
Data read from SAN | 10,400,000 | 361.1
Data written to SAN | 2,050,000 | 71.2
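This aggregation can easily be scripted, which helps when there are many personas and tasks. The Python sketch below reproduces the figures above; note that the per-second numbers imply an averaging period of an 8-hour working day (28,800 seconds), which is an assumption not stated explicitly in the table.

    # Each tuple: (infrastructure task, KB per occurrence, occurrences per user per day).
    # Frequencies of 40/hour and 10/hour are converted assuming an 8-hour working day.
    HOURS_PER_DAY = 8
    USERS = 100

    TASKS = [
        ("Data read from SAN",                        100_000, 1),                  # start application
        ("Data transport from server to workstation", 100_000, 1),                  # start application
        ("Data transport from workstation to server", 50,      40 * HOURS_PER_DAY), # enter new data
        ("Data written to SAN",                       50,      40 * HOURS_PER_DAY), # enter new data
        ("Data read from SAN",                        50,      10 * HOURS_PER_DAY), # change existing data
        ("Data transport from server to workstation", 50,      10 * HOURS_PER_DAY), # change existing data
        ("Data transport from workstation to server", 50,      10 * HOURS_PER_DAY), # change existing data
        ("Data written to SAN",                       50,      10 * HOURS_PER_DAY), # change existing data
        ("Data transport from workstation to server", 500,     1),                  # close application
        ("Data written to SAN",                       500,     1),                  # close application
    ]

    totals = {}
    for task, kb, per_day in TASKS:
        totals[task] = totals.get(task, 0) + kb * per_day * USERS

    for task, kb_per_day in totals.items():
        kb_per_second = kb_per_day / (HOURS_PER_DAY * 3600)
        print(f"{task}: {kb_per_day:,} KB/day, {kb_per_second:.1f} KB/s")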

Of course, in practice, this exercise is much more complicated. There might be many personas and complex tasks; tasks may be spread over time or show hotspots (like starting the application or logging in, which typically happens at the start of the day); the system can have background processes running; and the load a specific task places on the system can be very hard to predict.

But as this very simplified example shows, user profiles can help determine the load on various parts of the infrastructure, even before the application software is written.


This entry was posted on Sunday 21 February 2016

