Software Defined Computing
While virtualization has been around for many decades, it was mainly focused on virtualizing computing power – running multiple virtual machines on one physical machine. This allowed better use of the physical computer's resources, as most physical machines ran at fairly low CPU and memory utilization. A hypervisor is used as a layer between the physical and virtual machines. Apart from providing virtual machines, the hypervisor also allows for additional functionality, like managing virtual machines from one management console, adding virtual memory or CPU cores to a virtual machine, providing high availability by restarting failed machines, and dynamically moving running machines between physical machines for load balancing and maintenance. This extra functionality (of which the virtual machines are not aware) can be called Software Defined Computing (SDC), as the hypervisor is controlled by software. In addition, SDC provides open APIs to enable third-party software to monitor and control the SDC's hypervisor(s).
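As a minimal sketch of such an open API, the snippet below uses the libvirt Python bindings (one widely used hypervisor API) to let third-party software inspect the virtual machines running on a hypervisor. The connection URI is an example; a real monitoring tool would add error handling and collect the data periodically.

```python
# Minimal sketch: third-party monitoring through an open hypervisor API,
# here the libvirt Python bindings. The connection URI is an example.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # read-only monitoring connection
for dom in conn.listAllDomains(0):
    # info() returns [state, max memory (KiB), memory (KiB), vCPUs, CPU time (ns)]
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"{dom.name()}: state={state}, vCPUs={vcpus}, memory={mem // 1024} MB")
conn.close()
```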
Compute resources are not the only resources that can be virtualized. Lately, virtualization of networking and storage resources has become more popular. The goal here is not so much better hardware utilization – virtualization does not necessarily improve that – but enabling software defined storage (SDS) and software defined networking (SDN).
Software Defined Storage
With SDS, physical storage is virtualized into virtual storage pools (LUNs). This in itself is nothing new; it has been possible for many years. On top of this, SDS provides extra functionality: it enables storage devices from multiple vendors – physically coupled, heterogeneous devices – to be managed as one storage pool, controlled through open APIs.
Software Defined Networking
With SDN, a relatively simple physical network can be used to provide a complex virtual network. Technologies like VLANs have been around for a long time, but they require complex configurations on a number of devices to work properly. SDN provides one point of control to configure the network in a dynamic way.
In an SDN environment, the physical network is typically based on a spine-and-leaf topology, as shown below.
This topology has a number of benefits:
- Each server is always exactly four hops away from every other server
- The topology is simple to scale: just add spine or leaf switches
- Since there are no interconnects between the spine switches, the design is highly scalable
Because of the relatively flat hierarchy and the fixed number of hops, the topology can easily be virtualized using VLANs. The virtual network can then have a hierarchical, complex, and secured virtual structure that can easily be changed without touching the physical switches. The network can be controlled from a single management console, and open APIs can be provided to manage the network using third-party software.
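As a minimal sketch of such an open API, the snippet below posts a virtual network definition to an SDN controller's REST interface. The URL, payload fields, and port names are hypothetical illustrations, not a specific vendor's API.

```python
# Minimal sketch: configuring a virtual network through an SDN controller's
# REST API. The endpoint, payload fields, and port names are hypothetical.
import requests

controller = "https://sdn-controller.example.com/api/v1"  # hypothetical URL
payload = {
    "name": "finance-vlan",
    "vlan_id": 210,
    "ports": ["leaf1:eth3", "leaf4:eth7"],  # hypothetical leaf switch ports
}

resp = requests.post(f"{controller}/virtual-networks", json=payload, timeout=10)
resp.raise_for_status()  # raise an error if the controller rejects the request
print("Created virtual network:", resp.json())
```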
This entry was posted on Wednesday 15 April 2015
I found that there is no clear definition of the number of concurrent users a system must support.
When a system is used by a large group of users, not all users are active all the time. For instance, if your organization has 10,000 employees, not everyone is in the office working every day: people have holidays or can be sick. And when they are in the office, they are not behind their desks all the time, as they can be in meetings, standing at the coffee machine, etc. And when they are at their desk using the system, they are not always generating load on the back-end systems. For instance, when they are reading an article fetched from the internet, only the fetching of the document puts a load on the system; the time the user spends reading the text does not.
Consider the following example.
| Assumption | Users |
| --- | --- |
| Total number of employees | 10,000 |
| Only 80% is at the office | 8,000 |
| 70% of their time is spent at their desk | 5,600 |
| At their desk, they use the system 70% of the time | 3,920 |
| In that time, they only perform actions that put a load on the infrastructure for 5% of the time | 196 |
This means that during the day, on average, only 196 of the 10,000 employees are actively using the infrastructure at any given moment.
As an alternative, we can use the ratio between usage of the system and "thinking time". In our example, the percentage of thinking time is 100% - (196 / 10,000 × 100%) ≈ 98%.
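A quick sketch of this calculation, using the numbers from the example above:

```python
# Minimal sketch: estimating concurrent active users from the example's numbers.
employees = 10_000
factors = {
    "at the office": 0.80,
    "at their desk": 0.70,
    "using the system": 0.70,
    "loading the infrastructure": 0.05,
}

active = employees
for step, fraction in factors.items():
    active *= fraction
    print(f"{step}: {active:,.0f} users")

thinking_time = (1 - active / employees) * 100
print(f"Concurrent active users: {active:.0f}, thinking time: {thinking_time:.1f}%")
```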
A further breakdown could show how the system is used:
| Action | Example(s) |
| --- | --- |
| Open file | Open file in an office application (like Excel or Word), or open the file explorer |
| Save file | Save document from an office application |
| Send HTTP request | Push a button in a browser-based application, leading to sending data, or use AJAX calls |
| Receive HTTP data | Receive data from a webserver when using a browser-based application, or use AJAX calls |
| Send data to the Internet | Push a button on an internet page, use AJAX calls, or send data using protocols like FTP |
| Receive data from the Internet | Receive a web page from the internet, use AJAX calls, or receive data using protocols like FTP |
| Send email/calendar updates | Send a typed email to the email server, or update calendar items |
| Receive email/calendar updates | Receive new emails from the email server |
| Send VDI/SBC data | In an SBC or VDI environment, send keyboard and mouse input to the server |
| Receive VDI/SBC data | In an SBC or VDI environment, receive screen output from the server |
| Send and receive data from DNS | Use DNS to resolve IP addresses |
| Send and receive data from AD | Use AD to handle login/logout or to check credentials |
| Other | Other uses of the infrastructure |
Using such a categorization, the actual load on the infrastructure can be calculated – if we know how the system is set up, how the actions relate to a certain load, and what a typical user's behavior is. Not all users are alike. By observing groups of people, their typical behavior can be mapped to the defined categories over time. For instance, a group called secretaries will typically:
- Open 25 existing Word documents
- Save 40 Word documents (including new documents)
- Send 25 emails
- Receive 25 emails
Based on these numbers, and with the insight in the setup of the system, the actual load on the various parts of the infrastructure can be calculated. This calculation can then be used to shape performance tests.
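A minimal sketch of such a calculation, using the action counts from the secretaries example; the group size, the assumption that the counts are per day, and the load figures per action are illustrative placeholders:

```python
# Minimal sketch: translating observed user behavior into infrastructure load.
# Action counts come from the secretaries example; the group size and the
# kilobytes-per-action figures are illustrative assumptions.
actions_per_user = {  # actions per user per day (assumed period)
    "open Word document": 25,
    "save Word document": 40,
    "send email": 25,
    "receive email": 25,
}
load_per_action_kb = {  # assumed load per action, in KB
    "open Word document": 500,
    "save Word document": 600,
    "send email": 75,
    "receive email": 75,
}
group_size = 200  # assumed number of secretaries

total_kb = sum(
    count * group_size * load_per_action_kb[action]
    for action, count in actions_per_user.items()
)
print(f"Estimated daily load for the group: {total_kb / 1024:.0f} MB")
```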
This entry was posted on Friday 23 January 2015
The availability of an IT component can be determined by measuring (monitoring) the performance of that component. If the performance is below a certain threshold, the IT component is reported as unavailable.
Monitoring IT systems can be done using a variety of tools. Vendors like IBM, HP, BMC and others provide tools to:
- Measure performance
- Capture logging
- Generate alarms based on thresholds
- Report the collected data in dashboards or other overviews
Typically, the number of measuring points in an IT landscape is overwhelming. Installed out of the box, monitoring tools will typically detect many issues per second, leading to many false alarms. It is therefore essential to tune the monitoring system so it only generates useful alarms, and to create reports containing useful information for specific stakeholders.
Performance measurement (and, as a derivative, availability detection) can be done on multiple levels:
- Business process level
- Application component level
- Infrastructure component level
It is important to have separate performance measurements on all three levels, and to have processes to solve issues on each individual level.
For the end users of the system, only the business process level is important – as soon as the performance on this level is too low, the end users are in trouble. Therefore, the business process level should be measured. Today's tools can measure individual business process steps, either by observing their normal use or by measuring the effect of generated business actions. For instance, a tool can measure how long it takes to print an invoice, or how long a simulated fake order takes to be processed in a certain business step.
If the performance on the business process level is below the set threshold, the performance of the underlying application component(s) should be verified first. Since every layer is responsible for its own performance, a problem in the application component layer could be causing the performance issue in the business process layer. And the application component layer could in turn have performance issues due to an issue in the infrastructure component layer. It is therefore important to separate these layers and give systems managers specific responsibilities for each layer. Between the layers, Service Level Agreements (SLAs) should be agreed upon.
If the performance on the business process level is too low and there is no problem in the underlying application components, the solution to the performance issue must be found in the business process layer itself. If it cannot be found there either, there is a mismatch between the layers – a certain business process issue is apparently not detected in the lower application service layer.
Of course, this reasoning is also valid for the relation between the application components layer and the infrastructure component layer.
On the application component level, performance can be measured effectively if the application components contain "hooks" that the monitoring tool can use to verify the performance of a software component. Without these hooks, measuring can only be done at a much coarser granularity. Especially when bespoke software is developed, it is advisable to invest in building these hooks into the software as part of the regular development process. Typical measurements are the number of times (a part of) an application component is used, and how long it takes to finish a certain task. Software typically has some hot spots – parts of the code that are used much more frequently than others. By measuring through hooks in the software, these hot spots can be found, monitored, and optimized for performance.
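A minimal sketch of such a hook, written as a Python decorator that counts calls and measures durations so a monitoring tool could collect them; the monitored business function is a placeholder:

```python
# Minimal sketch: a monitoring "hook" that records call counts and durations.
import time
from collections import defaultdict

call_stats = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0})

def monitored(func):
    """Wrap a function so each call is counted and timed."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            stats = call_stats[func.__name__]
            stats["calls"] += 1
            stats["total_seconds"] += time.perf_counter() - start
    return wrapper

@monitored
def print_invoice(order_id):  # placeholder business function
    time.sleep(0.01)          # simulate work

print_invoice(42)
print(call_stats["print_invoice"])
```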
On the infrastructure component level, the performance of each individual component can be measured. Examples are:
- CPU load
- Memory usage
- Network response time
- Network load
- Storage response time
- Storage load
Based on these measurements, low performance – or even unavailability – of a certain component or set of components can be detected.
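A minimal sketch of such threshold-based detection; the metrics, sampled values, and thresholds are illustrative assumptions:

```python
# Minimal sketch: raising alarms when infrastructure measurements cross
# thresholds. The measurements and thresholds are illustrative values.
thresholds = {
    "cpu_load_percent": 90,
    "memory_usage_percent": 85,
    "storage_response_ms": 20,
}

measurements = {  # e.g. collected by a monitoring agent
    "cpu_load_percent": 95,
    "memory_usage_percent": 60,
    "storage_response_ms": 35,
}

for metric, value in measurements.items():
    if value > thresholds[metric]:
        print(f"ALARM: {metric} = {value} exceeds threshold {thresholds[metric]}")
```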
Systems managers can react to detected low performance by addressing the issue at hand. It is important to acknowledge that early detection and resolution of performance issues is essential to avoid performance problems in the higher layers. Early detection and resolution keeps the systems managers busy, but reduces the risk that end users experience performance issues.
It is like the people who work hard to keep the trains running on time. If they do their work well, no one will notice…
This entry was posted on Friday 09 January 2015
I am involved in a project that designs a browser-based application. The customer has very specific ideas about the UI/UX aspects. As Forrester states: "Ultimately any service must meet the customer's needs, be easy to use and enjoyable – the three facets of customer experience."
UI (User Interface) and UX (User Experience) are related concepts. In practice, the difference is fuzzy, to say the least. In theory, UI is about the buttons, drop-down lists, colors, and other elements of a user interface, while UX is about the whole user experience (like how easily one can switch between tasks, or how responsive a user interface is to user actions). UX is mostly about the joy of using a user interface.
I found that it can be hard to define the demarcation point between the UI/UX part and the rest of the application. In older applications, there was no real border between the two. But splitting the UI/UX layer from the rest of the application components is very beneficial.
The main rule is that the UI/UX layer should not have any business logic; all business logic must be implemented in the underlying application. This allows using multiple user interfaces with the same application: a modern browser, but also older browser versions, mobile apps on tablets or phones, an interface for disabled people, or whatever comes next.
The user interface should only help users to get a better experience. In theory, a text-based interface (green screen) should suffice to operate the application via its API services. The UI/UX layer is just helpful for humans, but does not improve or extend the business functions of the application.
For instance, the user interface could have a drop-down list to select a value. In theory, such a value could also be entered manually with the same functionality (though that could lead to errors when a value is mistyped). Therefore, drop-down lists are part of the UI/UX layer. When the user interface is ported to a mobile app, the app could use another way to select a value. In all cases, the application and its API services are unchanged. This means that UI/UX is all about non-functional requirements, not about application or business functionality.
In general, the release cycle of user interfaces like mobile apps is much faster than that of functional management or software development. Where software development to create new functions typically takes months to get into production, and changes on the functional management layer could take weeks, changes on the UI/UX layer can be done in days. This allows for frequent updates of the user interface and a continuous improvement of the user experience.
But to allow a new version of a user interface to work seamlessly with the underlying application, it is essential that the app contains no business logic. Instead, it should use the API services of the underlying application(s) on the server. This allows, for instance, changing one user interface to benefit from the latest browser enhancements, while still providing an unaltered user interface to users of older browsers, without changing anything in the application or its API services.
The question is how to handle functionalities like spell checkers or auto-fill fields like Google uses. Strictly speaking, these functionalities only help the user to get a better experience. For instance, most applications would still work when words are mistyped in a text editor. The spell checker is only helping the user and enhancing the user experience; therefore, a spell checker is part of the UI/UX. But to have a functional spell checker, server calls are needed (it makes little sense to load a full dictionary into the user interface of a web browser; the typed words are typically checked against a dictionary on a server using an API service). Therefore, an API service is needed, which is a server-side application component. As this type of UI/UX functionality is typically used in a number of applications, in most cases one generic service is used by the UI/UX layers of multiple applications. But architecturally, it is still a UI/UX component, as it has no business function.
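A minimal sketch of such a generic spell-check API service, assuming Flask on the server; the route, payload shape, and word list are hypothetical examples:

```python
# Minimal sketch: a generic server-side spell-check service that the UI/UX
# layers of multiple applications could call. Route and payload are examples.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical word list; a real service would load a full dictionary.
DICTIONARY = {"invoice", "order", "customer", "architecture"}

@app.route("/api/spellcheck", methods=["POST"])
def spellcheck():
    words = request.get_json().get("words", [])
    unknown = [w for w in words if w.lower() not in DICTIONARY]
    return jsonify({"misspelled": unknown})

if __name__ == "__main__":
    app.run()
```

The user interface only sends the typed words and renders the response; the service holds no business logic, which is why it can be shared across applications while architecturally remaining a UI/UX component.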
This entry was posted on Friday 18 July 2014
According to Wikipedia, Technical Debt refers to the eventual consequences of poor software architecture and software development within a codebase. The debt can be thought of as work that needs to be done before a particular job can be considered complete. If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on. Unaddressed technical debt increases software entropy.
So, technical debt is created by building a solution that does not comply with the architecture or detailed design. The reason is, most of the time, lack of time: in some circumstances, shortcuts are needed in order to deliver a project on time. This is not necessarily bad – there can be good economic, commercial, or political reasons for it. Such a shortcut is known as technical debt.
Technical debt can be compared to development without architecture, as described in the DYA architecture method; I posted an article about this some time ago. Technical debt must be "paid back": the chosen shortcut solution is temporary by definition, and the definitive solution – according to the architecture – must eventually be built. This means that besides the extra effort to run the solution in the suboptimal way, money must be reserved to build the definitive solution. Preferably, building the definitive solution should start immediately – in parallel with the shortcut solution.
I think technical debt can very well be extended to include infrastructure technical debt, or solution architecture technical debt for that matter. Just like technical debt can occur in software, it can be introduced in infrastructures as well. Such technical debt is introduced by creating an infrastructure solution that deviates from the defined infrastructure architecture. This deviation can lead to additional cost, for instance in terms of increased systems management effort, replacing hardware before the end of its economic life span, or temporarily using additional software licenses.
Nobody likes technical debt. It is a temporary solution to a problem that needs to be solved anyway. The only real reason for creating technical debt is time constraints. If enough time is available, it is always best to create the definitive solution right away. But because of time constraints, extra money is spent on quickly creating a temporary solution and redoing it later. Stakeholders must understand that by creating technical debt, they are actually paying extra for their impatience.
This entry was posted on Saturday 28 June 2014