What are concurrent users?

I found that there is no clear definition of what "concurrent users" means, and therefore no obvious way to determine how many concurrent users a system must support.

When a system is used by a large group of users, not all users are active all the time. For instance, if your organization has 10,000 employees, not everyone is at the office working every day; people take holidays or may be sick. And those who are at the office are not behind their desks all the time: they can be in meetings, standing at the coffee machine, etc. And even when they are at their desk using the system, they are not continuously generating load on the system’s back-end. For instance, when they read an article fetched from the internet, only fetching the document puts load on the system; the time spent reading the text does not.

Consider the following example.

  • Total number of employees: 10,000
  • Only 80% are at the office: 8,000
  • 70% of their time is spent at their desk: 5,600
  • At their desk, they use the system 70% of the time: 3,920
  • In that time, they only perform actions that put load on the infrastructure for 5% of the time: 196

This means that during the day, on average, only 196 of the 10,000 employees are actively using the infrastructure at any given moment.

As an alternative, we can use the ratio between usage of the system and “thinking time”. In our example, only 196 of the 10,000 users are active at any given moment, so the percentage of thinking time is 100% - (196 / 10,000) × 100% ≈ 98%.
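
To make the arithmetic explicit, here is a minimal sketch of the calculation in Python. The percentages are the example figures above; in a real estimate they would come from observing the organization.

    # Estimate of concurrently active users, using the example figures above
    total_employees = 10_000
    at_office = 0.80           # fraction of employees at the office
    at_desk = 0.70             # fraction of office time spent at the desk
    using_system = 0.70        # fraction of desk time spent using the system
    active_on_backend = 0.05   # fraction of that time that actually loads the infrastructure

    concurrent_users = (total_employees * at_office * at_desk
                        * using_system * active_on_backend)
    thinking_time = 1 - concurrent_users / total_employees

    print(f"Concurrently active users: {concurrent_users:.0f}")  # 196
    print(f"Thinking time: {thinking_time:.1%}")                 # 98.0%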

A further breakdown could show how the system is used:

  • Load file: open a file in an office application (like Excel or Word)
  • Save file: save a document from an office application
  • Browse files: open a file explorer
  • Send HTTP request: push a button in a browser-based application, leading to sending data, or use AJAX calls
  • Receive HTTP data: receive data from a web server when using a browser-based application, or use AJAX calls
  • Send data to the internet: push a button on an internet page, use AJAX calls, or send data using protocols like FTP
  • Receive data from the internet: receive a web page from the internet, use AJAX calls, or receive data using protocols like FTP
  • Send email/calendar: send a typed email to the email server, or update calendar items
  • Receive email/calendar updates: receive new emails from the email server
  • Send VDI/SBC data: in an SBC or VDI environment, send keyboard and mouse input to the server
  • Receive VDI/SBC data: in an SBC or VDI environment, receive screen output from the server
  • Send and receive DNS data: use DNS to resolve IP addresses
  • Send and receive AD data: use AD to handle login/logout or to check credentials
  • Other: other uses of the infrastructure

Using such a categorization, the actual load on the infrastructure can be calculated, provided we know how the system is set up, how each action relates to a certain load, and what a typical user’s behavior is. Not all users are alike. By observing groups of people, their typical behavior can be mapped to the defined categories over time. For instance, a group called “secretaries” will typically:

  • Open 25 existing Word documents
  • Save 40 Word documents (including new documents)
  • Send 25 emails
  • Receive 25 emails

Based on these numbers, and with insight into the setup of the system, the actual load on the various parts of the infrastructure can be calculated. This calculation can then be used to shape performance tests.
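
As an illustration, the sketch below shows how such a calculation could look in Python. The per-action data volumes, the observation period, and the group size are invented placeholders; in practice these numbers come from measurements of the real system and from observing the user groups.

    # Hypothetical load estimate for one user group ("secretaries").
    actions_per_user = {        # action counts per user over the observation period (from the example above)
        "load_file": 25,
        "save_file": 40,
        "send_email": 25,
        "receive_email": 25,
    }
    kilobytes_per_action = {    # assumed average data volume per action (placeholders)
        "load_file": 500,
        "save_file": 500,
        "send_email": 50,
        "receive_email": 50,
    }
    group_size = 150            # assumed number of users in the group (placeholder)

    total_kb = sum(count * kilobytes_per_action[action] * group_size
                   for action, count in actions_per_user.items())
    print(f"Estimated data volume for this group: {total_kb / 1024:.0f} MB")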


This entry was posted on Friday 23 January 2015

Performance and availability monitoring in levels

The availability of an IT component can be derived from measuring (monitoring) the performance of that component. If the performance falls below a certain threshold, the IT component is reported as unavailable.

Monitoring IT systems can be done using a variety of tools. Vendors like IBM, HP, BMC and others provide tools to:

  • Measure performance
  • Capture logging
  • Generate alarms based on thresholds
  • Report the collected data in dashboards or other overviews

Typically, the number of measuring points in an IT landscape is quite overwhelming. When installed out of the box, monitoring tools will typically detect many issues per second, leading to many false alarms. Therefore, it is essential to tune the monitoring system to only generate useful alarms and to create reports containing useful information for specific stakeholders.

Performance measurement (and, derived from it, availability detection) can be done at multiple levels:

  • Business process level
  • Application component level
  • Infrastructure component level

It is important to have separate performance measurements at all three levels and to have processes in place to solve issues at each individual level.

For the end user of the system, only the business process level matters – as soon as the performance at this level is too low, end users are in trouble. Therefore, the business process level should be measured. Today’s tools are able to measure individual business process steps, either by measuring their normal use or by measuring the effect of generated business actions. For instance, it can be measured how long it takes to print an invoice, or how long a simulated (fake) order takes to be processed in a certain business step.
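
As a sketch of the second approach, the Python snippet below times a generated (fake) business action against a threshold. The endpoint, payload and threshold are assumptions for illustration only, not part of any specific tool.

    # Synthetic business transaction: submit a simulated order to a
    # hypothetical test endpoint and measure how long the step takes.
    import time
    import urllib.request

    ORDER_URL = "https://example.internal/orders/simulate"   # hypothetical endpoint
    THRESHOLD_SECONDS = 5.0                                   # example threshold

    def measure_fake_order() -> float:
        payload = b'{"customer": "monitoring-probe", "items": []}'
        request = urllib.request.Request(
            ORDER_URL, data=payload,
            headers={"Content-Type": "application/json"})
        start = time.monotonic()
        with urllib.request.urlopen(request, timeout=30):
            pass                                              # response body not needed
        return time.monotonic() - start

    duration = measure_fake_order()
    if duration > THRESHOLD_SECONDS:
        print(f"ALERT: simulated order took {duration:.1f}s "
              f"(threshold {THRESHOLD_SECONDS}s)")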

[Figure: Performance monitoring in levels]

If the performance at the business process level is below the set threshold, the performance of the underlying application component(s) should be verified first. Since every layer is responsible for its own performance, a problem in the application component layer could be causing the performance issue in the business process layer. And the application component layer could have performance issues due to a performance issue in the infrastructure component layer. Therefore, it is important to separate these layers and give systems managers specific responsibilities for a certain layer. Between the layers, Service Level Agreements (SLAs) should be agreed.

If the performance at the business process level is too low and there is no problem in the underlying application components, the cause of the performance issue must be found in the business process layer itself. If it cannot be found there either, there is a mismatch between the layers – a certain business process issue is apparently not detected in the underlying application layer.

Of course, this reasoning is also valid for the relation between the application components layer and the infrastructure component layer.

At the application component level, performance can be measured effectively if the application components contain “hooks” that the monitoring tool can use to verify the performance of a software component. Without these hooks, measuring can only be done at a much coarser granularity. Especially when bespoke software is developed, it is advisable to invest in building these hooks into the software as part of the regular development process. Typical measurements are the number of times (a part of) an application component is used and how long it takes to finish a certain task. Software typically has some hot spots – parts of the code that are used much more frequently than others. By measuring through hooks in the software, these hot spots can be found, monitored, and optimized for performance.
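
A minimal sketch of such a hook in Python is shown below: a decorator that counts how often a function is called and how much time it spends, which is enough to spot hot spots. The monitored function is a hypothetical example.

    # Measurement "hook": count calls and accumulate execution time per function.
    import time
    from collections import defaultdict

    call_counts = defaultdict(int)
    total_seconds = defaultdict(float)

    def monitored(func):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                call_counts[func.__name__] += 1
                total_seconds[func.__name__] += time.perf_counter() - start
        return wrapper

    @monitored
    def generate_invoice(order_id):   # hypothetical application function
        time.sleep(0.01)              # stand-in for real work

    for order_id in range(100):
        generate_invoice(order_id)

    print(call_counts["generate_invoice"], "calls,",
          f"{total_seconds['generate_invoice']:.2f}s total")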

At the infrastructure component level, the performance of each individual component can be measured. Examples are:

  • CPU load
  • Memory usage
  • Network response time
  • Network load
  • Storage response time
  • Storage load

Based on these measurements, low performance, or even unavailability, of a certain component or set of components can be detected.
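
As a minimal sketch, the snippet below checks two of these measurements against thresholds, assuming the third-party psutil library is installed. The thresholds are arbitrary examples.

    # Simple infrastructure-level checks with example thresholds.
    import psutil

    CPU_THRESHOLD = 85.0      # percent
    MEMORY_THRESHOLD = 90.0   # percent

    cpu = psutil.cpu_percent(interval=1)       # CPU load measured over one second
    memory = psutil.virtual_memory().percent   # memory usage percentage

    if cpu > CPU_THRESHOLD:
        print(f"ALERT: CPU load {cpu:.0f}% exceeds threshold of {CPU_THRESHOLD:.0f}%")
    if memory > MEMORY_THRESHOLD:
        print(f"ALERT: memory usage {memory:.0f}% exceeds threshold of {MEMORY_THRESHOLD:.0f}%")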

Systems managers can react to the detection of low performance by addressing the issue at hand. Early detection and resolution of performance issues is essential to avoid performance problems at the higher layers. It keeps systems managers busy, but it reduces the risk that end users experience performance issues.

It is like the people who work hard to keep the trains running on time. If they do their work well, no one will notice…


This entry was posted on Friday 09 January 2015

UX/UI has no business rules

I am involved in a project that designs a browser-based application. The customer has very specific ideas about the UI/UX aspects. As Forrester states: “Ultimately any service must meet the customer’s needs, be easy to use and enjoyable – the three facets of customer experience.”

UI (User Interface) and UX (User Experience) are related concepts. In practice the difference is fuzzy, to say the least. In theory, UI is about the buttons, drop-down lists, colors, and other elements of a user interface, while UX is about the whole user experience (like how easily one can switch between tasks, or how responsive a user interface is to user actions). UX is mostly about the joy of using a user interface.

I found that it can be hard to define the demarcation point between the UI/UX part and the rest of the application. In older applications, there was no real border between the two. But splitting the UI/UX layer from the rest of the application components is very beneficial.

The main rule is that the UI/UX layer should not contain any business logic; all business logic must be implemented in the underlying application. This allows using multiple user interfaces with the same application: for instance a modern browser, but also older browser versions, mobile apps on tablets or phones, an interface for disabled people, or whatever comes next.

The user interface should only help users to get a better experience. In theory, a text-based interface (green screen) should suffice to operate the application via its API services. The UI/UX layer is just helpful for humans, but does not improve or extend the business functions of the application.

For instance, the user interface could have a drop-down list to select a value. In theory, such a value could also be entered manually with the same functionality (but it could lead to errors when a value is mistyped). Therefore, drop-down lists are part of the UI/UX layer. When the user interface is ported to a mobile app, this app could use another interface to select a value. In all cases, the application and its API service remain unchanged. This means that UI/UX is all about non-functional requirements, not about application or business functionalities.

In general, the release cycle of user interfaces like mobile apps is much faster than that of functional management or software development. Where software development to create new functions typically takes months to get into production, and changes at the functional management layer could take weeks, changes to the UI/UX layer can be done in days. This allows frequent updates of the user interface and a continuous improvement of the user experience.

But to allow a new version of a user interface to work seamlessly with the underlying application, it is essential that the app contains no business logic. Instead, it should use the API services of the underlying application(s) on the server. This allows, for instance, changing one user interface to benefit from the latest browser enhancements, while still providing an unaltered user interface to users of older browsers, without changing anything in the application or its API services.

The question is how to handle functionalities like spell checkers or auto-fill fields like Google uses. Strictly speaking, these functionalities only help the user to get a better experience. For instance, most applications would still work when words are mistyped in a text editor. The spell checker only helps the user and enhances the user experience. Therefore, a spell checker is part of the UI/UX. But to have a functional spell checker, calls to a server are needed (it makes little sense to load a full dictionary into the user interface of a web browser; the typed words are typically checked against a dictionary on a server using an API service). Therefore, an API service is needed, which is a server-side application component. As this type of UI/UX functionality is typically used in a number of applications, in most cases one generic service is used by the UI/UX layers of multiple applications. But architecturally, it is still a UI/UX component, as it has no business function.
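
To illustrate how thin such a UI/UX helper can be, here is a hypothetical sketch in Python: the client only forwards the typed words to a generic spell-check API service and returns the result. The endpoint and response format are assumptions.

    # Hypothetical UI-layer helper that calls a generic spell-check service.
    import json
    import urllib.request

    SPELLCHECK_URL = "https://example.internal/api/spellcheck"  # hypothetical service

    def check_spelling(words):
        payload = json.dumps({"words": words}).encode("utf-8")
        request = urllib.request.Request(
            SPELLCHECK_URL, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request, timeout=10) as response:
            return json.loads(response.read())  # e.g. {"misspelled": ["architechture"]}

    # The application still works if this call is skipped; spell checking only
    # improves the user experience, so it belongs to the UI/UX layer.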


This entry was posted on Friday 18 July 2014

Technical debt: a time related issue

According to Wikipedia, Technical Debt refers to the eventual consequences of poor software architecture and software development within a codebase. The debt can be thought of as work that needs to be done before a particular job can be considered complete. If the debt is not repaid, then it will keep on accumulating interest, making it hard to implement changes later on. Unaddressed technical debt increases software entropy.

So, technical debt means creating a solution that does not comply with the architecture or detailed design. The reason is usually a lack of time. In some circumstances, shortcuts are needed in order to deliver a project on time. This is not necessarily bad – there can be good economic, commercial or political reasons for it. Such a shortcut is known as technical debt. Technical debt can be compared to development without architecture, as described in the DYA architecture method; I posted an article about this some time ago.

Technical debt must be “paid back”: the chosen shortcut solution is temporary by definition, and the definitive solution – the one according to the architecture – must eventually be built. This means that besides the extra effort to run the solution in this suboptimal way, money must be reserved to build the definitive solution. Preferably, building the definitive solution should start immediately, in parallel with creating the technical debt.

I think the concept of technical debt can very well be extended to include infrastructure technical debt, or solution architecture technical debt for that matter. Just like technical debt can occur in software, it can be introduced in infrastructures as well. Such technical debt is introduced by creating an infrastructure solution that deviates from the defined infrastructure architecture. This deviation can lead to additional cost, for instance in terms of increased systems management effort, replacing hardware before the end of its economic life span, or temporarily using additional software licenses.

Nobody likes technical debt. It is a temporary solution to a problem that needs to be solved anyway. The only real reason for creating technical debt is time constraints. If enough time is available, it is always best to create the definitive solution right away. But because of time constraints, extra money is spent to create a temporary solution quickly and redo it later. Stakeholders must understand that by creating technical debt, they are actually paying extra for their impatience.

This entry was posted on Saturday 28 June 2014

Solution shaping workshops

[Figure: Solution shaping workshops]

When architecting a new IT system, at some point fundamental decisions need to be made. For instance about the structure of the system and its components, about the usage of some commercial product, or about the integration of components. These decisions are architectural, as they are fundamental – they cannot be changed easily afterwards.

In a typical project, architects, designers and other stakeholders all have their own view on the system that is to be built. They create their own views and sketches, have their own mental map, talk to each other (but not to everyone, let alone the lead architect) and make up their mind on how things should be solved. However, not everyone has the same level of knowledge of all relevant topics, and few – if any – are able to foresee the consequences of their proposed solutions.

Various architectural methods define some point “where the magic happens”. Some methods are quite vague about it (come up with the best fitting solution), while others try to organize it so heavily that it becomes impractical (create extensive lists of alternatives, use a repository of architecture building blocks, etc.).

But in practice, the magic happens in a discussion by a small group of architects. Brainstorms are held, solutions are found, and the most fundamental views are created. I call it the Solution Shaping Workshop.

Now, don’t confuse a Solution Shaping Workshop with a traditional workshop, as in for instance the MetaPlan Method or a Brown Paper session. A Solution Shaping Workshop can be planned, but can also happen spontaneously – for instance as a result of an informal discussion between architects at the coffee machine.

A Solution Shaping Workshop has the following characteristics:

  • No workshop mediator, no system
  • Small group of people (2 to 4 members)
  • Availability of whiteboards or flip-overs to make drawings
  • Relaxed environment
  • Focus on one problem
  • Everyone is free to brainstorm, ideas are welcome
  • One hour max

In a Solution Shaping Workshop one architectural concern is addressed (for instance: “How does our system interface with the current system?”). In the workshop one member of the team explains the problem and provides his first line of thinking. Then a free-format brainstorm is done, where all members are expected to provide input. In practice, quick sketches are made on whiteboards or flip-over pages that help the creative process. When everyone agrees about the outcome of the creative process – the drawn picture – the workshop ends. When no solution can be agreed upon in one hour, the Solution Shaping Workshop should end; apparently the team is not ready to make a decision on the subject yet. In such a case, actions should be agreed for a follow-up (for instance, it could be agreed to get more information from other stakeholders, or to put the issue on the architectural concerns list).

Be sure to take pictures of the whiteboards (using your phone) before the meeting ends, or to take the flip-over sheets with you. One person (typically the one who started the Solution Shaping Workshop) transforms the drawings into a proposal for an architectural decision. Such an architectural decision typically states:

  • The definition of the architectural concern that is addressed (what problem are we solving)
  • The proposed solution as agreed upon in the Solution Shaping Workshop (including drawings and clarifying text)
  • The pros and cons of the proposed solution
  • The alternatives that were discussed and the reason they are not chosen
  • The impact and implications of the decision

This architectural decision (typically a few pages in length) is then sent to the members of the Solution Shaping Workshop to be peer-reviewed. After this review, the decision typically needs to be approved by some project board or design authority.

When the solution is approved, it is very important to share the solution with all relevant stakeholders, to avoid new discussions about possible solutions at a later stage. Preferably, this is done by presenting the solution to the group. In such a setup, it is most helpful not to show the created drawings on a projector, but to sketch them again on a whiteboard. This way people can understand how the solution is built up, step by step. While sketching the solution, the group builds up a mental map of the solution. Questions can be answered as soon as they arise, during the buildup of the sketch.

If the picture were shown to them in full, they could get confused, be overwhelmed by detail, or question some detail instead of grasping the total solution first.

Be prepared to answer questions from the stakeholders, as they might have their own view on the subject (and sometimes even a worked-out solution in some form). Try to avoid discussions about new alternatives. By getting approval before presenting the solution to a larger group, the decision is not up for discussion, but a fact.


This entry was posted on Saturday 07 June 2014

