"When NetQoS was founded in 1999, traffic over the WAN was increasing in volume and complexity, leading to growing
application performance issues. However, most approaches to network management still focused on device availability
and fault management, not performance."
Source: NetQoS
The 2008 Handbook of Application Delivery
Application Delivery is also known as:
- Network Application Optimization
- WAN Application Optimization
- Application Delivery Network
- Application Delivery Solution
- Application Delivery Software
A Guide to Decision Making
- Executive Summary
- The Applications Environment
- Network and Application Optimization
- Managed Service Providers
- The Changing Network Management Function
The information presented and opinions expressed in this IT Innovation Report
represent the current opinions of the author(s) based on professional
judgment and best available information at the time of the presentation.
Consequently, the information is subject to change, and no liability for
advice presented is assumed. Ultimate responsibility for choice of
appropriate solutions remains with the reader.
We are just ending the first phase of a fundamental
transformation of the IT organization. At the beginning of this
transformation, virtually all IT organizations were comprised of myriad stove
piped functions; e.g., devices, networks, servers, storage, databases,
security, operating systems. A major component of the transformation is that
leading edge IT organizations are now creating an environment that is
characterized by the realization that IT is comprised of just two functions,
application development and application delivery, and that these functions
must work in an integrated fashion in order for the IT organization to
ensure acceptable application performance. This view of IT affects everything
, including the organizational structure, the management metrics, the
requisite processes, technologies and tools. One of the primary goals of
this handbook is to help IT organizations plan for that transformation.
As described in the handbook, the activities that comprise a successful
application delivery function are planning, optimizing, managing and
controlling application performance. Each of these activities is challenging
today and will become more challenging over the next few years. As
described in Chapters 2 and 3, part of the increased challenge will come from
the deployment of new application development paradigms such as SOA (Services
Oriented Architecture), Rich Internet Architecture and Web 2.0. Also
adding to the difficulty of ensuring acceptable application performance is
the increased management complexity associated with the burgeoning deployment
of the virtualization of IT resources (i.e., desktops, servers, storage,
applications), the growing impact of wireless communications, the need to
provide increasing levels of security as well as emerging trends such as
storage optimization.
Chapter 4 of this handbook discusses planning. As that
chapter points out, in most companies the focus of application development is
on ensuring that applications are developed on time, on budget, and with few
security vulnerabilities. That narrow focus combined with the fact that
application development has historically been done over a high-speed,
low-latency LAN, means that the impact of the WAN on the performance of the
application is generally not known until after the application is fully
developed and deployed. In addition, most IT organizations do not know the
impact that a major change, such as consolidating data centers, will have
until after the initiative is fully implemented. As a result, IT
organizations are left to react to application and infrastructure issues
typically only after they have impacted the user. Chapter 4 discusses
techniques such as WAN emulation, baselining and predeployment assessments
that IT organizations can use to identify and eliminate issues prior to their
impacting users and identifies criteria that IT organizations can use to
choose appropriate tools.
Chapter 5 discusses two classes of network and
application optimization solutions. One class focuses on the negative
effect of the WAN on application performance. This category is referred to
alternatively as a WAN optimization controller (WOC) or a Branch Office
Optimization Solution. Branch Office Optimization Solutions are often
referred to as symmetric solutions because they typically require an
appliance in both the data center as well as the branch office. Some vendors,
however, have implemented solutions that call for an appliance in the data
center but, instead of requiring an appliance in the branch office, require
only software on the user's computer. This class of solution is often
referred to as a software only solution and is most appropriate for
individual users or small offices. Chapter 5 contains an extensive set of
criteria that IT organizations can use to choose a Branch Office Optimization Solution.
The second class of solution discussed in Chapter 5
is often referred to as an Application Front End (AFE)
or Application Delivery
Controller (ADC). This solution is typically referred to as an
asymmetric solution because an appliance is only required in the data center
and not the branch office. The primary role of the AFE is to offload
computationally intensive tasks, such as the processing of SSL traffic, from
a server farm. Chapter 5 also contains an extensive set of criteria that IT
organizations can use to choose an AFE.
Today most IT organizations that
have deployed a network and application optimization solution have done so in
a do-it-yourself (DIY) fashion. Chapter 6 describes another alternative: the
use of a managed service provider (MSP) for application delivery services. MSPs are not new. For example, in the early to mid 1990s, many IT
organizations began to acquire managed frame relay services from an MSP as
an alternative to building and managing a frame relay network themselves. In
most cases, the IT organization was quite capable of building and managing
the frame relay network, but chose not to do so in order to focus its
attention on other activities or to reduce cost. Part of the appeal of using
an MSP for application delivery is that in many instances MSPs have expertise
across all of the components of application delivery (planning, optimization,
management and control) that the IT organization does not possess. As a
result, these MSPs can provide functionality that the IT organization on its
own could not provide. As is described in Chapter 6, there are two distinct
classes of application delivery MSPs that differ primarily in terms of how
they approach the optimization component of application delivery. One class
of application delivery MSP provides site-based services that are similar to
the current DIY approach used by most IT organizations. The other class of
application delivery MSP adds intelligence to the Internet to allow it to
support production applications. Chapter 6 contains an extensive set of
criteria that IT organizations can use to determine if one of these services
would add value.
As part of the research that went into the creation of
this handbook, the CIO of a government organization was interviewed. He
stated that in his organization it is common to have the end user notice
application degradation before the IT function does and that this results in
IT looking like "a bunch of bumbling idiots." Chapter 7 discusses this
issue as well as some of the organizational issues that impact successful
application delivery, including the lack of effective processes as well as
the adversarial relationship that often exists between the application
development organization and the network organization. Chapter 7 also
discusses the fact that most IT organizations are blind to the growing number
of applications that use port 80 and describes a number of management
techniques that IT organizations can use to avoid the "bumbling idiot
syndrome". These techniques include discovery, end-to-end visibility, network
analytics and route analytics. The chapter identifies criteria that IT
organizations can use to choose appropriate solutions and also includes some
specific suggestions for how IT organizations can manage VoIP.
Chapter 8 examines the attempt on the part of many Network Operations Centers (NOCs) to
improve their processes, and highlights the shift that most NOCs are
taking from where they focus almost exclusively on the availability of
networks to where they are beginning to also focus on the performance of
networks and applications. Included in the chapter is a discussion of the
factors that are driving the NOC to change as well as the factors that are
inhibiting the NOC from being able to change. Chapter 8 details how the
approach that most IT organizations take to reducing the mean time to repair
has to be modified now that the NOC is gaining responsibility for application
performance and the chapter also examines the myriad techniques that IT
organizations use to justify an investment in performance management. Chapter
8 concludes with the observation that, given where NOC personnel spend
their time, the NOC should be renamed the Application Operations Center (AOC).
Chapter 9 examines the type of control functionality that IT
organizations should implement in order to ensure acceptable application
performance. This includes route control as a way to impact the path that
traffic takes as it transits an IP network. The chapter also describes a
process for implementing QoS and summarizes the status of current QoS
deployments. Chapter 9 makes the assertion that firewalls are typically
placed at a point where all WAN access for a given site coalesces and that
this is the logical place for a policy and security control point for the
WAN. Unfortunately, because traditional firewalls cannot provide the necessary
security functionality, IT organizations have resorted to implementing myriad
work-arounds. This approach, however, has serious limitations: even after
deploying the work-arounds, the IT organization typically does not see all of
the traffic, and the deployment of multiple security appliances significantly
drives up the operational costs and
complexity. The chapter concludes by identifying criteria that IT
organizations can use to
choose a next generation firewall.
Background and Goal
As recently as a few years ago, few IT organizations were
concerned with application delivery. That has all changed. Application
delivery is now a top of mind topic for virtually all IT organizations. As is
described in this handbook, there are many factors that complicate the task
of ensuring acceptable application performance. These include the lack of
visibility into application performance, the centralization of IT resources,
the decentralization of employees and the complexity associated with the
current generation of n-tier applications.
Some of the IT organizations
that were interviewed for this handbook want to believe that the challenges
associated with application delivery are going away. They want to believe
that application developers will soon start to write more efficient
applications and that bandwidth costs will decrease to the point where they
can afford to throw bandwidth at performance problems. Instead, the challenges
associated with application delivery will increase over the next few years.
That follows in part because as explained in this handbook, the deployment of
new application development paradigms such as SOA (Services Oriented
Architecture), Rich Internet Architecture and Web 2.0 will dramatically
increase the difficulty of ensuring acceptable application performance. It
also follows because of the increasing management complexity associated with
the burgeoning deployment of the virtualization of IT resources (i.e.,
desktops, servers, storage, applications), the growing impact of wireless
communications, the need to provide increasing levels of security as well as
emerging trends such as storage optimization.
Instead of reaching a point
where the challenges associated with application delivery are going away, we
are just ending the first phase of a fundamental transformation of the IT
organization. At the beginning of this transformation, virtually all IT
organizations were comprised of myriad stovepiped functions. By stovepiped
is meant that these functions had few common goals, terminology, tools and
processes. A major component of the transformation is that leading edge IT
organizations are now creating an environment that is characterized by the
realization that:
If you work in IT, you either develop applications or
you deliver applications.
Put another way, leading edge companies are
creating an IT organization that is comprised of two functions: application
development and application delivery. Both of these functions must work
holistically in order to ensure acceptable application performance.
This view of IT affects everything, including the organizational structure, the
management metrics, the requisite processes, technologies and tools. While
the transformation is indeed fundamental, it will not happen quickly. We
have just spent the last few years coming to understand the importance and
difficulty associated with application delivery and deploying a first
generation of tools, typically in a stand-alone, tactical fashion. As we enter
the next phase of application delivery, leading edge IT organizations will
develop plans for how they want to evolve from a stove-piped IT
infrastructure function to an integrated application delivery function.
Senior IT management needs to ensure that their organization evolves to where
it looks at application delivery holistically and not just as an
increasing number of stove-piped functions.
This transformation will not be
easy in part because it crosses myriad organizational boundaries and involves
rapidly changing technologies that have never before been developed by
vendors, nor planned, designed, implemented and managed by IT organizations
in a holistic fashion.
Successful application delivery requires the
integration of tools and processes.
One of the goals of this handbook is
to help IT organizations plan for that transformation; hence the subtitle: A
guide to decision making.
Foreword to the 2008 Edition
This handbook builds
on the 2007 edition of the application delivery handbook. This edition of the
handbook differs from the original version in several ways. First,
information that was contained in the original version that is no longer
relevant was deleted from this edition. Second, information was added to
increase both the breadth and depth of this edition. For example, a
significant amount of new market research is included. In addition, there are
two new chapters in this edition. One of these new chapters, Chapter 6,
discusses the use of various types of managed service providers as a very
viable option that IT organizations can use to better ensure acceptable
application delivery. As is discussed in Chapter 6, one of the advantages of
using a managed service provider is that they often have the skills and
processes that are necessary to bridge the gap that typically exists within
an IT organization between the application development groups and the rest of
the IT function.
Chapter 8 details the evolving network management
function. This includes a discussion of how the NOC, which once focused almost
exclusively on the availability of networks, now often has an additional
focus on the performance of networks and applications. Chapter 8 also
examines how the NOC has to change in order to reduce the mean time to
repair that is associated with application performance issues and details the
myriad ways that IT organizations justify an investment in performance
management.
Other areas that were either added or expanded upon include:
- Impact of Web services on security
- Development of a new generation of firewalls
- Use of WAN emulation to develop better applications and to plan for major changes
- Impact of Web 2.0 on application performance and management
- The criticality of looking deep into the packet for more effective management
- Status of QoS deployment
- Appropriate metrics for VoIP management
- Development of software-based WAN optimization solutions
- Factors that impact the transparency of WAN optimization solutions
- Issues associated with high-speed data replication
- Criteria to evaluate WAN optimization solutions
- Criteria to evaluate application front ends (AFEs)
- Issues associated with port hopping applications
Unfortunately, this is a
lengthy handbook. It does not, however, require linear, cover-to-cover
reading. A reader may start reading this handbook in the middle and use the
references embedded in the text as forward and backward pointers to related
material.
Several techniques were employed to keep the handbook a
reasonable length. For example, the handbook allocates more space to discussing
new topics (such as the impact of Web 2.0) than it does to topics that are
relatively well understood, such as the impact of consolidating servers
out of branch offices and into centralized data centers. Also, the handbook
does not contain a detailed analysis of any technology. To compensate for
this, the handbook includes an extensive bibliography. In addition, the body
of the handbook does not discuss any vendor or any products or services. The
Appendix to the handbook, however, contains material supplied by the majority
of the leading application delivery vendors.
To allow IT organizations to
compare their situation to those of other IT organizations, this handbook
incorporates market research data that has been gathered over the last two
years. The handbook also contains input gathered from interviewing roughly
thirty IT professionals. Most IT professionals cannot be quoted by name or
company in a document like this without their company heavily filtering
their input. To compensate for that limitation, Chapter 12 contains a brief
listing of the people who were interviewed, along with the phrase that is
used in the handbook to refer to them. The sponsors of the handbook provided
input into the areas of this handbook that are related to their company's
products and services. Both the sponsors and the IT professionals also
provided input into the relationship between and among the various components
of the application delivery framework.
Given the breadth and extent of the
input from both IT organizations and leading edge vendors this handbook
represents a broad consensus on a framework that IT organizations can use to
improve application delivery.
Over the last two years, Kubernan
has conducted extensive market research into the challenges associated with
application delivery. One of the most significant results uncovered by that
market research is the dramatic lack of success IT organizations have
relative to managing application performance. In particular, Kubernan
asked 345 IT professionals the following question: "If the performance of
one of your company's key applications is beginning to degrade, who is the
most likely to notice it first: the IT organization or the end user?"
Seventy-three percent of the survey respondents indicated that it was the end
user.
In the vast majority of instances when a key business application is
degrading, the end user, not the IT organization, first notices the
degradation.
The fact that end users notice application degradation prior
to it being noticed by the IT organization is an issue of significant
importance to virtually all senior IT managers. The Government CIO stated
that in his organization the fact that the IT organization does not know when
an application has begun to degrade has led to the perception that IT is
"a bunch of bumbling idiots." He further revealed that this situation has
also fostered an environment in which individual departments have both felt
the need and been allowed to establish their own shadow IT organization.
In situations in which the end user is typically the first to notice
application degradation, IT ends up looking like bumbling idiots.
The current approach to managing application performance reduces the confidence
that the company has in the IT organization.
In addition to performing
market research, Kubernan also provides consulting services. Jim Metzler was
hired by an IT organization that was hosting an application on the east
coast of the United States that users from all over the world accessed. Users
of this application who were located in the Pacific Rim were complaining about
unacceptable application performance. The IT organization wanted Jim to
identify what steps it could take to improve the performance of the
application. Given that the IT organization had little information about the
semantics of the application, the task of determining what it would take to
improve the performance of the application was lengthy and served to
further frustrate the users of the application. (Chapter 7 details what has
to be done to reduce the mean time to repair application performance issues.)
This handbook is being written with that IT organization and others like them
in mind. A goal of this handbook is to help IT organizations develop the
ability to minimize the occurrence of application performance issues and to
both identify and quickly resolve issues when they do occur. To accomplish
that goal, this handbook will develop a framework for application delivery.
It is important to note that most times when the industry uses the phrase
application delivery, this refers to just network and application
optimization. Network and application optimization is important. However,
achieving the goal stated above requires a broader perspective on the factors
that impact the ability of the IT organization to assure acceptable
application performance.
Application delivery is more complex than just
network and application acceleration.
Application delivery needs to have a
top-down approach, with a focus on application performance. With these
factors in mind, the framework this handbook describes is comprised of four
components.
Successful application delivery requires the
integration of planning, network and application optimization, management,
and control.
Some overlap exists in the model as a number of common IT
processes are part of multiple components. This includes processes such as
discovery (what applications are running on the network and how are they
being used), baselining, visibility and reporting.
This section of the handbook will discuss some of the primary
dynamics of the applications environment that impact application delivery. It
is unlikely any IT organization will exhibit all of the dynamics described.
It is also unlikely that an IT organization will not exhibit at least some
of these dynamics.
No product or service in the marketplace provides a
best in class solution for each component of the application delivery
framework. As a result, companies have to carefully match their requirements
to the functionality the alternative solutions provide. IT organizations that
want to be successful with application delivery must understand their current
and emerging application environment.
The preceding statement sounds simple.
However, less than one-quarter of IT organizations claim they have that
understanding.
The Application Development Process
In most situations the
focus of application development is on ensuring that applications are
developed on time, on budget, and with few security vulnerabilities. That
narrow focus combined with the fact that application development has
historically been done over a high-speed, low-latency LAN, means that the
impact of the WAN on the performance of the application is generally not
known until after the application is fully developed and deployed. In the
majority of cases, there is at most a moderate emphasis during the design and
development of an application on how well that application will run over a
WAN. This lack of emphasis on how well an application will run over the
WAN often results in the deployment of chatty applications as shown in Figure
A chatty application requires hundreds of application turns to
complete a transaction. To exemplify the impact of a chatty protocol, assume
that a given transaction requires 200 application turns. Further assume that
the latency on the LAN on which the application was developed was 1
millisecond, but that the round trip delay of the WAN on which the
application will be deployed is 100 milliseconds. For simplicity, the delay
associated with the data transfer will be ignored and only the delay
associated with the application turns will be calculated. In this case, the
delay on the LAN is 200 milliseconds, which is not noticeable. However,
the delay on the WAN is 20 seconds, which is very noticeable.
The preceding example demonstrates the need to be cognizant of the impact of the
WAN on application performance during the application development lifecycle.
In particular, it is important during application development to identify
and eliminate any factor that could have a negative impact on application
performance. This approach is far more effective than trying to implement a
work-around after an application has been fully developed and deployed.
This concept will be expanded upon in Chapter 4.
The preceding example also
demonstrates the relationship between network delay and application delay.
A relatively small increase in network delay can result in a very significant
increase in application delay.
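The turn-count arithmetic from the example above can be sketched as follows; the figures (200 turns, 1 ms LAN round trip, 100 ms WAN round trip) are the ones assumed in the text:

```python
# Estimate the transaction delay contributed by application turns alone,
# ignoring the delay associated with the data transfer itself.

def turn_delay_seconds(app_turns: int, round_trip_seconds: float) -> float:
    """Each application turn costs one round trip across the network."""
    return app_turns * round_trip_seconds

TURNS = 200        # application turns per transaction
LAN_RTT = 0.001    # 1 millisecond on the development LAN
WAN_RTT = 0.100    # 100 milliseconds on the production WAN

lan_delay = turn_delay_seconds(TURNS, LAN_RTT)   # 0.2 seconds: not noticeable
wan_delay = turn_delay_seconds(TURNS, WAN_RTT)   # 20 seconds: very noticeable
print(f"LAN: {lan_delay:.1f} s  WAN: {wan_delay:.1f} s")
```

Because turn delay scales linearly with round-trip time, the 100x increase in latency produces a 100x increase in transaction delay, which is why a modest rise in network delay can translate into a dramatic rise in application response time.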
Taxonomy of Applications
The typical enterprise has tens and often hundreds of applications that transit the WAN.
One way that these applications can be categorized is:
- Business Critical Applications
A company typically runs the bulk of its key business functions
utilizing a handful of applications. A company can develop these applications
internally, buy them from a vendor such as Oracle or SAP, or acquire them
from a software-as-a-service provider such as Salesforce.com.
- Communicative and Collaborative
This includes delay sensitive applications
such as Voice over IP and conferencing, as well as applications that are
less delay sensitive such as email.
- Other Data Applications
This category contains the bulk of a company's data applications. While these
applications do not merit the same attention as the enterprise's business
critical applications, they are important to the successful operation of the
company.
- IT Infrastructure Related Applications
This category contains applications such as DNS and DHCP that are not visible to the end
user, but which are critical to the operation of the IT infrastructure.
- Recreational Applications
This category includes a growing variety of applications
such as Internet radio, YouTube, and streaming news and multimedia.
- Malicious Applications
This includes any application intended
to harm the enterprise by introducing worms, viruses, spyware or other
malware.
Since they make different demands on the network,
another way to classify applications is whether the application is real time,
transactional or data transfer in orientation. For maximum benefit, this
information must be combined with the business criticality of the
application. For example, live Internet radio is real time but in virtually
all cases it is not critical to the organization's success. It is also
important to realize an application such as Citrix Presentation Server or SAP
is comprised of multiple modules with varying characteristics. Thus, it is
not terribly meaningful to say that Citrix Presentation Server traffic is
real time, transactional or data transfer in orientation. What is important
is the ability to recognize application traffic flows for what they are, for
example a Citrix printing flow vs. editing a Word document.
Successful application delivery requires that IT organizations are able to identify the
applications running on the network and are also able to ensure the
acceptable performance of the applications relevant to the business while
controlling or eliminating applications that are not relevant.
In many situations, the traffic flow on the data network
naturally follows a simple hub-and-spoke design. An example of this is a
bank's ATM network where the traffic flows from an ATM to a data center and
back again. This type of network is sometimes referred to as a one-to-many
network.
A number of factors, however, cause the traffic flow in a network
to follow more of a mesh pattern. One factor is the widespread deployment of
Voice over IP (VoIP). VoIP is an example of an application where traffic can
flow between any two sites in the network. This type of network is often
referred to as an any-to-any network. An important relationship exists between
VoIP deployment and MPLS deployment. MPLS is an any-to-any network. As a
result, companies that want to broadly deploy VoIP are likely to move away
from a Frame Relay or an ATM network and to adopt an MPLS network.
Analogously, companies that have already adopted MPLS will find it easier
to justify deploying VoIP.
Another factor affecting traffic flow is that many
organizations require that a remote office have access to multiple data
centers. This type of requirement could exist to enable effective disaster
recovery or because the remote office needs to access applications that
disparate data centers host. This type of network is often referred to as a
some-to-many network.
Every component of an application delivery solution
has to be able to support the company's traffic patterns, whether they are
one-to-many, many-to-many, or some-to-many.
Webification of Applications
The phrase Webification of Applications refers to the growing movement to
implement Web-based user interfaces and to utilize chatty Web-specific
protocols such as HTTP. Similar to the definition of a chatty application, a
protocol is referred to as being chatty if it requires tens if not hundreds
of turns for a single transaction.
In addition, XML is a dense protocol. That
means communications based on XML consume more IT resources than
communications that are not based on XML.
The webification of applications
introduces chatty protocols into the network. In addition, some of these
protocols (e.g., XML) tend to greatly increase the amount of data that
transits the network and is processed by the servers.
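A toy illustration of XML's density, comparing the same record encoded as XML and as a bare delimited string; the field names and values here are invented for the sketch, not taken from any particular protocol:

```python
# Compare the size of one hypothetical record in XML form versus a
# compact delimited form. XML repeats the field name as markup around
# every value, so the same data occupies several times as many bytes.
record = {"account": "12345678", "amount": "250.00", "currency": "USD"}

xml = (
    "<transaction>"
    + "".join(f"<{k}>{v}</{k}>" for k, v in record.items())
    + "</transaction>"
)
compact = "|".join(record.values())

print(len(xml), len(compact))   # the XML form is several times larger
```

The extra bytes must both transit the network and be parsed by the servers, which is the sense in which XML-based communications consume more IT resources.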
As will be discussed
in Chapter 9, the dense nature of XML also creates some security
challenges.
Server Consolidation
Many companies either have already consolidated, or are in the process of
consolidating, servers out of branch offices and
into centralized data centers. This consolidation typically reduces cost and
enables IT organizations to have better control over the company's data.
While server consolidation produces many benefits, it can also produce some
significant performance issues.
Server consolidation typically results in
chatty protocols such as CIFS (Common Internet File System), Exchange or
NFS (Network File System), which were designed to run over the LAN, running
over the WAN. The way that CIFS works is that it decomposes all files into
smaller blocks prior to transmitting them. Assume that a client was
attempting to open up a 20 megabyte file on a remote server. CIFS would
decompose that file into hundreds, or possibly thousands of small data
blocks. The server sends each of these data blocks to the client where it is
verified and an acknowledgement is sent back to the server. The server
must wait for an acknowledgement prior to sending the next data block. As a
result, it can take several seconds for the user to be able to open up the file.
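The block-by-block acknowledgement behavior described above can be modeled roughly as one round trip per block. The block size and round-trip time below are illustrative assumptions for the sketch, not CIFS specifics:

```python
import math

# Rough model: the server sends one block, then waits for the client's
# acknowledgement before sending the next, so each block costs one WAN
# round trip. Transfer (serialization) time is ignored.

def cifs_open_seconds(file_bytes: int, block_bytes: int, rtt_seconds: float) -> float:
    blocks = math.ceil(file_bytes / block_bytes)
    return blocks * rtt_seconds

FILE_SIZE = 20 * 1024 * 1024    # the 20 megabyte file from the example
BLOCK_SIZE = 64 * 1024          # hypothetical 64 KB block size
WAN_RTT = 0.100                 # assumed 100 ms WAN round trip

open_time = cifs_open_seconds(FILE_SIZE, BLOCK_SIZE, WAN_RTT)
print(f"Estimated time to open the file: {open_time:.0f} seconds")
```

Under these assumptions the per-block round trips alone dominate the file-open time, which illustrates why a protocol designed for the LAN behaves so poorly once the round trip grows by two orders of magnitude.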
Data Center Consolidation and Single Hosting
In addition to
consolidating servers out of branch offices and into centralized data centers,
many companies are also reducing the number of data centers they support
worldwide. HP, for example, recently announced it was reducing the number of
data centers it supports from 85 down to six. This increases the distance
between remote users and the applications they need to access. Many
companies are also adopting a single-hosting model whereby users from all
over the globe transit the WAN to access an application that the company
hosts in just one of its data centers.
One of the effects of data center
consolidation and single hosting is that it results in additional WAN
latency for remote users.
Changing Application Delivery Model
The 80/20 rule in place until a few years ago stated that 80% of a company's
employees were in a headquarters facility and accessed applications over a
high-speed, low-latency LAN. The new 80/20 rule states that 80% of a company's
employees access applications over a relatively low-speed, high-latency WAN.
In the vast majority of situations, when people access an application they
are accessing it over the WAN.
Software as a Service
According to Wikipedia, software as a service (SaaS) is a software application delivery
model where a software vendor develops a web-native software application
and hosts and operates (either independently or through a third-party) the
application for use by its customers over the Internet. Customers do not pay
for owning the software itself but rather for using it. They use it through
an API accessible over the Web and often written using Web Services. The term SaaS has become the industry preferred term, generally replacing the earlier
terms Application Service Provider (ASP) and On-Demand.
There are many
challenges associated with SaaS. For example, by definition of SaaS, the user
accesses the application over the Internet and hence incurs all of the
issues associated with the Internet. (See Chapter 6 for a discussion of the
use of managed service providers as a way to mitigate some of the impact of
the Internet.) In addition, since the company that uses the software does
not own the software, it cannot change the software in order to improve its performance.
Dynamic IT Environments
The environment in which
application delivery solutions are implemented is highly dynamic. For example,
companies are continually changing their business processes and IT
organizations are continually changing the network infrastructure. In
addition, companies regularly deploy new applications and updates to existing applications.
To be successful, application delivery solutions must
function in a highly dynamic environment. This drives the need for both the
dynamic setting of parameters and automation.
Fractured IT Organizations
The application delivery function consists of myriad subspecialties such as
devices (e.g., desktops, laptops, point of sale devices), networks, servers,
storage, security, operating systems, etc. The planning and
operations of these sub-specialties are typically not well coordinated
within the application delivery function. In addition, market research
performed in 2006 indicates that typically little coordination exists between
the application delivery function and the application development function.
Only 14% of IT organizations claim to have aligned the application delivery
function with the application development function. Eight percent (8%) of
IT organizations state they plan and holistically fund IT initiatives across
all of the IT disciplines. Twelve percent (12%) of IT organizations state
that troubleshooting IT operational issues occurs cooperatively across
all IT disciplines.
The Industrial CIO described the current fractured, often
defensive approach to application delivery. He has five IT disciplines that
report directly to him. He stated that he is tired of having each of them
explain to him that their component of IT is fine and yet the company
struggles to provide customers an acceptable level of access to their Web
site, book business and ship product. He also said that he and his peers do
not care about the pieces that comprise IT; they care about the business as a whole.
The CYA approach to application delivery focuses on showing that
it is not your fault that the application is performing badly. The goal of
the CIO approach is to rapidly identify and fix the problem.
Companies began deploying mainframe computers in
the late 1960s and mainframes became the dominant style of computing in the
1970s. The applications that were written for the mainframe computers of that
era were monolithic in nature. Monolithic means that the application
performed all of the necessary functions, such as providing the user
interface, the application logic, as well as access to data.
Over time, companies have moved away from deploying monolithic applications and towards
a form of distributed computing that is often referred to as n-tier
applications. Since these tiers are implemented on separate systems, WAN
performance impacts n-tier applications more than monolithic applications.
For example, the typical 3-tier application is comprised of a Web browser, an
application server(s) and a database server(s). The information flow in a
3-tier application is from the Web browser to the application server(s) and
to the database, and then back again over the Internet using standard
protocols such as HTTP or HTTPS.
The movement to a Service-Oriented
Architecture (SOA) based on the use of Web services-based applications
represents the next step in the development of distributed computing.
Just as WAN performance impacts n-tier applications more than monolithic
applications, WAN performance impacts Web services-based applications
significantly more than WAN performance impacts n-tier applications.
To understand why the movement to Web services-based applications will
drastically complicate the task of ensuring acceptable application
performance, consider the 3-tier application architecture that was previously
discussed. In a 3-tier application the application server(s) and the
database server(s) typically reside in the same data center. As a result, the
impact of the WAN is constrained to a single traffic flow, that being the
flow between the user's Web browser and the application server.
In a Web
services-based application, the Web services that comprise the application
typically run on servers that are housed within multiple data centers. As a
result, the WAN impacts multiple traffic flows and hence has a greater
overall impact on the performance of a Web services-based application than it
does on the performance of an n-tier application.
Web Services and Security
The expanding use of Web services creates some new security
challenges. Part of this challenge stems from the fact that in most
instances, the blueprint for Web services communication is outlined in Web
Services Description Language (WSDL) documents. These documents are
intended to serve as a guide to an IT organization's Web services.
Unfortunately, they can also serve to guide security attacks against those
same Web services. Assuming that a hacker has gained access to an organization's
WSDL document, the hacker can then begin to look for vulnerabilities in the
system. For example, by seeing how the system reacts to invalid data that the
hacker has intentionally submitted, the hacker can learn a great deal
about the underlying technology and can use this knowledge to further exploit
the system. If the goal of the hacker is to create a denial of service attack
or degrade application performance, the hacker could exploit the verbose
nature of both XML and SOAP. When a Web services message is received, the
first step the system takes is to read through, or parse, the elements of the
message. As part of parsing the message, parameters are extracted and
content is inserted into databases. The amount of work required by XML
parsing is directly affected by the size of the SOAP message. Because of this,
the hacker could submit excessively large payloads that would consume an
inordinate amount of system resources and hence degrade application performance.
Chapter 9 will discuss some of the limitations of the current
generation of firewalls. One of these limitations is that the current
generation of firewalls is not capable of parsing XML. As such, these
firewalls are blind to XML traffic. As part of providing security for Web
services, IT organizations need to be able to inspect XML and SOAP
messages and make intelligent decisions based on the content of these
messages. For example, IT organizations need to be able to perform anomaly
detection in order to distinguish valid messages from invalid messages. In
addition, IT organizations need to be able to perform signature detection
to detect the signature of known attacks.
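The kind of pre-parse inspection described above can be sketched in a few lines. The limits and function names below are invented for this example; a production XML gateway would be far more thorough:

```python
# Illustrative sketch (limits and names invented for this example): screen an
# XML/SOAP message for excessive size or nesting before fully parsing it, one
# simple defense against the payload-inflation attacks described above.
import io
import xml.etree.ElementTree as ET

MAX_BYTES = 512 * 1024   # reject messages larger than 512 KB
MAX_DEPTH = 20           # reject implausibly deep element nesting

def screen_soap_message(raw: bytes):
    if len(raw) > MAX_BYTES:
        raise ValueError("message exceeds size limit")
    depth = 0
    for event, _elem in ET.iterparse(io.BytesIO(raw), events=("start", "end")):
        if event == "start":
            depth += 1
            if depth > MAX_DEPTH:
                raise ValueError("message exceeds nesting limit")
        else:
            depth -= 1
    return ET.fromstring(raw)  # limits passed; parse the full message

root = screen_soap_message(b"<Envelope><Body><Ping/></Body></Envelope>")
print(root.tag)  # Envelope
```

The design point is simply that cheap checks (size, nesting) run before the expensive work (full parsing and database insertion), so an oversized payload is rejected before it can consume an inordinate amount of resources.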
Defining Web 2.0
As was noted in the preceding section, the movement to a Service-Oriented
Architecture (SOA) based on the use of Web services-based applications is going
to drastically complicate the task of ensuring acceptable application
performance. The same is true for the movement to Web 2.0. In the case of Web 2.0, however, the problem is exacerbated because most IT
organizations are not aware of the performance issues associated with Web 2.0.
Many IT professionals view the phrase Web 2.0 as either just marketing hype that
is devoid of any meaning or they associate it exclusively with social networking
sites such as MySpace.
The Mobile Software CEO emphasized his view that Web 2.0 is "a lot more than
just social networking". He said that the goal of Web 2.0 is to "allow for
greater flexibility for presenting information to the user." The Mobile Software
CEO added that Web 2.0 started with sites such as Google and MySpace and is now
widely used as a way to aggregate websites together more naturally. A key
component of Web 2.0 is that the content is "very dynamic and alive and that as
a result people keep coming back to the website." The concept of an application
that is itself the result of aggregating other applications together has become
so common that a new term, mashup, has been coined to describe it. According to
Wikipedia, a mashup is a web application that combines data from more than one
source into a single integrated tool; a typical example is the use of
cartographic data from Google Maps to add location information to real-estate
data from Craigslist, thereby creating a new and distinct service that was not
originally envisaged by either source.
The Business Intelligence CTO stated that when he thinks about Web 2.0 he
doesn't think about marketing hype. Instead he thinks about the new business
opportunities that are a result of Web 2.0. He said that, "Ten
years ago if somebody was starting a web based business they would need roughly
one million dollars to get their product to beta. Web 2.0 allows someone today
to start up a business for fifty thousand dollars." The Business Intelligence
CTO said that this dramatic change is enabled in part because today businesses
can hire programmers who use application platforms such as ASP.NET to quickly
develop applications that run on low-cost virtual servers, and who
communicate amongst themselves using Skype.
Another industry movement that is often associated with Web 2.0 is the
deployment of Rich Internet Applications (RIA). In a traditional Web application
all processing is done on the server, and a new Web page is downloaded each time
the user clicks. In contrast, an RIA can be viewed as "a cross between Web
applications and traditional desktop applications, transferring some of the
processing to a Web client and keeping (some of) the processing on the
application server." RIAs are created using technologies such as Macromedia
Flash, Flex, AJAX and Microsoft's Silverlight.
A recent publication quotes market research that indicates that by
2010 at least 60 percent of new application development projects will include
RIA technology and that at least 25 percent will rely primarily on RIA
technology. As stated in that publication, "This richer content is increasingly
dynamic in nature, enabling an unprecedented level of interactivity and
personalization. In real time, any consumer-specific information entered into
these applications is passed back to the Web infrastructure to enable interaction, further
personalization, and compelling marketing offers. For instance, consumers can
be presented with geographic- and demographic-specific content, content
that is tailored to preferences they indicate, surveys and contests, and
constantly updated content such as stock quotes, sales promotions, and news
feeds, to name a few."
Kubernan recently presented over 200 IT
professionals with the following question: "Which of the following best
describes your company's approach to using new application architectures such
as Services Oriented Architecture (SOA), Rich Internet Applications (RIA), or
Web 2.0 applications including the use of mashups?" Their responses are
shown in Table 3.1.
||Percentage of Respondents
|Don't use them
|Make modest use of them
|Make significant use of them
|N/A or Don't Know
Table 3.1: Current Use of New Application Architectures
The same group of IT professionals were then asked
to indicate how their company's use of those application architectures would
change over the next year. Their responses are shown in Table 3.2.
||Percentage of Respondents
|No change is expected
|We will reduce our use of these architectures
|We will increase our use of these architectures
|N/A or Don't Know
Table 3.2: Increased Use of New Application Architectures
New application architectures (SOA, RIA, Web 2.0) have already begun to impact
IT organizations, and this impact will increase over the next year.
Quantifying Application Response Time
As noted, Web 2.0 has some unique characteristics.
In addition to a services focus, Web 2.0 characteristics
include featuring content that is dynamic, rich and, in many cases, user generated.
A model is helpful to illustrate the potential performance
bottlenecks in any application environment in general, as well as in a Web
2.0 environment in particular. The following model is a variation of the
application response time model created by Sevcik and Wetzel. Like all
models, the following is only an approximation and as a result it is not
intended to provide results that are accurate to the millisecond level. The
model, however, is intended to provide insight into the key factors that
impact application response time. As shown below, the application response
time (R) is impacted by the amount of data being transmitted (Payload), the WAN
bandwidth, the network round trip time (RTT), the number of application turns
(AppTurns), the number of simultaneous TCP sessions (concurrent requests),
the server side delay (Cs) and the client side delay (Cc).
Application Response Time Model
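Since the model itself appeared as a figure, its exact form is not reproduced here. The sketch below simply combines the factors the text lists, so treat the way the terms combine as an approximation rather than the authors' exact equation:

```python
# A sketch of the application response time model described above. The exact
# formula from the original figure is not reproduced here, so the way the
# terms combine below is an approximation, not the authors' exact equation.

def response_time(payload_bits, bandwidth_bps, rtt_s, app_turns, concurrent, cs, cc):
    transmission = payload_bits / bandwidth_bps    # time to push the payload
    turn_delay = (app_turns * rtt_s) / concurrent  # round trips, spread over sessions
    return transmission + turn_delay + cs + cc     # plus server- and client-side delay

# 1 MB payload, 10 Mbps link, 80 ms RTT, 200 turns, 4 parallel TCP sessions,
# 0.5 s server-side delay, 0.1 s client-side delay (all assumed values):
r = response_time(8_000_000, 10_000_000, 0.080, 200, 4, 0.5, 0.1)
print(f"{r:.1f} s")  # the application turns contribute 4 of the ~5.4 seconds
```

Even this rough form shows why chattiness matters: with 200 application turns, the RTT term dwarfs both the serialization delay and the server-side delay.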
The Branch Office Optimization Solutions that
are described in Chapter 5 were designed primarily to deal with the size
of the payload and the number of application turns. The Application Front
Ends that are described in Chapter 5 were designed primarily to offload
communications processing from servers. They were not designed to offload
any backend processing.
The Web 2.0 Performance Issues
As noted, the
existing network and application optimization solutions were designed to
mitigate the performance impacts of large payloads and multiple application
turns. Microprocessor vendors such as Intel and AMD continually deliver
products that increase the computing power that is available on the desktop.
As a result, these products minimize the delays that are associated with
client processing (Cc). This leaves just one element of the preceding model
that has to be more fully accounted for: server-side delay. This is the
critical performance bottleneck that has to be addressed in order for Web 2.0
applications to perform well.
The existing generation of network and
application optimization solutions does not deal with a key requirement of
Web 2.0 applications: the need to massively scale server performance. The
reason this is so critical is that unlike clients, servers suffer from
scalability issues. In particular, servers have to support multiple users and
each concurrent user consumes some amount of server resources: CPU, memory,
I/O. Chris Loosley highlighted the scalability issues associated with
servers. Loosley pointed out that an activity such as catalog browsing is a
relatively fast and efficient activity that does not consume a lot of server
resources. He contrasted that to an activity that required the server to
update something, such as clicking a button to add an item to a shopping
cart. He pointed out that activities such as updating consume significant
server resources, and so the number of concurrent transactions, server
interactions that update a customer's stored information, plays a critical
role in determining server performance.
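Loosley's observation can be illustrated with Little's Law, which says the number of transactions a server holds in flight equals arrival rate times service time. The service times below are invented for the example, not taken from Loosley's work:

```python
# Illustrating the point above with Little's Law: the number of transactions a
# server holds in flight equals arrival rate times service time. The service
# times below are invented for the example, not taken from Loosley's work.

def concurrent_load(arrivals_per_sec, service_time_sec):
    return arrivals_per_sec * service_time_sec  # Little's Law: L = lambda * W

# 100 requests/second of each kind of activity:
browsing = concurrent_load(100, 0.05)  # catalog browsing, ~50 ms of server work
updating = concurrent_load(100, 0.60)  # a cart update, ~600 ms of server work
print(browsing, updating)
```

At the same request rate, the update workload holds twelve times as many transactions open, which is exactly why concurrent updates, rather than raw page views, determine server performance.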
The Mobile Software CEO addressed the
issue of scalability when he stated that there is no better application
framework than ASP.NET, but that ASP.NET does make it very easy to develop
applications that do not perform well. As The Mobile Software CEO sees it, IT
organizations need to answer the question of "How will we scale Web 2.0
applications that have a rich amount of information from a dynamic database?"
He said that a big part of the issue is that because of the dynamic content
that is associated with Web 2.0 applications, "caching is not caching; it
is different for every single application that you work with". As a result,
IT organizations need to answer questions such as: "When can I cache that
data?" and "How do I keep that cache up to date?" He added that the best
way to solve the Web 2.0 performance problems is to deploy intelligent tools.
The Business Intelligence CTO pointed out that the most important server side
issue associated with traditional applications was providing page views;
while with Web 2.0 applications it is supporting API calls. He emphasized
that "You can not scale a Web site just by throwing servers at it. That
buys you time, but it does not solve the problem." His recommendation was
that IT organizations should make relatively modest investments in servers
and make larger investments in tools to accelerate the performance of their applications.
The classic novel Alice in
Wonderland by the English mathematician Lewis Carroll first explained part of
the need for the planning component of the application delivery framework.
In that novel Alice asked the Cheshire cat, "Which way should I go?" The cat
replied, "Where do you want to get to?" Alice responded, "I don't know," to
which the cat said, "Then it doesn't much matter which way you go."
Relative to application performance, most IT organizations are somewhat vague
on where they want to go. In particular, only 38% of IT organizations have
established well-understood performance objectives for their company's key applications.
It is extremely difficult to make effective
network and application design decisions if the IT organization does not
have targets for application performance that are well understood and agreed upon.
One primary factor driving the planning component of
application delivery is the need for risk mitigation. One manifestation of
this factor is the situation in which a company's application development
function has spent millions of dollars to either develop or acquire a highly
visible, business critical application. The application delivery function
must take the proactive steps this section will describe in order to protect
both the company's investment in the application as well as the political
capital of the application delivery function.
Hope is not a strategy.
Successful application delivery requires careful planning, coupled with
extensive measurements and effective proactive and reactive processes.
Many planning functions are critical to the success of
application delivery. They include the ability to:
- Profile an application
prior to deploying it, including running it in conjunction with a WAN
emulator to replicate the performance experienced in branch offices.
- Baseline the performance of the network.
- Perform a pre-deployment assessment
of the IT infrastructure.
- Establish goals for the performance of the
network and for at least some of the key applications that transit the network.
- Model the impact of deploying a new application.
- Identify the
impact of a change to the network, the servers, or to an application.
- Create a network design that maximizes availability and minimizes latency.
- Create a data center architecture that maximizes the performance of all of
the resources in the data center.
- Choose appropriate network technologies
- Determine what functionality to perform internally and
what functionality to acquire from a third party. This topic will be expanded
upon in Chapter 6.
Chapter 3 outlined some of the factors
that increase the difficulty of ensuring acceptable application performance.
One of these factors is the fact that in the vast majority of situations, the
application development process does not take into account how well the
application will run over a WAN.
One class of tool that can be used to
test and profile application performance throughout the application lifecycle
is a WAN emulator. These tools are used during application development and
quality assurance (QA) and serve to mimic the performance characteristics of
the WAN; e.g., delay, jitter, packet loss. One of the primary benefits of
these tools is that application developers and QA engineers can use them to
quantify the impact of the WAN on the performance of the application under
development, ideally while there is still time to modify the application.
One of the secondary benefits of using WAN emulation tools is that over time
the application development groups come to understand how to write
applications that perform well over the WAN.
Table 4.1, for example,
depicts the results of a lab test that was done using a WAN emulator to
quantify the effect that WAN latency would have on an inquiry-response
application that has a target response time of 5 seconds. Similar tests can
be run to quantify the effect that jitter and packet loss have on an application.
||Measured Response Time
Table 4.1: Impact of Latency on Application Response Time
As Table 4.1 shows, if there is no WAN latency the application
has a two-second response time. This two-second response time is well within
the target response time and most likely represents the time spent in the
application server or the database server. As network latency is increased
up to 75 ms, it has little impact on the application's response time. If
network latency is increased above 75 ms, the response time of the
application increases rapidly and is quickly well above the target response time.
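This knee-shaped behavior is what a simple turns-based calculation predicts. In the sketch below, the 2-second zero-latency response time and the 5-second target come from the text, while the number of application turns is an assumed value chosen for illustration:

```python
# Reproducing the qualitative behavior described in Table 4.1 with a simple
# turns-based calculation. The 2-second zero-latency response time and the
# 5-second target come from the text; the number of application turns is an
# assumed value chosen for illustration.

BASE_SECONDS = 2.0  # server/database time with no WAN latency (from the text)
APP_TURNS = 40      # assumed request/response exchanges per transaction
TARGET = 5.0        # target response time (from the text)

def response(latency_ms):
    # each application turn pays one round trip of WAN latency
    return BASE_SECONDS + APP_TURNS * (latency_ms / 1000.0)

for ms in (0, 25, 75, 100, 150):
    status = "over target" if response(ms) > TARGET else "within target"
    print(f"{ms:>3} ms latency -> {response(ms):.1f} s ({status})")
```

Because every millisecond of latency is multiplied by the number of turns, a modest latency increase pushes the application past its target quickly, which is the behavior the lab test measured.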
Over 200 IT professionals were recently asked "Which of the
following describes your company's interest in a tool that can be used to
test application performance throughout the application lifecycle, from
application design through ongoing management?" The survey respondents
were allowed to indicate multiple answers. Their responses are depicted in Table 4.2.
||Percentage of Respondents
|If the tool worked
well it would make a significant improvement to our ability to manage
|The output of tools like this is generally
not that helpful
|Tools like this tend to be too difficult to use,
particularly during application development
|Our application developers would be resistant to using such a tool
|Our operations groups lack the application specific skills to use a
tool like this
Table 4.2: Interest in an Application Lifecycle Management Tool
A conclusion that can be drawn from Table 4.2 is:
The vast majority of IT
organizations see significant value from a tool that can be used to test
application performance throughout the application lifecycle. The
application development process is just one of the factors that Chapter 3
identified that increase the difficulty of ensuring acceptable application
performance. Other factors include the consolidation of IT resources and
the deployment of demanding applications such as VoIP.
IT organizations will
not be regarded as successful if they do not have the capability to both
develop applications that run well over the WAN and to also plan for changes
such as data center consolidation and the deployment of VoIP.
This is because, as previously stated, hope is not a strategy. IT organizations need
to be able to first anticipate the issues that will arise as a result of a
major change and then take steps to mitigate the impact of those issues.
Whenever an IT organization is considering implementing a tool of this type
it is important to realize that the ultimate goal of these tools is to
provide insight and not an undue level of precision. In particular, IT
environments are complex and dynamic. As a result, it can be extremely
difficult and laborious to have the tool accurately represent every aspect of
the IT environment. In addition, even if the tool could accurately represent
every aspect of the IT environment at some point in time, that environment
will change almost immediately and that representation would no longer be accurate.
Given the complex and dynamic nature of the IT environment, a valid use of a
WAN emulation tool is to provide insight into what happens if WAN delay
increases from 70 ms to 100 ms. For example, would that increase the
application delay by a second? By two seconds? By five seconds? It is reasonable
to demand that the WAN emulation tool provide accurate insight. For example, it
is reasonable to demand that if the tool indicates that a 30 ms. increase in WAN
delay results in a 2 second increase in application delay, that indeed that is
correct. It is not reasonable, however, to expect that the tool would be able to
determine whether a 30 ms. increase in WAN delay would increase application
delay by 4.85 seconds vs. increasing it by 4.90 seconds.
One of the reasons why IT organizations should not expect an undue level of
precision from a WAN emulation tool has already been discussed: the complex and
dynamic nature of the IT environment. Another reason is the inherent nature of
any modeling or simulation tool. One of the key characteristics of these tools
is that they typically contain a slippery slope of complexity. That is, when
creating a simulation tool, a great deal of insight can be provided without
having the tool be unduly complex. The 80/20 rule applies here: 80% of the
insight can be provided while only incurring 20% of the complexity. Adding
further insight, however, requires the tool to become very complex and
typically requires a level of granular input that either does not exist or is
incredibly time-consuming to create.
The data in Table 4.2 indicates that IT professionals are well aware of the
fact that many of these tools are unacceptably complex. In particular, while the
survey respondents indicated a strong interest in these tools, thirty percent of
the survey respondents indicated either that tools like this tend to be
difficult or that their operations group would not have the skills necessary to
use a tool like this.
In the vast majority of cases, a tool that is unduly complex is of no use to
an IT organization.
The preceding discussion of using a WAN emulator to either develop more
efficient applications or to quantify the impact of a change such as a data
center initiative is a proactive use of the tool. In many cases, IT
organizations profile an application in a reactive fashion. That means the
organization profiled the application only after users complained about its performance.
Alternatively, some IT organizations only profile an application shortly
before they deploy it. The advantages of this approach are that it helps the IT organization:
- Identify minor changes that can be made to the application that will
improve its performance.
- Determine if some form of optimization technology will improve the
performance of the application.
- Identify the sensitivity of the application to parameters such as WAN
latency and use this information to set effective thresholds.
- Gather information on the performance of the application that can be
used to set the expectations of the users.
- Learn about the factors that influence how well an application will run
over a WAN.
Since companies perform these tests just before they put the application into
production, this is usually too late to make any major change.
The application delivery function needs to be involved early in the
applications development cycle.
The Automotive Network Engineer provided insight into the limitations of
testing an application just prior to deployment. He stated that relative to
testing applications just prior to putting them into production, "We are
required to go through a lot of hoops." He went on to say that sometimes the
testing was helpful, but that if the application development organization was
under a lot of management pressure to get the application into production, that
the application development organization often took the approach of deploying
the application and then dealing with the performance problems later.
The Consulting Architect pointed out that his organization is creating an
architecture function. A large part of the motivation for the creation of this
function is to remove the finger pointing that goes on between the network and
the application-development organizations. One goal of the architecture function
is to strike a balance between application development and application delivery.
For example, there might be good business and technical factors that drive the
application development function to develop an application using chatty
protocols. One role of the architecture group is to identify the effect of that
decision on the application-delivery function and to suggest solutions. For
example, does the decision to use chatty protocols mean that additional
optimization solutions would have to be deployed in the infrastructure? If so,
how well will the application run if an organization deploys these optimization
solutions? What additional management and security issues do these solutions create?
A primary way to balance the requirements and capabilities of the application
development and the application-delivery functions is to create an effective
architecture that integrates those two functions.
Baselining provides a reference from which service quality and application
delivery effectiveness can be measured. It does this by quantifying the key
characteristics (e.g., response time, utilization, delay) of applications and
various IT resources including servers, WAN links and routers. Baselining allows
an IT organization to understand the normal behavior of those applications and resources.
Baselining is an example of a task that one can regard as a building block of
management functionality. That means baselining is a component of several key
processes, such as performing a pre-assessment of the network prior to deploying
an application or performing proactive alarming.
The Team Leader stated that his organization does not baseline the company's
entire global network. They have, however, widely deployed two tools that assist
with baselining. One of these tools establishes trends relative to their
traffic. The other tool baselines the end-to-end responsiveness of their
applications. The Team Leader has asked the two vendors to integrate the two
tools so that he will know how much capacity he has left before the performance
of a given application becomes unacceptable.
The Key Steps
Four primary steps comprise baselining. They are:
- I. Identify the Key Resources. Most IT organizations do not have the
ability to baseline all of their resources. These organizations must
determine which are the most important resources and baseline them. One way
to determine which resources are the most important is to identify the
company's key business applications and to identify the IT resources that
support these applications.
- II. Quantify the Utilization of the Assets over a Sufficient Period of
Time. Organizations must compute the baseline over a normal business cycle.
For example, the activity and responses times for a CRM application might be
different at 8:00 a.m. on a Monday than at 8:00 p.m. on a Friday. In
addition, the activity and response times for that CRM application are
likely to differ greatly during a week in the middle of the quarter as
compared with times during the last week of the quarter.
In most cases, baselining focuses on measuring the utilization of resources, such as
WAN links. However, application performance is only indirectly tied to the
utilization of WAN links. Application performance is tied directly to
factors such as WAN delay. Since it is often easier to measure utilization
than delay, many IT organizations set a limit on the maximum utilization of
their WAN links hoping that this will result in acceptable WAN latency.
IT organizations need to modify their baselining activities to focus
directly on delay.
- III. Determine how the Organization Uses Assets This step involves
determining how the assets are being consumed by answering questions such
as: Which applications are the most heavily used? Who is using those
applications? How has the usage of those applications changed? In addition
to being a key component of baselining, this step also positions the
application-delivery function to provide the company's business and
functional managers insight into how their organizations are changing based
on how their use of key applications is changing.
- IV. Utilize the Information
The information gained from baselining
has many uses. These include capacity planning, budget planning and
chargeback. Another use for this information is to measure the performance
of an application before and after a major change, such as a server upgrade,
a network redesign or the implementation of a patch. For example, assume
that a company is going to upgrade all of its Web servers. To ensure they
get all of the benefits they expect from that upgrade, that company should
measure key parameters both before and after the upgrade. Those parameters
include WAN and server delay as well as the end-to-end application response
time as experienced by the users.
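The business-cycle point in Step II can be sketched in a few lines of code. The following is a hypothetical illustration (the function name and sample data are invented for the example): it buckets utilization samples by hour of the week, so that a Monday 8:00 a.m. reading is compared with other Monday mornings rather than with Friday evenings.

```python
# Hypothetical sketch: a per-hour-of-week utilization baseline, so that
# readings are compared against the same point in the business cycle.
from collections import defaultdict
from datetime import datetime

def hour_of_week_baseline(samples):
    """samples: list of (timestamp, utilization) pairs.
    Returns {hour_of_week (0..167): average utilization}."""
    buckets = defaultdict(list)
    for ts, util in samples:
        how = ts.weekday() * 24 + ts.hour  # Monday 0:00 is 0, Sunday 23:00 is 167
        buckets[how].append(util)
    return {how: sum(v) / len(v) for how, v in buckets.items()}

samples = [
    (datetime(2008, 1, 7, 8), 0.62),   # Monday 8 a.m.
    (datetime(2008, 1, 14, 8), 0.58),  # the next Monday 8 a.m.
    (datetime(2008, 1, 11, 20), 0.20), # Friday 8 p.m.
]
baseline = hour_of_week_baseline(samples)
```

A production baseline would of course also distinguish mid-quarter from end-of-quarter weeks, as noted above.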
An IT organization can approach baselining in multiple ways. Sampling and
synthetic approaches to baselining can leave a number of gaps in the data and
have the potential to miss important behavior that is both infrequent and
significant.
Organizations should baseline by measuring 100% of the actual traffic from
the real users.
The following is a set of criteria that IT organizations can use to choose a
baselining solution. For simplicity, the criteria are focused on baselining
applications and not other IT resources.
To what degree (complete, partial, none) can the solution identify:
- Well-known applications; e.g., e-mail, VoIP, Oracle, PeopleSoft.
- Custom applications.
- Complex applications; e.g., Microsoft Exchange, SAP R/3, Citrix.
- Web-based applications, including URL-by-URL tracking.
- Peer-to-peer applications.
- Unknown applications.
Application Profiling and Response Time Analysis
Can the solution:
- Provide response time metrics based on synthetic traffic generation?
- Provide response time metrics based on monitoring actual traffic?
- Relate application response time to network activity?
- Provide application baselines and trending?
Pre-Deployment Assessment
The goal of performing a pre-deployment assessment of the current environment
is to identify any potential problems that might affect an IT organization's
ability to deploy an application. One of the two key questions that an
organization must answer during pre-deployment assessment is: Can the network
provide appropriate levels of security to protect against attacks? As part of a
security assessment, it is important to review the network and the attached
devices and to document the existing security functionality, such as IDS
(Intrusion Detection System), IPS (Intrusion Prevention System) and NAC
(Network Access Control). The next step is to analyze the configuration of the
network elements
to determine if any of them pose a security risk. It is also necessary to test
the network to see how it responds to potential security threats.
The second key question that an organization must answer during
pre-deployment assessment is: Can the network provide the necessary levels of
availability and performance? As previously mentioned, it is extremely difficult
to answer questions like this if the IT organization does not have targets for
application performance that are well understood and adhered to. It is also
difficult to answer this question, because as Chapter 3 described, the typical
application environment is both complex and dynamic.
Organizations should not look at the process of performing a pre-deployment
network assessment in isolation. Rather, they should consider it part of an
application lifecycle management process that includes a comprehensive assessment
and analysis of the existing network; the development of a thorough rollout plan
including: the profiling of the application; the identification of the impact of
implementing the application; and the establishment of effective processes for
ongoing fact-based data management.
The Team Leader stated his organization determines whether to perform a
network assessment prior to deploying a new application on a case-by-case basis.
In particular, he pointed out that it tends to perform an assessment if it is a
large deployment or if it has some concerns about whether the infrastructure can
support the application. To assist with this function, his organization has
recently acquired tools that can help it with tasks such as assessing the
ability of the infrastructure to support VoIP deployment as well as evaluating
the design of their MPLS network.
The Engineering CIO said that the organization is deploying VoIP. As part of
that deployment, it did an assessment of the ability of the infrastructure to
support VoIP. The assessment consisted of an analysis using an Excel
spreadsheet. The organization identified the network capacity at each
office, the current utilization of that capacity and the added load that would
come from deploying VoIP. Based on this set of information, it determined where
it needed to add capacity.
The key components of a pre-deployment network assessment are:
Create an inventory of the applications running on the network
This includes discovering the applications that are running on the network.
Chapter 7 will discuss this task in greater detail.
In addition to identifying the applications that are running on the network,
it is also important to categorize those applications using an approach similar
to what Chapter 3 described. Part of the value of this activity is to identify
recreational use of the network; e.g., on-line gaming and streaming radio or
video. Blocking this recreational use can free up additional WAN bandwidth.
Chapter 7 quantifies the extent to which corporate networks are carrying
recreational traffic.
Another part of the value of this activity is to identify business
activities, such as downloads of server patches or security patches to desktops
that are being performed during peak times. Moving these activities to an
off-peak time frees up additional bandwidth.
Evaluate bandwidth to ensure available capacity for new applications
This activity involves baselining the network as previously described. The
goal is to use the information about how the utilization of the relevant network
resources has been trending to identify if any parts of the network need to be
upgraded to support the new application.
As previously described, baselining typically refers to measuring the
utilization of key IT resources. The recommendation was made that companies
should modify how they think about baselining to focus not on utilization, but
on delay. In some instances, however, IT organizations need to measure more than
just delay. If a company is about to deploy VoIP, for example, then the
pre-assessment baseline must also measure the current levels of jitter and
packet loss, as VoIP quality is highly sensitive to those parameters.
Create response time baselines for key applications
This activity involves measuring the average and peak application response
times for key applications both before and after the new application is
deployed. This data will allow IT organizations to determine if deploying the
new application causes an unacceptable impact on the company's other key
applications.
As part of performing a pre-deployment network assessment, IT organizations
can typically rely on having access to management data from SNMP MIBs (Simple
Network Management Protocol Management Information Bases) on network devices,
such as switches and routers. This data source provides data link layer
visibility across the entire enterprise network and captures parameters, such as
the number of packets sent and received, the number of packets that are
discarded, as well as the overall link utilization.
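The link-utilization figure mentioned above is derived from successive readings of the SNMP interface octet counters. The following is a hypothetical sketch (the function name and the sample numbers are invented for the example) of the arithmetic a poller performs:

```python
# Hypothetical sketch: deriving link utilization from two successive
# SNMP ifInOctets/ifOutOctets counter readings, as an SNMP poller would.
def link_utilization(octets_t0, octets_t1, interval_s, link_bps):
    """Counters are in bytes; returns utilization as a fraction of link speed."""
    bits = (octets_t1 - octets_t0) * 8
    return bits / (interval_s * link_bps)

# A 1.544 Mbps T1 that moved 5,790,000 bytes during a 300-second poll cycle:
util = link_utilization(0, 5_790_000, 300, 1_544_000)  # 10% utilized
```

A real poller would also handle counter wrap and use the 64-bit ifHC counters on fast links.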
NetFlow is a Cisco IOS software feature and also the name of a Cisco protocol
for collecting IP traffic information. Within NetFlow, a network flow is defined
as a unidirectional sequence of packets between a given source and destination.
The branch office router outputs a flow record after it determines that the flow
is finished. This record contains information, such as timestamps for the flow
start and finish time, the volume of traffic in the flow, and its source and
destination IP addresses and source and destination port numbers.
NetFlow represents a more advanced source of management data than SNMP MIBs.
For example, whereas data from standard SNMP MIB monitoring can be used to
quantify overall link utilization, this class of management data can be used to
identify which network users or applications are consuming the bandwidth.
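The flow records described above can be pictured with a small sketch. This is a hypothetical illustration (the function, field names, and sample packets are invented for the example) of how packets are aggregated into unidirectional flows keyed by the classic five-tuple:

```python
# Hypothetical sketch: aggregating packets into unidirectional flows,
# keyed by (src ip, dst ip, protocol, src port, dst port), the way a
# NetFlow-style exporter accumulates flow records.
from collections import defaultdict

def build_flows(packets):
    """packets: (ts, src_ip, dst_ip, proto, src_port, dst_port, nbytes) tuples.
    Returns {flow_key: {"first": ts, "last": ts, "bytes": n, "packets": n}}."""
    flows = defaultdict(lambda: {"first": None, "last": None, "bytes": 0, "packets": 0})
    for ts, src, dst, proto, sport, dport, nbytes in packets:
        f = flows[(src, dst, proto, sport, dport)]
        if f["first"] is None:
            f["first"] = ts          # timestamp of the first packet in the flow
        f["last"] = ts               # timestamp of the most recent packet
        f["bytes"] += nbytes
        f["packets"] += 1
    return dict(flows)

pkts = [
    (0.0, "10.0.0.1", "10.0.1.5", 6, 40000, 80, 1500),
    (0.2, "10.0.0.1", "10.0.1.5", 6, 40000, 80, 1500),
    (0.1, "10.0.1.5", "10.0.0.1", 6, 80, 40000, 60),  # reverse direction = separate flow
]
flows = build_flows(pkts)
```

Note that, because flows are unidirectional, the reply traffic forms its own flow record.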
The IETF is in the final stages of approving a standard (RFC 3917) for
logging IP packets as they flow through a router, switch or other networking
device and reporting that information to network management and accounting
systems. This new standard, which is referred to as IPFIX (IP Flow Information
EXport), is based on NetFlow Version 9.
An important consideration for IT organizations is whether they should deploy
vendor-specific, packet inspection-based dedicated instrumentation. The
advantage of deploying dedicated instrumentation is that it enables a more
detailed view into application performance. The disadvantage of this approach is
that it increases the cost of the solution. A compromise is to rely on data from
SNMP MIBs and NetFlow in small sites and to augment this with dedicated
instrumentation in larger, more strategic sites.
Another consideration is whether or not IT organizations should deploy
software agents on end systems. One of the architectural advantages of this
approach is that it monitors performance and events closer to the user's actual
experience. A potential disadvantage of this approach is that there can be
organizational barriers that limit the ability of the IT organization to put
software on each end system. In addition, for an agent-based approach to be
successful, it must not introduce any appreciable management overhead.
Whereas gaining access to management data is relatively easy, collecting and
analyzing details on every application in the network is challenging. It is
difficult, for example, to identify every IP application, host and conversation
on the network as well as applications that use protocols such as IPX or DECnet.
It is also difficult to quantify application response time and to identify the
individual sources of delay; i.e., network, application server, database. One of
the most challenging components of this activity is to unify this information so
the organization can leverage it to support myriad activities associated with
managing application delivery.
Network and Application Optimization
The phrase network and application optimization refers to an extensive set of
techniques that organizations have deployed in an attempt to optimize the
performance of networks and applications as part of assuring acceptable
application performance. The primary role that these techniques play is to:
- Reduce the amount of data that is sent over the WAN.
- Ensure that the WAN link is never idle if there is data to send.
- Reduce the number of round trips (a.k.a., transport layer or application
turns) that are necessary for a given transaction.
- Mitigate the inefficiencies of older protocols.
- Offload computationally intensive tasks from client systems and servers.
There are two principal categories of network and application optimization
products. One category focuses on the negative effect of the WAN on application
performance. This category is often referred to as a WAN optimization controller
(WOC) but will also be referred to in this handbook as Branch Office
Optimization Solutions. Branch Office Optimization Solutions are often referred
to as symmetric solutions because they typically require an appliance in both
the data center as well as the branch office. Some vendors, however, have
implemented solutions that call for an appliance in the data center, but do not
require an appliance in the branch office. This class of solution is often
referred to as a software only solution.
The trade-off between a traditional symmetric solution based on an appliance
and a software only solution is straightforward. Because the traditional
symmetric solution involves an appliance in each branch office, it has the
dedicated hardware that allows it to service a large user base. However, because
of the requirement to have an appliance in each branch office, a traditional
symmetric solution also tends to be more expensive. As a result, the software
only solution is most appropriate for individual users or small offices. Note
that while a software only solution can not typically match the performance of a
symmetric solution, that does not mean that a software only solution is less
functional than a symmetric solution. IT organizations that are looking for a
software only solution should expect that the solution will provide a rich set
of functionality; e.g., Layer 3 and 4 visibility and shaping,
Layer 7 visibility and shaping, packet marking based on DSCP (DiffServ code
point), as well as sophisticated analysis and reporting.
The typical software only solution is comprised of:
- Agents that sit on each PC and which serve to monitor and shape WAN
application and user traffic in accordance with assigned policy.
- A PC or server that has two functions. One function is to serve as a
collector of network statistics. The other function is to store policies
that are accessed by the agents.
- A management console that is used for monitoring, policy development and
reporting.
The second category of product that will be discussed in this Chapter is
often referred to as an Application Front End (AFE) or Application Delivery
Controller (ADC). This solution is typically referred to as an asymmetric
solution because an appliance is only required in the data center and not the
branch office. The genesis of this category of solution dates back to the IBM
mainframe-computing model of the late 1960s and early 1970s. Part of that
computing model was to have a Front End Processor (FEP) reside in front of the
IBM mainframe. The primary role of the FEP was to free up processing power on
the general purpose mainframe computer by performing communications processing
tasks, such as terminating the 9600 baud multi-point private lines, in a device
that was designed just for these tasks. The role of the AFE is somewhat similar
to that of the FEP in that the AFE performs computationally intensive tasks,
such as the processing of SSL (Secure Sockets Layer) traffic, and hence frees up
server resources. However, another role of the AFE is to function as a Server
Load Balancer (SLB) and, as the name implies, balance traffic over multiple
servers. While performing these functions accelerates the performance of
Web-based applications, AFEs often do not accelerate the performance of standard
Windows based applications.
Companies deploy Branch Office Optimization Solutions and AFEs in different
ways. The typical company, for example, has many more branch offices than data
centers. Hence, the question of whether to deploy a solution in a limited
tactical manner vs. a broader strategic manner applies more to Branch Office
Optimization Solutions than it does to AFEs. Also, AFEs are based on open
standards and as a result a company can deploy AFEs from different vendors and
not be concerned about interoperability. In contrast, Branch Office Optimization
Solutions are based on proprietary technologies and so a company would tend to
choose a single vendor from which to acquire these solutions.
Alice in Wonderland Revisited
Chapter 4 began with a reference to Alice in Wonderland and discussed the
need for IT organizations to set a direction for things such as application
performance. That same reference to Alice in Wonderland applies to the network
and application optimization component of application delivery. In particular,
no network and application optimization solution on the market solves all
possible application performance issues.
To deploy the appropriate network and application optimization solution, IT
organizations need to understand the problem they are trying to solve.
Chapter 3 of this handbook described some of the characteristics of a generic
application environment and pointed out that to choose an appropriate solution,
IT organizations need to understand their unique application environment. In the
context of network and application optimization, if the company either already
has or plans to consolidate servers out of branch offices and into centralized
data centers, then as described later in this section, a WAFS (Wide Area File
Services) solution might be appropriate. If the company is implementing VoIP,
then any Branch Office Optimization Solution that it implements must be able to
support traffic that is both real-time and meshed, and have strong QoS
functionality. Analogously, if the company is making heavy use of SSL, it might
make sense to implement an AFE to relieve the servers of the burden of
processing the SSL traffic.
In addition to high-level factors of the type the preceding paragraph
mentioned, the company's actual traffic patterns also have a significant impact
on how much value a network and application optimization solution will provide.
To exemplify this, consider the types of advanced compression most solution
providers offer. The effectiveness of advanced compression depends on two
factors. One factor is the quality of the compression techniques that have been
implemented in a solution. Since many compression techniques use the same
fundamental and widely known mathematical and algorithmic foundations, the
performance of many of the solutions available in the market will tend to be
similar.
The second factor that influences the effectiveness of advanced compression
solutions is the amount of redundancy of the traffic. Applications that transfer
data with a lot of redundancy, such as text and html on web pages, will benefit
significantly from advanced compression. Applications that transfer data that
has already been compressed, such as the voice streams in VoIP or jpg-formatted
images, will see little improvement in performance from implementing advanced
compression and could possibly see performance degradation.
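The redundancy point can be demonstrated with a short, hypothetical illustration using Python's standard zlib module (a DEFLATE/LZ-based compressor); the sample data is invented for the example:

```python
# Hypothetical illustration: redundant text compresses well, while
# already-compressed (high-entropy) data sees essentially no gain.
import os
import zlib

text = b"<html><body>hello world</body></html>" * 100  # redundant HTML
random_bytes = os.urandom(3800)  # stands in for a jpg or a VoIP payload

text_ratio = len(zlib.compress(text)) / len(text)            # far below 1.0
random_ratio = len(zlib.compress(random_bytes)) / len(random_bytes)  # near 1.0
```

The second ratio can even exceed 1.0 once the compressor's own overhead is counted, which is the performance-degradation case noted above.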
Because a network and application optimization solution will provide varying degrees of
benefit to a company based on the unique characteristics of its environment,
third party tests of these solutions are helpful, but not conclusive.
In order to understand the performance gains of any network and application
optimization solution, that solution must be tested in an environment that
closely reflects the environment in which it will be deployed.
Branch Office Optimization Solutions
The goal of Branch Office Optimization Solutions is to improve the
performance of applications delivered from the data center to the branch office
or directly to the end user. Myriad techniques comprise branch office
optimization solutions. Table 5.1 lists some of these techniques and indicates
how organizations can use each of these techniques to overcome some
characteristic of the WAN that impairs application performance.
WAN Optimization Techniques
Reduce the Amount of Data Sent
- Data Compression
- Differencing (a.k.a., de-duplication)
Mitigate Round-trip Time
- Request Prediction
- Response Spoofing
Forward Error Correction (FEC)
Quality of Service (QoS)
Table 5.1: Techniques to Improve Application Performance
Below is a brief description of some of the principal WAN optimization
techniques.
Caching
This refers to keeping a local copy of information with the goal of either
avoiding or minimizing the number of times that information must be accessed
from a remote site. As described below, there are multiple forms of caching.
Byte Caching
With byte caching, the sender and the receiver maintain large disk-based
caches of byte strings previously sent and received over the WAN link. As data is
queued for the WAN, it is scanned for byte strings already in the cache. Any
strings that result in cache hits are replaced with a short token that refers to
its cache location, allowing the receiver to reconstruct the file from its copy
of the cache. With byte caching, the data dictionary can span numerous TCP
applications and information flows rather than being constrained to a single
file or single application type.
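The token substitution described above can be sketched as follows. This is a hypothetical, in-memory miniature (real implementations use large disk-based caches and rolling fingerprints; all names here are invented for the example):

```python
# Hypothetical sketch of byte caching: both ends keep a dictionary of
# previously seen chunks; repeated chunks cross the WAN as short tokens.
def bc_send(data, cache, chunk=64):
    out = []
    for i in range(0, len(data), chunk):
        piece = data[i:i + chunk]
        if piece in cache:
            out.append(("tok", cache[piece]))   # cache hit: send a token
        else:
            cache[piece] = len(cache)           # new chunk: cache it, send raw
            out.append(("raw", piece))
    return out

def bc_receive(stream, cache_by_id):
    data = b""
    for kind, payload in stream:
        if kind == "raw":
            cache_by_id[len(cache_by_id)] = payload  # mirror the sender's cache
            data += payload
        else:
            data += cache_by_id[payload]             # expand the token locally
    return data

sender_cache = {}
msg = b"A" * 64 + b"B" * 64 + b"A" * 64   # the third chunk repeats the first
wire = bc_send(msg, sender_cache)
restored = bc_receive(wire, {})
```

Because both sides build the same dictionary in the same order, the receiver reconstructs the data exactly while the repeated chunk travels as a tiny token.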
Object Caching
Object caching stores copies of remote application objects in a local cache
server, which is generally on the same LAN as the requesting system. With object
caching, the cache server acts as a proxy for a remote application server. For
example, in Web object caching, the client browsers are configured to connect to
the proxy server rather than directly to the remote server. When the request for
a remote object is made, the local cache is queried first. If the cache contains
a current version of the object, the request can be satisfied locally at LAN
speed and with minimal latency. Most of the latency involved in a cache hit
results from the cache querying the remote source server to ensure that the
cached object is up to date.
If the local proxy does not contain a current version of the remote object,
it must be fetched, cached, and then forwarded to the requester. Loading the
remote object into the cache can potentially be facilitated by either data
compression or byte caching.
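The hit/miss logic above can be sketched as a small proxy cache. This is a hypothetical illustration (the class and the freshness-check callback are invented for the example; a real web cache would use HTTP validators such as If-Modified-Since):

```python
# Hypothetical sketch of an object cache acting as a proxy: serve the
# object locally when the cached version is current, fetch it otherwise.
class ObjectCache:
    def __init__(self, origin_fetch, origin_version):
        self.fetch = origin_fetch      # slow remote retrieval
        self.version = origin_version  # cheap freshness query to the origin
        self.store = {}                # url -> (version, body)

    def get(self, url):
        current = self.version(url)    # akin to an If-Modified-Since check
        if url in self.store and self.store[url][0] == current:
            return self.store[url][1], "hit"   # satisfied at LAN speed
        body = self.fetch(url)                 # fetch, cache, then forward
        self.store[url] = (current, body)
        return body, "miss"

origin = {"/logo.png": (1, b"PNG...")}
cache = ObjectCache(lambda u: origin[u][1], lambda u: origin[u][0])
_, first = cache.get("/logo.png")    # miss: fetched and cached
_, second = cache.get("/logo.png")   # hit: served locally
```

As the text notes, even the hit path pays one small round trip for the freshness query.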
Compression
The role of compression is to reduce the size of a file prior to transmitting
that file over a WAN. As described below, there are various forms of
compression.
Static Data Compression
Static data compression algorithms find redundancy in a data stream and use
encoding techniques to remove the redundancy, creating a smaller file. A number
of familiar lossless compression tools for binary data are based on Lempel-Ziv
(LZ) compression. This includes zip, PKZIP and gzip algorithms.
LZ develops a codebook or dictionary as it processes the data stream and
builds short codes corresponding to sequences of data. Repeated occurrences of
the sequences of data are then replaced with the codes. The LZ codebook is
optimized for each specific data stream and the decoding program extracts the
codebook directly from the compressed data stream. LZ compression can often
reduce text files by as much as 60-70%. However, for data with many possible
data values, LZ may prove to be quite ineffective because repeated sequences are
uncommon.
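Python's zlib module implements DEFLATE, which is LZ77-based, so the self-extracting-codebook property is easy to demonstrate. This is a hypothetical illustration; the sample text is invented for the example:

```python
# Hypothetical demo: DEFLATE (zlib) is LZ77-based. The decoder rebuilds
# the dictionary from the compressed stream itself, so no separate
# codebook needs to be shipped, and the round trip is lossless.
import zlib

text = (b"Baselining is an example of a task that one can regard as a "
        b"building block of management functionality. " * 20)
compressed = zlib.compress(text, 9)
reduction = 1 - len(compressed) / len(text)   # fraction of bytes removed
assert zlib.decompress(compressed) == text    # lossless round trip
```

On ordinary text the reduction typically lands in the 60-70% range the text cites; on this deliberately repetitive sample it is higher still.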
Differential Compression; a.k.a., Differencing or De-duplication
Differencing algorithms are used to update files by sending only the changes
that need to be made to convert an older version of the file to the current
version. Differencing algorithms partition a file into two classes of variable
length byte strings: those strings that appear in both the new and old versions
and those that are unique to the new version being encoded. The latter strings
comprise a delta file, which is the minimum set of changes that the receiver
needs in order to build the updated version of the file.
While differential compression is constrained to those cases where the
receiver has stored an earlier version of the file, the degree of compression is
very high. As a result, differential compression can greatly reduce bandwidth
requirements for functions such as software distribution, replication of
distributed file systems, and file system backup and restore.
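The partitioning into shared and unique strings can be sketched with fixed-size blocks. This is a hypothetical miniature (real differencing engines use variable-length, content-defined chunking; the function names and data are invented for the example):

```python
# Hypothetical sketch of differencing: the delta carries only literal new
# bytes plus "copy" instructions that reuse data the receiver already has.
def make_delta(old, new, block=16):
    known = {old[i:i + block]: i for i in range(0, len(old), block)}
    delta, i = [], 0
    while i < len(new):
        piece = new[i:i + block]
        if piece in known:
            delta.append(("copy", known[piece], len(piece)))  # reuse old data
        else:
            delta.append(("add", piece))                      # literal new bytes
        i += block
    return delta

def apply_delta(old, delta):
    out = b""
    for op in delta:
        if op[0] == "copy":
            _, off, n = op
            out += old[off:off + n]
        else:
            out += op[1]
    return out

old = b"0123456789abcdef" * 4           # 64-byte "version 1"
new = old[:16] + b"X" * 16 + old[32:]   # only the second block changed
delta = make_delta(old, new)
```

Only one 16-byte literal crosses the WAN; the other three blocks travel as short copy instructions, which is why differencing is so effective for backup and replication.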
Real Time Dictionary Compression
The same basic LZ data compression algorithms discussed earlier can also be
applied to individual blocks of data rather than entire files. Operating at the
block level results in smaller dynamic dictionaries that can reside in memory
rather than on disk. As a result, the processing required for compression and
decompression introduces only a small amount of delay, allowing the technique to
be applied to real-time, streaming data.
Congestion Control
The goal of congestion control is to ensure that the sending device does not
transmit more data than the network can accommodate. To achieve this goal, the
TCP congestion control mechanisms are based on a parameter referred to as the
congestion window. TCP has multiple mechanisms to determine the size of the
congestion window.
Forward Error Correction (FEC)
FEC is typically used at the physical layer (Layer 1) of the OSI stack. FEC
can also be applied at the network layer (Layer 3) whereby an extra packet is
transmitted for every n packets sent. This extra packet is used to recover from
an error and hence avoid having to retransmit packets.
A subsequent section of the handbook will discuss some of the technical
challenges associated with data replication and will describe how FEC mitigates
some of those challenges.
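The 1-for-n scheme described above is easy to sketch with XOR parity. This is a hypothetical illustration (sample packets invented for the example); production FEC uses more sophisticated codes, but the principle is the same:

```python
# Hypothetical sketch of packet-level FEC: one XOR parity packet per group
# lets the receiver rebuild any single lost packet without a retransmission.
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

packets = [b"\x01\x02", b"\x10\x20", b"\xa0\x0b"]
parity = reduce(xor, packets)   # the extra packet sent with the group

# Suppose the second packet is lost in transit:
received = [packets[0], None, packets[2]]
recovered = reduce(xor, [p for p in received if p is not None] + [parity])
```

XOR-ing the surviving packets with the parity packet yields the missing one, avoiding a retransmission and the RTT penalty that goes with it.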
Protocol Acceleration
Protocol acceleration refers to a class of techniques that improves
application performance by circumventing the shortcomings of various
communication protocols. Protocol acceleration is typically based on per-session
packet processing by appliances at each end of the WAN link, as shown in Figure
5.1. The appliances at each end of the link act as a local proxy for the remote
system by providing local termination of the session. Therefore, the end systems
communicate with the appliances using the native protocol, and the sessions are
relayed between the appliances across the WAN using the accelerated version of
the protocol or using a special protocol designed to address the WAN performance
issues of the native protocol. As described below, there are many forms of
protocol acceleration.
TCP Acceleration
TCP can be accelerated between appliances with a variety of techniques that
increase a session's ability to more fully utilize link bandwidth. Some of the
available techniques are dynamic scaling of the window size, packet aggregation,
selective acknowledgement, and TCP Fast Start. Increasing the window size for
large transfers allows more packets to be simultaneously in transit boosting
bandwidth utilization. With packet aggregation, a number of smaller packets are
aggregated into a single larger packet, reducing the overhead associated with
numerous small packets. TCP selective acknowledgment (SACK) improves performance
in the event that multiple packets are lost from one TCP window of data. With
SACK, the receiver tells the sender which packets in the window were received,
allowing the sender to retransmit only the missing data segments instead of all
segments sent since the first lost packet. TCP slow start and congestion
avoidance lower the data throughput drastically when loss is detected. TCP Fast
Start remedies this by accelerating the growth of the TCP window size to quickly
take advantage of link bandwidth.
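The value of window scaling follows from simple arithmetic: steady-state TCP throughput is bounded by window size divided by round-trip time. The figures below are hypothetical, chosen only to illustrate the bound:

```python
# Hypothetical arithmetic: un-tuned TCP throughput is bounded by
# window / RTT, which is why dynamic window scaling matters on
# long, high-bandwidth WAN paths.
def tcp_throughput_bps(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s

# A default 64 KB window over a 100 ms transcontinental RTT:
default = tcp_throughput_bps(64 * 1024, 0.100)   # roughly 5.2 Mbps
# A scaled 1 MB window over the same path:
scaled = tcp_throughput_bps(1024 * 1024, 0.100)  # roughly 84 Mbps
```

On a 100 ms path, no amount of link bandwidth beyond ~5.2 Mbps helps a single un-scaled TCP session, which is the gap these appliances close.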
CIFS and NFS Acceleration
As mentioned earlier, CIFS and NFS use numerous Remote Procedure Calls (RPCs)
for each file sharing operation. NFS and CIFS suffer from poor performance over
the WAN because each small data block must be acknowledged before the next one
is sent. This results in an inefficient ping-pong effect that amplifies the
effect of WAN latency. CIFS and NFS file access can be greatly accelerated by
using a WAFS transport protocol between the acceleration appliances. With the
WAFS protocol, when a remote file is accessed, the entire file can be moved or
pre-fetched from the remote server to the local appliance's cache. This
technique eliminates numerous round trips over the WAN. As a result, it can
appear to the user that the file server is local rather than remote. If a file
is being updated, CIFS and NFS acceleration can use differential compression and
block level compression to further increase WAN efficiency.
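The cost of the ping-pong effect is easy to quantify. The numbers below are hypothetical, chosen only to illustrate why pre-fetching the whole file changes the user experience:

```python
# Hypothetical arithmetic: the CIFS/NFS "ping-pong" effect. Fetching a
# file in small acknowledged blocks pays one round trip per block; a WAFS
# appliance that pre-fetches the whole file pays roughly one.
def latency_cost_s(file_bytes, block_bytes, rtt_s):
    round_trips = -(-file_bytes // block_bytes)   # ceiling division
    return round_trips * rtt_s

# A 10 MB file in 4 KB blocks over an 80 ms WAN round trip:
naive = latency_cost_s(10 * 1024 * 1024, 4096, 0.080)   # ~205 s of latency alone
prefetched = 1 * 0.080   # the appliance serves the file from its local cache
```

That latency cost is independent of link bandwidth, which is why adding bandwidth alone does not fix CIFS performance over the WAN.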
HTTP Acceleration
Web pages are often composed of many separate objects, each of which must be
requested and retrieved sequentially. Typically a browser will wait for a
requested object to be returned before requesting the next one. This results in
the familiar ping-pong behavior that amplifies the effects of latency. HTTP can
be accelerated by appliances that use pipelining to overlap fetches of Web
objects rather than fetching them sequentially. In addition, the appliance can
use object caching to maintain local storage of frequently accessed web objects.
Web accesses can be further accelerated if the appliance continually updates
objects in the cache instead of waiting for the object to be requested by a
local browser before checking for updates.
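The benefit of overlapping fetches can be sketched with simulated object retrievals. This is a hypothetical illustration (the 50 ms `fetch` stub and object names are invented for the example):

```python
# Hypothetical sketch: overlapping object fetches the way a pipelining
# appliance does, using a simulated 50 ms retrieval per object.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(obj):          # stands in for one HTTP GET over the WAN
    time.sleep(0.05)
    return obj.upper()

objs = ["a.css", "b.js", "c.png", "d.png"]

t0 = time.time()
sequential = [fetch(o) for o in objs]          # ~0.20 s: one at a time
seq_elapsed = time.time() - t0

t0 = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    overlapped = list(pool.map(fetch, objs))   # ~0.05 s: all in flight together
ovl_elapsed = time.time() - t0
```

Four round trips collapse to roughly one, which is exactly the latency amplification that pipelining removes.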
Microsoft Exchange Acceleration
Most of the storage and bandwidth requirements of email programs, such as
Microsoft Exchange, are due to the attachment of large files to mail messages.
Downloading email attachments from remote Microsoft Exchange Servers is slow and
wasteful of WAN bandwidth because the same attachment may be downloaded by a
large number of email clients on the same remote site LAN. Microsoft Exchange
acceleration can be accomplished with a local appliance that caches email
attachments as they are downloaded. This means that all subsequent downloads of
the same attachment can be satisfied from the local appliance. If an
attachment is edited locally and then returned via the remote mail server,
the appliances can use differential file compression to conserve WAN bandwidth.
Request Prediction
By understanding the semantics of specific protocols or applications, it is
often possible to anticipate a request a user will make in the near future.
Making this request in advance of it being needed eliminates virtually all of
the delay when the user actually makes the request. Many applications or
application protocols have a wide range of request types that reflect different
user actions or use cases. It is important to understand what a vendor means
when it says it has a certain application level optimization. For example, in
the CIFS (Windows file sharing) protocol, the simplest interactions that can be
optimized involve drag and drop. But many other interactions are more complex.
Not all vendors support the entire range of CIFS optimizations.
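The anticipation described above can be sketched as sequential read-ahead. This is a hypothetical miniature (the class and block-oriented interface are invented for the example; real products predict far richer CIFS interactions):

```python
# Hypothetical sketch of request prediction: after serving block n, the
# proxy pre-fetches block n+1 before the client asks for it.
class ReadAheadProxy:
    def __init__(self, read_remote):
        self.read_remote = read_remote   # slow WAN fetch of one block
        self.prefetched = {}             # blocks fetched in anticipation

    def read(self, n):
        if n in self.prefetched:
            data, hit = self.prefetched.pop(n), True   # no WAN round trip
        else:
            data, hit = self.read_remote(n), False
        self.prefetched[n + 1] = self.read_remote(n + 1)  # anticipate next
        return data, hit

blocks = {i: bytes([i]) * 4 for i in range(8)}
proxy = ReadAheadProxy(lambda n: blocks[n])
_, h0 = proxy.read(0)   # miss: nothing predicted yet
_, h1 = proxy.read(1)   # hit: was pre-fetched during the previous read
```

From the user's perspective, every request after the first appears to complete at LAN speed.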
Response Spoofing
This refers to situations in which a client makes a request of a distant
server, but the request is responded to locally.
Tactical vs. Strategic Solutions
To put the question of tactical vs. strategic in context, refer again to the
IT organization that Chapter 2 of this handbook referenced. For that company to
identify the problem that it is trying to solve, it must answer questions such
as: Is the problem just the performance of this one application as used just by
employees in the Pac Rim? If that is the problem statement, then the company is
looking for a very tactical solution. However, the company might decide that the
problem that it wants to solve is how it can guarantee the performance of all
of its critical applications for all of its employees under as wide a range of
circumstances as possible. In this case, the company needs a strategic solution.
Historically, Branch Office Optimization Solutions have been implemented in a
tactical fashion. That means that companies have deployed the least amount of
equipment possible to solve a specific problem. Kubernan recently asked several
hundred IT professionals about the tactical vs. strategic nature of how they use
these techniques. Their answers, which Figure 5.2 shows, indicate the deployment
of these techniques is becoming a little more strategic.
The Electronics COO supports that position. He noted that his company's
initial deployment of network and application optimization techniques was to
solve a particular problem, but stated that his company is "absolutely
becoming more proactive moving forward with deploying these techniques."
Similarly, The Motion Picture Architect commented that his organization has
been looking at these technologies for a number of years, but has only deployed
products to solve some specific problems, such as moving extremely large files
over long distances. He noted that his organization now wants to deploy products
proactively to solve a broader range of issues relative to application
performance. According to The Motion Picture Architect, "Even a well written
application does not run well over long distances. In order to run well, the
application needs to be very thin and it is very difficult to write a full
featured application that is very thin."
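The point about distance can be made with simple arithmetic. The figures below are assumptions chosen for illustration, not data from the handbook: a "chatty" application's minimum completion time is bounded by its number of request/response turns multiplied by the round-trip time, no matter how much bandwidth the WAN has.

```python
# Illustrative lower bound: latency, not bandwidth, dominates chatty traffic.
def min_completion_ms(app_turns, rtt_ms):
    """Latency-imposed lower bound on transaction time, in milliseconds."""
    return app_turns * rtt_ms

# Assumed figures: 400 request/response turns, 5 ms LAN RTT vs. 150 ms
# transcontinental WAN RTT.
print(min_completion_ms(400, 5))    # 2000 ms  (2 seconds)
print(min_completion_ms(400, 150))  # 60000 ms (a full minute)
```

A "thin" application in the Architect's sense is one with few turns; the same 150 ms path with only 20 turns has a 3-second floor, which is why reducing round trips is central to branch office optimization.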
IT organizations often start with a tactical deployment of WOCs and expand
this deployment over time.
Table 5.2 depicts the extent of the deployment of
branch office optimization solutions.
Table 5.2: Deployment of Branch Office Optimization Solutions
(Survey response categories include: no plans to deploy; have not deployed,
but plan to deploy; have deployed in test mode.)
One conclusion that can be drawn from the data in Table 5.2 is:
The deployment of WAN Optimization Controllers will increase significantly.
The Engineering CIO stated that his organization originally deployed a WAFS
solution to reduce redundant file copies. He said he has been pleasantly
surprised by the additional benefits of using the solution. In addition, his
organization plans on doing more backup of files over the network.
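The "redundant file copy" benefit that WAFS solutions provide can be sketched as follows. The fixed-size chunking, `send_file` function, and return values are all invented for illustration; real products operate on byte streams with more sophisticated segmentation.

```python
# Hypothetical sketch of redundancy elimination: each side keeps a dictionary
# of chunk hashes, and only chunks the far side has not yet seen cross the WAN.
import hashlib

def send_file(data, peer_hashes, chunk_size=4096):
    """Return (bytes_sent_over_wan, references_sent) for one transfer."""
    bytes_sent = 0
    refs = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in peer_hashes:
            refs += 1                  # send a short reference, not the data
        else:
            bytes_sent += len(chunk)   # new data must cross the WAN
            peer_hashes.add(digest)
    return bytes_sent, refs

peer = set()
original = b"A" * 4096 + b"B" * 4096
first = send_file(original, peer)    # (8192, 0): all data crosses the WAN
second = send_file(original, peer)   # (0, 2): a repeat copy sends only references
```

The second, redundant transfer moves no file data at all, which is the effect the Engineering CIO observed when redundant copies were eliminated.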
- Kubernan asserts its belief that words such as paradigm and holistically
have been out of favor so long that it is now acceptable to use them again
- 2005/2006 VoIP State of the Market Report, Steven Taylor,
- Hewlett-Packard picks Austin for two data centers
- Simple Object Access Protocol (SOAP) is the Web Services specification
used for invoking methods on remote software components, using an XML-based
message format.
- ASP.NET is a web application framework marketed by Microsoft that
programmers can use to build dynamic web sites, web applications and XML web
services. It is part of Microsoft's .NET platform and is the successor to
Microsoft's Active Server Pages (ASP) technology.
- Virtual servers will be discussed in more detail in Chapter 9
- Wikipedia on Rich Internet Applications:
- Web 2.0 is Here - Is Your Web Infrastructure Ready?
- Why Centralizing Microsoft Servers Hurts Performance, Peter Sevcik and
Rebecca Wetzel, http://www.juniper.net/solutions/literature/ms_server_centralization.pdf
- Rich Internet Applications: Design, Measurement and Management
Challenges, Chris Loosley, http://www.keynote.com/docs/whitepapers/RichInternet_5.pdf
Published By Kubernan
Copyright © 2008
Editorial and Sponsorship Information
Contact Jim Metzler or Steven Taylor.
Kubernan is an analyst and consulting joint venture of Steven
Taylor and Jim Metzler.
Professional Opinions Disclaimer
By Dr. Jim Metzler