OPEN DATA CENTER ALLIANCE USAGE:
Standard Units of Measure For IaaS Rev 1.1
Table of Contents
Legal Notice
Executive Summary
Purpose
Taxonomy
Quantitative Measures
Qualitative Measures: For Service Assurance Level
Definition
General Guidelines
Service-level Management
Service-level Attributes
Function Map
Methodology
Use Case
Goals
Considerations
Sample usage
Success scenario 1 (pre-usage)
Failure conditions 1
Success scenario 2 (actual, instrumented)
Failure conditions 2
Failure conditions 3
Failure handling
Requirements
Benchmark Suitability Discussion
Summary of Industry Actions Required
Legal Notice
© 2011-2013 Open Data Center Alliance, Inc. ALL RIGHTS RESERVED.
This “Standard Units of Measure For IaaS Rev 1.1” document is proprietary to the Open Data Center Alliance (the “Alliance”) and/or its
successors and assigns.
NOTICE TO USERS WHO ARE NOT OPEN DATA CENTER ALLIANCE PARTICIPANTS: Non-Alliance Participants are only granted the right to
review, and make reference to or cite this document. Any such references or citations to this document must give the Alliance full attribution
and must acknowledge the Alliance’s copyright in this document. The proper copyright notice is as follows: “© 2011-2013 Open Data Center
Alliance, Inc. ALL RIGHTS RESERVED.” Such users are not permitted to revise, alter, modify, make any derivatives of, or otherwise amend
this document in any way without the prior express written permission of the Alliance.
NOTICE TO USERS WHO ARE OPEN DATA CENTER ALLIANCE PARTICIPANTS: Use of this document by Alliance Participants is subject to the
Alliance’s bylaws and its other policies and procedures.
NOTICE TO USERS GENERALLY: Users of this document should not reference any initial or recommended methodology, metric, requirements,
criteria, or other content that may be contained in this document or in any other document distributed by the Alliance (“Initial Models”) in any
way that implies the user and/or its products or services are in compliance with, or have undergone any testing or certification to demonstrate
compliance with, any of these Initial Models.
The contents of this document are intended for informational purposes only. Any proposals, recommendations or other content contained in
this document, including, without limitation, the scope or content of any methodology, metric, requirements, or other criteria disclosed in this
document (collectively, “Criteria”), does not constitute an endorsement or recommendation by Alliance of such Criteria and does not mean that
the Alliance will in the future develop any certification or compliance or testing programs to verify any future implementation or compliance with
any of the Criteria.
LEGAL DISCLAIMER: THIS DOCUMENT AND THE INFORMATION CONTAINED HEREIN IS PROVIDED ON AN “AS IS” BASIS. TO THE MAXIMUM
EXTENT PERMITTED BY APPLICABLE LAW, THE ALLIANCE (ALONG WITH THE CONTRIBUTORS TO THIS DOCUMENT) HEREBY DISCLAIM ALL
REPRESENTATIONS, WARRANTIES AND/OR COVENANTS, EITHER EXPRESS OR IMPLIED, STATUTORY OR AT COMMON LAW, INCLUDING,
BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, VALIDITY, AND/
OR NONINFRINGEMENT. THE INFORMATION CONTAINED IN THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND THE ALLIANCE
MAKES NO REPRESENTATIONS, WARRANTIES AND/OR COVENANTS AS TO THE RESULTS THAT MAY BE OBTAINED FROM THE USE OF, OR
RELIANCE ON, ANY INFORMATION SET FORTH IN THIS DOCUMENT, OR AS TO THE ACCURACY OR RELIABILITY OF SUCH INFORMATION.
EXCEPT AS OTHERWISE EXPRESSLY SET FORTH HEREIN, NOTHING CONTAINED IN THIS DOCUMENT SHALL BE DEEMED AS GRANTING
YOU ANY KIND OF LICENSE IN THE DOCUMENT, OR ANY OF ITS CONTENTS, EITHER EXPRESSLY OR IMPLIEDLY, OR TO ANY INTELLECTUAL
PROPERTY OWNED OR CONTROLLED BY THE ALLIANCE, INCLUDING, WITHOUT LIMITATION, ANY TRADEMARKS OF THE ALLIANCE.
TRADEMARKS: OPEN DATA CENTER ALLIANCE℠, ODCA℠, and the OPEN DATA CENTER ALLIANCE logo® are trade names, trademarks,
and/or service marks (collectively “Marks”) owned by Open Data Center Alliance, Inc. and all rights are reserved therein. Unauthorized use
is strictly prohibited. This document does not grant any user of this document any rights to use any of the ODCA’s Marks. All other service
marks, trademarks and trade names referenced herein are those of their respective owners.
Executive Summary
The worldwide market for cloud services–encompassing both dedicated services available via private clouds and shared services via public
clouds–could top $148.8 billion in 2014, up from $68.3 billion in 2010. The sheer, and growing, volume of available services makes it
challenging for any organization to assess its options and to measure what services are being delivered and what attributes exist for each
service.
Organizations will be comparing services from competing providers of cloud services, as well as with their own internal capabilities. Such
comparisons need to be quantitative on a like-for-like or “apples-to-apples” basis (e.g., quantity of consumption or period of usage) and
qualitative on a set of service assurance attributes (e.g., degree of elasticity or degree of service level). However, there is no standard,
vendor-independent unit of measure, equivalent to a million instructions per second (MIPS) and the other measures used for mainframes, that
allows such comparisons. This is partly because there are neither common measurements for the technical units of capacity being sold nor
common ways to describe the qualitative attributes of cloud services. Consequently, organizations either try to fit individual cloud providers’
models to their business problems, or embark on costly and lengthy request for proposal (RFP) processes in an attempt to conform the
providers to a set of parameters that can be compared.
The Open Data Center Alliance℠ recognizes the need to develop Standard Units of Measure (SUoM) to describe the quantitative and qualitative
attributes of services to enable an easier and more precise comparison and discovery of the marketplace. This usage model is designed to
provide subscribers of cloud services with a framework and associated attributes used to describe and measure the capacity, performance,
and quality of a cloud service.
Purpose
This document describes the creation and use of SUoM for quantitative and qualitative measures that describe the capacity, performance, and
quality of the service components.
In the usage model detailed below, we restrict ourselves to consideration of the infrastructure as a service (IaaS) level, but the principles are
extensible as needed to platform as a service (PaaS) and software as a service (SaaS). This methodology can thus be used at a micro level to
define the characteristics of individual service components, and it can be extended to the macro level to predict the performance of a complex,
composite application landscape.
The intended use of the SUoM includes:
•• Within a service catalog to provide element parameters and categorization
•• As a definition of the expected service capabilities while services are in use
•• As a billing reference after consumption
Taxonomy
Table 1 provides a definition of the terms cloud subscriber and cloud provider, which are used throughout this document.
Table 1. Terminology

Cloud subscriber: A person or organization that has been authenticated to a cloud and maintains a business relationship with a cloud.
Cloud provider: An organization providing network services and charging the cloud subscribers. A (public) cloud provider provides services over the Internet.
Quantitative Measures
Quantitative units within the SUoM can be described and/or calibrated in terms of linear capability (e.g., 500 GB disk capacity), throughput
(e.g., 2,000 input/output operations per second (IOPS)), or consumption (e.g., $0.01 per million IO operations).
For IaaS, we begin with quantitative units for the three major components that the cloud provider needs to describe.
•• Compute (incorporating CPU and memory)
•• Storage
•• Network
For compute, there must be a consistent benchmark that is useful for comparison across a wide range of cloud subscriber needs. We propose
SPECvirt_sc2010 from www.SPEC.org. This benchmark covers three principal performance areas meaningful to many cloud subscribers: Web
hosting (user interface intensive), Java hosting (compute intensive), and mail (database/transaction intensive). To represent memory needs, we
suggest use of a default gigabytes-per-SPECvirt ratio and descriptions of double and quadruple memory density above this level.
For storage, measurement units must allow comparison of capacity, performance, and quality. Capacity can be measured in terabytes (TB).
Performance can be provided in IOPS per TB. Quality is rated by level.
For networks, measurement units must allow comparison of bandwidth, performance, and quality. Bandwidth can be represented in gigabits
per second (Gb/s). Performance can be quantified in latency, jitter, and throughput per minute. Quality in networks, as in storage, is rated by level:
Bronze, Silver, Gold, and Platinum.
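To make these quantitative units concrete, the following minimal sketch shows how a service catalog entry might encode them, in Python. It is illustrative only and not part of any ODCA specification; all type names, field names, and figures are invented for the example.

    from dataclasses import dataclass
    from enum import Enum

    class QualityLevel(Enum):
        # Qualitative rating shared by the storage and network components
        BRONZE = 1
        SILVER = 2
        GOLD = 3
        PLATINUM = 4

    @dataclass
    class ComputeUnit:
        specvirt_score: float           # SPECvirt_sc2010 result for the instance class
        memory_gb_per_specvirt: float   # default ratio; 2x or 4x for higher density

    @dataclass
    class StorageUnit:
        capacity_tb: float              # linear capability, e.g. 0.5 for 500 GB
        iops_per_tb: int                # throughput, e.g. 2000
        quality: QualityLevel

    @dataclass
    class NetworkUnit:
        bandwidth_gbps: float           # linear capability in Gb/s
        latency_ms: float               # performance figures
        jitter_ms: float
        quality: QualityLevel

    @dataclass
    class CatalogEntry:
        compute: ComputeUnit
        storage: StorageUnit
        network: NetworkUnit
        usd_per_million_io: float       # consumption-based unit, e.g. 0.01

    entry = CatalogEntry(
        compute=ComputeUnit(specvirt_score=1200.0, memory_gb_per_specvirt=0.5),
        storage=StorageUnit(capacity_tb=0.5, iops_per_tb=2000, quality=QualityLevel.SILVER),
        network=NetworkUnit(bandwidth_gbps=1.0, latency_ms=5.0, jitter_ms=0.5,
                            quality=QualityLevel.SILVER),
        usd_per_million_io=0.01,
    )

A catalog entry in this shape can serve all three intended uses listed earlier: it parameterizes the catalog, states the expected capability during use, and names the units that billing is later expressed in.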
In addition, standard units are needed for aspects such as:
•• Time to deployment. When does deployment start and end?
•• Duration of use. When does billing start and end?
•• Block scale unit. The multiplier used for “wholesale” consumption of capacity;
e.g., providing resources in the thousands of cores for calculation farms.
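A block scale unit, for instance, acts as a simple round-up multiplier. A minimal sketch follows; the 1,000-core block size and the requested figure are invented:

    # “Wholesale” capacity is ordered in whole blocks; the block size is the multiplier.
    BLOCK_SCALE_UNIT_CORES = 1_000            # invented block size for a calculation farm

    requested_cores = 3_500
    blocks = -(-requested_cores // BLOCK_SCALE_UNIT_CORES)   # ceiling division
    print(blocks, "blocks =", blocks * BLOCK_SCALE_UNIT_CORES, "cores")  # 4 blocks = 4000 cores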
Qualitative Measures: For Service Assurance Level
This usage model does not define how the cloud provider manages the infrastructure; instead, it focuses on how the cloud subscriber wants to
consume infrastructure. Therefore, we can define levels similar to those of CMMI/COBIT (levels 1–5) or based on ITIL processes.
The outcomes, such as the functionality of the service and the service delivery level, are important to consumers of a service. The business
mechanism for measuring outcomes is the service-level agreement and the technical mechanisms are the service catalog and the service
orchestration.
We are proposing a way to measure the service assurance level against an agreed standard in order to enable the qualitative measures.
Definition
A framework of four levels of service assurance differentiation–Bronze, Silver, Gold, and Platinum–is identified (Table 2). Each of these levels
stands by itself and can be applied to various industry sectors and IT environments.
Table 2. A description of each of the levels of service assurance

•• Bronze: Represents the lower-end corporate requirement and may equate to a reasonably high level for a small to medium business customer. Example: Development environment.
•• Silver: Represents a trade-off of more configured functionality and service-level quality, while still considering cost effectiveness. Example: Test environment; “out of the box” production environment.
•• Gold: Represents a preference for a higher quality of service within the range of the service-level agreement, with more tailoring, which may equate to additional cost. Example: Finance sector production environment.
•• Platinum: Represents the highest level of contemplated corporate requirements. Example: Special purpose, high-end demand environment.
General Guidelines
The following organization is used:
•• The Service-level Management section defines the parameters for the service-level management of the service assurance framework.
•• The Service-level Attributes section defines the attributes of the service assurance levels in non-functional requirements (NFR) terms.
•• The attributes defined in the Service-level Attributes section are mapped to the Bronze, Silver, Gold, and Platinum levels described in Table 4.
The various ODCA usage models further define detailed parameters and key performance indicators.
The following guidelines apply:
•• A given solution can combine different service levels for different solution elements. For example, Gold security features can be combined with
Bronze performance features.
•• The Bronze and Silver levels can be hosted on the same hardware and infrastructure software, with logical separation.
•• The Gold level can be hosted on the same hardware and infrastructure software only with other Gold-level services.
•• The Platinum level requires either separate hardware and infrastructure software for each cloud subscriber, or a proven, acceptable control
method for separation at all layers (e.g., 256-bit encryption); even then, sharing is limited to other Platinum services only.
•• There are two service constructs at the Platinum level.
–– The cloud provider has extremely limited access, supported by multi-factor authentication, to only the base infrastructure and not to the
applications or data, in order to provide the expected services and manage the infrastructure adequately.
–– The cloud provider has administrative access in IaaS mode for the hardware and infrastructure software, with multi-factor authentication
and four eyes-plus control of access and commands, to a structured framework of controls that support the delivery of the expected
service capabilities.
•• General management resources, such as security information and event management, help desks, and monitoring consoles, offer one instance
for all levels. The single instance could offer tiered services with different service qualities. There should be evidence of quality-of-service
controls to enable this.
•• The cloud subscriber can view detailed data from network probes, such as Network Intrusion Detection System/Network Intrusion Prevention
System (NIDS/NIPS) sensors, located in the network segments associated with their particular service level.
Service-level Management
Table 3 describes the service-level management for the service assurance framework.
Service-level Attributes
The attributes of the service levels, expressed in NFR terms, are:
•• Availability. The degree of uptime for the solution, taking into account contention probabilities, which includes an indication of response time
to problems and incidents, planning and maintenance schedules and impacts, and business continuity capability.
•• Performance. The extent to which the solution is assured to deliver a level of output.
•• Elasticity. The configurability and expandability of the solution, including the ability to adjust consumed service capacities up or down, and the
scale or limitations of the capacity changes.
•• Manageability. The degree of automation and control available for managing the solution.
•• Recoverability. The solution’s recovery point and recovery time objectives.
•• Interoperability. The degree to which services can interact with other services and infrastructure in multiple clouds. Interoperability is
described from two perspectives: (1) portability–the serial process of moving a system from one cloud environment to another, and (2)
interconnectability–the parallel process in which two co-existing environments communicate and interact.
•• Security and privacy. Describes the attributes that indicate the effectiveness of a cloud provider’s controls on access to services and
data protection, and the physical facilities from which the services are provided. These attributes should provide an indication of physical
protection, logical protection, controls and monitoring measures in place, compliance to country and corporate requirements, compliance with
regulatory and statutory laws and obligations, and remediation processes.
•• Configurability. Describes the features and functions of the services, including the available basic services, the available standard options to
add to the base services, the available customizable options and features to add to the base services, the ability to develop custom features for
the cloud subscriber, and the planned roadmap of functions and features for the service.
•• Long-distance migration. “Long distance” is defined as greater than 20 km of conductor between disparate data centers (cloud provider
sites), with inter-site latency assumed to be 10 ms or worse. The characteristics of long-distance migration include cross-provider
migration, cost-sensitive migration, open standards compliance, and live and at-rest migration parameters.
Table 4 describes the mapping of the service-level attributes to the Bronze, Silver, Gold, and Platinum assurance levels.
Note: The figures given for attributes such as availability are illustrative. More precise figures and definitions are defined in the related Open
Data Center Alliance documents.
It is not the Open Data Center Alliance’s desire to mandate a long list of measures. There is a point at which the number of measures begins to
make it more complex and difficult, instead of easier, to compare the cloud services of a number of cloud providers.
It should be understood by all parties that, while a cloud provider can predict and quantify how their infrastructure will behave, the cloud
provider cannot predict how the cloud subscriber’s workload will perform in that environment. That performance will depend on a number of
factors, of which even the cloud subscriber may not be aware. Instead, the SUoM gives both sides a common currency that they can use to
work toward such predictions.
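As a worked illustration of what such figures imply in practice, the following uses the illustrative Bronze and Silver availability percentages from Table 4; the 30-day month is an assumption of this example:

    # Downtime permitted per 30-day month at two illustrative availability levels.
    HOURS_PER_MONTH = 30 * 24  # 720 hours

    for level, availability in [("Bronze", 0.99), ("Silver", 0.999)]:
        downtime_hours = HOURS_PER_MONTH * (1 - availability)
        print(f"{level}: {availability:.1%} allows {downtime_hours:.1f} h of downtime per month")

    # Bronze: 99.0% allows 7.2 h of downtime per month
    # Silver: 99.9% allows 0.7 h of downtime per month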
Table 3. Service-level management mapped to the service assurance framework

Service Catalog, Service Orchestration
•• Bronze: Standard, out-of-the-box.
•• Silver: Standard, out-of-the-box with some customization features.
•• Gold: Partially integrated into the cloud subscriber’s IT processes.
•• Platinum: Fully detailed, integrated into the cloud subscriber’s IT processes.

Service-level Agreement (SLA)
•• Bronze:
–– Non-negotiable.
–– Reasonable efforts to respond to client needs, but lowest priority. Based on basic functionality, basic costs at affordable levels accompany the SLA.
•• Silver:
–– Provision made to provide good service and to be responsive to client and attentive to problems.
–– Cost increases may accompany the increased service elements but are still optimized due to economy of scale (potentially through multi-tenancy or similar commercial models).
•• Gold:
–– Enhanced service level, which provides priority attention for all incidents, service interruptions, and phone calls.
–– Service penalties for failure to deliver priority service (details to be specified).
–– Cost increases reflect the reduced ability of the cloud provider to achieve economy of scale, because of the restrictions resulting from accommodating similar levels of enhanced tooling and services.
•• Platinum:
–– Fully detailed, specific per case.
–– Highest possible service level, with immediate access to the highest level of service response from named (pre-agreed with cloud subscriber) service contacts.
–– First priority access to any resources (an expectation of zero contention with other tenants) and dedicated fault teams on any incident.
–– Dedicated service assurance team with named representatives working every month with the cloud provider to assure the highest level of service reporting and interfacing. Major service penalties for breach of these service expectations.
–– Significant costs expected because of costly dedicated tooling and reduced economy of scale, together with increased infrastructure to achieve the scalability and redundancy requirements.

Operational-level Agreement
•• Bronze: Coverage: Services. Support: Basic, such as help desk and email.
•• Silver: Coverage: Same as Bronze. Support: Bronze plus level 2 support.¹
•• Gold: Coverage: End-to-end, including all dependent elements. Support: Silver plus escalation processes, such as definition of roles and responsibilities and inclusion of detailed operating procedures.
•• Platinum: Coverage: Gold plus infrastructure elements. Support: Gold plus automatic support tools.

Monitoring and Reporting (on the accomplishment of the service levels)
•• Bronze: Summarized over weeks or months.
•• Silver: Daily.
•• Gold: Hourly.
•• Platinum: Real-time, continuous.

¹ For more information about the levels of technical support, see www.wikipedia.org/wiki/Technical_support.
Table 4. Service-level attributes mapped to the service assurance framework

Availability: Detailed description can be found in the ODCA Usage document: “Compute Infrastructure as a Service”²
Bronze
Reasonable efforts to attain 99% availability for the IaaS (up to but not including the cloud subscriber components).
The cloud provider cannot be penalized for any failure of the OS or app in the guest virtual machine, except where the failure is clearly
the fault of the hypervisor or underlying hardware solution.
Silver
Provisions made to attain 99.9% availability, including increased focus on preventing impact from contention risks.
Gold
Specifically demonstrable additional measures needed to achieve and sustain 99.9% availability, demonstrating resilience to reasonably
anticipated fault conditions. Service penalties should apply at this level.
Platinum
Highest possible focus on uptime to achieve 99.9% availability, with the expectation of significantly increased service penalties (beyond
Gold level) if not achieved.
Performance: Detailed description can be found in the ODCA Usage document: “Compute Infrastructure as a Service”
Bronze
Systems are installed and configured in a correct manner, so that they deliver the levels of throughput specified by their manufacturer.
Silver
Provisions made to tune the components to the environment in which they run. Performance is monitored, and significant deviations are remedied.
Gold
Specifically demonstrable additional measures applied to achieve and sustain acceptable throughput. Performance is monitored. Service
penalties should apply at this level.
Platinum
Highest possible focus on performance to achieve acceptable end-user experience, which may itself be monitored. Expectation of
significantly increased service penalties (beyond Gold level) if not achieved.
Elasticity: Detailed description can be found in the ODCA Usage document: “Compute Infrastructure as a Service”
Bronze
Reasonable efforts to provide ability to grow by at least 10% above current usage within 24 hours, or 25% within a month. Growth is
bounded by an upper limit per month.
Silver
Provisions made to provide ability to grow by 10% within 2 hours, 25% within 24 hours, 100% within a month. Growth is bounded by an
upper limit per month.
Gold
Significant additional demonstrable steps taken to be able to respond very quickly to an increase or a decrease in needs: 25% within 2 hours,
50% within 24 hours, 300% within a month. Penalties are applied if this capacity is not available to these scale points when requested.
Platinum
Highest capability to flex up and down: by 100% within 2 hours and 1,000% within a month, with major penalties if this capacity is not available at any time, as needed.
Manageability: Detailed description can be found in the ODCA Usage document: “Compute Infrastructure as a Service”
Bronze
Simple manual user interface for orchestration, monitoring, and billing.
Silver
Web service interface for all functions. Integration of workflows for incident, problem, change, orchestration, and billing.
Gold
Real-time interface to a full range of information on the service, including performance, configuration, and transaction rates. Availability
goal of 99.99% mean time between failure (MTBF) on the management interface.
Platinum
Real-time, highly granular management interface capable of the fullest range of service interfaces, from policy to millisecond-level probes.
MTBF goal of 99.99% on the management interface, with penalties for a breach.
Recoverability: Detailed description can be found in the ODCA Usage document: “Compute Infrastructure as a Service”
Bronze
Reasonable efforts to recover the IaaS service (for example, access to boot volumes and the ability to reboot the cloud subscriber virtual
environment again) with up to 24 hours of data loss (for example, loss of boot disk updates due to no intra-day backup) and up to 24 hours
of recovery time. No site disaster recovery (DR). Note that the focus is on recoverability of the underlying service, after which a cloud
subscriber still has its own recovery to complete.
Silver
Provisions made to recover within 4 hours, with up to 24 hours of data loss (no DR for a full site disaster).
Gold
Enhanced recovery capability to recover within 2 hours for hardware failure, 24 hours for site failure, and no more than 4 hours of data loss.
Platinum
Highest recovery focus to provide as close to continuous nonstop availability as possible, aiming for less than one hour recovery and less
than 15 minutes data loss even in the event of a site failure.
² www.opendatacenteralliance.org/docs/ODCA_Compute_IaaS_MasterUM_v1.0_Nov2012.pdf
Security and Privacy: Detailed description can be found in the ODCA Usage documents: “Provider Security Assurance”³ and “Identity Management Interoperability Guide”⁴
Bronze
Basic security and privacy as provided by the default-offered hypervisor, with the cloud subscriber self-certifying the service, as required.
Silver
Enterprise security and privacy equivalent–certification of security systems offered by the cloud provider, giving cloud subscribers the
confidence and evidence that the systems are securely maintained.
Gold
Financial organization security and privacy equivalent–certification of security systems and demonstration of compliance of those
systems with legislative- and sector-related compliance frameworks, confirmed by regular audits. Includes pro-active monitoring and
intrusion prevention functions at defined levels.
Platinum
Military organization security and privacy equivalent–certification and compliance guaranteed and proactive intrusion prevention, detection,
and monitoring at multiple levels.
Interoperability: Detailed description can be found in the ODCA Usage document: “Guide to Interoperability Across Clouds”⁵
Bronze
The cloud provider is asked to make basic provisions for portability and interconnectability of its service with the cloud subscriber’s own
services and solutions. This includes basic services to port and interconnect infrastructure workloads, applications, or business processes
within or across the same cloud provider’s systems and locations.
Silver
In addition to the Bronze-level requirements, the cloud provider is asked to make more advanced and complex provisions for portability
and interconnectability of its service with the cloud subscriber’s services and solutions, adding scale, performance, and greater focus on
minimization of downtime.
At this level, global portability and interconnectability within the cloud provider’s environment is also expected. Clearly defined prerequisites
and dependencies are available.
Gold
In addition to the Silver-level requirements, the cloud provider is asked to extend its interoperability considerations to accommodate
simple portability and interconnectability to a second cloud provider or between a cloud provider and a cloud subscriber’s internal IT. Clear
interfaces, based on open standards, are documented and integrated in this area. Some control and audit points are available for the cloud
subscriber, at key stages of the process.
Platinum
All previous levels are included. Additionally, the cloud provider is asked to provide greater focus on interoperability at scale, with more automated
interconnectability and more seamless portability for the cloud subscriber. This includes more sophisticated options to migrate workloads
with less disruption and more attention to details that make the solution interoperable with different hardware and software platforms.
At this level, documentation about data import and export structures, interfaces, and protocols is also expected.
Live reference examples of the interoperability features and best-practice documentation to assist the cloud subscriber to automatically
exploit all the features at any time are also expected. Additionally, control and audit points are defined for the cloud subscriber, for each
phase of the process.
Configurability: Detailed description can be found in the ODCA Usage document: “Service Orchestration”⁶
Bronze
No configuration options available: two or three pre-defined configurations are offered, with no further configuring possible.
Silver
Two or three base configurations exist, and the cloud subscriber can configure a small number of additional options against those pre-defined base options.
Gold
Many base options exist, and each option has a high configuration capability offered with it, reactively, by either the cloud subscriber or the
cloud provider, depending on the contract.
Platinum
Many base options exist, and each has a high configuration capability offered with it, proactively, by either the cloud subscriber or the cloud
provider, depending on the contract, with warning systems detecting and alerting to high and low water marks, and automation of configuration
changes enabled.
Long-distance Migration: Detailed description can be found in the ODCA Usage document: “Long Distance Workload Migration”⁷
Bronze
No long-distance migration standards identified or evident for the service.
Silver
Long-distance migration standards identified for the service, with a proprietary API provided.
Gold
Service based on open migration standards, with proof of concept (PoC) and testing facilities, as well as an open API, movement
methodology, and tooling to support movement and testing of services.
Platinum
Service based on open migration standards, with PoC and testing facilities, as well as an open API, movement methodology, and tooling to
support movement and testing of services; regular testing/migration of active services between distant nodes/cloud locations available.
³ www.opendatacenteralliance.org/docs/Security_Provider_Assurance_Rev%201.1_b.pdf
⁴ www.opendatacenteralliance.org/docs/Identity_Management_Interoperability_Guide_Rev1.0_b.pdf
⁵ www.opendatacenteralliance.org/docs/ODCA_Interop_Across_Clouds_Guide_Rev1.0.pdf
⁶ www.opendatacenteralliance.org/docs/ODCA_Service_Orch_MasterUM_v1.0_Nov2012.pdf
⁷ www.opendatacenteralliance.org/docs/Long_Distance_Workload_Migration_Rev1.0_b.pdf
Function Map
The flowchart in Figure 1 shows the importance of the SUoM during the three stages of the cloud subscriber and the cloud provider relationship.
•• Prior to use (selection). SUoM used to provide a realistic indication of the capacity and services offered, as described in the cloud provider’s
service catalog.
•• During use. SUoM used for assurance, or as a benchmark, that the capacity delivered is the capacity promised. The SUoM can also be used to
determine whether the current service will suffice in the event that major changes, such as a new OS, are planned.
•• Post use. SUoM used to report actual capacities used over a period of time to aid billing or future planning.
A significant advantage of using the SUoM usage model throughout these stages is that it creates a closed loop that in practice can lead
iteratively to better predictions and results.
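To illustrate the post-use stage, the following is a minimal sketch of turning recorded units into a bill. The usage record and both rates are invented for the example; the per-million-IO rate simply echoes the consumption-based unit quoted under Quantitative Measures.

    # Post-use billing sketch: price the recorded consumption with consumption-based
    # SUoM rates. The usage record and both rates are invented for the example.
    usage_record = {
        "io_operations": 4_200_000_000,   # recorded over the billing period
        "storage_tb_hours": 360.0,        # e.g. 0.5 TB held for 720 hours
    }
    rates = {
        "usd_per_million_io": 0.01,       # echoes the $0.01 per million IO operations unit
        "usd_per_tb_hour": 0.05,
    }

    io_cost = usage_record["io_operations"] / 1_000_000 * rates["usd_per_million_io"]
    storage_cost = usage_record["storage_tb_hours"] * rates["usd_per_tb_hour"]
    print(f"IO: ${io_cost:.2f}  storage: ${storage_cost:.2f}  total: ${io_cost + storage_cost:.2f}")
    # IO: $42.00  storage: $18.00  total: $60.00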
Figure 1. The three stages of the cloud subscriber and cloud provider relationship. Prior to use, the cloud subscriber determines needs and specifies requirements against the units described (licenses; processing, including OS memory, CPU cores, and speed; network bandwidth and latency; storage backup, capacity, and replication). During use, performance is monitored against those units. Post use, recorded use of the units feeds billing.
Methodology
The SUoM usage model can be considered at three conceptual levels:
•• At the lowest infrastructure level to define components
•• As benchmarks to predict the outputs or throughputs of an infrastructure
•• At the application level to predict or measure performance
The lower levels are simpler and more deterministic, while the higher levels are more useful to the cloud subscribers (Figure 2).
The cloud provider should be able to indicate service or infrastructure measures within a finely predictable range, certainly well within
±10 percent. From this, the cloud subscriber should ultimately be able to predict the performance of their systems with increasing accuracy
using an iterative approach. There is scope for the development of more accurate, and thus more useful, benchmarking and performance
prediction capabilities.
Amazon is a major cloud provider and, while not providing a benchmark as such, it is treated as a standard for comparison in terms of delivered
capacities and prices by many within the cloud services business. That does not, however, indicate that Amazon’s proprietary definitions should
be adopted as a standard for all players, but instead that a structure is needed that can effectively be used across all vendors.
Regarding system performance and transactional throughput, there are a number of potential sources for pre-existing measures:
•• TPC (www.tpc.org/information/benchmarks.asp)
•• SPEC (www.spec.org)
•• RPE2 (www.ideasinternational.com/IT-Buyers/Server-Performance)
•• SAPS (www.sap.com/solutions/benchmark/measuring)
These benchmarks can be considered starting points, taking into account the relative strengths and weaknesses of each in order to learn
what works. There are also organizations (e.g., www.CloudHarmony.com) that measure the actual performance of various cloud providers’
environments to provide an indication of their performance in terms of speed and reliability. However, such measures are not an accurate
indication of how a particular cloud subscriber’s systems will behave and perform in the tested environment. For those purposes, it may be
best for a cloud subscriber to develop its own benchmark, using the application in question.
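The following is a minimal sketch of such a do-it-yourself benchmark, assuming the subscriber can wrap one unit of its own application work in a function. The workload shown is a stand-in; everything here is illustrative rather than any prescribed ODCA harness.

    import time

    def workload() -> None:
        # Stand-in for one unit of the subscriber's real application work,
        # e.g. one order-entry transaction against a test database.
        sum(i * i for i in range(100_000))

    def benchmark(runs: int = 50) -> float:
        """Return sustained throughput in operations per second."""
        start = time.perf_counter()
        for _ in range(runs):
            workload()
        elapsed = time.perf_counter() - start
        return runs / elapsed

    if __name__ == "__main__":
        # Run the identical script in the QA environment and in each candidate
        # cloud environment; the ratio of the figures calibrates the environments
        # against each other for this specific application.
        print(f"{benchmark():.1f} ops/s")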
Figure 2. The Standard Units of Measure usage model can be considered at three conceptual levels: infrastructure components (component 1 through component n) are defined and tested; benchmarks (TPC, SPEC, RPE2, SAPS, DIY, etc.) predict and measure outputs; and applications (SAP, standards, bespoke, etc.) measure performance. Usefulness to the cloud subscriber increases toward the application level.
Because this is a new field, initial progress can best be made by “starting simple.” Yet, over time, further complexity and sophistication are
bound to be needed. Our proposed courses of action for the immediate future are:
•• Supply Side. Develop a structure and set of units whereby all suppliers can indicate the capacities of their infrastructure in terms that are:
(1) consistent within their own environment, and (2) at least acceptably comparable to other vendors’ environments.
•• Demand Side. Develop methods for being able to predict beforehand and analyze afterwards the performance of any given system within
such an environment, using the above measures.
Further developments that are needed include measures to indicate the time-to-deployment of a new environment in terms of its scale (using SUoM),
qualitative requirements, and common definitions of the starting and ending points.
Use Case
Goals
•• To ensure that cloud subscribers have the ability to predict performance, as well as track and record actual usage.
•• To ensure that cloud providers have the technical capability to give a deterministic indication of infrastructure capacities and track such
capacities in an auditable manner.
Considerations
Assumes the service catalog, as documented in “Service Catalog.”⁸
Sample usage
A cloud subscriber wants to run a newly developed system in a cloud environment. The commissioning of a suitable environment requires estimates of
necessary processing, storage, and I/O capacities. The system under development is run for a period in a quality assurance/acceptance environment
with a known number of users. The environment used is quantified, using SUoM, and the performance acceptability gauged. For the first production
deployment, the environment under consideration is also quantified using the same SUoM, factored by the number of users expected. Once deployed,
the environment’s performance and the actual number of users are monitored and adjusted accordingly, again using SUoM.
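A minimal sketch of the sizing arithmetic implied above follows, assuming (as a first approximation only) that consumption scales linearly with the number of users; all figures are invented:

    # Sizing sketch from the sample usage: quantify the QA environment in SUoM,
    # then factor by expected production users. Linear scaling is a first
    # approximation, to be refined iteratively once real monitoring data exists.
    qa_users = 50
    qa_specvirt = 400.0        # SPECvirt-equivalent compute consumed in QA
    qa_iops = 1_500            # storage throughput observed in QA

    prod_users = 800
    factor = prod_users / qa_users  # 16.0

    prod_estimate = {
        "specvirt": qa_specvirt * factor,  # 6400.0
        "iops": qa_iops * factor,          # 24000.0
    }
    print(prod_estimate)
    # Once deployed, re-measure with the same units and adjust the environment.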
Success scenario 1 (pre-usage)
•• Cloud provider is able to define infrastructure capacities to allow accurate estimates of potential performance associated with all offered services.
•• Cloud provider is able to ensure that cloud subscriber’s requirements for capacities are met over the mid to long term.
•• Cloud subscriber is notified, as part of the service catalog, of the levels of capacity to be provided.
Failure conditions 1
Cloud provider is unable to identify a capacity and deliver a figure for each use of the infrastructure.
Success scenario 2 (actual, instrumented)
Cloud provider is able to monitor actual capacities used. Cloud subscriber is notified after the fact, within acceptable bounds, of actual units used.
Failure conditions 2
Cloud provider is unable to identify the volume and rate of usage arising from each use of the infrastructure.
Failure conditions 3
Cloud provider reports the volume and rate of usage arising from each use of the infrastructure, but it is significantly outside the previously
indicated levels.
Failure handling
For all failure conditions, the cloud provider should notify the cloud subscriber of the inability to provide benchmark figures and/or of the actual
figures produced.
Requirements
Existence and use of a service catalog.
⁸ www.opendatacenteralliance.org/document-sections/category/71-docs?download=445:service-catalog
Benchmark Suitability Discussion
Of the SPEC standards, SPECvirt_sc2010 is perhaps the closest we have to a cloud metric. SPECvirt_sc2010 combines modified versions of
three previous standards: SPECweb2005, SPECjAppServer2004, and SPECmail2008. The client-side SPECvirt_sc2010 harness controls the
workloads. Scaling is achieved by running additional sets of virtual machines (VMs), called “tiles,” until overall throughput reaches a peak
(wording taken from measurements of a physical machine⁹). All VMs must continue to meet required quality-of-service criteria.
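The following schematic sketch illustrates the tile-scaling idea only; it is not the SPEC harness, and run_tiles is a toy stand-in for deploying tiles and driving the real workloads:

    # Schematic of the tile-scaling idea: keep adding tiles (sets of web, Java,
    # and mail VMs) while total throughput rises and every VM still meets its
    # quality-of-service criteria. run_tiles is a toy model, not the SPEC harness.
    def run_tiles(n: int) -> tuple[float, bool]:
        throughput = n * 100.0 / (1.0 + 0.15 * n)   # diminishing returns per tile
        qos_ok = n <= 8                             # QoS fails once oversubscribed
        return throughput, qos_ok

    def find_peak() -> tuple[int, float]:
        best_tiles, best_throughput = 0, 0.0
        n = 1
        while True:
            throughput, qos_ok = run_tiles(n)
            if not qos_ok or throughput <= best_throughput:
                return best_tiles, best_throughput  # previous tile count was the peak
            best_tiles, best_throughput = n, throughput
            n += 1

    print(find_peak())  # (8, 363.6...) with this toy model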
In general, vendors who have submitted results for these benchmarks have used well-documented open source stacks, generally based around
Red Hat Linux and the Apache stack; HP’s test platform for the ProLiant DL380 G7 server is a good example. Vendors are, however, allowed
to choose whatever stack they prefer.
A key differentiator for this metric is that the test harness fires requests from a process external to the stack processing the requests. Compare
this with many benchmarks where the stack and inputs and outputs are all contained within one VM. While the test harness could be bundled
with the stack in the cloud VM to be tested, keeping it external allows “real world” tests to be conducted across the cloud data center or from
client sites to a cloud VM. This brings some degree of user experience testing into the overall picture, such as results for network and data
center latency.
RPE2 is another composite benchmark. It consists of several industry standards and can be used to provide a standard, objective measure of
compute power across all hardware, irrespective of chip architecture. It is intended to fully support virtualized environments. The six benchmarks
it includes are listed below. (More information on these benchmarks and RPE2 is available in the “Recommended Usage of IDEAS RPE2”
white paper.¹⁰)
•• TPC-C. The Transaction Processing Performance Council’s online transaction processing benchmark. This simulates the transactions of a
complete order entry environment where a population of terminal operators executes transactions against a database for simulated order
fulfillment from a set of warehouses. See Overview of TPC Benchmark C: The Order Entry Benchmark for full details.
•• TPC-H. The Transaction Processing Performance Council’s ad hoc decision support benchmark. See TPC-H for details.
•• SAP SD 2-Tier. This is an SAP two-tier sales and distribution order processing benchmark. See Benchmark Sales and Distribution
on SAP’s web site for details.
•• SPECjbb2005. The Standard Performance Evaluation Corporation (SPEC) benchmark for Java server performance. SPECjbb2005 evaluates the
performance of server-side Java by emulating a three-tier client/server system (with emphasis on the middle tier). The benchmark exercises
the implementations of the Java Virtual Machine (JVM), Just-In-Time (JIT) compiler, garbage collection, threads, and some aspects of the
operating system. It also measures the performance of CPUs, caches, memory hierarchy, and the scalability of shared memory processors
(SMPs). See SPECjbb2005 for full details.
•• CINT2006 and CFP2006. The integer and floating-point components of SPEC CPU2006, the SPEC benchmark for CPU performance.
Each of these tests has over 10 sub-tests in a range of programming languages. See SPEC CPU2006, CINT2006, and CFP2006 for full details.
Although RPE2 is a comprehensive benchmark, it is complex to implement, with many tests requiring hours of runtime to provide
meaningful comparable results. Some debate is needed to analyze whether this would be the appropriate benchmark for comparing
cloud providers.
⁹ See www.spec.org/virt_sc2010
¹⁰ as.ideascp.com/cp/RPE2_Whitepaper.pdf
Summary of Industry Actions Required
In the interest of giving guidance on how to create and deploy solutions that are open, multi-vendor, and interoperable, we have identified
specific areas where the Alliance believes there should be open specifications, formal or de facto standards, or common IP-free
implementations. Where the Alliance has a specific recommendation on the specification, standard or open implementation, it is called out
in this usage model. In other cases, we will be working with the industry to evaluate and recommend specifications in future releases of this
document.
The following are industry actions required to refine this usage model:
1. Create a list of SUoM to be defined and calibrated (Alliance).
2. Align to the proposed service catalog descriptions and units (Alliance).
3. Identify applicable units per component configuration (hardware vendor and/or cloud provider).
4. Integrate within published catalogs/APIs (cloud provider).
5. Incorporate into methods for reporting current usage and billing (cloud provider).
6. Benchmark to calibrate units (cloud subscriber and/or third parties).
7. Develop more sophisticated benchmark and application performance prediction capabilities (industry).
8. Provide feedback to the Alliance on usage experiences (cloud subscriber and cloud provider).
9. Continually review usage and experiences in practice (cloud subscriber via the Alliance).