When data center operators pursue high asset utilization and agility, three drivers matter: composability, scalability, and openness. Composability means that companies can match hardware exactly to specific types of workloads, down to the individual hardware components (e.g., CPU, memory, FPGA, NVMe modules, coprocessors, network connections). Scalability means that companies can use as many components as a workload needs (even if they are scattered across physical racks) and can assemble, in real time, compute capability of whatever size the workload requires. Openness means that companies can choose and integrate the components best suited to their workloads without running into compatibility issues.
Composable/disaggregated infrastructure (CDI) is an architectural approach designed to provide fine-grained hardware composability, high scalability, and open management application programming interfaces (APIs). CDI works together with virtualization and software-defined infrastructure (SDI): by overcoming the fixed ratios of compute, memory, storage, accelerator, and network resources in conventional servers, it improves the efficiency and flexibility of the data center.
CDI is a key piece of the puzzle for meeting the requirements of large-scale data centers. So what challenges do today's data centers face that create the need for CDI?
Data center challenges
Some trends that require dynamic hardware configuration include:
• Hyperscale growth - cloud computing, edge computing, and new computing models are driving rapid expansion among data center service providers, so that traditional deployment and management methods cannot keep up.
• High density - the demand for more compute and storage capacity means that data center operators are trying to deliver more computing power within their budgets for power, cooling, and floor space.
• New workloads - big data, the Internet of Things, artificial intelligence, and other new workloads strain data centers at scale. In addition, these applications often show significant changes in requirements and rapid growth over time.
• DevOps and microservices - in the past, most applications sat statically on a single machine. Today's applications, by contrast, are composed of physically distributed, continuously upgraded, dynamically optimized, interconnected software components. The hardware must be equally flexible.
• New hardware technologies - new applications are supported by many kinds of hardware: different types of processors, memory, and attached devices. This makes any fixed, "one size fits all" hardware configuration inefficient and inflexible.
More and more data centers are being asked to run larger, more complex workloads. These workloads often differ from one another, so their hardware requirements can vary by workload and can even change from hour to hour during the day. For example, some workloads may require more processing power or memory capacity; others may require NVMe storage or dedicated processors. In addition, it is desirable to share high-end devices across multiple workloads at different times in order to reduce total cost of ownership.
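To make the "one size fits all" problem concrete, here is a toy comparison in Python. All workload names and numbers are hypothetical; the point is only that a single fixed server shape either strands resources a workload does not need or falls short of the ones it does.

```python
# Toy illustration: a fixed server shape vs. diverse workload requirements.
FIXED_SERVER = {"cpu_cores": 24, "memory_gb": 192, "nvme_tb": 4, "gpus": 0}

WORKLOADS = {
    "web-frontend":   {"cpu_cores": 8,  "memory_gb": 16,  "nvme_tb": 0, "gpus": 0},
    "in-memory-db":   {"cpu_cores": 16, "memory_gb": 512, "nvme_tb": 2, "gpus": 0},
    "model-training": {"cpu_cores": 24, "memory_gb": 128, "nvme_tb": 8, "gpus": 4},
}

for name, need in WORKLOADS.items():
    # Resources the fixed server provides but this workload cannot use.
    stranded = {k: FIXED_SERVER[k] - v for k, v in need.items() if FIXED_SERVER[k] > v}
    # Resources this workload needs but the fixed server cannot supply.
    short = {k: v - FIXED_SERVER[k] for k, v in need.items() if v > FIXED_SERVER[k]}
    print(f"{name}: stranded={stranded} short={short}")
```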
How can you tell whether a data center is under pressure?
What are the practical implications of these new challenges? How do organizations know whether their data centers are affected by them? Some real indicators of stress in the data center include:
• Data center management remains complex and requires a large technical staff.
• Even in virtualized environments, average utilization rarely exceeds 50%, and non-virtualized data centers run at around 20% to 30%.
• Provisioning hardware for new applications still takes days or weeks and requires multiple experts.
• The Intelligent Platform Management Interface (IPMI) is 20 years old and is inherently limited by its protocol and bit-level coding techniques. Data centers need a more scalable, secure, and web-friendly management standard.
• Interoperability between devices and management software from different vendors is often problematic, limiting functionality and programmability.
• A CPU upgrade usually requires replacing the entire server and everything in the server box, including storage devices, power supplies, fans, and network adapters.
• Application developers are slowed by today's requirements, deployment, validation, and provisioning processes.
• Responding to unforeseen changes in application capacity requirements is slow and labor intensive.
All of these challenges share a common source: data center operators cannot easily allocate specific hardware resources, at fine granularity and at scale, to meet the changing requirements of specific workloads (either individually or as a group).
The limitations of virtualization and software-defined infrastructure (SDI)
Virtual machines (VMs) allow multiple applications to run on one server, helping to make better use of the server's hardware, achieve rapid provisioning and load balancing, and increase management automation. Containers provide many of the same advantages, because an application can be packaged together with all of its dependencies and deployed dynamically to servers in response to changes in the workload, further improving hardware utilization and flexibility.
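As a minimal sketch of that dynamic deployment, the snippet below starts a packaged container image with resource limits chosen for the current workload. It assumes the Docker SDK for Python and a running Docker daemon; the image name and the limits are hypothetical.

```python
import docker  # Docker SDK for Python; requires a running Docker daemon

client = docker.from_env()

# Start one more instance of a packaged service in response to rising load.
container = client.containers.run(
    "example.com/analytics-service:latest",  # hypothetical image: app + all its dependencies
    detach=True,
    mem_limit="4g",           # cap this instance's memory
    nano_cpus=2_000_000_000,  # roughly two CPUs' worth of compute
)
print("started container", container.short_id)
```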
Software-defined infrastructure (SDI) extends the concept of hardware abstraction beyond compute servers to cover other infrastructure elements (including file servers, storage clusters, and network switches), so that the entire data center infrastructure becomes software-programmable, like the operating environment and the applications that run on it. What organizations still lack, however, is the ability to compose the elements inside a server (that is, to assemble specific hardware resources on demand) from anywhere in the data center. Composable/disaggregated infrastructure (CDI) provides these missing pieces.
The virtues of composable/disaggregated infrastructure (CDI)
In a data center with CDI enabled, each server's compute modules, non-volatile memory, accelerators, storage, and so on are disaggregated into shared resource pools and can therefore be managed individually under software control. The disaggregated components can then be recomposed, also under software control, into workload-optimized servers, regardless of where the components physically reside; a short code sketch of this idea follows the list below. Studies have shown that CDI can deliver TCO gains of up to 63% (55% in capital expenditure, 75% in operating costs), with technology refreshes saving 44% of capital expenditure and 77% of labor.
These savings are the result of:
• faster and easier scaling, thanks to disaggregation, common management APIs, and vendor interoperability,
• more flexibility in application development, provisioning, and lifecycle management,
• better resource utilization, less overprovisioning, and dynamic load adjustment,
• independent upgrade cycles (i.e., only the target resource needs to be replaced, not the entire server),
• optimized performance through custom configurations, including fast non-volatile memory (NVM) and accelerators,
• more automated infrastructure management and more efficient use of staff.
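The sketch below illustrates, in purely conceptual Python, the compose/decompose cycle described above: components sit in shared pools, a workload-optimized logical node is assembled under software control, and its components return to the pools when released. All pool names, component names, and quantities are hypothetical, and no real CDI API is implied.

```python
from dataclasses import dataclass, field

@dataclass
class Pool:
    """A shared pool of one kind of disaggregated component."""
    name: str
    free: list = field(default_factory=list)

    def take(self, count):
        if count > len(self.free):
            raise RuntimeError(f"pool {self.name} has only {len(self.free)} free")
        taken, self.free = self.free[:count], self.free[count:]
        return taken

    def give_back(self, items):
        self.free.extend(items)

# Hypothetical shared pools spread across the rack.
pools = {
    "cpu":  Pool("cpu",  [f"cpu-{i}" for i in range(16)]),
    "nvme": Pool("nvme", [f"nvme-{i}" for i in range(8)]),
    "fpga": Pool("fpga", [f"fpga-{i}" for i in range(4)]),
}

def compose_node(spec):
    """Assemble a workload-optimized logical node from the shared pools."""
    return {kind: pools[kind].take(count) for kind, count in spec.items()}

def decompose_node(node):
    """Return the node's components to the pools for reuse by other workloads."""
    for kind, items in node.items():
        pools[kind].give_back(items)

# Compose a node for an analytics workload, then release it when done.
node = compose_node({"cpu": 4, "nvme": 2, "fpga": 1})
print(node)
decompose_node(node)
```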
Facebook, Google, and other major cloud service providers (CSPs) are actively pursuing disaggregated architectures for their data centers. Some of their implementations are custom-built, and most rely on proprietary software and APIs. To keep pace with the biggest CSPs, organizations without that scale need commercial, off-the-shelf CDI solutions. Common, open technical standards will help the industry achieve scale and make CDI generally available from a choice of vendors.
An open blueprint for composable/disaggregated infrastructure (CDI)
This is the goal of Intel Rack Scale Design (Intel RSD), a blueprint for industry innovation centered on a common CDI-based data center architecture. Intel RSD is an implementation specification that enables interoperability between hardware and software vendors.
Intel RSD defines two key aspects of a logical architecture for implementing CDI. The first is a set of design specifications that define the hardware and software capabilities required at the module, rack, and data center levels to achieve fine-grained composability of the infrastructure under scalable software control. The second is a set of common, open APIs that expose these capabilities to higher-level orchestration software from multiple open source or commercial vendors.
These APIs are defined in Redfish, an open, extensible, and secure standard built on web-friendly principles (RESTful APIs and a JSON data model) as a modern, open management framework to replace IPMI. Redfish is a product of the Distributed Management Task Force (DMTF) Scalable Platforms Management Forum, an industry initiative announced in September 2014 by Broadcom, Dell, Ericsson, Hewlett-Packard, Intel, Lenovo, Microsoft, Supermicro, and VMware. Intel RSD extensions are regularly submitted to the DMTF Scalable Platforms Management Forum as proposals for inclusion in the official Redfish standard.
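Because Redfish is just HTTPS plus JSON, it can be explored from a few lines of Python. The sketch below assumes a management endpoint at a hypothetical address with basic-auth credentials; it reads the standard Redfish service root and lists the systems it advertises. Which composition features are available depends on the particular implementation.

```python
import requests

# Hypothetical management endpoint and credentials.
BASE = "https://10.0.0.10"
AUTH = ("admin", "password")

# Every Redfish service exposes its service root at /redfish/v1/.
root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()

# Follow the link to the ComputerSystem collection and list its members.
systems_url = root["Systems"]["@odata.id"]
systems = requests.get(f"{BASE}{systems_url}", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    system = requests.get(f"{BASE}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Name"), system.get("PowerState"))

# Services that support composition also expose /redfish/v1/CompositionService,
# whose ResourceBlocks and ResourceZones describe the pooled components that
# can be assembled into composed systems.
```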