Sunday, January 21, 2018

How to select servers for an enterprise data center migration

To optimize performance during a data center migration, the IT department should assess its primary workloads and determine how to select servers that will run them most efficiently.
Servers are the core of modern computing. Yet the question of how to select the right servers to carry enterprise workloads can cause considerable confusion for managers. While it is possible to fill a data center with identical white-box systems, virtualize and cluster them, and run almost any workload on them, cloud services are changing how enterprises run applications. As more organizations move workloads to public cloud services, local data centers need fewer resources to host the workloads that remain on premises. This is prompting IT and business leaders to seek more value and performance from a shrinking server footprint.
Today, the ubiquity of the white-box system is being challenged by a new wave of specialized server features. Some organizations are revisiting the notion that a single server design can really suit every application scenario. In practice, an enterprise can select, or even customize, its server cluster hardware to fit a specific category of use.
Virtual machine consolidation and network I/O bring added advantages
The core advantage of server virtualization is the ability to host multiple virtual machines (VMs) on the same physical server, making fuller use of the server's computing resources. Virtual machines rely on server memory (RAM) and processor cores. Because a VM can be configured with widely varying amounts of memory and numbers of processor cores, it is impossible to state exactly how many VMs a given server can host. As a rule of thumb, however, choosing a server with more memory and more processor cores usually allows more virtual machines to run on the same physical host, which improves consolidation.
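As a minimal sketch of that rule of thumb, the Python snippet below estimates VM density from a host's core count and RAM. The per-VM sizes and overcommit ratios are illustrative assumptions, not vendor guidance; real density depends on the workload mix.

```python
# Rough estimate of how many VMs a host can support, given its RAM and
# core count. All numbers are illustrative assumptions, not vendor
# guidance; real density depends on the workload mix.

def estimate_vm_capacity(host_cores, host_ram_gb,
                         vm_vcpus=4, vm_ram_gb=16,
                         cpu_overcommit=4.0, ram_overcommit=1.0):
    """Return the smaller of the CPU-bound and RAM-bound VM counts."""
    by_cpu = int(host_cores * cpu_overcommit / vm_vcpus)
    by_ram = int(host_ram_gb * ram_overcommit / vm_ram_gb)
    return min(by_cpu, by_ram)

# Example: a 2-socket host with 28 cores per socket and 768 GB of RAM.
print(estimate_vm_capacity(host_cores=56, host_ram_gb=768))  # -> 48 (RAM-bound)
```

In this example the host is RAM-bound, which is common for consolidated virtualization hosts and is one reason the rule of thumb favors generous memory configurations.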
For example, a Dell EMC PowerEdge R940 rack server can be configured with processors of up to 28 cores each and provides 48 double data rate 4 (DDR4) dual in-line memory module (DIMM) slots, supporting up to 6 TB of memory. Some organizations may forgo individual rack servers in favor of blade servers, or of nodes within a hyper-converged infrastructure system. Servers intended for aggressive virtual machine consolidation should also include resilience features, such as redundant hot-plug power supplies, along with resilient memory features such as hot-plug DIMMs and DIMM mirroring.
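That 6 TB ceiling follows directly from the slot count; a quick sanity check, assuming 128 GB DDR4 modules (the largest commonly available at the time):

```python
# Maximum memory = DIMM slots x largest supported module size.
dimm_slots = 48
module_gb = 128                 # assumed 128 GB DDR4 modules
print(dimm_slots * module_gb)   # -> 6144 GB, i.e. 6 TB
```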
A secondary concern when choosing a server for a high degree of consolidation is network I/O. Enterprise workloads typically exchange data, access centralized storage resources, and interact with users across the LAN or WAN. When multiple virtual machines share the same low-end network port, network bottlenecks can result. Consolidated servers benefit from fast network interfaces, such as a 10 Gigabit Ethernet (10 GbE) port, but it is often more economical and flexible to choose a server with multiple 1 GbE ports, which can be aggregated together to improve speed and resilience.
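A back-of-the-envelope comparison of the two options might look like the sketch below. The per-VM demand and the 70% usable-capacity factor are illustrative assumptions; measure real traffic before sizing production hardware.

```python
# Back-of-the-envelope check of NIC options against aggregate VM traffic.
# Per-VM demand and the usable-capacity factor are illustrative assumptions.

def fits(nic_count, nic_gbps, vm_count, per_vm_gbps, usable=0.7):
    """True if the NIC group can absorb the aggregate VM demand."""
    return vm_count * per_vm_gbps <= nic_count * nic_gbps * usable

vms, demand = 40, 0.15            # 40 VMs at ~150 Mbit/s each
print(fits(1, 10, vms, demand))   # single 10 GbE  -> True  (6.0 <= 7.0)
print(fits(4, 1, vms, demand))    # bonded 4x1 GbE -> False (6.0 >  2.8)
```

Here a single 10 GbE port absorbs the aggregate demand while four bonded 1 GbE ports do not, which is why consolidation density and network capacity have to be sized together.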
Containers represent a relatively new virtualization approach that lets developers and IT teams create and deploy applications as packaged instances together with their dependencies, while all containers share the same underlying operating system kernel. Containers are particularly attractive for developing and deploying highly scalable, cloud-based applications.
As with virtual machine consolidation, computing resources directly affect the number of containers a server can host. Servers intended for containers should therefore provide ample RAM and processor cores; more computing resources generally allow more containers.
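The same sizing logic as the VM estimate above applies; the sketch below is a minimal illustration, with the per-container requests and host reserve chosen arbitrarily.

```python
# Estimate container density from per-container resource requests,
# reserving some capacity for the OS and container runtime. All
# figures are illustrative assumptions.

def estimate_container_capacity(host_cores, host_ram_gb,
                                cpu_request=0.25, ram_request_gb=0.5,
                                host_reserve_cores=2, host_reserve_gb=8):
    by_cpu = int((host_cores - host_reserve_cores) / cpu_request)
    by_ram = int((host_ram_gb - host_reserve_gb) / ram_request_gb)
    return min(by_cpu, by_ram)

# Example: a host with 32 cores and 128 GB of RAM.
print(estimate_container_capacity(32, 128))  # -> 120 (CPU-bound)
```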
But a large number of simultaneous containers poses a serious internal I/O challenge for the server. Every container must share a common OS kernel, which means tens or even hundreds of containers may be contending for the same kernel, adding operating-system latency that can hurt container performance. Moreover, containers are usually deployed as application components rather than complete applications. These components must communicate with one another and scale out as needed to sustain overall workload performance, which creates a large, and sometimes unpredictable, volume of API traffic between the containers. In both cases, the server's internal I/O bandwidth and the efficiency of the application's architecture limit the number of containers a server can successfully run.
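One practical way to keep dozens of containers from starving one another on a shared kernel is to cap each container's CPU and memory. Below is a minimal sketch using the Docker SDK for Python (installed with `pip install docker`); the image and the specific limits are arbitrary choices for illustration.

```python
import docker

client = docker.from_env()

# Start a container capped at half a CPU and 256 MB of RAM, so that
# no single container can monopolize the shared host resources.
container = client.containers.run(
    "nginx:alpine",          # any image works; nginx is just an example
    detach=True,
    nano_cpus=500_000_000,   # CPU quota in units of 1e-9 CPUs -> 0.5 CPU
    mem_limit="256m",        # hard memory ceiling
)
print(container.status)
```

With limits like these in place, the density estimate above becomes meaningful, because no container can exceed the request it was sized against.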
Network I/O can also become a bottleneck when many containerized workloads must communicate with the LAN or WAN outside the server. Network bottlenecks can slow access to shared storage, delay responses to users, and even cause workload errors. Consider the network needs of the containers and their workloads, and provision the server with sufficient network capacity, whether a fast 10 GbE port or multiple 1 GbE ports that can be aggregated together to improve speed and resilience.
