Monday, February 5, 2018

Energy-saving methods for the data center

In the data center, the standard measure of energy efficiency is the facility's power usage effectiveness (PUE); a lower ratio means better efficiency, and 1.0 is the ideal target. A common PUE of 2.0 means that for every 1 watt delivered to the IT equipment, the facility as a whole consumes 2 watts. The loss comes from energy turning into heat, which then requires still more energy to remove through the traditional data center cooling system.
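The PUE arithmetic above can be sketched as a one-line calculation (the wattage figures are illustrative, not measurements from any particular facility):

```python
def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. 1.0 is the ideal; lower is better."""
    return total_facility_watts / it_equipment_watts

# A facility drawing 2000 W to deliver 1000 W to IT gear has a PUE of 2.0;
# the extra 1000 W is mostly lost as heat, then paid for again in cooling.
print(pue(2000, 1000))  # → 2.0
```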
Eight energy-saving approaches for the data center:
1. Raise the data center temperature
2. Shut down servers that are not in use
3. Use free outside air for cooling
4. Use the data center's waste heat to heat the office area
5. Use solid-state drives for highly active, read-only data sets
6. Use direct current (DC) power in the data center
7. Dump heat into the ground
8. Discharge heat into the sea through pipes
Basic energy-saving method one: raise the data center temperature
Raise the setting on the data center's thermostats. Data center temperatures are usually set to 68 degrees Fahrenheit or lower. The logic is that lower temperatures extend equipment life and give staff more time to react when the cooling equipment fails.
Experience shows that running hardware hotter does increase failure rates, especially for hard disks. But in recent years, IT economics has crossed an important threshold: a server's operating costs now generally exceed its purchase cost. That can make it more practical to sacrifice some hardware life in order to reduce operating costs.
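The "operating cost exceeds purchase cost" claim is easy to check with back-of-the-envelope arithmetic. The figures below (price, power draw, electricity rate, lifespan) are hypothetical assumptions for illustration only; real numbers vary widely by hardware and region:

```python
# Hypothetical server economics; all figures are illustrative assumptions.
purchase_price = 3000.0   # USD, one server
power_draw_kw = 0.75      # average draw, including cooling overhead (PUE)
price_per_kwh = 0.12      # USD per kilowatt-hour
years = 4                 # assumed service life

hours = years * 365 * 24
energy_cost = power_draw_kw * hours * price_per_kwh

print(f"Energy cost over {years} years: ${energy_cost:.2f}")  # → $3153.60
print(energy_cost > purchase_price)                           # → True
```

Under these assumptions the electricity bill alone outruns the purchase price, which is why trading some hardware longevity for lower cooling cost can pay off.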
Basic energy-saving method two: shut down servers that are not in use
Virtualization has shown the value of consolidating underused processors, disks, and memory. So why not power off entire servers? Does keeping them running buy "enterprise agility" worth the energy they consume? If you can identify servers that can be shut down, you can cut their energy consumption to the minimum: zero. But you must first address the objections.
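Identifying shutdown candidates is the practical first step. Here is a minimal sketch, assuming you already collect per-host utilization samples (the `load_history` structure, hostnames, and the 5% threshold are all hypothetical):

```python
# Hypothetical sketch: flag servers whose CPU utilization never rose above
# a threshold over the sampling window, as candidates for power-off.
# `load_history` maps hostname -> list of utilization samples (0.0-1.0).
def idle_candidates(load_history: dict, threshold: float = 0.05) -> list:
    return [host for host, samples in load_history.items()
            if samples and max(samples) < threshold]

loads = {
    "web-1":   [0.40, 0.55, 0.38],
    "web-2":   [0.01, 0.02, 0.01],  # effectively idle
    "batch-1": [0.00, 0.00, 0.00],  # idle
}
print(idle_candidates(loads))  # → ['web-2', 'batch-1']
```

In practice you would also check network traffic and scheduled jobs before powering anything off, but the filtering logic stays this simple.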
The first objection is the common belief that power cycling shortens a server's life because of the stress it places on components that cannot be swapped out, such as the motherboard. In practice, servers are built from the same kinds of components used in devices that are power cycled routinely, such as cars and medical equipment, and there is no evidence of any reduced MTBF (mean time between failures) from power cycling servers.
The second objection is that servers take too long to boot. However, you can usually speed up startup by disabling unnecessary boot-time diagnostics, booting directly from a snapshot image of an already running system, and using the warm-boot features some hardware provides.
The third objection is that if we must boot a server to absorb increased load, no boot is fast enough: users will not wait. However, most application architectures admit new users gradually, since the sign-up and provisioning process is slow anyway, so users never realize they are waiting for a server to start. And when an application does hit its user limit, users who are told "we are starting more servers to meet demand" may well be willing to wait.
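The capacity decision implied above reduces to simple arithmetic: given a per-server user limit, how many extra servers must boot to cover the expected load? A minimal sketch, assuming a fixed number of users per server (the figures and the function name are hypothetical):

```python
import math

# Hypothetical sketch: number of additional servers to power on so that
# expected_users fit, given a fixed per-server capacity.
def servers_needed(current_users: int, expected_users: int,
                   users_per_server: int) -> int:
    running = math.ceil(current_users / users_per_server)
    required = math.ceil(expected_users / users_per_server)
    return max(0, required - running)

# 180 users on 2 servers of 100; growing to 250 users needs 1 more server.
print(servers_needed(180, 250, 100))  # → 1
```

A real autoscaler would act on measured load rather than user counts, but the trade-off is the same: boot servers just early enough that users rarely notice the wait.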
