
Sunday, August 19, 2018

Datacenter migration: perimeter security issues

Datacenter migration: as smart devices multiply attack vectors exponentially, the Internet of things (IoT), the industrial Internet of things (IIoT), and cloud-based applications are rapidly increasing data center risk. In this era of global connectivity, organizations need to constantly test their security against complex threats, including web application and fileless attacks, memory corruption, return/jump-oriented programming (ROP/JOP), and compromised hardware and software supply chains.
While data centers have traditionally relied on detection and perimeter security solutions to mitigate risk, the proliferation of new cyber threats has increased the need for prevention. According to the Ponemon Institute, the average cost of a data center outage now exceeds $740,000, up nearly 40% since 2010. Those responsible for data center network security must therefore adopt next-generation prevention strategies that reduce and close the attack surface and improve the efficiency of existing infrastructure, processes, and personnel.
Protecting the perimeter
For decades, perimeter security has been the primary means of protecting data centers. This strategy resembles a medieval castle: the object of protection is confined to a small area and guarded by a solid wall with heavily defended entry points. The data center builds security layers around itself, and these layers work together in depth; the idea is that if one layer fails to stop an attack, the next layer will.
Like castles, data centers emphasize inspection of the traffic coming in and out of the organization. Traditional traffic inspection methods include mapping out network access points so that the perimeter can be continuously tested and reinforced. This is very effective for detecting attacks and generating alerts, and hopefully provides enough security to prevent a breach of the security layers that could lead to downtime, financial loss, reputational damage, or even environmental damage.
Strengthening data center security
Data center security is no longer only about protecting what is inside. Castle-style solutions worked well in the age of mainframes and hard-wired terminals, but they are less effective against today's threats. In fact, the advent of over-the-air (OTA) communications, IoT devices, and cloud computing has made data centers less secure.
The main security challenge facing data centers today is maintaining the privacy of their data while deploying applications in on-premises data centers and in public, private, and hybrid clouds. As many of their customers extend their business further into the cloud, cloned configurations may also inadvertently extend the attack surface. An attacker can target anything in the operational technology stack: routers, switches, storage controllers, servers, and sensors. Once hackers gain control of one device, they can scale the attack, potentially compromising every identical device across the network.
Today's attacks come from new or unexpected places, as cyber attackers now have more tools to circumvent perimeter security detection and strike targets from inside the data center. Security is not just about infrastructure, said Colonel Paul Craft, director of operations at the Joint Force Headquarters for the Department of Defense Information Network (JFHQ-DODIN), at the AFCEA Defensive Cyber Operations Symposium in May. "This is our IT platform that records all of our data; it is also our ICS and SCADA systems, and it covers all of our cross-domain networks," he said.
Many attacks can now spread quickly from one device to all devices, according to the Ponemon Institute, as seen when hackers gained access to 200,000 network devices built on the same code. Fileless attacks, such as memory corruption (buffer, stack, and heap) and ROP/JOP (return/jump-oriented programming) reordering of execution, are also a growing threat, and are reported to infect devices ten times more successfully than traditional attacks.
According to Symantec's 2018 Internet Security Threat Report, supply chain attacks increased 200 percent over the past year. Many organizations and vendors now control only a small portion of their source code, because the modern software stack is built from third-party binaries in a global supply chain, drawn from proprietary and open source code that can contain hidden vulnerabilities. In addition, zero-day attacks are growing rapidly, with many hackers exploiting unknown vulnerabilities in software, hardware, or firmware to attack systems.
New era of data center network security
Data centers must shift from focusing only on detection to emphasizing prevention. As many new attacks completely bypass traditional network and endpoint protection, the latest generation of tools is designed to fend off this growing class of attack vectors. This not only improves security against the latest threats, but also makes existing tools and processes more effective at handling everything else.
Today, hardware in the supply chain must be assumed to be compromised. This means that businesses need to build and run protected software on potentially untrusted hardware. Data centers need a new defense strategy that takes a defense-in-depth approach to identifying potential vulnerabilities and hardens binaries directly so that attacks cannot be executed or replicated.
One of the best ways to do this is to transform the software binaries on a device so that malware cannot alter their instructions or propagate through the system. This approach, known as "network hardening," prevents a single exploit from spreading across multiple systems. It narrows the attack surface and shrinks vulnerabilities in industrial control systems and in embedded systems and devices, greatly reducing the chances of physical and human harm.
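As a rough illustration of that idea (a toy sketch, not the actual hardening product the article alludes to), if each device's binary is transformed so that its internal addresses differ, an exploit that hard-codes gadget addresses harvested from one device will not work on another:

import random

# Hedged toy model of binary diversification ("network hardening"): each device
# gets its own randomized layout, so one ROP-style exploit cannot simply be
# replayed against the whole fleet. Everything here is illustrative.
def build_device(seed: int) -> dict:
    rng = random.Random(seed)
    # Assign each "gadget" a device-specific address.
    return {name: rng.randrange(0x400000, 0x800000) for name in ("open", "write", "exec")}

fleet = {f"device-{i}": build_device(seed=i) for i in range(3)}

# The attacker harvests addresses from device-0 and replays them everywhere.
exploit_chain = list(fleet["device-0"].values())

for name, layout in fleet.items():
    works = exploit_chain == list(layout.values())
    print(f"{name}: exploit {'succeeds' if works else 'fails'}")
# Only device-0 is hit; the diversified layouts stop the attack from propagating.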
The best security always assumes that hackers will eventually break in. Rather than reacting to a vulnerability after it has been exploited, network hardening keeps malware from gaining a foothold in the data center and in infrastructure that is otherwise hard to defend.


Monday, August 13, 2018

Datacenter migration: how to reduce data center risk

Datacenter migration: before considering the complexity of data center design, it is necessary to consider using a resilient system with no single point of failure (SPOF). By definition, a single point of failure is a component whose failure makes the entire system inoperable; in other words, a single point of failure produces a total failure. These failures may be component faults or incorrect human intervention, such as performing a switching operation without knowing how the system will react.
A 2N redundant system can be regarded as the minimum requirement for an installation with no SPOF. For simplicity, assume the data center's 2N system consists of two identical electrical and mechanical systems, A and B. Fault tree analysis (FTA) will highlight the combinations of events that cause failure. However, human error is very difficult to model in FTA: the data used to simulate human error will always be subjective, and there are many variables.
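As a rough illustration (not from the original article), the arithmetic below shows why an ideal 2N pair with no shared components is so much more available than a single system, assuming the two sides fail independently and using made-up availability figures:

# Hedged sketch: availability of a single system vs. an idealized 2N (A + B) system,
# assuming the two sides are identical, physically separate, and fail independently.
# The numbers are illustrative only.
single_system_availability = 0.999          # assumed availability of system A alone

# The 2N pair is unavailable only when A and B are down at the same time.
p_down_single = 1 - single_system_availability
p_down_2n = p_down_single ** 2
availability_2n = 1 - p_down_2n

print(f"Single system availability: {single_system_availability:.6f}")
print(f"Idealized 2N availability:  {availability_2n:.6f}")
# Any shared component (a common storage container, a DR link, a shared SCADA/BMS)
# breaks the independence assumption and erodes this benefit.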
If the two systems in this 2N example are physically separate, any operation on one system should have no effect on the other. However, it is not uncommon for "enhancements" to be introduced that take a simple 2N redundant system and add other components, such as disaster recovery links or a common storage container connecting the two systems.
In large-scale designs, this becomes an automatic control system (such as SCADA or a BMS) rather than a simple mechanical interlock. The basic principle of the 2N redundant system has been undermined, and the complexity of the system has increased exponentially; the same is true of the skills required of the operations team.
A review of the design will still show that 2N redundancy has been achieved, but the resulting complexity and operational challenges undermine the basic requirements of a high-availability design.
Studies have shown that the particular sequence of events leading to a failure is usually unpredictable, and the consequences are not known until it happens. In other words, the sequence of events is unknown until it occurs, and therefore it will not be part of the fault tree analysis (FTA).
Austrian physicist Ludwig Boltzmann developed an entropy equation that has been applied in statistics, particularly to missing information. In this thought experiment, a grid of boxes is set up, such as a 4 x 2 or 5 x 4 grid, and a coin is placed in one box. The theory lets the user determine how many yes/no questions are needed to find which box on the defined grid holds the coin. If you replace the boxes with system components and the coin with an unknown failure event, you can consider how system availability is affected by complexity. Reducing the number of unknown, infrequent failure events reduces the ways in which the system can fail. Therefore, increasing detailed knowledge of the system and uncovering unknown events reduces the combinations that can cause a system failure, thereby reducing the risk.
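To make the analogy concrete (an illustrative sketch, not part of the original article), the minimum number of yes/no questions needed to locate the coin grows with the logarithm of the number of boxes, which is exactly how missing information scales with complexity:

import math

# Hedged sketch of the "coin in a grid" analogy: the missing information about
# where an unknown fault hides grows as log2 of the number of places it could be.
for boxes in (8, 20, 100):          # e.g. a 4 x 2 grid, a 5 x 4 grid, a complex system
    questions = math.log2(boxes)    # minimum yes/no questions to locate the coin
    print(f"{boxes:>4} components -> about {questions:.1f} questions of missing information")

# More components (or more unknown failure modes) means more missing information,
# so documenting and discovering failure modes directly reduces operational risk.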
Human factors
Research shows that any system with a human-machine interface will eventually fail because of vulnerabilities. A vulnerability is any weakness that could cause a failure in the data center facility. Data center vulnerabilities may relate to the infrastructure or to facility operations. Infrastructure involves equipment and systems, in particular:
• Mechanical and electrical reliability.
• Facility design, redundancy, and topology.
Operations involve human factors, including human error at both the individual and management levels. This includes:
• Operational team adaptability.
• Team response to vulnerabilities.
The more complex the system, the more vulnerable it is to human factors, and the more training and learning the facility requires. Learning applies not only to individuals but also to organizations. Organizational learning is characterized by maturity and processes (shown below as cumulative experience), for example around data center structures and resources, maintenance, change management, document management, commissioning, operability, and maintainability.
Personal learning is a function of knowledge, experience, and attitude (shown in the chart as depth of experience). Developing an organizational and personal learning environment helps reduce failure rates and gives operators expertise that also effectively reduces energy waste.
Universal learning curve applied to data center
It is important to understand that zero failures can never be achieved, because the relationship between failure and experience follows an exponential curve. Data center facility operators with good knowledge and experience are still prone to complacency and to failure from a sequence of previously unknown events.
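As a rough sketch of that exponential relationship (illustrative only; the article does not give the actual equation), the residual failure rate can be modeled as decaying toward a non-zero floor as cumulative experience grows:

import math

# Hedged model of a "universal learning curve": the failure rate decays
# exponentially with cumulative experience but never reaches zero.
# All parameters below are assumed.
initial_rate = 10.0     # failures per year with no experience (assumed)
floor_rate = 0.5        # irreducible failure rate (assumed)
learning_constant = 3.0 # years of experience needed to cut the excess rate by ~63%

for years in (0, 1, 3, 5, 10):
    rate = floor_rate + (initial_rate - floor_rate) * math.exp(-years / learning_constant)
    print(f"{years:>2} years of cumulative experience -> ~{rate:.2f} failures/year")
# The curve flattens but never hits zero, which is why complacency remains a risk.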
Conclusion
Providing a learning environment that improves organizational and personal knowledge reduces data center risk. Although experienced operators can reduce failure rates, an overly complex design can still fail if it is implemented without adequate training.

Friday, August 10, 2018

Datacenter migration: obstacles to using liquid cooling

Datacenter migration: the rise of machine learning has driven ever-higher power densities in data centers, where large numbers of servers are deployed at densities ranging from 30 kW to 50 kW per rack, prompting some data center operators to switch from air cooling to liquid cooling.
Although some data center operators use liquid cooling to improve the efficiency of their facilities, the main driver is the need to cool more power-dense racks.
But the conversion from air cooling to liquid cooling is not simple. Here are some of the major obstacles to adopting liquid cooling technology in the data center:
1. Two cooling systems are required
Lex Coors, chief technology officer for data centers at European colocation giant Interxion, says it makes little sense for an existing data center to switch to liquid cooling all at once; the operations teams at many facilities would then have to manage and operate two cooling systems instead of one.
This makes liquid cooling a better choice for new data centers or for data centers undergoing major modifications.
But there are always exceptions, especially for the very largest operators, whose unique data center infrastructure problems often require unique solutions.
Google, for example, is currently converting the air-cooling systems in many of its existing data centers to liquid cooling to cope with the power density of its TPU 3.0 processor, which powers its latest machine learning workloads.
2. Lack of industry standards
The lack of industry standards for liquid cooling is a major obstacle to widespread adoption of the technology.
"Customers must first have IT equipment that is ready for liquid cooling," Coors said. "And the standardization of liquid cooling technology is not yet mature, so organizations can't simply adopt it and make it work."
Interxion's customers do not currently use liquid cooling technology, but Interxion is prepared to support it if necessary, Coors said.
3. Electric shock hazard
Many liquid cooling solutions rely mainly on dielectric fluids, which are non-conductive and pose no electric shock hazard. But some organizations may use chilled or warm water for cooling.
"If a worker happens to touch the liquid at the moment it leaks, there is a risk of electric shock and death, but there are many ways to deal with it," Coors said.
4. Corrosion
As with any system involving liquid piping, corrosion is a major problem facing liquid cooling technology.
"Pipeline corrosion is a big problem, and it is one of the problems that people need to solve," Coors said. Liquid cooling manufacturers are improving pipes to reduce the risk of leakage and to automatically seal pipes in the event of a leak.
He added, "At the same time, the rack itself also needs to be contained. If there is a leak, the liquid just falls within the rack, so there is no great harm."
5. Operational complexity
Jeff Flanagan, executive vice president of Markley Group, which plans to launch a liquid-cooled service in its high-performance cloud computing data center early next year, says the biggest risk of using liquid cooling may be increased operational complexity.
As data center operators, we prefer simple technologies: the more components there are, the more likely something is to fail. When liquid cooling is used to cool the chips, the liquid flows through every CPU or GPU in the server, adding many components to the cooling process and increasing the possibility of failure.
Operating data centers face another complication: immersing servers in dielectric fluid requires higher insulation standards.
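Flanagan's point about component count can be illustrated with a simple series-reliability calculation (an illustrative sketch with assumed numbers, not from the article): every extra pump, hose, and fitting in the cooling path multiplies in another chance of failure.

# Hedged sketch: reliability of a cooling path made of components in series.
# If any one component fails, the path fails; all per-component figures are assumed.
def series_reliability(per_component_reliability: float, component_count: int) -> float:
    return per_component_reliability ** component_count

air_cooled_path = series_reliability(0.999, 5)      # e.g. fans, CRAC, ducting
liquid_cooled_path = series_reliability(0.999, 25)  # pumps, manifolds, per-CPU cold plates...

print(f"Air-cooled path reliability:    {air_cooled_path:.4f}")
print(f"Liquid-cooled path reliability: {liquid_cooled_path:.4f}")
# More components in series means lower path reliability, which is exactly the
# operational-complexity risk described above.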

Thursday, August 9, 2018

Datacenter migration: how to deal with old servers in the server room?

Datacenter migration may leave many people wondering how to handle their old server hardware. Why July 14? That was the last day of Microsoft's support for Windows Server 2003. In China, reportedly about 40% of servers were still running the system that was about to retire. More and more of these old systems will be upgraded during this period, but a lot of server hardware running Windows Server 2003 is also ready to be retired.
Old server hardware cannot simply be thrown away. Discarding server hardware carelessly not only pollutes the environment but can also put data at risk. So how do we deal with retired server hardware?
There are several options to consider:
1. Donate the equipment
If your business has moved to newer hardware, the old devices can find a good home rather than being thrown onto the scrap heap. Putting this equipment in the hands of a well-run organization not only solves the problem of old equipment but also strengthens the company's image of corporate social responsibility.
Take the Electronic Recycling Association, for example. There are nonprofit organizations around the world that will take the equipment you would otherwise throw away and put it to good use.
2. The second-hand market
Just as you might sell your old iPad when you get a new smartphone, you can sell old server equipment on the second-hand market.
You may find that enthusiasts, or even small businesses, can turn the servers into home entertainment streaming systems or use them to run SharePoint.
If you don't want buyers to haggle, you can also sign an agreement with a potential buyer who will take responsibility for recycling the old equipment.
3. Dispose of it responsibly
If your servers really are at the end of their useful life, but you are wary of handing the devices to anyone else, you need to deal with the e-waste yourself.
Handling e-waste is not simply a matter of throwing it out; e-waste can do incredible harm to the environment.
In any case, when you decide to stop using these devices, you need to properly destroy the data on your hard drives to prevent someone with ulterior motives from stealing your company's data, because some services can recover data from old hard drives.

Wednesday, August 8, 2018

Datacenter migration: the driving factors behind demand

Datacenter migration: the factors driving enterprise demand for leased data center space continue to evolve. Against this ongoing change, more than 700 decision makers responsible for selecting enterprise IT and colocation services took part in a study commissioned by Vertiv to better understand this sustained and steady development.
The study, conducted by research firm 451 Research, aims to better understand the changing nature of demand for leased data center space. Looking back to the early 2000s, most of the demand for leased data center space came from telecom operators. Today, however, greater demand comes from service providers, including public cloud providers, and from enterprises looking for space that includes higher-level services.
While analysts, investors, and pundits have predicted that these trends will reduce demand for leased data center space, those views do not take into account the potential future demand from wider Internet of things adoption. Nor do they account for the need for hybrid data center space, or for the fact that, for many reasons, not all workloads are now moving to the cloud.
Future opportunities
As the report makes clear, the outlook for data center demand is not entirely negative. The following seven key findings show what will drive current and future demand for leased data center space and how it will affect multi-tenant data center (MTDC) providers.
Continuous cloud adoption
In less than a decade, cloud computing has moved from the edge of the market to the mainstream. With the widespread adoption of cloud computing, companies have been shifting IT from internal data centers to external colocation, hosted private clouds, and public cloud environments. While the average enterprise still retains 40 percent of its workloads in internally deployed data centers, and up to 36 percent in non-cloud environments, most respondents plan to increase their use of private and public clouds over the next two years.
The development of the Internet of things will further drive demand for data centers
IoT adoption was widespread among the 700 respondents surveyed, with only 2% saying they were not involved in any IoT projects. It is clear that enterprise applications are still early on the IoT maturity curve, with about two-thirds (64%) of respondents describing their current IoT activity as "in testing or planning."
IoT projects often require multiple locations for data analysis and storage. These include endpoint devices with integrated compute/storage, intelligent gateway devices, nearby devices that perform local computing, internally deployed data centers, colocation facilities, hosted sites, and network providers' points of presence.
Not only do various hosting destinations exist for data analysis and storage; many deployments may end up storing, integrating, and moving data across a combination of public clouds and other commercial facilities, including colocation sites and/or network providers.
The promise of expanding the Internet of things
Respondents said that while most businesses are still in the early stages of IoT projects, a significant amount of IT capacity is already used for IoT. Surprisingly, 54% of respondents said that between 26% and 75% of their current IT operations support IoT plans. Looking ahead to the next two years, 73 percent of respondents said they expect as much as three-quarters (75 percent) of their data center and cloud computing capacity to be used to support IoT plans.
Analytics workloads drive compute requirements
Beyond storing IoT data in the cloud, processing that data is another big opportunity for data center providers. The public cloud is currently the most popular platform (39%) for analyzing IoT-generated data, but it is by no means the only one. In fact, data processing is distributed across colocation facilities (30%), local computing devices attached to the data generator (30%), network operator infrastructure (31%), and internal data centers (35%).
Workload and provider
The nature of IoT workloads also affects where IoT data is stored and processed. Slightly less than half (48%) said that quality control/tracking systems are most likely to be processed near the data source. To meet this requirement, micro-modular data centers are likely to become more prominent, alongside relatively nearby multi-tenant data centers (MTDCs).
An undecided opportunity
For multi-tenant and micro-modular data center providers, organizations that have not yet defined their IoT infrastructure represent a market opportunity.
A quarter of respondents named public cloud providers as their top choice of infrastructure provider for IoT storage and processing. The remaining choices were fairly evenly split, with some respondents choosing a mix of public cloud, private cloud, and colocation data centers (21 percent). In addition, 28 percent of respondents chose services provided by network operators (14 percent) or managed service providers (14 percent).
Fog computing at the edge
The OpenFog Consortium defines fog computing as "a system-level architecture that distributes computing, storage, control, and networking resources and services anywhere along the continuum from cloud computing to the Internet of things."
Among respondents there were some very early adopters, with as many as 45 percent saying they were "familiar" or "very familiar" with the OpenFog Consortium. The main market driver of fog computing is real-time analysis of data streams, chosen by more than a quarter (26 percent) of respondents, followed by lower network backhaul costs (24 percent) and improved application reliability (21 percent).
The key points
Based on these findings, the survey report identifies eight key points for multi-tenant data center (MTDC) providers:
(1) Hosted services and private cloud options that streamline public cloud use or make it more secure are becoming increasingly important to customers.
(2) As demand for off-premises deployments grows, multi-tenant data center (MTDC) providers with interconnection or managed services will benefit greatly.
(3) Colocation providers and telecom operators are in a unique position to address the specific challenges of the public cloud.
(4) IoT is an opportunity that data center capacity providers should not ignore.
(5) The emergence of the Internet of things has created a new battlefield over where computing power is located.
(6) The Internet of things will bring applications and workloads that require near-real-time response (low latency), which means computing capacity may need to sit closer to the network edge or to devices to minimize transmission delay.
(7) The fog computing/edge computing market will bring important cooperation opportunities.
(8) Marketing should focus on communicating data center services that support critical fog/edge computing.
In addition to these key points, data center providers should pay special attention to vertical industries and to IoT support in the countries/regions with the highest proportion of mature plans. For example, the study found that Italy has the highest percentage of organizations using external cloud computing (67 percent), while China is the most active in using hosted facilities as an IoT data storage environment in the coming year. The biggest shift in IoT data storage is away from enterprise-owned data center facilities: while 71 percent of the companies surveyed store IoT data internally today, that number is expected to fall to 27 percent within a year.
If one thing is clear, it is that developments in cloud computing and the Internet of things will have a significant impact on data center demand. If data center providers are open to the opportunities these emerging technologies offer and to the demand drivers for leased data center space, they will be able to enter new markets and stay ahead of the competition.

Tuesday, August 7, 2018

Datacenter migration: from modularization to intelligence

Datacenter migration: what is Huawei doing about the trend toward intelligent data centers? First, let's answer this question: why intelligence? Does it mean no more manual inspection?
Not really. With people as the protagonists of data center operations and maintenance, too much depends on experience: problems are only handled after the fact, there is no early warning, and fine-grained management is impossible. The role of people in the data center should gradually shift toward "execution" rather than "management," with the management work handled by continuously improving AI that gradually replaces manual effort. Adding intelligence to the modular data center will make the data center more powerful and complete.
Faced with the move toward intelligent data centers, what has Huawei done?
Through continuous intelligent upgrades of its self-developed equipment and its understanding of Layer 2 services, Huawei has launched a smart micro module 3.0 product built around three "i" features (iPower, iCooling, iManager).
Why 3.0? Simply put, 1.0 was hardware integration, 2.0 was the combination of software and hardware, and 3.0 is the integration of functions. So what functions do these three "i" features bring together? Let me explain.
iPower keeps the business running without interruption: dangerous states can be predicted in advance, and fire hazards are eliminated at the first opportunity.
In terms of intelligence, iPower mainly delivers:
• Full-link visualization and alarms for the power supply, with accurate fault localization and identification within minutes;
• Comprehensive monitoring of switch current, voltage, and temperature on each power supply branch, with abnormal states reported early;
• Socket-level power monitoring, so the running state of cabinet equipment is visible at a glance;
• A battery management system that monitors key information such as SOH, current, voltage, internal resistance, and temperature for each cell;
• When batteries or socket devices fail, the faulty battery or socket supply can be isolated to eliminate fire hazards.
An analogy may make this easier to grasp: it is like having both Bian Que and Hua Tuo on staff, one who can diagnose the illness at a glance and one whose treatment can scrape the poison from the bone.
iCooling: cooling that is not only reliable but also energy efficient.
• An AI self-learning algorithm, combined with aisle temperature and humidity, adjusts indoor and outdoor fans, compressors, expansion valves, and so on to achieve energy savings of about 8%, saving tens of thousands in electricity costs each year (a rough calculation follows after this list);
• The temperature cloud map and load power are no longer independent functions: iCooling links the temperature map, cabinet load, and temperature control system to provide double insurance, eliminating real-time hot spots and the hidden risks behind them;
• The air conditioner's refrigerant charge is no longer a mystery: one click in iCooling checks the refrigerant level, solving the overheating and downtime caused by insufficient refrigerant.
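As a back-of-the-envelope check on that claim (all figures below are assumptions, not Huawei data), an 8% saving on a module's cooling power adds up quickly over a year:

# Hedged sketch: rough annual savings from an 8% cooling-energy reduction.
# The power draw and electricity price are assumed, not taken from the article.
cooling_power_kw = 60          # assumed average cooling load of one micro module
saving_ratio = 0.08            # the 8% figure quoted above
hours_per_year = 24 * 365
price_per_kwh = 0.6            # assumed electricity price (e.g. CNY/kWh)

saved_kwh = cooling_power_kw * saving_ratio * hours_per_year
saved_cost = saved_kwh * price_per_kwh
print(f"Energy saved: {saved_kwh:,.0f} kWh/year")
print(f"Cost saved:   {saved_cost:,.0f} per year")   # on the order of tens of thousands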
iManager is the brain of smart micro module 3.0. It not only carries the iPower/iCooling algorithms but also makes operating and maintaining the equipment room easier. The best technology is technology people cannot feel, and iManager tries to make sure that:
• You don't notice the places that would otherwise need manual operation: intelligent lighting, automatic sliding doors, eLight module status indicators, and fire-protection linkage;
• You don't feel the burden of asset management: when quarterly and annual asset statements have to be reconciled, the asset automation functions easily cut the cost of manual statistics.
iManager is more like a perfect housekeeper: earnest and dedicated, meticulous, and never neglecting its duty.
In the future, Huawei will continue to explore the intelligent path of the micro module, continuously optimizing around these "i" features to deliver a new generation of data centers that offer customers both quality and intelligence.

Monday, August 6, 2018

Datacenter migration: the history of cooling technology

Datacenter migration: the hottest time of the year has arrived, and many things are suffering in the high temperatures; even inside the Arctic Circle in Sweden and in parts of Siberia along the Arctic Ocean, temperatures have exceeded 30 degrees.
Not only living creatures but also all kinds of non-living equipment are being tested. With the advent of the big data and cloud computing era, massive amounts of data flow into our lives. In this era, data is like "oil," the most valuable asset of an enterprise, and the data center, as the infrastructure for storing and exchanging data, is becoming increasingly important.
A data center is generally a large warehouse-like building that mainly houses servers and other computing equipment connected to the Internet. This equipment holds most of the data on the Internet and provides the computing power for cloud computing. As might be expected, the data center generates a great deal of heat; reportedly its energy density is more than 100 times that of an ordinary office building.
The thermal load inside data centers and their equipment must be managed effectively, and data centers have taken various measures to cool themselves. So how has data center cooling technology developed? Which cooling methods are most favored by manufacturers?
Natural (free) cooling, which uses only the temperature difference between the outside air and the equipment, is one of the earliest cooling schemes in data centers, but it is limited by geography, so data centers usually use some form of air conditioning to cool the IT equipment.
The air-conditioning equipment used to cool data centers has also gone through a period of development, from early ordinary air conditioners to the precision air conditioners of the 1970s. Air cooling developed quickly because of its low cost, but as equipment keeps increasing and servers become denser, air cooling is gradually unable to meet the cooling demand, while availability and green energy efficiency have become the main directions of innovation. Liquid cooling technology is favored by many manufacturers because of its outstanding performance.
Liquid cooling means using liquid instead of air to carry away the heat generated during operation by the CPU, memory modules, chipset, expansion cards, and other devices. In current research, liquid cooling is classified as water cooling and refrigerant cooling; the available coolants include water, mineral oil, engineered fluorinated fluids, and so on. By cooling principle, liquid cooling is divided into two approaches: cold plate liquid cooling (indirect cooling) and immersion liquid cooling (direct cooling).
If air cooling is like putting the server in front of a fan, then liquid cooling is like giving the server a shower or a bath. At present there are three main liquid cooling technologies in the industry: cold plate, spray, and immersion.
In cold plate liquid cooling, cooling water flows in through a dedicated inlet and through closed heat pipes into the host, carrying away the heat of the CPU, memory, hard disks, and other components.
Spray liquid cooling means retrofitting the IT equipment, deploying the corresponding spray devices, and cooling the overheated components while the equipment is running.
By contrast, immersion liquid cooling is more specialized. The technique first appeared abroad and can be understood as placing the server in liquid. Although it offers high density, low noise, a low heat-transfer temperature difference, and natural cooling, immersion liquid cooling is technically difficult and costly. At present the industry has only single-machine tests and demonstrations; server cluster deployment is not yet available.
In fact, the concept of liquid cooling appeared many years ago, but it has only taken off in recent years. This is mainly because, with the rapid development of the data center industry and especially the deployment of high-density and even ultra-high-density servers, the challenges facing data center cooling are increasingly severe. How to further reduce high power consumption, and how to achieve green development of the data center while guaranteeing performance, have become the industry's concerns and the focus of breakthroughs.
At present, mainstream manufacturers at home and abroad are vigorously promoting research on liquid cooling technology. For example, Facebook is launching a new indirect cooling system, the StatePoint Liquid Cooling (SPLC) solution, developed in collaboration with Nortek Air Solutions. In development since 2015, the (Nortek-patented) technology uses a liquid-to-air heat exchanger in which water is cooled through membrane-separation evaporation; the cold water then cools the air in the data center facility, and the membrane prevents cross-contamination between water and air.
In addition, spray-cooled data centers, whose core is a combined liquid cooling technology, are a new liquid cooling method that differs from traditional air cooling and immersion cooling. An insulating coolant is sprayed directly onto the heat-generating devices inside the server, or onto the heat sinks in contact with them; the coolant quickly absorbs the heat of the chips and transfers it through the liquid cooling system to the outdoor atmosphere. This not only solves the problem of low air-cooling efficiency but also avoids the high cost of immersion and its maintenance.
According to industry news, liquid cooling solutions are on the rise, driven by AI and edge computing as well as by falling costs. More and more manufacturers now use liquid cooling technology to cool their data centers; for example, at Google I/O 2018, Google announced that it had introduced liquid cooling into its data centers for the first time to cool its AI chips.
Innovation is the primary driving force behind scientific and technological development. As an efficient, energy-saving, and safe cooling technology, liquid cooling is becoming the inevitable choice of many data centers.

Sunday, August 5, 2018

Datacenter migration: an introduction to the modular data center

Datacenter migration. In a modular data center, each module has independent functions and unified input and output interfaces; modules in different areas can back each other up, and a complete data center is formed by arranging the related modules. Modularization takes many forms: it can be a design method and philosophy, or a product.
As a design method, modular design is applied to each functional system of the data center, which can be divided into modules, built floor by floor, and constructed in phases. In product form, data center modularization takes several shapes: modular products, micro modules, and container data centers.
Modular products are represented by modular UPS, modular precision air conditioning, modular cabling, and so on. The micro module is typified by the cabinet micro-environment: it takes a number of racks as the basic unit and includes cooling, power supply and distribution, network, cabling, monitoring, fire protection, and other independent operating units. The module can be prefabricated in the factory and can be disassembled and assembled quickly. The container data center can be considered a standardized, prefabricated, and pre-tested large modular data center product and solution.
The advantages of modular systems:
1. Modular systems are scalable: modular infrastructure can be deployed according to current IT requirements, and more components can be added later as needs grow. This significantly reduces total cost of ownership.
2. Modular systems are changeable: they can be reconfigured, providing great flexibility to meet changing IT needs.
3. Modular systems are portable: when installing, upgrading, reconfiguring, or moving modular, independent components, standard interfaces and easy-to-understand structures save time and money.
4. Modular components are replaceable: failed modules can easily be swapped out for upgrade or repair, usually without stopping the system.
5. Modularity improves the quality of fault repair: because modules are portable and pluggable, much of the work can be done in the factory, both before delivery (such as pre-wiring of distribution equipment) and after delivery (such as repair of power modules). Statistically, the same work done in the factory rather than in the field causes far less performance degradation, capacity loss, and failure; for example, compared with a UPS power module repaired in the field, a module repaired in the factory is roughly a thousand times less likely to cause a power failure, introduce a new fault, or fail to return to full working order.
6. In terms of energy consumption, the modular data center can control energy use through centralized management and improve equipment utilization, thereby reducing resource consumption. At the same time, because the power and data cable paths, server deployment and installation, and airflow within the module are optimized, the PUE of a modular data center is greatly reduced (a simple PUE calculation is sketched below).
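For readers unfamiliar with the metric, PUE (power usage effectiveness) is simply total facility energy divided by IT equipment energy; the following minimal sketch, with assumed figures, shows how optimizing cooling and power paths lowers it:

# Hedged sketch: PUE = total facility energy / IT equipment energy.
# All figures are illustrative, not measurements of any particular module.
def pue(it_kwh: float, cooling_kwh: float, power_losses_kwh: float, other_kwh: float) -> float:
    total = it_kwh + cooling_kwh + power_losses_kwh + other_kwh
    return total / it_kwh

traditional_room = pue(it_kwh=1000, cooling_kwh=700, power_losses_kwh=120, other_kwh=80)
modular_pod = pue(it_kwh=1000, cooling_kwh=350, power_losses_kwh=80, other_kwh=40)

print(f"Traditional room PUE: {traditional_room:.2f}")   # ~1.90
print(f"Modular pod PUE:      {modular_pod:.2f}")        # ~1.47
# Contained airflow and shorter power/cable paths shrink the non-IT overhead,
# which is what pushes the modular data center's PUE down.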

Thursday, August 2, 2018

Datacenter migration: steps toward efficiency and sustainability

Datacenter migration: experts predict that data centers will use three times more energy over the next ten years, making it more important than ever for data center providers to operate efficiently. Data center operators also need round-the-clock visibility into important energy and electrical data in order to make informed decisions about server loads and optimize power capacity.
The adoption of proven protection measures and the need to meet ISO 50001 and other energy performance standards have led to more sophisticated energy consumption reporting in the industry. In addition, carbon emission targets and reporting are increasingly being adopted in the data center sector as corporate sustainability shifts from a sustainability strategy to a business strategy.
Despite the growing demand for efficient and sustainable operations, a recent study found that most organizations fail to take the steps necessary to integrate and advance their programs. In fact, most enterprises still use fairly traditional energy and carbon management methods, and few coordinate activities across procurement, operations, and sustainability departments. This disconnect holds back return on investment (ROI).
However, one UK hosted cloud computing service provider has turned its energy management challenge into an opportunity, with substantial cost savings. IOmart is a rapidly growing cloud computing company recognized as a tier-one partner by some of the world's major cloud providers (including Microsoft, VMware, EMC, and AWS). As the business has grown, the company has applied the guiding principles of sustainable development both to its own organization and to its customers. A successful approach that delivers greater efficiency, lower cost, and greater flexibility can save a great deal of money.
How did the company achieve this? IOmart, in collaboration with Schneider Electric, established a strategic and comprehensive approach to energy and carbon emissions management in its data centers. More specifically, it brought together its procurement, energy, and sustainability teams to compare data and develop shared strategies to manage energy consumption and carbon emissions and to reduce spending. This integrated approach, also known as active energy management, ultimately helps reduce energy use, meet energy compliance standards, and manage volatile energy costs.
The following are the four main steps IOmart took to share key information between departments and to use energy procurement data to support energy and sustainability reporting.
• Step one: buy energy more intelligently. The company's first challenge was to reduce energy costs by purchasing energy strategically. Schneider Electric helped deploy a risk management solution that responds flexibly to the market, saving 13% of contract costs. With the early success of this smarter approach to buying energy, the team wanted to build an equally strategic, comprehensive approach to other energy and sustainability opportunities.
• Step two: meet energy and sustainability standards. Energy efficiency and sustainability goals were integrated to meet voluntary and mandatory standards, including climate change agreements, carbon reduction commitments, and ISO 50001. Sharing data between departments is essential for regulatory purposes, including the use of energy procurement data to support energy and sustainability reporting. IOmart achieved ISO 50001 certification in December 2016, underlining its commitment to customers as a responsible data center provider, and has so far saved 1.5 million euros. Stringent regulation, monitoring of energy consumption and PUE, and tax rebate benefits contributed to these savings.
• Step three: conduct audits to identify potential savings. Energy audits are part of the ISO 50001 certification process and reveal new energy-saving opportunities. Through a continuous efficiency approach, IOmart identified the potential for a further 150,000 in cost savings, and monitoring showed more opportunities. Energy-saving opportunities include better management of existing cooling systems and adjusting the set points and dead bands of air-conditioning units.
• Step four: use software to enable transparency. IOmart continues to create new opportunities through integrated decision-making, supported by advanced tools and analytics that identify and prioritize improvements. Resource Advisor is a software platform for enterprise energy and sustainability data management that automates processes, supports compliance teams, and visualizes data so information can be turned into action.
The results speak for themselves. IOmart can now effectively manage the energy consumption of its data centers and make informed decisions over the short, medium, and long term. The method's success lies in integrating people and strategy: starting from energy procurement, improving efficiency and sustainability, bringing teams together, and working closely with the company's finance department can achieve impressive results.
When companies run programs in silos, they lose revenue or cost savings, and this is a significant gap for those who want to balance profitability and environmental responsibility. Integrated energy and carbon management provides a holistic view of data and resources to reduce consumption, promote innovation, and maximize cost savings.
By adopting a strategic, holistic approach to improving efficiency and sustainability, IOmart has become a model for other organizations starting their own active energy management journey.

Wednesday, August 1, 2018

How to get datacenter migration right?

Datacenter migration. As organizations grow and develop, the technologies they employ inevitably need to evolve and change. As a result, everyone from small chain-store businesses to nonprofits expanding into unfamiliar areas is deploying more and more IT equipment.
The servers the organization manages have been running steadily under load, taking up useful space and consuming more and more power. So what happens when an organization needs to expand its business? Colocation, or migrating to a new data center, may be the right choice.
Thanks to their economies of scale, large colocation data centers are more efficient at providing power and cooling to servers. Because they buy power at wholesale prices, they also eliminate the cost of maintaining UPS equipment, generators, air conditioners, and so on, since that is included in the price.
By colocating, the organization can free up space and resources for more productive work or office space.
Getting the colocation decision right
Moving to a colocation data center is not only a smart decision but also a critical one. The organization needs to take into account downtime, security, and application performance, as well as the specifics of what the process actually requires. To make the migration as smooth and safe as possible, the following factors need to be considered:
(1) The organization needs to understand the context of the migration and do its research
Blindly starting a data center migration is a big no-no. The organization needs to spend time thinking about how relocating key applications, services, and data will affect its business during the migration, and what measures it can take to mitigate risks or temporary disadvantages.
(2) Server downtime is a key consideration for the organization
How such events are handled depends on the nature of the organization's business. If you cannot tolerate any server downtime, you need to protect your operations with a strong disaster recovery and backup plan. Organizations can also set up a temporary private or hybrid cloud to keep key processes running during the migration.
Also, if the organization's system-critical applications are being migrated, consider a pilot migration to ensure continued software compatibility (and reduce the chance of further downtime). A good data center provider will help the enterprise through this process and ensure that it goes smoothly.
(3) Network configuration is also a factor to consider
The organization must decide what needs to be done so that existing applications retain their functionality without compatibility problems. These decisions have to be made case by case, since some applications may run into configuration problems coming from the LAN. It is best to err on the side of caution, so be sure to investigate the impact of the migration on the organization's mission-critical applications.
It may not be immediately obvious, but network latency also matters. Colocation means accessing the data center over a dedicated high-speed connection. Post-migration latency (delay on the network) should not be a problem, but it is important to allow for the unexpected during the migration itself.
Because servers are typically migrated in batches, applications that used to share a local connection must now work harder to communicate. To mitigate potential latency problems, determine which applications work together and when they run, and plan the migration schedule so those groups move together as quickly as possible (a simple grouping sketch follows).
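As an illustration of that planning step (a minimal sketch with made-up application names, not a tool the article describes), chatty applications can be grouped into the same migration wave by treating "talks to" relationships as a graph and moving each connected component together:

from collections import defaultdict

# Hedged sketch: group applications that communicate heavily into the same
# migration wave, so tightly coupled systems never sit on opposite sides of a
# WAN link mid-migration. The names and links are purely illustrative.
talks_to = [("erp", "db1"), ("db1", "reporting"), ("web", "cache"), ("mail", "mail-archive")]

graph = defaultdict(set)
for a, b in talks_to:
    graph[a].add(b)
    graph[b].add(a)

def migration_waves(graph):
    seen, waves = set(), []
    for app in graph:
        if app in seen:
            continue
        wave, stack = [], [app]          # depth-first walk of one connected component
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            wave.append(node)
            stack.extend(graph[node] - seen)
        waves.append(sorted(wave))
    return waves

print(migration_waves(graph))  # [['db1', 'erp', 'reporting'], ['cache', 'web'], ['mail', 'mail-archive']]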
(4) A successful data center migration means the organization needs to fully understand its applications
Many in-house applications have been running for years; some documentation is nowhere to be found, and perhaps no one remembers who installed or built them. By using network tracing tools in the months leading up to the migration, organizations can relearn everything they need to know about the intricacies of their legacy applications.
If the organization makes the necessary preparations for the migration, the actual process should be straightforward and the migration should be seamless. When data needs to be migrated, the organization must determine what actually needs to move. For example, check whether hardware and software are still running only because a contract has not yet ended, or whether existing equipment is still in use even though it no longer serves a critical purpose. Importantly, the organization needs to ask itself whether each server really needs to be brought back up in the new location, or whether it can be virtualized to share space and rationalize the number of servers.
The organization needs to go through all of this and consider the purpose and role of each item in the future business. Some devices may even turn out to be more important than expected, and therefore worth spending more on. This is also an ideal time for the organization to review its migration schedule and consider whether it needs to set up a temporary private or hybrid cloud to avoid downtime during the migration.
In addition, the organization needs to keep up to date with the overall data environment records, review existing logs, and record any changes to the manifest.Next, find the existing workload, software, and scheduled backups so that you know exactly what will and won't happen during the migration, and run the most important disaster recovery tests for ultimate security.
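One way to keep that record in a single place is a lightweight inventory script that writes one manifest entry per server, noting its workloads, backup window, warranty details, and planned disposition. The sketch below is only an illustration; the field names and example servers are assumptions, not a standard schema.

# Minimal sketch of a migration manifest: one record per server, noting the
# planned disposition (move / virtualize / retire) and the facts to verify
# before the move. Field names and entries are illustrative assumptions.
import json
from datetime import date

inventory = [
    {"hostname": "app01", "serial": "SN-0001", "warranty_until": "2020-03-01",
     "workloads": ["erp"], "backup_window": "01:00-03:00", "disposition": "virtualize"},
    {"hostname": "db01", "serial": "SN-0002", "warranty_until": "2019-11-15",
     "workloads": ["erp-db"], "backup_window": "02:00-04:00", "disposition": "move"},
    {"hostname": "legacy-fax", "serial": "SN-0003", "warranty_until": "2016-01-01",
     "workloads": [], "backup_window": None, "disposition": "retire"},
]

manifest = {
    "generated": date.today().isoformat(),
    "servers": inventory,
    "summary": {d: sum(1 for s in inventory if s["disposition"] == d)
                for d in ("move", "virtualize", "retire")},
}

with open("migration-manifest.json", "w") as fh:
    json.dump(manifest, fh, indent=2)

print(manifest["summary"])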
The organization also needs to inform its service contractors of its plans and point any licenses and contract modifications at the new data center. It should also record the warranty information and serial numbers of its equipment to avoid problems after the physical relocation.
With the equipment securely managed and maintained around the clock, the business can return to its best. If the business keeps growing, it is now easier than ever to expand within the data center without resorting to temporary, expensive stop-gap solutions. As for the redundant processes and applications uncovered during planning, this is the time to reclaim those freed-up resources and make way for a greener, more efficient future.

2018年7月31日 星期二

Datacenter migration, high speed Ethernet helps meet requirements

Datacenter migration, high-speed Ethernet is rapidly becoming the network standard as data center servers handle ever more traffic from new, smarter applications, Internet-connected devices, video, and more.
According to IDC, overall 100Gb Ethernet revenue grew 83.8% year over year to $742.5 million in the first quarter of 2018, while first-quarter port shipments rose 117.7% over the same period. Dell'Oro Group researchers say shipments of 100G Ethernet ports are expected to reach 12 million this year, compared with about 1 million 100G ports in 2016.
Sameh Boujelbene, senior director at Dell'Oro Group, says many factors are driving the demand for faster data center networks, including the enormous growth of hyperscale networks at Google, Amazon, and Facebook, as well as the price and performance of 100G products.
A recent study by PwC further explains why high-speed networks are needed. As companies gradually move away from traditional enterprise data centers, PwC writes, workloads are becoming more decentralized, more mobile, and more like the workloads typically associated with hyperscale environments. In the next one to three years, almost all major workloads will move from internal deployments to the public cloud; applications will depend more heavily on the network, and because workloads are distributed and dynamic, the network will become even more important.
Roland Acra, senior vice president and general manager of Cisco's data center business group, says demand for high-speed ports and the growing volume of data generated at the dense edge of the network are pushing backbone upgrades.
"It is mainly driven by the evolution of the network interface card (NIC) on the server. Most server connections used to be 1G to 10G. Now the links from servers to top-of-rack switches have largely moved from 10G to 25G or 50G, which pushes the uplinks toward higher density at 100G," Acra said.
From 10G to 25G to 100G Ethernet
"The price of 25G is basically the same as that of 10G and 100G. The use of four channel 25GE, 100GE backbone platform requires less wiring, and lower space requirements and costs. Backward compatibility provides an additional choice for simplifying conversion, while extending the value of current assets, "CISCO said.
"Cloud and software-defined architectures are shaking the Ethernet switch and router markets," wrote Petr Jirovsky, IDC's global network tracking research manager. "The continuing price decline and the growing difference between the cloud and communication service providers and the purchase preferences of the enterprises have created a challenging environment for the suppliers, as well as the opportunity for the end users."
Cisco, Juniper, Arista, HPE and HUAWEI aim at data center.
Cisco, Juniper, Arista, HPE and HUAWEI are only a handful of vendors that actively seek market opportunities for high-speed and traditional Ethernet. Juniper recently launched EX4650 High Density 25/100 Gbps Switch, which supports 48 100G Ethernet ports, or 48 25G ports, and eight 100G uplink.
But the driver of 100G is only the beginning of high-speed Ethernet. Dell Oro reported earlier this year that it is estimated that by 2020, 400G will account for 20% of the exchange revenue of data centers. It is estimated that in the next five years, the higher speed of 100G, 200G, 400G and 800G will increase significantly.
"In December 2017, Broadcom announced the launch of a 56G SerDes-based Tomahawk 3 chip with innovative technology and Nephos," Boujelbene said. "We expect commercial chips based on 56G SerDes to boost shipments of 200 Gbps and 400 Gbps, with 400 Gbps accounting for the majority. By 2020 to 2021, we expect 112G SerDes to push another speed upgrade cycle to drive 800 Gbps port shipments, plus another wave of 200 Gbps and 400 Gbps shipments.
Cisco's Accra says there are two to three years to go before the 400 gigabytes, or at least until the 400 gigabytes plummet is attractive enough. "400G is still a concern for cloud and service providers. CISCO has not taken the 400G link as a priority. "
However, Juniper seems to want to achieve faster 400G Ethernet speed than now. (Juniper is looking forward to becoming the first person in 400GbE)

2018年7月30日 星期一

The trend of datacenter migration, server virtualization

With datacenter migration, many organizations have adopted virtualization-first policies that require all new applications to run in a virtual environment. Migrating traditional legacy applications to a virtualized environment, however, is quite another matter.
While the industry is increasingly focused on technology and the benefits IT can bring to the bottom line, a recent Spiceworks survey finds that IT budget growth has stalled and companies are failing to keep up with the demand for new technology. As a result, enterprise IT staff are constantly being asked to do more with less.
The inevitable consequence of flat IT budgets and reduced IT staffing is that projects involving legacy applications fall down the priority list. Newly developed applications can indeed run on virtual servers under the virtualization-first policy, but for everything else the old saying applies: "if it isn't broken, don't fix it." Unless hardware is already reaching its natural end of life (EOL), businesses often have no IT budget to move applications from traditional physical servers to virtual servers.
Most enterprise organizations, regardless of size, are therefore running mixed IT environments containing both virtual and physical servers. This is not ideal in any respect. A mixed environment complicates almost every aspect of server management: it reduces administrator productivity, increases the cost of management tools, fragments tooling knowledge among administrators, and can lead to significant reductions in application availability and data integrity.
Why deploy virtualization?
So why is there so much discussion around deploying virtualization? Why are IT organizations so keen on this technology?
1. Reduce capital expenditure
Deploying virtualization allows a single physical server to host multiple virtual servers, and makes it easy to migrate virtual servers between physical hosts to balance resource demands. Physical servers running a hypervisor can often be driven to more than 80 percent of their rated capacity. Consolidating business applications onto a single physical server, each in its own isolated operating environment, can significantly reduce the number of physical servers in an enterprise data center. With fewer physical servers, IT organizations can further reduce capital expenditure, freeing that money for other parts of the business. There are, of course, many other benefits.
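A back-of-the-envelope sketch shows what that consolidation means in box counts; all the figures below are hypothetical, and a real sizing exercise would use measured CPU, memory, and I/O profiles rather than a single utilization number.

# Minimal sketch: estimate how many virtualization hosts could replace a set
# of lightly loaded physical servers. All inputs are hypothetical examples.
import math

def hosts_needed(physical_servers, avg_util, host_target_util=0.8):
    """Assume each physical server's load equals its average utilization of
    one host's capacity, and hosts run at ~80% of rated capacity."""
    total_load = physical_servers * avg_util
    return math.ceil(total_load / host_target_util)

physical = 100
utilization = 0.12   # typical one-app-per-box utilization, assumed
print(f"{physical} physical servers at {utilization:.0%} average load "
      f"consolidate onto about {hosts_needed(physical, utilization)} hosts")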
2. Reduce operating expenses
Reducing the number of physical servers also lowers the data center's energy costs, and carbon emissions are a metric that data center investors and shareholders increasingly track. It also lets the data center host more applications, a key factor in making data center floor space more valuable.
From a management perspective, virtual machines are far easier to provision and decommission. If an application needs a new server, an administrator can provision a virtual machine much faster than a physical server, typically cutting provisioning time from weeks to hours or less and helping applications reach production quickly.
Virtual machines are also easier to manage than physical ones. A virtualization administrator can look after many more machines than a handful of physical devices, which improves administrator efficiency and eases staffing pressure in enterprise data centers.
Disadvantages of virtualization
Of course, virtualization has its drawbacks. Not all business applications are suited to running on a virtual server. And while virtualization can help businesses save money, it can also increase spending in other areas. As with many technologies, rash adoption without sufficient understanding can make the very problems it was meant to solve worse.
1. Not everything can be virtualized
Virtualization is not the best choice for every application. Highly performance-sensitive applications may not be suitable: they are unlikely to tolerate sharing physical resources with other applications, and the overhead of running a hypervisor on the same hardware may not be acceptable to enterprise customers.
Some applications require physical devices attached to their servers and rely on unusual drivers. Although hypervisor software covers the vast majority of application use cases, it usually does not support these outliers.
Not every application can be virtualized at all. Some have license agreements that prohibit virtualization; for others, the move is simply too complex.
Many enterprises also run old, legacy applications that are critical to the business but have grown so convoluted through years of upgrades and changes that moving them to a virtual platform is too risky.
2. Cost increase
The cost of the associated components can affect how enterprise customers deploy virtualization. Although virtualization can reduce operating costs over the long term, there are upfront investment costs in implementing the technology.
The host servers that run each hypervisor must be able to support the performance requirements of all the virtual servers they carry, and these hosts may cost more than the physical servers they replace.
Given the variety of tools on the market, many of them supplied by hypervisor vendors, enterprise server and network administrators must also be trained in virtualization technology.
3. Server sprawl
Ironically, the problem of server sprawl that virtualization promises to solve is actually often exacerbated by the deployment of virtual machines.
When an enterprise data center deploys servers without fully understanding their impact, server sprawl becomes a real problem: over time the data center fills with hardware that consumes valuable energy and floor space but is never fully utilized.
Server virtualization was meant to solve this problem: consolidating many physical servers onto a single virtualization host eases energy and space constraints. However, the very ease of provisioning a virtual machine can lead to excessive virtual server sprawl instead.
4. Single point of failure
Finally, an obvious drawback of server virtualization is that hosting multiple virtual servers on one piece of hardware creates a potential single point of failure. If the physical server running the hypervisor fails, every application running on its virtual machines becomes unavailable.
New data protection methods are needed to ensure data availability and integrity in a virtual server environment. Many virtualized deployments rely on existing data protection technologies on the assumption that whatever works for a physical server will work just as well for a virtual one, but in fact virtualization infrastructure presents its own challenges, not least because enterprise operating environments are unlikely ever to be entirely virtualized.

2018年7月29日 星期日

Datacenter migration, space will become a new frontier of cooling

In datacenter migration, cooling a data center usually means thinking about the outside temperature, the capacity of the cooling units, and the airflow from the fans. The cold of space, by contrast, is something most people associate only with emptiness and desolation and give no further thought.
However, that may change: a small startup plans to use radiative sky cooling, a fascinating natural phenomenon that allows heat to be rejected directly to space. SkyCool has begun developing panels that radiate heat at wavelengths of 8 to 13 microns, wavelengths that are not absorbed by the atmosphere.
The cold of space
Eli Goldstein, co-founder and chief executive of SkyCool, said: "Basically, we can take advantage of space, which turns out to be very cold. More generally, space is the ultimate heat sink, at only about 270 degrees below zero."
Although the phenomenon has been known and studied for centuries, the company's panels are new: a layer of silver film coated with silica and hafnium oxide layers that can reject heat even during the day.
A prototype of the system was installed on a two-storey office building in Las Vegas in 2014. Under direct sunlight, the panels stayed 4.9°C (8.8°F) below the ambient air temperature and provided 40.1 watts of cooling power per square metre.
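Those prototype figures translate directly into a rough sizing rule. The sketch below, which assumes a hypothetical 10 kW heat load, shows the panel area implied by 40.1 W/m² of cooling power, and hints at why larger sites need ground space beside the building and not just the roof.

# Minimal sketch: panel area implied by the reported ~40.1 W/m^2 of radiative
# cooling power. The 10 kW edge-site heat load is a hypothetical example.

COOLING_W_PER_M2 = 40.1   # figure reported for the Las Vegas prototype

def panel_area_m2(heat_load_w, cooling_w_per_m2=COOLING_W_PER_M2):
    return heat_load_w / cooling_w_per_m2

load_w = 10_000           # hypothetical 10 kW edge data center heat load
print(f"{load_w / 1000:.0f} kW of heat needs roughly "
      f"{panel_area_m2(load_w):.0f} m^2 of panels")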
"What we do is that they can not absorb the heat of the sun, but at the same time they can radiate heat into space in the form of infrared radiation. And the combination of this nature has never appeared in any natural material, and it has only recently been designed to make up for it. Goldstein said.
The start-up company is composed of three researchers from Stanford University: Goldstein, postdoctoral consultant Aaswath Raman, and Shanhui Fan. Last year, the SkyCool system began to commercialize research conducted by Stanford University for the first time.
"At the end of this summer, we hope to install some devices in California," Goldstein said. "Then the plan will be to deploy more panels and expand the scale in more locations."
The company hopes its panels, which circulate a water-glycol mixture, will succeed in industries with high cooling loads, such as data centers, refrigeration, and commercial cooling.
"In the early days, we are more focused on the edge data center," Goldstein said. "Large deployments are also interesting, but if you want to cover the whole load you need adjacent space for the panels, not just the roof. The panels can also be used in conjunction with a traditional cooling system, to reduce the water consumption of a cooling tower or to cut the electricity used by more conventional cooling."
He added: "We have talked with data center companies several times. I think the biggest challenge we face now is that, because the company is still small, installing and deploying at a data center of 5MW or more would be a very large installation for us."
Another challenge, he said, is that no company wants to be the first to try the technology, especially in data centers and similar facilities. "We know the technology itself works. I can cool the water, pump it around, and show that we can cool the water. What we need to prove next is not the technology, but our ability to deploy it in an economically efficient way and to connect it to real systems."
"The energy side is very simple: we know how much heat we need to remove, and how much heat the panels can reject. Now it's about how we do it at scale."

2018年7月26日 星期四

Datacenter migration, how to quickly troubleshoot the network

Datacenter migration, when a data center network grows very large, more network equipment has to be added and connected in multiple tiers. Today's data centers tend toward a tree structure: a few large-capacity devices sit at the core, with multiple layers of devices hanging below them (several layers may be needed simply because there are not enough ports), so that dozens or even hundreds of cascaded network devices work together. Once a fault occurs, how to find the faulty device quickly is a question that troubles many network operations staff.
Network equipment in the data center is deployed redundantly, so as long as the faulty device is identified when the network fails, service can be restored by isolating it, and the root cause can be investigated at leisure afterwards. In practice, network failures are usually reported first from the application side, and only then does troubleshooting begin. At that point the application staff usually describe only a vague access failure; they will not tell you which specific addresses cannot reach which other addresses, and sometimes they even give wrong information, which greatly delays fault localization. Localization takes up most of the time in the troubleshooting process, so what can be done? How can a data center network be troubleshot quickly? This article provides the answer.
If a network failure is analyzed only from the symptoms reported by the application side, it is already too late, and it is easy to be led astray: what application staff report is what they happen to see, and it is often only a local symptom that does not reflect the state of the whole network. The network team should therefore rely on its own monitoring, discovering problems through monitoring so that the faulty device can be found quickly and then isolated or repaired.
Early network monitoring mostly watched device logs and port traffic counters. More often than not that information was insufficient, and problems could not be detected in time. Many network equipment vendors claim their device logging is complete, but there are still corner cases and software bugs that cause failures with no log output at all, so the traffic itself has to be traced. At that point the network team has to ask the application team about the symptoms, identify on the spot some IP addresses whose packets are being lost or blocked, and then walk the traffic path device by device, checking every device the faulty traffic passes through until the culprit is found. In a tree network with many devices at every layer, the amount of walking is considerable, and not every device supports the traffic statistics needed for every kind of flow; devices that lack them make the hunt even harder. Network operations teams have soldiered on this way for years.
Clearly, the traditional troubleshooting method worked, but it was far too inefficient: fault localization took a long time and had a serious impact on the business. Modern network monitoring instead targets data flows, watching specific flows in the network so that the fault location can be identified as soon as a flow is interrupted. Several emerging monitoring methods, collectively known as network visibility technology, are described here as the most effective ways to locate faults quickly.
First is INT (In-band Network Telemetry), which monitors network state by collecting and reporting it in the data plane. When a packet enters the first network device, the device samples or mirrors packets from the business flow as configured, encapsulates an INT header onto the packet, and writes the switch information to be collected into the INT metadata fields. Every network device along the path processes the packet in the same way, until the last device, the one connected to the destination server, strips the INT header off.
The INT records collected at each device are sent via gRPC to a remote monitoring server, where they are parsed and rendered. The forwarding delay, device identity, congestion level and other information carried in the INT metadata are all visible to the monitoring server, so as soon as packets are delayed or dropped the server notices immediately and can determine the scope of the problem and the faulty device within seconds.
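How a monitoring server turns those per-hop records into a verdict can be sketched very simply. The record layout and the 1 ms threshold below are assumptions for illustration, not the INT wire format: the idea is only to flag the hop whose reported delay jumps, or the hop after which reports stop arriving.

# Minimal sketch: locate the suspect device from per-hop INT-style reports.
# Each report lists, per switch, the forwarding delay the packet observed.
# The record layout and the 1 ms threshold are illustrative assumptions,
# not the INT wire format.

DELAY_THRESHOLD_US = 1000   # flag any hop adding more than ~1 ms

def congested_hops(report):
    """report: ordered list of (switch_id, delay_us) along the packet's path."""
    return [sw for sw, delay in report if delay > DELAY_THRESHOLD_US]

def first_missing_hop(report, expected_path):
    """If reports stop early, the first hop with no record is where packets die."""
    seen = {sw for sw, _ in report}
    for sw in expected_path:
        if sw not in seen:
            return sw
    return None

full_report = [("leaf-1", 12), ("spine-2", 2400), ("leaf-7", 15)]
print("suspect (congested) hops:", congested_hops(full_report))

truncated = [("leaf-1", 12)]
print("packets vanish at:", first_missing_hop(truncated, ["leaf-1", "spine-2", "leaf-7"]))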
Second is ERSPAN (Encapsulated Remote Switched Port Analyzer), a technology for remotely monitoring network traffic across Layer 3 IP networks. ERSPAN packets are encapsulated in GRE (Generic Routing Encapsulation) and can be forwarded over Ethernet to any location reachable by IP routing: a copy of the traffic on a source port is sent through the GRE tunnel to a server for analysis, and because the physical location of the collection server is unrestricted, key traffic from across the whole network can be forwarded over ERSPAN to the monitoring server, making it obvious at a glance in which part of the network a given flow appears. Third come sFlow and NetStream, both sampling technologies; NetStream's collection is more complete but requires dedicated hardware. With sFlow and NetStream deployed in the network, the monitoring data can be sent to the server over gRPC, where it is computed, consolidated, and displayed graphically, so that as soon as part of the network has a problem it shows up immediately on the monitoring server.
sFlow and NetStream collect the key fields of the packet header rather than the full packet contents, which makes them quite different from INT and ERSPAN, yet they are perfectly adequate for troubleshooting most network faults. There is no harm in deploying all three monitoring schemes in one network, so that when a failure occurs the data collected from several angles can be analyzed together.
It is also important to send the collected data to the monitoring server over the management network; otherwise, if the data network has a problem, the monitoring data may never reach the server. In most cases a failure in the data network does not affect the management network, and every device remains accessible. If, when a failure occurs, a device can no longer be reached over the management network, that device can essentially be identified as the fault point.
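That rule of thumb is easy to automate. The sketch below sweeps a hypothetical list of management-plane addresses and reports any device that no longer answers; probing TCP port 22 on the management IP is just one assumed health check, and any management-plane probe would serve.

# Minimal sketch: sweep management-plane addresses and report devices that no
# longer answer. Probing TCP port 22 is an assumption; any management-plane
# health check would do. Device names and addresses are hypothetical.
import socket

MGMT_DEVICES = {
    "core-1": "10.0.0.1",
    "leaf-1": "10.0.0.11",
    "leaf-2": "10.0.0.12",
}

def reachable(ip, port=22, timeout=2.0):
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

unreachable = [name for name, ip in MGMT_DEVICES.items() if not reachable(ip)]
print("suspect devices (management plane unreachable):", unreachable)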
With the monitoring methods above, finding the fault immediately is not difficult, and the process can be fully automated: when a fault is detected, the monitoring server automatically issues an isolation command to quarantine the faulty device and restore service on its own. In this way the fault can be located before any application failure is even reported, the faulty device isolated in time, and service restored; fault analysis time is greatly shortened, the impact on the business is small, and the business side may not notice the fault at all. That said, the real-world effectiveness of monitoring technologies such as INT and ERSPAN is still unproven; they are much talked about at the moment and need to be tested in practice. sFlow and NetStream are relatively mature, but they have rarely been used for troubleshooting as such and still need wider adoption.
With these monitoring technologies, network faults can be cleared quickly, which matters greatly for data center operations and substantially improves operations and maintenance efficiency.

2018年7月25日 星期三

Datacenter migration, liquid cooling scheme

Datacenter migration, liquid cooling solutions are expected to enter more enterprise data centers. This article explores five reasons why.
Today, liquid cooling solutions that have traditionally been used mainly for mainframes and academic supercomputers may soon make their way into more enterprise-class data centers. As new and more demanding enterprise workloads keep pushing up the power density of data center server racks, managers and operators of enterprise data centers are eager to find alternatives that work better than air cooling.
We interviewed a range of data center operators and suppliers and asked for their views on liquid cooling moving into mainstream use. Some respondents declined to disclose the specific applications they run in their data centers, saying they regard their workloads and cooling methods as a competitive advantage.
A number of hyperscale cloud operators, including Google's parent company Alphabet, Microsoft, Facebook, and Baidu, have formed a group dedicated to creating an open specification for liquid-cooled server racks, although the group has yet to specify what it will use. In these hyperscale data centers, however, at least one class of workload clearly calls for liquid cooling: machine learning systems accelerated by GPUs (or, in Google's case, its latest TPU tensor processors, which the company has publicly said are now cooled with direct-to-chip liquid cooling).
While enterprise data center operators remain skeptical and cautious about liquid cooling, some trends are already emerging. If your enterprise runs any of the following workloads in its data center, it too may adopt liquid cooling in the future:
1. AI and accelerators
The annual CPU performance growth described by Moore's law has slowed sharply in recent years, which is partly why accelerators (mainly GPUs), along with FPGAs and dedicated ASICs, are increasingly entering enterprise data centers.
GPU-driven machine learning is probably the most common hardware acceleration use case outside of HPC (high-performance computing). However, in a recent survey by the market research firm 451 Research, about a third of IT service providers said they plan to use accelerated systems for online data mining, analytics and engineering simulation, real-time video and other media, fraud detection, load balancing, and similar latency-sensitive services.
Hardware accelerators have a much higher thermal design power (TDP) than CPUs and typically need 200W or more of cooling each. Add a high-performance server CPU, and a single system in the enterprise data center can require more than 1kW of cooling.
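A quick sum makes the rack-level consequence clear; the component wattages and counts below are hypothetical, chosen only to be consistent with the roughly 200W-per-accelerator and more-than-1kW-per-system figures above.

# Minimal sketch: add up the thermal load of an accelerated server and a rack.
# Component wattages and counts are hypothetical examples.

def system_watts(cpus=2, cpu_w=150, gpus=4, gpu_w=250, overhead_w=200):
    """overhead_w stands in for memory, storage, NICs, fans and PSU losses."""
    return cpus * cpu_w + gpus * gpu_w + overhead_w

per_system = system_watts()
per_rack = per_system * 10   # ten such systems per rack, hypothetically
print(f"one system: {per_system} W, ten per rack: {per_rack / 1000:.1f} kW")

At that point the rack is already well above the 10kW figure cited below as dense by today's standards, which is where liquid cooling starts to look attractive.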
Intel, too, is pushing past the 150W ceiling of its conventionally designed server processors. "As more and more corporate customers want more powerful chips, we are starting to see a gradual increase in the wattage these chips consume," said Andy Lawrence, executive director of the Uptime Institute.
Rack density in enterprise data centers is increasing. Most data centers now have at least some racks above 10kW in normal operation, and 20 percent have racks at 30kW or higher power density. Yet these workloads are not considered high-performance computing. "They just say that their workloads sit in higher-density racks," Lawrence said.
"If you put GPUs together with Intel's processors, the power density could triple," he said. Liquid cooling is obviously well suited to these accelerators, especially immersion cooling, which can cool the GPU and CPU alike.
2. Cooling high-density storage
As storage density in enterprise data centers keeps rising, cooling storage effectively may become harder. Most of the storage capacity installed in data centers consists of unsealed hard disk drives, which cannot be liquid-cooled. Newer technologies, however, offer hope: solid-state drives can be cooled in a full immersion solution, and the latest generation of high-density, high-speed helium-filled drives are sealed units, which makes them suitable for liquid cooling as well.
As the 451 Research report notes, the combination of solid-state and helium-filled hard drives means there is no longer any need to separate air-cooled storage from liquid-cooled compute. There is a reliability benefit too: immersing drives in coolant helps reduce the impact of heat and humidity on the components.
3. Computing at the network edge
The need to reduce latency for current and future applications is driving a new generation of data centers at the edge of the network. These can be high-density remote facilities deployed at wireless towers, on factory floors, or in retail stores, and they may increasingly host dense computing hardware such as tightly packed GPU clusters for machine learning.
Not every edge data center will be liquid-cooled, but many will be designed to support heavy workloads in confined spaces where traditional cooling cannot be used, or in new deployment environments that lack traditional facilities. Because it reduces energy consumption, liquid cooling also makes it easier to deploy edge sites that have no large power capacity.
As many as 20 percent of edge data centers could use liquid cooling, according to Lawrence's estimates. He envisions remote, micro-modular, high-density data center sites supporting 40kW per rack.
4. High-frequency trading and blockchain
Many modern financial services workloads are computationally intensive, requiring high-performance CPUs and GPUs. They include high-frequency trading systems and blockchain-based applications such as smart contracts and cryptocurrencies.
For example, one enterprise client of GRC (Green Revolution Cooling), a high-frequency trading firm, is testing its immersion cooling solution. When GRC introduced immersion cooling products for cryptocurrency mining and the price of bitcoin soared at the end of 2017, the company saw its biggest ever surge in sales.
Peter Poulin, GRC's chief executive, told reporters that another GRC corporate customer, in Trinidad and Tobago, is running a cryptocurrency operation at 100kW per rack, with a warm-water cooling loop connected to an evaporative tower. Because warm-water cooling is more energy-efficient than chilled-water cooling, it can operate in a tropical climate without a mechanical chiller.
5. The high cost of traditional cooling
When air-based cooling systems cannot cope with high density cooling demands, liquid cooling schemes start to make sense.
Geoscience company CGG, for example, uses GRC's immersion liquid cooling system to cool its data center in Houston, which mainly runs seismic data processing on commodity servers fitted with powerful GPUs, at up to 23kW per rack. That power density is relatively high, but it could normally still be air-cooled. Ted Barragy, senior systems manager at CGG, said: "We put the heavy compute servers in immersion tanks to cool them. But the truth is that this is less about meeting the application's workload than about the cost economics of an immersion liquid cooling solution."
During the upgrade, the immersion cooling system replaced the traditional cooling equipment in CGG's older data center. According to Barragy, the upgrade freed up several megawatts of power. "Even after a few years of adding servers and immersion tanks, we still have half a megawatt of unused power," he said. "It's an old, traditional data center, and about half of its power was going to inefficient air cooling."
Barragy also said the PUE of the immersion-cooled data center is about 1.05, more efficient than the company's newer but air-cooled data center in Houston, which has a PUE of 1.35.
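Expressed as arithmetic, with a hypothetical 1 MW IT load, the gap between those two PUE figures looks like this:

# Minimal sketch: facility power implied by the two PUE figures quoted above.
# The 1 MW IT load is a hypothetical figure used only for comparison.

def facility_kw(it_kw, pue):
    return it_kw * pue

it_load_kw = 1000
for label, pue in (("immersion-cooled", 1.05), ("air-cooled", 1.35)):
    total = facility_kw(it_load_kw, pue)
    print(f"{label}: PUE {pue} -> {total:.0f} kW total, "
          f"{total - it_load_kw:.0f} kW of overhead")

On those assumptions the immersion-cooled facility spends 50 kW on everything besides IT, versus 350 kW for the air-cooled one: a sevenfold difference in overhead for the same compute.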
"A lot of people think that liquid cooling is just a high-density cooling solution that is really suitable for calculating power density of 60kW to 100kW per rack, but it has other significant advantages for our mainstream corporate customers," Poulin said.
Chris Brown, chief technology officer at the Uptime Institute, said they have now seen a general increase in interest in liquid cooling solutions.This is driven by the urgent need of enterprise data centers to achieve higher efficiency and lower operating costs.
"The emphasis on liquid cooling is no longer on ultra-high density, but on solutions that can be used by the average enterprise data center operations manager to cool any IT assets."He said."The solution is now moving into more common density solutions and more common data centers."

2018年7月24日 星期二

Server room construction: using and maintaining the diesel generator system

Server room construction, as the data center's backup power source, the diesel generator system is the last line of defense for power supply reliability. To ensure that the generator system can take over the downstream load safely and reliably the moment utility power is lost, professional operations staff carry out regular health checks of the generator system as well as actual on-load drills.
1. Prime power and standby power
Before discussing health checks of the diesel generator system, two concepts need introducing: prime power and standby power. In China, diesel generator sets are rated by their prime power: the maximum power a set can deliver continuously for 24 hours is called its prime power. For limited periods, the standard allows one hour in every 12 at a 10% overload above prime power; that figure is what we usually call the maximum power, or standby power.
In other words, if you buy a set with a 400 kW prime rating, you can run it at 440 kW for one hour in every 12. If you buy a set with a 400 kW standby rating and run it at 400 kW all the time, the set is actually running in a constant overload state (its real prime rating is only about 360 kW), which is very harmful: it shortens the unit's life and increases the failure rate. The best choice is therefore a generator set rated at the user's total load plus a 10% power reserve, which is both economical and practical.
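That sizing rule reduces to a couple of lines; the 360 kW site load in the sketch below is a hypothetical example.

# Minimal sketch of the sizing rule above: choose a prime-rated genset at
# roughly the site's total load plus a 10% reserve. The 360 kW load is a
# hypothetical example.

def recommended_prime_kw(total_load_kw, reserve=0.10):
    return total_load_kw * (1 + reserve)

load_kw = 360
prime = recommended_prime_kw(load_kw)
standby = prime * 1.10   # prime may be exceeded by 10% for 1 hour in every 12
print(f"site load {load_kw} kW -> choose about {prime:.0f} kW prime rating "
      f"(up to {standby:.0f} kW available short-term as standby power)")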
2. The harm of running the generator set with no load
In addition, the diesel generator system should not be run unloaded for long periods during its regular health checks; no-load operation itself causes the following harm:
1. Prolonged no-load running prevents the diesel injected by the nozzles from burning completely, producing carbon deposits and causing the valves and piston rings to leak;
2. The piston-to-cylinder-liner seal deteriorates, engine oil creeps up into the combustion chamber and burns, and the exhaust emits blue smoke;
3. On turbocharged engines, low or no load means low boost pressure, which easily degrades the sealing of the (non-contact) turbocharger oil seal, so oil leaks into the compressor housing and enters the cylinders with the intake air;
4. Part of the oil that creeps into the cylinders burns; part burns incompletely and forms carbon deposits on the valves, intake ports, piston crowns, and piston rings; and the rest is expelled with the exhaust, so oil gradually accumulates in the exhaust passages and forms deposits there as well;
5. Once enough oil collects in the turbocharger's compressor housing, it seeps out at the housing joints;
6. Long-term overloaded running, on the other hand, aggravates wear of moving parts and worsens combustion, bringing the overhaul interval forward.
3. Running the generator set on a dummy load
To avoid the harm that no-load running does to the generator set itself, regular health checks of a data center's diesel generator system are usually carried out with a dummy load (load bank) to simulate real load. The dummy-load procedure is as follows:
1) Confirm that the switchgear breakers are in their designated positions;
2) Start the generator set; once it reaches hot-standby state, close the generator's breaker and then the outgoing breaker to the load bank so the load bank is energized;
3) Switch on the load bank power switches one by one so the control modules are energized, switch on the cooling fan power, and check that the run indicators light up;
4) Apply load step by step, following the sequence and hold times 0 (5 min) → 25% (5 min) → 50% (5 min) → 75% (5 min) → 100% (15 min) → 75% (10 min) → 50% (10 min) → 25% (10 min) → 0 (5 min); a log-template sketch of this schedule follows the list;
5) During the test, record the generator set's oil pressure, coolant temperature, voltage, current, power, frequency and other readings, and observe its running condition (for example noise, vibration, air leaks, oil leaks);
6) After the generator set shuts down, switch off the load bank's cooling fans, switch off the load bank's working power, and restore all breakers to their original positions.
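As referenced in step 4, the stepped schedule and the readings from step 5 can be captured in a simple log template. The step order, hold times, and reading fields below are illustrative assumptions; the site's own written procedure takes precedence.

# Minimal sketch of a load-bank test log: each step is held for a set time
# while oil pressure, coolant temperature, voltage, current, power and
# frequency are recorded by the operator. The schedule and field names are
# illustrative assumptions.
import csv

LOAD_STEPS = [            # (% of prime rating, minutes held)
    (0, 5), (25, 5), (50, 5), (75, 5), (100, 15),
    (75, 10), (50, 10), (25, 10), (0, 5),
]

FIELDS = ["step_pct", "minutes", "oil_pressure_kpa", "coolant_c",
          "voltage_v", "current_a", "power_kw", "frequency_hz", "notes"]

with open("genset-load-test.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    for pct, minutes in LOAD_STEPS:
        # Measured values are filled in by hand at each step of the drill.
        writer.writerow({"step_pct": pct, "minutes": minutes})

print(f"prepared a log sheet with {len(LOAD_STEPS)} steps: genset-load-test.csv")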
4. Precautions
1. Never shut down under load
Before every shutdown, the load must first be shed step by step, then the generator set's output air circuit breaker opened, and finally the diesel engine slowed to idle and run for about 3 to 5 minutes before stopping.
2. Routine maintenance of the dummy load
To keep the dummy load enclosure out of the sun and rain, a rain cover is usually fitted, and the enclosure should be given waterproofing and rust-proofing treatment once a year. Because the inside of the enclosure gets very hot in operation and must shed that heat, the enclosure is not sealed; rain can seep in through the ventilation holes and leave the interior damp, and over time this degrades the insulation of the resistance elements. Beyond that, the dummy load needs regular servicing: when operating it not only runs very hot but is also a dangerous high-voltage live device, so routine health checks such as internal dust removal, component inspection, and insulation monitoring are required.