Data center construction: data center room design brings together many specialist disciplines, including architecture, structural engineering, electrical systems, HVAC, water supply and drainage, fire protection, networking, and building intelligence. An IDC facility should be characterized by "good safety and security performance and reliable, uninterrupted operation."
To ensure that a data center delivers this level of safety and runs reliably without interruption, the following requirements should be observed during construction:
Power system requirements
1. The power supply and distribution system should reserve spare capacity for future expansion of the electronic information systems.
2. The data center should be fed from a dedicated distribution transformer or a dedicated circuit; dry-type transformers are preferred, and a true dual-circuit supply should be provided.
3. The low-voltage distribution system inside the data center should not use a TN-C earthing arrangement.
4. Electronic information equipment should be powered from an uninterruptible power supply (UPS) system.
5. The UPS systems serving mechanical plant and those serving electronic information equipment should be fed from separate circuits.
6. Electronic information equipment should be supplied through dedicated distribution boxes (cabinets) installed close to the equipment they serve.
7. Power connection points for electronic information equipment should be strictly separated from those of other equipment and clearly labeled.
8. A Class A data center should be equipped with a standby diesel generator system capable of carrying the full load when the utility supply fails.
9. Transfer between the utility supply and the diesel generator should use an automatic transfer switch with a bypass function, so that maintenance of the switch does not interfere with source changeover.
10. The information management office currently draws about 50 kW; if the service is extended to all municipal units, the load is roughly estimated at ten times that, about 500 kW. To cover future demand, the power supply capacity should be made as large as practical.
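As a rough illustration of the sizing arithmetic in item 10 (the 50 kW baseline, the tenfold growth estimate, and the extra headroom), here is a minimal Python sketch; the 20% design margin and the 200 kW UPS module size are illustrative assumptions, not figures from the text.

```python
# Rough UPS/generator sizing sketch based on the load figures quoted above.
# The 20% design margin and the N+1 module size are illustrative assumptions.
import math

def required_capacity_kw(base_load_kw: float, growth_factor: float, margin: float = 0.2) -> float:
    """Projected load plus a design margin for future demand."""
    projected = base_load_kw * growth_factor
    return projected * (1.0 + margin)

def ups_modules(capacity_kw: float, module_kw: float = 200.0) -> int:
    """Number of UPS modules needed, with one extra module for N+1 redundancy."""
    return math.ceil(capacity_kw / module_kw) + 1

if __name__ == "__main__":
    capacity = required_capacity_kw(base_load_kw=50.0, growth_factor=10.0)
    print(f"Design capacity: {capacity:.0f} kW")            # 600 kW with a 20% margin
    print(f"UPS modules (200 kW each, N+1): {ups_modules(capacity)}")
```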
Fire protection requirements
1. General provisions
(1) The data center should be equipped with a fire suppression system appropriate to its classification level, in accordance with the current national standards GB 50016 Code for Fire Protection Design of Buildings, GB 50045 Code for Fire Protection Design of Tall Buildings and GB 50370 Code for Design of Gas Fire Extinguishing Systems, as well as Appendix A of this code.
(2) The main equipment room should be protected by a clean-agent gas suppression system. Transformer/distribution rooms, UPS rooms and battery rooms may use either a clean-agent gas system or a high-pressure water-mist system.
(3) The data center should be fitted with an automatic fire alarm system complying with the current national standard GB 50116 Code for Design of Automatic Fire Alarm Systems.
2. Fire protection facilities
(1) A main equipment room protected by a piped clean-agent gas system or a high-pressure water-mist system should be fitted with two types of fire detectors, and the fire alarm system should be interlocked with the suppression system.
(2) Before the suppression equipment discharges, the extinguishing system controller should, through interlocks, close the room's air dampers and valves, stop the air-conditioning and exhaust fans, and cut non-fire-fighting power supplies.
(3) Design parameters for automatic sprinkler systems, such as discharge density and design area, should follow the current national standard GB 50084 Code for Design of Sprinkler Systems.
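To make the interlock sequence in item (2) concrete, here is a minimal Python sketch of a pre-discharge routine; the device names and the close/discharge calls are hypothetical placeholders, not a real suppression controller API.

```python
# Hedged sketch of the pre-discharge interlock described above:
# close dampers/valves, stop AHUs and exhaust fans, cut non-fire power,
# and only then release the clean agent.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Device:
    name: str
    closed: bool = False
    def close(self) -> None:            # stands in for a real actuator command
        self.closed = True
        print(f"{self.name}: closed/stopped")

@dataclass
class SuppressionController:
    dampers: List[Device] = field(default_factory=list)
    fans: List[Device] = field(default_factory=list)
    non_fire_power: List[Device] = field(default_factory=list)

    def pre_discharge(self) -> None:
        for dev in self.dampers + self.fans + self.non_fire_power:
            dev.close()

    def discharge(self) -> None:
        self.pre_discharge()             # interlocks complete before agent release
        print("Releasing clean agent")

ctrl = SuppressionController(
    dampers=[Device("supply damper"), Device("return damper")],
    fans=[Device("CRAC-1"), Device("exhaust fan")],
    non_fire_power=[Device("non-essential power feed")],
)
ctrl.discharge()
```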
Static protection, lightning protection and grounding requirements
1. Static protection
(1) Floors or floor surfaces in the main equipment room and auxiliary areas should incorporate static dissipation measures and a grounding structure. The surface or volume resistance of anti-static flooring should be between 2.5×10⁴ Ω and 1.0×10⁹ Ω, and the flooring should also be fire-resistant, environmentally friendly, and resistant to contamination and wear.
(2) Work surfaces in the main equipment room and auxiliary areas should preferably use conductive or static-dissipative materials.
(3) The metal enclosures of all equipment, all metal pipework, metal cable trays and the building's metal structure inside the data center must be bonded equipotentially and grounded.
(4) Static grounding conductors should have adequate mechanical strength and chemical stability; welded or crimped connections are preferred. Where conductive adhesive is used to bond to the grounding conductor, the contact area should be no less than 20 cm².
2. Lightning protection and grounding
(1) Lightning protection and grounding design for the data center should satisfy both personal safety and the normal operation of the electronic information systems, and should comply with the current national standards GB 50057 Code for Design of Lightning Protection of Buildings and GB 50343 Technical Code for Lightning Protection of Building Electronic Information Systems.
(2) Protective grounding and functional grounding should preferably share a common grounding arrangement, with the grounding resistance determined by the smallest value required among them.
(3) Electronic information equipment in the data center should be bonded equipotentially; the bonding method should be chosen according to the interference-sensitive frequencies of the equipment and the classification level and scale of the facility.
Monitoring requirements
1. General provisions
(1) The data center should be provided with an environment and equipment monitoring system and a security system. Each system should be designed according to the facility's classification level, in accordance with the current national standards GB 50348 Technical Code for Security Engineering and GB/T 50314 Standard for Design of Intelligent Buildings, as well as Appendix A of this code.
(2) The environment and equipment monitoring system should preferably use a distributed-control (DCS) or distributed network architecture. The system should be easy to extend and maintain, and should provide display, logging, control, alarm, analysis and prompting functions.
(3) The environment and equipment monitoring system and the security system may share a single monitoring center. Each system should have a reliable power source, preferably an independent UPS; where a centralized UPS is used, each system should be fed from its own dedicated circuit.
2. Environment and equipment monitoring system
The environment and equipment monitoring system should meet the following requirements:
Air quality in the main equipment room and auxiliary areas should be monitored and controlled to ensure the environment meets the operating requirements of the electronic information equipment; leak detection and alarm devices should be installed wherever water damage could occur; the operating status of forced-drainage equipment should be brought into the monitoring system; and water pipes entering the main equipment room should be fitted with both motorized and manual valves.
Precision air conditioners, diesel generators, UPS systems and similar equipment should come with their own built-in monitoring; their key parameters should be integrated into the equipment monitoring system, and their communication protocols should be compatible with it.
For Class A and Class B data centers, centralized control and management of servers should preferably use a KVM switching system.
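As an illustration of how an environment-monitoring loop might pull leak-detection and equipment status points into one place, here is a minimal Python sketch; the point names, thresholds and the read_point stand-in are hypothetical and do not correspond to any particular BMS product or protocol.

```python
# Minimal polling sketch for the environment/equipment monitoring described above.
# read_point() stands in for whatever protocol the real equipment exposes
# (Modbus, SNMP, BACnet, ...); here it just returns canned values.
import random
import time

POINTS = {
    "leak_sensor_row_A": lambda: random.random() < 0.01,    # True = water detected
    "crac_1_supply_temp_c": lambda: random.uniform(18, 27),
    "ups_1_load_pct": lambda: random.uniform(20, 95),
}

ALARM_RULES = {
    "leak_sensor_row_A": lambda v: v is True,
    "crac_1_supply_temp_c": lambda v: v > 25.0,
    "ups_1_load_pct": lambda v: v > 90.0,
}

def read_point(name: str):
    return POINTS[name]()

def poll_once() -> None:
    for name in POINTS:
        value = read_point(name)
        if ALARM_RULES[name](value):
            print(f"ALARM  {name} = {value}")
        else:
            print(f"ok     {name} = {value}")

if __name__ == "__main__":
    for _ in range(3):       # a real system would run continuously and log to a database
        poll_once()
        time.sleep(1)
```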
Sunday, August 19, 2018
Data center migration: perimeter security issues
Data center migration: as smart devices multiply attack vectors exponentially, the Internet of Things (IoT), the industrial Internet of Things (IIoT) and cloud-based applications are rapidly increasing data center risk. In this era of global connectivity, organizations need to continually test their security against complex threats, including web application and fileless attacks, memory corruption, return/jump-oriented programming (ROP/JOP), and compromised hardware and software supply chains.
While data centers have traditionally relied on detection and perimeter security solutions to mitigate risk, the proliferation of new cyber threats has increased the need for prevention. The Ponemon Institute estimates that a data center outage now costs an average of more than $740,000, up nearly 40% since 2010, so the staff responsible for data center network security must adopt next-generation prevention strategies that reduce and close the attack surface while improving the efficiency of existing infrastructure, processes and personnel.
Protecting the perimeter
For decades, perimeter security has been the primary means of protecting data centers. The strategy resembles a medieval castle: the asset being protected sits inside a small area, surrounded by solid walls with a heavily guarded entrance. The data center builds layers of security around itself, and these layers work together in depth; the idea is that if one layer fails to stop an attack, the next layer can still provide protection.
Like castles, data centers emphasize inspection of the traffic flowing in and out of the organization. Traditional approaches map out network access points so the perimeter can be continuously tested and reinforced. This works well for detecting attacks and generating alerts, and ideally provides enough security to prevent a breach that could lead to downtime and economic, reputational, or even environmental damage.
Strengthening data center security
Data center security is no longer only about protecting what is inside. Castle-style solutions worked well in the age of mainframes and hard-wired terminals, but they are far less effective against today's threats. In fact, the advent of over-the-air (OTA) communications, IoT devices and cloud computing has made data centers less secure.
The main security challenge facing data centers today is that they must preserve the privacy of their data while deploying applications on premises and in public, private and hybrid clouds. As customers extend more of their business into the cloud, they may also inadvertently increase the risk of attacks that spread through cloned configurations. An attacker can target anything in the operational technology stack: routers, switches, storage controllers, servers and sensors. Once attackers gain control of one device, they can pivot further, potentially attacking every identical device across the network.
Today's attacks come from new or unexpected places, because attackers now have more tools to circumvent perimeter detection and strike targets from inside the data center. Security is not just about infrastructure, said Colonel Paul Craft, director of operations at Joint Force Headquarters for the Department of Defense Information Network (JFHQ-DODIN), at the AFCEA defensive cyber operations symposium in May. "This is our IT platform that records all of our data, it's also our ICS and SCADA systems, and it covers all of our cross-domain networks," he said.
Many attacks can now spread quickly from a single device to every device, as the Ponemon Institute notes; one example is attackers gaining access to 200,000 network devices built on the same code. Fileless attacks such as memory corruption (of buffers, stacks and heaps) and ROP/JOP (return/jump-oriented programming) code reuse are also a growing threat, reported to be roughly ten times more likely to infect devices than traditional attacks.
According to Symantec's 2018 Internet Security Threat Report, supply-chain attacks increased 200 percent over the past year. Many organizations and vendors now control only a small portion of their source code, because the modern software stack is built from third-party binaries drawn from a global supply chain of proprietary and open-source code that can contain hidden vulnerabilities. In addition, zero-day attacks are growing rapidly, with attackers exploiting previously unknown vulnerabilities in software, hardware or firmware.
New era of data center network security
Data centers must shift from focusing only on detection to emphasizing prevention. Because many new attacks bypass traditional network and endpoint protection entirely, the latest generation of tools is designed to block this growing class of attack vectors. This not only improves protection against the latest threats but also increases the effectiveness of existing tools and processes in handling everything else.
Today, hardware in the supply chain must be assumed to be compromised. That means businesses need to build and run protected software on potentially untrusted hardware. Data centers need a new defense strategy that takes a defense-in-depth approach: identifying potential vulnerabilities and hardening binaries directly so that attacks cannot execute or be replicated.
One of the most effective techniques is to transform the software binaries on a device so that malware cannot alter their instructions or propagate through the system. This approach, known as "network hardening," prevents a single exploit from spreading across multiple systems. It narrows the attack surface and shrinks the vulnerability of industrial control systems, embedded systems and devices, greatly reducing the chance of physical and human harm.
The best security always assumes attackers will eventually get in. Rather than reacting to a vulnerability after it has been exploited, network hardening stops malware from taking hold in the data center, protection that even organizations with fewer defensive resources should not forgo.
Tuesday, August 14, 2018
Data center construction: virtualization lowers the operational complexity of data center storage systems
Data center construction: to meet the ever-growing IT resource demands of all kinds of information systems, reduce architectural and management complexity, and enable rapid delivery and elastic allocation of IT resources, building cloud data centers on virtualization technology has become the first choice of university IT departments. In practice, however, most university data centers apply virtualization unevenly across the three pillars of compute, network and storage; storage virtualization in particular has not received enough attention and has become the weak link in campus data center construction.
I. Problems facing data center storage systems
University application systems are numerous and varied, and their storage capacity requirements tend to spiral upward during construction and operation. Because capacity needs are hard to estimate accurately at build time, and because of budget or schedule constraints, storage expansion becomes routine. During expansion, technology changes and procurement restrictions often force a site to abandon expanding the original equipment or buying identical equipment, and instead to purchase new, heterogeneous storage. As the storage estate keeps growing, the following problems become prominent:
1. Management complexity rises sharply
Because storage products from different vendors differ substantially in underlying architecture, management interfaces and software features, configuration and management complexity increases greatly in a heterogeneous environment, and data replication and migration between different devices carries a high cost.
2. Storage resource utilization cannot be improved effectively
In a heterogeneous environment each storage device tends to become a capacity silo; unified resource scheduling is impossible, and utilization cannot be raised effectively.
3. High availability is difficult to achieve
In a heterogeneous environment, adding equipment, changing the architecture or migrating data usually requires taking storage devices offline, which inevitably lowers availability. Meanwhile, the usual high-availability techniques, such as storage mirroring, cannot be implemented smoothly because of technical barriers between heterogeneous arrays.
II. Storage virtualization technology and how it is implemented
According to the SNIA (Storage Networking Industry Association), storage virtualization is "the abstraction, hiding, or isolation of the internal functions of a storage system or subsystem from applications, servers, and network resources, so that storage and data can be managed independently of applications and networks." Put simply, storage virtualization masks the physical characteristics of individual devices and consolidates diverse storage resources into a logical storage pool that can be configured and managed uniformly for the servers above it, effectively resolving the problems created by a heterogeneous storage architecture.
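To make the idea of a logical pool over heterogeneous devices concrete, here is a minimal Python sketch; the class and method names are illustrative only and do not correspond to any particular virtualization product.

```python
# Toy model of a storage virtualization layer: several heterogeneous backend
# devices are presented to servers as one logical pool of capacity.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BackendDevice:
    name: str
    vendor: str
    capacity_gb: int
    used_gb: int = 0

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

@dataclass
class StoragePool:
    devices: List[BackendDevice]
    volumes: Dict[str, List[tuple]] = field(default_factory=dict)   # name -> [(device, gb)]

    def create_volume(self, name: str, size_gb: int) -> None:
        """Carve a logical volume out of whichever devices have free space."""
        remaining, placement = size_gb, []
        for dev in sorted(self.devices, key=lambda d: d.free_gb, reverse=True):
            take = min(dev.free_gb, remaining)
            if take > 0:
                dev.used_gb += take
                placement.append((dev.name, take))
                remaining -= take
            if remaining == 0:
                break
        if remaining:
            raise RuntimeError("pool exhausted")
        self.volumes[name] = placement

pool = StoragePool([
    BackendDevice("array-A", "vendor-X", 1000),
    BackendDevice("array-B", "vendor-Y", 500),
])
pool.create_volume("teaching-db", 1200)
print(pool.volumes["teaching-db"])   # the volume spans both vendors' arrays transparently
```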
There are several ways to implement storage virtualization. In general, depending on where in the I/O path the virtualization sits, it can be host-based, storage-device-based, or storage-network-based.
1. Host-based virtualization
Host-based virtual storage relies on agents or management software installed on one or more hosts to control and manage the virtualization. Because the control software runs on the host, it consumes host resources and raises compatibility issues across operating systems, so this approach scales poorly and its real-world performance is not very good.
Host-based approaches can also affect system stability and security, because they may inadvertently allow unauthorized access to protected data. Since the method requires control software to be installed on the hosts, the failure of a single host can compromise the integrity of data across the entire SAN. Software-controlled storage virtualization may also incur unnecessary interoperability overhead caused by hardware and software differences among storage vendors, so its flexibility is relatively poor. On the other hand, because no additional hardware is required, host-based virtualization is the easiest to implement and has the lowest equipment cost.
2. Storage-device-based virtualization
Storage-device-based virtualization relies on software modules in the array controller. It usually only solves the problem within one vendor's product family and cannot virtualize a complex estate containing equipment from multiple vendors. Relying on the vendor's virtualization module also excludes simple disk shelves and basic storage devices, because such devices provide no virtualization functions of their own, and it effectively locks the storage build into a single brand. That said, for a data center that already runs different product lines from the same vendor, this option makes it relatively easy to virtualize and manage storage resources centrally.
3. Storage-network-based virtualization
Storage-network-based virtualization implements the virtualization function on the SAN network devices of the storage system. Depending on how the data path and the control path are coupled, it can be symmetric (in-band) or asymmetric (out-of-band). In symmetric virtualization the data path and control path coincide: virtual storage is implemented on the path between host and storage during reads and writes. In practice, the device providing the virtualization function sits in the data channel between hosts and storage, and all control and data access must pass through it. This approach requires no change to the existing SAN architecture and consumes no host resources.
In asymmetric virtualization the data path and control path are separate, and virtual storage is implemented outside the host-to-storage path. The asymmetric virtualization device sits outside the data channel between host and storage and communicates with the host systems over a separate network connection, so dedicated client software must be installed on the hosts; the issues it faces are similar to those of host-based virtualization.
III. Benefits of storage virtualization
In practice, implementing storage virtualization brings the following benefits to data center construction and operations:
1. Lower complexity in building and managing the storage system
With storage virtualization, data center operations staff rarely need to work on individual arrays in day-to-day management and maintenance; instead they manage storage resources through the unified interface provided by the virtualization controller. When expanding the storage system, they can flexibly weigh demand, cost and project budget and choose among reusing existing equipment, expanding the original arrays, or purchasing heterogeneous storage, all without interrupting the business.
2. Effective utilization of storage resources
Compared with traditional storage provisioning, thin provisioning built on virtual storage makes effective use of storage resources and avoids physical capacity sitting idle because of over-allocation. At the same time, by building a storage resource pool, storage virtualization breaks through the capacity silo of any single device and consolidates capacity across devices.
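As an illustration of the thin-provisioning idea in the paragraph above, here is a minimal Python sketch; the volume and pool sizes are made-up numbers, and real arrays implement this in the controller rather than in application code.

```python
# Thin provisioning sketch: volumes advertise a large logical size, but physical
# blocks in the pool are consumed only when data is actually written.

class ThinPool:
    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.allocated_gb = 0              # physically consumed
        self.volumes = {}                  # name -> [logical_gb, written_gb]

    def create_volume(self, name: str, logical_gb: int) -> None:
        # Creation is free: no physical space is reserved up front.
        self.volumes[name] = [logical_gb, 0]

    def write(self, name: str, gb: int) -> None:
        logical, written = self.volumes[name]
        if written + gb > logical:
            raise ValueError("exceeds the volume's logical size")
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("pool physically full -- time to add capacity")
        self.volumes[name][1] += gb
        self.allocated_gb += gb

pool = ThinPool(physical_gb=1000)
pool.create_volume("lms", 800)       # logical sizes may exceed physical capacity...
pool.create_volume("mail", 800)
pool.write("lms", 200)
pool.write("mail", 150)
print(pool.allocated_gb)             # ...but only 350 GB are actually consumed
```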
3. High availability of the storage system becomes easy to achieve
Through the storage virtualization controller it is straightforward to define and implement synchronous mirroring and active-active configurations across different storage devices, transparently to the hosts; when a single array has to be taken down because of a fault or for maintenance, the associated services keep running.
4. Quality-of-service management of storage resources
Storage virtualization also enables differentiated management of heterogeneous storage within the system. By configuring resource pools at different service levels, storage of different performance classes can be managed by tier and assigned to the applications that need it; for example, high-end storage can be dedicated to critical applications while lower-end storage serves non-critical ones.
Summary
Universities should pay full attention to storage virtualization when planning and building data centers, study it carefully, and apply it flexibly according to their own needs and circumstances, so as to improve the quality of data center storage services.
Monday, August 13, 2018
Data center construction: how can data center business value be sustained?
Data center construction: the surge of massive data and the need to manage it are directly shaping the development of China's data center industry. One trend is the construction of ever more large, consolidated data centers. According to statistics, China now has more than 500,000 data centers, second in scale only to the US market, and domestic colocation providers are actively pursuing mergers and acquisitions, including purchases of top-ten overseas assets; meanwhile the rise of public and private cloud is moving large volumes of data to the cloud. With such enormous scale and such rapid growth, how can standardized, regulated services satisfy customers' new requirements for data center infrastructure construction and management?
These industry shifts have exposed many problems and challenges in Chinese data centers. According to 2017 statistics, 18% of data centers had experienced a major incident that interrupted applications. The causes include poor design quality driven by cost-cutting at the planning and design stage, immature processes during operations, and insufficient maintenance caused by understaffed or underqualified teams. Because no single service provider is accountable for every stage from planning and design through later operations, faults occur frequently and are not handled promptly.
Beyond this, data centers also face excessive energy consumption. While hyperscale data centers have become a powerful engine of the global economy, pumping lifeblood into its arteries, they also consume enormous amounts of energy. Authoritative global statistics put data center energy use at 3% of total social energy consumption, making green construction and efficient operation an urgent task.
Against these problems and challenges, Schneider Electric, an advocate of the always-on data center philosophy, drew on its years of market observation and experience in data center infrastructure construction and management to be the first to propose and deliver full-lifecycle services, aiming to provide technical support and value-added services at every stage from construction to operations, and helping customers build data centers that are genuinely highly available, highly reliable, and efficiently operated and managed.
Recently, Schneider Electric, the global leader in the digital transformation of energy management and automation, held a media briefing on its full-lifecycle services in Beijing. At the briefing, Zhang Ziyang, Director of Data Center Business Architecture in Schneider Electric's IT Division, and Jiang Sheng, Business Development Manager for Full-Lifecycle Services in the IT Division, shared with the media the strong service capabilities and leading technologies Schneider Electric offers across the entire data center lifecycle.
Delivering "what you see is what you get" with the WHOSE methodology
Zhang Ziyang explained that Schneider Electric's full-lifecycle services for data centers focus on four key phases: first, the planning and design phase, where Schneider Electric provides consulting design and design-verification services; second, testing and verification from the end of construction through handover to operations, to ensure the design delivers exactly what was drawn; third, sustained operational capability for customers during the operations phase; and finally, re-assessment and improvement of the data center. Across these four phases, Schneider Electric's full-lifecycle services provide end-to-end management of the whole process, using standardized, visual tools to support planning, operations and upgrades, improving verifiability and reducing operational risk.
For the early planning, design and construction phase, Schneider Electric uses standardized, visual tools to provide whole-process planning, operations and upgrade services that improve verifiability and reduce operational risk. At the same time, drawing on international operating practice and 36 years of technical experience in China, it safeguards the continuous operation of customers' data centers and enables genuinely sustainable development.
The design-verification service follows Schneider Electric's mature WHOSE methodology: identify the design requirements (What), examine or discover the implementation path (How), optimize the implementation path or technology (Optimize), check the engineering soundness of the implementation (Engineering), and check the degree of standardization of the engineering documentation (Standardization). The ultimate goal of the service is to improve the verifiability, availability and energy efficiency of the end user's data center. Schneider Electric's entire design-certification and test-certification teams work strictly to the WHOSE methodology, with local teams in Beijing, Shanghai and Guangzhou and in the northeast and southwest, covering first-tier cities and major commercial hubs, which is a major advantage in delivering verification services to customers.
Schneider Electric's vast accumulated experience in building and operating data center infrastructure, together with deep industry insight and a mature methodology, allows it to deliver a quality of service that sets it apart from competitors.
Take the Henan Zhongyuan Cloud project as an example, a cooperation between a local government enterprise and a local power plant awaiting restructuring. The customer wanted to use waste heat from its own power plant for cascaded energy use in the data center. In China, however, tri-generation is an extremely complex, highly specialized and very challenging choice, and only a handful of data center campuses in the industry have achieved it. Schneider Electric joined the team as a partner midway through planning and design and, through its design-verification service, helped the design partner and the customer restructure the overall construction scheme into a tri-generation solution that could genuinely be put into operation. The project has now been running for two years, and the local government and the enterprise are both satisfied with the result.
The second half cannot be ignored: efficient operations management is the safeguard
If the planning and design phase is the cornerstone of the data center lifecycle, the later operations phase is its escort fleet.
Overseas statistics suggest that about 57% of data center customers have no emergency plan, 87% believe an interruption would affect their business, and 70% of data center outages are caused by human error; in China the picture is even less optimistic. Domestic operations staff consistently report that data center failures tend to occur five years after construction is completed. In the first five years, thanks to good planning, design and verification, equipment runs well; after five years, as equipment ages, the pressure on professional staff handling emergencies becomes very high. Moreover, most customers put the bulk of their money and attention into the construction phase and neglect operations, and many even hand the data center over entirely to a property management company, which greatly lowers the quality of later maintenance; when a fault occurs it is hard to guarantee a timely, effective and accurate response.
Jiang Sheng stressed that Schneider Electric's approach to the later stage of a data center is "operations" rather than mere "maintenance." Schneider Electric focuses not only on high availability and high verifiability but also on business continuity, equipment availability and energy management, aiming to generate value during data center operation, optimize data, save costs, and deliver value-added services to customers.
In the operations phase, Schneider Electric offers customers several tiers of service. The first tier is maintenance integration, a general contract for services suited, for example, to small financial-sector server rooms, providing unified management of all suppliers of UPS, cooling, power distribution and security equipment. The second tier is maintenance management, an upgrade of maintenance integration in which Schneider Electric dispatches a service expert to the site to supervise the service process and manage changes. The third tier is critical facility operations, a highly consultative service in which Schneider Electric staff provide data center operations on site to supplement or replace existing personnel. The top tier is critical infrastructure operations, where Schneider Electric differentiates itself with all-round operation and maintenance: from the data center's grey and white space up to building management, Schneider Electric can deploy extensive, standardized back-office resources to provide strong support according to customer needs.
Schneider Electric's greatest advantage in the operations phase lies in its professional maintenance teams and strong back-office technical support, using a disciplined operations methodology to deliver all-round core value to customers.
Take the China Unicom data centers as an example. Schneider Electric was engaged to operate Unicom's Hohhot and Langfang cloud bases. Unicom's explicit requirements were: first, to help establish an operations system that meets Unicom's standards; second, to improve the availability and reliability of the entire data center; and third, to help achieve energy savings, emission reductions and cost optimization.
To meet these requirements, Schneider Electric combined the critical-infrastructure operations experience it has accumulated at home and abroad with the advanced methodology of Lee Technologies, a company it acquired, to help the customer establish internal operations management standards and an M&O certification system, making it easier for the customer to deploy and manage data centers rapidly as the business expands. Schneider Electric also provides professional maintenance for the safe operation of the data center infrastructure: all field engineers complete work orders through the app-based Schneider Electric "Qianliyan" remote operations platform, and for some work orders engineers are even required to upload video and photos from before, during and after the work to the cloud platform, so that every step is traceable. Schneider Electric's operations services not only helped the customer achieve zero interruption of critical load, but also built a complete management system for Unicom and, through the digital platform, made operations work genuinely logged, intelligent, visual and traceable.
In summary, the data center industry will not mature overnight. Amid explosive growth, how can high availability, reliability, business continuity and efficient daily operations genuinely be achieved so that IT assets deliver maximum business value? Schneider Electric's full-lifecycle services have already mapped out the path for the industry and provided a professional answer.
Data center migration: how to reduce data center risk
Data center migration: before tackling the complexity of data center design, it is worth considering the use of a resilient system with no single point of failure (SPOF). By definition, a single point of failure is a component whose failure renders the entire system inoperable; in other words, a single point of failure produces a total failure. Such failures may be component faults or incorrect human intervention, such as performing a switching operation without knowing how the system will react.
A 2N redundant system can be regarded as the minimum requirement for an installation with no SPOF. For simplicity, assume the data center's 2N system consists of two identical electrical and mechanical systems, A and B. Fault tree analysis (FTA) will highlight the combinations of events that cause failure; human error, however, is very difficult to model in FTA, because the data used to model it is always subjective and involves many variables.
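For intuition, here is a minimal worked calculation of why 2N helps, assuming the two systems fail independently; the per-system unavailability is an illustrative number, and the simplification ignores exactly the common-mode and human-error couplings discussed below.

```latex
% Independent 2N redundancy: the installation fails only if A and B both fail.
% With per-system unavailability p = 0.01 over some interval:
P(\text{outage}) = P(A\ \text{fails}) \cdot P(B\ \text{fails}) = p^{2} = 0.01^{2} = 10^{-4}.
% Any shared component or common-mode (human) error adds a term of order p_{\text{common}},
% which quickly dominates p^{2} and erodes the benefit of duplication.
```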
If the systems in this 2N example are physically separate, an operation on one system should have no effect on the other. However, it is not uncommon for "enhancements" to be introduced: a simple 2N system has other components added to it, such as disaster-recovery links or shared storage vessels connecting the two systems.
In large designs this becomes an automatic control system (such as SCADA or a BMS) rather than a simple mechanical interlock. The basic principle of the 2N redundant system has been undermined, and the complexity of the system has grown exponentially; the same is true of the skills demanded of the operations team.
A review of the design may still show that 2N redundancy has been achieved, but the resulting complexity and operational challenges undermine the basic requirements of a high-availability design.
Studies show that the particular sequence of events leading to a failure is usually unpredictable, and its consequences are not known until it happens. In other words, the sequence of events is unknown until it is discovered, and therefore it cannot form part of the fault tree analysis.
The Austrian physicist Ludwig Boltzmann developed an entropy equation that has since been applied to statistics, especially to missing information. In one formulation, imagine a grid of boxes, such as a 4 x 2 or 5 x 4 grid, with a coin hidden in one box; the theory gives the number of questions needed to determine which box on the grid holds the coin. If the boxes are replaced by system components and the coin by an unknown failure event, one can consider how system availability is affected by complexity: the more detailed knowledge people have of the system, and the more unknown events are discovered, the fewer failure combinations remain hidden, and the lower the risk.
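Here is a minimal worked instance of the information argument above, using the binary-question form; the grid sizes are the ones quoted in the text.

```latex
% Number of yes/no questions needed to locate the coin among N equally likely boxes:
Q = \log_{2} N .
% For the grids mentioned above:
% 4 x 2 grid:  Q = \log_{2} 8  = 3          questions
% 5 x 4 grid:  Q = \log_{2} 20 \approx 4.3  questions
% Reading components as boxes and the unknown failure as the coin: the more components
% (larger N), the more missing information; each question answered (knowledge gained)
% halves the remaining uncertainty.
```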
Human factors
Research shows that any system with a human-machine interface will eventually fail through some vulnerability. A vulnerability here is anything that could cause a failure in the data center facility, and it may relate either to the infrastructure or to how the facility is operated. Infrastructure concerns the equipment and systems, in particular:
• Mechanical and electrical reliability.
• Facility design, redundancy and topology.
Operations involve human factors, including human error at both the individual and the management level. They concern:
• Adaptability of the operations team.
• The team's response to vulnerabilities.
The more complex the system, the more exposed it is to human factors, and the more training and learning the facility requires. Learning applies not only to individuals but also to organizations. Organizational learning is characterized by maturity and process (shown in the chart as cumulative experience), for example around data center structure and resources, maintenance, change management, document management, commissioning, operability and maintainability.
Personal learning is a function of knowledge, experience and attitude (shown in the chart as depth of experience). Developing an organizational and personal learning environment helps reduce failure rates and gives operators the expertise to cut energy waste effectively.
[Figure: the universal learning curve applied to the data center]
It is important to understand that zero failures can never be achieved, because the relationship between failure and experience follows an exponential curve. Even data center facility operators with good knowledge and experience are prone to complacency and to failures triggered by a sequence of previously unknown events.
Conclusion
Providing a learning environment that improves organizational and personal knowledge reduces data center risk. Although experienced operators can reduce failure rates, an overly complex design can still fail if it is put into operation without adequate training.
Friday, August 10, 2018
Data center construction: why do enterprises build their own data centers?
Data center construction: do you regard data management as the core of your company's business? Unless your company is actually in the data-management business, data management should never be the core of your business.
For your enterprise, the real core business is the business you live on, the one that defines which industry your company actually belongs to. You might be a supplier manufacturing shoes or machine parts, or you might turn raw materials into products through a production process. Whichever way you make your living, data management is not the core of your business. As the futurist Geoffrey Moore puts it, data processing or data management is merely "context."
Why does your enterprise own a data center?
When your enterprise built its own data center, probably few other organizations had one, and building your own naturally felt like the obvious, comfortable choice. Look back roughly two hundred years and you will find that most manufacturers chose to build factories along rivers so they could install their own water-wheel-driven generators. The real reason business owners of two centuries ago had to build along rivers was that there was no municipal electric utility yet.
When Amazon, IBM, Microsoft and a series of other large information technology (IT) companies first began loudly promoting "cloud services" that could run your business, but those services ran in other companies' data centers, you did not believe them. You worried about the security, reliability and true cost-effectiveness of such services, and so your enterprise hesitated and chose not to place its high-value data assets in those clouds.
So your enterprise chose to build its own data center or lease colocation space, which meant it also had to manage and maintain the hardware and IT infrastructure software that process its data. You had to house all of this in a large operations space with abundant power, internet connectivity, heavy cooling plant, generators, battery-based backup power, and miles of copper and fiber cable. You invested in digital and physical security systems to keep it safe, and then kept investing to keep the whole facility running, maintained and growing so it could provide enough capacity for your company's ever-increasing business needs.
What has changed?
Today, the public cloud services on the market have been amply proven to be secure and reliable. Just look at the hyperscale providers, including Amazon Web Services, Microsoft Azure, Google Cloud Platform and IBM SoftLayer. It is safe to assume that none of these giants would stake their reputation on offering an insecure or unreliable service.
Today's internet provides a high-speed, worldwide data distribution system that is dependable, ubiquitous and extremely cost-effective. Unlike in the last century, the internet now connects almost anyone, almost anywhere, through strategically located public cloud data centers around the globe.
When should your enterprise decide to give up its own data center?
The smartest way to migrate from your own on-premises data center to one based on public cloud services is to evaluate, step by step, each item of data and each workload to be migrated. This takes a considerable amount of time, during which your operations become a hybrid environment.
So the first step is to move from an on-premises environment to a hybrid one. As you gradually migrate each data workload and application to public cloud services, your enterprise remains hybrid. There may come a day, however, when so many workloads have moved to the cloud that your business no longer needs a data center of that scale; at that point you can simply shrink it, because doing so makes more sense.
Eventually, your enterprise can eliminate every application, all data and all workloads from its own data center and migrate each application and its data to public cloud infrastructure, possibly with one provider but more likely spread across several. Build a cloud migration roadmap by prioritizing where each workload and each application will run best. Once every application and all data have been migrated, your enterprise can give up its own data center.
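As an illustration of the "prioritize each workload to build a roadmap" idea, here is a minimal Python sketch; the scoring criteria and weights are invented for the example and would differ for every organization.

```python
# Toy cloud-migration roadmap: score each workload on a few (made-up) criteria
# and order the migration waves accordingly.

workloads = [
    {"name": "test/dev",    "criticality": 1, "data_sensitivity": 1, "cloud_ready": 5},
    {"name": "backup/DR",   "criticality": 2, "data_sensitivity": 2, "cloud_ready": 4},
    {"name": "mail/collab", "criticality": 3, "data_sensitivity": 2, "cloud_ready": 4},
    {"name": "core ERP",    "criticality": 5, "data_sensitivity": 5, "cloud_ready": 2},
]

def migration_score(w: dict) -> int:
    # Higher score = migrate earlier: easy wins first, sensitive/critical systems last.
    return w["cloud_ready"] * 2 - w["criticality"] - w["data_sensitivity"]

roadmap = sorted(workloads, key=migration_score, reverse=True)
for wave, w in enumerate(roadmap, start=1):
    print(f"Wave {wave}: {w['name']} (score {migration_score(w)})")
```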
By then, the data center equipment your company has used for years will have paid for itself through productive use and financial depreciation. There is no need to renew the related software licenses. Every investment in the data center will have delivered its return, and from then on you pay only predictable, budgetable and significantly lower operating expenses, producing remarkable savings. Your enterprise can complete the move to cloud services without business interruption and without any risk of data loss, and it retains full control of all that data, because it has spent a considerable period migrating from on-premises deployment to a hybrid cloud.
Is your enterprise eager to get out of the data center business? Are you trying to work out how to fund the move? We suggest following the successful practice of many other companies: stop renewing maintenance contracts on old, obsolete storage, stop paying again to refresh old servers, and use that budget to fund your cloud migration project.
This is how most enterprise customers handle cloud transformation: they redirect the budget that would have gone to hardware and software refreshes into the cloud migration project and give up the in-house data center. Typically they retire the disaster recovery (DR) data center first and back up data into the cloud, then migrate applications and their data to the cloud and shut down the remaining data centers.
Over the past few years, thousands of enterprise customers have made this transition. Some now rely entirely on public cloud services to host their data and business applications. Many more have closed their disaster recovery data centers and expect to shrink or close the rest within a few years. Most enterprise customers are at some stage of hybrid cloud adoption and have only just begun moving business-critical workloads into the cloud. By making full use of the machine learning, data analytics and other advanced IT services the cloud provides, they can focus on modernizing their core business.
Ultimately, most enterprise customers will give up their own data centers and exit the business of maintaining data center hardware and IT infrastructure software altogether, because it simply is not the core of their business.
Today the internet provides a reliable, secure data distribution system with the performance once reserved for private networks. Enterprise customers can continue their cloud transformation at their own pace and remain in hybrid mode until the day your company no longer owns or leases any data center; at that point IT becomes an advantage that provides the operating environment for your business, letting you concentrate on what you do best, your true core business.
Data center migration: obstacles to adopting liquid cooling
Data center migration: the rise of machine learning has driven power densities in data centers ever higher. Facilities deploying large numbers of servers now see rack densities of 30 kW to 50 kW, prompting some data center operators to switch from air cooling to liquid cooling.
Although some data center operators use liquid cooling to improve the efficiency of their facilities, the main reason is the need to cool more power-intensive racks.
But the conversion from air cooling to liquid cooling is not simple. Here are some of the major obstacles encountered in using liquid cooling technology in data centers:
1. Two cooling systems are required
Lex Coors, chief data center technology officer at European colocation giant Interxion, says it makes little sense for existing data centers to switch to liquid cooling all at once, so the operations teams at many facilities would have to manage and operate two cooling systems rather than one.
This makes liquid cooling a better choice for new data centers or data centers that require major modifications.
But there are always exceptions, especially for very large manufacturers, whose unique data center infrastructure problems often require unique solutions.
Google, for example, is currently converting the air-cooling systems in many of its existing data centers to liquid cooling to cope with the power density of its TPU 3.0 processor, which powers its latest machine learning workloads.
2. Lack of industry standards
The lack of liquid cooling industry standards is a major obstacle to the widespread adoption of the technology.
"Customers must first have their own IT equipment suited to liquid cooling," Coors said. "And the standardization of liquid cooling technology is not yet mature, so organizations can't simply adopt it and make it work."
Interxion's customers do not currently use liquid cooling technology, but Interxion is prepared to support it if necessary, Coors said.
3. Electric shock hazard
Many liquid cooling solutions rely mainly on dielectric liquids, which are non-conductive and pose no electric shock hazard. Some organizations, however, may use cold or warm water for cooling.
"If a worker happens to touch the liquid at the moment it leaks, there's a risk of electrical shock and death, but there are many ways to deal with it," Coors said.
4. Corrosion
As in any system with liquid piping, corrosion is a major problem facing liquid cooling technology.
"Pipeline corrosion is a big problem, and it is one of the problems that needs to be solved," Coors said. Liquid cooling manufacturers are improving pipes to reduce the risk of leakage and to seal pipes automatically if a leak does occur.
He added that the rack itself also needs containment, so that if a leak occurs the liquid stays within the rack and causes no great harm.
5. Operational complexity
Jeff Flanagan, executive vice president of Markley Group, said the biggest risk of using liquid cooling may be increased operational complexity; the company plans to launch liquid cooling services in its high-performance cloud computing data center early next year.
As data center operators, we prefer simple technologies: the more components there are, the more likely something is to fail. With chip-level liquid cooling, liquid flows to every CPU or GPU in a server, adding many components to the cooling loop and increasing the possibility of failure.
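A minimal series-reliability calculation makes the "more components, more likely to fail" point above concrete; the per-component reliability figures are illustrative assumptions only.

```latex
% For n components in series, each with reliability R over some period,
% the cooling loop works only if every component works:
R_{\text{loop}} = \prod_{i=1}^{n} R_i = R^{\,n} \quad (\text{identical components}).
% Example: R = 0.999 per fitting/pump/cold plate over a year.
% n = 10  components:  R_{\text{loop}} = 0.999^{10}  \approx 0.990
% n = 100 components:  R_{\text{loop}} = 0.999^{100} \approx 0.905
% Ten times the parts turns a roughly 1% annual failure chance into roughly 10%.
```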
In operating data centers, there is another complication: immersing servers in dielectric fluids, which requires higher insulation technology.
Thursday, August 9, 2018
Computer room construction: keeping the server systems in the computer room secure!
Computer room construction: as IT technology evolves, new viruses appear constantly and hackers keep finding new tricks. Servers sitting in the relatively open environment of the Internet face greater risks than ever. A growing number of server attacks, server security vulnerabilities, and the threat of commercial espionage endanger server security, so the question of how to protect servers is drawing more and more attention. Below, Tianhu Data (天互數據) offers the following tips for keeping servers secure.
1. Start with the basics: install system patches promptly
Whether you run Windows or Linux, every operating system has vulnerabilities. Applying patches promptly, so that vulnerabilities cannot be exploited in deliberate attacks, is one of the most important guarantees of server security.
2. Install and configure a firewall
There are many hardware- and software-based firewalls today, and most security vendors offer related products. For server security, installing a firewall is essential. A firewall is very effective at blocking unauthorized access, but installing one does not by itself make the server secure. After installation you need to configure the firewall appropriately for your own network environment to get the best protection.
3. Install network antivirus software
Viruses are rampant on today's networks, so a network edition of antivirus software should be installed on the server to control the spread of viruses. The antivirus software must be upgraded regularly or as soon as updates are available, and the virus signature database should be updated automatically every day.
4. Shut down unneeded services and ports
When a server operating system is installed, it starts some services that are not needed; these consume system resources and add security risks. A server that will not be used at all for a period of time can be shut down completely; for servers that remain in use, unneeded services such as Telnet should be disabled, and TCP ports that do not need to be open should also be closed. (A small sketch for checking which ports are reachable follows below.)
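As a rough illustration of auditing which TCP ports actually accept connections before closing the unnecessary ones, the following sketch tries to connect to a few common ports on a host. The host name and port list are placeholders; a real audit would use a dedicated scanner.

# Minimal TCP port check: reports which of the listed ports accept connections.
# Host and port list are illustrative placeholders only.
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "127.0.0.1"  # replace with the server to audit
    for port in (21, 22, 23, 80, 443, 3389):  # 23 = Telnet, often worth closing
        state = "OPEN" if port_is_open(host, port) else "closed"
        print(f"port {port:5d}: {state}")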
5. Back up the server regularly
To guard against unexpected system failures or careless, improper operations by users, the system must be backed up. In addition to a full system backup once a month, modified data should be backed up once a week. Important system files that have been modified should also be stored on a different server, so that when the system crashes (usually because of a disk failure) it can be restored to a normal state promptly. (A minimal incremental-backup sketch follows below.)
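The monthly-full-plus-weekly-incremental scheme described above can be illustrated with a minimal sketch that copies only files modified since the last run. The paths and the state file are placeholders, and a production setup would use proper backup software and copy the data to a different server.

# Minimal incremental backup sketch: copy files changed after the last run.
# Source/destination paths and the state file are illustrative assumptions.
import os
import shutil
import time

SRC = "/srv/data"            # directory to protect (placeholder)
DST = "/backup/data"         # backup target, ideally on another server (placeholder)
STATE = "/backup/.last_run"  # stores the timestamp of the previous backup

def last_run_time() -> float:
    try:
        with open(STATE) as f:
            return float(f.read().strip())
    except (OSError, ValueError):
        return 0.0  # no previous run: back up everything

def incremental_backup() -> None:
    cutoff = last_run_time()
    for root, _dirs, files in os.walk(SRC):
        for name in files:
            src_path = os.path.join(root, name)
            if os.path.getmtime(src_path) > cutoff:
                rel = os.path.relpath(src_path, SRC)
                dst_path = os.path.join(DST, rel)
                os.makedirs(os.path.dirname(dst_path), exist_ok=True)
                shutil.copy2(src_path, dst_path)
    with open(STATE, "w") as f:
        f.write(str(time.time()))

if __name__ == "__main__":
    incremental_backup()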
6. Protect accounts and passwords
Account and password protection is the first line of defense of a server system; most attacks on server systems today begin by intercepting or guessing passwords. Once an intruder is inside the system, the preceding defenses are largely useless, so managing the server administrators' accounts and passwords is a critical part of keeping the system secure.
7. Use a hot-aisle/cold-aisle layout for data center equipment
Although this technique has existed since the mid-1990s, it remains effective. The layout lets cold air travel down the aisle straight to the intake vents at the front of the servers, while the hot exhaust from the AC power supplies at the rear of the servers is ducted away, which greatly reduces the energy consumed for cooling.
8. Monitor system logs
By running a system logging facility, the system records how every user uses it, including the most recent login time, the account used, and the activities performed. The logging facility generates periodic reports; by analyzing them you can tell whether anything abnormal has occurred. (A minimal log-scanning sketch follows below.)
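As a small illustration of the kind of log analysis described above, the sketch below counts failed login attempts per account in a text log. The log path and the "Failed password" pattern are assumptions modeled on a typical Linux auth log, not a specification.

# Minimal log scan: count failed login attempts per user in an auth log.
# The log path and line format are assumptions for illustration only.
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"  # placeholder; varies by system
PATTERN = re.compile(r"Failed password for (?:invalid user )?(\S+)")

def failed_logins(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, errors="replace") as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                counts[m.group(1)] += 1
    return counts

if __name__ == "__main__":
    for user, n in failed_logins(LOG_PATH).most_common(10):
        print(f"{user}: {n} failed attempts")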
Server security is a big issue. If you do not want important data destroyed by viruses or hackers, or stolen by someone who could use it against you, the tips in this article may help.
Datacenter migration: how to deal with old servers in the computer room?
A data center migration can leave many people wondering how to handle their old server hardware. Why does July 14th matter? It was the last day of Microsoft's support for Windows Server 2003. In China, roughly 40% of servers are said to still run this retiring system, so many old systems will be upgraded during this period, and a great deal of server hardware running Windows Server 2003 will be ready for retirement.
Old server hardware cannot simply be thrown away. Discarding it carelessly not only pollutes the environment but can also leak data. So how should retired server hardware be handled?
There are several options to consider:
1. Donate it
If your business has moved to newer hardware, the old equipment can still find a good home rather than ending up in a landfill. Putting it to use through a well-run charitable program not only solves the disposal problem but also strengthens the company's social-responsibility image.
Take the Electronic Recycling Association as an example: there are nonprofit organizations around the world that will take the equipment you discard and put it to good use.
2. The second-hand market
Just as you might sell your old iPad when you buy a new smartphone, old server equipment can be sold on the second-hand market.
You may find enthusiasts, or even small businesses, who will turn the servers into a home media streaming system or use them to run SharePoint.
If you do not want to haggle with buyers, you can instead sign an agreement with a potential buyer who takes responsibility for recycling the old equipment.
3. Dispose of it responsibly
If your servers really are at the end of their useful life, and you would rather not hand the machines to anyone else, you will need to deal with the e-waste yourself.
Handling e-waste is not a matter of simply throwing it out; e-waste can do enormous harm to the environment.
In any case, when you decide to retire these machines, you must deliberately destroy the data on the hard drives to prevent anyone with bad intentions from stealing your company's data, because data-recovery services can restore data from old drives.
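To illustrate what "deliberately destroying the data" can mean in practice, here is a minimal sketch that overwrites a single file with random bytes before deleting it. The path is a placeholder, the sketch does not cover whole-disk or SSD wear-leveling cases, and a real decommissioning process would use certified wiping tools or physical destruction.

# Minimal sketch: overwrite a file with random data, then remove it.
# Path is a placeholder; real decommissioning should use certified tools.
import os

def overwrite_and_delete(path: str, passes: int = 3, chunk: int = 1024 * 1024) -> None:
    size = os.path.getsize(path)
    for _ in range(passes):
        with open(path, "r+b") as f:
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(os.urandom(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

if __name__ == "__main__":
    overwrite_and_delete("/tmp/old_customer_export.csv")  # hypothetical file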
Wednesday, August 8, 2018
Computer room construction and storage architecture: tape storage is "reborn" in the data center
Computer room construction: today's enterprises and organizations depend heavily on their computer systems and the information they store. Data continues to grow exponentially, and the demand for data storage keeps rising with it.
Data storage has become a hot topic because organizations must work out how to store the huge volumes of data they are generating. The growth of user-generated content, together with mounting compliance pressure, means information storage will remain a priority for a long time to come.
Simply adding more storage capacity to absorb this data growth is no longer a viable strategy for organizations that must squeeze budgets, floor space, power, and management resources. This has given rise to modern storage solutions built on the principles of high density and low power consumption.
A modern storage infrastructure
Is there a new technology or strategy that can help enterprises overcome data growth that exceeds their budgets? The answer is no. There is, however, a technology that has been around for quite a long time and can satisfy all of today's organizational storage needs.
Organizations that want to build a modern storage architecture find they can combine flash, disk, and tape into an infrastructure that stores data with a balance of access speed and affordability. In this mix, tape provides the greatest density and the lowest power consumption.
In recent years tape has been written off as a dying data center technology, yet as data and storage environments keep evolving it remains relevant precisely because of its high density and low energy consumption. Disk-based solutions are often considered for data protection scenarios as the opposite of traditional tape, but for most organizations the important question is not which of the two technologies to choose; it is how to use both effectively within a modern storage architecture.
Put simply, storage decisions should come from a balanced "equation" that weighs business requirements against maximizing the return on investment (ROI) of each storage device. (A small cost-comparison sketch follows below.)
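The "balanced equation" above can be made concrete with a toy cost comparison. The sketch below computes cost per terabyte-year for flash, disk, and tape tiers from purchase cost, lifetime, and power draw; every figure is an invented placeholder chosen purely to show the shape of the calculation, not real market pricing.

# Toy storage-tier cost model: (purchase + energy) per TB-year.
# Every number here is an assumed placeholder, not real pricing data.

ELECTRICITY_PRICE = 0.10  # assumed $/kWh
HOURS_PER_YEAR = 24 * 365

tiers = {
    #            $/TB  lifetime_yr  watts/TB
    "flash": (300.0, 5, 5.0),
    "disk":  (40.0,  5, 8.0),
    "tape":  (10.0, 10, 0.5),   # tape consumes almost nothing at rest
}

def cost_per_tb_year(price: float, lifetime: float, watts: float) -> float:
    capex = price / lifetime
    energy = watts / 1000.0 * HOURS_PER_YEAR * ELECTRICITY_PRICE
    return capex + energy

for name, (price, lifetime, watts) in tiers.items():
    print(f"{name:5s}: ${cost_per_tb_year(price, lifetime, watts):6.2f} per TB-year")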
The causes of explosive data growth
For a long time, organizations of every size have had to cope with ever-growing volumes of "business-critical information" and "other information". The reasons behind this extraordinary data growth vary from one organization to another, but they mainly involve the following:
1. Organizations demand more detailed information about products, services, and customers, information that shapes business strategy, drives growth, and raises service levels (business applications and data warehouses).
2. User-driven growth in unstructured content such as images, video, and audio, alongside more traditional content.
3. The need to reduce risk, shaped by internal policies and compliance with external regulations, requires data to be retained for longer.
IT managers care less about the causes of this exponential data growth than about how to store it. The burden is heaviest in backup and archive storage, because corporate data is being kept for longer. The key challenge for organizations is to balance storage needs against budget constraints while meeting business requirements and protecting corporate data from the many threats that could cause catastrophic data loss.
Final thoughts
The roles of disk and tape have made a comeback in the data center, complementing each other in a best-practice strategy. Organizations can exploit the high performance of disk and flash for business-critical data while using tape for its superior density, longevity, and economics.
In many environments, tape is no longer the primary backup target for important applications with strict recovery and access requirements. It does, however, continue to serve as the primary backup and archive target for applications with less demanding requirements, both for data protection and for total cost of ownership (TCO).
Tape plays an important role in a modern storage architecture in which data center density must keep pace with the exponential data growth organizations face today. Migrating old backups off more expensive disk-based media is cost-effective, and when archival data must be kept for a very long time, tape should be the cheapest and most reliable data protection medium.
Datacenter migration: the driving factors behind demand
Datacenter migration: the factors driving enterprise demand for leased data center space continue to evolve. Against this ongoing change, more than 700 decision makers responsible for selecting enterprise IT and storage services took part in a study commissioned by Vertiv to better understand this sustained, steady development.
The study, conducted by the research firm 451 Research, aims to understand how demand for leased data center space is changing. Looking back to the early 2000s, most demand for leased data center space came from telecom operators. Today, however, greater demand comes from service providers, including public cloud providers, and from businesses looking for space for higher-level services.
While analysts, investors, and pundits have predicted that these trends will reduce demand for leased data center space, such views do not account for potential future demand from wider Internet of Things adoption. Nor do they account for the need for hybrid data center space, or for the fact that, for many reasons, not all workloads are moving to the cloud.
Future opportunities
As the report makes clear, the outlook for data center demand is not entirely negative. The following seven major findings describe what will drive current and future demand for leased data center space and how it will affect multi-tenant data center (MTDC) providers.
Continuous cloud adoption
In less than a decade, cloud computing has moved from the fringe of the market to the mainstream. With its widespread adoption, companies have been shifting IT from internal data centers to external colocation, hosted private cloud, and public cloud environments. Although the average enterprise still keeps 40 percent of its workloads in internally deployed data centers, and as much as 36 percent in other non-cloud environments, most respondents plan to increase their use of private and public clouds over the next two years.
The development of the Internet of Things will further drive demand for data centers
IoT adoption was widespread among the 700 respondents surveyed; only 2% said they were not involved in any IoT project. Enterprise deployments are clearly still early on the IoT maturity curve, with about two-thirds (64%) of respondents describing their current IoT activity as "in testing or planning".
IoT projects often require multiple locations for data analysis and storage. These include endpoint devices with integrated compute and storage, intelligent gateway devices, nearby devices that perform local computing, internally deployed data centers, colocation facilities, hosting sites, and network providers' points of presence.
Not only do many possible destinations exist for data analysis and storage; many deployments are likely to end up storing, integrating, and moving data across a combination of public clouds and other commercial facilities, including hosting sites and/or network providers.
The promise of an expanding Internet of Things
Respondents said that although most businesses are still in the early stages of their IoT projects, a significant amount of IT capacity is already used for IoT. Surprisingly, 54% of respondents said that between 26% and 75% of their current IT operations support IoT plans. Looking ahead two years, 73 percent of respondents said they expect as much as three quarters (75 percent) of their data center and cloud capacity to be used to support IoT plans.
Analytics workloads drive computing requirements
Beyond storing IoT data, cloud computing also allows that data to be processed, which is another big opportunity for data center providers. The public cloud is currently the most popular platform (39%) for analyzing IoT-generated data, but it is far from the only one. In practice, processing is also distributed across colocation facilities (30%), local computing devices attached to the data source (30%), network operator infrastructure (31%), and internal data centers (35%).
Workloads and providers
The nature of an IoT workload also affects where its data is stored and processed. Slightly less than half of respondents (48%) said quality control and tracking systems are most likely to be processed near the data source. To meet this requirement, micro-modular data centers are likely to become more prominent alongside nearby multi-tenant data centers (MTDCs).
An undecided opportunity
For multi-tenant and micro-modular data center providers, organizations that have not yet settled their IoT infrastructure represent a market opportunity.
A quarter of respondents named public cloud providers as their top choice of infrastructure provider for IoT storage and processing. The remaining preferences are fairly evenly split, with some respondents choosing a mix of public cloud, private cloud, and colocated data centers (21 percent). In addition, 28 percent of respondents chose services provided by network operators (14 percent) or hosted service providers (14 percent).
Fog computing at the edge
The OpenFog Consortium defines fog computing as "a system-level architecture that distributes computing, storage, control, and network resources and services anywhere along the continuum from cloud computing to the Internet of Things."
Among the respondents there were some very early adopters, with as many as 45 percent saying they were "familiar" or "very familiar" with the OpenFog Consortium. The main market driver for fog computing is real-time analysis of data streams, chosen by more than a quarter (26 per cent) of respondents, followed by lower network backhaul costs (24 per cent) and improved application reliability (21 per cent).
The key points
Based on these premises, the survey report further identified eight key points for multi-tenant data center (MTDC) providers:
(1) Streamlining public cloud use, or making it more secure through hosted services and private cloud options, is becoming increasingly important to customers.
(2) As demand for off-premises deployments grows, multi-tenant data center (MTDC) providers with interconnection or hosted services will benefit greatly.
(3) Hosting providers and telecom operators are in a unique position to address the specific challenges of the public cloud.
(4) IoT is an opportunity that data center capacity providers should not ignore.
(5) The emergence of the Internet of Things has created a new battleground for where computing capacity is located.
(6) The Internet of Things will bring applications and workloads that require near-real-time response (low latency), which means computing capacity may move closer to the network edge or to devices to minimize the impact of transmission delay.
(7) The fog computing/edge computing market will create important opportunities for cooperation.
(8) Marketing should focus on communicating the data center services that support critical fog/edge computing.
Beyond these key points, data center providers should pay particular attention to the vertical industries, and the countries/regions, with the highest proportion of IoT plans in the mature planning stage. For example, the study found that Italy has the highest percentage of organizations using external cloud computing (67 percent), while China is the most active in using hosted facilities as an IoT data storage environment in the coming year. The biggest shift in IoT data storage is away from enterprise-owned data center facilities: while 71 per cent of the companies surveyed store IoT data internally today, that number is expected to fall to 27 per cent within a year.
If one thing is clear, it is that developments in cloud computing and the Internet of Things will have a significant impact on data center demand. Data center providers that stay open to the opportunities these emerging technologies offer, and to the forces driving demand for leased data center space, will be able to enter new markets and stay ahead of the competition.
Tuesday, August 7, 2018
Computer room construction: what qualities should an excellent UPS (uninterruptible power supply) have?
Computer room construction: as the infrastructure that keeps data center services running without interruption, the power supply and distribution system faces severe challenges, and any failure can cause enormous losses. How can battery faults be found and removed quickly, before they cause trouble? How can the stubbornly high electricity bills of lightly loaded data centers be brought down? Traditional UPS systems are complex to operate and painful to manage; how can that be solved? And what does the era of cloud convergence demand of data center power systems?
Facing these questions, Huawei's modular UPS has something to say. Huawei has more than 20 years of technical depth in the power supply field and deep market experience. As a core network energy product, the Huawei modular UPS has the following strengths:
#High reliability#: a fully redundant architecture makes the system highly reliable. From redundant control modules and redundant power modules to a dual-bus design, single points of failure are eliminated.
#High efficiency#: the Huawei modular UPS reaches a system efficiency of up to 97.1% and a module efficiency of up to 97.5%. It also stays highly efficient at low load: 96.5% at a 20% load factor and 97.1% at 40%, matching the load range in which most data centers actually operate. Higher efficiency means lower electricity bills: a single percentage point of efficiency gain can save almost the equipment investment cost over the product's life cycle. (A rough savings calculation appears in the sketch below.)
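To show why one percentage point of efficiency matters, here is a rough, hypothetical calculation of the annual electricity saved by moving a UPS from 96% to 97% efficiency at a given IT load. The load, tariff, and efficiency figures are assumptions for illustration, not Huawei data.

# Rough annual saving from a 1-point UPS efficiency improvement.
# Load, tariff, and efficiencies are assumed example values only.

IT_LOAD_KW = 500.0          # assumed IT load carried by the UPS
HOURS_PER_YEAR = 24 * 365
TARIFF = 0.10               # assumed $/kWh

def annual_input_kwh(load_kw: float, efficiency: float) -> float:
    """Energy drawn from the grid to deliver the given IT load."""
    return load_kw / efficiency * HOURS_PER_YEAR

before = annual_input_kwh(IT_LOAD_KW, 0.96)
after = annual_input_kwh(IT_LOAD_KW, 0.97)
saved_kwh = before - after
print(f"energy saved: {saved_kwh:,.0f} kWh/year")
print(f"cost saved:   ${saved_kwh * TARIFF:,.0f}/year")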
#Simple operation and maintenance#: a modular UPS does not need expert maintenance; ordinary engineers can run it. Power modules are hot-swappable online, and two people can complete a replacement within five minutes. Compared with the complex, factory-specialist maintenance a tower UPS requires, this saves a great deal of maintenance time and cost.
New components can only be introduced after passing Huawei's strict selection process:
* after a component is introduced, it undergoes reliability testing;
* when it enters production, incoming materials are controlled and sampled for IQC inspection;
* production testing and control are applied at every step, from PCB to board, to module, to the complete unit;
* failure analysis and weak-point improvements are made based on field performance, to address customer issues.
Backed by Huawei's strong R&D capability and strict quality assurance system, the Huawei UPS has passed more than 1,400 standard tests and 21 dedicated reliability trials to guarantee product quality, and it combines digital information technology with traditional power electronics to improve scalability and availability. The Huawei high-frequency modular UPS uses online double conversion and modular, redundant components, with fully digital control based on DSP (digital signal processing), giving high reliability and high power density. Its power modules, monitoring modules, bypass modules, and control modules are all hot-swappable, so installation, expansion, and maintenance are simple; it supports early warning of failures in key components to keep faults from spreading, and it can provide stable, reliable power protection.
Li Junpeng, president of marketing operations for Huawei Network Energy, said: "Digital transformation in every industry poses huge challenges to data center infrastructure. Facing enormous transaction-processing demands, a highly reliable power supply and distribution solution is the fundamental guarantee of zero service interruption. The Huawei modular UPS achieves intelligent management through digital and networked means and can better support the future growth of customers' businesses."
The Huawei modular UPS has performed remarkably in recent years, with its global market share rising rapidly, thanks largely to Huawei's sustained investment and R&D in this field. It has won honors such as Modular UPS Company of the Year 2016 and the 2016 German DCI platinum award in the power supply and distribution category. Huawei's power solutions are in wide commercial use worldwide, covering government, ISPs, transportation, finance, and other key industries; they have earned the trust of many customers in Europe, the South Pacific, and other regions, maintained long-term, stable partnerships, and provide intelligent, all-round power protection for users around the world.
Datacenter migration: how to move from modularization to intelligence?
Datacenter migration: what are HUAWEI's architects doing about the trend toward intelligent data centers? First, let's answer this question: why make the data center intelligent? Can manual inspection really be dropped?
Not entirely, because people, as the protagonists of data center operations and maintenance, need too much experience; they can only remediate after the fact, cannot give early warnings, and cannot achieve fine-grained management. The role of people in the data center should ultimately shift toward execution rather than management, with management work gradually handed over to ever-improving AI. Adding intelligence to the modular data center will make the data center more capable and more complete.
Facing the intelligence of the data center, how did HUAWEI do it?
Through continuous intelligent upgrades of its self-developed equipment and its understanding of L2-layer services, HUAWEI launched the smart micro-module 3.0 product built around the I3 feature set (iPower, iCooling, iManager).
Why "3"? Put simply, 1.0 was hardware integration, 2.0 was the combination of software and hardware, and 3.0 is the integration of functions. So what functions do the three I3 features bring together? Let me go through them one by one.
iPower keeps the business from being interrupted: dangerous states can be predicted in advance, and fire hazards are removed at the earliest moment.
In terms of intelligence, iPower mainly provides:
Full-link power supply visualization and alarms that can pinpoint and identify faults within minutes.
Comprehensive monitoring of switch current, voltage, and temperature on each power supply branch, with abnormal states reported in advance.
Socket-level power monitoring, so the running state of equipment in each cabinet can be seen at a glance.
A battery management system that monitors key information for every cell, such as SOH, current, voltage, internal resistance, and temperature (see the sketch after this list).
When a battery or socket device fails, the faulty battery or socket supply can be isolated to remove the fire hazard.
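As a minimal illustration of the per-cell monitoring described in the list above (not Huawei's actual iPower implementation), the sketch below models a battery cell reading and flags cells whose internal resistance, temperature, or state of health drifts past assumed thresholds.

# Illustrative per-cell battery telemetry check; thresholds are assumptions,
# not Huawei iPower parameters.
from dataclasses import dataclass

@dataclass
class CellReading:
    cell_id: str
    voltage_v: float
    current_a: float
    temperature_c: float
    internal_resistance_mohm: float
    soh_percent: float

# Assumed alarm thresholds for illustration.
MAX_TEMP_C = 45.0
MAX_RESISTANCE_MOHM = 8.0
MIN_SOH_PERCENT = 80.0

def alarms(reading: CellReading) -> list[str]:
    problems = []
    if reading.temperature_c > MAX_TEMP_C:
        problems.append("over-temperature")
    if reading.internal_resistance_mohm > MAX_RESISTANCE_MOHM:
        problems.append("high internal resistance")
    if reading.soh_percent < MIN_SOH_PERCENT:
        problems.append("low state of health")
    return problems

if __name__ == "__main__":
    sample = CellReading("string1-cell07", 2.25, 1.8, 47.5, 9.1, 78.0)
    for p in alarms(sample):
        print(f"{sample.cell_id}: {p}")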
An easier way to think of it: iPower is Bian Que plus Hua Tuo, the legendary physicians rolled into one, able to spot the illness at a glance and to cut to the bone to cure it.
iCooling: cooling that is not only reliable but also energy efficient.
An AI self-learning algorithm, combined with the temperature and humidity of the aisle, adjusts indoor and outdoor fans, compressors, expansion valves, and so on, saving about 8% of energy, which can cut electricity costs by hundreds of thousands each year.
The temperature cloud map and the load power are no longer independent functions. iCooling links the temperature map, the cabinet load, and the temperature control system to provide double insurance: eliminating hot spots in real time and eliminating hidden hot-spot risks.
The refrigerant charge of the air conditioner is no longer half-hidden: with one click, iCooling checks the refrigerant level, solving the problem of overheating and downtime caused by insufficient refrigerant.
iManager is the brain of the smart micro-module 3.0. It not only hosts the iPower and iCooling algorithms but also makes computer room operations easier. The best technology is technology whose presence people cannot feel, and that is what iManager aims for:
You no longer notice the places that used to need manual operation: intelligent lighting, automatic sliding doors, eLight module status indicators, and fire-fighting linkage.
You no longer feel the pain of asset management: when quarterly and annual asset statements have to be reconciled, automated asset tracking reduces the workload and cuts the cost of manual statistics.
iManager is more like a perfect housekeeper, earnest and dedicated, meticulous and never neglecting its duty.
In the future, HUAWEI will continue to explore the intelligent path of the micro-module, optimizing continuously around the I3 features to deliver a new generation of data centers that give customers both quality and intelligence.
Monday, August 6, 2018
Computer room construction: a brief analysis of the VCS network virtualization switching architecture
Computer room construction: as data center networks keep expanding, customers add more and more servers and switches, and once servers are virtualized, virtual machines migrate dynamically, which makes the existing flat Layer 2 network even larger. This calls for multipath capability at Layer 2 to improve network efficiency and reliability; in other words, the data center network must also support a large, flat design. The Transparent Interconnection of Lots of Links (TRILL) standard developed by the IETF provides this capability. The VDX series switches are revolutionary next-generation switches that support DCB Ethernet on top of the TRILL protocol.
A brief look at the Virtual Cluster Switching (VCS) solution
Virtual Cluster Switching (VCS™) is a Layer 2/3 Ethernet technology that incorporates a series of new standards and advanced features to deliver higher bandwidth utilization, greater network scalability, seamless support for network convergence, and simpler management. At the core of VCS are three technical "pillars": the Ethernet fabric, distributed intelligence, and the logical chassis.
The Ethernet fabric uses the new IETF Transparent Interconnection of Lots of Links (TRILL) protocol to remove the need for Spanning Tree Protocol (STP), because it can establish multiple paths between VCS-capable switches to carry Data Center Bridging (DCB) traffic. All paths within the fabric are active; if one link fails, traffic is automatically redistributed to the other available equal-cost paths with minimal delay. With distributed intelligence, configuration and network topology information are automatically distributed to every switch in the fabric. With TRILL, the fabric can deliver frames unchanged from source port to destination port as if the whole fabric were one logical switch chassis, with each VCS-capable switch acting like a port module in that chassis. Such a fabric can scale to more than 1,000 ports in a single logical chassis. (A small path-selection sketch follows below.)
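To illustrate the multipath behavior described above with a generic equal-cost hashing scheme (not the actual TRILL or VCS forwarding algorithm), the sketch below assigns each flow to one of several equal-cost fabric links by hashing a made-up flow identifier, and shows how traffic simply redistributes over the remaining links when one fails.

# Generic equal-cost multipath illustration (not the TRILL/VCS algorithm itself):
# each flow is hashed onto one of the currently available links.
import hashlib

def pick_link(flow_id: str, links: list[str]) -> str:
    digest = hashlib.sha256(flow_id.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(links)
    return links[index]

links = ["link-A", "link-B", "link-C", "link-D"]
flows = [f"10.0.0.{i}:49152->10.0.1.5:443" for i in range(8)]  # hypothetical flows

print("all links up:")
for f in flows:
    print(f"  {f} -> {pick_link(f, links)}")

links.remove("link-B")  # simulate a link failure
print("after link-B fails, flows rehash onto the remaining links:")
for f in flows:
    print(f"  {f} -> {pick_link(f, links)}")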
The VCS architecture versus the traditional network architecture
When virtual server environments are scaled out, the network brings challenges and limitations of its own; in traditional networks these include the drawbacks of Spanning Tree (STP), low link utilization, and slow recovery from link failures.
VCS gives administrators the following advantages:
· Logically removes the need to manage multiple switching tiers
· Enforces policy and manages traffic across multiple physical switches as if they were a single switch
· Expands network bandwidth without manually reconfiguring switch ports and network policies
· Gives server, network, and storage administrators a single, customizable view of network status
VCS technology helps IT departments handle the networking challenges of virtualized environments with ease. In VMware environments, VCS also gives IT departments a scalable, easy-to-manage infrastructure to support cloud computing. Cloud computing has become the new computing model with which data centers meet these IT requirements. It relies on server virtualization to lower capital and operating expenses, provides a more secure and highly available computing environment, shortens application deployment times, and offers greater capacity flexibility to meet changing demand. Server virtualization is a major advance in computing: a virtual machine (VM) contains all the software needed to deliver a specific service, and by separating hardware from applications it lets a service run on any available hardware, as long as a hypervisor is provided to run the VM. Cloud computing is also changing the IT department's business model: IT needs service-based computing, in which shared resources are connected to applications and costs are based on resource consumption rather than asset purchases.
Datacenter migration: the history of cooling technology
Datacenter migration: the hottest time of the year has arrived, and many things are suffering from the heat; even inside the Arctic Circle in northern Sweden and along the Arctic coast of Siberia, temperatures have exceeded 30 degrees Celsius.
It is not only living creatures being tested, but all kinds of non-living equipment too. With the arrival of the big data and cloud computing era, massive flows of data have entered our lives. In this era, data is like "oil", the most valuable asset of an enterprise, and the data center, as the infrastructure where data is stored and exchanged, is becoming ever more important.
A data center is generally a large warehouse that mainly houses servers and other computing equipment connected to the Internet. These devices hold most of the data on the Internet and provide the computing power behind the cloud. As one would expect, a data center produces a great deal of heat; its energy density is reported to be more than 100 times that of an ordinary office building.
The thermal load inside data centers and their equipment must be managed effectively, and data centers have adopted many measures to keep cool. So how has data center cooling technology developed, and which cooling methods are most favored by vendors?
Free cooling, which relies only on the temperature difference between the outside air and the equipment, was one of the earliest cooling schemes in the data center, but it is constrained by geography, so data centers usually use some form of air conditioning to cool the IT equipment.
The air-conditioning equipment used to cool data centers has gone through its own evolution, from early ordinary air conditioners to the precision air conditioners of the 1970s. Air cooling spread quickly because of its low cost, but as equipment kept multiplying and servers grew denser, air cooling gradually became unable to meet the demands of cooling capacity, availability, and green energy savings, and cooling has become a major direction for innovation. Liquid cooling has won the favor of many vendors thanks to its outstanding performance.
Liquid cooling means using liquid instead of air to carry away the heat generated by the CPU, memory modules, chipset, expansion cards, and other components while they operate. By the current state of research, liquid cooling is divided into water cooling and refrigerant cooling; usable coolants include water, mineral oil, engineered fluorinated fluids, and so on. By cooling principle, liquid cooling falls into two families: cold plate liquid cooling (indirect) and immersion liquid cooling (direct).
If air cooling amounts to blowing a fan at the server, liquid cooling amounts to giving the server a shower or a bath. There are currently three main liquid cooling technologies in the industry: cold plate, spray, and immersion.
In cold plate liquid cooling, cooling water enters through a dedicated inlet and flows through closed piping into the machine, carrying away the heat of the CPU, memory, hard disks, and other components before flowing back out. (A small flow-rate calculation follows below.)
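A small worked example helps show how much water a cold plate loop actually needs. Using the relation Q = m_dot x c_p x dT, the sketch below computes the flow rate required to remove an assumed heat load at an assumed coolant temperature rise; the load and temperature figures are placeholders, not measurements from any particular product.

# Required water flow for a cold-plate loop: Q = m_dot * c_p * dT.
# Heat load and temperature rise are assumed example values.

HEAT_LOAD_W = 30000.0      # assumed heat to remove from one rack, in watts
DELTA_T_K = 10.0           # assumed coolant temperature rise, in kelvin
CP_WATER = 4186.0          # specific heat of water, J/(kg*K)
DENSITY_WATER = 1000.0     # kg/m^3

mass_flow_kg_s = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)
volume_flow_l_min = mass_flow_kg_s / DENSITY_WATER * 1000.0 * 60.0

print(f"mass flow:   {mass_flow_kg_s:.3f} kg/s")
print(f"volume flow: {volume_flow_l_min:.1f} L/min")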
Spray liquid cooling means retrofitting IT equipment with spray devices that cool the overheating components while the equipment is running.
Immersion liquid cooling is the most unusual of the three. Its heat-removal effect was first demonstrated abroad, and it can be understood as placing the server directly in liquid. Although it can achieve high density, low noise, a small heat-transfer temperature difference, and free cooling, immersion cooling is technically difficult and costly; so far the industry has only single-machine tests and single-machine demonstrations, and cluster-scale server deployments are not yet available.
In fact, the concept of liquid cooling appeared many years ago; it has only taken off in recent years. The main reason is that with the rapid development of the data center industry, and especially the deployment of high-density and even ultra-high-density servers, the cooling challenges facing data centers are increasingly severe. How to further reduce high power consumption, and how to achieve green development while guaranteeing performance, has become the industry's concern and the focus of its breakthroughs.
Mainstream vendors at home and abroad are now pushing liquid cooling research hard. Facebook, for example, is introducing a new indirect cooling system, the StatePoint Liquid Cooling (SPLC) solution developed with Nortek Air Solutions. In development since 2015, the (Nortek-patented) technology uses a liquid-to-air heat exchanger in which water is cooled by evaporation through a membrane separation layer, and the chilled water then cools the air inside the data center facility; the membrane prevents cross-contamination between the water and the air.
In addition, spray-cooled data centers, built around combined liquid cooling technology, are a new form of liquid cooling that differs from both traditional air cooling and immersion cooling. The insulating coolant is sprayed directly onto the heat-generating devices inside the server, or onto the heat sinks in contact with them, where it quickly absorbs their heat; the chips' heat is then carried by the liquid cooling loop to the outdoor air. This addresses both the low efficiency of air cooling and the high cost of immersion and its maintenance.
According to industry reports, liquid cooling solutions are on the rise, driven by AI and edge computing and by falling costs in the data center. More and more vendors now use liquid cooling in their data centers; at Google I/O 2018, for example, Google announced it had introduced liquid cooling in its data centers for the first time to cool AI chips.
Innovation is the primary driving force of technological progress. As an efficient, energy-saving, and safe cooling technology, liquid cooling is becoming the inevitable choice for most data centers.
Sunday, August 5, 2018
Computer room construction: the data center industry needs new design standards
Computer room construction: the design standards most commonly used to classify data centers do not directly encourage technical innovation, the use of sustainable energy, or improvements in energy efficiency in the industry. These standards include BICSI, ANSI/TIA-942, and UI (Uptime Institute), and the series is typically used to classify data centers by category (for example BICSI 0-3 and UI Tier I through Tier IV).
Yet it is precisely the fixed availability levels and prescribed redundancy measures of these existing standards that make it increasingly difficult to classify the large number of data centers now in operation or under construction.
For example, innovative data center designs based on sustainable energy (rather than diesel generators and UPS equipment) or on networked data center topologies cannot be classified properly under the standards above.
This is not because such designs cannot deliver comparable or even higher availability. Rather, it is because they do not fit the categories those standards prescribe.
As a result, data center operators sometimes voluntarily sacrifice the efficiency of all data center components because they must follow compliance requirements and industry standards, which can lead to higher operating costs and energy consumption.
We believe that, in addition to the existing "fixed safety control" availability standards, the data center industry needs a more inclusive classification standard, one that fully accounts for forward-looking design in terms of resilience, sustainability, and efficiency.
Data center design standards need review
About twenty years ago, standards for designing, building, and operating data centers were pioneered by organizations such as UI, TIA, and BICSI. Their simplicity and clarity quickly made them the design references generally adopted across the data center industry.
Each of these standards is built on four progressive levels and covers only traditional designs based on redundant diesel generators and UPS. Ordered by performance and uptime, the requirements of each classification are as follows:
· Basic, non-redundant: capacity requirements for a dedicated data center site
· Basic, redundant: capacity components that improve data center availability
· Concurrently maintainable: additional redundancy so that subsystems within the data center can keep running while parts of the power and cooling equipment are replaced or maintained
· Fault tolerant: a data center with fully redundant subsystems
These standards constrain design innovation in the data center, and at the same time, because of their fixed "safe design" settings, they have become pivotal to the industry's sustainability.
Moreover, a growing number of data centers now in operation or under construction cannot be classified using the traditional standards. Three common unclassified design types are:
1. Designs that rely exclusively on alternative energy, such as the grid, solar, wind, fuel cells, and tidal power
2. Designs based on multiple networked data centers
3. Designs whose deployed availability features exceed their classification but do not satisfy every requirement of the classifications in the table below
In short, simplicity, the very quality that made these classification standards globally accepted, is now to some extent slowing the evolution of the industry's standards, and it does not reflect the industry's current push toward innovation and sustainability.
The data center industry thus faces a dilemma. The existing standards satisfy availability requirements, including fault tolerance, but they do not accommodate designs that depart from the standard. Above the existing classification system there is room for a dynamic, flexible, forward-looking model that would encourage investment in more sustainable data centers and drive incremental investment in existing ones.
Third-party research indicates that the data centers powering today's digital economy account for roughly 2% of global greenhouse gas emissions, about the same as the airline industry. Given that the digital economy shows no sign of slowing, that share is expected to grow. To curb potential growth in data center emissions and improve resource efficiency, industry stakeholders are cooperating broadly under the initiatives of non-profit alliances such as The Green Grid. But improving data center efficiency alone will not slow emissions growth; a higher proportion of data centers will need to use sustainable energy, such as wind and solar, to effectively curb the industry's greenhouse gas emissions.
The standards now in wide use do not consider data centers designed to run exclusively on renewable energy; they apply only to designs that use sustainable energy alongside the grid and diesel generators. Operators therefore often voluntarily sacrifice efficiency because, for compliance and similar reasons, they must follow the industry standards, leading to markedly higher operating costs and energy consumption. Enforcing fixed standards to drive data center operations may thus unintentionally be driving further growth in fossil fuel consumption.
Another major shift in the industry is the growing adoption of hybrid and public cloud architectures, which places an increasing share of compute and storage capacity in commercial data centers rather than enterprise-owned ones. Many commercial operators, including colocation and cloud service providers, have invested heavily in innovation to improve sustainability. These providers often use non-traditional topologies, such as multiple interconnected data centers. According to Interxion's availability research, these networked topologies can achieve the same uptime as traditional designs, yet they cannot be classified under the current standards. Some heavily regulated industries, such as financial services, are unwilling or not permitted to use data centers that have not been certified against the industry standards.
In short, the data center industry needs a more inclusive set of standards, open, flexible, and recognized and accepted by all stakeholders. It should become a standard that promotes cross-functional collaboration and joint innovation across data center disciplines, and it should help advance not only availability but also sustainability and efficiency.
Building blocks for an alternative system
As a first step toward an industry-wide review of design standards, we propose a tiered model based on three factors (see Table 2 below):
1. Resilience: every component of a design can be scored according to its resilience (that is, any component with a resilient design receives a higher score). The total score across all components serves as an indicator of the resilience of the end-to-end design. In Table 2 we rate the resilience of each tier from 1 (low resilience) to 10 (high resilience).
2. Sustainability: based on the energy sources used, a design can be classified with an "energy label" that indicates its level of sustainability. In Table 2 we rate sustainability from A (high sustainability) to F (low sustainability).
3. Energy efficiency: we suggest classifying the energy efficiency of a data center design by its PUE, since that metric is the key measure of data center energy efficiency. (A small scoring sketch follows this list.)
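To make the proposed three-factor model concrete, here is a minimal sketch, an illustration of the idea rather than an official scoring tool, that sums per-component resilience scores, attaches an assumed energy label, and reports the design's PUE. The component names, scores, label, and PUE are invented example values.

# Illustrative scoring of a data center design against the three proposed factors.
# Component scores, the energy label, and the PUE are assumed example values.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    resilience: int  # 1 (low) .. 10 (high), per the proposed scale

@dataclass
class DesignAssessment:
    components: list[Component]
    energy_label: str  # "A" (high sustainability) .. "F" (low sustainability)
    pue: float

    def resilience_total(self) -> int:
        return sum(c.resilience for c in self.components)

design = DesignAssessment(
    components=[
        Component("power path", 8),
        Component("cooling", 7),
        Component("network topology", 9),
    ],
    energy_label="B",
    pue=1.25,
)

print(f"resilience total: {design.resilience_total()}")
print(f"energy label:     {design.energy_label}")
print(f"PUE:              {design.pue}")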
Given the need to further reduce the data center industry's environmental impact, the importance of a sustainability classification is obvious. Whether this classification should combine the PUE figure with a score based on energy sources is a more open question.
The reason for proposing resilience as a criterion may be less obvious, but it is just as important. Statistical availability calculations are time-consuming and complex, which makes them hard to use as part of a decision process; hence the use of resilience rather than availability. By contrast, it is relatively easy to classify the resilience level of each individual component and score the design on that basis. The industry needs to agree on a calculation method that classifies all designs consistently, and that method should encompass the existing availability standards. As PUE has shown, a globally accepted calculation method is an achievable goal.
To keep overhead low, any new standard could include an easy-to-use open-source tool or application maintained by a non-commercial governing body. Data center engineering departments and consultants of enterprise customers could use such a tool to upload data center designs evaluated against the three criteria above (resilience, sustainability, and energy efficiency), promoting collaboration and innovation across the industry.
A data center model built to such a new standard would give enterprises that want to build data centers the ability to choose the design best suited to their resilience, sustainability, and efficiency requirements, or to choose a service provider whose data center follows such a design model and can deliver the required service level agreement.
Datacenter migration, introduction to modularized data center
Datacenter migration. A modular data center is one in which each module has an independent function and a unified input/output interface; modules in different areas can back each other up, and a complete data center is formed by arranging the related modules. Modularization takes many forms: it can be a design method and philosophy, or a product.
With a modular design method, every functional system of the data center is also designed modularly, so construction can be divided by module, by floor, and by phase. In product form, data center modularization appears in several guises: modular products, micro-modules, and container data centers.
Modular products are typified by modular UPS, modular precision air conditioning, modular cabling, and the like. The micro-module is typified by the cabinet-level micro-environment: a number of rack units serve as the basic building blocks, bundled with cooling, power supply and distribution, network, cabling, monitoring, and fire-control units that each operate independently. These modules can be prefabricated in the factory and assembled and disassembled quickly. A container data center can be regarded as a standardized, prefabricated, and pre-tested large modular data center product and solution.
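As a minimal sketch, with invented names and numbers, of the idea above: a micro-module can be described as a prefabricated bundle of independent functional units that is simply replicated to build out capacity.

# Illustrative data structure only: a micro-module as a bundle of prefabricated units.
from dataclasses import dataclass

@dataclass
class MicroModule:
    racks: int
    it_load_kw: float        # design IT load per module
    cooling_kw: float        # in-row cooling capacity per module
    ups_modules: int         # hot-swappable UPS power modules

    def can_support(self) -> bool:
        # the module is self-sufficient when its own cooling covers its IT load
        return self.cooling_kw >= self.it_load_kw

def build_out(module: MicroModule, count: int):
    """Scale a room simply by replicating identical modules."""
    return {"racks": module.racks * count, "it_load_kw": module.it_load_kw * count}

mod = MicroModule(racks=12, it_load_kw=60, cooling_kw=70, ups_modules=4)
print(mod.can_support())         # True: cooling covers the design IT load
print(build_out(mod, count=8))   # {'racks': 96, 'it_load_kw': 480}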
The advantages of modular systems:
1. Modular systems are extensible: modular infrastructure can be deployed to match current IT requirements, and more components can be added later as needs grow, which significantly reduces the total cost of ownership.
2. Modular systems are changeable: they can be reconfigured, providing great flexibility to meet changing IT needs.
3. Modular systems are portable: when installing, upgrading, reconfiguring, or moving them, the independent components, standard interfaces, and easy-to-understand structure save both time and money.
4. Modular components are replaceable: a failed module can easily be swapped out for upgrade or repair, usually without stopping the system.
5. Modularization improves the quality of fault repair: because modules are portable and pluggable, much of the work can be done in the factory, both before delivery (such as pre-wiring distribution equipment) and after delivery (such as repairing power modules).
Statistically, the same work done in the factory rather than in the field suffers far less from degraded performance, reduced capacity, and failure. For example, compared with a UPS power module repaired in the field, a module repaired in the factory is on the order of a thousand times less likely to cause a power failure, introduce a new fault, or fail to return to a fully loaded working state.
6. In terms of energy consumption, a modular data center can control energy use through centralized management and improve equipment utilization, thereby reducing resource consumption. At the same time, its PUE is greatly reduced by optimizing power and data cable paths, server deployment and installation, airflow within the module, and so on (a rough numeric illustration follows).
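As a rough illustration of the PUE point in item 6, the sketch below compares two hypothetical rooms; all kW figures are invented and only show how the ratio is calculated.

# Illustrative PUE comparison; every figure below is made up.
def pue(it_kw, cooling_kw, power_loss_kw, other_kw):
    """PUE = total facility power / IT power."""
    total = it_kw + cooling_kw + power_loss_kw + other_kw
    return total / it_kw

traditional = pue(it_kw=500, cooling_kw=300, power_loss_kw=60, other_kw=40)
modular     = pue(it_kw=500, cooling_kw=150, power_loss_kw=35, other_kw=15)

print(f"traditional room PUE ~ {traditional:.2f}")   # ~1.80
print(f"modular room PUE     ~ {modular:.2f}")       # ~1.40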
Basic steps in website design that must be mastered
The steps of website design are not the same as the steps of website construction, which involves a series of processes such as planning, production, promotion, and daily updates. Website design is about positioning a site's style and theme. Just as each of us has our own way of dressing, a site that does not want to blend into the crowd needs its own distinctive ideas to become the brightest star among its peers. That is the role website design plays: to stand out among peers, you must be clear about your own design style. Let us look at the basic steps of website design.
1. Determine the theme of the website design
The first step of website design is the theme, that is, the main content of the site. To build a good website, you must settle on a clear theme.
For today's e-commerce and corporate brand websites, first analyze the purpose and needs of your own site, as well as the needs of your customers, and then determine the theme of your website design.
2. Collect website design materials
Once the theme is established, collect material on the subject from every angle. Establishing a good theme is the first step and the key to website design, so gather images, templates, style structures, backgrounds, and other materials, then sort and select what the site will actually use.
3. Design the framework of the site structure
The key to building a website is good design. How to plan the site and achieve a polished page design is a problem every web designer must solve. Site planning covers a great deal: site structure, column layout, style, color matching, page layout, and the use of text and images all need to be considered. Only then can a personalized, distinctive, and attractive website be created.
4. Position the site's design style
Before designing the site, the most important thing is the positioning of the company and its products. Accurate positioning not only helps the company track market trends, it also attracts the relevant audience, improves how users engage with the site, and strengthens the company's competitiveness. Only when the company provides accurate positioning for the web design can the designers draw on prior production experience, offer advice and planning, settle the site's style, research related topics, share experience, and analyze competing sites in the same industry. That lays a solid foundation for the later production stages and makes it possible to stand out among peers.
5. Settle the layout design
After the design style is understood, the corresponding research is completed to determine the basic layout of the page design. At this stage the company can provide company information and design-related requirements so the designers can proceed. The designer organizes the material and produces an overall design that forms the initial structure of the site, then reviews it with the company, puts the agreement in writing, and moves on to the next step.
6. The web design itself is the key
When users land on the site, whether the design catches their attention at first sight and whether it meets their needs determine the site's survival. Only a site that meets users' needs and gives them a good experience will be liked, and users' approval is what keeps a site alive. Web design therefore deserves special attention: communicate with the company, understand its image and product characteristics, decide which points to highlight, and study competitors' sites. Combine the key information, weave in the company's characteristics, and create a theme users will focus on and enjoy. This requires not only experienced production designers but also the company's full cooperation and as much relevant information as possible for the site to succeed.
Thursday, August 2, 2018
Data center build-out: exploring hyper-converged architecture in the data center
Data center build-out. Starting from the evolution of the data center and the software-defined data center stage, this article analyzes the advantages of hyper-converged architecture and its application scenarios.
Hyper-converged infrastructure (HCI) combines compute, storage, and network resources with server virtualization in the same set of x86 servers, together with features such as cache acceleration, deduplication, inline data compression, backup software, and snapshots. Multiple appliances are aggregated over the network under a single management layer to form a unified resource pool, enabling seamless, modular scale-out. In short, hyper-converged architecture builds on commodity server hardware and uses virtualization and distributed technologies to fuse compute, storage, and virtualization into one.
The "hyper" in "Hyper-Converged" refers to virtualization, so hyper-converged architecture carries virtualization in its genes. The most fundamental change in hyper-converged architecture lies in storage: centralized shared storage gives way to software-defined storage, especially distributed storage. Distributed storage is the core of hyper-converged architecture, and it depends on software-defined storage, a way of storing data in which all storage-related control is handled by software external to the physical storage hardware; that software is part of the operating system or the virtualization layer.
The evolution of the data center
1. The traditional-architecture stage
The infrastructure of a traditional data center is a complex system of mainframes, midrange servers, x86 servers, centralized storage, networking, large databases, high-availability software, and management software, requiring many integrators and technical service teams from different hardware vendors. This architecture suited the trend toward data consolidation, but as enterprise applications multiplied and Internet applications brought explosive data growth, the drawbacks of its siloed design became increasingly apparent.
First, application reliability depends heavily on the reliability, availability, and serviceability of the hardware, and hardware procurement is extremely expensive. Second, chimney-style construction and fragmented management across many kinds of equipment make operations difficult and increasingly costly. Finally, the complexity of the system leads to long deployment cycles that seriously delay go-live, and inflexible resource scheduling leaves system resources underused.
2. The virtualization stage
With the advent of server virtualization, data centers gradually shifted toward virtualized data centers. Virtualization runs multiple movable virtual machines on one physical server; the VMs share the underlying hardware while each has its own virtual resources such as an operating system, compute, memory, and storage. Virtual machines raise server utilization, support backup of operating systems and data, and make deployment more flexible. The main benefits of server virtualization in the data center are as follows.
First, virtualization reduces the amount of hardware required, greatly lowering procurement costs. With far fewer physical devices, the data center consumes less energy and is easier to maintain, and over time the cost savings from virtualization become very apparent.
Second, application deployment becomes far more flexible. With VM snapshots, redeploying an application can be completed within minutes, and application backup and migration likewise become simple and convenient. Virtualization raises server resource utilization, and with live VM migration the data center's dependence on the reliability, availability, and serviceability of individual servers drops sharply.
Virtualization solved the data center's server-utilization and high-availability problems, but as the number of virtual machines grew rapidly, demand for storage I/O rose sharply, and the traditional FC SAN storage network created new problems. First, reliability: centralized storage depends heavily on the reliability, availability, and serviceability of the storage array, so a storage failure endangers the entire VM resource pool. Second, scalability: migrating data between storage arrays is very difficult, and the performance and data silos between arrays cannot be resolved. Third, performance: VM I/O performance is entirely determined by the back-end storage, and the I/O of a single array has become an obvious bottleneck. Fourth, operations: storage arrays from different vendors are mutually incompatible, and the IP network is completely separate from the FC SAN, increasing the operations workload. Fifth, cost: dependence on dedicated equipment significantly raises infrastructure costs.
The software-defined data center stage
The shift from traditional architecture to hyper-converged architecture moved the data center toward the software-defined data center. In traditional server virtualization, resource virtualization and management were still realized through dedicated hardware, so hardware resources were never fully decoupled from the virtualization management software, which makes that approach unsuitable for large-scale virtual data center environments. Software-defined technology separates storage, compute, and networking from dedicated hardware, achieving true convergence of the infrastructure.
The software-defined data center makes critical infrastructure such as storage, servers, and networking less dependent on the underlying physical hardware, and therefore more flexible and more automated. Because traditional DAS, NAS, and SAN suffer from high technical demands, high cost, and poor flexibility, replacing dedicated hardware with software-defined technology has gradually become the trend in data center infrastructure. The advantages of hyper-converged architecture also include the following.
First, rich functionality. Hyper-converged architecture is not merely a distributed storage system; features such as cache acceleration, backup, snapshots, deduplication, and data compression ensure efficient use of storage, keep the system running stably, and reduce energy consumption.
Second, low construction cost. By using software-defined storage on the disks inside the x86 servers themselves, hyper-converged architecture greatly reduces storage hardware costs.
Third, easy deployment. Compared with the complexity of traditional storage solutions, deploying a hyper-converged platform is relatively simple: there is no need to deal with LUNs, RAID, FC switches, zoning, masking, registered state change notifications, or complex storage multipathing.
Fourth, flexible scale-out. When performance needs to grow, a hyper-converged architecture expands by adding nodes of different kinds on demand, scaling CPU compute nodes, memory nodes, GPU nodes, or storage capacity nodes independently to obtain the required performance. This greatly eases the design and budget pressure during the construction phase (see the toy sketch after this list).
Fifth, better I/O performance. Hyper-converged architecture uses distributed storage, with SSDs configured to boost performance and the local mechanical disks used to extend capacity, greatly improving storage I/O performance.
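A toy sketch of the scale-out idea in the fourth and fifth points: capacity and aggregate I/O grow roughly linearly as identical nodes are added. The node specification and the replica factor below are assumptions for illustration, not figures from any vendor.

# Toy model of hyper-converged scale-out; node specs and replica count are assumptions.
NODE = {"cpu_cores": 32, "ram_gb": 512, "hdd_tb": 40, "ssd_cache_tb": 4, "iops": 50_000}
REPLICAS = 3   # assume each block is stored three times across the cluster

def cluster(nodes: int):
    usable_tb = nodes * NODE["hdd_tb"] / REPLICAS
    return {
        "nodes": nodes,
        "cpu_cores": nodes * NODE["cpu_cores"],
        "usable_capacity_tb": round(usable_tb, 1),
        "aggregate_iops": nodes * NODE["iops"],
    }

for n in (3, 6, 12):
    print(cluster(n))   # capacity and IOPS scale roughly linearly with node count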
Typical application scenarios for hyper-converged architecture
1. Improving the data center's storage performance
The software-defined technology used by hyper-converged storage largely solves the performance problems of traditional centralized storage. Traditional storage arrays can no longer meet the demands for storage performance and flexibility; hyper-converged storage is distributed and completely frees the storage system from the performance constraints of the traditional architecture. By removing traditional storage altogether, it uses a distributed file system to deliver performance and capacity that grow linearly, can be accelerated with an SSD cache, and can even be built entirely on SSDs.
2. Rapid service deployment and lower cost
The first value hyper-converged infrastructure brings users is faster service deployment. A traditional project goes through a long cycle of design and planning, then procurement, and then integration, deployment, and testing. A hyper-converged architecture typically comes with the virtualization platform, cloud management software, SDN networking, and distributed storage pre-integrated, and packaging storage, compute, networking, and application software together greatly simplifies building the whole stack. Because most hyper-converged systems are based on x86 hardware, procurement costs drop significantly and operating costs fall as well.
3. Big data analytics platforms
Hyper-converged architecture scales out, so for massive data storage applications it can provide large-scale general-purpose cluster storage. The hyper-converged storage system uses the network to combine a large number of basic x86 storage units into a cooperating whole that presents a unified data storage service, replacing traditional centralized storage arrays.
4. Supporting virtual desktops, private clouds, and other virtualized computing applications
Hyper-converged architecture consolidates compute, storage, and network resources into an integrated hardware-and-software solution. For virtual desktop (VDI) deployments, because the applications share a single resource pool, there is no need to worry about storage I/O degrading VM performance. The large distributed storage environment also gives the system room to handle random and sequential workloads flexibly, and an SSD-accelerated distributed storage cluster can deliver enough IOPS to ride out severe loads such as VDI boot and login storms. For virtualization workloads, every policy, including storage, backup, replication, and load balancing, is defined around the virtual machine; data protection, for example, is integrated at the VM layer, so administrators can migrate workloads between data centers or between applications (backup, replication, and so on) by operating at the VM layer alone.
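A back-of-the-envelope sizing example for the VDI boot and login storms mentioned above. The per-desktop IOPS figures and per-node IOPS capability are common rules of thumb used here as assumptions, not guarantees for any specific product.

# Rough VDI sizing arithmetic; per-desktop and per-node IOPS values are assumptions.
desktops = 1000
steady_iops_per_desktop = 10
boot_iops_per_desktop = 50     # boot/login storms are far heavier than steady state

steady = desktops * steady_iops_per_desktop   # 10,000 IOPS
storm  = desktops * boot_iops_per_desktop     # 50,000 IOPS

ssd_node_iops = 20_000
nodes_needed = -(-storm // ssd_node_iops)     # ceiling division

print(f"steady state: {steady:,} IOPS; boot storm: {storm:,} IOPS")
print(f"SSD-accelerated nodes needed to absorb the storm: {nodes_needed}")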
Datacenter migration, efficiency and sustainability steps
Datacenter migration. Experts predict that data centers will use three times more energy over the next ten years, making it more important than ever for data center providers to improve efficiency. Data center operators also need round-the-clock visibility into key energy and electrical data so they can make informed decisions about server loads and optimize power capacity.
The adoption of validated protection measures and the need to meet ISO 50001 and other energy performance standards have led to more sophisticated energy consumption reporting in the industry. In addition, the adoption of carbon emission targets and reporting in the data center area is increasing as sustainable development of enterprises shifts from sustainable strategies to business strategies.
Despite the growing demand for efficient and sustainable operations, a recent study found that most organizations have failed to take the steps needed to integrate and advance their programs. In fact, most enterprises still manage energy and carbon in fairly traditional ways, and few coordinate activities across the procurement, operations, and sustainability functions. This disconnect holds back return on investment (ROI).
One UK hosted cloud computing service provider, however, has turned its energy management challenge into an opportunity and achieved substantial cost savings. IOmart is a rapidly growing cloud computing company recognized as a tier-one partner by some of the world's major cloud providers, including Microsoft, VMware, EMC, and AWS. As the business grows, the company is applying the guiding principles of sustainable development both to its own organization and to its customers, and a well-executed drive for greater efficiency, lower cost, and greater flexibility can save a great deal of money.
How did the company achieve this? Working with Schneider Electric, IOmart established a strategic, comprehensive approach to energy and carbon management in its data centers. More specifically, it brought the procurement, energy, and sustainability teams together to compare data and develop shared strategies for managing energy consumption and carbon emissions and reducing spending. This integrated approach, also known as active energy management, ultimately helps reduce energy use, meet energy compliance standards, and manage volatile energy costs.
The following are the four main steps taken by IOmart to share key information between departments and to use energy procurement data to support energy and sustainable development reports.
• the first step: more intelligent purchase of energy. The company's first challenge is to reduce energy costs by strategically purchasing energy. Schneider Electric helped deploy risk management solutions that responded flexibly to the market, saving 13% of the contract costs. With the early success of using a more intelligent approach to buying energy, the team hopes to build a more strategic and comprehensive approach to other energy and sustainable development opportunities.
• Step two: meet energy and sustainability standards. Energy efficiency and sustainability goals were integrated to satisfy voluntary and mandatory schemes, including climate change agreements, carbon reduction commitments, and ISO 50001. Sharing data between departments is essential for regulatory purposes, including using energy procurement data to support energy and sustainability reporting. IOmart achieved ISO 50001 certification in December 2016, underlining its commitment to clients as a responsible data center provider, and has saved 1.5 million euros so far. Rigorous monitoring of regulation, energy consumption, and PUE, together with tax rebate benefits, contributed to these savings.
• Step three: conduct audits to identify potential savings. Energy audits are part of the ISO 50001 certification process and reveal new energy-saving opportunities. Through a continuous-efficiency approach, IOmart identified potential further cost savings of 150,000, and monitoring pointed to more. Opportunities include better management of the existing cooling systems and adjusting the set points and dead bands of the air conditioning units.
• Step four: use software to enable transparency. IOmart continues to create new opportunities through integrated decision-making, supported by advanced tools and analytics that identify and prioritize improvements. Resource Advisor is a software platform for enterprise energy and sustainability data management that automates processes, supports compliance teams, and visualizes data so that information can be turned into action.
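As a generic sketch (not the vendor's actual API or data), this is the kind of roll-up such a platform performs: turning raw meter readings into monthly consumption and PUE figures that procurement, operations, and sustainability teams can all read. The readings below are invented.

# Generic illustration of an energy roll-up; all readings are invented.
from collections import defaultdict

# (month, meter, kWh) tuples as they might arrive from facility and IT meters
readings = [
    ("2016-11", "facility", 420_000), ("2016-11", "it", 300_000),
    ("2016-12", "facility", 430_000), ("2016-12", "it", 310_000),
]

monthly = defaultdict(dict)
for month, meter, kwh in readings:
    monthly[month][meter] = kwh

for month, m in sorted(monthly.items()):
    pue = m["facility"] / m["it"]
    print(f"{month}: facility {m['facility']:,} kWh, IT {m['it']:,} kWh, PUE {pue:.2f}")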
The results speak for themselves. IOmart can now manage the energy consumption of its data centers effectively and make informed decisions over the short, medium, and long term. The method succeeds by bringing people and strategy together: starting from energy procurement, building up efficiency and sustainability, assembling the teams, and working closely with the company's finance function delivers impressive results.
When companies run these programs in isolated silos, they forfeit revenue or cost savings, a significant gap for any organization trying to balance profitability with environmental responsibility. Integrated energy and carbon management provides a holistic view of data and resources that reduces consumption, promotes innovation, and maximizes cost savings.
By adopting a strategic, holistic approach to improving efficiency and sustainability, IOmart has become a model for other organizations setting out on their own active energy management journey.
After the company's website is designed, how do you increase user visits?
Nowadays the purpose of designing a company website is to expand the company's online business and promote and develop the company more effectively. Yet most companies still have little understanding of their site and assume that benefits will simply follow once it is built. That idea is mistaken: a website designed but never put to use brings nothing, and for most such sites the traffic stays very low. Here are several ways to improve a site's traffic.
First, the promotion strategy after the site is designed. If traffic is not rising and none of the other points in this article apply, the cause is usually an ill-chosen promotion strategy. Promotion is both a science and an art: webmasters need a spirit of learning, innovation, and practice to find the strategy that suits their own site. Borrowing from others in moderation is fine, but copying everything is not recommended.
Second, updating and maintaining the site. Many administrators are full of enthusiasm when they start, and promotion is effective at first, but once they relax, traffic and PageRank slip back. The reason is that the site is no longer updated and maintained.
Third, whether the site is unique. For a personal site to be done well and grow strong, it must be innovative; otherwise it is hard to compete with sites that already have a foothold in the industry.
Fourth, whether the content is rich, and how much of it is original. A site becomes popular with Internet users because its content is attractive; with good content and good publicity it will be welcomed not only by users but also by search engines such as Baidu and Google.
Fifth, the quality of the pages. If the site is built crudely, traffic will certainly suffer. Chinese netizens are very demanding, especially about page-loading speed, so webmasters should pay close attention to it.
Sixth, expanding the site's content or business. Large sites such as Baidu and Google keep expanding their business, and small sites should do the same: when a site has run for a while and traffic has plateaued, consider whether to expand the business, which can increase traffic and bring in considerably more advertising revenue.
Wednesday, August 1, 2018
Data center build-out: handling a parallel UPS system's abnormal transfer to bypass
1. Fault symptoms
Data center build-out. During a routine inspection, the power maintenance staff found that a 1+1 parallel UPS system had abnormally transferred to bypass power, and at the same time UPS1-2 raised an alarm indicating an inverter fault. The maintenance staff immediately contacted the vendor's engineers, who advised first restoring the UPS1-1 inverter and transferring the load from bypass back onto UPS1-1, and then power-cycling UPS1-2 to see whether it would work normally. After these operations, the UPS1-1 inverter started normally and the load was transferred from bypass to UPS1-1, but UPS1-2 still reported an "inverter fault" alarm after being powered off and restarted.
2. Cause analysis
Based on the on-site fault description, the power maintenance staff initially considered the following possible causes:
(1) The UPS set has been in service for nearly 10 years; its AC and DC capacitors were last replaced on 27 May 2011 (the normal service life of these capacitors is 5 years). Capacitors are wear parts, and aging capacitors can distort the UPS output waveform and shift the voltage; in a parallel UPS system they can also increase circulating current and cause transfer faults, which could explain this incident.
(2) According to the UPS's working principle and circuitry, if the "transfer control board AROI" fails and its measurements become abnormal, it will protectively shut down the UPS1-2 inverter output, raise an "inverter fault" alarm, and force the output over to bypass power, which could also explain this incident.
(3) If the transfer to bypass was not a normal synchronized transfer, it may have had some impact on the downstream load.
3. Handling steps
(1) At 12 o'clock that night the engineer arrived on site. The situation was as follows: the inverter indicator of UPS1-2 was lit red, the fault alarm showed "inverter fault", the internal circuit breakers of the UPS remained in their original positions, and UPS1-1 was supplying the load normally on its inverter. Using the commissioning software, the engineer checked the status and fault information of UPS1-2, exported the device report, confirmed there were no other abnormal alarms, and powered the unit down for repair. After replacing the three output AC capacitors on site, the engineer measured the old capacitors that had been removed and found that one of them had dropped from its nominal 600 μF to 0: the capacitor had failed completely.
(2) After the output capacitors were replaced, the engineer opened the UPS1-2 output breaker and ran a standalone test in manual mode, gradually adjusting the UPS1-2 inverter output voltage while measuring the inverter output voltage and the filter current through the AC capacitors. All readings were normal, and after exiting manual mode UPS1-2 also ran normally as a standalone unit.
(3) After these post-repair function tests, the UPS1-2 breakers were restored to the parallel configuration and the UPS1-2 inverter was switched on, but the UPS1-2 inverter lamp kept flashing and the unit would not join the parallel system. After repeated attempts it occasionally paralleled successfully, but once the inverter was switched off it could not be paralleled again, so UPS1-2 evidently still had another fault that had not been fully repaired.
(4) After analyzing the working principle and circuitry of this UPS model, the "transfer control board AROI" was suspected, so a spare board from stock was fitted as a replacement. After the new board was installed, its parameters had to be reflashed and the board verified. Once these operations and tests were completed, the parallel UPS system ran normally and the fault was fully repaired.
4. Lessons learned
(1) For UPS equipment that has been in service for more than 5 years, it is recommended to replace wear parts preventively to keep similar faults from recurring. Wear parts include the AC filter capacitors, DC bus filter capacitors, fans, and auxiliary power boards; replacing them improves operational reliability and extends the UPS's service life (a small illustrative check appears after this list).
(2) A UPS that has exceeded its service life should be replaced promptly. This UPS set has reached the end of its service life, so a cutover replacement is planned for this year; acceptance of the new UPS and delivery of the cutover cables have already been completed.
(3) It is recommended to keep UPS products under the original manufacturer's maintenance contract to ensure the depth of UPS maintenance. This also helps prevent and eliminate similar latent equipment faults in advance, improves fault response, and shortens on-site fault handling time.
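A small sketch of the preventive-maintenance checks recommended above. The service lives, the 20% capacitance tolerance, and the dates are illustrative assumptions, not the manufacturer's figures; only the 600 μF nominal value and the 2011 replacement date come from the case described here.

# Illustrative preventive-maintenance check; thresholds and lifetimes are assumptions.
from datetime import date

WEAR_PART_LIFE_YEARS = {"ac_filter_capacitor": 5, "dc_bus_capacitor": 5, "fan": 5}
CAPACITANCE_TOLERANCE = 0.20   # flag a capacitor whose value drifts more than 20% from nominal

def overdue(part: str, installed: date, today: date) -> bool:
    """True when a wear part has exceeded its assumed service life."""
    return (today - installed).days / 365.25 >= WEAR_PART_LIFE_YEARS[part]

def capacitor_ok(nominal_uf: float, measured_uf: float) -> bool:
    """True when the measured capacitance is within tolerance of the nominal value."""
    return abs(measured_uf - nominal_uf) / nominal_uf <= CAPACITANCE_TOLERANCE

print(overdue("ac_filter_capacitor", installed=date(2011, 5, 27), today=date(2018, 8, 1)))  # True
print(capacitor_ok(nominal_uf=600, measured_uf=0))    # False: the fully failed capacitor in this case
print(capacitor_ok(nominal_uf=600, measured_uf=560))  # True: within tolerance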
How do you get a datacenter migration right?
Datacenter migration. As organizations grow, the technologies they rely on inevitably need to evolve and change. As a result, everyone from small chain-store businesses to nonprofits expanding into unfamiliar territory is deploying more IT equipment.
The servers the organization manages have been running steadily under load, taking up useful space and consuming more and more power. So what happens when the organization needs to expand? Colocation, or migrating to a new data center, may be the right choice.
Large colocation data centers provide power and cooling to servers more efficiently thanks to their economies of scale. Because they buy power at wholesale prices, they also remove the cost of maintaining UPS equipment, generators, air conditioning, and the like, since all of that is included in the price.
By colocating, the organization can free up space and resources for more productive work or office space.
Getting the managed data center choice right
Moving to a hosted data center is not only a smart decision but a critical one. The organization should weigh downtime, security, and application performance, as well as the specifics of what the process actually requires. For the migration to go as smoothly and safely as possible, the following factors need to be considered:
(1) the organization needs to understand the migration background and conduct research
Blindly starting a data center migration is a big mistake. The organization needs to spend time thinking about how relocating key applications, services, and data will affect its business during the migration, and what measures it can take to mitigate risks or temporary drawbacks.
(2) server downtime is a key consideration for the organization
How such events are handled depends on the nature of the organization's business. If no server downtime can be tolerated, operations must be protected with a strong disaster recovery and backup plan. Organizations can also set up temporary private or hybrid clouds to keep key processes running during the migration.
Also, if the organization's system-critical applications are being migrated, consider a pilot migration to verify continued software compatibility (and reduce the chance of further downtime). A good data center provider will help the enterprise through this process and ensure it goes smoothly.
(3) network configuration is also a factor to be considered
The organization must decide what is needed to ensure that existing applications keep their functionality and compatibility. These decisions have to be made case by case, because some applications may run into configuration problems once they leave the LAN. It is best to stay on the safe side, so be sure to investigate how the migration affects the organization's mission-critical applications.
It may not be immediately obvious, but network latency also matters. Colocation means reaching the data center over a dedicated high-speed connection, so post-migration latency (delay on the network) should not be a problem, but it is important to allow for the unexpected during the migration itself.
Because servers are typically migrated in batches, applications that used to share a local connection now have to work harder to communicate. To mitigate potential latency problems, determine which applications work together and when they run, and plan the organization's migration schedule as early as possible.
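One simple way to plan the batches mentioned above is to keep applications that talk to each other in the same migration wave. The sketch below, with made-up application names and links, groups applications into waves by their dependency links.

# Illustrative grouping of applications into migration waves by shared dependencies.
# Application names and links are invented for the example.
from collections import defaultdict

links = [("web", "app"), ("app", "db"), ("reports", "db_replica"), ("backup", "backup")]

parent = {}

def find(x):
    # union-find with path halving; nodes are registered on first sight
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for a, b in links:
    union(a, b)

waves = defaultdict(list)
for app in list(parent):
    waves[find(app)].append(app)

for i, group in enumerate(sorted(waves.values(), key=len, reverse=True), 1):
    print(f"wave {i}: {sorted(group)}")   # apps that communicate move together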
(4) successful data center migration means that the organization needs to fully understand its applications
Yet these in-house applications have often been running for years, some documentation is nowhere to be found, and perhaps no one remembers who installed or built them. By using network tracing tools in the months leading up to the migration, the organization can relearn everything it needs to know about the intricacies of its legacy applications.
If the organization prepares properly, the actual process should be straightforward and the migration seamless. When data needs to be migrated, the organization must determine what actually has to move. For example, check whether hardware and software are still running under contracts that could be terminated early, or whether existing equipment is still in use for purposes that are no longer critical. Crucially, the organization should ask itself whether each server really needs to be brought back up as-is, or whether it can be virtualized to share space and rationalize the server count.
The organization needs to work through all of this and consider each item's purpose and role in the future business. Some equipment may turn out to be more important than expected and worth spending more on. This is also an ideal time to review the migration schedule and consider whether temporary private or hybrid clouds are needed to avoid downtime during the migration.
In addition, the organization should keep its records of the overall data environment up to date, review existing logs, and record any changes to the inventory. Next, locate the existing workloads, software, and scheduled backups so it is clear exactly what will and will not happen during the migration, and run disaster recovery tests on the most important systems for extra assurance.
The organization also needs to inform its service contractors of its plans and point them to the new data center for any licenses and contract modifications.Also, it is necessary to write down the warranty information and serial number of the equipment to avoid any problems after physical relocation.
With the organization's equipment securely managed and maintained around the clock, its business can return to its best state. If the business keeps growing, expanding the data center becomes easier than ever, without temporary and expensive stopgap solutions. As for the redundant processes and applications uncovered during planning, now is the time to put those freed-up resources to full use and make way for a greener, more efficient future.