In the context of a data center migration, IT capacity management is, simply put, the basis for balancing the cost and performance of business services, with the allocation and provisioning of infrastructure as its fulcrum.
If your company's infrastructure is poorly configured or insufficient to support business requirements, prolonged response-time problems and outages can occur, resulting in millions in business losses.
A typical way to avoid this is to overprovision the infrastructure: estimate the required capacity and then double it.
It is estimated that up to 50 per cent of cloud infrastructure goes unused, and the phenomenon is even more pronounced in physical storage.
Overprovisioning wastes a great deal of money on hardware, software licensing, and management.
The trick is to rationalize your company's infrastructure to meet current needs and to know exactly when and where to add additional capacity.
To optimize business services effectively, the capacity management process consists of four main steps:
1. Data collection and management.
Collect detailed information and related performance data for each application, service, and system in your enterprise environment.
2. Data analysis.
Analyze the data to determine service health, potential performance issues, and the root causes of those issues so that they can be addressed.
3. Forecasting.
Accurately predict when and where resource shortages will occur so that capacity can be added before resources run short.
4. Deliver actionable information.
Provide each stakeholder (IT analysts, service managers, and business leaders) with information they can act on when making decisions.
What makes this so challenging is that rapidly evolving technology, changing business needs, and growing demand all add complexity, keeping the IT environment in constant flux.
Performance issues have always been time-sensitive by nature, yet IT staff are spread thin across tasks and projects, leaving less time to ensure service delivery.
Finally, capacity management expertise is becoming scarcer.
A lack of skills in capacity and performance management will be a major constraint or risk to growth for 75 per cent of enterprises by 2020, according to Research In Action, a leading analyst firm.
Perhaps because of these challenges, many technology leaders believe that capacity management is a competitive advantage that will become even more so in the coming years.
According to Research In Action, 35 per cent of companies will use capacity management tools to gain competitive advantage by 2020 (compared with 20 per cent today).
Competitive advantages brought by effective capacity management:
1. Reduce the staff time needed to deliver highly available, consistent services
2. Reduce downtime and bottlenecks for mission-critical applications
3. Optimize hardware, software and cloud storage investments
4. More effective business planning to align IT investment with business goals
5. Protect the brand reputation of the enterprise
Automating to manage complexity
In recent years, most IT enterprises that have successfully deployed capacity management have relied on analytics and automation.
The advantages of this approach are speed and accuracy, even in very complex environments, but implementing it effectively, with the appropriate tools and processes, takes considerable time.
To understand this approach, let's explore each of the core processes described earlier:
1. Data collection and management
2. Data analysis
3. Forecasting
4. Deliver actionable information
Data collection
Performance data must be collected at a level of granularity fine enough to match the requirements of the business transactions.
For example, real-time transactions and online shopping require finer granularity than batch processing.
Remember that the collection tools your enterprise uses must provide detailed, timely data in an automated and highly scalable manner to ensure the project's success.
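As a toy illustration of what fine-grained collection means in practice, the sketch below samples CPU and memory utilization every few seconds and appends the readings to a CSV file. It assumes the Python psutil package and a hypothetical output file name, and it merely stands in for the automated, scalable tooling described above.

# Minimal collection sketch, not an enterprise-grade collector: it samples local CPU and
# memory utilization at a fine interval and appends the readings to a CSV file.
# Assumes the third-party psutil package is installed (pip install psutil).
import csv
from datetime import datetime, timezone
from pathlib import Path

import psutil

SAMPLE_INTERVAL_SECONDS = 5                  # finer sampling for real-time and online workloads
OUTPUT_FILE = Path("capacity_metrics.csv")   # hypothetical output location
SAMPLES = 720                                # roughly one hour of data at 5-second intervals

write_header = not OUTPUT_FILE.exists()
with OUTPUT_FILE.open("a", newline="") as f:
    writer = csv.writer(f)
    if write_header:
        writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
    for _ in range(SAMPLES):
        # cpu_percent(interval=...) blocks for the interval and returns the average utilization
        cpu = psutil.cpu_percent(interval=SAMPLE_INTERVAL_SECONDS)
        mem = psutil.virtual_memory().percent
        writer.writerow([datetime.now(timezone.utc).isoformat(), cpu, mem])
        f.flush()  # persist each sample as soon as it is taken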
Data analysis
Traditionally, this analysis was performed by capacity management experts who "manually" inspected data with simple tools such as spreadsheets, or who built and maintained custom tools and queries.
This kind of manual analysis demands a great deal of time and expertise, which strains the many enterprises with limited resources.
Automation is the obvious remedy, although viable solutions in this area have been scarce.
Historically, many of these "automated" solutions still required a lot of time to set up and remained limited in the useful information they provided.
However, technology can now solve analysis problems in a more practical and effective manner.
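To give a flavor of what even basic automated analysis can do with such data, here is a minimal sketch. It assumes pandas and a hypothetical CSV with columns server, timestamp, and cpu_percent, and it simply flags servers whose sustained busy-period utilization approaches saturation; a real capacity analytics tool goes far beyond this.

# Minimal analysis sketch: flag servers whose sustained utilization approaches saturation.
# Assumes pandas is installed and a hypothetical CSV with columns: server, timestamp, cpu_percent.
import pandas as pd

SATURATION_THRESHOLD = 80.0  # flag servers whose busy-period CPU exceeds this percentage

df = pd.read_csv("capacity_metrics_by_server.csv", parse_dates=["timestamp"])

# The 95th-percentile utilization per server approximates sustained busy-period load,
# ignoring brief spikes that a simple average would hide.
busy = df.groupby("server")["cpu_percent"].quantile(0.95)

at_risk = busy[busy > SATURATION_THRESHOLD].sort_values(ascending=False)
print("Servers approaching saturation (95th-percentile CPU %):")
print(at_risk.to_string())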
Forecasting
To accurately predict performance, we need to recognize that the behavior of a computer system is not linear.
If it were linear, prediction would be as simple as extrapolating a linear trend.
In reality, there is queuing.
Queuing occurs when a CPU, controller, or other device receives more work than it can handle.
Requests then have to wait in line, just like waiting at a checkout in a store.
When the queue is short or empty, response time grows roughly in proportion to the work added.
Add some more work, applications, or infrastructure, and the device has more to do.
The queue builds, and suddenly the delays are enormous.
This is the dreaded inflection point in the curve, beyond which response time increases dramatically: wait time exceeds working time and responsiveness suffers badly.
Too often, IT assumes that delays will always be linear, and then scrambles to fix the problem when they are not.
To avoid the inflection point, many IT organizations follow a strategy of never letting systems get too busy, which means overprovisioning: safe, but wasteful.
They pay too much to stay clear of the inflection point.
You must know exactly where the inflection point will appear so that you can avoid it without overprovisioning, and that requires understanding how the IT components interact to perform the work.
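To make the shape of that curve concrete, here is a small sketch using the textbook single-server queuing approximation, response time = service time / (1 - utilization). The 10 ms service time is purely illustrative; the point is how flat the curve is at low utilization and how steeply it climbs past the knee.

# Textbook single-server (M/M/1) queuing approximation: R = S / (1 - U),
# where S is the service time with no queue and U is device utilization (0..1).
SERVICE_TIME_MS = 10.0  # hypothetical per-request service time

def response_time_ms(utilization: float, service_time_ms: float = SERVICE_TIME_MS) -> float:
    """Average response time for a single-server queue at the given utilization."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

for u in (0.10, 0.30, 0.50, 0.70, 0.80, 0.90, 0.95, 0.99):
    r = response_time_ms(u)
    wait = r - SERVICE_TIME_MS  # time spent queuing rather than being served
    print(f"utilization {u:4.0%}   response {r:7.1f} ms   (waiting {wait:6.1f} ms)")

# Response time merely doubles by 50% utilization but is ten times the service time at 90%:
# that steep region is the knee that capacity planning tries to locate precisely.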
A variety of techniques can be used to predict performance with varying degrees of accuracy, from Excel spreadsheets and linear trending to simulation modeling and analytic modeling.
Until recently, however, these solutions required a great deal of expertise and time.
Fortunately, predictions can now be generated automatically and in a timely manner.