Cloud computing key technologies and research issues

Cloud computing, as a new computing concept and model, technically integrates large server clusters, including computing servers, storage servers, and network bandwidth resources. By virtualizing these allocable resources, special software allocates them on demand to support the operation of various applications. Users therefore only need to focus on business-related solutions, without spending large amounts of manpower, material, and financial resources on hardware platforms, computing, secure storage, and information consistency. This improves overall system efficiency, reduces costs, and promotes technological innovation.

Although computing platforms and services based on the cloud computing model have been widely accepted and are gradually entering practical use, cloud computing research is still in its infancy overall, and many existing problems remain unsolved. This article presents some challenging key technologies and research issues in cloud computing.

1 VM migration

Cloud computing achieves load balancing across the data center by allowing virtual machine migration. In addition, virtual machine migration improves the robustness and responsiveness of the data center.

Virtual machine migration evolved from process migration. Recently, Xen and VMware have implemented live migration of virtual machines. Literature [1] pointed out that migrating an entire operating system and all of its applications as a unit avoids many of the difficulties faced by process-level migration, and analyzed the advantages of live virtual machine migration.

The main advantage of virtual machine migration is avoiding hotspots; however, this is not simple. Current techniques for detecting workload hotspots and initiating migrations lack the agility to respond to sudden workload changes. In addition, when a virtual machine is migrated, its memory state must be transferred consistently and efficiently, and the resource load of both the application and the physical servers must be considered together.
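To make the hotspot-detection problem concrete, the following is a minimal sketch of a detector that triggers migration only when smoothed utilization stays high for several consecutive samples. The threshold, smoothing window, and sustain count are illustrative assumptions, not values from any production system.

```python
from collections import deque

class HotspotDetector:
    """Flag a host as a hotspot only when smoothed utilization stays above
    a threshold for several consecutive samples, avoiding reactions to
    transient spikes (illustrative parameter values)."""

    def __init__(self, threshold=0.85, window=5, sustain=3):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # sliding window of utilization
        self.sustain = sustain
        self.hot_count = 0

    def observe(self, utilization):
        self.samples.append(utilization)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.threshold:
            self.hot_count += 1
        else:
            self.hot_count = 0
        # Trigger migration only after the condition has persisted.
        return self.hot_count >= self.sustain

detector = HotspotDetector()
decisions = [detector.observe(u)
             for u in [0.5, 0.95, 0.9, 0.95, 0.97, 0.99, 0.98]]
# Only the final sample, after sustained overload, returns True.
```

The hysteresis (the `sustain` count) is exactly the trade-off the text describes: too little makes the system react to noise, too much makes it slow to respond to genuine load changes.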

2 Server consolidation

Server consolidation can maximize resource utilization while minimizing energy consumption. Virtual machine migration is often used to consolidate virtual machines residing on several lightly used servers onto a single server, so that the remaining servers can be put into an energy-saving state. Optimal consolidation of servers in a data center is a variant of the bin-packing problem and is NP-hard; various heuristic methods have been proposed for it.
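One such heuristic is first-fit decreasing (FFD), sketched below with each VM reduced to a single normalized CPU demand and each server to a fixed-capacity bin. This one-dimensional view is a simplifying assumption; real consolidators must balance several resource dimensions at once.

```python
def consolidate(vm_demands, capacity=100):
    """Pack VMs onto as few servers as possible using first-fit decreasing.

    vm_demands: per-VM demand in percent of one server's capacity.
    Returns (number of servers used, list of (demand, server_index)).
    """
    servers = []    # remaining capacity of each opened server
    placement = []  # (vm_demand, server_index)
    for demand in sorted(vm_demands, reverse=True):
        for i, free in enumerate(servers):
            if demand <= free:       # fits on an existing server
                servers[i] -= demand
                placement.append((demand, i))
                break
        else:                        # no server fits: open a new one
            servers.append(capacity - demand)
            placement.append((demand, len(servers) - 1))
    return len(servers), placement

n, _ = consolidate([20, 50, 40, 70, 10, 30])
# Total demand is 220% of one server, so at least 3 servers are needed;
# FFD finds a 3-server packing here.
```

Sorting demands in decreasing order before placing them is what distinguishes FFD from plain first fit and generally tightens the packing.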

Server consolidation should not hurt application performance. It is well known that the resource usage of individual virtual machines varies over time. For server resources shared among virtual machines (such as bandwidth, memory cache, and disk I/O), consolidating servers to the maximum extent may cause congestion.

Therefore, it is important to observe fluctuations in virtual machine load and use this information to effectively consolidate servers. Finally, when resource congestion occurs, the system must be able to respond quickly.

3 Energy management

Improving energy efficiency is another major issue in cloud computing. It is estimated that energy consumption costs account for 53% of total data center operating expenses. Therefore, infrastructure providers are under tremendous pressure to reduce energy consumption. The goal is not only to reduce energy costs in the data center, but also to meet government regulations and environmental standards.

Designing energy-efficient data centers has recently received increasing attention. The problem can be approached from several directions. For example, energy-efficient hardware architectures that slow down CPU speed and power off idle hardware components have become a consensus among researchers.
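The argument for slowing the CPU rests on the classic CMOS dynamic-power relation, roughly P ≈ C · V² · f: because supply voltage can be lowered together with frequency, power falls faster than linearly in f. The sketch below uses illustrative coefficients, not measured hardware values.

```python
def dynamic_power(cap, voltage, freq_hz):
    """Approximate CMOS dynamic power: P ~ C * V^2 * f.
    cap, voltage, freq_hz are illustrative, not measured values."""
    return cap * voltage ** 2 * freq_hz

full = dynamic_power(1e-9, 1.2, 2.0e9)  # full speed at nominal voltage
slow = dynamic_power(1e-9, 1.0, 1.0e9)  # half speed at reduced voltage
# Halving frequency with a modest voltage drop cuts power to well
# below half, which is why DVFS can save energy overall.
```

This quadratic dependence on voltage is what makes dynamic voltage and frequency scaling (DVFS) attractive despite the longer execution time.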

Energy-aware job scheduling and server consolidation can reduce energy consumption. Recent research has also begun to study energy-efficient network protocols and infrastructure.

A key challenge is striking a good balance between energy saving and application performance. In this regard, some researchers have recently begun to pursue coordinated solutions for performance and power management in dynamic cloud environments [3].

4 Traffic management and analysis

Analyzing data traffic is important for today's data centers. For example, many web applications rely on traffic analysis to optimize the user experience, and network operators need traffic data for many management and planning decisions. However, extending the existing traffic measurement and analysis methods of Internet service providers (ISPs) to cloud computing data centers poses several challenges. First, link density in data centers is much higher than in ISP networks. Second, most existing methods can compute traffic matrices for a few hundred hosts, while even a small data center may have thousands of servers. Finally, existing methods usually assume certain ISP traffic patterns, but applications deployed in data centers (such as MapReduce jobs) change traffic patterns significantly.
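At its core, a traffic matrix is just per-(source, destination) byte counts aggregated from flow records, as the minimal sketch below shows. The flat `(src, dst, bytes)` record format is an illustrative assumption; real pipelines work from sampled NetFlow/sFlow data, and scaling this aggregation to thousands of hosts is precisely the difficulty noted above.

```python
from collections import defaultdict

def traffic_matrix(flows):
    """Sum bytes per (src, dst) host pair from flow records.
    Record format (src, dst, nbytes) is an illustrative assumption."""
    matrix = defaultdict(int)
    for src, dst, nbytes in flows:
        matrix[(src, dst)] += nbytes
    return dict(matrix)

flows = [
    ("h1", "h2", 1000),
    ("h1", "h2", 500),   # repeated flows between the same pair accumulate
    ("h2", "h3", 200),
]
tm = traffic_matrix(flows)
# tm[("h1", "h2")] == 1500, tm[("h2", "h3")] == 200
```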

In addition, in cloud computing there is tighter coupling between an application's use of network, computing, and storage resources.

At present, there is little work on measuring and analyzing data center traffic. Literature [4] reported the characteristics of data center traffic and used them to guide the design of network infrastructure.

5 Software framework

Cloud computing provides a platform for large-scale data-intensive applications. These applications typically use a MapReduce framework such as Hadoop for scalable, fault-tolerant data processing. Research shows that the performance and resource consumption of MapReduce jobs depend heavily on the type of application: the Hadoop sort benchmark, for example, is I/O-intensive, while grep requires substantial CPU resources.

In addition, the VMs assigned to each Hadoop node may be heterogeneous. For example, the bandwidth available to a VM depends on the other VMs placed on the same server.

Therefore, the performance and cost of a MapReduce application can be optimized by carefully choosing its configuration parameter values and designing more efficient scheduling algorithms. Relieving bottleneck resources can significantly improve application execution time. Key challenges include Hadoop performance modeling (whether online or offline) and adaptive scheduling under dynamic conditions.
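A very coarse flavor of such performance modeling is the "wave" model: with s slots and n tasks, a phase needs ceil(n / s) waves of the average task duration. The sketch below is a deliberate simplification under stated assumptions, not a faithful Hadoop model (real Hadoop overlaps shuffle with the map phase, and task times vary).

```python
import math

def estimate_makespan(n_map, n_reduce, map_slots, reduce_slots,
                      avg_map_time, avg_reduce_time):
    """Coarse MapReduce makespan estimate in seconds.
    Assumes uniform task durations and that reduces start only after
    all maps finish (a simplification of real Hadoop behavior)."""
    map_waves = math.ceil(n_map / map_slots)
    reduce_waves = math.ceil(n_reduce / reduce_slots)
    return map_waves * avg_map_time + reduce_waves * avg_reduce_time

# 100 map tasks on 20 slots -> 5 waves; 10 reduces on 10 slots -> 1 wave.
t = estimate_makespan(100, 10, 20, 10, avg_map_time=30, avg_reduce_time=60)
# 5 * 30 + 1 * 60 = 210 seconds
```

Even this crude model makes the tuning lever visible: adding map slots reduces the number of waves, directly shortening the estimated makespan.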

Another line of work makes the MapReduce framework energy-aware [5]. The basic idea is to put Hadoop nodes that have finished their work and are waiting for new tasks to sleep, which requires that Hadoop and HDFS themselves be energy-aware. Moreover, there is usually a trade-off between performance and energy saving; depending on the goal, finding the ideal trade-off point remains an open research topic.
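The sleep decision itself is a small break-even calculation: sleeping saves idle power but costs energy (and latency) on wake-up, so it pays off only when the expected idle period is long enough. All power and energy figures below are illustrative assumptions.

```python
def should_sleep(expected_idle_s, idle_power_w=100.0, sleep_power_w=10.0,
                 wake_energy_j=2000.0):
    """Sleep an idle node if the energy saved over the expected idle
    period exceeds the wake-up cost (illustrative figures)."""
    saved = (idle_power_w - sleep_power_w) * expected_idle_s
    return saved > wake_energy_j

# Break-even with these numbers: 2000 J / 90 W, about 22 s of idle time.
# A 10 s lull is not worth sleeping through; a 60 s lull is.
```

This is where the performance trade-off mentioned above appears: a node that sleeps through a short lull pays the wake-up latency when the next task arrives.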

6 Storage technology and data management

The MapReduce software framework and its various implementations (Hadoop and Dryad) target distributed processing of data-intensive tasks. These frameworks usually run on Internet-scale file systems such as GFS and HDFS, whose storage structure, access patterns, and application programming interfaces differ from those of traditional distributed file systems. In particular, they do not implement the standard POSIX interface, which introduces compatibility issues with legacy file systems and applications. Current solutions include supporting the MapReduce framework on cluster file systems (such as IBM's GPFS) and supporting scalable, concurrent data access through new API primitives.

7 Conclusion

Driven jointly by demand, technological progress, and changing business models, cloud computing has developed rapidly. Its core is a new model for storing, processing, and serving information and data. This paper has summarized the key technologies and difficulties of this emerging computing model from the perspectives of cloud platform construction and management, application construction, and related issues, and has outlined the problems that future cloud computing research and applications need to solve.
