This manages OS and application as a single unit by encapsulating them into virtual machines

Cloud Resource Virtualization

Dan C. Marinescu, in Cloud Computing, 2013

5.4 Virtual machines

A virtual machine (VM) is an isolated environment that appears to be a whole computer but actually only has access to a portion of the computer resources. Each VM appears to be running on the bare hardware, giving the appearance of multiple instances of the same computer, though all are supported by a single physical system. Virtual machines have been around since the early 1970s, when IBM released its VM/370 operating system.

We distinguish two types of VM: process and system VMs [see Figure 5.3(a)]. A process VM is a virtual platform created for an individual process and destroyed once the process terminates. Virtually all operating systems provide a process VM for each one of the applications running, but the more interesting process VMs are those that support binaries compiled on a different instruction set. A system VM supports an operating system together with many user processes. When the VM runs under the control of a normal OS and provides a platform-independent host for a single application, we have an application virtual machine (e.g., Java Virtual Machine [JVM]).

Figure 5.3. (a) A taxonomy of process and system VMs for the same and for different ISAs. Traditional, hybrid, and hosted are three classes of VM for systems with the same ISA. (b) Traditional VMs. The VMM supports multiple VMs and runs directly on the hardware. (c) A hybrid VM. The VMM shares the hardware with a host operating system and supports multiple virtual machines. (d) A hosted VM. The VMM runs under a host operating system.

A literature search reveals the existence of some 60 different virtual machines, many created by the large software companies; Table 5.1 lists a subset of them.

Table 5.1. A nonexhaustive inventory of system virtual machines. The host ISA refers to the instruction set of the hardware; the guest ISA refers to the instruction set supported by the virtual machine. The VM could run under a host OS, directly on the hardware, or under a VMM. The guest OS is the operating system running under the control of a VM, which in turn may run under the control of the VMM.

Name | Host ISA | Guest ISA | Host OS | Guest OS | Company
Integrity VM | x86-64 | x86-64 | HP-UX | Linux, Windows, HP-UX | HP
Power VM | Power | Power | No host OS | Linux, AIX | IBM
z/VM | z-ISA | z-ISA | No host OS | Linux on z-ISA | IBM
Lynx Secure | x86 | x86 | No host OS | Linux, Windows | LynuxWorks
Hyper-V Server | x86-64 | x86-64 | Windows | Windows | Microsoft
Oracle VM | x86, x86-64 | x86, x86-64 | No host OS | Linux, Windows | Oracle
RTS Hypervisor | x86 | x86 | No host OS | Linux, Windows | Real Time Systems
SUN xVM | x86, SPARC | same as host | No host OS | Linux, Windows | SUN
VMware ESX Server | x86, x86-64 | x86, x86-64 | No host OS | Linux, Windows, Solaris, FreeBSD | VMware
VMware Fusion | x86, x86-64 | x86, x86-64 | Mac OS x86 | Linux, Windows, Solaris, FreeBSD | VMware
VMware Server | x86, x86-64 | x86, x86-64 | Linux, Windows | Linux, Windows, Solaris, FreeBSD | VMware
VMware Workstation | x86, x86-64 | x86, x86-64 | Linux, Windows | Linux, Windows, Solaris, FreeBSD | VMware
VMware Player | x86, x86-64 | x86, x86-64 | Linux, Windows | Linux, Windows, Solaris, FreeBSD | VMware
Denali | x86 | x86 | Denali | ILVACO, NetBSD | University of Washington
Xen | x86, x86-64 | x86, x86-64 | Linux, Solaris | Linux, Solaris, NetBSD | University of Cambridge

A system virtual machine provides a complete system; each VM can run its own OS, which in turn can run multiple applications. Systems such as Linux VServer [214] and OpenVZ (Open VirtualiZation) [274], FreeBSD Jails [124], and Solaris Zones [296], based on Linux, FreeBSD, and Solaris, respectively, implement operating system-level virtualization technologies.

Operating system-level virtualization allows a physical server to run multiple isolated operating system instances, subject to several constraints; the instances are known as containers, virtual private servers (VPSs), or virtual environments (VEs). For example, OpenVZ requires both the host and the guest OS to be Linux distributions. These systems claim performance advantages over the systems based on a VMM such as Xen or VMware; according to [274], there is only a 1% to 3% performance penalty for OpenVZ compared to a stand-alone Linux server. OpenVZ is licensed under the GPL version 2.
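As a rough illustration of the kernel mechanism underlying OS-level virtualization, the sketch below gives a process its own UTS (hostname) namespace on Linux. It is not how OpenVZ or the other systems above are implemented, only a minimal, hypothetical demonstration; it is Linux-only and requires root privileges.

```python
# Minimal sketch of OS-level isolation on Linux: give the current process
# its own UTS namespace so a hostname change is invisible to the host.
# Linux-only, requires root (CAP_SYS_ADMIN); illustrative, not OpenVZ itself.
import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000          # from <sched.h>

libc = ctypes.CDLL(None, use_errno=True)

def enter_private_uts_namespace() -> None:
    """Detach this process into a fresh UTS (hostname) namespace."""
    if libc.unshare(CLONE_NEWUTS) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

if __name__ == "__main__":
    print("host sees hostname:", socket.gethostname())
    enter_private_uts_namespace()
    socket.sethostname("container-demo")   # only this namespace changes
    print("inside namespace   :", socket.gethostname())
```

Containers combine several such namespaces (PID, mount, network, and so on) with resource controls to approximate a full virtual private server.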

Recall that a VMM allows several virtual machines to share a system. Several organizations of the software stack are possible:

Traditional. Also called a “bare-metal” VMM: a thin software layer that runs directly on the host machine hardware; its main advantage is performance [see Figure 5.3(b)]. Examples: VMware ESX and ESXi Servers, Xen, OS370, and Denali.

Hybrid. The VMM shares the hardware with the existing OS [see Figure 5.3(c)]. Example: VMware Workstation.

Hosted. The VM runs on top of an existing OS [see Figure 5.3(d)]. The main advantage of this approach is that the VM is easier to build and install. Another advantage is that the VMM can use several components of the host OS, such as the scheduler, the pager, and the I/O drivers, rather than providing its own. The price paid for this simplicity is increased overhead and the associated performance penalty: I/O operations, page faults, and scheduling requests from a guest OS are not handled directly by the VMM but are instead passed to the host OS. The performance penalty, together with the difficulty of supporting complete isolation of VMs, makes this solution less attractive for servers in a cloud computing environment. Example: User-mode Linux.

A semantic gap exists between the added services and the virtual machine. As pointed out in [79], services provided by the virtual machine “operate below the abstractions provided by the guest operating system. It is difficult to provide a service that checks file system integrity without the knowledge of on-disk structure.”

The VMMs discussed next manage the resource sharing among the VMs sharing a physical system.

URL: https://www.sciencedirect.com/science/article/pii/B9780124046276000051

Virtual Servers and Platform as a Service

Ric Messier, in Collaboration with Cloud Computing, 2014

Hypervisors

The hypervisor, or VMM, is the software layer that sits underneath the guest operating system, allowing it to believe that real hardware is in place. A number of hypervisors are available commercially today from VMware, Oracle, Microsoft, and Parallels, as well as for Linux, such as KVM and Xen. There are two types of hypervisors. The first type runs on the bare metal without an underlying operating system of its own. The second type runs on top of a host operating system. As an example, I am running Mac OS X on a MacBook Air while I am writing this, with Parallels installed as a hypervisor. Since it runs on top of Mac OS X, it is a type 2 hypervisor. VMware ESXi Server and Microsoft Hyper-V Server are type 1 hypervisors that run on the bare metal and don't require a host operating system.

Either type of hypervisor is capable of running multiple guest operating systems. With a type 2 hypervisor, you may even run multiple type 2 hypervisors on top of your host operating system, and each of those hypervisors may host multiple guest operating systems. As an example, I have had both Parallels and VMware Fusion installed on a Macintosh in the past and could run both of them at the same time, with guest operating systems running on top of each hypervisor. Figure 5.3 is a graphical representation of the two types of hypervisors. With type 1, the hypervisor sits right on top of the hardware, while with type 2, a more traditional host operating system sits on top of the hardware and the hypervisor runs on top of the host operating system.
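From inside a guest, either kind of hypervisor usually betrays its presence. The sketch below is a Linux-only illustration: the kernel exports a "hypervisor" CPU flag in /proc/cpuinfo when the CPUID hypervisor bit is set, and the DMI vendor string often names the platform. The exact strings vary by hypervisor, so treat the output as a hint rather than a guarantee.

```python
# Sketch: from inside a Linux guest, check whether a hypervisor (type 1 or
# type 2) is underneath us. The "hypervisor" CPU flag is set by the VMM;
# DMI strings often reveal the vendor (e.g., "VMware, Inc.", "QEMU").
from pathlib import Path

def running_under_hypervisor() -> bool:
    cpuinfo = Path("/proc/cpuinfo").read_text()
    return " hypervisor" in cpuinfo   # flag exported when CPUID reports a VMM

def dmi_vendor() -> str:
    vendor = Path("/sys/class/dmi/id/sys_vendor")
    return vendor.read_text().strip() if vendor.exists() else "unknown"

if __name__ == "__main__":
    if running_under_hypervisor():
        print("virtualized; platform vendor:", dmi_vendor())
    else:
        print("no hypervisor flag: likely bare metal")
```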

Figure 5.3. Two Types of Virtual Machines

URL: https://www.sciencedirect.com/science/article/pii/B9780124170407000058

Software Detection and Recovery

In Architecture Design for Soft Errors, 2008

8.7 OS-Level and VMM-Level Recoveries

Recovery in the OS or the VMM is very appealing. One can imagine running multiple applications, such as an editor and an audio player in the background, when one's machine crashes. When the system recovers from this error, it would be very nice to have both the editing window with unsaved changes visible and the song resume exactly where it left off. This requires extensive checkpointing and recovery mechanisms in either the OS or the VMM. Commercial systems have yet to fully adopt these techniques.

An interesting OS-level approach to handle the output commit problem has been proposed by Masubuchi et al. [6]. Disk output requests are redirected to a pseudo-device driver rather than to the device driver (Figure 8.15). The pseudo-device driver blocks outputs from any process until the next checkpoint. Nakano et al. [7] further optimized this proposal by observing that disk I/O and many network I/O operations are idempotent and can be replayed even if the output has already been committed once before. Disk I/O is naturally idempotent. Network I/O is not idempotent by itself, but the TCP/IP protocol allows the same packet with the same sequence number to be sent and received multiple times; the receiver discards any redundant copies of the same packet.
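The following toy sketch (invented class and method names, not the code of [6] or [7]) captures both ideas: a pseudo-device driver that holds disk writes back until the next checkpoint, and replay of logged block writes, which is safe because re-writing the same block is idempotent.

```python
# Toy sketch of the output-commit idea: a pseudo-device driver buffers disk
# writes until the next checkpoint, and replaying a logged block write is
# idempotent (writing block 7 twice leaves the same bytes in block 7).
class PseudoDiskDriver:
    def __init__(self) -> None:
        self.disk: dict[int, bytes] = {}     # committed blocks
        self.pending: dict[int, bytes] = {}  # writes held back until checkpoint

    def write(self, block: int, data: bytes) -> None:
        self.pending[block] = data           # not visible outside yet

    def checkpoint(self) -> None:
        self.disk.update(self.pending)       # commit all held-back output
        self.pending.clear()

    def replay(self, log: list[tuple[int, bytes]]) -> None:
        for block, data in log:              # safe even if already applied
            self.disk[block] = data

drv = PseudoDiskDriver()
drv.write(7, b"hello")
assert 7 not in drv.disk                     # held until the checkpoint
drv.checkpoint()
drv.replay([(7, b"hello")])                  # idempotent re-application
assert drv.disk[7] == b"hello"
```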

FIGURE 8.15. The pseudo-device driver (PDD) software layer.

URL: https://www.sciencedirect.com/science/article/pii/B9780123695291500100

An Introduction to Virtualization

In Virtualization for Security, 2009

The Popek and Goldberg Requirements

Often referred to as the original reference source for VMM criteria, the Popek and Goldberg Virtualization Requirements define the conditions for a computer architecture to support virtualization. Written in 1974 for the third-generation computer systems of those days, they generalized the conditions that the software that provides the abstraction of a virtual machine, or VMM, must satisfy. These conditions, or properties, are

Equivalence A program running under the VMM should exhibit a predictable behavior that is essentially identical to that demonstrated when running on the underlying hardware platform directly. This is sometimes referred to as Fidelity.

Resource Control The VMM must be in complete control of the actual hardware resources virtualized for the guest operating systems at all times. This is sometimes referred to as Safety.

Efficiency An overwhelming number of machine instructions must be executed without VMM intervention or, in other words, by the hardware itself. This is sometimes referred to as Performance.

According to Popek and Goldberg, the problem that VMM developers must address is creating a VMM that satisfies the preceding conditions when operating within the characteristics of the Instruction Set Architecture (ISA) of the targeted hardware platform. The ISA can be classified into three groups of instructions: privileged, control-sensitive, and behavior-sensitive. Privileged instructions are those that trap if the processor is in user mode and do not trap if it is in supervisor mode. Control-sensitive instructions are those that attempt to change the configuration of actual resources in the hardware platform. Behavior-sensitive instructions are those whose behavior or result depends on the configuration of resources.
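Popek and Goldberg's sufficiency condition can be stated compactly: an architecture supports a classical VMM if every sensitive instruction (control- or behavior-sensitive) is also privileged, that is, traps when executed in user mode. The sketch below checks that condition for two toy instruction sets; the instruction names are illustrative, with POPF included because it is the classic x86 counterexample.

```python
# Popek & Goldberg (1974): a VMM with the equivalence, resource-control, and
# efficiency properties can be built if every sensitive instruction is also
# privileged, i.e., traps in user mode. Instruction names are illustrative.
def classically_virtualizable(privileged: set[str], sensitive: set[str]) -> bool:
    return sensitive <= privileged

toy_isa_ok = {
    "privileged": {"load_cr3", "halt", "set_timer"},
    "sensitive":  {"load_cr3", "set_timer"},
}
# x86 before VT-x/AMD-V: POPF silently ignores the interrupt flag in user
# mode, so it is sensitive but not privileged, and the condition fails.
toy_x86_like = {
    "privileged": {"load_cr3", "halt"},
    "sensitive":  {"load_cr3", "popf"},
}

print(classically_virtualizable(toy_isa_ok["privileged"], toy_isa_ok["sensitive"]))     # True
print(classically_virtualizable(toy_x86_like["privileged"], toy_x86_like["sensitive"])) # False
```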

VMMs must work with each group of instructions while maintaining the conditions of equivalence, resource control, and efficiency. Virtually all modern-day VMMs satisfy the first two: equivalence and resource control. They do so by effectively managing the guest operating system and hardware platform underneath through emulation, isolation, allocation, and encapsulation, as explained in Table 1.3.

Table 1.3. VMM Functions and Responsibilities

Function | Description
Emulation | Emulation is important for all guest operating systems. The VMM must present a complete hardware environment, or virtual machine, for each software stack, whether it is an operating system or an application. Ideally, the OS and application are completely unaware that they are sharing hardware resources with other applications. Emulation is key to satisfying the equivalence property.
Isolation | Isolation, though not required, is important for a secure and reliable environment. Through hardware abstraction, each virtual machine should be sufficiently separated and independent from the operations and activities of other virtual machines. Faults that occur in a single virtual machine should not impact others, thus providing high levels of security and availability.
Allocation | The VMM must methodically allocate platform resources to the virtual machines that it manages. Resources for processing, memory, network I/O, and storage must be balanced to optimize performance and align service levels with business requirements. Through allocation, the VMM satisfies the resource control property and, to some extent, the efficiency property as well.
Encapsulation | Encapsulation, though not mandated by the Popek and Goldberg requirements, enables each software stack (OS and application) to be highly portable, able to be copied or moved from one platform running the VMM to another. In some cases, this level of portability even allows live, running virtual machines to be migrated. Encapsulation must include state information in order to maintain the integrity of the transferred virtual machine.

URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000013

Desktop Virtualization

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Xen

Xen (pronounced “zen”) is a powerful VMM for the x86, x86-64, Itanium, and PowerPC 970 architectures that allows multiple OSes to run concurrently on the same physical server hardware. Figure 3.10 shows Xen running a Windows XP virtual environment inside Linux. Xen's hypervisor uses paravirtualization, a technique that presents a software interface that is similar, though not identical, to the underlying hardware. Xen systems are structured so that the Xen hypervisor is the lowest and most privileged layer; above it come one or more guest OSes, which the hypervisor schedules across the physical CPUs. Xen includes features such as 32- and 64-bit guest support, live migration, and support for the latest processors from Intel and AMD, along with other experimental non-x86 platforms.

Figure 3.10. Xen VMM Running a Windows XP Virtual Environment inside Linux

Additionally, Xen's hypervisor is available both as an open-source release and as a commercially supported release. XenSource has been acquired by Citrix, which offers a comprehensive end-to-end virtualization solution whose main purpose is to enable IT to deliver applications to users anywhere.

URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000035

Cloud Security

Dan C. Marinescu, in Cloud Computing, 2013

9.9 Security risks posed by a management OS

We often hear that virtualization enhances security because a virtual machine monitor or hypervisor is considerably smaller than an operating system. For example, the Xen VMM discussed in Section 5.8 has approximately 60,000 lines of code, one to two orders of magnitude fewer than a traditional operating system.

A hypervisor supports stronger isolation between the VMs running under it than the isolation between processes supported by a traditional operating system. Yet the hypervisor must rely on a management OS to create VMs and to transfer data in and out from a guest VM to storage devices and network interfaces.

A small VMM can be carefully analyzed; thus, one could conclude that the security risks in a virtual environment are diminished. We have to be cautious with such sweeping statements. Indeed, the trusted computing base (TCB) of a cloud computing environment includes not only the hypervisor but also the management OS. The management OS supports administrative tools, live migration, device drivers, and device emulators.

For example, the TCB of an environment based on Xen includes not only the hardware and the hypervisor but also the management operating system running in the so-called Dom0 (see Figure 9.3). System vulnerabilities can be introduced by both software components: Xen and the management operating system. An analysis of Xen vulnerabilities reports that 21 of the 23 attacks were against service components of the control VM [90]; 11 attacks were attributed to problems in the guest OS caused by buffer overflows, and 8 were denial-of-service attacks.

Figure 9.3. The trusted computing base of a Xen-based environment includes the hardware, Xen, and the management operating system running in Dom0. The management OS supports administrative tools, live migration, device drivers, and device emulators. A guest operating system and applications running under it reside in a DomU.

Dom0 manages the building of all user domains (DomU), a process consisting of several steps:

1. Allocate memory in the Dom0 address space and load the kernel of the guest operating system from secondary storage.

2. Allocate memory for the new VM and use foreign mapping to load the kernel into the new VM.

3. Set up the initial page tables for the new VM.

4. Release the foreign mapping on the new VM memory, set up the virtual CPU registers, and launch the new VM.

A toolstack-level sketch of how a management domain requests the creation of a new domain is given after this list.
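The sketch below is not Xen's internal Dom0 code; it only illustrates how a management layer typically asks for a new domain, using the libvirt-python bindings (assumed to be installed, with a Xen host reachable at the given URI). The domain XML, kernel path, and names are placeholders.

```python
# Hedged sketch: requesting a new domain through libvirt from the management
# domain. This is a toolstack-level illustration, not Xen's internal steps;
# the URI, XML, and paths are assumptions for the example.
import libvirt

DOMAIN_XML = """
<domain type='xen'>
  <name>demo-domU</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os>
    <type>linux</type>
    <kernel>/var/lib/xen/vmlinuz-guest</kernel>
  </os>
</domain>
"""

conn = libvirt.open("xen:///system")      # connection made from Dom0
try:
    dom = conn.createXML(DOMAIN_XML, 0)   # build and launch the new DomU
    print("started domain", dom.name(), "with id", dom.ID())
finally:
    conn.close()
```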

A malicious Dom0 can play several nasty tricks at the time when it creates a DomU [215]:

Refuse to carry out the steps necessary to start the new VM, an action that can be considered a denial-of-service attack.

Modify the kernel of the guest operating system in ways that will allow a third party to monitor and control the execution of applications running under the new VM.

Undermine the integrity of the new VM by setting the wrong page tables and/or setting up incorrect virtual CPU registers.

Refuse to release the foreign mapping and access the memory while the new VM is running.

Let us now turn our attention to the run-time interaction between Dom0 and a DomU. Recall that Dom0 exposes a set of abstract devices to the guest operating systems using split drivers. The front end of such a driver is in the DomU and its back end in Dom0, and the two communicate via a ring in shared memory (see Section 5.8).
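The sketch below is a generic single-producer, single-consumer request ring in shared memory, in the spirit of a split driver's front end and back end; it is not Xen's actual ring layout or notification protocol, and the slot format is invented for illustration.

```python
# Generic sketch of a shared-memory request ring like the one a split
# driver's front end and back end use (not Xen's actual ring format).
# Single producer / single consumer; two uint32 indices live in the header.
from multiprocessing import shared_memory
import struct

SLOTS, SLOT_SIZE, HDR = 8, 64, 8           # 8 slots of 64 bytes; 8-byte header

class Ring:
    def __init__(self, name: str | None = None, create: bool = False):
        size = HDR + SLOTS * SLOT_SIZE
        self.shm = shared_memory.SharedMemory(name=name, create=create, size=size)
        if create:
            self.shm.buf[:HDR] = struct.pack("II", 0, 0)   # prod_idx, cons_idx

    def _idx(self):
        return struct.unpack("II", bytes(self.shm.buf[:HDR]))

    def push(self, payload: bytes) -> bool:                # front end (DomU)
        prod, cons = self._idx()
        if prod - cons == SLOTS:
            return False                                    # ring full
        off = HDR + (prod % SLOTS) * SLOT_SIZE
        self.shm.buf[off:off + len(payload)] = payload
        self.shm.buf[:4] = struct.pack("I", prod + 1)
        return True

    def pop(self) -> bytes | None:                          # back end (Dom0)
        prod, cons = self._idx()
        if cons == prod:
            return None                                     # ring empty
        off = HDR + (cons % SLOTS) * SLOT_SIZE
        data = bytes(self.shm.buf[off:off + SLOT_SIZE])
        self.shm.buf[4:8] = struct.pack("I", cons + 1)
        return data

ring = Ring(create=True)
ring.push(b"read block 42".ljust(SLOT_SIZE, b"\0"))
print(ring.pop()[:13])
ring.shm.close(); ring.shm.unlink()
```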

In the original implementation of Xen, a service running in a DomU sends data to or receives data from a client located outside the cloud using a network interface in Dom0; it transfers the data to I/O devices using a device driver in Dom0. Therefore, we have to ensure that run-time communication through Dom0 is encrypted. Yet Transport Layer Security (TLS) does not guarantee that Dom0 cannot extract cryptographic keys from the memory of the OS and applications running in a DomU.

A significant security weakness of Dom0 is that the entire state of the system is maintained by XenStore (see Section 5.8). A malicious VM can deny access to this critical element of the system to other VMs; it can also gain access to the memory of a DomU. This brings us to additional requirements for confidentiality and integrity imposed on Dom0.

Dom0 should be prohibited from using foreign mapping for sharing memory with a DomU unless a DomU initiates the procedure in response to a hypercall from Dom0. When this happens, Dom0 should be provided with an encrypted copy of the memory pages and of the virtual CPU registers. The entire process should be closely monitored by the hypervisor, which, after the access, should check the integrity of the affected DomU.

A virtualization architecture that guarantees confidentiality, integrity, and availability for the TCB of a Xen-based system is presented in [215]. A secure environment when Dom0 cannot be trusted can only be ensured if the guest application is able to store, communicate, and process data safely. Thus, the guest software should have access to secure secondary storage on a remote storage server for keeping sensitive data and network interfaces to communicate with the user. We also need a secure run-time system.

To implement a secure run-time system we have to intercept and control the hypercalls used for communication between a Dom0 that cannot be trusted and a DomU we want to protect. Hypercalls issued by Dom0 that do not read or write to the memory of a DomU or to its virtual registers should be allowed. Other hypercalls should be restricted either completely or during specific time windows. For example, hypercalls used by Dom0 for debugging or for the control of the IOMMU should be prohibited.

We cannot restrict some of the hypercalls issued by Dom0, even though they can be harmful to the security of a DomU. For example, foreign mapping and access to the virtual registers are needed to save and restore the state of a DomU. We should check the integrity of a DomU after the execution of such security-critical hypercalls.

New hypercalls are necessary to protect:

The privacy and integrity of the virtual CPU of a VM. When Dom0 wants to save the state of the VM, the hypercall should be intercepted and the contents of the virtual CPU registers should be encrypted. When a DomU is restored, the virtual CPU context should be decrypted and then an integrity check should be carried out.

The privacy and integrity of the VM virtual memory. The page table update hypercall should be intercepted and the page should be encrypted so that Dom0 handles only encrypted pages of the VM. To guarantee integrity, the hypervisor should calculate a hash of all the memory pages before they are saved by Dom0. Because a restored DomU may be allocated a different memory region, an address translation is necessary (see [215]).

The freshness of the virtual CPU and the memory of the VM. The solution is to add a version number to the hash. A minimal sketch of this encrypt-and-hash save/restore flow is given below.
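The sketch ties the three requirements together in a few lines: the saved state handed to Dom0 is encrypted, and the integrity hash kept by the hypervisor includes a version number so a replayed (stale) checkpoint is rejected. It uses the third-party cryptography package for the encryption step; the class and its methods are hypothetical, not the mechanism of [215].

```python
# Minimal sketch: Dom0 only ever sees encrypted domain state; the hypervisor
# keeps a hash that includes a version counter, so a stale or tampered
# checkpoint fails verification on restore.
import hashlib
from cryptography.fernet import Fernet   # third-party package (assumption)

class GuardedDomainState:
    def __init__(self) -> None:
        self._key = Fernet.generate_key()   # held by the hypervisor, not Dom0
        self._fernet = Fernet(self._key)
        self._version = 0
        self._expected_hash = b""

    def save(self, vcpu_and_pages: bytes) -> bytes:
        """Return an encrypted blob for Dom0 to store; remember its hash."""
        self._version += 1
        self._expected_hash = hashlib.sha256(
            vcpu_and_pages + self._version.to_bytes(8, "little")).digest()
        return self._fernet.encrypt(vcpu_and_pages)

    def restore(self, blob: bytes) -> bytes:
        """Decrypt what Dom0 returns and verify integrity and freshness."""
        state = self._fernet.decrypt(blob)
        digest = hashlib.sha256(
            state + self._version.to_bytes(8, "little")).digest()
        if digest != self._expected_hash:
            raise ValueError("stale or tampered domain state")
        return state

guard = GuardedDomainState()
blob = guard.save(b"vcpu-registers||memory-pages")
assert guard.restore(blob) == b"vcpu-registers||memory-pages"
```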

As expected, the increased level of security and privacy leads to increased overhead. Measurements reported in [215] show increases by factors of 1.7 to 2.3 for the domain build time, 1.3 to 1.5 for the domain save time, and 1.7 to 1.9 for the domain restore time.

URL: https://www.sciencedirect.com/science/article/pii/B9780124046276000099

Next-Generation Data Center Architectures and Technologies

Stephen R. Smoot, Nam K. Tan, in Private Cloud Computing, 2012

ESX Server storage components

There are five main ESX Server components that perform the storage operations:

Virtual Machine Monitor (VMM): The VMM (or hypervisor) contains a layer that emulates SCSI (Small Computer System Interface) devices within a VM. The VMM provides a layer of abstraction that hides and manages the differences between physical storage subsystems. To the applications and guest OS inside each VM, storage is simply presented as SCSI disks connected to either a virtual BusLogic or LSILogic SCSI host bus adapter (HBA).

Note:

VMs use either BusLogic or LSILogic SCSI drivers. BusLogic means Mylex BusLogic BT-958 emulation is used. BT-958 is a SCSI-3 protocol providing Ultra SCSI (Fast-40) transfer rates of 40 MB per second. The driver emulation supports the capability of “SCSI configured automatically,” aka SCAM, that allows SCSI devices to be configured with an ID number automatically, so no manual ID assignment is required.

Virtual SCSI layer: The primary role of the virtual SCSI layer is to manage SCSI commands and intercommunication between the VMM and the virtual storage system that is either the virtual machine file system (VMFS) or the network file system (NFS). All SCSI commands from VMs must go through the virtual SCSI layer. The I/O abort and reset operations are also managed at this layer. From here, the virtual SCSI layer passes I/O or SCSI commands from VMs to lower layers, through VMFS, NFS, or raw device mapping (RDM). RDM supports two modes: pass-through and nonpass-through. In the pass-through mode, all SCSI commands are allowed to pass through without traps. See the Raw Device Mapping section for details about RDM.

Virtual Machine File System (VMFS): The VMFS is a clustered file system that leverages shared storage to allow multiple ESX hosts to read and write to the same storage simultaneously. VMFS provides on-disk distributed locking to ensure that the same VM is not powered on by multiple servers at the same time. In a simple configuration, the VMs' disks are stored as files within a VMFS. See the Virtual Machine File System section for details about VMFS.

SCSI mid-layer: The main functions of the SCSI mid-layer are managing physical HBAs on ESX Server hosts, queuing requests, and handling SCSI errors. In addition, this layer contains automatic rescan logic that detects changes to the logical unit number (LUN) mapping assigned to an ESX Server host. It also handles path management functions, such as automatic path selection, path collapsing, failover, and failback to specific volumes.

Virtual SCSI HBAs: In an ESX Server environment, each VM uses up to four virtual SCSI HBAs. A virtual SCSI HBA allows a VM access to logical SCSI devices. It functions just like a physical HBA to the VM.

URL: https://www.sciencedirect.com/science/article/pii/B9780123849199000027

Cloud Computing Uncovered: A Research Landscape

Mohammad Hamdaqa, Ladan Tahvildari, in Advances in Computers, 2012

4.6.1 Virtualization Technologies

Depending on the location of the virtualization layer (hypervisor), there are two main hardware virtualization architectures: the bare-metal (Type 1) and the hosted (Type 2) architecture [73]. In the bare-metal architecture, the hypervisor is installed directly on the hardware. This architecture outperforms the hosted architecture by allowing I/O devices to be partitioned among virtual machines for direct access. Bare-metal also has the advantage of supporting real-time and general-purpose operating systems in parallel. However, since the hypervisor is installed directly on top of the hardware, it must include all device drivers. Furthermore, the lack of a base operating system makes the installation of these hypervisors more difficult and requires more customization and configuration.

On the other hand, the hosted architecture requires a base operating system to be installed first. The hypervisor (VMM) is installed on top of the hosting operating system. Hence, the VMM is easy to install and configure on most computers without the need for customization. However, a hosted architecture may result in performance degradation, because the I/O requests of the virtual machines need to be directed through the host OS. Another drawback of hosted architectures is their inability to run real-time operating systems directly inside the virtual machines. Figure 9 shows the simplified architectures of a non-virtualized system, and both the bare-metal and the hosted virtualization systems. The following are the main components of these systems:

Fig. 9. The architectures of a non-virtualized system, and both the bare-metal and the hosted virtualization systems.

Platform hardware: The hardware resources, which are required to be shared.

Virtual machine monitor (VMM): The program that is used to manage processor scheduling and physical memory allocation. It creates virtual machines by partitioning the actual resources, and interfaces the underlying hardware (virtual operating platform) to all operating systems (both host and guest).

Guest operating systems: Guest operating systems are always installed on top of the hypervisor in hardware virtualization systems. Guest operating systems are isolated from each other. Guest OS kernels use the interfaces provided by the hypervisor to access their privileged resources.

Host operating system (optional): The host operating system is the base operating system, under which the hypervisor is installed, in the hosted architecture case.

It is apparent from the previous discussion that neither the bare-metal nor the hosted architecture fully satisfies the requirements of Cloud Computing. The bare-metal architecture is more popular in the Cloud Computing infrastructure because it is more efficient and delivers greater scalability, robustness, and performance. However, Cloud Computing still needs the hosted architecture’s flexibility and compatibility.

URL: https://www.sciencedirect.com/science/article/pii/B9780123965356000028

Revisiting VM performance and optimization challenges for big data

Muhammad Ziad Nayyer, ... Syed Asad Hussain, in Advances in Computers, 2019

3.1.1.3 I/O targeted approaches

Disk I/O contention is also a problem for single-server virtualization. One solution is to enforce disk I/O scheduling at both the VM and Virtual Machine Monitor (VMM) levels. Selecting an optimal disk I/O scheduler for collocated VMs can significantly improve the use of disk I/O resources [41]. However, the tradeoff between disk I/O fairness and the performance of collocated VMs is not very flexible: fairness can be achieved only at the cost of I/O throughput and latency. Big data requires either multiple disks or separate NAS devices for storage; in both cases, I/O scheduling at multiple levels is required to avoid disk I/O contention as a whole. Disk I/O scheduling only at the VM level or the VMM level will therefore not work for big data.

A priority-based scheduling technique that improves the response time of I/O requests has been proposed in Ref. [42]. An I/O request is intercepted at the VMM level, and the requests are reordered in the physical server's disk I/O queue according to priority. The network bandwidth problem can be solved through hardware solutions, such as attaching multiple network adapters or using multi-queue network adapters. It can also be solved through software solutions such as bandwidth capping, where a bandwidth cap is set in the VM configuration before booting [34]. For big data, with its increased number of I/O requests, two problems occur: first, a very long queue is required; second, the priority queue's efficiency decreases as its length grows, because lower-priority tasks face longer delays.
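A minimal sketch of priority-ordered dispatch of I/O requests follows (illustrative names, not the scheduler of [42]): requests from collocated VMs are reordered by priority before being issued, and the starvation problem mentioned above follows directly, since low-priority entries can sit behind an arbitrarily long run of higher-priority ones.

```python
# Sketch of priority-ordered I/O dispatch at the VMM level: requests are
# reordered by priority before reaching the physical disk queue. With very
# long queues, low-priority requests can starve.
import heapq
import itertools

class PriorityIOQueue:
    def __init__(self) -> None:
        self._heap: list[tuple[int, int, str]] = []
        self._seq = itertools.count()          # FIFO order within a priority

    def submit(self, priority: int, request: str) -> None:
        heapq.heappush(self._heap, (priority, next(self._seq), request))

    def dispatch(self) -> str | None:
        return heapq.heappop(self._heap)[2] if self._heap else None

q = PriorityIOQueue()
q.submit(2, "vm3: write block 900")
q.submit(0, "vm1: read block 17")              # lower number = higher priority
q.submit(1, "vm2: read block 40")
print([q.dispatch() for _ in range(3)])
# ['vm1: read block 17', 'vm2: read block 40', 'vm3: write block 900']
```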

In Nicpic [43], responsibility for packet scheduling and its state management has been separated from the CPU. Tasks such as classifying packets, placing them in queues, and applying rate limits are still performed by the CPU. The packets are queued in memory and scheduled for extraction according to the rate limits of the VMs. Both scheduling and extraction are performed by the Network Interface Card (NIC), and direct memory access (DMA) is used to extract the packets from the queue to the NIC. A very long queue is required to handle big data, resulting in a larger memory requirement at the NIC level. Since NICs are not capable of handling complex computations, scheduling and extracting a large number of data packets becomes difficult and time consuming, resulting in delayed service.
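Per-VM rate caps of this kind are commonly implemented with a token bucket; the sketch below is a generic, software-only stand-in for that idea, not Nicpic's NIC-side mechanism.

```python
# Generic token-bucket rate limiter of the kind used to cap per-VM packet
# rates (a simplified software illustration, not Nicpic itself).
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float) -> None:
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False                    # over the VM's rate cap: queue or drop

vm_cap = TokenBucket(rate_bytes_per_s=1_000_000, burst_bytes=64_000)
print(vm_cap.allow(1500), vm_cap.allow(128_000))   # True False
```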

A packet-aggregation-based mechanism overcomes the memory latency problem caused by network I/O while transferring packets from the driver domain to the guest OS [44]. The driver domain is used to provide access to shared network devices. One large packet is created by combining several small packets and is transferred to the aggregation destination. More data can be transferred with the aggregation mechanism, since there are fewer memory requests and revocations, fewer copies, and fewer notifications. With big data, the frequency of requests to the driver domain at the aggregation destination increases. Reversing the aggregation process requires extracting the small packets, which incurs more CPU load and hence degrades overall performance.
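A minimal sketch of the aggregation idea, with an invented length-prefix framing: many small packets become one large transfer, and the destination pays the CPU cost of splitting them back apart.

```python
# Sketch of packet aggregation between driver domain and guest: several small
# packets are combined into one large transfer and split at the destination.
# The length-prefix framing is purely illustrative.
import struct

def aggregate(packets: list[bytes]) -> bytes:
    return b"".join(struct.pack("!I", len(p)) + p for p in packets)

def split(blob: bytes) -> list[bytes]:
    packets, off = [], 0
    while off < len(blob):
        (length,) = struct.unpack_from("!I", blob, off)
        off += 4
        packets.append(blob[off:off + length])
        off += length
    return packets

small = [b"pkt-a", b"pkt-bb", b"pkt-ccc"]
one_big_transfer = aggregate(small)        # one copy, one notification
assert split(one_big_transfer) == small    # reversal costs CPU at the far end
```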

URL: https://www.sciencedirect.com/science/article/pii/S0065245819300129

How Virtualization Happens

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

How Virtualization Works

A VM is a software implementation of a computer that executes programs like a physical machine. In “Formal Requirements for Virtualizable Third Generation Architectures,” Popek and Goldberg, who defined conditions that may be tested to determine whether an architecture can support a VM, describe it as “an efficient, isolated duplicate of a real machine.” VMs have been in use for many years, and some would say we have come full circle. The fundamental concept of a VM revolves around a software application that behaves as if it were its own computer. This is the original job sharing mechanism prominent in mainframes. The VM application (the “guest”) runs its own self-contained OS on the actual (“host”) machine. Put simply, a VM is a virtual computer running inside a physical computer. The virtual OS can range from a Windows environment to a Macintosh environment and is not limited to one per host machine. For example, you may have a Windows XP host machine with Linux and Windows 2003 VMs. Figure 1.2 shows Linux and Windows 2003 Server VMs running on a Windows XP host machine.

Figure 1.2. Linux XP and Windows 2003 Virtual Machines Running on a Windows XP-Based Host

The explosion of x86 servers revived interest in virtualization. The primary driver was the potential for server consolidation. Virtualization allowed a single server to replace multiple dedicated servers with underutilized resources.

Popek and Goldberg explained virtualization through the idea of a virtual machine monitor (VMM). A VMM is a piece of software that has three essential characteristics. First, the VMM provides an environment for programs that is fundamentally identical to the environment on the physical machine; second, programs that run in this environment have very little speed degradation compared with the physical machine; and finally, the VMM has total control of system resources. However, the x86 architecture did not achieve the “classical virtualization” as defined by the Popek and Goldberg virtualization requirements, and binary translation of the guest kernel code was used instead. This is because x86 OSes are designed to run directly on the physical hardware and presume that they control the computer hardware. Virtualization of the x86 architecture has been accomplished through either full virtualization or paravirtualization. Both create the illusion of physical hardware to achieve the goal of OS independence from the hardware but present some trade-offs in performance and complexity. Creating this illusion is done by placing a virtualization layer under the OS to create and manage the VM. Full virtualization or paravirtualization will be discussed later in this chapter. Both Intel and AMD have now introduced architecture that supports classical virtualization.
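For the case where the Popek and Goldberg condition does hold, trap-and-emulate reduces to a simple control loop: unprivileged instructions run directly, and sensitive ones trap into the VMM, which emulates them against the VM's virtual state. The sketch below is a toy illustration with invented instruction names, not x86 semantics or binary translation.

```python
# Toy trap-and-emulate loop: unprivileged "instructions" run directly, while
# sensitive ones trap into a VMM handler that emulates them against the VM's
# virtual state. Instruction names and state fields are invented.
def vmm_emulate(vm_state: dict, instr: str) -> None:
    if instr == "cli":                         # guest wants interrupts off
        vm_state["virtual_if"] = 0             # only the virtual flag changes
    elif instr == "out_port":
        vm_state["io_log"].append("emulated device write")
    else:
        raise NotImplementedError(instr)

def run_guest(program: list[str]) -> dict:
    sensitive = {"cli", "out_port"}            # would trap on real hardware
    vm_state = {"virtual_if": 1, "acc": 0, "io_log": []}
    for instr in program:
        if instr in sensitive:
            vmm_emulate(vm_state, instr)       # trap -> VMM -> resume guest
        elif instr == "inc_acc":
            vm_state["acc"] += 1               # runs at native speed
    return vm_state

print(run_guest(["inc_acc", "cli", "out_port", "inc_acc"]))
# {'virtual_if': 0, 'acc': 2, 'io_log': ['emulated device write']}
```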

Note

To facilitate virtualization, a hypervisor is used. The hypervisor controls how a computer's processors and memory are accessed by the guest operating systems. A hypervisor, or VMM, is a virtualization platform that allows more than one OS to run on a host computer at the same time.

For a very in-depth explanation of virtualization, there are several published papers explaining the highly technical details of how virtualization works. For example, Adams and Agesen (2006) from VMware have written “A Comparison of Software and Hardware Techniques for x86 Virtualization.” The current associated link is in the reference section at the end of this chapter.

In a physically partitioned system, more than one OS can be hosted on the same machine. The most common example of this type of environment is a dual-boot system where Microsoft and Linux OSes coexist on the same system. Each of these partitions is being supported by a single OS. VM software products allow an entire stack of software to be encapsulated in a container called a VM. The encapsulation starts at the OS and runs all the way up to the application level. This type of technology has the capability to run more than one VM on a single physical computer provided that the computer has sufficient processing power, memory, and storage. Individual VMs are isolated from one another to provide a compartmentalized environment. Each VM contains its own environment just like a physical server and includes an OS, applications, and networking capabilities. The VMs are managed individually similar to a physical environment. Unlike a physically partitioned machine, VM products allow multiple OSes to exist on one partition.

Virtualizing Operating Systems

Virtualized OSes can be used in a variety of ways. These environments allow the user to play with questionable or malicious software in a sandbox-type environment. For example, VMs can be used to see how different OSes react to an attack or a virus. It can give the user access to a Linux environment without having to dual boot the laptop or PC. Finally, it allows an investigator to mount a suspect environment to see the environment just as the suspect used it. This can be helpful in presenting cases to a judge or jury. Showing the environment can have a big impact, especially if it can convey the content in a manner that the judge and jury would understand. Booting up a machine that has a pornographic background gets the point across much faster and clearer in the courtroom and in litigation conferences. Figure 1.3 shows the concept behind virtualizing OSes.

Figure 1.3. Virtualization Concepts

Virtualizing Hardware Platforms

Hardware virtualization, sometimes called platform or server virtualization, is executed on a particular hardware platform by host software. Essentially, it hides the physical hardware. The host software that is actually a control program is called a hypervisor. The hypervisor creates a simulated computer environment for the guest software that could be anything from user applications to complete OSes. The guest software performs as if it were running directly on the physical hardware. However, access to physical resources such as network access and physical ports is usually managed at a more restrictive level than the processor and memory. Guests are often restricted from accessing specific peripheral devices. Managing network connections and external ports such as USB from inside the guest software can be challenging. Figure 1.4 shows the concept behind virtualizing hardware platforms.

Figure 1.4. Hardware Virtualization Concepts

Server Virtualization

Server virtualization is actually a subset of hardware virtualization. This concept is most prominently found in data centers. It is mostly relied on as a power-saving measure for enterprises seeking to implement cost-effective data centers and to utilize the increased hardware capability of servers. In server virtualization, many virtual servers are contained on one physical server. This configuration is hidden from server users. To the user, the virtual servers appear exactly like physical servers.

Note

A provided software platform is used to divide one physical server into multiple isolated VMs. The VMs are sometimes called virtual private servers (VPSes) or virtual dedicated servers, but depending on the virtualization vendor and platform, they can also be known as guests, instances, containers, or emulations. There are three popular approaches to server virtualization: the VM model, the paravirtual machine model, and virtualization at the OS layer.

Server virtualization has become part of an overall virtualization trend in enterprise environments, which also includes storage and network virtualization along with management solutions. Server virtualization can be used in many other enterprise scenarios including the elimination of server sprawl, more efficient use of server resources, improved server availability, and disaster recovery, as well as in testing and development. This trend is also part of the autonomic computing development, in which the server environment will be able to manage itself based on perceived activity. Chapter 11, “Visions of the Future: Virtualization and Cloud Computing,” discusses the autonomic computing concept.

URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000011

Which of the following can manage OS and application as a single unit encapsulating them?

Virtualization benefits: create various IT services, such as Infrastructure as a Service, Software as a Service, and Platform as a Service; manage the operating system (OS) and application as a single unit by encapsulating them into virtual machines.

What manages OS and application as a single unit by encapsulating them into virtual machines?

VirtualCenter is virtual infrastructure management software that centrally manages an enterprise's virtual machines as a single, logical pool of resources.

What is the software that creates and manages a virtual machine?

A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and runs virtual machines (VMs).

What runs inside a virtual machine OS and application?

Containers run on the underlying operating system, while VMs have their own operating system using hardware VM support. Hypervisors manage VMs, while a container system provides services from the underlying host OS and isolates the applications using virtual memory hardware.