Which of the following is an exploit in which malware allows the virtual OS to interact directly with the hypervisor?

Investigating Live Virtual Environments

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Virtual Environments

Virtualized environments are becoming more and more common, from the server to the desktop. In a live acquisition, depending on the tools used, the virtual environment may or may not be captured. Consider a scenario in which an organization uses an enterprise solution that monitors users' workstations through an installed program such as an applet. The intent of this environment is to give system administrators the ability to monitor target machines on the network. This is accomplished by pushing small surveillance programs from a central server onto a target machine without alerting the user; a silent mode allows the program to run without detection. The applet is part of a larger suite of forensic tools. Although the applet can be pushed to the user's workstation, if the user runs a virtual environment that uses the host network adapter, traffic can be monitored, but the applet may not be able to be pushed into that environment and may show only host activity. In late 2007, this theory was tested with several of the commercial tools available. Most of the tools could not be pushed to the virtual environment when it was given its own IP address. The negative results ranged from failing to install at all to the famous Microsoft blue screen of death, which became a regular occurrence in the experiment. The most promising result came from an applet that recognized the environment was virtual and displayed a message stating that it could not install there. We suspect these issues are most likely the result of the stealthy way applets are designed to work and of how the hypervisor interacts with the host computer.

Physical installation of the applet in the virtual environment was also tested. The results were somewhat more successful: some of the applets installed, some did not, and one reported that it could not install because the environment was virtual.

As our tools advance to account for virtual environments, the likelihood of capturing the required evidence from these environments will increase significantly. The recent developments in managing, provisioning, and monitoring virtual machines (VMs) give investigators more concrete places to find evidence. In the meantime, any organization that combines forensic monitoring with virtualized environments should check with the software vendor about the tool's capability to monitor this environment. If building VMs for desktop distribution, installing the applet inside the environment will probably prove successful, so that the machine can be monitored in the same way as a physical machine, provided the tool can run on that platform.

Another area of interest is tracking the applet inside the VM from machine to machine and ensuring the VM can still be monitored when employees copy it to a thumb drive or take it home. When a VM is added to or removed from a work environment, it doesn't set off the metal detector. Methods are now being developed to detect rogue VM environments; once a VM is detected, an applet can be pushed into it.

URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000060

Data Centers: A Concentration of Information Security Risk

Carl S. Young, in Information Security Science, 2016

Introduction

The trend in information storage, management, and accessibility is toward the use of Cloud services. Moreover, the use of virtualization by these Cloud services increases the concentration of risk. Although virtualization has definite security benefits, specific vulnerabilities exist and should at least be understood.

In the traditional server architecture, there is one piece of hardware supporting a single instantiation of an OS or application. For example, a corporate email server might be running Windows/Microsoft Exchange. Why is this condition an issue? A software application like Exchange is estimated to use 15% of the processing capacity of a server. This leaves 85% of the processing capacity unused. Virtualization helps to address this inherent inefficiency.

In a virtualized environment, a layer of software known as a hypervisor is inserted between the hardware and the OS. The hypervisor allows multiple OS/application servers, also called VMs or “guests,” to exist on the same physical hardware. This increases utilization of the hardware's processing capacity, leading to enhanced resource efficiency. Fig. 15.3 shows the architecture of a virtual environment.

Figure 15.3. Full virtualization architecture.

The hypervisor manages guest OS access to hardware, for example, CPU, memory, and storage. The hypervisor partitions these resources so that each guest OS can access its own resources but cannot access other guest OSs' resources or any resources not allocated for virtualization.

Relevant attack vectors in this context might include infecting a specific guest OS file or inserting malicious code into guest OS memory. The isolation of guests is one of the principal security benefits of virtualization, as it is designed to prevent unauthorized access to resources via partitioning. Virtual configurations also help prevent one guest OS from injecting malware into another. Partitioning can likewise reduce the threat of denial-of-service (DoS) conditions caused by excess resource consumption by another guest OS coresident on the same hypervisor.

Resources may be partitioned physically or logically, with attendant security and operational pros and cons. In physical partitioning, the hypervisor assigns separate physical resources to each guest OS. These resources include disk partitions, disk drives, and network interface cards (NICs). Logical partitioning may divide resources on a single host or across multiple hosts.

Such hosts might consist of a collection of resources where each element of the collection carries equivalent security implications if compromised, that is, an equivalent impact component of risk. Logical partitioning allows multiple guest OSs to share the same physical resources, such as processors and RAM, with the hypervisor controlling access to those resources.

Physical partitioning sets limits on resources for each guest OS because unused capacity from one resource may not be accessed by any other guest OS. The physical separation of resources may provide stronger security and improved performance than logical partitioning. The security risk profile of virtualized machines is strongly dependent on whether physical or logical partitioning is invoked.
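
To make the distinction concrete, here is a minimal sketch, in Python, of the two partitioning models. The class and guest names are hypothetical illustrations, not an actual hypervisor API.

```python
# Toy model contrasting physical and logical partitioning of host RAM.
# All names (Hypervisor, Guest, etc.) are hypothetical, for illustration only.

class Guest:
    def __init__(self, name, reserved_gb=0):
        self.name = name
        self.reserved_gb = reserved_gb  # used only under physical partitioning
        self.in_use_gb = 0

class Hypervisor:
    def __init__(self, total_ram_gb, mode="logical"):
        self.total_ram_gb = total_ram_gb
        self.mode = mode                # "physical" or "logical"
        self.guests = []

    def add_guest(self, guest):
        if self.mode == "physical":
            committed = sum(g.reserved_gb for g in self.guests)
            if committed + guest.reserved_gb > self.total_ram_gb:
                raise RuntimeError("not enough dedicated RAM for " + guest.name)
        self.guests.append(guest)

    def allocate(self, guest, gb):
        if self.mode == "physical":
            # Dedicated slice: a guest may never exceed its own reservation,
            # even if other guests' reservations sit idle.
            if guest.in_use_gb + gb > guest.reserved_gb:
                raise RuntimeError(guest.name + " exceeded its dedicated slice")
        else:
            # Shared pool: any guest may claim free capacity, so one noisy
            # guest can starve the others (the DoS risk noted above).
            in_use = sum(g.in_use_gb for g in self.guests)
            if in_use + gb > self.total_ram_gb:
                raise RuntimeError("host RAM exhausted")
        guest.in_use_gb += gb

hv = Hypervisor(total_ram_gb=64, mode="physical")
web = Guest("web", reserved_gb=16)
db = Guest("db", reserved_gb=32)
hv.add_guest(web)
hv.add_guest(db)
hv.allocate(db, 32)      # fine: within its dedicated slice
# hv.allocate(web, 20)   # would fail: web's slice is capped at 16 GB
```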

As noted earlier, the hypervisor does the heavy lifting in terms of allocating CPU time, etc., across the coresident guest OSs. This configuration requires less hardware to support the same number of application servers. The net result is less money spent on physical servers and supporting hardware as well as the co-location of multiple OSs and applications.

Consider an organization that requires 12 application servers to support its operations. In the traditional model, the organization would purchase 12 physical systems and bear the associated costs, including hardware, OSs, and supporting equipment.

If a properly configured virtual server could support 4 application servers, the organization would purchase 3 systems to handle the 12 application servers. The organization would need to purchase the OSs and VMware software, and would probably want to purchase shared storage to leverage other benefits of virtualization.
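
As a quick check of that arithmetic, here is a minimal sketch; the per-server dollar figure is an assumption for illustration, not a value from the text.

```python
import math

# Worked version of the consolidation example above. The dollar figures
# are illustrative assumptions, not values from the chapter.
app_servers = 12
vms_per_host = 4                     # capacity of one properly configured host
hosts_needed = math.ceil(app_servers / vms_per_host)        # -> 3 systems

cost_per_physical_server = 8_000     # assumed hardware cost per system
traditional_cost = app_servers * cost_per_physical_server   # 12 systems
virtualized_cost = hosts_needed * cost_per_physical_server  # 3 systems

print(f"hosts needed: {hosts_needed}")
print(f"hardware saved: ${traditional_cost - virtualized_cost:,}")
# OS licensing, hypervisor software (e.g., VMware), and shared storage
# must still be added to the virtualized total, as noted above.
```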

URL: https://www.sciencedirect.com/science/article/pii/B9780128096437000152

Henry Dalziel, in How to Defeat Advanced Malware, 2015

4.1 Desktop virtualization does not secure the endpoint

In recent years, the growth of desktop virtualization has led to new challenges in endpoint protection. Agents that are deployed on physical Windows desktops do not function well in virtual desktops hosted on a hypervisor. Endpoint Protection Platform (EPP) suites are disk I/O heavy, and on a server running scores of VMs, this leads to collapse of the storage infrastructure and low VM/server density. As a result, each of the major vendors has had to rearchitect its EPP suite for virtualized environments. More importantly, however, it has led to the realization that the virtual infrastructure vendor has a key role to play in endpoint protection, since only the hypervisor has absolute control over all system resources: CPU, memory, storage, and network I/O, for all guests on the system.

Since all products for virtualized environments are in their earliest stages of development, the security of mission-critical workloads or virtual desktops on virtual infrastructure is weak: every compromise that is possible on a physical desktop can be achieved on a virtual one. Of note is a recent NIST study1 in the area of security for fully virtualized workloads, which observes: “Migrating computing resources to a virtualized environment has little or no effect on most of the resources’ vulnerabilities and threats.”

Virtualization technology, however, will be the key to the delivery of the next generation of security, since a hypervisor can provide a new (more secure) locus of execution for security software. The hypervisor has control over all system resources (CPU, memory, and all I/O) and is intimately involved in the execution of all guest VMs, giving it an unparalleled view of system state and a unique opportunity to provide powerful insights into the security of the system overall. Since the hypervisor relies on a much smaller code base than a full OS, it also has a much smaller attack surface. Finally, it has an opportunity to contain, within the VM container, malware that does successfully penetrate a guest. Ultimately, the hypervisor provides a new, highly privileged runtime environment with an opportunity to provide greater control over endpoint security. Bromium is the only vendor to specifically exploit virtualization to both protect endpoints and detect new attacks.

URL: https://www.sciencedirect.com/science/article/pii/B9780128027318000047

Fuzzing

In Virtualization for Security, 2009

Publisher Summary

This chapter covers how virtualized environments can significantly increase the efficiency of fuzzing. Using scripted snapshots, the reset of an environment can be done in a matter of seconds instead of minutes. Using the debugging features of a virtualized environment to monitor the application can provide an ideal setting for hard-to-monitor applications. In addition, it is possible to run multiple instances of the same application in parallel across multiple hardware platforms to increase the speed with which an application can be tested in an automated fashion. Virtualization has proven ideal for resetting the environment to its initial state before any malformed data has been sent. Without virtualization, this can involve restarting the application or, even worse, initiating a reboot just to get to a state where the next test can be performed. In addition, monitoring the application without interfering with it can be a challenge: some applications attempt to prevent debuggers from observing their behavior. While these protections can be defeated or bypassed, doing so can be an involved process of application modification and research.
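
As an illustration of the scripted-snapshot reset described above, here is a minimal sketch that drives VMware's vmrun command-line utility from Python; the .vmx path, snapshot name, and send_test_case hook are hypothetical placeholders.

```python
import subprocess

# Minimal snapshot-reset loop for fuzzing a target running inside a VM.
# The .vmx path, snapshot name, and send_test_case() are placeholders;
# vmrun ships with VMware Workstation.
VMX = "/vms/target/target.vmx"   # hypothetical VM location
SNAPSHOT = "clean-baseline"      # snapshot taken before any fuzzing

def vmrun(*args):
    subprocess.run(["vmrun", "-T", "ws", *args], check=True)

def send_test_case(case_id):
    # Placeholder: deliver one malformed input to the target application
    # (e.g., over the network) and report whether it crashed.
    return False

for case_id in range(1000):
    # Revert to the pristine snapshot in seconds instead of rebooting,
    # then power the VM back on for the next test.
    vmrun("revertToSnapshot", VMX, SNAPSHOT)
    vmrun("start", VMX, "nogui")
    if send_test_case(case_id):
        print(f"case {case_id} crashed the target; VM left as-is for triage")
        break
```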

URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000086

Configuring the Virtual Machine

In Virtualization for Security, 2009

USB Devices

USB disks can also be used for storage in a virtualized environment. In fact, most USB devices can be connected to a virtual machine and function very well. To connect a USB device to the virtual machine in VMware Workstation, choose the menu options VM > Removable Devices > USB Devices (see Figure 4.7). A list of USB devices known to the system will be displayed, and devices already connected to the virtual machine will have a check mark next to them. Clicking an unconnected device connects it to the virtual machine (note that this disconnects it from the host rather abruptly), and clicking a checked item disconnects it. Devices should be disabled or unmounted at the operating-system level before being disconnected from a machine, in much the same way as they should be before being removed in the physical world.

Figure 4.7. Connecting a USB Device

If you are running a virtual machine from a removable disk, do not attempt to connect that device to the virtual machine. The virtual machine will likely crash because its source files are no longer available.
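
A small sketch of a sanity check for that caveat; the paths are hypothetical examples, and comparing os.stat device IDs is one simple way to detect that a VM's files live on the very device about to be connected.

```python
import os

# Guard against connecting the disk a VM runs from (see warning above).
# Paths are hypothetical examples.
def same_device(path_a, path_b):
    # True if both paths live on the same filesystem/device.
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev

vmx_path = "/media/usb0/vms/test/test.vmx"   # VM stored on a removable disk
usb_mount = "/media/usb0"                    # mount point of the USB device

if same_device(vmx_path, usb_mount):
    print("Refusing: this USB device holds the VM's own source files")
```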

URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000049

Choosing the Right Solution for the Task

In Virtualization for Security, 2009

Security

One of the best ways to improve the performance of your virtualized environment is to move a virtual machine from one host system to another that has more hardware resources available. The movement can be done on demand or dynamically, using the automation features of the virtualization platform. The dynamic movement of virtual machines from one physical host system to another has become the lifeblood of many IT organizations, offering optimal use of hardware resources.

Taking advantage of virtual machine movement has important security implications. Moving a virtual machine running the corporate web site onto the same physical hardware that also processes the company payroll may introduce a policy problem. Proper segmentation and change-control policies must be considered when designing your infrastructure.

URL: https://www.sciencedirect.com/science/article/pii/B9781597493055000025

Security Criteria: Building an Internal Cloud

Vic (J.R.) Winkler, in Securing the Cloud, 2011

Antimalware

The deployment and updating of antimalware software is also important within a virtualized environment. Where operating systems that are prone to viruses are used for virtual servers, an antivirus solution should be deployed and made part of the template VM images before a VM is instantiated. The virus signature files will often need to be updated on at least a daily basis. Setting virus-prone servers to automatically update their signature files every several hours will not entail undue overhead, but it will ensure that the maximum protection against viruses is deployed. Keep in mind that using VMs provides an advantage in reducing the cost to recover from infection: all that is really needed is to stand up a replacement, uninfected VM.
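
As a sketch of the "every several hours" update cadence, here is a minimal Python loop; the updater path and interval are assumptions, and in practice a cron job or the antivirus product's own scheduler would fill this role.

```python
import subprocess
import time

# Minimal signature-update loop for a virus-prone guest. The update
# command is a hypothetical placeholder; real AV products ship their
# own updaters, usually driven by cron or a scheduled task.
UPDATE_CMD = ["/opt/av/bin/update-signatures"]   # hypothetical updater
INTERVAL_HOURS = 4                               # "every several hours"

while True:
    try:
        subprocess.run(UPDATE_CMD, check=True, timeout=600)
    except subprocess.SubprocessError as exc:
        print(f"signature update failed: {exc}")  # alert/log in practice
    time.sleep(INTERVAL_HOURS * 3600)
```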

A better antimalware approach for a cloud computing infrastructure is one where all input is filtered and examined before it reaches a server. Also, in the case of a mission-critical application, one will need to maintain strict control over any changes to the system image and applications. For such applications, you really can't afford a production environment that is constantly subject to per-host virus exposure and remediation. One of the cost savings in cloud computing is the possibility of reducing repeated operations via better IT processes, and management of virus risk is one of them.

URL: https://www.sciencedirect.com/science/article/pii/B9781597495929000075

Architecture

Jeremy Faircloth, in Enterprise Applications Administration, 2014

Virtualization

We touched briefly on the concept of virtualization in Chapter 3. Now it’s time to go into more depth on virtualization and how it affects technical architecture. Virtualization is the concept of creating segmented virtual resources out of a larger physical resource. This very high-level definition is important as virtualization can play a role with almost any physical resource including disk, network, server, memory, or processor resources. In the interest of simplicity, we’re going to focus on the concept of virtualization at the server level and discuss how this architecture works and how it applies to enterprise applications.

Server virtualization involves taking a physical server, installing a hypervisor on it (the resulting system is known as the host machine), and then creating virtual machines, known as guest machines. There are two types of hypervisors, Type I and Type II. A Type I hypervisor, also known as a bare-metal hypervisor, is installed directly on the physical hardware of a server as an operating system and is the first layer on top of that hardware. A Type II hypervisor, also known as a hosted hypervisor, is installed within another operating system running on a server. In this scenario, the server's operating system is the first layer, and the hosted hypervisor is the second layer on top of the hardware. Once the hypervisor is installed, guest machines are created on top of it. A guest machine sits on the second layer above the hardware in a bare-metal implementation, or on the third layer with a hosted hypervisor.

A number of different options exist in the marketplace for hypervisors. Type I hypervisors include VMware ESXi/vSphere, Citrix XenServer, KVM, and Microsoft Hyper-V. Type II hypervisors include Parallels, Virtual Machine Manager, VMware Player/Workstation, and VirtualBox. These hypervisors run directly on the server's physical hardware or on top of a host operating system, depending on the hypervisor type, and provide an interface that allows the administrator to build out the virtual machine infrastructure. This infrastructure can include virtual networks as well as virtual servers.

The hierarchy of a virtualized server starts at the physical server, moves into the hypervisor, and then into various virtual servers within that hypervisor. Figure 6.9 shows how this virtual architecture looks when visualized.

Figure 6.9. Virtual server architecture.

As you can see in Figure 6.9, each virtual machine has its own allocated set of processors, memory, network cards, and disk. From the virtual machine perspective, these hardware resources are fully “owned” by the virtual machine operating system and free to use as it sees fit. In reality, the resources are allocated on an as-needed basis by the hypervisor and shared across all virtual machines that are being run within the context of the hypervisor.

While some hypervisors allow you to dedicate specific processors, memory, or other resources to a specific virtual machine, the largest gains in resource utilization typically come from sharing resources on an as-needed basis across many virtual machines. This takes advantage of the fact that, in most cases, all of the virtual machines will not be 100% utilized all of the time. The gaps in which one virtual machine is not using its share of a hardware resource allow that resource to be allocated to other virtual machines to run their processes. This lets you host a larger number of virtual machines on physical hardware than would otherwise be possible without virtualization technologies.
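
A back-of-the-envelope sketch of that sharing argument, with illustrative utilization figures that are assumptions rather than measurements:

```python
# Estimate how many VMs can share one host when their busy periods
# rarely overlap. All figures are illustrative assumptions.
host_cpu_cores = 32
vcpus_per_vm = 4
avg_utilization = 0.20     # each VM is busy ~20% of the time on average

# With dedicated allocation, capacity is bounded by peak demand:
dedicated_fit = host_cpu_cores // vcpus_per_vm            # 8 VMs

# With shared, as-needed allocation, capacity is bounded (roughly)
# by average demand, with headroom kept for bursts:
headroom = 0.75            # target: keep average load under 75% of the host
shared_fit = int(host_cpu_cores * headroom / (vcpus_per_vm * avg_utilization))

print(f"dedicated: {dedicated_fit} VMs, shared: {shared_fit} VMs")  # 8 vs 30
```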

In the past, virtualization was isolated to the realm of development and testing environments. However, over the years as virtualization technologies have improved, more and more companies are finding benefits in using virtualization within their production environments. There are many benefits to virtualization including reduced cost, reduced maintenance work, reduced energy consumption, more efficient use of resources, etc. These benefits have helped drive the tremendous growth of virtualization over the last several years and will continue to drive its growth over time.

There are, naturally, some downsides to using virtualization as well. Any additional software running on a physical machine consumes some resources as overhead, and this is true of hypervisors too. When a hypervisor is in use, some percentage of machine resources goes just to operating the hypervisor and is therefore unavailable to the virtual machines running under it. Also, since system resources are shared, processor time may not be available when a virtual machine needs it if another virtual machine is already consuming that resource. This is an inherent drawback of sharing a finite pool of resources, and it takes some planning and consideration to compensate for.

The management of virtual machines does take some skill and expertise especially in larger virtualized environments. Many organizations find that they do not necessarily have staff with the required skills in house and must either train employees on virtualization technologies, hire the appropriately skilled personnel, or contract out the work. This has led to further specialization within information technology where experts on specific virtualization technologies gain certifications on that technology and provide their knowledge and expertise to companies seeking to gain the benefits associated with virtualization.

Tips & Tricks

Virtualization Versus “Cloud”

One of the most predominant topics in information technology as of the time of this writing is cloud computing. It is important to clarify the difference between virtualization and cloud computing in order to discuss the two topics clearly. Virtualization is the segmentation of physical resources into smaller virtual resources. Cloud computing, which we'll discuss in detail later, focuses on a complete abstraction between backend infrastructure resources and operating system/application resources. Virtualization is one of the technologies used to allow for this abstraction, but the technologies and concepts behind cloud computing become more complex based on its goals. To keep it simple: virtualization is the segmentation of physical resources into virtual resources, while cloud computing is the abstraction of physical infrastructure away from operating systems and applications, using virtualization as one method of accomplishing that abstraction.

Let's go back to the technical architecture for FLARP and see whether virtualization technologies can apply to this situation. If we look at the Service Orchestration web server and application server, we can see that they're pretty small servers compared to the others. Considering that they will communicate directly with each other a substantial amount, that communication could probably be sped up by putting them on the same physical host. In addition, their resource utilization will likely be sequential: a call is first made to the web server (consuming web server resources), which subsequently calls the application server (consuming application server resources). These factors make virtualization technologies a good fit for this part of our enterprise application, as reflected in the diagram shown in Figure 6.10.

Figure 6.10. FLARP technical architecture with virtualization.

Some of the other areas in which to consider virtualization are the web server and application server layers of the FLARP application tiers. However, based on the sizing required for these tiers, virtualization may not be a good fit unless the physical hosts are very large. In an environment where large physical hosts are available for hosting virtual machines (which is becoming more common), this may be a viable option. For now, however, let's assume that we're using virtualization only for the Service Orchestration component of the application.

URL: https://www.sciencedirect.com/science/article/pii/B9780124077737000065

Use Case Driven Security Threat Analysis

Zonghua Zhang, Ahmed Meddahi, in Security in Network Functions Virtualization, 2017

2.1.1 Overall description

The NFV infrastructure aims to provide the capability, resources, and functionality for building a virtualized environment in which network functions can be executed. This NFVIaaS approach can greatly expand a carrier's coverage in terms of locations, providing and maintaining services at large scale while reducing or avoiding investment in physical network assets. It also significantly reduces the cost and complexity of deploying new hardware or leasing fixed services.

NFVIaaS provides computing capabilities comparable to an IaaS cloud computing service as a runtime execution environment, as well as supporting dynamic network connectivity services that may be considered NaaS (Networking as a Service). Therefore, the architecture of this use case combines the IaaS and NaaS models as key elements in order to provide network services within the NFV infrastructure. Service providers can either use their own NFVI/cloud computing infrastructure or leverage another service provider's infrastructure to deploy their own network services (VNFs). With NFVIaaS, the computing nodes will be located in NFVI-PoPs such as central offices, outside plants, and specialized pods, or embedded in other network equipment such as mobile devices. The physical location of the infrastructure is largely irrelevant for cloud computing services, but many network services have a certain degree of location dependence.

To better understand how NFVIaaS can be performed, we may refer to Figure 2.1, which illustrates an NFVIaaS supporting cloud computing applications, as well as VNF instances, from different service providers. As the figure shows, service provider 2 can run VNF instances on the NFVI/cloud infrastructure of service provider 1 in order to improve service resilience, reduce latency for a better user experience, and comply with regulatory requirements. Service provider 1 will require that only authorized entities can load and operate VNF instances on its NFV infrastructure. The set of resources, e.g. computing, hypervisor, network capacity and binding to network termination, that service provider 1 makes available to service provider 2 would be constrained. Meanwhile, service provider 2 is able to integrate its VNF instances running on service provider 1's NFV infrastructure into an end-to-end network service instance, along with VNF instances running on its own NFV infrastructure. Because the two NFVIaaS offerings are distinct and independent, the failure of one will not affect the other.

Figure 2.1. NFV infrastructure as a service. For a color version of this figure, see www.iste.co.uk/zhang/networks.zip

Moreover, non-virtualized network functions can coexist with VNFs in this use case. Likewise, virtualized network functions from multiple service providers may coexist within the same NFV infrastructure. The NFV infrastructure also provides appropriate isolation between the resources allocated to different service providers, so that VNF instance failures or resource demands from one service provider will not affect the operation of another service provider's VNF instances.

To summarize, this model provides basic storage and computing capabilities as standardized services over the network, in which the storage and network equipment are pooled and made available to users. The capabilities provided to users are processing, storage, networks, and other fundamental computing resources, with which users are able to deploy and run arbitrary network services. In doing so, users do not manage or control the underlying infrastructure, but they are capable of controlling their deployed applications and can select networking components as needed to accomplish their tasks.

URL: https://www.sciencedirect.com/science/article/pii/B9781785482571500029

Communitywide Area Network and Mobile ISP

Colin Pattinson, ... Richard Braddock, in Green Information Technology, 2015

General Requirements

To reiterate, the main goal of the project was to provide gateway services for mobile virtualized environments that could be repurposed for a mobile community not reliant on wired backhaul. The implemented solution was made possible with minor software-level changes and the addition of WLAN functionality. Including this level of flexibility allowed the service to completely power down the central virtualization server and operate solely as an mISP (Braddock and Pattinson, 2009; Pattinson et al., 2010). However, the mISP infrastructure needed to comply with the electrical as well as thermal constraints of the target locale. Additionally, this infrastructure comprised appropriate hardware components and a software topology that fostered a convergence of services that would traditionally have been abstracted across multiple physical or virtual servers. The sample mISP deployment with caching considerations is shown in Figure 13.3. A list of the technical capabilities of the specialized platform follows:

Figure 13.3. A sample mISP deployment with caching considerations.

Aggregating multiple cellular-based Internet connections to provide a redundant high-speed backhaul link

Incorporating a wired backhaul link when within range of such a service

Optimizing transparent link/bandwidth

Acting as a wireless gateway to authenticated or trusted nodes as well as performing this authentication via a Web-based interface

Encapsulating session-level accounting and reporting, thus nullifying legal concerns that have plagued the wISP industry in developing nations (Mitta, 2009)

Incorporating a high-powered 802.11g radio interface (details are found in Cisco, n.d.-b) when acting as a mobile learning environment

Provisioning a secure host OS on which to house the software payload

Providing adequate processing power to allow further server-side applications to be integrated as necessary

Providing local storage for server-side applications, possibly with precautions for further data defense and/or security

Allowing remote diagnosis and management services to interact at all stages of the design, whether they are traditional network metrics or more environmental aspects such as current climate conditions or internal state of the energy source(s)

Demonstrating a clear methodology for powering all services on and off the grid consistently in an autonomous fashion, including providing renewable energy collection

Using FOSS at every stage to meet user needs while maintaining zero software expenditure

URL: https://www.sciencedirect.com/science/article/pii/B9780128013793000139

Which of the following is a network security service that filters malware from user-side internet connections using different techniques?

A secure web gateway (SWG). SWGs use URL filtering, application control, data loss prevention, HTTPS inspection, and antivirus protection.

What is it called when a virtual machine is isolated from the physical network so that testing can be performed without impacting the production environment?

Isolating a virtual machine from the physical network to allow testing without impacting the production environment is known as sandboxing.

Which of the following is a network virtualization solution provided by Microsoft?

Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software version of a computer, called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and programs.

Which of the following is a network device that is deployed in the cloud to protect against unwanted access to a private network?

A firewall. A firewall is a security device (computer hardware or software) that can help protect your network by filtering traffic and blocking outsiders from gaining unauthorized access to the private data on your computer.