What combines networks by splitting the available bandwidth into independent channels that can be assigned in real time to a specific device?

SDN in the Data Center

Paul Göransson, ... Timothy Culver, in Software Defined Networks (Second Edition), 2017

8.3.2 Network Virtualization Using GRE

The NVGRE technology was developed primarily by Microsoft, with other contributors including Intel, Dell, and Hewlett-Packard. Some of the main characteristics of NVGRE are:

NVGRE utilizes MAC-in-IP tunneling.

Each virtual network or overlay is called a virtual layer two network.

NVGRE virtual networks are identified by a 24-bit Virtual Subnet Identifier, allowing for up to 2^24 (about 16 million) networks.

NVGRE tunnels, like GRE tunnels, are stateless.

NVGRE packets are unicast between the two NVGRE end points, each running on a switch. NVGRE utilizes the header format specified by the GRE standard [5, 6].

Fig. 8.4 shows the format of an NVGRE packet. The outer header contains the MAC and IP addresses appropriate for sending a unicast packet to the destination switch, acting as a virtual tunnel end point, just like VXLAN. Recall that for VXLAN the IP protocol value was UDP. For NVGRE, the IP protocol value is 0x2F, which means GRE. GRE is a separate and independent IP protocol in the same class as TCP or UDP. Consequently, as you can see in the diagram, there are no source and destination TCP or UDP ports. The NVGRE header follows the outer header and contains an NVGRE Subnet Identifier of 24 bits in length, sufficient for about 16 million networks.


Fig. 8.4. NVGRE packet format.
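
To make the layout concrete, the following is a minimal Python sketch that packs the GRE header as NVGRE uses it: the key-present bit set, the protocol type 0x6558 (Transparent Ethernet Bridging, since the payload is an inner Ethernet frame), and the 24-bit Virtual Subnet Identifier carried in the upper bits of the 32-bit GRE key field. The constant and function names are illustrative rather than taken from this chapter, and the outer Ethernet/IP headers (IP protocol 0x2F) are assumed to be added separately.

```python
import struct

GRE_FLAGS_KEY_PRESENT = 0x2000   # C=0, K=1, S=0, version 0
ETHERTYPE_TEB = 0x6558           # Transparent Ethernet Bridging (inner Ethernet frame)

def build_nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Pack the 8-byte GRE header used by NVGRE.

    The 32-bit key field carries the 24-bit Virtual Subnet Identifier in
    its upper bits and an 8-bit flow ID in its lower bits.
    """
    if not 0 <= vsid < 2**24:
        raise ValueError("VSID must fit in 24 bits")
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", GRE_FLAGS_KEY_PRESENT, ETHERTYPE_TEB, key)

# Example: virtual subnet 5001, default flow ID.
print(build_nvgre_header(5001).hex())   # 2000655800138900
```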


URL: https://www.sciencedirect.com/science/article/pii/B9780128045558000089

Virtualization

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

3.3.2.2 Network virtualization

Network virtualization combines hardware appliances and specific software for the creation and management of a virtual network. Network virtualization can aggregate different physical networks into a single logical network (external network virtualization) or provide network-like functionality to an operating system partition (internal network virtualization). The result of external network virtualization is generally a virtual LAN (VLAN). A VLAN is an aggregation of hosts that communicate with each other as though they were located under the same broadcasting domain. Internal network virtualization is generally applied together with hardware and operating system-level virtualization, in which the guests obtain a virtual network interface to communicate with. There are several options for implementing internal network virtualization: the guest can share the host's network interface and use Network Address Translation (NAT) to access the network; the virtual machine manager can emulate, and install on the host, an additional network device together with its driver; or the guest can have a private network that connects it only to the host.
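
As one concrete (and hypothetical) illustration of internal network virtualization, the sketch below uses the libvirt Python bindings to define two virtual networks on a QEMU/KVM host: a NAT-mode network, in which guests share the host's connectivity through address translation, and an isolated network that exists only between the host and its guests. The network names, bridge names, and address ranges are made up for the example; this is a sketch of one possible setup, not a prescribed configuration.

```python
import libvirt

NAT_NET_XML = """
<network>
  <name>demo-nat</name>
  <forward mode='nat'/>
  <bridge name='virbr-demo-nat'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.100.10' end='192.168.100.200'/></dhcp>
  </ip>
</network>
"""

ISOLATED_NET_XML = """
<network>
  <name>demo-isolated</name>
  <bridge name='virbr-demo-iso'/>
  <ip address='192.168.200.1' netmask='255.255.255.0'/>
</network>
"""

def define_and_start(conn: libvirt.virConnect, xml: str) -> None:
    net = conn.networkDefineXML(xml)   # persist the network definition
    net.create()                       # start it (creates the host-side bridge)
    net.setAutostart(True)             # bring it up on host boot

if __name__ == "__main__":
    conn = libvirt.open("qemu:///system")
    try:
        define_and_start(conn, NAT_NET_XML)       # guests reach outside via NAT
        define_and_start(conn, ISOLATED_NET_XML)  # private host-guest network
    finally:
        conn.close()
```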


URL: https://www.sciencedirect.com/science/article/pii/B9780124114548000036

Hypervisors, Virtualization, and Networking

Bhanu Prakash Reddy Tholeti, in Handbook of Fiber Optic Data Communication (Fourth Edition), 2013

16.1.2 Virtual networking

Network virtualization is the ability to manage and prioritize traffic in portions of a network that might be shared among different external networks. This ability allows administrators to make more efficient use of network performance, resources, availability, and security. The following virtualization technologies primarily exist at the system level and require hypervisor and Licensed Internal Code support to enable sharing between different operating systems; a brief configuration sketch follows the list:

Virtual IP address takeover: The assignment of a virtual IP address to an existing interface. If one system becomes unavailable, virtual IP address takeover allows for automatic recovery of network connections between different servers.

Virtual Ethernet: With this technology, you can use internal Transmission Control Protocol/Internet Protocol (TCP/IP) communication between VMs.

Virtual local area network (VLAN): A logically independent network. Several VLANs can exist on a single physical switch.

Virtual switch: A software program that allows one VM to communicate with another. It can intelligently direct communication on the network by inspecting packets before passing them on.

Virtual private network (VPN): An extension of a company’s intranet over the existing framework of either a public or private network. A VPN ensures that the data that is sent between the two end points of its connection remains secure.
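
The configuration sketch below grounds two of the items above on a Linux host: it creates a VLAN interface on top of a physical NIC and a software bridge that acts as a simple virtual switch. It is a minimal, assumption-laden example: the interface names and the VLAN ID are hypothetical, root privileges and the iproute2 tools are assumed, and hypervisor-specific equivalents (virtual Ethernet, Licensed Internal Code virtual switches) differ per platform.

```python
import subprocess

def run(*cmd: str) -> None:
    """Run an iproute2 command, raising if it fails."""
    subprocess.run(cmd, check=True)

PARENT_IF = "eth0"   # hypothetical physical interface
VLAN_ID = 100        # hypothetical VLAN

# VLAN: a logically independent network carried over eth0.
run("ip", "link", "add", "link", PARENT_IF,
    "name", f"{PARENT_IF}.{VLAN_ID}", "type", "vlan", "id", str(VLAN_ID))
run("ip", "link", "set", f"{PARENT_IF}.{VLAN_ID}", "up")

# Virtual switch: a software bridge that VM interfaces can be attached to.
run("ip", "link", "add", "br0", "type", "bridge")
run("ip", "link", "set", "br0", "up")
# A VM's tap interface (e.g., tap0, created by the hypervisor) would then be
# enslaved to the bridge:
#   run("ip", "link", "set", "tap0", "master", "br0")
```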


URL: https://www.sciencedirect.com/science/article/pii/B9780124016736000167

Network Virtualization

Gary Lee, in Cloud Networking, 2014

Frame format

The NVGRE frame format shown in Figure 7.8 is very similar to the VXLAN frame format that we described in the last section. Instead of using a UDP header and a VXLAN header, only a GRE header is used, reducing the frame size by a few bytes. The GRE header contains the unique protocol ID (0x6558) for NVGRE frames as well as a 24-bit virtual segment identifier (VSID), that, like VXLAN, can support up to 16M unique tenant subnets. Another difference is that the inner layer 2 header does not contain a VLAN tag, and if one exists, it is removed before the frame is encapsulated. So in this case, the VSID can also be used to segregate multiple virtual network segments for a given tenant.


Figure 7.8. NVGRE frame format.
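
As a small complement to the format description above, the sketch below parses the GRE header fields that NVGRE relies on: it checks the key-present bit and the 0x6558 protocol ID and recovers the 24-bit VSID from the upper bits of the key field. The function name and error handling are illustrative assumptions, and optional GRE fields and the inner frame are ignored.

```python
import struct

ETHERTYPE_TEB = 0x6558   # protocol ID carried in the GRE header for NVGRE frames

def parse_nvgre_vsid(gre_header: bytes) -> int:
    """Return the 24-bit VSID from the first 8 bytes of an NVGRE (GRE) header."""
    flags_version, proto, key = struct.unpack("!HHI", gre_header[:8])
    if proto != ETHERTYPE_TEB:
        raise ValueError(f"not an NVGRE frame (protocol 0x{proto:04x})")
    if not flags_version & 0x2000:       # the key bit must be set for NVGRE
        raise ValueError("GRE key field missing")
    return key >> 8                      # upper 24 bits carry the VSID

print(parse_nvgre_vsid(bytes.fromhex("2000655800138900")))   # 5001
```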


URL: https://www.sciencedirect.com/science/article/pii/B9780128007280000072

5G and beyond networks

Silvia Ruiz, ... Haibin Zhang, in Inclusive Radio Communications for 5G and Beyond, 2021

6.3.4 Virtualized networks

Wireless network virtualization is a key technology for 5G, enabling both operational sustainability and business agility [AMG+16]. Wireless network virtualization requires the abstraction of physical resources such as spectrum, infrastructure, and power into virtual resources. These resources can then be partitioned and allocated to different Virtual Network Operators (VNOs). The authors of [AMG+16] propose a virtualized network architecture in which an infrastructure provider shares the physical resources of a Massive MIMO cell among several VNOs using spatial multiplexing, and orchestrates the interaction between the VNOs and the infrastructure provider through an auction-based mechanism that allocates spatial streams to the VNOs.

Furthermore, the authors of [AAC+17] present a customizable resource virtualization algorithm for multi-user data scheduling in an LTE C-RAN deployment. The algorithm is based on hypervisor-specific dynamic assignment of air-interface resources to the VNOs, using either joint scheduling or per-cell schemes. The objective is to improve the resource allocation mechanism for two scenarios, corresponding to night-time and day-time traffic.

Another contribution, [CB18], presented context-detection methods for energy-efficient operation of 5G networks. It was shown that the context-awareness information considered has an enormous volume. Simulations showed that a reduction in power consumption is possible if intelligent network organization is implemented. The authors suggest that using the correlation between nodes to find clusters in the network reduces the amount of information exchanged in the network. However, node mobility limits the clustering gain, since the correlation in the network is limited in high-velocity scenarios.

Finally, contribution [PKM18] presented the importance and benefits of employing a geolocation spectrum database when deploying a TV white space (TVWS) network, as a vital tool for limiting harmful interference to primary TV spectrum users. This work outlines how a fully operational web-based TVWS system was designed and developed for Cyprus using a web API framework and adhering to the methods defined by CSIR.


URL: https://www.sciencedirect.com/science/article/pii/B9780128205815000122

Dinkar Sitaram, Geetha Manjunath, in Moving To The Cloud, 2012

HP SAN Virtualization Services Platform

HP StorageWorks SAN Virtualization Services Platform (HP SVSP) is a switch-based storage virtualization solution in which an intelligent FC switch runs virtualization functionality using specialized ASICs (Application-Specific Integrated Circuits). Translation of logical to physical addresses and the redirection of I/O are performed in these switches. An out-of-band metadata manager (an appliance) manages the control operations. This is called a split-path architecture.

In this split-path architecture, the intelligent switch splits the data and the control operations in the network. The intelligent switch manages the I/O data path while the metadata control operations are routed to the out of band manager, which could be an appliance. The need for host agents which direct virtual I/O requests to the correct physical storage is thereby eliminated, since this is done transparently in the switch. The appliance performing metadata management has the physical storage visibility and allocates virtual volume mapping. Virtual volumes are presented to hosts as disk drives. On a host I/O to the virtual volume, the virtual volume's logical address is mapped to a physical address, and I/O is sent directly to the storage devices.
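
The split-path design depends on a fast logical-to-physical translation in the data path. The sketch below is a deliberately simplified, hypothetical illustration of that translation step: a virtual volume built from fixed-size extents, each pointing at a back-end array and an offset on it. Real DPM/VSM mapping tables are far richer; this only shows the address lookup described above.

```python
from dataclasses import dataclass

EXTENT_SIZE = 1 << 20   # 1 MiB extents (hypothetical granularity)

@dataclass(frozen=True)
class PhysicalExtent:
    array_id: str    # which back-end storage array holds the extent
    start_lba: int   # byte offset of the extent on that array

class VirtualVolume:
    """Maps a virtual volume's logical addresses onto physical extents."""

    def __init__(self, extent_map: list[PhysicalExtent]):
        self.extent_map = extent_map

    def translate(self, logical_offset: int) -> tuple[str, int]:
        extent = self.extent_map[logical_offset // EXTENT_SIZE]
        return extent.array_id, extent.start_lba + (logical_offset % EXTENT_SIZE)

# A 3 MiB virtual volume spread over two arrays (hypothetical layout).
vol = VirtualVolume([
    PhysicalExtent("arrayA", 0),
    PhysicalExtent("arrayB", 8 * EXTENT_SIZE),
    PhysicalExtent("arrayA", 2 * EXTENT_SIZE),
])
print(vol.translate(EXTENT_SIZE + 4096))   # ('arrayB', 8392704)
```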

Figure 9.11 shows the high-level architecture of HP SVSP. It mainly includes Data Path Modules (or DPMs), which are intelligent switches, and Virtualization Server Managers (or VSMs), which are appliances. The virtualization functionality is performed both by the DPMs and VSMs. The DPM performs real-time parsing of FC frames by examining packets. The DPM gets its virtual-to-physical storage mappings from the VSM. VSM performs data management operations including functionality such as replication and backup. The VSM and DPM coordinate for all management and control path operations without interference in the data path between servers and storage arrays, hence supporting high I/O throughput to storage arrays.


Figure 9.11. HP SAN virtualization platform architecture.

The solution includes replication support as well as snapshots, mirroring and non-disruptive data migration. Also, since I/O traffic flows directly through the ASIC, the latency problem can be made imperceptible to applications without the need to resort to caching.


URL: https://www.sciencedirect.com/science/article/pii/B9781597497251000093

Information Security Essentials for Information Technology Managers

Albert Caballero, in Computer and Information Security Handbook (Third Edition), 2017

Public Cloud

There are many core ideas and characteristics behind the architecture of the public cloud, but possibly the most alluring is the ability to create the illusion of infinite capacity. Whether it is one server or thousands, performance appears the same, with consistent service levels that are transparent to the end user. This is accomplished by abstracting the physical infrastructure through virtualization of the operating system so that applications and services are not locked into any particular device, location, or hardware. Cloud services are also on demand, which is to say that you pay only for what you use, which should drastically reduce the cost of computing for most organizations. Investing in hardware and software that is underutilized and depreciates quickly is not as appealing as leasing a service that, with minimal upfront costs, an organization can deploy as an entire infrastructure.

Server, network, storage, and application virtualization are the core components that most cloud providers specialize in delivering. These different computing resources make up the bulk of the infrastructure in most organizations, so it is easy to see the attractiveness of the solution. In the cloud, provisioning these resources is fully automated and scales up and down quickly. To assess and compare the risk involved in utilizing a provider or service, it is critical for an organization to understand how each provider protects and configures each of the major architecture components of the cloud. Make sure to request that the cloud provider furnish information regarding the reference architecture in each of the following areas of its infrastructure:

Compute: Physical servers, OS, CPU, memory, disk space, etc.

Network: VLANs, DMZ, segmentation, redundancy, connectivity, etc.

Storage: LUNs, ports, partitioning, redundancy, failover, etc.

Virtualization: Hypervisor, geolocation, management, authorization, etc.

Application: Multitenancy, isolation, load-balancing, authentication, etc.

An important aspect of pulling off this type of elastic and resilient architecture is commodity hardware. A cloud provider needs to be able to provision more physical servers, hard drives, memory, network interfaces, and just about any operating system or server application transparently and efficiently. To do this, servers and storage must be provisioned dynamically; they are constantly reallocated to and from different customer environments with minimal regard for the underlying hardware. As long as the service-level agreements for uptime are met and the administrative overhead is minimized, the cloud provider does little to guarantee or disclose what the infrastructure looks like. It is incumbent upon subscribers to ask about and validate the design characteristics of every cloud provider they contract services from. There are many characteristics that define a cloud environment; see Fig. 24.10 for a comprehensive list of cloud design characteristics.


Figure 24.10. Characteristics of cloud computing.

Most of the key characteristics can be summarized in the list that follows [15].

On demand: The always-on nature of the cloud allows organizations to perform self-service administration and maintenance of their entire infrastructure over the Internet, without the need to interact with a third party.

Resource pooling: Cloud environments are usually configured as large pools of computing resources such as CPU, RAM, and storage from which a customer can choose to use or leave to be allocated to a different customer.

Measured service: The cloud brings tremendous cost savings to the end user due to its pay-as-you-go nature; therefore, it is critical for the provider to be able to measure the level of service and resources each customer utilizes.

Network connectivity: The ease with which users can connect to the cloud is one of the reasons why cloud adoption is so high. Organizations today have a mobile workforce, which requires connectivity across multiple platforms.

Elasticity: A vital component of the cloud is that it must be able to scale up as customers demand it. A subscriber may spin up new resources seasonally or during a big campaign and bring them down when no longer needed. It is the degree to which a system can autonomously adapt capacity over time.

Resiliency: A cloud environment must always be available as most service agreements guarantee availability at the expense of the provider if the system goes down. The cloud is only as good as it is reliable so it is essential that the infrastructure be resilient and delivered with availability at its core.

Multitenancy: A multitenant environment refers to the idea that all tenants within a cloud should be properly segregated from each other. In many cases a single instance of software may serve many customers, so for security and privacy reasons it is critical that the provider takes the time to build in secure multitenancy from the bottom up. A multitenant environment focuses on the separation of tenant data in such a way as to take every reasonable measure to prevent unauthorized access to, or leakage of, resources between tenants.

The most significant cloud security challenges revolve around how and where the data is stored, as well as whose responsibility it is to protect it. In a more traditional IT infrastructure or private cloud environment, the responsibility for protecting the data, and who owns it, is clear. When a decision is made to migrate services and data to a public cloud environment, certain things become unclear and difficult to prove and define. The most pressing challenges to assess are [3]:

Data residency: This refers to the physical geographic location where the data stored in the cloud resides. There are many industries that have regulations requiring organizations to maintain their customer or patient information within their country of origin. This is especially prevalent with government data and medical records. Many cloud providers have data centers in several countries and may migrate virtual machines or replicate data across disparate geographic regions causing cloud subscribers to fail compliance checks or even break the law without knowing it.

Regulatory compliance: Industries that are required to meet regulatory compliance requirements such as HIPAA, or security standards such as PCI DSS, typically have a higher level of accountability and security requirements than those that are not. These organizations should take special care in deciding which cloud services to deploy and should verify that the cloud provider can meet or exceed these compliance requirements. Many cloud providers today can provision part of their cloud environment with strict HIPAA or PCI standards enforced and monitored, but only if you ask for it, and at an additional cost, of course.

Data privacy: Maintaining the privacy of users is a high concern for most organizations. Whether for employees, customers, or patients, personally identifiable information is a high-value target. Many cloud subscribers do not realize that when they contract a provider to perform a service, they are also agreeing to allow that provider to gather and share metadata and usage information about their environment.

Data ownership: Many cloud services are contracted with stipulations stating that the cloud provider has permission to copy, reproduce, or retain all data stored on their infrastructure, in perpetuity—this is not what most subscribers believe is the case when they migrate their data to the cloud.

Data protection: This isn't always clear unless it is discussed before engaging the service. Many providers do have security monitoring available but in most cases it is turned off by default or costs significantly more for the same level of service. A subscriber should always validate that the provider can protect the company's data just as effectively, or even more so than the company itself.

If these core challenges with public cloud adoption are not properly evaluated then there are some potential security issues that could crop up. On the other hand, these issues can be avoided with proper preparation and due diligence. The type of data and operations in your unique cloud instance will determine the level of security required.


URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000247

Players in the SDN Ecosystem

Paul Göransson, ... Timothy Culver, in Software Defined Networks (Second Edition), 2017

11.4 Software Vendors

The move toward network virtualization has opened the door for software vendors to play a large role in the networking component of the data center. Some of the software vendors that have become significant players in the SDN space include VMware, Microsoft, Big Switch, as well as a number of startups. We will try to put their various contributions into context in the following paragraphs.

VMware, long a dominant player in virtualization software for the data center, has contributed significantly to the interest and demand for SDN in the enterprise. VMware boldly altered the SDN landscape when it acquired Nicira. VMware’s purchase of Nicira has turned VMware into a networking vendor. Nicira’s roots, as we explained in Section 11.1.1, come directly from pioneers in the Open SDN research community.

VMware’s current offerings include Open vSwitch (OVS) as well as the Network Virtualization Platform (NVP) acquired through Nicira. (NVP is now marketed as VMware NSX.) NVP uses OpenFlow (with some extensions) to program forwarding information into its subordinate OVS switches.

VMware marketing communications claim that SDN via Overlays alone constitutes a complete SDN solution. OpenFlow is their southbound API of choice, but the emphasis is on network virtualization via overlay networks, not on what can be achieved with OpenFlow in generalized networking environments. This is a very logical business position for VMware. The holy grail for VMware is not about academic arguments but about garnering as much of the data center networking market as it can. This is best achieved by promoting VMware’s virtualization strengths and avoiding esoteric disputes about one southbound API’s virtues versus other approaches.

Other vendors and enterprises have begun to see the need to interoperate with the Nicira solution. Although VMware’s solution does not address physical devices, vendors who wish to create overlay solutions that work with VMware environments will likely need to implement these Nicira-specific APIs as well.

While VMware and Cisco were co-definers of the VXLAN [10] standard, they now appear to be diverging. Cisco is focused on their ODL and APIC controller strategy, and VMware on NSX. With respect to tunneling technologies, Cisco promotes VXLAN which is designed for software and hardware networking devices, while VMware promotes both VXLAN and STT. STT [11] has a potential performance advantage in the software switch environment customary for VMware. This advantage derives from STT’s ability to use the server NIC’s TCP hardware acceleration to improve network speed and reduce CPU load.

As a board-level participant in the ONF, Microsoft is instrumental in driving the evolution of OpenFlow. Microsoft’s current SDN initiative and effort have centered on the Azure public cloud project. Similar to VMware, Microsoft has its own server virtualization software, called Hyper-V, and is utilizing SDN via Overlays in order to virtualize the network as well. Their solution uses NVGRE [12] as the tunneling protocol, providing multitenancy and the other benefits described in Section 8.3.2.

Big Switch Networks was founded in 2010 by Guido Appenzeller with Rob Sherwood as CTO of controller technology, both members of our SDN hall of fame. Big Switch is one of the primary proponents of OpenFlow and Open SDN. Big Switch has Open SDN technology at the device, controller, and application levels. Big Switch created an open source OpenFlow switch code base called Indigo. Indigo is the basis for Big Switch’s commercial OpenFlow switch software, which they market as Switch Light. The Switch Light initiative is a collaboration of switch, ASIC and SDN software vendors to create a simple and cost-effective OpenFlow-enabled switch. Big Switch provides a version of Switch Light intended to run as a virtual switch, called Switch Light for Linux, as well as one targeted for the white-box hardware market, called Switch Light for Broadcom. We discuss the white-box switch concept in Section 11.5.

Big Switch provides both open source and commercial versions of an SDN controller. The commercial version is called Big Network Controller, which is based on its popular open source controller called Floodlight. Big Switch also offers complete SDN solutions, notably Big Cloud Fabric, which provides network virtualization through overlays using OpenFlow virtual and physical devices.

There are numerous software startups that are minor players in the SDN space. Both Nicira and Big Switch were startups. Nicira, as we have mentioned, was the target of a major VMware acquisition. Big Switch has received a large amount of venture capital funding. Both of these companies have become major forces on the SDN playing field. There are a number of other SDN software startups that have received varying amounts of funding and in some cases have been acquired. Other than Nicira and Big Switch, however, we do not feel that any of these have yet become major voices in the SDN dialogue. Since they have been influential in terms of attracting much venture capital attention and investment, and may in the future become major players in SDN, we will discuss in Chapter 14 startups such as Insieme, PLUMgrid, and Midokura and their business ramifications for SDN.


URL: https://www.sciencedirect.com/science/article/pii/B9780128045558000119

How Virtualization Happens

Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Network Virtualization

The concept of network or local area network virtualization involves virtualizing Internet Protocol (IP) routing, forwarding, and addressing schemes. Network virtualization provides a way to run multiple networks, each customized to a specific purpose, at the same time over a shared network using virtual IP management and segmentation, but it can also be used in the opposite manner. In other words, network virtualization can be used either to merge multiple physical networks into one virtual network or to logically segment a single physical network into multiple logical networks. Since this type of virtualization allows the interface to bring up and tear down routing services, partitions can be created on the fly for one network without interrupting other services and routing tables on that same interface. This method splits the available bandwidth into independent channels, which can be assigned to a particular device in real time. Vendors such as Cisco and Nortel have devices and implementations that support this type of virtualization.


URL: https://www.sciencedirect.com/science/article/pii/B9781597495578000011

Virtualization

Dijiang Huang, Huijun Wu, in Mobile Cloud Computing, 2018

2.5.2 Virtual Networks

The two most common forms of network virtualization are protocol-based virtual networks (such as VLANs, VPNs, and VPLSs) and virtual networks that are based on virtual devices (such as the networks connecting VMs inside a hypervisor). Several popular virtual networking protocols are presented as follows:

L2TP (Layer 2 Tunneling Protocol) is a tunneling protocol used to support VPNs or as part of the delivery of services by ISPs. It does not provide encryption, confidentiality, or strong authentication by itself; rather, it relies on an encryption protocol that it passes within the tunnel to provide privacy. The entire L2TP packet, including payload and L2TP header, is sent within a User Datagram Protocol (UDP) datagram, and it is common to carry PPP sessions within an L2TP tunnel. IPsec is often used to secure L2TP packets by providing confidentiality, authentication, and integrity; the combination of the two protocols is generally known as L2TP/IPsec.

PPP (Point-to-Point Protocol) is a data link (layer 2) protocol used to establish a direct connection between two nodes. It connects two routers directly without any host or any other networking device in between. It can provide connection authentication, transmission encryption, and compression. PPP is used over many types of physical networks including serial cable, phone line, trunk line, cellular telephone, specialized radio links, and fiber optic links such as SONET. PPP is also used over Internet access connections. Internet service providers (ISPs) have used PPP for customer dial-up access to the Internet, since IP packets cannot be transmitted over a modem line on their own, without some data link protocol.

VLAN (Virtual Local Area Network) is any broadcast domain that is partitioned and isolated in a computer network at the data link layer. VLANs allow network administrators to group hosts together even if the hosts are not on the same network switch. This can greatly simplify network design and deployment, because VLAN membership can be configured through software. To subdivide a network into virtual LANs, one configures the network equipment. Simpler equipment can partition based on physical ports, MAC addresses, or IP addresses. More sophisticated switching devices can mark frames through VLAN tagging, so that a single interconnect (trunk) may be used to transport data for multiple VLANs.

VXLAN (Virtual eXtensible LAN) is a network virtualization technology that attempts to address the scalability problems associated with large cloud computing deployments. It uses a VLAN-like encapsulation technique to encapsulate layer 2 Ethernet frames within layer 4 UDP packets, using 4789 as the default IANA-assigned destination UDP port number. VXLAN endpoints, which terminate VXLAN tunnels and may be either virtual or physical switch ports, are known as VXLAN tunnel endpoints (VTEPs). VXLAN is an alternative to the Generic Routing Encapsulation (GRE) protocol for building private networks as layer 2 tunnels in cloud systems (a small encapsulation sketch follows this list).

Generic Routing Encapsulation (GRE) is a communication protocol used to establish a direct, point-to-point connection between network nodes. Being a simple and effective method of transporting data over a public network, such as the Internet, GRE lets two peers share data they will not be able to share over the public network itself. GRE encapsulates data packets and redirects them to a device that deencapsulates them and routes them to their final destination. This allows the source and destination switches to operate as if they have a virtual point-to-point connection with each other (because the outer header applied by GRE is transparent to the encapsulated payload packet). For example, GRE tunnels allow routing protocols such as RIP and OSPF to forward data packets from one switch to another switch across the Internet. In addition, GRE tunnels can encapsulate multicast data streams for transmission over the Internet.

SSL (Secure Sockets Layer) is a standard security technology for establishing an encrypted link between a server and a client (typically a web server and a browser, or a mail server and a mail client such as Outlook) by encrypting data above the transport layer. The SSL protocol has long been used to encrypt and secure transmitted data; for example, all browsers have the capability to interact with secured web servers using the SSL protocol. However, the browser and the server need what is called an SSL Certificate to be able to establish a secure connection. An SSL Certificate is constructed from a key pair (a public and a private key) together with a certificate that contains the public key, digitally signed using a trusted third party's private key. A client can use the server's certificate to establish an encrypted connection.

IPSec (IP security) is a network protocol suite that authenticates and encrypts the packets of data sent over a network at the IP layer. IPsec includes protocols for establishing mutual authentication between agents at the beginning of the session and negotiation of cryptographic keys for use during the session. IPsec can protect data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host).
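
The encapsulation sketch referenced in the VXLAN entry above follows. It packs the 8-byte VXLAN header (the I flag set, the 24-bit VNI in the upper bits of the second word) and wraps a raw Ethernet frame in a UDP datagram addressed to the default port 4789. The VTEP address, the VNI, and the placeholder frame bytes are hypothetical; a real VTEP would also learn MAC-to-VTEP mappings and handle broadcast, unknown-unicast, and multicast traffic.

```python
import socket
import struct

VXLAN_PORT = 4789             # IANA-assigned default destination UDP port
VXLAN_FLAG_VNI_VALID = 0x08   # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_ethernet_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!BBHI", VXLAN_FLAG_VNI_VALID, 0, 0, vni << 8)
    return header + inner_ethernet_frame

# Send one encapsulated (placeholder) frame to a hypothetical remote VTEP.
frame = bytes(64)                       # stand-in for a captured layer 2 frame
packet = vxlan_encapsulate(frame, vni=5001)
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(packet, ("198.51.100.7", VXLAN_PORT))
```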


URL: https://www.sciencedirect.com/science/article/pii/B978012809641300003X

What combines networks by splitting the available bandwidth?

Components of an SDDC: there are three major SDDC building blocks. Network virtualization combines network resources by splitting the available bandwidth into independent channels that can each be assigned (or reassigned) to a particular server or device in real time.

What combines multiple network storage devices so they appear as a single storage device?

Storage virtualization is a cloud solution that combines multiple physical storage devices so they appear to the end-user as one virtual storage repository.

What is a collection of computers often geographically dispersed that are coordinated to solve a common problem?

Grid computing is a collection of computers, often geographically dispersed, that are coordinated to solve a common problem.

What is the computing concept that stores, manages, and processes data and applications over the Internet rather than on a personal computer or server?

Cloud computing is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server or a personal computer. While the term “cloud computing” may be new, the concept is not.