Which type of communication will send a message to a group of host destinations simultaneously?

MCSA/MCSE 70-291: Reviewing TCP/IP Basics

Deborah Littlejohn Shinder, ... Laura Hunter, in MCSA/MCSE (Exam 70-291) Study Guide, 2003

Internet Group Management Protocol

The Internet Group Management Protocol (IGMP) manages host membership in multicast groups. IP multicast groups are groups of devices (typically called hosts) that listen for and receive traffic addressed to a specific, shared multicast IP address. Essentially, IP multicast traffic is sent to a single destination MAC address but processed by multiple IP hosts. (As you’ll recall from our earlier discussion, each NIC has a unique MAC address, but multicast MAC addresses use a special 24-bit prefix that identifies them as such.) Hosts use IGMP to report their group memberships to the router, which handles the distribution of multicast packets (multicast routing is often not enabled on the router by default and must be configured).

Multicasting makes it easy for a server to send the same content to multiple computers simultaneously. IP addresses in a specific range (called Class D addresses) are reserved for multicast assignment. IGMP defines several message types, used by hosts to join and leave multicast groups and by routers to query group membership.

A unicast message is sent directly to a single host, whereas a multicast is sent to all members of a particular group. Both utilize connectionless datagrams transported via the User Datagram Protocol (UDP), which we’ll discuss in the Host-to-Host Transport Layer section. A multicast is sent to a group of hosts known as an IP multicast group or host group. The hosts in this group listen for IP traffic sent to a specific IP multicast address. IP multicasts are more efficient than broadcasts because the data is received only by computers listening on a specific address. A range of IP addresses, the Class D addresses (224.0.0.0 through 239.255.255.255), is reserved for multicast. Windows Server 2003 supports multicast addresses and, by default, is configured to support both the sending and receiving of IP multicast traffic.
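The special 24-bit multicast MAC prefix mentioned earlier is 01:00:5e: per RFC 1112, an IPv4 Class D address is mapped to an Ethernet multicast MAC by appending the low-order 23 bits of the IP address to that prefix. A minimal C sketch of the mapping (the function name is ours, for illustration only):

```c
#include <assert.h>
#include <stdint.h>

/* Map an IPv4 multicast (Class D) address to its Ethernet multicast MAC.
   The MAC is the prefix 01:00:5e followed by the low-order 23 bits of the
   IP address (RFC 1112 mapping). `ip` is in host byte order. */
void multicast_ip_to_mac(uint32_t ip, uint8_t mac[6])
{
    mac[0] = 0x01;
    mac[1] = 0x00;
    mac[2] = 0x5e;
    mac[3] = (ip >> 16) & 0x7f;  /* only 23 IP bits survive: top bit masked */
    mac[4] = (ip >> 8) & 0xff;
    mac[5] = ip & 0xff;
}
```

For example, 224.0.0.251 maps to 01:00:5e:00:00:fb. Because only 23 of the 28 group-address bits survive, 32 different multicast IP addresses share each multicast MAC, which is why the text says the frame is "processed by multiple IP hosts" and then filtered at the IP layer.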

Note

For more information about IGMP, see RFC 1112 at www.cis.ohio-state.edu/cgibin/rfc/rfc1112.html, which defines the specifications for IP multicasting.

Exam Warning

Although their acronyms are very similar and they function at the same layer of the networking models, ICMP and IGMP perform very different functions, so be sure you don’t get them confused on the test.


URL: https://www.sciencedirect.com/science/article/pii/B978193183692050007X

Virtual bus structure-based network-on-chip topologies†

In Networks-On-Chip, 2015

4.3.2.1 Multihop problem

The latency of a unicast message is regarded as the time from the generation of the unicast message at the source node until the time when the last flit of the message is absorbed by the destination node. So the service time of a unicast packet, TNoC, is given by

(4.1) \( T_{NoC} = T_s + T_b , \)

where Ts is the router service time for the header flit and Tb is the number of cycles required by a packet to cross the channel. Since the remaining flits follow the header flit in a pipelined fashion, Tb is simply the quotient of the packet size, S, and the channel width, W:

(4.2) \( T_b = S/W . \)

We note that Ts is a function of the router design and its hop number, H, including the time to traverse the router (tR) and the link (tL):

(4.3) \( T_s = \sum_{h=1}^{H} t_{hop} = \sum_{h=1}^{H} (t_R + t_L) = \sum_{h=1}^{H} (t_{crossbar} + t_{BW} + t_{VA} + t_{SA} + t_{contention} + t_L) . \)

Here, thop is the latency of each router hop, tBW is the time the flit spends in the buffers, tVA and tSA are the times the flit spends in arbitrating the buffer and switching resources, and tcrossbar is the time to actually traverse the router.

In the ideal case, the unicast delay for NoC networks, Tideal, should be the transmission delay of the physical link. For each hop, the ideal network latency, indicating the intrinsic network delay, can be written as

(4.4) \( t_{ideal} = t_L + t_{crossbar} . \)

So the total ideal delay, Tideal, can be written as

(4.5) \( T_{ideal} = \sum_{h=1}^{H} t_{ideal} + T_b = \sum_{h=1}^{H} (t_L + t_{crossbar}) + T_b . \)

The gap relative to this ideal, Tgap, which captures the extra latency contributed by the router pipeline and resource contention in the NoC design, can be written as

(4.6) \( T_{gap} = T_{NoC} - T_{ideal} = \sum_{h=1}^{H} (t_{BW} + t_{VA} + t_{SA} + t_{contention}) . \)

From Equation (4.6), it can be seen that if we want to reduce the unicast latency of the NoC, we should reduce the number of transmission hops, the processing time of router pipelines, and the contention time caused by multiple packets waiting for transmission.
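As a sanity check on Equations (4.1)-(4.6), the latency components can be tabulated in a few lines of C. The per-stage cycle counts used below are hypothetical values chosen only for illustration, not measurements from any real router:

```c
#include <assert.h>

/* Illustrative model of Equations (4.1)-(4.6); all times in cycles,
   assumed identical at every hop for simplicity. */
typedef struct {
    int t_crossbar, t_bw, t_va, t_sa, t_contention, t_link;
} HopTimes;

/* Eq. (4.3): header service time, summed over H hops. */
int t_s(const HopTimes *h, int hops)
{
    return hops * (h->t_crossbar + h->t_bw + h->t_va +
                   h->t_sa + h->t_contention + h->t_link);
}

/* Eq. (4.2): body time = packet size / channel width. */
int t_b(int packet_size, int channel_width)
{
    return packet_size / channel_width;
}

/* Eq. (4.1): total NoC service time. */
int t_noc(const HopTimes *h, int hops, int s, int w)
{
    return t_s(h, hops) + t_b(s, w);
}

/* Eq. (4.5): ideal delay keeps only link and crossbar traversal. */
int t_ideal(const HopTimes *h, int hops, int s, int w)
{
    return hops * (h->t_link + h->t_crossbar) + t_b(s, w);
}

/* Eq. (4.6): the gap is the pipeline and contention overhead. */
int t_gap(const HopTimes *h, int hops)
{
    return hops * (h->t_bw + h->t_va + h->t_sa + h->t_contention);
}
```

With, say, a 256-bit packet on a 32-bit channel over 4 hops, the identity TNoC - Tideal = Tgap holds term by term, confirming that reducing hops, pipeline depth, or contention is the only way to close the gap.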


URL: https://www.sciencedirect.com/science/article/pii/B9780128009796000044

Performance Evaluation

José Duato, ... Lionel Ni, in Interconnection Networks, 2003

9.11.5 Performance of Unicast- and Multidestination-Based Schemes

This section compares multicast algorithms using unicast messages with those using multidestination messages. This measures the potential additional benefits that a system provides if it supports multidestination wormhole routing in hardware. The multicast algorithms selected for comparison are the ones that achieved the best performance in Section 9.11.4 (the SPUmesh and the SCHL schemes), using the same parameters as in that section. The performance results presented in this section were reported in [173].

As expected, the SCHL scheme outperforms the SPUmesh algorithm for both large and small d. This can be seen in Figure 9.50. Since the destination sets are completely random, there is no node contention to degrade the performance of the SCHL scheme. There is a crossover point at around d = 100, after which the SCHL scheme keeps improving as d increases. The reason is that hierarchical grouping of destinations gives relatively large leader sets for small d. This is because the destinations are randomly scattered in the mesh. On the other hand, SPUmesh generates perfectly staggered trees for identical destination sets. But, as d increases, leader sets become smaller and the advantage of efficient grouping in the SCHL scheme overcomes the node contention. This results in a factor of about 4 to 6 improvement over the SPUmesh algorithm when the number of sources and destinations is very high.


Figure 9.50. Multiple multicast latency on a 16 × 16 mesh using SPUmesh and SCHL. Randomly generated destination sets with (a) fixed d (128, 200) with varying s and (b) fixed s (128, 256) with varying d.

(from [173])

These results lead to the conclusion that systems implementing multidestination message passing in hardware can support more efficient multicast than systems providing unicast message passing only. However, when the number of destinations for multicast messages is relatively small, the performance benefits of supporting multidestination message passing are small. So the additional cost and complexity of supporting multidestination message passing is only worth it if multicast messages represent an important fraction of network traffic and the average number of destinations for multicast messages is high.


URL: https://www.sciencedirect.com/science/article/pii/B9781558608528500122

Message passing interface communication protocol optimizations†

In Networks-On-Chip, 2015

10.5.2.1 Round-trip traffic pattern

We first record the time taken for a number of round-trip message transfers. A randomly generated message (i.e., the destination of each node's unicast message is selected randomly) is sent to the target node by the MPI_Send instruction. When the target node receives this message, it first performs the MPI_Receive instruction and then immediately returns the message, unchanged, to the source node with another MPI_Send. Because receivers pre-post their MPI_Receive instructions, this round-trip test lets us measure the maximum network bandwidth of the communication system. Assuming that the maximum capacity of the L1 cache in the multicore processor is 32 kB, the maximum length of a message triggered by MPI primitive instructions is set to 16 kB (for send and receive operations).

Figure 10.12 shows the bandwidth results for the different protocols under a round-trip traffic pattern with message sizes ranging from 1 to 16,384 bytes. As the message size increases, the communication bandwidth rises rapidly, since a larger fraction of the time is spent on real data transmission. It can be seen from the figure that the buffered protocol achieves significantly higher bandwidth than the synchronous protocol, primarily because the buffered protocol does not need the handshaking process or retry operations for its pre-posted receives. As expected, the proposed ADCM approach achieves the same bandwidth as the buffered protocol: with sufficient buffers and pre-posted receive operations, the buffered protocol is simply a special case of the ADCM.


Figure 10.12. The bandwidth comparison under round-trip traffic.

To understand further the buffered protocol's disadvantages and the ADCM's advantages, Figure 10.13 illustrates how the bandwidth varies with the percentage of pre-posted receive operations for a 4-kB message. A pre-posted receive is a receive operation posted before the corresponding message arrives. The synchronous protocol outperforms the buffered protocol when 58% or fewer of the receives are pre-posted. The ADCM approach achieves better communication bandwidth by dynamically selecting the corresponding protocol behavior on the basis of buffer usage and communication demand.


Figure 10.13. The bandwidth comparison for pre-post receive ratio variation.


URL: https://www.sciencedirect.com/science/article/pii/B978012800979600010X

Network-on-chip customizations for message passing interface primitives†

In Networks-On-Chip, 2015

9.5.2.1 The effect of point-to-point communication: Bandwidth

We first discuss the performance of point-to-point communications in the proposed design. We record the time taken for a number of round-trip message transfers. The randomly generated messages (i.e., the destination of each node's unicast message is selected randomly) are sent to the target node by MPI_Send instructions. When the target node receives such a message, it first performs the MPI_Receive instruction and then immediately returns the message, unchanged, to the source node through the MPI_Send instruction. This round-trip test helps determine the network bandwidth of the communication system; such a bandwidth is considered the average peak performance on a link channel. In the experiment, MPI instructions such as MPI_Send and MPI_Receive are triggered by benchmarks, so that the L1 cache controller accesses the data and interconnects with the MU. Assuming that the maximum capacity of the L1 cache in the multicore processor is 32 kB, the maximum length of a message triggered by MPI primitive instructions is set to 16 kB (for send and receive operations).

Figure 9.7 illustrates the bandwidth results of the point-to-point communication. When the size of the message exceeds 1 kB, the bandwidth of the communication system can reach more than 5 GB/s. Compared with software-based MPI implementations, such as TMD-MPI [42] at 10 MB/s, the bandwidth of the proposed design represents a qualitative leap. The hardware implementation in Ref. [44] obtains a bandwidth of 531.2 MB/s. The proposed approach exhibits improved performance, which adequately demonstrates the potential benefit of supporting the parallel programming model with a special hardware mechanism.


Figure 9.7. Bandwidth results of the proposed design with different message sizes.


URL: https://www.sciencedirect.com/science/article/pii/B9780128009796000093

MCSA/MCSE 70-291: The Dynamic Host Configuration Protocol

Deborah Littlejohn Shinder, ... Laura Hunter, in MCSA/MCSE (Exam 70-291) Study Guide, 2003

BOOTP versus DHCP Relay

BOOTP is an older protocol used to boot diskless workstations with a network-downloadable operating system image. Like DHCP, it is broadcast-based and runs over UDP ports 67 and 68. Most routers are not configured by default to forward this type of broadcast and need some assistance to do so. Hence, the DHCP Relay Agent was born.

A DHCP Relay Agent is set up to listen for DHCP broadcast messages on a network segment on which there is no DHCP server. Its job is to intercept these messages and forward them via a one-to-one (unicast) message to a valid DHCP server across a router. The DHCP Relay Agent acts as an intermediary DHCP client, working to provide the real DHCP client with a valid DHCP lease. See Figure 3.33 for an illustration of how the DHCP Relay Agent works.


Figure 3.33. Placing Your DHCP Relay Agent

Note

In contrast to a broadcast, which is a one-to-all message sent to the destination address 255.255.255.255, a unicast is a one-to-one relationship in which the initiating computer already knows the IP address of the destination computer.


URL: https://www.sciencedirect.com/science/article/pii/B9781931836920500093

Cable Networking Protocols

Walter Ciciora, ... Michael Adams, in Modern Cable Television Technology (Second Edition), 2004

Types of IP Addresses

There are three types of IP addresses: unicast, multicast, and broadcast. Unicast addresses are the most straightforward. They are assigned to identify a single interface (for most practical purposes this equates to a single host, though there is a difference). A unicast message is intended for one and only one interface. Multicast addresses identify a set of interfaces. This allows a message to be generated once and sent to a number of interfaces. A router with receiving hosts attached to two or more ports will replicate the packets on each port. Multicast addressing is useful for video conferencing, in which the same signal is to be sent to several participants. Similarly, it is used in IP distribution of video (IPTV). If one router needs to pass messages to several other routers, it can do so using multicast addresses. Broadcast addresses are a special case of multicast addresses. They identify all interfaces on a network. The use of broadcast addresses is discouraged.
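The three address types can be distinguished mechanically from the address bits: multicast (Class D) addresses begin with the bits 1110, covering 224.0.0.0 through 239.255.255.255, and 255.255.255.255 is the limited broadcast address. A C sketch (names are ours; it deliberately ignores subnet-directed broadcasts, which depend on the local mask):

```c
#include <assert.h>
#include <stdint.h>

typedef enum { ADDR_UNICAST, ADDR_MULTICAST, ADDR_BROADCAST } AddrType;

/* Classify an IPv4 address given in host byte order. Illustration only:
   a real stack also checks subnet-directed broadcast against the local
   mask, and treats Class E (240/4) as reserved rather than unicast. */
AddrType classify_ipv4(uint32_t ip)
{
    if (ip == 0xFFFFFFFFu)
        return ADDR_BROADCAST;   /* limited broadcast 255.255.255.255 */
    if ((ip >> 28) == 0xE)       /* leading bits 1110 -> Class D */
        return ADDR_MULTICAST;
    return ADDR_UNICAST;
}
```

For instance, 224.0.0.251 classifies as multicast while 192.168.1.1 classifies as unicast, matching the taxonomy above.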


URL: https://www.sciencedirect.com/science/article/pii/B9781558608283500072

Collective Communication Support

José Duato, ... Lionel Ni, in Interconnection Networks, 2003

5.8 Engineering Issues

When designing a parallel computer, the designer faces many trade-offs. One of them concerns providing support for collective communication. Depending on the architecture of the machine, different issues should be considered.

Multicomputers usually rely on message passing to implement communication and synchronization between processes executing on different processors. As indicated in Section 5.3.1, supporting collective communication operations may reduce communication latency even if those operations are not supported in hardware. The reason is that system calls and software overhead account for a large percentage of communication latency. Therefore, replacing several unicast message-passing operations by a single collective communication operation usually reduces latency significantly. For example, when a processor needs to send the same message to many different processors, a single multicast operation can replace many unicast message transmissions. Even if multicast is not supported in hardware, some steps like system call, buffer reservation in kernel space, and message copy to the system buffer are performed only once. Also, when multicast is not supported in hardware, performance can be considerably improved by using the techniques described in Section 5.7 to organize the unicast messages as a multicast tree. Using those techniques, communication latency increases logarithmically with the number of destinations. Otherwise it would increase linearly. Obviously, implementing some hardware support for collective communication operations will speed up the execution of those operations even more.
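The logarithmic-versus-linear claim above is easy to see by counting steps: in a binomial multicast tree, every node that already holds the message forwards a copy in each step, so the number of holders doubles per step, whereas a lone source sending unicasts delivers one copy per step. A small C sketch (function names are ours, for illustration):

```c
#include <assert.h>

/* Sequential sends when one source unicasts to d destinations in turn. */
int linear_steps(int d)
{
    return d;
}

/* Communication steps of a binomial multicast tree: in each step every
   node holding the message forwards one copy in parallel, so the holder
   count doubles until source + d destinations are covered. */
int tree_steps(int d)
{
    int holders = 1;   /* the source already has the message */
    int steps = 0;
    while (holders < d + 1) {
        holders *= 2;
        steps++;
    }
    return steps;      /* = ceil(log2(d + 1)) */
}
```

For 255 destinations the tree needs only 8 steps instead of 255 sequential sends, which is why organizing the unicast messages as a multicast tree pays off even without hardware support.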

On the other hand, communication between processes is usually performed in shared-memory multiprocessors by accessing shared variables. However, synchronization typically requires some hardware support. Barrier synchronization involves a reduce operation followed by a broadcast operation. Moreover, distributed shared-memory multiprocessors with coherent caches rely on a cache coherence protocol. Different copies of the same cache line are kept coherent by using write-invalidate or write-update protocols. Both invalidate and update commands may benefit from implementing hardware support for collective communication operations. Invalidations can be performed by sending a multicast message to all the caches having a copy of the block that is to be written to. Acknowledgments can be gathered from those caches by performing a reduce operation. Updates can also be performed by sending a multicast message to all the caches having a copy of the block.

Adding hardware support for collective communication increases cost and hardware complexity, possibly slowing down the routing hardware. Now, the question is whether or not it is useful to provide hardware support for collective communication. Some parallel computers provide support for a few operations. The nCUBE-2 (wormhole-switched hypercube) [248] supports broadcast within each subcube. The NEC Cenju-3 (wormhole-switched unidirectional MIN) [183] supports broadcast within each contiguous region. The TMC CM-5 [202] supports one multicast at a time via the control network. Unfortunately, in some cases that support was not properly designed. In the nCUBE-2 and the NEC Cenju-3 deadlock is possible if there are multiple multicasts.

One of the reasons for the lack of efficient hardware support for collective communication is that most collective communication algorithms proposed in the literature focused on networks with SAF switching. However, current multiprocessors and multicomputers implement wormhole switching. Path-based routing (see Section 5.5.3) was the first mechanism specifically developed to support multicast communication in direct networks implementing wormhole switching. However, the first path-based routing algorithms were based on Hamiltonian paths [211]. These algorithms are not compatible with the most common routing algorithms for unicast messages, namely, dimension-order routing. Therefore, it is unlikely that a system will take advantage of Hamiltonian path-based routing. Note that it makes no sense sacrificing the performance of unicast messages to improve the performance of multicast messages, which usually represent a smaller percentage of network traffic. Additionally, path-based routing requires a message preparation phase, splitting the destination set into several subsets and ordering those subsets. This overhead may outweigh the benefits from using hardware-supported multicast. Fortunately, in some cases it is possible to perform the message preparation phase at compile time.

More recently, the BRCP model (see Section 5.5.3) has been proposed [269]. In this model, the paths followed by multidestination messages conform to the base routing scheme, being compatible with unicast routing. Moreover, this model allows the implementation of multicast routing on top of both deterministic and adaptive unicast routing, and therefore is suitable for current and future systems. So, it is likely we will see some hardware implementations of multicast routing based on the BRCP model in future systems. However, more detailed performance evaluation studies are required to assess the benefits of hardware-supported multicast. Also, as indicated in Section 5.5.3, several delivery ports are required to avoid deadlock. This constraint may limit the applicability of the BRCP model.

Efficient barrier synchronization is critical to the performance of many parallel applications. Some parallel computers implement barrier synchronization in hardware. For example, the Cray T3D [259] uses a dedicated tree-based network with barrier registers to provide fast barrier synchronization. Instead of using a dedicated network, it is possible to use the same network as for unicast messages by implementing the hardware mechanisms described in Section 5.6.1. Again, these mechanisms have been proposed very recently, and a detailed evaluation is still required to assess their impact on performance.

As indicated above, protocols for cache coherence may improve performance if multicast and reduce operations are implemented in hardware. In this case, latency is critical. Multicast can be implemented by using the BRCP model described in Section 5.5.3. The multidestination gather messages introduced in Section 5.6.3 can be used to collect acknowledgments with minimum hardware overhead. This approach has been proposed and evaluated in [69]; up to a 15% reduction in overall execution time was obtained.

The cost of the message preparation phase can be reduced by using tree-based multicast routing because it is not necessary to order the destination set. Tree-based routing usually produces more channel contention than path-based routing and is also prone to deadlock. However, the average number of copies of each cache line is small, reducing contention considerably. Also, the pruning mechanism proposed in Section 5.5.2 can be used to recover from deadlock. Tree-based multicast routing with pruning has been specifically developed to support invalidations and updates efficiently [224]. This mechanism has some interesting advantages: it requires a single start-up regardless of the number of destinations, therefore achieving a very small latency. It requires a single delivery channel per node. Also, it is able to deliver a message to all its destinations using only minimal paths. However, this mechanism only supports multicast. Support for the reduce operation is yet to be developed.

Most hardware mechanisms to support collective communication operations have been developed for direct networks. Recently some researchers focused on MINs. As shown in Section 5.5.2, MINs are very prone to deadlock when multicast is supported in hardware. Current proposals to avoid deadlock require either complex signaling mechanisms or large buffers to implement VCT switching. Up to now, no general solution has been proposed to this problem. Therefore, direct networks should be preferred if collective communication operations are going to be supported in hardware.

Finally, note that efficient mechanisms to support collective communication operations in hardware have been proposed very recently. Including hardware support for collective communication in the router may increase performance considerably if collective communications are requested frequently. However, in practice, most collective communication operations involve only a few nodes. In these cases, the performance gain achieved by supporting those operations in hardware is small. The only exception is barrier synchronization. In this case, hardware support reduces synchronization time considerably. Some manufacturers include dedicated hardware support for barrier synchronization. Whether this support should be extended to other operations remains an open issue.


URL: https://www.sciencedirect.com/science/article/pii/B9781558608528500080

ZigBee Applications

In Zigbee Wireless Networking, 2008

4.2.2 Ember ZigBee API

Ember is a small start-up company based in Boston, Massachusetts. It is funded by some heavyweight venture capitalists, such as Vulcan. Bob Metcalfe, the inventor of Ethernet, is one of the principal investors. Ember is one of 16 ZigBee promoter companies.

Ember makes ZigBee radios, integrated chips, and a ZigBee software stack called EmberZNet. Documentation for the stack can be downloaded from http://www.ember.com. The software is only available in the development kits (see Figure 4.8).


Figure 4.8. Ember EM250 Development Kit

Ember takes a slightly different approach to ZigBee than either TI or Freescale. Ember uses the concept of a transport layer, and enhances the functionality found in ZigBee with a series of data request functions:

emberSendDatagram()

emberSendSequenced()

emberSendMulticast()

emberSendLimitedMulticast()

emberSendUnicast() sends APS unicast messages

emberSendBroadcast() sends APS broadcast messages

Here is an example of using emberSendDatagram():

EmberStatus AppSendDatagram(int8u clusterId, int8u *contents, int8u length)
{
  EmberMessageBuffer message = EMBER_NULL_MESSAGE_BUFFER;
  EmberStatus status;

  /* Copy the payload into a chain of linked buffers, unless it is empty. */
  if (length != 0) {
    message = emberFillLinkedBuffers(contents, length);
    if (message == EMBER_NULL_MESSAGE_BUFFER)
      return EMBER_NO_BUFFERS;   /* buffer pool exhausted */
  }

  /* Send the datagram on the given cluster. */
  status = emberSendDatagram(0, clusterId, message);

  /* Release our reference to the buffer now that it has been handed
     to the stack. */
  if (message != EMBER_NULL_MESSAGE_BUFFER)
    emberReleaseMessageBuffer(message);

  return status;
}

Ember uses the concept of linked buffers: a set of 32-byte buffers concatenated to form a larger buffer, for use by the over-the-air message functions.

EmberZNet also offers a variety of interesting features, such as over-the-air (or over-Ethernet) updates of the software stack. Of course, updating over-the-air is a double-edged sword: on the one hand, it allows a vendor to deliver feature enhancements or bug fixes across the ZigBee network; on the other hand, it opens a potential security hole that attackers could use to change the behavior of the network through the update mechanism.

Also unlike TI and Freescale, Ember uses Ethernet, not USB, to connect the development PC to the ZigBee boards (not surprising, considering Bob Metcalfe's involvement with Ember). An advantage of the Ethernet approach is that a corporation can provide access to the ZigBee devices from anywhere inside the company, if set up correctly by the IT department. A disadvantage lies in price: Ember's kits tend to be more expensive than the competition's.

Ember provides the InSight development environment with their kits, which can debug both single nodes and the entire ZigBee network over-the-air. InSight allows a developer to:

Debug hardware

Monitor application or debug data

Monitor radio data packets

Most other vendors use the Daintree Sensor Network Analyzer I introduced in Chapter 3, “The ZigBee Development Environment.”

Ember uses the concept of a transport layer and includes over-the-air updates.


URL: https://www.sciencedirect.com/science/article/pii/B9780750685979000045

The Communication View

Richard John Anthony, in Systems Programming, 2016

3.3.4 Addressing Methodologies

There are four main addressing methodologies, that is, ways in which the recipient of a message is identified.

3.3.4.1 Unicast Communication

A message is delivered to a single destination process, which is uniquely addressed by the sender. That is, the message contains the address of the destination process. Other processes do not see the message.

Figure 3.7 illustrates unicast communication in which a message is sent to a single, specifically addressed destination process.


Figure 3.7. Unicast communication.

3.3.4.2 Broadcast Communication

A single message (as transmitted by the sender) is delivered to all processes. The most common way to achieve this is to use a special broadcast address, which indicates to the communication mechanism that the message should be delivered to all computers.

Figure 3.8 illustrates broadcast communication in which the sender sends a single message that is delivered to all processes. When considering the Internet specifically, the model of broadcast communication depicted in Figure 3.8 is termed “local broadcast,” in which the set of recipients are the processes on computers in the same IP subnet as the sender. The special IPv4 broadcast address to achieve this is 255.255.255.255.


Figure 3.8. Broadcast communication.

It is also possible to perform a directed broadcast with the IP. In this case, a single packet is sent to a specific remote IP subnet and is then broadcast within that subnet. In transit, the packet is forwarded in a unicast fashion. On reaching the destination subnet, it is the responsibility of the router on the entry border of the subnet to perform the last step as a broadcast. To achieve a directed broadcast, the network component of the original address must be the target subnet address, and all bytes of the host part of the address are set to the value 255. On reaching the final router in the delivery path, the address is converted to the IP broadcast address (i.e., 255.255.255.255) and thus delivered to all computers in the subnet. As an example, consider that the subnet address is 193.65.72.0, which may contain computers addressed from 193.65.72.1 to 193.65.72.254. The address used to send a directed broadcast to this subnet would be 193.65.72.255. The concept of directed broadcast is illustrated in Figure 3.9.
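The directed-broadcast address computation described above is a one-liner: keep the network bits of the subnet and set every host bit to 1. A C sketch (function name is ours, for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Compute the directed-broadcast address of a subnet: preserve the
   network bits (subnet & mask) and set all host bits (~mask) to 1.
   Addresses and mask are in host byte order. */
uint32_t directed_broadcast(uint32_t subnet, uint32_t mask)
{
    return (subnet & mask) | ~mask;
}
```

Applied to the example in the text, subnet 193.65.72.0 with a /24 mask (255.255.255.0) yields 193.65.72.255, the address a sender would use to reach every host in that remote subnet.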


Figure 3.9. IP-directed broadcast communication.

When broadcasting using a special broadcast address, the sender does not need to know, and may not be able to know, the number of receivers or their identities. The number of recipients of messages can range from none to the entire population of the system.

A broadcast effect can also be achieved by sending a series of identical unicast messages to each other process known by the sending process. Where the communication protocol does not directly support broadcast (e.g., with TCP), this is the only way to achieve the broadcast effect. The advantage is greater security as the sender identifies each recipient separately, but the disadvantages are greater overheads for the sender in terms of the processing associated with sending and greater overheads on the network (in terms of bandwidth used, as each individual message must now appear on the medium).
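Conversely, where a protocol does support broadcast (e.g., UDP), the sender must usually opt in explicitly: on BSD-style sockets, a sendto() aimed at 255.255.255.255 is rejected by the kernel unless the SO_BROADCAST option has been set. A minimal C sketch of that setup (the function name is ours):

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a UDP socket with broadcast sending enabled; returns the file
   descriptor, or -1 on failure. The kernel refuses sendto() on the
   broadcast address unless SO_BROADCAST is set on the socket. */
int make_broadcast_socket(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof on) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

This opt-in exists precisely because of the costs discussed above: broadcasting interrupts every host on the segment, so the API makes it hard to do by accident.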

There is also the consideration of synchronization. With broadcast address-based communication in a local area network, the transmission time is the same (only one message is sent) and the propagation times are similar (short distances); so, although the actual delivery to specific processes at each node may differ because of local workloads on the host computers, reception is reasonably synchronized; certainly more so than when a series of unicast messages is sent, in which case one receiver may get the message before the other messages are even sent. This can matter in services where voting is used or where the order of response is intended to influence system behavior. For example, in a load-balancing mechanism, a message soliciting availability may be sent and the speed of response used as a factor in determining suitability (on the basis that a host that responds quickly is likely to be a good candidate for additional work). The unicast approach to achieving a multicast or broadcast effect may then require further synchronization mechanisms, because the sender has implicitly preordered the responses by the order in which it sent the requests.

Broadcast communication is less secure than unicast because any process listening on the appropriate port can hear the message and also because the sender does not know the actual identities of the set of recipient processes. IP broadcast communication can also be inefficient in the sense that all computers receive the packet at the network (IP) layer (effectively an interrupt requiring that the packet be processed and passed up to the transport layer) even if it turns out that none of the processes present are interested in the packet.

3.3.4.3 Multicast Communication

A single message (as transmitted by the sender) is delivered to a group of processes. One way to achieve this is to use a special multicast address.

Figure 3.10 illustrates multicast communication in which a group (a prechosen subset) of processes receive a message sent to the group. The light-shaded processes are members of the target group, so each will receive the message sent by process A; the dark-shaded processes are not members of the group and so ignore the message. The multicast address can be considered to be a filter; either processes listen for messages on that address (conceptually they are part of the group) or they do not.

[Figure omitted; see caption below.]

Figure 3.10. Multicast communication.

The sender may not know how many processes receive the message or their identities; this depends on the implementation of the multicast mechanism.

Multicast communication can be achieved using a broadcast mechanism. UDP is an example of a protocol that supports broadcast directly, but not multicast. In this case, transport layer ports can be used as a means of group message filtering by arranging that only the subset of processes that are members of the group listen on the particular port. The group membership action join group can be implemented locally by the process binding to the appropriate port and issuing a receive-from call.

In both types of multicast communication, that is, directly supported by the communication protocol or fabricated by using a broadcast mechanism, there can be multiple groups, and each individual process can be a member of several different groups. This provides a useful way to impose some control and structure on the communication at the higher level of the system or application. For example, the processes concerned with a particular functionality or service within the system can join a specific group related to that activity.
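A toy in-memory model (not real sockets) makes the group-as-filter idea and overlapping membership concrete; the group names and process names are invented for illustration:

```python
# A toy in-memory model of multicast groups: the group "address" acts
# as a filter, and one process may belong to several different groups.
from collections import defaultdict

groups = defaultdict(set)   # group address -> set of member processes
inbox = defaultdict(list)   # process -> messages delivered to it

def join(process, group):
    groups[group].add(process)

def multicast(group, message):
    # Delivered only to current members; non-members never see it.
    for process in groups[group]:
        inbox[process].append(message)

join("P1", "replication")
join("P2", "replication")
join("P2", "monitoring")    # P2 is a member of two groups

multicast("replication", "sync")
multicast("monitoring", "heartbeat")
```

After these calls, P1 has received only "sync", P2 has received both messages, and a process that joined neither group has received nothing, mirroring the filtering behavior described above.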

3.3.4.4 Anycast Communication

The requirement of an anycast mechanism is to ensure that the message is delivered to one member of the group. Some definitions are stricter, that is, that it must be delivered to exactly one member. Anycast is sometimes described as “delivery to the nearest of a group of potential recipients”; however, this is dependent on the definition of “nearest.”

Figure 3.11 illustrates the concept of anycast communication, in which a message is delivered to one member of a group of potential recipients. Whereas broadcast and multicast deliver a message to zero or more recipients (depending on system size and group size, respectively), the goal of anycast is to deliver a message to exactly one (or possibly more than one) recipient.

[Figure omitted; see caption below.]

Figure 3.11. Anycast communication.

Neither TCP nor UDP directly supports anycast communication, although it could be achieved using UDP with a list of group members, sending a unicast message to each one in turn and waiting for a response before moving on to the next. As soon as a reply is received from one of the group, the sequence stops.
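This try-each-in-turn scheme can be sketched with UDP on the loopback interface. One candidate address has no listener (modeling a failed or absent member, so the client's receive times out), while the second is serviced by a small responder thread; the client stops at the first reply:

```python
import socket
import threading

def responder(sock):
    # A live group member: replies to the first request it receives.
    data, addr = sock.recvfrom(64)
    sock.sendto(b"i will serve you", addr)

# A live member, serviced by a background thread.
live = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
live.bind(("127.0.0.1", 0))
threading.Thread(target=responder, args=(live,), daemon=True).start()

# A dead member: bind to learn a free port, then close it so that
# nothing is listening there when the client tries it.
dead = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dead.bind(("127.0.0.1", 0))
dead_addr = dead.getsockname()
dead.close()

candidates = [dead_addr, live.getsockname()]  # the group member list

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(0.5)
chosen = None
for addr in candidates:
    client.sendto(b"anyone there?", addr)
    try:
        reply, chosen = client.recvfrom(64)
        break                # stop as soon as one member answers
    except socket.timeout:
        continue             # no response: move on to the next member
```

The timeout value is a tuning assumption: too short and a slow-but-live member is skipped, too long and failover to the next candidate is sluggish.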

The Arrows application within the Networking Workbench provides an interesting case example for exploring addressing modes; see Section 3.15.


URL: https://www.sciencedirect.com/science/article/pii/B9780128007297000030

What type of message is sent to a specific group of hosts?

A multicast message is a message sent to a selected group of hosts that are part of a subscribing multicast group.

What type of communication will send messages to all devices on the local area network?

Broadcast transmission is supported on most LANs (e.g., Ethernet) and may be used to send the same message to all computers on the LAN. For example, the Address Resolution Protocol (ARP) uses broadcast to send an address resolution query to all computers on the LAN, and broadcast is also used to communicate with an IPv4 DHCP server.

What do you call that a message is sent to all hosts connected to the network?

A broadcast address is a network address used to transmit to all devices connected to a multiple-access communications network. A message sent to a broadcast address may be received by all network-attached hosts.

What type of message is sent to a specific group of hosts? (Select one: unicast, dynamic, multicast, broadcast)

A unicast message is sent directly to a single host, whereas a multicast is sent to all members of a particular group.