
24th European Symposium on Computer Aided Process Engineering

Nagajyothi Virivinti, Kishalay Mitra, in Computer Aided Chemical Engineering, 2014

2.3 Multi-objective Optimization under Uncertainty

In an industrial grinding process, in addition to the goal of productivity maximization, deterministic grinding circuit optimization must satisfy upper bound constraints on the control variables. From the previous work of Mitra and Gopinath (2004), we know that there is a tradeoff between the throughput (TP) and the percent passing of midsize classes (MS). In the deterministic optimization formulation, certain parameters are assumed to be constant, but in real life this may not be the case. There are six such parameters in our industrial grinding process: αR, αB, βR, βB are the grindability indices and grindability exponents for the rod mill (RMGI) and the ball mill (BMGI), and γP, γS are the sharpness indices for the primary (PCSI) and secondary (SCSI) cyclones. These parameters, treated as constants in the deterministic formulation, are treated as uncertain parameters in the OUU formulation. They are assumed uncertain because most of them are obtained from the regression of experimental data and are thus subject to uncertainty arising from experimental and regression errors. In the remainder of this section, we model them as fuzzy numbers and solve the OUU problem by FEVM. In the FEVM formulation, the uncertain parameters are treated as fuzzy numbers, and the uncertain formulation is transformed into a deterministic one by expectation calculations for both the objective functions and the constraints. The resulting deterministic multi-objective optimization problem is expressed as:

Objective functions

$\max_{S,W_1,W_2} E\{TP(\alpha_R,\beta_R,\alpha_B,\beta_B,\gamma_P,\gamma_S)\}$
$\max_{S,W_1,W_2} E\{MS(\alpha_R,\beta_R,\alpha_B,\beta_B,\gamma_P,\gamma_S)\}$

Subject to

$E\{CS(\alpha_R,\beta_R,\alpha_B,\beta_B,\gamma_P,\gamma_S) - CS^U\} \le 0$
$E\{FS(\alpha_R,\beta_R,\alpha_B,\beta_B,\gamma_P,\gamma_S) - FS^U\} \le 0$
$E\{PS(\alpha_R,\beta_R,\alpha_B,\beta_B,\gamma_P,\gamma_S) - PS^U\} \le 0$
$E\{RCL(\alpha_R,\beta_R,\alpha_B,\beta_B,\gamma_P,\gamma_S) - RCL^U\} \le 0$

Decision variables bounds

(6) $S^L \le S \le S^U, \quad W_1^L \le W_1 \le W_1^U, \quad W_2^L \le W_2 \le W_2^U$

A fuzzy simulation algorithm for the expected value model, based on the credibility measure, is used to perform the expectation calculations in the above equivalent deterministic formulation.
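To make the expectation step concrete, the following minimal Python sketch (not the authors' implementation) estimates a credibility-based expected value E{f(ξ)} by sampling; the triangular membership functions, the parameter supports and the objective f are hypothetical placeholders standing in for the grinding circuit models.

```python
import numpy as np

def credibility_expectation(f, supports, memberships, n_samples=20000, n_grid=400, seed=0):
    """Estimate the credibility-based expected value E{f(xi_1, ..., xi_m)} of a
    function of non-interactive fuzzy parameters, each given by its support
    interval and membership function."""
    rng = np.random.default_rng(seed)
    m = len(supports)
    # Sample candidate realisations u of the fuzzy vector from the supports.
    U = np.column_stack([rng.uniform(lo, hi, n_samples) for lo, hi in supports])
    # Joint membership via the min operator (standard for non-interactive fuzzy numbers).
    mu = np.min(np.column_stack([memberships[j](U[:, j]) for j in range(m)]), axis=1)
    vals = np.array([f(u) for u in U])
    lo, hi = vals.min(), vals.max()

    def cr_ge(r):   # Cr{f >= r} = 0.5 * (Pos{f >= r} + 1 - Pos{f < r})
        return 0.5 * (mu[vals >= r].max(initial=0.0) + 1.0 - mu[vals < r].max(initial=0.0))

    def cr_le(r):   # Cr{f <= r} = 0.5 * (Pos{f <= r} + 1 - Pos{f > r})
        return 0.5 * (mu[vals <= r].max(initial=0.0) + 1.0 - mu[vals > r].max(initial=0.0))

    # E{f} = int_0^inf Cr{f >= r} dr - int_{-inf}^0 Cr{f <= r} dr,
    # approximated on a finite grid covering the sampled range of f.
    e = 0.0
    if hi > 0:
        r_pos = np.linspace(0.0, hi, n_grid)
        e += np.trapz([cr_ge(r) for r in r_pos], r_pos)
    if lo < 0:
        r_neg = np.linspace(lo, 0.0, n_grid)
        e -= np.trapz([cr_le(r) for r in r_neg], r_neg)
    return e

# Demo with hypothetical triangular fuzzy parameters (a, b, c): peak at b, support [a, c].
def triangular(a, b, c):
    return lambda x: np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

supports = [(0.8, 1.2), (1.5, 2.5)]                       # assumed parameter supports
memberships = [triangular(0.8, 1.0, 1.2), triangular(1.5, 2.0, 2.5)]
print(credibility_expectation(lambda u: u[0] * u[1], supports, memberships))  # roughly 2.0
```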

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780444634559500775

Temporal Constraint Setting

Xiao Liu, ... Jinjun Chen, in Temporal QOS Management in Scientific Cloud Workflow Systems, 2012

6.3.2 Setting Coarse-grained Temporal Constraints

The second step is to set coarse-grained upper bound temporal constraints at build time. Based on the four basic building blocks, the weighted joint distribution of an entire workflow or workflow segment can be obtained efficiently to facilitate the negotiation process for setting coarse-grained temporal constraints. Here, we denote the obtained weighted joint distribution of the target scientific workflow (or workflow segment) SW as $N(\mu_{sw}, \sigma_{sw}^2)$, where $\mu_{sw} = \sum_{i=1}^{n} w_i \mu_i$ and $\sigma_{sw}^2 = \sum_{i=1}^{n} w_i^2 \sigma_i^2$. Meanwhile, we assume that the minimum threshold for the probability consistency is $\beta\%$, which denotes the user's acceptable bottom-line probability, namely the confidence for timely completion of the workflow instance, and that the maximum threshold for the upper bound constraint is max(SW), which denotes the user's acceptable latest completion time. The actual negotiation process can be conducted in two alternative ways: time oriented and probability oriented.

The time-oriented negotiation process starts with the user's initial suggestion of an upper bound temporal constraint U(SW) and the evaluation of the corresponding temporal consistency state by the service provider. If $U(SW) = \mu_{sw} + \lambda\sigma_{sw}$ with $\lambda$ as the $\alpha\%$ percentile, and $\alpha\%$ is below the threshold of $\beta\%$, then the upper bound temporal constraint needs to be adjusted; otherwise the negotiation process terminates. The subsequent process is an iteration in which the user presents a new upper bound temporal constraint that is less constrained than the previous one and the service provider re-evaluates the consistency state, until the probability reaches or exceeds the minimum probability threshold.

In contrast, the probability-oriented negotiation process begins with the user's initial suggestion of a probability value $\alpha\%$. The service provider evaluates the execution time R(SW) of the entire workflow process SW as the sum of all activity durations, $\sum_{i=1}^{n} w_i(\mu_i + \lambda\sigma_i)$, where $\lambda$ is the $\alpha\%$ percentile. If R(SW) is above the user's maximum upper bound constraint max(SW), the probability value needs to be adjusted; otherwise the negotiation process terminates. The subsequent process is an iteration in which the user presents a new probability value that is lower than the previous one and the service provider re-evaluates the workflow duration, until it reaches or falls below the upper bound constraint.
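The two negotiation loops can be sketched as follows, assuming the normally distributed weighted joint duration $N(\mu_{sw}, \sigma_{sw}^2)$ introduced above; the callback `propose_next` and the candidate list `alphas` are hypothetical stand-ins for the user's interactive suggestions.

```python
from scipy.stats import norm

def time_oriented_negotiation(mu_sw, sigma_sw, beta, max_sw, propose_next):
    """Time-oriented loop: `propose_next` is an assumed callback returning the
    user's next, less constrained upper bound (or None to give up).
    Returns an accepted upper bound U(SW), or None if max(SW) would be exceeded."""
    u = propose_next(None)                       # user's initial suggestion
    while u is not None and u <= max_sw:
        # alpha% such that U(SW) = mu_sw + lambda * sigma_sw, with lambda the
        # alpha% percentile of the weighted joint distribution N(mu_sw, sigma_sw^2)
        alpha = norm.cdf((u - mu_sw) / sigma_sw) * 100.0
        if alpha >= beta:                        # meets the bottom-line confidence beta%
            return u
        u = propose_next(u)                      # relax the constraint and re-evaluate
    return None

def probability_oriented_negotiation(weights, mus, sigmas, max_sw, alphas):
    """Probability-oriented loop: `alphas` is an assumed decreasing sequence of
    candidate confidence values (percent) suggested by the user."""
    for alpha in alphas:
        lam = norm.ppf(alpha / 100.0)            # lambda = alpha% percentile
        r_sw = sum(w * (m + lam * s) for w, m, s in zip(weights, mus, sigmas))
        if r_sw <= max_sw:                       # within the acceptable latest completion time
            return alpha, r_sw
    return None
```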

As depicted in Figure 6.6, with the probability-based temporal consistency, the time-oriented negotiation process is normally one where increasing upper bound constraints are presented and evaluated with their temporal probability consistency states until the probability is above the user's bottom-line confidence; the probability-oriented negotiation process is normally one where decreasing temporal probability consistency states are presented and estimated with their upper bound constraints until the constraint is below the user's acceptable latest completion time. In practice, the user and service provider can choose either of the two negotiation processes, or even switch between them dynamically if they wish. On one hand, users who have some background knowledge about the execution time of the entire workflow or some of the workflow segments may prefer the time-oriented negotiation process, since it is relatively easier for them to estimate and adjust the coarse-grained constraints. On the other hand, for users without sufficient background knowledge, the probability-oriented negotiation process is a better choice, since they can make decisions by comparing the probability values of the temporal consistency states with their personal bottom-line confidence values.


Figure 6.6. Negotiation process for setting coarse-grained temporal constraints.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123970107000069

Novel Probabilistic Temporal Framework

Xiao Liu, ... Jinjun Chen, in Temporal QOS Management in Scientific Cloud Workflow Systems, 2012

4.2 Component I: Temporal Constraint Setting

Component I is temporal constraint setting. As depicted in Table 4.1, the input includes process models (cloud workflow process definitions) and system logs for scientific cloud workflows. The output includes both coarse-grained and fine-grained upper bound temporal constraints. Temporal constraints mainly include three types, i.e. upper bound, lower bound and fixed time. An upper bound constraint between two activities is a relative time value so that the duration between them must be less than or equal to it. As discussed in [21], conceptually, a lower bound constraint is symmetric to an upper bound constraint and a fixed-time constraint can be viewed as a special case of upper bound constraint, hence they can be treated similarly. Therefore, in this book, we focus only on upper bound constraints.

Table 4.1. Component I: Temporal Constraint Setting

Strategy overview. Input: process models and system logs for scientific cloud workflows. Output: coarse-grained and fine-grained upper bound temporal constraints. Methods: statistical time-series-pattern-based forecasting strategy; weighted joint normal distribution; probability-based temporal consistency model; win–win negotiation process; automatic propagation.
Step 1 (Forecasting activity duration intervals): Based on system logs, a statistical time-series-pattern-based forecasting strategy is applied to estimate workflow activity duration intervals.
Step 2 (Setting coarse-grained temporal constraints): With the probability-based temporal consistency model, a win–win negotiation process between service users and service providers is designed to set coarse-grained temporal constraints. The negotiation process can be conducted in either a time-oriented or a probability-oriented way.
Step 3 (Setting fine-grained temporal constraints): Based on the coarse-grained temporal constraints, fine-grained temporal constraints are assigned through an automatic propagation process.

The first step is to forecast workflow activity duration intervals. The activity duration interval is one of the basic elements of the temporal consistency model. For workflow activities, the accuracy of their estimated durations is critical for the effectiveness of our temporal framework. However, forecasting is not a trivial issue in cloud workflow environments because of their inherent complexity and uncertainty. For this purpose, a statistical time-series-pattern-based forecasting strategy is designed. Unlike conventional strategies, which build forecasting models from several dominant factors affecting activity durations (e.g. CPU load, network speed and memory space), our forecasting strategy focuses on the performance patterns of the activity durations themselves and tries to predict an accurate interval based on the most similar historical time-series patterns. Details of the forecasting strategy will be presented in Chapter 5.

The second step is to set coarse-grained temporal constraints. Coarse-grained temporal constraints are those assigned to entire workflow instances (deadlines) and local workflow segments (milestones). They are usually specified by service users based on their own interests. However, as analysed in Section 2.3, to ensure the quality of temporal constraints, a balance should be struck between the service user's requirements and the system's performance. For this purpose, with the probability-based temporal consistency model, a win–win negotiation process between service users and service providers is designed to support the setting of coarse-grained temporal constraints. The negotiation process can be conducted either in a time-oriented way, where the service user suggests new temporal constraints and the service provider replies with the corresponding probability consistency state (i.e. the probability confidence for completing the workflow instance within the specified temporal constraint), or in a probability-oriented way, where the service user suggests a new probability consistency state and the service provider replies with the corresponding temporal constraints. Finally, a set of balanced coarse-grained temporal constraints can be achieved. Details of the setting strategy for coarse-grained temporal constraints will be presented in Chapter 6.

The third step is to set fine-grained temporal constraints. Given the results of the previous step, coarse-grained temporal constraints need to be propagated along the entire workflow instance to assign fine-grained temporal constraints for each workflow activity. In our strategy, an automatic propagation process is designed to assign fine-grained temporal constraints based on their aggregated coarse-grained ones. A case study demonstrates that our setting strategy for fine-grained temporal constraints is very efficient and accurate. Details of the setting strategy for fine-grained temporal constraints will also be presented in Chapter 6.
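As a rough illustration only (the actual propagation strategy is given in Chapter 6), the sketch below splits a coarse-grained upper bound over the activities it covers in proportion to their estimated durations; the proportional rule and the helper name are assumptions, not the book's algorithm.

```python
def propagate_fine_grained(coarse_upper_bound, mu, sigma, lam):
    """Hypothetical sketch of automatic propagation: split a coarse-grained upper
    bound over activities a_1..a_n in proportion to each activity's estimated
    duration mu_i + lam * sigma_i. (The actual propagation strategy is detailed
    in Chapter 6; this only illustrates the idea.)"""
    estimates = [m + lam * s for m, s in zip(mu, sigma)]
    total = sum(estimates)
    # Each activity receives a share of the coarse-grained constraint
    # proportional to its estimated duration.
    return [coarse_upper_bound * e / total for e in estimates]

# Example: a three-activity segment with a 100-time-unit coarse-grained constraint.
print(propagate_fine_grained(100.0, mu=[10, 20, 30], sigma=[2, 3, 4], lam=1.28))
```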

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123970107000045

13th International Symposium on Process Systems Engineering (PSE 2018)

Pablo A. Marchetti, Jaime Cerdá, in Computer Aided Chemical Engineering, 2018

3.3 Material transfer representation

Transfer operations are determined using binary variables Vbb' and Wbb'k. These variables represent the direct transfer between units (when Vbb' = 1) and the transfer through storage vessel k (when Wbb'k = 1) of intermediate product from batch b to batch b'. Non-negative continuous variables βbb' and ηbb'k account for the volumes being transferred in each case. Appropriate upper-bound constraints are also included to guarantee that βbb' and ηbb'k are zero whenever Vbb' and Wbb'k, respectively, are not selected. Provided that BSb is the batch size, using these variables the required material balance equations are:

(3) $BS_b = \sum_{b' \in B_{i,s+1}} \beta_{bb'} + \sum_{k \in K_{is}} \eta_{bb'k} \quad \forall\, b \in B_{is},\ i \in I,\ s \in S'$

(4) $BS_{b'} = \sum_{b \in B_{is}} \beta_{bb'} + \sum_{k \in K_{is}} \eta_{bb'k} \quad \forall\, b' \in B_{i,s+1},\ i \in I,\ s \in S'$

Also, specific constraints are included to guarantee that $V_{bb'}$ and $W_{bb'k}$ are zero whenever batches b, b' are not selected, and likewise that $W_{bb'k}$ is zero when $Y^S_{bk} = 0$. Since either direct transfer or transfer through a storage tank, but not both, can be selected, the following constraint is needed:

(5) $V_{bb'} + W_{bb'k} \le SEL_{b'} \quad \forall\, b \in B_{is},\ b' \in B_{i,s+1},\ k \in K_{is},\ i \in I,\ s \in S'$
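A minimal PuLP sketch of these linking constraints for a single hypothetical batch pair (b, b') and one storage vessel k is given below; the variable names and the big-M value BS_MAX are illustrative assumptions, not part of the original model.

```python
# A minimal PuLP sketch of the linking constraints described above, for a single
# hypothetical pair of batches b, b' and one storage vessel k.
# BS_MAX plays the role of a valid upper bound (big-M) on the transferred volume.
from pulp import LpProblem, LpVariable, LpBinary, LpContinuous

BS_MAX = 100.0                                  # assumed maximum batch size
prob = LpProblem("material_transfer_sketch")

V = LpVariable("V_bb", cat=LpBinary)            # direct transfer b -> b'
W = LpVariable("W_bbk", cat=LpBinary)           # transfer via storage vessel k
beta = LpVariable("beta_bb", lowBound=0, cat=LpContinuous)   # volume transferred directly
eta = LpVariable("eta_bbk", lowBound=0, cat=LpContinuous)    # volume transferred via vessel k
SEL = LpVariable("SEL_b", cat=LpBinary)         # receiving batch b' is selected

# Upper-bound constraints: volumes are forced to zero unless the
# corresponding transfer decision is selected.
prob += beta <= BS_MAX * V
prob += eta <= BS_MAX * W
# Eq. (5): direct transfer and transfer through storage are mutually exclusive,
# and only allowed if the receiving batch is selected.
prob += V + W <= SEL
```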

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780444642417502214

Temporal Checkpoint Selection and Temporal Verification

Xiao Liu, ... Jinjun Chen, in Temporal QOS Management in Scientific Cloud Workflow Systems, 2012

7.2.2 Minimum Probability Time Redundancy

After we have identified the effective probability consistency range for temporal violation handling, the next issue is to determine at which activity point to check the temporal consistency so that a temporal violation can be detected as early as possible. Here, a necessary and sufficient checkpoint selection strategy is presented. First, the definitions of probability time redundancy and minimum probability time redundancy are given.

Definition 7.1 (Probability Time Redundancy for Single Workflow Activity)

At activity point $a_p$ between $a_i$ and $a_j$ ($i \le p \le j$), let $U(a_i, a_j)$ be of $\beta\%$ consistency with percentile $\lambda_\beta$, which is above the threshold of $\theta\%$ with percentile $\lambda_\theta$. Then the probability time redundancy of $U(a_i, a_j)$ at $a_p$ is defined as $PTR(U(a_i,a_j), a_p) = u(a_i,a_j) - [R(a_i,a_p) + \theta(a_{p+1},a_j)]$, where $\theta(a_{p+1},a_j) = \sum_{k=p+1}^{j} (\mu_k + \lambda_\theta \sigma_k)$.

Definition 7.2 (Minimum Probability Time Redundancy)

Let $U_1, U_2, \ldots, U_N$ be $N$ upper bound constraints that all cover $a_p$. Then, at $a_p$, the minimum probability time redundancy is defined as the minimum of all probability time redundancies of $U_1, U_2, \ldots, U_N$ and is represented as $MPTR(a_p) = \min\{PTR(U_s, a_p) \mid s = 1, 2, \ldots, N\}$.

The purpose of defining minimum probability time redundancy is to detect the earliest possible temporal violations. Based on Definition 7.2, Theorem 7.1 is presented to locate the exact temporal constraint which has the temporal consistency state below the θ% bottom line.

Theorem 7.1

At workflow activity point $a_p$, if $R(a_p) > \theta(a_p) + MPTR(a_{p-1})$, then at least one of the temporal constraints is violated, and it is exactly the one whose time redundancy at $a_{p-1}$ is $MPTR(a_{p-1})$.

Proof

Suppose that $U(a_k, a_l)$ ($k < p < l$) is the upper bound constraint with $MPTR(a_{p-1})$ and that its probability consistency is above the threshold before the execution of $a_p$. Then, according to Definition 7.1, at $a_{p-1}$ we have $u(a_k,a_l) > R(a_k,a_{p-1}) + \theta(a_p,a_l)$, and here $MPTR(a_{p-1}) = u(a_k,a_l) - R(a_k,a_{p-1}) - \theta(a_p,a_l)$. Now, assume that at activity $a_p$ we have $R(a_p) > \theta(a_p) + MPTR(a_{p-1})$, which means $R(a_p) > \theta(a_p) + u(a_k,a_l) - R(a_k,a_{p-1}) - \theta(a_p,a_l)$, that is, $u(a_k,a_l) < R(a_p) + R(a_k,a_{p-1}) + \theta(a_p,a_l) - \theta(a_p)$, where the right-hand side equals $R(a_k,a_p) + \theta(a_{p+1},a_l)$. Hence $u(a_k,a_l) < R(a_k,a_p) + \theta(a_{p+1},a_l)$, which results in a probability of temporal consistency lower than the $\theta\%$ obtained when $u(a_k,a_l) = R(a_k,a_p) + \theta(a_{p+1},a_l)$. Therefore, a potential temporal violation is detected, and it is exactly the one whose time redundancy at $a_{p-1}$ is $MPTR(a_{p-1})$. Thus, the theorem holds.
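A minimal Python sketch of the resulting checkpoint test is given below, assuming normally distributed activity durations so that the $\theta\%$ percentile $\lambda_\theta$ can be obtained from the standard normal distribution; the data layout (lists of runtimes, means, standard deviations and (i, j, u) constraints) is an assumption for illustration.

```python
from scipy.stats import norm

def select_checkpoints(runtimes, mus, sigmas, constraints, theta):
    """Sketch of the minimum-probability-time-redundancy test of Theorem 7.1.
    `runtimes[p]` is the completed duration of activity a_p (so R(a_i, a_j) is a
    partial sum), `constraints` is a list of (i, j, u) upper bound constraints
    U(a_i, a_j) with value u, and `theta` is the bottom-line consistency
    percentage. Returns the activity indices where a violation is detected."""
    lam = norm.ppf(theta / 100.0)
    def theta_sum(i, j):              # theta(a_i, a_j) = sum_k (mu_k + lam * sigma_k)
        return sum(mus[k] + lam * sigmas[k] for k in range(i, j + 1))
    def R(i, j):                      # accumulated real execution time R(a_i, a_j)
        return sum(runtimes[k] for k in range(i, j + 1))

    detected = []
    for p in range(1, len(runtimes)):
        # Probability time redundancies of all constraints covering a_{p-1}.
        ptrs = [u - (R(i, p - 1) + theta_sum(p, j))
                for (i, j, u) in constraints if i <= p - 1 <= j]
        if not ptrs:
            continue
        mptr = min(ptrs)              # MPTR(a_{p-1})
        # Theorem 7.1: violation if R(a_p) > theta(a_p) + MPTR(a_{p-1}).
        if runtimes[p] > (mus[p] + lam * sigmas[p]) + mptr:
            detected.append(p)
    return detected
```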

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123970107000070

11th International Symposium on Process Systems Engineering

James C. Atuonwu, ... Antonius J. B. van Boxtel, in Computer Aided Chemical Engineering, 2012

3 Results and Discussion

A sensitivity analysis shows that the optimal adsorbent choices depend strongly on the upper and lower bound constraints of the isotherm parameters (x). The active sets are {b0, n} for the lower bound constraints and {X01, X02, X03, X04, E, F, G, H} for the upper bound constraints. Thus, in choosing among adsorbents of known isotherm parameters for drying applications, the lower the values of {b0, n} and the higher the values of {X01, X02, X03, X04, E, F, G, H}, the more suitable the adsorbent. When the upper and lower bound constraints are set to within ±50% of typical values for adsorbents reported in the literature (Atuonwu et al., 2011; Mihoubi and Bellagi, 2006; Moore and Serbezov, 2005), the optimal choice of adsorbents is as shown in Fig. 3. Ambient air dehumidification, which occurs in the first stage, requires a type 1 adsorbent (e.g. zeolite), while exhaust air dehumidification in the second stage uses a type 4 or 5 isotherm-based adsorbent (e.g. silica gel). The following is observed. First, there is sorption capacity-vapour pressure matching: the adsorbent chosen for ambient air dehumidification has a higher sorption capacity at lower drying air vapour pressures, while for the moister, higher vapour pressure first-stage dryer exhaust air, another adsorbent of higher sorption capacity is chosen. Second, there is heat requirement matching: the optimal operating conditions (Table 1) show that the required regeneration air temperature for the second-stage adsorbent equals the first-stage regenerator exhaust air temperature (streams 11 and 12), so Heater2 in Fig. 1 is redundant. Third, the flowrate (speed) of the second-stage adsorbent, 3701 kg/h, is much higher than that of the first stage, returned as 2139 kg/h. This implies that the second-stage wheel behaviour is optimized for dryer and regenerator exhaust heat recovery, while the first stage is optimized for air dehumidification. Overall, these effects ensure a highly energy-efficient system that improves further with heat recovery.


Fig. 3. Adsorbent-water vapour sorption isotherms for optimal adsorbent choices per stage showing type 1 in first stage and type 4 (or 5) in second stage

Table 1. Process variables for optimal adsorption dryer

No. | Flow (kg/h) | Temp. (°C) | Humidity (kg/kg) | No. | Flow (kg/h) | Temp. (°C) | Humidity (kg/kg)
1 | 54000 | 25 | 0.0100 | 8 | 72 | 46 | 0.0500
2 | 54000 | 46 | 0.0020 | 9 | 3794 | 25 | 0.0100
3 | 54000 | 24 | 0.0109 | 10 | 3794 | 400 | 0.0100
4 | 54000 | 34 | 0.0061 | 11 | 3794 | 229 | 0.1232
5 | 54000 | 23 | 0.0105 | 12 | 3794 | 229 | 0.1232
6 | 72 | 23 | 10.0000 | 13 | 3794 | 130 | 0.1918
7 | 72 | 34 | 6.6765 | | | |

The specific energy consumption of the system is 2040 kJ/kg of water evaporated. With heat recovery, where stream 13 is used to preheat stream 9, it becomes 1565 kJ/kg. For a conventional two-stage dryer at the same drying air temperatures, the operating conditions at optimized energy consumption are shown in Table 2. The drying capacity is much lower, so the required air flowrate is much higher (compare stream 1 in Tables 1 and 2). The energy required to heat this high air flow increases, and the scope for heat recovery is very low since the exhaust air (stream 5) temperature is very close to ambient. The specific energy consumption is 4310 kJ/kg. In comparison, therefore, the optimal adsorption drying system reduces energy consumption by about 64%.

Table 2. Process variables for conventional dryer

No. | Flow (kg/h) | Temp. (°C) | Humidity (kg/kg) | No. | Flow (kg/h) | Temp. (°C) | Humidity (kg/kg)
1 | 71067 | 10 | 0.0100 | 5 | 71067 | 28 | 0.0141
2 | 71067 | 46 | 0.0100 | 6 | 72 | 10 | 10.0000
3 | 71067 | 29 | 0.0108 | 7 | 72 | 29 | 3.2894
4 | 71067 | 36 | 0.0108 | 8 | 72 | 28 | 0.0500
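A quick arithmetic check of the energy figures quoted above (2040, 1565 and 4310 kJ/kg) confirms the reported saving of about 64%:

```python
# Quick arithmetic check of the reported energy figures (values from the text above).
adsorption = 2040.0        # kJ/kg water evaporated, adsorption dryer, no heat recovery
adsorption_hr = 1565.0     # kJ/kg, with heat recovery (stream 13 preheats stream 9)
conventional = 4310.0      # kJ/kg, conventional two-stage dryer

saving = 1.0 - adsorption_hr / conventional
print(f"Energy reduction with heat recovery: {saving:.1%}")   # ~63.7%, i.e. about 64%
```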

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780444595065501000

Rate Analysis for Embedded Systems

ANMOL MATHUR, ... RAJESH K. GUPTA, in Readings in Hardware/Software Co-Design, 2002

3 RATE ANALYSIS FRAMEWORK

In this section we define average execution rate, give a precise formulation of the problems that are addressed in the article, and then give a brief overview of our rate analysis framework.

3.1 Average Execution Rate

The rate of execution of a process is defined as the number of executions of the process per unit time. Let

(1) $x_i(k)$ = the time at which process $p_i$ starts executing for the $(k + 1)$th time.

We are concerned with systems that exhibit infinite behavior. Such systems behave in a cyclic fashion in steady state after the initial transients. The following limit has been used by researchers as a standard way of summarizing the performance of a process in such a system (see Bacelli et al. [1992] and Ramamoorthy and Ho [1980]). We use the same definition and define the average interexecution time Ti and the average execution rate ri of a process pi as

(2) $T_i = \lim_{n\to\infty} \frac{\sum_{k=0}^{n-1} [x_i(k+1) - x_i(k)]}{n} = \lim_{n\to\infty} \frac{x_i(n)}{n}$ and $r_i = T_i^{-1}$,

if the preceding limit exists. We show in Section 3.1 that for a finite process graph with finite edge delays, the preceding limit exists and can be efficiently computed. Thus, in this article we focus on asymptotic rates of execution. Note that the time between successive invocations of a process (interexecution time) is not constant, hence the instantaneous execution rate of a process keeps changing. The following example illustrates our definition of average rate of execution.

Example 3.1

Consider the process graph shown in Figure 2. Assuming that the processes start executing at time 0 and that the edge delays equal their lower bounds (δ12 = 1, δ21 = 2), the sequences of initiation times of the processes are:

Process p1: 0, 2, 3, 5, 6, 8 ···

Process p2: 0, 1, 3, 4, 6, 7 ···.

Thus, the sequences of interexecution times for the two processes are:

Process p1: 2, 1, 2, 1 ···

Process p2: 1, 2, 1, 2 ···,

and the average interexecution time is 3/2. Thus, using the definition of average rate of execution given previously, the execution rates of both processes in this example are 2/3. Notice that the limit in our definition of the average rate of execution exists in this case because both sequences of interexecution times are periodic (with period 2).
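The sequences above can be reproduced with a small simulation, assuming the firing rule that each process starts as soon as it receives the enable signal issued by the other process's previous initiation plus the edge delay (one initial enable per edge at time 0):

```python
def simulate_two_process_cycle(d12, d21, n_iter=5):
    """Reproduce Example 3.1: two processes p1 and p2 in a cycle, each enabled by
    the other's previous initiation plus the corresponding edge delay (one initial
    enable token per edge at time 0, an assumption consistent with the sequences above)."""
    t1, t2 = [0], [0]                      # initiation times of p1 and p2
    for _ in range(n_iter):
        t2.append(t1[-1] + d12)            # p2 starts d12 after p1's latest start
        t1.append(t2[-2] + d21)            # p1 starts d21 after p2's previous start
    return t1, t2

t1, t2 = simulate_two_process_cycle(1, 2)
print("p1 starts:", t1)                    # [0, 2, 3, 5, 6, 8]
print("p2 starts:", t2)                    # [0, 1, 3, 4, 6, 7]
# Average interexecution time -> (d12 + d21) / 2 = 3/2, so both rates -> 2/3.
```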

3.2 Rate Constraints

We assume that the designer specifies upper and lower bounds on the average execution rates of all the processes in the system. Thus, each process pi in the process graph is associated with a constraint interval Ii = [Li, Ui]. Constraint intervals for processes in a process graph can be arbitrary (since they are specified independently for each process by the designer). If there is no lower bound constraint on the rate of a process then Li = 0, and if there is no upper bound constraint then Ui = ∞. For the specified rate constraints to be satisfied, the computed execution rate for each process must lie within its constraint interval; that is

$L_i \le r_i \le U_i.$

Usually the environment in which an embedded system works imposes rate constraints on various components (processes) of the system. So the rates at which certain events happen in the environment place execution rate constraints on the components of the embedded system that are supposed to process those events.

3.3 Problem Formulation

Given a description of an embedded system (process graph and the sequencing graphs of the processes), along with the associated rate constraints, the following problems need to be addressed in the rate analysis framework.

Delay Analysis

Compute bounds on the execution times of operations in the sequencing graphs, and use these estimates to find bounds on the delays of the edges in the process graph. Delay analysis requires computations of bounds on the time after the initiation of a process when it issues an enable signal for another process, and the estimation of the communication delay between the process generating the enable signal and the one receiving it. This problem has been addressed by Gupta [1996] and Mathur et al. [1996], and is not the focus of this article.

Consistency Checking of Rate Constraints

This problem arises because the designer usually specifies the rate constraints independently for each process. A set of rate constraints is said to be inconsistent if they cannot be simultaneously satisfied for a given process graph topology, irrespective of the delay intervals on the process graph edges. Thus, if a set of rate constraints is inconsistent, the computed rate intervals can never satisfy all the rate constraints. Since consistency of a set of rate constraints is independent of the actual delay intervals in the process graph, we can state the consistency checking problem as follows: Given a process graph and the rate constraint intervals associated with each process in the process graph, are the constraint intervals consistent?

Section 5 describes our consistency checking algorithm.

Rate Analysis

The rate analysis problem is stated as follows. Given a description of an embedded system as a process graph and the associated sequencing graphs, find upper and lower bounds on the average execution rate of each process in the process graph.

So, rate analysis finds an interval [rl(pi), ru(pi)] for each process pi such that the average rate of execution of the process is guaranteed to lie in this interval. The rate constraints on pi are said to be satisfied if the rate interval computed by rate analysis is contained in [Li, Ui]. If the computed rate interval is not contained in [Li, Ui], then one or both of the rate constraints of pi are violated. In Section 4 we present our algorithms for rate analysis.
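A trivial sketch of this containment check, with the default bounds L_i = 0 and U_i = ∞ for unspecified constraints, is:

```python
import math

def rate_constraints_satisfied(rate_interval, constraint_interval=(0.0, math.inf)):
    """Check whether a computed rate interval [r_l(p_i), r_u(p_i)] is contained in
    the designer's constraint interval [L_i, U_i]; a missing lower bound defaults
    to 0 and a missing upper bound to infinity, as described above."""
    (r_l, r_u), (L, U) = rate_interval, constraint_interval
    return L <= r_l and r_u <= U

print(rate_constraints_satisfied((0.6, 0.7), (0.5, 1.0)))   # True: both constraints satisfied
print(rate_constraints_satisfied((0.4, 0.7), (0.5, 1.0)))   # False: lower rate bound violated
```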

Rate Constraint Debugging and Process Redesign

If rate analysis finds that some of the rate constraints are violated, the designer needs to redesign certain processes and/or change the process graph topology by altering the manner in which the processes communicate. The rate analysis tool needs to give adequate information about the cause of the rate constraint violation to help in this step. We discuss some possible approaches in Section 6.

Figure 4 shows our framework for rate analysis. The design specification is first translated from a high-level description to a process graph along with the associated sequencing graphs. First, the rate constraints are checked for consistency. If the rate constraints are consistent, rate analysis is performed on the process graph. If all the computed rate intervals are contained in the intervals defined by the rate constraints, then the system satisfies all the rate constraints and no redesign is required. If a rate constraint is violated, the tool gives the user the reasons for the violation and this information can be used to redesign the relevant processes. Although consistency analysis of the rate constraints precedes rate analysis in the flow, we discuss rate analysis before consistency analysis in this article. This is motivated by the fact that many of the results required for describing our consistency-checking algorithm are more natural to discuss in the context of rate analysis.


Fig. 4. An overview of our rate analysis framework.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781558607026500181

Applications of state-of-the-art malware modeling frameworks

Vasileios Karyotis, M.H.R. Khouzani, in Malware Diffusion Models for Wireless Complex Networks, 2016

9.1.5 Robustness Analysis for Wireless Multihop Networks

In this subsection, based on the previous analysis of the queuing-based model for malware diffusion modeling, we provide some results that can be used to characterize generic topology control based attacks and thus assess network robustness. The results are indicative and focus only on wireless multihop networks. However, extension to other types of network topology is straightforward.

In the following, for presentation purposes, we choose the maximum number of available attackers to be $M_{\max} = 50$, while the range of the malicious transmission radius is $[r_{\min}, r_{\max}] = [200, 300]$ m. The transmission radius of legitimate nodes is assumed fixed and equal to R = 200 m. The rest of the parameters of the optimization problem, i.e. those in c and d, are provided with each figure separately.

The generic objective function, as explained in the previous subsection, is concave over the optimization interval defined by the above selections of the optimization variables, and the search for the optimal solution is constrained to a closed region defined by linear constraints. Thus, the optimal value will lie either at a stationary point of the function or on the boundary of the corresponding search space. In this case, since the gradient of the function is nonzero at all points of the admissible region, the objective function has no stationary points in the corresponding interval, and consequently the optimal value lies on the boundary. The two objective functions presented (yielded by Eq. (9.12a) for the two different values of c mentioned before) differ only in a multiplicative constant, which affects only the respective optimal values of each optimization problem and not the form of the solutions themselves.

As expected by the form of optimization problem (9.12), the optimal value of the objective function in each of the above problems is attained when the two optimization variables satisfy their upper bound constraints, i.e. x1=50,x2=300, namely, at the boundary of the search space. In this case, the optimal strategy that maximizes network damage is {M=50,r=300}. Each attacker should fully utilize its transmission radius and all available attackers should be used to maximize the expected network damage. However, that would not be the case in the event that the optimization objective was a utility function that would take into account the available energy resources, in which case each attacker would have to constantly adapt its transmission radius in a clever way, in order to save energy resources without sacrificing potential network damage. It should be noted that even though this result seems too obvious to require an optimization problem to be solved, this is not always the case if other requirements and operational aspects are posed. However, the methodology and solution approach will be exactly the same and it can be directly extended to cover those cases as well. The above example motivates such methodology in a simple and straightforward manner.
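The boundary-optimum behaviour can be illustrated with a short sketch; the objective below is a purely hypothetical stand-in for the expected-damage objective of Eq. (9.12a) (which is not reproduced here), chosen only because it is increasing in both optimization variables:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the expected-damage objective of Eq. (9.12a): any
# objective increasing in both the number of attackers x1 = M and the malicious
# transmission radius x2 = r suffices for the illustration.
def expected_damage(x):
    M, r = x
    return M * (np.pi * r ** 2)            # attackers times covered area (illustrative only)

# Box constraints: 1 <= M <= Mmax = 50 attackers, 200 <= r <= 300 m.
bounds = [(1, 50), (200, 300)]
res = minimize(lambda x: -expected_damage(x), x0=[10.0, 250.0],
               method="L-BFGS-B", bounds=bounds)
print(res.x)                               # converges to the upper-bound corner, ~[50, 300]
```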

In Fig. 9.3, we provide the optimal values of the expected number of infected nodes for various values of λ∕μ and numbers of network nodes N. As expected, higher values of λ∕μ, as well as denser topologies, yield greater network damage, meaning that networks where the ratio λ∕μ works in favor of the attacker further improve the result of an attack strategy. However, as the ratio λ∕μ grows beyond a threshold value, no further significant damage is caused. This means that beyond this point a chosen attack strategy remains relatively unaffected by λ∕μ, and thus no effort needs to be put into improving the attacker's side (i.e. possibly increasing the link infection rate). Consequently, a group of attackers applying the aforementioned attack strategy should only improve its one-hop operation (namely, λ) if the attacked network is either dense or the ratio λ∕μ is below unity.


FIGURE 9.3. Optimal E[LI ] versus λ∕μ.

From Karyotis V, Papavassiliou S. On the malware spreading over non-propagative wireless ad hoc networks: the attacker’s perspective. In: Proceedings of the 3rd ACM workshop on QoS and security for wireless and mobile networks. p. 156–9.©  2007 Association for Computing Machinery, Inc. Reprinted by permission.

In Fig. 9.4, the objective function value is depicted against the number of network nodes for various values of λ∕μ. The optimal strategy maximizing network damage is again {M=50, r=300} and the optimal value scales linearly with the total number of network nodes. As λ∕μ increases, so does the optimal E[LI], but there exists a threshold value of λ∕μ above which an additional increase in the ratio of the link infection rate to the recovery rate does not offer significant profit in terms of network damage, as shown in Fig. 9.4. It can also be observed that for sparse topologies (i.e. fewer network nodes for the same network deployment area), different values of λ∕μ do not yield different results, as the sparsity of the network does not allow many infections and recoveries to take place and thus the results appear similar. On the contrary, such differences are evident for denser topologies.


FIGURE 9.4. Optimal E[LI ] versus N.

From Karyotis V, Papavassiliou S. On the malware spreading over non-propagative wireless ad hoc networks: the attacker’s perspective. In: Proceedings of the 3rd ACM workshop on QoS and security for wireless and mobile networks. p. 156–9.©  2007 Association for Computing Machinery, Inc. Reprinted by permission.

As mentioned before, both optimization problems have the same form. Consequently, the optimal strategy that maximizes network damage is again {M=50,r=300}. It should be noted that this strategy corresponds to a bang-bang control policy (bang-bang policies arise in minimum-time problems or as application of Pontryagin’s minimum or maximum principle, as explained in more detail in Appendix C) if one assumes that the noninfected queue is the controller of the infected queue and the aggregated service rate of the noninfected queue the control signal of the infected queue, [194].

Fig. 9.5 depicts the optimal values of the average node infection rate with respect to λ and μ for a network of N=100 nodes. A similar form (properly scaled) would be obtained for different network densities. The optimal values of E[γS] scale linearly with λ and μ, but the rates of increase differ, with μ having the slightly greater rate of the two.


FIGURE 9.5. Optimal E[γS ] versus (λ,μ).

From Karyotis V, Papavassiliou S. On the malware spreading over non-propagative wireless ad hoc networks: the attacker’s perspective. In: Proceedings of the 3rd ACM workshop on QoS and security for wireless and mobile networks. p. 156–9.©  2007 Association for Computing Machinery, Inc. Reprinted by permission.

Fig. 9.6 depicts the dependence of the optimal value on the number of network nodes, for various values of λ and μ. Similar observations to those made above for the maximization of the average number of infected nodes apply. Linearity underlies the dependence of E[γS] on network density, and for sparser topologies the results do not show significant differences. However, as the network density increases, differences between the various network cases (i.e. different {λ,μ} combinations) become more prominent.


FIGURE 9.6. Optimal E[γS] versus N.

From Karyotis V, Papavassiliou S. On the malware spreading over non-propagative wireless ad hoc networks: the attacker’s perspective. In: Proceedings of the 3rd ACM workshop on QoS and security for wireless and mobile networks. p. 156–9.©  2007 Association for Computing Machinery, Inc. Reprinted by permission.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128027141000190

The multidimensional 0–1 knapsack problem: An overview

Arnaud Fréville, in European Journal of Operational Research, 2004

The MKP is one of the most well-known constrained integer programming problems, and the large domain of its applications has greatly contributed to its fame. Following the seminal paper of Lorie and Savage [120], MKP modeling has been investigated intensively in capital budgeting and project selection applications [135,146,183]. Recently, Meier et al. [137] investigated more realistic approaches combining capital budgeting models with novel, real techniques for project evaluation. One of their contributions was to propose a new scenario-based capital budgeting model that includes the MKP as a subproblem coupled with generalized upper bound (GUB) constraints. Beaujon et al. [19] also reported a MIP formulation designed to select projects for inclusion in an R&D portfolio, this model taking the form of a MKP with other generalized constraints. Capital budgeting has been an ongoing management challenge for not-for-profit hospitals and multihospital healthcare systems in the United States; in this context, a financially oriented capital budgeting framework using a MKP formulation is developed in [108]. The MKP has also been used to model problems including cutting stock [70], loading problems [20,165], investment policy for the tourism sector of a developing country [65], allocation of databases and processors in distributed data processing [63], delivery of groceries in vehicles with multiple compartments [25] and approval voting [170]. More recently, the MKP has been used to model the daily management of a remote sensing satellite such as SPOT, which consists in deciding every day which photographs will be attempted the next day [179].

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S0377221703002741

A survey on resource allocation techniques in OFDM(A) networks

Farshad Shams, ... Marco Luise, in Computer Networks, 2014

5.2 Cooperative solutions

Recently, several other methods which use various heuristics based on cooperative game theory [120,124] have been proposed to address the problem of fair resource allocation for OFDMA systems, using either centralized or distributed algorithms. The Nash bargaining solution (NBS) [120] is the most refined technique applied to wireless resource allocation problems in an OFDMA network. The NBS proves the existence and uniqueness of an NE point of the following convex utility function:

(27a) $\max_{\mathbf{p},\mathcal{N}} \prod_{k=1}^{K} (r_k - \underline{r}_k)$

(27b) s.t. $r_k = \sum_{n \in \mathcal{N}_k} r_k^n \ge \underline{r}_k \quad \forall k \in \mathcal{K}$

(27c) and $\sum_{n \in \mathcal{N}_k} p_k^n \le \overline{p}_k \quad \forall k \in \mathcal{K}$

(27d) and $\mathcal{N}_k \cap \mathcal{N}_j = \emptyset \quad \forall k, j \in \mathcal{K},\ k \ne j$,

In other words, the goal is to maximize the product of the excesses of the transmitters' rates over their own minimum demands $\underline{r}_k$. The NBS guarantees that each user achieves its own demand, thus providing individual rationality in the resource allocation. An important result of applying the NBS is that the final rate allocation vector is Pareto optimal. Taking into account the strictly concave, increasing property of the logarithm function, we can transform (27a) into:

(28) $\max_{\mathbf{p},\mathcal{N}} \sum_{k=1}^{K} \log_2 (r_k - \underline{r}_k)$

Clearly, when r̲k=0, the NBS fairness scheme reduces to the weighted proportional one, with ϕk=1 [17].
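For intuition, the following toy sketch solves the log-transformed NBS objective (28) under a single shared-capacity constraint instead of the full per-subcarrier OFDMA problem; the demands and the capacity value are assumed:

```python
import numpy as np
from scipy.optimize import minimize

# Toy NBS allocation under a single shared capacity C (an assumed simplification of
# the full per-subcarrier OFDMA problem): maximize sum_k log2(r_k - r_min_k)
# subject to sum_k r_k <= C and r_k >= r_min_k, as in (28).
r_min = np.array([1.0, 2.0, 3.0])          # minimum demands (assumed values)
C = 12.0                                   # total capacity (assumed value)

neg_nbs = lambda r: -np.sum(np.log2(r - r_min))
cons = [{"type": "ineq", "fun": lambda r: C - np.sum(r)}]
bnds = [(m + 1e-6, None) for m in r_min]
res = minimize(neg_nbs, x0=r_min + 1.0, bounds=bnds, constraints=cons)
print(res.x)   # ~[3, 4, 5]: each user gets its demand plus an equal share of the excess
```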

Han et al. in [54] introduce a distributed algorithm for an OFDMA uplink based on the NBS and the Hungarian method [19] to maximize the overall system rate under individual power and rate constraints. The underlying idea is that, once the minimum demands of all users are satisfied, the remaining resources are allocated proportionally to the different users according to their own conditions. The proposed algorithm has complexity $O(K^2 N \log_2 N + K^4)$, without considering the (expensive) computational load of solving the (convex) equations of the NBS. In [95], Lee et al. solve the two subproblems of exclusive subcarrier assignment and power control in an OFDMA network with the aim of maximizing NBS fairness. The simulation results show an overall end-to-end rate between the nodes comparable to that achieved in [54].

One main drawback of applying NBS in resource allocation problems is that this scheme guarantees minimum requirements of the users, but it does not impose any upper bound constraint. In fact, the achieved data rate may be much higher than the initial demands and this is unsatisfactory from the wireless network provider viewpoint. One of the most prominent alternatives to the NBS is the Raiffa–Kalai–Smorodinsky bargaining solution (RBS), defined by Raiffa [132] and characterized by Kalai and Smorodinsky [75]. The RBS requires that a user’s payoff data rate should be proportional not only to its minimal rate, but also to its maximal one. Whereas the NBS takes into account the individual gains, RBS emphasizes the importance of one’s gain and others’ losses. For an OFDMA resource allocation problem, the RBS bargaining outcome is the solution to:

(29a) $\max_{\mathbf{p},\mathcal{N}} \prod_{k=1}^{K} \left[ r_k - \underline{r}_k + \frac{1}{K-1} \sum_{j \in \mathcal{K},\, j \ne k} (\overline{r}_j - r_j) \right]$

(29b) s.t. $\underline{r}_k \le r_k \le \overline{r}_k \quad \forall k \in \mathcal{K}$

(29c) and $\sum_{n \in \mathcal{N}_k} p_k^n \le \overline{p}_k \quad \forall k \in \mathcal{K}$

(29d) and $\mathcal{N}_k \cap \mathcal{N}_j = \emptyset \quad \forall k, j \in \mathcal{K},\ k \ne j$,

wherein $\overline{r}_k$ denotes the upper bound on the transmission rate of each user. When applying the RBS, if the channel quality of a terminal improves, it will obtain a better capacity without any reduction in that of the other users (individual monotonicity). The existence and uniqueness of the RBS can be shown, but a Pareto optimal NE point is not always attained for more than two players, as Roth showed in [135]. By again using the properties of the logarithm function, the utility maximization (29a) can be equivalently investigated using the following objective function:

(30) $\max_{\mathbf{p},\mathcal{N}} \sum_{k=1}^{K} \log_2 \frac{r_k - \underline{r}_k}{\overline{r}_k - \underline{r}_k}$

Using this formulation, the RBS is a point at which each individual's gain is proportional to its maximum gain. When $\underline{r}_k = 0$ for all $k \in \mathcal{K}$ and $r_1 : \cdots : r_K = \overline{r}_1 : \cdots : \overline{r}_K$, the RBS achieves the same result as the max–min fairness criterion. In the RBS formulation, the achieved data rate vector satisfies:

(31) $\frac{r_k - \underline{r}_k}{\overline{r}_k - \underline{r}_k} = \frac{r_j - \underline{r}_j}{\overline{r}_j - \underline{r}_j} \quad \forall j, k \in \mathcal{K}.$
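A corresponding toy sketch of the RBS property (31), again under an assumed single shared-capacity constraint rather than the full OFDMA formulation:

```python
import numpy as np

# Toy RBS allocation under a single shared capacity C (assumed simplification):
# by (31), every user has the same normalized gain t = (r_k - r_min_k) / (r_max_k - r_min_k),
# so r_k = r_min_k + t * (r_max_k - r_min_k), with t chosen to use up the capacity.
r_min = np.array([1.0, 2.0, 3.0])          # minimum demands (assumed values)
r_max = np.array([5.0, 6.0, 10.0])         # maximum rates (assumed values)
C = 12.0                                   # total capacity (assumed value)

t = (C - r_min.sum()) / (r_max - r_min).sum()   # common normalized gain in [0, 1]
r = r_min + t * (r_max - r_min)
print(t, r)   # t = 0.4, r = [2.6, 3.6, 5.8]: each gain proportional to the maximum gain
```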

In [23], Chee et al. propose a centralized algorithm for the OFDMA downlink scenario based on the RBS. The results show good performance only when the gap between the maximum and the minimum rate is (very) large. Even though the subcarriers are assigned in an exclusive manner, the computational complexity of this algorithm is $O(KN + K^2)$. Ref. [24] investigates the problem of time–space resource allocation in a MIMO-OFDMA network in the downlink direction with the aim of maximizing the data rate of each terminal, without specifying the complexity of the iterative algorithm used to solve the NBS convex equations.

Auction methods are another cooperative game scheme that has recently drawn attention in the resource allocation research literature. In [119], Noh proposes a distributed and iterative auction-based algorithm for the OFDMA uplink scenario with incomplete information. The time complexity of the algorithm is experimentally equal to $O(KN \log_2 K)$. However, the simulation parameters are not realistic (three users and three subcarriers), and it is thus hard to estimate the computational complexity with real-world network parameters. Alavi et al. in [4] propose an auction-based algorithm to achieve a nearly proportionally fair data rate vector, although the computational complexity is not specified. Ref. [37] proposes a joint downlink/uplink subcarrier allocation (with fixed power) based on a stable matching game to maximize the data rate of each terminal in the downlink and uplink directions simultaneously.

In [143], Shams et al. attempt to improve the fairness of the solution and to reduce the complexity in the uplink direction of OFDMA-based networks by deriving a coalition-based algorithm [124] that provides each terminal with exactly the desired rate, so as to satisfy both the wireless terminals and the network service provider. The numerical results show that the computational complexity of the proposed centralized algorithm is lower than K·N, thus representing, to the best of our knowledge, the cheapest one available in the literature.

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S1389128614001212
