Continuous Model

Mathematical Demography

Marc Artzrouni, in Encyclopedia of Social Measurement, 2005

Continuous versus Discrete

In a continuous model, events can take place at every point in time. For example, the time between birth and death can be any positive real number. In a discrete model, events are categorized within time intervals. For example, we might count the number of deaths between ages 0 and 1, between 1 and 5, between 5 and 10, between 10 and 15, and so on. (This example, which is typical, also shows that the lengths of the intervals need not be the same.) Both deterministic and stochastic models can be either continuous or discrete.
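As a small illustration (our own sketch, with made-up ages), discretizing continuous lifetimes simply means counting events per interval; the age bins follow the example above, with an invented closing bin:

```python
# Continuous data: ages at death can be any positive real number.
ages_at_death = [0.4, 2.7, 6.1, 8.9, 12.3, 73.5, 81.2]  # hypothetical data

# Discrete model: count deaths per age interval; intervals need not be equal.
intervals = [(0, 1), (1, 5), (5, 10), (10, 15), (15, 120)]  # last bin illustrative
counts = {iv: sum(1 for a in ages_at_death if iv[0] <= a < iv[1])
          for iv in intervals}
print(counts)
```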

The life table and its applications are presented first. The classical linear and non-linear deterministic models follow. The entry closes with stochastic models, most of which have been developed since the 1970s.

URL: https://www.sciencedirect.com/science/article/pii/B0123693985003601

Artificial Neural Networks

Igor Kononenko, Matjaž Kukar, in Machine Learning and Data Mining, 2007

11.3.2 Continuous model

Hopfield's continuous model uses the same topology as the discrete model: every neuron is connected to every other neuron. The combination function of the continuous model is analogous to Equation (11.1). The dynamics are described by the following two equations:

(11.3) $C_j \dfrac{du_j}{dt} = \sum_{i,\, i \ne j} T_{ji} X_i - \dfrac{u_j}{R_j} + I_j = A_j - \dfrac{u_j}{R_j}$

(11.4) $X_j = g_j(u_j)$

Here $X_i$ is the state (and the output) of the $i$-th neuron, $u_j$ the input (activation level) of the $j$-th neuron, and $I_j$, $R_j$, $C_j$ are constants. An output function $g_j$ defines the relation between input and output; it is differentiable and sigmoidal, with asymptotes 0 and 1. From Equation (11.3) it follows that the rate of change of $u_j$ is proportional to the difference between the current $u_j$ and the new value calculated with Equation (11.1). The condition $i \ne j$ can be omitted, since the learning rule guarantees $T_{ii} = 0$.

The learning rule for the continuous model is the same as for the discrete one. After learning, the memory matrix $T$ is fixed. The execution phase starts by setting the neurons' states to values corresponding to a new, possibly incomplete and/or noisy example. The neurons operate in parallel and continuously change their states according to the dynamics described by the equations above. Such a network is guaranteed to always converge to a fixed point (which is the result of the execution).

The stability of the continuous model can be proven by defining the energy of the network's state:

(11.5) $E = -\dfrac{1}{2} \sum_j \sum_{i,\, i \ne j} X_j X_i T_{ji} - \sum_j I_j X_j + \sum_j \dfrac{1}{R_j} \int_0^{X_j} g_j^{-1}(X)\, dX$

which decreases monotonically. The proof is provided in Section 11.7.1.
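As an illustrative sketch (our own, not code from the book), Equations (11.3) and (11.4) can be integrated with an explicit Euler scheme for a small random symmetric memory matrix. The logistic function stands in for $g_j$, and all names and sizes are invented. The energy of Equation (11.5) then decreases along the trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Symmetric memory matrix with zero diagonal, as the learning rule guarantees.
T = rng.normal(size=(n, n))
T = (T + T.T) / 2.0
np.fill_diagonal(T, 0.0)
I = rng.normal(scale=0.1, size=n)   # external inputs I_j
R = np.ones(n)                      # constants R_j
C = np.ones(n)                      # constants C_j

def g(u):
    # Sigmoidal output function with asymptotes 0 and 1 (logistic).
    return 1.0 / (1.0 + np.exp(-u))

def energy(X):
    # Equation (11.5); for the logistic g, the integral of g^{-1}
    # from 0 to X_j has the closed form X ln X + (1 - X) ln(1 - X).
    integral = X * np.log(X) + (1.0 - X) * np.log(1.0 - X)
    return -0.5 * X @ T @ X - I @ X + np.sum(integral / R)

u = rng.normal(scale=0.1, size=n)   # initial activation levels
dt = 0.01
energies = []
for _ in range(2000):               # explicit Euler integration of (11.3)
    X = g(u)
    energies.append(energy(X))
    u = u + dt * (T @ X - u / R + I) / C

# The energy decreases (up to numerical tolerance) toward a fixed point.
assert all(b <= a + 1e-6 for a, b in zip(energies, energies[1:]))
print("initial E =", round(energies[0], 4), " final E =", round(energies[-1], 4))
```

With a matrix $T$ built from stored patterns instead of the random one above, the fixed point reached from a noisy initial state would be the recalled memory.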

URL: https://www.sciencedirect.com/science/article/pii/B9781904275213500113

An integrative approach for hybrid modeling, simulation and control of data networks based on the DEVS formalism

Rodrigo Castro, Ernesto Kofman, in Modeling and Simulation of Computer Networks and Systems, 2015

4.3.4 Detailed analysis of W(t) with the discrete model

The experiment with the continuous model represents W(t) with a single trajectory, valid identically for all N users in the system. This is a simplification that omits several interesting details of the discrete and stochastic behavior of the flows.

For a more realistic view of the concurrent dynamics of W(t) at each node, we go back to the discrete model, this time modeling N=6 TCP connections sharing a common router. New TCP connections are allowed to join the system every 5 seconds, repeating the scenario tested with the continuous model. Figure 18.22 shows the result of a 30-second simulation.

Figure 18.22. Discrete approximation of TCP/AQM with new users joining the system incrementally (from N=1 to N=6). Time units correspond to seconds.

On the right-hand side we show the evolution of W for the first user that joins the system. On the left, we show W for all the extra users joining every 5 seconds. The curves W2 to W6 show in detail the adaptation of each node to the varying overall congestion conditions.

URL: https://www.sciencedirect.com/science/article/pii/B9780128008874000183

A Discrete Approach to Top-Down Modeling of Biochemical Networks

Reinhard Laubenbacher, Pedro Mendes, in Computational Systems Biology, 2006

V RELATIONSHIP BETWEEN DISCRETE AND CONTINUOUS MODELS

The relationship between discrete and continuous models has been studied extensively in population dynamics ( Durrett and Levin 1994; Henson et al. 2001; Domokos and Scheuring 2004; Geritz and Kisdi 2004). For models of biochemical and other biological networks, this relationship was first explored by Glass and Kauffman (1973), with subsequent work by Edwards (2000), Edwards et al. (2001), and Glass et al. (2003). Within the modeling frameworks explored there, (bottom-up) discrete models can be a helpful tool to provide constraints and information about (bottom-up) continuous models of the same network. A good example of how a continuous and a discrete model of the same system can be used together is given by Muraille et al. (1996), where an ODE model of immune response to a replicating pathogen is studied via a discrete logical model using the technique of Thomas (1991). The dynamics of the discrete model, which are easy to analyze, are used to obtain a qualitative picture of the dynamics of the ODE model.

A corresponding mathematical theory for top-down modeling has yet to be developed. How can high-level information from discrete multi-state dynamic models of a network be incorporated into the model selection process for low-level ODE models? For the polynomial system framework described here, we are developing such a theory in parallel with an ODE framework based on a linearization of the dynamics (i.e., the Jacobian, a first-order truncation of the Taylor approximation to the dynamics).

Estimates of the elements of the Jacobian matrix are currently pursued through non-linear least squares. Our aim is to develop ways in which these top-down approaches become synergistic. In particular, we expect the results of the discrete model to be used as initial states for the parameter estimation needed to define a continuous model. We are currently carrying out experiments that will be used to validate both methods, using integrated transcriptomics, proteomics, and metabolomics time courses measuring oxidative stress response in Saccharomyces cerevisiae.

URL: https://www.sciencedirect.com/science/article/pii/B9780120887866500319

Proceedings of the 9th International Conference on Foundations of Computer-Aided Process Design

Ruiqi Wang, ... Mengxi Liu, in Computer Aided Chemical Engineering, 2019

Non-overlapping Constraint

This work is based on a continuous model, and the shapes of the facilities are considered; consequently, a non-overlapping constraint must be included. For any two facilities there are two kinds of relative location: above/below and right/left. A binary variable can be used to describe the relative location of two facilities, and the non-overlapping constraint can be written mathematically as Equation (12).

(12) $Z_{\mathrm{overlapping},i,j} = 1 \;\Rightarrow\; |y_i - y_j| \ge \dfrac{L_{y,i}}{2} + \dfrac{L_{y,j}}{2} + d$; $\quad Z_{\mathrm{overlapping},i,j} = 0 \;\Rightarrow\; |x_i - x_j| \ge \dfrac{L_{x,i}}{2} + \dfrac{L_{x,j}}{2} + d$

where $Z_{\mathrm{overlapping},i,j}$ is a binary variable representing the relative location of facility i and facility j. When $Z_{\mathrm{overlapping},i,j} = 1$, facility j is forced to lie above or below facility i, as shown in Figure 1(a). When $Z_{\mathrm{overlapping},i,j} = 0$, facility j is forced to lie to the left or right of facility i, as shown in Figure 1(b). $x_i$, $y_i$, $x_j$, and $y_j$ are the coordinates of facilities i and j. $L_{x,i}$, $L_{x,j}$, $L_{y,i}$, and $L_{y,j}$ are the side lengths of facilities i and j along the x and y axes, respectively. Each facility reserves a clearance of d/2 for installation and maintenance, so two facilities must be separated by at least d; facilities must also keep a distance of d/2 from the boundary of the plant area.
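A quick sanity check (our own sketch, not from the paper) of the disjunction encoded by Equation (12); the function name and the example dimensions are invented:

```python
# Check the non-overlapping disjunction for two rectangular facilities given
# their centre coordinates, side lengths, the binary Z, and the clearance d.
def non_overlapping(xi, yi, Lxi, Lyi, xj, yj, Lxj, Lyj, Z, d):
    if Z == 1:   # facility j forced above or below facility i
        return abs(yi - yj) >= Lyi / 2 + Lyj / 2 + d
    else:        # facility j forced to the left or right of facility i
        return abs(xi - xj) >= Lxi / 2 + Lxj / 2 + d

# Two 4x2 facilities with centres 5 apart horizontally, clearance d = 0.5:
print(non_overlapping(0, 0, 4, 2, 5, 0, 4, 2, Z=0, d=0.5))  # horizontally separated
print(non_overlapping(0, 0, 4, 2, 5, 0, 4, 2, Z=1, d=0.5))  # not vertically separated
```

In the MILP itself a solver chooses Z; here it merely selects which of the two separation conditions is enforced.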

Figure 1. Two kinds of relative location of facility i and j

URL: https://www.sciencedirect.com/science/article/pii/B9780128185971500151

Programming reaction-diffusion processors

Andy Adamatzky, ... Tetsuya Asai, in Reaction-Diffusion Computers, 2005

5.4.5 Composition Ł

This composition is implemented in both the discrete model A2 and the continuous model ℱ4. In the A2 model the junction automaton • is excited only when exactly two of its neighbours are excited (all other automata behave in the same way as those in model A0). Therefore, the excitation spreads to the output branch only when two wave fronts meet at the intersection automaton. So, when one of the input variables is F the output is F, independent of the other input variable: T Ł F = * Ł F = F. When two single waves collide, a single wave is generated at the output channel: * Ł * = * and T Ł T = T. For inputs T and *, only the first impulse of signal T is 'supported' by the impulse of signal *, so T Ł * = * (Fig. 5.10a). In the continuous model ℱ4, the basic mechanism is similar to that of the discrete model A2. In the output channel the concentration of the activator exceeds the threshold only when two waves arrive at the junction simultaneously, so T Ł * = * (Fig. 5.10b).
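The truth table of Ł described above can be summarised in a short sketch (our own reading of the text: 'F' is no excitation, 'T' a train of waves, '*' a single wave):

```python
# Truth table of the composition Ł as described in the text.
def gate_L(a, b):
    if a == 'F' or b == 'F':   # no coincidence at the junction: T Ł F = * Ł F = F
        return 'F'
    if a == '*' or b == '*':   # only coinciding fronts pass: T Ł * = * Ł * = *
        return '*'
    return 'T'                 # T Ł T = T

print([gate_L(a, b) for a, b in
       [('T', 'F'), ('*', 'F'), ('*', '*'), ('T', 'T'), ('T', '*')]])
```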

Figure 5.10. Example of gate Ł dynamics: (a) T Ł * = * in model A 2 . (b) T Ł * = * in model ℱ4. In (b) an excitable domain is white, a passive domain is black and an impulse wave is grey.

URL: https://www.sciencedirect.com/science/article/pii/B9780444520425500062

Software Process Simulation

David L. Olson, in Encyclopedia of Information Systems, 2003

III.D.4. Other Types of Simulation Models

A number of other simulation systems have been developed, including hybrid combinations of system dynamics (continuous) models and discrete event models. State-based simulation models are more formal and have more complete graphical representations. State-based modeling is strong in its ability to capture details graphically, but is not as good as discrete event models at capturing mathematical details. State-based models take productivity, error, and detection rates as inputs and predict effectiveness in terms of effort, duration, errors detected, and errors missed by process.

Knowledge-based meta-model simulations have also been developed. These employ a statistical network. Informal process descriptions are elicited and converted into a process model. The static and dynamic properties of a process model, such as consistency, completeness, internal correctness, and traceability, can be evaluated with this type of model. Graphic views are provided to users for interactive editing and communication.

URL: https://www.sciencedirect.com/science/article/pii/B0122272404001635

30th European Symposium on Computer Aided Process Engineering

Karim Alloula, Jean-Pierre Belaud, in Computer Aided Chemical Engineering, 2020

1 Introduction

The process system engineering activity mainly consists in modeling, and then optimizing, process units or whole plants. For optimizing continuous models, the CAPE community usually tries to find the global minimum of a continuous criterion under a set of linear or non-linear constraints. When the global optimization problem is not convex, it may be hard to solve because the number of local minima may grow exponentially with the number of variables. Any method, deterministic or heuristic, may stop its iterations near a local minimum, or the computational complexity of global optimization algorithms may lead to very long run times before an optimum is produced.

This paper presents the main principles of a new global continuous optimization method that tries to overcome, at least partially, some of the drawbacks of other deterministic global continuous methods in the unconstrained case.

URL: https://www.sciencedirect.com/science/article/pii/B9780128233771503347

Machine Learning Basics

Igor Kononenko, Matjaž Kukar, in Machine Learning and Data Mining, 2007

3.2.5 Performance evaluation in regression

In regression problems it is unreasonable to use classification accuracy. The reason is simple: in most problems it would be 0, since we model continuous-valued rather than discrete functions. More appropriate measures are based on the difference between the true and the predicted function values.

Mean squared error. The most frequently used measure of quality for automatically built continuous functions (models) $\hat{f}$ is the mean squared error (MSE). It is defined as the average squared difference between the predicted value $\hat{f}_i$ and the desired (correct) value $f_i$:

(3.43) $\mathrm{MSE} = \dfrac{1}{n} \sum_{i=1}^{n} \left( f_i - \hat{f}_i \right)^2$

Because the error magnitude depends on the magnitudes of possible function values it is advisable to use the relative mean squared error instead:

(3.44) $\mathrm{RE} = \dfrac{n \cdot \mathrm{MSE}}{\sum_i \left( f_i - \bar{f} \right)^2}$

where $\bar{f}$ is the average function value:

(3.45) $\bar{f} = \dfrac{1}{n} \sum_i f_i$

The relative mean squared error is nonnegative, and for acceptable models less than 1:

(3.46) $0 \le \mathrm{RE} \le 1$

RE = 1 can be trivially achieved by using the average model $\hat{f}_i = \bar{f}$ (Equation (3.45)). If, for some model, RE > 1, the model is completely useless. The ideal model has $\hat{f}_i = f_i$ with RE = 0.

Mean absolute error. Another frequently used quality measure for automatically built continuous functions (models) $\hat{f}$ is the mean absolute error (MAE). It is defined as the average absolute difference between the predicted value $\hat{f}_i$ and the desired (correct) value $f_i$:

(3.47) $\mathrm{MAE} = \dfrac{1}{n} \sum_{i=1}^{n} \left| f_i - \hat{f}_i \right|$

Since the magnitude of MAE depends on the magnitudes of possible function values, it is often better to use the relative mean absolute error:

(3.48) $\mathrm{RMAE} = \dfrac{n \cdot \mathrm{MAE}}{\sum_i \left| f_i - \bar{f} \right|}$

where $\bar{f}$ is defined as in Equation (3.45). The relative mean absolute error is nonnegative, and for acceptable models less than 1:

(3.49) $0 \le \mathrm{RMAE} \le 1$

RMAE = 1 can also be trivially achieved by using the average model $\hat{f}_i = \bar{f}$.
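Equations (3.43), (3.44), (3.47) and (3.48) translate directly into code. The sketch below (function and variable names are ours) checks the boundary case discussed above, where the trivial average model gives RE = RMAE = 1:

```python
# Mean squared error, Equation (3.43).
def mse(f, fh):
    return sum((a - b) ** 2 for a, b in zip(f, fh)) / len(f)

# Mean absolute error, Equation (3.47).
def mae(f, fh):
    return sum(abs(a - b) for a, b in zip(f, fh)) / len(f)

# Relative mean squared error, Equation (3.44).
def re(f, fh):
    fbar = sum(f) / len(f)
    return len(f) * mse(f, fh) / sum((a - fbar) ** 2 for a in f)

# Relative mean absolute error, Equation (3.48).
def rmae(f, fh):
    fbar = sum(f) / len(f)
    return len(f) * mae(f, fh) / sum(abs(a - fbar) for a in f)

f = [1.0, 2.0, 3.0, 4.0]
avg_model = [sum(f) / len(f)] * len(f)   # predict the mean everywhere
print(re(f, avg_model), rmae(f, avg_model))   # both equal 1
```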

Correlation coefficient. The correlation coefficient measures the statistical (Pearson's) correlation between the actual function values $f_i$ and the predicted function values $\hat{f}_i$ of a dependent (regression) variable:

(3.50) $R = \dfrac{S_{f\hat{f}}}{S_f S_{\hat{f}}}$

where

$S_{f\hat{f}} = \dfrac{\sum_{i=1}^{n} \left( f_i - \bar{f} \right)\left( \hat{f}_i - \bar{\hat{f}} \right)}{n-1}, \quad S_f = \sqrt{\dfrac{\sum_{i=1}^{n} \left( f_i - \bar{f} \right)^2}{n-1}}, \quad S_{\hat{f}} = \sqrt{\dfrac{\sum_{i=1}^{n} \left( \hat{f}_i - \bar{\hat{f}} \right)^2}{n-1}}, \quad \bar{f} = \dfrac{1}{n} \sum_i f_i, \quad \bar{\hat{f}} = \dfrac{1}{n} \sum_i \hat{f}_i$

The correlation coefficient is bounded to the interval [−1, 1]: 1 stands for perfect positive correlation, −1 for perfect negative correlation, and 0 for no correlation at all. For useful regression predictors, only positive values of the correlation coefficient make sense:

(3.51) $0 < R \le 1$

As opposed to the mean squared error and the mean absolute error, which need to be minimized, the learning algorithm aims to maximize the correlation coefficient.
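Equation (3.50), with the sample definitions above, can be sketched as follows (names are ours); perfectly linear predictions give R = ±1:

```python
import math

# Pearson correlation coefficient, Equation (3.50), with (n-1) normalisation.
def pearson_r(f, fh):
    n = len(f)
    fbar = sum(f) / n
    fhbar = sum(fh) / n
    sffh = sum((a - fbar) * (b - fhbar) for a, b in zip(f, fh)) / (n - 1)
    sf = math.sqrt(sum((a - fbar) ** 2 for a in f) / (n - 1))
    sfh = math.sqrt(sum((b - fhbar) ** 2 for b in fh) / (n - 1))
    return sffh / (sf * sfh)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # close to 1: perfect positive
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # close to -1: perfect negative
```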

URL: https://www.sciencedirect.com/science/article/pii/B9781904275213500034

Continuous Simulation

Stanislaw Raczynski, in Encyclopedia of Information Systems, 2003

I. Introduction

Roughly speaking, continuous simulation is one of the two main fields of computer simulation and modeling, the other being discrete event simulation. Continuous models include those of concentrated (lumped) parameter systems and distributed parameter systems. The former group includes models for which the power of the set of all possible states (or, more precisely, of the set of equivalence classes of inputs) is equal to the power of the set of real numbers; the latter refers to systems for which that set has cardinality greater than that of the reals. These classes of dynamic systems are described in more detail in the next section. The most common mathematical tools for continuous modeling and simulation are ordinary differential equations (ODEs) and partial differential equations (PDEs).

First of all, we must remember that in a digital computer nothing is continuous, so continuous simulation on this hardware is an illusion. Historically, the first (and only) devices that truly realized continuous simulation were analog computers. Those machines were able to simulate genuinely continuous and parallel processes. The development of digital machines made it necessary to look for new numerical methods and implementations in order to obtain good approximations to the solutions of both ordinary and partial differential equations. This aim has been achieved to some extent, so quite good software tools for continuous simulation are available.

In the present article some of the main algorithms are discussed, such as the Euler, Runge–Kutta, multistep, predictor-corrector, Richardson extrapolation, and midpoint methods for ODEs, and the main finite difference and finite element methods for PDEs.

To illustrate the elementary reason why continuous simulation on a digital computer is only an imperfect approximation of the real system dynamics, consider a simple model of an integrator. This is a continuous device that receives an input signal and produces as output the integral of the input. The differential equation that describes the device is

(1) $dx/dt = u(t)$

where u is the input and x is the output. The most obvious and simple algorithm that can be applied on a digital computer is to discretize the time variable and advance the time from 0 to the desired final time in small steps h. The iterative formula is

(2) $x(t+h) = x(t) + h\,u(t)$

given the initial condition x(0). This is a simple "rectangle rule" that approximates the area below the curve u(t) by a series of rectangles. The result always carries some error. From the mathematical point of view this algorithm is quite good for regular input signals, because the error tends to zero as h approaches zero, so any required accuracy can be obtained.

Suppose now that our task is to simulate the integrator over the time interval [0,1] with u = const = 1. We want to implement the above algorithm on a computer on which real numbers are stored with a resolution of eight significant digits. To achieve high accuracy we execute the corresponding program of Eq. (2) several times, with h approaching zero. One might expect that the error will also approach zero. Unfortunately, this is not the case. Observe that if h < 0.000000001, the result of the sum on the right-hand side of Eq. (2) is equal to x(t) instead of x(t) + hu(t), because of the arithmetic resolution of the computer. So the error does not tend to zero when h becomes small, and the final result may be zero instead of one (the integral of 1 over [0,1]). This example is rather primitive, but it shows the important fact that we cannot, even in theory, construct a series of digital simulations of a continuous problem that tends to the exact solution. Of course, there is a huge number of numerical methods that guarantee sufficiently small errors and are used with good results, but we must be careful with any numerical algorithm and be aware of its requirements on the simulated signals to avoid serious methodological errors. A simple fact we must always take into account is that in a digital computer real numbers do not exist; they are always represented by rough approximations.
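The stagnation effect can be reproduced on any machine with finite precision. The sketch below (our own) uses standard double precision, where the resolution is roughly 16 significant digits rather than the 8 of the example, so the threshold step is correspondingly smaller:

```python
# Once h*u drops below the resolution of x(t), the update of Eq. (2)
# x(t+h) = x(t) + h*u(t) no longer changes x at all.
x, h, u = 1.0, 1e-17, 1.0
print(x + h * u == x)        # True: the addition is lost to rounding

# With a reasonable step the rectangle rule behaves as the theory predicts.
def integrate(u, h, t_end):
    x, t = 0.0, 0.0
    while t < t_end:
        x += h * u(t)        # Eq. (2)
        t += h
    return x

print(integrate(lambda t: 1.0, 1e-4, 1.0))   # close to 1
```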

URL: https://www.sciencedirect.com/science/article/pii/B0122272404000186