1.1 Process Measurements
Measured process data inevitably contain some inaccurate information, since measurements are obtained with imperfect instruments that have limited accuracy. In addition, signal transmission, power fluctuations, improper instrument installation, and miscalibration are other sources of measurement errors.
It is assumed that any observation is composed of a true value plus some error value. This indicates that a measurement can be modeled as:
y = x + e (1.1)
where y is the observed value of the raw measurement, x is the true value of the process variable, and e is the measurement error.
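As a quick illustration of Equation (1.1), the following Python sketch generates one noisy observation of an assumed true value; both the true value and the error standard deviation are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

x_true = 100.0   # assumed true value of the process variable (illustrative)
sigma = 1.5      # assumed standard deviation of the random error (illustrative)

e = rng.normal(loc=0.0, scale=sigma)   # random measurement error
y = x_true + e                         # observed raw measurement, Equation (1.1)

print(f"x = {x_true}, e = {e:.3f}, y = {y:.3f}")
```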
1.2 Measurement Error
The error term in Equation (1.1), e, can be divided into two subcomponents, random error and gross error, as shown in Figure 1.1.
Random error is caused by one or more factors that randomly affect the measurement of a variable. It follows a Gaussian distribution.
Gaussian noise is normally distributed with a mean value of zero and a known variance. The probability density function (PDF) of a measurement with Gaussian noise is described by the formula:
f(y) = (1 / (σ√(2π))) exp(−(y − µ)² / (2σ²)) (1.2)
where µ is the mean value of the measurements and σ is the standard deviation.
The important property of random error is that it adds variability to the data, but it does not shift the average of a group of repeated measurements.
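A small simulation makes this property concrete: repeated measurements corrupted by zero-mean Gaussian noise scatter around the true value, but their average stays close to it. The numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

x_true = 100.0   # assumed true value (illustrative)
sigma = 1.5      # assumed standard deviation (illustrative)

# many repeated measurements of the same true value
y = x_true + rng.normal(0.0, sigma, size=10_000)

print(f"sample mean = {y.mean():.3f}  (close to the true value {x_true})")
print(f"sample std  = {y.std(ddof=1):.3f}  (close to sigma = {sigma})")
```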
Gross error (as depicted in Figure 1.3) can be caused by:
- systematic instrument bias, in which readings are consistently higher or lower than the true value of the process variable, usually because of instrument miscalibration
- measurement device failure
- nonrandom events affecting the process, such as a process leak
Unlike random errors, gross errors tend to be consistently either positive or negative. Because of this, a gross error is sometimes regarded as a bias in the measurement.
Generally, measurements with gross errors lead to severely incorrect information about the process, much more so than those with random errors. Gross error detection is therefore an important aspect of process data validation.
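As a very simple illustration (not one of the formal gross error detection tests), the sketch below screens repeated measurements of a single variable and flags any value that deviates from a robust reference by more than three standard deviations; the data and the threshold are assumptions.

```python
import numpy as np

sigma = 1.5                                             # assumed measurement standard deviation
y = np.array([100.2, 99.1, 101.3, 100.7, 93.4, 99.8])   # hypothetical repeated measurements

reference = np.median(y)                     # robust estimate of the true value
suspect = np.abs(y - reference) > 3 * sigma  # simple 3-sigma screening rule

print("suspected gross errors:", y[suspect])   # flags 93.4
```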
Errors in measured data can lead to significant deterioration in plant operation. Small random and gross errors degrade the performance of control systems, whereas larger gross errors can nullify the gains from process optimization. To achieve optimal process monitoring, control, and optimization, it is important to estimate the true conditions of the process states from the information provided by the raw measurements.
1.3 Data Reconciliation
The estimation of a process state involves the processing of the raw data and their transformation into reliable information.
For example, a cooling-water station provides water for four plants, as shown in Figure 1.4. All the flow rates of the circulation water are measured in this network. At steady state, the raw measurements and their standard deviations are listed in Table 1.1.
If we make mass balances around each plant in the network using the raw measurements, we will find that the balances do not close. Because the true values of the flow rates must satisfy the mass balances at steady state, this means that all the flow measurements contain errors.
For example, the measurement of stream 1, coming into Plant 1, is 110.5 kt/h. However, the sum of the measured flows for streams 2 and 3 leaving Plant 1 is 60.8 + 35.0 = 95.8 kt/h. Now the question is, how many tons of cooling water does each plant use? For Plant 1, is it 110.5 kt/h or 95.8 kt/h? The estimation of the true values of the flows in this network can be solved by Data Reconciliation (DR).
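The imbalance can be checked with a few lines of arithmetic using the raw measurements quoted above:

```python
flow_in = 110.5          # stream 1 into Plant 1, kt/h (raw measurement)
flow_out = 60.8 + 35.0   # streams 2 and 3 out of Plant 1, kt/h (raw measurements)

imbalance = flow_in - flow_out
print(f"Plant 1 mass imbalance = {imbalance:.1f} kt/h")   # 14.7 kt/h, so the data contain errors
```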
Data reconciliation is the estimation of process variables based on the information contained in the process measurements and models. The process models used in data reconciliation are usually mass and energy conservation equations.
The DR technique allows the adjustment of the measurements so that the corrected measurements are consistent with the corresponding balances. The information from the reconciled data can be used by the company for different purposes, such as:
- Monitoring
- Management
- Optimization
- Modeling
- Simulation
- Control
- Instrument maintenance
- Equipment analysis
This is especially true with the implementation of a Distributed Control System (DCS), as shown in Figure 1.5.
Interest in applying DR techniques started in the 1980s, when plant management realized the benefits of having access to more reliable estimates of process data. Nowadays, data reconciliation techniques have been widely applied in various processing industries, such as:
- Refinery
- Petrochemical
- Metal/Mineral
- Chemical
- Pulp/Paper
Commercial software specializing in data reconciliation is available. A demo version of one commercial package can be downloaded at: http://www.simsci.com/products/datacon.stm.
Research and development during the past 30 years have led to two major types of applications:
- Mass and heat balance reconciliation. The simplest example is the off-line reconciliation of flow rates around process units. The reconciled flow rates satisfy the overall mass balance of the units.
- Model parameter estimation. Accurate, precise estimates of model parameters are required in order to obtain reliable model predictions for process simulation, design, and optimization. One approach to parameter estimation is to solve the estimation problem simultaneously with the data reconciliation problem, as sketched below. The reconciled model parameters are expected to be more accurate and can be used with greater confidence.
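As a hedged sketch of that simultaneous approach, the example below fits a hypothetical one-parameter model z = theta*x while reconciling the measurements of x and z so that they satisfy the model exactly; the data, the model form, and the standard deviations are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

x_meas = np.array([1.0, 2.1, 2.9])   # hypothetical measured inputs
z_meas = np.array([2.1, 4.0, 6.2])   # hypothetical measured outputs
sx, sz = 0.1, 0.2                    # assumed measurement standard deviations

def objective(v):
    # weighted least-squares adjustment of the measurements
    x_hat, z_hat = v[:3], v[3:6]
    return np.sum(((x_meas - x_hat) / sx) ** 2) + np.sum(((z_meas - z_hat) / sz) ** 2)

def model_constraint(v):
    # reconciled values must satisfy the assumed model z_hat = theta * x_hat
    x_hat, z_hat, theta = v[:3], v[3:6], v[6]
    return z_hat - theta * x_hat

v0 = np.concatenate([x_meas, z_meas, [2.0]])   # initial guess (theta ~ 2)
result = minimize(objective, v0, constraints={"type": "eq", "fun": model_constraint})

print("estimated theta:", round(result.x[6], 3))
print("reconciled x:", np.round(result.x[:3], 3))
print("reconciled z:", np.round(result.x[3:6], 3))
```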
In general, the optimal estimates of the process variables obtained by DR are the solution of a constrained least-squares or maximum-likelihood problem, in which the measurement errors are minimized subject to the process model constraints.
With the assumption of normally distributed measurements, a least-squares objective function is conventionally formulated for the data reconciliation problem. At process steady state, the reconciled data are obtained by:
Minimizing
J(ŷ, ẑ) = (y − ŷ)ᵀ V⁻¹ (y − ŷ) (1.3)
subject to
f(ŷ, ẑ) = 0
g(ŷ, ẑ) ≥ 0
where
y is an M×1 vector of raw measurements for M process variables,
ŷ is an M×1 vector of estimates (reconciled values) for the M process variables,
ẑ is an N×1 vector of estimates for the unmeasured process variables z,
V is an M×M covariance matrix of the measurements,
f is a C×1 vector describing the functional form of the model equality constraints,
g is a D×1 vector describing the functional form of the model inequality constraints, which include simple upper and lower bounds.
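For the common special case of linear equality constraints A·ŷ = 0, no unmeasured variables, and no inequality constraints, Equation (1.3) has the closed-form solution ŷ = y − V·Aᵀ(A·V·Aᵀ)⁻¹·A·y. The sketch below applies it to the Plant 1 balance (stream 1 = stream 2 + stream 3) using the measurements quoted earlier; the standard deviations are assumptions, since Table 1.1 is not reproduced here.

```python
import numpy as np

y = np.array([110.5, 60.8, 35.0])   # raw flow measurements (streams 1-3), kt/h
sigma = np.array([2.0, 1.5, 1.0])   # assumed standard deviations (Table 1.1 not shown)
V = np.diag(sigma ** 2)             # measurement covariance matrix

A = np.array([[1.0, -1.0, -1.0]])   # Plant 1 mass balance: y1 - y2 - y3 = 0

# closed-form reconciliation for linear constraints A @ y_hat = 0
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
y_hat = y - correction

print("reconciled flows:", np.round(y_hat, 2))        # balance now closes
print("balance residual:", round((A @ y_hat)[0], 6))  # ~0
```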
The models employed in DR represent the variable relationships of the physical system of the process. The reconciled data thus combine information from both the measurements and the models. In reconciling steady-state measurements, the model constraints are algebraic equations. When dealing with dynamic processes, however, dynamic models in the form of differential equations have to be used.
Based on the type of model constraints, the data reconciliation problem can be divided into several subproblems, as shown in Figure 1.6. Each subproblem will be discussed in turn in this module.
The DR formulation in Equation (1.3) indicates that data reconciliation techniques not only reconcile the raw measurements but also estimate unmeasured process variables or model parameters, provided that they are observable.
1.4 Process Variable Classification
It is also important to clarify some concepts used in DR techniques. Measured variables are classified as redundant or nonredundant, whereas unmeasured variables are classified as observable or nonobservable. The classification of process variables is shown in Figure 1.7.
- A redundant variable is a measured variable that can be estimated by other measured variables via process models, in addition to its measurement.
- A nonredundant variable is a measured variable that cannot be estimated other than by its own measurement.
- An observable variable is an unmeasured variable that can be estimated from measured variables through physical models.
- A nonobservable variable is an unmeasured variable for which no information is available.
To demonstrate these concepts, we take the cooling-water network as an example.
In Figure 1.4, all six flows are measured, and any one of them can be estimated by mass balances using the other measured flows, so they are all redundant variables.
However, if the measurements of flows 2, 4, and 6 are eliminated, as shown in Figure 1.8, flow 1 becomes a measured nonredundant variable, while the measurements of flows 3 and 5 remain redundant. The unmeasured flows 2, 4, and 6 are, in this case, observable, because their values can be estimated by mass balances around the plants using the measured flows.
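A tiny sketch of this, assuming the Plant 1 balance (stream 1 in, streams 2 and 3 out) from the earlier description: the unmeasured flow 2 is observable because it can be computed from the measured flows 1 and 3. In practice the reconciled values would be used rather than the raw measurements.

```python
flow_1 = 110.5   # measured flow into Plant 1, kt/h (raw measurement)
flow_3 = 35.0    # measured flow out of Plant 1, kt/h (raw measurement)

# mass balance around Plant 1: flow_1 = flow_2 + flow_3
flow_2_estimate = flow_1 - flow_3   # unmeasured but observable
print(f"estimated flow 2 = {flow_2_estimate:.1f} kt/h")
```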
1.5 Redundancy
A measurement is spatially redundant if there are more than enough data to completely define the process at any instant in time. Referring to Figure 1.4, all the measurements are spatially redundant. For example, even without the measurement of flow stream 1, we can still completely define the process, because flow stream 1 can be calculated from the other spatial measurements via mass balances.
A measurement is temporally redundant if its past measurements can be used to estimate the current state. A typical case of a temporally redundant measurement is that, at the current sampling time t, the true value of the process variable can be predicted by dynamic models, in addition to the raw measurement.
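As an illustrative stand-in for a dynamic model, the sketch below uses a simple exponential filter to blend past measurements with the newest raw value when estimating the current state; the data and the filter constant are assumptions.

```python
measurements = [100.2, 100.9, 99.6, 100.4, 104.8]   # hypothetical time series of one variable
alpha = 0.3                                          # assumed filter constant

estimate = measurements[0]
for y_t in measurements[1:]:
    # blend the prediction carried over from past data with the current raw measurement
    estimate = (1 - alpha) * estimate + alpha * y_t

print(f"filtered estimate of the current state: {estimate:.2f}")
```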