Volume 2, Number 1, Fall 2001


Evaluating Uncertainty in Electric Utility Economic Analysis

Hesham E. Shaalan, Ph.D.
Electrical Engineering Technology
School of Technology
Georgia Southern University

Abstract:  

Uncertainty is a major issue facing electric utilities in planning and decision making. Substantial uncertainties exist concerning future load growth, construction times and costs, performance of new resources, and the regulatory as well as economic environment in which utilities operate. Therefore, it is no longer valid to assume that the input parameters of economic studies are known with certainty. Utilities need to understand the effects of variations in these parameters on the final outcome of economic studies. This paper presents a method of modeling uncertainty and performing sensitivity analysis with interval mathematics. An interval analysis solution provides the complete sensitivity analysis without the need for many simulations. An electric utility example is presented to illustrate the application of the concepts.

1. Introduction

1.1 General Background

The current economic, social, and political climate in which the electric power industry operates has changed considerably in the last 30 years. Prior to the end of the 1950s, planning for the construction of plant facilities was basically straightforward because it could be assumed that the load would at least double every 10 years [1]. Therefore, past trends provided a relatively simple guide for the future. During the 1960s, generation unit sizes increased and high voltage transmission and interconnections between utilities expanded rapidly to take advantage of economies of scale.

The utility industry economic environment was relatively stable prior to the 1970s. Both inflation and interest rates were predictable, and consequently costs did not change rapidly. Therefore, the uncertainties associated with most aspects of utility finance were minimal, and economic studies could be performed with some degree of certainty [2].

The oil embargo of the early 1970s disrupted the economic stability of the utility industry. The industry was faced with escalating fuel costs in addition to the possibility of supply interruptions. Furthermore, the United States was experiencing rapid increases in interest rates. These factors represented a reversal of long-standing trends.

Public concern for depleting the earth's limited resources along with concern for the environmental impact added to the challenges confronting the utility industry. In addition, the cost of nuclear power was escalating due to new Federal regulations, which made it evident that nuclear power was not going to be a universal supply for our energy needs.

Looking into the future, the issues facing utilities will be driven by the fundamental need for additional generating capacity, new transmission systems, and more efficient use of existing resources [3]. The retirement of older units coupled with load growth that has been higher than expected in some cases indicates that substantial construction will be required. Proposed stricter environmental legislation will probably accelerate the retirement of older plants.

As a result, there are now many uncertainties associated with utility economic decisions. It is no longer valid to assume that the input parameters of economic studies are known with certainty. Utilities need to understand the potential effects of variations in these parameters on the final outcome of economic studies.

1.2 Modeling of Uncertainty

The past decade has seen a growing recognition that policies that ignore uncertainty often lead in the long run to unsatisfactory technical, social, and political outcomes. As a result, many large corporations and federal agencies now routinely employ decision analytic techniques that incorporate explicit treatment of uncertainty [4].

Uncertainty is a major issue facing electric utilities in planning and decision making. Substantial uncertainties exist concerning future load growth, construction times and costs, performance of new resources, and the regulatory and economic environment in which utilities operate.

During the past decade, utilities have begun to use a variety of analytical approaches to deal with these uncertainties. These methods include sensitivity, scenario, portfolio, and probabilistic analyses. As typically applied, these methods involve the use of a computer model that simulates utility operations over 20 or 30 years [5]. Fuzzy numbers have also been used to model non-statistical uncertainties in engineering economic analysis [6].

Modeling uncertainty in utility economic evaluation can be based on two general approaches [3]. The first is a probabilistic approach where probability distributions for all of the uncertainties are assumed. The second approach is called "Unknown but Bounded" in which upper and lower limits on the uncertainties are assumed without a probability structure. The two approaches are explained as follows.

Probability distributions may not be available for some variables in economic evaluations because sampled data is unavailable. Examples of such variables would be the carrying charge factor and minimum acceptable return for an electric utility. An opinion sampling of people who work in the field could be performed along with an evaluation of historical data. However, this process may be highly unreliable since it is based on opinions of economic situations that are changing both drastically and rapidly.

A probability distribution may be assumed in some cases. Since no particular distribution is known, all values are assumed to be equally likely between given limits. In this type of situation a uniform distribution is the most appropriate. This distribution is also referred to as the rectangular distribution because of its shape. Based on a known distribution, confidence intervals can be defined. In addition, the way in which confidence intervals vary with transformations can be determined.

Another approach to modeling uncertainty is referred to as unknown but bounded. In this case upper and lower bounds on the uncertainties are assumed without probability distributions. In Schweppe's last published paper, the concept was defined in general without providing any numerical examples [3]. The concept was previously mentioned in Schweppe's book titled Uncertain Dynamic Systems [7]. However, the book covered dynamic systems driven by a white unknown-but-bounded process.

Interval mathematics provides a tool for the practical implementation and extension of the unknown but bounded concept. Confidence intervals cannot be calculated in this case because there are no probability distributions.

The unknown but bounded concept as presented by Schweppe does not directly address sensitivity analysis [3]. He addressed that problem separately in the context of strategic planning for a utility as a whole [8, 9]. Schweppe's method was based on running simulations repeatedly for a range of input variables. Results of these runs are integrated into functions that yield nonlinear relationships between input and output variables. These functions are then used to evaluate the effects of uncertainties and sensitivities of particular decisions [8].

In contrast to Schweppe's approach, an interval analysis solution with sharp bounds immediately provides the complete sensitivity analysis for the problem. There is no need for many simulation runs because the total variation in the output is known given the total variation in input parameters.

1.3 Sensitivity Analysis

A great deal of engineering effort is often devoted to testing a system's sensitivity to changes in design parameters. As a rule, sensitive elements are those which should be designed with utmost care. On the other hand, the mathematical model used in the design process is usually idealized and often inaccurately formulated. Therefore, some unforeseen changes may cause the system to behave in a different manner. Sensitivity analysis can help the engineer devise ways to minimize such discrepancies since the analysis takes into consideration assumptions about the differences between the ideal and the actual system [10].

Methods of mathematical optimization rely in one way or another on relative sensitivities. Even the simple task of fitting data to a curve usually involves sensitivity calculations. For social scientists, economists, and others, sensitivity and perturbation techniques can provide valuable information about how inaccuracies in a system's data translate into inaccuracy in the behavior of a model.

If the data gathered by field study or experimental testing falls within certain tolerance limits, the tolerances may well be amplified and widened in the results obtained. The question may then arise as to how uncertain the results are due to the uncertainties of the data [10].

Sensitivity analysis is usually carried out by determining which parameters have significant effects on the results of a study. An attempt is then made to increase the precision of these parameters in order to reduce the danger of serious error. Furthermore, sensitivity analysis techniques have generally relied on either a priori or a posteriori techniques. A priori techniques attempt to predict the sensitivity of the solution based on analytical methods applied to the input data. A posteriori techniques seek to achieve the same goal based on analysis of the final results of the computations [10].

Traditionally, all parameters are set to nominal values and each parameter is varied independently to determine its effect on the outcome. Those parameters that have a significant effect are viewed as sensitive. In most cases sensitivity analysis does not deal with the possibility that several parameters varying simultaneously can cause significant variations in the output. Simultaneous variation of parameters more accurately models the real world situation [2].

During the 1960s a new approach, known as interval analysis, emerged as an alternative way of dealing with uncertainty in data [11]. The uncertainties associated with utility economic analysis could be more effectively understood if the input parameters were treated as interval numbers whose ranges contain the uncertainties in those parameters. The resulting computations, carried out entirely in interval form, would then literally carry the uncertainties associated with the data through the analysis. Likewise, the final outcome in interval form would contain all possible solutions due to the variations in input parameters.

Thus, it is possible to perform sensitivity analysis by assigning interval bounds to any or all of the input parameters and observing the effects on the final interval outcome.

2. Interval Mathematics

2.1 Introduction

Interval mathematics provides a useful tool in determining the effects of uncertainty in parameters used in a computation. In this form of mathematics, interval numbers are used instead of ordinary single point numbers. An interval number is defined as an ordered pair of real numbers representing the lower and upper bounds of the parameter range [11]. For instance, assume that the cost of a particular piece of equipment will be in the range of 600 to 800 dollars. The corresponding interval number would then be as follows: cost = [600, 800]. An interval number can then be formally defined as follows: [a, b], where a ≤ b.

In the special case where the upper and lower bounds of an interval number are equal, the interval is referred to as a point or degenerate interval. In this case, interval mathematics is reduced to ordinary single point arithmetic.

Given two interval numbers, [a, b] and [c, d], the rules for interval addition, subtraction, multiplication, and division are as follows:

[a, b] + [c, d] = [a+c, b+d],(1)

[a, b] - [c, d] = [a-d, b-c], (2)

[a, b] * [c, d] = [min(ac,ad,bc,bd), max(ac,ad,bc,bd)], (3)

[a, b] / [c, d] = [a, b] * [1/d, 1/c], where 0 ∉ [c, d]. (4)
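
Equations (1) through (4) translate directly into code. The following is a minimal Python sketch; the class name and layout are illustrative assumptions, not part of the original analysis:

    class Interval:
        """A closed interval [lo, hi] with lo <= hi."""
        def __init__(self, lo, hi):
            assert lo <= hi
            self.lo, self.hi = lo, hi

        def __repr__(self):
            return "[%g, %g]" % (self.lo, self.hi)

        def __add__(self, other):      # Equation (1)
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):      # Equation (2)
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):      # Equation (3)
            p = (self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi)
            return Interval(min(p), max(p))

        def __truediv__(self, other):  # Equation (4); divisor must not contain zero
            assert other.lo > 0 or other.hi < 0
            return self * Interval(1.0 / other.hi, 1.0 / other.lo)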

A brief review of some basic interval relationships and interval arithmetic is given below.

Two intervals are considered equal if and only if their corresponding end points are equal. Therefore, given two interval numbers X = [a, b] and Y = [c, d], X = Y if and only if a = c and b = d.

The midpoint of an interval X = [a, b] is defined as

m(X) = m([a, b]) = (a + b)/2 (5)

Similarly, the width of an interval is given by

w(X) = w([a, b]) = (b - a) (6)

The negative of an interval Y = [c, d] is defined as

-Y = -[c, d] = [-d, -c] (7)

The reciprocal of an interval X = [a, b] is given by

1/X = 1/[a, b] = [1/b, 1/a] (8)

if X does not contain zero. If X contains zero then the set is unbounded and cannot be represented as an interval whose end points are real numbers.

Based on these mathematical properties, powers of intervals may be given by

X^y = [a^y, b^y] if a > 0 or if y is odd,

= [b^y, a^y] if b < 0 and y is even,

= [0, |X|^y] if 0 ∈ X and y is even, where

|X| = max{|a|, |b|}. (9)
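
Equations (5) through (9) can be expressed the same way. These helper functions, again a sketch building on the illustrative Interval class above, follow the definitions directly:

    def midpoint(x):                   # Equation (5)
        return (x.lo + x.hi) / 2.0

    def width(x):                      # Equation (6)
        return x.hi - x.lo

    def negative(x):                   # Equation (7)
        return Interval(-x.hi, -x.lo)

    def reciprocal(x):                 # Equation (8); undefined if 0 lies in x
        assert x.lo > 0 or x.hi < 0
        return Interval(1.0 / x.hi, 1.0 / x.lo)

    def power(x, y):                   # Equation (9), for integer y >= 1
        if x.lo > 0 or y % 2 == 1:
            return Interval(x.lo ** y, x.hi ** y)
        if x.hi < 0:                   # y even, interval entirely negative
            return Interval(x.hi ** y, x.lo ** y)
        mag = max(abs(x.lo), abs(x.hi))    # 0 in x, y even
        return Interval(0.0, mag ** y)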

Given the interval numbers X, Y, and Z, the following properties hold

X + (Y + Z) = (X + Y) + Z (10)

X * (Y * Z) = (X * Y) * Z (11)

X + Y = Y + X (12)

X * Y = Y * X (13)

It is also worth noting that

X - X = 0, and X/X = 1

only when X is a degenerate interval. Otherwise,

X - X = [a, b] - [a, b]

= [a, b] + [-b, -a]

= [a-b, b-a] (14)

and

X/X = [a/b, b/a] if a > 0, or

X/X = [b/a, a/b] if b < 0. (15)

The following example is used to illustrate the above concepts. Assume that $1000 is invested for one year at an interest rate varying between 8% and 10%, which is represented by the interval [0.08, 0.1]. The future worth is [1000, 1000] * (1 + [0.08, 0.1]) = [1080, 1100], which means that the minimum future worth of the $1000 investment would be $1080, while the maximum amount would be $1100.
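
Using the Interval sketch from above, this investment example reads:

    principal = Interval(1000, 1000)
    growth = Interval(1.08, 1.10)      # 1 + the interest rate interval [0.08, 0.1]
    print(principal * growth)          # -> [1080, 1100]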

Implementing interval analysis techniques confronts some obstacles because its algebraic structure is unlike that of common single point arithmetic. Accordingly, interval computations may produce extremely conservative bounds [12].

2.2 Interval Bounds

Some interval computations produce narrow bounds while others result in overly wide bounds. Normally, the approach to producing better bounds has been to rearrange the expression so that each interval parameter appears only once [11]. For example, assume that A is an interval variable in the following equation

Y = A/(A-2), (16)

where Y is the interval output. The equation can be rearranged so that it has fewer occurrences of the variable A

Y = 1 + 2/(A-2). (17)

Equation (17) yields a narrower interval result since it has only one occurrence of the variable A as opposed to two occurrences in Equation (16). The reason for this is that when the substitution of the numerical interval is made for the variable A, the identity of the variable in its two occurrences in Equation (16) is lost. From the viewpoint of interval calculations alone, the interval computation in Equation (16) is equivalent to finding the range of values of a function of two independent variables. That is, Equation (16) can be rewritten as

Y = A1/(A2-2) = [a1,b1]/([a2,b2]-[2,2]). (18)

In evaluating Equation (18), the interval A1 can be at its maximum value while A2 is at its minimum value.
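
This dependency effect is easy to demonstrate numerically. Taking A = [3, 4] as an arbitrary illustrative interval, the two forms of the equation give different widths when evaluated with the earlier sketch:

    A = Interval(3, 4)
    one, two = Interval(1, 1), Interval(2, 2)

    print(A / (A - two))               # Equation (16): A appears twice -> [1.5, 4]
    print(one + two / (A - two))       # Equation (17): A appears once  -> [2, 3]

The true range of a/(a-2) for a in [3, 4] is [2, 3], so Equation (17) is sharp while Equation (16) overestimates the width.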

The following example will be used to illustrate the above observation. The present worth of an annual expense of $1000 over 2 years will be found using two different approaches to the calculation. A Minimum Acceptable Return (MAR) in the range of eleven to twelve percent, which is represented by the interval [0.11, 0.12], will be assumed. The present worth annuity factor [13] is as follows

P/A(i,n) = [1 - (1/(1+i)^n)]/i, (19)

where i = MAR = [0.11, 0.12],

n = number of years = [2, 2].

The present worth is given by

Present worth = 1000 * P/A(i,n)

= [1000 , 1000] * [1.57 , 1.844]

= [1570 , 1844]. (20)

Exact bounds can be obtained by rearranging the manner in which the calculations are performed [14]. This can be achieved by carrying out the calculations one year at a time. This is performed by setting n to 1 in Equation (19) to obtain

P/A = 1/(1+i) (21)

and then repeatedly applying Equation (21) year-by-year. Here the calculations are rearranged such that the interval parameter appears only once per calculation. Thus, the analysis is performed one year at a time such that i appears only once in each operation as given by

Step 1 (year 2 to year 1) : [1000,1000] * 1/(1+i) = [893, 901]

Step 2 (year 1 to present) : ([893,901] + [1000,1000]) * 1/(1+i) = [1690,1713].

The result is [1690, 1713], which has a width of 23 as compared to the width of 274 obtained using Equation (20). It is clear that the resulting interval bounds are drastically improved by using year-by-year calculations. This concept can be used to carry out any economic calculation to get exact bounds on the resulting interval.
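
A sketch of both calculations with the Interval class from Section 2.1 (the variable names are illustrative):

    i = Interval(0.11, 0.12)             # Minimum Acceptable Return
    one = Interval(1, 1)
    payment = Interval(1000, 1000)       # annual expense

    # Direct use of Equation (19): i appears three times, so bounds widen.
    pa = (one - one / ((one + i) * (one + i))) / i
    print(payment * pa)                  # -> approximately [1570, 1844]

    # Year-by-year use of Equation (21): i appears once per operation.
    pv = Interval(0, 0)
    for _ in range(2):                   # discount two years, latest year first
        pv = (pv + payment) / (one + i)
    print(pv)                            # -> approximately [1690, 1713]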

2.3 Statistical Aspects

Formal statistical reasoning is based on the laws of probability [15]. Implicit in the treatment of statistical inference problems is the assumption that random samples of observations can be obtained. A random sample can be defined as consisting of statistically independent, identically distributed random variables [16].

A random variable may represent the occurrence or non-occurrence of an event under a given set of conditions. However, if an event always occurs under a given set of conditions then it is called a certain or deterministic event [17].

In certain cases, a point estimate of a parameter may be obtained based on a random sample of observations from a population with known distribution function. However, the point estimate is a random variable distributed around the true value of the parameter. Therefore, an interval is needed which will actually include the true value of the parameter with reasonable confidence [16].

The generally accepted method of handling this problem is to construct what are known as confidence intervals. The computed confidence interval covers the true parameter with a specified probability such as 0.95, or in other words with 95% confidence. However, it cannot be said that the probability is 95% that the true value falls within the computed interval; it either does or does not. What can be said is that the computed interval covers the true value in 95% of all possible samples [15].

There are problems that can be properly posed in terms of intervals and solved correctly by interval analysis. In this view, interval analysis is a branch of mathematics that has its own theory, techniques, and problems to which it is applicable [18].

Interval analysis deals with transformations of one interval into another. The central principle of numerical interval analysis is that the resulting interval contains the answer [18]. This means that if the input interval represents a parameter range, then the computed output interval contains all values resulting from the transformation of input data.

It is important to optimize interval computations as much as possible with respect to accuracy. However, there is a difference between having confidence in the accuracy of a computed result and knowing the exactness of the resulting interval bounds.

The assessment of confidence in the result of an interval computation depends mainly on the level of confidence in the input parameters. Therefore, if input parameters are known to be accurate, then there is high confidence in the results. However, if there is little confidence in input parameters, then it follows that there is a proportionally limited confidence in the result. In other words, the confidence in the computed results depends on the level of confidence in input parameters and not on the computational procedure.

As mentioned earlier, there is a distinction between having confidence in the computed result and knowing the exactness of the resulting interval bounds. The width of the resulting interval bounds may sometimes be so large that the result's usefulness is greatly diminished. Therefore, an effort has to be made to reduce the width of the resulting interval bounds.

Given a set of interval input parameters, the bounds of the resulting interval computations may depend on the calculation procedure as well as the input parameters. Therefore, the same set of input parameters may produce different interval bounds depending on the calculation procedure used. Authors like Hansen [19] have used the term 'sharp bounds' to denote the narrowest resulting bounds. Others have used the term 'exact bounds' to indicate the same meaning [12].

Based on the above discussion, it can be argued that the concept of statistical confidence intervals does not apply in the technical sense to this research. The empirical fact is that most of the input parameters used are either arbitrary or deterministic in nature. Therefore, the fundamental concepts of probability theory are not applicable in this situation.

3. Probability Intervals

The input parameters that are modeled as intervals are not described by a normal distribution. This is based on the fact that the normal distribution would emphasize the middle section of a parameter range. By contrast, the uniform distribution is more appropriate since it does not emphasize any given point [17]. In other words, the probability of any given value is the same throughout the entire interval range.

The uniform distribution is formally defined as follows [17]. A random variable x has a continuous uniform distribution on [a, b] if it has the following density function:

f(x) = 0, x ∉ [a, b]

= 1/(b-a), x ∈ [a, b]. (22)

The mean of this distribution is easily found to be (b+a)/2. The variance is (b-a)^2/12, which means that the standard deviation is equal to (b-a)/√12.

Based on the above discussion, it is only possible to come up with what can be called probability intervals on certain results. The situation here can be described as follows. There is a set of variables X1, X2, ..., Xk where one or more of them can be considered a random variable. A transformation is then applied to produce a new variable Y from the X variables. If the distribution of X1, X2, ..., Xk is known, then the problem is to find the distribution of Y.

A simple example is as follows. Suppose that k = 1, which means there is only one X variable. Let Y = X^2 and assume that X is described by a continuous uniform distribution with x ∈ [2, 3]. Thus, f(x) = 1 for x ∈ [2, 3]. Let g(y) denote the density function of y, which leads to

g(y) dy = f(x) dx,

since the probabilities of these two events must be equal [16]. Then

g(y) = f(x) dx/dy. (23)

In other words, the probability density function for y equals the probability density for x multiplied by the derivative of x with respect to y. The foregoing discussion assumed that y was a strictly increasing function of x everywhere. The same result is readily obtained if y is a strictly decreasing function of x everywhere [16]. The only difference is that the absolute value of dx/dy appears in place of dx/dy in Equation (23). But since x = √y, then

dx/dy = 0.5 y^(-1/2).

Finally, proper substitutions in Equation (23) provide

g(y) = f(x) dx/dy

= (1) * (0.5 y^(-1/2))

or g(y) = 0.5 y^(-1/2), where y ∈ [4, 9].

The middle 95% of this distribution can now be found to obtain an interval that covers 95% of the range of y. The cumulative distribution function is as follows:

G(y) = ∫ g(y) dy = ∫ (0.5 y^(-1/2)) dy, where the limits of integration are from 4 to y

= √y - 2, y ∈ [4, 9].

Therefore, y is between 4.1006 and 8.8506 with 95% probability. Obviously, different levels of probability can be found using the same procedure with the appropriate choice of numbers.
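
This probability interval is easy to verify by simulation. The following is a quick Monte Carlo sketch using numpy; the sample size is an arbitrary illustrative choice:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(2.0, 3.0, 1_000_000)    # x uniform on [2, 3]
    y = x ** 2                              # the transformation y = x^2
    print(np.percentile(y, [2.5, 97.5]))    # -> approximately [4.1006, 8.8506]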

The above concepts are explained in the context of economic analysis as follows. Suppose there are two alternative system designs to be considered. The annual total cost associated with each alternative is represented by:

RR = MC + ccf * CC , (24)

where

RR = Revenue Requirement

MC = Maintenance Cost

ccf = carrying charge factor

CC = Capital Cost.

The carrying charge factor is determined by utilities based on equipment type. Note that all elements of Equation (24) are either arbitrary or deterministic in nature.

The present value of annual revenue requirements associated with each alternative is needed to compare alternatives on equal footing with respect to the time value of money. As a result, the present worth factor [13] has to be used in the following manner:

Present Value = (future amount) * P/F(i, n)

where P/F = present worth factor = 1/(1+i)^n

i = interest rate

n = number of years.

As shown in the previous section, this type of calculation is performed one year at a time to obtain sharp interval bounds. Therefore, the above equation simplifies to using 1/(1+i) year by year.

Based on material discussed earlier, it is now possible to develop probability intervals for the present value of future costs. Suppose that K represents the total cost associated with a certain alternative one year from now. Then P/F = 1/(1+i), and the present worth of K is K/(1+i). Let y be a transformation of the variable x defined as follows: y = K/(1+x), where x is the interest rate. Suppose that x is between 11% and 12% and is described by a uniform distribution. Then it follows that x ∈ [0.11, 0.12] and f(x) = 1/0.01 = 100 for that range of x.

Let g(y) denote the density function of y, then Equation (23) can be used as follows:

g(y) = f(x) dx/dy

but y = K/(1+x) ⇒ x = (K - y)/y

⇒ dx/dy = -K/y^2.

Then g(y) = f(x) |dx/dy|

= (100)(K/y^2), where y ∈ [0.8928K, 0.9009K].

The cumulative distribution function can now be found as follows:

G(y) = ∫ g(y) dy = ∫ (100K/y^2) dy, where the limits of integration are from 0.8928K to y

= 112 - 100K/y, y ∈ [0.8928K, 0.9009K].

Therefore, the present worth lies between 0.8928K and 0.9009K, and the middle 95% of the distribution gives a present worth between approximately 0.8931K and 0.9007K.
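
The same simulation check applies here, with K normalized to 1 (a numpy sketch under the same uniform-rate assumption):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.11, 0.12, 1_000_000)  # interest rate uniform on [0.11, 0.12]
    y = 1.0 / (1.0 + x)                     # present worth of K = 1 due in one year
    print(y.min(), y.max())                 # -> full range, about 0.8929 to 0.9009
    print(np.percentile(y, [2.5, 97.5]))    # -> middle 95%, about 0.8931 to 0.9007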

4. Electric Utility Example

Electric utilities are constantly seeking ways to reduce costs and improve system performance. Recently, the availability of distribution automation equipment has given utilities more choices in the design of distribution systems. Thus, distribution automation is now an important consideration in upgrading existing systems and designing new ones. Furthermore, economic evaluation of distribution automation is important since it facilitates the examination of associated costs and benefits [20].

Automation can be thought of as doing a repetitive task with minimal human intervention. Distribution automation refers to automation of repetitive tasks on the distribution system. Automated switching techniques can be used to enhance the capabilities of a power distribution system. An automatic switch can be defined as a switching device that opens and closes to connect two or more sections of a power distribution system under computer control via remote communications. By contrast, manual switches are commonly used in present power distribution systems.

The following example is used to illustrate the operation of automated switches. The sample system is an existing system containing two distribution substations and two manual switches for power flow control. Several types of loads may exist on the network such as residential, commercial, and industrial loads. It is assumed that load 1 and load 2 have stabilized and are no longer growing. However, the new load is assumed to be growing. The new load may be supplied by either substation depending on the status of the two switches. However, the system will be overloaded if the new load grows beyond the limits of the existing equipment.

The classical approach to solving this problem is to build a new substation to feed the new loads and prevent equipment overloading. However, along with the substation, new lines must be set up for power flow, breakers must be installed for protection, and large step-down transformers must be installed. In addition to these circuit elements that must be purchased and installed, zoning for the new substation and high-voltage rights of way must be approved by local government agencies. Therefore, a great deal of money and planning time must be appropriated to construct a new substation.

An automated approach involves using only automated switching techniques to resolve circuit overloading. This approach allows for employing automatic switches to maximize the capabilities of the existing substations and to distribute the power flow in a more efficient manner. The idea behind automatic switching is that when load peaks change in different areas of the system, the switch opens or closes to direct power to the location where it is needed. This is an important factor due to the time varying nature of different load types (or load diversity). Thus, automatic switching can be performed as frequently as needed in contrast with manual switching. Another major benefit of the automated approach is that the need for a new substation and all of its associated costs are eliminated until future loads are much larger. This time extension factor is a significant benefit due to the lengthy regulatory and planning periods that must be endured before construction of a new substation can begin.


Utilities use the revenue requirement method for the analysis of alternatives involving money. It gives utilities the ability to compare alternatives in terms of their impact on the revenue required from customers. The costs associated with an expansion plan are expressed as capital expense and operating expense. The capital expense is the cost associated with the original placement of an item, including its purchase price and installation cost. The operating expense is associated with the on-going operation of the plant item. The carrying charge factor is used to reduce capital expenses to a yearly amount; the resulting yearly cost associated with a capital expense is referred to as the carrying charge. Thus, the revenue requirement for each year is the sum of carrying charges and operating expenses for that year.


It is assumed that all automation equipment installations occur in the first two years but with equipment having a 10-year lifetime. Therefore, revenue requirement calculations will be performed for the first 10 years of expenditure. The first year of the plan consists of installing transducers and remote terminal units for the two substations. The information is fed to a central computer through telephone lines. The second year of the plan calls for installing two automatic switches linked to the computer. Table 1 shows the interval capital cost of distribution automation equipment based on estimates obtained from electric utilities. The annual operating cost is assumed to be ten percent of the capital cost. The carrying charge factor is represented by the interval [0.19, 0.21] based on realistic estimates. The carrying charge factor is treated as an interval along with capital cost and operating expense to account for uncertainties. The Minimum Acceptable Return (MAR) is assumed to be in the range of 11 to 12 percent, which means that MAR = [0.11, 0.12]. An investment alternative that does not yield a return exceeding the MAR should not be pursued. This cutoff rate is usually established by a policy decision within the organization.

The carrying charge factor is multiplied by the capital expense to convert it to an annual amount over the lifetime of equipment. Table 2 shows the revenue requirement components for each year. The Present Value of Revenue Requirement (PVRR) can be found using the method discussed earlier in Section 2.2. This is done by performing the calculation one year at a time, which gives the following result: PVRR = $[133426, 153741]. This value can be compared to the one obtained using the conventional method (Equation 19), which gives the following result: PVRR = $[129359, 158662]. As can be seen, the width of the first result is 20315, which is narrower than the width of the second result of 29303.

Table 1: Interval capital cost of equipment

Equipment type                          Capital Cost ($)
6 Current transducers                   [590, 610]
6 Voltage transducers                   [590, 610]
2 Watt transducers                      [3500, 3700]
2 Automatic switches                    [11900, 12100]
2 Substation remote terminal units      [19900, 20100]
1 Central computer                      [24900, 25100]


Table 2: Annual revenue requirement components

Year   Capital cost ($)   Carrying charge ($)   Operating cost ($)   Revenue requirement ($)
1      [49280, 50320]     [9363, 10567]         [7228, 7532]         [16591, 18099]
2      [11900, 12100]     [11624, 13108]        [8418, 8742]         [20042, 21850]
3-10   --                 [11624, 13108]        [8418, 8742]         [20042, 21850]

(No new capital expenditures occur in years 3 through 10; the carrying charge, operating cost, and revenue requirement are identical in each of those years.)


5. Conclusion

There are now many uncertainties associated with utility economic decisions. It is no longer valid to assume that the input parameters of economic studies are known with certainty. Utilities need to understand the effects of variations in these parameters on the final outcome of economic studies. This paper presented a method of modeling uncertainty and performing sensitivity analysis with interval mathematics. Sensitivity of the resulting interval bounds can be assessed based on the variation in one or all input parameters. This form of sensitivity analysis is not normally possible without the use of interval mathematics. An electric utility example was presented to illustrate the application of the concepts. Since the input parameters are not obtained through sampling, the concept of statistical confidence intervals does not apply. However, probability intervals may be obtained for the present value of a future amount.

References:

[1]. R. Billinton and R. Allan, "Power-System Reliability in Perspective," IEE J. Electronics & Power, Vol. 30, pp. 231-236, March 1984.

[2]. H. E. Shaalan, "An Interval Mathematics Approach To Economic Evaluation Of Power Distribution Systems," Ph.D. Dissertation, Virginia Polytechnic Institute and State University, Blacksburg, VA, May, 1992.

[3]. H. Shaalan, "Modelling Uncertainty in Electric Utility Economics Using Interval Mathematics," Proceedings of the 4th IASTED International Conference on Power & Energy Systems, September 19-22, 2000, Marbella, Spain, pp. 675-678.

[4]. M. G. Morgan and M. Henrion, Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press, 1990.

[5]. E. Hirst, E. Yourstone, and M. Gettings, "Resource Planning Model Treats Electric Utility Uncertainties," IEEE Computer Applications in Power, Oct. 1991, pp. 40-45.

[6]. A. Dimitrovski and A. Matos, "Fuzzy Engineering Economic Analysis," IEEE Transactions on Power Systems, Vol. 15, No. 1, February 2000, pp. 283-289.

[7]. F. C. Schweppe, Uncertain Dynamic Systems, Prentice-Hall, Englewood Cliffs, New Jersey, 1973.

[8]. H. Merrill, F. Schweppe, and D. White, "Energy Strategy Planning for Electric Utilities, Part I, the SMARTE Methodology," IEEE Transactions on Power Apparatus and Systems, Vol. PAS-101, No. 2, February 1982, pp. 340-346.

[9]. H. Merrill, and F. Schweppe, "Strategic Planning for Electric Utilities: Problems and Analytic Methods," Interfaces, Vol. 14, No. 1, January & February 1984, pp. 72-83.

[10]. A. Dief, Sensitivity Analysis in Linear Systems, Springer-Verlag, 1986.

[11]. R. Moore, Interval Analysis, Prentice-Hall, Englewood Cliffs, N.J., 1966.

[12]. J. Matthews, R. Broadwater, and L. Long, "The Application of Interval Mathematics to Utility Economic Analysis," IEEE Transactions on Power Systems, February 1990, pp. 177-181.

[13]. W. J. Fabrycky and B. S. Blanchard, Life-Cycle Cost and Economic Analysis, Prentice-Hall, 1991.

[14]. R. Broadwater and H. Shaalan, "A Bound Conserving Interval Mathematics Approach To Electric Utility Economic Evaluation," Proceedings of North American Power Symposium, Carbondale, IL, October, 1991, pp. 170-173.

[15]. D. S. Moore, Statistics - Concepts and Controversies, W. H. Freeman and Company, San Francisco, 1979.

[16]. K. A. Brownlee, Statistical Theory and Methodology in Science and Engineering, John Wiley and Sons, New York, 1965.

[17]. J. L. Melsa and A. P. Sage, An Introduction to Probability and Stochastic Processes, Prentice-Hall, 1973.

[18]. L. B. Rall, "Optimization of Interval Computation," in Interval Mathematics 1980 (K. Nickel, Ed.), Proceedings of an International Symposium on Interval Mathematics, Germany, Academic Press, 1980, pp. 489-498.

[19]. E. Hansen, "An Overview of Global Optimization Using Interval Analysis," in Reliability in Computing - The Role of Interval Methods in Scientific Computing (R. Moore, Ed.), International Workshop at Ohio State University, Academic Press, 1988, pp. 289-308.

[20]. H. Shaalan, "Using Distribution Automation to Satisfy Load Growth Demands," Proceedings of the 3rd IASTED International Conference on Power & Energy Systems, November 8-10, 1999, Las Vegas, Nevada, pp. 211-213.