Development and Application of Uncertainty Methods for Licensing
D’Auria, Francesco (first author; Conceptualization)
2007-01-01
Abstract
Uncertainty analysis aims at characterizing the errors associated with experiments and with the predictions of computer codes, in contradistinction to sensitivity analysis, which aims at determining the rate of change (i.e., the derivative) of code predictions when one or more (typically uncertain) input parameters vary within their ranges of interest. The paper associated with this presentation reviews the salient features of three independent approaches for estimating the uncertainties associated with the predictions of complex system codes and shows key results from the application of one of them. The first approach reviewed in this paper, the prototype for the propagation of code input errors, is the so-called “GRS method”; this class also includes the so-called “CSAU method” (Code Scaling, Applicability and Uncertainty) and the majority of methods adopted by the nuclear industry. Although the entire set of input parameters for a typical NPP (Nuclear Power Plant) input deck, ranging up to about 10^5 parameters, could theoretically be considered as uncertainty sources by these methods, only a ‘manageable’ number (of the order of several tens) is actually taken into account in practice. A range of variation, together with a suitable PDF (Probability Density Function), is then assigned to each uncertain input parameter actually considered in the analysis. The number of computations with the code under investigation needed to obtain the desired confidence in the results can be determined theoretically (it is of the order of 100). Subsequently, these computations (ca. 100) are performed with the code to propagate the uncertainties through it, from the inputs to the outputs (results).
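The run count quoted above (“of the order of 100”) follows from Wilks’ order-statistics formula, which GRS-type methods use to size the random sample; a minimal sketch of the counting (the function name is illustrative, not from the paper):

```python
import math

def wilks_runs_one_sided(coverage=0.95, confidence=0.95):
    """Smallest n such that 1 - coverage**n >= confidence, i.e. the
    maximum of n randomly sampled code outputs bounds the 'coverage'
    quantile of the output with the stated confidence (one-sided Wilks)."""
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n

print(wilks_runs_one_sided(0.95, 0.95))  # 59
print(wilks_runs_one_sided(0.95, 0.99))  # 90
```

The classic 95%/95% one-sided tolerance limit thus needs 59 runs; two-sided limits and higher confidence push the count toward the “order of 100” cited in the text, independently of how many input parameters are varied.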
The second approach reviewed in this paper is the propagation of code output errors, as representatively illustrated by the UMAE-CIAU (Uncertainty Method based upon Accuracy Extrapolation ‘embedded’ into the Code with capability of Internal Assessment of Uncertainty). Note that this class of methods includes only a few applications from industry. The use of this method depends on the availability of ‘relevant’ experimental data; here, the word ‘relevant’ refers to the specific NPP transient scenario under investigation for uncertainty evaluation. Assuming that such relevant data, typically Integral Test Facility (ITF) data, are available, and that the code correctly simulates the experiments, it follows that the differences between the code computations and the selected experimental data are due to errors. If these errors comply with a number of acceptability conditions, the resulting (error) database is processed and the ‘extrapolation’ of the error takes place. Relevant conditions for the extrapolation are:
• building up the NPP nodalization with the same criteria as were adopted for the ITF nodalizations;
• performing a similarity analysis and demonstrating that the NPP calculated data are “consistent” with the data measured in a qualified ITF experiment.
The third approach described in this paper is based on the ASAP (Adjoint Sensitivity Analysis Procedure) and GASAP (Global Adjoint Sensitivity Analysis Procedure) methods, extended to uncertainty evaluation in conjunction with concepts from Data Adjustment and Assimilation (DAA). The ASAP is the most efficient deterministic method for computing local sensitivities of large-scale systems when the number of parameters and/or parameter variations exceeds the number of responses of interest. The GASAP was originally designed as a global sensitivity analysis and optimization method by which a system’s critical points (i.e., bifurcations, turning points, saddle points, response extrema) can be determined in the combined phase-space formed by the parameters, forward state variables, and adjoint variables, and then subsequently analyzed by the efficient ASAP. DAA is the technique by which experimental observations are combined with code predictions and their respective errors to provide an improved estimate of the system state; in other words, DAA uses dynamic models to extract information from observations in order to reconstruct the structure of the system and to reduce the uncertainties in both the system parameters and the responses. The reason for considering this approach derives from its potential to open an independent way (i.e., different from the propagation of code input errors or of code output errors) of performing global uncertainty analysis. Key results from the application of the CIAU are discussed, addressing the following objectives:
a) application of the uncertainty methods to the licensing process, focusing on LBLOCA (Large Break Loss of Coolant Accident) analysis;
b) demonstration that the application of two different computer codes to the prediction of the same accident scenario produces similar results;
c) evaluation of the comparison between the conservative and the BEPU (Best Estimate Plus Uncertainty) approaches, addressing the convenience for the industry of applying uncertainty methods;
d) comparison among the uncertainty bands predicted by various uncertainty methods within the BEMUSE Project.
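The efficiency claim made for the ASAP (few responses, many parameters) can be illustrated on a toy linear model: one adjoint solve yields the sensitivity of a response to every parameter, whereas the forward route needs one extra solve per parameter. Everything below is an illustrative sketch under that assumption, not the paper’s ASAP implementation:

```python
import numpy as np

# Toy steady-state model A x = b(p) with response R = c @ x.
# Here b = p directly, so db/dp_i is the unit vector e_i.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
c = np.array([1.0, 0.0])      # response functional: R = x[0]
b0 = np.array([1.0, 2.0])
dbdp = np.eye(2)

lam = np.linalg.solve(A.T, c)     # single adjoint solve: A^T lam = c
dRdp_adjoint = dbdp.T @ lam       # dR/dp_i = lam @ (db/dp_i), all i at once

# Cross-check with forward differences: one extra solve per parameter.
x0 = np.linalg.solve(A, b0)
eps = 1e-6
dRdp_fd = np.array([
    (c @ np.linalg.solve(A, b0 + eps * dbdp[:, i]) - c @ x0) / eps
    for i in range(2)
])
print(np.allclose(dRdp_adjoint, dRdp_fd))  # True (the model is linear in b)
```

With thousands of parameters and a handful of responses, the adjoint route replaces thousands of forward solves by a few adjoint solves, which is the regime the text describes.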
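The DAA idea of combining a code prediction and a measurement, weighted by their respective errors, to obtain an improved estimate with reduced uncertainty can be sketched as a scalar inverse-variance (Kalman-type) update; the function name and the numbers are illustrative, not from the paper:

```python
def assimilate(pred, var_pred, obs, var_obs):
    """Inverse-variance weighted combination of a prediction and an
    observation; the result always has a smaller variance than the
    prediction alone (scalar Kalman update)."""
    gain = var_pred / (var_pred + var_obs)   # weight toward the less uncertain source
    best = pred + gain * (obs - pred)
    var_best = (1.0 - gain) * var_pred       # posterior variance is reduced
    return best, var_best

# Code predicts 600.0 K (variance 25.0); a gauge reads 590.0 K (variance 100.0).
best, var_best = assimilate(600.0, 25.0, 590.0, 100.0)
print(best, var_best)  # 598.0 20.0
```

The improved estimate leans toward the more trustworthy source and its variance drops below both inputs, which is the uncertainty-reduction mechanism the text attributes to DAA.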