Figure 7.1: Histogram of slopes for all three cases
Table 2.1: The advantages and disadvantages of manual and automatic calibration Figure 3.3: Schematic representation of a half-space calculation in two dimensions Figure 3.4: Schematic representation of convex hull peeling in two dimensions A convex layer can be defined as follows: construct the smallest convex hull which encloses all sample points (X1, ..., Xn). The sample points on its perimeter are designated as the first convex layer and removed. The convex hull of the remaining points is then constructed; the points on its perimeter form the second convex layer. The process is repeated, and a sequence of nested convex layers is formed. The higher the layer a point belongs to, the deeper the point lies within the data cloud. A schematic representation of convex hull peeling in two dimensions is given in Figure 3.4. Here we can see that the innermost convex hull has a depth of 3, the second convex hull has a depth of 2, and so on. Hence, given a number of points, we can compute a convex hull; the points on the outermost convex hull have depth 1, and the depth increases as we move to the next inner hull. Convex hull peeling depth is relatively simple to compute; however, it is not robust in the presence of outliers (Hugg et al., 2006). Simplicial median The Upper Neckar basin is located in South-West Germany in the state of Baden-Württemberg. The region is flat, undulating in the east and north; the Black Forest and the Swabian Alb lie in the west and south. The 4000 km² Upper Neckar basin was subdivided into 13 subcatchments (Figure 4.1).
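The peeling procedure described above can be sketched in a few lines. This is an illustrative implementation using SciPy's `ConvexHull`; the function name and the handling of the final leftover points are our own choices, and degenerate (e.g. collinear) layers are not handled:

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_hull_peeling_depth(points):
    """Assign each point the index of the convex layer it lies on:
    the outermost hull gets depth 1, the next peeled hull depth 2, etc."""
    pts = np.asarray(points, dtype=float)
    depth = np.zeros(len(pts), dtype=int)
    remaining = np.arange(len(pts))
    layer = 0
    # A d-dimensional hull needs at least d + 1 points.
    while len(remaining) > pts.shape[1]:
        layer += 1
        hull = ConvexHull(pts[remaining])
        on_hull = remaining[hull.vertices]
        depth[on_hull] = layer
        remaining = np.setdiff1d(remaining, on_hull)
    depth[remaining] = layer + 1  # leftover interior points
    return depth
```

For a square with one centre point, the four corners form layer 1 and the centre point layer 2, matching the nesting shown schematically in Figure 3.4.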
Figure 4.1: Study area: Upper Neckar catchment in South-West Germany Table 4.1: Summary of the size of the different subcatchments in the Upper Neckar catchment Figure 4.2: Study area: 28 catchments of England and Wales, UK Table 4.3: Different UK catchments and their properties Table 4.4: Annual mean discharge and precipitation for all UK catchments Figure 4.3: Index map of the Kolar basin As seen from Figure 4.3, the coverage of the rainfall stations providing daily rainfall for the basin is not uniform; there is no station in the northern part of the basin. Daily gauge and discharge data were available at Satrana for the monsoon season only. Pan evaporation data from a station located in an agricultural area near the basin were used. An overview of the catchments can be found in Jain (1990, 1993) and Singh et al. (2009). The fourth study area for this research is located in the south of Germany. The river Rems originates from Lutenburg near the city of Aalen in Baden-Württemberg. It flows westwards to the river Neckar, of which it is a tributary. The catchment is about 580 km². The Rems catchment is composed of four subcatchments: Schwäbisch-Gmünd, Haubersbronn, Schorndorf and Neustadt (fig. 4.4). The data availability for this catchment is good. The hydrological and meteorological data series include discharge, precipitation, temperature, vapor pressure, humidity, wind speed, sunshine duration, snow depth, etc. The data were available in daily resolution for this research work. Figure 4.4: Rems catchment in southern Germany and its subcatchments (Thapa, 2009) Figure 4.5: Monthly mean precipitation 1990-2005 for Rems catchments (stations with 100 % observation) Figure 4.6: Annual average precipitation for Rems catchments (stations with 100 % observation) Figure 4.7: Monthly average discharge (1990-2005) for Rems catchments Figure 4.8: Annual average discharge for Rems catchments Table 4.5: Summary of the basic characteristics of the Rems catchment 11 m³/s.
The overall hydrological characteristics of these subcatchments are shown in Table 4.5. Schwäbisch-Gmünd and Haubersbronn are gauges in the upper catchment, having a smaller drainage area, which corresponds to a much lower flow over the whole year. Neustadt has the smallest slope and the highest mean annual flow. For more details about the catchments, refer to Thapa (2009) and Liang (2010). Figure 4.9: Schematic representation of the HBV model Figure 4.10: Schematic representation of the HYMOD model The conceptual three-reservoirs model was developed by Jain (1993). The concept of the model, described by Jain (1993) and Singh et al. (2009), is given here. In this model, the rainfall-runoff process is conceptualized by three reservoirs: the catchment is represented with the help of three storages. The first storage, termed surface storage, represents the water stored on the surface and in the top few centimeters of soil of the catchment. It has a maximum storage capacity given by Smax. The second storage represents the catchment soil moisture storage and has a maximum water-holding capacity given by Cmax. The third storage represents the groundwater storage. The possible range of the model parameters is given in Table 4.8.
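To make the three-storage idea concrete, here is a minimal sketch of one daily step of such a bucket model. The spill/percolation rules and the names `smax`, `cmax`, `k_soil`, `k_gw` are our illustrative assumptions, not the exact formulation of Jain (1993):

```python
def three_reservoir_step(s_surf, s_soil, s_gw, rain, et,
                         smax, cmax, k_soil, k_gw):
    """One daily step of a simplified three-storage bucket model:
    surface bucket overflows into the soil bucket, the soil drains
    linearly to groundwater, and groundwater drains to runoff."""
    s_surf = max(s_surf + rain - et, 0.0)
    overflow = max(s_surf - smax, 0.0)   # surface excess spills to soil
    s_surf -= overflow
    s_soil += overflow
    spill = max(s_soil - cmax, 0.0)      # soil excess becomes fast runoff
    s_soil -= spill
    perc = k_soil * s_soil               # linear percolation to groundwater
    s_soil -= perc
    s_gw += perc
    baseflow = k_gw * s_gw               # linear groundwater outflow
    s_gw -= baseflow
    runoff = spill + baseflow
    return s_surf, s_soil, s_gw, runoff
```

By construction every unit of water entering the buckets either remains stored or leaves as runoff, so the step conserves mass, which is a useful sanity check for any such conceptual model.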
Table 4.8: The possible range of the three-reservoirs model parameters Figure 4.11: Structure of the three-reservoirs model (Jain, 1993) Table 4.9: Input data for the WaSiM-ETH model Figure 4.12: Structure of WaSiM-ETH using the TOPMODEL approach (Liang, 2010) Table 5.1: Model performance for the observed series using optimal parameters obtained using 100 randomly perturbed discharge data sequences Table 5.2: Model performance for the observed series using optimal parameters obtained using 100 randomly perturbed temperature data sequences Figure 5.1: Scatter plot of the model parameters obtained by optimization using random discharge errors Table 5.3: Model parameter ranges for the Rottweil (Neckar) catchment 5.3 Geometrical Structure of the Good Parameter Set Table 5.4: Model performance for the N = 10000 random parameter sets with respect to the data depth calculated on the basis of the points selected corresponding to the upper 10 % performance Random parameter vectors were generated, and the depth of the points of Y with respect to X_L was calculated. For all parameters θ ∈ Y, the hydrological model was run and the performances calculated. The results are evaluated for parameters such that D(θ) > L, exemplified in Table 5.4 with the statistics of the performances. One can see that the randomly generated parameter vectors which possess high depth have good model performance. The standard deviation of the performance decreases with increasing depth, showing that in the deep interior of the set all parameter vectors perform similarly. These results show that for this case one can geometrically identify parameter vectors which are good. Note that even if the best performance is related to the deepest subset, this is not necessarily always the case, since the global optimum might itself correspond to a low depth. Table 5.5: Runoff characteristics for different time periods The model performance was calculated for each time period.
The set of good parameter vectors was identified for each time period separately and the depth of each parameter vector with respect to this set was calculated. In this way, three depth values were assigned to each parameter vector. The sets with the 50 and 150 deepest parameter vectors were identified for each time period. The intersection of the convex sets corresponding to the 50 deepest points consisted of 36 points; for the 150-point sets it consisted of 84 points, indicating that depth is stable over all time periods. Note that a parameter vector was considered to be in the intersection if it had positive depth with respect to the sets considered. As a set of 10000 points was considered, an independent selection of two sets of 150 points would have led to a much smaller intersection. Figure 5.2: The performance of the model using different depths Table 5.6: Model performance for parameter vectors according to their depth corresponding to the time period 1961-1970 Parameter vectors with large depth are thus robust with respect to the selected time period. As a second test, the parameters with greater depth for one time period were used for another time period and their performance was calculated. In Table 5.6, the results of the transferred model quality with respect to the depth corresponding to the time period 1961-1970 are shown. Note that the subset of the boundary points was selected by choosing only points for which the performance exceeds a given threshold. This way we obtained two sets with the same mean performance. Note that for the interior points, the performance in the other time periods is significantly better than that of the boundary points. The standard deviations of the performance for the validation time periods are smaller for the interior points. This indicates that the transfer of these parameters is more reasonable for the parameter vectors from the interior.
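The "positive depth" criterion used above, i.e. membership in the convex hull of a finite point set, can be checked with a small feasibility linear program: a point x lies in the hull of P iff x is a convex combination of the points of P. The helper name is ours, and SciPy's `linprog` is assumed available:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """True iff x has positive depth w.r.t. the point cloud, i.e. there
    exist weights lam >= 0, sum(lam) = 1 with points.T @ lam = x."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    A_eq = np.vstack([pts.T, np.ones(n)])      # convex-combination constraints
    b_eq = np.append(np.asarray(x, dtype=float), 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n)      # zero objective: feasibility only
    return bool(res.success)
```

With this test, a vector belongs to the intersection of several time-period sets exactly when `in_convex_hull` returns True for every set.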
Table 5.7: Model performance for the inner and the shifted boundary and deep points Figure 5.3 explains the construction of the three points in one dimension. The above construction of the parameter vectors C1, C2 and C3 was carried out for a large number of randomly selected pairs θ1 and θ2. The θ1 and θ2 were selected in such a manner that their mean performance was the same. Table 5.7 shows the statistics of the Nash-Sutcliffe coefficients for the sets corresponding to C1, C2 and C3. One can see that the inside points all have good performance and the standard deviation is small. Points at C2 (outside points) have the worst performance, while C3 is better than C2 but worse than C1. The skewness of the performance is nearly zero for the inside set, while in the other cases the strong negative skew indicates that in some cases the performance loss due to the shift outside of the set is extremely high. The same alteration of the parameters leads to less performance loss for deep points than for shallow points. Further, there is no loss if the parameter vector remains in the convex set of deep parameters. This again highlights the advantage of deep parameter vectors. Figure 5.3: Construction of the points C1, C2 and C3 in one dimension for sensitivity analysis of parameters Figure 5.4: Systematic representation of the ROPE algorithm Figure 5.5: Histograms of the model performances for the different iterations of the algorithm for the Süssen catchment Figure 5.6: Parameter value vs. model performance for the sets obtained in iteration 2 (crosses) and iteration 4 (circles) for the Süssen catchment The performance of the set obtained in iteration 4 is better than that corresponding to iteration 2, but the parameter range remains the same. Figure 5.7 shows the two-dimensional scatter plot of the two model parameters for iterations 2 and 4 obtained for the Tübingen catchment. One has the impression that these parameters can take a wide range of values, and that there is no difference between the two sets.
The ranges of the parameters for the Rottweil catchment for iterations 1 and 4 are listed in Table 5.3. Even if the ranges are very similar for many parameters, one has to bear in mind that these are two-dimensional projections of 9-dimensional sets. The sets themselves are very different; the ratio of their 9-dimensional volumes is approximately 0.01 (calculated as a Monte Carlo integral). Figure 5.7: Parameter values for the sets obtained in iteration 2 (crosses) and iteration 4 (circles) for the Tübingen catchment 5 Robust Estimation of Hydrological Model Parameters Figure 5.8: Hydrograph with confidence interval for boundary points and inner points Figure 5.9: Confidence band width of high depth vs. confidence band width of low depth 5.5 Application of the ROPE Algorithm to Different Models and Different Catchments The description of the catchments is given in Chapter 4. The HYMOD model was calibrated for the time period 1981-85 and validated for the other time period. Table 5.9 shows the calibration and validation results for catchment 27. It can be seen from Table 5.9 that the ROPE algorithm can be used to calibrate the HYMOD conceptual model on catchments other than those where it was developed. The parameters obtained by calibration are good for transferring to other time periods. The results from the other catchments show a very similar trend. Table 5.9: Model performance (NS) for the calibration time period 1981-1985 and validation for the other time period for catchment 27 in the UK Table 5.8: Model performance for the calibration time period 1961-1970 and validation for the other time periods for Rottweil Table 5.10: Range of model parameters The data for the period 1983-86 were chosen for model calibration. The length of the computational time step was one day. First, a few model runs were taken to get some idea about the range of the parameters. Maximum and minimum values of the model parameters are presented in Table 5.10.
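A volume ratio such as the one quoted above can be estimated by Monte Carlo integration: sample uniformly in a common bounding box and count hull membership. The following two-dimensional sketch uses `scipy.spatial.Delaunay` for the membership test; the function name and sampling scheme are our choices:

```python
import numpy as np
from scipy.spatial import Delaunay

def mc_volume_ratio(set_a, set_b, n_samples=20000, seed=0):
    """Estimate vol(hull(A)) / vol(hull(B)) by Monte Carlo: draw uniform
    samples in the joint bounding box and count points inside each hull."""
    rng = np.random.default_rng(seed)
    a = np.asarray(set_a, dtype=float)
    b = np.asarray(set_b, dtype=float)
    both = np.vstack([a, b])
    lo, hi = both.min(axis=0), both.max(axis=0)
    samples = rng.uniform(lo, hi, size=(n_samples, both.shape[1]))
    in_a = Delaunay(a).find_simplex(samples) >= 0   # inside hull of A
    in_b = Delaunay(b).find_simplex(samples) >= 0   # inside hull of B
    return in_a.sum() / max(in_b.sum(), 1)
```

For a unit square inside a 2x2 square the estimate converges to 0.25; in higher dimensions far more samples are needed, since the hulls occupy a shrinking fraction of the bounding box.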
Next, a large number of uniformly distributed parameter sets were generated within the identified range and model runs were carried out with each set of parameters. The Nash-Sutcliffe coefficient (Nash and Sutcliffe, 1970) was used as the objective function for the model evaluation. Based on this index, the best 10 percent of the parameter sets were chosen and the maximum and minimum values of the parameters were determined again. Table 5.11: Range of model parameters after calibration This iterative narrowing also reduces the risk of being trapped in local minima. The ROPE algorithm has the feature that it does not give a single value for each parameter after calibration; instead, it gives a set of parameter vectors. After the calibration, the range of the parameters has been considerably narrowed down, and the final range is given in Table 5.11. It can be seen that some parameters, like Cmax, have the maximum reduction in range. The final range of the parameters is still very wide, but the retained parameter sets perform equally well. Please note that not all parameters in this range are good, since the good set is not a uniform space. Table 5.12: Performance for calibration and validation It can be seen here that some parameters have a large range and some a small one. This happens because of the different sensitivities of the parameters. The model was validated using the data for the period 1987-88. The robustness of the calibrated parameters can be seen in Table 5.12, in which the statistics of 1000 parameter sets are given. It is very clear from Table 5.12 that the parameters obtained by the ROPE algorithm are well transferable to another time period. The value of the NS index is poor because of the quality of the data. Figure 5.11 shows the observed and simulated hydrographs for the validation period. It may be noted that in the previous study by Jain (1993), on the same basin and using the same model, similar results were obtained. The result of the ROPE calibration was better as compared to the previous study due to its robustness in the transferability of parameters in time. Further, instead of a single value for each parameter, the ROPE calibration gives a space of good parameter sets. Figure 5.11: Observed and modelled hydrograph for the validation period Figure 5.16: Contour map for the Leon function and optimal region Figure 5.18: Contour map for the McCormick function and optimal region Figure 5.17: McCormick function Figure 5.19: Styblinski-Tang function Figure 5.20: Contour map for the Styblinski-Tang function and optimal region 5.8 Case Study Results from the SRWP Algorithm Figure 5.22: Contour map for the Levy function and optimal region Figure 5.21: Levy function Figure 5.23: Rastrigin function Figure 5.24: Contour map for the Rastrigin function and optimal region Figure 5.26: Contour map for the Six-Hump Camelback function and optimal region Figure 5.25: Six-Hump Camelback function Figure 5.27: Improvement of performance by the SRWP algorithm The improvement slows as the number of iterations increases. This is because the volume of the parameter space shrinks so much that there is very little scope left to improve the model parameters. It is very obvious that for a smaller difference between maximum and minimum we need a larger number of iterations, and there will consequently be more rejections. Table 5.13 shows the transferability of the parameters to the other time periods. It is very clear that the parameters have performed well for all three time periods. Table 5.13: Model performance for the calibration time period 1961-1970 and validation for the other time periods for Rottweil Table 5.14: Initial and final parameter ranges Figure 5.28 shows the spread of the parameter ranges from the initial N-percentage sets to the final parameter sets. It can be seen from the figure that at the N-percentage stage we have a wide range of parameters, which narrows down as the number of iterations increases. As we cannot plot an eight-dimensional figure, a plot matrix is made for clear visualization.
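One "generate uniformly, keep the best fraction" step of the kind described above can be sketched as follows. This is a simplified sketch: the depth-based resampling of the full ROPE algorithm is omitted, and the function name and return convention are ours:

```python
import numpy as np

def rope_iteration(model_perf, bounds, n=2000, best_frac=0.10, seed=None):
    """One simplified calibration iteration: sample parameter vectors
    uniformly within `bounds`, keep the best-performing fraction, and
    return the retained vectors plus their per-parameter min/max ranges.

    `model_perf` maps a parameter vector to a performance value
    (higher is better, e.g. the Nash-Sutcliffe coefficient)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    samples = rng.uniform(lo, hi, size=(n, len(lo)))
    perf = np.array([model_perf(p) for p in samples])
    best = samples[perf >= np.quantile(perf, 1.0 - best_frac)]
    ranges = np.column_stack([best.min(axis=0), best.max(axis=0)])
    return best, ranges
```

Repeating this step with the narrowed ranges reproduces the shrinking parameter ranges reported in Tables 5.10 and 5.11; the full algorithm additionally resamples inside the deep region of the retained set.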
Figure 5.29 shows the plot matrix of the initial N-percentage parameters. Here it is clear that there is no clear structure in any of the parameters. However, from the final plot matrix of the parameters (fig. 5.30) we can see that the parameter vectors are less scattered and more structured. Figure 5.30: Plot matrix of the parameters at the final iteration Calibration of the model using the methodology developed in this chapter gives a very similar result to that obtained by global optimization. The final parameters obtained by SCE-UA optimization are given in Table 5.16. The result of the sequential calibration method was better as compared to the previous study due to its robustness in the transferability of parameters in time. Also, the convergence is faster than SCE-UA, and it does not depend much on the initial values either. The advantage of the sequential calibration method over global optimization lies in its final parameters: in the sequential calibration method we get a parameter space instead of a single value for each parameter. This helps to address the equifinality problem to some extent. Small changes in the parameters do not change the performance. Hence, the results of the sequential calibration method are more robust. Table 5.15: Model performance for the calibration time period 1961-1970 and validation for the other time periods for Rottweil using SCE-UA Table 5.16: Initial and final parameter ranges in SA and SCE-UA Here, NS is the Nash-Sutcliffe coefficient, Qo is the observed discharge and Qm is the model discharge. RMSE is the root mean square error, N is the simulation length, VE is the volume error and PE is the peak error; Qm(peak) is the peak model discharge and Qo(peak) is the observed peak discharge. The peak discharge was defined based on a threshold value. Each catchment can have a different threshold value. The case study presented in this chapter uses 20 m³/s as the threshold.
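The four criteria just listed can be implemented compactly. The NS and RMSE forms below are standard; the exact normalizations of VE and PE used in the thesis' equations (6.1 to 6.4) are not reproduced here, so the forms chosen should be read as common-convention assumptions:

```python
import numpy as np

def nash_sutcliffe(q_obs, q_mod):
    """NS = 1 - sum((Qo - Qm)^2) / sum((Qo - mean(Qo))^2)."""
    q_obs, q_mod = np.asarray(q_obs, float), np.asarray(q_mod, float)
    return 1.0 - np.sum((q_obs - q_mod) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def rmse(q_obs, q_mod):
    """Root mean square error of the simulated discharge."""
    q_obs, q_mod = np.asarray(q_obs, float), np.asarray(q_mod, float)
    return float(np.sqrt(np.mean((q_obs - q_mod) ** 2)))

def volume_error(q_obs, q_mod):
    """Relative error of the total discharged volume (assumed convention)."""
    q_obs, q_mod = np.asarray(q_obs, float), np.asarray(q_mod, float)
    return float((q_mod.sum() - q_obs.sum()) / q_obs.sum())

def peak_error(q_obs, q_mod, threshold=20.0):
    """Mean relative error on time steps where the observed flow exceeds
    the peak threshold (20 m3/s in this case study)."""
    q_obs, q_mod = np.asarray(q_obs, float), np.asarray(q_mod, float)
    peaks = q_obs >= threshold
    return float(np.mean((q_mod[peaks] - q_obs[peaks]) / q_obs[peaks]))
```

The logarithmic variants described in the next section are obtained by applying each function to `np.log(q_obs)` and `np.log(q_mod)` instead of the raw discharges.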
The notation for each objective function used in the rest of this chapter is given in Table 6.1. The objective functions used for this research are the Nash-Sutcliffe coefficient (Nash and Sutcliffe, 1970), the root mean square error, the volume error and the peak error. Equations (6.1 to 6.4) describe the mathematical form of the objective functions. To normalize the effect of extreme values, a logarithmic version of each of the above-mentioned objective functions was defined; to calculate the logarithmic objective function, the discharge is replaced with the logarithm of the discharge. Table 6.2: Parameter ranges obtained by the different objective functions after calibration The parameter ranges after calibration are compared with the initial values in Tables 6.2 and 6.3. From these tables it is very clear that the shrinkage in the range of the parameters varies from one objective function to another. The range also varies from parameter to parameter within an objective function. All eight objective functions have different parameter ranges when compared to the initial range of the parameters. For example, the parameter Cmax has a very wide range in OF3 and OF7 compared to the others. The search domains vary from one objective function to the other. This shows that each objective function gives importance to a different part of the hydrograph. Consequently, a proper choice of objective function is very important. Hence, a proper diagnosis of these parameter spaces is given in the following sections. Table 6.3: Parameter ranges obtained by the different log objective functions after calibration Impact of Objective Function on Mapping of Model Parameters During Calibration Figure 6.1: Parameter values for the different objective functions In the initial diffusion space, good performance (red points) is scattered all over the space. Please bear in mind that in the diffusion space some of the sensitive and important parameters may dominate the space. However, this does not affect the visualization of the shrinkage of the volume and the rate of shrinkage.
We can clearly see that at the initial iteration of the ROPE algorithm, for all the objective functions, good parameters are widely scattered in space. As the iterations of the ROPE algorithm increase, these good parameters shrink toward a specific region for each objective function. It is very noticeable that the optimal parameters are not in the same region for all objective functions. Only OF1 and OF2 give a similar region, which strengthens the previous result discussed above. From these diffusion-space figures one can see that the rate of shrinkage of the parameter space is much faster for OF4 than for OF1, OF2 and OF3. This also confirms the result obtained from the volume calculation by Monte Carlo integration. Hence, it can be concluded that each objective function has a different shrinkage rate for the parameter space in calibration. Figure 6.2: Parameter values for the different logarithmic objective functions Figure 6.3: Decrease in the volume of the space by each objective function Figure 6.4: Initial parameters in the diffusion space (red colour represents higher performance) Figure 6.5: Decrease in the volume of the space after the third and fourth iterations of the ROPE algorithm by the different objective functions in the diffusion space Figure 6.7: Decrease in the volume of the space after the fifth iteration of the ROPE algorithm by the different objective functions in the diffusion space
Figure 6.8: Intersection of the parameter spaces for the different objective functions at different iterations of the ROPE algorithm (the numbers specify the strength of the intersection) Figure 6.9: An ideal intersection of the parameter spaces for different objective functions Figure 6.10: Comparison of the performance from spaces P1 and P2 over the whole space Table 6.4: Optimized with different objective functions, but all the performance criteria are calculated for each objective function Table 6.5: Optimized by the logarithm of the different objective functions, but all the performance criteria are calculated for each objective function Table 6.6: Results from hierarchical calibration Figure 6.11: Hydrographs from the different objective functions Figure 6.12: Difference of the 95 and 5 percent bounds for the different objective functions Figure 6.13: Distribution of the parameter (Cmax) obtained by the different objective functions If, for example, we choose the volume error as the objective function for calibration, this can lead to false and uncertain predictions. So it is very important to choose a proper objective function and to provide a large enough number of iterations to reach the optimal parameters. Figure 7.2: Example of event selection from the Neckar catchment (Rottweil) where T is the number of observation time steps available. Again, for simplicity, denote X_d(t) = (X(t-d+1), X(t-d+2), ..., X(t)). For each t, the statistical depth of X_d(t) with respect to the set X_d is calculated and denoted by D(t). The statistical depth is invariant to affine transformations; however, non-linear transformations might have an effect on the depth. Depth can be calculated using the untransformed observations, their logarithms or their ranks. Time steps t with D(t) < D0 are considered to be unusual. In this chapter, critical events are defined around the unusual (low depth) days.
A time t is part of a critical event if there is an unusual time t* in its neighborhood, defined as |t - t*| < δ. An example of events selected using discharge and API is given in Figure 7.2. It is important to see that the critical events selected by API and discharge fall at almost the same time periods in the series. This shows that we can use either API or discharge for critical event selection. One can have the impression that only high-magnitude API and discharge can be unusual in nature. This is not true, however: low API and discharge can also form critical events, though this is not shown in Figure 7.2. Table 7.1: Calibration of the HYMOD model at Rottweil over the period 1961-70 Tables 7.1 and 7.2 show the calibration of the HYMOD and HBV models, respectively, for the time period 1961-70. From the tables it is clear that the performance of the model calibration based on the critical time periods is comparable with model calibration on the whole data series. Cases 2(a) and 2(b) have nearly equal Nash-Sutcliffe coefficients in calibration for both models. This shows that event selection done either using API or discharge gives similar parameters in the calibration of a model. Both the HBV and the HYMOD model behaved very similarly in the calibration period. The most noticeable point is that only six percent of the data is enough to calibrate equally well as when using the whole data, provided that the events are selected properly. When the events were selected randomly, the calibration performance was poor as compared to events selected with the help of the depth function. This is because data depth selects only those events which are unusual and contain more information. It is not a big surprise to get equally good results in calibration when carefully selected data are taken for calibration. Further, its robustness should be reflected in the prediction time period. Tables 7.3 and 7.4 show that a model calibrated on unusual events (selected based on discharge or API) transfers as well as when the whole data series is used for calibration.
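The depth-based event selection described above can be sketched as follows. Mahalanobis depth 1/(1 + m²) is used here as a simple stand-in for the statistical depth (an assumption for illustration, not necessarily the depth function used in the thesis), and the threshold D0 is taken as a quantile of the depth values:

```python
import numpy as np

def critical_events(x, d=3, depth_quantile=0.05, delta=2):
    """Flag time steps lying near unusual (low-depth) delay vectors.

    Embeds the series into d-dimensional delay vectors X_d(t), computes
    a Mahalanobis-type depth, marks the lowest-depth times as unusual,
    and flags every time step within `delta` steps of an unusual one."""
    x = np.asarray(x, dtype=float)
    # Delay embedding: row j is (x[j], ..., x[j + d - 1]), ending at t = j + d - 1.
    emb = np.column_stack([x[i:len(x) - d + 1 + i] for i in range(d)])
    diff = emb - emb.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(emb.T))
    m2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    depth = 1.0 / (1.0 + m2)
    d0 = np.quantile(depth, depth_quantile)          # threshold D0
    unusual = np.where(depth <= d0)[0] + d - 1       # map back to time t
    critical = np.zeros(len(x), dtype=bool)
    for t in unusual:
        critical[max(t - delta, 0): t + delta + 1] = True
    return critical
```

A large spike in the series produces delay vectors far from the data cloud, so the surrounding days are flagged as a critical event, exactly the behaviour illustrated in Figure 7.2.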
The transferability was tested for three different time periods (1971-80, 1981-90, 1991-2000). In all the time periods both models behaved similarly well and the transferability was as good as if we had used the whole data series. At the same time, the performance of parameters obtained by calibration on randomly selected events was poor when transferred to other time periods. This shows that a careful selection of events is necessary for a proper identification of the model parameters. It is also important to note that events selected based on discharge or API give similar results in validation. Hence, it can be concluded that information-containing events can be selected either based on discharge or on API. Table 7.2: Calibration of the HBV model at Rottweil over the period 1961-70 Table 7.3: Validation of the HYMOD model at Rottweil over the period 1971-80 7 Calibration of Hydrological Models on Hydrologically Unusual Events Table 7.4: Validation of the HBV model at Rottweil over the period 1971-80 Figure 7.3: The sequential addition of years to the calibration of HYMOD, validated over 10 years (1961-70) Figure 7.4: The sequential addition of years to the calibration of HYMOD, validated over 10 years (1981-90) Figure 7.5: Conditional depth with precipitation curve for measuring events Table 7.5: Calibration of HYMOD for Rottweil for the time period 1991-00; event selection is based on predicted precipitation and known precipitation Table 7.6: Validation of HYMOD for Rottweil for the time period 1981-90; event selection is based on predicted precipitation and known precipitation Table 7.7: Statistics of the best 10 % parameter sets of WaSiM-ETH, calibrated by the ROPE algorithm Figure 7.6: Critical events selected from the year 1993 from the Rems catchment Table 7.8: Statistical entropy for the different parameters of WaSiM-ETH, where i is the class of the discretization and p_i is the corresponding probability of occurrence in that class.
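The entropy just defined, H = -Σ p_i log p_i over discretization classes, can be computed directly from a calibrated parameter sample. The function name and the equal-width binning are our choices; the thesis does not specify the discretization:

```python
import numpy as np

def statistical_entropy(values, n_classes=10):
    """Shannon entropy of a parameter sample over equal-width classes:
    low entropy means the sample concentrates in few classes (more
    identifiable), high entropy means it is widely scattered."""
    counts, _ = np.histogram(values, bins=n_classes)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log(p)))
```

A sample concentrated in a single class has entropy 0, while a uniform spread over all ten classes gives the maximum log(10), which is the contrast used to compare case 1 and case 2 in Table 7.8.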
The statistical entropy for all the parameters of the WaSiM-ETH model is given in Table 7.8. The entropy values for the parameters m, r and cmelt calibrated using case 1 are smaller, which implies that these parameters are more concentrated in one or two certain classes, while those calibrated using case 2 are more scattered. On the other hand, the parameters Tho, kp, Shmz, ky and Pop have a lower statistical entropy value in case 2. This shows that these parameters are more structured in this case, where we selected events by the ICE algorithm. This gives more identifiability to these parameters. For Kxor, the entropy values in both cases are more or less the same, which means the identifiability of this parameter on certain values doesn't change much. However, this doesn't mean these certain values will be the same, since entropy only represents the scattering degree of a sample, but not at which point the sample is scattered. The calibrated WaSiM-ETH model was validated for another time period (1994-1996). Table 7.9 shows the calibration and validation results. The comparison for both cases was made based on several objective functions, as given in the table. For the different objective functions, the performance of the parameter sets obtained by case 1 is slightly better than that of case 2 in the calibration period. However, if we look at the validation period, the results obtained by both cases are nearly the same. The mean Nash-Sutcliffe coefficients for the years 1994, 1995 and 1996 are 0.93, 0.78 and 0.88, respectively, in case 1 and are very similar in case 2. Calibration using either case is therefore nearly the same. To aid the visual appraisal of the results, the hydrographs for the calibration and validation periods are plotted in figure 7.8. Table 7.9: Statistics for the calibration and validation of WaSiM-ETH (best 10 % performance) ROPE calibration does not give a single parameter set, instead giving a parameter space; hence a confidence bound for the hydrograph is plotted in figure 7.8.
As can be seen from the figure, in both cases the dynamics of the hydrograph are the same in the calibration period, except at certain peaks (e.g. at days around 330 to 360). During the validation for the years 1994 to 1996, the results in case 1 and case 2 are nearly equal. This clearly indicates that the events selected by the ICE algorithm are also suitable for calibrating a very complex, physically based model. Figure 7.8: Calibration and validation with 90 % confidence Figure 7.9: Three-layer, feed-forward ANN structure Table 7.10: RMSE, SSE and correlation coefficient from the ANN model for the training period of the Chester site Table 7.11: RMSE, SSE and correlation coefficient from the ANN model for the validation period of the Chester site Table 7.12: RMSE, SSE and correlation coefficient of the ANN model for the training period of the Thebes site The error was very small for Case 1 and was almost the same for the remaining two cases. Figure 7.10: Observed and computed discharge by the different cases for the Chester validation period Figure 7.11: Observed and computed sediment concentration for each case for the Chester validation period Table 7.13: RMSE, SSE and correlation coefficient of the ANN model for the validation period of the Thebes site Figure 7.12: Observed and computed discharge by the different cases for the Thebes validation period An ANN trained using "data-rich" events is as good as one trained using the whole data set. Figure 7.13: Observed and computed sediment concentration by the different cases for the Thebes validation period Figure 7.14: Convex hull of the training and testing set For a visual appraisal of the concept, please refer to Figure 7.14. The region where the depth is greater than zero has very similar properties to the training set. Figure 7.15 shows the residuals between the modelled and observed discharge. In this figure, the calculated depth is normalized and plotted along with the residuals, to indicate where the depth is equal to zero or higher.
It can be appreciated from this figure that in the periods where the depth is zero the residuals are very high, while in the periods where the depth is higher the residuals are lower. This is indicated in the figure with dark and dotted circles, respectively. Similar results can be seen for the sediment data, as shown in figure 7.16. This illustrates that the points inside the convex hull of the training set are the points where we can expect low errors. Practically, it also indicates that testing points which lie in the convex hull of the training set are similar to the training set. Thus, without running the model, we can predict the performance of the model a priori by looking at the geometry of the training and testing data. 
7.7 Conclusions 
Figure 7.15: Residuals of the observed and computed discharge for the validation period at the Chester site 
Figure 7.16: Residuals of the observed and computed sediment concentration for the validation period at the Chester site 
Figure 8.1: Schematic outline of the RDPE algorithm using simulated annealing 
A window of size 2n + 1 was defined around each time period to optimize the parameters at that particular time period. During the optimization for each window, we obtained a time series of parameters. This time series of parameters can be used for two further purposes, namely for model structure analysis and for the improvement of model predictions. The model structure diagnosis can be done to define sensitive and insensitive periods for the parameters. At the same time, we can investigate the reason for the time-varying nature of the parameters. Further, the time series of parameters can be used to build a predictive model of the parameters, which can then be used for future predictions and makes them more realistic. 
Figure 8.2: Parameter beta and performance (NS) at low depth and at high depth 
Figure 8.3: Typical time series of beta with window size 
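The moving-window idea behind the parameter time series can be sketched as follows. A trivial one-parameter linear model stands in for the hydrological model, and least squares stands in for the calibration; both substitutions are illustrative assumptions, not the RDPE procedure itself:

```python
import numpy as np

def windowed_parameters(x, y, n=5):
    """For each time step t, fit the model parameter on the window
    [t - n, t + n] of length 2n + 1, yielding a time series of the
    parameter (as in figure 8.3). Model: y = a * x, fitted per window."""
    a = np.full(len(x), np.nan)            # edges stay undefined
    for t in range(n, len(x) - n):
        xs = x[t - n:t + n + 1]
        ys = y[t - n:t + n + 1]
        a[t] = np.dot(xs, ys) / np.dot(xs, xs)   # least-squares slope
    return a

# A slowly drifting "process" is recovered window by window:
t = np.arange(200)
x = np.ones(200)
true_a = 1.0 + 0.01 * t
y = true_a * x
a_hat = windowed_parameters(x, y, n=5)
```

The resulting series `a_hat` is the kind of parameter time series that the text then analyses for sensitive and insensitive periods.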
For example, the parameters L, K1, K0, DD, Dew etc. have a wide variation over time (fig. 8.6), while the variations of MAXBAS, Kperc, FC and PWP are very small (fig. 8.7). If we look into the model structure of the HBV model, it is clear that the parameters with wide variation over time have a direct relation to the actual processes in nature, where the processes are changing constantly. This diagnosis can be useful for building a parameter model, which will reduce computational time, since parameters will be calculated only when they should be active. This may lead to an improvement in model prediction. We can see from table 8.1 that using time-varying parameters there is an improvement in model calibration in terms of the Nash-Sutcliffe coefficient, from 0.81 to 0.87. This may be because of the flexibility of the parameters to adjust for a better fit; they are thus able to describe the processes in a more realistic way. 
Figure 8.4: Active and inactive periods for parameter beta 
8 Robust Dynamic Parameter Estimation for Hydrological Models 
Table 8.3: Validation using different parameters obtained by each method for different time periods 
Figure 9.1: Schematic representation of an inside catchment (blue star) and a case of extrapolation (green circle) 
Figure 9.2: The condition of step 7 of the algorithm for the case of one catchment property (red and black are the two extreme catchments and blue is the in-between catchment) 
The in-between catchment (blue point) is shown in figure 9.2(d). If this (blue) catchment can be regionalized using the selected property, then the set of good parameters corresponding to the (blue) catchment (in the blue rectangle) should have an intersection with the convex hull of the parameters corresponding to the two extreme catchments (black and red points). This can be verified in figure 9.2(e), where in our case the intersection is empty. In this case no linear function of the selected catchment property using the black and red catchments would lead to well-performing model parameters for the blue catchment. 
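The Nash-Sutcliffe coefficient quoted above (improving from 0.81 to 0.87) is one minus the ratio of the sum of squared errors to the variance of the observations. A minimal sketch with hypothetical discharge values:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / Var(obs).
    NS = 1 is a perfect fit; NS = 0 means the model is no better
    than the mean of the observations."""
    obs = np.asarray(obs, float)
    sim = np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 3.0, 2.0, 4.0, 3.0])   # hypothetical observed series
ns_perfect = nash_sutcliffe(obs, obs)        # perfect simulation
ns_mean = nash_sutcliffe(obs, np.full(5, obs.mean()))  # mean predictor
```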
Thus at least one more catchment property has to be taken into account. This is explained in figure 9.3, where the catchment property space is two-dimensional (two properties). The three catchments corresponding to the corners of the triangle in the property space possess good parameters in the convex sets denoted by the same colors. For an inside catchment (green point), the corresponding good set intersects with the convex hull corresponding to the three corners. Therefore the condition of step 7 is fulfilled. To see if this condition is fulfilled throughout, it should be checked for all catchments in order to decide whether two properties are enough to perform the regionalization. If not, an additional catchment property has to be considered. Note that the above algorithm can also be used to identify the properties for a regionalization with a non-linear function which is monotonic in each of its variables. 
Figure 9.3: The condition of step 7 of the algorithm for the case of two catchment properties (red, black and blue are the extreme catchments and green is the in-between catchment) 
Table 9.1: The catchment properties to be considered for regionalization (Yadav et al. (2007)) 
This performance measure restricts the possible parameter sets by taking only those which are nearly equally good for all years. 
Table 9.2: The numerical values of the considered catchment properties 
Table 9.3: List of the boundary and the inside catchments 
Thus two groups were obtained. The first group contained the so-called boundary catchments, whose properties cannot be obtained as a convex combination of the others. The second group contained the inside catchments, whose properties are in the convex hull of the properties of the boundary catchments. Table 9.3 lists the boundary and the inside catchments. Regionalization was performed and checked for each inside catchment. 
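The split into boundary and inside catchments amounts to asking whether a catchment's property vector is a convex combination of the others, which can be posed as a linear feasibility problem. A minimal sketch with hypothetical two-dimensional properties, using SciPy's `linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def convex_weights(boundary_props, target_props):
    """Find non-negative weights w with sum(w) = 1 such that the target
    catchment's properties equal the weighted sum of the boundary
    catchments' properties; return None if no such weights exist
    (i.e. the target lies outside the convex hull).
    boundary_props: (k, d) array, one row per boundary catchment."""
    k, _ = boundary_props.shape
    A_eq = np.vstack([boundary_props.T, np.ones((1, k))])  # B^T w = p, 1^T w = 1
    b_eq = np.append(target_props, 1.0)
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * k, method="highs")
    return res.x if res.success else None

# Three boundary catchments spanning a triangle in a 2-D property space:
boundary = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w_inside = convex_weights(boundary, np.array([0.3, 0.3]))   # inside catchment
w_outside = convex_weights(boundary, np.array([2.0, 2.0]))  # extrapolation case
```

A `None` result corresponds to the extrapolation case of figure 9.1, where the catchment cannot be represented without negative weights.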
As an illustration of the methodology, table 9.4 shows a set of possible boundary catchments and the corresponding weights for the target inside catchment 20. The linear parameter estimation was carried out using the deepest parameters of the catchments in the boundary set. For comparison, a set of randomly selected good parameters was also used. 
Table 9.4: A set of possible boundary catchments for catchment 20 and the corresponding weights 
Table 9.5: The performance (NS,) of the convex estimation (9.5) and the explicit multiple linear regression using the deepest and randomly selected parameter vectors for the inside catchments 
The performance of the explicit multiple linear regression (9.1) using the deepest and randomly selected parameter vectors for the estimation of the regression coefficients, and the corresponding results using the convex estimator (9.5), are shown in table 9.5. As one can see, the performances of the two estimators are similar. The deepest parameter vectors lead to the best estimations in both cases. The performance of the models using the regionalized parameters is comparable to the performance obtained by calibration. Note that one cannot expect a better performance for the target catchment than the performance of the model parameters on the catchments which were used for the regionalization. Figure 9.4 shows the observed and the simulated hydrographs for catchment 17 using the convex estimator (using 4 boundary catchments only) and multiple linear regression with the deepest point. Figure 9.5 shows the same in the case of an extrapolation. 
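Once the weights are known (as in table 9.4), the convex estimator assigns the target catchment the weighted sum of the boundary catchments' deepest parameter vectors. A minimal sketch; the weights and parameter values below are hypothetical numbers, not the ones from the table:

```python
import numpy as np

def convex_estimate(weights, boundary_params):
    """Convex estimator sketch: the target catchment's parameter vector
    is the weighted sum of the boundary catchments' parameter vectors,
    with weights taken from the catchment-property space."""
    weights = np.asarray(weights, float)
    boundary_params = np.asarray(boundary_params, float)
    return weights @ boundary_params        # (k,) @ (k, p) -> (p,)

# Hypothetical weights for 4 boundary catchments and their deepest
# parameter vectors (3 model parameters each):
w = np.array([0.1, 0.4, 0.3, 0.2])
theta = np.array([[1.0, 0.2, 5.0],
                  [2.0, 0.4, 6.0],
                  [1.5, 0.3, 4.0],
                  [2.5, 0.5, 7.0]])
theta_target = convex_estimate(w, theta)
```

Because the weights are non-negative and sum to one, each estimated parameter stays within the range spanned by the boundary catchments, which is what distinguishes this estimator from an unconstrained multiple linear regression.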
Figure 9.4: Observed and simulated hydrographs for catchment 17 using the convex estimator (using 4 boundary catchments only) and multiple linear regression with the deepest point 
9 Regionalization of the Hydrological Model Parameters Using Data Depth 
Figure 9.5: Observed and simulated hydrographs for catchment 17 using the convex estimator and multiple linear regression with the deepest point in case of extrapolation (negative weights allowed) 
Table 9.6: Cross-validated performance (NS,) of the related convex combination (negative weights allowed) and multiple linear regression using the deepest and randomly selected parameter vectors 
Figure 9.6: The performance (NS) of the convex estimator for target catchment 5 using the possible 4-catchment combinations with the deepest (red crosses) and randomly selected parameters (black stars) 
Table 9.7: Number of possible combinations for the choice of 4 boundary catchments which include the properties of the given inside catchments 
Figure 9.7: The performance (NS,) of the convex estimators for catchment number 20 using the deepest (red crosses) and lower-depth (black stars) parameter vectors. The estimators using the deepest parameter vectors including catchment 16 are marked with a green cross