Fig. 2. RBF neural network architecture.

The error function for a general weight vector A.

The RBF network (Fig. 2) is similar to a general feed-forward network trained using the back-propagation scheme, or FFBP net, in that it has three layers of neurons, namely input, hidden and output. However, it uses only one hidden layer, each neuron of which employs the Gaussian transfer function, as against the sigmoid function of the common FFBP. Further, while the training of the FFBP is fully supervised (where both input and output examples are required), that of the RBF is fragmented: unsupervised learning of the input information first classifies it into clusters, which in turn are used to yield the output after a supervised learning stage. This 'local tuning' is not only more efficient, but can sometimes model the data non-linearities in a better way (as in the present case, to be discussed later) than the common FFBP. Mathematically, the output y of an RBF network corresponding to input x (refer to Fig. 2) is computed by the equation:

y = Σ_{i=1}^{m} w_i exp(−‖x − c_i‖² / (2σ_i²))

where c_i and σ_i denote the centre and spread of the i-th Gaussian hidden neuron, and w_i is the corresponding output weight.

Fig. 4. The region of measurements around the western Indian coastline.

Fig. 5. Validation of the RBF network.

Fig. 6. Validation of the FFBP network (algorithm: resilient back-propagation (RP)). R = correlation coefficient; MAE = mean average error; RMSE = root mean square error; SI = scatter index; MSRE = mean square relative error; RBF = radial basis function; ANFIS = adaptive neuro-fuzzy inference system; FFBP = feed-forward back propagation; RP = resilient propagation.

Fig. 7. Validation of the ANFIS network.
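The fragmented training scheme described above (an unsupervised clustering of the inputs to fix the Gaussian centres, followed by a supervised fit of the output weights) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the k-means routine, the number of centres, the spread value and the toy sine-regression data are all assumptions made for the example.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Unsupervised stage: cluster the inputs to choose the RBF centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each input to its nearest centre, then move centres to cluster means
        labels = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centres[j] = pts.mean(axis=0)
    return centres

def rbf_features(X, centres, sigma):
    """Gaussian transfer function of each hidden neuron."""
    d2 = ((X[:, None, :] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# toy 1-D regression problem (illustrative data only)
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])

centres = kmeans(X, k=10)                       # unsupervised stage
H = rbf_features(X, centres, sigma=0.7)         # hidden-layer outputs
# supervised stage: least-squares fit of output weights (plus a bias term)
w, *_ = np.linalg.lstsq(np.c_[H, np.ones(len(H))], y, rcond=None)
pred = np.c_[H, np.ones(len(H))] @ w
print("training MSE:", float(np.mean((pred - y) ** 2)))
```

Only the output weights require supervised optimisation; the hidden layer is fixed by the clustering, which is the 'local tuning' the text refers to.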