Today I finished reading Chapter 2 of the dissertation 'Reservoir Properties from Well Logs Using Neural Networks'.
Summary
Neural networks
can be used to perform the two basic tasks of pattern recognition and function
approximation.
Pattern recognition
An unsupervised network followed by a supervised network:
1. An unsupervised network for feature extraction: transform the input patterns into an intermediate, smaller-dimensional space.
2. A supervised network for classification: map the intermediate patterns into one of the classes in an r-dimensional space, where r is the number of classes to be distinguished.
A single supervised network:
It performs both feature extraction and classification, based on the information it has extracted from the training data.
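To make the two-stage scheme concrete, here is a minimal Python sketch, assuming scikit-learn is available; the synthetic data, the use of PCA as a stand-in for the unsupervised feature-extraction network, and the layer sizes are hypothetical choices of mine, not the dissertation's setup.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # hypothetical well-log inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical two-class labels

model = make_pipeline(
    PCA(n_components=3),  # stage 1: unsupervised projection into a smaller space
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),  # stage 2: supervised classifier
)
model.fit(X, y)
print(model.score(X, y))  # training accuracy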
Function
approximation
Functional relationship: d = f(x), where the function f(·) is unknown.
Given a set of labeled examples {(x_i, d_i)}, i = 1, …, N, to map the function.
Approximate the unknown f(·) by a function F(·) realized by the network such that ‖F(x) − f(x)‖ < ε for all x, for some small ε > 0.
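As a minimal sketch of function approximation, assuming scikit-learn, the network below fits F(·) to noisy samples of an unknown f (here sin, a hypothetical stand-in) and reports the worst-case error over the training inputs.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(500, 1))
d = np.sin(x).ravel() + 0.1 * rng.normal(size=500)  # noisy examples of the unknown f

F = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
F.fit(x, d)
print(np.max(np.abs(F.predict(x) - np.sin(x).ravel())))  # worst-case error on the inputs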
Bias-variance dilemma
Overtraining: a small bias but a large variance.
Cross-validation approach: the early-stopping method of training.
In this study we use the overtraining approach for predicting porosity and water saturation, the cross-validation approach for predicting permeability, and a soft overtraining approach for lithofacies identification.
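A minimal sketch of the cross-validation (early-stopping) approach, assuming scikit-learn; the data are hypothetical. Training stops once the score on a held-out validation split stops improving, trading a little bias for a smaller variance.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(500, 1))
d = np.sin(x).ravel() + 0.1 * rng.normal(size=500)

net = MLPRegressor(hidden_layer_sizes=(20,),
                   early_stopping=True,      # hold out part of the training data
                   validation_fraction=0.2,  # size of the validation split
                   n_iter_no_change=10,      # patience before stopping
                   max_iter=5000, random_state=0)
net.fit(x, d)
print(net.n_iter_)  # epochs actually run before early stopping triggered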
An MLP network:
Advantages: It does not require any assumptions. It exhibits a great degree of robustness, or fault tolerance, because of its built-in redundancy. It has a strong capability for function approximation. Previous knowledge of the relationship between input and output is not necessary. The MLP can adapt to changes in the surrounding environment by adjusting its synaptic weights to minimize the error.
Disadvantages: We use the LM (Levenberg-Marquardt) algorithm to adjust the weights. The mean-square-error surface of a multilayer network has local minima, so training may get stuck in a local minimum instead of converging to the global minimum.
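For reference, one Levenberg-Marquardt weight update can be sketched as follows, assuming numpy; model and jacobian are hypothetical placeholders for the network's forward pass and its Jacobian with respect to the weights.

import numpy as np

def lm_step(w, x, d, model, jacobian, mu=1e-2):
    # One LM update: w <- w + (J^T J + mu*I)^(-1) J^T e
    e = d - model(w, x)                # residuals on the training data
    J = jacobian(w, x)                 # N x P Jacobian of outputs w.r.t. weights
    H = J.T @ J + mu * np.eye(len(w))  # damped Gauss-Newton Hessian approximation
    return w + np.linalg.solve(H, J.T @ e)

A larger mu makes the step closer to gradient descent; a smaller mu makes it closer to Gauss-Newton.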
Multiple network
system:
An actual multiple network system could consist of a mixture of ensemble and modular combinations at different levels. The architecture of the networks for predicting porosity and water saturation is an ensemble combination, while the architecture of the networks for predicting permeability and lithofacies is a modular and ensemble combination.
Ensemble
combination:
1. The bias of the ensemble-averaged function F̄(x) pertaining to the committee machine (CM) is the same as that of the function F(x) pertaining to a single neural network.
2. The variance of F̄(x) is less than that of F(x).
3. The individual experts should be purposely over-trained to reduce the bias, at the cost of increased variance.
The experts can be made to differ in their training parameters: the initial weights, the training data, the topology of the networks, and the training algorithm.
The problem with varying the training data is that it requires large amounts of data; alternatives include:
1. Adaptively resample the training sets.
2. Pick n samples from a training dataset of N samples (see the sketch below).
3. Generate virtual samples.
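A minimal sketch of option 2 (bootstrap resampling), assuming scikit-learn and numpy; each expert trains on its own resampled set and the ensemble output is the average of the experts. The data and network sizes are hypothetical.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
d = X[:, 0] ** 2 + np.sin(X[:, 1])  # hypothetical target

experts = []
for k in range(5):  # five ensemble members
    idx = rng.integers(0, len(X), size=len(X))  # draw N samples with replacement
    net = MLPRegressor(hidden_layer_sizes=(15,), max_iter=3000, random_state=k)
    experts.append(net.fit(X[idx], d[idx]))

y_bar = np.mean([net.predict(X) for net in experts], axis=0)  # ensemble average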
There are two approaches to providing weights to each network:
Unconstrained
approach:
Combined output: ŷ(x) = Σ_i w_i y_i(x), where y_i(x) is the output from the i-th individual network and the w_i are the weights.
Approximation error: e(x) = d(x) − ŷ(x), where d(x) is the desired output.
Constrained
approach:
There is an additional constraint, namely Σ_i w_i = 1.
The weights can be calculated from the training dataset alone, or partly from the training dataset and partly from the test dataset. This method is called the optimal linear combination (OLC) method.
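A minimal sketch of computing the combination weights by least squares, assuming numpy; Y holds each expert's predictions as a column and d is the desired output. The constrained branch imposes Σ w_i = 1 with a Lagrange multiplier; this is a generic OLC-style calculation, not necessarily the dissertation's exact procedure.

import numpy as np

def olc_weights(Y, d, constrained=False):
    # Y: (N, K) predictions, column k from expert k; d: (N,) desired output
    A = Y.T @ Y
    w = np.linalg.solve(A, Y.T @ d)  # unconstrained least-squares weights
    if constrained:                  # enforce sum(w) == 1
        ones = np.ones(len(w))
        u = np.linalg.solve(A, ones)
        w -= (w.sum() - 1.0) / (ones @ u) * u
    return w

The combined prediction is then Y @ olc_weights(Y, d, constrained=True).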
Modular
combination:
The single network
has a large number of adjustable parameters and hence the risk of overfitting
the training dataset increases.
The training time
for such a large network is likely to be longer than for all the experts
trained in parallel.
1. Avoids overfitting and saves training time.
2. Reduces model complexity, making the overall system easier to understand, modify, and extend.
Two ways to divide the task: class decomposition and automatic decomposition.
There are four
different models of combining component networks: cooperative, competitive,
sequential and supervisory.
Ensembles are
always involved in cooperative combination.
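A minimal sketch of class decomposition, assuming scikit-learn; a one-vs-rest split trains one small expert per class instead of one large network, and the experts' outputs are combined cooperatively. The data and class count are hypothetical.

import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = rng.integers(0, 3, size=300)  # hypothetical three lithofacies classes

modular = OneVsRestClassifier(
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
modular.fit(X, y)
print(len(modular.estimators_))   # three experts, one per class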
The input-output relationship is linear in MLR (multiple linear regression), whereas it is nonlinear in a neural network. The neural network method does not force predicted values to lie near the mean values and thus preserves the natural variability in the data.
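A minimal sketch of that contrast, assuming scikit-learn; on a nonlinear target the linear model's predictions collapse toward the mean of d, while the network's predictions retain a spread comparable to the data. The target function here is a hypothetical choice.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(400, 1))
d = x.ravel() ** 2 + 0.1 * rng.normal(size=400)  # nonlinear target

mlr = LinearRegression().fit(x, d)
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0).fit(x, d)

# spread of predictions vs. spread of the data
print(np.std(mlr.predict(x)), np.std(net.predict(x)), np.std(d))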
Tomorrow,
I will read more of the dissertation.