Today, I read the dissertation ‘Reservoir Properties from Well Logs Using Neural Networks’. The parts I read were the Summary, the Introduction, and part of Chapter 2, Neural Networks.
List of Acronyms:
CM: committee machine
MLP: multilayer perceptron network
MLR: multiple linear regression
MNLR: multiple nonlinear regression
OLC: optimal linear combination
MNN: modular neural network
CPI log: computer processed interpretation log
MWD: measurement while drilling
LMS: least mean square
MLFF: multilayer feed forward
Summary and Introduction
The basic unit of a CM is an MLP whose optimum architecture and training-dataset size were determined using synthetic data for each application. All of the programming was done in MATLAB, using various functions from the Neural Network Toolbox.
ANNs are most likely to be superior to other methods under the following conditions:
(1) The data is ‘fuzzy’.
(2) The pattern is hidden.
(3) The data exhibits non-linearity.
(4) The data is chaotic.
A single MLP, when repeatedly trained on the same patterns, will reach a different minimum of the objective function each time and hence give a different set of neuron weights. A common approach therefore is to train many networks and then select the one that yields the best generalization performance.
A CM, in which a number of individually trained networks are combined in one way or another, improves accuracy and robustness. The CM was demonstrated to improve porosity and permeability predictions from well logs by combining an ensemble of neural networks rather than selecting the single best network by trial and error.
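The idea of combining individually trained networks instead of picking one can be sketched very simply. This is not code from the dissertation, just a minimal Python illustration where three hypothetical trained members are stand-in functions and the committee output is their plain average (the dissertation also discusses weighted schemes such as OLC):

```python
def committee_predict(members, x):
    """Combine individually trained members by averaging their outputs."""
    outputs = [m(x) for m in members]
    return sum(outputs) / len(outputs)

# Three hypothetical trained networks with slightly different fits
# (each would normally be an MLP trained from a different initialization):
members = [
    lambda x: 2.0 * x + 0.10,
    lambda x: 2.1 * x - 0.05,
    lambda x: 1.9 * x,
]

y = committee_predict(members, 1.0)  # (2.1 + 2.05 + 1.9) / 3 ≈ 2.017
```

The averaging tends to cancel the member-specific errors that come from each network settling in a different minimum, which is the robustness argument made above.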
Chapter 2: Neural Networks
Mathematically, the function of neuron k can be expressed as y_k = φ(u_k + b_k), where u_k = ∑_{j=1}^{m} w_{kj} x_j.
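The neuron equation above translates directly into a few lines of code. A minimal Python sketch (my own, not from the dissertation), using a linear activation for the example:

```python
def neuron_output(weights, inputs, bias, phi):
    """Compute y_k = phi(u_k + b_k), where u_k = sum_j w_kj * x_j."""
    u = sum(w * x for w, x in zip(weights, inputs))
    return phi(u + bias)

# Example with phi(v) = v (linear activation):
# u = 0.5*1.0 + (-0.2)*2.0 = 0.1, so y = 0.1 + 0.1 = 0.2
y = neuron_output([0.5, -0.2], [1.0, 2.0], 0.1, lambda v: v)
```

Passing the activation in as `phi` mirrors the way the chapter treats the activation function as a pluggable part of the neuron model.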
Hard limit function
φ(ν) = 1 if ν ≥ 0; 0 if ν < 0
A symmetrical hard limit function is described as
φ(ν) = +1 if ν ≥ 0; −1 if ν < 0
It is most commonly used in pattern recognition problems.
Linear function
φ(ν)=ν
This activation function is used in pattern recognition and in function approximation problems.
Sigmoid function
φ(ν)=1/(1+exp(-aν))
This is the most common form of activation function used in the construction of multilayer networks that are trained using the BP (backpropagation) algorithm.
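The four activation functions listed above are easy to write out. A small Python sketch of my own, following the definitions given in the chapter:

```python
import math

def hard_limit(v):
    """φ(v) = 1 if v >= 0, else 0 (used in pattern recognition)."""
    return 1 if v >= 0 else 0

def symmetric_hard_limit(v):
    """φ(v) = +1 if v >= 0, else -1."""
    return 1 if v >= 0 else -1

def linear(v):
    """φ(v) = v (pattern recognition and function approximation)."""
    return v

def sigmoid(v, a=1.0):
    """φ(v) = 1 / (1 + exp(-a v)); a controls the slope."""
    return 1.0 / (1.0 + math.exp(-a * v))

# The sigmoid passes through 0.5 at v = 0:
print(sigmoid(0.0))  # 0.5
```

Unlike the two hard-limit functions, the sigmoid is differentiable everywhere, which is why it suits BP training: the algorithm needs the derivative of the activation to propagate the error backwards.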
Tomorrow, I will read more of the dissertation.