Today, I implemented parallel computing in MATLAB.
Summary:
I checked that your laptop has a 4-core CPU, so in theory parallelization can save up to 3/4 of the running time.
I ran the model 40 times to get results for Vp and Vs prediction:
Average values: 0.8555 and 0.8295 in terms of R2; 0.0652 and 0.0684 in terms of NRMSE.
Best values: 0.8722 and 0.8492 in terms of R2; 0.0599 and 0.0603 in terms of NRMSE.
The running time is now 348 seconds (down from about 900 seconds initially).
Next week, I will finish other tasks discussed today.
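The repeated-runs workflow above (train the same stochastic model many times in parallel, then report the average and best scores) can be sketched as follows. The actual work was done in MATLAB; this is a minimal Python illustration, and `train_once` is a hypothetical stand-in for one ANN training run.

```python
# A minimal Python sketch of the parallel repeated-runs workflow; the real
# code is MATLAB, and train_once is a hypothetical stand-in for one training.
from concurrent.futures import ThreadPoolExecutor
import random

def train_once(seed):
    """Stand-in for one stochastic ANN training run; returns a toy R2 score."""
    rng = random.Random(seed)
    return 0.80 + 0.07 * rng.random()  # hypothetical R2 in [0.80, 0.87)

def repeated_runs(n_runs=40, workers=4):
    # With 4 workers on a 4-core CPU the ideal speedup is 4x,
    # i.e. up to 3/4 of the serial wall time is saved.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(train_once, range(n_runs)))
    return sum(scores) / len(scores), max(scores)

avg_r2, best_r2 = repeated_runs()
```

In MATLAB this corresponds to a `parfor` loop over the 40 runs with a 4-worker pool.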
9/29/2017
9/28/2017
try some changes of the parameters
Today, I tried changing some parameters of the ANN models.
Summary:
1. Randomly select initial weights and biases (combined with parallel computing to decrease training time)
and record the best result after training the ANN models several times (10, 20, or 30 ...). I found that the best result among them is slightly better than the best result from the first day.
I think this is an effective way to avoid local minima and approach the global optimum.
2. Compare prediction results with and without certain data-preprocessing steps (such as PCA, reciprocal transformation, and removal of constant values). These can be done after the first step, since they may yield only a little improvement.
3. Two other methods may be useful: Nguyen-Widrow initialization and evaluating the effective number of parameters.
Tomorrow, I will validate all of the above methods.
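The best-of-N random-restart idea in step 1 can be sketched with a toy example. This assumes a 1-D loss with two local minima in place of an ANN loss surface; the function, learning rate, and restart count are illustrative, not from these notes.

```python
# Sketch of best-of-N random restarts: random initialization plays the role
# of random initial weights/biases, gradient descent the role of training.
import random

def loss(x):
    # Toy loss: local minima near x = +1 and x = -1; the global one is near -1.
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def best_of_restarts(n_restarts=20, seed=0):
    rng = random.Random(seed)
    # Each restart trains from a fresh random initialization; keep the best.
    candidates = [descend(rng.uniform(-2.0, 2.0)) for _ in range(n_restarts)]
    return min(candidates, key=loss)

x_best = best_of_restarts()
```

Restarts that begin in the wrong basin settle at the shallower minimum near x = +1; keeping the best of all restarts recovers the deeper minimum near x = -1, which is the same reason the best of 10-30 ANN trainings beats a single training.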
9/27/2017
compare three training functions and predict Vp and Vs separately
Today, I compared three training functions and predicted Vp and Vs separately.
Summary:
1. I validated that BR and SCG are equally effective for predicting geomechanical data, and both are better than LM. So, if we write the paper, we can include this comparison as one of our sections.
2. You mentioned that I could try predicting Vp first and Vs next. I did, and prediction accuracy did not improve.
As I mentioned before, the accuracy of the ANN model without deleting outliers is 0.8611 and 0.8252 in terms of R2, and 0.0622 and 0.0660 in terms of NRMSE.
However, when I predict Vp with the first model and Vs with the second model, the accuracy without deleting outliers is 0.8500 and 0.8178 in terms of R2, and 0.0710 and 0.0733 in terms of NRMSE.
Tomorrow, I will think about other ways to improve the accuracy.
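For reference, the two accuracy measures quoted throughout these notes can be computed as below. This is a plain Python sketch; it assumes NRMSE means RMSE normalized by the range of the observed values, since the notes do not state which normalization was used.

```python
# R2 (coefficient of determination) and NRMSE as used to score predictions.
# NRMSE normalization by the observed range is an assumption here.
import math

def r2(observed, predicted):
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def nrmse(observed, predicted):
    mse = sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)
    return math.sqrt(mse) / (max(observed) - min(observed))
```

A perfect prediction gives R2 = 1 and NRMSE = 0; higher R2 and lower NRMSE mean better accuracy, matching how the numbers above are compared.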
9/26/2017
look for deep learning applications and compare training functions
Today, I looked for deep learning applications and compared training functions.
Summary:
1. I have not found deep learning applications related to our research.
2. I think that training functions may affect the prediction accuracy. I am now comparing LM, SCG and BR. I will select the best one after comparison.
Tomorrow, I will continue the above work.
9/25/2017
look for applications of deep learning
Today, I looked for applications of deep learning.
Summary:
Deep learning is a type of machine learning, which is well-suited to identification applications such as face recognition, text translation, voice recognition and advanced driver assistance systems.
Usually, these NN models have many (sometimes hundreds of) hidden layers, which need millions of images and videos to train. That is why deep learning algorithms can outperform humans at classifying images, win against the world's best Go player, and enable voice-controlled assistants.
However, I have not found applications to function approximation problems, which is the type of problem in our research.
Tomorrow, I will continue to look for deep learning applications similar to our research.
9/22/2017
finish paper 2
Today, I finished paper 2.
Summary:
I sent it to you by email.
Next week, I will improve paper 1 for Fuel.
9/21/2017
check all possible results in the basic model
Today, I checked all possible results in the basic model.
Summary:
I compared 4 conditions.
1. do not delete outliers, ANN model with just one layer
R2: 0.8564 0.8187 NRMSE: 0.0632 0.0672
2. do not delete outliers, ANN model with two layers
R2: 0.8611 0.8252 NRMSE: 0.0622 0.0660
3. delete outliers, ANN model with two layers
R2: 0.8494 0.8265 NRMSE: 0.0695 0.0689
4. delete outliers, ANN model with two layers, predict Vp and Vp/Vs together
R2: 0.8299 0.7964 NRMSE: 0.0712 0.0807
The second performs the best.
Tomorrow, I will continue to look for methods to improve the model such as deep ANN.
9/20/2017
finish second paper draft
Today, I finished the second paper draft.
Summary:
I sent it to you by email.
Tomorrow, we can discuss which journal to submit our paper to.
9/18/2017
build two tables and find 4 common patterns
Today, I built two tables for Well 1 and Well 2.
Summary:
Two tables are built and 4 common patterns are found.
1. A lower mean value of RLA0-5 results in better prediction performance.
2. A higher mean value of dielectric dispersion results in better prediction performance.
3. A smaller skewness of dielectric dispersion results in better prediction performance.
4. A higher porosity of the formation results in better prediction performance.
Tomorrow I have a project due, so I will analyze them on Wednesday. We can discuss if you have any ideas.
9/15/2017
finish all except 3.5
Today, I finished all sections except 3.5.
Summary:
I finished all sections except 3.5, 'petrophysical and statistical controls on the prediction performances'.
Since there are 5 dielectric dispersion measurements, I considered how to use them to divide the depths into several groups of different prediction accuracy.
In the end, I decided to use relative error to measure the accuracy at every depth, defined as |P-O|/O, where P is the predicted value and O is the original value. For every depth there are two such values: the average over the conductivity dispersion and the average over the permittivity dispersion.
If both are less than 0.1, the depth shows good prediction performance. If both are higher than 0.2, or either is higher than 0.3, it shows poor prediction performance. The rest show moderate prediction performance.
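The thresholding rule above can be written out directly. This is a sketch of the rule as stated, with hypothetical function names:

```python
# Relative error |P - O| / O per depth, and the good/moderate/poor rule:
# good  if both errors < 0.1
# poor  if both errors > 0.2, or either error > 0.3
# moderate otherwise
def relative_error(predicted, original):
    return abs(predicted - original) / original

def classify(err_cond, err_perm):
    """err_cond / err_perm: average relative errors of the conductivity
    and permittivity dispersions at one depth."""
    if err_cond < 0.1 and err_perm < 0.1:
        return "good"
    if (err_cond > 0.2 and err_perm > 0.2) or err_cond > 0.3 or err_perm > 0.3:
        return "poor"
    return "moderate"
```

For example, a depth with errors (0.15, 0.05) is moderate: it fails the "both below 0.1" test but also fails both poor conditions.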
Next week, I will try to analyze those results and find some rules.
9/13/2017
complete about 80% of the improvement
Today, I completed about 80% of the second paper.
Summary:
I continued to analyze the petrophysical and statistical part.
Since tomorrow is the career fair, I will finish the second paper on Friday.
9/12/2017
complete about 60% of the improvement
Today, I continued to improve the paper and completed about 60%.
Summary:
I started to analyze the petrophysical and statistical part.
Tomorrow, I will try to finish the paper.
9/11/2017
complete about 40% of the improvement
Today, I continued to improve the paper and completed about 40%.
Summary:
I mainly did replotting, recalculation, looking for citations, and adding some parts.
Tomorrow, I will continue to change the paper.
9/08/2017
check all wells
Today, I checked all wells to see the differences in accuracy when predicting permittivity dispersion.
Summary:
I checked them one by one and found that the results are all worse than those for the total database of the well.
So I think lithology at different depths is not the main reason for good or poor prediction performance of permittivity dispersion.
The first figure is the NRMSE result of well 1 (6 different lithologies).
The second figure is the R2 result of well 1 (6 different lithologies).
Next week, we can discuss about our second paper when you have time.
9/07/2017
finish the final draft of the second paper
Today, I finished the draft of the second paper.
Summary:
I have checked the different lithologies one by one in well 1 and found that all results are worse than for the total dataset, so I think lithology at different depths may not be the main reason.
I conclude 2 reasons in our paper:
1. fewer data samples
2. high water salinity
Details can be seen in the draft that I sent to you.
Tomorrow, maybe we can discuss which journal to submit our paper to and adjust the format for submission. Also, we can continue to improve our paper if you have more suggestions.
9/06/2017
finish applying models
Today, I finished applying models to different wells.
Summary:
I finished applying models to different wells.
I am now analyzing some questions:
1. Why is the prediction performance in well 3 bad?
2. Why is the prediction of conductivity dispersion always better than that of permittivity dispersion?
3. Why is NRMSE better than R2 for estimating accuracy?
Tomorrow, I will try to finish all the analyzing parts.
9/05/2017
do the second task left from last week and finish half of it
Today, I did the second task left from last week and finished half of it.
Summary:
I trained and tested the third method in well 1 and applied it to well 2. I will do the opposite direction tomorrow.
Tomorrow, I will continue to change the second paper.
9/01/2017
leave two tasks
Today, I improved all the other parts; two tasks remain.
Summary:
Two tasks:
1. Evaluation of NRMSE and R2 and their explanation
2. Train and test the third method in well 1 and apply it in well 2 and vice versa
Next week, I will try to find a good explanation of NRMSE and R2 and complete the application of the third method to both wells.