Dense CRF with DGM


Topic review

Re: Dense CRF with DGM

Post by Zeinab Ghassabi » Thu Feb 01, 2018, 20:42

I wrote the code for it myself. Thanks.

Re: Dense CRF with DGM

Post by Zeinab Ghassabi » Tue Jan 30, 2018, 21:10

Thank you for your response. I ran the code and got class probabilities for each pixel (a 1x4 vector for every pixel).
Is there a function that can take the whole input image instead of just one pixel as "node_id" and produce four 2D images, where each image corresponds to one class's probability for all pixels?

Re: Dense CRF with DGM

Post by Creator » Sun Jan 28, 2018, 20:45

I will be very glad if you let me know about your new results.
In order to see the class probabilities for each pixel, please check the Demo Visualization project: the visualization of the “Node potential vector” is exactly what you need.

The 3D version of the library would be interesting. Please feel free to make a pull request on GitHub, I will add your code to the repository.

Re: Dense CRF with DGM

Post by Zeinab Ghassabi » Sat Jan 20, 2018, 15:34

Thank you very much for your kindness and help.
I used the code you sent and got the same results. However, I will play with the parameters, increase the number of classes and use different features to get better results. Maybe I get this result because I am using local features. If you don't mind, I will inform you when I get better results.
In addition to the class labels from the CRF, how can I see the class probabilities for each pixel in the demoTrain of DGM? I want to know how much a pixel belongs to each of the 4 classes.
Sorry for so much inconvenience.
In the future, would you mind releasing a 3D version of DGM?

Re: Dense CRF with DGM

Post by Creator » Mon Jan 15, 2018, 21:09

In my opinion, the code that you attached is OK. I used the variables m and i for testing on multiple images. Since you perform testing on a single image, these variables are not needed.

Please see attached the modified code. I have deleted the redundant code and added the dgm::vis::CMarker class to represent the results in the same colours.

I saw that the denseCRF produces worse results. However, I have noticed that with a small number of iterations (e.g. 1) the result is more or less OK. The best result I got is 86.07% with 1 iteration:
res.png

I had approximately the same results with very small classes when using denseCRF in my experiments with a micro-organism dataset.

In order to enhance the results I would suggest two steps:
  1. Play with the parameters of the crf.addPairwiseGaussian(3, 3, 3); and crf.addPairwiseBilateral(60, 60, 20, 20, 20, img.data, 10); functions. I think that the inference may simply “overblur” the minor classes; reducing the “stddev” values may help.
  2. During the training in DGM, use more training images. That will probably make the unary potentials stronger.
I hope my advice will help you.

Re: Dense CRF with DGM

Post by Zeinab Ghassabi » Sat Jan 13, 2018, 15:04

Thank you very, very much for your kindness and help!
I used OpenCV v.3.3.0 and it works properly on my computer, too. Thank you for improving the code.

I used CRF_output_CvKNN_f36.tif: using CTrainNodeCvKNN and CTrainEdgePotts and 36 features. I also saved the potential matrices in filename.dat to use in the denseCRF. Would you please help me modify your code for my images? I have nStates = 4. I cannot understand the upper bounds of m = 21 and i = 20. Is this because you have 21 nStates? If I change nStates to 4, can I comment out this part:

Code: Select all

if (m == 17) continue;
      for (word i = 11; i <= 20; i++) {
         if (m == 6 && i == 17) continue;
         if (m == 18 && i == 9) continue;

Why did you write the above part?

I have attached my code. The result is worse than with DGM.

Thank you for your help.
Attachments
dense_inference.cpp
solution-DensCRF_output.tif

Re: Dense CRF with DGM

Post by Creator » Sat Jan 06, 2018, 20:52

I have tried out your code with 5 and 36 features and could not reproduce the bug. In both cases the code produces results. I have used the latest version of the DGM library from GitHub with OpenCV v.3.3.0.

However, I think that there might be 2 reasons for the bug:
  1. OpenCV v.3.2.0 had a bug with the data serialization:
    https://github.com/Project-10/DGM/issues/8,
    https://github.com/opencv/opencv/issues/9125.
    But this bug was confirmed only for Random Forest classifier. Maybe it is also relevant for the ml::TrainData::loadFromCSV() function.
    So I have disabled the nodeTrainer->save() / nodeTrainer->load() functionality.
  2. When using a lot of features, the CTrainNodeCvGMM class may produce very small potentials. And these small potentials, in turn, may lead to the “The lower precision boundary for the potential of the node %zu is reached.” problem. So I would suggest using another node training model, for example k-nearest neighbours: CTrainNodeCvKNN
Please see attached results:
CRF_output_CvGMM_f5.tif
Using CTrainNodeCvGMM and CTrainEdgeConcat and 5 features

CRF_output_CvGMM_f36.tif
Using CTrainNodeCvGMM and CTrainEdgeConcat and 36 features

CRF_output_CvKNN_f36.tif
Using CTrainNodeCvKNN and CTrainEdgePotts and 36 features

CRF_GT.tif
The visualized groundtruth


I also attach the code that generates these results:
Demo Train.cpp
I hope that will help you.

P.S.
I have noticed that the feature data from the .csv files has 61 x 37887 samples and 61 x 1 responses. So after the concatenation you have a 61 x (37*1024) data matrix, where the last 1024 values in every row are a mixture of samples and responses. Since you use only the first 36 features, it should not be a problem; however, that data container looked a bit strange to me.

Re: Dense CRF with DGM

Post by Zeinab Ghassabi » Mon Jan 01, 2018, 19:52

I also attach the files:
  1. input image, its ground truth, a text file of features for the input image (for training)
  2. test image, a text file of features for the test image (for testing)
The size of the images and GT is 61 x 1024 (you can use grey-scale).
The size of the features is 61 x 1024 x 36.

I attached the modified source code of demo train, too.
Using these files, you can see the error.
Sorry for the inconvenience.
Attachments
All_features_testimage.csv
All_features.csv
dgm-demo train.txt
mask_DABAB8D0.tif
DABAB8D0-Adjacentl.tif
DABAB8D0-gt4.tif

Re: Dense CRF with DGM

Post by Zeinab Ghassabi » Sun Dec 31, 2017, 18:58

Thank you very much for your response.
Regarding the first question:
Do you save and/or load the training data in-between the training and classification stages?

Code: Select all

nodeTrainer->train();
nodeTrainer->save("E:\\zghassabi\\Project1-DIE\\Dataset\\image 1\\nodetrainn.dat");
nodeTrainer->reset();
nodeTrainer->load("E:\\zghassabi\\Project1-DIE\\Dataset\\image 1\\nodetrainn.dat");

edgeTrainer->train();
Timer::stop();

// ==================== STAGE 3: Filling the Graph =====================
Timer::start("Filling the Graph... ");
// CV_32FC(nStates) <- CV_8UC(nFeatures);
Mat nodePotentials = nodeTrainer->getNodePotentials(test_fv); // Classification: CV_32FC(nStates) <- CV_8UC(nFeatures)
Serialize::to("E:\\zghassabi\\Project1-DIE\\Dataset\\image 1\\fileName.dat", nodePotentials);
graph->setNodes(nodePotentials); // Filling-in the graph nodes
graph->fillEdges(edgeTrainer, test_fv, params, params_len); // Filling-in the graph edges with pairwise potentials
Timer::stop();


Regarding the second question:
Does the error occur on the first iteration or only after some iterations?

When I choose one of the three node training models below, the decoding part does not work properly. It seems that the error appears after several iterations.
section 1: Gaussian Mixture Model
section 3: nearest neighbour
section 5: Microsoft Random Forest

Regarding the third question:
Which inference method do you use? Have you tried running different inference methods?

I explored different combinations of node training models with edge training models. It seems that when the number of features is more than 5, GMM, k-NN and mRF do not work. I tested all of these combinations with a 1D feature. All of them work when the feature is 1D or 3D.

Could you please check that the node potentials used in the graph filling stage are not empty?

nodePotentials is not empty.

Which node and which edge training models do you use?

node-training-model: OpenCV Gaussian Mixture Model
edge-training-model: Concatenated Model
The above models do not produce errors for a 5-D feature.

You wrote that you have checked the features. Have you also checked the responses in

Code: Select all

Mat data2 = raw_data->getResponses();
In particular for the class at which the bug starts to appear?

The bug appears when I choose the node training model as
section 1: Gaussian Mixture Model
section 3: nearest neighbour
section 5: Microsoft Random Forest
and when I use a feature matrix with higher dimensions.

Re: Dense CRF with DGM

Post by Creator » Sat Dec 30, 2017, 01:19

The error in MessagePassing.cpp is related to the inference stage. With the current information you have shared with me, it is difficult to say where exactly the problem lies. It might be in the node training phase or the data preparation phase. Could you please provide some additional information, in particular regarding the following questions:
  • Do you save and/or load the training data in-between the training and classification stages?
  • Does the error occur on the first iteration or only after some iterations?
  • Which inference method do you use? Have you tried running different inference methods?
  • Could you please check that the node potentials used in the graph filling stage are not empty?
  • Which node and which edge training models do you use?
  • You wrote that you have checked the features. Have you also checked the responses in

    Code: Select all

    Mat data2 = raw_data->getResponses();
    In particular for the class at which the bug starts to appear?

Ideally, I could help you more effectively if I could reproduce the error. As I said, it is difficult to identify the source of this bug without debugging the code. (I assume that, since it works in DGM v.1.5.1 and does not work in DGM v.1.5.2, it is a bug.)
