Semantic Image Segmentation with Conditional Random Fields
training with multiple images

Postby Carlos » Tue Jan 26, 2016, 02:16

I was looking for a library for CRF/MRF and I found yours. It looks very nice, but I have some questions.
Currently I am working with MRFs, and I read your "Demo Train" tutorial. It is easy to read and shows how the library works; however, you use only one image for the training stage. I was wondering how I can train with multiple images (100, for example). If I repeat the process for every image, are the potentials accumulated, or something similar?
Please, could you help me understand how to train with multiple images?
Thanks a lot for making your library public.


Re: training with multiple images

Postby Creator » Tue Jan 26, 2016, 02:43

Yes, the training is cumulative: you can train on different images in a loop.
The functions addFeatureVec() and addFeatureVecs() just gather the training data, so you may use a procedure like:

// ========================= STAGE 2: Training =========================
printf("Training... ");
int64 ticks = getTickCount();
 
for (int image_idx = 0; image_idx < 100; image_idx++) {
 
    Mat fv = imread("FV(image_idx)", 1); resize(fv, fv, imgSize, 0, 0, INTER_LANCZOS4); // feature vector (placeholder file name)
    Mat gt = imread("GT(image_idx)", 0); resize(gt, gt, imgSize, 0, 0, INTER_NEAREST);  // groundtruth (placeholder file name)
    int width  = fv.cols;
    int height = fv.rows;
 
    // Node Training (compact notation)
    nodeTrainer->addFeatureVec(fv, gt);
 
 
    // Edge Training (comprehensive notation)
    Mat featureVector1(nFeatures, 1, CV_8UC1);
    Mat featureVector2(nFeatures, 1, CV_8UC1);
    for (int y = 1; y < height; y++) {
        byte *pFv1 = fv.ptr<byte>(y);
        byte *pFv2 = fv.ptr<byte>(y - 1);
        byte *pGt1 = gt.ptr<byte>(y);
        byte *pGt2 = gt.ptr<byte>(y - 1);
        for (int x = 1; x < width; x++) {
            for (byte f = 0; f < nFeatures; f++) featureVector1.at<byte>(f, 0) = pFv1[nFeatures * x + f]; // featureVector1 = fv[x][y]
            for (byte f = 0; f < nFeatures; f++) featureVector2.at<byte>(f, 0) = pFv1[nFeatures * (x - 1) + f]; // featureVector2 = fv[x-1][y]
            edgeTrainer->addFeatureVecs(featureVector1, pGt1[x], featureVector2, pGt1[x-1]);
            edgeTrainer->addFeatureVecs(featureVector2, pGt1[x-1], featureVector1, pGt1[x]);
            for (byte f = 0; f < nFeatures; f++) featureVector2.at<byte>(f, 0) = pFv2[nFeatures * x + f]; // featureVector2 = fv[x][y-1]
            edgeTrainer->addFeatureVecs(featureVector1, pGt1[x], featureVector2, pGt2[x]);
            edgeTrainer->addFeatureVecs(featureVector2, pGt2[x], featureVector1, pGt1[x]);
        } // x
    } // y
 
}
 
nodeTrainer->train();
edgeTrainer->train();
 
ticks = getTickCount() - ticks;
printf("Done! (%fms)\n", ticks * 1000 / getTickFrequency());

Carlos

Re: training with multiple images

Postby Carlos » Tue Jan 26, 2016, 02:47

Thanks for your help.
If I write a paper using your library, I will cite it.


Return to “Direct Graphical Models”
