DGM Support for more than UINT8 Feature Images


DGM Support for more than UINT8 Feature Images

Postby victor-robles » Tue Dec 17, 2019, 18:22

I have successfully built DGM 1.7 on Linux and Windows 10, and the Demo Train has compiled and run correctly, both when training on a single image and on multiple images. In addition, I have created arbitrary OpenCV Mats for the feature vectors, i.e. rows x columns x n features with n > 3, and that also worked.
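
For reference, this is roughly how I assemble such a feature Mat (a minimal sketch; the helper name is mine, not taken from the DGM demos):

Code: Select all

// Minimal sketch: stack n single-channel 8-bit feature planes into one
// n-channel feature image with OpenCV. Names are illustrative only.
#include <opencv2/core.hpp>
#include <vector>

cv::Mat buildFeatureImage(const std::vector<cv::Mat>& featureChannels)
{
    // Each element of featureChannels is assumed to be a CV_8UC1 Mat of
    // identical size; cv::merge interleaves them into a CV_8UC(n) Mat.
    cv::Mat featureImage;
    cv::merge(featureChannels, featureImage);
    return featureImage;    // rows x cols, n channels = n features per pixel
}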

The only issue arises when I try to expand to features that require more than unsigned 8-bit precision and to images with more than 8 bits per channel. When I use 16-bit input images and 32-bit floating-point values for the feature matrices, I get the following error:
Code: Select all

Assertion failed: m1.depth() == CV_8U in "/homDirectory/DGM-v.1.7.0/include/macroses.h", line 113
Aborted (core dumped)

So I ask: does the algorithm only support 8-bit unsigned integer features and images, or is there something I am missing that would allow higher-precision images and features?


Re: DGM Support for more than UINT8 Feature Images

Postby Creator » Wed Dec 18, 2019, 00:27

Hi,
yes, currently the DGM library supports only 8-bit input images and only 8-bit features. This is because some node potential models (e.g. Bayes) can work only with 8-bit features.
However, some of the other models (e.g. the Gaussian Mixture Model) can naturally work with 16-bit integers or 32-bit floats. Nevertheless, for the sake of generality, the compact notation of the DGM interface functions does not allow the use of 16- or 32-bit features.

In order to use 16- or 32-bit features with node potentials, please use the comprehensive notation (per-pixel form) for adding the node training data:

Code: Select all

addFeatureVec(const Mat &featureVector, byte gt)

instead of

Code: Select all

addFeatureVecs(const Mat &featureVectors, const Mat &gt)

If you check the implementation of this function for the GMM node trainer, https://github.com/Project-10/DGM/blob/master/modules/DGM/TrainNodeGMM.cpp, you will see that the input feature vector is converted to a 64-bit floating-point data type.
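
For illustration, here is a minimal sketch (my assumptions about the Mat layout, not verbatim library code) of how the per-pixel form could be used to feed 32-bit float feature vectors to a node trainer such as the GMM one:

Code: Select all

// Sketch: per-pixel training with 32-bit float features.
// Assumptions: featureImage is CV_32FC(nFeatures) with one feature vector
// per pixel; gtImage is CV_8UC1 with one ground-truth state per pixel;
// the feature vector is passed as an nFeatures x 1 column Mat. The GMM
// trainer converts the vector to 64-bit float internally, so a CV_32F
// input is assumed to be acceptable here.
#include "DGM.h"
#include <opencv2/core.hpp>

using namespace DirectGraphicalModels;

void addTrainingData(CTrainNode &nodeTrainer, const cv::Mat &featureImage, const cv::Mat &gtImage)
{
    const int nFeatures = featureImage.channels();
    for (int y = 0; y < featureImage.rows; y++) {
        const float *pFeatures = featureImage.ptr<float>(y);   // interleaved features of row y
        const byte  *pGt       = gtImage.ptr<byte>(y);          // ground-truth states of row y
        for (int x = 0; x < featureImage.cols; x++) {
            cv::Mat featureVector(nFeatures, 1, CV_32FC1);
            for (int f = 0; f < nFeatures; f++)
                featureVector.at<float>(f, 0) = pFeatures[nFeatures * x + f];
            nodeTrainer.addFeatureVec(featureVector, pGt[x]);   // per-pixel form
        }
    }
}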

