DGM Support for more than UINT8 Feature Images


Topic review

Re: DGM Support for more than UINT8 Feature Images

Post by Creator » Wed Dec 18, 2019, 00:27

Hi
yes, currently the DGM library supports only 8-bit input images and only 8-bit features. This is because some node potential models (e.g. Bayes) can work only with 8-bit features.
However, some of the other models (e.g. the Gaussian Mixture Model) can naturally work with 16-bit integers or 32-bit floats. Nevertheless, for the sake of generality, the compact notation of the DGM interface functions does not allow the use of 16- or 32-bit features.

In order to use 16- or 32-bit features with the node potentials, please use the comprehensive notation (per-pixel form) for adding the node training data:

Code:

addFeatureVec(const Mat &featureVector, byte gt)

instead of

Code:

addFeatureVecs(const Mat &featureVectors, const Mat &gt)

If you check the implementation of this function for the GMM node trainer (https://github.com/Project-10/DGM/blob/master/modules/DGM/TrainNodeGMM.cpp), you can see that the input feature vector is converted to the 64-bit floating-point data type.

DGM Support for more than UINT8 Feature Images

Post by victor-robles » Tue Dec 17, 2019, 18:22

I have successfully built DGM 1.7 on both Linux and Windows 10, and the Demo Train has compiled and run successfully, both when training on a single image and with multiple images. In addition, I have created arbitrary OpenCV Mats for the feature vectors (rows × columns × n features, where n > 3), which also worked.

The only issue arises when I try to move to features that require more than unsigned 8-bit precision and to images with more than 8 bits. When I use 16-bit input images and 32-bit floating-point values for the feature matrices, I get the following error:

Assertion failed: m1.depth() == CV_8U in "/homDirectory/DGM-v.1.7.0/include/macroses.h", line 113
Aborted (core dumped)

So I ask: does the algorithm only support 8-bit unsigned integer features and images, or is there something I am missing that would allow higher-precision images and features?
