Why does the ONNX model exported by exportONNXNetwork() give different results in OpenCV than in MATLAB?
14 views (last 30 days)
For example, I use the pretrained GoogLeNet model to classify images. I test the exported model with the official example in OpenCV 4.1 on "peppers.png", but the recognition result is not "bell pepper". No matter how I set the input image mean, normalization, etc., it always fails.
My MATLAB program is:
net = googlenet;
exportONNXNetwork(net,'mygoogleNet.onnx','OpsetVersion',9); % or 6, 7, 8
My OpenCV program is as follows; "synset_words.txt" is in the attachment:
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

using namespace cv;
using namespace cv::dnn;
using namespace std;

int main()
{
    Mat img = imread("C:\\Program Files\\MATLAB\\R2019a\\examples\\deeplearning_shared\\peppers.png");
    String onnx_path = "mygoogleNet.onnx"; // ONNX file exported from MATLAB googlenet
    std::string file = "synset_words.txt";
    vector<string> classes;
    std::ifstream ifs(file.c_str());
    if (!ifs.is_open())
        CV_Error(Error::StsError, "File " + file + " not found");
    std::string line;
    while (std::getline(ifs, line))
    {
        classes.push_back(line);
    }
    // Read the network
    Net net = readNetFromONNX(onnx_path);
    if (net.empty())
    {
        cout << "net is empty!" << endl;
    }
    net.setPreferableBackend(DNN_BACKEND_OPENCV);
    net.setPreferableTarget(DNN_TARGET_CPU);
    int net_size = 224; // GoogLeNet input size
    img = img(Rect(0, 0, net_size, net_size)); // crop to match the image used in MATLAB
    while (true)
    {
        Mat image = img.clone();
        Mat blob;
        blobFromImage(image, blob, 1.0 / 255, Size(net_size, net_size), Scalar(122.6789, 116.6686, 104.0069), true); // set preprocessing params
        //! [Set input blob]
        net.setInput(blob);
        Mat prob = net.forward();
        Point classIdPoint;
        double confidence;
        minMaxLoc(prob.reshape(1, 1), 0, &confidence, 0, &classIdPoint);
        int classId = classIdPoint.x;
        //! Show result
        resize(image, image, Size(500, 500));
        // Put efficiency information
        std::vector<double> layersTimes;
        double freq = getTickFrequency() / 1000;
        double t = net.getPerfProfile(layersTimes) / freq;
        std::string label = format("Inference time: %.2f ms", t);
        putText(image, label, Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0));
        // Print predicted class
        label = format("%s: %.4f",
                       (classes.empty() ? format("Class #%d", classId).c_str() : classes[classId].c_str()),
                       confidence);
        putText(image, label, Point(0, 40), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 255, 0));
        imshow("", image);
        waitKey(1);
    }
}
Result: [screenshot of the incorrect classification output]
Why is the result not correct? Does anyone know?
0 Comments
Answers (3)
Don Mathis on 29 May 2019
Edited: Don Mathis on 29 May 2019
      Could it be that you're multiplying the test image by 1.0/255 before passing it to your imported network? Notice in the MATLAB example that the network was passed an image with pixels in the range [0 255]. It looks like you're normalizing it to [0 1]?
Also, does OpenCV import images as BGR? If so, you'll need to change the image to RGB, because the network expects that. Maybe both of these problems are occurring?
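A minimal sketch of what the adjusted preprocessing in the C++ program above could look like, assuming the exported network wants raw [0, 255] RGB pixels; whether any mean subtraction is still needed is not confirmed here, so it is left out as an assumption:
// Hypothetical adjustment, assuming the exported network expects RGB pixels
// in the raw [0, 255] range and no mean subtraction.
Mat blob;
blobFromImage(image, blob,
              1.0,                      // scalefactor = 1: keep pixels in [0, 255]
              Size(net_size, net_size),
              Scalar(),                 // no mean subtraction (assumption)
              true,                     // swapRB = true: convert OpenCV's BGR to RGB
              false);                   // crop = false
net.setInput(blob);
Mat prob = net.forward();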
2 Comments
David on 26 Apr 2021
re: image normalization
When executing an exported ONNX model in, say, Python, it is unclear to me whether we're supposed to leave the image in the raw 0-255 range or apply some normalization. I have yet to get the same answer from MATLAB (where classifier accuracy is great) and from ONNX Runtime in Python. I'm having a hard time finding the right combination of reshaping and image processing in Python. What I see on the web is people doing a sort of mean subtraction for each color plane, but the MATLAB code isn't doing any of that, except for imresize.
Any examples would be greatly appreciated.
KAAN AYKUT KABAKÇI on 6 Aug 2020
Hello,
In my environment the problem was entirely about the OpenCV version. With OpenCV 4.2.0 I was getting different results between MATLAB and Python. After downgrading OpenCV to 4.0.0, the problem disappeared. I am using the following blobFromImage configuration:
blob = cv2.dnn.blobFromImage(input_image, 1, (512, 512), (0, 0, 0), True, False)
That is, swapRB=True, crop=False, and the shape of my images is (512, 512, 3).
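For reference, a sketch of the same configuration expressed with the C++ API used in the question (the 512x512 size matches this commenter's network input, not GoogLeNet's 224x224):
// Same blobFromImage configuration in C++ (a sketch).
Mat blob;
blobFromImage(input_image, blob,
              1.0,               // scalefactor = 1: keep the raw pixel range
              Size(512, 512),    // network input size
              Scalar(0, 0, 0),   // no mean subtraction
              true,              // swapRB = true: BGR -> RGB
              false);            // crop = false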
0 Comments