This repository was archived by the owner on Feb 7, 2023. It is now read-only.

Outputs from ONNX and CoreML don't seem to match. #577


Description

@rekil156

I have converted my PyTorch model to ONNX and inspected the results. The images produced by ONNX are correct. (It is a segmentation model based on UNet with a MobileNet backbone.)
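For completeness, the export was along these lines (a minimal sketch, not the exact call; model stands for the UNet/MobileNet network described above, and the opset version is an assumption):

import torch
# Sketch of the PyTorch -> ONNX export; the shape matches the 224x224 input used below.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, 'tmp.onnx', opset_version=11)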

I'm running all the code on Google Colab (I also tried Paperspace and AWS, with the same results).
onnx='1.7.0'
coremltools='3.4'
onnx_coreml='1.3'

!pip install -U coremltools -q
!pip install onnx -q
!pip install --upgrade onnx-coreml -q
!pip install onnxruntime -q

import onnx
import torch
from onnx import onnx_pb
from onnx_coreml import convert

from PIL import Image
import torchvision.transforms as transforms

TMP_ONNX = 'tmp.onnx'  # path to the exported ONNX model

img = Image.open("/content/Aaron_Peirsol_0004.jpg")
resize = transforms.Resize([224, 224])  # preprocessing: resize to the model's input size
img = resize(img)

to_tensor = transforms.ToTensor()
img_y = to_tensor(img) * 255  # ToTensor scales to [0, 1]; scale back to [0, 255]
img_y.unsqueeze_(0)  # add the batch dimension -> (1, 3, 224, 224)

onnx_model = onnx.load(TMP_ONNX)
onnx.checker.check_model(onnx_model)  # validate the exported graph

import onnxruntime
import matplotlib.pyplot as plt

ort_session = onnxruntime.InferenceSession(TMP_ONNX)

def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(img_y)}
ort_outs = ort_session.run(None, ort_inputs)
img_out_y = ort_outs[0]

img_out_y = img_out_y[0].transpose((1, 2, 0))  # drop batch dim, CHW -> HWC for plotting
plt.imshow(img_out_y)

I get the expected output: the model segments hair into the red channel, skin into the green channel, and background into the blue channel.
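For a numeric baseline to compare against CoreML later, I also check the raw ONNX output (a small sanity check using ort_outs from the run above):

import numpy as np
raw = ort_outs[0]
print(raw.shape)                 # expect (1, 3, 224, 224)
print(np.min(raw), np.max(raw))  # value range of the raw output
print(np.isnan(raw).sum())       # NaN count; should be 0 on the ONNX side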
Input image: Aaron_Peirsol_0004.jpg (attached)

Segmented output image: (attached screenshot)

ONNX model:
tmp.onnx.zip

Now I convert the ONNX model to CoreML:

ML_MODEL = '0-bestr.mlmodel'

# Convert the ONNX model to a CoreML model.
model_file = open(TMP_ONNX, 'rb')
model_proto = onnx_pb.ModelProto()
model_proto.ParseFromString(model_file.read())
# '595' is the identifier of the output node.
coreml_model = convert(model_proto,
                       image_input_names=['0'],
                       image_output_names=['595'],
                       )
coreml_model.save(ML_MODEL)
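To double-check the hard-coded identifiers ('0' and '595'), the input/output names can be read from the ONNX graph and from the converted spec (a quick sketch):

print([i.name for i in model_proto.graph.input])   # ONNX graph input names
print([o.name for o in model_proto.graph.output])  # ONNX graph output names
spec = coreml_model.get_spec()
print(spec.description)  # CoreML input/output feature names and types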

I then load the CoreML model and run it on a Mac with:

import coremltools
import numpy as np
import PIL.Image

Height = 224  # model input image height
Width = 224   # model input image width
INPUT_LAYER = '0'
OUTPUT_LAYER = '593'

model = coremltools.models.MLModel('0-bestr.mlmodel')  # load the converted model

path = 'Aaron_Peirsol_0004.jpg'
img = PIL.Image.open(path).resize((Width, Height), PIL.Image.ANTIALIAS)

out_dict = model.predict({INPUT_LAYER: img})
out_img = np.array(out_dict[OUTPUT_LAYER])

# Multi-array output: CHW -> HWC; the output is half the input size (112x112).
np_img = out_img.transpose((1, 2, 0)).reshape(112, 112, 3)
print(np.min(np_img), np.max(np_img))
PIL_image = PIL.Image.fromarray(np_img.astype('uint8'), 'RGB')
PIL_image.save('seg_out.png')

and I get the following output:
(attached screenshot)

I see a lot of NaN values, which are producing these black patches.
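A quick check on the raw multi-array confirms this (sketch, using out_img from the script above):

nan_count = np.isnan(out_img).sum()
print(nan_count, "NaNs out of", out_img.size)
print(np.nanmin(out_img), np.nanmax(out_img))  # range of the finite values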

Am I missing some preprocessing step that I need to take care of?
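The ONNX path feeds the tensor scaled by 255, while the CoreML image input applies its own preprocessing, so one thing I wonder about is re-converting with an explicit scale. A sketch of what I mean (the preprocessing_args values here are an assumption, not a verified fix):

coreml_model = convert(model_proto,
                       image_input_names=['0'],
                       image_output_names=['595'],
                       preprocessing_args={'image_scale': 1.0})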
Any tips or further resources I could check out would be very helpful.
Thanks

