
Backbones for models are not loading #204

Open

sumaira-hussain opened this issue Sep 12, 2019 · 4 comments

sumaira-hussain commented Sep 12, 2019

Hi,
I have tried to load different backbones for the same model, but apart from vgg16 and resnet34, none of the backbones load.
The program and error are attached for reference.

import segmentation_models as sm
import os
import matplotlib.pyplot as plt
import skimage.io as io
from keras.callbacks import ModelCheckpoint, EarlyStopping
import tensorflow as tf
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from segmentation_models.losses import bce_jaccard_loss
from segmentation_models.metrics import iou_score

BACKBONE = 'efficientnetb7'
preprocess_input = sm.get_preprocessing(BACKBONE)

train_dir = '.....................'
train_label_dir = '.....................'
test_dir = '.....................'
train_imgs = ['.....................{}'.format(i) for i in os.listdir(train_dir)]
train_label_imgs = ['.....................{}'.format(i) for i in os.listdir(train_label_dir)]
test_imgs = ['.....................{}'.format(i) for i in os.listdir(test_dir)]
nrows = 256
ncolumns = 256
channels = 3
print(len(train_imgs))
def read_process_img(train_imgs):
    X = []
    for image in train_imgs:
        X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows, ncolumns), interpolation=cv2.INTER_CUBIC))
    print(len(X))
    return X
X = read_process_img(train_imgs)
y = read_process_img(train_label_imgs)
x_test = read_process_img(test_imgs)
X = np.array(X)
y = np.array(y)
x_test = np.array(x_test)
print(X.shape[0])
print(y.shape)

x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.20, train_size=0.8, random_state=2)

print(x_train.shape)
print(x_val.shape)
print(y_train.shape)
print(y_val.shape)

x_train = preprocess_input(x_train)
x_val = preprocess_input(x_val)
x_test = preprocess_input(x_test)

model = sm.Unet(BACKBONE, input_shape=(None, None, 3), encoder_weights='imagenet')
model.compile('Adam', loss=bce_jaccard_loss, metrics=[iou_score])

callbacks = [
    EarlyStopping(patience=10, verbose=1),
    ModelCheckpoint('unet_membrane.h5', verbose=1, save_best_only=True, save_weights_only=True)
]

results = model.fit(
    x=x_train,
    y=y_train,
    batch_size=4,
    epochs=100,
    callbacks=callbacks,
    validation_data=(x_val, y_val),
)

Error:

100
100
100
86
100
(100, 256, 256, 3)
(80, 256, 256, 3)
(20, 256, 256, 3)
(80, 256, 256, 3)
(20, 256, 256, 3)

W0912 21:36:02.976926 140192420181760 nn_ops.py:4224] Large dropout rate: 0.5125 (>0.5). In TensorFlow 2.x, dropout() uses dropout rate instead of keep_prob. Please ensure that this is intended.
W0912 21:36:03.423848 140192420181760 nn_ops.py:4224] Large dropout rate: 0.525 (>0.5). In TensorFlow 2.x, dropout() uses dropout rate instead of keep_prob. Please ensure that this is intended.

Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b7_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5


SSLCertVerificationError Traceback (most recent call last)
~/anaconda3/envs/tensorflow/lib/python3.7/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
1316 h.request(req.get_method(), req.selector, req.data, headers,
-> 1317 encode_chunked=req.has_header('Transfer-encoding'))
1318 except OSError as err: # timeout error

~/anaconda3/envs/tensorflow/lib/python3.7/http/client.py in request(self, method, url, body, headers, encode_chunked)
1228 """Send a complete request to the server."""
-> 1229 self._send_request(method, url, body, headers, encode_chunked)
1230

~/anaconda3/envs/tensorflow/lib/python3.7/http/client.py in _send_request(self, method, url, body, headers, encode_chunked)
1274 body = _encode(body, 'body')
-> 1275 self.endheaders(body, encode_chunked=encode_chunked)
1276

~/anaconda3/envs/tensorflow/lib/python3.7/http/client.py in endheaders(self, message_body, encode_chunked)
1223 raise CannotSendHeader()
-> 1224 self._send_output(message_body, encode_chunked=encode_chunked)
1225

~/anaconda3/envs/tensorflow/lib/python3.7/http/client.py in _send_output(self, message_body, encode_chunked)
1015 del self._buffer[:]
-> 1016 self.send(msg)
1017

~/anaconda3/envs/tensorflow/lib/python3.7/http/client.py in send(self, data)
955 if self.auto_open:
--> 956 self.connect()
957 else:

~/anaconda3/envs/tensorflow/lib/python3.7/http/client.py in connect(self)
1391 self.sock = self._context.wrap_socket(self.sock,
-> 1392 server_hostname=server_hostname)
1393

~/anaconda3/envs/tensorflow/lib/python3.7/ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
411 context=self,
--> 412 session=session
413 )

~/anaconda3/envs/tensorflow/lib/python3.7/ssl.py in _create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)
852 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
--> 853 self.do_handshake()
854 except (OSError, ValueError):

~/anaconda3/envs/tensorflow/lib/python3.7/ssl.py in do_handshake(self, block)
1116 self.settimeout(None)
-> 1117 self._sslobj.do_handshake()
1118 finally:

SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)

During handling of the above exception, another exception occurred:

URLError Traceback (most recent call last)
~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/keras/utils/data_utils.py in get_file(fname, origin, untar, md5_hash, file_hash, cache_subdir, hash_algorithm, extract, archive_format, cache_dir)
221 try:
--> 222 urlretrieve(origin, fpath, dl_progress)
223 except HTTPError as e:

~/anaconda3/envs/tensorflow/lib/python3.7/urllib/request.py in urlretrieve(url, filename, reporthook, data)
246
--> 247 with contextlib.closing(urlopen(url, data)) as fp:
248 headers = fp.info()

~/anaconda3/envs/tensorflow/lib/python3.7/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
221 opener = _opener
--> 222 return opener.open(url, data, timeout)
223

~/anaconda3/envs/tensorflow/lib/python3.7/urllib/request.py in open(self, fullurl, data, timeout)
524
--> 525 response = self._open(req, data)
526

~/anaconda3/envs/tensorflow/lib/python3.7/urllib/request.py in _open(self, req, data)
542 result = self._call_chain(self.handle_open, protocol, protocol +
--> 543 '_open', req)
544 if result:

~/anaconda3/envs/tensorflow/lib/python3.7/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
502 func = getattr(handler, meth_name)
--> 503 result = func(*args)
504 if result is not None:

~/anaconda3/envs/tensorflow/lib/python3.7/urllib/request.py in https_open(self, req)
1359 return self.do_open(http.client.HTTPSConnection, req,
-> 1360 context=self._context, check_hostname=self._check_hostname)
1361

~/anaconda3/envs/tensorflow/lib/python3.7/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
1318 except OSError as err: # timeout error
-> 1319 raise URLError(err)
1320 r = h.getresponse()

URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)>

During handling of the above exception, another exception occurred:

Exception Traceback (most recent call last)
in
72
73 # define model
---> 74 model = sm.Unet(BACKBONE, input_shape=(None, None,3 ),encoder_weights='imagenet')
75 model.compile('Adam', loss=bce_jaccard_loss, metrics=[iou_score])
76

~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/segmentation_models/__init__.py in wrapper(*args, **kwargs)
32 kwargs['models'] = _KERAS_MODELS
33 kwargs['utils'] = _KERAS_UTILS
---> 34 return func(*args, **kwargs)
35
36 return wrapper

~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/segmentation_models/models/unet.py in Unet(backbone_name, input_shape, classes, activation, weights, encoder_weights, encoder_freeze, encoder_features, decoder_block_type, decoder_filters, decoder_use_batchnorm, **kwargs)
224 weights=encoder_weights,
225 include_top=False,
--> 226 **kwargs,
227 )
228

~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/segmentation_models/backbones/backbones_factory.py in get_backbone(self, name, *args, **kwargs)
101 def get_backbone(self, name, *args, **kwargs):
102 model_fn, _ = self.get(name)
--> 103 model = model_fn(*args, **kwargs)
104 return model
105

~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/classification_models/models_factory.py in wrapper(*args, **kwargs)
76 modules_kwargs = self.get_kwargs()
77 new_kwargs = dict(list(kwargs.items()) + list(modules_kwargs.items()))
---> 78 return func(*args, **new_kwargs)
79
80 return wrapper

~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/efficientnet/model.py in EfficientNetB7(include_top, weights, input_tensor, input_shape, pooling, classes, **kwargs)
591 input_tensor=input_tensor, input_shape=input_shape,
592 pooling=pooling, classes=classes,
--> 593 **kwargs)
594
595

~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/efficientnet/model.py in EfficientNet(width_coefficient, depth_coefficient, default_resolution, dropout_rate, drop_connect_rate, depth_divisor, blocks_args, model_name, include_top, weights, input_tensor, input_shape, pooling, classes, **kwargs)
466 BASE_WEIGHTS_PATH + file_name,
467 cache_subdir='models',
--> 468 file_hash=file_hash)
469 model.load_weights(weights_path)
470 elif weights is not None:

~/anaconda3/envs/tensorflow/lib/python3.7/site-packages/keras/utils/data_utils.py in get_file(fname, origin, untar, md5_hash, file_hash, cache_subdir, hash_algorithm, extract, archive_format, cache_dir)
224 raise Exception(error_msg.format(origin, e.code, e.msg))
225 except URLError as e:
--> 226 raise Exception(error_msg.format(origin, e.errno, e.reason))
227 except (Exception, KeyboardInterrupt):
228 if os.path.exists(fpath):

Exception: URL fetch failure on https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b7_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5: None -- [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1056)
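
For reference, the traceback shows keras' get_file() failing TLS certificate verification while downloading the EfficientNet weights, not a problem in the model definition itself. A minimal workaround sketch (assuming the default ~/.keras/models cache directory; the unverified-context option is insecure and only suitable for a quick test):

import ssl

# Option 1 (quick, insecure): let urllib skip certificate verification so that
# keras.utils.get_file() can complete the download.
ssl._create_default_https_context = ssl._create_unverified_context

# Option 2 (preferred): download the .h5 file named in the traceback manually
# (e.g. with a browser) and place it in keras' cache so no download is needed:
#   ~/.keras/models/efficientnet-b7_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5

import segmentation_models as sm
model = sm.Unet('efficientnetb7', input_shape=(None, None, 3), encoder_weights='imagenet')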

@sumaira-hussain (Author)

@qubvel

@JordanMakesMaps

Did you already look at #198?

@sumaira-hussain (Author)

> Did you already look at #198?

Yes, I did, but it didn't work in my case. I am only able to run vgg16 and resnet34; the rest are not available, and trying to check the version also returns the error "module 'efficientnet' has no attribute '__version__'".
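
Older releases of the efficientnet package do not define __version__, so that AttributeError on its own only says the attribute is missing. A small sketch for checking the installed version through pip metadata instead (pkg_resources ships with setuptools):

import pkg_resources

# Query the package metadata rather than a module attribute.
print(pkg_resources.get_distribution('efficientnet').version)

The equivalent from a shell is pip show efficientnet.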


maremoto commented Sep 27, 2019

> Did you already look at #198?

Hi, I have the same issue as in #198: the efficientnet backbones are not loading on Windows.

The versions of the packages are:

conda 4.3.21 (I use a conda environment)
python 3.6.7
jupyter 4.4.0
tensorflow-gpu 1.14.0
keras 2.3.0
segmentation_models 0.2.1
image-classifiers 0.2.0
albumentations 0.3.3
efficientnet 0.0.4

If I try to execute efficientnet.__version__ in the notebook I get
AttributeError: module 'efficientnet' has no attribute '__version__'

Thanks.
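
One way to narrow this down (a sketch, not something reported in the thread) is to build the model without ImageNet weights: if it builds with encoder_weights=None, the backbone is registered and only the weight download is failing (network/SSL); if it still raises, the installed segmentation_models/efficientnet versions do not provide that backbone.

import segmentation_models as sm

# Building without pretrained weights avoids the download entirely.
model = sm.Unet('efficientnetb7', input_shape=(None, None, 3), encoder_weights=None)
model.summary()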
