Input 0 is incompatible with layer inception_v3: expected shape=(None, 299, 299, 3), found shape=(1, 229, 229, 3)


Question

How do I convert shape=(1, 299, 299, 3) to the expected shape=(None, 299, 299, 3) so I can feed it into a pretrained InceptionV3 model?

Note: the same question has been asked before, but the answer was not clear enough.
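
For context: the None in expected shape=(None, 299, 299, 3) is the batch dimension and accepts any batch size, so a (1, 299, 299, 3) tensor is already compatible; the mismatch the error actually reports is the spatial size, 229 vs. 299. A minimal sketch checking this, assuming a model built with weights=None purely to inspect shapes:

import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights=None)  # weights=None skips the weight download
print(model.input_shape)                        # (None, 299, 299, 3): None = any batch size
print(model(tf.zeros((1, 299, 299, 3))).shape)  # (1, 1000): a batch of one is accepted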

img = tf.io.read_file(img)
# decode the image into a tensor; hardcode 3 channels to be compatible regardless of the image type
img = tf.io.decode_image(img, channels=3)
# resize the image
img = tf.image.resize(img, [229, 229])
# scale to [0, 1]
img = img / 255.

img.shape

TensorShape([229, 229, 3])

import numpy as np

instance = np.expand_dims(img, axis=0)  # add a batch dimension at axis 0

print(instance.shape)

(1, 229, 229, 3)

predictions = model(instance).numpy().argmax(axis=1)

The error when I try to get predictions is as follows: ValueError: Input 0 is incompatible with layer inception_v3: expected shape=(None, 299, 299, 3), found shape=(1, 229, 229, 3)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in ()
----> 1 predictions = model(instance).numpy().argmax(axis=1)

1 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    267                              ' is incompatible with layer ' + layer_name +
    268                              ': expected shape=' + str(spec.shape) +
--> 269                              ', found shape=' + display_shape(x.shape))
    270 
    271 

ValueError: Input 0 is incompatible with layer inception_v3: expected shape=(None, 299, 299, 3), found shape=(1, 229, 229, 3)

Following answer 01, I changed the code as follows, but it raises another puzzling error.

img = tf.expand_dims(img, axis=0)  # add a batch dimension
img = tf.keras.applications.inception_v3.preprocess_input(img)  # scale pixels to [-1, 1]
predictions = model.predict(img).argmax(axis=1)

The new error is as follows:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-130-1d94d955d14c> in <module>()
----> 1 predictions = model.predict(img).argmax(axis=1)

9 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
   1749           for step in data_handler.steps():
   1750             callbacks.on_predict_batch_begin(step)
-> 1751             tmp_batch_outputs = self.predict_function(iterator)
   1752             if data_handler.should_sync:
   1753               context.async_wait()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    883 
    884       with OptionalXlaContext(self._jit_compile):
--> 885         result = self._call(*args, **kwds)
    886 
    887       new_tracing_count = self.experimental_get_tracing_count()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    931       # This is the first call of __call__, so we have to initialize.
    932       initializers = []
--> 933       self._initialize(args, kwds, add_initializers_to=initializers)
    934     finally:
    935       # At this point we know that the initialization is complete (or less

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    758     self._concrete_stateful_fn = (
    759         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
--> 760             *args, **kwds))
    761 
    762     def invalid_creator_scope(*unused_args, **unused_kwds):

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   3064       args, kwargs = None, None
   3065     with self._lock:
-> 3066       graph_function, _ = self._maybe_define_function(args, kwargs)
   3067     return graph_function
   3068 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   3461 
   3462           self._function_cache.missed.add(call_context_key)
-> 3463           graph_function = self._create_graph_function(args, kwargs)
   3464           self._function_cache.primary[cache_key] = graph_function
   3465 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3306             arg_names=arg_names,
   3307             override_flat_arg_shapes=override_flat_arg_shapes,
-> 3308             capture_by_value=self._capture_by_value),
   3309         self._function_attributes,
   3310         function_spec=self.function_spec,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
   1005         _, original_func = tf_decorator.unwrap(python_func)
   1006 
-> 1007       func_outputs = python_func(*func_args, **func_kwargs)
   1008 
   1009       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    666         # the function a weak reference to itself to avoid a reference cycle.
    667         with OptionalXlaContext(compile_with_xla):
--> 668           out = weak_wrapped_fn().__wrapped__(*args, **kwds)
    669         return out
    670 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    992           except Exception as e:  # pylint:disable=broad-except
    993             if hasattr(e, "ag_error_metadata"):
--> 994               raise e.ag_error_metadata.to_exception(e)
    995             else:
    996               raise

ValueError: in user code:

    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1586 predict_function  *
        return step_function(self, iterator)
    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1576 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1286 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2849 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3632 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1569 run_step  **
        outputs = model.predict_step(data)
    /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:1537 predict_step
        return self(x, training=False)
    /usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py:1020 __call__
        input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
    /usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py:269 assert_input_compatibility
        ', found shape=' + display_shape(x.shape))

    ValueError: Input 0 is incompatible with layer inception_v3: expected shape=(None, 299, 299, 3), found shape=(None, 229, 229, 3)

Best answer


You should use tf.expand_dims to add a new dimension to an existing tensor, and calling inception_v3.preprocess_input will scale your input pixels to between -1 and 1 for you:

import tensorflow as tf

model = tf.keras.applications.InceptionV3()

image = tf.random.normal((299, 299, 3))
image = tf.expand_dims(image, axis=0)
image = tf.keras.applications.inception_v3.preprocess_input(image)
preds = model.predict(image)
print(preds.argmax(axis=1))
[111]
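
What preprocess_input does here: the InceptionV3 variant scales pixels from [0, 255] to [-1, 1], i.e. x / 127.5 - 1. A minimal sketch verifying the equivalent manual scaling, assuming float inputs in [0, 255]:

import tensorflow as tf

x = tf.random.uniform((1, 299, 299, 3), maxval=255.0)
a = tf.keras.applications.inception_v3.preprocess_input(x)
b = x / 127.5 - 1.0                          # the same scaling done by hand
print(tf.reduce_max(tf.abs(a - b)).numpy())  # ~0.0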

Also, you are resizing your image to the wrong size: instead of [229, 229] you need [299, 299]. Try this:


import tensorflow as tf
import numpy as np

model = tf.keras.applications.inception_v3.InceptionV3(weights="imagenet")
x = tf.io.read_file('dog.jpeg')
x = tf.io.decode_image(x, channels=3)
x = tf.image.resize(x, [299, 299])
x = tf.expand_dims(x, axis=0)
x = tf.keras.applications.inception_v3.preprocess_input(x)
preds = model.predict(x)
print(preds.argmax(axis=1))
[158]
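
If you want a readable label instead of the raw class index (158 above), decode_predictions maps ImageNet predictions to (class_id, name, score) triples; a short sketch, assuming preds came from the imagenet-weighted model above (it downloads the class index file on first use):

from tensorflow.keras.applications.inception_v3 import decode_predictions

# preds has shape (1, 1000); top=3 keeps the three highest-scoring classes
print(decode_predictions(preds, top=3))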
2021-11-22 11:46:41

Now I get a confusing error: ValueError: Input 0 is incompatible with layer inception_v3: expected shape=(None, 299, 299, 3), found shape=(None, 229, 229, 3)
E Vision

Is this a memory error that occurs while the graph is being built?
E Vision

I updated the answer.
AloneTogether

Thank you so much. I was really struggling with this. I run into this type of problem a lot; is there anything else that helps?
E Vision

No problem.
AloneTogether
