Description
Issue Type
Support
Have you reproduced the bug with TF nightly?
No
Source
source
Tensorflow Version
2.9.1
Custom Code
No
OS Platform and Distribution
CentOS 7
Mobile device
No response
Python version
3.8.14
Bazel version
5.0
GCC/Compiler version
10.2
CUDA/cuDNN version
11.4
GPU model and memory
A30
Current Behaviour?
One of my model's inputs is independent of batch_size; for example, its shape is [1, 2, 3]. How can I avoid the extra leading batch dimension that is added automatically when creating it with tf.keras.layers.Input? If I slice that dimension off manually, a slice operator is introduced into the graph, which degrades inference performance.
Standalone code to reproduce the issue
import tensorflow as tf

x = tf.keras.layers.Input(shape=(32, 64))
# x.shape: (None, 32, 64) -- a batch dimension is added automatically
x = x[0, :, :]
# x.shape: (32, 64) -- the manual slice removes it, but inserts a slice op into the graph
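For reference, a minimal sketch of how the manual slice shows up in the traced inference graph. The Dense layer and the input spec below are only placeholders added to build a complete model; the point is that the getitem slice is expected to appear as a StridedSlice op:

import tensorflow as tf

inp = tf.keras.layers.Input(shape=(32, 64))   # (None, 32, 64)
sliced = inp[0, :, :]                         # (32, 64), batch dimension dropped
out = tf.keras.layers.Dense(8)(sliced)        # hypothetical downstream layer
model = tf.keras.Model(inp, out)

# Trace the model and list the slice-related ops that end up in the graph.
concrete = tf.function(model).get_concrete_function(
    tf.TensorSpec([1, 32, 64], tf.float32))
print([op.type for op in concrete.graph.get_operations() if "Slice" in op.type])
# Expected to include 'StridedSlice'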