TypeError: Expected float32 passed to parameter 'y' of op 'Equal', got 'auto' of type 'str' instead

21

I'm building a neural network to predict audio data (to learn how neural networks work and how to use TensorFlow), and so far everything has gone smoothly, except for one problem: I've searched around for a solution but found nothing that specifically helped. I have the dataset and the model set up, and both work fine, but when I try to train the model I somehow get a TypeError, even though every value in the dataset is a 32-bit float. If anyone can answer this, or at least point me in the right direction, I'd be very grateful. The code and console output are below. (By the way, all values in the dataset are between 0 and 1; I don't know if that's relevant, but I wanted to mention it.)

Edit: I've also included the AudioHandler class, which can be used to reproduce the error. get_audio_array and get_audio_arrays can be used to convert a single MP3 file or multiple MP3 files into arrays of audio data. You can then use dataset_from_arrays to generate a dataset from the audio arrays created with get_audio_arrays.

from AudioHandler import AudioHandler
import tensorflow as tf
import os

seq_length = 22050
BATCH_SIZE = 64
BUFFER_SIZE = 10000

audio_arrays = AudioHandler.get_audio_arrays("AudioDataset", normalized=True)

dataset = AudioHandler.dataset_from_arrays(audio_arrays, seq_length, BATCH_SIZE, buffer_size=BUFFER_SIZE)

print(dataset)

rnn_units = 256

def build_model(rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(batch_input_shape=(batch_size, None, 2)),

        tf.keras.layers.GRU(rnn_units, return_sequences=True, stateful=True),

        tf.keras.layers.Dense(2)
    ])
    return model


model = build_model(rnn_units, BATCH_SIZE)

model.summary()

model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError)

EPOCHS = 10

history = model.fit(dataset, epochs=EPOCHS)

AudioHandler.py:

from pydub import AudioSegment
import numpy as np
from pathlib import Path
from tensorflow import data
import os


class AudioHandler:

    @staticmethod
    def print_audio_info(file_name):
        audio_segment = AudioSegment.from_file(file_name)
        print("Information of '" + file_name + "':")
        print("Sample rate: " + str(audio_segment.frame_rate) + "kHz")
        # Multiply frame_width by 8 to get bits, since it is given in bytes
        print("Sample width: " + str(audio_segment.frame_width * 8) + " bits per sample (" + str(
            int(audio_segment.frame_width * 8 / audio_segment.channels)) + " bits per channel)")
        print("Channels: " + str(audio_segment.channels))

    @staticmethod
    def get_audio_array(file_name, normalized=True):
        audio_segment = AudioSegment.from_file(file_name)
        # Get bytestring of raw audio data
        raw_audio_bytestring = audio_segment.raw_data
        # Adjust sample width to accommodate multiple channels in each sample
        sample_width = audio_segment.frame_width // audio_segment.channels
        # Convert bytestring to numpy array
        if sample_width == 1:
            raw_audio = np.array(np.frombuffer(raw_audio_bytestring, dtype=np.int8))
        elif sample_width == 2:
            raw_audio = np.array(np.frombuffer(raw_audio_bytestring, dtype=np.int16))
        elif sample_width == 4:
            raw_audio = np.array(np.frombuffer(raw_audio_bytestring, dtype=np.int32))
        else:
            raw_audio = np.array(np.frombuffer(raw_audio_bytestring, dtype=np.int16))
        # Normalize the audio data
        if normalized:
            # Cast the audio data as 32 bit floats
            raw_audio = raw_audio.astype(dtype=np.float32)
            # Make all values positive
            raw_audio += np.power(2, 8*sample_width)/2
            # Normalize all values between 0 and 1
            raw_audio *= 1/np.power(2, 8*sample_width)
        # Reshape the array to accommodate multiple channels
        if audio_segment.channels > 1:
            raw_audio = raw_audio.reshape((-1, audio_segment.channels))

        return raw_audio

    @staticmethod
    # Return an array of all audio files in directory, as arrays of audio data
    def get_audio_arrays(directory, filetype='mp3', normalized=True):

        file_count_total = len([name for name in os.listdir(directory) if os.path.isfile(os.path.join(directory, name))]) - 1

        audio_arrays = []
        # Iterate through all audio files
        pathlist = Path(directory).glob('**/*.' + filetype)
        # Keep track of progress
        file_count = 0
        print("Loading audio files... 0%")
        for path in pathlist:
            path_string = str(path)
            audio_array = AudioHandler.get_audio_array(path_string, normalized=normalized)
            audio_arrays.append(audio_array)
            # Update Progress
            file_count += 1
            print('Loading audio files... ' + str(int(file_count/file_count_total*100)) + '%')

        return audio_arrays

    @staticmethod
    def export_to_file(audio_data_array, file_name, normalized=True, file_type="mp3", bitrate="256k"):
        if normalized:
            audio_data_array *= np.power(2, 16)
            audio_data_array -= np.power(2, 16)/2
        audio_data_array = audio_data_array.astype(np.int16)
        audio_data_array = audio_data_array.reshape((1, -1))[0]
        raw_audio = audio_data_array.tobytes()
        audio_segment = AudioSegment(data=raw_audio, sample_width=2, frame_rate=44100, channels=2)
        audio_segment.export(file_name, format=file_type, bitrate=bitrate)

    # Splits a sequence into input values and target values
    @staticmethod
    def __split_input_target(chunk):
        input_audio = chunk[:-1]
        target_audio = chunk[1:]
        return input_audio, target_audio

    @staticmethod
    def dataset_from_arrays(audio_arrays, sequence_length, batch_size, buffer_size=10000):
        # Create main data set, starting with first audio array
        dataset = data.Dataset.from_tensor_slices(audio_arrays[0])
        dataset = dataset.batch(sequence_length + 1, drop_remainder=True)
        # Split each audio array into sequences individually,
        # then concatenate each individual data set with the main data set
        for i in range(1, len(audio_arrays)):
            audio_data = audio_arrays[i]
            tensor_slices = data.Dataset.from_tensor_slices(audio_data)
            audio_dataset = tensor_slices.batch(sequence_length + 1, drop_remainder=True)
            # concatenate() returns a new dataset, so the result must be reassigned
            dataset = dataset.concatenate(audio_dataset)

        dataset = dataset.map(AudioHandler.__split_input_target)

        dataset = dataset.shuffle(buffer_size).batch(batch_size, drop_remainder=True)

        return dataset

Console output:

Loading audio files... 0%
Loading audio files... 25%
Loading audio files... 50%
Loading audio files... 75%
Loading audio files... 100%
2020-06-21 00:20:10.796993: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-06-21 00:20:10.811357: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fddb7b23fd0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-21 00:20:10.811368: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
<BatchDataset shapes: ((64, 22050, 2), (64, 22050, 2)), types: (tf.float32, tf.float32)>
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
gru (GRU)                    (64, None, 256)           199680    
_________________________________________________________________
dense (Dense)                (64, None, 2)             514       
=================================================================
Total params: 200,194
Trainable params: 200,194
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
Traceback (most recent call last):
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/RNN.py", line 57, in <module>
    history = model.fit(dataset, epochs=EPOCHS)
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 848, in fit
    tmp_logs = train_function(iterator)
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 627, in _call
    self._initialize(args, kwds, add_initializers_to=initializers)
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
    *args, **kwds))
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 968, in wrapper
    raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:

    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:571 train_function  *
        outputs = self.distribute_strategy.run(
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:533 train_step  **
        y, y_pred, sample_weight, regularization_losses=self.losses)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/engine/compile_utils.py:205 __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:143 __call__
        losses = self.call(y_true, y_pred)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:246 call
        return self.fn(y_true, y_pred, **self._fn_kwargs)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:313 __init__
        mean_squared_error, name=name, reduction=reduction)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:229 __init__
        super(LossFunctionWrapper, self).__init__(reduction=reduction, name=name)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/keras/losses.py:94 __init__
        losses_utils.ReductionV2.validate(reduction)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/ops/losses/loss_reduction.py:67 validate
        if key not in cls.all():
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:1491 tensor_equals
        return gen_math_ops.equal(self, other, incompatible_shape_error=False)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py:3224 equal
        name=name)
    /Users/anonteau/Desktop/Development/Python/Lo-FiGenerator/venv/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:479 _apply_op_helper
        repr(values), type(values).__name__, err))

    TypeError: Expected float32 passed to parameter 'y' of op 'Equal', got 'auto' of type 'str' instead. Error: Expected float32, got 'auto' of type 'str' instead.

A few observations from your code:
  1. return_sequences=True should only be used when you stack another GRU layer on top of the existing one. Since you are using only one layer, you can set this parameter to False.
  2. What exactly do you mean by tf.keras.layers.Dense(2)? Do you want to predict two values?
  3. You could try removing the InputLayer and adding the shape in the GRU layer instead (just to experiment).
If none of the above changes work, please share the code (or installation steps) for the AudioHandler module so that we can reproduce your error. Thanks!
- user11530462
Thank you for the comment and the suggestions. I've made the changes you recommended but still get the same error as before, so I've shared the AudioHandler module's code. (And yes, I'm trying to predict 2 float values representing the channels of the audio data.) - notpublic
5 Answers

69

Try changing

model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError)

to

model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError())

2
Good suggestion! May I ask what the difference is between the two? - Memphis Meng
6
In the first case you pass the class to the loss argument; in the second you pass an instance of that class. - Sourcerer
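
To make the difference concrete, here is a minimal, self-contained sketch (the tiny Dense model and random data are made up for illustration) of loss specifications that Keras accepts:

```python
import numpy as np
import tensorflow as tf

def make_model():
    # A toy single-layer regression model, just to exercise compile()/fit().
    return tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

x = np.random.rand(8, 4).astype(np.float32)
y = np.random.rand(8, 1).astype(np.float32)

# A loss *instance* (note the parentheses) -- what the answer recommends.
m1 = make_model()
m1.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError())
h1 = m1.fit(x, y, epochs=1, verbose=0)

# A string identifier also works; Keras resolves it to an instance internally.
m2 = make_model()
m2.compile(optimizer='adam', loss='mse')
h2 = m2.fit(x, y, epochs=1, verbose=0)

# Passing the bare class, loss=tf.keras.losses.MeanSquaredError, triggers the
# TypeError above: judging from the traceback, Keras treats the class as a
# plain callable and invokes it with (y_true, y_pred), so a float32 tensor
# lands in the string `reduction` argument that is later compared to 'auto'.
```
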

9

I was compiling my model with:

model.compile(optimizer='Adam', loss=tf.losses.CosineSimilarity,
metrics= ['accuracy'])

and ran into the same error. I changed my code as follows:

model.compile(optimizer='Adam', loss= tf.losses.CosineSimilarity(),
metrics= ['accuracy'])

and it worked.


2

I ran into a similar problem when compiling my model with:

model.compile(loss=tf.keras.losses.MeanSquaredError, optimizer='Adam')

I changed it to the following, and then it worked. Thanks to https://stackoverflow.com/users/7349864/sourcerer

model.compile(loss=tf.keras.losses.MeanSquaredError(), optimizer='Adam')

1

I was compiling my model with:

model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(), optimizer='adam', metrics=['accuracy'])

and got the same error. I changed my code as follows:

model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer='adam', metrics=['accuracy'])

and then it worked.

The from_logits=True attribute tells the loss function that the output values produced by the model are not normalized, i.e. they are logits. In other words, the softmax function has not been applied to them to produce a probability distribution, so in this case the output layer has no softmax activation function.
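
The equivalence can be checked directly. A small sketch (the logit values and labels are arbitrary):

```python
import numpy as np
import tensorflow as tf

# Raw scores as they would come from a Dense layer with no activation.
logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.3, 2.5, 0.2]])
labels = tf.constant([0, 1])

# Option A: hand the loss the unnormalized logits.
loss_a = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(labels, logits).numpy()

# Option B: apply softmax yourself and use the default from_logits=False.
probs = tf.nn.softmax(logits)
loss_b = tf.keras.losses.SparseCategoricalCrossentropy()(labels, probs).numpy()

# Both routes compute the same cross-entropy (up to float rounding).
```
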


0

Another variant of this error, with a solution:

x = self.input1(x)

You don't need this line; you can pass the input directly into the LSTM without predefining an input shape. I'd suggest working through the following:

import tensorflow as tf

class MyModel(tf.keras.Model):

  def __init__(self):
    super(MyModel, self).__init__()
    self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
    self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

  def call(self, inputs):
    x = self.dense1(inputs)
    return self.dense2(x)

model = MyModel()
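
As a quick sanity check of the subclassing pattern in this answer (the model is re-declared here so the snippet is self-contained; the input width of 8 is an arbitrary choice):

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

  def __init__(self):
    super(MyModel, self).__init__()
    self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
    self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

  def call(self, inputs):
    x = self.dense1(inputs)
    return self.dense2(x)

model = MyModel()
# No input shape was predefined: the layers build themselves on first call,
# inferring their weight shapes from the incoming batch.
out = model(tf.zeros((3, 8)))
```
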
