Using a TensorFlow Dataset with stratified sampling


Given a TensorFlow dataset

Train_dataset = tf.data.Dataset.from_tensor_slices((Train_Image_Filenames,Train_Image_Labels))
Train_dataset = Train_dataset.map(Parse_JPEG_Augmented)
...

I would like to stratify my batches to deal with class imbalance. I found tf.contrib.training.stratified_sample and thought I could use it in the following way:

Train_dataset_iter = Train_dataset.make_one_shot_iterator()
Train_dataset_Image_Batch,Train_dataset_Label_Batch = Train_dataset_iter.get_next()
Train_Stratified_Images,Train_Stratified_Labels = tf.contrib.training.stratified_sample(Train_dataset_Image_Batch,Train_dataset_Label_Batch,[1/Classes]*Classes,Batch_Size)

But it gives the following error, and I am not sure this approach would let me keep the performance benefits of the TensorFlow Dataset API, since I might then have to feed Train_Stratified_Images and Train_Stratified_Labels back in through feed_dict:

File "/xxx/xxx/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/training/python/training/sampling_ops.py", line 192, in stratified_sample
with ops.name_scope(name, 'stratified_sample', list(tensors) + [labels]):
File "/xxx/xxx/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 459, in __iter__
"Tensor objects are only iterable when eager execution is "
TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn.

What is the "best practice" way of using datasets with stratified batches?

Possible duplicate of [Generating balanced mini-batches with the Dataset API](https://dev59.com/fFYN5IYBdhLWcg3wxqhu). - Ismael EL ATIFI
1 Answer

Below is a simple example demonstrating the usage of sample_from_datasets (thanks to @Agade for the idea).
import math
import tensorflow as tf
import numpy as np


def print_dataset(name, dataset):
    elems = np.array([v.numpy() for v in dataset])
    print("Dataset {} contains {} elements :".format(name, len(elems)))
    print(elems)


def combine_datasets_balanced(dataset_smaller, size_smaller, dataset_bigger, size_bigger, batch_size):
    ds_smaller_repeated = dataset_smaller.repeat(count=int(math.ceil(size_bigger / size_smaller)))
    # we repeat the smaller dataset so that the 2 datasets are about the same size
    balanced_dataset = tf.data.experimental.sample_from_datasets([ds_smaller_repeated, dataset_bigger], weights=[0.5, 0.5])
    # each element of the result is drawn randomly (without replacement) from ds_smaller_repeated with probability 0.5 or from dataset_bigger with probability 0.5
    balanced_dataset = balanced_dataset.take(2 * size_bigger).batch(batch_size)
    return balanced_dataset


N, M = 3, 10
even = tf.data.Dataset.range(0, 2 * N, 2).repeat(count=int(math.ceil(M / N)))
odd = tf.data.Dataset.range(1, 2 * M, 2)
even_odd = combine_datasets_balanced(even, N, odd, M, 2)

print_dataset("even", even)
print_dataset("odd", odd)
print_dataset("even_odd_all", even_odd)

Output:

Dataset even contains 12 elements :  # 12 = 4 x N  (because of .repeat)
[0 2 4 0 2 4 0 2 4 0 2 4]
Dataset odd contains 10 elements :
[ 1  3  5  7  9 11 13 15 17 19]
Dataset even_odd contains 10 elements :  # 10 = 2 x M / 2  (2xM because of .take(2 * M) and /2 because of .batch(2))
[[ 0  2]
 [ 1  4]
 [ 0  2]
 [ 3  4]
 [ 0  2]
 [ 4  0]
 [ 5  2]
 [ 7  4]
 [ 0  9]
 [ 2 11]] 
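To connect this back to the question, here is a minimal sketch, not a tested drop-in solution: it assumes the Train_Image_Filenames, Train_Image_Labels, Parse_JPEG_Augmented, Classes and Batch_Size names from the question are defined, and that labels are integer class ids. It builds one dataset per class with filter() and balances them with sample_from_datasets:

import tensorflow as tf

# Sketch only: the names below come from the question and are assumed defined.
ds = tf.data.Dataset.from_tensor_slices((Train_Image_Filenames, Train_Image_Labels))
ds = ds.map(Parse_JPEG_Augmented)

# One infinitely-repeated dataset per class; the default argument c=c pins
# the class id inside each lambda.
per_class = [
    ds.filter(lambda image, label, c=c: tf.equal(tf.cast(label, tf.int32), c)).repeat()
    for c in range(Classes)
]

# Draw from every class with equal probability, then batch as usual.
balanced = tf.data.experimental.sample_from_datasets(
    per_class, weights=[1.0 / Classes] * Classes)
balanced = balanced.batch(Batch_Size)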

2
From my own experience, I ran into a case where rebalancing a heavily imbalanced dataset with rejection_resample had a very large performance cost, whereas sample_from_datasets did not. - Agade
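For reference, a minimal sketch of the rejection_resample approach mentioned in the comment above, using a made-up two-class (feature, label) dataset. On a heavily skewed dataset this transformation rejects many elements, which is presumably where the cost Agade describes comes from:

import tensorflow as tf

# Made-up toy dataset: 8 elements of class 0 and 2 of class 1.
features = tf.range(10, dtype=tf.float32)
labels = tf.constant([0] * 8 + [1] * 2, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((features, labels))

# rejection_resample drops elements of over-represented classes so that the
# label distribution approaches target_dist; it yields (class, element) pairs.
resampled = dataset.apply(
    tf.data.experimental.rejection_resample(
        class_func=lambda feature, label: label,  # must return a scalar tf.int32
        target_dist=[0.5, 0.5]))

# Drop the class id that rejection_resample prepends to each element.
resampled = resampled.map(lambda extra_class, element: element)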
