How do I zip two RDDs in PySpark?


I have been trying to zip two RDDs, averagePoints1 and kpoints2, but it throws the following error:

ValueError: Can not deserialize RDD with different number of items in pair: (2, 1)

I have tried many approaches, but I cannot see how the two RDDs differ; they appear identical and have the same number of partitions. My next step is to apply a Euclidean distance function to the two lists to measure the difference between them. If anyone knows how to resolve this error, or can suggest another approach I could use, I would be very grateful.
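Since the end goal is a Euclidean distance between paired centers, here is a minimal pure-Python sketch of such a helper (the names `distance_squared` and `distance` are hypothetical, not from the original post):

```python
import math

# hypothetical helper: squared Euclidean distance between two points,
# as the question intends to apply to each pair of centers
def distance_squared(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

# plain Euclidean distance, if the un-squared value is needed
def distance(p, q):
    return math.sqrt(distance_squared(p, q))

print(distance([34.48939954847243, -118.17286894440112],
               [34.0830381107, -117.960562808]))
```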

Thanks in advance.

averagePoints1 = averagePoints.map(lambda x: x[1])
averagePoints1.collect()
Out[15]:
[[34.48939954847243, -118.17286894440112],
 [41.028994230117945, -120.46279399895184],
 [37.41157578999635, -121.60431843383599],
 [34.42627845075509, -113.87191272382309],
 [39.00897622397381, -122.63680410846844]]

kpoints2 = sc.parallelize(kpoints, 4)

In [17]: kpoints2.collect()
Out[17]:
[[34.0830381107, -117.960562808],
 [38.8057258629, -120.990763316],
 [38.0822414157, -121.956922473],
 [33.4516748053, -116.592291648],
 [38.1808762414, -122.246825578]]
2 Answers

a = [[34.48939954847243, -118.17286894440112],
     [41.028994230117945, -120.46279399895184],
     [37.41157578999635, -121.60431843383599],
     [34.42627845075509, -113.87191272382309],
     [39.00897622397381, -122.63680410846844]]
b = [[34.0830381107, -117.960562808],
     [38.8057258629, -120.990763316],
     [38.0822414157, -121.956922473],
     [33.4516748053, -116.592291648],
     [38.1808762414, -122.246825578]]

rdda = sc.parallelize(a)
rddb = sc.parallelize(b)
c = rdda.zip(rddb)
print(c.collect())

Please see this answer: How to combine two RDDs in pyspark. Note that RDD.zip requires both RDDs to have the same number of partitions and the same number of elements in each partition (your error message reports a pair with 2 items on one side and 1 on the other); when that does not hold, you can key each RDD with zipWithIndex and join on the index instead.


kpoints2 is a sample taken from the RDD, and averagePoints holds the average points computed from the RDD. I am going to write a while loop that runs until convergence, so this solution won't work for me. Do you have any other ideas? - Muhammed Eltabakh

newSample = newCenters.collect()      # new centers as a list
samples = list(zip(newSample, sample))  # sample => old centers
samples1 = sc.parallelize(samples)
# tuple parameter unpacking in lambdas was removed in Python 3, so index the pair
totalDistance = samples1.map(lambda pair: distanceSquared(pair[0][1], pair[1]))

For future searchers, this is the solution I ended up using.
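The same collect-then-zip pattern can be sketched locally without Spark (stand-in data and a hypothetical `distance_squared` helper, not the poster's actual values): pair old and new centers, sum the movement, and test it against a threshold to decide convergence.

```python
def distance_squared(p, q):
    # squared Euclidean distance between two 2-D centers
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

new_centers = [[1.0, 1.0], [2.0, 2.0]]  # stand-in for newCenters.collect()
old_centers = [[1.0, 0.0], [2.0, 4.0]]  # stand-in for the old `sample` list

# total movement of the centers between two iterations
total_distance = sum(distance_squared(n, o)
                     for n, o in zip(new_centers, old_centers))
print(total_distance)  # 1.0 + 4.0 = 5.0

# a while loop would repeat the update until this becomes True
converged = total_distance < 1e-4
```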


Content provided by Stack Overflow.