How do I build a large distributed [sparse] matrix in Apache Spark 1.0?


I have an RDD in this format: byUserHour: org.apache.spark.rdd.RDD[(String, String, Int)]. I would like to create a sparse matrix of this data so I can compute medians, means, etc. The RDD contains the row id, the column id, and the value. I also have two Arrays of Strings containing the row ids and column ids for lookup.
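For illustration, a tiny made-up sample in the shape described above (the real dataset is far larger) could look like:

// Hypothetical miniature of the data: (row id, column id, value) triples.
val byUserHour: org.apache.spark.rdd.RDD[(String, String, Int)] =
  sc.parallelize(Seq(
    ("user1", "08", 3),
    ("user1", "09", 1),
    ("user2", "08", 7)))

// Lookup arrays for the row ids and column ids.
val userids = Array("user1", "user2")
val times = Array("08", "09")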

Here is my attempt:

import breeze.linalg._

// Build a Breeze CSC (compressed sparse column) matrix from the RDD entries.
val builder = new CSCMatrix.Builder[Int](rows = BCnUsers.value.toInt, cols = broadcastTimes.value.size)
byUserHour.foreach { x =>
  val row = userids.indexOf(x._1)              // row index from the user id lookup array
  val col = broadcastTimes.value.indexOf(x._2) // column index from the broadcast time lookup
  builder.add(row, col, x._3)
}
builder.result()

Here is my error message:
14/06/10 16:39:34 INFO DAGScheduler: Failed to run foreach at <console>:38
org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: breeze.linalg.CSCMatrix$Builder$mcI$sp
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:770)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:713)
    at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1176)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

My dataset is very large, so I would like to do this in a distributed fashion if possible. Any help would be appreciated.


Progress update:

CSCMatrix is not meant to work in Spark. However, there is RowMatrix, which extends DistributedMatrix. RowMatrix does have a method, computeColumnSummaryStatistics(), that should compute some of the statistics I am looking for. I know MLlib grows every day, so I will watch for updates, but in the meantime I will try to build an RDD[Vector] to feed into a RowMatrix. Note that RowMatrix is experimental and represents a row-oriented distributed matrix with no meaningful row indices.


I don't know the answer, but CSCMatrix.Builder is only designed for single-threaded execution, let alone distributed. - dlwh
Thanks for sharing your findings. There is also a SparseVector implementation available: http://people.csail.mit.edu/matei/spark-unified-docs/api/scala/index.html#org.apache.spark.mllib.linalg.SparseVector - maasg
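(For reference, such an MLlib sparse vector is built through the Vectors factory; the size and indices below are made up for illustration:)

import org.apache.spark.mllib.linalg.Vectors

// A length-5 sparse vector with non-zero entries at indices 1 and 3.
val sv = Vectors.sparse(5, Array(1, 3), Array(2.0, 7.0))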
2 Answers


Starting off of the map, byUserHour is now mapped differently so that it is an RDD[(String, (String, Int))]. Since RowMatrix does not preserve the order of its rows, I groupByKey on the row id. Maybe in the future I will figure out how to do this with a sparse matrix.

val byUser = byUserHour.groupByKey // RDD[(String, Iterable[(String, Int)])]
val times = countHour.map(x => x._1.split("\\+")(1)).distinct.collect.sortWith(_ < _)  // Array[String]
val broadcastTimes = sc.broadcast(times) // Broadcast[Array[String]]

val userMaps = byUser.mapValues { x =>
  x.map { case (time, cnt) => time -> cnt }.toMap
} // RDD[(String, scala.collection.immutable.Map[String,Int])]

val rows = userMaps.map { case (u, ut) =>
  u.toDouble +: broadcastTimes.value.map(ut.getOrElse(_, 0).toDouble)
} // RDD[Array[Double]]

import org.apache.spark.mllib.linalg.{Vector, Vectors}
val rowVec = rows.map(x => Vectors.dense(x)) // RDD[org.apache.spark.mllib.linalg.Vector]

import org.apache.spark.mllib.linalg.distributed._
val countMatrix = new RowMatrix(rowVec)
val stats = countMatrix.computeColumnSummaryStatistics()
val meanvec = stats.mean
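For reference, the same summary object (a MultivariateStatisticalSummary) also exposes other per-column statistics; a median, however, is not among them:

val varvec = stats.variance     // per-column variance
val minvec = stats.min          // per-column minimum
val maxvec = stats.max          // per-column maximum
val nnzvec = stats.numNonzeros  // per-column count of non-zero entries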

You can use a CoordinateMatrix for this:
import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, MatrixEntry}

// Note: MatrixEntry takes (i: Long, j: Long, value: Double), so row and col
// must already be numeric indices here, not the String ids from the question.
val sparseMatrix = new CoordinateMatrix(byUserHour.map {
  case (row, col, data) => MatrixEntry(row, col, data)
})
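Since byUserHour in the question holds String ids while MatrixEntry takes (Long, Long, Double), the ids would have to be translated to zero-based Long indices first. A rough sketch, assuming the lookup arrays from the question are broadcast as broadcastUserids and broadcastTimes (names made up here):

import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, MatrixEntry}

val entries = byUserHour.map { case (user, time, cnt) =>
  MatrixEntry(
    broadcastUserids.value.indexOf(user).toLong, // row index from the user lookup
    broadcastTimes.value.indexOf(time).toLong,   // column index from the time lookup
    cnt.toDouble)
}
val coordMatrix = new CoordinateMatrix(entries)

// The column summary statistics can then be reached via a RowMatrix.
val summary = coordMatrix.toRowMatrix().computeColumnSummaryStatistics()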
