How can I merge dynamically named columns into a dictionary?


Here are the data frames:

IncomingCount
-------------------------
Venue|Date    | 08 | 10 |
-------------------------
Hotel|20190101| 15 | 03 |
Beach|20190101| 93 | 45 |

OutgoingCount
-------------------------
Venue|Date    | 07 | 10 | 
-------------------------
Beach|20190101| 30 | 5  |
Hotel|20190103| 05 | 15 |

How can I merge (full join) the two tables to get the following, without manually looping over every row of both tables?

Dictionary:
[
 {"Venue":"Hotel", "Date":"20190101", "08":{ "IncomingCount":15 }, "10":{ "IncomingCount":03 } },
 {"Venue":"Beach", "Date":"20190101", "07":{ "OutgoingCount":30 }, "08":{ "IncomingCount":93 }, "10":{ "IncomingCount":45, "OutgoingCount":15 } },
 {"Venue":"Hotel", "Date":"20190103", "07":{ "OutgoingCount":05 }, "10":{ "OutgoingCount":15 } }
]

The conditions are:
  1. The Venue and Date columns act as the join keys.
  2. The other, numerically named columns are created dynamically.
  3. If a dynamic column does not exist, it is excluded (or included with the value None).
3 Answers

This is as much as I could make of it.

So far, I am able to get:

import pandas as pd
import numpy as np

dd1 = {'venue': ['hotel', 'beach'], 'date':['20190101', '20190101'], '08': [15, 93], '10':[3, 45]}
dd2 = {'venue': ['beach', 'hotel'], 'date':['20190101', '20190103'], '07': [30, 5], '10':[5, 15]}

df1 = pd.DataFrame(data=dd1)
df2 = pd.DataFrame(data=dd2)

df1.columns = [f"IncomingCount:{x}" if x not in ['venue', 'date'] else x for x in df1.columns]
df2.columns = [f"OutgoingCount:{x}" if x not in ['venue', 'date'] else x for x in df2.columns ]

ll_dd = pd.merge(df1, df2, on=['venue', 'date'], how='outer').to_dict('records')
ll_dd = [{k:v for k,v in dd.items() if not pd.isnull(v)} for dd in ll_dd]

Output:
[{'venue': 'hotel',
  'date': '20190101',
  'IncomingCount:08': 15.0,
  'IncomingCount:10': 3.0},
 {'venue': 'beach',
  'date': '20190101',
  'IncomingCount:08': 93.0,
  'IncomingCount:10': 45.0,
  'OutgoingCount:07': 30.0,
  'OutgoingCount:10': 5.0},
 {'venue': 'hotel',
  'date': '20190103',
  'OutgoingCount:07': 5.0,
  'OutgoingCount:10': 15.0}]
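
The keys here are flat ('IncomingCount:08') rather than nested as in the question. If the nested shape is required, a small follow-up sketch (my own addition, relying on the 'Source:Column' prefixes introduced above) could split the keys back apart:

# Sketch only: turn the flat 'Source:Column' keys into the nested shape
# from the question, e.g. 'IncomingCount:08' -> {'08': {'IncomingCount': 15}}.
nested = []
for record in ll_dd:
    row = {}
    for key, value in record.items():
        if ':' in key:                          # a prefixed count column
            source, column = key.split(':', 1)
            row.setdefault(column, {})[source] = int(value)  # NaN already filtered out, so int() is safe
        else:                                   # 'venue' / 'date' pass through unchanged
            row[key] = value
    nested.append(row)

print(nested)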

Thanks, but due to system constraints I can't use pandas. I'm looking for a pyspark solution. - Chris Wijaya


It's a bit convoluted, but it can be done with Spark's create_map function.

Basically, split the columns into four groups: keys (Venue, Date), common (10), incoming only (08), outgoing only (07).

Then, for each group other than the keys, build a mapper that maps only what is available in that group. Apply the mapping, drop the old column and rename the mapped column back to the old name.

Finally, convert all the rows to dicts (from the df's rdd) and collect.

from pyspark.sql import SparkSession
from pyspark.sql.functions import create_map, col, lit

spark = SparkSession.builder.appName('hotels_and_beaches').getOrCreate()

incoming_counts = spark.createDataFrame([('Hotel', 20190101, 15, 3), ('Beach', 20190101, 93, 45)], ['Venue', 'Date', '08', '10']).alias('inc')
outgoing_counts = spark.createDataFrame([('Beach', 20190101, 30, 5), ('Hotel', 20190103, 5, 15)], ['Venue', 'Date', '07', '10']).alias('out')

# Full outer join on the keys
df = incoming_counts.join(outgoing_counts, on=['Venue', 'Date'], how='full')

# Split the non-key columns into three groups: common to both tables,
# incoming-only and outgoing-only
outgoing_cols = {c for c in outgoing_counts.columns if c not in {'Venue', 'Date'}}
incoming_cols = {c for c in incoming_counts.columns if c not in {'Venue', 'Date'}}

common_cols = outgoing_cols.intersection(incoming_cols)

outgoing_cols = outgoing_cols.difference(common_cols)
incoming_cols = incoming_cols.difference(common_cols)

# Columns present in both tables get a map holding both counts
for c in common_cols:
    df = df.withColumn(
        c + '_new', create_map(
            lit('IncomingCount'), col('inc.{}'.format(c)),
            lit('OutgoingCount'), col('out.{}'.format(c)),
        )
    ).drop(c).withColumnRenamed(c + '_new', c)

# Incoming-only columns map just the IncomingCount
for c in incoming_cols:
    df = df.withColumn(
        c + '_new', create_map(
            lit('IncomingCount'), col('inc.{}'.format(c)),
        )
    ).drop(c).withColumnRenamed(c + '_new', c)

# Outgoing-only columns map just the OutgoingCount
for c in outgoing_cols:
    df = df.withColumn(
        c + '_new', create_map(
            lit('OutgoingCount'), col('out.{}'.format(c)),
        )
    ).drop(c).withColumnRenamed(c + '_new', c)

# Convert each row to a plain dict and collect to the driver
result = df.coalesce(1).rdd.map(lambda r: r.asDict()).collect()
print(result)

result:

[{'Venue': 'Hotel', 'Date': 20190101,
  '10': {'OutgoingCount': None, 'IncomingCount': 3},
  '08': {'IncomingCount': 15},
  '07': {'OutgoingCount': None}},
 {'Venue': 'Hotel', 'Date': 20190103,
  '10': {'OutgoingCount': 15, 'IncomingCount': None},
  '08': {'IncomingCount': None},
  '07': {'OutgoingCount': 5}},
 {'Venue': 'Beach', 'Date': 20190101,
  '10': {'OutgoingCount': 5, 'IncomingCount': 45},
  '08': {'IncomingCount': 93},
  '07': {'OutgoingCount': 30}}]
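
Note that counts missing from one of the tables come through as None inside the maps (the OP's condition 3 allows either behaviour). If they should be excluded instead, a minimal post-processing sketch over the collected dicts (my own addition, not part of the answer code above) could look like this:

# Sketch only: strip None values out of the nested maps and drop columns that
# end up empty, so absent counts are excluded rather than carried as None.
def strip_nulls(row):
    cleaned = {}
    for key, value in row.items():
        if isinstance(value, dict):
            value = {k: v for k, v in value.items() if v is not None}
            if not value:      # every count was None -> drop the column entirely
                continue
        cleaned[key] = value
    return cleaned

result = [strip_nulls(row) for row in result]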

I'm not sure you would get the required nesting if you didn't create the map columns. - Rich
You're right, my earlier comment was wrong; I deleted it to avoid confusion. - Chris Wijaya

The end result is a list of dictionaries, as the OP expects, in which all DataFrame rows sharing the same Venue and Date have been merged together.
# Creating the DataFrames
df_Incoming = sqlContext.createDataFrame([('Hotel','20190101',15,3),('Beach','20190101',93,45)],('Venue','Date','08','10'))
df_Incoming.show()
+-----+--------+---+---+
|Venue|    Date| 08| 10|
+-----+--------+---+---+
|Hotel|20190101| 15|  3|
|Beach|20190101| 93| 45|
+-----+--------+---+---+
df_Outgoing = sqlContext.createDataFrame([('Beach','20190101',30,5),('Hotel','20190103',5,15)],('Venue','Date','07','10'))
df_Outgoing.show()
+-----+--------+---+---+
|Venue|    Date| 07| 10|
+-----+--------+---+---+
|Beach|20190101| 30|  5|
|Hotel|20190103|  5| 15|
+-----+--------+---+---+

The idea is to create a dictionary from each row and store all the rows of each DataFrame as dictionaries in one big list. As the final step, we combine the dictionaries that share the same Venue and Date.

Since all the rows of a DataFrame are stored as Row() objects, we use the collect() function to return all the records as a list of Row() objects. Just to illustrate the output -

print(df_Incoming.collect())
[Row(Venue='Hotel', Date='20190101', 08=15, 10=3), Row(Venue='Beach', Date='20190101', 08=93, 10=45)]

However, since we want a list of dictionaries, we can use a list comprehension to convert the rows -

list_Incoming = [row.asDict() for row in df_Incoming.collect()]
print(list_Incoming)
[{'10': 3, 'Date': '20190101', 'Venue': 'Hotel', '08': 15}, {'10': 45, 'Date': '20190101', 'Venue': 'Beach', '08': 93}]

But since the numeric columns need to be in the form "08": { "IncomingCount": 15 } rather than "08": 15, we use a dictionary comprehension to convert them to that shape -

list_Incoming = [ {k:v if k in ['Venue','Date'] else {'IncomingCount':v} for k,v in dict_element.items()} for dict_element in list_Incoming]
print(list_Incoming)
[{'10': {'IncomingCount': 3}, 'Date': '20190101', 'Venue': 'Hotel', '08': {'IncomingCount': 15}}, {'10': {'IncomingCount': 45}, 'Date': '20190101', 'Venue': 'Beach', '08': {'IncomingCount': 93}}]

We do the same for OutgoingCount.

list_Outgoing = [row.asDict() for row in df_Outgoing.collect()]
list_Outgoing = [ {k:v if k in ['Venue','Date'] else {'OutgoingCount':v} for k,v in dict_element.items()} for dict_element in list_Outgoing]
print(list_Outgoing)
[{'10': {'OutgoingCount': 5}, 'Date': '20190101', 'Venue': 'Beach', '07': {'OutgoingCount': 30}}, {'10': {'OutgoingCount': 15}, 'Date': '20190103', 'Venue': 'Hotel', '07': {'OutgoingCount': 5}}]

Final step: now that the necessary lists of dictionaries have been created, we need to merge them together on Venue and Date.

from copy import deepcopy
def merge_lists(list_Incoming, list_Outgoing):
    # create dictionary from list_Incoming:
    dict1 = {(record['Venue'], record['Date']): record  for record in list_Incoming}

    # compare elements in list_Outgoing to those in list_Incoming:

    result = {}
    for record in list_Outgoing:
        ckey = record['Venue'], record['Date']
        new_record = deepcopy(record)
        if ckey in dict1:
            for key, value in dict1[ckey].items():
                if key in ('Venue', 'Date'):
                    # Do not merge these keys
                    continue
                # Dict's "setdefault" finds a key/value, and if it is missing
                # creates a new one with the second parameter as value
                new_record.setdefault(key, {}).update(value)

        result[ckey] = new_record

    # Add values from list_Incoming that were not matched in list_Outgoing:
    for key, value in dict1.items():
        if key not in result:
            result[key] = deepcopy(value)

    return list(result.values())

res = merge_lists(list_Incoming, list_Outgoing)
print(res)
[{'10': {'OutgoingCount': 5, 'IncomingCount': 45}, 
  'Date': '20190101', 
  'Venue': 'Beach', 
  '08': {'IncomingCount': 93}, 
  '07': {'OutgoingCount': 30}
 },

 {'10': {'OutgoingCount': 15}, 
   'Date': '20190103', 
   'Venue': 'Hotel', 
   '07': {'OutgoingCount': 5}
 }, 

 {'10': {'IncomingCount': 3}, 
  'Date': '20190101', 
  'Venue': 'Hotel', 
  '08': {'IncomingCount': 15}
 }]
