Pandas: how to merge two dataframes on offset dates?


I am looking to merge two dataframes, df1 and df2, based on whether rows of df2 fall within a date range 3-6 months after rows of df1. For example:

df1 (quarterly data for each company):

    company DATADATE
0   012345  2005-06-30
1   012345  2005-09-30
2   012345  2005-12-31
3   012345  2006-03-31
4   123456  2005-01-31
5   123456  2005-03-31
6   123456  2005-06-30
7   123456  2005-09-30

df2 (for each company, I have event dates that could occur on any day):

    company EventDate
0   012345  2005-07-28 <-- won't get merged b/c not within date range
1   012345  2005-10-12
2   123456  2005-05-15
3   123456  2005-05-17
4   123456  2005-05-25
5   123456  2005-05-30
6   123456  2005-08-08
7   123456  2005-11-29
8   abcxyz  2005-12-31 <-- won't be merged because company not in df1

Ideal merged dataframe -- rows where the EventDates in df2 fall 3-6 months (i.e. one quarter) after the DATADATE in df1 get merged:

    company DATADATE    EventDate
0   012345  2005-06-30  2005-10-12
1   012345  2005-09-30  NaN   <-- nan because no EventDates fell in this range
2   012345  2005-12-31  NaN
3   012345  2006-03-31  NaN
4   123456  2005-01-31  2005-05-15
5   123456  2005-01-31  2005-05-17
6   123456  2005-01-31  2005-05-25
7   123456  2005-01-31  2005-05-30
8   123456  2005-03-31  2005-08-08
9   123456  2005-06-30  2005-11-29
10  123456  2005-09-30  NaN

I am trying to apply this related thread [merging pandas dataframes based on irregular time intervals] by adding start_time and end_time columns to df1 denoting 3 months (start_time) to 6 months (end_time) after DATADATE, and then using np.searchsorted(), but this case is a bit trickier because I'd like to merge on a per-company basis.
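As a sketch of that plan (hypothetical values; one company's slice of df2 would come from a groupby or boolean mask), `np.searchsorted` on a single company's sorted event dates looks like:

```python
import numpy as np

# One company's event dates, already sorted (hypothetical values).
events = np.array(["2005-05-15", "2005-08-08", "2005-11-29"], dtype="datetime64[D]")

# Window for a single df1 row: 3 months (start) to 6 months (end) after DATADATE.
start = np.datetime64("2005-04-30")
end = np.datetime64("2005-07-31")

lo = np.searchsorted(events, start, side="left")   # first event >= start
hi = np.searchsorted(events, end, side="right")    # first event > end
in_window = events[lo:hi]
# in_window -> ['2005-05-15']
```

The remaining difficulty, as noted above, is repeating this per company rather than over the whole frame at once.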
2 Answers

This is actually one of those rare questions where the algorithmic complexity differs significantly between solutions. You might want to consider this over the niftiness of a one-liner.
From an algorithmic point of view:
  • sort the larger dataframe by date
  • for each date in the smaller dataframe, use the bisect module to find the relevant rows in the larger dataframe
For dataframes of lengths m and n, respectively (with m < n), the complexity should be O(m log(n)).
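A minimal illustration of those two steps with the `bisect` module (ISO date strings sort chronologically, so plain strings suffice for the sketch):

```python
import bisect

# Step 1: the larger frame's dates, sorted (ISO strings sort chronologically).
big_dates = ["2005-01-31", "2005-03-31", "2005-06-30", "2005-09-30"]

# Step 2: for one window [start, end] from the smaller frame,
# two bisections find the matching slice in O(log n).
start, end = "2005-02-15", "2005-07-01"
lo = bisect.bisect_left(big_dates, start)    # first date >= start
hi = bisect.bisect_right(big_dates, end)     # first date > end
matches = big_dates[lo:hi]
# matches -> ['2005-03-31', '2005-06-30']
```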

I implemented it following the steps you laid out and posted my code above. It does work, although it takes a long time on my large dataset. I was originally hoping to use pandas groupby to group df1 by ['company','DATADATE'], then groupby.apply() a function that would grab the rows of df2 with EventDates falling between each df1 row's start_time and end_time (i.e. 3-6 months after DATADATE). - Lyndon
That's interesting. I'll be happy to go over your answer when I have the time. - Ami Tavory


Here is the solution I implemented based on the algorithm Ami Tavory proposed:

import bisect

import pandas as pd

#find the date offsets to define date ranges
start_time = df1.DATADATE + pd.offsets.MonthEnd(3)
end_time = df1.DATADATE + pd.offsets.MonthEnd(6)

#make these extra columns
df1['start_time'] = start_time
df1['end_time'] = end_time

#find unique company names in both dfs
unique_companies_df1 = df1.company.unique()
unique_companies_df2 = df2.company.unique()

#sort df1 by company and DATADATE, so we can iterate in a sensible order
sorted_df1 = df1.sort_values(['company','DATADATE']).reset_index(drop=True)

#define empty df to append data
df3 = pd.DataFrame()

#iterate through each company in df1, find 
#that company in sorted df2, then for each 
#DATADATE quarter of df1, bisect df2 in the 
#correct locations (i.e. start_time to end_time)

for cmpny in unique_companies_df1:

    if cmpny in unique_companies_df2: #if this company is in both dfs, take the relevant rows that are associated with this company 
        selected_df2 = df2[df2.company==cmpny].sort_values('EventDate').reset_index(drop=True)
        selected_df1 = sorted_df1[sorted_df1.company==cmpny].reset_index(drop=True)

        for quarter in range(len(selected_df1.DATADATE)): #iterate through each DATADATE quarter in df1
            lo = bisect.bisect_right(selected_df2.EventDate, selected_df1.start_time[quarter]) #bisect_right so we do not include dates before our date range
            hi = bisect.bisect_left(selected_df2.EventDate, selected_df1.end_time[quarter]) #bisect_left so we do not include dates after our date range
            df_right = selected_df2.iloc[lo:hi].copy() #iloc is end-exclusive: grab exactly the rows with EventDates inside our date range
            df_left = pd.DataFrame(selected_df1.loc[quarter]).transpose()

            if len(df_right)==0: # if no EventDates fall within range, create a row with cmpny in the 'company' column, and a NaT in the EventDate column to merge
                df_right.loc[0,'company']=cmpny

            temp = pd.merge(df_left,df_right,how='inner',on='company') #merge the df1 company quarter with all df2's rows that fell within date range
            df3=df3.append(temp)
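For moderate data sizes, a simpler alternative (not the code above, and O(m·n) per company rather than O(m log n)) is a plain merge on company followed by a window filter. A sketch on a subset of the question's data:

```python
import pandas as pd

# Small versions of the frames from the question (subset of rows).
df1 = pd.DataFrame({
    "company": ["012345", "012345", "123456"],
    "DATADATE": pd.to_datetime(["2005-06-30", "2005-09-30", "2005-01-31"]),
})
df2 = pd.DataFrame({
    "company": ["012345", "123456", "abcxyz"],
    "EventDate": pd.to_datetime(["2005-10-12", "2005-05-15", "2005-12-31"]),
})

# Window boundaries: 3 to 6 month-ends after DATADATE.
df1["start_time"] = df1["DATADATE"] + pd.offsets.MonthEnd(3)
df1["end_time"] = df1["DATADATE"] + pd.offsets.MonthEnd(6)

# Cartesian merge per company, then keep only events inside each window.
merged = df1.merge(df2, on="company")
matched = merged[merged["EventDate"].between(merged["start_time"], merged["end_time"])]

# Left-merge back so quarters with no matching event keep a NaT EventDate row,
# and companies absent from df1 (like abcxyz) drop out.
result = df1.merge(
    matched[["company", "DATADATE", "EventDate"]],
    on=["company", "DATADATE"], how="left",
).drop(columns=["start_time", "end_time"])
```

The intermediate merge materializes every (quarter, event) pair within a company, so this trades memory for the bisection bookkeeping of the loop above.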
