Pandas creates new columns when appending dataframes


I am trying to compile multiple text files into a single dataframe. However, when I join the dataframes with pandas' concat function, the resulting dataframe has extra columns. In the code example below, dataframe 3 has 12 columns instead of 8. Why?

**Input:**
import pandas as pd

df1 = pd.read_csv('2011-12-01-data.txt', sep=None, engine='python')
df2 = pd.read_csv('2011-12-02-data.txt', sep=None, engine='python')
df3 = pd.concat([df1, df2])

print(df1.shape)
print(df2.shape)
print(df3.shape)

**Output:** 
df1 shape = (26986, 8)
df2 shape = (27266, 8)
df3 shape = (54252, 12)

I am working with flight data; the relevant data can be downloaded from http://lunadong.com/datasets/clean_flight.zip.
2 Answers


I think you need the header=None parameter so that the default column names 0-7 are used, because the files have no header row. Also, since the separator is a tab, you can specify it explicitly.

df1 = pd.read_csv('2011-12-01-data.txt', sep='\t', engine='python', header=None)
df2 = pd.read_csv('2011-12-02-data.txt', sep='\t', engine='python', header=None)
df3 = pd.concat([df1, df2])

print(df1.shape)
print(df2.shape)
print(df3.shape)
(26987, 8)
(27267, 8)
(54254, 8)

print(df1.columns)
Int64Index([0, 1, 2, 3, 4, 5, 6, 7], dtype='int64')
print(df2.columns)
Int64Index([0, 1, 2, 3, 4, 5, 6, 7], dtype='int64')
print(df3.columns)
Int64Index([0, 1, 2, 3, 4, 5, 6, 7], dtype='int64')

Another solution is to pass the names parameter with the new column names:

names = ['col1','col2','col3','col4','col5','col6','col7','col8']
df1 = pd.read_csv('2011-12-01-data.txt', sep='\t', engine='python', names=names)
df2 = pd.read_csv('2011-12-02-data.txt', sep='\t', engine='python', names=names)
df3 = pd.concat([df1, df2])

print(df1.shape)
print(df2.shape)
print(df3.shape)
(26987, 8)
(27267, 8)
(54254, 8)

print(df1.columns)
print(df2.columns)
print(df3.columns)
Index(['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8'], dtype='object')
Index(['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8'], dtype='object')
Index(['col1', 'col2', 'col3', 'col4', 'col5', 'col6', 'col7', 'col8'], dtype='object')

You got 12 columns because the first row of each file was read as the header, and some of those first-row values happen to be identical in both files, so they became shared column names. When the data is combined with concat, only those shared columns are aligned; where the names differ, the columns are not aligned and you get NaN. (A quick way to spot this mismatch before concatenating is sketched after the output below.)

print(df1.columns)
Index(['aa', 'AA-1007-TPA-MIA', '12/01/2011 01:55 PM', '12/01/2011 02:07 PM',
       'F78', '12/01/2011 03:00 PM', '12/01/2011 02:57 PM', 'D5'],
      dtype='object')

print(df2.columns)
Index(['aa', 'AA-1007-TPA-MIA', '12/02/2011 01:55 PM', '12/02/2011 02:13 PM',
       'F78', '12/02/2011 03:00 PM', '12/02/2011 03:05 PM', 'D5'],
      dtype='object')

print(df3.columns)
Index(['12/01/2011 01:55 PM', '12/01/2011 02:07 PM', '12/01/2011 02:57 PM',
       '12/01/2011 03:00 PM', '12/02/2011 01:55 PM', '12/02/2011 02:13 PM',
       '12/02/2011 03:00 PM', '12/02/2011 03:05 PM', 'AA-1007-TPA-MIA', 'D5',
       'F78', 'aa'],
      dtype='object')

print(df3.head())
  12/01/2011 01:55 PM       12/01/2011 02:07 PM       12/01/2011 02:57 PM  \
0                 NaN      12/1/2011 2:07PM EST      12/1/2011 2:51PM EST   
1                 NaN  12/1/11 2:06 PM (-05:00)  12/1/11 2:51 PM (-05:00)   
2                 NaN  12/1/11 2:06 PM (-05:00)  12/1/11 2:51 PM (-05:00)   
3                 NaN  12/1/11 2:06 PM (-05:00)  12/1/11 2:51 PM (-05:00)   
4                 NaN  12/1/11 2:06 PM (-05:00)  12/1/11 2:51 PM (-05:00)   

  12/01/2011 03:00 PM 12/02/2011 01:55 PM 12/02/2011 02:13 PM  \
0                 NaN                 NaN                 NaN   
1                 NaN                 NaN                 NaN   
2                 NaN                 NaN                 NaN   
3                 NaN                 NaN                 NaN   
4                 NaN                 NaN                 NaN   

  12/02/2011 03:00 PM 12/02/2011 03:05 PM  AA-1007-TPA-MIA   D5  F78  \
0                 NaN                 NaN  AA-1007-TPA-MIA  NaN  NaN   
1                 NaN                 NaN  AA-1007-TPA-MIA  NaN  NaN   
2                 NaN                 NaN  AA-1007-TPA-MIA  NaN  NaN   
3                 NaN                 NaN  AA-1007-TPA-MIA  NaN  NaN   
4                 NaN                 NaN  AA-1007-TPA-MIA  NaN  NaN   

                aa  
0   flightexplorer  
1  airtravelcenter  
2       myrateplan  
3      helloflight  
4        flytecomm 

print(df3.tail())
      12/01/2011 01:55 PM 12/01/2011 02:07 PM 12/01/2011 02:57 PM  \
27261                 NaN                 NaN                 NaN   
27262                 NaN                 NaN                 NaN   
27263                 NaN                 NaN                 NaN   
27264                 NaN                 NaN                 NaN   
27265                 NaN                 NaN                 NaN   

      12/01/2011 03:00 PM     12/02/2011 01:55 PM     12/02/2011 02:13 PM  \
27261                 NaN        Dec 02 - 10:20pm        Dec 02 - 10:23pm   
27262                 NaN             10:20pDec 2             10:23pDec 2   
27263                 NaN     2011-12-02 10:20 PM                     NaN   
27264                 NaN     2011-12-02 10:20 pm                     NaN   
27265                 NaN  2011-12-02 10:20PM CST  2011-12-02 10:31PM CST   

          12/02/2011 03:00 PM     12/02/2011 03:05 PM  AA-1007-TPA-MIA    D5  \
27261        Dec 02 - 11:59pm       Dec 02 - 11:51pm*  AA-2059-DFW-SLC    A3   
27262             11:43pDec 2                     NaN  AA-2059-DFW-SLC    A3   
27263     2011-12-02 11:59 PM                     NaN  AA-2059-DFW-SLC   NaN   
27264                     NaN                     NaN  AA-2059-DFW-SLC   NaN   
27265  2011-12-02 11:35PM MST  2011-12-02 11:43PM MST  AA-2059-DFW-SLC   A3    

         F78           aa  
27261  C20/C  travelocity  
27262    C20       orbitz  
27263    NaN      weather  
27264    C20          dfw  
27265   C20    flightwise 
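A minimal sketch (not part of the original answer) of how such a mismatch can be detected before concatenating, using the file names from the question:

import pandas as pd

df1 = pd.read_csv('2011-12-01-data.txt', sep='\t', engine='python')
df2 = pd.read_csv('2011-12-02-data.txt', sep='\t', engine='python')

# If the labels differ, concat will build the union of columns and fill NaN
print(df1.columns.equals(df2.columns))               # False -> concat will add columns
print(df1.columns.symmetric_difference(df2.columns)) # labels present in only one frame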


jezrael's answer solves the problem, but let me try to explain why pandas added new columns to your concatenated dataframe and what went wrong.

pandas reads the header incorrectly

Because you did not set header=None, pandas read the first line of each file as the header and used it as the column names by default. With your code, the two dataframes therefore ended up with these two sets of columns (a small sketch after the two lists below illustrates the difference):

df1: ['aa', 'AA-1007-TPA-MIA', '12/01/2011 01:55 PM', '12/01/2011 02:07 PM', 'F78', '12/01/2011 03:00 PM', '12/01/2011 02:57 PM', 'D5']

df2: ['aa', 'AA-1007-TPA-MIA', '12/02/2011 01:55 PM', '12/02/2011 02:13 PM', 'F78', '12/02/2011 03:00 PM', '12/02/2011 03:05 PM', 'D5']
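A minimal sketch of this behaviour on made-up tab-separated data (the rows here are hypothetical, not taken from the flight files):

from io import StringIO
import pandas as pd

raw = "aa\tAA-1007-TPA-MIA\tF78\nbb\tAA-2059-DFW-SLC\tC20\n"

inferred = pd.read_csv(StringIO(raw), sep='\t')               # first row becomes the header
no_header = pd.read_csv(StringIO(raw), sep='\t', header=None) # default integer column names

print(inferred.columns.tolist())        # ['aa', 'AA-1007-TPA-MIA', 'F78']
print(no_header.columns.tolist())       # [0, 1, 2]
print(inferred.shape, no_header.shape)  # (1, 3) (2, 3) -- one data row was consumed as the header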

Appending the non-shared columns

Finally, when you concatenate the two dataframes, every column that is not common to both df1 and df2 is appended as a separate column. 'aa', 'AA-1007-TPA-MIA', 'F78', and 'D5' are the only column names shared by df1 and df2; everything else is appended to the column list.
That gives 4 (shared by df1 & df2) + 4 (only in df1) + 4 (only in df2) = 12 columns.
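The same arithmetic can be reproduced with tiny toy frames (the column names below are made up, not from the flight data):

import pandas as pd

df1 = pd.DataFrame([[1, 2, 3, 4]], columns=['aa', 'D5', 'dep_df1', 'arr_df1'])
df2 = pd.DataFrame([[5, 6, 7, 8]], columns=['aa', 'D5', 'dep_df2', 'arr_df2'])

# concat aligns on column names: shared names line up, the rest get NaN
df3 = pd.concat([df1, df2], sort=False)
print(df3.shape)             # (2, 6) -> 2 shared + 2 only in df1 + 2 only in df2
print(df3.columns.tolist())  # ['aa', 'D5', 'dep_df1', 'arr_df1', 'dep_df2', 'arr_df2']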
