I have the following pyspark DataFrame:
age  state  name   income
21   DC     john   30-50K
NaN  VA     gerry  20-30K
I am trying to achieve the equivalent of Pandas' df.isnull().sum(), whose output would be:
age 1
state 0
name 0
income 0
At first I tried something like this:
null_counter = [df[c].isNotNull().count() for c in df.columns]
but this produces the following error:
TypeError: Column is not iterable
Similarly, this is how I currently iterate over the columns to get the minimum value:
from pyspark.sql import functions as fn

class BaseAnalyzer:
    def __init__(self, report, struct):
        self.report = report
        self._struct = struct
        self.name = struct.name
        self.data_type = struct.dataType
        self.min = None
        self.max = None

    def __repr__(self):
        return '<Column: %s>' % self.name

class BaseReport:
    def __init__(self, df):
        self.df = df
        self.columns_list = df.columns
        self.columns = {f.name: BaseAnalyzer(self, f) for f in df.schema.fields}

    def calculate_stats(self):
        find_min = self.df.select([fn.min(self.df[c]).alias(c) for c in self.df.columns]).collect()
        min_row = find_min[0]
        for column, min_value in min_row.asDict().items():
            self[column].min = min_value

    def __getitem__(self, name):
        return self.columns[name]

    def __repr__(self):
        return '<Report>'

report = BaseReport(df)
calc = report.calculate_stats()

for column in report.columns.values():
    if hasattr(column, 'min'):
        print("{}: {}".format(column, column.min))
which lets me "iterate over the columns":
<Column: age>: 1
<Column: name>: Alan
<Column: state>: ALASKA
<Column: income>: 0-1k
I feel this approach has become overly complicated. How do I correctly iterate over all columns to produce various summary statistics (min, max, null count, non-null count, and so on)? Coming from Pandas, the distinction between pyspark.sql.Row and pyspark.sql.Column seems odd.
Comments (too_many_questions):
- I get a TypeError: Can't convert 'method' object to str implicitly error.
- ['individual_id', 'first_name', 'last_name', 'house_number', 'street_name', 'city', 'state', 'zip', 'county_name', 'age', 'gender', 'birthdate', 'null_col', 'ind_politicalparty', 'ind_vendorethnicity', 'dma', 'cd', 'hh_income']
- Is the str type more or less what is causing the problem? After editing, the error became TypeError: Can't convert 'int' object to str implicitly. Thanks for your help!
- Column.cast.