How do I use ExecuteScript (with Python as the script engine) to do a simple addition exercise? [New user trying to learn NiFi]


I am a relatively new NiFi user and am not sure how to do the following correctly. I want to use the ExecuteScript processor (script engine: Python) to do the following (please use Python only):

1) Take a CSV file that contains the following information (the first line is the header):

first,second,third
1,4,9
7,5,2
3,8,7

2) I want to find the sum of each row and produce a final file with a modified header. The final file should look like this:

first,second,third,total
1,4,9,14
7,5,2,14
3,8,7,18

The Python script I wrote is as follows:

def summation(first,second,third):
    numbers = first + second + third
    return numbers
flowFile = session.get()
if (flowFile != None):
    flowFile = session.write(flowFile, summation())

But it does not work and I am not sure how to fix it. Can anyone suggest a way to solve this problem?
NiFi flow diagram: [image: NiFi flow]
Thanks
1 Answer

Your script is not doing what you expect it to do. There are a couple of ways to approach this problem:
  1. Operate on the entire flowfile at once, with a script that iterates over the whole CSV content
  2. Treat the lines of the CSV content as "records" and operate on each record with a script that handles a single line
I will provide changes to your script that operate on the entire flowfile content at once. You can read more about the Record* processors here, here, and here.
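For reference, the per-record variant (option 2) boils down to a function that receives one parsed row and returns it with the total appended. This is a hypothetical sketch in plain Python to show the shape of that approach (`transform_record` is an illustrative name, not a NiFi API):

```python
def transform_record(row):
    """Hypothetical per-record handler: takes one parsed CSV row
    (a list of ints) and returns it with the row total appended."""
    return row + [sum(row)]

transform_record([1, 4, 9])  # [1, 4, 9, 14]
```

In NiFi itself, this style of row-at-a-time processing is what the record-oriented processors are designed for.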
Here is a script that does what you are looking for. Note the differences to see where I changed things (this script could certainly be more efficient and concise; it is verbose in order to demonstrate what is happening, and I am not a Python expert).
from java.io import BufferedReader, InputStreamReader
from org.apache.nifi.processor.io import StreamCallback

# This PyStreamCallback class is what the processor will use to ingest and output the flowfile content
class PyStreamCallback(StreamCallback):
    def __init__(self):
        pass

    def process(self, inputStream, outputStream):
        # Get the provided inputStream into a format where you can read lines
        reader = BufferedReader(InputStreamReader(inputStream))
        # Set a marker for the first line to be the header
        isHeader = True
        try:
            # A holding variable for the lines
            lines = []
            # Loop indefinitely
            while True:
                # Get the next line
                line = reader.readLine()
                # If there is no more content, break out of the loop
                if line is None:
                    break
                # If this is the first line, add the new column
                if isHeader:
                    # Write the header line with the new column appended
                    lines.append(line + ",total")
                    # Set the header flag to false now that it has been processed
                    isHeader = False
                else:
                    # Split the line (a string) into individual elements by the ',' delimiter
                    elements = self.extract_elements(line)
                    # Get the sum (this method is unnecessary but shows where your "summation" method would go)
                    total = self.summation(elements)
                    # Append the original line with the total as a new column
                    lines.append(",".join([line, str(total)]))

            # Now out of the loop, write the output to the outputStream
            output = "\n".join([str(l) for l in lines])
            outputStream.write(bytearray(output.encode('utf-8')))
        finally:
            # Always close the reader, even if an error occurred
            if reader is not None:
                reader.close()

    def extract_elements(self, line):
        # This splits the line on the ',' delimiter, converts each element to an integer, and puts them in a list
        return [int(x) for x in line.split(',')]

    # This method replaces your "summation" method and can accept any number of inputs, not just 3
    def summation(self, values):
        # This returns the sum of all items in the list
        return sum(values)


flowFile = session.get()
if flowFile is not None:
    try:
        flowFile = session.write(flowFile, PyStreamCallback())
        session.transfer(flowFile, REL_SUCCESS)
    except Exception as e:
        # On any failure, log the error and route the flowfile to the failure relationship
        log.warn("Exception while processing flowfile: " + str(e))
        session.transfer(flowFile, REL_FAILURE)
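If you want to test the transformation logic outside NiFi before wiring it into ExecuteScript, the same operation can be sketched in plain Python with the standard `csv` module (`add_total_column` is just an illustrative name, not part of any NiFi API):

```python
import csv
import io

def add_total_column(csv_text):
    """Read CSV text with a header row and append a 'total' column
    holding the integer sum of each data row."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    out = [rows[0] + ["total"]]
    for row in rows[1:]:
        out.append(row + [str(sum(int(v) for v in row))])
    return "\n".join(",".join(r) for r in out)

print(add_total_column("first,second,third\n1,4,9\n7,5,2\n3,8,7"))
# first,second,third,total
# 1,4,9,14
# 7,5,2,14
# 3,8,7,18
```

Once the logic is verified locally, porting it into the StreamCallback above is mostly a matter of swapping the string I/O for the processor's input and output streams.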

The results of my flow (using the input provided above via a GenerateFlowFile processor):

2018-07-20 13:54:06,772 INFO [Timer-Driven Process Thread-5] o.a.n.processors.standard.LogAttribute LogAttribute[id=b87f0c01-0164-1000-920e-799647cb9b48] logging for flow file StandardFlowFileRecord[uuid=de888571-2947-4ae1-b646-09e61c85538b,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1532106928567-1, container=default, section=1], offset=2499, length=51],offset=0,name=470063203212609,size=51]
--------------------------------------------------
Standard FlowFile Attributes
Key: 'entryDate'
    Value: 'Fri Jul 20 13:54:06 EDT 2018'
Key: 'lineageStartDate'
    Value: 'Fri Jul 20 13:54:06 EDT 2018'
Key: 'fileSize'
    Value: '51'
FlowFile Attribute Map Content
Key: 'filename'
    Value: '470063203212609'
Key: 'path'
    Value: './'
Key: 'uuid'
    Value: 'de888571-2947-4ae1-b646-09e61c85538b'
--------------------------------------------------
first,second,third,total
1,4,9,14
7,5,2,14
3,8,7,18

Content provided by Stack Overflow; see the original English post.