Suppose the input to be processed is test.xml:
<report>
  <report-name name="ALL_TIME_KEYWORDS_PERFORMANCE_REPORT"/>
  <date-range date="All Time"/>
  <table>
    <columns>
      <column name="campaignID" display="Campaign ID"/>
      <column name="adGroupID" display="Ad group ID"/>
    </columns>
    <row campaignID="79057390" adGroupID="3451305670"/>
    <row campaignID="79057390" adGroupID="3451305670"/>
  </table>
</report>
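For reference, every field of interest in this report is stored as an attribute of a <row/> element, so the two values can be read straight off each element. A quick sanity check with ElementTree on the sample above (the streaming mapper below works line by line instead of loading the whole document):

```python
import xml.etree.ElementTree as xml

# the sample report from above, inlined for a quick check
TEST_XML = """<report>
<report-name name="ALL_TIME_KEYWORDS_PERFORMANCE_REPORT"/>
<date-range date="All Time"/>
<table>
<columns>
<column name="campaignID" display="Campaign ID"/>
<column name="adGroupID" display="Ad group ID"/>
</columns>
<row campaignID="79057390" adGroupID="3451305670"/>
<row campaignID="79057390" adGroupID="3451305670"/>
</table>
</report>"""

root = xml.fromstring(TEST_XML)
# each field of interest is an attribute of a <row/> element
rows = [(r.get("campaignID"), r.get("adGroupID")) for r in root.iter("row")]
print(rows)  # [('79057390', '3451305670'), ('79057390', '3451305670')]
```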
mapper.py:
import sys
import xml.etree.ElementTree as xml

if __name__ == '__main__':
    for line in sys.stdin:
        line = line.strip()
        if line.find("<row") != -1:
            # each <row .../> element is complete on a single line,
            # so it can be parsed on its own
            row = xml.fromstring(line)
            campaignID = row.get("campaignID")
            adGroupID = row.get("adGroupID")
            print '%s\t%s' % (campaignID, adGroupID)
reducer.py:
import sys

if __name__ == '__main__':
    for line in sys.stdin:
        print line.strip()
I ran Hadoop with the following command:
bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar \
    -file /path/to/mapper.py -mapper /path/to/mapper.py \
    -file /path/to/reducer.py -reducer /path/to/reducer.py \
    -input /path/to/input_file/test.xml \
    -output /path/to/output_folder/to/store/file
When I run the command above, Hadoop correctly creates the output file in the format we specified in reducer.py and stores the required data in it.
Now, instead of storing the output in the text file that Hadoop creates by default, I want to save the data into a MySQL database. So I wrote some Python code in reducer.py that writes the data directly to MySQL, and tried to run the command above with the output path removed:
bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.4.jar \
    -file /path/to/mapper.py -mapper /path/to/mapper.py \
    -file /path/to/reducer.py -reducer /path/to/reducer.py \
    -input /path/to/input_file/test.xml
I got an error like the following:
12/11/08 15:20:49 ERROR streaming.StreamJob: Missing required option: output
Usage: $HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/hadoop-streaming.jar [options]
Options:
-input <path> DFS input file(s) for the Map step
-output <path> DFS output directory for the Reduce step
-mapper <cmd|JavaClassName> The streaming command to run
-combiner <cmd|JavaClassName> The streaming command to run
-reducer <cmd|JavaClassName> The streaming command to run
-file <file> File/dir to be shipped in the Job jar file
-inputformat TextInputFormat(default)|SequenceFileAsTextInputFormat|JavaClassName Optional.
-outputformat TextOutputFormat(default)|JavaClassName Optional.
.........................
.........................
My questions:
- How do I save the data into a database after processing the file?
- In which file (mapper.py or reducer.py) should the code that writes the data to the database go?
- What command should I use to run Hadoop so that the data is saved to the database? When I remove the output folder path from the hadoop command, it reports the error above.
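On the second question, the direct-write code I tried lives in reducer.py; a minimal sketch of that pattern follows. To keep the example self-contained it uses the standard-library sqlite3 module as a stand-in for MySQL; with MySQLdb the connect() call and the %s parameter style would differ, but the insert loop is the same. The table and column names here are taken from the report and are an assumption about the real schema.

```python
import sqlite3  # stand-in for MySQLdb so the sketch runs anywhere

def insert_rows(lines, conn):
    # each mapper output line is "campaignID<TAB>adGroupID"
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS PerformaceReport "
                "(campaignID TEXT, adGroupID TEXT)")
    for line in lines:
        campaignID, adGroupID = line.strip().split('\t')
        # with MySQLdb the placeholder would be %s instead of ?
        cur.execute("INSERT INTO PerformaceReport VALUES (?, ?)",
                    (campaignID, adGroupID))
    conn.commit()

# in the real reducer the loop would read sys.stdin; here, two sample lines
sample = ["79057390\t3451305670", "79057390\t3451305670"]
conn = sqlite3.connect(":memory:")
insert_rows(sample, conn)
count = conn.execute("SELECT COUNT(*) FROM PerformaceReport").fetchone()[0]
print(count)  # 2
```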
Can anyone please help with the questions above?
EDIT:
What has been done so far:
1. Created the mapper and reducer as above, which read the XML file and, via the hadoop command, produce a text file in a folder. For example, the text file (the result of processing the XML file with the hadoop command) is at:
/home/local/user/Hadoop/xml_processing/xml_output/part-00000
The XML file here is 1.3 GB, and the text file produced by the Hadoop processing is 345 MB.
2. Now all I want to do is read the text file at the path above and save the data into a MySQL database as fast as possible.
I have tried this with plain Python, but it takes around 350 seconds to process the text file and save it to the MySQL database.
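If the bottleneck is issuing one INSERT per line from Python, one option worth trying (an assumption on my part, not something benchmarked on this data) is MySQL's LOAD DATA LOCAL INFILE, which loads the whole tab-separated part-00000 file in a single statement. A sketch that only builds the statement; executing it would need a MySQLdb connection and a server configured to allow LOCAL INFILE:

```python
def load_data_sql(path, table):
    # one server-side bulk load instead of one INSERT per line
    return ("LOAD DATA LOCAL INFILE '%s' INTO TABLE %s "
            "FIELDS TERMINATED BY '\\t' LINES TERMINATED BY '\\n'"
            % (path, table))

sql = load_data_sql(
    "/home/local/user/Hadoop/xml_processing/xml_output/part-00000",
    "PerformaceReport")
print(sql)
# executing it (not run here) would look like:
#   conn = MySQLdb.connect(host="localhost", user="root", db="Xml_Data")
#   conn.cursor().execute(sql)
#   conn.commit()
```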
Following Nichole's suggestion, I have now downloaded Sqoop and extracted it to:
/home/local/user/sqoop-1.4.2.bin__hadoop-0.20
Then I went into the bin folder, ran ./sqoop, and got the error below:
sh-4.2$ ./sqoop
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: $HADOOP_HOME is deprecated.
Try 'sqoop help' for usage.
I also tried the following:
./sqoop export --connect jdbc:mysql://localhost/Xml_Data --username root --table PerformaceReport --export-dir /home/local/user/Hadoop/xml_processing/xml_output/part-00000 --input-fields-terminated-by '\t'
Result:
Warning: /usr/lib/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: $HADOOP_HOME is deprecated.
12/11/27 11:54:57 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/11/27 11:54:57 INFO tool.CodeGenTool: Beginning code generation
12/11/27 11:54:57 ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.RuntimeException: Could not load db driver class: com.mysql.jdbc.Driver
java.lang.RuntimeException: Could not load db driver class: com.mysql.jdbc.Driver
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:636)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:525)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:548)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:191)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:175)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:262)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1235)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1060)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:82)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:64)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:97)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
Does the sqoop command above handle reading the text file and saving it to the database? That is what we need: read the text file, process it, and insert the data into the database.