http://grepalex.com/2013/02/25/hadoop-libjars/
When working with MapReduce, one of the challenges encountered early on is determining how to make your third-party JARs available to the map and reduce tasks. One common approach is to create a fat JAR, which is a JAR that contains your classes as well as your third-party classes (see this Cloudera blog post for more details).
A more elegant solution is to take advantage of the libjars option in the hadoop jar command, also mentioned in the Cloudera post at a high level. Here I'll go into detail on the three steps required to make this work.
Add libjars to the options
It can be confusing to know exactly where to put libjars when running the hadoop jar command. The following example shows the correct position of this option:
$ export LIBJARS=/path/jar1,/path/jar2
$ hadoop jar my-example.jar com.example.MyTool -libjars ${LIBJARS} -mytoolopt value
It's worth noting in the above example that the JARs supplied as the value of the libjars option are comma-separated, and not separated by your O.S. path delimiter (which is how a Java classpath is delimited).
You may think that you're done, but often this step alone isn't enough - read on for more details!
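One thing that can silently go wrong is the position of the option itself: GenericOptionsParser only consumes generic options such as -libjars when they appear before your tool's own arguments. As a sketch, reusing the placeholder paths and option names from the example above:

```
# Won't work: -libjars comes after the tool's own options, so it is
# passed through to MyTool instead of being consumed by Hadoop:
$ hadoop jar my-example.jar com.example.MyTool -mytoolopt value -libjars ${LIBJARS}

# Works: the generic options immediately follow the class name:
$ hadoop jar my-example.jar com.example.MyTool -libjars ${LIBJARS} -mytoolopt value
```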
Make sure your code is using GenericOptionsParser
The Java class that's being supplied to the hadoop jar command should use the GenericOptionsParser class to parse the options supplied on the CLI. The easiest way to do that is demonstrated with the following code, which leverages the ToolRunner class to parse out the options:
public static void main(final String[] args) throws Exception {
    Configuration conf = new Configuration();
    int res = ToolRunner.run(conf, new com.example.MyTool(), args);
    System.exit(res);
}
It is crucial that the configuration object being passed into the ToolRunner.run method is the same one that you're using when setting up your job. To guarantee this, your class should use the getConf() method defined in Configurable (and implemented in Configured) to access the configuration:
public class SmallFilesMapReduce extends Configured implements Tool {
    public final int run(final String[] args) throws Exception {
        Job job = new Job(super.getConf());
        ...
        job.waitForCompletion(true);
        return ...;
    }
}
If you don't leverage the Configuration object supplied to the ToolRunner.run method in your MapReduce driver code, then your job won't be correctly configured: your third-party JARs won't be copied to the distributed cache or loaded in the remote task JVMs.
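To make the pitfall concrete, here's a sketch of a driver that looks reasonable but breaks libjars, next to the corrected version (the method bodies are abbreviated and illustrative):

```
// Broken: creates a fresh Configuration, discarding the tmpjars
// value that GenericOptionsParser set on the ToolRunner conf.
public int run(final String[] args) throws Exception {
    Job job = new Job(new Configuration());   // third-party JARs are lost
    ...
}

// Correct: reuses the Configuration supplied via ToolRunner.run.
public int run(final String[] args) throws Exception {
    Job job = new Job(getConf());             // tmpjars is preserved
    ...
}
```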
It's the ToolRunner.run method (which actually delegates the command parsing to GenericOptionsParser) that parses out the libjars argument and adds a value for the tmpjars property to the Configuration object. So a quick way to make sure this step is working is to look at the job file for your MapReduce job (there's a link when viewing the job details from the JobTracker), and make sure that the tmpjars configuration name exists with a value identical to the paths you specified in your command. You can also use the command line to search for the tmpjars configuration in HDFS:
$ hadoop fs -cat <JOB_OUTPUT_HDFS_DIRECTORY>/_logs/history/*.xml | grep tmpjars
Use HADOOP_CLASSPATH to make your third-party JARs available on the client side
So far the first two steps tackled what you need to do to make your third-party JARs available to the remote map and reduce task JVMs. But what hasn't been covered is making these same JARs available to the client JVM, which is the JVM created when you run the hadoop jar command.
For this to happen, you should set the HADOOP_CLASSPATH environment variable to contain the O.S. path-delimited list of third-party JARs. Let's extend the commands from the first step above with the addition of the HADOOP_CLASSPATH environment variable:
$ export LIBJARS=/path/jar1,/path/jar2
$ export HADOOP_CLASSPATH=/path/jar1:/path/jar2
$ hadoop jar my-example.jar com.example.MyTool -libjars ${LIBJARS} -mytoolopt value
Note that the value for HADOOP_CLASSPATH uses the Unix path delimiter of :, so modify accordingly for your platform. And if you don't like the copy-paste above, you can modify that line to substitute colons for the commas:
$ export HADOOP_CLASSPATH=`echo ${LIBJARS} | sed s/,/:/g`
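Equivalently, since this is a single-character substitution, tr does the same job; a minimal sketch with the placeholder paths from above:

```shell
# Build HADOOP_CLASSPATH from the comma-separated LIBJARS value by
# swapping each comma for the Unix path delimiter.
export LIBJARS=/path/jar1,/path/jar2
export HADOOP_CLASSPATH=$(echo "${LIBJARS}" | tr ',' ':')
echo "${HADOOP_CLASSPATH}"   # /path/jar1:/path/jar2
```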