
Compiling the 64-bit C/C++ Library of HDFS in Hadoop 2.4.0

[Date: 2014-07-22] Source: Linux社区  Author: Linux

The source code of the library lives under:

 

hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs

 

Here is a makefile that compiles these source files directly and archives the result as libhdfs.a. Note that make requires recipe lines to begin with a tab. The makefile:

 

CC             = gcc
DEFINES        = -DG_ARCH_X86_64

CFLAGS        += -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT $(DEFINES)
CXXFLAGS      += -pipe -O3 -D_REENTRANT $(DEFINES) -rdynamic

AR             = ar cqs
LFLAGS         = -rdynamic

OBJECTS        = exception.o expect.o hdfs.o jni_helper.o native_mini_dfs.o
TARGET         = libhdfs.a

# Commands; no need to change these
CHK_DIR_EXISTS = test -d
DEL_FILE       = rm -f

first: all

####### Implicit rules

.SUFFIXES: .o .c .cpp .cc .cxx .C

.cpp.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.cc.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.cxx.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.C.o:
	$(CXX) -c $(CXXFLAGS) $(INCPATH) -o "$@" "$<"

.c.o:
	$(CC) -c $(CFLAGS) $(INCPATH) -o "$@" "$<"

####### Build rules

all: $(TARGET)

$(TARGET): $(OBJECTS)
	$(AR) $(TARGET) $(OBJECTS)

clean:
	-$(DEL_FILE) $(OBJECTS) $(TARGET)

 

Save the file and run make. The build output:

 

gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "exception.o" "exception.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "expect.o" "expect.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "hdfs.o" "hdfs.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "jni_helper.o" "jni_helper.c"
gcc -c -fPIC -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -pipe -O3 -D_REENTRANT -DG_ARCH_X86_64  -o "native_mini_dfs.o" "native_mini_dfs.c"
ar cqs libhdfs.a exception.o expect.o hdfs.o jni_helper.o native_mini_dfs.o

 

Next, test whether the library actually works. Go into the following directory:

 

hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/test

 

which contains the test sources, and compile all of them. Here is another simple makefile for that (it assumes JAVA_HOME is set):

 

LIBS    = -L$(JAVA_HOME)/jre/lib/amd64/server/ -ljvm -L.. -lhdfs
INCPATH = -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/linux -I. -I..

all:
	gcc -o hdfs_ops test_libhdfs_ops.c $(INCPATH) $(LIBS)
	gcc -o hdfs_read test_libhdfs_read.c $(INCPATH) $(LIBS)
	gcc -o hdfs_write test_libhdfs_write.c $(INCPATH) $(LIBS)
	gcc -o hdfs_zerocopy test_libhdfs_zerocopy.c $(INCPATH) $(LIBS)

 

Run make again; the build output:

 

gcc -o hdfs_ops test_libhdfs_ops.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm  -L../ -lhdfs
gcc -o hdfs_read test_libhdfs_read.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm  -L../ -lhdfs
gcc -o hdfs_write test_libhdfs_write.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm  -L../ -lhdfs
gcc -o hdfs_zerocopy test_libhdfs_zerocopy.c -I/d0/data/lichao/software/java/jdk1.7.0_55/include -I/d0/data/lichao/software/java/jdk1.7.0_55/include/linux -I. -I.. -L/d0/data/lichao/software/java/jdk1.7.0_55/jre/lib/amd64/server/ -ljvm  -L../ -lhdfs
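Before the test binaries can run, the dynamic loader must be able to find libjvm.so, and the embedded JVM must be able to find the Hadoop jars. A typical environment setup might look like the following (the paths are assumptions; adjust them to your own JDK and Hadoop installation, and note that libhdfs starts the JVM through JNI, which does not expand classpath wildcards, so wildcard entries from `hadoop classpath` may need to be spelled out):

```shell
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH
export CLASSPATH=$(hadoop classpath)
```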

 

Now generate a small file containing the numbers 1 through 10 and load it into HDFS:

 

seq 1 10 > tmpfile
hadoop fs -mkdir /data
hadoop fs -put tmpfile /data
hadoop fs -cat /data/tmpfile
1
2
3
4
5
6
7
8
9
10

 

OK. Now run the generated hdfs_read program to exercise the 64-bit C interface of HDFS (the arguments are the file path, the file length in bytes, and the read buffer size):

 

./hdfs_read /data/tmpfile 21 32

 

The output:

 

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
1
2
3
4
5
6
7
8
9
10
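For reference, the core of such a reader is quite small. The following is a minimal sketch (not the actual test_libhdfs_read.c) of reading an HDFS file through the libhdfs C API; connecting with "default" picks up the cluster named in the Hadoop configuration files, and error handling is kept to a minimum. It needs a running HDFS cluster plus the environment setup above, so it is not runnable standalone:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include "hdfs.h"  /* libhdfs header from .../native/libhdfs */

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <path> <bufferSize>\n", argv[0]);
        return 1;
    }

    /* connect to the default filesystem from the Hadoop config */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) { fprintf(stderr, "hdfsConnect failed\n"); return 1; }

    hdfsFile file = hdfsOpenFile(fs, argv[1], O_RDONLY, 0, 0, 0);
    if (!file) { fprintf(stderr, "hdfsOpenFile failed\n"); return 1; }

    int bufSize = atoi(argv[2]);
    char *buf = malloc(bufSize);
    tSize n;
    /* read the file in bufSize chunks and echo it to stdout */
    while ((n = hdfsRead(fs, file, buf, bufSize)) > 0)
        fwrite(buf, 1, n, stdout);

    free(buf);
    hdfsCloseFile(fs, file);
    hdfsDisconnect(fs);
    return 0;
}
```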

 

