
Integrating Hadoop 2.4.1 with HBase 0.96.2

2024-04-02 19:04:59 · 518 views · 独家记忆

Continuing from: http://onlyoulinux.blog.51cto.com/7941460/1554951

The previous post described how distributed Hadoop 2.4.1 combined with HBase 0.94.23 produced a flood of errors that I could not resolve. The same configuration with the newer HBase 0.96.2 came up cleanly, so this looks like a version-compatibility problem. The errors were as follows:

Mon Sep 22 19:56:03 CST 2014 Starting master on nn
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 5525
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 5525
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2014-09-22 19:56:04,681 INFO org.apache.hadoop.hbase.util.VersionInfo: HBase 0.94.23
2014-09-22 19:56:04,681 INFO org.apache.hadoop.hbase.util.VersionInfo: Subversion git://asf911.gq1.ygridcore.net/home/jenkins/jenkins-slave/workspace/HBase-0.94.23 -r f42302b28aceaab773b15f234aa8718fff7eea3c
2014-09-22 19:56:04,681 INFO org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Wed Aug 27 00:54:09 UTC 2014
2014-09-22 19:56:05,072 DEBUG org.apache.hadoop.hbase.master.HMaster: Set serverside HConnection retries=140
2014-09-22 19:56:05,852 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2014-09-22 19:56:05,853 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2014-09-22 19:56:05,854 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2014-09-22 19:56:05,855 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2014-09-22 19:56:05,856 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2014-09-22 19:56:05,857 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2014-09-22 19:56:05,858 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2014-09-22 19:56:05,858 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2014-09-22 19:56:05,859 INFO org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2014-09-22 19:56:05,863 INFO org.apache.hadoop.ipc.HBaseServer: Starting IPC Server listener on 60000
2014-09-22 19:56:05,889 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000
2014-09-22 19:56:06,510 INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier of this process is 3210@nn
2014-09-22 19:56:06,572 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2014-09-22 19:56:06,572 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=nn
2014-09-22 19:56:06,572 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
2014-09-22 19:56:06,572 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2014-09-22 19:56:06,572 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.7.0_67/jre
"logs/hbase-root-master-nn.log" 165L, 30411C                                                                      1,1          顶端
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
        at org.apache.hadoop.ipc.Client$Connection.setupIOStreams(Client.java:560)
        at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
        at org.apache.hadoop.ipc.Client.call(Client.java:1050)
        ... 18 more
2014-09-22 19:56:18,228 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
2014-09-22 19:56:18,228 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
2014-09-22 19:56:18,228 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
2014-09-22 19:56:18,231 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting
2014-09-22 19:56:18,231 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting
2014-09-22 19:56:18,231 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting
2014-09-22 19:56:18,232 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting
2014-09-22 19:56:18,232 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting
2014-09-22 19:56:18,232 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting
2014-09-22 19:56:18,233 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting
2014-09-22 19:56:18,233 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting
2014-09-22 19:56:18,233 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting
2014-09-22 19:56:18,234 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting
2014-09-22 19:56:18,234 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 0 on 60000: exiting
2014-09-22 19:56:18,234 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 1 on 60000: exiting
2014-09-22 19:56:18,234 INFO org.apache.hadoop.ipc.HBaseServer: REPL IPC Server handler 2 on 60000: exiting
2014-09-22 19:56:18,235 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000
2014-09-22 19:56:18,240 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2014-09-22 19:56:18,240 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2014-09-22 19:56:18,241 INFO org.apache.hadoop.hbase.master.HMaster: Stopping infoServer
2014-09-22 19:56:18,247 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
2014-09-22 19:56:18,278 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down
2014-09-22 19:56:18,278 INFO org.apache.zookeeper.ZooKeeper: Session: 0x1489d3757740000 closed
2014-09-22 19:56:18,278 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting
2014-09-22 19:56:18,278 ERROR org.apache.hadoop.hbase.master.HMasterCommandLine: Failed to start master
java.lang.RuntimeException: HMaster Aborted
        at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:160)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:104)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2129)
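One thing worth noting from the ulimit dump at the top of this log: open files is only 1024 and max user processes 5525, both below what the HBase documentation recommends for real workloads. A minimal sketch of the usual fix follows (the user name and values are illustrative; the real edit goes in /etc/security/limits.conf, but here the lines are written to a scratch file so the sketch is harmless to run):

```shell
# Illustrative limits.conf lines for the user running HBase (assumed
# root here, as in this article). Raising nofile avoids "Too many open
# files" errors under load; nproc covers region server thread counts.
cat > /tmp/limits-snippet.conf <<'EOF'
root  -  nofile  32768
root  -  nproc   32000
EOF
cat /tmp/limits-snippet.conf
```

After appending such lines to /etc/security/limits.conf, the user must log in again (and HBase be restarted) before the new limits take effect; the ulimit banner at the top of the master log is a quick way to confirm.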



2014-09-22 20:35:55,320 INFO org.mortbay.log: jetty-6.1.26
2014-09-22 20:35:57,557 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:60010
2014-09-22 20:35:57,645 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/nn,60000,1411389351537 from backup master directory
2014-09-22 20:35:58,013 WARN org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/backup-masters/nn,60000,1411389351537 already deleted, and this is not a retry
2014-09-22 20:35:58,013 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=nn,60000,1411389351537
2014-09-22 20:36:00,237 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
2014-09-22 20:36:01,239 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
2014-09-22 20:36:02,312 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
2014-09-22 20:36:03,314 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
2014-09-22 20:36:04,320 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
2014-09-22 20:36:05,321 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
2014-09-22 20:36:06,322 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
2014-09-22 20:36:07,323 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
2014-09-22 20:36:08,325 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
2014-09-22 20:36:09,389 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
2014-09-22 20:36:09,395 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
        at org.apache.hadoop.ipc.Client.call(Client.java:1075)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
        at com.sun.proxy.$Proxy10.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
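Both aborts above end in a ConnectException against localhost/127.0.0.1:9000: HBase resolved the NameNode to localhost and nothing was listening there. A useful sanity check is that the authority (host:port) in hbase.rootdir matches fs.defaultFS in Hadoop's core-site.xml, and that the NameNode is actually up on that address. Below is a sketch of a hypothetical helper (not part of HBase) that extracts the authority from an hdfs:// URI for such a comparison:

```shell
# Hypothetical helper: pull the host:port authority out of an hdfs:// URI.
# hbase.rootdir's authority must match the NameNode address configured in
# core-site.xml, and the NameNode process must be listening on it.
hdfs_authority() {
  local uri="${1#hdfs://}"   # drop the scheme
  printf '%s\n' "${uri%%/*}" # keep everything before the first /
}

hdfs_authority "hdfs://nn:9000/tmpdir"   # -> nn:9000
```

If the printed authority differs from what fs.defaultFS names, or `jps` on the NameNode host shows no NameNode process, the connection-refused retry loop above is the expected symptom.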


Below is my configuration:


> hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_67/
export HBASE_HOME=/usr/local/hbase-0.96.2-hadoop2
export HADOOP_HOME=/usr/local/hadoop-2.4.1/
export HBASE_MANAGES_ZK=false


> regionservers

[root@nn hbase-0.96.2-hadoop2]# cat conf/regionservers
nn
sn
dn1


> hbase-site.xml 

[root@nn hbase-0.96.2-hadoop2]# cat conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
   <property>
      <name>hbase.rootdir</name>
      <value>hdfs://nn:9000/tmpdir</value>
      <description>The directory shared by RegionServers.</description>
   </property>
   <property>
      <name>dfs.replication</name>
      <value>2</value>
      <description>The replication count for HLog and HFile storage. Should not be greater than HDFS datanode count.</description>
   </property>
   <property>
      <name>hbase.zookeeper.quorum</name>
      <value>nn,sn,dn1</value>
   </property>
   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/tmpdir/zookeeperdata/</value>
   </property>
   <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>2181</value>
      <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.
      </description>
   </property>
   <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
   </property>
   <property>
      <name>hbase.hregion.memstore.flush.size</name>
      <value>5242880</value>
   </property>
   <property>
      <name>hbase.master</name>
      <value>master:60000</value>
   </property>
</configuration> 
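Hadoop-style XML quietly tolerates the same property being defined twice (the later definition wins), so a duplicated `<property>` block is easy to overlook in a file like the one above. A throwaway check that flags repeated `<name>` entries (the sample file and its path are illustrative):

```shell
# Sketch: list all <name> entries in an hbase-site.xml and print only the
# duplicated ones. A throwaway sample file stands in for conf/hbase-site.xml;
# the repeated flush.size property mimics a typical copy-paste slip.
cat > /tmp/hbase-site-sample.xml <<'EOF'
<configuration>
  <property><name>hbase.rootdir</name><value>hdfs://nn:9000/tmpdir</value></property>
  <property><name>hbase.hregion.memstore.flush.size</name><value>5242880</value></property>
  <property><name>hbase.hregion.memstore.flush.size</name><value>5242880</value></property>
</configuration>
EOF
grep -o '<name>[^<]*</name>' /tmp/hbase-site-sample.xml | sort | uniq -d
# prints: <name>hbase.hregion.memstore.flush.size</name>
```

Running this against the real conf/hbase-site.xml before starting the cluster catches duplicates that HBase itself will never warn about.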


Start the cluster and verify:


[Screenshot: Hadoop 2.4.1 running with HBase 0.96.2]



A simple test in the HBase shell:


[root@sn test]# hbase shell
2014-09-26 16:22:26,178 INFO  [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.96.2-hadoop2, r1581096, Mon Mar 24 16:03:18 PDT 2014

hbase(main):001:0> list
TABLE                                                                                                                              
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase-0.96.2-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
0 row(s) in 6.7970 seconds

=> []
hbase(main):002:0> create
create             create_namespace
hbase(main):002:0> create 'test','stu_id','stu_xx'
0 row(s) in 18.0590 seconds

=> Hbase::Table - test
hbase(main):003:0> put 'test','1','stu_id:name','jaybing'
0 row(s) in 3.2750 seconds

hbase(main):004:0> scan 'test'
ROW                                COLUMN+CELL                                                                                     
 1                                 column=stu_id:name, timestamp=1411720116252, value=jaybing                                      
1 row(s) in 0.2280 seconds

hbase(main):005:0> list
TABLE                                                                                                                              
test                                                                                                                               
1 row(s) in 0.2460 seconds

=> ["test"]
hbase(main):006:0>


Original link: https://lsjlt.com/news/37487.html