Connection refused error when registering an HDFS-based snapshot repository for Elasticsearch.

I am testing Elasticsearch snapshot and restore with the HDFS repository plugin, running everything in Docker containers.

Pulled the hadoop-docker and elasticsearch images from Docker Hub, then started the Hadoop container:

docker run -it -d -p 8088:8088 -p 51270:50070 -p 9000:9000 -v /e/WS/my-hadoop-docker/logs:/usr/local/hadoop/logs sequenceiq/hadoop-docker:2.7.0 /etc/bootstrap.sh -bash

The HDFS repository plugin installed successfully in the elasticsearch container.

Created a custom Docker network my-net so that the elasticsearch and hadoop containers can communicate.
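The network setup described above can be sketched with the following commands (the container names are placeholders, not from the original post; substitute the names shown by `docker ps`):

```shell
# Create a user-defined bridge network so containers can reach
# each other by name, then attach both containers to it.
docker network create my-net
docker network connect my-net <hadoop-container-name>
docker network connect my-net <elasticsearch-container-name>
```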

docker ps

CONTAINER ID        IMAGE                            COMMAND                  CREATED             STATUS              PORTS                                                                                                                                                                                     NAMES
eee7af657313        sequenceiq/hadoop-docker:2.7.0   "/etc/bootstrap.sh -…"   25 hours ago        Up 2 hours          2122/tcp, 8030-8033/tcp, 8040/tcp, 0.0.0.0:8088->8088/tcp, 8042/tcp, 19888/tcp, 49707/tcp, 50010/tcp, 50020/tcp, 50075/tcp, 50090/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:51270->50070/tcp   stoic_proskuriakova

netstat -tnlp inside the hadoop-docker container:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
tcp        0      0 0.0.0.0:8088                0.0.0.0:*                   LISTEN      560/java
tcp        0      0 0.0.0.0:13562               0.0.0.0:*                   LISTEN      655/java
tcp        0      0 0.0.0.0:50010               0.0.0.0:*                   LISTEN      248/java
tcp        0      0 0.0.0.0:50075               0.0.0.0:*                   LISTEN      248/java
tcp        0      0 0.0.0.0:8030                0.0.0.0:*                   LISTEN      560/java
tcp        0      0 0.0.0.0:8031                0.0.0.0:*                   LISTEN      560/java
tcp        0      0 0.0.0.0:8032                0.0.0.0:*                   LISTEN      560/java
tcp        0      0 0.0.0.0:8033                0.0.0.0:*                   LISTEN      560/java
tcp        0      0 0.0.0.0:50020               0.0.0.0:*                   LISTEN      248/java
tcp        0      0 0.0.0.0:8040                0.0.0.0:*                   LISTEN      655/java
tcp        0      0 172.17.0.2:9000             0.0.0.0:*                   LISTEN      128/java
tcp        0      0 0.0.0.0:8042                0.0.0.0:*                   LISTEN      655/java
tcp        0      0 0.0.0.0:50090               0.0.0.0:*                   LISTEN      408/java
tcp        0      0 0.0.0.0:2122                0.0.0.0:*                   LISTEN      24/sshd
tcp        0      0 0.0.0.0:34351               0.0.0.0:*                   LISTEN      655/java
tcp        0      0 127.0.0.1:38933             0.0.0.0:*                   LISTEN      248/java
tcp        0      0 0.0.0.0:50070               0.0.0.0:*                   LISTEN      128/java
tcp        0      0 :::2122                     :::*                        LISTEN      24/sshd

Running the following request returns a Connection refused error:

 curl -X PUT "localhost:9200/_snapshot/my_hdfs_repository?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://172.18.0.2:9000/",
    "path": "elasticsearch/repositories/my_hdfs_repository",
    "conf.dfs.client.read.shortcircuit": "false"
  }
}
'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "repository_exception",
        "reason" : "[my_hdfs_repository] cannot create blob store"
      }
    ],
    "type" : "repository_exception",
    "reason" : "[my_hdfs_repository] cannot create blob store",
    "caused_by" : {
      "type" : "unchecked_i_o_exception",
      "reason" : "Cannot create HDFS repository for uri [hdfs://172.18.0.2:9000/]",
      "caused_by" : {
        "type" : "connect_exception",
        "reason" : "Call From 3b1fed43bdf5/172.18.0.3 to 1f5a5b633379.my-net:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused",
        "caused_by" : {
          "type" : "connect_exception",
          "reason" : "Connection refused"
        }
      }
    }
  },
  "status" : 500
}

Docker Engine: 19.03.8, on Windows 10.

Inside the hadoop-docker container, I used example-java-read-and-write-from-hdfs to try to access the HDFS filesystem, and got the same error:

java -jar example-java-read-and-write-from-hdfs-1.0-SNAPSHOT-jar-with-dependencies.jar hdfs://localhost:9000

Exception in thread "main" java.net.ConnectException: Call From eee7af657313/172.17.0.2 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
        at org.apache.hadoop.ipc.Client.call(Client.java:1472)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
        at io.saagie.example.hdfs.Main.main(Main.java:48)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
        at org.apache.hadoop.ipc.Client.call(Client.java:1438)
        ... 18 more

Still inside the hadoop-docker container, replacing localhost with the container's IP 172.17.0.2 avoids the error, and the example can write to HDFS:

java -jar example-java-read-and-write-from-hdfs-1.0-SNAPSHOT-jar-with-dependencies.jar hdfs://172.17.0.2:9000

Apr 14, 2020 12:26:17 AM io.saagie.example.hdfs.Main main
INFO: Begin Write file into hdfs
Apr 14, 2020 12:26:18 AM io.saagie.example.hdfs.Main main
INFO: End Write file into hdfs
Apr 14, 2020 12:26:18 AM io.saagie.example.hdfs.Main main
INFO: Read file into hdfs
Apr 14, 2020 12:26:18 AM io.saagie.example.hdfs.Main main
INFO: hello;world

On the host (Windows 10), running java -jar target/example-java-read-and-write-from-hdfs-1.0-SNAPSHOT-jar-with-dependencies.jar hdfs://localhost:9000 raises a different exception:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/hello.csv could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3067)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:722)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

        at org.apache.hadoop.ipc.Client.call(Client.java:1468)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)

Solution:

A late answer, but in case it helps someone using the HDFS repository plugin with Elasticsearch: I used this Docker image: https://github.com/big-data-europe/docker-hadoop.

Add the port mapping 9866:9866 for the datanode in docker-compose.yaml, and update hadoop.env with:

CORE_CONF_fs_defaultFS=hdfs://YOUR_IP_NAMENODE:9000
HDFS_CONF_dfs_replication=1
HDFS_CONF_dfs_client_use_datanode_hostname=true
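The datanode port mapping mentioned above might look like this in docker-compose.yaml (a minimal sketch; the service name and the rest of the service definition come from your copy of the big-data-europe compose file):

```yaml
services:
  datanode:
    # ... existing datanode configuration ...
    ports:
      - "9866:9866"   # expose the datanode data-transfer port to clients outside the compose network
```

Together with HDFS_CONF_dfs_client_use_datanode_hostname=true, this lets a client outside the compose network reach the datanode through the published port instead of the container-internal IP.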

Then run docker-compose up, open the UI at http://YOUR_IP_NAMENODE:9870, and wait for "Safemode is off".

Connect to the namenode and create the users root and elasticsearch:

docker exec -it namenode bash

hdfs dfs -mkdir /user
hdfs dfs -mkdir /user/root
hdfs dfs -chown root:supergroup /user/root

hdfs dfs -mkdir /user/elasticsearch
hdfs dfs -chown elasticsearch:supergroup /user/elasticsearch

Then, in Elasticsearch, register the repository:

PUT _snapshot/backup
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://YOUR_IP_NAMENODE:9000",
    "path": "repository/backup"
  }
}

And use it; it works fine for me!
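For anyone not using the Kibana console, the registration request above can be expressed as the equivalent curl call (assuming Elasticsearch listens on localhost:9200):

```shell
curl -X PUT "localhost:9200/_snapshot/backup?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://YOUR_IP_NAMENODE:9000",
    "path": "repository/backup"
  }
}
'
```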
