How to expand HDFS capacity by mounting additional disks

Today I noticed that the disk on one of our Hadoop HDFS machines is nearly full. Rather than resizing the existing disk to a larger size, is it feasible to simply mount an additional disk? If so, how do I do it?

采菊篱下 - happiness comes from sharing; a supporter of open-source sharing; an ops engineer


1. Edit hdfs-site.xml and add the new disk's directory to dfs.datanode.data.dir. Note that this property takes a single comma-separated list of directories, not multiple <value> elements:
<!-- Local storage paths for DataNode data -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data1/appData/hdfs/data,/data2/appData/hdfs/data</value>
</property>
Append the directory on your newly mounted disk to this list.
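Before the DataNode can use the new path, the disk must be formatted, mounted, and owned by the HDFS user. A minimal sketch of the host preparation, assuming the new disk is /dev/sdc, the mount point is /data2, and the DataNode runs as user hdfs (the device name, filesystem, and user are assumptions; adjust them to your environment):

```shell
# Format the new disk (DESTROYS any existing data on /dev/sdc -- double-check the device!)
mkfs.ext4 /dev/sdc

# Mount it now, and make the mount survive reboots via fstab
mkdir -p /data2
mount /dev/sdc /data2
echo '/dev/sdc  /data2  ext4  defaults,noatime  0 0' >> /etc/fstab

# Create the DataNode data directory and hand it to the HDFS user
mkdir -p /data2/appData/hdfs/data
chown -R hdfs:hdfs /data2/appData/hdfs/data
```

These commands require root and a real block device, so run them on the DataNode host itself.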
 
2. Configure the volume choosing policy. The default policy (RoundRobinVolumeChoosingPolicy) spreads new blocks evenly across volumes regardless of free space; AvailableSpaceVolumeChoosingPolicy steers writes toward volumes with more available space, which is what you want when adding an empty disk to a nearly full node:
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
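The hdfs-site.xml changes only take effect after the DataNode process is restarted. A sketch, assuming a Hadoop 2.x tarball install with the sbin scripts on the PATH (Hadoop 3.x uses `hdfs --daemon stop datanode` / `hdfs --daemon start datanode` instead):

```shell
# Restart only the DataNode on this host; other nodes are unaffected
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode

# Confirm the DataNode re-registered and reports the expanded capacity
hdfs dfsadmin -report
```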
After the DataNode is restarted with the new configuration, the effect looks like this: successive `df -k` runs show new writes landing on the freshly added /data2 disk, whose used space grows with each run:
[root@hdfs-dn1 hadoop]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 47273912 1046304 43826176 3% /
tmpfs 8166272 0 8166272 0% /dev/shm
/dev/sda1 198337 27789 160308 15% /boot
/dev/sdb 206424760 100420660 95518340 52% /data
/dev/sdc 309637120 28557664 265350816 10% /data2
[root@hdfs-dn1 hadoop]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 47273912 1046304 43826176 3% /
tmpfs 8166272 0 8166272 0% /dev/shm
/dev/sda1 198337 27789 160308 15% /boot
/dev/sdb 206424760 100420660 95518340 52% /data
/dev/sdc 309637120 28565404 265343076 10% /data2
[root@hdfs-dn1 hadoop]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 47273912 1046304 43826176 3% /
tmpfs 8166272 0 8166272 0% /dev/shm
/dev/sda1 198337 27789 160308 15% /boot
/dev/sdb 206424760 100420668 95518332 52% /data
/dev/sdc 309637120 28570380 265338100 10% /data2
[root@hdfs-dn1 hadoop]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 47273912 1046304 43826176 3% /
tmpfs 8166272 0 8166272 0% /dev/shm
/dev/sda1 198337 27789 160308 15% /boot
/dev/sdb 206424760 100420668 95518332 52% /data
/dev/sdc 309637120 28575012 265333468 10% /data2
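To watch this without scanning the full `df` output each time, you can filter for just the data mounts. A small sketch, assuming the mount points /data and /data2 from the listing above:

```shell
# Print only the HDFS data mounts with their usage percentage
df -k | awk '$NF == "/data" || $NF == "/data2" {print $NF, $5}'
```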
References:

https://blog.csdn.net/xichenguan/article/details/38728749
https://my.oschina.net/cjun/blog/812423
http://www.bkjia.com/yjs/891991.html
