How to expand Hadoop HDFS capacity by mounting multiple disks

Today I noticed that the disk mounted on one of our Hadoop HDFS machines is almost full. Apart from resizing the existing disk to a larger size, is it feasible to mount an additional disk instead? If so, how should it be done?

空心菜 - facing the sunlight, growing strong

Upvoted by: koyo

1. Modify the configuration file hdfs-site.xml


<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data1/appData/hdfs/data,/data2/appData/hdfs/data</value>
</property>
As shown above, add the directory on the newly attached disk to this comma-separated list.
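Before the new directory is added to hdfs-site.xml, the disk itself has to be formatted, mounted, and made writable by the DataNode user. A minimal shell sketch, assuming the new disk shows up as /dev/sdc, is formatted with ext4, is mounted at /data2, and the DataNode runs as user hdfs in group hadoop (all of these are assumptions, adjust to your environment):

mkfs.ext4 /dev/sdc                                   # format the new disk (destroys anything on it)
mkdir -p /data2
mount /dev/sdc /data2
echo '/dev/sdc  /data2  ext4  defaults,noatime  0 0' >> /etc/fstab   # make the mount survive reboots
mkdir -p /data2/appData/hdfs/data                    # DataNode data directory on the new disk
chown -R hdfs:hadoop /data2/appData/hdfs/data        # assumption: DataNode runs as hdfs:hadoop
chmod 700 /data2/appData/hdfs/data                   # matches the usual dfs.datanode.data.dir.perm default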
 
2. Add or modify the disk volume choosing policy

<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
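Changes to dfs.datanode.data.dir and the volume choosing policy only take effect once the DataNode process is restarted. A hedged sketch of the restart and a quick capacity check, assuming a Hadoop 2.x layout with hadoop-daemon.sh (on Hadoop 3.x the equivalent is hdfs --daemon stop/start datanode):

$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode     # on the node that received the new disk
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
hdfs dfsadmin -report                                # the extra capacity should now show up

Newer releases (2.6+) can also hot-swap data directories without a full restart via hdfs dfsadmin -reconfig datanode <dn-host>:<ipc-port> start, but a plain restart is the simplest path.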
After the restart the effect looks like this; successive df -k runs on the DataNode show the used space on /data2 growing as new blocks are written to the added disk:
[root@hdfs-dn1 hadoop]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 47273912 1046304 43826176 3% /
tmpfs 8166272 0 8166272 0% /dev/shm
/dev/sda1 198337 27789 160308 15% /boot
/dev/sdb 206424760 100420660 95518340 52% /data
/dev/sdc 309637120 28557664 265350816 10% /data2
[root@hdfs-dn1 hadoop]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 47273912 1046304 43826176 3% /
tmpfs 8166272 0 8166272 0% /dev/shm
/dev/sda1 198337 27789 160308 15% /boot
/dev/sdb 206424760 100420660 95518340 52% /data
/dev/sdc 309637120 28565404 265343076 10% /data2
[root@hdfs-dn1 hadoop]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 47273912 1046304 43826176 3% /
tmpfs 8166272 0 8166272 0% /dev/shm
/dev/sda1 198337 27789 160308 15% /boot
/dev/sdb 206424760 100420668 95518332 52% /data
/dev/sdc 309637120 28570380 265338100 10% /data2
[root@hdfs-dn1 hadoop]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda3 47273912 1046304 43826176 3% /
tmpfs 8166272 0 8166272 0% /dev/shm
/dev/sda1 198337 27789 160308 15% /boot
/dev/sdb 206424760 100420668 95518332 52% /data
/dev/sdc 309637120 28575012 265333468 10% /data2
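Two optional follow-ups. First, AvailableSpaceVolumeChoosingPolicy can be tuned with dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold (how far apart, in bytes, the volumes' free space may drift before the policy favors the emptier one) and dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction (how strongly new blocks prefer the emptier volume). Second, the policy only steers new writes; it will not move the blocks already sitting on /data over to /data2. On Hadoop 3.0+ the intra-DataNode disk balancer can do that; a sketch, assuming the DataNode host is hdfs-dn1 and dfs.disk.balancer.enabled is true:

hdfs diskbalancer -plan hdfs-dn1                     # generate a move plan for this DataNode
hdfs diskbalancer -execute <path-to-plan.json>       # run the plan printed by the previous command
hdfs diskbalancer -query hdfs-dn1                    # check progress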
References:


https://blog.csdn.net/xichenguan/article/details/38728749
https://my.oschina.net/cjun/blog/812423
http://www.bkjia.com/yjs/891991.html
