Fixing missing blocks reported by the NameNode

hadoop

2018-02-02


Today the test server was unexpectedly rebooted, after which the HDFS web UI showed:

There are 2 missing blocks. The following files may be corrupted:

blk_1078468526 /user/spark/sparkstreamingcheckpoint/checkpoint-1517487380000
blk_1078468528 /user/spark/sparkstreamingcheckpoint/checkpoint-1517487400000

Please check the logs or run fsck in order to identify the missing blocks. See the Hadoop FAQ for common causes and potential solutions.

 

As the message suggests, run fsck to inspect the affected path:

hadoop fsck /user/spark/sparkstreamingcheckpoint

 

The check confirmed that two blocks were indeed missing.
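As a sketch (assuming a typical Hadoop 2.x install, where `hdfs fsck` is the non-deprecated form of `hadoop fsck`), fsck can also list just the corrupt files, or dump per-file block details:

```shell
# List only the files with corrupt/missing blocks under the checkpoint dir
hdfs fsck /user/spark/sparkstreamingcheckpoint -list-corruptfileblocks

# For more detail, show each file's blocks, replica counts, and DataNode locations
hdfs fsck /user/spark/sparkstreamingcheckpoint -files -blocks -locations
```

The `-locations` output is useful for confirming that no DataNode still holds a replica before giving the data up as lost.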

 

Both replicas of each of these two blocks were lost, which is why HDFS raised this warning.

The data cannot be recovered; the only option is to delete the file metadata via fsck to clear the warning.

hadoop fsck -delete /user/spark/sparkstreamingcheckpoint/checkpoint-1517487380000
hadoop fsck -delete /user/spark/sparkstreamingcheckpoint/checkpoint-1517487400000
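Note that `-delete` permanently removes the corrupt files from the namespace. A gentler alternative, sketched below, is the standard fsck `-move` option, which relocates corrupt files to `/lost+found` so any intact blocks can still be inspected first:

```shell
# Move corrupt files under this path to /lost+found instead of deleting them
hdfs fsck /user/spark/sparkstreamingcheckpoint -move
```

For a Spark Streaming checkpoint directory the data is usually disposable (the job can rebuild its state), so deleting outright, as above, is reasonable here.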

 

Once this happens the data is essentially unrecoverable, so when configuring HDFS, set the replication factor to at least 3.
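A minimal sketch of raising replication: the `dfs.replication` setting in hdfs-site.xml only applies to newly written files, so existing paths (the checkpoint directory here is just this post's example path) need their replication raised explicitly:

```shell
# Raise replication to 3 for existing files; -w waits until the
# target replication is actually reached before returning
hdfs dfs -setrep -w 3 /user/spark/sparkstreamingcheckpoint
```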

Please credit the source when reposting: http://www.julyme.com/20180202/99.html
