How can I view how many blocks a file has been broken into in a Hadoop file system?
It is a good idea to use hdfs fsck instead of hadoop fsck, since the hadoop fsck form of the command is deprecated.
To find the details of a file named 'test.txt' in the root directory, run:

hdfs fsck /test.txt -files -blocks -locations

Here -files lists the file being checked, -blocks prints the block report (including how many blocks the file occupies), and -locations shows the datanodes that host each block replica.
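If you want the block count programmatically rather than by eyeballing the report, a small parser over the fsck output works. This is a sketch that assumes the fsck summary contains a line of the form "Total blocks (validated): N (...)", which is the usual format, but verify it against your Hadoop version's output before relying on it:

```python
import re

def parse_total_blocks(fsck_output: str) -> int:
    """Extract the block count from `hdfs fsck ... -blocks` output.

    Assumes the summary contains a line like:
        Total blocks (validated):   3 (avg. block size ...)
    """
    match = re.search(r"Total blocks \(validated\):\s*(\d+)", fsck_output)
    if match is None:
        raise ValueError("no 'Total blocks (validated)' line found")
    return int(match.group(1))

# In practice you would feed it the real command output, e.g.:
#   import subprocess
#   out = subprocess.run(
#       ["hdfs", "fsck", "/test.txt", "-files", "-blocks", "-locations"],
#       capture_output=True, text=True, check=True,
#   ).stdout
#   print(parse_total_blocks(out))
```

The subprocess call is commented out because it needs a live cluster with the hdfs client on the PATH; the parsing function itself is plain Python.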