MapReduce and HBase Commands

The document provides a list of common MapReduce and HBase commands, each with a brief description of its purpose and usage. It covers managing HDFS, running MapReduce jobs, and manipulating HBase tables.

Uploaded by Harsh Goyal

🟩 MapReduce Commands
1) hdfs dfs -mkdir /input
➡️ Creates a folder in HDFS for the input.

2) hdfs dfs -put localfile.txt /input
➡️ Uploads a local file into the HDFS input folder.

3) hadoop jar yourJarFile.jar MainClassName /input /output
➡️ Runs a MapReduce program with the given input and output paths.

4) hdfs dfs -ls /output
➡️ Lists the files inside the output directory.

5) hdfs dfs -cat /output/part-r-00000
➡️ Prints the MapReduce result from the output file to the terminal.

6) hdfs dfs -rm -r /output
➡️ Deletes the previous output folder before a new run (otherwise the job fails, because the output directory must not already exist).

7) javac -classpath `hadoop classpath` -d wordcount_classes WordCount.java
➡️ Compiles the Java MapReduce source against the Hadoop libraries.

8) jar -cvf wordcount.jar -C wordcount_classes/ .
➡️ Packs the compiled classes into a jar file.

9) hadoop fs -put input.txt /inputdir
➡️ Uploads a file to HDFS (hadoop fs is equivalent to hdfs dfs here).

10) hadoop fs -cat /output/part-r-00000
➡️ Views the output directly in the terminal.

11) mapred job -status <job_id>
➡️ Checks the status of a specific job (the older form, hadoop job -status, is deprecated).

12) hadoop jar myjar.jar className -D mapreduce.job.reduces=2 /input /output
➡️ Runs a job with a custom number of reducers (mapreduce.job.reduces replaces the deprecated mapred.reduce.tasks).
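
Taken together, the commands above form a complete WordCount run. The sketch below strings them into one script; the file names (WordCount.java, input.txt) and the main class name WordCount are assumptions for illustration, and a running Hadoop cluster with HDFS is required:

```shell
#!/usr/bin/env bash
# Hypothetical end-to-end WordCount run; assumes Hadoop is installed,
# HDFS is running, and WordCount.java is in the current directory.
set -e

# 1. Compile the job against the Hadoop libraries and pack it into a jar
javac -classpath "$(hadoop classpath)" -d wordcount_classes WordCount.java
jar -cvf wordcount.jar -C wordcount_classes/ .

# 2. Stage the input in HDFS and remove any previous output
hdfs dfs -mkdir -p /input
hdfs dfs -put -f input.txt /input
hdfs dfs -rm -r -f /output

# 3. Run the job (here with 2 reducers) and print the result
hadoop jar wordcount.jar WordCount -D mapreduce.job.reduces=2 /input /output
hdfs dfs -cat /output/part-r-00000
```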
🟦 HBase Commands
1) start-hbase.sh
➡️ Starts HBase.

2) hbase shell
➡️ Opens the HBase shell (CLI).

3) create 'table_name', 'column_family'
➡️ Creates a new table with the given column family.

4) list
➡️ Lists the available tables.

5) disable 'table_name'
➡️ Disables a table (required before dropping it and for some modifications).

6) drop 'table_name'
➡️ Deletes a table after it has been disabled.

7) put 'table_name', 'row_key', 'column_family:column', 'value'
➡️ Inserts a new value into the table.

8) scan 'table_name'
➡️ Displays the contents of the table.

9) get 'table_name', 'row_key'
➡️ Fetches the data for a single row key.

10) delete 'table_name', 'row_key', 'column_family:column'
➡️ Deletes the data in a particular cell.

11) truncate 'table_name'
➡️ Clears the entire table (the structure stays the same).

12) exit
➡️ Exits the HBase shell.

13) scan 'tablename', {LIMIT => 5}
➡️ Views the first 5 rows of the table.

14) alter 'tablename', NAME => 'new_cf'
➡️ Adds a new column family to an existing table.

15) hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <HDFS_input_path>
➡️ Imports data from HDFS into an HBase table.

16) hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <HDFS_output_path>
➡️ Exports data from an HBase table to HDFS.
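
The table operations above can be run non-interactively by piping commands into the shell, and Export/Import are typically paired for backup and restore. Below is a sketch, assuming a running HBase instance with HDFS; the table name students, the column family info, the sample values, and the path /backup/students are made up for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical HBase walkthrough; assumes HBase and HDFS are running.

# Basic table operations, piped into the shell non-interactively
hbase shell <<'EOF'
create 'students', 'info'
put 'students', 'r1', 'info:name', 'Asha'
put 'students', 'r1', 'info:marks', '91'
scan 'students', {LIMIT => 5}
get 'students', 'r1'
delete 'students', 'r1', 'info:marks'
EOF

# Back the table up to HDFS, then restore it. Import does not create
# the table, so the target must already exist with the same column family.
hbase org.apache.hadoop.hbase.mapreduce.Export students /backup/students
hbase org.apache.hadoop.hbase.mapreduce.Import students /backup/students
```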
