Difference between partitioning and bucketing in Hive
I created Hive external tables (which is usually my choice); you can stick to yours.
Please follow these steps:
Create a database
CREATE DATABASE IF NOT EXISTS testdb LOCATION '/hivedb/testdb';
Create a clustered table (bucketed table)
CREATE TABLE testdb.Employee (
  ID BIGINT,
  NAME STRING,
  SALARY DOUBLE,
  COUNTRY STRING
)
CLUSTERED BY (ID) INTO 5 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/hivedb/testdb/employee';
Create a plain table
CREATE TABLE testdb.Employee_plain_table (
  ID BIGINT,
  NAME STRING,
  SALARY DOUBLE,
  COUNTRY STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/hivedb/testdb/employee_plain_table';
Enforce bucketing, as recommended by @lake in the previous answer:
set hive.enforce.bucketing = true;
Create a data file ('data.txt'). I created a data file with 20 records.
1,AAAAA,1000.00,USA
2,BBBBB,2000.00,CANADA
3,CCCCC,3000.00,MEXICO
4,DDDDD,4000.00,BRAZIL
5,EEEEE,5000.00,ARGENTINA
6,DDDDD,6000.00,CHILE
7,FFFFF,7000.00,BOLIVIA
8,GGGGG,8000.00,VENEZUELA
9,HHHHH,9000.00,PERU
10,IIIII,10000.00,COLOMBIA
11,JJJJJ,11000.00,EQUADOR
12,KKKKK,12000.00,URUGUAY
13,LLLLL,13000.00,PARAGUAY
14,MMMMM,14000.00,GUYANA
15,NNNNN,15000.00,NICARAGUA
16,OOOOO,16000.00,PANAMA
17,PPPPP,17000.00,COSTA RICA
18,QQQQQ,18000.00,HAITI
19,RRRRR,19000.00,DOMINICA
20,SSSSS,20000.00,JAMAICA
Copy the data file to the HDFS location '/hivedb/testdb/employee_plain_table':
./hadoop fs -put ~/so/data.txt /hivedb/testdb/employee_plain_table
Run a select * query on testdb.Employee_plain_table:
select * from testdb.Employee_plain_table;
This should display 20 records.
Use an insert command:
insert overwrite table testdb.employee select * from testdb.employee_plain_table;
This should run a MapReduce job and insert the records into the bucketed table.
This will create 5 files, as we have 5 buckets per the DDL of the employee table.
Verify this using the command:
./hadoop fs -ls /hivedb/testdb/employee
Found 5 items
-rwxr-xr-x 1 hduser supergroup 95 2017-10-19 11:04 /hivedb/testdb/employee/000000_0
-rwxr-xr-x 1 hduser supergroup 81 2017-10-19 11:04 /hivedb/testdb/employee/000001_0
-rwxr-xr-x 1 hduser supergroup 90 2017-10-19 11:05 /hivedb/testdb/employee/000002_0
-rwxr-xr-x 1 hduser supergroup 88 2017-10-19 11:05 /hivedb/testdb/employee/000003_0
-rwxr-xr-x 1 hduser supergroup 84 2017-10-19 11:05 /hivedb/testdb/employee/000004_0
Open up each file, compare with the original data file and you will get to know what has happened.
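To predict which record lands in which bucket file, note that Hive's default hash for integral clustering keys reduces (for positive values) to the key itself, so the bucket is essentially ID % 5. The following Python sketch models that assignment; the modulo rule is a simplification of Hive's internal hashing, assumed here for the positive BIGINT IDs in the sample data:

```python
NUM_BUCKETS = 5  # matches the CLUSTERED BY (ID) INTO 5 BUCKETS clause

# Assign each of the 20 sample IDs to a bucket: for positive integral
# keys, Hive's hash reduces to the value itself, so bucket = id % 5.
buckets = {b: [] for b in range(NUM_BUCKETS)}
for emp_id in range(1, 21):
    buckets[emp_id % NUM_BUCKETS].append(emp_id)

for b in sorted(buckets):
    # Bucket b corresponds to the output file 00000b_0
    print(f"{b:06d}_0 -> IDs {buckets[b]}")
```

Under that assumption, file 000000_0 should hold IDs 5, 10, 15, 20; file 000001_0 should hold IDs 1, 6, 11, 16; and so on, with 4 records per bucket.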
Hope this clarifies your query!
Reference: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL+BucketedTables
Update: You used load with "local", which is only a copy operation: it copies the given input file from the source to the destination location. A load command from "local" is a copy, while one from "hdfs" is a move operation. No MapReduce is involved, so no bucketing happens.
This is because you have to enforce the bucketing during the insert into your bucketed table, or create the buckets yourself. If you are inserting data into a bucketed table, you can use the following flag.
set hive.enforce.bucketing = true; -- (Note: Not needed in Hive 2.x onward)
That will force Hive to create the buckets. You should be able to see a number of files equal to your number of buckets (if you have enough records and a proper distribution of your clustering column).
Update: The load command doesn't create any buckets; it just puts the data into HDFS. You should load the data into another table and insert the data from that table into the bucketed one using an insert overwrite statement.
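The contrast can be modeled in a few lines of Python (the file names and the modulo routing are illustrative, not Hive internals): load is a byte-for-byte file copy, while insert ... select reads each row and routes it to a bucket file by a hash of the clustering key.

```python
import os
import shutil

def load_local(src: str, table_dir: str) -> None:
    # Model of LOAD DATA LOCAL INPATH: a plain file copy. Rows are never
    # inspected, so no bucket files can be produced.
    shutil.copy(src, os.path.join(table_dir, os.path.basename(src)))

def insert_bucketed(src: str, table_dir: str, num_buckets: int) -> None:
    # Model of INSERT OVERWRITE into a bucketed table: every row is read
    # and routed to a bucket file by (key % num_buckets).
    outs = [open(os.path.join(table_dir, f"{b:06d}_0"), "w")
            for b in range(num_buckets)]
    with open(src) as f:
        for line in f:
            emp_id = int(line.split(",", 1)[0])  # first CSV field is ID
            outs[emp_id % num_buckets].write(line)
    for o in outs:
        o.close()
```

Running load_local leaves the table directory with one unsplit file, while insert_bucketed leaves five bucket files: the same contrast visible in the hadoop fs -ls output earlier.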