
Linked Lists

Program:
import java.util.*;
public class LinkedListDemo {
    public static void main(String args[]) {
        // create a linked list
        LinkedList<String> ll = new LinkedList<String>();
        // add elements to the linked list
        ll.add("F");
        ll.add("B");
        ll.add("D");
        ll.add("E");
        ll.add("C");
        ll.addLast("Z");
        ll.addFirst("A");
        ll.add(1, "A2");
        System.out.println("Original contents of ll: " + ll);
        // remove elements from the linked list
        ll.remove("F");
        ll.remove(2);
        System.out.println("Contents of ll after deletion: " + ll);
        // remove first and last elements
        ll.removeFirst();
        ll.removeLast();
        System.out.println("ll after deleting first and last: " + ll);
        // get and set a value
        String val = ll.get(2);
        ll.set(2, val + " Changed");
        System.out.println("ll after change: " + ll);
    }
}
Output:
Original contents of ll: [A, A2, F, B, D, E, C, Z]
Contents of ll after deletion: [A, A2, D, E, C, Z]
ll after deleting first and last: [A2, D, E, C]
ll after change: [A2, D, E Changed, C]

Stack
Program:
import java.util.*;
public class stackpro {
    public static void main(String[] args) {
        Stack<Integer> s = new Stack<Integer>();
        Scanner sc = new Scanner(System.in);
        int i;
        do {
            System.out.println("1:push");
            System.out.println("2:pop");
            System.out.println("3:peek");
            System.out.println("4:search");
            System.out.println("5:isEmpty");
            System.out.println("Enter the choice");
            i = sc.nextInt();
            switch (i) {
                case 1:
                    System.out.println("Enter the element:");
                    int x = sc.nextInt();
                    s.push(x);
                    System.out.println("stack is " + s);
                    break;
                case 2:
                    int y = s.pop();
                    System.out.println("The value popped is " + y);
                    break;
                case 3:
                    int z = s.peek();
                    System.out.println("The peek element is " + z);
                    break;
                case 4:
                    System.out.println("Enter the element to be searched");
                    int b = sc.nextInt();
                    // search() returns the 1-based position from the top of the stack, or -1 if not found
                    int a = s.search(b);
                    if (a == -1)
                        System.out.println("Element is not available");
                    else
                        System.out.println("Element is available in index " + a);
                    break;
                case 5:
                    System.out.println("The stack is empty: " + s.empty());
                    break;
                case 6: // 6 exits the program (not listed in the menu)
                    System.exit(0);
            }
        } while (i <= 6);
    }
}
Output:
1:push
2:pop
3:peek
4:search
5:isEmpty
Enter the choice
1
Enter the element:
10
stack is [10]
1:push
2:pop
3:peek
4:search
5:isEmpty
Enter the choice
1
Enter the element:
20
stack is [10, 20]
1:push
2:pop
3:peek
4:search
5:isEmpty
Enter the choice
1
Enter the element:
30
stack is [10, 20, 30]
1:push
2:pop
3:peek
4:search
5:isEmpty
Enter the choice
3
The peek element is 30
1:push
2:pop
3:peek
4:search
5:isEmpty
Enter the choice
2
The value popped is 30
1:push
2:pop
3:peek
4:search
5:isEmpty
Enter the choice
4
Enter the element to be searched
20
Element is available in index 1
1:push
2:pop
3:peek
4:search
5:isEmpty
Enter the choice
5
The stack is empty: false
1:push
2:pop
3:peek
4:search
5:isEmpty
Enter the choice
6
Queues
Program:
import java.util.*;
class TestCollection12 {
    public static void main(String args[]) {
        PriorityQueue<String> queue = new PriorityQueue<String>();
        queue.add("Amit");
        queue.add("Vijay");
        queue.add("Karan");
        queue.add("Jai");
        queue.add("Rahul");
        System.out.println("head:" + queue.element());
        System.out.println("head:" + queue.peek());
        System.out.println("iterating the queue elements:");
        // note: the iterator traverses the underlying heap, not the sorted (priority) order
        Iterator<String> itr = queue.iterator();
        while (itr.hasNext()) {
            System.out.println(itr.next());
        }
        queue.remove();
        queue.poll();
        System.out.println("after removing two elements:");
        Iterator<String> itr2 = queue.iterator();
        while (itr2.hasNext()) {
            System.out.println(itr2.next());
        }
    }
}
Output:
head:Amit
head:Amit
iterating the queue elements:
Amit
Jai
Karan
Vijay
Rahul
after removing two elements:
Karan
Rahul
Vijay

SET:
Program:
import java.util.*;
public class SetDemo {
    public static void main(String[] args) {
        LinkedHashSet<String> lset = new LinkedHashSet<String>();//insertion order
        lset.add("pratyusha");
        lset.add("pratyusha");//a set does not allow duplicate values
        lset.add("bindu");
        lset.add("aruna");
        for (String s : lset)//enhanced for loop iterates over the set
        {
            System.out.println(s);
        }
        System.out.println(lset);
        TreeSet<String> tset = new TreeSet<String>();//sorted order
        tset.add("praneeth");
        tset.add("anuradha");
        tset.add("pratyusha");
        System.out.println(tset);
        TreeSet<Integer> set = new TreeSet<Integer>();
        set.add(10);
        set.add(100);
        set.add(90);
        set.add(18);
        System.out.println(set);
        HashSet<String> hset = new HashSet<String>();//no guaranteed order
        hset.add("pratyusha");
        hset.add("anuradha");
        hset.add("srinivas");
        hset.add("bindu");
        hset.add("vineela");
        hset.add("jyothsna");
        System.out.println(hset);
        LinkedHashSet<Integer> a = new LinkedHashSet<Integer>();
        a.add(14);
        a.add(18);
        a.add(28);
        a.add(35);
        System.out.println(a.contains(14));//contains returns a boolean value
        int sum = 0;
        for (Integer i : a) {
            sum = sum + i;
        }
        System.out.println(sum);
    }
}
Output:
pratyusha
bindu
aruna
[pratyusha, bindu, aruna]
[anuradha, praneeth, pratyusha]
[10, 18, 90, 100]
[jyothsna, vineela, anuradha, srinivas, bindu, pratyusha]
true
95

MAP:
Program:
import java.util.*;
public class MapDemo {
    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        TreeMap<String, Double> tmap = new TreeMap<String, Double>();//sorted by key
        tmap.put("13a91a0514", 80.6);
        tmap.put("13a91a0528", 82.6);
        tmap.put("13a91a0518", 81.6);
        tmap.put("13a91a0535", 83.6);
        tmap.put("13a91a0535", 83.6);//a duplicate key overwrites the previous value
        System.out.println(tmap);
        HashMap<String, Double> hmap = new HashMap<String, Double>();//no guaranteed order
        hmap.put("13a91a0514", 80.6);
        hmap.put("13a91a0518", 81.6);
        hmap.put("13a91a0535", 83.6);
        hmap.put("13a91a0528", 82.6);
        System.out.println(hmap);
        LinkedHashMap<String, Double> lmap = new LinkedHashMap<String, Double>();//insertion order
        lmap.put("13a91a0514", 80.6);
        lmap.put("13a91a0518", 81.6);
        lmap.put("13a91a0535", 83.6);
        lmap.put("13a91a0528", 82.6);
        System.out.println(lmap);
        //taking input from the user
        System.out.println("How many elements are there");
        int no = sc.nextInt();
        System.out.println("Enter " + no + " keys and values");
        TreeMap<Integer, String> t = new TreeMap<Integer, String>();
        for (int i = 0; i < no; i++) {
            int key = sc.nextInt();
            String value = sc.next();
            t.put(key, value);
        }
        System.out.println(t);
        //enhanced for loop over the entry set
        for (Map.Entry<Integer, String> e : t.entrySet()) {
            System.out.println(e.getKey());
            System.out.println(e.getValue());
        }
    }
}
Output:
{13a91a0514=80.6, 13a91a0518=81.6, 13a91a0528=82.6, 13a91a0535=83.6}
{13a91a0514=80.6, 13a91a0518=81.6, 13a91a0528=82.6, 13a91a0535=83.6}
{13a91a0514=80.6, 13a91a0518=81.6, 13a91a0535=83.6, 13a91a0528=82.6}
How many elements are there
4
Enter 4 keys and values
1
14
2
18
3
35
4
28
{1=14, 2=18, 3=35, 4=28}
1
14
2
18
3
35
4
28
Installation of Hadoop (Standalone mode):
Step 1: Verifying Java Installation
Java must be installed on your system before installing Hadoop. Let us verify the Java installation using the following command:
$ java -version
If Java is already installed on your system, you get to see the following response:
java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b13)
Java HotSpot(TM) Client VM (build 25.0-b02, mixed mode)
If Java is not installed on your system, then follow the steps given below for installing Java.
Installing Java
Step I:
Download Java (JDK <latest version> - X64.tar.gz) from the following link:
http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
Then jdk-7u71-linux-x64.tar.gz will be downloaded onto your system.
Step II:
Generally you will find the downloaded java file in the Downloads folder.
Verify it and
extract the jdk-7u71-linux-x64.gz file using the following commands.
$ cd Downloads/
$ ls
jdk-7u71-linux-x64.gz
$ tar zxf jdk-7u71-linux-x64.gz
$ ls
jdk1.7.0_71 jdk-7u71-linux-x64.gz
Step III:
To make java available to all the users, you have to move it to the
location “/usr/local/”. Open
root, and type the following commands.
$ su
password:
# mv jdk1.7.0_71 /usr/local/
# exit
Step IV:
For setting up PATH and JAVA_HOME variables, add the following
commands to ~/.bashrc
file.
export JAVA_HOME=/usr/local/jdk1.7.0_71
export PATH=$PATH:$JAVA_HOME/bin
Now apply all the changes into the current running system.
$ source ~/.bashrc
Step V:
Use the following commands to configure the java alternatives:
# alternatives --install /usr/bin/java java /usr/local/jdk1.7.0_71/bin/java 2
# alternatives --install /usr/bin/javac javac /usr/local/jdk1.7.0_71/bin/javac 2
# alternatives --install /usr/bin/jar jar /usr/local/jdk1.7.0_71/bin/jar 2
# alternatives --set java /usr/local/jdk1.7.0_71/bin/java
# alternatives --set javac /usr/local/jdk1.7.0_71/bin/javac
# alternatives --set jar /usr/local/jdk1.7.0_71/bin/jar
Now verify the installation using the command java -version from the
terminal as explained
above.
Step 2: Verifying Hadoop Installation
Next, let us verify whether Hadoop is already installed on your system using the following command:
$ hadoop version
If Hadoop is already installed on your system, then you will get the
following response:
Hadoop 2.4.1 Subversion
https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
If Hadoop is not installed on your system, then proceed with the
following steps:
Downloading Hadoop
Download and extract Hadoop 2.4.1 from Apache Software Foundation
using the following
commands.
$ su
password:
# cd /usr/local
# wget http://apache.claz.org/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz
# tar xzf hadoop-2.4.1.tar.gz
# mv hadoop-2.4.1 hadoop
# exit
Pseudo-distributed mode:
The following steps are used to install Hadoop 2.4.1 in pseudo-distributed mode.
Step I: Setting up Hadoop
You can set Hadoop environment variables by appending the following
commands to
~/.bashrc file.
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Now apply all the changes into the current running system.

$ source ~/.bashrc
Step II: Hadoop Configuration
You can find all the Hadoop configuration files in the location
“$HADOOP_HOME/etc/hadoop”. You need to make suitable changes in
those configuration
files according to your Hadoop infrastructure.
$ cd $HADOOP_HOME/etc/hadoop
In order to develop Hadoop programs using java, you have to reset the
java environment
variables in hadoop-env.sh file by replacing JAVA_HOME value with
the location of java in
your system.
export JAVA_HOME=/usr/local/jdk1.7.0_71
Given below are the list of files that you have to edit to configure Hadoop.
core-site.xml
The core-site.xml file contains information such as the port number used
for Hadoop
instance, memory allocated for the file system, memory limit for storing
the data, and the size
of Read/Write buffers.
Open the core-site.xml and add the following properties in between the
<configuration> and
</configuration> tags.
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

</configuration>
hdfs-site.xml
The hdfs-site.xml file contains information such as the value of
replication data, the
namenode path, and the datanode path of your local file systems. It means
the place where
you want to store the Hadoop infra.
Let us assume the following data:
dfs.replication (data replication value) = 1
(In the following path, /hadoop/ is the user name; hadoopinfra/hdfs/namenode is the directory created by the HDFS file system.)
namenode path = /home/hadoop/hadoopinfra/hdfs/namenode
(hadoopinfra/hdfs/datanode is the directory created by the HDFS file system.)
datanode path = /home/hadoop/hadoopinfra/hdfs/datanode
Open this file and add the following properties in between the
<configuration>,
</configuration> tags in this file.
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
</property>
</configuration>
Note: In the above file, all the property values are user-defined and you
can make changes
according to your Hadoop infrastructure.
yarn-site.xml
This file is used to configure yarn into Hadoop. Open the yarn-site.xml
file and add the
following properties in between the <configuration>, </configuration>
tags in this file.
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
mapred-site.xml
This file is used to specify which MapReduce framework we are using. By default, Hadoop contains a template of mapred-site.xml. First of all, you need to copy the file from mapred-site.xml.template to mapred-site.xml using the following command.
$ cp mapred-site.xml.template mapred-site.xml
Open mapred-site.xml file and add the following properties in between
the <configuration>,
</configuration> tags in this file.
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Verifying Hadoop Installation
The following steps are used to verify the Hadoop installation.
Step I: Name Node Setup
Set up the namenode using the command “hdfs namenode -format” as
follows.
$ cd ~
$ hdfs namenode -format
The expected result is as follows.
10/24/14 21:30:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/192.168.1.11
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.4.1
...
...
10/24/14 21:30:56 INFO common.Storage: Storage directory
/home/hadoop/hadoopinfra/hdfs/namenode has been successfully formatted.
10/24/14 21:30:56 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
10/24/14 21:30:56 INFO util.ExitUtil: Exiting with status 0
10/24/14 21:30:56 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/192.168.1.11
************************************************************/
Step II: Verifying Hadoop dfs
The following command is used to start dfs. Executing this command will
start your Hadoop
file system.
$ start-dfs.sh
The expected output is as follows:
10/24/14 21:37:56
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-namenode-localhost.out
localhost: starting datanode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]
Step III: Verifying the Yarn Script
The following command is used to start the yarn script. Executing this command will start your yarn daemons.
$ start-yarn.sh
Step IV: Accessing Hadoop on Browser
The default port number to access Hadoop is 50070. Use the following URL to get the Hadoop services on your browser.
http://localhost:50070/
Step V: Verify all applications of the cluster
The default port number to access all applications of the cluster is 8088. Use the following URL to visit this service.
http://localhost:8088/
6) To implement HDFS commands for adding, retrieving and deleting files
1. ls: Lists the contents of the directory specified by path, showing the names, permissions, owner, size and modification date for each entry.
2. lsr: Behaves like -ls, but recursively displays entries in all subdirectories of path.
3. du: Shows disk usage, in bytes, for all the files which match path; filenames are reported with the full HDFS protocol prefix.
4. dus: Like -du, but prints a summary of disk usage of all files/directories in the path.
5. mv: Moves the file or directory indicated by src to dest, within HDFS.
6. cp: Copies the file or directory identified by src to dest, within HDFS.
7. rm: Removes the file or empty directory identified by path.
8. rmr: Removes the file or directory identified by path. Recursively deletes any child entries (i.e., files or subdirectories of path).
9. put: Copies the file or directory from the local file system identified by localSrc to dest within the DFS.
10. copyFromLocal: Identical to -put.
11. moveFromLocal: Copies the file or directory from the local file system identified by localSrc to dest within HDFS, and then deletes the local copy on success.
12. get [-crc]: Copies the file or directory in HDFS identified by src to the local file system path identified by localDest.
13. getmerge: Retrieves all files that match the path src in HDFS, and copies them to a single, merged file in the local file system identified by localDest.
14. cat: Displays the contents of filename on stdout.
15. copyToLocal: Identical to -get.
16. moveToLocal: Works like -get, but deletes the HDFS copy on success.
17. mkdir: Creates a directory named path in HDFS. Creates any parent directories in path that are missing (like mkdir -p in Linux).
18. setrep [-R] [-w] rep: Sets the target replication factor for files identified by path to rep. (The actual replication factor will move toward the target over time.)
19. touchz: Creates a file at path containing the current time as a timestamp. Fails if a file already exists at path, unless the file is already size 0.
20. test -[ezd]: Returns 1 if path exists, has zero length, or is a directory; 0 otherwise.
21. stat [format]: Prints information about path. Format is a string which accepts file size in blocks, filename, block size, replication, and modification date.
22. tail [-f]: Shows the last 1KB of file on stdout.
23. chmod [-R] mode,mode,...: Changes the file permissions associated with one or more objects identified by path. Performs changes recursively with -R. mode is a 3-digit octal mode, or {augo}+/-{rwxX}. Assumes a if no scope is specified and does not apply an umask.
24. chown [-R] [owner][:[group]]: Sets the owning user and/or group for files or directories identified by path. Sets the owner recursively if -R is specified.
25. chgrp [-R] group: Sets the owning group for files or directories identified by path. Sets the group recursively if -R is specified.
26. help: Returns usage information for one of the commands listed above. You must omit the leading '-' character in cmd.
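The commands above are normally issued from the shell as hadoop fs -<command>. As a programmatic complement, the sketch below shows the same adding, retrieving and deleting operations done through the org.apache.hadoop.fs.FileSystem API; the class name, the file paths and the hdfs://localhost:9000 address (taken from the core-site.xml configured earlier) are illustrative assumptions rather than part of the required lab code.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
public class HdfsFileOps {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // assumes the HDFS address configured in core-site.xml above
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);
        // adding: copy a local file into HDFS (equivalent to -put / -copyFromLocal)
        fs.copyFromLocalFile(new Path("/home/hadoop/sample.txt"),
                             new Path("/user/hadoop/input/sample.txt"));
        // retrieving: copy a file from HDFS back to the local file system (equivalent to -get / -copyToLocal)
        fs.copyToLocalFile(new Path("/user/hadoop/input/sample.txt"),
                           new Path("/home/hadoop/sample_copy.txt"));
        // deleting: remove a file or directory recursively (equivalent to -rm / -rmr)
        fs.delete(new Path("/user/hadoop/input/sample.txt"), true);
        fs.close();
    }
}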
7) Run a basic WordCount MapReduce program to understand the MapReduce paradigm
Minimum requirements:
1. Input text files - any text file
2. Ubuntu VM
3. The mapper, reducer and driver classes to process the input files
WordCount Driver class:
import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;
public class WordCount extends Configured implements Tool {
    public int run(String[] args) throws Exception {
        // creating a JobConf object and assigning a job name for identification purposes
        JobConf conf = new JobConf(getConf(), WordCount.class);
        conf.setJobName("WordCount");
        // setting the configuration object with the data types of the output key and value
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        // providing the mapper and reducer class names
        conf.setMapperClass(WordCountMapper.class);
        conf.setReducerClass(WordCountReducer.class);
        // we will give 2 arguments at run time: one is the input path and the other is the output path
        Path inp = new Path(args[0]);
        Path out = new Path(args[1]);
        // the HDFS input and output directories to be fetched from the command line
        FileInputFormat.addInputPath(conf, inp);
        FileOutputFormat.setOutputPath(conf, out);
        JobClient.runJob(conf);
        return 0;
    }
    public static void main(String[] args) throws Exception {
        // this main function will call the run method defined above
        int res = ToolRunner.run(new Configuration(), new WordCount(), args);
        System.exit(res);
    }
}
WordCount Mapper class:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
public class WordCountMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
    // hadoop supported data types
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    // map method that performs the tokenizer job and frames the initial key value pairs
    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        // taking one line at a time and tokenizing it
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        // iterating through all the words available in that line and forming the key value pairs
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            // sending to the output collector, which in turn passes the pair on to the reducer
            output.collect(word, one);
        }
    }
}
WordCount Reducer class:
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
public class WordCountReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
    // reduce method accepts the key value pairs from the mappers, does the aggregation based on keys and produces the final output
    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        int sum = 0;
        /* iterates through all the values available with a key, adds them together and gives
           the final result as the key and the sum of its values */
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}

Follow the steps to execute the job:

1. Copy the jar to a location in the LFS (/home/training/usecase/wordcount/wordcount.jar)
2. Copy the input files from Windows to the LFS (/home/training/usecase/wordcount/input/)
3. Create an input directory in HDFS
hadoop fs -mkdir /projects/wordcount/input/
4. Copy the input files from the LFS to HDFS
hadoop fs -copyFromLocal /home/training/usecase/wordcount/input/* /projects/wordcount/input/
5. Execute the jar
hadoop jar /home/training/usecase/wordcount/wordcount.jar com.bejoy.samples.wordcount.WordCount /projects/wordcount/input/ /projects/wordcount/output/
Let us look at the command in detail, parameter by parameter:
/home/training/usecase/wordcount/wordcount.jar -> full path of the jar file in the LFS
com.bejoy.samples.wordcount.WordCount -> fully qualified name of the Driver class
/projects/wordcount/input/ -> input files location in HDFS
/projects/wordcount/output/ -> a directory in HDFS where we need the output files

NOTE: In Hadoop, the MapReduce process creates the output directory in HDFS and stores the output files there. If the output directory already exists in HDFS, the MapReduce job won't execute; in that case you need to either change the output directory or delete the existing output directory in HDFS before running the jar again.
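One way to handle this inside the driver itself is to delete the output path before submitting the job. The helper below is a minimal sketch of that idea against the old mapred API used in the driver above; the class and method names are illustrative assumptions and are not part of the original program.
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
public class OutputDirCleaner {
    // deletes the job's output directory in HDFS if it already exists,
    // so that JobClient.runJob() does not fail because the directory is present
    public static void deleteIfExists(JobConf conf, Path out) throws java.io.IOException {
        FileSystem fs = FileSystem.get(conf); // JobConf extends Configuration
        if (fs.exists(out)) {
            fs.delete(out, true); // true = delete recursively
        }
    }
}
In the driver above, such a helper could be called just before JobClient.runJob(conf) with the same out path; alternatively, the existing output directory can simply be removed with the rm/rmr command from the HDFS command list before re-running the jar.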
6. Once the job shows a success status, we can see the output file in the output directory (part-00000)
hadoop fs -ls /projects/wordcount/output/
7. For any further investigation of the output file, we can retrieve the data from HDFS to the LFS and from there to the desired location
hadoop fs -copyToLocal /projects/wordcount/output/ /home/training/usecase/wordcount/output

To Execute Pig Queries :-

Store Operator
The Store operator is used to save a relation to the file system (HDFS). Its syntax is:
STORE Relation_name INTO ' required_directory_path ' [USING function];
Assume we have read the student data into a relation student using the LOAD operator as shown below.
grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',')
as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );
Now, let us store the relation in the HDFS directory "/pig_Output/" as shown below.
grunt> STORE student INTO ' hdfs://localhost:9000/pig_Output/ ' USING PigStorage(',');
Dump Operator
The Dump operator is used to run the Pig Latin statements and display the results on the screen. It is generally used for debugging purposes.
Syntax
Given below is the syntax of the Dump operator.
grunt> Dump Relation_Name
The describe operator displays the schema of a relation, and the explain operator displays the logical, physical and MapReduce execution plans of a relation.
grunt> describe student;
grunt> explain Relation_name;
To Execute Hive Queries :-
hive> CREATE DATABASE [IF NOT EXISTS] userdb;
hive> CREATE SCHEMA userdb;
hive> DROP DATABASE IF EXISTS userdb;
The following query creates a table named employee.
hive> CREATE TABLE IF NOT EXISTS employee ( eid int, name String,
salary String, destination String)
COMMENT 'Employee details'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;
The syntax for load data is as follows:
hive> LOAD DATA LOCAL INPATH '/home/user/sample.txt'
OVERWRITE INTO TABLE employee;
The following query renames the table from employee to emp.
hive> ALTER TABLE employee RENAME TO emp;
