Rhadoop – Sentiment Analysis

Continuing from my previous post on sentiment analysis, let's explore how to perform the same analysis using RHadoop.

What will be covered?

  • Environment & Pre-requisites
  • Rhadoop in action
    • Setting Rhadoop environment variables
    • Setting working folder paths
    • Loading data
    • Scoring function
    • Writing Mapper
    • Writing Reducer
    • Run your Map-Reduce program
    • Read data output from hadoop to R data frame

Environment:

I performed this analysis on the set-up given below:

Pre-requisites:

Ensure that all Hadoop processes are running. You can start them by running the following commands in your terminal:

start-dfs.sh
start-yarn.sh

Then run the command jps in your terminal; the result should look similar to the screenshot below:

[Screenshot: jps output listing the running Hadoop processes]

RHadoop In Action:

Set up the environment variables; note that the paths may change depending on your Ubuntu and Hadoop versions (I'm using Hadoop 2.4.0):

Sys.setenv("HADOOP_CMD"="/usr/local/hadoop/bin/hadoop"
Sys.setenv("HADOOP_STREAMING"="/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar")

Setting folder paths:

During development, the flexibility to switch between the local and Hadoop back ends makes testing much easier. In the code below, setting LOCAL = T uses files from the local folder, while LOCAL = F uses files from Hadoop.

# Root folder path
setwd('/home/manohar/example/Sentiment_Analysis')

# Set "LOCAL" variable to T to execute using rmr's local backend.
# Otherwise, use Hadoop (which needs to be running, correctly configured, etc.)
LOCAL=F

if (LOCAL)
{
  rmr.options(backend = 'local')
  
  # we have smaller extracts of the data in this project's 'local' subdirectory
  hdfs.data.root = '/home/manohar/example/Sentiment_Analysis/'
  hdfs.data = file.path(hdfs.data.root, 'data', 'data.csv')
  
  hdfs.out.root = hdfs.data.root
  
} else {
  rmr.options(backend = 'hadoop')
  
  # assumes 'Sentiment_Analysis/data' input path exists on HDFS under /home/manohar/example
  
  hdfs.data.root = '/home/manohar/example/Sentiment_Analysis/'
  hdfs.data = file.path(hdfs.data.root, 'data')
  
  # writes output to 'Sentiment_Analysis' directory in user's HDFS home (e.g., /home/manohar/example/Sentiment_Analysis/)
  hdfs.out.root = 'Sentiment_Analysis'
}

hdfs.out = file.path(hdfs.out.root, 'out')

 

Loading Data:

The code below copies the data from the local file system to HDFS and returns TRUE on success.

# equivalent to hadoop dfs -copyFromLocal
hdfs.put(hdfs.data,  hdfs.data)
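
To confirm the copy landed on HDFS, you can list the target directory from R (a small verification step, assuming rhdfs has been initialised as shown earlier):

# list the HDFS data directory; data.csv should appear in the listing
hdfs.ls(hdfs.data)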

 

Our data is in a CSV file, so we define an input format that labels the fields; this improves code readability, especially in the mapper.

# asa.csv.input.format() - read CSV data files and label field names
# for better code readability (especially in the mapper)
#
asa.csv.input.format = make.input.format(format='csv', mode='text', streaming.format = NULL, sep=',',
                                         col.names = c('ID', 'Name', 'Gender', 'Age','OverAllRating',                                             
                                                       'ReviewType', 'ReviewTitle', 'Benefits', 'Money', 'Experience', 
                                                       'Purchase', 'claimsProcess', 'SpeedResolution', 'Fairness',            
                                                       'ReviewDate', 'Review', 'Recommend', 'ColCount'),
                                         stringsAsFactors=F)

 

Load the opinion lexicons; the lexicon files and the accompanying paper can be found here.

pos_words <- scan('/home/manohar/example/Sentiment_Analysis/data/positive-words.txt', what='character', comment.char=';')
neg_words <- scan('/home/manohar/example/Sentiment_Analysis/data/negative-words.txt', what='character', comment.char=';')

 

Scoring Function:

Below is the main function that calculates the sentiment score, written by Jeffrey Breen (source here!)

score.sentiment = function(sentence, pos.words, neg.words)
{
  require(plyr)
  require(stringr)
  
  score = laply(sentence, function(sentence, pos.words, neg.words) {
    # clean up sentences with R's regex-driven global substitute, gsub():
    sentence = gsub('[[:punct:]]', '', sentence)
    sentence = gsub('[[:cntrl:]]', '', sentence)
    sentence = gsub('\\d+', '', sentence)
    # and convert to lower case:
    sentence = tolower(sentence)
    
    # split into words. str_split is in the stringr package
    word.list = str_split(sentence, '\\s+')
    # sometimes a list() is one level of hierarchy too much
    words = unlist(word.list)
    
    # compare our words to the dictionaries of positive & negative terms
    pos.matches = match(words, pos.words)
    neg.matches = match(words, neg.words)
    
    # match() returns the position of the matched term or NA
    # we just want a TRUE/FALSE:
    pos.matches = !is.na(pos.matches)
    neg.matches = !is.na(neg.matches)
    
    # and conveniently enough, TRUE/FALSE will be treated as 1/0 by sum():
    score = sum(pos.matches) - sum(neg.matches)
    
    return(score)
  }, pos.words, neg.words)
  
  score.df = data.frame(score)
  return(score.df)
}
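
Before wiring this function into MapReduce, it is worth sanity-checking it locally on a couple of made-up sentences (illustrative input only, assuming the lexicons from the previous step are loaded):

# quick local check of the scoring function
sample.reviews = c('great service and very helpful staff',
                   'terrible experience and poor support')
score.sentiment(sample.reviews, pos_words, neg_words)
# expect a positive score for the first sentence and a negative score for the second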

 

Mapper:

This is the first stage of the MapReduce process. In the classic word-count example, the mapper splits each line into separate words (tokenizing the string) and, for each word seen, outputs the word together with a count of 1. The map phase runs in parallel because Hadoop uses a divide-and-conquer approach, distributing the computation, processing and data across the cluster so the job executes as fast as possible. Since we are running on a single node here, the code and logic are fairly simple: our mapper drops the header rows and emits each review text for scoring.

The mapper gets keys and values from the input formatter. In our case, the key is NULL and the value is a data.frame from read.table()

mapper = function(key, val.df) {  
  # Remove header lines
  val.df = subset(val.df, Review != 'Review')
  output.key = data.frame(Review = as.character(val.df$Review),stringsAsFactors=F)
  output.val = data.frame(val.df$Review)
  return( keyval(output.key, output.val) )
}

 

Reducer:

In the classic word-count example, the reduce phase sums up the number of times each word was seen and writes that count together with the word as output. In our case, the reducer applies the scoring function to each review and emits the review along with its sentiment score.

Before the reducer runs, two internal sub-steps take place: shuffle and sort. Shuffle groups the values belonging to the same key into a single unit, and sort orders the keys before they are handed to the reducer.

reducer = function(key, val.df) {  
  output.key = key
  output.val = data.frame(score.sentiment(val.df, pos_words, neg_words))
  return( keyval(output.key, output.val) )  
}

 

Running your Map-Reduce:

The following helper function wires the mapper and reducer together and runs the MapReduce job.

mr.sa = function (input, output) {
  mapreduce(input = input,
            output = output,
            input.format = asa.csv.input.format,
            map = mapper,
            reduce = reducer,
            verbose=T)
}
out = mr.sa(hdfs.data, hdfs.out)

------- output on screen ------
> out = mr.sa(hdfs.data, hdfs.out)
16/09/09 10:11:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
packageJobJar: [/usr/local/hadoop/data/hadoop-unjar2099064477903127749/] [] /tmp/streamjob6583314935744487158.jar tmpDir=null
16/09/09 10:11:33 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8050
16/09/09 10:11:33 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8050
16/09/09 10:11:36 INFO mapred.FileInputFormat: Total input paths to process : 3
16/09/09 10:11:36 INFO mapreduce.JobSubmitter: number of splits:4
16/09/09 10:11:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1473394634383_0002
16/09/09 10:11:39 INFO impl.YarnClientImpl: Submitted application application_1473394634383_0002
16/09/09 10:11:39 INFO mapreduce.Job: The url to track the job: http://manohar-dt:8088/proxy/application_1473394634383_0002/
16/09/09 10:11:39 INFO mapreduce.Job: Running job: job_1473394634383_0002
16/09/09 10:11:58 INFO mapreduce.Job: Job job_1473394634383_0002 running in uber mode : false
16/09/09 10:11:58 INFO mapreduce.Job:  map 0% reduce 0%
16/09/09 10:12:27 INFO mapreduce.Job:  map 48% reduce 0%
16/09/09 10:12:37 INFO mapreduce.Job:  map 100% reduce 0%
16/09/09 10:13:22 INFO mapreduce.Job:  map 100% reduce 100%
16/09/09 10:13:35 INFO mapreduce.Job: Job job_1473394634383_0002 completed successfully
16/09/09 10:13:36 INFO mapreduce.Job: Counters: 50
    File System Counters
        FILE: Number of bytes read=1045750
        FILE: Number of bytes written=2580683
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=679293
        HDFS: Number of bytes written=578577
        HDFS: Number of read operations=15
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=4
        Launched reduce tasks=1
        Data-local map tasks=4
        Total time spent by all maps in occupied slots (ms)=148275
        Total time spent by all reduces in occupied slots (ms)=53759
        Total time spent by all map tasks (ms)=148275
        Total time spent by all reduce tasks (ms)=53759
        Total vcore-seconds taken by all map tasks=148275
        Total vcore-seconds taken by all reduce tasks=53759
        Total megabyte-seconds taken by all map tasks=151833600
        Total megabyte-seconds taken by all reduce tasks=55049216
    Map-Reduce Framework
        Map input records=9198
        Map output records=1818
        Map output bytes=1037580
        Map output materialized bytes=1045768
        Input split bytes=528
        Combine input records=0
        Combine output records=0
        Reduce input groups=1616
        Reduce shuffle bytes=1045768
        Reduce input records=1818
        Reduce output records=1720
        Spilled Records=3636
        Shuffled Maps =4
        Failed Shuffles=0
        Merged Map outputs=4
        GC time elapsed (ms)=1606
        CPU time spent (ms)=22310
        Physical memory (bytes) snapshot=1142579200
        Virtual memory (bytes) snapshot=5270970368
        Total committed heap usage (bytes)=947912704
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=678765
    File Output Format Counters 
        Bytes Written=578577
    rmr
        reduce calls=1616
16/09/09 10:13:36 INFO streaming.StreamJob: Output directory: /Sentiment_Analysis/out

 

Load output from Hadoop into an R data frame:

Read the output from the Hadoop output folder into an R variable and convert it to a data frame for further processing.

results = from.dfs(out)

# put the result in a dataframe
df = sapply(results,c)
df = data.frame(df) # convert to dataframe
colnames(df) <- c('Review', 'score') # assign column names

print(head(df))

------- Result -----
                                             Review score
1                              Very good experience     1
2                         It was a classic scenario     1
3              I have chosen  for all my insurances     0
4           As long as customers understand the t&c     0
5    time will tell if  live up to our expectations     0
6 Good price  good customer service happy to help..     3

 

Now we have a sentiment score for each review. This opens up opportunities for further analysis, such as classifying emotion and polarity, and plenty of visualization for insight. Please see my previous post here to learn more about this.
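
For a first look at the results, a simple distribution plot of the scores is often enough. Here is a small sketch; the unlist/as.numeric step guards against list columns coming back from from.dfs():

# coerce the score column to numeric and inspect its distribution
scores = as.numeric(unlist(df$score))
summary(scores)
hist(scores, main = 'Sentiment score distribution', xlab = 'score')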

You can find the full working code in my GitHub account here!

 

R-Hadoop Integration on Ubuntu

Contents

  • About the Manual
  • Pre-requisites
  • Install R Base on Hadoop
  • Install R Studio on Hadoop
  • Install RHadoop packages

RHadoop is a collection of four R packages that allow users to manage and analyze data with Hadoop.

  1. plyrmr – higher-level, plyr-like data processing for structured data, powered by rmr
  2. rmr – functions providing Hadoop MapReduce functionality in R
  3. rhdfs – functions providing file management of the HDFS from within R
  4. rhbase – functions providing database management for the HBase distributed database from within R

This manual covers R and Hadoop 2.4.0 integration on Ubuntu 14.04.

Pre-requisites:

We assume that the following two are up and running before starting the R and Hadoop integration:

Ubuntu 14.04

Hadoop 2.x +

Read my blog post on setting-up-a-single-node-hadoop-cluster to learn how.

Once the Hadoop installation is done, make sure that all the processes are running: run the command jps in your terminal, and the result should look similar to the screenshot below:

[Screenshot: jps output listing the running Hadoop processes]

R installation

Step 1: Open the Ubuntu Software Center.


Step 2: Open the Ubuntu Software Center in full-screen mode (if the window is too small, the search option may not be visible). Search for R-base, click on the first result, and click Install.


Step 3: Once the installation is done, open your terminal. Type the command R and the R console will open.

 

You can perform any operation in this R console; for example, to plot a simple graph:

plot(seq(1,1000,2.3))

The output of this plot function is shown in the screenshot below:

[Screenshot: output of the plot() command]

Step 4: To exit the R console, type the command

q()

Type y to save the workspace, n to exit without saving, or c to cancel and stay in the current session.

Step 5: Now we install RStudio on Ubuntu.

  • Open your browser and download RStudio. I downloaded RStudio 0.98.953 for Debian 6+/Ubuntu 10.04+ (32-bit), which downloads as the file rstudio-0.98.953-amd32.deb.


Go to the Downloads folder, right-click the downloaded file, open it with the Ubuntu Software Center, and click Install.


Open a terminal and type R to confirm the R console works; you can now also launch RStudio.


Install RHadoop packages

Step 1: Install Thrift:

sudo apt-get install libboost-dev libboost-test-dev libboost-program-options-dev libevent-dev automake libtool flex bison pkg-config g++ libssl-dev

$ cd /tmp

If the command below does not work, download the Thrift tarball manually from the same URL.

$ wget https://dist.apache.org/repos/dist/release/thrift/0.9.0/thrift-0.9.0.tar.gz

$ tar zxf thrift-0.9.0.tar.gz

$ cd thrift-0.9.0/

$ ./configure

$ make

$ sudo make install

$ thrift --help

 

Step 2: Install supporting R packages:

install.packages(c("rJava", "Rcpp", "RJSONIO", "bitops", "digest", "functional", "stringr", "plyr", "reshape2", "dplyr", "R.methodsS3", "caTools", "Hmisc"), lib="/usr/local/R/library")

Step 3: Download the packages below from https://github.com/RevolutionAnalytics/RHadoop/wiki/Downloads:

rmr2

rhdfs

rhbase

plyrmr

In the R terminal, run the commands below to install the packages, replacing the file names with the path to your downloaded files. You may also need to edit the system-wide R environment file:

sudo gedit /etc/R/Renviron
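
The usual reason for editing /etc/R/Renviron at this point is to make the Hadoop paths visible to every R session (including RStudio). Assuming the install locations used earlier in this manual, you would append lines along these lines:

HADOOP_CMD=/usr/local/hadoop/bin/hadoop
HADOOP_STREAMING=/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar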

Install the RHadoop packages (rhdfs, rhbase, rmr2 and plyrmr):

install.packages("rhdfs_1.0.8.tar.gz", repos=NULL, type="source")

install.packages("rmr2_3.1.2.tar.gz", repos=NULL, type="source")

install.packages("plyrmr_0.3.0.tar.gz", repos=NULL, type="source")

install.packages("rhbase_1.2.1.tar.gz", repos=NULL, type="source")
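
Once the packages are installed, a quick smoke test that does not touch HDFS is to run a tiny rmr2 job on its local backend (a minimal sketch, not part of the original install steps):

library(rmr2)
rmr.options(backend = 'local')      # run entirely in local mode, no Hadoop needed
small.ints = to.dfs(1:10)           # push a small vector into the backend
result = mapreduce(input = small.ints,
                   map = function(k, v) keyval(v, v^2))
from.dfs(result)                    # keys 1:10 with their squares as values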

References

You'll find a YouTube video and step-by-step instructions on installing R on Hadoop at the following links.

RDataMining: R on Hadoop – step-by-step instructions

URL: http://www.rdatamining.com/tutorials/rhadoop

YouTube: Word count MapReduce program in R

URL: http://www.youtube.com/watch?v=hSrW0Iwghtw

Revolution Analytics: RHadoop packages

URL: https://github.com/RevolutionAnalytics/RHadoop/wiki

Install R-base Guide

URL: http://www.sysads.co.uk/2014/06/install-r-base-3-1-0-ubuntu-14-04/

 

In the next blog post I'll show a sample sentiment analysis using MapReduce in R with the rmr package.

 

Setting up a Single Node Hadoop Cluster

Step By Step Hadoop Installation Guide

Setting up Single Node Hadoop Cluster on Windows over VM

Contents

  • Objective
  • Current Environments
  • Download VM and Ubuntu 14.04
  • Install Ubuntu on VM
  • Install Hadoop 2.4 on Ubuntu 14.04

 

Objective: This document will help you set up Hadoop 2.4.0 on Ubuntu 14.04 running in a virtual machine on a Windows host.

Current environment includes:

  • Windows XP/7 – 32 bit
  • VM Player (Non-commercial use only)
  • Ubuntu 14.04 32 bit
  • Java 1.7
  • Hadoop 2.4.0

Download and Install VM Player from the link https://www.vmware.com/tryvmware/?p=player

Download Ubuntu 14.04 iso file from the link: http://www.ubuntu.com/download/desktop

Download the list of Hadoop commands for reference from the following link: http://hadoop.apache.org/docs/r1.0.4/commands_manual.pdf (Don't be afraid of this file; it is just for your reference, to help you learn more about important Hadoop commands.)

Install Ubuntu in VM:

  • Click on Create a New Virtual Machine
  • Browse and select the Ubuntu iso file.
  • Personalize Linux by providing appropriate details.
  • Follow through the wizard steps to finish installation.


Install Hadoop 2.4 on Ubuntu 14.04

Step 1: Open Terminal


Step 2: Download the Hadoop tar file by running the command below in the terminal:

wget http://mirror.fibergrid.in/apache/hadoop/common/stable/hadoop-2.7.2.tar.gz

Step 3: Extract the tar file with the command: tar -xzf hadoop-2.7.2.tar.gz

Step 4: Let’s move everything into a more appropriate directory:

sudo mv hadoop-2.7.2/ /usr/local

cd /usr/local

sudo ln -s hadoop-2.7.2/ hadoop

Let's also create a directory to store Hadoop data for later use:

mkdir /usr/local/hadoop/data

 

Step 5: Set up the user and permissions (replace manohar with your user ID):

sudo addgroup hadoop

sudo adduser --ingroup hadoop manohar

sudo chown -R manohar:hadoop /usr/local/hadoop/

Step 6: Install ssh:

sudo apt-get install ssh

ssh-keygen -t rsa -P ""

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Step 7: Install Java:

sudo apt-get update

sudo apt-get install default-jdk

sudo gedit ~/.bashrc

This will open the .bashrc file in a text editor. Go to the end of the file and paste/type the following content in it:

#HADOOP VARIABLES START

export HADOOP_HOME=/usr/local/hadoop

export JAVA_HOME=/usr

export HADOOP_INSTALL=/usr/local/hadoop

export PATH=$PATH:$HADOOP_INSTALL/bin

export PATH=$PATH:$HADOOP_INSTALL/sbin

export HADOOP_MAPRED_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_HOME=$HADOOP_INSTALL

export HADOOP_HDFS_HOME=$HADOOP_INSTALL

export YARN_HOME=$HADOOP_INSTALL

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"

export HADOOP_PREFIX=$HADOOP_INSTALL

export HADOOP_CMD=$HADOOP_INSTALL/bin/hadoop

export HADOOP_STREAMING=$HADOOP_INSTALL/share/hadoop/tools/lib/hadoop-streaming-2.7.2.jar

#HADOOP VARIABLES END


After saving and closing the .bashrc file, execute the following command so that your system recognizes the newly created environment variables:

source ~/.bashrc

Putting the above content in the .bashrc file ensures that these variables are available in every new terminal session.

Step 8:

Unfortunately, Hadoop and IPv6 don't play nicely together, so we'll have to disable IPv6. To do this, open /etc/sysctl.conf:

sudo gedit /etc/sysctl.conf

and add the following lines to the end of the file:

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1


Step 9: Editing /usr/local/hadoop/etc/hadoop/hadoop-env.sh:

 sudo gedit /usr/local/hadoop/etc/hadoop/hadoop-env.sh

In this file, locate the line that exports the JAVA_HOME variable and change it from export JAVA_HOME=${JAVA_HOME} to the JAVA_HOME you set in your .bashrc (for us, export JAVA_HOME=/usr).

Also, change this line:

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

to:

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.library.path=$HADOOP_PREFIX/lib"

And finally, add the following line:

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native

Step 10: Editing /usr/local/hadoop/etc/hadoop/core-site.xml:

sudo gedit /usr/local/hadoop/etc/hadoop/core-site.xml

In this file, enter the following content in between the <configuration></configuration> tag:

<property>

<name>fs.default.name</name>

<value>hdfs://localhost:9000</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>/usr/local/hadoop/data</value>

</property>


Step 11: Editing /usr/local/hadoop/etc/hadoop/yarn-site.xml:

sudo gedit /usr/local/hadoop/etc/hadoop/yarn-site.xml

In this file, enter the following content in between the <configuration></configuration> tag:

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

<property>

<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>

<value>org.apache.hadoop.mapred.ShuffleHandler</value>

</property>

<property>

<name>yarn.resourcemanager.resource-tracker.address</name>

<value>localhost:8025</value>

</property>

<property>

<name>yarn.resourcemanager.scheduler.address</name>

<value>localhost:8030</value>

</property>

<property>

<name>yarn.resourcemanager.address</name>

<value>localhost:8050</value>

</property>

The yarn-site.xml file should look something like this:

[Screenshot: yarn-site.xml configuration]

Step 12: Creating and Editing /usr/local/hadoop/etc/hadoop/mapred-site.xml:

By default, the /usr/local/hadoop/etc/hadoop/ folder contains the mapred-site.xml.template file, which has to be copied to a file named mapred-site.xml. This file is used to specify which framework is being used for MapReduce.

This can be done using the following command:

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

Once this is done, open the newly created file with following command:

sudo gedit /usr/local/hadoop/etc/hadoop/mapred-site.xml

In this file, enter the following content in between the <configuration></configuration> tag:

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

The mapred-site.xml file should look something like this:

[Screenshot: mapred-site.xml configuration]

Step 13: Editing /usr/local/hadoop/etc/hadoop/hdfs-site.xml:

 The /usr/local/hadoop/etc/hadoop/hdfs-site.xml has to be configured for each host in the cluster that is being used. It is used to specify the directories which will be used as the namenode and the datanode on that host.

Before editing this file, we need to create two directories which will contain the namenode and the datanode for this Hadoop installation. This can be done using the following commands:

sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode

sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode

Open the /usr/local/hadoop/etc/hadoop/hdfs-site.xml file with following command:

sudo gedit /usr/local/hadoop/etc/hadoop/hdfs-site.xml

In this file, enter the following content in between the <configuration></configuration> tag:

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

<property>

<name>dfs.namenode.name.dir</name>

<value>file:/usr/local/hadoop_store/hdfs/namenode</value>

</property>

<property>

<name>dfs.datanode.data.dir</name>

<value>file:/usr/local/hadoop_store/hdfs/datanode</value>

</property>

The hdfs-site.xml file should look something like this:

[Screenshot: hdfs-site.xml configuration]

Step 14: Format the New Hadoop Filesystem:

After completing all the configuration outlined in the above steps, the Hadoop filesystem needs to be formatted so that it can start being used. This is done by executing the following command:

hdfs namenode -format

Note: This only needs to be done once before you start using Hadoop. If this command is executed again after Hadoop has been used, it’ll destroy all the data on the Hadoop filesystem.

Step 15: Start Hadoop

All that remains to be done is starting the newly installed single node cluster:

start-dfs.sh

While executing this command, you’ll be prompted twice with a message similar to the following:

Are you sure you want to continue connecting (yes/no)?

Type in yes for both these prompts and press the enter key. Once this is done, execute the following command:

start-yarn.sh

Executing the above two commands will get Hadoop up and running. You can verify this by typing in the following command:

jps

Executing this command should show you something similar to the following:

[Screenshot: jps output listing the running Hadoop processes]

If you see a result similar to the one depicted in the screenshot above, you now have a functional instance of Hadoop running on your virtual machine.