
BIG DATA & HADOOP

Big Data refers to collections of data so large that they cannot be processed using traditional computing techniques, yet organizations must store and analyze them to make better business decisions.

Traditional systems used to store and analyze data become inadequate when faced with such huge volumes of data.

This is where Hadoop comes into the picture. Companies that handle huge volumes of data have started implementing Hadoop to collect, store, process and retrieve that data, gain insight and make better business decisions.

Core Concepts and Terminologies:

Big data is a collection of large datasets that cannot be processed using traditional computing techniques. Big data is not merely the data itself; it also encompasses the various tools, techniques and frameworks used to work with it.

Hadoop is an Apache open source framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models. A Hadoop application runs in an environment that provides distributed storage and computation across clusters of computers. Hadoop is designed to scale from a single server to thousands of machines, each offering local computation and storage.

Hadoop Architecture:

The Hadoop framework includes the following four modules:

Hadoop Common: The Java libraries and utilities required by the other Hadoop modules. These libraries provide filesystem and OS-level abstractions and contain the Java files and scripts needed to start Hadoop.

Hadoop YARN: This is a framework for job scheduling and cluster resource management.

Hadoop Distributed File System (HDFS™): A distributed file system that provides high-throughput access to application data.

Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

MapReduce:

Hadoop MapReduce is a software framework for easily writing applications that process vast amounts of data in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.

The term MapReduce refers to the two distinct tasks that Hadoop programs perform:

The Map Task: This is the first task. It takes the input data and converts it into an intermediate data set in which individual elements are broken down into tuples (key/value pairs).

The Reduce Task: This task takes the output of the map tasks as its input and combines those data tuples into a smaller set of tuples. The reduce task is always performed after the map task.

Typically both the input and the output are stored in a file system. The framework takes care of scheduling tasks, monitoring them and re-executing failed tasks.
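As a concrete illustration, the sketch below shows the classic word-count job written against the standard org.apache.hadoop.mapreduce API: the map task emits (word, 1) pairs and the reduce task sums them per word. Class names, the combiner choice and the input/output paths are illustrative, not part of the course material.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map task: break each input line into (word, 1) key/value pairs.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce task: sum the counts emitted by the map tasks for each word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    // Driver: configures the job and hands it to the framework for scheduling.
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Once compiled and packaged into a jar, a job like this is typically submitted with the hadoop jar command, passing the input and output directories as arguments.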

The MapReduce framework consists of a single master JobTracker and one slave TaskTracker per cluster node. The master is responsible for resource management, tracking resource consumption and availability, and scheduling the job's component tasks on the slaves, monitoring them and re-executing failed tasks. The slave TaskTrackers execute the tasks as directed by the master and periodically report task status back to it.

The JobTracker is a single point of failure for the Hadoop MapReduce service: if the JobTracker goes down, all running jobs are halted.

Hadoop Distributed File System:

Hadoop can work directly with any mountable distributed file system such as Local FS, HFTP FS, S3 FS, and others, but the most common file system used by Hadoop is the Hadoop Distributed File System (HDFS).

The Hadoop Distributed File System (HDFS) is based on the Google File System (GFS) and provides a distributed file system designed to run on large clusters (thousands of computers) of commodity machines in a reliable, fault-tolerant manner.

HDFS uses a master/slave architecture in which the master is a single NameNode that manages the file system metadata, and the slaves are one or more DataNodes that store the actual data.

A file in the HDFS namespace is split into several blocks, and those blocks are stored on a set of DataNodes. The NameNode determines the mapping of blocks to DataNodes. The DataNodes handle read and write operations with the file system, as well as block creation, deletion and replication based on instructions from the NameNode.
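The short sketch below is a minimal, hypothetical example of talking to HDFS from Java through the FileSystem API; the NameNode address and file paths are placeholders that would need to match an actual cluster. It writes a small file (the NameNode chooses the DataNodes that hold the blocks) and reads it back.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; block placement on DataNodes is
        // handled transparently by the framework.
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020");

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/user/demo/hello.txt");

        // Write: the client asks the NameNode where to place blocks,
        // then streams the data to the chosen DataNodes.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("Hello HDFS\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read: the NameNode returns the block locations, and the client
        // reads the blocks directly from the DataNodes.
        try (FSDataInputStream in = fs.open(file);
             BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
            System.out.println(reader.readLine());
        }

        fs.close();
    }
}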

Career Scope:

Organizations are looking for professionals with the right mix of excellent analytical skills and hands-on experience with advanced technologies like Hadoop.

According to a recent McKinsey report, the industry will need more than 200,000 data scientists between 2014 and 2016.
