In recent years, researchers in many areas have regularly encountered limitations imposed by large data sets. Big data refers to high-volume, high-velocity, and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making; the term typically denotes data volumes in the range of Exabytes (10^18 bytes) and beyond. Such volumes exceed the capacity of current on-line storage and processing systems. Data, information, and knowledge are being created and collected at a rate that is rapidly approaching the Exabyte range, and their creation and aggregation are accelerating and will approach the Zettabyte range within a few years. Data sets grow in size in part because they are increasingly gathered from ubiquitous information-sensing mobile devices, aerial sensory technologies, software logs, cameras, microphones, Radio-Frequency Identification (RFID) readers, and Wireless Sensor Networks. Big data usually comprises data sets whose size is beyond the ability of commonly used software tools to capture, curate, manage, and process within a tolerable elapsed time. Big data sizes are a constantly moving target; as of 2012 they ranged from a few dozen Terabytes to many Petabytes in a single data set. Big data is difficult to work with using most Relational Database Management Systems and desktop statistics and visualization packages, requiring instead "massively parallel software running on tens, hundreds, or even thousands of servers". This paper analyzes the issues and challenges raised by recent research activity on big data concepts.
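As a minimal illustration of the "massively parallel" processing model mentioned above, the sketch below applies the split-apply-combine (MapReduce-style) pattern to a word-count task. It is only a local, single-machine stand-in under assumed inputs: the data set, the number of partitions, and the function names are illustrative, and cluster frameworks such as Hadoop MapReduce or Spark distribute the same pattern across many servers rather than local worker processes.

```python
# Sketch of the split-apply-combine pattern behind parallel big data processing.
# All data, partition counts, and names here are illustrative assumptions.
from collections import Counter
from multiprocessing import Pool


def map_partition(lines):
    """Map step: compute word counts for one partition of the data."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts


def reduce_counts(partial_counts):
    """Reduce step: merge per-partition counts into a global result."""
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total


if __name__ == "__main__":
    # Stand-in for a data set too large for one process: a list of text lines.
    data = ["big data needs parallel processing"] * 1_000_000

    # Split the data into partitions; a cluster framework would place these
    # partitions on different servers instead of local worker processes.
    n_partitions = 8
    partitions = [data[i::n_partitions] for i in range(n_partitions)]

    with Pool(processes=n_partitions) as pool:
        partials = pool.map(map_partition, partitions)

    print(reduce_counts(partials).most_common(3))
```

The design point is that each partition is processed independently (the map step), so the work scales out by adding workers, and only the small per-partition summaries are combined at the end (the reduce step).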