Big data is defined as high-volume, high-velocity, and high-variety
information that has the potential to be mined for insight. It can also be
defined as a situation in which the volume, velocity, and variety of structured,
semi-structured, and unstructured data exceed an organization's storage or
compute capacity. Storing such data typically requires multiple
data centers. In order to discuss research issues
in big data, it is first necessary to understand the characteristics
that define it.
Volume indicates the massive amount of
data (from terabytes to zettabytes in size). It is tied to advances in
data storage and network technologies. Rapid improvements in data processing
technology, network bandwidth, and remote data access have made data
generation and storage capacity grow exponentially.
Variety refers to the fact that data comes from various
sources in several formats:
structured, semi-structured, and
unstructured. In the big data era, not only is the amount of data
growing explosively, but the data types are
also becoming more numerous. Data includes simple text documents, sensor
data, audio, video, maps, and many other forms of information [51].
Velocity refers to the speed at which data is produced and processed.
Huge volumes of data are often generated at high speed, for example by sensor
arrays or event streams, and must be processed in
real time, near real time, in batches, or as streams, as in the case of visualization.
Value: having access to big data is all well and good, but unless we can transform it
into value it is useless. The real value of
these datasets emerges when the data are
integrated. Integrating data sets from various sources allows one to detect information and trends that
cannot be uncovered by looking at any data set in isolation. For this reason, Value is often considered the most important V of big data [50].
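As a toy illustration of this integration point (all names and values here are hypothetical, not drawn from the text), joining two sources on a shared key can expose a per-group trend that neither source reveals on its own:

```python
import pandas as pd

# Source 1: sensor readings (assumed schema: device id, temperature).
# Alone, it says nothing about where the hot devices are.
readings = pd.DataFrame({
    "device": ["a", "b", "c", "d"],
    "temp_c": [21.0, 35.5, 20.5, 36.0],
})

# Source 2: an asset registry (assumed schema: device id, room).
# Alone, it contains no measurements at all.
registry = pd.DataFrame({
    "device": ["a", "b", "c", "d"],
    "room":   ["office", "server", "office", "server"],
})

# Integration step: join the two sources on the shared key.
merged = readings.merge(registry, on="device")

# The per-room average temperature is only visible after integration:
# the server room runs markedly hotter than the office.
room_avg = merged.groupby("room")["temp_c"].mean()
print(room_avg)
```

The trend (server room ~35.8 °C vs office ~20.8 °C) exists only in the combined view, matching the claim that integrated data sets reveal information that isolated ones cannot.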
Veracity indicates the biases, noise, and abnormality in data. Stored data must be mined so that it is meaningful to the
problem being analyzed. Veracity is not just about data
quality; it is about data understandability.