One of the major challenges of the Big Data era is that it has made a great amount and variety of open big datasets available for analysis by non-corporate data analysts as well, such as research scientists, data journalists, policy makers, SMEs and individuals. Transforming a data-curious user into someone who can competently access, analyze and consume that data is now even more burdensome, as a great number of users have little or no expertise or support in the data (pre)processing part.
Self-service data analytics is a recent trend in visual analytics that enables corporate business users to access and work with data even though they do not have a background in statistical analysis, business intelligence (BI) or data mining.
Self-service visual analytics is a new paradigm, widely promoted in modern corporate environments, in which business users are enabled and encouraged to directly manipulate (explore, blend, analyze) underlying data in rich visual ways, in order to derive insights from business information as quickly and efficiently as possible. Allowing less tech-savvy end users to make decisions based on their own queries and analyses frees up the organization’s business intelligence and information technology (IT) teams from the tedious work of data preparation.
The aim of the VisualFacts project is to develop a scalable platform that provides self-service visual analytics capabilities to a wide range of corporate and non-corporate users, allowing them to access, explore and analyze open and privately held data, and to collaborate on the analytic results of their work by sharing, annotating and reusing them in the form of visual facts.
Current visual platforms and solutions (such as Tableau, SAS, Spotfire, QlikView, etc.) do not target the above communities, but focus mainly on closed-world corporate environments.
The main goal of VisualFacts is to democratize self-service visual analytics: to enable a greater number of data scientists with diverse analytic needs to seamlessly and collaboratively perform data analysis in the most intuitive and productive way, without the support of expert IT users in the data preparation, analysis and optimization phases. This involves innovative research work addressing the following questions:
VisualFacts is a three-year project funded by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the 1st Call for H.F.R.I. Research Projects for the support of Post-doctoral Researchers; the hosting organization is the ATHENA Research Center. Its main objectives are:
VisualFacts research activities are structured around the following areas:
The main objective of VisualFacts is to provide an easy-to-use interface that allows the visual exploration of big, heterogeneous data, the visual application of analytics (e.g., trend analysis, visual recommendations and outlier detection) and the collaborative sharing of visual artefacts. First, VisualFacts will offer a variety of chart types (bar, line, scatter, heat map, network diagram, tree map, etc.), which can be organized in publicly available dashboards. VisualFacts will develop all the functionality and models required to support collaborative editing and publishing of dashboards.

Next, even though data visualization can convey a lot of information about correlated variables, outlier values and existing trends in an intuitive way, applying data analytics to enrich visualizations can further help reveal hidden insights. Thus, a second contribution is that VisualFacts will offer visual ways to perform analytic functions on the charts, such as regression analysis for trends or outlier detection in scatter diagrams, and will provide exploration assistance in the form of visual recommendations. The latter addresses a common problem when dealing with big data: potentially important parts may never be explored.

Moreover, determining the most suitable visualization type (pie chart, map, etc.) for the scenario at hand usually proves to be a tedious task, as users might not know in advance the types of data under analysis. To jump-start the data exploration process and highlight areas with such patterns of interest, VisualFacts will provide visualization recommendations based on the characteristics of the data (e.g., data types, statistical properties). Furthermore, it will support interactive visual operations (e.g., pan, zoom, filter) for addressing visual clutter and information overload in exploration scenarios.
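As a rough illustration of how characteristics-based recommendations might work, the sketch below maps simple column properties (data type, cardinality) to a chart type. The rules, thresholds and function name are assumptions for this example, not the project's actual recommendation model.

```python
# Illustrative rules for recommending a chart type from column characteristics.
def recommend_chart(columns):
    """columns: list of (name, dtype, cardinality) tuples; dtype is one of
    'temporal', 'numeric', 'categorical' (a simplified, assumed typing)."""
    dtypes = [dtype for _, dtype, _ in columns]
    if "temporal" in dtypes and "numeric" in dtypes:
        return "line"                    # trends over time
    if dtypes.count("numeric") >= 2:
        return "scatter"                 # correlations / outlier detection
    if "categorical" in dtypes and "numeric" in dtypes:
        card = next(c for _, d, c in columns if d == "categorical")
        return "bar" if card <= 12 else "treemap"   # few vs. many categories
    return "table"                       # fallback when no rule fires

print(recommend_chart([("day", "temporal", 365), ("sales", "numeric", 300)]))  # line
```

A real recommender would also weigh statistical properties (distribution shape, correlation) rather than types alone.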
VisualFacts will allow the effective abstraction and summarization of the data under analysis by providing a) dynamically calculated statistical information regarding the profile of the data, and b) hierarchical approaches for multi-level navigation that offer an intuitive way to find areas of interest in the dataset. These hierarchical views will be constructed on the fly by considering schema characteristics, as well as user preferences and environment parameters (e.g., screen resolution). Finally, to minimize the overall visual analysis time, VisualFacts will employ data caching and prefetching techniques. Using information about previous user interaction as well as statistics about the data, the system will attempt to identify which parts of the dataset are more likely to be requested by future user queries and bring these to the cache.
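A minimal sketch of interaction-driven caching and prefetching, assuming map-style tiles addressed by grid coordinates: after each pan, the tile lying in the direction of movement is speculatively fetched into an LRU cache. The tile model, class and method names are illustrative assumptions.

```python
from collections import OrderedDict

class TileCache:
    """LRU cache over data tiles, with direction-based prefetching (sketch)."""

    def __init__(self, capacity, fetch):
        self.capacity = capacity
        self.fetch = fetch               # function: tile id -> tile data
        self.cache = OrderedDict()       # ordered oldest -> newest

    def get(self, tile):
        if tile not in self.cache:
            self.cache[tile] = self.fetch(tile)      # cache miss: load tile
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)       # evict least recently used
        self.cache.move_to_end(tile)                 # mark as most recently used
        return self.cache[tile]

    def on_pan(self, current, direction):
        """Serve the current tile, then prefetch the next one on the pan path."""
        x, y = current
        dx, dy = direction
        data = self.get(current)
        self.get((x + dx, y + dy))       # speculative prefetch
        return data
```

In the project's setting, the prediction would draw on richer interaction history and data statistics, not just the last pan direction.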
One of the objectives of VisualFacts is the timely delivery of smart visual analytics over dirty graph data. To address this objective, the visual analytics functionality must be backed by proper data structures and retrieval techniques that can support the specificities and complexity of visual analytics processing. This calls for novel data management techniques, as existing ones fall short because of three main drawbacks: (i) inability to address schema heterogeneity proactively, (ii) inability to efficiently process highly complex queries, especially over graph data, and (iii) a general lack of algorithms that integrate query processing with entity resolution. Furthermore, the specific requirements set by the functionality of VisualFacts call for data structures with hierarchical, dynamic, and metadata-rich characteristics, so that scenarios of hierarchical exploration, visual data profiling, and visualization recommendation can be supported.

For these reasons, VisualFacts will build on the Extended Characteristic Set (ECS) data structure, an indexing structure for graph data that targets heterogeneity and join-heavy query processing. ECS satisfies the aforementioned requirements for the following reasons: (i) it is decoupled from the explicit schema of the data and is instead built on the implicit, emergent schema, making it able to capture heterogeneity at its core; (ii) it is inherently hierarchical, as it supports generalizations and specializations in its structure; and (iii) it partitions the data into relatively small and semantically rich parts, and is thus a good candidate for pruning unnecessary records while performing entity resolution and other cleansing tasks at query time. However, in its present form, ECS indexing cannot efficiently address data diversity, as this leads to the creation of many sparse ECSs that are costly to fetch and process.
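To make the idea of emergent-schema indexing concrete, the following sketch groups the subjects of a small triple set by the exact set of predicates they carry, which is the core of a characteristic-set index. The toy data and function name are assumptions; the actual ECS structure is richer (extended with object information and hierarchical relations).

```python
from collections import defaultdict

def characteristic_sets(triples):
    """Group subjects by the exact set of predicates they carry (sketch)."""
    props = defaultdict(set)                  # subject -> set of predicates
    for s, p, o in triples:
        props[s].add(p)
    index = defaultdict(list)                 # predicate set -> list of subjects
    for s, ps in props.items():
        index[frozenset(ps)].append(s)
    return index

# Toy graph: subjects a and b share the emergent "schema" {name, age};
# subject c forms its own sparse characteristic set.
triples = [("a", "name", "Ann"), ("a", "age", 30),
           ("b", "name", "Bob"), ("b", "age", 25),
           ("c", "name", "Cy")]
index = characteristic_sets(triples)
```

Because each partition holds subjects with an identical implicit schema, a query touching a given set of predicates only needs to visit the matching characteristic sets.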
For this reason, VisualFacts will extend ECS indexing to detect and merge similar ECSs. We will design and implement techniques for detecting hierarchical relations between merged ECSs, enabling faster access to related data records and thus efficient real-time visual exploration scenarios. To address scalability, the indexes will be implemented in a distributed setting, over multiple nodes; to this end, VisualFacts will implement and deploy parallel query processing algorithms. The index will be adaptively updated in order to provide fast access to raw data. These updates will be driven by the incremental emergent schema detection process and by feedback from the entity resolution results, which will enable further linkage and merging of existing ECSs.
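A hedged sketch of the merging step: characteristic sets whose predicate sets overlap heavily (here measured by Jaccard similarity against an assumed threshold) are collapsed into one group, reducing the fragmentation caused by many sparse ECSs. The greedy strategy and threshold are illustrative choices, not the project's algorithm.

```python
def jaccard(a, b):
    """Jaccard similarity of two predicate sets."""
    return len(a & b) / len(a | b)

def merge_similar(ecs_sets, threshold=0.5):
    """Greedily absorb each characteristic set into the first sufficiently
    similar group, starting from the largest sets (sketch)."""
    merged = []
    for ps in sorted(ecs_sets, key=len, reverse=True):
        for group in merged:
            if jaccard(ps, group) >= threshold:
                group |= ps              # absorb into the existing group
                break
        else:
            merged.append(set(ps))       # no similar group: start a new one
    return merged

# {name, age} is absorbed into {name, age, email} (Jaccard 2/3);
# {city} remains a separate group.
groups = merge_similar([frozenset({"name", "age", "email"}),
                        frozenset({"name", "age"}),
                        frozenset({"city"})])
```

Subset relations between merged groups (e.g., {name, age} ⊆ {name, age, email}) are exactly the hierarchical relations the text proposes to detect for faster navigation.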
Delivering quality visual analytics is directly related to the quality of the data. However, the aggregation of data from remote sources often leads to inherent dirtiness, both in the structure (i.e., heterogeneity) and in the content (i.e., duplicates, missing values, etc.). While VisualFacts addresses structural problems with the aforementioned emergent schema detection techniques, content dirtiness must be addressed by Entity Resolution algorithms. To remain self-service, VisualFacts must ensure that these processes take place automatically, without user intervention. Thus, VisualFacts will develop and integrate Entity Resolution algorithms into its data management framework. These algorithms will focus on deduplication and record linkage, which are inherently quadratic problems, as they require comparisons between all entities in the data. To speed up these processes, VisualFacts will employ the entity blocking / Meta-blocking approach, a technique that groups closely related entities into a graph with the aim of pruning redundant and unnecessary comparisons, thereby speeding up the Entity Resolution process [Chri12, PKPN14]. These methods are traditionally applied independently of query evaluation, as a data preprocessing step; on the contrary, VisualFacts will develop ER methods that are directly integrated as operators within the query evaluation process. More specifically, VisualFacts will extend the Meta-blocking technique so that it is deployed over the ECS index, in order to detect probable duplicates at query time. This process will be efficient and fully automated, without requiring any user intervention. To address the high computational cost and complexity of the actual comparisons between similar data, VisualFacts will design and implement a parallelization algorithm for efficient block distribution among the available cloud nodes.
Each node will hold the essential information for executing the comparisons pertaining to its blocks locally, entailing minimal shuffling among nodes. Finally, VisualFacts will implement an update operator for reusing the ER results of each query and enriching the Meta-blocking graph structure with information about entity matchings, in order to improve the quality and performance of future queries.
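The blocking idea can be illustrated with a small sketch: token blocking places each record in one block per token, and a simple meta-blocking-style pruning keeps only candidate pairs that co-occur in several blocks, avoiding the quadratic all-pairs comparison. The toy records, threshold and function names are illustrative assumptions, not the cited algorithms.

```python
from collections import defaultdict
from itertools import combinations

def token_blocks(records):
    """records: dict of id -> text; returns token -> set of record ids."""
    blocks = defaultdict(set)
    for rid, text in records.items():
        for token in text.lower().split():
            blocks[token].add(rid)
    return blocks

def candidate_pairs(blocks, min_common=2):
    """Keep only pairs sharing at least min_common blocks (pruning sketch)."""
    counts = defaultdict(int)
    for ids in blocks.values():
        for pair in combinations(sorted(ids), 2):
            counts[pair] += 1            # co-occurrence count = edge weight
    return {pair for pair, c in counts.items() if c >= min_common}

records = {1: "John Smith London", 2: "Jon Smith London", 3: "Mary Jones Paris"}
pairs = candidate_pairs(token_blocks(records))
# Only (1, 2) survives: those records share the blocks "smith" and "london".
```

Instead of comparing all three pairs of records, only the single surviving pair would be passed to the (expensive) similarity comparison, which is the point of blocking.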
The resulting components will be integrated into a cloud-based platform. The architecture of the platform comprises the Data Layer, the Core Platform Layer and the Presentation Layer. The Data Layer consists of the primary tools and modules related to the physical storage of input data and generated indexes; it will be responsible for deploying the database structures that store the indexed data, along with their indexes, on disk for future querying. The Core Platform Layer is responsible for all the core backend functionality of the platform, addressing issues such as the management of input data, the creation and update of indexes, query optimization and query evaluation, as well as all the methods related to the processing and generation of visual analytics. It consists of three main components: the Data Staging and Indexing component (responsible for all data preparation and indexing tasks), the Query Processing component (which processes and evaluates incoming queries), and the Visual Analytics Processing component (which handles incoming requests from the user interface, including visual analytics queries, and generates visualizations).
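A minimal sketch of how the described layers could fit together, with component names taken from the text but all method signatures and the in-memory storage assumed purely for illustration:

```python
class DataLayer:
    """Physical storage of input data and generated indexes (here: in memory)."""
    def __init__(self):
        self.store = {}

class DataStagingAndIndexing:
    """Data preparation and indexing tasks (sketch)."""
    def ingest(self, data_layer, dataset):
        data_layer.store["raw"] = list(dataset)    # stage + "index" the data

class QueryProcessing:
    """Processing and evaluation of incoming queries (sketch)."""
    def evaluate(self, data_layer, predicate):
        return [r for r in data_layer.store.get("raw", []) if predicate(r)]

class VisualAnalyticsProcessing:
    """Turns UI requests into queries and visualization specs (sketch)."""
    def __init__(self, query_processing, data_layer):
        self.qp = query_processing
        self.dl = data_layer
    def render(self, predicate):
        rows = self.qp.evaluate(self.dl, predicate)
        return {"chart": "bar", "rows": rows}      # placeholder visualization

# Assumed end-to-end flow: ingest -> query -> visualize.
dl = DataLayer()
DataStagingAndIndexing().ingest(dl, [1, 2, 3])
vap = VisualAnalyticsProcessing(QueryProcessing(), dl)
spec = vap.render(lambda r: r > 1)
```

The Presentation Layer would consume the returned visualization spec; it is omitted here since the text describes it only as the user-facing side.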