Friday, May 24, 2013

Business Intelligence 2.0

Business Intelligence 2.0 (BI 2.0) is a set of tools and software for business intelligence that emerged in the mid-2000s. Among other things, these tools enable dynamic querying of real-time corporate data by employees and take a more web- and browser-based approach to such data, in contrast to the proprietary querying tools that characterized earlier business intelligence software.
This change is partly due to the popularization of service-oriented architecture (SOA), which enables flexible, composable and adaptive middleware. In addition, open standards for exchanging data, such as XBRL (Extensible Business Reporting Language), Web Services and various Semantic Web ontologies, make it possible to use data from outside an organization, such as benchmarking information.
Business Intelligence 2.0 is most likely named after Web 2.0, although it takes elements from both Web 2.0 (a focus on user empowerment and community collaboration, technologies like RSS and the concept of mashups), and the Semantic Web, sometimes called "Web 3.0" (semantic integration through shared ontologies to enable easier exchange of data).
According to analytics expert Neil Raden, BI 2.0 also implies a move away from the standard data warehouse that business intelligence tools have used, which "will give way to context, contingency and the need to relate information quickly from many sources."

Lessons from social media

According to president and CEO Greg Nelson, BI 2.0 has much to learn from social media such as Twitter, Facebook and blogs. In his white paper Business Intelligence 2.0: Are we there yet?, he lists lessons and opportunities that BI can take from social media.
Lessons and opportunities:
  From Facebook: a destination site naturally encourages people to log in and participate; a continuous flow of information; an environment in which developers can create their own applications; the ability to post items of interest a user has discovered (reports, graphics, interpretations) to a personal page.
  From Twitter: a real-time, continuous flow of decisions and status about the business, with complex event processing; a platform that evolves through unplanned usage and the organic growth of capabilities; succinct summaries of the state of the business; searchable commentary and word clouds that visualize the "vibe" or sentiment of the business; tags and user comments; the ability to submit noteworthy information (anything on the web deemed newsworthy) and associate it with data or objects.
  From blogs: information can be interpreted and informally published to a group of interested parties.
  From RSS: commentary should be available as an RSS feed, and reports or data updates could be delivered via RSS.
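The "vibe" idea from the Twitter lesson above, searching commentary and visualizing business sentiment as a word cloud, can be sketched as a simple term-frequency count over user comments. This is an illustrative sketch, not anything from Nelson's paper: the sample comments, stop-word list and function name are all assumptions, and a real word cloud would pass the (word, count) pairs to a rendering library.

```python
from collections import Counter
import re

def vibe_terms(comments, top_n=5):
    """Rank the most frequent meaningful words across user commentary.

    Returns (word, count) pairs; a word-cloud renderer would size
    each word by its count.
    """
    stop_words = {"the", "a", "is", "in", "of", "and", "to", "are", "for", "me"}
    words = []
    for comment in comments:
        words += [w for w in re.findall(r"[a-z']+", comment.lower())
                  if w not in stop_words]
    return Counter(words).most_common(top_n)

# Illustrative commentary, as might be pulled from a BI platform's
# activity stream.
comments = [
    "Sales in the northeast region are slipping again",
    "Northeast sales dip looks seasonal to me",
    "Great quarter for the web channel, sales up 12%",
]
print(vibe_terms(comments, top_n=3))
```

Even this crude count surfaces "sales" and "northeast" as the dominant themes, which is the kind of at-a-glance sentiment summary the word-cloud lesson describes.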

The future of BI 2.0

According to president and CEO Greg Nelson, the future of BI will change greatly with BI 2.0. In his 2010 white paper he draws the following conclusions:
  1. Decisions, facts and context will be developed through crowdsourcing.
  2. Similarly, data and reports will incorporate narrative context information supplied by users. For example, data points and graphs annotated with descriptive insight directly alongside the results.
  3. Data will have a more direct linkage with action. When you see something wrong, the data will tell you where it is going wrong and why. Exceptions, alerts and notifications will be based on dynamic business rules that learn about your business and what you are interested in.
  4. People will be able to directly act on information. Interactions with operational systems, requests for information, comments, “start a discussion”, provide supporting information, “become a follower” of the metric, and “rate or report a problem” will reside alongside the data.
  5. Business decisions will be monitored, so that interventions and hypotheses about business tactics are tagged in the context of the data that measures their effect. The ability to test a hypothesis will be integrated into our decision support systems: for example, we see something in the data, explore it, understand its root cause, and design an intervention to deal with it. We will be able to tag interventions or events that have happened and have those tags appear in the context of the reporting of the data, so that over time our collective knowledge is captured alongside the data and artifacts.
  6. Visualizing data and complex relationships will be easier, and more intuitive forms of infographics will become mainstream. Tools will be able to create graphical representations based on what they "see" in the data, choosing the best visual display for what is available. Furthermore, the tools will learn which visualizations work best for you and your environment.
  7. The ability to detect complex patterns in data through automated analytic routines or intelligent helper models will be built into analytic applications.
  8. Finding information will be easier, and search results will provide context so that we know when we have the right results. Users will be able to tag specific data elements at various levels (page, widget, or some aspect of the data presentation such as a row, column, cell, line or point) or an abstract interpretation of the results. Anyone looking at the same data will then see that context.
  9. Linkages with unstructured contents such as documents, discussions and commentary as well as a knowledge base of previously answered requests will be key to ensuring collective knowledge and collaboration.
  10. Technical, process and business event monitoring will allow streamlined operational processes (Business Process Engineering, Business Activity Monitoring, Business Rules Engineering), and learning models will be applied to the organizational flow of data. (Nelson, 2010)
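Conclusion 3's idea of exceptions, alerts and notifications driven by dynamic business rules can be sketched as a minimal rule engine. The sketch below is an assumption-laden illustration, not Nelson's design: the metric names, thresholds and `Rule`/`evaluate` structure are all invented for the example, and in a BI 2.0 system the rules would be learned or user-defined rather than hard-coded.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """One dynamic business rule: fire an alert when the predicate holds."""
    name: str
    predicate: Callable[[dict], bool]
    message: str

def evaluate(rules, metrics):
    """Return the alert messages for every rule triggered by current metrics."""
    return [rule.message for rule in rules if rule.predicate(metrics)]

# Illustrative rules; in BI 2.0 these would adapt to the business
# and the user's interests rather than being fixed thresholds.
rules = [
    Rule("churn_spike",
         lambda m: m["churn_rate"] > 0.05,
         "Churn rate above 5% threshold"),
    Rule("revenue_drop",
         lambda m: m["revenue"] < 0.9 * m["revenue_forecast"],
         "Revenue more than 10% below forecast"),
]

metrics = {"churn_rate": 0.07, "revenue": 820_000, "revenue_forecast": 1_000_000}
print(evaluate(rules, metrics))
```

Because the rules are data rather than code, new ones can be added, tuned or retired at runtime, which is the property that makes "dynamic business rules that learn about your business" plausible.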
