
Wednesday, December 14, 2011

Mobile business intelligence expectations

There is a discussion of some remarks I made to Ann All at IT Business Edge here, together with some opinions by Howard Dresner.

The interview reflects my opinion pretty well. I see mobile BI as a way to find new types of customers for BI more than as a way to replace existing installations. Too bad I didn't mention salespeople on the road; I think they are an important potential market as well.

Another point is that I think most people who said they expected mobile BI to be in use within 12 months were being too optimistic. The BI Survey 11 will address this question.

Thursday, November 17, 2011

Chasing new trends

I'm talking to Tibco about Spotfire. It's an interesting product that I've reviewed before. It seems to me that they are moving more and more into operational BI, which fits the Tibco idea of fast data delivery very well. They also seem to be putting more emphasis on ROLAP than they used to.

What is interesting is that they are also presenting a social media tool called Tibbr (wonder how they came up with that name!) and a cloud version. There's nothing wrong with this of course, but it seems to me that it doesn't fit their message bus and/or ROLAP approach very well.

Their justification for the investment is that some analyst or other is predicting fast growth in this area. This reminds me of what an important role analysts play in the market. Thanks to the analysts, a lot of BI vendors are jumping on the cloud bandwagon, even though the cloud sales channel is very different from what most BI vendors are accustomed to, and the idea of moving data off site and then back on site adds complexity.

Thursday, July 28, 2011

Why short projects are part of agile business intelligence

One of the key ideas in agility is the importance of delivering real, testable results without delay. In fact, the Agile Manifesto recommends delivering working software frequently, from a couple of weeks to a couple of months.

Delivering working software within two months may sound a bit extreme, but there is good evidence that short projects are more successful than long projects. In fact, our research in the BI Survey shows that the application should be rolled out to the users less than six months after the product has been selected. We have found the same result year after year in the ten-year history of the Survey. Amazingly, project benefits start to fall as early as a month after the product is selected, and continue to fall thereafter. And of the many project parameters we study, none shows as clear an effect on project success as project length.

These results from the BI Survey provide clear empirical support for the idea of using agile methods in business intelligence projects. The results have also remained consistent since we started the Survey ten years ago, long before the idea of agile development or agile business intelligence became popular.

But why do short projects work so much better? Our research shows that the main problems that arise in longer projects are organizational, not technical. Needs change over the course of time, and end users lose interest in the project. Disagreements over the project goals arise. Lack of interest and disappointed end users are a major issue in business intelligence.

And needs certainly do change quickly in business intelligence. For example, another study we carried out shows that three quarters of European companies modify their planning processes one or more times a year. In an environment like this, a project that takes a year to implement is quite likely to be obsolete before it is finished. Even a six-month wait can push potential users to look around for a more agile solution.

The problem this creates is that not all business intelligence projects can be carried out within a few months. This is especially true when major data management issues need to be addressed. The agile solution to this is to find ways of splitting large projects into smaller ones. The usual argument against this approach is that it creates the risk of reducing efficiency in the long term. But the agile methodology is to measure success in terms of working software delivered in the short term, instead of attempting to meet nebulous long-term goals.

Sunday, July 17, 2011

Business Intelligence, the semantic Web and Alltop

My blog is included on the Alltop business intelligence page, and at the time of writing I display their badge on the blog.

According to the site, 'The purpose of Alltop is to help you answer the question, “What’s happening?”' Alltop is a 1990s-style Web registry maintained by human editors.

But Alltop's business intelligence page has several problems that make it less useful than it could be. The page has fallen victim to a semantic Web style gotcha. Like many other phrases, business intelligence means different things to different people. If you don't disambiguate somehow, a web registry based on a phrase may make no sense.

There are three distinct meanings of the phrase "business intelligence". The first is something about software for analyzing business data -- like my blog. The second is news about businesses, which is interesting but unrelated. These are some of those blogs:

MEED NEWS, EMARKETS.DE, B2B TRADE, DEALBOOK, ARUNDEL BUSINESS NEWS, FOREX TRADING INFO, THE FINANCE KID, ARBITRAGE MAGAZINE, SMALL BUSINESS SUPPORT

The third meaning is based on a completely different meaning of intelligence -- intelligence as in IQ, as opposed to intelligence as in information for analysis. In this sense, business intelligence just means being smart about business, which could mean just about anything.

So Alltop's business intelligence page contains sites that are not at all related to the #businessintelligence tag on Twitter. A lot of these seem to be blogs offering advice on sales and entrepreneurship, or general management consulting blogs. A few are just political blogs, or blogs about general market or marketing trends. They're fine in their way, I guess, just misplaced. Here's a list:

STATSPOTTING!, WSJ: THE NUMBERS GUY, MANAGE BY WALKING AROUND, BUILDING BUSINESS VALUE, CORPORATE MANAGEMENT STRATEGIES, KNOWLEDGE WORKS, HUSTLEKNOCKIN', THE THINK HERE BLOG, LEAD VIEWS, THE SOLOPRENEUR LIFE, SMEDIO, INTERCALL BLOG, GLOBAL INSTITUTE FOR INSPIRATION, SMALL BUSINESS SUPPORT, RED WING SOFTWARE BLOG, FRED67 B2B REFERRAL CLUB, RESULTS.COM BUSINESS GROWTH TIPS

I'm not criticizing any of these guys, just saying they seem to be improperly categorized.

Alltop is also syndicating advertising material thinly disguised as blogs. Of course, maybe they're getting paid, what do I know? If not, they should be. The following are BI vendors, which may or may not be a problem:

RED WING SOFTWARE BLOG, LOGIXML, BIME - SAAS BUSINESS INTELLIGENCE (BI), BLOG.JINFONET.COM, MICROSTRATEGY 101, PANORAMA BUSINESS INTELLIGENCE

In addition, there are several aggregators -- Yahoo, and Beyenetwork twice. These guys can be seen as competitors, I guess.

In the end I think that the lack of careful stewardship reduces the usefulness of the site. The problem is that business intelligence is a vague term and needs a semantic Web to be useful. Manual editing in a web registry is a workaround, but it is not being used here to much effect.

Saturday, July 09, 2011

Discovering hierarchies in columnar databases

I recently blogged about columnar databases, and Wayne Eckerson asked me for a clearer explanation of what I mean by columnar databases "discovering hierarchies".

For example, consider the approach of two well-known products: IBM Cognos TM1, which is multidimensional, and QlikView, which is columnar.

My definition of a data model is a structure that is defined by an administrator, or set down in the master data. To me this is different to a structure derived from analyzing the transactions. In the following simple example, let's say I have two sales teams, one for dental hygiene products and one for soap.

If I were designing a data model in TM1, then I could create a hierarchy, which is a set of parent-child relationships between the departments and the products they sell. If the soap people cross-sold some toothpaste, it would have no effect on the hierarchy, because it is predetermined by my idea of how my company is supposed to work.

If I were to import the same data in QlikView I could create a report that showed me the relationship between the sales teams and the products without defining the model. Once the data is imported, QlikView recognizes the relationships automatically.

When the soap guys cross-sell toothpaste, QlikView discovers the new relationship, but the hierarchies stay the same in TM1, because that's how I defined the model. To me this is the key difference. On the one hand the structures come directly from the actuals, and on the other hand they reflect my predefined perception (or "model") of what is going on.

So columnar databases typically discover the relationships automatically, and multidimensional databases allow you to define the relationships as you want them. Another way to look at this is that the transactional data drives the master data structures in a columnar database, but those structures are wired into the multidimensional model.
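
To make the distinction concrete, here is a minimal sketch in Python (the teams, products and numbers are invented for illustration). The "multidimensional" approach consults a predefined parent-child mapping that the actuals cannot change, while the "columnar" approach simply derives the team-product relationships from whatever transactions arrive:

    # Hypothetical transactions: (sales_team, product, amount).
    # The soap team has cross-sold some toothpaste.
    transactions = [
        ("dental", "toothpaste", 120),
        ("dental", "floss", 45),
        ("soap", "bar_soap", 80),
        ("soap", "toothpaste", 30),  # the cross-sell
    ]

    # TM1-style: the hierarchy is set down in advance and does not
    # change when the actuals change.
    predefined_hierarchy = {
        "dental": ["toothpaste", "floss"],
        "soap": ["bar_soap"],
    }

    # QlikView-style: the relationships are discovered from the data.
    discovered = {}
    for team, product, _amount in transactions:
        discovered.setdefault(team, set()).add(product)

    print(predefined_hierarchy["soap"])  # ['bar_soap'] -- model unchanged
    print(discovered["soap"])            # {'bar_soap', 'toothpaste'}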

So which approach is better? It depends on the application.

Tuesday, July 05, 2011

Data modeling and agile BI

One of the advantages that some analytical tools, such as QlikView, Spotfire or Tableau, claim to offer over the products they call "Traditional BI" is that they can be used without data modeling. According to this claim, data modeling is a major obstacle to agile business intelligence, and not needed anyway.

Is it true that data modeling is dead? Has technology found a workaround?

The need for data modeling depends upon the application. Products that promise user-friendly analysis without any data modeling are usually intended for a specific type of analysis that does not require any previously specified structure.

A good example of data that does not require modeling is the data retailers gather about their customers. This data comes in big flat tables with many columns, and the whole point of the analysis is to find unexpected patterns in this unstructured data. In this case adding a model means adding assumptions that may actually hinder the analysis process.
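
As a rough illustration (with invented data), this kind of analysis is just aggregation over a wide, flat table; nothing beyond the column names needs to be specified in advance:

    from collections import Counter

    # Hypothetical flat retail table: one row per purchase, many columns.
    rows = [
        {"customer": "c1", "region": "north", "category": "snacks", "spend": 12.5},
        {"customer": "c2", "region": "south", "category": "dairy", "spend": 4.0},
        {"customer": "c1", "region": "north", "category": "dairy", "spend": 6.5},
    ]

    # Group on any column to look for patterns -- no predefined
    # hierarchy or model is needed, just the raw table.
    spend_by_region = Counter()
    for row in rows:
        spend_by_region[row["region"]] += row["spend"]

    print(spend_by_region.most_common())  # [('north', 19.0), ('south', 4.0)]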

However, some types of analyses only make sense with at least some modeling. Time intelligence is an example of a type of analysis that is supported by a data model. Analyzing fixed internal structures such as cost accounts or complex sales channels is also usually more convenient with a predefined model. The alternative method of discovering the structures in the raw data may not be possible.

Planning is a common area of agile BI, and planning is rarely possible without predefined structures. It is no coincidence that the tools that promise analysis without data modeling do not offer planning features. Planning requires adding new data to an existing data set. In some cases, this includes adding new master data, for example when new products are being planned. Furthermore, there is often a good deal of custom business logic in a planning application that cannot be defined automatically. Most financial planning processes, and the analysis and simulation that go along with them, cannot be carried out on a simple table.

In my view the new generation of columnar databases is a welcome addition to agile BI. But I also think that their marketing is sometimes a little over the top when it comes to dismissing existing BI solutions in this area.

Thursday, June 30, 2011

Column oriented databases are not the same as in-memory databases

In recent years, thanks not least to aggressive marketing by QlikTech (or Qlik Technologies, as they are now often called), Tableau and Tibco Spotfire, columnar databases and in-memory databases have become very fashionable. Microsoft's VertiPaq engine, which is behind the PowerPivot product, is a good example of a tool that came in on the wave of this trend.

One result of this is that there seems to be some confusion about what the terms "in-memory" and "column oriented" mean, and attributes of one are often ascribed to the other.

Just to be perfectly clear: A columnar database is not necessarily in-memory, and an in-memory database is not necessarily columnar.

In-memory is a somewhat vague term, since, as Nigel Pendse likes to point out, all databases have to hold data in memory to process it -- the CPU cannot directly access the hard drive. However, I would say that unlike some other tools, IBM Cognos TM1 and QlikView are in-memory. These products load everything into memory before they do anything. If there is not enough memory to fit the entire data set, the load fails and that's that. The same applies to SAP HANA. But unlike QlikView and HANA, TM1 is a multi-dimensional database.

The loading behavior of an in-memory database is quite different to the MOLAP engine in Analysis Services, which is fundamentally disk-based but has sophisticated paging abilities to keep as much of the data as possible in memory, or the column-oriented Spotfire, which attempts to load everything but uses paging if there is not enough memory.

Columnar is a much clearer and simpler term. It simply means that the data is stored by column instead of by row. There are a large number of analytic databases with this architecture, such as Exadata, SAND, Greenplum, Aster, or Sybase IQ, just to name a few. Some, like Vertica and VertiPaq, even refer to their columnar architecture in their names. Some columnar databases are in-memory, but many are designed to deal with huge amounts of data, up to the petabyte range, and cannot possibly hold it all in memory.
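
A toy sketch of the storage difference (purely schematic; no real product lays its bytes out this simply):

    # Row-oriented: each record's fields are stored together.
    row_store = [
        ("2011-06-30", "soap", 80),
        ("2011-06-30", "toothpaste", 30),
        ("2011-07-01", "floss", 45),
    ]

    # Column-oriented: each column's values are stored together,
    # which makes scanning or compressing a single column cheap.
    column_store = {
        "date":    ["2011-06-30", "2011-06-30", "2011-07-01"],
        "product": ["soap", "toothpaste", "floss"],
        "amount":  [80, 30, 45],
    }

    # Aggregating one column touches only that column's storage.
    print(sum(column_store["amount"]))  # 155

Note that neither layout says anything about whether the lists live in RAM or on disk -- that is the in-memory question, and it is orthogonal.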

By the way, what got me off on this rant is this blog about Endeca Latitude 2, which actually equates the two technologies, and a LinkedIn discussion the author started (which is private, so I can't link it here) with the title "Is Data Modeling Dead?"

The idea that in-memory databases kill data modeling comes from the fact that columnar databases are often used to discover hierarchies, and a whole generation of so-called "agile" in-memory database tools uses this method. But in-memory multi-dimensional databases are still around and still very useful for analyzing data with well-defined structures such as financial data.

Tuesday, June 21, 2011

The end of Essbase Visual Explorer

I talked to Tableau and Oracle on the same day recently so I managed to get both sides of this story.

Essbase Visual Explorer is an OEM version of Tableau Desktop. In other words, it is the Tableau product rebranded as an Oracle Hyperion product.

The OEM agreement was originally made between Tableau and Hyperion. It made sense for Hyperion, because they did not want to invest in in-house development on any new front-ends, but they needed something to liven up their Essbase offering. It made sense for Tableau because they were a tiny unknown company at the time, and hooking up with Hyperion, then one of the largest BI companies, was a great way to push sales and raise visibility.

But Tableau has moved on since then. Hyperion’s unwillingness to market Essbase aggressively meant that Tableau could not depend on Hyperion forever, and Tableau now supports a wide variety of data sources. They said to me that they were “sunsetting” the relationship. My impression was that only a small proportion of their customers are Visual Explorer customers and they are ready to move on.

Oracle inherited the relationship from Hyperion, but its strategy has been quite different to (and in my opinion more sensible than) Hyperion’s. Reading between the lines of what Paul Rodwick said to me, my guess is that Oracle thinks that Tableau got more out of the deal than Hyperion did. Be that as it may, as a stand-alone tool Visual Explorer does not fit well into Oracle’s ambitious plans to integrate Essbase with its reporting and analysis suite, OBIEE. Visual Explorer is still on Oracle’s price list but the recently released BI Foundation Suite combining Essbase and OBIEE does not include Visual Explorer.

So Oracle will continue to support Visual Explorer, but both Oracle and Tableau have indicated to us that they have little interest in continuing the relationship, and I do not expect Oracle to continue actively positioning Essbase Visual Explorer in the coming years.

Thursday, June 16, 2011

SAP raises the stakes with EPM 10

SAP is now adding new features to its (relatively) new integrated planning tool, BusinessObjects Planning and Consolidation, often still called BPC. It needs to show customers that it is up to more than just integrating its portfolio, and it also needs to face up to scrappy new competitors like Tagetik and De Facto, planning tools with a very similar architecture.

The presentation I saw yesterday was nominally about EPM, but SAP concentrated on Planning and Consolidation (BPC). It has many new features, including a new workspace and Web-based data input. This is something new, since the original OutlookSoft product only allowed planning in Excel. Neither Tagetik nor De Facto offers this. The product also allows reporting on non-BPC BW data and has a slicker-looking Excel interface with some nice new features. Unfortunately, they couldn’t get the template function working in Excel for the demo.

SAP says they will gradually move the whole thing to HANA but did not provide details. Having the database in memory is the best way to provide the performance planners demand. However, in my opinion HANA’s architecture is not very well suited for planning. Column database architecture is available in other business intelligence tools such as QlikView and Microsoft PowerPivot, but they are not suited for planning.

Planning involves adding new data to the database (as opposed to overwriting data), and the automatic data-driven modeling features of this kind of database make it impractical to offer a simple way to add data. You cannot "discover" new periods, scenarios or products in an ETL process. Multi-dimensional databases, with their predefined dimensions, are better for this kind of feature, because the idea of an “empty cell” for adding new plan data comes so naturally.
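
A minimal sketch (with invented dimensions) of why the predefined approach suits planning: in a multidimensional model the cells for future periods exist before any data is written, whereas a structure derived purely from transactions has no 2012 to write into until 2012 actuals arrive:

    from itertools import product

    # Predefined dimensions, as in a multidimensional planning model.
    products = ["toothpaste", "floss"]
    periods = ["2011", "2012"]          # 2012 has no actuals yet
    scenarios = ["actual", "plan"]

    # Every combination is an addressable (initially empty) cell.
    cube = {cell: None for cell in product(products, periods, scenarios)}

    # Plan data can be written into a cell no transaction has touched.
    cube[("toothpaste", "2012", "plan")] = 150

    # A structure discovered from 2011 actuals alone would contain
    # no "2012" or "plan" members to write against.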

Monday, June 13, 2011

How Information Builders gets around the Flash / HTML 5 controversy

As I mentioned in a previous post, the iPad is still a hot topic for business intelligence vendors, and both Oracle and Information Builders have just come out with new iPad support. At its Summit 2011, Information Builders has been demoing the InfoAssist tool with the built-in ability to switch between rendering in Flash and HTML5 on the fly.

Information Builders is not exactly a newcomer to the world of using HTML to render rich content. In fact, the Information Builders Active Reports were originally rendered in HTML 4 and offered an amazingly rich user experience completely offline. But what was amazing back in the day is becoming more commonplace, with HTML5 making it much easier to build rich interactive content. This development shows that users and developers no longer have to take sides in the Flash vs HTML 5 argument.

To me, the moral of the story is that Microsoft, Adobe and Apple may well be wasting their time fighting over the rich web development platform. It was always pretty artificial. And it isn't just large vendors like Information Builders who have this kind of multi-platform capability any more. HTML 5 is not just for MP3 bloggers any more, and as cross-platform development specialists such as Appcelerator and the many Flash / HTML 5 converters (including Adobe Wallaby) gain traction in the market, it is becoming less and less important to worry about the development tool used to deliver the content.