Drillbridge on Linux

Did I mention recently that Drillbridge can now be installed on Linux in addition to Windows? (In fact, Drillbridge is so flexible that it now runs on Windows, Linux, Mac OS X, and AIX, all out of the box!) WELL IT DOES!

So regardless of whether your Hyperion environment is running on Windows, Linux, or AIX, it’s now easier than ever to add Drillbridge to one of your servers and unlock the power of your detailed data for your users.

Drillbridge 1.5.0 available!

It took a little longer than anticipated (I took some genuine time off over the holiday break), but I am very happy to make Drillbridge version 1.5.0 available to download.

A lot of work has gone into this release. For example:

  • You can now drill from columns (!)
  • Use your own custom mappings to resolve member names. This is a super cool feature I wrote about earlier, and as far as I know it knocks the pants off of Essbase Studio by letting you write in mappings for members that might not have children (for example, a YTD member in the Time dimension)
  • Streaming output performance enhanced
  • Now generates XLSX files with much better output (not everything is treated as a String; if the column type is DATE, TIME, or TIMESTAMP, it gets formatted accordingly)
  • Now run Drillbridge from Linux!
  • All files included that should allow for running on Windows, Linux, AIX, Solaris, and even Mac OS X
  • Java 7 or newer is required! I know this might be an inconvenience, but I pretty much had to make the jump, and it’s good to get current with Java. Java 1.8/8 should work just fine as well, in case you want to go all out

I have done my absolute best to test this version and make sure it doesn’t have any showstopper bugs, but there are bound to be issues. There are a few things I know need to be worked on for the inevitable 1.5.1 release. So if you upgrade, please be sure to make backups so you can flip back to 1.3.3 or 1.3.4 or whatever your current version is, in case something you need is broken.

If you have any questions about the new features and how to use them, don’t hesitate to hit up the forums or email. In the coming days I will be adding information to the Drillbridge Wiki about how to set up the new features. There was recently a lot of spam activity on the wiki that I’m trying to sort out, but for now it looks okay.

That all being said, this is the best release of Drillbridge ever and it now contains every feature I originally set out to put in, and then some. There will undoubtedly be some quirks but I look forward to a few point releases to stamp them out. Many or most of the features are from direct user requests, feedback, and ideas, so thanks to everyone.

Happy Drilling.

Essbase Outline Export Parser released

I had a use case today where I needed to parse an XML file created by the relatively new MaxL command “export outline”. This command generates an XML file for a given cube, covering either all dimensions or just the dimensions you specify. I just needed to scrape the file for the hierarchy of a given dimension, and that’s exactly what this tool does: pass in an XML file generated by export outline, then pass in the name of a dimension, and the output to the console will be a space-indented list of the members in that dimension. More information is available on the Essbase Outline Export Parser GitHub page, including sample input, sample output, and command-line usage.

Also note that the venerable Harry Gates has created something similar that includes a GUI in addition to working on the command line. While both tools are written in Java, we’re using different methods to parse the XML. Since I’m more familiar and comfortable with JAXB for reading XML, I went with that; in my experience it gives a nice, clean, and extensible way to model the XML file and read it without too much trouble. The code for this project could easily be extended to provide other output formats.
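For the curious, here’s a rough sketch of what the JAXB approach can look like. The element and attribute names below are assumptions for illustration only; they don’t necessarily match the actual export outline XML or this project’s code:

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical model of a member hierarchy; the real element and attribute
// names in the export outline XML may differ.
@XmlRootElement(name = "Member")
class Member {
    @XmlAttribute(name = "name")
    public String name;

    @XmlElement(name = "Member")
    public List<Member> children = new ArrayList<Member>();
}

public class OutlineDump {
    public static void main(String[] args) throws Exception {
        // args[0] is the XML file produced by "export outline"; this assumes
        // the root element unmarshals to a Member tree, which is a simplification
        Member root = (Member) JAXBContext.newInstance(Member.class)
                .createUnmarshaller()
                .unmarshal(new File(args[0]));
        print(root, 0);
    }

    // Recursively print the hierarchy as a space-indented list of members
    static void print(Member member, int depth) {
        StringBuilder indent = new StringBuilder();
        for (int i = 0; i < depth; i++) {
            indent.append("  ");
        }
        System.out.println(indent + member.name);
        for (Member child : member.children) {
            print(child, depth + 1);
        }
    }
}

The nice thing about modeling the file this way is that adding other output formats is mostly a matter of writing another traversal over the same tree.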

Drillbridge 1.4.0 Feature Preview: Custom Mappings

Drillbridge 1.4.0 is coming out later this month and it contains some really cool features. Today I am going to go over one of them. This feature is called “Custom Mappings” and while it’s ostensibly simple, it is a huge win for drill-through and cube design.

Consider the scenario where you are drilling from a member in the Time dimension. Drilling from January (Jan), February (Feb), and so on is straightforward (especially with Drillbridge’s convenience methods for mapping these to 01, 02, and so on). Even drilling from upper-level members is a snap: Drillbridge gets handed the member Qtr1, for example, opens the outline to get its three children, applies the mappings to those, and plugs them into the query (so the relevant query fragment might be WHERE Period IN ('01', '02', '03') or something similar).
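To make that concrete, here’s a small sketch of the general idea (not Drillbridge’s actual code); the child lookup and month-to-number mapping are hypothetical stand-ins:

import java.util.Arrays;
import java.util.List;

public class PeriodExpansion {

    // Hypothetical stand-in for looking up level-0 children in the outline
    static List<String> childrenOf(String member) {
        if ("Qtr1".equals(member)) {
            return Arrays.asList("Jan", "Feb", "Mar");
        }
        return Arrays.asList(member); // level-0 members map to themselves
    }

    // Hypothetical month-name-to-number mapping (Jan -> 01, Feb -> 02, ...)
    static String monthToDigits(String month) {
        List<String> months = Arrays.asList("Jan", "Feb", "Mar", "Apr", "May",
                "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec");
        return String.format("%02d", months.indexOf(month) + 1);
    }

    public static void main(String[] args) {
        StringBuilder in = new StringBuilder("WHERE Period IN (");
        List<String> children = childrenOf("Qtr1");
        for (int i = 0; i < children.size(); i++) {
            if (i > 0) {
                in.append(", ");
            }
            in.append("'").append(monthToDigits(children.get(i))).append("'");
        }
        System.out.println(in.append(")")); // WHERE Period IN ('01', '02', '03')
    }
}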

Everything is great, right? Well, what about those YTD members you often see in ASO cubes as an alternate hierarchy? Something like this:

  • YTD (~)
    • YTD_Jan (~) Formula: [Jan]
    • YTD_Feb (~) Formula: [Jan] + [Feb]
    • YTD_Mar (~) Formula: [Jan] + [Feb] + [Mar]
    • etc.

The problem with these members is that they are dynamic calcs with no children. So if you try to drill on one of them, Drillbridge would literally query the database for a period member named “YTD_Feb”, for example.

I have sort of worked around this before in Studio by instead putting shared members under these. Under YTD_Jan you have a shared member Jan, under YTD_Feb you have shared members Jan and Feb, and so on. This works, although it’s a bit cumbersome and feels a little clunky.

Custom Mappings to the Rescue!

Custom Mappings is a new Drillbridge feature that allows you to specify a list of member names to use when drilling on certain member names. If a Custom Mapping is added to a report, Drillbridge will consult it first for child member names. If a mapping isn’t found, Drillbridge will just use the normal provider of mappings (i.e., it’ll open the cube outline and use that).

All that’s needed to create custom mappings is to put them in a file. Here’s an example:

YTD_Jan,Jan

YTD_Feb,Jan
YTD_Feb,Feb

YTD_Mar,Jan
YTD_Mar,Feb
YTD_Mar,Mar

YTD_Apr,Jan
YTD_Apr,Feb
YTD_Apr,Mar
YTD_Apr,Apr

YTD_May,Jan
YTD_May,Feb
YTD_May,Mar
YTD_May,Apr
YTD_May,May

YTD_Jun,Jan
YTD_Jun,Feb
YTD_Jun,Mar
YTD_Jun,Apr
YTD_Jun,May
YTD_Jun,Jun

YTD_Jul,Jan
YTD_Jul,Feb
YTD_Jul,Mar
YTD_Jul,Apr
YTD_Jul,May
YTD_Jul,Jun
YTD_Jul,Jul

YTD_Aug,Jan
YTD_Aug,Feb
YTD_Aug,Mar
YTD_Aug,Apr
YTD_Aug,May
YTD_Aug,Jun
YTD_Aug,Jul
YTD_Aug,Aug

YTD_Sep,Jan
YTD_Sep,Feb
YTD_Sep,Mar
YTD_Sep,Apr
YTD_Sep,May
YTD_Sep,Jun
YTD_Sep,Jul
YTD_Sep,Aug
YTD_Sep,Sep

YTD_Oct,Jan
YTD_Oct,Feb
YTD_Oct,Mar
YTD_Oct,Apr
YTD_Oct,May
YTD_Oct,Jun
YTD_Oct,Jul
YTD_Oct,Aug
YTD_Oct,Sep
YTD_Oct,Oct

YTD_Nov,Jan
YTD_Nov,Feb
YTD_Nov,Mar
YTD_Nov,Apr
YTD_Nov,May
YTD_Nov,Jun
YTD_Nov,Jul
YTD_Nov,Aug
YTD_Nov,Sep
YTD_Nov,Oct
YTD_Nov,Nov

YTD_Dec,Jan
YTD_Dec,Feb
YTD_Dec,Mar
YTD_Dec,Apr
YTD_Dec,May
YTD_Dec,Jun
YTD_Dec,Jul
YTD_Dec,Aug
YTD_Dec,Sep
YTD_Dec,Oct
YTD_Dec,Nov
YTD_Dec,Dec

With this Custom Mapping in place on a report, drill-to-bottom can be provided on a cube’s Time dimension YTD members. You get all of this functionality without having to tweak the outline, add a bunch of shared members, or anything else. Even better, you save a whole trip to the outline. If there is some member that is problematic to resolve for some reason, or you just want to override the member resolution process, you can also stick it in the custom mapping.
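Under the hood, the lookup order described above could be sketched something like this (a simplified illustration, not the actual Drillbridge code):

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MappingResolver {

    // Custom mapping entries as they might be loaded from the file shown above
    private final Map<String, List<String>> customMapping = new HashMap<String, List<String>>();

    public MappingResolver() {
        customMapping.put("YTD_Feb", Arrays.asList("Jan", "Feb"));
        customMapping.put("YTD_Mar", Arrays.asList("Jan", "Feb", "Mar"));
    }

    public List<String> resolve(String member) {
        List<String> mapped = customMapping.get(member);
        if (mapped != null) {
            return mapped; // custom mapping wins; no trip to the outline
        }
        return childrenFromOutline(member); // the normal provider of mappings
    }

    // Hypothetical placeholder for opening the cube outline and reading
    // the drilled member's children
    private List<String> childrenFromOutline(String member) {
        return Arrays.asList(member);
    }
}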

Just for completeness, let’s take a look at the admin screens for editing and updating Custom Mappings. Here’s an overview of all of the different Custom Mappings that have been created:

Drillbridge Custom Mappings List

Here’s a look at editing a Custom Mapping:

Drillbridge Edit Custom Mapping

And here’s a preview of the list of individual mappings available for a given mapping (uploaded by importing a text file):

Drillbridge View Entries in a Custom Mapping

There you have it. As I mentioned, this feature will be available in the upcoming 1.4.0 release, which should be out later this month. This feature is really, really cool, and there are a few more things this release adds that I will be talking about over the next week leading up to the release.

Drillbridge with Teradata & Netezza?

The ODTUG webinar for Drillbridge yesterday seemed to go pretty well (more to come soon!), but one of the questions that came up was whether Drillbridge works with Teradata and/or Netezza for implementing Hyperion drill-through to relational. My answer: it should, but I don’t know for sure. Drillbridge supports Microsoft SQL Server, Oracle, and MySQL out of the box. Drillbridge also allows you to put your own JDBC driver into its /lib folder, and you should be able to use any other flavor of database that you can write SQL for, be it Informix, DB2, Teradata, Netezza, or whatever.
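If you want to sanity-check a driver yourself, a quick standalone JDBC test is usually enough. The driver class name, URL, credentials, and test query below are illustrative; check your vendor’s JDBC documentation for the real values:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Load the vendor driver jar you dropped onto the classpath
        // (for Drillbridge, the /lib folder); the class name here is illustrative
        Class.forName("com.teradata.jdbc.TeraDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:teradata://dbhost/DATABASE=sales", "user", "password");
        try {
            Statement stmt = conn.createStatement();
            // A trivial test query; adjust for your database's syntax
            ResultSet rs = stmt.executeQuery("SELECT 1");
            System.out.println("Connected and queried: " + rs.next());
        } finally {
            conn.close();
        }
    }
}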

So that being said, if you are interested in implementing Drillbridge with one of these backend relational databases, please don’t hesitate to reach out to me if I can help. I’d love to be able to confirm compatibility rather than just say “I suspect it will work, JDBC is awesome, right?”

How to scan a cube for members with no data – the HUMA way

I dusted off one of my Hyperion/Java flights of fancy the other week: HUMA. HUMA is a tool for scanning a cube and finding which members have no data associated with them at all. As I mentioned in the post about some performance tweaks, HUMA scans every single cell in an entire cube to determine what is empty.

One of the reasons I put HUMA on the shelf for a while is that in a program like this, there’s a lot that can go wrong to cause it to fail or run so long that the sun goes supernova before it finishes. If you’re lucky, you get a list of members that you can put on the chopping block. If you’re really lucky, one of them is even a member in a dense dimension, whose removal might just increase your block density ever so slightly.

That all being said, let me just say up front that maybe there’s not a lot to be gained from doing this. Maybe the real solution is to buy some nice fast SSDs for your server, or increase the cache size, or check out that fancy new ASO Planning stuff, or whatever. I get that. Nevertheless, this tool exists, and if you’re the type of administrator who looks for any edge you can get, well, then maybe you’re one of the dozens of masochists that have already downloaded this thing.

So, for the curious: how does HUMA scan an entire cube with a kajillion possible data cells? Glad you asked…

Step 1 – Read the outline

The first thing that HUMA does is read the outline for the list of stored members in each dimension. HUMA also pays attention to which dimensions are sparse and which are dense.

Step 2 – Build a virtual grid with every single cell combination

Imagine you’re in Smart View and you are drilling around Sample/Basic. You drill to the bottom of each dimension, and now there is nothing in your POV – it’s just rows and columns as far as the eye can see. You’d have something like this:

A fully expanded grid view with nothing in the POV

All dimensions in the Sample.Basic database have been drilled into; there are no POV/page members.

In practice, even a barely non-trivial combination of members from multiple dimensions results in a grid that is gigantic, far too big to store in memory all at once. Thankfully, HUMA doesn’t actually generate this grid in memory. It generates (incredibly quickly) the member combinations for any place in the grid, hence why it’s a virtual grid. From here the problem actually sounds kind of simple: just check all the cells. Before we do that, let’s come back to why we paid attention to dense and sparse:

A highlighted row corresponding to a single block of data

A single row is highlighted: given columns A-C representing sparse dimensions, then the highlighted row represents the entirety of a single block.

Sparse dimensions are on the left, dense dimensions are on the top. So you can see that the highlighted row in the above screenshot corresponds to a data block (in a BSO database). The data is strategically laid out this way so that, to the extent possible, we retrieve data against the same blocks over and over, giving the Essbase engine a chance to keep those blocks in memory and serve them up faster.
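To make the virtual grid idea a bit more concrete, here’s a minimal sketch of how a linear cell index could be decoded into a member combination without materializing anything (illustrative only, not HUMA’s actual code):

import java.util.Arrays;
import java.util.List;

public class VirtualGrid {

    // One list of stored members per dimension, ordered sparse-first so that
    // consecutive indexes tend to stay within the same block
    private final List<List<String>> dimensions;

    public VirtualGrid(List<List<String>> dimensions) {
        this.dimensions = dimensions;
    }

    public long totalCells() {
        long total = 1;
        for (List<String> dimension : dimensions) {
            total *= dimension.size();
        }
        return total;
    }

    // Mixed-radix decode: treat the index as a number whose "digits" are
    // member positions within each dimension
    public List<String> combinationAt(long index) {
        String[] combination = new String[dimensions.size()];
        for (int d = dimensions.size() - 1; d >= 0; d--) {
            List<String> dimension = dimensions.get(d);
            combination[d] = dimension.get((int) (index % dimension.size()));
            index /= dimension.size();
        }
        return Arrays.asList(combination);
    }
}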

Step 3 – Retrieve sub-grids

We can’t, obviously, just pull back all the data at once; we need to iterate through it. For technical reasons we also need to limit the number of cells we pull back at once, and for Java API purposes we have a maximum of 256 columns. We might also want to cap the maximum number of cells per retrieve; let’s say that’s about 50 cells. We’d then be retrieving sub-grids roughly shaped like this:

A representation of HUMA sub-grids that might be retrieved

A fully expanded grid with different sub-grids of data represented. The top left starts out as grid 1, then moves right to grid 2 and keeps moving right. The next row picks up at theoretical sub-grid number 24.

The first grid is 1, then to its right (in the same “hot” blocks) is 2, and so on across to the right edge, such that the first grid in the next row down happens to be 24, then 25, and so on. Even if we just retrieved (or tried to retrieve) all of these grids, it would still take a considerable amount of time. So there’s one more trick up HUMA’s sleeve: quick member elimination. Let’s look at the data in grid 1:

A highlighted sub-grid in the Sample.Basic retrieval process.

Another look at the data contained in HUMA sub-grid number 1.
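Before digging into that data, here’s a rough sketch of how those retrieval windows might be enumerated; the grid sizes and caps below are illustrative numbers, not HUMA’s actual values (with these particular numbers, each block row splits into 23 windows, so the second row starts at grid 24, as in the figure above):

public class SubGridWalker {
    public static void main(String[] args) {
        int denseColumns = 1150;         // illustrative: total dense-member columns
        int sparseRows = 200;            // illustrative: total sparse combinations (block rows)
        int maxColumnsPerRetrieve = 256; // the Java API column limit mentioned above
        int maxCellsPerRetrieve = 50;    // the configurable cap from the description

        // One block row at a time, stepping across the dense columns in windows
        int width = Math.min(maxColumnsPerRetrieve, maxCellsPerRetrieve);
        int gridNumber = 0;
        for (int row = 0; row < sparseRows; row++) {
            for (int startColumn = 0; startColumn < denseColumns; startColumn += width) {
                int endColumn = Math.min(startColumn + width, denseColumns) - 1;
                gridNumber++;
                if (gridNumber <= 3) {
                    // A real implementation would issue one retrieve here for
                    // this row and this column range
                    System.out.printf("grid %d -> row %d, columns %d-%d%n",
                            gridNumber, row, startColumn, endColumn);
                }
            }
        }
        System.out.println("Total sub-grids: " + gridNumber);
    }
}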

Step 4 – Analyze data and reconfigure search space

We’ve found data for such member combinations as Cola, Actual, New York, and so on. We can now remove these from consideration in the search; HUMA knows that it no longer ever has to check any of those members. Because of its extremely fast grid regeneration/iteration, it does this automatically and shrinks the search space by removing those members. This can speed things up incredibly. If you watch the (at the moment very verbose) HUMA output, you’ll see some debugging information about how many grids there are, and as members are found, this number gets recomputed.
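As a sketch of that bookkeeping (again, not HUMA’s actual code): every member seen in a cell with data drops out of its dimension’s candidate set, and whatever survives to the end is unused:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MemberElimination {

    // One candidate set per dimension, initially holding every stored member
    private final List<Set<String>> candidates;

    public MemberElimination(List<Set<String>> candidates) {
        this.candidates = candidates;
    }

    // Called for each retrieved cell combination that actually had data
    public void recordDataFor(List<String> combination) {
        for (int d = 0; d < combination.size(); d++) {
            candidates.get(d).remove(combination.get(d));
        }
    }

    // True once every member has been seen with data, so the scan can stop early
    public boolean everythingEliminated() {
        for (Set<String> dimension : candidates) {
            if (!dimension.isEmpty()) {
                return false;
            }
        }
        return true;
    }

    // Whatever is left at the end of the scan was never seen with data
    public Set<String> unusedMembers() {
        Set<String> unused = new HashSet<String>();
        for (Set<String> dimension : candidates) {
            unused.addAll(dimension);
        }
        return unused;
    }
}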

HUMA will finish in one of two ways: either it eliminates every member from consideration and quits, indicating as much, or it finishes the scan and reports back all of the unused members. I guess there’s a third option, which is that HUMA crashes. Actually, HUMA itself won’t crash, but the most likely issue you’ll run into is port exhaustion, in which case you’ll want to increase the milliseconds between retrieves and try again (or just increase the number of ports on the server, as indicated in the help).

That, in a nutshell, is how HUMA operates. This time around, performance is off the charts compared to the first version, so that’s cool. The program is available from the Downloads site. It is still a very rough program, but please feel free to share your feedback and experiences.

Daily Drillbridge Update

Lots of exciting things going on in the Drillbridge world! I am putting the finishing touches on my ODTUG webinar that is NEXT WEEK (the 28th!). You can register here or wait for the recording to be released. I’ll put a link on this blog.

The “context path” update in the Drillbridge 1.3.4 beta appears to be working swimmingly. It enables the use of a proxy with Oracle HTTP Server so that Drillbridge can be served on a normal web port and a URL of your own choosing, which is super cool.

Source Code Metrics

Just for fun, I ran the Drillbridge codebase through a source code analyzer, and it comes in just shy of 5,000 lines of Java, not counting over 1,000 lines of comments. Pretty cool stuff! Believe it or not, Drillbridge started out as a proof of concept with about three Java class files (it now weighs in at 120 classes).

Drillbridge 1.3.4 BETA available

The normal Downloads location now contains a beta of Drillbridge 1.3.4. This release adds just one requested feature: custom context paths. Normally, Drillbridge operates at the root of its web container, with URLs such as http://server:9220/admin/reports. This change adds support for an application.properties setting that allows a custom context path. So instead of the above link, you’d use http://server:9220/drillbridge/admin/reports. Note that the name of the root (in this case drillbridge) can be set to whatever you like.

If you use this feature you’ll have to edit your drill-through definition (the code that goes into EAS or gets deployed from Drillbridge) by hand, although you were probably doing that anyway if you needed this feature. This feature is simply meant to allow for putting IIS in front of Drillbridge in case you want to do a redirect and offer Drillbridge through your normal web server on port 80.

If you don’t need or want to test this feature, just stick with 1.3.3. The downloads section now clearly indicates that this is a beta.

HUMA 0.5.0 available to download

An early build of the Hyperion Unused Member Analyzer is now available to download.

I posted details over in the Saxifrage Systems LLC “tools” forum with some brief notes on how to run and use the tool. Again, just to reiterate (and so I have a clear conscience): it’s super alpha software, so there’s no warranty on this thing. Please do let me know your feedback, either by email or in the forum post. Future iterations of the software will clean it up.