I was going to do a nice in-depth post to follow up on my discussion of the relational cache outline extraction method/improvements on the Next Generation Outline Extractor, but someone already beat me to the punch. It turns out that the tool’s primary author, Tim Tow, blogged about the technique and the tables for ODTUG a couple of years back.
I’ll just add on one thing that I wanted to highlight, though: the general technique behind most relational extractions from an Essbase outline is to generate a single table, with such common columns as PARENT, CHILD, ALIAS, UDA, STORAGE, CONSOLIDATION, and so on. If you think about this, it tends to imply a number of limitations that might make this technique unfeasible for you:
- The table can only hold one extraction at a time
- You can only get the member name plus the alias from one particular alias table at a time
- The columns in the table vary depending on which attribute dimensions are associated with the dimension
- More than one UDA: ¯\_(ツ)_/¯
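To make those limitations concrete, here’s a rough sketch (Python with SQLite, purely for illustration) of the kind of flat layout a traditional extraction produces. The table and column set here is a generic example of the pattern, not the exact output of any particular tool:

```python
import sqlite3

# Illustrative only: a flat, single-table extraction layout. One row per member,
# with exactly one ALIAS column and one UDA column, which is where the
# limitations above come from.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE OUTLINE_EXTRACT (
        PARENT        TEXT,
        CHILD         TEXT,
        ALIAS         TEXT,   -- only one alias table's alias fits here
        UDA           TEXT,   -- a second UDA has nowhere to go
        STORAGE       TEXT,
        CONSOLIDATION TEXT
    )
""")
conn.execute(
    "INSERT INTO OUTLINE_EXTRACT VALUES (?, ?, ?, ?, ?, ?)",
    ("Market", "East", "Eastern Region", "Major Market", "Store", "+"),
)
conn.commit()
```

With only a single ALIAS and a single UDA column, anything beyond the first alias table or the first UDA simply has nowhere to land.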
Since the relational cache extraction method was designed for robust, flexible, and quick use with Dodeca, it is structured a little differently than traditional extracts. The additional aliases, attribute associations, and UDAs are all in separate tables, allowing for arbitrary and complete extractions of such data (in tables CACHED_OUTLINE_MEMBER_ALIASES, CACHED_OUTLINE_MEMBER_ATTRIBS, and CACHED_OUTLINE_MEMBER_UDAS, respectively).
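For contrast, here’s what pulling aliases and UDAs back out of the cache tables might look like. To be clear, only the three child table names above come from the tool itself; the parent table name and all of the column names in this sketch (CACHED_OUTLINE_MEMBERS, MEMBER_ID, MEMBER_NAME, ALIAS_TABLE, ALIAS, UDA) are assumptions I’m making for readability, assuming an extraction has already been run into a SQLite database:

```python
import sqlite3

# Hypothetical join against the cache tables: every alias and every UDA for a
# member comes back as its own row, rather than being squeezed into one column.
conn = sqlite3.connect("outline_cache.db")

rows = conn.execute("""
    SELECT m.MEMBER_NAME, a.ALIAS_TABLE, a.ALIAS, u.UDA
      FROM CACHED_OUTLINE_MEMBERS m
      LEFT JOIN CACHED_OUTLINE_MEMBER_ALIASES a ON a.MEMBER_ID = m.MEMBER_ID
      LEFT JOIN CACHED_OUTLINE_MEMBER_UDAS    u ON u.MEMBER_ID = m.MEMBER_ID
""").fetchall()

for member_name, alias_table, alias, uda in rows:
    print(member_name, alias_table, alias, uda)
```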
Additionally, each version of an outline is extracted and given a unique ID, meaning that you can extract an outline over and over again into the same tables without necessarily having to clear them beforehand. This again stands in contrast to the typical extraction format, which essentially holds one extraction per table at a time (although if you then choose to merge that into a separate table with a unique ID, you can).
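Here’s a sketch of how that per-extraction ID might be used, say, to diff two runs of the same outline (the automated outline change comparison use case mentioned below). Again, the OUTLINE_ID column and the CACHED_OUTLINE_MEMBERS/MEMBER_NAME names are assumed for illustration, not the tool’s documented schema:

```python
import sqlite3

conn = sqlite3.connect("outline_cache.db")

def member_names(conn, outline_id):
    """Return the set of member names captured by one extraction run."""
    rows = conn.execute(
        "SELECT MEMBER_NAME FROM CACHED_OUTLINE_MEMBERS WHERE OUTLINE_ID = ?",
        (outline_id,),
    )
    return {name for (name,) in rows}

# Compare two runs of the same outline, e.g. yesterday's and today's,
# to see which members were added or removed between extractions.
old, new = member_names(conn, 1), member_names(conn, 2)
print("Added:  ", sorted(new - old))
print("Removed:", sorted(old - new))
```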
In any case, I keep finding myself surprised at just how many people are using the relational extraction technique as part of their automation and day-to-day solutions, including techniques related to automated outline change comparisons, quasi-DRM solutions, and shuffling data around the enterprise for consumption in other solutions.
Hi Jason,
I really like the benefit of having multiple versions in the RDBMS tables.
If you’re looking for ideas, I think it would be cool to extend the tool to connect to PBCS (and get that data into the RDBMS tables).
Perhaps connect with four parameters: URL DOMAIN USERID PASSWORD (or password file) to store PBCS metadata into a relational repository for analysis.
Or instead of logging directly into PBCS, supply the tool with a Snapshot and have the metadata portion of it extracted and loaded to a relational database.
Hi Chris,
I’ve thought about connecting to PBCS. I think I’d want to rework the architecture of the outline extractor just a little bit before that, in order to make it more flexible with respect to input/output modules and input from the API versus MaxL. Also, for PBCS at the moment you’d have to supply the names of the dimensions yourself since, as far as I can see, there are no REST API endpoints that actually provide the dimension list. Not a big deal, but it makes things a little less smooth.
In any case, I’ll put PBCS support on the development roadmap. And see if you can find someone for me to talk to about my REST API wishlist. :-)
Do you have an example snapshot file you could provide me to do some analysis on?