Glitch with a non-printing Unicode character in member name

This is an Essbase bug, kind of. I’ve been working on a project lately that uses the relatively new MaxL Essbase outline export command (yes, Pete, I called it relatively new again, even though according to you it’s not… well, it’s relatively NEW TO ME SO THAT’S WHAT MATTERS… :-). Anyway, I wrote a quick XML parser for the output, using Java.

The nice thing about the parser is that it uses something in the Java language called JAXB. It’s a really nice way of modeling the contents of an XML file using Java classes, so that you don’t have to write your own parsing code, which is tedious and error prone. There are reasons you might use either approach, but overall I have been very happy over the last few years with the ability to write XML readers in Java with JAXB.

Curiously, I came across an outline export that would cause the parser to throw an exception. The Java stack trace indicated that an illegal character (0x1f – that’s not the character itself, rather, the Unicode character ID) was at fault. Specifically, character 0x1f is the “unit separator” character. In a nutshell you might say that while most of us are used to writing things with letters and numbers and things like tabs, spaces, and newlines, there happen to be all of these other weird characters that exist that have arcane uses or historical reasons for existing. It’s such a prevalent issue (or at least, can be) that many advanced text editors have various commands to “show invisibles” or non-printing characters. One such tool that many of us Essbase boffins are adept with is Notepad++ – a veritable Swiss army knife of a text editor.

Nicely enough, the Java stack trace indicated that the problem in the XML was with parsing a “name” attribute on a <Member> tag – in other words, an Essbase member name in the source outline contained an invisible character. As it turns out, in XML 1.0 it is illegal to have this particular character. So while Essbase happily generates invalid XML during the export, when I try to import it with Java, I get the exception. But how to find the offending member? I mean, how do you do a text search for an invisible character (seriously, this is like some “what is the sound of one hand clapping” kind of stuff).

In Notepad++ you can search for a regular expression. So I turned on Show Invisibles, pulled up the Find dialog, checked the “Use Regular Expressions” option, then typed in [\x1f], which is the regex code to tell Notepad++ to search for this little bastard of a character. Sure enough, there was exactly one in the output file, which surely snuck in from an otherwise innocuous copy and paste to EAS some time ago. I fixed the member name in EAS, reran the export, reprocessed with the parser, and all was well again in the universe.
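For what it’s worth, you can also sniff out the offending character programmatically. Here’s a minimal Java sketch (class and method names are my own invention, not part of any Essbase API) that checks a string – say, a member name pulled from the export – against the XML 1.0 legal character ranges:

```java
// Locate characters that are illegal in XML 1.0, such as the 0x1f
// "unit separator" that snuck into the member name. Uses only the JDK.
public class IllegalXmlCharFinder {

    // XML 1.0 Char production: #x9 | #xA | #xD | [#x20-#xD7FF]
    // | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
    static boolean isLegalXmlChar(int c) {
        return c == 0x9 || c == 0xA || c == 0xD
                || (c >= 0x20 && c <= 0xD7FF)
                || (c >= 0xE000 && c <= 0xFFFD)
                || (c >= 0x10000 && c <= 0x10FFFF);
    }

    // Returns the index of the first illegal character, or -1 if clean
    static int firstIllegalChar(String s) {
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            if (!isLegalXmlChar(cp)) {
                return i;
            }
            i += Character.charCount(cp);
        }
        return -1;
    }

    public static void main(String[] args) {
        // Hypothetical member name containing the unit separator
        String memberName = "Net\u001FSales";
        System.out.println(firstIllegalChar(memberName)); // prints 3
        System.out.println(firstIllegalChar("Net Sales")); // prints -1
    }
}
```

Running something like firstIllegalChar over each name attribute would have pointed straight at the bad member without any regex spelunking.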

cubus outperform EV Analytics Review: Position in the Enterprise

Welcome to the fourth installment of my increasingly inaccurate EV three-part review. If you missed the first three parts, you can check out the EV background, using EV, and using EV continued parts to catch up!

I hope you’ve enjoyed this little mini-series on a really interesting tool so far. As I mentioned in the first article, this tool has a special place in my heart owing to how critical it was to my Essbase life going back as far as 2005. I was quite the EV enthusiast back in the day within my company, but when I talked to people about trying it out or using it, it quite often fell flat. I’d often hear, “Why don’t I use the Excel add-in?” or “Why do I need another tool for that, I already have [whatever]” or “isn’t the enterprise standard [some enterprisey thing]?”

I see where these people were coming from. I get it. In a world where the tools that users are given are often prescribed quite strictly for them, having one more tool to support is not a matter to be taken lightly: licensing costs, support, training, and all the normal fun IT things.

For these reasons, I prefer to think of EV as another tool in the toolbox for my users – not the exclusive tool. It’s not the end-all-be-all enterprise reporting solution like Financial Reporting, and it’s also a distinct experience apart from Smart View. Consider these tools:

  • Smart View
  • Planning
  • Financial Reporting
  • Tableau
  • Dodeca
  • cubus EV

Then consider the following evaluation criteria:

  • Ease of use / Learning curve
  • Report definition handled by user or admin?
  • Installation
  • Data visualization ability
  • Primary usage reason
  • Relation to other tools

I won’t exhaustively put these all on a spectrum for each property (I’ll save that for a future post), but looking at a few of these products and these evaluation criteria, I can point out a few things.

Smart View, Tableau, and EV are all ostensibly self-serve: you just make them available to the user, they point it at a data source, and then perform analysis and reporting, much to their bean counting merriment. Planning, Dodeca, and Financial Reporting ostensibly require some administrator to have put in some structure ahead of time that the user will consume.

As for ease of use, EV certainly isn’t harder to use than Smart View – if anything, it’s a bit simpler. EV makes it hard if not impossible to put your grid into an inconsistent state with respect to the underlying OLAP data source, meaning that you can’t really screw things up by moving some member to the wrong column. Easier to use than EV, however, would be Dodeca and FR. Planning gets kind of its own special place on this spectrum (it’s not easy per se, it’s not hard… it’s something). Similarly for Tableau – there’s a bit of a learning curve, and simple reports are fairly straightforward, but the sky is the limit for some of the crazy visualizations you can do.

Speaking of data visualization, Tableau is quite clearly the champ out of all of these. Dodeca and Smart View have similar support for charting (by way of Excel or Excel-like charts). EV isn’t ostensibly a data visualization environment, but its visualization capabilities in terms of bread-and-butter charting are compelling, particularly the way that it is an absolutely seamless experience with respect to the data grid you’re working with. In other words, with Excel/Smart View you add in a chart if you want; in EV the data IS the chart if you want it to be. Or it’s just the data – or it’s both at the same time.

Installation for EV is pretty straightforward and a little better than Smart View since there isn’t an installer to worry about, so it’s nice being able to just give your users a URL and away they go. Similar props for Dodeca and most of the other tools on this list.

Final Thoughts

So what does this all add up to? I think that EV is a great tool to have IN the toolbox, but not the ONLY tool in the toolbox. Almost paradoxically it is a compelling tool for your advanced Smart View users but also for Smart View novices that may be intimidated by ad hoc queries and multi-dimensional concepts. EV rewards the user of a well-constructed cube, with a competent and functional UI that extends the value of properly deployed features, such as Dynamic Time Series, UDAs, attribute dimensions, sensible member names, and more.

On the other hand, it doesn’t seem to be for everyone: based on my own prior experience, it can be a confusing addition to the technological landscape for some IT managers (not to mention one more mouth to feed, system-wise), and might run into “But we already have X for that” syndrome. Again, I think it’s a complement and not a replacement or enterprise standard. There are countless scenarios I can imagine where if I were to be dropped into some enterprise as the benevolent dictator of all things BI (or OLAP, or EPM, or whatever), I would say “let’s take this thing out for a spin and see what people think” and would give Decision Systems or cubus a call.

Thoughts on deprecated Essbase 11.1.2.4 features and the future of EAS

The Hyperion blogging-verse has been quite aflutter with the release of 11.1.2.4. So I won’t bore you with a recap on new features, since that has been done quite effectively by my esteemed colleagues. I will say, however, that by all accounts it seems to be a great release.

Oracle is really starting to hit their stride with EPM releases.

As a brief aside: I seem to be in the relative minority of EPM developers in that I come from the computer science side of things (as opposed to Finance), so believe me when I say there is a tremendous amount of energy and time spent to develop great software: writing code, testing, documenting, and more. Software is kind of this odd beast that gets more and more complex the more you add on to it.

Sometimes the best thing a developer can do is delete code and remove features, rather than add things on. This is a very natural thing to happen in software development. Removing features can result in cleaner sections of code that are faster and easier to reason about. It typically sets the stage for developing something better down the road.

In the software world there is this notion of “deprecating” features. This generally means the developer is saying “Hey, we’re leaving this feature in – for now – but we discourage you from building anything that relies on it, we don’t really support it, and there’s a good chance that we are going to remove it completely in a future release.”

With that in mind, it was with a curious eye that I read the Essbase 11.1.2.4 release notes the other day – not so much with regard to what was added, but to what was taken away (deprecated). EIS is still dead (no surprise), the Visual Basic API is a dead end (again, not a secret), some essbase.cfg settings are dead, direct I/O is out (I have yet to hear a success story with direct I/O…), zlib block compression is gone (I’m oddly sad about this), but most interesting for now is this little tidbit: the Essbase Administration Services Java API is deprecated.

For those of you who aren’t aware, there is a normal Java API for Essbase that you may have heard of, but lurking off in the corner has been a Java API for EAS. This was a smallish API that one could use to create custom modules for EAS. It let you hook into the EAS GUI and add your own menu items and things. I played with it a little bit years ago and wrote a couple of small things for it, but nothing too fancy. As far as I know, the EAS Java API never really got used for anything major that wasn’t written by Oracle.

So, why deprecate this now? Like I said, it’s kind of Oracle’s way of saying to everyone, “Hey, don’t put resources into this, in fact, it’s going away and if you do put resources into it, and then you realize you wasted your time and money, we’re going to point to these release notes and say, hey, we told you so.”

Why is this interesting? A couple of things. One, I’m sad that I have to cross off a cool idea for a side project I had (because I’d rather not develop for something that’s being killed).

Two (and perhaps actually interesting), to me it signals that Oracle is reducing the “surface area” of EAS, as it were, so that they can more easily pivot to an EAS replacement. I’m not privy to any information from Oracle, but I see two possible roads to go down, both of which involve killing EAS:

Option 1: EAS gets reimplemented into a unified tool alongside Essbase Studio’s development environment.

Option 2: EAS functionality gets moved to the web with an ADF based front-end similar in nature to Planning’s web-based front-end.

I believe Option 2 is the more likely play.

I always got the impression from the Essbase Studio development environment that it was meant to more or less absorb EAS functionality (at least, more than it actually ever did). I say this based on early screenshots I saw and my interpretation of its current functionality. Also, Essbase Studio is implemented on the same framework that Eclipse (one of the most popular Java programming environments) is, which is to say that it’s implemented on an incredibly rich, modular, flexible environment that looks good on multiple OS environments and is easy to update.

In terms of choosing a client-side/native framework to build tools on, this would be the obvious choice for Oracle to make (and again, it seems like they did make this choice some time ago, then pulled back from it).

The alternative to a rich “fat client” is to go to the web. The web is a much more capable place than it was back in the Application Manager and original EAS days. My goodness, look at the Hyperion Planning and FDMEE interfaces and all the other magic that gets written with ADF. Clearly, it’s absolutely technically possible to implement the functionality that EAS provides in a web-based paradigm. Not only is it possible, but it also fits in great with the cloud.

In other words, if you’re paying whatever per month for your PBCS subscription, and you get a web-based interface to manage everything, how much of a jump is it for you to put Essbase itself in the cloud, and also have a web interface for managing that? Not much of a leap at all.

Camshaft 1.0.1 released

Quick bug fix release (thanks to Peter N. for the heads up!). There was a problem with the way the runnable JAR was packaged. A new version can be downloaded from the Camshaft downloads page.

Camshaft is a Java command-line utility that executes MDX queries against a given cube and returns the results in a sensible format for loading or processing with your own tools (as opposed to you having to use voodoo or something to try and parse it into something usable). So stop parsing header bullshit off of MDX queries and start parsing compliments from your users saying how awesome you are.

Drillbridge 1.3.3 re-updated

There was a little issue in Drillbridge that caused report editing to not work. This was due to a column I introduced whose SQL code to update the internal Drillbridge database was not set correctly, so the column didn’t get added. Then when you would go to edit a report, it would try to query a non-existent column, causing it to fail. I’ve since fixed this and re-uploaded Drillbridge 1.3.3. Please let me know if you run into any other issues.

Performance nuances with MaxL data imports with local and server

Some time ago, I reviewed and revamped the MaxL automation for a client. One of the major performance gains I got was actually pretty simple to implement but resulted in a huge performance improvement.

Did you know that the MaxL import data command can be told whether the file to load is a local data file or a server data file? Check out the MaxL reference here for a quick refresher. See that bold “local” after from? That’s the default, meaning that if we omit the keyword altogether, the MaxL interpreter just assumes it’s a local file.

Imagine that you have an Essbase server, and then a separate machine with the MaxL interpreter. This could be your local workstation or a dedicated automation server. Let’s say that there is a text file on your workstation at C:/Essbase/data.txt. You would craft a MaxL import command to import the local data file named C:/Essbase/data.txt. That’s because the file is local to the MaxL interpreter.

Now imagine that the file we want to load is actually on the server itself and we have a drive mapped (such as the Y: drive) from our workstation to the server. We can still import the data file as a local file, but this time it’s Y:/data.txt (assume that the drive is mapped directly to the folder containing the file).

In this scenario, MaxL reads the file over the network from the server to the client, then uploads that data right back to the server. This data flow is represented on the left side of this diagram:

MaxL data loads: server vs. local

You might be thinking, “But wait, the file is on the server, shouldn’t it be faster?” Well, no. But there’s hope. Now consider server file loading. In this case we use the server keyword on the import statement and we specify the name of the file to load. Note that the file location is based on the database being loaded to: if you’re loading to Sample Basic, then Essbase will look in the ../app/Sample/Basic folder for the file. If you don’t want to put files in the database folder, you can actually cheat a little bit and specify a relative path such as ..\..\data.txt. In this case, by specifying ..\..\, Essbase will go up two folders (to the \app folder) and look for the file there.

You can fudge the paths a little, but the key is this: Essbase loads the file from itself, without the MaxL client incurring the performance penalty of two full trips of the data over the network. This is depicted on the right side of the diagram: the MaxL client simply issues a data load command to the server, which then reads the file directly, and we never pay for shipping the data back and forth.
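For reference, the two flavors of the statement look something like this – a sketch only, with Sample.Basic and the file names as placeholders (check the MaxL reference for your version for the exact grammar):

```
/* Local: the MaxL client reads the file and streams it up to the server */
import database Sample.Basic data from local data_file 'C:/Essbase/data.txt'
    on error abort;

/* Server: Essbase opens the file itself from the app/Sample/Basic folder */
import database Sample.Basic data from server data_file 'data.txt'
    on error abort;
```

The only textual difference is that one keyword, but the data flow behind each statement is completely different.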

In my case, the automation was written to load a file that was already on the server (in the \app folder), so I just changed the import to be a server-style import, and immediately cut the data import time dramatically.

I wouldn’t be surprised if this “anti-pattern” is being used in other places – so take a look at your automation. Let me know if you find this in your environment and are able to get a performance boost!

 

Essbase Java API Consulting & Custom Development Available

I recently finished developing a solution for a client that involved writing a custom Java program using the Essbase API. The client was really amazed at how quickly I was able to develop the solution because their previous experience with using it (or hiring someone to develop with it for them) was not nearly as productive or smooth.

I graciously accepted their compliment and then told them that I’ve simply been working with the Essbase Java API for a long time – almost a decade now. Not only that, but I have several helper libraries that I use in most of my projects that prevent me from having to reinvent the wheel. By this time the libraries are quite battle-tested and robust and help speed up the development of many common operations such as pulling information out of the outline, running MDX queries, programmatically doing a data load, pulling statistical information, and more. Instead of spinning my wheels writing and rewriting the same boilerplate code, I accelerate development and focus on creating a good solution for the task at hand.

That all being said, for those of you out there finding this blog post now or in the future, whether you’re an administrator, consultant, manager, or other, and find yourself needing some help with a solution that involves Java development and utilizing the Essbase Java API, don’t hesitate to contact me. I am available through my consulting firm to do custom work or even fix existing solutions you already have that are now exhibiting some quirk or need an enhancement. My extensive experience with Java and this particular API means that I can get up and running fixing your problem, not learning how to do it while on the clock.

Do this, not that: Current vs. Prior Year dynamic calc in Scenario

Here’s just a quickie I saw the other day. Imagine a normal cube with a Years dimension, a Scenario dimension, and any other normal dimensions. Years contains FY2012, FY2013, FY2014 or similar and so on. Scenario contains Actual, Budget, and all the other normal stuff you’d expect to see.

Naturally, the Scenario dimension will contain all sorts of handy dynamic calcs, starting with our trusty Actual to Budget variance:

Actual vs. Budget: @VAR("Actual", "Budget");

So far so good.

How about a scenario that gives us the current year versus the prior year? Don’t do this:

@VAR("FY2014", "FY2013");

Or this (which is I guess slightly better but still not quite great):

@VAR(&CurrentYear, &PriorYear);

Why shouldn’t you do this? One, it requires maintenance – the kind of maintenance that is easily forgotten about until a user calls up and says that something doesn’t look quite right.
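Concretely, going the substitution variable route means somebody (or some automation) has to remember to roll the variables every single year, with MaxL along these lines (Sample.Basic and the variable names are just placeholders matching the example above):

```
/* Has to be rerun every year – forget it and the variance calc quietly goes stale */
alter database Sample.Basic set variable 'CurrentYear' 'FY2015';
alter database Sample.Basic set variable 'PriorYear' 'FY2014';
```

It’s a small script, but it’s one more thing that can silently not happen.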

Second and more importantly, it’s semantically wrong. Hard-coding the year effectively breaks the inter-dimensional promise that our cube is ostensibly making – which is that the Scenario value we’re looking at should be based on the current Year member – not some arbitrary member irrespective of the POV.

(This all being said, yes, there could be a legitimate design reason to code a dynamic calc in Scenario that is always the current year irrespective of the POV, but I digress).

A simple formula can get us the prior value:

@PRIOR("Actual", 1, @CHILDREN("Years"))

As well as the actual versus prior:

@VAR("Actual", @PRIOR("Actual", 1, @CHILDREN("Years")));

Note that this assumes there is nothing else in the Years dimension and that it’s got a typical “ascending” sort (2010, 2011, 2012, in that order). If you have a years dimension going in descending order you could put -1 in for the @PRIOR command or just switch to @NEXT.

There you have it – a simple cleanup that saves maintenance, doesn’t rely on substitution variables being updated, is intuitive, and more importantly, doesn’t break the semantics of the cube.

Calling all Seattle-based Hyperion enthusiasts!

Are you in the Seattle or greater Seattle region? Are you a Hyperion, Essbase, EPM, OLAP, BI, or whatever-you-want-to-call-it enthusiast? There is a new Hyperion User Group starting up that I am helping out with. We would love your feedback to see what kind of meetups and content YOU, the user, would be most interested in. If you’re in this area (or even a little further to the south in Oregon!) can you please take a few minutes to fill out a quick survey?

Thanks for your time!

Gadget Review: The Jiggler

So, theoretically speaking, let’s say you are doing some work for a client and they give you one of their laptops. Said client is super secure and employs all of the latest and greatest tricks and tips to minimize their risk. And one of those security techniques is to auto-lock the laptop after a certain number of minutes of no input. And this setting can’t be changed (because that’s locked down too). And you are working between that laptop and another laptop (say, a nice MacBook Pro…). What to do? Well, hypothetically speaking, you could plug in a little device that pretends your mouse is jiggling. In fact, it might even be called the Jiggler. Yeah, this might do the trick… just saying.