Essbase & Hyperion Papercuts

I am a software developer who regards the output of the software development process as akin to the woodworker making a fine jewelry box, the master chef perfecting a plate, or the musician crafting a song. I believe solutions should be elegant, robust, deceptively simple, and polished. I take the same approach to the code in my various open source Essbase-related Java and GitHub projects, and, perhaps more importantly, it is the critical filter through which I see and use Hyperion, Essbase, and related software.

To that end and just for my own amusement, I will henceforth be writing about various “Essbase and Hyperion papercuts” that I encounter. This is in a similar spirit to the Ubuntu Papercuts project from some time ago. The idea was that, in the aggregate, a lot of little issues start to become troublesome and lead to a worse user experience. As with many pieces of software, Essbase has this problem. It’s an awesome piece of software, but it has all these little warts and quirks. It’s as if NASA put a man on the moon (the scientific equivalent of Essbase at its heart) and then, stepping out of the lander onto the surface, he tripped on a faulty stair (this is like any number of little Essbase quirks I run into).

That being said, if there is some small, seemingly trivial and inconsequential Essbase/Hyperion (or Planning, or FDM, or ODI, or…) issue that just really grinds your gears, let me know about it! This blog gets a few hits from Oracle headquarters, so maybe, just maybe, someone will see it and we can all make the world’s best EPM software just a little bit cooler.

 

Do you have an Essbase or Hyperion blog? Let me know!

I follow a plethora (you like that vocab word for the day?) of Essbase and Hyperion blogs, in addition to other technical blogs I love to follow covering ODI, cloud computing, big data, and iOS development. I think I have a pretty solid list of blogs, but there are always new ones popping up that I want to know about! Do you have an Essbase, Hyperion, EPM, or other related topic blog? Please email or tweet it to me!

Also, some of my favorite Essbase-related blogs are linked in the footer of this website, so check them out. And similarly, if you like this blog, then please consider adding it to your blogroll or list of links so we can all share the Essbase blog lovin’.

Thanks!

Pointing an ODI Procedure to different schemas on the same server to facilitate testing

One of the truly awesome and powerful features of Oracle Data Integrator is the ability to develop interfaces against a logical schema and let ODI choose the physical schema at run time. In other words, we have the same logical job that should work in development, quality assurance, and production environments – without having to create and maintain multiple copies of the same job.

Some environments, however, are less than ideal and don’t have separate servers for testing (!). ODI really, really benefits from extensive testing in development, though. So what to do? Create a separate schema on the production server. We can then map the test context in ODI to this separate schema and more or less approximate having a true testing environment.

What if we need to write an ODI Procedure, though? Obviously our first choice is to create an Interface, but we need to drop down to procedures here and there for various reasons. Normally a procedure works just fine across different physical servers because the code being executed, including any schema references, is the same. That’s not true when we have a faux test schema on our production server. We might have CUSTOMERS and our newly created CUSTOMERS_DEV, or something similar. If an ODI procedure wants to do an update on a table, for instance, it might look like this:

UPDATE CUSTOMERS.INFO SET fav_color = 'Red'

Our procedure is tied to a particular schema. Again, if we had the same schema name on different physical servers, this wouldn’t be an issue. But we don’t in this scenario. Let’s update the procedure to pull the proper schema name from the ODI context so that it works in both our quasi-dev and production environments:

UPDATE <%=odiRef.getSchemaName()%>.INFO SET fav_color = 'Red'

Tada! In the development context, the schema CUSTOMERS_DEV will be used, and in production, CUSTOMERS will be used. Of course, the ideal solution is to have a full test and production environment, but in a pinch this will allow us to do testing on the same server and ensure that our ODI implementation is well-tested, correct, and robust.
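The substitution mechanics can be sketched outside of ODI. Here is a minimal Python sketch of how one template resolves to different schemas per context, much like <%=odiRef.getSchemaName()%> does at run time; the context names and schema mapping here are hypothetical examples, not anything ODI provides:

```python
# Map each ODI-style context to the physical schema it should resolve to.
# These context and schema names are hypothetical examples.
SCHEMA_BY_CONTEXT = {
    "DEVELOPMENT": "CUSTOMERS_DEV",
    "PRODUCTION": "CUSTOMERS",
}

def render_update(context):
    """Resolve the schema for the given context and fill in the template,
    analogous to <%=odiRef.getSchemaName()%> in an ODI procedure."""
    schema = SCHEMA_BY_CONTEXT[context]
    return f"UPDATE {schema}.INFO SET fav_color = 'Red'"

print(render_update("DEVELOPMENT"))  # UPDATE CUSTOMERS_DEV.INFO SET fav_color = 'Red'
print(render_update("PRODUCTION"))   # UPDATE CUSTOMERS.INFO SET fav_color = 'Red'
```

The key design point is the same as in ODI: the job text is written once, and only the context-to-schema mapping differs between environments.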

Fixing an esoteric Oracle Incremental Merge Update Knowledge module bug with journalizing and an aggregate function

Editing knowledge modules is not for the faint of heart. KMs translate your interfaces and other fun business things into executable code. In short, they convert ODI’s mostly declarative “what” definition of a job into executable units by taking care of the “how.” There are numerous options available to interfaces, and these largely shape the code that is generated for your job.

I ran into a problem the other week where my integration into an Oracle schema by way of the Oracle Incremental Update (MERGE) IKM from a journalized data source using an aggregate function was not working.

Thankfully, someone else had run into this issue and Google turned up a quite helpful OTN post that solved 99% of the problem for me. But the post is from a little while ago and didn’t fully work for me.

Here’s the relevant fix code from the brave and helpful OTN poster:

<%=odiRef.getGrpBy()%>
<%if ((odiRef.getGrpBy().length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
<%}%>

This doesn’t quite work, owing to a relatively new ODI feature: multiple data sets in the source of an interface. Multiple data sets are a really, really cool feature that was added a while back. They basically let you take the UNION of different data sources and integrate them into the same target, which is useful compared to needing multiple interfaces for the same target. You can also take the intersection of rows, or minus them out, but I haven’t quite had occasion to use that just yet.

As it pertains to us, though, much of the ODI-generated code now takes a parameter specifying which of the data sets to perform an action on. Here’s the updated update (yes, that’s two “update”s in a row):

<%=odiRef.getGrpBy(i)%>
<% if ((odiRef.getGrpBy(i).length() > 0) && (odiRef.getPop("HAS_JRN").equals("1"))) {%>
,JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE
<%}%>

Note the addition of (i) in two places, which is how the data set is specified. Many thanks to the original poster for helping out in the first place and helping me fix my issue.
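To make the template’s intent concrete, here is a small Python sketch of the logic the KM snippet implements (a plain illustration, not the ODI substitution API itself): the journal columns get appended only when a GROUP BY clause exists and the source is journalized.

```python
# Columns that ODI journalizing adds to the source
JRN_COLS = ",JRN_FLAG,JRN_SUBSCRIBER,JRN_DATE"

def group_by_clause(grp_by, has_jrn):
    """Mimic the KM snippet: emit the GROUP BY clause for one data set and,
    only when a GROUP BY exists and the source is journalized, append the
    journal columns so the generated SQL stays valid."""
    if grp_by and has_jrn:
        return grp_by + JRN_COLS
    return grp_by

# One clause per data set, akin to calling odiRef.getGrpBy(i) for data set i
print(group_by_clause("GROUP BY CUST_ID", True))
print(group_by_clause("GROUP BY CUST_ID", False))
```

The bug in the original (pre-data-set) fix was simply that the clause was fetched once for the whole interface rather than per data set; the logic itself is unchanged.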

One extra thing to validate in that load rule, even if all looks well

This isn’t one you will run into too often, if ever, but it seems worth noting. I was working on a load rule the other day (yay, load rules!) and everything validated just fine. There’s nothing better than the smell of an “everything validated correctly!” dialog in the morning. I am on a project refactoring a cube, and one of the small changes was that one dimension’s name changed from Accounts to Account. One of the core load rules for this cube uses the “sign flip on UDA” feature. For this feature to work (I have used it many times before and love it [but not as much as I love EIS…]), it needs to specify the dimension and the UDA of the members in that dimension whose signs should flip. Well, sign flipping wasn’t working, and even though the load rule validated just fine, the problem was the non-existent dimension in the sign flip configuration. So, although there could be a reason not to validate this, it seems reasonable (if nothing else, for the sake of completeness) for load rule verification to check that the dimension name specified for sign flipping actually exists. It would have saved me a little bit of time.

In case you missed it: Oracle Data Integrator success story webinar

A little while ago I presented a success story using Oracle Data Integrator. The webinar goes over ODI functionality at a high level, giving a quick tour of the benefits of the major functional areas and how they were used to greatly improve and build out the ETL and automation at a high profile client. If you are interested in using ODI in your organization, or aren’t sure (but want to know more!), then I strongly encourage you to check it out. ODI is now one of my specialization areas, one in which I have been doing a tremendous amount of work, so I look forward to becoming an expert in this technology and sharing that expertise with others.

Note that this webinar, while technical, is not a deep dive into implementation details. I am planning a followup webinar to very comprehensively look at using ODI as an ETL/automation solution and implementing it with best practices. This will be the webinar to look for if you are more technical and really want to see the ins and outs of how things work and can be architected for robustness, maintainability, and efficiency.

Using ODI error tables (and tables with dollar signs in their names) as models for transformations

I set up a solution a while ago where ODI facilitates data movement from an ODI error table into another data store. As you may know, ODI has a naming convention where its temporary and work tables are prefixed with some combination of a letter (or two), a dollar sign, and perhaps an underscore. So if my original table in a relational data store is CUSTOMERS, the error table for it would be E$_CUSTOMERS.

So of course I fire up my RKM to build out the model metadata for these new tables, just as I normally would. Everything goes fine, but when I build the interface and run it, things don’t work so well. A dollar sign ($) is a very common character for scripts and programming languages to indicate that variable interpolation should occur (in other words, to fill in a placeholder). For example, if the variable $color = “orange” and you interpolate a string such as echo “My favorite color is $color!”, the result is “My favorite color is orange!” Nothing too fancy here.
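Python’s string.Template makes the hazard easy to demonstrate. This is just an illustration of the general $-interpolation problem with a table name like E$_CUSTOMERS, not a claim about what ODI does internally:

```python
from string import Template

# To the template engine, '$_CUSTOMERS' inside 'E$_CUSTOMERS' looks like a
# placeholder named '_CUSTOMERS', not part of the table name.
sql = Template("SELECT * FROM E$_CUSTOMERS")

try:
    sql.substitute()  # no value supplied for the accidental placeholder
except KeyError as e:
    print("interpolation failed on placeholder:", e)

# Escaping the dollar sign ($$) keeps the literal name intact
safe = Template("SELECT * FROM E$$_CUSTOMERS").substitute()
print(safe)  # SELECT * FROM E$_CUSTOMERS
```

This is the same class of failure as renaming the alias to E__CUSTOMERS avoids: once the dollar sign is out of the code path that interpolates, the accidental placeholder disappears.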

That being said, sometimes a dollar sign can wreak havoc when it shows up where some code doesn’t expect it. I’m not sure if it ever got fixed, but I once developed at a place that doled out Active Directory IDs built from letters of the first and last name – three from each, to be precise. People with very short last names would have dollar signs appended to their IDs. There was a bug in Hyperion where you couldn’t log in with one of these user names, likely because it just never got tested (oops). I’ll assume that’s fixed now.

In any case, back to ODI. A given ODI datastore has a name, a resource name, and an alias. The name is how you see it in the ODI design context. The resource name is the actual resource in the schema/database. The alias is used by ODI in interfaces as, well, an alias. I didn’t record the exact error message, but given that my interface was failing and the output in Operator was a bit odd, I immediately suspected the dollar sign was throwing things off, either in ODI’s generated code, on the target system (an Oracle database server), or both.

My quick solution? Just replace the dollar sign with an underscore in the alias in the datastore. You can delete the dollar sign entirely but I decided to just go with an underscore. By making the change in the datastore, subsequent interfaces leveraging this datastore will also benefit from the fix, as opposed to trying to adjust just the one interface. Of course, leave the resource name alone (and the name) since they don’t need to be changed (and in the case of the resource name, shouldn’t be changed).

Another approach to all of this would be to change the physical topology’s naming convention so that error tables (and other entities ODI names) don’t get a dollar sign, but I quite like ODI’s preferred naming strategy (for the most part).

Linux troubleshooting guide for system admins!

Most, but not all, of my Essbase administration experience is on Windows servers. Linux support appeared years ago and has gotten much better – and more common – as the years have progressed. I ran Linux as my desktop for many years (Slackware, Fedora, Gentoo [shudder], Ubuntu, and more) before falling in love with OS X, so I’m pretty comfortable on a Linux command line (and an OS X command line, for that matter). But I came across this server troubleshooting article a while back that has some absolutely awesome stuff in it, much of it new to me. If you need to get into a Linux system and start digging around to see what’s going on, this is a fantastic guide.

My ODTUG Kscope13 presentation: Practical Essbase Web Services

It has been a few years since I last presented at Kscope, but I am back this year! I will be presenting on “Practical Essbase Web Services” – this will be my take on the new web services features from recent Essbase versions, as well as drawing on my experience developing mobile solutions, developing Essbase middle tiers with the Java API, and other approaches to extracting data from Essbase. For those of you in C# shops or wanting to get at Essbase data from your other favorite languages (I’m looking at you, PHP, Python, and Clojure), this should be a fun overview of your options. I’ll look forward to seeing you there – and if you are interested in the presentation but aren’t going to ODTUG’s Kscope, let me know!

Portable Firefox to the rescue!

Just a quick tip I’ve been meaning to mention as my schedule allows for a few more posts these days… Have you ever heard of PortableApps.com? I used to use them quite a bit, although not too much lately. These are popular software applications packaged with the specific intent of running them from a USB stick. They have web browsers, email clients, virus scanners, image editors, and more, all of them open source applications. The nice thing is that you don’t have to install them; they are ready to run as-is. This also comes in particularly handy if you have an existing installation of some software (again, using Firefox as an example) and don’t want to mess with it. I ran into an issue with a client a while back where Internet Explorer was having trouble logging in to Workspace/Planning, but that particular version (something around 11.1.1.3) didn’t work with the current version of Firefox. It was, however, certified to work with a much older version of Firefox – version 3.5. What to do? Go to PortableApps.com, download the old Firefox 3.5 binary, run it from the local hard drive without having to install, and presto, I was good to go. Next time you have a browser acting up or need to switch things up, this might be a decent way to go.