This post is proof of my laziness. I’m just kidding. You know that I’m not lazy 🙂
A friend of mine came to me with a question: “How can I create a dynamic load rule?” Well, that’s not his exact question, but let’s pretend for a while that it was and see how we can solve it. (His actual question was how to export data from BSO and load it into ASO.)
I do know that there are similar threads on the Oracle forums, and the obvious answer is to create a DATAEXPORT calc script:
Get the data out of the source, then use a load rule to import it into the target.
The question many of us had/have is: how does it determine the rows and columns? If you don’t specify DataExportColHeader, Essbase uses the first dense dimension as the column dimension.
If you look at the explanation, you’ll find that it needs a dense dimension. It is easier to use a static dimension like Period as the column dimension, since it won’t change.
A static dimension is one where you are not adding members and where the DATAEXPORT calc script is not referring to a member set that changes.
For example, if I have @RELATIVE(YearTotal,0) in my calc script, I still know that it is 12/13 members, from BegBalance/Jan through Dec.
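For reference, such a script might look like the sketch below — the FIX members, delimiter, and file path are illustrative placeholders, not taken from any real outline:

```
SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
/* Pin the static dense dimension as the column dimension */
DataExportColHeader "Period";
};
/* Illustrative FIX selection and export path */
FIX ("Actual", @RELATIVE("YearTotal", 0))
 DATAEXPORT "File" "," "C:\Temp\export.txt" "#Mi";
ENDFIX
```

Even if more level-0 Period members show up in the export, the column layout stays predictable, so the load rule doesn’t need to change.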
What if Period is not dense in my cube and I don’t have a static dimension to select?
Just for the context of this blog post, assume that Sample Basic has only one dense dimension – Measures – and a similar structure.
If I create a calc script similar to the one given below, I’ll get Measures as the column dimension.
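A minimal sketch of such a script (the FIX selection and file path are made-up placeholders). With no DataExportColHeader specified, Essbase picks the first dense dimension — here the only one, Measures — as the columns:

```
SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
/* No DataExportColHeader, so the first dense dimension
   (Measures, in our pretend Sample Basic) becomes the columns */
};
FIX (@RELATIVE("Product", 0), @RELATIVE("Market", 0))
 DATAEXPORT "File" "," "C:\Temp\100-10.txt" "#Mi";
ENDFIX
```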
Let’s imagine that next month I add more members to the parent “Profit”.
My data file will change, and I’ll have to change my load rule to reflect those changes.
Yes, I can do that, but can I make it dynamic so that I don’t need to edit the load rule (or verify that it is correct) every time?
I know that there is an option in the load rule where I can specify the record which contains the field names – “Record containing data load field names”.
Oh yeah, we know that one, but what are you going to do with that?
Let’s look at the load rule. Well it’s just like any other rule.
Now let’s open the data file.
See how it moved line 1 up as the field names? However, it is not correct, because those are my data fields.
How can I get the row dimensions to show up in the export file? (See, we asked you earlier, “Where are you going with that option?” 😉 )
If you look at SET DATAEXPORTOPTIONS, there is an option to get the dimension names as a header – DataExportDimHeader ON;
Let’s add that to our script and see how that works.
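The option just slots into the options block. A sketch, with the same illustrative FIX and path as before:

```
SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
/* Write the dimension names as a header record */
DataExportDimHeader ON;
};
FIX (@RELATIVE("Product", 0), @RELATIVE("Market", 0))
 DATAEXPORT "File" "," "C:\Temp\100-10.txt" "#Mi";
ENDFIX
```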
Hmm, that’s interesting: I got the dimension names, but I lost the column field names.
How can I get the headers to come out as a single line?
Not to worry, here comes the friendly neighborhood Spiderman – sorry, the sed command 🙂
sed -i ":a;N;$!ba;s/\n/\t/;s/\"Measures\"\t//" C:\Temp\100-10.txt
If you notice, the dimension name Measures also gets displayed when we use DataExportDimHeader. We don’t need it, because we already have the member names from Measures as the columns. What the sed command does is join line 2 onto line 1 (replacing the first newline with a tab) and then remove "Measures" along with its trailing tab.
You can replace "Measures" with your own dimension name, and if the delimiter is a comma, change \t to ,
sed -i ":a;N;$!ba;s/\n/,/;s/\"Measures\",//" C:\Temp\100-10.txt
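To see what the comma-delimited version actually does, here is a tiny reproducible sketch with a made-up two-column export — the member names and values are invented for illustration:

```shell
# Recreate a miniature export: line 1 = dimension names (from
# DataExportDimHeader ON), line 2 = column member names, then data.
printf '"Product","Market","Measures"\n"Sales","COGS"\n"100-10","New York",10,5\n' > export.txt

# Join line 2 onto line 1, then drop the stray "Measures" header cell
sed -i ':a;N;$!ba;s/\n/,/;s/"Measures",//' export.txt

head -1 export.txt   # "Product","Market","Sales","COGS"
```

The result is a single header line: row dimension names first, then the column member names — exactly what the load rule’s field-name record expects.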
Open the data file in the load rule again.
Tada, a dynamic load rule!!!!
For Windows users, you can download the sed editor from here.
You’ll need a DLL file for the sed editor to work; you can download that from here.
(Keep both files in the same folder.)
Note: If you open the rule without the data file, it won’t show the field names 🙂