Essbase String to Number CDF

I was looking at the comments on my post about String Essbase CDF functions and found an odd request there.

I couldn't find a reason why someone would try doing that. However, I ended up creating four functions to fulfill the request!

The request was to create a String to Number function.

Let's say that you want to convert some of your metadata to data in Essbase; there is no built-in function today which can do this. (I'm not sure whether there are some CDFs out there.)

Let's see how it works.

I took Sample Basic and created two members which will hold the integer and double values.

StringInteger and StringDouble

For the next two functions, you can pass the replacement character (for example, a product name contains "-" and I would like to remove it and store just the numbers).

StringDoublewReplace and StringIntegerwReplace members were added to hold these numbers.
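
To see how these would be used, here is a minimal calc script sketch. The registered function names (@JSTRTODOUBLE and @JSTRTODOUBLEREPLACE) are my assumptions - use whatever names you register the CDF under.

    /* Sketch only: convert the current Product member's name to data */
    FIX ("New York", "Actual", "Jan")
        /* "100" -> 100 */
        "StringDouble" = @JSTRTODOUBLE(@NAME(@CURRMBR("Product")));
        /* "100-10" -> 10010 once the "-" is removed */
        "StringDoublewReplace" = @JSTRTODOUBLEREPLACE(@NAME(@CURRMBR("Product")), "-");
    ENDFIX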

You can register the functions using MaxL statements like the ones below.
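
A hedged example of the registration (the function names, class, and package are assumptions - match them to the jar you deploy; the two integer variants are registered the same way):

    create or replace function '@JSTRTODOUBLE'
    as 'com.example.cdf.StringToNumber.stringToDouble(String)'
    spec '@JSTRTODOUBLE(memberName)'
    comment 'Converts a string to a double';

    create or replace function '@JSTRTODOUBLEREPLACE'
    as 'com.example.cdf.StringToNumber.stringToDoubleReplace(String,String)'
    spec '@JSTRTODOUBLEREPLACE(memberName, charToRemove)'
    comment 'Removes a character, then converts the string to a double';
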
You can download the CDF from here. For installation instructions please look here.

Oracle EPM Workspace - HTTP 404.13 Request exceeds content length

I don't think you will ever get this error, but if you did and you landed on this post, you now have an answer to solve it :)

Now you'll be thinking "How did this guy get the error?"

I always try to break things (and fix them later). I was trying to add a MIME type for MP4 files so that I could upload video files in Workspace. If you are interested in knowing what MIME is, have a look here.

You can add a new MIME type by navigating to Administer -> Reporting and Analysis -> MIME Types in Workspace.

I did add one and then tried to load an MP4 file. During the upload process I got an error (one which I'd never seen before):

"HTTP Error 404.13 - Not Found
The request filtering module is configured to deny a request that exceeds the request content length."

The good thing is that the error page itself points to an article and even lists a solution :) (didn't have to test my GoogleFu :))

The suggested solution is to edit applicationHost.config under C:\Windows\System32\inetsrv\Config and update <requestLimits maxAllowedContentLength="yournumber" />.
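
For reference, the suggested edit sits under the request filtering section; the value is in bytes (the IIS default is 30000000, about 28.6 MB):

    <system.webServer>
      <security>
        <requestFiltering>
          <!-- 104857600 bytes = 100 MB -->
          <requestLimits maxAllowedContentLength="104857600" />
        </requestFiltering>
      </security>
    </system.webServer>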

The bad thing is it won't work :(

I'm not sure why it doesn't update the setting in IIS; you have to change the content length manually in IIS Manager.

Log in to IIS Manager and click "Request Filtering".

Click "Edit Feature Settings"

Update "Maximum allowed content length" to a bigger number. (I updated it to 100000000 bytes, which is roughly 100 MB.)

Once the update was done, I could upload a big MP4 file and share it with my users :)

Calculation Manager Execute MaxL and MDX Data Copy

It's been a long time, dear reader. My apologies for not updating this blog.

There were many reasons.

  • I bought a house and was busy settling in.
  • I picked up a new hobby - gardening and landscaping (which was kind of obvious). I was always fascinated with gardening; when I was a kid I used to pull up plants to see whether the roots were coming or not :)
  • I was busy preparing for KScope - my first KScope, and it was a great experience to meet all the people you know from forums, blogs, and so on. You should sign up for Kscope15.
For all those who thanked me during the conference for this blog (and for those who are reading this), I would like to thank you instead - you are the reason behind this blog.

I wanted to write about this long back, when the feature was introduced, but it became just another post (like the other 25 of them) in my drafts folder. Now you might be wondering why I'm posting about this all of a sudden. Everything happens for a reason, and I'll talk about that real soon :)

CalcMgrExecuteEncryptMaxLFile and CalcMgrExecuteMaxLEnScript were introduced in 11.1.2.3.000. What do they do?

They do exactly what the names suggest - they can execute an encrypted MaxL script or a MaxL file.

Imagine a scenario where you have a Planning application and an ASO reporting cube. To move data, we have to run a batch which exports the data and then uses MaxL scripts to load it into the ASO cube.

Now imagine what such a function can do. You can write a rule in Planning which exports only the data the user is working on (using variables - POV, pages) and loads it into the ASO cube. Isn't that wonderful?
Yeah, I know what you are thinking - what happens if multiple users execute the same script? For now, DATAEXPORT won't allow the filename to be parameterized. However, think about the possibilities!

I didn't use Planning (I love Sample Basic) in my example; however, it'll work the same way.

I have a new database, BasicT (this one has more members in Market). I'm extracting "New York" data and loading it into "India", and here is what my calc script looks like.
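
A minimal sketch of such a script, assuming the documented signature @CalcMgrExecuteEncryptMaxLFile(key, maxlFileName, arguments, asyncFlag); the key, paths, and argument list here are illustrative, not my actual values:

    /* Export the source slice, then run the MaxL file that loads it.
       Note the CDF sits inside a member block, so it fires for every
       combination that block calculates - one reason RUNJAVA is better. */
    FIX ("New York")
        DATAEXPORT "File" "," "c:/Tmp/NewYork.txt";
    ENDFIX

    "Sales"(
        @CalcMgrExecuteEncryptMaxLFile("myKey",
            "c:/Tmp/load_basict.mxl",
            @LIST("admin", "password", "localhost"),
            "true");
    );
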
And this is my MaxL statement.
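
Again a sketch; the rules file (which maps New York to India) and the paths are assumptions:

    /* c:/Tmp/load_basict.mxl */
    login $1 identified by $2 on $3;
    import database 'Sample'.'BasicT' data
        from data_file 'c:/Tmp/NewYork.txt'
        using server rules_file 'LoadNY'
        on error abort;
    logout;
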
Execute the script and there you go - data loaded to the target.

I thought about using RUNJAVA just to see the syntax; however, I will advise you all to always use RUNJAVA. There is a reason behind it :)

The syntax for RUNJAVA is different from the function's, and here is how it looks.
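
Something along these lines; the class is the one Calc Manager ships its MaxL CDFs in, but the argument order here is my assumption:

    RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions
        "true"                           /* asynchronous */
        "c:/Tmp/load_basict.mxl"         /* MaxL file to execute */
        "admin" "password" "localhost";  /* arguments passed to the script */
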
As you can see, RUNJAVA doesn't need a FIX statement, and it will not execute the statement once for every member. We'll talk about this in detail when we look at the next undocumented functions.

Now, if you ask me what true and false do in the function, the only answer I have is that it runs asynchronously when true. I've seen that true is fast.

Now let's talk about two undocumented functions. They are the reason why I'm writing this post; I was excited to figure out the syntax for them.

CalcMgrMDXDataCopy
CalcMgrMDXDynDataCopy

Oh, what do they do?

  • The first function accepts an MDX query, exports the data from a BSO cube, and loads it into a target cube (ASO/BSO).
  • The second function, for those not comfortable with MDX, writes the MDX query for you, then exports the data from a BSO cube and loads it into a target cube (ASO/BSO).
Why was I excited about it?

Think about performing a data copy in ASO - right now we have to depend on execute allocation (using share). Yeah, I know it works; however, think about just writing an MDX query and loading the result into a different application, or think about a DATAEXPORT-like script for ASO.

Yes, the function generates a neatly formatted report from the MDX script. I had a feeling that the function was using the Grid API, and Sree Menon (PM for Calc Manager) confirmed that they are using the Grid API to export data.

For all the folks who are interested: Sree Menon mentioned that it was a POC in which the dev team wanted to move data from BSO to ASO. Well, we just got lucky, didn't we? ;)

I wrote a calc script to export data from Sample Basic and load it into Sample BasicT.
What was it doing? It kept exporting/loading the same data for every member it processed. (I didn't have a clue - Sree informed me about this.) This is the reason why I asked you to always stick with RUNJAVA.

Following are Sree's words on the syntax:
Since this was just a POC, source/target mapping does not work like partitioning. It is just source member to target member (Apr to April). No functions are supported.
Since the MDX can return large data, for performance, maxRowsPerPage could be set to, say, 1000, which means as soon as the grid has 1000 rows, an update to ASO is done.
At the end the remaining rows are also pushed. If maxRowsPerPage is set to "-1", then all the data is filled into one grid.
sourceMemberMappings could be something like @List("Apr", "Qtr1") and targetMemberMappings could be @List("April", "Quarter1").
After following his advice, this is how the script looks.
Here is the syntax from the specs: @CalcMgrMDXDataCopy(user, password, useToken, sourceApplicationName, srcDatabaseName, targetApplicationName, targetDatabaseName, mdxQuery, loggingFileName, sourceMemberMappings, targetMemberMappings, maxRowsPerPage, tgtPOVCols, server)
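
To make that concrete, here is a hypothetical call following the spec (credentials, MDX, paths, and server name are all illustrative):

    "Sales"(
        @CalcMgrMDXDataCopy("admin", "password", "false",
            "Sample", "Basic",               /* source application/database */
            "Sample", "BasicT",              /* target application/database */
            "SELECT {[Jan]} ON COLUMNS, {[Sales]} ON ROWS FROM [Sample].[Basic] WHERE ([New York])",
            "c:/Tmp/mdxdatacopy.log",        /* log file */
            @LIST("New York"),               /* source member mappings */
            @LIST("India"),                  /* target member mappings */
            "1000",                          /* maxRowsPerPage */
            "",                              /* tgtPOVCols - not used here */
            "localhost");                    /* server */
    );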

I didn't use tgtPOVCols here, but if the target has different dimensionality and you want to load the data to a fixed POV, you can mention those columns here. Let's look at the output now.
I didn't do any formatting - that log is the entire output. Isn't that really neat?

I had some trouble getting the syntax for CalcMgrMDXDynDataCopy.

I think the definitions are a little messed up in the specs.

This is from the specs

@CalcMgrMDXDynDataCopy(user, password, useToken, sourceApplicationName, srcDatabaseName, targetApplicationName, targetDatabaseName, logFileName, sourceMemberMappings, targetMemberMappings, maxRowsPerPage, slicerAxisQuery, axisQuery, povCols, server)

and this is the one which works. So I think this is what the spec should really look like:

@CalcMgrMDXDynDataCopy(user, password, useToken, sourceApplicationName, srcDatabaseName, targetApplicationName, targetDatabaseName, logFileName, sourceMemberMappings, targetMemberMappings, maxRowsPerPage, povCols, slicerAxisQuery, axisQuery);

You don't need a server for this one.
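
A purely illustrative instantiation of that corrected spec - the string format the two axis-query arguments expect is not documented, so treat every literal here as a placeholder:

    "Sales"(
        @CalcMgrMDXDynDataCopy("admin", "password", "false",
            "Sample", "Basic", "Sample", "BasicT",
            "c:/Tmp/mdxdyndatacopy.log",
            @LIST("New York"), @LIST("India"),
            "-1",                     /* maxRowsPerPage: everything in one grid */
            "",                       /* povCols */
            "([New York])",           /* slicerAxisQuery - placeholder */
            "{[Jan]}, {[Sales]}");    /* axisQuery - placeholder */
    );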

What will this do? It generates the MDX for you!

In the log output, the first line is the MDX generated by the function.
That was a really lengthy post, and I hope the Calc Manager team will expand these functions and give us an ASO DATAEXPORT and more :)

Export/Query Batch Scheduler details

Someone asked this question on OTN, and the generic answer was "no, it is not available."

I knew that we could generate it, and I had a reason to believe so - it shows up in Workspace. Right-click on the batch and look at Show Details - there it is.

However, it took me a while to get this done. I tried to explore something new ;) (I was looking to add a stylesheet to the XML file and generate the output.)

The details are captured in XML (oh, I forgot to mention that the Batch Scheduler adds an entry to a table called FR_SCHEDULER_DATA) and stored in the XML column. The question was how to return those values.

I looked at the XML (by opening it in SQL Developer) and found that it has the start time and end time (which proved my theory).
PARAMVALUE is an epoch timestamp.

Since it is an XML file, I was thinking about using Extensible Stylesheet Language (XSL). Here is what I got after creating the XSL file.
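
A minimal sketch of the approach; the PARAM/PARAMNAME/PARAMVALUE node names are assumptions based on what the XML showed:

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <html><body><table border="1">
          <!-- one row per name/value pair in the scheduler XML -->
          <xsl:for-each select="//PARAM">
            <tr>
              <td><xsl:value-of select="PARAMNAME"/></td>
              <td><xsl:value-of select="PARAMVALUE"/></td>
            </tr>
          </xsl:for-each>
        </table></body></html>
      </xsl:template>
    </xsl:stylesheet>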

It looks great; however, it's not that usable, because you'll have to extract the XML from the table and create a file for each job - it is not easy.

I'm not a good SQL programmer, and it took me a while (Google ;)) to get code which achieves what is needed.

SQL

The SQL below will extract a scheduled batch's name, its status, start time, and end time.
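
A sketch in Oracle syntax; the XML column name (XML_DATA), the node paths, and the assumption that the epoch values are in milliseconds all need to be adjusted to what you actually see in FR_SCHEDULER_DATA:

    -- Epoch millis -> date: add seconds to 01-JAN-1970
    SELECT x.batch_name,
           x.status,
           DATE '1970-01-01' + NUMTODSINTERVAL(x.start_epoch / 1000, 'SECOND') AS start_time,
           DATE '1970-01-01' + NUMTODSINTERVAL(x.end_epoch   / 1000, 'SECOND') AS end_time
    FROM   fr_scheduler_data t,
           XMLTABLE('/JOB'
                    PASSING XMLTYPE(t.xml_data)
                    COLUMNS batch_name  VARCHAR2(100) PATH 'NAME',
                            status      VARCHAR2(30)  PATH 'STATUS',
                            start_epoch NUMBER        PATH 'START_TIME',
                            end_epoch   NUMBER        PATH 'END_TIME') x;
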
In the output, if you see two timings reported against a batch, the first is the start time and the second is the end time.

Change History Years in Planning on an EPMA application - Understanding EPMA tables Part II

This is the second post on understanding EPMA tables; if you missed the first one, here is the link.

When I wrote the first post on EPMA, my intention was to convert all the Classic Planning hacks (like adding history years, changing the weekly distribution, changing the start month) into EPMA hacks.

I started with adding history years and was not sure whether I could achieve it. I shelved this post for 7 months!
I couldn't accept that I couldn't do it. There are days like that!

After 7 months, it is a great feeling to have finally cracked it.

What we are going to discuss is not a supported/recommended approach; proceed at your own risk.

I have an EPMA application with a start year of FY11.

before change

I decided to share the Year dimension. In 11.1.2.3.500, it'll warn you about it - it won't allow you to share the Year dimension.


Nice :) I had to delete the Year dimension and then add it back from the Shared Library.

Since we are talking about dimensions, let's look at a table called DS_DIMENSION.

year dimension

Don't think that this table only holds "dimension" information; you'll see that even the Application Class (System, Planning, Essbase (BSO), Essbase (ASO), Consolidation) is a dimension.

E_DIMENSION_TYPE determines whether it is an EPMA internal dimension (MetaData) or a normal dimension.
E_DIMENSION_STATE will tell you whether it is a Shared/Local dimension.

Back to where we left off: I tried to deploy the application and got this error, which was expected.


The start year of the application now started showing 2010.


Now, where is the problem? This is where my admiration for EPMA started: it validates against the Planning application and against itself. When you start a deployment, it keeps a record of that entry. Don't believe me? Look at the DS_APPLICATION table.

library id

1 is the current application; all the rest are previous deployments, so now you have to change the member names in each library.

update ds_member

Get the library_id of your application and change FY11 (which was the earlier start year) to FY10, FY12 to FY11, and so on, then commit the changes.
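
A hypothetical statement for one year (repeat per year, oldest first, so the names never collide; the library id column name is an assumption):

    UPDATE ds_member
    SET    c_member_name  = 'FY10',
           c_member_descr = 'FY10'
    WHERE  c_member_name  = 'FY11'
    AND    i_library_id   = 1;  -- your library_id here

    COMMIT;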

As I said, it validates against Planning too, so you'll have to change the HSP_OBJECT and HSP_CALENDAR tables as well.

planning application members

Update the objects in a similar way: FY11 to FY10, and so on.

update planning application

Update FIRST_YEAR in HSP_CALENDAR. Now, one change which I didn't perform 7 months back - and which kept me hunting for the reason - came down to a single column!

I did update the tables and restarted EPMA and Planning; however, when I tried to deploy, it kept giving me this error message:


I kept wondering where it was still fetching 2011 from, and the answer was OLD_NAME in HSP_OBJECT. So before you repeat the same mistake, change that column in a similar fashion (FY11 to FY10, and so on).
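
In the same hypothetical style, one year's worth of updates on the Planning side (repeat per year, oldest first; check whether FIRST_YEAR stores the year number or the member name before updating it):

    UPDATE hsp_object
    SET    object_name = 'FY10',
           old_name    = 'FY10'
    WHERE  object_name = 'FY11';

    UPDATE hsp_calendar
    SET    first_year = 2010;

    COMMIT;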

I was paranoid and went one step further to delete the entries from DS_TRANSACTION_HISTORY. (I don't think this is needed.) However, if you need it, SQL along the lines below will do it.
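
Something along these lines (the filter column is an assumption):

    DELETE FROM ds_transaction_history
    WHERE  i_library_id = 1;  -- your library_id here

    COMMIT;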


Tables to be updated

DS_MEMBER columns C_MEMBER_NAME and C_MEMBER_DESCR
HSP_CALENDAR column FIRST_YEAR
HSP_OBJECT columns OLD_NAME and OBJECT_NAME

Restart EPMA and Planning services and deploy the application.

The views discussed in this blog are my personal views.
They do not represent the views of the organizations, businesses, or institutions that I'm part of.