
Friday, June 30, 2017

Move data "On-Save" from Planning BSO to Reporting ASO (Part 2)

This is a continuation of the previous post, providing a more 'out of the box' solution for moving data from a BSO Planning cube to an ASO reporting cube. This solution is only possible if you are on release 11.1.2.4.

We will follow exactly the same process as part 1 of this blog. The only thing we skip here is writing the CDF, because Oracle ships a Calc Manager CDF that can run a MaxL script stored on the server: RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions. Moreover, from 11.1.2.4 we can get formatted MDX output in a flat file. What else do you need? You have already guessed where this is going...

Solution:

  1. Write a level-0 export MDX query with the 'NONEMPTYBLOCK' keyword.
  2. Create a MaxL script that spools the MDX output to a flat file. Use set column_separator "|"; to get formatted output. More information on this is available on other very useful blogs.
  3. Load the flat file into the ASO cube with MaxL.
  4. Call all of these MaxL scripts from a BSO calculation script using RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions. (Check internet blogs.)
  5. Attach this calculation script to your web form so it runs on save. (See the sketch below.)
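To make the steps concrete, here is a rough sketch. The application and dimension names (PlanApp.Plan1, AsoRpt.AsoDb, Account, Entity, Period), file paths, rules file, and credentials are all placeholders, and the MaxLFunctions argument order (async flag, server, user, password, script path) is taken from community blog examples, so verify it against your release.

/* export_to_aso.mxl -- steps 1 to 3: spool a formatted level-0 MDX export,
   then load the flat file into the reporting ASO cube */
login admin identified by password on essbaseserver;

set column_separator "|";

spool on to "/tmp/lev0_export.txt";

SELECT NONEMPTYBLOCK
  {[Account].Levels(0).Members} ON COLUMNS,
  NON EMPTY CrossJoin({[Entity].Levels(0).Members},
                      {[Period].Levels(0).Members}) ON ROWS
FROM [PlanApp].[Plan1];

spool off;

import database AsoRpt.AsoDb data
  from data_file "/tmp/lev0_export.txt"
  using server rules_file "ldAso"
  on error write to "/tmp/lev0_export.err";

logout;
exit;

And the calculation script for steps 4 and 5, which you attach to the web form so it fires on save:

/* Runs the MaxL script above through the Calc Manager CDF.
   Argument order is an assumption based on community examples. */
RUNJAVA com.hyperion.calcmgr.common.cdf.MaxLFunctions
  "false" "essbaseserver" "admin" "password" "/scripts/export_to_aso.mxl";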

Thursday, November 3, 2016

Truncating DATAEXPORT target relational table with CDF

First of all, I am a big fan of the Essbase DATAEXPORT calc command for exporting to a relational database. It provides a flexible way of communicating between cubes: you can export data to a relational table, manipulate it through a SQL load rule, and load it back. Certain types of mapping and data management are much easier in SQL. Moreover, the entire script can be called from MaxL.
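For example, exporting level-0 data straight into a relational table looks something like this (a sketch; the DSN name, table, credentials, and FIX members are placeholders):

SET DATAEXPORTOPTIONS { DataExportLevel "LEVEL0"; };
FIX ("Working", "FY17")
  /* DSN, target table, and database credentials */
  DATAEXPORT "DSN" "HypStageDSN" "EXPORT_TABLE" "dbuser" "dbpassword";
ENDFIX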
Well, almost perfect. One problem though: DATAEXPORT does not let you truncate or delete from the underlying table first. Bummer!! How hard could it be for Oracle to provide that function? It would have saved me the time of writing this blog, and you the time of reading it.
We will solve this issue with a CDF (custom defined function). If you have not created your first CDF yet, this is a good place to start. Once you have the CDF ready, you can use it in your calc script like this:
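(A sketch; the table name, JDBC URL, and credentials are placeholders. The four arguments map one-to-one onto truncateTable() in the code below.)

RUNJAVA com.williams.cdf.relationalDDL
  "HYP_STAGE.EXPORT_TABLE"
  "jdbc:oracle:thin:@dbhost:1521:orcl"
  "dbuser"
  "dbpassword";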

Steps to install CDF

  1. Install the JDK. In my case I have JDK 1.7.
  2. Install Eclipse (Java IDE).
  3. In Eclipse create a new Java project (File -> New -> Java Project). I have named it CDF.
  4. Expand the project, right click the src folder, and select New -> Package. I named it com.williams.cdf.
  5. Right click on com.williams.cdf and select Build Path -> Configure Build Path. On the Libraries tab, add two external JARs: essbase.jar and ojdbc14.jar. essbase.jar is available in the Oracle/Middleware/EPMSystem11R1/products/Essbase/EssbaseServer/java directory; ojdbc14.jar (the Oracle JDBC driver) can be downloaded from Oracle. It is required to connect to your underlying Oracle database. If your relational database is not Oracle, you need the corresponding JDBC driver JAR for that database.
Now the setup is done; it is time for coding.
  6. Right click on the package com.williams.cdf and select New -> Class. I named it relationalDDL. Once done, it should look like this...
Here is the code that I have for relationalDDL.java ...

package com.williams.cdf;

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import oracle.jdbc.pool.OracleDataSource;

/*
 * Essbase CDF that truncates the relational table used as a DATAEXPORT target.
 * Invoked from a calc script via:
 *   RUNJAVA com.williams.cdf.relationalDDL "<table>" "<jdbc url>" "<user>" "<password>";
 */
public class relationalDDL {
    static Connection sqlcon = null;
    static Statement sqlstmt = null;
    static OracleDataSource ods = null;

    // Entry point called by RUNJAVA: Essbase passes the calculator context
    // plus the arguments supplied in the calc script.
    public static void main(com.hyperion.essbase.calculator.Context ctx, String args[]) {
        truncateTable(args[0], args[1], args[2], args[3]);
    }

    private static void openConnection(String URL, String userid, String passwd) {
        try {
            ods = new OracleDataSource();
        } catch (SQLException e1) {
            System.out.println("New Connection Object creation Failed: " + e1.getMessage());
            return;
        }
        ods.setURL(URL);
        ods.setUser(userid);
        ods.setPassword(passwd);
        try {
            sqlcon = ods.getConnection();
            System.out.println("Oracle database connection established to " + URL);
        } catch (SQLException e) {
            System.out.println("Connection Failed to " + URL);
            System.out.println("SQLException: " + e.getMessage());
        }
    }

    private static void closeConnection(Connection oraConn) {
        if (oraConn != null) {
            try {
                oraConn.close();
            } catch (SQLException x) {
                System.out.println("SQLException: " + x.getMessage());
            }
        }
    }

    // Opens a connection, issues TRUNCATE TABLE, and closes the connection.
    public static void truncateTable(String table, String URL, String userid, String passwd) {
        System.out.println("CDF Started");
        openConnection(URL, userid, passwd);
        if (sqlcon == null) {
            return; // connection failed; message already logged above
        }
        try {
            sqlstmt = sqlcon.createStatement();
            sqlstmt.execute("TRUNCATE TABLE " + table);
            System.out.println("Truncated table: " + table);
        } catch (SQLException e) {
            System.out.println("SQLException: " + e.getMessage());
        }
        closeConnection(sqlcon);
    }
}

If you have everything set up properly, you should be able to save the code above without any errors in Eclipse. Once saved, right click relationalDDL.java in the Package Explorer to create a run configuration.


We will configure something like what is shown below, but we will not run it. Just hit Apply and close.


Once the configuration is saved, right click relationalDDL.java in the Package Explorer and click Export. Select Runnable JAR file.



Provide the path where you want to export it on your local machine. Ignore any warning that says the main class was not found. I named the JAR file DDL.jar.

Copy this JAR file to your EPMSystem11R1/products/Essbase/EssbaseServer/java/udf folder.

Then update the udf.policy file in EPMSystem11R1/products/Essbase/EssbaseServer/java by adding the following lines:

// Grant all permissions to CDF DDL.jar 
grant codeBase "file:${essbase.java.home}/udf/DDL.jar" {
 permission java.security.AllPermission; 
};

Now that this CDF is ready to run, invoke it from your calc script with the RUNJAVA command, typically right before the DATAEXPORT, as shown at the top of this post. For any errors, check Essbase.log.

Tuesday, March 22, 2016

ETL inside ORACLE EPM workspace

There are quite a few options available when it comes to choosing an ETL tool for Oracle Hyperion Essbase and Planning data loads. In this blog, I will discuss how you can use EPM Workspace for the same. It is an excellent platform for launching jobs that have batch or shell scripts wrapped over MaxL or SQL. But wait! Why should I use Workspace? Why not just run batch scripts or shell scripts on the server? Because in Workspace you get an audit trail, with the ability to restrict access by user. You can also schedule jobs, check run history, debug logs, etc. (yeah yeah... we can do that on the server too...) Finally, a better user experience!

To set it up in Workspace, first you need to make sure MaxL, SQL, and batch files run properly on the Workspace server, i.e., environment variables such as ARBORPATH and ESSBASEPATH are set on the server and the Essbase client is installed.
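(Typical values on a Windows server with a default Middleware home; treat these paths as placeholders for your own environment:)

set ARBORPATH=C:\Oracle\Middleware\EPMSystem11R1\products\Essbase\EssbaseClient
set ESSBASEPATH=C:\Oracle\Middleware\EPMSystem11R1\products\Essbase\EssbaseClient
set PATH=%PATH%;%ARBORPATH%\bin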

1. Create a generic application, or job launcher, via Navigate -> Administer -> Reporting and Analysis -> Generic Job Applications. Then click the '+' sign to create a new job. Let's call it Run_CMD, as it will run our batch job. The product host is your Workspace server. Set up the command template as shown below; to read more about the command template setup, click the help button in the popup window. Lastly, provide the full path to cmd.exe on Windows as the executable, and click OK.





2. Now we need to provide the batch job information that we want to run with this generic job launcher. To make it more interesting, let's say our batch job uses both MaxL and SQL, supplied to the script as input files. We could put the .sql and .mxl files on the server and use those paths in the batch script, but then we would not be able to check those files from Workspace. To make things more debug friendly, we will import these files instead.

Let's see the next steps...

3. Navigate -> Explore. Create a folder where you want to store these jobs. My folder structure looks like this... I redirect all my outputs to the Logs folder.


4. Now you have to import your batch file. I had a batch file called MyBatchFile.bat, which looked roughly like the sketch below.
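(A minimal sketch, assuming essmsh and SQL*Plus are on the PATH of the Workspace server; the credentials and TNS alias are placeholders.)

@echo off
rem MyBatchFile.bat -- run the imported MaxL and SQL input files
essmsh InputMaxL.mxl
sqlplus -s dbuser/dbpassword@ORCL @InputSQL.sql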


Note that I do not have any path specified for the files InputSQL.sql and InputMaxL.mxl; we will import them into Workspace.

5. Select File->Import->File as Job

Check Import as Generic Job and click Next.


Select Run_CMD from the drop-down list and click Next.



Click 'Go'



Add the InputSQL.sql and InputMaxL.mxl files from your PC and click OK.

At this point you may get an error if you don't have MIME types set up for the .sql and .mxl file types. To fix that, go to Navigate -> Administer -> Reporting and Analysis -> MIME Types, click Go at the bottom of the page, and add the .sql and .mxl types.

Finally, run the job by double clicking it or by using the Run Job option. Use the Logs folder as your output directory.