All posts in “pdi”

Happy Pi Day!

Today is Pi Day, March 14 or 3.14 (using a rather awkward M.dd format).

So this morning, the following dialog occurred within our team:

– Happy Pi Day!
– You know what we should do? A PDI Monte Carlo simulator to calculate Pi…
– Uuuuh, nice. Yeah, I think I can do that.
– … without scripting steps.
– Challenge accepted!

Well, happy Pi Day everyone, and behold our awesome (though of limited use) Monte Carlo simulator to calculate an approximation of Pi. It uses only the following steps:

  • Generate rows
  • Generate random value
  • Calculator (x2)
  • Add sequence (x2)
  • Filter rows (x2)
  • Write to log

It runs “forever” and writes updated results to the log roughly every 1.27 million rows.

Remark: “forever” means until you reach 1 billion rows and the sequence counters roll over.
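
For reference, the Monte Carlo method the transformation reproduces with native steps is straightforward: generate random points in the unit square, count how many fall inside the quarter circle, and multiply the ratio by 4. Here’s a minimal sketch of that logic in plain JavaScript (the very scripting the challenge forbids), just to make the maths explicit:

// Draw random points in the unit square and count how many land inside
// the quarter circle of radius 1 (x^2 + y^2 <= 1).
var inside = 0;
var total = 10000000;
for (var i = 0; i < total; i++) {
  var x = Math.random();
  var y = Math.random();
  if (x * x + y * y <= 1) {
    inside++;
  }
}
// The ratio inside/total approximates the quarter circle's area, Pi/4.
var piEstimate = 4 * inside / total;

Roughly speaking, that’s what the steps listed above do between them: random values in, the Calculator steps for the distance, the Filter rows steps for the inside/outside test, the Add sequence steps for the running counts, and Write to log for the periodic estimate.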

Oh, you may want to run it with Minimal Logging only.

Happy Pi Day!

Download here

Using PDI and C-tools to display real time scores analysis for UEFA Euro 2016

So, the Euro 2016 started this past Friday. We’ll have matches almost every day until July 10th.

And here at Ubiquis we decided to start a little project around the Euro. The basic idea: suppose England gets to half time level with Slovakia. Given both teams’ past history, what’s the most likely scenario? A win for England? A draw? A win for Slovakia? We took score change information for all Euro finals matches since 1960 from Wikipedia and parsed it with PDI. Then, using a real-time scores API from xmlscores.com, we built a couple of queries that answer one basic question: “out of all matches in which team A was leading/trailing by X goals or more at the same point in time, how many ended up in wins, losses and draws?”

So, for example, when Germany was leading Ukraine by 1 goal at half time yesterday, the dashboard would give us some idea of how often Germany managed to keep a lead and win the match, or how often Ukraine managed to turn the tables when trailing by 1 goal or more and ended up drawing or winning the match.
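
To make the aggregation behind that question a bit more concrete, here’s a rough JavaScript sketch of the logic; the record layout (one row per historical match, with the goal difference the reference team had at the chosen point in time and the final outcome) is made up for illustration and is not the actual PDI data model:

// One record per historical match: the goal difference ("lead") the
// reference team had at the chosen minute, and how the match ended.
var snapshots = [
  { match: 1, lead: 1, outcome: "win" },
  { match: 2, lead: 1, outcome: "draw" },
  { match: 3, lead: 2, outcome: "win" },
  { match: 4, lead: -1, outcome: "loss" }
];

// "Out of all matches in which a team was leading by X goals or more at
// this point in time, how many ended up in wins, draws and losses?"
function outcomesForLead(rows, minLead) {
  var counts = { win: 0, draw: 0, loss: 0 };
  for (var i = 0; i < rows.length; i++) {
    if (rows[i].lead >= minLead) {
      counts[rows[i].outcome]++;
    }
  }
  return counts;
}

var result = outcomesForLead(snapshots, 1); // { win: 2, draw: 1, loss: 0 }

The trailing case is the same query with the sign of the goal difference flipped.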

The dataset is quite small and the data model very simple, but it serves to show how Pentaho C-tools and PDI can be leveraged to create a real-time information system, fed by external data sources, in a reliable manner.

Here’s the link: Euro 2016 Analysis dashboard (no login required).

Hopefully we’ll use what we learned from this project to expand its scope and reliability and apply it to other sports competitions. Stay tuned.

Secret Santa Generator

It’s Christmas! And with Christmas comes Secret Santa! We replicated the “names in a hat” process normally used for this kind of thing: write all the names on pieces of paper and put them in a hat; each person draws one, and if you draw your own name, you put it back and draw another. If the last person draws their own name, the whole process is repeated. We know it’s not the most efficient way to do it, but it’s an interesting challenge to implement in PDI. You can find it here.

We start with a list of names and emails in a CSV file. PDI reads them, generates two sets of random numbers, builds the pairings and checks whether any pair contains a repeated name. If so, it repeats the process until we have a valid list of pairs of names, and then it emails each person with the name of the recipient they were assigned.
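
For illustration only, here’s the same drawing logic sketched in plain JavaScript (the actual solution does this with PDI steps and a looping job, not code): assign every name a random number, sort by it to get a random permutation of receivers, and redraw the whole round if anyone gets their own name.

// Keep redrawing complete rounds until nobody is their own Santa,
// with a cap on attempts so we never loop forever.
function drawSecretSanta(names, maxAttempts) {
  for (var attempt = 0; attempt < maxAttempts; attempt++) {
    // Pair each name with a random key and sort by it: a random permutation.
    var entries = [];
    for (var i = 0; i < names.length; i++) {
      entries.push({ name: names[i], key: Math.random() });
    }
    entries.sort(function (a, b) { return a.key - b.key; });

    // The draw is only valid if nobody is assigned to themselves.
    var valid = true;
    for (var j = 0; j < names.length; j++) {
      if (names[j] === entries[j].name) {
        valid = false;
        break;
      }
    }

    if (valid) {
      var pairs = [];
      for (var k = 0; k < names.length; k++) {
        pairs.push({ giver: names[k], receiver: entries[k].name });
      }
      return pairs;
    }
  }
  return null; // e.g. a one-name list can never produce a valid draw
}

var pairs = drawSecretSanta(["Alice", "Bob", "Carol"], 1000);

The attempt cap is the sketch’s way of handling the infinite-loop risk discussed below.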

Using loops in PDI is a powerful pattern but a dangerous one. One should always guarantee that the job will not loop indefinitely, eventually blowing up with an out of memory error. In our case this would happen if the list provided has only one name on it, but we trust the user not to misuse the program (besides, what’s the point of having a Secret Santa if there’s only one Santa to assign, anyway?).

PDI: getting the previous value of a field, but only sometimes

In PDI we can easily retrieve the previous (or next) value of a field by using the Analytical Query transformation step. It allows us to fetch the value of a given field or fields N rows back or N rows forward. Quite useful for things like cumulative sums (although for that we could just use a Group by step), getting the previous attribute values of an SCD to map a type II SCD to a type III SCD, etc.

However, this is a rather limited step:

  • The number of rows back or forth is fixed; it can be a variable, but can’t be dynamically changed;
  • We can only get a field’s value, not anything more complex, like operations on fields;
  • We can’t get values based on a condition (e.g., if customer number is null, take the previous customer number; otherwise, build it using some algorithm)

We were facing a problem where the answer seemed to be the Analytical Query step… only that it wasn’t. Here’s the problem: on an event tracking database we have a category_raw field; as a general rule, this is the value we want to use as the true category of the object, let’s call it category_new. However, if the category_raw field value is ‘X’, we should ignore it and instead use the previous category_new value. Sounds simple, right?

Here’s a sample of the data and the desired output:

category_raw;category_new
A;A
B;B
X;B
D;D
E;E

From here it seems quite obvious: we use the Analytical Query step, fetch the category_raw value of the previous row and with a simple condition we evaluate

category_raw == "X" ? prev_category_raw : category_raw

However, if we have several consecutive exceptions:

category_raw;category_new
A;A
X;A
X;A
D;D
E;E

This approach doesn’t work: the 3rd row would fetch “X”, the value of the previous row, not “A” which occurred 2 rows before.

We tried to trick the Analytical Query step into fetching the value of the same field it’s creating, but that doesn’t work either. The step has to read values from one field and write a new field, but they must be different.

In the end we decided to go with a simple JavaScript step, not because it’s the only way to go (you can quite easily fetch the values you need using a Java class, a Java expression, etc.), but because it’s simple.

Here’s the JavaScript code:

var category_new;
if ( category_raw == "X" ) {
  // Exception: reuse the previous row's category, kept in the variable set below.
  category_new = getVariable( "prev_category_new", "" );
} else {
  category_new = category_raw;
}
// Remember this row's category_new for the next row.
setVariable( "prev_category_new", category_new, "s" );

We know we’re setting a variable and using it in the same transformation, which is something we were always told is wrong. And usually, it is. However, as setting and getting the variable both happen inside the same step, it’s actually harmless: the variable is set for row 1 before it’s called on row 2, and so on…

Here’s a sample of the result:

category_raw;category_new
A;A
B;B
X;B
X;B
X;B
F;F
X;F
H;H

Each row picks up the correct category_new value, taking into account the handling of the exceptional “X” values.

Reading/Writing files from a Kettle CDA datasource in Pentaho 5.x

Kettle datasources in CDA are the most versatile way of getting data from disparate systems into a dashboard or report. Both C-tools and PRD expose Kettle as a datasource, so we can call a transformation that queries several different systems (databases, web services, etc.), do some Kettle magic on the results and then ship the final resultset back to the server.

However… there’s no way out of the box to reference sub-transformations in Pentaho 5.x. Solution files are now hosted on Jackrabbit and not on the file system and ${Internal.Transformation.Filename.Directory} just doesn’t work. If you have a transformation calling a sub-transformation and upload both to the repository, when you run it you get an error message saying that it cannot find the file.

Although this usually isn’t a problem, as most Kettle transformations used by dashboards are relatively simple, for more complex tasks you may want to use mappings (sub-transformations), call a job or transformation executor or, perhaps the best use case, use metadata injection. In these scenarios you _really_ need to be able to call ktr files from the BA server context, and as it happens you can’t.

Well, sort of. As it turns out, you can.

The problem: let’s say you have your server installed in /opt/pentaho/biserver-ce (or biserver-ee, we at Ubiquis don’t discriminate), and your kettle file is in your repository in /public/myDashboard/files/kettle/myTransformation.ktr. You want to use metadata injection on a transformation called myTemplate.ktr, so you reference it as ${Internal.Transformation.Filename.Directory}/myTemplate.ktr in your metadata injection step. When you run your datasource, the error message will say something like

Cannot read from file /opt/pentaho/biserver-ce/pentaho-solutions/public/myDashboard/files/kettle/myTemplate.ktr because it’s not a file.

So what seems to be happening is that Kettle is using your pentaho-solutions folder as the root folder and it just appends the JCR path to it. Which obviously fails, as there’s no physical counterpart in your filesystem for the JCR files.

But if you use the Pentaho Repository Synchronizer plugin to copy the contents of JCR to the filesystem and vice-versa (which is perhaps the simplest way to have your solution files under Git or SVN, at least until the Ivy GS plugin becomes fully functional), the synchronisation process creates a folder called repositorySynchronizer inside your pentaho-solutions folder and uses it to store copies of every JCR file.

So, after synchronising JCR with the file system, your template transformation can be found in /opt/pentaho/biserver-ce/pentaho-solutions/repositorySynchronizer/public/myDashboard/files/kettle.

And now, the funny bit: to get it working, just create a symbolic link that removes the extra repositorySynchronizer element from the path. Go to your pentaho-solutions folder and type ln -s repositorySynchronizer/public public.

Now the template transformation is found and you can reference it from a transformation called by a dashboard.

Credit for the discovery goes to Miguel Cunhal, our junior consultant, who had the tough task of getting our metadata injection transformation running inside a demo dashboard. But more on that in another post.

GEM, a Generic ETL Machine

It’s finally here. After a few months of work, a lot of bugs fixed and a last minute refactor of all database connections, we are proud to release the first version of GEM, Generic ETL Machine.

It’s an ETL framework based on Pentaho Data Integration that automates most of the common tasks that take time to develop and are essential for a proper, maintainable ETL, but are hard to justify when the project’s stakeholders just want to see their data. Things like:

  • a solid logging mechanism with email alerts;
  • data lineage that lets us trace the data from the reports and dashboards to the cube, then the fact table, then the staging tables, then the source files or databases, complete with line numbers, the date each record was extracted, etc.;
  • a way (still very archaic) to define the order in which several tasks should run;
  • a way to quickly deploy from the development laptop to a server and switch environments, from dev to test to prod;
  • and all this, of course, configurable.

This is still the first version and a lot of work is left to do. For the time being there’s no user interface to display the status or results of each ETL run, all rollbacks must be performed manually in the database, and there’s no way to schedule different tasks at different intervals. That’s all on the roadmap, time permitting, together with more available samples (CSV files and web services, to name a couple), support for PDI clusters, and input and output directly to Hadoop in its multiple shapes and forms.

But it’s usable as it is (well, sort of – we only support a couple of databases on each stage for the time being) and it’s been allowing us to spend a significantly larger amount of time on what really matters when we have to develop an ETL from scratch: the data analysis and the design of the data model. Not enabling file or DB logging, or setting up the email step for the 500th time.

Clone it, fork it, commit to it, enjoy!

Source code on GitHub

MonetDB bulk loader and PDI

It’s pretty much common knowledge that you shouldn’t use a Table Output step with a column-oriented database: all those individual insert statements are painfully slow. That’s why we have bulk loaders. Not every DBMS has a bulk load step, but the most common ones do (either as a job entry or a transformation step). There’s even a Vertica bulk loader, though not included with the release (note to self: create a JIRA requesting that).

This post is about PDI’s MonetDB bulk loader step. You have to define the DB connection, set the target schema and table, set a buffer size (number of rows to be sent each time, default is 100k), and insert the field mappings. So, it’s relatively easy to set up and test.

[Screenshot: pdi_bulk_loader]

It works quite nicely and… oh, wait, it doesn’t work anymore. Apparently something is broken in the bulk load step: it gets the rows through up to whatever you defined as the buffer size, but then it blows up. In this case it loaded 100k rows and threw an exception right after. If you increase the buffer size to 500k rows, it’ll load 500k rows and blow up on row 500,001. And so on.

Obviously this is a problem. If your target DB is MonetDB you have two options to load the data via PDI: the slow Table Output or the much faster MonetDB bulk loader, which is broken.

After some googling I found a JIRA issue describing exactly the same problem (PDI-12278), and it also suggested a code change to fix it. The issue was reported in May and it’s on the backlog, so it’s not likely to make it into a release anytime soon, and I needed the step working now, so I decided to test the fix. As it happens, it works very nicely.

Now, I’m not a Java dev. In fact, I can’t even read Java code, let alone write it. So I decided to write this to explain to the non-devs around what you need to do to get the bulk loader step working.

I assume you have a Linux/Unix/Mac machine to do this. If you’re running Windows you should: a) switch to a decent operating system; b) failing that, install Cygwin and all the packages required for this to work (e.g., you’ll need xmltask at least, but the last time I used Cygwin on a Windows box to build anything was some 4 years ago, so don’t take my word for it).

Step 1: Get the code

From your shell just clone PDI from the Github repo:

$ git clone https://github.com/pentaho/pentaho-kettle

Step 2: Build it

$ cd pentaho-kettle
$ ant dist

While this is building, grab yourself a cup of coffee and wait. It may take up to 1 hour to download all the required dependencies.

Check that you get BUILD SUCCESSFUL at the end.

Step 3: See if it works

$ cd dist
$ ./spoon.sh

If it doesn’t run, most likely something is wrong with your environment, or the build failed somewhere. Go back to step 2 and fix whatever the problem is.

Step 4: Change the code, build again

$ cd engine/src/org/pentaho/di/trans/steps/monetdbbulkloader
$ nano MonetDBBulkLoader.java

Comment out lines 505-511 (WARNING: obviously, as more code is committed to the repository, the line numbers may change. Below is what the code should look like, with a few lines before and after for context. An easy way to find the snippet you need to change: search for “again…”; there’s a single occurrence.)

// again...
error = data.in.waitForPrompt();
if ( error != null ) {
  throw new KettleException( "Error loading data: " + error );
}

// COMMENT FROM HERE
// data.out.writeLine( "" );
// and again, making sure we commit all the records
// error = data.in.waitForPrompt();

// if ( error != null ) {
//   throw new KettleException( "Error loading data: " + error );
// }
// TO HERE

if ( log.isRowLevel() ) {
  logRowlevel( Const.CR );
}

Save the file and build it again.

Step 5: Test that it works

[Screenshot: pdi_bulk_loader_fixed]

It works nicely and loaded all 10M rows in about 1’15” (the upload was done to a remote server some 3 countries away, not localhost, hence the relatively slow load speed).

Step 6: Apply the patch to a release version of PDI, instead of using trunk-snapshot

The previous steps allowed us to test the patch on the trunk-snapshot version of PDI. Most likely, this is NOT the version of PDI you want to run; you may want the fix in PDI 5.2, for example. What we need to do now is unpack the JAR file where the MonetDBBulkLoader class lives, copy that class to the same location in the release version’s JAR and repackage it.

For those who don’t know the least bit about Java: all those JAR files inside an application folder (e.g., the lib folder of PDI) are just archive files. You can unzip them to see what’s inside.

So first, let’s copy the trunk-snapshot file somewhere nicer and unpack it. Let’s assume you have a ~/temp folder you want to use:

$ cd lib
$ cp kettle-engine-TRUNK-SNAPSHOT.jar ~/temp/

Now do the same for your release PDI kettle engine (just replace <version> with your PDI version and build numbers):

$ cd <your PDI installation>/lib
$ cp kettle-engine-<version>.jar ~/temp/

Unzip both files:

$ cd ~/temp
$ mkdir trunk-snapshot
$ mkdir release
$ unzip kettle-engine-TRUNK-SNAPSHOT.jar -d trunk-snapshot
$ unzip kettle-engine-<version>.jar -d release

Now copy the class you want from the snapshot folder to the release folder:

$ cp trunk-snapshot/org/pentaho/di/trans/steps/monetdbbulkloader/MonetDBBulkLoader.class release/org/pentaho/di/trans/steps/monetdbbulkloader/MonetDBBulkLoader.class

Repack the release jar file:

$ cd release
$ jar cmf META-INF/MANIFEST.MF ../kettle-engine-<version>.jar *

Finally, grab your brand new kettle-engine JAR file and copy it to the lib folder of your PDI (remember to backup the original one, just in case).

Step 7: Test that it works

Your MonetDB bulk loader step should now work in your 5.2 PDI version.

WARNING: This step-by-step guide is provided as-is. It was tested on PDI CE 5.2 and worked for me. That does not assure it will work for everybody, nor across all versions or platforms. Use it at your own risk and remember to back up your original files first. If you spot an error or a typo in this post, please leave a comment and we’ll change it.