Sharepoint as a Collaboration and Productivity Tool

As we implement our Business Intelligence SharePoint team site, I have discovered that, being new to SharePoint, the main challenge is figuring out how to maximize the functionality of each SharePoint component (web part) type. Here are some suggestions that may be useful if you find yourself in the same spot:

This blog is moving to a new home!

From May 2010 on, new content will be published at our new home:

You can also subscribe to our RSS feed through FeedBurner:

商业智能 Shangyè Zhìnéng (BI) The Independent Business Intelligence Blog

SharePoint doc library files imported through Windows Explorer are all checked out

I ran into this when migrating our file server content into the SharePoint BI Team Site. It turned out to be caused by two settings (versioning and require check out) that caused every file I copied into our document library through WebDAV and Windows Explorer to be checked out to my user. The fix was to temporarily turn off these settings, import the files again, and then turn them back on.


Let RSS be the backbone of your Sharepoint site

I really like the way SharePoint sites allow you to set email alerts and use RSS feeds to stay on top of edits and additions to document libraries, blogs, and all those other cool web parts. The only issue with email and RSS is that you have to add a separate RSS feed for each item that you want users of your site to be able to track. Bluedog Limited's SyndicationGenerator is a web part RSS feed generator that allows you to aggregate multiple feeds into a single one that your readers can subscribe to. Check it out…

Now, about the RSS-as-a-backbone idea: when you have a distributed, matrixed, or simply a new team that is just forming, this kind of visibility into all aspects of your SharePoint site is key to keeping everybody informed asynchronously, and it can be a great team builder when combined with Twitter feeds or another micro-blogging service (which can be added to the SyndicationGenerator web part too!).
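To make the aggregation idea concrete, here is a minimal sketch of what a feed aggregator like SyndicationGenerator does under the hood: merge the item elements of several RSS 2.0 feeds into one combined feed. The two sample feeds below are hypothetical stand-ins for your site's real document library and blog feeds.

```python
# Minimal sketch of RSS aggregation: combine the <item> elements of
# several RSS 2.0 feeds into a single channel. Sample feeds are made up.
import xml.etree.ElementTree as ET

doc_library_feed = """<rss version="2.0"><channel>
<title>BI Team Site - Documents</title>
<item><title>Q1 report uploaded</title></item>
</channel></rss>"""

blog_feed = """<rss version="2.0"><channel>
<title>BI Team Site - Blog</title>
<item><title>New ETL schedule posted</title></item>
</channel></rss>"""

def aggregate(feeds, title="BI Team Site - All Updates"):
    """Return one RSS document containing every item from `feeds`."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for feed_xml in feeds:
        for item in ET.fromstring(feed_xml).iter("item"):
            channel.append(item)  # copy each item into the combined channel
    return ET.tostring(rss, encoding="unicode")

combined = aggregate([doc_library_feed, blog_feed])
```

In a real deployment the feed XML would of course be fetched from each list's RSS URL rather than embedded as strings; the point is that a single subscribable feed is just a concatenation of items.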

DataStage: ParamName does not reference a known parameter of the job

Today I got an error that reads like this:

JobControl (@<JOB_NAME>): Controller problem: Error
calling DSSetParam($<PARAMETER_NAME>), code=-3
[ParamName does not reference a known parameter of the job]

After a quick debugging and issue-tracking session in Director, I found that one of the jobs in my master sequence was missing all of our standard parameters for database user, schema, and password. Since we define them at the project level, all I had to do was click All to default and the problem was promptly fixed.

PeopleSoft Enterprise Performance Management Warehouse Schema Naming Standards

The nicest thing about PeopleSoft, after the whole PIA metadata-driven interface and its impact on data quality, is how meticulous PeopleSoft engineers are about documentation and sticking to naming standards. Here is a simplified diagram of the warehouse architecture and the naming standard for objects in each section of the schema:

PeopleSoft EPM schema naming standards

PeopleSoft EPM: Applying Maintenance Packs and Bundles (Updates)

We had one or two issues related to DataStage ETL maps that were skipped when we applied MP5 and multiple bundles to our development environment. To prevent this issue from happening again, I decided to re-install all of our ETL maps from scratch, using a SharePoint wiki to keep track of the update level applied to each of our projects (Main + CS & HCM Integration Updates) and the order in which the files were applied. The format is simple and looks very much like a checklist where the team member applying the patches initials or types in his or her name as each file is imported. Check it out:

[Project] EPM90_OWE_MDX

Common + HCM OWE + HCM MDX Cumulative Maintenance Pack 5 (MP5) (Feb 28, 2009)

  File                                     Applied By
  Common Utilities                         __________

All E OWE Jobs for all Warehouses

  File                                     Applied By
  E – SETUPS and OWS Jobs                  __________
  E – MDW Jobs (SKU)                       __________
  E – MDW Jobs (SKU)                       __________
  E – MDW Jobs (SKU)                       __________
  E – MDW Jobs (SKU)                       __________
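Since the checklist format is so regular, it is also easy to generate rather than type by hand. A hypothetical sketch (the section names mirror the list above; the formatting is illustrative, not what our actual wiki markup looks like):

```python
# Hypothetical sketch: generate "Applied By" checklist rows for the wiki
# from an ordered list of patch sections, in apply order.
sections = [
    "Common Utilities",
    "All E OWE Jobs for all Warehouses",
    "E - SETUPS and OWS Jobs",
    "E - MDW Jobs (SKU)",
]

def checklist(project, pack, items):
    """Return plain-text checklist lines: a project header, the pack
    name, then one fill-in row per patch section."""
    lines = [f"[Project] {project}", pack, ""]
    for name in items:
        lines.append(f"{name:<40} Applied By: ________")
    return "\n".join(lines)

print(checklist("EPM90_OWE_MDX",
                "Cumulative Maintenance Pack 5 (MP5)",
                sections))
```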

Leverage PeopleSoft EPM Data Lineage Spreadsheets

Not having to do all the legwork necessary to arrive at table-level data lineage in the PeopleSoft EPM Warehouse is a great testimony to good documentation and a great culture back there at big PS. For those of you who enjoy this benefit, here are some suggestions on how to leverage it:

1) [Re]organize the excel spreadsheet so that it reflects the sequence of your ETL schedule and if time allows add simple job dependencies.

2) Yes, consolidate all the lineage worksheets into a single centralized list (or at least one per business domain [hr, fin, scm]).

3) Add a second worksheet to your consolidated lineage file to track hash file dependencies and read/write operations.

4) Add a third worksheet to track lookup operations and their keys.

The benefits of this approach far outweigh the initial investment. When you realize how tedious this task is, just keep in mind all those hours you've spent in agony debugging DataStage programs in the past. This spreadsheet is a real time saver when you are trying to find out where a particular column came from, or which jobs perform write operations on the sequential file that is messing up your dimension.
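Once the lineage worksheets are consolidated, the "which jobs write to this file" question can even be answered with a few lines of script if you export the sheet to CSV. A sketch under the assumption of job/operation/target columns (the column layout, job names, and file name here are illustrative, not PeopleSoft's actual lineage format):

```python
# Sketch: query a consolidated lineage sheet (exported to CSV) for every
# job that performs a write operation on a given target object.
# Column names and sample rows are illustrative.
import csv
import io

lineage_csv = """job,operation,target
J_STAGE_DEPT,write,DEPT_SEQ.txt
J_LOAD_DIM_DEPT,read,DEPT_SEQ.txt
J_REBUILD_DEPT,write,DEPT_SEQ.txt
"""

def writers(csv_text, target):
    """Return the jobs that write to `target`, in sheet order."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["job"] for r in rows
            if r["target"] == target and r["operation"] == "write"]

print(writers(lineage_csv, "DEPT_SEQ.txt"))
```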

While I work on it I keep my sanity thinking about the new episode of Scrubs next week and being able to make my deadline, I’m looking forward to a big celebration and a weekend in Vegas with my friends…

OBIEE Logical Table vs Logical Table Sources – Best Practice

The difference between Logical Tables, Logical Table Sources (LTS), and the physical tables that compose a Logical Table Source can be confusing; here are some basic guidelines that have helped me get this concept to sit more clearly in my mind.

BMM Layer - Logical Table and LTS

Logical Tables, like Organization in the picture on the left, are used to create drill-down paths or dimensions like Dim_Organization.

Usually, when you drag and drop a column from a physical table that is not currently being used in your Logical Table, the table containing that column gets added as a new Logical Table Source (LTS), such as UofA in the image on your left. This is usually not the result you should be aiming for. In general, when the column you are adding to a Logical Table comes from the same system of record you have been using, you should instead rename the LTS to reflect the name of the system of record (the same way we renamed ours to UofA) and then edit the sources for that LTS.

The confusion often stems from having what I would call "Logical Table Source" Sources (LTSS). The image below depicts them for UofA; you can access this dialog by double-clicking any LTS.

BMM - Logical Table Source Properties and LTSS

The physical table FRS_GL_ACCOUNTS is a "Logical Table Source" Source (LTSS) for the Logical Table Source UofA. The rationale, again, is that UofA represents a system of record (source) that brings in information from more than one physical table to build our Organization Logical Table.

If you still have questions add comments to this post and let’s get the discussion started 🙂