Scenario: we need to look up a file that has multiple matching records per key and extract only the records that meet a specific condition (in this example, only accounts whose account number starts with "12").
Source File

ID | User_Name
1  | James
2  | Williams
3  | Ravi
Lookup File

ID | Account_Name   | Account_number
1  | Acc_James_1    | 123232
1  | Acc_James_2    | 45778266
2  | Acc_Williams_1 | 5455546
2  | Acc_Williams_2 | 1234343
3  | Acc_Ravi_1     | 254589
3  | Acc_Ravi_2     | 12544456
Expected Output

ID | Account_Name   | Account_number | User_Name
1  | Acc_James_1    | 123232         | James
2  | Acc_Williams_2 | 1234343        | Williams
3  | Acc_Ravi_2     | 12544456       | Ravi
a) In the Lookup transformation, enable the option to return multiple rows for the matching records (i.e. the lookup policy on multiple match returns all rows; this option is only available from Informatica 9 onwards). The lookup output, as seen in the downstream Expression transformation, will be as below; a plain-Python sketch of this step follows the table.
ID | Account_Name   | Account_number | User_Name
1  | Acc_James_1    | 123232         | James
1  | Acc_James_2    | 45778266       | James
2  | Acc_Williams_1 | 5455546        | Williams
2  | Acc_Williams_2 | 1234343        | Williams
3  | Acc_Ravi_1     | 254589         | Ravi
3  | Acc_Ravi_2     | 12544456       | Ravi
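Since PowerCenter transformations cannot run outside the Designer, here is a rough plain-Python sketch of what step a) produces: for each source row, every matching lookup row is returned, not just the first one. All names in it are illustrative, not Informatica objects.

# Step a) as plain Python: a lookup on ID that returns ALL matching rows.
source = [
    {"ID": 1, "User_Name": "James"},
    {"ID": 2, "User_Name": "Williams"},
    {"ID": 3, "User_Name": "Ravi"},
]
lookup = [
    {"ID": 1, "Account_Name": "Acc_James_1",    "Account_number": "123232"},
    {"ID": 1, "Account_Name": "Acc_James_2",    "Account_number": "45778266"},
    {"ID": 2, "Account_Name": "Acc_Williams_1", "Account_number": "5455546"},
    {"ID": 2, "Account_Name": "Acc_Williams_2", "Account_number": "1234343"},
    {"ID": 3, "Account_Name": "Acc_Ravi_1",     "Account_number": "254589"},
    {"ID": 3, "Account_Name": "Acc_Ravi_2",     "Account_number": "12544456"},
]

# For each source row, emit one output row per matching lookup row;
# this reproduces the six-row table above.
joined = []
for src in source:
    for lkp in lookup:
        if lkp["ID"] == src["ID"]:
            joined.append({**lkp, "User_Name": src["User_Name"]})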
b) In an Expression transformation, flag the accounts starting with "12" using the condition below. Note that the substring must be two characters long for the comparison with '12' to ever match (SUBSTR(Account_number, 1, 1) returns only one character); if Account_number is a numeric port, wrap it in TO_CHAR first.
IIF(SUBSTR(Account_number, 1, 2) = '12', 1, 0)
c) The next step is quite simple: use a Filter transformation and keep only the records where the flag is 1 (filter condition O_FLAG = 1, assuming the flag port from step b is named O_FLAG). The output will be as below, followed by a short Python sketch of steps b and c.
ID | Account_Name   | Account_number | User_Name
1  | Acc_James_1    | 123232         | James
2  | Acc_Williams_2 | 1234343        | Williams
3  | Acc_Ravi_2     | 12544456       | Ravi
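For completeness, a self-contained Python sketch of steps b) and c), using the six joined rows from step a); the FLAG field mirrors the output port of the IIF expression.

# Step b): equivalent of IIF(SUBSTR(Account_number, 1, 2) = '12', 1, 0).
# Step c): the Filter transformation keeping only rows where FLAG = 1.
joined = [
    {"ID": 1, "Account_Name": "Acc_James_1",    "Account_number": "123232",   "User_Name": "James"},
    {"ID": 1, "Account_Name": "Acc_James_2",    "Account_number": "45778266", "User_Name": "James"},
    {"ID": 2, "Account_Name": "Acc_Williams_1", "Account_number": "5455546",  "User_Name": "Williams"},
    {"ID": 2, "Account_Name": "Acc_Williams_2", "Account_number": "1234343",  "User_Name": "Williams"},
    {"ID": 3, "Account_Name": "Acc_Ravi_1",     "Account_number": "254589",   "User_Name": "Ravi"},
    {"ID": 3, "Account_Name": "Acc_Ravi_2",     "Account_number": "12544456", "User_Name": "Ravi"},
]

for row in joined:
    row["FLAG"] = 1 if row["Account_number"].startswith("12") else 0

output = [row for row in joined if row["FLAG"] == 1]
# -> exactly the three rows in the final table above.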
Hello,
Gasping at your brilliance! Thanks a ton for sharing all that content. Can't stop reading, honestly!
Again, you have to make sure that those empty records don't get forwarded to the target flat file. And you can do that only by using a Filter transformation which throws away those records where your live data are too incomplete to produce any meaningful output.
By the way, do you have any YouTube videos? I would love to watch them. I would also like to connect with you on LinkedIn, if you don't mind; it's great to have experts like you among my connections.
Please keep providing such valuable information.
Thank you,
Morgan
Greetings Mate,
10/10 !!! Thank you for making your blogs an embodiment of perfection and simplicity. You make everything so easy to follow.
From this point of view I suggest that you split up the mapping into two. The first mapping simply reads the source data and writes all "TEST" records to the flat file; in addition it counts the records and sets a mapping variable to this total count.
Next you hand over this mapping variable to a workflow variable.
Now you can check the value of the workflow variable and, if it's > 5000, end the workflow execution.
Only if this workflow variable (= the mapping variable = the total count of "TEST" records) is <= 5000 will the actual load session be executed.
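Workflow variables and link conditions only exist inside the Workflow Manager, so here is a hedged Python sketch of the control flow described above; the 5000 threshold and the "TEST" records come from the comment, while the function and file names are made up for illustration.

# Mapping 1: write the "TEST" records to a flat file and count them.
def extract_test_records(source_rows, flat_file_path):
    count = 0
    with open(flat_file_path, "w") as f:
        for row in source_rows:
            if row["record_type"] == "TEST":   # illustrative condition
                f.write("|".join(str(v) for v in row.values()) + "\n")
                count += 1
    return count  # handed over as mapping variable -> workflow variable

def run_main_load(source_rows):
    print(f"Loading {len(source_rows)} rows...")  # stand-in for session 2

# Workflow: run the actual load session only if the count is <= 5000.
def run_workflow(source_rows):
    cnt_test = extract_test_records(source_rows, "test_records.out")
    if cnt_test > 5000:
        print(f"Stopping workflow: {cnt_test} TEST records.")
        return
    run_main_load(source_rows)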
Once again thanks for your tutorial.
Thanks & Regards,
Morgan
Hallo,
Amazing! I have been searching Bing for hours for this, and I finally found it in this article! May I ask you about something that puzzles me all the time?
My first doubt is: if we use an Update Strategy transformation in a mapping, then there is no need to set the target-level properties in the session (I mean, it doesn't matter which option we select at the target level, such as Update as Update, Update as Insert, Update else Insert, Insert, or Delete), because these properties will be overridden by the logic in the Update Strategy transformation. So we just need to set the "Treat Source Rows As" property to "Data Driven", and the rest will be handled by the logic itself.
Anyways great write up, your efforts are much appreciated.
Thank you,
Irene Hynes
Hey,
10/10 !!! Thank you for making your blogs an embodiment of perfection and simplicity. You make everything so easy to follow.
How will data from other countries (person names from non-US countries) get matched when we use the US population in the Match & Merge settings? Does Informatica provide population files for other countries?
By the way, do you have any YouTube videos? I would love to watch them. I would also like to connect with you on LinkedIn, if you don't mind; it's great to have experts like you among my connections.
Please keep providing such valuable information.
Thanks a heap,
Ajeeth
Hello,
Great info! I recently came across your blog and have been reading along.
I thought I would leave my first comment. I don't know what to say except that I have enjoyed reading.
As suggested earlier, count the "TEST" records in a variable port of type Bigint in an Expression transformation (let's assume you name this port "v_cnt_errors"); also define another variable port of type Bigint with this expression:
SetVariable( $$CNT_ERRORS, v_cnt_errors)
(assuming that you named the mapping variable $$CNT_ERRORS)
More details, explanations, and samples can be found in the Designer Guide for PowerCenter.
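To visualise how those two variable ports behave row by row, here is a small Python sketch; the "TEST" condition and the Max aggregation type are assumptions for illustration, not taken from the Designer Guide.

# Per-row evaluation of the Expression transformation's variable ports.
rows = [{"record_type": "TEST"}, {"record_type": "LIVE"}, {"record_type": "TEST"}]

cnt_errors = 0     # the mapping variable $$CNT_ERRORS
v_cnt_errors = 0   # the variable port, evaluated once per row

for row in rows:
    if row["record_type"] == "TEST":          # illustrative error condition
        v_cnt_errors = v_cnt_errors + 1
    # SetVariable($$CNT_ERRORS, v_cnt_errors): with a Max aggregation type
    # the variable ends up holding the highest (i.e. final) count.
    cnt_errors = max(cnt_errors, v_cnt_errors)

print(cnt_errors)  # 2 -> the total number of "TEST" records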
I read multiple articles and watched many videos about how to use this tool - and was still confused! Your instructions were easy to understand and made the process simple.
Thank you,
Preethi.
Hi Man,
10/10 !!! Thank you for making your blogs an embodiment of perfection and simplicity. You make everything so easy to follow.
Q1, Update Strategy vs. Treat Source Rows As: wrong. The UPD just tells the pmdtm process whether the current record shall be inserted into the target, updated (according to the update details!), deleted from the target, or rejected. Data Driven just means that the flag set by the UPD shall be observed, instead of handling all records the same way.
When the UPD flags a record as DD_UPDATE, then the choice between Update Else Insert, Insert Else Update, or Update As Update will be followed.
In all other cases this choice does not apply.
OK, one additional BUT: if you flag a record for DD_INSERT but you don't check the checkbox "Insert", then the record will NOT be inserted into the target table.
The same holds true for the three Update flags and the Delete flag in the appropriate manner.
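To make the interplay between the UPD flags and the session checkboxes concrete, here is a small Python sketch; the DD_* values match the PowerCenter constants, everything else is illustrative.

# "Data Driven": the row-level flag decides the operation, but only if the
# matching session checkbox is enabled (the "additional BUT" above).
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def route_row(flag, session):
    """Return the operation applied to the row, or None if it is dropped."""
    if flag == DD_INSERT and session["insert"]:
        return "INSERT"
    if flag == DD_UPDATE and session["update"]:
        return "UPDATE"   # which flavour depends on Update As Update / Else Insert
    if flag == DD_DELETE and session["delete"]:
        return "DELETE"
    return None           # DD_REJECT, or the matching checkbox is unchecked

# A DD_INSERT row with the Insert checkbox unchecked is NOT written:
print(route_row(DD_INSERT, {"insert": False, "update": True, "delete": True}))  # None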
Q2, SCD without an UPD: if you want to work without an UPD transformation, then you will need separate mappings and sessions for the INSERT records, the UPDATE records, and the DELETE records.
Q3: where's the difference from Q2?
Q4, default values:
Treat Source Rows As = Insert
Insert, Update As Update, and Delete are checked by default.
I am so grateful for your blog. Really looking forward to reading more.
Thank you,
Ajeeth
Hello There,
Smoking hot stuff! You've brightened me right up. I feel as bright and fresh as your prolific website and blogs!
Functionally both approaches are equivalent. Some people don't like having more than one session per workflow, so it's more or less a matter of policies and habit which approach to take.
Personally I would prefer a mixture of both approaches: one mapping with two target load paths, the first target load path just writing the header line to the target file, the second target load path writing the "real" data to the output file. This is in my opinion the easiest approach, but you have to understand how the so-called Target Load Plan works in a PowerCenter mapping. It's not difficult, but some people simply don't like this approach.
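A minimal Python sketch of that single-mapping idea, assuming a pipe-delimited flat file: the first "target load path" writes the header line, the second appends the real data, mirroring the ordering the Target Load Plan guarantees. The file name is made up.

# First target load path: write only the header line.
def write_header(path, header):
    with open(path, "w") as f:
        f.write(header + "\n")

# Second target load path: append the real data rows.
def write_data(path, rows):
    with open(path, "a") as f:
        for row in rows:
            f.write("|".join(str(v) for v in row.values()) + "\n")

write_header("output.dat", "ID|Account_Name|Account_number|User_Name")
write_data("output.dat", [
    {"ID": 1, "Account_Name": "Acc_James_1",
     "Account_number": "123232", "User_Name": "James"},
])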
I started using this blog for my training practice.
Excellent tutorials - very easy to understand with all the details. I hope you will continue to provide more such tutorials.
Thanks a heap,
Morgan