Monday 27 August 2012

My Journey to SAP BI 7.0 Certification


I cleared the C_TBW45_70: SAP Certified Application Associate - Business Intelligence with SAP NetWeaver 7.0 certification in May 2012.

Prelude:  I prepared for the certification for around 4 months, reading and understanding each topic.  This blog is the result of that.  I covered the books prescribed by SAP, as mentioned here: http://anjalisapbi.blogspot.in/2012/02/sap-certified-application-associate.html

For APD and IP topics I read this book.  I found this book concise and easy to understand.

 

The Exam Day!

The exam was scheduled for 9.00 am and, after a few technical glitches, started half an hour late.  We were permitted to take only the confirmation letter sent by SAP into the exam hall.  We were given a couple of sheets of paper, a pen, and a bottle of water.

There were 80 questions, and the initial screens mentioned that the pass percentage for the BI certification was 66%.  Most of the questions had more than one correct answer.  If 3 correct answers are expected, you need to get all 3 of them correct to score 1 mark; there are no partial marks.  The other types were True/False and a few fill-in-the-blanks.  The number of correct answers expected is clearly mentioned along with each question, e.g. "3 correct answers".

The exam was divided into around 10 sections (I forget how many!) and each section had around 3 to 15 questions.  APD had the least number of questions.  From IP there were around 6-7 questions.  The majority of the questions were from the reporting topics.  I completed the exam in around 2 hours and didn't find it too difficult.

I clicked the submit button about 2½ hours into the exam.  Fingers crossed!!

After a few seconds I saw 'Congratulations …', which was followed by a report giving the percentage of correct answers for each of the above sections.

After 2 weeks, I received a confirmation mail from SAP asking for the address to which the certificate should be dispatched.  In another 2 weeks I received it.

Overall a wonderful experience!!  Best of luck if you are writing one!

Important SAP Tables


Sales & Distribution Tables

VBAK - Sales document header data

VBAP - Sales document item data

VBLK - Sales document delivery note header

VBBE - Sales requirements

VBFA - Sales document flow

VBUK - Sales document header status

VBUP - Sales document item status

VBEH - Schedule line history

VBEP - Sales document schedule line data

VBPA - Sales document partner

VBRK - Billing header data

VBRP - Billing item data

LIKP - Delivery header data

LIPS - Delivery item data
 
Material Management
MARA : General Material Data (Transparent Table)
Development Class : MG
Delivery Class : Application Table
(Master & Transaction Data)
Text Table : MAKT
 
Important Fields
MANDT Client
MATNR Material Number
ERSDA Creation Date
ERNAM Name of person who created.
MTART Material Type.
MATKL Material Group.
MEINS Base Unit of Measure.
ZEINR Document Number.
ZEIAR Document Type.
WRKST Basic Material.
NTGEW Net Weight.
VOLUM Volume.
VOLEH Volume Unit.
SPART Division.
KZKFG Configuration Material.
VHART Shipping Material Type.
MAGRV Material Group Shipping Material.
DATAB Valid from Date.
ATTYP  Material Category.
PMATA  Pricing reference material.
 
Please see this link for more details
http://www.erpgenie.com/sap/abap/tables_sd.htm

LO DataSources

The Logistics Cockpit (LC) is a technique to extract logistics transaction data from R/3.

All the DataSources belonging to logistics can be found in the LO Cockpit (Transaction LBWE) grouped by their respective application areas.

The DataSources for logistics are delivered by SAP as part of its standard Business Content in the SAP ECC 6.0 system and follow a naming convention. A logistics transaction DataSource is named 2LIS_<Application>_<Event><Suffix>, where:
  • Every LO DataSource starts with 2LIS.
  • Application is specified by a two-digit number relating to a set of events in a process, e.g. application 11 refers to SD sales.
  • Event specifies the transaction that provides the data for the application, and is optional in the naming convention, e.g. event VA refers to creating, changing or deleting sales orders (Verkaufsauftrag is German for sales order).
  • Suffix specifies the detail of information that is extracted, e.g. ITM refers to item data, HDR refers to header data, and SCL refers to schedule lines.

Upon activation of the Business Content DataSources, all components like the extract structure, extractor program, etc. are also activated in the system.

The extract structure can be customized later to meet specific reporting requirements, and the necessary user exits can be made use of to achieve the same.

A generated extract structure has the naming convention MC<Application><Event>0<Suffix>, where the suffix is optional. Thus, e.g., 2LIS_11_VAITM (sales order item) has the extract structure MC11VA0ITM.
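The naming conventions above can be expressed as a small sketch (a hypothetical Python helper, not an SAP API; it assumes the optional event part is present in the name):

```python
# Derive LO extract-structure and setup-table names from a DataSource name,
# following the conventions described in this post. Illustrative only.
def lo_names(datasource: str) -> dict:
    # e.g. "2LIS_11_VAITM" -> prefix "2LIS", application "11", rest "VAITM"
    prefix, application, rest = datasource.split("_")
    assert prefix == "2LIS", "every LO DataSource starts with 2LIS"
    event, suffix = rest[:2], rest[2:]        # "VA", "ITM" (suffix may be empty)
    extract_structure = f"MC{application}{event}0{suffix}"
    return {
        "application": application,
        "event": event,
        "suffix": suffix,
        "extract_structure": extract_structure,
        # setup tables append SETUP to the extract structure name
        "setup_table": extract_structure + "SETUP",
    }

print(lo_names("2LIS_11_VAITM")["extract_structure"])  # MC11VA0ITM
print(lo_names("2LIS_11_VAHDR")["setup_table"])        # MC11VA0HDRSETUP
```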


Delta Initialization:
  • LO DataSources use the concept of setup tables to carry out the initial data extraction process.
  • The restructuring/setup tables prevent the BI extractors from directly accessing the frequently updated, large logistics application tables; they are used only for the initialization of data to BI.
  • For loading data first time into the BI system, the setup tables have to be filled.
Delta Extraction:
  • Once the initialization of the logistics transaction data DataSource is successfully carried out, all subsequent new and changed records are extracted to the BI system using the delta mechanism supported by the DataSource.
  • The LO DataSources support the ABR delta mechanism, which is compatible with both DSOs and InfoCubes. The ABR delta creates a delta with after, before and reverse images that are updated directly to the delta queue, which is automatically generated after a successful delta initialization.
  • The after image provides the status after the change, the before image gives the status before the change with a minus sign, and the reverse image sends the record with a minus sign for deleted records.
  • The delta provided by the LO DataSources is a push delta, i.e. the delta records from the respective application are pushed to the delta queue before they are extracted to BI as part of the delta update. Whether a delta is generated for a document change is determined by the LO application. This is a very important aspect of the logistics DataSources: the very program that updates the application tables for a transaction also triggers/pushes the data for the information systems, by means of an update type, which can be a V1 or a V2 update.
  • The delta queue for an LO DataSource is automatically generated after successful initialization and can be viewed in transaction RSA7, or in transaction SMQ1 under name MCEX<Application>.
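The ABR image logic described above can be sketched in a few lines (illustrative Python with made-up field names; this is not SAP code, just the additive-image idea):

```python
# A change produces a before image (old values negated, recordmode 'X') and
# an after image (new values, recordmode ' '); a creation produces a new
# image 'N'; a deletion produces a reverse image 'R' (values negated).
# Summing the images per document gives the additive net effect an InfoCube
# can absorb.
def abr_images(old, new):
    """old/new are dicts like {'doc': '4711', 'qty': 10}; None means absent."""
    images = []
    if old is not None and new is not None:                    # change
        images.append({**old, "qty": -old["qty"], "recordmode": "X"})
        images.append({**new, "recordmode": " "})
    elif old is None:                                          # creation
        images.append({**new, "recordmode": "N"})
    else:                                                      # deletion
        images.append({**old, "qty": -old["qty"], "recordmode": "R"})
    return images

# Change order 4711 from qty 10 to qty 4: the images add up to -6.
imgs = abr_images({"doc": "4711", "qty": 10}, {"doc": "4711", "qty": 4})
print(sum(i["qty"] for i in imgs))  # -6
```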
Update Method
The following three update methods are available
  1. Synchronous update (V1 update)
  2. Asynchronous update (V2 update)
  3. Collective update (V3 update)
Synchronous update (V1 update)
  • The statistics update is carried out at the same time as the document update in the application tables; whenever we create a transaction in R/3, the entries get into the R/3 tables, and this takes place in the V1 update.
Asynchronous update (V2 update)
  • The document update and the statistics update take place in different tasks. The V2 update starts a few seconds after the V1 update, and in this update the values get into the statistical tables, from where we do the extraction into BW.
V1 and V2 updates do not require any scheduling activity.

Collective update (V3 update)
  • The V3 update uses delta queue technology and is similar to the V2 update. The main difference is that V2 updates are always triggered by applications, while the V3 update may be scheduled independently.
Update modes
  1. Direct Delta
  2. Queued Delta
  3. Unserialized V3 Update

Direct Delta
  • With this update mode, extraction data is transferred directly to the BW delta queues with each document posting.
  • Each document posted with delta extraction is converted into exactly one LUW in the related BW delta queues.
  • In this update mode there is no need to schedule a job at regular intervals (through LBWE "Job control") to transfer the data to the BW delta queues. Thus, additional monitoring of update data or an extraction queue is not required.
  • This update method is recommended only for customers with a low volume of documents (a maximum of 10,000 document changes - creating, changing or deleting - between two delta extractions) for the relevant application.
Queued Delta
  • With the queued delta update mode, the extraction data is written to an extraction queue, from which it is transferred to the BW delta queues by an extraction collective run.
  • If we use this method, it is necessary to schedule a job (the extraction collective run) to regularly transfer the data to the BW delta queues.
  • SAP recommends scheduling this job hourly during normal operation after a successful delta initialization, but there is no fixed rule: it depends on the specifics of each situation (business volume, reporting needs and so on).
Unserialized V3 Update
  • With the unserialized V3 update, the extraction data is written to an update table, from which it is transferred to the BW delta queues by the V3 collective run.
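The contrast between direct and queued delta can be sketched as follows (an illustrative Python sketch; the queue names and functions are invented, not SAP objects):

```python
# Direct delta: each posting lands straight in the BW delta queue.
# Queued delta: postings collect in an extraction queue until a scheduled
# collective run drains them into the delta queue.
delta_queue, extraction_queue = [], []

def post_document(doc, mode):
    if mode == "direct":
        delta_queue.append(doc)        # one LUW per posting, no job needed
    elif mode == "queued":
        extraction_queue.append(doc)   # waits for the collective run

def collective_run():
    # stands in for the hourly job scheduled via LBWE "Job control"
    delta_queue.extend(extraction_queue)
    extraction_queue.clear()

post_document("order 1", "direct")
post_document("order 2", "queued")
print(len(delta_queue))   # 1 - the queued document is not yet visible to BW
collective_run()
print(len(delta_queue))   # 2
```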
Setup Table
  • A setup table is a cluster table that is used to extract data from the R/3 tables of the same application.
  • Setup tables store the data before it is updated to the target system. Once you fill the setup tables, you need not go to the application tables again and again, which in turn improves system performance.
  • The LO extractor takes data from the setup tables during initialization and full upload.
  • As setup tables are required only for full and init loads, we can delete their data after loading in order to avoid duplicate data.
  • We fill the setup tables in LO by using OLI*BW, or via SBIW → Settings for Application-Specific DataSources → Logistics → Managing Extract Structures → Initialization → Filling in the Setup Table → Application-Specific Setup of Statistical Data.
  • We can delete the setup tables by using transaction LBWG. You can also delete setup tables application-wise via SBIW → Settings for Application-Specific DataSources → Logistics → Managing Extract Structures → Initialization → Delete the Contents of the Setup Table.
  • The technical name of a setup table is <extract structure>SETUP. For example, for DataSource 2LIS_11_VAHDR the extract structure is MC11VA0HDR, so the setup table is MC11VA0HDRSETUP.
LUW
  • LUW stands for Logical Unit of Work. When we create a new document, it forms a new image 'N'; whenever an existing document is changed, it forms a before image 'X' and an after image ' ', and these before and after images together constitute one LUW.
Delta Queue (RSA7)
  • The delta queue stores the records that have been generated since the last delta upload and have not yet been sent to BW.
  • Depending on the update mode selected, generated records either come directly to this delta queue or go through the extraction queue.
  • The delta queue (RSA7) maintains 2 images: the delta and the repeat delta. When we run a delta load in the BW system it sends the delta records, and when a delta load fails and we request a repeat delta, it resends the previous delta records.
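The delta/repeat-delta behaviour can be sketched like this (a hypothetical class for illustration, not the real qRFC implementation behind RSA7):

```python
# The queue hands out pending records as a delta and keeps a copy of the
# last delta so BW can request a repeat if the previous load failed.
class DeltaQueue:
    def __init__(self):
        self.pending = []      # records generated since the last delta
        self.last_delta = []   # kept for a possible repeat request

    def push(self, record):
        self.pending.append(record)

    def read_delta(self):
        self.last_delta = self.pending   # becomes the repeat image
        self.pending = []
        return self.last_delta

    def repeat_delta(self):
        return self.last_delta           # resend after a failed load

q = DeltaQueue()
q.push("rec1"); q.push("rec2")
print(q.read_delta())    # ['rec1', 'rec2']
print(q.repeat_delta())  # ['rec1', 'rec2'] - same records, resent
```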

Statistical Setup
  • The statistical setup is a program specific to each application component. Whenever we run this program, it extracts all the data from the application's database tables and puts it into the setup tables.

Tuesday 21 August 2012

Interview Questions I faced - 2012

1) In a process chain scheduled to run daily, the InfoPackage is not picking up the file.  What might be the reason?  How will you rectify it?

2) How to find out when the last statistical setup was run?

3) When will you go for an infosource in BI 7.0?

4) What is BIA?

5) Describe step by step procedure to define exceptions, conditions, variable offsets?

6) What is a text variable?  Where will you define it?

7) What are the different performance optimization methods you have used?

8) When will you go for aggregates?

9) What is exceptional aggregation?

10) Are you aware of HANA?

11) What is in-memory appliance?

12) What are dataservices?  How is it used in BW 7.3?

13) Describe step by step procedure of one full extraction from ECC?

14) What is the pseudo-delta mechanism?

15) Can you calculate the difference between 2 date fields during reporting?

16) There is a MultiProvider which accesses 2 InfoCubes that have some InfoObjects in common.  How will you access them in a report?

17) If it's an InfoSet in the above question, how do you access it?

18) Can you access a .exe file from a Process Chain?

19) What is a DB Connect?  What are RFCs?

20) How can I allow special Characters?

21)  How to correct an error in PSA?

22) How does reconstruct work?

23) When will you go for Aggregates?

24) What are flat aggregates?

25) What is a degenerated dimension?