
Thursday 6 December 2012

How to find a record in the PSA


1. Go to the Manage screen of the DSO >> Contents tab.
2. Click on the Change Log table.
3. Enter the Purchase Order number / Sales Document number as applicable and execute.
4. The output will contain the Request IDs (ODSR*).
5. Go to the Manage screen of the DSO >> Requests tab.
6. Check the request number against this ODSR* request (obtained from the change log).
7. Open the corresponding request in the PSA (right-click the DataSource -> Manage).

Also try this:

1. Call T-code RSRQ and enter the Request ID, if you know it.
2. The request monitor screen will open.
3. From there you can also go to the PSA by clicking the PSA symbol at the top.

The data for that request will be displayed in the PSA. If the data for that request has been deleted from the PSA, a pop-up will be displayed.
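To make the matching in the first method concrete, here is a minimal Python sketch of the lookup (all request IDs below are invented examples, not real system data):

```python
# Illustrative only: match the change-log request ID (ODSR*) against the
# requests listed on the Requests tab of the DSO's Manage screen.
change_log_request = "ODSR_4X7KQ2M9PLA3UVBN8C1DEFGH"

# (change-log request ID, load/PSA request ID) pairs as read off the
# Requests tab -- invented values.
requests_tab = [
    ("ODSR_AAAAAAAAAAAAAAAAAAAAAAAA", "REQU_11111111111111111111111"),
    ("ODSR_4X7KQ2M9PLA3UVBN8C1DEFGH", "REQU_22222222222222222222222"),
]

matches = [req for odsr, req in requests_tab if odsr == change_log_request]
print(matches)  # -> ['REQU_22222222222222222222222'] - open this one in the PSA
```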

Thursday 29 November 2012

Unable to select the transformation in a transport request

Solution
Most probably the transformation is already contained in another transport request; this happens mostly when the collection mode is set to automatic.
1. Double-click on your transformation.
2. Go to Extras --> Object Directory Entry.
3. Then click on "Lock Overview".
4. It will either display a message saying "the object is not locked" or show you the transport request in which the object is already collected. This tells you which transport request it is contained in.

SAP BI Runtime error: MESSAGE_TYPE_X

Solution:
1. Go to SE38 in your BW system.
2. Enter the program name RSSM_OLTP_INIT_DELTA_UPDATE.
3. Execute.
4. Enter your DataSource name (e.g. 0FI_AA_11) and the logical source system name.
5. Execute the program.


No output will appear. Now execute your process chain; it should succeed.

How to download data from InfoCube to Excel

You can directly download the data of an InfoCube to Excel.
Display the data of the InfoCube. On the menu bar, choose List --> Export --> Local File. Choose Spreadsheet. Specify the path and name of the file and click Generate. This will download the data from the cube to an Excel file on your machine.
Alternatively, you can download the contents of your cube into a CSV file using an Open Hub Destination (OHD).
The steps for the OHD are below.
1. Select Open Hub Destination in RSA1.
2. Right-click on the InfoArea and select Create Open Hub Destination.
3. Provide a name and description for the Open Hub.
4. Under Template, select the object type "InfoCube" and give the name in the Name field. Press Enter.
5. On the Destination tab, select the destination type File. Give all other details such as file name, directory, etc. Select the type FILE and give the separator as comma (,).
6. On the Field Def. tab you will see all the fields that can be transferred from the source InfoCube to the file. Click Save.
7. Right-click on the created Open Hub and choose Create Transformation. Give the InfoCube as the source. Adjust the transformation as needed.
8. Similarly, create a DTP. Then save and activate the DTP.
9. You can now schedule your extract.
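Once the DTP has run, the Open Hub writes a plain comma-separated file, so it can be opened in Excel or processed further. As a minimal sketch (the file path below is hypothetical), the resulting CSV can be read like this in Python:

```python
import csv

# Hypothetical path: wherever your Open Hub Destination writes the file.
path = "/usr/sap/trans/data/zcube_extract.csv"

with open(path, newline="", encoding="utf-8") as f:
    reader = csv.reader(f)  # separator was set to comma in the OHD
    for row in reader:
        print(row)          # each row is one record extracted from the InfoCube
```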

Transporting BEx Queries

Create a BEx transport request through RSA1 --> Transport Connection. On the Transport Connection screen, click the 'BEx' icon (the one with the transport truck); in the screen that opens, click on Assign/Delete and create the transport request for BEx first. All the changes you make to the transported queries are then captured in this TR.

Go to RSA1, then Transport Connection. Here, find Query Elements; expanding it, you can find your query. Transfer it to the right pane and collect the necessary objects in a request. A new task will be created. You need to ensure that all the calculated key figures, restricted key figures, and variables are captured in the transport request. Once you have the request, you can release it and then import it into the quality box.

Wednesday 3 October 2012

LO DataSource Enhancement Steps

BI Certification Questions

Data Modeling

1. Which of the following statements refer to a Remote Cube?
a. InfoCube that gathers data from several Basic Cubes
b. Transaction data that are not managed in SAP BW but are available to SAP BW users
c. Remote Cubes do not contain data
d. A cube built in a non-BW data warehouse
e. None of the above
ANSWER: b, c

2. If product price is a slowly changing dimension, how do you model product price if you want to report on the price at the time of the query?
a. As a characteristic on the material dimension
b. As a time-dependent navigational attribute
c. As a time-dependent hierarchy of material
d. As a time-independent navigational attribute of material
e. None of the above
ANSWER: d
3. Which of the following refer to pointer tables that provide the technical link to the master data outside of the InfoCube?
a. Dimension tables
b. Set IDs
c. M tables
d. SID tables
e. ODS transparent tables
ANSWER: d
4. Which of the following Dimensions are delivered by SAP BW for each InfoCube?
This question has more than one answer.
a. Unit
b. Time
c. Transfer package
d. Currency unit
e. Info packet (Info package)
f. GIS
ANSWER: a, b, e
5. Which of the following statements describes a categorical dimension?
This question has only one answer.
a. An artificial dimension used to describe the status of another dimension
b. An artificial dimension/attribute used to categorize another dimension
c. An artificial dimension used in a reporting scenario where the comparison of the same key figure is needed
d. A dimension that is used across all cubes
e. None of the above
ANSWER: b
6. Which of the following is an example of a slowly changing dimension?
This question has only one answer.
a. Customer on a sales order
b. Date
c. Person responsible for a cost centre
d. Value of a purchase order
e. None of the above
ANSWER: c
7. Fact tables may be partitioned by which of the following characteristics?
a. 0CALDAY
b. 0CALMONTH
c. 0CALYEAR
d. 0CALPERIOD
e. 0CALWEEK
ANSWER: b
8. Which of the following statements is correct concerning the operational data store (ODS)?
a. The data in the ODS are compressed and do not have a Request ID attached
b. ODS can consolidate data from different source systems
c. Data in the ODS cannot be reported upon
d. The ODS is the inbound layer of the SAP BW system and data are written directly to it from the source system
e. The ODS generally holds transaction data only
ANSWER: b, e

9. A Fact table consists of which of the following elements?
a. Characteristic InfoObjects
b. Key figures InfoObjects
c. Calculated key figures InfoObjects
d. Hierarchies
e. DIM Ids
ANSWER: a, b

10. If you worked for a retail company that changed prices on a very frequent basis, how
would you model the price?
This question has only one answer.
a. In the master data table as an attribute
b. In the transaction table as a new characteristic
c. In the ODS to enhance performance
d. In the fact table
e. None of the above
ANSWER: b

11. Which of the following occurs as a result of degenerating dimensions?
This question has more than one answer.
a. The grain of the fact table represents actual working documents like order number or invoice number
b. All descriptive attributes of the grain are located in other dimensions
c. The InfoCube becomes very weak and performs poorly when it is run
d. The degenerating dimensions cannot be modeled using normal star schema
e. None of the above
ANSWER: a, b, c

12. What values can a key figure retain?
This question has only one answer.
a. Last value
b. Average value
c. First value
d. Sum
e. All of the above
f. None of the above
ANSWER: e

13. Which of the following describes an unbalanced hierarchy?
This question has only one answer.
a. A hierarchy with too many leaves within one branch
b. A hierarchy with fewer nodes within one or more branches
c. Hierarchy without leaves
d. The term used for data that are delivered from two or more sources on different aggregation
levels
e. None of the above
ANSWER: b

14. How many hierarchies can be created for a key figure?
This question has only one answer.
a. 10
b. 2
c. 5
d. More than 1
e. None
ANSWER: e

15. Which of the following represents a characteristic that spans an entire data model and
upon which all other entities are dependent?
This question has only one answer.
a. Many-to-many relationship
b. A degenerated characteristic
c. A strong entity
d. An attribute
e. A data model characteristic
ANSWER: e

16. Navigational attributes behave in a query like which of the following?
This question has only one answer.
a. Attributes
b. Variables
c. Characteristics
d. Filters
e. None of the above
ANSWER: c

17. When should an attribute be placed in a separate dimension?
This question has only one answer.
a. Always
b. If the attribute changes frequently
c. If the attributes have a large number of distinct values
d. Never
e. None of the above
ANSWER: b

18. Which of the following objects is not provided in business content?
This question has only one answer.
a. InfoCube
b. InfoSource
c. KPIs
d. Queries
e. SAP R/3 extractor program
f. None of the above
ANSWER: f

19. Which of the following objects keeps the aggregate and InfoCube in sync?
This question has only one answer.
a. Dim pointer
b. Read pointer
c. SID pointers
d. Surrogate pointer
e. None of the above
f. All of the above
ANSWER: b

20. A MultiCube joins numerous other InfoCubes via which of the following?
This question has only one answer.
a. A common set of characteristics
b. A common set of key figures
c. A common set of InfoSources
d. A common set of DataSource
e. None of the above
ANSWER: a


21. Why do we want to normalize a data model?
This question has only one answer.
a. To avoid redundancy
b. To make it look good
c. To increase performance
d. To comply with the multidimensional data modeling rules
e. None of the above
ANSWER: a

22. Hierarchy can be which of the following?
This question has more than one answer.
a. Time-dependent name
b. Version dependent
c. With intervals
d. Time-dependent structure
e. Used by multiple InfoCube
f. None of the above
ANSWER: b, c, d

23. If product price is a slowly changing dimension, how do you model product price if you
want to report the price at any point in time?
This question has more than one answer.
a. As a time-dependent navigational attribute
b. As a time-dependent navigational attribute of material
c. As a time-independent navigational attribute of material
d. As a characteristic on the material dimension
e. As a time-dependent hierarchy of material
ANSWER: b, e


24. Name the three differences between OLAP and OLTP systems?
This question has more than one answer.
a. Software used
b. Current data versus history required
c. One to two months of data versus two to seven years of data
d. Detailed transaction data versus summarized data
e. None of the above
ANSWER: b, c, d


25. Sometimes we have to combine unrelated characteristics into one dimension, due to
which of the following facts?
a. We run out of dimensions and we have to include the data in the same InfoCube
b. We are running out of space on the database
c. We are looking for higher performance
d. Lower number of dimensions is a way to optimize data staging
e. To reduce the number of joins
f. All of the above
ANSWER: a, e


26. Which of the following items refer to InfoObjects?
This question has more than one answer.
a. Characteristics
b. Key figures
c. Queries
d. Attributes
e. None of the above
ANSWER: a, b


27. What are the three predefined dimensions in SAP BW?
This question has more than one answer.
a. Date
b. InfoPackage
c. Time
d. Unit
e. None of the above
ANSWER: b, c, d


28. What are the advantages of a smaller dimension?
This question has more than one answer.
a. Available immediately after transaction data load
b. Improves query performance
c. Overhead during load time
d. It is not an advantage but a liability
e. None of the above
ANSWER: a, b, c


29. What are the limitations of a Remote Cube?
This question has more than one answer.
a. Additional requirements for extractors
b. Slow
c. System dependent
d. Only for transaction data
e. Small volume
ANSWER: a, b, d


30. Which of the following are examples of characteristics?
This question has more than one answer.
a. Monthly sales
b. Products
c. Cost centers
d. Company codes
e. Sales person
ANSWER: b, c, d, e


31. A conceptual description of data objects, their attributes, and the relationships between them is:
a. A data warehouse
b. A data model
c. An InfoCatalog
d. An InfoSet
e. An InfoSource
ANSWER: b

32. Which of the following are types of SAP BW InfoCubes?
a. MultiCube
b. Inverted Cube
c. Remote Cube
d. Relational Cube
e. Basic Cube
ANSWER: a, c, e

Tuesday 2 October 2012

In-Memory Data Management - Prof. Hasso Plattner

In-Memory Data Management

Prof. Hasso Plattner
The online course focuses on the management of enterprise data in column-oriented in-memory databases. The latest hardware and software trends have led to the development of a revolutionary new technology that enables flexible and lightning-fast analysis of massive amounts of enterprise data. The basic concepts and design principles of this technology are explained in detail. Beyond that, the implications of the underlying design principles for future enterprise applications and their development are discussed. You will understand why things that seem unbelievable become possible when using an in-memory column-oriented database instead of a traditional row-oriented disk-based one.
For more information: https://openhpi.de/courses
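As a rough illustration of the row-oriented versus column-oriented distinction the course builds on, here is a minimal Python sketch (the table and query are invented examples): an analytical query that aggregates a single attribute only has to touch one column in a columnar layout, instead of scanning every full row.

```python
# Invented example: the same sales table in row and column layout.
rows = [
    {"order": 1, "product": "A", "amount": 100},
    {"order": 2, "product": "B", "amount": 250},
    {"order": 3, "product": "A", "amount": 175},
]

# Column-oriented layout: one contiguous list per attribute.
columns = {
    "order":   [1, 2, 3],
    "product": ["A", "B", "A"],
    "amount":  [100, 250, 175],
}

# Analytical query: total amount.
# Row store: every complete row is visited.
total_row_store = sum(r["amount"] for r in rows)

# Column store: only the single "amount" column is scanned,
# which is why aggregations over few attributes are so fast.
total_column_store = sum(columns["amount"])

assert total_row_store == total_column_store == 525
```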

 
Prof. Hasso Plattner is co-founder of SAP AG, where he served as the CEO until 2003 and has since been chairman of the supervisory board. SAP AG is the leading provider of enterprise software solutions. In his role as chief software advisor, he concentrates on defining the mid- and long-term technology strategy and direction of SAP.
 

Generic Extraction based on a View

These are detailed step-by-step documents about how and why to do generic extraction with a view.

Part 1
http://scn.sap.com/docs/DOC-32025
Part 2
http://scn.sap.com/docs/DOC-32026
Part 3
http://scn.sap.com/docs/DOC-32027
Part 4
http://scn.sap.com/docs/DOC-31966

Tuesday 25 September 2012

Creating Navigational attributes in 3.5 flow

In the 3.x flow, the steps to create navigational attributes are as follows:

1) Attribute-level change: go to /nRSD1, enter your navigational attribute's name --> Display --> uncheck the 'Attribute Only' flag and activate.

2) InfoObject-level change: go to /nRSD1, enter the name of the InfoObject on which you want to maintain the navigational attribute --> Display --> detail/navigational attributes --> check the navigational attribute flag against your navigational attribute and activate. Click 'Yes' wherever prompted.

3) InfoCube-level change: go to /nRSDCUBE, enter your InfoCube name --> Display --> switch to change mode --> click on Navigational Attributes --> select yours and activate.

4) Activate the update rules: InfoCube --> Update Rules --> Change, then activate.

Tables for Process Chains Details

See the link below for a list of important BW tables:
http://wiki.sdn.sap.com/wiki/display/BI/Important+Tables+in+SAP+BI+(+NW2004)

RSCOMPTLOGOT - Process Chain Component
RSEVENTCHAIN - Event Chain Processing Event Table
RSEVENTHEAD - Header for the Event Chain
RSEVENTHEADT - Header for the Event Chain (Texts)
RSPCCHAIN - Process Chain Details
RSPCCHAINATTR - Attributes for a Process Chain
RSPCCHAINEVENTS - Multiple Events with Process Chains
RSPCCHAINT - Texts for Chain
RSPCCOMMANDLOG - System Command Execution Logs (Process Chains)
RSPCLOGCHAIN - Cross-Table Log ID / Chain ID
RSPCLOGS - Application Logs for the Process Chains
RSPCPROCESSLOG - Logs for the Chain Runs
RSPCRUNVARIABLES - Variables for Process Chains at Runtime
RSPC_MONITOR - Monitor Individual Process Chains

Friday 21 September 2012

A Guide to the new SDN/SCN from BI perspective

Often a newbie to SCN (SAP Community Network, www.scn.sap.com) gets quite bewildered by its vastness and finds it hard to home in on an area of interest. I have collected this information over a period of time, and I hope it will help you to understand the new SDN better: how it is organized, how to start a discussion, how to search, and how to make the most of it.
The SCN is divided into spaces for each technology. Here I am listing the BI spaces and the links to access them.
BI Spaces in SCN
1) Data-warehousing
1a) SAP NetWeaver Business Warehouse in Data-warehousing
1b) Business Content and Extractors in Data-warehousing
2) Business Intelligence




6a) SAP Certification in SAP Training and Education

Asset Types
In each of these spaces, the content is divided into the following asset types:

Discussion
Collaborative document
Article
Blog
Presentation
Wiki page

E.g. if you want to post a question: first log in, go to Create a Discussion, choose a space, and type in your question.
 
How to share Your Knowledge in SCN Topic Spaces
Please check this link

The new content submission (Publishing White papers)
SCN now accepts formal article submissions only from SAP; if you are not an SAP employee, the system won't let you in. Other community members should instead publish their articles directly in the respective spaces as Jive docs, by clicking on Create a Document.
For more information

Points in SCN (Contributor Reputation Program)
All of the above was collected to the best of my knowledge. If anyone has something to add, I request you to please do so and share it with the community.
"The more you share your knowledge, the more you get"
 



Tuesday 18 September 2012

My tryst with PMP Certification

Hi all,

I know it's a little late to bring this up, and this blog is all about SAP BI... but just in case any of you are contemplating becoming PMP certified, this one is for you.

I passed my PMP exam in August 2010. I have been on and off the pmzilla forum, learning a lot from others' questions and lessons learned.
I started preparing in the month of May and was consistent in preparing for 2 hours per day. The materials I used were the PMBOK 4th edition, Head First PMP, PMP: Project Management Professional Exam Study Guide by Kim Heldman, and Rita Mulcahy's PMP Exam Prep, 6th edition.
I started off with the PMBOK, 4th edition. I read through it the first time just to get an idea of what to expect; that itself took a week to finish! Then I joined the above forum and began reading how others prepared for the exam. This brought me to Rita Mulcahy's PMP book, which was really helpful in getting the basics right. Then I followed a pattern of reading a chapter from the PMBOK, then the same chapter from Rita's PMP, Head First PMP, and then Kim Heldman's PMP guide. This helped me get a strong foundation; I analyzed situations and found the gaps in my PM knowledge. I took notes of important points from these books. I found Kim Heldman's PMP guide organized differently, which made it hard to sort according to the knowledge areas, so I used to just skim through it and answer the chapter-end questions. On the whole, I found that the Head First PMP book had simple explanations and that Rita's PMP followed a more comprehensive approach. I used Rita Mulcahy's FASTrack software too. After this I started taking online tests from different sites like:
www.preparepm.com
www.pm-abc.com
www.passionatepm.com
www.simplilearn.com
www.headfirstlabs.com/PMP/pmp_exam/v1/quiz.html
http://www.voightps.de/Free_PMP_Exam.asp
www.oliverlehmann.com/
I used to take notes on all the incorrect answers I got and analyze why I got them wrong. I used to put in extra hours during weekends. By July I had finished all this and began reading the PMBOK once more; this time I prepared an ITTO dump using mnemonics. I tried this site and found it to be quite good (thanks, Harwinder!): www.monkibo.com/pmp-exam-itto-trainer/index.html
Along with this I used to go through all the notes I had prepared and follow this forum and www.deepfriedbrainpmp.com.
A week before my scheduled exam I did the PMP exam from www.pmstudy.com and also went through the Code of Ethics and Professional Conduct by PMI. I found pmstudy's exam the closest to the actual exam.
My scores were between 75% and 82%. The whole preparation gave me the idea that the exam was going to be tough.
The day before the exam, I just went through the ITTO dump and slept early. I didn't prepare the PMBOK page 43 and formula dumps, as after all this preparation they were crystal clear to me.
Exam Day
I got the afternoon slot, so I just went through my study notes and reached the center 30 minutes ahead. There was a security check (not even in airports have I seen something of this extent!); I was asked to remove my watch and keep my belongings in a locker. Only the locker key and primary ID were allowed inside. I was given 2 sheets of paper and 2 pencils, along with a briefing about the exam and that it was video and audio recorded. When the 15-minute tutorial started, I wrote out my dump within 10 minutes, went through the tutorial, and started the exam. I was relieved to see that the exam was not as tough as it was projected to be: not many paragraph questions, only simple problems to solve. More questions were from quality, risk, change requests, and ITTOs. After 50 questions I was sure I would make it. I finished the exam within 3 hours, with 4 questions marked for review. The rest of the time I spent reviewing all the questions; I changed a couple of answers and hit the End button. I was asked to finish a survey and then, voila, the words I had been dreaming of all the way: Congratulations!!! There were some more sentences too, which I don't remember now. Well, at that point, nothing else mattered! I thanked God. The freaky Friday is nothing but a myth :)
When I came out I was given a score report. I scored Proficient in 3 domains and Moderately Proficient in the others.
A few people I would like to thank: Harwinder and chandraR for their wonderful explanations (great work, guys!!), my family, friends, and most importantly God!!!
Thank you all from the bottom of my heart!

Tuesday 4 September 2012

Error: Setup Table for SD not getting deleted

Error: The setup table for the SD module needs to be deleted. In LBWG, upon clicking the Execute button, a confirmation window opens asking "Data will be lost. Do you want to delete the restructure tables?" I click on Yes, and after that there is no further status message (whether the deletion was successful or not). Now when I go to RSA3 and test the extraction (for 2LIS_11_VAITM), it still shows some records, indicating that the setup table has not been deleted.

Solution:
1. Call transaction SE14 --> enter the table name MC11VA0ITMSETUP --> select the radio button "Tables" --> click on Edit --> select processing type "Background" --> click on "Activate and adjust database".
2. Before performing the "Activate and adjust database" function, first check the database object (click the Check button and select Database object --> OK). If everything is OK, proceed with the steps above.
3. Next, try to delete the setup tables again and check.

Also check whether the DataSource is active before deleting the setup tables.

Monday 27 August 2012

My Journey to SAP BI 7.0 Certification


I cleared my C_TBW45_70 (SAP Certified Application Associate - Business Intelligence with SAP NetWeaver 7.0) certification in May 2012.

Prelude: I prepared for the certification for around 4 months, reading and understanding each topic; this blog is the result of that. I covered the books prescribed by SAP, as mentioned here: http://anjalisapbi.blogspot.in/2012/02/sap-certified-application-associate.html

For the APD and IP topics I read this book. I found it concise and easy to understand.

 

The Exam Day!

The exam was scheduled for 9.00 am and, after a few technical glitches, started half an hour late. We were permitted to take only the confirmation letter sent by SAP into the exam hall. We were given a couple of sheets of paper, a pen, and a bottle of water.

There were 80 questions, and the initial screens mentioned that the pass percentage for the BI certification was 66%. Most of the questions had more than one correct answer; if 3 correct answers are expected, you need to get all 3 of them right to score the mark, and there is no partial credit. The other types were true/false and a few fill-in-the-blanks. The number of correct answers expected is clearly mentioned with each question, e.g. "3 correct answers".

The exam was divided into around 10 sections (I forget how many exactly), and each section had around 3 to 15 questions. APD had the fewest questions. From IP there were around 6-7 questions. The majority of the questions were from the reporting topics. I completed the exam in around 2 hours and didn't find it too difficult.

I clicked the submit button around 2½ hours into the exam. Fingers crossed!!

After a few seconds I saw 'Congratulations…', followed by a report giving the percentage of correct answers for each of the above sections.

After 2 weeks, I received a confirmation mail from SAP, which also asked for the address to dispatch the certificate to. In another 2 weeks I received it.

Overall a wonderful experience!!  Best of luck if you are writing one!

Important SAP Tables


Sales & Distribution Tables

VBAK - Sales document header data
VBAP - Sales document item data
VBLK - Sales document: delivery note header
VBBE - Sales requirements
VBFA - Sales document flow
VBUK - Sales document header status
VBUP - Sales document item status
VBEH - Schedule line history
VBEP - Sales document schedule line data
VBPA - Sales document partner
VBRK - Billing header data
VBRP - Billing item data
LIKP - Delivery header data
LIPS - Delivery item data
 
Material Management
MARA : General Material Data (Transparent Table)
Development Class : MG
Delivery Class : Application Table
(Master & Transaction Data)
Text Table : MAKT
 
Important Fields
MANDT Client
MATNR Material Number
ERSDA Creation Date
ERNAM Name of Person Who Created the Object.
MTART Material Type.
MATKL Material Group.
MEINS Base Unit of Measure.
ZEINR Document Number.
ZEIAR Document Type.
WRKST Basic Material.
NTGEW Net Weight.
VOLUM Volume.
VOLEH Volume Unit.
SPART Division.
KZKFG Configurable Material.
VHART Shipping Material Type.
MAGRV Material Group Shipping Material.
DATAB Valid from Date.
ATTYP  Material Category.
PMATA  Pricing reference material.
 
Please see this link for more details
http://www.erpgenie.com/sap/abap/tables_sd.htm

LO DataSources

The Logistics Cockpit (LC) is a technique to extract logistics transaction data from R/3.

All the DataSources belonging to logistics can be found in the LO Cockpit (Transaction LBWE) grouped by their respective application areas.

The DataSources for logistics are delivered by SAP as part of its standard business content in the SAP ECC 6.0 system and have the following naming convention. A logistics transaction DataSource is named 2LIS_<Application>_<Event><Suffix>, where:
  • Every LO DataSource starts with 2LIS.
  • Application is a two-digit number that specifies the application relating to a set of events in a process, e.g. application 11 refers to SD sales.
  • Event specifies the transaction that provides the data for the application, and is optional in the naming convention, e.g. event VA refers to creating, changing or deleting sales orders (Verkauf Auftrag stands for sales order in German).
  • Suffix specifies the detail of the information that is extracted, e.g. ITM refers to item data, HDR to header data, and SCL to schedule lines.

Upon activation of the business content DataSources, all components such as the extract structure, extractor program, etc. also get activated in the system.

The extract structure can be customized to meet specific reporting requirements at a later point in time, and the necessary user exits can also be used to achieve this.

A generated extract structure has the naming convention MC<Application><Event>0<Suffix>, where the suffix is optional. Thus, e.g., 2LIS_11_VAITM (sales order item) has the extract structure MC11VA0ITM.
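As a small illustration of this naming convention, here is a Python sketch that derives the names from the pattern described above (it also derives the setup-table name, which is explained in the Setup Table section later in this post):

```python
def lo_names(application: str, event: str, suffix: str = "") -> dict:
    """Derive LO object names from the naming convention described above."""
    return {
        "datasource":        f"2LIS_{application}_{event}{suffix}",
        "extract_structure": f"MC{application}{event}0{suffix}",
        # Setup table = extract structure + 'SETUP'
        # (see the Setup Table section below).
        "setup_table":       f"MC{application}{event}0{suffix}SETUP",
    }

print(lo_names("11", "VA", "ITM"))
# {'datasource': '2LIS_11_VAITM',
#  'extract_structure': 'MC11VA0ITM',
#  'setup_table': 'MC11VA0ITMSETUP'}
```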


Delta Initialization:
  • LO DataSources use the concept of setup tables to carry out the initial data extraction process.
  • The restructuring/setup tables prevent the BI extractors from directly accessing the frequently updated, large logistics application tables; they are used only for the initialization of data to BI.
  • To load data into the BI system for the first time, the setup tables have to be filled.
Delta Extraction:
  • Once the initialization of the logistics transaction data DataSource is successfully carried out, all subsequent new and changed records are extracted to the BI system using the delta mechanism supported by the DataSource.
  • The LO DataSources support the ABR delta mechanism, which is compatible with both DSOs and InfoCubes. The ABR delta creates a delta with after, before and reverse images that are updated directly to the delta queue, which gets automatically generated after a successful delta initialization.
  • The after image provides the status after the change, the before image gives the status before the change with a minus sign, and the reverse image sends the record with a minus sign for deleted records (see the worked sketch after this list).
  • The type of delta provided by the LO DataSources is a push delta, i.e. the delta data records from the respective application are pushed to the delta queue before they are extracted to BI as part of the delta update. Whether a delta is generated for a document change is determined by the LO application. This is a very important aspect of the logistics DataSources, as the very program that updates the application tables for a transaction also triggers/pushes the data for the information systems, by means of an update type, which can be a V1 or a V2 update.
  • The delta queue for an LO DataSource is automatically generated after successful initialization and can be viewed in transaction RSA7, or in transaction SMQ1 under the name MCEX<Application>.
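As a worked sketch of the ABR images described above (the document number and amounts are invented): a sales order item is created with amount 100 and later changed to 120. Summing all images gives 120, which is why this delta type works for additive InfoCubes as well as for DSOs.

```python
# Invented example of ABR delta images for one sales order item.
# record mode: 'N' = new image, 'X' = before image, '' = after image
# (a deletion would additionally send a reverse image 'R').

delta_queue = [
    # Creation of the document: new image with amount 100.
    {"doc": "4711", "recordmode": "N", "amount": 100},
    # Change from 100 to 120: the before image cancels the old value,
    # the after image posts the new one.
    {"doc": "4711", "recordmode": "X", "amount": -100},
    {"doc": "4711", "recordmode": "",  "amount": 120},
]

# An additive target (InfoCube) simply sums all images:
print(sum(r["amount"] for r in delta_queue))  # -> 120
```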
Update Method
The following three update methods are available
  1. Synchronous update (V1 update)
  2. Asynchronous update (V2 update)
  3. Collective update (V3 update)
Synchronous update (V1 update)
  • The statistics update is carried out at the same time as the document update in the application tables, i.e. whenever we create a transaction in R/3, the entries get into the R/3 tables as part of the V1 update.
Asynchronous update (V2 update)
  • The document update and the statistics update take place in different tasks. The V2 update starts a few seconds after the V1 update, and in this update the values get into the statistical tables, from which we do the extraction into BW.
V1 and V2 updates do not require any scheduling activity.

Collective update (V3 update)
  • The V3 update uses delta queue technology and is similar to the V2 update. The main difference is that V2 updates are always triggered by applications, while the V3 update may be scheduled independently.
Update modes
  1. Direct Delta
  2. Queued Delta
  3. Unserialized V3 Update

Direct Delta
  • With this update mode, extraction data is transferred directly to the BW delta queues with each document posting.
  • Each document posted with delta extraction is converted to exactly one LUW in the related BW delta queues.
  • In this update mode there is no need to schedule a job at regular intervals (through LBWE "Job Control") to transfer the data to the BW delta queues; thus no additional monitoring of update data or of an extraction queue is required.
  • This update method is recommended only for customers with a low volume of documents (at most 10,000 document changes - creating, changing or deleting - between two delta extractions) for the relevant application.
Queued Delta
  • With the queued delta update mode, the extraction data is written to an extraction queue and then transferred to the BW delta queues by an extraction collective run.
  • If we use this method, it is necessary to schedule a job to regularly transfer the data to the BW delta queues, i.e. the extraction collective run.
  • SAP recommends scheduling this job hourly during normal operation after a successful delta initialization, but there is no fixed rule: it depends on the peculiarities of each specific situation (business volume, reporting needs, and so on).
Unserialized V3 Update
  • With the unserialized V3 update, the extraction data is written to an update table and then transferred to the BW delta queues by the V3 collective run.
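The difference between direct delta and queued delta can be sketched as follows (a toy Python simulation, not SAP code): in direct delta each posting lands in the BW delta queue immediately, while in queued delta postings collect in an extraction queue until a collective run moves them.

```python
# Toy simulation of the two most common LO update modes.
bw_delta_queue = []      # roughly what RSA7 would show
extraction_queue = []    # roughly what the extraction queue holds (queued delta)

def post_document_direct(doc):
    # Direct delta: every posting goes straight to the BW delta queue.
    bw_delta_queue.append(doc)

def post_document_queued(doc):
    # Queued delta: postings are parked in the extraction queue first.
    extraction_queue.append(doc)

def collective_run():
    # The scheduled job (LBWE "Job Control") drains the extraction
    # queue into the BW delta queue.
    bw_delta_queue.extend(extraction_queue)
    extraction_queue.clear()

post_document_queued({"doc": "4712"})
post_document_queued({"doc": "4713"})
print(len(bw_delta_queue))   # 0 - nothing visible for delta extraction yet
collective_run()
print(len(bw_delta_queue))   # 2 - available after the scheduled collective run
```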
Setup Table
  • A setup table is a cluster table that is used to hold data extracted from the R/3 tables of the same application.
  • The setup tables store your data before it is loaded to the target system. Once you have filled the setup tables, you need not hit the application tables again and again, which improves system performance.
  • The LO extractor reads data from the setup tables during initialization and full uploads.
  • As setup tables are required only for full and init loads, we can delete their data after loading in order to avoid duplicate data.
  • We fill the setup tables in LO by using OLI*BW, or by going to SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Filling in the Setup Tables --> Application-Specific Setup of Statistical Data.
  • We can delete the setup tables using transaction LBWG. You can also delete setup tables application-wise by going to SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Delete the Contents of the Setup Tables.
  • The technical name of a setup table is <ExtractStructure>SETUP. For example, DataSource 2LIS_11_VAHDR has the extract structure MC11VA0HDR, so the setup table name is MC11VA0HDRSETUP.
LUW
  • LUW stands for Logical Unit of Work. When we create a new document it forms a new image 'N'; whenever there is a change to an existing document, it forms a before image 'X' and an after image ' ', and these after and before images together constitute one LUW.
Delta Queue (RSA7)
  • The delta queue stores records that have been generated since the last delta upload and have not yet been sent to BW.
  • Depending on the update mode selected, generated records either go directly to this delta queue or pass through the extraction queue.
  • The delta queue (RSA7) maintains two images: the delta image and the repeat delta. When we run a delta load in the BW system it sends the delta image; if a delta load fails and we request a repeat delta, it sends the repeat-delta records.

Statistical Setup
  • Statistical setup is a program that is specific to an application component. Whenever we run this program, it extracts all the data from the database tables and puts it into the setup tables.