Performance Tuning and Administration
Contents:
◼ Table and index structures
◼ Monitoring MaxDB and optimizing performance
◼ Errors and how to resolve them
[Slide figure: an index and a table, each organized as a logical B* tree with an index level and a leaf level (data records in pages). The index references the base table logically using the primary key; the numbers 1 to 3 mark the steps of an access path.]
◼ In the MaxDB database management system, data is stored in relational tables and organized using
B* tree structures. The B* tree enables users to access data quickly.
◼ This graphic shows how this data storage is structured.
• Each table consists of index structures, which are generated from the primary key and located in
B* trees, and "leaf pages", which contain the data stored. Index structures are stored in the tables
and not in separate structures.
◼ Steps 1 to 3 above provide an overview of how to access data using a complex SQL statement.
• The base table is accessed using the primary key; external indexes are not used.
• The index is accessed using the secondary key. The cross-references to the base table contained in the entries found there then result in additional accesses to that table.
• The logical reference between the secondary index and the base table uses the primary key.
◼ This method saves space when tables and indexes are stored, while also facilitating fast access using
the primary keys.
Primary key
◼ The primary key is created in the data tree.
◼ There is no separate primary key tree.
◼ The primary key serves as a separator in the B* tree.
◼ Records in tables are sorted by key.
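◼ A minimal sketch of this behavior (the table and its columns are purely illustrative): because records are sorted by the primary key in the data tree, a range predicate on the key can be resolved there directly, without a separate index:
• CREATE TABLE population (city CHAR(20) PRIMARY KEY, country CHAR(20), inhabitants FIXED(10))
• SELECT * FROM population WHERE city BETWEEN 'Alger' AND 'Athens'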
[Slide figure: leaf level of the example table, sorted by city: Alger (Algeria, 4.4 mil.), Ankara (Turkey, 3.2 mil.), Athens (Greece, 3 mil.), Mexico City (Mexico, 18 mil.), New York (USA, 16 mil.).]
◼ B* trees are created and extended in the opposite direction to that in which they are later searched.
◼ Construction starts at the data pages, where the data record with the smallest value in the primary key field is also the first entry on the data page.
◼ These values are transferred to the superordinate index page at index level. Only as many characters are entered as are needed to clearly distinguish this value from the largest data record on the previous page at leaf level. Entries at index level and root level can therefore be of different lengths.
◼ Pages at the same level are interlinked. The levels themselves, by contrast, are connected only by the initial sections of the index chains or leaf page chains.
◼ As soon as the current root page is filled with index entries and an additional page is required to
record index information, the database converts the current root page to a normal index page and
generates a new root page at the index level above.
◼ A total of seven levels can be created. This corresponds to six index levels and one root level with
one root page.
[Slide figure: searching for "Athens". At the index level ("Is 'Athen' < 'Me'?"), the search value is compared with the separators At, Me, N, Par, ...; the last link before the first larger entry is followed down to the leaf level shown above.]
◼ The concept of the B* tree can be explained using the above SQL statement "SELECT * FROM
POPULATION WHERE...":
• First, MaxDB determines the root page from system table ROOTS. This is the logical page at the
highest root level and is used for every table access.
• On this page, the system determines whether the value "Athens" is smaller than the entries on the root page. As soon as a root page entry is found that is larger than the value in the WHERE condition, the pointer address of the last smaller entry is evaluated. This points to a page at the level below.
• This process is repeated until the system reaches the lowest level, the leaf level. This contains the
complete content of the table row for which you are searching. However, this page is not read
completely. Instead, a pointer-controlled search is carried out for index matches and the system
branches directly to the position in the data store.
◼ For table fields of type LONG (strings of variable length), a pointer is provided at leaf level that
leads to another B* tree in which dynamically growing field information is stored.
[Slide figure: secondary index on the population column ("Is '18 Mil.' < '2'?"). Because the values are compared character by character, the index level is sorted as 16, 18, 2, 3, ..., 4, and again the last link is followed. At the leaf level (16 Mil.: New York; 18 Mil.: Mexico City; 3 Mil.: Athens; 3.2 Mil.: Ankara; 4.4 Mil.: Alger), the entries point to the base table via the primary key.]
◼ In principle, secondary keys have the same B* tree structure as the primary key. At leaf level, they
use the primary key to point to entries in matching table rows. This means that secondary keys
require less reorganization when changes are made to the corresponding base table.
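◼ As an illustrative sketch (the names are invented for this example), such a secondary index is created with a normal CREATE INDEX statement; its leaf entries then carry the primary key of the matching rows rather than physical row addresses:
• CREATE INDEX pop_country ON population (country)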
◼ If the tree structure becomes unbalanced because entries have been deleted, changed, or inserted, the
database recognizes this and reorganizes the pages so that they are once again distributed evenly.
This also ensures that space is used efficiently within the pages.
◼ The pages are reorganized automatically, ensuring that space is used optimally for MaxDB database
instances and B* trees are structured efficiently at all times. Administrators do not need to intervene
or reorganize pages.
[Slide figure: layout of a data page (key range Aneby .. Athens). The records (Apach, Apensen, Arbon, Ardwick, Arnhem, ..., Athens) are stored in the front part of the page; a position list at the end of the page (111, 217, 143, 169, 206, 191, 81), sorted in key order, points to the individual records.]
◼ Data records are stored unsorted in the initial area of the target page.
◼ The end area of the data page contains a position list pointing to the individual records of the data
page. This address list is arranged in such a way that data entries can be read in order when accessed
sequentially via the position list.
◼ The database system searches the remaining entries and returns the table row requested.
◼ The areas containing the position list and the data record entries expand towards each other.
◼ One particular objective of optimizing SQL statements is to minimize the number of page accesses.
◼ Other resources (CPU, memory usage) are used as multipliers in the optimization process.
Rule-based optimizers:
◼ The access strategy is fixed by specific rules at the time of parsing.
◼ Independent of the values in the WHERE condition
◼ The rule determines which access type is selected.
Cost-based optimizers:
◼ Determine the access strategy based on:
• The value attributes of a column
• Available indexes
• Number of accesses, and so on
◼ The most cost-effective strategy is selected.
◼ There are two types of optimizer for relational database systems: rule-based and cost-based.
◼ Rule-based optimizers function in accordance with specific rules. For example: If an index exists, it
is used for access irrespective of the values transferred in the WHERE condition. This means that
with rule-based optimizers, the strategy for processing the statement is selected at the time of
parsing.
◼ Cost-based optimizers use statistical information about the size of the table and the value attributes
within the table columns to determine the most cost-effective access strategy.
◼ A cost-benefit plan is created for the different access options. The best strategy for executing the
statement is selected, subject to the values specified in the WHERE condition. The final search
strategy can therefore only be determined at the time of execution.
◼ SQL hints should only be seen as a temporary solution until the optimizer is corrected with one of
the next database patches. For more information, see SAP Note 832544 (FAQ: MaxDB Hints).
[Slide figure: the cost-based optimizer evaluates the possible access strategies for the SQL statement to be executed (table scan: cost value 5000; primary key: cost value 3000; index 1: cost value 10; index 2: cost value 4000) and executes the optimized SQL statement using the cheapest strategy, index 1.]
◼ Optimizer statistics are constantly changing due to the addition of new data, changes to existing
data, and deletions. For this reason, they must be updated at regular intervals or after extensive
changes are made. The statistic information update (UPDATE STATISTICS) is initiated using a
work process.
◼ The cost-based optimizer uses the available information about size, indexes, and value distribution
within the indexed columns for each table to calculate how many accesses are required to determine
the result. If multiple indexes exist for a table, the optimizer selects the index most suitable for
accessing the data.
[Slide figure: the statement is investigated (parsed) using DDL information.]
◼ An SQL statement is first processed by the parser, which performs a syntactic and semantic
analysis. Tables and their column specifications are checked during the semantic analysis.
◼ The optimizer determines which primary and secondary keys exist for the table and checks whether
a corresponding key can be used to search for values.
◼ In the case of secondary keys, the number of different values plays an important role. It makes no
sense to search an index if there is only one secondary key value.
◼ The number of pages to be read in the secondary key is calculated by creating a start key and a stop
key. Whether the index is included in the search depends on the number of pages in the table. The
total number of pages in the table is calculated from the statistics.
◼ Finally, a strategy is determined in accordance with which the SQL statement is executed.
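◼ As a sketch (table and predicate are illustrative): for a statement such as
• SELECT * FROM population WHERE city LIKE 'Ar%'
the optimizer derives a start key and a stop key from the prefix 'Ar' and uses the statistics to estimate how many pages lie between them.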
Two options (DBACOCKPIT):
◼ Completely recreate optimizer statistics (UpdAllStats)
◼ Two-level concept:
• Check the tables for significant changes in size (PrepUpdStat)
• Renew the statistics for tables that were changed significantly (UpdStats)
◼ SAP NetWeaver enables you to update optimizer statistics in the planning calendar of the
Computing Center Management System (CCMS, transaction DBACOCKPIT). A two-level concept
is offered that uses a time-optimized procedure:
• In the first step, tables are identified that require new statistics due to significant changes
(PrepUpdStats).
• In the second step, the statistics for these tables are recreated (UpdStats).
◼ It is also possible to completely recreate the statistics (UpdAllStats).
• This procedure is time-consuming and should therefore be performed when workload is low.
◼ Update Statistics for a table:
• dbmcli -U c -USQL w sql_execute "update statistics <user>.<tablename>"
• dbmcli -U c -USQL w sql_execute "update statistics * ESTIMATE SAMPLE 1000 ROWS"
• Statistics can also be updated for specific columns:
dbmcli -U c -USQL w sql_execute "update statistics column (<column name>) for SAP<SID>.<tablename> ESTIMATE SAMPLE 10 PERCENT"
◼ For more information, see SAP Note 927882 (FAQ: MaxDB Update Statistics).
◼ The EXPLAIN statement cannot be used with UPDATE, DELETE, or INSERT commands.
◼ EXPLAIN is used for QUERY statements that access base tables.
◼ EXPLAIN VIEW displays the names of tables and indexes that form the basis of a (multiple)
(JOIN) view query.
◼ EXPLAIN does not execute the specified SELECT statement.
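◼ A minimal example (the POPULATION table is illustrative):
• EXPLAIN SELECT * FROM population WHERE country = 'Greece'
The statement is analyzed but not executed; the output shows the selected strategy and the estimated page accesses (COSTVALUE) for each table involved.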
◼ You can access EXPLAIN VIEW in SAP NetWeaver using transaction ST05 and in the command
monitor using transaction DBACOCKPIT or DB50.
◼ Advanced EXPLAIN statements are also available (EXPLAIN JOIN and EXPLAIN SEQUENCE).
The latter can be interpreted by Development only. For this reason, Development requires the
MaxDB connection (SAP Note 202344) so that Database Studio can be used remotely when
analyzing performance problems using SQL statements. We therefore recommend that you prepare
this connection so that it can be made available at short notice if a problem occurs.
◼ For more information, see SAP Note 819324 (FAQ: MaxDB SQL optimization).
Transaction ST05
◼ The EXPLAIN statement displays an information block for every table in the SELECT statement.
◼ The table sequence reflects the processing of the access strategy to the corresponding tables.
◼ RESULT IS NOT COPIED indicates whether a temporary result set was created (in this example, no
result set was created). Temporary results tables are created for JOINs (across multiple tables)
depending on how they are processed (Sorted Merge or Nested Loop).
◼ The COSTVALUE IS value in the PAGECOUNT column shows how many page accesses were
calculated. This corresponds to logical I/O accesses.
SAP NetWeaver transactions ST05, DB50, and DBACOCKPIT
◼ SAP NetWeaver also allows you to use database hints with MaxDB.
◼ Database hints do not usually constitute long-term solutions, despite often being used as such with other database systems. Hints can quickly become counterproductive because, unlike the cost-based optimizer, they cannot adapt to changes in the dataset or in the structures of the underlying tables. In most cases, access is then negatively affected.
◼ MaxDB provides hints as a temporary solution only to rectify potential optimizer weak points until
they can be eliminated with a subsequent patch.
◼ However, this function enhances performance analysis by allowing you to focus the optimizer on
specific areas when working with EXPLAIN statements.
◼ This allows database administrators to access data in the tables in different ways (key field access,
index access). They can also easily test the indexes that should be used by the optimizer. Usually,
the cost value of the indexes not selected proves to be significantly higher.
◼ For more information, see SAP Note 990602 (FAQ: CCMS for MaxDB/SAP liveCache
technology).
◼ As of SAP NetWeaver 7.0 (2004s), the DBA Cockpit is also available in the CCMS (transaction
DBACOCKPIT). It provides user-friendly access to all database information required for
administration, including detailed analyses.
◼ The DBA Cockpit contains the following important areas:
• Space displays up-to-date performance values and database configurations. It allows you to
access lists of information about MaxDB and table sizes. Before these lists can be displayed, the
relevant collective reports must be scheduled so that all possible information is gathered.
• Performance contains the Database Analyzer, Resource Monitor, and Command Monitor
MaxDB tools, which allow you to analyze and optimize database performance.
• Tools provides additional tools that can be used in the MaxDB administration environment, for
example, graphical tools and the Database Console, which calls up the x_cons MaxDB tool and
presents the data output.
◼ Transaction DBACOCKPIT is subject to ongoing development and updated by means of the SAP
NetWeaver Basis Support Packages. It is also available in other database systems and can be used to
manage MaxDB, which is often included in the installation as a "black box". Only the MaxDB client
package and the DBADASLIB have to be installed locally on the application server used.
◼ Both the new and old versions of Alert Monitor are integrated into the DBA Cockpit.
◼ The new version is shown here.
◼ When you select Alert Monitor (Old Version) in the DBA Cockpit, transaction RZ20 launches.
◼ The database alert monitor enables you to identify any potential problems and database bottlenecks
at a glance. The CCMS monitoring concept allows you to create your own Alert Monitor views.
SAP provides a database view that you can use as a template.
◼ Before you can do this, you have to set all configurable threshold values using transaction RZ20.
◼ The default view contains the following areas: Space Management, Performance, Backup/Recovery,
and Health. For database clients, any problems relating to incorrect ABAP SQL commands are also
displayed.
◼ For information about up-to-date Alert Monitor developments, see SAP Note 545030.
◼ The Database Assistant (transaction DB50) is also available in the CCMS. As with the DBA
Cockpit, this provides user-friendly access to all database information required for administration,
including detailed analyses.
◼ DB50 contains the following important areas:
• Current Status displays up-to-date performance values and database configurations.
• Problem Analysis contains the Database Analyzer, Resource Monitor, and Command Monitor
MaxDB tools, which allow you to analyze and optimize database performance.
• Statistics allows you to access lists containing information about MaxDB and table sizes. Before
these lists can be displayed, the relevant collective reports must be scheduled so that all possible
information is gathered.
• Tools provides additional tools that can be used in the MaxDB administration environment, for
example, graphical tools and the Database Console, which calls up the x_cons MaxDB tool and
presents the data output.
◼ Transaction DB50 is subject to ongoing development and updated via the SAP NetWeaver Basis
Support Packages. It is also available in other database systems and can be used to manage MaxDB,
which is often included in the installation as a black box. Only the MaxDB client package and the
DBADASLIB have to be installed locally on the application server used.
◼ When you choose Space → Caches (as shown above), important access rates (hit rates) are displayed for the main memory areas of the database.
◼ The I/O buffer cache is broken down into the cache areas it contains.
◼ The most important area in terms of performance is the data cache. Its hit rate should be well above 98%; the higher, the better.
◼ Note that these hit rates have been averaged since the database was started and, while seeming
optimal, may hide the fact that hit rates drop below 95% when load is high.
◼ Use the Database Analyzer to break down these hit rates into different times of the day. The "Expert
Analysis" area of Database Analyzer provides long lists of performance-relevant data. The
UMEW60 workshop explains which values are important here.
◼ In the Task Manager area, transaction DB50 provides several functions with which you can obtain
more detailed information about the status of the tasks and threads that are active in the database.
◼ You can also activate a much more detailed time measurement here. When you do so, the database
logs a large number of execution times for various actions. This data is then stored temporarily in
internal tables and partially displayed in the Task Manager. Finally, it is stored in the log files of the
Database Analyzer.
◼ The read and write times for the selected task (here, for example, the log writer) are displayed in the
details on the I/O Operations tab page. If the task selected uses its own I/O, the numerical values are
displayed in the fields on the right. If it uses dev threads, the values are displayed in the fields on the
left. In this case, the log writer performs the write actions itself and shows an average write time of
8.2 ms since the detailed time measurement was activated (11 write actions from this point).
◼ Note that the detailed time measurement uses up additional database resources. The extent to which
it affects performance depends on the system load. You should therefore activate it only for the
duration of the analysis, where it is often essential for evaluating the performance of hard disk areas
(or connections to them). It is often advisable to leave the time measurement function activated for a
longer period so that logs for all times of the day are created at once.
◼ You can also use the command
x_cons <SID> show dev_io
to obtain the read and write times for the disks and storage systems. As of MaxDB 7.7, these I/O times are also calculated when the detailed time measurement is inactive, since the costs for this are minimal.
◼ You can also activate the detailed time measurement from the console using
dbmcli -U c db_execute detailed time measurement ON
◼ Database Analyzer is an important tool for collecting data relating to the status of the database, and
to some extent, the overall system. With this data, you can often determine what happened in the
database at the time of an event, after it has occurred.
◼ The slide above shows the main list of Database Analyzer, which contains detailed information
about database operations on the current day.
◼ Several other values are also documented. This information is usually determined every 15 minutes (900 s) and is then immediately available in more detail in the "Expert Analysis" area, where it is sorted by date. Database Analyzer is activated automatically and collects the values from system tables in which the data has already been stored by the database kernel. For this reason, Database Analyzer is not critical with respect to performance and should always be activated.
◼ Since performance analysis is often largely based on this data, which is logged in chronological
order, we strongly recommend that you ensure that Database Analyzer is always activated. With
more recent Basis Support Packages, it is normally started automatically when the SAP NetWeaver
application server is started. To run Database Analyzer on Windows, for example, you require
Microsoft MDAC Version 2.7 or higher. This is usually supplied with Windows 2003 and later
versions of Windows.
◼ For further descriptions of Database Analyzer messages, see http://help.sap.com (search for
"analyzer" and specific text).
◼ Alternatively, you can start Database Analyzer via the console. To do so, enter
dbanalyzer -d <SID> -u sap<sid>,sap -c 1 -t 900 -o <directory where the analysis data is to be written>
or simply "dbanalyzer" and use the interactive mode.
◼ Once it has been in use for a couple of weeks, Database Analyzer will have created a large number
of directories under RunDirectoryPath/analyzer. Problems may then occur due to the length of the
list. To keep these lists manageable, you can use the automatic administration function shown here.
Generally, about 100-200 KB of data is logged each day.
◼ The automatic functions are executed every night at 1:00 AM local time and, in this example, delete
the data from the directory after a retention period of 93 days. You can also store this data directly in
the database and delete it after a predefined period. Furthermore, aggregate functions are provided to
reduce the quantity of data. These should be activated when you want to display past aggregated
data.
◼ If you need to delete data at short notice, use the function on the Manual Admin tab page.
◼ If the lists of daily directories become too long, you may find that no directories are displayed in the
"Expert Analysis" area of Database Analyzer even though they definitely exist in the directory.
◼ You can also manage the Database Analyzer using the dbmcli. The functions below are also used by DB50. If problems occur when the Database Analyzer is started automatically, choose Tools → Database Manager (CLI) and execute these commands as a test:
• dbmcli -U c dban_start
• dbmcli -U c dban_state
• dbmcli -U c dban_stop
◼ With the resource monitor in DB50, you can collect SQL statements grouped into statement classes. The values in the WHERE condition can vary between executions, which is why only the raw statements are collected, without these varying values. The corresponding execution times and some other performance-relevant values are added together (cumulated) or averaged. The list then immediately shows which SQL statements are producing the load, and optimization steps can be taken.
◼ You configure the recommended start settings, as shown above. You then have to start or stop the
monitor by choosing the buttons indicated.
◼ By double-clicking a row in the list that is subsequently displayed, you can view the executed
statements. However, you cannot execute an EXPLAIN statement at this point since the parameter
values that have to be entered as variables in the WHERE condition are missing.
◼ To start the resource monitor without DB50, use the following commands:
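• The following is a sketch only; it assumes the DIAGNOSE ANALYZE interface of your MaxDB release (check the documentation for your version):
dbmcli -U c -USQL w sql_execute "diagnose analyze on"
dbmcli -U c -USQL w sql_execute "diagnose analyze off"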
◼ Unlike the resource monitor, the command monitor allows you to capture specific SQL statements
that were executed in the database.
◼ You also have to configure and start this monitor manually by choosing the pencil icon. In the
dialog box displayed, you can define the recommended conditions as shown above. These values are
normally sufficient to log 12 to 24 hours.
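◼ The command monitor can also be switched on without DB50. A sketch only, assuming the DIAGNOSE MONITOR interface of your MaxDB release (here, statements running longer than 1000 ms are logged):
• dbmcli -U c -USQL w sql_execute "diagnose monitor time 1000"
• dbmcli -U c -USQL w sql_execute "diagnose monitor off"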
◼ You can then sort the output list of SQL statements as usual by different column values (the list is
sorted by runtime in descending order by default). The length of the logged list is restricted and
follows the FiFo (first in, first out) rule. This means that earlier statements are removed from the list.
However, they still appear in the monitor list until you next refresh it.
◼ By double-clicking a list entry, you can display the statement with the relevant parameters and
values of the WHERE condition. Since these values are available (unlike in the resource monitor),
you can execute an Explain statement.
◼ The output of the EXPLAIN statement returns a table with the analysis data. Every table (for
example, in an analyzed JOIN) receives a block in which the components used – such as strategies,
indexes, or key fields – are displayed.
◼ RESULT IS NOT COPIED indicates whether a temporary result set was created and then
transferred to the application in a block.
◼ Finally, the COSTVALUE IS output in the PAGECOUNT column shows the costs calculated for
the execution of this statement. This calculation is based on logical I/O accesses to the table. All
other numerical values in the rows above are taken from the calculated statistics of the tables or
indexes shown on the left.
◼ Transaction DB59 provides a detailed overview of all existing MaxDB and liveCache connections.
It is similar to transaction SM59 for network connections in this respect.
◼ By double-clicking one of the entries in the list, you start the corresponding DB50 (MaxDB) or
LC10 (liveCache) transaction for the database connection type selected.
◼ In this transaction, you can find and modify all important information about the connection, such as
the server name, the database name, and the user logon data. If you want to change the data,
remember to restart all affected application servers of the SAP system – the system will also prompt
you to do so. Otherwise, these application servers would continue to use the old logon data and
connection errors would occur.
◼ You can then test the connections or connection changes by choosing "Connection Test". First, define the application server from which you want to test the connection, and then choose "Connection Test" again to carry out the actual test. You can then view the information logged for this test by choosing "Log". In the log, you can see that four different tests were performed and thus all database accesses were simulated.
◼ The following pages describe how to create a new database connection in DB59. Since DB50 and
DB59 are part of the Basis Support Package, you can also configure them on non-MaxDB SAP
systems. All you need in this case is a MaxDB client package that is installed on the external
database server.
◼ DB59 allows you to address databases that are not used with the local SAP NetWeaver system for
data storage. This feature is particularly useful when analyzing older versions of SAP applications
(such as 3.1I to 4.6B). If you have sufficient analysis tools, you can use it in DB50 to optimize an
older version. For example, you could use SAP Solution Manager as the central information system
for your SAP landscape.
◼ To establish a new database connection, choose Integrate Database. First, you have to enter the
name of the new database connection and the underlying database type. In the next step, you define
all important connection information such as the server name, database name, and user logon data.
◼ On the second tab page, you can manage and start jobs for transaction RZ20 (CCMS Monitoring).
You can also have database log files deleted automatically on a regular basis.
◼ Once you have saved your entries, a dialog box appears reminding you to restart all relevant
application servers in the SAP system if you make any changes later. Otherwise, these application
servers would continue to use the old logon data and connection errors would occur.
◼ In addition to disk errors, two other situations can restrict the availability of the database.
• LOG FULL: The log area is 100 percent full
• DATABASE FULL: The database instance is 100 percent full
◼ The database can still be read. Operations can continue as normal as soon as the relevant bottleneck
is resolved. To avoid these situations, you should monitor the capacity of the database and log area
regularly and increase the storage disk capacity or start a manual log backup in good time.
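◼ For example, a log backup can be started at short notice from the Database Manager CLI (a sketch; it assumes that a log backup medium named LogMedium has already been defined with medium_put):
• dbmcli -U c backup_start LogMedium LOG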
◼ To determine the best access strategy, the MaxDB optimizer requires up-to-date statistical
information that must be created in accordance with SAP strategies using the planning calendar in
the DBA Cockpit.
◼ When you check the volume (Database Studio: context menu of the instance → Check Database Structure…), the logical consistency of the database structures in the ONLINE and ADMIN operational states is also checked. Memory pages that are not occupied are added to free memory administration, and any inconsistent internal tree structures are repaired. We recommend that you check the volume once a month, for example, before performing a complete data backup.
◼ Check the physical consistency of backups regularly, for example, in Database Studio by calling up the context menu and choosing Check Backup…
[Slide figure: MaxDB diagnosis files by component: application (appldiag, SQL traces), X Server (xserver_<hostname>.prt), runtime environment (rtedump), global memory, kernel (knltrace, knlMsg).]
◼ appldiag: If errors occur between the runtime environment and the kernel, they are entered in the
appldiag file. A corresponding file is created for every operating system user.
◼ SQL traces: Particularly if errors occur when SQL data is accessed, you can activate SQL traces on
an application basis (ST05) or activate a precompiler trace (dbslada.pct).
◼ xserver_<hostname>.prt: If errors occur during communication via the X server, they are entered in
the xserver_<hostname>.prt file.
◼ rtedump: If a crash occurs, the runtime environment writes its status to the rtedump file. This is an
ASCII output of the command x_cons <SERVERDB> show all.
◼ knlMsg: The kernel writes information and messages to the knlMsg file. It has a fixed size and is
cyclically overwritten. After a crash, the backtrace is also contained in this file. It is created in XML
format.
◼ knltrace: The kernel writes the knltrace file if Vtrace is activated and if the database crashes. It has a
predefined fixed length.
◼ knldump: During an emergency shutdown, the global memory is written to the knldump file. The
corresponding file system should be large enough. In recent MaxDB releases, this extremely large
file is not created until the function for writing the file has been enabled.
◼ core: If a UNIX process crashes, the operating system writes a core file. This contains a memory extract of the process. By default, MaxDB prevents the core from being generated, so this must be activated first.
◼ drwtsn32.log: If the database crashes on Windows, the stack backtrace is written to the
DrWtsn32.log file, provided this is entered as a system debugger. For more information, see SAP
Note 49776.
◼ You can determine the <indepdatapath> in the Database Manager CLI with the following command:
dbmcli -d <SID> -u <control-user>,<password> dbm_getpath indepdatapath
◼ After a crash, the following files are copied to a backup directory automatically:
knlMsg, knltrace, knldump, rtedump, *.dmp, *.buf, *.stm
◼ If, during a restart, the system recognizes that the database crashed during the previous session, the
files to be backed up are saved in a directory and named as follows:
<DB NAME>_<DATE>_<TIME>, for example, DEV_20081114_12-09-45
◼ The backed up diagnosis files are deleted from the original directory.
◼ The backup directory is located in the directory to be configured (DiagnoseHistoryPath MaxDB
parameter).
◼ You can also use the DiagnoseHistoryCount MaxDB parameter to specify how many analysis
versions are stored. If the maximum number of histories has been reached, the oldest analysis is
deleted if the problem occurs again.
◼ If the analysis cannot be stored correctly, this does not affect the database restart.
Example:
06.01 12:53:46 18286 -11205 sqlexec: system error, not enough space
06.01 12:58:25 18286 -11205 sqlexec: system error, not enough space
06.01 18:46:23 19025 -11109 database 'S10' is not running
07.01 11:47:37 10959 -11987 sql33_con_msl: task limit
07.01 11:47:59 12031 -11987 sql33_request: connection broken, kernel cleared
08.02 13:11:07 18899 -11987 connection closed by communication partner
08.29 13:13:13 11199 -11987 comseg given away, assuming timeout
(The third column, for example 18286, is the ID of the causing process.)
◼ Diagnosis file knlMsg is the most important source of information if database problems occur.
• The messages issued during communication between the runtime environment of MaxDB and the
database kernel are logged in this file.
• Every time the database kernel is started, a new file is created and the existing file saved as
knlMsg.old.
• In more recent releases, the knlMsg file is also backed up after a database crash. It is saved to the
DIAGHISTORY directory (specified in the DiagnoseHistoryPath DB parameter) under
RunDirectoryPath to ensure that the data that caused the database crash is not lost after two
restart attempts. Usually, the two most recent crash files are backed up (this can be configured in
the DiagnoseHistoryCount DB parameter).
• Errors are also logged in the knlMsg.err file. This file is not overwritten and can be archived and
then deleted if it becomes very large. A new version of the file is then created automatically.
◼ On UNIX, you can use the error channels of the database kernel (which you can configure in the
MessageOutputDevice1 and MessageOutputDevice2 parameters) to direct messages for further
processing using pipes. In this way, you can forward database warnings via e-mail, for example.
◼ The protconv tool shipped with MaxDB allows you to convert XML files into ASCII using the
command line.
◼ The start of the Database Messages file always lists a complete set of database parameters in the
header area. This is followed by additional information about the operating system, limitations that
may exist, and so on.
◼ The knlMsg and knlMsg.old files are written cyclically. The size (parameter:
KernelMessageFileSize) is usually set to 800 KB, although this can be larger in more recent
releases.
◼ In this area of DBA Cockpit, the diagnosis files are processed accordingly and important messages
are highlighted.
sdbinfo output (extract):
[Processor type]
[Processors online]
4
[Processor info]
AMD64 Family 15 Model 65 Stepping 2, AuthenticAMD
◼ The sdbinfo command allows you to collect operating system information that also affects the
database software. The process of collecting this version information is often very tedious,
particularly on UNIX but also on Windows. However, you must provide this information when
requesting support.
[Slide figure: SQL trace points: transaction ST05 (available within the SAP system) at the database interface, and the precompiler trace in front of the MaxDB kernel.]
◼ In SAP NetWeaver, you activate the SQL trace with transaction ST05. The log is written by the
database interface. The variables, their values, and the runtimes are specified with the individual
statements. With EXPLAIN, transaction ST05 also allows you to display the optimizer strategy for
the statement.
◼ The precompiler of the database instance also writes an SQL trace (also called the precompiler
trace). The commands that arrive at this interface and the data that is transferred to the client are
shown here.
◼ You set the precompiler trace for SAP NetWeaver in a profile parameter. After changing the profile
parameters, you only need to restart the work process in Windows systems. With UNIX systems,
you need to restart the SAP system or the affected application server. The trace files are in the run
directory of the SAP instance. The file name comprises the process ID of the work process and the
extension .pct.
◼ Other database tools or SAP tools (R3trans, tp, saplicense, and so on) that run on the precompiler
and that are started via the console or DOS box read the SQLOPT environment variable. Unless
specified with the -F option, the trace file is written to the current directory. The name comprises the
corresponding C module (such as dbslada) and the extension .pct.
◼ You can also activate the trace with irtrace WITHOUT having to restart the system/application
server.
◼ The tool provides the following options for changing the trace behavior:
• Activate/deactivate the trace for ALL interface processes on the application server:
irtrace -p all -t <trace type>
SAPDB_<TaskID>.pct:
PRODUCT : SAP DB C-PreComp Runtime
DRIVER : G:/sapdb/programs/runtime/7403\pgm\libpcr
PRECOMPILER : 7.4.3 005
LIBRARY : 7.4.3 039
BUILD : 039-123-092-249
version :P_1, P_2
SQL STATEMENT : FROM MODULE : dbadautl AT LINE : 644
Statement Name : :0x000010
OUTPUT : LZU : W32/INTEL 7.4.3 Build 039-123-092-249
OUTPUT : PCR : C-PreComp 7.4.3 Build 039-123-092-249
START : DATE : 2005-11-23 TIME : 0020:30:55
END : DATE : 2005-11-23 TIME : 0020:30:55
OPTION : PARSEINFOCACHE=OFF
SESSION : 1;
DATABASE : DB_000 Xuser Data
USERKEY : DEFAULT
SQLMODE : SAPR3
SERVERDB : DEV
SERVERNODE: twdf0736
CONNECT "SAPDEV " IDENTIFIED BY :A SQLMODE SAPR3 ISOLATION LEVEL 0
TIMEOUT 0
SQL STATEMENT : FROM MODULE : dbadautl AT LINE : 92
Statement Name : :0x000001
START : DATE : 2005-11-23 TIME : 0020:30:55
END : DATE : 2005-11-23 TIME : 0020:30:55
DB_000: MASS STATEMENT :
SELECT KERNEL INTO :P_1 FROM DOMAIN.VERSIONS
SQL STATEMENT : FROM MODULE : dbadautl AT LINE : 713
Statement Name : :0x000011
PARSEID: OUTPUT: 0008C0A0 00000301 3C000000 01000000
PARSEID: INPUT : 0008C0A0 00000301 3C000000 01000000
OUTPUT : 1: PARAMETER : Kernel 7.5.0 Build 029-121-099-958
SQLERRD(INDEX_3) : 1
START : DATE : 2005-11-23 TIME : 0020:30:55
END : DATE : 2005-11-23 TIME : 0020:30:55
◼ The extract from the precompiler trace shows that the file begins with important version information
(this is important if the wrong libraries were loaded).
◼ All accesses are also logged (user name, database host, and database instance).
◼ The precompiler trace often helps if connection problems occur for a database client such as the
application server (error message -709).
[Slide figure: the database kernel consists of the SQL Manager (previously: AK) and the Data Access Manager (previously: KB, BD). KernelTrace ON collects trace data; KernelTrace Flush writes it to the binary file knltrace (knltrace.dat).]
◼ The database trace (kernel trace, Vtrace) enables you to analyze executed SQL statements on the
server.
◼ The database kernel consists of two logical areas, which were previously divided into further areas
(AK, KB, and BD). The commands still partially distinguish between these areas.
◼ When you activate the database trace, you specify the kernel areas from which important operating data is collected in buffers. Once the problem has been identified, this information is written to the knltrace file. Usually, you use the default specification.
◼ The database trace specifies the layers or modules of the kernel for which the logs are to be
extracted. You can do this with the Database Studio and xkernprot tools or transaction
DB50/DBACOCKPIT.
◼ Strategy and time data is displayed only if the OPTIMIZER or TIME options were activated for the
database trace.
◼ The SWITCH output contains data from the trace of a slow kernel. The slow kernel is a special
debug kernel. It is used only if specifically requested by Development or Support.
◼ The knltrace file is written automatically when the database is shut down as usual or if it crashes
unexpectedly.
◼ For more information, see SAP Note 837385 (FAQ: MaxDB database trace (VTRACE)).
1. Activate the database trace
2. Reproduce the problem
3. Flush the trace data from the memory to the binary file
4. Deactivate the database trace
5. Extract the data related to the problem from the binary file into a readable form
◼ You can activate, deactivate, and flush the database trace in Database Studio, the Database Manager
CLI, and the CCMS (transaction DB50).
◼ You can also flush the database trace using SQL Studio.
◼ Commands in the Database Manager CLI (executed in the ADMIN or ONLINE operational states):
• To activate:
dbmcli -U c -UUTL c util_execute diagnose vtrace default on
• Flush, ONLINE operational state:
dbmcli -U c -USQL w sql_execute vtrace
• Flush, ADMIN operational state:
dbmcli -U c -UUTL c util_execute vtrace
• To deactivate:
dbmcli -U c -UUTL c util_execute diagnose vtrace default off
• To evaluate:
xkernprot -d <SID> <traceout>
▪ This can also be executed in the OFFLINE operational state.
▪ <traceout> indicates the layers to be evaluated (for example, akbnx):
a = AK layer    m = message buffer, sql_packet
b = BD layer    n = net (distribution)
k = KB layer    t = time vtrace
s = strategy    x = switch output (slow kernel)
• (For the TIME and SWITCH options, see the previous slide).
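• A concrete call (values illustrative): to extract the AK, BD, and KB layers of database DEV into a readable file, enter
xkernprot -d DEV abk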
◼ You can define the scope and level of detail of the trace by choosing Database Trace Options…. For a general trace, select TraceDefault.
◼ Unless otherwise specified by Development or Support, the default database trace is sufficient.
◼ You can also choose to include information about DELETE, INSERT, UPDATE, SELECT, and
optimizer operations.
◼ The Advanced tab page offers two useful enhancements:
• Trace Session
You can activate the database trace for specific database sessions. To do this, however, you
require the name of the database session. You can find the session name using
x_cons <SID> show active
and
dbmcli -U c -USQL DEFAULT sql_execute "SELECT * FROM TRANSACTIONS".
• Stop on Error
You can configure the database trace so that it is deactivated automatically if a specific error
occurs. This is useful if you want to reproduce a specific problem and you know which error is
going to occur. By using this function, you ensure that relevant information in the trace file is not
overwritten.
◼ By choosing Generate…, you can sort the information from the knltrace file and extract the information in the subareas you require to an ASCII file (<SID>.prt).
◼ Specify the layers or modules of the kernel for which you want to extract the trace output. The
default value is abkmx.
◼ Data regarding strategies and times is output only if the OPTIMIZER and TIME options were
activated.
◼ The checkboxes at the bottom allow you to automate all the steps for displaying the readable trace
information:
• Flush the trace information
• Open the readable trace file
• Deactivate the current trace
◼ Once the actions for which you want to generate a trace are complete, the trace information must be
written from the memory to the "knltrace" file. To perform this action independently, choose
Database Trace Flush.
◼ To delete all information from the trace, choose Database Trace Clear.
The knldump file can become very large. It is generated in binary format and can be
read only using internal diagnosis tools.
Hardware errors
Backup/restore errors
◼ Check the backup.
Configuration errors
◼ Help Portal (http://help.sap.com)
◼ If an SQL error occurs, transactions usually terminate with a short dump (transaction ST22). You
can obtain more information using transactions ST05 and SM21. The database diagnosis files can
also help (DB50: Problem Analysis → Messages). You may have to activate and analyze traces (precompiler trace and/or database trace).
◼ System errors are critical errors and are often logged as error -602 in transactions SM21 and ST22.
You can find the unique error number in the knlMsg file.
◼ If you experience problems when backing up or restoring data, analyze the Database Manager logs, for example, with Database Studio or in the CCMS (transaction DB50, Problem Analysis → Logs).
◼ In addition to the diagnosis files, you should provide SAP Support with information about additional
actions such as the release upgrade performed, new software imported, operating system upgrade,
and so on.
◼ Important: The storage concept and configuration of MaxDB for the SAP environment (such as SAP
NetWeaver, content server, and so on) is very different from that provided on some MaxDB Web
pages (different directories, different database users). This is mainly due to the standardization
efforts in United Linux and the Linux Standard Base. Therefore, if you have configuration queries,
you should refer to the installation guide or SAP Help Portal (including SAP Note 327578) first.
◼ If problems occur when the SAP system is connected to the database instance, always check the
DEV logs (the dev_w* files from the SAP NetWeaver run directory). If the information provided
here is not sufficient to resolve the error, it may help to test the following tools (if necessary, with
the precompiler trace using SQLOPT) and use the log files generated by this test (trans.log and
dbslada.pct):
set SQLOPT=-X
R3trans -x
tp connect <SAPSID> [<TPPARAM file>]
◼ The XUSER data must also be checked. For more information, see SAP Note 39439.
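◼ For example, you can display the stored XUSER entries with the xuser tool (run it as the SAP system user on the application server):
• xuser list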
◼ Diagnosis files have to be saved explicitly only if they were not copied to the directory of the
DiagnoseHistoryPath MaxDB parameter automatically.
◼ See also SAP Note 839333 (FAQ: MaxDB error diagnosis -> corrupt data pages)
Unit: 6
Topic: Performance Tuning and Troubleshooting
To detect performance problems early on, the exercises show how to use the tools in practice. In this way, you can build on the features of MaxDB when developing your own applications and subsequently optimizing them.