Oracle Database 11g: Administration I
Question No: 81 – (Topic 2)
Automatic Shared Memory Management (ASMM) has been enabled for your database
instance. The initialization parameters for the components that are managed by ASMM are not set. After observing the effects of ASMM, you executed the following command:
SQL> ALTER SYSTEM SET DB_CACHE_SIZE = 100M;
Which statement is true in this scenario?
The minimum memory size for the database buffer cache is set to 100 MB.
The maximum memory size that can be obtained by the database buffer cache during ASMM is set to 100 MB.
The minimum memory size allocated for a server process in the database buffer cache in dedicated mode is set to 100 MB.
The maximum memory size from the database buffer cache that can be released for dynamic distribution during ASMM is set to 100 MB.
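As background for this scenario, the following SQL*Plus sketch (parameter values are illustrative, not from the question) shows how ASMM is enabled and how a manually set component size then behaves under it:

```sql
-- Enable ASMM by setting a nonzero SGA_TARGET (value is illustrative)
ALTER SYSTEM SET SGA_TARGET = 800M SCOPE=BOTH;

-- With ASMM active, explicitly setting an auto-tuned component
-- establishes a floor: ASMM will not shrink the buffer cache
-- below this value during automatic redistribution.
ALTER SYSTEM SET DB_CACHE_SIZE = 100M;

-- Inspect current and minimum sizes of the SGA components
SELECT component, current_size, min_size
FROM   v$sga_dynamic_components;
```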
Question No: 82 – (Topic 2)
Your database is configured in shared server mode. However, your senior DBA asks you to modify the value of the PRIVATE_SGA limit in the profile of the users.
What could be the reason for this?
To limit the User Global Area (UGA) memory allocated to a session from the SGA
To limit the amount of memory to be used for the dispatcher queue of a session in SGA
To limit the amount of memory to be used for the request pool in System Global Area (SGA)
To control the amount of memory allocated in SGA for the local variables for each shared server process
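For context, PRIVATE_SGA is a resource limit defined in a profile. A minimal sketch (the profile name, user name, and limit value are hypothetical):

```sql
-- Cap the UGA space a session may allocate in the SGA
-- (relevant only when the database runs in shared server mode)
CREATE PROFILE app_user_profile LIMIT
  PRIVATE_SGA 512K;

-- Assign the profile to a user (user name is illustrative)
ALTER USER scott PROFILE app_user_profile;

-- Resource limits must be enabled for the profile to be enforced
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
```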
Question No: 83 – (Topic 2)
You configured the Flash Recovery Area for your database. The database instance has been started in ARCHIVELOG mode and the LOG_ARCHIVE_DEST_1 parameter is not set.
What will be the implications on the archiving and the location of archive redo log files?
Archiving will be disabled because the destination for the redo log files is missing
The database instance will shut down and the error details will be logged in the alert log file
Archiving will be enabled and the destination for the archived redo log file will be set to the Flash Recovery Area implicitly
Archiving will be enabled and the location for the archive redo log file will be created in the default location $ORACLE_HOME/log
The LOG_ARCHIVE_DEST_n parameters (where n = 1, 2, 3, … 10) define up to ten archive log destinations.
The parameter integer suffix is defined as the handle displayed by the V$ARCHIVE_DEST dynamic performance view.
SERVICE=tnsnames_service: Specifies a standby destination. Oracle Net (IPC or TCP) transmits the archived redo log. A standby instance must be associated with the destination. The value represented by tnsnames_service corresponds to an appropriate service name in tnsnames.ora.
LOCATION=local_directory: Specifies a local file system destination. You must specify this attribute for at least one destination.
MANDATORY: Specifies that archiving to the destination must succeed before the redo log file can be made available for reuse.
OPTIONAL: Specifies that successful archiving to the destination is not required before the redo log file can be made available for reuse. If the "must succeed count," set with LOG_ARCHIVE_MIN_SUCCEED_DEST, is met, the redo log file is marked for reuse. This is the default.
REOPEN=seconds: Specifies the minimum number of seconds before the archiver process (ARCn, foreground, or log writer process) should try again to access a previously failed destination. Future attempts are made when the next redo log file is archived. If a destination is MANDATORY, then Oracle recommends that you specify a REOPEN time that reduces the possibility of primary database shutdown due to lack of available online redo log files.
If you do not specify seconds, then the default value is 300.
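The attributes described above can be combined in a single destination setting; a sketch (the directory path and service name are placeholders):

```sql
-- Local mandatory destination; retry a failed destination
-- after 300 seconds (also the default when REOPEN has no value)
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 =
  'LOCATION=/u01/app/oracle/arch MANDATORY REOPEN=300';

-- Remote standby destination via an Oracle Net service name
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=standby_db OPTIONAL';

-- Verify the configured destinations
SELECT dest_id, destination, status FROM v$archive_dest;
```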
Question No: 84 – (Topic 2)
Which two statements are true regarding B-tree indexes? (Choose two.)
A. The leaf blocks in the index are doubly linked.
B. The leaf node stores a bitmap for each key value.
C. The rows with NULL values in key columns also have entries in the index.
D. The deletion of a row from the table causes a logical deletion in the index leaf block, and the space becomes available for a new leaf entry.
Answer: A,D Explanation: B-Tree Index
Structure of a B-tree Index
At the top of the index is the root, which contains entries that point to the next level in the index. At the next level are branch blocks, which in turn point to blocks at the next level in the index. At the lowest level are the leaf nodes, which contain the index entries that point to rows in the table. The leaf blocks are doubly linked to facilitate the scanning of the index in an ascending as well as descending order of key values.
Format of Index Leaf Entries
An index entry has the following components:
Entry header: Stores the number of columns and locking information
Key column length-value pairs: Define the size of a column in the key followed by the value for the column (The number of such pairs is at most the number of columns in the index.)
ROWID: Row ID of a row that contains the key values
B-Tree Index (continued)
Index Leaf Entry Characteristics
In a B-tree index on a nonpartitioned table:
Key values are repeated if there are multiple rows that have the same key value, unless the index is compressed
There is no index entry corresponding to a row that has all key columns that are NULL. Therefore, a WHERE clause specifying NULL always results in a full table scan.
A restricted ROWID is used to point to the rows of the table because all rows belong to the same segment
Effect of DML Operations on an Index
The Oracle server maintains all the indexes when DML operations are carried out on a table. Here is an explanation of the effect of a DML command on an index:
Insert operations result in the insertion of an index entry in the appropriate block.
Deleting a row results only in a logical deletion of the index entry. The space used by the deleted row is available for new sequential leaf entries.
Updates to the key columns result in a logical delete and an insert to the index. The PCTFREE setting has no effect on the index except at the time of creation. A new entry may be added to an index block even if it has less space than that specified by PCTFREE.
Types of Indexes
There are several types of index structures available, depending on your needs. Two of the most common are:
B-tree index: The default index type, in the form of a balanced tree
Bitmap index: Has a bitmap for each distinct value indexed; each bit position represents a row that may or may not contain the indexed value. Best for low-cardinality columns
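The two index types above can be created as follows (table and column names are illustrative):

```sql
-- Default B-tree index: balanced tree, suited to high-cardinality columns
CREATE INDEX emp_name_ix ON employees (last_name);

-- Bitmap index: one bitmap per distinct key value,
-- best for low-cardinality columns such as flags or codes
CREATE BITMAP INDEX emp_gender_bix ON employees (gender);
```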
Question No: 85 – (Topic 2)
You want to configure and schedule offline database backups to run automatically. Which tool or utility would you use to achieve this?
The XML script
The PL/SQL script
The DBMS_SCHEDULER package
Enterprise Manager to schedule the backup
Question No: 86 – (Topic 2)
Which two activities are NOT supported by the Data Recovery Advisor (DRA)? (Choose two.)
A. Recover from failures in the RAC environment.
B. Diagnose and repair a data file corruption online.
C. Diagnose and repair a data file corruption offline.
D. Diagnose and repair failures on a standby database.
Answer: A,D Explanation:
Data Recovery Advisor
The Data Recovery Advisor automatically gathers data failure information when an error is encountered. In addition, it can proactively check for failures. In this mode, it can potentially
detect and analyze data failures before a database process discovers the corruption and signals an error. (Note that repairs are always under human control.)
Data failures can be very serious. For example, if your current log files are missing, you cannot open your database. Some data failures (like block corruptions in data files) are not catastrophic because they do not take the database down or prevent you from opening the Oracle instance. The Data Recovery Advisor handles both cases: the one when you cannot start up the database (because required database files are missing, inconsistent, or corrupted) and the one when file corruptions are discovered during run time.
Supported Database Configurations
In the current release, the Data Recovery Advisor supports single-instance databases. Oracle Real Application Clusters databases are not supported.
The Data Recovery Advisor cannot use blocks or files transferred from a standby database to repair failures on a primary database. Furthermore, you cannot use the Data Recovery Advisor to diagnose and repair failures on a standby database. However, the Data Recovery Advisor does support failover to a standby database as a repair option.
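On a supported single-instance database, the Data Recovery Advisor is typically driven from an RMAN session with the following command sequence:

```sql
-- RMAN session connected to the target database
LIST FAILURE;            -- show failures recorded by the advisor
ADVISE FAILURE;          -- generate repair options (manual and automatic)
REPAIR FAILURE PREVIEW;  -- display the repair script without running it
REPAIR FAILURE;          -- execute the recommended repair
```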
Question No: 87 – (Topic 2)
You are working on an instance started using the SPFILE. You want to move the Flash Recovery Area of your database to a new location. You want the Flashback log files to be stored in the new location. Given below are the steps to accomplish the task in random order:
Shut down the instance.
Change the value of the DB_RECOVERY_FILE_DEST initialization parameter to a new value.
Execute the ALTER DATABASE FLASHBACK OFF command.
Start up the instance and mount the database.
Execute the ALTER DATABASE FLASHBACK ON command.
Open the database.
Select the correct order in which these tasks need to be performed.
A. 2, 1, 4, 3, 5, 6
B. 1, 4, 3, 2, 6, 5
C. 1, 4, 2, 6, 3, 5
D. 3, 2, 1, 4, 5, 6
Answer: A Explanation:
How to change Flash Recovery Area to a new location?
If you need to move the Flash Recovery Area of your database to a new location, invoke SQL*Plus to change the DB_RECOVERY_FILE_DEST initialization parameter.
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='disk1' SCOPE=BOTH SID='*';
After you change this parameter, all new Flash Recovery Area files will be created in the new location.
The permanent files (control files and online redo log files), flashback logs, and transient files can be left in the old Flash Recovery Area location. The database will delete the transient files from the old Flash Recovery Area location as they become eligible for deletion.
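The current Flash Recovery Area location and its space usage can be checked before and after the move:

```sql
-- Location, quota, and space consumed in the Flash Recovery Area
SELECT name, space_limit, space_used
FROM   v$recovery_file_dest;

-- Breakdown of usage by file type (flashback logs, archived logs, ...)
SELECT file_type, percent_space_used
FROM   v$recovery_area_usage;
```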
For the flashback log files to pick up the new DB_RECOVERY_FILE_DEST location, the flashback option needs to be toggled off and on.
This can be done like this:
Shutdown the Database
Startup mount the Database: SQL> startup mount;
Toggle the Flashback off:
SQL> alter database flashback off;
Toggle the Flashback on:
SQL> alter database flashback on;
Open the Database: SQL> alter database open;
If you need to actually move your current permanent files and transient files to the new Flash Recovery Area, follow these steps:
1) To move the existing backup sets and archived redo log files, use the following commands:
RMAN> BACKUP AS COPY ARCHIVELOG ALL DELETE INPUT;
RMAN> BACKUP DEVICE TYPE DISK BACKUPSET ALL DELETE INPUT;
Question No: 88 – (Topic 2)
The instance abnormally terminates because of a power outage. Which statement is true about redo log files during instance recovery?
Inactive and current redo log files are required to accomplish recovery
Online and archived redo log files are required to accomplish instance recovery
All redo log entries after the last checkpoint are applied from redo log files to data files
All redo log entries recorded in the current log file until the checkpoint position are applied to data files
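The amount of redo the instance would need to apply between the checkpoint position and the end of the current log can be inspected while the database is running:

```sql
-- Estimated recovery work if the instance crashed right now
SELECT recovery_estimated_ios,  -- data blocks to be read during recovery
       actual_redo_blks,        -- redo blocks needed for recovery
       estimated_mttr,          -- current estimated recovery time (seconds)
       target_mttr              -- effective FAST_START_MTTR_TARGET
FROM   v$instance_recovery;
```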
Question No: 89 – (Topic 2)
Which statements are true regarding the creation of an incident package file by using the EM Support Workbench? (Choose all that apply.)
You can add SQL test cases to the incident package.
You can add or remove the trace files to the package.
You cannot create an incremental incident package when the physical files are purged from the ADR.
You can create the incremental incident package ZIP file for new or modified diagnostic information for the incident package already created.
Question No: 90 – (Topic 2)
Identify two situations in which the block header grows in a data block. (Choose two.)
A. When row directories need more row entries
B. When there is row migration in the data block
C. When there is an increase in the PCTFREE value for the data block
D. When more transaction slots are required than are initially configured
Answer: A,D Explanation:
Database Block: Contents
Block header: The block header contains the segment type (such as table or index), data block address, table directory, row directory, and transaction slots of size 24 bytes each, which are used when modifications are made to rows in the block. The block header grows downward from the top.
Row data: This is the actual data for the rows in the block. Row data space grows upward from the bottom.
Free space: Free space is in the middle of the block, enabling the header and the row data space to grow when necessary. Row data takes up free space as new rows are inserted or as columns of existing rows are updated with larger values.
Examples of events that cause header growth:
Row directories that need more row entries
More transaction slots required than initially configured
Initially, the free space in a block is contiguous. However, deletions and updates may fragment the free space in the block. The free space in the block is coalesced by the Oracle server when necessary.
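Transaction slots beyond the initial allocation are taken from the block's free space, growing the header. The initial count can be raised at creation time so the header does not need to grow under heavy concurrent DML (table and column names are illustrative):

```sql
-- Pre-allocate 4 transaction slots (ITL entries) per block
-- and reserve 20% of each block for future row updates
CREATE TABLE orders (
  order_id NUMBER,
  status   VARCHAR2(10)
) INITRANS 4 PCTFREE 20;
```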