How to handle data corruption for ASM type files?
The storage array should contain one or more spare disks (often called hot spares). When a physical disk starts to report errors to the monitoring infrastructure, or fails suddenly, the firmware should immediately restore fault tolerance by mirroring the contents of the failed disk onto a spare disk.
When a user comes to you and says that a particular SQL query is taking too much time, how will you solve this?
If you find the SQL query that is causing the problem, take a SQL trace with the explain plan; it will show how the SQL query is executed by Oracle, and based on that report you can tune the database.
For example: a table has 10,000 records but you want to fetch only 5 rows, and for that query Oracle does a full table scan. A full table scan just for 5 rows is not a good thing, so create an index on the relevant column; that is one way to tune the database.
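As a rough illustration of that workflow (the table BIG_TAB, its STATUS column and the index name below are hypothetical, not from the original question), the investigation could look like this:
SQL> ALTER SESSION SET sql_trace = TRUE;
SQL> SELECT * FROM big_tab WHERE status = 'OPEN';   -- currently a full table scan
SQL> ALTER SESSION SET sql_trace = FALSE;
-- format the raw trace file with tkprof at the OS prompt, review the plan, then:
SQL> CREATE INDEX big_tab_status_idx ON big_tab(status);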
What is the maximum number of enabled roles in a database by default?
The MAX_ENABLED_ROLES init.ora parameter limits the number of roles any user can have enabled simultaneously. The default is 30 in both Oracle 8i and 9i. When you create a role, it is enabled by default. If you create many roles, you may exceed the MAX_ENABLED_ROLES setting even if you are not using those roles.
User Profiles:
User profiles are used to limit the amount of system and database resources available to a user and to manage password restrictions. If no profile is created in a database, the default profile, which specifies unlimited resources for all users, will be used.
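A minimal sketch of creating and assigning a profile (the profile name, the limit values and the user SCOTT are only illustrative assumptions):
SQL> CREATE PROFILE app_user_profile LIMIT
       SESSIONS_PER_USER     3
       IDLE_TIME             30          -- minutes
       FAILED_LOGIN_ATTEMPTS 5
       PASSWORD_LIFE_TIME    90;         -- days
SQL> ALTER USER scott PROFILE app_user_profile;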
How do you convert a locally managed tablespace to a dictionary-managed tablespace?
>execute dbms_space_admin.tablespace_migrate_from_local('tablespace_name');
For the reverse direction (dictionary-managed to locally managed):
>execute dbms_space_admin.tablespace_migrate_to_local('tablespace_name');
What is a cluster key?
The related columns of the tables in a cluster are called the cluster key. The cluster key is indexed using a cluster index, and its value is stored only once for the multiple tables in the cluster.
What are four performance bottlenecks that can occur in a database server, and how are they detected and prevented?
· CPU bottlenecks
· Undersized memory structures
· Inefficient or high-load SQL statements
· Database configuration issues
Four major steps to detect these issues:
· Analyzing Optimizer Statistics
· Analyzing an Execution Plan
· Using Hints to Improve Data Warehouse Performance
· Using Advisors to Verify SQL Performance
Analyzing Optimizer Statistics
Optimizer statistics are a collection of data that describe details about the database and the objects in the database. The optimizer statistics are stored in the data dictionary. They can be viewed using data dictionary views, similar to the following:
SELECT * FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'GATHER_STATS_JOB';
Because the objects in a database can constantly change, statistics must be regularly updated so that they accurately describe these database objects. Statistics are maintained automatically by Oracle Database, or you can maintain the optimizer statistics manually using the DBMS_STATS package.
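For example, a manual gather with DBMS_STATS might look like the sketch below (the schema APP and table ORDERS are placeholder names, not from the original text):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP',
    tabname => 'ORDERS',
    cascade => TRUE);   -- also gathers statistics on the table's indexes
END;
/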
Analyzing an Execution Plan
General guidelines for using the EXPLAIN PLAN statement are:
· Use the SQL script UTLXPLAN.SQL to create a sample output table called PLAN_TABLE in your schema.
· Include the EXPLAIN PLAN FOR clause prior to the SQL statement.
· After issuing the EXPLAIN PLAN statement, use one of the scripts or packages provided by Oracle Database to display the most recent plan table output.
The execution order in EXPLAIN PLAN output begins with the line that is indented farthest to the right. If two lines are indented equally, then the top line is normally executed first.
To analyze EXPLAIN PLAN output:
EXPLAIN PLAN FOR <your query>;
For example:
EXPLAIN PLAN FOR
SELECT p.prod_name, c.channel_desc, SUM(s.amount_sold) revenue
FROM products p, channels c, sales s
WHERE s.prod_id = p.prod_id
AND s.channel_id = c.channel_id
AND s.time_id BETWEEN '01-12-2001' AND '31-12-2001'
GROUP BY p.prod_name, c.channel_desc;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Using Advisors to Verify SQL Performance
Using the SQL Tuning Advisor and SQL Access Advisor, you can invoke the query optimizer in advisory mode to examine a given SQL statement or set of SQL statements and provide recommendations to improve their efficiency. The SQL Tuning Advisor and SQL Access Advisor can make various types of recommendations, such as creating SQL profiles, restructuring SQL statements, creating additional indexes or materialized views, and refreshing optimizer statistics. Additionally, Oracle Enterprise Manager enables you to accept and implement many of these recommendations in very few steps.
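A minimal sketch of invoking the SQL Tuning Advisor from SQL*Plus (the sql_id and the task name are placeholders, not values from the original text):
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id    => 'abcd1234xyz99',          -- placeholder sql_id
              task_name => 'demo_tuning_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
SQL> SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('demo_tuning_task') FROM dual;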
Difference between RMAN Recovery Catalog and Nocatalog option?
The recovery catalog is an optional feature of RMAN; though Oracle recommends that you use it, it isn't required. One major benefit of the recovery catalog is that it stores metadata about backups in a database that can be reported on or queried. Catalog means you have a recovery catalog database; nocatalog means that you are using the controlfile as the RMAN repository. Of course, the catalog option can only be used when a recovery catalog is present (which is not mandatory). From a functional point of view there is no difference between taking a backup in catalog or nocatalog mode.
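For illustration, the two modes differ only in how you connect. For the controlfile-only repository:
$ rman target / nocatalog
When a recovery catalog is available (the catalog owner rco and catalog database catdb are hypothetical names):
$ rman target / catalog rco@catdb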
What is Oracle Net?
Oracle Net is responsible for handling client-to-server and server-to-server communications in an Oracle environment. It manages the flow of information in the Oracle network infrastructure. Oracle Net is used to establish the initial connection to the Oracle server, and then it acts as the messenger which passes requests from the client back to the server or between two Oracle servers.
Difference between Backup Sets and Backup Pieces?
RMAN can store backup data in a logical structure called a backup set, which is the smallest unit of an RMAN backup. A backup set contains the data from one or more datafiles, archived redo logs, control files, or the server parameter file. Backup sets, which are only created and accessed through RMAN, are the only form in which RMAN can write backups to media managers such as tape drives and tape libraries.
A backup set contains one or more binary files in an RMAN-specific format. Each such file is known as a backup piece. A backup set can contain multiple datafiles. For example, you can back up ten datafiles into a single backup set consisting of a single backup piece. In this case, RMAN creates one backup piece as output, and the backup set contains only this backup piece.
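A quick illustration from the RMAN prompt; LIST BACKUP shows each backup set together with the backup pieces that make it up:
RMAN> BACKUP AS BACKUPSET DATABASE;
RMAN> LIST BACKUP OF DATABASE;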
What is UTL_FILE? What are the different procedures and functions associated with it?
The UTL_FILE package lets your PL/SQL programs read and write operating system (OS) text files. It provides a restricted version of standard OS stream file input/output (I/O).
Subprogram - Description
FOPEN function - Opens a file for input or output with the default line size.
IS_OPEN function - Determines if a file handle refers to an open file.
FCLOSE procedure - Closes a file.
FCLOSE_ALL procedure - Closes all open file handles.
GET_LINE procedure - Reads a line of text from an open file.
PUT procedure - Writes a line to a file. This does not append a line terminator.
NEW_LINE procedure - Writes one or more OS-specific line terminators to a file.
PUT_LINE procedure - Writes a line to a file. This appends an OS-specific line terminator.
PUTF procedure - A PUT procedure with formatting.
FFLUSH procedure - Physically writes all pending output to a file.
FOPEN function (overload) - Opens a file with the maximum line size specified.
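A small, self-contained sketch of the package in use; it assumes a directory object named DATA_DIR has already been created and granted (for example CREATE DIRECTORY data_dir AS '/tmp';), which is not part of the original text:
DECLARE
  l_file UTL_FILE.FILE_TYPE;
  l_line VARCHAR2(32767);
BEGIN
  -- write one line, then close
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'demo.txt', 'W');
  UTL_FILE.PUT_LINE(l_file, 'Hello from UTL_FILE');
  UTL_FILE.FCLOSE(l_file);
  -- reopen for reading and print the line back
  l_file := UTL_FILE.FOPEN('DATA_DIR', 'demo.txt', 'R');
  UTL_FILE.GET_LINE(l_file, l_line);
  DBMS_OUTPUT.PUT_LINE(l_line);
  UTL_FILE.FCLOSE(l_file);
END;
/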
Differentiate between TRUNCATE and DELETE?
The DELETE command logs the data changes, whereas TRUNCATE simply removes the data without logging individual row changes. Hence data removed by DELETE can be rolled back, but not data removed by TRUNCATE. TRUNCATE is a DDL statement whereas DELETE is a DML statement.
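To see the difference against a hypothetical table T:
SQL> DELETE FROM t WHERE id = 1;
SQL> ROLLBACK;              -- the deleted row is restored
SQL> TRUNCATE TABLE t;      -- DDL: commits implicitly, cannot be rolled back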
What is an Oracle Instance?
An instance is a combination of memory structures and process structures. The memory structure is the SGA (System or Shared Global Area) and the process structure is the set of background processes.
Components of the SGA:
Database Buffer Cache,
Shared Pool (further divided into the Library Cache and the Data Dictionary Cache, also called the Row Cache), Large Pool, Streams Pool, Java Pool,
Redo Log Buffer.
Background processes:
Mandatory processes (SMON, PMON, DBWR, LGWR, CKPT, RECO)
Optional processes (ARCn, RBAC, MMAN, MMON, MMNL)
When Oracle starts an instance, it reads the initialization parameter file to determine the values of initialization parameters. Then it allocates an SGA, which is a shared area of memory used for database information, and creates background processes. At this point, no database is associated with these memory structures and processes.
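Both parts of the instance can be inspected from SQL*Plus, for example:
SQL> SHOW SGA
SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00';   -- background processes that are running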
What information is stored in the Control File?
The database name, the timestamp of database creation, the names and locations of associated datafiles and redo log files, tablespace information, datafile offline ranges, the log history, archived log information, backup set and backup piece information, backup datafile and redo log information, datafile copy information, and the current log sequence number.
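This can be inspected from the dynamic performance views, for example:
SQL> SELECT name FROM v$controlfile;                                  -- where the control files are
SQL> SELECT type, records_used FROM v$controlfile_record_section;     -- what is recorded in them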
When you start an Oracle DB, which file is accessed first?
To start an instance, the Oracle server needs a parameter file which contains information about the instance. The Oracle server searches for this file in the following sequence:
1) SPFILE -- if found, the instance is started; otherwise
2) Default SPFILE -- used if the named SPFILE is not found; otherwise
3) PFILE -- if no default SPFILE is found, the instance is started using the PFILE; otherwise
4) Default PFILE -- is used to start the instance.
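A quick way to check which kind of parameter file the running instance used (a sketch; the second command is only needed if you want to generate an SPFILE):
SQL> SHOW PARAMETER spfile
SQL> CREATE SPFILE FROM PFILE;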
What is the job of the SMON and PMON processes?
SMON: The system monitor performs instance recovery at instance startup. In a multi-instance (cluster) environment it can also recover other instances that have failed. It cleans up temporary segments that are no longer in use, recovers dead transactions skipped during crash and instance recovery, and coalesces the free extents within the database to make free space contiguous and easier to allocate.
PMON: The process monitor performs recovery when a user process fails. It is responsible for cleaning up the cache and freeing resources used by the failed process. In the MTS (shared server) environment it checks on dispatcher and server processes, restarting them at times of failure.
What is Instance Recovery?
When an Oracle instance fails, Oracle performs instance recovery when the associated database is restarted.
Instance recovery occurs in two steps:
Cache recovery: Changes being made to a database are recorded in the database buffer cache and, simultaneously, in the online redo log files. When there is enough data in the database buffer cache, it is written to the data files. If an Oracle instance fails before the data in the database buffer cache is written to the data files, Oracle uses the data recorded in the online redo log files to recover the lost data when the associated database is restarted. This process is called cache recovery.
Transaction recovery: When a transaction modifies data in a database, the before image of the modified data is stored in an undo segment. The data stored in the undo segment is used to restore the original values in case a transaction is rolled back. At the time of an instance failure, the database may have uncommitted transactions, and changes made by these uncommitted transactions may already have been written to the data files. To maintain read consistency, Oracle rolls back all uncommitted transactions when the associated database is restarted, using the undo data stored in undo segments. This process is called transaction recovery.
1. Rolling forward the committed transactions
2. Rolling back the uncommitted transactions
What is written in Redo Log Files?
The log writer (LGWR) writes the redo log buffer contents into the redo log files. LGWR does this when a transaction commits, every three seconds, when the redo log buffer is one-third full, and immediately before the database writer (DBWn) writes its changed buffers into the data files.
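The redo log groups and their current state can be checked with a simple query, for example:
SQL> SELECT group#, sequence#, status, archived FROM v$log;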
How do you control the number of datafiles one can have in an Oracle database?
When starting an Oracle instance, the database's parameter file indicates the amount of SGA space to reserve for datafile information; the maximum number of datafiles is controlled by the DB_FILES parameter. This limit applies only for the life of the instance.
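A minimal sketch of raising the limit (it assumes the instance uses an SPFILE, and the value 400 is only illustrative; the change takes effect after a restart):
SQL> SHOW PARAMETER db_files
SQL> ALTER SYSTEM SET db_files = 400 SCOPE = SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP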
What is the maximum number of datafiles in an Oracle database?
The default maximum number of datafiles is 255, which can be defined in the control file at the time of database creation. It can be increased by setting the initialization parameter to a higher value at database creation time; setting this value too high can cause DBWR issues. Before 9i, the maximum number of datafiles in a database was 1022; from 9i onwards the limit applies to the number of datafiles per tablespace.
What is a Tablespace?
A tablespace is a logical storage unit within the database. It is logical because a tablespace is not visible in the file system of the machine on which the database resides. A tablespace in turn consists of at least one datafile, which in turn is physically located in the file system of the server. The tablespace builds the bridge between the Oracle database and the file system in which the tables' or indexes' data is stored.
There are three types of tablespaces in Oracle:
Permanent tablespaces, Undo tablespaces, Temporary tablespaces
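A minimal sketch of creating a permanent tablespace (the tablespace name and datafile path are hypothetical):
SQL> CREATE TABLESPACE app_data
       DATAFILE '/u01/oradata/mysid/app_data01.dbf' SIZE 100M
       AUTOEXTEND ON NEXT 10M MAXSIZE 2G
       EXTENT MANAGEMENT LOCAL
       SEGMENT SPACE MANAGEMENT AUTO;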
What is the purpose of Redo Log files?
The purpose of the redo log files is to record all changes made to the data so that they can be used to recover the database. It is always advisable to have two or more redo log files and to keep them on separate disks, so you can recover the data after a system crash.
Which default database roles are created when you create a database?
CONNECT, RESOURCE and DBA are three default roles.
What is a Checkpoint?
A checkpoint performs the following three operations:
1. Every dirty (modified) block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk. It is the DBWR that writes all modified database blocks back to the datafiles.
2. The latest SCN is written (updated) into the datafile headers.
3. The latest SCN is also written to the controlfiles.
The update of the datafile headers and the control files is done by the CKPT process (LGWR performs it only if CKPT is not enabled). As of version 8.0, CKPT is enabled by default. The date and time of the last checkpoint can be retrieved through checkpoint_time in v$datafile_header. The SCN of the last checkpoint can be found in v$database as checkpoint_change#.
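For example, the checkpoint information mentioned above can be read as follows:
SQL> SELECT checkpoint_time, checkpoint_change# FROM v$datafile_header;
SQL> SELECT checkpoint_change# FROM v$database;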
Which process reads data from the datafiles?
The server process reads the blocks from the datafiles into the buffer cache.
Which process writes data to the datafiles?
The DBWn process writes the dirty buffers from the database buffer cache to the data files.
Can you make a datafile autoextensible? If yes, then how?
You must be logged on as a DBA user, then issue:
For a data file:
SQL> ALTER DATABASE DATAFILE 'c:\oradata\mysid\XYZ.dbf' AUTOEXTEND ON NEXT 10M MAXSIZE 40G;
SQL> ALTER DATABASE DATAFILE 'c:\oradata\mysid\XYZ.dbf' AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;
For a temp file:
SQL> ALTER DATABASE TEMPFILE 'c:\oradata\mysid\XYZ.dbf' AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;
This would turn on autoextend, grab new disk space of 10M when needed, and have no upper limit on the size of the datafile.
Note: This would be bad on a 32-bit machine, where the maximum size is typically 4 GB.
What is the Shared Pool?
It is the area in the SGA that allows sharing of parsed SQL statements among concurrent users. Its purpose is to store SQL statements so that identical SQL statements do not have to be parsed each time they are executed.
The shared pool is the part of the SGA where (among other things) the following are stored:
Optimized query plans, security checks, parsed SQL statements, packages, and object information.
What is kept in the Database Buffer Cache?
The database buffer cache is one of the most important components of the System Global Area (SGA). The database buffer cache is the place where data blocks are copied from datafiles to perform SQL operations. The buffer cache is a shared memory structure and is concurrently accessed by all server processes. Oracle allows different block sizes for different tablespaces. A standard block size is defined by the DB_BLOCK_SIZE initialization parameter, and the SYSTEM tablespace uses the standard block size. The DB_CACHE_SIZE parameter is used to define the size of the database buffer cache. For example, to create a cache of 800 MB, set the parameter as below:
DB_CACHE_SIZE=800M
If you have created a tablespace with a block size different from the standard block size, for example your standard block size is 4K and you have created a tablespace with an 8K block size, then you must create an 8K buffer cache as:
DB_8K_CACHE_SIZE=256M
What is the maximum number of redo log files one can have in a database?
The maximum number of log files a database can accommodate depends on the parameter MAXLOGFILES specified during database creation (MAXLOGMEMBERS limits the number of members per group). In a database we can create a maximum of 255 redo log files. It depends on what you specified for MAXLOGFILES during database creation (manually) or what you specified for "Maximum no. of redo log files" with DBCA.
What is the PGA_AGGREGATE_TARGET parameter?
PGA_AGGREGATE_TARGET is an Oracle server parameter that specifies the target aggregate PGA memory available to all server processes attached to the instance. Some of the properties of the PGA_AGGREGATE_TARGET parameter are given below:
· Parameter type: Big integer
· Syntax: PGA_AGGREGATE_TARGET = integer [K | M | G]
· Default value: 20% of SGA size or 10 MB, whichever is greater
· Modifiable: ALTER SYSTEM
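For example, the parameter can be changed dynamically (the 1G value is only illustrative):
SQL> SHOW PARAMETER pga_aggregate_target
SQL> ALTER SYSTEM SET pga_aggregate_target = 1G;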
What is the Large Pool used for?
The large pool is an optional memory structure used for the following purposes:
(1) Session information for shared server
(2) I/O server processes
(3) Parallel queries
(4) Backup and recovery operations when using RMAN
The role of the large pool is important because otherwise this memory would be allocated from the shared pool; hence the large pool also reduces the overhead on the shared pool.
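When automatic shared memory management is not in use, the large pool can be sized explicitly, for example (the 128M value is only illustrative):
SQL> ALTER SYSTEM SET large_pool_size = 128M;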
What is the PCTINCREASE setting?
PCTINCREASE refers to the percentage by which each next extent (beginning with the third extent) will grow. The size of each subsequent extent is equal to the size of the previous extent plus this percentage increase.
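As a worked illustration for a dictionary-managed segment (the table name is hypothetical, and note that locally managed tablespaces ignore PCTINCREASE): with NEXT 1M and PCTINCREASE 50, the third extent is 1.5M and the fourth is 2.25M.
SQL> CREATE TABLE t_demo (id NUMBER)
       STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 50);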
What are the PCTFREE and PCTUSED settings?
PCTFREE is a block storage parameter used to specify how much space should be left in a database block for future updates. For example, with PCTFREE=10, Oracle will keep adding new rows to a block until it is 90% full. This leaves 10% for future updates (row expansion).
When using Oracle Advanced Compression, Oracle will trigger block compression when the PCTFREE limit is reached. This eliminates holes created by row deletions and maximizes contiguous free space in blocks.
PCTUSED is a block storage parameter used to specify when Oracle should consider a database block empty enough to be added to the freelist. Oracle will only insert new rows in blocks that are enqueued on the freelist. For example, if PCTUSED=40, Oracle will not add new rows to the block unless sufficient rows are deleted from the block so that its used space falls below 40%.
SQL> SELECT pct_free FROM user_tables WHERE table_name = 'EMP';
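And a sketch of setting these parameters at table creation (the table is hypothetical; in a locally managed tablespace with automatic segment space management, PCTUSED is ignored):
SQL> CREATE TABLE emp_history (emp_id NUMBER, notes VARCHAR2(400))
       PCTFREE 10 PCTUSED 40;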
Source - http://shahiddba.blogspot.in