Monday, 13 July 2015

On More-Secure Applications

By Tom Kyte

I’m worried about the security of my application—things like SQL injection, for example. What can I do to minimize the chances that my application will be hacked?

This is a great question, because not a day seems to go by without news of yet another hack. Whether it be someone stealing identities, credit card information, personal information, or whatever, new security incidents seem to happen often. Too often.
There are a few things you can do in your application design to eliminate or reduce your exposure. Securing an application is something that needs to be done as the application is being developed—it is very hard to retrofit security into an existing application. Trying to fix an existing application to be secure is sort of like trying to patch a leaky foundation of a house rather than building a waterproof foundation in the first place.
Here are some of the most important things you can do for your application design architecturally:
  • Make sure you have read the Database 2 Day + Security Guide and the Database Security Guide. They will give you an overview of what you need to be thinking about security-wise and an excellent look into the capabilities Oracle Database offers in the area of security.
  • Employ the concept of least privilege.
  • Use multiple schemas—many more than one—to separate objects and help enforce the concept of least privilege.
  • Use bind variables! They are not only a scalability and performance feature; they also help secure your application from SQL injection attacks.
  • Employ multiple levels of defense. Do not put security only in the application code; repeat it as many times as you can within the database, using different techniques. In that way, a bug in one layer of defense won’t leave your database exposed.
Read on for details of some of these security strategies.

Least Privilege

This is a key tenet of database security: grant the fewest (least) privileges possible to everyone—from your DBAs down to the application schemas and out to the schemas used to connect to the database from the middle tier.
All too often, application developers request a privilege in the database simply to make their lives easier. For example, if they are working on an application that requires data from other application schemas—from many tables in many other schemas—they might request the SELECT ANY TABLE privilege. With that privilege, no matter what table they need from those other schemas, they will have it. The application developers might feel that it makes them more “agile”—able to pump out code faster—because they never have to ask for a SELECT grant again.
If attackers can find a SQL injection flaw in the developed application, they will almost certainly be able to gain at least read access to everything in the database—not just the tables the application accesses but every single table in the entire database.
The SELECT ANY TABLE privilege will also make it very hard to survive a true security audit. There will be no way to justify why the application truly needs SELECT ANY TABLE privileges. Additionally, there will be no documentation for the tables the application truly needs.
No ANY grant should ever be given to an application schema. The power of a grant with the ANY keyword in it—such as CREATE ANY CONTEXT, SELECT ANY TABLE, DROP ANY TABLE—is beyond what any application needs. There is always another way for developers to achieve what they need to do.
For example, I’ve seen DROP ANY TABLE granted to an application schema with the reasoning that the application developers needed to truncate a table in another schema. In reference to truncating a table, the Database SQL Language Reference states: “To truncate a table, the table must be in your schema or you must have the DROP ANY TABLE system privilege.”
That is true, but you do not need to have the DROP ANY TABLE privilege to achieve the goal of truncating a table in another schema. That is what’s important—the goal is to truncate table T in schema X. There are at least two ways to achieve that:
  1. Use the powerful and dangerous DROP ANY TABLE privilege.
  2. Implement a stored procedure that executes as schema X (the owner of the table) and performs the truncate. And then grant EXECUTE privileges on this procedure.
If you were to grant DROP ANY TABLE to the application schema and an attacker discovered a SQL injection flaw in the application, the attacker would have the DROP ANY TABLE privilege. Think about how damaging that would be!
The other approach, achieving the goal with the minimum privileges—with the least privileges—is the right way to go. Consider the following:
SQL> create user a identified by a;
User created.

SQL> create user b identified by b
  2  default tablespace users
  3  quota 5m on users;
User created.

SQL> grant create session to a;
Grant succeeded.

SQL> grant create session,
  2        create table,
  3        create procedure
  4  to b;
Grant succeeded.

I now have two schemas—A and B. A has just the privilege to log in, and B can log in and create tables and procedures. Now I’ll log in as B and create my objects:
SQL> connect b/b
Connected.

SQL> create table t
  2  as
  3  select *
  4    from all_users;
Table created.

SQL> create or replace
  2  procedure truncate_table_t
  3  authid DEFINER
  4  as
  5  begin
  6      execute immediate
  7        'truncate table B.T';
  8  end;
  9  /
Procedure created.

SQL> grant select on t to a;
Grant succeeded.

SQL> grant execute
  2    on truncate_table_t
  3  to a;
Grant succeeded.

Schema B now has a table T with some data in it and also a definer’s rights procedure that truncates table B.T. A definer’s rights routine (the default type of stored procedure) runs with the privileges granted directly to the owner of the procedure—that is, all the privileges of schema B minus any privileges granted to B via a role. Schema B allows schema A to read table T and to execute the stored procedure B.TRUNCATE_TABLE_T.
I’ll log in as A and see what I can do:
SQL> connect a/a
Connected.

SQL> select count(*) from b.t;

  COUNT(*)
------------
       55

I can see that table B.T exists, I can query it, and it has data. Now I’ll try to truncate table B.T as user A:
SQL> truncate table b.t;
truncate table b.t
                 *
ERROR at line 1:
ORA-01031: insufficient privileges

I am not privileged enough to truncate this table. For that truncate to succeed as executed by A, I would need the DROP ANY TABLE privilege. But that doesn’t mean I need to have the DROP ANY TABLE privilege in order to truncate B.T! I can just execute that stored procedure:
SQL> exec b.truncate_table_t;
PL/SQL procedure successfully completed.

SQL> select count(*) from b.t;

  COUNT(*)
--------------
        0

I have achieved the goal—to truncate B.T—but did not require the DROP ANY TABLE privilege. I have greatly limited the exposure to risk, but I have not eliminated it. An attacker finding a SQL injection bug in code executed by schema A would likely be able to execute the B.TRUNCATE_TABLE_T procedure, but I’ve still achieved a huge reduction in exposure. I’ve gone from risking the loss of every table in the database to the loss of data in one table, a table that is truncated on a recurring basis already.
Using stored procedures is a great way to reduce the strength of a grant you need to give across schemas. They definitely help achieve the least privileges concept. Here schema A needs the EXECUTE privilege only on a procedure that can truncate exactly the one table that A needs.
NOTE: Oracle Database 12c includes a new privilege analysis tool to help enforce the concept of least privileges. See the Database Vault Administrator’s Guide for details.

Use Multiple Schemas

This idea probably gets more pushback from developers than any other security idea I suggest. I’m going to reproduce a question from a previous Ask Tom column:
A data architect at work has proposed that we start using separate database accounts to hold the code (packages, procedures, views, and so on) and the data (tables, materialized views, indexes, and so on) for an application. I’ve never come across this idea before, and it seems to be contrary to the concepts of encapsulation, in that the application will be spread across at least two schemas and require more administrative overhead to maintain the necessary grants between them.
Are there any situations you can think of where this would be a recommended approach? And if you did this, how would you recommend referencing objects in the data schema from the application schema? Finally, would you put any views into the code or data schema?
You can see my original response to this question at bit.ly/asktommultischema, but in looking at this question again, I can see that the questioner is trying to find reasons to not do something that would be greatly beneficial to security. Developers may throw out words such as encapsulation (although having multiple schemas actually promotes encapsulation) and claim that it will require more administrative overhead to maintain the necessary grants, while missing the point that the production application will need to have the concept of least privileges in place. What some developers view as drawbacks, I see as positives.
My approach would be to have at least one schema that contains table data, and maybe more than one—probably more than one—but at least one schema that owns just the table data and, if need be, a few procedures like the one described in the last section. There would be a second schema, and this schema would own code (PL/SQL, Java stored procedures, and so on) that accesses these tables. It would also contain views of the various tables as needed. The first schema, the one that contains table data, would grant just the privileges needed on a table to the second, “code” schema. (There would be no GRANT ALL ON T TO another_schema.) The data schema would grant just the access necessary: INSERT, UPDATE, DELETE, and/or SELECT.
Then there would be a third schema. This schema would be granted nothing more than CREATE SESSION to log in and the bare privileges on the second schema the application needs in order to execute the procedures and access the views. This third schema, the database account, is the one your application server would use to connect to the database.
Think about the benefits this would bring you. If hackers get into the application schema, the damage they can do will be very limited. They won’t be able to read every table—they’ll be able to read only a few. And if you use stored procedures as a data access layer, they may not be able to access any tables at all! All they’ll be able to do is run your application. They won’t be able to drop any tables, which they would be able to do if you used a single schema for everything, or update anything they choose, as they would be able to if you used a single schema. And so on. Hackers will be very restricted in what they can and cannot do.
Let’s make this a bit more concrete. Suppose your application has an application audit trail (as it and every application should). Your typical application user needs to be able to insert into this audit trail, but that user should never be able to read it, delete it, or modify it. You might also have an administrative application that needs to read the audit trail, but it doesn’t ever need to insert into it, update it, or delete from it. If you go with a single schema, both the application and the administrative application users will have full READ/WRITE access on this table. You might say, “Our application enforces security—don’t worry.” But that does worry me, because you will have a bug in your application—somewhere, someday. And then the audit trail will be 100 percent exposed to tampering.
If instead you put the audit trail into its own schema and create two code schemas—one for the typical application user and the other for the typical administrative application user—you’ll be able to grant INSERT privileges on the audit trail table to the first code schema and SELECT privileges on the audit trail to the second code schema. Now the first schema can create the code that inserts into the audit trail. The second schema can create some views for reporting or use stored procedures that return ref cursors instead.
Last, you’ll create a schema that has CREATE SESSION and EXECUTE privileges on the code in the first application schema and then create an administrative login that has CREATE SESSION and EXECUTE privileges on the code in the second schema. This is the concept of least privileges put into action to the fullest. The administrative schema will use code in the application schema to audit itself and will be able to report on—but not modify—the audit trail. The application schema will also be able to audit itself but not read the audit trail (because it has no reason to).
To witness this multischema architecture idea in action—with all the details, code, and more—see the Database 2 Day Developer’s Guide, Chapter 9, “Developing a Simple Oracle Database Application.”

Use Bind Variables

Did you know that if your SQL uses bind variables for all variables that can change from execution to execution, your code cannot be SQL-injected? On the other hand, if you use string concatenation to put these variables into your SQL, your code can be SQL-injected!
That is, if you issue SQL such as SELECT * FROM EMP WHERE ENAME LIKE ? and you bind in a value for the ?, no one will be able to change the meaning of your SQL, regardless of what they send you. On the other hand, if you build your SQL statement by using string concatenation like this:
"SELECT * FROM EMP WHERE ENAME
LIKE '" + some_variable + "'"

it will be far too easy for your code to be SQL-injected.
In my experience, many, if not most, database attacks are performed by SQL injection, whereby the attacker sends you input that makes your resulting SQL different from what you intended. There are programmatic ways to combat this. For example, you can use the DBMS_ASSERT package in PL/SQL when building SQL, write your own “sanitizer” routines to verify that the inputs are safe to concatenate, and write lots of code. You’ll still have to worry about attack vectors you haven’t thought of (see bit.ly/tkbinject for an interesting example of a SQL injection attack most people would not see coming). So whatever programmatic strategy you use, there will still be concern that your code is not as secure as you think it is.
Or you can use bind variables. If you use a bind variable, it will be impossible—repeat, impossible—for an attacker to change SELECT * FROM EMP WHERE ENAME LIKE ? into any other SQL. On the other hand, it would be relatively easy for an attacker to try to change
"SELECT * FROM EMP WHERE ENAME
LIKE '" + some_variable + "'"

into
SELECT * FROM EMP WHERE ENAME
LIKE '' or 1=1 -- '

by providing the input
' or 1=1 --

That input would change the meaning of your query entirely. Additionally, attackers might instead try to input
' UNION ALL SELECT… FROM T -- '

Think about what that would do to your query. Instead of querying the EMP table, your attackers would now be querying some other table T (a SQL injection bug, once found, typically gives at least READ access to every object the schema has read access to).
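The contrast between the two forms is easy to reproduce in a few lines of client code. Here is a minimal sketch using Python’s built-in sqlite3 module as a stand-in database (the EMP rows and the attacker string are illustrative, not from the article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (empno integer, ename text)")
conn.executemany("insert into emp values (?, ?)",
                 [(7839, "KING"), (7782, "CLARK"), (7934, "MILLER")])

attacker_input = "' OR 1=1 --"

# String concatenation: the input rewrites the WHERE clause and the
# query returns every row -- exactly the injection described above.
injected_sql = "select * from emp where ename like '" + attacker_input + "'"
leaked = conn.execute(injected_sql).fetchall()
print(len(leaked))   # all 3 rows leak

# Bind variable: the same input is treated purely as data; it cannot
# change the meaning of the SQL statement.
safe = conn.execute("select * from emp where ename like ?",
                    (attacker_input,)).fetchall()
print(len(safe))     # 0 rows -- no employee is literally named "' OR 1=1 --"
```

Because the bound value reaches the database as pure data, the attacker’s quote and comment characters never reach the SQL parser, which is exactly why bind variables close the injection hole.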
If you do not use bind variables in your application for inputs into your query, I firmly believe you’ll have to
  • Write lots of additional procedural code to sanitize inputs (and lose sleep every night wondering if you did it perfectly every time and everywhere).
  • Submit your code to be reviewed by at least five people who do not like you. The reason for the “do not like you” part is that they must be motivated to search long and hard for any mistakes you might have made. If they like you—or even worse, respect you—they might not look hard enough.
But following these steps will not guarantee security. Your code may still be SQL-injectable, because it might not be perfect and the reviewers might not find everything.
Remember: bugs happen to everyone. Bugs, including ones that allow for SQL injection, happen to me more times than I can count. Consider the article I wrote years ago on SQL injection. After you read the section on SQL injection in that article, I encourage you to read on and look at the last section. There I used a stored procedure to do “selective granting”—similar to the truncate example earlier in this article. But note the “note” there about revised content. My original stored procedure—the one that was printed in the hard-copy magazine, never to be fixed—had a SQL injection flaw in it! Yes, in an article on SQL injection, I supplied some code that was SQL-injectable. It can happen to anyone—highly experienced programmers, novice programmers . . . everyone.

Have Multiple Levels of Defense

Having multiple levels of defense is another basic security tenet, right up there with the least privileges concept. You want to have security in depth—security at multiple levels.
Suppose you put all your security logic in the application, so the folks at the network/database/storage level don’t have to worry about anything. Someone will find a way around that security. It is not a matter of if attackers will find a way around it, but when.
If, on the other hand, you have multiple layers of defense—multiple repetitive layers of defense—a hole in any one defense level won’t mean that your data will be compromised. For example, suppose for some reason that your application uses string concatenation and does not use bind variables. In that case, I would suggest that you
  • Procedurally sanitize your application inputs to validate them.
  • Have your string concatenation code reviewed so that multiple eyes look at it to validate it.
  • Employ Oracle Database Firewall to catch SQL injection flaws when they inevitably occur (from not using bind variables!).
  • Use the concept of least privileges so that if all other defenses fail, you’ll minimize your risk.
  • Use multiple schemas to further mitigate the security risk (and take least privileges to the farthest point possible).
  • Employ auditing at the application level, firewall level, and database level; consider using Oracle Audit Vault to consolidate all that information; and set up real-time audit policies that look for suspicious activity as it happens.

There are at least six levels of defense right there, but each of those layers might have a flaw in it somewhere—a hole to be exploited. Use multiple layers of defense in case one—or more—of them is defeated.

Tuesday, 16 June 2015

Backup Guidelines

Causes of Unplanned Down Time

Software Failures
o Operating system
o Database
o Middleware
o Application
o Network

Hardware Failures
o CPU
o Memory
o Power supply
o Bus
o Disk
o Tape
o Controllers
o Network
o Power

Human Errors
o Operator error
o User error
o DBA
o System admin.
o Sabotage

Disasters
o Fire
o Flood
o Earthquake
o Power failure
o Bombing

Causes of Planned Down Time

Routine Operations
o Backups
o Performance mgmt
o Security mgmt
o Batches
Periodic Maintenance
o Storage maintenance
o Initialization parameters
o Software patches
o Schema management
o Operating system
o Middleware
o Network
New Deployments
o HW upgrade
o OS upgrades
o DB upgrades
o MidW upgrades
o App upgrades
o Net upgrades


Minimizing Unplanned Downtime Guidelines
  • Use RAID data storage with mirroring (RAID 1+0 is a good choice).
  • Maintain offsite storage of your backups with a reliable vendor, and make it part of your recovery testing program.
  • Normally, for a production database, operating in ARCHIVELOG mode is a must.
  • Multiplex the control files on separate disk drives managed by different disk controllers.
  • Oracle strongly recommends that you multiplex the redo log files.
  • Back up the control file after every major structural change. You can also take a backup of the control file every hour.
  • If you back up to tape, make two copies; the media might be defective.
  • Make the auxiliary files part of your backup: the SPFILE (or the init.ora), sqlnet.ora, tnsnames.ora, and the password and wallet files.
  • Log your backup operations.
  • Give every application its own tablespace.
  • Use the Data Pump utility for supplemental protection.
  • Make a plan to verify that the backups are actually readable and valid.
  • Create a database recovery testing plan.
  • Always keep a redundancy set online (use the flash recovery area for this purpose) so you can recover faster. A redundancy set has:

  •  Last backup of all datafiles
  •  Last backup of the control file
  •  Multiplexed copies of the current redo log files
  •  Copies of the current control file
  •  The archived redo logs since the last backup
  •  Auxiliary files: the SPFILE or the init.ora, listener.ora, tnsnames.ora, and the password file



Friday, 29 May 2015

DBA Interview Questions - 4

Oracle DBA Interview Questions:

1) How to set pga size, can you change it while the database is running?
show parameter pga_aggregate_target;
alter system set pga_aggregate_target=100m;
Yes the pga can be changed while the database is up and running.
2) How to know which parameter is dynamic/static?
ISSES_MODIFIABLE (VARCHAR2(5)): Indicates whether the parameter can be changed with ALTER SESSION (TRUE) or not (FALSE).
ISSYS_MODIFIABLE (VARCHAR2(9)): Indicates whether the parameter can be changed with ALTER SYSTEM and when the change takes effect:
  • IMMEDIATE – Parameter can be changed with ALTER SYSTEM regardless of the type of parameter file used to start the instance. The change takes effect immediately.
  • DEFERRED – Parameter can be changed with ALTER SYSTEM regardless of the type of parameter file used to start the instance. The change takes effect in subsequent sessions.
  • FALSE – Parameter cannot be changed with ALTER SYSTEM unless a server parameter file was used to start the instance. The change takes effect in subsequent instances.
SQL> desc v$parameter
SQL> select distinct ISSYS_MODIFIABLE from v$parameter;
3) How to know how much free memory available in sga?
select * from v$sgastat where name = 'free memory';
4) What are oracle storage structures?
Oracle storage structures are tablespaces, segments, extents, and Oracle (data) blocks.
5) List types of Oracle objects
table, index, cluster table, IOT (index-organized table), function, package, synonym, trigger, sequence
6) What is an index, how many types of indexes you know? Why you need an index
An index is an Oracle object used to retrieve data much faster than scanning the entire table. It is like the index page of a book, which contains pointers to the pages, so that we can easily find our way through the book.
If the index page is not there, we have to search each and every page for what we need; similarly, we use indexes in Oracle to retrieve data quickly.
Types of indexes:
B-tree index: the default; used for most searches in SELECT statements (example: a pincode column)
Bitmap index: used for low-cardinality columns (few distinct values), for example a gender column
Function-based index: built on an expression, for example upper(ename) or lower(ename)
Reverse key index: like a B-tree index, but the key bytes are stored reversed; used mostly to speed up inserts of sequentially generated keys
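The payoff of an index can be sketched outside Oracle as well. The following example uses Python’s built-in sqlite3 module (a stand-in for Oracle; the emp table and index name are made up) to show the planner switching from a full table scan to an index search once an index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (empno integer, ename text, pincode text)")
conn.executemany("insert into emp values (?, ?, ?)",
                 [(i, "name%d" % i, "5000%02d" % (i % 100)) for i in range(1000)])

query = "select * from emp where pincode = '500049'"

# Without an index, the planner must scan the whole table.
plan_before = conn.execute("explain query plan " + query).fetchall()[0][-1]
print(plan_before)   # a full scan of emp

# After creating an index on the filtered column, the planner uses it.
conn.execute("create index emp_pincode_idx on emp (pincode)")
plan_after = conn.execute("explain query plan " + query).fetchall()[0][-1]
print(plan_after)    # a search using emp_pincode_idx
```

The same before/after check in Oracle would be done with EXPLAIN PLAN or an AUTOTRACE of the statement.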
7) What is synonym ?
A synonym is used to hide the name (and complexity) of the original object.
For example, user ‘a’ has a table ‘t’ whose name user ‘a’ wants to hide, but user ‘b’ has to access it.
In this case, user ‘a’ can create a synonym for table ‘t’ and grant the SELECT privilege to user ‘b’.
SQL> create synonym aishu for t;
SQL> grant select on aishu to b;
View: DBA_SYNONYMS
Query: Select synonym_name from dba_synonyms;
8) What is sequence?
A sequence is an Oracle object used to generate unique, sequential numbers for a column.
Examples: employee number, account ID.
create table employee (id number, name varchar2(10), salary number);
create sequence aishu_seq start with 1 increment by 1;
insert into employee values (aishu_seq.nextval, 'paddu', 120000);
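The same effect can be mimicked in other databases; here is a tiny Python sketch using sqlite3’s AUTOINCREMENT keys (an analogy only; SQLite has no CREATE SEQUENCE, and the table is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# In SQLite, the closest analogue to a sequence is an AUTOINCREMENT key.
conn.execute("create table employee ("
             "  id integer primary key autoincrement,"
             "  name text, salary integer)")

cur = conn.execute("insert into employee (name, salary) values ('paddu', 120000)")
first_id = cur.lastrowid
cur = conn.execute("insert into employee (name, salary) values ('aishu', 90000)")
second_id = cur.lastrowid

print(first_id, second_id)   # ids are assigned sequentially
```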
9) Define different types of tablespaces you know?
Permanent: SYSTEM, SYSAUX, USERS
Undo: to store undo segments
Temporary: to store sort segments
10) What is the difference between Locally managed tablespace and dictionary managed tablespace?
LMT: a locally managed tablespace stores all the extent mapping (allocation) details in the header of the data file.
DMT: a dictionary-managed tablespace stores all the extent mapping details in the dictionary tables UET$ and FET$.
Because every extent allocation generates recursive SQL against UET$ and FET$, a DMT causes contention in the dictionary cache, which is bad for database performance; an LMT avoids the dictionary because it stores this information in the data file header.
By default, from Oracle 10g onward, extent management is LMT.
11) What is Automatic segment space management? and how to find the tablespace in ASSM?
With Automatic Segment Space Management (ASSM), Oracle tracks the free space inside each segment with bitmaps instead of freelists, so we need not give storage parameters such as FREELISTS or PCTUSED.
SQL>desc dba_tablespaces;
SQL> select tablespace_name, allocation_type,segment_space_management,extent_management from dba_tablespaces;
12) What is uniform segment? How to find it?
With uniform extent allocation, every extent in the tablespace has the same size.
SQL>desc dba_tablespaces;
SQL> select tablespace_name, allocation_type,segment_space_management,extent_management from dba_tablespaces;
13) Where do  you see the tablespace information?
SQL>select * from dba_tablespaces;
14) How to find the datafiles that associated with particular tablespace? Ex: System
SQL> desc dba_data_files
SQL> select * from dba_data_files where tablespace_name = 'SYSTEM';
15) How to see which undo tablespace is used for database?
SQL> show parameter undo_tablespace
NAME                                 TYPE        VALUE
———————————— ———– ——————————
undo_tablespace                      string      UNDOTBS3
SQL>
16) How to see the default temporary tablespace for a database?
SQL> select PROPERTY_NAME, PROPERTY_VALUE from database_properties where PROPERTY_NAME like '%TEMP%';
PROPERTY_NAME                    PROPERTY_VALUE
------------------------------   --------------
DEFAULT_TEMP_TABLESPACE          TEMP2
17) How to see what is the default block size for a database ?
SQL> show parameter block_size;
NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_block_size                        integer     8192
18) Is it possible to have multiple blocksizes in a database if so how? Explain
Yes, it is possible, but we have to configure an additional DB buffer cache for each extra block size by setting the corresponding cache parameter. Doing that requires a database bounce.
For example, if you want a 2K block size along with the default 8K:
SQL> show parameter db_2k_cache_size
SQL> alter system set db_2k_cache_size=100M scope=spfile;
SQL> shut immediate
SQL> startup
Now we can create the tablespace using the new 2K block size, for example:
SQL> create tablespace test_2k datafile '/u01/oradata/paddu/test2k.dbf' size 100M blocksize 2k;
19) What is the oracle block, can you explain?
An Oracle block is the lowest level of the storage structure; it is where the actual data is stored.
A block is divided into several sections: the block header, the ITL (interested transaction list), the row directory, the row data (each row with its own row header), and free space.
20) Can you change the blocksize once the database is created?
Absolutely not, because once the datafiles are formatted with the 8K block size, we cannot change the database block size. If you need a different block size, you have to create a fresh database with the new block size and restore from a backup or an export/import.
21) Can you change the database name once the database is created?
Yes, we can, but the database has to be shut down, and the operation rewrites the headers of all the files.
Option 1: Using the NID utility
a) SQL> alter database close;
b) nid target=sys dbname=ketan (this changes the control files and the headers of the datafiles with the new name)
c) cd $ORACLE_HOME/dbs
d) cp initaishu.ora initketan.ora
e) vi initketan.ora, find the db_name parameter, and change it to ketan
f) vi /etc/oratab and change the name aishu to ketan
g) . oraenv and set the variable ORACLE_SID=ketan
h) start up the database
SQL> startup (the open will error out, since the database should be opened with RESETLOGS)
i) SQL> alter database open resetlogs;
Option 2: By re-creating the control file
a) SQL> alter database backup controlfile to trace;
b) go to the trace directory and copy the CREATE CONTROLFILE script out of the trace file
c) in the script, update the database name from aishu to ketan (and change CREATE CONTROLFILE REUSE to CREATE CONTROLFILE SET)
d) shut down the database
e) remove the old control files
f) vi the init file and change db_name from aishu to ketan
g) SQL> startup nomount
h) run the control file script we modified earlier (this creates a new control file with the new dbname)
i) SQL> alter database open resetlogs;
22) Can you change the instance name once the database is created?
SQL> alter system set instance_name='test' scope=spfile;
shut immediate;
cp spfilepaddu.ora spfilekarthika.ora
export ORACLE_SID=karthika
startup
23) Can you rename the tablespace once it is created?
SQL>alter tablespace testtbs1 rename to testtbs3;
24) Can you rename the user once it is created?
No, there is no direct command to change a user name, but there is a workaround:
1) export user
exp / as sysdba owner=paddu file=/home/oracle/paddu.dmp
sqlplus / as sysdba
SQL> drop user paddu cascade;
SQL> create user aishu identified by aishu;
SQL>grant connect,resource to aishu;
SQL> exit
imp file=/home/oracle/paddu.dmp fromuser=paddu touser=aishu
sqlplus / as sysdba
SQL>select username from dba_users;
25) Can you rename the table once it is created?
Yes.
Connect as user aishu:
SQL> connect aishu/aishu
SQL> select * from tab;
SQL> create table t as select * from user_tables;
(or)
SQL> create table dummy (id number, name varchar2(50), salary number);
SQL> rename t to t1;
26) Can you rename the column in a table?
Yes.
SQL> desc t;
SQL> alter table t rename column result_cache to aishu;
27) Where you can see the datafile information?
desc dba_data_files;
SQL> select tablespace_name,file_name,autoextensible, bytes/1024/1024 SIZE_MB from dba_data_files;
28) Where you can see the tempfile or tablespace information?(for a particular database)
desc dba_temp_files;
desc dba_tablespaces;
29) What is the difference between v$ views and dba views?
DBA_* views are static data dictionary views; V$ views are dynamic performance views.
DBA_* views are available only when the database is open, whereas V$ views can be queried even when the database is only mounted.
30) What is the difference between a role and privilege , can you provide an example?
A role is a named set of privileges.
A privilege is an authorization given to a user to perform an action such as CREATE, ALTER, DROP, TRUNCATE, INSERT, UPDATE, or DELETE.
How many types of privileges are there?
System privileges: for example, CREATE SESSION and CREATE USER
Object privileges: for example, INSERT, UPDATE, DELETE, or SELECT on a table
31) Where to view the roles and privileges assigned to a user?
For roles:
dba_roles can be used to list all the roles in a database.
select * from dba_roles;
role_sys_privs can used to know what are all the system privileges assigned to that role.
role_tab_privs can be used to know what are all the object privileges assigned to that role
dba_role_privs can be used to know the grantees assigned to that role
For privileges:-
dba_tab_privs: can be used to know what all privileges assigned to a user
SQL> select grantee, owner, table_name, grantor, privilege from dba_tab_privs where grantee = 'AISHU';
dba_sys_privs: to know what all system privileges assigned to a user
SQL> select grantee, privilege from dba_sys_privs where grantee = 'AISHU';
32) What is the difference between with grant option and with admin option while assigning privileges?
WITH GRANT OPTION: the grantee can pass that object privilege on to other users.
WITH ADMIN OPTION: used with system privileges and roles, so that the grantee can grant them to others.
grant select on T to aishu with grant option;
Aishu can now grant select on table T to anyone.
grant dba to aishu with admin option;
Aishu can now grant or manage the DBA role and assign it to anyone.
34) How to revoke a privilege or role?
revoke select on T from aishu;
revoke suresh from aishu; (suresh is a role here)
33) How to change the default tablespace for a user?
alter user aishu default tablespace t;
34) How to give a tablespace quota to a user?
alter user aishu quota unlimited on t;
(or a fixed amount: alter user aishu quota 100m on t;)
36) What are constraints? Can you list them and when will you use them?
Constraints in Oracle are used to protect the integrity of the data.
For example, a NOT NULL constraint will not allow any null value in the column;
a UNIQUE constraint will not allow any duplicate value in the column;
a PRIMARY KEY constraint will not allow any duplicate or null value in the column;
a CHECK constraint allows only values that satisfy a given condition;
a FOREIGN KEY constraint references the primary key of another table, which means the data must already exist in the primary key (master) table.
Master table (a table that contains the primary key):
SQL> create table pincode (area varchar2(30), pincodenum number primary key);
SQL> insert into pincode values ('Miyapur', 500049);
SQL> insert into pincode values ('Ameerpet', 500084);
Child table (a table that refers to the primary key column):
SQL> create table employee (empname varchar2(30), empid number unique,
     address1 varchar2(20) check (address1 = 'Hyderabad'), address2 varchar2(20), pincode number,
     constraint pin_fk foreign key (pincode) references pincode (pincodenum));
SQL> insert into employee values ('Aishu', 1, 'Hyderabad', 'Ameerpet', 500084);
37) What is row chaining? When does it occur? Where can you find it? What is the solution?
When a row is too large to fit in a single block at insert time, Oracle stores part of the row in one block and the rest in another, leaving a pointer between the two blocks.
After analyzing the table, the chained-row count is visible in:
select table_name, chain_cnt from dba_tables where table_name='TABLENAME';
Solutions:
Move the table to a tablespace with a bigger block size:
1) create tablespace ts datafile '/u01/oradata/aishu/paddu.dbf' size 100m blocksize 16k;
2) alter table employee move tablespace ts;
Here TS is the tablespace with the bigger block size; before creating it, a buffer cache for that block size must already be configured (e.g. db_16k_cache_size).
38) What is row migration? When does it occur? Where can you find this information?
Row migration happens when an update makes a row too large for its current block: the entire row is moved to a new block, leaving a forwarding pointer behind. It shows up in the same CHAIN_CNT column (after ANALYZE):
select table_name, chain_cnt from dba_tables where table_name='TABLENAME';
Solution:
Set an adequate PCTFREE storage parameter for the table so updated rows have room to grow in place.
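A hedged sketch of the fix (table name is illustrative): raising PCTFREE leaves room for rows to grow in place, and moving the table rewrites the already-migrated rows:
SQL> alter table employee pctfree 30;
SQL> alter table employee move;
SQL> alter index <index_name> rebuild;
(Indexes on the table become UNUSABLE after a move and must be rebuilt.)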
39) How to find whether the instance is using an spfile or a pfile?
show parameter spfile;
If the VALUE column shows a file path, the instance was started with an spfile; if it is empty, a pfile was used.
40) How to create a password file?
orapwd file=$ORACLE_HOME/dbs/pwaishu.ora entries=5 ignorecase=y
41) How to create a database manually? Can you provide steps briefly?
1) Create a parameter file in the $ORACLE_HOME/dbs directory with the necessary parameters such as db_name, instance_name, control file locations, sga_max_size, etc.
2) Create the necessary directories for datafiles, trace files, redo log files and control files according to OFA.
3) Prepare and run the CREATE DATABASE command.
4) Create the catalog views by running catalog.sql and catproc.sql (then check for invalid objects and recompile).
5) Add entries in listener.ora and tnsnames.ora.
6) Add an entry in /etc/oratab.
42) What is OFA? What is the benefit of it?
Optimal Flexible Architecture.
It is a standard directory layout in which the different file types (redo logs, control files, datafiles, etc.) are kept in their designated locations. This keeps the files easy to track and manage, and in addition the I/O is distributed across directories.
43) What do the SYSTEM tablespace and SYSAUX tablespace contain?
The SYSTEM tablespace stores the Oracle base tables and data dictionary objects: the metadata (data about data) that Oracle itself needs.
SYSAUX tablespace: from 10g onwards Oracle moved some of the dictionary-related data out of SYSTEM into SYSAUX, to reduce the burden on a single tablespace;
for example session statistics, system statistics, AWR (Automatic Workload Repository) data and execution statistics.
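To see what actually lives in SYSAUX, query the occupants view:
SQL> select occupant_name, space_usage_kbytes from v$sysaux_occupants order by space_usage_kbytes desc;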
44) Do you know about statistics? What is the use of it? What kinds of statistics exist in the database?
Statistics are collected information about the data and the database.
There are different types of statistics that Oracle maintains:
1) System statistics: statistics about the hardware, such as CPU speed and I/O read/write times: select * from sys.aux_stats$;
2) Object statistics: for a table, Oracle collects the number of rows, number of blocks, average row length, etc. We can view them with:
SQL> select table_name, num_rows, blocks, avg_row_len from dba_tables;
For an index, Oracle collects statistics on the index such as the number of rows, the number of root, branch and leaf blocks, the number of distinct values, etc.
46. Why do you need statistics to be collected?
These statistics help the query optimizer determine how best the data can be accessed.
45) Where to find the table size?
A table is a segment in a tablespace; the segment contains extents and the extents contain blocks. For example, a table T occupying 100 blocks of 8 KB is 100 * 8192 = 819,200 bytes.
dba_segments
SQL> select segment_name, bytes/1024/1024 from dba_segments where segment_name='T';
46) How to find the size of a database?
select sum(bytes) from dba_segments;
(This gives the space used by segments; the total allocated size comes from summing dba_data_files and dba_temp_files.)
46) Where to find different types of segments in oracle database?
select distinct segment_type from dba_segments;
47) How to resize a datafile?
ALTER DATABASE DATAFILE '/u02/oracle/rbdb1/stuff01.dbf' RESIZE 100M;
(Autoextend is not required for a manual resize; AUTOEXTEND only controls automatic growth.)
Can I resize a datafile to less than its current size?
I have a 1 GB datafile and want to resize it to 100 MB,
but the data in that datafile extends up to 500 MB.
Can I resize it to 100 MB?
No. A datafile can only be shrunk down to the highest block in use; since about 500 MB is in use here, resizing to 100 MB fails with ORA-03297. Any size above the used portion, e.g. 600 MB, would work.
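A hedged sketch (file_id 4 and the 8192-byte block size are assumptions; adjust for your own file) to estimate the smallest size a datafile can be shrunk to:
SQL> select max(block_id + blocks) * 8192 min_size_bytes
     from dba_extents
     where file_id = 4;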
48) How to add a datafile to a tablespace?
alter tablespace t add datafile '/u01/oradata/paddu/tbs1' size 100M;
49) How to delete a datafile?
alter tablespace t drop datafile '/u01/oradata/paddu/tbs1';
Note: DROP DATAFILE only works if the datafile is empty.
50) How to move datafiles from one location to another? Can you provide the steps?
1. Connect as SYSDBA with the CONNECT / AS SYSDBA command.
2. Take the affected tablespace offline with ALTER TABLESPACE <tablespace name> OFFLINE;
3. Copy the datafiles from the old location to the new location using OS cp.
4. Update the location of the datafiles in the Oracle data dictionary using the following syntax:
ALTER DATABASE RENAME FILE '<old location>' TO '<new location>';
5. Bring the tablespace online again with ALTER TABLESPACE <tablespace name> ONLINE;
51) What is a profile? What is the benefit of a profile? Where do you see profile information? Provide an example.
A profile is a named set of resource and password limits assigned to a user:
for example password complexity, password reuse, password expiry and idle time.
SQL> desc dba_profiles;
SQL> select username, profile from dba_users;
52) How to change the profile of a user?
alter user username profile profile_name;
53) How to create user?
create user username identified by password default tablespace testtbs1 profile test;
ex: create user paddu identified by paddu default tablespace testtbs1 profile test;
SQL> select username,profile from dba_users;
SQL> grant connect,resource to paddu;
54) How to create a schema?
A schema is the set of objects owned by a user, so creating the user effectively creates the schema.
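For completeness, a CREATE SCHEMA statement does exist, but it only creates tables, views and grants in a single transaction within the current user's schema; it does not create the user. A hedged sketch (object names are illustrative, and AUTHORIZATION must name the current user):
SQL> create schema authorization paddu
       create table dept (deptno number primary key, dname varchar2(20))
       create view dept_v as select dname from dept
       grant select on dept_v to aishu;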
55) How to grant privileges to a user?
Using the GRANT command:
grant create table to user;
grant create table to user with admin option;
Note: for a system privilege like CREATE TABLE it is WITH ADMIN OPTION that lets the user grant the privilege onward; WITH GRANT OPTION is used for object privileges.
56) Can you delete the alert log while the database is up and running?
show parameter background_dump_dest; (to find its location)
Yes, one can delete or move the alert log file while the database is up and running with no impact; Oracle automatically creates a new alert log the next time it needs to write to one and finds none in the directory.
57) What is fragmentation of a table?
Fragmentation of a table occurs after large purges or deletions: Oracle does not reuse the emptied blocks and always tries to allocate new extents above the high-water mark, so the table grows larger than the space its data actually needs.
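A hedged sketch of reclaiming the fragmented space (table name is illustrative; SHRINK requires row movement and an ASSM tablespace):
SQL> alter table employee enable row movement;
SQL> alter table employee shrink space;
Alternatively, ALTER TABLE ... MOVE followed by index rebuilds achieves a similar result.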
58) What is a cursor?
A cursor is a handle, or pointer, to a private SQL work area (the context area), which resides in the PGA, while the shared part of the statement resides in the SGA. Through the cursor, a PL/SQL program can control the context area and what happens to it as the statement is processed; in particular, cursors allow you to fetch and process the rows returned by a SELECT statement one at a time. Cursors come in two forms: implicit and explicit.
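A hedged PL/SQL sketch of an explicit cursor (table and column names are illustrative; SET SERVEROUTPUT ON is needed to see the output):
SQL> declare
       cursor c_emp is select empname from employee;
       v_name employee.empname%type;
     begin
       open c_emp;
       loop
         fetch c_emp into v_name;
         exit when c_emp%notfound;
         dbms_output.put_line(v_name);
       end loop;
       close c_emp;
     end;
     /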
59) Can you tell various dynamic views you know about and their purpose?
v$session: shows information about the sessions logged into the database
Ex: select sid, status, username, logon_time, blocking_session, module, event, sql_id from v$session;
v$process: to view the OS process attached to a session
Ex: select pid, spid, addr from v$process;
v$database: To view the database information
Ex:  select dbid,name,open_mode,created from v$database;
v$instance : To view the instance information
Ex: SQL>select  INSTANCE_NAME, HOST_NAME,STATUS,STARTUP_TIME from v$instance;
v$lock : To view lock information
Ex: SQL> select sid, type, id1, lmode, request from v$lock;
SID is the session that is requesting or holding the lock.
TYPE shows what kind of enqueue (lock) it is.
ID1 identifies the object involved in the lock; match this object id against dba_objects to get the object name.
LMODE is the lock mode held, ranging from 1 to 6.
6 is the highest level, an exclusive lock: when we update a row, that row is locked exclusively so that no one else can modify it.
Modes 1-5 are lesser levels covering various shared and table-level locks, for example:
Row exclusive - any DML locks the affected rows exclusively so that no one else can modify them.
Row share - taken by operations such as SELECT ... FOR UPDATE; a plain SELECT takes no row locks in Oracle, so readers never block writers.
Table lock - while DML runs against a table, no one else can modify the structure of the table, although other row-level DML is still allowed.
v$parameter - Displays information about parameters in the database
SQL> select name, value, issys_modifiable from v$parameter where name='sga_target';
v$sgastat :  Displays information about sga individual pool sizes and also displays free memory in the sga
SQL> select * from v$sgastat;
v$sgainfo : Displays information about sga  pool sizes
SQL> select * from v$sgainfo;
v$transaction: Displays information about the transactions that running in the database
SQL>select * from  v$transaction;
v$pgastat: displays information about PGA allocation in the database
SQL> select * from v$pgastat;
v$sga_resize_ops : Displays information about sga resize operation when sga target is set
SQL> select * from v$sga_resize_ops;
v$sysstat : Displays system-wide statistics
SQL> select * from v$sysstat;
v$sesstat : Displays session-level statistics
SQL> select * from v$sesstat;
(v$sesstat repeats each statistic from v$sysstat, but once per session.)
v$logfile : Displays information about the redo log files
SQL> select group#,member,status from v$logfile;
v$log : Displays information about redolog groups
SQL>select group#,members,status from v$log;
v$undostat : Displays information about undo usage in the database
v$sysaux_occupants : Displays information about the objects that reside in the SYSAUX tablespace
60) Difference between v$ views and dba_ views?
V$ views are dynamic and populated from fixed (X$) tables such as X$BH.
DBA_ views are static dictionary views built on top of the dictionary base tables; for example dba_users is built from base tables such as USER$.
60) Where to view session information?
SQL> select sid, status, username, logon_time, machine, sql_id, blocking_session, event from v$session where sid = <sid>;
If you do not know the SID, filter on any other column you do know in the WHERE condition.
61) Where to view the process associated with a session?
select sid, status, username, action, program, machine from v$session where paddr in (select addr from v$process where spid=5046);
61) Where to view the locks in oracle database?
v$lock
62) What are locks?
Locks are a low-level serialization mechanism (called enqueues in Oracle) that protects the database during data changes.
63) What are latches?
Latches are also a kind of lock, but held for a very short time, protecting the memory structures of the instance.
64) Where does an Oracle latch or lock occur?
An Oracle latch occurs in the memory structures of the instance, e.g. buffer latches, redo log latches, shared pool latches.
An Oracle lock occurs at row/block level to protect the integrity of the data (as the data is stored in blocks), e.g. row locks, table locks.
65) Where to see the information about latches?
v$latch and v$latch_children
66) How to switch from pfile to spfile?
SQL>create spfile from pfile;
bounce the database;
Now the database will pickup the spfile automatically
67) Explain the difference between a data block, an extent and a segment.
A data block is the lowest-level storage structure; a block cannot span multiple extents.
An extent is a contiguous set of blocks within a single datafile; an extent cannot span multiple segments.
A segment is a set of extents belonging to one object; a segment can span multiple datafiles within its tablespace.
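The hierarchy is visible directly in the dictionary; for the table T owned by AISHU used elsewhere in these examples:
SQL> select segment_name, extent_id, file_id, block_id, blocks
     from dba_extents
     where owner='AISHU' and segment_name='T'
     order by extent_id;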
68) How to get the DDL of a table or index, i.e. the CREATE statement?
Use dbms_metadata.get_ddl; the argument order is (object_type, object_name, schema):
SQL> select dbms_metadata.get_ddl('TABLE','T','AISHU') from dual;
DBMS_METADATA.GET_DDL('TABLE','T','AISHU')
--------------------------------------------------------------------------------
CREATE TABLE "AISHU"."T"
(    "X" VARCHAR2(100)
) SEGMENT CREATION IMMEDI
SQL> set long 1000
SQL> /
DBMS_METADATA.GET_DDL('TABLE','T','AISHU')
--------------------------------------------------------------------------------
CREATE TABLE "AISHU"."T"
(    "X" VARCHAR2(100)
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DE
FAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TESTTBS2"
For a user:
SQL> select dbms_metadata.get_ddl('USER','AISHU') from dual;
DBMS_METADATA.GET_DDL('USER','AISHU')
--------------------------------------------------------------------------------
CREATE USER "AISHU" IDENTIFIED BY VALUES 'S:02BB0F7450ED93B75191C06AD3CD1E1E7
DB11803941FB7B885803634D39F;F8EF185F1D85D4B3'
DEFAULT TABLESPACE "TESTTBS1"
TEMPORARY TABLESPACE "TEMP2"
69) What is the difference between a TEMPORARY tablespace and a PERMANENT tablespace?
A temporary tablespace is used for sort operations and for temporary tables.
A permanent tablespace contains the permanent business data.
70) What is a checkpoint? Why does the database need it?
A checkpoint occurs, among other times, whenever a redo log switch happens. During a checkpoint the CKPT process writes the checkpoint information to the control file and the datafile headers, and DBWR flushes the dirty buffers from the buffer cache to disk up to that checkpoint. This limits the amount of redo that must be applied during crash recovery.
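A checkpoint can also be forced manually, which is sometimes done before planned maintenance:
SQL> alter system checkpoint;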
71) What is a log switch? When does it occur?
A log switch occurs when the current redo log group is full and the log writer moves on to the next redo log group.
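A log switch can also be triggered manually, and the effect observed in v$log:
SQL> alter system switch logfile;
SQL> select group#, sequence#, status from v$log;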
71) Where to view the undo usage information?
v$undostat
72) How to set the log archive destination? Can we have multiple destinations for archive logs?
Yes, we can have multiple destinations for archive logs:
up to 31 (LOG_ARCHIVE_DEST_1 through LOG_ARCHIVE_DEST_31) from 11g Release 2 onwards.
To see the current destinations and set them accordingly:
show parameter log_archive
SQL> alter system set log_archive_dest='' scope=memory;
System altered.
SQL> alter system set log_archive_dest_1='location=/u02/archives/paddu' scope=memory;
System altered.
SQL> alter system set log_archive_dest_2='location=/u01/archives/paddu' scope=memory;
System altered.
SQL>
73) Can you rename a database? Provide steps.
Yes, we can rename a database; we have two options.
Up to 10g:
As the database name is written in the control file, we have to change the database name in the control file and in the init file.
1. ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
2. The step above creates a text control file script in the user_dump_dest directory.
3. Change the name of the database in that script and in the init.ora file.
4. STARTUP NOMOUNT
5. Run the script that was modified in step 3.
6. ALTER DATABASE OPEN RESETLOGS;
From 10g onwards:
Use the NID (DBNEWID) utility.
If I change the database name, are my backups still valid?
Invalid: RMAN identifies backups by DBID and database name, so once the name is changed the old backups no longer match.
74) Why do you need to OPEN RESETLOGS, and what does it do?
Whenever an incomplete media recovery is performed, the database must be opened with RESETLOGS, since we do not have the archive or redo information up to the point of failure. RESETLOGS reinitializes the online redo logs and resets the log sequence, starting a new incarnation of the database.
75) How to multiplex control files?
If we are using a pfile:
show parameter control_files;
Note down the control file location.
Shut down the database.
Copy the control file from the old location to the new location.
Add the new control file location to the control_files parameter in the pfile.
Start up the database.
show parameter control_files; (this now shows both the old and the new location)
If we are using an spfile:
show parameter control_files;
alter system set control_files='<old location>','<new location>' scope=spfile;
Shut down the database.
Copy the control file from the old location to the new location.
Start up the database.
76) How to multiplex redo log files?
alter database add logfile member '<file name>' to group 3;
77) How to add redo log groups to a database?
alter database add logfile group 6 ('<file name>') size 50m;
Verify with v$log and v$logfile.
78) Can you drop a redo log group while the database is up and running?
Yes, we can drop a redo log group, but the group must be INACTIVE (and at least two groups must remain).
79) Can you drop the SYSTEM tablespace, and if so what happens to the database?
No, we cannot drop the SYSTEM tablespace; Oracle will not allow it.
80) Can you drop a normal tablespace, and if so what happens to the database?
Yes, we can drop a normal tablespace, but the objects in it will be dropped with it.
The tablespace must be empty; if not, we have to use:
SQL> drop tablespace <tablespace name> including contents;
If you also want to drop the associated datafiles along with the tablespace, use:
SQL> drop tablespace <tablespace name> including contents and datafiles;
81) What is the difference between ORACLE_HOME and ORACLE_BASE?
ORACLE_BASE is the root directory for Oracle; ORACLE_HOME, located beneath ORACLE_BASE, is where the Oracle products reside.
82) Where do you check free space?
dba_free_space shows the free space within each tablespace:
select * from dba_free_space;
83) How to find and kill a blocking session?
First find the blocking session:
SQL> select sid, username, serial#, status, event, blocking_session from v$session where username='SYS';
Check the BLOCKING_SESSION column to identify the blocker, and confirm with the application team before killing it.
Then execute:
SQL> alter system kill session '<sid>,<serial#>' immediate;
Alternatively, we can also find the lock information in v$lock.
84) Can you kill PMON, SMON or CKPT? What happens to the database?
These are all mandatory background processes; if any of them is killed, the instance crashes.
85) Define the parameters for the different pools of an Oracle instance.
shared pool: shared_pool_size
DB buffer cache: db_cache_size
java pool: java_pool_size
large pool: large_pool_size
streams pool: streams_pool_size
redo log buffer: log_buffer
Alternatively, sga_target (bounded by sga_max_size) can be set to manage these pools automatically.
86) Consider the scenario below:
shared pool (shared_pool_size): 100m
DB buffer cache (db_cache_size): 100m
java pool (java_pool_size): 100m
large pool (large_pool_size): 100m
streams pool (streams_pool_size): 10m
redo log buffer (log_buffer): 5m
Total SGA manually allocated in pools: 415M
I have also set SGA_MAX_SIZE=400M in the pfile and started the database. Which one does Oracle consider, 415M or 400M?
415M: if SGA_MAX_SIZE as specified in the pfile is smaller than the total of all the individually sized pools, the SGA_MAX_SIZE parameter is effectively ignored and raised to cover the pool total.
86) List the process you follow to start looking into a performance issue at the database level (if the application is running very slowly, what do you look at in the database to improve performance?)
Answer (although I have never worked directly on performance issues, these can be the steps):
Run the top command in Unix to see CPU usage (identify CPU-hungry processes).
Run vmstat, sar and prstat to get more information on CPU and memory usage and possible blocking.
Run an AWR report to identify:
1. the top 5 wait events
2. resource-intensive SQL statements
See if statistics on the affected tables need to be regenerated.
If poorly written statements are the culprit, run EXPLAIN PLAN on them and see whether a new index or the use of a hint brings the cost of the SQL down.
87) Can you explain the different startup modes of an Oracle database?
NOMOUNT: starts the instance (memory structures and background processes) only.
MOUNT: Oracle reads the control file, identifies all the datafiles and redo logs, and keeps them ready.
OPEN: the datafiles are opened read/write and the database is ready for normal operation.
88) Can you explain the different stages of shutdown of an Oracle database?
Close: the changes in the buffer cache are flushed to the datafiles, existing sessions are disconnected and no new sessions are permitted.
Dismount: all the datafiles and the control file are closed.
Instance shutdown: the background processes are stopped and the memory pools are released back to the OS.
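The shutdown itself can be requested in several modes, which differ in how sessions and in-flight transactions are treated:
SQL> shutdown normal;        -- waits for all sessions to disconnect
SQL> shutdown transactional; -- waits for active transactions to finish
SQL> shutdown immediate;     -- rolls back active transactions and disconnects sessions
SQL> shutdown abort;         -- kills the instance; crash recovery runs at next startup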
89) How to know how many Oracle homes or Oracle instances exist on a database host?
When an Oracle installation completes, the installer updates the file /etc/oratab with the new home; from this file we can find how many homes exist.
To find how many instances are running, use:
ps -eaf | grep pmon
90) What is the difference between PuTTY and SQL*Plus?
PuTTY is an SSH client used to connect to the database host remotely.
SQL*Plus is Oracle's command-line client used to connect to the database, typically via a TNS names entry.
91) What is a TNS string?
A TNS string is a connect descriptor identifying the database host, port and service/database name; SQL*Plus and other clients use it to reach the right database host.
92) What is a TNS entry?
A TNS entry is an alias for the address of the database host and database, written in tnsnames.ora; tnsnames.ora is generally located at $ORACLE_HOME/network/admin.
92) How to change the database into archivelog mode?
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/archives/paddu
Oldest online log sequence     24
Next log sequence to archive   26
Current log sequence           26
We have to bring the DB to mount mode and then enable archiving:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
To disable archiving, again from mount mode, use:
SQL> alter database noarchivelog;
93) How to set the password to expire after 90 days?
Identify the profile for that user:
SQL> select username, profile from dba_users where username='USER';
SQL> select * from dba_profiles where profile='DEFAULT';
Change the password lifetime for the profile:
SQL> alter profile default limit password_life_time 90;
94) How to set the new password for Oracle user?
alter user username identified by newpassword;
95) How to set the same password for an Oracle user when the password has expired?
Capture the stored password hash, then reapply it:
select username, password from dba_users where username='USERNAME';
SQL> alter user username identified by values '<above password hash>';
(Note: from 11g the PASSWORD column in dba_users is blank; the hash must be taken from sys.user$ or dbms_metadata.get_ddl instead.)
96) Where do you find the password (hash) for an Oracle user?
select username, password from dba_users where username='USERNAME';
97) How to set a new undo tablespace in an Oracle database?
Create a new undo tablespace:
SQL> create undo tablespace undotbs4 datafile '/u01/oradata/paddu/undotbs4.dbs' size 100m;
Tablespace created.
SQL> show parameter undo_tablespace;
NAME                                 TYPE        VALUE
———————————— ———– ——————————
undo_tablespace                      string      UNDOTBS3
SQL> alter system set undo_tablespace='UNDOTBS4' scope=spfile;
System altered.
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  263049216 bytes
Fixed Size                  2212448 bytes
Variable Size             167775648 bytes
Database Buffers           88080384 bytes
Redo Buffers                4980736 bytes
Database mounted.
Database opened.
SQL> show parameter undo_tablespace;
NAME                                 TYPE        VALUE
———————————— ———– ——————————
undo_tablespace                      string      UNDOTBS4
SQL>
100) Renaming a schema
Fastest way: since a transportable tablespace import is performed, the original data is never re-imported; only metadata creation happens. The associated datafiles, with their existing objects (tables, indexes, etc.), are simply attached to and pointed at the new user.
1. create user new_user…
2. grant … to new_user;
3. execute dbms_tts.transport_set_check(…);
4. alter tablespace … read only;
5. exp transport_tablespace=y tablespaces=…
6. drop tablespace … including contents;
7. imp transport_tablespace=y tablespaces=… datafiles=… fromuser=old_user touser=newuser
8. create nondata objects in new_user schema
9. [drop user old_user cascade;]
10. alter tablespace … read write;

Thursday, 28 May 2015

Oracle Cost-Based Optimizer (CBO) and Database Statistics (DBMS_STATS)

Read this awesome article from Oracle-Base 

If you put 10 Oracle performance gurus in the same room they will all say database statistics are vital for the cost-based optimizer to choose the correct execution plan for a query, but they will all have a different opinion on how to gather those statistics. A couple of quotes that stand out in my mind are:
  • "You don't necessarily need up to date statistics. You need statistics that are representative of your data." - Graham Wood.
    Meaning, the age of the statistics in your system is not a problem as long as they are still representative of your data. So just looking at the LAST_ANALYZED column of the DBA_TABLES view is not an indication of valid stats on your system.
  • "Do you want the optimizer to give you the best performance, or consistent performance?" - Anjo Kolk
    Meaning, regularly changing your stats potentially introduces change. Change is not always a good thing.
Neither of these experts is suggesting you never update your stats; they are just pointing out that in doing so you are altering the information the optimizer uses to determine which execution plan is most efficient. In altering that information, it is quite possible the optimizer will make a different decision. Hopefully it will be the correct decision, but maybe it won't. If you gather statistics for all tables every night, your system may potentially act differently every day. This is the fundamental paradox of gathering statistics.
So what should our statistics strategy be? Here are some suggestions.
  • Automatic Optimizer Statistics Collection: From 10g onward the database automatically gathers statistics on a daily basis. The default statistics job has come under a lot of criticism over the years, but its value depends on the type of systems you are managing. Most of that criticism has come from people discussing edge cases, like large data warehouses. If you are managing lots of small databases that have relatively modest performance requirements, you can pretty much let Oracle do its own thing where stats are concerned. If you have any specific problems, deal with them on a case by case basis.
  • Mixed Approach: You rely on the automatic job for the majority of stats collection, but you have specific tables or schemas that have very specific stats requirements. In these cases you can either set the preferences for the objects in question, or lock the stats for the specific tables/schemas to prevent the job from changing them, then devise a custom solution for those tables/schemas.
  • Manual: You disable the automatic stats collection completely and devise a custom solution for the whole of the database.
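The per-object handling mentioned under the mixed approach can be sketched with real DBMS_STATS calls (the schema and table names are illustrative):
EXEC DBMS_STATS.set_table_prefs('SCOTT', 'BIG_TABLE', 'ESTIMATE_PERCENT', '5');
EXEC DBMS_STATS.lock_table_stats('SCOTT', 'VOLATILE_TABLE');
EXEC DBMS_STATS.unlock_table_stats('SCOTT', 'VOLATILE_TABLE');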
Which one of these approaches you take should be decided on a case-by-case basis. Whichever route you take, you will be using the DBMS_STATS package to manage your stats.
Regardless of the approach you take, you need to consider system and fixed object statistics for every database, as these are not gathered by the automatic job.
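Fixed object statistics (for the X$ tables underlying the V$ views) must be gathered explicitly, ideally while the system is under a representative load:
EXEC DBMS_STATS.gather_fixed_objects_stats;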

DBMS_STATS

The DBMS_STATS package was introduced in Oracle 8i and is Oracle's preferred method of gathering statistics. Oracle lists a number of benefits to using it, including parallel execution, long-term storage of statistics and transfer of statistics between servers.
The functionality of the DBMS_STATS package varies greatly between database versions, as do the default parameter settings and the quality of the statistics they generate. It is worth spending some time checking the documentation relevant to your version.

Table and Index Stats

Table statistics can be gathered for the database, schema, table or partition.
EXEC DBMS_STATS.gather_database_stats;
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15, cascade => TRUE);

EXEC DBMS_STATS.gather_schema_stats('SCOTT');
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15);
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15, cascade => TRUE);

EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15);
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15, cascade => TRUE);

EXEC DBMS_STATS.gather_dictionary_stats;
The ESTIMATE_PERCENT parameter was often used when gathering stats from large segments to reduce the sample size and therefore the overhead of the operation. From Oracle 9i upwards, we also had the option of letting Oracle determine the sample size using the AUTO_SAMPLE_SIZE constant, but this got a bad reputation because the selected sample size was sometimes inappropriate, making the resulting statistics questionable.
In Oracle 11g, the AUTO_SAMPLE_SIZE constant is the preferred (and default) sample size, as the mechanism for determining the actual sample size has been improved. In addition, the statistics estimates based on auto sampling are close to 100% accurate and much faster to gather than in previous versions.
The CASCADE parameter determines if statistics should be gathered for all indexes on the table currently being analyzed. Prior to Oracle 10g, the default was FALSE, but in 10g upwards it defaults to AUTO_CASCADE, which means Oracle determines if index stats are necessary.
As a result of these modifications to the behavior in the stats gathering, in Oracle 11g upwards, the basic defaults for gathering table stats are satisfactory for most tables.
Index statistics can be gathered explicitly using the GATHER_INDEX_STATS procedure.
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK', estimate_percent => 15);
The current statistics information is available from the data dictionary views for the specific objects (DBA, ALL and USER views). Some of these views were added in later releases.
  • DBA_TABLES
  • DBA_TAB_STATISTICS
  • DBA_TAB_PARTITIONS
  • DBA_TAB_SUB_PARTITIONS
  • DBA_TAB_COLUMNS
  • DBA_TAB_COL_STATISTICS
  • DBA_PART_COL_STATISTICS
  • DBA_SUBPART_COL_STATISTICS
  • DBA_INDEXES
  • DBA_IND_STATISTICS
  • DBA_IND_PARTITIONS
  • DBA_IND_SUBPARTITIONS
Histogram information is available from the following views.
  • DBA_TAB_HISTOGRAMS
  • DBA_PART_HISTOGRAMS
  • DBA_SUBPART_HISTOGRAMS
Table, column and index statistics can be deleted using the relevant delete procedures.
EXEC DBMS_STATS.delete_database_stats;
EXEC DBMS_STATS.delete_schema_stats('SCOTT');
EXEC DBMS_STATS.delete_table_stats('SCOTT', 'EMP');
EXEC DBMS_STATS.delete_column_stats('SCOTT', 'EMP', 'EMPNO');
EXEC DBMS_STATS.delete_index_stats('SCOTT', 'EMP_PK');

EXEC DBMS_STATS.delete_dictionary_stats;

System Stats

Introduced in Oracle 9iR1, the GATHER_SYSTEM_STATS procedure gathers statistics relating to the performance of your system's I/O and CPU. Giving the optimizer this information makes its choice of execution plan more accurate, since it can weigh the relative costs of operations using both the CPU and I/O profiles of the system.
There are two possible types of system statistics:
  • Noworkload: All databases come bundled with a default set of noworkload statistics, but they can be replaced with more accurate information. When gathering noworkload stats, the database issues a series of random I/Os and tests the speed of the CPU. As you can imagine, this puts a load on your system during the gathering phase.
    EXEC DBMS_STATS.gather_system_stats;
  • Workload: When initiated using the start/stop or interval parameters, the database uses counters to keep track of all system operations, giving it an accurate idea of the performance of the system. If workload statistics are present, they will be used in preference to noworkload statistics.
    -- Manually start and stop to sample a representative time (several hours) of system activity.
    EXEC DBMS_STATS.gather_system_stats('start');
    EXEC DBMS_STATS.gather_system_stats('stop');
    
    -- Sample from now for a specific number of minutes (180 in this case).
    EXEC DBMS_STATS.gather_system_stats('interval', interval => 180);
    
Your current system statistics can be displayed by querying the AUX_STATS$ table.
SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';

PNAME                               PVAL1
------------------------------ ----------
CPUSPEED
CPUSPEEDNW                           1074
IOSEEKTIM                              10
IOTFRSPEED                           4096
MAXTHR
MBRC
MREADTIM
SLAVETHR
SREADTIM

9 rows selected.

If you are running 11.2.0.1 or 11.2.0.2 then check out MOS Note: 9842771.8.
The DELETE_SYSTEM_STATS procedure will delete all workload stats and replace previously gathered noworkload stats with the default values.
EXEC DBMS_STATS.delete_system_stats;
You only need to update your system statistics when something major has happened to your system's hardware or workload profile.
There are two schools of thought about system stats. One side avoids system statistics altogether, favoring the default noworkload stats. The other suggests providing accurate system statistics. The problem with the latter is that it is very difficult to decide what represents an accurate set of system statistics. Most people seem to favor investigating the system using a variety of methods, including gathering system stats into a stats table, then manually setting the system statistics using the SET_SYSTEM_STATS procedure.
EXEC DBMS_STATS.set_system_stats('iotfrspeed', 4096);
The available parameter names are listed in the DBMS_STATS documentation.
I would say, if in doubt, use the defaults.

Fixed Object Stats

Introduced in Oracle 10gR1, the GATHER_FIXED_OBJECTS_STATS procedure gathers statistics on the X$ tables, which sit underneath the V$ dynamic performance views. The X$ tables are not really tables at all, but a window onto the memory structures in the Oracle kernel. Fixed object stats are not gathered automatically, so you need to gather them manually at a time when the database has a representative level of activity.
EXEC DBMS_STATS.gather_fixed_objects_stats;
Major changes to initialization parameters or system activity should signal you to gather fresh stats, but under normal running this does not need to be done on a regular basis.
The stats are removed using the DELETE_FIXED_OBJECTS_STATS procedure.
EXEC DBMS_STATS.delete_fixed_objects_stats;

Locking Stats

To prevent statistics from being overwritten, you can lock the stats at schema, table or partition level.
EXEC DBMS_STATS.lock_schema_stats('SCOTT');
EXEC DBMS_STATS.lock_table_stats('SCOTT', 'EMP');
EXEC DBMS_STATS.lock_partition_stats('SCOTT', 'EMP', 'EMP_PART1');
If you need to replace the stats, they must be unlocked.
EXEC DBMS_STATS.unlock_schema_stats('SCOTT');
EXEC DBMS_STATS.unlock_table_stats('SCOTT', 'EMP');
EXEC DBMS_STATS.unlock_partition_stats('SCOTT', 'EMP', 'EMP_PART1');
Locking stats can be very useful to prevent automated jobs from changing them. This is especially useful with tables used for ETL processes. If the stats are gathered when the tables are empty, they will not reflect the real quantity of data during the load process. Instead, either gather stats each time the data is loaded, or gather them once on a full table and lock them.
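As a sketch of how locks behave: a default gather against a locked table fails with ORA-20005, but the lock can be overridden with the FORCE parameter, and locked objects can be identified from DBA_TAB_STATISTICS.

```sql
-- Gathering stats on a locked table raises an error.
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMP');
-- ORA-20005: object statistics are locked (stattype = ALL)

-- The lock can be overridden without unlocking it first.
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMP', force => TRUE);

-- Identify tables with locked stats.
SELECT table_name, stattype_locked
FROM   dba_tab_statistics
WHERE  owner = 'SCOTT'
AND    stattype_locked IS NOT NULL;
```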

Transferring Stats

It is possible to transfer statistics between servers, allowing consistent execution plans between servers with varying amounts of data. First, the statistics must be collected into a statistics table. In the following examples, the statistics for the APPSCHEMA user are collected into a new table, STATS_TABLE, owned by DBASCHEMA.
EXEC DBMS_STATS.create_stat_table('DBASCHEMA','STATS_TABLE');
EXEC DBMS_STATS.export_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');
This table can then be transferred to another server using your preferred method (Export/Import, SQL*Plus COPY etc.) and the stats imported into the data dictionary as follows.
EXEC DBMS_STATS.import_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');
EXEC DBMS_STATS.drop_stat_table('DBASCHEMA','STATS_TABLE');
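The same mechanism works at table level via EXPORT_TABLE_STATS and IMPORT_TABLE_STATS; the EMP table name below is purely illustrative.

```sql
-- Export stats for a single table (and, with cascade => TRUE, its indexes and columns).
EXEC DBMS_STATS.export_table_stats(ownname => 'APPSCHEMA', tabname => 'EMP', stattab => 'STATS_TABLE', cascade => TRUE, statown => 'DBASCHEMA');

-- On the destination server, import them back into the dictionary.
EXEC DBMS_STATS.import_table_stats(ownname => 'APPSCHEMA', tabname => 'EMP', stattab => 'STATS_TABLE', cascade => TRUE, statown => 'DBASCHEMA');
```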

Setting Preferences

Since Oracle 10g, many of the default parameter values for the DBMS_STATS procedures have changed from being hard coded to using preferences. In Oracle 10g, these preferences could be altered using the SET_PARAM procedure.
EXEC DBMS_STATS.set_param('DEGREE', '5');
In 11g, the SET_PARAM procedure was deprecated in favor of a layered approach to preferences. The four levels of preferences are set with the following procedures.
  • SET_GLOBAL_PREFS: Used to set global preferences, including some specific to the automatic stats collection job.
  • SET_DATABASE_PREFS: Sets preferences for the whole database.
  • SET_SCHEMA_PREFS: Sets preferences for a specific schema.
  • SET_TABLE_PREFS: Sets preferences for a specific table.
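As a brief sketch, a table-level preference overrides the schema, database and global values, and the effective setting can be checked with GET_PREFS; the STALE_PERCENT preference is used here as an example.

```sql
-- Override a preference for a single table.
EXEC DBMS_STATS.set_table_prefs('SCOTT', 'EMP', 'STALE_PERCENT', '5');

-- Check the effective value for that table.
SELECT DBMS_STATS.get_prefs('STALE_PERCENT', 'SCOTT', 'EMP') FROM dual;

-- Remove the table-level override, falling back to the higher-level setting.
EXEC DBMS_STATS.delete_table_prefs('SCOTT', 'EMP', 'STALE_PERCENT');
```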