Oracle Data Warehouse Tuning

This chapter presents the main tools and techniques available to the Oracle10g data warehouse administrator for time-series warehouse tuning.


As corporate data warehouse systems grow from small-scale applications into enterprise-wide systems, the IT manager must be prepared to grow the system without service interruption.  Oracle10g addresses this need with a highly scalable database engine; however, the IT manager must also choose server hardware that allows seamless growth.

What Does a Data Warehouse Need?

Since the earliest days of Decision Support Systems (DSS) in the 1960s, database professionals have recognized that the internal processing of data warehouse applications is very different from that of Online Transaction Processing (OLTP) systems.


Data warehouse applications tend to be very I/O-intensive because they read trillions of bytes of information, and they require specialized servers that can support their typical processing patterns.  Most data warehouses are bi-modal, with batch windows, usually in the evenings, when new data is loaded, indexed, and summarized. The server must offer on-demand CPU and RAM resources, and the database management system must be able to dynamically reconfigure its resources to accommodate these shifts in processing.
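In Oracle10g, this bi-modal reconfiguration can be scripted with dynamic initialization parameters.  As a sketch (the sizes below are hypothetical and assume sga_max_size has been set high enough to accommodate them):

```sql
-- Before the nightly batch window: favor PGA sort/hash work areas for the ETL jobs
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE=MEMORY;
ALTER SYSTEM SET sga_target = 8G SCOPE=MEMORY;

-- After the load, for the daytime query window: favor the data buffer cache
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE=MEMORY;
ALTER SYSTEM SET sga_target = 10G SCOPE=MEMORY;
```

Because SCOPE=MEMORY changes do not persist across instance restarts, shops commonly run these commands from dbms_scheduler or cron jobs at the start and end of the batch window.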


Moore's law, first articulated by Gordon Moore in 1965, observed that transistor density doubles roughly every two years, so processor costs keep falling while speed continues to improve.  However, Moore's law does not apply to RAM speed.


While RAM costs continue to fall every year, the speed of RAM access is constrained by silicon technology and has improved far more slowly than CPU speed over the past three decades.  These trends are shown in Figure 18.1.


Figure 18.1: Speed improvements of CPU versus RAM


Because RAM speed has not improved like CPU speed, RAM must be localized near the CPUs to keep them running at full capacity, and this is a key feature of many of the new Intel-based servers.  Non-Uniform Memory Access (NUMA) has been available for years in high-end UNIX servers running Symmetric Multi-Processor (SMP) configurations.


In order to process large volumes of data quickly, the server must be able to support parallel large-table full-table scans for data warehouse aggregation.  One of the biggest improvements in multi-CPU servers is their ability to use Oracle parallel features for table summarization, aggregation, DBA maintenance, and parallel data manipulation.
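As a sketch (assuming a fact table named sales; the degree of parallelism shown is illustrative), parallelism can be invoked either at the table level or per-query with optimizer hints:

```sql
-- Set a default degree of parallelism for all full scans of the table
ALTER TABLE sales PARALLEL (DEGREE 15);

-- Or force a parallel full-table scan for a single aggregation query
SELECT /*+ FULL(s) PARALLEL(s, 15) */
       product_id, SUM(amount_sold)
FROM   sales s
GROUP  BY product_id;
```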


For example, this divide-and-conquer approach can make a large-table full-table scan run roughly seven times faster on an 8-CPU server and roughly 15 times faster on a 16-way SMP server.  These relationships are illustrated in Figure 18.2.


Figure 18.2:  Data Warehouse large-table full-table scans can be parallelized for super-fast response


Historically, data warehouse applications have been constrained by I/O, but all of this is changing with the introduction of specialized data warehouse techniques, all with the goal of keeping the server CPUs running at full capacity.  These techniques include:

- Partitioning: By having the database place related data physically near other related data, excessive I/O is reduced.

- Materialized Views: By pre-summarizing data and pre-joining tables, queries become less I/O-intensive.

- Advanced indexing: Databases now offer specialized techniques, such as bitmap join indexes and specialized internal table join techniques (STAR transformation), that help shift the processing burden away from I/O.
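The first two techniques can be sketched as follows (the table, column, and view names are hypothetical):

```sql
-- Partitioning: place each year's rows physically together,
-- so queries on a date range touch only the relevant partitions
CREATE TABLE sales (
   sale_date    DATE,
   customer_id  NUMBER,
   amount_sold  NUMBER
)
PARTITION BY RANGE (sale_date) (
   PARTITION sales_2003 VALUES LESS THAN (TO_DATE('2004-01-01','YYYY-MM-DD')),
   PARTITION sales_2004 VALUES LESS THAN (TO_DATE('2005-01-01','YYYY-MM-DD'))
);

-- Materialized view: pre-summarize so that rewritten queries
-- avoid re-reading the full fact table
CREATE MATERIALIZED VIEW sales_by_month
ENABLE QUERY REWRITE
AS
SELECT TRUNC(sale_date,'MM') AS sale_month,
       SUM(amount_sold)      AS total_sold
FROM   sales
GROUP  BY TRUNC(sale_date,'MM');
```

With ENABLE QUERY REWRITE, the optimizer can transparently redirect monthly-summary queries against sales to the much smaller sales_by_month view.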

All of these techniques have helped remove the I/O bottleneck and make data warehouse applications more CPU-intensive.  Several server features are required by all large data warehouse applications:

- Large RAM regions: All 64-bit servers have a 64-bit address space, 2 to the 64th power bytes, which allows for up to 18 billion GB, that is, 18 exabytes! This allows for huge scalability as processing demand grows and lets the database allocate many gigabytes of data buffer storage.

- Fast CPUs: Intel's 64-bit Itanium 2 architecture is far faster than the older 32-bit chipsets.  The advanced features built into the Itanium 2 chipset allow much more real work to be done in each processor cycle. Combined with Oracle10g NUMA RAM support, computationally intensive DSS queries will run at lightning speed.

- High parallelism: Each processing node has four Itanium 2 CPUs connected to local memory modules and an inter-node crossbar interconnect controller via a high-speed bus.  Up to four of these processing nodes can be interconnected to create a highly scalable SMP system. This design allows large-scale parallel processing for Oracle full-table scans, the scattered reads that are the hallmark of Oracle warehouse systems. For example, the Unisys 64-bit ES7000 servers support up to 16 processors, allowing for large parallel benefits.

- High-performance I/O architecture: The I/O subsystem also influences scalability and performance.  Enterprise systems must provide the channel capacity required to support large databases and networks.  The Itanium 2 system architecture can support up to 64 peripheral component interconnect (PCI or PCI-X) 64-bit channels operating at speeds from 33 MHz to 100 MHz.
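On such a 64-bit server, these hardware features translate directly into initialization parameters.  A sketch with hypothetical values:

```sql
-- A 64-bit address space permits a multi-gigabyte SGA and buffer cache
-- (sga_max_size is static, hence SCOPE=SPFILE and a restart)
ALTER SYSTEM SET sga_max_size  = 20G SCOPE=SPFILE;
ALTER SYSTEM SET db_cache_size = 12G SCOPE=SPFILE;

-- Bound the total number of parallel query slaves across all CPUs
ALTER SYSTEM SET parallel_max_servers = 64 SCOPE=SPFILE;
```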

The advent of large RAM regions is also beneficial to the data warehouse.  In most data warehouses, a giant central fact table exists, surrounded by smaller dimension tables.  This schema is illustrated in Figure 18.3.


Figure 18.3:  A typical data warehouse schema


In a typical STAR schema, the large fact-table full-table scans can never be fully cached, but it is important to control the caching of the dimension tables and their indexes.  On a 64-bit server with fast RAM access, the Oracle KEEP pool and multiple buffer caches can be configured to guarantee that the smaller, frequently referenced objects always remain in the data buffers.  This shifts the database bottleneck away from I/O.  Once the bottleneck has shifted from I/O to CPU, performance can be scaled by adding more processors.
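A minimal sketch, assuming dimension tables named customer and product with hypothetical sizes:

```sql
-- Size the KEEP pool large enough to hold all pinned objects
ALTER SYSTEM SET db_keep_cache_size = 512M SCOPE=SPFILE;

-- Assign the small, frequently referenced dimension objects to the KEEP pool
ALTER TABLE customer STORAGE (BUFFER_POOL KEEP);
ALTER TABLE product  STORAGE (BUFFER_POOL KEEP);
ALTER INDEX customer_pk STORAGE (BUFFER_POOL KEEP);
ALTER INDEX product_pk  STORAGE (BUFFER_POOL KEEP);
```

Once the objects have been touched, their blocks age out of the KEEP pool only if the pool is undersized, so sizing it to the sum of the pinned segments is the usual practice.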


Star transformation joins can then be enabled to further speed these data warehouse queries.
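As a sketch, the star transformation is enabled with an initialization parameter and requires bitmap indexes on the fact-table foreign key columns (the table and column names here are hypothetical):

```sql
-- Enable the star transformation for this session
ALTER SESSION SET star_transformation_enabled = TRUE;

-- Bitmap indexes on the fact-table foreign keys are a prerequisite
CREATE BITMAP INDEX sales_cust_bix ON sales(customer_id);
CREATE BITMAP INDEX sales_prod_bix ON sales(product_id);

-- A typical star query joining the fact table to its dimensions
SELECT c.region, p.category, SUM(s.amount_sold)
FROM   sales s, customer c, product p
WHERE  s.customer_id = c.customer_id
AND    s.product_id  = p.product_id
AND    c.region   = 'WEST'
AND    p.category = 'ELECTRONICS'
GROUP  BY c.region, p.category;
```

With the transformation enabled, the optimizer can combine the bitmap indexes to fetch only the qualifying fact rows instead of scanning the entire fact table.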

This is an excerpt from my latest book "Oracle Tuning: The Definitive Reference". 


Copyright © 1996 -  2014 by Burleson Inc. All rights reserved.

Oracle® is the registered trademark of Oracle Corporation.