
[ 2018-09-06 ]

Interpreting an "Explain Plan" (Part I)

What is an explain plan?

An explain plan is a representation of the access path that the Oracle Database optimizer takes when a SQL query is executed against the database. The processing of a SQL query can be divided into 7 phases:

1-  Syntactic: the syntax of the query is checked
2-  Semantic: verifies that all objects exist and are accessible
3-  View Merging: the query is rewritten as joins on base tables instead of using views
4-  Statement transformation: the query is rewritten, turning some complex constructs into simpler ones whenever possible (for example, subquery unnesting, IN/OR transformation, etc.). Some transformations are rule-based, while others are cost-based and rely on statistics.
5-  Optimization: determines the optimal access path for the query. The cost-based optimizer (CBO) uses statistics to analyze the relative costs of accessing the objects.
6-  Query Evaluation Plan (QEP): the different access options are evaluated and the plan is generated.
7-  QEP execution: the actions indicated in the chosen execution plan are carried out.

Steps 1 through 6 are sometimes grouped together and referred to as 'parsing', the 'parse' phase, or simply analysis.
Step 7 is the actual execution of the statement.

El  "explain plan" es fundamentalmente una representación de la ruta de acceso generada en el paso 6.

Once the access path has been determined, it is stored in memory in the library cache together with the corresponding statement. Queries are stored in the library cache based on a hashed representation of the statement text, and Oracle later uses this hash to locate the statement in memory. Before performing phases 1-6 for a given SQL statement, Oracle searches the library cache to see whether it has already been parsed, so the plan already generated can be reused. It first applies a hash algorithm to the SQL text and then looks up this hash value in the library cache. If the hash matches one of the stored values, that access path will be used to resolve the query. It will keep being used (for the same query) until the statement is parsed again. Depending on the circumstances, the execution plan may vary from one parse to the next. (In future posts we will look at what causes a query to be parsed again.)
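
As a quick, hedged illustration (the LIKE filter below is just an example), the hash values Oracle keeps for cached statements can be seen in the V$SQL view:

SELECT sql_id, hash_value, plan_hash_value, executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT SUM(amount) FROM sales%';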

[ 2018-08-29 ]

Privileges required to run an Explain Plan

As we know, one of the ways to obtain the execution plan of a SQL statement is to use the EXPLAIN PLAN clause. This lets us determine the execution plan that an Oracle database is using, or would use, to execute a given SQL statement. The EXPLAIN PLAN command inserts a row into a specific output table with the description of each operation for each step of the execution plan. EXPLAIN PLAN can also be used as part of the SQL trace facility.

EXPLAIN PLAN also reports the cost of executing the SQL statement. Several variables feed into the calculated cost, such as the number of blocks read, read time, single-block versus multiblock reads, processor cycles, CPU speed, etc.; ultimately, this cost is what leads Oracle to choose one specific execution plan over all the others it calculated.

In the database software installation we can find a sample script with the definition of the output table (PLAN_TABLE). Whatever output table we end up using must have the same column names and data types as this table. The script is usually named utlxplan.sql in versions prior to 10.2 and catplan.sql from that version onwards. It can be found in @?/rdbms/admin/
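
A minimal sketch of typical usage (table and predicate are illustrative; DBMS_XPLAN.DISPLAY reads the rows that EXPLAIN PLAN inserted into PLAN_TABLE):

SQL> EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);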

[ 2018-06-28 ]

Restoring previous versions of optimizer statistics

In some cases, after a statistics refresh (on one or more tables and/or indexes), the execution plan for certain queries may change, potentially degrading performance. While it is clear that "fresh" statistics should help the optimizer (CBO) select the best possible execution plan, things do not always work that way; on the contrary, performance can end up suffering. If this situation occurs in a production environment, we probably won't have much time to analyze the causes and work out a possible solution.
A good option for trying to revert the problem quickly is to restore the statistics to their previous values.
To support this, starting with Oracle Database 10g, whenever new statistics are gathered on an object, the previous statistics values are kept so that they can be restored if necessary.
By default, the retention period is 31 days. This means that all statistics gathered during that period are kept.

We can verify this with the following query:

SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;

GET_STATS_HISTORY_RETENTION
---------------------------
                         31

If we want to change this retention period, we can do so by running the following procedure (the argument is the new retention in days):

SQL> execute DBMS_STATS.ALTER_STATS_HISTORY_RETENTION (XXXX);
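
As a hedged sketch of the restore itself (schema, table, and timestamp below are illustrative), DBA_TAB_STATS_HISTORY shows the available versions and DBMS_STATS.RESTORE_TABLE_STATS brings one of them back:

SQL> select table_name, stats_update_time from dba_tab_stats_history where owner = 'SCOTT';
SQL> exec DBMS_STATS.RESTORE_TABLE_STATS('SCOTT','EMP', SYSTIMESTAMP - INTERVAL '1' DAY);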

[ 2018-03-12 ]

Statistics on SYS objects and fixed tables

Oracle recommends gathering statistics for the SYS schema, mainly when the database performs many changes (DML) on the objects of this schema.
To do this we can use the gather_schema_stats procedure of the dbms_stats package. This optimizes the performance of internal recursive queries and of the queries that applications run against SYS schema objects. These statistics are especially useful when Oracle APPS is being used.
We have three ways to gather statistics on the SYS schema:

SQL> exec dbms_stats.gather_schema_stats ('SYS');
SQL> exec dbms_stats.gather_database_stats (gather_sys=>true);
SQL> exec dbms_stats.gather_dictionary_stats;


It is also possible to gather statistics for dynamic tables such as the x$ tables, which are generated in memory during instance startup. The recommendation to gather statistics on these objects is aimed at cases where performance problems are noticed when querying the v$ dynamic views. In that case, the gathering should be done at a time when the system load is representative, since these objects record the current activity of the database.
To gather these statistics we must use the gather_fixed_objects_stats procedure of the dbms_stats package.

SQL> exec dbms_stats.gather_fixed_objects_stats;
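
To check whether fixed-object statistics have already been gathered, a hedged sketch using the standard dictionary view:

SQL> select count(*) from dba_tab_statistics where object_type = 'FIXED TABLE' and last_analyzed is not null;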

Ref: Note MOS 457926.1

[ 2018-02-28 ]

Optimizer with Oracle Database 18c



One of the most important components of the Oracle database is the Optimizer, or CBO. Fascinating in both functionality and complexity, it acts as the brain behind SQL statement execution, determining the most efficient execution plan based on the structure of the query and the statistical information about the underlying objects. Without a doubt, a deep understanding of how it works is vitally important for DBAs and development teams alike.
As with every new version, Oracle Database 18c introduces changes and new features at the Optimizer level. Along with the launch of this new release, several technical papers were also published, as I mentioned in a previous post.
One of them, "Optimizer with Oracle Database 18c", presents all the new features related to the optimizer and statistics in 18c. It also provides simple, easily reproducible examples to help you become familiar with them quickly, especially when migrating from earlier versions.
It also describes how existing functionality aimed at performance optimization and manageability has been improved. Naturally, the document covers all of the above only in broad strokes; to dig deeper into these features I recommend the documentation: https://docs.oracle.com/en/database/oracle/oracle-database/18/index.html.

The paper can be read and downloaded here: Optimizer with Oracle Database 18c


[ 2018-02-27 ]

Manually running an "Optimizer Statistics Auto Task"


Under certain circumstances it may be necessary to launch the automatic optimizer statistics gathering job manually. This is common when evaluating the performance of some SQL statement or after large data purges.
Starting with version 11g, the "Auto-Tasks" mechanism for running maintenance tasks replaced the need for gather_stats_job.
It is also possible to force a manual run (outside the maintenance window) using the following statement:

SQL> exec DBMS_AUTO_TASK_IMMEDIATE.GATHER_OPTIMIZER_STATS

The DBA privilege is required to use this command.

This tells the "Automated Maintenance Tasks" subsystem to start a job to gather statistics, as long as one is not already running.
That can happen if a maintenance window is open at the time. If the job is launched successfully, it will have a name in the format ORA$_AT_OS_MANUAL_nnnnn (where nnnnn is one or more decimal digits).
As mentioned, unlike the regular automated maintenance jobs, the "MANUAL" job is not directly tied to a specific maintenance window.
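
To confirm that the job was created, a hedged sketch using the standard scheduler view:

SQL> select job_name, state from dba_scheduler_jobs where job_name like 'ORA$_AT_OS_MANUAL%';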

[ 2018-02-20 ]

Oracle Database 12.2: AWR reports at the PDB level

Article published on Oracle Technology Network (OTN) in Spanish - February 2018


Introduction

Since its introduction in Oracle Database 10g, Automatic Workload Repository (AWR) has evolved steadily from release to release.
Oracle 12c introduced a major change in the database architecture with the "multitenant" concept. In the first release of 12c, AWR reports could only be generated at the Container Database (CDB) level. To some extent, this prevents us from analyzing in depth the behavior of specific PDBs.
Version 12.2 introduces a notable improvement in AWR: the ability to take snapshots at both the CDB and the PDB level when working in multitenant environments.
This new feature allows a more granular diagnosis of performance problems focused on one particular PDB, something extremely interesting and useful in DBaaS solutions, where the DBA role is expected to tend toward that of a "pDBA" (Pluggable Database Administrator), that is, a DBA responsible for administering one or more specific PDBs which, at the physical level, share resources with others in the same CDB.
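
As a hedged sketch of how this is enabled in 12.2 (parameter and procedure are standard; run from inside the target PDB):

SQL> alter system set awr_pdb_autoflush_enabled = true;
SQL> exec dbms_workload_repository.create_snapshot;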

[ 2017-07-04 ]

Optimizer with Oracle Database 12c Release 2 (White Paper)


Nuevo Oracle White Paper
Optimizer with Oracle Database 12c Release 2
 
Introduction
The Oracle Optimizer is one of the most fascinating components of the Oracle Database, since it is essential to the processing of every SQL statement. The optimizer determines the most efficient execution plan for each SQL statement based on the structure of the given query, the available statistical information about the underlying objects, and all the relevant optimizer and execution features. This paper introduces all of the new optimizer and statistics related features in Oracle Database 12c Release 2 and provides simple, reproducible examples to make it easier to get acquainted with them, especially when migrating from previous versions. It also outlines how existing functionality has been enhanced to improve both performance and manageability.

Some Oracle Optimizer features have been broken out of this paper and are covered in papers of their own.

Specifically, they are:
» Optimizer Statistics and Optimizer Statistics Advisor
» SQL Plan Management
» Approximate Query Processing

To get a complete picture of the Oracle Optimizer, it is recommended that you read this paper in conjunction with the relevant papers listed in the References section. See page 28 for details.


[ 2016-12-28 ]

Efficient Statistics Maintenance for Partitioned Tables Using Incremental Statistics – Part 1

Concepts

What statistics are there on partitioned tables?  

Consider a partitioned table called SALES that has date-range partitions for financial quarters 1 and 2 (Q1 and Q2). The partition key is on the column QUARTER. The database gathers statistics for individual partitions so that cardinality can be estimated for queries that are pruned to a single partition. These are called partition-level statistics. To illustrate this with an example, I’m going to consider just a couple of statistics and ignore the others. The number of rows in Q1 and Q2 are 600 and 550 respectively. The number of distinct values (NDVs) for SALE_TYPE in Q1 is 30 and in Q2 it’s 50:
[Table: partition-level statistics for Q1 and Q2]

When a query is compiled, if the Oracle Optimizer determines that it will need to access a single partition (using partition pruning, for example) then the statistics at the partition level will be enough to help determine the execution plan. Here’s a query that reads Q1 only:

SELECT SUM(amount) FROM sales WHERE quarter = 'Q1' AND sale_type = 'DIRECT';

If the Optimizer determines at compile-time that a query has the potential to access more than one partition, then individual partition statistics are not enough. In the next example, the query needs to access more than one partition:

SELECT SUM(amount) FROM sales 
WHERE  sale_type = 'DIRECT';
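
Part 2 covers the mechanics, but as a hedged preview of the feature this series is about (the owner SH below is an assumed name), incremental statistics are enabled through a table preference before gathering:

SQL> exec dbms_stats.set_table_prefs('SH','SALES','INCREMENTAL','TRUE');
SQL> exec dbms_stats.gather_table_stats('SH','SALES');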

Continue reading here

[ 2016-03-17 ]

Group-by and Aggregation Elimination

I get a fair number of questions on query transformations, and it’s especially true at the moment because we’re in the middle of the Oracle Database 12c Release 2 beta program. Sometimes people notice “something missing” or unusual in an execution plan and realize that a transformation is happening. For example, Join Elimination (thanks, Christian) can take you by surprise if you notice that a table in your query doesn’t appear in the SQL execution plan at all (and neither do any of its indexes).
I’m sure you’re aware that query transformations are an important step in query optimization because it’s often possible to reduce the cost of a query by eliminating steps like joins or sorts. Sometimes changing the shape of the query will allow it to use different access paths, different types of join, and entirely different types of query blocks. We cover most transformations in our “What to expect from the Optimizer” collateral and, in particular, release-specific Optimizer white papers (here's the Oracle Database 12c one). 
In Oracle Database 12.1.0.2, we added a new transformation called Group-by and Aggregation Elimination, and it slipped through without a mention in our collateral. It happens to be one of the simplest transformations in the Oracle Optimizer’s repertoire and I know that some of you are spookily well-informed and know about it already. You might have seen it getting a mention in Mike Dietrich’s upgrade blog.
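
The original post has full examples, but as a hedged illustration of the idea (table and column names are made up), grouping on a column that is already unique means every group has exactly one row, so the optimizer can drop the GROUP BY and the aggregation entirely:

-- ORDER_ID is assumed to be the primary key of ORDERS, so the
-- SUM and GROUP BY can be eliminated by the transformation.
SELECT order_id, SUM(order_total)
FROM   orders
GROUP  BY order_id;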

Continue reading here

[ 2015-12-02 ]

Basics of the Cost Based Optimizer

I'm sharing a series of articles on the Oracle optimizer (CBO) by Jonathan Lewis, published between June and November of this year at http://allthingsoracle.com
This series on Oracle’s Cost Based Optimizer is aimed at the less experienced DBAs and developers to help them understand what the optimizer is trying to achieve, how it arrives at an execution plan, why it makes mistakes, and (perhaps most importantly) how to recognize the source of those mistakes and so address the resulting problems in an appropriate fashion.
I will try to avoid going into extreme technical detail though I will outline a few of the slightly more technical issues and mention a few URLs that go into greater depth on particular topics.
To get the best out of this series you will need to have some experience with reading execution plans, especially when we look at the “trouble-shooting” side of optimization.

Basics of the Cost Based Optimizer – Part 1



[ 2015-09-21 ]

Tips on SQL Plan Management and Oracle Database In-Memory – Part 3

In Part 1 of this series of tips on SQL Plan Management (SPM) and Oracle Database In-Memory I covered an example where a full table scan query made use of the In-Memory column store immediately without changing the SQL execution plan. In Part 2 I presented an example where the In-Memory column store made an alternative SQL execution plan viable, and where there was a corresponding SQL plan baseline already in place so that this plan could be used immediately. 
In this post I will consider a slightly different scenario:
  • SQL plan management is used to stabilize the SQL execution plans for our critical application (that until now has not been using the In-Memory column store).
  • The In-Memory column store is subsequently enabled and populated with application tables. 
  • The Optimizer identifies new SQL execution plans.
  • Most of the new execution plans have never been chosen by the Optimizer before.
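
The post goes on to cover how those new plans get verified; as a hedged sketch (the sql_handle value is a placeholder), evolution can be driven through DBMS_SPM:

SQL> SELECT DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SQL_xxxx') FROM dual;
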
Continue reading here

[ 2015-09-02 ]

Tips on SQL Plan Management and Oracle Database In-Memory - Part 2

In Part 1 of this series of tips on SQL Plan Management (SPM) and Oracle Database In-Memory, I covered what would happen if we have a SQL plan baseline for a full table scan query when the table was populating the In-Memory column store. 
In this part I’m going to cover a scenario where a query has more than one SQL plan baseline: 
  • There is a query (called Q2, for short).
  • Q2 queries a table called MYSALES, which is not yet populating the In-Memory column store.
  • Q2 filters rows in MYSALES using a predicate on the SALE_TYPE column.
  • Data in SALE_TYPE is skewed, so there’s an index and a histogram on this column.
  • Because there is data skew, Q2 has two accepted SQL plan baselines; one with a full table scan and one with an index range scan.
You’ve probably come across this situation many times: the Oracle Optimizer must choose between a full table scan or an index range scan depending on predicate selectivity. The ability to change the execution plan based on the value of bind variables is called adaptive cursor sharing. If you’ve not come across that, then you’ll find it useful to check out the section on this topic in the Database SQL Tuning Guide.
What’s great about SPM is that it allows you to have multiple SQL plan baselines for individual queries, so you're not forced to pick one plan in preference to another. This capability is most relevant in environments where SQL statements use bind variables and there is a good deal of data skew. Queries like this are likely to have their plans affected by Oracle In-Memory Database because in-memory full table scans will have a lower cost than storage-resident table scans. Clearly, the In-Memory column store will affect the point of inflection where a full table scan will become more efficient than an index range scan. How is this going to work with SPM? 
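
As a hedged aside (view and columns are standard), you can see a statement with more than one accepted baseline by querying DBA_SQL_PLAN_BASELINES:

SQL> select sql_handle, plan_name, enabled, accepted from dba_sql_plan_baselines;
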
Continue reading here

Tips on SQL Plan Management and Oracle Database In-Memory – Part 1

If you follow Oracle’s In-Memory blog then you probably came across a post mentioning how you should use SQL Plan Management when you’re upgrading to Oracle Database In-Memory. Whether you have read that post or not, you might be wondering what will happen if you have some SQL plan baselines and you begin to populate the In-Memory column store with a bunch of tables as used by those baselines. That’s what this post is about. Well, in fact, I’m going to break the topic up into a few posts because (as ever!) there is a little bit of subtlety to cover. Luckily, this will make your life easier rather than more difficult because you can get immediate benefit from In-Memory even if you don’t evolve SQL plan baselines on day one.  
When I started to think about this post I thought that I would start with the first scenario that probably comes to mind if you’re familiar with SQL Plan Management (SPM): 
  • The Optimizer comes up with a new execution plan for a SQL statement because something has changed, and Oracle Database In-Memory would be a very good example of that! 
  • If there’s a SQL plan baseline for the statement, the database will use the baseline execution plan and capture the new plan.
  • Where appropriate, the new plan will be validated and accepted using SQL plan evolution. 
I will get to that, but first it’s better to start with a couple of more subtle points. With this information in our back pocket it will be easier to understand (and explain) the more traditional aspects of SQL plan evolution in the context of Oracle Database In-Memory.
Continuar aqui

[ 2015-06-28 ]

What you need to know about SQL Plan Management and Auto Capture

Introduction

SQL Plan Management (SPM) is an Oracle database feature that allows you to establish a set of SQL execution plans that will be used even if the database is subject to changes that would otherwise cause execution plan changes to occur. For example, you might have an end-of-day batch run that operates in a business context where there are extreme peaks and troughs in daily volume, or perhaps you are upgrading a database and want to be sure that plans are carried over (at least initially). You do not have to fix execution plans in stone with SPM; you can use plan evolution to automate the process of finding improved plans, improving performance in a controlled way and at your own pace. If you’re not familiar with SPM, a very good place to start is to take a look at Maria Colgan’s four-part blog post on the subject. It gives you all the tools you need to get started.
If you are using SPM in Oracle Database 11gR2 or 12c, or if you are considering whether you should use it, then this blog post is for you. I decided to publish this post because I recently encountered a couple of environments that ran into, let’s say, “difficulties” with SPM when capturing SQL plan baselines automatically and continuously over a very long period of time (more than a year in fact). I’d like to give you a few pointers to avoid running into the same problems and why automatic SQL baseline capture was never intended to be used in that way.
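
As a hedged illustration of the mechanism under discussion (the parameter name is standard), automatic capture is simply switched on and off, which is why leaving it on indefinitely deserves some thought:

SQL> alter system set optimizer_capture_sql_plan_baselines = true;
SQL> alter system set optimizer_capture_sql_plan_baselines = false;
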
Continue reading here

[ 2015-05-28 ]

Space Management and Oracle Direct Path Load

Most of you will be familiar with the concept of direct path load and how it’s an efficient way to load large volumes of data into an Oracle database as well as being a great technique to use when moving and transforming data inside the database. It’s easy to use in conjunction with parallel execution too, so you can scale out and use multiple CPUs and database servers if you need to process more data in less time.
Probably less well known is how the Oracle database manages space during direct path load operations. This is understandable because the Oracle database uses a variety of approaches and it has not always been very obvious which one it has chosen. The good news is that Oracle Database 12c from version 12.1.0.2 onwards makes this information visible in the SQL execution plan and it is also possible to understand what’s happening inside earlier releases with a bit of help from this post and some scripts I’ve linked to at the end.
Why isn’t there a one-size-fits-all approach for space management? Simply put, direct path load has to work well in a wide variety of different circumstances. It is used to fill non-partitioned and partitioned tables, which means that it must work well with a small or large number of database segments. It must operate serially or in parallel, be confined to a single database instance or distributed across an entire RAC database. There are some subtle complexities to take into account too, such as data skew. For example, some table partitions might contain much more data than others. Successful parallel execution depends on the even distribution of workloads across a number of parallel execution servers. If this isn’t the case then some of the servers will finish early, leaving them with no useful work to do. This results in a low effective degree of parallelism, poor machine resource utilization and an extended elapsed time. Avoiding issues associated with data skew increases the complexity of space management because it’s usually not appropriate to simply map each parallel execution server to an individual database segment.
The strategies Oracle uses are designed to achieve excellent scalability in a wide variety of circumstances, avoiding the extended run times associated with skewed datasets. In this post I will focus on how the strategies work without delving too deeply into how the database makes its choice. I will explain space management using Oracle Database 11g Release 2 as the baseline and then I will introduce the changes we’ve made in Oracle Database 12c. To put things in perspective, a full rack Oracle Exadata Database Machine is capable of loading data at over 20 terabytes an hour, so you can be sure that space management has received some attention! Even if you don’t have a system like that at your disposal, you will still benefit from the improvements and optimizations Oracle makes to keep your database operating at its full potential.
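
As a hedged reminder of what a direct path load looks like in practice (table names are illustrative), the APPEND hint requests it for INSERT ... SELECT:

INSERT /*+ APPEND */ INTO sales_history SELECT * FROM sales_staging;
COMMIT;
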
Continue reading here

[ 2013-11-30 ]

A Beginner’s Guide to Optimizer Hints

It is often easy to forget this, but in many ways it is after we hit the execute button that the really exciting stuff starts with our code. A number of engines silently spring into action, including the optimizer. The optimizer analyses your SQL statement and decides the most efficient way to execute it based on the objects involved in the statement and the conditions you’re subjecting them to. Your database automatically gathers stats about your objects – stuff like the number of rows, number of distinct values, number of nulls, data distribution – and the optimizer uses this information in its decision-making. (You can study the explain plan to see what decisions the optimizer has taken.) The optimizer arrives at its conclusions, often in barely a whisper of time.
And when the SQL statement is executed, you sit back and you feel like a genius.
And that’s it, right? THE END.
Optimizer Hints
Well, not necessarily. The optimizer is the hero of our story; let me introduce the potential villains, Optimizer Hints. An optimizer hint is a code snippet within an SQL statement controlling the decisions of the optimizer. Hints give us the opportunity, in cases where we have superior knowledge about the database, to influence the optimizer. In fact, the very name is a misnomer – they are not hints; they are commands that override the optimizer (as long as the hint is valid and the _OPTIMIZER_IGNORE_HINTS initialization parameter is not TRUE).
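
As a hedged example of the syntax (table and alias are illustrative), a hint lives in a comment placed immediately after the SQL keyword:

SELECT /*+ FULL(e) */ * FROM employees e WHERE department_id = 50;
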
Continue reading here

[ 2013-07-02 ]

How does AUTO_SAMPLE_SIZE work in Oracle Database 11g?

When it comes to gathering statistics, one of the most critical decisions you have to make is, what sample size should be used? A 100% sample will ensure accurate statistics but could take a really long time. Whereas a 1% sample will finish quickly but could result in poor statistics.
The ESTIMATE_PERCENT parameter in the DBMS_STATS.GATHER_*_STATS procedures controls the sample size used when gathering statistics and its default value is AUTO_SAMPLE_SIZE.
In an earlier blog post, we talked about the new implementation of AUTO_SAMPLE_SIZE in Oracle Database 11g in terms of its improvements in the speed and accuracy of statistics gathering compared to the old AUTO_SAMPLE_SIZE prior to Oracle Database 11g.
In this post, we will offer a closer look at how the new AUTO_SAMPLE_SIZE algorithm works and how it affects the accuracy of the statistics being gathered.
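
As a hedged reference point (the constant is standard; owner and table are illustrative), the default corresponds to:

SQL> exec dbms_stats.gather_table_stats('SH', 'SALES', estimate_percent => dbms_stats.auto_sample_size);
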
Continue reading here

[ 2013-03-04 ]

How do adaptive cursor sharing and SQL Plan Management interact?

We've received a lot of questions about how adaptive cursor sharing (ACS) and SQL plan management (SPM) interact.  We discussed this briefly in one of the original SPM posts, but in this post, we'll explain the concepts of how the two features interact in more detail, and show an example.
The simplest way to reason about the interaction is to remember that they are responsible for two different tasks.  ACS controls whether or not a child cursor is shared on a particular execution.  For each execution of the query, ACS considers the current bind values and decides if an existing child cursor can be shared or if the optimizer should be given the chance to find a better plan for the current bind values.  SPM controls which plans the optimizer may choose.  If a child cursor is bind-aware, the decision to share or not is made irrespective of whether the query is controlled by SPM.  But once the query and its current bind values are sent to the optimizer for optimization, SPM constrains the optimizer's choice of plans, without regard to whether this query is being optimized due to ACS.
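
As a hedged pointer for observing the ACS side of this yourself (view and columns are standard; &sql_id is a substitution variable):

SQL> select sql_id, child_number, is_bind_sensitive, is_bind_aware from v$sql where sql_id = '&sql_id';
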
Continue reading here

[ 2013-01-14 ]

How do I create statistics to make ‘small’ objects appear ‘large’ to the Optimizer?

    I recently spoke with a customer who has a development environment that is a tiny fraction of the size of their production environment. His team has been tasked with identifying problem SQL statements in this development environment before new code is released into production.

The problem is that the objects in the development environment are so small that the execution plans selected there rarely reflect what actually happens in production.

To ensure the development environment accurately reflects production, in the eyes of the Optimizer, the statistics used in the development environment must be the same as the statistics used in production. This can be achieved by exporting the statistics from production and importing them into the development environment. Even though the underlying objects are a fraction of the size of production, the Optimizer will see them as the same size and treat them the same way as it would in production.

Below are the necessary steps to achieve this in their environment. I am using the SH sample schema as the application schema whose statistics we want to move from production to development.
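
The linked post walks through the full procedure; as a hedged outline (the staging table name and how it is moved are illustrative), the core DBMS_STATS calls are:

SQL> exec dbms_stats.create_stat_table('SYSTEM','MY_STATS');
SQL> exec dbms_stats.export_schema_stats('SH','MY_STATS', statown => 'SYSTEM');
-- move the MY_STATS table to development (e.g. with Data Pump), then:
SQL> exec dbms_stats.import_schema_stats('SH','MY_STATS', statown => 'SYSTEM');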

Continue reading here