When I tried to set the log_archive_dest_1, I got the error ORA-16179:
SQL> alter system set log_archive_dest_1='/u1/oracle/archivelogs/ora12db1/';
alter system set log_archive_dest_1='/u1/oracle/archivelogs/ora12db1/'
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-16179: incremental changes to "log_archive_dest_1" not allowed with SPFILE
The error message is not very meaningful; I expected something like an incorrect path or a syntax error.
After checking the documentation, I saw that log_archive_dest_1 is more complex than a simple path (unlike, for example, db_create_file_dest): one has to specify whether the archive location is remote (on another server) or local, and there are lots of other attributes that can be set in this parameter. In my case, I just wanted to keep the archive logs on the same machine as the Oracle instance, so I only needed to add the LOCATION keyword at the beginning of the parameter value:
SQL> alter system set log_archive_dest_1='location=/u1/oracle/archivelogs/ora12db1/';
System altered.
Note:
If the path does not exist (or the Oracle instance owner has no access to it), the following error is displayed:
SQL> alter system set log_archive_dest_1='location=/u1/oracle/archivelogs/ora12db1/';
alter system set log_archive_dest_1='location=/u1/oracle/archivelogs/ora12db1/'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16032: parameter LOG_ARCHIVE_DEST_1 destination string cannot be translated
ORA-07286: sksagdi: cannot obtain device information.
Linux-x86_64 Error: 2: No such file or directory
When deleting data from a table, Oracle checks for each deleted row whether there are other rows that depend on it. So Oracle will scan all tables that have a foreign key pointing to the target table (or use indexes on them, if they exist) to see whether the deleted row has child rows: deleting must not create orphans. Oracle will do a full scan on a child table if no index is defined on its foreign key columns. This is of course very inefficient if the child tables are big.
But what if you only want to delete rows from the parent table that do not exist in the child tables, like "delete from tbl_parent where not exists (select * from tbl_child)"? You would expect Oracle not to recheck that the deleted rows have no dependent records in the child tables.
Let’s illustrate this with an example
CREATE TABLE tbl_parent (ID NUMBER PRIMARY KEY) ;
CREATE TABLE tbl_child (ID NUMBER , id_parent NUMBER, CONSTRAINT fk_tbl_child_parent FOREIGN KEY (id_parent) REFERENCES tbl_parent(ID)) ;
INSERT INTO tbl_parent SELECT ROWNUM FROM dual CONNECT BY LEVEL < 101;
INSERT INTO tbl_child SELECT ROWNUM, ROWNUM + 50 FROM dual CONNECT BY LEVEL < 51;
COMMIT;
And we try to delete all rows from the parent table that do not exist in the child table. We enable event 10046 and analyze the trace file.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
delete from tbl_parent where id not in (select id_parent from tbl_child);
commit;
ALTER SESSION SET EVENTS '10046 trace name context off';
In the trace file:
SQL ID: 8uucqtmzuqnhz Plan Hash: 2719173255

delete from tbl_parent where id not in (select id_parent from tbl_child)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          4          2           0
Execute      1      0.00       0.00          0         11        256          50
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.00          0         15        258          50

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 107
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  DELETE  TBL_PARENT (cr=361 pr=0 pw=0 time=9025 us starts=1)
        50         50         50   HASH JOIN ANTI NA (cr=11 pr=0 pw=0 time=285 us starts=1 cost=5 size=2600 card=100)
       100        100        100    INDEX FAST FULL SCAN SYS_C0021393 (cr=4 pr=0 pw=0 time=24 us starts=1 cost=2 size=1300 card=100)(object id 100084)
        50         50         50    TABLE ACCESS FULL TBL_CHILD (cr=7 pr=0 pw=0 time=17 us starts=1 cost=3 size=650 card=50)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        1        0.00          0.00
  log file sync                                   1        0.00          0.00
  PGA memory operation                            1        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00
SQL ID: g59tya9raw21s Plan Hash: 3669815686

select /*+ all_rows */ count(1) from "TESTER"."TBL_CHILD" where "ID_PARENT" = :1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     50      0.00       0.00          0          0          0           0
Fetch       50      0.00       0.00          0        350          0          50
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      101      0.00       0.00          0        350          0          50

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=7 pr=0 pw=0 time=27 us starts=1)
         0          0          0   TABLE ACCESS FULL TBL_CHILD (cr=7 pr=0 pw=0 time=23 us starts=1 cost=2 size=13 card=1)
SQL ID: 9pgdf32vu0h3c Plan Hash: 0

commit
So we see that for each deleted row Oracle performs the following check:
select /*+ all_rows */ count(1)
from
"TESTER"."TBL_CHILD" where "ID_PARENT" = :1
So Oracle did not realize that this check is superfluous.
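The usual way to avoid these per-row full scans is to index the foreign key column. A minimal sketch for the tables above (the index name is my own choice):

-- Index the FK column so the recursive existence check
-- can probe the index instead of full-scanning tbl_child.
CREATE INDEX ix_tbl_child_id_parent ON tbl_child (id_parent);

With this index in place the recursive check is still executed, but it can use an index range scan instead of a full scan of the child table.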
Today I want to show you how I analyse referential integrity in Oracle: first for one level only, and then with a SQL statement that finds the entire hierarchy leading up to a specific table. Certainly you could do this with tools, but here only SQL is used.
Referential integrity in Oracle is ensured by foreign keys. As an example, we create three tables: parent_tbl, child_tbl1 and child_tbl2. Both child_tbl1 and child_tbl2 have foreign keys pointing to the parent_tbl table.
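The DDL for the three tables is not shown in the post; a setup consistent with the query results below (a composite primary key on PARENT_TBL_ID and PARENT_TBL_ID2, with the constraint names appearing in the output) could look like this:

CREATE TABLE parent_tbl (
  parent_tbl_id  NUMBER,
  parent_tbl_id2 NUMBER,
  CONSTRAINT pk_parent_tbl PRIMARY KEY (parent_tbl_id, parent_tbl_id2)
);

CREATE TABLE child_tbl1 (
  id             NUMBER,
  parent_tbl_id  NUMBER,
  parent_tbl_id2 NUMBER,
  CONSTRAINT fk_child_tbl1 FOREIGN KEY (parent_tbl_id, parent_tbl_id2)
    REFERENCES parent_tbl (parent_tbl_id, parent_tbl_id2)
);

CREATE TABLE child_tbl2 (
  id             NUMBER,
  parent_tbl_id  NUMBER,
  parent_tbl_id2 NUMBER,
  CONSTRAINT fk_child_tbl2 FOREIGN KEY (parent_tbl_id, parent_tbl_id2)
    REFERENCES parent_tbl (parent_tbl_id, parent_tbl_id2)
);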
With the following statement we find the tables that have foreign keys pointing to the parent_tbl table:
SELECT A.OWNER,
A.TABLE_NAME,
A.CONSTRAINT_NAME,
A.R_CONSTRAINT_NAME,
A.CONSTRAINT_TYPE
FROM USER_CONSTRAINTS A
JOIN USER_CONSTRAINTS B
ON A.OWNER = B.OWNER
AND A.R_CONSTRAINT_NAME = B.CONSTRAINT_NAME
WHERE B.TABLE_NAME = 'PARENT_TBL'
AND A.CONSTRAINT_TYPE = 'R'
OWNER TABLE_NAME CONSTRAINT_NAME R_CONSTRAINT_NAME CONSTRAINT_TYPE
1 TESTER CHILD_TBL2 FK_CHILD_TBL2 PK_PARENT_TBL R
2 TESTER CHILD_TBL1 FK_CHILD_TBL1 PK_PARENT_TBL R
If you are also interested in the columns involved in the foreign key relationship:
WITH CONS AS
(SELECT A.OWNER,
A.TABLE_NAME,
A.CONSTRAINT_NAME,
A.R_CONSTRAINT_NAME,
B.TABLE_NAME R_TABLE_NAME,
A.CONSTRAINT_TYPE
FROM USER_CONSTRAINTS A
JOIN USER_CONSTRAINTS B
ON A.OWNER = B.OWNER
AND A.R_CONSTRAINT_NAME = B.CONSTRAINT_NAME
WHERE B.TABLE_NAME = 'PARENT_TBL'
AND A.CONSTRAINT_TYPE = 'R')
SELECT A.*, B.COLUMN_NAME, B.POSITION
FROM CONS A
JOIN USER_CONS_COLUMNS B
ON A.OWNER = B.OWNER
AND A.TABLE_NAME = B.TABLE_NAME
AND A.CONSTRAINT_NAME = B.CONSTRAINT_NAME
OWNER TABLE_NAME CONSTRAINT_NAME R_CONSTRAINT_NAME R_TABLE_NAME CONSTRAINT_TYPE COLUMN_NAME POSITION
1 TESTER CHILD_TBL1 FK_CHILD_TBL1 PK_PARENT_TBL PARENT_TBL R PARENT_TBL_ID2 2
2 TESTER CHILD_TBL1 FK_CHILD_TBL1 PK_PARENT_TBL PARENT_TBL R PARENT_TBL_ID 1
3 TESTER CHILD_TBL2 FK_CHILD_TBL2 PK_PARENT_TBL PARENT_TBL R PARENT_TBL_ID2 2
4 TESTER CHILD_TBL2 FK_CHILD_TBL2 PK_PARENT_TBL PARENT_TBL R PARENT_TBL_ID 1
The following statement returns only one row per foreign key constraint (so you can use it directly to generate DDL for the FKs):
WITH CONS AS (SELECT A.OWNER,
A.TABLE_NAME,
A.CONSTRAINT_NAME,
B.TABLE_NAME R_TABLE_NAME,
A.R_CONSTRAINT_NAME,
A.CONSTRAINT_TYPE
FROM USER_CONSTRAINTS A
JOIN USER_CONSTRAINTS B
ON A.OWNER = B.OWNER
AND A.R_CONSTRAINT_NAME = B.CONSTRAINT_NAME
WHERE B.TABLE_NAME = 'PARENT_TBL'
AND A.CONSTRAINT_TYPE = 'R'),
CONS_R_COL AS
(SELECT A.*, B.COLUMN_NAME, B.POSITION
FROM CONS A
JOIN USER_CONS_COLUMNS B
ON A.OWNER = B.OWNER
AND A.TABLE_NAME = B.TABLE_NAME
AND A.CONSTRAINT_NAME = B.CONSTRAINT_NAME),
INFO_V AS
(SELECT RC.OWNER,
RC.TABLE_NAME,
RC.CONSTRAINT_NAME,
RC.COLUMN_NAME,
RC.R_TABLE_NAME,
RC.R_CONSTRAINT_NAME,
RP.COLUMN_NAME R_COLUMN_NAME,
RC.POSITION
FROM CONS_R_COL RC
JOIN USER_CONS_COLUMNS RP
ON RC.OWNER = RP.OWNER
AND RC.R_TABLE_NAME = RP.TABLE_NAME
AND RC.POSITION = RP.POSITION)
SELECT OWNER,
TABLE_NAME,
CONSTRAINT_NAME,
LISTAGG(COLUMN_NAME, ',') WITHIN GROUP(ORDER BY POSITION) COLS,
R_TABLE_NAME,
R_CONSTRAINT_NAME,
LISTAGG(R_COLUMN_NAME, ',') WITHIN GROUP(ORDER BY POSITION) R_COLS
FROM INFO_V
GROUP BY OWNER,
TABLE_NAME,
CONSTRAINT_NAME,
R_TABLE_NAME,
R_CONSTRAINT_NAME
So far so good. But what can be interesting is to find the whole hierarchy leading up to a table. Let's create a few tables that make up a binary tree, starting from the table T0: it has two child tables, each of which has two child tables of its own. We stop at level 3, so in total we have 15 tables. With the following block you can first visualize this tree.
DECLARE
lvl BINARY_INTEGER := 0;
deep BINARY_INTEGER := 2;
procedure LOG(msg VARCHAR2,n binary_integer DEFAULT 0) IS
BEGIN
dbms_output.put_line(rpad(' ' ,n*2)||msg);
END;
PROCEDURE cre_childs(parent_tbl VARCHAR2, l binary_integer) IS
BEGIN
FOR j IN 0..1 LOOP
LOG(parent_tbl||j,l);
IF l <= deep THEN
cre_childs(parent_tbl ||j,l+1);
END IF;
END LOOP;
END;
BEGIN
LOG('T0');
cre_childs('T0',1);
END;
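The block above only prints the tree. To actually create the 15 tables with the corresponding foreign keys, a variant along these lines can be used (a sketch; the column names id and id_parent are my own assumptions):

DECLARE
  deep BINARY_INTEGER := 2;
  -- create one table; a non-null parent gets a FK to that parent
  PROCEDURE cre_tbl(tbl VARCHAR2, parent_tbl VARCHAR2) IS
  BEGIN
    IF parent_tbl IS NULL THEN
      EXECUTE IMMEDIATE 'CREATE TABLE '||tbl||' (id NUMBER PRIMARY KEY)';
    ELSE
      EXECUTE IMMEDIATE 'CREATE TABLE '||tbl||' (id NUMBER PRIMARY KEY, '||
        'id_parent NUMBER REFERENCES '||parent_tbl||' (id))';
    END IF;
  END;
  PROCEDURE cre_childs(parent_tbl VARCHAR2, l BINARY_INTEGER) IS
  BEGIN
    FOR j IN 0..1 LOOP
      cre_tbl(parent_tbl||j, parent_tbl);
      IF l <= deep THEN
        cre_childs(parent_tbl||j, l + 1);
      END IF;
    END LOOP;
  END;
BEGIN
  cre_tbl('T0', NULL);
  cre_childs('T0', 1);
END;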
With the following statement you get the whole hierarchy leading up to the table T0:
WITH core AS
(SELECT b.table_name tbl_parent,
a.TABLE_NAME tbl_child,
b.constraint_name constraint_parent,
a.CONSTRAINT_NAME constraint_child
FROM user_constraints a
JOIN user_constraints b
ON a.R_CONSTRAINT_NAME = b.CONSTRAINT_NAME
WHERE a.CONSTRAINT_TYPE = 'R'),
tbl0 AS
(SELECT tbl_parent,
tbl_child,
level l,
connect_by_iscycle iscycl,
constraint_parent,
constraint_child
FROM core
START WITH tbl_parent = 'T0' ------ Root table
CONNECT BY NOCYCLE PRIOR tbl_child = tbl_parent),
tbl AS
(SELECT tbl_parent,
tbl_child,
MAX(iscycl) iscycl,
constraint_parent,
constraint_child,
l
FROM tbl0
GROUP BY tbl_parent,tbl_child,constraint_parent,constraint_child,l),
tbl2 as
(SELECT tbl_parent, tbl_child, constraint_parent, constraint_child,0 iscycl, l level_
FROM tbl
UNION all
SELECT tbl.tbl_child, tbl.tbl_child, core.constraint_parent, core.constraint_child,1, l level_
FROM tbl
JOIN core
ON tbl.tbl_child = core.tbl_parent and tbl.tbl_child = core.tbl_child WHERE iscycl = 1
AND core.constraint_parent <> tbl.constraint_parent),
tmp as
(SELECT tbl_parent table_name, constraint_parent constraint_name,iscycl
FROM tbl2
UNION
SELECT tbl_child, constraint_child,iscycl FROM tbl2),
tmp2 as
(SELECT c.table_name, c.constraint_name, c.column_name, position,iscycl
FROM (select table_name,constraint_name,column_name,position from user_cons_columns
) c
JOIN tmp
ON c.table_name = tmp.table_name
AND c.constraint_name = tmp.constraint_name),
tmp3 as
(SELECT table_name,
constraint_name,
listagg(column_name,',') WITHIN GROUP(ORDER BY position) cols,
MAX(iscycl) iscycl
FROM tmp2
GROUP BY table_name, constraint_name)
-- main select
SELECT DISTINCT tbl2.iscycl, tbl2.tbl_parent, tbl2.tbl_child, p_col.cols p_cols, c_col.cols c_cols, level_
FROM tbl2
JOIN tmp3 p_col
ON tbl2.tbl_parent = p_col.table_name
AND tbl2.constraint_parent = p_col.constraint_name
JOIN tmp3 c_col
ON tbl2.tbl_child = c_col.table_name
AND tbl2.constraint_child = c_col.constraint_name
order by level_,tbl2.iscycl DESC
Today I would like to talk about how I dynamically generate code to export a table or a query result as an XML file using XMLTYPE.
In many situations I need to save a table or query result as an XML file, in order to use the file as configuration for my application or to process it in another program (Java, C#, …). To achieve this I wrote the following PL/SQL block:
declare
-- table or view to export
v_tbl_name varchar2(30) := 'MYTABLE';
v_stmt varchar2(32767);
--the table will be saved in this variable
v_output_as_xmltype XMLTYPE;
function fu_format(p_name varchar2, p_type varchar2) return varchar2 is
ret varchar2(200);
begin
--one could add e.g. formatting for timestamps...
ret := case when p_type = 'DATE' then 'to_char('||p_name||',''dd.mm.yyyy'')' else p_name end;
return lpad(' ',15) || ret;
end fu_format;
begin
for rec in (select column_name,data_type from user_tab_cols where table_name = v_tbl_name order by column_id) loop
v_stmt := v_stmt || fu_format(rec.column_name,rec.data_type) || ' as ' || rec.column_name || ',' || chr(10);
end loop;
v_stmt := 'SELECT XMLELEMENT("'||v_tbl_name||'",' || chr(10) ||
'XMLAGG( ' || chr(10) ||
' XMLELEMENT("row", ' || chr(10) ||
' XMLFOREST( ' || chr(10) ||
rtrim(v_stmt,','||chr(10)) || chr(10) ||
' ) ' || chr(10) ||
' ) ' || chr(10) ||
') ' || chr(10) ||
') FROM ' || v_tbl_name
;
-- this is the statement
dbms_output.put_line(v_stmt);
execute immediate v_stmt into v_output_as_xmltype;
--one can save the v_output_as_xmltype in a file
-- or just insert it in a table
-- ddl for the xml_output_table
--create table xml_output_table(id number,content xmltype);
INSERT INTO xml_output_table VALUES (1, v_output_as_xmltype);
COMMIT;
end;
Let's create a table to test this block:
create table mytable(
id number
,col1 varchar2(50)
,col2 varchar2(50)
,col3 varchar2(50)
,created_at date
);
insert into mytable select rownum
,'col1_'||rownum
,'col2_'||rownum
,'col3_'||rownum
,sysdate-rownum
from dual connect by level < 4;
3 rows inserted.
commit;
One only has to set the table (or view) name in the block above to generate the following code:
SELECT XMLELEMENT("MYTABLE",
XMLAGG(
XMLELEMENT("row",
XMLFOREST(
ID as ID,
COL1 as COL1,
COL2 as COL2,
COL3 as COL3,
to_char(CREATED_AT,'dd.mm.yyyy') as CREATED_AT
)
)
)
) FROM MYTABLE
But what if you want to export a SELECT statement with a WHERE clause instead of a table or a view? One possibility would be to create a view for the statement. A more elegant way is to use the dbms_sql package: with it you can get all the necessary information about the statement (column names and types) to generate the SQL.
For example: select id,col1 from mytable where id < 5.
declare
-- Statement
v_query varchar2(32767) := 'select id,col1 from MYTABLE where id < 5';
v_stmt varchar2(32767);
v_output_as_xmltype XMLTYPE;
v_cursor_id integer;
v_col_cnt integer;
v_columns dbms_sql.desc_tab;
function fu_format(p_name varchar2, p_type number) return varchar2 is
ret varchar2(200);
begin
ret := case when p_type = dbms_types.TYPECODE_DATE then 'to_char('||p_name||',''dd.mm.yyyy'')' else p_name end;
return lpad(' ',15) || ret;
end fu_format;
begin
v_cursor_id := dbms_sql.open_cursor;
dbms_sql.parse(v_cursor_id, v_query, dbms_sql.native);
dbms_sql.describe_columns(v_cursor_id, v_col_cnt, v_columns);
for i in 1 .. v_columns.count loop
v_stmt := v_stmt || fu_format(v_columns(i).col_name,v_columns(i).col_type) || ' as ' || v_columns(i).col_name || ',' || chr(10);
end loop;
dbms_sql.close_cursor(v_cursor_id);
v_stmt := 'SELECT XMLELEMENT("query",' || chr(10) ||
'XMLAGG( ' || chr(10) ||
' XMLELEMENT("row", ' || chr(10) ||
' XMLFOREST( ' || chr(10) ||
rtrim(v_stmt,','||chr(10)) || chr(10) ||
' ) ' || chr(10) ||
' ) ' || chr(10) ||
') ' || chr(10) ||
') text FROM ( ' || v_query || ' )'
;
dbms_output.put_line(v_stmt);
--execute immediate v_stmt into v_output_as_xmltype;
--INSERT INTO xml_tab VALUES (1, v_output_as_xmltype);
--COMMIT;
end;
This generates the following statement:
SELECT XMLELEMENT("query",
XMLAGG(
XMLELEMENT("row",
XMLFOREST(
ID as ID,
COL1 as COL1
)
)
)
) text FROM ( select id,col1 from MYTABLE where id < 5 )
Today I want to report on my experience with the error ORA-54032: column to be renamed is used in a virtual column expression.
We developed new features in our software that had to be released for both Oracle versions 11.2.0.4 and 12.1.0.2. After successfully testing the upgrade on 11.2.0.4, we got the following error on 12.1.0.2: ORA-54032: column to be renamed is used in a virtual column expression.
We had just tested it on 11.2.0.4 and everything was fine. The error confused me, because in one place I did in fact define a virtual column and rename other columns. At first I thought I had not respected the order of renaming and defining the new columns, and that perhaps I had made a mistake while testing on 11.2.0.4. I checked the test routine again, and after a code review I found that everything was correct. Then I got the suspicion that we might be hitting an Oracle bug. I had read before that there are problems in 12.1 because of extended statistics:
Extended statistics were introduced in Oracle 11g to allow statistics to be gathered on groups of columns, to highlight the relationship between them, or on expressions. Oracle 11gR2 makes the process of gathering extended statistics for column groups easier.
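For illustration, this is how such a column group is typically created (a hypothetical example; MYTABLE and the column names are my own). The call returns the system-generated name of the hidden virtual column:

-- Creating a column group; Oracle adds a hidden virtual column
-- named SYS_STS... to the table to represent the group.
SELECT dbms_stats.create_extended_stats(user, 'MYTABLE', '(COL1,COL2)')
FROM dual;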
Then I checked with the following query which virtual columns exist in the target table:
SELECT column_name,hidden_column FROM user_tab_cols WHERE table_name = 'MYTABLE'
Then the surprise: in the result I found this odd column: SYS_STSQZJ#5GML0JJ$YE3F3U#_SN1 (HIDDEN_COLUMN = YES).
So there is indeed a virtual column, and a hidden one at that! But I never created it.
The name suggests it was created by Oracle, presumably because of the extended statistics.
I have no problem with that if its purpose is to improve performance, but why does it trigger an error when I try to rename my own columns? I did not create this column and did not even know it existed! Couldn't Oracle keep its internal optimizations separate from our logic? One cannot even remove such a virtual column with a DROP:
ALTER TABLE mytable DROP COLUMN SYS_STSQZJ#5GML0JJ$YE3F3U#_SN1;
one gets this error: ORA-12996: cannot drop system-generated virtual column
Well, then I had to find a way to get my code up and running again. I researched and found the following approach:
BEGIN
FOR i IN (SELECT extension_name, extension
FROM user_stat_extensions
WHERE table_name = UPPER('MYTABLE')-- your table
AND extension_name LIKE 'SYS_STS%' -- SYS_STS are system generated from the DECODE in ALL_STAT_EXTENSIONS
) LOOP
DBMS_OUTPUT.PUT_LINE('-- Dropping extension on '||i.extension);
DBMS_OUTPUT.PUT_LINE('exec DBMS_STATS.DROP_EXTENDED_STATS(user, ''&current_table.'', '''||i.extension||''');');
END LOOP;
END;
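The generated commands boil down to a direct call like the following (a sketch; the column group expression is an assumption and must match the EXTENSION value found above):

-- Drop the extended statistics; this also removes the
-- system-generated hidden virtual column.
BEGIN
  DBMS_STATS.DROP_EXTENDED_STATS(
    ownname   => user,
    tabname   => 'MYTABLE',
    extension => '("COL1","COL2")');
END;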
Today I want to briefly talk about my experience with logging in as the sys user. A strange story.
A colleague called me and said that he suddenly could not log in to a pluggable database as the sys user. It was a 12.1.0.2 Oracle database. My first thought was that the listener was not started, the password file did not exist, or there was a firewall problem. But my colleague confirmed that he could log in remotely as the system user and that the password file existed. Then I told him to rebuild the password file; maybe the file was just corrupt. Rebuilding the file did not help either. That confused me…
Then I told him to create a dummy user and grant it the sysdba privilege. The sys user can only log in as sysdba (or sysoper, …), and such users have the particularity that their passwords are stored in the database as well as in the password file on the Oracle instance (the password file is needed for remote logins, since the instance does not have to be started).
create user dummy identified by dummy;
grant create session to dummy;
grant sysdba to dummy;
Amazingly, he could log in as the dummy user, but not as sys; as sys he got an error message.
I was confused again. What could the problem be here? Could it be that the sys password is not in the password file?! That couldn't be true. Then I asked him to look in the view v$pwfile_users: select * from v$pwfile_users;
And then came the big surprise: the sys user is in fact not in it. How can something like that happen?! The sys user has the sysdba privilege!
Then, and this sounds crazy, I told him to log in as sys via OS authentication and grant sysdba to the sys user:
sqlplus / as sysdba
grant sysdba to sys;
After that, the sys user appeared in the view again, and one could log in remotely as sys with sysdba!
I think it's strange that you can revoke sysdba from the sys user. Well, actually I don't think you can really revoke it; I guess this privilege gets special handling in the Oracle database, otherwise one would not have been able to log in with OS authentication. I believe the "revoke" only affects the password file.
I was so busy that I did not investigate this case properly. When I had a little time again and wanted to reproduce the problem, I got the following error message when trying to revoke sysdba from sys:
ORA-01998: REVOKE failed: user SYS always has SYSOPER and SYSDBA
Here's my experience with a performance problem in the Oracle database. After several months of work on a product and after successful tests, it was time to carry out the integration tests at the customer's site. The customer noticed that the GUI (Java) is quite slow in some situations.
What happened?! Everything worked perfectly for us. What is special about the customer's environment? I was assigned to analyze the problem.
The first thing I did was look in our application log and compare it with the logs in our UAT environment. Surprisingly, the suspicious SQL was easy to find. It was a simple SQL that should return at most a few dozen records; it finishes in our UAT environment in one second, but at the customer's site it takes more than a minute. The statement is called several times, and each call takes more than a minute, which of course makes the GUI very slow.
The SQL is:
WITH CFG_TBL_AS AS
(SELECT
OWNER, TABLE_NAME, COLUMN_NAME, FUNCTION_NAME, TABLE_NAME_NEW
FROM TMP_$_ANONY$CONFIG$
WHERE TABLE_NAME_NEW IS NOT NULL)
SELECT A.TABLE_NAME, A.COLUMN_NAME, B.TABLE_NAME_NEW, B.FUNCTION_NAME
FROM ALL_TAB_COLS A
LEFT JOIN CFG_TBL_AS B
ON A.OWNER = B.OWNER
AND A.TABLE_NAME = B.TABLE_NAME
AND A.COLUMN_NAME = B.COLUMN_NAME
WHERE (A.OWNER, A.TABLE_NAME) IN
(SELECT OWNER, TABLE_NAME FROM CFG_TBL_AS)
ORDER BY A.TABLE_NAME, A.COLUMN_ID
Looks easy. I got the execution plan from the customer; I will show only the important part here, which is actually the problem.
It looks like Oracle is trying to materialize the subquery (a powerful feature of the WITH clause is materializing subqueries). In our environment the execution plan looks different: the subquery is not materialized.
Then I tried the SQL with the materialize hint and, as expected, I got the same execution plan; it also takes more than a minute:
WITH CFG_TBL_AS AS
(SELECT --+materialize
OWNER, TABLE_NAME, COLUMN_NAME, FUNCTION_NAME, TABLE_NAME_NEW
FROM TMP_$_ANONY$CONFIG$
WHERE TABLE_NAME_NEW IS NOT NULL)
SELECT A.TABLE_NAME, A.COLUMN_NAME, B.TABLE_NAME_NEW, B.FUNCTION_NAME
FROM ALL_TAB_COLS A
LEFT JOIN CFG_TBL_AS B
ON A.OWNER = B.OWNER
AND A.TABLE_NAME = B.TABLE_NAME
AND A.COLUMN_NAME = B.COLUMN_NAME
WHERE (A.OWNER, A.TABLE_NAME) IN
(SELECT OWNER, TABLE_NAME FROM CFG_TBL_AS)
ORDER BY A.TABLE_NAME, A.COLUMN_ID
Why is Oracle even trying to materialize a few records (which fit into a single block)? And why would that take forever? But the important question for me was: why does the statement in our UAT environment behave as if the INLINE hint were used?
This is because of the hidden system parameter _with_subquery. It turns out that it is set to INLINE in our UAT environment!
ALTER SYSTEM SET "_with_subquery" = INLINE;
Someone forgot to remove this parameter, so we could not spot the issue before delivery. After adding the hint, everything works fine.
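The hint in question is presumably INLINE, the counterpart of MATERIALIZE, which tells Oracle not to materialize the WITH subquery:

WITH CFG_TBL_AS AS
(SELECT --+inline
  OWNER, TABLE_NAME, COLUMN_NAME, FUNCTION_NAME, TABLE_NAME_NEW
 FROM TMP_$_ANONY$CONFIG$
 WHERE TABLE_NAME_NEW IS NOT NULL)
SELECT A.TABLE_NAME, A.COLUMN_NAME, B.TABLE_NAME_NEW, B.FUNCTION_NAME
FROM ALL_TAB_COLS A
LEFT JOIN CFG_TBL_AS B
ON A.OWNER = B.OWNER
AND A.TABLE_NAME = B.TABLE_NAME
AND A.COLUMN_NAME = B.COLUMN_NAME
WHERE (A.OWNER, A.TABLE_NAME) IN
(SELECT OWNER, TABLE_NAME FROM CFG_TBL_AS)
ORDER BY A.TABLE_NAME, A.COLUMN_ID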
The following block should generate the commands to disable the foreign keys pointing to a table. It works nicely in 11.2.0.4 and 12.1.0.2, but not in 12.2.0.1: it does not raise an error, but it also produces no results. You can test it yourself; just replace 'MYTABLE' with a table that has foreign keys pointing to it.
DECLARE
-- table that has Foreignkey pointing on it
v_tbl_name VARCHAR2(30) := 'MYTABLE';
CURSOR get_massdata_tableinfo
IS
SELECT v_tbl_name table_name FROM dual
;
CURSOR get_fks(par_target_table user_tables.table_name%TYPE)
IS
WITH
user_constr AS
(
SELECT *
FROM all_constraints
WHERE owner = sys_context('USERENV', 'CURRENT_SCHEMA')
)
SELECT r.constraint_name,
r.table_name
FROM user_constr r,
user_constr t
WHERE t.table_name = par_target_table
AND t.constraint_type = 'P'
AND t.constraint_name = r.r_constraint_name
;
BEGIN
FOR crec IN get_massdata_tableinfo
LOOP
--
dbms_output.put_line('Table Name ' || crec.table_name);
-- disable FK´s pointing to table
FOR rec IN get_fks(crec.table_name) --no rows in 12.2.0.1 (but it works in 11.2.0.4 and 12.1.0.2)
LOOP
dbms_output.put_line('ALTER TABLE ' || rec.table_name ||
' DISABLE CONSTRAINT ' || rec.constraint_name);
END LOOP;
END LOOP;
END;
If I run the SQL directly, I get records:
WITH
user_constr AS
(
SELECT *
FROM all_constraints
WHERE owner = sys_context('USERENV', 'CURRENT_SCHEMA')
)
SELECT r.constraint_name,
r.table_name
FROM user_constr r,
user_constr t
WHERE t.table_name = 'MYTABLE'
AND t.constraint_type = 'P'
AND t.constraint_name = r.r_constraint_name
Is this a bug?
It looks like the problem is caused by Oracle patch 22485591; the block works on a 12.2.0.1 without this patch. In addition, a strange thing: when I add the no_merge hint to the statement, it also works with patch 22485591.
CURSOR get_fks(par_target_table user_tables.table_name%TYPE)
IS
WITH
user_constr AS
(
SELECT /*+ no_merge */ * -- this is very strange
FROM all_constraints
WHERE owner = sys_context('USERENV', 'CURRENT_SCHEMA')
)
SELECT r.constraint_name,
r.table_name
FROM user_constr r,
user_constr t
WHERE t.table_name = par_target_table
AND t.constraint_type = 'P'
AND t.constraint_name = r.r_constraint_name
;
I posted the problem on freelists.org and got this response from Jonathan Lewis:
--- "deep dive" ---
Okay, I ran your test and got no output on 12.2.0.1. Traced it (10046 level 4). Found that the query against all_constraints went parallel and returned no data, so added a /*+ noparallel */ hint to the query, then (belts and braces) "alter session disable parallel query;". Ran the test again – got the expected result. It looks like the parallel execution loses the value of sys_context. I would check whether 12.1.0.2 and 11.2.0.4 ran the query parallel or whether they ran it serially, and if they're running it serially check what happens if you force it parallel.
--- After the no_merge comment: ---
That (ed. the no_merge) may be luck rather than anything else. I just tried the same thing and still saw the problem (and parallel execution). In your case it’s possible that the presence of the no_merge hint resulted in Oracle materializing the subquery and maybe that made it run serially – i.e. it was about object_statistics rather than functionality.
P.S. Looking at the execution plans, my 12.2 (corrected from 12.1) is translating all_constraints to a query involving int$int$DBA_CONSTRAINTS, while the 12.1 query is a "more traditional" massive join of lots of tables – so the base problem seems to start with the appearance of a CDB mechanism for all_constraints. (I'm running 12.2 from a PDB, while the 12.1 is a non-PDB database.)
In this post I would like to briefly show how I dynamically generate a SQL statement to find the diff of two tables. When optimizing a SQL statement for better performance, in most cases it is not enough to just add (or remove) a few hints; you have to rewrite the whole statement. This is risky and can cause the new statement to produce a different result, which is why you have to compare the results of both statements. One saves the result of the SQL before the optimization in a table, e.g. table_a, and the result after the optimization in table_b. The difference is then the result of the following SQL:
select 'table_a' src, t.* from (select * from table_a minus select * from table_b) t
union all
select 'table_b' src, t.* from (select * from table_b minus select * from table_a) t
This will not work if the tables contain LOB columns. You could exclude the LOBs from the comparison, but if they are relevant you can use a hash function instead. The following PL/SQL block dynamically generates the diff SQL and uses the dbms_crypto.hash function:
DECLARE
tbl_a VARCHAR2(30):= 'TABLE_a';
tbl_b VARCHAR2(30):= 'TABLE_b';
cols VARCHAR2(32767);
stmt VARCHAR2(32767);
BEGIN
FOR rec IN (SELECT * FROM user_tab_columns WHERE table_name = upper(tbl_a) ORDER BY column_id) LOOP
cols := cols || CASE WHEN rec.data_type IN ('CLOB','BLOB') THEN 'dbms_crypto.hash( utl_raw.cast_to_raw('||rec.column_name||'), 2) as ' || rec.column_name
ELSE rec.column_name END||','||CHR(10);
END LOOP;
cols := rtrim(cols,','||CHR(10));
stmt := 'select '''||tbl_a||''' src,t.* from (select '||cols|| ' from ' || tbl_a || ' minus select '||cols|| ' from ' || tbl_b||') t
union all
select '''||tbl_b||''',t.* from (select '||cols|| ' from ' || tbl_b || ' minus select '||cols|| ' from ' || tbl_a||') t';
dbms_output.put_line(stmt);
END;
My experience with Oracle bug 14202396 in Oracle 11.2.0.4: Datapump remap_data fails if used for more than 10 columns in a table.
I was assigned to write a tool to anonymize an entire database: it should be exported and imported anonymized into another database. The first idea that came to mind was to use dbms_datapump. I have good experience with it, and I know this utility is very flexible and fast. It also offers the possibility to obfuscate columns on the fly using the dbms_datapump.data_remap procedure.
Before I opted for this solution I made a proof of concept; then the whole thing was implemented, and during testing it was revealed that in some cases it does not work at all, without any reasonable error message! In some cases I got the error while defining the job, before even starting it:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3507
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4309
ORA-06512: at line 66
00000 - "invalid argument value"
*Cause: The user specified API parameters were of the wrong type or value range. Subsequent messages supplied by DBMS_DATAPUMP.GET_STATUS will further describe the error.
*Action: Correct the bad argument and retry the API.
There was no error or hint in the Oracle alert log. To narrow down the problem, I wrote the following test block:
DECLARE
STMT VARCHAR2(32767);
tbl_name VARCHAR2(30) := UPPER('TEST_TABLE$$$$');
fu_name varchar2(100) := 'PKG_OBFUS_DATA.FU_1';
v_job_state user_datapump_jobs.state%TYPE;
v_exp_job_name varchar2(30) := 'exp_dmp_i_m_just_tst';
h1 NUMBER;
g_dbdir VARCHAR2(30) := 'DATA_PUMP_DIR';
NUMBER_OF_COL_TO_TEST BINARY_INTEGER := 11; -- <= 10 works , > 10 error at job definition (11.2.0.4). But in 12.2 it works fine
v_col_name VARCHAR2(30) := 'COL_1';
BEGIN
BEGIN
EXECUTE IMMEDIATE 'drop table '||tbl_name||' purge';
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('table doesn''t exist....');
END;
--generate table code with 20 columns
FOR REC IN (SELECT A
FROM (SELECT 'create table '||tbl_name||' (' A
FROM DUAL
UNION ALL
SELECT DECODE(ROWNUM, 1, NULL, ',') || 'col_' || ROWNUM ||
' varchar2(100)'
FROM DUAL
CONNECT BY LEVEL < 21
UNION ALL
SELECT ') nologging compress'
FROM DUAL)) LOOP
STMT := STMT || REC.A || CHR(10);
END LOOP;
DBMS_OUTPUT.PUT_LINE(STMT);
-- create the table
EXECUTE IMMEDIATE STMT;
--generate insert stmt
select 'insert into '||tbl_name||' values ('||listagg(''''||column_name||'''' , ',') within group(order by column_id) || ')'
INTO stmt
FROM user_tab_cols where table_name = tbl_name ;
--insert just one row ;
EXECUTE IMMEDIATE stmt;
dbms_output.put_line(SQL%ROWCOUNT || ' row inserted in '||tbl_name||'' ) ;
COMMIT;
-- create obfus function (it works only for functions in pkg)
--head
EXECUTE IMMEDIATE 'CREATE OR REPLACE PACKAGE PKG_OBFUS_DATA IS FUNCTION FU_1(p_in VARCHAR2) RETURN VARCHAR2; END;';
--body
EXECUTE IMMEDIATE 'CREATE OR REPLACE PACKAGE BODY PKG_OBFUS_DATA IS FUNCTION FU_1(p_in VARCHAR2) RETURN VARCHAR2 IS
BEGIN RETURN ''test''; END; END;';
-- test datapump
h1 := dbms_datapump.open(operation => 'EXPORT',job_mode => 'TABLE',job_name => v_exp_job_name);
dbms_datapump.add_file(handle => h1, filename => 'delete_me_if_u_see_me.dmp' ,directory => g_dbdir, reusefile => 1);
dbms_datapump.metadata_filter(h1,'NAME_EXPR', ' IN ('''||tbl_name||''')');
DBMS_DATAPUMP.metadata_filter(h1,'SCHEMA_EXPR', 'IN ('''||USER||''')');
dbms_output.put_line('ok');
--dbms_datapump.data_remap(handle => h1, name => 'COLUMN_FUNCTION',
-- table_name => tbl_name, column => v_col_name, function => fu_name, schema => 'SMARAGD_DBA');
FOR i IN 1..NUMBER_OF_COL_TO_TEST LOOP
dbms_output.put_line(i);
dbms_datapump.data_remap(handle => h1, name => 'COLUMN_FUNCTION',
table_name => tbl_name, column => 'COL_'||i, function => fu_name, schema => 'SMARAGD_DBA');
END LOOP;
--definition is ok
dbms_datapump.start_job(h1);
dbms_datapump.wait_for_job(h1, v_job_state);
--cleanup
EXECUTE IMMEDIATE 'drop table '||tbl_name||' purge';
EXECUTE IMMEDIATE 'drop package PKG_OBFUS_DATA';
END;
The test block simply creates a table with 20 columns and a test function for anonymization, then defines the export job. And I made several runs. I was surprised: the block works as long as the number of anonymized columns is at most 10! I could not believe it; did the Oracle developers forget something here, or is that just a restriction?! I found nothing about this limitation in the documentation, and the error message does not say much. Then I posted the question in a forum; someone responded quickly and told me he had had the same problem. He said: we opened an SR for it at the time, and Oracle's response was that you need to purchase Oracle Data Redaction if the built-in Datapump limit is insufficient. I was pretty disappointed. I did not want to lose much time on this topic, so I built in a workaround, which cost me two days.
Later I found out that Oracle does not have such a restriction here; it was just a bug. After installing the patch (via OPatch), everything worked as expected.