Feed aggregator
Zero-ETL: What is it and what Oracle products support it
Businesses are constantly finding themselves increasingly reliant on data ingestion, big data, analytics, and following traditional ETL (Extract, Transform, Load) […]
The post Zero-ETL: What is it and what Oracle products support it appeared first on DBASolved.
Drop RAC Database Using RMAN
As part of data center consolidation, some databases are no longer required and need to be dropped.
I have previously posted about dropping databases; however, that was for a single instance rather than RAC.
### Check status of database:
db01-oracle:/home/oracle$ srvctl status database -d loadtest -v
Database loadtest is running. Instance status: Open.
db01-oracle:/home/oracle$
### Modify cluster database parameters:
SQL> show parameter cluster_database
NAME TYPE VALUE
------------------------- ----------- ------------
cluster_database boolean FALSE
cluster_database_instances integer 1
SQL> show parameter db_name
NAME TYPE VALUE
------------------------- ----------- ------------
db_name string loadtest
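In this example cluster_database is already FALSE. If it were still TRUE, a typical sequence (a hedged sketch reusing the database name from this example, along the lines of Doc ID 732042.1) would be to change the parameter in the spfile and stop the database on all nodes before restarting a single instance:
SQL> alter system set cluster_database=false scope=spfile sid='*';
SQL> exit
db01-oracle:/home/oracle$ srvctl stop database -d loadtest -o immediate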
### Startup force mount restrict:
SQL> startup force mount restrict;
ORACLE instance started.
Total System Global Area 2.1475E+10 bytes
Fixed Size 3795712 bytes
Variable Size 1.2885E+10 bytes
Database Buffers 8522825728 bytes
Redo Buffers 63311872 bytes
Database mounted.
!!! Verify logins=RESTRICTED and parallel=NO !!!
SQL> select logins, parallel from v$instance;
LOGINS PAR
---------- ---
RESTRICTED NO
SQL>
### RMAN: drop database including backups ###
/home/oracle$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Mon Sep 25 11:41:45 2023
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
connected to target database: loadtest (DBID=983159180, not open)
RMAN> drop database including backups;
database name is "loadtest" and DBID is 983159180
Do you really want to drop all backups and the database (enter YES or NO)? yes
using target database control file instead of recovery catalog
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=101 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 7.1.3.0
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=148 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Data Protection for Oracle: version 7.1.3.0
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=199 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=248 device type=DISK
..............................
deleted archived log
archived log file name=+ORFLASHBK/loadtest/ARCHIVELOG/2023_09_22/thread_1_seq_150008.8873.1148227803 RECID=150008 STAMP=1148227802
Deleted 8 objects
..............................
database name is "loadtest" and DBID is 983159180
database dropped
RMAN>
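Once the drop completes, a sensible follow-up step (not shown in the output above) is to remove the database resource from the cluster registry so that srvctl and Grid Infrastructure no longer reference it, for example:
db01-oracle:/home/oracle$ srvctl remove database -d loadtest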
How To Drop A RAC Database Using RMAN (Doc ID 732042.1)
Joel Kallman Day 2023 : Announcement
Since 2016 we’ve had an Oracle community day where we push out content on the same day to try and get a bit of a community buzz. The name has changed over the years, but in 2021 it was renamed to the “Joel Kallman Day”. Joel was big on community, and it seems like a … Continue reading "Joel Kallman Day 2023 : Announcement"
The post Joel Kallman Day 2023 : Announcement first appeared on The ORACLE-BASE Blog.
ERP Oracle Database Security
Posted by Pete On 29/09/23 At 01:20 PM
Schema Privileges and Grant option
Find the select statement for a sys_refcursor
Oracle GoldenGate 23c “Free” – What you need to know!
Last week at Oracle Cloud World, the Oracle GoldenGate Product team announced the latest version of Oracle GoldenGate “Free”! This […]
The post Oracle GoldenGate 23c “Free” – What you need to know! appeared first on DBASolved.
LLama 2 LLM for PDF Invoice Data Extraction
A Performance Comparison between Oracle’s REGEXP_SUBSTR and SUBSTR functions
Oracle SQL offers support for POSIX regular expressions by means of a suite of native functions.

Whilst it provides the complex search capabilities that regular expressions offer, REGEXP_SUBSTR has acquired a reputation for being a fair bit slower than the standard SUBSTR function.
What I’m going to demonstrate here is :
- how SUBSTR seems generally to be faster than REGEXP_SUBSTR
- how the performance of REGEXP_SUBSTR can improve dramatically when used with INSTR
- REGEXP_SUBSTR performs better when it matches the start of a string
To start with though, we’ll discover why you’ll never see a Sith Lord on Sesame Street ( hint : it’s the way they count in a Galaxy Far, Far Away)…
A Test Table
We have a two-column table that looks like this :
create table star_wars_numbers
(
id number generated always as identity,
episode varchar2(4000)
)
/
insert into star_wars_numbers( episode)
values('Four');
insert into star_wars_numbers( episode)
values('Four|Five');
insert into star_wars_numbers( episode)
values('Four|Five|Six');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit|Eight');
insert into star_wars_numbers( episode)
values('Four|Five|Six|One|Two|Three|Seven|Three-and-a-bit|Three and a bit less of a bit|Eight|Nine');
commit;
Whilst the contents of the EPISODES column may be the result of arcane Sith ritual, Moof-Milker activity in the design process cannot be ruled out.
Either way, the EPISODES column contains up to 11 values, separated by a ‘|’.
Extracting each value in SQL
Using SUBSTR and INSTR to separate out the values in EPISODES is, as ever, quite a bit of typing…
select
id,
episode,
substr
(
episode,
1,
case
when instr(episode, '|') > 0 then instr(episode, '|') -1
else length(episode)
end
) as "A New Hope",
case when instr(episode, '|', 1, 1) > 0 then
substr
(
episode,
instr(episode, '|', 1, 1) + 1, -- start
case
when instr(episode, '|', 1, 2) > 0 then instr(episode, '|', 1,2) -1
else length(episode)
end
-
instr(episode, '|', 1, 1)
)
end as "The Empire Strikes Back",
case when instr(episode, '|', 1, 2) > 0 then
substr( episode, instr(episode, '|', 1, 2) + 1,
case when instr(episode, '|', 1,3) > 0 then instr(episode, '|',1,3) - 1 else length(episode) end
-
instr(episode, '|', 1, 2))
end as "Return of the Jedi",
case when instr(episode, '|', 1, 3) > 0 then
substr( episode, instr(episode, '|', 1, 3) + 1,
case when instr(episode, '|', 1,4) > 0 then instr(episode, '|',1,4) - 1 else length(episode) end
-
instr(episode, '|', 1, 3))
end as "Phantom Menace",
case when instr(episode, '|', 1, 4) > 0 then
substr( episode, instr(episode, '|', 1, 4) + 1,
case when instr(episode, '|', 1,5) > 0 then instr(episode, '|',1,5) - 1 else length(episode) end
-
instr(episode, '|', 1, 4))
end as "Attack of the Clones",
case when instr(episode, '|', 1, 5) > 0 then
substr( episode, instr(episode, '|', 1, 5) + 1,
case when instr(episode, '|', 1,6) > 0 then instr(episode, '|',1,6) - 1 else length(episode) end
-
instr(episode, '|', 1, 5))
end as "Revenge of the Sith",
case when instr(episode, '|', 1, 6) > 0 then
substr( episode, instr(episode, '|', 1, 6) + 1,
case when instr(episode, '|', 1,7) > 0 then instr(episode, '|',1,7) - 1 else length(episode) end
-
instr(episode, '|', 1, 6))
end as "The Force Awakens",
case when instr(episode, '|', 1, 7) > 0 then
substr( episode, instr(episode, '|', 1, 7) + 1,
case when instr(episode, '|', 1,8) > 0 then instr(episode, '|',1,8) - 1 else length(episode) end
-
instr(episode, '|', 1, 7))
end as "Rogue One",
case when instr(episode, '|', 1, 8) > 0 then
substr( episode, instr(episode, '|', 1, 8) + 1,
case when instr(episode, '|', 1,9) > 0 then instr(episode, '|',1,9) - 1 else length(episode) end
-
instr(episode, '|', 1, 8))
end as "Solo",
case when instr(episode, '|', 1, 9) > 0 then
substr( episode, instr(episode, '|', 1, 9) + 1,
case when instr(episode, '|', 1, 10) > 0 then instr(episode, '|',1, 10) - 1 else length(episode) end
-
instr(episode, '|', 1, 9))
end as "The Last Jedi",
case when instr(episode, '|', 1, 10) > 0 then
substr( episode, instr(episode, '|', 1, 10) + 1,
case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end
-
instr(episode, '|', 1, 10))
end as "The Rise of Skywalker"
from star_wars_numbers
/
…although these days ( version 12c and later), we can save ourselves a bit of effort by using an inline function…
with function extract_episode( i_string in varchar2, i_pos in pls_integer)
return varchar2
is
begin
if i_pos = 1 then
return substr(i_string, 1, case when instr(i_string, '|') > 0 then instr(i_string, '|') - 1 else length(i_string) end);
end if;
return
case when instr(i_string, '|', 1, i_pos -1 ) > 0 then
substr( i_string, instr(i_string, '|', 1, i_pos -1) + 1,
case when instr(i_string, '|', 1, i_pos) > 0 then instr(i_string, '|',1, i_pos) - 1 else length(i_string) end
-
instr( i_string, '|', 1, i_pos - 1))
end;
end;
select
id, episode,
extract_episode( episode, 1) as "A New Hope",
extract_episode( episode, 2) as "Empire Strikes Back",
extract_episode( episode, 3) as "Return of the Jedi",
extract_episode( episode, 4) as "The Phantom Menace",
extract_episode( episode, 5) as "Attack of the Clones",
extract_episode( episode, 6) as "Revenge of the Sith",
extract_episode( episode, 7) as "The Force Awakens",
extract_episode( episode, 8) as "Rogue One",
extract_episode( episode, 9) as "Solo",
extract_episode( episode, 10) as "The Last Jedi",
extract_episode( episode, 11) as "The Rise of Skywalker"
from star_wars_numbers
/
Whether you find the equivalent regexp query more elegant or just horrible is somewhat subjective :
select
regexp_substr(episode, '[^|]+') as "A New Hope",
regexp_substr(episode, '[^|]+',1, 2) as "The Empire Strikes Back",
regexp_substr(episode, '[^|]+',1, 3) as "Return of the Jedi",
regexp_substr(episode, '[^|]+',1, 4) as "The Phantom Menace",
regexp_substr(episode, '[^|]+',1, 5) as "Attack of the Clones",
regexp_substr(episode, '[^|]+',1, 6) as "Revenge of the Sith",
regexp_substr(episode, '[^|]+',1, 7) as "The Force Awakens",
regexp_substr(episode, '[^|]+',1, 8) as "Rogue One",
regexp_substr(episode, '[^|]+',1, 9) as "Solo",
regexp_substr(episode, '[^|]+',1, 10) as "The Last Jedi",
regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from star_wars_numbers
/
Irrespective of aesthetic considerations, I should explain the regexp in use here, if only so I can understand it when I read this a few months from now :
[^|] - match any character that isn't a pipe ("|")
+    - match one or more times
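As a quick illustration (not from the original post), asking for the second occurrence of the pattern against one of the sample values returns the second field :
select regexp_substr('Four|Five|Six', '[^|]+', 1, 2) as second_episode
from dual;
-- returns 'Five'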
Next, we need to find out how the regexp function stacks up when it comes to performance, and we’re not going to find that with a mere 11 rows of data…
Performance Test Setup
Let’s make some use of all that space in the Galaxy Far, Far Away…
create table galaxy (
id number generated always as identity,
episode varchar2(4000))
/
begin
for i in 1..100000 loop
insert into galaxy(episode)
select episode from star_wars_numbers;
commit;
end loop;
end;
/
exec dbms_stats.gather_table_stats(user, 'galaxy');
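Just to confirm the data volume behind the timings that follow (a quick check, not shown in the original post), the loop above inserts the 11 sample rows 100,000 times :
select count(*) from galaxy;

  COUNT(*)
----------
   1100000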
All of the following tests were run on Oracle 19c Enterprise Edition running on Oracle Linux.
Everything was run in a SQL*Plus session from the command line on the database server.
The queries were run in SQL*Plus with the following settings :
set lines 130
clear screen
set autotrace trace
set timing on
Each query was executed at least twice consecutively to ensure that results were not skewed by the state of the buffer cache.
It’s also worth noting that I found no low-level explanation for the performance discrepancies between the two functions when trawling through trace files. Therefore, I’ve concentrated on elapsed time as the main performance metric in these tests.
Test 1 – All Fields extracted in the Select Clause
Let’s start with the SUBSTR function (referred to as "Standard" henceforth) :
select
id,
substr
(
episode, -- input string
1, -- start position
case
when instr(episode, '|') > 0 then instr(episode, '|') -1
else length(episode)
end -- number of characters to extract
) as "A New Hope",
case when instr(episode, '|', 1, 1) > 0 then
substr
(
episode,
instr(episode, '|', 1, 1) + 1, -- start
case
when instr(episode, '|', 1, 2) > 0 then instr(episode, '|', 1,2) -1
else length(episode)
end
-
instr(episode, '|', 1, 1)
)
end as "The Empire Strikes Back",
case when instr(episode, '|', 1, 2) > 0 then
substr( episode, instr(episode, '|', 1, 2) + 1,
case when instr(episode, '|', 1,3) > 0 then instr(episode, '|',1,3) - 1 else length(episode) end
-
instr(episode, '|', 1, 2))
end as "Return of the Jedi",
case when instr(episode, '|', 1, 3) > 0 then
substr( episode, instr(episode, '|', 1, 3) + 1,
case when instr(episode, '|', 1,4) > 0 then instr(episode, '|',1,4) - 1 else length(episode) end
-
instr(episode, '|', 1, 3))
end as "Phantom Menace",
case when instr(episode, '|', 1, 4) > 0 then
substr( episode, instr(episode, '|', 1, 4) + 1,
case when instr(episode, '|', 1,5) > 0 then instr(episode, '|',1,5) - 1 else length(episode) end
-
instr(episode, '|', 1, 4))
end as "Attack of the Clones",
case when instr(episode, '|', 1, 5) > 0 then
substr( episode, instr(episode, '|', 1, 5) + 1,
case when instr(episode, '|', 1,6) > 0 then instr(episode, '|',1,6) - 1 else length(episode) end
-
instr(episode, '|', 1, 5))
end as "Revenge of the Sith",
case when instr(episode, '|', 1, 6) > 0 then
substr( episode, instr(episode, '|', 1, 6) + 1,
case when instr(episode, '|', 1,7) > 0 then instr(episode, '|',1,7) - 1 else length(episode) end
-
instr(episode, '|', 1, 6))
end as "The Force Awakens",
case when instr(episode, '|', 1, 7) > 0 then
substr( episode, instr(episode, '|', 1, 7) + 1,
case when instr(episode, '|', 1,8) > 0 then instr(episode, '|',1,8) - 1 else length(episode) end
-
instr(episode, '|', 1, 7))
end as "Rogue One",
case when instr(episode, '|', 1, 8) > 0 then
substr( episode, instr(episode, '|', 1, 8) + 1,
case when instr(episode, '|', 1,9) > 0 then instr(episode, '|',1,9) - 1 else length(episode) end
-
instr(episode, '|', 1, 8))
end as "Solo",
case when instr(episode, '|', 1, 9) > 0 then
substr( episode, instr(episode, '|', 1, 9) + 1,
case when instr(episode, '|', 1, 10) > 0 then instr(episode, '|',1, 10) - 1 else length(episode) end
-
instr(episode, '|', 1, 9))
end as "The Last Jedi",
case when instr(episode, '|', 1, 10) > 0 then
substr( episode, instr(episode, '|', 1, 10) + 1,
case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end
-
instr(episode, '|', 1, 10))
end as "The Rise of Skywalker"
from galaxy
/
Running this query, we get :
1100000 rows selected.

Elapsed: 00:00:20.32

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   35857676  bytes sent via SQL*Net to client
     811886  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Before we look at the REGEXP_SUBSTR equivalent, it’s probably worth considering the more streamlined in-line function version of this query :
with function extract_episode( i_string in varchar2, i_pos in pls_integer)
return varchar2
is
begin
return
case when instr(i_string, '|', 1, i_pos -1 ) > 0 then
substr( i_string, instr(i_string, '|', 1, i_pos -1) + 1,
case when instr(i_string, '|', 1, i_pos) > 0 then instr(i_string, '|',1, i_pos) - 1 else length(i_string) end
-
instr( i_string, '|', 1, i_pos - 1))
end;
end;
select
id,
substr
(
episode,
1,
case
when instr(episode, '|') > 0 then instr(episode, '|') -1
else length(episode)
end
) as "A New Hope",
extract_episode( episode, 2) as "Empire Strikes Back",
extract_episode( episode, 3) as "Return of the Jedi",
extract_episode( episode, 4) as "The Phantom Menace",
extract_episode( episode, 5) as "Attack of the Clones",
extract_episode( episode, 6) as "Revenge of the Sith",
extract_episode( episode, 7) as "The Force Awakens",
extract_episode( episode, 8) as "Rogue One",
extract_episode( episode, 9) as "Solo",
extract_episode( episode, 10) as "The Last Jedi",
extract_episode( episode, 11) as "The Rise of Skywalker"
from galaxy
/
Whilst it’s a bit more readable, it’s also a lot slower :
1100000 rows selected.

Elapsed: 00:00:41.76

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   35857676  bytes sent via SQL*Net to client
     810042  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
REGEXP_SUBSTR however, takes slow to a new level…
select id,
regexp_substr(episode, '[^|]+') as "A New Hope",
regexp_substr(episode, '[^|]+',1, 2) as "The Empire Strikes Back",
regexp_substr(episode, '[^|]+',1, 3) as "Return of the Jedi",
regexp_substr(episode, '[^|]+',1, 4) as "The Phantom Menace",
regexp_substr(episode, '[^|]+',1, 5) as "Attack of the Clones",
regexp_substr(episode, '[^|]+',1, 6) as "Revenge of the Sith",
regexp_substr(episode, '[^|]+',1, 7) as "The Force Awakens",
regexp_substr(episode, '[^|]+',1, 8) as "Rogue One",
regexp_substr(episode, '[^|]+',1, 9) as "Solo",
regexp_substr(episode, '[^|]+',1, 10) as "The Last Jedi",
regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:01:27.25

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   35857680  bytes sent via SQL*Net to client
     809519  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Now see what happens when we give REGEXP_SUBSTR a little help from the humble INSTR (the “hybrid” query) :
select id,
regexp_substr(episode, '[^|]+') as "A New Hope",
decode( instr(episode, '|'), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,1) + 1)) as "The Empire Strikes Back",
decode( instr(episode, '|',1, 2), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,2) + 1)) as "Return of the Jedi",
decode( instr(episode, '|',1, 3), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,3) + 1)) as "The Phantom Menace",
decode( instr(episode, '|',1, 4), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,4) + 1)) as "Attack of the Clones",
decode( instr(episode, '|',1, 5), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,5) + 1)) as "Revenge of the Sith",
decode( instr(episode, '|',1, 6), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,6) + 1)) as "The Force Awakens",
decode( instr(episode, '|',1, 7), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,7) + 1)) as "Rogue One",
decode( instr(episode, '|',1, 8), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,8) + 1)) as "Solo",
decode( instr(episode, '|',1, 9), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,9) + 1)) as "The Last Jedi",
decode( instr(episode, '|',1, 10), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,10) + 1)) as "The Rise of Skywalker"
from galaxy
/
Yes, I have cheated a bit on the aesthetics here by using the more compact DECODE instead of CASE.
However, this does not affect the runtime of the query, which is considerably faster than the pure REGEXP_SUBSTR equivalent :
1100000 rows selected.

Elapsed: 00:00:30.83

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   35857680  bytes sent via SQL*Net to client
     810158  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed

Test 1 Results Summary

Query Method                        Time (secs)
SUBSTR (Standard)                   20.32
REGEXP_SUBSTR with INSTR (Hybrid)   30.83
SUBSTR with in-line function        41.76
REGEXP_SUBSTR (Regexp)              87.25
The performance of the Hybrid approach does raise the question of how the REGEXP_SUBSTR compares when we’re just looking to extract a single field from a string, rather than all of them…
Test 2 – Extract the first field only
In this instance we’re just looking for the first field in EPISODES.
In this context, the hybrid approach doesn’t apply because we’re always starting our search at the start of the input string.
Starting, once again with the Standard approach :
select id,
substr
(
episode, 1,
case
when instr(episode, '|', 1,1) > 0 then instr(episode, '|', 1,1) -1
else length(episode)
end
) as "A New Hope"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.33

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25086067  bytes sent via SQL*Net to client
     808790  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Now for REGEXP_SUBSTR :
select id,
regexp_substr( episode, '[^|]+') as "A New Hope"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:06.38

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25086067  bytes sent via SQL*Net to client
     808868  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Interestingly, whilst it’s still not as fast, the Regexp runtime is only 20% longer than the Standard.
This contrasts markedly with our first test, where the Regexp query took more than four times as long as the Standard one.
Test 2 Results Summary

Query Method   Time (secs)
Standard       5.33
Regexp         6.38

Does this relative performance hold true for any single field in the input string ?
Test 3 – Extract the last field only
Starting with the Standard query :
select
id,
case when instr(episode, '|', 1, 10) > 0 then
substr( episode, instr(episode, '|', 1, 10) + 1,
case when instr(episode, '|', 1, 11) > 0 then instr(episode, '|',1, 11) - 1 else length(episode) end
-
instr(episode, '|', 1, 10))
end as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.44

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25686033  bytes sent via SQL*Net to client
     808915  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
Once again, the Regexp Query is much slower…
select
id,
regexp_substr(episode, '[^|]+',1, 11) as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:16.16

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25686033  bytes sent via SQL*Net to client
     808888  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed
…unless we use the hybrid approach…
select
id,
decode( instr(episode, '|',1, 10), 0, null, regexp_substr(episode, '[^|]+', instr(episode, '|',1,10) + 1)) as "The Rise of Skywalker"
from galaxy
/
1100000 rows selected.

Elapsed: 00:00:05.60

Execution Plan
----------------------------------------------------------
Plan hash value: 3574103611

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  1100K|    47M|  2089   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GALAXY |  1100K|    47M|  2089   (1)| 00:00:01 |
----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      80459  consistent gets
       7616  physical reads
          0  redo size
   25686033  bytes sent via SQL*Net to client
     808736  bytes received via SQL*Net from client
      73335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    1100000  rows processed

Test 3 Results Summary

Query Method   Time (secs)
Standard       5.44
Hybrid         5.60
Regexp         16.16

Conclusions
These tests seem to bear out the fact that, in general, SUBSTR offers better performance than REGEXP_SUBSTR.
However, the performance of REGEXP_SUBSTR can be greatly improved if used in conjunction with INSTR.
As always, the decision as to which function to use will depend on the specific circumstances you find yourself in.
Sql Konferenz 2023 in Hanau, Germany
It was a great pleasure to take part in the SQL Server Konferenz 2023 in Hanau, Germany, on 12 and 13 September 2023.
I went with my colleague Petar, from the same Microsoft SQL Server team at dbi services, Switzerland.
For Microsoft Sql Server fans, this event was really the right place to be.
Where is Hanau located?
First of all, for those who don't know where Hanau is: it's a small and charming city in Germany, 350 km from Basel, Switzerland (a 3 hour 30 minute drive), or 30 km from Frankfurt, Germany.

The event took place in the Congress Park Hanau

Once you arrive, the first step, as usual, is to go to the registration desk to get your badge and a blue bag containing some useful items (pen, notepad, programme…)


Take a look at the agenda: there are 6 parallel sessions of 1 hour each, all very interesting and covering varied themes (SQL Server on-premises, Azure, Power BI, best practices…), from 9 am to 5 pm.
Let’s start now
It was very intense and time passed very quickly; with so many sessions to choose from, it was a real dilemma each time.

During the keynote on the first day, all speakers, partners and sponsors briefly introduced what they would present.


Here are some examples of great sessions and speakers. I cannot list them all, of course, as all were great and there were a lot of them; to keep it simple, I will just mention some of the SQL Server MVP speakers, whose experience we draw on in our daily SQL Server work:
1. Bob Ward – Principal Architect, Microsoft Azure Data.
For all new versions and new features, he is among the first to know about them and to announce them officially.


Presenting the topic: Transitioning your On-Prem Database Skills to Azure SQL
2. Rie Merritt – Sr Program Manager, MVP & SQL Community for Microsoft, Organizer




Here with Rie Merritt and my colleague Petar Prljevic
3. Glenn Berry – Services Database Engineer, Microsoft Data Platform MVP
Presenting the topic: How to Survive a SQL Server Health Check
He presented some of his diagnostic queries, which are widely used and very helpful for DBAs like me in our daily duties at customers.




It was really a great event on Microsoft Sql Server, in my opinion too short because there were so many interesting topics.
I recommend it to everyone who wants to stay up to date or to extend their knowledge.
The article Sql Konferenz 2023 in Hanau, Germany first appeared on the dbi Blog.
Restoring a PDB from another CDB can increase your SCN
I start with a 19c CDB called "ORCLCDB" and PDB "ORCLPDB1" that is present in it.
SQL> col name format a16 SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 2778483057 ORCLCDB 20906515 SQL> select dbid, guid, name, open_mode from v$pdbs; DBID GUID NAME OPEN_MODE ---------- -------------------------------- ---------------- ---------- 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY 4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1 READ WRITE 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT MOUNTED SQL> SQL> alter pluggable database orclpdb1 close; Pluggable database altered. SQL> quit Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.12.0.0.0 oracle19c>rman target / Recovery Manager: Release 19.0.0.0.0 - Production on Sun Sep 24 09:39:55 2023 Version 19.12.0.0.0 Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved. connected to target database: ORCLCDB (DBID=2778483057) RMAN> backup pluggable database orclpdb1 tag for_migration; Starting backup at 24-SEP-23 using channel ORA_DISK_1 using channel ORA_DISK_2 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00034 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_tpcctab_jyl0mzsp_.dbf input datafile file number=00009 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/system01.dbf input datafile file number=00026 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_data_min_j2p8z0qn_.dbf input datafile file number=00032 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoojqn_.dbf channel ORA_DISK_1: starting piece 1 at 24-SEP-23 channel ORA_DISK_2: starting full datafile backup set channel ORA_DISK_2: specifying datafile(s) in backup set input datafile file number=00011 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/undotbs01.dbf input datafile file number=00010 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/sysaux01.dbf input datafile file number=00012 name=/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf input datafile file number=00031 name=/opt/oracle/oradata/ORCLCDB/8812E29864F47615E0530100007FA424/datafile/o1_mf_hemant_d_jtkoo5nr_.dbf channel ORA_DISK_2: starting piece 1 at 24-SEP-23 channel ORA_DISK_1: finished piece 1 at 24-SEP-23 piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp tag=FOR_MIGRATION comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03 channel ORA_DISK_2: finished piece 1 at 24-SEP-23 piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/8812E29864F47615E0530100007FA424/backupset/2023_09_24/o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp tag=FOR_MIGRATION comment=NONE channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03 Finished backup at 24-SEP-23 Starting Control File and SPFILE Autobackup at 24-SEP-23 piece handle=/opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2023_09_24/o1_mf_s_1148377605_ljz585bw_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 24-SEP-23 RMAN> RMAN> quit Recovery Manager complete. oracle19c>sqlplus / as sysdba SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 24 09:48:04 2023 Version 19.12.0.0.0 Copyright (c) 1982, 2021, Oracle. All rights reserved. 
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.12.0.0.0 SQL> alter pluggable database orclpdb1 unplug into '/opt/oracle/product/19c/dbhome_1/orclpdb1.xml'; Pluggable database altered. SQL> !ls -l /opt/oracle/product/19c/dbhome_1/orclpdb1.xml -rw-r--r--. 1 oracle oinstall 12583 Sep 24 09:50 /opt/oracle/product/19c/dbhome_1/orclpdb1.xml SQL> SQL> quit Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.12.0.0.0 oracle19c>
Now I go to the target CDB CDB2 and identify that there is no conflicting PDB. I also find that it has a lower SCN.
SQL> col name format a16 SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 837065713 CDB2 20664227 SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 837065713 CDB2 20664231 SQL> SQL> select * from v$pdbs 2 / CON_ID DBID CON_UID GUID ---------- ---------- ---------- -------------------------------- NAME -------------------------------------------------------------------------------------------------------------------------------- OPEN_MODE RES OPEN_TIME CREATE_SCN TOTAL_SIZE BLOCK_SIZE RECOVERY ---------- --- --------------------------------------------------------------------------- ---------- ---------- ---------- -------- SNAPSHOT_PARENT_CON_ID APP APP APP APPLICATION_ROOT_CON_ID APP PRO LOCAL_UNDO UNDO_SCN UNDO_TIME CREATION_ DIAGNOSTICS_SIZE ---------------------- --- --- --- ----------------------- --- --- ---------- ---------- --------- --------- ---------------- PDB_COUNT AUDIT_FILES_SIZE MAX_SIZE MAX_DIAGNOSTICS_SIZE MAX_AUDIT_SIZE LAST_CHANGE TEM ---------- ---------------- ---------- -------------------- -------------- ----------- --- TENANT_ID ------------------------------------------------------------------------------------------------------------------------------------ UPGRADE_LEVEL GUID_BASE64 ------------- ------------------------------ 2 2054948555 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY NO 24-SEP-23 09.26.31.678 AM +08:00 1997190 957349888 8192 ENABLED NO NO NO NO NO 1 280 04-MAY-19 0 0 0 0 0 0 COMMON USER NO 1 iBKSY7mfS73gUwEAAH973wA= 3 2205532678 2205532678 E6BD9C73839C10BEE0550A00275FC834 PDBTMP MOUNTED 17541716 999292928 8192 ENABLED NO NO NO NO NO 1 280 21-AUG-22 0 0 0 0 0 0 COMMON USER NO 1 5r2cc4OcEL7gVQoAJ1/INAA=
I then have the backups and the "closed" datafiles from the source CDB copied over to the target. (The CREATE PLUGGABLE DATABASE command will not be using the RMAN backups but the PDB Describe XML file and the "closed" datafiles).
oracle19c>pwd
/tmp/From_Source
oracle19c>ls -l
total 1882384
-rw-r-----. 1 oracle oinstall 1146109952 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5825q_.bkp
-rw-r-----. 1 oracle oinstall  758202368 Sep 24 09:56 o1_mf_nnndf_FOR_MIGRATION_ljz5828m_.bkp
-rw-r-----. 1 oracle oinstall   23232512 Sep 24 09:55 o1_mf_s_1148377605_ljz585bw_.bkp
-rw-r--r--. 1 oracle oinstall      12583 Sep 24 09:59 orclpdb1.xml
oracle19c>
I now prepare to copy the PDB to the target CDB2. First I check for compatibility, and then I plug in the PDB with a new name, "ORCLPDB1_NEW".
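For readability, the essential statements from the session below (same file path and PDB names) boil down to a compatibility check followed by the plug-in with COPY :
set serveroutput on
declare
  -- returns 'YES' if the unplugged PDB described by the XML can be plugged into this CDB
  compatible constant varchar2(3) :=
    case dbms_pdb.check_plug_compatibility(
           pdb_descr_file => '/tmp/From_Source/orclpdb1.xml',
           pdb_name       => 'ORCLPDB1')
      when true then 'YES'
      else 'NO'
    end;
begin
  dbms_output.put_line(compatible);
end;
/

-- COPY relocates the datafiles according to db_create_file_dest and the target CDB name
create pluggable database orclpdb1_new
  using '/tmp/From_Source/orclpdb1.xml'
  copy;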
SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 837065713 CDB2 20664231 SQL> set serveroutput on SQL> declare 2 compatible constant varchar2(3) := 3 case dbms_pdb.check_plug_compatibility( 4 pdb_descr_file=>'/tmp/From_Source/orclpdb1.xml', 5 pdb_name=>'ORCLPDB1') 6 when true then 'YES' 7 else 'NO' 8 END; 9 begin 10 dbms_output.put_line(compatible); 11 end; 12 / YES PL/SQL procedure successfully completed. SQL> SQL> show parameter db_create NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ db_create_file_dest string /opt/oracle/oradata SQL> SQL> create pluggable database orclpdb1_new 2 using '/tmp/From_Source/orclpdb1.xml' 3 copy; --- copy will copy the datafiles to the location based on db_create_file_dest and CDB database name Pluggable database created. SQL> SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 837065713 CDB2 20910195 SQL> SQL> select dbid, guid, name, open_mode from v$pdbs; DBID GUID NAME OPEN_MODE ---------- -------------------------------- ---------------- ---------- 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY 2205532678 E6BD9C73839C10BEE0550A00275FC834 PDBTMP MOUNTED 4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1_NEW READ WRITE SQL>
The SCN of the target CDB2 has now been set to the much higher 20910195 (it was 20664231 before I "plugged in" ORCLPDB1 as ORCLPDB1_NEW). This is because it "read" the SCN from the headers of the datafiles that were plugged in.
SQL> select dbid, name, current_scn from v$database; DBID NAME CURRENT_SCN ---------- ---------------- ----------- 2778483057 ORCLCDB 20910076 SQL> select dbid, guid, name, open_mode from v$pdbs; DBID GUID NAME OPEN_MODE ---------- -------------------------------- ---------------- ---------- 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY 4057974068 8812E29864F47615E0530100007FA424 ORCLPDB1 MOUNTED 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT MOUNTED SQL> SQL> drop pluggable database orclpdb1 including datafiles; Pluggable database dropped. SQL> SQL> select dbid, guid, name, open_mode from v$pdbs; DBID GUID NAME OPEN_MODE ---------- -------------------------------- ---------------- ---------- 2054948555 88129263B99F4BBDE0530100007F7BDF PDB$SEED READ ONLY 181927604 C07D1F2BA98C23D0E0530100007F7D34 RMANCAT MOUNTED SQL> SQL> quit Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.12.0.0.0 oracle19c>
So, I have "migrated" ORCLPDB1 from ORCLCDB to CDB2 and, in the process, have found that the SCN in CDB2 got reset to a much higher value, because it "inherited" the higher Checkpoint SCN that was in the datafiles of ORCLPDB1.
SQL> select con_id, file#, plugin_change#, checkpoint_change# 2 from v$datafile 3 order by 1,2 4 / CON_ID FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE# ---------- ---------- -------------- ------------------ 1 1 0 20658011 1 3 0 20658011 1 4 0 20658011 1 7 0 20658011 2 5 0 13768423 2 6 0 13768423 2 8 0 13768423 3 58 0 20657657 3 59 0 20657657 3 60 0 20657657 4 69 0 20908595 4 70 0 20908595 4 71 0 20908595 4 72 0 20908595 4 73 0 20908595 4 74 0 20908595 4 75 0 20908595 4 76 0 20908595 18 rows selected. SQL> alter system checkpoint; System altered. SQL> select con_id, file#, plugin_change#, checkpoint_change# 2 from v$datafile 3 order by 1,2 4 / CON_ID FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE# ---------- ---------- -------------- ------------------ 1 1 0 20912767 1 3 0 20912767 1 4 0 20912767 1 7 0 20912767 2 5 0 13768423 2 6 0 13768423 2 8 0 13768423 3 58 0 20657657 3 59 0 20657657 3 60 0 20657657 4 69 0 20912767 4 70 0 20912767 4 71 0 20912767 4 72 0 20912767 4 73 0 20912767 4 74 0 20912767 4 75 0 20912767 4 76 0 20912767 18 rows selected. SQL> col name format a16 SQL> select con_id, name, open_mode from v$pdbs; CON_ID NAME OPEN_MODE ---------- ---------------- ---------- 2 PDB$SEED READ ONLY 3 PDBTMP MOUNTED 4 ORCLPDB1_NEW READ WRITE SQL> alter pluggable database pdbtmp open; Pluggable database altered. SQL> select con_id, file#, plugin_change#, checkpoint_change# 2 from v$datafile 3 order by 1,2 4 / CON_ID FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE# ---------- ---------- -------------- ------------------ 1 1 0 20912767 1 3 0 20912767 1 4 0 20912767 1 7 0 20912767 2 5 0 13768423 2 6 0 13768423 2 8 0 13768423 3 58 0 20912888 3 59 0 20912888 3 60 0 20912888 4 69 0 20912767 4 70 0 20912767 4 71 0 20912767 4 72 0 20912767 4 73 0 20912767 4 74 0 20912767 4 75 0 20912767 4 76 0 20912767 18 rows selected. SQL> alter system checkpoint; System altered. SQL> select con_id, file#, plugin_change#, checkpoint_change# 2 from v$datafile 3 order by 1,2 4 / CON_ID FILE# PLUGIN_CHANGE# CHECKPOINT_CHANGE# ---------- ---------- -------------- ------------------ 1 1 0 20912967 1 3 0 20912967 1 4 0 20912967 1 7 0 20912967 2 5 0 13768423 2 6 0 13768423 2 8 0 13768423 3 58 0 20912967 3 59 0 20912967 3 60 0 20912967 4 69 0 20912967 4 70 0 20912967 4 71 0 20912967 4 72 0 20912967 4 73 0 20912967 4 74 0 20912967 4 75 0 20912967 4 76 0 20912967 18 rows selected. SQL> SQL> alter session set container=ORCLPDB1_NEW; Session altered. SQL> col name format a54 SQL> select name from v$datafile; NAME ------------------------------------------------------ /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_system_ljz7d02h_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_sysaux_ljz7d02l_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_undotbs1_ljz7d02m_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_users_ljz7d02o_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_data_min_ljz7d02p_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_hemant_d_ljz7d02r_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_hemant_d_ljz7d02s_.dbf /opt/oracle/oradata/CDB2/8812E29864F47615E0530100007FA 424/datafile/o1_mf_tpcctab_ljz7d02t_.dbf 8 rows selected. SQL>
CON_ID=1 is Root and the datafiles get the new SCN after a Checkpoint. Note these datafiles still had the "older" SCN 20658011 before I issued the Checkpoint.
Oracle Forensics Response
Posted by Pete On 22/09/23 At 01:07 PM
ODA X10: the new Oracle Database Appliances are already there
ODA X8-2 series was sold for 3 years, and I would have expected the X9-2 series to be on the market for 2-3 years, but I was wrong. The ODA X10 series has just been announced. Maybe Oracle has a different strategy for appliances now; I will explain my thoughts later.
You will notice that Oracle removed the number 2 in the name of this new ODA series. X9-2S becomes X10-S, X9-2L becomes X10-L and X9-2HA becomes X10-HA. The number 2 makes less sense today: it referred to the number of sockets/processors. But X10-S only has 1 processor, X10-L has 2, and X10-HA has 4 (2x 2). For sure, the ODA nodes are still 2-socket servers, but this number was quite confusing for some people.
Now let’s try to find out what’s new, what’s good and what’s not so good.
What is an Oracle Database Appliance?
ODA, or Oracle Database Appliance, is an engineered system from Oracle. Basically, it's an x86-64 server with a dedicated software distribution including Linux, Oracle database software, a Command Line Interface (CLI), a Browser User Interface (BUI) and a virtualization layer. The goal is to simplify database lifecycle management and maximize performance. Market positioning for ODA is quite tricky for Oracle: it sits between OCI (the Oracle public cloud) and Exadata (the highest level of database hardware – a kind of ODA on steroids). For some people, ODA avoids migrating to OCI by bringing enough simplification to Oracle Database environments; for others, ODA is powerful enough for most workloads and Exadata wouldn't make sense. For me, ODA has always been one of my preferred solutions, and undoubtedly a solution to consider.
For Enterprise Edition users, ODA has a strong advantage over its competitors: capacity on demand licensing. You can enable as few as 2 cores per node, meaning a single processor licence. You can later scale up by enabling additional cores according to your needs.
From Intel Xeon to AMD Epyc
ODA X9-2 was available in 3 flavors:
- X9-2S with 1 Xeon CPU, 256GB of RAM and 2x 6.8TB NVMe SSDs
- X9-2L with 2 Xeon CPUs, 512GB of RAM and 2x up to 12x 6.8TB NVMe SSDs
- X9-2HA with 2 nodes (similar to X9-2L without the disks) and one or two disk enclosures with various configurations (SSDs or spinning disks)
The first change you will notice is the switch from Intel Xeon to AMD Epyc CPUs. Oracle already made this change on the Exadata platform several months ago. It's part of a more global move by Oracle to AMD across its x86-64 server lineup, including the servers in OCI.
X9-2 series ODAs were using Xeon Silver 4314 with 16 cores running between 2.4GHz and 3.4GHz, X10 series ODAs use AMD Epyc 9334 with 32 cores running between 2.7GHz and 3.9 GHz. The number of cores is not so important because most of the customers using ODAs are running Standard Edition or are using Enterprise Edition with a limited number of cores. But regarding pure CPU speed, a significant increase is expected, something like 30% for single thread processing.
Having better CPU doesn’t only mean better performance: it also means less licences for Enterprise Edition users. Having 30% faster CPUs means you need 30% less cores to do the same tasks, if you needed 6 Xeon cores for your workload, you may run the same workload on 4 Epyc cores, so on 2 processor licences instead of 3.
Internal SSDs: back to 480GB disks
This was one of my complaints about the X9-2 series: the 2x M.2 internal disks were smaller than those on X8-2, down from 480GB to 240GB. For the X10 series, Oracle brings back 480GB internal disks, and this size is much more comfortable.
RAM: same sizes but expandable up to 3 times
X10-S has 256GB of RAM, like the X9-2S, but you can now triple the size with additional memory modules (768GB). X10-L comes with 512GB in its basic configuration, and you can go up to 1.5TB. X10-HA is basically 2x X10-L with an external storage enclosure, so you have 2x 512GB, 2x 1TB or 2x 1.5TB configurations.
Data disks: same size but fewer disks for X10-L
On X10-S and L, the data disks are the same as in the X9-2 series: 6.8TB NVMe disks. X10-S has the same limitation as X9-2S: only 2 disks and no possible extension.
There is a catch regarding the X10-L. On the X9-2L you could have a maximum of 12 disks, meaning a huge 81TB of RAW storage. The X10-L is now limited to 8 disks, meaning only 54TB of RAW storage. One could argue that it's already a big capacity, but this is RAW TB; usable capacity is only 21TB when using normal redundancy. For some of my customers with fully populated X8-2M ODAs, their databases won't fit anymore in one X10-L… There is another drawback when it comes to upgrading storage on your X10-L: there are only 4 slots for disks on the front panel. So how can you put in 8 disks with only 4 slots? By adding them inside the server, as AIC: Add-In-Card. These disks have a different form factor and you will need to open your server to add them, meaning downtime for your databases. Definitely a loss of flexibility.
X10-HA is not that different compared to X9-2HA: there is still a High Performance (HP) version and a High Capacity (HC) version, the first composed of SSDs only, the second composed of a mix of SSDs and HDDs. SSDs are still 7.68TB each, and only the HC gets a capacity bump thanks to bigger HDDs: from 18TB to 22TB each.
Network interfaces
Nothing new regarding network interfaces. You can have up to 3 of them (2 are optional), and for each you will choose between a quad-port 10GBase-T (copper) or a two-port 10/25GbE (SFP28). You should know that SFP28 won't connect to a 1Gbps fiber network.
Specific software bundle
The latest software bundle for ODA is 19.20, but Oracle has just released a dedicated version of the patch for X10: 19.20.0.1. The next patch bundle will probably be compatible with X10 as well as older versions. Currently, the X7-2, X8-2 and X9-2 series are still supported.
What about editions, licences and support?
First of all, these new ODAs don't support Standard Edition 2 anymore! It's a major change, as a lot of customers are using Standard Edition 2 on these appliances.
It’s crystal clear in the documentation:
Choice of Oracle Database Software:
– Oracle Database 21c Enterprise Edition (DB Systems only)
– Oracle Database 19c Enterprise Edition
My thought is that Oracle will keep the X9-2 series for now, and will reserve X10 and AMD Epyc for Enterprise Edition users. The X9-2 series is still relevant today, as its price and performance match the current needs of most Standard Edition customers.
You may know that ODA is not sold with the database licences: you need to bring yours or buy them at the same time. You will need Enterprise Edition for these new ODAs, starting from 1 processor licence on the S and L models (2 activated cores) and at least 2 processor licences on the HA (2 activated cores per node). Enabling your EE licence on an ODA will actually decrease the number of cores on the server to make sure you are compliant, but be careful because it doesn't prevent you from using unlicensed options. You can also use CPU pools to keep the remaining CPUs available for other purposes, such as running application VMs.
Regarding support, as with other hardware vendors, you'll have to pay for your ODA to be supported in case of hardware failure. The 1st year of support will usually be part of your initial order.
Support for the database versions is limited to 19c and 21c. Don’t forget that only 19c databases are supported with Premier Support. 21c being an innovation release, it’s only for testing.
What are the differences between the 3 models?
The X10-S is an entry price point for a small number of small databases.
The X10-L is much more capable and can get disk expansions. A big infrastructure with hundreds of databases can easily fit on several X10-L. OK, disk capacity is far behind previous X9-2L, but most customers won’t put 10 additional disks in their ODA: a fully populated X9-2L was 3 times more expensive than the base model.
The X10-HA is for RAC users because High Availability is included. The disk capacity is much higher than single node models, and HDDs are still an option. With X10-HA, big infrastructures can be consolidated on a very small number of HA ODAs.
Model           DB Edition   Nodes   U      RAM         RAM Max      RAW TB   RAW TB Max   Base price
ODA X10-S       EE only      1       2      256GB       768GB        13.6     13.6         21'579$
ODA X10-L       EE only      1       2      512GB       1536GB       13.6     54.4         34'992$
ODA X10-HA HP   EE only      2       8/12   2x 512GB    2x 1536GB    46       368          97'723$
ODA X10-HA HC   EE only      2       8/12   2x 512GB    2x 1536GB    390      792          97'723$
(ODA X10 specs summary)

And regarding the price?
Looking into the latest engineered systems price list (search for "exadata price list" and you will easily find it), you will find the X10 series as well as the X9-2 series. Prices for X10-S and X10-L are identical to X9-2S and X9-2L. X10-HA costs 9% more than X9-2HA. X10-HA being quite expensive now (nearly 100'000$), I would definitely compare it to an Exadata Cloud@Customer solution.
Which one should you choose?
If you are using Standard Edition, X10 series is not for you. You will need to order X9-2 series, or keep your older ODAs.
If your databases can comfortably fit in the S model, don’t hesitate as you will probably never need more.
The most interesting model is still the new L: maximum disk capacity is 33% less than its predecessor's, but that's a trade-off for something like 33% more performance thanks to AMD Epyc. The L is quite affordable, still has a great capacity, and is upgradable if you don't buy the full system at first.
If you still want/need RAC and its associated complexity, the HA is for you.
Don’t forget that you will need at least 2 ODAs for Disaster Recovery purposes, using Data Guard. No one would recommend buying a single ODA; it's probably better to have no ODA at all than a single one.
I would still prefer buying 2x ODA X10-L compared to 1x ODA X10-HA. NVMe speed, no RAC and single box is definitely better. And extreme consolidation may not be the best solution.
Conclusion
The ODA X10 series will bring a nice CPU speed improvement thanks to the switch to AMD Epyc, but customers running Standard Edition are not concerned and are limited to the X9-2 series. X10-L is also a little bit disappointing regarding storage possibilities compared to X9-2L. Nevertheless, ODA is still a good platform for database consolidation and simplification. And to this day, it's still very popular among our customers.
The article ODA X10: the new Oracle Database Appliances are already there first appeared on the dbi Blog.
Declaring environments in GitLab
Why do I need to declare environments in GitLab ?
- To track deployed versions on each environment
- To control deployment on environments (e.g. add an approval to deploy production)
- Add environment-related variables
An environment can be linked to a pipeline step. For example, to define the target of a deployment.
Create an environment
1. In the side menu, go to : Operate > Environments
2. Click on the "New environment" button and set the parameters :
- Name: with the name of your environment
- External URL: the URL of your environment (use in environments list for the Open button)
- GitLab agent: An option to define the agent to be used for environment deployment. Can remain blank

You can define several environments for a single project. They will then be available in the environment list in: Operate > Environments

When an environment has been specified in a deployment step, it is then possible to find out which version is deployed in the environment.
Click on the arrow at the beginning of the line to expand the details panel :

It is possible to define environment-specific variables to use in pipelines.
In your project, in the side menu, go to Settings > CI/CD, then expand the Variables section :

To add a variable, click on “Add variable” button and set the parameters :
- Key: the name of your variable
- Value: the value of the variable
- Environment scope: defines the environment in which the variable will be available
Note: the same variable (with the same name) can be defined for each environment.

In the .gitlab-ci.yml file, environment-scoped variables are used just like standard variables (for example: ${K8S_NS}).
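For instance, a deployment job could consume the scoped value like this (a minimal sketch; the job name, the K8S_NS namespace variable and the kubectl command are illustrative assumptions):

deploy_prod:
  stage: deploy
  script:
    # ${K8S_NS} resolves to the value whose scope matches the "production" environment
    - kubectl --namespace "${K8S_NS}" apply -f deployment.yaml
  environment:
    name: production

Because the job declares environment: production, GitLab injects the variable value scoped to that environment.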
Protected environments
Protected environments can be used to prevent unintentional deployments or to authorize only certain users to deploy.
In the side menu, go to Settings > CI/CD and expand the Protected environments section:

Click on “Protect an environment”, select the environment to protect and set the parameters that define who can deploy and who can approve:

When a pipeline reaches a step that deploys to a protected environment, the pipeline waits for deployment approval.

At the same time, the environment deployment waits for approval in the environments list:

Click on the “Approval options” button and approve or reject the deployment:

After approval, the pipeline does not restart automatically. Click on the “Run” button to continue the deployment:

Environments are very useful in GitLab. It is possible to write pipelines without them, but if you don't use environments for your deployments you'll miss out on some really useful features, such as approvals and scoped variables, and it becomes harder to track what is deployed where.
The post Declaring environments in GitLab appeared first on dbi Blog.
Oracle Database 23c Generally Available – Sort Of
Oracle Cloud World is happening, which typically means lots of announcements. One of the welcome announcement was the release of Oracle 23c on OCI Oracle Base Database Service, so there is a production version generally available… Sort of… Why do I say “sort of”? OCI Oracle Base Database Service This is a single cloud service, … Continue reading "Oracle Database 23c Generally Available – Sort Of"
The post Oracle Database 23c Generally Available – Sort Of first appeared on The ORACLE-BASE Blog.Oracle Database 23c Generally Available – Sort Of was first posted on September 21, 2023 at 10:53 am.©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.
SQL Konferenz 2023 in Hanau – Meet Tame, the data monster
After 2 years, or even 3 (the organizers could not agree), the SQL Konferenz took place again in Hanau.

73 speakers shared their experience and knowledge with us in over 80 sessions. There were many incredibly exciting topics. I would like to highlight the sessions “Migration to the Cloud – What could go wrong?” and “Performance improvement of Transact-SQL queries without code changes”.
Migration to the Cloud – What could go wrong?

It was exciting to hear how a simple migration to the cloud can become really difficult due to wrong decisions.
A still active and complex infrastructure, mainly used for reporting, had to be moved to the cloud without Azure DevOps.
The speakers explained how the load paths were redefined, how the security architecture was adopted, and how an attempt was made to carry over all existing structures 1-to-1.
All these experiences were shared with us during the session, and also during the breaks in conversations.
Performance improvement of Transact-SQL queries without code changes

Let's imagine we are sitting on the couch on a Saturday evening, have just baked a nice pizza, and are really looking forward to watching a football game.
Suddenly a customer calls: the SQL Server is completely overloaded and they are looking for help.
Ok, beer and pizza to the side, laptop ON.
It quickly becomes apparent what is causing the overload: the CPU is at 99% utilization. The analysis starts.
We find out that one query is causing the high CPU load and have to look for a solution. The query could definitely be written better, so a change proposal including reindexing was sent to the customer. The answer is: no idea, we have to clarify this first. And as we all know, this can take days.
A lot of days. So we need an ad-hoc solution.
One thing has already been done well: the Query Store is already running. Now we have options through stored procedures like sp_query_store_set_hints, which allow us to influence queries without changing the application code or the database.
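As an illustration, pinning a hint onto the expensive statement from the Query Store could look like this (a hedged sketch assuming SQL Server 2022 or Azure SQL Database; query_id 42 and the MAXDOP hint are purely illustrative):

-- Locate the query_id of the expensive statement captured by the Query Store
SELECT q.query_id, t.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS t
  ON q.query_text_id = t.query_text_id;

-- Attach a hint to that query without touching the application code
EXEC sys.sp_query_store_set_hints @query_id = 42, @query_hints = N'OPTION (MAXDOP 1)';

Once the customer approves a proper fix, the hint can be removed again with sys.sp_query_store_clear_hints.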
In summary, these were 2 incredibly instructive days. I also liked the community, which made for a very pleasant exchange.
Of course we must not forget Tame, the DataMonster. Thank you very much for the great support. I would love to come again next time.

The post SQL Konferenz 2023 in Hanau – Meet Tame, the data monster appeared first on dbi Blog.
Database Vault without Database Vault
Posted by Pete On 18/09/23 At 01:00 PM
Ansible automates roadshow 2023
I attended this Red Hat tour promoting automation, the Ansible Automates roadshow in Geneva, with my colleague Arnaud Berbier. You'll find his review of the day here.
It was the third date of the tour. In case you're interested in automation with Ansible, there are 2 more dates in Germany:
- Frankfurt on October 17
- Munich on October 19

It started with Loïc Avenel, who is the EMEA Manager in field product management for Ansible. He explained his vision of automation with 3 key concepts:
- Think like software developers do in modern development
- Have an automation-first cultural mindset, and this starts with us
- Leverage and share individual skills and experience, by filling the talent gap and upskilling your team members
Loïc put the emphasis on collaboration and gave some “fact” figures to show what happens without it: many companies fail to transform their IT, and this number goes up to 70% according to a financial analysis from Banque Cantonale de Genève.
He then showed 2 common ways enterprises go about this: some by gently updating their tasks to be automated (optimizing), others by totally changing the way they do things (transforming). Neither is bad, but doing both is better!
He used an “Automation maturity curve” to explain his way of being more efficient for a successful transformation. To sum up his view, an extra effort has to be made at the beginning, but in the end you'll benefit more, including from the upcoming AI that everyone is talking about.
He assumes a lot of companies have already made the cultural shift to automate tasks; he calls that Day 0 / Day 1 and names this phase “Infrastructure as Code”. The real next step for automation, which he calls Day 2 or “Operations as Code”, consists in:
- Standardizing operation processes
- Going further in observability with remediations
As you may know, Red Hat is part of Big Blue (IBM), and this Day 2 couldn't come without something from them: the famous Watson, with their “watsonx Code Assistant” generating AI code for playbook deployment. This magic is called Ansible Lightspeed.
He also introduced Event-Driven Ansible, which I find promising given all the plugins and integrations. It consists of observability and automatic remediation directly in a playbook, triggered on demand by specific events.
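To make this more concrete, an Event-Driven Ansible rulebook might look like the sketch below (the webhook source, the condition and the playbook name are illustrative assumptions, not something shown at the event):

- name: Remediate service outages
  hosts: all
  sources:
    # Listen for events posted by a monitoring tool
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Restart the service when monitoring reports it down
      condition: event.payload.status == "down"
      action:
        run_playbook:
          name: restart_service.yml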

The presentation ended with some words about Ansible Automation Platform (replacing Ansible Tower). They push the concept further with these extra features:
- Automation execution environments
- Automation mesh
- Ansible content collections
- Ansible content tools
- Red Hat Insights
- Automation services catalog
- Automation Hub
We then had a talk from Bertrand d'Hérouville, an Edge Specialist Solution Architect. He presented how Ansible automation can be extended to Edge devices, taking his home automation as an example.
The last talk before lunch was from a Red Hat partner: Jean-François Pujol from Cisco presented how he used Ansible to manage network devices, another great success story!
Just before noon, it was time for a very nice lunch buffet. Thanks to Red Hat and the partners for this event.
During lunch, I was thinking about what we had seen in the morning: a lot of theory and not enough concrete usage or demos. Fortunately, we had one of the best slides of the day.

Demos will now start!!
Christian Jung from Red Hat did a small poll by show of hands to see who in the room was familiar with playbooks. It was about one third of the people in the room, which surprised me a bit. He started his demo by writing a basic playbook. After a few minutes, he showed us Lightspeed, a VS Code plugin that analyzes what you type and helps you develop faster. Below is an example at line 7.

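For readers who are not familiar with playbooks, a basic one like the kind written in the demo could look like this (a minimal sketch; the host group, package and service names are illustrative assumptions):

- name: Install and start a web server
  hosts: webservers
  become: true
  tasks:
    - name: Install httpd
      ansible.builtin.package:
        name: httpd
        state: present
    - name: Make sure httpd is running and enabled at boot
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true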
After that, we had another successful use case with Ansible from Sylvain Chen. He explained his success story at SIX: in an OpenShift environment, he managed to develop operators and used Ansible Automation Platform to do it.
The last talk was with RegData, about the importance of protecting data on on-premises or cloud infrastructures, using a lot of Ansible features like:
- Generic roles
- Variables management
- Templating
This event was really great for learning more about the new features around Ansible, for example Lightspeed, Event-Driven Ansible, or the new Ansible Tower, a.k.a. Ansible Automation Platform.
I recommend joining the next events if you would like to learn more about Ansible.
You will find the external link to the Ansible Automates roadshow 2023 here.
Thanks again dbi services for this opportunity.
The post Ansible automates roadshow 2023 appeared first on dbi Blog.
Oracle to the rescue of MGM with SQL Firewall?
It is true that the title is somewhat daring, and it's unlikely that SQL Firewall would have changed anything about the attack that several casinos in Las Vegas suffered last week. However, the opportunity was too good to pass up on this topic.

As I write this blog, Las Vegas casinos are experiencing a series of cyberattacks. It turns out that at the same time, Oracle CloudWorld is taking place in Las Vegas, and one of Oracle's flagship products, the database, offers a new feature in its 23c version called 'SQL Firewall'. So, the opportunity was too good not to write a short blog about this new functionality. I would like to emphasize that this is only a very brief overview of all the features of DBMS_SQL_FIREWALL. Oracle provides extensive documentation on this package/feature, and you will find several blogs on the internet that go beyond the tests below (have a look at my conclusion).
How does SQL Firewall work?
The principle of the package is very simple and is illustrated by the image below, taken from the Oracle documentation:

- First, we capture the standard activity of an application (Capture SQL Activities). Naturally, the capture must run for a sufficient amount of time to cover all specific scenarios.
- By examining this activity (Review), we ensure that there is nothing abnormal.
- We generate the list (Generate Allow-List) of allowed execution contexts. If necessary, this list can be modified by adding or removing specific executions with the DBMS_SQL_FIREWALL.ADD_ALLOWED_CONTEXT, DBMS_SQL_FIREWALL.DELETE_ALLOWED_CONTEXT and DBMS_SQL_FIREWALL.DELETE_ALLOWED_SQL procedures (a short sketch follows this list).
- We enable the list (Enable Allow-List).
- We monitor (Monitor) the operation using the DBA_SQL_FIREWALL_VIOLATIONS dictionary view.
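For instance, allowing an additional client IP address for a user without re-running a capture could look like the sketch below (hedged: it assumes the 23c signature username / context_type / value for ADD_ALLOWED_CONTEXT, and the IP address is purely illustrative):

SQL> begin
  dbms_sql_firewall.add_allowed_context (
    username     => 'app_user',
    context_type => dbms_sql_firewall.ip_address,
    value        => '10.0.0.80');
end;
/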
For this example I will need two users:
- schema_owner who is the owner of the tables & views of the application
- app_user who needs to access the application
We could use a dedicated user to manage SQL Firewall by granting them the sql_firewall_admin role, but as stated in my introduction, I'll keep this example very short and simple.
Users creation
Creation of the user schema_owner:
SQL> create user schema_owner identified by schema_owner quota unlimited on users;
grant connect,resource to schema_owner;
grant create session, grant any privilege to schema_owner;
User created.
Creation of the user app_user:
SQL> create user app_user identified by app_user;
grant create session to app_user;
grant select any table on schema schema_owner to app_user;
User created.
Objects creation
Now we will create a few objects in our schema_owner schema:
SQL> CREATE TABLE schema_owner.customers
( customer_id number(10) NOT NULL,
customer_name varchar2(50) NOT NULL,
city varchar2(50),
credit_card number(16),
CONSTRAINT customers_pk PRIMARY KEY (customer_id)
);
Table created.
SQL> create view schema_owner.non_sensitive_view as select customer_name, city from schema_owner.customers;
View created.
Data insertion
We will now insert data in our table:
SQL> insert into schema_owner.customers values (001,'steulet','Delemont',00001111222233334444);
insert into schema_owner.customers values (002,'errard','Courrendlin',2222333344445555);
insert into schema_owner.customers values (003,'frund','Porrendutry',7777888899990000);
1 row created.
Start the capture of the statements (Capture SQL Activities)
Before being able to capture the statements, we have to be sure that SQL Firewall is enabled:
SQL> select status
from dba_sql_firewall_status;
STATUS
------------
DISABLED
SQL> exec dbms_sql_firewall.enable;
PL/SQL procedure successfully completed.
SQL> select status
from dba_sql_firewall_status;
STATUS
------------
ENABLED
We can now start the capture process of the standard context for the user app_user using the procedure below:
SQL> begin
dbms_sql_firewall.create_capture (
username => 'app_user',
top_level_only => false,
start_capture => true);
end;
/
PL/SQL procedure successfully completed.
Let’s run the application
We now need to run the application for the necessary duration in order to collect all user context information related to those who will be querying the application. We will greatly simplify this phase by simply performing a select on the view we created earlier. This select will be done in a separate session using the application user login & password:
[oracle@grs-oraclecloudinstance bin]$ ./sqlplus app_user/app_user@free
SQL*Plus: Release 23.0.0.0.0 - Developer-Release on Mon Sep 18 03:16:56 2023
Version 23.2.0.0.0
Copyright (c) 1982, 2023, Oracle. All rights reserved.
Connected to:
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0
SQL> set linesize 200 pagesize 40
SQL> select * from schema_owner.non_sensitive_view;
CUSTOMER_NAME CITY
-------------------------------------------------- ---------------------------------
steulet Delemont
errard Courrendlin
Let’s stop the capture
We can assume that the application ran for the necessary duration to catch all the contextual information, so we stop the capture:
SQL> exec dbms_sql_firewall.stop_capture('app_user');
PL/SQL procedure successfully completed.
Let's have a look at what we captured (Review)
When querying dba_sql_firewall_capture_logs, one can view the logs as demonstrated below:
SQL> column command_type format a12
column current_user format a10
column client_program format a20
column os_user format a8
column ip_address format a10
column sql_text format a30
select command_type, current_user, client_program, os_user, ip_address, sql_text from dba_sql_firewall_capture_logs where username='APP_USER';
COMMAND_TYPE CURRENT_US CLIENT_PROGRAM OS_USER IP_ADDRESS SQL_TEXT
------------ ---------- -------------------- -------- ---------- ------------------------------
SELECT APP_USER sqlplus@grs-oraclecl oracle 10.0.0.79 SELECT * FROM SCHEMA_OWNER.NON
oudinstance (TNS V1- _SENSITIVE_VIEW
V3)
SELECT APP_USER sqlplus@grs-oraclecl oracle Local SELECT DECODE (USER,:"SYS_B_0"
oudinstance (TNS V1- ,XS_SYS_CONTEXT (:"SYS_B_1",:"
V3) SYS_B_2"),USER) FROM SYS.DUAL
SELECT APP_USER sqlplus@grs-oraclecl oracle 10.0.0.79 SELECT DECODE (USER,:"SYS_B_0"
oudinstance (TNS V1- ,XS_SYS_CONTEXT (:"SYS_B_1",:"
V3) SYS_B_2"),USER) FROM SYS.DUAL
To read the output more easily you will find a screenshot below:

Using DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST, we generate the allow-list (Generate Allow-List):
SQL> exec DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST ('APP_USER');
PL/SQL procedure successfully completed.
Let's protect our database (Enable Allow-List)
Using the procedure below, we will activate the allow-list and block violations:
SQL> begin
dbms_sql_firewall.enable_allow_list (
username => 'app_user',
enforce => dbms_sql_firewall.enforce_all,
block => true);
end;
/
PL/SQL procedure successfully completed.
What about a test?
Let’s run a query that hasn’t been run before. We get a SQL Firewall violation:
[oracle@grs-oraclecloudinstance bin]$ ./sqlplus app_user/app_user@free
SQL> select credit_card from schema_owner.customers;
select credit_card from schema_owner.customers
*
ERROR at line 1:
ORA-47605: SQL Firewall violation
What can we find in the log?
Inspecting the log, we can see some details regarding the rule violation:
SQL> select sql_text,
firewall_action,
ip_address,
cause,
occurred_at
from dba_sql_firewall_violations;
SQL_TEXT FIREWAL IP_ADDRESS CAUSE OCCURRED_AT
------------------------------ ------- ---------- -------------------- -------------
SELECT CREDIT_CARD FROM SCHEMA Blocked 10.0.0.79 SQL violation 18-SEP-23 04.08.54.371122 AM +00:00
_OWNER.CUSTOMERS
To read the output more easily you will find a screenshot below:

Oracle provides a series of tools to control data access, and SQL Firewall joins tools such as DBMS_CRYPTO, TDE and Oracle Virtual Private Database, to name just a few. SQL Firewall is used in this blog to block queries, but it can also be configured for query monitoring only. It's also interesting to note that we can use Oracle Data Pump to export and import SQL Firewall captures and allow-list metadata. Of course, on large volumes of data, SQL Firewall can have an impact on performance (I didn't test it). In such a context, Oracle recommends the two measures below:
- Allocate at least an additional 2G to the LARGE_POOL_SIZE parameter setting, on top of the existing LARGE_POOL_SIZE requirement.
- Resize the SGA_TARGET parameter setting to include this additional requirement. Ensure that the final size is 8G or more.
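As a minimal sketch of what this could look like (the values are purely illustrative and assume an SPFILE-based instance that is restarted afterwards):

SQL> -- add roughly 2G on top of the existing large pool requirement
SQL> alter system set large_pool_size = 4G scope=spfile;
SQL> -- make sure the SGA covers the increase (final size 8G or more)
SQL> alter system set sga_target = 10G scope=spfile;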
Among the different blogs that address this topic we can find:
- 23c Oracle Firewall from Rob Watson
- SQL Firewall Oracle 23c from Pete Finnigan
- SQL Firewall in Oracle Database 23c from Oracle-Base (Tim)
The post Oracle to the rescue of MGM with SQL Firewall? appeared first on dbi Blog.
Clone Any Voice with AI - Locally Install XTTS Model
This video shows a step-by-step tutorial on how to install and run the Coqui XTTS model locally. XTTS is a voice generation model that lets you clone voices into different languages using just a quick 3-second audio clip.
Commands Used:
!pip install transformers
!pip install tts
from TTS.api import TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1", gpu=True)
tts.tts_to_file(text="This is my new cloned voice in AI. If you like, don't forget to subscribe to this channel.", file_path="output.wav", speaker_wav="speaker.wav", language="en")
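As a hypothetical variation (not shown in the video), cloning the same reference voice into another language only requires changing the text and the language code, assuming the same speaker.wav sample:

# German output using the same 3-second reference sample
tts.tts_to_file(text="Dies ist meine geklonte Stimme.",
                file_path="output_de.wav",
                speaker_wav="speaker.wav",
                language="de")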