
Oracle ASM Disk Creation and Management Guide

Oracle ASM Disk Creation and Management Guide Oracle ASM Disk Creation and Management Guide | Technical Blog Oracle ASM Disk Creation and Management Guide Complete Guide to Creating and Managing Disks in ASM Technical Documentation | Oracle Database Storage Administration ← Back to Home Overview Prerequisites Disk Creation Verification Table of Contents 1. ASM Overview 2. Prerequisites and Planning 3. Disk Discovery 4. Disk Preparation 5. Creating ASM Disks 6. Adding Disks to Diskgroups 7. Verification Steps 8. Maintenance Operations 9. Best Practices 1. ASM Overview Oracle Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager built into Oracle Database. ASM simplifies database storage administration by providing a vertical integration of the file system and volume manager specifically designed for Oracle Database files. What is ASM? Volume Manager: Manages physical disks and provides logical volumes File System: Provides a cluster-aware file system for Oracle database files Storage Virtualization: Abstracts physical storage into logical disk groups Automatic Rebalancing: Distributes data evenly across available disks Redundancy Options: Supports normal, high, and external redundancy levels Key Concepts Component Description Purpose ASM Disk Physical disk or disk partition managed by ASM Basic storage unit Disk Group Logical collection of ASM disks Storage pool for database files Failure Group Set of disks that share a common resource Redundancy and availability ASM Instance Special Oracle instance for managing ASM storage Storage management and metadata 2. Prerequisites and Planning System Requirements Oracle Grid Infrastructure installed and configured ASM instance running (+ASM) Raw devices, block devices, or NFS volumes available Appropriate permissions (root for disk preparation, grid for ASM operations) ASMLib or UDEV configured (for Linux environments) Planning Considerations Disk Size: Plan for current and future storage needs Redundancy Level: Choose between External, Normal, or High redundancy Performance: Consider I/O patterns and throughput requirements Failure Groups: Design for hardware failure isolation Disk Group Strategy: Separate DATA, FRA, and REDO disk groups Redundancy Levels Explained Redundancy Mirroring Min Disks Usable Space Use Case External None (RAID/SAN handles it) 1 100% Hardware RAID arrays Normal 2-way mirroring 2 50% Standard production (recommended) High 3-way mirroring 3 33% Mission-critical systems 3. Disk Discovery Check Available Disks # List all block devices lsblk # List all SCSI devices lsscsi # Check disk details fdisk -l # View disk partitions cat /proc/partitions # Check for existing ASM disks ls -l /dev/oracleasm/disks/ # Check disk usage df -h Sample Output: [root@server ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 50G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 49G 0 part / sdb 8:16 0 100G 0 disk sdc 8:32 0 100G 0 disk sdd 8:48 0 100G 0 disk Verify Disk Availability # Check if disks are already in use pvdisplay /dev/sdb pvdisplay /dev/sdc pvdisplay /dev/sdd # Expected output for unused disks: “Failed to find physical volume” # Check for existing partitions parted /dev/sdb print parted /dev/sdc print parted /dev/sdd print 4. Disk Preparation Create Partitions (If Needed) Warning: Partitioning will destroy all data on the disk. Ensure backups are complete before proceeding. 
# Interactive partitioning with fdisk
fdisk /dev/sdb

# Commands within fdisk:
# n - Create new partition
# p - Primary partition
# 1 - Partition number
# [Enter] - Accept default first sector
# [Enter] - Accept default last sector (use entire disk)
# w - Write changes and exit

# Repeat for other disks
fdisk /dev/sdc
fdisk /dev/sdd

Set Partition Type

# Within fdisk, set partition type for ASM
# t - Change partition type
# 1 - Partition number
# 8e - Linux LVM type (or fd for Linux RAID)

# Verify partition table
fdisk -l /dev/sdb
fdisk -l /dev/sdc
fdisk -l /dev/sdd

Configure Disk Permissions

# Set ownership to grid user (ASM owner)
chown grid:asmadmin /dev/sdb1
chown grid:asmadmin /dev/sdc1
chown grid:asmadmin /dev/sdd1

# Set appropriate permissions
chmod 660 /dev/sdb1
chmod 660 /dev/sdc1
chmod 660 /dev/sdd1

# Verify permissions
ls -l /dev/sd[bcd]1

5. Creating ASM Disks

Method 1: Using ASMLib (Recommended for Linux)

Configure ASMLib

# Initialize ASMLib (run once)
oracleasm configure -i

# Sample configuration:
# Default user to own the driver interface []: grid
# Default group to own the driver interface []: asmadmin
# Start Oracle ASM library driver on boot (y/n) [n]: y
# Scan for Oracle ASM disks on boot (y/n) [y]: y

# Load ASMLib module
oracleasm init

# Check ASMLib status
oracleasm status

Create ASM Disks

# Create ASM disk for DATA diskgroup
oracleasm createdisk DATA01 /dev/sdb1

# Create ASM disk for DATA diskgroup (additional)
oracleasm createdisk DATA02 /dev/sdc1

# Create ASM disk for FRA diskgroup
oracleasm createdisk FRA01 /dev/sdd1

# List all ASM disks
oracleasm listdisks

# Scan for ASM disks
oracleasm scandisks

# Query disk details
oracleasm querydisk -d DATA01
oracleasm querydisk -d DATA02
oracleasm querydisk -d FRA01

Sample Output:
[root@server ~]# oracleasm listdisks
DATA01
DATA02
FRA01
[root@server ~]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk

Method 2: Using UDEV Rules

# Create UDEV rules file
vi /etc/udev/rules.d/99-oracle-asmdevices.rules

# Add rules for each disk
KERNEL=="sdb1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdd1", OWNER="grid", GROUP="asmadmin", MODE="0660"

# Reload UDEV rules
udevadm control --reload-rules
udevadm trigger

# Verify permissions persist after reboot
ls -l /dev/sd[bcd]1

Set ASM Disk Discovery Path

# Connect as grid user
su - grid

# Set ASM_DISKSTRING parameter
sqlplus / as sysasm
SQL> ALTER SYSTEM SET ASM_DISKSTRING = '/dev/oracleasm/disks/*' SCOPE=BOTH;

Alternative Discovery Paths:
ASMLib: /dev/oracleasm/disks/*
Raw devices: /dev/raw/*
Block devices: /dev/sd*
Multipath: /dev/mapper/*
NFS: /nfs_mount_point/*

6. Adding Disks to Diskgroups

Connect to ASM Instance

# Switch to grid user
su - grid

# Set ASM environment
export ORACLE_SID=+ASM
export ORACLE_HOME=/u01/app/19.0.0/grid

# Connect to ASM instance
sqlplus / as sysasm

# Or use asmcmd
asmcmd

Check Existing Diskgroups

SQL> SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;

NAME        STATE     TYPE      TOTAL_MB    FREE_MB
----------- --------- --------- ----------- ----------
DATA        MOUNTED   NORMAL    204800      150000
FRA         MOUNTED   NORMAL    102400      80000

Create New Diskgroup

-- Create DATA diskgroup with Normal redundancy
CREATE
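The CREATE DISKGROUP statement above is cut off in the original. A minimal sketch of what the remaining steps typically look like, assuming Normal redundancy and the ASMLib disk paths created earlier; the failure-group names, attribute values, and the DATA03 disk in the ALTER example are illustrative, not from the original:

SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/dev/oracleasm/disks/DATA01'
       FAILGROUP fg2 DISK '/dev/oracleasm/disks/DATA02'
       ATTRIBUTE 'compatible.asm' = '19.0', 'compatible.rdbms' = '19.0';

-- Add a disk to an existing diskgroup; ASM rebalances data automatically
SQL> ALTER DISKGROUP DATA ADD DISK '/dev/oracleasm/disks/DATA03' REBALANCE POWER 4;

-- Monitor the rebalance until EST_MINUTES reaches 0
SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;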

DNS Configuration for Oracle SCAN

DNS Configuration for Oracle SCAN Complete Guide to Single Client Access Name Configuration in Oracle RAC Table of Contents 1. SCAN Overview and Architecture 2. Prerequisites and Requirements 3. DNS Server Configuration 4. Creating DNS Records 5. Verification and Testing 6. Grid Infrastructure Integration 7. Troubleshooting 8. Best Practices 1. SCAN Overview and Architecture Single Client Access Name (SCAN) is a critical component of Oracle Real Application Clusters (RAC) that provides a single network name for clients to access any Oracle database running in a cluster. SCAN eliminates the need for clients to change connection strings when nodes are added to or removed from the cluster. What is SCAN? SCAN is a fully qualified domain name (FQDN) registered in DNS that resolves to multiple IP addresses. By default, Oracle recommends configuring three SCAN IP addresses to provide high availability and load balancing for client connections. Key Benefits Simplified Client Configuration: Single name for all database connections regardless of cluster size High Availability: Multiple IP addresses provide redundancy and failover capability Load Balancing: Automatic distribution of client connections across SCAN listeners Scalability: Add or remove nodes without updating client connection strings Location Transparency: Clients remain unaware of cluster topology changes SCAN Architecture SCAN Architecture DiagramClient Applications → DNS Server → SCAN Name (3 IPs) → SCAN Listeners → RAC Nodes Component Description Quantity SCAN Name Fully qualified domain name (FQDN) 1 per cluster SCAN IP Addresses Virtual IP addresses assigned to SCAN 3 (recommended) SCAN Listeners Oracle listeners running on SCAN IPs 3 (one per SCAN IP) DNS Server Resolves SCAN name to multiple IPs 1 or more (HA) SCAN Resolution Process Client application requests connection to database using SCAN name DNS server resolves SCAN name to 3 IP addresses (round-robin) Client attempts connection to first IP address returned by DNS SCAN listener accepts connection and redirects to appropriate node listener Node listener establishes database connection on local instance If SCAN listener unavailable, client tries next IP address automatically 2. Prerequisites and Requirements Network Requirements Public network configured and operational on all cluster nodes Three available IP addresses in the same subnet as public network IP addresses must not be in use or reserved by other systems Network switches and routers configured to allow SCAN traffic Firewall rules permitting traffic on port 1521 (or custom listener port) DNS Requirements Critical Requirement:DNS server must support round-robin resolution for multiple A records with the same hostname. This is essential for SCAN functionality. 
DNS server installed and operational (BIND, Microsoft DNS, or other) Forward lookup zone configured for domain Reverse lookup zones configured for IP subnets (recommended) Appropriate permissions to create/modify DNS records DNS server configured in /etc/resolv.conf on all nodes Oracle Grid Infrastructure Requirements Oracle Grid Infrastructure 11.2.0.1 or higher Three SCAN VIP resources will be created during installation Three SCAN listener resources will be created automatically Cluster nodes must have DNS resolution working properly Pre-Configuration Checklist Item Requirement Verification Command DNS Resolution Working DNS on all nodes nslookup google.com Domain Name FQDN decided for SCAN Example: rac-scan.example.com IP Addresses 3 IPs allocated and available ping <ip_address> (should fail) Subnet Match Same subnet as public network ifconfig or ip addr show DNS Server Access Admin credentials Login to DNS management console 3. DNS Server Configuration Supported DNS Servers Oracle SCAN supports any DNS implementation that follows standard DNS protocols. Common implementations include: BIND (Berkeley Internet Name Domain): Most common on Linux/Unix systems Microsoft DNS: Integrated with Windows Active Directory dnsmasq: Lightweight DNS forwarder for smaller deployments PowerDNS: High-performance authoritative DNS server BIND DNS Server Configuration Install BIND (if not already installed) # RHEL/Oracle Linux/CentOS sudo yum install bind bind-utils -y # Ubuntu/Debian sudo apt-get install bind9 bind9utils -y # Verify installation named -v Configure BIND Main Configuration File Edit /etc/named.conf to define zones: # Edit named.conf sudo vi /etc/named.conf # Add or verify these options options { directory “/var/named”; dump-file “/var/named/data/cache_dump.db”; statistics-file “/var/named/data/named_stats.txt”; memstatistics-file “/var/named/data/named_mem_stats.txt”; allow-query { any; }; recursion yes; # Listen on all interfaces listen-on port 53 { any; }; listen-on-v6 port 53 { any; }; # Forward requests for external domains forwarders { 8.8.8.8; 8.8.4.4; }; }; # Define forward lookup zone zone “example.com” IN { type master; file “example.com.zone”; allow-update { none; }; }; # Define reverse lookup zone (for 192.168.1.0/24) zone “1.168.192.in-addr.arpa” IN { type master; file “192.168.1.rev”; allow-update { none; }; }; Set Proper Ownership and Permissions # Set ownership sudo chown named:named /etc/named.conf # Set permissions sudo chmod 640 /etc/named.conf # Verify configuration syntax sudo named-checkconf /etc/named.conf Microsoft DNS Server Configuration Access DNS Manager Open Server Manager on Windows Server Navigate to Tools → DNS Connect to your DNS server Expand Forward Lookup Zones Configure Zone Right-click Forward Lookup Zones and select New Zone Select Primary Zone and click Next Enter zone name (e.g., example.com) and click Next Create new zone file and click Next Allow dynamic updates if required or select Do not allow dynamic updates Click Finish Round-Robin Configuration: Microsoft DNS enables round-robin by default. Verify this setting in DNS server properties under the Advanced tab. Ensure Enable round robin is checked. 4. 
Creating DNS Records Planning SCAN Configuration Parameter Example Value Description SCAN Name rac-scan Hostname portion of FQDN Domain example.com DNS domain name FQDN rac-scan.example.com Complete SCAN name SCAN IP 1 192.168.1.201 First SCAN IP address SCAN IP 2 192.168.1.202 Second SCAN IP address SCAN IP 3 192.168.1.203 Third SCAN IP address BIND DNS – Forward Zone Configuration Create Forward Zone File # Create zone file sudo vi /var/named/example.com.zone # Add the following content $TTL 86400 @ IN SOA ns1.example.com. admin.example.com. ( 2024011101 ; Serial (YYYYMMDDNN) 3600 ; Refresh (1 hour) 1800 ; Retry (30 minutes) 604800 ; Expire (1 week) 86400 ) ; Minimum TTL (1 day) ; Name server records @ IN NS ns1.example.com. ns1 IN A 192.168.1.10 ; SCAN records (critical for Oracle RAC) rac-scan IN A 192.168.1.201 rac-scan IN A 192.168.1.202 rac-scan IN A 192.168.1.203 ; Individual node records rac1 IN A 192.168.1.101 rac2 IN A 192.168.1.102
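The forward zone above pairs with the reverse zone file referenced in named.conf (192.168.1.rev), which is not shown in the original. A minimal sketch, assuming the same addresses, followed by the checks that confirm round-robin resolution of the SCAN name:

# /var/named/192.168.1.rev (sketch)
$TTL 86400
@    IN SOA  ns1.example.com. admin.example.com. ( 2024011101 3600 1800 604800 86400 )
@    IN NS   ns1.example.com.
10   IN PTR  ns1.example.com.
101  IN PTR  rac1.example.com.
102  IN PTR  rac2.example.com.
201  IN PTR  rac-scan.example.com.
202  IN PTR  rac-scan.example.com.
203  IN PTR  rac-scan.example.com.

# Validate zone files and reload BIND
sudo named-checkzone example.com /var/named/example.com.zone
sudo named-checkzone 1.168.192.in-addr.arpa /var/named/192.168.1.rev
sudo systemctl restart named

# Each query should return all three SCAN IPs, with the order rotating (round-robin)
nslookup rac-scan.example.com
dig +short rac-scan.example.com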

EM Express

Enterprise Manager Database Express (EM Express) provides a web-based interface for managing Oracle databases. In Oracle 19c, while DBCA automatically enables EM Express for the Container Database (CDB), Pluggable Databases (PDB) require manual configuration. This guide covers the complete setup process and explains how to switch between the modern Java JET interface and the feature-rich Flash-based UI. Table of Contents 1. Overview of EM Express in Oracle 19c 2. Prerequisites 3. Configuring EM Express for CDB 4. Configuring EM Express for PDB 5. Java JET vs Flash-Based UI 6. Switching Between UI Modes 7. Best Practices 1. Overview of EM Express in Oracle 19c Enterprise Manager Database Express is a lightweight, browser-based database management tool built into Oracle Database. Starting with Oracle 19c, the interface was modernized using Oracle JET (JavaScript Extension Toolkit), replacing the previous Flash-based implementation. EM Express Architecture Overview EM Express is a web-based management interface built into Oracle Database 19c that communicates through the XML DB HTTP listener on configurable HTTPS ports. Component Description Default Port CDB EM Express Manages the Container Database and common users 5500 (HTTPS) PDB EM Express Manages individual Pluggable Database resources 5501+ (HTTPS) XML DB HTTP Listener Handles web requests for EM Express Configurable   2. Prerequisites Before configuring EM Express, ensure the following requirements are met: Oracle Database 19c installed and running SYSDBA privileges for configuration tasks Network access to the configured HTTPS ports Modern browser with JavaScript enabled (Chrome, Firefox, Edge) Firewall rules configured to allow traffic on EM Express ports Important Each PDB requires a unique HTTPS port. Plan your port allocation strategy before configuring multiple PDBs. 3. Configuring EM Express for CDB By default, DBCA configures EM Express for the CDB during database creation. However, you may need to verify or modify the configuration. Follow these steps to configure the HTTPS port for the Container Database. 1 . Connect to the Database as SYSDBA [oracle@dbserver ~]$ sqlplus / as sysdba SQL*Plus: Release 19.0.0.0.0 – Production on Tue Aug 11 13:31:02 2020 Version 19.8.0.0.0 Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 – Production Version 19.8.0.0.0 2. Verify Current Container Context SQL> SHOW CON_NAME CON_NAME —————————— CDB$ROOT 3. Check Existing HTTPS Port Configuration SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL; DBMS_XDB_CONFIG.GETHTTPSPORT() —————————— 0 A return value of 0 indicates that the HTTPS port is not configured. SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL; DBMS_XDB_CONFIG.GETHTTPSPORT() —————————— 0 A return value of 0 indicates that the HTTPS port is not configured. 4. Configure the HTTPS Port for CDB SQL> EXEC DBMS_XDB_CONFIG.SETHTTPSPORT(5500); PL/SQL procedure successfully completed. 5. Verify the Configuration SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL; DBMS_XDB_CONFIG.GETHTTPSPORT() —————————— 5500 Access URL: CDB EM Express is now accessible at: https://:5500/em Figure 1: EM Express Login Screen – CDB The EM Express login interface displays fields for Username, Password, and Container Name, with the Oracle Enterprise Manager Database Express branding. 4. Configuring EM Express for PDB Unlike the CDB, Pluggable Databases do not have EM Express enabled by default. Each PDB requires manual configuration with a unique HTTPS port. 1. 
List Available PDBs

SQL> SHOW PDBS

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
------ ------------------------------ ---------- ----------
     2 PDB$SEED                       READ ONLY  NO
     3 WORAVRPDB1                     READ WRITE NO

2. Switch to the Target PDB

SQL> ALTER SESSION SET CONTAINER = WORAVRPDB1;
Session altered.

SQL> SHOW CON_NAME
CON_NAME
------------------------------
WORAVRPDB1

3. Verify Current HTTPS Port (Expected: 0)

SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL;
DBMS_XDB_CONFIG.GETHTTPSPORT()
------------------------------
0

4. Configure Unique HTTPS Port for PDB

SQL> EXEC DBMS_XDB_CONFIG.SETHTTPSPORT(5501);
PL/SQL procedure successfully completed.

5. Confirm the Port Assignment

SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL;
DBMS_XDB_CONFIG.GETHTTPSPORT()
------------------------------
5501

Access URL: PDB EM Express is now accessible at: https://<hostname>:5501/em

Figure 2: EM Express Login Screen – PDB
The PDB-specific EM Express login interface accessed via port 5501, showing the same authentication fields for pluggable database management.

Port Assignment Strategy for Multiple PDBs

Container   Recommended Port  Access URL
CDB$ROOT    5500              https://hostname:5500/em
PDB1        5501              https://hostname:5501/em
PDB2        5502              https://hostname:5502/em
PDB3        5503              https://hostname:5503/em

5. Java JET vs Flash-Based UI

Oracle 19c introduced a redesigned EM Express interface built on Oracle JET technology. While the new interface offers improved performance and a modern look, it provides a simplified feature set compared to the legacy Flash-based UI.

Java JET UI (Default): Oracle 19c+, modern, fast and responsive interface, no Flash dependency, performance monitoring focus, limited administrative features, no Server tab (user management).

Flash-Based UI (Legacy): Pre-19c style, feature-rich, full administrative capabilities, user and schema management, storage administration, configuration options, complete Server tab access.

Known Limitation: The Java JET UI lacks the Server tab, which means user creation and permission management must be performed via SQL*Plus or by switching to the Flash-based UI.

Figure 3: Java JET UI – Database Home (Performance Tab Only)
The modern Java JET interface displays a simplified Database Home page with Status information (uptime, version, platform) and Performance graphs (Activity, Services, Instances). Only the Performance tab is available in this UI mode.

Figure 4: Flash-Based UI – Complete Administrative Interface
The legacy Flash-based UI provides full administrative capabilities with multiple tabs including Configuration, Storage, Security, and Performance. The interface displays comprehensive performance metrics, activity graphs, and complete database management options.

6. Switching Between UI Modes

Oracle provides a built-in script to toggle between the Java JET and Flash-based interfaces. This is particularly useful when you need access to administrative features not available in the modern UI.

Switch to Flash-Based UI (EMX Mode)

SQL> @?/rdbms/admin/execemx emx
Session altered.
no rows selected
old 1: select nvl('&1','omx') p1 from dual
new 1: select nvl('emx','omx') p1 from dual
P1
--
emx
PL/SQL procedure successfully completed.
Session altered.

Important: After running the script, log out and log back in to EM Express for the changes to take effect.

Revert to Java JET UI (OMX Mode)

SQL> @?/rdbms/admin/execemx omx
Session altered.
no rows selected
old 1: select nvl('&1','omx') p1 from dual
new 1: select nvl('omx','omx') p1 from dual
P1
--
omx
PL/SQL procedure successfully completed.
Session altered.
Quick Reference: UI Mode Commands

Command                          UI Mode       Description
@?/rdbms/admin/execemx emx       Flash-Based   Switches to the legacy Flash UI with full administrative features (re-login required)
@?/rdbms/admin/execemx omx       Java JET      Reverts to the default performance-focused JET UI (re-login required)
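Because the Java JET UI has no Server tab, routine user administration has to be done from SQL*Plus (or after switching to the Flash-based UI). A minimal sketch, using a hypothetical user name, password, and tablespace quota that are not from the original:

-- Connect as a privileged user and switch to the target PDB
SQL> ALTER SESSION SET CONTAINER = WORAVRPDB1;

-- Illustrative application user
SQL> CREATE USER appuser IDENTIFIED BY "ChangeMe#2024" DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
SQL> GRANT CREATE SESSION, CREATE TABLE TO appuser;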

XAG

Install and configure XAG Services Manager Oracle Grid Infrastructure provides the necessary components to manage high availability (HA) for any business critical application. Oracle Grid Infrastructure Agents (XAG) are Oracle Grid Infrastructure components that provide the HA framework to application resources and resource types managed through the agent management interface, AGCTL. This framework provides a complete, ready to use application HA solution that contains pre-defined Oracle Grid Infrastructure resource configurations and agents to integrate applications for complete application HA.Oracle GoldenGate Veridata12c is a high-speed,low-impact, heterogeneous data comparison ,and repair solution. Working primarily outside of the database system it quickly identifies, reports on, and fixes data discrepancies between databases without interrupting the on going business processes automatically. AGCTL, Agent Control, is the agent command line utility to manage Oracle Grid Infrastructure agents (XAG) for application HA using Oracle Grid Infrastructure. AGCTL is the interface to add an application to the Oracle Grid Infrastructure as a clustered resource under agent control. Once configured, AGCTL is the management interface to bring the application online, offline or relocate, as well as to check the application state and modify the agent configuration. There are AGCTL options to disable and remove the application from the Oracle Grid Infrastructure agent control. Prerequisite : 1) A shared filesystem for OGG recovery information such as checkpoint files, bounded recovery files and trail files is required. This can be either DBFS, OCFS (Oracle Cluster File system) or ACFS (ASM Cluster Filesystem). In our example, we are using ACFS, which is included in the 11.2 Clusterware bundle (by default). 2) A separate VIP for the golden gate resource. (In this example, we will use the IP address 192.168.45.159 for this purpose) 3) Xag installation directory must be same on all nodes 4) appvip must be required and resolve by DNS 5) add database service Stage 1. Create file system : 1. Create the ASM Diskgroup [oracle@west01 scp]$ sh asmdu.sh -m Instances running on west01 : +ASM1, woravr1 DiskGroup Redundancy Total MB Usable MB % Free ——— ———– ——– ——— —— ACFS EXTERN 199996.00 199860.00 99 DATA EXTERN 399996.00 394464.00 98 FRA EXTERN 150012.00 148796.00 99 OCR EXTERN 99998.00 98762.00 98 2. create asm volume for asfs file system ASMCMD [+] > volinfo –all Diskgroup Name: ACFS Volume Name: ACFS1 Volume Device: /dev/asm/acfs1-148 State: ENABLED Size (MB): 102400 Resize Unit (MB): 64 Redundancy: UNPROT Stripe Columns: 8 Stripe Width (K): 1024 Usage: Mountpath: ASMCMD [+] > 3. Create folder on os layer [root@west01 /]# mkdir acfs1 [root@west01 /]# 4. create ACFS file system Create ACFS Command: /sbin/mkfs -t acfs /dev/asm/acfs1-148 Following commands should be run as privileged user : /u01/app/oracle/19c/grid_1/bin/srvctl add filesystem -d /dev/asm/acfs1-148 -m /acfs1 -fstype ACFS -autostart ALWAYS /u01/app/oracle/19c/grid_1/bin/srvctl start filesystem -d /dev/asm/acfs1-148 chown : /acfs1 chmod 775 /acfs1 [root@west01 /]# cat /u01/app/oracle_base/cfgtoollogs/asmca/scripts/acfs_script.sh #!/bin/sh /u01/app/oracle/19c/grid_1/bin/srvctl add filesystem -d /dev/asm/acfs1-148 -m /acfs1 -fstype ACFS -autostart ALWAYS if [ $? = “0” -o $? = “2” ]; then /u01/app/oracle/19c/grid_1/bin/srvctl start filesystem -d /dev/asm/acfs1-148 if [ $? 
= "0" ]; then
  chmod 775 /acfs1
  /u01/app/oracle/19c/grid_1/bin/srvctl status filesystem -d /dev/asm/acfs1-148
  exit 0
else
  exit $?
fi
/u01/app/oracle/19c/grid_1/bin/srvctl status filesystem -d /dev/asm/acfs1-148
fi

[root@west01 /]# sh /u01/app/oracle_base/cfgtoollogs/asmca/scripts/acfs_script.sh
ACFS file system /acfs1 is mounted on nodes west01,west02
[root@west01 /]#

# Mount all ACFS file systems manually if needed
/sbin/mount.acfs -o all
# Unmount all ACFS file systems
/bin/umount -t acfs -a

5. Check the status

[oracle@west01 ~]$ crsctl stat res -t | grep acfs
ora.acfs.acfs1.acfs
ONLINE ONLINE west01 mounted on /acfs1,ST
ONLINE ONLINE west02 mounted on /acfs1,ST
[oracle@west01 ~]$

Stage 2: Configure VIP and DB service for XAG

1. Add service
srvctl add service -d woravr -s woravrsvc -r west01 -a west02 -P PRECONNECT

2. Add VIP
crsctl stat res -p | grep -ie .network -ie subnet | grep -ie name -ie subnet

[root@west01 ~]# appvipcfg create -network=1 -ip=192.168.45.159 -vipname=westgg-vip -user=oracle
Using configuration parameter file: /u01/app/oracle/19c/grid_1/crs/install/crsconfig_params
The log of current session can be found at: /u01/app/oracle_base/crsdata/west01/scripts/appvipcfg.log
[root@west01 ~]#
[root@west01 ~]# crsctl start resource westgg-vip
CRS-2672: Attempting to start 'westgg-vip' on 'west02'
CRS-2676: Start of 'westgg-vip' on 'west02' succeeded

Stage 3. Install and configure XAG setup

Download the xag.zip bundle from the Oracle XAG download page.

1. Unzip in the /tmp folder
[oracle@west01 tmp]$ unzip xag.zip
Archive: xag.zip
creating: xag/
inflating: xag/xagsetup.sh
inflating: xag/aggoldengate.pm
inflating: xag/agapacheas.pm
..........skipped..............
inflating: xag/template/xag.ebs.type
inflating: xag/template/xag.goldengate.type
inflating: xag/template/xag.base.type

2. Create the same directory structure on all nodes
west01:
[root@west01 ]# mkdir -p /u01/app/oracle/19c/xag
[root@west01 ]# chown oracle:oinstall -R xag
west02:
[root@west02 ]# mkdir -p /u01/app/oracle/19c/xag
[root@west02 ]# chown oracle:oinstall -R xag

3. Install XAG
west01:
[oracle@west01 ]# cd /tmp/xag
[oracle@west01 ]$ ./xagsetup.sh --install --directory /u01/app/oracle/19c/xag --nodes west01,west02
Installing Oracle Grid Infrastructure Agents on: west01
Installing Oracle Grid Infrastructure Agents on: west02
Done.

4. Add the XAG service with the help of agctl
The XAG binaries are located in /u01/app/oracle/19c/xag/bin.

For Microservices (run as the root user):
/u01/app/oracle/19c/xag/bin/agctl add goldengate westgg --gg_home /acfs1/ggate --service_manager --config_home /acfs1/ggate/deploy/etc/conf --var_home /acfs1/ggate/deploy/var --port 7900 --adminuser ggadmin --user oracle --group oinstall --vip_name westgg-vip --filesystems ora.acfs.acfs1.acfs --db_services ora.woravr.woravrsvc.svc --use_local_services --nodes west01,west02

For classic GoldenGate:
/u01/app/oracle/19c/xag/bin/agctl add goldengate westgg --gg_home /acfs/ggate --nodes west01,west02 --vip_name westgg-vip --filesystems ora.acfs.acfs1.acfs --databases ora.woravr.db --oracle_home /u01/app/oracle/19c/db_1

Where:
--gg_home specifies the location of the Oracle GoldenGate software.
--service_manager indicates this is an Oracle GoldenGate Microservices instance.
--config_home specifies the Oracle GoldenGate deployment configuration home directory.
--var_home specifies the Oracle GoldenGate deployment variable home directory.
--port specifies the deployment Service Manager port number.
--adminuser specifies the Oracle GoldenGate Microservices administrator account name.
--user specifies the name of the operating system user that owns the Oracle GoldenGate deployment.
--group specifies the name of the operating system group that owns the Oracle GoldenGate deployment.
--network specifies the network subnet for the VIP (optional).
--ip specifies the IP address for the VIP (optional). If you have already created a VIP, then specify it using the --vip_name parameter in place of --network and --ip.
--filesystems specifies the CRS file system resource that must be mounted before the deployment is started.

5. Start the service
[oracle@west01 ]$ /u01/app/oracle/19c/xag/bin/agctl start goldengate westgg

6. Check status
[oracle@west01 ]$ /u01/app/oracle/19c/xag/bin/agctl status goldengate
Goldengate instance 'westgg' is running on
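Beyond start and status, AGCTL covers the rest of the lifecycle described earlier (taking the resource offline, relocating it, inspecting its configuration, and removing it). A short sketch of the common operations for this instance; exact options can vary between XAG versions:

# Stop the GoldenGate resource cleanly
/u01/app/oracle/19c/xag/bin/agctl stop goldengate westgg

# Relocate the resource to the other cluster node
/u01/app/oracle/19c/xag/bin/agctl relocate goldengate westgg --node west02

# Show the current agent configuration for the resource
/u01/app/oracle/19c/xag/bin/agctl config goldengate westgg

# Disable and remove the resource from Grid Infrastructure agent control
/u01/app/oracle/19c/xag/bin/agctl disable goldengate westgg
/u01/app/oracle/19c/xag/bin/agctl remove goldengate westgg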

Converting Oracle Standby Database  

Converting Oracle Standby Database: Physical to Snapshot and Back Using DataGuard Broker   Oracle Data Guard Administration | Snapshot Standby Database Management Oracle Data Guard provides the ability to convert a physical standby database to a snapshot standby database for testing, development, or validation purposes. This guide provides comprehensive procedures for converting between physical and snapshot standby modes using the DataGuard Broker (DGMGRL), ensuring seamless transitions while maintaining data protection capabilities. Table of Contents Overview: Physical vs Snapshot Standby Prerequisites and Considerations Convert Physical Standby to Snapshot Standby Convert Snapshot Standby Back to Physical Validation and Verification Best Practices and Recommendations Overview: Physical vs Snapshot Standby Oracle Data Guard supports multiple standby database types, each serving different operational requirements. Understanding the differences between physical standby and snapshot standby databases is essential for effective disaster recovery and database management strategies. Physical Standby Database Block-for-block copy of the primary databaseContinuously applies redo data from primary Read-only access (with Active Data Guard) Provides disaster recovery protection Cannot be opened for read-write operations Maintains synchronization with primary Snapshot Standby Database Fully updateable standby database Receives but does not apply redo data Open for read-write operations Ideal for testing and development Changes are discarded upon conversion back Maintains redo log reception from primary Key Concept: Flashback Database Requirement Snapshot standby conversion relies on Oracle Flashback Database technology. When converting from physical to snapshot standby, a guaranteed restore point is automatically created. This restore point enables the database to discard all changes made during the snapshot period and return to synchronization with the primary database when converted back to physical standby. Conversion Workflow Overview   Prerequisites and Considerations     Technical Requirements   Requirement Description Verification Command Flashback Database Must be enabled on the standby database SELECT flashback_on FROM v$database; Fast Recovery Area Configured with adequate space for flashback logs SHOW PARAMETER db_recovery_file_dest; DataGuard Broker Configuration must be enabled and healthy SHOW CONFIGURATION; Database Version Oracle 11g Release 1 or later SELECT version FROM v$instance; Standby State Physical standby must be mounted or applying redo SELECT database_role FROM v$database; Pre-Conversion Checklist 1. Verify Flashback Database Status SQL*Plus – Standby Database — Connect to standby database SQL> SELECT flashback_on, current_scn FROM v$database; FLASHBACK_ON CURRENT_SCN —————- ———– YES 1234567890 2. Check Fast Recovery Area Space SQL*Plus – Standby Database SQL> SELECT name, space_limit/1024/1024/1024 AS limit_gb, space_used/1024/1024/1024 AS used_gb, space_reclaimable/1024/1024/1024 AS reclaimable_gb, ROUND((space_used/space_limit)*100,2) AS pct_used FROM v$recovery_file_dest; NAME LIMIT_GB USED_GB RECLAIMABLE_GB PCT_USED —————————– ———– ——– ————— +FRA 500 125.5 15.2 25.10 3. Verify DataGuard Broker Configuration DGMGRL – Connect and Verify DGMGRL> CONNECT sys@primary_db Password: ******** Connected to “PRIMARY_DB” Connected as SYSDBA. 
DGMGRL> SHOW CONFIGURATION; Configuration – DG_CONFIG Protection Mode: MaxPerformance Members: PRIMARY_DB – Primary database STANDBY_DB – Physical standby database Fast-Start Failover: Disabled Configuration Status: SUCCESS (status updated 15 seconds ago) Important Considerations Ensure sufficient FRA space for flashback logs during the snapshot period Plan for increased storage as redo logs accumulate without being applied Document the snapshot duration to manage resource consumption All changes made to snapshot standby will be permanently lost upon conversion back   Convert Physical Standby to Snapshot Standby   Converting a physical standby database to a snapshot standby enables read-write access for testing, development, or validation scenarios. The DataGuard Broker simplifies this process to a single command. Conversion Procedure Using DGMGRL 1 Connect to DataGuard Broker DGMGRL – Connect to Primary or Standby — Start DGMGRL and connect $ dgmgrl DGMGRL> CONNECT sys@primary_db Password: ******** Connected to “PRIMARY_DB” Connected as SYSDBA. 2. Verify Current Standby Status DGMGRL – Check Database Status DGMGRL> SHOW DATABASE STANDBY_DB; Database – STANDBY_DB Role: PHYSICAL STANDBY Intended State: APPLY-ON Transport Lag: 0 seconds (computed 1 second ago) Apply Lag: 0 seconds (computed 1 second ago) Average Apply Rate: 1.02 MByte/s Real Time Query: ON Instance(s): STANDBY_DB1 Database Status: SUCCESS 3. Convert to Snapshot Standby DGMGRL – Execute Conversion DGMGRL> CONVERT DATABASE STANDBY_DB TO SNAPSHOT STANDBY; Converting database “STANDBY_DB” to a Snapshot Standby database, please wait… Database “STANDBY_DB” converted successfully 4 Verify Snapshot Standby Status DGMGRL – Confirm Conversion DGMGRL> SHOW DATABASE STANDBY_DB; Database – STANDBY_DB Role: SNAPSHOT STANDBY Intended State: APPLY-OFF Transport Lag: 0 seconds (computed 0 seconds ago) Apply Lag: (unknown) Average Apply Rate: (unknown) Real Time Query: OFF Instance(s): STANDBY_DB1 Database Warning(s): ORA-16714: the value of property ArchiveLagTarget is inconsistent with the requirement Database Status: WARNING Expected Database State After Conversion SQL> SELECT database_role, open_mode FROM v$database; DATABASE_ROLE OPEN_MODE —————- ——————– SNAPSHOT STANDBY READ WRITE SQL> SELECT name, guarantee_flashback_database FROM v$restore_point WHERE guarantee_flashback_database = ‘YES’; NAME GUA —————————— — SNAPSHOT_STANDBY_REQUIRED_01 YES Conversion Complete The standby database is now a snapshot standby and can accept read-write operations. A guaranteed restore point has been automatically created to enable conversion back to physical standby. Redo logs continue to be received from the primary but are not applied.   Convert Snapshot Standby Back to Physical   After completing testing or development activities on the snapshot standby, convert it back to a physical standby to resume redo apply and data protection. This process uses Flashback Database to discard all changes made during the snapshot period. Data Loss Warning All changes made to the snapshot standby database will be permanently discarded during conversion back to physical standby. Ensure all necessary data has been exported or backed up before proceeding. Conversion Procedure Using DGMGRL 1. Connect to DataGuard Broker DGMGRL – Connect DGMGRL> CONNECT sys@primary_db Password: ******** Connected to “PRIMARY_DB” Connected as SYSDBA. 2. 
Verify Current Snapshot Status DGMGRL – Check Current State DGMGRL> SHOW DATABASE STANDBY_DB; Database – STANDBY_DB Role: SNAPSHOT STANDBY Intended State: APPLY-OFF Transport Lag: 0 seconds (computed 2 seconds ago) Apply Lag: 2 hours 15 minutes (computed 2 seconds ago) Average Apply Rate: (unknown) Real Time Query: OFF Instance(s): STANDBY_DB1 Database Status: WARNING 3. Convert to Physical Standby DGMGRL – Execute Conversion DGMGRL> CONVERT DATABASE STANDBY_DB TO PHYSICAL STANDBY; Converting database "STANDBY_DB" to a Physical Standby database, please wait... Operation requires shut down of instance "STANDBY_DB1" on database "STANDBY_DB" Shutting down instance "STANDBY_DB1"... Database closed. Database dismounted. ORACLE instance shut down. Operation requires start up of instance "STANDBY_DB1"
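The broker output is cut off above. Once the instance has been restarted and mounted, the conversion back to physical standby can be confirmed from SQL*Plus; a minimal verification sketch:

-- Role should be PHYSICAL STANDBY again, and the guaranteed restore point
-- created for the snapshot phase should have been dropped automatically
SQL> SELECT database_role, open_mode FROM v$database;
SQL> SELECT name FROM v$restore_point WHERE guarantee_flashback_database = 'YES';

-- Confirm redo apply catches up once the accumulated redo has been applied
SQL> SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');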

Configure Data Guard Broker 19c

  Configure DataGuard broker 19c   Oracle Data Guard Broker provides a centralized framework to manage and monitor an entire Data Guard configuration through a unified client interface. The broker simplifies complex administrative tasks, automates failover operations, and ensures configuration consistency across all databases in your Data Guard environment.   Table of Contents   Data Guard Broker Overview Architecture and Configuration Files Key Benefits and Capabilities Prerequisites Removing Existing Broker Configuration Disabling Archive Destinations Enabling Data Guard Broker Creating Broker Configuration Enabling and Verifying Configuration Configuration Management   Data Guard Broker Overview   Data Guard Broker is Oracle’s enterprise-class management layer that sits above the core Data Guard infrastructure. It provides a unified interface for managing complex Data Guard topologies with multiple standby databases, automating routine tasks, and ensuring configuration consistency.   What is Data Guard Broker?   Centralized Management Framework: Manage entire Data Guard configurations through a single client connection to any database in the configuration Command-Line Interface (DGMGRL) : Powerful CLI for configuration, monitoring, and management operations Automated Operations : Simplifies switchover and failover with single-command execution Unified Configuration : Eliminates the need for multiple SQL*Plus sessions across different databases Configuration Persistence : Maintains configuration details in binary flat files stored at each database location Broker Limitations DGMGRL (Command Line Interface) does not have the ability to create standby databases from scratch. Use RMAN Active Duplicate or Oracle Enterprise Manager Cloud Control GUI for standby database creation. The CLI is primarily used for configuration management and operational tasks after standby databases are established.   Architecture and Configuration Files     Configuration File Storage   The broker maintains its configuration metadata in binary flat files rather than within the database itself. This design ensures that configuration information remains accessible even when databases are not mounted or open. Configuration File Parameters Two initialization parameters control the location of broker configuration files: DG_BROKER_CONFIG_FILE1 – Primary configuration file location DG_BROKER_CONFIG_FILE2 – Secondary configuration file location (redundancy) 🔒 Redundancy and Protection Two copies of the configuration files are always maintained on each database node for redundancy and protection against corruption. If one file becomes corrupted, the broker automatically uses the alternate file. Both files are stored in ASM disk groups or on the local filesystem depending on your storage architecture. File Location Examples SQL> SHOW PARAMETER dg_broker_config  NAME                                             TYPE        VALUE ———————————— ———– —————————————————————————–  dg_broker_config_file1               string      +DATA/woravr/dr1woravr.dat dg_broker_config_file2               string      +DATA/woravr/dr2woravr.dat    Key Benefits and Capabilities   Simplified Switchover/Failover Execute role transitions with a single command, minimizing downtime associated with planned maintenance or unplanned outages. The broker performs all pre-checks and validation automatically. Unified Configuration Management Replace multiple SQL*Plus statements across different databases with a single unified interface. 
All configuration changes are automatically propagated to all members. Automated Monitoring Continuous health monitoring of all databases in the configuration with automatic detection of lag, apply rate issues, and connectivity problems. Fast-Start Failover Optional automatic failover capability that detects primary database failures and initiates failover to a designated standby database without manual intervention. Configuration Validation Performs comprehensive pre-checks before role transitions to ensure all prerequisites are met, reducing the risk of failed switchover or failover operations. Centralized Property Management Manage Data Guard-related initialization parameters through broker properties, ensuring consistency across all databases in the configuration.   Prerequisites   Before configuring Data Guard Broker, ensure the following conditions are met: Data Guard Configuration: A functioning Data Guard configuration with at least one standby database Network Connectivity: TNS connectivity established between all databases in the configuration Matching Software Versions: All databases running the same Oracle Database version Unique DB_UNIQUE_NAME: Each database must have a unique DB_UNIQUE_NAME parameter Compatible Parameter: The COMPATIBLE parameter should be set consistently across all databases Valid TNS Entries: Properly configured tnsnames.ora entries on all nodes   Step 1: Removing Existing Broker Configuration   If you have an existing broker configuration that needs to be reconfigured or removed, start by cleanly removing the current configuration. 1. Connect to DGMGRL and Remove Configuration [oracle@west01 ~]$ dgmgrl DGMGRL for Linux: Release 19.0.0.0.0 – Production on Tue Apr 20 15:51:38 2021 Version 19.3.0.0.0 Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved. Welcome to DGMGRL, type “help” for information. DGMGRL> remove configuration; Removed configuration DGMGRL> show configuration; ORA-16532: Oracle Data Guard broker configuration does not exist Configuration details cannot be determined by DGMGRL Configuration Removal Impact Removing the broker configuration does not affect the underlying Data Guard setup. Redo transport and apply services continue to function. Only the broker metadata is removed, and you can recreate the broker configuration at any time. Step 2: Disabling LOG_ARCHIVE_DEST_2 Before enabling Data Guard Broker, clear the LOG_ARCHIVE_DEST_2 parameter on both primary and standby databases. The broker will manage this parameter automatically once enabled. Disable on Primary Database 2Clear Archive Destination on Primary SQL> SHOW PARAMETER log_archive_dest_2 NAME TYPE VALUE ———————————— ———– —————– log_archive_dest_2 string SERVICE=woravrdr LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=woravrdr SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=” SCOPE=BOTH; System altered. SQL> SHOW PARAMETER log_archive_dest_2 NAME TYPE VALUE ———————————— ———– —————– log_archive_dest_2 string Disable on Standby Database 3. Clear Archive Destination on Standby SQL> SHOW PARAMETER log_archive_dest_2 NAME TYPE VALUE ———————————— ———– —————– log_archive_dest_2 string SERVICE=woravr LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=woravr SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=” SCOPE=BOTH; System altered. SQL> SHOW PARAMETER log_archive_dest_2 NAME TYPE VALUE ———————————— ———– —————– log_archive_dest_2 string 💡 Why Clear LOG_ARCHIVE_DEST_2? 
The broker automatically manages redo transport destinations through its own property system. Manually configured LOG_ARCHIVE_DEST parameters can conflict with broker-managed settings. Once the broker is enabled, it will configure these parameters according to your broker properties. Step 3: Enabling Data Guard Broker The DG_BROKER_START parameter controls whether the Data Guard Broker is active on each database. This must be set to TRUE on all databases that will participate in the broker configuration. Enable Broker on Primary Database 4. Enable DG Broker Process on Primary SQL> SHOW PARAMETER dg_broker_start NAME TYPE VALUE ———————————— ———– —————– dg_broker_start boolean TRUE — If
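The article is cut off at this point. Once DG_BROKER_START is TRUE on both the primary and the standby, the remaining steps from the table of contents (creating, enabling, and verifying the configuration) follow the pattern below. A minimal DGMGRL sketch, assuming the woravr and woravrdr TNS aliases used earlier; the configuration name is illustrative:

DGMGRL> CREATE CONFIGURATION 'dg_woravr' AS PRIMARY DATABASE IS 'woravr' CONNECT IDENTIFIER IS woravr;
DGMGRL> ADD DATABASE 'woravrdr' AS CONNECT IDENTIFIER IS woravrdr MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE 'woravrdr';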

Oracle Data Guard

Oracle Data Guard Configuration for RAC Using Active Duplicate Oracle Database 19c Administration | Complete Implementation Guide This comprehensive guide demonstrates how to configure Oracle Data Guard for a Real Application Clusters (RAC) environment using the RMAN Active Duplicate method. Active Duplicate eliminates the need for backup and restore operations, enabling efficient standby database creation directly from the primary database over the network. Table of Contents 1. Environment Overview 2. Prerequisites and Architecture 3. Primary Database Configuration 4. Standby Database Configuration 5. RMAN Active Duplicate Process 6. Verification and Monitoring 7. Monitoring Scripts 1. Environment Overview This implementation demonstrates Data Guard configuration between two geographically distributed RAC clusters. The setup includes a two-node primary cluster and a two-node standby cluster, both running Oracle Database 19c Enterprise Edition on ASM storage. Environment Specifications Component  Primary Site  Standby Site  Platform  Linux x86_64  Linux x86_64  Servers  west01, west02  east01, east02  Domain  westscan.oravr.in  eastscan.oravr.in  Oracle Version  19.10.0.0  19.10.0.0  Storage  ASM (+DATA, +FRA)  ASM (+DATA_DG, +DATA_DG)  Database Name  woravr  woravr  DB_UNIQUE_NAME  woravr  woravrdr  Instances  woravr1, woravr2  woravrdr1, woravrdr2  Oracle Home  /u01/app/oracle/product/19c/db_1  /u01/app/oracle/product/19c/db_1     Data Guard RAC Architecture  Primary RAC Cluster (west01/west02) synchronizes with Standby RAC Cluster (east01/east02) using redo transport services. Both clusters utilize ASM for storage management and SCAN listeners for client connectivity.  2. Prerequisites and Architecture Cluster Status Verification Before proceeding with Data Guard configuration, verify that all cluster services are online on both sites. Primary Cluster Status Check [oracle@west01 ~]$ crsctl check cluster -all ************************************************************** west01: CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online ************************************************************** west02: CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online ************************************************************** Important Ensure all CRS components (Cluster Ready Services, Cluster Synchronization Services, and Event Manager) are online on all nodes before proceeding with Data Guard configuration.   Key Requirements   Network Connectivity: Bidirectional network access between primary and standby sites Identical OS Platform: Both sites must run compatible Linux x86_64 versions Same Oracle Version: Identical Oracle Database version and patch level ASM Configuration: ASM disk groups configured on both sites TNS Configuration: Proper listener and tnsnames configuration for cross-site communication Password File: Synchronized password files with SYS credentials 3. Primary Database Configuration Step 1: Enable Forced Logging Forced logging ensures all database changes are captured in redo logs, which is critical for Data Guard synchronization. SQL> ALTER DATABASE FORCE LOGGING; Database altered. Step 2: Copy Password File to Standby The password file must be identical on both sites to allow SYS authentication for redo transport services. 
Extract Password File from ASM ASMCMD> pwd +data/woravr/password ASMCMD> pwcopy pwdworavr.314.5141100038 /tmp copying +data/woravr/password/pwdworavr.314.5141100038 -> /tmp/pwdworavr.314.5141100038 [oracle@west02]$ scp -p /tmp/pwdworavr.314.5141100038 oracle@east01:/u01/app/oracle/product/19c/db_1/dbs/orapwworavrdr1 [oracle@west02]$ scp -p /tmp/pwdworavr.314.5141100038 oracle@east02:/u01/app/oracle/product/19c/db_1/dbs/orapwworavrdr2 Step 3: Configure Standby Redo Logs Standby redo logs are essential for real-time redo apply and protection modes like Maximum Availability and Maximum Protection. The recommended number of standby redo log files is: Formula(Maximum number of logfiles + 1) × Maximum number of threads For this configuration: (2 + 1) × 2 = 6 standby redo logs Verify Current Online Redo Log Configuration SQL> SELECT b.thread#, a.group#, a.member, b.bytes FROM v$logfile a, v$log b WHERE a.group# = b.group#; THREAD# GROUP# MEMBER BYTES ———- ———- ———————————– ———- 1 2 +DATA/woravr/redo02.log 209715200 1 1 +DATA/woravr/redo01.log 209715200 2 3 +DATA/woravr/redo03.log 209715200 2 4 +DATA/woravr/redo04.log 209715200 Create Standby Redo Logs for Thread 1 SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 5 (‘+DATA/woravr/redo05.log’) SIZE 200M, GROUP 6 (‘+DATA/woravr/redo06.log’) SIZE 200M, GROUP 7 (‘+DATA/woravr/redo07.log’) SIZE 200M; Database altered. Create Standby Redo Logs for Thread 2 SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 8 (‘+DATA/woravr/redo08.log’) SIZE 200M, GROUP 9 (‘+DATA/woravr/redo09.log’) SIZE 200M, GROUP 10 (‘+DATA/woravr/redo10.log’) SIZE 200M; Database altered. Verify Standby Redo Log Creation SQL> SELECT b.thread#, a.group#, a.member, b.bytes FROM v$logfile a, v$standby_log b WHERE a.group# = b.group#; THREAD# GROUP#MEMBER BYTES ———- ———- ———————————– ———- 1 5 +DATA/woravr/redo05.log 209715200 1 6 +DATA/woravr/redo06.log 209715200 1 7 +DATA/woravr/redo07.log 209715200 2 8 +DATA/woravr/redo08.log 209715200 2 9 +DATA/woravr/redo09.log 209715200 2 10 +DATA/woravr/redo10.log 209715200 6 rows selected. Step 4: Verify Archive Mode SQL> ARCHIVE LOG LIST; Database log mode Archive Mode Automatic archival Enabled Archive destination +FRA Oldest online log sequence 10 Next log sequence to archive 11 Current log sequence 11 Step 5: Configure Primary Database Initialization Parameters Data Guard requires specific initialization parameters to enable redo transport, log apply services, and file name conversion between sites. Backup Current Parameter File SQL> CREATE PFILE=’/home/oracle/initworavr.ora.bkp_vr’ FROM SPFILE; File created. Set Essential Data Guard Parameters SQL> ALTER SYSTEM SET db_unique_name=’woravr’ SCOPE=SPFILE SID=’*’; SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG=’DG_CONFIG=(woravr,woravrdr)’ SCOPE=BOTH SID=’*’; SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1=’LOCATION=+FRA VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=woravr’ SCOPE=BOTH SID=’*’; SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=’SERVICE=woravrdr LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=woravrdr’ SCOPE=BOTH SID=’*’; SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE SCOPE=BOTH SID=’*’; SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH SID=’*’; Deprecation NoticeThe FAL_CLIENT parameter is deprecated in Oracle 19c but included here for reference. FAL_SERVER remains active for fetch archive log operations. 
Configure FAL and File Name Conversion SQL> ALTER SYSTEM SET fal_client=woravr SCOPE=BOTH SID=’*’; SQL> ALTER SYSTEM SET fal_server=woravrdr SCOPE=BOTH SID=’*’; SQL> ALTER SYSTEM SET DB_FILE_NAME_CONVERT=’+DATA_DG’,’+DATA’ SCOPE=SPFILE SID=’*’; SQL> ALTER SYSTEM SET LOG_FILE_NAME_CONVERT=’+DATA_DG’,’+DATA’ SCOPE=SPFILE SID=’*’; SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO SCOPE=BOTH SID=’*’; Key Initialization Parameters Explained  Parameter  Purpose  Configuration  LOG_ARCHIVE_CONFIG  Defines all databases in Data Guard configuration  DG_CONFIG=(woravr,woravrdr)  LOG_ARCHIVE_DEST_1  Local archive destination  +FRA for all roles  LOG_ARCHIVE_DEST_2  Remote standby destination using LGWR ASYNC  SERVICE=woravrdr  FAL_SERVER  Fetch Archive Log server for gap resolution  woravrdr  DB_FILE_NAME_CONVERT  Maps datafile paths from standby to primary  +DATA_DG → +DATA  LOG_FILE_NAME_CONVERT  Maps redo log paths from standby to primary  +DATA_DG → +DATA  STANDBY_FILE_MANAGEMENT  Automates standby file operations  AUTO  Step 6: Configure TNS Entries on Primary  Both primary and standby sites require TNS entries for bidirectional communication. The UR=A (Uninterruptible Retry Always) parameter ensures connection attempts continue even when the database is not in mounted state.  Primary Site tnsnames.ora (All Nodes)  woravr = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = westscan.oravr.in)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = woravr) ) ) woravrdr = (DESCRIPTION = (ADDRESS = (PROTOCOL =
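The standby-side TNS entry is truncated above; it presumably mirrors the primary entry but points at eastscan.oravr.in with SERVICE_NAME woravrdr and (UR=A) so connections succeed while the instance is only started NOMOUNT. With the standby instance started NOMOUNT from a minimal pfile, the active duplicate itself (section 5 of the table of contents) generally follows this pattern; the channel counts are illustrative:

$ rman target sys@woravr auxiliary sys@woravrdr

RMAN> run {
  allocate channel p1 type disk;
  allocate channel p2 type disk;
  allocate auxiliary channel s1 type disk;
  DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER NOFILENAMECHECK;
}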

Oracle Database Architecture 

Oracle Database Architecture  Complete Technical Guide | January 11, 2026  Oracle Database architecture is a sophisticated system comprising memory structures, background processes, and physical storage components that work together to provide high performance, reliability, and data integrity. This guide provides an in-depth exploration of Oracle’s architectural components and their interactions.  Table of Contents  1. Architecture Overview 2. Oracle Instance vs Database 3. Memory Architecture 4. Background Processes 5. Physical Storage Structures 6. Logical Storage Structures 7. Data Flow and Processing 8. RAC Architecture  1. Architecture Overview Oracle Database employs a multi-layered architecture designed for enterprise-grade performance, scalability, and fault tolerance. The architecture separates the database instance (memory and processes) from the database itself (physical files), enabling flexible deployment options including single-instance, RAC, and Data Guard configurations.  Figure 1: Oracle Database Architecture – High-Level Overview  Key Architectural Principle The separation of instance and database enables Oracle’s high availability features. An instance can be started, stopped, or failed over independently while the database files remain persistent on storage.  2. Oracle Instance vs Database An Oracle instance consists of memory structures and background processes that manage database operations. The instance exists in server memory and is transient – it is created when the database starts and destroyed when it shuts down.    Component  Type  Description  System Global Area (SGA)  Memory  Shared memory region containing data and control information  Program Global Area (PGA)  Memory  Private memory region for each server process  Background Processes  Process  Mandatory and optional processes performing database operations  Oracle Database The database consists of physical files stored on disk that contain the actual data, metadata, and transaction logs. These files persist across instance restarts and contain: Data Files: Store actual user and system data in tablespaces Control Files: Contain database metadata and structural information Redo Log Files: Record all changes made to the database Archive Log Files: Historical copies of redo logs for recovery Parameter Files: Instance configuration parameters (SPFILE/PFILE) — View instance status and database information SELECT instance_name, status, database_status FROM v$instance; SELECT name, open_mode, database_role FROM v$database;   3. Memory Architecture System Global Area (SGA) The SGA is a shared memory region that contains data and control information for the Oracle instance. All server processes and background processes share access to the SGA. Database Buffer Cache Caches data blocks read from data files. The buffer cache uses an LRU (Least Recently Used) algorithm to manage blocks, keeping frequently accessed data in memory to minimize physical I/O. — View instance status and database information SELECT instance_name, status, database_status FROM v$instance; SELECT name, open_mode, database_role FROM v$database; SELECT name, ROUND(bytes/1024/1024, 2) size_mb FROM v$sgainfo WHERE name LIKE ‘%Buffer%’; Shared Pool  Contains the library cache (parsed SQL statements and execution plans), data dictionary cache, and other shared structures. Proper sizing prevents hard parsing overhead. 
SELECT pool, name, bytes/1024/1024 mb FROM v$sgastat WHERE pool = ‘shared pool’ ORDER BY bytes DESC FETCH FIRST 10 ROWS ONLY; Redo Log Buffer A circular buffer that caches redo entries before they are written to the online redo log files. LGWR flushes this buffer under specific conditions to ensure durability. Large Pool Optional memory area used for large memory allocations including RMAN backup/restore operations, shared server session memory, and parallel query operations. SGA Component  Purpose  Sizing Parameter  Buffer Cache  Data block caching  DB_CACHE_SIZE  Shared Pool  SQL/PLSQL caching, dictionary  SHARED_POOL_SIZE  Redo Log Buffer  Redo entry caching  LOG_BUFFER  Large Pool  Large allocations (RMAN, parallel)  LARGE_POOL_SIZE  Java Pool  Java stored procedures  JAVA_POOL_SIZE  Streams Pool  Oracle Streams/GoldenGate  STREAMS_POOL_SIZE  — View SGA component sizes SELECT component, current_size/1024/1024 current_mb, min_size/1024/1024 min_mb, max_size/1024/1024 max_mb FROM v$sga_dynamic_components WHERE current_size > 0; Program Global Area (PGA) The PGA is a private memory region containing data and control information for each server process. Unlike the SGA, PGA memory is not shared between processes. Sort Area: Memory for sort operations (ORDER BY, GROUP BY, DISTINCT) Hash Area: Memory for hash joins and hash aggregations Session Memory: Session-specific variables and cursors Private SQL Area: Bind variable values and runtime state — View PGA memory usage  SELECT name, value/1024/1024 mb  FROM v$pgastat  WHERE name IN (‘aggregate PGA target parameter’,                   ‘aggregate PGA auto target’,                   ‘total PGA allocated’,                   ‘total PGA used for auto workareas’);  4. Background Processes Oracle background processes perform maintenance tasks, I/O operations, and ensure database consistency. Some processes are mandatory while others start based on configuration.   Mandatory Background Processes   Process Name Function DBWn Database Writer Writes modified buffers from buffer cache to data files LGWR Log Writer Writes redo log buffer entries to online redo log files CKPT Checkpoint Updates control files and data file headers at checkpoints SMON System Monitor Instance recovery, coalescing free space, cleaning temp segments PMON Process Monitor Cleans up failed user processes, releases locks and resources RECO Recoverer Resolves distributed transaction failures   Optional Background Processes   Process Name When Started ARCn Archiver ARCHIVELOG mode enabled MMON Manageability Monitor AWR snapshots and alerts MMAN Memory Manager Automatic memory management enabled CJQ0 Job Queue Coordinator DBMS_SCHEDULER jobs exist SMCO Space Management Coordinator Automatic space management — View active background processes SELECT pname, description FROM v$process WHERE pname IS NOT NULL ORDER BY pname; LGWR Critical PathLGWR performance is critical for commit latency. Every COMMIT must wait for LGWR to write redo entries to disk. Use fast storage (NVMe, SSD) for redo logs and consider redo log file placement carefully.   5. Physical Storage Structures   Data Files Data files contain the actual database data including tables, indexes, and other segments. Each data file belongs to exactly one tablespace and stores data in Oracle blocks. 
5. Physical Storage Structures

Data Files
Data files contain the actual database data including tables, indexes, and other segments. Each data file belongs to exactly one tablespace and stores data in Oracle blocks.

-- View data files and their tablespaces
SELECT tablespace_name, file_name, ROUND(bytes/1024/1024/1024, 2) size_gb, autoextensible, status
FROM dba_data_files
ORDER BY tablespace_name;

Control Files
Control files are critical metadata files containing database structure information, checkpoint data, and RMAN backup metadata. Oracle recommends multiplexing control files across different storage locations.

-- View control file locations
SELECT name, status, block_size, file_size_blks FROM v$controlfile;

Redo Log Files
Online redo logs record all changes made to the database. Oracle uses a circular writing mechanism with multiple redo log groups for high availability.
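The current redo log configuration, group status, and member locations can be inspected with v$log and v$logfile; a quick sketch:

-- View redo log groups, their sizes, and current status
SELECT group#, thread#, sequence#, bytes/1024/1024 size_mb, members, status FROM v$log;

-- View the individual redo log members on disk
SELECT group#, member, type FROM v$logfile ORDER BY group#;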

Autonomous Health Framework (AHF) 

Oracle Autonomous Health Framework (AHF) is the next generation of an all-in-one solution whose tools work together autonomously 24×7 to keep database systems healthy and running, minimizing human reaction time. It builds on existing components such as ORAchk, TFA, and many more.

AHF ships with new releases of the database, but you should always download the latest version. The following My Oracle Support note gives an overview of the AHF product, as well as download links and basic instructions:

Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAChk (Doc ID 2550798.1)

Advantages:
User-friendly, real-time health monitoring, fault detection, and diagnosis via a single interface
Secure consolidation of distributed diagnostic collections
Continuous availability
Machine learning-driven, autonomous degradation detection that reduces overhead for both customers and Oracle Support tiers

TFA is still used for diagnostic collection and management, and ORAchk/EXAchk for compliance checks. ORAchk/EXAchk now use the TFA secure socket and the TFA scheduler for automatic checks, which reduces overhead.

Prerequisites:
Before you begin the installation there are some prerequisites for installing Oracle AHF and running ORAchk. I've outlined the key prerequisites below:

Linux
Oracle AHF should be installed as root to obtain the fullest capabilities. If you are unable to install as root, Oracle AHF should be installed as the Oracle Home user.
Oracle AHF should be installed on a filesystem with at least 5GB of free disk space.
Perl version 5.10 or later is required to install Oracle AHF.

Windows
Oracle AHF should be installed as a user with local administrative privileges.
Oracle AHF should be installed on a disk with at least 5GB of free disk space.
Perl version 5.10 or later is required to install Oracle AHF (note that a suitable version of Perl is usually found in the %ORACLE_HOME%\perl directory).

Install Oracle Autonomous Health Framework (AHF) (as root user)
Autonomous Health Framework (AHF) can be installed as the "root" user on the server, which provides the most functionality and allows it to run in a proactive manner as a daemon. In this example we will perform an installation as the root user. Unzip the software and run the ahf_setup command. Answer the questions when prompted.
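Before launching the installer it is worth a quick pre-flight check against the prerequisites above. The commands below are a minimal sketch; the zip file name and staging directory are illustrative, so substitute the file you actually downloaded from the MOS note.

# Check free space on the planned install and data locations (5GB minimum, 10GB recommended for data)
df -h /opt /u01

# Confirm Perl 5.10 or later is available
perl -v

# Unzip the AHF distribution and start the installer as root
# (zip name is illustrative; use the file downloaded via Doc ID 2550798.1)
unzip AHF-LINUX_v21.4.0.zip -d /tmp/ahf_stage
cd /tmp/ahf_stage
./ahf_setup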
The following is an example of the root installation.

[root@west02 oracle]# ./ahf_setup

AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_211400_4701_2021_12_29-10_46_20.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 21.1.4 Build Date: 202106281226

Default AHF Location : /opt/oracle.ahf
Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : y
AHF Location : /opt/oracle.ahf

AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.

Choose Data Directory from below options :
1. /u01/app/oracle_base [Free Space : 73565 MB]
2. Enter a different Location
Choose Option [1 - 2] : 1

AHF Data Directory : /u01/app/oracle_base/oracle.ahf/data

Do you want to add AHF Notification Email IDs ? [Y]|N : y
Enter Email IDs separated by space : xyz@gmail.com

AHF will also be installed/upgraded on these Cluster Nodes :
1. west01

The AHF Location and AHF Data Directory must exist on the above nodes
AHF Location : /opt/oracle.ahf
AHF Data Directory : /u01/app/oracle_base/oracle.ahf/data

Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : y

Extracting AHF to /opt/oracle.ahf
Configuring TFA Services
Discovering Nodes and Oracle Resources
Not generating certificates as GI discovered
Starting TFA Services

Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

.----------------------------------------------------------------------------.
| Host   | Status of TFA | PID  | Port | Version    | Build ID             |
+--------+---------------+------+------+------------+----------------------+
| west02 | RUNNING       | 6134 | 5000 | 21.1.4.0.0 | 21140020210628122659 |
'--------+---------------+------+------+------------+----------------------'

Running TFA Inventory...
Adding default users to TFA Access list...

.--------------------------------------------------------------------.
|                   Summary of AHF Configuration                     |
+-----------------+--------------------------------------------------+
| Parameter       | Value                                            |
+-----------------+--------------------------------------------------+
| AHF Location    | /opt/oracle.ahf                                  |
| TFA Location    | /opt/oracle.ahf/tfa                              |
| Orachk Location | /opt/oracle.ahf/orachk                           |
| Data Directory  | /u01/app/oracle_base/oracle.ahf/data             |
| Repository      | /u01/app/oracle_base/oracle.ahf/data/repository  |
| Diag Directory  | /u01/app/oracle_base/oracle.ahf/data/west02/diag |
'-----------------+--------------------------------------------------'

Starting orachk scheduler from AHF ...

AHF install completed on west02

Installing AHF on Remote Nodes :
AHF will be installed on west01, Please wait.
Installing AHF on west01 :
[west01] Copying AHF Installer
[west01] Running AHF Installer

AHF binaries are available in /opt/oracle.ahf/bin
AHF is successfully installed
Moving /tmp/ahf_install_211400_4701_2021_12_29-10_46_20.log to /u01/app/oracle_base/oracle.ahf/data/west02/diag/ahf/

[oracle@west02 ~]$ service oracle-tfa.service status
Redirecting to /bin/systemctl status oracle-tfa.service
● oracle-tfa.service - Oracle Trace File Analyzer
   Loaded: loaded (/etc/systemd/system/oracle-tfa.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-12-29 10:47:54 IST; 43min ago
 Main PID: 5978 (init.tfa)
    Tasks: 70
   CGroup: /system.slice/oracle-tfa.service
           ├─ 2034 /bin/sleep 30
           ├─ 5978 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
           ├─11114 /opt/oracle.ahf/jre/bin/java -server -Xms64m -Xmx128m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/u01/app/oracle_base/oracle.ahf/data/west01/diag/tfa oracl...
           └─11294 /opt/oracle.ahf/jre/bin/java -server -Xms64m -Xmx128m -XX:HeapDumpPath=/u01/app/oracle_base/oracle.ahf/data/west01/diag/tfa -DtfaHome=/opt/oracle.ahf/tfa -DcrsHome=/u01/app/oracle/19c/...

[oracle@west01 ~]$
[oracle@west01 ~]$ /opt/oracle.ahf/bin/tfactl status

WARNING - AHF Software is older than 180 days. Please consider upgrading AHF to the latest version using ahfctl upgrade.

.-----------------------------------------------------------------------------------------------.
| Host   | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+--------+---------------+-------+------+------------+----------------------+------------------+
| west01 | RUNNING       | 11114 | 5000 | 21.1.4.0.0 | 21140020210628122659 | COMPLETE         |
| west02 | RUNNING       | 14911 | 5000 | 21.1.4.0.0 | 21140020210628122659 | COMPLETE         |
'--------+---------------+-------+------+------------+----------------------+------------------'

Upgrade:

[root@west02 oracle]# ./ahf_setup

AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_214000_23474_2021_12_29-12_11_10.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 21.4.0 Build Date: 202112200745

AHF is already installed at /opt/oracle.ahf
Installed AHF Version: 21.1.4 Build Date: 202106281226

Do you want to upgrade AHF [Y]|N : Y

AHF will also be installed/upgraded on these Cluster Nodes :
1. west01

The AHF Location and AHF Data Directory must exist on the above nodes
AHF Location : /opt/oracle.ahf
AHF Data Directory : /u01/app/oracle_base/oracle.ahf/data

Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : Y

Upgrading /opt/oracle.ahf
Shutting down AHF Services
Stopped OSWatcher
Nothing to do !
Shutting down TFA
Removed symlink /etc/systemd/system/multi-user.target.wants/oracle-tfa.service.
Removed symlink /etc/systemd/system/graphical.target.wants/oracle-tfa.service.
Successfully shutdown TFA..
/usr/bin/checkmodule: loading policy configuration from inittfa-policy.te
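Once the upgrade run completes, it is worth confirming that TFA is running again on every node and, optionally, driving a small test collection. A minimal sketch; tfactl status is the same command shown earlier, and the -last window for diagcollect is just an example value:

# Confirm TFA/AHF is running on all cluster nodes after the upgrade
# (the Version column should now show the upgraded release)
/opt/oracle.ahf/bin/tfactl status

# Optionally gather a small diagnostic collection covering the last hour
# to verify that collections still work end to end
/opt/oracle.ahf/bin/tfactl diagcollect -last 1h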