
DNS Configuration for Oracle SCAN

Complete Guide to Single Client Access Name Configuration in Oracle RAC

Table of Contents
1. SCAN Overview and Architecture
2. Prerequisites and Requirements
3. DNS Server Configuration
4. Creating DNS Records
5. Verification and Testing
6. Grid Infrastructure Integration
7. Troubleshooting
8. Best Practices

1. SCAN Overview and Architecture

Single Client Access Name (SCAN) is a critical component of Oracle Real Application Clusters (RAC) that provides a single network name for clients to access any Oracle database running in a cluster. SCAN eliminates the need for clients to change connection strings when nodes are added to or removed from the cluster.

What is SCAN?

SCAN is a fully qualified domain name (FQDN) registered in DNS that resolves to multiple IP addresses. Oracle recommends configuring three SCAN IP addresses to provide high availability and load balancing for client connections.

Key Benefits

- Simplified Client Configuration: a single name for all database connections regardless of cluster size
- High Availability: multiple IP addresses provide redundancy and failover capability
- Load Balancing: automatic distribution of client connections across SCAN listeners
- Scalability: add or remove nodes without updating client connection strings
- Location Transparency: clients remain unaware of cluster topology changes

SCAN Architecture

Client Applications → DNS Server → SCAN Name (3 IPs) → SCAN Listeners → RAC Nodes

Component           Description                             Quantity
SCAN Name           Fully qualified domain name (FQDN)      1 per cluster
SCAN IP Addresses   Virtual IP addresses assigned to SCAN   3 (recommended)
SCAN Listeners      Oracle listeners running on SCAN IPs    3 (one per SCAN IP)
DNS Server          Resolves SCAN name to multiple IPs      1 or more (for HA)

SCAN Resolution Process

1. The client application requests a connection to the database using the SCAN name.
2. The DNS server resolves the SCAN name to three IP addresses (round-robin).
3. The client attempts a connection to the first IP address returned by DNS.
4. The SCAN listener accepts the connection and redirects it to the appropriate node listener.
5. The node listener establishes the database connection on the local instance.
6. If a SCAN listener is unavailable, the client automatically tries the next IP address.
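In practice this means a client only ever needs the SCAN name in its connect string. The example below is a minimal sketch using EZConnect; rac-scan.example.com is the example SCAN used throughout this guide, while app_user and sales_svc are placeholder names.

  # connect through the SCAN; any node hosting the service can accept the session
  sqlplus app_user@"//rac-scan.example.com:1521/sales_svc"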
2. Prerequisites and Requirements

Network Requirements

- Public network configured and operational on all cluster nodes
- Three available IP addresses in the same subnet as the public network
- IP addresses must not be in use or reserved by other systems
- Network switches and routers configured to allow SCAN traffic
- Firewall rules permitting traffic on port 1521 (or the custom listener port)

DNS Requirements

Critical Requirement: the DNS server must support round-robin resolution for multiple A records with the same hostname. This is essential for SCAN functionality.

- DNS server installed and operational (BIND, Microsoft DNS, or other)
- Forward lookup zone configured for the domain
- Reverse lookup zones configured for the IP subnets (recommended)
- Appropriate permissions to create/modify DNS records
- DNS server configured in /etc/resolv.conf on all nodes

Oracle Grid Infrastructure Requirements

- Oracle Grid Infrastructure 11.2.0.1 or higher
- Three SCAN VIP resources will be created during installation
- Three SCAN listener resources will be created automatically
- Cluster nodes must have DNS resolution working properly

Pre-Configuration Checklist

Item                Requirement                         Verification Command
DNS Resolution      Working DNS on all nodes            nslookup google.com
Domain Name         FQDN decided for SCAN               Example: rac-scan.example.com
IP Addresses        3 IPs allocated and available       ping <ip_address> (should fail)
Subnet Match        Same subnet as the public network   ifconfig or ip addr show
DNS Server Access   Admin credentials                   Log in to the DNS management console

3. DNS Server Configuration

Supported DNS Servers

Oracle SCAN supports any DNS implementation that follows standard DNS protocols. Common implementations include:

- BIND (Berkeley Internet Name Domain): most common on Linux/Unix systems
- Microsoft DNS: integrated with Windows Active Directory
- dnsmasq: lightweight DNS forwarder for smaller deployments
- PowerDNS: high-performance authoritative DNS server

BIND DNS Server Configuration

Install BIND (if not already installed):

  # RHEL/Oracle Linux/CentOS
  sudo yum install bind bind-utils -y
  # Ubuntu/Debian
  sudo apt-get install bind9 bind9utils -y
  # Verify installation
  named -v

Configure the BIND main configuration file. Edit /etc/named.conf to define the options and zones:

  sudo vi /etc/named.conf

  options {
      directory "/var/named";
      dump-file "/var/named/data/cache_dump.db";
      statistics-file "/var/named/data/named_stats.txt";
      memstatistics-file "/var/named/data/named_mem_stats.txt";
      allow-query { any; };
      recursion yes;
      # Listen on all interfaces
      listen-on port 53 { any; };
      listen-on-v6 port 53 { any; };
      # Forward requests for external domains
      forwarders { 8.8.8.8; 8.8.4.4; };
  };

  # Define forward lookup zone
  zone "example.com" IN {
      type master;
      file "example.com.zone";
      allow-update { none; };
  };

  # Define reverse lookup zone (for 192.168.1.0/24)
  zone "1.168.192.in-addr.arpa" IN {
      type master;
      file "192.168.1.rev";
      allow-update { none; };
  };

Set proper ownership and permissions:

  sudo chown named:named /etc/named.conf
  sudo chmod 640 /etc/named.conf
  # Verify configuration syntax
  sudo named-checkconf /etc/named.conf

Microsoft DNS Server Configuration

Access DNS Manager:

1. Open Server Manager on Windows Server.
2. Navigate to Tools → DNS.
3. Connect to your DNS server.
4. Expand Forward Lookup Zones.

Configure the zone:

1. Right-click Forward Lookup Zones and select New Zone.
2. Select Primary Zone and click Next.
3. Enter the zone name (e.g., example.com) and click Next.
4. Create a new zone file and click Next.
5. Allow dynamic updates if required, or select Do not allow dynamic updates.
6. Click Finish.

Round-Robin Configuration: Microsoft DNS enables round robin by default. Verify this setting in the DNS server properties under the Advanced tab and ensure Enable round robin is checked.
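For BIND deployments, the service also has to be running and reachable from the cluster nodes before any record will resolve. A minimal sketch for a RHEL-family host, assuming firewalld is in use (adjust for your distribution):

  # start BIND now and at every boot
  sudo systemctl enable --now named
  # allow DNS (TCP/UDP 53) through the host firewall
  sudo firewall-cmd --permanent --add-service=dns
  sudo firewall-cmd --reload
  # re-check the configuration after any change
  sudo named-checkconf /etc/named.conf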
4. Creating DNS Records

Planning SCAN Configuration

Parameter   Example Value          Description
SCAN Name   rac-scan               Hostname portion of the FQDN
Domain      example.com            DNS domain name
FQDN        rac-scan.example.com   Complete SCAN name
SCAN IP 1   192.168.1.201          First SCAN IP address
SCAN IP 2   192.168.1.202          Second SCAN IP address
SCAN IP 3   192.168.1.203          Third SCAN IP address

BIND DNS – Forward Zone Configuration

Create the forward zone file:

  # Create zone file
  sudo vi /var/named/example.com.zone

  # Add the following content
  $TTL 86400
  @   IN SOA ns1.example.com. admin.example.com. (
          2024011101 ; Serial (YYYYMMDDNN)
          3600       ; Refresh (1 hour)
          1800       ; Retry (30 minutes)
          604800     ; Expire (1 week)
          86400 )    ; Minimum TTL (1 day)

  ; Name server records
  @        IN NS ns1.example.com.
  ns1      IN A  192.168.1.10

  ; SCAN records (critical for Oracle RAC)
  rac-scan IN A  192.168.1.201
  rac-scan IN A  192.168.1.202
  rac-scan IN A  192.168.1.203

  ; Individual node records
  rac1     IN A  192.168.1.101
  rac2     IN A  192.168.1.102
  rac3     IN A  192.168.1.103

  ; VIP records
  rac1-vip IN A  192.168.1.111
  rac2-vip IN A  192.168.1.112
  rac3-vip IN A  192.168.1.113

Important: the three rac-scan A records must share the same hostname so that DNS returns all three addresses and rotates them in round-robin order.
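Before handing the name to Grid Infrastructure, confirm from a cluster node that the zone loads and that the SCAN name resolves to all three addresses with a rotating order; a quick check using the example zone above:

  # validate the zone file syntax
  sudo named-checkzone example.com /var/named/example.com.zone
  # all three SCAN addresses should be returned
  nslookup rac-scan.example.com
  dig +short rac-scan.example.com
  # repeat a few times; the first address returned should change if round robin is enabled
  for i in 1 2 3; do dig +short rac-scan.example.com | head -1; done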

What Is a Defect in Testing?

A Deep Dive Into Defects, Bugs, and the Complete Defect Life Cycle

Understanding Software Defects: The Foundation of Quality Testing

In software testing, a defect is anything in the product that deviates from expected behavior, whether that expectation comes from requirements, design specifications, user expectations, industry standards, or basic common sense. Defects are fundamentally why testing exists: they represent risk to your product, your users, and your business.

Some defects are tiny annoyances that slightly inconvenience users. Others are catastrophic failures that crash systems, leak sensitive data, or break critical revenue streams. Understanding the nature of defects and how to manage them effectively is what separates amateur testers from quality professionals.

The Critical Reality: finding a defect is only step one. Managing it effectively (tracking, communicating, fixing, verifying, and closing it) is what transforms a "found issue" into genuine product quality. That is where the Defect Life Cycle becomes essential.

Defect vs Bug vs Error vs Failure: Clear Definitions

People often use these terms interchangeably in casual conversation, but in professional testing environments, clarity in terminology saves time, reduces miscommunication, and improves collaboration between teams.

The Four Key Terms Explained

- Error: a human mistake made during development (made by the developer). Example: the developer writes wrong logic or misunderstands a requirement.
- Defect / Bug: a flaw in the code or product, the result of an error. Example: the login button doesn't respond to the Enter key.
- Failure: the system behaves incorrectly during execution (the runtime manifestation). Example: the user cannot log in; the application crashes.
- Incident: a reported issue in production, which may or may not be a defect (raised by a user or monitoring). Example: a customer reports "can't complete checkout".

The Defect Causation Flow

Error (human mistake) → Defect (flaw in the product) → Failure (incorrect behavior at runtime)

Examples:

ERROR:
- Misread requirement document
- Copy-pasted wrong code block
- Forgot to handle an edge case
- Used the wrong comparison operator (= instead of ==)

DEFECT:
- Button event handler not attached
- Validation logic missing
- Database query returns wrong data
- Memory leak in a loop

FAILURE:
- Application crashes on submit
- Data displayed incorrectly
- User cannot complete a transaction
- System becomes unresponsive

Professional Tip: in testing conversations, use "defect" for issues found during testing and "incident" for production issues. This distinction separates pre-release quality control from post-release support activities.

Why Defect Management Matters Beyond "Logging Bugs"

A defect isn't just a ticket in your tracking system. It serves multiple critical functions in the software development lifecycle.

The Multiple Roles of a Defect Record

- Communication Contract: a defect is a formal communication contract between tester and developer. It documents what was expected, what actually happened, and provides evidence for reproduction.
- Quality Metric: defects provide quantitative measures of product quality: defect density, defect trends over time, escape rates to production, and fix rates per sprint.
- Risk Indicator: severity and priority classifications tell the business what is at risk. Critical defects indicate potential revenue loss, security breaches, or legal liability.
- Process Signal: repeated defect types reveal systemic issues, such as gaps in requirements gathering, inadequate code reviews, insufficient test coverage, or unclear acceptance criteria.
- Audit Trail: defect records provide historical documentation for compliance, root cause analysis, and continuous improvement initiatives.

Warning Signs of Weak Defect Management:

- "Works on my machine" disputes between teams
- Defects stuck in endless reopen loops
- Missed release deadlines due to unknown defect counts
- Unstable releases with high production incident rates
- Zero trust in the testing process from stakeholders
- Developers ignoring or rejecting valid defects

Effective Defect Management: The Key to Quality Software

The Defect Life Cycle: Complete Journey from Discovery to Closure

The Defect Life Cycle (also called the Bug Life Cycle) is the structured journey a defect takes from initial discovery through final closure. Understanding each stage ensures defects are handled systematically, nothing falls through the cracks, and teams maintain clear accountability.

Stage 1: NEW

A tester identifies a defect and logs it with complete details. This is the birth of the defect in your tracking system.

Tester's responsibility at the NEW stage: make the defect reproducible and clear enough that a developer can fix it without guessing or asking for clarification. A high-quality defect report saves hours of back-and-forth communication.

Essential elements of a NEW defect:

- Title: concise description of symptom and location (e.g., "Login button unresponsive to Enter key on login page")
- Steps to Reproduce: minimal, exact, numbered steps that reliably reproduce the issue
- Expected Result: what should happen according to requirements
- Actual Result: what actually happened during testing
- Severity & Priority: impact assessment and urgency level
- Environment Details: build version, browser/device, OS, database version, user role
- Attachments: screenshots, screen recordings, console logs, network traces, stack traces
- Test Data: specific data used (masked if sensitive)

Stage 2: ASSIGNED

Once reviewed and accepted, the defect is assigned to a specific developer or team. The status changes from NEW to ASSIGNED.

Why assignment matters: assignment establishes clear ownership. Without ownership, defects languish in limbo and no one is accountable for resolution. Every defect should have exactly one owner at any given time.

Stage 3: OPEN

The developer begins active investigation and analysis. At this stage they may confirm the defect is valid or route it to a special state such as Rejected or Duplicate.

Developer responsibilities in the OPEN state:

- Reproduce the issue using the provided steps
- Identify the root cause through debugging and code analysis
- Determine the appropriate fix approach
- Request additional information if reproduction fails
- Update the defect with findings and an estimated fix time

Stage 4: FIXED

The developer implements a code change, performs internal verification (unit tests, local testing), then marks the defect as FIXED and returns it to testing.

Important truth: "FIXED" does not mean "done." It means "the developer believes the issue is resolved." The defect still requires independent verification by testing before it can be considered truly resolved.

Stage 5: RETEST

The tester re-executes the original test steps to confirm the defect no longer occurs in the new build or environment.

Best Practices for Retesting

Use the


EM Express

Enterprise Manager Database Express (EM Express) provides a web-based interface for managing Oracle databases. In Oracle 19c, DBCA automatically enables EM Express for the Container Database (CDB), but Pluggable Databases (PDBs) require manual configuration. This guide covers the complete setup process and explains how to switch between the modern Java JET interface and the feature-rich Flash-based UI.

Table of Contents
1. Overview of EM Express in Oracle 19c
2. Prerequisites
3. Configuring EM Express for CDB
4. Configuring EM Express for PDB
5. Java JET vs Flash-Based UI
6. Switching Between UI Modes
7. Best Practices

1. Overview of EM Express in Oracle 19c

Enterprise Manager Database Express is a lightweight, browser-based database management tool built into Oracle Database. Starting with Oracle 19c, the interface was modernized using Oracle JET (JavaScript Extension Toolkit), replacing the previous Flash-based implementation.

EM Express Architecture Overview

EM Express is a web-based management interface built into Oracle Database 19c that communicates through the XML DB HTTP listener on configurable HTTPS ports.

Component              Description                                        Default Port
CDB EM Express         Manages the Container Database and common users   5500 (HTTPS)
PDB EM Express         Manages individual Pluggable Database resources   5501+ (HTTPS)
XML DB HTTP Listener   Handles web requests for EM Express                Configurable

2. Prerequisites

Before configuring EM Express, ensure the following requirements are met:

- Oracle Database 19c installed and running
- SYSDBA privileges for configuration tasks
- Network access to the configured HTTPS ports
- Modern browser with JavaScript enabled (Chrome, Firefox, Edge)
- Firewall rules configured to allow traffic on the EM Express ports

Important: each PDB requires a unique HTTPS port. Plan your port allocation strategy before configuring multiple PDBs.

3. Configuring EM Express for CDB

By default, DBCA configures EM Express for the CDB during database creation; however, you may need to verify or modify the configuration. Follow these steps to configure the HTTPS port for the Container Database.

1. Connect to the database as SYSDBA:

  [oracle@dbserver ~]$ sqlplus / as sysdba
  SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 11 13:31:02 2020
  Version 19.8.0.0.0
  Connected to:
  Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
  Version 19.8.0.0.0

2. Verify the current container context:

  SQL> SHOW CON_NAME
  CON_NAME
  ------------------------------
  CDB$ROOT

3. Check the existing HTTPS port configuration:

  SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL;
  DBMS_XDB_CONFIG.GETHTTPSPORT()
  ------------------------------
                               0

  A return value of 0 indicates that the HTTPS port is not configured.

4. Configure the HTTPS port for the CDB:

  SQL> EXEC DBMS_XDB_CONFIG.SETHTTPSPORT(5500);
  PL/SQL procedure successfully completed.

5. Verify the configuration:

  SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL;
  DBMS_XDB_CONFIG.GETHTTPSPORT()
  ------------------------------
                            5500

Access URL: CDB EM Express is now accessible at https://<hostname>:5500/em

Figure 1: EM Express Login Screen (CDB). The EM Express login interface displays fields for Username, Password, and Container Name, with the Oracle Enterprise Manager Database Express branding.
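It can also help to confirm that the port is actually being served before opening the URL in a browser. A hedged check from the database host, using the dbserver hostname and the 5500 port configured above (firewalld assumed for the firewall command):

  # EM Express registers an HTTP presentation endpoint with the local listener
  lsnrctl status | grep -i "HTTP"
  # the login page should answer with an HTTP status code (self-signed certificate, hence -k)
  curl -k -o /dev/null -w "%{http_code}\n" https://dbserver:5500/em/
  # open the port if a host firewall is active
  sudo firewall-cmd --permanent --add-port=5500/tcp && sudo firewall-cmd --reload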
4. Configuring EM Express for PDB

Unlike the CDB, Pluggable Databases do not have EM Express enabled by default. Each PDB requires manual configuration with a unique HTTPS port.

1. List the available PDBs:

  SQL> SHOW PDBS
      CON_ID CON_NAME                       OPEN MODE  RESTRICTED
  ---------- ------------------------------ ---------- ----------
           2 PDB$SEED                       READ ONLY  NO
           3 WORAVRPDB1                     READ WRITE NO

2. Switch to the target PDB:

  SQL> ALTER SESSION SET CONTAINER = WORAVRPDB1;
  Session altered.

  SQL> SHOW CON_NAME
  CON_NAME
  ------------------------------
  WORAVRPDB1

3. Verify the current HTTPS port (expected: 0):

  SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL;
  DBMS_XDB_CONFIG.GETHTTPSPORT()
  ------------------------------
                               0

4. Configure a unique HTTPS port for the PDB:

  SQL> EXEC DBMS_XDB_CONFIG.SETHTTPSPORT(5501);
  PL/SQL procedure successfully completed.

5. Confirm the port assignment:

  SQL> SELECT DBMS_XDB_CONFIG.GETHTTPSPORT() FROM DUAL;
  DBMS_XDB_CONFIG.GETHTTPSPORT()
  ------------------------------
                            5501

Access URL: PDB EM Express is now accessible at https://<hostname>:5501/em

Figure 2: EM Express Login Screen (PDB). The PDB-specific EM Express login interface accessed via port 5501, showing the same authentication fields for pluggable database management.

Port Assignment Strategy for Multiple PDBs

Container   Recommended Port   Access URL
CDB$ROOT    5500               https://hostname:5500/em
PDB1        5501               https://hostname:5501/em
PDB2        5502               https://hostname:5502/em
PDB3        5503               https://hostname:5503/em

5. Java JET vs Flash-Based UI

Oracle 19c introduced a redesigned EM Express interface built on Oracle JET technology. While the new interface offers improved performance and a modern look, it provides a simplified feature set compared to the legacy Flash-based UI.

Java JET UI (default, Oracle 19c+):
- Modern, fast, and responsive interface
- No Flash dependency
- Performance monitoring focus
- Limited administrative features
- No Server tab (user management)

Flash-Based UI (legacy, pre-19c style):
- Feature-rich, full administrative capabilities
- User and schema management
- Storage administration
- Configuration options
- Complete Server tab access

Known Limitation: the Java JET UI lacks the Server tab, which means user creation and permission management must be performed via SQL*Plus or by switching to the Flash-based UI.

Figure 3: Java JET UI - Database Home (Performance tab only). The modern Java JET interface displays a simplified Database Home page with status information (uptime, version, platform) and performance graphs (Activity, Services, Instances). Only the Performance tab is available in this UI mode.

Figure 4: Flash-Based UI - Complete administrative interface. The legacy Flash-based UI provides full administrative capabilities with multiple tabs including Configuration, Storage, Security, and Performance. The interface displays comprehensive performance metrics, activity graphs, and complete database management options.

6. Switching Between UI Modes

Oracle provides a built-in script to toggle between the Java JET and Flash-based interfaces. This is particularly useful when you need access to administrative features not available in the modern UI.

Switch to the Flash-based UI (EMX mode):

  SQL> @?/rdbms/admin/execemx emx
  Session altered.
  no rows selected
  old   1: select nvl( '&1','omx') p1 from dual
  new   1: select nvl( 'emx','omx') p1 from dual
  P1
  ---
  emx
  PL/SQL procedure successfully completed.
  Session altered.

Important: after running the script, log out and log back in to EM Express for the change to take effect.

Revert to the Java JET UI (OMX mode):

  SQL> @?/rdbms/admin/execemx omx
  Session altered.
  no rows selected
  old   1: select nvl( '&1','omx') p1 from dual
  new   1: select nvl( 'omx','omx') p1 from dual
  P1
  ---
  omx
  PL/SQL procedure successfully completed.
  Session altered.
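If managing one port per PDB (as in the port assignment table above) becomes unwieldy, Oracle 12.2 and later can instead expose a single global EM Express port for the CDB and all of its PDBs, with the target container chosen on the login page. A minimal sketch, run from CDB$ROOT as SYSDBA and assuming the 5500 port configured in this guide:

  -- route EM Express access for all containers through the CDB HTTPS port
  EXEC DBMS_XDB_CONFIG.SETGLOBALPORTENABLED(TRUE);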
Quick Reference: UI Mode Commands

Command                        UI Mode
@?/rdbms/admin/execemx emx     Flash-based UI
@?/rdbms/admin/execemx omx     Java JET UI (default)

XAG (Oracle Grid Infrastructure Agents)

Install and configure XAG Services Manager

Oracle Grid Infrastructure provides the components needed to manage high availability (HA) for any business-critical application. Oracle Grid Infrastructure Agents (XAG) are Grid Infrastructure components that provide the HA framework for application resources and resource types managed through the agent management interface, AGCTL. This framework provides a complete, ready-to-use application HA solution with pre-defined Oracle Grid Infrastructure resource configurations and agents to integrate applications for end-to-end application HA.

Oracle GoldenGate Veridata 12c is a high-speed, low-impact, heterogeneous data comparison and repair solution. Working primarily outside of the database system, it quickly identifies, reports on, and fixes data discrepancies between databases without interrupting ongoing business processes.

AGCTL (Agent Control) is the command-line utility used to manage Oracle Grid Infrastructure agents (XAG) for application HA. AGCTL is the interface to add an application to Oracle Grid Infrastructure as a clustered resource under agent control. Once configured, AGCTL is the management interface to bring the application online or offline, relocate it, check the application state, and modify the agent configuration. AGCTL also has options to disable and remove the application from Oracle Grid Infrastructure agent control.

Prerequisites:

1) A shared filesystem for OGG recovery information such as checkpoint files, bounded recovery files, and trail files. This can be DBFS, OCFS (Oracle Cluster File System), or ACFS (ASM Cluster File System). In this example we use ACFS, which is included in the Clusterware bundle by default (since 11.2).
2) A separate VIP for the GoldenGate resource (in this example, 192.168.45.159).
3) The XAG installation directory must be the same on all nodes.
4) The application VIP (appvip) is required and must resolve in DNS.
5) A database service must be added for the GoldenGate database.

Stage 1. Create the file system

1. Create the ASM diskgroup:

  [oracle@west01 scp]$ sh asmdu.sh -m
  Instances running on west01 : +ASM1, woravr1
  DiskGroup  Redundancy  Total MB   Usable MB  % Free
  ---------  ----------  ---------  ---------  ------
  ACFS       EXTERN      199996.00  199860.00  99
  DATA       EXTERN      399996.00  394464.00  98
  FRA        EXTERN      150012.00  148796.00  99
  OCR        EXTERN       99998.00   98762.00  98

2. Create the ASM volume for the ACFS file system and verify it:

  ASMCMD [+] > volinfo --all
  Diskgroup Name: ACFS
  Volume Name: ACFS1
  Volume Device: /dev/asm/acfs1-148
  State: ENABLED
  Size (MB): 102400
  Resize Unit (MB): 64
  Redundancy: UNPROT
  Stripe Columns: 8
  Stripe Width (K): 1024
  Usage:
  Mountpath:

3. Create the mount point at the OS level:

  [root@west01 /]# mkdir acfs1

4. Create the ACFS file system:

  Create ACFS command: /sbin/mkfs -t acfs /dev/asm/acfs1-148

  The following commands should be run as a privileged user:

  /u01/app/oracle/19c/grid_1/bin/srvctl add filesystem -d /dev/asm/acfs1-148 -m /acfs1 -fstype ACFS -autostart ALWAYS
  /u01/app/oracle/19c/grid_1/bin/srvctl start filesystem -d /dev/asm/acfs1-148
  chown oracle:oinstall /acfs1
  chmod 775 /acfs1
  The ASMCA-generated script performs the same steps:

  [root@west01 /]# cat /u01/app/oracle_base/cfgtoollogs/asmca/scripts/acfs_script.sh
  #!/bin/sh
  /u01/app/oracle/19c/grid_1/bin/srvctl add filesystem -d /dev/asm/acfs1-148 -m /acfs1 -fstype ACFS -autostart ALWAYS
  if [ $? = "0" -o $? = "2" ]; then
    /u01/app/oracle/19c/grid_1/bin/srvctl start filesystem -d /dev/asm/acfs1-148
    if [ $? = "0" ]; then
      chmod 775 /acfs1
      /u01/app/oracle/19c/grid_1/bin/srvctl status filesystem -d /dev/asm/acfs1-148
      exit 0
    else
      exit $?
    fi
    /u01/app/oracle/19c/grid_1/bin/srvctl status filesystem -d /dev/asm/acfs1-148
  fi

  [root@west01 /]# sh /u01/app/oracle_base/cfgtoollogs/asmca/scripts/acfs_script.sh
  ACFS file system /acfs1 is mounted on nodes west01,west02

  To mount or unmount all ACFS file systems manually:

  /sbin/mount.acfs -o all
  /bin/umount -t acfs -a

5. Check the status:

  [oracle@west01 ~]$ crsctl stat res -t | grep acfs
  ora.acfs.acfs1.acfs
                 ONLINE  ONLINE  west01  mounted on /acfs1,ST
                 ONLINE  ONLINE  west02  mounted on /acfs1,ST

Stage 2. Configure the VIP and database service for XAG

1. Add the database service:

  srvctl add service -d woravr -s woravrsvc -r west01 -a west02 -P PRECONNECT

2. Add the application VIP:

  crsctl stat res -p | grep -ie .network -ie subnet | grep -ie name -ie subnet

  [root@west01 ~]# appvipcfg create -network=1 -ip=192.168.45.159 -vipname=westgg-vip -user=oracle
  Using configuration parameter file: /u01/app/oracle/19c/grid_1/crs/install/crsconfig_params
  The log of current session can be found at:
  /u01/app/oracle_base/crsdata/west01/scripts/appvipcfg.log

  [root@west01 ~]# crsctl start resource westgg-vip
  CRS-2672: Attempting to start 'westgg-vip' on 'west02'
  CRS-2676: Start of 'westgg-vip' on 'west02' succeeded

Stage 3. Install and configure the XAG setup

Download the xag.zip bundle.

1. Unzip it in the /tmp folder:

  [oracle@west01 tmp]$ unzip xag.zip
  Archive: xag.zip
    creating: xag/
    inflating: xag/xagsetup.sh
    inflating: xag/aggoldengate.pm
    inflating: xag/agapacheas.pm
    ..........skipped..............
    inflating: xag/template/xag.ebs.type
    inflating: xag/template/xag.goldengate.type
    inflating: xag/template/xag.base.type

2. Create the same directory structure on all nodes:

  west01:
  [root@west01 ]# mkdir -p /u01/app/oracle/19c/xag
  [root@west01 ]# chown oracle:oinstall -R xag

  west02:
  [root@west02 ]# mkdir -p /u01/app/oracle/19c/xag
  [root@west02 ]# chown oracle:oinstall -R xag

3. Install XAG:

  [oracle@west01 ]$ cd /tmp/xag
  [oracle@west01 ]$ ./xagsetup.sh --install --directory /u01/app/oracle/19c/xag --nodes west01,west02
  Installing Oracle Grid Infrastructure Agents on: west01
  Installing Oracle Grid Infrastructure Agents on: west02
  Done.

4. Add the XAG GoldenGate resource with AGCTL. The agctl binary is located in /u01/app/oracle/19c/xag/bin.

  For Microservices (run as root):

  /u01/app/oracle/19c/xag/bin/agctl add goldengate westgg \
    --gg_home /acfs1/ggate \
    --service_manager \
    --config_home /acfs1/ggate/deploy/etc/conf \
    --var_home /acfs1/ggate/deploy/var \
    --port 7900 \
    --adminuser ggadmin \
    --user oracle --group oinstall \
    --vip_name westgg-vip \
    --filesystems ora.acfs.acfs1.acfs \
    --db_services ora.woravr.woravrsvc.svc --use_local_services \
    --nodes west01,west02

  For classic GoldenGate:

  /u01/app/oracle/19c/xag/bin/agctl add goldengate westgg \
    --gg_home /acfs/ggate \
    --nodes west01,west02 \
    --vip_name westgg-vip \
    --filesystems ora.acfs.acfs1.acfs \
    --databases ora.woravr.db \
    --oracle_home /u01/app/oracle/19c/db_1

  Where:
  --gg_home specifies the location of the Oracle GoldenGate software.
  --service_manager indicates this is an Oracle GoldenGate Microservices instance.
  --config_home specifies the Oracle GoldenGate deployment configuration home directory.
  --var_home specifies the Oracle GoldenGate deployment variable home directory.
  --port specifies the deployment Service Manager port number.
  --adminuser specifies the Oracle GoldenGate Microservices administrator account name.
  --user specifies the name of the operating system user that owns the Oracle GoldenGate deployment.
  --group specifies the name of the operating system group that owns the Oracle GoldenGate deployment.
  --network specifies the network subnet for the VIP (optional).
  --ip specifies the IP address for the VIP (optional). If you have already created a VIP, specify it with --vip_name instead of --network and --ip.
  --filesystems specifies the CRS file system resource that must be mounted before the deployment is started.

5. Start the service:

  [oracle@west01 ]$ /u01/app/oracle/19c/xag/bin/agctl start goldengate westgg

6. Check the status:

  [oracle@west01 ]$ /u01/app/oracle/19c/xag/bin/agctl status goldengate
  Goldengate instance 'westgg' is running on
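Once registered, the GoldenGate instance is managed entirely through AGCTL. A hedged sketch of the day-to-day operations, using the westgg instance and nodes from this example:

  # show current state and stored configuration
  /u01/app/oracle/19c/xag/bin/agctl status goldengate westgg
  /u01/app/oracle/19c/xag/bin/agctl config goldengate westgg
  # planned move to the other node
  /u01/app/oracle/19c/xag/bin/agctl relocate goldengate westgg --node west02
  # stop and start around maintenance windows
  /u01/app/oracle/19c/xag/bin/agctl stop goldengate westgg
  /u01/app/oracle/19c/xag/bin/agctl start goldengate westgg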

Converting Oracle Standby Database  

Converting Oracle Standby Database: Physical to Snapshot and Back Using DataGuard Broker

Oracle Data Guard Administration | Snapshot Standby Database Management

Oracle Data Guard provides the ability to convert a physical standby database to a snapshot standby database for testing, development, or validation purposes. This guide provides comprehensive procedures for converting between physical and snapshot standby modes using the Data Guard Broker (DGMGRL), ensuring seamless transitions while maintaining data protection capabilities.

Table of Contents
- Overview: Physical vs Snapshot Standby
- Prerequisites and Considerations
- Convert Physical Standby to Snapshot Standby
- Convert Snapshot Standby Back to Physical
- Validation and Verification
- Best Practices and Recommendations

Overview: Physical vs Snapshot Standby

Oracle Data Guard supports multiple standby database types, each serving different operational requirements. Understanding the differences between physical standby and snapshot standby databases is essential for effective disaster recovery and database management strategies.

Physical Standby Database
- Block-for-block copy of the primary database
- Continuously applies redo data from the primary
- Read-only access (with Active Data Guard)
- Provides disaster recovery protection
- Cannot be opened for read-write operations
- Maintains synchronization with the primary

Snapshot Standby Database
- Fully updateable standby database
- Receives but does not apply redo data
- Open for read-write operations
- Ideal for testing and development
- Changes are discarded upon conversion back
- Continues to receive redo from the primary

Key Concept: Flashback Database Requirement

Snapshot standby conversion relies on Oracle Flashback Database technology. When converting from physical to snapshot standby, a guaranteed restore point is automatically created. This restore point enables the database to discard all changes made during the snapshot period and return to synchronization with the primary database when converted back to physical standby.

Prerequisites and Considerations

Technical Requirements

Requirement          Description                                         Verification Command
Flashback Database   Must be enabled on the standby database             SELECT flashback_on FROM v$database;
Fast Recovery Area   Configured with adequate space for flashback logs   SHOW PARAMETER db_recovery_file_dest;
DataGuard Broker     Configuration must be enabled and healthy           SHOW CONFIGURATION;
Database Version     Oracle 11g Release 1 or later                       SELECT version FROM v$instance;
Standby State        Physical standby must be mounted or applying redo   SELECT database_role FROM v$database;

Pre-Conversion Checklist

1. Verify Flashback Database status (SQL*Plus, standby database):

  SQL> SELECT flashback_on, current_scn FROM v$database;
  FLASHBACK_ON       CURRENT_SCN
  ----------------   -----------
  YES                1234567890

2. Check Fast Recovery Area space (SQL*Plus, standby database):

  SQL> SELECT name,
              space_limit/1024/1024/1024 AS limit_gb,
              space_used/1024/1024/1024 AS used_gb,
              space_reclaimable/1024/1024/1024 AS reclaimable_gb,
              ROUND((space_used/space_limit)*100,2) AS pct_used
       FROM v$recovery_file_dest;

  NAME   LIMIT_GB   USED_GB   RECLAIMABLE_GB   PCT_USED
  ----   --------   -------   --------------   --------
  +FRA   500        125.5     15.2             25.10

3. Verify the Data Guard Broker configuration (DGMGRL):

  DGMGRL> CONNECT sys@primary_db
  Password: ********
  Connected to "PRIMARY_DB"
  Connected as SYSDBA.
  DGMGRL> SHOW CONFIGURATION;
  Configuration - DG_CONFIG
    Protection Mode: MaxPerformance
    Members:
      PRIMARY_DB - Primary database
      STANDBY_DB - Physical standby database
  Fast-Start Failover: Disabled
  Configuration Status:
  SUCCESS (status updated 15 seconds ago)

Important Considerations

- Ensure sufficient FRA space for flashback logs during the snapshot period
- Plan for increased storage as redo logs accumulate without being applied
- Document the snapshot duration to manage resource consumption
- All changes made to the snapshot standby will be permanently lost upon conversion back

Convert Physical Standby to Snapshot Standby

Converting a physical standby database to a snapshot standby enables read-write access for testing, development, or validation scenarios. The Data Guard Broker simplifies this process to a single command.

Conversion Procedure Using DGMGRL

1. Connect to the Data Guard Broker (primary or standby):

  $ dgmgrl
  DGMGRL> CONNECT sys@primary_db
  Password: ********
  Connected to "PRIMARY_DB"
  Connected as SYSDBA.

2. Verify the current standby status:

  DGMGRL> SHOW DATABASE STANDBY_DB;
  Database - STANDBY_DB
    Role:               PHYSICAL STANDBY
    Intended State:     APPLY-ON
    Transport Lag:      0 seconds (computed 1 second ago)
    Apply Lag:          0 seconds (computed 1 second ago)
    Average Apply Rate: 1.02 MByte/s
    Real Time Query:    ON
    Instance(s):        STANDBY_DB1
  Database Status:
  SUCCESS

3. Convert to snapshot standby:

  DGMGRL> CONVERT DATABASE STANDBY_DB TO SNAPSHOT STANDBY;
  Converting database "STANDBY_DB" to a Snapshot Standby database, please wait...
  Database "STANDBY_DB" converted successfully

4. Verify the snapshot standby status:

  DGMGRL> SHOW DATABASE STANDBY_DB;
  Database - STANDBY_DB
    Role:               SNAPSHOT STANDBY
    Intended State:     APPLY-OFF
    Transport Lag:      0 seconds (computed 0 seconds ago)
    Apply Lag:          (unknown)
    Average Apply Rate: (unknown)
    Real Time Query:    OFF
    Instance(s):        STANDBY_DB1
  Database Warning(s):
    ORA-16714: the value of property ArchiveLagTarget is inconsistent with the requirement
  Database Status:
  WARNING

Expected database state after conversion:

  SQL> SELECT database_role, open_mode FROM v$database;
  DATABASE_ROLE      OPEN_MODE
  ----------------   --------------------
  SNAPSHOT STANDBY   READ WRITE

  SQL> SELECT name, guarantee_flashback_database FROM v$restore_point
       WHERE guarantee_flashback_database = 'YES';
  NAME                            GUA
  -----------------------------   ---
  SNAPSHOT_STANDBY_REQUIRED_01    YES

Conversion Complete: the standby database is now a snapshot standby and can accept read-write operations. A guaranteed restore point has been automatically created to enable conversion back to physical standby. Redo logs continue to be received from the primary but are not applied.

Convert Snapshot Standby Back to Physical

After completing testing or development activities on the snapshot standby, convert it back to a physical standby to resume redo apply and data protection. This process uses Flashback Database to discard all changes made during the snapshot period.

Data Loss Warning: all changes made to the snapshot standby database will be permanently discarded during conversion back to physical standby. Ensure all necessary data has been exported or backed up before proceeding.

Conversion Procedure Using DGMGRL

1. Connect to the Data Guard Broker:

  DGMGRL> CONNECT sys@primary_db
  Password: ********
  Connected to "PRIMARY_DB"
  Connected as SYSDBA.
2. Verify the current snapshot status:

  DGMGRL> SHOW DATABASE STANDBY_DB;
  Database - STANDBY_DB
    Role:               SNAPSHOT STANDBY
    Intended State:     APPLY-OFF
    Transport Lag:      0 seconds (computed 2 seconds ago)
    Apply Lag:          2 hours 15 minutes (computed 2 seconds ago)
    Average Apply Rate: (unknown)
    Real Time Query:    OFF
    Instance(s):        STANDBY_DB1
  Database Status:
  WARNING

3. Convert back to physical standby:

  DGMGRL> CONVERT DATABASE STANDBY_DB TO PHYSICAL STANDBY;
  Converting database "STANDBY_DB" to a Physical Standby database, please wait...
  Operation requires shut down of instance "STANDBY_DB1" on database "STANDBY_DB"
  Shutting down instance "STANDBY_DB1"...
  Database closed.
  Database dismounted.
  ORACLE instance shut down.
  Operation requires start up of instance "STANDBY_DB1"
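While a standby runs in snapshot mode, flashback logs accumulate in the FRA to support the guaranteed restore point, so it is worth keeping an eye on space. A hedged monitoring sketch, run in SQL*Plus on the snapshot standby:

  -- size of the guaranteed restore point created by the conversion
  SELECT name, scn, storage_size/1024/1024 AS size_mb
  FROM   v$restore_point
  WHERE  guarantee_flashback_database = 'YES';

  -- current flashback log usage and FRA headroom
  SELECT flashback_size/1024/1024 AS flashback_mb FROM v$flashback_database_log;
  SELECT name, ROUND(space_used/space_limit*100,2) AS pct_used FROM v$recovery_file_dest;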

Configure Data Guard Broker 19c

Oracle Data Guard Broker provides a centralized framework to manage and monitor an entire Data Guard configuration through a unified client interface. The broker simplifies complex administrative tasks, automates failover operations, and ensures configuration consistency across all databases in your Data Guard environment.

Table of Contents
- Data Guard Broker Overview
- Architecture and Configuration Files
- Key Benefits and Capabilities
- Prerequisites
- Removing Existing Broker Configuration
- Disabling Archive Destinations
- Enabling Data Guard Broker
- Creating Broker Configuration
- Enabling and Verifying Configuration
- Configuration Management

Data Guard Broker Overview

Data Guard Broker is Oracle's enterprise-class management layer that sits above the core Data Guard infrastructure. It provides a unified interface for managing complex Data Guard topologies with multiple standby databases, automating routine tasks, and ensuring configuration consistency.

What is Data Guard Broker?

- Centralized Management Framework: manage an entire Data Guard configuration through a single client connection to any database in the configuration
- Command-Line Interface (DGMGRL): a powerful CLI for configuration, monitoring, and management operations
- Automated Operations: simplifies switchover and failover with single-command execution
- Unified Configuration: eliminates the need for multiple SQL*Plus sessions across different databases
- Configuration Persistence: maintains configuration details in binary flat files stored at each database location

Broker Limitations: DGMGRL cannot create standby databases from scratch. Use RMAN Active Duplicate or the Oracle Enterprise Manager Cloud Control GUI for standby database creation. The CLI is primarily used for configuration management and operational tasks after the standby databases are established.

Architecture and Configuration Files

Configuration File Storage

The broker maintains its configuration metadata in binary flat files rather than within the database itself. This design ensures that configuration information remains accessible even when databases are not mounted or open.

Configuration File Parameters

Two initialization parameters control the location of the broker configuration files:

- DG_BROKER_CONFIG_FILE1: primary configuration file location
- DG_BROKER_CONFIG_FILE2: secondary configuration file location (redundancy)

Redundancy and Protection: two copies of the configuration files are always maintained on each database node for redundancy and protection against corruption. If one file becomes corrupted, the broker automatically uses the alternate file. Both files can be stored in ASM disk groups or on the local filesystem, depending on your storage architecture.

File Location Examples

  SQL> SHOW PARAMETER dg_broker_config

  NAME                      TYPE     VALUE
  ------------------------  -------  ---------------------------
  dg_broker_config_file1    string   +DATA/woravr/dr1woravr.dat
  dg_broker_config_file2    string   +DATA/woravr/dr2woravr.dat
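On RAC databases, placing both files on shared storage ensures every instance reads the same copy. A minimal sketch, setting the same ASM locations shown above before the broker is started:

  ALTER SYSTEM SET dg_broker_config_file1='+DATA/woravr/dr1woravr.dat' SCOPE=BOTH SID='*';
  ALTER SYSTEM SET dg_broker_config_file2='+DATA/woravr/dr2woravr.dat' SCOPE=BOTH SID='*';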
Key Benefits and Capabilities

- Simplified Switchover/Failover: execute role transitions with a single command, minimizing downtime for planned maintenance or unplanned outages. The broker performs all pre-checks and validation automatically.
- Unified Configuration Management: replace multiple SQL*Plus statements across different databases with a single unified interface. All configuration changes are automatically propagated to all members.
- Automated Monitoring: continuous health monitoring of all databases in the configuration, with automatic detection of lag, apply-rate issues, and connectivity problems.
- Fast-Start Failover: optional automatic failover capability that detects primary database failures and initiates failover to a designated standby database without manual intervention.
- Configuration Validation: comprehensive pre-checks before role transitions to ensure all prerequisites are met, reducing the risk of failed switchover or failover operations.
- Centralized Property Management: manage Data Guard-related initialization parameters through broker properties, ensuring consistency across all databases in the configuration.

Prerequisites

Before configuring Data Guard Broker, ensure the following conditions are met:

- Data Guard Configuration: a functioning Data Guard configuration with at least one standby database
- Network Connectivity: TNS connectivity established between all databases in the configuration
- Matching Software Versions: all databases running the same Oracle Database version
- Unique DB_UNIQUE_NAME: each database must have a unique DB_UNIQUE_NAME parameter
- COMPATIBLE Parameter: the COMPATIBLE parameter should be set consistently across all databases
- Valid TNS Entries: properly configured tnsnames.ora entries on all nodes

Step 1: Removing Existing Broker Configuration

If you have an existing broker configuration that needs to be reconfigured or removed, start by cleanly removing the current configuration.

1. Connect to DGMGRL and remove the configuration:

  [oracle@west01 ~]$ dgmgrl
  DGMGRL for Linux: Release 19.0.0.0.0 - Production on Tue Apr 20 15:51:38 2021
  Version 19.3.0.0.0
  Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
  Welcome to DGMGRL, type "help" for information.

  DGMGRL> remove configuration;
  Removed configuration

  DGMGRL> show configuration;
  ORA-16532: Oracle Data Guard broker configuration does not exist
  Configuration details cannot be determined by DGMGRL

Configuration Removal Impact: removing the broker configuration does not affect the underlying Data Guard setup. Redo transport and apply services continue to function. Only the broker metadata is removed, and you can recreate the broker configuration at any time.

Step 2: Disabling LOG_ARCHIVE_DEST_2

Before enabling Data Guard Broker, clear the LOG_ARCHIVE_DEST_2 parameter on both the primary and standby databases. The broker will manage this parameter automatically once enabled.

2. Clear the archive destination on the primary:

  SQL> SHOW PARAMETER log_archive_dest_2
  NAME                  TYPE     VALUE
  --------------------  -------  -----------------------------------------
  log_archive_dest_2    string   SERVICE=woravrdr LGWR ASYNC
                                 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
                                 DB_UNIQUE_NAME=woravrdr

  SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='' SCOPE=BOTH;
  System altered.

  SQL> SHOW PARAMETER log_archive_dest_2
  NAME                  TYPE     VALUE
  --------------------  -------  -----------------
  log_archive_dest_2    string

3. Clear the archive destination on the standby:

  SQL> SHOW PARAMETER log_archive_dest_2
  NAME                  TYPE     VALUE
  --------------------  -------  -----------------------------------------
  log_archive_dest_2    string   SERVICE=woravr LGWR ASYNC
                                 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
                                 DB_UNIQUE_NAME=woravr

  SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='' SCOPE=BOTH;
  System altered.

  SQL> SHOW PARAMETER log_archive_dest_2
  NAME                  TYPE     VALUE
  --------------------  -------  -----------------
  log_archive_dest_2    string

Why Clear LOG_ARCHIVE_DEST_2?
The broker automatically manages redo transport destinations through its own property system. Manually configured LOG_ARCHIVE_DEST parameters can conflict with broker-managed settings. Once the broker is enabled, it will configure these parameters according to your broker properties.

Step 3: Enabling Data Guard Broker

The DG_BROKER_START parameter controls whether the Data Guard Broker is active on each database. It must be set to TRUE on all databases that will participate in the broker configuration.

4. Enable the DG Broker process on the primary:

  SQL> SHOW PARAMETER dg_broker_start
  NAME                TYPE      VALUE
  ------------------  --------  -----------------
  dg_broker_start     boolean   TRUE

If the parameter is not already TRUE, set it with ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH SID='*'; and repeat the same check on the standby database.
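Once DG_BROKER_START is TRUE on both databases, the broker configuration is created and enabled through DGMGRL. A minimal sketch, assuming the woravr (primary) and woravrdr (standby) connect identifiers used earlier; the configuration name dg_woravr is arbitrary:

  DGMGRL> CONNECT sys@woravr
  DGMGRL> CREATE CONFIGURATION 'dg_woravr' AS PRIMARY DATABASE IS 'woravr' CONNECT IDENTIFIER IS woravr;
  DGMGRL> ADD DATABASE 'woravrdr' AS CONNECT IDENTIFIER IS woravrdr MAINTAINED AS PHYSICAL;
  DGMGRL> ENABLE CONFIGURATION;
  DGMGRL> SHOW CONFIGURATION;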

Oracle Data Guard

Oracle Data Guard Configuration for RAC Using Active Duplicate

Oracle Database 19c Administration | Complete Implementation Guide

This comprehensive guide demonstrates how to configure Oracle Data Guard for a Real Application Clusters (RAC) environment using the RMAN Active Duplicate method. Active Duplicate eliminates the need for backup and restore operations, enabling efficient standby database creation directly from the primary database over the network.

Table of Contents
1. Environment Overview
2. Prerequisites and Architecture
3. Primary Database Configuration
4. Standby Database Configuration
5. RMAN Active Duplicate Process
6. Verification and Monitoring
7. Monitoring Scripts

1. Environment Overview

This implementation demonstrates a Data Guard configuration between two geographically distributed RAC clusters. The setup includes a two-node primary cluster and a two-node standby cluster, both running Oracle Database 19c Enterprise Edition on ASM storage.

Environment Specifications

Component        Primary Site                        Standby Site
Platform         Linux x86_64                        Linux x86_64
Servers          west01, west02                      east01, east02
Domain           westscan.oravr.in                   eastscan.oravr.in
Oracle Version   19.10.0.0                           19.10.0.0
Storage          ASM (+DATA, +FRA)                   ASM (+DATA_DG, +DATA_DG)
Database Name    woravr                              woravr
DB_UNIQUE_NAME   woravr                              woravrdr
Instances        woravr1, woravr2                    woravrdr1, woravrdr2
Oracle Home      /u01/app/oracle/product/19c/db_1    /u01/app/oracle/product/19c/db_1

Data Guard RAC Architecture: the primary RAC cluster (west01/west02) synchronizes with the standby RAC cluster (east01/east02) using redo transport services. Both clusters use ASM for storage management and SCAN listeners for client connectivity.

2. Prerequisites and Architecture

Cluster Status Verification

Before proceeding with Data Guard configuration, verify that all cluster services are online at both sites.

Primary cluster status check:

  [oracle@west01 ~]$ crsctl check cluster -all
  **************************************************************
  west01:
  CRS-4537: Cluster Ready Services is online
  CRS-4529: Cluster Synchronization Services is online
  CRS-4533: Event Manager is online
  **************************************************************
  west02:
  CRS-4537: Cluster Ready Services is online
  CRS-4529: Cluster Synchronization Services is online
  CRS-4533: Event Manager is online
  **************************************************************

Important: ensure all CRS components (Cluster Ready Services, Cluster Synchronization Services, and Event Manager) are online on all nodes before proceeding with Data Guard configuration.

Key Requirements

- Network Connectivity: bidirectional network access between primary and standby sites
- Identical OS Platform: both sites must run compatible Linux x86_64 versions
- Same Oracle Version: identical Oracle Database version and patch level
- ASM Configuration: ASM disk groups configured at both sites
- TNS Configuration: proper listener and tnsnames configuration for cross-site communication
- Password File: synchronized password files with SYS credentials

3. Primary Database Configuration

Step 1: Enable Forced Logging

Forced logging ensures all database changes are captured in the redo logs, which is critical for Data Guard synchronization.

  SQL> ALTER DATABASE FORCE LOGGING;
  Database altered.
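A quick check that the setting took effect (minimal verification, run on the primary in SQL*Plus):

  -- should return YES after the statement above
  SELECT force_logging FROM v$database;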
Step 2: Copy the Password File to the Standby

The password file must be identical at both sites to allow SYS authentication for the redo transport services.

Extract the password file from ASM and copy it to the standby nodes:

  ASMCMD> pwd
  +data/woravr/password
  ASMCMD> pwcopy pwdworavr.314.5141100038 /tmp
  copying +data/woravr/password/pwdworavr.314.5141100038 -> /tmp/pwdworavr.314.5141100038

  [oracle@west02]$ scp -p /tmp/pwdworavr.314.5141100038 oracle@east01:/u01/app/oracle/product/19c/db_1/dbs/orapwworavrdr1
  [oracle@west02]$ scp -p /tmp/pwdworavr.314.5141100038 oracle@east02:/u01/app/oracle/product/19c/db_1/dbs/orapwworavrdr2

Step 3: Configure Standby Redo Logs

Standby redo logs are essential for real-time redo apply and for protection modes such as Maximum Availability and Maximum Protection. The recommended number of standby redo log groups is:

  (maximum number of logfile groups per thread + 1) × maximum number of threads

For this configuration: (2 + 1) × 2 = 6 standby redo logs.

Verify the current online redo log configuration:

  SQL> SELECT b.thread#, a.group#, a.member, b.bytes
       FROM v$logfile a, v$log b
       WHERE a.group# = b.group#;

  THREAD#  GROUP#  MEMBER                      BYTES
  -------  ------  --------------------------  ---------
  1        2       +DATA/woravr/redo02.log     209715200
  1        1       +DATA/woravr/redo01.log     209715200
  2        3       +DATA/woravr/redo03.log     209715200
  2        4       +DATA/woravr/redo04.log     209715200

Create standby redo logs for thread 1:

  SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1
       GROUP 5 ('+DATA/woravr/redo05.log') SIZE 200M,
       GROUP 6 ('+DATA/woravr/redo06.log') SIZE 200M,
       GROUP 7 ('+DATA/woravr/redo07.log') SIZE 200M;
  Database altered.

Create standby redo logs for thread 2:

  SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2
       GROUP 8 ('+DATA/woravr/redo08.log') SIZE 200M,
       GROUP 9 ('+DATA/woravr/redo09.log') SIZE 200M,
       GROUP 10 ('+DATA/woravr/redo10.log') SIZE 200M;
  Database altered.

Verify standby redo log creation:

  SQL> SELECT b.thread#, a.group#, a.member, b.bytes
       FROM v$logfile a, v$standby_log b
       WHERE a.group# = b.group#;

  THREAD#  GROUP#  MEMBER                      BYTES
  -------  ------  --------------------------  ---------
  1        5       +DATA/woravr/redo05.log     209715200
  1        6       +DATA/woravr/redo06.log     209715200
  1        7       +DATA/woravr/redo07.log     209715200
  2        8       +DATA/woravr/redo08.log     209715200
  2        9       +DATA/woravr/redo09.log     209715200
  2        10      +DATA/woravr/redo10.log     209715200
  6 rows selected.

Step 4: Verify Archive Mode

  SQL> ARCHIVE LOG LIST;
  Database log mode              Archive Mode
  Automatic archival             Enabled
  Archive destination            +FRA
  Oldest online log sequence     10
  Next log sequence to archive   11
  Current log sequence           11

Step 5: Configure Primary Database Initialization Parameters

Data Guard requires specific initialization parameters to enable redo transport, log apply services, and file name conversion between the sites.

Back up the current parameter file:

  SQL> CREATE PFILE='/home/oracle/initworavr.ora.bkp_vr' FROM SPFILE;
  File created.

Set the essential Data Guard parameters:

  SQL> ALTER SYSTEM SET db_unique_name='woravr' SCOPE=SPFILE SID='*';
  SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(woravr,woravrdr)' SCOPE=BOTH SID='*';
  SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=+FRA VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=woravr' SCOPE=BOTH SID='*';
  SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=woravrdr LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=woravrdr' SCOPE=BOTH SID='*';
  SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE SCOPE=BOTH SID='*';
  SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH SID='*';

Deprecation Notice: the FAL_CLIENT parameter is deprecated in Oracle 19c but is included here for reference. FAL_SERVER remains active for fetch archive log operations.
Configure FAL and file name conversion:

  SQL> ALTER SYSTEM SET fal_client=woravr SCOPE=BOTH SID='*';
  SQL> ALTER SYSTEM SET fal_server=woravrdr SCOPE=BOTH SID='*';
  SQL> ALTER SYSTEM SET DB_FILE_NAME_CONVERT='+DATA_DG','+DATA' SCOPE=SPFILE SID='*';
  SQL> ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='+DATA_DG','+DATA' SCOPE=SPFILE SID='*';
  SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO SCOPE=BOTH SID='*';

Key Initialization Parameters Explained

Parameter                 Purpose                                                 Configuration
LOG_ARCHIVE_CONFIG        Defines all databases in the Data Guard configuration   DG_CONFIG=(woravr,woravrdr)
LOG_ARCHIVE_DEST_1        Local archive destination                               +FRA for all roles
LOG_ARCHIVE_DEST_2        Remote standby destination using LGWR ASYNC             SERVICE=woravrdr
FAL_SERVER                Fetch Archive Log server for gap resolution             woravrdr
DB_FILE_NAME_CONVERT      Maps datafile paths from standby to primary             +DATA_DG → +DATA
LOG_FILE_NAME_CONVERT     Maps redo log paths from standby to primary             +DATA_DG → +DATA
STANDBY_FILE_MANAGEMENT   Automates standby file operations                       AUTO

Step 6: Configure TNS Entries on the Primary

Both the primary and standby sites require TNS entries for bidirectional communication. The (UR=A) setting in a connect descriptor allows connections to an instance that is started but not yet mounted or open (for example, during duplication), when the service handler would otherwise reject the connection.

Primary site tnsnames.ora (all nodes):

  woravr =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = westscan.oravr.in)(PORT = 1521))
      (CONNECT_DATA =
        (SERVER = DEDICATED)
        (SERVICE_NAME = woravr)
      )
    )

  woravrdr =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL =
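Once the parameters, password files, and TNS aliases are in place at both sites, the standby is instantiated over the network with RMAN Active Duplicate. A minimal sketch of the core step, assuming the standby instance woravrdr1 has been started NOMOUNT on east01 with a minimal init file, the woravrdr alias above resolves from both sites with (UR=A), and SYS passwords match; exact clauses (SPFILE overrides, channel allocation) vary by environment:

  rman target sys@woravr auxiliary sys@woravrdr
  RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER NOFILENAMECHECK;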

Search Sources in Service Portal

Search Sources (Service Portal)

Search Sources define where the Service Portal looks for data during a search. They help fetch relevant records, knowledge articles, and catalog items. By configuring search sources, users get faster and more accurate search results.

From an end-user perspective, whenever someone needs information, the first instinct is to use the search box and type a keyword, expecting accurate, matching results. Service Portal Search Sources provide exactly this functionality: they let us define where the data should come from (which tables), apply conditions if needed, and control which fields are displayed in the search results, ensuring a smooth and relevant search experience.

To set one up:

1. Create a new Service Portal Search Source, provide the required details, and save the record. After saving, the Source section appears below the form, where you can configure tables, conditions, and fields for the search results.
2. Click New and add the required configuration, such as Name, ID, and other necessary details, then save the record.
3. Scroll down to the Data Source section, add the Incident table, and configure Short Description as shown in the snapshot below.
4. To check the result, create a new page with one container and add a widget instance of type Homepage Search. The results will appear as shown below.