
Oracle Database Architecture 

Complete Technical Guide | January 11, 2026

Oracle Database architecture is a sophisticated system comprising memory structures, background processes, and physical storage components that work together to provide high performance, reliability, and data integrity. This guide provides an in-depth exploration of Oracle's architectural components and their interactions.

Table of Contents
1. Architecture Overview
2. Oracle Instance vs Database
3. Memory Architecture
4. Background Processes
5. Physical Storage Structures
6. Logical Storage Structures
7. Data Flow and Processing
8. RAC Architecture

1. Architecture Overview
Oracle Database employs a multi-layered architecture designed for enterprise-grade performance, scalability, and fault tolerance. The architecture separates the database instance (memory and processes) from the database itself (physical files), enabling flexible deployment options including single-instance, RAC, and Data Guard configurations.

Figure 1: Oracle Database Architecture – High-Level Overview

Key Architectural Principle
The separation of instance and database enables Oracle's high availability features. An instance can be started, stopped, or failed over independently while the database files remain persistent on storage.

2. Oracle Instance vs Database
An Oracle instance consists of memory structures and background processes that manage database operations. The instance exists in server memory and is transient: it is created when the database starts and destroyed when it shuts down.

Component                 | Type    | Description
System Global Area (SGA)  | Memory  | Shared memory region containing data and control information
Program Global Area (PGA) | Memory  | Private memory region for each server process
Background Processes      | Process | Mandatory and optional processes performing database operations

Oracle Database
The database consists of physical files stored on disk that contain the actual data, metadata, and transaction logs. These files persist across instance restarts and contain:
- Data Files: store actual user and system data in tablespaces
- Control Files: contain database metadata and structural information
- Redo Log Files: record all changes made to the database
- Archive Log Files: historical copies of redo logs for recovery
- Parameter Files: instance configuration parameters (SPFILE/PFILE)

-- View instance status and database information
SELECT instance_name, status, database_status FROM v$instance;
SELECT name, open_mode, database_role FROM v$database;

3. Memory Architecture

System Global Area (SGA)
The SGA is a shared memory region that contains data and control information for the Oracle instance. All server processes and background processes share access to the SGA.

Database Buffer Cache
Caches data blocks read from data files. The buffer cache uses an LRU (Least Recently Used) algorithm to manage blocks, keeping frequently accessed data in memory to minimize physical I/O.

-- View buffer cache size
SELECT name, ROUND(bytes/1024/1024, 2) size_mb
FROM v$sgainfo
WHERE name LIKE '%Buffer%';

Shared Pool
Contains the library cache (parsed SQL statements and execution plans), the data dictionary cache, and other shared structures. Proper sizing prevents hard parsing overhead.
-- View top shared pool allocations
SELECT pool, name, bytes/1024/1024 mb
FROM v$sgastat
WHERE pool = 'shared pool'
ORDER BY bytes DESC
FETCH FIRST 10 ROWS ONLY;

Redo Log Buffer
A circular buffer that caches redo entries before they are written to the online redo log files. LGWR flushes this buffer under specific conditions to ensure durability.

Large Pool
Optional memory area used for large memory allocations including RMAN backup/restore operations, shared server session memory, and parallel query operations.

SGA Component   | Purpose                             | Sizing Parameter
Buffer Cache    | Data block caching                  | DB_CACHE_SIZE
Shared Pool     | SQL/PLSQL caching, dictionary       | SHARED_POOL_SIZE
Redo Log Buffer | Redo entry caching                  | LOG_BUFFER
Large Pool      | Large allocations (RMAN, parallel)  | LARGE_POOL_SIZE
Java Pool       | Java stored procedures              | JAVA_POOL_SIZE
Streams Pool    | Oracle Streams/GoldenGate           | STREAMS_POOL_SIZE

-- View SGA component sizes
SELECT component, current_size/1024/1024 current_mb,
       min_size/1024/1024 min_mb, max_size/1024/1024 max_mb
FROM v$sga_dynamic_components
WHERE current_size > 0;

Program Global Area (PGA)
The PGA is a private memory region containing data and control information for each server process. Unlike the SGA, PGA memory is not shared between processes.
- Sort Area: memory for sort operations (ORDER BY, GROUP BY, DISTINCT)
- Hash Area: memory for hash joins and hash aggregations
- Session Memory: session-specific variables and cursors
- Private SQL Area: bind variable values and runtime state

-- View PGA memory usage
SELECT name, value/1024/1024 mb
FROM v$pgastat
WHERE name IN ('aggregate PGA target parameter',
               'aggregate PGA auto target',
               'total PGA allocated',
               'total PGA used for auto workareas');

4. Background Processes
Oracle background processes perform maintenance tasks, I/O operations, and ensure database consistency. Some processes are mandatory while others start based on configuration.

Mandatory Background Processes
Process | Name             | Function
DBWn    | Database Writer  | Writes modified buffers from the buffer cache to data files
LGWR    | Log Writer       | Writes redo log buffer entries to the online redo log files
CKPT    | Checkpoint       | Updates control files and data file headers at checkpoints
SMON    | System Monitor   | Instance recovery, coalescing free space, cleaning temporary segments
PMON    | Process Monitor  | Cleans up failed user processes, releases locks and resources
RECO    | Recoverer        | Resolves distributed transaction failures

Optional Background Processes
Process | Name                          | When Started
ARCn    | Archiver                      | ARCHIVELOG mode enabled
MMON    | Manageability Monitor         | AWR snapshots and alerts
MMAN    | Memory Manager                | Automatic memory management enabled
CJQ0    | Job Queue Coordinator         | DBMS_SCHEDULER jobs exist
SMCO    | Space Management Coordinator  | Automatic space management

-- View active background processes
SELECT pname, description
FROM v$process
WHERE pname IS NOT NULL
ORDER BY pname;

LGWR Critical Path
LGWR performance is critical for commit latency. Every COMMIT must wait for LGWR to write redo entries to disk. Use fast storage (NVMe, SSD) for redo logs and consider redo log file placement carefully.

5. Physical Storage Structures

Data Files
Data files contain the actual database data including tables, indexes, and other segments. Each data file belongs to exactly one tablespace and stores data in Oracle blocks.
-- View data files and their tablespaces
SELECT tablespace_name, file_name,
       ROUND(bytes/1024/1024/1024, 2) size_gb,
       autoextensible, status
FROM dba_data_files
ORDER BY tablespace_name;

Control Files
Control files are critical metadata files containing database structure information, checkpoint data, and RMAN backup metadata. Oracle recommends multiplexing control files across different storage locations.

-- View control file locations
SELECT name, status, block_size, file_size_blks
FROM v$controlfile;

Redo Log Files
Online redo logs record all changes made to the database. Oracle uses a circular writing mechanism with multiple redo log groups for high availability.

Autonomous Health Framework (AHF) 

Oracle Autonomous Health Framework (AHF) is the next generation of an all-in-one solution whose tools work together autonomously 24×7 to keep database systems healthy and running, while minimizing human reaction time. It builds on existing components such as ORAchk, TFA and many more.

AHF ships with new releases of the database, but you should always download the latest version. The following My Oracle Support note gives an overview of the AHF product, as well as download links and basic instructions:

Autonomous Health Framework (AHF) – Including TFA and ORAchk/EXAChk (Doc ID 2550798.1)

Advantages:
- User-friendly, real-time health monitoring, fault detection and diagnosis via a single interface
- Secure consolidation of distributed diagnostic collections
- Continuous availability
- Machine learning-driven, autonomous degradation detection, reducing overhead for both the customer and Oracle Support
- TFA is still used for diagnostic collection and management, and ORAchk/EXAchk for compliance checks
- ORAchk/EXAchk now use the TFA secure socket and TFA scheduler for automatic checks (less overhead)

Pre-requisites:
Before you begin the installation there are some pre-requisites to installing Oracle AHF and running ORAchk. The key pre-requisites are outlined below.

Linux
- Oracle AHF should be installed as root to obtain the fullest capabilities. If you are unable to install as root, Oracle AHF should be installed as the Oracle Home user.
- Oracle AHF should be installed to a filesystem with at least 5 GB of free disk space.
- Perl version 5.10 or later is required to install Oracle AHF.

Windows
- Oracle AHF should be installed as a user with local administrative privileges.
- Oracle AHF should be installed to a disk with at least 5 GB of free disk space.
- Perl version 5.10 or later is required to install Oracle AHF (note that a later version of Perl is usually found in the %ORACLE_HOME%\perl directory).


Install Oracle Autonomous Health Framework (AHF) (as root user)
Autonomous Health Framework (AHF) can be installed as the "root" user on the server, which provides the most functionality and allows it to run proactively as a daemon. In this example we perform an installation as the root user. Unzip the software and run the ahf_setup command, then answer the questions when prompted. The following is an example of the root installation.

[root@west02 oracle]# ./ahf_setup
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_211400_4701_2021_12_29-10_46_20.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 21.1.4 Build Date: 202106281226
Default AHF Location : /opt/oracle.ahf
Do you want to install AHF at [/opt/oracle.ahf] ? [Y]|N : y
AHF Location : /opt/oracle.ahf
AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.
Choose Data Directory from below options :
1. /u01/app/oracle_base [Free Space : 73565 MB]
2. Enter a different Location
Choose Option [1 - 2] : 1
AHF Data Directory : /u01/app/oracle_base/oracle.ahf/data
Do you want to add AHF Notification Email IDs ? [Y]|N : y
Enter Email IDs separated by space : xyz@gmail.com
AHF will also be installed/upgraded on these Cluster Nodes :
1. west01
The AHF Location and AHF Data Directory must exist on the above nodes
AHF Location : /opt/oracle.ahf
AHF Data Directory : /u01/app/oracle_base/oracle.ahf/data
Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : y
Extracting AHF to /opt/oracle.ahf
Configuring TFA Services
Discovering Nodes and Oracle Resources
Not generating certificates as GI discovered
Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

| Host   | Status of TFA | PID  | Port | Version    | Build ID             |
| west02 | RUNNING       | 6134 | 5000 | 21.1.4.0.0 | 21140020210628122659 |

Running TFA Inventory...
Adding default users to TFA Access list...

Summary of AHF Configuration
| Parameter       | Value                                            |
| AHF Location    | /opt/oracle.ahf                                  |
| TFA Location    | /opt/oracle.ahf/tfa                              |
| Orachk Location | /opt/oracle.ahf/orachk                           |
| Data Directory  | /u01/app/oracle_base/oracle.ahf/data             |
| Repository      | /u01/app/oracle_base/oracle.ahf/data/repository  |
| Diag Directory  | /u01/app/oracle_base/oracle.ahf/data/west02/diag |

Starting orachk scheduler from AHF ...
AHF install completed on west02
Installing AHF on Remote Nodes :
AHF will be installed on west01, Please wait.
Installing AHF on west01 :
[west01] Copying AHF Installer
[west01] Running AHF Installer
AHF binaries are available in /opt/oracle.ahf/bin
AHF is successfully installed
Moving /tmp/ahf_install_211400_4701_2021_12_29-10_46_20.log to /u01/app/oracle_base/oracle.ahf/data/west02/diag/ahf/

[oracle@west02 ~]$ service oracle-tfa.service status
Redirecting to /bin/systemctl status oracle-tfa.service
● oracle-tfa.service – Oracle Trace File Analyzer
   Loaded: loaded (/etc/systemd/system/oracle-tfa.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-12-29 10:47:54 IST; 43min ago
 Main PID: 5978 (init.tfa)
    Tasks: 70
   CGroup: /system.slice/oracle-tfa.service
           ├─ 2034 /bin/sleep 30
           ├─ 5978 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
           ├─14911 /opt/oracle.ahf/jre/bin/java -server -Xms64m -Xmx128m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/u01/app/oracle_base/oracle.ahf/data/west02/diag/tfa oracl…
           └─15080 /opt/oracle.ahf/jre/bin/java -server -Xms64m -Xmx128m -XX:HeapDumpPath=/u01/app/oracle_base/oracle.ahf/data/west02/diag/tfa -DtfaHome=/opt/oracle.ahf/tfa -DcrsHome=/u01/app/oracle/19c/…

[oracle@west01 ~]$ service oracle-tfa.service status
Redirecting to /bin/systemctl status oracle-tfa.service
● oracle-tfa.service – Oracle Trace File Analyzer
   Loaded: loaded (/etc/systemd/system/oracle-tfa.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-12-29 10:49:12 IST; 43min ago
 Main PID: 3665 (init.tfa)
    Tasks: 70
   CGroup: /system.slice/oracle-tfa.service
           ├─  602 /bin/sleep 30
           ├─ 3665 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null
           ├─11114 /opt/oracle.ahf/jre/bin/java -server -Xms64m -Xmx128m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDumpPath=/u01/app/oracle_base/oracle.ahf/data/west01/diag/tfa oracl…
           └─11294 /opt/oracle.ahf/jre/bin/java -server -Xms64m -Xmx128m -XX:HeapDumpPath=/u01/app/oracle_base/oracle.ahf/data/west01/diag/tfa -DtfaHome=/opt/oracle.ahf/tfa -DcrsHome=/u01/app/oracle/19c/…

[oracle@west01 ~]$ /opt/oracle.ahf/bin/tfactl status
WARNING – AHF Software is older than 180 days. Please consider upgrading AHF to the latest version using ahfctl upgrade.

| Host   | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
| west01 | RUNNING       | 11114 | 5000 | 21.1.4.0.0 | 21140020210628122659 | COMPLETE         |
| west02 | RUNNING       | 14911 | 5000 | 21.1.4.0.0 | 21140020210628122659 | COMPLETE         |

Upgrade :

[root@west02 oracle]# ./ahf_setup
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_214000_23474_2021_12_29-12_11_10.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 21.4.0 Build Date: 202112200745
AHF is already installed at /opt/oracle.ahf
Installed AHF Version: 21.1.4 Build Date: 202106281226
Do you want to upgrade AHF [Y]|N : Y
AHF will also be installed/upgraded on these Cluster Nodes :
1. west01
The AHF Location and AHF Data Directory must exist on the above nodes
AHF Location : /opt/oracle.ahf
AHF Data Directory : /u01/app/oracle_base/oracle.ahf/data
Do you want to install/upgrade AHF on Cluster Nodes ? [Y]|N : Y
Upgrading /opt/oracle.ahf
Shutting down AHF Services
Stopped OSWatcher
Nothing to do !
Shutting down TFA
Removed symlink /etc/systemd/system/multi-user.target.wants/oracle-tfa.service.
Removed symlink /etc/systemd/system/graphical.target.wants/oracle-tfa.service.
Successfully shutdown TFA..
/usr/bin/checkmodule: loading policy configuration from inittfa-policy.te
/usr/bin/checkmodule: policy configuration loaded
/usr/bin/checkmodule: writing binary representation (version 19) to inittfa-policy.mod
Starting AHF Services
Starting TFA..
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Waiting up to 100 seconds for TFA to be started.. . . . . .
Successfully started TFA Process.. . . . . .
TFA Started and listening for commands
No new directories were added to TFA
Directory /u01/app/oracle_base/crsdata/west02/trace/chad was already added to TFA Directories.
INFO: Starting orachk scheduler in background. Details for the process can be found at /u01/app/oracle_base/oracle.ahf/data/west02/diag/orachk/compliance_start_291221_121254.log
AHF upgrade completed on west02
Upgrading AHF on Remote Nodes :
AHF will be installed on west01, Please wait.
Upgrading AHF on west01 :
[west01] Copying AHF Installer
[west01] Running AHF Installer
Do you want AHF to store your My Oracle Support Credentials for Automatic Upload ? Y|[N] : n
AHF is successfully upgraded to latest version
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
Moving /tmp/ahf_install_214000_23474_2021_12_29-12_11_10.log to /u01/app/oracle_base/oracle.ahf/data/west02/diag/ahf/

Uninstall :

[root@west02 oracle]# /opt/oracle.ahf/bin/tfactl uninstall -deleterepo -silent
Starting AHF Uninstall
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
TFA-00002 Oracle Trace File Analyzer (TFA) is not running
AHF will be uninstalled on: west02 west01
west02 : Checking for ssh equivalency in west01
west01 is configured for ssh user equivalency for root user
Stopping AHF service on local node west02...
Stopping TFA Support Tools...
Removed symlink

ServiceNow JSON Parsing: Processing REST API Responses 

Use Case: When an Incident is created in Instance A, the caller information is sent to Instance B. Instance B fetches all incidents related to that caller and sends the incident details back to Instance A. Instance A parses the response and stores the incident data in a custom table.

Step 1: Create REST Message
- Create a REST Message named AddCallerDetailsToCustomTable. Purpose: to communicate from Instance A to Instance B.
- The endpoint is the target ServiceNow instance (Instance B) where the request will be sent.
- Selected Authentication type: Basic, using an existing Basic Auth Profile (dev224187). This profile contains the username and password used to authenticate REST calls to Instance B.

Step 2: Create HTTP Method
- Selected HTTP Method = POST, because we are sending data (the caller) to the target instance.
- Provided the Scripted REST API endpoint of Instance B. This endpoint points to the Scripted REST API created in Instance B.
- Added the required headers to handle JSON data: Accept: application/json and Content-Type: application/json. This ensures the request and response are in JSON format.

Step 3: Create Scripted REST Service (Instance B)
Created a Scripted REST Service named AddCallerToCustomTable. Purpose: to receive caller-related incident data from another ServiceNow instance.

Step 4: Create Scripted REST Resource
Created a Scripted REST Resource under the API AddCallerToCustomTable, with resource name AddCallerToCustomTable and HTTP Method POST. This allows the source instance to send POST requests to this endpoint.

1. Accept Incoming Request Payload
The resource accepts a JSON request body; the body contains the caller information sent from Instance A.

(function process(/*RESTAPIRequest*/ request, /*RESTAPIResponse*/ response) {
    var requestBody = request.body.data;
    var user = requestBody.caller;
    var incidents = [];

2. Query Incident Table
Initialize a GlideRecord on the incident table and filter incidents based on the received caller_id:

    var gr = new GlideRecord('incident');
    gr.addQuery('caller_id', user);
    gr.query();

3. Build Response Payload
Loop through all matching incident records, collect the required details (incident number and short description), and store them in an array:

    while (gr.next()) {
        incidents.push({
            number: gr.getValue('number'),
            short_description: gr.getValue('short_description')
        });
    }

4. Send JSON Response
Return the incident data as a JSON response; this response is sent back to Instance A.

    response.setBody({'incidents': incidents});
})(request, response);

Step 5: Create Business Rule (Instance A)
Navigate to System Definition > Business Rules and click New.
Table: Incident
When: after
Insert: Checked

Get Caller ID
Fetches the caller_id sys_id from the incident; this value is sent to the other instance's API.

(function executeRule(current, previous /*null when async*/ ) {
    var user = current.caller_id.getValue();

Create Payload
Build a JSON payload containing the caller; it will be sent in the REST request body.

    var payload = {
        caller: user
    };

Prepare REST Message
Calls the named REST Message: AddCallerDetailsToCustomTable is the REST Message and AddCallerToCustomTable is the HTTP Method.

    var restMessage = new sn_ws.RESTMessageV2('AddCallerDetailsToCustomTable', 'AddCallerToCustomTable');

Attach Payload
Converts the payload to JSON and sets it as the request body.

    restMessage.setRequestBody(JSON.stringify(payload));

Execute REST API Call
Sends the request to the other ServiceNow instance.

    try {
        var response = restMessage.execute();

Convert JSON String to JavaScript Object
The REST API response body is always a string. JSON.parse() converts the JSON string into a JavaScript object, which allows field-level access using dot notation.

        var responseBody = JSON.parse(response.getBody());
        gs.log("API Response: " + JSON.stringify(responseBody));

Read Incident Data from Response
Extract the incident list from the API response:
responseBody → full parsed object
result → main response wrapper
incidents → array of incident objects

Structure:
responseBody
└── result
     └── incidents [ array ]

        var incidents = responseBody.result.incidents;

Insert Data into Custom Table
Loop through each incident returned and create a new record in the custom table.

        for (var i = 0; i < incidents.length; i++) {
            var gr = new GlideRecord('u_add_caller_details');
            gr.initialize();

Accessing Fields Using Dot Notation
incidents[i].number → fetches the incident number from the API response; gr.u_number is the custom table field.
incidents[i].short_description → fetches the short description; gr.u_short_description is the custom table field.
incidents[i] gives one incident; .number and .short_description fetch its values.

            gr.u_number = incidents[i].number;
            gr.u_short_description = incidents[i].short_description;

Insert the record into the custom table:

            gr.insert();
        }

Catch REST API errors and log the error message to the system logs:

    } catch (ex) {
        gs.error("REST API call failed: " + ex.message);
    }
})(current, previous);

Instance A:
This incident creation is the trigger point for the REST API call, JSON parsing, and custom table insertion.

Instance B:
All incidents available for the caller "Abel Tuter" in Instance B. These are the incidents that the Scripted REST API in Instance B queries and returns back to Instance A in the API response.

Instance A: REST API Test Result
The REST Message executed successfully and the API returned data in JSON format. The response contains a result object and an incidents array; each incident has a number and a short_description.

API Response: {"result":{"incidents":[{"number":"INC0010488","short_description":"0.0"},{"number":"INC0010560","short_description":"0.0"},{"number":"INC0010232","short_description":"0.0"},{"number":"INC0010487","short_description":"0.0"},{"number":"INC0010233","short_description":"0.0"},{"number":"INC0010355","short_description":"0.0"},{"number":"INC0010356","short_description":"0.0"},{"number":"INC0010204","short_description":"0.0"},{"number":"INC0010359","short_description":"0.0"},{"number":"INC0010195","short_description":"0.0"},{"number":"INC0010292","short_description":"0.0"},{"number":"INC0010187","short_description":"0.0"},{"number":"INC0010357","short_description":"0.0"},{"number":"INC0010180","short_description":"0.0"},{"number":"INC0010181","short_description":"0.0"},{"number":"INC0010357","short_description":"0.0"}]}}

Instance A: Custom Table (u_add_caller_details)
The data shown here is not manually created; it is automatically inserted by the Business Rule in Instance A.

End-to-End Flow
1. Incident created in Instance A.
2. Business Rule runs after insert.
3. Caller ID sent to Instance B via REST API.
4. Scripted REST API in Instance B queries all incidents for Abel Tuter.
5. Instance B returns the incident data as JSON.
6. Instance A parses the JSON response, loops through the incidents, and inserts each incident into u_add_caller_details.

Incident Synchronization Between ServiceNow Instances Using Flow Designer and Custom action 

Use Case: When an incident is created in the source ServiceNow instance, a Flow Designer custom action triggers an outbound REST API call to create the same incident in a target instance. The target instance returns a JSON response, which is parsed and stored back in the source incident for confirmation and traceability.

Step 1: Created a REST Message
- Navigated to System Web Services → Outbound → REST Message
- Created a REST Message named Sample
- Set the target instance URL as the endpoint
- Configured Basic Authentication using a REST credential profile

Step 2: Configured HTTP Method
- Added an HTTP Method named Test and selected the POST method
- Set the endpoint to https://myinstance.service-now.com/api/now/table/incident
- Added HTTP headers: Accept: application/json and Content-Type: application/json
- Defined the JSON request body using variables for the short description, caller and id

Content:
{
    "sys_id": "${id}",
    "caller_id": "${cd}",
    "short_description": "${sd}"
}

Step 3: Created a Flow
- Created a Flow named Create Incident Through Integration
- Configured the trigger as Incident Created or Updated
- Selected the Incident table as the trigger source

Step 4: Created a Custom Action
Added a custom Action to the flow and defined input variables:
- sd → Short Description
- cd → Caller (Sys ID)
- id → Incident Sys ID

Step 5: Mapped Flow Data to Action Inputs
- Mapped Incident Short Description to sd
- Mapped Incident Caller to cd
- Mapped Incident Sys ID to id

Step 6: Added Script Step in Custom Action
Used RESTMessageV2 in the script, called the configured REST Message and HTTP Method, passed the input variables to the REST message parameters, then executed the REST call and logged the response.

(function execute(inputs, outputs) {

This is a Flow Designer / Action script: inputs holds the values passed from the Flow (incident sys_id, short description, caller) and outputs holds the values returned back to the Flow.

    try {

The try block prevents the integration from failing silently; any runtime error goes to the catch block.

        var r = new sn_ws.RESTMessageV2('Sample', 'Test');

Calls the outbound REST Message: Sample is the REST Message record and Test is the HTTP method. This REST Message points to the target instance.

        r.setStringParameterNoEscape('id', inputs.id);
        r.setStringParameterNoEscape('sd', inputs.shortDescription);
        r.setStringParameterNoEscape('cd', inputs.callerId);

These parameters are sent to the target instance and used to create the incident there. NoEscape ensures special characters are not altered.

        var response = r.execute();

Sends the request to the target instance and waits for the response.

        var responseBody = response.getBody();
        var httpStatus = response.getStatusCode();

responseBody is the JSON returned by the target instance; httpStatus is the HTTP code (200 or 201 means success).

        gs.info('Response Body: ' + responseBody);
        gs.info('HTTP Status: ' + httpStatus);

Logs the response for debugging.

        var parsedResponse = JSON.parse(responseBody);
        var result = parsedResponse.result;

Converts the JSON string into a JavaScript object; all incident details are inside result.

        var targetIncidentNumber = result.number;
        var shortDescription     = result.short_description;
        var priority             = result.priority;
        var category             = result.category;
        var subcategory          = result.subcategory;
        var state                = result.state;
        var callerSysId          = result.caller_id.value;
Reads the individual values returned by the target instance; these fields confirm the incident was created successfully.

        var grIncident = new GlideRecord('incident');
        if (grIncident.get(inputs.id)) {

Opens the source instance incident, using inputs.id (the source incident sys_id).

            grIncident.description =
                "Incident successfully created in Target Instance\n\n" +
                "Target Incident Number: " + targetIncidentNumber + "\n" +
                "Caller Sys ID: " + callerSysId + "\n" +
                "Short Description: " + shortDescription + "\n" +
                "Priority: " + priority + "\n" +
                "Category: " + category + "\n" +
                "Subcategory: " + subcategory + "\n" +
                "State: " + state;

            grIncident.update();
        }

Stores the target incident details on the source record, which helps with traceability and auditing.

    } catch (ex) {
        gs.error('Integration Error: ' + ex.message);
    }

})(inputs, outputs);

Logs an error if the REST call or the parsing fails.

Step 7: Executed Integration via Flow
- When an Incident is created in the source instance, the Flow triggers automatically
- The custom action runs and sends data to the target instance
- The REST API creates a new Incident in the target instance

Source Instance: Before Integration Trigger
This is the source instance incident. At this point the incident exists only in the source and the REST integration has not yet written back any data. This incident is the trigger for the Flow Designer custom action.

Target Instance:
This incident was not manually created. It was created by the inbound REST API on the target instance, triggered from the source instance. The field values match what was sent in the REST request.

Source Instance: After Integration Execution
The Description field is now populated automatically: the REST call executed successfully, the target incident was created, the JSON response was parsed correctly, and the source incident was updated programmatically.
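The sketch above writes the confirmation only into the source incident's Description. If the Flow itself also needs the values for later steps, the custom action can declare output variables and populate them at the end of the try block. A minimal, hedged addition, assuming an action output named target_number has been defined on the action:

        // Inside the try block, after parsing the response:
        // hand the target incident number back to the Flow as an action output
        outputs.target_number = result.number;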

Assignment Rule

An Assignment Rule is a server-side rule that automatically assigns a task to the most appropriate user or user group based on predefined conditions. It runs after a record is inserted or updated, evaluates field values, and determines who should handle the task.

Steps to Create an Assignment Rule
To create an Assignment Rule, follow the steps below:
- Open the Navigation Panel in ServiceNow.
- Navigate to System Policy → Assignment.
- Click on Assignment to proceed. Once selected, the Assignment Rule form will be displayed, where you can define conditions and configure how tasks are automatically assigned to the appropriate user or user group. This structured approach helps streamline task distribution and ensures efficient workload management.
- Enter the Assignment Rule name and specify the conditions that determine which user or user group the task should be assigned to.
- Navigate to the Assign To tab and choose the appropriate User or User Group responsible for handling the task. Alternatively, you can define the assignment in the Script tab instead of the Assign To tab, allowing for more flexible and advanced logic (a minimal example script appears at the end of this section).
- Once the configuration is complete, click Submit to save the Assignment Rule.
- Create an Incident record and set the Short Description field to include the word "Demo" to trigger the Assignment Rule. Click Save, and once the record is created, open the newly saved record.

The incident is automatically assigned to Abraham Lincoln, confirming that the Assignment Rule is working correctly. Refer to the snapshot below.

Difference between Data Lookup Rule and Assignment Rule:

Data Lookup Rule                                                        | Assignment Rule
Works on the client side                                                | Works on the server side
Executes before insert/update                                           | Executes after insert/update
Used to populate multiple fields based on a combination of field values | Used to assign a task to a User or User Group
Values are fetched from a separate lookup table                         | No separate table is required
Mainly used for auto-filling data (category, priority, SLA, etc.)       | Mainly used for auto-assignment of records
Improves data consistency and accuracy                                  | Improves workload distribution and efficiency
Cannot assign users/groups                                              | Specifically designed for user/group assignment

Problem Statement
Scenario: Incident Assignment Based on Short Description

Step 1: Create Groups and Assign Roles
- Create a group named LTI Group and assign the ITIL role.
- Create another group named L&T Group and assign the ITIL role.

Step 2: Create Users and Add Them to LTI Group
Create the following users and add all of them to the LTI Group: LTI1, LTI2, LTI3, LTI4.

Step 3: Configure Incident Assignment Logic
Incidents should be assigned based on the value entered in the Short Description field, as shown below:

Short Description       | Assigned To (User)
Finance                 | LTI4
Leave                   | LTI3
Training                | LTI1
Training and ServiceNow | LTI2

Default Assignment Condition
If the Short Description does not match any of the above values, the incident should be assigned to the L&T Group and the Assigned To field should remain blank.

Solution
Create the LTI Group and assign it the ITIL role to enable incident management access. After creating the group, add existing users or create new users and associate them with the group. Similarly, create the L&T Group and assign it the ITIL role for incident handling.
Create an Assignment Rule with the following configuration:
Condition: Short Description contains "Finance"
Assignment: automatically route the task to the LTI Group and the LTI4 user
This ensures seamless task allocation and efficient handling of finance-related requests.

Create an Assignment Rule with the following setup:
Condition: Short Description contains "Leave"
Assignment: automatically assign the task to the LTI Group and the LTI3 user

Create an Assignment Rule with the following setup:
Condition: Short Description contains "Training"
Assignment: automatically assign the task to the LTI Group and the LTI1 user

Create an Assignment Rule with the following setup:
Condition: Short Description contains both "Training" and "ServiceNow"
Assignment: automatically assign the task to the LTI Group and the LTI2 user

Create an Assignment Rule with the following setup:
Condition: Short Description does not contain "Training", "ServiceNow", "Finance", or "Leave"
Assignment: automatically assign the task to the L&T Group

Create an incident with a Short Description containing "Finance"; it will automatically be assigned to the LTI Group and the LTI4 user. Once the incident is saved, it is successfully logged and ready for action.

Create an incident with a Short Description containing "Leave"; it will automatically be assigned to the LTI Group and the LTI3 user. Once the incident is saved, it is recorded and queued for action by the assigned group and user.

Once an incident with "Training" in the Short Description is created, it is automatically routed to the LTI Group and the LTI1 user for action. After saving the incident, you get a confirmation that it has been created and assigned.

Once an incident with "Training" and "ServiceNow" in the Short Description is created, it is automatically routed to the LTI Group and the LTI2 user for action.

If an incident's Short Description does not contain Finance, Leave, Training, or ServiceNow, it is automatically routed to the L&T Group: only the Assignment Group (L&T Group) is populated, while the Assigned To field remains empty for manual assignment, as defined by the assignment rule.
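The default-case rule can also be expressed in the Script tab mentioned earlier instead of the Assign To tab. A minimal sketch, assuming the group record exists with the exact name "L&T Group":

// Script tab of the Assignment Rule; runs when the rule's condition matches
var grp = new GlideRecord('sys_user_group');
if (grp.get('name', 'L&T Group')) {
    current.assignment_group = grp.sys_id;   // route the incident to the L&T Group
}
// current.assigned_to is intentionally left empty so the incident stays
// unassigned for manual triage, matching the requirement above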

Import data from MySQL into Service-now

MID Server
To import data from a MySQL server into ServiceNow we can use a MID Server and a Data Source. A MID Server is required when we need to access on-premises data (i.e. when the machine or server is inside a Virtual Private Network).

The MID Server is a Java application through which ServiceNow can communicate with external applications running in the client's VPN. The MID Server needs to be installed in the client's network to get or send data to the client tools that are inside their VPN. We use a MID Server here because the client's MySQL server is in a VPN, so the MID Server is installed in the client VPN and the same MID Server is referenced in the data source.

ECC Queue – the MID Server talks to ServiceNow using the ECC (External Communication Channel) queue.

MID Server Script Include: MID Server Script Includes are scripts that execute on the MID Server. For example:
1. If we need to connect with a Jira tool which is in the VPN to create a ticket in Jira and update the response we get from Jira into a ServiceNow ticket.
2. We need to create a MID Server Script Include, which will run on the MID Server, call the Jira web service to create a ticket, take the response, and send that response back to the ServiceNow table as an update by calling a ServiceNow web service.
3. After creating the MID Server Script Include, we need to create a record in the ECC queue against the specific MID Server so that the script will execute on that MID Server.

MID Server Installation Steps
- Download the MID Server from the ServiceNow instance via the left navigation menu.
- Create a folder ("MeghanaMidServer") in the C drive and extract the downloaded ZIP (MID Server setup) into this folder.
- Open the "config.xml" file and configure the ServiceNow instance details along with a newly created user with the "mid_server" role, and set the MID Server name to "MeghanaMidServer" (this name will appear in ServiceNow once the setup is done and the MID Server is started).
- Once all configurations are done, start the MID Server by running start.bat.
- Check whether the MID Server appears with status "UP": go to the ServiceNow instance -> MID Server -> Server = MeghanaMidServer, check its status and validate it (validation means the ServiceNow instance version and the MID Server version are the same).

Datasource
Create a New Data Source:
- Name – name of the data source
- Import set table label – provide any label for the import set
- Import set table name – a new import set table will be created with this name
- Type – select the type of data source (options: File, JDBC, LDAP, OIDC); here JDBC
- Use MID Server – specify the name of the MID Server through which we need to connect from ServiceNow: MeghanaMidServer

Install the XAMPP server on your machine and start the MySQL server; it will list the port number.

- Format – specify the data source format (options: MySQL, Oracle, SQLServer); here MySQL
- Database name – create a database in the database server (created here in the MySQL server) and specify that database name
- Database port – use the database port number shown on the XAMPP control panel against MySQL
- Username / Password – specify the MySQL server user name and password

Open the link http://localhost/phpmyadmin/ – it will show you the local host server address; use that address in the next field.
- Server – the database server/host address (as shown in phpMyAdmin)
- Query – how to query the table: either All Rows from Table, or Specific SQL (select required fields or add filter conditions)
- Table name – this table should be present in the database

Import Set
Click on Load All Records – it will load all the records from the specified table (book) into the import set (u_myimportset). Verify the data in the import set from the left navigation.

Transform Map
To transform import set data into a table, we need to create the target table first. Create a new table (book) from the left navigation and add the required fields (id, title, author) that are in the MySQL db table. From the left navigation go to Transform Map:
- Name – specify a name for the new transform map
- Source table – select the import set (MyDataSource [u_myimportset])
- Target table – select the table (book) into which we need to insert data

Field Mapping – map the import set fields to the actual table fields. Click on Transform and check that the data is loaded into the book table. Here we are manually transforming import set data into the table; to automate this task we can use a scheduled job.

Schedule Job
Go to your data source from the left navigation -> Configure -> Related list -> add Scheduled Data Imports. By selecting Run = Daily, this will transform data daily from the data source into the import set table.

Source Script
Open the field map, select any field and click on the Use source script checkbox; this allows you to run a script on this field before the data is inserted into the target table. In the following example script, we add "PR" as a prefix to the title (a minimal sketch follows below). Execute the schedule job to load newly added data from the MySQL table into the target table and check that the prefix PR has been added to the title.
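When "Use source script" is checked, ServiceNow expects the script to return the value through the answer variable, with the current import set row available as source. A minimal sketch of the prefix script described above, assuming the import set column holding the title is named u_title:

answer = (function transformEntry(source) {
    // source = current row of the import set table (u_myimportset)
    // Prefix the incoming title with "PR" before it is written to the target field
    return 'PR' + source.u_title;
})(source);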

SLA

Service Level Agreement (SLA): SLA Priority Increase and Decrease

Service Level Management (SLM) enables you to monitor and manage the services offered by your organization. An SLA is a contract between the service provider and the customer that ensures response and resolution of tasks within a specified timeframe. It includes details about resolution times, breach levels and their corresponding penalties. If the organization is not able to fulfil the goal of the service in the time specified in the agreement, it is considered a breach of the agreement. A breach can cost a penalty or impact the image of the organization. Thus, SLAs help an organization measure the quality and efficiency of the tasks assigned to employees by tracking the progress of each task.

Response – the time taken to acknowledge the ticket. Example: the time required to give a first response to an incident, such as assigning the incident to a user.

Resolution – the actual time taken to resolve a ticket. Example: the time required to resolve the incident.

1. Retroactive start: the SLA clock starts from the date/time held in the "Set start to" field rather than from the moment the SLA attaches, so time that elapsed under the previous SLA is carried over.
2. Retroactive pause: this also takes the pause time of the previous SLA into account.

Requirement: Create SLAs for priorities P1, P2, P3, P4 such that when you increase the priority the retroactive start should be based on the updated field, and when you decrease the priority the retroactive start should be based on the created field.

Solution: Create a temp field of type Date/Time on the incident form layout. This temp field will store the created or updated date/time, depending on whether the priority was increased or decreased.

Create SLA definitions for priorities 1-4 of the incident table, i.e. P1, P2, P3, P4, each with a duration of 3 minutes. Keep Schedule as "No schedule" for all SLAs for now, and leave the other fields as default. The start condition should be based on the priority, and the cancel condition should be when the start condition is not met. Retroactive start and Retroactive pause should be true for all, and select your temp field in the "Set start to" field. The pause condition should be when the state is put On Hold, and the stop condition should be when the state is Resolved. Similarly, create the SLAs for the other priorities P2, P3, P4.

Now it's time to write the Business Rule which sets the temp field according to the priority change.
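A minimal sketch of that business rule, assuming the custom field is named u_temp and the rule is configured on the Incident table to run before update when Priority changes (field and rule names are placeholders for whatever was created above):

(function executeRule(current, previous /*null when async*/) {
    var newPri = parseInt(current.getValue('priority'), 10);
    var oldPri = parseInt(previous.getValue('priority'), 10);

    if (newPri < oldPri) {
        // Priority raised (e.g. 3 -> 1): start the new SLA from the moment of this update
        current.u_temp = new GlideDateTime();
    } else if (newPri > oldPri) {
        // Priority lowered (e.g. 1 -> 3): start the new SLA from the record's created date
        current.u_temp = current.sys_created_on;
    }
})(current, previous);

Because each SLA definition points its "Set start to" field at u_temp with Retroactive start enabled, the newly attached SLA picks up whichever timestamp this rule stored.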

Email Notification in Service-now

Email Notification: Email notifications are a type of triggered email, i.e. email that is sent in response to a specific user action or an event. Creating an email notification involves specifying when to send it, who receives it, what it contains, and whether it can be delivered in an email digest.

Types of Email Notifications:
Outbound Email Notifications: sending mail to users from the ServiceNow instance. ServiceNow uses SMTP to send the mail from the server.
Inbound Email Notifications: receiving mail from users in the ServiceNow instance; POP is used.

For email notifications, we have to configure some settings. Navigate to System Properties => Email Properties and the following list of properties will be displayed.

In Outbound Email Configuration, select "Email sending enabled" and enter an email address as shown in the above snapshot. In Inbound Email Configuration, select "Email receiving enabled" and at the bottom enter the trusted domains from which ServiceNow should receive mail. If we enter a star (*), email will be received from all domains. Check "Automatically create users..." to create users automatically when an email is received from an unknown user. See the snapshot below.

Outbound Notifications:
Create an outbound notification for the incident table when a record is inserted. Navigate to Configure => All => Notifications and click on the New button as below. The following form will be displayed. Enter the condition in the "When to send" section.

1. Send when: Record inserted or updated
Now select the user or group to whom the email is to be sent. The Users/Groups in field is from the selected table. Configure the email as follows and save the form.

Now create a new record in the incident table and save it. Since the short description contains "ashwini", as per the outbound notification we set, the email will automatically be sent to the mentioned user. The email engine processes this mail. First, this email goes into "Outbox" under System Mailboxes; in the Outbox the mail is in the Ready state. The email is then sent to the user and saved in Sent under System Mailboxes. Since the email details are in the Sent mailbox and its state is "Processed", the mail has been sent to the user mentioned as a recipient while creating the notification. For the recipient, check the notification as follows. The mail will look like the one below.

To check the sender, navigate to Email Properties => Email Accounts. If the user replies to this mail, it will be displayed in All => Inbound => Received. To check the received mails, navigate to System Mailboxes => Inbound => Received as below. The email engine of ServiceNow identifies the "Reply" by the Ref number of the email which was sent by ServiceNow to the user, as below.

2. Send when: a user-defined event is fired
For that, there should be a user-defined event. To create an event, navigate to the Event Registry as follows. (Fired by: to trigger this event we should write some code, which we can put in a business rule; write the name of that business rule here. Queue: whenever you trigger an event it should be part of the system queue.) Click on Submit.

To trigger this event we need to write a script (business rule or script include). So create a business rule that runs after updating an incident record if the short description contains "ashwini". To trigger the event we use eventQueue().
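A minimal sketch of such a business rule; the event name is a placeholder for whatever name was registered in the Event Registry, and the two parameters are example values the notification can later reference:

(function executeRule(current, previous /*null when async*/) {
    // After-update business rule on Incident
    // Rule condition: Short description contains "ashwini"
    // 'incident.ashwini.updated' is a placeholder event name from the Event Registry
    gs.eventQueue('incident.ashwini.updated', current, current.getValue('number'), gs.getUserName());
})(current, previous);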
Use the function eventQueue("your event name", "the current object", "parameter1", "parameter2"). Refer to the following screenshots for the business rule. Now change the "Send when" option to "Event is fired" and keep everything else as it is in the outbound notification, as below. Now update any incident and check the mail; the mail should be sent to the user as below. To send a notification containing the param values (passed in the business rule while triggering the event), access the param values in the outbound notification as below. Now again update any incident record and check the mail.

Mail Script: Email scripts allow for business rule-like scripting within an outbound email message. With mail scripts, you can dynamically change the email output of your system based on different criteria. Mail scripts allow you to perform simple tasks, such as displaying incident data, and complex ones, such as making advanced database queries. You can add a ${mail_script:script_name} embedded script tag to the body of the email notification or template, replacing script_name with the name of the script you created. This makes it easy to use the same scripts in multiple email notifications or templates.

Now, to add email scripts to the body of our mail, first we have to create mail scripts. To create a mail script, open the Notification Email Script module and click on New. We can write our code using template.print(); (check the code below in the "message HTML" of the Notification). To see the code for a mail script, navigate to System Notification => Notification Email Scripts; you will get the list of all mail scripts, so search for the required one as below. Now update an incident record and check the mail.

Inbound Actions: When the system receives a mail, it can be seen in the inbox / received mail. Inbound actions in ServiceNow are used to process incoming emails by creating or updating records within the platform. These actions are similar to business rules and use conditions and scripts to take action on a target table. When an email is received, the system checks for a watermark that associates it with a task and other conditions. If the conditions are met, the system takes the inbound email action configured. The system can take two types of actions: Record action, which sets a value for a field in the target table, and Email reply, which sends an email back to the source that triggered the action.

Navigate to System Policy => Inbound Actions and click on the New button. The following form will be displayed. Set the type to "New" and the Action type to "Record Action". The available types are:
New: An email that is not recognized as a reply or forward.
Reply: An email with a watermark, with an In-Reply-To email header, or whose