Oracle ASM Disk Creation and Management Guide
Complete Guide to Creating and Managing Disks in ASM
Technical Documentation | Oracle Database Storage Administration

Table of Contents

1. ASM Overview
2. Prerequisites and Planning
3. Disk Discovery
4. Disk Preparation
5. Creating ASM Disks
6. Adding Disks to Diskgroups
7. Verification Steps
8. Maintenance Operations
9. Best Practices

1. ASM Overview

Oracle Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager built into Oracle Database. ASM simplifies database storage administration by providing a vertical integration of the file system and volume manager specifically designed for Oracle Database files.

What is ASM?

- Volume Manager: Manages physical disks and provides logical volumes
- File System: Provides a cluster-aware file system for Oracle database files
- Storage Virtualization: Abstracts physical storage into logical disk groups
- Automatic Rebalancing: Distributes data evenly across available disks
- Redundancy Options: Supports normal, high, and external redundancy levels

Key Concepts

Component       Description                                        Purpose
-------------   ------------------------------------------------   -------------------------------
ASM Disk        Physical disk or disk partition managed by ASM     Basic storage unit
Disk Group      Logical collection of ASM disks                    Storage pool for database files
Failure Group   Set of disks that share a common resource          Redundancy and availability
ASM Instance    Special Oracle instance for managing ASM storage   Storage management and metadata

2. Prerequisites and Planning

System Requirements

- Oracle Grid Infrastructure installed and configured
- ASM instance running (+ASM)
- Raw devices, block devices, or NFS volumes available
- Appropriate permissions (root for disk preparation, grid for ASM operations)
- ASMLib or UDEV configured (for Linux environments)

Planning Considerations

- Disk Size: Plan for current and future storage needs
- Redundancy Level: Choose between External, Normal, or High redundancy
- Performance: Consider I/O patterns and throughput requirements
- Failure Groups: Design for hardware failure isolation
- Disk Group Strategy: Separate DATA, FRA, and REDO disk groups

Redundancy Levels Explained

Redundancy   Mirroring                    Min Disks   Usable Space   Use Case
----------   --------------------------   ---------   ------------   ---------------------------------
External     None (RAID/SAN handles it)   1           100%           Hardware RAID arrays
Normal       2-way mirroring              2           50%            Standard production (recommended)
High         3-way mirroring              3           33%            Mission-critical systems

3. Disk Discovery

Check Available Disks

# List all block devices
lsblk

# List all SCSI devices
lsscsi

# Check disk details
fdisk -l

# View disk partitions
cat /proc/partitions

# Check for existing ASM disks
ls -l /dev/oracleasm/disks/

# Check disk usage
df -h

Sample Output:

[root@server ~]# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0   50G  0 disk
├─sda1     8:1    0    1G  0 part /boot
└─sda2     8:2    0   49G  0 part /
sdb        8:16   0  100G  0 disk
sdc        8:32   0  100G  0 disk
sdd        8:48   0  100G  0 disk

Verify Disk Availability

# Check if disks are already in use
pvdisplay /dev/sdb
pvdisplay /dev/sdc
pvdisplay /dev/sdd
# Expected output for unused disks: "Failed to find physical volume"

# Check for existing partitions
parted /dev/sdb print
parted /dev/sdc print
parted /dev/sdd print
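The same checks can be scripted across all candidate disks before any partitioning is done. The loop below is a minimal sketch, assuming the candidates are /dev/sdb, /dev/sdc, and /dev/sdd as in the examples above; any disk that prints a WARNING line should be excluded until it is confirmed safe to reuse.

#!/bin/bash
# Sketch only: flag candidate disks that already carry LVM, filesystem, or partition data.
# Run as root; adjust the device list to match your environment.
for d in /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $d =="
    # LVM physical volume check (pvs exits 0 only if the device is a PV)
    pvs "$d" >/dev/null 2>&1 && echo "WARNING: $d is an LVM physical volume"
    # Filesystem/RAID signature check (blkid exits 0 only if a signature is found)
    blkid "$d" >/dev/null 2>&1 && echo "WARNING: $d has an existing signature"
    # Partition check (lsblk lists the disk itself plus one line per partition)
    [ "$(lsblk -n "$d" | wc -l)" -gt 1 ] && echo "WARNING: $d already has partitions"
done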
4. Disk Preparation

Create Partitions (If Needed)

Warning: Partitioning will destroy all data on the disk. Ensure backups are complete before proceeding.

# Interactive partitioning with fdisk
fdisk /dev/sdb

# Commands within fdisk:
# n - Create new partition
# p - Primary partition
# 1 - Partition number
# [Enter] - Accept default first sector
# [Enter] - Accept default last sector (use entire disk)
# w - Write changes and exit

# Repeat for other disks
fdisk /dev/sdc
fdisk /dev/sdd

Set Partition Type

# Within fdisk, set partition type for ASM
# t - Change partition type
# 1 - Partition number
# 8e - Linux LVM type (or fd for Linux RAID)

# Verify partition table
fdisk -l /dev/sdb
fdisk -l /dev/sdc
fdisk -l /dev/sdd

Configure Disk Permissions

# Set ownership to grid user (ASM owner)
chown grid:asmadmin /dev/sdb1
chown grid:asmadmin /dev/sdc1
chown grid:asmadmin /dev/sdd1

# Set appropriate permissions
chmod 660 /dev/sdb1
chmod 660 /dev/sdc1
chmod 660 /dev/sdd1

# Verify permissions
ls -l /dev/sd[bcd]1

5. Creating ASM Disks

Method 1: Using ASMLib (Recommended for Linux)

Configure ASMLib

# Initialize ASMLib (run once)
oracleasm configure -i

# Sample configuration:
# Default user to own the driver interface []: grid
# Default group to own the driver interface []: asmadmin
# Start Oracle ASM library driver on boot (y/n) [n]: y
# Scan for Oracle ASM disks on boot (y/n) [y]: y

# Load ASMLib module
oracleasm init

# Check ASMLib status
oracleasm status

Create ASM Disks

# Create ASM disk for DATA diskgroup
oracleasm createdisk DATA01 /dev/sdb1

# Create ASM disk for DATA diskgroup (additional)
oracleasm createdisk DATA02 /dev/sdc1

# Create ASM disk for FRA diskgroup
oracleasm createdisk FRA01 /dev/sdd1

# List all ASM disks
oracleasm listdisks

# Scan for ASM disks
oracleasm scandisks

# Query disk details
oracleasm querydisk -d DATA01
oracleasm querydisk -d DATA02
oracleasm querydisk -d FRA01

Sample Output:

[root@server ~]# oracleasm listdisks
DATA01
DATA02
FRA01

[root@server ~]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk

Method 2: Using UDEV Rules

# Create UDEV rules file
vi /etc/udev/rules.d/99-oracle-asmdevices.rules

# Add rules for each disk
KERNEL=="sdb1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdd1", OWNER="grid", GROUP="asmadmin", MODE="0660"

# Reload UDEV rules
udevadm control --reload-rules
udevadm trigger

# Verify permissions persist after reboot
ls -l /dev/sd[bcd]1

Set ASM Disk Discovery Path

# Connect as grid user
su - grid

# Set ASM_DISKSTRING parameter
sqlplus / as sysasm
SQL> ALTER SYSTEM SET ASM_DISKSTRING = '/dev/oracleasm/disks/*' SCOPE=BOTH;

Alternative Discovery Paths:

- ASMLib: /dev/oracleasm/disks/*
- Raw devices: /dev/raw/*
- Block devices: /dev/sd*
- Multipath: /dev/mapper/*
- NFS: /nfs_mount_point/*

6. Adding Disks to Diskgroups

Connect to ASM Instance

# Switch to grid user
su - grid

# Set ASM environment
export ORACLE_SID=+ASM
export ORACLE_HOME=/u01/app/19.0.0/grid

# Connect to ASM instance
sqlplus / as sysasm

# Or use asmcmd
asmcmd

Check Existing Diskgroups

SQL> SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;

NAME        STATE    TYPE     TOTAL_MB    FREE_MB
----------- -------- -------- ----------- ----------
DATA        MOUNTED  NORMAL       204800     150000
FRA         MOUNTED  NORMAL       102400      80000

Create New Diskgroup

-- Create DATA diskgroup with Normal redundancy
CREATE
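A minimal sketch of a complete CREATE DISKGROUP statement, assuming the ASMLib disks created earlier (DATA01 and DATA02) and two failure groups; the failure group names and compatibility attributes are illustrative and should be adapted to your environment:

-- Illustrative sketch: DATA diskgroup with Normal redundancy using the
-- ASMLib disk paths from the discovery string above
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/oracleasm/disks/DATA01' NAME DATA01
  FAILGROUP fg2 DISK '/dev/oracleasm/disks/DATA02' NAME DATA02
  ATTRIBUTE 'compatible.asm' = '19.0', 'compatible.rdbms' = '19.0';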

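Adding a disk to an existing diskgroup follows the same pattern and triggers an automatic rebalance. The statements below are a sketch, assuming a new ASMLib disk named DATA03 has already been created and is visible through the discovery string:

-- Illustrative sketch: add a newly discovered disk to the DATA diskgroup
-- and raise the rebalance power so data is redistributed more quickly
ALTER DISKGROUP DATA
  ADD DISK '/dev/oracleasm/disks/DATA03' NAME DATA03
  REBALANCE POWER 4;

-- Monitor the rebalance until no rows are returned
SELECT group_number, operation, state, power, est_minutes
FROM   v$asm_operation;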