Tuesday, July 26, 2022

CONFIGURE AND MONITOR FLASHBACK DATABASE WITH A RESTORE POINT




Below are some commands to create and monitor a Restore Point.


## Retention time in minutes; 1440 = 1 day

ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440;
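
To confirm the new retention target from SQL*Plus:

SQL> show parameter db_flashback_retention_target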


SQL> select open_mode from v$database;


OPEN_MODE

--------------------

READ WRITE


SQL> SELECT NAME,CDB,LOG_MODE,FLASHBACK_ON FROM V$DATABASE;

NAME CDB LOG_MODE FLASHBACK_ON

--------- --- ------------ ------------------

CDB1 YES ARCHIVELOG YES
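
If FLASHBACK_ON were NO, flashback logging can be enabled with the statement below (the database must be in ARCHIVELOG mode; from 11gR2 onward it can remain open):

SQL> ALTER DATABASE FLASHBACK ON;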


## Normal restore point

SQL> CREATE RESTORE POINT BEFORE_RP;


## Guaranteed restore point, independent of the configured retention target

SQL> CREATE RESTORE POINT BEFORE_RP GUARANTEE FLASHBACK DATABASE;
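
As a minimal usage sketch, rewinding the database to the restore point requires downtime: the database must be mounted (not open) and is then opened with RESETLOGS:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO RESTORE POINT BEFORE_RP;
SQL> ALTER DATABASE OPEN RESETLOGS;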


## Dropping a restore point

SQL> DROP RESTORE POINT BEFORE_RP;


## Query the current SCN

SQL> select current_scn,current_timestamp from v$database;


*****************

## Query restore points


set pagesize 200 linesize 200

col NAME format a30

col TIME  format a40

SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE/1024/1024 "Size MB" FROM V$RESTORE_POINT;


## Summary query to check flash recovery area usage and restore point consumption


set pagesize 200 linesize 200

col NAME format a30

SELECT max(NAME) NAME
     , max(LIMIT) LIMIT
     , max(USED) USED
     , max(Reclaimable) Reclaimable
     , max(Used_Percent) Used_Percent
     , max(Free_Percent) Free_Percent
     , max(NVL(restore_point,0)) Restore_Point
     , round(100.0*max(NVL(restore_point,0))/max(LIMIT),1) Restore_Point_Percent
FROM
(select name
      , round(SPACE_LIMIT/1024/1024) LIMIT
      , round(SPACE_USED/1024/1024) USED
      , round(SPACE_RECLAIMABLE/1024/1024) Reclaimable
      , round(100.0*space_used/space_limit,1) Used_Percent
      , round(CASE WHEN space_limit > 0 AND space_limit >= space_used AND space_used >= space_reclaimable
                   THEN 100.00 * (space_limit - (space_used - space_reclaimable))/space_limit
                   ELSE NULL END,1) Free_Percent
      , to_number(null) restore_point
 FROM V$RECOVERY_FILE_DEST
 UNION ALL
 SELECT '' name
      , to_number(null) limit
      , to_number(null) USED
      , to_number(null) Reclaimable
      , to_number(null) Used_Percent
      , to_number(null) Free_Percent
      , round(sum(STORAGE_SIZE)/1024/1024) restore_point
 FROM V$RESTORE_POINT)
/


Monday, July 18, 2022

Oracle ASM devices pointing to multipath devices and not SCSI paths

 

How do you make sure Oracle ASM devices point to multipath devices, and not to the underlying SCSI (sd) paths, when using ASMLib to manage ASM disks?


Environment

  • Red Hat Enterprise Linux (RHEL) 5, 6, 7, 8
  • device-mapper-multipath
  • Oracle ASM using ASMLib

Issue

  • Oracle application crashes when a single path in multipath fails. The application should be unaware of underlying path failures.
  • ASM crashed, and with it the Oracle DB.
  • Using Device Mapper Multipathing for an Oracle database and expecting Oracle LUNs to use multipath devices, not sd devices.
  • How to make sure Oracle ASM devices point to multipath devices, and not to SCSI (sd) paths, when using ASMLib to manage ASM disks?
  • I/Os to the SAN are not spread across all the multipath paths on an Oracle application server configured with oracleasm.

Resolution

ORACLEASM_SCANORDER should be configured to force the use of the multipath pseudo-device. Since ASM uses entries from /proc/partitions, a filter needs to be set to exclude the underlying paths.

  1. Edit /etc/sysconfig/oracleasm and add dm to the SCANORDER, and sd to SCANEXCLUDE as follows:

    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER="dm"
    
    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE="sd"
    
  2. This requires the oracleasm configuration to be updated:

    # oracleasm configure
    # oracleasm scandisks
    
  3. File /etc/sysconfig/oracleasm is soft-linked to /etc/sysconfig/oracleasm-_dev_oracleasm, which is the file used by OracleASM. Verify that the soft link exists.

    # ls -al /etc/sysconfig/oracleasm
    lrwxrwxrwx 1 root root 39 Feb 22 15:54 /etc/sysconfig/oracleasm -> /etc/sysconfig/oracleasm-_dev_oracleasm
    
  4. Changes to the oracleasm configuration file require a restart of the OracleASM service to take effect. This can be disruptive in a production environment.

    Note: It is recommended to schedule a reboot after setting SCANORDER and SCANEXCLUDE in /etc/sysconfig/oracleasm. Normally a system reboot is not required for oracleasm to start using the multipath devices. However, in (private) RHBZ#1683606 it has been noticed that, while oracleasm was still allowed to detect single paths (before the configuration change and the restart of oracleasm), it could change the value of counters used in device structures within the kernel (block_device.bd_holders) to invalid (negative) values and make the paths appear to be in use. If this happens, restarting only oracleasm will not clear the counters and the devices will continue to appear in use. In this case, multipath will still be unable to add the paths to the corresponding maps until the system is rebooted. Whenever multipath tries to add one of those paths to the corresponding map, messages similar to the following will appear in the system logs:

    device-mapper: table: 253:<dm_num>: multipath: error getting device
    device-mapper: ioctl: error adding target to table
    

    The problem can appear even when multipath is using the paths (i.e. the counters can be "silently" changed while the paths are in use by multipath). In such a scenario, the problem will surface during an outage, which causes the paths to be removed from the maps. When the paths return, multipath will fail to add them back to the corresponding maps.

    For this reason, it is recommended to schedule a reboot after setting SCANORDER and SCANEXCLUDE in /etc/sysconfig/oracleasm.

  5. Once restarted, verify that the multipath device is being used; a major number of 253 should be returned:

    # oracleasm querydisk -d <ASM_DISK_NAME>
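
    For illustration only, using the ASM_DATA1 disk from the diagnostic section below, a correctly mapped disk would report the device-mapper major/minor numbers instead of an sd device (hypothetical output, following the querydisk format shown later in this article):

    # oracleasm querydisk -d ASM_DATA1
    Disk "ASM_DATA1" is a valid ASM disk on device [253,24]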
    

Root Cause

When devices were added to the DISKGROUP, the underlying sd* device was used instead of the multipath pseudo device.

The dm-* devices are intended for internal use and are not persistent. However, creating the DISKGROUP writes metadata to the device, so ASM can identify the disk from its header regardless of which dm- number is assigned. The intention here is to force ASM to read from the multipath devices.

Diagnostic Steps

  • Query the disk to obtain the major:minor number of the disk being used by the disk group:

    # /etc/init.d/oracleasm querydisk -d ASM_DATA1
    Disk "ASM_Data1" is a valid ASM disk on device [8,16]
    
  • We can see that 8:16 is the underlying sdb path, not the ASM_DATA1 multipath pseudo device. Failover would not occur with this configuration.

    ASM_DATA1 (3600500000000000001) dm-24 IBM,2107900
    [size=100G][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=0][active]
    \_ 3:0:1:1 sdb 8:16   [failed][faulty]
    \_ 5:0:0:1 sdc 8:32   [active][ready] 
    \_ 5:0:1:1 sdd 8:48   [active][ready] 
    \_ 3:0:0:1 sde  8:64  [failed][faulty]
    
  • This can also be seen in /proc/partitions:

     8     0  142577664 sda
     8     1     514048 sda1
     8     2   24579450 sda2
     8     3   12289725 sda3
     8    16   52428800 sdb
     8    32   52428800 sdc
     8    48   52428800 sdd
     8    64   52428800 sde
    
  • The major:minor of the multipath ASM_DATA1 pseudo device would be 253:24, or dm-24. This is the device that should be used:

    253    24   52428800 dm-24
    
  • Note: To check if an oracleasm device is mapped correctly in a vmcore, please see How to map an oracleasm path in a vmcore.
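
As a complementary check, the dm-N device behind a multipath alias such as ASM_DATA1 can be confirmed with standard device-mapper tooling (a quick sketch):

    # multipath -ll ASM_DATA1      # shows the map, its dm-N device and the underlying sd paths
    # ls -l /dev/mapper/ASM_DATA1  # on recent RHEL releases, a symlink to ../dm-24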

Wednesday, January 19, 2022

Administer Oracle Grid Infrastructure Start, Stop, Other


 

A summary of the most relevant commands to start, stop, and check cluster resources.

# Status, configuration, and checks of cluster resources
crsctl check crs                             # CRS services status
crsctl check cluster -n rac2                 # Cluster services status on a node
crsctl check ctss                            # CTSS service status
crsctl config crs (requires root)            # OHAS autostart configuration
cat /etc/oracle/scls_scr/rac1/root/ohasdstr  # OHAS autostart configuration
crsctl stat res -t                           # Status of all cluster resources
crsctl stat res ora.rac.db -p                # Configuration of a resource
crsctl stat res ora.rac.db -f                # Full configuration of a resource
crsctl query css votedisk                    # Voting disks status
olsnodes -n -i -s -t                         # List cluster nodes
oifcfg getif                                 # Network interface information
ocrcheck                                     # OCR status (run as root to check for logical corruption)
ocrcheck -local (requires root)              # OLR (local registry) status with CRS/OHAS OFFLINE
ocrconfig -showbackup                        # OCR backup information
ocrconfig -add +TEST                         # Add an OCR copy on another disk group
cluvfy comp crs -n rac1                      # Verify CRS integrity
srvctl status database -d RAC                # Database status
srvctl status instance -d RAC -i RAC1        # Instance status
srvctl status service -d RAC                 # Status of a database's services
srvctl status nodeapps                       # Node applications (network services) status
srvctl status vip -n rac1                    # Virtual IP status
srvctl status listener -l LISTENER           # Listener status
srvctl status asm -n rac1                    # ASM instance status
srvctl status scan                           # SCAN IP status
srvctl status scan_listener                  # SCAN listener status
srvctl status server -n rac1                 # Node status
srvctl status diskgroup -g DGRAC             # Disk group status
srvctl config database -d RAC                # Database configuration
srvctl config service -d RAC                 # Services configuration
srvctl config nodeapps                       # Node applications (network services) configuration
srvctl config vip -n rac1                    # Virtual IP configuration
srvctl config asm -a                         # ASM instance configuration
srvctl config listener -l LISTENER           # Listener configuration
srvctl config scan                           # SCAN IP configuration
srvctl config scan_listener                  # SCAN listener configuration
 
# Start, stop, and relocate cluster resources
crsctl stop cluster                                    # Stop Clusterware (requires root)
crsctl start cluster                                   # Start Clusterware (requires root)
crsctl stop crs                                        # Stop OHAS (requires root) - includes stopping Clusterware
crsctl start crs                                       # Start OHAS (requires root) - includes starting Clusterware
crsctl disable crs                                     # Disable CRS autostart (requires root)
crsctl enable crs                                      # Enable CRS autostart (requires root)
srvctl stop database -d RAC -o immediate               # Stop database (IMMEDIATE shutdown)
srvctl start database -d RAC                           # Start database
srvctl stop instance -d RAC -i RAC1 -o immediate       # Stop database instance (IMMEDIATE shutdown)
srvctl start instance -d RAC -i RAC1                   # Start database instance
srvctl stop service -d RAC -s OLTP -n rac1             # Stop service
srvctl start service -d RAC -s OLTP                    # Start service
srvctl stop nodeapps -n rac1                           # Stop node applications (dependencies must be stopped first)
srvctl start nodeapps                                  # Start node applications
srvctl stop vip -n rac1                                # Stop virtual IP (dependencies must be stopped first)
srvctl start vip -n rac1                               # Start virtual IP
srvctl stop asm -n rac1 -o abort -f                    # Stop ASM instance (prefer "crsctl stop cluster")
srvctl start asm -n rac1                               # Start ASM instance (prefer "crsctl start cluster")
srvctl stop listener -l LISTENER                       # Stop listener
srvctl start listener -l LISTENER                      # Start listener
srvctl stop scan -i 1                                  # Stop SCAN IP (dependencies must be stopped first)
srvctl start scan -i 1                                 # Start SCAN IP
srvctl stop scan_listener -i 1                         # Stop SCAN listener
srvctl start scan_listener -i 1                        # Start SCAN listener
srvctl stop diskgroup -g TEST -n rac1,rac2             # Stop disk group (dependencies must be stopped first)
srvctl start diskgroup -g TEST -n rac1,rac2            # Start disk group
srvctl relocate service -d RAC -s OLTP -i RAC1 -t RAC2 # Relocate service (from node 1 to node 2)
srvctl relocate scan_listener -i 1 -n rac1             # Relocate SCAN listener to node rac1
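
As a usage sketch combining the commands above (RAC and rac1 are placeholders for the actual database name and node), an ordered stop/start of a single node for maintenance could look roughly like this:

srvctl stop instance -d RAC -i RAC1 -o immediate   # Stop the local database instance
crsctl stop crs                                    # As root: stop OHAS and the Clusterware stack on this node
# ... maintenance window ...
crsctl start crs                                   # As root: start OHAS, which brings the stack back up
srvctl start instance -d RAC -i RAC1               # Start the local database instance
crsctl stat res -t                                 # Verify that all resources are back ONLINE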