Tuesday, July 4, 2023

 


How to Create a Non-CDB Database on ExaCS



One of the differences of working on Exadata CC versus on-premises is that many things are done through the console and come with preset defaults. For example, I had to create some databases inside VM clusters where, by default, the console does not let you choose whether you want the Multitenant architecture. That is, even if you do not create a PDB, it still creates the database with a CDB.
For this case you have to work from the command line and run the commands as the root account.

In Oracle Exadata Cloud Service (ExaCS), you can create a non-CDB Oracle database using dbaascli. First you create an Oracle Database home with an available version, and then you create the database in that Oracle home. Here we will walk through the steps, with an example, for creating a 19c non-CDB database at patch level 19.19.
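If a suitable Oracle home does not exist yet, it can be created first. A minimal sketch, assuming the 19.19 image is available in the software library (the version string is illustrative; check the showImages output for the exact value):

[root@vmnodo01 ~]# dbaascli cswlib showImages
[root@vmnodo01 ~]# dbaascli dbHome create --version 19.19.0.0.0

With the home in place, the database itself is created as follows.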

[root@vmnodo01 ~]# dbaascli database create --dbName PRDB19 --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_1 --createAsCDB false
DBAAS CLI version 23.1.2.0.0
Executing command database create --dbName PRDB19 --oracleHome /u02/app/oracle/product/19.0.0.0/dbhome_1 --createAsCDB false
Job id: 2f83dde8-6b92-4885-a468-2153060760ec
Session log: /var/opt/oracle/log/PRDB19/database/create/dbaastools_2023-07-04_04-13-59-PM_215882.log
Enter SYS_PASSWORD:

Enter SYS_PASSWORD (reconfirmation):

Enter TDE_PASSWORD:

Enter TDE_PASSWORD (reconfirmation):

Loading PILOT...
Enter SYS_PASSWORD: ***************
Enter SYS_PASSWORD (reconfirmation): ****************
Enter TDE_PASSWORD: ******************
Enter TDE_PASSWORD (reconfirmation): ******************
Session ID of the current execution is: 1872
Log file location: /var/opt/oracle/log/PRDB19/database/create/pilot_2023-07-04_04-14-17-PM_218873
-----------------
Running Plugin_initialization job
Completed Plugin_initialization job
-----------------
Running Default_value_initialization job
Completed Default_value_initialization job
-----------------
Running Validate_input_params job
Completed Validate_input_params job
-----------------
Running Validate_cpu_availability job
Completed Validate_cpu_availability job
-----------------
Running Validate_asm_availability job
Completed Validate_asm_availability job
-----------------
Running Validate_disk_space_availability job
Completed Validate_disk_space_availability job
-----------------
Running validate_users_umask job
Completed validate_users_umask job
-----------------
Running Validate_huge_pages_availability job
Completed Validate_huge_pages_availability job
-----------------
Running Validate_hostname_domain job
Completed Validate_hostname_domain job
-----------------
Running Install_db_cloud_backup_module job
Skipping. Job is detected as not applicable.
-----------------
Running Perform_dbca_prechecks job
Completed Perform_dbca_prechecks job
-----------------
Running Validate_backup_report job
Skipping. Job is detected as not applicable.
-----------------
Running Setup_acfs_volumes job
Completed Setup_acfs_volumes job
-----------------
Running Setup_db_folders job
Completed Setup_db_folders job
-----------------
Running DB_creation job
Completed DB_creation job
-----------------
Running Create_db_from_backup job
Skipping. Job is detected as not applicable.
Completed Create_db_from_backup job
-----------------
Running Load_db_details job
Completed Load_db_details job
-----------------
Running Populate_creg job
Completed Populate_creg job
-----------------
Running Register_ocids job
Skipping. Job is detected as not applicable.
-----------------
Running Run_datapatch job
Skipping. Job is detected as not applicable.
-----------------
Running Create_users_tablespace job
Skipping. Job is detected as not applicable.
-----------------
Running Configure_pdb_service job
Skipping. Job is detected as not applicable.
-----------------
Running Set_pdb_admin_user_profile job
Skipping. Job is detected as not applicable.
-----------------
Running Lock_pdb_admin_user job
Skipping. Job is detected as not applicable.
-----------------
Running Configure_flashback job
Completed Configure_flashback job
-----------------
Running Update_cloud_service_recommended_config_parameters job
Completed Update_cloud_service_recommended_config_parameters job
-----------------
Running Update_distributed_lock_timeout job
Completed Update_distributed_lock_timeout job
-----------------
Running Configure_archiving job
Completed Configure_archiving job
-----------------
Running Configure_huge_pages job
Completed Configure_huge_pages job
-----------------
Running Set_credentials job
Completed Set_credentials job
-----------------
Running Update_dba_directories job
Completed Update_dba_directories job
-----------------
Running Set_cluster_interconnects job
Completed Set_cluster_interconnects job
-----------------
Running Create_db_secure_profile job
Completed Create_db_secure_profile job
-----------------
Running Set_utc_timezone job
Completed Set_utc_timezone job
-----------------
Running Run_dst_post_installs job
Completed Run_dst_post_installs job
-----------------
Running Enable_auditing job
Completed Enable_auditing job
-----------------
Running Apply_security_measures job
Completed Apply_security_measures job
-----------------
Running Set_listener_init_params job
Completed Set_listener_init_params job
-----------------
Running Update_db_wallet job
Completed Update_db_wallet job
-----------------
Running Add_oratab_entry job
Completed Add_oratab_entry job
-----------------
Running Configure_sqlnet_ora job
Completed Configure_sqlnet_ora job
-----------------
Running Configure_tnsnames_ora job
Completed Configure_tnsnames_ora job
-----------------
Running Enable_fips job
Completed Enable_fips job
-----------------
Running DB_backup_assistant job
Completed DB_backup_assistant job
-----------------
Running Restart_database job
Completed Restart_database job
-----------------
Running Create_db_login_environment_file job
Completed Create_db_login_environment_file job
-----------------
Running Generate_dbsystem_details job
Completed Generate_dbsystem_details job
-----------------
Running Cleanup job
Completed Cleanup job
dbaascli execution completed
[root@vmnodo01 ~]#
[root@vmnodo01 ~]# su - oracle
Last login: Tue Jul  4 16:32:49 -04 2023
Last login: Tue Jul  4 16:32:58 -04 2023 on pts/0
[oracle@vmnodo01 ~]$ ps -fea |grep smon
root     131034      1  3 Jun15 ?        14:27:16 /u01/app/19.0.0.0/grid/bin/osysmond.bin
grid     133341      1  0 Jun15 ?        00:00:23 asm_smon_+ASM1
oracle   395509      1  0 20:32 ?        00:00:00 ora_smon_PRDB191
[oracle@vmnodo01 ~]$ srvctl status database -d PRDB19
Instance PRDB191 is running on node vmnodo01
Instance PRDB192 is running on node vmnodo02
[oracle@vmnodo01 ~]$ ll
-rwxrwx--- 1 oracle oinstall 631 Jul  4 20:32 PRDB19.env
[oracle@vmnodo01 ~]$ . PRDB19.env
[oracle@vmnodo01 ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Jul 4 20:33:42 2023
Version 19.19.0.0.0
Copyright (c) 1982, 2022, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
Version 19.19.0.0.0

SQL> select name, cdb from v$database;
NAME      CDB
--------- ---
PRDB19   NO

Friday, March 24, 2023

                                         ORACLE GOLDENGATE


This post covers an overview of the components of Oracle GoldenGate, software for real-time data integration and replication across heterogeneous IT systems.

Oracle GoldenGate is a must-know for DBAs and consists of the following components:

  1. Extract
  2. Data Pump
  3. Collector
  4. Replicat
  5. Trail Files
  6. Manager

Change Synchronization

1. Extract

The Oracle GoldenGate Extract process resides on the source system and captures committed transactions from the source database. The database logs may contain both committed and uncommitted data, but Extract captures only committed transactions from its data source and writes them to local trail files.

Extract can be configured for either of the following purposes:

  • Initial Load: For the initial load method of replication, Extract captures a static set of data directly from the source tables or objects.
  • Change Synchronization: In this method of replication, the Extract process continuously captures changes (DML and DDL) from the source database to keep the source and target databases in a consistent, replicated state; it is the sole method for implementing continuous replication between the source and target databases.

The data source of the Extract process can be one of the following:

  • Source tables (if Extract is configured for an initial load)
  • The database transaction or recovery logs (such as Oracle redo logs, Oracle archive logs, SQL audit trails, or Sybase transaction logs), depending on the type of source database
  • A third-party capture module, which passes data and metadata from an external API to the Extract API

Extract captures changes from the source database based on the Extract configuration, which defines the objects to be replicated from the source database.

Multiple Extract processes can be configured on a source database to operate on the same or different source objects.

After extracting records from the source database objects, Extract performs one of the following tasks:

  • Delivers the extracted data to the trail files on the target server through the Collector process
  • Writes the extracted data to local trail files on the source system

Optionally, Extract can also be configured to perform data filtering, transformation, and mapping while capturing data or before transferring it to the target system.
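As an illustration, here is a minimal change-synchronization Extract setup. The process name, credential alias, trail prefix, and table are hypothetical, not taken from any particular environment:

GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1

-- ext1.prm: capture committed changes from the redo stream into a local trail
EXTRACT ext1
USERIDALIAS ogg_src
EXTTRAIL ./dirdat/lt
TABLE hr.employees;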

2. Data Pump

This is an optional GoldenGate server process on the source system. It comes into play when the extracted data is not transferred directly to the target trail files. In a data pump setup, the Extract process captures records from the source and stores them on the local file system in local trail files. The data pump acts as a secondary Extract process: it reads records from the local trail files and delivers them to the target system's trail files through the Collector.

The data pump is also known as the secondary Extract process, and it is always recommended to include a data pump in a GoldenGate configuration; a minimal sketch follows.
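Continuing the hypothetical ext1 example above (host name and port are illustrative):

GGSCI> ADD EXTRACT pmp1, EXTTRAILSOURCE ./dirdat/lt
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1

-- pmp1.prm: read the local trail and ship it to the target system
EXTRACT pmp1
-- PASSTHRU: no filtering or mapping is done, so no database login is required
PASSTHRU
RMTHOST tgt-host, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE hr.*;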

3. Collector

The Collector is a server process that runs in the background on the target system in a GoldenGate replication setup where Extract is configured for continuous change synchronization.

Collector performs the following roles in GoldenGate replication:

  • When a connection request is sent from the source Extract, the Collector process on the target system scans for an available port, maps the requesting connection to it, and sends the port details back to the Manager for assignment to the requesting Extract process.
  • Collector receives the data sent by the source Extract process and writes it to the trail files on the target system.

There is one Collector process on the target system for each Extract process on the source system, i.e. a one-to-one mapping between the Extract and Collector processes; the sketch below shows how this connection is typically configured.
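As a sketch (host name and ports illustrative): the data pump's RMTHOST parameter points at the target Manager, which spawns a dynamic Collector on a port from its allowed range.

-- In the data pump parameter file on the source:
RMTHOST tgt-host, MGRPORT 7809

-- In mgr.prm on the target: ports the Manager may assign to dynamic Collectors
DYNAMICPORTLIST 7810-7830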

4. Replicat

The Replicat process runs on the target system and is primarily responsible for applying the extracted data that the source Extract process delivers to the target trail files.

The Replicat process scans the trail files on the target system, reconstructs the DDL and DML operations from them, and applies them to the target database.

Replicat has two configuration modes, corresponding to the type of Extract configured on the source system:

  • Initial loads: In an initial data load configuration, Replicat can apply a static data copy extracted by the initial-load Extract to the target objects, or route it to a high-speed bulk-load utility.
  • Change synchronization: In a change synchronization configuration, Replicat applies the continuous stream of data extracted from the source objects to the target objects using a native database interface or ODBC drivers, depending on the type of target database.

Optionally, Replicat can also be configured to perform data filtering, transformation, and mapping before applying transactions to the target database; a minimal sketch follows.
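Continuing the hypothetical example (the credential alias, checkpoint table, and mapping are illustrative):

GGSCI> DBLOGIN USERIDALIAS ogg_tgt
GGSCI> ADD CHECKPOINTTABLE ogg.ckpt
GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ogg.ckpt

-- rep1.prm: apply the trail contents to the target tables
REPLICAT rep1
USERIDALIAS ogg_tgt
-- ASSUMETARGETDEFS: source and target table structures are assumed identical
ASSUMETARGETDEFS
MAP hr.employees, TARGET hr.employees;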

5. Trail or Extract Files

Trails or extract files are the operating system files that GoldenGate uses to store the records extracted from the source objects by the Extract process. Trail files can be created on the source system and on the target system, depending on the GoldenGate replication setup. Trail files on the source system are called extract trails or local trails, and those on the target system are called remote trails.

Trail files are the reason why GoldenGate is platform-independent.

By using trail files, GoldenGate minimizes the load on the source database: once the transaction/online/redo/archive logs have been read by the Extract process and written to trail files, all operations such as filtering, conversion, and mapping happen outside the source database. The use of trail files also makes the extraction and replication processes independent of each other; a sketch of how trails are defined follows.
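Trails are defined per Extract with a two-character prefix that GoldenGate expands into numbered files. Reusing the hypothetical names above (sizes illustrative):

GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1, MEGABYTES 500
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1, MEGABYTES 500

On disk, the local trail then appears as numbered files such as lt000000000, lt000000001, and so on (the sequence-number length varies by GoldenGate version).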

6. Manager

Manager can be considered the parent process in a GoldenGate replication setup on both the source and target systems. It controls, manages, and maintains the functioning of the other GoldenGate processes and files. The Manager process is responsible for the following tasks (a minimal parameter-file sketch follows the list):

  • Starting up Oracle GoldenGate processes
  • Maintaining port numbers for processes
  • Starting up dynamic processes
  • Performing GoldenGate trail management
  • Creating event, error, and threshold reports
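A minimal Manager parameter file (mgr.prm) sketch; the port, retry, and retention values are illustrative, not recommendations:

-- mgr.prm on the source or target system
PORT 7809
-- start and restart registered processes automatically
AUTOSTART EXTRACT *
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 5
-- purge trail data already consumed, keeping at least three days
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3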