
CUBRID Developer's Course

This presentation reveals many important aspects of the CUBRID Database, including its unique features, future roadmap, comparison with other databases, architecture, etc.


  1. 1. CUBRID Developer's Course<br />Author: Bomyung Oh<br />Team / Department: DBMS Development Lab<br />Author(2): Kyungsik Seo<br />Team / Department: DBMS Development Lab<br />
  2. 2. Comparison of the feature development speed with MySQL<br />CUBRID<br /><ul><li>CUBRID Cluster</li></ul>Cluster<br />R3.2<br /><ul><li>SQL Compatibility
  3. 3. CUBRID FBO</li></ul>R3.1<br />R3.0<br /><ul><li>HA Feature
  4. 4. Hierarchical Query</li></ul>R2.0<br /><ul><li>Views
  5. 5. Triggers
  6. 6. Stored Procedure
  8. 8. Query Plan Cache
  9. 9. Query Result Cache
  10. 10. Replication
  11. 11. Partitioning
  12. 12. Click Counter</li></ul>R1.0<br />MySQL<br />5.5<br />5.4<br />5.1<br />5.0<br /><ul><li>Views
  13. 13. Triggers
  14. 14. Stored Procedures
  16. 16. Query Cache
  17. 17. Replication
  18. 18. Full Text Indexing
  19. 19. Partitioning
  20. 20. Event scheduler
  21. 21. MySQL Cluster
  22. 22. XML Functions</li></ul>4.1<br />4.0<br />3.23<br />2003<br />2001<br />2002<br />2004<br />2005<br />2006<br />2007<br />2008<br />2009<br />2010<br />
  23. 23. Who is using CUBRID <br />Over 100,000 Downloads<br />
  24. 24. Introduction to CUBRID<br />Overview and Architecture of CUBRID<br />Using CUBRID<br />Introduction to CUBRID HA <br />
  25. 25. 1.1 Overview and Architecture of CUBRID<br />
  26. 26. What is CUBRID?<br /><ul><li>Introduction</li></ul>CUBRID is a comprehensive open source relational database management system that is highly optimized for Web Applications, particularly those with read-intensive transactions. <br /><ul><li>Korea</li></ul>http://dev.naver.com/projects/cubrid/<br />http://www.cubrid.com/online_manual/cubrid_830/index.htm<br />http://www.cubrid.com<br />http://devcafe.nhncorp.com/g_cubrid<br /><ul><li>Global</li></ul>http://www.cubrid.org/<br />http://wiki.cubrid.org/index.php/CUBRID_Manuals/cubrid_2008_R3.0_manual<br /><ul><li>YouTube</li></ul>http://www.youtube.com/user/cubrid<br />
  27. 27. CUBRID Architecture (Simplified)<br /><ul><li>A 3-tier structure that separates DB Servers from Brokers</li></ul>Broker : DB Server = 1 : N is possible<br />Application Client<br />Java Apps<br />CUBRID Manager<br />WAS<br />WAS<br />Query Editor<br />DB Interface<br />JDBC driver<br />Manager port: 8001,8002<br />JDBC<br />JDBC<br />connect<br />Broker port: 30000<br />Middleware<br />cub_broker<br />cub_auto<br />Broker1<br />Broker2<br />cub_job<br />cub_cas<br />send_fd<br />connect<br />Server port: 1523<br />DB Server<br />cub_master<br />cub_auto<br />connect<br />Server port: 1523<br />send_fd<br />cub_job<br />cub_server<br />DB Server2<br />DB Server1<br />Data<br />Volume2<br />Data<br />Volume1<br />volume file<br />log file<br />volume file<br />log file<br />
  28. 28. CUBRID Architecture (Detailed)<br />CUBRID<br />Manager<br />GUI<br />CUBRID<br />Manager<br />Interface<br />ODBC<br />CCI<br />PHP<br />OLE DB<br />Python<br />Ruby<br />JDBC<br />CM Server<br />Broker<br />Job<br />Queuing<br />Monitoring<br />Connection<br />Pooling<br />Logging<br />Client Library<br />Native C API<br />Parser<br />Object<br />Manager<br />Schema<br />Manager<br />Transaction<br />Manager<br />Query<br />Transform<br />Workspace Manager<br />Query<br />Optimizer<br />Memory Manager<br />Plan<br />Generation<br />Communication Module<br />Server<br />Admin<br />Utility<br />Communication Module<br />Create, Delete, Copy, Rename<br />Transaction<br />Manager<br />Log<br />Manager<br />Lock<br />Manager<br />Query<br />Manager<br />Access<br />Method<br />B+Tree<br />Module<br />File Manager<br />System<br />Catalog Module<br />Add Volume<br />Buffer Manager<br />Load /<br />Unload<br />Disk Manager<br />Backup /<br />Restore<br />Active<br />Log<br />Compact /<br />Optimize<br />File Based<br />Objects<br />Data<br />Volume<br />Index<br />Volume<br />Temp<br />Volume<br />Check /<br />Diag<br />Archive<br />Log<br />
  29. 29. CUBRID Process (Detailed)<br />JDBC driver<br />CCI library<br />API<br />connect<br />query &<br />result<br />port listening<br />query &<br />result<br />File<br />cub_broker<br />cubrid_broker.conf<br />fork<br />parse<br />descriptor pass<br />Process<br />cub_cas<br />cub_cas<br />shared memory<br />csql<br />cubridcs.so<br />cubridcs.so<br />Dynamic shared library<br />connect<br />cubridcs.so<br />request &<br />response<br />TCP<br />job queue<br />multi-thread<br />port listening<br />parse<br />descriptor pass<br />parse<br />cub_master<br />cub_server<br />cubrid.conf<br />UDS<br />cubrid.so<br />mount<br />(read/write)<br />register<br />read<br />volume file<br />log file<br />databases.txt<br />volume file<br />log file<br />cub_admin<br />cubridsa.so<br />
  30. 30. 1.2 Using CUBRID<br />
  31. 31. Prerequisites for Installation<br />Download CUBRID<br />http://sourceforge.net/projects/cubrid<br />Check supported platforms<br />(Linux/Windows)<br />uname -r <br />rpm -qa | grep glibc<br />Install JRE version 1.5 or higher and set up the environment variables (CUBRID Manager)<br />http://java.sun.com/javase/downloads/index.jsp<br />For Linux<br />For Windows<br />Visual C++ 2008 distribution pack installation<br />Create DB users<br />(multiple instances)<br />http://www.microsoft.com/downloads/details.aspx?displaylang=ko&FamilyID=9b2da534-3e03-4391-8a4d-074b9f2bc1bf<br />Install and launch CUBRID<br />
  32. 32. CUBRID Installation and Starting CUBRID Service<br /><ul><li>How to install CUBRID and start CUBRID Service in the Windows environment
  33. 33. Run the exe file to start the installation wizard. For detailed information, see the manual provided at the following link: http://www.cubrid.org/manual/gs/gs_install_windows.htm
  34. 34. Starting CUBRID Service in the CUBRID tray
  35. 35. How to install CUBRID and start CUBRID Service in the Linux environment
  36. 36. For detailed information, see the manual provided at the following link: </li></ul>http://www.cubrid.org/manual/gs/gs_install_linux.htm<br /><ul><li>Starting CUBRID Service (CUBRID-related processes must be started)</li></ul>For detailed information, see the manual provided at the following link:<br />http://www.cubrid.org/manual/gs/gs_must_svcstart.htm<br /><ul><li>Start the CUBRID service by using the following command:</li></ul>% sh CUBRID- . /home1/cub_user/.cubrid.sh<br />% cubrid service start<br />
  37. 37. DB Creation and DB Start<br /><ul><li>How to create a new DB and start it. For detailed information, see the manual provided at the following link:</li></ul>http://www.cubrid.org/manual/admin/admin_db_create_create.htm<br /><ul><li>Creating testdb and starting it with a command
  38. 38. Starting an existing DB (demodb is included in the installation of CUBRID by default). For detailed information, see the manual provided at the following link:</li></ul>http://www.cubrid.org/manual/gs/gs_must_svcstart.htm<br /><ul><li>Starting demodb with a command</li></ul>% cubrid createdb testdb<br /> % cubrid server start testdb<br /> % cubrid server start demodb<br />
  39. 39. CUBRID Manager - Configuration<br /><ul><li>Java-based GUI tool; JRE/JDK version 1.6 or higher is required
  40. 40. CUBRID Manager is a tool used to control the functions of servers and brokers, and to monitor and analyze logs
  41. 41. CUBRID Manager consists of the search pane to the left, the query edit pane to the right, the top menu, and the toolbar</li></li></ul><li>CUBRID Manager – start<br />Start CUBRID Server <br />Start CUBRID Manager <br />insert host connection information<br />(Default manager account<br />ID: admin / PW: admin)<br />insert<br /> DB connection information<br />(Default DB account<br />ID: dba / PW: No password)<br />Start DB Server<br />Execute queries<br />
  42. 42. CUBRID Manager - stop<br />Stop DB Server<br />Disconnect from the host<br />Stop CUBRID Manager <br />
  43. 43. 1.3 Introduction to CUBRID HA <br />
  44. 44. Introduction to CUBRID HA <br /><ul><li>Replication
  45. 45. No Automatic Fail-over / No Automatic Sync
  46. 46. HA
  47. 47. Automatic Fail-over / Automatic Sync</li></li></ul><li>HA Configuration and Usage – DB Server Redundancy<br />AP<br />Web Server<br />AP<br />Web Server<br />Fail-back<br />Fail-over<br />Broker #2<br />Broker #1<br />Automatic failover<br />Active<br />Server<br />Standby<br />Server<br />Node Fail<br />Automatic failover<br />Replication<br />
  48. 48. HA Configuration and Usage – Broker Redundancy<br />AP<br />Web Server<br />AP<br />Web Server<br />JDBC Driver<br />CCI Library<br />Automatic failover<br />Fail-back<br />Fail-over<br />Broker #2<br />Broker #1<br />Node Fail<br />Active<br />Server<br />Standby<br />Server<br />Replication<br />
  49. 49. Diagram of HA Architecture (Detailed)<br />Async<br />Update<br />Select<br />A-Node<br />Active Server Node<br />S1-Node<br />Standby Server Node<br />applylogdb<br />copylogdb<br />applylogdb<br />copylogdb<br />Server<br />Active<br />Replica<br />Standby<br />Semi-Sync<br />Sync<br />active log<br />archive logs<br />A-node’s<br />active & archive logs<br />S1-node’s<br />active & archive logs<br />active log<br />archive logs<br />Replication Log is not included<br />Replication Log is included<br />#Configurations#<br />#A-Node’s log path<br />S1-Node’s active & archive logs <br /> = $CUBRID_DATABASES/database-name_S1-Node-hostname<br />(ex. /home1/cubrid1/DB/tdb01_Snode1)<br />copylogdb & applylogdb error logs <br /> = $CUBRID/log<br />#S1-Node’s log path<br />A-Node’s active & archive logs <br /> = $CUBRID_DATABASES/database-name_A-Node-hostname<br />(ex. /home1/cubrid1/DB/tdb01_Anode1)<br />copylogdb & applylogdb error logs <br /> = $CUBRID/log<br />#Configurations#<br />A-node & S1-node’s <cubrid.conf><br />ha_mode=yes<br />ha_node_list=hagrpname@A-node:S1-node<br />A-node & S1-node’s <cubrid-ha><br />CUBRID_USER=username<br />DB_LIST='dbname'<br />broker node’s <databases.txt><br />dbname vol_path A-node:S1-node log_path<br />
  50. 50. 2. CUBRID Architecture<br />CUBRID Volume Structure<br />CUBRID Parameters<br />Broker Parameters<br />Error Log File<br />System Catalog<br />
  51. 51. 2.1 CUBRID Volume Structure<br />
  52. 52. CUBRID Volume Structure<br /><ul><li>* : The table is mapped to a CUBRID file.
  53. 53. **: A CUBRID file can be separated into multiple CUBRID volumes.</li></ul>File_1<br />File_2<br />File_3<br />Free_Pages<br />Volumes<br />
  54. 54. DB Volume Structure<br />
  55. 55. DB Volume – Information Volume<br /><ul><li>Information Volumes
  56. 56. Data volume
  57. 57. Saves the data of an application, such as tables or records
  58. 58. A record storage file, called heap, is created in a data volume
  59. 59. Index volume
  60. 60. A volume in which B+Tree indexes are saved for faster data access or queries
  61. 61. Temp volume
  62. 62. A volume in which intermediate results are saved to fetch result sets that exceed the size of the memory buffer, or to execute join queries
  63. 63. A temporary volume with an appropriate size must be created when creating a DB volume.
  64. 64. This is a permanent volume reserved for temporary data; it differs from temporary volumes, which are created on demand and deleted when no longer needed.
  65. 65. Generic volume
  66. 66. The initial volume during DB creation, which can be used as the data, index, or temp volume.
  67. 67. If the usage of the volume (data, index, or temporary) is not specified, it can be used for general purposes.</li></li></ul><li>DB Volume – Log Volume<br /><ul><li>Log Volumes
  68. 68. The active log volume includes the most recent updates that have been applied to a database.
  69. 69. Records the status of a committed, aborted, or active transaction.
  70. 70. It is used to recover a DB from a storage media failure.
  71. 71. When the space allocated to an active log is completely used up, the content of the active log will be copied to and stored in a new log (archive log).
  72. 72. Example: demodb_lgat (active log), demodb_lgar* (archive log)</li></li></ul><li>DB Volume – Control Volume<br /><ul><li>Control Information Volumes
  73. 73. Volume Information
  74. 74. Includes the location information on DB volumes to be created or added
  75. 75. This file cannot be manually modified, deleted, or moved.
  76. 76. The name of the file is in {dbname}_vinf format.
  77. 77. Log Information
  78. 78. Records the information of the current logs and archive logs
  79. 79. Records the information on a new archive log file and unnecessary archive log file.
  80. 80. The name of the file is in {dbname}_lginf format</li></ul>-5 C:\CUBRID\databases\demodb\demodb_vinf<br />-4 C:\CUBRID\databases\demodb\demodb_lginf<br />-3 C:\CUBRID\databases\demodb\demodb_bkvinf<br />-2 C:\CUBRID\databases\demodb\demodb_lgat<br />0 C:\CUBRID\databases\demodb\demodb<br />1 C:\CUBRID\DATABA~1\demodb\demodb_x0010<br />COMMENT: CUBRID/LogInfo for database /CUBRID/databases/demodb<br />ACTIVE: /CUBRID/databases/demodb_lgat 5000 pages<br />ARCHIVE: 0 /CUBRID/databases/demodb_lgar000 0 4997<br />COMMENT: Log archive /CUBRID/databases/demodb_lgar000 is not needed any longer unless a database media crash occurs.<br />
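The {dbname}_lginf entries above follow a simple line format: `ACTIVE: <path> <pages> pages` and `ARCHIVE: <number> <path> <first-page> <last-page>`. As an illustration only (this is not a CUBRID tool or API), a minimal Python sketch that picks those fields apart:

```python
# Minimal sketch (not part of CUBRID): parse ACTIVE/ARCHIVE entries from a
# {dbname}_lginf file, based on the sample lines shown above.
def parse_lginf_line(line):
    tag, _, rest = line.partition(":")
    fields = rest.split()
    if tag == "ACTIVE":
        # e.g. "ACTIVE: /CUBRID/databases/demodb_lgat 5000 pages"
        return {"type": "active", "path": fields[0], "pages": int(fields[1])}
    if tag == "ARCHIVE":
        # e.g. "ARCHIVE: 0 /CUBRID/databases/demodb_lgar000 0 4997"
        return {"type": "archive", "number": int(fields[0]), "path": fields[1],
                "first_page": int(fields[2]), "last_page": int(fields[3])}
    # COMMENT and any other tags: keep the raw text
    return {"type": tag.lower(), "text": rest.strip()}
```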
  81. 81. DB Volume – Backup Volume<br /><ul><li>Backup Volume Information
  82. 82. Records the location and backup information of a backup volume
  83. 83. Located in the same path in which log files are stored.
  84. 84. The name of the file is in {dbname}_bkvinf format.</li></ul>0 0 /Backup/demodb_bk000 0 level full backup of the first file.<br /> 0 1 /Backup/demodb_bk001 0 level full backup of the second file.<br /> 1 0 /Backup/demodb_bk100 1 level incremental backup of the first file.<br /> 2 0 /Backup/demodb_bk200 2 level incremental backup of the first file.<br />The path information of a backup<br />file<br />Backup level<br />information<br />The sequence number of a backup volume per <br />level<br />
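Per the sample above, each {dbname}_bkvinf line carries the backup level, the sequence number of the backup volume within that level, and the volume path. A small Python sketch of that layout (illustrative only, not a CUBRID utility):

```python
# Minimal sketch (not part of CUBRID): parse {dbname}_bkvinf entries.
# Each line holds: backup level, per-level sequence number, backup volume path.
def parse_bkvinf_line(line):
    level, unit, path = line.split()[:3]
    return {"level": int(level), "unit": int(unit), "path": path}

def full_backup_paths(lines):
    # Level 0 is a full backup; levels 1 and 2 are incremental.
    entries = [parse_bkvinf_line(l) for l in lines]
    return [e["path"] for e in entries if e["level"] == 0]
```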
  85. 85. DB Volume – $CUBRID/conf/databases.txt<br /><ul><li>databases.txt
  86. 86. Contains the name, path, host name, and log path of each DB.
  87. 87. When a DB is created, its information is recorded in the databases.txt file.
  88. 88. Saved to the path specified by the $CUBRID_DATABASES environment variable.
  89. 89. If it does not exist in the directory specified by the environment variable, the current directory will be used instead.
  90. 90. Caution
  91. 91. If a host name has been changed or a DB deleted by an OS command, this file must be modified as well.
  92. 92. As the user must be able to modify the databases.txt file during DB creation or deletion, the user must have the privilege to write to this file. If a user without the appropriate privilege attempts to create a DB, the DB creation will fail. For this reason, a DBA should enable the user-write privilege for the directory, or create a databases.txt file in the directory of each user and configure the environment variables.</li></ul>demodb /CUBRID/databases/demodb hostname /CUBRID/databases/demodb<br />DB name<br />DB path<br />Host name<br />DB log path<br />
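The databases.txt entry above is one whitespace-separated record per DB: name, path, host name, log path. A minimal Python sketch of reading such a line (illustrative only; field order taken from the sample above):

```python
# Minimal sketch (not part of CUBRID): split one databases.txt entry into its
# four fields: DB name, DB path, host name, DB log path.
def parse_databases_txt_line(line):
    name, db_path, host, log_path = line.split()
    return {"name": name, "db_path": db_path, "host": host, "log_path": log_path}
```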
  93. 93. DB Volume Management<br />An example of volume configuration<br />disk1<br />disk3<br />disk2<br />db1<br />db1_temp<br />db1_log<br />db1_data<br />db1_index<br />db_backup<br /><ul><li>Distributes according to usage to avoid the disk bottlenecks
  94. 94. Distributes data, index, temp, and log volume so that they are separated from each other
  95. 95. Avoids the disk bottlenecks and improves disk management
  96. 96. Distributes volumes that can be used simultaneously
  97. 97. data & log, data & index, data & temp
  98. 98. Configures a volume to an appropriate size to avoid having to add volumes while in service
  99. 99. Data, Index, Temp, Active Log: Page size and the number of pages must be considered
  100. 100. Backup: Backs up with the -r option, and then deletes unnecessary archive logs</li></li></ul><li>2.2 CUBRID Parameters<br />$CUBRID/conf/cubrid.conf<br />
  101. 101. Environment Configuration File - $CUBRID/conf/cubrid.conf<br /><ul><li>cubrid.conf
  102. 102. A file in which the values of CUBRID system parameters are saved.
  103. 103. The file is located in $CUBRID/conf. Different values can be specified per DB by adding a section for that DB.
  104. 104. There are two types of parameters: DB server parameters and DB client parameters. If a parameter has been changed in a process, that process must be restarted.
  105. 105. SQL is used to change a client parameter.
  106. 106. Syntax for configuring parameters
  107. 107. Case-insensitive
  108. 108. The name and value of a parameter must be inserted on the same line.
  109. 109. An equals sign (=) can be used, and a blank character can be added at both sides of the sign.
  110. 110. If the value of a parameter is a string, insert the string without quotation marks. If a blank character is included in the string, enclose it in quotation marks. </li></ul>[common]<br />data_buffer_pages=250000<br />[demodb]<br />data_buffer_pages=500000<br />
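The lookup rule the sample illustrates — a per-DB section overrides the common section, with case-insensitive parameter names — can be sketched in a few lines of Python. This is an illustration of the precedence rule only, not CUBRID's actual parser:

```python
# Minimal sketch (not part of CUBRID): read cubrid.conf-style text, where a
# [dbname] section overrides the [common] section and parameter names are
# case-insensitive.
def parse_conf(text):
    sections, current = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1].lower()
            sections.setdefault(current, {})
        elif "=" in line and current is not None:
            name, _, value = line.partition("=")
            sections[current][name.strip().lower()] = value.strip()
    return sections

def effective_value(sections, dbname, param):
    # Per-DB value wins; otherwise fall back to the common section.
    param = param.lower()
    return sections.get(dbname.lower(), {}).get(
        param, sections.get("common", {}).get(param))
```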
  111. 111. <ul><li>Higher in priority than the configuration of cubrid.conf
  112. 112. Add CUBRID_ at the beginning of the parameter to configure it as an environment variable
  113. 113. Configuring with an SQL statement
  114. 114. Only client parameters can be configured
  115. 115. Use “;” for multiple configurations</li></ul>Environment Configuration File- $CUBRID/conf/cubrid.conf<br />set CUBRID_SORT_BUFFER_PAGE=512<br />SET SYSTEM PARAMETERS 'parameter_name=value [{; name=value}...]‘<br />SET SYSTEM PARAMETERS 'csql_history_num=70’<br />SET SYSTEM PARAMETERS 'csql_history_num=70; index_scan_in_oid_order=1'<br />
  116. 116. Memory Related Configurations<br /><ul><li>data_buffer_pages
  117. 117. The number of data pages cached to the memory by a DB server
  118. 118. Requires an amount of memory equal to data_buffer_pages times the database page size (the page size specified when the DB is initialized; default is 4KB). With the default of 25,000 pages, about 100MB of memory is required.
  119. 119. The actual size of a DB, the size of the memory, and the number and size of other processes must be considered when determining the size
  120. 120. The larger the value, the more data needs to be cached to the memory, which means less disk I/O. However, a value that is too large will cause the full swapping of page buffers.
  121. 121. index_scan_oid_buffer_pages
  122. 122. Configure the number of buffer pages in which the OID list is to be temporarily stored when scanning indexes
  123. 123. The default value is 4 (range: 0.05~16).</li></li></ul><li>Memory Related Configurations<br /><ul><li>sort_buffer_pages
  124. 124. The number of pages used to process queries that require sorting.
  125. 125. One sort buffer is allocated to each active client request.
  126. 126. The allocated memory is released upon the completion of sorting.
  127. 127. A value between 16 and 500 is recommended.
  128. 128. temp_file_memory_size_in_pages
  129. 129. Determines the number of buffer pages that cache the temporary results of a query
  130. 130. The default value is 4, and the maximum value is 20.</li></li></ul><li>Log Related Configurations<br /><ul><li>checkpoint_interval_in_mins, checkpoint_interval_in_npages
  131. 131. Configures the interval of a checkpoint execution in min./page
  132. 132. The larger the value, the more time it takes to recover a DB.
  133. 133. media_failure_support
  134. 134. Configures whether to keep an archive log in the event of a storage media failure
  135. 135. If it is configured to the default value (yes), the content of the active logs will be copied to and stored in an archive log when the active logs become full.
  136. 136. Please note that if this value is no, archive logs created when the active logs become full will be deleted.</li></li></ul><li>On Concurrency Control and Locking<br /><ul><li>isolation_level
  137. 137. A parameter used to manage transaction concurrency
  138. 138. It must be an integer from 1 to 6 or a character string (Default: 3)
  139. 139. The larger the value of the parameter, the lower the concurrency
  140. 140. SERIALIZABLE: Inaccessible until transaction is complete
  141. 141. REPEATABLE READ: the S_LOCK acquired by SELECT is held until the transaction completes
  142. 142. READ UNCOMMITTED: Allows incomplete transactions to be read
  143. 143. READ COMMITTED: Allows only completed transactions to be read</li></li></ul><li>Configurations Related to Concurrency and Lock<br /><ul><li>deadlock_detection_interval_in_secs
  144. 144. Configures the interval, in seconds, of deadlock detection for stopped transactions.
  145. 145. Resolves deadlock by rolling back one of the deadlocked transactions
  146. 146. The default value is 1 sec. Be sure not to set the interval to a large number, as doing so will allow deadlocks to remain undetected for that length of time.
  147. 147. lock_escalation
  148. 148. Converts to table lock if the number of row locks belonging to a table is greater than the specified value.
  149. 149. The default value is 100,000.
  150. 150. If this value is small, the lock management overhead will be reduced, but the concurrency will be decreased.
  151. 151. If this value is large, the lock management overhead will be increased, but the concurrency will be improved.
  152. 152. lock_timeout_in_secs
  153. 153. Specifies the waiting time of a lock
  154. 154. If the lock has not been acquired within the specified period of time, the transaction is cancelled, and an error is returned.
  155. 155. The default value is -1, in which case the wait time is unlimited. If it is 0, there is no wait time.</li></li></ul><li>Configurations Related to Query Caches<br /><ul><li>max_plan_cache_entries
  156. 156. Configures the maximum number of query plans to be cached to the memory (Default: 1,000)
  157. 157. Plan caching works only when the value is at least 1; a value lower than 1 disables it.
  158. 158. Configures the hint so that query execution plans are created without using cache
  159. 159. Use /*+ RECOMPILE */ in queries</li></ul>select /*+ RECOMPILE */ * from record where …<br />
  160. 160. Configurations Related to Syntax and Type<br /><ul><li>block_ddl_statement
  161. 161. Limits Data Definition Language (also known as DDL) statements
  162. 162. The default value is no.
  163. 163. block_nowhere_statement
  164. 164. Blocks execution of UPDATE/DELETE statements that have no WHERE clause.
  165. 165. The default value is no.
  166. 166. single_byte_compare
  167. 167. When comparing strings, set it so that the strings are compared by a single byte. When using Unicode, set it to yes (for UTF-8).
  168. 168. Default:no</li></li></ul><li>Other Parameters<br /><ul><li>Parameters related to communication services
  169. 169. cubrid_port_id
  170. 170. Master Process Port
  171. 171. The default value is 1523
  172. 172. If 1523 is already in use, the parameter must be changed to another port number.
  173. 173. Client/server request-related
  174. 174. max_clients
  175. 175. This number represents the maximum number of DB clients that can be connected to a DB server at the same time, which by extension also means the total number of concurrent transactions. (Default value: 50)
  176. 176. The actual number of concurrent users must be considered
  177. 177. DB Server restart configuration
  178. 178. auto_restart_server
  179. 179. Automatically restarts a DB server that has been stopped due to a failure
  180. 180. The default value when restarting the DB is yes.
  181. 181. In the HA, the default value is no.</li></li></ul><li>Other Parameters<br /><ul><li>Parameters related to transaction processing
  182. 182. async_commit
  183. 183. Enables the asynchronous commit function (Default value: no)
  184. 184. Returns a commit to a client before the commit log is flushed to a disk
  185. 185. When a failure occurs in a DB server, committed transactions whose logs have not been flushed to disk cannot be recovered.
  186. 186. group_commit_interval_in_msecs
  187. 187. Collects commits that occur during the configured interval into a group and flushes them together (not configured by default)
  188. 188. Improves performance by collecting commit logs and flushing them to a disk</li></li></ul><li>2.3 Broker Parameters<br />$CUBRID/conf/cubrid_broker.conf<br />
  189. 189. Broker Environment Configuration - $CUBRID/conf/cubrid_broker.conf<br /><ul><li>Modifying environment configuration
  190. 190. Configuration file: $CUBRID/conf/cubrid_broker.conf
  191. 191. The file can be modified in an editor. Any changes made will be applied when the Broker restarts.
  192. 192. To modify the configuration without a restart, use the following command:
  193. 193. Configurable environment variables
  196. 196. If an environment variable and its value are incorrect, an error will occur during the restart, which will prevent the restart.</li></ul>% broker_changer <broker-name> <conf-name> <conf-value><br />% broker_changer broker1 sql_log on<br />OK<br />
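To make the slide concrete: cubrid_broker.conf groups settings per broker in a named section. The fragment below is an illustrative sketch only — BROKER_PORT, SQL_LOG, and ACCESS_LOG are parameters referenced elsewhere in this deck, while the section name, SERVICE, and the exact values are assumptions:

```
[%BROKER1]
SERVICE     = ON       # assumption: enable this broker at service start
BROKER_PORT = 30000    # matches the broker port shown in the architecture slide
SQL_LOG     = ON       # the parameter toggled above with broker_changer
ACCESS_LOG  = ON       # writes the <broker name>.access connection log
```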
  197. 197. Introduction to Broker Parameters<br />
  198. 198. Introduction to Broker Parameters<br />
  199. 199. Introduction to Broker Parameters<br />
  200. 200. 2.4 Error Log File<br />$CUBRID/log/<br />$CUBRID/log/server/<br />$CUBRID/log/broker/<br />$CUBRID/log/broker/sql_log<br />$CUBRID/log/broker/error_log<br />CURRENT_DIRECTORY, $HOME<br />
  201. 201. Broker Log File – Connection Log$CUBRID/log/broker/<br /><ul><li>Checking connection log
  202. 202. The connection log is a record of the time it takes for each CAS to process a request by Broker.
  203. 203. This log has the name of "<broker name>.access" and resides in a directory specified in the ACCESS_LOG of cubrid_broker.conf.</li></ul>1 - - 1158198049.151 1158198049.246 2008/09/14 10:40:49 ~ 2008/09/14 10:40:49 29438 - -1<br />2 - - 1158198049.401 1158198049.406 2008/09/14 10:40:49 ~ 2008/09/14 10:40:49 29438 - -1<br />
  204. 204. Broker Log File – Error Log$CUBRID/log/broker/error_log<br /><ul><li>Checking error log
  205. 205. Records the information about an error that has occurred while processing the request from an application client into the broker_name_app_server_num.err file</li></ul>Time: 02/04/09 13:45:17.687 - SYNTAX ERROR *** ERROR CODE = -493, Tran = 1, EID = 38<br />Syntax: Unknown class "unknown_tbl". select * from unknown_tbl<br />
  206. 206. Broker Log File – SQL Log$CUBRID/log/broker/sql_log<br /><ul><li>SQL log
  207. 207. The SQL log file records the SQL that an application client requests, and is saved under the name of "broker_name_app_server_num.sql.log."</li></ul>02/04 13:45:17.687 (38) prepare 0 insert into unique_tbl values (1)<br />02/04 13:45:17.687 (38) prepare srv_h_id 1 <br />02/04 13:45:17.687 (38) execute srv_h_id 1 insert into unique_tbl values (1)<br />02/04 13:45:17.687 (38) execute error:-670 tuple 0 time 0.000, EID = 39<br />02/04 13:45:17.687 (0) auto_rollback<br />02/04 13:45:17.687 (0) auto_rollback 0<br />*** 0.000<br />02/04 13:45:17.687 (39) prepare 0 select * from unique_tbl<br />02/04 13:45:17.687 (39) prepare srv_h_id 1 (PC)<br />02/04 13:45:17.687 (39) execute srv_h_id 1 select * from unique_tbl<br />02/04 13:45:17.687 (39) execute 0 tuple 1 time 0.000<br />02/04 13:45:17.687 (0) auto_commit<br />02/04 13:45:17.687 (0) auto_commit 0<br />*** 0.000<br /><ul><li> The time at which the application sent the request
  208. 208. (39) : The sequence number of the SQL statement group, for prepared statement pooling
  209. 209. (PC) : Uses the content stored in the plan cache
  210. 210. SELECT... : The SQL statement to be executed.</li></ul> - When pooling statements, the binding variable of the WHERE clause is displayed as ?. <br /><ul><li> Execute 0 tuple 1 time 0.000</li></ul> - One row is executed, which takes 0.000 seconds.<br /><ul><li>auto_commit/auto_rollback</li></ul> - It signifies that the target will either be committed automatically or rolled back <br />- The second auto_commit/auto_rollback value is<br />an error code. 0 signifies that the transaction has been completed without an error.<br />
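The execute-result lines above have a fixed shape: a timestamp, the statement group id in parentheses, `execute`, then `tuple <n> time <secs>`. A rough Python sketch (illustrative only, not a CUBRID tool) that pulls the tuple count and elapsed time out of such a line:

```python
import re

# Minimal sketch (not part of CUBRID): extract the statement group id, tuple
# count, and elapsed time from a broker SQL-log "execute ... tuple N time T"
# line like the samples above. Returns None for other log lines.
EXECUTE_RESULT = re.compile(
    r"\((?P<group>\d+)\) execute .*?tuple (?P<tuples>\d+) time (?P<secs>[\d.]+)")

def parse_execute_result(line):
    m = EXECUTE_RESULT.search(line)
    if m is None:
        return None
    return {"group": int(m.group("group")),
            "tuples": int(m.group("tuples")),
            "seconds": float(m.group("secs"))}
```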
  211. 211. 2.5 System Catalog<br />
  212. 212. Catalog Information<br /><ul><li>Provides schema information access through SQL
  213. 213. Table information
  214. 214. db_class
  215. 215. Important fields: class_name and owner_name
  216. 216. Column information
  217. 217. db_attribute
  218. 218. Important fields: class_name, attr_name, and attr_type
  219. 219. Other
  220. 220. db_vclass
  221. 221. db_index
  222. 222. db_index_key
  223. 223. db_trig
  224. 224. db_partition
  225. 225. db_stored_procedure
  226. 226. db_auth</li></li></ul><li>Catalog Information – Checking Table Information<br /><ul><li>Searching for table information in the catalog (db_class)
  227. 227. Searching for table information in the catalog (db_index)</li></li></ul><li>3. CUBRID SQL<br />Types, Operators, and Functions<br />Comparison of Major SQLs<br />Query Plans and Hints<br />
  228. 228. 3.1 Types, Operators, and Functions<br />
  229. 229. CUBRID Identifiers<br />
  230. 230. CUBRID Data Types<br />
  231. 231. CUBRID Data Types<br />
  232. 232. CUBRID Operators<br />
  233. 233. CUBRID Functions (2008 R3.0 based)<br />
  234. 234. CUBRID Functions<br />
  235. 235. 3.2 Comparison of Major SQLs<br />
  236. 236. Cautions regarding CUBRID SQL <br /><ul><li> Does not support implicit type conversion.</li></ul>Cannot process quotation marks in numeric data.<br /><ul><li> Does not support character sets.
  237. 237. Saves and displays the character set configured in an application as it is.
  238. 238. Can specify a character set via the JDBC connection url.
  239. 239. Does not support multi-byte characters.
  240. 240. Column sizes must be defined to allow sufficient space for multi-byte characters.
  241. 241. The length or position value in a string function is processed byte by byte.
  242. 242. Functions for joining DBs are not supported.
  243. 243. Cannot change the column size by using the ALTER TABLE statement.
  244. 244. This will be fixed in a future version.
  245. 245. If the prepare statement pooling is used, only one result set can be handled per connection.
  246. 246. It is recommended to open multiple connections for use. </li></li></ul><li>Join Query<br />[Inner] Join<br />SELECT select_list<br />FROM TABLE1 T1<br /> INNER JOIN TABLE2 T2 ON T1.COL1 = T2.COL2<br />WHERE T1.A = 'test' AND T2.B = 1;<br />Left [Outer] Join<br />SELECT select_list<br />FROM TABLE1 T1<br /> LEFT OUTER JOIN TABLE2 T2 ON T1.COL1 = T2.COL2 AND T2.B=1<br />WHERE T1.A = 'test';<br />
  247. 247. Pagination (LIMIT RESULT SET)<br />ROWNUM <br />SELECT select_list FROM TABLE1 T1<br />WHERE T1.A = 'test' AND ROWNUM <= 100<br />ORDER BY ORDER_COLUMN;<br />ORDERBY_NUM()<br />SELECT select_list FROM TABLE1 T1<br />WHERE T1.A = 'test' <br />ORDER BY ORDER_COLUMN<br />FOR ORDERBY_NUM() <= 100;<br />LIMIT (from R3.0)<br />SELECT select_list FROM TABLE1 T1<br />WHERE T1.A = 'test' <br />ORDER BY ORDER_COLUMN<br />LIMIT 1,100;<br />
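When paging with the `LIMIT offset,count` form shown above, the offset for 1-based page n of size s is (n-1)*s, so the first page starts at offset 0. A small illustrative helper for that arithmetic (a hypothetical function, not a CUBRID API):

```python
# Minimal sketch: build a LIMIT clause for 1-based page numbers, using the
# "LIMIT offset,count" form shown above (offset = (page - 1) * page_size).
def limit_clause(page, page_size):
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    return "LIMIT %d,%d" % ((page - 1) * page_size, page_size)
```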
  248. 248. AUTO_INCREMENT and SERIAL<br />SERIAL<br />CREATE SERIAL SERIAL_NAME START WITH 1 MAXVALUE 1000 NOCYCLE;<br />CREATE TABLE TABLE1 ( seqnum INT, <br /> name VARCHAR); <br />INSERT INTO TABLE1 VALUES (SERIAL_NAME.next_value, 'test'); //seqnum=1<br />AUTO_INCREMENT<br />CREATE TABLE TABLE1 ( seqnum INT AUTO_INCREMENT(1,1000) NOT NULL,<br /> name VARCHAR); <br />INSERT INTO TABLE1 (name) VALUES ('test'); //seqnum=1<br />
249. 249. INDEX<br />CREATE INDEX ON TABLE1(zipcode, name, address);<br />SELECT * FROM TABLE1<br />WHERE zipcode=1000 AND name LIKE '%test%' AND address LIKE '%seoul';<br />CUBRID internal process: <br /><ul><li> Step 1: Searches for a target in which zipcode=1000 at the index level
250. 250. Step 2: Extracts targets that satisfy the name and address conditions by accessing them at the data level.</li></ul>(In contrast, MySQL accesses all the data in which zipcode=1000 at the data level, and then extracts the data that satisfy the other conditions.)<br />INDEX usage tips: <br /><ul><li>The smaller the size of an index key, the better the performance.
251. 251. Configure an index for columns with a good distribution (narrow range), primary keys, and columns that serve as join connection points.
252. 252. When configuring indexes, use columns that are infrequently updated.</li></li></ul><li>Index Definition and Using USING INDEX <br />CREATE [ UNIQUE ] INDEX [ index_name ]<br />ON table_name ( column_name[(prefix_length)] [ASC | DESC] [ {, column_name[(prefix_length)] [ASC | DESC]} ...] ) [ ; ]<br /><ul><li> The UNIQUE index creates an index that is used for uniqueness constraints.
253. 253. If no index name is specified, one will be generated automatically.
  254. 254. You can define an index only for the front part of a character string (Prefix Index)</li></ul>SELECT/UPDATE/DELETE...USING INDEX {NONE | index_name[(+)],…};<br /><ul><li> Index names are distinguished by table and are used as table_name.index_name.
  255. 255. Scans indexes only when the cost of index scan specified in the USING INDEX clause is lower than the sequential scan.
256. 256. With index_name(+), the index scan is executed unconditionally.
  257. 257. For USING INDEX NONE, the sequential scan is executed unconditionally.
  258. 258. If more than two index names are specified behind the USING INDEX clause, the appropriate index will be selected by the optimizer.
259. 259. If more than two tables are joined, index names must be specified for all tables.</li></li></ul><li>Index Definition and Using USING INDEX - Tuning<br /><ul><li> If an index column (yymm) is processed by a function in the WHERE clause, there is no index scan.</li></li></ul><li>Index Definition and Using USING INDEX - Tuning<br /><ul><li> When defining an index, check the query plan and configure a covering index.
260. 260. When comparing the value of an index column to NULL, there will be no index scan; modify the query.
  261. 261. Create an index to be able to cover search conditions
  262. 262. Create an index to be able to cover the ORDER BY sorting condition
263. 263. The index scan is not available if you perform the LIKE search by binding a dynamic parameter.</li></ul>SELECT * FROM tbl WHERE col1 LIKE ? || '%' // A sequential scan occurs<br /><ul><li>SELECT * FROM tbl WHERE col1 LIKE 'AAA' || '%' // insert a static value</li></li></ul><li>3.3 Query Plans and Hints<br />
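Before reading query plans, the earlier tuning tip — a function applied to an indexed column (yymm) blocks the index scan — can be sketched as follows (hypothetical table sales, index on yymm):

```sql
-- No index scan: the indexed column is wrapped in a function
SELECT * FROM sales WHERE substr(yymm, 1, 4) = '2010';
-- Index scan possible: move the computation to the constant side
SELECT * FROM sales WHERE yymm BETWEEN '201001' AND '201012';
```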
264. 264. Query Plans and Hints<br /><ul><li> Creates a query plan based on the scan methods (sscan and iscan) and the join methods (nl-join, idx-join, and m-join)</li></li></ul><li>Configuring the Display and Check of a Query Plan (CUBRID Manager)<br />Display Query Plan<br />
265. 265. An Example of Display Query Plan (sscan)<br />SELECT * FROM athlete WHERE name='Yoo Nam-Kyu';<br />(card, page#)<br /><ul><li>sscan: A sequential scan
  266. 266. card: Number of records in an expected result set
  267. 267. page#: Expected number of page accesses
  268. 268. sel(selectivity): Expected selectivity that satisfies search conditions</li></ul>(card, page#)<br />sel<br />
269. 269. Example of a Display Query Plan (iscan)<br />CREATE INDEX ON athlete(name);<br />SELECT * FROM athlete WHERE name='Yoo Nam-Kyu';<br /><ul><li>iscan: An index scan</li></li></ul><li>Example of a Display Query Plan (nl-join)<br />SELECT * FROM olympic, nation WHERE olympic.host_nation=nation.name;<br /><ul><li>outer table: Contains a small number of records
270. 270. inner table: Contains many records and has indexes</li></li></ul><li>Example of a Display Query Plan (idx-join)<br />SELECT * FROM game, athlete WHERE game.athlete_code=athlete.code;<br />
271. 271. Example of a Display Query Plan (m-join)<br />SELECT /*+ USE_MERGE */<br /> * FROM game, athlete WHERE game.athlete_code=athlete.code;<br />
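Like the USE_MERGE hint in the m-join example above, the other join methods can also be requested with optimizer hints (a sketch; assumes the USE_NL and USE_IDX hint names available in this CUBRID version):

```sql
-- Request a nested-loop join
SELECT /*+ USE_NL */ * FROM game, athlete WHERE game.athlete_code = athlete.code;
-- Request an index join
SELECT /*+ USE_IDX */ * FROM game, athlete WHERE game.athlete_code = athlete.code;
```

As with USING INDEX, the hint is a request: the optimizer still verifies the method is applicable to the query.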
272. 272. 4 JDBC and Other Management<br />JDBC Programming<br />Transaction Management<br />
  273. 273. 4.1 JDBC Programming<br />
  274. 274. The SQL Type and the Java Type<br />
275. 275. JDBC Main Interfaces<br /><ul><li> Supports the JDBC 2.0 standard specifications.</li></li></ul><li>How to use JDBC<br /><ul><li>Connect to DB by using JDBC</li></ul>1. Loading Driver<br /><ul><li>Class.forName("cubrid.jdbc.driver.CUBRIDDriver")
276. 276. Can connect to DB when a driver is loaded</li></ul>2. Making the Connection<br /><ul><li>Connection con = DriverManager.getConnection(url, "user", "passwd");
277. 277. URL style example: jdbc:CUBRID:localhost:33000:demodb::: </li></ul>3. Creating a statement object <br /><ul><li>Statement stmt = con.createStatement();</li></ul>4. Executing SQL statement<br /><ul><li>stmt.executeUpdate("…");
278. 278. ResultSet rs = stmt.executeQuery("…");</li></li></ul><li>Make a connection<br />Build SQL statement<br />Send SQL statement<br />Close SQL statement<br />Close a connection<br />Example of JDBC usage<br />import java.sql.*;<br />class SimpleExample {<br /> public static void main(String args[]) {<br /> String url = "jdbc:CUBRID:localhost:33000:demodb:::";<br /> try {<br />Class.forName("cubrid.jdbc.driver.CUBRIDDriver");<br /> } catch (ClassNotFoundException e) {<br />System.out.println(e.getMessage());<br /> }<br /> try {<br />Connection myConnection =<br />DriverManager.getConnection(url, "user", "passwd");<br />Statement myStatement = myConnection.createStatement();<br />ResultSet rs =<br />myStatement.executeQuery("select sysdate from db_root");<br />rs.close();<br />myStatement.close();<br />myConnection.close();<br /> } catch (java.lang.Exception ex) {<br />ex.printStackTrace();<br /> }<br /> }<br />} <br />
  279. 279. ResultSetMetaData<br />
280. 280. Send SQL statement<br />Fetch row<br />Get columns<br />More columns?<br />More rows?<br />ResultSet<br />Connection myConnection = <br />DriverManager.getConnection(url, "user", "passwd");<br />Statement myStatement = myConnection.createStatement();<br />ResultSet rs = myStatement.executeQuery("SELECT name,<br /> title, salary FROM employee");<br />int i = 0;<br />while (rs.next()) {<br /> i++;<br />String empName = rs.getString("name");<br /> String empTitle = rs.getString("title");<br /> long empSalary = rs.getLong("salary");<br />System.out.println("Employee " + empName + " is " <br /> + empTitle + " and earns $" + empSalary);<br />}<br />
281. 281. Cautions on CUBRID JDBC usage<br /><ul><li>Returning resources
282. 282. Make sure to return a DB object such as a ResultSet, Statement, or Connection after it is used.
283. 283. Return occurs when the close() method is called for the corresponding object.
284. 284. If AutoCommit False is used, return occurs after the transaction for a connection (Commit/Rollback) is explicitly finished.
285. 285. If you execute inner query statements, you must allocate a different connection object to each of them.
286. 286. When another transaction occurs in a loop that uses retrieved data
287. 287. When a transaction (Commit/Rollback) occurs for a connection object that is being used, the ResultSet being used is closed.</li></li></ul><li>4.3 Transaction Management<br />
288. 288. Introduction to CUBRID locking protocol<br /><ul><li>locking
289. 289. Lock is managed for each transaction, for tables and records
290. 290. For a record, S-lock is acquired for reading, and X-lock is acquired for writing.
291. 291. To get S-lock for a record, you must get IS-lock for the corresponding table.
292. 292. To get X-lock for a record, you must get IX-lock for the corresponding table.
  293. 293. Features
  294. 294. Configuring SIX-lock for a table
295. 295. When a transaction that holds S-lock for a table requests IX-lock
296. 296. Valid duration of a lock
297. 297. X-lock: held until the transaction finishes (i.e., commit or rollback)
298. 298. S-lock: REPEATABLE READ (held until the transaction finishes), READ COMMITTED (held until reading finishes), READ UNCOMMITTED (does not request a lock)</li></li></ul><li>Features of CUBRID locking protocol<br /><ul><li> Configuring S-lock for a table
  299. 299. When reading the schema of a corresponding table
  300. 300. When reading the higher-tier or lower-tier table of a corresponding table
301. 301. When the number of records a transaction reads is greater than the lock_escalation value
  302. 302. Configuring X-lock for a table
  303. 303. When modifying a corresponding table
304. 304. When the number of records a transaction writes is greater than the lock_escalation value</li></li></ul><li>Checking locking information <br /><ul><li>You can check the current locking status of the DB.
305. 305. Creates an object for each lock object (unit: table, record)
  306. 306. Displays information for each object
  307. 307. Provided information
  308. 308. Lock related configuration of a DB server
  309. 309. Information of DB clients connected to a DB server
310. 310. Lock table information of an object</li></li></ul><li>Checking locking information – lockdb utility<br /><ul><li>Command: lockdb
311. 311. Shows a current snapshot of the locking status of the DB. </li></ul>cubrid lockdb [OPTION] database-name<br />Options: -o <br /><ul><li>Saves output to a file</li></ul>cubrid lockdb demodb<br />Lock-related configuration of a DB server<br />Lock Escalation at = 100000, Run Deadlock interval = 1<br /> Number of locks that can be converted from a row lock to a table lock<br />
312. 312. Checking locking information – lockdb utility<br /><ul><li>Lock information of an object</li></ul>OID = 0| 1780| 7<br />Object type: Instance of class ( 0| 288| 6) = table_a.<br />Total mode of holders = X_LOCK, Total mode of waiters = X_LOCK.<br />Num holders= 1, Num blocked-holders= 0, Num waiters= 1<br />LOCK HOLDERS:<br />Tran_index = 2, Granted_mode = X_LOCK, Count = 2<br />LOCK WAITERS:<br />Tran_index = 1, Blocked_mode = X_LOCK<br />Start_waiting_at = Wed Sep 23 12:06:06 2009 <br />Wait_for_nsecs = -1<br />lock target object information<br />No. 2 transaction has X_LOCK for this object.<br />No. 1 transaction is waiting to acquire X_LOCK for this object.<br />
313. 313. Checking locking information – lockdb utility<br /><ul><li>Transaction information</li></ul>Transaction (index 1, cub_cas, dba@mycom|2908)<br />Isolation REPEATABLE CLASSES AND READ UNCOMMITTED INSTANCES<br />State TRAN_ACTIVE<br />Timeout_period -1<br />Transaction (index 2, cub_cas, dba@mycom|2980)<br />Isolation REPEATABLE CLASSES AND READ UNCOMMITTED INSTANCES<br />State TRAN_ACTIVE<br />Timeout_period -1<br />No. 1 transaction, cub_cas process,<br />logging into dba, process ID: 2908<br />Lock level: Guaranteeing table read, Dirty read is allowed for the record<br />No. 2 transaction, cub_cas process,<br />logging into dba, process ID: 2980<br />Waiting time to acquire lock, -1: no timeout<br />
314. 314. Checking locking information – CUBRID Manager<br /><ul><li>CUBRID Manager
315. 315. Only visible to the dba user</li></li></ul><li>Checking locking information – CUBRID Manager<br /><ul><li>Transaction info</li></li></ul><li>Checking locking information – CUBRID Manager<br /><ul><li>Checking an application that has a transaction
  316. 316. For CAS, check its information in the CUBRID broker.
317. 317. Check the order of IDs in a broker by using a process ID.
318. 318. As the process IDs in the above example are 2908 and 2980, they correspond to ID1 and ID2 of the query_editor broker.
319. 319. As 2980 is occupying X_LOCK, the corresponding transaction (ID2) must be forced to stop, if necessary.
320. 320. For an application, a logic change, etc. may be necessary.
  321. 321. For a query editor or CSQL, stop the transaction (commit/rollback). </li></li></ul><li>Transaction Management<br /><ul><li>Stopping a broker transaction
322. 322. Forcibly stop the corresponding transaction (rollback) by using the killtran command</li></ul>% usage: cubrid killtran [OPTION] database-name<br />valid options:<br /> -i, --kill-transaction-index=INDEX kill transaction with transaction INDEX<br /> --kill-user-name=ID kill all transactions with user ID<br /> --kill-host-name=HOST kill all transactions with client HOST<br /> --kill-program-name=NAME kill all transactions with client program NAME<br /> -p, --dba-password=PASS password of the DBA user; will prompt if not specified<br /> -d, --display-information display information about active transactions<br /> -f, --force kill the transaction without a prompt for verification<br />
  323. 323. 5 Practice<br />
  324. 324. CUBRID Installation<br />Installing CUBRID (for Windows)<br />Downloading and installing CUBRID.<br />Creating demodb<br />Checking if the CUBRID service tray <br />has started<br />Checking if the CUBRID service <br />has started<br />service, process<br />
325. 325. CUBRID Installation<br />CUBRID manager client<br />Checking if DB is created<br />Starting DB server<br />Checking if there is a Java-related error message during start<br />Using the Query Editor<br />Executing a simple query: select * from db_class<br />
  326. 326. CUBRID Installation<br />Stopping DB Server<br />Stopping CUBRID service<br />Checking process<br />Starting CUBRID service<br />
327. 327. DB creation<br />Creating a DB that satisfies the following conditions<br />Creation location and size of each volume<br />Page size: 4KB<br />First volume: 5,000p, C:\CUBRID\databases\<DB name><br />Log volume: 100,000p, C:\CUBRID\databases\<DB name>\log<br />Data volume: 500,000p, C:\CUBRID\databases\<DB name><br />Index volume: 250,000p, C:\CUBRID\databases\<DB name><br />Temp volume: 250,000p, C:\CUBRID\databases\<DB name><br />
  328. 328. DB creation<br />Checking the created volume<br />Checking the content of databases.txt <br />Checking the files in each directory by referring to the volume information file <br />control volumes<br />information volumes<br />log volumes<br />Computer name<br />
329. 329. Schema management<br />Creating a table that satisfies the following conditions<br />Company table (company)<br />Company ID (integer): primary key, company name (string)<br />Customer table (client)<br />Customer ID (integer): not duplicated<br />Customer name, title, email, telephone no., address: Character string<br />create table company (<br />comp_id int primary key, // company ID<br />comp_name varchar(200) // company name<br />);<br />create table client (<br />client_id int primary key, // customer ID<br />comp_id int, // company ID<br />client_name varchar(20), // customer name<br /> title varchar(10), // title<br /> email varchar(100), // email<br /> phone varchar(20), // phone no.<br /> address varchar(200) // address<br />);<br />
  330. 330. Schema management<br />Viewing table information in a CUBRID Manager client <br />
331. 331. Schema management<br />Modifying a table according to the following conditions<br />Re-creating after deleting a primary key <br />Changing type <br />Title: char → varchar or varchar → char<br />Adding/changing an initial value<br />Title: Specifying an initial value of 'new staff' and deleting it<br />alter class client drop constraint pk_client_client_id<br />alter class client add primary key(client_id)<br />// or (possible to assign a PK name): alter class client add constraint pk_id primary key (client_id) <br />alter class client rename attribute title as old_title<br />alter class client add attribute title char(20)<br />update client set title = cast(old_title as char(20))<br />alter class client drop attribute old_title<br />alter class client change title default 'new staff'<br />alter class client change title default NULL<br />
332. 332. Schema management<br />Index <br />Client<br />A customer name is unique. Add an index whose name is u_name.<br />Title is in reverse order. Add an index whose name is idx1 to sort customer names in the forward direction.<br />Searching table information by using a catalog<br />Checking the information of a created table<br />Table name, column information, index information<br />create unique index u_name on client(client_name)<br />create index idx1 on client(title desc, client_name)<br />select * from db_class<br />select * from db_attribute where class_name = 'client'<br />select * from db_index where class_name = 'client'<br />
333. 333. Data search and manipulation<br />Inserting data<br />Insert (10,'company10'), (20,'company20') into the company table.<br />Insert an arbitrary id, name, and the company ID whose comp_id is 20 into the client table in the insert-select format.<br />Check the inserted data by selecting rows from the client table.<br />insert into company values (10, 'company10');<br />insert into company values (10, 'company10'),(20, 'company20'); <br />insert into company (comp_id, comp_name) values (20, 'company20');<br />insert into client (comp_id, client_id, client_name) select comp_id, 20, 'new staff20' from company where comp_id = 20<br />
334. 334. Data search and manipulation<br />Modifying data <br />Insert an arbitrary id and name into the client table.<br />Check the inserted data by searching the client table.<br />Change the comp_id to 10 for the data inserted in the client table.<br />Check the inserted data by searching the client table.<br />insert into client (client_id, client_name) values (30, 'new staff30')<br />update client set comp_id = 10 where client_id = 30<br />
335. 335. Data search and manipulation<br />Data search<br />Retrieve the countries that achieved medals in the 1988 Olympics from the participants and their medal information<br />Table where participants are listed: participant<br />Medal information table: game<br />- Retrieve medal information of the participants in the 1988 Olympics<br />select (select name from nation where code = a.nation_code), medal<br />from participant a, game b<br />where a.host_year = 1988 and a.nation_code = b.nation_code and a.host_year = b.host_year<br />select (select name from nation where code = a.nation_code), medal<br />from participant a left outer join game b on a.nation_code = b.nation_code and a.host_year = b.host_year<br />where a.host_year = 1988<br />
336. 336. Data search and manipulation<br />Using index<br />Sorting the cities that have hosted the Olympics in chronological order<br />Table in which the names of cities that have hosted the Olympics are listed: olympic<br />Sorting the cities that have hosted the Olympics, so that the most recent ones appear at the front<br />select host_year,host_nation,host_city from olympic where host_year > '' using index pk_olympic_host_year(+)<br />create index r_year on olympic(host_year desc)<br />select host_year, host_nation, host_city from olympic where host_year > '' using index r_year(+) order by host_year desc<br />
337. 337. Operators and functions<br />Arithmetic/Join/Type conversion operators<br />Checking how many months and days are left until Christmas<br />Displaying how many hours, minutes, and seconds are left until a training session is finished<br /><ul><li>Finding out what year this is through more than two methods. </li></ul>Checking the date of the last day of this month<br />select months_between(to_date('12/25/2008'), sysdate), '12/25/2008' - sysdate from db_root<br />select to_char(t1/3600) + 'hour' + to_char(abs(mod(t1,3600)/60)) + 'minute' + to_char(abs(mod(t1,60))) + 'second'<br />from (select '17:00' - systime from db_root) as t(t1)<br />select to_char(sysdate, 'yyyy') from db_root<br />select extract(year from sysdate) from db_root<br />select extract(day from last_day(sysdate)) from db_root<br />
338. 338. Operators and functions<br />Function<br />Finding an arbitrary number between 1 and 100 <br />Rounding 3.141592653 to the nearest millionth <br />Finding out the number of bus stops where you can catch the No. 10 bus<br />Length of the following string ('substring xyzxxy'), position of 'str', extracting 6 characters from the 4th character, removing 'xy' from the string, replacing 's' with 'S'<br />select mod(rand(), 100) + 1 from db_root<br />select round(3.141592653, 6), trunc(3.141592653, 6) from db_root<br />select count(station_id) from bus where bus_num = '10'<br />select length('substring xyzxxy'), instr('substring xyzxxy', 'str'), substr('substring xyzxxy', 4, 6), rtrim('substring xyzxxy', 'xy'), replace('substring xyzxxy', 's', 'S') from db_root<br />
339. 339. Operators and functions<br />For the Olympic medals, use 'G' for a 'gold medal,' 'S' for a 'silver medal,' and 'B' for a 'bronze medal.'<br />Olympic medal table: game<br />Use '1900s' for the Olympics held in the 1900s, '2000s' for the 2000s, and 'Other' for other years, and calculate the number of Olympics held.<br />Table showing Olympics years: olympic<br />select decode(medal, 'G', 'gold medal', 'S', 'silver medal', 'B', 'bronze medal') from game<br />select case when host_year between 1900 and 1999 then '1900s'<br /> when host_year between 2000 and 2999 then '2000s'<br /> else 'other years' end as years, count(*)<br />from olympic<br />group by case when host_year between 1900 and 1999 then '1900s'<br /> when host_year between 2000 and 2999 then '2000s'<br /> else 'other years' end<br />
340. 340. Operators and functions<br />rownum<br /><ul><li>Selecting hosting information of the 11th to 20th Olympics </li></ul>Olympics hosting information table: olympic<br /><ul><li>Selecting the 11th to 20th by sorting Olympics hosting information by year in chronological order
  341. 341. Modifying the above query using index hint
342. 342. Grouping by host_nation column</ul>select * from olympic where rownum between 11 and 20<br />select * from olympic order by host_year for orderby_num() between 11 and 20; <br />select * from olympic order by host_year limit 10, 10; <br />select * from olympic where host_year > 0 and rownum between 11 and 20 using index pk_olympic_host_year(+)<br />select host_nation from olympic where rownum between 11 and 20 group by host_nation<br />
  343. 343. Operators and functions<br />serial<br /><ul><li>Create an arbitrary serial object, get the subsequent value, and check the current value.</li></ul>create serial seq_no<br />select seq_no.next_value from db_root<br />select seq_no.current_value from db_root<br />
  344. 344. Operators and functions<br />Auto increment<br /><ul><li>Create a table having an auto increment column
345. 345. Insert data into the auto increment column
346. 346. Insert no data into the auto increment column
  347. 347. Select rows and check the auto increment column values
348. 348. Delete rows and re-insert data</li></ul>create table bbs (<br /> id int auto_increment,<br /> title string,<br />cnt int default 0<br />)<br />insert into bbs(id, title) values(5, 'arbitrary inserting for auto increment')<br />insert into bbs(title) values('auto inserting for auto increment')<br />select * from bbs<br />delete from bbs<br />insert into bbs(title) values('auto inserting for auto increment')<br />
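For the last step above, note what to expect (a sketch; the exact value depends on the prior inserts): an AUTO_INCREMENT counter is not reset by DELETE, so a re-inserted row continues from the previously issued values rather than starting over at 1.

```sql
-- After 'delete from bbs', the counter keeps its position:
insert into bbs(title) values('auto inserting for auto increment');
select id from bbs;  -- id continues past the values issued before the delete
```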