With some downtime on my hands, I wanted to explore the internal structure of the OB Docker container that serves as the OCP platform's metadata database.
1. Internal Tenants of the OCP Platform
After logging in to the OCP web console, you can see that the metadata OB cluster (ocp_cluster) contains three tenants: ocp_monitor, ocp_meta, and sys. OCP accesses the metadata cluster (the OB container META_OB_ZONE_1; this name is defined in the obcluster.conf file) through these three tenants.
Let's try logging in as the administrator user of each of the three tenants:
[root@ocp ~]# obclient -h172.31.0.69 -P2883 -uocp_cluster.sys.root -Doceanbase -A --prompt "\u [\d]> "
ERROR 1045 (42000): Access denied for user 'root'@'xxx.xxx.xxx.xxx' (using password: NO)
[root@ocp ~]# obclient -h172.31.0.69 -P2883 -uocp_cluster.ocp_monitor.root -Doceanbase -A --prompt "\u [\d]> "
ERROR 1045 (42000): Access denied for user 'root'@'xxx.xxx.xxx.xxx' (using password: NO)
[root@ocp ~]# obclient -h172.31.0.69 -P2883 -uocp_cluster.ocp_meta.root -Doceanbase -A --prompt "\u [\d]> "
ERROR 1045 (42000): Access denied for user 'root'@'xxx.xxx.xxx.xxx' (using password: NO)
As expected, login fails without a password.
Next, let's see whether we can recover the administrator passwords for these tenants.
As it turns out, the t-oceanbase-antman package is already installed on the OCP node host, and the /root/t-oceanbase-antman/common/ directory contains two shell scripts.
[root@ocp ~]# cd /root/t-oceanbase-antman/common/
[root@ocp common]# ls -l
total 16
-rwxr-xr-x 1 root root 914 Jan 18 2021 ssh.sh
-rwxr-xr-x 1 root root 11613 Dec 15 10:29 utils.sh
Reading through these scripts, we find that utils.sh defines the password-related functions for each tenant's administrator user, as shown below:
[root@ocp common]# cat utils.sh |grep ^function |grep pass
function is_password_legal() {
function gen_pass() {
function encode_pass() {
function decode_pass() {
function get_sys_pass() {
function get_meta_pass() {
function get_monitor_pass() {
function get_sys_monitor_pass() {
function get_omsmeta_pass() {
function get_odcmeta_pass() {
We can run these functions to try to obtain the administrator passwords of the tenants in the OCP metadata cluster (OB container):
[root@ocp common]# source ./utils.sh
[root@ocp common]# get_sys_pass  # password of the root administrator of the sys tenant
b2+sfnSxPW
[root@ocp common]# get_meta_pass  # password of the root administrator of the ocp_meta tenant
4xac^5O0S{
[root@ocp common]# get_monitor_pass  # password of the root administrator of the ocp_monitor tenant
YCyM^M3V^q
[root@ocp common]# get_sys_monitor_pass  # password of the ocp_monitor user under the sys tenant
+Pezt9i8OA
Next, let's use these passwords to log in to the metadata cluster (OB container), taking the sys tenant as an example:
[root@ocp common]# obclient -h172.31.0.69 -P2881 -uroot@sys -p'b2+sfnSxPW' -Doceanbase -A --prompt "\u [\d]> "
Welcome to the OceanBase. Commands end with ; or \g.
Your MySQL connection id is 3221567662
Server version: 5.7.25 OceanBase 2.2.73 (r20201211175814-5abff097aadf740bcae07de8f84321815907e062) (Built Dec 11 2020 18:26:06)
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
root [oceanbase]>
The login succeeds. Let's check whether the tenant table matches expectations:
root [oceanbase]> select tenant_id,tenant_name,replica_num,zone_list,primary_zone,status from __all_tenant;
+-----------+-------------+-------------+----------------+----------------+----------------------+
| tenant_id | tenant_name | replica_num | zone_list | primary_zone | status |
+-----------+-------------+-------------+----------------+----------------+----------------------+
| 1 | sys | -1 | META_OB_ZONE_1 | META_OB_ZONE_1 | TENANT_STATUS_NORMAL |
| 1001 | ocp_meta | -1 | META_OB_ZONE_1 | META_OB_ZONE_1 | TENANT_STATUS_NORMAL |
| 1002 | ocp_monitor | -1 | META_OB_ZONE_1 | META_OB_ZONE_1 | TENANT_STATUS_NORMAL |
+-----------+-------------+-------------+----------------+----------------+----------------------+
3 rows in set (0.005 sec)
As shown, the tenants in the tenant table match those listed on the OCP platform.
Next, let's look at the databases and the tables in them:
root [oceanbase]> show databases;
+--------------------+
| Database |
+--------------------+
| oceanbase |
| information_schema |
| mysql |
| SYS |
| LBACSYS |
| ORAAUDITOR |
| test |
+--------------------+
7 rows in set (0.003 sec)
root [oceanbase]> show tables;
+------------------------------------------------+
| Tables_in_oceanbase |
+------------------------------------------------+
| CDB_CKPT_HISTORY |
| CDB_OB_BACKUP_ARCHIVELOG_PROGRESS |
| CDB_OB_BACKUP_ARCHIVELOG_SUMMARY |
| CDB_OB_BACKUP_BACKUPSET_JOB |
| CDB_OB_BACKUP_BACKUPSET_JOB_HISTORY |
| CDB_OB_BACKUP_BACKUPSET_TASK |
| CDB_OB_BACKUP_BACKUPSET_TASK_HISTORY |
| CDB_OB_BACKUP_BACKUP_ARCHIVELOG_SUMMARY |
| CDB_OB_BACKUP_CLEAN_HISTORY |
......
On closer inspection, the metadata does not appear to be recorded directly in the sys tenant.
Let's log in to the OB database container as the ocp_meta tenant instead:
[root@ocp common]# obclient -h172.31.0.69 -P2883 -uocp_cluster.ocp_meta.root -p'4xac^5O0S{' -Doceanbase -A --prompt "\u [\d]> "
Welcome to the OceanBase. Commands end with ; or \g.
Your MySQL connection id is 3436008
Server version: 5.6.25 OceanBase 2.2.73 (r20201211175814-5abff097aadf740bcae07de8f84321815907e062) (Built Dec 11 2020 18:26:06)
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
root [oceanbase]> show databases;
+--------------------+
| Database |
+--------------------+
| oceanbase |
| information_schema |
| mysql |
| test |
| ocp |
| backup1472 |
| backup147x |
| backup2230 |
| backup21 |
+--------------------+
9 rows in set (0.003 sec)
There is a database named ocp; let's look at the tables inside it:
root [oceanbase]> use ocp
Database changed
root [ocp]> show tables;
+--------------------------------------------+
| Tables_in_ocp |
+--------------------------------------------+
| __data_maintain_history |
| apsara_test_case_status |
| audit_event_history |
| backup_restore_service |
| backup_sampling_inspect |
| backup_storage |
| backup_strategy |
| common_lock |
| common_sequence |
| compute_host |
| compute_host_agent |
| compute_host_agent_process |
| compute_host_agent_process_meta |
| compute_host_process |
| compute_host_process_meta |
| compute_host_service |
| compute_host_type |
| compute_idc |
| compute_region |
| compute_software_package_meta |
| config_properties |
| dag_instance |
| dag_schedule |
| dbpaas_metrics_list |
| distributed_lock |
| distributed_server |
| distributed_service |
| file_meta |
| health_check_result |
| iam_rate_limit_policy |
| iam_role |
| iam_user |
| iam_user_login_record |
| iam_user_password_change |
| iam_user_role |
| lockservice |
| meta_id_map |
| metric_class |
| metric_group |
| metric_meta |
| migration_clients |
| migration_clusters |
| migration_commands |
| migration_config |
| migration_datasources |
| migration_dependency |
| migration_generators |
| migration_jobs |
| migration_locks |
| migration_logictables |
| migration_projects |
| migration_schedule |
| migration_tables |
| monitor_metric_ob_collect_config |
| ob_agent_cluster_collection_config_history |
| ob_agent_collection_config_history |
| ob_cluster |
| ob_cluster_compaction |
| ob_cluster_group |
| ob_cluster_idc_info |
| ob_cluster_idx_id |
| ob_cluster_obproxy |
| ob_cluster_parameter_change |
| ob_cluster_parameter_info |
| ob_cluster_system_event_template |
| ob_exporter_config |
| ob_server |
| ob_server_compaction |
| ob_tenant |
| ob_tenant_compaction_stats |
| ob_tenant_parameter_change |
| ob_tenant_parameter_info |
| ob_unit_spec |
| ob_zone |
| ob_zone_compaction |
| obpaas_account |
| obpaas_database |
| obpaas_instance |
| obpaas_privilege |
| obpaas_resource_config |
| obproxy_cluster |
| obproxy_parameter |
| obproxy_parameter_change |
| obproxy_parameter_info |
| obproxy_server |
| ocp2_alarm_channel |
| ocp2_alarm_event |
| ocp2_alarm_event_history |
| ocp2_alarm_filter |
| ocp2_alarm_filter_history |
| ocp2_alarm_group |
| ocp2_alarm_metric_meta |
| ocp2_alarm_notification |
| ocp2_alarm_notification_history |
| ocp2_alarm_rule |
| ocp2_alarm_subscription |
| ocp2_sys_property |
| ocp2_template_variable_meta |
| ocp2_user_config |
| ocp_action_stat |
| ocp_alarm_filter |
| ocp_alarm_filter_delete |
| ocp_alarm_filter_history |
| ocp_alarm_meta |
| ocp_alarm_msg_queue |
| ocp_alarm_msg_queue_history |
| ocp_alarm_send_log |
| ocp_alarm_send_statistics |
| ocp_alarm_subscribe |
| ocp_alert_msg |
| ocp_aliyun_whitelist |
| ocp_all_obproxy |
| ocp_ant_cloud_apply |
| ocp_api_task |
| ocp_autotable_config |
| ocp_backup_config_info |
| ocp_backup_config_meta |
| ocp_backup_restore_config |
| ocp_backup_restore_machine |
| ocp_backup_restore_nfs |
| ocp_backup_restore_rpm |
| ocp_backup_tenant |
| ocp_clog_backup_status |
| ocp_clog_history |
| ocp_config |
| ocp_config_server |
| ocp_config_template |
| ocp_css_order |
| ocp_css_purchase |
| ocp_datatool_data |
| ocp_datatool_task |
| ocp_db_user_info |
| ocp_disk_water_mark_summary |
| ocp_docker_machine_info |
| ocp_drc_store |
| ocp_error_log_info |
| ocp_exporter_address |
| ocp_filesystem_config |
| ocp_filesystem_store |
| ocp_full_backup_history |
| ocp_global_task |
| ocp_handle_log |
| ocp_host |
| ocp_id_generator |
| ocp_inner_subtask |
| ocp_inner_task |
| ocp_inspect_result |
| ocp_inspect_task |
| ocp_instance |
| ocp_instance_class |
| ocp_instance_mapping |
| ocp_instance_subscribe |
| ocp_io_resource_conf |
| ocp_jar_repo |
| ocp_kpi_monitor |
| ocp_logic_region |
| ocp_metric_explain |
| ocp_metric_expr_config |
| ocp_metric_table_config |
| ocp_migrating_data_size_summary |
| ocp_migrating_partition_summary |
| ocp_monitor_dim_config |
| ocp_monitor_kpi_config |
| ocp_monitor_kpi_map |
| ocp_monitor_masters |
| ocp_monitor_metric_config |
| ocp_monitor_region_groups |
| ocp_monitor_special_task_config |
| ocp_monitor_table_partition_config |
| ocp_monitor_threshold |
| ocp_monitor_workers |
| ocp_move_log |
| ocp_ob_error_code |
| ocp_ob_monitor_class |
| ocp_ob_monitor_data_table_info |
| ocp_ob_owener |
| ocp_obagent_calc_global_config |
| ocp_obagent_collect_config |
| ocp_obagent_collect_config_cpp |
| ocp_obagent_kpi_config |
| ocp_obagent_report_config |
| ocp_obproxy |
| ocp_obproxy_host |
| ocp_obstat_config |
| ocp_op_log |
| ocp_ops_alarm_msg_queue |
| ocp_ops_alarm_subscribe |
| ocp_ops_host |
| ocp_ops_host_component_map |
| ocp_ops_metrics_data |
| ocp_ops_metrics_meta |
| ocp_ops_metrics_process_map |
| ocp_ops_process_topo |
| ocp_ops_repair_queue |
| ocp_ops_repair_rule |
| ocp_ops_tag |
| ocp_people_list |
| ocp_privilege_url |
| ocp_realtime_config |
| ocp_realtime_monitor |
| ocp_realtime_region |
| ocp_region_idc_info |
| ocp_region_obrg |
| ocp_resource |
| ocp_resource_group |
| ocp_resource_history |
| ocp_role_info |
| ocp_role_privilege |
| ocp_rpc_method |
| ocp_rpm_svn |
| ocp_rule_meta |
| ocp_send_type_info |
| ocp_site_idc_info |
| ocp_slb_region |
| ocp_slb_site_map |
| ocp_sm_config |
| ocp_sql_audit_config |
| ocp_table_compress_info |
| ocp_task_blacklist |
| ocp_task_ctrl |
| ocp_taskagent_info |
| ocp_token_idb_access |
| ocp_ts_data_latest |
| ocp_ubsms_metaq_info |
| ocp_user_info |
| ocp_user_instance |
| ocp_user_instance_alarm |
| ocp_user_instance_domain |
| ocp_user_instance_group |
| ocp_user_instance_vip |
| ocp_user_python_task |
| ocp_user_resource |
| ocp_user_role |
| ocp_user_view |
| ocp_worker_info |
| ocp_zone_conf |
| profile_credential |
| profile_credential_access |
| profile_credential_v24 |
| spring_jdbc_session |
| spring_jdbc_session_attributes |
| spring_session |
| spring_session_attributes |
| storage_object_block |
| storage_object_bucket |
| storage_object_meta |
| task_dependencies |
| task_instance |
| task_log |
+--------------------------------------------+
249 rows in set (0.005 sec)
It feels like we are one step closer to the answer.
Let's pick a table and look at its data:
root [ocp]> select table_name from oceanbase.__all_table_v2 where table_name like 'compute%';
+---------------------------------+
| table_name |
+---------------------------------+
| compute_region |
| compute_idc |
| compute_host_type |
| compute_host |
| compute_host_service |
| compute_host_agent |
| compute_software_package_meta |
| compute_host_process_meta |
| compute_host_process |
| compute_host_agent_process_meta |
| compute_host_agent_process |
+---------------------------------+
11 rows in set (0.001 sec)
root [ocp]> select * from compute_idc;
+----+------+-------------+---------------------+---------------------+-----------+
| id | name | description | create_time | update_time | region_id |
+----+------+-------------+---------------------+---------------------+-----------+
| 1 | DC01 | NULL | 2021-12-15 10:27:02 | 2021-12-15 10:27:02 | 1 |
| 2 | AZ01 | NULL | 2021-12-15 10:42:27 | 2021-12-15 10:42:27 | 2 |
| 3 | AZ02 | NULL | 2021-12-15 10:43:46 | 2021-12-15 10:43:46 | 2 |
| 4 | AZ03 | NULL | 2021-12-15 10:44:40 | 2021-12-15 10:44:40 | 2 |
+----+------+-------------+---------------------+---------------------+-----------+
4 rows in set (0.002 sec)
root [ocp]> select * from compute_host;
+----+---------------+-------------+------------------------+--------------+------------------+----------+----------------------------+---------------+--------+--------+--------------+---------------+---------------+---------------------+---------------------+
| id | name | description | operating_system | architecture | inner_ip_address | ssh_port | kind | publish_ports | status | idc_id | host_type_id | serial_number | alias | create_time | update_time |
+----+---------------+-------------+------------------------+--------------+------------------+----------+----------------------------+---------------+--------+--------+--------------+---------------+---------------+---------------------+---------------------+
| 1 | ocp.test.cn | NULL | 3.10.0-1127.el7.x86_64 | x86_64 | 172.31.0.69 | 2022 | DEDICATED_PHYSICAL_MACHINE | NULL | ONLINE | 1 | 1 | NULL | NULL | 2021-12-15 10:27:04 | 2021-12-15 10:48:36 |
| 2 | obs48.test.cn | NULL | 3.10.0-1127.el7.x86_64 | x86_64 | 172.31.0.48 | 22 | DEDICATED_PHYSICAL_MACHINE | NULL | ONLINE | 2 | 2 | NULL | obs48.test.cn | 2021-12-15 10:43:05 | 2021-12-15 10:54:19 |
| 3 | obs71.test.cn | NULL | 3.10.0-1127.el7.x86_64 | x86_64 | 172.31.0.71 | 22 | DEDICATED_PHYSICAL_MACHINE | NULL | ONLINE | 2 | 2 | NULL | obs71.test.cn | 2021-12-15 10:43:26 | 2021-12-15 10:54:19 |
| 4 | obs79.test.cn | NULL | 3.10.0-1127.el7.x86_64 | x86_64 | 172.31.0.79 | 22 | DEDICATED_PHYSICAL_MACHINE | NULL | ONLINE | 3 | 2 | NULL | obs79.test.cn | 2021-12-15 10:43:56 | 2021-12-15 10:54:19 |
| 5 | obs85.test.cn | NULL | 3.10.0-1127.el7.x86_64 | x86_64 | 172.31.0.85 | 22 | DEDICATED_PHYSICAL_MACHINE | NULL | ONLINE | 3 | 2 | NULL | obs85.test.cn | 2021-12-15 10:44:19 | 2021-12-15 10:54:19 |
| 6 | obs91.test.cn | NULL | 3.10.0-1127.el7.x86_64 | x86_64 | 172.31.0.91 | 22 | DEDICATED_PHYSICAL_MACHINE | NULL | ONLINE | 4 | 2 | NULL | obs91.test.cn | 2021-12-15 10:44:54 | 2021-12-15 10:54:19 |
| 7 | obs92.test.cn | NULL | 3.10.0-1127.el7.x86_64 | x86_64 | 172.31.0.92 | 22 | DEDICATED_PHYSICAL_MACHINE | NULL | ONLINE | 4 | 2 | NULL | obs92.test.cn | 2021-12-15 10:45:14 | 2021-12-15 10:54:19 |
+----+---------------+-------------+------------------------+--------------+------------------+----------+----------------------------+---------------+--------+--------+--------------+---------------+---------------+---------------------+---------------------+
7 rows in set (0.002 sec)
Here we can see the hosts I added through OCP. So this is indeed the tenant we were looking for; as for what each table and column means, that remains guesswork. This was just idle browsing, and directly modifying the metadata tables is definitely out of the question.
Dear readers, please treat what follows as casual exploration and don't take it too seriously!
2. A Look at the OCP Tenant Administrator Passwords
During OCP platform deployment, the OAT deployment script calls /root/t-oceanbase-antman/common/utils.sh to generate passwords for the tenants and users in OB. The relevant functions are as follows.
The function that generates a random password for a user:
function gen_pass() {
    # generate a random 10-character password from a predefined character set
    all=(a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z 0 1 2 3 4 5 6 7 8 9 { } @ % ^ +)
    small_char=(a b c d e f g h i j k l m n o p q r s t u v w x y z)
    big_char=(A B C D E F G H I J K L M N O P Q R S T U V W X Y Z)
    num=(0 1 2 3 4 5 6 7 8 9)
    special_char=({ } @ % ^ +)
    passwd=()
    passwd+=${small_char[RANDOM%${#small_char[@]}]}      # start with one random lowercase letter
    passwd+=${big_char[RANDOM%${#big_char[@]}]}          # append one random uppercase letter
    passwd+=${num[RANDOM%${#num[@]}]}                    # append one random digit
    passwd+=${special_char[RANDOM%${#special_char[@]}]}  # append one random special character
    for i in {1..6}; do
        passwd+=${all[RANDOM%${#all[@]}]}                # append 6 more characters drawn from the full set (all)
    done
    echo ${passwd[@]} | fold -w1 | shuf | tr -d '\n'     # shuffle all characters once; the result is the final password
}
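As a quick sanity check, the gen_pass scheme can be reproduced in a standalone script and its password policy verified. The following is my own reconstruction of that logic (a sketch, not the original utils.sh; `gen_pass_demo` is an illustrative name):

```shell
#!/usr/bin/env bash
# Standalone reconstruction of the gen_pass scheme: one lowercase letter, one
# uppercase letter, one digit, one special character, then six more characters
# from the full set, shuffled into a 10-character password.
gen_pass_demo() {
    local all=(a b c d e f g h i j k l m n o p q r s t u v w x y z \
               A B C D E F G H I J K L M N O P Q R S T U V W X Y Z \
               0 1 2 3 4 5 6 7 8 9 { } @ % ^ +)
    local small_char=(a b c d e f g h i j k l m n o p q r s t u v w x y z)
    local big_char=(A B C D E F G H I J K L M N O P Q R S T U V W X Y Z)
    local num=(0 1 2 3 4 5 6 7 8 9)
    local special_char=({ } @ % ^ +)
    local passwd=""
    passwd+=${small_char[RANDOM%${#small_char[@]}]}      # mandatory lowercase
    passwd+=${big_char[RANDOM%${#big_char[@]}]}          # mandatory uppercase
    passwd+=${num[RANDOM%${#num[@]}]}                    # mandatory digit
    passwd+=${special_char[RANDOM%${#special_char[@]}]}  # mandatory special char
    local i
    for i in {1..6}; do
        passwd+=${all[RANDOM%${#all[@]}]}                # six more from the full set
    done
    # shuffle so the mandatory character classes are not always in front
    echo "$passwd" | fold -w1 | shuf | tr -d '\n'
}

pw=$(gen_pass_demo)
echo "$pw"
```

By construction, every password from this scheme is exactly 10 characters and contains at least one character from each of the four classes, which matches the passwords seen earlier (e.g. b2+sfnSxPW).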
The encryption/decryption functions for the tenant administrator passwords:
function encode_pass() {
    inpass_str=$1            # plaintext password passed in
    outpass_file_name=$2     # name of the key file to write (e.g. the tenant/user name)
    if [[ -f /root/.key/${outpass_file_name} ]]; then
        return
    fi
    pass_file_name=`date +%s`
    echo ${inpass_str} > /root/.key/${pass_file_name}
    openssl rsautl -encrypt -in /root/.key/${pass_file_name} -inkey /root/.key/private.pem -out /root/.key/${outpass_file_name}
    rm -f /root/.key/${pass_file_name}
}
function decode_pass() {
    inpass_file_name=$1      # name of the key file (e.g. the tenant/user name); the function looks for a file of that name under /root/.key/ and decrypts it
    tmp_pass_file_name=`date +%s`
    openssl rsautl -decrypt -in /root/.key/${inpass_file_name} -inkey /root/.key/private.pem -out /root/.key/${tmp_pass_file_name}
    cat /root/.key/${tmp_pass_file_name}
    rm -f /root/.key/${tmp_pass_file_name}
}
Suppose there is a test_user tenant whose password is Example_P@ssw0rd. Encrypting and decrypting it works as follows:
[root@ocp ~]# cd /root/t-oceanbase-antman/common/
[root@ocp common]# source utils.sh
[root@ocp common]# encode_pass Example_P@ssw0rd test_user
[root@ocp common]# ls -l /root/.key/
total 36
-rw-r--r-- 1 root root 256 Nov 29 15:03 meta
-rw-r--r-- 1 root root 256 Nov 29 15:03 monitor
-rw-r--r-- 1 root root 256 Nov 29 15:03 odcmeta
-rw-r--r-- 1 root root 256 Nov 29 15:03 omsmeta
-rw-r--r-- 1 root root 1675 Nov 29 15:03 private.pem
-rw-r--r-- 1 root root 256 Nov 29 15:03 sys
-rw-r--r-- 1 root root 256 Nov 29 15:03 sys_monitor
-rw-r--r-- 1 root root 256 Nov 30 16:46 test_user
[root@ocp common]# decode_pass test_user
Example_P@ssw0rd
Note: these script functions only generate the passwords and store them locally on the OCP node as encrypted key files under /root/.key/.
The following command decodes the administrator passwords of all tenants from the key files under /root/.key/:
[root@ocp ~]# source /root/t-oceanbase-antman/common/utils.sh
[root@ocp ~]# for u in meta monitor odcmeta omsmeta sys sys_monitor; do echo -n "$u: " ; decode_pass $u; done
meta: 4xac^5O0S{
monitor: YCyM^M3V^q
odcmeta: %W^ff19nL0
omsmeta: r0P9%5Yr76
sys: b2+sfnSxPW
sys_monitor: +Pezt9i8OA
Note: when the OCP platform is deployed, password files are also generated by default for the odcmeta and omsmeta tenant administrators. The meta and monitor files above correspond to the ocp_meta and ocp_monitor tenants in the OB cluster, and sys_monitor corresponds to the ocp_monitor user in the sys tenant. These passwords appear to be fixed: I have deployed the OCP platform several times and got the same passwords every time, which I personally think is worth improving.
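The encode/decode mechanism boils down to an RSA round trip with a single private key. The following is a minimal standalone sketch of that round trip, using a throwaway key in a temp directory rather than /root/.key/ (assumption: the openssl CLI is available; the `rsautl` subcommand is deprecated in newer OpenSSL releases but still present):

```shell
#!/usr/bin/env bash
# Minimal sketch of the encode_pass/decode_pass round trip with a throwaway
# RSA key: encrypt the plaintext into a per-tenant key file, then decrypt it.
workdir=$(mktemp -d)
openssl genrsa -out "$workdir/private.pem" 2048 2>/dev/null

echo 'Example_P@ssw0rd' > "$workdir/plain"
# "encode": encrypt the plaintext password into a key file named after the user
openssl rsautl -encrypt -in "$workdir/plain" \
    -inkey "$workdir/private.pem" -out "$workdir/test_user" 2>/dev/null
# "decode": decrypt the key file back to the plaintext password
decrypted=$(openssl rsautl -decrypt -in "$workdir/test_user" \
    -inkey "$workdir/private.pem" 2>/dev/null)
echo "$decrypted"

rm -rf "$workdir"
```

Note that a single private.pem both encrypts and decrypts, so anyone with read access to /root/.key/ can recover every tenant password, exactly as demonstrated above.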
3. Some OCP Platform System Tables
The OCP platform's data is stored in the ocp_meta tenant, and it can be inspected by logging in to OCP's OB metadata cluster (the Docker instance META_OB_ZONE_1) as the root@ocp_meta#ocp_cluster user.
Run the following to log in as root@ocp_meta#ocp_cluster and list the databases:
[root@ocp ~]# obclient -h172.31.0.69 -P2883 -uroot@ocp_meta#ocp_cluster -p'4xac^5O0S{' -Doceanbase -A --prompt "\u [\d]> "
Welcome to the OceanBase. Commands end with ; or \g.
Your MySQL connection id is 1603854
Server version: 5.6.25 OceanBase 2.2.73 (r20201211175814-5abff097aadf740bcae07de8f84321815907e062) (Built Dec 11 2020 18:26:06)
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
root [oceanbase]> show databases;
+--------------------+
| Database |
+--------------------+
| oceanbase |
| information_schema |
| mysql |
| test |
| ocp |
| backup1472 |
| backup147x |
| backup2230 |
| backup21 |
+--------------------+
9 rows in set (0.003 sec)
Look up the database IDs and names:
root [oceanbase]> select database_id, database_name from __all_database;
+-------------+--------------------+
| database_id | database_name |
+-------------+--------------------+
| 1 | oceanbase |
| 2 | information_schema |
| 3 | mysql |
| 4 | __recyclebin |
| 5 | __public |
| 1000 | test |
| 1051 | ocp |
| 1052 | backup1472 |
| 1053 | backup147x |
| 1054 | backup2230 |
| 1055 | backup21 |
+-------------+--------------------+
11 rows in set (0.002 sec)
Check how the OCP platform's system tables are distributed across the business databases:
root [oceanbase]> SELECT count(a.table_name),b.database_name FROM oceanbase.__all_table_v2 a, oceanbase.__all_database b where a.database_id = b.database_id group by a.database_id;
+---------------------+---------------+
| count(a.table_name) | database_name |
+---------------------+---------------+
| 445 | ocp |
| 14 | backup1472 |
| 16 | backup147x |
| 23 | backup2230 |
| 22 | backup21 |
+---------------------+---------------+
5 rows in set (0.005 sec)
As the result shows, the ocp database holds most of the OCP platform's system tables. (The __all_table_v2 dictionary table records all user tables.)
Let's switch into the ocp database and see what tables it contains:
root [oceanbase]> use ocp
Database changed
root [ocp]> show tables like '%cluster%';
+--------------------------------------------+
| Tables_in_ocp (%cluster%) |
+--------------------------------------------+
| migration_clusters |
| ob_agent_cluster_collection_config_history |
| ob_cluster |
| ob_cluster_compaction |
| ob_cluster_group |
| ob_cluster_idc_info |
| ob_cluster_idx_id |
| ob_cluster_obproxy |
| ob_cluster_parameter_change |
| ob_cluster_parameter_info |
| ob_cluster_system_event_template |
| obproxy_cluster |
+--------------------------------------------+
12 rows in set (0.003 sec)
Note: the LIKE filter above shows only a subset of the OCP system tables.
My OCP platform manages two OB clusters and one OBProxy cluster, named ocp_cluster, obcp_ob_cluster, and obcp_obp_cluster respectively. Let's verify that here:
root [ocp]> select id,name,ob_version,ob_cluster_id,type,status,operate_status from ob_cluster;
+----+-----------------+------------+---------------+---------+---------+----------------+
| id | name | ob_version | ob_cluster_id | type | status | operate_status |
+----+-----------------+------------+---------------+---------+---------+----------------+
| 1 | ocp_cluster | 2.2.73 | 100000 | PRIMARY | RUNNING | NORMAL |
| 2 | obcp_ob_cluster | 2.2.76 | 1 | PRIMARY | RUNNING | NORMAL |
+----+-----------------+------------+---------------+---------+---------+----------------+
2 rows in set (0.003 sec)
root [ocp]> select * from ob_cluster_obproxy;
+----+--------------------+-----------------+----------+---------------------+---------------------+
| id | obproxy_cluster_id | ob_cluster_name | username | create_time | update_time |
+----+--------------------+-----------------+----------+---------------------+---------------------+
| 1 | 1 | obcp_ob_cluster | proxyro | 2021-11-29 16:54:36 | 2021-11-29 16:54:36 |
+----+--------------------+-----------------+----------+---------------------+---------------------+
1 row in set (0.004 sec)
As you can see, the ocp.ob_cluster table records the two clusters currently managed by this OCP platform, which matches my deployment.
My OCP platform currently has 1 OCP host and 9 OBServer hosts, which can also be verified through the ocp.compute_host table:
root [ocp]> select id,name,inner_ip_address,ssh_port,status,idc_id,host_type_id,alias from ocp.compute_host;
+----+---------------+------------------+----------+-----------+--------+--------------+---------------+
| id | name | inner_ip_address | ssh_port | status | idc_id | host_type_id | alias |
+----+---------------+------------------+----------+-----------+--------+--------------+---------------+
| 1 | ocp.test.cn | 172.31.0.69 | 2022 | ONLINE | 1 | 1 | NULL |
| 14 | obs48.test.cn | 172.31.0.48 | 22022 | ONLINE | 1 | 2 | obs48.test.cn |
| 15 | obs71.test.cn | 172.31.0.71 | 22022 | ONLINE | 2 | 2 | obs71.test.cn |
| 16 | obs79.test.cn | 172.31.0.79 | 22022 | ONLINE | 3 | 2 | obs79.test.cn |
| 17 | obs78.test.cn | 172.31.0.78 | 22022 | AVAILABLE | 1 | 2 | obs78.test.cn |
| 18 | obs90.test.cn | 172.31.0.90 | 22022 | ONLINE | 1 | 2 | obs90.test.cn |
| 19 | obs92.test.cn | 172.31.0.92 | 22022 | AVAILABLE | 2 | 2 | obs92.test.cn |
| 20 | obs91.test.cn | 172.31.0.91 | 22022 | ONLINE | 2 | 2 | obs91.test.cn |
| 21 | obs85.test.cn | 172.31.0.85 | 22022 | AVAILABLE | 3 | 2 | obs85.test.cn |
| 22 | obs93.test.cn | 172.31.0.93 | 22022 | ONLINE | 3 | 2 | obs93.test.cn |
+----+---------------+------------------+----------+-----------+--------+--------------+---------------+
10 rows in set (0.003 sec)
My obcp_ob_cluster cluster currently has 3 zones in a 2-2-2 layout. One host in each zone is idle with no observer process running; its status above shows as AVAILABLE.
The following SQL lists every host recorded in the OCP platform along with its IP address, status, machine type, IDC, node role, owning cluster, and software version:
root [ocp]> SELECT
-> a.id AS SRV_ID,
-> a.name AS SRV_NAME,
-> a.inner_ip_address AS SRV_IP,
-> a.status AS SRV_STATUS,
-> c.name AS IDC_NAME,
-> d.name AS SRV_MODEL,
-> e.name AS REGION,
-> b.`type` AS SRV_ROLE,
-> b.name AS CLUSTER,
-> b.version AS OB_VERSION
-> FROM
-> compute_host a
-> LEFT JOIN compute_host_service b ON
-> a.id = b.host_id
-> LEFT JOIN compute_idc c ON
-> a.idc_id = c.id
-> LEFT JOIN compute_host_type d ON
-> a.host_type_id = d.id
-> LEFT JOIN compute_region e ON
-> c.region_id = e.id
-> ORDER BY
-> a.id;
+--------+---------------+-------------+------------+----------+---------------+---------+------------+--------------------+------------+
| SRV_ID | SRV_NAME | SRV_IP | SRV_STATUS | IDC_NAME | SRV_MODEL | REGION | SRV_ROLE | CLUSTER | OB_VERSION |
+--------+---------------+-------------+------------+----------+---------------+---------+------------+--------------------+------------+
| 1 | ocp.test.cn | 172.31.0.69 | ONLINE | DC001 | METAOB_DOCKER | CHENGDU | OB_CLUSTER | ocp_cluster:100000 | 2.2.73 |
| 14 | obs48.test.cn | 172.31.0.48 | ONLINE | DC001 | INSPUR_ECS_1 | CHENGDU | OB_CLUSTER | obcp_ob_cluster:1 | 2.2.76 |
| 15 | obs71.test.cn | 172.31.0.71 | ONLINE | DC002 | INSPUR_ECS_1 | CHENGDU | OB_CLUSTER | obcp_ob_cluster:1 | 2.2.76 |
| 16 | obs79.test.cn | 172.31.0.79 | ONLINE | DC003 | INSPUR_ECS_1 | CHENGDU | OB_CLUSTER | obcp_ob_cluster:1 | 2.2.76 |
| 17 | obs78.test.cn | 172.31.0.78 | AVAILABLE | DC001 | INSPUR_ECS_1 | CHENGDU | NULL | NULL | NULL |
| 18 | obs90.test.cn | 172.31.0.90 | ONLINE | DC001 | INSPUR_ECS_1 | CHENGDU | OB_PROXY | obcp_obp_cluster | 1.9.1.2 |
| 18 | obs90.test.cn | 172.31.0.90 | ONLINE | DC001 | INSPUR_ECS_1 | CHENGDU | OB_CLUSTER | obcp_ob_cluster:1 | 2.2.76 |
| 19 | obs92.test.cn | 172.31.0.92 | AVAILABLE | DC002 | INSPUR_ECS_1 | CHENGDU | NULL | NULL | NULL |
| 20 | obs91.test.cn | 172.31.0.91 | ONLINE | DC002 | INSPUR_ECS_1 | CHENGDU | OB_PROXY | obcp_obp_cluster | 1.9.1.2 |
| 20 | obs91.test.cn | 172.31.0.91 | ONLINE | DC002 | INSPUR_ECS_1 | CHENGDU | OB_CLUSTER | obcp_ob_cluster:1 | 2.2.76 |
| 21 | obs85.test.cn | 172.31.0.85 | AVAILABLE | DC003 | INSPUR_ECS_1 | CHENGDU | NULL | NULL | NULL |
| 22 | obs93.test.cn | 172.31.0.93 | ONLINE | DC003 | INSPUR_ECS_1 | CHENGDU | OB_PROXY | obcp_obp_cluster | 1.9.1.2 |
| 22 | obs93.test.cn | 172.31.0.93 | ONLINE | DC003 | INSPUR_ECS_1 | CHENGDU | OB_CLUSTER | obcp_ob_cluster:1 | 2.2.76 |
+--------+---------------+-------------+------------+----------+---------------+---------+------------+--------------------+------------+
13 rows in set (0.006 sec)
3.1 Servers/Nodes in the OCP Platform
Querying node information mainly involves the following tables:
root [ocp]> show tables like '%compute%';
+---------------------------------+
| Tables_in_ocp (%compute%) |
+---------------------------------+
| compute_host |
| compute_host_agent |
| compute_host_agent_process |
| compute_host_agent_process_meta |
| compute_host_process |
| compute_host_process_meta |
| compute_host_service |
| compute_host_type |
| compute_idc |
| compute_region |
| compute_software_package_meta |
+---------------------------------+
11 rows in set (0.004 sec)
My guesses at the purpose of some of these system tables:
- compute_host_agent: records every node in the OCP platform that runs ob_agent.
- compute_host_type: records the machine models of the nodes in the OCP platform.
- compute_host_service: records each node's type and the cluster it belongs to.
- compute_host: records the basic information and status of every node in the OCP platform.
- compute_host_process: records the OB-related processes running on every node.
- compute_idc: records all IDCs in the OCP platform.
- compute_region: records all regions in the OCP platform.
3.2 OBProxy in the OCP Platform
My guess is that the details of the OBProxy clusters deployed through OCP are recorded in the following metadata tables:
- **obproxy_parameter**: OBProxy parameters set or changed at deployment time.
- **obproxy_parameter_change**: OBProxy parameter change history.
- **obproxy_parameter_info**: reference information for all OBProxy parameters.
- **obproxy_server**: detailed information about every OBProxy node.
- **obproxy_cluster**: all OBProxy clusters deployed on the OCP platform.
For example, the following SQL queries the information of obcp_obp_cluster:
root [ocp]> SELECT
-> a.name OBP_CLUSTER,
-> a.address F5_IP,
-> a.port F5_PORT,
-> c.name SRV_NAME,
-> b.ip SRV_IP,
-> b.sql_port,
-> b.exporter_port,
-> b.status
-> FROM
-> obproxy_cluster a,
-> obproxy_server b,
-> compute_host c
-> WHERE
-> a.id = b.obproxy_cluster_id
-> AND b.host_id = c.id;
+------------------+-------------+---------+---------------+-------------+----------+---------------+---------+
| OBP_CLUSTER | F5_IP | F5_PORT | SRV_NAME | SRV_IP | sql_port | exporter_port | status |
+------------------+-------------+---------+---------------+-------------+----------+---------------+---------+
| obcp_obp_cluster | 172.31.0.69 | 8005 | obs90.test.cn | 172.31.0.90 | 2883 | 2884 | RUNNING |
| obcp_obp_cluster | 172.31.0.69 | 8005 | obs91.test.cn | 172.31.0.91 | 2883 | 2884 | RUNNING |
| obcp_obp_cluster | 172.31.0.69 | 8005 | obs93.test.cn | 172.31.0.93 | 2883 | 2884 | RUNNING |
+------------------+-------------+---------+---------------+-------------+----------+---------------+---------+
3 rows in set (0.005 sec)
My environment has no physical load-balancing device; I simply run Nginx on the OCP host (listening on port 8005) to proxy the TCP traffic and balance it across the three OBProxy instances.
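For reference, such a TCP pass-through can be expressed with the Nginx stream module. A hypothetical sketch matching the setup described above (the upstream addresses are the three OBProxy hosts from the query result; the pool name is illustrative):

```nginx
# Layer-4 (TCP) load balancing in front of the OBProxy instances.
stream {
    upstream obproxy_pool {
        server 172.31.0.90:2883;
        server 172.31.0.91:2883;
        server 172.31.0.93:2883;
    }
    server {
        listen 8005;              # the F5_PORT recorded in obproxy_cluster
        proxy_pass obproxy_pool;  # round-robin across the three OBProxy nodes
    }
}
```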
3.3 OBServer in the OCP Platform
The OB cluster information recorded by the OCP platform involves the following tables:
- **ob_server**: information about every host running an observer process.
- **ob_server_compaction**: daily scheduled major-compaction records for the OB clusters, such as duration.
For example:
root [ocp]> select ip,port,sql_port,cluster_id,host_id,zone_name,is_with_rootserver,status,inner_status from ob_server;
+-------------+------+----------+------------+---------+----------------+--------------------+---------+--------------+
| ip | port | sql_port | cluster_id | host_id | zone_name | is_with_rootserver | status | inner_status |
+-------------+------+----------+------------+---------+----------------+--------------------+---------+--------------+
| 172.31.0.69 | 2882 | 2881 | 1 | 1 | META_OB_ZONE_1 | 1 | RUNNING | ACTIVE |
| 172.31.0.48 | 2882 | 2881 | 2 | 14 | zone3 | 0 | RUNNING | ACTIVE |
| 172.31.0.71 | 2882 | 2881 | 2 | 15 | zone2 | 0 | RUNNING | ACTIVE |
| 172.31.0.79 | 2882 | 2881 | 2 | 16 | zone1 | 1 | RUNNING | ACTIVE |
| 172.31.0.90 | 2882 | 2881 | 2 | 18 | zone3 | 0 | RUNNING | ACTIVE |
| 172.31.0.91 | 2882 | 2881 | 2 | 20 | zone2 | 0 | RUNNING | ACTIVE |
| 172.31.0.93 | 2882 | 2881 | 2 | 22 | zone1 | 0 | RUNNING | ACTIVE |
+-------------+------+----------+------------+---------+----------------+--------------------+---------+--------------+
7 rows in set (0.003 sec)
root [ocp]> select cluster_id,server_ip,server_port,zone_name,start_time,end_time,duration_seconds from ob_server_compaction;
+------------+-------------+-------------+----------------+---------------------+---------------------+------------------+
| cluster_id | server_ip | server_port | zone_name | start_time | end_time | duration_seconds |
+------------+-------------+-------------+----------------+---------------------+---------------------+------------------+
| 1 | 172.31.0.69 | 2882 | META_OB_ZONE_1 | 2021-11-29 14:55:07 | 2021-11-29 14:55:41 | 33 |
| 2 | 172.31.0.79 | 2882 | zone1 | 2021-11-29 16:37:42 | 2021-11-29 16:38:26 | 44 |
| 2 | 172.31.0.71 | 2882 | zone2 | 2021-11-29 16:37:42 | 2021-11-29 16:38:43 | 60 |
| 2 | 172.31.0.48 | 2882 | zone3 | 2021-11-29 16:37:42 | 2021-11-29 16:38:48 | 66 |
| 2 | 172.31.0.79 | 2882 | zone1 | 2021-11-30 02:00:23 | 2021-11-30 02:01:30 | 67 |
| 2 | 172.31.0.71 | 2882 | zone2 | 2021-11-30 02:00:22 | 2021-11-30 02:01:39 | 76 |
| 2 | 172.31.0.48 | 2882 | zone3 | 2021-11-30 02:00:22 | 2021-11-30 02:01:43 | 81 |
| 1 | 172.31.0.69 | 2882 | META_OB_ZONE_1 | 2021-11-30 02:00:22 | 2021-11-30 02:03:13 | 171 |
| 2 | 172.31.0.79 | 2882 | zone1 | 2021-12-01 02:00:23 | 2021-12-01 02:01:29 | 65 |
| 2 | 172.31.0.71 | 2882 | zone2 | 2021-12-01 02:00:23 | 2021-12-01 02:01:49 | 86 |
| 2 | 172.31.0.48 | 2882 | zone3 | 2021-12-01 02:00:22 | 2021-12-01 02:01:49 | 86 |
| 1 | 172.31.0.69 | 2882 | META_OB_ZONE_1 | 2021-12-01 02:00:22 | 2021-12-01 02:02:10 | 108 |
| 2 | 172.31.0.79 | 2882 | zone1 | 2021-12-02 02:00:21 | 2021-12-02 02:01:35 | 74 |
| 2 | 172.31.0.71 | 2882 | zone2 | 2021-12-02 02:00:22 | 2021-12-02 02:01:52 | 89 |
| 2 | 172.31.0.48 | 2882 | zone3 | 2021-12-02 02:00:22 | 2021-12-02 02:01:51 | 88 |
| 1 | 172.31.0.69 | 2882 | META_OB_ZONE_1 | 2021-12-02 02:00:22 | 2021-12-02 02:03:09 | 167 |
+------------+-------------+-------------+----------------+---------------------+---------------------+------------------+
16 rows in set (0.002 sec)
3.4 Guesses at Other OCP System Tables
- **ob_tenant**: authentication information for the tenants of each cluster recorded by the OCP platform.
- **ob_tenant_compaction_stats**: scheduled automatic compaction records for each cluster's tenants.
- **ob_tenant_parameter_info**: reference (check) information for tenant parameters, such as defaults, maximums, minimums, and on/off values.
- **ob_unit_spec**: resource unit specs for the ocp_meta tenant, distinct from the sys tenant's oceanbase.__all_unit_config table.
- **ob_zone**: records of all zones in the OB clusters.
- **ob_zone_compaction**: compaction records for all zones in the OB clusters.
- **profile_credential**: authentication credentials for all hosts and OB clusters; usernames/passwords are stored as JSON in the table's secret column.