
[TAC Installation] Building TAS-TAC on VMware (5)


15. Setting up bash_profile

 

15-1. root bash_profile

 

CM must be started as root because it has to control the VIP.
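
As a rough illustration (not a step to run): plumbing a VIP means adding an interface alias, which is a privileged operation, so CM performs the equivalent of the following on our behalf. The alias name ens33:1 and the address come from the VIP registration later in this guide.

# Roughly what CM does when it brings vip1 up (illustration only):
ifconfig ens33:1 192.168.56.121 netmask 255.255.255.0 broadcast 192.168.56.255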

 

A. root bash_profile (node1)

[root@dbserver1 ~]# vi ~/.bash_profile 

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

### Tibero6 ENV ###
export TB_HOME=/tibero/tibero6
export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

### Tibero6 CM ENV ###
export CM_SID=cm1
export CM_HOME=$TB_HOME


[root@dbserver1 ~]# source ~/.bash_profile	# apply the profile

 

B. root bash_profile (node2)

[root@dbserver2 ~]# vi ~/.bash_profile

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH


### Tibero6 ENV ###
export TB_HOME=/tibero/tibero6
export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH


### Tibero6 CM ENV ###
export CM_SID=cm2
export CM_HOME=$TB_HOME

[root@dbserver2 ~]# source ~/.bash_profile	# apply the profile
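
To confirm the variables took effect on each node, a minimal check using standard tools:

# Should print TB_HOME, CM_SID, and CM_HOME with the values set above.
env | grep -E '^(TB_HOME|CM_SID|CM_HOME)='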

 

15-2. tibero bash_profile

 

Create a profile for the tibero user.

Since this user controls both TAS and TAC, also create a separate tac profile and tas profile.

 

A. tibero bash_profile (node1)

[root@dbserver1 ~]# su - tibero
Last login: Wed Dec 22 17:29:17 KST 2021 on pts/0
[tibero@dbserver1 ~]$ vi ~/.bash_profile
[tibero@dbserver1 ~]$ cat ~/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

### Tibero6 ENV ###
export PATH
export TB_HOME=/tibero/tibero6
export TB_SID=tac1
export TB_PROF_DIR=$TB_HOME/bin/prof
export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

### Tibero6 CM ENV ###
export CM_SID=cm1
export CM_HOME=$TB_HOME

cd $TB_BASE
alias tbcliv='vi $TB_HOME/client/config/tbdsn.tbr'
alias tbcfgv='vi $TB_HOME/config/$TB_SID.tip'
alias tm='sh /tibero/tbinary/monitor/monitor'
alias tblog='cd $TB_HOME/instance/$TB_SID/log'
alias tas='export TB_SID=tas1'
alias tac='export TB_SID=tac1'

[tibero@dbserver1 ~]$ source ~/.bash_profile

 

tibero tac_profile (node1)

[tibero@dbserver1 ~]$ vi ~/.tac_profile
[tibero@dbserver1 ~]$ cat ~/.tac_profile
### TAC ENV ###
export TB_SID=tac1
export TB_HOME=/tibero/tibero6
export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

 

tibero tas_profile (node1)

[tibero@dbserver1 ~]$ vi ~/.tas_profile
[tibero@dbserver1 ~]$ cat ~/.tas_profile
### TAS ENV ###
export TB_SID=tas1
export TB_HOME=/tibero/tibero6
export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

 

B. tibero bash_profile (node2)

[root@dbserver2 ~]# su - tibero
Last login: Wed Dec 22 17:29:17 KST 2021 on pts/0
[tibero@dbserver2 ~]$ vi ~/.bash_profile
[tibero@dbserver2 ~]$ cat ~/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

### Tibero6 ENV ###
export PATH
export TB_HOME=/tibero/tibero6
export TB_SID=tac2
export TB_PROF_DIR=$TB_HOME/bin/prof
export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH


### Tibero6 CM ENV ###
export CM_SID=cm2
export CM_HOME=$TB_HOME

cd $TB_BASE
alias tbcliv='vi $TB_HOME/client/config/tbdsn.tbr'
alias tbcfgv='vi $TB_HOME/config/$TB_SID.tip'
alias tm='sh /tibero/tbinary/monitor/monitor'
alias tblog='cd $TB_HOME/instance/$TB_SID/log'
alias tas='export TB_SID=tas2'
alias tac='export TB_SID=tac2'

[tibero@dbserver2 ~]$ source ~/.bash_profile

 

tibero tac_profile (node2)

[tibero@dbserver2 ~]$ vi ~/.tac_profile
[tibero@dbserver2 ~]$ cat ~/.tac_profile
### TAC ENV ###
export TB_SID=tac2
export TB_HOME=/tibero/tibero6
export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

 

tibero tas_profile (node2)

[tibero@dbserver2 ~]$ vi ~/.tas_profile
[tibero@dbserver2 ~]$ cat ~/.tas_profile
### TAS ENV ###
export TB_SID=tas2
export TB_HOME=/tibero/tibero6
export PATH=.:$TB_HOME/bin:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH

 

16. Creating the tip files

 

Run this on both node1 and node2. (The procedure is identical on both nodes, so it is shown only once.)

 

16-1. Generating the tac and tas tips (run on both nodes)

[tibero@dbserver1 ~]$ tac
[tibero@dbserver1 ~]$ echo $TB_SID
tac1
[tibero@dbserver1 ~]$ sh $TB_HOME/config/gen_tip.sh
Using TB_SID "tac1"
/tibero/tibero6/config/tac1.tip generated
/tibero/tibero6/config/psm_commands generated
/tibero/tibero6/client/config/tbdsn.tbr generated.
Running client/config/gen_esql_cfg.sh
Done.


[tibero@dbserver1 ~]$ tas
[tibero@dbserver1 ~]$ echo $TB_SID
tas1
[tibero@dbserver1 ~]$ sh $TB_HOME/config/gen_tip.sh
Using TB_SID "tas1"
/tibero/tibero6/config/tas1.tip generated
Already exists /tibero/tibero6/config/psm_commands!! Nothing has changed
There's already /tibero/tibero6/client/config/tbdsn.tbr!!
Added tas1 to /tibero/tibero6/client/config/tbdsn.tbr.
Running client/config/gen_esql_cfg.sh
Done.
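
If you prefer not to toggle the aliases by hand, both tips can be generated in one pass (a minimal sketch; substitute tac2/tas2 on node2):

# Generate the tac and tas tip files in one go (node1 SIDs shown).
for sid in tac1 tas1; do
    export TB_SID="$sid"
    sh "$TB_HOME/config/gen_tip.sh"   # writes $TB_HOME/config/$sid.tip
done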

After generating them, edit the tips.

A. node1 = tac1 / tas1

[node1]

[root@dbserver1 ~]# su - tibero
Last login: Wed Dec 22 17:36:37 KST 2021 on pts/0
[tibero@dbserver1 ~]$ tac
[tibero@dbserver1 ~]$ vi $TB_HOME/config/$TB_SID.tip
[tibero@dbserver1 ~]$ cat $TB_HOME/config/$TB_SID.tip
# tip file generated from /tibero/tibero6/config/tip.template (Wed Dec 15 09:09:43 KST 2021)
#-------------------------------------------------------------------------------
#
# RDBMS initialization parameter
#
#-------------------------------------------------------------------------------

DB_NAME=tac
LISTENER_PORT=8629
CONTROL_FILES="+DS0/tac/c1.ctl","+DS0/tac/c2.ctl"
DB_CREATE_FILE_DEST="+DS0/tbdata"

# +<diskspace name> (e.g. +DS0): a leading + makes the path resolve to a TAS diskspace

MAX_SESSION_COUNT=20

TOTAL_SHM_SIZE=2G
MEMORY_TARGET=3G

AS_PORT=28629	# TAS service port
USE_ACTIVE_STORAGE=Y	# must be Y to use TAS

CLUSTER_DATABASE=Y
THREAD=0	# node1 => THREAD=0 / node2 => THREAD=1
UNDO_TABLESPACE=UNDO0
LOCAL_CLUSTER_ADDR=10.10.1.101	# this node's interconnect IP
LOCAL_CLUSTER_PORT=9629	# inter-node communication port for TAC
CM_PORT=18629	# internal port of cm1

[tibero@dbserver1 ~]$ tas
[tibero@dbserver1 ~]$ vi $TB_HOME/config/$TB_SID.tip
[tibero@dbserver1 ~]$ cat $TB_HOME/config/$TB_SID.tip
# tip file generated from /tibero/tibero6/config/tip.template (Wed Dec 15 09:10:03 KST 2021)
#-------------------------------------------------------------------------------
#
# RDBMS initialization parameter
#
#-------------------------------------------------------------------------------

DB_NAME=tas
LISTENER_PORT=28629 # TAS service port
MAX_SESSION_COUNT=20
MEMORY_TARGET=2G
TOTAL_SHM_SIZE=1G

CLUSTER_DATABASE=Y # must be Y for a clustered (multi-instance) TAS
THREAD=0 # node1 -> THREAD=0
LOCAL_CLUSTER_ADDR=10.10.1.101
LOCAL_CLUSTER_PORT=29629 # port for inter-instance TAS communication
CM_PORT=18629 # port for talking to the local CM; same value as CM_UI_PORT in the CM tip

INSTANCE_TYPE=AS # marks this tip as a TAS instance
AS_DISKSTRING="/dev/tas/*"
AS_ALLOW_ONLY_RAW_DISKS=N
AS_WTHR_CNT=2
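
For reference, the ports across the tac, tas, and cm tips (all taken from the values above) line up as follows:

# Port map used in this guide:
#   8629   TAC listener           (LISTENER_PORT in tac*.tip)
#  28629   TAS listener           (LISTENER_PORT in tas*.tip = AS_PORT in tac*.tip)
#  18629   CM internal/UI port    (CM_PORT in both tips = CM_UI_PORT in cm*.tip)
#   9629   TAC inter-node port    (LOCAL_CLUSTER_PORT in tac*.tip)
#  29629   TAS inter-node port    (LOCAL_CLUSTER_PORT in tas*.tip)
#  19629   CM interconnect port   (--portno on cmrctl add network)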

 

B. node2 = tac2 / tas2

[node2]

[root@dbserver2 ~]# su - tibero
Last login: Wed Dec 22 17:36:37 KST 2021 on pts/0
[tibero@dbserver2 ~]$ tac
[tibero@dbserver2 ~]$ vi $TB_HOME/config/$TB_SID.tip
[tibero@dbserver2 ~]$ cat $TB_HOME/config/$TB_SID.tip
# tip file generated from /tibero/tibero6/config/tip.template (Wed Dec 15 09:09:35 KST 2021)
#-------------------------------------------------------------------------------
#
# RDBMS initialization parameter
#
#-------------------------------------------------------------------------------

DB_NAME=tac
LISTENER_PORT=8629
CONTROL_FILES="+DS0/tac/c1.ctl","+DS0/tac/c2.ctl"
DB_CREATE_FILE_DEST="+DS0/tbdata"

#CERTIFICATE_FILE="/tibero/tibero6/config/tb_wallet/tac2.crt"
#PRIVKEY_FILE="/tibero/tibero6/config/tb_wallet/tac2.key"
#WALLET_FILE="/tibero/tibero6/config/tb_wallet/WALLET"
#ILOG_MAP="/tibero/tibero6/config/ilog.map"

MAX_SESSION_COUNT=20

TOTAL_SHM_SIZE=2G
MEMORY_TARGET=3G
 
AS_PORT=28629   # TAS service port
USE_ACTIVE_STORAGE=Y    # must be Y to use TAS

CLUSTER_DATABASE=Y
THREAD=1        # node1 => THREAD=0 / node2 => THREAD=1
UNDO_TABLESPACE=UNDO1
LOCAL_CLUSTER_ADDR=10.10.1.102  # this node's interconnect IP
LOCAL_CLUSTER_PORT=9629 # inter-node communication port for TAC
CM_PORT=18629   # internal port of cm2

[tibero@dbserver2 ~]$ tas
[tibero@dbserver2 ~]$ vi $TB_HOME/config/$TB_SID.tip
[tibero@dbserver2 ~]$ cat $TB_HOME/config/$TB_SID.tip
# tip file generated from /tibero/tibero6/config/tip.template (Wed Dec 15 09:10:00 KST 2021)
#-------------------------------------------------------------------------------
#
# RDBMS initialization parameter
#
#-------------------------------------------------------------------------------


DB_NAME=tas
LISTENER_PORT=28629
MAX_SESSION_COUNT=20
MEMORY_TARGET=2G
TOTAL_SHM_SIZE=1G

CLUSTER_DATABASE=Y
THREAD=1
LOCAL_CLUSTER_ADDR=10.10.1.102
LOCAL_CLUSTER_PORT=29629
CM_PORT=18629

INSTANCE_TYPE=AS
AS_DISKSTRING="/dev/tas/*"
AS_ALLOW_ONLY_RAW_DISKS=N
AS_WTHR_CNT=2

 

 

16-2. Creating the cm tips (run on both nodes)

 

node1 = cm1

[tibero@dbserver1 ~]$ vi $TB_HOME/config/cm1.tip
[tibero@dbserver1 ~]$ cat $TB_HOME/config/cm1.tip
CM_NAME=cm1
CM_UI_PORT=18629
CM_RESOURCE_FILE="/tibero/tibero6/config/cm1_res"

node2 = cm2

[tibero@dbserver2 ~]$ vi $TB_HOME/config/cm2.tip
[tibero@dbserver2 ~]$ cat $TB_HOME/config/cm2.tip
CM_NAME=cm2
CM_UI_PORT=18629
CM_RESOURCE_FILE="/tibero/tibero6/config/cm2_res"

 

17. Configuring the Tibero listener (tbdsn.tbr)

 

This must be configured on both nodes.

[node1]

[tibero@dbserver1 ~]$ cat $TB_HOME/client/config/tbdsn.tbr
#-------------------------------------------------
# /tibero/tibero6/client/config/tbdsn.tbr
# Network Configuration File.
# Generated by gen_tip.sh at Wed Dec 15 09:09:35 KST 2021
tac1=(
    (INSTANCE=(HOST=localhost)
              (PORT=8629)
              (DB_NAME=tac)
    )
)

#-------------------------------------------------
# Appended by gen_tip.sh at Wed Dec 15 09:10:00 KST 2021
tas1=(
    (INSTANCE=(HOST=localhost)
              (PORT=28629)
              (DB_NAME=tas)
    )
)


tac=(
    (INSTANCE=(HOST=192.168.56.121)
    (PORT=8629)
    (DB_NAME=tac)
    )
    (INSTANCE=(HOST=192.168.56.122)
    (PORT=8629)
    (DB_NAME=tac)
    )
    (LOAD_BALANCE=Y)
    (USE_FAILOVER=Y)
)
[node2]

[tibero@dbserver2 ~]$ cat $TB_HOME/client/config/tbdsn.tbr
#-------------------------------------------------
# /tibero/tibero6/client/config/tbdsn.tbr
# Network Configuration File.
# Generated by gen_tip.sh at Wed Dec 15 09:09:35 KST 2021
tac2=(
    (INSTANCE=(HOST=localhost)
              (PORT=8629)
              (DB_NAME=tac)
    )
)

#-------------------------------------------------
# Appended by gen_tip.sh at Wed Dec 15 09:10:00 KST 2021
tas2=(
    (INSTANCE=(HOST=localhost)
              (PORT=28629)
              (DB_NAME=tas)
    )
)
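
For symmetry, the same load-balanced tac alias shown on node1 can be appended to node2's tbdsn.tbr, so a client on either node load-balances and fails over across both VIPs:

tac=(
    (INSTANCE=(HOST=192.168.56.121)
    (PORT=8629)
    (DB_NAME=tac)
    )
    (INSTANCE=(HOST=192.168.56.122)
    (PORT=8629)
    (DB_NAME=tac)
    )
    (LOAD_BALANCE=Y)
    (USE_FAILOVER=Y)
)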

 

18. cm, tas, tac, vip

All configuration is done, so now register and start the TAC services.

The scripts below are given in the exact order to run, so follow them as written.

 

18-1. cm, tas

Start CM -> register the networks and cluster -> create the TAS diskspace -> start the cluster -> register TAS -> start TAS -> add TAS thread 1 (node2). A condensed sketch of this sequence follows.
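
Condensed from the node1 transcript below (cmrctl and tbcm run as root; the diskspace step runs as tibero), the whole sequence is:

# node1 startup sequence, condensed from the transcript that follows:
tbcm -b                                            # start CM
cmrctl add network --name inc1 --nettype private --ipaddr 10.10.1.101 --portno 19629
cmrctl add network --name pub1 --nettype public --ifname ens33
cmrctl add cluster --name cls1 --incnet inc1 --pubnet pub1 --cfile "+/dev/tas/*"
# ...as tibero: tbboot nomount + CREATE DISKSPACE (shown below)...
cmrctl start cluster --name cls1
cmrctl add service --name tas --cname cls1 --type as
cmrctl add as --name tas1 --svcname tas --dbhome $TB_HOME --envfile /home/tibero/.tas_profile
cmrctl start as --name tas1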

 

A. node1

[tibero@dbserver1 ~]$ exit
logout

# Start CM as root.

[root@dbserver1 ~]# tbcm -b    # start CM
CM Guard daemon started up.

TBCM 6.1.1 (Build 199301)

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Tibero cluster manager started up.
Local node name is (cm1:18629).


# Register network 1: private (interconnect)

[root@dbserver1 ~]# cmrctl add network --name inc1 --nettype private --ipaddr 10.10.1.101 --portno 19629
Resource add success! (network, inc1)


# Register network 2: public

[root@dbserver1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.111  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::656d:43d2:a851:8952  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a0:36:5b  txqueuelen 1000  (Ethernet)
        RX packets 2644  bytes 198287 (193.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1237  bytes 165894 (162.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.1.101  netmask 255.255.255.0  broadcast 10.10.1.255
        inet6 fe80::20c:29ff:fea0:3665  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a0:36:65  txqueuelen 1000  (Ethernet)
        RX packets 28  bytes 3196 (3.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11  bytes 836 (836.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 42  bytes 3312 (3.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 42  bytes 3312 (3.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@dbserver1 ~]# cmrctl add network --name pub1 --nettype public --ifname ens33
Resource add success! (network, pub1)
[root@dbserver1 ~]# cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
=====================================================================


# Register the cluster and its cluster file; a leading '+' on the path means a TAS diskspace is used
[root@dbserver1 ~]# cmrctl add cluster --name cls1 --incnet inc1 --pubnet pub1 --cfile "+/dev/tas/*"
Resource add success! (cluster, cls1)
[root@dbserver1 ~]# cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1     DOWN inc: inc1, pub: pub1
=====================================================================
[root@dbserver1 ~]# cmrctl start cluster --name cls1
Failed to start the resource 'cls1'

# Starting the cluster without a TAS diskspace fails.
# The diskspace must be created first.


[root@dbserver1 ~]# su - tibero
Last login: Wed Dec 22 21:51:34 KST 2021 on pts/0

[tibero@dbserver1 ~]$ tas
[tibero@dbserver1 ~]$ tbboot nomount # boot TAS in NOMOUNT mode
Change core dump dir to /tibero/tibero6/bin/prof.
Listener port = 28629

Tibero 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NOMOUNT mode).

# Connect and create the diskspace
[tibero@dbserver1 ~]$ tbsql sys/tibero

tbSQL 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

# Query to create the diskspace
SQL> create diskspace ds0 normal redundancy
failgroup fg1 disk '/dev/tas/disk01' name disk1
failgroup fg2 disk '/dev/tas/disk02' name disk2
failgroup fg3 disk '/dev/tas/disk03' name disk3
attribute 'AU_SIZE'='4M';

Diskspace 'DS0' created.

SQL> q
Disconnected.

# Creating the diskspace in NOMOUNT mode shuts TAS down
[tibero@dbserver1 ~]$ ps -ef | grep tbsvr
tibero      11044   1769  0 22:49 pts/0    00:00:00 grep --color=auto tbsvr
[tibero@dbserver1 ~]$ exit
logout

# Start the cluster now that the TAS diskspace exists
[root@dbserver1 ~]# cmrctl start cluster --name cls1
MSG SENDING SUCCESS! # success
[root@dbserver1 ~]# cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
=====================================================================

(still node1)

# Because CM was first started as root, files such as cm1.tip, cm1_res,
# and the CM logs are owned by root. To hand them over to tibero:dba,
# change ownership of the entire /tibero path as below.

[root@dbserver1 ~]# chown -R tibero:dba /tibero
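
A quick spot-check that the change took effect:

# Owner and group should now read tibero and dba.
ls -ld /tibero /tibero/tibero6/config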



[root@dbserver1 ~]# su - tibero
Last login: Wed Dec 22 22:44:20 KST 2021 on pts/0

[tibero@dbserver1 ~]$ tas

# Register the tas service
[tibero@dbserver1 ~]$ cmrctl add service --name tas --cname cls1 --type as
Resource add success! (service, tas)

# Register tas1 (an AS instance) under the tas service
[tibero@dbserver1 ~]$ cmrctl add as --name tas1 --svcname tas --dbhome $TB_HOME --envfile /home/tibero/.tas_profile
Resource add success! (as, tas1)

# Start tas1
[tibero@dbserver1 ~]$ cmrctl start as --name tas1
Change core dump dir to /tibero/tibero6/bin/prof.
Listener port = 28629

Tibero 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
[tibero@dbserver1 ~]$ cmrctl show	# confirm both tas1 (as) and tas (service) are UP
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1       as           tas1 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 1
=====================================================================

(still node1)

With TAS running, add thread 1 for node2.

[tibero@dbserver1 ~]$ tbsql sys/tibero

tbSQL 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

SQL> alter diskspace ds0 add thread 1;

Diskspace altered.

SQL> q
Disconnected.
[tibero@dbserver1 ~]$ 

# Thread 1 (node2) must be registered so the node2 TAS can share and attach to diskspace ds0

 

B. node2

[tibero@dbserver2 ~]$ exit
logout
# start tbcm as root
[root@dbserver2 ~]# 
[root@dbserver2 ~]# tbcm -b
CM Guard daemon started up.

TBCM 6.1.1 (Build 199301)

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Tibero cluster manager started up.
Local node name is (cm2:18629).

# Register and verify the private and public networks
[root@dbserver2 ~]# cmrctl add network --name inc2 --nettype private --ipaddr 10.10.1.102 --portno 19629
Resource add success! (network, inc2)
[root@dbserver2 ~]# cmrctl add network --name pub2 --nettype public --ifname ens33
Resource add success! (network, pub2)
[root@dbserver2 ~]# cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
=====================================================================

# The cluster must be registered under the same name as on node1
# (both nodes are members of the same logical cluster).
[root@dbserver2 ~]# cmrctl add cluster --name cls1 --incnet inc2 --pubnet pub2 --cfile "+/dev/tas/*"
Resource add success! (cluster, cls1)
[root@dbserver2 ~]# cmrctl start cluster --name cls1
MSG SENDING SUCCESS!
[root@dbserver2 ~]# cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
=====================================================================
# The tas service registered on node1 is already visible here.
# The CMs share resource information, so it shows up without registering it again.

(still node2)

[root@dbserver2 ~]# su - tibero
Last login: Wed Dec 22 23:28:12 KST 2021 on pts/0
[tibero@dbserver2 ~]$ tas
[tibero@dbserver2 ~]$ cmrctl add as --name tas2 --svcname tas --dbhome $TB_HOME --envfile /home/tibero/.tas_profile
Resource add success! (as, tas2)


[tibero@dbserver2 ~]$ cmrctl start as --name tas2
Listener port = 28629

Tibero 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tibero@dbserver2 ~]$ cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1       as           tas2 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 0
=====================================================================
# started normally

 

18-2. tac, vip

 

A. node1

# Create the tac service and register tac1 under it
[tibero@dbserver1 ~]$ tac
[tibero@dbserver1 ~]$ cmrctl add service --name tac --cname cls1 --type db
Resource add success! (service, tac)
[tibero@dbserver1 ~]$ cmrctl add db --name tac1 --svcname tac --dbhome $TB_HOME --envfile /home/tibero/.tac_profile
Resource add success! (db, tac1)
[tibero@dbserver1 ~]$ cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac     DOWN Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas1 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 1
       cls1       db           tac1     DOWN tac, /tibero/tibero6, failed retry cnt: 0
=====================================================================
# Boot in NOMOUNT mode and create the database
[tibero@dbserver1 ~]$ tbboot nomount
Change core dump dir to /tibero/tibero6/bin/prof.
Listener port = 8629

Tibero 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NOMOUNT mode).
[tibero@dbserver1 ~]$ tbsql sys/tibero

tbSQL 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

SQL> create database
user sys identified by tibero
character set UTF8 -- UTF8, EUCKR, ASCII, MSWIN949
national character set utf16
logfile group 0 ('+DS0/tac/log01.redo') size 100M,
        group 1 ('+DS0/tac/log11.redo') size 100M,
        group 2 ('+DS0/tac/log21.redo') size 100M
maxdatafiles 2048
maxlogfiles 100
maxlogmembers 8
noarchivelog
  datafile '+DS0/tbdata/system001.dtf' size 256M autoextend on next 100M maxsize 2G
default tablespace USR 
  datafile '+DS0/tbdata/usr001.dtf' size 256M autoextend on next 100M maxsize 2G
default temporary tablespace TEMP
  tempfile '+DS0/tbdata/temp001.dtf' size 128M autoextend on next 10M maxsize 1G
  extent management local AUTOALLOCATE
undo tablespace UNDO0
  datafile '+DS0/tbdata/undo001.dtf' size 128M autoextend on next 10M maxsize 1G
  extent management local AUTOALLOCATE
;

Database created.

SQL> q
Disconnected.

# Creating the database shuts the DB instance down
[tibero@dbserver1 ~]$ ps -ef |grep tbsvr
tibero    19564      1  0 23:16 pts/0    00:00:04 tbsvr          -t NORMAL -SVR_SID tas1
tibero    19576  19564  0 23:16 pts/0    00:00:00 tbsvr_MGWP     -t NORMAL -SVR_SID tas1
tibero    19577  19564  0 23:16 pts/0    00:00:02 tbsvr_FGWP000  -t NORMAL -SVR_SID tas1
tibero    19578  19564  0 23:16 pts/0    00:00:00 tbsvr_FGWP001  -t NORMAL -SVR_SID tas1
tibero    19579  19564  0 23:16 pts/0    00:00:03 tbsvr_AGNT     -t NORMAL -SVR_SID tas1
tibero    19580  19564  0 23:16 pts/0    00:00:05 tbsvr_DBWR     -t NORMAL -SVR_SID tas1
tibero    19581  19564  0 23:16 pts/0    00:00:02 tbsvr_RCWP     -t NORMAL -SVR_SID tas1
tibero    19582  19564  0 23:16 pts/0    00:00:04 tbsvr_ASSD     -t NORMAL -SVR_SID tas1
tibero    19583  19564  0 23:16 pts/0    00:00:00 tbsvr_SSIO     -t NORMAL -SVR_SID tas1
tibero    19584  19564  0 23:16 pts/0    00:00:16 tbsvr_ACSD     -t NORMAL -SVR_SID tas1
tibero    29075  17343  0 23:47 pts/0    00:00:00 grep --color=auto tbsvr

# Boot in NORMAL mode
[tibero@dbserver1 ~]$ tbboot
Change core dump dir to /tibero/tibero6/bin/prof.
Listener port = 8629

Tibero 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
[tibero@dbserver1 ~]$ ps -ef | grep tbsvr
tibero    19564      1  0 23:16 pts/0    00:00:04 tbsvr          -t NORMAL -SVR_SID tas1
tibero    19576  19564  0 23:16 pts/0    00:00:00 tbsvr_MGWP     -t NORMAL -SVR_SID tas1
tibero    19577  19564  0 23:16 pts/0    00:00:02 tbsvr_FGWP000  -t NORMAL -SVR_SID tas1
tibero    19578  19564  0 23:16 pts/0    00:00:00 tbsvr_FGWP001  -t NORMAL -SVR_SID tas1
tibero    19579  19564  0 23:16 pts/0    00:00:03 tbsvr_AGNT     -t NORMAL -SVR_SID tas1
tibero    19580  19564  0 23:16 pts/0    00:00:05 tbsvr_DBWR     -t NORMAL -SVR_SID tas1
tibero    19581  19564  0 23:16 pts/0    00:00:02 tbsvr_RCWP     -t NORMAL -SVR_SID tas1
tibero    19582  19564  0 23:16 pts/0    00:00:04 tbsvr_ASSD     -t NORMAL -SVR_SID tas1
tibero    19583  19564  0 23:16 pts/0    00:00:00 tbsvr_SSIO     -t NORMAL -SVR_SID tas1
tibero    19584  19564  0 23:16 pts/0    00:00:16 tbsvr_ACSD     -t NORMAL -SVR_SID tas1
# confirm the tac1 processes are up
tibero    29150      1  8 23:48 pts/0    00:00:00 tbsvr          -t NORMAL -SVR_SID tac1	
tibero    29157  29150  0 23:48 pts/0    00:00:00 tbsvr_MGWP     -t NORMAL -SVR_SID tac1
tibero    29158  29150  0 23:48 pts/0    00:00:00 tbsvr_FGWP000  -t NORMAL -SVR_SID tac1
tibero    29159  29150  0 23:48 pts/0    00:00:00 tbsvr_FGWP001  -t NORMAL -SVR_SID tac1
tibero    29160  29150  0 23:48 pts/0    00:00:00 tbsvr_PEWP000  -t NORMAL -SVR_SID tac1
tibero    29161  29150  0 23:48 pts/0    00:00:00 tbsvr_PEWP001  -t NORMAL -SVR_SID tac1
tibero    29162  29150  0 23:48 pts/0    00:00:00 tbsvr_PEWP002  -t NORMAL -SVR_SID tac1
tibero    29163  29150  0 23:48 pts/0    00:00:00 tbsvr_PEWP003  -t NORMAL -SVR_SID tac1
tibero    29164  29150  3 23:48 pts/0    00:00:00 tbsvr_AGNT     -t NORMAL -SVR_SID tac1
tibero    29165  29150  2 23:48 pts/0    00:00:00 tbsvr_DBWR     -t NORMAL -SVR_SID tac1
tibero    29166  29150  9 23:48 pts/0    00:00:00 tbsvr_RCWP     -t NORMAL -SVR_SID tac1
tibero    29167  29150  0 23:48 pts/0    00:00:00 tbsvr_ASSD     -t NORMAL -SVR_SID tac1
tibero    29168  29150  0 23:48 pts/0    00:00:00 tbsvr_SSIO     -t NORMAL -SVR_SID tac1
tibero    29169  29150  2 23:48 pts/0    00:00:00 tbsvr_ACSD     -t NORMAL -SVR_SID tac1
tibero    29400  17343  0 23:48 pts/0    00:00:00 grep --color=auto tbsvr

# Add undo and redo for thread 1
[tibero@dbserver1 ~]$ tbsql sys/tibero

tbSQL 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

SQL> create undo tablespace undo1 datafile '+DS0/tbdata/undo1.dtf' size 128M
autoextend on next 10M maxsize 1G
extent management local autoallocate; 

Tablespace 'UNDO1' created.

SQL> alter database add logfile thread 1 group 3 '+DS0/tac/log02.log' size 100M;
alter database add logfile thread 1 group 4 '+DS0/tac/log12.log' size 100M;
alter database add logfile thread 1 group 5 '+DS0/tac/log22.log' size 100M;

Database altered.

Database altered.

Database altered.

SQL> alter database enable public thread 1;

Database altered.

SQL> q
Disconnected.

# Run system.sh
# It creates the default schemas and objects used by the system.
[tibero@dbserver1 ~]$ sh $TB_HOME/scripts/system.sh -p1 tibero -p2 syscat -a1 y -a2 y -a3 y -a4 y
Dropping agent table...
Creating text packages table ...
Creating the role DBA...
Creating system users & roles...
Creating example users...
Creating virtual tables(1)...
Creating virtual tables(2)...
...(output omitted)...
Create tudi interface
    Running /tibero/tibero6/scripts/odci.sql...
Creating spatial meta tables and views ...
Creating internal system jobs...
Creating Japanese Lexer epa source ...
Creating internal system notice queue ...
Creating sql translator profiles ...
Creating agent table...
Creating additional static views using dpv...
Done.
For details, check /tibero/tibero6/instance/tac1/log/system_init.log.
[tibero@dbserver1 ~]$ 
# done

(still node1)

# Register node1's VIP.
[tibero@dbserver1 ~]$ cmrctl add vip --name vip1 --node $CM_SID --svcname tac --ipaddr 192.168.56.121/255.255.255.0 --bcast 192.168.56.255
Resource add success! (vip, vip1)
[tibero@dbserver1 ~]$ cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas1 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 1
       cls1       db           tac1 UP(NRML) tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1  BOOTING tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
=====================================================================
# The VIP comes up: BOOTING -> UP
[tibero@dbserver1 ~]$ cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas1 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 1
       cls1       db           tac1 UP(NRML) tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1       UP tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
=====================================================================
[tibero@dbserver1 ~]$ ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.111  netmask 255.255.255.0  broadcast 192.168.56.255
        inet6 fe80::656d:43d2:a851:8952  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a0:36:5b  txqueuelen 1000  (Ethernet)
        RX packets 9176  bytes 667830 (652.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4151  bytes 506696 (494.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
# When VIP registration succeeds, CM adds this interface alias automatically.
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.56.121  netmask 255.255.255.0  broadcast 192.168.56.255
        ether 00:0c:29:a0:36:5b  txqueuelen 1000  (Ethernet)

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.1.101  netmask 255.255.255.0  broadcast 10.10.1.255
        inet6 fe80::20c:29ff:fea0:3665  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a0:36:65  txqueuelen 1000  (Ethernet)
        RX packets 54664  bytes 5651653 (5.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 40126  bytes 5521210 (5.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 31342  bytes 7716024 (7.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 31342  bytes 7716024 (7.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

B. node2

# vip1 is visible from here as well.
[tibero@dbserver2 ~]$ cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas2 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1    UP(R) tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
=====================================================================

# Register tac2 under the tac service
[tibero@dbserver2 ~]$ cmrctl add db --name tac2 --svcname tac --dbhome $TB_HOME --envfile /home/tibero/.tac_profile
Resource add success! (db, tac2)
[tibero@dbserver2 ~]$ cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas2 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 0
       cls1       db           tac2     DOWN tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1    UP(R) tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
=====================================================================

# Start tac2
[tibero@dbserver2 ~]$ cmrctl start db --name tac2
Listener port = 8629

Tibero 6  

TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).
BOOT SUCCESS! (MODE : NORMAL)
[tibero@dbserver2 ~]$ cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas2 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 0
       cls1       db           tac2 UP(NRML) tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1    UP(R) tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
=====================================================================

(node2)

# Register the VIP
[tibero@dbserver2 ~]$ cmrctl add vip --name vip2 --node $CM_SID --svcname tac --ipaddr 192.168.56.122/255.255.255.0 --bcast 192.168.56.255
Resource add success! (vip, vip2)

[tibero@dbserver2 ~]$ cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas2 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 0
       cls1       db           tac2 UP(NRML) tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1    UP(R) tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
       cls1      vip           vip2  BOOTING tac, 192.168.56.122/255.255.255.0/192.168.56.255 (2)
                                             failed retry cnt: 0
=====================================================================
[tibero@dbserver2 ~]$ cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas2 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 0
       cls1       db           tac2 UP(NRML) tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1    UP(R) tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
       cls1      vip           vip2       UP tac, 192.168.56.122/255.255.255.0/192.168.56.255 (2)
                                             failed retry cnt: 0
=====================================================================

 

 

19. Final verification

 

Check from node1 and node2 respectively.

[tibero@dbserver1 ~]$ cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas1 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 1
       cls1       db           tac1 UP(NRML) tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1       UP tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
       cls1      vip           vip2    UP(R) tac, 192.168.56.122/255.255.255.0/192.168.56.255 (2)
                                             failed retry cnt: 0
=====================================================================


[tibero@dbserver2 ~]$ cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas2 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 0
       cls1       db           tac2 UP(NRML) tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1    UP(R) tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
       cls1      vip           vip2       UP tac, 192.168.56.122/255.255.255.0/192.168.56.255 (2)
                                             failed retry cnt: 0
=====================================================================



[tibero@dbserver2 ~]$ cmrctl show service --name tac
Service Resource Info
=================================================
Service name    : tac
Service type    : Database
Service mode    : Active Cluster
Cluster         : cls1
Inst. Auto Start: OFF
Interrupt Status: COMMITTED
Incarnation No. : 4 / 4 (CUR / COMMIT)
=================================================
| INSTANCE LIST                                 |
|-----------------------------------------------|
| NID   NAME    Status  Intr Stat ACK No. Sched |
| --- -------- -------- --------- ------- ----- |
|   1      cm1 UP(NRML) COMMITTED       4     Y |
|   2      cm2 UP(NRML) COMMITTED       4     Y |
=================================================

[tibero@dbserver2 ~]$ cmrctl show service --name tas
Service Resource Info
=================================================
Service name    : tas
Service type    : Active Storage
Service mode    : Active Cluster
Cluster         : cls1
Inst. Auto Start: OFF
Interrupt Status: COMMITTED
Incarnation No. : 4 / 4 (CUR / COMMIT)
=================================================
| INSTANCE LIST                                 |
|-----------------------------------------------|
| NID   NAME    Status  Intr Stat ACK No. Sched |
| --- -------- -------- --------- ------- ----- |
|   1      cm1 UP(NRML) COMMITTED       4     Y |
|   2      cm2 UP(NRML) COMMITTED       4     Y |
=================================================

As a final test, create a table under a default DBA-privileged user (schema) in Tibero and confirm it is visible from the other node:

tbsql tibero/tmax
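
A minimal sketch of that check (the tac_test name is illustrative, and this assumes tbsql accepts SQL on stdin like other command-line clients):

# On node1: create and populate a test table.
tbsql tibero/tmax <<'EOF'
CREATE TABLE tac_test (id NUMBER, memo VARCHAR(50));
INSERT INTO tac_test VALUES (1, 'written on node1');
COMMIT;
EOF

# On node2: the same row should come back.
tbsql tibero/tmax <<'EOF'
SELECT * FROM tac_test;
EOF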

 

20. Stopping TAC

 

20-1. node1

Shut down in order: db -> tas -> cm. A compact sketch follows; the per-node transcripts are below.
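
The same sequence as one sketch per node, run as root (node1 SIDs; substitute tac2/tas2 on node2):

su - tibero -c 'export TB_SID=tac1; tbdown'   # 1. stop the TAC instance
su - tibero -c 'export TB_SID=tas1; tbdown'   # 2. stop the TAS instance
tbcm -d                                       # 3. stop CM last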

[tibero@dbserver1 ~]$ tac
[tibero@dbserver1 ~]$ tbdown

Tibero instance terminated (NORMAL mode).              

[tibero@dbserver1 ~]$ cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas1 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 1
       cls1       db           tac1     DOWN tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1    UP(R) tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
       cls1      vip           vip2    UP(R) tac, 192.168.56.122/255.255.255.0/192.168.56.255 (2)
                                             failed retry cnt: 0
=====================================================================
[tibero@dbserver1 ~]$ tas
[tibero@dbserver1 ~]$ tbdown

Tibero instance terminated (NORMAL mode).              

[tibero@dbserver1 ~]$ exit
logout
[root@dbserver1 ~]# cmrctl show
Resource List of Node cm1
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc1       UP (private) 10.10.1.101/19629
     COMMON  network           pub1       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc1, pub: pub1
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas     INTR Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac       UP Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas1     DOWN tas, /tibero/tibero6, failed retry cnt: 1
       cls1       db           tac1     DOWN tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1    UP(R) tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
       cls1      vip           vip2    UP(R) tac, 192.168.56.122/255.255.255.0/192.168.56.255 (2)
                                             failed retry cnt: 0
=====================================================================
[root@dbserver1 ~]# tbcm -d
CM DOWN SUCCESS!
[root@dbserver1 ~]# cmrctl show
[ERROR] Cannot connect to local CM

 

20-2. node2

[tibero@dbserver2 ~]$ tac
[tibero@dbserver2 ~]$ tbdown

Tibero instance terminated (NORMAL mode).              

[tibero@dbserver2 ~]$ cmrctl show
Resource List of Node cm2
=====================================================================
  CLUSTER     TYPE        NAME       STATUS           DETAIL         
----------- -------- -------------- -------- ------------------------
     COMMON  network           inc2       UP (private) 10.10.1.102/19629
     COMMON  network           pub2       UP (public) ens33
     COMMON  cluster           cls1       UP inc: inc2, pub: pub2
       cls1     file         cls1:0       UP +0
       cls1     file         cls1:1       UP +1
       cls1     file         cls1:2       UP +2
       cls1  service            tas       UP Active Storage, Active Cluster (auto-restart: OFF)
       cls1  service            tac     DOWN Database, Active Cluster (auto-restart: OFF)
       cls1       as           tas2 UP(NRML) tas, /tibero/tibero6, failed retry cnt: 0
       cls1       db           tac2     DOWN tac, /tibero/tibero6, failed retry cnt: 0
       cls1      vip           vip1     DOWN tac, 192.168.56.121/255.255.255.0/192.168.56.255 (1)
                                             failed retry cnt: 0
       cls1      vip           vip2     DOWN tac, 192.168.56.122/255.255.255.0/192.168.56.255 (2)
                                             failed retry cnt: 0
=====================================================================
[tibero@dbserver2 ~]$ tas
[tibero@dbserver2 ~]$ tbdown

Tibero instance terminated (NORMAL mode).              

[tibero@dbserver2 ~]$ exit
logout
[root@dbserver2 ~]# tbcm -d
CM DOWN SUCCESS!
[root@dbserver2 ~]# cmrctl show
[ERROR] Cannot connect to local CM

 

 

21. Notes

 

21-1. Registering the TAC resources in /etc/hosts (on both nodes)

Register the network resource names so they can be looked up quickly.

On production systems, where lookup latency matters, this step can be important.

vi /etc/hosts
#### Tibero CLUSTER Manager ## 
192.168.56.111 pub1
192.168.56.121 vip1
10.10.1.101 inc1

192.168.56.112 pub2
192.168.56.122 vip2
10.10.1.102 inc2
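
A quick resolution check after editing the file:

# Each name should resolve to the address registered above.
for h in pub1 vip1 inc1 pub2 vip2 inc2; do getent hosts "$h"; done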

 

 

Thanks for following along to the end.

 


[TAC Installation] Building TAS-TAC on VMware series

  1. [TAC Installation] Building TAS-TAC on VMware (1) https://novice-data.tistory.com/48
  2. [TAC Installation] Building TAS-TAC on VMware (2) https://novice-data.tistory.com/49
  3. [TAC Installation] Building TAS-TAC on VMware (3) https://novice-data.tistory.com/50
  4. [TAC Installation] Building TAS-TAC on VMware (4) https://novice-data.tistory.com/53
  5. [TAC Installation] Building TAS-TAC on VMware (5) https://novice-data.tistory.com/56

 

 

 
