1.2 Connect 11 Installation Overview

Prerequisites

Connect 11 deployment for AWS assumes the following requirements are met.

  • One or more Kubernetes (K8s) Linux control plane nodes, each with 8 GB memory and 250 GB storage.

  • Minimum of two K8s Linux worker nodes, each with 8 GB memory and 250 GB storage.

  • K8s Metrics Server installed and accessible.

  • K8s Dashboard installed and accessible.

  • Oracle Database installed and network-accessible from the worker nodes.

  • A Linux client with the AWS CLI, Docker, Helm, and kubectl installed, plus K8s admin and repository credentials to access and administer the container registry and the K8s cluster. (This is referred to as the Installation Client in this document.)

  • The unzip and xsltproc Linux utilities installed.

  • Sudo privileges.

Installation Client checklist

The following commands verify that the client is ready for installation:

  • docker images

  • helm list

  • aws --version

  • aws configure

  • aws ecr get-login-password --region yourRegion | docker login --username AWS --password-stdin yourAWSAccount

  • kubectl get nodes

  • unzip (verify it is present)

  • xsltproc (verify it is present)

  • sudo ls (verify sudo privileges)
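The checklist above can be partially automated. The following sketch (a hypothetical helper, not part of the Connect distribution) only checks that each required tool is on the PATH; credentials and cluster access still need the interactive commands above.

```shell
#!/usr/bin/env bash
# Hypothetical preflight sketch: reports whether each tool the Installation
# Client checklist relies on is present. It does not verify credentials.
check_tool() {
    command -v "$1" >/dev/null 2>&1
}

for tool in docker helm aws kubectl unzip xsltproc; do
    if check_tool "$tool"; then
        echo "ok:      $tool"
    else
        echo "MISSING: $tool"
    fi
done
```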

Installation of Connect 11

Make sure the database schema has been created with the proper permissions and tablespaces.

Unzip the Connect distribution to /opt/connect.install/kc.

From this point on, $KCHOME refers to the path /opt/connect/kc.

  1. Run $KCHOME/install/bootstrap.sh (or bootstrap.sh --offline). This creates an install.properties file and downloads Java and any other files needed.

  2. Edit the database.conf file in $KCHOME/config.

  3. Run $KCHOME/initschema/runliquibase.sh updateSQL. This outputs the SQL changes that Liquibase will make.

  4. Run $KCHOME/initschema/runliquibase.sh update. This applies the schema changes shown by updateSQL.

  5. Run $KCHOME/install/seeddb.sh Default "Customer Care" [email protected]
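For review, the steps above can be collected into one ordered list. This sketch only prints the commands in sequence (the $KCHOME default below is an assumption; step 2 is a manual edit, not a command):

```shell
#!/usr/bin/env bash
# Dry-run sketch: prints the Connect install steps in order so they can be
# reviewed before execution. KCHOME default is an assumption; adjust as needed.
KCHOME="${KCHOME:-/opt/connect/kc}"

steps=(
  "$KCHOME/install/bootstrap.sh"
  "(edit $KCHOME/config/database.conf by hand)"
  "$KCHOME/initschema/runliquibase.sh updateSQL"
  "$KCHOME/initschema/runliquibase.sh update"
  "$KCHOME/install/seeddb.sh Default \"Customer Care\" [email protected]"
)

for i in "${!steps[@]}"; do
  printf '%d. %s\n' "$((i + 1))" "${steps[$i]}"
done
```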

Create Kubernetes Deployment

Create Deployment Records

Navigate to $KCHOME/install

Run the following command:

installer.sh installk8s ../install.kubernetes/deployment/deployment-kubernetes.conf


This creates deployment records in the database and files in $KCHOME/deploy/kubernetes.

The following subdirectories are created in $KCHOME/deploy/kubernetes:

  • java

  • kc

In the pod, these directories are mounted at the following paths:

  • /opt/brickst/kc

  • /opt/brickst/java

Create the brickst namespace

kubectl create namespace brickst

If you are rebuilding an existing cluster, you may want to delete the namespace first:

kubectl delete namespace brickst

In $KCHOME/install.kubernetes, make sure you are logged in to your AWS account with the AWS CLI.

Run the following command:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 0791xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com
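The login command above hard-codes the account and region in two places. A small helper (hypothetical, not part of the installer; the account/region values are the same examples used above) can compose the registry hostname so the two stay in sync:

```shell
#!/usr/bin/env bash
# Hypothetical helper: builds the ECR registry hostname from an account ID
# and region, then logs Docker in to it. Values below are placeholders.
ecr_registry() {
    local account="$1" region="$2"
    printf '%s.dkr.ecr.%s.amazonaws.com' "$account" "$region"
}

ecr_login() {
    local account="$1" region="$2"
    aws ecr get-login-password --region "$region" \
        | docker login --username AWS --password-stdin "$(ecr_registry "$account" "$region")"
}

# Example (same placeholder values as the command above):
# ecr_login 0791xxxxxxxxxx us-east-1
```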


If needed, edit $KCHOME/install.kubernetes/wildfly/wildfly.config:

#
# wildfly container config
#
WFIMAGE=jboss/wildfly:21.0.2.Final
database_type=oracle
#database_type=mysql
#database_type=sql
jdbc_oracle=ojdbc8-21.1.0.0.jar
jdbc_mysql=mysql-connector-java-8.0.23.jar
jdbc_sql=mssql-jdbc-8.2.1.jre8.jar

  • WFIMAGE - the WildFly image to be used.

  • database_type - the database type to use. Uncomment the database_type in use.

  • jdbc_* - the JDBC driver to be used for each database type.
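Because wildfly.config is plain KEY=VALUE text, the database_type/jdbc_* pairing can be illustrated by sourcing it in a shell. This is a sketch of how the selection resolves, not necessarily how the installer scripts consume the file:

```shell
#!/usr/bin/env bash
# Illustrative only: shows how database_type selects one of the jdbc_* jars
# in wildfly.config. The real installer scripts may read the file differently.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
WFIMAGE=jboss/wildfly:21.0.2.Final
database_type=oracle
jdbc_oracle=ojdbc8-21.1.0.0.jar
jdbc_mysql=mysql-connector-java-8.0.23.jar
jdbc_sql=mssql-jdbc-8.2.1.jre8.jar
EOF

. "$cfg"                              # plain KEY=VALUE, so it sources cleanly
driver_var="jdbc_${database_type}"    # e.g. jdbc_oracle
driver_jar="${!driver_var}"           # bash indirect expansion
echo "image:  $WFIMAGE"
echo "driver: $driver_jar"
rm -f "$cfg"
```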

Create the brickst and brickstwar images

Create the brickst and brickstwar images by running $KCHOME/install.kubernetes/updateAwsImages.sh. Make sure there is an AWS repository username to prefix the repository and images, and that the named repositories exist and are accessible (in this example, the repositories user-brickst and user-brickstwar already exist).

./updateAwsImages.sh (d|w|a) (your AWS repo account) (repo name for the image) (Connect release version)

  (d = brickst image, w = brickstwar image, a = both)
 
Example for brickst container
bash ./updateAwsImages.sh d 570993xxxxxxx.dkr.ecr.us-east-1.amazonaws.com user-brickst v11.0.0
 
Example for brickstwar container
bash ./updateAwsImages.sh w 570993xxxxxxx.dkr.ecr.us-east-1.amazonaws.com user-brickstwar v11.0.0
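The account, repo-name, and version arguments above resolve to one full image reference. This hypothetical helper (not part of the distribution) composes that reference, which can be useful for double-checking arguments before running updateAwsImages.sh:

```shell
#!/usr/bin/env bash
# Hypothetical helper: composes the image reference that the repo-account,
# repo-name, and version arguments above resolve to.
image_ref() {
    local repo_account="$1" repo_name="$2" version="$3"
    printf '%s/%s:%s' "$repo_account" "$repo_name" "$version"
}

# Example using the brickst arguments above:
image_ref 570993xxxxxxx.dkr.ecr.us-east-1.amazonaws.com user-brickst v11.0.0
echo
```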


Deploy Connect Application

  1. Run helm dependency update in $KCHOME/install.kubernetes/helm.

  2. Edit values.yaml - add the secret and repository/image names (image and imagePullSecrets; the rest is optional).

    brickst:
      image: localhost:32000/brickst:registry
      # controls the pull policy of the component pods
      # values: IfNotPresent/Always/Never
      pullPolicy: IfNotPresent
      createConfig: true
      configName: brickst-deploy-config
      drnNodeName: brickst-drn
      dmzNodeName: brickst-dmz
      createSecret: true
      dbSecretName: brickst-deploy-secret
      dbSecretPassKey: brickst-db-password
      dbPassword:
      componentBasePort: 1600

    imagePullSecrets: []
    #- name: regcred

    image - the path to the image in a private registry
    imagePullSecrets - the secret from which Kubernetes should get the registry credentials

  3. Run $KCHOME/install.kubernetes/helmHelper.sh install brickst ./helm --set global.storageClass=gp2 --namespace brickst
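If the images live in a private registry, the regcred secret referenced by imagePullSecrets must exist in the brickst namespace before the pods can pull. This sketch only prints the command that would create it (registry, region, and secret name are the example values used earlier), so it can be reviewed before running:

```shell
#!/usr/bin/env bash
# Sketch: prints (rather than runs) the command that would create the
# "regcred" pull secret referenced by imagePullSecrets. The registry and
# region below are the placeholder values used earlier in this document.
registry="0791xxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com"

cmd="kubectl create secret docker-registry regcred \
  --namespace brickst \
  --docker-server=$registry \
  --docker-username=AWS \
  --docker-password=\$(aws ecr get-login-password --region us-east-1)"

echo "$cmd"
```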