Step-wise Installation Process
Lists the steps to install cQube V 5.0
After creating the EC2 instance as per the defined hardware requirements, make the following configurations:
Security Group Configuration:
- Port 80 inbound from 0.0.0.0/0
- Port 443 inbound from 0.0.0.0/0
- Port 8000 inbound from the Nginx private IP (so that Nginx can communicate with Kong)
- Port 5432 inbound only from the specific IPs that need database access (example CLI commands are shown after this list)
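The rules can be added from the console or the CLI; purely as an illustration, the inbound rules above could be created with the AWS CLI, where the security group ID and IPs are hypothetical placeholders:
# Placeholders: replace <security_group_id>, <nginx_private_ip> and <allowed_ip> with real values
aws ec2 authorize-security-group-ingress --group-id <security_group_id> --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <security_group_id> --protocol tcp --port 443 --cidr 0.0.0.0/0
# Port 8000 only from the Nginx private IP (Nginx -> Kong)
aws ec2 authorize-security-group-ingress --group-id <security_group_id> --protocol tcp --port 8000 --cidr <nginx_private_ip>/32
# Port 5432 only from the specific IP that needs database access
aws ec2 authorize-security-group-ingress --group-id <security_group_id> --protocol tcp --port 5432 --cidr <allowed_ip>/32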
Kong Configurations:
- Port 3000 is exposed on the /ingestion route
- Port 3001 is exposed on the /spec route
- Port 3003 is exposed on the /generator route (an illustrative route setup is shown after this list)
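These routes are configured by the install scripts. Purely as an illustration of the mapping, equivalent services and routes could be created through Kong's Admin API; the service names and upstream hosts below are assumptions:
# Illustrative Kong Admin API calls (default admin port 8001); names and hosts are placeholders
curl -s -X POST http://localhost:8001/services --data name=ingestion-ms --data url=http://ingestion-ms:3000
curl -s -X POST http://localhost:8001/services/ingestion-ms/routes --data 'paths[]=/ingestion'
curl -s -X POST http://localhost:8001/services --data name=spec-ms --data url=http://spec-ms:3001
curl -s -X POST http://localhost:8001/services/spec-ms/routes --data 'paths[]=/spec'
curl -s -X POST http://localhost:8001/services --data name=generator-ms --data url=http://generator-ms:3003
curl -s -X POST http://localhost:8001/services/generator-ms/routes --data 'paths[]=/generator'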
An AWS Identity and Access Management (IAM) user is an entity created in AWS to represent the person or application that uses it to interact with AWS. A user in AWS has a name and credentials. An IAM user with administrator permissions is different from the AWS account root user. An IAM user with a suitable role has to be created to provide connectivity between EC2 and S3; the role should have list, read and write permissions.
S3 Buckets: Create the following S3 buckets (an example CLI command is shown after this list) -
- Archiving
- Error logging
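For example, the buckets could be created with the AWS CLI; the bucket names are placeholders and must be globally unique:
aws s3 mb s3://<archived_bucket_name> --region <aws_region>
aws s3 mb s3://<error_bucket_name> --region <aws_region>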
IAM User:
- Create an IAM user
- Assign IAM policy to the user
- Download the access key and secret key
IAM Policy:
- Create an IAM policy from AWS IAM
- Provide access to list, read and write the objects in the S3 buckets (an example policy and the corresponding CLI commands are shown below)
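The following is a minimal sketch of how the user, policy and keys could be created from the AWS CLI. The policy name cqube-s3-access, the user name <cqube_iam_user> and the bucket names are placeholders/assumptions; adjust them to your account's conventions:
# Example policy document granting list, read and write on the two buckets (placeholders inside)
cat > s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<archived_bucket_name>", "arn:aws:s3:::<error_bucket_name>"] },
    { "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::<archived_bucket_name>/*", "arn:aws:s3:::<error_bucket_name>/*"] }
  ]
}
EOF
# Create the user, attach the policy and generate the access/secret key pair
aws iam create-user --user-name <cqube_iam_user>
aws iam create-policy --policy-name cqube-s3-access --policy-document file://s3-policy.json
aws iam attach-user-policy --user-name <cqube_iam_user> --policy-arn arn:aws:iam::<aws_account_id>:policy/cqube-s3-access
aws iam create-access-key --user-name <cqube_iam_user>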
Step-by-Step Installation Process:
Step - 1: Connect to the cQube AWS EC2 instance.
For Linux and macOS:
- Download the .pem file which is generated while creating the EC2 instance
- Open the terminal and navigate to the folder where .pem file has been downloaded
- Then give read permission to the .pem file using the following command
sudo chmod 400 <aws.pem>
- Use the following command to connect to the instance
ssh -i <path_to_the_pem_file> <user_name>@<public_ip_of_the_instance>
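For example, on a default Ubuntu AMI the username is typically ubuntu (the exact username depends on the AMI used):
ssh -i aws.pem ubuntu@<public_ip_of_the_instance>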
For Windows:
- Download the .pem file which is generated while creating the EC2 instance
- Use PuTTYgen to convert the .pem file into a .ppk private key, then connect to the instance using PuTTY with that key.
Step - 2: Clone the cqube-devops repository using the following command:
git clone <cqube-devops_repository_url>

Step - 3: Navigate to the directory where cQube is cloned or downloaded and checkout to the desired branch
cd cqube-devops/ && git checkout dev

Step - 4: Give execute permission to the install.sh file using the following command
sudo chmod u+x install.sh
Step - 5: Install cQube as a non-root user with sudo privileges
sudo ./install.sh

The install.sh file is a shell script that runs supporting shell scripts and an Ansible playbook to set up cQube.
Step - 6: User Input Variables - These are the variables which need to be entered by the user, following the hints provided:
- state_name ( Enter the required state code by referring to the state list provided )
- api_end_point ( Enter the URL on which cQube is to be configured )
- s3_access_key
- s3_secret_key
- s3 archived bucket name
- s3 error bucket name

Step - 7: Optional_variables - The database credentials have default values. If users wish to enter their own credentials, they should choose yes when the question pops up; otherwise they can choose no to keep the defaults.
- db_user_name ( Enter the postgres database username )
- db_name ( Enter the postgres database name )
- db_password ( Enter the postgres password )

Step - 8: Once the config file is generated, a preview of it is displayed, followed by a question: choosing yes lets the user re-enter the configuration values, while choosing no lets install.sh move on to the next section.

Step - 9: Once the installation is completed, you will be shown the following messages and the required reference URLs.
cQube Installed Successfully
cQube ingestion API can be accessed using <domain_name>
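As an optional check, the ingestion route exposed through Kong/Nginx can be probed from any machine that can reach the domain (the response code depends on the API; use http:// if TLS is not configured):
curl -I https://<domain_name>/ingestion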

The following steps define the cQube setup and workflow completion processes in AWS. cQube mainly comprises the areas mentioned below:
- 1. EC2 Server
- 2. IAM user and Role creation for S3 connectivity.
The cQube network setup process is described in the block diagram below:

Following are the details of the micro-services which get installed in the cQube server.
- Ingestion-ms: The ingestion-ms is used to upload the data for the events, datasets, dimensions, transformers and pipelines. All of these APIs ingest the data into cQube.
- Spec-ms: The spec-ms is used to import the schemas of the events, datasets, dimensions, transformers and pipelines. All of these specs are defined by the cQube platform prior to ingesting the data into cQube. The specifications are derived by considering the KPIs as the indicators.
- Generator-ms: The generator-ms is used to create the specs & transformers for the derived datasets - performing the aggregation logic, updating data to datasets based on transformations, and providing status updates on file processing.
- Nifi-ms: Apache NiFi is used as a real-time integrated data logistics and simple event processing platform.
- Postgres-ms: The Postgres micro-service contains the schema and tables.
- Nginx-ms: Nginx is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it to the upstream servers.
- Kong-ms: It is a lightweight API Gateway that secures, manages, and extends APIs and micro-services.
The install.sh file is a shell script that runs the following shell scripts and Ansible playbooks to set up cQube.
Basic_requirements.sh:
This script updates and upgrades the software packages on the server and installs basic software such as the following (a rough sketch is shown after this list):
- Python3
- Pip3
- Ansible
- Docker
- Docker compose
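As a rough sketch only of what such a script typically runs on an Ubuntu server (the actual package names and the Docker installation method used by the script may differ):
# Illustrative only - update packages and install the base tooling
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y python3 python3-pip ansible docker.io docker-compose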
Config_file_generator.sh:
This script is used to generate a configuration file which contains some constant values; a few required variables should be entered by the user. The following variables get added to the config file (a sketch of how the constants could be derived is shown after this list).
- System_user_name
- base_dir
- Private_ip
- aws_default_region
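For illustration only, constants like these could be derived on the server itself; the exact commands used by the generator script are an assumption:
whoami                          # system_user_name
pwd                             # base_dir (directory from which the installation is run)
hostname -I | awk '{print $1}'  # private_ip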
Note: Users should follow the hints provided in the description and should enter the variables accordingly. If the entered value is wrong then an error message gets displayed and the user should modify the variable value accordingly.
User Input Variables - These are the variables which need to be entered by the user, following the hints provided:
- state_name (Enter the required state code by referring to the state list provided)
- api_end_point (Enter the URL on which cQube is to be configured)
- s3_access_key
- s3_secret_key
- s3 archived bucket name
- s3 error bucket name
Optional_variables - The database credentials have default values. If users wish to enter their own credentials, they should choose yes when the question pops up; otherwise they can choose no to keep the defaults.
- db_user_name ( Enter the postgres database username )
- db_name ( Enter the postgres database name )
- db_password ( Enter the postgres password )
Once the config file is generated, a preview of the config file is displayed followed by a question where the user gets an option to re-enter the configuration values on choosing yes. If option no is selected then the install.sh moves to the next section.
Repository_clone.sh:
This script clones the following repositories into the micro-services directory and checks out the required release branch:

Note: If the repository is already cloned then the script will pull the updated code.
Ansible-playbook
- 1. install.yml - The install.yml Ansible playbook is triggered; it runs the required roles to build the images of the following micro-services:
- Ingestion-ms
- Spec-ms
- Generator-ms
- Postgres-ms
- Nifi-ms
- Kong-ms
- Nginx-ms

- 2. compose.yml - The compose.yml Ansible playbook is triggered; it brings all the containers up to the running state (an illustrative invocation is shown below).
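Purely as an illustration of what install.sh drives (the actual playbook paths, inventory and extra variables are defined by the script itself):
ansible-playbook install.yml   # builds the micro-service images via the roles above
ansible-playbook compose.yml   # brings up all the containers with Docker Compose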
Note: The following commands can be used from the Ansible directory to bring the containers down and to start them again, respectively.
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose.yml up -d
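On servers where Docker Compose is installed as a Docker CLI plugin rather than the standalone docker-compose binary, the equivalent commands are:
docker compose -f docker-compose.yml down
docker compose -f docker-compose.yml up -d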
Once the installation is completed, you will be shown the following messages and the required reference URLs.
cQube Installed Successfully

We can check the running status of the containers using the following command:
sudo docker ps
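To list only the container names and their status, the standard --format flag of docker ps can be used:
sudo docker ps --format '{{.Names}}: {{.Status}}'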
