OpenWiFi Cloud SDK may be deployed to both public and private cloud environments.
Helm charts support automated deployment of the Cloud SDK PODs and containers into AWS and MicroK8s Kubernetes platforms.
The following pages provide step-by-step instructions to deploy to the public cloud (AWS) or to a private, on-premises cloud (MicroK8s).
TIP Controller Local Deployment
MicroK8s deployment is available as part of the Release 1.0 candidate. It provides the API services, database, and message bus, along with the ability to adjust Kubernetes POD performance parameters, which may be useful to the Community for local on-premises installations.
A snap-capable operating system is required for MicroK8s installation. The TIP Controller has been installed on an Ubuntu 20 system with 32 GB of memory, a 500 GB disk, and a Gigabit Ethernet network interface, with a user account named tip created.
The system should have a fully qualified domain name, and the deployment of the TIP Controller will require additional DNS records to be created.
The local /etc/hosts file should contain the following DNS entries for your controller, assigned to the IP address your machine uses to connect to the network. This same IP will be used when configuring the MetalLB address in a subsequent step.
When accessing the UI from a workstation, or when an AP is connecting to the local controller, the local DNS server will need to provide an authoritative response for these A records in the wlan.local domain. A future release of the TIP Controller will include instructions to modify the FQDN of all Controller services.
DNS default entries for /etc/hosts
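A representative set of entries, assuming the host IP is 10.1.1.10 and using the service hostnames referenced elsewhere in this guide (adjust both to your environment):
10.1.1.10   opensync-controller.wlan.local
10.1.1.10   opensync-redirector.wlan.local
10.1.1.10   opensync-mqtt-broker.wlan.local
10.1.1.10   wlan-filestore.wlan.local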
Install microk8s
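A minimal sketch of the installation command, using the standard snap package:
sudo snap install microk8s --classic --channel=latest/stable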
The above specifies that the latest stable release will be installed.
Set user permissions
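The usual MicroK8s permission setup adds the current user to the microk8s group and takes ownership of the kube configuration directory:
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube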
Source your shell profile, or log out and back in, for the group change to apply to the current user ($USER).
Setup MicroK8s
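A sketch of the add-on setup; the exact add-on list may vary by release, but DNS, storage, ingress, and MetalLB are the add-ons assumed by the steps that follow:
microk8s enable dns storage ingress metallb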
MetalLB will request an IP address range. Specify the IP of the Gigabit Ethernet interface. If your interface address is 10.1.1.10, then provide MetalLB with: 10.1.1.10-10.1.1.10
TIP Controller may be deployed with self-signed certificates for a local lab environment. The following steps will guide the reader through that process.
Install Keytool and Zip Packages
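A sketch of the package installation on Ubuntu; keytool ships with the Java runtime package named later in this guide:
sudo apt install -y openjdk-11-jre-headless zip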
Enable Firewall to permit Controller traffic from Container Network Interface
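A sketch of the firewall rules, assuming ufw and the default MicroK8s cni0 container network interface:
sudo ufw allow in on cni0
sudo ufw allow out on cni0
sudo ufw default allow routed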
Change to a directory for configuring certificates and running the controller. This can be within the tip home directory.
This step only applies when using locally built AP images; all devices purchased via ODM partners ship from the factory with TIP-signed keys. Updating the Redirector value on the device is the recommended way to connect to a local Cloud SDK.
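A sketch of the repository checkout, assuming the public Telecom Infra Project GitHub organization hosts both repositories:
git clone https://github.com/Telecominfraproject/wlan-pki-cert-scripts.git
git clone https://github.com/Telecominfraproject/wlan-cloud-helm.git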
From the current directory, two sub-directories now exist: wlan-pki-cert-scripts and wlan-cloud-helm.
Enter the PKI directory and its configs sub-directory: cd wlan-pki-cert-scripts/configs
In each of the PKI certificate configuration files, set the organizationalUnitName_default value to your organization name or another string of your choosing. Optionally, this may be left unchanged.
Within the following files, ensure the FQDN (Fully Qualified Domain Name) matches your local DNS setup. The files are updated per:
mqtt-server.cnf
commonName_default = opensync-mqtt-broker.FQDN
openssl-server.cnf
DNS.1 = opensync-redirector.FQDN
DNS.2 = opensync-controller.FQDN
Once complete, generate the service certificates and copy them to the controller.
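A sketch using the PKI scripts named later in this guide, assuming the wlan-cloud-helm folder sits alongside the PKI folder in the tip home directory:
cd ~/wlan-pki-cert-scripts
./generate_all.sh
./copy-certs-to-helm.sh ~/wlan-cloud-helm/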
Note within the wlan-pki-cert-scripts folder, a subfolder /generated is present after key creation.
The AP.zip archive in the generated folder contains the Access Point certificates, which are loaded into the /usr/opensync/certs folder on each AP.
The TIP Controller defaults to a domain of wlan.local. It is possible to operate a lab DNS service permitting local resolution of this domain for the TIP Controller services. Certificate instructions for a self-signed private domain will follow in a subsequent release of the service and documentation.
Helm will deploy the Controller containers within a tip namespace to MicroK8s on the machine.
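A representative invocation is sketched below; the tip-wlan chart name and the environment values file are assumptions to verify against the cloned wlan-cloud-helm repository:
helm upgrade --install tip-wlan ./wlan-cloud-helm/tip-wlan --namespace tip --create-namespace -f <your-environment-values>.yaml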
To check the status of the PODs, Services, and Persistent Volume Claims (storage), use the following commands. Please note that, depending on your server, all PODs may take several minutes to fully initialize.
get svc returns the network Services with each container's port mappings, as well as the list of services mapped to the external IP address defined during the MetalLB configuration stage earlier.
get pvc returns the Persistent Volume Claims binding the containers to the MicroK8s storage service.
get pods returns the current state of all containers. The controller requires two to four minutes for all pods to reach the 'Running' state.
Examples of all three commands:
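Assuming the deployment landed in the tip namespace (on MicroK8s, kubectl may be invoked as microk8s kubectl):
microk8s kubectl get svc -n tip
microk8s kubectl get pvc -n tip
microk8s kubectl get pods -n tip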
In the earlier stage when self-signed keys were created for the controller, keys were also created to support Access Point connections over SSL to the newly deployed controller.
To obtain these keys, return to the /wlan-pki-cert-scripts/generated folder and copy the AP.zip archive containing the Access Point keys. Extract this archive and, using secure copy (SCP), transfer the keys to the /usr/opensync/certs folder on the AP.
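A minimal sketch of the transfer, where the extraction directory and the AP address 10.1.1.20 are illustrative:
unzip AP.zip -d ap-certs
scp ap-certs/* root@10.1.1.20:/usr/opensync/certs/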
The above assumes the MicroK8s system has IP connectivity to the Access Point. If this is not possible, copy the AP.zip file to a machine that has SCP access to the AP, extract the files, and copy them to the AP folder per the above.
In the current release of the TIP Controller using self-signed certificates, Access Points communicate with the TIP Controller using OpenSync. Access Points are directed to the controller at this time using local configuration.
The default TIP Open AP username and password are 'root' and 'openwifi'.
The IP address (or FQDN) is the same as the MetalLB-exposed External IP shown in earlier steps.
TIP Controller Deployment Instructions
The following must be available to start this installation:
AWS Account
AWS Route 53 Hosted Zone
The steps outlined in this guide create a local installation of the Terraform automation environment, which also depends on the AWS command line interface, Helm, and Kubectl command line packages.
Each of these applications provide detailed instructions for installation on multiple client operating systems.
If the instructions for Terraform were followed, a Docker container was created locally, nginx was run, and the default nginx web page was displayed at localhost:80, after which 'terraform destroy' was run to remove the local Terraform test.
If the instructions for AWS CLI were followed, AWS CLI version 2 has been installed. This may be verified using the commands which aws and aws --version. If these succeed, installation of the AWS CLI is complete.
If the instructions for Helm were followed, Helm has been locally installed. This may be verified using the commands which helm and helm version. If these succeed, installation of Helm is complete.
If the instructions for Kubectl were followed, Kubectl has now been locally installed. This may be verified using the commands which kubectl and kubectl version --client. If these succeed, installation of Kubectl is complete.
If any of the above have not completed, please refer back to the specific installation instructions from the package provider listed in the Prerequisite links before continuing.
Create a workspace on your local system and clone in the TIP Controller project.
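A sketch of the workspace setup; the tip-wlan-cloud directory name follows the directory referenced later in this guide, and the repository URL assumes the public Telecom Infra Project GitHub organization:
mkdir ~/tip-wlan-cloud && cd ~/tip-wlan-cloud
git clone https://github.com/Telecominfraproject/wlan-cloud-helm.git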
Create a Terraform file named aws.tf in the aws-cloudsdk directory, adding the following to that new file:
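The exact contents depend on your environment; a minimal illustrative sketch is a provider block such as the following, where the region is an assumption (it matches the sample ACM output later in this guide):
# aws.tf -- minimal provider configuration; adjust the region to your environment
provider "aws" {
  region = "ca-central-1"
}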
Terraform will use the configuration of your environment variables from the previous step when communicating with AWS. Prior to initializing Terraform, ensure authentication is successful.
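Authentication can be checked with the AWS CLI, for example:
aws configure                  # set access key, secret key, and default region if not already configured
aws sts get-caller-identity    # returns your account and ARN when authentication succeeds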
Within the /wlan-cloud-helm/terraform/aws-cloudsdk directory, copy the terraform.tfvars.sample file to terraform.tfvars and edit the new terraform.tfvars, replacing the parameter values for cidr and route53_zone_name accordingly:
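For example, with illustrative values for both parameters:
cp terraform.tfvars.sample terraform.tfvars
# terraform.tfvars -- example values only; substitute your own
cidr              = "10.0.0.0/16"
route53_zone_name = "example.com"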
Once these steps have been completed, it is now possible to deploy the TIP Controller to AWS.
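With authentication verified, the standard Terraform workflow applies:
terraform init
terraform apply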
If Terraform is able to connect and authenticate to AWS, a prompt to accept the creation of the deployment is presented. Answer yes to proceed. Terraform will execute for 10-15 minutes, during which time the following are configured:
EKS cluster with three nodes where CloudSDK will run
VPC for the EKS cluster
ACM certificate that will secure the public HTTPS services exposed by CloudSDK
Route53 record to let ACM know that you own the domain
Required IAM roles for all components
When Terraform completes, output similar to the following should be emitted:
Apply complete! Resources: 57 added, 0 changed, 0 destroyed.
Outputs:
acm_arn = "arn:aws:acm:ca-central-1:1xxxxxxx68:certificate/7e3xxxx7-74xx-4xxx-8bef-d6xxxxxx5a3"
TIP Controller services use SSL certificates to ensure inter-service security. These certificates must be generated. To generate them, navigate out of the tip-wlan-cloud directory to a directory where the TIP PKI repository may be cloned:
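For example, assuming the public Telecom Infra Project repository:
cd ..
git clone https://github.com/Telecominfraproject/wlan-pki-cert-scripts.git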
Enter the PKI directory and its configs sub-directory: cd wlan-pki-cert-scripts/configs
In each of the PKI certificate configuration files, set the organizationalUnitName_default value to your organization name or another string of your choosing. Optionally, this may be left unchanged.
Within the following files, ensure the FQDN (Fully Qualified Domain Name) matches the Terraform setup for the Route53 hosted zone. If the defaults were not changed, Terraform will have created a sub-domain cloudsdk within the supplied Route53 hosted zone. The files are updated per:
mqtt-server.cnf
commonName_default = opensync-mqtt-broker.cloudsdk.route53hosted-zone_name
openssl-server.cnf
DNS.1 = opensync-redirector.cloudsdk.route53hosted-zone_name
DNS.2 = opensync-controller.cloudsdk.route53hosted-zone_name
To generate keys, ensure necessary Java resources are installed for your operating system:
openjdk-11-jre-headless
default-jdk
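On an Ubuntu system, for example:
sudo apt install -y openjdk-11-jre-headless default-jdk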
From within the wlan-pki-cert-scripts folder, execute the ./generate_all.sh script.
Copy the generated keys, assuming the wlan-cloud-helm folder is at the same level as the wlan-pki-cert-scripts folder: ./copy-certs-to-helm.sh ~/wlan-cloud-helm/
Note within the wlan-pki-cert-scripts folder, a subfolder /generated is present after key creation.
The AP.zip archive in the generated folder contains the Access Point certificates, which are loaded into the /usr/opensync/certs folder on each AP.
Certain TIP Charts have upstream dependencies; from the wlan-cloud-helm folder, execute:
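A sketch of the dependency update, where the tip-wlan chart directory name is an assumption to verify against the repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm dependency update tip-wlan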
Various Bitnami charts, such as Kafka, PostgreSQL, and Cassandra, will be brought into the deployment.
Localizing the Helm charts for AWS is a critical step; it involves editing the environment values file for AWS.
Key variables to replace in this file are:
alb.ingress.kubernetes.io/certificate-arn: "Enter the ARN supplied during terraform output"
All locations with a URL or FQDN align to your AWS environment, such as:
external-dns.alpha.kubernetes.io/hostname: wlan-filestore.cloudsdk.route53hostedzone
ovsdb: opensync-controller.cloudsdk.route53hostedzone
mqtt: opensync-mqtt-broker.cloudsdk.route53hostedzone
All occurrences of a URL or FQDN within this file, replacing route53hostedzone with your deployed Route53 hosted zone name
Associate the local kubeconfig with the AWS service, enabling Helm control of EKS resources, by entering the following:
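A sketch of the association; the region and cluster name are assumptions to replace with your own values (the cluster name appears in the Terraform output):
aws eks --region ca-central-1 update-kubeconfig --name <your-eks-cluster-name>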
With all dependencies met, certificates created and in place, using Helm and the authenticated session to AWS, execute the following:
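A representative invocation matching the release name and namespace shown in the output below; the chart path and values file are assumptions carried over from the localization step above:
helm upgrade --install cloudsdk ./tip-wlan --namespace tip --create-namespace -f <your-aws-values-file>.yaml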
If a connection error occurs, the AWS CLI may need to re-authenticate. This can be done with aws configure.
If Helm has a successful connection to AWS, the following is returned:
Release "cloudsdk" does not exist. Installing it now.
NAME: cloudsdk
LAST DEPLOYED: Sun Feb 7 14:00:28 2021
NAMESPACE: tip
STATUS: deployed
REVISION: 1
TEST SUITE: None
To check the status of the PODs now launching in AWS, enter kubectl get pods -n tip to return results from the operational EKS Kubernetes CloudSDK cluster:
After 5 to 8 minutes elapse, all POD services should be operational, for example:
The default username is support@example.com and the password is support.
Access to AWS should have been satisfied with an AWS account, as noted in the Prerequisites; this account must be entitled with Administrator-level permissions. If a specific authentication method already exists in your local machine environment for connecting to AWS, adjustments may be required; please consult the Terraform instructions accordingly.