
Access Management of GCP instances by configuring Google Cloud Identity LDAP for user login


Written by Madan Mohan K, Associate Cloud Architect

“One Man’s TRASH can be another Man’s IDENTITY”-Identity Access Management

Maintaining separate identity management systems for SaaS apps and for traditional apps/infrastructure results in complexity, fragmented security, and additional cost.

To overcome this, Google launched Secure LDAP, which lets you manage access to SaaS apps and traditional LDAP-based apps/infrastructure using a single cloud-based identity and access management (IAM) solution.

Cloud Identity:

A unified identity, access, app, and endpoint management (IAM/EMM) platform that helps IT and security teams maximize end-user efficiency, protect company data, and transition to a digital workspace.

LDAP (Lightweight Directory Access Protocol) is an application protocol for querying and modifying items in directory service providers like Active Directory, which supports a form of LDAP.

Platform Used:

  • G-Suite Admin with Cloud Identity Premium
  • Google Cloud

G-Suite Admin:

Create an LDAP client from Apps in the G-Suite Admin console.

G-Suite Admin Console

On the LDAP apps page, click the “ADD CLIENT” button and key in the required details.

LDAP client creation

Under Access Permissions, you will have 3 settings:

  • Verify user credentials
  • Read user information
  • Read group information

In this illustration, we chose to go with the entire domain option. If you wish to restrict access, you can limit user access to a specific OU.

In the “Read group information” section, change the option to On and click the “ADD LDAP CLIENT” button to create the client.

Once the configuration is done, when prompted with a Google SSL certificate, click “Download certificate” and then “CONTINUE TO CLIENT DETAILS”.

The service status should be ON. On the Status page, select “ON for everyone” and click “SAVE”.

Well, that is all at the G-Suite Admin console.

Google Cloud:

  • Create an instance in GCP. In this example, we chose to use Ubuntu 16.
  • Update the instance using sudo apt update -y
  • Install the SSSD package using sudo apt install -y sssd sssd-tools
  • Once the installation is done, create a new file at /etc/sssd/sssd.conf using vi /etc/sssd/sssd.conf or your preferred editor.

The sssd.conf file should include the configuration shown below.
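A minimal sketch of sssd.conf, based on Google’s Secure LDAP guidance and assuming the client certificate and key downloaded from the Admin console are saved as /etc/sssd/ldap-client.crt and /etc/sssd/ldap-client.key, with powerup.university as the example domain:

[sssd]
services = nss, pam
domains = powerup.university

[domain/powerup.university]
ldap_tls_cert = /etc/sssd/ldap-client.crt
ldap_tls_key = /etc/sssd/ldap-client.key
ldap_uri = ldaps://ldap.google.com
ldap_search_base = dc=powerup,dc=university
id_provider = ldap
auth_provider = ldap
ldap_schema = rfc2307bis
ldap_user_uuid = entryUUID
ldap_groups_use_matching_rule_in_chain = true
ldap_initgroups_use_matching_rule_in_chain = true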

Note: Remember to replace the domain with yours. By default, Google Linux instances disable SSH password authentication, so set PasswordAuthentication to yes in /etc/ssh/sshd_config.

Configuration in Google Instance:

  • Upload the certificate that was downloaded earlier from the G-Suite “Download certificate” page to the instance.
  • Change the ownership and permissions of the sssd.conf file using sudo chown root:root /etc/sssd/sssd.conf and sudo chmod 600 /etc/sssd/sssd.conf.
  • Restart the SSSD service using sudo service sssd restart

To verify that SSSD is running and connecting to the LDAP server you can run the following command with any of the users in your G Suite account:

  • Type getent passwd username@powerup.university on the instance created in Google Cloud and the output should look something like this:
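For example, for a hypothetical user the command returns a standard passwd entry along these lines (the uid/gid values are assigned by Cloud Identity):

username@powerup.university:*:1234567890:1234567890:User Name:/home/username@powerup.university:/bin/bash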

Instance Access Scenario:

Now, when you try to SSH using the “Open in browser window” option, you will receive the following error. Without the G-Suite user, we will not be able to log in to the instance.

Granular level access for the G-Suite user: when you need to restrict user access to the instance, set the custom metadata enable-oslogin=TRUE, as shown below.
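For instance, the metadata can be set with the gcloud CLI (the instance name here is illustrative):

gcloud compute instances add-metadata my-instance --metadata enable-oslogin=TRUE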

The following roles must be assigned to the G-Suite user to access the instance using a third-party tool (e.g. PuTTY).

  • Compute OS Admin Login
  • Compute OS Login
  • Service Account User

Now open a third-party tool and use the G-Suite username and password to log in to the machine.

Inference:

When all identities and apps are managed from a single window, complexity is reduced and security is enhanced, which also leads to increased adoption of cloud technology across your business.

Update:

In a forthcoming post, we will cover how G-Suite users can access Windows instances using their G-Suite credentials.

AWS EKS Authentication and Authorization using AWS Single SignOn


Written by Priyanka Sharma, DevOps Architect, Powerupcloud Technologies

Amazon EKS uses IAM to provide authentication to the Kubernetes cluster. The “aws eks get-token” command is used to get the token for authentication. However, IAM is only used for authentication of valid IAM entities. All permissions for interacting with the Amazon EKS cluster’s Kubernetes API are managed through the native Kubernetes RBAC system.

In this article, we cover authentication and authorization of AWS EKS through OnPrem Active Directory, i.e. how we can provide EKS access to OnPrem AD users. We have divided the solution into two parts: the first is integrating OnPrem AD with AWS through AWS Single SignOn, and the second is implementing RBAC policies on the Kubernetes cluster.

For the solution, we are using the following services:

  • OnPrem Active Directory with predefined users and groups: Ensure the below ports are allowed for your AD:
    • TCP/UDP 53, TCP/UDP 88, TCP/UDP 389
  • AD Connector on AWS which connects to the OnPrem AD
  • AWS Single SignOn: Integrates with AD Connector, allowing users to access the AWS Command Line Interface with a set of temporary AWS credentials.
  • AWS EKS: version 1.14

Integrating OnPrem AD with the AWS Single SignOn

For instance, we have created the following users and groups on OnPrem AD for the demo purpose:

AD Username    Respective AD Group
user1          EKS-Admins
user2          EKS-ReadOnly
dev1           EKS-Developers

Ensure you set up an AD Connector in the same region as AWS Single SignOn. Refer to our previous blog for setting up an Active Directory Connector.

Switch to AWS SingleSignOn Console and change the user directory. Select the AD connector created in the above step.

Select the account where you have setup the EKS Cluster.

Search for the AD group for which you want to give the EKS access.

Create a custom permission set.

Attach the below custom permission policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "*"
        }
    ]
}

Select the Permission set created above.

Finish. Similarly, create permission sets for the ReadOnly and Developers groups with the same policy. Verify the permission sets in the AWS accounts once.

Note that no specific permissions are assigned to the assumed role for the AD Users/Groups at this stage. By default the assumed role will not have permission to perform any operations. The specific authorization permissions will be defined via Kubernetes RBAC in the next section.

Behind the scenes, AWS SSO performs the following operations in the member account (the account where the EKS cluster runs):

  • Sets up SAML federation by configuring an Identity Provider (IdP) in AWS IAM. The Identity Provider enables the AWS account to trust AWS SSO for allowing SSO access.
  • Creates an AWS IAM role and attaches the above permission set as a policy to the role. This is the role that AWS SSO assumes on behalf of the Microsoft AD user/group to access AWS resources. The role created will have prefix “AWSReservedSSO”.

Go to the account you have applied the permission set to. There will be an IAM role created by SSO. In our case, below is the screenshot from the target account:
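If you prefer the CLI, the SSO-created roles can also be listed by their reserved path (a quick check; assumes credentials for the target account are configured):

aws iam list-roles --path-prefix /aws-reserved/sso.amazonaws.com/ --query 'Roles[].Arn'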

Creating RBAC Policies on Kubernetes

A Role is used to grant permissions within a single namespace. Apply the below manifests to create the K8s Roles for the Admins, ReadOnly, Developers, and Monitoring-Admins groups respectively:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: default:ad-eks-admins
  namespace: default
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: default:ad-eks-readonly
  namespace: default
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: default:ad-eks-developers
  namespace: default
rules:
- apiGroups: ["*"]
  resources: ["services", "deployments", "pods", "configmaps", "pods/log"]
  verbs: ["get", "list", "watch", "update", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: default:ad-eks-monitoringadmins
  namespace: monitoring
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

Edit the existing aws-auth configmap through the below command:

kubectl edit configmap aws-auth --namespace kube-system

Add the below entries under mapRoles:

    - rolearn: arn:aws:iam::ACCOUNTID:role/AWSReservedSSO_AD-EKS-Admins_b2abd90bad1696ac
      username: adminuser:{{SessionName}}
      groups:
        - default:ad-eks-admins
    - rolearn: arn:aws:iam::ACCOUNTID:role/AWSReservedSSO_AD-EKS-ReadOnly_2c5eb8d559b68cb5
      username: readonlyuser:{{SessionName}}
      groups:
        - default:ad-eks-readonly
    - rolearn: arn:aws:iam::ACCOUNTID:role/AWSReservedSSO_AD-EKS-Developers_ac2b0d744059fcd6
      username: devuser:{{SessionName}}
      groups:
        - default:ad-eks-developers
    - rolearn: arn:aws:iam::ACCOUNTID:role/AWSReservedSSO_AD-EKS-Monitoring-Admins_ac2b0d744059fcd6
      username: monitoringadminuser:{{SessionName}}
      groups:
        - default:ad-eks-monitoring-admins

Ensure you remove aws-reserved/sso.amazonaws.com/ from the rolearn; the aws-auth ConfigMap does not match role ARNs that contain a path.
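For clarity, using the Admins role from this demo, the difference is:

# Role ARN as created by AWS SSO (contains the reserved path):
arn:aws:iam::ACCOUNTID:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AD-EKS-Admins_b2abd90bad1696ac

# rolearn value used in aws-auth (path removed):
arn:aws:iam::ACCOUNTID:role/AWSReservedSSO_AD-EKS-Admins_b2abd90bad1696ac

Next, create the RoleBindings and ClusterRoleBindings that bind the groups referenced in aws-auth to the Roles defined above: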

kubectl create rolebinding eks-admins-binding --role default:ad-eks-admins --group default:ad-eks-admins --namespace default

kubectl create rolebinding eks-dev-binding --role default:ad-eks-developers --group default:ad-eks-developers --namespace default

kubectl create rolebinding eks-readonly-binding --role default:ad-eks-readonly --group default:ad-eks-readonly --namespace default

kubectl create clusterrolebinding clusterrole-eks-admins-binding --clusterrole=cluster-admin  --group default:ad-eks-admins

kubectl create clusterrolebinding clusterrole-eks-readonly-binding --clusterrole=system:aggregate-to-view  --group default:ad-eks-readonly

Time for some Action

Hit SSO User Portal URL (highlighted in the below screenshot):

Give the AD credentials of a user who is added to the EKS-Admins group:

Click on Programmatic access:

It gives temporary AWS Credentials:

Create a ~/.aws/credentials file on the server with the credentials obtained from SSO:

[ACCOUNTID_AD-EKS-Admins]
aws_access_key_id = ASIAZMQ74VVIMRLK2RLO
aws_secret_access_key = gzOs61AcQ/vyh0/E9y+naT3GF3PDKUqB5stZLWvv
aws_session_token = AgoJb3JpZ2luX2VjENr//////////wEaCXVzLWVhc3QtMSJIMEYCIQCcs8/t5OK/UlOvSQ/NSXt+giJm9WkxVkfUhY6MFVnJwgIhAMOuJxb/CqhNx12ObPY4Obhe4KmxyEdyosqzqq63BOLaKt8CCKP//////////wEQABoMNjQ1Mzg1NzI3MzEyIgzw9o8jbVmjgcjgTHIqswICmRCh/7qIgBbxjG0kZJdmrGFEHjssv1b4Rl3AnIel7p0RizMDzzY9lQIlsuE5S7xYVB4alVVl1MNQ/1+iNSrSAG4LlCtSIaMrmUZ+hspR1qiQ5cqS2954UhgzEb081QCzYMbPgtvtPWwiiDZ9LkYOU2tp9hWbX7mHAZksFTHgEOO62hEuJWl3bh6dGYJWqyvTO3iwSJZYeqKJ/vY0MNnx5bjcqjgehUA6LnpUES3YlxelAGQPns7nbS0kOzDatoMe4erBIUTiP60vJ4JXJ2CFPsPmX6Doray0MWrkG/C9QlH4s/dZNCIm6In5C3nBWLAjpYWXQGA9ZC6e6QZRYq5EfMmgRTV6vCGJuSWRKffAZduXQJiZsvTQKEI0r7sVMGJ9fnuMRvIXVbt28daF+4ugyp+8MOCXjewFOrMB8Km775Vi0EIUiOOItQPj0354cao+V9XTNA/Pz23WTs8kF+wA5+il7mBOOEkmhLNrxEkuRTOCv0sn52tm9TeO9vSHRbH4e4xaKoJohyBYZTlEAysiu8aRQgahg4imniLYge+qvelQeDl1zYTBsea8Z71oQDcVVtBZzxmcIbS0V+AOOm81NTLRIIM1TNcu004Z7MnhGmD+MiisD0uqOmKVLTQsGLeTKur3bKImXoXNaZuF9Tg=

Update the KubeConfig for the EKS Admins group as below:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: XXXXXX
    server: https://3B6E58DAA490F4F0DD57DAE9D9DFD099.yl4.ap-south-1.eks.amazonaws.com
  name: arn:aws:eks:ap-south-1:ACCOUNTID:cluster/puck8s
contexts:
- context:
    cluster: arn:aws:eks:ap-south-1:ACCOUNTID:cluster/puck8s
    user: arn:aws:eks:ap-south-1:ACCOUNTID:cluster/puck8s
  name: arn:aws:eks:ap-south-1:ACCOUNTID:cluster/puck8s
current-context: arn:aws:eks:ap-south-1:ACCOUNTID:cluster/puck8s
kind: Config
preferences: {}
users:
- name: arn:aws:eks:ap-south-1:ACCOUNTID:cluster/puck8s
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - ap-south-1
      - eks
      - get-token
      - --cluster-name
      - puck8s
      - -r
      - arn:aws:iam::ACCOUNTID:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AD-EKS-Admins_b2abd90bad1696ac
      command: aws
      env:
        - name: AWS_PROFILE
          value: "ACCOUNTID_AD-EKS-Admins"

Similarly, update the KubeConfig and get the temporary credentials for ReadOnly and Dev Users.

KubeConfig for ReadOnly Users:

users:
- name: arn:aws:eks:ap-south-1:ACCOUNTID:cluster/puck8s
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - ap-south-1
      - eks
      - get-token
      - --cluster-name
      - puck8s
      - --role
      - "arn:aws:iam::ACCOUNTID:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AD-EKS-ReadOnly_2c5eb8d559b68cb5"
      command: aws
      env:
        - name: AWS_PROFILE
          value: "ACCOUNTID_AD-EKS-ReadOnly"

AWS Profile for Temporary Credentials:

Export the KUBECONFIG for the EKS Admin users and try out a few commands, then do the same with the EKS ReadOnly users’ kubeconfig and compare the access levels.
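A few illustrative checks (hypothetical, assuming the kubeconfig files above; the expected answers follow from the bindings created earlier):

# With the EKS Admins kubeconfig (bound to cluster-admin)
kubectl get pods --all-namespaces
kubectl auth can-i create deployments --namespace default    # expected: yes

# With the EKS ReadOnly kubeconfig (read-only role in the default namespace)
kubectl get pods --namespace default
kubectl auth can-i delete pods --namespace default           # expected: no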

That’s all..!! Hope you found it useful.

References:

https://aws.amazon.com/blogs/opensource/integrating-ldap-ad-users-kubernetes-rbac-aws-iam-authenticator-project/

Email VA report of Docker Images in ECR


Written by Praful Tamrakar, Senior Cloud Engineer, Powerupcloud Technologies

Amazon ECR (Elastic Container Registry) is a managed container registry provided by AWS to store, encrypt, and manage container images. Recently AWS announced an image scanning feature for the images stored in ECR. Amazon ECR uses the Common Vulnerabilities and Exposures (CVEs) database from the open-source CoreOS Clair project and provides you with a list of scan findings.

Building container images in a Continuous Integration (CI) pipeline and pushing these artifacts into ECR is a widely adopted approach. Along with this, we would also like to scan the container image and send a Vulnerability Assessment report to the customer. The email alert will be triggered only if the container has any critical vulnerabilities.

How Can I Scan My Container Images?

Container images can be scanned using many third-party tools such as Clair, Sysdig Secure, etc. To use these tools, the required server/database needs to be managed by us, which adds additional effort for the operations team.

To reduce these efforts, we can use the Image scanning feature of the ECR.

  • You can scan your container images stored in ECR manually.
  • Enable scan on push on your repositories so that each and every image is checked against an aggregated set of Common Vulnerabilities and Exposures (CVEs).
  • Scan images using an API command, thereby allowing you to set up periodic scans for your container images. This ensures continuous monitoring of your images (see the example below).
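For example, an on-demand scan can be started from the AWS CLI (repository name, tag, and region are placeholders):

aws ecr start-image-scan --repository-name <ECR_REPO_NAME> --image-id imageTag=latest --region <REGION_CODE>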

Problem Statement

Currently, there is no direct way to get the scan results into CloudWatch or CloudTrail. However, it can be achieved using the following approach.

Resolution:

  1. Configure an existing repository to scan on push using the AWS CLI:
aws ecr put-image-scanning-configuration --repository-name <ECR_REPO_NAME> --image-scanning-configuration scanOnPush=true --region <REGION_CODE>
  2. Getting the image scan findings (scan results from ECR) can be achieved through a Lambda function that uses an API call.
    a. Create an SNS topic with EMAIL as the subscription (a CLI example is shown after this list).
    b. Create a Lambda function with runtime Python 3.7 or above, and attach the AWS managed policies AmazonEC2ContainerRegistryPowerUser and AmazonSNSFullAccess to the Lambda service role.
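For reference, the SNS topic and email subscription from step (a) can also be created with the CLI (topic name and email address are illustrative):

aws sns create-topic --name ecr-scan-findings
aws sns subscribe --topic-arn arn:aws:sns:<AWS_REGION_CODE>:<AWS_ACCOUNT_ID>:ecr-scan-findings --protocol email --notification-endpoint ops@example.com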

c. Paste the following Python code, which fetches the image scan findings summary and publishes the critical findings to SNS:

import json
from logging import getLogger, INFO

import boto3
from botocore.exceptions import ClientError


logger = getLogger()
logger.setLevel(INFO)

ecr = boto3.client('ecr')
sns = boto3.client('sns')

def get_findings(tag):
    """Returns the image scan findings summary"""
    
    try:
        response = ecr.describe_image_scan_findings(
            repositoryName='<NAME_OF_ECR >',
            registryId='<AWS_ACCOUNT_ID >',
            imageId={
            'imageTag': tag},
        )
        
        criticalresultList = []
        findings = response['imageScanFindings']['findings']
        for finding in findings:
            if finding['severity'] == "CRITICAL": # Can be CRITICAL | HIGH
                # Build a new dict per finding so each list entry keeps its own values
                criticalresult = {
                    "name": finding['name'],
                    "description": finding['description'],
                    "severity": finding['severity']
                }
                criticalresultList.append(criticalresult)
        return criticalresultList
        
            
    except ClientError as err:
        logger.error("Request failed: %s", err.response['Error']['Message'])

def lambda_handler(event, context):
    """AWS Lambda Function to send ECR Image Scan Findings to EMAIL"""
    scan_result = get_findings(event['tag'])
    print (scan_result)
    
    
    sns_response = sns.publish(
    TopicArn='arn:aws:sns:<AWS_REGION_CODE>:<AWS_ACCOUNT_ID>:<SNS_TOPIC>',    
    Message=json.dumps({'default': json.dumps(scan_result)}),
    MessageStructure='json')
    
    print (sns_response)

This email contains the Name, Description and Severity level for the scanned image.

[{
"name": "CVE-2019-2201", 
"description": "In generate_jsimd_ycc_rgb_convert_neon of jsimd_arm64_neon.S, there is a possible out of bounds write due to a missing bounds check. This could lead to remote code execution in an unprivileged process with no additional execution privileges needed. User interaction is needed for exploitation.Product: AndroidVersions: Android-8.0 Android-8.1 Android-9 Android-10Android ID: A-120551338", 
"severity": "CRITICAL"
}]

d. Trigger this Lambda from the Jenkins pipeline:

stage('Image Scan'){
    node('master'){
        sh'''
        sleep 60
        CRITICAL_COUNT=$(aws ecr describe-image-scan-findings --repository-name <ECR_REPO_NAME> --image-id imageTag=${IMAGETAG} --region ap-southeast-1 --registry-id <AWS_ACCOUNT_ID> --output json --query imageScanFindings.findingSeverityCounts.CRITICAL)
        if [[ "$CRITICAL_COUNT" != "null" && "$CRITICAL_COUNT" -gt 0 ]]
        then
            aws lambda invoke --function-name <LAMBDA_FUNCTION_NAME> --invocation-type Event --payload '{"tag":"'${IMAGETAG}'"}' response.json
        fi
        '''
    }
}

OPTIONAL

If you don’t want to trigger this Lambda function from the pipeline, you can instead create a CloudWatch Events rule that matches the ECR event for a completed image push and invokes the Lambda. A sample event looks like this:

{
    "version": "0",
    "id": "13cde686-328b-6117-af20-0e5566167482",
    "detail-type": "ECR Image Action",
    "source": "aws.ecr",
    "account": "123456789012",
    "time": "2019-11-16T01:54:34Z",
    "region": "us-west-2",
    "resources": [],
    "detail": {
        "result": "SUCCESS",
        "repository-name": "my-repo",
        "image-digest": "sha256:7f5b2640fe6fb4f46592dfd3410c4a79dac4f89e4782432e0378abcd1234",
        "action-type": "PUSH",
        "image-tag": "latest"
    }
}
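A matching event pattern for the CloudWatch rule could look like the following (a minimal sketch that matches only successful pushes):

{
    "source": ["aws.ecr"],
    "detail-type": ["ECR Image Action"],
    "detail": {
        "action-type": ["PUSH"],
        "result": ["SUCCESS"]
    }
}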

Create a rule with this pattern and pass the matched events to the Lambda function so that it extracts the values of “repository-name”, “image-digest” and “image-tag”, which can then be passed to the DescribeImageScanFindings API call. For reference: https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-eventbridge.html

Note:

If the ECR repository is not in the same account where the Lambda is configured, you must add a repository permissions policy that allows the Lambda role to access the images.

  1. Go to the ECR console and select the ECR repository.
  2. Click on Permissions in the left-hand menu.
  3. Click on “Edit policy JSON” and add the following policy:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "pull and push",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:sts::<AWS_ACCOUNT_ID>:assumed-role/<LAMBDA_ROLE_NAME>"
        ]
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "ecr:DescribeImageScanFindings",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ]
    }
  ]
}
  4. Save it.
  5. Trigger the Lambda.

And that’s it..!! Hope you found it useful. Keep following our Blog for more interesting articles.

Elasticsearch Logstash Kibana (ELK) Authentication using Active Directory


Written by Priyanka Sharma, DevOps Architect, Powerupcloud Technologies

This article covers how you can enable the security features on ELK to communicate with AD to authenticate users. The security features provide two realms to achieve this: the LDAP realm and the Active Directory realm.

We have used the active_directory realm in our configurations. It is similar to the LDAP realm in that it authenticates users using an LDAP bind request; after authenticating the user, the realm then searches Active Directory to find the user’s entry.

Setup:

  • Elasticsearch version: 7.2
  • Three Master Nodes in private subnets
  • Kibana EC2 standalone server in private subnet
  • Logstash running on the standalone application server in private subnet
  • One Internal ALB with host-based routing for Kibana and Elasticsearch Endpoints.
    • kibana.powerupcloud.com → Pointing to Kibana Server
    • elasticsearch.powerupcloud.com → Pointing to ES Masters
  • Active Directory with an ESAdmins AD group and a few users added to it that require Elasticsearch access. Ensure ports TCP 389 and UDP 389 are allowed to the AD.

Elasticsearch Nodes Configuration

Ensure X-Pack is activated on the Elastic Stack. It is an Elastic Stack extension that provides security, alerting, monitoring, reporting, machine learning, and many other capabilities. By default, X-Pack is installed when you install Elasticsearch.

Create the Certificate to be used by Transport Layer Security (TLS) in the Elastic Stack.

/usr/share/elasticsearch/bin/elasticsearch-certutil ca
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
mkdir -p  /etc/elasticsearch/certs/
cp -r elastic-certificates.p12 /etc/elasticsearch/certs/

Update the certificate path in /etc/elasticsearch/elasticsearch.yml and add the realm configuration in the same file. The final elasticsearch.yml looks as shown below:

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

network.host: 0.0.0.0
cluster.name: puc_elasticsearch
node.name: master-1
http.port: 9200

node.master: true
node.data: true
plugin.mandatory: "discovery-ec2"
discovery.zen.hosts_provider: ec2
discovery.ec2.availability_zones: "us-east-1a, us-east-1b, us-east-1c, us-east-1d"
discovery.zen.minimum_master_nodes: 2
discovery.seed_hosts: ["172.31.xx.xx", "172.31.xx.xx", "172.31.xx.xx"]
cluster.initial_master_nodes: ["172.31.xx.xx", "172.31.xx.xx", "172.31.xx.xx"]

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.license.self_generated.type: trial
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/certs/elastic-certificates.p12

xpack:
  security:
    authc:
      realms:
        active_directory:
          my_ad:
            order: 1
            domain_name: AD_DNS
            url: ldap://AD_DNS:389
            user_search:
              base_dn: "cn=users,dc=puc,dc=com"
            group_search:
              base_dn: "cn=users,dc=puc,dc=com"
            files:
              role_mapping: "/etc/elasticsearch/role_mapping.yml"

Replace AD_DNS with the domain name associated with the Active Directory.

The trial license of xpack is available only for 30 days. After that, it is mandatory to purchase a license and update the type of license in the configuration.

Update the role mapping file to map the AD group with an existing ES role.

vim /etc/elasticsearch/role_mapping.yml

superuser:
   - "CN=ESAdmins,CN=Users,DC=puc,DC=com"

ESAdmins is the AD group name. Replace it as required.

superuser is a built-in role in Elasticsearch. With this role_mapping, we map the AD group to the superuser ES role.

Upload the same certificate “/etc/elasticsearch/certs/elastic-certificates.p12” to the other two nodes as well. You can use scp to achieve this.
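For example (host names below are illustrative):

scp /etc/elasticsearch/certs/elastic-certificates.p12 ubuntu@master-2:/tmp/
# then on master-2:
sudo mkdir -p /etc/elasticsearch/certs/ && sudo mv /tmp/elastic-certificates.p12 /etc/elasticsearch/certs/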

Add the same xpack configurations in the other two nodes too.

Validate the ES authentication by executing curl commands such as the one shown below:
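A minimal check, assuming an AD user from the ESAdmins group and running on one of the master nodes:

curl -u esadmin@puc.com http://localhost:9200/_cluster/health?pretty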

Logstash Configuration

Now that AD authentication is enabled on the ES nodes, update the Logstash configuration to authenticate to ES with an AD user. Update the logstash.conf file to add the AD user credentials in the output section as shown in the below config:

vim /etc/logstash/conf.d/logstash.conf

input {
 file {
   path => ["/var/log/nginx/access.log", "/var/log/nginx/error.log"]
   type => "nginx"
 }
  beats {
    port => 5044
  }
}
filter {
 grok {
   match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
   overwrite => [ "message" ]
 }
 mutate {
   convert => ["response", "integer"]
   convert => ["bytes", "integer"]
   convert => ["responsetime", "float"]
 }
 geoip {
   source => "clientip"
   target => "geoip"
   add_tag => [ "nginx-geoip" ]
 }
 date {
   match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
   remove_field => [ "timestamp" ]
 }
 useragent {
   source => "agent"
 }
}
output {
 elasticsearch {
   hosts => ["http://elasticsearch.powerupcloud.com:80"]
   user => "esadmin@puc.com"
   password => "PASSWORD"
   index => "nginx-%{+YYYY.MM.dd}"
 }
 stdout { codec => rubydebug }
}

In the above configuration, replace the ES endpoint (elasticsearch.powerupcloud.com), AD user, and password. The AD user must exist in the same AD group as specified in the role_mapping.yml.

Restart Logstash: “service logstash restart”. Ensure you check the logs after the restart.

Kibana Configuration

Similar to Logstash, update the Kibana configuration to add the AD User Credentials for Elasticsearch endpoint.

vim /etc/kibana/kibana.yml

server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch.powerupcloud.com"]
elasticsearch.username: "esadmin@puc.com"
elasticsearch.password: "PASSWORD"

In the above configuration, replace the ES endpoint, elasticsearch.username, and elasticsearch.password. The AD user must exist in the same AD group as specified in the role_mapping.yml.

Restart Kibana: service kibana restart

Hit Kibana Endpoint.

Enter the user credentials that exist in the AD group.

Appendix:

If the X-Pack license was not activated before enabling AD authentication, you can execute the below commands to start the trial after adding the xpack configuration in elasticsearch.yml.

Create a local user:

/usr/share/elasticsearch/bin/elasticsearch-users useradd  priyanka -r superuser -p PASSWORD
curl -u priyanka http://localhost:9200/_xpack/license
curl -X POST -u priyanka "http://localhost:9200/_license/start_trial?acknowledge=true&pretty"

Validate with AD user:

curl -u esadmin@puc.com http://localhost:9200/_xpack/license

The above commands need to be executed on only one master server.

curl -u esadmin@puc.com http://elasticsearch.powerupcloud.com/_xpack/license

And that’s all. Hope you found it useful.

Running Websites at Scale on Application Service


Customer: An Indian leading e-commerce company

Problem Statement

Our client is regarded as India’s biggest e-commerce store. It also runs non-e-commerce websites, such as its careers site and Stories (corporate blog), on AWS, and as part of a company-wide Azure adoption, it wanted to move these sites to Azure PaaS.

The Solution

Powerup helped move their Careers (PHP-based) and Stories (WordPress) sites from AWS IaaS to Azure PaaS. The websites were configured to withstand the client’s scale and sudden surges in traffic driven by marketing activities and a huge online presence. A CDN was introduced and caching of frequently visited content was enabled for better performance. The Stories site was recently redeployed to an App Service Linux backend.

Cloud Platform

Microsoft Azure

Technologies Used

Azure App Service, Application Insights, Azure Security Center, WordPress, MySQL