Ratul, DevOps Engineer

Serving Private Content of S3 through CloudFront Signed URL

In this article, we'll learn more about CloudFront, S3, and IAM.

CloudFront is a popular web service by Amazon that speeds up the distribution of static and dynamic content to users. It does this by routing each request to the edge location that can best serve the content, typically the CloudFront edge server that provides the fastest delivery to the viewer. You create a CloudFront distribution to tell CloudFront where the content should be delivered from and how delivery should be tracked and managed; CloudFront then uses edge servers close to your viewers to deliver that content quickly whenever someone requests it.

AWS S3 is an object-level storage service built to store and retrieve any amount of data from anywhere on the Internet. It's a simple storage service that offers extremely durable, highly available, and highly scalable data storage infrastructure at very low cost.

In this post, we'll see how we can serve the contents of S3 through CloudFront by generating a signed URL. By doing this we secure the object endpoint and also deliver the contents much faster. To do that, we need to:


  • Restrict access to objects in CloudFront edge caches
  • Restrict access to objects in your Amazon S3 bucket

Create a CloudFront key pair. You need to log in to your AWS account using root credentials; you cannot do this via an IAM user at the moment.

Go to My Security Credentials, then CloudFront Key Pairs, and create your key pair. Download both the public (rsa-) and private (pk-) .pem files after creation and note the key ID, which also appears in the filename of the downloaded keys.

Create an S3 bucket “aaaaaaaaaaabbbbbb” and upload some files into it.
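If you prefer the command line, a rough equivalent with the AWS CLI looks like this (a sketch, assuming the CLI is already configured with credentials; rt.png is just the example object referenced later in this post):

# Create the bucket and upload an example object.
aws s3 mb s3://aaaaaaaaaaabbbbbb
aws s3 cp rt.png s3://aaaaaaaaaaabbbbbb/rt.png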

Under the CloudFront security settings, create an Origin Access Identity (OAI), which is a special CloudFront user, and associate it with your distribution. (For web distributions, you associate the origin access identity with origins, so you can secure all or just some of your Amazon S3 content.) You can also create an origin access identity and add it to your distribution when you create the distribution. In the bucket policy, only the origin access identity is given read permission. When your users access your Amazon S3 objects through CloudFront, the CloudFront origin access identity fetches the objects on their behalf. If users request objects directly by using Amazon S3 URLs, they are denied access: the origin access identity has permission to access objects in your Amazon S3 bucket, but users don't.
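For reference, an OAI can also be created from the AWS CLI (a sketch; the caller reference and comment values below are placeholders, and the console flow described above is what this post follows):

aws cloudfront create-cloud-front-origin-access-identity \
  --cloud-front-origin-access-identity-config CallerReference=my-oai-ref,Comment=oai-for-aaaaaaaaaaabbbbbb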

Create CloudFront Distribution for Web. Choose the S3 bucket as the origin, restrict bucket access using the origin access identity created above, and under the default cache behavior set Restrict Viewer Access (Use Signed URLs or Signed Cookies) to Yes with Self as the trusted signer, so that only signed requests are accepted.

Hit Create Distribution.

After creating the distribution, you can see the bucket policy on the S3 bucket:

{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity your_OAI_ID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::aaaaaaaaaaabbbbbb/*"
    }
  ]
}

Generate a signed URL using the Python SDK for AWS. Create a script “boto3_signed_url.py”.

The script:

import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # pk-MOUPJHBLKJN65L1BH.pem is the private key file downloaded from the
    # CloudFront key pair.
    with open('pk-MOUPJHBLKJN65L1BH.pem', 'rb') as key_file:
        private_key = serialization.load_pem_private_key(
            key_file.read(),
            password=None,
            backend=default_backend()
        )
    # CloudFront signed URLs expect an RSA SHA-1 signature over the policy.
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())


key_id = 'MOUPJHBLKJN65L1BH'
url = 'https://u09vcb1sd98xfb.cloudfront.net/rt.png'
current_time = datetime.datetime.utcnow()
expire_date = current_time + datetime.timedelta(minutes=2)

cloudfront_signer = CloudFrontSigner(key_id, rsa_signer)

# Create a signed URL that is valid until the expiry date above,
# using a canned policy.
signed_url = cloudfront_signer.generate_presigned_url(
    url, date_less_than=expire_date)

print(signed_url)

Run the script:

python boto3_signed_url.py

This will return a signed URL for that .png file. If you paste the URL in your browser, you'll get the image.
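As a quick check from the terminal (curl here is simply an alternative to pasting the URL in a browser; the CloudFront domain is the example one from the script above), the signed URL should return HTTP 200 while the unsigned URL is denied:

SIGNED_URL=$(python boto3_signed_url.py)
curl -s -o /dev/null -w "%{http_code}\n" "$SIGNED_URL"                                  # expect 200
curl -s -o /dev/null -w "%{http_code}\n" https://u09vcb1sd98xfb.cloudfront.net/rt.png   # expect 403 (no signature)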

Install and Configure Kubernetes Cluster on Ubuntu 16.04

In this article, we'll learn about Docker orchestration with Kubernetes.

Create 2 VMs and set their hostnames:

  • kube-master
  • kube-worker

In Master Node

sudo hostnamectl set-hostname kube-master

In Worker Node

sudo hostnamectl set-hostname kube-worker

Update the hosts file in both nodes

sudo vim /etc/hosts
master_private_ip   kube-master
worker_private_ip   kube-worker
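As a quick sanity check (the hostnames are the ones set above), each node should now be able to resolve the other:

# From kube-master
ping -c 2 kube-worker

# From kube-worker
ping -c 2 kube-master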

Install Docker in both nodes:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
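Optionally, confirm the installation and make Docker start on boot (standard systemd housekeeping, not specific to this guide):

sudo systemctl enable docker
sudo docker version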



Install Kubernetes in both nodes: installing kubeadm, kubelet, and kubectl. You will install these packages on all of your machines:

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubectl: the command line util to talk to your cluster.

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
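Two more steps that kubeadm expects on both nodes: swap must be disabled for kubelet to run, and pinning the package versions (optional) prevents an apt upgrade from unexpectedly moving the cluster:

# kubeadm's preflight checks fail if swap is enabled; also comment out any
# swap entry in /etc/fstab so this survives a reboot.
sudo swapoff -a

# Optional: hold the versions installed above.
sudo apt-mark hold kubelet kubeadm kubectl docker-ce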

In Master Node

Make sure that the cgroup driver used by kubelet is the same as the one used by Docker (a quick way to check is sketched below), then restart kubelet:

sudo systemctl restart kubelet
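A sketch of the check (the drop-in path below is where the Ubuntu kubeadm packages usually keep the kubelet flags; it may differ on your system):

# Docker's cgroup driver, typically "cgroupfs" with these packages:
sudo docker info | grep -i "cgroup driver"

# kubelet's --cgroup-driver flag, if set, usually lives in this drop-in:
grep -r "cgroup-driver" /etc/systemd/system/kubelet.service.d/ || true

# If you change either side, reload systemd before restarting kubelet:
sudo systemctl daemon-reload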

Initialize the cluster. The master is the machine where the control plane components run, including etcd (the cluster database) and the API server (which the kubectl CLI communicates with).

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=kube-master_private_ip

To make kubectl work for your non-root user, run these commands (which are also part of the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you could run this:

export KUBECONFIG=/etc/kubernetes/admin.conf
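At this point kubectl should be able to reach the API server; a quick check:

kubectl cluster-info
# The master will show NotReady until a pod network add-on is installed (next step).
kubectl get nodes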

Now, install a pod network add-on so that your pods can communicate with each other. This step is required; here we use Flannel, which matches the --pod-network-cidr used above.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

Install metrics-server

git clone https://github.com/kubernetes-incubator/metrics-server.git

cd metrics-server
kubectl apply --filename deploy/1.8+/
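Once the metrics-server pod is running (it can take a minute or two to start collecting data), node resource usage should be visible:

kubectl top nodes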

Install heapster

kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.7.0.yaml

If the API aggregation layer is not enabled, follow this guide: https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/

kubectl get pods --all-namespaces
kubectl get nodes -o wide

In Worker Node

Joining Worker

sudo kubeadm join 192.168.33.10:6443 --token ujbvgu.vxvk2vml3xkcl6q4 --discovery-token-ca-cert-hash sha256:73fca98b91e8fd589f4e50e3f55f4889c9db1ee026ac647af7b6ea0af2f6c624

--token is the token generated by kube-master, and --discovery-token-ca-cert-hash is also generated by kube-master; both appear in the output of kubeadm init.
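If you no longer have the original kubeadm init output, you can regenerate the full join command on kube-master (the token and hash it prints will differ from the example above):

# Run on kube-master: prints a fresh "kubeadm join ..." command
# with a new token and the CA cert hash.
sudo kubeadm token create --print-join-command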

If you want to remove all configuration from any node:

sudo kubeadm reset
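Note that kubeadm reset does not remove the CNI configuration or iptables rules it created; a rough cleanup sketch for a node you are rebuilding (paths may vary with your CNI plugin):

sudo rm -rf /etc/cni/net.d
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X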