Serving Private Content through AWS CloudFront

Step 1 – Get the private/public keys from the AWS admin console.

Log in to the AWS admin console as the root user.


Click on “Create New Key Pair”


Download Private Key & Public Key files.


Step 2 – Use the key pair to sign requests for private content.

Using Signed URLs

A signed URL carries the policy in its query string (Expires, Signature, Key-Pair-Id). Here, $path is the CDN URL to be signed.
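The shape of a canned-policy signed URL can be sketched as below. This is a minimal illustration, not a working signer: the signature value is a placeholder (in production it is the RSA-SHA1 signature of the policy, base64-encoded with CloudFront's character substitutions), and the CDN path and key-pair ID are hypothetical.

```python
import time
from urllib.parse import urlencode

# A CloudFront signed URL (canned policy) carries three query parameters:
# Expires, Signature and Key-Pair-Id.
path = "https://cdn.example.com/private/video.mp4"  # hypothetical CDN URL
expires = int(time.time()) + 3600  # URL is valid for one hour

params = urlencode({
    "Expires": expires,
    "Signature": "PLACEHOLDERSIGNATURE",  # placeholder, not a real signature
    "Key-Pair-Id": "APKAEXAMPLE",         # hypothetical key-pair ID
})
signed_url = f"{path}?{params}"
print(signed_url)
```

CloudFront rejects the request once the Expires timestamp has passed, or if the signature does not verify against the public key registered for the Key-Pair-Id.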

Using Signed Cookies

We need to generate the Signed URL first and then use them to set cookies from the server side.

Application URL: app1.example.com

CDN URL: cdn.example.com (we serve the CDN from a subdomain of the application's domain, because we will not be able to set cookies for a different domain such as *.cloudfront.net)

And from the application, we set the following cookies on '.example.com':

  • CloudFront-Signature
  • CloudFront-Key-Pair-Id
  • CloudFront-Policy
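The construction of those three cookie values can be sketched as follows. This is an assumption-laden sketch: the resource URL and key-pair ID are hypothetical, and the RSA-SHA1 signing step is replaced by a stub (a real implementation signs the policy with the CloudFront private key). What it does show accurately is the canned policy document and CloudFront's base64 variant.

```python
import base64
import json
import time

def cloudfront_safe_b64(data: bytes) -> str:
    """CloudFront uses its own base64 variant:
    '+' -> '-', '=' -> '_', '/' -> '~'."""
    return (base64.b64encode(data).decode("ascii")
            .replace("+", "-").replace("=", "_").replace("/", "~"))

def canned_policy(resource: str, expires_epoch: int) -> str:
    # Canned policy document as CloudFront expects it.
    return json.dumps({
        "Statement": [{
            "Resource": resource,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }],
    }, separators=(",", ":"))

expires = int(time.time()) + 3600
policy = canned_policy("https://cdn.example.com/private/*", expires)

# In production, this is the RSA-SHA1 signature of the policy,
# produced with the CloudFront private key; a stub stands in here.
signature = b"rsa-sha1-signature-of-policy"  # placeholder

cookies = {
    "CloudFront-Policy": cloudfront_safe_b64(policy.encode()),
    "CloudFront-Signature": cloudfront_safe_b64(signature),
    "CloudFront-Key-Pair-Id": "APKAEXAMPLE",  # hypothetical key-pair ID
}
```

The application then sets these three cookies on '.example.com', so the browser sends them along with every request to cdn.example.com.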


		

A few more S3 commands

 

To make all existing objects in an S3 bucket public:

aws s3 cp s3://subu-test/ s3://subu-test/ --acl public-read --recursive

To enable SSE-S3 encryption on all existing objects in an S3 bucket:

aws s3 cp --sse AES256 s3://subu-test/ s3://subu-test/ --recursive

Konsole – Applying Colour Schemes for SSH

When I access different servers, I want to be able to tell at a glance which server I am logged in to. Because I don't want to delete folders on production when I only meant to delete them on a staging server.

1.  So, initially, I created two images like this in different colours, with "Staging" and "Production" text on them.

2.  In Konsole, I created separate colour schemes in my profile.

3.  In my ~/.bash_aliases, I added the following.

alias resetcolors="konsoleprofile colors=DarkPastels"
alias sshst="konsoleprofile colors=Staging; ssh staging; resetcolors"
alias sshp="konsoleprofile colors=Prod; ssh production; resetcolors"

4.  source ~/.bashrc

5.  Now, I know where I am logging in to 🙂

[Screen recording: Konsole switching colour schemes on ssh]

 

Remove ‘Server’ header from Nginx

Nginx returns a "Server" header, which exposes to clients that we use Nginx as the web server.

[Screenshot: response headers showing the default Server header]

In order to hide that completely, we might have to build nginx from source with the server string changed:

vi src/http/ngx_http_header_filter_module.c
static char ngx_http_server_string[] = "Server: Subu" CRLF;
static char ngx_http_server_full_string[] = "Server: Subu" CRLF;

After rebuilding nginx,

[Screenshot: response headers showing the modified Server header]
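Rebuilding from source is the only way to change the header with stock nginx alone, but if hiding just the version number is enough, the built-in server_tokens directive does that from config; the third-party headers-more-nginx-module can remove or replace the header without patching the source. A sketch, assuming that module is compiled in:

```nginx
http {
    # Built-in: sends "Server: nginx" instead of "Server: nginx/1.x.y".
    # Stock nginx cannot remove the header entirely from config alone.
    server_tokens off;

    # With the third-party headers-more-nginx-module compiled in:
    # more_clear_headers Server;          # drop the header entirely
    # more_set_headers "Server: Subu";    # or replace its value
}
```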

Setting content-type for files in AWS S3

s3cmd can be used for uploading / syncing files from a server to an S3 bucket and vice versa.

The s3cmd documentation should provide enough information on how to use it.

While uploading content using this tool, if we forget to set the content type of a file, its content-type will default to binary/octet-stream.

How does it impact us? The file will be downloaded instead of being rendered in the browser; even an HTML file cannot be viewed in the browser, it will be downloaded.

So, we might need to set the content type of these files properly when we upload them to an S3 bucket:

s3cmd sync content_new/ s3://bucket-name/content_new/ --acl-public --recursive --progress --verbose --exclude ".svn/*" --add-header="Content-Encoding:UTF-8" --guess-mime-type

We might need to use the --guess-mime-type option.

 

 -M, --guess-mime-type
      Guess MIME type of files by their extension or mime magic. Fall back to default MIME-Type as specified by --default-mime-type option.
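The extension-based half of that guess is essentially what Python's stdlib mimetypes module does, which is a quick way to check what type a given filename would map to:

```python
import mimetypes

# mimetypes maps file extensions to MIME types, which is roughly what
# s3cmd's --guess-mime-type does before falling back to the default type.
print(mimetypes.guess_type("index.html")[0])      # text/html
print(mimetypes.guess_type("logo.png")[0])        # image/png
print(mimetypes.guess_type("archive.unknown")[0]) # None -> would fall back
```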
Also, we can use s3cmd along with the python-magic library:
pip install python-magic
PS : Use at your own risk