
DenMotion: Moving to S3 + CloudFront

Moving the DenMotion portfolio off GitHub Pages and onto S3 + CloudFront. Private repo, automated deploys, and the full infrastructure now running under one roof.


The share system was the first CloudFront distribution. This is the second.

When I built share.denmotion.com, the plan was always to move the main portfolio site off GitHub Pages afterwards. The share build taught the pattern: S3 bucket, CloudFront distribution, CloudFront Functions, Route 53 records, OAC. Now I was applying the same thing to the site itself.

The reason is simple. The denmotion.com repo was public, so every layout, every CSS file, every JavaScript integration was visible on GitHub. Anyone could fork it and clone the exact portfolio I had built: the three custom layouts, the cinematic grid, the hover-to-play logic, the Fancybox configuration, the particles.js integration. That's intellectual property. If I'm going to sell this architecture to clients, the code needs to be private.

On a free GitHub account, making a repo private kills GitHub Pages. The site goes offline. So the move had to happen first, then the repo goes private.


What Changed

| Before | After |
|---|---|
| Public repo, anyone can fork the code | Private repo, code locked down |
| GitHub Pages hosting | S3 + CloudFront hosting |
| GitHub manages SSL | ACM wildcard cert (*.denmotion.com) |
| Four A records to GitHub Pages IPs | A record alias to CloudFront |
| Chirpy workflow deploys to GitHub Pages | Custom workflow deploys to S3 |
| No control over caching or edge delivery | CloudFront CDN with global edge locations |

The site looks the same and behaves the same, but it’s faster. GitHub Pages serves from a handful of data centres. CloudFront has over 400 edge locations globally. When someone in London loads denmotion.com, CloudFront serves it from a London edge location instead of routing the request across the Atlantic. The caching is better too. CloudFront caches files at each edge location with full control over cache behaviour. The visitor probably won’t notice the difference on a fast connection, but the infrastructure behind it is objectively better.

What this post covers
The CloudFront distribution setup, the IAM deployment user, the GitHub Actions workflow, the CloudFront Function for subdirectory routing, the DNS swap, and the repo lockdown. This is the final piece of the DenMotion infrastructure.


The Distribution

Same pattern as the share system. A new CloudFront distribution pointing at the same denmotion S3 bucket, but with a different origin path.

| Setting | Value | Why |
|---|---|---|
| Name | denmotion-portfolio | Distinguishes it from the share distribution |
| Origin | denmotion.s3.eu-west-2.amazonaws.com | Same bucket as everything else |
| Origin path | /website | Scopes this distribution to the website prefix |
| Origin access | OAC (Origin Access Control) | Only CloudFront can read from S3 |
| Default root object | index.html | Serves the landing page at the root URL |
| Alternate domains | denmotion.com, www.denmotion.com | Both the apex and www subdomain |
| SSL certificate | Existing *.denmotion.com wildcard | Already created during the share build |

The origin path /website means every request to denmotion.com/ maps to s3://denmotion/website/ behind the scenes. Same principle as the share distribution using /share. Each distribution is scoped to its own prefix, same bucket.
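As a rough illustration of that mapping (a throwaway shell sketch, not CloudFront internals):

```shell
# Illustrative only: CloudFront prepends the origin path to the
# request URI before fetching the object from S3.
origin_path="/website"

s3_key_for() {
    printf 's3://denmotion%s%s\n' "$origin_path" "$1"
}

s3_key_for "/index.html"     # s3://denmotion/website/index.html
s3_key_for "/films/hero.mp4" # s3://denmotion/website/films/hero.mp4
```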

The wildcard certificate I created for the share system already covers denmotion.com and every subdomain. No new certificate needed. That one decision during the share build saved time here.

Two distributions, one bucket
The denmotion S3 bucket now serves two completely independent systems through two CloudFront distributions. The share distribution reads from share/. The portfolio distribution reads from website/. They have independent cache settings, independent default root objects, and independent CloudFront Functions. Neither can see the other’s files.

```mermaid
graph TD
    A["denmotion.com"] --> B["CloudFront<br/>denmotion-portfolio"]
    C["share.denmotion.com"] --> D["CloudFront<br/>denmotion-share"]
    B --> E["S3<br/>denmotion/website/"]
    D --> F["S3<br/>denmotion/share/"]

    style A fill:#1a1a2e,stroke:#e94560,color:#fff
    style B fill:#0f3460,stroke:#e94560,color:#fff
    style C fill:#1a1a2e,stroke:#e94560,color:#fff
    style D fill:#0f3460,stroke:#e94560,color:#fff
    style E fill:#16213e,stroke:#533483,color:#fff
    style F fill:#16213e,stroke:#533483,color:#fff
```

Bucket Policy

The S3 bucket policy needs to allow both distributions to read from the bucket. When I created the portfolio distribution with OAC, CloudFront added the new distribution ARN to the existing policy automatically. The policy now has two source ARNs in the condition:

```json
"Condition": {
    "ArnLike": {
        "AWS:SourceArn": [
            "arn:aws:cloudfront::223791342103:distribution/E3EA2ZYWVBJZ6G",
            "arn:aws:cloudfront::223791342103:distribution/ENSORJ28LFJXN"
        ]
    }
}
```

The first is the share distribution. The second is the portfolio distribution. Both can read. Nobody else can. The bucket is still fully blocked from public access.
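For reference, the full OAC statement follows a standard shape (this is the form the CloudFront console generates; applying it by hand would look roughly like this):

```shell
# Sketch of the full OAC bucket policy. The console wrote this
# automatically when the second distribution was created.
cat > policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontServicePrincipal",
        "Effect": "Allow",
        "Principal": { "Service": "cloudfront.amazonaws.com" },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::denmotion/*",
        "Condition": {
            "ArnLike": {
                "AWS:SourceArn": [
                    "arn:aws:cloudfront::223791342103:distribution/E3EA2ZYWVBJZ6G",
                    "arn:aws:cloudfront::223791342103:distribution/ENSORJ28LFJXN"
                ]
            }
        }
    }]
}
EOF

aws s3api put-bucket-policy --bucket denmotion --policy file://policy.json
```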


The Deployment User

GitHub Actions needs permission to upload files to S3 and invalidate the CloudFront cache. I created a dedicated IAM user for this with no console access and a tightly scoped policy.

The Policy

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::denmotion",
                "arn:aws:s3:::denmotion/website/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "arn:aws:cloudfront::223791342103:distribution/ENSORJ28LFJXN"
        }
    ]
}
```

Three S3 actions, one CloudFront action, all scoped to exactly what the deploy needs.

s3:ListBucket on the bucket lets the deploy script compare what’s in S3 with what’s in the build, so it knows what to delete. s3:PutObject and s3:DeleteObject are scoped to denmotion/website/* only. The deployer can upload and remove files inside the website/ prefix but can’t touch share/, films/, or photos/.

cloudfront:CreateInvalidation is scoped to the portfolio distribution ARN only. The deployer can clear the cache on this one distribution. It can’t modify distribution settings, can’t access other distributions, can’t do anything else in CloudFront.

No console access. Programmatic keys only.

Principle of least privilege
The IAM policy scopes the deployer to exactly what it needs and nothing more. It can write to the website/ prefix in S3 but not share/, films/, or photos/. It can invalidate the portfolio CloudFront cache but not the share distribution. It has no console access. If the credentials leaked, the blast radius is limited to the website files only. Everything else in the bucket and across AWS is untouchable.

GitHub Secrets

The IAM access key and secret key go into the GitHub repo as encrypted secrets. GitHub encrypts them at rest and only exposes them to the workflow at runtime. They never appear in logs.

| Secret | What it stores |
|---|---|
| AWS_ACCESS_KEY_ID | IAM deployer access key |
| AWS_SECRET_ACCESS_KEY | IAM deployer secret key |
| S3_BUCKET_NAME | denmotion |
| CLOUDFRONT_DIST_ID | Portfolio distribution ID |

Four secrets. The workflow references them by name. Nobody can see the values after they’re saved.
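If you prefer the CLI to the repo settings page, the GitHub CLI can set the same four secrets (a sketch; the key values below are placeholders, not real credentials):

```shell
# Placeholders only: paste the real IAM keys when running this.
gh secret set AWS_ACCESS_KEY_ID     --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "wJal..."
gh secret set S3_BUCKET_NAME        --body "denmotion"
gh secret set CLOUDFRONT_DIST_ID    --body "ENSORJ28LFJXN"
```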


The GitHub Actions Workflow

The workflow replaces Chirpy’s default GitHub Pages deployment. Instead of building the site and publishing to GitHub’s CDN, it builds the site and syncs the output to S3.

```yaml
name: Deploy DenMotion to AWS

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout Code
      uses: actions/checkout@v4

    - name: Setup Ruby
      uses: ruby/setup-ruby@v1
      with:
        ruby-version: '3.3'
        bundler-cache: true

    - name: Build Jekyll Site
      run: JEKYLL_ENV=production bundle exec jekyll build

    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: eu-west-2

    - name: Sync files to S3
      run: aws s3 sync _site/ s3://${{ secrets.S3_BUCKET_NAME }}/website/ --delete

    - name: Invalidate CloudFront Cache
      run: |
        aws cloudfront create-invalidation \
          --distribution-id ${{ secrets.CLOUDFRONT_DIST_ID }} \
          --paths "/*"
```

What Each Step Does

  1. Checkout Code pulls the repo into the workflow runner.
  2. Setup Ruby installs Ruby 3.3 and caches the bundled gems so subsequent builds are faster.
  3. Build Jekyll Site runs the same Jekyll build command that Chirpy's old workflow used, compiling the markdown and layouts into static HTML inside the _site/ folder.
  4. Configure AWS Credentials loads the IAM keys from the encrypted secrets.
  5. Sync files to S3 uploads the built _site/ folder to s3://denmotion/website/. The --delete flag removes any files from the website/ prefix that no longer exist in the build, which keeps the bucket clean and prevents stale pages from being served.
  6. Invalidate CloudFront Cache forces all edge locations to fetch the latest files from S3 immediately instead of waiting for the cache TTL to expire.

The old Chirpy workflow (pages-deploy.yml) was deleted from the repo before the first push. Only one workflow runs now.
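The sync step can also be rehearsed locally before trusting it to CI (assuming the AWS CLI is configured with the deployer keys):

```shell
# Build the site locally with the same command the workflow runs.
JEKYLL_ENV=production bundle exec jekyll build

# --dryrun lists the uploads and deletions the sync would perform
# without touching the bucket.
aws s3 sync _site/ s3://denmotion/website/ --delete --dryrun
```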

The deploy time
45 seconds from git push to live on CloudFront. The Jekyll build takes about 2 seconds. Ruby setup takes the longest at 26 seconds, but that includes downloading and caching gems. Subsequent pushes with cached gems are faster. The S3 sync and CloudFront invalidation take about 9 seconds combined.


The Subdirectory Problem

The first deploy worked. The landing page loaded. Then I clicked Films and got an XML error page.

```xml
<Error>
    <Code>AccessDenied</Code>
    <Message>Access Denied</Message>
</Error>
```

This is a difference between GitHub Pages and CloudFront that isn’t obvious until you hit it.

Jekyll generates pages like Films and Photos as directories with an index.html inside: films/index.html, photos/index.html. When someone visits /films/, GitHub Pages automatically serves the index.html inside that directory. CloudFront doesn’t do this. It looks for a file literally called films/ in S3, can’t find it, and returns an access denied error.

The default root object setting in CloudFront only applies to the root URL (/). It doesn’t apply to subdirectories. This is a known limitation and it trips up everyone moving from GitHub Pages to CloudFront for the first time.

Why Access Denied instead of Not Found
CloudFront returns Access Denied rather than 404 because the S3 bucket blocks public access. When CloudFront can’t find the file through OAC, S3 denies the request rather than confirming whether the file exists. This is a security feature. It prevents people from probing the bucket to discover file names.

The Fix: Another CloudFront Function

Same approach as the share system. A CloudFront Function that intercepts every request and appends index.html where needed.

```javascript
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    } else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }

    return request;
}
```

Two rules. If the path ends with a / (like /films/), it appends index.html. If the path has no file extension at all (like /films), it also appends /index.html. Anything with a file extension (.css, .js, .jpg, .mp4) passes through untouched.

| Request | Rewritten to | Why |
|---|---|---|
| /films/ | /films/index.html | Directory with trailing slash |
| /films | /films/index.html | Directory without trailing slash |
| /assets/css/style.css | No change | Has a file extension |
| / | /index.html | The trailing-slash rule also covers the root |
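The two rules can be sanity-checked outside CloudFront. This is a throwaway shell mirror of the function's logic for testing the table above locally, not the function itself:

```shell
# Mirrors the CloudFront Function's two rules in shell. Not used in
# production; purely for checking the rewrite table.
rewrite_uri() {
    local uri="$1"
    case "$uri" in
        */)  printf '%sindex.html\n' "$uri" ;;   # trailing slash: append index.html
        *.*) printf '%s\n' "$uri" ;;             # contains a dot: pass through
        *)   printf '%s/index.html\n' "$uri" ;;  # extensionless path
    esac
}

rewrite_uri "/films/"                # /films/index.html
rewrite_uri "/films"                 # /films/index.html
rewrite_uri "/assets/css/style.css"  # /assets/css/style.css
```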

I created this as denmotion-portfolio-routing in the CloudFront console, published it, and associated it with the portfolio distribution’s default behaviour on the Viewer Request event. After a couple of minutes the Films and Photos pages loaded correctly.
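The same create-publish flow exists in the AWS CLI, for anyone who prefers it to the console (a sketch, assuming the function code is saved locally as routing.js; associating the published function with the distribution's behaviour is still easier in the console):

```shell
# Create the function from a local file.
aws cloudfront create-function \
    --name denmotion-portfolio-routing \
    --function-config '{"Comment":"Subdirectory index.html routing","Runtime":"cloudfront-js-2.0"}' \
    --function-code fileb://routing.js

# Publishing requires the current ETag of the development stage.
ETAG=$(aws cloudfront describe-function \
    --name denmotion-portfolio-routing \
    --query 'ETag' --output text)

aws cloudfront publish-function \
    --name denmotion-portfolio-routing \
    --if-match "$ETAG"
```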

GitHub Pages hides this complexity
GitHub Pages, Netlify, and Vercel all handle subdirectory routing automatically. It’s one of those things you don’t think about until you move to raw CloudFront. The trade-off is that you lose the automatic magic but you gain full control over the routing logic. The CloudFront Function is 10 lines of code and runs in under a millisecond at every edge location.


The DNS Swap

With the site working on the CloudFront distribution domain (d1ugwe5cnp55gw.cloudfront.net), the last step was pointing the actual domain at it.

Before

| Record | Type | Value |
|---|---|---|
| denmotion.com | A | 185.199.108.153, 185.199.109.153, 185.199.110.153, 185.199.111.153 |
| www.denmotion.com | CNAME | digitaldencloud.github.io |

After

| Record | Type | Value |
|---|---|---|
| denmotion.com | A (Alias) | d1ugwe5cnp55gw.cloudfront.net |
| www.denmotion.com | CNAME | denmotion.com |

The apex domain changed from four A records pointing at GitHub Pages IPs to a single alias record pointing at the CloudFront distribution. The www subdomain changed from pointing at GitHub’s domain to pointing at denmotion.com itself, which then resolves to CloudFront. If I ever change the distribution, I update one record and www follows automatically.
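Done via the CLI instead of the console, the apex swap is a single UPSERT (a sketch; ZONE_ID is a placeholder for the denmotion.com hosted zone ID, and Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS assigns to all CloudFront alias targets):

```shell
aws route53 change-resource-record-sets \
    --hosted-zone-id ZONE_ID \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "denmotion.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "d1ugwe5cnp55gw.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```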

DNS propagated in a few minutes. I tested in an incognito window and the site loaded from CloudFront.


Locking Down

With the site confirmed working on CloudFront, I went through the final cleanup.

  1. Unpublished GitHub Pages
    Went to the repo Settings > Pages and unpublished the site. GitHub Pages is no longer involved.

  2. Removed the custom domain
    Cleared denmotion.com from the GitHub Pages settings. GitHub doesn’t need to know about the domain anymore.

  3. Deleted the CNAME file
    The CNAME file in the repo root was only needed for GitHub Pages to know which domain to serve. CloudFront handles this through alternate domain names on the distribution. Deleted it, committed, pushed.

  4. Made the repo private
    Settings > General > Danger Zone > Change visibility > Private.

That’s the moment the code is locked down. The three custom layouts, the cinematic grid CSS, the hover-to-play JavaScript, the Fancybox configuration, the particles.js integration, and the full Jekyll Chirpy site architecture with all its overrides and customisations. All hidden. The site looks identical to every visitor. The difference is entirely behind the scenes.


The Full Infrastructure

This is everything running under denmotion.com, across both distributions.

```text
denmotion (S3 bucket, eu-west-2)
├── share/          → share.denmotion.com (CloudFront: denmotion-share)
├── films/          → portfolio film assets
├── photos/         → portfolio photo assets
└── website/        → denmotion.com (CloudFront: denmotion-portfolio)
```
| Layer | Technology | Purpose |
|---|---|---|
| Domain | Route 53 | DNS for apex and subdomains |
| Portfolio CDN | CloudFront (denmotion-portfolio) | Serves the Jekyll site from website/ |
| Share CDN | CloudFront (denmotion-share) | Serves the branded video viewer from share/ |
| URL routing | CloudFront Functions (2) | Subdirectory routing on portfolio, clean URLs on share |
| Storage | S3 (single denmotion bucket) | All files, all prefixes, one bucket |
| SSL | ACM wildcard cert (*.denmotion.com) | Covers every subdomain |
| Build | GitHub Actions | Jekyll build, S3 sync, cache invalidation on push |
| Deployment auth | IAM (github-actions-deployer) | Scoped to website/ prefix and portfolio distribution only |
| Media delivery | S3 + CloudFront | Self-hosted video and photos, zero compression |
| Contact form | API Gateway + Lambda | Serverless, no page reload |

```mermaid
graph TD
    A["git push"] --> B["GitHub Actions<br/>Jekyll build"]
    B --> C["aws s3 sync<br/>→ denmotion/website/"]
    C --> D["CloudFront invalidation"]
    D --> E["denmotion.com<br/>Live in 45 seconds"]

    F["~/scripts/share.sh"] --> G["aws s3 cp<br/>→ denmotion/share/"]
    G --> H["share.denmotion.com<br/>Branded viewer page"]

    style A fill:#1a1a2e,stroke:#e94560,color:#fff
    style B fill:#0f3460,stroke:#e94560,color:#fff
    style C fill:#16213e,stroke:#533483,color:#fff
    style D fill:#0f3460,stroke:#e94560,color:#fff
    style E fill:#e94560,stroke:#fff,color:#fff
    style F fill:#1a1a2e,stroke:#e94560,color:#fff
    style G fill:#16213e,stroke:#533483,color:#fff
    style H fill:#e94560,stroke:#fff,color:#fff
```

Two CloudFront distributions. One S3 bucket. One wildcard certificate. One hosted zone. One IAM deployer. Zero databases, zero servers, zero monthly platform fees beyond S3 storage and CloudFront requests.

The portfolio deploy is git push. The video share is one CLI command. Both go through the same bucket, the same edge network, and the same domain.


The Cost

| Service | Monthly cost |
|---|---|
| Route 53 hosted zone | $0.50 |
| S3 storage | Pennies (static HTML + a few videos) |
| CloudFront (2 distributions) | Free tier covers 1 TB transfer |
| ACM certificate | Free |
| GitHub Actions | Free for private repos (2,000 minutes/month) |
| IAM | Free |

The total infrastructure cost is under $1/month. That’s less than a single month of any hosted platform. Squarespace starts at £16/month. WordPress hosting starts at £25/month. Wix starts at £13/month. The DenMotion infrastructure does more than any of them for the price of a Route 53 hosted zone.


What This Took

Three posts. Three systems. One infrastructure.

The first post built the portfolio site: three layouts, the cinematic grid, the self-hosted video pipeline, the client funnel. The second post built the share system: S3, CloudFront, the branded viewer page, the CloudFront Function for clean URLs. This post moved the main site onto the same infrastructure and locked down the code.

Each build taught the next one. The share system taught the CloudFront distribution setup. The portfolio migration reused the same pattern but added GitHub Actions and IAM. The CloudFront Function for clean URLs on the share system became the template for the subdirectory routing function on the portfolio.

The whole thing started with a domain name and ended up here. One S3 bucket running a cinematic portfolio, a branded video sharing system, and a fully automated deployment pipeline, all on AWS services that cost less than a coffee.


Documented April 2026.

This post is licensed under CC BY 4.0 by the author.