DenMotion: Moving to S3 + CloudFront
Moving the DenMotion portfolio off GitHub Pages and onto S3 + CloudFront. Private repo, automated deploys, and the full infrastructure now running under one roof.
The share system was the first CloudFront distribution. This is the second.
When I built share.denmotion.com, the plan was always to move the main portfolio site off GitHub Pages afterwards. The share build taught the pattern (S3 bucket, CloudFront distribution, CloudFront Functions, Route 53 records, OAC), and now I was applying the same pattern to the site itself.
The reason is simple. The denmotion.com repo was public, and every layout, every CSS file, every JavaScript integration was visible on GitHub. Anyone could fork it and clone the exact portfolio I had built. The three custom layouts, the cinematic grid, the hover-to-play logic, the Fancybox configuration, the particles.js integration: that's the intellectual property. If I'm going to sell this architecture to clients, the code needs to be private.
On a free GitHub account, making a repo private kills GitHub Pages. The site goes offline. So the move had to happen first, then the repo goes private.
What Changed
| Before | After |
|---|---|
| Public repo, anyone can fork the code | Private repo, code locked down |
| GitHub Pages hosting | S3 + CloudFront hosting |
| GitHub manages SSL | ACM wildcard cert (*.denmotion.com) |
| Four A records to GitHub Pages IPs | A record alias to CloudFront |
| Chirpy workflow deploys to GitHub Pages | Custom workflow deploys to S3 |
| No control over caching or edge delivery | CloudFront CDN with global edge locations |
The site looks the same and behaves the same, but it’s faster. GitHub Pages serves from a handful of data centres. CloudFront has over 400 edge locations globally. When someone in London loads denmotion.com, CloudFront serves it from a London edge location instead of routing the request across the Atlantic. The caching is better too. CloudFront caches files at each edge location with full control over cache behaviour. The visitor probably won’t notice the difference on a fast connection, but the infrastructure behind it is objectively better.
What this post covers
The CloudFront distribution setup, the IAM deployment user, the GitHub Actions workflow, the CloudFront Function for subdirectory routing, the DNS swap, and the repo lockdown. This is the final piece of the DenMotion infrastructure.
The Distribution
Same pattern as the share system. A new CloudFront distribution pointing at the same denmotion S3 bucket, but with a different origin path.
| Setting | Value | Why |
|---|---|---|
| Name | denmotion-portfolio | Distinguishes it from the share distribution |
| Origin | denmotion.s3.eu-west-2.amazonaws.com | Same bucket as everything else |
| Origin path | /website | Scopes this distribution to the website prefix |
| Origin access | OAC (Origin Access Control) | Only CloudFront can read from S3 |
| Default root object | index.html | Serves the landing page at the root URL |
| Alternate domains | denmotion.com, www.denmotion.com | Both the apex and www subdomain |
| SSL certificate | Existing *.denmotion.com wildcard | Already created during the share build |
The origin path /website means every request to denmotion.com/ maps to s3://denmotion/website/ behind the scenes. Same principle as the share distribution using /share. Each distribution is scoped to its own prefix, same bucket.
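The mapping rule itself is simple enough to sketch. This isn't AWS code, just an illustration of how CloudFront combines the distribution's origin path with the viewer's URI before requesting the object from S3:

```javascript
// Sketch of the origin-path rule: CloudFront prepends the distribution's
// origin path to the viewer URI, and the result (minus the leading slash)
// is the S3 object key it requests. Illustrative only.
function s3Key(viewerUri, originPath) {
  return (originPath + viewerUri).replace(/^\//, '');
}

console.log(s3Key('/index.html', '/website'));       // website/index.html
console.log(s3Key('/films/index.html', '/website')); // website/films/index.html
console.log(s3Key('/video.mp4', '/share'));          // share/video.mp4
```

The viewer never sees the prefix, which is what lets two distributions carve up one bucket cleanly.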
The wildcard certificate I created for the share system already covers denmotion.com and every subdomain. No new certificate needed. That one decision during the share build saved time here.
Two distributions, one bucket
The `denmotion` S3 bucket now serves two completely independent systems through two CloudFront distributions. The share distribution reads from `share/`. The portfolio distribution reads from `website/`. They have independent cache settings, independent default root objects, and independent CloudFront Functions. Neither can see the other's files.
```mermaid
graph TD
    A["denmotion.com"] --> B["CloudFront<br/>denmotion-portfolio"]
    C["share.denmotion.com"] --> D["CloudFront<br/>denmotion-share"]
    B --> E["S3<br/>denmotion/website/"]
    D --> F["S3<br/>denmotion/share/"]

    style A fill:#1a1a2e,stroke:#e94560,color:#fff
    style B fill:#0f3460,stroke:#e94560,color:#fff
    style C fill:#1a1a2e,stroke:#e94560,color:#fff
    style D fill:#0f3460,stroke:#e94560,color:#fff
    style E fill:#16213e,stroke:#533483,color:#fff
    style F fill:#16213e,stroke:#533483,color:#fff
```
Bucket Policy
The S3 bucket policy needs to allow both distributions to read from the bucket. When I created the portfolio distribution with OAC, CloudFront added the new distribution ARN to the existing policy automatically. The policy now has two source ARNs in the condition:
```json
"Condition": {
    "ArnLike": {
        "AWS:SourceArn": [
            "arn:aws:cloudfront::223791342103:distribution/E3EA2ZYWVBJZ6G",
            "arn:aws:cloudfront::223791342103:distribution/ENSORJ28LFJXN"
        ]
    }
}
```
The first is the share distribution. The second is the portfolio distribution. Both can read. Nobody else can. The bucket is still fully blocked from public access.
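For context, that condition sits inside a statement granting the CloudFront service principal read access. The full statement follows the standard OAC policy shape, roughly like this (the ARNs are the ones above; the surrounding structure is the generic template, not copied from my console):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipalReadOnly",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::denmotion/*",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": [
            "arn:aws:cloudfront::223791342103:distribution/E3EA2ZYWVBJZ6G",
            "arn:aws:cloudfront::223791342103:distribution/ENSORJ28LFJXN"
          ]
        }
      }
    }
  ]
}
```

The service principal plus the `SourceArn` condition is what ties read access to those two specific distributions and nothing else.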
The Deployment User
GitHub Actions needs permission to upload files to S3 and invalidate the CloudFront cache. I created a dedicated IAM user for this with no console access and a tightly scoped policy.
The Policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::denmotion",
        "arn:aws:s3:::denmotion/website/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::223791342103:distribution/ENSORJ28LFJXN"
    }
  ]
}
```
Three S3 actions, one CloudFront action, all scoped to exactly what the deploy needs.
s3:ListBucket on the bucket lets the deploy script compare what’s in S3 with what’s in the build, so it knows what to delete. s3:PutObject and s3:DeleteObject are scoped to denmotion/website/* only. The deployer can upload and remove files inside the website/ prefix but can’t touch share/, films/, or photos/.
cloudfront:CreateInvalidation is scoped to the portfolio distribution ARN only. The deployer can clear the cache on this one distribution. It can’t modify distribution settings, can’t access other distributions, can’t do anything else in CloudFront.
No console access. Programmatic keys only.
Principle of least privilege
The IAM policy scopes the deployer to exactly what it needs and nothing more. It can write to the `website/` prefix in S3 but not `share/`, `films/`, or `photos/`. It can invalidate the portfolio CloudFront cache but not the share distribution. It has no console access. If the credentials leaked, the blast radius is limited to the website files only. Everything else in the bucket and across AWS is untouchable.
GitHub Secrets
The IAM access key and secret key go into the GitHub repo as encrypted secrets. GitHub encrypts them at rest and only exposes them to the workflow at runtime. They never appear in logs.
| Secret | What it stores |
|---|---|
| AWS_ACCESS_KEY_ID | IAM deployer access key |
| AWS_SECRET_ACCESS_KEY | IAM deployer secret key |
| S3_BUCKET_NAME | denmotion |
| CLOUDFRONT_DIST_ID | Portfolio distribution ID |
Four secrets. The workflow references them by name. Nobody can see the values after they’re saved.
The GitHub Actions Workflow
The workflow replaces Chirpy’s default GitHub Pages deployment. Instead of building the site and publishing to GitHub’s CDN, it builds the site and syncs the output to S3.
```yaml
name: Deploy DenMotion to AWS

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.3'
          bundler-cache: true

      - name: Build Jekyll Site
        run: JEKYLL_ENV=production bundle exec jekyll build

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-west-2

      - name: Sync files to S3
        run: aws s3 sync _site/ s3://${{ secrets.S3_BUCKET_NAME }}/website/ --delete

      - name: Invalidate CloudFront Cache
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ secrets.CLOUDFRONT_DIST_ID }} \
            --paths "/*"
```
What Each Step Does
Checkout Code pulls the repo into the workflow runner. Setup Ruby installs Ruby 3.3 and caches the bundled gems so subsequent builds are faster. Build Jekyll Site runs the same Jekyll build command that Chirpy's old workflow used, compiling the markdown and layouts into static HTML inside the `_site/` folder.

Configure AWS Credentials loads the IAM keys from the encrypted secrets. Sync files to S3 uploads the built `_site/` folder to `s3://denmotion/website/`. The `--delete` flag removes any files from the `website/` prefix that no longer exist in the build. This keeps the bucket clean and prevents stale pages from being served. Invalidate CloudFront Cache forces all edge locations to fetch the latest files from S3 immediately instead of waiting for the cache TTL to expire.
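The `--delete` behaviour is essentially a set difference between local and remote keys. A rough model of the plan `aws s3 sync` computes (the real tool also compares sizes and timestamps to skip unchanged files, which this sketch ignores):

```javascript
// Rough model of `aws s3 sync --delete`: everything local is an upload
// candidate; any remote key with no local counterpart gets deleted.
// Illustrative only; the real sync also diffs size and modification time.
function syncPlan(localKeys, remoteKeys) {
  const local = new Set(localKeys);
  return {
    upload: localKeys,                               // candidates (real sync skips unchanged files)
    remove: remoteKeys.filter((k) => !local.has(k)), // stale keys removed by --delete
  };
}

const plan = syncPlan(
  ['index.html', 'films/index.html'],
  ['index.html', 'old-page/index.html']
);
console.log(plan.remove); // ['old-page/index.html']
```

Without `--delete`, a renamed or removed page would linger in `website/` and keep being served from the edge.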
The old Chirpy workflow (pages-deploy.yml) was deleted from the repo before the first push. Only one workflow runs now.
The deploy time
45 seconds from `git push` to live on CloudFront. The Jekyll build takes about 2 seconds. Ruby setup takes the longest at 26 seconds, but that includes downloading and caching gems. Subsequent pushes with cached gems are faster. The S3 sync and CloudFront invalidation take about 9 seconds combined.
The Subdirectory Problem
The first deploy worked. The landing page loaded. Then I clicked Films and got an XML error page.
```xml
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
</Error>
```
This is a difference between GitHub Pages and CloudFront that isn’t obvious until you hit it.
Jekyll generates pages like Films and Photos as directories with an index.html inside: films/index.html, photos/index.html. When someone visits /films/, GitHub Pages automatically serves the index.html inside that directory. CloudFront doesn’t do this. It looks for a file literally called films/ in S3, can’t find it, and returns an access denied error.
The default root object setting in CloudFront only applies to the root URL (/). It doesn’t apply to subdirectories. This is a known limitation and it trips up everyone moving from GitHub Pages to CloudFront for the first time.
Why Access Denied instead of Not Found
CloudFront returns Access Denied rather than 404 because the S3 bucket blocks public access. When CloudFront can’t find the file through OAC, S3 denies the request rather than confirming whether the file exists. This is a security feature. It prevents people from probing the bucket to discover file names.
The Fix: Another CloudFront Function
Same approach as the share system. A CloudFront Function that intercepts every request and appends index.html where needed.
```javascript
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    if (uri.endsWith('/')) {
        request.uri += 'index.html';
    } else if (!uri.includes('.')) {
        request.uri += '/index.html';
    }

    return request;
}
```
Two rules. If the path ends with a / (like /films/), it appends index.html. If the path has no file extension at all (like /films), it also appends /index.html. Anything with a file extension (.css, .js, .jpg, .mp4) passes through untouched.
| Request | Rewritten to | Why |
|---|---|---|
/films/ | /films/index.html | Directory with trailing slash |
/films | /films/index.html | Directory without trailing slash |
/assets/css/style.css | No change | Has a file extension |
/ | No change | Default root object handles this |
I created this as denmotion-portfolio-routing in the CloudFront console, published it, and associated it with the portfolio distribution’s default behaviour on the Viewer Request event. After a couple of minutes the Films and Photos pages loaded correctly.
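The rewrite rules are easy to sanity-check locally before publishing. The CloudFront Functions runtime hands the handler an event with a `request` object, which is all that needs mocking, so a plain Node harness around the same logic does the job:

```javascript
// Same routing logic as the published function, wrapped in a tiny
// harness so the rewrite rules can be exercised locally with Node.
function handler(event) {
  var request = event.request;
  var uri = request.uri;
  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }
  return request;
}

// Helper: feed a URI through a mock viewer-request event.
function rewrite(uri) {
  return handler({ request: { uri: uri } }).uri;
}

console.log(rewrite('/films/'));               // /films/index.html
console.log(rewrite('/films'));                // /films/index.html
console.log(rewrite('/assets/css/style.css')); // /assets/css/style.css (unchanged)
```

Running the table's cases through the harness before publishing catches rule mistakes without waiting on a distribution deploy.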
GitHub Pages hides this complexity
GitHub Pages, Netlify, and Vercel all handle subdirectory routing automatically. It’s one of those things you don’t think about until you move to raw CloudFront. The trade-off is that you lose the automatic magic but you gain full control over the routing logic. The CloudFront Function is 10 lines of code and runs in under a millisecond at every edge location.
The DNS Swap
With the site working on the CloudFront distribution domain (d1ugwe5cnp55gw.cloudfront.net), the last step was pointing the actual domain at it.
Before
| Record | Type | Value |
|---|---|---|
| denmotion.com | A | 185.199.108.153, 185.199.109.153, 185.199.110.153, 185.199.111.153 |
| www.denmotion.com | CNAME | digitaldencloud.github.io |
After
| Record | Type | Value |
|---|---|---|
| denmotion.com | A (Alias) | d1ugwe5cnp55gw.cloudfront.net |
| www.denmotion.com | CNAME | denmotion.com |
The apex domain changed from four A records pointing at GitHub Pages IPs to a single alias record pointing at the CloudFront distribution. The www subdomain changed from pointing at GitHub’s domain to pointing at denmotion.com itself, which then resolves to CloudFront. If I ever change the distribution, I update one record and www follows automatically.
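I made the change in the Route 53 console, but for reference, an equivalent change batch for `aws route53 change-resource-record-sets` would look roughly like this (`Z2FDTNDATAQYW2` is the fixed hosted zone ID AWS uses for every CloudFront alias target; the rest mirrors the table above):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "denmotion.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1ugwe5cnp55gw.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Alias records are the reason this works at the apex at all: a CNAME isn't allowed at the zone root, but a Route 53 alias resolves to CloudFront's IPs directly and costs nothing per query.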
DNS propagated in a few minutes. I tested in an incognito window and the site loaded from CloudFront.
Locking Down
With the site confirmed working on CloudFront, I went through the final cleanup.
Unpublished GitHub Pages
Went to the repo Settings > Pages and unpublished the site. GitHub Pages is no longer involved.

Removed the custom domain
Cleared `denmotion.com` from the GitHub Pages settings. GitHub doesn't need to know about the domain anymore.

Deleted the CNAME file
The `CNAME` file in the repo root was only needed for GitHub Pages to know which domain to serve. CloudFront handles this through alternate domain names on the distribution. Deleted it, committed, pushed.

Made the repo private
Settings > General > Danger Zone > Change visibility > Private.
That’s the moment the code is locked down. The three custom layouts, the cinematic grid CSS, the hover-to-play JavaScript, the Fancybox configuration, the particles.js integration, and the full Jekyll Chirpy site architecture with all its overrides and customisations. All hidden. The site looks identical to every visitor. The difference is entirely behind the scenes.
The Full Infrastructure
This is everything running under denmotion.com, across both distributions.
```text
denmotion (S3 bucket, eu-west-2)
├── share/    → share.denmotion.com (CloudFront: denmotion-share)
├── films/    → portfolio film assets
├── photos/   → portfolio photo assets
└── website/  → denmotion.com (CloudFront: denmotion-portfolio)
```
| Layer | Technology | Purpose |
|---|---|---|
| Domain | Route 53 | DNS for apex and subdomains |
| Portfolio CDN | CloudFront (denmotion-portfolio) | Serves the Jekyll site from website/ |
| Share CDN | CloudFront (denmotion-share) | Serves the branded video viewer from share/ |
| URL routing | CloudFront Functions (2) | Subdirectory routing on portfolio, clean URLs on share |
| Storage | S3 (single denmotion bucket) | All files, all prefixes, one bucket |
| SSL | ACM wildcard cert (*.denmotion.com) | Covers every subdomain |
| Build | GitHub Actions | Jekyll build, S3 sync, cache invalidation on push |
| Deployment auth | IAM (github-actions-deployer) | Scoped to website/ prefix and portfolio distribution only |
| Media delivery | S3 + CloudFront | Self-hosted video and photos, zero compression |
| Contact form | API Gateway + Lambda | Serverless, no page reload |
```mermaid
graph TD
    A["git push"] --> B["GitHub Actions<br/>Jekyll build"]
    B --> C["aws s3 sync<br/>→ denmotion/website/"]
    C --> D["CloudFront invalidation"]
    D --> E["denmotion.com<br/>Live in 45 seconds"]
    F["~/scripts/share.sh"] --> G["aws s3 cp<br/>→ denmotion/share/"]
    G --> H["share.denmotion.com<br/>Branded viewer page"]

    style A fill:#1a1a2e,stroke:#e94560,color:#fff
    style B fill:#0f3460,stroke:#e94560,color:#fff
    style C fill:#16213e,stroke:#533483,color:#fff
    style D fill:#0f3460,stroke:#e94560,color:#fff
    style E fill:#e94560,stroke:#fff,color:#fff
    style F fill:#1a1a2e,stroke:#e94560,color:#fff
    style G fill:#16213e,stroke:#533483,color:#fff
    style H fill:#e94560,stroke:#fff,color:#fff
```
Two CloudFront distributions. One S3 bucket. One wildcard certificate. One hosted zone. One IAM deployer. Zero databases, zero servers, zero monthly platform fees beyond S3 storage and CloudFront requests.
The portfolio deploy is git push. The video share is one CLI command. Both go through the same bucket, the same edge network, and the same domain.
The Cost
| Service | Monthly cost |
|---|---|
| Route 53 hosted zone | $0.50 |
| S3 storage | Pennies (static HTML + a few videos) |
| CloudFront (2 distributions) | Free tier covers 1TB transfer |
| ACM certificate | Free |
| GitHub Actions | Free for private repos (2,000 minutes/month) |
| IAM | Free |
The total infrastructure cost is under $1/month. That’s less than a single month of any hosted platform. Squarespace starts at £16/month. WordPress hosting starts at £25/month. Wix starts at £13/month. The DenMotion infrastructure does more than any of them for the price of a Route 53 hosted zone.
What This Took
Three posts. Three systems. One infrastructure.
The first post built the portfolio site: three layouts, the cinematic grid, the self-hosted video pipeline, the client funnel. The second post built the share system: S3, CloudFront, the branded viewer page, the CloudFront Function for clean URLs. This post moved the main site onto the same infrastructure and locked down the code.
Each build taught the next one. The share system taught the CloudFront distribution setup. The portfolio migration reused the same pattern but added GitHub Actions and IAM. The CloudFront Function for clean URLs on the share system became the template for the subdirectory routing function on the portfolio.
The whole thing started with a domain name and ended up here. One S3 bucket running a cinematic portfolio, a branded video sharing system, and a fully automated deployment pipeline, all on AWS services that cost less than a coffee.
Documented April 2026.