DenMotion: Building the Share System
A branded video sharing system built on S3 and CloudFront. Clean URLs, a dark viewer page, and a one-command upload workflow that replaces ugly S3 links with something worth sending.
I kept running into the same problem. Every time I finished editing a video for someone, I had to figure out how to get it to them. The file was too large for email. WhatsApp compressed it. Google Drive worked but I’m a cloud engineer. Using a third-party file sharing service when I have the AWS skills to build my own felt like a missed opportunity.
Uploading directly to S3 and sending the raw URL meant the recipient saw something like https://dinnertimebucket.s3.us-east-1.amazonaws.com/C2018_2.mp4, which looked like infrastructure leaked, not a brand.
The other issue was the download experience. When I uploaded a video to S3 through the console, the file played in the browser but there was no option to download it. If I wanted a download prompt, I had to re-upload using the CLI with the content-type metadata set so the browser would trigger a save dialog instead. I couldn’t have both: a viewable video in the browser with a download button alongside it. It was one or the other, depending on how the file was uploaded.
Then one day I was thinking about it and realised I could solve all of this at once. Build my own system on AWS. Make the upload process smoother, give the recipient a proper viewing experience, and extend the DenMotion brand into the delivery itself. Not just a link to a file, but a branded page that plays the video and offers a clean download button. One system that does both.
What I Built
A subdomain at share.denmotion.com that serves a branded dark viewer page for any video I upload to S3. The URL is clean. The page matches the DenMotion aesthetic. There’s a download button that works on iPhones. The whole thing runs on S3, CloudFront, and a CloudFront Function, with no server, no database, and no monthly platform fee.
The workflow from my side is one CLI command:
```bash
~/scripts/share.sh ~/Downloads/video.mp4 client/project-name
```
That uploads the video with the correct metadata and prints the URL:
```
https://share.denmotion.com/client/project-name
```
I send that link. Done.
What this post covers
The S3 and CloudFront infrastructure, the CloudFront Function that enables clean URLs, the branded viewer page, and the upload workflow. This is an AWS build documented from the terminal up.
The Infrastructure
The share system sits on the same S3 bucket I created for DenMotion’s portfolio content. One bucket, multiple prefixes, each serving a different purpose.
```
denmotion/
├── share/     ← shared videos, lifecycle rules on this prefix
├── films/     ← portfolio film files (poster, thumb, master)
├── photos/    ← portfolio photos (full, thumbs)
└── website/   ← site files when I move off GitHub Pages later
```
The share/ prefix is the only one involved in this build. Everything else stays untouched. The lifecycle rule I set up later only applies to share/, so portfolio content is permanent while shared videos clean themselves up.
S3 Bucket
The bucket is called denmotion, created in eu-west-2 with all public access blocked. Nothing in this bucket is directly accessible. Every request goes through CloudFront.
CloudFront Distribution
I created a dedicated CloudFront distribution for the share subdomain. The origin points to the denmotion S3 bucket with an origin path of /share. That origin path is the key decision. It means every request to share.denmotion.com/ maps to s3://denmotion/share/ behind the scenes. The visitor never sees the prefix in the URL.
| Setting | Value | Why |
|---|---|---|
| Origin | denmotion.s3.eu-west-2.amazonaws.com | The shared S3 bucket |
| Origin path | /share | Scopes this distribution to the share prefix only |
| Origin access | OAC (Origin Access Control) | Only CloudFront can read from S3 |
| Default root object | index.html | Serves the viewer page at the root URL |
| Compress objects | Yes | Gzip on the HTML viewer page |
The origin path also solves a security concern. Because this distribution is scoped to /share, nobody can access the films/, photos/, or website/ prefixes through this subdomain. The share distribution only sees files inside share/. Clean separation.
Why a separate distribution
I considered putting share.denmotion.com and denmotion.com on the same CloudFront distribution with path-based behaviours. That works today, but when I move the main site off GitHub Pages later, the portfolio and the share system will need different cache settings, different default root objects, and different error handling. Two distributions sharing one bucket is cleaner than one distribution trying to manage two different systems.
SSL Certificate
CloudFront requires certificates to be in us-east-1 regardless of where the bucket lives. When I created the distribution, CloudFront offered to generate a wildcard certificate for *.denmotion.com automatically through ACM. Because my domain is already managed in Route 53, the DNS validation happened without any manual steps.
The wildcard covers every subdomain I’ll ever need: share.denmotion.com, www.denmotion.com, and anything I add in the future. One certificate for the entire domain.
Route 53
I added a single A record alias for the share subdomain:
| Record | Type | Value |
|---|---|---|
| share.denmotion.com | A (Alias) | d3a308ok4rytlc.cloudfront.net |
The apex domain denmotion.com still points at GitHub Pages with four A records. The share subdomain points at the CloudFront distribution. Two different systems, same domain, no conflict.
```mermaid
graph LR
    A["denmotion.com"] --> B["GitHub Pages<br/>Portfolio site"]
    C["share.denmotion.com"] --> D["CloudFront<br/>Distribution"]
    D --> E["S3<br/>denmotion/share/"]
    style A fill:#1a1a2e,stroke:#e94560,color:#fff
    style B fill:#16213e,stroke:#533483,color:#fff
    style C fill:#1a1a2e,stroke:#e94560,color:#fff
    style D fill:#0f3460,stroke:#e94560,color:#fff
    style E fill:#16213e,stroke:#533483,color:#fff
```
At this point, uploading a video to s3://denmotion/share/test.mp4 and hitting https://share.denmotion.com/test.mp4 played it in the browser. The infrastructure was working. But there was no branding, no download button, and no design around the video. It was a raw file served through a CDN. And the URL still had a file extension in it.
The Clean URL Problem
How CloudFront Normally Works
CloudFront is simple. Someone visits a URL, CloudFront looks at the path, and fetches that exact file from S3.
- `share.denmotion.com/video.mp4` → CloudFront fetches `video.mp4` from S3
- `share.denmotion.com/fabio/hyrox.mp4` → CloudFront fetches `fabio/hyrox.mp4` from S3
- `share.denmotion.com/` → CloudFront serves `index.html` (because I set that as the default root object)
It’s a direct mapping. URL path in, file out. No logic, no decisions.
Problem 1: The Double Share
Because the videos live in the share/ prefix in S3 and the subdomain is share.denmotion.com, the full path came out as share.denmotion.com/share/client/project-name.mp4. The word share appeared twice, which looked redundant and messy.
Setting the origin path to /share on the CloudFront distribution fixed that. CloudFront strips the prefix from the URL and adds it behind the scenes, so share.denmotion.com/client/project.mp4 maps to s3://denmotion/share/client/project.mp4 without the visitor seeing the prefix.
Problem 2: The File Extension
I don’t want to send people share.denmotion.com/fabio/hyrox.mp4. I want to send share.denmotion.com/fabio/hyrox. Maybe I’m being extra, but a clean URL without the file extension just looks better. It looks intentional, like a proper route on a proper platform, not a direct link to a file sitting in a folder somewhere. I’ve never seen anyone do this for video sharing. Every service I’ve used, from Google Drive to WeTransfer to raw S3 links, exposes the file extension in the URL. So as far as I know, this is a first.
The problem is there’s no file called hyrox in S3. The file is called hyrox.mp4. CloudFront looks for hyrox, can’t find it, and returns an error.
Problem 3: The Raw File
Even with the clean path, if someone visits share.denmotion.com/fabio/hyrox.mp4 with the extension, CloudFront fetches the MP4 from S3 and serves it directly to the browser. The browser plays a raw video on a white background. No branding, no download button, no DenMotion design. Just a file.
I built a viewer page (index.html) that has all of that. The dark background, the nav bar, the download button, the footer. But CloudFront only serves index.html when someone visits the root /. That’s what the default root object setting does. It doesn’t apply to any other path. For /fabio/hyrox or any other URL, CloudFront just looks for the matching file in S3.
The Solution: A CloudFront Function
All three problems are solved by a CloudFront Function.
What is a CloudFront Function
A CloudFront Function is a lightweight JavaScript snippet that runs at CloudFront edge locations before the request reaches the origin (S3). It can inspect and modify requests and responses. The function executes in under a millisecond and is included in the CloudFront free tier. It’s not a Lambda function. It’s simpler, faster, and has no cold start. The trade-off is that it can only do basic request manipulation, not complex logic or external API calls.
The function sits between the visitor and S3. Every request passes through it before CloudFront fetches anything. It looks at the URL and asks one question: does this path have a file extension?
If yes (.mp4, .html, .png), it leaves the request alone. CloudFront fetches the file from S3 normally.
If no, it changes the request to /index.html. CloudFront fetches the viewer page instead.
```javascript
function handler(event) {
    var request = event.request;
    var uri = request.uri;

    // Pass through direct file requests
    if (uri.match(/\.\w+$/)) {
        return request;
    }

    // Pass through root request
    if (uri === '/' || uri === '') {
        return request;
    }

    // Clean URL — serve the viewer page
    request.uri = '/index.html';
    return request;
}
```
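Because a CloudFront Function is plain JavaScript, the extension check it relies on can be sanity-checked locally with Node before publishing. A small sketch of the same `/\.\w+$/` test (the wrapper function name `hasExtension` is mine, not part of the deployed function):

```javascript
// The same extension check the CloudFront Function uses:
// a path counts as a direct file request only if it ends in ".something".
var hasExtension = function (uri) {
    return /\.\w+$/.test(uri);
};

console.log(hasExtension('/fabio/hyrox.mp4')); // true  -> passed through to S3
console.log(hasExtension('/fabio/hyrox'));     // false -> rewritten to /index.html
console.log(hasExtension('/'));                // false -> root, handled separately
```

Running checks like this locally is cheap insurance: a typo in the regex would otherwise only show up after the function has propagated to every edge location.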
So when someone visits share.denmotion.com/fabio/hyrox:
- The function sees `/fabio/hyrox`, no file extension
- It rewrites the request to `/index.html`
- CloudFront fetches `index.html` from S3
- The viewer page loads in the browser with the dark background, the nav bar, the download button
- JavaScript inside the page reads the original URL path (`/fabio/hyrox`)
- JavaScript adds `.mp4` and loads `/fabio/hyrox.mp4` into the video player
- CloudFront fetches `hyrox.mp4` from S3 and streams it into the player
The visitor sees a branded page with a video in it. They never see the raw file. They never see the .mp4 extension in the URL.
| Request | What happens | What the visitor sees |
|---|---|---|
| `/fabio/hyrox` | Function rewrites to `/index.html` | Branded viewer page with video player |
| `/fabio/hyrox.mp4` | Passes through to S3 | Raw video file |
| `/index.html` | Passes through to S3 | Viewer page directly |
| `/` | Default root object serves `index.html` | Viewer page |
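The post doesn’t show the viewer page’s script, but the path-to-video mapping it relies on is tiny. A minimal sketch, assuming the page derives the MP4 key purely from `window.location.pathname` (the function name `videoSrcFromPath` is my naming for illustration, not the actual page code):

```javascript
// Derive the video URL from the clean URL path.
// '/fabio/hyrox' -> '/fabio/hyrox.mp4'
function videoSrcFromPath(pathname) {
    // Strip a trailing slash so '/fabio/hyrox/' still resolves
    var path = pathname.replace(/\/$/, '');
    if (path === '' || path === '/index.html') {
        return null; // root or direct page load: nothing to play
    }
    return path + '.mp4';
}

// In the browser this would feed the <video> element, e.g.:
//   document.querySelector('video').src = videoSrcFromPath(window.location.pathname);
console.log(videoSrcFromPath('/fabio/hyrox')); // '/fabio/hyrox.mp4'
```

The nice property of this split is that the CloudFront Function never needs to know which videos exist; it only routes, and the page resolves the actual file.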
Why not a query parameter
The alternative was share.denmotion.com?v=fabio/hyrox.mp4. Simpler to implement because it doesn’t need a CloudFront Function. But it looks like a search result, not a brand. The whole point of DenMotion is clean, intentional presentation. A query parameter undermines that. The CloudFront Function costs nothing and runs in under a millisecond. The clean URL is worth the 15 lines of JavaScript.
Setting It Up
Creating the function in the CloudFront console:
- CloudFront > Functions > Create function
- Name: `denmotion-share-viewer`
- Runtime: `cloudfront-js-2.0`
- Paste the function code, save, and publish
- Go to the share distribution > Behaviors > edit the default behavior
- Under Function associations, set Viewer request to `denmotion-share-viewer`
The function deploys to all edge locations within a couple of minutes. Once associated, every request to share.denmotion.com passes through it.
How the Three URLs Behave
Once the infrastructure is in place, the same video has three different URLs, each with a different result.
| URL | What happens |
|---|---|
| `denmotion.s3.eu-west-2.amazonaws.com/share/video.mp4` | Blocked. Public access is denied on the bucket. |
| `share.denmotion.com/client/video` | CloudFront Function serves the branded viewer page |
| `share.denmotion.com/client/video.mp4` | CloudFront serves the raw file in the browser's default player |
The first one is the raw S3 URL. It doesn’t work because all public access is blocked on the bucket. Nobody can reach S3 directly. Every request has to go through CloudFront, which is the whole point of Origin Access Control.
The second one is the clean URL. The CloudFront Function catches it, serves the viewer page, and the JavaScript loads the video inside the branded player with the download button.
The third one has the .mp4 extension. The CloudFront Function sees the extension and passes the request straight through to S3. The browser plays the raw file with its default video controls. No branding, no download button.
Three URLs, three different behaviours, all by design. The only one I share with people is the second one.
The Viewer Page
The viewer is a single index.html file uploaded to s3://denmotion/share/index.html. The CloudFront Function serves it, and the JavaScript inside handles the rest.
```mermaid
graph TD
    A["Visitor hits<br/>share.denmotion.com/client/project"] --> B["CloudFront Function<br/>No file extension detected"]
    B --> C["Rewrites to /index.html"]
    C --> D["S3 serves viewer page"]
    D --> E["JavaScript reads URL path<br/>/client/project"]
    E --> F["HEAD request to<br/>/client/project.mp4"]
    F --> G{"File exists?"}
    G -->|Yes| H["Render video player"]
    G -->|No| I["Show 'Video not found'"]
    style A fill:#1a1a2e,stroke:#e94560,color:#fff
    style B fill:#0f3460,stroke:#e94560,color:#fff
    style C fill:#0f3460,stroke:#e94560,color:#fff
    style D fill:#16213e,stroke:#533483,color:#fff
    style E fill:#16213e,stroke:#533483,color:#fff
    style F fill:#16213e,stroke:#533483,color:#fff
    style G fill:#1a1a2e,stroke:#e94560,color:#fff
    style H fill:#0f3460,stroke:#e94560,color:#fff
    style I fill:#0f3460,stroke:#e94560,color:#fff
```
If the file doesn’t exist, the page shows a clean error state with a link back to denmotion.com. No S3 XML error pages, no CloudFront default errors.
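The existence check behind that error state can be done with a HEAD request from the page’s script, which asks CloudFront whether the MP4 is there without downloading any video bytes. A sketch, assuming the standard `fetch` API; the helper name `videoExists` and the injectable `fetchFn` parameter are mine, added so the logic can be exercised without a network:

```javascript
// HEAD request: does the video exist behind CloudFront?
// Returns a promise resolving to true (render player) or false (show error).
function videoExists(url, fetchFn) {
    fetchFn = fetchFn || fetch; // injectable so the logic is testable offline
    return fetchFn(url, { method: 'HEAD' }).then(function (res) {
        // With OAC, a missing key typically comes back as 403 or 404;
        // either way res.ok is false and the page shows "Video not found".
        return res.ok;
    });
}
```

In the page this would gate the player render: resolve `true` and set the video source, resolve `false` and swap in the error state with a link back to denmotion.com.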
The Design
The viewer page matches the DenMotion aesthetic from the portfolio site. Pitch-black background (#050505), minimal navigation, no clutter.
| Element | Detail |
|---|---|
| Background | #050505, same as the cinematic layout |
| Nav bar | DENMOTION on the left, Close ✕ on the right |
| Video player | Centred, 16:9 aspect ratio, native HTML5 controls |
| Download button | Below the video, right-aligned, transparent border with hover effect |
| Footer | © 2026 denmotion.com \| Deniz Yilmaz |
The Close ✕ button links to denmotion.com in the same tab. If someone watches the video and clicks close, they land on the portfolio. The share page is a gateway, not a dead end.
The download button uses the download attribute on the <a> tag, which tells the browser to download the file rather than navigate to it. This is what fixes the iPhone problem. When the video has the correct Content-Type: video/mp4 metadata (set during upload) and the download link uses the download attribute, iOS Safari saves it properly instead of showing a blank file.
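Wiring that button is one attribute plus a sensible filename. A sketch of how the page might derive the suggested save name from the clean path (the helper `downloadNameFromPath` is my naming for illustration, not the actual page code):

```javascript
// '/fabio/hyrox' -> 'hyrox.mp4': the last path segment becomes the
// filename the browser suggests in its save dialog.
function downloadNameFromPath(pathname) {
    var segments = pathname.replace(/\/$/, '').split('/');
    return segments[segments.length - 1] + '.mp4';
}

// In the page, roughly:
//   var a = document.querySelector('a.download');
//   a.href = window.location.pathname + '.mp4';
//   a.setAttribute('download', downloadNameFromPath(window.location.pathname));
console.log(downloadNameFromPath('/fabio/hyrox')); // 'hyrox.mp4'
```

Deriving the name from the path means the saved file is called `hyrox.mp4`, not `index.html` or a random object key, without any per-video configuration.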
Responsive Layout
The page uses flexbox with min-height: 100dvh on the body. The dvh unit accounts for mobile browser chrome (the address bar and toolbar that appear and disappear on iOS and Android). The video wrapper has aspect-ratio: 16/9 to maintain proportions on any screen size.
On mobile, the download button goes full-width for easier tap targets. The page has natural scroll bounce but everything fits the viewport without needing to scroll to reach the footer.
The Upload Workflow
Every shared video is one CLI command. The upload script lives at ~/scripts/share.sh:
```bash
#!/bin/bash
# Usage: ./share.sh local-file.mp4 path/filename

FILE=$1
KEY=$2

if [ -z "$FILE" ] || [ -z "$KEY" ]; then
    echo "Usage: ./share.sh local-file.mp4 path/filename"
    exit 1
fi

aws s3 cp "$FILE" "s3://denmotion/share/${KEY}.mp4" \
    --region eu-west-2 \
    --content-type "video/mp4" \
    --cache-control "public, max-age=86400"

echo ""
echo "https://share.denmotion.com/${KEY}"
```
The script does three things that matter. It sets --content-type "video/mp4" so browsers play the file instead of downloading it. It sets --cache-control "public, max-age=86400" so the browser caches the video for 24 hours and doesn’t re-download on scrub or replay. And it prints the clean URL ready to send.
```bash
# Upload a video
~/scripts/share.sh ~/Downloads/hyrox-compilation.mp4 fabio/hyrox

# Output
https://share.denmotion.com/fabio/hyrox
```
The second argument is the URL path. Whatever I type there becomes the link. No fixed folder structure. I decide when I upload.
```bash
# Direct share, no folder
~/scripts/share.sh ~/Downloads/reel.mp4 bodrum-reel
# → share.denmotion.com/bodrum-reel

# Organised by person
~/scripts/share.sh ~/Downloads/gym-edit.mp4 fabio/gym-edit
# → share.denmotion.com/fabio/gym-edit
```
Updating the Viewer Page
When I update the viewer page, the deploy is two commands:
```bash
aws s3 cp index.html s3://denmotion/share/index.html --region eu-west-2 --content-type "text/html"
aws cloudfront create-invalidation --distribution-id E3EA2ZYWVBJZ6G --paths "/index.html"
```
The first uploads the new file. The second clears the CloudFront cache so edge locations serve the latest version immediately. Without the invalidation, the old page could be served for up to 24 hours.
The --content-type lesson
I hit this twice during the build. First when uploading the viewer page without --content-type "text/html", which caused the browser to download a blank file instead of rendering the page. Then I remembered the same issue is exactly what causes the iPhone video download problem. The content type has to be set explicitly on every upload. The script handles it automatically so I never think about it again.
Lifecycle Rules
Shared videos are temporary. Someone watches them, maybe downloads them, and that’s it. There’s no reason to keep paying for storage on files nobody accesses after a few weeks.
I created a lifecycle rule scoped to the share/ prefix:
| Transition | When | What happens |
|---|---|---|
| S3 Standard → Glacier Instant Retrieval | After 30 days | Storage cost drops ~68%, access time stays the same |
| Deletion | After 90 days | File is permanently removed |
```bash
aws s3api put-bucket-lifecycle-configuration \
    --bucket denmotion \
    --region eu-west-2 \
    --lifecycle-configuration '{
        "Rules": [
            {
                "ID": "share-cleanup",
                "Filter": { "Prefix": "share/" },
                "Status": "Enabled",
                "Transitions": [
                    {
                        "Days": 30,
                        "StorageClass": "GLACIER_IR"
                    }
                ],
                "Expiration": {
                    "Days": 90
                }
            }
        ]
    }'
```
The rule only touches the share/ prefix. Portfolio files in films/, photos/, and website/ are permanent and unaffected.
Glacier Instant Retrieval is the right tier for this. If someone revisits a share link a month later, the video still plays instantly. The retrieval latency is the same as S3 Standard, just cheaper to store. After 90 days the file is gone. If someone needs it again after that, I re-upload from my local archive.
Why not just delete at 30 days
A month is tight. People bookmark things, come back to them, share them with someone else. Glacier Instant Retrieval at 30 days means the link still works for three months total. The storage cost between 30 and 90 days is negligible. Deleting too early creates a bad experience. Deleting at 90 days keeps the bucket clean without cutting anyone off prematurely.
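The 30/90-day timeline is easy to sanity-check with a little date arithmetic. A sketch (the helper `lifecycleDates` is mine; note that S3 actually evaluates lifecycle rules at midnight UTC, so real transitions land at the day boundary after these instants):

```javascript
// Given an upload date, when does a shared video transition and expire?
function lifecycleDates(uploadDate) {
    var DAY = 24 * 60 * 60 * 1000;
    return {
        glacier: new Date(uploadDate.getTime() + 30 * DAY), // -> GLACIER_IR
        deleted: new Date(uploadDate.getTime() + 90 * DAY)  // object removed
    };
}

var d = lifecycleDates(new Date('2026-01-01T00:00:00Z'));
console.log(d.glacier.toISOString().slice(0, 10)); // '2026-01-31'
console.log(d.deleted.toISOString().slice(0, 10)); // '2026-04-01'
```

So a video shared on New Year’s Day moves to cheaper storage at the end of January and disappears at the start of April, with the link working the whole time.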
The Full Architecture
This is everything running under share.denmotion.com.
| Layer | Technology | Purpose |
|---|---|---|
| Domain | Route 53 | A record alias to CloudFront |
| CDN | CloudFront | Edge caching, SSL, request routing |
| URL routing | CloudFront Function | Rewrites clean URLs to serve the viewer page |
| Storage | S3 (denmotion/share/) | Video files and the viewer HTML |
| Viewer page | Static HTML + JavaScript | Branded player, download button, error handling |
| SSL | ACM wildcard cert (*.denmotion.com) | HTTPS on all subdomains |
| Cost management | S3 lifecycle rule | Glacier at 30 days, delete at 90 days |
| Upload | Bash script (share.sh) | One command, correct metadata, prints clean URL |
```mermaid
graph TD
    A["~/scripts/share.sh"] --> B["aws s3 cp<br/>--content-type video/mp4<br/>--cache-control public, max-age=86400"]
    B --> C["S3<br/>denmotion/share/"]
    D["Visitor hits<br/>share.denmotion.com/path"] --> E["CloudFront<br/>Edge location"]
    E --> F["CloudFront Function<br/>Clean URL → index.html"]
    F --> C
    C --> G["Viewer page loads<br/>JS reads URL path<br/>Loads video"]
    style A fill:#1a1a2e,stroke:#e94560,color:#fff
    style B fill:#0f3460,stroke:#e94560,color:#fff
    style C fill:#16213e,stroke:#533483,color:#fff
    style D fill:#1a1a2e,stroke:#e94560,color:#fff
    style E fill:#0f3460,stroke:#e94560,color:#fff
    style F fill:#0f3460,stroke:#e94560,color:#fff
    style G fill:#16213e,stroke:#533483,color:#fff
```
No server. No database. No monthly platform fee. The entire system runs on AWS services that cost pennies at this scale.
What It Replaced
| Before | After |
|---|---|
| Raw S3 URLs with bucket names and regions | share.denmotion.com/client/project |
| No branding, white browser background | Dark viewer page matching DenMotion aesthetic |
| No download option on iPhones | Working download button on all devices |
| Manual uploads forgetting metadata | Script sets content type and cache headers automatically |
| Videos stored indefinitely, accumulating cost | Lifecycle rule transitions to Glacier at 30 days, deletes at 90 |
| Sharing through Google Drive or WeTransfer | One CLI command, one branded URL |
The difference between sending someone a Google Drive link and sending them share.denmotion.com/fabio/hyrox is the same difference between handing someone a USB stick and inviting them into a screening room. The content is identical. The experience is not.
What Comes Next
The share system is done. The next step is moving the main site off GitHub Pages onto S3 + CloudFront.
Right now the denmotion.com repo is public on GitHub. Every layout, every CSS file, every JavaScript integration is visible. Anyone can fork it and clone the exact portfolio I spent weeks building. The videos and images are safe because they live on S3, but the site code is exposed. The three custom layouts, the cinematic grid, the hover-to-play logic, the Fancybox configuration, the particles.js integration. That’s the intellectual property. If I ever sell this architecture to clients, it needs to be private.
Making the repo private on a free GitHub account breaks GitHub Pages. The site goes offline. The solution is the same pattern I just built for the share system. A new CloudFront distribution with origin path /website, pointing at the same denmotion S3 bucket. A GitHub Actions workflow that builds the Jekyll site and syncs it to S3 on every push. The Route 53 A records for denmotion.com swap from GitHub Pages IPs to the new CloudFront distribution.
Two distributions, one bucket, one wildcard certificate, one hosted zone:
| Distribution | Domain | Origin path | Purpose |
|---|---|---|---|
| denmotion-share | share.denmotion.com | /share | Video sharing |
| denmotion-portfolio | denmotion.com | /website | Portfolio site |
Building the share system first was the right order. It solved an immediate problem, taught me the CloudFront distribution setup while it was fresh, and established the pattern. The second time will be faster.
How This Connects
This is the second piece of the DenMotion infrastructure. The first post documented the portfolio site: three layouts, the cinematic grid, self-hosted video delivery, the client funnel from landing page to contact form.
The share system uses the same S3 bucket, the same CloudFront approach, and the same design language. The viewer page has the same #050505 background, the same nav bar typography, the same button styling. Someone who visits the portfolio and then receives a share link sees the same brand in both places.
Documented April 2026.