Recent Activity
Here's what @andrei-gae has been up to this past year
-
Upvoted comment Hi Andrei! Absolutely, happy to answer any questions you have...
-
Replied to Hi Andrei! Absolutely, happy to answer any questions you have...
Hi Tom,
Thanks for the detailed response! Ah, that makes perfect sense. The key is that the cache sits between the Worker and R2, not between the end-user (Adocasts) and the Worker. That completely clears things up.
This actually sparks a follow-up question because my initial approach was different, and your answer has raised a new, exciting possibility if my understanding of how Workers operate is correct.
To give you some context, I was experimenting with making an R2 bucket public via a custom domain and then setting an aggressive "Cache Everything" rule on Cloudflare. My main goal was to leverage Cloudflare's global CDN to its fullest. I ran a quick test and saw that when I accessed a file from Spain, I got a cache HIT served directly from the Madrid data center, which was fantastic for performance.
My assumption was that if a user from the US requested the same file, they would get a cache HIT from a local US data center, effectively creating a globally distributed cache with minimal R2 operational costs. The obvious and critical downside, as you know, is the complete lack of security.
This brings me to my follow-up, just to make sure I'm understanding the power of Workers correctly.
Based on my research, it seems that Workers are true "edge" functions, meaning they execute on the Cloudflare data center closest to the end-user.
If that's the case, does this also mean that the caches.default you're using is local to that specific data center?
If so, I'm beginning to realize that your solution might be the best of both worlds.
For example:
- A user from Madrid makes a request. It hits the Madrid data center, the Worker runs there, validates the HMAC, and on a cache miss, pulls from R2 and caches the file in Madrid.
- Later, a user from the US makes a request. It hits a US data center, the Worker runs there, validates the HMAC, and on a cache miss, it would also pull from R2 and cache the file in that US data center.
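Just to make my mental model concrete, here's a rough sketch of the Worker flow I'm picturing. To be clear, this is purely illustrative and not your actual implementation: the `MEDIA` R2 binding, the `SIGNING_KEY` secret, and the `expires`/`signature` query parameters are placeholders I made up for the example.

```ts
// Illustrative Cloudflare Worker: HMAC check -> per-data-center cache -> R2 fallback.
// Binding names and the signature scheme are assumptions, not the real setup.
export interface Env {
  MEDIA: R2Bucket;       // hypothetical R2 bucket binding
  SIGNING_KEY: string;   // hypothetical secret used to sign URLs
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);                      // e.g. "videos/lesson-01.mp4"
    const expires = url.searchParams.get("expires") ?? "";
    const signature = url.searchParams.get("signature") ?? "";

    // 1. Validate the HMAC before touching the cache or R2.
    const cryptoKey = await crypto.subtle.importKey(
      "raw",
      new TextEncoder().encode(env.SIGNING_KEY),
      { name: "HMAC", hash: "SHA-256" },
      false,
      ["sign"],
    );
    const mac = await crypto.subtle.sign(
      "HMAC",
      cryptoKey,
      new TextEncoder().encode(`${key}:${expires}`),
    );
    const expected = btoa(String.fromCharCode(...new Uint8Array(mac)));
    // (A constant-time comparison would be safer; kept simple for the sketch.)
    if (expected !== signature || Date.now() / 1000 > Number(expires)) {
      return new Response("Forbidden", { status: 403 });
    }

    // 2. caches.default is scoped to the data center handling this request,
    //    so a Madrid request and a US request each warm their own local cache.
    //    Strip the signature so every signed URL for the same file shares one entry.
    const cache = caches.default;
    const cacheKey = new Request(url.origin + url.pathname);
    const cached = await cache.match(cacheKey);
    if (cached) return cached;

    // 3. Cache miss: pull the object from R2 once, then store it in this data center.
    const object = await env.MEDIA.get(key);
    if (!object) return new Response("Not Found", { status: 404 });

    const response = new Response(object.body, {
      headers: { "Cache-Control": "public, max-age=86400" },
    });
    ctx.waitUntil(cache.put(cacheKey, response.clone()));
    return response;
  },
};
```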
Am I interpreting this correctly? If so, your setup would provide the same global CDN performance and caching benefits as the "public bucket" method, but with the crucial authentication layer on top.
I just wanted to briefly confirm if this understanding is right, or if I'm missing a nuance in how Workers are deployed.
Thanks again for your time and for sharing your expertise. This has been incredibly helpful!
Best,
-
Started discussion Cloudflare R2 for Video Storage
-
Anniversary Thanks for being an Adocasts member for 1 year
-
Completed lesson Cross-Site Request Forgery (CSRF) Protection in InertiaJS
-
Completed lesson User Registration with InertiaJS
-
Completed lesson Goal of this Series
-
Completed lesson Defer Loading Props in InertiaJS 2
-
Completed lesson Prefetching Page to Boost Load Times in InertiaJS 2
-
Completed lesson Polling for Changes in InertiaJS 2
-
Completed lesson Applying Our Authorization UI Checks
-
Completed lesson Refreshing Partial Page Data
-
Completed lesson Creating the Settings Shell
-
Completed lesson Common useForm Methods & Options
-
Completed lesson Linking Between Pages & Page State Flow
-
Completed lesson Sharing Data from AdonisJS to Vue via Inertia
-
Completed lesson The Flow of Pages and Page Props
-
Completed lesson Setting Up TailwindCSS, Shadcn-Vue, and Automatic Component Imports
-
Completed lesson Server-Side Rendering (SSR) vs Client-Side Rendering (CSR)
-
Completed lesson Creating Our AdonisJS App With InertiaJS
-
Completed lesson What We'll Be Building
-
Completed lesson What Is InertiaJS?
-
Completed lesson Deferring A Prop Load Until it is Visible in InertiaJS 2