Service worker is caching files but fetch event is never fired

After looking at your gist and your question, I think your issue is with scoping.

From what I've determined with service workers (at least with static files), a service worker's maximum scope is the directory it lives in. That is to say, it can't intercept files/requests/responses served from a location at or above its own directory, only those below it.

For example, /js/service-worker.js will only be able to handle files under /js/, such as /js/{dirName}/.

Therefore, if you change the location of your service worker to the root of your web project, the fetch event should fire and your assets should load from cache.

So something like /service-worker.js should be able to access the /json directory, since it is deeper than the service-worker.js file.

This is further explained here, in the "Register A service worker" section. https://developers.google.com/web/fundamentals/getting-started/primers/service-workers
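For reference, a minimal registration sketch along these lines. The paths are illustrative, not taken from the question, and the environment guard is only there so the snippet is a no-op outside a browser:

```javascript
// Sketch: register the worker from the site root so its default
// scope ('/') covers the whole origin. Paths are illustrative.
if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/service-worker.js')
    .then(function (registration) {
      // With the script at the root, registration.scope is the origin root.
      console.log('Service worker registered with scope:', registration.scope);
    })
    .catch(function (error) {
      console.error('Service worker registration failed:', error);
    });
}
```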


I struggled with this for a long time, and I think the documentation related to the matter is seriously lacking. In my experience, there is a very important distinction:

The service worker can only intercept fetch events if it is in or above the scope of the URL it is accessed from.

For example, my sw.js file was located at /static/sw.js. When accessing my site's root at / and attempting to intercept fetch events to js files in /static/js/common.js, the fetch events were not intercepted, even though the scope of my service worker was /static/ and the js file was in /static/js/.

Once I moved my sw.js file to the top-level scope /sw.js, the fetch events were all intercepted. This is because the scope of the page I was accessing with my browser / was the same as the scope of my sw.js file /.
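The default-scope rule described above can be mirrored with the standard URL API: the default scope is the directory containing the worker script. The helper name below is ours, not a browser API:

```javascript
// Sketch: compute the default scope of a service worker script,
// i.e. the directory the script sits in (helper name is illustrative).
function defaultScope(scriptUrl) {
  // Resolving './' against the script URL drops the filename and
  // keeps the directory, with its trailing slash.
  return new URL('./', scriptUrl).href;
}

console.log(defaultScope('https://example.com/static/sw.js'));
// 'https://example.com/static/'
console.log(defaultScope('https://example.com/sw.js'));
// 'https://example.com/'
```

This is why moving sw.js from /static/ to / changes which pages it can control.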

Please let me know if this clears things up for people, or if I am incorrect!


The exact code in the HTML5Rocks article is:

self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
      .then(function(response) {
        // Cache hit - return response
        if (response) {
          return response;
        }

        // IMPORTANT: Clone the request. A request is a stream and
        // can only be consumed once. Since we are consuming this
        // once by cache and once by the browser for fetch, we need
        // to clone the request.
        var fetchRequest = event.request.clone();

        return fetch(fetchRequest).then(
          function(response) {
            // Check if we received a valid response
            if (!response || response.status !== 200 || response.type !== 'basic') {
              return response;
            }

            // IMPORTANT: Clone the response. A response is a stream
            // and because we want the browser to consume the response
            // as well as the cache consuming the response, we need
            // to clone it so we have two streams.
            var responseToCache = response.clone();

            caches.open(CACHE_NAME)
              .then(function(cache) {
                cache.put(event.request, responseToCache);
              });

            return response;
          }
        );
      })
  );
});

The biggest thing that I can see is that you are not cloning the request from the fetch. You need to clone it because it is read twice: once when it is used to access the network (in the fetch) and once when it is used as the key to the cache.
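The read-once behavior can be seen with the Fetch API directly. This sketch uses Response (available in modern browsers and Node 18+); the same applies to Request:

```javascript
// Sketch: a Response body is a stream and can be read only once;
// clone() gives a second, independently readable copy.
async function demo() {
  const original = new Response('hello');
  const copy = original.clone();        // must clone BEFORE reading the body
  const first = await original.text();  // consumes the original's body
  const second = await copy.text();     // the clone is still readable
  return [first, second];
}

demo().then(function (results) {
  console.log(results); // [ 'hello', 'hello' ]
});
```

Without the clone, the second read would fail because the body was already consumed.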