Upgrading our React app to GraphQL Relay Hooks
We’ve been on a forked version of Relay v8 for a couple years. While the new versions had some neat features, nothing really compelled us to upgrade until now. Relay v11 (the one with hooks) is the biggest release since Relay Modern & it’s amazing. Aside from hooks, it lets us use React’s Suspense API instead of the render props pattern, allows for fine-grained control of query invalidation, and provides patterns for avoiding waterfall queries. While we’ve been able to clean up a bunch of our code, there have also been a few sharp edges during the migration. Let’s explore.
Partial Data and Client Fields
In our app, queries only need to load once. After the initial fetch, we use subscriptions to keep the data fresh. The only problem was figuring out how to prevent Relay from disposing of the query after the component unmounts. In previous versions, we did this by forking the `QueryRenderer`. In v11, it’s as easy as setting the fetch policy to `store-or-network` and increasing the buffer size: `const store = new Store(new RecordSource(), {gcReleaseBufferSize: 25})`. The only gotcha was that any client field would always get flagged as missing. For example, we had a field handler that turned rich text into plaintext for client-side searches:
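Here’s a minimal sketch of that kind of handler. The field names (`content`, `__contentText`), the Draft.js-style rich-text shape, and the pared-down store interface are illustrative stand-ins, not our production schema or Relay’s full `RecordProxy`:

```ts
// Sketch of a client field handler: derive a plaintext handle from a
// rich-text JSON field so client-side search can match against it.
// All names here are illustrative.

// Pure helper: flatten Draft.js-style rich text into a search string
const toSearchText = (rawContent: string): string => {
  try {
    const parsed = JSON.parse(rawContent) as {blocks?: {text: string}[]}
    return (parsed.blocks ?? []).map((block) => block.text).join(' ')
  } catch {
    return ''
  }
}

// Pared-down shapes of Relay's RecordProxy & store
interface RecordProxyLike {
  getValue(name: string): unknown
  setValue(value: unknown, name: string): void
}
interface StoreLike {
  get(dataID: string): RecordProxyLike | null | undefined
}

const plaintextHandler = {
  update(store: StoreLike, payload: {dataID: string; fieldKey: string; handleKey: string}) {
    const record = store.get(payload.dataID)
    if (!record) return
    const raw = record.getValue(payload.fieldKey)
    // Write the derived value on the parent record under the handleKey
    record.setValue(typeof raw === 'string' ? toSearchText(raw) : '', payload.handleKey)
  }
}
```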
In the above case, the record was flagged as missing. To determine which field caused this, I put a breakpoint in Relay’s `DataChecker` to pause when a missing field was hit.
The workaround is to set the hidden client field record. It’s kept on the parent object under the `handleKey`. For example, every client handler we write now starts with this preamble:
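A sketch of that preamble (the record type is a pared-down stand-in for Relay’s `RecordProxy`):

```ts
// Hypothetical preamble for a v11 client field handler. Relay's DataChecker
// treats an undefined handle value as a missing field, so we seed it with
// null before doing any real work.
type HandlerRecord = {
  getValue(name: string): unknown
  setValue(value: unknown, name: string): void
}

const initializeHandleField = (record: HandlerRecord, handleKey: string) => {
  // undefined means "missing" to the DataChecker; null means "present but empty"
  if (record.getValue(handleKey) === undefined) {
    record.setValue(null, handleKey)
  }
  // ...handler-specific logic follows
}
```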
By initializing the value from `undefined` to `null`, the record is retained & regarded as available.
Subscriptions & Cached Queries
There’s only one problem with trusting subscriptions to keep all the data fresh: bad internet. If a computer goes to sleep, or a cell phone goes through a tunnel, it’s safe to say the data is stale & should be refetched. Connectivity logic isn’t app specific, so it should live outside the app. In our case, we use a package called Trebuchet to handle connectivity. When the client loses connection with the server, Trebuchet alerts the app that it is disconnected, kills the websocket, & starts a new one. Once it reconnects, it fires reconnect callbacks. In this case, we simply refresh the active queries:
This is SO much more elegant than what we’ve done in the past!
Hooks
It took me a while to understand `usePreloadedQuery`, `useQueryLoader`, and `loadQuery`. These were all new concepts because the `QueryRenderer` is the equivalent of the new `useLazyLoadQuery`. That hook is discouraged because it can lead to waterfall loading just like before. In my experience, it also didn’t lend itself well to the Suspense pattern, so I decided to forgo it entirely & go with `useQueryLoader`.
Since my app previously used `QueryRenderer` extensively, it was already set up to perform lazy loading queries. I created a helper hook that makes `useQueryLoader` operate similarly to `useLazyLoadQuery`:
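A sketch of that helper (not our exact code), which kicks off `loadQuery` on mount so callers just consume the returned query ref:

```ts
import {useEffect} from 'react'
import {useQueryLoader} from 'react-relay'
import type {GraphQLTaggedNode, OperationType} from 'relay-runtime'

// Sketch: wrap useQueryLoader so the query loads as soon as the component
// renders, mimicking useLazyLoadQuery while keeping the preloaded-query API
// open for future optimizations (e.g. loading on hover or at route level).
const useQueryLoaderNow = <TQuery extends OperationType>(
  query: GraphQLTaggedNode,
  variables: TQuery['variables']
) => {
  const [queryRef, loadQuery] = useQueryLoader<TQuery>(query)
  useEffect(() => {
    loadQuery(variables)
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [])
  return queryRef
}
```

Since `queryRef` is null until the effect runs, the consumer renders a fallback for that first frame and then suspends as usual once the load begins.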
As you can see, `loadQuery` gets called immediately when the component renders. While this pattern doesn’t make the data show up any sooner today, it keeps the door open in case I want to do some optimization later down the line. If I had used `useLazyLoadQuery`, those future refactors would be harder.
When I combine this hook with the query refresh hook above, it makes for a great one-liner that guarantees fresh data. The only problem was partial data…
Partial Data
Relay now supports partial data by default, which means a component can render as long as its fragment can be completed from the local cache. This is amazing! The only problem is that it doesn’t play well with `createFragmentContainer`. In other words, if you replace your `QueryRenderer` with `usePreloadedQuery`, any child components that use `createFragmentContainer` will not trigger suspense (as of React v17.0.2 + Relay v11.0.2). For example:
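A sketch of the scenario (names are illustrative):

```tsx
import React from 'react'
import {createFragmentContainer, graphql, usePreloadedQuery} from 'react-relay'

// Parent reads the query; its data may be partial
const Parent = ({queryRef}) => {
  const data = usePreloadedQuery(
    graphql`
      query ParentQuery {
        viewer {
          ...Child_user
        }
      }
    `,
    queryRef
  )
  // Child renders even when its fragment data is missing, because
  // createFragmentContainer does not suspend on partial data
  return <Child user={data.viewer} />
}

const Child = createFragmentContainer(
  ({user}) => <div>{user.preferredName}</div>,
  {
    user: graphql`
      fragment Child_user on User {
        preferredName
      }
    `
  }
)
```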
In the above scenario, the data in `Parent` is partial. `Child` does not have the required data to render, yet it still gets called! If `Child` instead used `useFragment`, it would suspend correctly. However, the same problem would still apply to descendant components. This left me with the following options:
1. Refactor ALL instances of `createFragmentContainer` to `useFragment`
2. Include `...Child_user @relay(mask: false)` in the `Parent` so the `Child` won’t render early (which would also cause `Parent` to subscribe to ALL changes and re-render a bunch)
3. Refactor just `Child` to `useFragment` & pray that it requests a field that is not already cached so it suspends
4. Change the `fetchPolicy` to `network-only` and admit defeat
5. Use `UNSTABLE_renderPolicy: 'full'` with `usePreloadedQuery`
I opted for the 5th option. `renderPolicy` is eventually going away, but it’s still there, and using it here buys me some time so I don’t have to immediately refactor all my `createFragmentContainer` components to `useFragment`.
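In v11 the render policy is passed as an option to `usePreloadedQuery` (query names here are illustrative):

```ts
const data = usePreloadedQuery<MyQuery>(query, queryRef, {
  // Suspend until the full query result is available instead of
  // rendering with partial data
  UNSTABLE_renderPolicy: 'full'
})
```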
Paginated Queries
The final hurdle was migrating to `usePaginationFragment`. The new API for this hook is beautiful in its simplicity; bravo to the team for simplifying what is a ridiculously difficult area! Refetch queries are now generated automatically via the `@refetchable` directive. There were only 2 gotchas during this refactor.
First, pagination only applies to fragments, so I found myself calling `usePreloadedQuery` and `usePaginationFragment` in the same component. It felt weird to have a query & fragment in the same component, but it is otherwise harmless.
Second, the `@refetchable` fragment is on `Query`. Maybe I’m alone, but this was the first time I’ve ever fragmented on the `Query` type. Usually I fragment on `Viewer`, but I couldn’t figure out how to declare my `User` object as using the `Viewer` protocol.
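Putting both gotchas together, a sketch (names illustrative) of a component that reads its query with `usePreloadedQuery` and paginates a fragment declared on `Query`:

```tsx
import React from 'react'
import {graphql, usePaginationFragment, usePreloadedQuery} from 'react-relay'

const TaskList = ({queryRef}) => {
  // Gotcha 1: query & fragment live in the same component
  const queryData = usePreloadedQuery(
    graphql`
      query TaskListQuery($first: Int!, $after: String) {
        ...TaskList_query
      }
    `,
    queryRef
  )
  // Gotcha 2: the @refetchable fragment is declared on Query itself
  const {data, loadNext, hasNext} = usePaginationFragment(
    graphql`
      fragment TaskList_query on Query
      @refetchable(queryName: "TaskListPaginationQuery") {
        viewer {
          tasks(first: $first, after: $after) @connection(key: "TaskList_tasks") {
            edges {
              node {
                id
                content
              }
            }
          }
        }
      }
    `,
    queryData
  )
  return (
    <div>
      {data.viewer.tasks.edges.map(({node}) => (
        <div key={node.id}>{node.content}</div>
      ))}
      {hasNext && <button onClick={() => loadNext(10)}>Load more</button>}
    </div>
  )
}
```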
Entry Points
Entry points allow you to fetch different components based on the data returned. This is a really cool concept, but honestly I don’t use it for 2 reasons.
First, `React.lazy` is good enough. Sure, it requires an extra round trip, but that round trip is for a `.js` file, which comes from our CDN, so it’s extra fast.
Second, and most importantly, we have a Progressive Web App (PWA). That means most of those async chunks are fetched from the CDN via service worker long before they’re used. Sure, the client might not use every chunk, but making the app faster only costs us a few extra gigabytes/month of throughput. At Facebook scale, the cost may be prohibitive. At our scale, it’s literally pennies.
Conclusion
Overall, the initial upgrade took 2 days to complete. The business case to upgrade was the following:
- The old version had old dependencies with known vulns
- Declarative errors & loading states using Data Fetching with Suspense
- Attract new developers with our clean, modern codebase
- The new API is simpler, so it’s easier to train new developers on the new patterns
- Less code (AKA less surface area for bugs) using directives like `@appendEdge`
- Easier to upgrade to a newer version when the next killer feature drops
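For instance, `@appendEdge` lets a mutation append the returned edge to a connection declaratively, replacing a hand-written updater (names below are illustrative):

```ts
import {graphql} from 'react-relay'

// The $connections variable carries the connection IDs (from
// ConnectionHandler.getConnectionID) that the new edge is appended to
const addCommentMutation = graphql`
  mutation AddCommentMutation($input: AddCommentInput!, $connections: [ID!]!) {
    addComment(input: $input) {
      commentEdge @appendEdge(connections: $connections) {
        node {
          id
          content
        }
      }
    }
  }
`
```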
Now that the patterns are in place, we can distribute the work across our team and complete the refactor in the coming months. We won’t explicitly create issues to refactor from `createFragmentContainer` to `useFragment`. However, if one of us is already working on a component that uses the legacy API, we’ll take an extra minute to upgrade to `useFragment`. We call this “in the neighborhood” refactoring. We use it for massive, app-wide refactors such as migrating to TypeScript or Emotion for CSS-in-JS. It’s been a great pattern to ensure that each developer can still ship user value & work on challenging problems.