
The most powerful GraphQL Client for the web in just 2kb

Jens Neuse


I claim that WunderGraph is by far the most powerful GraphQL client for the modern web. Actually it's not just a good client for GraphQL but also for REST APIs, but that's not the point here.

We'll look at other GraphQL client implementations like Apollo, urql and graphql-request and compare them against our client.

Over the last couple of years, we've seen a constant evolution of GraphQL tooling. Developers try to get the maximum out of the constraints they've set for themselves.

WunderGraph breaks with these rules to make room for something new.

This post tries to re-adjust your view on GraphQL. Forget about the status quo for a moment and be open to radical change.

To judge whether a client is good or not, we first have to find solid criteria for evaluating an API client.

How to judge if an API client is any good?

When talking about building for the modern web, I mean frameworks like NextJS, React, Vue, Svelte etc...

So what are the most important capabilities a modern API client should bring to the table?

Here's a list of categories that should be important to you when building modern web applications.

  1. Developer Productivity

    The number one criterion for me is developer productivity. API clients should support developers as much as they can, taking away as much of the heavy lifting as possible. Type safety and easy-to-use APIs are good examples.

  2. Respect the fundamentals of the web

    The web is a very powerful platform to build on. A good API client makes good use of existing infrastructure and APIs. Browsers offer some extremely powerful tools like caching which an API client shouldn't dismiss.

  3. Integrate well with OIDC & OAuth2

    Most if not all applications require some sort of authentication and authorization. A good API client integrates well with both protocols to help developers build login- and authorization flows.

  4. Secure

    This item should actually be at the top of the list. An API client should help developers avoid and combat common threats, e.g. the OWASP Top 10.

  5. Performant & Lightweight

    Last but not least, a client should be as performant as possible without interfering with developer productivity. Page load times matter: for many websites, lower latency means more money earned. An API client should never stand in the way of building high-performance applications.

From the title, you might have thought that WunderGraph makes a compromise and values performance over productivity.

The small size (2 kb gzipped) of the WunderGraph client was not intentional. It's the result of respecting the fundamentals of the web.

Results

Before diving into the different aspects of what makes a good GraphQL client, let's have a quick look at the raw numbers.

This post is accompanied by a repository with a branch for each client. You can check out each branch and run the bundle analyzer yourself.

| Client | gzip | difference (vs. plain NextJS) | Time to Interactive | Largest Contentful Paint |
| --- | --- | --- | --- | --- |
| NextJS (baseline) | 102.99 kb | - | 1.8 s | 1.9 s |
| Apollo | 146.28 kb | +43.29 kb | 2.3 s | 2.8 s |
| graphql-request | 110.62 kb | +7.63 kb | 2.1 s | 2.6 s |
| urql | 119.94 kb | +16.95 kb | 2.2 s | 2.4 s |
| WunderGraph | 105.31 kb | +2.3 kb | 1.9 s | 2.1 s |

Time to Interactive and Largest Contentful Paint were measured using a Chrome Lighthouse test.

The numbers indicate that there's a correlation between JavaScript bundle size and load times. What we'll discover throughout this post is that among the "old-school" GraphQL clients, you get the most bang for the buck with urql, as it has a very good functionality-to-JavaScript-size ratio.

1. Developer productivity

Now let's have a look at how different clients affect productivity.

Project setup

Unsurprisingly, all clients make it very easy to get started. You can easily find documentation to help you with the first steps. If you look at the repository, you'll see that setting up each client is very straightforward.

Apollo, urql and WunderGraph use the provider pattern for dependency injection. graphql-request doesn't require this step, making it the easiest library to get started.

Hello World - your first Query

Apollo, urql and graphql-request are very intuitive when it comes to writing your first Query.

You can define the Query side-by-side with your Components and use one of their hooks to make the first request.

Example using urql:

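A minimal sketch of such a component, using urql's useQuery hook; the todos field and its types are hypothetical:

```tsx
import { useQuery } from 'urql';

// The query lives right next to the component that uses it.
const TodosQuery = `
  query Todos {
    todos {
      id
      title
    }
  }
`;

type TodosData = { todos: { id: string; title: string }[] };

const Todos = () => {
  const [result] = useQuery<TodosData>({ query: TodosQuery });
  const { data, fetching, error } = result;

  if (fetching) return <p>Loading...</p>;
  if (error) return <p>Oh no... {error.message}</p>;

  return (
    <ul>
      {data?.todos.map((todo) => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
};
```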

WunderGraph takes a different route here. You have to write all Queries in one .graphql file, located in the .wundergraph directory. It's also strictly required to give all operations a name. This name will be used to generate a typesafe React hook.

Example using WunderGraph:

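A sketch of what such a named operation could look like; the TodoList query and its fields are hypothetical:

```graphql
# in the operations .graphql file inside the .wundergraph directory
query TodoList {
  todos {
    id
    title
  }
}
```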

TypeScript support

Supporting TypeScript is essential nowadays for a good developer experience. It prevents a lot of bugs and helps developers better understand the code.

Apollo, urql and graphql-request support TypeScript in that you're allowed to attach type definitions to query hooks. How you get those type definitions is up to the developer. You could write them by hand or use an additional code generator. There's a good chance this extra step results in inconsistencies. Ideally, you wouldn't have to add extra dependencies for such a basic requirement.

With WunderGraph, I didn't want to accept the status quo and adopted a pattern from Relay: compiling Queries! WunderGraph ships with a Query Compiler / Code Generator out of the box. You can start it with a single command: wunderctl up

All you have to do is write a Query with a name. You'll get a typesafe client, hooks, type definitions, all without any extra work or adding extra dependencies.
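Assuming a TodoList operation like the sketch above, using the generated hook could look roughly like this; the import path, the response shape and the status values are assumptions based on the naming pattern described in this post, not documented API:

```tsx
import { useQuery } from '../.wundergraph/generated/hooks'; // hypothetical path to the generated code

const TodoList = () => {
  // Generated from the named TodoList operation above; the response is fully typed.
  const { response } = useQuery.TodoList();

  if (response.status === 'loading') return <p>Loading...</p>;
  if (response.status !== 'ok') return <p>Something went wrong</p>;

  return <pre>{JSON.stringify(response.body, null, 2)}</pre>;
};
```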

For me, it's a dream come true. I'm using it myself, e.g. to build the WunderGraph Console and I don't want to go back.

Authentication aware data fetching

It's a very common use case that some API Endpoints (Queries) require the user to be authenticated. Usually, the client is not aware of this, leaving it up to the frontend developer to properly handle situations where a login is required.

Apollo, urql and graphql-request ignore this problem. As a frontend developer, it's on you to know when a user needs to be authenticated. You have to catch the cases where this might happen and present the right UI components to the user.

I think this is not a good developer experience. This should be handled elegantly.

Can you guess what WunderGraph does?

When writing a Query, we know whether it requires authentication, because we can declare that requirement on the Operation itself.

With this information, we instruct the server-side component to validate if the user is authenticated. At the same time, we generate some extra code in the client to "only" fire when the user is authenticated. If they're not, the Query will simply wait.

Here's an example:

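A sketch of what this looks like from the component's perspective, assuming a hypothetical ProtectedTodos operation that requires authentication (the concrete status values and import path are assumptions; the generated ones are typesafe):

```tsx
import { useQuery } from '../.wundergraph/generated/hooks'; // hypothetical generated client

const ProtectedTodos = () => {
  // If the user is not authenticated, the hook waits instead of firing the request.
  const { response } = useQuery.ProtectedTodos();

  if (response.status === 'requires_authentication') {
    return <p>Please log in to see your todos.</p>;
  }
  if (response.status !== 'ok') return <p>Loading...</p>;

  return <pre>{JSON.stringify(response.body, null, 2)}</pre>;
};
```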

The possible values of status are typesafe. The Query hook will wait for the user to authenticate before it starts fetching data. This reduces boilerplate and makes for a much more developer friendly experience.

Subscriptions

graphql-request, being a simple client, doesn't support Subscriptions. urql wants us to do some extra setup to be able to use Subscriptions. A similar step is required when using Apollo.

Guess what you have to do when using WunderGraph? Nothing. Subscriptions in WunderGraph just work. If you write a Subscription Operation, we'll compile it on the server-side component. We set up an Apollo-compliant GraphQL client (on the server) and stream responses to the client using HTTP/2. On the client side, we generate a typesafe client + hook for the operation.
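To illustrate, a subscription is written like any other named operation; the PriceUpdates operation and its fields are hypothetical:

```graphql
# another named operation next to your queries
subscription PriceUpdates {
  priceUpdates {
    symbol
    price
  }
}
```

On the client, you'd consume it through a generated hook following the same naming pattern as the mutations shown later, e.g. something like useSubscription.PriceUpdates().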

I might be repeating myself but we really care about the developer experience. Our mantra is to remove everything that's not relevant to the developer so they can focus on building amazing applications.

2. Respect the fundamentals of the web

A fundamental concept of the web is that every resource needs to have its own unique identifier, a URL or URI. It's important to respect this concept when building APIs for the web.

When we're talking about the elephant in the room, caching, it's important to note that Browsers rely on the concept of the URI.

Browsers bring very powerful tools for efficient data fetching out of the box. However, they can only unleash their full potential if you play by the rules.

The first rule is, you have to use the verb GET for Queries. Sending requests with the verb POST will disable caching.

The second rule is, you should always return an ETag header alongside the response. The Browser will automatically send an If-None-Match header with subsequent requests. If the ETag didn't change, the server can respond with a 304 status code, indicating to the client that the data is still fresh and no body has to be sent.

The third rule is, whenever you can, you should use Cache-Control directives to instruct the client how to cache, invalidate and revalidate content.
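To make these rules concrete, here's an illustrative exchange (the URL and values are made up; the headers are standard HTTP caching semantics). The first response carries an ETag and Cache-Control directives, the revalidation request sends If-None-Match, and the server answers with 304 if nothing changed:

```
GET /operations/TodoList HTTP/1.1

HTTP/1.1 200 OK
ETag: "a1b2c3"
Cache-Control: public, max-age=60

GET /operations/TodoList HTTP/1.1
If-None-Match: "a1b2c3"

HTTP/1.1 304 Not Modified
```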

Out of the box, Apollo, urql and graphql-request disrespect the fundamentals of the web in every possible way. There are no unique URIs, no use of the GET verb for Queries, no ETags, no Cache-Control headers. There's obviously an endless list of packages, extensions, etc. to improve the situation. However, we're looking at functionality that should be part of the core, not an extension.

WunderGraph on the other hand doesn't just respect the web, it understands it.

The WunderGraph client is actually split into two components: a server-side component (the WunderNode) and a client-side component. Running wunderctl up automatically starts the server-side component; wunderctl deploy deploys it to the edge, as close to your users as possible.

All Queries will be compiled to extremely efficient code on the server-side component. Queries always use the GET verb. The server-side component automatically computes ETag headers and evaluates requests with If-None-Match headers. For each Query, Cache-Control headers can be defined. The server-side component will cache data on the edge and add headers so that the client automatically caches content as well.

All this works out of the box. Write a Query, define a MaxAge, done. No extra dependencies, no nothing.
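As a sketch, such a per-operation cache configuration could look roughly like the following; the field names are assumptions rather than documentation, the point is that caching is plain configuration instead of extra client code:

```typescript
// wundergraph.operations.ts (sketch; field names are assumptions)
import { configureWunderGraphOperations } from '@wundergraph/sdk';

export default configureWunderGraphOperations({
  operations: {
    queries: (config) => ({
      ...config,
      caching: {
        enable: true,
        maxAge: 120,              // cache responses for two minutes
        staleWhileRevalidate: 60, // serve stale content while revalidating in the background
        public: true,
      },
    }),
  },
});
```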

Asking developers to install extra packages and components to implement "Automatic Persisted Queries" is a solution, it's just not a developer-friendly one. Things like this should work out of the box.

Normalized Caching

If you're familiar with GraphQL caching, you've probably expected normalized caching.

I've written about this topic before and want to briefly mention it here.

Apollo Client as well as urql have their own ways of implementing a normalized Cache in the client. My personal opinion is that the idea of a normalized cache is fantastic. Some really smart people came up with a super sophisticated solution to a really hard problem.

The browser is not capable of automatically caching GraphQL POST requests. People accepted this and built a solution around the problem: normalize the data and build a local database to deliver a faster and better user experience.

It's a great solution, but it also adds a lot of complexity to the client. The architecture implies that there are two sources of truth: while the backend developer of an API can define state transitions, the persistence layer in the client allows the frontend developer to completely override that behaviour.

So while it's a great solution, it also comes at a cost.

Let me ask you a question. If GraphQL forced developers to always use persisted Queries, if persisted Queries used the GET verb from the beginning, if it were easy from the start to use Cache-Control headers, would we have developed normalized Caching in the first place?

Persisted Queries turn GraphQL into REST-ish APIs. I've never seen anybody talk about a normalized cache for REST APIs.

3. Integrate well with OpenID Connect & OAuth2

Let's talk about authentication.

graphql-request allows you to set headers. That's it.

The Apollo client docs go beyond just setting headers. They also show you how to use cookies. Beyond that, we're on our own.

urql takes authentication seriously. If you look at their docs, you'll see different examples for various flows: initial login, resume, refresh, etc. They've done an excellent job not just implementing all these flows but also documenting them!

One thing to note about urql: the library is extremely flexible and not very opinionated. This gives you, as a library user, a lot of freedom. At the same time, it also gives you a lot of responsibility. If you know what you're doing, it's a great approach.

What about WunderGraph? We've done it again, I'm sorry. We've abstracted the problem away and made authentication super boring. In comparison to urql, we're super opinionated. This gives you less freedom but also less responsibility. It's easier to use and a lot harder to shoot yourself in the foot.

Here's an example of how to configure cookie-based authentication with WunderGraph:

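A sketch of that configuration in wundergraph.config.ts; the provider helper and field names follow WunderGraph's SDK, but treat the details as assumptions here:

```typescript
// wundergraph.config.ts (sketch)
import { authProviders, configureWunderGraphApplication } from '@wundergraph/sdk';

configureWunderGraphApplication({
  // ...APIs, code generation, CORS, etc. omitted
  authentication: {
    cookieBased: {
      providers: [
        authProviders.github({
          id: 'github',
          clientId: process.env.GITHUB_CLIENT_ID!,
          clientSecret: process.env.GITHUB_CLIENT_SECRET!,
        }),
      ],
      authorizedRedirectUris: ['http://localhost:3000'],
    },
  },
});
```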

The above configuration generates the following hooks:

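Roughly, the generated code gives you the authentication state plus login and logout helpers; the exact names and import path below are a sketch based on what's described here:

```tsx
import { useWunderGraph, AuthProviders } from '../.wundergraph/generated/hooks'; // hypothetical generated module

const Profile = () => {
  const { user, login, logout } = useWunderGraph();

  if (!user) {
    // Redirects to the OIDC provider configured on the server-side component.
    return <button onClick={() => login(AuthProviders.github)}>Login with GitHub</button>;
  }

  return (
    <div>
      <p>Logged in as {user.email}</p>
      <button onClick={() => logout()}>Logout</button>
    </div>
  );
};
```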

The user object contains some info (Claims) about the user if they're logged in. If the user re-focuses the browser tab, we'll re-evaluate whether they logged out in another tab.

Claims are name-value pairs containing information about the user, e.g. "email" or "name".

On the server-side component, we'll configure an OpenID Connect client so that your users can authenticate with GitHub or Google (or any other OIDC-compliant auth provider).

But that's not all there is. Imagine a user is logged in and you want to use one of their claims (email) as a variable in one of your Queries. Because WunderGraph is split into a server-side and a client-side component, we're able to accomplish this very easily.

Let's look at an example:

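A sketch of such an operation; the @fromClaim directive is the mechanism described below, while the concrete mutation and its fields are hypothetical:

```graphql
mutation CreateUser(
  $name: String!
  $email: String! @fromClaim(name: EMAIL)
) {
  createUser(name: $name, email: $email) {
    id
    name
    email
  }
}
```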
  1. This generates an endpoint /operations/CreateUser on the server-side component.
  2. The client-side component generates a hook useMutation.CreateUser()
  3. Because we're using the fromClaim directive, the Operation requires Authentication
  4. Additionally, the client side generated hook will not accept a variable for the $email value, it only accepts $name and omits $email completely.
  5. The server-side component will validate that the user is authenticated and "injects" their Claim email into the Operation

For an end-user, there's no way to circumvent this. Either they're logged in and we inject their claim, or they're not and we deny access to the Operation.

If you felt the need to write a backend for such functionality, you might want to reconsider your options.

4. Secure

Authentication

All libraries suggest cookies as an option to implement authentication. Cookies are a very convenient way for SPAs to securely store data about the user, as they push the complexity to the server.

What the documentation of all these libraries lacks is a discussion of the security implications of cookie-based authentication.

Cookies need to be encrypted and http-only. You have to properly configure Same-Site settings. You also have to be aware of CSRF and how to prevent it. And finally, you want to make sure to properly configure CORS, don't you?
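For reference, a session cookie that follows these recommendations is set with the standard attributes, and CORS for credentialed requests has to name the allowed origin explicitly (illustrative values):

```
Set-Cookie: session=<encrypted payload>; Path=/; HttpOnly; Secure; SameSite=Strict

Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Credentials: true
```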

You can't blame any of the clients for not mentioning the topic, as most of the responsibility lies with the server. Still, wouldn't it be nice if we didn't have to deal with any of the problems listed above?

At least that's what I thought, so WunderGraph comes with some very convenient defaults. The server-side component enforces http-only cookies. Cookies are always encrypted. Same-Site settings are configured as strictly as possible. CORS can be easily configured by you.

Most importantly, all mutations are CSRF protected by default! Both the client- and server-side component are implemented to protect against the threat.

Secure GraphQL

I really hope you're aware of the risks involved in exposing a GraphQL API on the web.

To the folks of the "security-by-obscurity" camp: if you use a GraphQL API on a public website or in an iOS or Android app, it's very easy for people to figure this out and try to attack your service. Your API is public and vulnerable as soon as it has a public IP address.

The worst thing I've ever seen was someone using PostGraphile (an amazing library) without any security middleware in front of it. The developer thought it was secure because users in his app could only "see" their own content. But the API was obviously public, and with direct API access you could get full access to the whole database.

Apollo Client, graphql-request and urql all rely on a publicly exposed GraphQL API. I've already written about why I think GraphQL is not meant to be exposed over the internet.

That said, it's not a problem per se, you should just be aware of the risks.

If you run a GraphQL server with a public IP, there are a few problems you could run into:

  • any client can write whatever Query they want
  • Queries can be of arbitrary complexity
  • super complex Queries can overwhelm your service, the database, etc...
  • in extreme cases, this can result in an expensive AWS bill
  • attackers could traverse your Graph in unexpected ways, which leads to leaking data
  • if you're exposing PII (personally identifiable information) via GraphQL, you should pay extra attention

There are different ways of solving these problems. You can implement a middleware to compute the complexity of a GraphQL Query and deny Queries above a certain threshold. The problem is that many such solutions are not polyglot. They need to be re-implemented for each language and framework, which hinders adoption.
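To illustrate the kind of Query such a middleware has to catch: with a cyclic schema (think users and their friends, hypothetical here), a client can nest the same relation arbitrarily deep and fan the load out exponentially:

```graphql
query Abusive {
  users {
    friends {
      friends {
        friends {
          friends {
            id
          }
        }
      }
    }
  }
}
```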

But there's also a super simple and boring solution to the problem. If you don't expose GraphQL, you don't expose GraphQL.

WunderGraph allows you to hide your GraphQL API from the public. The WunderGraph client talks via JSON-RPC to the server-side component. All Queries are defined during development and compiled before runtime. This pattern reduces the attack surface of your GraphQL API to the absolute minimum.

The server-side component can securely talk to your origin GraphQL server, e.g. using JWT signed requests, with secrets only known on the back-channel.
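Put differently, the only thing reachable from the outside is a fixed set of named operations, fetched via JSON-RPC over HTTP. The exact parameter name below is an assumption, but the shape is roughly this, and there is simply no public /graphql endpoint for arbitrary queries to hit:

```
GET /operations/TodoList?wg_variables=%7B%7D HTTP/1.1
Host: wundernode.example.com
```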

5. Performant and Lightweight

We've looked at the numbers already.

With graphql-request, you get a good entry level GraphQL client that's also the size of an entry level client. It's a good choice for simpler use cases.

urql comes with a lot more functionality than graphql-request. It's more than double the size, but you'll get a lot more functionality.

I'd say that both graphql-request and urql have a very good functionality to JavaScript size ratio.

Apollo Client on the other hand seems to offer less functionality than urql while being almost three times the size of it. This might not be accurate as I only used the documentation in my analysis. At the same time, if it's not documented, it doesn't exist.

Now let's have a look at the WunderGraph client. The filesize is just ridiculous. That's because it's generated and has no dependencies. Compiled queries on the server-side component add almost no latency. The generated RPC client in the client-side component is not just very slim, it's also a lot more performant than other clients which you can see from the benchmarks.

Inspecting the dependencies of each framework, you'll see that Apollo, graphql-request and urql rely on the graphql package. It takes up a significant chunk of space. I wonder why the package gets compiled into the clients; with the right architecture, there shouldn't be a need to use it at runtime.

Maybe it's a good idea for urql to add some kind of "Compiler-Mode", similar to Relay, where they strip out some of the code with a compile time step.

What about Relay?

There are two camps here. On the one side, we have developers who don't want to use anything but Relay. It's a small group of people; I respect them and understand the value they get out of Relay.

On the other side, there's a large group of people who just don't get it. Relay does some amazing things with Fragments that really blow your mind once you understand the concept.

Clients like WunderGraph, Apollo, urql and graphql-request fetch data at the top of the component tree. You then have to drill into the data and pass it on to the child components.

Relay in contrast allows you to use Fragments to define the data dependencies on child components. This way, there's no tight coupling between the data requirements of child components, and the parent components who actually fetch the data.

To me, what Relay does is the holy grail of data fetching for component based UI frameworks. However, I generally believe that Relay is too complicated and leaves too much work to the user. Additionally, advancements like HTTP/2 streaming make some features of Relay obsolete. E.g. the need for @defer is less relevant when multiple requests have almost no overhead.

WunderGraph will adopt some of the features of Relay in the future. We'll try to re-interpret the framework in a more user-friendly way to give developers the power of Relay without the complexity.

You might be asking why Relay was not included in the comparison. I wanted to use NextJS for the comparison, and the Relay docs indicate that there's some complexity involved in setting it up. My top criterion is developer experience, and I guess most people don't want to learn how to set up Babel plugins etc. with NextJS just to get started.

On Open Source

Some people misunderstand WunderGraph as being a closed source SaaS. Some parts of our stack are closed-source for good reason. It's a requirement for us to build a sustainable business model.

At the same time, 99% of our hyperfast GraphQL engine is open source.

graphql-go-tools is the most mature low-level GraphQL implementation written in Go. It's used by many companies in production and is well maintained. It's not a GraphQL server though; it's a toolkit to build GraphQL servers. If you're looking for a library to build GraphQL proxies and the like, it's a very good fit!

When choosing open source technologies, it's important to have a critical look at some metrics. Popularity (GitHub stars) is not always a good indicator of maturity.

Compare the number of open issues and pull requests of Apollo Client and urql, and make up your own mind about how these numbers should look for a well-maintained library.

It's worth mentioning how well the urql client is documented. It seems they really want to help you get the most out of their client.

I think it's actually very hard to build a successful business on top of a large open source stack. People have to be paid to maintain the open source projects. VCs want to see revenue eventually, so you're forced to focus on enterprise features. The open source community helped you grow adoption for your business. Now you cannot keep up anymore with the influx of requests.

Apollo were the ones who had to rebuild their client multiple times. We're now able to build tools from scratch that take into account all the learnings from their years of hard work.

With WunderGraph, I'm trying to find a healthy balance between open source and sustainable business.

Conclusion

When I was younger, I always wanted to use open source tools everywhere. It was simply unacceptable for me to use cloud services. I always wanted to run my services on-prem. I even ran my Postgres database on my own virtual machine and figured out backups etc.

What I didn't realize back then were two things. My home-grown solutions, like self-hosting a Postgres database, were very immature compared to other services. And I usually fell into the trap of not solving business problems. It was just more exciting to me to figure out how to "tune" a Postgres database than to get users. I could have bought a ready-to-use database service, but that would have meant focusing on getting users or solving real use cases. Instead, I just wanted to tinker with technologies.

As I've gotten older, my focus has shifted more towards solving real user problems. I'm also tired of solving problems that have nothing to do with my core business. I'm super happy now to pay for services, as I value my own time a lot more.

If you're the open-source tinkerer, I hope I gave you some inspiration for how I solve problems with WunderGraph. Some of the problems described above might be new to you. You could take inspiration from the patterns we use and implement them in your own stack.

If you're more like the older me, the business problem solver type of person, you should give WunderGraph a try.

The quickest way to see all of this in action is to spin up our WunderGraph NextJS demo in your terminal and play around.