
The Fusion of GraphQL, REST, JSON-Schema and HTTP2

Jens Neuse


This post is about the Fusion of GraphQL, REST, JSON-Schema and HTTP2. I'd like to convince you that you don't have to choose between GraphQL and REST. Instead, I'll propose a solution that gives you the best of all of them.

There have been endless discussions around the topic of REST vs GraphQL. The reality is, both are great, but if you choose either side, you'll realize that it's a tradeoff.

You could go down that rabbit hole and make a tough decision for your business to choose between the different API styles. But why choose if you don't have to? Why not take the best parts of each API style and combine them?

We'll start the discussion by looking at common misconceptions and at the two opposing camps. Then, we'll identify the strengths and weaknesses of the two approaches. Finally, we'll look into a solution that combines REST and GraphQL, with a sprinkle of JSON-Schema and the benefits of HTTP/2.

Imagine being able to combine the power and HTTP compatibility of REST with the most popular query language. You'll realize that you're missing out on a lot of potential if you stick to either side. You don't have to choose between the two, though. All you have to do is rethink your model of APIs.

Put aside your beliefs for a moment. Try to read without judging immediately. You'll see that we can make GraphQL RESTful, and it's going to be great!

Let's get started!

The two camps and why it's so hard for them to work together

Throughout the last couple of years, I've had the chance to talk to numerous API practitioners, from freelancers to developers at small and medium-sized companies, as well as at very large enterprises.

What I've learned is that we can usually put people in one of two camps.

The first group is people who breathe REST APIs. They usually have very strong opinions on API design, they know very well what a REST API is, and they know what the advantages are. They are well versed in tools like the OpenAPI Specification. They've probably read Roy Fielding's dissertation on REST and know something about the Richardson Maturity Model.

This first group also has a weakness. They are way too confident. When you start discussing GraphQL with people from this group, you'll get a lot of pushback. A lot of the time, they have very good reasons to push back, but then again, they usually lack the ability to listen.

Their solution is a REST API. It's almost impossible to convince them to try something new.

On the other side of the fence, there's the group of GraphQL enthusiasts. Most of them praise GraphQL way too hard. If you look at their arguments, it's clear that they are lacking basic knowledge of APIs. This group is a lot younger than the first one, which makes it understandable that they are less experienced. They will often praise features of GraphQL as an advantage over REST, when in reality, their REST API design was just not optimized. There's almost nothing in GraphQL that you couldn't solve with a good REST API design. If the second group acknowledged this, their lives could become a lot easier.

Aside from these two major groups there are also two smaller niche clusters.

One is a group of extremely experienced API enthusiasts. Their main focus is REST APIs, but they are open to other API styles. They understand that different API styles serve different purposes. For that reason, you can convince them to use GraphQL in some cases.

The second niche group is the more experienced GraphQL users. They've made it through the initial hype cycle and realized that GraphQL is no silver bullet. They understand the advantages of the query language, but also see the challenges of using it. There are a lot of challenges to be solved around security and performance, as I wrote in another blog post.

If you look at Facebook and early adopters of GraphQL, like Medium, Twitter and Netflix, you'll realize that GraphQL is not meant to be exposed over the internet. Yet, the majority of people in the GraphQL community build open source tools that do exactly this. These frameworks expose GraphQL directly to the client, neglecting all the hard work that has been put into defining crucial specifications of the internet, HTTP and REST.

What this leads to is that the work we've been doing for years on making the web scale needs to be thrown in the bin and rewritten to be compatible with GraphQL. This is a massive waste of time and resources. Why build all these tools that ignore the existence of REST when we could just build on top of it and leverage existing solutions?

But in order to understand this, we first have to talk about what RESTful actually means.

What does it mean when an API is RESTful?

Let's have a look at Roy Fielding's dissertation and the Richardson Maturity Model to better understand what RESTful means.

In a nutshell, a RESTful API is able to leverage the existing infrastructure of the web as efficiently as possible.

REST is NOT an API specification, it's an architectural style, a set of constraints. If you adhere to these constraints, you'll make your API compatible with what already exists on the web. RESTful APIs can leverage CDNs, Proxies, standardized web services and frameworks, as well as Browsers. At the same time, it's not really clear whether you should follow all constraints or which ones are the most important. Additionally, no two REST APIs look alike, as the constraints leave a lot of room for interpretation.

First, let's analyze Fielding's dissertation:

Client-Server

The first constraint is about dividing an application into client and server to separate the concerns.

Stateless

Communication between client and server should be stateless. That is, each request from the client to the server contains all the information required for the server to process the request.

Cache

Responses from the server to the client should be cacheable on the client side to increase performance. Servers should send caching metadata to the client so that the client understands whether a response can be cached, for how long it can be cached, and when it should be invalidated.

Uniform Interface

Both clients and servers should be able to talk over a uniform interface. Implementations on both sides can be language- and framework-agnostic. By relying only on the interface, client and server implementations can talk to each other even if implemented in different languages.

This is by far one of the most important constraints that make the web work.

Layered System

It should be possible to build multiple layers of systems that complement one another. E.g., there should be a way to add a Cache Server in front of an application server. Middleware systems, like API Gateways, could be put in front of an application server to enhance the application's capabilities, e.g. by adding authentication.

Code-On-Demand

We should be able to download more code at runtime to extend the client and add new functionality.

Next, let's have a look at the Richardson Maturity Model. This model defines four levels, from zero to three, which indicate the maturity of a REST API.

Why REST constraints matter

Why do these constraints matter so much?

The Web is built on top of REST. If you ignore it, you ignore the Web.

Most of the standardized components of the web acknowledge HTTP and REST as a standard. These components are implemented in ways to make them compatible with existing RFCs. Everything relies on these standards.

CDN services, Proxies, Browsers, Application Servers, Frameworks, etc... All of them adhere to the standards of the Web.

Here's one simple example. If a client is sending a POST request, most if not all components of the web understand that this operation wants to make a change. For that reason, it is generally accepted that no component of the web will cache this request. In contrast, GET requests indicate that a client wants to read some information. Based on the Cache-Control Headers of the response, any intermediary, like a Proxy, as well as a Browser or Android client is able to use standardized caching mechanisms to cache the response.
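To make this concrete, here's a minimal sketch (the URL and header values are illustrative) of what any standards-compliant HTTP client sees when it reads a cacheable resource:

```typescript
// Minimal sketch: every standard intermediary (browser, proxy, CDN) understands
// these caching semantics without any custom tooling.
async function readUser(): Promise<void> {
  const res = await fetch("https://example.com/users/1"); // GET: read-only, cacheable
  console.log(res.headers.get("Cache-Control")); // e.g. "public, max-age=60"
  console.log(res.headers.get("ETag")); // e.g. '"33a64df5"' - enables revalidation
}
```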

So, if you stick to these constraints, you're making yourself compatible with the web. If you don't, you'll have to re-invent a lot of tooling to fix the gaps that you've just created.

We'll talk about this topic later, but in a nutshell, this is one of the biggest problems of GraphQL. Ignoring the majority of RFCs by the IETF leads to a massive tooling gap.

Richardson Maturity Model: Level 0 - RPC over HTTP

Level 0 means, a client sends remote procedure calls (RPC) to the server using HTTP.

Richardson Maturity Model: Level 1 - Resources

Level 1 introduces Resources. So, instead of sending any type of RPC and completely ignoring the URL, we're now specifying Resources using a URL schema.

E.g. the Resource users could be defined as the URL example.com/users. So, if you want to work with user objects, use this URL.

Richardson Maturity Model: Level 2 - HTTP Verbs

Level 2 adds the use of HTTP Verbs. E.g., if you want to add a user, you would send a POST request to /users. If you want to retrieve a user, you could do so by sending a GET request to /users/1, with 1 being the user ID. Deleting a user could be implemented by sending a DELETE request to /users/1.

Level 2 of the RMM makes a lot of sense for most APIs. It gives REST APIs a nice structure and allows them to properly leverage the existing infrastructure of the web.

Richardson Maturity Model: Level 3 - Hypermedia Controls

Level 3 is the one that usually confuses beginners the most. At the same time, Hypermedia Controls are extremely powerful because they can guide the API consumer through a journey.

Here's a simple example of how they work. Imagine you're making a REST API call to book a ticket for an event. You'll get a response back from the API that tells you the ticket is booked, awesome! That's not all though: the response also contains additional "Hypermedia Controls" that tell you about possible next steps. One possible next step could be that you might want to cancel the ticket because you chose the wrong one. In this case, the response for the booked ticket could contain a link that lets you cancel the booking. This way, the client doesn't have to figure out by itself what to do next; the response contains all the information so that the client is able to continue the "API journey".
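As an illustration, a hypothetical booking response with such controls might look like this (the link format is made up for the example; as discussed next, there is no single standard):

```json
{
  "ticket": {
    "id": "abc123",
    "event": "ExampleConf",
    "status": "booked"
  },
  "_links": {
    "cancel": {
      "href": "/tickets/abc123/cancel",
      "method": "POST"
    }
  }
}
```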

This sounds like a really nice API consumer experience, right? Well, not really. Hypermedia Controls have an issue. By definition, there's no specification of what exactly these controls are. A response could contain any kind of controls without a client knowing what exactly to expect.

If both client and server are owned by exactly the same people, this pattern could work extremely well. If you add new hypermedia controls to an API response, you can add new code to your client that automatically handles these controls. What if the people who provide the API are not the ones who consume it? How do you communicate these changes? Wouldn't you need a specification for the controls? If you specify the controls, how is it then compatible with the idea that each API response can return whatever Hypermedia controls it wants? It's not, and that's why we don't see many Hypermedia APIs.

As I said before, Level 3 is extremely powerful. At the same time, it's hard to understand and even more complex to get right, which is the biggest reason why most people don't even try.

The majority of API practitioners stick to Level 2. Good URL design, combined with the use of HTTP Verbs, ideally with an OpenAPI definition, gets you very far!

Let's recap this section so that we can use the essential takeaways and move forward to analyze GraphQL.

  1. REST is not a specification, it's a set of constraints
  2. Ignoring REST means ignoring the existing infrastructure of the web
  3. At the same time, you'll have to build a lot of new tools to fix the gaps
  4. Not being RESTful means not being compatible with the web

Alright, now that we've got a common understanding of what REST is really about, let's analyze how RESTful GraphQL is.

Once we've done that, we'll look into ways of improving it.

How RESTful is GraphQL?

GraphQL and the Client Server Model

GraphQL, by definition, divides the implementation into client and server. You have a GraphQL server that implements a GraphQL Schema. On the other side, GraphQL clients can talk to the server using HTTP.

So, yes, GraphQL embraces the client-server model.

Is GraphQL Stateless?

This one is going to be a bit more complex. So, let's quickly recap what stateless means.

This constraint says that each client request contains all the information required by the server to be able to process the request. No Sessions, no "stateful" data on the server, no nothing. Just this one single request and the server is able to return a response.

GraphQL Operations can be divided into three categories. Queries, Mutations and Subscriptions.

For those who don't know too much about GraphQL: Queries let clients ask for data, Mutations let clients mutate data, and Subscriptions allow clients to get notified when something specific changes.

If you're sending Queries and Mutations over HTTP, these requests are stateless. Send along a cookie or authentication token and the server can process the request and reply with a response.

The issue arises from Subscriptions, and the way most implementations deal with them. Most GraphQL implementations use a standard defined by Apollo to implement Subscriptions over WebSockets. This standard is an absolute nightmare because it will be responsible for technical debt for many more years to come. I'm not blaming the authors. I think it's a good first attempt, and I could have probably come up with a similar solution. That said, I think it's time to revisit the topic and clean up the technical debt before it's too late.

What's the problem with WebSockets? Wrong question, sorry! What are THE problems with WebSockets?

If a client wants to initiate a WebSocket connection, they start by making an HTTP Upgrade request, to which the server has to reply that the protocol change (from HTTP to WebSocket) was accepted. Once that happens, it's a plain TCP socket with some extras, like frames, etc. The user can then define their own protocols to send data back and forth between client and server.

The first problem has to do with the WebSocket API in browsers. More specifically, it's not possible to specify Headers for the Upgrade request. If your authentication method is to send an Authorization Header with a Bearer Token, you're out of luck with WebSockets.
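Here's a minimal sketch of the limitation in the browser:

```typescript
// The WebSocket constructor only accepts a URL and optional subprotocols;
// there is no parameter (and no method) for setting custom request headers.
const ws = new WebSocket("wss://api.example.com/graphql", ["graphql-ws"]);
// Something like ws.setRequestHeader("Authorization", "Bearer ...") does not
// exist - the Authorization header cannot be attached to the Upgrade request.
```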

What are the alternatives?

You could let the client make a login request first and set a cookie. Then, this cookie would be sent alongside the Upgrade request. This could be a solution, but it's not ideal, as it adds complexity and makes the request non-stateless, since we're depending on a preceding request.

Another solution would be to put the token in the URL as a Query Parameter. In this case, we're risking that some intermediary or middleware accidentally (or intentionally) logs the URL. From a security point of view, this solution should be avoided.

Most users of WebSockets therefore took another route to solve the problem. They've implemented some custom protocol on top of WebSockets. This means client and server use specific messages to authenticate the client. From a security standpoint, this is OK, but it adds significant complexity to your application. At the same time, this approach essentially re-implements parts of HTTP over WebSockets. I would always avoid re-inventing wheels. Finally, this approach is also non-stateless. First, you initiate the socket, then you negotiate a custom protocol between client and server, and send custom messages to authenticate the user, to then be able to start a GraphQL Subscription.

The next issue is about the capabilities of WebSockets and the misfit for GraphQL Subscriptions. The flow of a GraphQL Subscription goes like this: The client sends a Subscription Operation to the server. The server validates it and starts executing it. Once new data is available on the server, it'll be sent to the client. I hope it's obvious but happy to make it very explicit: GraphQL has no requirements for bidirectional communication. With that in mind, WebSockets allow the client to send data to the server all the time. This means, a malicious client could spam the server with garbage messages. If you wanted to solve this problem, you'd have to look into every message and block misbehaving clients. Wouldn't it be better if you just don't have to deal with the problem at all?

It's four issues already, and we haven't even started talking about the GraphQL over WebSockets specification.

I know, we've talked a lot about non-GraphQL-related problems, but the main topic of this section is about the client-server communication being stateless.

So, if we look at the GraphQL over WebSockets protocol again, we'll see that it's anything but stateless. First, the client has to send an init message, then it can send start and stop messages to manage multiple subscriptions. So, the whole purpose of this specification is to manually multiplex multiple Subscriptions over one single WebSocket connection. I wrote about this topic a while ago if it is of special interest to you. If we break this down a bit, we've got all the issues related to WebSockets outlined above, plus a spec to multiplex many subscriptions over a single TCP connection in userspace. By userspace, I mean that this multiplexing code must be implemented by both the client and the server.
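Simplified, the message flow of that protocol looks roughly like this (payloads shortened; the relevant message types are connection_init, start and stop):

```json
{ "type": "connection_init", "payload": { "authToken": "<token>" } }
{ "type": "start", "id": "1", "payload": { "query": "subscription { newMessages { text } }" } }
{ "type": "stop", "id": "1" }
```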

I'm pretty sure you've heard about HTTP/2 and HTTP/3. H2 can multiplex multiple Streams out of the box without all the issues described in this paragraph. H3 will improve the situation even further as it eliminates the problem of individual requests blocking each other. We'll come back to this later when talking about the solution. In any case, avoid WebSockets if you can. It's an old HTTP/1.1-era specification, there haven't been any attempts to improve it, and H2 makes it obsolete.

To sum up the section on statelessness: if all you do is send Queries and Mutations over HTTP, we could call it stateless. If you add Subscriptions over WebSockets, it's not stateless anymore.

Think about what happens if the user authenticates, then starts the WebSocket connection, then logs out again, and logs in with another account while the WebSocket connection is still alive because you forgot to close it. From the server side perspective, what is the identity of the user that is starting a Subscription over this WebSocket connection? Is it the first user who is already logged out? This shouldn't be.

Does GraphQL conform to the Caching constraint of REST APIs?

This is going to be the most fun item to talk about. At first, we will think that the answer is NO. Then, we'll realize that the answer should actually be YES. Unfortunately, at the very end we'll see that instead, the answer will be NO, GraphQL does not conform to the Caching constraint, though this is only visible if you properly read the spec.

Ok, let's start with the first NO. At first glance, you cannot cache GraphQL requests. The answer is very simple. GraphQL Operations can be sent using GET requests. However, most of the time, implementations use the HTTP Verb POST. There's even a specification to standardize GraphQL over HTTP.

The second case is easy to dismiss. POST requests cannot be cached by browsers and intermediaries. This is because of the general assumption that POST requests mutate state. Every component of the web understands and respects this. Caching POST requests would mean that the web would actually break. Want to buy a ticket? Sure, here's the cached response of someone else who just bought a ticket for the same show. Nope, this doesn't make sense; not cacheable.

What about the GET request? GraphQL Operations can be large. If we take the Operation plus the variables, which, by the way, need to be presented as a URL-encoded JSON string in the URL, we might get an insanely long string. The maximum length of a URL should not be more than 2000 characters. If you take into consideration that URL-encoding a GraphQL Operation and the JSON variables can be quite "wordy", those 2000 characters might become a problem.

Here's an example from the GraphQL over HTTP spec:

```graphql
query ($id: ID!) {
  user(id: $id) {
    name
  }
}
```

...and the variables:

```json
{
  "id": "QVBJcy5ndXJ1"
}
```

This Query results in a URL length of 132. Keep in mind that we're querying just a user with a name.

```
http://example.com/graphql?query=query(%24id%3A%20ID!)%7Buser(id%3A%24id)%7Bname%7D%7D&variables=%7B%22id%22%3A%22QVBJcy5ndXJ1%22%7D
```

Did I mention that, according to the GraphQL specification, whitespace has no semantic meaning in GraphQL Operations? Two Queries, same semantic meaning, different use of whitespace, cache miss. Oops.
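To illustrate: both of the following operations have the same semantic meaning, but their URL-encoded forms differ, so an HTTP cache treats them as two different resources:

```graphql
{user(id:1){name}}

{
  user(id: 1) {
    name
  }
}
```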

Ok, this was the first NO. Let's have a look at the possible YES.

It's a myth that GraphQL cannot be cached, right? Clients like Apollo Client or urql support powerful caching out of the box. If you look at their documentation, you'll see that caching is a great concern for them. They've implemented a mechanism called "normalized caching" which normalizes the data received from network requests and builds a local database of normalized data. If you ask for the same type of data but using a different Query, there's a good chance that this Query can be resolved locally by looking the data up in the normalized cache. So, even though we're sending POST requests over HTTP, GraphQL is still cacheable. Myth busted! Right?

Well, not so fast! Let's revisit the dissertation on REST to see what Roy actually meant in the section on Caching. It says that the server should send Cache-Control headers to the client to indicate whether a response can be cached, for how long, etc. This makes a lot of sense to me. It should be the server who defines the rules of caching, shouldn't it? There should only be one single source of truth at any time. If the client comes up with its own rules on how and when to cache data, we're actually getting into trouble, because it might no longer be clear whether the data is valid once the client makes up its own rules.

So, from a technical point of view, normalized caches make sense. But if there are no Cache-Control Headers involved in building the cache, we're creating more problems than we solve.

This leads to the question of whether we can add Cache-Control Headers to GraphQL responses. To me, this sounds almost impossible to do. For every Node in the response, you'd have to compute whether it can be cached, for how long, etc. This doesn't sound like it's heading in the right direction.

That was the second NO. Normalized Caching is not a solution to me. Who wants a second source of truth in the client, with cache control configurations all across the application?

Does GraphQL conform to the Uniform Interface REST constraint?

This is an easy one. It doesn't matter if the client is written in TypeScript or GO. It doesn't matter if the server is written in Ruby or Python. If everybody is conforming to the GraphQL specification, we're fine working together.

Take the same GraphQL Schema, replace the existing implementation in NodeJS with Java and no client would notice.

Does GraphQL allow us to build a Layered System?

You could easily put a Proxy or API Gateway in front of your GraphQL API. Although most of them don't understand the GraphQL payload, it's still possible and could be valuable to build a layered system.

GraphQL is using HTTP, at least for Queries and Mutations, so any Middleware that understands HTTP can be used in a layered system.

That said, due to the problems described in the caching section, it's not really possible to add a Cache in front of your GraphQL API.

There are services out there that parse GraphQL Queries on the edge and build a cache close to your users. At first, it sounds like a great idea to solve the problem this way. Combined with invalidation APIs, it could be possible to build a powerful caching solution for GraphQL. However, these tools are completely missing the point. This approach is similar to a normalized client cache, just on the edge instead of in the browser. The result? Not just a second source of truth, but also a proprietary system that locks you in. Why not just make GraphQL RESTful and use a standardized CDN that doesn't lock you into a specific implementation? If you apply custom invalidation logic within a CDN, isn't that CDN becoming the source of truth? Shouldn't it be the server that defines the invalidation rules?

So, in general it's possible to use GraphQL in a layered system. At the same time, due to the misuse of HTTP Verbs and lack of Cache-Control Headers, the functionality you'll get out of this layered approach might be limited.

Does GraphQL make use of the Code-On-Demand constraint?

Well, loading code at runtime is not really a concern of GraphQL. Tools like NextJS automatically load more code at runtime, based on the routes you visit. As GraphQL is not really a Hypermedia API, it doesn't make sense for it to load code at runtime to extend the client. The client needs to be built at compile time; it needs to know everything about the Schema. Changing the Schema at runtime and having the client download more code to stay compatible with the Schema is not really the way you'd work with GraphQL. It's also quite common that GraphQL Client and Server are completely separate applications. The answer therefore is NO, GraphQL doesn't make use of loading code on demand.

Next, let's look at the Richardson Maturity Model to see which level GraphQL can achieve.

Does GraphQL implement the Richardson Maturity Model Level 0 - RPC over HTTP?

To recap, RMM Level 0 was about using RPC over HTTP. Interestingly, HTTP is never mentioned in the GraphQL specification. That is because the spec is only about the Query Language itself. Follow the link to the spec and search for HTTP; you'll see that there's no mention that HTTP must be used. It describes how the schema works, how clients can define Operations and how the execution should work. GraphQL by itself is protocol agnostic.

If we want to take the spec word by word, GraphQL wouldn't even be Level 0. However, most if not all implementations do GraphQL over HTTP and as mentioned earlier, there's also a dedicated specification by the GraphQL foundation. With these facts in mind, I think it's fair to say that GraphQL achieves Level 0.

I'm actually on the fence when it comes to the GraphQL over HTTP specification. On the one hand, it's great to have a specification that standardizes how GraphQL clients and servers should be implemented. On the other hand, I believe that GraphQL over HTTP is the wrong direction. This spec, built by the GraphQL foundation, will make developers believe that it's OK to do GraphQL like this. I disagree with this, and I'm not the only one. We'll later come to a prominent quote supporting my point of view.

Next up, let's look at Level 1.

Does GraphQL conform to the Richardson Maturity Model Level 1 - URL-based Resources?

In theory, GraphQL does use Resources. The rich Type System allows developers to define Object Types, Interfaces, Enums and Unions. REST APIs in general don't enforce a Type System. You can implement a Type System, e.g. through the use of OpenAPI (formerly Swagger), but this is optional. With GraphQL, there's no way around defining the Types. Thanks to the Type System of GraphQL, it's possible to implement a lot of useful features. Introspection is one of them, allowing clients to "introspect" the GraphQL server to understand its capabilities. By using Introspection, tools can generate complete clients and SDKs which allow developers to easily use GraphQL.
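For example, a minimal introspection query lets any client discover the server's types:

```graphql
{
  __schema {
    types {
      name
    }
  }
}
```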

From a REST point of view however, GraphQL does not have Resources. That is because the Types are not bound to unique URL paths. All Operations go to the same Endpoint, usually /graphql. While Developers can easily understand the difference between a User type and a Post type, proxies, caches, browsers, etc... are not able to distinguish the two. That's because they would have to look into the GraphQL Operation to understand the difference.

OK, GraphQL doesn't implement Level 1 of the RMM model. Let's have a look at Level 2.

Does GraphQL conform to the Richardson Maturity Model Level 2 - proper use of HTTP Verbs?

Again, there's no mention of HTTP in the GraphQL spec, so the immediate answer would be NO, but we're just assuming the GraphQL over HTTP spec to be the standard.

The spec says that it's OK to send Queries using GET. Mutations are forbidden to be sent via GET. Imagine what would happen if that was allowed.

Additionally, it's also allowed to send Queries and Mutations via POST.

We've previously spoken about the issues with sending GraphQL Operations via GET Requests and the URL length limit. Also, sending GraphQL Requests over POST seems to be the norm for most clients.

If we take all this into consideration, I'd say that GraphQL does not achieve Level 2.

You might already be able to guess the answer, but let's quickly visit level 3 as well.

Does GraphQL conform to the Richardson Maturity Model Level 3 - Hypermedia Controls?

The short answer is NO, GraphQL by itself does not come with support for Hypermedia Controls. However, it's not impossible to add them. A while back, I sketched an idea of what a GraphQL Schema with Hypermedia Controls could look like. It was an experiment, and I tried to see if I could spark some interest in the GraphQL community for the idea. So far, I haven't gotten much feedback on it, so my assumption is that the GraphQL community doesn't care about Hypermedia.

I still think it's a very powerful concept. Book a ticket via a mutation, and the response contains information about next possible options, like cancelling.

Summary of the question of whether GraphQL is RESTful

Let's do a quick recap of the previous two sections. I hope it's clear to the reader how powerful it is for an API to be RESTful: separating the concerns of client and server, building stateless services, making responses cacheable, the uniform interface, and the possibility to build a layered system. Conforming to these constraints helps us build internet-scale systems.

Unfortunately, GraphQL over HTTP fails to conform to many of these constraints. While it does use a Client-Server Model, the communication is not Stateless for all Operations and Caching is hard because of the misuse of HTTP Verbs, and the lack of Cache Controls.

Before we jump onto the solution part, Making GraphQL RESTful, I'd like to go through a bunch of common misconceptions about REST and GraphQL.

Common Misconceptions around GraphQL vs. REST

Recently, there was an interesting thread on Twitter that makes for nice input for a quick discussion of GraphQL vs. REST misconceptions.

I know I'm repeating myself, but GraphQL is a Query language, REST is a set of constraints. If you build services in a RESTful way, it helps making them scalable because you can leverage the existing infrastructure (browsers, caches, CDNs, frameworks) of the internet very well.

GraphQL cannot be better than REST. This sentence is just wrong. It's like saying an Apple is better than a knife. Why not use the knife to cut the Apple into nice small slices? Why not use REST to enhance the experience of GraphQL? Why fight against these constraints when they could actually help the Query language?

Every API is affected by the N+1 problem. Using plain REST APIs, the N+1 problem affects the client, whereas with GraphQL, it only affects the server. As there's latency between Client and Server, REST APIs actually suffer more from this.

Limiting Query depth is nothing but rate limiting the complexity of Queries, versus rate limiting the number of REST API calls. There are a lot of tools to analyze the complexity of GraphQL Operations. Additionally, we'll see that there's a simpler solution to the problem.

By the way, it's not really the correct language to say "Query Depth limitation". It might be nitpicky, but the correct language is to limit the depth of GraphQL Operations. Operations can be Queries, Mutations and Subscriptions. It would be weird to say GraphQL Query Query, right?

I actually don't believe that "most" REST-ish APIs really conform to the constraints of REST. There's a good reason why GraphQL is gaining adoption so quickly. Only a very small number of REST APIs really do it right. The majority of REST-ish APIs don't come with an OpenAPI Specification. GraphQL enforces a type system, helping developers to build better APIs.

That said, GraphQL over HTTP uses at least some constraints of REST. So the real answer here is that GraphQL is using a subset of REST, so GraphQL over HTTP could also be considered a REST API, just not a really good one.

There's really no difference between REST and GraphQL in terms of versioning. GraphQL over HTTP can use headers for versioning, or a version as part of the URL. Additionally, you're able to implement versioning as part of the GraphQL schema.

In contrast, not being able to easily version your GraphQL API actually forces developers to think about keeping their API backwards compatible. I've also written a blog post on making APIs versionless to help companies collaborate better through backwards compatible APIs.

Independent of the API style you use, if your APIs are always backwards compatible, you don't need versioning at all.

Indeed, server-side JSON Schema validation is a really powerful feature of OpenAPI (OAS). If you're familiar with OAS and JSON Schema, you'll realize that it's a way more powerful type system than GraphQL's.

I don't want to jump ahead to the solution already, but I'd like to point out one thing. WunderGraph is built around the concept of Persisted Queries. Not allowing clients to send arbitrary GraphQL Operations comes with a lot of benefits. By doing so, we're essentially turning GraphQL into some kind of REST or JSON-RPC. After doing the initial implementation of this feature, I realized that both the "variables" of a GraphQL Operation and the "response" are represented as JSON. By going the "persisted Operations only" route, we're able to combine GraphQL with JSON Schema.

This is the core of WunderGraph and makes it so powerful. It not only allows you to do server-side validation; you can also generate validation on the client, allowing you to build forms with input validation, just by writing a GraphQL Operation.

Why not use the amazing developer experience of GraphQL and combine it with the capabilities of OAS/JSON Schema?

GraphQL is good for fetching data. OpenID Connect (OIDC) is good for authenticating users. OAuth2 is good for authorization. REST APIs are good for file uploads. Both OIDC and OAuth2 use REST. Use the right tool for the right job, just upload your files to S3 and handle meta-data using GraphQL.


That's all I wanted to say about common misconceptions. We really need to stop this "GraphQL vs. REST" fight and work together on improving the developer experience of APIs. I think it would help everyone to get a better understanding of other API styles and standards. This could really help the GraphQL community to stop re-inventing so many wheels...

Not everything about REST is great though!

We've covered a lot of problems with GraphQL APIs so far and you might be tempted to ask, why use GraphQL at all? The answer is, not everything about REST is great and there are very good reasons to combine the two.

Having Resources is a very powerful concept. Combined with Type Definitions, it makes usage of an API a lot easier. If you're building on top of REST, using OpenAPI Specification (OAS) can help a lot to enable better collaboration. Both REST and OAS come with a few problems though.

It's rarely the case that a client wants to interact with a single Resource. At the same time, it's almost never the case that a REST API provider can cover all possible use cases of their API.

If client transactions usually span multiple Resources, why should we tightly couple Resources to URLs? By doing so, we're forcing clients to make unnecessary round trips. Yes, the situation got better with HTTP/2, but if Resources are very granular, an API user is still forced to wait for a parent response before making nested requests; HTTP/2 cannot do much about this. So why not just tell the server exactly what Resources we're interested in? Why not just send a GraphQL Query to the server?

As we've discussed above, sending a GraphQL Query over HTTP is not ideal. If, instead, we just use GraphQL on the server side only, we can expose these Compositions (GraphQL Operations) as unique URLs. This approach is the perfect middle ground that uses the strengths of both REST and GraphQL. Clients can still ask for exactly the data they want, all while not breaking with the important constraints of REST that help APIs scale well on the web.

Another issue with REST and OAS is the ambiguity in terms of how to solve certain problems. How should we send an argument? As a Header? As part of the URL path? Should we use a Query parameter? What about the Request Body? If you compare OAS and GraphQL, there are two important observations you can make.

For one, the Type System of OAS is a lot more advanced than the one of GraphQL. GraphQL can tell you that something is a String, or an Array of Strings. OAS, through the help of JSON Schema, lets you describe in detail what this String is about. You can define the length, minimum, maximum, a Regex pattern, etc... There's even a way to say that each item of an Array must be unique. GraphQL is completely lacking these features because Facebook was solving them at different layers. This means, the GraphQL specification is quite clean, on the other hand, users have to find solutions for the problems themselves.
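For instance, a single string field can carry constraints in JSON Schema that GraphQL's type system cannot express on its own:

```json
{
  "type": "string",
  "minLength": 3,
  "maxLength": 64,
  "pattern": "^[a-z0-9-]+$",
  "description": "a URL-friendly slug"
}
```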

The second observation is that OAS tries to find ways of describing "existing" REST APIs. This means, OAS is not designed as an optimal solution. Instead, it tries to model all possible ways of "doing REST" that were found in nature, hence the ambiguity of ways to do the same thing.

GraphQL on the other hand was designed from the ground up for a good Developer Experience. Frontend Developers love the DX of GraphQL, how else could you define a good product market fit?

Putting a layer of GraphQL on top of your existing REST APIs allows you to clean up all the chaotic ways developers found to build their REST APIs.

Why did we create such a mess in the first place? Because REST is just a number of constraints, it's not a spec, it's just a bunch of guidelines, very good guidelines.

GraphQL doesn't give you two ways of implementing arguments. There's just one; it's defined in the spec, no discussions, no chaos. Well, you can still design your GraphQL Schema badly, but that's another story.

How to make GraphQL RESTful

Great! You've made it to the solution. We've discussed REST, we've learned that GraphQL only conforms to REST to some small degree. Now let's fix this!

You'll see that the final solution will not adopt all RESTful patterns. E.g. we're not going to port over the tight coupling between Resources and URLs.

On Persisted GraphQL Operations

Most of the time, I have to use my own words to convince you of a solution. Today, I'm very happy to have some help from Ivan Goncharov. He's a member of the GraphQL foundation and a core contributor to the GraphQL JS implementation.

The solution I'm going to present is based around the concept of "Persisted Queries", or better yet, "Persisted Operations".

A while back, I had the chance to talk to Ivan about Persisted Queries. Here's what he had to say:

Persistent queries is a key feature that will allow unlocking full potential of GraphQL especially for infrastructure tooling like CDN, logging, etc. Also, persistent queries have the potential to solve so many problems of maintaining public GraphQL APIs.

-- <cite>Ivan Goncharov</cite>

To which I asked: Could you elaborate a bit on the problems of maintaining public APIs?

Few examples: Unpredictable complexity checks. If you change how the cost is estimated you are risking breaking client's queries without even notifying them. You should have a significantly longer deprecation period for fields. In general, public APIs without persistent queries limit how you can make changes. You will be forced to either version GraphQL API (what Shopify does) or spend significant effort on maintaining backward compatibility as long as possible (what GitHub does).

-- <cite>Ivan Goncharov</cite>

Let's unpack what Ivan said step by step.

Currently, there's a race in the GraphQL market to fill gaps with new tools. One prominent example is the CDN market. A few tools like GraphCDN are trying to solve the problem of caching GraphQL Operations on the edge. The base assumption here is that we're sending GraphQL Operations over HTTP. A CDN service provider can now build proprietary logic to implement this feature. We've covered this earlier, but I'd like to repeat it: cache invalidation in a CDN relying on GraphQL over HTTP is forced to use proprietary logic, locking customers into its ecosystem. This is because it's almost impossible for a GraphQL server to determine the time-to-live (TTL) of a response. Any GraphQL Operation can be completely different, asking for different Nodes of the Graph, each Node with a different TTL.

If instead, we RESTify our GraphQL APIs, we can put any public CDN provider in front of our API. Just give each persisted Operation a MaxAge Cache Control Header, an ETag and optionally a StaleWhileRevalidate value and Cloudflare & Co. can do their thing. No additional proprietary tooling is required. We can decide between multiple Cloud providers, avoiding vendor lock in for edge caching and most importantly, we're not creating a second source of truth. Extra benefit, native browser caching, with automatic content revalidation through ETags, works out of the box. That's one of the reasons why conforming to REST is so important. We can re-use what's already there!
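Conceptually, that's all the cache configuration a persisted Operation needs (the names below are illustrative, not WunderGraph's actual config API):

```typescript
// Illustrative sketch: one persisted Operation, one URL, standard HTTP caching.
const topProductsCacheConfig = {
  operation: "TopProducts",
  maxAge: 60, // Cache-Control: max-age=60
  staleWhileRevalidate: 120, // serve stale while revalidating in the background
  etag: true, // revalidate via If-None-Match / 304 Not Modified
};
```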

What about the problems Ivan was mentioning about public APIs?

Public GraphQL APIs were forced to find ways to protect themselves from getting overwhelmed by clients. Any GraphQL Operation can have almost infinite complexity. To combat the issue, public API providers implemented patterns that calculate the complexity on the fly. Once calculated, clients can be rate-limited based on the complexity.

This comes with a few problems. Clients don't know ahead of time how many "complexity points" each individual Operation will cost them. Some API providers are really nice and return this information as part of the metadata of the response, but that can already be too late. Another problem is that APIs change over time, which can lead to breaking changes; I've covered this topic in another post. And, as Ivan mentioned, if you change the model of how you calculate the GraphQL Operation complexity, you'll inevitably break some of your clients in unexpected ways.

How do persisted Operations solve this problem? As a client, you register an Operation with a GraphQL server. The server responds with a URL and tells you about the calculated rate-limit points. We're now able to use endpoint-based rate limiting. Additionally, as described in another post about Versionless APIs, the API provider now has a very good tool to keep this endpoint non-breaking.

A primer on Persisted GraphQL Operations

If you're not familiar with the concept of Persisted Operations, here's a quick primer to explain the concept.

Usually, GraphQL clients send GraphQL Operations to the GraphQL server. The server will then parse the Request and resolve the response. This comes at the cost of additional CPU and Memory every time an Operation is parsed, validated, etc. Additionally, this approach comes with a lot of security issues, as discussed in another blog post.

Persisted Operations do things slightly differently. Instead of sending a GraphQL Operation every time, the client will "register" the Operation on the server, or in simple words, store the Operation on the server, hence persisted. During the registration, the server can parse, validate and even estimate the complexity of the Operation. If the Operation is valid, a URL will be returned to the client, so it can call the Operation later.
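Here's a rough sketch of that flow (the endpoint names and payloads are hypothetical):

```typescript
// 1. Register the Operation once, e.g. at deploy time. The server parses,
//    validates and stores it, then returns a dedicated URL.
const operation = `query UserByID($id: ID!) { user(id: $id) { name } }`;
const { url } = await fetch("https://api.example.com/operations/register", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "UserByID", operation }),
}).then((r) => r.json());

// 2. At runtime, clients call the persisted Operation like a plain JSON-RPC
//    endpoint: a GET request with the variables as query parameters.
const user = await fetch(`${url}?id=1`).then((r) => r.json());
```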

Calling the operation later will not just be a lot more efficient; it saves a lot of CPU and Memory because we can skip a lot of unnecessary parsing, validation, etc.

In a nutshell, Persisted GraphQL Operations increase security and performance. They're also good for the environment because we can skip unnecessary CPU cycles.

Thunk-based resolving: Compiling GraphQL Operations

WunderGraph takes the approach of Persisted Operations one step further. Over the course of three years, we've developed a GraphQL Engine that resolves Operations using thunks.

Usually, a GraphQL resolver is a function that returns some data. Here's a simple example:

```javascript
// illustrative resolver: looks up a user and returns the data right away
const userResolver = (parent, args, context) => {
  return context.db.userByID(args.id);
};
```

If you call this function, it will immediately return some data. This model is simple to program for humans, but quite inefficient for computers because the Operation cannot be cached.

If you think about the functions that call this userResolver, they must understand the GraphQL Operation and know how to resolve individual fields. You could say that resolving Operations the "default" way is like running an interpreter. Once the user is returned from the DB, the function enclosing the resolver must parse the selection set to see what fields to return. All of this work needs to be done on every request.

Now let's look at an example of a thunk-based resolver. Keep in mind that WunderGraph's Compiler-based Engine is written in Go, so this is just an example using a language we all understand:

```go
// Illustrative sketch, not the actual engine code: during planning, visitor
// callbacks collect everything needed to execute the Operation later.
type Planner struct {
	plan *Plan // the execution plan being built
}

func (p *Planner) enterNode(node ast.Node) {
	// gather fetch configuration, field mappings, etc. for this node
}

func (p *Planner) leaveNode(node ast.Node) {
	// finalize this node's entry in the execution plan
}
```

At "planning" time, the WunderGraph Execution Engine compiles the GraphQL Operation into an Execution Plan. There are no direct resolvers. The enterNode and leaveNode functions get called whenever the AST visitor comes across a GraphQL AST Node. The Planner then gathers all data that is required at execution time.

The Plan that is generated by the Planner doesn't require any GraphQL knowledge at runtime. It's a description of the Response that needs to be generated. It contains information on how to fetch individual nodes of the Response, how to pick fields from a Response set, etc...

At runtime, all we have to do is walk through the generated Plan and execute all thunks. If you're not familiar with the term thunk, here's the Wikipedia article.

Just executing these thunks is at least as efficient as a REST API controller, so by going this route, we're not adding any extra latency compared to REST.

JSON Schema - the extra benefit of Persisted GraphQL Operations

I want to be honest with you, I didn't plan to have this feature, it was an accidental discovery.

When I started experimenting with GraphQL Operations, at some point it just struck me.

GraphQL APIs return JSON, that's obvious. If you de-inline all GraphQL arguments (turn them into variables), the variables can be represented as JSON too; that's also kind of obvious.

It took me a while though to see what was in front of me. Combine Persisted GraphQL Operations with the two facts I've just told you.

Persisted GraphQL Operations turn GraphQL into JSON-RPC automatically!

Each persisted Operation can be described as a function that takes a JSON input and has a JSON response.
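In TypeScript terms, you could think of it like this:

```typescript
// Every persisted Operation is conceptually a typed JSON-in / JSON-out function.
type PersistedOperation<Input, Response> = (input: Input) => Promise<Response>;
```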

Is there a powerful specification that can help us to describe a JSON input as well as a JSON response? Hello JSON Schema!

We've met JSON Schema earlier when we were talking about OpenAPI Specification. OAS is using JSON Schema as a Type System.

Guess what, we're doing the same thing with WunderGraph!

There's a whole section on this Feature but I'd like to give a short primer here:

```graphql
mutation (
  $message: String! @jsonSchema(
    title: "Message"
    description: "Write something meaningful"
    pattern: "^[a-zA-Z 0-9]+$"
  )
){
  createPost(message: $message){
    id
    message
  }
}
```

This is a Mutation that takes a message and creates a Post. We can give the message variable a title and description. Additionally, we're able to define a Regex pattern for input validation.

The JSON Schema for the Inputs of this Operation looks like this:

```json
{
  "type": "object",
  "properties": {
    "message": {
      "type": "string",
      "pattern": "^[a-zA-Z 0-9]+$",
      "title": "Message",
      "description": "Write something meaningful"
    }
  },
  "additionalProperties": false,
  "required": ["message"]
}
```

The benefits of this feature are endless:

  • server-side input validation
  • client-side input validation
  • code generation of Type Safe clients
  • Type Safe Middlewares, e.g. using TypeScript
  • we're even able to generate forms on the client
  • we can generate Postman Collections for the generated API

GraphQL as the API Orchestration Layer, an ORM to your APIs

Ok, let's think this through. We're adding GraphQL but it's insecure and not conforming to REST. To solve the problem, we're adding another layer of indirection on top of this. Are we not going full circle, REST to GraphQL to REST (JSON-RPC)?

I've recently published another blog post on GraphQL security where a reader made a very good comment on HN.

"It is hard not to interpret the recommendation at the end of this article, which is to wrap your GraphQL API in a locked down JSON-RPC API, as an argument for not using GraphQL at all."

Simon Willison

Thanks, Simon! Very good observation. Why use GraphQL at all?

We're usually not talking to a single service, a single API. When we build applications, most of the time, we have to integrate multiple APIs and compose them into one API, dedicated to this particular product.

GraphQL has its origin in frontend data fetching. I believe that GraphQL has a lot more potential than that.

GraphQL can become the API orchestration layer, the ORM to all your APIs.

When I talk about GraphQL, I usually mention the term "Virtual Graph". My philosophy of WunderGraph can be divided into three steps:

  1. Combine all APIs you'd like to use into one Virtual Graph, a GraphQL API that only exists virtually, as we don't expose it.
  2. Define your Operations by writing GraphQL Queries, Mutations and Subscriptions
  3. Generate the Server, using the thunk-based approach described above, as well as type-safe clients

GraphQL's selling point is that clients get exactly the data they need. But that's not enough. What we really need is a framework that allows us to create a backend for frontend on the fly.

The Virtual Graph with the Persisted Operations is exactly that: A framework to create API integrations.

Summary of the solution

Let's go through our checklist to verify how RESTful our new API style is. By the way, I call this pattern "GraphQL over JSON-RPC". You could say GraphQL over REST or RESTful GraphQL, but I don't want to argue with Hypermedia enthusiasts, as we're definitely not building a Hypermedia API.

  • [x] Client-Server: Not much changed in terms of client and server; we're still separating these concerns.
  • [x] Stateless: With JSON-RPC in front of our GraphQL API, we're able to use HTTP/2 Streams for Subscriptions and Live Queries. In contrast to WebSockets, these are just regular stateless HTTP Requests. Each Request can have its own Auth Context.
  • [x] Uniform Interface: WunderGraph doesn't just give you a uniform interface. We're also making it extremely easy for you to swap implementations of an API contract without breaking clients.
  • [x] Layered System: We're relying on JSON-RPC and widely used standards like Cache-Control Headers and ETags. For Authentication, we're using OpenID Connect. All this means you're able to integrate WunderGraph easily into existing stacks and can leverage Proxies like Varnish or CDNs like Cloudflare or Fastly.

The only concern you could have is that we're not exposing the same URL Scheme as a classic REST API. However, as pointed out earlier, we see this as an advantage because this solves over- and underfetching.

Additionally, you're almost always not directly using the "raw" API. The Developer Experience is one of our biggest concerns. We don't want Developers to waste their time on repetitive and boring tasks. That's why we generate fully TypeSafe Clients based on the user-defined Operations.

But we don't stop there. We're not just generating the client. We've built an open and extensible code-generation framework that can generate anything: from Postman Collections or OpenAPI Specifications to React Hooks, Android or iOS Clients, or even just curl shell scripts.

GraphQL vs REST vs OpenAPI Specification vs WunderGraph comparison Matrix

| Capability | WunderGraph | REST | OAS | GraphQL |
| --- | --- | --- | --- | --- |
| Easy to secure | [x] | [x] | [x] | o |
| Easy to scale | [x] | [x] | [x] | o |
| Native Browser Caching | [x] | [x] | [x] | o |
| Lightweight Client | [x] | [x] | [x] | o |
| Leverages existing CDNs and Proxies | [x] | [x] | [x] | o |
| JSON Schema validation | [x] | o | [x] | o |
| Type System | [x] | o | [x] | o |
| Expressive Error Handling | [x] | o | o | [x] |
| No ambiguity in API design | [x] | o | o | [x] |
| Clients get exactly the data they need | [x] | o | o | [x] |
| High-quality generated Clients | [x] | o | o | o |
| Integrate multiple APIs easily | [x] | o | o | o |
| Integrated Database to GraphQL using Prisma | [x] | o | o | o |
| Integrated Content Revalidation using ETags | [x] | o | o | o |
| Integrated Authentication using OpenID Connect | [x] | o | o | o |
| Integrated Authorization via Claims injection | [x] | o | o | o |
| Integrated Mocking | [x] | o | o | o |
| Integrated TypeSafe Hooks for custom middleware logic | [x] | o | o | o |
| Integrated Realtime Subscriptions for any API | [x] | o | o | o |
| Integrated CSRF Protection for Mutations | [x] | o | o | o |

Every time we meet a new client, we ask them how long it would take them to replicate our demo application from scratch. They usually answer something between a few days and two weeks. We then show them how little code we've actually written and tell them it took us only half an hour. You can literally hear people smiling, even with their webcam disabled. It's such a joy to do these demos! Sign up, and we'll do one for you too!

Addressing a few of your concerns

Is the approach with Persisted Operations not destroying the Developer Experience of GraphQL?

No, it's quite the opposite.

Without WunderGraph, the developer workflow usually looks like this (using React as an example): you define a GraphQL Operation somewhere in your codebase. Next, you run a code generator to generate TypeScript models for your Operation. Then, you include the models in your codebase, call a React Hook with the Operation and attach the models. There's a chance that models and Operation diverge, or that you choose the wrong model.

Now let's have a look at the WunderGraph development flow: we're using file-based routing, so you create a file containing your GraphQL Operation in the .wundergraph/operations directory. Once saved, our code generator will extend the server-side API and update the generated TypeScript client, the generated Hooks, Models, Mocks, TypeSafe Middleware Stubs, Forms (yes, we generate Forms too!), etc. Include the generated form component, or simply the generated Hook, and you're done, as shown in the sketch below.
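As a sketch (the generated names are illustrative; the real generated client may differ):

```typescript
// .wundergraph/operations/Users.graphql contains:
//   query Users { users { id name } }
//
// After saving the file, the code generator produces a fully typed client,
// so a component only needs something like:
import { useQuery } from "./generated/hooks";

export function UserNames() {
  const { data } = useQuery.Users();
  return data?.users.map((u) => u.name);
}
```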

WunderGraph becomes a core part of your infrastructure, and you're afraid of vendor lock-in

We've touched on vendor lock in before and how WunderGraph helps you to not get locked into proprietary CDN solutions. At the same time, are we not also locking you into our own proprietary system?

We're so confident that our tool can add a lot of value to your stack that I'm happy to share with you how to Eject from us and share some details of the stack we're using ourselves.

The WunderGraph GraphQL Engine is built on top of a well and actively maintained open source project with contributions from many different developers and companies. It has been used in production for many years now. Among its users are insurance companies, super large enterprises, API management companies, etc.

Through our Code-Generator, it's possible to generate Postman Collections and OpenAPI Specifications. We could also provide an AsyncAPI specification for Subscriptions and Live Queries. For Caching, we rely on standard Cache-Control Headers and ETags. Authentication is handled using OpenID Connect. Authorization is implemented by injecting Claims into GraphQL Operations. For Database Access, we're using Prisma.

So how do you Eject then?

  • Take the OpenAPI Specification that we generate and implement it with your framework of choice
  • Add your own custom middleware for Authentication & Authorization
  • Find an Open Source solution for Mocking as we're also generating TypeSafe Mocks for you.
  • Get yourself a library to add JSON Schema validation
  • Add a Caching Middleware that automatically handles ETags & Cache Control Headers and can scale across multiple servers, e.g. using Redis
  • Implement a server-side polling mechanism to stream Updates from your upstream APIs or Database
  • Add CSRF protection on both client and server
  • Either build your own Code-Generator to generate a fully TypeSafe client that is compatible with your API, handles Authentication etc... or just build the client manually

We believe that no team should have to do all these things themselves. Instead, focus on what matters to your business, focus on what matters to your customers. Let us do this boring API integration Middleware stuff and build something great on top of it!

Try it out yourself, it's free and Open-Source!

What are you waiting for? Save yourself a lot of time and build better, more secure, more performant apps.

I hope I've convinced you to stop worrying about GraphQL vs. REST. Take the best features of both and use them together!

You can try out WunderGraph on your local machine in just a minute. Paste this into your terminal, and you're good to go:

```bash
# commands from the time of writing; check the docs for the current quickstart
yarn global add @wundergraph/wunderctl@latest
mkdir wg-demo && cd wg-demo
wunderctl init --template nextjs-starter
```

We'd love to hear from you!

Do you have questions or feedback? Meet us on Discord!

Want to talk to an Engineer to figure out if WunderGraph is right for you? Let's have a Chat! We'd love to give you a demo!