GraphQL File Uploads with React Hooks, TypeScript & Amazon S3 [Tutorial]
Learn how to use GraphQL to upload files with multipart upload requests
As time goes on, it looks like more developers are choosing to build their public-facing APIs with GraphQL instead of REST. We’re going to see a lot of the same problems people were solving with REST, solved with GraphQL, in a much cleaner and enjoyable way.
A common task in a lot of web applications is performing file uploads. Luckily, if you’re using Apollo Server, uploads are enabled by default.
By adding the `Upload` type to our Apollo Server type definitions, we enable the ability to upload files from the client.
If we build a mutation that utilizes the `Upload` type, what we get back is a stream of data that we can pipe to a file stored on our server or, more interestingly, to an external cloud service like AWS S3. For things like profile pictures, it's also pretty common to store the URL of the uploaded file in our database so that we can use it to show people's display pictures.
In this practical tutorial, I’ll walk you through how to:
- Set up an Apollo Server with TypeScript for file uploads
- Set up your Apollo Client to upload files
- Pipe an uploaded file to AWS S3
- Get the URL of the uploaded file so that we can save it to our database
Hold up ✋: Before we get started, I urge you to check out the Apollo Server File Upload Best Practices Guide. In that guide, we cover three different ways to perform file uploads (multipart upload requests, signed URL uploads, and rolling your own image server).
In this tutorial, we're going to implement #1: multipart upload requests. This approach is perfect for hobbyist and proof-of-concept projects, but other approaches are more suitable for production environments.
With that said, onwards!
Setting up an Apollo Server is a piece of cake. We just need to install the following npm packages.
npm install --save apollo-server graphql
If you’re starting a project from scratch, check out “Getting started with Apollo Server”. If you’re adding a GraphQL Server to an existing Express.js REST API, check out “Add a GraphQL Server to a RESTful Express.js API in 2 Minutes”.
TypeScript types will come in handy when we build the uploader, so let’s add that to our project as well.
npm install --save-dev typescript @types/node && npx tsc --init
Check out “How to Setup a TypeScript + Node.js Project” if you’ve never set up a TypeScript app before.
When we’re done with that, the most basic Apollo Server setup we could have should look a little something like this.
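A minimal sketch of that setup might look like the following. The `hello` query and its resolver are placeholders of my own; we'll replace them with the upload pieces in the sections below.

```typescript
import { ApolloServer, gql } from 'apollo-server';

// Placeholder schema: just enough for the server to boot.
const typeDefs = gql`
  type Query {
    hello: String!
  }
`;

// Placeholder resolver to match the schema above.
const resolvers = {
  Query: {
    hello: () => 'world',
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

server.listen().then(({ url }) => {
  console.log(`Server ready at ${url}`);
});
```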
We want clients to be able to upload a file to our GraphQL endpoint, so we'll need to expose a `singleUpload` GraphQL mutation to do just that. Using the `Upload` scalar that comes with Apollo Server, write a `singleUpload` mutation that takes in a non-null `Upload` and returns a non-null response type.
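Here's a sketch of what those type definitions could look like. The `UploadedFileResponse` type and its fields are my own naming for this example, and depending on your Apollo Server version, the `Upload` scalar may already be defined for you:

```graphql
type UploadedFileResponse {
  filename: String!
  mimetype: String!
  encoding: String!
  url: String!
}

type Mutation {
  singleUpload(file: Upload!): UploadedFileResponse!
}
```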
The Upload type
The `file` object that we get from the second parameter of the `singleUpload` resolver is a `Promise` that resolves to an `Upload` type with the following attributes:
- `stream`: The upload stream of the file(s) we're uploading. We can pipe a Node.js stream to the filesystem or other cloud storage locations.
- `filename`: The name of the uploaded file(s).
- `mimetype`: The MIME type of the file(s), such as `text/plain` or `application/octet-stream`.
- `encoding`: The file encoding, such as `UTF-8`.
At this point, we have a `singleUpload` mutation ready to accept a file upload and turn it into a stream that we can pipe to some destination. We're not doing anything with that stream yet, so let's change that.
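A first pass at that resolver might look like the sketch below. The `UploadedFile` interface is my own naming, built from the attributes listed above, and the empty `url` field is a placeholder until we wire up S3:

```typescript
import { Readable } from 'stream';

// Shape of the resolved upload, matching the attributes listed above.
interface UploadedFile {
  stream: Readable;
  filename: string;
  mimetype: string;
  encoding: string;
}

// Await the file promise, pull out what we need, and return the metadata.
// The `url` field stays empty until we pipe the stream to S3.
export async function singleUploadResolver(
  parent: any,
  args: { file: Promise<UploadedFile> }
): Promise<{ filename: string; mimetype: string; encoding: string; url: string }> {
  const { stream, filename, mimetype, encoding } = await args.file;
  // `stream` isn't piped anywhere yet; that's the next step.
  return { filename, mimetype, encoding, url: '' };
}
```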
Uploading to AWS S3
Amazon S3 is a popular object storage service that we can use to store images, videos, and just about any other kind of file that you can think of.
Let’s make another file and create an `AWSS3Uploader` class to hold the responsibility of uploading to S3.
Creating an AWS S3 Uploader
We’re going to need the AWS SDK, so let’s install that first.
npm install --save aws-sdk
Then we’ll create the `AWSS3Uploader` class that accepts an `S3UploadConfig` (a handy-dandy type that we create) in the constructor. To create a new instance of one of these, we need to pass in everything necessary to get an authenticated uploader up and running.
That means we’ll need the:
- `accessKeyId`: You can get this by using IAM, creating a user, attaching the `AmazonS3FullAccess` permission to them, and then creating an access key for them. Check this link for more info.
- `secretAccessKey`: Same as above.
- `destinationBucketName`: With S3, we store data in buckets. You'll want to create a bucket first, and then use the name of the bucket here.
Here’s what the class looks like so far.
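A sketch of that skeleton, assuming the `S3UploadConfig` field names from the list above (the optional `region` field is an addition of mine; adjust it to your bucket's region):

```typescript
import * as AWS from 'aws-sdk';

// Everything we need to construct an authenticated uploader.
interface S3UploadConfig {
  accessKeyId: string;
  secretAccessKey: string;
  destinationBucketName: string;
  region?: string;
}

export class AWSS3Uploader {
  private s3: AWS.S3;
  public config: S3UploadConfig;

  constructor(config: S3UploadConfig) {
    // Authenticate the AWS SDK with the credentials we were given.
    AWS.config.update({
      region: config.region || 'us-east-1',
      accessKeyId: config.accessKeyId,
      secretAccessKey: config.secretAccessKey,
    });
    this.s3 = new AWS.S3();
    this.config = config;
  }
}
```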
Cool, so when we create a new one of these, we get an instance of `AWSS3Uploader`, initialized with the AWS settings we need to upload file data to an S3 bucket.
Replacing (or composing) the resolver
Ideally, it would be nice if this `AWSS3Uploader` class could replace (or somehow compose) the resolver that we have on our Apollo Server. With TypeScript, we can define the contract of the resolver function using an interface, and then, if our `AWSS3Uploader` implements that interface, we can delegate the work.
I like that approach. Using an `IUploader` interface, define the contract for the `singleFileUploadResolver` and create other strict TypeScript types for the parameters and the return value.
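Here's one way that contract could be expressed. The `File` and `UploadedFileResponse` type names are my own for this sketch, and the `FakeUploader` at the end is a throwaway implementation just to show the contract in use:

```typescript
import { Readable } from 'stream';

// Illustrative types for the resolver's parameters and return value.
type File = {
  stream: Readable;
  filename: string;
  mimetype: string;
  encoding: string;
};

type UploadedFileResponse = {
  filename: string;
  mimetype: string;
  encoding: string;
  url: string;
};

// The contract: anything that can resolve a single file upload.
interface IUploader {
  singleFileUploadResolver: (
    parent: any,
    args: { file: Promise<File> }
  ) => Promise<UploadedFileResponse>;
}

// A throwaway in-memory implementation, just to demonstrate the interface.
class FakeUploader implements IUploader {
  async singleFileUploadResolver(
    parent: any,
    { file }: { file: Promise<File> }
  ): Promise<UploadedFileResponse> {
    const { filename, mimetype, encoding } = await file;
    return { filename, mimetype, encoding, url: `fake://${filename}` };
  }
}
```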
Then, implement the `IUploader` interface on the `AWSS3Uploader` class.
Advanced design tip: What we've just done here is plant the seeds of the design principle called Liskov Substitution, in that we should be able to swap one implementation for another. If, later on, we'd like to switch to using Cloudinary or Google Cloud for uploads instead, all we have to do is implement the `IUploader` interface on a new object, and we can swap it out safely. Beautiful!
Before we implement the S3 upload code, let's go back to our Apollo Server and create an instance of our `AWSS3Uploader` class. Then we can replace the anonymous resolver function with our `s3Uploader` instance's `singleFileUploadResolver` method.
Because we're using classes, when control inverts to `s3Uploader`, the value of `this` with respect to `s3Uploader` will be lost. We can preserve that initial `this` value by using the `bind` method. We'll need to do this whenever we're working with class-based components that call their own methods. There are other ways to do this!
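The wiring might look like the sketch below. The `./AWSS3Uploader` import path, the environment variable names, and the bucket name are all placeholders of my own; the schema mirrors the type definitions from earlier:

```typescript
import { ApolloServer, gql } from 'apollo-server';
// Assumes the AWSS3Uploader class is exported from this (hypothetical) module.
import { AWSS3Uploader } from './AWSS3Uploader';

const s3Uploader = new AWSS3Uploader({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  destinationBucketName: 'your-bucket-name',
});

const typeDefs = gql`
  type UploadedFileResponse {
    filename: String!
    mimetype: String!
    encoding: String!
    url: String!
  }
  type Query {
    hello: String!
  }
  type Mutation {
    singleUpload(file: Upload!): UploadedFileResponse!
  }
`;

const resolvers = {
  Query: { hello: () => 'world' },
  Mutation: {
    // Bind so that `this` inside the method still refers to s3Uploader.
    singleUpload: s3Uploader.singleFileUploadResolver.bind(s3Uploader),
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
server.listen().then(({ url }) => console.log(`Server ready at ${url}`));
```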
Implementing the upload logic
Now the fun part. What we want to do is:
- Create the destination file path
- Create an upload stream that goes to S3
- Pipe the file data into the upload stream
- Get the link representing the uploaded file
- (optional) save it to our database
To create the file path, let's add a method called `createDestinationFilePath` that takes in everything we currently know about the file. I'm going to leave it really simple by just returning the name of the file that we want to upload, but if you wanted to create your own naming pattern, you could do that here.
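Shown here as a standalone function for clarity (in the class it would be a private method):

```typescript
// Builds the destination key for the file in the S3 bucket. Kept deliberately
// simple: just the filename. Swap in your own naming scheme (timestamps,
// user IDs, folder prefixes) here if you need one.
function createDestinationFilePath(
  fileName: string,
  mimetype: string,
  encoding: string
): string {
  return fileName;
}
```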
Next, we've got to create an upload stream that points to our AWS S3 bucket. Streams are one of the most confusing parts of Node.js, so think of this step as if we're creating the fire hose and pointing it directly at the S3 bucket. We're not doing anything with the data yet; we're just defining where it's going to go.
To do this, we define a new type, an `S3UploadStream` object that holds both the upload stream and a promise that we can invoke to start the upload. That promise is essentially the valve to our fire hose.
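One way to build that, as a sketch. The `PassThrough` stream from Node's `stream` module acts as the hose, and `s3.upload(...).promise()` is the valve (its `ManagedUpload.SendData` result is the aws-sdk v2 response type):

```typescript
import { PassThrough } from 'stream';
import * as AWS from 'aws-sdk';

// Both ends of the fire hose: the stream we write file data into, and a
// promise that resolves once S3 has received everything.
type S3UploadStream = {
  writeStream: PassThrough;
  promise: Promise<AWS.S3.ManagedUpload.SendData>;
};

// Wires a PassThrough stream up to s3.upload. Nothing flows until we pipe
// data in and await the promise.
function createUploadStream(
  s3: AWS.S3,
  bucket: string,
  key: string
): S3UploadStream {
  const pass = new PassThrough();
  return {
    writeStream: pass,
    promise: s3.upload({ Bucket: bucket, Key: key, Body: pass }).promise(),
  };
}
```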
Now let’s connect the read stream (our data) to the write/upload stream.
// Pipe the file data into the upload stream
stream.pipe(uploadStream.writeStream);
And let’s open the valve.
const result = await uploadStream.promise;
At this point, the `singleFileUploadResolver` method should look like this.
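Composing the earlier sketches, the whole class might look something like this. The type names are my own from the previous steps, and `result.Location` is the uploaded object's URL in the aws-sdk v2 response:

```typescript
import { Readable } from 'stream';
import { PassThrough } from 'stream';
import * as AWS from 'aws-sdk';

type File = { stream: Readable; filename: string; mimetype: string; encoding: string };
type UploadedFileResponse = { filename: string; mimetype: string; encoding: string; url: string };
type S3UploadConfig = { accessKeyId: string; secretAccessKey: string; destinationBucketName: string; region?: string };
type S3UploadStream = { writeStream: PassThrough; promise: Promise<AWS.S3.ManagedUpload.SendData> };

export class AWSS3Uploader {
  private s3: AWS.S3;

  constructor(private config: S3UploadConfig) {
    AWS.config.update({
      region: config.region || 'us-east-1',
      accessKeyId: config.accessKeyId,
      secretAccessKey: config.secretAccessKey,
    });
    this.s3 = new AWS.S3();
  }

  // 1. Create the destination file path.
  private createDestinationFilePath(filename: string, mimetype: string, encoding: string): string {
    return filename;
  }

  // 2. Create an upload stream that goes to S3.
  private createUploadStream(key: string): S3UploadStream {
    const pass = new PassThrough();
    return {
      writeStream: pass,
      promise: this.s3
        .upload({ Bucket: this.config.destinationBucketName, Key: key, Body: pass })
        .promise(),
    };
  }

  async singleFileUploadResolver(
    parent: any,
    { file }: { file: Promise<File> }
  ): Promise<UploadedFileResponse> {
    const { stream, filename, mimetype, encoding } = await file;
    const filePath = this.createDestinationFilePath(filename, mimetype, encoding);
    const uploadStream = this.createUploadStream(filePath);

    // 3. Pipe the file data into the upload stream.
    stream.pipe(uploadStream.writeStream);

    // 4. Open the valve: wait for S3 to finish receiving the data.
    const result = await uploadStream.promise;

    // 5. result.Location holds the URL of the uploaded file; persist it to
    //    your database here if you need to.
    return { filename, mimetype, encoding, url: result.Location };
  }
}
```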
We can get the link that the file was uploaded to by pulling it out of the `result` object. If you wish to save it to a database somewhere, this would be the appropriate place to do so.
You may need to associate the upload with the particular user who made the request; you can accomplish this using the third argument in the GraphQL resolver, the context argument. For more details on how this works, check out the Apollo Docs on the Context Argument.
And that completes our server-side configuration!
Let’s move over to the client-side and walk through a simple setup with Apollo Client.
Assuming you already have a React app created (and if you don’t, see how to use Create React App to create a new one), you’ll want to set up an instance of Apollo Client.
Just want the code? Go ahead and peep it on GitHub.
Run this command to install the latest version of Apollo Client.
npm install --save @apollo/client
Next, we can create an instance of `ApolloClient`, connect it to our Apollo Server using the `HttpLink` Link component, and wrap our React app with an `ApolloProvider`.
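A minimal sketch, assuming your server is running locally on port 4000 and your root component lives in `./App`:

```tsx
import React from 'react';
import ReactDOM from 'react-dom';
import {
  ApolloClient,
  ApolloProvider,
  HttpLink,
  InMemoryCache,
} from '@apollo/client';
import App from './App';

// Point the client at the Apollo Server from the first half of this tutorial.
const client = new ApolloClient({
  link: new HttpLink({ uri: 'http://localhost:4000/graphql' }),
  cache: new InMemoryCache(),
});

ReactDOM.render(
  <ApolloProvider client={client}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);
```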
That’s the basic setup.
To get uploads working, we need to rely on a community-built package called `apollo-upload-client`, which adds multipart request capabilities to the `ApolloClient` instance.
You can read the docs for `apollo-upload-client` here.
Let’s install it.
npm install apollo-upload-client
To hook it up, we need to replace the `HttpLink` instance with a Link created by using the `createUploadLink` factory function.
Because the type contracts aren't nominally equivalent between the official Apollo Client and the object created by `createUploadLink` (at the moment), we need to use `@ts-ignore` to prevent a type error.
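The swap might look like this (same assumed localhost endpoint as before):

```typescript
import { ApolloClient, InMemoryCache } from '@apollo/client';
import { createUploadLink } from 'apollo-upload-client';

const client = new ApolloClient({
  // @ts-ignore -- the upload link's type isn't nominally identical to
  // Apollo Client's ApolloLink type at the time of writing.
  link: createUploadLink({ uri: 'http://localhost:4000/graphql' }),
  cache: new InMemoryCache(),
});
```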
Uploading a file from the client to cloud storage
From the client, I'm going to create a straightforward `App` component. Inside the `App` component, I've defined another component called `UploadFile`. Let's create that now.
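The `App` component can stay bare-bones; this sketch assumes `UploadFile` is exported from a (hypothetical) `./UploadFile` module:

```tsx
import React from 'react';
import { UploadFile } from './UploadFile';

// Nothing fancy: just render the uploader.
const App: React.FC = () => (
  <div>
    <h2>Upload a file to S3</h2>
    <UploadFile />
  </div>
);

export default App;
```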
The `UploadFile` component uses the `useMutation` hook, which takes in a GraphQL mutation that we're about to write. When the `onChange` callback gets called on the `input` tag, it supplies a `validity` object that we can test against to determine if we should execute the mutation with `mutate`. You can read more about the nuances and features of `apollo-upload-client` in the GitHub docs.
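Here's a sketch of that component. The mutation document is shown inline so the component stands on its own, and the `url` field it selects assumes the response type from the server half of this tutorial:

```tsx
import React from 'react';
import { gql, useMutation } from '@apollo/client';

const SINGLE_UPLOAD = gql`
  mutation singleUpload($file: Upload!) {
    singleUpload(file: $file) {
      url
    }
  }
`;

export const UploadFile: React.FC = () => {
  const [mutate] = useMutation(SINGLE_UPLOAD);

  return (
    <input
      type="file"
      required
      onChange={({ target: { validity, files } }) => {
        // Only fire the mutation if the input is in a valid state
        // and a file was actually selected.
        if (validity.valid && files && files[0]) {
          mutate({ variables: { file: files[0] } });
        }
      }}
    />
  );
};
```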
Lastly, we need to write the mutation and import the necessary utilities to do so. Notice that the `Upload` type we're referring to is the one that Apollo Server knows about as a scalar type.
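A fuller version of the mutation might select every field the server returns; the field names here assume the response type sketched on the server side:

```typescript
import { gql } from '@apollo/client';

export const SINGLE_UPLOAD = gql`
  mutation singleUpload($file: Upload!) {
    singleUpload(file: $file) {
      filename
      mimetype
      encoding
      url
    }
  }
`;
```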
That’s it! Try it out, and check your S3 console for your uploaded files.
Here's the client-side upload component in its entirety.
We just learned how to use multipart requests to perform GraphQL file uploads. If you’re just getting started with GraphQL, or you’re working on a non-critical project, this approach is excellent because it’s the simplest way to get something up and running.
If you're working on something critical in production, definitely remember to check out the best practices guide for the alternative approaches.