The Jamstack architecture advocates deploying pre-built assets to edge servers. Such deployments are fast, scale easily and offer greater security. However, since there is no traditional server in this setup, we need another way to run server-side code. Serverless functions enable developers to execute such code. There are multiple ways in which functions can be created, and some deployment providers in the Jamstack space - such as Netlify - have made it convenient to deploy serverless functions from within their environment. This article will discuss how to upload images using Netlify Functions, store them in Cloudinary and do some "geomagic" with them.
If you are new to serverless functions, I have created two video courses on this subject (concerning the Jamstack): Jamstack and Serverless and Serverless Functions and Databases.
The app requires you to upload an image with available Exif GPS metadata (more on this later). If you don't have access to such an image, please download and use this one.
You can find the entire codebase discussed in this article on GitHub. If you wish to deploy this application please don't forget to add your own environment variables. Please see a demo below.
The process is going to be simple - we are going to build a function that accepts an image, sent from a form (which we will create as well) via an HTTP POST request. I decided not to use any framework for this project, so the code will be in vanilla JavaScript.
The form will be simple, and for brevity, we'll only focus on the most critical aspects of it.
<input type="file" id="uploader" />
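In context, the surrounding markup might look something like this (the button, the result image and their ids are my own naming choices for illustration; we'll wire the button up to an upload() function in a moment):

<input type="file" id="uploader" />
<button id="upload-button">Upload</button>
<img id="result" />

with a click handler such as document.getElementById('upload-button').addEventListener('click', upload); attached in our JavaScript.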
Sending and parsing a form via serverless functions can happen using FileReader or FormData. In our case, we'll opt to use FileReader for a simple reason: when we send the data, we want to send it as a base64 encoded string, and FileReader has the readAsDataURL method available, which does the conversion. Since FileReader doesn't use promises natively, we can "promisify" it by wrapping it in a new Promise like so:
const readFile = (file) => {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    // resolve with the base64 encoded data URL once reading is done
    reader.onload = () => {
      resolve(reader.result);
    };
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
};
The readFile function above accepts a file and returns the base64 encoded URL once the FileReader has finished reading the file. But how can we pass a file parameter to our function? That's a relatively easy task: since we have already created a file selector, we can read the selected file in the following way:
const uploader = document.getElementById('uploader');
const file = await readFile(uploader.files[0]);
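The resolved value is a data URL, i.e. the base64 encoded file prefixed with its MIME type - for a JPEG it would start something like this (the exact contents below are illustrative):

console.log(file);
// data:image/jpeg;base64,/9j/4AAQSkZJRgABAQ...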
Note that <input type="file" /> only allows the selection of a single file by default. To allow the selection of multiple files, add the multiple attribute. Furthermore, to only accept, say, JPEG and PNG files, you can use the accept attribute in the following way: <input type="file" accept="image/png, image/jpeg" />.
Now we have a way to select a file, but we don't have the means to process that. This is where the serverless function will come into play.
The primary purpose of the serverless function is going to be to accept the base64 encoded string (our image) and store it somewhere. There are multiple possibilities here - you can store it on S3, for example. Still, we'll be uploading our image to Cloudinary because later on, we'll be leveraging some other features from them to enhance our application.
This article is not going to cover all the basics of Netlify Functions, but at a very high level the function signature looks like this:
module.exports.handler = async (event, context) => {
  return {
    statusCode: 200,
    body: 'RETURN MESSAGE',
  };
};
I frequently see developers using both the async keyword and (event, context, callback) as the parameters to the function - this is not required, since the callback would only be needed if the async keyword is not present. My advice is to use the async version of the function.
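For comparison, the callback-based equivalent (which, again, I'd avoid in favour of async) looks roughly like this:

module.exports.handler = (event, context, callback) => {
  // hand the response back via the callback instead of returning it
  callback(null, {
    statusCode: 200,
    body: 'RETURN MESSAGE',
  });
};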
We already know that we are receiving a base64 encoded string, but how can we parse that? Luckily for us, that task is straightforward: event has many properties in Netlify functions, and event.body contains the payload sent from the form (how to send this exactly will be something that we explore in a moment).
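A quick way to get a feel for what arrives in the function is to log a few of the event's properties - something along these lines (which properties matter will depend on your use case):

module.exports.handler = async (event) => {
  console.log(event.httpMethod); // e.g. POST
  console.log(event.headers['content-type']);
  console.log(event.body); // the base64 encoded data URL sent by the form
  return { statusCode: 200, body: 'ok' };
};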
For the upload, we are leveraging Cloudinary's Node.js SDK, which needs to be configured first by adding the appropriate API keys and secrets. Once that's done, we can do the upload by simply executing the following:
const body = event.body;
let upload;
try {
  // upload the base64 encoded image and keep its Exif metadata
  upload = await cloudinary.uploader.upload(body, {
    public_id: 'netlify-uploaded-image',
    image_metadata: true,
  });
} catch (error) {
  // bail out early if the upload fails
  return { statusCode: 500, body: JSON.stringify({ error: true }) };
}
Note the image_metadata: true option - this is going to be crucial later on.
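The SDK configuration step mentioned above isn't shown in the snippet; a minimal sketch, assuming the credentials are exposed as environment variables (the variable names below are my own choices), could look like this:

const cloudinary = require('cloudinary').v2;

cloudinary.config({
  cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
  api_key: process.env.CLOUDINARY_API_KEY,
  api_secret: process.env.CLOUDINARY_API_SECRET,
});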
At this point, the Netlify function is ready, and we can test it locally by executing netlify dev from the CLI. The function will be available at localhost:8888/.netlify/functions/upload. For testing, use Insomnia or Postman.
Now that we know the URL for our upload function, we can go back to the HTML form and update it:
const upload = async () => {
  // ... the code from the previous part
  const response = await fetch(
    `${document.location.origin}/.netlify/functions/upload`,
    {
      method: 'POST',
      body: file,
    }
  );
  const data = await response.json();
};
Using document.location.origin is a trick that I employ here to make sure that, even after deployment, the form calls the right path to the function, as opposed to hardcoding it to localhost:8888, which would break once the site is live.
At this point, the file upload is working, but let's not stop here.
Remember that image_metadata: true flag from before? Well, it's there to preserve all the various pieces of metadata found in the photo, and those values will also be stored in Cloudinary. Furthermore, with that option set to true, once the image gets uploaded, the response data object from Cloudinary will also contain all the Exif metadata.
Exif stands for "Exchangeable image file format" - all sorts of metadata can be attached to images, such as the camera type, exposure time, device orientation and GPS location. Such metadata doesn't get stripped when creating a base64 encoded version of the image.
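To make this more concrete, the relevant slice of the Cloudinary upload response could look roughly like this (the values are illustrative, but the degrees/minutes/seconds formatting of the GPS fields is what we'll be working with):

{
  "public_id": "netlify-uploaded-image",
  "image_metadata": {
    "Model": "iPhone 12",
    "GPSLatitude": "51 deg 30' 36.00\" N",
    "GPSLongitude": "0 deg 7' 12.00\" W"
  }
}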
As part of this project, we'll read and parse the GPS location found under Exif metadata and send those details to Google Maps' Reverse Geocoding API. To do so, we need first to read the GPS Latitude and Longitude information, then convert it from degrees, minutes and seconds (which is the default format for these pieces of Exif metadata) to decimal-based latitude and longitude. Once we have that value, we can send it to the Reverse Geocoding API to get the location's name.
The conversion from degrees, minutes and seconds to decimals is relatively straightforward by applying the following maths: degrees + (minutes/60) + (seconds/3600). (Bear in mind that when doing the calculation, the cardinal directions need to be taken into consideration as well, so South and West mean negative decimal numbers.)
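For example, 51° 30' 36" N converts to 51 + 30/60 + 36/3600 = 51.51, while 0° 7' 12" W converts to -(0 + 7/60 + 12/3600) = -0.12.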
This is what the conversion function looks like - please note that the return statement here is formatted so that it can be immediately fed into the Reverse Geocoding API:
const convert = (lat, lng) => {
  // the GPS Exif values arrive as space separated degree/minute/second strings
  const latElements = lat.split(' ');
  let decimalLat = (
    parseInt(latElements[0]) +
    parseInt(latElements[2]) / 60 +
    parseFloat(latElements[3]) / 3600
  ).toFixed(4);
  if (latElements.pop() === 'S') {
    decimalLat = decimalLat * -1;
  }
  const lngElements = lng.split(' ');
  let decimalLng = (
    parseInt(lngElements[0]) +
    parseInt(lngElements[2]) / 60 +
    parseFloat(lngElements[3]) / 3600
  ).toFixed(4);
  if (lngElements.pop() === 'W') {
    decimalLng = decimalLng * -1;
  }
  return `${decimalLat}, ${decimalLng}`;
};
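Calling it with values in the format the metadata arrives in (as in the illustrative response earlier) would give us something like:

convert(`51 deg 30' 36.00" N`, `0 deg 7' 12.00" W`);
// => roughly "51.5100, -0.12"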
Let's see this in action by updating the Netlify function:
const lat = upload.image_metadata.GPSLatitude;
const lng = upload.image_metadata.GPSLongitude;

if (!lat || !lng) {
  return {
    statusCode: 400,
    body: JSON.stringify({
      message:
        'Image does not contain GPS EXIF metadata. Please use an image with appropriate metadata',
      error: true,
    }),
  };
}

const latlng = convert(lat, lng);
const response = await (
  await fetch(
    `https://maps.googleapis.com/maps/api/geocode/json?latlng=${latlng}&key=${googleMapKey}`
  )
).json();
let location = response.plus_code.compound_code;
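I haven't verified every edge case of the Geocoding API, so if you want to guard against a missing plus_code, a slightly more defensive variant could look like this:

const location = response.plus_code
  ? response.plus_code.compound_code
  : 'Unknown location';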
If the image has GPS Exif metadata, we should see the location's name returned from Google.
Let's not stop here! Now that we have the location, where should we display it? Wouldn't it be nice to overlay the location on the image itself? This would allow us to have a nice automated workflow where we upload an image, and the rest is done for us "automagically". The good news is that we can achieve this with ease by using Cloudinary again. With the Node.js SDK, we can apply overlays and generate a final image URL that can be sent to our frontend. So, still within our Netlify function, we can add the following:
const finalImage = cloudinary.url(upload.public_id, {
  fetch_format: 'auto',
  quality: 'auto',
  overlay: {
    font_family: 'Roboto',
    font_size: 24,
    font_weight: 'bold',
    text: `Location: ${location}`,
  },
  color: colour,
  transformation: [
    {
      radius: 25,
      width: 800,
      crop: 'fit',
    },
  ],
});
The code above will generate an access URL for our image (which we can feed directly into an <img> element's src attribute), optimise the photo for us and add the location as an overlay.
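To close the loop, the function can return this URL (along with the location) to the frontend, which can then populate an image element - a minimal sketch, assuming a { url, location } response shape and the <img id="result"> element from earlier:

// in the Netlify function
return {
  statusCode: 200,
  body: JSON.stringify({ url: finalImage, location }),
};

// in the frontend, after the fetch call shown before
const data = await response.json();
document.getElementById('result').src = data.url;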
As an exercise, compare the file that you upload with the file that is being delivered here. If you use the provided example file in Google Chrome, the difference should be around 300 kB.
Please go ahead and experiment with placing the text in different locations on the image by consulting the Cloudinary documentation on the matter.
When adding a text overlay, we also need to specify the colour of our text. There seem to be many viable solutions, some of which I'm still investigating, but I found one that works rather well and is easy to implement, even though it may not cover every case. When uploading an image, Cloudinary does some analysis and returns the predominant colours as a 2D array, each entry consisting of the name of a colour and its frequency of occurrence. Taking that information, I created some rudimentary logic: I first read the prominent colours and compare them with an array of colours that I deemed to be 'dark'. If there's a match (based on a threshold that I arbitrarily set up), I change the text colour from black to white:
const colours = await cloudinary.api.resource(upload.public_id, {
  colors: true,
});
const predominantColours = colours.predominant.google;
const prominentColours = predominantColours
  .filter((colour) => colour[1] > 35)
  .map((colour) => colour[0]);

const darkColours = ['black', 'brown', 'blue', 'red', 'orange'];
const foundDarkColours = prominentColours.some((c) =>
  darkColours.includes(c)
);

let colour = 'black';
if (foundDarkColours) {
  colour = 'white';
}
This approach seemed to work for all my use-cases, but this is not bulletproof, and it would need more research, potentially combining it with some of the insights found in the articles linked at the beginning of this section.
Netlify Functions (and serverless functions in general) provide us with a vast number of opportunities when it comes to server-side code execution. We can do as little as call a third-party SDK or, as we saw in this article, chain multiple SDK calls together to achieve the desired functionality. Such a setup is ideal for the Jamstack but can be equally suitable for other architectures as well.