React, Videos and the Orientation API
The internet runs on visuals now. Users want images, 360-degree product shots, and videos. Video especially keeps climbing because it pulls people in faster than static content ever could.
Landscape vs Portrait
We’ve all done it. You grab your phone, hit record, and only after you stop do you realise you picked the wrong orientation. Landscape captures wide scenes and works well on YouTube or your TV. Portrait fits online video calls and close-up facial shots, but it looks terrible on YouTube. And most people instinctively hold their phone upright when they start recording.
So what happens when you’ve filmed something brilliant in portrait mode? Or when you need to squeeze a landscape video into a portrait container? Or when the video just doesn’t fit the container you’ve got?
Cloudinary to the rescue
Cloudinary handles this. They’ve got excellent image optimisation and transformation features, and they extend those same capabilities to video too. For this example, we’ll assume our videos are already uploaded to Cloudinary (who offer a very generous free account, by the way).
Imagine we’ve recorded the following video:
It looks stunning. But there’s a problem that shows up when we shrink it to 300x300 pixels (notice we’re using Cloudinary to resize the video):
Still a great shot, but the first few seconds are quite dull. I want to see the ship. Turns out you can fix this by adding g_auto to the URL, which automatically crops each frame around the most interesting content:
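Under the hood, these transformations are just comma-separated parameters in the delivery URL. Here’s a rough sketch of how such a URL is put together — note that the cloud name ('demo') and public ID ('ship_video') are placeholders, not the actual IDs behind the ship video above:

```javascript
// Sketch of a Cloudinary video delivery URL. Transformation parameters
// are joined with commas and placed in the path segment before the
// public ID. 'demo' and 'ship_video' are placeholder values.
function videoUrl(cloudName, publicId, transformations) {
  const t = transformations.join(',');
  return `https://res.cloudinary.com/${cloudName}/video/upload/${t}/${publicId}.mp4`;
}

// A 300x300 crop that fills the frame, with automatic gravity:
const url = videoUrl('demo', 'ship_video', ['w_300', 'h_300', 'c_fill', 'g_auto']);
console.log(url);
// https://res.cloudinary.com/demo/video/upload/w_300,h_300,c_fill,g_auto/ship_video.mp4
```

Swap the transformation list for any combination you need; the same pattern covers resizing, cropping and gravity in one URL.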
Videos with fast(er) movement
What about something that moves more dynamically, like a dog? Can Cloudinary track those frames? You already know the answer. :)
Here’s our original video:
Let’s crop it to 400x800, pushing it towards portrait mode:
Now let’s add the g_auto bit:
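In URL form, this boils down to adding w_400,h_800,c_fill,g_auto to the transformation segment. Assuming the dog video lives under the demo cloud with the public ID dog_orig_qflwce (the IDs used in the React component later on), the URL would look roughly like this:

```
https://res.cloudinary.com/demo/video/upload/w_400,h_800,c_fill,g_auto/dog_orig_qflwce.mp4
```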
There we go. Portrait mode, most interesting frames captured. No context lost.
Applying the concept to React
How does this connect to React? Honestly, you could apply this to any frontend framework (or plain JavaScript). I chose React for this article. Cloudinary’s React SDK exposes components like Image, Video, and Transformation. Combined with a small trick, we can build great experiences for users.
It starts with the screen orientation property, wrapped in a useEffect hook:
import { useState, useEffect } from 'react';

export default function useScreenOrientation() {
  const [orientation, setOrientation] = useState(
    window.screen.orientation.type
  );

  useEffect(() => {
    const handleOrientationChange = () =>
      setOrientation(window.screen.orientation.type);

    window.addEventListener('orientationchange', handleOrientationChange);
    return () =>
      window.removeEventListener('orientationchange', handleOrientationChange);
  }, []);

  return orientation;
}
This hook stores the value of window.screen.orientation.type in state, updates it whenever the device orientation changes, and returns the current value.
The orientation value comes from an experimental Web API, Screen.orientation. It can return four values: landscape-primary (landscape), landscape-secondary (landscape, upside down), portrait-primary (portrait) and portrait-secondary (portrait, upside down).
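Since there are two portrait variants, it can be handy to collapse the four possible values into a single flag before branching on them. A minimal sketch — isPortrait is just an illustrative helper, not part of any SDK:

```javascript
// Collapse the four Screen Orientation API values into a single
// portrait/landscape flag. Both 'portrait-primary' and
// 'portrait-secondary' count as portrait.
function isPortrait(orientationType) {
  return orientationType.startsWith('portrait');
}

console.log(isPortrait('portrait-primary'));   // true
console.log(isPortrait('portrait-secondary')); // true
console.log(isPortrait('landscape-primary'));  // false
```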
With this helper in hand, we can display videos in the right format. Here’s a component that does exactly that:
import React from 'react';
import { Video } from 'cloudinary-react';
import useScreenOrientation from './orientationChange';

function VideoOrientationDemo() {
  const orientation = useScreenOrientation();

  return (
    <>
      {orientation === 'portrait-primary' ? (
        <Video
          controls
          cloudName="demo"
          publicId="dog_orig_qflwce"
          height="400"
          width="244"
          crop="fill"
          gravity="auto"
        />
      ) : (
        <Video
          controls
          cloudName="demo"
          publicId="dog_orig_qflwce"
          width="600"
        />
      )}
    </>
  );
}

export default VideoOrientationDemo;
Take a look at the final result in the video below. If you’re interested, you can also grab the source code from this GitHub repository: https://github.com/tpiros/react-cloudinary-orientation-demo.
Have fun!