Since the Internet has turned visual, end-users increasingly crave visually appealing content. And web developers don't fail to deliver - e-commerce stores showcase products with images (including 360-degree ones) and videos. Video in particular is on the rise, since it gives users an even more engaging experience.
In this article, we will talk about videos and discuss landscape and portrait modes. When it comes to user-generated content, let's face it, we have all been guilty of this: we grab our phone to record a video quickly, and only after recording do we realise that we should have used the other orientation. Videos recorded in landscape mode are ideal for the likes of YouTube or for playing directly on a TV, and they generally capture a wide area. Videos shot in portrait mode are ideal for online video calls or for framing someone's face, but they don't look pretty on YouTube. And yet, grabbing the phone and recording in portrait mode feels more intuitive to most people.
What can we do when we have captured the video footage of our life, but we used portrait mode? What if we are asked to display a video recorded in landscape mode inside a portrait container? Or, as an even more straightforward "problem", what if we need to fit the video into a container that's a lot smaller than the video's dimensions?
Luckily for us, we can use a service such as Cloudinary, which has excellent image optimisation and transformation features and extends these capabilities to videos as well. For the purposes of this example, we are going to assume that our videos have already been uploaded to Cloudinary (which, by the way, offers a very generous free account).
Let's imagine that we have recorded the following video:
It looks stunning, doesn't it? But there's a small problem, and it becomes apparent when we try to reduce the size of the video to 300x300 pixels (notice how we are using Cloudinary to reduce the dimensions of the video as well):
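To give a rough idea of what that looks like, a 300x300 fill crop is expressed directly in the delivery URL. The cloud name and public ID below are placeholders, not the actual values behind the ship video, and the fill crop mode is an assumption on my part:

https://res.cloudinary.com/<your-cloud-name>/video/upload/c_fill,h_300,w_300/<your-video-id>.mp4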
It's still a great video shot, but the first few seconds are, to be fair, quite dull. I want to see the ship! Believe it or not, this is possible just by adding g_auto as part of the URL, which will automatically crop the video around its most interesting area:
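Sticking with the placeholder URL from above, the only change is the extra g_auto parameter in the transformation string:

https://res.cloudinary.com/<your-cloud-name>/video/upload/c_fill,g_auto,h_300,w_300/<your-video-id>.mp4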
But what if we have something that moves more dynamically, such as a dog? Can Cloudinary find those frames? I think you already know the answer. :)
Let's start by taking a look at our original video below.
Let's apply a crop to it - 400 x 800 should be a good enough indication that we are trying to crop to portrait mode.
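Judging by the cloud name and public ID that appear in the React component later in this article, the cropped URL would look roughly like this (the exact format and extension are assumptions on my part):

https://res.cloudinary.com/demo/video/upload/c_fill,h_800,w_400/dog_orig_qflwce.mp4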
And now let's add the magical g_auto bit:
Et voilà, we now have our video in portrait mode, capturing the most exciting frames. No more context loss.
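For reference, the portrait crop with automatic gravity corresponds to a URL along these lines, again inferred from the demo cloud name and public ID rather than taken from the original article:

https://res.cloudinary.com/demo/video/upload/c_fill,g_auto,h_800,w_400/dog_orig_qflwce.mp4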
At this point, you're probably wondering how any of this is relevant to React. The truth is that the solution below could be applied to any frontend framework (and even vanilla JavaScript), but for this article I chose to use React. Cloudinary has a React SDK that exposes a handful of components: Image, Video and Transformation. Using these, plus a little "trick", we can enable amazing experiences for end-users.
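As a quick taste of the SDK before we bring orientation into the picture, here's a sketch of the portrait dog crop expressed with a nested Transformation component. The component name is mine, and it assumes the same "demo" cloud and public ID used later in this article:

import React from 'react';
import { Video, Transformation } from 'cloudinary-react';

// Sketch: renders the demo dog video with a 400x800 fill crop and automatic
// gravity, expressed as a nested Transformation instead of props on Video.
function PortraitDogVideo() {
  return (
    <Video controls cloudName="demo" publicId="dog_orig_qflwce">
      <Transformation width="400" height="800" crop="fill" gravity="auto" />
    </Video>
  );
}

export default PortraitDogVideo;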
It all starts with reading the screen orientation property and wrapping it up in a custom hook built on useState and useEffect:
import { useState, useEffect } from 'react';

export default function useScreenOrientation() {
  // Initialise with the current orientation type, e.g. 'portrait-primary'.
  const [orientation, setOrientation] = useState(
    window.screen.orientation.type
  );

  useEffect(() => {
    const handleOrientationChange = () =>
      setOrientation(window.screen.orientation.type);
    window.addEventListener('orientationchange', handleOrientationChange);
    // Clean up the listener when the component using the hook unmounts.
    return () =>
      window.removeEventListener('orientationchange', handleOrientationChange);
  }, []);

  return orientation;
}
The above code captures the orientation whenever window.screen.orientation.type changes, and the hook returns the current value of the orientation.
The orientation value comes from the Screen Orientation API, an experimental Web API exposed as screen.orientation. The values it can return are: landscape-primary and landscape-secondary (landscape, the latter upside down), and portrait-primary and portrait-secondary (portrait, the latter upside down).
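If you'd rather treat both portrait variants the same way, a tiny helper along these lines can normalise the type into a boolean (this helper is my own sketch, not part of the original demo):

// Treat both portrait-primary and portrait-secondary as "portrait".
const isPortrait = (orientationType) => orientationType.startsWith('portrait');

isPortrait('portrait-secondary'); // true
isPortrait('landscape-primary'); // false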
Given this helper, we can now utilise it to display videos in the right form. Take this component as an example:
import React from 'react';
import { Video } from 'cloudinary-react';
import useScreenOrientation from './orientationChange';

function VideoOrientationDemo() {
  const orientation = useScreenOrientation();

  return (
    <>
      {orientation === 'portrait-primary' ? (
        // Portrait: deliver a 244x400 fill crop with automatic gravity.
        <Video
          controls
          cloudName="demo"
          publicId="dog_orig_qflwce"
          height="400"
          width="244"
          crop="fill"
          gravity="auto"
        ></Video>
      ) : (
        // Landscape: deliver the video constrained to a width of 600 pixels.
        <Video
          controls
          cloudName="demo"
          publicId="dog_orig_qflwce"
          width="600"
        ></Video>
      )}
    </>
  );
}

export default VideoOrientationDemo;
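For completeness, mounting the component could look roughly like this. The file names and the React 18 createRoot entry point are my own assumptions; the linked repository may bootstrap the app differently, for example via ReactDOM.render:

import React from 'react';
import { createRoot } from 'react-dom/client';
import VideoOrientationDemo from './VideoOrientationDemo';

// Hypothetical entry point: render the demo into a #root element.
createRoot(document.getElementById('root')).render(<VideoOrientationDemo />);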
Take a look at the final result in the video below. If you're interested, you can also grab the source code from this GitHub repository: https://github.com/tpiros/react-cloudinary-orientation-demo.
Have fun!