To build a camera component, let's first understand the Browser APIs needed:

- The MediaDevices API
- A MediaStream with a video element
- The canvas element

Let's build a custom camera element so you don't have to worry about hooking this code up ever again.
This tutorial is not framework specific. Leaf node components should be reusable. Custom Elements are a new(ish) browser standard that allows you to build reusable elements that are portable across most JavaScript frameworks. If you're not familiar with Custom Elements, it's okay. They're not too hard to use up front. It can get complex in advanced situations, but we'll steer clear of those paths. Here's a simple example:
class HelloElement extends HTMLElement {
  constructor() {
    // defining a constructor is not required,
    // but if you do, make sure to call super()
    super();
  }

  // this is called when the element is connected to the DOM
  connectedCallback() {
    // attach a shadow root so nobody can mess with your styles
    const shadow = this.attachShadow({ mode: "open" });
    shadow.textContent = "Hello world!";
  }
}

// define the tag name; it must contain a dash
customElements.define("hello-element", HelloElement);
<hello-element></hello-element>
That's the general idea. Like I said, it gets more complicated, but in the case of the camera component we can keep things simple.
Let's start with the simple camera component.
class SimpleCamera extends HTMLElement {
  connectedCallback() {
    const shadow = this.attachShadow({ mode: "open" });
    this.videoElement = document.createElement("video");
    this.canvasElement = document.createElement("canvas");
    this.videoElement.setAttribute("playsinline", true);
    this.canvasElement.style.display = "none";
    shadow.appendChild(this.videoElement);
    shadow.appendChild(this.canvasElement);
  }
}
customElements.define("simple-camera", SimpleCamera);
This component simply adds two elements: a video element and a hidden canvas element. The playsinline attribute helps prevent janky video. These elements set the stage for streaming video and taking photos.
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Simple Camera Component</title>
    <script src="camera.js"></script>
  </head>
  <body>
    <simple-camera></simple-camera>
  </body>
</html>
This HTML document imports the component from the camera.js file and creates an element for the camera. Let's start streaming some video.

Use the navigator.mediaDevices.getUserMedia() method to gain access to a user's camera, with their permission.
navigator.mediaDevices.getUserMedia(constraints).then(mediaStream => {});
Notice that getUserMedia() returns a Promise. The Promise resolves with a MediaStream if successful. This stream is used on a video element. If the Promise rejects, you know the user has not granted permission. However! The Promise may never resolve or reject. The user can decide to never take action on the permission popup. Isn't that fun?
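If you need to guard against a prompt that never settles, one option is to race getUserMedia() against a timeout. This is a sketch, assuming a 30 second cutoff is acceptable for your app:

// A sketch: give the user 30 seconds to answer the permission prompt.
// If they never act, the race rejects and your app can move on.
function getUserMediaWithTimeout(constraints, ms = 30000) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error("Permission prompt timed out")), ms)
  );
  return Promise.race([navigator.mediaDevices.getUserMedia(constraints), timeout]);
}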
The MediaDevices API is strongly supported. It's available in all modern browsers. However, there's no support in Internet Explorer, so you'll need a feature check.
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia !== undefined) {
  navigator.mediaDevices.getUserMedia(constraints).then(mediaStream => {});
}
However, some browser versions have partial support for MediaDevices and some have vendor specific implementations. The MDN article has a great section on setting up the polyfills. Fortunately these polyfills should be applied outside of our element, so we won't need to account for them in our element.
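For reference, the MDN-style shim looks roughly like this. It's a sketch based on MDN's example, meant to run once at app startup rather than inside the element:

// Create navigator.mediaDevices if it's missing entirely
if (navigator.mediaDevices === undefined) {
  navigator.mediaDevices = {};
}
// Wrap the legacy, callback-based vendor implementations in a Promise
if (navigator.mediaDevices.getUserMedia === undefined) {
  navigator.mediaDevices.getUserMedia = function (constraints) {
    const getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
    if (!getUserMedia) {
      return Promise.reject(new Error("getUserMedia is not implemented in this browser"));
    }
    return new Promise((resolve, reject) => {
      getUserMedia.call(navigator, constraints, resolve, reject);
    });
  };
}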
The getUserMedia() method takes in a set of constraints. These constraints help configure the stream after the user grants permission. They have the type MediaStreamConstraints. You can specify two main properties: audio and video.
navigator.mediaDevices
  .getUserMedia({ audio: false, video: { facingMode: "user" } })
  .then(mediaStream => {});
The audio property is a simple boolean. You request the user's audio or you don't. The video property is much more complex. The video constraints, also known as MediaTrackConstraints, specify everything you could possibly need for a stream: echoCancellation, latency, sampleRate, sampleSize, volume, noiseSuppression, frameRate, aspectRatio, facingMode, and of course height and width.
That's a lot of constraints. However, unless you're building one heck of a camera app, you'll only need a few. Namely, height, width, and facingMode.
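For example, a constraints object asking for (but not strictly requiring) a 720p stream from the front camera could look like this sketch:

navigator.mediaDevices.getUserMedia({
  audio: false,
  video: {
    facingMode: "user",     // front camera; "environment" prefers the rear one
    width: { ideal: 1280 }, // "ideal" means preferred, not required
    height: { ideal: 720 }
  }
}).then(mediaStream => { /* ... */ });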
Now that the MediaStream is configured, you can assign it to a video element.
open(constraints) {
  return navigator.mediaDevices.getUserMedia(constraints)
    .then((mediaStream) => {
      // Assign the MediaStream!
      this.videoElement.srcObject = mediaStream;
      // Play the stream when loaded!
      this.videoElement.onloadedmetadata = (e) => {
        this.videoElement.play();
      };
    });
}
The video element has a srcObject property. It streams from the device's camera when assigned a MediaStream. The snippet above added an open method to the element. Custom Elements can have callable methods. If a user calls this open method, it will start the video stream.
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Simple Camera Component</title>
    <script src="camera.js"></script>
  </head>
  <body>
    <simple-camera></simple-camera>
    <script>
      (async function() {
        const camera = document.querySelector("simple-camera");
        await camera.open({ video: { facingMode: "user" } });
      })();
    </script>
  </body>
</html>
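One thing the component doesn't do yet is stop the camera. If you need that, a complementary close method is a natural counterpart to open (a sketch, not part of the tutorial's component):

close() {
  const mediaStream = this.videoElement.srcObject;
  if (mediaStream) {
    // Stopping every track releases the camera device
    mediaStream.getTracks().forEach((track) => track.stop());
    this.videoElement.srcObject = null;
  }
}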
Now that we can stream video, let's take photos.
The canvas element has the ability to draw a frame from a video element. Using this functionality you can draw on the invisible canvas and then export the image as a blob.
_drawImage() {
  const imageWidth = this.videoElement.videoWidth;
  const imageHeight = this.videoElement.videoHeight;
  const context = this.canvasElement.getContext('2d');
  this.canvasElement.width = imageWidth;
  this.canvasElement.height = imageHeight;
  context.drawImage(this.videoElement, 0, 0, imageWidth, imageHeight);
  return { imageHeight, imageWidth };
}
This private _drawImage() method sets the height and width of the invisible canvas to the video's dimensions. Then it uses the drawImage() method on the context, supplying the video element, x position, y position, width, and height. This creates a drawing on the invisible canvas and sets us up to create a blob.
takeBlobPhoto() {
  const { imageHeight, imageWidth } = this._drawImage();
  return new Promise((resolve, reject) => {
    this.canvasElement.toBlob((blob) => {
      resolve({ blob, imageHeight, imageWidth });
    });
  });
}
The canvas element has a toBlob() method. Since it is asynchronous, you can wrap it in a Promise so it's easier to consume.
Now you can start to control this camera:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Simple Camera Component</title>
    <script src="camera.js"></script>
  </head>
  <body>
    <simple-camera></simple-camera>
    <button id="btnPhoto">Take Photo</button>
    <script>
      (async function() {
        const camera = document.querySelector("simple-camera");
        const btnPhoto = document.querySelector("#btnPhoto");
        await camera.open({ video: { facingMode: "user" } });
        btnPhoto.addEventListener("click", async event => {
          const photo = await camera.takeBlobPhoto();
        });
      })();
    </script>
  </body>
</html>
Blobs are great when you need to upload a file.
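For example, an upload handler might look something like this sketch. It reuses the camera and btnPhoto references from the example above, and the /api/photos endpoint is hypothetical:

btnPhoto.addEventListener("click", async () => {
  const { blob } = await camera.takeBlobPhoto();
  const formData = new FormData();
  formData.append("photo", blob, "photo.png"); // field name and filename are up to you
  await fetch("/api/photos", { method: "POST", body: formData });
});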
But sometimes it's nice to just stick a base64 encoded string into an image tag. The canvas element has a solution just for this: the toDataURL() method. It takes the current contents of the canvas and spits out a base64 encoded image.
takeBase64Photo({ type, quality } = { type: 'png', quality: 1 }) {
  const { imageHeight, imageWidth } = this._drawImage();
  const base64 = this.canvasElement.toDataURL('image/' + type, quality);
  return { base64, imageHeight, imageWidth };
}
The takeBase64Photo() method calls the toDataURL() method and returns its base64 value. Notice that you can specify the image type and the quality of the image.
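The returned string can go straight into an image tag. A quick sketch, assuming a hypothetical <img id="preview"> element on the page:

const { base64 } = camera.takeBase64Photo({ type: "jpeg", quality: 0.8 });
document.querySelector("#preview").src = base64; // data URLs work as an img src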
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Simple Camera Component</title>
    <script src="camera.js"></script>
  </head>
  <body>
    <simple-camera></simple-camera>
    <button id="btnBlobPhoto">Take Blob</button>
    <button id="btnBase64Photo">Take Base64</button>
    <script>
      (async function() {
        const camera = document.querySelector("simple-camera");
        const btnBlobPhoto = document.querySelector("#btnBlobPhoto");
        const btnBase64Photo = document.querySelector("#btnBase64Photo");
        await camera.open({ video: { facingMode: "user" } });
        btnBlobPhoto.addEventListener("click", async event => {
          const photo = await camera.takeBlobPhoto();
        });
        btnBase64Photo.addEventListener("click", async event => {
          const photo = camera.takeBase64Photo({ type: "jpeg", quality: 0.8 });
        });
      })();
    </script>
  </body>
</html>
Modern JavaScript frameworks have the ability to use custom elements. This makes custom elements an attractive choice for building common components. You can easily port this component if your company manages multiple apps that use multiple frameworks. The Custom Elements Everywhere project shows how compatible each framework is with custom elements.
See each framework's docs for registering custom elements:
NOTE! This section is curated by Maxim Salnikov! He's one of the most knowledgeable and passionate PWA developers out there. Give him a follow on Twitter.