Blur Plates and Faces in Video Files
The Blur software processes the frames of a video file to blur license plates and faces.
Download Scripts
The source code for the utilities is available in the deep-license-plate-recognition repository. Make sure you are in your working directory, then download all the files:
git clone --depth 1 https://github.com/parkpow/deep-license-plate-recognition.git
cd deep-license-plate-recognition/video-editor
cp env.txt.sample env.txt
Create Environment Variables File
Once you have downloaded the video-editor files, set up the environment variables by creating an env.txt file in the same directory as the docker-compose.yml file. You can find a sample env.txt.sample file in the same directory.
Variable | Description | Default |
---|---|---|
TOKEN | Your API token for Snapshot. Get it from here. | None |
LICENSE_KEY | Your Snapshot SDK license key. You can find it here. | None |
VIDEO_BLUR | Controls the blur intensity applied. The weakest blur is 10. The strongest is 1. | 1 |
SAMPLE | Controls how many frames are skipped when performing inference. To decrease processing time, Blur performs computationally intensive inference on one frame out of SAMPLE frames seen and then approximates blurring in between. Raise this parameter to speed up processing, lower it to improve quality. | 5 |
LOGGING | The logging level for the API. Available logging levels are DEBUG, INFO, WARNING, ERROR, and CRITICAL. | DEBUG |
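For example, with SAMPLE=5 and a 30 fps video, Snapshot inference runs on roughly six frames per second of footage, and the blur positions for the skipped frames in between are approximated from those results.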
Your env.txt file should look like this:
LOGGING=DEBUG
TOKEN=YOUR_API_TOKEN
LICENSE_KEY=YOUR_LICENSE_KEY
VIDEO_BLUR=1
SAMPLE=3
You can add more configuration parameters at the end of your environment file using the format PARAMETER=VALUE, for example REGIONS=in,us.
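With the REGIONS parameter from the example above appended, the file would look like this:
LOGGING=DEBUG
TOKEN=YOUR_API_TOKEN
LICENSE_KEY=YOUR_LICENSE_KEY
VIDEO_BLUR=1
SAMPLE=3
REGIONS=in,us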
Previous versions of the Blur video utility used the BLUR environment variable to control the blur intensity. This variable is now deprecated. Use VIDEO_BLUR instead.
Build and Run the Docker Image
Once you have set up your environment variables, you can build and run the video-editor image with Docker Compose.
Compose:
docker volume create license # if you do not use Snapshot on this device already
docker-compose up --build

Compose V2:
docker volume create license # if you do not use Snapshot on this device already
docker compose up --build
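If you prefer to run the utility in the background, the standard Docker Compose detached mode and log-following commands work as well (shown here with the Compose V2 syntax; use docker-compose for the older client):
docker compose up --build -d # run the container in the background
docker compose logs -f # follow the container logs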
Upon successful build, the API will be available at http://localhost:8081/process-video.
Blur Plates and Faces
To blur plates and faces in a video, pass the video file path and action=blur to the API.
curl -X POST -F "upload=@/user-data/test.mp4" -F "action=blur" http://localhost:8081/process-video
A new video named blur-{camera_id}.mp4 will be created in the output folder.
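To blur several videos in one go, you can loop over the same endpoint from a shell. This is just a sketch, assuming your input videos live in /user-data as in the example above:
for f in /user-data/*.mp4; do
  curl -X POST -F "upload=@$f" -F "action=blur" http://localhost:8081/process-video
done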
API Parameters
Parameter | Description | Default |
---|---|---|
upload | The video file to process. | None |
action | The action to perform on the video file. Options are blur, frames, and visualization. See the instructions below for the frames and visualization actions. | None |
Both parameters are required for the API to work. If either parameter is missing, you will get the following response:
{
"error": "Invalid request."
}
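For example, a request that omits the action parameter will be rejected with that response:
curl -X POST -F "upload=@/user-data/test.mp4" http://localhost:8081/process-video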
Additional Features
Extract Frames as Images
To extract frames from a video, pass the video file path and action=frames to the API. For example:
curl -X POST -F "upload=@/user-data/test.mp4" -F "action=frames" http://localhost:8081/process-video
The extracted frames will be saved in a folder named {filename}_frames inside the output folder.
Draw Vehicle and Plate Bounding Boxes
This works by uploading every frame in the video to Snapshot.
To visualize plates and vehicles in a video, pass the video file path and action=visualization to the API. For example:
curl -X POST -F "upload=@/user-data/test.mp4" -F "action=visualization" http://localhost:8081/process-video
A new video named {filename}_visualization.avi will be created in the output folder.