
After you install Stream, a config.ini is created in the directory you have specified. Please note that anytime you change the contents of config.ini, you will need to restart the Docker container.

timezone = America/Los_Angeles

[cameras]

  [[camera-1]]
    active = yes
    url = ...
    name = Camera One
    regions = fr, it
    mmc = true
    max_prediction_delay = 3
    memory_decay = 100
    csv_file = camera-1.csv
    image_format = $(camera)/$(camera)_screenshots/%y-%m-%d/%H-%M-%S.%f.jpg
    sample = 2

  [[camera-2]]
    active = yes
    url = ...
    name = Camera Two
    regions = au-nsw
    mmc = true
    max_prediction_delay = 4
    csv_file = camera-2.csv
    sample = 2


All parameters are optional except url.


  1. To run Stream on an RTSP camera feed, just set the URL to point to the RTSP stream. For example, url = rtsp://
  2. For additional help with RTSP, please see our RTSP documentation.
  3. You can also process video files.
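For illustration, a camera's url can point either to an RTSP feed or to a video file; the address and path below are placeholders, not values from this guide:

```ini
[cameras]
  [[camera-1]]
    # RTSP feed from an IP camera (placeholder address)
    url = rtsp://192.168.1.10:554/stream1
  [[camera-2]]
    # Video file accessible inside the Docker container (placeholder path)
    url = /user-data/video.mp4
```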


active#

  1. Stream processes every camera defined in the config file whose active parameter is set to yes. See the example above.
  2. Stream automatically reconnects to the camera stream after a disconnection. There is a delay between attempts.


name#

Indicate the camera name. In the example above, the camera is named Camera One.


csv_file#

  1. Indicate the filename of the CSV output you’d like to save. In the example above, the CSV file is named camera-1.csv.
  2. The name can be dynamic. Refer to the image_format field for details. For example: csv_file = $(camera)%y-%m-%d.csv


jsonlines_file#

  1. Save the prediction results to a JSON file. For example:
    • jsonlines_file = my_camera_results.jsonl
    • jsonlines_file = $(camera)%y-%m-%d.jsonl
  2. The output uses the JSON Lines format. Refer to the image_format field for details on the dynamic filename placeholders.
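Because the output is plain JSON Lines (one JSON object per line), it is easy to post-process. A minimal sketch in Python; the file name is hypothetical:

```python
import json

def read_jsonlines(path):
    """Parse a JSON Lines file: one JSON object per non-empty line."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records

# Example: iterate over saved predictions
# for record in read_jsonlines("camera-1.jsonl"):
#     print(record)
```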


regions#

Include one or more regions from the list of supported region codes.


mmc#

  1. Get Vehicle Make, Model, and Color (MMC) identification from a global dataset of 9,000+ makes/models.
  2. Vehicle Orientation refers to the Front or Rear of a vehicle.
  3. Direction of Travel is an angle in degrees between 0 and 360 on a unit circle. Examples:
    1. 180° = Car going left.
    2. 0° = Car going right.
    3. 90° = Car going upward.
    4. 270° = Car going downward.
  4. The output is included in both the CSV file and the Webhooks payload.
  5. Please note that the Stream Free Trial does not include Vehicle MMC. To get Vehicle MMC on Stream, subscribe.
  6. If you have a subscription for Vehicle MMC, add this line: mmc = true


max_prediction_delay#

  1. Set the maximum time (in seconds) that Stream waits before emitting a vehicle plate prediction.
  2. The default value is 6 seconds. So by default, Stream waits at most 6 seconds before it sends the plate output via Webhooks or saves it to the CSV file.
  3. For Parking Access use cases, where the camera can see the vehicle approaching the parking gate, you can decrease this parameter to, say, 3 seconds to speed up the time it takes to open the gate.
  4. In situations where the camera cannot see the license plate very well (say, due to an obstruction or a lower-resolution camera), increasing max_prediction_delay gives Stream more time to find the best frame for the best ALPR results.


memory_decay#

  1. Set the time before Stream will report the same vehicle again on a particular camera. This has no effect if a vehicle is seen by multiple cameras.
  2. If this parameter is omitted, Stream uses the default value of 300 seconds (5 minutes).
  3. This can be useful in parking access situations where the same vehicle may turn around and approach the same Stream camera.
  4. By flushing the detection memory, Stream can recognize and decode that same vehicle again.
  5. The minimum value is 0.1 seconds. However, avoid setting it that low: if the camera still sees the vehicle shortly afterward (say, 0.2 seconds later), Stream will count that same vehicle again in the ALPR results.
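The effect of memory_decay can be pictured as a simple per-camera cache. The sketch below illustrates the behavior described above; it is not Stream's actual implementation:

```python
import time

class DetectionMemory:
    """Suppress repeat reports of the same plate on the same camera
    until memory_decay seconds have elapsed (illustrative only)."""

    def __init__(self, memory_decay=300.0):
        self.memory_decay = memory_decay
        self._last_seen = {}  # (camera, plate) -> last reported timestamp

    def should_report(self, camera, plate, now=None):
        now = time.time() if now is None else now
        key = (camera, plate)
        last = self._last_seen.get(key)
        if last is not None and now - last < self.memory_decay:
            return False  # same vehicle, memory not yet flushed
        self._last_seen[key] = now
        return True
```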


timezone#

  1. You can set the timezone for the timestamp in the CSV and Webhooks output.
  2. If you omit this field, the output timestamps default to UTC.
  3. Use timezone names from the tz database (e.g., America/Los_Angeles).
  4. Plate Recognizer automatically adjusts for time changes (e.g., daylight saving and standard time) in each timezone. Examples:
    a. For Silicon Valley, use timezone = America/Los_Angeles
    b. For Budapest, use timezone = Europe/Budapest. You can also use timezone = Europe/Berlin.

The timestamp field is the time the vehicle was captured by the camera. Stream uses the operating system clock and applies the timezone from config.ini.


image_format#

  1. Save images in a particular folder with a specific filename. In the example above, images are saved in one folder per camera, and each image is named camera_timestamp.
  2. Customize with the following examples:
    • $(camera) is replaced by camera-1.
    • If the current date is 2020-06-03 20:54:35, %y-%m-%d/%H-%M-%S.jpg is replaced by 20-06-03/20-54-35.jpg. Tokens starting with a percent sign are expanded according to the standard strftime rules.
    • To put images from all cameras into the same folder: image_format = screenshots/$(camera)_%y-%m-%d_%H-%M-%S.%f.jpg
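The placeholder expansion can be approximated with Python's strftime. The helper below is illustrative; Stream's own substitution logic may differ in its details:

```python
from datetime import datetime

def expand_image_format(fmt, camera, ts):
    """Replace $(camera), then expand %-tokens with strftime."""
    return ts.strftime(fmt.replace("$(camera)", camera))

ts = datetime(2020, 6, 3, 20, 54, 35)
print(expand_image_format("%y-%m-%d/%H-%M-%S.jpg", "camera-1", ts))
# 20-06-03/20-54-35.jpg
print(expand_image_format("screenshots/$(camera)_%y-%m-%d_%H-%M-%S.jpg", "camera-1", ts))
# screenshots/camera-1_20-06-03_20-54-35.jpg
```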


sample#

  1. Set Stream to skip frames of a particular camera or video file.
  2. By default, sample = 2, so Stream processes every other frame.
  3. Set sample = 3 if you want to process every third frame.
  4. This parameter lets you skip frames in situations where you have limited hardware and/or do not need to process all the frames of the video or camera feed.
  5. See the section on Optimizing Stream for more info.


region_config#

When enabled, Stream only accepts results that exactly match the plate templates of the specified region. For example, if a region’s license plates are 3 letters followed by 3 numbers, the value abc1234 will be discarded. To turn this on, add region_config = strict.
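For a region whose plates are 3 letters followed by 3 numbers, strict matching behaves roughly like the sketch below. The template regex is hypothetical and for illustration only:

```python
import re

# Hypothetical plate template: exactly 3 letters followed by 3 digits
TEMPLATE = re.compile(r"[A-Za-z]{3}[0-9]{3}")

def accept_plate(plate):
    """Keep a result only if it exactly matches the region template,
    mimicking region_config = strict (illustrative)."""
    return TEMPLATE.fullmatch(plate) is not None

print(accept_plate("abc123"))   # True
print(accept_plate("abc1234"))  # False: one digit too many, discarded
```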


detection_rule#

Add detection_rule = strict, and license plates detected outside a vehicle will be discarded.


detection_mode#

Detection of vehicles without plates can be enabled with detection_mode = vehicle. This output uses a different format: the license plate object is now optional, and the optional props element holds the object properties (make/model or license plate text).


merge_buffer#

Improve accuracy when Stream is used with a moving camera (for example, a camera mounted on a car). Set merge_buffer = 1 to turn this on. Note that this setting increases compute usage.

Webhook Parameters#


  1. Stream can automatically forward the results of the license plate detection and the associated vehicle image to a URL.
  2. You can use Webhooks as well as save images and CSV files in a folder.
  3. By default, no Webhooks are sent.
  4. The recognition data and vehicle image are encoded as multipart/form-data.
  5. Please note that the JSON response contains both the UTC timestamp and the local timestamp that reflects the timezone set in Stream.
  6. To ensure that your webhook_target endpoint is correct, please test it first.
  7. To read the webhook message, you can use our tiny server in Python on GitHub.

webhook_targets#

You can send the results to multiple targets by simply listing them all, separated by commas. Each target shares the same webhook_image and webhook_image_type property.

webhook_targets = ..., ...
  • The webhook data always uses the timestamp set at capture time.
  • If the target cannot be reached (HTTP errors or timeout), the request is retried 3 times with a 10-second interval.
  • If it is still failing, the data is saved to disk.
    • When a webhook fails, all new webhook data is saved directly to disk for a period of 5 minutes. After that, webhooks are processed normally.
    • If a new webhook is received and processed successfully, we will also process any data saved to disk.
    • When webhooks are saved to disk, the oldest data is removed when free disk space is low.
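The retry-then-spool behavior described above can be sketched as follows. This is an illustration only: `send` and `save_to_disk` are hypothetical callables, and treating "retried 3 times" as 3 total attempts is an assumption:

```python
import time

def deliver_webhook(send, payload, retries=3, interval=10.0,
                    save_to_disk=None, sleep=time.sleep):
    """Try to deliver a webhook payload; on repeated failure, spool to disk.

    `send` should raise an exception on HTTP error or timeout.
    """
    for attempt in range(retries):
        try:
            send(payload)
            return True
        except Exception:
            if attempt < retries - 1:
                sleep(interval)  # wait before the next attempt
    if save_to_disk is not None:
        save_to_disk(payload)  # fall back to disk when all attempts fail
    return False
```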


webhook_image#

  1. This field can be set to either:
    webhook_image = yes
    webhook_image = no
  2. When webhook_image = no is set, Stream sends only the decoded plate information; the image data is not sent to the target URL. This lets your system receive the plate information faster, which is especially useful for Parking Access situations where you need to open an access gate based on the decoded license plate.
  3. When this field is set to no, the license plate bounding box is calculated based on the original image. Otherwise, it is calculated based on the image sent in the webhook.


webhook_image_type#

  1. This field can be set to either:
    webhook_image_type = original
    webhook_image_type = vehicle
  2. When set to original, the webhook sends the full-size original image from the camera.
  3. When set to vehicle, the webhook sends only the image contained within the bounding box of each detected vehicle.

Forwarding ALPR to ParkPow (example only)#

  1. To forward ALPR info from Stream over to ParkPow (our ALPR Dashboard and Parking Management solution), refer to the example below:
    webhook_targets = ...
    webhook_header = Authorization: Token 5e858******3c
    webhook_image = yes
    webhook_image_type = vehicle
  2. Please note the addition of the webhook_header parameter when sending info to ParkPow via webhooks.

Other Parameters#

Detection Zone#

  1. Detection Zones let you exclude overlay text, street signs, or other objects from processing. For more info, read our blog post on Detection Zones.
  2. To start, go to the Detection Zone page in your Plate Recognizer Account.
  3. Make sure that the camera_id set in the Detection Zone is the same as the camera_id in your Stream config file.
  4. After you upload an image from that specific Stream camera, you can use the marker to mask the areas you want Plate Recognizer to ignore.
  5. Make sure to restart the Docker container for the changes to take effect. When you open your Stream folder, you will now see one file per zone.

To remove a detection zone, click Remove on Detection Zone. Then remove the file zone_camera-id.png from the Stream folder.