Deepstack
Posted By admin On 07/04/22

This is an updated version of our previous post on AI Object Detection for Blue Iris, which can be found here. You should be able to continue using the older method referenced in that post, or try out this new one.
From DeepStack's developer centre: DeepStack provides in-built, state-of-the-art AI APIs across all supported platforms, along with support for custom APIs for custom object detection and recognition.
Disclosure: This post may contain affiliate links, and as an Amazon Associate I earn from qualifying purchases when you click the links, at no additional cost to you. If you click a link and buy a qualifying item, it is a small, easy way to support my blog, and I greatly appreciate it!
Single Camera Sub-Stream Support
If you are looking to set up AI Object Detection for Blue Iris, you've come to the right place. In this article, we are going to go over a new single-camera method for this setup. You may have seen my previous post, which covered how to do this with a dual-camera setup (two cameras within Blue Iris for a single physical camera), but we can now do it with just one!
I was wandering around the ipcamtalk forums the other day and noticed something very interesting in the “What’s New” section for Blue Iris. Just a few months back, Blue Iris released version 5.2.7.0. This new version allows a single camera within Blue Iris to contain both the high-resolution main stream and the lower-resolution sub-stream.
What is a sub-stream?
Most mid- to high-end IP cameras give you two streams: a main stream and a sub-stream. The main stream is your high-resolution stream, usually 1080p, 1440p, or even 4K. The sub-stream is a much lower-resolution stream, usually around 640×480. The sub-stream is used for live-view previews or for watching the live stream in low-bandwidth conditions.
Here is how Blue Iris decides what stream to use:
The main stream is used when:
- Viewing a single camera in the web or app interface
- Performing direct to disc recording (any time you record video)
- Listening to the live audio from the camera
The sub-stream is used when:
- Viewing multiple cameras at once within the web or app interface
- Detecting motion
- Taking alert snapshots
- Doing any operation not listed in the main stream’s usage above
This has many advantages, one of the main ones being CPU utilization: using the sub-stream within a single camera really brings down Blue Iris's CPU load, and some people have seen savings of up to 50% with this setup. It also means our previous two-camera setup can be condensed down to a single camera, so the Blue Iris “canvas” can really be cleaned up. Before, with two Blue Iris cameras per physical camera, we had to hide the sub-streams or just live with two copies of each camera.
AI Object Detection Requirements
The requirements to setup AI object detection for Blue Iris are as follows:
- Blue Iris updated to at least 5.2.7.0.
- A camera that supports a main stream and a sub-stream.
- A Deepstack.cc account to download our AI Server.
- The most recent AITool from VorlonCD.
- A computer capable of running the Deepstack server and AITool (You should be able to run this on the same computer as Blue Iris)
Blue Iris Setup
Folder Structure
The first thing we need to do is go into the Blue Iris settings and set up our folder structure under Clips and Archiving. Once there we need to set up a folder to save our recorded footage. In my example below, this is just called “new” and I have it set to save up to 225 GB worth of video and to delete anything older than 7 days. Set this to your liking or based on how much room you have to store video.
The next thing we need to do is set up a folder that will hold the snapshots we send to our AI server for processing. In my example below, I created a folder called “aiinput” and set it to keep at most 1 GB and 12 hours' worth of snapshots. The thought here is that anything that does get detected will be recorded, so I don't need to hold on to these for long.
Web Server
Next we need to click on the Web Server tab, where there are a couple of settings to note and configure. First, let's write down what port our web server is sitting on. Mine sits on port 83.
Next check off “Use UI3 for non-IE Browsers”.
Next, check what your local LAN address is and write it down. This is important if your AITool and Deepstack installs will not reside on the same computer as Blue Iris. If you have everything set up on one computer/server, we won't need it.
Finally, click advanced and make sure that Authentication is set to Non-LAN only and uncheck “Use secure session keys and login page.”
AI User Setup
We need a user that will trigger the alerts. This user is essentially a service account we can use to trigger alerts from the AITool. Click on the Users tab and then click “Add”.
In the user setup we need to set a username and password. Set this to something you can remember for later. The important part here is that you check Administrator and LAN only. Nothing else should really need to be checked off here. There are some things checked in my example below but they are inconsequential to what we are doing.
Camera Setup
In order to set up your camera correctly, you will need to configure the main stream and sub-stream options under video configuration. You can see in the picture below that my main stream is something like:
rtsp://<IP address>:554/cam/realmonitor?channel=1&subtype=0
The sub-stream URL is very similar. We just change the subtype option from 0 to 1. In the sub-stream box you don’t have to put the rest of the URL, just what is after the port number. For example:
/cam/realmonitor?channel=1&subtype=1
Now, this URL is specific to the camera I am setting up here (an Amcrest Video Doorbell). Every camera's URL structure looks different, and you will have to do some Googling to figure out what you need for both of your URLs.
Once everything is set up like the picture above, we should be ready to configure the rest of the tabs.
Trigger Tab
The next thing we need to do is configure the Trigger tab for the camera. Once you've clicked on the Trigger tab, you'll want to check the boxes for Motion Sensor and Capture alert list image, and set the break time. The break time says the trigger ends after that many seconds unless motion is still happening. This is a setting you might have to play with a little.
Next, we need to click the Configure button for the Motion Sensor. In here, you'll need to adjust the dials to set how sensitive the motion detection is. Again, this is a setting you'll have to play with: if you are getting too many detections, make it less sensitive. I tend to start super sensitive and dial back until I find a sweet spot.
Also set your make time to 1 second; this is how long motion needs to be detected before an event registers. This is what was suggested to me, and it seems to work. Once you've got these settings similar to mine in the gallery below, we can move on.
Record Tab
In the record tab we have a few things to setup. Reference the picture at the bottom of this section for a quick view.
First tick the video option and make sure it’s set to “When triggered”. Make sure the folder option is set to whatever folder you want your video files to go to. From earlier, we are putting these in our “New” folder.
Next, we will tick the “JPEG snapshot each (mm:ss)” option and set the time option to 2 seconds. Tick “Only when triggered”. This will take a snapshot every two seconds when motion is detected and save this to our folder which the AITool will monitor.
Finally, tick the “Pre-trigger video buffer” option and set it to something you find appropriate. This option basically says “When I get triggered to record, I will run the footage back X seconds and start recording from there.” Sometimes, if we have a lot of inputs to the AI server, it can take up to a second or more from motion start to the AITool sending the command to record. This allows us to see just before the motion started in our recorded video.
Deepstack
The AI server we'll be using is called Deepstack. You can go to this URL to download the Windows version. Once it's downloaded, double-click it to launch the installer and work through the install wizard.
Now we can start the server. Open your Start menu and type “Deepstack” to locate the executable. Open it, and once it's up, click “Start Server”. You'll get a couple of options here: set the priority to medium, make sure only object detection is ticked, and make sure the port is set to something different from your Blue Iris web server's port (we noted that early on in this tutorial). I set the Deepstack server to port 81 since my Blue Iris server is on port 83.
Once you’ve started the server, we can check to confirm it’s started. Open up a browser and go to the address bar and type in the IP Address and port in the form ipaddress:port. If you are on the same computer as the server you should be able to use 127.0.0.1:port. Here I go to 127.0.0.1:81 and I am presented with a landing page saying my Deepstack server is up and running.
AITool
Download and Install
We’re nearly there. We need to grab the tool that will be the intermediary between Blue Iris and Deepstack. Remember from above, AITool watches for snapshots from Blue Iris, sends them to Deepstack, receives information from Deepstack and then tells Blue Iris to record on that camera.
We’re going to grab VorlonCD’s version of AITool which can be found here. This is a fork of GentlePumpkin’s original tool that has quite a few enhancements made to it. Once you have it downloaded, unzip it somewhere convenient. Go to that folder, right-click on AITool.exe and click run as administrator.
Settings and Camera Setup
First we need to hit up the settings tab. In here we need to set the Deepstack server address. The format is ipaddress:port. If the Deepstack server is running on the same machine as AITool, you should be able to put in 127.0.0.1:port or localhost:port. For example, mine says localhost:81. This is the same port you gave the Deepstack server earlier.
Next we’ll click the cameras tab and click “Add” at the bottom to add a camera. Once the dialog box pops up, give it a name and hit ok. The next option is setting the prefix we’ll be looking for in the picture directory. This should be the same as the short name you setup for the camera we are using. In my example, the short name is “frontdoor”. When Blue Iris takes pictures from that camera, it puts them in our aiinput directory, and the name will start with “frontdoor”.
Next, we need to select the directory to look in. This is the directory you setup in Blue Iris earlier. Mine was called aiinput. Click browse, locate the folder, and click ok.
What to detect
Here’s a bit of a fun part. We get to select what to watch for. You could keep it simple and select just person. You can see in mine that I selected almost everything. This is mostly out of fun; you never know what might walk by! Mostly I want to catch people and vehicles going by. Right under this is your confidence ratings to watch for. To start, leave this at 0% and 100%. As you get data you can decide to up that low end to keep false positives at bay.
The last part is to click the settings button for this camera. The most important piece here is the trigger URLs at the bottom. You need to copy and paste the following two URLs into the Trigger URL box.
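Both use Blue Iris's standard admin endpoint; mine look roughly like the template below, with frontdoor as the camera short name. Everything in angle brackets is a placeholder for your own values, so treat this as a template rather than something to paste verbatim:

http://&lt;Blue Iris IP&gt;:&lt;port&gt;/admin?trigger&camera=frontdoor&user=&lt;AI username&gt;&pw=&lt;AI password&gt;
http://&lt;Blue Iris IP&gt;:&lt;port&gt;/admin?camera=frontdoor&flagalert=1&memo=[Detection]&user=&lt;AI username&gt;&pw=&lt;AI password&gt;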
The things you need to replace in here are the IP and port, the username, the password, and the memo portion. The IP and port are for the Blue Iris server. The username and password are for the AI user we set up in Blue Iris, and the memo field on the second URL can be a variable that AITool sends to Blue Iris to mark the clip. In mine I have [Detection], which gives the clip a mark like “Person (90.5%)” or “Car (84%)”. You can also set the cooldown at the top if you don’t want it triggering on multiple pictures in a row.
What to Detect – UPDATE
I’ve seen some people that have been having issues with this setup always recording on motion, even when AITool/DeepStack don’t detect anything. This is due to the fact we are grabbing a picture when triggered and grabbing video on triggers. There’s no way to tell it to only record on external triggers so this is something that I had been dealing with for a while.
That is, until I found out about the Cancel URL. From my research, the Cancel URL was built for the Sentry AI program and allows you to cancel an alert if the AI comes back with nothing. Well, almost: it cancels them, but then stores them under the “Cancelled Alerts” folder. See below for how I have the Cancel URL set up in AITool.
Basically, flagalert=0 tells Blue Iris to cancel any alert/motion trigger for which the AI didn’t detect anything. Again, it still records; it just tucks those clips away in a “Cancelled Alerts” folder. Here is the general syntax:
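It's the same admin endpoint as the trigger URLs, just with flagalert set to 0; roughly (again, angle brackets are placeholders for your own values):

http://&lt;Blue Iris IP&gt;:&lt;port&gt;/admin?camera=frontdoor&flagalert=0&user=&lt;AI username&gt;&pw=&lt;AI password&gt;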
Since I’ve done this, only the alerts that were true hits from AITool/Deepstack show up in my alert lists. The upside being is that ones that didn’t are still available if I think I need them under the “Cancelled Alerts” drop down.
Testing
We should now be fully set up. If you walk in front of the camera (and have Person as one of your detections), you should see the picture show up in the history tab for that camera, along with an annotation of what it found and the percent confidence. Watch the logs in AITool to confirm your trigger URLs worked. Then head to Blue Iris and list the alerts (I usually filter by “Flagged”, since our trigger URL flags AI detections) to see where you walked in front of the camera.
If AITool shows the picture and Blue Iris has the video, you should be good to go. Repeat this as needed for your cameras. If it’s not working, a good first check is whether the camera is taking pictures at all. If it is, make sure AITool is looking at the right directory, that the Deepstack server is running, and that AITool is pointed at the right address for it.
Final Thoughts
There are plenty of other things you can set up with AITool that are super helpful. I use the Telegram integration to send me snapshots from my front door camera when a person is there, or any time there’s motion in my back yard. You can also set up MQTT and publish messages with it; I use this to send status to sensors in my Home Assistant installation. It basically lets me turn my cameras into motion sensors for Home Assistant, which enables some pretty powerful automations.
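As a hypothetical illustration of that integration (AITool's own MQTT settings publish the actual messages for you; the broker address and topic here are placeholders), a tiny Python snippet using paho-mqtt could simulate what a detection publishes:

import paho.mqtt.publish as publish

# Publish a motion "on" message to a topic that a Home Assistant
# MQTT binary_sensor could be watching. Hostname and topic are
# placeholders for your own broker and naming scheme.
publish.single(
    "blueiris/frontdoor/motion",
    payload="on",
    hostname="192.168.1.10",
)

On the Home Assistant side, an MQTT binary sensor subscribed to that topic then behaves just like a physical motion sensor in automations.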
Let me know if this tutorial for AI Object detection for Blue Iris has been helpful to you. If you’re stuck, leave me a comment below and I can try to point you in the right direction. Keep an eye out later this week because I plan to post a video tutorial on my YouTube Channel for this same setup!
Below I’ve posted a list of the cameras I use around the house if you are interested in my favorite cameras!