Many updates #32

Open · wants to merge 382 commits into base: master

Conversation

VorlonCD

GentlePumpkin, I LOVE that with this app BlueIris is better than Sentry in many ways and free! Thank you for all the work on this.

Got a bunch of stuff for ye... Hope you can integrate without too much trouble. I believe you have not updated anything since the fork I created a few weeks ago.

  • Blueiris info class - Created the drop-downs for "new camera" name and "input path" that contain actual Blue Iris settings from the registry (assuming Blue Iris is installed on the same machine)

  • DeepStack class and tab - If you run the WINDOWS version of DeepStack on the same machine, it will detect it, stop/start it, and let you totally replace the stupid deepstack.exe control that comes with it. The AITOOL log window will output DeepStack messages if AITOOL actually starts DeepStack. Auto-starts DeepStack when AITOOL starts, and will auto-change the port if it differs from what's configured in AITOOL. Perhaps docker integration in the future, I see there is a nuget package for it. I might have accidentally decompiled their exe to see exactly what it did :)

  • Updated Inputbox to support drop-downs and made it perrrty-er

  • New logging class for writing the log and History csv file. Queues up writes and writes in another thread so the UI doesn't have to wait for it, and also waits for the log file to become available to avoid the file-sharing exceptions that I still see. (multiple instances or threads cause much trouble)

  • Better FileSystemWatcher events - Wait ONLY as long as needed for the file to become available, wait in the thread before continuing since DeepStack can only process one file at a time. (unless I'm wrong? It's been known to happen. Don't tell my wife.)

  • New LOG tab with color to highlight errors, shows the calling function, etc. Must enable "Log everything" to see everything.

  • Better error checking, timing and logging in DetectObjects and other various functions. Trying very hard to avoid unexpected NULL object related errors.

  • Start with Windows option (non-service - that would take a lot more work)

  • SETTINGS class - saves all settings to an aitool.settings.json file. (But STILL need to integrate CAMERAS into this json file)

  • Detected history list items are green now

  • Object label color set to transparent rather than solid so you can see what is under it

  • Fixed tab order on cameras tab

  • Changed to default 64-bit process. This allows the deepstack class to correctly get the command-line parameters of its running processes like port, mode, etc. And why not? Back in my day we had a 3 GB memory limit and we LIKED IT :)

  • Obligatory "Misc fixes and enhancements"

  • TODO: Create a true service that avoids use of a 3rd party tool and lets the tray icon/UI communicate with the service running in the background. The service will do all the actual work, so I need to separate all UI-related stuff into distinct classes. Still need to research.

  • TODO: I would really like to figure out a way to avoid triggers in cases like this: Car parked in driveway all the time, fucking spiderwebs trigger motion all night, and I end up with just as many alerts as normal. Thinking something along the lines of temporary masking based on repeated detection of the same object in roughly the same area. How to determine "roughly" may be tricky, and how long do we hold the temp mask in place? How many repeats before we create one? How to create the temp mask image? We can doo0 it. Rick and Morty forever. Need input!

  • TODO: Use "FastObjectListView" for the history. Should be MUCH faster since it can have 1000's of items without slowdown, but may take a bit of work converting everything to classes - but maybe it can directly use the camera class?

  • TODO: Store Cameras in json settings file.
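The queued-logging bullet above (buffer writes, flush on a background thread so the UI never blocks) boils down to a classic producer/consumer. A minimal sketch, with a hypothetical `QueuedLogger` class that is NOT the fork's actual logging class:

```python
import queue
import threading

class QueuedLogger:
    """Illustrative sketch: buffer log lines and write them on a
    background thread so the caller (e.g. the UI) never blocks on I/O."""

    def __init__(self, path):
        self.path = path
        self.q = queue.Queue()
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def log(self, message):
        # Non-blocking from the caller's point of view.
        self.q.put(message)

    def _drain(self):
        while True:
            message = self.q.get()
            if message is None:  # shutdown sentinel
                break
            # A real version would retry this open/write while the file
            # is locked by another instance, which is what the bullet
            # above means by waiting for the log file to become available.
            with open(self.path, "a", encoding="utf-8") as f:
                f.write(message + "\n")
            self.q.task_done()

    def close(self):
        self.q.put(None)
        self.worker.join()
```

Because only the single worker thread touches the file, callers never wait on disk I/O, and write order is preserved by the queue.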

Wish list? Suggestions?

Vorlon

@gentlepumpkin
Owner

I actually forced myself not to check on AI Tool because I have zero time right now, but reading THIS, I couldn't resist logging in to state my appreciation! That's a lot of amazing work and I'll check it as soon as my holiday starts in 4 weeks. Thank you for your contribution and I'll get back to you soon :)

@gentlepumpkin
Owner

hell, that's super amazing stuff!

@gentlepumpkin
Owner

and the comment is real comedy gold 😂

@VorlonCD
Author

Ha, thanks! Old camera text files auto-imported and saved to json coming soon. Also noticed that you don't really need to do masking on your end since in BI you can use the 'blackout masked areas' option on the 'AI' camera. Unless we think it's worth it or doable to implement some kind of auto/temp masking to handle repeated detections of something we don't care about, like a car in the driveway all night. And a quick look suggests that danecreekphotography's app doesn't need a second camera configured in BI? Downsides? Hmm, wonder if the BI api is good enough so I can auto-configure our version of the camera?

@gentlepumpkin
Owner

1.67 already implemented some means to avoid duplicate cameras - I'm running without duplicate cameras myself, but I did not update the guide accordingly. It works by flagging confirmed alerts. We can discuss more details as soon as my holiday starts.

@VorlonCD
Author

FYI going to do another pull request when I can get it stable in a few days. I hope to have it so you can trigger a full list of customizable events/posts/etc on detection. Maybe even play a sound based on detection type.

@patwoowong

Ha, thanks! Old camera text files auto-imported and saved to json coming soon. Also noticed that you don't really need to do masking on your end since in BI you can use the 'blackout masked areas' option on the 'AI' camera. Unless we think it's worth it or doable to implement some kind of auto/temp masking to handle repeated detections of something we don't care about, like a car in the driveway all night.

If you don't use a second camera stream, then the blacked-out areas will be blocked in the images and recordings. I would think you'd want to block the area from detection but not entirely delete it.

@VorlonCD & @gentlepumpkin Wondering if either of you have considered adding an "other" section in the relevant objects list. It's possible someone would want to detect something more specific with a custom deepstack model, such as a mailman, mail truck, etc. I guess you'd have to have a few options, {custom_model_name: "", relevant_labels: []}

Also one last thing, would it be possible to eventually add confidence limits per object type? Maybe you want to have 50% confidence on a person to catch all the people but 90+ on cars for the same camera.

Thank you for all the work that's been done on this. Setting it up was super easy.

@classObject
Collaborator

classObject commented Aug 12, 2020

@VorlonCD @gentlepumpkin

I have a solution to your todo about creating temporary masks.
https://github.com/classObject/bi-aidetection/tree/dynamic_masking

My neighbors park in random places along the street, so creating fixed mask files was all but impossible without masking out most of the image. I created a process to dynamically mask objects that are seen in the same location after a configurable number of detections. It's still a work in progress, but it has been up and running for about a week and I'm no longer getting endless alerts every time my neighbors have friends over, park like they've been out binge drinking, or decide hey, wouldn't it be cool to park an RV here? I think it answers most of your questions.

How to determine “roughly”

After each detection, create a history object that stores the detected object's coordinates and a count of how many times it was detected. There is a coordinate variance number to account for slight changes in the position returned by Deepstack. So if the current position is within (xmin, xmax, ymin, ymax ± variance) of one that's in history == match.
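That tolerance test can be sketched in a few lines. The `Box` type here is a hypothetical stand-in for the branch's position object, and the ±35 default comes from the figure discussed elsewhere in this thread:

```python
from collections import namedtuple

# Hypothetical stand-in for the detection-position object.
Box = namedtuple("Box", "xmin ymin xmax ymax")

def is_roughly_same(a, b, variance=35):
    """True when every coordinate of b falls within +-variance pixels
    of the stored detection a."""
    return all(
        abs(getattr(a, f) - getattr(b, f)) <= variance
        for f in Box._fields
    )
```

Because the match is a range rather than an exact value, there is no single hashable key, so a linear scan over the history list is the natural lookup.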

How many repeats before we create?

The counter in the history object is used to determine when to create a mask. Each time an object is found in the history list, the counter increases. When the counter exceeds a user-defined max history value, the object is removed from history and moved to a masked list. The history is cleared after a user-defined number of minutes to prevent false positives. The max history value and history save minutes are set up in the Camera UI tab in AITool.

How long do we hold the temp mask in place?

After each deepstack execution, search the masked list for matches. If the object is not found, decrease the counter. When the counter hits zero, remove the mask. The masked object counter is also user-defined in the Camera UI tab in AITool.

How to create temp mask image?

All that is required is a list of the coordinates returned from deepstack. This is all stored in the masked object list.
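Putting the four answers together, the whole lifecycle (history counting, promotion to a mask, decay when the object disappears) can be sketched as one small class. All names and defaults here are invented for illustration, not the actual fields in the dynamic_masking branch:

```python
class DynamicMasker:
    """Illustrative sketch of the mask lifecycle described above."""

    def __init__(self, create_after=3, expire_after=2, variance=35):
        self.create_after = create_after  # repeats before a mask is created
        self.expire_after = expire_after  # missed runs before a mask is removed
        self.variance = variance          # per-coordinate match tolerance
        self.history = []                 # [(xmin, ymin, xmax, ymax), count]
        self.masks = []                   # [(xmin, ymin, xmax, ymax), countdown]

    def _same(self, a, b):
        # "Roughly the same area": every coordinate within +-variance.
        return all(abs(a[i] - b[i]) <= self.variance for i in range(4))

    def observe(self, box):
        """Feed every detection through here; True means 'suppress this
        alert because the object is temporarily masked'."""
        for entry in self.masks:
            if self._same(entry[0], box):
                entry[1] = self.expire_after  # still there: refresh countdown
                return True
        for entry in self.history:
            if self._same(entry[0], box):
                entry[1] += 1
                if entry[1] >= self.create_after:
                    self.history.remove(entry)            # promote to mask
                    self.masks.append([box, self.expire_after])
                return False
        self.history.append([box, 1])
        return False

    def tick(self):
        """Run once after each deepstack execution: decay masks."""
        for entry in self.masks:
            entry[1] -= 1
        self.masks = [e for e in self.masks if e[1] > 0]
```

Here `tick()` decays every mask and `observe()` refreshes matched ones; the branch described above instead decrements only unmatched masks, but the net behavior is similar.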

f**king spiderwebs trigger motion all night

Maybe try some spider bait from Lowes :)

@VorlonCD
Author

VorlonCD commented Aug 15, 2020

@classObject - Oh wow, VERY cool! Unless you started from my fork in the first place, I will integrate it with mine and see how it works!! I was assuming we would have to modify each image with a mask before giving it to deepstack, but this is much easier.

@VorlonCD
Author

@classObject - A quick look so far. I wonder if MaskManager.last_positions_history and masked_positions would be better off as Dictionary<int, ObjectPosition>, using Objectposition.key as the key in the dictionary. The lookups/ContainsKey should be really fast that way. Of course it doesn't really matter unless the list count grows pretty high.

@classObject
Collaborator

classObject commented Aug 16, 2020

@VorlonCD Agreed. I started off using a dictionary but ran into issues because there wasn't a unique key for lookups due to variations in returned object positions. What do we search for if there's a range of matching values for the dictionary key? Instead it's using ranges in the Equals method to determine if the objects are in similar positions. I'm sure there's another solution but I didn't have a chance to research alternatives... but then again, like you said, if the list isn't that large it doesn't matter much. 20-30 items will probably be the max. Mine never exceeds 10.

@classObject
Collaborator

classObject commented Aug 16, 2020

@VorlonCD Thank you for reviewing the code by the way! I forgot to mention above that Objectposition.key is created using the exact coordinates returned from deepstack, so (xmin * ymin * xmax * ymax) = key. The problem is this doesn't take into account even slight coordinate variations. If there is any variation in the returned values, there is no match in the dictionary. For my setup I've found deepstack varies by a max of ±35 on each coordinate for each detection. Currently the key is just used as a quick reference point for debugging.

@VorlonCD
Author

@classObject - Ahh, I see that would be tricky now. I'm sure a list is fine since it seems unlikely it will have many items. Since cameras may have different resolutions, maybe we need to expose thresholdMaxRange = 35 in the camera UI? Or I wonder if we could make that number a percentage of the X or Y pixels in the image?
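The percentage idea could look something like this. Purely a sketch: the function name and the 2% default are invented, not tested values from either fork:

```python
def variance_for(width, height, percent=2.0):
    """Derive the per-axis match tolerance from the camera resolution
    instead of a fixed +-35 pixels, so 4K and 720p cameras behave
    comparably. Returns (x_tolerance, y_tolerance) in pixels."""
    return (width * percent / 100.0, height * percent / 100.0)
```

For example, `variance_for(3840, 2160)` allows about ±77 px horizontally and ±43 px vertically on a 4K frame, while a 1280x720 camera would get roughly ±26/±14, so the tolerance scales with resolution rather than staying a fixed ±35.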

@classObject
Collaborator

@VorlonCD Good suggestion. I like the idea of making the variance percentage based. Will make the update soon.

@doudar

doudar commented Aug 19, 2020

Great work by everybody! I'd like to point out that I can greatly reduce my server load by offloading all motion detection to the cameras and having them upload snapshots to the BI server which then runs AI detection on the images. This gets rid of the need/workaround of using secondary streams on BI. The largest issue I have (and would make it a hurdle for most users) is that Dahua cameras insist on putting their snapshots in the folder CameraName/Year/Month/Day/Hour/Minute/randomNumber.jpg when they upload so I have to use a secondary script from bp2008 (bp2008/ImageOrganizer) to sort out that mess before processing.

@VorlonCD
Author

@classObject FYI I got your code integrated into my fork. Very cool stuff! It seems to work; I think I just need to tweak some of the settings. I noticed the history list contained a few detections that were really close in size and width, so could it be the Equals function isn't correctly merging a similar item into the history? That camera is 4K and the detection rectangle was at a distance, so maybe it was still outside the configured percentage? FYI I built a nice little viewer of the history and active masks for troubleshooting - see Frm_DynamicMaskDetails.cs. Also I moved settings from the settings tab to its own dialog - settings was getting too congested. Will eventually move more (or everything) to the separate settings dialog. You will need ObjectListView and a few of my own routines unless you want to start fresh from my fork - it's got 99% of your stuff integrated, I think. I do use my own custom logging routine so I can output to the RTF log window in the app rather than NLOG.
[Screenshot: Annotation 2020-08-25 143739]

@classObject
Collaborator

@VorlonCD make sure to pull the latest version. The calculation for the change in the object's location has been updated.

@classObject
Collaborator

@VorlonCD The viewer looks awesome! Great idea. I will pull your code later and check it out. Your issue with detections should be solved with my latest update to the calculation - I had accidentally checked in an incomplete version of the code. You might want to also check out my changes to the camera config. It reads and writes the camera settings to json now, and auto-migrates older .txt configs to json as well.

@VorlonCD
Author

@classObject - Ahh, I might have missed the checkin from yesterday, thanks. I saw the JSON, but I had already converted ALL settings to a single settings class that writes everything, including cameras, to one JSON file. It has a very safe method of reading and writing the file - I had so many cases of corruption with BSODs on one machine that I now check for nulls, etc. before reading the file. If it looks bad, I read a backup copy instead. I'm currently working on a better way of queuing files to be processed in another thread so that an overload of newly generated files will not be missed as easily.

@classObject
Collaborator

@VorlonCD ok, cool. Haven't looked into all your changes yet. You've done a lot! Guess we both went for JSON :) Just pulled the code and am setting it up now.

@classObject
Collaborator

@VorlonCD I love the details view for masks! I've been tailing the log to get the data for debugging but this is so much better. To visually see the locations of the masks on the image is awesome. I would recommend hiding the isVisible flag. It's used to keep track of objects that are visible until deepstack is finished processing the image. After that the counters are updated and the flag is reset to false for the next run. That's why it always shows false.

@VorlonCD
Author

@classObject - the columns are in the order of the class, so if you move the isVisible property near the bottom, most people won't see it. Do you think it would be helpful to show the max-variance-size rectangle in another color? Not sure what calc from Equals to use without digging in a bit.

@classObject
Collaborator

classObject commented Aug 26, 2020

@VorlonCD It might be helpful to see the max rectangle size on the screen. It depends on how busy the image becomes and the colors or shading. Maybe an option to toggle it on/off?

Calculations for the rectangle size max variance EDIT: (this requires some more thought)

@VorlonCD
Author

@classObject - have at it. Go ahead. You can DOOO iiit.

@VorlonCD
Author

@doudar - If we changed the prefix so it works with a wildcard, would that help for images uploaded by your camera? This fork already lets you monitor subfolders so I think that would be all that is needed.

@SHerms

SHerms commented Aug 27, 2020

@VorlonCD
All these changes are awesome! Can't wait to try them out.
Would it be possible to configure different deepstack IPs for different cameras?

I'm having an issue when multiple cameras are sending photos to the same deepstack server. It causes a pileup and by the time the trigger is sent to BI the object could be out of frame. Having the ability to override the default deepstack IP per camera would be much better than trying to run multiple copies of the AI Tools program.

@classObject - Thank you for implementing this change. This was the main thing I was waiting for :) 👍

@VorlonCD
Author

@doudar - This may work without your extra script in the latest version of my fork. (/CameraName/Year/Month/Day/Hour/Minute/randomNumber.jpg) If I can't get CAMNAME. from the name of the jpg, I fall back on looking for the INPUT FOLDER assigned to each camera. I think you would just have to give it a root path unique for each camera and enable the scan subfolders checkbox.

@doudar

doudar commented Aug 28, 2020

Wow! Looks good - I'll check it out and let you know!

@SHerms

SHerms commented Aug 29, 2020

@VorlonCD Thanks, you rock! I'll compile and test it out later tonight. 👍

VorlonCD and others added 30 commits May 11, 2021 07:23
- Startup error fix
- Allow deepstack tab to show even if windows version is not installed
…ExDark' custom model

- Added drop shadow to splash screen and prevent the form from being topmost so that a messagebox won't be behind it if one appears (like for migrating settings)
- When duplicate objects are merged, if the label/object name is different in a dupe object, add it into the details field.
- Some work on face training UI, but not implemented yet.   But any image with a face that is found will be copied to \_Settings\FaceStorage\FaceName
…w has priority over Confidence levels when removing duplicates.

- Ability to disable merging duplicate objects in AITOOL.SETTINGS.JSON via 'HistoryMergeDuplicatePredictions'=False.  Defaults to true.
- Fix issue calculating the dusk-dawn check
- Loosen UrlItem equals check to avoid getting dupes added just because the existing item's ActiveTimeRange is not the same.
…lete the requested service" by compressing aitool.settings.json before saving the backup to the registry.
…- Uses 'URLResetAfterDisabledMinutes' (60) to re-enable disabled URLs after that time.

- Cleans history every 24 hours to improve performance.   Controlled by 'HoursBetweenCleaning' in AITOOL.SETTINGS.JSON
- Prevent error when closing UPDATE box before the update check has finished.
- Minor updates to upcoming face training (not implemented yet).   Use 'SaveUnknownFaces' (true) and 'SaveKnownFaces' (true) in \Settings\FaceStorage\FACES.JSON to control if faces will be auto-stored.  Also see 'MaxFilesPerFace' (1000) and 'MaxFilesAgeDays' (182) to further control how many face files are stored.   Defaults to saving ANY face so it can be used for the upcoming face training feature.
- If the deepstack dark custom model is being used, do not default to 'UseOnlyAsLinkedServer'; otherwise default to true.   For most other types of custom models we don't want them to be the only source of detected objects
- Mqtt now uses a new server/port/username/etc without restarting aitool
…ing 'HistoryHoursBetweenCleaning' in AITOOL.SETTINGS.JSON
…to 500ms and will be a delay that will be applied between most actions, even individual trigger URL calls. @balqosz  #266

- Allow for reading BlueIris lat/long settings when the stored number uses a comma rather than period (some non-US countries)  Thanks @johngianni !
- Recompile in Visual Studio 2022, .NET 4.8, and update NuGet packages
- Version on startup now detects Win11.
- Try to prevent telegram Could not create SSL/TLS secure channel exception on Win7.  @xanthos84  #267
- All AITOOL temp files now go to %TEMP%\_AITOOL rather than directly in the temp folder.   This temp folder will be cleaned out on startup of AITOOL.   @balucanb #287
- Added GUI option for how old the files in "Alert images folder" can get before they are removed.  This defaults to 30 days and is now checked every hour, just before a new file is copied in.  @42ism  #260
- Show warning when INPUT PATH is the same as 'Copy alert images to folder' path in actions.   This has caused a number of issues in the past.  @balucanb #287
- In History tab > settings, added 'Merge Duplicate Predictions'.   It defaults to merging them, but you may want to turn it off as seen in this:  #251
- Added ability in the Deepstack tab (deepstack for windows only) to run each custom model with a specific MODE.
- Pushover:   [This needs testing!!]   In actions, you can now send messages at different priorities based on the time of day.    So if PRIORITY was set to 'Normal | Emergency' and TIME was set to 'Dawn-Dusk | Dusk-Dawn', then only emergency priority would be used at night.    All fields can have more than one item separated by a PIPE symbol, and it is best, although not required, that all fields have an equal number of items.  If they DON'T, it will default to the FIRST item in each field.  @162884  #254
- Action to activate/maximize the BI window.  It will maximize the BI window IF it is installed on the same machine AITOOL is running on.   I put this feature in because the BI per-camera feature Trigger > 'Restore/focus app window' restores focus for ANY movement, not just when AITOOL triggers.
- Hero:  https://theoatmeal.com/comics/hero
…_Settings\FaceStorage. If you edit FACES.JSON (without AITOOLS open) you can control "MaxFilesPerFace" (defaults to 1000) and "MaxFilesAgeDays" (defaults to 182), "SaveUnknownFaces" (defaults to true), "SaveKnownFaces" (defaults to true. Sorry, no face training or UI for this feature yet. #256   #241

- Fix crash that happens when you try to open Deepstack > STDERR.TXT.    If Windows 11 has broken TXT file associations you have to re-associate manually to fix.
- Increase FileSystemWatcher.InternalBufferSize from 8k to 65k to try to avoid "too many changes at once in directory" error
- Fix error for input path & 'copy alert images to folder' if both paths were EMPTY.  #294
- Allow for an "empty" deepstack CUSTOM model MODE field.   And do not force the mode to uppercase since it appears to be case sensitive in the back-end python code; force the first letter upper, the rest lower.   It accepts only Low, Medium and High, so MEDIUM fails with an error in STDERR.TXT.  #281
- Update default camera Trigger URL to be more compatible with newer releases of BI - The new one seems to work with a single URL rather than multiple.  #273
- Lower default ActionDelayMS from 500 to 250.
… too often and it exceeds GITHUB core or search rate limits.

- New Sound features:  Better control over Camera > Actions > Sounds:  1) if you use a wav file name without a path, it will auto-search the AITOOL folder, the \Blueiris\Sounds folder and the C:\windows\media folder to find it.  2)  *Ability to speak*  To change the default voice you must use the OLD Windows speech control panel:   Start menu > type 'Control Panel' > Ease of Access > Speech Recognition > Advanced speech options > Text to Speech tab.   For some reason the new Win10/Win11 speech settings page doesn't change what is being used.

USAGE:

Talk:
    Talk:There is a [Label] outside
    person ; talk:There is a mother f'in person in the driveway
Simple sound play:
    C:\BlueIris\sounds\are-you-kidding.wav
    are-you-kidding.wav   <-- No need to specify path if in AITOOL, BlueIris folder or Windows Media folder
    are-you-kidding
    C:\Windows\Media\Ring10.wav
    Ring10.wav
Conditional:
    cat ; catsound.wav
    cat,dog,sheep ; animalsound.wav
    bear ; fuuuuck.wav
Combine any with pipe symbols
    Talk:There is a [Label] outside | object1, object2 ; soundfile.wav | object1, object2 ; anotherfile.wav | * ; defaultsound.wav
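For reference, the pipe/semicolon grammar shown above can be parsed in a few lines. This is a hypothetical sketch of the rules as documented, not the actual AITOOL parser:

```python
def parse_sound_actions(spec):
    """Parse the pipe-separated sound-action syntax sketched above.
    Returns a list of (labels, action) pairs. labels is None for an
    unconditional entry; '*' acts as the catch-all label."""
    actions = []
    for entry in spec.split("|"):
        entry = entry.strip()
        if not entry:
            continue
        # 'labels ; action' form, but don't split a bare Talk: message.
        if ";" in entry and not entry.lower().startswith("talk:"):
            labels, action = entry.split(";", 1)
            labels = [l.strip().lower() for l in labels.split(",")]
            actions.append((labels, action.strip()))
        else:
            actions.append((None, entry))
    return actions
```

A matcher would then lower-case the detection's label, walk the returned list, and fire entries whose labels contain that label, "*", or are None.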

- New feature:   Ability to reply to telegram messages to START and STOP cameras.  To get telegram messages you must first disable group /setprivacy mode - Note that your BOT name is case sensitive: https://teleme.io/articles/group_privacy_mode_of_telegram_bots

Usage:
PAUSE|STOP [CAMNAME] [MINUTES]
STOP 30   <<---Stops/pauses all cameras for 30 minutes
PAUSE CAMERANAME 30
START|RESUME [CAMNAME]
RESUME
RESUME CAMERANAME
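The PAUSE/STOP/START/RESUME grammar above could be handled along these lines. Again a sketch, not the fork's actual command handler; the 'ALL' default for an omitted camera name is an assumption based on the 'STOP 30' example:

```python
def parse_camera_command(text):
    """Parse a Telegram reply command like 'STOP 30' or 'PAUSE Cam1 30'.
    Returns (verb, camera, minutes), or None for unrecognized input."""
    parts = text.strip().split()
    if not parts:
        return None
    verb = parts[0].upper()
    if verb in ("PAUSE", "STOP"):
        verb = "PAUSE"
    elif verb in ("START", "RESUME"):
        verb = "RESUME"
    else:
        return None
    camera, minutes = "ALL", None   # assumed defaults when omitted
    for p in parts[1:]:
        if p.isdigit():
            minutes = int(p)
        else:
            camera = p
    return verb, camera, minutes
```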
…nt object manager. This way you can tell how big or small certain objects ever get and limit them better.

- Fix Telegram errors:  I found that when my Internet went out there was a telegram.bot bug where it would repeat an exception many times a second, so I rate-limited it to one every 2 minutes if it is a duplicate message.
- Telegram:  Only start listening for reply command messages if there is not another instance of AITOOL running on the same machine (like as a service).   Try to prevent 'Telegram API [409] Conflict: terminated by other getUpdates request; make sure that only one bot instance is running'
- Telegram:  AITOOL.SETTINGS.JSON now has a setting called "telegram_monitor_commands" so you can disable listening to the telegram chat commands.  You may need to do this if using AITOOL or another telegram app on another machine using the same Telegram TOKEN.
- A few more basic remote control commands via Telegram:   MUTE,  UNMUTE, VOLUMEUP, VOLUMEDOWN, VOLUMESET LEVEL, RESTARTCOMPUTER
…messages. Only one client per token

- Speed up resolving remote blueiris shares
- Cached reading of mac address since it can take over 500 ms sometimes
…d, Deepstack tab no longer says ERROR) #314

- New remote Telegram command:  RESTARTAITOOL (or RESTART).   This will restart AITOOL only.  #319
- New remote Telegram command:  SCREENSHOT.   Sends a screenshot of the entire screen AITOOL is running on. #319
- Added SHUTDOWNCOMPUTER telegram command.  It will shut down in 10 seconds.  #298
- Don't force 'Detection API' to be enabled (unless NO other modes, including Custom, are enabled.  Previously, if Custom was enabled, it would still try to re-enable regular Detection).   This is so people who only want to run their own custom models don't have to run normal detection consuming extra memory and cpu.   #318
- Tweak how lat/long is read from blueiris registry for use with SUNRISE-SUNSET  #296
- FaceStorage path now checks to see if the folder is accessible.  If not, puts the path under the regular settings folder.   (Like in the case AITOOL folder was moved)  #316
- Rework facestorage manager.   Still haven't had time to make the UI for this work, but all faces are still stored when found.  See previous commit details for how it works.    May have fixed a bug where the FACES.JSON file gets too large or corrupt.  #316
- Only save settings every 30 seconds (or on demand) to reduce the chance of settings file corruption (controlled via 'SaveSettingsIntervalSeconds' in the JSON settings file).  Prior to this, it was saving for EVERY image that was processed.
- Pause now always defaults to all cameras rather than selected.
- CTRL-MOUSEWHEEL to change font size in History and Log
- Added INNO setup files and automatic build of the installer (Before it was relying on Visual & Installer, a paid product)  #301
- Some prep work with fonts and scaling for eventual move to .NET Framework 6.  Changed default font to Segoe UI 8.25 for most controls.   This can be set in the settings JSON file via "DefaultFont".
…r. If you do not have .net 6 installed you can install it from here: https://dotnet.microsoft.com/en-us/download/dotnet/6.0   (.NET Desktop Runtime 6.0)

* Updated default object names to include a few more from ipcam-animal.   If you hit RESET in relevant objects you will get the new default list.
* Added 'ObjectsExcluded' setting in AITOOL.Settings.JSON file with a default list of items such as Airplane, Frisbee, Chair, etc that normally just clutter up the object lists.
* Update all nuget packages
* MQTT.net had minor code changes for .net 6 version of nuget package - THIS HAS NOT BEEN TESTED YET.   If someone can confirm its working, that would be great.

Note:  You should consider using CodeProject.AI for object detection rather than Deepstack, since Deepstack is no longer being developed:   https://www.codeproject.com/Articles/5322557/CodeProject-AI-Server-AI-the-easy-way

* Telegram.net also had some minor code changes for .net 6 but it has been tested.
* Fix Codeproject.ai ALPR plate detection so it actually shows the plate number found.  ("Sighthound_vehicle" seems to work much better than CP ALPR though??)  #332
* Added name property to URL servers
* Added Ambulance as a known vehicle
* Force all enabled relevant objects to the top of the list
* Change default object list for new cameras (No longer contains Meat Popsicle :) )
* Fix Save button on Prediction Tolerances form - #333
* Checkboxes in AI Server list for easy enable/disable.
* AI Server list now shows all enabled first, disabled last, and a few columns were reordered
* Prediction Details screen now works correctly when you multi select more than one item from the history list.
…. You may need to install from here: https://dotnet.microsoft.com/en-us/download/dotnet/thank-you/runtime-desktop-8.0.1-windows-x64-installer

* Fixed issue where an AI Server may be stuck "InUse" if you close AITOOL while it was in use.
* The next few items can help reduce annoying or unnecessary errors sent to telegram:
* Fix blueiris httpclient trigger timeout error in cases where multiple triggers have been made in 'queued' mode.  (Prevent concurrent trigger calls to the same servername/port - the trigger call to blue iris will be skipped if it is currently already working on a trigger. Why does it take so long? Sometimes it takes over a minute to get done triggering.)
* HTTP Trigger timeout defaults to 120 seconds rather than 55
* Move trigger URL calls to after sound and a few other things, so there is less delay for other notifications such as sound when running in a home office  (Blueiris can take forever to finish its http trigger, so we don't wait for it now)
* IgnoreConnectionError:   JSON setting to ignore errors (not send them via telegram, etc.) if we can't even get a ping response from the server.  This might be useful when a computer is only on certain hours of the day, without setting a specific URL time range.
* Checkbox in URL edit screen "Ignore if offline".   When this is enabled, it will ping the server first and if the ping fails it will silently ignore and skip the URL.  This is useful for when a server is a machine that may go to sleep and the time schedule is not consistent enough.  In AI Servers list > LastSkippedReason column, you will see "NotOnline" when this happens.   LastSkippedReason shows most recent reasons first.
* AI Server list tweaked to provide more debug info about why it was not used and make it more threadsafe
* AI Server list Double-click AI server to edit
* AI Server list up/down now keeps selection
* AI Server list now correctly refreshes every 1 second so you can see it working live
* Codeproject.AI now sometimes returns a "the request timed out" error.   If you disable the "Error" checkbox when editing a SERVER, it will prevent this from being an error that is sent via telegram, etc.  (Because I was starting to find it annoying) - I think it happened when a custom IPCAM model and the regular model were being accessed at the same time.
* We used to look for Debug: error: warn: etc. anywhere within the log line.  Now it's only within the first 6 chars, not including dots, dashes, spaces.  This should keep less serious things from being sent to telegram, etc.
* Fix to fully respect unchecking AUTO ADD on the deepstack tab #334
* Made the SQL database history connection a little more reliable. If it gets an error on initial connection it will try to revert to a backup copy. If that fails, it will delete the database and recreate it.
* Exit option added to the tray icon right-click menu
* New AI icon generated by ChatGPT  (Of course!)
* For refinement servers, you can now use 'Animal', 'Person'/'People', or 'Vehicle' in addition to actual object names
* Pause and Resume options added to the tray icon right-click menu
* Better error checking when trying to activate the Blue Iris window as non-admin
* Fix crash on camera properties if you don't have a camera selected
* Fix triggering object list sometimes empty
* Fix ObjectListView issue, .net 7/8 changed virtualListSize to _virtualListSize
* Code cleanup, update to latest nuget packages, fixing a few security issues
* Added more return properties for Codeproject.ai response (not used yet)
* Added Action time min/max ms to status bar
* Is it worth trying to integrate other cloud vision AI tools when we have a decent local CodeProject.AI now?   "Google.Cloud.Vision", "Azure AI Vision", "Imagga", etc?
* …mistakenly caching every single image processed in memory for at least an HOUR, which can be a large amount of RAM with multiple cameras, 4K, etc. #345
… "Allow AI Server based queue". Because CodeProject.AI manages its own queue it can handle concurrent requests (Unlike Deepsack), we ignore the fact that it is "in use" and just keep giving it as many images as we get to process. So far this actually seems to work really well. It should prevent some cases of the default 100 image queue error from happening. Note: When you enable this it will be more rare that a server OTHER THAN THE FIRST is used. If you still want other AI servers to be used by AITOOL there are a few things you can do:

1) Reduce the AI SERVER > Edit URL > 'Max AI server queue length' setting. CPAI defaults to 1024, so if, for example, you dropped that down to 4, AITOOL would only try the next server in line when the queue was above 4. You will have to test in your environment to see if this makes sense, as it may not.
2) Reduce 'AI Server Queue Seconds larger than'. If a server's queue time gets too high, this forces AITOOL to go to the next server in the list.
3) Reduce the 'Skip if AITOOL Img Queue Larger Than' setting. If the AITOOL image queue is larger than this value, and the AI server has at least 1 item in its queue, skip to the next server to give it a chance to help lower the queue.
4) In AITOOL > Settings, enable the "queued" checkbox. This way AITOOL will take turns, always using the server that was used longest ago. This may not be ideal if some of the servers are much slower than others.
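As a rough illustration, the skip rules above might combine like this. This is a hypothetical Python sketch with invented field and parameter names, not AITOOL's actual C# logic:

```python
from dataclasses import dataclass

@dataclass
class AIServer:
    name: str
    queue_length: int        # items waiting on the AI server itself
    max_queue_length: int    # per-URL "Max AI server queue length" (rule 1)
    queue_seconds: float     # recent time images spend in its queue
    max_queue_seconds: float # "AI Server Queue Seconds larger than" (rule 2)

def pick_server(servers, aitool_queue_len, skip_if_img_queue_over):
    """Return the first server no skip rule rejects, else None.

    Mirrors rules 1-3 above; rule 4 ("queued" mode) would instead sort
    the servers least-recently-used first before applying these checks.
    """
    for s in servers:
        if s.queue_length > s.max_queue_length:
            continue  # rule 1: the server's own queue is too deep
        if s.queue_seconds > s.max_queue_seconds:
            continue  # rule 2: its queue wait time is too high
        if aitool_queue_len > skip_if_img_queue_over and s.queue_length >= 1:
            continue  # rule 3: AITOOL has a backlog and this server is busy
        return s
    return None
```

The thresholds are per-server so one slow box can be capped aggressively while a fast box keeps its deep queue.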

Tip:  In the CPAI settings web page, enable MESH and make sure it can talk to the other servers you may have configured (all have to be on the same network with open/forwarded UDP ports - getting docker-to-docker or docker-to-physical instances to see each other may take some work). This way, CPAI will do the work of offloading to the next server in line!

Tip:  For faster queue processing, enable as many modules as you can (YOLOv5 6.2, YOLOv5.NET, YOLOv8, etc.). It will help spread the workload out, so in some cases you don't even need more than one CPAI server.

Tip:  If you use IPCAM Animal and a few others as 'linked servers', you will get errors if you have anything other than YOLOv5 6.2 enabled, because those custom models have not been built for the other modules yet. I haven't found a good way around this yet.

Tip:   If the MESH cannot see DOCKER or VM based instances of CPAI servers, edit your C:\ProgramData\CodeProject\AI\serversettings.json file and manually add the servers it cannot automatically find.  For example:

"KnownMeshHostnames": [ "prox-docker", "pihole"],

* Some new columns in the Edit AI URL screen related to queue time, min, max, etc.: AIQueueLength, AIQueueLengthCalcs, AIQueueTimeCalcs, etc. Some other regular AITOOL stats may not be as accurate when you enable 'Allow server based queue'.
* Update setup to only check for .NET 8 rather than 6
* Implement new, easier-to-use version of the thread-safe classes. This should also shrink the JSON settings file a bit and make the code easier to read.
* If you enable 'Ignore if offline' for a CPAI server that is running in mesh mode and the mesh returns an error (e.g. a mesh computer was turned off), you will not see an error.
* Fixed bug where, when using linked servers, there could be duplicates or disabled URLs in the list, slowing down the overall response time.
* Gotham City's corruption problem is still a work in progress.  I'm Batman.
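For illustration, the 'Ignore if offline' ping-first behavior mentioned in the entries above could be sketched as follows. This is a hypothetical Python sketch, not the actual AITOOL code; the helper names are invented:

```python
import platform
import subprocess

def is_online(host: str) -> bool:
    """Send a single ping; treat any failure (bad host, no ping binary,
    timeout) as offline rather than raising."""
    flag = "-n" if platform.system() == "Windows" else "-c"
    try:
        result = subprocess.run(["ping", flag, "1", host],
                                capture_output=True, timeout=10)
        return result.returncode == 0
    except (subprocess.TimeoutExpired, OSError):
        return False

def call_ai_url(url_host: str, send) -> str:
    """Skip the URL silently when the host does not answer a ping.

    The skip reason would be what shows up in the LastSkippedReason
    column; no error is raised or forwarded to Telegram, etc.
    """
    if not is_online(url_host):
        return "NotOnline"
    send()
    return "ok"
```

Doing the cheap ping before the HTTP request is what lets a sleeping server be skipped quietly instead of generating a connection-error notification.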