This commit is contained in:
jokob-sk
2025-03-29 12:31:29 +11:00
parent 7e5373b2cd
commit 929964f9e2
13 changed files with 215 additions and 90 deletions

View File

@@ -49,3 +49,44 @@ NetAlertX comes with MQTT support, allowing you to show all detected devices as
[list]: ./img/HOME_ASISSTANT/HomeAssistant-Devices-List.png "list"
[overview]: ./img/HOME_ASISSTANT/HomeAssistant-Overview-Card.png "overview"
## Troubleshooting
If not all of your devices are detected, run `sudo arp-scan --interface=eth0 192.168.1.0/24` (adjust the interface and subnet to your setup; read the [Subnets](./SUBNETS.md) docs for details). This command has to be executed in the NetAlertX container, not in the Home Assistant container.
You can access the NetAlertX container via Portainer on your host or via SSH. The container name will be something like `addon_db21ed7f_netalertx` (you can copy the `db21ed7f_netalertx` part from the browser address bar when accessing the NetAlertX UI).
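If you just want to run the test scan without opening an interactive shell, you can execute it in one step from the host. This is a minimal sketch; the container name `addon_db21ed7f_netalertx`, the interface, and the subnet are examples and will differ on your system:
```bash
# Run the arp-scan test inside the NetAlertX add-on container directly from the host
sudo docker exec -it addon_db21ed7f_netalertx arp-scan --interface=eth0 192.168.1.0/24
```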
## Accessing the NetAlertX container via SSH
1. Log into your Home Assistant host via SSH
```bash
local@local:~ $ ssh pi@192.168.1.9
```
2. Find the NetAlertX container name, in this case `addon_db21ed7f_netalertx`
```bash
pi@raspberrypi:~ $ sudo docker container ls | grep netalertx
06c540d97f67 ghcr.io/alexbelgium/netalertx-armv7:25.3.1 "/init" 6 days ago Up 6 days (healthy) addon_db21ed7f_netalertx
```
3. Open a shell in the NetAlertX container
```bash
pi@raspberrypi:~ $ sudo docker exec -it addon_db21ed7f_netalertx /bin/sh
/ #
```
4. Execute a test `arp-scan`
```bash
/ # sudo arp-scan --ignoredups --retry=6 192.168.1.0/24 --interface=eth0
Interface: eth0, type: EN10MB, MAC: dc:a6:32:73:8a:b1, IPv4: 192.168.1.9
Starting arp-scan 1.10.0 with 256 hosts (https://github.com/royhills/arp-scan)
192.168.1.1 74:ac:b9:54:09:fb Ubiquiti Networks Inc.
192.168.1.21 74:ac:b9:ad:c3:30 Ubiquiti Networks Inc.
192.168.1.58 1c:69:7a:a2:34:7b EliteGroup Computer Systems Co., LTD
192.168.1.57 f4:92:bf:a3:f3:56 Ubiquiti Networks Inc.
...
```
If your output doesn't contain results similar to the above, double-check your subnet and interface. If you are dealing with an inaccessible network segment, read the [Remote networks documentation](./REMOTE_NETWORKS.md).
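If you are unsure which interface to pass to `arp-scan`, you can list the interfaces visible inside the container first and then retry the scan. This is a minimal sketch and assumes the `ip` utility is available in the container image; `wlan0` is only an example name:
```bash
/ # ip -o -4 addr show                           # list interfaces and their IPv4 subnets
/ # arp-scan --interface=wlan0 192.168.1.0/24    # retry with the interface that matches your subnet
```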

View File

@@ -2,17 +2,14 @@
You need to specify the network interface and the network mask. You can also configure multiple subnets and specify VLANs (see VLAN exceptions below).
`ARPSCAN` can scan multiple networks if the network allows it. To scan networks directly, the subnets must be accessible from the network where NetAlertX is running. This means NetAlertX needs to have access to the interface attached to that subnet. You can verify this by running the following command in the container (replace the interface and IP mask):
`ARPSCAN` can scan multiple networks if the network allows it. To scan networks directly, the subnets must be accessible from the network where NetAlertX is running. This means NetAlertX needs to have access to the interface attached to that subnet.
`sudo arp-scan --interface=eth0 192.168.1.0/24`
> [!WARNING]
> If you don't see all expected devices, run the following command in the NetAlertX container (replace the interface and IP mask):
> `sudo arp-scan --interface=eth0 192.168.1.0/24`
>
> If this command returns no results, the network is not accessible due to your network or firewall restrictions (Wi-Fi Extenders, VPNs and inaccessible networks). If direct scans are not possible, check the [remote networks documentation](./REMOTE_NETWORKS.md) for workarounds.
In this example, `--interface=eth0 192.168.1.0/24` represents a neighboring subnet. If this command returns no results, the network is not accessible due to your network or firewall restrictions.
If direct scans are not possible (Wi-Fi Extenders, VPNs and inaccessible networks), check the [remote networks documentation](./REMOTE_NETWORKS.md).
> [!TIP]
> You may need to increase the time between scans `ARPSCAN_RUN_SCHD` and the timeout `ARPSCAN_RUN_TIMEOUT` (and similar settings for related plugins) when adding more subnets. If the timeout setting is exceeded, the scan is canceled to prevent the application from hanging due to rogue plugins.
> Check [debugging plugins](./DEBUG_PLUGINS.md) for more tips.
## Example Values
@@ -24,7 +21,17 @@ If direct scans are not possible (Wi-Fi Extenders, VPNs and inaccessible network
* One subnet: `SCAN_SUBNETS = ['192.168.1.0/24 --interface=eth0']`
* Two subnets: `SCAN_SUBNETS = ['192.168.1.0/24 --interface=eth0','192.168.1.0/24 --interface=eth1 --vlan=107']`
If you get timeout messages, reduce the subnet size (e.g., from `/16` to `/24`), increase the plugin's timeout (e.g., set `ARPSCAN_RUN_TIMEOUT` to `300` for a 5-minute timeout), or extend the interval between scans (e.g., set `ARPSCAN_RUN_SCHD` to `*/10 * * * *` to scan every 10 minutes).
> [!TIP]
> When adding more subnets, you may need to increase both the scan interval (`ARPSCAN_RUN_SCHD`) and the timeout (`ARPSCAN_RUN_TIMEOUT`), as well as similar settings for related plugins.
>
> If the timeout is too short, you may see timeout errors in the log. To prevent the application from hanging due to unresponsive plugins, scans are canceled when they exceed the timeout limit.
>
> To fix this:
> - Reduce the subnet size (e.g., change `/16` to `/24`).
> - Increase the timeout (e.g., set `ARPSCAN_RUN_TIMEOUT` to `300` for a 5-minute timeout).
> - Extend the scan interval (e.g., set `ARPSCAN_RUN_SCHD` to `*/10 * * * *` to scan every 10 minutes).
>
> For more troubleshooting tips, see [Debugging Plugins](./DEBUG_PLUGINS.md).
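Putting these settings together, a conservative configuration for a single `/24` subnet could look like the sketch below. The values are examples taken from the tips above; tune them to the size and number of your subnets:
```
SCAN_SUBNETS        = ['192.168.1.0/24 --interface=eth0']
ARPSCAN_RUN_SCHD    = '*/10 * * * *'   # scan every 10 minutes
ARPSCAN_RUN_TIMEOUT = 300              # cancel a scan that runs longer than 5 minutes
```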
---

View File

@@ -1303,6 +1303,38 @@ $(document).ready(function() {
}
});
// -----------------------------------------------------------
// Restart Backend Python Server
function askRestartBackend() {
// Ask
showModalWarning(getString('Maint_RestartServer'), getString('Maint_Restart_Server_noti_text'),
getString('Gen_Cancel'), getString('Maint_RestartServer'), 'restartBackend');
}
// -----------------------------------------------------------
function restartBackend() {
modalEventStatusId = 'modal-message-front-event'
// Execute
$.ajax({
method: "POST",
url: "php/server/util.php",
data: { function: "addToExecutionQueue", action: `${getGuid()}|cron_restart_backend` },
success: function(data, textStatus) {
// showModalOk ('Result', data );
// show message
showModalOk(getString("general_event_title"), `${getString("general_event_description")} <br/> <br/> <code id='${modalEventStatusId}'></code>`);
updateModalState()
write_notification('[Maintenance] App manually restarted', 'info')
}
})
}
// -----------------------------------------------------------------------------
// initialize
// -----------------------------------------------------------------------------

View File

@@ -395,38 +395,6 @@ function deleteActHistory()
});
}
// -----------------------------------------------------------
// Restart Backend Python Server
function askRestartBackend() {
// Ask
showModalWarning('<?= lang('Maint_RestartServer');?>', '<?= lang('Maint_Restart_Server_noti_text');?>',
'<?= lang('Gen_Cancel');?>', '<?= lang('Maint_RestartServer');?>', 'restartBackend');
}
// -----------------------------------------------------------
function restartBackend() {
modalEventStatusId = 'modal-message-front-event'
// Execute
$.ajax({
method: "POST",
url: "php/server/util.php",
data: { function: "addToExecutionQueue", action: `${getGuid()}|cron_restart_backend` },
success: function(data, textStatus) {
// showModalOk ('Result', data );
// show message
showModalOk(getString("general_event_title"), `${getString("general_event_description")} <br/> <br/> <code id='${modalEventStatusId}'></code>`);
updateModalState()
write_notification('[Maintenance] App manually restarted', 'info')
}
})
}
// -----------------------------------------------------------
// Import pasted Config ASK
function askImportPastedConfig() {

View File

@@ -225,7 +225,7 @@
"Device_TableHead_Name": "Name",
"Device_TableHead_NetworkSite": "Network Site",
"Device_TableHead_Owner": "Owner",
"Device_TableHead_Parent_MAC": "Parent node MAC",
"Device_TableHead_Parent_MAC": "Parent network node",
"Device_TableHead_Port": "Port",
"Device_TableHead_PresentLastScan": "Presence",
"Device_TableHead_RowID": "Row ID",

View File

@@ -21,6 +21,11 @@
<?= lang('DevDetail_button_Save');?>
</button>
</div>
<div class="restart-app col-sm-12 col-xs-12">
<button type="button" class="btn btn-primary col-sm-12 col-xs-12" id="save" onclick="askRestartBackend()">
<?= lang('Maint_RestartServer');?>
</button>
</div>
</div>
</section>
@@ -228,16 +233,6 @@ function generateWorkflowUI(wf, wfIndex) {
class: "panel col-sm-12 col-sx-12"
});
// Dropdown for action.field
let $fieldDropdown = createEditableDropdown(
`[${wfIndex}].actions[${actionIndex}].field`,
"Field",
fieldOptions,
action.field,
`wf-${wfIndex}-actionIndex-${actionIndex}-field`
);
// Dropdown for action.type
let $actionDropdown= createEditableDropdown(
`[${wfIndex}].actions[${actionIndex}].type`,
@@ -247,8 +242,20 @@ function generateWorkflowUI(wf, wfIndex) {
`wf-${wfIndex}-actionIndex-${actionIndex}-type`
);
$actionEl.append($actionDropdown);
// Action Value Input (Editable)
if(action.type == "update_field")
{
// Dropdown for action.field
let $fieldDropdown = createEditableDropdown(
`[${wfIndex}].actions[${actionIndex}].field`,
"Field",
fieldOptions,
action.field,
`wf-${wfIndex}-actionIndex-${actionIndex}-field`
);
// Textbox for action.value
let $actionValueInput = createEditableInput(
`[${wfIndex}].actions[${actionIndex}].value`,
"Value",
@@ -257,10 +264,12 @@ function generateWorkflowUI(wf, wfIndex) {
"action-value-input"
);
$actionEl.append($actionDropdown);
$actionEl.append($fieldDropdown);
$actionEl.append($actionValueInput);
}
// Actions
let $actionRemoveButtonWrap = $("<div>", { class: "button-container col-sm-1 col-sx-12" });
@@ -612,6 +621,8 @@ function updateWorkflowObject(newValue, jsonPath) {
console.log("Updated workflows:", workflows);
updateWorkflowsJson(workflows)
renderWorkflows();
}

View File

@@ -196,8 +196,10 @@ def main ():
# Fetch new unprocessed events
new_events = workflow_manager.get_new_app_events()
mylog('debug', [f'[MAIN] Processing WORKFLOW new_events from get_new_app_events: {len(new_events)}'])
# Process each new event and check triggers
if new_events:
if len(new_events) > 0:
updateState("Workflows: Start")
update_api_flag = False
for event in new_events:

View File

@@ -71,7 +71,7 @@ class DeviceInstance:
self.db.sql.execute(f"""
UPDATE Devices SET {field} = ? WHERE devGUID = ?
""", (value, devGUID))
self.db.sql.commit()
self.db.commitDB()
# Delete a device by devGUID
def delete(self, devGUID):
@@ -81,4 +81,4 @@ class DeviceInstance:
raise ValueError(m)
self.db.sql.execute("DELETE FROM Devices WHERE devGUID = ?", (devGUID,))
self.db.sql.commit()
self.db.commitDB()

View File

@@ -52,7 +52,7 @@ class PluginObjectInstance:
self.db.sql.execute(f"""
UPDATE Plugins_Objects SET {field} = ? WHERE ObjectGUID = ?
""", (value, ObjectGUID))
self.db.sql.commit()
self.db.commitDB()
# Delete a plugin object by ObjectGUID
def delete(self, ObjectGUID):
@@ -62,4 +62,4 @@ class PluginObjectInstance:
raise ValueError(m)
self.db.sql.execute("DELETE FROM Plugins_Objects WHERE ObjectGUID = ?", (ObjectGUID,))
self.db.sql.commit()
self.db.commitDB()

View File

@@ -1,4 +1,5 @@
import sys
import sqlite3
# Register NetAlertX directories
INSTALL_PATH="/app"
@@ -7,6 +8,8 @@ sys.path.extend([f"{INSTALL_PATH}/server"])
import conf
from logger import mylog, Logger
from helper import get_setting_value, timeNowTZ
from models.device_instance import DeviceInstance
from models.plugin_object_instance import PluginObjectInstance
# Make sure log level is initialized correctly
Logger(get_setting_value('LOG_LEVEL'))
@@ -27,22 +30,76 @@ class Action:
class UpdateFieldAction(Action):
"""Action to update a specific field of an object."""
def __init__(self, field, value, trigger):
def __init__(self, db, field, value, trigger):
super().__init__(trigger) # Call the base class constructor
self.field = field
self.value = value
self.db = db
def execute(self):
mylog('verbose', [f"Updating field '{self.field}' to '{self.value}' for event object {self.trigger.object_type}"])
mylog('verbose', f"[WF] Updating field '{self.field}' to '{self.value}' for event object {self.trigger.object_type}")
obj = self.trigger.object
# convert to dict for easier handling
if isinstance(obj, sqlite3.Row):
obj = dict(obj) # Convert Row object to a standard dictionary
processed = False
# currently unused
if isinstance(obj, dict) and "ObjectGUID" in obj:
plugin_instance = PluginObjectInstance(self.trigger.db)
mylog('debug', f"[WF] Updating Object '{obj}' ")
plugin_instance = PluginObjectInstance(self.db)
plugin_instance.updateField(obj["ObjectGUID"], self.field, self.value)
processed = True
elif isinstance(obj, dict) and "devGUID" in obj:
device_instance = DeviceInstance(self.trigger.db)
mylog('debug', f"[WF] Updating Device '{obj}' ")
device_instance = DeviceInstance(self.db)
device_instance.updateField(obj["devGUID"], self.field, self.value)
processed = True
if not processed:
mylog('none', f"[WF] Could not process action for object: {obj}")
return obj
class DeleteObjectAction(Action):
"""Action to delete an object."""
def __init__(self, db, trigger):
super().__init__(trigger) # Call the base class constructor
self.db = db
def execute(self):
mylog('verbose', f"[WF] Deleting event object {self.trigger.object_type}")
obj = self.trigger.object
# convert to dict for easier handling
if isinstance(obj, sqlite3.Row):
obj = dict(obj) # Convert Row object to a standard dictionary
processed = False
# currently unused
if isinstance(obj, dict) and "ObjectGUID" in obj:
mylog('debug', f"[WF] Updating Object '{obj}' ")
plugin_instance = PluginObjectInstance(self.db)
plugin_instance.delete(obj["ObjectGUID"])
processed = True
elif isinstance(obj, dict) and "devGUID" in obj:
mylog('debug', f"[WF] Updating Device '{obj}' ")
device_instance = DeviceInstance(self.db)
device_instance.delete(obj["devGUID"])
processed = True
if not processed:
mylog('none', f"[WF] Could not process action for object: {obj}")
return obj

View File

@@ -49,20 +49,21 @@ class AppEvent_obj:
"ObjectIsArchived": "NEW.devIsArchived",
"ObjectPlugin": "'DEVICES'"
}
},
"Plugins_Objects": {
"fields": {
"ObjectGUID": "NEW.ObjectGUID",
"ObjectPrimaryID": "NEW.Plugin",
"ObjectSecondaryID": "NEW.Object_PrimaryID",
"ObjectForeignKey": "NEW.ForeignKey",
"ObjectStatus": "NEW.Status",
"ObjectStatusColumn": "'Status'",
"ObjectIsNew": "CASE WHEN NEW.Status = 'new' THEN 1 ELSE 0 END",
"ObjectIsArchived": "0", # Default value
"ObjectPlugin": "NEW.Plugin"
}
}
# ,
# "Plugins_Objects": {
# "fields": {
# "ObjectGUID": "NEW.ObjectGUID",
# "ObjectPrimaryID": "NEW.Plugin",
# "ObjectSecondaryID": "NEW.Object_PrimaryID",
# "ObjectForeignKey": "NEW.ForeignKey",
# "ObjectStatus": "NEW.Status",
# "ObjectStatusColumn": "'Status'",
# "ObjectIsNew": "CASE WHEN NEW.Status = 'new' THEN 1 ELSE 0 END",
# "ObjectIsArchived": "0", # Default value
# "ObjectPlugin": "NEW.Plugin"
# }
# }
}

View File

@@ -42,6 +42,9 @@ class WorkflowManager:
WHERE AppEventProcessed = 0
ORDER BY DateTimeCreated ASC
""").fetchall()
mylog('none', [f'[WF] get_new_app_events - new events count: {len(result)}'])
return result
def process_event(self, event):
@@ -103,14 +106,17 @@ class WorkflowManager:
if action["type"] == "update_field":
field = action["field"]
value = action["value"]
action_instance = UpdateFieldAction(field, value, trigger)
action_instance = UpdateFieldAction(self.db, field, value, trigger)
# indicate if the api has to be updated
self.update_api = True
elif action["type"] == "run_plugin":
plugin_name = action["plugin"]
params = action["params"]
action_instance = RunPluginAction(plugin_name, params, trigger)
action_instance = RunPluginAction(self.db, plugin_name, params, trigger)
elif action["type"] == "delete_device":
action_instance = DeleteObjectAction(self.db, trigger)
# elif action["type"] == "send_notification":
# method = action["method"]