Merge remote-tracking branch 'origin/main' into hardening

This commit is contained in:
Adam Outler
2025-10-18 13:23:57 -04:00
63 changed files with 3718 additions and 730 deletions

View File

@@ -59,6 +59,8 @@ http://<server>:<GRAPHQL_PORT>/
* [Events](API_EVENTS.md) Device event logging and management
* [Sessions](API_SESSIONS.md) Connection sessions and history
* [Settings](API_SETTINGS.md) Settings
* Messaging:
  * [In-app messaging](API_MESSAGING_IN_APP.md) In-app notifications for users
* [Metrics](API_METRICS.md) Prometheus metrics and per-device status
* [Network Tools](API_NETTOOLS.md) Utilities like Wake-on-LAN, traceroute, nslookup, nmap, and internet info
* [Online History](API_ONLINEHISTORY.md) Online/offline device records

docs/API_MESSAGING_IN_APP.md Executable file
View File

@@ -0,0 +1,173 @@
# In-app Notifications API
Manage in-app notifications for users. Notifications can be written, retrieved, marked as read, or deleted.
---
### Write Notification
* **POST** `/messaging/in-app/write` → Create a new in-app notification.
**Request Body:**
```json
{
"content": "This is a test notification",
"level": "alert" // optional, ["interrupt","info","alert"] default: "alert"
}
```
**Response:**
```json
{
"success": true
}
```
#### `curl` Example
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/write" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d '{
"content": "This is a test notification",
"level": "alert"
}'
```
---
### Get Unread Notifications
* **GET** `/messaging/in-app/unread` → Retrieve all unread notifications.
**Response:**
```json
[
{
"timestamp": "2025-10-10T12:34:56",
"guid": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"read": 0,
"level": "alert",
"content": "This is a test notification"
}
]
```
#### `curl` Example
```bash
curl -X GET "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/unread" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
---
### Mark All Notifications as Read
* **POST** `/messaging/in-app/read/all` → Mark all notifications as read.
**Response:**
```json
{
"success": true
}
```
#### `curl` Example
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/all" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
---
### Mark Single Notification as Read
* **POST** `/messaging/in-app/read/<guid>` → Mark a single notification as read using its GUID.
**Response (success):**
```json
{
"success": true
}
```
**Response (failure):**
```json
{
"success": false,
"error": "Notification not found"
}
```
#### `curl` Example
```bash
curl -X POST "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/read/f47ac10b-58cc-4372-a567-0e02b2c3d479" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
---
### Delete All Notifications
* **DELETE** `/messaging/in-app/delete` → Remove all notifications from the system.
**Response:**
```json
{
"success": true
}
```
#### `curl` Example
```bash
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
---
### Delete Single Notification
* **DELETE** `/messaging/in-app/delete/<guid>` → Remove a single notification by its GUID.
**Response (success):**
```json
{
"success": true
}
```
**Response (failure):**
```json
{
"success": false,
"error": "Notification not found"
}
```
#### `curl` Example
```bash
curl -X DELETE "http://<server_ip>:<GRAPHQL_PORT>/messaging/in-app/delete/f47ac10b-58cc-4372-a567-0e02b2c3d479" \
-H "Authorization: Bearer <API_TOKEN>" \
-H "Accept: application/json"
```
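---
### Python Example
The endpoints above can also be scripted. Below is a minimal sketch using Python's `requests` library, with the same `<server_ip>`, `<GRAPHQL_PORT>`, and `<API_TOKEN>` placeholders as the `curl` examples:
```python
import requests

BASE_URL = "http://<server_ip>:<GRAPHQL_PORT>"
HEADERS = {
    "Authorization": "Bearer <API_TOKEN>",
    "Accept": "application/json",
}

# Create a notification ("level" is optional and defaults to "alert")
resp = requests.post(
    f"{BASE_URL}/messaging/in-app/write",
    headers=HEADERS,
    json={"content": "This is a test notification", "level": "info"},
)
print(resp.json())  # {"success": true}

# Fetch unread notifications, then mark each one as read by its GUID
unread = requests.get(f"{BASE_URL}/messaging/in-app/unread", headers=HEADERS).json()
for note in unread:
    requests.post(f"{BASE_URL}/messaging/in-app/read/{note['guid']}", headers=HEADERS)
```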

View File

@@ -31,10 +31,11 @@ To improve presence accuracy and reduce false offline states:
### ✅ Increase ARP Scan Timeout
Extend the ARP scanner timeout to ensure full subnet coverage:
Extend the ARP scanner timeout and scan duration to ensure full subnet coverage:
```env
ARPSCAN_RUN_TIMEOUT=360
ARPSCAN_DURATION=30
```
> Adjust based on your network size and device count.

View File

@@ -63,13 +63,28 @@ wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/debian12/
## 📥 Ubuntu 24 (Noble Numbat)
> [!NOTE]
> Maintained by [ingoratsdorf](https://github.com/ingoratsdorf)
### Installation via curl
```bash
curl -o install.ubuntu24.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.ubuntu24.sh && sudo chmod +x install.ubuntu24.sh && sudo ./install.ubuntu24.sh
curl -o install.sh https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh && sudo chmod +x install.sh && sudo ./install.sh
```
### Installation via wget
```bash
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.ubuntu24.sh -O install.ubuntu24.sh && sudo chmod +x install.ubuntu24.sh && sudo ./install.ubuntu24.sh
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/ubuntu24/install.sh -O install.sh && sudo chmod +x install.sh && sudo ./install.sh
```
## 📥 Bare Metal - Proxmox
> [!NOTE]
> Use this on a clean LXC/VM running Debian 13 or Ubuntu 24.
> The script will detect the OS and build accordingly.
> Maintained by [JVKeller](https://github.com/JVKeller)
### Installation via wget
```bash
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/main/install/proxmox/proxmox-install-netalertx.sh -O proxmox-install-netalertx.sh && chmod +x proxmox-install-netalertx.sh && ./proxmox-install-netalertx.sh
```

View File

@@ -10,6 +10,9 @@ NetAlertX comes with a plugin system to feed events from third-party scripts int
> (Currently, update/overwriting of existing objects is only supported for devices via the `CurrentScan` table.)
> [!NOTE]
> For a high-level overview of how the `config.json` is used and its lifecycle, check the [config.json Lifecycle in NetAlertX Guide](PLUGINS_DEV_CONFIG.md).
### 🎥 Watch the video:
> [!TIP]

docs/PLUGINS_DEV_CONFIG.md Executable file
View File

@@ -0,0 +1,146 @@
## config.json Lifecycle in NetAlertX
This document describes, at a high level, how `config.json` is read, processed, and used by the NetAlertX core and plugins. It also outlines the plugin output contract and the main plugin types.
> [!NOTE]
> For a deep dive into the specific configuration options and sections of the `config.json` plugin manifest, consult the [Plugins Development Guide](PLUGINS_DEV.md).
---
### 1. Loading
* On startup, the app core loads `config.json` for each plugin.
* The `config.json` is the plugin manifest, which contains metadata and runtime settings.
---
### 2. Validation
* The core checks that each required settings key (such as `RUN`) for a plugin exists.
* Invalid or missing values may be replaced with defaults, or the plugin may be disabled.
---
### 3. Preparation
* The plugin's settings (paths, commands, parameters) are prepared.
* Database mappings (`mapped_to_table`, `database_column_definitions`) for data ingestion into the core app are parsed.
---
### 4. Execution
* Plugins can run at different core app execution points, such as on a schedule, once at startup, or after a notification.
* At runtime, the scheduler triggers plugins according to their `interval`.
* The plugin executes its command or script.
---
### 5. Parsing
* Plugin output is expected in **pipe (`|`)-delimited format**.
* The core parses lines into fields, matching the **plugin interface contract**.
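To make the format concrete, here is a minimal parsing sketch in Python. The sample line and its column order are illustrative only; the authoritative column order is defined by the [Plugin Interface Contract](PLUGINS_DEV.md).
```python
# Illustrative only: a pipe-delimited plugin result line.
# The actual column order is defined by the Plugin Interface Contract.
sample_line = "00:11:22:33:44:55|192.168.1.10|2025-10-18 13:23:57|open||||"

fields = sample_line.split("|")
primary_id, secondary_id = fields[0], fields[1]  # e.g. MAC | IP
print(primary_id, secondary_id)  # 00:11:22:33:44:55 192.168.1.10
```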
---
### 6. Mapping
* Each parsed field is moved into the `Plugins_` database tables and can be mapped into a configured database table.
* Controlled by `database_column_definitions` and `mapped_to_table`.
* Example: `Object_PrimaryID → Devices.MAC`.
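A hypothetical manifest fragment illustrating such a mapping (key and column names are indicative only; see the [Plugins Development Guide](PLUGINS_DEV.md) for the exact schema):
```json
{
  "mapped_to_table": "CurrentScan",
  "database_column_definitions": [
    {
      "column": "Object_PrimaryID",
      "mapped_to_column": "cur_MAC",
      "type": "text",
      "show": true
    }
  ]
}
```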
---
### 6a. Plugin Output Contract
Each plugin must output results in the **plugin interface contract format**: pipe (`|`)-delimited values in the column order described under [Plugin Interface Contract](PLUGINS_DEV.md).
#### IDs
* `Object_PrimaryID` and `Object_SecondaryID` identify the record (e.g. `MAC|IP`).
#### **Watched values (`Watched_Value1`-`4`)**
* Used by the core to detect changes between runs.
* Changes here can trigger **notifications**.
#### **Extra value (`Extra`)**
* Optional extra field.
* Stored in the database but **not used for alerts**.
#### **Helper values (`Helper_Value1`-`3`)**
* Added for cases where more than the IDs, watched values, and extra field are needed.
* Can be made visible in the UI.
* Stored in the database but **not used for alerts**.
#### **Mapping matters**
* While the plugin output is free-form, the `database_column_definitions` and `mapped_to_table` settings in `config.json` determine the **target columns and data types** in NetAlertX.
---
### 7. Persistence
* Data is upserted into the database.
* Conflicts are resolved using `Object_PrimaryID` + `Object_SecondaryID`.
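The effect is equivalent to an upsert keyed on the two IDs. A schematic sketch in Python/SQLite, with a hypothetical table standing in for a `Plugins_` results table:
```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical table standing in for a Plugins_ results table.
con.execute("""
    CREATE TABLE plugin_objects (
        primary_id   TEXT,
        secondary_id TEXT,
        watched1     TEXT,
        PRIMARY KEY (primary_id, secondary_id)
    )
""")

# Upsert: insert a new record, or update the existing one on ID conflict.
con.execute("""
    INSERT INTO plugin_objects (primary_id, secondary_id, watched1)
    VALUES (?, ?, ?)
    ON CONFLICT(primary_id, secondary_id) DO UPDATE
        SET watched1 = excluded.watched1
""", ("00:11:22:33:44:55", "192.168.1.10", "open"))
con.commit()
```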
---
### 8. Plugin Types and Expected Outputs
Beyond the `data_source` setting, plugins fall into functional categories. Each has its own input requirements and output expectations:
#### **Device discovery plugins**
* **Inputs:** `N/A`, a subnet, an API for a discovery service, or similar.
* **Outputs:** At minimum, a `MAC` and `IP` that result in new or updated device records in the `Devices` table.
* **Mapping:** Must be mapped to the `CurrentScan` table via `database_column_definitions` and `data_filters`.
* **Examples:** ARP-scan, NMAP device discovery (e.g., `ARPSCAN`, `NMAPDEV`).
#### **Device-data enrichment plugins**
* **Inputs:** Device identifier (usually `MAC`, `IP`).
* **Outputs:** Additional data for that device (e.g. open ports).
* **Mapping:** Controlled via `database_column_definitions` and `data_filters`.
* **Examples:** Ports, MQTT messages (e.g., `NMAP`, `MQTT`).
#### **Name resolver plugins**
* **Inputs:** Device identifiers (MAC, IP, or hostname).
* **Outputs:** Updated `devName` and `devFQDN` fields.
* **Mapping:** Not expected.
* **Note:** Currently requires **core app modification** to add new plugins; not fully driven by the plugin's `config.json`.
* **Examples:** Avahiscan (e.g., `NBTSCAN`, `NSLOOKUP`).
#### **Generic plugins**
* **Inputs:** Whatever the script or query provides.
* **Outputs:** Data shown only in **Integrations → Plugins**, not tied to devices.
* **Mapping:** Not expected.
* **Examples:** External monitoring data (e.g., `INTRSPD`).
#### **Configuration-only plugins**
* **Inputs/Outputs:** None at runtime.
* **Mapping:** Not expected.
* **Examples:** Used to provide additional settings or execute scripts (e.g., `MAINT`, `CSVBCKP`).
---
### 9. Post-Processing
* Notifications are generated if watched values change.
* UI is updated with new or updated records.
* All values that are configured to be shown in the UI appear in the Plugins section.
---
### 10. Summary
The lifecycle of `config.json` entries is:
**Load → Validate → Prepare → Execute → Parse → Map → Persist → Post-process**
Plugins must follow the **output contract**, and their category (discovery, enrichment, resolver, generic, config-only) defines what inputs they require and what outputs are expected.

View File

@@ -512,21 +512,29 @@ function updateDevicePageName(mac) {
}
// Page title - Name
if (mac == "new") {
$('#pageTitle').html(
`<i title="${getString("Gen_create_new_device")}" class="fa fa-square-plus"></i> ` + getString("Gen_create_new_device")
);
$('#devicePageInfoPlc .inner').html(
`<i class="fa fa-circle-info"></i> ` + getString("Gen_create_new_device_info")
);
$('#devicePageInfoPlc').show();
} else if (!owner || (name.toString()).indexOf(owner) !== -1) {
$('#pageTitle').html(name);
$('#devicePageInfoPlc').hide();
let pageTitleText;
if (mac === "new") {
pageTitleText = getString("Gen_create_new_device");
$('#pageTitle').html(
`<i title="${pageTitleText}" class="fa fa-square-plus"></i> ` + pageTitleText
);
$('#devicePageInfoPlc .inner').html(
`<i class="fa fa-circle-info"></i> ` + getString("Gen_create_new_device_info")
);
$('#devicePageInfoPlc').show();
} else if (!owner || name.toString().includes(owner)) {
pageTitleText = name;
$('#pageTitle').html(pageTitleText);
$('#devicePageInfoPlc').hide();
} else {
$('#pageTitle').html(name + ' (' + owner + ')');
$('#devicePageInfoPlc').hide();
pageTitleText = `${name} (${owner})`;
$('#pageTitle').html(pageTitleText);
$('#devicePageInfoPlc').hide();
}
// Prepend to the <title> tag
$('title').html(pageTitleText + ' - ' + $('title').html());
}

View File

@@ -123,7 +123,7 @@
<!-- page script ----------------------------------------------------------- -->
<script>
var deviceStatus = 'all';
var tableRows = getCache ("nax_parTableRows") == "" ? 20 : getCache ("nax_parTableRows") ;
var tableRows = getCache ("nax_parTableRows") == "" ? parseInt(getSetting("UI_DEFAULT_PAGE_SIZE")) : getCache ("nax_parTableRows") ;
var tableOrder = getCache ("nax_parTableOrder") == "" ? [[3,'desc'], [0,'asc']] : JSON.parse(getCache ("nax_parTableOrder")) ;
var tableColumnHide = [];
@@ -743,7 +743,7 @@ function initializeDatatable (status) {
},
'paging' : true,
'lengthChange' : true,
'lengthMenu' : [[10, 20, 25, 50, 100, 500, 100000], [10, 20, 25, 50, 100, 500, getString('Device_Tablelenght_all')]],
'lengthMenu' : getLengthMenu(parseInt(getSetting("UI_DEFAULT_PAGE_SIZE"))),
'searching' : true,
'ordering' : true,

View File

@@ -169,7 +169,7 @@
var eventsType = 'all';
var period = '1 day';
var tableRows = 25;
var tableRows = parseInt(getSetting("UI_DEFAULT_PAGE_SIZE"));
// Read parameters & Initialize components
main();
@@ -181,7 +181,7 @@ function main() {
period = getCookie(parPeriod) === "" ? "1 day" : getCookie(parPeriod);
$('#period').val(period);
tableRows = getCookie(parTableRows) === "" ? 50 : parseInt(getCookie(parTableRows), 10);
tableRows = getCookie(parTableRows) === "" ? parseInt(getSetting("UI_DEFAULT_PAGE_SIZE")) : parseInt(getCookie(parTableRows), 10);
// Initialize components
initializeDatatable();
@@ -197,7 +197,7 @@ function initializeDatatable () {
$('#tableEvents').DataTable({
'paging' : true,
'lengthChange' : true,
'lengthMenu' : [[10, 25, 50, 100, 500, -1], [10, 25, 50, 100, 500, 'All']],
'lengthMenu' : getLengthMenu(parseInt(getSetting("UI_DEFAULT_PAGE_SIZE"))),
'searching' : true,
'ordering' : true,
'info' : true,

View File

@@ -11,8 +11,8 @@
var timerRefreshData = ''
var emptyArr = ['undefined', "", undefined, null, 'null'];
var UI_LANG = "English";
const allLanguages = ["en_us", "es_es", "de_de", "fr_fr", "it_it", "ru_ru", "nb_no", "pl_pl", "pt_br", "pt_pt", "tr_tr", "zh_cn", "cs_cz", "ar_ar", "ca_ca", "uk_ua"]; // needs to be same as in lang.php
var UI_LANG = "English (en_us)";
const allLanguages = ["ar_ar","ca_ca","cs_cz","de_de","en_us","es_es","fa_fa","fr_fr","it_it","nb_no","pl_pl","pt_br","pt_pt","ru_ru","tr_tr","uk_ua","zh_cn"]; // needs to be same as in lang.php
var settingsJSON = {}
@@ -299,7 +299,7 @@ function getString(key) {
// -----------------------------------------------------------------------------
// Get current language ISO code
// below has to match exactly teh values in /front/php/templates/language/lang.php & /front/js/common.js
// below has to match exactly the values in /front/php/templates/language/lang.php & /front/js/common.js
function getLangCode() {
UI_LANG = getSetting("UI_LANG");
@@ -307,19 +307,22 @@ function getLangCode() {
let lang_code = 'en_us';
switch (UI_LANG) {
case 'English':
case 'English (en_us)':
lang_code = 'en_us';
break;
case 'Spanish':
case 'Spanish (es_es)':
lang_code = 'es_es';
break;
case 'German':
case 'German (de_de)':
lang_code = 'de_de';
break;
case 'French':
case 'Farsi (fa_fa)':
lang_code = 'fa_fa';
break;
case 'French (fr_fr)':
lang_code = 'fr_fr';
break;
case 'Norwegian':
case 'Norwegian (nb_no)':
lang_code = 'nb_no';
break;
case 'Polish (pl_pl)':
@@ -337,7 +340,7 @@ function getLangCode() {
case 'Italian (it_it)':
lang_code = 'it_it';
break;
case 'Russian':
case 'Russian (ru_ru)':
lang_code = 'ru_ru';
break;
case 'Chinese (zh_cn)':

View File

@@ -497,26 +497,78 @@ function checkNotification() {
});
}
// Handling unread notifications favicon + bell floating number bublbe
/**
* Handles unread notification indicators:
* - Updates the floating bell count bubble.
* - Changes the favicon to indicate unread notifications.
* - Updates the page title with a numeric prefix like "(3)".
*
* The function expects that the favicon element has the ID `#favicon`
* and that the bell count element has the ID `#unread-notifications-bell-count`.
*
* @param {number} count - The number of unread notifications.
*
* @example
* handleUnreadNotifications(3);
* // → shows "(3)" in the title, notification icon, and bell bubble
*
* handleUnreadNotifications(0);
* // → restores original favicon and hides bubble
*/
function handleUnreadNotifications(count) {
$('#unread-notifications-bell-count').html(count);
const $countBubble = $('#unread-notifications-bell-count');
const $favicon = $('#favicon');
// Capture current title — ideally cache the original globally if calling repeatedly
const originalTitle = document.title;
// Update notification bubble and favicon
$countBubble.html(count);
if (count > 0) {
$('#unread-notifications-bell-count').show();
// Change the favicon to show there are notifications
$('#favicon').attr('href', 'img/NetAlertX_logo_notification.png');
// Update the title to include the count
document.title = `(${count}) ` + originalTitle;
$countBubble.show();
$favicon.attr('href', 'img/NetAlertX_logo_notification.png');
} else {
$('#unread-notifications-bell-count').hide();
// Change the favicon back to the original
$('#favicon').attr('href', 'img/NetAlertX_logo.png');
// Revert the title to the original title
document.title = originalTitle;
$countBubble.hide();
$favicon.attr('href', 'img/NetAlertX_logo.png');
}
// Update the document title with "(count)" prefix
document.title = addOrUpdateNumberBrackets(originalTitle, count);
}
// Store the original title of the document
var originalTitle = document.title;
/**
* Adds, updates, or removes a numeric prefix in parentheses before a given string.
*
* Behavior:
* - If `count` is 0 → removes any existing "(...)" prefix.
* - If string already starts with "(...)" → replaces it with the new count.
* - Otherwise → adds "(count)" as a prefix before the input text.
*
* Examples:
* addOrUpdateNumberBrackets("Device", 3) → "(3) Device"
* addOrUpdateNumberBrackets("(1) Device", 4) → "(4) Device"
* addOrUpdateNumberBrackets("(5) Device", 0) → "Device"
*
* @param {string} input - The input string (e.g., a device name).
* @param {number} count - The number to place inside the parentheses.
* @returns {string} The updated string with the correct "(count)" prefix.
*/
function addOrUpdateNumberBrackets(input, count) {
let result = input.trim();
if (count === 0) {
// Remove any existing "(...)" prefix
result = result.replace(/^\(.*?\)\s*/, '');
} else if (/^\(.*?\)/.test(result)) {
// Replace existing "(...)" prefix
result = result.replace(/^\(.*?\)/, `(${count})`);
} else {
// Add new "(count)" prefix
result = `(${count}) ${result}`;
}
return result.trim();
}
// Start checking for notifications periodically

View File

@@ -952,5 +952,41 @@ function initHoverNodeInfo() {
});
}
/**
* Generates a DataTables-style `lengthMenu` array with an optional custom entry inserted
* in the correct numeric order.
*
* Example output:
* [[10, 20, 25, 50, 100, 500, 100000], [10, 20, 25, 50, 100, 500, 'All']]
*
* @param {number} newEntry - A numeric entry to insert into the list (e.g. 30).
* If it already exists or equals -1, it will be ignored.
* @returns {Array[]} A two-dimensional array where:
* - The first array is the numeric page lengths.
* - The second array is the display labels (same values, but 'All' for -1).
*
* @example
* getLengthMenu(30);
* // → [[10, 20, 25, 30, 50, 100, 500, 100000], [10, 20, 25, 30, 50, 100, 500, 'All']]
*/
function getLengthMenu(newEntry) {
const values = [10, 20, 25, 50, 100, 500, 100000];
const labels = [10, 20, 25, 50, 100, 500, getString('Device_Tablelenght_all')];
// Insert newEntry in sorted order, skipping duplicates and -1/'All'
const insertSorted = (arr, val) => {
if (val === -1 || arr.includes(val)) return arr;
const idx = arr.findIndex(v => v > val || v === -1);
if (idx === -1) arr.push(val);
else arr.splice(idx, 0, val);
return arr;
};
insertSorted(values, newEntry);
insertSorted(labels, newEntry);
return [values, labels];
}
console.log("init ui_components.js")

View File

@@ -340,9 +340,14 @@
console.log(columnValue);
// update selected
executeAction('update', 'devMac', selectorMacs(), targetColumns, columnValue )
if(selectorMacs() != "")
{
executeAction('update', 'devMac', selectorMacs(), targetColumns, columnValue )
}
else
{
showModalWarning(getString("Gen_Error"), getString('Device_MultiEdit_No_Devices'));
}
}
// -----------------------------------------------------------------------------
@@ -354,22 +359,23 @@
function executeAction(action, whereColumnName, key, targetColumns, newTargetColumnValue )
{
$.get(`php/server/dbHelper.php?action=${action}&dbtable=Devices&columnName=${whereColumnName}&id=${key}&columns=${targetColumns}&values=${newTargetColumnValue}`, function(data) {
// console.log(data);
// console.log(data);
if (sanitize(data) == 'OK') {
showMessage(getString('Gen_DataUpdatedUITakesTime'));
// Remove navigation prompt "Are you sure you want to leave..."
window.onbeforeunload = null;
if (sanitize(data) == 'OK') {
showMessage(getString('Gen_DataUpdatedUITakesTime'));
// Remove navigation prompt "Are you sure you want to leave..."
window.onbeforeunload = null;
// update API endpoints to refresh the UI
updateApi("devices,appevents")
// update API endpoints to refresh the UI
updateApi("devices,appevents")
write_notification(`[Multi edit] Executed "${action}" on Columns "${targetColumns}" matching "${key}"`, 'info')
write_notification(`[Multi edit] Executed "${action}" on Columns "${targetColumns}" matching "${key}"`, 'info')
} else {
showMessage(getString('Gen_LockedDB'));
}
});
} else {
console.error(data);
showMessage(getString('Gen_LockedDB'));
}
});
}

View File

@@ -1,5 +1,10 @@
<?php
// 🔺----- API ENDPOINTS SUPERSEDED -----🔺
// check server/api_server/api_server_start.py for equivalents
// equivalent: /messaging/in-app
// 🔺----- API ENDPOINTS SUPERSEDED -----🔺
require dirname(__FILE__).'/../templates/globals.php';
//------------------------------------------------------------------------------

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "نسخة احتياطية",
"Device_MultiEdit_Fields": "الحقول",
"Device_MultiEdit_MassActions": "إجراءات جماعية",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "تعديل الأجهزة المحددة",
"Device_Searchbox": "بحث",
"Device_Shortcut_AllDevices": "جميع الأجهزة",

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Atenció, entrar valors incorrectes a continuació trencarà la configuració. Si us plau, abans feu còpia de seguretat la vostra base de dades o configuració de Dispositius (<a href=\"php/server/devices.php?action=ExportCSV\">clic per descarregar <i class=\"fa-solid fa-download fa-bounce\"></i></a>). Llegiu com per recuperar Dispositius des d'aquest fitxer al <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">documentació de Còpies de seguretat</a>. Per aplicar els canvis, feu click a la <b>Save<i class=\"fa-solid fa-save\"></i></b> icona de cada camp que volgueu actualitzar.",
"Device_MultiEdit_Fields": "Editar camps:",
"Device_MultiEdit_MassActions": "Accions massives:",
"Device_MultiEdit_No_Devices": "Cap dispositiu seleccionat.",
"Device_MultiEdit_Tooltip": "Atenció. Si feu clic a això s'aplicarà el valor de l'esquerra a tots els dispositius seleccionats a dalt.",
"Device_Searchbox": "Cerca",
"Device_Shortcut_AllDevices": "Els meus dispositius",

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "",
"Device_MultiEdit_Fields": "",
"Device_MultiEdit_MassActions": "",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "",
"Device_Searchbox": "",
"Device_Shortcut_AllDevices": "",

View File

@@ -203,10 +203,11 @@
"Device_MultiEdit_Backup": "Achtung! Falsche Eingaben können die Installation beschädigen. Bitte sichere deine Datenbank oder Gerätekonfiguration zuerst: (<a href=\"php/server/devices.php?action=ExportCSV\">Konfiguration herunterladen <i class=\"fa-solid fa-download fa-bounce\"></i></a>). Wie du dein Gerät wiederherstellen kannst findest du in der <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">Dokumentation über Backups</a>.",
"Device_MultiEdit_Fields": "Felder bearbeiten:",
"Device_MultiEdit_MassActions": "Massen aktionen:",
"Device_MultiEdit_No_Devices": "Keine Geräte ausgewählt.",
"Device_MultiEdit_Tooltip": "Achtung! Beim Drücken werden alle Werte auf die oben ausgewählten Geräte übertragen.",
"Device_Searchbox": "Suche",
"Device_Shortcut_AllDevices": "Meine Geräte",
"Device_Shortcut_AllNodes": "",
"Device_Shortcut_AllNodes": "Alle Knoten",
"Device_Shortcut_Archived": "Archiviert",
"Device_Shortcut_Connected": "Verbunden",
"Device_Shortcut_Devices": "Geräte",

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Careful, entering wrong values below will break your setup. Please backup your database or Devices configuration first (<a href=\"php/server/devices.php?action=ExportCSV\">click to download <i class=\"fa-solid fa-download fa-bounce\"></i></a>). Read how to recover Devices from this file in the <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">Backups documentation</a>. In order to apply your changes click the <b>Save<i class=\"fa-solid fa-save\"></i></b> icon on each field you want to update.",
"Device_MultiEdit_Fields": "Edit fields:",
"Device_MultiEdit_MassActions": "Mass actions:",
"Device_MultiEdit_No_Devices": "No devices selected.",
"Device_MultiEdit_Tooltip": "Careful. Clicking this will apply the value on the left to all devices selected above.",
"Device_Searchbox": "Search",
"Device_Shortcut_AllDevices": "My devices",

View File

@@ -201,6 +201,7 @@
"Device_MultiEdit_Backup": "Tenga cuidado, ingresar valores incorrectos o romperá su configuración. Por favor, haga una copia de seguridad de su base de datos o de la configuración de los dispositivos primero (<a href=\"php/server/devices.php?action=ExportCSV\">haga clic para descargar <i class=\"fa-solid fa-download fa-bounce\"></i></a>). Lea cómo recuperar dispositivos de este archivo en la documentación de <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">Copia de seguridad</a>. Para aplicar sus cambios haga click en el ícono de <b>Guardar<i class=\"fa-solid fa-save\"></i></b> en cada campo que quiera actualizar.",
"Device_MultiEdit_Fields": "Editar campos:",
"Device_MultiEdit_MassActions": "Acciones masivas:",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "Cuidado. Al hacer clic se aplicará el valor de la izquierda a todos los dispositivos seleccionados anteriormente.",
"Device_Searchbox": "Búsqueda",
"Device_Shortcut_AllDevices": "Mis dispositivos",

View File

@@ -0,0 +1,764 @@
{
"API_CUSTOM_SQL_description": "",
"API_CUSTOM_SQL_name": "",
"API_TOKEN_description": "",
"API_TOKEN_name": "",
"API_display_name": "",
"API_icon": "",
"About_Design": "",
"About_Exit": "",
"About_Title": "",
"AppEvents_AppEventProcessed": "",
"AppEvents_DateTimeCreated": "",
"AppEvents_Extra": "",
"AppEvents_GUID": "",
"AppEvents_Helper1": "",
"AppEvents_Helper2": "",
"AppEvents_Helper3": "",
"AppEvents_ObjectForeignKey": "",
"AppEvents_ObjectIndex": "",
"AppEvents_ObjectIsArchived": "",
"AppEvents_ObjectIsNew": "",
"AppEvents_ObjectPlugin": "",
"AppEvents_ObjectPrimaryID": "",
"AppEvents_ObjectSecondaryID": "",
"AppEvents_ObjectStatus": "",
"AppEvents_ObjectStatusColumn": "",
"AppEvents_ObjectType": "",
"AppEvents_Plugin": "",
"AppEvents_Type": "",
"BackDevDetail_Actions_Ask_Run": "",
"BackDevDetail_Actions_Not_Registered": "",
"BackDevDetail_Actions_Title_Run": "",
"BackDevDetail_Copy_Ask": "",
"BackDevDetail_Copy_Title": "",
"BackDevDetail_Tools_WOL_error": "",
"BackDevDetail_Tools_WOL_okay": "",
"BackDevices_Arpscan_disabled": "",
"BackDevices_Arpscan_enabled": "",
"BackDevices_Backup_CopError": "",
"BackDevices_Backup_Failed": "",
"BackDevices_Backup_okay": "",
"BackDevices_DBTools_DelDevError_a": "",
"BackDevices_DBTools_DelDevError_b": "",
"BackDevices_DBTools_DelDev_a": "",
"BackDevices_DBTools_DelDev_b": "",
"BackDevices_DBTools_DelEvents": "",
"BackDevices_DBTools_DelEventsError": "",
"BackDevices_DBTools_ImportCSV": "",
"BackDevices_DBTools_ImportCSVError": "",
"BackDevices_DBTools_ImportCSVMissing": "",
"BackDevices_DBTools_Purge": "",
"BackDevices_DBTools_UpdDev": "",
"BackDevices_DBTools_UpdDevError": "",
"BackDevices_DBTools_Upgrade": "",
"BackDevices_DBTools_UpgradeError": "",
"BackDevices_Device_UpdDevError": "",
"BackDevices_Restore_CopError": "",
"BackDevices_Restore_Failed": "",
"BackDevices_Restore_okay": "",
"BackDevices_darkmode_disabled": "",
"BackDevices_darkmode_enabled": "",
"CLEAR_NEW_FLAG_description": "",
"CLEAR_NEW_FLAG_name": "",
"CustProps_cant_remove": "",
"DAYS_TO_KEEP_EVENTS_description": "",
"DAYS_TO_KEEP_EVENTS_name": "",
"DISCOVER_PLUGINS_description": "",
"DISCOVER_PLUGINS_name": "",
"DevDetail_Children_Title": "",
"DevDetail_Copy_Device_Title": "",
"DevDetail_Copy_Device_Tooltip": "",
"DevDetail_CustomProperties_Title": "",
"DevDetail_CustomProps_reset_info": "",
"DevDetail_DisplayFields_Title": "",
"DevDetail_EveandAl_AlertAllEvents": "",
"DevDetail_EveandAl_AlertDown": "",
"DevDetail_EveandAl_Archived": "",
"DevDetail_EveandAl_NewDevice": "",
"DevDetail_EveandAl_NewDevice_Tooltip": "",
"DevDetail_EveandAl_RandomMAC": "",
"DevDetail_EveandAl_ScanCycle": "",
"DevDetail_EveandAl_ScanCycle_a": "",
"DevDetail_EveandAl_ScanCycle_z": "",
"DevDetail_EveandAl_Skip": "",
"DevDetail_EveandAl_Title": "",
"DevDetail_Events_CheckBox": "",
"DevDetail_GoToNetworkNode": "",
"DevDetail_Icon": "",
"DevDetail_Icon_Descr": "",
"DevDetail_Loading": "",
"DevDetail_MainInfo_Comments": "",
"DevDetail_MainInfo_Favorite": "",
"DevDetail_MainInfo_Group": "",
"DevDetail_MainInfo_Location": "",
"DevDetail_MainInfo_Name": "",
"DevDetail_MainInfo_Network": "",
"DevDetail_MainInfo_Network_Port": "",
"DevDetail_MainInfo_Network_Site": "",
"DevDetail_MainInfo_Network_Title": "",
"DevDetail_MainInfo_Owner": "",
"DevDetail_MainInfo_SSID": "",
"DevDetail_MainInfo_Title": "",
"DevDetail_MainInfo_Type": "",
"DevDetail_MainInfo_Vendor": "",
"DevDetail_MainInfo_mac": "",
"DevDetail_NavToChildNode": "",
"DevDetail_Network_Node_hover": "",
"DevDetail_Network_Port_hover": "",
"DevDetail_Nmap_Scans": "",
"DevDetail_Nmap_Scans_desc": "",
"DevDetail_Nmap_buttonDefault": "",
"DevDetail_Nmap_buttonDefault_text": "",
"DevDetail_Nmap_buttonDetail": "",
"DevDetail_Nmap_buttonDetail_text": "",
"DevDetail_Nmap_buttonFast": "",
"DevDetail_Nmap_buttonFast_text": "",
"DevDetail_Nmap_buttonSkipDiscovery": "",
"DevDetail_Nmap_buttonSkipDiscovery_text": "",
"DevDetail_Nmap_resultsLink": "",
"DevDetail_Owner_hover": "",
"DevDetail_Periodselect_All": "",
"DevDetail_Periodselect_LastMonth": "",
"DevDetail_Periodselect_LastWeek": "",
"DevDetail_Periodselect_LastYear": "",
"DevDetail_Periodselect_today": "",
"DevDetail_Run_Actions_Title": "",
"DevDetail_Run_Actions_Tooltip": "",
"DevDetail_SessionInfo_FirstSession": "",
"DevDetail_SessionInfo_LastIP": "",
"DevDetail_SessionInfo_LastSession": "",
"DevDetail_SessionInfo_StaticIP": "",
"DevDetail_SessionInfo_Status": "",
"DevDetail_SessionInfo_Title": "",
"DevDetail_SessionTable_Additionalinfo": "",
"DevDetail_SessionTable_Connection": "",
"DevDetail_SessionTable_Disconnection": "",
"DevDetail_SessionTable_Duration": "",
"DevDetail_SessionTable_IP": "",
"DevDetail_SessionTable_Order": "",
"DevDetail_Shortcut_CurrentStatus": "",
"DevDetail_Shortcut_DownAlerts": "",
"DevDetail_Shortcut_Presence": "",
"DevDetail_Shortcut_Sessions": "",
"DevDetail_Tab_Details": "",
"DevDetail_Tab_Events": "",
"DevDetail_Tab_EventsTableDate": "",
"DevDetail_Tab_EventsTableEvent": "",
"DevDetail_Tab_EventsTableIP": "",
"DevDetail_Tab_EventsTableInfo": "",
"DevDetail_Tab_Nmap": "",
"DevDetail_Tab_NmapEmpty": "",
"DevDetail_Tab_NmapTableExtra": "",
"DevDetail_Tab_NmapTableHeader": "",
"DevDetail_Tab_NmapTableIndex": "",
"DevDetail_Tab_NmapTablePort": "",
"DevDetail_Tab_NmapTableService": "",
"DevDetail_Tab_NmapTableState": "",
"DevDetail_Tab_NmapTableText": "",
"DevDetail_Tab_NmapTableTime": "",
"DevDetail_Tab_Plugins": "",
"DevDetail_Tab_Presence": "",
"DevDetail_Tab_Sessions": "",
"DevDetail_Tab_Tools": "",
"DevDetail_Tab_Tools_Internet_Info_Description": "",
"DevDetail_Tab_Tools_Internet_Info_Error": "",
"DevDetail_Tab_Tools_Internet_Info_Start": "",
"DevDetail_Tab_Tools_Internet_Info_Title": "",
"DevDetail_Tab_Tools_Nslookup_Description": "",
"DevDetail_Tab_Tools_Nslookup_Error": "",
"DevDetail_Tab_Tools_Nslookup_Start": "",
"DevDetail_Tab_Tools_Nslookup_Title": "",
"DevDetail_Tab_Tools_Speedtest_Description": "",
"DevDetail_Tab_Tools_Speedtest_Start": "",
"DevDetail_Tab_Tools_Speedtest_Title": "",
"DevDetail_Tab_Tools_Traceroute_Description": "",
"DevDetail_Tab_Tools_Traceroute_Error": "",
"DevDetail_Tab_Tools_Traceroute_Start": "",
"DevDetail_Tab_Tools_Traceroute_Title": "",
"DevDetail_Tools_WOL": "",
"DevDetail_Tools_WOL_noti": "",
"DevDetail_Tools_WOL_noti_text": "",
"DevDetail_Type_hover": "",
"DevDetail_Vendor_hover": "",
"DevDetail_WOL_Title": "",
"DevDetail_button_AddIcon": "",
"DevDetail_button_AddIcon_Help": "",
"DevDetail_button_AddIcon_Tooltip": "",
"DevDetail_button_Delete": "",
"DevDetail_button_DeleteEvents": "",
"DevDetail_button_DeleteEvents_Warning": "",
"DevDetail_button_Delete_ask": "",
"DevDetail_button_OverwriteIcons": "",
"DevDetail_button_OverwriteIcons_Tooltip": "",
"DevDetail_button_OverwriteIcons_Warning": "",
"DevDetail_button_Reset": "",
"DevDetail_button_Save": "",
"DeviceEdit_ValidMacIp": "",
"Device_MultiEdit": "",
"Device_MultiEdit_Backup": "",
"Device_MultiEdit_Fields": "",
"Device_MultiEdit_MassActions": "",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "",
"Device_Searchbox": "",
"Device_Shortcut_AllDevices": "",
"Device_Shortcut_AllNodes": "",
"Device_Shortcut_Archived": "",
"Device_Shortcut_Connected": "",
"Device_Shortcut_Devices": "",
"Device_Shortcut_DownAlerts": "",
"Device_Shortcut_DownOnly": "",
"Device_Shortcut_Favorites": "",
"Device_Shortcut_NewDevices": "",
"Device_Shortcut_OnlineChart": "",
"Device_TableHead_AlertDown": "",
"Device_TableHead_Connected_Devices": "",
"Device_TableHead_CustomProps": "",
"Device_TableHead_FQDN": "",
"Device_TableHead_Favorite": "",
"Device_TableHead_FirstSession": "",
"Device_TableHead_GUID": "",
"Device_TableHead_Group": "",
"Device_TableHead_Icon": "",
"Device_TableHead_LastIP": "",
"Device_TableHead_LastIPOrder": "",
"Device_TableHead_LastSession": "",
"Device_TableHead_Location": "",
"Device_TableHead_MAC": "",
"Device_TableHead_MAC_full": "",
"Device_TableHead_Name": "",
"Device_TableHead_NetworkSite": "",
"Device_TableHead_Owner": "",
"Device_TableHead_ParentRelType": "",
"Device_TableHead_Parent_MAC": "",
"Device_TableHead_Port": "",
"Device_TableHead_PresentLastScan": "",
"Device_TableHead_ReqNicsOnline": "",
"Device_TableHead_RowID": "",
"Device_TableHead_Rowid": "",
"Device_TableHead_SSID": "",
"Device_TableHead_SourcePlugin": "",
"Device_TableHead_Status": "",
"Device_TableHead_SyncHubNodeName": "",
"Device_TableHead_Type": "",
"Device_TableHead_Vendor": "",
"Device_Table_Not_Network_Device": "",
"Device_Table_info": "",
"Device_Table_nav_next": "",
"Device_Table_nav_prev": "",
"Device_Tablelenght": "",
"Device_Tablelenght_all": "",
"Device_Title": "",
"Devices_Filters": "",
"ENABLE_PLUGINS_description": "",
"ENABLE_PLUGINS_name": "",
"ENCRYPTION_KEY_description": "",
"ENCRYPTION_KEY_name": "",
"Email_display_name": "",
"Email_icon": "",
"Events_Loading": "",
"Events_Periodselect_All": "",
"Events_Periodselect_LastMonth": "",
"Events_Periodselect_LastWeek": "",
"Events_Periodselect_LastYear": "",
"Events_Periodselect_today": "",
"Events_Searchbox": "",
"Events_Shortcut_AllEvents": "",
"Events_Shortcut_DownAlerts": "",
"Events_Shortcut_Events": "",
"Events_Shortcut_MissSessions": "",
"Events_Shortcut_NewDevices": "",
"Events_Shortcut_Sessions": "",
"Events_Shortcut_VoidSessions": "",
"Events_TableHead_AdditionalInfo": "",
"Events_TableHead_Connection": "",
"Events_TableHead_Date": "",
"Events_TableHead_Device": "",
"Events_TableHead_Disconnection": "",
"Events_TableHead_Duration": "",
"Events_TableHead_DurationOrder": "",
"Events_TableHead_EventType": "",
"Events_TableHead_IP": "",
"Events_TableHead_IPOrder": "",
"Events_TableHead_Order": "",
"Events_TableHead_Owner": "",
"Events_TableHead_PendingAlert": "",
"Events_Table_info": "",
"Events_Table_nav_next": "",
"Events_Table_nav_prev": "",
"Events_Tablelenght": "",
"Events_Tablelenght_all": "",
"Events_Title": "",
"GRAPHQL_PORT_description": "",
"GRAPHQL_PORT_name": "",
"Gen_Action": "",
"Gen_Add": "",
"Gen_AddDevice": "",
"Gen_Add_All": "",
"Gen_All_Devices": "",
"Gen_AreYouSure": "",
"Gen_Backup": "",
"Gen_Cancel": "",
"Gen_Change": "",
"Gen_Copy": "",
"Gen_CopyToClipboard": "",
"Gen_DataUpdatedUITakesTime": "",
"Gen_Delete": "",
"Gen_DeleteAll": "",
"Gen_Description": "",
"Gen_Error": "",
"Gen_Filter": "",
"Gen_Generate": "",
"Gen_InvalidMac": "",
"Gen_LockedDB": "",
"Gen_NetworkMask": "",
"Gen_Offline": "",
"Gen_Okay": "",
"Gen_Online": "",
"Gen_Purge": "",
"Gen_ReadDocs": "",
"Gen_Remove_All": "",
"Gen_Remove_Last": "",
"Gen_Reset": "",
"Gen_Restore": "",
"Gen_Run": "",
"Gen_Save": "",
"Gen_Saved": "",
"Gen_Search": "",
"Gen_Select": "",
"Gen_SelectIcon": "",
"Gen_SelectToPreview": "",
"Gen_Selected_Devices": "",
"Gen_Subnet": "",
"Gen_Switch": "",
"Gen_Upd": "",
"Gen_Upd_Fail": "",
"Gen_Update": "",
"Gen_Update_Value": "",
"Gen_ValidIcon": "",
"Gen_Warning": "",
"Gen_Work_In_Progress": "",
"Gen_create_new_device": "",
"Gen_create_new_device_info": "",
"General_display_name": "",
"General_icon": "",
"HRS_TO_KEEP_NEWDEV_description": "",
"HRS_TO_KEEP_NEWDEV_name": "",
"HRS_TO_KEEP_OFFDEV_description": "",
"HRS_TO_KEEP_OFFDEV_name": "",
"LOADED_PLUGINS_description": "",
"LOADED_PLUGINS_name": "",
"LOG_LEVEL_description": "",
"LOG_LEVEL_name": "",
"Loading": "",
"Login_Box": "",
"Login_Default_PWD": "",
"Login_Info": "",
"Login_Psw-box": "",
"Login_Psw_alert": "",
"Login_Psw_folder": "",
"Login_Psw_new": "",
"Login_Psw_run": "",
"Login_Remember": "",
"Login_Remember_small": "",
"Login_Submit": "",
"Login_Toggle_Alert_headline": "",
"Login_Toggle_Info": "",
"Login_Toggle_Info_headline": "",
"Maint_PurgeLog": "",
"Maint_RestartServer": "",
"Maint_Restart_Server_noti_text": "",
"Maintenance_InitCheck": "",
"Maintenance_InitCheck_Checking": "",
"Maintenance_InitCheck_QuickSetupGuide": "",
"Maintenance_InitCheck_Success": "",
"Maintenance_ReCheck": "",
"Maintenance_Running_Version": "",
"Maintenance_Status": "",
"Maintenance_Title": "",
"Maintenance_Tool_DownloadConfig": "",
"Maintenance_Tool_DownloadConfig_text": "",
"Maintenance_Tool_DownloadWorkflows": "",
"Maintenance_Tool_DownloadWorkflows_text": "",
"Maintenance_Tool_ExportCSV": "",
"Maintenance_Tool_ExportCSV_noti": "",
"Maintenance_Tool_ExportCSV_noti_text": "",
"Maintenance_Tool_ExportCSV_text": "",
"Maintenance_Tool_ImportCSV": "",
"Maintenance_Tool_ImportCSV_noti": "",
"Maintenance_Tool_ImportCSV_noti_text": "",
"Maintenance_Tool_ImportCSV_text": "",
"Maintenance_Tool_ImportConfig_noti": "",
"Maintenance_Tool_ImportPastedCSV": "",
"Maintenance_Tool_ImportPastedCSV_noti_text": "",
"Maintenance_Tool_ImportPastedCSV_text": "",
"Maintenance_Tool_ImportPastedConfig": "",
"Maintenance_Tool_ImportPastedConfig_noti_text": "",
"Maintenance_Tool_ImportPastedConfig_text": "",
"Maintenance_Tool_arpscansw": "",
"Maintenance_Tool_arpscansw_noti": "",
"Maintenance_Tool_arpscansw_noti_text": "",
"Maintenance_Tool_arpscansw_text": "",
"Maintenance_Tool_backup": "",
"Maintenance_Tool_backup_noti": "",
"Maintenance_Tool_backup_noti_text": "",
"Maintenance_Tool_backup_text": "",
"Maintenance_Tool_check_visible": "",
"Maintenance_Tool_darkmode": "",
"Maintenance_Tool_darkmode_noti": "",
"Maintenance_Tool_darkmode_noti_text": "",
"Maintenance_Tool_darkmode_text": "",
"Maintenance_Tool_del_ActHistory": "",
"Maintenance_Tool_del_ActHistory_noti": "",
"Maintenance_Tool_del_ActHistory_noti_text": "",
"Maintenance_Tool_del_ActHistory_text": "",
"Maintenance_Tool_del_alldev": "",
"Maintenance_Tool_del_alldev_noti": "",
"Maintenance_Tool_del_alldev_noti_text": "",
"Maintenance_Tool_del_alldev_text": "",
"Maintenance_Tool_del_allevents": "",
"Maintenance_Tool_del_allevents30": "",
"Maintenance_Tool_del_allevents30_noti": "",
"Maintenance_Tool_del_allevents30_noti_text": "",
"Maintenance_Tool_del_allevents30_text": "",
"Maintenance_Tool_del_allevents_noti": "",
"Maintenance_Tool_del_allevents_noti_text": "",
"Maintenance_Tool_del_allevents_text": "",
"Maintenance_Tool_del_empty_macs": "",
"Maintenance_Tool_del_empty_macs_noti": "",
"Maintenance_Tool_del_empty_macs_noti_text": "",
"Maintenance_Tool_del_empty_macs_text": "",
"Maintenance_Tool_del_selecteddev": "",
"Maintenance_Tool_del_selecteddev_text": "",
"Maintenance_Tool_del_unknowndev": "",
"Maintenance_Tool_del_unknowndev_noti": "",
"Maintenance_Tool_del_unknowndev_noti_text": "",
"Maintenance_Tool_del_unknowndev_text": "",
"Maintenance_Tool_displayed_columns_text": "",
"Maintenance_Tool_drag_me": "",
"Maintenance_Tool_order_columns_text": "",
"Maintenance_Tool_purgebackup": "",
"Maintenance_Tool_purgebackup_noti": "",
"Maintenance_Tool_purgebackup_noti_text": "",
"Maintenance_Tool_purgebackup_text": "",
"Maintenance_Tool_restore": "",
"Maintenance_Tool_restore_noti": "",
"Maintenance_Tool_restore_noti_text": "",
"Maintenance_Tool_restore_text": "",
"Maintenance_Tool_upgrade_database_noti": "",
"Maintenance_Tool_upgrade_database_noti_text": "",
"Maintenance_Tool_upgrade_database_text": "",
"Maintenance_Tools_Tab_BackupRestore": "",
"Maintenance_Tools_Tab_Logging": "",
"Maintenance_Tools_Tab_Settings": "",
"Maintenance_Tools_Tab_Tools": "",
"Maintenance_Tools_Tab_UISettings": "",
"Maintenance_arp_status": "",
"Maintenance_arp_status_off": "",
"Maintenance_arp_status_on": "",
"Maintenance_built_on": "",
"Maintenance_current_version": "",
"Maintenance_database_backup": "",
"Maintenance_database_backup_found": "",
"Maintenance_database_backup_total": "",
"Maintenance_database_lastmod": "",
"Maintenance_database_path": "",
"Maintenance_database_rows": "",
"Maintenance_database_size": "",
"Maintenance_lang_selector_apply": "",
"Maintenance_lang_selector_empty": "",
"Maintenance_lang_selector_lable": "",
"Maintenance_lang_selector_text": "",
"Maintenance_new_version": "",
"Maintenance_themeselector_apply": "",
"Maintenance_themeselector_empty": "",
"Maintenance_themeselector_lable": "",
"Maintenance_themeselector_text": "",
"Maintenance_version": "",
"NETWORK_DEVICE_TYPES_description": "",
"NETWORK_DEVICE_TYPES_name": "",
"Navigation_About": "",
"Navigation_AppEvents": "",
"Navigation_Devices": "",
"Navigation_Donations": "",
"Navigation_Events": "",
"Navigation_Integrations": "",
"Navigation_Maintenance": "",
"Navigation_Monitoring": "",
"Navigation_Network": "",
"Navigation_Notifications": "",
"Navigation_Plugins": "",
"Navigation_Presence": "",
"Navigation_Report": "",
"Navigation_Settings": "",
"Navigation_SystemInfo": "",
"Navigation_Workflows": "",
"Network_Assign": "",
"Network_Cant_Assign": "",
"Network_Cant_Assign_No_Node_Selected": "",
"Network_Configuration_Error": "",
"Network_Connected": "",
"Network_Devices": "",
"Network_ManageAdd": "",
"Network_ManageAdd_Name": "",
"Network_ManageAdd_Name_text": "",
"Network_ManageAdd_Port": "",
"Network_ManageAdd_Port_text": "",
"Network_ManageAdd_Submit": "",
"Network_ManageAdd_Type": "",
"Network_ManageAdd_Type_text": "",
"Network_ManageAssign": "",
"Network_ManageDel": "",
"Network_ManageDel_Name": "",
"Network_ManageDel_Name_text": "",
"Network_ManageDel_Submit": "",
"Network_ManageDevices": "",
"Network_ManageEdit": "",
"Network_ManageEdit_ID": "",
"Network_ManageEdit_ID_text": "",
"Network_ManageEdit_Name": "",
"Network_ManageEdit_Name_text": "",
"Network_ManageEdit_Port": "",
"Network_ManageEdit_Port_text": "",
"Network_ManageEdit_Submit": "",
"Network_ManageEdit_Type": "",
"Network_ManageEdit_Type_text": "",
"Network_ManageLeaf": "",
"Network_ManageUnassign": "",
"Network_NoAssignedDevices": "",
"Network_NoDevices": "",
"Network_Node": "",
"Network_Node_Name": "",
"Network_Parent": "",
"Network_Root": "",
"Network_Root_Not_Configured": "",
"Network_Root_Unconfigurable": "",
"Network_ShowArchived": "",
"Network_ShowOffline": "",
"Network_Table_Hostname": "",
"Network_Table_IP": "",
"Network_Table_State": "",
"Network_Title": "",
"Network_UnassignedDevices": "",
"Notifications_All": "",
"Notifications_Mark_All_Read": "",
"PIALERT_WEB_PASSWORD_description": "",
"PIALERT_WEB_PASSWORD_name": "",
"PIALERT_WEB_PROTECTION_description": "",
"PIALERT_WEB_PROTECTION_name": "",
"PLUGINS_KEEP_HIST_description": "",
"PLUGINS_KEEP_HIST_name": "",
"Plugins_DeleteAll": "",
"Plugins_Filters_Mac": "",
"Plugins_History": "",
"Plugins_Obj_DeleteListed": "",
"Plugins_Objects": "",
"Plugins_Out_of": "",
"Plugins_Unprocessed_Events": "",
"Plugins_no_control": "",
"Presence_CalHead_day": "",
"Presence_CalHead_lang": "",
"Presence_CalHead_month": "",
"Presence_CalHead_quarter": "",
"Presence_CalHead_week": "",
"Presence_CalHead_year": "",
"Presence_CallHead_Devices": "",
"Presence_Key_OnlineNow": "",
"Presence_Key_OnlineNow_desc": "",
"Presence_Key_OnlinePast": "",
"Presence_Key_OnlinePastMiss": "",
"Presence_Key_OnlinePastMiss_desc": "",
"Presence_Key_OnlinePast_desc": "",
"Presence_Loading": "",
"Presence_Shortcut_AllDevices": "",
"Presence_Shortcut_Archived": "",
"Presence_Shortcut_Connected": "",
"Presence_Shortcut_Devices": "",
"Presence_Shortcut_DownAlerts": "",
"Presence_Shortcut_Favorites": "",
"Presence_Shortcut_NewDevices": "",
"Presence_Title": "",
"REFRESH_FQDN_description": "",
"REFRESH_FQDN_name": "",
"REPORT_DASHBOARD_URL_description": "",
"REPORT_DASHBOARD_URL_name": "",
"REPORT_ERROR": "",
"REPORT_MAIL_description": "",
"REPORT_MAIL_name": "",
"REPORT_TITLE": "",
"RandomMAC_hover": "",
"Reports_Sent_Log": "",
"SCAN_SUBNETS_description": "",
"SCAN_SUBNETS_name": "",
"SYSTEM_TITLE": "",
"Setting_Override": "",
"Setting_Override_Description": "",
"Settings_Metadata_Toggle": "",
"Settings_Show_Description": "",
"Settings_device_Scanners_desync": "",
"Settings_device_Scanners_desync_popup": "",
"Speedtest_Results": "",
"Systeminfo_AvailableIps": "",
"Systeminfo_CPU": "",
"Systeminfo_CPU_Cores": "",
"Systeminfo_CPU_Name": "",
"Systeminfo_CPU_Speed": "",
"Systeminfo_CPU_Temp": "",
"Systeminfo_CPU_Vendor": "",
"Systeminfo_Client_Resolution": "",
"Systeminfo_Client_User_Agent": "",
"Systeminfo_General": "",
"Systeminfo_General_Date": "",
"Systeminfo_General_Date2": "",
"Systeminfo_General_Full_Date": "",
"Systeminfo_General_TimeZone": "",
"Systeminfo_Memory": "",
"Systeminfo_Memory_Total_Memory": "",
"Systeminfo_Memory_Usage": "",
"Systeminfo_Memory_Usage_Percent": "",
"Systeminfo_Motherboard": "",
"Systeminfo_Motherboard_BIOS": "",
"Systeminfo_Motherboard_BIOS_Date": "",
"Systeminfo_Motherboard_BIOS_Vendor": "",
"Systeminfo_Motherboard_Manufactured": "",
"Systeminfo_Motherboard_Name": "",
"Systeminfo_Motherboard_Revision": "",
"Systeminfo_Network": "",
"Systeminfo_Network_Accept_Encoding": "",
"Systeminfo_Network_Accept_Language": "",
"Systeminfo_Network_Connection_Port": "",
"Systeminfo_Network_HTTP_Host": "",
"Systeminfo_Network_HTTP_Referer": "",
"Systeminfo_Network_HTTP_Referer_String": "",
"Systeminfo_Network_Hardware": "",
"Systeminfo_Network_Hardware_Interface_Mask": "",
"Systeminfo_Network_Hardware_Interface_Name": "",
"Systeminfo_Network_Hardware_Interface_RX": "",
"Systeminfo_Network_Hardware_Interface_TX": "",
"Systeminfo_Network_IP": "",
"Systeminfo_Network_IP_Connection": "",
"Systeminfo_Network_IP_Server": "",
"Systeminfo_Network_MIME": "",
"Systeminfo_Network_Request_Method": "",
"Systeminfo_Network_Request_Time": "",
"Systeminfo_Network_Request_URI": "",
"Systeminfo_Network_Secure_Connection": "",
"Systeminfo_Network_Secure_Connection_String": "",
"Systeminfo_Network_Server_Name": "",
"Systeminfo_Network_Server_Name_String": "",
"Systeminfo_Network_Server_Query": "",
"Systeminfo_Network_Server_Query_String": "",
"Systeminfo_Network_Server_Version": "",
"Systeminfo_Services": "",
"Systeminfo_Services_Description": "",
"Systeminfo_Services_Name": "",
"Systeminfo_Storage": "",
"Systeminfo_Storage_Device": "",
"Systeminfo_Storage_Mount": "",
"Systeminfo_Storage_Size": "",
"Systeminfo_Storage_Type": "",
"Systeminfo_Storage_Usage": "",
"Systeminfo_Storage_Usage_Free": "",
"Systeminfo_Storage_Usage_Mount": "",
"Systeminfo_Storage_Usage_Total": "",
"Systeminfo_Storage_Usage_Used": "",
"Systeminfo_System": "",
"Systeminfo_System_AVG": "",
"Systeminfo_System_Architecture": "",
"Systeminfo_System_Kernel": "",
"Systeminfo_System_OSVersion": "",
"Systeminfo_System_Running_Processes": "",
"Systeminfo_System_System": "",
"Systeminfo_System_Uname": "",
"Systeminfo_System_Uptime": "",
"Systeminfo_This_Client": "",
"Systeminfo_USB_Devices": "",
"TICKER_MIGRATE_TO_NETALERTX": "",
"TIMEZONE_description": "",
"TIMEZONE_name": "",
"UI_DEV_SECTIONS_description": "",
"UI_DEV_SECTIONS_name": "",
"UI_ICONS_description": "",
"UI_ICONS_name": "",
"UI_LANG_description": "",
"UI_LANG_name": "",
"UI_MY_DEVICES_description": "",
"UI_MY_DEVICES_name": "",
"UI_NOT_RANDOM_MAC_description": "",
"UI_NOT_RANDOM_MAC_name": "",
"UI_PRESENCE_description": "",
"UI_PRESENCE_name": "",
"UI_REFRESH_description": "",
"UI_REFRESH_name": "",
"VERSION_description": "",
"VERSION_name": "",
"WF_Action_Add": "",
"WF_Action_field": "",
"WF_Action_type": "",
"WF_Action_value": "",
"WF_Actions": "",
"WF_Add": "",
"WF_Add_Condition": "",
"WF_Add_Group": "",
"WF_Condition_field": "",
"WF_Condition_operator": "",
"WF_Condition_value": "",
"WF_Conditions": "",
"WF_Conditions_logic_rules": "",
"WF_Duplicate": "",
"WF_Enabled": "",
"WF_Export": "",
"WF_Export_Copy": "",
"WF_Import": "",
"WF_Import_Copy": "",
"WF_Name": "",
"WF_Remove": "",
"WF_Remove_Copy": "",
"WF_Save": "",
"WF_Trigger": "",
"WF_Trigger_event_type": "",
"WF_Trigger_type": "",
"add_icon_event_tooltip": "",
"add_option_event_tooltip": "",
"copy_icons_event_tooltip": "",
"devices_old": "",
"general_event_description": "",
"general_event_title": "",
"go_to_device_event_tooltip": "",
"go_to_node_event_tooltip": "",
"new_version_available": "",
"report_guid": "",
"report_guid_missing": "",
"report_select_format": "",
"report_time": "",
"run_event_tooltip": "",
"select_icon_event_tooltip": "",
"settings_core_icon": "",
"settings_core_label": "",
"settings_device_scanners": "",
"settings_device_scanners_icon": "",
"settings_device_scanners_info": "",
"settings_device_scanners_label": "",
"settings_enabled": "",
"settings_enabled_icon": "",
"settings_expand_all": "",
"settings_imported": "",
"settings_imported_label": "",
"settings_missing": "",
"settings_missing_block": "",
"settings_old": "",
"settings_other_scanners": "",
"settings_other_scanners_icon": "",
"settings_other_scanners_label": "",
"settings_publishers": "",
"settings_publishers_icon": "",
"settings_publishers_info": "",
"settings_publishers_label": "",
"settings_readonly": "",
"settings_saved": "",
"settings_system_icon": "",
"settings_system_label": "",
"settings_update_item_warning": "",
"test_event_tooltip": ""
}

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Attention, renseigner des valeurs non cohérentes ci-dessous peut bloquer votre paramétrage. Veillez à faire une sauvegarde de votre base de données ou de la configuration de vos appareils en premier lieu (<a href=\"php/server/devices.php?action=ExportCSV\">clisuer ici pour la télécharger <i class=\"fa-solid fa-download fa-bounce\"></i></a>). Renseignez-vous sur comment remettre les appareils depuis ce fichier via la <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">documentation des sauvegardes</a>. Afin d'enregistrer les changements, cliquer sur l'icône <b>Sauvegarder<i class=\"fa-solid fa-save\"></i></b> sur chaque champ que vous voulez mettre à jour.",
"Device_MultiEdit_Fields": "Champs modifiables:",
"Device_MultiEdit_MassActions": "Actions en masse:",
"Device_MultiEdit_No_Devices": "Aucun appareil sélectionné.",
"Device_MultiEdit_Tooltip": "Attention. Ceci va appliquer la valeur de gauche à tous les appareils sélectionnés au-dessus.",
"Device_Searchbox": "Rechercher",
"Device_Shortcut_AllDevices": "Mes appareils",
@@ -760,4 +761,4 @@
"settings_system_label": "Système",
"settings_update_item_warning": "Mettre à jour la valeur ci-dessous. Veillez à bien suivre le même format qu'auparavant. <b>Il n'y a pas de pas de contrôle.</b>",
"test_event_tooltip": "Enregistrer d'abord vos modifications avant de tester vôtre paramétrage."
}
}

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Attento, l'inserimento di valori errati di seguito interromperà la configurazione. Effettua prima il backup del database o della configurazione dei dispositivi (<a href=\"php/server/devices.php?action=ExportCSV\">fai clic per scaricare <i class=\"fa-solid fa-download fa-bounce\"></i> </a>). Leggi come ripristinare i dispositivi da questo file nella <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\" _blank\">Documentazione di backup</a>. Per applicare le modifiche, fai clic sull'icona <b>Salva<i class=\"fa-solid fa-save\"></i></b> su ogni campo che desideri aggiornare.",
"Device_MultiEdit_Fields": "Modifica campi:",
"Device_MultiEdit_MassActions": "Azioni di massa:",
"Device_MultiEdit_No_Devices": "Nessun dispositivo selezionato.",
"Device_MultiEdit_Tooltip": "Attento. Facendo clic verrà applicato il valore sulla sinistra a tutti i dispositivi selezionati sopra.",
"Device_Searchbox": "Cerca",
"Device_Shortcut_AllDevices": "I miei dispositivi",
@@ -760,4 +761,4 @@
"settings_system_label": "Sistema",
"settings_update_item_warning": "Aggiorna il valore qui sotto. Fai attenzione a seguire il formato precedente. <b>La convalida non viene eseguita.</b>",
"test_event_tooltip": "Salva le modifiche prima di provare le nuove impostazioni."
}
}

View File

@@ -5,7 +5,7 @@
// ###################################
$defaultLang = "en_us";
$allLanguages = ["en_us", "es_es", "de_de", "fr_fr", "it_it", "ru_ru", "nb_no", "pl_pl", "pt_br", "pt_pt", "tr_tr", "zh_cn", "cs_cz", "ar_ar", "ca_ca", "uk_ua"];
$allLanguages = [ "ar_ar", "ca_ca", "cs_cz", "de_de", "en_us", "es_es", "fa_fa", "fr_fr", "it_it", "nb_no", "pl_pl", "pt_br", "pt_pt", "ru_ru", "tr_tr", "uk_ua", "zh_cn"];
global $db;
@@ -14,22 +14,24 @@ $result = $db->querySingle("SELECT setValue FROM Settings WHERE setKey = 'UI_LAN
// below has to match exactly the values in /front/php/templates/language/lang.php & /front/js/common.js
switch($result){
case 'Spanish': $pia_lang_selected = 'es_es'; break;
case 'German': $pia_lang_selected = 'de_de'; break;
case 'Norwegian': $pia_lang_selected = 'nb_no'; break;
case 'Polish (pl_pl)': $pia_lang_selected = 'pl_pl'; break;
case 'Portuguese (pt_br)': $pia_lang_selected = 'pt_br'; break;
case 'Portuguese (pt_pt)': $pia_lang_selected = 'pt_pt'; break;
case 'Italian (it_it)': $pia_lang_selected = 'it_it'; break;
case 'Russian': $pia_lang_selected = 'ru_ru'; break;
case 'Turkish (tr_tr)': $pia_lang_selected = 'tr_tr'; break;
case 'French': $pia_lang_selected = 'fr_fr'; break;
case 'Chinese (zh_cn)': $pia_lang_selected = 'zh_cn'; break;
case 'Czech (cs_cz)': $pia_lang_selected = 'cs_cz'; break;
case 'Arabic (ar_ar)': $pia_lang_selected = 'ar_ar'; break;
case 'Catalan (ca_ca)': $pia_lang_selected = 'ca_ca'; break;
case 'Ukrainian (uk_ua)': $pia_lang_selected = 'uk_ua'; break;
default: $pia_lang_selected = 'en_us'; break;
case 'Arabic (ar_ar)': $pia_lang_selected = 'ar_ar'; break;
case 'Catalan (ca_ca)': $pia_lang_selected = 'ca_ca'; break;
case 'Czech (cs_cz)': $pia_lang_selected = 'cs_cz'; break;
case 'German (de_de)': $pia_lang_selected = 'de_de'; break;
case 'English (en_us)': $pia_lang_selected = 'en_us'; break;
case 'Spanish (es_es)': $pia_lang_selected = 'es_es'; break;
case 'Farsi (fa_fa)': $pia_lang_selected = 'fa_fa'; break;
case 'French (fr_fr)': $pia_lang_selected = 'fr_fr'; break;
case 'Italian (it_it)': $pia_lang_selected = 'it_it'; break;
case 'Norwegian (nb_no)': $pia_lang_selected = 'nb_no'; break;
case 'Polish (pl_pl)': $pia_lang_selected = 'pl_pl'; break;
case 'Portuguese (pt_br)': $pia_lang_selected = 'pt_br'; break;
case 'Portuguese (pt_pt)': $pia_lang_selected = 'pt_pt'; break;
case 'Russian (ru_ru)': $pia_lang_selected = 'ru_ru'; break;
case 'Turkish (tr_tr)': $pia_lang_selected = 'tr_tr'; break;
case 'Ukrainian (uk_ua)': $pia_lang_selected = 'uk_ua'; break;
case 'Chinese (zh_cn)': $pia_lang_selected = 'zh_cn'; break;
default: $pia_lang_selected = 'en_us'; break;
}
if (isset($pia_lang_selected) == FALSE or (strlen($pia_lang_selected) == 0)) {$pia_lang_selected = $defaultLang;}

View File

@@ -33,6 +33,7 @@ def merge_translations(main_file, other_files):
if __name__ == "__main__":
current_path = os.path.dirname(os.path.abspath(__file__))
# language codes can be found here: http://www.lingoes.net/en/translator/langcode.htm
json_files = ["en_us.json", "de_de.json", "es_es.json", "fr_fr.json", "nb_no.json", "ru_ru.json", "it_it.json", "pt_br.json", "pt_pt.json", "pl_pl.json", "zh_cn.json", "tr_tr.json", "cs_cz.json", "ar_ar.json", "ca_ca.json", "uk_ua.json"]
# "en_us.json" has to be first!
json_files = [ "en_us.json", "ar_ar.json", "ca_ca.json", "cs_cz.json", "de_de.json", "es_es.json", "fa_fa.json", "fr_fr.json", "it_it.json", "nb_no.json", "pl_pl.json", "pt_br.json", "pt_pt.json", "ru_ru.json", "tr_tr.json", "uk_ua.json", "zh_cn.json"]
file_paths = [os.path.join(current_path, file) for file in json_files]
merge_translations(file_paths[0], file_paths[1:])
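# Sketch of the merge's effect (the implementation is in merge_translations
# above; behaviour inferred from the language files): each translation
# presumably ends up with the full en_us key set, untranslated entries left "":
#   merged = {key: other.get(key, "") for key in en_us_keys}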

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Forsiktig, hvis du legger inn feil verdier nedenfor, vil oppsettet ditt ødelegges. Ta sikkerhetskopi av databasen eller enhetskonfigurasjonen først (<a href=\"php/server/devices.php?action=ExportCSV\">klikk for å laste ned <i class=\"fa-solid fa-download fa-bounce\"></i> </a>). Les hvordan du gjenoppretter enheter fra denne filen i <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">Sikkerhetskopierings dokumentasjon</a>.",
"Device_MultiEdit_Fields": "Rediger felt:",
"Device_MultiEdit_MassActions": "Flerhandlinger:",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "Forsiktig. Ved å klikke på denne vil verdien til venstre brukes på alle enhetene som er valgt ovenfor.",
"Device_Searchbox": "Søk",
"Device_Shortcut_AllDevices": "Mine Enheter",

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Uwaga, wprowadzenie niepoprawnych wartości poniżej może uszkodzić Twoją konfigurację. Najpierw wykonaj kopię zapasową bazy danych lub konfiguracji urządzeń (<a href=\"php/server/devices.php?action=ExportCSV\">kliknij, aby pobrać <i class=\"fa-solid fa-download fa-bounce\"></i></a>). Instrukcje odzyskiwania urządzeń z tego pliku znajdziesz w <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">dokumentacji kopii zapasowych</a>. Aby zastosować zmiany, kliknij ikonę <b>Zapisz<i class=\"fa-solid fa-save\"></i></b> przy każdym polu, które chcesz zaktualizować.",
"Device_MultiEdit_Fields": "Edytuj pola:",
"Device_MultiEdit_MassActions": "Operacje zbiorcze:",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "Uwaga. Kliknięcie tego spowoduje zastosowanie wartości po lewej stronie do wszystkich wybranych powyżej urządzeń.",
"Device_Searchbox": "Szukaj",
"Device_Shortcut_AllDevices": "Moje urządzenia",

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Cuidado, inserir valores errados abaixo interromperá sua configuração. Faça backup do seu banco de dados ou da configuração dos dispositivos primeiro (<a href=\"php/server/devices.php?action=ExportCSV\">clique para baixar <i class=\"fa-solid fa-download fa-bounce\"></i> </a>). Leia como recuperar dispositivos deste arquivo no <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\" _blank\">Documentação de backups</a>.",
"Device_MultiEdit_Fields": "Editar campos:",
"Device_MultiEdit_MassActions": "Ações em massa:",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "Cuidadoso. Clicar aqui aplicará o valor à esquerda a todos os dispositivos selecionados acima.",
"Device_Searchbox": "Procurar",
"Device_Shortcut_AllDevices": "Meus dispositivos",

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "",
"Device_MultiEdit_Fields": "Editar campos:",
"Device_MultiEdit_MassActions": "Ações em massa:",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "Cuidadoso. Clicar aqui aplicará o valor à esquerda a todos os dispositivos selecionados acima.",
"Device_Searchbox": "Procurar",
"Device_Shortcut_AllDevices": "",
@@ -760,4 +761,4 @@
"settings_system_label": "",
"settings_update_item_warning": "",
"test_event_tooltip": "Guarde as alterações antes de testar as definições."
}
}

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Будьте осторожны: ввод неправильных значений ниже приведет к поломке вашей настройки. Сначала сделайте резервную копию базы данных или конфигурации устройств (<a href=\"php/server/devices.php?action=ExportCSV\">нажмите для загрузки <i class=\"fa-solid fa-download fa-bounce\"></i></a>). О том, как восстановить Устройства из этого файла, читайте в разделе <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">Документация о резервном копировании</a>. Чтобы применить свои изменения, нажмите на значок <b> Сохранить <i class = \"fa-solid fa-save\"> </i> </b> в каждом поле, которое вы хотите обновить.",
"Device_MultiEdit_Fields": "Редактировать поля:",
"Device_MultiEdit_MassActions": "Массовые действия:",
"Device_MultiEdit_No_Devices": "Устройства не выбраны.",
"Device_MultiEdit_Tooltip": "Осторожно. При нажатии на эту кнопку значение слева будет применено ко всем устройствам, выбранным выше.",
"Device_Searchbox": "Поиск",
"Device_Shortcut_AllDevices": "Мои устройства",

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Dikkat, aşağıya yanlış değerler girmeniz yapılandırmanızı bozabilir. Lütfen önce veritabanınızı veya Cihazlar yapılandırmanızı yedekleyin (<a href=\"php/server/devices.php?action=ExportCSV\">İndirmeniz için tıklayın <i class=\"fa-solid fa-download fa-bounce\"></i></a>). Bu dosyadan Cihazları nasıl geri yükleyeceğinizi öğrenmek için <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">Yedekleme dökümantasyonunu</a> okuyun.",
"Device_MultiEdit_Fields": "Alanları Düzenle:",
"Device_MultiEdit_MassActions": "Toplu komutlar:",
"Device_MultiEdit_No_Devices": "",
"Device_MultiEdit_Tooltip": "Dikkat. Buna tıklamak, soldaki değeri yukarıda seçilen tüm cihazlara uygulayacaktır.",
"Device_Searchbox": "Arama",
"Device_Shortcut_AllDevices": "Cihazlarım",

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "Обережно, введення неправильних значень нижче призведе до порушення роботи налаштувань. Спочатку створіть резервну копію бази даних або конфігурації пристроїв (<a href=\"php/server/devices.php?action=ExportCSV\">натисніть, щоб завантажити <i class=\"fa-solid fa-download fa-bounce\"></i></a>). Прочитайте, як відновити пристрої з цього файлу, у <a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">документації щодо резервних копій</a>. Щоб застосувати зміни, натисніть значок <b>Зберегти<i class=\"fa-solid fa-save\"></i></b> у кожному полі, яке потрібно оновити.",
"Device_MultiEdit_Fields": "Редагувати поля:",
"Device_MultiEdit_MassActions": "Масові акції:",
"Device_MultiEdit_No_Devices": "Не вибрано жодного пристрою.",
"Device_MultiEdit_Tooltip": "Обережно. Якщо натиснути це, значення зліва буде застосовано до всіх пристроїв, вибраних вище.",
"Device_Searchbox": "Пошук",
"Device_Shortcut_AllDevices": "Мої пристрої",
@@ -760,4 +761,4 @@
"settings_system_label": "Система",
"settings_update_item_warning": "Оновіть значення нижче. Слідкуйте за попереднім форматом. <b>Перевірка не виконана.</b>",
"test_event_tooltip": "Перш ніж перевіряти налаштування, збережіть зміни."
}
}

View File

@@ -199,6 +199,7 @@
"Device_MultiEdit_Backup": "小心,输入错误的值将破坏您的设置。请先备份您的数据库或设备配置(<a href=\"php/server/devices.php?action=ExportCSV\">点击下载<i class=\"fa-solid fa-download fa-bounce\"></i></a>)。在<a href=\"https://github.com/jokob-sk/NetAlertX/blob/main/docs/BACKUPS.md#scenario-2-corrupted-database\" target=\"_blank\">备份文档</a>中了解如何从此文件恢复设备。要应用更改,请在每个需要更新的字段点击<b>保存<i class='fa-solid fa-save'></i></b>图标。",
"Device_MultiEdit_Fields": "编辑:",
"Device_MultiEdit_MassActions": "谨慎操作:",
"Device_MultiEdit_No_Devices": "未选择设备。",
"Device_MultiEdit_Tooltip": "小心。 单击此按钮会将左侧的值应用到上面选择的所有设备。",
"Device_Searchbox": "搜索",
"Device_Shortcut_AllDevices": "我的设备",

View File

@@ -1,36 +1,32 @@
#!/usr/bin/env python
import json
import subprocess
import argparse
import os
import pathlib
import sys
from datetime import datetime
import time
import re
import unicodedata
import paho.mqtt.client as mqtt
# from paho.mqtt import client as mqtt_client
# from paho.mqtt import CallbackAPIVersion as mqtt_CallbackAPIVersion
import hashlib
import sqlite3
from pytz import timezone
# Register NetAlertX directories
INSTALL_PATH="/app"
INSTALL_PATH = "/app"
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
# NetAlertX modules
import conf
from const import apiPath, confFileName, logPath
from const import confFileName, logPath
from plugin_utils import getPluginObject
from plugin_helper import Plugin_Objects
from logger import mylog, Logger, append_line_to_file
from helper import timeNowTZ, get_setting_value, bytes_to_string, sanitize_string, normalize_string
from models.notification_instance import NotificationInstance
from logger import mylog, Logger
from helper import timeNowTZ, get_setting_value, bytes_to_string, \
sanitize_string, normalize_string
from database import DB, get_device_stats
from pytz import timezone
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
@@ -49,20 +45,19 @@ plugin_objects = Plugin_Objects(RESULT_FILE)
md5_hash = hashlib.md5()
# globals
mqtt_sensors = []
mqtt_connected_to_broker = False
mqtt_client = None # mqtt client
topic_root = get_setting_value('MQTT_topic_root')
def main():
mylog('verbose', [f'[{pluginName}](publisher) In script'])
mylog('verbose', [f'[{pluginName}](publisher) In script'])
# Check if basic config settings supplied
if check_config() == False:
mylog('verbose', [f'[{pluginName}] ⚠ ERROR: Publisher notification gateway not set up correctly. Check your {confFileName} {pluginName}_* variables.'])
if not check_config():
return
# Create a database connection
@@ -70,60 +65,85 @@ def main():
db.open()
mqtt_start(db)
mqtt_client.disconnect()
plugin_objects.write_result_file()
#-------------------------------------------------------------------------------
# -----------------------------------------------------------------------------
# MQTT
#-------------------------------------------------------------------------------
#-------------------------------------------------------------------------------
# -----------------------------------------------------------------------------
# -----------------------------------------------------------------------------
def check_config():
if get_setting_value('MQTT_BROKER') == '' or get_setting_value('MQTT_PORT') == '' or get_setting_value('MQTT_USER') == '' or get_setting_value('MQTT_PASSWORD') == '':
mylog('verbose', [f'[Check Config] ⚠ ERROR: MQTT service not set up correctly. Check your {confFileName} MQTT_* variables.'])
return False
else:
return True
"""
Checks whether the MQTT configuration settings are properly set.
Returns:
bool: True if all required MQTT settings
('MQTT_BROKER', 'MQTT_PORT', 'MQTT_USER', 'MQTT_PASSWORD')
are non-empty;
False otherwise. Logs a verbose error message
if any setting is missing.
"""
if get_setting_value('MQTT_BROKER') == '' \
or get_setting_value('MQTT_PORT') == '' \
or get_setting_value('MQTT_USER') == '' \
or get_setting_value('MQTT_PASSWORD') == '':
mylog('verbose', [f'[Check Config] ⚠ ERROR: MQTT service not set up '
f'correctly. Check your {confFileName} MQTT_* variables.'])
return False
else:
return True
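# Hedged app.conf sketch of the settings check_config() expects
# (names from the code above; values purely illustrative):
#   MQTT_BROKER='192.168.1.2'
#   MQTT_PORT=1883
#   MQTT_USER='netalertx'
#   MQTT_PASSWORD='changeme'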
#-------------------------------------------------------------------------------
# Sensor configs are tracking which sensors in NetAlertX exist and if a config has changed
# -----------------------------------------------------------------------------
# Sensor configs are tracking which sensors in NetAlertX exist
# and if a config has changed
class sensor_config:
def __init__(self, deviceId, deviceName, sensorType, sensorName, icon, mac):
def __init__(self,
deviceId,
deviceName,
sensorType,
sensorName,
icon,
mac):
"""
Initialize the sensor_config object with provided parameters. Sets up sensor configuration
and generates necessary MQTT topics and messages based on the sensor type.
Initialize the sensor_config object with provided parameters.
Sets up sensor configuration and generates necessary MQTT topics
and messages based on the sensor type.
"""
# Assign initial attributes
self.deviceId = deviceId
self.deviceName = deviceName
self.sensorType = sensorType
self.sensorName = sensorName
self.icon = icon
self.icon = icon
self.mac = mac
self.model = deviceName
self.hash = ''
self.model = deviceName
self.hash = ''
self.state_topic = ''
self.json_attr_topic = ''
self.topic = ''
self.message = {} # Initialize message as an empty dictionary
self.unique_id = ''
# Call helper functions to initialize the message, generate a hash, and handle plugin object
# Call helper functions to initialize the message, generate a hash,
# and handle plugin object
self.initialize_message()
self.generate_hash()
self.handle_plugin_object()
def initialize_message(self):
"""
Initialize the MQTT message payload based on the sensor type. This method handles sensors of types:
Initialize the MQTT message payload based on the sensor type.
This method handles sensors of types:
- 'timestamp'
- 'binary_sensor'
- 'sensor'
- 'device_tracker'
"""
# Ensure self.message is initialized as a dictionary if not already done
# Ensure self.message is initialized as a dictionary
# if not already done
if not isinstance(self.message, dict):
self.message = {}
@@ -153,7 +173,6 @@ class sensor_config:
"icon": f'mdi:{self.icon}'
})
# Handle 'device_tracker' sensor type
elif self.sensorType == 'device_tracker':
self.topic = f'homeassistant/device_tracker/{self.deviceId}/config'
@@ -229,25 +248,36 @@ class sensor_config:
)
#-------------------------------------------------------------------------------
# -------------------------------------------------------------------------------
def publish_mqtt(mqtt_client, topic, message):
"""
Publishes a message to an MQTT topic using the provided MQTT client.
If the message is not a string, it is converted to a JSON-formatted string.
The function retrieves the desired QoS level from settings and logs the publishing process.
If the client is not connected to the broker, the function logs an error and aborts.
It attempts to publish the message, retrying until the publish status indicates success.
Args:
mqtt_client: The MQTT client instance used to publish the message.
topic (str): The MQTT topic to publish to.
message (Any): The message payload to send. Non-string messages are converted to JSON.
Returns:
bool: True if the message was published successfully, False if not connected to the broker.
"""
status = 1
# convert anything but a simple string to json
if not isinstance(message, str):
message = json.dumps(message).replace("'",'"')
message = json.dumps(message).replace("'", '"')
qos = get_setting_value('MQTT_QOS')
mylog('verbose', [f"[{pluginName}] Sending MQTT topic: {topic}"])
mylog('verbose', [f"[{pluginName}] Sending MQTT message: {message}"])
mylog('debug', [f"[{pluginName}] Sending MQTT topic: {topic}",
f"[{pluginName}] Sending MQTT message: {message}"])
# mylog('verbose', [f"[{pluginName}] get_setting_value('MQTT_QOS'): {qos}"])
if mqtt_connected_to_broker == False:
mylog('verbose', [f"[{pluginName}] ⚠ ERROR: Not connected to broker, aborting."])
if not mqtt_connected_to_broker:
mylog('minimal', [f"[{pluginName}] ⚠ ERROR: Not connected to broker, aborting."])
return False
while status != 0:
@@ -267,45 +297,46 @@ def publish_mqtt(mqtt_client, topic, message):
# mylog('verbose', [f"[{pluginName}] status: {status}"])
# mylog('verbose', [f"[{pluginName}] result: {result}"])
if status != 0:
mylog('verbose', [f"[{pluginName}] Waiting to reconnect to MQTT broker"])
time.sleep(0.1)
if status != 0:
mylog('debug', [f"[{pluginName}] Waiting to reconnect to MQTT broker"])
time.sleep(0.1)
return True
#-------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# Create a generic device for overall stats
def create_generic_device(mqtt_client, deviceId, deviceName):
create_sensor(mqtt_client, deviceId, deviceName, 'sensor', 'online', 'wifi-check')
create_sensor(mqtt_client, deviceId, deviceName, 'sensor', 'down', 'wifi-cancel')
def create_generic_device(mqtt_client, deviceId, deviceName):
create_sensor(mqtt_client, deviceId, deviceName, 'sensor', 'online', 'wifi-check')
create_sensor(mqtt_client, deviceId, deviceName, 'sensor', 'down', 'wifi-cancel')
create_sensor(mqtt_client, deviceId, deviceName, 'sensor', 'all', 'wifi')
create_sensor(mqtt_client, deviceId, deviceName, 'sensor', 'archived', 'wifi-lock')
create_sensor(mqtt_client, deviceId, deviceName, 'sensor', 'new', 'wifi-plus')
create_sensor(mqtt_client, deviceId, deviceName, 'sensor', 'unknown', 'wifi-alert')
#-------------------------------------------------------------------------------
# ------------------------------------------------------------------------------
# Register sensor config on the broker
def create_sensor(mqtt_client, deviceId, deviceName, sensorType, sensorName, icon, mac=""):
global mqtt_sensors
def create_sensor(mqtt_client, deviceId, deviceName, sensorType, sensorName, icon, mac=""):
global mqtt_sensors
# check previous configs
sensorConfig = sensor_config(deviceId, deviceName, sensorType, sensorName, icon, mac)
sensorConfig = sensor_config(deviceId, deviceName, sensorType, sensorName, icon, mac)
# send if new
if sensorConfig.isNew:
# Create the HA sensor config if a new device is discovered
if sensorConfig.isNew:
# add the sensor to the global list to keep track of successfully added sensors
if publish_mqtt(mqtt_client, sensorConfig.topic, sensorConfig.message):
# hack - delay adding to the queue in case the process is
time.sleep(get_setting_value('MQTT_DELAY_SEC')) # restarted and previous publish processes aborted
# (it takes ~2s to update a sensor config on the broker)
mqtt_sensors.append(sensorConfig)
if publish_mqtt(mqtt_client, sensorConfig.topic, sensorConfig.message):
# hack - delay adding to the queue in case the process is
# restarted and previous publish processes aborted
# (it takes ~2s to update a sensor config on the broker)
time.sleep(get_setting_value('MQTT_DELAY_SEC'))
mqtt_sensors.append(sensorConfig)
return sensorConfig
#-------------------------------------------------------------------------------
# -----------------------------------------------------------------------------
def mqtt_create_client():
# attempt reconnections on failure, ref https://www.emqx.com/en/blog/how-to-use-mqtt-in-python
@@ -313,11 +344,11 @@ def mqtt_create_client():
RECONNECT_RATE = 2
MAX_RECONNECT_COUNT = 12
MAX_RECONNECT_DELAY = 60
mytransport = 'tcp' # or 'websockets'
mytransport = 'tcp' # or 'websockets'
def on_disconnect(mqtt_client, userdata, rc):
global mqtt_connected_to_broker
mylog('verbose', [f"[{pluginName}] Connection terminated, reason_code: {rc}"])
@@ -328,7 +359,7 @@ def mqtt_create_client():
try:
mqtt_client.reconnect()
mqtt_connected_to_broker = True # Signal connection
mqtt_connected_to_broker = True # Signal connection
mylog('verbose', [f"[{pluginName}] Reconnected successfully"])
return
except Exception as err:
@@ -338,19 +369,18 @@ def mqtt_create_client():
reconnect_delay *= RECONNECT_RATE
reconnect_delay = min(reconnect_delay, MAX_RECONNECT_DELAY)
reconnect_count += 1
mqtt_connected_to_broker = False
def on_connect(mqtt_client, userdata, flags, rc, properties):
global mqtt_connected_to_broker
# REF: Good docu on reason codes: https://www.emqx.com/en/blog/mqtt5-new-features-reason-code-and-ack
if rc == 0:
mylog('verbose', [f"[{pluginName}] Connected to broker"])
mqtt_connected_to_broker = True # Signal connection
else:
if rc == 0:
mylog('verbose', [f"[{pluginName}] Connected to broker"])
mqtt_connected_to_broker = True # Signal connection
else:
mylog('verbose', [f"[{pluginName}] Connection failed, reason_code: {rc}"])
mqtt_connected_to_broker = False
@@ -367,10 +397,12 @@ def mqtt_create_client():
version = mqtt.MQTTv5
# we now hardcode the client id here.
# TODO: Add config ffor client id
# TODO: Add config for client id (atm, we use a fixed client id,
# so only one instance of NetAlertX can connect to the broker at any given time)
# If you intend to run multiple instances simultaneously, make sure to set unique client IDs for each instance.
mqtt_client = mqtt.Client(
client_id='netalertx',
callback_api_version = mqtt.CallbackAPIVersion.VERSION2,
callback_api_version=mqtt.CallbackAPIVersion.VERSION2,
transport=mytransport,
protocol=version)
mqtt_client.on_connect = on_connect
@@ -379,8 +411,8 @@ def mqtt_create_client():
if get_setting_value('MQTT_TLS'):
mqtt_client.tls_set()
mqtt_client.username_pw_set(username = get_setting_value('MQTT_USER'), password = get_setting_value('MQTT_PASSWORD'))
err_code = mqtt_client.connect(host = get_setting_value('MQTT_BROKER'), port = get_setting_value('MQTT_PORT'))
mqtt_client.username_pw_set(username=get_setting_value('MQTT_USER'), password=get_setting_value('MQTT_PASSWORD'))
err_code = mqtt_client.connect(host=get_setting_value('MQTT_BROKER'), port=get_setting_value('MQTT_PORT'))
if (err_code == mqtt.MQTT_ERR_SUCCESS):
# We (prematurely) set the connection state to connected
# the callback may be delayed
@@ -389,36 +421,36 @@ def mqtt_create_client():
# Mosquitto works straight away
# EMQX has a delay and does not update in loop below, so we cannot rely on it, we wait 1 sec
time.sleep(1)
mqtt_client.loop_start()
mqtt_client.loop_start()
return mqtt_client
#-------------------------------------------------------------------------------
def mqtt_start(db):
# -----------------------------------------------------------------------------
def mqtt_start(db):
global mqtt_client, mqtt_connected_to_broker
if mqtt_connected_to_broker == False:
mqtt_connected_to_broker = True
mqtt_client = mqtt_create_client()
if not mqtt_connected_to_broker:
mqtt_client = mqtt_create_client()
deviceName = get_setting_value('MQTT_DEVICE_NAME')
deviceId = get_setting_value('MQTT_DEVICE_ID')
# General stats
deviceId = get_setting_value('MQTT_DEVICE_ID')
# General stats
# Create a generic device for overall stats
if get_setting_value('MQTT_SEND_STATS') == True:
# Create a new device representing overall stats
if get_setting_value('MQTT_SEND_STATS'):
# Create a new device representing overall stats
create_generic_device(mqtt_client, deviceId, deviceName)
# Get the data
row = get_device_stats(db)
row = get_device_stats(db)
# Publish (wrap into {} and remove last ',' from above)
publish_mqtt(mqtt_client, f"{topic_root}/sensor/{deviceId}/state",
{
publish_mqtt(mqtt_client, f"{topic_root}/sensor/{deviceId}/state",
{
"online": row[0],
"down": row[1],
"all": row[2],
@@ -429,7 +461,7 @@ def mqtt_start(db):
)
# Generate device-specific MQTT messages if enabled
if get_setting_value('MQTT_SEND_DEVICES') == True:
if get_setting_value('MQTT_SEND_DEVICES'):
# Specific devices processing
@@ -438,37 +470,35 @@ def mqtt_start(db):
sec_delay = len(devices) * int(get_setting_value('MQTT_DELAY_SEC'))*5
mylog('verbose', [f"[{pluginName}] Estimated delay: ", (sec_delay), 's ', '(', round(sec_delay/60,1) , 'min)' ])
mylog('verbose', [f"[{pluginName}] Estimated delay: ", (sec_delay), 's ', '(', round(sec_delay/60, 1), 'min)'])
debug_index = 0
for device in devices:
for device in devices:
# # debug statement START 🔻
# if 'Moto' not in device["devName"]:
# mylog('none', [f"[{pluginName}] ALERT - ⚠⚠⚠⚠ DEBUGGING ⚠⚠⚠⚠ - this should not be in uncommented in production"])
# mylog('none', [f"[{pluginName}] ALERT - ⚠⚠⚠⚠ DEBUGGING ⚠⚠⚠⚠ - this should not be in uncommented in production"])
# continue
# # debug statement END 🔺
# Create devices in Home Assistant - send config messages
deviceId = 'mac_' + device["devMac"].replace(" ", "").replace(":", "_").lower()
# Normalize the string and remove unwanted characters
devDisplayName = re.sub('[^a-zA-Z0-9-_\\s]', '', normalize_string(device["devName"]))
devDisplayName = re.sub('[^a-zA-Z0-9-_\\s]', '', normalize_string(device["devName"]))
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'sensor', 'last_ip', 'ip-network', device["devMac"])
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'sensor', 'mac_address', 'folder-key-network', device["devMac"])
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'sensor', 'is_new', 'bell-alert-outline', device["devMac"])
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'sensor', 'vendor', 'cog', device["devMac"])
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'sensor', 'vendor', 'cog', device["devMac"])
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'sensor', 'first_connection', 'calendar-start', device["devMac"])
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'sensor', 'last_connection', 'calendar-end', device["devMac"])
# handle device_tracker
# IMPORTANT: shared payload - device_tracker attributes and individual sensors
devJson = {
"last_ip": device["devLastIP"],
"is_new": str(device["devIsNew"]),
"alert_down": str(device["devAlertDown"]),
"vendor": sanitize_string(device["devVendor"]),
devJson = {
"last_ip": device["devLastIP"],
"is_new": str(device["devIsNew"]),
"alert_down": str(device["devAlertDown"]),
"vendor": sanitize_string(device["devVendor"]),
"mac_address": str(device["devMac"]),
"model": devDisplayName,
"last_connection": prepTimeStamp(str(device["devLastConnection"])),
@@ -480,53 +510,48 @@ def mqtt_start(db):
"network_parent_name": next((dev["devName"] for dev in devices if dev["devMAC"] == device["devParentMAC"]), "")
}
# bulk update device sensors in home assistant
# bulk update device sensors in home assistant
publish_mqtt(mqtt_client, sensorConfig.state_topic, devJson) # REQUIRED, DON'T DELETE
# create and update is_present sensor
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'binary_sensor', 'is_present', 'wifi', device["devMac"])
publish_mqtt(mqtt_client, sensorConfig.state_topic,
{
publish_mqtt(mqtt_client, sensorConfig.state_topic,
{
"is_present": to_binary_sensor(str(device["devPresentLastScan"]))
}
)
)
# handle device_tracker
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'device_tracker', 'is_home', 'home', device["devMac"])
sensorConfig = create_sensor(mqtt_client, deviceId, devDisplayName, 'device_tracker', 'is_home', 'home', device["devMac"])
# <away|home> are only valid states
state = 'away'
if to_binary_sensor(str(device["devPresentLastScan"])) == "ON":
state = 'home'
publish_mqtt(mqtt_client, sensorConfig.state_topic, state)
publish_mqtt(mqtt_client, sensorConfig.state_topic, state)
# publish device_tracker attributes
publish_mqtt(mqtt_client, sensorConfig.json_attr_topic, devJson)
publish_mqtt(mqtt_client, sensorConfig.json_attr_topic, devJson)
#===============================================================================
# =============================================================================
# Home Assistant UTILs
#===============================================================================
# =============================================================================
def to_binary_sensor(input):
# In HA a binary sensor returns ON or OFF
result = "OFF"
"""
Converts various input types to a binary sensor state ("ON" or "OFF") for Home Assistant.
"""
if isinstance(input, (int, float)) and input >= 1:
return "ON"
elif isinstance(input, bool) and input:
return "ON"
elif isinstance(input, str) and input == "1":
return "ON"
elif isinstance(input, bytes) and bytes_to_string(input) == "1":
return "ON"
return "OFF"
# bytestring
if isinstance(input, str):
if input == "1":
result = "ON"
elif isinstance(input, int):
if input == 1:
result = "ON"
elif isinstance(input, bool):
if input == True:
result = "ON"
elif isinstance(input, bytes):
if bytes_to_string(input) == "1":
result = "ON"
return result
# -------------------------------------
# Convert to format that is interpretable by Home Assistant
@@ -537,7 +562,7 @@ def prepTimeStamp(datetime_str):
# If the parsed datetime is naive (i.e., does not contain timezone info), add UTC timezone
if parsed_datetime.tzinfo is None:
parsed_datetime = parsed_datetime.replace(tzinfo=conf.tz)
parsed_datetime = conf.tz.localize(parsed_datetime)
except ValueError:
mylog('verbose', [f"[{pluginName}] Timestamp conversion failed of string '{datetime_str}'"])
@@ -547,9 +572,7 @@ def prepTimeStamp(datetime_str):
# Convert to the required format with 'T' between date and time and ensure the timezone is included
return parsed_datetime.isoformat() # This will include the timezone offset
# -------------INIT---------------------
if __name__ == '__main__':
sys.exit(main())
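# Hedged usage sketch of the publish-with-retry pattern implemented by
# publish_mqtt() above (paho-mqtt v2 callback API; broker/topic illustrative):
#   client = mqtt.Client(client_id='netalertx',
#                        callback_api_version=mqtt.CallbackAPIVersion.VERSION2)
#   client.connect('192.168.1.2', 1883)
#   client.loop_start()
#   info = client.publish('netalertx/sensor/test/state', '{"online": 1}', qos=0)
#   # info.rc == mqtt.MQTT_ERR_SUCCESS (0) means the message was queued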

View File

@@ -64,7 +64,8 @@
"name": "subnets",
"type": "setting",
"value": "SCAN_SUBNETS",
"base64": true
"base64": true,
"timeoutMultiplier": true
}
],
"settings": [
@@ -387,6 +388,34 @@
"string": "Arguments to run arps-scan with. Recommended and tested only with the setting: <br/> <code>sudo arp-scan --ignoredups --retry=6</code>."
}
]
},
{
"function": "DURATION",
"type": {
"dataType": "integer",
"elements": [
{
"elementType": "input",
"elementOptions": [{ "type": "number" }],
"transformers": []
}
]
},
"default_value": 0,
"options": [],
"localized": ["name", "description"],
"name": [
{
"language_code": "en_us",
"string": "Discovery duration"
}
],
"description": [
{
"language_code": "en_us",
"string": "If <code>DURATION</code> is not <code>0</code>, the scan runs repeatedly per interface for that many seconds. <strong>Important:</strong> <code>RUN_TIMEOUT</code> must be greater than <code>DURATION</code>, otherwise the scan will fail."
}
]
}
],
"database_column_definitions": [

View File

@@ -1,6 +1,7 @@
#!/usr/bin/env python
import os
import time
import pathlib
import argparse
import sys
@@ -46,7 +47,7 @@ def main():
plugin_objects = Plugin_Objects(RESULT_FILE)
# Print a message to indicate that the script is starting.
mylog('verbose', ['[ARP Scan] In script '])
mylog('verbose', [f'[{pluginName}] In script '])
# holds a list of user-submitted subnets.
# mylog('verbose', ['[ARP Scan] values.userSubnets: ', values.userSubnets])
@@ -150,16 +151,28 @@ def execute_arpscan_on_interface(interface):
# Prepare command arguments
arpscan_args = get_setting_value('ARPSCAN_ARGS').split() + interface.split()
# Execute command
# Optional duration in seconds (0 = run once)
try:
# try running a subprocess safely
result = subprocess.check_output(arpscan_args, universal_newlines=True)
except subprocess.CalledProcessError as e:
# An error occurred, handle it
error_type = type(e).__name__ # Capture the error type
result = ""
scan_duration = int(get_setting_value('ARPSCAN_DURATION'))
except Exception:
scan_duration = 0 # default: single run
return result
results = []
start_time = time.time()
while True:
try:
result = subprocess.check_output(arpscan_args, universal_newlines=True)
results.append(result)
except subprocess.CalledProcessError as e:
result = ""
# stop looping if duration not set or expired
if scan_duration == 0 or (time.time() - start_time) > scan_duration:
break
time.sleep(2) # short delay between scans
# concatenate all outputs (for regex parsing)
return "\n".join(results)

View File

@@ -1,232 +0,0 @@
#!/usr/bin/env python
import os
import pathlib
import sys
import json
import subprocess
# Define the installation path and extend the system path for plugin imports
INSTALL_PATH = "/app"
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Object, Plugin_Objects, decodeBase64
from plugin_utils import get_plugins_configs
from logger import mylog, Logger
from const import pluginsPath, fullDbPath, logPath
from helper import timeNowTZ, get_setting_value
from messaging.in_app import write_notification
from database import DB
from models.device_instance import DeviceInstance
import conf
from pytz import timezone
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
# Make sure log level is initialized correctly
Logger(get_setting_value('LOG_LEVEL'))
pluginName = 'AVAHISCAN'
# Define the current path and log file paths
LOG_PATH = logPath + '/plugins'
LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
# Initialize the Plugin obj output file
plugin_objects = Plugin_Objects(RESULT_FILE)
def main():
mylog('verbose', [f'[{pluginName}] In script'])
# Retrieve timeout from settings (use AVAHISCAN_RUN_TIMEOUT), fall back to 20
try:
_timeout_val = get_setting_value('AVAHISCAN_RUN_TIMEOUT')
if _timeout_val is None or _timeout_val == '':
timeout = 20
else:
try:
timeout = int(_timeout_val)
except (ValueError, TypeError):
timeout = 20
except Exception:
timeout = 20
# Create a database connection
db = DB() # instance of class DB
db.open()
# Initialize the Plugin obj output file
plugin_objects = Plugin_Objects(RESULT_FILE)
# Create a DeviceInstance instance
device_handler = DeviceInstance(db)
# Retrieve devices
if get_setting_value("REFRESH_FQDN"):
devices = device_handler.getAll()
else:
devices = device_handler.getUnknown()
mylog('verbose', [f'[{pluginName}] Devices count: {len(devices)}'])
# Mock list of devices (replace with actual device_handler.getUnknown() in production)
# devices = [
# {'devMac': '00:11:22:33:44:55', 'devLastIP': '192.168.1.121'},
# {'devMac': '00:11:22:33:44:56', 'devLastIP': '192.168.1.9'},
# {'devMac': '00:11:22:33:44:57', 'devLastIP': '192.168.1.82'},
# ]
if len(devices) > 0:
# ensure service is running
ensure_avahi_running()
for device in devices:
domain_name = execute_name_lookup(device['devLastIP'], timeout)
# check if found and not a timeout ('to')
if domain_name != '' and domain_name != 'to':
plugin_objects.add_object(
# "MAC", "IP", "Server", "Name"
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = '', # You can add any relevant info here if needed
watched2 = domain_name,
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac'])
plugin_objects.write_result_file()
mylog('verbose', [f'[{pluginName}] Script finished'])
return 0
#===============================================================================
# Execute scan
#===============================================================================
def execute_name_lookup(ip, timeout):
"""
Execute the avahi-resolve command on the IP.
"""
args = ['avahi-resolve', '-a', ip]
# Execute command
output = ""
try:
mylog('debug', [f'[{pluginName}] DEBUG CMD :', args])
# Run the subprocess with a forced timeout
output = subprocess.check_output(args, universal_newlines=True, stderr=subprocess.STDOUT, timeout=timeout)
mylog('debug', [f'[{pluginName}] DEBUG OUTPUT : {output}'])
domain_name = ''
# Split the output into lines
lines = output.splitlines()
# Look for the resolved IP address
for line in lines:
if ip in line:
parts = line.split()
if len(parts) > 1:
domain_name = parts[1] # Second part is the resolved domain name
else:
mylog('verbose', [f'[{pluginName}] ⚠ ERROR - Unexpected output format: {line}'])
mylog('debug', [f'[{pluginName}] Domain Name: {domain_name}'])
return domain_name
except subprocess.CalledProcessError as e:
mylog('none', [f'[{pluginName}] ⚠ ERROR - {e.output}'])
except subprocess.TimeoutExpired as e:
# Return a distinct value that main() checks for when a timeout occurs
# Keep logging for telemetry/debugging
mylog('none', [f'[{pluginName}] TIMEOUT - the process forcefully terminated as timeout reached{": " + str(getattr(e, "output", "")) if getattr(e, "output", None) else ""}'])
return 'to'
if output == "":
mylog('none', [f'[{pluginName}] Scan: FAIL - check logs'])
else:
mylog('debug', [f'[{pluginName}] Scan: SUCCESS'])
return ''
# Function to ensure Avahi and its dependencies are running
def ensure_avahi_running(attempt=1, max_retries=2):
"""
Ensure that D-Bus is running and the Avahi daemon is started, with recursive retry logic.
"""
mylog('debug', [f'[{pluginName}] Attempt {attempt} - Ensuring D-Bus and Avahi daemon are running...'])
# Check rc-status
try:
subprocess.run(['rc-status'], check=True)
except subprocess.CalledProcessError as e:
mylog('none', [f'[{pluginName}] ⚠ ERROR - Failed to check rc-status: {e.output}'])
return
# Create OpenRC soft level (wrap in try/except to keep error handling consistent)
try:
subprocess.run(['touch', '/run/openrc/softlevel'], check=True, capture_output=True, text=True)
except subprocess.CalledProcessError as e:
mylog('none', [f'[{pluginName}] ⚠ ERROR - Failed to create OpenRC soft level: {e.stderr if e.stderr else str(e)}'])
return
# Add Avahi daemon to runlevel
try:
subprocess.run(['rc-update', 'add', 'avahi-daemon'], check=True)
except subprocess.CalledProcessError as e:
mylog('none', [f'[{pluginName}] ⚠ ERROR - Failed to add Avahi to runlevel: {e.output}'])
return
# Start the D-Bus service
try:
subprocess.run(['rc-service', 'dbus', 'start'], check=True)
except subprocess.CalledProcessError as e:
mylog('none', [f'[{pluginName}] ⚠ ERROR - Failed to start D-Bus: {e.output}'])
return
# Check Avahi status
status_output = subprocess.run(['rc-service', 'avahi-daemon', 'status'], capture_output=True, text=True)
if 'started' in status_output.stdout:
mylog('debug', [f'[{pluginName}] Avahi Daemon is already running.'])
return
mylog('none', [f'[{pluginName}] Avahi Daemon is not running, attempting to start... (Attempt {attempt})'])
# Start the Avahi daemon
try:
subprocess.run(['rc-service', 'avahi-daemon', 'start'], check=True)
except subprocess.CalledProcessError as e:
mylog('none', [f'[{pluginName}] ⚠ ERROR - Failed to start Avahi daemon: {e.output}'])
# Check status after starting
status_output = subprocess.run(['rc-service', 'avahi-daemon', 'status'], capture_output=True, text=True)
if 'started' in status_output.stdout:
mylog('debug', [f'[{pluginName}] Avahi Daemon successfully started.'])
return
# Retry if not started and attempts are left
if attempt < max_retries:
mylog('debug', [f'[{pluginName}] Retrying... ({attempt + 1}/{max_retries})'])
ensure_avahi_running(attempt + 1, max_retries)
else:
mylog('none', [f'[{pluginName}] ⚠ ERROR - Avahi Daemon failed to start after {max_retries} attempts.'])
# rc-update add avahi-daemon
# rc-service avahi-daemon status
# rc-service avahi-daemon start
if __name__ == '__main__':
main()

View File

@@ -1,127 +1,142 @@
#!/usr/bin/env python
#!/usr/bin/env python3
import os
import pathlib
import sys
import json
import dns.resolver
import socket
import ipaddress
from zeroconf import Zeroconf, ServiceBrowser, ServiceInfo, InterfaceChoice, IPVersion
from zeroconf.asyncio import AsyncZeroconf
# Define the installation path and extend the system path for plugin imports
INSTALL_PATH = "/app"
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from plugin_helper import Plugin_Object, Plugin_Objects, decodeBase64
from plugin_utils import get_plugins_configs
from logger import mylog as write_log, Logger
from const import pluginsPath, fullDbPath, logPath
from helper import timeNowTZ, get_setting_value
from messaging.in_app import write_notification
from plugin_helper import Plugin_Objects
from logger import mylog, Logger
from const import logPath
from helper import get_setting_value
from database import DB
from models.device_instance import DeviceInstance
import conf
from pytz import timezone
# Make sure the TIMEZONE for logging is correct
conf.tz = timezone(get_setting_value('TIMEZONE'))
# Configure timezone and logging
conf.tz = timezone(get_setting_value("TIMEZONE"))
Logger(get_setting_value("LOG_LEVEL"))
# Make sure log level is initialized correctly
Logger(get_setting_value('LOG_LEVEL'))
pluginName = "AVAHISCAN"
pluginName = 'AVAHISCAN'
# Define log paths
LOG_PATH = os.path.join(logPath, "plugins")
LOG_FILE = os.path.join(LOG_PATH, f"script.{pluginName}.log")
RESULT_FILE = os.path.join(LOG_PATH, f"last_result.{pluginName}.log")
# Define the current path and log file paths
LOG_PATH = logPath + '/plugins'
LOG_FILE = os.path.join(LOG_PATH, f'script.{pluginName}.log')
RESULT_FILE = os.path.join(LOG_PATH, f'last_result.{pluginName}.log')
# Initialize the Plugin obj output file
# Initialize plugin results
plugin_objects = Plugin_Objects(RESULT_FILE)
#===============================================================================
# Execute scan using DNS resolver
#===============================================================================
def resolve_ips_with_zeroconf(ips, timeout):
# =============================================================================
# Helper functions
# =============================================================================
def resolve_mdns_name(ip: str, timeout: int = 5) -> str:
"""
Uses DNS resolver to actively query PTR records for reverse DNS lookups on given IP addresses.
Attempts to resolve a hostname via multicast DNS using the Zeroconf library.
Args:
ip (str): The IP address to resolve.
timeout (int): Timeout in seconds for mDNS resolution.
Returns:
str: Resolved hostname (or empty string if not found).
"""
resolved_hosts = {}
for ip in ips:
mylog("debug", [f"[{pluginName}] Resolving mDNS for {ip}"])
# Convert string IP to an address object
try:
addr = ipaddress.ip_address(ip)
except ValueError:
mylog("none", [f"[{pluginName}] Invalid IP: {ip}"])
return ""
# Reverse lookup name, e.g. "121.1.168.192.in-addr.arpa"
rev_name = addr.reverse_pointer
try:
zeroconf = Zeroconf()
hostname = socket.getnameinfo((ip, 0), socket.NI_NAMEREQD)[0]
zeroconf.close()
if hostname and hostname != ip:
mylog("debug", [f"[{pluginName}] Found mDNS name: {hostname}"])
return hostname
except Exception as e:
mylog("debug", [f"[{pluginName}] Zeroconf lookup failed for {ip}: {e}"])
finally:
try:
# Construct the reverse IP for PTR query (e.g., 8.1.168.192.in-addr.arpa.)
reverse_ip = '.'.join(reversed(ip.split('.'))) + '.in-addr.arpa.'
# Query PTR record with timeout; respect the passed timeout per query
answers = dns.resolver.resolve(reverse_ip, 'PTR', lifetime=max(1, timeout))
if answers:
# For PTR records, the hostname is in the target field
hostname = str(answers[0].target).rstrip('.')
resolved_hosts[ip] = hostname
write_log('verbose', [f'[{pluginName}] Resolved {ip} -> {hostname}'])
except Exception as e:
write_log('verbose', [f'[{pluginName}] Error resolving {ip}: {e}'])
write_log('verbose', [f'[{pluginName}] Active resolution finished. Found {len(resolved_hosts)} hosts.'])
return resolved_hosts
zeroconf.close()
except Exception:
pass
return ""
# =============================================================================
# Main logic
# =============================================================================
def main():
write_log('verbose', [f'[{pluginName}] In script'])
mylog("verbose", [f"[{pluginName}] Script started"])
# Get timeout from settings, default to 20s, and subtract a buffer
try:
timeout_setting = int(get_setting_value('AVAHISCAN_RUN_TIMEOUT'))
except (ValueError, TypeError):
timeout_setting = 30 # Default to 30s as a safe value
timeout = get_setting_value("AVAHISCAN_RUN_TIMEOUT")
use_mock = "--mockdata" in sys.argv
# Use a timeout 5 seconds less than the plugin's configured timeout to allow for cleanup
scan_duration = max(5, timeout_setting - 5)
db = DB()
db.open()
plugin_objects = Plugin_Objects(RESULT_FILE)
device_handler = DeviceInstance(db)
# Retrieve devices based on REFRESH_FQDN setting to match original script's logic
if get_setting_value("REFRESH_FQDN"):
devices = device_handler.getAll()
write_log('verbose', [f'[{pluginName}] REFRESH_FQDN is true, getting all devices.'])
if use_mock:
mylog("verbose", [f"[{pluginName}] Running in MOCK mode"])
devices = [
{"devMac": "00:11:22:33:44:55", "devLastIP": "192.168.1.121"},
{"devMac": "00:11:22:33:44:56", "devLastIP": "192.168.1.9"},
{"devMac": "00:11:22:33:44:57", "devLastIP": "192.168.1.82"},
]
else:
devices = device_handler.getUnknown()
write_log('verbose', [f'[{pluginName}] REFRESH_FQDN is false, getting devices with unknown hostnames.'])
db = DB()
db.open()
device_handler = DeviceInstance(db)
devices = (
device_handler.getAll()
if get_setting_value("REFRESH_FQDN")
else device_handler.getUnknown()
)
# db.close() # This was causing the crash, DB object doesn't have a close method.
mylog("verbose", [f"[{pluginName}] Devices count: {len(devices)}"])
write_log('verbose', [f'[{pluginName}] Devices to scan: {len(devices)}'])
for device in devices:
ip = device["devLastIP"]
mac = device["devMac"]
if len(devices) > 0:
ips_to_find = [device['devLastIP'] for device in devices if device['devLastIP']]
if ips_to_find:
write_log('verbose', [f'[{pluginName}] IPs to be scanned: {ips_to_find}'])
resolved_hosts = resolve_ips_with_zeroconf(ips_to_find, scan_duration)
hostname = resolve_mdns_name(ip, timeout)
for device in devices:
domain_name = resolved_hosts.get(device['devLastIP'])
if domain_name:
plugin_objects.add_object(
primaryId = device['devMac'],
secondaryId = device['devLastIP'],
watched1 = '',
watched2 = domain_name,
watched3 = '',
watched4 = '',
extra = '',
foreignKey = device['devMac']
)
else:
write_log('verbose', [f'[{pluginName}] No devices with IP addresses to scan.'])
if hostname:
plugin_objects.add_object(
primaryId=mac,
secondaryId=ip,
watched1="",
watched2=hostname,
watched3="",
watched4="",
extra="",
foreignKey=mac,
)
plugin_objects.write_result_file()
write_log('verbose', [f'[{pluginName}] Script finished'])
mylog("verbose", [f"[{pluginName}] Script finished"])
return 0
if __name__ == '__main__':
# =============================================================================
# Entrypoint
# =============================================================================
if __name__ == "__main__":
main()
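# Hedged usage sketch (function defined above; IP illustrative):
#   resolve_mdns_name("192.168.1.121", timeout=5)  # -> "host.local" or ""
# The plugin can also be exercised end-to-end with the mock device list:
#   python3 <this script> --mockdata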

View File

@@ -2,6 +2,8 @@
Plugin for pinging existing devices via the [ping](https://linux.die.net/man/8/ping) network utility. The devices have to be accessible from the container. You can use this plugin with other supplementing plugins as described in the [subnets docs](https://github.com/jokob-sk/NetAlertX/blob/main/docs/SUBNETS.md).
This plugin can be used if you are getting false offline positives on specific devices. See the [Fix offline detection guide](https://github.com/jokob-sk/NetAlertX/blob/main/docs/FIX_OFFLINE_DETECTION.md) for details.
### Usage
- Check the Settings page for details.

View File

@@ -25,3 +25,11 @@ To assign a meaningful device name, the plugin resolves it in the following orde
- **Comment**: The `comment` field in the MikroTik router's DHCP lease configuration. This is useful for naming static leases of known devices (see the example below).
- **Hostname**: The hostname provided by the device during DHCP negotiation.
- **"(unknown)"**: as the fallback name, allowing other plugins to resolve the device name later.
### Other info
- Version: 1.0
- Author: [lookflying](https://github.com/lookflying)
- Maintainer(s): [elraro](https://github.com/elraro), [kamil-olszewski-devskiller](https://github.com/kamil-olszewski-devskiller)
- Release Date: 12-Sep-2024

View File

@@ -178,7 +178,7 @@ def main():
if file_name != 'last_result.log':
mylog('verbose', [f'[{pluginName}] Processing: "{file_name}"'])
# make sure the file has teh correct name (e.g last_result.encoded.Node_1.1.log) to skip any otehr plugin files
# make sure the file has the correct name (e.g. last_result.encoded.Node_1.1.log) to skip any other plugin files
if len(file_name.split('.')) > 2:
# Store e.g. Node_1 from last_result.encoded.Node_1.1.log
syncHubNodeName = file_name.split('.')[1]
@@ -210,9 +210,10 @@ def main():
existing_mac_addresses = set(row[0] for row in cursor.fetchall())
# insert devices into the lats_result.log to manage state
# insert devices into the last_result.log and thus CurrentScan table to manage state
for device in device_data:
if device['devPresentLastScan'] == 1:
# only insert devices that were online and skip the root node to prevent IP flipping on the hub
if device['devPresentLastScan'] == 1 and str(device['devMac']).lower() != 'internet':
plugin_objects.add_object(
primaryId = device['devMac'],
secondaryId = device['devLastIP'],

View File

@@ -213,6 +213,33 @@
}
]
},
{
"function": "DEFAULT_PAGE_SIZE",
"type": {
"dataType": "integer",
"elements": [
{
"elementType": "input",
"elementOptions": [{ "type": "number" }],
"transformers": []
}
]
},
"maxLength": 50,
"default_value": 20,
"options": [],
"localized": [],
"name": [
{
"string": "Default page size"
}
],
"description": [
{
"string": "Default number of items shown in tables per page, for example in teh Devices lists."
}
]
},
{
"function": "DEV_SECTIONS",
"type": {

View File

@@ -112,7 +112,12 @@ def get_device_data(site, api):
mylog('verbose', [f'[{pluginName}] Site: {site_name} clients: {json.dumps(clients_resp, indent=2)}'])
# Build a lookup for devices by their 'id' to find parent MAC easily
device_id_to_mac = {dev['id']: dev.get('macAddress', '') for dev in unifi_devices}
device_id_to_mac = {}
for dev in unifi_devices:
if "id" not in dev:
mylog("verbose", [f"[{pluginName}] Skipping device without 'id': {json.dumps(dev)}"])
continue
device_id_to_mac[dev["id"]] = dev.get("macAddress", "")
# Helper to resolve uplinkDeviceId to parent MAC, or "Internet" if no uplink
def resolve_parent_mac(uplink_id):

View File

@@ -566,122 +566,6 @@ $settingsJSON_DB = json_encode($settings, JSON_HEX_TAG | JSON_HEX_AMP | JSON_HEX
setCodeName = set["setKey"]
settingsArray = collectSetting(prefix, setCodeName, setType, settingsArray)
// // console.log(prefix);
// const setTypeObject = JSON.parse(processQuotes(setType))
// // console.log(setTypeObject);
// const dataType = setTypeObject.dataType;
// // get the element with the input value(s)
// let elements = setTypeObject.elements.filter(element => element.elementHasInputValue === 1);
// // if none found, take last
// if(elements.length == 0)
// {
// elementWithInputValue = setTypeObject.elements[setTypeObject.elements.length - 1]
// } else
// {
// elementWithInputValue = elements[0]
// }
// const { elementType, elementOptions = [], transformers = [] } = elementWithInputValue;
// const {
// inputType,
// readOnly,
// isMultiSelect,
// isOrdeable,
// cssClasses,
// placeholder,
// suffix,
// sourceIds,
// separator,
// editable,
// valRes,
// getStringKey,
// onClick,
// onChange,
// customParams,
// customId,
// columns,
// base64Regex,
// elementOptionsBase64
// } = handleElementOptions('none', elementOptions, transformers, val = "");
// let value;
// if (dataType === "string" && elementWithInputValue.elementType === "datatable" ) {
// value = collectTableData(`#${setCodeName}_table`)
// settingsArray.push([prefix, setCodeName, dataType, btoa(JSON.stringify(value))]);
// } else if (dataType === "string" ||
// (dataType === "integer" && (inputType === "number" || inputType === "text"))) {
// value = $('#' + setCodeName).val();
// value = applyTransformers(value, transformers);
// settingsArray.push([prefix, setCodeName, dataType, value]);
// } else if (inputType === 'checkbox') {
// value = $(`#${setCodeName}`).is(':checked') ? 1 : 0;
// if(dataType === "boolean")
// {
// value = value == 1 ? "True" : "False";
// }
// value = applyTransformers(value, transformers);
// settingsArray.push([prefix, setCodeName, dataType, value]);
// } else if (dataType === "array" ) {
// let temps = [];
// if(isOrdeable)
// {
// temps = $(`#${setCodeName}`).val()
// } else
// {
// // make sure to collect all if set as "editable" or selected only otherwise
// $(`#${setCodeName}`).attr("my-editable") == "true" ? additionalSelector = "" : additionalSelector = ":selected";
// $(`#${setCodeName} option${additionalSelector}`).each(function() {
// const vl = $(this).val();
// if (vl !== '') {
// temps.push(applyTransformers(vl, transformers));
// }
// });
// }
// value = JSON.stringify(temps);
// settingsArray.push([prefix, setCodeName, dataType, value]);
// } else if (dataType === "none") {
// // no value to save
// value = ""
// settingsArray.push([prefix, setCodeName, dataType, value]);
// } else if (dataType === "json") {
// value = $('#' + setCodeName).val();
// value = applyTransformers(value, transformers);
// value = JSON.stringify(value, null, 2)
// settingsArray.push([prefix, setCodeName, dataType, value]);
// } else {
// console.error(`[saveSettings] Couldn't determine how to handle (setCodeName|dataType|inputType):(${setCodeName}|${dataType}|${inputType})`);
// value = $('#' + setCodeName).val();
// value = applyTransformers(value, transformers);
// console.error(`[saveSettings] Saving value "${value}"`);
// settingsArray.push([prefix, setCodeName, dataType, value]);
// }
});
// sanity check to make sure settings were loaded & collected correctly

View File

@@ -4,6 +4,7 @@
require_once $_SERVER['DOCUMENT_ROOT'] . '/php/templates/security.php';
require_once $_SERVER['DOCUMENT_ROOT'] . '/php/server/db.php';
require_once $_SERVER['DOCUMENT_ROOT'] . '/php/templates/language/lang.php';
require_once $_SERVER['DOCUMENT_ROOT'] . '/php/templates/globals.php';
?>
<?php

View File

@@ -74,6 +74,8 @@ require 'php/templates/header.php';
$(document).ready(function() {
const table = $('#notificationsTable').DataTable({
"pageLength": parseInt(getSetting("UI_DEFAULT_PAGE_SIZE")),
"lengthMenu": getLengthMenu(parseInt(getSetting("UI_DEFAULT_PAGE_SIZE"))),
"columns": [
{ "data": "timestamp" ,
"render": function(data, type, row) {

View File

@@ -10,6 +10,8 @@
server {
listen ${LISTEN_ADDR}:${PORT} default_server;
large_client_header_buffers 4 16k;
root ${INSTALL_DIR}/front;
index index.php;
add_header X-Forwarded-Prefix "/app" always;

195
install/proxmox/README.md Executable file
View File

@@ -0,0 +1,195 @@
# NetAlertX Proxmox Installer
An installer script for deploying NetAlertX on Proxmox VE (Debian-based) systems. This installer automates the complete setup including dependencies, NGINX configuration, systemd service, and security hardening.
## 🚀 Quick Start
### Prerequisites
- Fresh LXC or VM of Debian 13 or Ubuntu 24
- Root access
- Internet connection
### Installation
#### Download and run the installer
```bash
wget https://raw.githubusercontent.com/jokob-sk/NetAlertX/refs/heads/main/install/proxmox/proxmox-install-netalertx.sh -O proxmox-install-netalertx.sh && chmod +x proxmox-install-netalertx.sh && ./proxmox-install-netalertx.sh
```
## 📋 What This Installer Does
### System Dependencies
- **PHP 8.4** with FPM, SQLite3, cURL extensions
- **NGINX** with custom configuration
- **Python 3** with virtual environment
- **Network tools**: nmap, arp-scan, traceroute, mtr, speedtest-cli
- **Additional tools**: git, build-essential, avahi-daemon
### Security Features
- **Hardened permissions**: Proper user/group ownership
- **TMPFS mounts**: Log and API directories mounted as tmpfs for security
### Service Management
- **Systemd service**: Auto-start on boot with restart policies (see the unit sketch below)
- **Service monitoring**: Built-in health checks and logging
- **Dependency management**: Waits for network and NGINX
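A minimal sketch of what the generated unit can look like (the installer writes the actual file to `/etc/systemd/system/netalertx.service`; values here are illustrative):

```
[Unit]
Description=NetAlertX
After=network-online.target nginx.service

[Service]
WorkingDirectory=/app
ExecStart=/app/start.netalertx.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```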
## 🔧 Configuration
### Port Configuration
The installer will prompt for a custom port, or default to 20211 after 10 seconds:
```
Enter HTTP port for NetAlertX [20211] (auto-continue in 10s):
```
### Service Management
```bash
# Check service status
systemctl status netalertx
# View logs
journalctl -u netalertx -f
# Restart service
systemctl restart netalertx
# Stop service
systemctl stop netalertx
```
## 🌐 Access
After installation, access NetAlertX at:
```
http://[SERVER_IP]:[PORT]
```
## 🔒 Security Considerations
### TMPFS Mounts
- `/app/log` - Mounted as tmpfs (no persistent logs)
- `/app/api` - Mounted as tmpfs (temporary API data); see the fstab sketch below
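These mounts typically correspond to fstab entries of the following shape (sketch; options are illustrative, the installer's actual entries may differ):

```
tmpfs  /app/log  tmpfs  defaults,noatime,mode=0755  0  0
tmpfs  /app/api  tmpfs  defaults,noatime,mode=0755  0  0
```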
### File Permissions
- Application files: `www-data:www-data` with appropriate permissions
- NGINX runs as `www-data` user
- Log directories: Secure permissions with tmpfs
### Network Security
- NGINX configured for internal network access
- No external firewall rules added (configure manually if needed)
## 🛠️ Troubleshooting
### Common Issues
#### 403 Forbidden Error
```bash
# Check file permissions
ls -la /var/www/html/netalertx
ls -la /app/front
# Fix permissions
chown -R www-data:www-data /app/front
chmod -R 755 /app/front
```
#### Service Won't Start
```bash
# Check service status
systemctl status netalertx
# View detailed logs
journalctl -u netalertx --no-pager -l
# Check if port is in use
ss -tlnp | grep :20211
```
#### GraphQL Connection Issues
```bash
# Check API token in config
grep API_TOKEN /app/config/app.conf
# Verify GraphQL port
grep GRAPHQL_PORT /app/config/app.conf
# Check backend logs
tail -f /app/log/app.log
```
### Log Locations
- **Service logs**: `journalctl -u netalertx`
- **Application logs**: `/app/log/` (tmpfs)
- **NGINX logs**: `/var/log/nginx/`
- **PHP logs**: `/app/log/app.php_errors.log`
### Manual Service Start
If the systemd service fails:
```bash
# Activate Python environment
source /opt/myenv/bin/activate
# Start manually
cd /app
python server/
```
or run the bundled start script:
```bash
./start.netalertx.sh
```
## 🔄 Updates
### Updating NetAlertX
```bash
# Stop service
systemctl stop netalertx
# Update from repository
cd /app
git pull origin main
# Restart service
systemctl start netalertx
```
## 📁 File Structure
```
/app/ # Main application directory
├── front/ # Web interface (symlinked to /var/www/html/netalertx)
├── server/ # Python backend
├── config/ # Configuration files
├── db/ # Database files
├── log/ # Log files (tmpfs)
├── api/ # API files (tmpfs)
└── start.netalertx.sh # Service startup script
/etc/systemd/system/
└── netalertx.service # Systemd service definition
/etc/nginx/conf.d/
└── netalertx.conf # NGINX configuration
```
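The web root is a symlink into the application tree, so a quick check after install confirms the wiring (paths as created by this installer):

```bash
# Should resolve to /app/front
ls -l /var/www/html/netalertx
readlink -f /var/www/html/netalertx
```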
## 🤝 Contributing
This installer needs a maintainer. To contribute:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Test thoroughly
5. Submit a pull request
## 🙏 Acknowledgments
- NetAlertX development team
- Proxmox VE community
- Debian/Ubuntu maintainers
- Open source contributors
---
**Note**: This installer was designed for Proxmox LXC containers running Debian 13 or Ubuntu 24. For other systems, please use the appropriate installer or the manual installation instructions.

33
install/proxmox/netalertx.conf Executable file
View File

@@ -0,0 +1,33 @@
server {
listen 20211;
server_name _; #change this to your custom domain if you have one
# Web-interface files location
root /var/www/html/netalertx;
# Main page
index index.php;
#rewrite /app/(.*) / permanent;
add_header X-Forwarded-Prefix "/netalertx" always;
proxy_set_header X-Forwarded-Prefix "/netalertx";
# Specify a character set
charset utf-8;
location / {
# Try to serve files directly, fallback to index.php
try_files $uri $uri/ /index.php?$query_string;
}
# FastCGI configuration for PHP
location ~ \.php$ {
# Use a Unix socket for better performance
fastcgi_pass unix:/var/run/php/php-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param QUERY_STRING $query_string;
include fastcgi_params;
}
}

View File

@@ -0,0 +1,425 @@
#!/usr/bin/env bash
# Exit immediately if a command exits with a non-zero status.
set -e
# Treat unset variables as an error when substituting
set -u
# Consider failures in a pipeline
set -o pipefail
# Safe IFS
IFS=$' \t\n'
# 🛑 Important: This is only used for the bare-metal install 🛑
# Colors (guarded)
if [ -t 1 ] && [ -z "${NO_COLOR:-}" ]; then
RESET='\e[0m'
GREEN='\e[1;38;5;2m'
RED='\e[31m'
else
RESET=''; GREEN=''; RED=''
fi
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[UPDATING] ${RESET}Making sure the system is up to date"
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Running proxmox-install-netalertx.sh"
printf "%b\n" "--------------------------------------------------------------------------"
# Set environment variables
INSTALL_DIR=/app # default installation directory
# DO NOT CHANGE ANYTHING BELOW THIS LINE!
INSTALLER_DIR="$INSTALL_DIR/install/proxmox"
CONF_FILE=app.conf
DB_FILE=app.db
NGINX_CONF_FILE=netalertx.conf
WEB_UI_DIR=/var/www/html/netalertx
NGINX_CONFIG=/etc/nginx/conf.d/$NGINX_CONF_FILE
OUI_FILE="/usr/share/arp-scan/ieee-oui.txt"
FILEDB=$INSTALL_DIR/db/$DB_FILE
# DO NOT CHANGE ANYTHING ABOVE THIS LINE!
# Check if script is run as root
if [[ $EUID -ne 0 ]]; then
echo "This script must be run as root."
exit 1
fi
# Interactive confirmation: warn about overwriting/removing existing installation and NGINX config
if [ -z "${NETALERTX_ASSUME_YES:-}" ] && [ -z "${ASSUME_YES:-}" ] && [ -z "${NETALERTX_FORCE:-}" ]; then
printf "%b\n" "------------------------------------------------------------------------"
printf "%b\n" "${RED}[WARNING] ${RESET}This script should be run on a fresh server"
printf "%b\n" "${RED}[WARNING] ${RESET}This script will install NetAlertX and will:"
printf "%b\n" "${RED}[WARNING] ${RESET}• Update OS with apt-get update/upgrade"
printf "%b\n" "${RED}[WARNING] ${RESET}• Overwrite existing files under ${INSTALL_DIR} "
printf "%b\n" "${RED}[WARNING] ${RESET}• Wipe any existing database"
printf "%b\n" "${RED}[WARNING] ${RESET}• Wipe/Set up NGINX configuration under /etc/nginx"
printf "%b\n" "${RED}[WARNING] ${RESET}• Set up systemd services."
read -r -p "Proceed with installation? [y/N]: " _reply
case "${_reply}" in
y|Y|yes|YES) ;;
*) echo "Aborting by user choice."; exit 1;;
esac
else
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Non-interactive mode detected; proceeding without confirmation."
printf "%b\n" "--------------------------------------------------------------------------"
fi
# Getting up to date
apt-get update -y
apt-get upgrade -y
# Prompt for HTTP port (default 20211) with countdown fallback
DEFAULT_PORT=20211
if [ -z "${NETALERTX_ASSUME_YES:-}" ] && [ -z "${ASSUME_YES:-}" ] && [ -z "${NETALERTX_FORCE:-}" ]; then
printf "%b\n" "--------------------------------------------------------------------------"
# Countdown-based prompt
_entered_port=""
for _sec in 10 9 8 7 6 5 4 3 2 1; do
printf "\rEnter HTTP port for NetAlertX [${DEFAULT_PORT}] (auto-continue in %2ds): " "${_sec}"
if read -t 1 -r _entered_port; then
break
fi
done
printf "\n"
if [ -z "${_entered_port}" ]; then
PORT="${DEFAULT_PORT}"
elif printf '%s' "${_entered_port}" | grep -Eq '^[0-9]+$' && [ "${_entered_port}" -ge 1 ] && [ "${_entered_port}" -le 65535 ]; then
PORT="${_entered_port}"
else
printf "%b\n" "${RED}[WARNING] ${RESET}Invalid port. Falling back to ${DEFAULT_PORT}"
PORT="${DEFAULT_PORT}"
fi
else
PORT="${PORT-}"; PORT="${PORT:-${DEFAULT_PORT}}"
fi
export PORT
# Detect primary server IP
SERVER_IP="$(ip -4 route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if ($i=="src") {print $(i+1); exit}}')"
if [ -z "${SERVER_IP}" ]; then
SERVER_IP="$(hostname -I 2>/dev/null | awk '{print $1}')"
fi
if [ -z "${SERVER_IP}" ]; then
SERVER_IP="127.0.0.1"
fi
export SERVER_IP
# Ensure tmpfs mounts are cleaned up on exit/failure
trap 'umount "${INSTALL_DIR}/log" 2>/dev/null || true; umount "${INSTALL_DIR}/api" 2>/dev/null || true' EXIT
# Making sure the system is clean
if [ -d "$INSTALL_DIR" ]; then
printf "%b\n" "Removing existing directory: $INSTALL_DIR"
rm -rf "$INSTALL_DIR"
fi
# 1. INSTALL SYSTEM DEPENDENCIES & ADD PHP REPOSITORY
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Installing system dependencies"
printf "%b\n" "--------------------------------------------------------------------------"
export DEBIAN_FRONTEND=noninteractive
apt-get update -y
# software-properties-common is not available and not needed
apt-get install -y --no-install-recommends \
ca-certificates lsb-release curl gnupg
# Detect OS
. /etc/os-release
OS_ID="${ID:-}"
OS_VER="${VERSION_ID:-}"
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Detected OS: ${OS_ID} ${OS_VER}"
printf "%b\n" "--------------------------------------------------------------------------"
if [ "${OS_ID}" = "ubuntu" ] && printf '%s' "${OS_VER}" | grep -q '^24'; then
# Ubuntu 24.x typically ships PHP 8.3; add ondrej/php PPA and set 8.4
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Ubuntu 24 detected - enabling ondrej/php PPA for PHP 8.4"
printf "%b\n" "--------------------------------------------------------------------------"
apt-get install -y --no-install-recommends software-properties-common || true
add-apt-repository ppa:ondrej/php -y
apt update -y
elif [ "${OS_ID}" = "debian" ] && printf '%s' "${OS_VER}" | grep -q '^13'; then
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Debian 13 detected - using built-in PHP 8.4"
printf "%b\n" "--------------------------------------------------------------------------"
fi
apt-get install -y --no-install-recommends \
tini snmp ca-certificates curl libwww-perl arp-scan perl apt-utils cron sudo \
php8.4 php8.4-cgi php8.4-fpm php8.4-sqlite3 php8.4-curl sqlite3 dnsutils net-tools mtr \
python3 python3-dev iproute2 nmap python3-pip zip usbutils traceroute nbtscan \
avahi-daemon avahi-utils build-essential git gnupg2 lsb-release \
debian-archive-keyring python3-venv
if [ "${OS_ID}" = "ubuntu" ] && printf '%s' "${OS_VER}" | grep -q '^24'; then # Set PHP 8.4 as the default alternatives where applicable
update-alternatives --set php /usr/bin/php8.4 || true
systemctl enable php8.4-fpm || true
systemctl restart php8.4-fpm || true
fi
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Setting up NGINX - Might take a minute!"
printf "%b\n" "--------------------------------------------------------------------------"
apt-get install -y nginx
# Enable and start nginx
if command -v systemctl >/dev/null 2>&1; then
systemctl enable nginx || true
systemctl restart nginx || true
fi
# 3. CLONE OR UPDATE APPLICATION REPOSITORY
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Cloning application repository and setup"
printf "%b\n" "--------------------------------------------------------------------------"
mkdir -p "$INSTALL_DIR"
git clone -b baremetal-installer https://github.com/jokob-sk/NetAlertX.git "$INSTALL_DIR/" #change after testing
if [ ! -f "$INSTALL_DIR/front/buildtimestamp.txt" ]; then
date +%s > "$INSTALL_DIR/front/buildtimestamp.txt"
fi
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[FINISHED] ${RESET}NetAlertX Installation complete"
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[CONFIGURATION] ${RESET}Configuring the web server"
printf "%b\n" "--------------------------------------------------------------------------"
# Stop any existing NetAlertX python server process (narrow pattern)
pkill -f "^python(3)?\s+.*${INSTALL_DIR}/server/?$" 2>/dev/null || true
# 4. SET UP PYTHON VIRTUAL ENVIRONMENT & DEPENDENCIES
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Setting up Python environment"
printf "%b\n" "--------------------------------------------------------------------------"
python3 -m venv /opt/myenv
source /opt/myenv/bin/activate
python -m pip install --upgrade pip
python -m pip install -r "${INSTALLER_DIR}/requirements.txt"
# Backup default NGINX site just in case
if [ -L /etc/nginx/sites-enabled/default ] ; then
rm /etc/nginx/sites-enabled/default
elif [ -f /etc/nginx/sites-enabled/default ]; then
mv /etc/nginx/sites-enabled/default /etc/nginx/sites-available/default.bkp_netalertx
fi
# Clear existing directories and files
if [ -d "$WEB_UI_DIR" ]; then
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[CHECKING] ${RESET}Removing existing NetAlertX web-UI"
printf "%b\n" "--------------------------------------------------------------------------"
rm -R "$WEB_UI_DIR"
fi
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[CHECKING] ${RESET}Removing existing NetAlertX NGINX config"
printf "%b\n" "--------------------------------------------------------------------------"
rm "$NGINX_CONFIG" 2>/dev/null || true
# Create web directory if it doesn't exist
mkdir -p /var/www/html
# create symbolic link to the installer directory
ln -sfn "${INSTALL_DIR}/front" "$WEB_UI_DIR"
# Copy NGINX configuration to NetAlertX config directory
cp "${INSTALLER_DIR}/${NGINX_CONF_FILE}" "${INSTALL_DIR}/config/${NGINX_CONF_FILE}"
# Use selected port (may be default 20211)
if [ -n "${PORT-}" ]; then
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "Setting webserver to port ($PORT)"
printf "%b\n" "--------------------------------------------------------------------------"
# Update the template to reflect the right port
sed -i "s/listen 20211;/listen ${PORT};/g" "${INSTALL_DIR}/config/${NGINX_CONF_FILE}"
# Warn if port is already in use
if ss -ltn | awk '{print $4}' | grep -q ":${PORT}$"; then
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${RED}[WARNING] ${RESET}Port ${PORT} appears in use. NGINX may fail to bind."
printf "%b\n" "--------------------------------------------------------------------------"
fi
fi
# Create symbolic link to NGINX configuration coming with NetAlertX
ln -sfn "${INSTALL_DIR}/config/${NGINX_CONF_FILE}" "${NGINX_CONFIG}"
# Run the hardware vendors update at least once
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[VENDORS UPDATE] ${RESET}Run the hardware vendors update"
printf "%b\n" "--------------------------------------------------------------------------"
# Check if ieee-oui.txt or ieee-iab.txt exist
if [ -f "$OUI_FILE" ]; then
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "The file ieee-oui.txt exists. Skipping update_vendors.sh..."
printf "%b\n" "--------------------------------------------------------------------------"
else
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "The file ieee-oui.txt does not exist. Running update_vendors..."
printf "%b\n" "--------------------------------------------------------------------------"
# Run the update_vendors.sh script
if [ -f "${INSTALL_DIR}/back/update_vendors.sh" ]; then
"${INSTALL_DIR}/back/update_vendors.sh"
else
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" " update_vendors.sh script not found in $INSTALL_DIR."
printf "%b\n" "--------------------------------------------------------------------------"
fi
fi
# Create empty log files and plugin folders
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Creating mounts and file structure"
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "Cleaning up old mounts if any"
umount "${INSTALL_DIR}/log" 2>/dev/null || true
umount "${INSTALL_DIR}/api" 2>/dev/null || true
printf "%b\n" "Creating log api folders if they don't exist"
mkdir -p "${INSTALL_DIR}/log" "${INSTALL_DIR}/api"
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Mounting log and api folders as tmpfs"
printf "%b\n" "--------------------------------------------------------------------------"
mountpoint -q "${INSTALL_DIR}/log" || mount -t tmpfs -o noexec,nosuid,nodev tmpfs "${INSTALL_DIR}/log"
mountpoint -q "${INSTALL_DIR}/api" || mount -t tmpfs -o noexec,nosuid,nodev tmpfs "${INSTALL_DIR}/api"
chown -R www-data:www-data "${INSTALL_DIR}/log" "${INSTALL_DIR}/api"
# Ensure plugins directory exists within the tmpfs mount
mkdir -p "${INSTALL_DIR}"/log/plugins
chown -R www-data:www-data "${INSTALL_DIR}"/log/plugins
# Create the execution_queue.log file if it doesn't exist
touch ${INSTALL_DIR}/log/{app.log,execution_queue.log,app_front.log,app.php_errors.log,stderr.log,stdout.log,db_is_locked.log}
touch ${INSTALL_DIR}/api/user_notifications.json
chown -R www-data:www-data "${INSTALL_DIR}"/log "${INSTALL_DIR}"/api
chmod -R ug+rwX "${INSTALL_DIR}"/log "${INSTALL_DIR}"/api
# Set ownership of the tmpfs mountpoints first.
chown www-data:www-data "${INSTALL_DIR}/log" "${INSTALL_DIR}/api"
# Ensure plugins directory exists within the tmpfs mount
mkdir -p "${INSTALL_DIR}/log/plugins"
# Create log and api files directly as the www-data user to ensure correct ownership from the start.
sudo -u www-data touch ${INSTALL_DIR}/log/{app.log,execution_queue.log,app_front.log,app.php_errors.log,stderr.log,stdout.log,db_is_locked.log}
sudo -u www-data touch ${INSTALL_DIR}/api/user_notifications.json
# Set final permissions for all created files and directories.
chown -R www-data:www-data "${INSTALL_DIR}/log" "${INSTALL_DIR}/api"
chmod -R ug+rwX "${INSTALL_DIR}/log" "${INSTALL_DIR}/api"
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Setting up DB and CONF files"
printf "%b\n" "--------------------------------------------------------------------------"
# Copy starter $DB_FILE and $CONF_FILE
mkdir -p "${INSTALL_DIR}/config" "${INSTALL_DIR}/db"
cp -u "${INSTALL_DIR}/back/${CONF_FILE}" "${INSTALL_DIR}/config/${CONF_FILE}"
cp -u "${INSTALL_DIR}/back/${DB_FILE}" "${FILEDB}"
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[CONFIGURING] ${RESET}Setting File Permissions"
printf "%b\n" "--------------------------------------------------------------------------"
# Restrict wide permissions; allow owner/group access
chgrp -R www-data "$INSTALL_DIR"
chmod -R ug+rwX,o-rwx "$INSTALL_DIR"
chmod -R ug+rwX,o-rwx "$WEB_UI_DIR"
# chmod -R ug+rwX "$INSTALL_DIR/log" "$INSTALL_DIR/config"
chown -R www-data:www-data "$FILEDB" 2>/dev/null || true
# start PHP
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[STARTING] ${RESET}Starting PHP and NGINX"
printf "%b\n" "--------------------------------------------------------------------------"
/etc/init.d/php8.4-fpm start
nginx -t || {
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${RED}[ERROR] ${RESET}NGINX config test failed!"
printf "%b\n" "--------------------------------------------------------------------------"; exit 1; }
/etc/init.d/nginx start
# Make a start script
cat > "$INSTALL_DIR/start.netalertx.sh" << EOF
#!/usr/bin/env bash
# Activate the virtual python environment
source /opt/myenv/bin/activate
echo -e "--------------------------------------------------------------------------"
echo -e "Starting NetAlertX - navigate to http://${SERVER_IP}:${PORT}"
echo -e "--------------------------------------------------------------------------"
# Start the NetAlertX python script
python server/
EOF
chmod +x "$INSTALL_DIR/start.netalertx.sh"
# Install and manage systemd service if available, otherwise fallback to direct start
if command -v systemctl >/dev/null 2>&1; then
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Setting up systemd service"
printf "%b\n" "--------------------------------------------------------------------------"
cat > /etc/systemd/system/netalertx.service << 'EOF'
[Unit]
Description=NetAlertX Service
After=network-online.target nginx.service
Wants=network-online.target
[Service]
Type=simple
User=www-data
Group=www-data
ExecStart=/app/start.netalertx.sh
WorkingDirectory=/app
Restart=on-failure
RestartSec=5
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
EOF
# Reload systemd and enable/start service
systemctl daemon-reload
systemctl enable netalertx.service
systemctl start netalertx.service
systemctl restart nginx
# Verify service is running
if systemctl is-active --quiet netalertx.service; then
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[SUCCESS] ${RESET}NetAlertX service started successfully"
printf "%b\n" "--------------------------------------------------------------------------"
else
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${RED}[WARNING] ${RESET}NetAlertX service may not have started properly"
printf "%b\n" "--------------------------------------------------------------------------"
systemctl status netalertx.service --no-pager -l
fi
else
printf "%b\n" "--------------------------------------------------------------------------"
printf "%b\n" "${GREEN}[INSTALLING] ${RESET}Starting NetAlertX (no systemd)"
printf "%b\n" "--------------------------------------------------------------------------"
"$INSTALL_DIR/start.netalertx.sh" &
fi
echo -e "--------------------------------------------------------------------------"
echo -e "${GREEN}[Service] 🚀 Starting app - navigate to http://${SERVER_IP}:${PORT}"
echo -e "--------------------------------------------------------------------------"

View File

@@ -0,0 +1,26 @@
openwrt-luci-rpc
asusrouter
aiohttp
graphene
flask
flask-cors
unifi-sm-api
tplink-omada-client
wakeonlan
pycryptodome
requests
paho-mqtt
scapy
cron-converter
pytz
json2table
dhcp-leases
pyunifi
speedtest-cli
chardet
python-nmap
dnspython
librouteros
yattag
zeroconf
git+https://github.com/foreign-sub/aiofreepybox.git

0
install/ubuntu24/netalertx.service Normal file → Executable file
View File

1
install/ubuntu24/requirements.txt Normal file → Executable file
View File

@@ -22,4 +22,5 @@ python-nmap
dnspython
librouteros
yattag
zeroconf
git+https://github.com/foreign-sub/aiofreepybox.git

View File

@@ -74,6 +74,7 @@ nav:
- Environment Setup: DEV_ENV_SETUP.md
- Devcontainer: DEV_DEVCONTAINER.md
- Custom Plugins: PLUGINS_DEV.md
- Plugin Config: PLUGINS_DEV_CONFIG.md
- Frontend Development: FRONTEND_DEVELOPMENT.md
- Database: DATABASE.md
- Settings: SETTINGS_SYSTEM.md
@@ -86,6 +87,7 @@ nav:
- Sessions: API_SESSIONS.md
- Settings: API_SETTINGS.md
- Events: API_EVENTS.md
- Messaging in-app: API_MESSAGING_IN_APP.md
- Metrics: API_METRICS.md
- Net Tools: API_NETTOOLS.md
- Online History: API_ONLINEHISTORY.md

View File

@@ -0,0 +1,425 @@
#!/bin/sh
# This script recreates the database from schema code.
#Database location
NETALERTX_DB_FILE=/app/db/app.db
#remove the old database
rm -f "${NETALERTX_DB_FILE}"
# Write the schema to ${NETALERTX_DB_FILE}.sql via a heredoc terminated by "end-of-database-schema"
cat << end-of-database-schema > ${NETALERTX_DB_FILE}.sql
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE Events (eve_MAC STRING (50) NOT NULL COLLATE NOCASE, eve_IP STRING (50) NOT NULL COLLATE NOCASE, eve_DateTime DATETIME NOT NULL, eve_EventType STRING (30) NOT NULL COLLATE NOCASE, eve_AdditionalInfo STRING (250) DEFAULT (''), eve_PendingAlertEmail BOOLEAN NOT NULL CHECK (eve_PendingAlertEmail IN (0, 1)) DEFAULT (1), eve_PairEventRowid INTEGER);
CREATE TABLE Sessions (ses_MAC STRING (50) COLLATE NOCASE, ses_IP STRING (50) COLLATE NOCASE, ses_EventTypeConnection STRING (30) COLLATE NOCASE, ses_DateTimeConnection DATETIME, ses_EventTypeDisconnection STRING (30) COLLATE NOCASE, ses_DateTimeDisconnection DATETIME, ses_StillConnected BOOLEAN, ses_AdditionalInfo STRING (250));
CREATE TABLE IF NOT EXISTS "Online_History" (
"Index" INTEGER,
"Scan_Date" TEXT,
"Online_Devices" INTEGER,
"Down_Devices" INTEGER,
"All_Devices" INTEGER,
"Archived_Devices" INTEGER,
"Offline_Devices" INTEGER,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE sqlite_sequence(name,seq);
CREATE TABLE Devices (
devMac STRING (50) PRIMARY KEY NOT NULL COLLATE NOCASE,
devName STRING (50) NOT NULL DEFAULT "(unknown)",
devOwner STRING (30) DEFAULT "(unknown)" NOT NULL,
devType STRING (30),
devVendor STRING (250),
devFavorite BOOLEAN CHECK (devFavorite IN (0, 1)) DEFAULT (0) NOT NULL,
devGroup STRING (10),
devComments TEXT,
devFirstConnection DATETIME NOT NULL,
devLastConnection DATETIME NOT NULL,
devLastIP STRING (50) NOT NULL COLLATE NOCASE,
devStaticIP BOOLEAN DEFAULT (0) NOT NULL CHECK (devStaticIP IN (0, 1)),
devScan INTEGER DEFAULT (1) NOT NULL,
devLogEvents BOOLEAN NOT NULL DEFAULT (1) CHECK (devLogEvents IN (0, 1)),
devAlertEvents BOOLEAN NOT NULL DEFAULT (1) CHECK (devAlertEvents IN (0, 1)),
devAlertDown BOOLEAN NOT NULL DEFAULT (0) CHECK (devAlertDown IN (0, 1)),
devSkipRepeated INTEGER DEFAULT 0 NOT NULL,
devLastNotification DATETIME,
devPresentLastScan BOOLEAN NOT NULL DEFAULT (0) CHECK (devPresentLastScan IN (0, 1)),
devIsNew BOOLEAN NOT NULL DEFAULT (1) CHECK (devIsNew IN (0, 1)),
devLocation STRING (250) COLLATE NOCASE,
devIsArchived BOOLEAN NOT NULL DEFAULT (0) CHECK (devIsArchived IN (0, 1)),
devParentMAC TEXT,
devParentPort INTEGER,
devIcon TEXT,
devGUID TEXT,
devSite TEXT,
devSSID TEXT,
devSyncHubNode TEXT,
devSourcePlugin TEXT
, "devCustomProps" TEXT);
CREATE TABLE IF NOT EXISTS "Settings" (
"setKey" TEXT,
"setName" TEXT,
"setDescription" TEXT,
"setType" TEXT,
"setOptions" TEXT,
"setGroup" TEXT,
"setValue" TEXT,
"setEvents" TEXT,
"setOverriddenByEnv" INTEGER
);
CREATE TABLE IF NOT EXISTS "Parameters" (
"par_ID" TEXT PRIMARY KEY,
"par_Value" TEXT
);
CREATE TABLE Plugins_Objects(
"Index" INTEGER,
Plugin TEXT NOT NULL,
Object_PrimaryID TEXT NOT NULL,
Object_SecondaryID TEXT NOT NULL,
DateTimeCreated TEXT NOT NULL,
DateTimeChanged TEXT NOT NULL,
Watched_Value1 TEXT NOT NULL,
Watched_Value2 TEXT NOT NULL,
Watched_Value3 TEXT NOT NULL,
Watched_Value4 TEXT NOT NULL,
Status TEXT NOT NULL,
Extra TEXT NOT NULL,
UserData TEXT NOT NULL,
ForeignKey TEXT NOT NULL,
SyncHubNodeName TEXT,
"HelpVal1" TEXT,
"HelpVal2" TEXT,
"HelpVal3" TEXT,
"HelpVal4" TEXT,
ObjectGUID TEXT,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE Plugins_Events(
"Index" INTEGER,
Plugin TEXT NOT NULL,
Object_PrimaryID TEXT NOT NULL,
Object_SecondaryID TEXT NOT NULL,
DateTimeCreated TEXT NOT NULL,
DateTimeChanged TEXT NOT NULL,
Watched_Value1 TEXT NOT NULL,
Watched_Value2 TEXT NOT NULL,
Watched_Value3 TEXT NOT NULL,
Watched_Value4 TEXT NOT NULL,
Status TEXT NOT NULL,
Extra TEXT NOT NULL,
UserData TEXT NOT NULL,
ForeignKey TEXT NOT NULL,
SyncHubNodeName TEXT,
"HelpVal1" TEXT,
"HelpVal2" TEXT,
"HelpVal3" TEXT,
"HelpVal4" TEXT, "ObjectGUID" TEXT,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE Plugins_History(
"Index" INTEGER,
Plugin TEXT NOT NULL,
Object_PrimaryID TEXT NOT NULL,
Object_SecondaryID TEXT NOT NULL,
DateTimeCreated TEXT NOT NULL,
DateTimeChanged TEXT NOT NULL,
Watched_Value1 TEXT NOT NULL,
Watched_Value2 TEXT NOT NULL,
Watched_Value3 TEXT NOT NULL,
Watched_Value4 TEXT NOT NULL,
Status TEXT NOT NULL,
Extra TEXT NOT NULL,
UserData TEXT NOT NULL,
ForeignKey TEXT NOT NULL,
SyncHubNodeName TEXT,
"HelpVal1" TEXT,
"HelpVal2" TEXT,
"HelpVal3" TEXT,
"HelpVal4" TEXT, "ObjectGUID" TEXT,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE Plugins_Language_Strings(
"Index" INTEGER,
Language_Code TEXT NOT NULL,
String_Key TEXT NOT NULL,
String_Value TEXT NOT NULL,
Extra TEXT NOT NULL,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE TABLE CurrentScan (
cur_MAC STRING(50) NOT NULL COLLATE NOCASE,
cur_IP STRING(50) NOT NULL COLLATE NOCASE,
cur_Vendor STRING(250),
cur_ScanMethod STRING(10),
cur_Name STRING(250),
cur_LastQuery STRING(250),
cur_DateTime STRING(250),
cur_SyncHubNodeName STRING(50),
cur_NetworkSite STRING(250),
cur_SSID STRING(250),
cur_NetworkNodeMAC STRING(250),
cur_PORT STRING(250),
cur_Type STRING(250),
UNIQUE(cur_MAC)
);
CREATE TABLE IF NOT EXISTS "AppEvents" (
"Index" INTEGER PRIMARY KEY AUTOINCREMENT,
"GUID" TEXT UNIQUE,
"AppEventProcessed" BOOLEAN,
"DateTimeCreated" TEXT,
"ObjectType" TEXT,
"ObjectGUID" TEXT,
"ObjectPlugin" TEXT,
"ObjectPrimaryID" TEXT,
"ObjectSecondaryID" TEXT,
"ObjectForeignKey" TEXT,
"ObjectIndex" TEXT,
"ObjectIsNew" BOOLEAN,
"ObjectIsArchived" BOOLEAN,
"ObjectStatusColumn" TEXT,
"ObjectStatus" TEXT,
"AppEventType" TEXT,
"Helper1" TEXT,
"Helper2" TEXT,
"Helper3" TEXT,
"Extra" TEXT
);
CREATE TABLE IF NOT EXISTS "Notifications" (
"Index" INTEGER,
"GUID" TEXT UNIQUE,
"DateTimeCreated" TEXT,
"DateTimePushed" TEXT,
"Status" TEXT,
"JSON" TEXT,
"Text" TEXT,
"HTML" TEXT,
"PublishedVia" TEXT,
"Extra" TEXT,
PRIMARY KEY("Index" AUTOINCREMENT)
);
CREATE INDEX IDX_eve_DateTime ON Events (eve_DateTime);
CREATE INDEX IDX_eve_EventType ON Events (eve_EventType COLLATE NOCASE);
CREATE INDEX IDX_eve_MAC ON Events (eve_MAC COLLATE NOCASE);
CREATE INDEX IDX_eve_PairEventRowid ON Events (eve_PairEventRowid);
CREATE INDEX IDX_ses_EventTypeDisconnection ON Sessions (ses_EventTypeDisconnection COLLATE NOCASE);
CREATE INDEX IDX_ses_EventTypeConnection ON Sessions (ses_EventTypeConnection COLLATE NOCASE);
CREATE INDEX IDX_ses_DateTimeDisconnection ON Sessions (ses_DateTimeDisconnection);
CREATE INDEX IDX_ses_MAC ON Sessions (ses_MAC COLLATE NOCASE);
CREATE INDEX IDX_ses_DateTimeConnection ON Sessions (ses_DateTimeConnection);
CREATE INDEX IDX_dev_PresentLastScan ON Devices (devPresentLastScan);
CREATE INDEX IDX_dev_FirstConnection ON Devices (devFirstConnection);
CREATE INDEX IDX_dev_AlertDeviceDown ON Devices (devAlertDown);
CREATE INDEX IDX_dev_StaticIP ON Devices (devStaticIP);
CREATE INDEX IDX_dev_ScanCycle ON Devices (devScan);
CREATE INDEX IDX_dev_Favorite ON Devices (devFavorite);
CREATE INDEX IDX_dev_LastIP ON Devices (devLastIP);
CREATE INDEX IDX_dev_NewDevice ON Devices (devIsNew);
CREATE INDEX IDX_dev_Archived ON Devices (devIsArchived);
CREATE VIEW Events_Devices AS
SELECT *
FROM Events
LEFT JOIN Devices ON eve_MAC = devMac
/* Events_Devices(eve_MAC,eve_IP,eve_DateTime,eve_EventType,eve_AdditionalInfo,eve_PendingAlertEmail,eve_PairEventRowid,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps) */;
CREATE VIEW LatestEventsPerMAC AS
WITH RankedEvents AS (
SELECT
e.*,
ROW_NUMBER() OVER (PARTITION BY e.eve_MAC ORDER BY e.eve_DateTime DESC) AS row_num
FROM Events AS e
)
SELECT
e.*,
d.*,
c.*
FROM RankedEvents AS e
LEFT JOIN Devices AS d ON e.eve_MAC = d.devMac
INNER JOIN CurrentScan AS c ON e.eve_MAC = c.cur_MAC
WHERE e.row_num = 1
/* LatestEventsPerMAC(eve_MAC,eve_IP,eve_DateTime,eve_EventType,eve_AdditionalInfo,eve_PendingAlertEmail,eve_PairEventRowid,row_num,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps,cur_MAC,cur_IP,cur_Vendor,cur_ScanMethod,cur_Name,cur_LastQuery,cur_DateTime,cur_SyncHubNodeName,cur_NetworkSite,cur_SSID,cur_NetworkNodeMAC,cur_PORT,cur_Type) */;
CREATE VIEW Sessions_Devices AS SELECT * FROM Sessions LEFT JOIN "Devices" ON ses_MAC = devMac
/* Sessions_Devices(ses_MAC,ses_IP,ses_EventTypeConnection,ses_DateTimeConnection,ses_EventTypeDisconnection,ses_DateTimeDisconnection,ses_StillConnected,ses_AdditionalInfo,devMac,devName,devOwner,devType,devVendor,devFavorite,devGroup,devComments,devFirstConnection,devLastConnection,devLastIP,devStaticIP,devScan,devLogEvents,devAlertEvents,devAlertDown,devSkipRepeated,devLastNotification,devPresentLastScan,devIsNew,devLocation,devIsArchived,devParentMAC,devParentPort,devIcon,devGUID,devSite,devSSID,devSyncHubNode,devSourcePlugin,devCustomProps) */;
CREATE VIEW Convert_Events_to_Sessions AS SELECT EVE1.eve_MAC,
EVE1.eve_IP,
EVE1.eve_EventType AS eve_EventTypeConnection,
EVE1.eve_DateTime AS eve_DateTimeConnection,
CASE WHEN EVE2.eve_EventType IN ('Disconnected', 'Device Down') OR
EVE2.eve_EventType IS NULL THEN EVE2.eve_EventType ELSE '<missing event>' END AS eve_EventTypeDisconnection,
CASE WHEN EVE2.eve_EventType IN ('Disconnected', 'Device Down') THEN EVE2.eve_DateTime ELSE NULL END AS eve_DateTimeDisconnection,
CASE WHEN EVE2.eve_EventType IS NULL THEN 1 ELSE 0 END AS eve_StillConnected,
EVE1.eve_AdditionalInfo
FROM Events AS EVE1
LEFT JOIN
Events AS EVE2 ON EVE1.eve_PairEventRowID = EVE2.RowID
WHERE EVE1.eve_EventType IN ('New Device', 'Connected','Down Reconnected')
UNION
SELECT eve_MAC,
eve_IP,
'<missing event>' AS eve_EventTypeConnection,
NULL AS eve_DateTimeConnection,
eve_EventType AS eve_EventTypeDisconnection,
eve_DateTime AS eve_DateTimeDisconnection,
0 AS eve_StillConnected,
eve_AdditionalInfo
FROM Events AS EVE1
WHERE (eve_EventType = 'Device Down' OR
eve_EventType = 'Disconnected') AND
EVE1.eve_PairEventRowID IS NULL
/* Convert_Events_to_Sessions(eve_MAC,eve_IP,eve_EventTypeConnection,eve_DateTimeConnection,eve_EventTypeDisconnection,eve_DateTimeDisconnection,eve_StillConnected,eve_AdditionalInfo) */;
CREATE TRIGGER "trg_insert_devices"
AFTER INSERT ON "Devices"
WHEN NOT EXISTS (
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
AND ObjectType = 'Devices'
AND ObjectGUID = NEW.devGUID
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND AppEventType = 'insert'
)
BEGIN
INSERT INTO "AppEvents" (
"GUID",
"DateTimeCreated",
"AppEventProcessed",
"ObjectType",
"ObjectGUID",
"ObjectPrimaryID",
"ObjectSecondaryID",
"ObjectStatus",
"ObjectStatusColumn",
"ObjectIsNew",
"ObjectIsArchived",
"ObjectForeignKey",
"ObjectPlugin",
"AppEventType"
)
VALUES (
lower(
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
substr(hex(randomblob(2)), 2) || '-' ||
hex(randomblob(6))
)
,
DATETIME('now'),
FALSE,
'Devices',
NEW.devGUID, -- ObjectGUID
NEW.devMac, -- ObjectPrimaryID
NEW.devLastIP, -- ObjectSecondaryID
CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END, -- ObjectStatus
'devPresentLastScan', -- ObjectStatusColumn
NEW.devIsNew, -- ObjectIsNew
NEW.devIsArchived, -- ObjectIsArchived
NEW.devGUID, -- ObjectForeignKey
'DEVICES', -- ObjectPlugin
'insert'
);
END;
CREATE TRIGGER "trg_update_devices"
AFTER UPDATE ON "Devices"
WHEN NOT EXISTS (
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
AND ObjectType = 'Devices'
AND ObjectGUID = NEW.devGUID
AND ObjectStatus = CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND AppEventType = 'update'
)
BEGIN
INSERT INTO "AppEvents" (
"GUID",
"DateTimeCreated",
"AppEventProcessed",
"ObjectType",
"ObjectGUID",
"ObjectPrimaryID",
"ObjectSecondaryID",
"ObjectStatus",
"ObjectStatusColumn",
"ObjectIsNew",
"ObjectIsArchived",
"ObjectForeignKey",
"ObjectPlugin",
"AppEventType"
)
VALUES (
lower(
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
substr(hex(randomblob(2)), 2) || '-' ||
hex(randomblob(6))
)
,
DATETIME('now'),
FALSE,
'Devices',
NEW.devGUID, -- ObjectGUID
NEW.devMac, -- ObjectPrimaryID
NEW.devLastIP, -- ObjectSecondaryID
CASE WHEN NEW.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END, -- ObjectStatus
'devPresentLastScan', -- ObjectStatusColumn
NEW.devIsNew, -- ObjectIsNew
NEW.devIsArchived, -- ObjectIsArchived
NEW.devGUID, -- ObjectForeignKey
'DEVICES', -- ObjectPlugin
'update'
);
END;
CREATE TRIGGER "trg_delete_devices"
AFTER DELETE ON "Devices"
WHEN NOT EXISTS (
SELECT 1 FROM AppEvents
WHERE AppEventProcessed = 0
AND ObjectType = 'Devices'
AND ObjectGUID = OLD.devGUID
AND ObjectStatus = CASE WHEN OLD.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END
AND AppEventType = 'delete'
)
BEGIN
INSERT INTO "AppEvents" (
"GUID",
"DateTimeCreated",
"AppEventProcessed",
"ObjectType",
"ObjectGUID",
"ObjectPrimaryID",
"ObjectSecondaryID",
"ObjectStatus",
"ObjectStatusColumn",
"ObjectIsNew",
"ObjectIsArchived",
"ObjectForeignKey",
"ObjectPlugin",
"AppEventType"
)
VALUES (
lower(
hex(randomblob(4)) || '-' || hex(randomblob(2)) || '-' || '4' ||
substr(hex( randomblob(2)), 2) || '-' ||
substr('AB89', 1 + (abs(random()) % 4) , 1) ||
substr(hex(randomblob(2)), 2) || '-' ||
hex(randomblob(6))
)
,
DATETIME('now'),
FALSE,
'Devices',
OLD.devGUID, -- ObjectGUID
OLD.devMac, -- ObjectPrimaryID
OLD.devLastIP, -- ObjectSecondaryID
CASE WHEN OLD.devPresentLastScan = 1 THEN 'online' ELSE 'offline' END, -- ObjectStatus
'devPresentLastScan', -- ObjectStatusColumn
OLD.devIsNew, -- ObjectIsNew
OLD.devIsArchived, -- ObjectIsArchived
OLD.devGUID, -- ObjectForeignKey
'DEVICES', -- ObjectPlugin
'delete'
);
END;
end-of-database-schema
# Import the database schema into the new database file
sqlite3 ${NETALERTX_DB_FILE} < ${NETALERTX_DB_FILE}.sql
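# Optional sanity check (not part of the original script): list the imported
# tables to confirm the schema loaded; uses the same sqlite3 CLI as above.
sqlite3 ${NETALERTX_DB_FILE} ".tables"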

View File

@@ -1,6 +1,19 @@
import threading
import sys
from flask import Flask, request, jsonify, Response
from flask_cors import CORS
# Register NetAlertX directories
INSTALL_PATH = "/app"
sys.path.extend([f"{INSTALL_PATH}/server"])
from logger import mylog
from helper import get_setting_value, timeNowTZ
from db.db_helper import get_date_from_period
from app_state import updateState
from .graphql_endpoint import devicesSchema
from .device_endpoint import get_device_data, set_device_data, delete_device, delete_device_events, reset_device_props, copy_device, update_device_column
from .devices_endpoint import get_all_devices, delete_unknown_devices, delete_all_with_empty_macs, delete_devices, export_devices, import_csv, devices_totals, devices_by_status
@@ -11,17 +24,7 @@ from .sessions_endpoint import get_sessions, delete_session, create_session, get
from .nettools_endpoint import wakeonlan, traceroute, speedtest, nslookup, nmap_scan, internet_info
from .dbquery_endpoint import read_query, write_query, update_query, delete_query
from .sync_endpoint import handle_sync_post, handle_sync_get
import sys
# Register NetAlertX directories
INSTALL_PATH = "/app"
sys.path.extend([f"{INSTALL_PATH}/server"])
from logger import mylog
from helper import get_setting_value, timeNowTZ
from db.db_helper import get_date_from_period
from app_state import updateState
from messaging.in_app import write_notification
from messaging.in_app import write_notification, mark_all_notifications_read, delete_notifications, get_unread_notifications, delete_notification, mark_notification_as_read
# Flask application
app = Flask(__name__)
@@ -36,6 +39,7 @@ CORS(
r"/sessions/*": {"origins": "*"},
r"/settings/*": {"origins": "*"},
r"/dbquery/*": {"origins": "*"},
r"/messaging/*": {"origins": "*"},
r"/events/*": {"origins": "*"}
},
supports_credentials=True,
@@ -500,6 +504,69 @@ def metrics():
# Return Prometheus metrics as plain text
return Response(get_metric_stats(), mimetype="text/plain")
# --------------------------
# In-app notifications
# --------------------------
@app.route("/messaging/in-app/write", methods=["POST"])
def api_write_notification():
if not is_authorized():
return jsonify({"error": "Forbidden"}), 403
data = request.json or {}
content = data.get("content")
level = data.get("level", "alert")
if not content:
return jsonify({"success": False, "error": "Missing content"}), 400
write_notification(content, level)
return jsonify({"success": True})
@app.route("/messaging/in-app/unread", methods=["GET"])
def api_get_unread_notifications():
if not is_authorized():
return jsonify({"error": "Forbidden"}), 403
return get_unread_notifications()
@app.route("/messaging/in-app/read/all", methods=["POST"])
def api_mark_all_notifications_read():
if not is_authorized():
return jsonify({"error": "Forbidden"}), 403
return jsonify(mark_all_notifications_read())
@app.route("/messaging/in-app/delete", methods=["DELETE"])
def api_delete_all_notifications():
if not is_authorized():
return jsonify({"error": "Forbidden"}), 403
return delete_notifications()
@app.route("/messaging/in-app/delete/<guid>", methods=["DELETE"])
def api_delete_notification(guid):
"""Delete a single notification by GUID."""
if not is_authorized():
return jsonify({"error": "Forbidden"}), 403
result = delete_notification(guid)
if result.get("success"):
return jsonify({"success": True})
else:
return jsonify({"success": False, "error": result.get("error")}), 500
@app.route("/messaging/in-app/read/<guid>", methods=["POST"])
def api_mark_notification_read(guid):
"""Mark a single notification as read by GUID."""
if not is_authorized():
return jsonify({"error": "Forbidden"}), 403
result = mark_notification_as_read(guid)
if result.get("success"):
return jsonify({"success": True})
else:
return jsonify({"success": False, "error": result.get("error")}), 500
# --------------------------
# SYNC endpoint

View File

@@ -31,7 +31,7 @@ arpscan_devices = []
SCAN_SUBNETS = ['192.168.1.0/24 --interface=eth1', '192.168.1.0/24 --interface=eth0']
LOG_LEVEL = 'verbose'
TIMEZONE = 'Europe/Berlin'
UI_LANG = 'English'
UI_LANG = 'English (en_us)'
UI_PRESENCE = ['online', 'offline', 'archived']
UI_MY_DEVICES = ['online', 'offline', 'archived', 'new', 'down']
UI_NOT_RANDOM_MAC = []

View File

@@ -153,47 +153,259 @@ class SafeConditionBuilder:
def _parse_condition(self, condition: str) -> Tuple[str, Dict[str, Any]]:
"""
Parse a condition string into safe SQL with parameters.
This method handles basic patterns like:
- AND devName = 'value'
- AND devComments LIKE '%value%'
- AND eve_EventType IN ('type1', 'type2')
This method handles both single and compound conditions:
- Single: AND devName = 'value'
- Compound: AND devName = 'value' AND devVendor = 'Apple'
- Multiple clauses with AND/OR operators
Args:
condition: Condition string to parse
Returns:
Tuple of (safe_sql_snippet, parameters_dict)
"""
condition = condition.strip()
# Handle empty conditions
if not condition:
return "", {}
# Check if this is a compound condition (multiple clauses)
if self._is_compound_condition(condition):
return self._parse_compound_condition(condition)
# Single condition: extract leading logical operator if present
logical_op = None
clause_text = condition
# Check for leading AND
if condition.upper().startswith('AND ') or condition.upper().startswith('AND\t'):
logical_op = 'AND'
clause_text = condition[3:].strip()
# Check for leading OR
elif condition.upper().startswith('OR ') or condition.upper().startswith('OR\t'):
logical_op = 'OR'
clause_text = condition[2:].strip()
# Parse the single condition
return self._parse_single_condition(clause_text, logical_op)
def _is_compound_condition(self, condition: str) -> bool:
"""
Determine if a condition contains multiple clauses (compound condition).
A compound condition has multiple logical operators (AND/OR) connecting
separate comparison clauses.
Args:
condition: Condition string to check
Returns:
True if compound (multiple clauses), False if single clause
"""
# Track if we're inside quotes to avoid counting operators in quoted strings
in_quotes = False
logical_op_count = 0
i = 0
while i < len(condition):
char = condition[i]
# Toggle quote state
if char == "'":
in_quotes = not in_quotes
i += 1
continue
# Only count logical operators outside of quotes
if not in_quotes:
# Look for AND or OR as whole words
remaining = condition[i:].upper()
# Check for AND (must be word boundary)
if remaining.startswith('AND ') or remaining.startswith('AND\t'):
logical_op_count += 1
i += 3
continue
# Check for OR (must be word boundary)
if remaining.startswith('OR ') or remaining.startswith('OR\t'):
logical_op_count += 1
i += 2
continue
i += 1
# A compound condition has more than one logical operator
# (first AND/OR starts the condition, subsequent ones connect clauses)
return logical_op_count > 1
def _parse_compound_condition(self, condition: str) -> Tuple[str, Dict[str, Any]]:
"""
Parse a compound condition with multiple clauses.
Splits the condition into individual clauses, parses each one,
and reconstructs the full condition with all parameters.
Args:
condition: Compound condition string
Returns:
Tuple of (safe_sql_snippet, parameters_dict)
"""
# Split the condition into individual clauses while preserving logical operators
clauses = self._split_by_logical_operators(condition)
# Parse each clause individually
parsed_parts = []
all_params = {}
for clause_text, logical_op in clauses:
# Parse this single clause
sql_part, params = self._parse_single_condition(clause_text, logical_op)
if sql_part:
parsed_parts.append(sql_part)
all_params.update(params)
if not parsed_parts:
raise ValueError("No valid clauses found in compound condition")
# Join all parsed parts
final_sql = " ".join(parsed_parts)
return final_sql, all_params
def _split_by_logical_operators(self, condition: str) -> List[Tuple[str, Optional[str]]]:
"""
Split a compound condition into individual clauses.
Returns a list of tuples: (clause_text, logical_operator)
The logical operator is the AND/OR that precedes the clause.
Args:
condition: Compound condition string
Returns:
List of (clause_text, logical_op) tuples
"""
clauses = []
current_clause = []
current_logical_op = None
in_quotes = False
i = 0
while i < len(condition):
char = condition[i]
# Toggle quote state
if char == "'":
in_quotes = not in_quotes
current_clause.append(char)
i += 1
continue
# Only look for logical operators outside of quotes
if not in_quotes:
remaining = condition[i:].upper()
# Check if we're at a word boundary (start of string or after whitespace)
at_word_boundary = (i == 0 or condition[i-1] in ' \t')
# Check for AND (must be at word boundary)
if at_word_boundary and (remaining.startswith('AND ') or remaining.startswith('AND\t')):
# Save current clause if we have one
if current_clause:
clause_text = ''.join(current_clause).strip()
if clause_text:
clauses.append((clause_text, current_logical_op))
current_clause = []
# Set the logical operator for the next clause
current_logical_op = 'AND'
i += 3 # Skip 'AND'
# Skip whitespace after AND
while i < len(condition) and condition[i] in ' \t':
i += 1
continue
# Check for OR (must be at word boundary)
if at_word_boundary and (remaining.startswith('OR ') or remaining.startswith('OR\t')):
# Save current clause if we have one
if current_clause:
clause_text = ''.join(current_clause).strip()
if clause_text:
clauses.append((clause_text, current_logical_op))
current_clause = []
# Set the logical operator for the next clause
current_logical_op = 'OR'
i += 2 # Skip 'OR'
# Skip whitespace after OR
while i < len(condition) and condition[i] in ' \t':
i += 1
continue
# Add character to current clause
current_clause.append(char)
i += 1
# Don't forget the last clause
if current_clause:
clause_text = ''.join(current_clause).strip()
if clause_text:
clauses.append((clause_text, current_logical_op))
return clauses
def _parse_single_condition(self, condition: str, logical_op: Optional[str] = None) -> Tuple[str, Dict[str, Any]]:
"""
Parse a single condition clause into safe SQL with parameters.
This method handles basic patterns like:
- devName = 'value' (with optional AND/OR prefix)
- devComments LIKE '%value%'
- eve_EventType IN ('type1', 'type2')
Args:
condition: Single condition string to parse
logical_op: Optional logical operator (AND/OR) to prepend
Returns:
Tuple of (safe_sql_snippet, parameters_dict)
"""
condition = condition.strip()
# Handle empty conditions
if not condition:
return "", {}
# Simple pattern matching for common conditions
# Pattern 1: AND/OR column operator value (supporting Unicode in quoted strings)
pattern1 = r'^\s*(AND|OR)?\s+(\w+)\s+(=|!=|<>|<|>|<=|>=|LIKE|NOT\s+LIKE)\s+\'([^\']*)\'\s*$'
# Pattern 1: [AND/OR] column operator value (supporting Unicode in quoted strings)
pattern1 = r'^\s*(\w+)\s+(=|!=|<>|<|>|<=|>=|LIKE|NOT\s+LIKE)\s+\'([^\']*)\'\s*$'
match1 = re.match(pattern1, condition, re.IGNORECASE | re.UNICODE)
if match1:
logical_op, column, operator, value = match1.groups()
column, operator, value = match1.groups()
return self._build_simple_condition(logical_op, column, operator, value)
# Pattern 2: AND/OR column IN ('val1', 'val2', ...)
pattern2 = r'^\s*(AND|OR)?\s+(\w+)\s+(IN|NOT\s+IN)\s+\(([^)]+)\)\s*$'
# Pattern 2: [AND/OR] column IN ('val1', 'val2', ...)
pattern2 = r'^\s*(\w+)\s+(IN|NOT\s+IN)\s+\(([^)]+)\)\s*$'
match2 = re.match(pattern2, condition, re.IGNORECASE)
if match2:
logical_op, column, operator, values_str = match2.groups()
column, operator, values_str = match2.groups()
return self._build_in_condition(logical_op, column, operator, values_str)
# Pattern 3: AND/OR column IS NULL/IS NOT NULL
pattern3 = r'^\s*(AND|OR)?\s+(\w+)\s+(IS\s+NULL|IS\s+NOT\s+NULL)\s*$'
# Pattern 3: [AND/OR] column IS NULL/IS NOT NULL
pattern3 = r'^\s*(\w+)\s+(IS\s+NULL|IS\s+NOT\s+NULL)\s*$'
match3 = re.match(pattern3, condition, re.IGNORECASE)
if match3:
logical_op, column, operator = match3.groups()
column, operator = match3.groups()
return self._build_null_condition(logical_op, column, operator)
# If no patterns match, reject the condition for security

View File

@@ -157,7 +157,7 @@ def importConfigs (pm, db, all_plugins):
# ----------------------------------------
# ccd(key, default, config_dir, name, inputtype, options, group, events=[], desc = "", regex = "", setJsonMetadata = {}, overrideTemplate = {})
conf.LOADED_PLUGINS = ccd('LOADED_PLUGINS', [] , c_d, 'Loaded plugins', '{"dataType":"array","elements":[{"elementType":"select","elementOptions":[{"multiple":"true","ordeable":"true"}],"transformers":[]},{"elementType":"button","elementOptions":[{"sourceSuffixes":[]},{"separator":""},{"cssClasses":"col-xs-12"},{"onClick":"selectChange(this)"},{"getStringKey":"Gen_Change"}],"transformers":[]}]}', '[]', 'General')
conf.LOADED_PLUGINS = ccd('LOADED_PLUGINS', [] , c_d, 'Loaded plugins', '{"dataType":"array","elements":[{"elementType":"select","elementHasInputValue":1,"elementOptions":[{"multiple":"true","ordeable":"true"}],"transformers":[]},{"elementType":"button","elementOptions":[{"sourceSuffixes":[]},{"separator":""},{"cssClasses":"col-xs-12"},{"onClick":"selectChange(this)"},{"getStringKey":"Gen_Change"}],"transformers":[]}]}', '[]', 'General')
conf.DISCOVER_PLUGINS = ccd('DISCOVER_PLUGINS', True , c_d, 'Discover plugins', """{"dataType": "boolean","elements": [{"elementType": "input","elementOptions": [{ "type": "checkbox" }],"transformers": []}]}""", '[]', 'General')
conf.SCAN_SUBNETS = ccd('SCAN_SUBNETS', ['192.168.1.0/24 --interface=eth1', '192.168.1.0/24 --interface=eth0'] , c_d, 'Subnets to scan', '''{"dataType": "array","elements": [{"elementType": "input","elementOptions": [{"placeholder": "192.168.1.0/24 --interface=eth1"},{"suffix": "_in"},{"cssClasses": "col-sm-10"},{"prefillValue": "null"}],"transformers": []},{"elementType": "button","elementOptions": [{"sourceSuffixes": ["_in"]},{"separator": ""},{"cssClasses": "col-xs-12"},{"onClick": "addList(this, false)"},{"getStringKey": "Gen_Add"}],"transformers": []},{"elementType": "select","elementHasInputValue": 1,"elementOptions": [{"multiple": "true"},{"readonly": "true"},{"editable": "true"}],"transformers": []},{"elementType": "button","elementOptions": [{"sourceSuffixes": []},{"separator": ""},{"cssClasses": "col-xs-6"},{"onClick": "removeAllOptions(this)"},{"getStringKey": "Gen_Remove_All"}],"transformers": []},{"elementType": "button","elementOptions": [{"sourceSuffixes": []},{"separator": ""},{"cssClasses": "col-xs-6"},{"onClick": "removeFromList(this)"},{"getStringKey": "Gen_Remove_Last"}],"transformers": []}]}''', '[]', 'General')
conf.LOG_LEVEL = ccd('LOG_LEVEL', 'verbose' , c_d, 'Log verboseness', '{"dataType":"string", "elements": [{"elementType" : "select", "elementOptions" : [] ,"transformers": []}]}', "['none', 'minimal', 'verbose', 'debug', 'trace']", 'General')
@@ -176,7 +176,7 @@ def importConfigs (pm, db, all_plugins):
conf.API_TOKEN = ccd('API_TOKEN', 't_' + generate_random_string(20) , c_d, 'API token', '{"dataType": "string","elements": [{"elementType": "input","elementHasInputValue": 1,"elementOptions": [{ "cssClasses": "col-xs-12" }],"transformers": []},{"elementType": "button","elementOptions": [{ "getStringKey": "Gen_Generate" },{ "customParams": "API_TOKEN" },{ "onClick": "generateApiToken(this, 20)" },{ "cssClasses": "col-xs-12" }],"transformers": []}]}', '[]', 'General')
# UI
conf.UI_LANG = ccd('UI_LANG', 'English' , c_d, 'Language Interface', '{"dataType":"string", "elements": [{"elementType" : "select", "elementOptions" : [] ,"transformers": []}]}', "['English', 'German', 'Spanish', 'French', 'Norwegian', 'Russian', 'Italian (it_it)', 'Portuguese (pt_br)', 'Portuguese (pt_pt)', 'Polish (pl_pl)', 'Chinese (zh_cn)', 'Turkish (tr_tr)', 'Czech (cs_cz)', 'Arabic (ar_ar)', 'Catalan (ca_ca)', 'Ukrainian (uk_ua)' ]", 'UI')
conf.UI_LANG = ccd('UI_LANG', 'English (en_us)' , c_d, 'Language Interface', '{"dataType":"string", "elements": [{"elementType" : "select", "elementOptions" : [] ,"transformers": []}]}', "['English (en_us)', 'Arabic (ar_ar)', 'Catalan (ca_ca)', 'Czech (cs_cz)', 'German (de_de)', 'Spanish (es_es)', 'Farsi (fa_fa)', 'French (fr_fr)', 'Italian (it_it)', 'Norwegian (nb_no)', 'Polish (pl_pl)', 'Portuguese (pt_br)', 'Portuguese (pt_pt)', 'Russian (ru_ru)', 'Turkish (tr_tr)', 'Ukrainian (uk_ua)', 'Chinese (zh_cn)']", 'UI')
# Init timezone in case it changed and handle invalid values
try:
@@ -368,19 +368,8 @@ def importConfigs (pm, db, all_plugins):
# mylog('verbose', [f"[Config] pref {plugin["unique_prefix"]} run_val {run_val} run_sch {run_sch} "])
if run_val == 'schedule':
newSchedule = None
try:
newSchedule = Cron(run_sch).schedule(start_date=datetime.datetime.now(conf.tz))
if (newSchedule is not None):
conf.mySchedules.append(schedule_class(plugin["unique_prefix"], newSchedule, newSchedule.next(), False))
else:
raise(ValueError("Invalid schedule"))
except ValueError as e:
mylog('none', [f"[Config] [ERROR] Invalid schedule '{run_sch}' for plugin '{plugin['unique_prefix']}'. Error: {e}."])
except Exception as e:
mylog('none', [f"[Config] [ERROR] Could not set schedule '{run_sch}' for plugin '{plugin['unique_prefix']}'. Error: {e}."])
newSchedule = Cron(run_sch).schedule(start_date=datetime.datetime.now(conf.tz))
conf.mySchedules.append(schedule_class(plugin["unique_prefix"], newSchedule, newSchedule.next(), False))
# mylog('verbose', [f"[Config] conf.mySchedules {conf.mySchedules}"])

View File

@@ -84,13 +84,16 @@ class Logger:
root_logger.setLevel(custom_to_logging_levels.get(currentLevel, logging.NOTSET))
def mylog(self, requestedDebugLevel, *args):
self.reqLvl = self._to_num(requestedDebugLevel)
if self.reqLvl is not None and self.reqLvl <= self.setLvl:
self.setLvl = self._to_num(currentLevel)
if self.isAbove(requestedDebugLevel):
file_print(*args)
def isAbove(self, requestedDebugLevel):
reqLvl = self._to_num(requestedDebugLevel)
return reqLvl is not None and self.setLvl >= reqLvl
return reqLvl is not None and self.setLvl is not None and self.setLvl >= reqLvl
#-------------------------------------------------------------------------------
# Dedicated thread for writing logs
@@ -122,6 +125,8 @@ def start_log_writer_thread():
def file_print(*args):
result = timeNowTZ().strftime('%H:%M:%S') + ' '
for arg in args:
if isinstance(arg, list):
arg = ' '.join(str(a) for a in arg) # so that new lines are handled correctly when passing a list
result += str(arg)
logging.log(custom_to_logging_levels.get(currentLevel, logging.NOTSET), result)

View File

@@ -9,6 +9,7 @@ import subprocess
import requests
from yattag import indent
from json2table import convert
from flask import jsonify
# Register NetAlertX directories
INSTALL_PATH="/app"
@@ -25,7 +26,18 @@ NOTIFICATION_API_FILE = apiPath + 'user_notifications.json'
# Show Frontend User Notification
def write_notification(content, level='alert', timestamp=None):
"""
Create and append a new user notification entry to the notifications file.
Args:
content (str): The message content to display to the user.
level (str, optional): Notification severity (e.g., 'info', 'alert', 'warning').
Defaults to 'alert'.
timestamp (datetime, optional): Custom timestamp; if None, uses current time.
Returns:
None
"""
if timestamp is None:
timestamp = timeNowTZ()
@@ -67,7 +79,15 @@ def write_notification(content, level='alert', timestamp=None):
# Trim notifications
def remove_old(keepNumberOfEntries):
"""
Trim the notifications file, keeping only the most recent N entries.
Args:
keepNumberOfEntries (int): Number of latest notifications to retain.
Returns:
None
"""
# Check if file exists
if not os.path.exists(NOTIFICATION_API_FILE):
mylog('info', '[Notification] No notifications file to clean.')
@@ -106,3 +126,141 @@ def remove_old(keepNumberOfEntries):
mylog('verbose', f'[Notification] Trimmed notifications to latest {keepNumberOfEntries}')
except Exception as e:
mylog('none', f'Error writing trimmed notifications file: {e}')
def mark_all_notifications_read():
"""
Mark all existing notifications as read.
Returns:
dict: JSON-compatible dictionary containing:
{
"success": bool,
"error": str (optional)
}
"""
if not os.path.exists(NOTIFICATION_API_FILE):
return {"success": True}
try:
with open(NOTIFICATION_API_FILE, "r") as f:
notifications = json.load(f)
except Exception as e:
mylog("none", f"[Notification] Failed to read notifications: {e}")
return {"success": False, "error": str(e)}
for n in notifications:
n["read"] = 1
try:
with open(NOTIFICATION_API_FILE, "w") as f:
json.dump(notifications, f, indent=4)
except Exception as e:
mylog("none", f"[Notification] Failed to write notifications: {e}")
return {"success": False, "error": str(e)}
mylog("debug", "[Notification] All notifications marked as read.")
return {"success": True}
def delete_notifications():
"""
Delete all notifications from the JSON file.
Returns:
A JSON response with {"success": True}.
"""
with open(NOTIFICATION_API_FILE, "w") as f:
json.dump([], f, indent=4)
mylog("debug", "[Notification] All notifications deleted.")
return jsonify({"success": True})
def get_unread_notifications():
"""
Retrieve all unread notifications from the JSON file.
Returns:
A JSON array of unread notification objects.
"""
if not os.path.exists(NOTIFICATION_API_FILE):
return jsonify([])
with open(NOTIFICATION_API_FILE, "r") as f:
notifications = json.load(f)
unread = [n for n in notifications if n.get("read", 0) == 0]
return jsonify(unread)
def mark_notification_as_read(guid=None, max_attempts=3):
"""
Mark a notification as read based on GUID.
If guid is None, mark all notifications as read.
Args:
guid (str, optional): The GUID of the notification to mark. Defaults to None.
max_attempts (int, optional): Number of attempts to read/write file. Defaults to 3.
Returns:
dict: {"success": True} on success, {"success": False, "error": "..."} on failure
"""
attempts = 0
while attempts < max_attempts:
try:
if os.path.exists(NOTIFICATION_API_FILE) and os.access(NOTIFICATION_API_FILE, os.R_OK | os.W_OK):
with open(NOTIFICATION_API_FILE, "r") as f:
notifications = json.load(f)
if notifications is not None:
for notification in notifications:
if guid is None or notification.get("guid") == guid:
notification["read"] = 1
with open(NOTIFICATION_API_FILE, "w") as f:
json.dump(notifications, f, indent=4)
return {"success": True}
except Exception as e:
mylog("none", f"[Notification] Attempt {attempts+1} failed: {e}")
attempts += 1
time.sleep(0.5) # Sleep 0.5 seconds before retrying
error_msg = f"Failed to read/write notification file after {max_attempts} attempts."
mylog("none", f"[Notification] {error_msg}")
return {"success": False, "error": error_msg}
def delete_notification(guid):
"""
Delete a notification from the notifications file based on its GUID.
Args:
guid (str): The GUID of the notification to delete.
Returns:
dict: {"success": True} on success, {"success": False, "error": "..."} on failure
"""
if not guid:
return {"success": False, "error": "GUID is required"}
if not os.path.exists(NOTIFICATION_API_FILE):
return {"success": True} # Nothing to delete
try:
with open(NOTIFICATION_API_FILE, "r") as f:
notifications = json.load(f)
# Filter out the notification with the specified GUID
filtered_notifications = [n for n in notifications if n.get("guid") != guid]
# Write the updated notifications back
with open(NOTIFICATION_API_FILE, "w") as f:
json.dump(filtered_notifications, f, indent=4)
return {"success": True}
except Exception as e:
mylog("none", f"[Notification] Failed to delete notification {guid}: {e}")
return {"success": False, "error": str(e)}

326
test/test_compound_conditions.py Executable file
View File

@@ -0,0 +1,326 @@
"""
Unit tests for SafeConditionBuilder compound condition parsing.
Tests the fix for Issue #1210 - compound conditions with multiple AND/OR clauses.
"""
import sys
import unittest
from unittest.mock import MagicMock
# Mock the logger module before importing SafeConditionBuilder
sys.modules['logger'] = MagicMock()
# Point imports at the hotfix copy of the sql_safe_builder module
sys.path.insert(0, '/tmp/netalertx_hotfix/server/db')
from sql_safe_builder import SafeConditionBuilder
class TestCompoundConditions(unittest.TestCase):
"""Test compound condition parsing functionality."""
def setUp(self):
"""Create a fresh builder instance for each test."""
self.builder = SafeConditionBuilder()
def test_user_failing_filter_six_and_clauses(self):
"""Test the exact user-reported failing filter from Issue #1210."""
condition = (
"AND devLastIP NOT LIKE '192.168.50.%' "
"AND devLastIP NOT LIKE '192.168.60.%' "
"AND devLastIP NOT LIKE '192.168.70.2' "
"AND devLastIP NOT LIKE '192.168.70.5' "
"AND devLastIP NOT LIKE '192.168.70.3' "
"AND devLastIP NOT LIKE '192.168.70.4'"
)
sql, params = self.builder.build_safe_condition(condition)
# Should successfully parse
self.assertIsNotNone(sql)
self.assertIsNotNone(params)
# Should have 6 parameters (one per clause)
self.assertEqual(len(params), 6)
# Should contain all 6 AND operators
self.assertEqual(sql.count('AND'), 6)
# Should contain all 6 NOT LIKE operators
self.assertEqual(sql.count('NOT LIKE'), 6)
# Should have 6 parameter placeholders
self.assertEqual(sql.count(':param_'), 6)
# Verify all IP patterns are in parameters
param_values = list(params.values())
self.assertIn('192.168.50.%', param_values)
self.assertIn('192.168.60.%', param_values)
self.assertIn('192.168.70.2', param_values)
self.assertIn('192.168.70.5', param_values)
self.assertIn('192.168.70.3', param_values)
self.assertIn('192.168.70.4', param_values)
def test_multiple_and_clauses_simple(self):
"""Test multiple AND clauses with simple equality operators."""
condition = "AND devName = 'Device1' AND devVendor = 'Apple' AND devFavorite = '1'"
sql, params = self.builder.build_safe_condition(condition)
# Should have 3 parameters
self.assertEqual(len(params), 3)
# Should have 3 AND operators
self.assertEqual(sql.count('AND'), 3)
# Verify all values are parameterized
param_values = list(params.values())
self.assertIn('Device1', param_values)
self.assertIn('Apple', param_values)
self.assertIn('1', param_values)
def test_multiple_or_clauses(self):
"""Test multiple OR clauses."""
condition = "OR devName = 'Device1' OR devName = 'Device2' OR devName = 'Device3'"
sql, params = self.builder.build_safe_condition(condition)
# Should have 3 parameters
self.assertEqual(len(params), 3)
# Should have 3 OR operators
self.assertEqual(sql.count('OR'), 3)
# Verify all device names are parameterized
param_values = list(params.values())
self.assertIn('Device1', param_values)
self.assertIn('Device2', param_values)
self.assertIn('Device3', param_values)
def test_mixed_and_or_clauses(self):
"""Test mixed AND/OR logical operators."""
condition = "AND devName = 'Device1' OR devName = 'Device2' AND devFavorite = '1'"
sql, params = self.builder.build_safe_condition(condition)
# Should have 3 parameters
self.assertEqual(len(params), 3)
# Should preserve the logical operator order
self.assertIn('AND', sql)
self.assertIn('OR', sql)
# Verify all values are parameterized
param_values = list(params.values())
self.assertIn('Device1', param_values)
self.assertIn('Device2', param_values)
self.assertIn('1', param_values)
def test_single_condition_backward_compatibility(self):
"""Test that single conditions still work (backward compatibility)."""
condition = "AND devName = 'TestDevice'"
sql, params = self.builder.build_safe_condition(condition)
# Should have 1 parameter
self.assertEqual(len(params), 1)
# Should match expected format
self.assertIn('AND devName = :param_', sql)
# Parameter should contain the value
self.assertIn('TestDevice', params.values())
def test_single_condition_like_operator(self):
"""Test single LIKE condition for backward compatibility."""
condition = "AND devComments LIKE '%important%'"
sql, params = self.builder.build_safe_condition(condition)
# Should have 1 parameter
self.assertEqual(len(params), 1)
# Should contain LIKE operator
self.assertIn('LIKE', sql)
# Parameter should contain the pattern
self.assertIn('%important%', params.values())
def test_compound_with_like_patterns(self):
"""Test compound conditions with LIKE patterns."""
condition = "AND devLastIP LIKE '192.168.%' AND devVendor LIKE '%Apple%'"
sql, params = self.builder.build_safe_condition(condition)
# Should have 2 parameters
self.assertEqual(len(params), 2)
# Should have 2 LIKE operators
self.assertEqual(sql.count('LIKE'), 2)
# Verify patterns are parameterized
param_values = list(params.values())
self.assertIn('192.168.%', param_values)
self.assertIn('%Apple%', param_values)
def test_compound_with_inequality_operators(self):
"""Test compound conditions with various inequality operators."""
condition = "AND eve_DateTime > '2024-01-01' AND eve_DateTime < '2024-12-31'"
sql, params = self.builder.build_safe_condition(condition)
# Should have 2 parameters
self.assertEqual(len(params), 2)
# Should have both operators
self.assertIn('>', sql)
self.assertIn('<', sql)
# Verify dates are parameterized
param_values = list(params.values())
self.assertIn('2024-01-01', param_values)
self.assertIn('2024-12-31', param_values)
def test_empty_condition(self):
"""Test empty condition string."""
condition = ""
sql, params = self.builder.build_safe_condition(condition)
# Should return empty results
self.assertEqual(sql, "")
self.assertEqual(params, {})
def test_whitespace_only_condition(self):
"""Test condition with only whitespace."""
condition = " \t\n "
sql, params = self.builder.build_safe_condition(condition)
# Should return empty results
self.assertEqual(sql, "")
self.assertEqual(params, {})
def test_invalid_column_name_rejected(self):
"""Test that invalid column names are rejected."""
condition = "AND malicious_column = 'value'"
with self.assertRaises(ValueError):
self.builder.build_safe_condition(condition)
def test_invalid_operator_rejected(self):
"""Test that invalid operators are rejected."""
condition = "AND devName EXECUTE 'DROP TABLE'"
with self.assertRaises(ValueError):
self.builder.build_safe_condition(condition)
def test_sql_injection_attempt_blocked(self):
"""Test that SQL injection attempts are blocked."""
condition = "AND devName = 'value'; DROP TABLE devices; --"
# Should either reject or sanitize the dangerous input
# The semicolon and comment should not appear in the final SQL
try:
sql, params = self.builder.build_safe_condition(condition)
# If it doesn't raise an error, it should sanitize the input
self.assertNotIn('DROP', sql.upper())
self.assertNotIn(';', sql)
except ValueError:
# Rejection is also acceptable
pass
def test_quoted_string_with_spaces(self):
"""Test that quoted strings with spaces are handled correctly."""
condition = "AND devName = 'My Device Name' AND devComments = 'Has spaces here'"
sql, params = self.builder.build_safe_condition(condition)
# Should have 2 parameters
self.assertEqual(len(params), 2)
# Verify values with spaces are preserved
param_values = list(params.values())
self.assertIn('My Device Name', param_values)
self.assertIn('Has spaces here', param_values)
def test_compound_condition_with_not_equal(self):
"""Test compound conditions with != operator."""
condition = "AND devName != 'Device1' AND devVendor != 'Unknown'"
sql, params = self.builder.build_safe_condition(condition)
# Should have 2 parameters
self.assertEqual(len(params), 2)
# Should have != operators (or converted to <>)
self.assertTrue('!=' in sql or '<>' in sql)
# Verify values are parameterized
param_values = list(params.values())
self.assertIn('Device1', param_values)
self.assertIn('Unknown', param_values)
def test_very_long_compound_condition(self):
"""Test handling of very long compound conditions (10+ clauses)."""
clauses = []
for i in range(10):
clauses.append(f"AND devName != 'Device{i}'")
condition = " ".join(clauses)
sql, params = self.builder.build_safe_condition(condition)
# Should have 10 parameters
self.assertEqual(len(params), 10)
# Should have 10 AND operators
self.assertEqual(sql.count('AND'), 10)
# Verify all device names are parameterized
param_values = list(params.values())
for i in range(10):
self.assertIn(f'Device{i}', param_values)
class TestParameterGeneration(unittest.TestCase):
"""Test parameter generation and naming."""
def setUp(self):
"""Create a fresh builder instance for each test."""
self.builder = SafeConditionBuilder()
def test_parameters_have_unique_names(self):
"""Test that all parameters get unique names."""
condition = "AND devName = 'A' AND devName = 'B' AND devName = 'C'"
sql, params = self.builder.build_safe_condition(condition)
# All parameter names should be unique
param_names = list(params.keys())
self.assertEqual(len(param_names), len(set(param_names)))
def test_parameter_values_match_condition(self):
"""Test that parameter values correctly match the condition values."""
condition = "AND devLastIP NOT LIKE '192.168.1.%' AND devLastIP NOT LIKE '10.0.0.%'"
sql, params = self.builder.build_safe_condition(condition)
# Should have exactly the values from the condition
param_values = sorted(params.values())
expected_values = sorted(['192.168.1.%', '10.0.0.%'])
self.assertEqual(param_values, expected_values)
def test_parameters_referenced_in_sql(self):
"""Test that all parameters are actually referenced in the SQL."""
condition = "AND devName = 'Device1' AND devVendor = 'Apple'"
sql, params = self.builder.build_safe_condition(condition)
# Every parameter should appear in the SQL
for param_name in params.keys():
self.assertIn(f':{param_name}', sql)
if __name__ == '__main__':
unittest.main()
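These tests pin down the contract: `build_safe_condition` returns a SQL fragment whose literal values are replaced by `:param_N` placeholders, plus a matching dict, which is exactly sqlite3's named-parameter style. A hedged sketch of how a caller could consume the pair, assuming `sql_safe_builder` is importable (the `Devices` schema here is illustrative):

```python
import sqlite3
from sql_safe_builder import SafeConditionBuilder  # imported the same way as the tests above

builder = SafeConditionBuilder()
fragment, params = builder.build_safe_condition(
    "AND devLastIP NOT LIKE '192.168.50.%'"
)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Devices (devName TEXT, devLastIP TEXT)")
# The fragment starts with AND/OR, so it appends cleanly after 'WHERE 1=1';
# the params dict binds through sqlite3's named-placeholder support.
rows = conn.execute(
    f"SELECT * FROM Devices WHERE 1=1 {fragment}", params
).fetchall()
```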

View File

@@ -0,0 +1,111 @@
# -----------------------------
# In-app notifications tests with cleanup
# -----------------------------
import json
import random
import string
import uuid
import pytest
import os
import sys
# Define the installation path and extend the system path for plugin imports
INSTALL_PATH = "/app"
sys.path.extend([f"{INSTALL_PATH}/front/plugins", f"{INSTALL_PATH}/server"])
from api_server.api_server_start import app
from messaging.in_app import NOTIFICATION_API_FILE # Import the path to notifications file
from helper import get_setting_value
@pytest.fixture(scope="session")
def api_token():
return get_setting_value("API_TOKEN")
@pytest.fixture
def client():
with app.test_client() as client:
yield client
def auth_headers(token):
return {"Authorization": f"Bearer {token}"}
@pytest.fixture
def random_content():
return "Test Notification " + "".join(random.choices(string.ascii_letters + string.digits, k=6))
@pytest.fixture
def notification_guid(client, api_token, random_content):
# Write a notification and return its GUID
resp = client.post(
"/messaging/in-app/write",
json={"content": random_content, "level": "alert"},
headers=auth_headers(api_token)
)
assert resp.status_code == 200
# Fetch the unread notifications and get GUID
resp = client.get("/messaging/in-app/unread", headers=auth_headers(api_token))
data = resp.json
guid = next((n["guid"] for n in data if n["content"] == random_content), None)
assert guid is not None
return guid
@pytest.fixture(autouse=True)
def cleanup_notifications():
# Runs before and after each test
# Backup original file if exists
backup = None
if os.path.exists(NOTIFICATION_API_FILE):
with open(NOTIFICATION_API_FILE, "r") as f:
backup = f.read()
yield # run the test
# Cleanup after test
with open(NOTIFICATION_API_FILE, "w") as f:
f.write("[]")
# Restore backup if needed
if backup:
with open(NOTIFICATION_API_FILE, "w") as f:
f.write(backup)
# -----------------------------
def test_write_notification(client, api_token, random_content):
resp = client.post(
"/messaging/in-app/write",
json={"content": random_content, "level": "alert"},
headers=auth_headers(api_token)
)
assert resp.status_code == 200
assert resp.json.get("success") is True
def test_get_unread_notifications(client, api_token, random_content):
client.post("/messaging/in-app/write", json={"content": random_content}, headers=auth_headers(api_token))
resp = client.get("/messaging/in-app/unread", headers=auth_headers(api_token))
assert resp.status_code == 200
notifications = resp.json
assert any(n["content"] == random_content for n in notifications)
def test_mark_all_notifications_read(client, api_token, random_content):
client.post("/messaging/in-app/write", json={"content": random_content}, headers=auth_headers(api_token))
resp = client.post("/messaging/in-app/read/all", headers=auth_headers(api_token))
assert resp.status_code == 200
assert resp.json.get("success") is True
def test_mark_single_notification_read(client, api_token, notification_guid):
resp = client.post(f"/messaging/in-app/read/{notification_guid}", headers=auth_headers(api_token))
assert resp.status_code == 200
assert resp.json.get("success") is True
def test_delete_single_notification(client, api_token, notification_guid):
resp = client.delete(f"/messaging/in-app/delete/{notification_guid}", headers=auth_headers(api_token))
assert resp.status_code == 200
assert resp.json.get("success") is True
def test_delete_all_notifications(client, api_token, random_content):
# Add a notification first
client.post("/messaging/in-app/write", json={"content": random_content}, headers=auth_headers(api_token))
resp = client.delete("/messaging/in-app/delete", headers=auth_headers(api_token))
assert resp.status_code == 200
assert resp.json.get("success") is True