Merge branch 'chore_timestamps' of https://github.com/netalertx/NetAlertX into chore_timestamps

**.github/skills/settings-management/SKILL.md**
````diff
@@ -37,11 +37,3 @@ Define in plugin's `config.json` manifest under the settings section.
 ## Environment Override
 
 Use `APP_CONF_OVERRIDE` environment variable for settings that must be set before startup.
-
-## Backend API URL
-
-For Codespaces, set `BACKEND_API_URL` to your Codespace URL:
-
-```
-BACKEND_API_URL=https://something-20212.app.github.dev/
-```
````
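As an illustration of the override mechanism above, here is a hedged sketch of passing a setting at container startup via compose. The variable name comes from the text above; the JSON payload and its keys are illustrative only — check the settings documentation for the exact format your version expects.

```yaml
services:
  netalertx:
    environment:
      # Applied before the app reads its config; payload shown is illustrative
      - APP_CONF_OVERRIDE={"SCAN_SUBNETS":["192.168.1.0/24 --interface=eth0"]}
```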
```diff
@@ -219,6 +219,13 @@ CREATE INDEX IDX_dev_Favorite ON Devices (devFavorite);
 CREATE INDEX IDX_dev_LastIP ON Devices (devLastIP);
 CREATE INDEX IDX_dev_NewDevice ON Devices (devIsNew);
 CREATE INDEX IDX_dev_Archived ON Devices (devIsArchived);
+CREATE UNIQUE INDEX IF NOT EXISTS idx_events_unique
+    ON Events (
+        eve_MAC,
+        eve_IP,
+        eve_EventType,
+        eve_DateTime
+    );
 CREATE VIEW Events_Devices AS
 SELECT *
 FROM Events
```
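The added unique index effectively deduplicates Events rows: an `INSERT OR IGNORE` against it silently drops exact repeats. A minimal sketch of that behavior (the table is reduced to just the indexed columns here; the real NetAlertX Events table has more fields):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Events (eve_MAC TEXT, eve_IP TEXT, eve_EventType TEXT, eve_DateTime TEXT)"
)
con.execute(
    "CREATE UNIQUE INDEX IF NOT EXISTS idx_events_unique "
    "ON Events (eve_MAC, eve_IP, eve_EventType, eve_DateTime)"
)

row = ("aa:bb:cc:dd:ee:ff", "192.168.1.10", "Connected", "2025-01-01 12:00:00")
# The second insert violates the unique index and is silently ignored
con.execute("INSERT OR IGNORE INTO Events VALUES (?, ?, ?, ?)", row)
con.execute("INSERT OR IGNORE INTO Events VALUES (?, ?, ?, ?)", row)

count = con.execute("SELECT COUNT(*) FROM Events").fetchone()[0]
print(count)  # 1
```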
**docs/ADVISORY_EYES_ON_GLASS.md** (new file)
### Build an MSP Wallboard for Network Monitoring

For Managed Service Providers (MSPs) and Network Operations Centers (NOCs), "Eyes on Glass" monitoring requires a UI that is both self-healing (auto-refreshing) and focused only on critical data. By leveraging the **UI Settings Plugin**, you can transform NetAlertX from a management tool into a dedicated live monitor.



---

### 1. Configure Auto-Refresh for Live Monitoring

Static dashboards are the enemy of real-time response. NetAlertX allows you to force the UI to pull fresh data without manual page reloads.

* **Setting:** Locate the `UI_REFRESH` (or similar "Auto-refresh UI") setting within the **UI Settings plugin**.
* **Optimal Interval:** Set this between **60 and 120 seconds**.
* *Note:* Refreshing too frequently (e.g., <30s) on large networks can lead to high browser and server CPU usage.



### 2. Streamlining the Dashboard (MSP Mode)

An MSP's focus is on what is *broken*, not what is working. Hide the noise to increase reaction speed.

* **Hide Unnecessary Blocks:** Under UI Settings, disable dashboard blocks that don't provide immediate utility, such as **Online presence** or **Tiles**.
* **Hide Virtual Connections:** You can specify which relationships should be hidden from the main view, removing non-essential virtual devices from your views.
* **Browser Full-Screen:** Use the built-in "Full Screen" toggle in the top bar to remove browser chrome (URL bars/tabs) for a cleaner "Wallboard" look.
### 3. Creating Custom NOC Views

Use the UI Filters in tandem with UI Settings to create custom views.



| Feature | NOC/MSP Application |
| --- | --- |
| **Site-Specific Nodes** | Filter the view by a specific "Sync Node" or "Location" filter to monitor a single client site. |
| **Filter by Criticality** | Filter devices where `Group == "Infrastructure"` or `"Server"` (depending on your predefined values). |
| **Predefined "Down" View** | Bookmark the URL with the `/devices.php#down` path to ensure the dashboard always loads into an "Alert Only" mode. |
### 4. Browser & Cache Stability

Because the UI is a web application, long-running sessions can occasionally experience cache drift.

* **Cache Refresh:** If you notice the "Show # Entries" setting resetting or icons failing to load after days of uptime, use the **Reload** icon in the application header (not the browser refresh) to clear the internal app cache.
* **Dedicated Hardware:** For 24/7 monitoring, use a dedicated thin client or Raspberry Pi running in "Kiosk Mode" to prevent OS-level popups from obscuring the dashboard.

> [!TIP]
> [NetAlertX - Detailed Dashboard Guide](https://www.youtube.com/watch?v=umh1c_40HW8)
> This video provides a visual walkthrough of the NetAlertX dashboard features, including how to map and visualize devices, which is crucial for setting up a clear "Eyes on Glass" monitoring environment.
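For the dedicated-hardware tip above, the usual approach is launching a Chromium-based browser in kiosk mode pointed at the down-devices view. The command name, flags, and URL below are illustrative and vary by OS and browser:

```
chromium-browser --kiosk --noerrdialogs --incognito "http://<netalertx-host>:20211/devices.php#down"
```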
### Summary Checklist

* [ ] **Automate Refresh:** Set `UI_REFRESH` to **60-120s** in UI Settings to ensure the dashboard stays current without manual intervention.
* [ ] **Filter for Criticality:** Bookmark the **`/devices.php#down`** view to instantly focus on offline assets rather than the entire inventory.
* [ ] **Remove UI Noise:** Use UI Settings to hide non-essential dashboard blocks (e.g., **Tiles**) and virtual-connection devices to maximize screen real estate for alerts.
* [ ] **Segment by Site:** Use **Location** or **Sync Node** filters to create dedicated views for specific client networks or physical branches.
* [ ] **Ensure Stability:** Run on a dedicated "Kiosk" browser and use the internal **Reload icon** occasionally to maintain a clean application cache.
**docs/ADVISORY_MULTI_NETWORK.md** (new file)
## ADVISORY: Best Practices for Monitoring Multiple Networks with NetAlertX

### 1. Define Monitoring Scope & Architecture

Effective multi-network monitoring starts with understanding how NetAlertX "sees" your traffic.

* **A. Understand Network Accessibility:** Local ARP-based scanning (**ARPSCAN**) only discovers devices on directly accessible subnets due to Layer 2 limitations. It cannot traverse VPNs or routed borders without specific configuration.
* **B. Plan Subnet & Scan Interfaces:** Explicitly configure each accessible segment in `SCAN_SUBNETS` with the corresponding interfaces.
* **C. Remote & Inaccessible Networks:** For networks unreachable via ARP, use these strategies:
  * **Alternate Plugins:** Supplement discovery with [SNMPDSC](SNMPDSC) or [DHCP lease imports](https://docs.netalertx.com/PLUGINS/?h=DHCPLSS#available-plugins).
  * **Centralized Multi-Tenant Management using Sync Nodes:** Run secondary NetAlertX instances on isolated networks and aggregate data using the **SYNC plugin**.
  * **Manual Entry:** For static assets where only ICMP (ping) status is needed.

> [!TIP]
> Explore the [remote networks](./REMOTE_NETWORKS.md) documentation for more details on how to set up the approaches mentioned above.
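For example, a `SCAN_SUBNETS` value covering two directly reachable segments might look like this in `app.conf` (the subnets and interface names are illustrative; confirm your interface names with `ip a`):

```
SCAN_SUBNETS=['192.168.1.0/24 --interface=eth0','10.10.20.0/24 --interface=eth1']
```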
---

### 2. Automating IT Asset Inventory with Workflows

[Workflows](./WORKFLOWS.md) are the "engine" of NetAlertX, reducing manual overhead as your device list grows.

* **A. Logical Ownership & VLAN Tagging:** Create a workflow triggered on **Device Creation** to:
  1. Inspect the IP/Subnet.
  2. Set `devVlan` or `devOwner` custom fields automatically.
* **B. Auto-Grouping:** Use conditional logic to categorize devices.
  * *Example:* If `devLastIP == 10.10.20.*`, then `Set devLocation = "BranchOffice"`.

```json
{
  "name": "Assign Location - BranchOffice",
  "trigger": {
    "object_type": "Devices",
    "event_type": "update"
  },
  "conditions": [
    {
      "logic": "AND",
      "conditions": [
        {
          "field": "devLastIP",
          "operator": "contains",
          "value": "10.10.20."
        }
      ]
    }
  ],
  "actions": [
    {
      "type": "update_field",
      "field": "devLocation",
      "value": "BranchOffice"
    }
  ]
}
```

* **C. Sync Node Tracking:** When using multiple instances, give every sync node a descriptive `SYNC_node_name` to distinguish between sites.

> [!TIP]
> Always test new workflows in a "Staging" instance. A misconfigured workflow can trigger thousands of unintended updates across your database.
---

### 3. Notification Strategy: Low Noise, High Signal

A multi-network environment can generate significant "alert fatigue." Use a layered filtering approach.

| Level | Strategy | Recommended Action |
| --- | --- | --- |
| **Device** | Silence Flapping | Use "Skip repeated notifications" for unstable IoT devices. |
| **Plugin** | Tune Watchers | Only enable `_WATCH` on reliable plugins (e.g., ICMP/SNMP). |
| **Global** | Filter Sections | Limit `NTFPRCS_INCLUDED_SECTIONS` to `new_devices` and `down_devices`. |

> [!TIP]
> **Ignore Rules:** Maintain strict **Ignored MAC** (`NEWDEV_ignored_MACs`) and **Ignored IP** (`NEWDEV_ignored_IPs`) lists for guest networks or broadcast scanners to keep your logs clean.
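Putting the table above into concrete terms, the global layer might look like this in `app.conf` (the values are illustrative examples, not recommended defaults):

```
NTFPRCS_INCLUDED_SECTIONS=['new_devices','down_devices']
NEWDEV_ignored_MACs=['aa:bb:cc:dd:ee:01','aa:bb:cc:dd:ee:02']
NEWDEV_ignored_IPs=['192.168.30.15']
```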
---

### 4. UI Filters for Multi-Network Clarity

Don't let a massive device list overwhelm you. Use the [Multi-edit features](./DEVICES_BULK_EDITING.md) to categorize devices and create focused views:

* **By Zone:** Filter by the "Location", "Site" or "Sync Node" you set up in Section 2.
* **By Criticality:** Use the custom device **Type** field to separate "Core Infrastructure" from "Ephemeral Clients."
* **By Status:** Use predefined views specifically for "Devices currently Down" to act as a Network Operations Center (NOC) dashboard.

> [!TIP]
> If you provide services as a Managed Service Provider (MSP), customize your default UI to be exactly how you need it by hiding parts of the UI you are not interested in, or by configuring an auto-refreshing screen monitoring your most important clients. See the [Eyes on glass](./ADVISORY_EYES_ON_GLASS.md) advisory for more details.
---

### 5. Operational Stability & Sync Health

* **Health Checks:** Regularly monitor the [Logs](https://docs.netalertx.com/LOGGING/?h=logs) to ensure remote nodes are reporting in.
* **Backups:** Use the **CSV Devices Backup** plugin. Standardize your workflow templates and [back up](./BACKUPS.md) your `/config` folders so that if a node fails, you can redeploy it with the same logic instantly.

### 6. Optimize Performance

As your environment grows, tuning the underlying engine is vital to maintain a snappy UI and reliable discovery cycles.

* **Plugin Scheduling:** Avoid "Scan Storms" by staggering plugin execution. Running intensive tasks like `NMAP` or `MASS_DNS` simultaneously can spike CPU and cause database locks.
* **Database Health:** Large-scale monitoring generates massive event logs. Use the **[DBCLNP (Database Cleanup)](https://docs.netalertx.com/PLUGINS/#dbclnp)** plugin to prune old records and keep the SQLite database performant.
* **Resource Management:** For high device counts, consider increasing the memory limit for the container and utilizing `tmpfs` for temporary files to reduce SD card/disk I/O bottlenecks.

> [!IMPORTANT]
> For a deep dive into hardware requirements, database vacuuming, and specific environment variables for high-load instances, refer to the full **[Performance Optimization Guide](https://docs.netalertx.com/PERFORMANCE/)**.
---

### Summary Checklist

* [ ] **Discovery:** Are all subnets explicitly defined?
* [ ] **Automation:** Do new devices get auto-assigned to a VLAN/Owner?
* [ ] **Noise Control:** Are transient "Down" alerts delayed via `NTFPRCS_alert_down_time`?
* [ ] **Remote Sites:** Is the SYNC plugin authenticated and heartbeat-active?
```diff
@@ -39,9 +39,24 @@ The **MAC** field and the **Last IP** field will then become editable.
 
 
-> [!NOTE]
->
-> You can couple this with the `ICMP` plugin which can be used to monitor the status of these devices, if they are actual devices reachable with the `ping` command. If not, you can use a loopback IP address so they appear online, such as `0.0.0.0` or `127.0.0.1`.
+## Dummy or Manually Created Device Status
+
+You can control a dummy device's status either via `ICMP` (automatic) or the `Force Status` field (manual). Choose based on whether the device is real and how important **data hygiene** is.
+
+### `ICMP` (Real Devices)
+
+Use a real IP that responds to ping so status is updated automatically.
+
+### `Force Status` (Best for Data Hygiene)
+
+Manually set the status when the device is not reachable or is purely logical. This keeps your data clean and avoids fake IPs.
+
+### Loopback IP (`127.0.0.1`, `0.0.0.0`)
+
+Use when you want the device to always appear online via `ICMP`. Note that this simulates reachability and introduces artificial data, but it can be preferable if you want to distinguish dummy devices by IP when filtering your asset lists.
+
 ## Copying data from an existing device.
```
```diff
@@ -215,7 +215,7 @@ services:
 
 ### 1.3 Migration from NetAlertX `v25.10.1`
 
-Starting from v25.10.1, the container uses a [more secure, read-only runtime environment](./SECURITY_FEATURES.md), which requires all writable paths (e.g., logs, API cache, temporary data) to be mounted as `tmpfs` or permanent writable volumes, with sufficient access [permissions](./FILE_PERMISSIONS.md). The data location has also changed from `/app/db` and `/app/config` to `/data/db` and `/data/config`. See detailed steps below.
+Starting from `v25.10.1`, the container uses a [more secure, read-only runtime environment](./SECURITY_FEATURES.md), which requires all writable paths (e.g., logs, API cache, temporary data) to be mounted as `tmpfs` or permanent writable volumes, with sufficient access [permissions](./FILE_PERMISSIONS.md). The data location has also changed from `/app/db` and `/app/config` to `/data/db` and `/data/config`. See detailed steps below.
 
 #### STEPS:
```
```diff
@@ -248,7 +248,7 @@ services:
 services:
   netalertx:
     container_name: netalertx
-    image: "ghcr.io/jokob-sk/netalertx" # 🆕 This has changed
+    image: "ghcr.io/jokob-sk/netalertx:25.11.29" # 🆕 This has changed
     network_mode: "host"
     cap_drop: # 🆕 New line
       - ALL # 🆕 New line
```
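Tying the migration notes and the compose snippet together, the writable mounts might look like this (the host paths and the tmpfs list are illustrative; consult the install docs for the authoritative set of writable paths):

```yaml
    volumes:
      - ./config:/data/config   # was /app/config before v25.10.1
      - ./db:/data/db           # was /app/db before v25.10.1
    tmpfs:
      - /tmp
```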
```diff
@@ -63,7 +63,7 @@ There is also an in-app Help / FAQ section that should be answering frequently a
 
 #### ♻ Misc
 
-- [Reverse proxy (Nginx, Apache, SWAG)](./REVERSE_PROXY.md)
+- [Reverse Proxy](./REVERSE_PROXY.md)
 - [Installing Updates](./UPDATES.md)
 - [Setting up Authelia](./AUTHELIA.md) (DRAFT)
```
```diff
@@ -51,7 +51,7 @@ If you don't need to discover new devices and only need to report on their statu
 
 For more information on how to add devices manually (or dummy devices), refer to the [Device Management](./DEVICE_MANAGEMENT.md) documentation.
 
-To create truly dummy devices, you can use a loopback IP address (e.g., `0.0.0.0` or `127.0.0.1`) so they appear online.
+To create truly dummy devices, you can use a loopback IP address (e.g., `0.0.0.0` or `127.0.0.1`) or the `Force Status` field so they appear online.
 
 ## NMAP and Fake MAC Addresses
```
**docs/REVERSE_PROXY.md** (rewritten)
# Reverse Proxy Configuration

A reverse proxy is a server that sits between users and your NetAlertX instance. It allows you to:

- Access NetAlertX via a domain name (e.g., `https://netalertx.example.com`).
- Add HTTPS/SSL encryption.
- Enforce authentication (like SSO).

```mermaid
flowchart LR
    Browser --HTTPS--> Proxy[Reverse Proxy] --HTTP--> Container[NetAlertX Container]
```
## NetAlertX Ports

NetAlertX exposes two ports that serve different purposes. Your reverse proxy can target one or both, depending on your needs.

| Port | Service | Purpose |
|------|---------|---------|
| **20211** | Nginx (Web UI) | The main interface. |
| **20212** | Backend API | Direct access to the API and GraphQL. Includes API docs you can view with a browser. |

> [!WARNING]
> **Do not document or use `/server` as an external API endpoint.** It is an internal route used by the Nginx frontend to communicate with the backend.
## Connection Patterns

### 1. Default (No Proxy)

For local testing or LAN access. The browser accesses the UI on port 20211. Code and API docs are accessible on 20212.

```mermaid
flowchart LR
    B[Browser]
    subgraph NAC[NetAlertX Container]
        N[Nginx listening on port 20211]
        A[Service on port 20212]
        N -->|Proxy /server to localhost:20212| A
    end
    B -->|port 20211| NAC
    B -->|port 20212| NAC
```
### 2. Direct API Consumer (Not Recommended)

Connecting directly to the backend API port (20212).

> [!CAUTION]
> This exposes the API directly to the network without additional protection. Avoid this on untrusted networks.

```mermaid
flowchart LR
    B[Browser] -->|HTTPS| S[Any API Consumer app]
    subgraph NAC[NetAlertX Container]
        N[Nginx listening on port 20211]
        N -->|Proxy /server to localhost:20212| A[Service on port 20212]
    end
    S -->|Port 20212| NAC
```
### 3. Recommended: Reverse Proxy to Web UI

Using a reverse proxy (Nginx, Traefik, Caddy, etc.) to handle HTTPS and Auth in front of the main UI.

```mermaid
flowchart LR
    B[Browser] -->|HTTPS| S[Any Auth/SSL proxy]
    subgraph NAC[NetAlertX Container]
        N[Nginx listening on port 20211]
        N -->|Proxy /server to localhost:20212| A[Service on port 20212]
    end
    S -->|port 20211| NAC
```
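A minimal Nginx server block for this pattern might look like the sketch below (the domain, certificate paths, and upstream address are placeholders; adapt them to your environment):

```nginx
server {
    listen 443 ssl;
    server_name netalertx.example.com;

    ssl_certificate     /etc/ssl/certs/netalertx.pem;
    ssl_certificate_key /etc/ssl/private/netalertx.key;

    location / {
        proxy_pass http://192.168.1.10:20211/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```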
### 4. Recommended: Proxied API Consumer

Using a proxy to secure API access with TLS or IP limiting.

**Why is this important?**

The backend API (`:20212`) is powerful—more so than the Web UI, which is a safer, password-protectable interface. By using a reverse proxy to **limit sources** (e.g., allowing only your Home Assistant server's IP), you ensure that only trusted devices can talk to your backend.

```mermaid
flowchart LR
    B[Browser] -->|HTTPS| S[Any API Consumer app]
    C[HTTPS/source-limiting Proxy]
    subgraph NAC[NetAlertX Container]
        N[Nginx listening on port 20211]
        N -->|Proxy /server to localhost:20212| A[Service on port 20212]
    end
    S -->|HTTPS| C
    C -->|Port 20212| NAC
```
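One way to implement the source limiting described above, sketched for Nginx (the allowed address and upstream are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name api.netalertx.example.com;

    ssl_certificate     /etc/ssl/certs/netalertx.pem;
    ssl_certificate_key /etc/ssl/private/netalertx.key;

    # Only the trusted consumer (e.g., a Home Assistant server) may connect
    allow 192.168.1.50;
    deny  all;

    location / {
        proxy_pass http://192.168.1.10:20212/;
    }
}
```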
## Getting Started: Nginx Proxy Manager

For beginners, we recommend **[Nginx Proxy Manager](https://nginxproxymanager.com/)**. It provides a user-friendly interface to manage proxy hosts and free SSL certificates via Let's Encrypt.

1. Install Nginx Proxy Manager alongside NetAlertX.
2. Create a **Proxy Host** pointing to your NetAlertX IP and Port `20211` for the Web UI.
3. (Optional) Create a second host for the API on Port `20212`.



### Configuration Settings

When using a reverse proxy, you should verify two settings in **Settings > Core > General**:

1. **BACKEND_API_URL**: This should be set to `/server`.
   * *Reason:* The frontend should communicate with the backend via the internal Nginx proxy rather than routing out to the internet and back.
2. **REPORT_DASHBOARD_URL**: Set this to your external proxy URL (e.g., `https://netalertx.example.com`).
   * *Reason:* This URL is used to generate proper clickable links in emails and HTML reports.


## Other Reverse Proxies

NetAlertX uses standard HTTP. Any reverse proxy will work. Simply forward traffic to the appropriate port (`20211` or `20212`).

For configuration details, consult the documentation for your preferred proxy:

* **[NGINX](https://nginx.org/en/docs/http/ngx_http_proxy_module.html)**
* **[Apache (mod_proxy)](https://httpd.apache.org/docs/current/mod/mod_proxy.html)**
* **[Caddy](https://caddyserver.com/docs/caddyfile/directives/reverse_proxy)**
* **[Traefik](https://doc.traefik.io/traefik/routing/services/)**
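As an illustration of how little configuration this usually takes, a hedged Caddyfile sketch (the domain and upstream address are placeholders; Caddy provisions the certificate automatically):

```
netalertx.example.com {
    reverse_proxy 192.168.1.10:20211
}
```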
## Authentication
|
||||||
|
|
||||||
## Apache HTTP Configuration (Direct Path)
|
If you wish to add Single Sign-On (SSO) or other authentication in front of NetAlertX, refer to the documentation for your identity provider:
|
||||||
|
|
||||||
1. On your Apache server, create a new file called /etc/apache2/sites-available/netalertx.conf.
|
* **[Authentik](https://docs.goauthentik.io/)**
|
||||||
|
* **[Authelia](https://www.authelia.com/docs/)**
|
||||||
|
|
||||||
2. In this file, paste the following code:
|
## Further Reading
|
||||||
|
|
||||||
```
|
|
||||||
<VirtualHost *:80>
|
|
||||||
ServerName netalertx
|
|
||||||
ProxyPreserveHost On
|
|
||||||
ProxyPass / http://localhost:20211/
|
|
||||||
ProxyPassReverse / http://localhost:20211/
|
|
||||||
</VirtualHost>
|
|
||||||
```
|
|
||||||
|
|
||||||
3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.
|
|
||||||
|
|
||||||
4. Activate the new website by running the following command:
|
|
||||||
|
|
||||||
`a2ensite netalertx` or `service apache2 reload`
|
|
||||||
|
|
||||||
5. Once Apache restarts, you should be able to access the proxy website at http://netalertx/
|
|
||||||
|
|
||||||
<br/>
|
|
||||||
|
|
||||||
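To illustrate that any standard reverse proxy works, here is a minimal Caddyfile sketch for a direct-path setup. The hostname is a placeholder; for a public hostname, Caddy will obtain and renew TLS certificates automatically.

```
netalertx.example.com {
    reverse_proxy localhost:20211
}
```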
## Apache HTTP Configuration (Sub Path)

1. On your Apache server, create a new file called `/etc/apache2/sites-available/netalertx.conf`.

2. In this file, paste the following code. Note that Apache has no NGINX-style `location` directive; a `<Location>` block is used instead:

```
<VirtualHost *:80>
    ServerName netalertx

    ProxyPreserveHost On

    <Location /netalertx/>
        ProxyPass http://localhost:20211/
        ProxyPassReverse http://localhost:20211/
    </Location>
</VirtualHost>
```

3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.

4. Activate the new website by running the following command:

   `a2ensite netalertx` or `service apache2 reload`

5. Once Apache restarts, you should be able to access the proxy website at http://netalertx/netalertx/

<br/>
## Apache HTTPS Configuration (Direct Path)

1. On your Apache server, create a new file called `/etc/apache2/sites-available/netalertx.conf`.

2. In this file, paste the following code:

```
<VirtualHost *:443>
    ServerName netalertx

    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/netalertx.pem
    SSLCertificateKeyFile /etc/ssl/private/netalertx.key

    ProxyPreserveHost On
    ProxyPass / http://localhost:20211/
    ProxyPassReverse / http://localhost:20211/
</VirtualHost>
```

3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.

4. Activate the new website by running the following command:

   `a2ensite netalertx` or `service apache2 reload`

5. Once Apache restarts, you should be able to access the proxy website at https://netalertx/

<br/>
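The HTTPS examples reference a certificate at `/etc/ssl/certs/netalertx.pem`. If you do not have one yet, a throwaway self-signed pair for testing can be generated roughly like this (paths and subject are illustrative; use a CA-issued certificate in production):

```shell
# Generate a self-signed cert/key pair for testing only.
# The CN "netalertx" matches the ServerName used in the examples above.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=netalertx" \
  -keyout netalertx.key -out netalertx.pem
```

Copy the resulting files to the paths referenced by `SSLCertificateFile` and `SSLCertificateKeyFile`.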
## Apache HTTPS Configuration (Sub Path)

1. On your Apache server, create a new file called `/etc/apache2/sites-available/netalertx.conf`.

2. In this file, paste the following code. As with the HTTP sub-path setup, Apache uses a `<Location>` block rather than an NGINX-style `location` directive:

```
<VirtualHost *:443>
    ServerName netalertx

    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/netalertx.pem
    SSLCertificateKeyFile /etc/ssl/private/netalertx.key

    ProxyPreserveHost On

    <Location /netalertx/>
        ProxyPass http://localhost:20211/
        ProxyPassReverse http://localhost:20211/
    </Location>
</VirtualHost>
```

3. Check your config with `httpd -t` (or `apache2ctl -t` on Debian/Ubuntu). If there are any issues, it will tell you.

4. Activate the new website by running the following command:

   `a2ensite netalertx` or `service apache2 reload`

5. Once Apache restarts, you should be able to access the proxy website at https://netalertx/netalertx/

<br/>
## Reverse proxy example using LinuxServer's SWAG container

> Submitted by [s33d1ing](https://github.com/s33d1ing). 🙏

## [linuxserver/swag](https://github.com/linuxserver/docker-swag)

In the SWAG container, create `/config/nginx/proxy-confs/netalertx.subfolder.conf` with the following contents:

```nginx
## Version 2023/02/05
# make sure that your netalertx container is named netalertx
# netalertx does not require a base url setting

# Since NetAlertX uses a Host network, you may need to use the IP address of the system running NetAlertX for $upstream_app.

location /netalertx {
    return 301 $scheme://$host/netalertx/;
}

location ^~ /netalertx/ {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable for ldap auth (requires ldap-server.conf in the server block)
    #include /config/nginx/ldap-location.conf;

    # enable for Authelia (requires authelia-server.conf in the server block)
    #include /config/nginx/authelia-location.conf;

    # enable for Authentik (requires authentik-server.conf in the server block)
    #include /config/nginx/authentik-location.conf;

    include /config/nginx/proxy.conf;
    include /config/nginx/resolver.conf;

    set $upstream_app netalertx;
    set $upstream_port 20211;
    set $upstream_proto http;

    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    proxy_set_header Accept-Encoding "";

    proxy_redirect ~^/(.*)$ /netalertx/$1;
    rewrite ^/netalertx/?(.*)$ /$1 break;

    sub_filter_once off;
    sub_filter_types *;

    sub_filter 'href="/' 'href="/netalertx/';

    sub_filter '(?>$host)/css' '/netalertx/css';
    sub_filter '(?>$host)/js' '/netalertx/js';

    sub_filter '/img' '/netalertx/img';
    sub_filter '/lib' '/netalertx/lib';
    sub_filter '/php' '/netalertx/php';
}
```

<br/>
## Traefik

> Submitted by [Isegrimm](https://github.com/Isegrimm) 🙏 (based on this [discussion](https://github.com/jokob-sk/NetAlertX/discussions/449#discussioncomment-7281442))

Assuming the user already has a working Traefik setup, this is what's needed to make NetAlertX work at a URL like www.domain.com/netalertx/.

Note: Everything in these configs assumes '**www.domain.com**' as your domain name and '**section31**' as an arbitrary name for your certificate resolver. You will have to substitute these with your own.

Also, I use the prefix '**netalertx**'. If you want to use another prefix, change it in these files: dynamic.toml and default.

Content of my YAML file (this is the generic Traefik config, which defines which ports to listen on, redirects HTTP to HTTPS, and sets up the certificate process).
It also contains Authelia, which I use for authentication.
This part contains nothing specific to NetAlertX.

```yaml
version: '3.8'

services:
  traefik:
    image: traefik
    container_name: traefik
    command:
      - "--api=true"
      - "--api.insecure=true"
      - "--api.dashboard=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
      - "--entrypoints.web.http.redirections.entryPoint.scheme=https"
      - "--entrypoints.websecure.address=:443"
      - "--providers.file.filename=/traefik-config/dynamic.toml"
      - "--providers.file.watch=true"
      - "--log.level=ERROR"
      - "--certificatesresolvers.section31.acme.email=postmaster@domain.com"
      - "--certificatesresolvers.section31.acme.storage=/traefik-config/acme.json"
      - "--certificatesresolvers.section31.acme.httpchallenge=true"
      - "--certificatesresolvers.section31.acme.httpchallenge.entrypoint=web"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - /appl/docker/traefik/config:/traefik-config
    depends_on:
      - authelia
    restart: unless-stopped

  authelia:
    container_name: authelia
    image: authelia/authelia:latest
    ports:
      - "9091:9091"
    volumes:
      - /appl/docker/authelia:/config
    restart: unless-stopped
```

Snippet of the dynamic.toml file (referenced in the YAML file above) that defines the config for NetAlertX.
The following are self-defined names; everything else is a Traefik keyword:

- netalertx-router
- netalertx-service
- auth
- netalertx-stripprefix

```toml
[http.routers]
  [http.routers.netalertx-router]
    entryPoints = ["websecure"]
    rule = "Host(`www.domain.com`) && PathPrefix(`/netalertx`)"
    service = "netalertx-service"
    middlewares = ["auth", "netalertx-stripprefix"]
    [http.routers.netalertx-router.tls]
      certResolver = "section31"
      [[http.routers.netalertx-router.tls.domains]]
        main = "www.domain.com"

[http.services]
  [http.services.netalertx-service]
    [[http.services.netalertx-service.loadBalancer.servers]]
      url = "http://internal-ip-address:20211/"

[http.middlewares]
  [http.middlewares.auth.forwardAuth]
    address = "http://authelia:9091/api/verify?rd=https://www.domain.com/authelia/"
    trustForwardHeader = true
    authResponseHeaders = ["Remote-User", "Remote-Groups", "Remote-Name", "Remote-Email"]
  [http.middlewares.netalertx-stripprefix.stripprefix]
    prefixes = ["/netalertx"]
    forceSlash = false
```
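If you run Traefik with its Docker provider instead of the file provider, the equivalent router, middleware, and service can be sketched as container labels. These names and the rule are illustrative, not an official NetAlertX snippet; note that with `network_mode: host` Traefik cannot reach the container by service name, which is why the file-provider setup above points at an internal IP address.

```yaml
services:
  netalertx:
    # ... image, volumes, etc. ...
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.netalertx.rule=Host(`www.domain.com`) && PathPrefix(`/netalertx`)"
      - "traefik.http.routers.netalertx.entrypoints=websecure"
      - "traefik.http.routers.netalertx.middlewares=netalertx-stripprefix"
      - "traefik.http.middlewares.netalertx-stripprefix.stripprefix.prefixes=/netalertx"
      - "traefik.http.services.netalertx.loadbalancer.server.port=20211"
```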
To make NetAlertX work with this setup, I modified the default file at `/etc/nginx/sites-available/default` in the Docker container: I copied it to my local filesystem, added the changes as specified by [cvc90](https://github.com/cvc90), and mounted the new file into the Docker container, overwriting the original one. By mapping the file instead of changing it in place, the changes persist when an updated Docker image is pulled. The downside is that the mapped file masks any future updates to the default file, so I only use this as a temporary solution until the Docker image ships with this change.

Default file:

```
server {
    listen 80 default_server;
    root /var/www/html;
    index index.php;
    #rewrite /netalertx/(.*) / permanent;
    add_header X-Forwarded-Prefix "/netalertx" always;
    proxy_set_header X-Forwarded-Prefix "/netalertx";

    location ~* \.php$ {
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_connect_timeout 75;
        fastcgi_send_timeout 600;
        fastcgi_read_timeout 600;
    }
}
```

Mapping the updated file (on the local filesystem at `/appl/docker/netalertx/default`) into the Docker container:

```yaml
...
    volumes:
      - /appl/docker/netalertx/default:/etc/nginx/sites-available/default
...
```
## Further Reading

If you want to understand more about reverse proxies and networking concepts:

* [What is a Reverse Proxy? (Cloudflare)](https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/)
* [Proxy vs Reverse Proxy (StrongDM)](https://www.strongdm.com/blog/difference-between-proxy-and-reverse-proxy)
* [Nginx Reverse Proxy Glossary](https://www.nginx.com/resources/glossary/reverse-proxy-server/)
## Caddy + Authentik Outpost Proxy SSO

> Submitted by [luckylinux](https://github.com/luckylinux) 🙏.

> [!NOTE]
> This is community-contributed. Due to environment, setup, or networking differences, results may vary. Please open a PR to improve it instead of creating an issue, as the maintainer is not actively maintaining it.

> [!NOTE]
> NetAlertX requires access to both the **web UI** (default `20211`) and the **GraphQL backend `GRAPHQL_PORT`** (default `20212`) ports.
> Ensure your reverse proxy allows traffic to both for proper functionality.

### Introduction

This setup assumes:

1. An Authentik installation running on a separate host at `https://authentik.MYDOMAIN.TLD`.
2. Container management done on bare metal OR in a virtual machine (KVM/Xen/ESXi/..., no LXC containers!):
   i. Docker and Docker Compose configured locally, running as root (needed for `network_mode: host`), OR
   ii. Podman (optionally `podman-compose`) configured locally, running as root (needed for `network_mode: host`).
3. TLS certificates already obtained and located at `/var/lib/containers/certificates/letsencrypt/MYDOMAIN.TLD`.
   I use the `certbot/dns-cloudflare` Podman container on a separate host to obtain the certificates, which I then distribute internally.
   This container uses the wildcard top-level domain certificate, which is valid for `MYDOMAIN.TLD` and `*.MYDOMAIN.TLD`.
4. Proxied access:
   i. NetAlertX web interface accessible via Caddy reverse proxy at `https://netalertx.MYDOMAIN.TLD` (default HTTPS port 443: `https://netalertx.MYDOMAIN.TLD:443`) with `REPORT_DASHBOARD_URL=https://netalertx.MYDOMAIN.TLD`
   ii. NetAlertX GraphQL interface accessible via Caddy reverse proxy at `https://netalertx.MYDOMAIN.TLD:20212` with `BACKEND_API_URL=https://netalertx.MYDOMAIN.TLD:20212`
   iii. Authentik Proxy Outpost accessible via Caddy reverse proxy at `https://netalertx.MYDOMAIN.TLD:9443`
5. Internal ports:
   i. NGINX web server listening on internal port 20211, set via `PORT=20211`
   ii. Python web server listening on internal port `GRAPHQL_PORT=20219`
   iii. Authentik Proxy Outpost listening on internal port `AUTHENTIK_LISTEN__HTTP=[::1]:6000` (unencrypted) and port `AUTHENTIK_LISTEN__HTTPS=[::1]:6443` (encrypted)
6. Some further configuration for Caddy in terms of logging, SSL certificates, etc.

It's also possible to [let Caddy automatically request & keep TLS certificates up to date](https://caddyserver.com/docs/automatic-https), although please keep in mind that:

1. You risk enumerating your LAN: every domain/subdomain for which Caddy requests a TLS certificate will end up on the [public list of Let's Encrypt certificates issued](https://crt.sh/).
2. You need to either:
   i. Open port 80 for external access ([HTTP challenge](https://caddyserver.com/docs/automatic-https#http-challenge)) so Let's Encrypt can verify ownership of the domain/subdomain,
   ii. Open port 443 for external access ([TLS-ALPN challenge](https://caddyserver.com/docs/automatic-https#tls-alpn-challenge)) so Let's Encrypt can verify ownership of the domain/subdomain, or
   iii. Give Caddy the credentials to update the DNS records at your DNS provider ([DNS challenge](https://caddyserver.com/docs/automatic-https#dns-challenge)).

You can also decide to deploy your own certificates & certification authority, either manually with OpenSSL or by using something like [mkcert](https://github.com/FiloSottile/mkcert).

In terms of the IP stack used:

- External: Caddy listens on both IPv4 and IPv6.
- Internal:
  - Authentik Outpost Proxy listens on IPv6 `[::1]`
  - NetAlertX listens on IPv4 `0.0.0.0`
### Flow

The traffic flow will therefore be as follows:

- Web GUI:
   i. Client accesses `http://netalertx.MYDOMAIN.TLD:80`: default (built-in Caddy) redirect to `https://netalertx.MYDOMAIN.TLD:443`
   ii. Client accesses `https://netalertx.MYDOMAIN.TLD:443` -> reverse proxy to internal port 20211 (NetAlertX Web GUI / NGINX, unencrypted)
- GraphQL: Client accesses `https://netalertx.MYDOMAIN.TLD:20212` -> reverse proxy to internal port 20219 (NetAlertX GraphQL, unencrypted)
- Authentik Outpost: Client accesses `https://netalertx.MYDOMAIN.TLD:9443` -> reverse proxy to internal port 6000 (Authentik Outpost Proxy, unencrypted)

An overview of the flow is provided in the picture below:


### Security Considerations

#### Caddy should be run rootless

> [!WARNING]
> By default Caddy runs as `root`, which is a security risk.
> To solve this, it's recommended to create an unprivileged user `caddy` and group `caddy` on the host:
> ```
> groupadd --gid 980 caddy
> useradd --shell /usr/sbin/nologin --gid 980 --uid 980 -c "Caddy web server" --base-dir /var/lib/caddy caddy
> ```

At least when using Quadlets with usernames (NOT required with UID/GID), but possibly when using Compose in certain cases as well, a custom `/etc/passwd` and `/etc/group` might need to be bind-mounted inside the container.

`passwd`:
```
root:x:0:0:root:/root:/bin/sh
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/mail:/sbin/nologin
news:x:9:13:news:/usr/lib/news:/sbin/nologin
uucp:x:10:14:uucp:/var/spool/uucppublic:/sbin/nologin
cron:x:16:16:cron:/var/spool/cron:/sbin/nologin
ftp:x:21:21::/var/lib/ftp:/sbin/nologin
sshd:x:22:22:sshd:/dev/null:/sbin/nologin
games:x:35:35:games:/usr/games:/sbin/nologin
ntp:x:123:123:NTP:/var/empty:/sbin/nologin
guest:x:405:100:guest:/dev/null:/sbin/nologin
nobody:x:65534:65534:nobody:/:/sbin/nologin
caddy:x:980:980:caddy:/var/lib/caddy:/bin/sh
```
`group`:
```
root:x:0:root
bin:x:1:root,bin,daemon
daemon:x:2:root,bin,daemon
sys:x:3:root,bin
adm:x:4:root,daemon
tty:x:5:
disk:x:6:root
lp:x:7:lp
kmem:x:9:
wheel:x:10:root
floppy:x:11:root
mail:x:12:mail
news:x:13:news
uucp:x:14:uucp
cron:x:16:cron
audio:x:18:
cdrom:x:19:
dialout:x:20:root
ftp:x:21:
sshd:x:22:
input:x:23:
tape:x:26:root
video:x:27:root
netdev:x:28:
kvm:x:34:kvm
games:x:35:
shadow:x:42:
www-data:x:82:
users:x:100:games
ntp:x:123:
abuild:x:300:
utmp:x:406:
ping:x:999:
nogroup:x:65533:
nobody:x:65534:
caddy:x:980:
```

#### Authentication of GraphQL Endpoint

> [!WARNING]
> Currently the GraphQL endpoint is NOT authenticated!
### Environment Files

Depending on user preference (environment variables defined in the Compose/Quadlet files or in external `.env` file[s]), it might be preferable to place at least some environment variables in external `.env` and `.env.<application>` files.

The following is proposed:

- `.env`: common settings (empty by default)
- `.env.caddy`: Caddy settings
- `.env.server`: NetAlertX server/application settings
- `.env.outpost.proxy`: Authentik Proxy Outpost settings

The following contents are assumed.

`.env.caddy`:
```
# Define Application Hostname
APPLICATION_HOSTNAME=netalertx.MYDOMAIN.TLD

# Define Certificate Domain
# In this case: use Wildcard Certificate
APPLICATION_CERTIFICATE_DOMAIN=MYDOMAIN.TLD
APPLICATION_CERTIFICATE_CERT_FILE=fullchain.pem
APPLICATION_CERTIFICATE_KEY_FILE=privkey.pem

# Define Outpost Hostname
OUTPOST_HOSTNAME=netalertx.MYDOMAIN.TLD

# Define Outpost External Port (TLS)
OUTPOST_EXTERNAL_PORT=9443
```

`.env.server`:
```
PORT=20211
PORT_SSL=443
NETALERTX_NETWORK_MODE=host
LISTEN_ADDR=0.0.0.0
GRAPHQL_PORT=20219
NETALERTX_DEBUG=1
BACKEND_API_URL=https://netalertx.MYDOMAIN.TLD:20212
```

`.env.outpost.proxy`:
```
AUTHENTIK_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AUTHENTIK_LISTEN__HTTP=[::1]:6000
AUTHENTIK_LISTEN__HTTPS=[::1]:6443
```
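The Compose and Quadlet units in this guide bind-mount a `./Caddyfile` whose contents are not shown. A minimal sketch consistent with the flow described above is given below; the hostnames, backend ports, and certificate paths are the assumed values from this setup, not an official NetAlertX file, so adapt them to your environment.

```
netalertx.MYDOMAIN.TLD {
    tls /certificates/MYDOMAIN.TLD/fullchain.pem /certificates/MYDOMAIN.TLD/privkey.pem
    reverse_proxy 127.0.0.1:20211
}

netalertx.MYDOMAIN.TLD:20212 {
    tls /certificates/MYDOMAIN.TLD/fullchain.pem /certificates/MYDOMAIN.TLD/privkey.pem
    reverse_proxy 127.0.0.1:20219
}

netalertx.MYDOMAIN.TLD:9443 {
    tls /certificates/MYDOMAIN.TLD/fullchain.pem /certificates/MYDOMAIN.TLD/privkey.pem
    reverse_proxy [::1]:6000
}
```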
### Compose Setup
```yaml
version: "3.8"

services:
  netalertx-caddy:
    container_name: netalertx-caddy

    network_mode: host

    image: docker.io/library/caddy:latest
    pull_policy: missing

    env_file:
      - .env
      - .env.caddy

    environment:
      CADDY_DOCKER_CADDYFILE_PATH: "/etc/caddy/Caddyfile"

    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro,z
      - /var/lib/containers/data/netalertx/caddy:/data/caddy:rw,z
      - /var/lib/containers/log/netalertx/caddy:/var/log:rw,z
      - /var/lib/containers/config/netalertx/caddy:/config/caddy:rw,z
      - /var/lib/containers/certificates/letsencrypt:/certificates:ro,z

    # Set User
    user: "caddy:caddy"

    # Automatically restart Container
    restart: unless-stopped

  netalertx-server:
    container_name: netalertx-server  # The name shown by `docker container ls`

    network_mode: host  # Use host networking for ARP scanning and other services

    depends_on:
      netalertx-caddy:
        condition: service_started
        restart: true
      netalertx-outpost-proxy:
        condition: service_started
        restart: true

    # Locally built image including latest changes
    image: localhost/netalertx-dev:dev-20260109-232454

    read_only: true  # Make the container filesystem read-only

    # It is most secure to start with user 20211, but then we lose provisioning capabilities.
    # user: "${NETALERTX_UID:-20211}:${NETALERTX_GID:-20211}"
    cap_drop:  # Drop all capabilities for enhanced security
      - ALL
    cap_add:  # Add only the necessary capabilities
      - NET_ADMIN         # Required for scanning with arp-scan, nmap, nbtscan, traceroute, and zero-conf
      - NET_RAW           # Required for raw socket operations with arp-scan, nmap, nbtscan, traceroute and zero-conf
      - NET_BIND_SERVICE  # Required to bind to privileged ports with nbtscan
      - CHOWN             # Required for root-entrypoint to chown /data + /tmp before dropping privileges
      - SETUID            # Required for root-entrypoint to switch to non-root user
      - SETGID            # Required for root-entrypoint to switch to non-root group

    volumes:
      # Override NGINX Configuration Template
      - type: bind
        source: /var/lib/containers/config/netalertx/server/nginx/netalertx.conf.template
        target: /services/config/nginx/netalertx.conf.template
        read_only: true
        bind:
          selinux: Z

      # Letsencrypt Certificates
      - type: bind
        source: /var/lib/containers/certificates/letsencrypt/MYDOMAIN.TLD
        target: /certificates
        read_only: true
        bind:
          selinux: Z

      # Data Storage for NetAlertX
      - type: bind
        source: /var/lib/containers/data/netalertx/server
        target: /data        # consolidated configuration and database storage
        read_only: false     # writable volume
        bind:
          selinux: Z

      # Set the Timezone
      - type: bind           # Bind mount for timezone consistency
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
        bind:
          selinux: Z

      # tmpfs mount for writable directories in a read-only container; also improves performance
      # All writes live under /tmp/* subdirectories, created dynamically by entrypoint.d scripts
      # mode=1700 gives rwx------ permissions; ownership is set by /root-entrypoint.sh
      - type: tmpfs
        target: /tmp
        tmpfs-mode: 1700
        uid: 0
        gid: 0
        rw: true
        noexec: true
        nosuid: true
        nodev: true
        async: true
        noatime: true
        nodiratime: true

    env_file:
      - .env
      - .env.server

    environment:
      PUID: ${NETALERTX_UID:-20211}        # Runtime UID after priming (Synology/no-copy-up safe)
      PGID: ${NETALERTX_GID:-20211}        # Runtime GID after priming (Synology/no-copy-up safe)
      LISTEN_ADDR: ${LISTEN_ADDR:-0.0.0.0} # Listen for connections on all interfaces
      PORT: ${PORT:-20211}                 # Application port
      PORT_SSL: ${PORT_SSL:-443}
      GRAPHQL_PORT: ${GRAPHQL_PORT:-20212} # GraphQL API port
      ALWAYS_FRESH_INSTALL: ${ALWAYS_FRESH_INSTALL:-false} # Set to true to reset your config and database on each container start
      NETALERTX_DEBUG: ${NETALERTX_DEBUG:-0} # 0 = kill all services and restart if any dies; 1 keeps running dead services
      BACKEND_API_URL: ${BACKEND_API_URL:-https://netalertx.MYDOMAIN.TLD:20212}

    # Resource limits to prevent resource exhaustion
    mem_limit: 4096m        # Maximum memory usage
    mem_reservation: 2048m  # Soft memory limit
    cpu_shares: 512         # Relative CPU weight for CPU contention scenarios
    pids_limit: 512         # Limit the number of processes/threads to prevent fork bombs
    logging:
      driver: "json-file"   # Use JSON file logging driver
      options:
        max-size: "10m"     # Rotate log files after they reach 10MB
        max-file: "3"       # Keep a maximum of 3 log files

    # Always restart the container unless explicitly stopped
    restart: unless-stopped

  # To sign out, you need to visit
  # {$OUTPOST_HOSTNAME}:{$OUTPOST_EXTERNAL_PORT}/outpost.goauthentik.io/sign_out
  netalertx-outpost-proxy:
    container_name: netalertx-outpost-proxy

    network_mode: host

    depends_on:
      netalertx-caddy:
        condition: service_started
        restart: true

    restart: unless-stopped

    image: ghcr.io/goauthentik/proxy:2025.10
    pull_policy: missing

    env_file:
      - .env
      - .env.outpost.proxy

    environment:
      AUTHENTIK_HOST: "https://authentik.MYDOMAIN.TLD"
      AUTHENTIK_INSECURE: "false"
      AUTHENTIK_LISTEN__HTTP: "[::1]:6000"
      AUTHENTIK_LISTEN__HTTPS: "[::1]:6443"
```
### Quadlet Setup
|
|
||||||
`netalertx.pod`:
|
|
||||||
```
|
|
||||||
[Pod]
|
|
||||||
# Name of the Pod
|
|
||||||
PodName=netalertx
|
|
||||||
|
|
||||||
# Network Mode Host is required for ARP to work
|
|
||||||
Network=host
|
|
||||||
|
|
||||||
# Automatically start Pod at Boot Time
|
|
||||||
[Install]
|
|
||||||
WantedBy=default.target
|
|
||||||
```
|
|
||||||
|
|
||||||
`netalertx-caddy.container`:
|
|
||||||
```
|
|
||||||
[Unit]
|
|
||||||
Description=NetAlertX Caddy Container
|
|
||||||
|
|
||||||
[Service]
|
|
||||||
Restart=always
|
|
||||||
|
|
||||||
[Container]
|
|
||||||
ContainerName=netalertx-caddy
|
|
||||||
|
|
||||||
Pod=netalertx.pod
|
|
||||||
StartWithPod=true
|
|
||||||
|
|
||||||
# Generic Environment Configuration
|
|
||||||
EnvironmentFile=.env
|
|
||||||
|
|
||||||
# Caddy Specific Environment Configuration
|
|
||||||
EnvironmentFile=.env.caddy
|
|
||||||
|
|
||||||
Environment=CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile
|
|
||||||
|
|
||||||
Image=docker.io/library/caddy:latest
|
|
||||||
Pull=missing
|
|
||||||
|
|
||||||
# Run as rootless
|
|
||||||
# Specifying User & Group by Name requires to mount a custom passwd & group File inside the Container
|
|
||||||
# Otherwise an Error like the following will result: netalertx-caddy[593191]: Error: unable to find user caddy: no matching entries in passwd file
|
|
||||||
# User=caddy
|
|
||||||
# Group=caddy
|
|
||||||
# Volume=/var/lib/containers/config/netalertx/caddy-rootless/passwd:/etc/passwd:ro,z
|
|
||||||
# Volume=/var/lib/containers/config/netalertx/caddy-rootless/group:/etc/group:ro,z
|
|
||||||
|
|
||||||
# Run as rootless
|
|
||||||
# Specifying User & Group by UID/GID will NOT require a custom passwd / group File to be bind-mounted inside the Container
|
|
||||||
User=980
|
|
||||||
Group=980
|
|
||||||
|
|
||||||
Volume=./Caddyfile:/etc/caddy/Caddyfile:ro,z
|
|
||||||
Volume=/var/lib/containers/data/netalertx/caddy:/data/caddy:z
|
|
||||||
Volume=/var/lib/containers/log/netalertx/caddy:/var/log:z
|
|
||||||
Volume=/var/lib/containers/config/netalertx/caddy:/config/caddy:z
|
|
||||||
Volume=/var/lib/containers/certificates/letsencrypt:/certificates:ro,z
|
|
||||||
```

`netalertx-server.container`:

```
[Unit]
Description=NetAlertX Server Container
Requires=netalertx-caddy.service netalertx-outpost-proxy.service
After=netalertx-caddy.service netalertx-outpost-proxy.service

[Service]
Restart=always

[Container]
ContainerName=netalertx-server

Pod=netalertx.pod
StartWithPod=true

# Locally built image including the latest changes
Image=localhost/netalertx-dev:dev-20260109-232454
Pull=missing

# Make the container filesystem read-only
ReadOnly=true

# Drop all capabilities for enhanced security
DropCapability=ALL

# It is most secure to start with user 20211, but then we lose provisioning capabilities.
# User=20211:20211

# Required for scanning with arp-scan, nmap, nbtscan, traceroute, and zero-conf
AddCapability=NET_ADMIN

# Required for raw socket operations with arp-scan, nmap, nbtscan, traceroute and zero-conf
AddCapability=NET_RAW

# Required to bind to privileged ports with nbtscan
AddCapability=NET_BIND_SERVICE

# Required for root-entrypoint to chown /data + /tmp before dropping privileges
AddCapability=CHOWN

# Required for root-entrypoint to switch to non-root user
AddCapability=SETUID

# Required for root-entrypoint to switch to non-root group
AddCapability=SETGID

# Override the configuration template
Volume=/var/lib/containers/config/netalertx/server/nginx/netalertx.conf.template:/services/config/nginx/netalertx.conf.template:ro,Z

# Let's Encrypt certificates
Volume=/var/lib/containers/certificates/letsencrypt/MYDOMAIN.TLD:/certificates:ro,Z

# Data storage for NetAlertX
Volume=/var/lib/containers/data/netalertx/server:/data:rw,Z

# Set the timezone
Volume=/etc/localtime:/etc/localtime:ro,Z

# tmpfs mounts provide writable directories in a read-only container and improve system performance.
# All writes now live under /tmp/* subdirectories, which are created dynamically by entrypoint.d scripts.
# mode=1700 gives rwx------ permissions; ownership is set by /root-entrypoint.sh
# Mount=type=tmpfs,destination=/tmp,tmpfs-mode=1700,uid=0,gid=0,rw=true,noexec=true,nosuid=true,nodev=true,async=true,noatime=true,nodiratime=true,relabel=private
Mount=type=tmpfs,destination=/tmp,tmpfs-mode=1700,rw=true,noexec=true,nosuid=true,nodev=true

# Environment configuration
EnvironmentFile=.env
EnvironmentFile=.env.server

# Runtime UID after priming (Synology/no-copy-up safe)
Environment=PUID=20211

# Runtime GID after priming (Synology/no-copy-up safe)
Environment=PGID=20211

# Listen for connections on all interfaces (IPv4)
Environment=LISTEN_ADDR=0.0.0.0

# Application port
Environment=PORT=20211

# SSL port
Environment=PORT_SSL=443

# GraphQL API port
Environment=GRAPHQL_PORT=20212

# Set to true to reset your config and database on each container start
Environment=ALWAYS_FRESH_INSTALL=false

# 0 = kill all services and restart if any dies; 1 = keep running with dead services
Environment=NETALERTX_DEBUG=0

# Set the GraphQL URL for external access (via Caddy reverse proxy)
Environment=BACKEND_API_URL=https://netalertx-fedora.MYDOMAIN.TLD:20212

# Resource limits to prevent resource exhaustion
# Maximum memory usage
Memory=4g

# Limit the number of processes/threads to prevent fork bombs
PidsLimit=512

# CPU limit and relative CPU weight for CPU contention scenarios
PodmanArgs=--cpus=2
PodmanArgs=--cpu-shares=512

# Soft memory limit
PodmanArgs=--memory-reservation=2g

# !! The following keys are unfortunately not [yet] supported !!

# Relative CPU weight for CPU contention scenarios
#CpuShares=512

# Soft memory limit
#MemoryReservation=2g
```

`netalertx-outpost-proxy.container`:

```
[Unit]
Description=NetAlertX Authentik Proxy Outpost Container
Requires=netalertx-caddy.service
After=netalertx-caddy.service

[Service]
Restart=always

[Container]
ContainerName=netalertx-outpost-proxy

Pod=netalertx.pod
StartWithPod=true

# General configuration
EnvironmentFile=.env

# Authentik outpost proxy specific configuration
EnvironmentFile=.env.outpost.proxy

Environment=AUTHENTIK_HOST=https://authentik.MYDOMAIN.TLD
Environment=AUTHENTIK_INSECURE=false

# Overrides value from .env.outpost.rac
# Environment=AUTHENTIK_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

# Optional setting to be used when `authentik_host` for internal communication doesn't match the public URL
# Environment=AUTHENTIK_HOST_BROWSER=https://authentik.MYDOMAIN.TLD

# Container image
Image=ghcr.io/goauthentik/proxy:2025.10
Pull=missing

# Network configuration
Network=container:supermicro-ikvm-pve031-caddy

# Security configuration
NoNewPrivileges=true
```
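With all three `.container` files plus the pod unit in place, the quadlets can be activated. A minimal sketch, assuming the unit files live in the system-wide quadlet directory and that the pod unit file is named `netalertx.pod` (from which Quadlet generates a `netalertx-pod.service`):

```
# Install the quadlet files and regenerate the systemd units
sudo cp netalertx.pod netalertx-caddy.container netalertx-server.container netalertx-outpost-proxy.container /etc/containers/systemd/
sudo systemctl daemon-reload

# Start the pod; containers with StartWithPod=true are started along with it
sudo systemctl start netalertx-pod.service
```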

### Firewall Setup

Depending on which GNU/Linux distribution you are running, you may need to open some firewall ports in order to access the endpoints from outside the host itself.

This is for instance the case on Fedora Linux, where I had to open:

- Port 20212 for external GraphQL access (both TCP & UDP are open, unsure if UDP is required)
- Port 9443 for external Authentik outpost proxy access (both TCP & UDP are open, unsure if UDP is required)

![Firewall Configuration](../img/REVERSE_PROXY/authentik_caddy_netalertx_firewall_configuration.png)
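On Fedora, the ports above can be opened with `firewall-cmd`. A minimal sketch, assuming the default zone is used (add matching `udp` rules if UDP turns out to be required):

```
sudo firewall-cmd --permanent --add-port=20212/tcp
sudo firewall-cmd --permanent --add-port=9443/tcp
sudo firewall-cmd --reload
```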

### Authentik Setup

To enable single sign-on (SSO) with Authentik, you will need to create a provider, an application, and an outpost.

![Authentik Architecture](../img/REVERSE_PROXY/authentik_architecture.png)

First of all, using the left sidebar, navigate to `Applications` → `Providers`, click `Create` (blue button at the top of the screen), select `Proxy Provider`, then click `Next`:

![Authentik Create Provider](../img/REVERSE_PROXY/authentik_create_provider.png)

Fill in the required fields:

- Name: choose a name for the provider (e.g. `netalertx`)
- Authorization Flow: choose the authorization flow. I typically use `default-provider-authorization-implicit-consent (Authorize Application)`. If you select `default-provider-authorization-explicit-consent (Authorize Application)`, you will need to authorize Authentik every time you want to log in to NetAlertX, which can make the experience less user-friendly
- Type: click `Forward Auth (single application)`
- External Host: set to `https://netalertx.MYDOMAIN.TLD`

Click `Finish`.

![Authentik Provider Settings](../img/REVERSE_PROXY/authentik_provider.png)

Now, using the left sidebar, navigate to `Applications` → `Applications`, click `Create` (blue button at the top of the screen) and fill in the required fields:

- Name: choose a name for the application (e.g. `netalertx`)
- Slug: choose a slug for the application (e.g. `netalertx`)
- Group: optionally, assign this application to a group of applications of your choosing (for grouping purposes within the Authentik user interface)
- Provider: select the provider you created in the `Providers` section previously (e.g. `netalertx`)

Then click `Create`.

![Authentik Application Settings](../img/REVERSE_PROXY/authentik_application.png)

Now, using the left sidebar, navigate to `Applications` → `Outposts`, click `Create` (blue button at the top of the screen) and fill in the required fields:

- Name: choose a name for the outpost (e.g. `netalertx`)
- Type: `Proxy`
- Integration: open the dropdown and click `---------`. Make sure it is NOT set to `Local Docker connection`!

In the `Available Applications` section, select the application you created in the previous step, then click the right arrow (located approximately in the center of the screen) so that it is copied to the `Selected Applications` section.

Then click `Create`.

![Authentik Outpost Settings](../img/REVERSE_PROXY/authentik_outpost.png)

Wait a few seconds for the outpost to be created. Once it appears in the list, click `Deployment Info` on the right side of the relevant line.

![Authentik Outpost Deployment Info Token](../img/REVERSE_PROXY/authentik_outpost_deployment_info_token.png)

Take note of that token. You will need it for the Authentik outpost proxy container, which reads it from the `AUTHENTIK_TOKEN` environment variable.
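The token is best kept out of the unit file itself. A minimal sketch of the `.env.outpost.proxy` file referenced via `EnvironmentFile=` in the outpost container unit (the value shown is a placeholder):

```
AUTHENTIK_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```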

### NGINX Configuration inside NetAlertX Container

> [!NOTE]
> This was implemented based on the previous content of this reverse proxy document.
> Due to some buffer warnings/errors in the logs, as well as some other issues I was experiencing, I significantly increased the `client_body_buffer_size` and `large_client_header_buffers` parameters, although these might not be required anymore.
> Further testing might be required.

```
# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

# Configures default error logger.
error_log /tmp/log/nginx-error.log warn;

pid /tmp/run/nginx.pid;

events {
    # The maximum number of simultaneous connections that can be opened by
    # a worker process.
    worker_connections 1024;
}

http {

    # Mapping of temp paths for various nginx modules.
    client_body_temp_path /tmp/nginx/client_body;
    proxy_temp_path /tmp/nginx/proxy;
    fastcgi_temp_path /tmp/nginx/fastcgi;
    uwsgi_temp_path /tmp/nginx/uwsgi;
    scgi_temp_path /tmp/nginx/scgi;

    # Includes mapping of file name extensions to MIME types of responses
    # and defines the default type.
    include /services/config/nginx/mime.types;
    default_type application/octet-stream;

    # Name servers used to resolve names of upstream servers into addresses.
    # It's also needed when using tcpsocket and udpsocket in Lua modules.
    #resolver 1.1.1.1 1.0.0.1 [2606:4700:4700::1111] [2606:4700:4700::1001];

    # Don't tell nginx version to the clients. Default is 'on'.
    server_tokens off;

    # Specifies the maximum accepted body size of a client request, as
    # indicated by the request header Content-Length. If the stated content
    # length is greater than this size, then the client receives the HTTP
    # error code 413. Set to 0 to disable. Default is '1m'.
    client_max_body_size 1m;

    # Sendfile copies data between one FD and other from within the kernel,
    # which is more efficient than read() + write(). Default is off.
    sendfile on;

    # Causes nginx to attempt to send its HTTP response head in one packet,
    # instead of using partial frames. Default is 'off'.
    tcp_nopush on;


    # Enables the specified protocols. Default is TLSv1 TLSv1.1 TLSv1.2.
    # TIP: If you're not obligated to support ancient clients, remove TLSv1.1.
    ssl_protocols TLSv1.2 TLSv1.3;

    # Path of the file with Diffie-Hellman parameters for EDH ciphers.
    # TIP: Generate with: `openssl dhparam -out /etc/ssl/nginx/dh2048.pem 2048`
    #ssl_dhparam /etc/ssl/nginx/dh2048.pem;

    # Specifies that our cipher suites should be preferred over client ciphers.
    # Default is 'off'.
    ssl_prefer_server_ciphers on;

    # Enables a shared SSL cache with size that can hold around 8000 sessions.
    # Default is 'none'.
    ssl_session_cache shared:SSL:2m;

    # Specifies a time during which a client may reuse the session parameters.
    # Default is '5m'.
    ssl_session_timeout 1h;

    # Disable TLS session tickets (they are insecure). Default is 'on'.
    ssl_session_tickets off;


    # Enable gzipping of responses.
    gzip on;

    # Set the Vary HTTP header as defined in the RFC 2616. Default is 'off'.
    gzip_vary on;


    # Specifies the main log format.
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    # Sets the path, format, and configuration for a buffered log write.
    access_log /tmp/log/nginx-access.log main;


    # Virtual host config (unencrypted)
    server {
        listen ${LISTEN_ADDR}:${PORT} default_server;
        root /app/front;
        index index.php;
        add_header X-Forwarded-Prefix "/app" always;

        server_name netalertx-server;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        client_body_buffer_size 512k;
        large_client_header_buffers 64 128k;

        location ~* \.php$ {
            # Set Cache-Control header to prevent caching on the first load
            add_header Cache-Control "no-store";
            fastcgi_pass unix:/tmp/run/php.sock;
            include /services/config/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param SCRIPT_NAME $fastcgi_script_name;
            fastcgi_connect_timeout 75;
            fastcgi_send_timeout 600;
            fastcgi_read_timeout 600;
        }
    }
}
```
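The rendered configuration can be sanity-checked before it is baked into the container. A sketch, assuming the `${LISTEN_ADDR}`/`${PORT}` placeholders are filled with `envsubst` (as the `.template` suffix suggests) and that the `include`d files are reachable from the host:

```
export LISTEN_ADDR=0.0.0.0 PORT=20211
envsubst '${LISTEN_ADDR} ${PORT}' < netalertx.conf.template > netalertx.conf
nginx -t -c "$(pwd)/netalertx.conf"
```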

### Caddyfile
```
# Example and Guide
# https://caddyserver.com/docs/caddyfile/options

# General Options
{
    # (Optional) Debug Mode
    # debug

    # (Optional) Enable / Disable Admin API
    admin off

    # TLS Options
    # (Optional) Disable certificate management (only if SSL/TLS certificates are managed by certbot or other external tools)
    auto_https disable_certs
}

# (Optional) Enable Admin API
# localhost {
#     reverse_proxy /api/* localhost:9001
# }

# NetAlertX Web GUI (HTTPS Port 443)
# (Optional) Only if SSL/TLS certificates are managed by certbot or other external tools and custom logging is required
{$APPLICATION_HOSTNAME}:443 {
    tls /certificates/{$APPLICATION_CERTIFICATE_DOMAIN}/{$APPLICATION_CERTIFICATE_CERT_FILE:fullchain.pem} /certificates/{$APPLICATION_CERTIFICATE_DOMAIN}/{$APPLICATION_CERTIFICATE_KEY_FILE:privkey.pem}

    log {
        output file /var/log/{$APPLICATION_HOSTNAME}/access_web.json {
            roll_size 100MiB
            roll_keep 5000
            roll_keep_for 720h
            roll_uncompressed
        }

        format json
    }

    route {
        # Always forward outpost path to actual outpost
        reverse_proxy /outpost.goauthentik.io/* https://{$OUTPOST_HOSTNAME}:{$OUTPOST_EXTERNAL_PORT} {
            header_up Host {http.reverse_proxy.upstream.hostport}
        }

        # Forward authentication to outpost
        forward_auth https://{$OUTPOST_HOSTNAME}:{$OUTPOST_EXTERNAL_PORT} {
            uri /outpost.goauthentik.io/auth/caddy

            # Capitalization of the headers is important, otherwise they will be empty
            copy_headers X-Authentik-Username X-Authentik-Groups X-Authentik-Email X-Authentik-Name X-Authentik-Uid X-Authentik-Jwt X-Authentik-Meta-Jwks X-Authentik-Meta-Outpost X-Authentik-Meta-Provider X-Authentik-Meta-App X-Authentik-Meta-Version

            # (Optional)
            # If not set, all private ranges are trusted; for security reasons, this should be set to the outpost's IP
            trusted_proxies private_ranges
        }
    }

    # IPv4 reverse proxy to NetAlertX Web GUI (internal unencrypted host)
    reverse_proxy http://0.0.0.0:20211

    # IPv6 reverse proxy to NetAlertX Web GUI (internal unencrypted host)
    # reverse_proxy http://[::1]:20211
}

# NetAlertX GraphQL Endpoint (HTTPS Port 20212)
# (Optional) Only if SSL/TLS certificates are managed by certbot or other external tools and custom logging is required
{$APPLICATION_HOSTNAME}:20212 {
    tls /certificates/{$APPLICATION_CERTIFICATE_DOMAIN}/{$APPLICATION_CERTIFICATE_CERT_FILE:fullchain.pem} /certificates/{$APPLICATION_CERTIFICATE_DOMAIN}/{$APPLICATION_CERTIFICATE_KEY_FILE:privkey.pem}

    log {
        output file /var/log/{$APPLICATION_HOSTNAME}/access_graphql.json {
            roll_size 100MiB
            roll_keep 5000
            roll_keep_for 720h
            roll_uncompressed
        }

        format json
    }

    # IPv4 reverse proxy to NetAlertX GraphQL endpoint (internal unencrypted host)
    reverse_proxy http://0.0.0.0:20219

    # IPv6 reverse proxy to NetAlertX GraphQL endpoint (internal unencrypted host)
    # reverse_proxy http://[::1]:6000
}

# Authentik Outpost
# (Optional) Only if SSL/TLS certificates are managed by certbot or other external tools and custom logging is required
{$OUTPOST_HOSTNAME}:{$OUTPOST_EXTERNAL_PORT} {
    tls /certificates/{$APPLICATION_CERTIFICATE_DOMAIN}/{$APPLICATION_CERTIFICATE_CERT_FILE:fullchain.pem} /certificates/{$APPLICATION_CERTIFICATE_DOMAIN}/{$APPLICATION_CERTIFICATE_KEY_FILE:privkey.pem}

    log {
        output file /var/log/outpost/{$OUTPOST_HOSTNAME}/access.json {
            roll_size 100MiB
            roll_keep 5000
            roll_keep_for 720h
            roll_uncompressed
        }

        format json
    }

    # IPv4 reverse proxy to internal unencrypted host
    # reverse_proxy http://0.0.0.0:6000

    # IPv6 reverse proxy to internal unencrypted host
    reverse_proxy http://[::1]:6000
}
```
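Caddy can validate this file before the pod is (re)started. A sketch, assuming the environment variables referenced via `{$...}` are exported first (otherwise the placeholders resolve to their defaults or empty, and validation may fail):

```
set -a; . ./.env; set +a
caddy validate --config ./Caddyfile
```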

### Login
Now try to log in by visiting `https://netalertx.MYDOMAIN.TLD`.

You should be greeted with a login screen by Authentik.

If you are already logged in to Authentik, log out first. You can do that by visiting `https://netalertx.MYDOMAIN.TLD/outpost.goauthentik.io/sign_out`, then clicking `Log out of authentik` (the second button). Alternatively, you can simply sign out from your Authentik admin panel at `https://authentik.MYDOMAIN.TLD`.

If everything works as expected, you can now set `SETPWD_enable_password=false` to disable double authentication.

![Authentik Login](../img/REVERSE_PROXY/authentik_login.png)
@@ -1,86 +0,0 @@
# Guide: Routing NetAlertX API via Traefik v3

> [!NOTE]
> NetAlertX requires access to both the **web UI** (default `20211`) and the **GraphQL backend `GRAPHQL_PORT`** (default `20212`) ports.
> Ensure your reverse proxy allows traffic to both for proper functionality.

> [!NOTE]
> This is community-contributed. Due to environment, setup, or networking differences, results may vary. Please open a PR to improve it instead of creating an issue, as the maintainer is not actively maintaining it.

Traefik v3 requires the following setup to route traffic properly. This guide shows a working configuration using a dedicated `PathPrefix`.

---

## 1. Configure NetAlertX Backend URL

1. Open the NetAlertX UI: **Settings → Core → General**.
2. Set the `BACKEND_API_URL` to include a custom path prefix, for example:

```
https://netalertx.yourdomain.com/netalertx-api
```

This tells the frontend where to reach the backend API.

---

## 2. Create a Traefik Router for the API

Define a router specifically for the API with a higher priority and a `PathPrefix` rule:

```yaml
netalertx-api:
  rule: "Host(`netalertx.yourdomain.com`) && PathPrefix(`/netalertx-api`)"
  service: netalertx-api-service
  middlewares:
    - netalertx-stripprefix
  priority: 100
```

**Notes:**

* `Host(...)` ensures requests are only routed for your domain.
* `PathPrefix(...)` routes anything under `/netalertx-api` to the backend.
* Priority `100` ensures this router takes precedence over other routes.

---

## 3. Add a Middleware to Strip the Prefix

NetAlertX expects requests at the root (`/`). Use Traefik’s `StripPrefix` middleware:

```yaml
middlewares:
  netalertx-stripprefix:
    stripPrefix:
      prefixes:
        - "/netalertx-api"
```

This removes `/netalertx-api` before forwarding the request to the backend container.

---

## 4. Map the API Service to the Backend Container

Point the service to the internal GraphQL/backend port (20212):

```yaml
netalertx-api-service:
  loadBalancer:
    servers:
      - url: "http://<INTERNAL_IP>:20212"
```

Replace `<INTERNAL_IP>` with your NetAlertX container’s internal address.

---

✅ With this setup:

* `https://netalertx.yourdomain.com` → Web interface (port 20211)
* `https://netalertx.yourdomain.com/netalertx-api` → API/GraphQL backend (port 20212)

This cleanly separates API requests from frontend requests while keeping everything under the same domain.
BIN docs/img/ADVISORIES/down_devices.png (new file, 63 KiB)
BIN docs/img/ADVISORIES/filters.png (new file, 83 KiB)
BIN docs/img/ADVISORIES/ui_customization_settings.png (new file, 137 KiB)
BIN 8 image files deleted (78 KiB, 1.5 MiB, 61 KiB, 52 KiB, 128 KiB, 89 KiB, 27 KiB, 67 KiB)
@@ -1,202 +0,0 @@
<mxfile host="Electron" modified="2026-01-15T05:36:26.645Z" agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/24.1.0 Chrome/120.0.6099.109 Electron/28.1.0 Safari/537.36" etag="OpSjRPjeNeyudFLZJ2fD" version="24.1.0" type="device">
  <diagram name="Page-1" id="mulIpG3YQAhf4Klf7Njm">
    <mxGraphModel dx="6733" dy="1168" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="4681" pageHeight="3300" math="0" shadow="0">
      <root>
        <mxCell id="0" />
        <mxCell id="1" parent="0" />
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-1" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
          <mxGeometry x="850" y="160" width="920" height="810" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-2" value="NetAlertX Pod" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=32;" vertex="1" parent="1">
          <mxGeometry x="850" y="130" width="670" height="30" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-3" value="" style="image;html=1;image=img/lib/clip_art/computers/Laptop_128x128.png" vertex="1" parent="1">
          <mxGeometry x="-50" y="395" width="140" height="140" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-4" value="" style="image;html=1;image=img/lib/clip_art/networking/Firewall_02_128x128.png" vertex="1" parent="1">
          <mxGeometry x="488" y="344" width="80" height="80" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-5" value="" style="image;html=1;image=img/lib/clip_art/networking/Firewall_02_128x128.png" vertex="1" parent="1">
          <mxGeometry x="488" y="555" width="80" height="80" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-8" value="Web UI<br>(NGINX + PHP)" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
          <mxGeometry x="230" y="320" width="200" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-9" value="API GraphQL<br>(Python)" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
          <mxGeometry x="230" y="555" width="200" height="30" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-10" value="" style="endArrow=classic;html=1;rounded=0;dashed=1;dashPattern=8 8;" edge="1" parent="1">
          <mxGeometry width="50" height="50" relative="1" as="geometry">
            <mxPoint x="240" y="390" as="sourcePoint" />
            <mxPoint x="240" y="600" as="targetPoint" />
          </mxGeometry>
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-12" value="<div>443</div>" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#FF0000;" vertex="1" parent="1">
          <mxGeometry x="581" y="335" width="100" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-13" value="20212" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#FF0000;" vertex="1" parent="1">
          <mxGeometry x="581" y="554" width="100" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-14" value="" style="image;html=1;image=img/lib/clip_art/networking/Firewall_02_128x128.png" vertex="1" parent="1">
          <mxGeometry x="488" y="813" width="80" height="80" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-16" value="Authentik SSO for Web UI" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
          <mxGeometry x="230" y="793" width="200" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-17" value="9443" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#FF0000;" vertex="1" parent="1">
          <mxGeometry x="580" y="803" width="100" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-18" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
          <mxGeometry x="1470" y="250" width="288" height="440" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-19" value="NetAlertX" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
          <mxGeometry x="1470" y="210" width="288" height="40" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-21" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
          <mxGeometry x="1260" y="751" width="500" height="199" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-22" value="Authentik Outpost Proxy" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
          <mxGeometry x="1280" y="711" width="480" height="40" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-23" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
          <mxGeometry x="860" y="250" width="380" height="700" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-24" value="Caddy" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
          <mxGeometry x="860" y="210" width="390" height="40" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-25" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
          <mxGeometry x="1498" y="319" width="220" height="130" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-26" value="" style="rounded=0;whiteSpace=wrap;html=1;" vertex="1" parent="1">
          <mxGeometry x="1498" y="530" width="220" height="150" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-27" value="Web UI<div>(NGINX + PHP)</div>" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
          <mxGeometry x="1498" y="264" width="220" height="50" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-28" value="API GraphQL<div>(Python)</div>" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
          <mxGeometry x="1498" y="475" width="220" height="50" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-6" value="" style="endArrow=classic;html=1;rounded=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="wwqsnaxs0Bt7SYwqQu8i-53" target="wwqsnaxs0Bt7SYwqQu8i-58">
          <mxGeometry width="50" height="50" relative="1" as="geometry">
            <mxPoint x="130" y="390" as="sourcePoint" />
            <mxPoint x="1129" y="389.9999999999998" as="targetPoint" />
          </mxGeometry>
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-30" value="" style="endArrow=classic;html=1;rounded=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;exitX=1;exitY=0.5;exitDx=0;exitDy=0;" edge="1" parent="1" source="wwqsnaxs0Bt7SYwqQu8i-59" target="wwqsnaxs0Bt7SYwqQu8i-31">
          <mxGeometry width="50" height="50" relative="1" as="geometry">
            <mxPoint x="1214" y="483" as="sourcePoint" />
            <mxPoint x="1209" y="823" as="targetPoint" />
          </mxGeometry>
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-31" value="Authenticated &amp; Authorized ?" style="rhombus;whiteSpace=wrap;html=1;fontSize=18;" vertex="1" parent="1">
          <mxGeometry x="1294" y="773.5" width="170" height="160" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-35" value="20211" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#FF0000;" vertex="1" parent="1">
          <mxGeometry x="1488" y="335" width="100" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-36" value="" style="endArrow=classic;html=1;rounded=0;" edge="1" parent="1">
          <mxGeometry width="50" height="50" relative="1" as="geometry">
            <mxPoint x="1688" y="369" as="sourcePoint" />
            <mxPoint x="1688" y="649" as="targetPoint" />
          </mxGeometry>
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-37" value="20219" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#FF0000;" vertex="1" parent="1">
          <mxGeometry x="1498" y="535" width="100" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-38" value="HTTPS" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#66CC00;" vertex="1" parent="1">
          <mxGeometry x="730" y="340" width="100" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-39" value="HTTPS" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#66CC00;" vertex="1" parent="1">
          <mxGeometry x="730" y="803" width="100" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-40" value="HTTPS" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#66CC00;" vertex="1" parent="1">
          <mxGeometry x="730" y="554" width="100" height="60" as="geometry" />
        </mxCell>
        <mxCell id="wwqsnaxs0Bt7SYwqQu8i-42" value="" style="endArrow=none;html=1;rounded=0;endFill=0;" edge="1" parent="1">
          <mxGeometry width="50" height="50" relative="1" as="geometry">
            <mxPoint x="1381" y="1071" as="sourcePoint" />
            <mxPoint x="130" y="1071" as="targetPoint" />
          </mxGeometry>
        </mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-43" value="" style="endArrow=classic;html=1;rounded=0;exitX=0.5;exitY=1;exitDx=0;exitDy=0;" edge="1" parent="1">
|
|
||||||
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
|
||||||
<mxPoint x="130.5" y="1070" as="sourcePoint" />
|
|
||||||
<mxPoint x="130" y="860" as="targetPoint" />
|
|
||||||
</mxGeometry>
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-44" value="NO" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
|
|
||||||
<mxGeometry x="1364" y="1000" width="100" height="60" as="geometry" />
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-45" value="YES" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;" vertex="1" parent="1">
|
|
||||||
<mxGeometry x="1294" y="680" width="100" height="60" as="geometry" />
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-47" value="" style="endArrow=classic;html=1;rounded=0;exitX=1;exitY=0.5;exitDx=0;exitDy=0;" edge="1" parent="1">
|
|
||||||
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
|
||||||
<mxPoint x="1156.5" y="450" as="sourcePoint" />
|
|
||||||
<mxPoint x="1157" y="1070" as="targetPoint" />
|
|
||||||
</mxGeometry>
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-48" value="" style="endArrow=classic;html=1;rounded=0;exitX=1;exitY=0.5;exitDx=0;exitDy=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="wwqsnaxs0Bt7SYwqQu8i-56" target="wwqsnaxs0Bt7SYwqQu8i-26">
|
|
||||||
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
|
||||||
<mxPoint x="1299" y="600" as="sourcePoint" />
|
|
||||||
<mxPoint x="1499" y="600" as="targetPoint" />
|
|
||||||
</mxGeometry>
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-49" value="HTTP" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#FF0000;" vertex="1" parent="1">
|
|
||||||
<mxGeometry x="1379" y="340" width="100" height="60" as="geometry" />
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-50" value="HTTP" style="text;html=1;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=24;fontColor=#FF0000;" vertex="1" parent="1">
|
|
||||||
<mxGeometry x="1379" y="554" width="100" height="60" as="geometry" />
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-54" value="" style="endArrow=classic;html=1;rounded=0;" edge="1" parent="1" target="wwqsnaxs0Bt7SYwqQu8i-53">
|
|
||||||
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
|
||||||
<mxPoint x="130" y="390" as="sourcePoint" />
|
|
||||||
<mxPoint x="1129" y="390" as="targetPoint" />
|
|
||||||
</mxGeometry>
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-53" value="TLS Termination" style="whiteSpace=wrap;html=1;aspect=fixed;fontSize=18;" vertex="1" parent="1">
|
|
||||||
<mxGeometry x="905" y="340" width="100" height="100" as="geometry" />
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-56" value="TLS Termination" style="whiteSpace=wrap;html=1;aspect=fixed;fontSize=18;" vertex="1" parent="1">
|
|
||||||
<mxGeometry x="902" y="554" width="100" height="100" as="geometry" />
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-7" value="" style="endArrow=classic;html=1;rounded=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" target="wwqsnaxs0Bt7SYwqQu8i-56">
|
|
||||||
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
|
||||||
<mxPoint x="130" y="601" as="sourcePoint" />
|
|
||||||
<mxPoint x="850" y="601" as="targetPoint" />
|
|
||||||
</mxGeometry>
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-58" value="Check Authentication" style="whiteSpace=wrap;html=1;aspect=fixed;fontSize=18;" vertex="1" parent="1">
|
|
||||||
<mxGeometry x="1097" y="330" width="120" height="120" as="geometry" />
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-59" value="TLS Termination" style="whiteSpace=wrap;html=1;aspect=fixed;fontSize=18;" vertex="1" parent="1">
|
|
||||||
<mxGeometry x="899" y="803" width="100" height="100" as="geometry" />
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-15" value="" style="endArrow=classic;html=1;rounded=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" target="wwqsnaxs0Bt7SYwqQu8i-59">
|
|
||||||
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
|
||||||
<mxPoint x="30" y="853" as="sourcePoint" />
|
|
||||||
<mxPoint x="850" y="853" as="targetPoint" />
|
|
||||||
</mxGeometry>
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-60" value="" style="endArrow=classic;html=1;rounded=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1">
|
|
||||||
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
|
||||||
<mxPoint x="1379" y="390" as="sourcePoint" />
|
|
||||||
<mxPoint x="1500" y="389.58" as="targetPoint" />
|
|
||||||
</mxGeometry>
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-61" value="" style="endArrow=none;html=1;rounded=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;endFill=0;" edge="1" parent="1">
|
|
||||||
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
|
||||||
<mxPoint x="1379" y="773" as="sourcePoint" />
|
|
||||||
<mxPoint x="1379" y="390" as="targetPoint" />
|
|
||||||
</mxGeometry>
|
|
||||||
</mxCell>
|
|
||||||
<mxCell id="wwqsnaxs0Bt7SYwqQu8i-62" value="" style="endArrow=classic;html=1;rounded=0;exitX=1;exitY=0.5;exitDx=0;exitDy=0;" edge="1" parent="1">
|
|
||||||
<mxGeometry width="50" height="50" relative="1" as="geometry">
|
|
||||||
<mxPoint x="1380" y="933.5" as="sourcePoint" />
|
|
||||||
<mxPoint x="1379" y="1069" as="targetPoint" />
|
|
||||||
</mxGeometry>
|
|
||||||
</mxCell>
|
|
||||||
</root>
|
|
||||||
</mxGraphModel>
|
|
||||||
</diagram>
|
|
||||||
</mxfile>
|
|
||||||
Before Width: | Height: | Size: 176 KiB |
Before Width: | Height: | Size: 31 KiB |
@@ -479,7 +479,12 @@ function setDeviceData(direction = '', refreshCallback = '') {
       if (resp && resp.success) {
         showMessage(getString("Device_Saved_Success"));
       } else {
-        showMessage(getString("Device_Saved_Unexpected"));
+        console.log(resp);
+
+        errorMessage = resp?.error;
+
+        showMessage(`${getString("Device_Saved_Unexpected")}: ${errorMessage}`, 5000, "modal_red");
       }

       // Remove navigation prompt
@@ -116,7 +116,7 @@ function initializeEventsDatatable (eventsRows) {
       {
         targets: [0],
         'createdCell': function (td, cellData, rowData, row, col) {
-          $(td).html(translateHTMLcodes(localizeTimestamp(cellData)));
+          $(td).html(translateHTMLcodes((cellData)));
         }
       }
     ],
@@ -12,7 +12,11 @@ var timerRefreshData = ''

 var emptyArr = ['undefined', "", undefined, null, 'null'];
 var UI_LANG = "English (en_us)";
-const allLanguages = ["ar_ar","ca_ca","cs_cz","de_de","en_us","es_es","fa_fa","fr_fr","it_it","ja_jp","nb_no","pl_pl","pt_br","pt_pt","ru_ru","sv_sv","tr_tr","uk_ua","zh_cn"]; // needs to be same as in lang.php
+const allLanguages = ["ar_ar","ca_ca","cs_cz","de_de",
+                      "en_us","es_es","fa_fa","fr_fr",
+                      "it_it","ja_jp","nb_no","pl_pl",
+                      "pt_br","pt_pt","ru_ru","sv_sv",
+                      "tr_tr","uk_ua","vi_vn","zh_cn"]; // needs to be same as in lang.php
 var settingsJSON = {}
@@ -364,6 +368,9 @@ function getLangCode() {
     case 'Ukrainian (uk_uk)':
       lang_code = 'uk_ua';
       break;
+    case 'Vietnamese (vi_vn)':
+      lang_code = 'vi_vn';
+      break;
   }

   return lang_code;
@@ -447,6 +454,7 @@ function localizeTimestamp(input) {
   return formatSafe(input, tz);

 function formatSafe(str, tz) {

   // CHECK: Does the input string have timezone information?
   // - Ends with Z: "2026-02-11T11:37:02Z"
   // - Has GMT±offset: "Wed Feb 11 2026 12:34:12 GMT+1100 (...)"
@@ -27,8 +27,8 @@ function initOnlineHistoryGraph() {
     var archivedCounts = [];

     res.data.forEach(function(entry) {
-      var dateObj = new Date(entry.Scan_Date);
-      var formattedTime = dateObj.toLocaleTimeString([], {hour: '2-digit', minute: '2-digit', hour12: false});
+      var formattedTime = localizeTimestamp(entry.Scan_Date).slice(11, 17);

       timeStamps.push(formattedTime);
       onlineCounts.push(entry.Online_Devices);
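`slice(11, 17)` cuts a fixed character range out of the localized string. Assuming `localizeTimestamp` returns a `YYYY-MM-DD HH:mm:ss`-shaped string (an assumption — the actual output format depends on the implementation), the range starts right after the 10-character date and its separating space:

```javascript
// Assumed sample output of localizeTimestamp() — the real format may differ.
const localized = "2026-02-11 12:34:56";

// Characters at indices 11..16 follow the date portion.
const formattedTime = localized.slice(11, 17);
console.log(formattedTime); // "12:34:"
```

Note that under this format the six-character window includes the colon after the minutes; `slice(11, 16)` would yield a bare `HH:mm`.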
@@ -789,4 +789,4 @@
   "settings_system_label": "نظام",
   "settings_update_item_warning": "قم بتحديث القيمة أدناه. احرص على اتباع التنسيق السابق. <b>لم يتم إجراء التحقق.</b>",
   "test_event_tooltip": "احفظ التغييرات أولاً قبل اختبار الإعدادات."
 }
@@ -27,7 +27,7 @@
   "AppEvents_ObjectType": "Object Type",
   "AppEvents_Plugin": "Plugin",
   "AppEvents_Type": "Type",
-  "BACKEND_API_URL_description": "Used to generate backend API URLs. Specify if you use reverse proxy to map to your <code>GRAPHQL_PORT</code>. Enter full URL starting with <code>http://</code> including the port number (no trailing slash <code>/</code>).",
+  "BACKEND_API_URL_description": "Used to allow the frontend to communicate with the backend. By default this is set to <code>/server</code> and generally should not be changed.",
   "BACKEND_API_URL_name": "Backend API URL",
   "BackDevDetail_Actions_Ask_Run": "Do you want to execute the action?",
   "BackDevDetail_Actions_Not_Registered": "Action not registered: ",
@@ -27,7 +27,7 @@
   "AppEvents_ObjectType": "Type d'objet",
   "AppEvents_Plugin": "Plugin",
   "AppEvents_Type": "Type",
-  "BACKEND_API_URL_description": "Utilisé pour générer les URL de l'API back-end. Spécifiez si vous utilisez un reverse proxy pour mapper votre <code>GRAPHQL_PORT</code>. Renseigner l'URL complète, en commençant par <code>http://</code>, et en incluant le numéro de port (sans slash de fin <code>/</code>).",
+  "BACKEND_API_URL_description": "Utilisé pour autoriser l'interface utilisateur à communiquer avec le serveur. Par défaut, cela est défini sur <code>/server</code> et ne doit généralement pas être changé.",
   "BACKEND_API_URL_name": "URL de l'API backend",
   "BackDevDetail_Actions_Ask_Run": "Voulez-vous exécuter cette action ?",
   "BackDevDetail_Actions_Not_Registered": "Action non enregistrée : ",
@@ -27,7 +27,7 @@
   "AppEvents_ObjectType": "Tipo oggetto",
   "AppEvents_Plugin": "Plugin",
   "AppEvents_Type": "Tipo",
-  "BACKEND_API_URL_description": "Utilizzato per generare URL API backend. Specifica se utilizzi un proxy inverso per il mapping al tuo <code>GRAPHQL_PORT</code>. Inserisci l'URL completo che inizia con <code>http://</code> incluso il numero di porta (senza barra finale <code>/</code>).",
+  "BACKEND_API_URL_description": "Utilizzato per consentire al frontend di comunicare con il backend. Per impostazione predefinita è impostato su <code>/server</code> e generalmente non dovrebbe essere modificato.",
   "BACKEND_API_URL_name": "URL API backend",
   "BackDevDetail_Actions_Ask_Run": "Vuoi eseguire questa azione?",
   "BackDevDetail_Actions_Not_Registered": "Azione non registrata: ",
@@ -789,4 +789,4 @@
   "settings_system_label": "システム",
   "settings_update_item_warning": "以下の値を更新してください。以前のフォーマットに従うよう注意してください。<b>検証は行われません。</b>",
   "test_event_tooltip": "設定をテストする前に、まず変更を保存してください。"
 }
@@ -5,15 +5,19 @@
 // ###################################

 $defaultLang = "en_us";
-$allLanguages = [ "ar_ar", "ca_ca", "cs_cz", "de_de", "en_us", "es_es", "fa_fa", "fr_fr", "it_it", "ja_jp", "nb_no", "pl_pl", "pt_br", "pt_pt", "ru_ru", "sv_sv", "tr_tr", "uk_ua", "zh_cn"];
+$allLanguages = [ "ar_ar", "ca_ca", "cs_cz", "de_de",
+                  "en_us", "es_es", "fa_fa", "fr_fr",
+                  "it_it", "ja_jp", "nb_no", "pl_pl",
+                  "pt_br", "pt_pt", "ru_ru", "sv_sv",
+                  "tr_tr", "uk_ua", "vi_vn", "zh_cn"];

 global $db;

 $result = $db->querySingle("SELECT setValue FROM Settings WHERE setKey = 'UI_LANG'");

 // below has to match exactly the values in /front/php/templates/language/lang.php & /front/js/common.js
 switch($result){
   case 'Arabic (ar_ar)': $pia_lang_selected = 'ar_ar'; break;
   case 'Catalan (ca_ca)': $pia_lang_selected = 'ca_ca'; break;
   case 'Czech (cs_cz)': $pia_lang_selected = 'cs_cz'; break;
@@ -32,6 +36,7 @@ switch($result){
   case 'Swedish (sv_sv)': $pia_lang_selected = 'sv_sv'; break;
   case 'Turkish (tr_tr)': $pia_lang_selected = 'tr_tr'; break;
   case 'Ukrainian (uk_ua)': $pia_lang_selected = 'uk_ua'; break;
+  case 'Vietnamese (vi_vn)': $pia_lang_selected = 'vi_vn'; break;
   case 'Chinese (zh_cn)': $pia_lang_selected = 'zh_cn'; break;
   default: $pia_lang_selected = 'en_us'; break;
 }
@@ -38,6 +38,6 @@ if __name__ == "__main__":
     json_files = ["en_us.json", "ar_ar.json", "ca_ca.json", "cs_cz.json", "de_de.json",
                   "es_es.json", "fa_fa.json", "fr_fr.json", "it_it.json", "ja_jp.json",
                   "nb_no.json", "pl_pl.json", "pt_br.json", "pt_pt.json", "ru_ru.json",
-                  "sv_sv.json", "tr_tr.json", "uk_ua.json", "zh_cn.json"]
+                  "sv_sv.json", "tr_tr.json", "vi_vn.json", "uk_ua.json", "zh_cn.json"]
     file_paths = [os.path.join(current_path, file) for file in json_files]
     merge_translations(file_paths[0], file_paths[1:])
792
front/php/templates/language/vi_vn.json
Normal file
@@ -0,0 +1,792 @@
{
  "API_CUSTOM_SQL_description": "",
  "API_CUSTOM_SQL_name": "",
  "API_TOKEN_description": "",
  "API_TOKEN_name": "",
  "API_display_name": "",
  "API_icon": "",
  "About_Design": "",
  "About_Exit": "",
  "About_Title": "",
  "AppEvents_AppEventProcessed": "",
  "AppEvents_DateTimeCreated": "",
  "AppEvents_Extra": "",
  "AppEvents_GUID": "",
  "AppEvents_Helper1": "",
  "AppEvents_Helper2": "",
  "AppEvents_Helper3": "",
  "AppEvents_ObjectForeignKey": "",
  "AppEvents_ObjectIndex": "",
  "AppEvents_ObjectIsArchived": "",
  "AppEvents_ObjectIsNew": "",
  "AppEvents_ObjectPlugin": "",
  "AppEvents_ObjectPrimaryID": "",
  "AppEvents_ObjectSecondaryID": "",
  "AppEvents_ObjectStatus": "",
  "AppEvents_ObjectStatusColumn": "",
  "AppEvents_ObjectType": "",
  "AppEvents_Plugin": "",
  "AppEvents_Type": "",
  "BACKEND_API_URL_description": "",
  "BACKEND_API_URL_name": "",
  "BackDevDetail_Actions_Ask_Run": "",
  "BackDevDetail_Actions_Not_Registered": "",
  "BackDevDetail_Actions_Title_Run": "",
  "BackDevDetail_Copy_Ask": "",
  "BackDevDetail_Copy_Title": "",
  "BackDevDetail_Tools_WOL_error": "",
  "BackDevDetail_Tools_WOL_okay": "",
  "BackDevices_Arpscan_disabled": "",
  "BackDevices_Arpscan_enabled": "",
  "BackDevices_Backup_CopError": "",
  "BackDevices_Backup_Failed": "",
  "BackDevices_Backup_okay": "",
  "BackDevices_DBTools_DelDevError_a": "",
  "BackDevices_DBTools_DelDevError_b": "",
  "BackDevices_DBTools_DelDev_a": "",
  "BackDevices_DBTools_DelDev_b": "",
  "BackDevices_DBTools_DelEvents": "",
  "BackDevices_DBTools_DelEventsError": "",
  "BackDevices_DBTools_ImportCSV": "",
  "BackDevices_DBTools_ImportCSVError": "",
  "BackDevices_DBTools_ImportCSVMissing": "",
  "BackDevices_DBTools_Purge": "",
  "BackDevices_DBTools_UpdDev": "",
  "BackDevices_DBTools_UpdDevError": "",
  "BackDevices_DBTools_Upgrade": "",
  "BackDevices_DBTools_UpgradeError": "",
  "BackDevices_Device_UpdDevError": "",
  "BackDevices_Restore_CopError": "",
  "BackDevices_Restore_Failed": "",
  "BackDevices_Restore_okay": "",
  "BackDevices_darkmode_disabled": "",
  "BackDevices_darkmode_enabled": "",
  "CLEAR_NEW_FLAG_description": "",
  "CLEAR_NEW_FLAG_name": "",
  "CustProps_cant_remove": "",
  "DAYS_TO_KEEP_EVENTS_description": "",
  "DAYS_TO_KEEP_EVENTS_name": "",
  "DISCOVER_PLUGINS_description": "",
  "DISCOVER_PLUGINS_name": "",
  "DevDetail_Children_Title": "",
  "DevDetail_Copy_Device_Title": "",
  "DevDetail_Copy_Device_Tooltip": "",
  "DevDetail_CustomProperties_Title": "",
  "DevDetail_CustomProps_reset_info": "",
  "DevDetail_DisplayFields_Title": "",
  "DevDetail_EveandAl_AlertAllEvents": "",
  "DevDetail_EveandAl_AlertDown": "",
  "DevDetail_EveandAl_Archived": "",
  "DevDetail_EveandAl_NewDevice": "",
  "DevDetail_EveandAl_NewDevice_Tooltip": "",
  "DevDetail_EveandAl_RandomMAC": "",
  "DevDetail_EveandAl_ScanCycle": "",
  "DevDetail_EveandAl_ScanCycle_a": "",
  "DevDetail_EveandAl_ScanCycle_z": "",
  "DevDetail_EveandAl_Skip": "",
  "DevDetail_EveandAl_Title": "",
  "DevDetail_Events_CheckBox": "",
  "DevDetail_GoToNetworkNode": "",
  "DevDetail_Icon": "",
  "DevDetail_Icon_Descr": "",
  "DevDetail_Loading": "",
  "DevDetail_MainInfo_Comments": "",
  "DevDetail_MainInfo_Favorite": "",
  "DevDetail_MainInfo_Group": "",
  "DevDetail_MainInfo_Location": "",
  "DevDetail_MainInfo_Name": "",
  "DevDetail_MainInfo_Network": "",
  "DevDetail_MainInfo_Network_Port": "",
  "DevDetail_MainInfo_Network_Site": "",
  "DevDetail_MainInfo_Network_Title": "",
  "DevDetail_MainInfo_Owner": "",
  "DevDetail_MainInfo_SSID": "",
  "DevDetail_MainInfo_Title": "",
  "DevDetail_MainInfo_Type": "",
  "DevDetail_MainInfo_Vendor": "",
  "DevDetail_MainInfo_mac": "",
  "DevDetail_NavToChildNode": "",
  "DevDetail_Network_Node_hover": "",
  "DevDetail_Network_Port_hover": "",
  "DevDetail_Nmap_Scans": "",
  "DevDetail_Nmap_Scans_desc": "",
  "DevDetail_Nmap_buttonDefault": "",
  "DevDetail_Nmap_buttonDefault_text": "",
  "DevDetail_Nmap_buttonDetail": "",
  "DevDetail_Nmap_buttonDetail_text": "",
  "DevDetail_Nmap_buttonFast": "",
  "DevDetail_Nmap_buttonFast_text": "",
  "DevDetail_Nmap_buttonSkipDiscovery": "",
  "DevDetail_Nmap_buttonSkipDiscovery_text": "",
  "DevDetail_Nmap_resultsLink": "",
  "DevDetail_Owner_hover": "",
  "DevDetail_Periodselect_All": "",
  "DevDetail_Periodselect_LastMonth": "",
  "DevDetail_Periodselect_LastWeek": "",
  "DevDetail_Periodselect_LastYear": "",
  "DevDetail_Periodselect_today": "",
  "DevDetail_Run_Actions_Title": "",
  "DevDetail_Run_Actions_Tooltip": "",
  "DevDetail_SessionInfo_FirstSession": "",
  "DevDetail_SessionInfo_LastIP": "",
  "DevDetail_SessionInfo_LastSession": "",
  "DevDetail_SessionInfo_StaticIP": "",
  "DevDetail_SessionInfo_Status": "",
  "DevDetail_SessionInfo_Title": "",
  "DevDetail_SessionTable_Additionalinfo": "",
  "DevDetail_SessionTable_Connection": "",
  "DevDetail_SessionTable_Disconnection": "",
  "DevDetail_SessionTable_Duration": "",
  "DevDetail_SessionTable_IP": "",
  "DevDetail_SessionTable_Order": "",
  "DevDetail_Shortcut_CurrentStatus": "",
  "DevDetail_Shortcut_DownAlerts": "",
  "DevDetail_Shortcut_Presence": "",
  "DevDetail_Shortcut_Sessions": "",
  "DevDetail_Tab_Details": "",
  "DevDetail_Tab_Events": "",
  "DevDetail_Tab_EventsTableDate": "",
  "DevDetail_Tab_EventsTableEvent": "",
  "DevDetail_Tab_EventsTableIP": "",
  "DevDetail_Tab_EventsTableInfo": "",
  "DevDetail_Tab_Nmap": "",
  "DevDetail_Tab_NmapEmpty": "",
  "DevDetail_Tab_NmapTableExtra": "",
  "DevDetail_Tab_NmapTableHeader": "",
  "DevDetail_Tab_NmapTableIndex": "",
  "DevDetail_Tab_NmapTablePort": "",
  "DevDetail_Tab_NmapTableService": "",
  "DevDetail_Tab_NmapTableState": "",
  "DevDetail_Tab_NmapTableText": "",
  "DevDetail_Tab_NmapTableTime": "",
  "DevDetail_Tab_Plugins": "",
  "DevDetail_Tab_Presence": "",
  "DevDetail_Tab_Sessions": "",
  "DevDetail_Tab_Tools": "",
  "DevDetail_Tab_Tools_Internet_Info_Description": "",
  "DevDetail_Tab_Tools_Internet_Info_Error": "",
  "DevDetail_Tab_Tools_Internet_Info_Start": "",
  "DevDetail_Tab_Tools_Internet_Info_Title": "",
  "DevDetail_Tab_Tools_Nslookup_Description": "",
  "DevDetail_Tab_Tools_Nslookup_Error": "",
  "DevDetail_Tab_Tools_Nslookup_Start": "",
  "DevDetail_Tab_Tools_Nslookup_Title": "",
  "DevDetail_Tab_Tools_Speedtest_Description": "",
  "DevDetail_Tab_Tools_Speedtest_Start": "",
  "DevDetail_Tab_Tools_Speedtest_Title": "",
  "DevDetail_Tab_Tools_Traceroute_Description": "",
  "DevDetail_Tab_Tools_Traceroute_Error": "",
  "DevDetail_Tab_Tools_Traceroute_Start": "",
  "DevDetail_Tab_Tools_Traceroute_Title": "",
  "DevDetail_Tools_WOL": "",
  "DevDetail_Tools_WOL_noti": "",
  "DevDetail_Tools_WOL_noti_text": "",
  "DevDetail_Type_hover": "",
  "DevDetail_Vendor_hover": "",
  "DevDetail_WOL_Title": "",
  "DevDetail_button_AddIcon": "",
  "DevDetail_button_AddIcon_Help": "",
  "DevDetail_button_AddIcon_Tooltip": "",
  "DevDetail_button_Delete": "",
  "DevDetail_button_DeleteEvents": "",
  "DevDetail_button_DeleteEvents_Warning": "",
  "DevDetail_button_Delete_ask": "",
  "DevDetail_button_OverwriteIcons": "",
  "DevDetail_button_OverwriteIcons_Tooltip": "",
  "DevDetail_button_OverwriteIcons_Warning": "",
  "DevDetail_button_Reset": "",
  "DevDetail_button_Save": "",
  "DeviceEdit_ValidMacIp": "",
  "Device_MultiEdit": "",
  "Device_MultiEdit_Backup": "",
  "Device_MultiEdit_Fields": "",
  "Device_MultiEdit_MassActions": "",
  "Device_MultiEdit_No_Devices": "",
  "Device_MultiEdit_Tooltip": "",
  "Device_Save_Failed": "",
  "Device_Save_Unauthorized": "",
  "Device_Saved_Success": "",
  "Device_Saved_Unexpected": "",
  "Device_Searchbox": "",
  "Device_Shortcut_AllDevices": "",
  "Device_Shortcut_AllNodes": "",
  "Device_Shortcut_Archived": "",
  "Device_Shortcut_Connected": "",
  "Device_Shortcut_Devices": "",
  "Device_Shortcut_DownAlerts": "",
  "Device_Shortcut_DownOnly": "",
  "Device_Shortcut_Favorites": "",
  "Device_Shortcut_NewDevices": "",
  "Device_Shortcut_OnlineChart": "",
  "Device_TableHead_AlertDown": "",
  "Device_TableHead_Connected_Devices": "",
  "Device_TableHead_CustomProps": "",
  "Device_TableHead_FQDN": "",
  "Device_TableHead_Favorite": "",
  "Device_TableHead_FirstSession": "",
  "Device_TableHead_GUID": "",
  "Device_TableHead_Group": "",
  "Device_TableHead_IPv4": "",
  "Device_TableHead_IPv6": "",
  "Device_TableHead_Icon": "",
  "Device_TableHead_LastIP": "",
  "Device_TableHead_LastIPOrder": "",
  "Device_TableHead_LastSession": "",
  "Device_TableHead_Location": "",
  "Device_TableHead_MAC": "",
  "Device_TableHead_MAC_full": "",
  "Device_TableHead_Name": "",
  "Device_TableHead_NetworkSite": "",
  "Device_TableHead_Owner": "",
  "Device_TableHead_ParentRelType": "",
  "Device_TableHead_Parent_MAC": "",
  "Device_TableHead_Port": "",
  "Device_TableHead_PresentLastScan": "",
  "Device_TableHead_ReqNicsOnline": "",
  "Device_TableHead_RowID": "",
  "Device_TableHead_Rowid": "",
  "Device_TableHead_SSID": "",
  "Device_TableHead_SourcePlugin": "",
  "Device_TableHead_Status": "",
  "Device_TableHead_SyncHubNodeName": "",
  "Device_TableHead_Type": "",
  "Device_TableHead_Vendor": "",
  "Device_TableHead_Vlan": "",
  "Device_Table_Not_Network_Device": "",
  "Device_Table_info": "",
  "Device_Table_nav_next": "",
  "Device_Table_nav_prev": "",
  "Device_Tablelenght": "",
  "Device_Tablelenght_all": "",
  "Device_Title": "",
  "Devices_Filters": "",
  "ENABLE_PLUGINS_description": "",
  "ENABLE_PLUGINS_name": "",
  "ENCRYPTION_KEY_description": "",
  "ENCRYPTION_KEY_name": "",
  "Email_display_name": "",
  "Email_icon": "",
  "Events_Loading": "",
  "Events_Periodselect_All": "",
  "Events_Periodselect_LastMonth": "",
  "Events_Periodselect_LastWeek": "",
  "Events_Periodselect_LastYear": "",
  "Events_Periodselect_today": "",
  "Events_Searchbox": "",
  "Events_Shortcut_AllEvents": "",
  "Events_Shortcut_DownAlerts": "",
  "Events_Shortcut_Events": "",
  "Events_Shortcut_MissSessions": "",
  "Events_Shortcut_NewDevices": "",
  "Events_Shortcut_Sessions": "",
  "Events_Shortcut_VoidSessions": "",
  "Events_TableHead_AdditionalInfo": "",
  "Events_TableHead_Connection": "",
  "Events_TableHead_Date": "",
  "Events_TableHead_Device": "",
  "Events_TableHead_Disconnection": "",
  "Events_TableHead_Duration": "",
  "Events_TableHead_DurationOrder": "",
  "Events_TableHead_EventType": "",
  "Events_TableHead_IP": "",
  "Events_TableHead_IPOrder": "",
  "Events_TableHead_Order": "",
  "Events_TableHead_Owner": "",
  "Events_TableHead_PendingAlert": "",
  "Events_Table_info": "",
  "Events_Table_nav_next": "",
  "Events_Table_nav_prev": "",
  "Events_Tablelenght": "",
  "Events_Tablelenght_all": "",
  "Events_Title": "",
  "FakeMAC_hover": "",
  "FieldLock_Error": "",
  "FieldLock_Lock_Tooltip": "",
  "FieldLock_Locked": "",
  "FieldLock_SaveBeforeLocking": "",
  "FieldLock_Source_Label": "",
  "FieldLock_Unlock_Tooltip": "",
  "FieldLock_Unlocked": "",
  "GRAPHQL_PORT_description": "",
  "GRAPHQL_PORT_name": "",
  "Gen_Action": "",
  "Gen_Add": "",
  "Gen_AddDevice": "",
  "Gen_Add_All": "",
  "Gen_All_Devices": "",
  "Gen_AreYouSure": "",
  "Gen_Backup": "",
  "Gen_Cancel": "",
  "Gen_Change": "",
  "Gen_Copy": "",
  "Gen_CopyToClipboard": "",
  "Gen_DataUpdatedUITakesTime": "",
  "Gen_Delete": "",
  "Gen_DeleteAll": "",
  "Gen_Description": "",
  "Gen_Error": "",
  "Gen_Filter": "",
  "Gen_Generate": "",
  "Gen_InvalidMac": "",
  "Gen_Invalid_Value": "",
  "Gen_LockedDB": "",
  "Gen_NetworkMask": "",
  "Gen_Offline": "",
  "Gen_Okay": "",
  "Gen_Online": "",
  "Gen_Purge": "",
|
||||||
|
"Gen_ReadDocs": "",
|
||||||
|
"Gen_Remove_All": "",
|
||||||
|
"Gen_Remove_Last": "",
|
||||||
|
"Gen_Reset": "",
|
||||||
|
"Gen_Restore": "",
|
||||||
|
"Gen_Run": "",
|
||||||
|
"Gen_Save": "",
|
||||||
|
"Gen_Saved": "",
|
||||||
|
"Gen_Search": "",
|
||||||
|
"Gen_Select": "",
|
||||||
|
"Gen_SelectIcon": "",
|
||||||
|
"Gen_SelectToPreview": "",
|
||||||
|
"Gen_Selected_Devices": "",
|
||||||
|
"Gen_Subnet": "",
|
||||||
|
"Gen_Switch": "",
|
||||||
|
"Gen_Upd": "",
|
||||||
|
"Gen_Upd_Fail": "",
|
||||||
|
"Gen_Update": "",
|
||||||
|
"Gen_Update_Value": "",
|
||||||
|
"Gen_ValidIcon": "",
|
||||||
|
"Gen_Warning": "",
|
||||||
|
"Gen_Work_In_Progress": "",
|
||||||
|
"Gen_create_new_device": "",
|
||||||
|
"Gen_create_new_device_info": "",
|
||||||
|
"General_display_name": "",
|
||||||
|
"General_icon": "",
|
||||||
|
"HRS_TO_KEEP_NEWDEV_description": "",
|
||||||
|
"HRS_TO_KEEP_NEWDEV_name": "",
|
||||||
|
"HRS_TO_KEEP_OFFDEV_description": "",
|
||||||
|
"HRS_TO_KEEP_OFFDEV_name": "",
|
||||||
|
"LOADED_PLUGINS_description": "",
|
||||||
|
"LOADED_PLUGINS_name": "",
|
||||||
|
"LOG_LEVEL_description": "",
|
||||||
|
"LOG_LEVEL_name": "",
|
||||||
|
"Loading": "",
|
||||||
|
"Login_Box": "",
|
||||||
|
"Login_Default_PWD": "",
|
||||||
|
"Login_Info": "",
|
||||||
|
"Login_Psw-box": "",
|
||||||
|
"Login_Psw_alert": "",
|
||||||
|
"Login_Psw_folder": "",
|
||||||
|
"Login_Psw_new": "",
|
||||||
|
"Login_Psw_run": "",
|
||||||
|
"Login_Remember": "",
|
||||||
|
"Login_Remember_small": "",
|
||||||
|
"Login_Submit": "",
|
||||||
|
"Login_Toggle_Alert_headline": "",
|
||||||
|
"Login_Toggle_Info": "",
|
||||||
|
"Login_Toggle_Info_headline": "",
|
||||||
|
"Maint_PurgeLog": "",
|
||||||
|
"Maint_RestartServer": "",
|
||||||
|
"Maint_Restart_Server_noti_text": "",
|
||||||
|
"Maintenance_InitCheck": "",
|
||||||
|
"Maintenance_InitCheck_Checking": "",
|
||||||
|
"Maintenance_InitCheck_QuickSetupGuide": "",
|
||||||
|
"Maintenance_InitCheck_Success": "",
|
||||||
|
"Maintenance_ReCheck": "",
|
||||||
|
"Maintenance_Running_Version": "",
|
||||||
|
"Maintenance_Status": "",
|
||||||
|
"Maintenance_Title": "",
|
||||||
|
"Maintenance_Tool_DownloadConfig": "",
|
||||||
|
"Maintenance_Tool_DownloadConfig_text": "",
|
||||||
|
"Maintenance_Tool_DownloadWorkflows": "",
|
||||||
|
"Maintenance_Tool_DownloadWorkflows_text": "",
|
||||||
|
"Maintenance_Tool_ExportCSV": "",
|
||||||
|
"Maintenance_Tool_ExportCSV_noti": "",
|
||||||
|
"Maintenance_Tool_ExportCSV_noti_text": "",
|
||||||
|
"Maintenance_Tool_ExportCSV_text": "",
|
||||||
|
"Maintenance_Tool_ImportCSV": "",
|
||||||
|
"Maintenance_Tool_ImportCSV_noti": "",
|
||||||
|
"Maintenance_Tool_ImportCSV_noti_text": "",
|
||||||
|
"Maintenance_Tool_ImportCSV_text": "",
|
||||||
|
"Maintenance_Tool_ImportConfig_noti": "",
|
||||||
|
"Maintenance_Tool_ImportPastedCSV": "",
|
||||||
|
"Maintenance_Tool_ImportPastedCSV_noti_text": "",
|
||||||
|
"Maintenance_Tool_ImportPastedCSV_text": "",
|
||||||
|
"Maintenance_Tool_ImportPastedConfig": "",
|
||||||
|
"Maintenance_Tool_ImportPastedConfig_noti_text": "",
|
||||||
|
"Maintenance_Tool_ImportPastedConfig_text": "",
|
||||||
|
"Maintenance_Tool_UnlockFields": "",
|
||||||
|
"Maintenance_Tool_UnlockFields_noti": "",
|
||||||
|
"Maintenance_Tool_UnlockFields_noti_text": "",
|
||||||
|
"Maintenance_Tool_UnlockFields_text": "",
|
||||||
|
"Maintenance_Tool_arpscansw": "",
|
||||||
|
"Maintenance_Tool_arpscansw_noti": "",
|
||||||
|
"Maintenance_Tool_arpscansw_noti_text": "",
|
||||||
|
"Maintenance_Tool_arpscansw_text": "",
|
||||||
|
"Maintenance_Tool_backup": "",
|
||||||
|
"Maintenance_Tool_backup_noti": "",
|
||||||
|
"Maintenance_Tool_backup_noti_text": "",
|
||||||
|
"Maintenance_Tool_backup_text": "",
|
||||||
|
"Maintenance_Tool_check_visible": "",
|
||||||
|
"Maintenance_Tool_clearSourceFields_selected": "",
|
||||||
|
"Maintenance_Tool_clearSourceFields_selected_noti": "",
|
||||||
|
"Maintenance_Tool_clearSourceFields_selected_text": "",
|
||||||
|
"Maintenance_Tool_darkmode": "",
|
||||||
|
"Maintenance_Tool_darkmode_noti": "",
|
||||||
|
"Maintenance_Tool_darkmode_noti_text": "",
|
||||||
|
"Maintenance_Tool_darkmode_text": "",
|
||||||
|
"Maintenance_Tool_del_ActHistory": "",
|
||||||
|
"Maintenance_Tool_del_ActHistory_noti": "",
|
||||||
|
"Maintenance_Tool_del_ActHistory_noti_text": "",
|
||||||
|
"Maintenance_Tool_del_ActHistory_text": "",
|
||||||
|
"Maintenance_Tool_del_alldev": "",
|
||||||
|
"Maintenance_Tool_del_alldev_noti": "",
|
||||||
|
"Maintenance_Tool_del_alldev_noti_text": "",
|
||||||
|
"Maintenance_Tool_del_alldev_text": "",
|
||||||
|
"Maintenance_Tool_del_allevents": "",
|
||||||
|
"Maintenance_Tool_del_allevents30": "",
|
||||||
|
"Maintenance_Tool_del_allevents30_noti": "",
|
||||||
|
"Maintenance_Tool_del_allevents30_noti_text": "",
|
||||||
|
"Maintenance_Tool_del_allevents30_text": "",
|
||||||
|
"Maintenance_Tool_del_allevents_noti": "",
|
||||||
|
"Maintenance_Tool_del_allevents_noti_text": "",
|
||||||
|
"Maintenance_Tool_del_allevents_text": "",
|
||||||
|
"Maintenance_Tool_del_empty_macs": "",
|
||||||
|
"Maintenance_Tool_del_empty_macs_noti": "",
|
||||||
|
"Maintenance_Tool_del_empty_macs_noti_text": "",
|
||||||
|
"Maintenance_Tool_del_empty_macs_text": "",
|
||||||
|
"Maintenance_Tool_del_selecteddev": "",
|
||||||
|
"Maintenance_Tool_del_selecteddev_text": "",
|
||||||
|
"Maintenance_Tool_del_unknowndev": "",
|
||||||
|
"Maintenance_Tool_del_unknowndev_noti": "",
|
||||||
|
"Maintenance_Tool_del_unknowndev_noti_text": "",
|
||||||
|
"Maintenance_Tool_del_unknowndev_text": "",
|
||||||
|
"Maintenance_Tool_del_unlockFields_selecteddev_text": "",
|
||||||
|
"Maintenance_Tool_displayed_columns_text": "",
|
||||||
|
"Maintenance_Tool_drag_me": "",
|
||||||
|
"Maintenance_Tool_order_columns_text": "",
|
||||||
|
"Maintenance_Tool_purgebackup": "",
|
||||||
|
"Maintenance_Tool_purgebackup_noti": "",
|
||||||
|
"Maintenance_Tool_purgebackup_noti_text": "",
|
||||||
|
"Maintenance_Tool_purgebackup_text": "",
|
||||||
|
"Maintenance_Tool_restore": "",
|
||||||
|
"Maintenance_Tool_restore_noti": "",
|
||||||
|
"Maintenance_Tool_restore_noti_text": "",
|
||||||
|
"Maintenance_Tool_restore_text": "",
|
||||||
|
"Maintenance_Tool_unlockFields_selecteddev": "",
|
||||||
|
"Maintenance_Tool_unlockFields_selecteddev_noti": "",
|
||||||
|
"Maintenance_Tool_upgrade_database_noti": "",
|
||||||
|
"Maintenance_Tool_upgrade_database_noti_text": "",
|
||||||
|
"Maintenance_Tool_upgrade_database_text": "",
|
||||||
|
"Maintenance_Tools_Tab_BackupRestore": "",
|
||||||
|
"Maintenance_Tools_Tab_Logging": "",
|
||||||
|
"Maintenance_Tools_Tab_Settings": "",
|
||||||
|
"Maintenance_Tools_Tab_Tools": "",
|
||||||
|
"Maintenance_Tools_Tab_UISettings": "",
|
||||||
|
"Maintenance_arp_status": "",
|
||||||
|
"Maintenance_arp_status_off": "",
|
||||||
|
"Maintenance_arp_status_on": "",
|
||||||
|
"Maintenance_built_on": "",
|
||||||
|
"Maintenance_current_version": "",
|
||||||
|
"Maintenance_database_backup": "",
|
||||||
|
"Maintenance_database_backup_found": "",
|
||||||
|
"Maintenance_database_backup_total": "",
|
||||||
|
"Maintenance_database_lastmod": "",
|
||||||
|
"Maintenance_database_path": "",
|
||||||
|
"Maintenance_database_rows": "",
|
||||||
|
"Maintenance_database_size": "",
|
||||||
|
"Maintenance_lang_selector_apply": "",
|
||||||
|
"Maintenance_lang_selector_empty": "",
|
||||||
|
"Maintenance_lang_selector_lable": "",
|
||||||
|
"Maintenance_lang_selector_text": "",
|
||||||
|
"Maintenance_new_version": "",
|
||||||
|
"Maintenance_themeselector_apply": "",
|
||||||
|
"Maintenance_themeselector_empty": "",
|
||||||
|
"Maintenance_themeselector_lable": "",
|
||||||
|
"Maintenance_themeselector_text": "",
|
||||||
|
"Maintenance_version": "",
|
||||||
|
"NETWORK_DEVICE_TYPES_description": "",
|
||||||
|
"NETWORK_DEVICE_TYPES_name": "",
|
||||||
|
"Navigation_About": "",
|
||||||
|
"Navigation_AppEvents": "",
|
||||||
|
"Navigation_Devices": "",
|
||||||
|
"Navigation_Donations": "",
|
||||||
|
"Navigation_Events": "",
|
||||||
|
"Navigation_Integrations": "",
|
||||||
|
"Navigation_Maintenance": "",
|
||||||
|
"Navigation_Monitoring": "",
|
||||||
|
"Navigation_Network": "",
|
||||||
|
"Navigation_Notifications": "",
|
||||||
|
"Navigation_Plugins": "",
|
||||||
|
"Navigation_Presence": "",
|
||||||
|
"Navigation_Report": "",
|
||||||
|
"Navigation_Settings": "",
|
||||||
|
"Navigation_SystemInfo": "",
|
||||||
|
"Navigation_Workflows": "",
|
||||||
|
"Network_Assign": "",
|
||||||
|
"Network_Cant_Assign": "",
|
||||||
|
"Network_Cant_Assign_No_Node_Selected": "",
|
||||||
|
"Network_Configuration_Error": "",
|
||||||
|
"Network_Connected": "",
|
||||||
|
"Network_Devices": "",
|
||||||
|
"Network_ManageAdd": "",
|
||||||
|
"Network_ManageAdd_Name": "",
|
||||||
|
"Network_ManageAdd_Name_text": "",
|
||||||
|
"Network_ManageAdd_Port": "",
|
||||||
|
"Network_ManageAdd_Port_text": "",
|
||||||
|
"Network_ManageAdd_Submit": "",
|
||||||
|
"Network_ManageAdd_Type": "",
|
||||||
|
"Network_ManageAdd_Type_text": "",
|
||||||
|
"Network_ManageAssign": "",
|
||||||
|
"Network_ManageDel": "",
|
||||||
|
"Network_ManageDel_Name": "",
|
||||||
|
"Network_ManageDel_Name_text": "",
|
||||||
|
"Network_ManageDel_Submit": "",
|
||||||
|
"Network_ManageDevices": "",
|
||||||
|
"Network_ManageEdit": "",
|
||||||
|
"Network_ManageEdit_ID": "",
|
||||||
|
"Network_ManageEdit_ID_text": "",
|
||||||
|
"Network_ManageEdit_Name": "",
|
||||||
|
"Network_ManageEdit_Name_text": "",
|
||||||
|
"Network_ManageEdit_Port": "",
|
||||||
|
"Network_ManageEdit_Port_text": "",
|
||||||
|
"Network_ManageEdit_Submit": "",
|
||||||
|
"Network_ManageEdit_Type": "",
|
||||||
|
"Network_ManageEdit_Type_text": "",
|
||||||
|
"Network_ManageLeaf": "",
|
||||||
|
"Network_ManageUnassign": "",
|
||||||
|
"Network_NoAssignedDevices": "",
|
||||||
|
"Network_NoDevices": "",
|
||||||
|
"Network_Node": "",
|
||||||
|
"Network_Node_Name": "",
|
||||||
|
"Network_Parent": "",
|
||||||
|
"Network_Root": "",
|
||||||
|
"Network_Root_Not_Configured": "",
|
||||||
|
"Network_Root_Unconfigurable": "",
|
||||||
|
"Network_ShowArchived": "",
|
||||||
|
"Network_ShowOffline": "",
|
||||||
|
"Network_Table_Hostname": "",
|
||||||
|
"Network_Table_IP": "",
|
||||||
|
"Network_Table_State": "",
|
||||||
|
"Network_Title": "",
|
||||||
|
"Network_UnassignedDevices": "",
|
||||||
|
"Notifications_All": "",
|
||||||
|
"Notifications_Mark_All_Read": "",
|
||||||
|
"PIALERT_WEB_PASSWORD_description": "",
|
||||||
|
"PIALERT_WEB_PASSWORD_name": "",
|
||||||
|
"PIALERT_WEB_PROTECTION_description": "",
|
||||||
|
"PIALERT_WEB_PROTECTION_name": "",
|
||||||
|
"PLUGINS_KEEP_HIST_description": "",
|
||||||
|
"PLUGINS_KEEP_HIST_name": "",
|
||||||
|
"Plugins_DeleteAll": "",
|
||||||
|
"Plugins_Filters_Mac": "",
|
||||||
|
"Plugins_History": "",
|
||||||
|
"Plugins_Obj_DeleteListed": "",
|
||||||
|
"Plugins_Objects": "",
|
||||||
|
"Plugins_Out_of": "",
|
||||||
|
"Plugins_Unprocessed_Events": "",
|
||||||
|
"Plugins_no_control": "",
|
||||||
|
"Presence_CalHead_day": "",
|
||||||
|
"Presence_CalHead_lang": "",
|
||||||
|
"Presence_CalHead_month": "",
|
||||||
|
"Presence_CalHead_quarter": "",
|
||||||
|
"Presence_CalHead_week": "",
|
||||||
|
"Presence_CalHead_year": "",
|
||||||
|
"Presence_CallHead_Devices": "",
|
||||||
|
"Presence_Key_OnlineNow": "",
|
||||||
|
"Presence_Key_OnlineNow_desc": "",
|
||||||
|
"Presence_Key_OnlinePast": "",
|
||||||
|
"Presence_Key_OnlinePastMiss": "",
|
||||||
|
"Presence_Key_OnlinePastMiss_desc": "",
|
||||||
|
"Presence_Key_OnlinePast_desc": "",
|
||||||
|
"Presence_Loading": "",
|
||||||
|
"Presence_Shortcut_AllDevices": "",
|
||||||
|
"Presence_Shortcut_Archived": "",
|
||||||
|
"Presence_Shortcut_Connected": "",
|
||||||
|
"Presence_Shortcut_Devices": "",
|
||||||
|
"Presence_Shortcut_DownAlerts": "",
|
||||||
|
"Presence_Shortcut_Favorites": "",
|
||||||
|
"Presence_Shortcut_NewDevices": "",
|
||||||
|
"Presence_Title": "",
|
||||||
|
"REFRESH_FQDN_description": "",
|
||||||
|
"REFRESH_FQDN_name": "",
|
||||||
|
"REPORT_DASHBOARD_URL_description": "",
|
||||||
|
"REPORT_DASHBOARD_URL_name": "",
|
||||||
|
"REPORT_ERROR": "",
|
||||||
|
"REPORT_MAIL_description": "",
|
||||||
|
"REPORT_MAIL_name": "",
|
||||||
|
"REPORT_TITLE": "",
|
||||||
|
"RandomMAC_hover": "",
|
||||||
|
"Reports_Sent_Log": "",
|
||||||
|
"SCAN_SUBNETS_description": "",
|
||||||
|
"SCAN_SUBNETS_name": "",
|
||||||
|
"SYSTEM_TITLE": "",
|
||||||
|
"Setting_Override": "",
|
||||||
|
"Setting_Override_Description": "",
|
||||||
|
"Settings_Metadata_Toggle": "",
|
||||||
|
"Settings_Show_Description": "",
|
||||||
|
"Settings_device_Scanners_desync": "",
|
||||||
|
"Settings_device_Scanners_desync_popup": "",
|
||||||
|
"Speedtest_Results": "",
|
||||||
|
"Systeminfo_AvailableIps": "",
|
||||||
|
"Systeminfo_CPU": "",
|
||||||
|
"Systeminfo_CPU_Cores": "",
|
||||||
|
"Systeminfo_CPU_Name": "",
|
||||||
|
"Systeminfo_CPU_Speed": "",
|
||||||
|
"Systeminfo_CPU_Temp": "",
|
||||||
|
"Systeminfo_CPU_Vendor": "",
|
||||||
|
"Systeminfo_Client_Resolution": "",
|
||||||
|
"Systeminfo_Client_User_Agent": "",
|
||||||
|
"Systeminfo_General": "",
|
||||||
|
"Systeminfo_General_Date": "",
|
||||||
|
"Systeminfo_General_Date2": "",
|
||||||
|
"Systeminfo_General_Full_Date": "",
|
||||||
|
"Systeminfo_General_TimeZone": "",
|
||||||
|
"Systeminfo_Memory": "",
|
||||||
|
"Systeminfo_Memory_Total_Memory": "",
|
||||||
|
"Systeminfo_Memory_Usage": "",
|
||||||
|
"Systeminfo_Memory_Usage_Percent": "",
|
||||||
|
"Systeminfo_Motherboard": "",
|
||||||
|
"Systeminfo_Motherboard_BIOS": "",
|
||||||
|
"Systeminfo_Motherboard_BIOS_Date": "",
|
||||||
|
"Systeminfo_Motherboard_BIOS_Vendor": "",
|
||||||
|
"Systeminfo_Motherboard_Manufactured": "",
|
||||||
|
"Systeminfo_Motherboard_Name": "",
|
||||||
|
"Systeminfo_Motherboard_Revision": "",
|
||||||
|
"Systeminfo_Network": "",
|
||||||
|
"Systeminfo_Network_Accept_Encoding": "",
|
||||||
|
"Systeminfo_Network_Accept_Language": "",
|
||||||
|
"Systeminfo_Network_Connection_Port": "",
|
||||||
|
"Systeminfo_Network_HTTP_Host": "",
|
||||||
|
"Systeminfo_Network_HTTP_Referer": "",
|
||||||
|
"Systeminfo_Network_HTTP_Referer_String": "",
|
||||||
|
"Systeminfo_Network_Hardware": "",
|
||||||
|
"Systeminfo_Network_Hardware_Interface_Mask": "",
|
||||||
|
"Systeminfo_Network_Hardware_Interface_Name": "",
|
||||||
|
"Systeminfo_Network_Hardware_Interface_RX": "",
|
||||||
|
"Systeminfo_Network_Hardware_Interface_TX": "",
|
||||||
|
"Systeminfo_Network_IP": "",
|
||||||
|
"Systeminfo_Network_IP_Connection": "",
|
||||||
|
"Systeminfo_Network_IP_Server": "",
|
||||||
|
"Systeminfo_Network_MIME": "",
|
||||||
|
"Systeminfo_Network_Request_Method": "",
|
||||||
|
"Systeminfo_Network_Request_Time": "",
|
||||||
|
"Systeminfo_Network_Request_URI": "",
|
||||||
|
"Systeminfo_Network_Secure_Connection": "",
|
||||||
|
"Systeminfo_Network_Secure_Connection_String": "",
|
||||||
|
"Systeminfo_Network_Server_Name": "",
|
||||||
|
"Systeminfo_Network_Server_Name_String": "",
|
||||||
|
"Systeminfo_Network_Server_Query": "",
|
||||||
|
"Systeminfo_Network_Server_Query_String": "",
|
||||||
|
"Systeminfo_Network_Server_Version": "",
|
||||||
|
"Systeminfo_Services": "",
|
||||||
|
"Systeminfo_Services_Description": "",
|
||||||
|
"Systeminfo_Services_Name": "",
|
||||||
|
"Systeminfo_Storage": "",
|
||||||
|
"Systeminfo_Storage_Device": "",
|
||||||
|
"Systeminfo_Storage_Mount": "",
|
||||||
|
"Systeminfo_Storage_Size": "",
|
||||||
|
"Systeminfo_Storage_Type": "",
|
||||||
|
"Systeminfo_Storage_Usage": "",
|
||||||
|
"Systeminfo_Storage_Usage_Free": "",
|
||||||
|
"Systeminfo_Storage_Usage_Mount": "",
|
||||||
|
"Systeminfo_Storage_Usage_Total": "",
|
||||||
|
"Systeminfo_Storage_Usage_Used": "",
|
||||||
|
"Systeminfo_System": "",
|
||||||
|
"Systeminfo_System_AVG": "",
|
||||||
|
"Systeminfo_System_Architecture": "",
|
||||||
|
"Systeminfo_System_Kernel": "",
|
||||||
|
"Systeminfo_System_OSVersion": "",
|
||||||
|
"Systeminfo_System_Running_Processes": "",
|
||||||
|
"Systeminfo_System_System": "",
|
||||||
|
"Systeminfo_System_Uname": "",
|
||||||
|
"Systeminfo_System_Uptime": "",
|
||||||
|
"Systeminfo_This_Client": "",
|
||||||
|
"Systeminfo_USB_Devices": "",
|
||||||
|
"TICKER_MIGRATE_TO_NETALERTX": "",
|
||||||
|
"TIMEZONE_description": "",
|
||||||
|
"TIMEZONE_name": "",
|
||||||
|
"UI_DEV_SECTIONS_description": "",
|
||||||
|
"UI_DEV_SECTIONS_name": "",
|
||||||
|
"UI_ICONS_description": "",
|
||||||
|
"UI_ICONS_name": "",
|
||||||
|
"UI_LANG_description": "",
|
||||||
|
"UI_LANG_name": "",
|
||||||
|
"UI_MY_DEVICES_description": "",
|
||||||
|
"UI_MY_DEVICES_name": "",
|
||||||
|
"UI_NOT_RANDOM_MAC_description": "",
|
||||||
|
"UI_NOT_RANDOM_MAC_name": "",
|
||||||
|
"UI_PRESENCE_description": "",
|
||||||
|
"UI_PRESENCE_name": "",
|
||||||
|
"UI_REFRESH_description": "",
|
||||||
|
"UI_REFRESH_name": "",
|
||||||
|
"VERSION_description": "",
|
||||||
|
"VERSION_name": "",
|
||||||
|
"WF_Action_Add": "",
|
||||||
|
"WF_Action_field": "",
|
||||||
|
"WF_Action_type": "",
|
||||||
|
"WF_Action_value": "",
|
||||||
|
"WF_Actions": "",
|
||||||
|
"WF_Add": "",
|
||||||
|
"WF_Add_Condition": "",
|
||||||
|
"WF_Add_Group": "",
|
||||||
|
"WF_Condition_field": "",
|
||||||
|
"WF_Condition_operator": "",
|
||||||
|
"WF_Condition_value": "",
|
||||||
|
"WF_Conditions": "",
|
||||||
|
"WF_Conditions_logic_rules": "",
|
||||||
|
"WF_Duplicate": "",
|
||||||
|
"WF_Enabled": "",
|
||||||
|
"WF_Export": "",
|
||||||
|
"WF_Export_Copy": "",
|
||||||
|
"WF_Import": "",
|
||||||
|
"WF_Import_Copy": "",
|
||||||
|
"WF_Name": "",
|
||||||
|
"WF_Remove": "",
|
||||||
|
"WF_Remove_Copy": "",
|
||||||
|
"WF_Save": "",
|
||||||
|
"WF_Trigger": "",
|
||||||
|
"WF_Trigger_event_type": "",
|
||||||
|
"WF_Trigger_type": "",
|
||||||
|
"add_icon_event_tooltip": "",
|
||||||
|
"add_option_event_tooltip": "",
|
||||||
|
"copy_icons_event_tooltip": "",
|
||||||
|
"devices_old": "",
|
||||||
|
"general_event_description": "",
|
||||||
|
"general_event_title": "",
|
||||||
|
"go_to_device_event_tooltip": "",
|
||||||
|
"go_to_node_event_tooltip": "",
|
||||||
|
"new_version_available": "",
|
||||||
|
"report_guid": "",
|
||||||
|
"report_guid_missing": "",
|
||||||
|
"report_select_format": "",
|
||||||
|
"report_time": "",
|
||||||
|
"run_event_tooltip": "",
|
||||||
|
"select_icon_event_tooltip": "",
|
||||||
|
"settings_core_icon": "",
|
||||||
|
"settings_core_label": "",
|
||||||
|
"settings_device_scanners": "",
|
||||||
|
"settings_device_scanners_icon": "",
|
||||||
|
"settings_device_scanners_info": "",
|
||||||
|
"settings_device_scanners_label": "",
|
||||||
|
"settings_enabled": "",
|
||||||
|
"settings_enabled_icon": "",
|
||||||
|
"settings_expand_all": "",
|
||||||
|
"settings_imported": "",
|
||||||
|
"settings_imported_label": "",
|
||||||
|
"settings_missing": "",
|
||||||
|
"settings_missing_block": "",
|
||||||
|
"settings_old": "",
|
||||||
|
"settings_other_scanners": "",
|
||||||
|
"settings_other_scanners_icon": "",
|
||||||
|
"settings_other_scanners_label": "",
|
||||||
|
"settings_publishers": "",
|
||||||
|
"settings_publishers_icon": "",
|
||||||
|
"settings_publishers_info": "",
|
||||||
|
"settings_publishers_label": "",
|
||||||
|
"settings_readonly": "",
|
||||||
|
"settings_saved": "",
|
||||||
|
"settings_system_icon": "",
|
||||||
|
"settings_system_label": "",
|
||||||
|
"settings_update_item_warning": "",
|
||||||
|
"test_event_tooltip": ""
|
||||||
|
}
|
||||||
@@ -69,11 +69,9 @@ def cleanup_database(
 
     mylog("verbose", [f"[{pluginName}] Upkeep Database: {dbPath}"])
 
-    # Connect to the App database
     conn = get_temp_db_connection()
     cursor = conn.cursor()
 
-    # Reindwex to prevent fails due to corruption
     try:
         cursor.execute("REINDEX;")
         mylog("verbose", [f"[{pluginName}] REINDEX completed"])
@@ -82,25 +80,25 @@ def cleanup_database(
 
     # -----------------------------------------------------
     # Cleanup Online History
-    mylog("verbose", [f"[{pluginName}] Online_History: Delete all but keep latest 150 entries"],)
+    mylog("verbose", [f"[{pluginName}] Online_History: Delete all but keep latest 150 entries"])
     cursor.execute(
         """DELETE from Online_History where "Index" not in (
             SELECT "Index" from Online_History
             order by Scan_Date desc limit 150)"""
     )
+    mylog("verbose", [f"[{pluginName}] Online_History deleted rows: {cursor.rowcount}"])
 
     # -----------------------------------------------------
     # Cleanup Events
     mylog("verbose", f"[{pluginName}] Events: Delete all older than {str(DAYS_TO_KEEP_EVENTS)} days (DAYS_TO_KEEP_EVENTS setting)")
     sql = f"""DELETE FROM Events WHERE eve_DateTime <= date('now', '-{str(DAYS_TO_KEEP_EVENTS)} day')"""
 
     mylog("verbose", [f"[{pluginName}] SQL : {sql}"])
     cursor.execute(sql)
-    # -----------------------------------------------------
-    # Trim Plugins_History entries to less than PLUGINS_KEEP_HIST setting per unique "Plugin" column entry
-    mylog("verbose", f"[{pluginName}] Plugins_History: Trim Plugins_History entries to less than {str(PLUGINS_KEEP_HIST)} per Plugin (PLUGINS_KEEP_HIST setting)")
-
-    # Build the SQL query to delete entries that exceed the limit per unique "Plugin" column entry
+    mylog("verbose", [f"[{pluginName}] Events deleted rows: {cursor.rowcount}"])
+
+    # -----------------------------------------------------
+    # Plugins_History
+    mylog("verbose", f"[{pluginName}] Plugins_History: Trim to {str(PLUGINS_KEEP_HIST)} per Plugin")
     delete_query = f"""DELETE FROM Plugins_History
         WHERE "Index" NOT IN (
             SELECT "Index"
@@ -111,17 +109,13 @@ def cleanup_database(
             ) AS ranked_objects
             WHERE row_num <= {str(PLUGINS_KEEP_HIST)}
         );"""
 
     cursor.execute(delete_query)
+    mylog("verbose", [f"[{pluginName}] Plugins_History deleted rows: {cursor.rowcount}"])
 
     # -----------------------------------------------------
-    # Trim Notifications entries to less than DBCLNP_NOTIFI_HIST setting
-
+    # Notifications
     histCount = get_setting_value("DBCLNP_NOTIFI_HIST")
-
-    mylog("verbose", f"[{pluginName}] Plugins_History: Trim Notifications entries to less than {histCount}")
-
-    # Build the SQL query to delete entries
+    mylog("verbose", f"[{pluginName}] Notifications: Trim to {histCount}")
     delete_query = f"""DELETE FROM Notifications
         WHERE "Index" NOT IN (
             SELECT "Index"
@@ -132,16 +126,13 @@ def cleanup_database(
             ) AS ranked_objects
             WHERE row_num <= {histCount}
         );"""
 
     cursor.execute(delete_query)
+    mylog("verbose", [f"[{pluginName}] Notifications deleted rows: {cursor.rowcount}"])
 
     # -----------------------------------------------------
-    # Trim Workflow entries to less than WORKFLOWS_AppEvents_hist setting
+    # AppEvents
     histCount = get_setting_value("WORKFLOWS_AppEvents_hist")
 
     mylog("verbose", [f"[{pluginName}] Trim AppEvents to less than {histCount}"])
 
-    # Build the SQL query to delete entries
     delete_query = f"""DELETE FROM AppEvents
         WHERE "Index" NOT IN (
             SELECT "Index"
@@ -152,38 +143,40 @@ def cleanup_database(
             ) AS ranked_objects
             WHERE row_num <= {histCount}
         );"""
 
     cursor.execute(delete_query)
+    mylog("verbose", [f"[{pluginName}] AppEvents deleted rows: {cursor.rowcount}"])
 
     conn.commit()
 
     # -----------------------------------------------------
     # Cleanup New Devices
     if HRS_TO_KEEP_NEWDEV != 0:
-        mylog("verbose", f"[{pluginName}] Devices: Delete all New Devices older than {str(HRS_TO_KEEP_NEWDEV)} hours (HRS_TO_KEEP_NEWDEV setting)")
+        mylog("verbose", f"[{pluginName}] Devices: Delete New Devices older than {str(HRS_TO_KEEP_NEWDEV)} hours")
         query = f"""DELETE FROM Devices WHERE devIsNew = 1 AND devFirstConnection < date('now', '-{str(HRS_TO_KEEP_NEWDEV)} hour')"""
-        mylog("verbose", [f"[{pluginName}] Query: {query} "])
+        mylog("verbose", [f"[{pluginName}] Query: {query}"])
         cursor.execute(query)
+        mylog("verbose", [f"[{pluginName}] Devices (new) deleted rows: {cursor.rowcount}"])
 
     # -----------------------------------------------------
     # Cleanup Offline Devices
     if HRS_TO_KEEP_OFFDEV != 0:
-        mylog("verbose", f"[{pluginName}] Devices: Delete all New Devices older than {str(HRS_TO_KEEP_OFFDEV)} hours (HRS_TO_KEEP_OFFDEV setting)")
+        mylog("verbose", f"[{pluginName}] Devices: Delete Offline Devices older than {str(HRS_TO_KEEP_OFFDEV)} hours")
         query = f"""DELETE FROM Devices WHERE devPresentLastScan = 0 AND devLastConnection < date('now', '-{str(HRS_TO_KEEP_OFFDEV)} hour')"""
-        mylog("verbose", [f"[{pluginName}] Query: {query} "])
+        mylog("verbose", [f"[{pluginName}] Query: {query}"])
        cursor.execute(query)
+        mylog("verbose", [f"[{pluginName}] Devices (offline) deleted rows: {cursor.rowcount}"])
 
    # -----------------------------------------------------
     # Clear New Flag
     if CLEAR_NEW_FLAG != 0:
-        mylog("verbose", f'[{pluginName}] Devices: Clear "New Device" flag for all devices older than {str(CLEAR_NEW_FLAG)} hours (CLEAR_NEW_FLAG setting)')
+        mylog("verbose", f'[{pluginName}] Devices: Clear "New Device" flag older than {str(CLEAR_NEW_FLAG)} hours')
         query = f"""UPDATE Devices SET devIsNew = 0 WHERE devIsNew = 1 AND date(devFirstConnection, '+{str(CLEAR_NEW_FLAG)} hour') < date('now')"""
-        # select * from Devices where devIsNew = 1 AND date(devFirstConnection, '+3 hour' ) < date('now')
-        mylog("verbose", [f"[{pluginName}] Query: {query} "])
+        mylog("verbose", [f"[{pluginName}] Query: {query}"])
         cursor.execute(query)
+        mylog("verbose", [f"[{pluginName}] Devices updated rows (clear new): {cursor.rowcount}"])
 
     # -----------------------------------------------------
-    # De-dupe (de-duplicate) from the Plugins_Objects table
-    # TODO This shouldn't be necessary - probably a concurrency bug somewhere in the code :(
+    # De-dupe Plugins_Objects
     mylog("verbose", [f"[{pluginName}] Plugins_Objects: Delete all duplicates"])
     cursor.execute(
         """
@@ -197,25 +190,20 @@ def cleanup_database(
         )
         """
     )
+    mylog("verbose", [f"[{pluginName}] Plugins_Objects deleted rows: {cursor.rowcount}"])
 
     conn.commit()
 
-    # Check WAL file size
+    # WAL + Vacuum
     cursor.execute("PRAGMA wal_checkpoint(TRUNCATE);")
     cursor.execute("PRAGMA wal_checkpoint(FULL);")
 
     mylog("verbose", [f"[{pluginName}] WAL checkpoint executed to truncate file."])
 
-    # Shrink DB
     mylog("verbose", [f"[{pluginName}] Shrink Database"])
     cursor.execute("VACUUM;")
 
-    # Close the database connection
     conn.close()
 
 
-# ===============================================================================
-# BEGIN
-# ===============================================================================
 if __name__ == "__main__":
     main()
@@ -99,7 +99,7 @@
|
|||||||
"description": [
|
"description": [
|
||||||
{
|
{
|
||||||
"language_code": "en_us",
|
"language_code": "en_us",
|
||||||
"string": "Selects the ICMP engine to use. <code>ping</code> checks devices individually and works even when the ARP / neighbor cache is empty, but is slower on larger networks. <code>fping</code> scans IP ranges in parallel and is significantly faster, but relies on the system neighbor cache to resolve IP addresses to MAC addresses. For most networks, <code>fping</code> is recommended. The default command arguments <code>ICMP_ARGS</code> are compatible with both modes."
|
"string": "Selects the ICMP engine to use. <code>ping</code> checks devices individually and works even with an empty ARP/neighbor cache, but is slower on large networks. <code>fping</code> scans IP ranges in parallel and is much faster, but depends on the system neighbor cache, which can delay MAC resolution. For most networks, <code>fping</code> is recommended unless precise and timely online/offline detection is needed. The default <code>ICMP_ARGS</code> work with both engines."
|
||||||
}
|
}
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
|
|||||||
@@ -1,103 +0,0 @@
|
|||||||
<?php
|
|
||||||
|
|
||||||
// External files
|
|
||||||
require '/app/front/php/server/init.php';
|
|
||||||
|
|
||||||
$method = $_SERVER['REQUEST_METHOD'];
|
|
||||||
|
|
||||||
// ----------------------------------------------
|
|
||||||
// Method to check authorization
|
|
||||||
function checkAuthorization($method) {
|
|
||||||
// Retrieve the authorization header
|
|
||||||
$headers = apache_request_headers();
|
|
||||||
$auth_header = $headers['Authorization'] ?? '';
|
|
||||||
$expected_token = 'Bearer ' . getSettingValue('API_TOKEN');
|
|
||||||
|
|
||||||
// Verify the authorization token
|
|
||||||
if ($auth_header !== $expected_token) {
|
|
||||||
http_response_code(403);
|
|
||||||
echo 'Forbidden';
|
|
||||||
displayInAppNoti("[Plugin: SYNC] Incoming data: Incorrect API Token (".$method.")", "error");
|
|
||||||
exit;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// ----------------------------------------------
|
|
||||||
// Function to return JSON response
|
|
||||||
function jsonResponse($status, $data = '', $message = '') {
|
|
||||||
http_response_code($status);
|
|
||||||
header('Content-Type: application/json');
|
|
||||||
echo json_encode([
|
|
||||||
'node_name' => getSettingValue('SYNC_node_name'),
|
|
||||||
'status' => $status,
|
|
||||||
'message' => $message,
|
|
||||||
'data_base64' => $data,
|
|
||||||
'timestamp' => date('Y-m-d H:i:s')
|
|
||||||
]);
|
|
||||||
}
|
|
||||||
|
|
||||||
// ----------------------------------------------
|
|
||||||
// MAIN
|
|
||||||
// ----------------------------------------------
|
|
||||||
|
|
||||||
|
|
||||||
// requesting data (this is a NODE)
|
|
||||||
if ($method === 'GET') {
|
|
||||||
checkAuthorization($method);
|
|
||||||
|
|
||||||
$apiRoot = getenv('NETALERTX_API') ?: '/tmp/api';
|
|
||||||
$file_path = rtrim($apiRoot, '/') . '/table_devices.json';
|
|
||||||
|
|
||||||
$data = file_get_contents($file_path);
|
|
||||||
|
|
||||||
// Prepare the data to return as a JSON response
|
|
||||||
$response_data = base64_encode($data);
|
|
||||||
|
|
||||||
// Return JSON response
|
|
||||||
jsonResponse(200, $response_data, 'OK');
|
|
||||||
|
|
||||||
displayInAppNoti("[Plugin: SYNC] Data sent", "info");
|
|
||||||
|
|
||||||
}
|
|
||||||
// receiving data (this is a HUB)
|
|
||||||
else if ($method === 'POST') {
|
|
||||||
checkAuthorization($method);
|
|
||||||
|
|
||||||
// Retrieve and decode the data from the POST request
|
|
||||||
$data = $_POST['data'] ?? '';
|
|
||||||
$file_path = $_POST['file_path'] ?? '';
|
|
||||||
$node_name = $_POST['node_name'] ?? '';
|
|
||||||
$plugin = $_POST['plugin'] ?? '';
|
|
||||||
|
|
||||||
$logRoot = getenv('NETALERTX_PLUGINS_LOG') ?: (rtrim(getenv('NETALERTX_LOG') ?: '/tmp/log', '/') . '/plugins');
|
|
||||||
$storage_path = rtrim($logRoot, '/');
|
|
||||||
|
|
||||||
// // check location
|
|
||||||
// if (!is_dir($storage_path)) {
|
|
||||||
// echo "Could not open folder: {$storage_path}";
|
|
||||||
// write_notification("[Plugin: SYNC] Could not open folder: {$storage_path}", "alert");
|
|
||||||
// http_response_code(500);
|
|
||||||
// exit;
|
|
||||||
// }
|
|
||||||
|
|
||||||
// Generate a unique file path to avoid overwriting existing files
|
|
||||||
$encoded_files = glob("{$storage_path}/last_result.{$plugin}.encoded.{$node_name}.*.log");
|
|
||||||
$decoded_files = glob("{$storage_path}/last_result.{$plugin}.decoded.{$node_name}.*.log");
|
|
||||||
|
|
||||||
$files = array_merge($encoded_files, $decoded_files);
|
|
||||||
$file_count = count($files) + 1;
|
|
||||||
|
|
||||||
$file_path_new = "{$storage_path}/last_result.{$plugin}.encoded.{$node_name}.{$file_count}.log";
|
|
||||||
|
|
||||||
// Save the decoded data to the file
|
|
||||||
file_put_contents($file_path_new, $data);
|
|
||||||
http_response_code(200);
|
|
||||||
echo 'Data received and stored successfully';
|
|
||||||
displayInAppNoti("[Plugin: SYNC] Data received ({$file_path_new})", "info");
|
|
||||||
|
|
||||||
} else {
|
|
||||||
http_response_code(405);
|
|
||||||
echo 'Method Not Allowed';
|
|
||||||
displayInAppNoti("[Plugin: SYNC] Method Not Allowed", "error");
|
|
||||||
}
|
|
||||||
?>
|
|
||||||
@@ -269,7 +269,6 @@ def main():
|
|||||||
# Data retrieval methods
|
# Data retrieval methods
|
||||||
api_endpoints = [
|
api_endpoints = [
|
||||||
"/sync", # New Python-based endpoint
|
"/sync", # New Python-based endpoint
|
||||||
"/plugins/sync/hub.php" # Legacy PHP endpoint
|
|
||||||
]
|
]
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@@ -2,7 +2,7 @@
|
|||||||
|
|
||||||
require 'php/templates/header.php';
|
require 'php/templates/header.php';
|
||||||
require 'php/templates/modals.php';
|
require 'php/templates/modals.php';
|
||||||
|
|
||||||
?>
|
?>
|
||||||
|
|
||||||
<script>
|
<script>
|
||||||
@@ -14,7 +14,7 @@
|
|||||||
|
|
||||||
<!-- Content header--------------------------------------------------------- -->
|
<!-- Content header--------------------------------------------------------- -->
|
||||||
<!-- Main content ---------------------------------------------------------- -->
|
<!-- Main content ---------------------------------------------------------- -->
|
||||||
<section class="content tab-content">
|
<section class="content tab-content">
|
||||||
|
|
||||||
<div class="box box-gray col-xs-12" >
|
<div class="box box-gray col-xs-12" >
|
||||||
<div class="box-header">
|
<div class="box-header">
|
||||||
@@ -45,7 +45,7 @@
|
|||||||
<select id="formatSelect" class="pointer">
|
<select id="formatSelect" class="pointer">
|
||||||
<option value="HTML">HTML</option>
|
<option value="HTML">HTML</option>
|
||||||
<option value="JSON">JSON</option>
|
<option value="JSON">JSON</option>
|
||||||
<option value="Text">Text</option>
|
<option value="Text">Text</option>
|
||||||
</select>
|
</select>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
@@ -80,7 +80,7 @@
|
|||||||
const prevButton = document.getElementById('prevButton');
|
const prevButton = document.getElementById('prevButton');
|
||||||
const nextButton = document.getElementById('nextButton');
|
const nextButton = document.getElementById('nextButton');
|
||||||
const formatSelect = document.getElementById('formatSelect');
|
const formatSelect = document.getElementById('formatSelect');
|
||||||
|
|
||||||
let currentIndex = -1; // Current report index
|
let currentIndex = -1; // Current report index
|
||||||
|
|
||||||
// Function to update the displayed data and timestamp based on the selected format and index
|
// Function to update the displayed data and timestamp based on the selected format and index
|
||||||
@@ -115,7 +115,7 @@
|
|||||||
|
|
||||||
// console.log(notification)
|
// console.log(notification)
|
||||||
|
|
||||||
timestamp.textContent = notification.DateTimeCreated;
|
timestamp.textContent = localizeTimestamp(notification.DateTimeCreated);
|
||||||
notiGuid.textContent = notification.GUID;
|
notiGuid.textContent = notification.GUID;
|
||||||
currentIndex = index;
|
currentIndex = index;
|
||||||
|
|
||||||
@@ -161,17 +161,17 @@
|
|||||||
console.log(index)
|
console.log(index)
|
||||||
|
|
||||||
if (index == -1) {
|
if (index == -1) {
|
||||||
showModalOk('WARNING', `${getString("report_guid_missing")} <br/> <br/> <code>${guid}</code>`)
|
showModalOk('WARNING', `${getString("report_guid_missing")} <br/> <br/> <code>${guid}</code>`)
|
||||||
}
|
}
|
||||||
|
|
||||||
// Load the notification with the specified GUID
|
// Load the notification with the specified GUID
|
||||||
updateData(formatSelect.value, index);
|
updateData(formatSelect.value, index);
|
||||||
|
|
||||||
})
|
})
|
||||||
.catch(error => {
|
.catch(error => {
|
||||||
console.error('Error:', error);
|
console.error('Error:', error);
|
||||||
});
|
});
|
||||||
} else {
|
} else {
|
||||||
|
|
||||||
// Initial data load
|
// Initial data load
|
||||||
updateData('HTML', -1); // Default format to HTML and load the latest report
|
updateData('HTML', -1); // Default format to HTML and load the latest report
|
||||||
|
|||||||
@@ -57,14 +57,14 @@ nav:
|
|||||||
- Authelia: AUTHELIA.md
|
- Authelia: AUTHELIA.md
|
||||||
- Performance: PERFORMANCE.md
|
- Performance: PERFORMANCE.md
|
||||||
- Reverse DNS: REVERSE_DNS.md
|
- Reverse DNS: REVERSE_DNS.md
|
||||||
- Reverse Proxy:
|
- Reverse Proxy: REVERSE_PROXY.md
|
||||||
- Reverse Proxy Overview: REVERSE_PROXY.md
|
|
||||||
- Caddy and Authentik: REVERSE_PROXY_CADDY.md
|
|
||||||
- Traefik: REVERSE_PROXY_TRAEFIK.md
|
|
||||||
- Webhooks (n8n): WEBHOOK_N8N.md
|
- Webhooks (n8n): WEBHOOK_N8N.md
|
||||||
- Workflows: WORKFLOWS.md
|
- Workflows: WORKFLOWS.md
|
||||||
- Workflow Examples: WORKFLOW_EXAMPLES.md
|
- Workflow Examples: WORKFLOW_EXAMPLES.md
|
||||||
- Docker Swarm: DOCKER_SWARM.md
|
- Docker Swarm: DOCKER_SWARM.md
|
||||||
|
- Best practice advisories:
|
||||||
|
- Eyes on glass: ADVISORY_EYES_ON_GLASS.md
|
||||||
|
- Multi-network monitoring: ADVISORY_MULTI_NETWORK.md
|
||||||
- Help:
|
- Help:
|
||||||
- Common issues: COMMON_ISSUES.md
|
- Common issues: COMMON_ISSUES.md
|
||||||
- Random MAC: RANDOM_MAC.md
|
- Random MAC: RANDOM_MAC.md
|
||||||
|
|||||||
@@ -56,7 +56,7 @@ default_tz = "Europe/Berlin"
|
|||||||
NULL_EQUIVALENTS = ["", "null", "(unknown)", "(Unknown)", "(name not found)"]
|
NULL_EQUIVALENTS = ["", "null", "(unknown)", "(Unknown)", "(name not found)"]
|
||||||
|
|
||||||
# Convert list to SQL string: wrap each value in single quotes and escape single quotes if needed
|
# Convert list to SQL string: wrap each value in single quotes and escape single quotes if needed
|
||||||
NULL_EQUIVALENTS_SQL = ",".join(f"'{v.replace('\'', '\'\'')}'" for v in NULL_EQUIVALENTS)
|
NULL_EQUIVALENTS_SQL = ",".join("'" + v.replace("'", "''") + "'" for v in NULL_EQUIVALENTS)
|
||||||
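The rewritten join expression can be checked in isolation. The list below is illustrative (it adds a value with an embedded quote, which is not in the real `NULL_EQUIVALENTS`) to show the SQL escaping step:

```python
NULL_EQUIVALENTS = ["", "null", "(unknown)", "it's"]
# Wrap each value in single quotes, doubling any embedded quote (standard SQL escaping)
sql_list = ",".join("'" + v.replace("'", "''") + "'" for v in NULL_EQUIVALENTS)
print(sql_list)  # '','null','(unknown)','it''s'
```

The result is safe to splice into an `IN (...)` clause because every value is quoted and every interior quote is doubled.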
|
|
||||||
|
|
||||||
# ===============================================================================
|
# ===============================================================================
|
||||||
|
|||||||
@@ -12,8 +12,9 @@ from const import NULL_EQUIVALENTS_SQL # noqa: E402 [flake8 lint suppression]
|
|||||||
|
|
||||||
|
|
||||||
def get_device_conditions():
|
def get_device_conditions():
|
||||||
network_dev_types = ",".join(f"'{v.replace('\'', '\'\'')}'" for v in get_setting_value("NETWORK_DEVICE_TYPES"))
|
network_dev_types = ",".join("'" + v.replace("'", "''") + "'" for v in get_setting_value("NETWORK_DEVICE_TYPES"))
|
||||||
|
|
||||||
|
# DO NOT CHANGE ORDER
|
||||||
conditions = {
|
conditions = {
|
||||||
"all": "WHERE devIsArchived=0",
|
"all": "WHERE devIsArchived=0",
|
||||||
"my": "WHERE devIsArchived=0",
|
"my": "WHERE devIsArchived=0",
|
||||||
@@ -27,6 +28,7 @@ def get_device_conditions():
|
|||||||
"network_devices_down": f"WHERE devIsArchived=0 AND devType in ({network_dev_types}) AND devPresentLastScan=0",
|
"network_devices_down": f"WHERE devIsArchived=0 AND devType in ({network_dev_types}) AND devPresentLastScan=0",
|
||||||
"unknown": f"WHERE devIsArchived=0 AND devName in ({NULL_EQUIVALENTS_SQL})",
|
"unknown": f"WHERE devIsArchived=0 AND devName in ({NULL_EQUIVALENTS_SQL})",
|
||||||
"known": f"WHERE devIsArchived=0 AND devName not in ({NULL_EQUIVALENTS_SQL})",
|
"known": f"WHERE devIsArchived=0 AND devName not in ({NULL_EQUIVALENTS_SQL})",
|
||||||
|
"favorites_offline": "WHERE devIsArchived=0 AND devFavorite=1 AND devPresentLastScan=0",
|
||||||
}
|
}
|
||||||
|
|
||||||
return conditions
|
return conditions
|
||||||
|
|||||||
@@ -1,10 +1,6 @@
|
|||||||
import sys
|
import conf
|
||||||
import os
|
from zoneinfo import ZoneInfo
|
||||||
|
import datetime as dt
|
||||||
# Register NetAlertX directories
|
|
||||||
INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
|
|
||||||
sys.path.extend([f"{INSTALL_PATH}/server"])
|
|
||||||
|
|
||||||
from logger import mylog # noqa: E402 [flake8 lint suppression]
|
from logger import mylog # noqa: E402 [flake8 lint suppression]
|
||||||
from messaging.in_app import write_notification # noqa: E402 [flake8 lint suppression]
|
from messaging.in_app import write_notification # noqa: E402 [flake8 lint suppression]
|
||||||
|
|
||||||
@@ -246,6 +242,23 @@ def ensure_Indexes(sql) -> bool:
|
|||||||
Parameters:
|
Parameters:
|
||||||
- sql: database cursor or connection wrapper (must support execute()).
|
- sql: database cursor or connection wrapper (must support execute()).
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
# Remove after 12/12/2026 - prevents idx_events_unique creation from failing - de-dupe
|
||||||
|
clean_duplicate_events = """
|
||||||
|
DELETE FROM Events
|
||||||
|
WHERE rowid NOT IN (
|
||||||
|
SELECT MIN(rowid)
|
||||||
|
FROM Events
|
||||||
|
GROUP BY
|
||||||
|
eve_MAC,
|
||||||
|
eve_IP,
|
||||||
|
eve_EventType,
|
||||||
|
eve_DateTime
|
||||||
|
);
|
||||||
|
"""
|
||||||
|
|
||||||
|
sql.execute(clean_duplicate_events)
|
||||||
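The keep-`MIN(rowid)` de-dupe pattern used above can be sketched end to end; the rows below are made-up sample events, not real data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Events (eve_MAC TEXT, eve_IP TEXT, eve_EventType TEXT, eve_DateTime TEXT)")
# Three identical rows plus one distinct row (illustrative values)
rows = [("aa:bb", "10.0.0.1", "Connected", "2026-01-01 00:00:00")] * 3
rows.append(("aa:bb", "10.0.0.1", "Disconnected", "2026-01-01 01:00:00"))
cur.executemany("INSERT INTO Events VALUES (?, ?, ?, ?)", rows)

# Keep only the earliest rowid in each (MAC, IP, type, timestamp) group
cur.execute("""
    DELETE FROM Events
    WHERE rowid NOT IN (
        SELECT MIN(rowid) FROM Events
        GROUP BY eve_MAC, eve_IP, eve_EventType, eve_DateTime
    )
""")
print(cur.rowcount)  # 2 duplicate rows removed
```

After the delete, one row per group survives, which is exactly the precondition the unique index needs.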
|
|
||||||
indexes = [
|
indexes = [
|
||||||
# Sessions
|
# Sessions
|
||||||
(
|
(
|
||||||
@@ -273,6 +286,10 @@ def ensure_Indexes(sql) -> bool:
|
|||||||
"idx_eve_type_date",
|
"idx_eve_type_date",
|
||||||
"CREATE INDEX idx_eve_type_date ON Events(eve_EventType, eve_DateTime)",
|
"CREATE INDEX idx_eve_type_date ON Events(eve_EventType, eve_DateTime)",
|
||||||
),
|
),
|
||||||
|
(
|
||||||
|
"idx_events_unique",
|
||||||
|
"CREATE UNIQUE INDEX idx_events_unique ON Events (eve_MAC, eve_IP, eve_EventType, eve_DateTime)",
|
||||||
|
),
|
||||||
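Once `idx_events_unique` exists, SQLite enforces the four-column uniqueness at insert time. A minimal sketch (sample values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Events (eve_MAC TEXT, eve_IP TEXT, eve_EventType TEXT, eve_DateTime TEXT)")
cur.execute(
    "CREATE UNIQUE INDEX idx_events_unique ON Events (eve_MAC, eve_IP, eve_EventType, eve_DateTime)"
)
cur.execute("INSERT INTO Events VALUES ('aa:bb', '10.0.0.1', 'Connected', '2026-01-01 00:00:00')")
try:
    # Same MAC/IP/type/timestamp tuple - the unique index rejects it
    cur.execute("INSERT INTO Events VALUES ('aa:bb', '10.0.0.1', 'Connected', '2026-01-01 00:00:00')")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```

Writers that may legitimately retry the same event can use `INSERT OR IGNORE` to make the duplicate a no-op instead of an error.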
# Devices
|
# Devices
|
||||||
("idx_dev_mac", "CREATE INDEX idx_dev_mac ON Devices(devMac)"),
|
("idx_dev_mac", "CREATE INDEX idx_dev_mac ON Devices(devMac)"),
|
||||||
(
|
(
|
||||||
@@ -503,26 +520,25 @@ def ensure_plugins_tables(sql) -> bool:
|
|||||||
def is_timestamps_in_utc(sql) -> bool:
|
def is_timestamps_in_utc(sql) -> bool:
|
||||||
"""
|
"""
|
||||||
Check if existing timestamps in Devices table are already in UTC format.
|
Check if existing timestamps in Devices table are already in UTC format.
|
||||||
|
|
||||||
Strategy:
|
Strategy:
|
||||||
1. Sample 10 non-NULL devFirstConnection timestamps from Devices
|
1. Sample 10 non-NULL devFirstConnection timestamps from Devices
|
||||||
2. For each timestamp, assume it's UTC and calculate what it would be in local time
|
2. For each timestamp, assume it's UTC and calculate what it would be in local time
|
||||||
3. Check if timestamps have a consistent offset pattern (indicating local time storage)
|
3. Check if timestamps have a consistent offset pattern (indicating local time storage)
|
||||||
4. If offset is consistently > 0, they're likely local timestamps (need migration)
|
4. If offset is consistently > 0, they're likely local timestamps (need migration)
|
||||||
5. If offset is ~0 or inconsistent, they're likely already UTC (skip migration)
|
5. If offset is ~0 or inconsistent, they're likely already UTC (skip migration)
|
||||||
|
|
||||||
Returns:
|
Returns:
|
||||||
bool: True if timestamps appear to be in UTC already, False if they need migration
|
bool: True if timestamps appear to be in UTC already, False if they need migration
|
||||||
"""
|
"""
|
||||||
try:
|
try:
|
||||||
# Get timezone offset in seconds
|
# Get timezone offset in seconds
|
||||||
import conf
|
import conf
|
||||||
from zoneinfo import ZoneInfo
|
|
||||||
import datetime as dt
|
import datetime as dt
|
||||||
|
|
||||||
now = dt.datetime.now(dt.UTC).replace(microsecond=0)
|
now = dt.datetime.now(dt.UTC).replace(microsecond=0)
|
||||||
current_offset_seconds = 0
|
current_offset_seconds = 0
|
||||||
|
|
||||||
try:
|
try:
|
||||||
if isinstance(conf.tz, dt.tzinfo):
|
if isinstance(conf.tz, dt.tzinfo):
|
||||||
tz = conf.tz
|
tz = conf.tz
|
||||||
@@ -532,13 +548,13 @@ def is_timestamps_in_utc(sql) -> bool:
|
|||||||
tz = None
|
tz = None
|
||||||
except Exception:
|
except Exception:
|
||||||
tz = None
|
tz = None
|
||||||
|
|
||||||
if tz:
|
if tz:
|
||||||
local_now = dt.datetime.now(tz).replace(microsecond=0)
|
local_now = dt.datetime.now(tz).replace(microsecond=0)
|
||||||
local_offset = local_now.utcoffset().total_seconds()
|
local_offset = local_now.utcoffset().total_seconds()
|
||||||
utc_offset = now.utcoffset().total_seconds() if now.utcoffset() else 0
|
utc_offset = now.utcoffset().total_seconds() if now.utcoffset() else 0
|
||||||
current_offset_seconds = int(local_offset - utc_offset)
|
current_offset_seconds = int(local_offset - utc_offset)
|
||||||
|
|
||||||
# Sample timestamps from Devices table
|
# Sample timestamps from Devices table
|
||||||
sql.execute("""
|
sql.execute("""
|
||||||
SELECT devFirstConnection, devLastConnection, devLastNotification
|
SELECT devFirstConnection, devLastConnection, devLastNotification
|
||||||
@@ -546,27 +562,27 @@ def is_timestamps_in_utc(sql) -> bool:
|
|||||||
WHERE devFirstConnection IS NOT NULL
|
WHERE devFirstConnection IS NOT NULL
|
||||||
LIMIT 10
|
LIMIT 10
|
||||||
""")
|
""")
|
||||||
|
|
||||||
samples = []
|
samples = []
|
||||||
for row in sql.fetchall():
|
for row in sql.fetchall():
|
||||||
for ts in row:
|
for ts in row:
|
||||||
if ts:
|
if ts:
|
||||||
samples.append(ts)
|
samples.append(ts)
|
||||||
|
|
||||||
if not samples:
|
if not samples:
|
||||||
mylog("verbose", "[db_upgrade] No timestamp samples found in Devices - assuming UTC")
|
mylog("verbose", "[db_upgrade] No timestamp samples found in Devices - assuming UTC")
|
||||||
return True # Empty DB, assume UTC
|
return True # Empty DB, assume UTC
|
||||||
|
|
||||||
# Parse samples and check if they have timezone info (which would indicate migration already done)
|
# Parse samples and check if they have timezone info (which would indicate migration already done)
|
||||||
has_tz_marker = any('+' in str(ts) or 'Z' in str(ts) for ts in samples)
|
has_tz_marker = any('+' in str(ts) or 'Z' in str(ts) for ts in samples)
|
||||||
if has_tz_marker:
|
if has_tz_marker:
|
||||||
mylog("verbose", "[db_upgrade] Timestamps have timezone markers - already migrated to UTC")
|
mylog("verbose", "[db_upgrade] Timestamps have timezone markers - already migrated to UTC")
|
||||||
return True
|
return True
|
||||||
|
|
||||||
mylog("debug", f"[db_upgrade] Sampled {len(samples)} timestamps. Current TZ offset: {current_offset_seconds}s")
|
mylog("debug", f"[db_upgrade] Sampled {len(samples)} timestamps. Current TZ offset: {current_offset_seconds}s")
|
||||||
mylog("verbose", "[db_upgrade] Timestamps appear to be in system local time - migration needed")
|
mylog("verbose", "[db_upgrade] Timestamps appear to be in system local time - migration needed")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
mylog("warn", f"[db_upgrade] Error checking UTC status: {e} - assuming UTC")
|
mylog("warn", f"[db_upgrade] Error checking UTC status: {e} - assuming UTC")
|
||||||
return True
|
return True
|
||||||
@@ -574,63 +590,91 @@ def is_timestamps_in_utc(sql) -> bool:
|
|||||||
|
|
||||||
def migrate_timestamps_to_utc(sql) -> bool:
|
def migrate_timestamps_to_utc(sql) -> bool:
|
||||||
"""
|
"""
|
||||||
Migrate all timestamp columns in the database from local time to UTC.
|
Safely migrate timestamp columns from local time to UTC.
|
||||||
|
|
||||||
This function determines if migration is needed based on the VERSION setting:
|
Migration rules (fail-safe):
|
||||||
- Fresh installs (no VERSION): Skip migration - timestamps already UTC from timeNowUTC()
|
- Default behaviour: RUN migration unless proven safe to skip
|
||||||
- Version >= 26.2.6: Skip migration - already using UTC timestamps
|
- Version > 26.2.6 → timestamps already UTC → skip
|
||||||
- Version < 26.2.6: Run migration - convert local timestamps to UTC
|
- Missing / unknown / unparsable version → migrate
|
||||||
|
- Migration flag present → skip
|
||||||
Affected tables:
|
- Detection says already UTC → skip
|
||||||
- Devices: devFirstConnection, devLastConnection, devLastNotification
|
|
||||||
- Events: eve_DateTime
|
|
||||||
- Sessions: ses_DateTimeConnection, ses_DateTimeDisconnection
|
|
||||||
- Notifications: DateTimeCreated, DateTimePushed
|
|
||||||
- Online_History: Scan_Date
|
|
||||||
- Plugins_Objects: DateTimeCreated, DateTimeChanged
|
|
||||||
- Plugins_Events: DateTimeCreated, DateTimeChanged
|
|
||||||
- Plugins_History: DateTimeCreated, DateTimeChanged
|
|
||||||
- AppEvents: DateTimeCreated
|
|
||||||
|
|
||||||
Returns:
|
Returns:
|
||||||
bool: True if migration completed or wasn't needed, False on error
|
bool: True if migration completed or not needed, False on error
|
||||||
"""
|
"""
|
||||||
|
|
||||||
try:
|
try:
|
||||||
import conf
|
# -------------------------------------------------
|
||||||
from zoneinfo import ZoneInfo
|
# Check migration flag (idempotency protection)
|
||||||
import datetime as dt
|
# -------------------------------------------------
|
||||||
|
try:
|
||||||
# Check VERSION from Settings table (from previous app run)
|
sql.execute("SELECT setValue FROM Settings WHERE setKey='DB_TIMESTAMPS_UTC_MIGRATED'")
|
||||||
sql.execute("SELECT setValue FROM Settings WHERE setKey = 'VERSION'")
|
result = sql.fetchone()
|
||||||
|
if result and str(result[0]) == "1":
|
||||||
|
mylog("verbose", "[db_upgrade] UTC timestamp migration already completed - skipping")
|
||||||
|
return True
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
|
||||||
|
# -------------------------------------------------
|
||||||
|
# Read previous version
|
||||||
|
# -------------------------------------------------
|
||||||
|
sql.execute("SELECT setValue FROM Settings WHERE setKey='VERSION'")
|
||||||
result = sql.fetchone()
|
result = sql.fetchone()
|
||||||
prev_version = result[0] if result else ""
|
prev_version = result[0] if result else ""
|
||||||
|
|
||||||
# Fresh install: VERSION is empty → timestamps already UTC from timeNowUTC()
|
mylog("verbose", f"[db_upgrade] Version '{prev_version}' detected.")
|
||||||
if not prev_version or prev_version == "" or prev_version == "unknown":
|
|
||||||
mylog("verbose", "[db_upgrade] Fresh install detected - timestamps already in UTC format")
|
# Default behaviour: migrate unless proven safe
|
||||||
|
should_migrate = True
|
||||||
|
|
||||||
|
# -------------------------------------------------
|
||||||
|
# Version-based safety check
|
||||||
|
# -------------------------------------------------
|
||||||
|
if prev_version and str(prev_version).lower() != "unknown":
|
||||||
|
try:
|
||||||
|
version_parts = prev_version.lstrip('v').split('.')
|
||||||
|
major = int(version_parts[0]) if len(version_parts) > 0 else 0
|
||||||
|
minor = int(version_parts[1]) if len(version_parts) > 1 else 0
|
||||||
|
patch = int(version_parts[2]) if len(version_parts) > 2 else 0
|
||||||
|
|
||||||
|
# UTC timestamps introduced AFTER v26.2.6
|
||||||
|
if (major, minor, patch) > (26, 2, 6):
|
||||||
|
should_migrate = False
|
||||||
|
mylog(
|
||||||
|
"verbose",
|
||||||
|
f"[db_upgrade] Version {prev_version} confirmed UTC timestamps - skipping migration",
|
||||||
|
)
|
||||||
|
|
||||||
|
except (ValueError, IndexError) as e:
|
||||||
|
mylog(
|
||||||
|
"warn",
|
||||||
|
f"[db_upgrade] Could not parse version '{prev_version}': {e} - running migration as a safety measure",
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
mylog(
|
||||||
|
"warn",
|
||||||
|
"[db_upgrade] VERSION missing/unknown - running migration as a safety measure",
|
||||||
|
)
|
||||||
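The `(major, minor, patch) > (26, 2, 6)` check works because Python compares tuples element by element. The helper below is illustrative (not part of the codebase) but mirrors the parsing in the new branch:

```python
def parse_version(v: str) -> tuple:
    """Parse '26.2.6' or 'v26.2.6' into a comparable (major, minor, patch) tuple."""
    parts = v.lstrip("v").split(".")
    return tuple(int(parts[i]) if i < len(parts) else 0 for i in range(3))

# Tuples compare lexicographically, matching the version-gate above
print(parse_version("v26.2.7") > (26, 2, 6))  # True  -> UTC already, skip migration
print(parse_version("26.2.6") > (26, 2, 6))   # False -> migrate
print(parse_version("26.3") > (26, 2, 6))     # True  -> missing patch defaults to 0
```

Strictly-greater-than (rather than `>=`) is what makes 26.2.6 itself still eligible for migration, matching the "introduced AFTER v26.2.6" comment.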
|
|
||||||
|
# -------------------------------------------------
|
||||||
|
# Detection fallback
|
||||||
|
# -------------------------------------------------
|
||||||
|
if should_migrate:
|
||||||
|
try:
|
||||||
|
if is_timestamps_in_utc(sql):
|
||||||
|
mylog(
|
||||||
|
"verbose",
|
||||||
|
"[db_upgrade] Timestamps appear already UTC - skipping migration",
|
||||||
|
)
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
mylog(
|
||||||
|
"warn",
|
||||||
|
f"[db_upgrade] UTC detection failed ({e}) - continuing with migration",
|
||||||
|
)
|
||||||
|
else:
|
||||||
return True
|
return True
|
||||||
|
|
||||||
# Parse version - format: "26.2.6" or "v26.2.6"
|
|
||||||
try:
|
|
||||||
version_parts = prev_version.strip('v').split('.')
|
|
||||||
major = int(version_parts[0]) if len(version_parts) > 0 else 0
|
|
||||||
minor = int(version_parts[1]) if len(version_parts) > 1 else 0
|
|
||||||
patch = int(version_parts[2]) if len(version_parts) > 2 else 0
|
|
||||||
|
|
||||||
# UTC timestamps introduced in v26.2.6
|
|
||||||
# If upgrading from 26.2.6 or later, timestamps are already UTC
|
|
||||||
if (major > 26) or (major == 26 and minor > 2) or (major == 26 and minor == 2 and patch >= 6):
|
|
||||||
mylog("verbose", f"[db_upgrade] Version {prev_version} already uses UTC timestamps - skipping migration")
|
|
||||||
return True
|
|
||||||
|
|
||||||
mylog("verbose", f"[db_upgrade] Upgrading from {prev_version} (< v26.2.6) - migrating timestamps to UTC")
|
|
||||||
|
|
||||||
except (ValueError, IndexError) as e:
|
|
||||||
mylog("warn", f"[db_upgrade] Could not parse version '{prev_version}': {e} - checking timestamps")
|
|
||||||
# Fallback: use detection logic
|
|
||||||
if is_timestamps_in_utc(sql):
|
|
||||||
mylog("verbose", "[db_upgrade] Timestamps appear to be in UTC - skipping migration")
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Get timezone offset
|
# Get timezone offset
|
||||||
try:
|
try:
|
||||||
@@ -642,15 +686,15 @@ def migrate_timestamps_to_utc(sql) -> bool:
|
|||||||
tz = None
|
tz = None
|
||||||
except Exception:
|
except Exception:
|
||||||
tz = None
|
tz = None
|
||||||
|
|
||||||
if tz:
|
if tz:
|
||||||
now_local = dt.datetime.now(tz)
|
now_local = dt.datetime.now(tz)
|
||||||
offset_hours = (now_local.utcoffset().total_seconds()) / 3600
|
offset_hours = (now_local.utcoffset().total_seconds()) / 3600
|
||||||
else:
|
else:
|
||||||
offset_hours = 0
|
offset_hours = 0
|
||||||
|
|
||||||
mylog("verbose", f"[db_upgrade] Starting UTC timestamp migration (offset: {offset_hours} hours)")
|
mylog("verbose", f"[db_upgrade] Starting UTC timestamp migration (offset: {offset_hours} hours)")
|
||||||
|
|
||||||
# List of tables and their datetime columns
|
# List of tables and their datetime columns
|
||||||
timestamp_columns = {
|
timestamp_columns = {
|
||||||
'Devices': ['devFirstConnection', 'devLastConnection', 'devLastNotification'],
|
'Devices': ['devFirstConnection', 'devLastConnection', 'devLastNotification'],
|
||||||
@@ -663,7 +707,7 @@ def migrate_timestamps_to_utc(sql) -> bool:
|
|||||||
'Plugins_History': ['DateTimeCreated', 'DateTimeChanged'],
|
'Plugins_History': ['DateTimeCreated', 'DateTimeChanged'],
|
||||||
'AppEvents': ['DateTimeCreated'],
|
'AppEvents': ['DateTimeCreated'],
|
||||||
}
|
}
|
||||||
|
|
||||||
for table, columns in timestamp_columns.items():
|
for table, columns in timestamp_columns.items():
|
||||||
try:
|
try:
|
||||||
# Check if table exists
|
# Check if table exists
|
||||||
@@ -671,7 +715,7 @@ def migrate_timestamps_to_utc(sql) -> bool:
|
|||||||
if not sql.fetchone():
|
if not sql.fetchone():
|
||||||
mylog("debug", f"[db_upgrade] Table '{table}' does not exist - skipping")
|
mylog("debug", f"[db_upgrade] Table '{table}' does not exist - skipping")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
for column in columns:
|
for column in columns:
|
||||||
try:
|
try:
|
||||||
# Update non-NULL timestamps
|
# Update non-NULL timestamps
|
||||||
@@ -691,22 +735,21 @@ def migrate_timestamps_to_utc(sql) -> bool:
|
|||||||
SET {column} = DATETIME({column}, '+{abs_hours} hours', '+{abs_mins} minutes')
|
SET {column} = DATETIME({column}, '+{abs_hours} hours', '+{abs_mins} minutes')
|
||||||
WHERE {column} IS NOT NULL
|
WHERE {column} IS NOT NULL
|
||||||
""")
|
""")
|
||||||
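The per-column `UPDATE ... SET {column} = DATETIME({column}, ...)` shift can be sketched in isolation. This assumes a UTC+2 zone, so converting local time to UTC means shifting backwards; sign handling in the real code depends on the computed offset, and the timestamp value is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Devices (devLastConnection TEXT)")
cur.execute("INSERT INTO Devices VALUES ('2026-01-01 12:30:00')")  # local time, UTC+2

# Offset split into whole hours and remaining minutes, as in the migration
abs_hours, abs_mins = 2, 0
cur.execute(f"""
    UPDATE Devices
    SET devLastConnection = DATETIME(devLastConnection, '-{abs_hours} hours', '-{abs_mins} minutes')
    WHERE devLastConnection IS NOT NULL
""")
cur.execute("SELECT devLastConnection FROM Devices")
print(cur.fetchone()[0])  # 2026-01-01 10:30:00
```

The `IS NOT NULL` guard matters: `DATETIME(NULL, ...)` would otherwise overwrite empty timestamps with NULL-derived garbage rather than leaving them untouched.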
|
|
||||||
row_count = sql.rowcount
|
row_count = sql.rowcount
|
||||||
if row_count > 0:
|
if row_count > 0:
|
||||||
mylog("verbose", f"[db_upgrade] Migrated {row_count} timestamps in {table}.{column}")
|
mylog("verbose", f"[db_upgrade] Migrated {row_count} timestamps in {table}.{column}")
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
mylog("warn", f"[db_upgrade] Error updating {table}.{column}: {e}")
|
mylog("warn", f"[db_upgrade] Error updating {table}.{column}: {e}")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
mylog("warn", f"[db_upgrade] Error processing table {table}: {e}")
|
mylog("warn", f"[db_upgrade] Error processing table {table}: {e}")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
mylog("none", "[db_upgrade] ✓ UTC timestamp migration completed successfully")
|
mylog("none", "[db_upgrade] ✓ UTC timestamp migration completed successfully")
|
||||||
return True
|
return True
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
mylog("none", f"[db_upgrade] ERROR during timestamp migration: {e}")
|
mylog("none", f"[db_upgrade] ERROR during timestamp migration: {e}")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
|
|||||||
@@ -401,7 +401,7 @@ def importConfigs(pm, db, all_plugins):
         c_d,
         "Language Interface",
         '{"dataType":"string", "elements": [{"elementType" : "select", "elementOptions" : [] ,"transformers": []}]}',
-        "['English (en_us)', 'Arabic (ar_ar)', 'Catalan (ca_ca)', 'Czech (cs_cz)', 'German (de_de)', 'Spanish (es_es)', 'Farsi (fa_fa)', 'French (fr_fr)', 'Italian (it_it)', 'Japanese (ja_jp)', 'Norwegian (nb_no)', 'Polish (pl_pl)', 'Portuguese (pt_br)', 'Portuguese (pt_pt)', 'Russian (ru_ru)', 'Swedish (sv_sv)', 'Turkish (tr_tr)', 'Ukrainian (uk_ua)', 'Chinese (zh_cn)']",  # noqa: E501 - inline JSON
+        "['English (en_us)', 'Arabic (ar_ar)', 'Catalan (ca_ca)', 'Czech (cs_cz)', 'German (de_de)', 'Spanish (es_es)', 'Farsi (fa_fa)', 'French (fr_fr)', 'Italian (it_it)', 'Japanese (ja_jp)', 'Norwegian (nb_no)', 'Polish (pl_pl)', 'Portuguese (pt_br)', 'Portuguese (pt_pt)', 'Russian (ru_ru)', 'Swedish (sv_sv)', 'Turkish (tr_tr)', 'Ukrainian (uk_ua)', 'Vietnamese (vi_vn)', 'Chinese (zh_cn)']",  # noqa: E501 - inline JSON
         "UI",
     )

@@ -9,7 +9,7 @@ import logging
 # NetAlertX imports
 import conf
 from const import logPath
-from utils.datetime_utils import timeNowUTC
+from utils.datetime_utils import timeNowTZ

 DEFAULT_LEVEL = "none"

@@ -124,12 +124,12 @@ def start_log_writer_thread():

 # -------------------------------------------------------------------------------
 def file_print(*args):
-    result = timeNowUTC(as_string=False).strftime("%H:%M:%S") + " "
+    result = timeNowTZ(as_string=False).strftime("%H:%M:%S") + " "
     for arg in args:
         if isinstance(arg, list):
             arg = " ".join(
                 str(a) for a in arg
-            )  # so taht new lines are handled correctly also when passing a list
+            )  # so that new lines are handled correctly also when passing a list
         result += str(arg)

     logging.log(custom_to_logging_levels.get(currentLevel, logging.NOTSET), result)
@@ -10,9 +10,10 @@
 # cvc90 2023 https://github.com/cvc90 GNU GPLv3                                    #
 # ---------------------------------------------------------------------------------#

-import json
 import os
+import json
 import sys
+from zoneinfo import ZoneInfo

 # Register NetAlertX directories
 INSTALL_PATH = os.getenv("NETALERTX_APP", "/app")
@@ -23,231 +24,237 @@ from helper import (  # noqa: E402 [flake8 lint suppression]
 )
 from logger import mylog  # noqa: E402 [flake8 lint suppression]
 from db.sql_safe_builder import create_safe_condition_builder  # noqa: E402 [flake8 lint suppression]
-from utils.datetime_utils import get_timezone_offset  # noqa: E402 [flake8 lint suppression]
+from utils.datetime_utils import format_date_iso  # noqa: E402 [flake8 lint suppression]
+import conf  # noqa: E402 [flake8 lint suppression]

+# ===============================================================================
+# Timezone conversion
+# ===============================================================================
+
+DATETIME_FIELDS = {
+    "new_devices": ["Datetime"],
+    "down_devices": ["eve_DateTime"],
+    "down_reconnected": ["eve_DateTime"],
+    "events": ["Datetime"],
+    "plugins": ["DateTimeChanged"],
+}
+
+
+def get_datetime_fields_from_columns(column_names):
+    return [
+        col for col in column_names
+        if "date" in col.lower() or "time" in col.lower()
+    ]
+
+
+def apply_timezone_to_json(json_obj, section=None):
+    data = json_obj.json["data"]
+    columns = json_obj.columnNames
+
+    fields = DATETIME_FIELDS.get(section) or get_datetime_fields_from_columns(columns)
+
+    return apply_timezone(data, fields)
+
+
+def apply_timezone(data, fields):
+    """
+    Convert UTC datetime fields in a list of dicts to the configured timezone.
+
+    Args:
+        data (list[dict]): Rows returned from DB
+        fields (list[str]): Field names to convert
+
+    Returns:
+        list[dict]: Modified data with timezone-aware ISO strings
+    """
+    if not data or not fields:
+        return data
+
+    # Determine local timezone
+    tz = conf.tz
+    if isinstance(tz, str):
+        tz = ZoneInfo(tz)
+
+    for row in data:
+        if not isinstance(row, dict):
+            continue
+
+        for field in fields:
+            value = row.get(field)
+            if not value:
+                continue
+
+            try:
+                # Convert DB UTC string → local timezone ISO
+                # format_date_iso already assumes UTC if naive
+                row[field] = format_date_iso(value)
+            except Exception:
+                # Never crash, leave original value if conversion fails
+                continue
+
+    return data
+
+
 # ===============================================================================
 # REPORTING
 # ===============================================================================


-# -------------------------------------------------------------------------------
 def get_notifications(db):
-    sql = db.sql  # TO-DO
-
-    # Reporting section
-    mylog("verbose", ["[Notification] Check if something to report"])
-
-    # prepare variables for JSON construction
-    json_new_devices = []
-    json_new_devices_meta = {}
-    json_down_devices = []
-    json_down_devices_meta = {}
-    json_down_reconnected = []
-    json_down_reconnected_meta = {}
-    json_events = []
-    json_events_meta = {}
-    json_plugins = []
-    json_plugins_meta = {}
-
-    # Disable reporting on events for devices where reporting is disabled based on the MAC address
-
-    # Disable notifications (except down/down reconnected) on devices where devAlertEvents is disabled
-    sql.execute("""UPDATE Events SET eve_PendingAlertEmail = 0
-                    WHERE eve_PendingAlertEmail = 1 AND eve_EventType not in ('Device Down', 'Down Reconnected', 'New Device' ) AND eve_MAC IN
-                        (
-                            SELECT devMac FROM Devices WHERE devAlertEvents = 0
-                        )""")
-
-    # Disable down/down reconnected notifications on devices where devAlertDown is disabled
-    sql.execute("""UPDATE Events SET eve_PendingAlertEmail = 0
-                    WHERE eve_PendingAlertEmail = 1 AND eve_EventType in ('Device Down', 'Down Reconnected') AND eve_MAC IN
-                        (
-                            SELECT devMac FROM Devices WHERE devAlertDown = 0
-                        )""")
+    """
+    Fetch notifications for all configured sections, applying timezone conversions.
+
+    Args:
+        db: Database object with `.sql` for executing queries.
+
+    Returns:
+        dict: JSON-ready dict with data and metadata for each section.
+    """
+    sql = db.sql
+
+    mylog("verbose", "[Notification] Check if something to report")
+
+    # Disable events where reporting is disabled
+    sql.execute("""
+        UPDATE Events SET eve_PendingAlertEmail = 0
+        WHERE eve_PendingAlertEmail = 1
+        AND eve_EventType NOT IN ('Device Down', 'Down Reconnected', 'New Device')
+        AND eve_MAC IN (SELECT devMac FROM Devices WHERE devAlertEvents = 0)
+    """)
+    sql.execute("""
+        UPDATE Events SET eve_PendingAlertEmail = 0
+        WHERE eve_PendingAlertEmail = 1
+        AND eve_EventType IN ('Device Down', 'Down Reconnected')
+        AND eve_MAC IN (SELECT devMac FROM Devices WHERE devAlertDown = 0)
+    """)

     sections = get_setting_value("NTFPRCS_INCLUDED_SECTIONS")

     mylog("verbose", ["[Notification] Included sections: ", sections])

-    if "new_devices" in sections:
-        # Compose New Devices Section (no empty lines in SQL queries!)
-        # Use SafeConditionBuilder to prevent SQL injection vulnerabilities
-        condition_builder = create_safe_condition_builder()
-        new_dev_condition_setting = get_setting_value("NTFPRCS_new_dev_condition")
-
-        try:
-            safe_condition, parameters = condition_builder.get_safe_condition_legacy(
-                new_dev_condition_setting
-            )
-            sqlQuery = """SELECT
-                eve_MAC as MAC,
-                eve_DateTime as Datetime,
-                devLastIP as IP,
-                eve_EventType as "Event Type",
-                devName as "Device name",
-                devComments as Comments FROM Events_Devices
-                WHERE eve_PendingAlertEmail = 1
-                AND eve_EventType = 'New Device' {}
-                ORDER BY eve_DateTime""".format(safe_condition)
-        except (ValueError, KeyError, TypeError) as e:
-            mylog("verbose", ["[Notification] Error building safe condition for new devices: ", e])
-            # Fall back to safe default (no additional conditions)
-            sqlQuery = """SELECT
-                eve_MAC as MAC,
-                eve_DateTime as Datetime,
-                devLastIP as IP,
-                eve_EventType as "Event Type",
-                devName as "Device name",
-                devComments as Comments FROM Events_Devices
-                WHERE eve_PendingAlertEmail = 1
-                AND eve_EventType = 'New Device'
-                ORDER BY eve_DateTime"""
-            parameters = {}
-
-        mylog("debug", ["[Notification] new_devices SQL query: ", sqlQuery])
-        mylog("debug", ["[Notification] new_devices parameters: ", parameters])
-
-        # Get the events as JSON using parameterized query
-        json_obj = db.get_table_as_json(sqlQuery, parameters)
-
-        json_new_devices_meta = {
-            "title": "🆕 New devices",
-            "columnNames": json_obj.columnNames,
-        }
-
-        json_new_devices = json_obj.json["data"]
-
-    if "down_devices" in sections:
-        # Compose Devices Down Section
-        # - select only Down Alerts with pending email of devices that didn't reconnect within the specified time window
-        minutes = int(get_setting_value("NTFPRCS_alert_down_time") or 0)
-        tz_offset = get_timezone_offset()
-        sqlQuery = f"""
-            SELECT devName, eve_MAC, devVendor, eve_IP, eve_DateTime, eve_EventType
-            FROM Events_Devices AS down_events
-            WHERE eve_PendingAlertEmail = 1
-            AND down_events.eve_EventType = 'Device Down'
-            AND eve_DateTime < datetime('now', '-{minutes} minutes', '{tz_offset}')
-            AND NOT EXISTS (
-                SELECT 1
-                FROM Events AS connected_events
-                WHERE connected_events.eve_MAC = down_events.eve_MAC
-                AND connected_events.eve_EventType = 'Connected'
-                AND connected_events.eve_DateTime > down_events.eve_DateTime
-            )
-            ORDER BY down_events.eve_DateTime;
-        """
-
-        # Get the events as JSON
-        json_obj = db.get_table_as_json(sqlQuery)
-
-        json_down_devices_meta = {
-            "title": "🔴 Down devices",
-            "columnNames": json_obj.columnNames,
-        }
-        json_down_devices = json_obj.json["data"]
-
-        mylog("debug", f"[Notification] json_down_devices: {json.dumps(json_down_devices)}")
-
-    if "down_reconnected" in sections:
-        # Compose Reconnected Down Section
-        # - select only Devices, that were previously down and now are Connected
-        sqlQuery = """
-            SELECT devName, eve_MAC, devVendor, eve_IP, eve_DateTime, eve_EventType
-            FROM Events_Devices AS reconnected_devices
-            WHERE reconnected_devices.eve_EventType = 'Down Reconnected'
-            AND reconnected_devices.eve_PendingAlertEmail = 1
-            ORDER BY reconnected_devices.eve_DateTime;
-        """
-
-        # Get the events as JSON
-        json_obj = db.get_table_as_json(sqlQuery)
-
-        json_down_reconnected_meta = {
-            "title": "🔁 Reconnected down devices",
-            "columnNames": json_obj.columnNames,
-        }
-        json_down_reconnected = json_obj.json["data"]
-
-        mylog("debug", f"[Notification] json_down_reconnected: {json.dumps(json_down_reconnected)}")
-
-    if "events" in sections:
-        # Compose Events Section (no empty lines in SQL queries!)
-        # Use SafeConditionBuilder to prevent SQL injection vulnerabilities
-        condition_builder = create_safe_condition_builder()
-        event_condition_setting = get_setting_value("NTFPRCS_event_condition")
-
-        try:
-            safe_condition, parameters = condition_builder.get_safe_condition_legacy(
-                event_condition_setting
-            )
-            sqlQuery = """SELECT
-                eve_MAC as MAC,
-                eve_DateTime as Datetime,
-                devLastIP as IP,
-                eve_EventType as "Event Type",
-                devName as "Device name",
-                devComments as Comments FROM Events_Devices
-                WHERE eve_PendingAlertEmail = 1
-                AND eve_EventType IN ('Connected', 'Down Reconnected', 'Disconnected','IP Changed') {}
-                ORDER BY eve_DateTime""".format(safe_condition)
-        except Exception as e:
-            mylog("verbose", f"[Notification] Error building safe condition for events: {e}")
-            # Fall back to safe default (no additional conditions)
-            sqlQuery = """SELECT
-                eve_MAC as MAC,
-                eve_DateTime as Datetime,
-                devLastIP as IP,
-                eve_EventType as "Event Type",
-                devName as "Device name",
-                devComments as Comments FROM Events_Devices
-                WHERE eve_PendingAlertEmail = 1
-                AND eve_EventType IN ('Connected', 'Down Reconnected', 'Disconnected','IP Changed')
-                ORDER BY eve_DateTime"""
-            parameters = {}
-
-        mylog("debug", ["[Notification] events SQL query: ", sqlQuery])
-        mylog("debug", ["[Notification] events parameters: ", parameters])
-
-        # Get the events as JSON using parameterized query
-        json_obj = db.get_table_as_json(sqlQuery, parameters)
-
-        json_events_meta = {"title": "⚡ Events", "columnNames": json_obj.columnNames}
-        json_events = json_obj.json["data"]
-
-    if "plugins" in sections:
-        # Compose Plugins Section
-        sqlQuery = """SELECT
-            Plugin,
-            Object_PrimaryId,
-            Object_SecondaryId,
-            DateTimeChanged,
-            Watched_Value1,
-            Watched_Value2,
-            Watched_Value3,
-            Watched_Value4,
-            Status
-            from Plugins_Events"""
-
-        # Get the events as JSON
-        json_obj = db.get_table_as_json(sqlQuery)
-
-        json_plugins_meta = {"title": "🔌 Plugins", "columnNames": json_obj.columnNames}
-        json_plugins = json_obj.json["data"]
-
-    final_json = {
-        "new_devices": json_new_devices,
-        "new_devices_meta": json_new_devices_meta,
-        "down_devices": json_down_devices,
-        "down_devices_meta": json_down_devices_meta,
-        "down_reconnected": json_down_reconnected,
-        "down_reconnected_meta": json_down_reconnected_meta,
-        "events": json_events,
-        "events_meta": json_events_meta,
-        "plugins": json_plugins,
-        "plugins_meta": json_plugins_meta,
-    }
+    # Define SQL templates per section
+    sql_templates = {
+        "new_devices": """
+            SELECT
+                eve_MAC as MAC,
+                eve_DateTime as Datetime,
+                devLastIP as IP,
+                eve_EventType as "Event Type",
+                devName as "Device name",
+                devComments as Comments
+            FROM Events_Devices
+            WHERE eve_PendingAlertEmail = 1 AND eve_EventType = 'New Device' {condition}
+            ORDER BY eve_DateTime
+        """,
+        "down_devices": f"""
+            SELECT
+                devName,
+                eve_MAC,
+                devVendor,
+                eve_IP,
+                eve_DateTime,
+                eve_EventType
+            FROM Events_Devices AS down_events
+            WHERE eve_PendingAlertEmail = 1
+            AND down_events.eve_EventType = 'Device Down'
+            AND eve_DateTime < datetime('now', '-{int(get_setting_value("NTFPRCS_alert_down_time") or 0)} minutes')
+            AND NOT EXISTS (
+                SELECT 1
+                FROM Events AS connected_events
+                WHERE connected_events.eve_MAC = down_events.eve_MAC
+                AND connected_events.eve_EventType = 'Connected'
+                AND connected_events.eve_DateTime > down_events.eve_DateTime
+            )
+            ORDER BY down_events.eve_DateTime
+        """,
+        "down_reconnected": """
+            SELECT
+                devName,
+                eve_MAC,
+                devVendor,
+                eve_IP,
+                eve_DateTime,
+                eve_EventType
+            FROM Events_Devices AS reconnected_devices
+            WHERE reconnected_devices.eve_EventType = 'Down Reconnected'
+            AND reconnected_devices.eve_PendingAlertEmail = 1
+            ORDER BY reconnected_devices.eve_DateTime
+        """,
+        "events": """
+            SELECT
+                eve_MAC as MAC,
+                eve_DateTime as Datetime,
+                devLastIP as IP,
+                eve_EventType as "Event Type",
+                devName as "Device name",
+                devComments as Comments
+            FROM Events_Devices
+            WHERE eve_PendingAlertEmail = 1
+            AND eve_EventType IN ('Connected', 'Down Reconnected', 'Disconnected','IP Changed') {condition}
+            ORDER BY eve_DateTime
+        """,
+        "plugins": """
+            SELECT
+                Plugin,
+                Object_PrimaryId,
+                Object_SecondaryId,
+                DateTimeChanged,
+                Watched_Value1,
+                Watched_Value2,
+                Watched_Value3,
+                Watched_Value4,
+                Status
+            FROM Plugins_Events
+        """
+    }
+
+    # Titles for metadata
+    section_titles = {
+        "new_devices": "🆕 New devices",
+        "down_devices": "🔴 Down devices",
+        "down_reconnected": "🔁 Reconnected down devices",
+        "events": "⚡ Events",
+        "plugins": "🔌 Plugins"
+    }
+
+    # Pre-initialize final_json with all expected keys
+    final_json = {}
+    for section in ["new_devices", "down_devices", "down_reconnected", "events", "plugins"]:
+        final_json[section] = []
+        final_json[f"{section}_meta"] = {"title": section_titles.get(section, section), "columnNames": []}
+
+    # Loop through each included section
+    for section in sections:
+        try:
+            # Build safe condition for sections that support it
+            condition_builder = create_safe_condition_builder()
+            condition_setting = get_setting_value(f"NTFPRCS_{section}_condition")
+            safe_condition, parameters = condition_builder.get_safe_condition_legacy(condition_setting)
+            sqlQuery = sql_templates.get(section, "").format(condition=safe_condition)
+        except Exception:
+            # Fallback if safe condition fails
+            sqlQuery = sql_templates.get(section, "").format(condition="")
+            parameters = {}
+
+        mylog("debug", [f"[Notification] {section} SQL query: ", sqlQuery])
+        mylog("debug", [f"[Notification] {section} parameters: ", parameters])
+
+        # Fetch data as JSON
+        json_obj = db.get_table_as_json(sqlQuery, parameters)
+
+        mylog("debug", [f"[Notification] json_obj.json: {json.dumps(json_obj.json)}"])
+
+        # Apply timezone conversion
+        json_obj.json["data"] = apply_timezone_to_json(json_obj, section=section)
+
+        # Save data and metadata
+        final_json[section] = json_obj.json["data"]
+        final_json[f"{section}_meta"] = {
+            "title": section_titles.get(section, section),
+            "columnNames": json_obj.columnNames
+        }
+
+    mylog("debug", [f"[Notification] final_json: {json.dumps(final_json)}"])

     return final_json

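The new `apply_timezone` helper above walks each row, treats stored datetime strings as UTC, and rewrites them as timezone-aware ISO strings, never raising on bad values. A minimal standalone sketch of that idea (NetAlertX delegates the actual parsing to its own `format_date_iso` and resolves `conf.tz` via `ZoneInfo`; here a fixed +01:00 offset and an inline parse stand in for both):

```python
from datetime import datetime, timezone, timedelta

# Stand-in for the configured zone; the real code resolves conf.tz via ZoneInfo
LOCAL_TZ = timezone(timedelta(hours=1))

def convert_rows(rows, fields, tz=LOCAL_TZ):
    """Convert naive UTC datetime strings in-place to timezone-aware ISO strings."""
    for row in rows:
        for field in fields:
            value = row.get(field)
            if not value:
                continue
            try:
                # Assume UTC when the stored value is naive, then convert
                dt = datetime.strptime(value, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
                row[field] = dt.astimezone(tz).isoformat()
            except (ValueError, TypeError):
                continue  # leave the original value untouched on any parse failure
    return rows

rows = [{"eve_DateTime": "2024-01-01 12:00:00", "eve_MAC": "aa:bb:cc:dd:ee:ff"}]
convert_rows(rows, ["eve_DateTime"])
print(rows[0]["eve_DateTime"])  # 2024-01-01T13:00:00+01:00
```

Swallowing conversion errors per field, as the PR does, keeps one malformed timestamp from blocking a whole notification batch.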
@@ -536,6 +536,12 @@ class DeviceInstance:
         normalized_mac = normalize_mac(mac)
         normalized_parent_mac = normalize_mac(data.get("devParentMAC") or "")

+        if normalized_mac == normalized_parent_mac:
+            return {
+                "success": False,
+                "error": "Can't set current node as the node parent."
+            }
+
         fields_updated_by_set_device_data = {
             "devName",
             "devOwner",
@@ -88,7 +88,7 @@ class EventInstance:
     def add(self, mac, ip, eventType, info="", pendingAlert=True, pairRow=None):
         conn = self._conn()
         conn.execute("""
-            INSERT INTO Events (
+            INSERT OR IGNORE INTO Events (
                 eve_MAC, eve_IP, eve_DateTime,
                 eve_EventType, eve_AdditionalInfo,
                 eve_PendingAlertEmail, eve_PairEventRowid
@@ -124,7 +124,7 @@ class EventInstance:
         cur = conn.cursor()
         cur.execute(
             """
-            INSERT INTO Events (eve_MAC, eve_IP, eve_DateTime, eve_EventType, eve_AdditionalInfo, eve_PendingAlertEmail)
+            INSERT OR IGNORE INTO Events (eve_MAC, eve_IP, eve_DateTime, eve_EventType, eve_AdditionalInfo, eve_PendingAlertEmail)
             VALUES (?, ?, ?, ?, ?, ?)
             """,
             (mac, ip, start_time, event_type, additional_info, pending_alert),
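The switch from `INSERT INTO Events` to `INSERT OR IGNORE INTO Events` only deduplicates because this branch also adds the `idx_events_unique` unique index on `(eve_MAC, eve_IP, eve_EventType, eve_DateTime)`; `OR IGNORE` turns the resulting UNIQUE constraint violation into a silent no-op. A minimal sketch of that interaction, with the table trimmed to just the indexed columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Events (
    eve_MAC TEXT, eve_IP TEXT, eve_EventType TEXT, eve_DateTime TEXT)""")
# Mirrors the idx_events_unique index added by this branch's schema change
conn.execute("""CREATE UNIQUE INDEX IF NOT EXISTS idx_events_unique
    ON Events (eve_MAC, eve_IP, eve_EventType, eve_DateTime)""")

row = ("aa:bb:cc:dd:ee:ff", "192.168.1.10", "Device Down", "2024-01-01 12:00:00")
for _ in range(3):
    # OR IGNORE skips rows that would violate the unique index instead of raising
    conn.execute("INSERT OR IGNORE INTO Events VALUES (?, ?, ?, ?)", row)

count = conn.execute("SELECT COUNT(*) FROM Events").fetchone()[0]
print(count)  # 1 - the duplicate inserts were ignored
```

Without the unique index, `OR IGNORE` changes nothing, so the index migration and these statement rewrites have to land together.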
@@ -16,7 +16,7 @@ from helper import (
     getBuildTimeStampAndVersion,
 )
 from messaging.in_app import write_notification
-from utils.datetime_utils import timeNowUTC, get_timezone_offset
+from utils.datetime_utils import timeNowUTC, timeNowTZ, get_timezone_offset


 # -----------------------------------------------------------------------------
@@ -107,7 +107,7 @@ class NotificationInstance:
         mail_html = mail_html.replace("NEW_VERSION", newVersionText)

         # Report "REPORT_DATE" in Header & footer
-        timeFormated = timeNowUTC()
+        timeFormated = timeNowTZ()
         mail_text = mail_text.replace("REPORT_DATE", timeFormated)
         mail_html = mail_html.replace("REPORT_DATE", timeFormated)

@@ -49,6 +49,18 @@ class PluginObjectInstance:
             "SELECT * FROM Plugins_Objects WHERE Plugin = ?", (plugin,)
         )

+    def getLastNCreatedPerPLugin(self, plugin, entries=1):
+        return self._fetchall(
+            """
+            SELECT *
+            FROM Plugins_Objects
+            WHERE Plugin = ?
+            ORDER BY DateTimeCreated DESC
+            LIMIT ?
+            """,
+            (plugin, entries),
+        )
+
     def getByField(self, plugPrefix, matchedColumn, matchedKey, returnFields=None):
         rows = self._fetchall(
             f"SELECT * FROM Plugins_Objects WHERE Plugin = ? AND {matchedColumn} = ?",
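The new `getLastNCreatedPerPLugin` method binds both the plugin name and the row count as parameters; SQLite accepts a bound parameter in the `LIMIT` clause, so the "last N" size never has to be interpolated into the SQL string. A small sketch of the same pattern against a stand-in table (the `ARPSCAN` plugin name and the sample timestamps are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Plugins_Objects (Plugin TEXT, DateTimeCreated TEXT)")
conn.executemany(
    "INSERT INTO Plugins_Objects VALUES (?, ?)",
    [("ARPSCAN", f"2024-01-0{i} 00:00:00") for i in range(1, 6)],
)

# Both the plugin name and the LIMIT are bound parameters, as in the new method
rows = conn.execute(
    """
    SELECT * FROM Plugins_Objects
    WHERE Plugin = ?
    ORDER BY DateTimeCreated DESC
    LIMIT ?
    """,
    ("ARPSCAN", 2),
).fetchall()
print(len(rows), rows[0][1])  # 2 2024-01-05 00:00:00
```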
@@ -606,7 +606,7 @@ def create_new_devices(db):

     mylog("debug", '[New Devices] Insert "New Device" Events')
     query_new_device_events = f"""
-        INSERT INTO Events (
+        INSERT OR IGNORE INTO Events (
             eve_MAC, eve_IP, eve_DateTime,
             eve_EventType, eve_AdditionalInfo,
             eve_PendingAlertEmail
@@ -171,7 +171,7 @@ def insert_events(db):

     # Check device down
     mylog("debug", "[Events] - 1 - Devices down")
-    sql.execute(f"""INSERT INTO Events (eve_MAC, eve_IP, eve_DateTime,
+    sql.execute(f"""INSERT OR IGNORE INTO Events (eve_MAC, eve_IP, eve_DateTime,
                         eve_EventType, eve_AdditionalInfo,
                         eve_PendingAlertEmail)
                     SELECT devMac, devLastIP, '{startTime}', 'Device Down', '', 1
@@ -184,7 +184,7 @@ def insert_events(db):

     # Check new Connections or Down Reconnections
     mylog("debug", "[Events] - 2 - New Connections")
-    sql.execute(f""" INSERT INTO Events (eve_MAC, eve_IP, eve_DateTime,
+    sql.execute(f""" INSERT OR IGNORE INTO Events (eve_MAC, eve_IP, eve_DateTime,
                         eve_EventType, eve_AdditionalInfo,
                         eve_PendingAlertEmail)
                     SELECT DISTINCT c.scanMac, c.scanLastIP, '{startTime}',
@@ -201,7 +201,7 @@ def insert_events(db):

     # Check disconnections
     mylog("debug", "[Events] - 3 - Disconnections")
-    sql.execute(f"""INSERT INTO Events (eve_MAC, eve_IP, eve_DateTime,
+    sql.execute(f"""INSERT OR IGNORE INTO Events (eve_MAC, eve_IP, eve_DateTime,
                         eve_EventType, eve_AdditionalInfo,
                         eve_PendingAlertEmail)
                     SELECT devMac, devLastIP, '{startTime}', 'Disconnected', '',
@@ -215,7 +215,7 @@ def insert_events(db):

     # Check IP Changed
     mylog("debug", "[Events] - 4 - IP Changes")
-    sql.execute(f"""INSERT INTO Events (eve_MAC, eve_IP, eve_DateTime,
+    sql.execute(f"""INSERT OR IGNORE INTO Events (eve_MAC, eve_IP, eve_DateTime,
                         eve_EventType, eve_AdditionalInfo,
                         eve_PendingAlertEmail)
                     SELECT scanMac, scanLastIP, '{startTime}', 'IP Changed',
@@ -3,7 +3,7 @@
 import datetime

 from logger import mylog
-from utils.datetime_utils import timeNowUTC
+from utils.datetime_utils import timeNowTZ


 # -------------------------------------------------------------------------------
@@ -28,11 +28,11 @@ class schedule_class:
         # Initialize the last run time if never run before
         if self.last_run == 0:
             self.last_run = (
-                timeNowUTC(as_string=False) - datetime.timedelta(days=365)
+                timeNowTZ(as_string=False) - datetime.timedelta(days=365)
             ).replace(microsecond=0)

         # get the current time with the currently specified timezone
-        nowTime = timeNowUTC(as_string=False)
+        nowTime = timeNowTZ(as_string=False)

         # Run the schedule if the current time is past the schedule time we saved last time and
         # (maybe the following check is unnecessary)
@@ -47,6 +47,33 @@ def timeNowUTC(as_string=True):
     return utc_now.strftime(DATETIME_PATTERN) if as_string else utc_now


+def timeNowTZ(as_string=True):
+    """
+    Return the current time in the configured local timezone.
+    Falls back to UTC if conf.tz is invalid or missing.
+    """
+    # Get canonical UTC time
+    utc_now = timeNowUTC(as_string=False)
+
+    # Resolve timezone safely
+    tz = None
+    try:
+        if isinstance(conf.tz, datetime.tzinfo):
+            tz = conf.tz
+        elif isinstance(conf.tz, str) and conf.tz:
+            tz = ZoneInfo(conf.tz)
+    except Exception:
+        tz = None
+
+    if tz is None:
+        tz = datetime.UTC  # fallback to UTC
+
+    # Convert to local timezone (or UTC fallback)
+    local_now = utc_now.astimezone(tz)
+
+    return local_now.strftime(DATETIME_PATTERN) if as_string else local_now
+
+
 def get_timezone_offset():
     if conf.tz:
         now = timeNowUTC(as_string=False).astimezone(conf.tz)
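`timeNowTZ` above derives local time from the canonical UTC clock and tolerates `conf.tz` being a `tzinfo`, a zone-name string, or garbage, falling back to UTC rather than raising. A self-contained sketch of that resolution logic (the `DATETIME_PATTERN` value is assumed here; the real one lives in NetAlertX's `datetime_utils`):

```python
import datetime
from zoneinfo import ZoneInfo

DATETIME_PATTERN = "%Y-%m-%d %H:%M:%S"  # assumed to match NetAlertX's pattern

def time_now_tz(tz_setting, as_string=True):
    """UTC now converted to tz_setting; falls back to UTC on any bad value."""
    utc_now = datetime.datetime.now(datetime.timezone.utc)

    tz = None
    try:
        if isinstance(tz_setting, datetime.tzinfo):
            tz = tz_setting          # already a usable tzinfo object
        elif isinstance(tz_setting, str) and tz_setting:
            tz = ZoneInfo(tz_setting)  # resolve an IANA zone name
    except Exception:
        tz = None                    # unknown zone name, etc.

    if tz is None:
        tz = datetime.timezone.utc   # fallback to UTC

    local_now = utc_now.astimezone(tz)
    return local_now.strftime(DATETIME_PATTERN) if as_string else local_now

# A bogus timezone string falls back to UTC instead of raising
print(time_now_tz("Not/AZone", as_string=False).tzinfo)
```

Starting from an aware UTC datetime and calling `astimezone` keeps the function correct across DST boundaries, which naive local-clock arithmetic would not be.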
@@ -159,9 +159,13 @@ def test_devices_totals(client, api_token, test_mac):
     # 3. Ensure the response is a JSON list
     data = resp.json
     assert isinstance(data, list)
-    assert len(data) == len(get_device_conditions())  # devices, connected, favorites, new, down, archived

-    # 4. Check that at least 1 device exists
+    # 4. Dynamically get expected length
+    conditions = get_device_conditions()
+    expected_length = len(conditions)
+    assert len(data) == expected_length
+
+    # 5. Check that at least 1 device exists
     assert data[0] >= 1  # 'devices' count includes the dummy device

@@ -123,7 +123,7 @@ class TestSafeConditionBuilder(unittest.TestCase):
             "'; DROP TABLE Devices; --",
             "' UNION SELECT * FROM Settings --",
             "' OR 1=1 --",
-            "'; INSERT INTO Events VALUES(1,2,3); --",
+            "'; INSERT OR IGNORE INTO Events VALUES(1,2,3); --",
             "' AND (SELECT COUNT(*) FROM sqlite_master) > 0 --",
             "'; ATTACH DATABASE '/etc/passwd' AS pwn; --"
         ]
@@ -204,7 +204,7 @@ def test_sql_injection_prevention(builder):
         "'; DROP TABLE Events_Devices; --",
         "' OR '1'='1",
         "1' UNION SELECT * FROM Devices --",
-        "'; INSERT INTO Events VALUES ('hacked'); --",
+        "'; INSERT OR IGNORE INTO Events VALUES ('hacked'); --",
         "' AND (SELECT COUNT(*) FROM sqlite_master) > 0 --"
     ]
     for payload in malicious_inputs:
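The injection tests above feed classic payloads through the condition builder; the underlying defense, here and in the `get_safe_condition_legacy` path, is parameter binding, which stores a payload as a literal value instead of letting it terminate the statement. A minimal sketch of why binding neutralizes these strings:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Devices (devName TEXT)")
conn.execute("INSERT INTO Devices VALUES ('router')")

payload = "'; DROP TABLE Devices; --"
# Bound as a value: the payload cannot close the quote or end the statement
conn.execute("INSERT INTO Devices VALUES (?)", (payload,))

names = [r[0] for r in conn.execute("SELECT devName FROM Devices")]
print(names)  # the table survives and the payload is stored verbatim
```

String formatting the payload into the SQL text would instead hand `DROP TABLE` to the parser, which is exactly what these test cases guard against.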